Your Intake Form Is the Bottleneck
How I stopped collecting AI use cases the broken way and started getting 91 of them
Most enterprise intake forms are graveyards.
You build the SharePoint form. You write the launch email. You send it on a Monday. By Friday you have six responses, three of which say “automate my emails,” and you conclude, with seasoned resignation, that people don’t know what they want from AI.
I assumed the same thing for a while. Then I built a Custom GPT instead of a form, and got 91+ use case submissions. Same population, same week of the month. The difference was the front door.
The form was the problem
Here’s what’s actually happening when intake forms underperform:
A form asks people to translate a fuzzy frustration (“the way I update this report every Friday is making me lose my mind”) into structured fields they don’t have the vocabulary for (“Use case category. Tools involved. Desired outcome.”). That translation work is the entire ask. It’s invisible, it’s cognitively expensive, and it’s why your form has a 12% completion rate.
A Custom GPT can do that translation work for them. Not a chatbot. Not a coworker. A conversational schema collector that absorbs free-form frustration and hands you back something a triage queue can route.
The reframe
Every intake form is, underneath, a schema. A Custom GPT lets you keep the schema and replace the interface.
Our schema is the same one you’d find in any sensible form: what’s the task, who does it today, what tools are involved, how often it happens, what “done” looks like. What changes is what the user experiences. Instead of a wall of fields, they describe their problem in their own words. The GPT asks a few clarifying questions, rephrases ambiguous answers, and translates “I want this thing to be less annoying” into a populated field.
The user thinks they’re brainstorming. You’re getting clean structured data. Both things are true.
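For the curious, here’s roughly what that schema looks like written down. A sketch, not our exact production fields; the names are illustrative:

# A sketch of the intake schema. Field names are illustrative.
# The GPT fills this in through conversation; the user never sees it.
from dataclasses import dataclass, field

@dataclass
class UseCaseSubmission:
    task: str                  # "update the weekly ops report every Friday"
    owner_today: str           # who does this task right now
    tools: list[str] = field(default_factory=list)  # e.g. ["Excel", "Outlook"]
    frequency: str = ""        # "daily", "weekly", "monthly"
    done_looks_like: str = ""  # the user's own definition of success
    category: str = ""         # assigned by the GPT from a fixed list

Nothing exotic. It’s the same handful of questions the form asked. The only thing that changed is who does the typing.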
The pattern
The conversation is the front door. The interesting part is the routing.
[ User opens GPT ]
↓
[ Workflow Scout (Custom GPT) ]
- Asks for the task in plain language
- Asks 2–4 follow-up questions
- Confirms a structured summary back
- Outputs a JSON payload
↓
[ Power Automate (HTTP webhook) ]
- Authenticates against the destination
- Maps fields → custom fields
↓
[ Asana ]
- Creates a task in the AI Use Cases queue
- Auto-populates category, team, tools, frequency
↓
[ Confirmation back to submitter ]
That’s it. Not technically impressive. Almost embarrassingly basic. The reason it matters is that it works, and almost none of the enablement teams I talk to have built it.
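Power Automate is low-code, so there’s nothing to paste verbatim, but the middle step is small enough to sketch in code. Everything below is a placeholder, not our production flow: the Flask endpoint stands in for the Power Automate webhook, and the project and custom field GIDs are made up:

# A code-equivalent sketch of the webhook step. We use Power Automate;
# this Flask version just shows how little the middle layer actually does.
# The token, project GID, and custom field GIDs are all placeholders.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
ASANA_TOKEN = os.environ["ASANA_TOKEN"]  # service account token, not a personal one
PROJECT_GID = "1200000000000001"         # the "AI Use Cases" queue (placeholder)

@app.post("/intake")
def intake():
    payload = request.get_json()  # the JSON payload the GPT produced
    task = {
        "data": {
            "name": payload["task"],
            "projects": [PROJECT_GID],
            "notes": "Submitted via Workflow Scout. "
                     f"Owner today: {payload['owner_today']}",
            # Map schema fields onto Asana custom fields (placeholder GIDs).
            # Text-type fields take strings; enum fields take option GIDs.
            "custom_fields": {
                "1200000000000002": payload["category"],
                "1200000000000003": payload["frequency"],
            },
        }
    }
    resp = requests.post(
        "https://app.asana.com/api/1.0/tasks",
        headers={"Authorization": f"Bearer {ASANA_TOKEN}"},
        json=task,
    )
    resp.raise_for_status()
    return jsonify({"status": "received"})  # what the GPT relays to the user

That’s the whole middle layer. The GPT does the talking, Asana does the remembering, and this does the handoff.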
The number that sells the pattern: 91 submissions in the first few weeks, with no email campaign. The number that sells it more honestly: roughly a third of those would never have surfaced from a form, because they came from people who typed two paragraphs of stream-of-consciousness that the GPT quietly tagged as “Reporting & Analytics” without making them say the words.
What the GPT does that the form couldn’t
The original goal was clean intake. In practice the GPT is doing three jobs at once, and the data collection is the least interesting of them.
It meets people where they are. A form demands a fully formed idea — you have to know what you want before you can fill in the fields. That requirement quietly excludes everyone who is curious but uncertain: the person who senses something in their week is wasteful but couldn’t tell you what AI would do about it, the person who hasn’t used the tool yet, the person worried their idea is too small to submit. A form screens these people out. A conversation includes them.
The GPT collapses that whole spectrum into a single doorway. Whether you bring a fully formed automation pitch or “I don’t know, my Mondays are bad,” you start in the same place and end in the same place: a structured Asana task in the queue. No submission is better than another at the point of intake. Triage happens later, by humans, with context. That equality is the whole game. It also makes triage easier — every submission lands in the same shape, so routing stops being a creative act and becomes a sorting act.
It pushed people into the tool we were rolling out. To use Workflow Scout, you have to be inside ChatGPT Enterprise. That’s the only door. The intake itself becomes the activation event — a low-stakes, concrete reason to log in for the first time, instead of a vague invitation to “explore the tool when you have a chance.”
It modeled the platform, which is the big one. To deploy a great Custom GPT in your own workflow, you first have to understand what a great Custom GPT is. That’s normally a course — a deck about system instructions, a walkthrough of when to constrain a conversation. It’s the kind of training that’s genuinely useful and that almost nobody completes, because it’s abstract and it’s about a tool they haven’t built yet.
A well-designed intake GPT does that teaching by being itself. The person submitting a use case experiences, in the act of submitting, what tight instructions feel like. They watch a GPT shaped to a specific job, doing it well, on a problem they care about. They walk away having logged a use case and having absorbed more about how to deploy a Custom GPT than any course module would have given them — without sitting through a single slide.
We didn’t design Workflow Scout for any of this. The equity, the activation, the modeling: they all emerged. But once you can see them, you can’t un-see them: three jobs, one piece of infrastructure. That ratio is rare in enablement work.
The honest caveats
Auth and security are real work. A Custom GPT calling out to your tenant requires a service account, an API token, and somewhere safe to store it. We use Azure Key Vault, called via Power Automate. Skip this conversation with security and the pattern gets pulled.
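We do this step inside Power Automate with its Key Vault connector, so there’s no code to copy, but the code equivalent is short. The vault URL and secret name below are placeholders:

# The code equivalent of the Power Automate + Key Vault step.
# Vault URL and secret name are placeholders; use whatever security signs off on.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # resolves the service account's identity
client = SecretClient(
    vault_url="https://your-vault.vault.azure.net",  # placeholder vault
    credential=credential,
)
asana_token = client.get_secret("asana-service-token").value  # never hard-coded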
A GPT can hallucinate a field. Loose instructions and you’ll get categories that don’t exist in your taxonomy. Tight system instructions, a fixed enum of allowed values, and a confirmation step before the payload sends are the difference between clean data and a queue full of made-up tags.
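Tight instructions help, but don’t only trust the instructions; re-validate on the receiving end too. A minimal sketch, with an illustrative taxonomy:

# Belt and suspenders: re-check the GPT's category before it touches the queue.
# The taxonomy here is illustrative; use the enum your queue actually routes on.
ALLOWED_CATEGORIES = {
    "Reporting & Analytics",
    "Content & Communications",
    "Process Automation",
    "Research & Summarization",
}

def validate(payload: dict) -> dict:
    if payload.get("category") not in ALLOWED_CATEGORIES:
        # Don't let a made-up tag into the taxonomy; flag it for humans instead.
        payload["category"] = "Needs Triage"
    return payload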
The conversation is not the contract. Make the confirmation step explicit. “We received your idea, here is who triages it, here is what happens next.” Five minutes that prevents a month of “did anyone get my submission” Slack messages.
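The flow can send that message for you. A sketch, with the owner and cadence as placeholders for whatever yours actually are:

# The confirmation the flow returns and the GPT relays back to the submitter.
# The owner and cadence are placeholders, not our actual triage schedule.
TRIAGE_OWNER = "the AI Enablement team"

def confirmation(task_url: str) -> str:
    return (
        f"We received your idea. {TRIAGE_OWNER} triages new submissions "
        f"each week and will follow up. You can track yours here: {task_url}"
    )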
What this is really about
Good enablement isn’t telling people what they should know. It’s understanding what they need.
A form sorts people into haves and have-nots before you’ve even seen them. A conversational intake holds the door open for both. The person with the fully formed automation pitch and the person who said “I don’t know, my Mondays are bad” both leave having been heard by the system. One of them might have a smaller ask. Neither had to translate themselves to be counted.
That equity at the front door is what I’d build for first if I were starting over. The clean data is real. The activation is real. The modeling is real. But the thing underneath all three is that the intake doesn’t punish people for showing up uncertain. It rewards them for showing up at all.
If your intake is underperforming, stop iterating on the form. Replace it. Put a GPT in front of the schema. Wire it to a real system of record. Send a confirmation.
You’ll get more submissions than you expect. You’ll get cleaner data than you expect. You’ll get people in the tool you’re rolling out. And you’ll teach them how to build their own version of what they just used, without ever calling it a course.
The best Custom GPT training I’ve ever deployed is the Custom GPT people use to tell me what training they need.
If you build something with this pattern, I’d like to hear what you change. The architecture above is the version that worked for us, not the version that has to work for everyone. The point is the reframe — that enablement means meeting people where they are, and the door is the first place that has to do that work.
