I recently heard a Stanford professor say most people use generative AI ineffectively because they treat it as a simple Q&A tool rather than a business partner or coach. His key tip was to get the AI to ask you questions, which helps it generate more fruitful responses for you.
So, I started adding something like this to the end of my prompts:
Before proceeding, please identify any ambiguities or additional information needed to generate a complete and accurate analysis based on the context provided.
The results improved significantly.
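The pattern is simple enough to wrap in a small helper. Here is a minimal sketch in Python (the function name and constant are my own framing; the suffix wording is the prompt above):

```python
# Instruction appended to every prompt, asking the model to surface
# ambiguities before answering. Wording taken from the prompt above.
CLARIFY_SUFFIX = (
    "\n\nBefore proceeding, please identify any ambiguities or additional "
    "information needed to generate a complete and accurate analysis "
    "based on the context provided."
)

def with_clarification_request(prompt: str) -> str:
    """Return the prompt with the clarifying-questions instruction appended."""
    return prompt.rstrip() + CLARIFY_SUFFIX
```

Whatever text you were about to send, you send `with_clarification_request(prompt)` instead, and the model's first reply surfaces its open questions rather than its assumptions.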
Why This Works
When AI encounters gaps in a prompt, it doesn't flag them. It fills them, making reasonable assumptions based on patterns it has seen, which produces that "generic but technically correct" output we all know.
By explicitly asking the AI to identify what is missing, you force it to surface those decision points instead of silently choosing for you. It's the difference between the AI assuming you want standard federal court procedures and the AI asking whether you're in state or federal court, which district you're in, who the judge is, and whether your judge has standing orders.
I first put this to use in my attempts at vibe coding (coding through natural-language prompts) parts of this website. Without basic domain knowledge of coding terminology, I didn't even know how to properly explain what I wanted the AI to do.
I started asking Claude to flag any clarifying questions before starting, and it would come back with considerations I hadn't thought of: Should the sidebar change dynamically based on the selected page? Should it be unified, with separate sub-sections? Or should there be a persistent toggle at the top of the sidebar for the user to select?
Once I saw how well this worked for coding, I started implementing this approach with legal prompts.
The Unexpected Pushback from Claude
Interestingly, when I asked Claude whether this is actually good practice, it essentially said no. Modern AI models can supposedly detect ambiguity on their own. Adding explicit instructions to ask questions apparently creates unnecessary friction.
But when I pressed specifically about complex legal work, the answer shifted. For complex legal tasks—with overlapping sets of rules, competing strategic considerations, and client-specific variables—getting clarification upfront can make sense before the AI wastes tokens producing a full response.
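For those complex tasks, the clarify-first workflow amounts to a two-turn conversation: one cheap turn that asks only for questions, then the full request once they're answered. A rough sketch, assuming a generic chat API that takes role/content messages (`ask_model` and `answer_questions` are hypothetical stand-ins for your client call and for the human in the loop):

```python
def clarify_then_answer(task: str, ask_model, answer_questions) -> list[dict]:
    """Build a two-phase message history: surface questions first,
    then request the full work product.

    ask_model(messages) -> str       : sends the history to your chat API
    answer_questions(questions) -> str : returns the human's answers
    """
    # Turn 1: ask only for clarifying questions, not the analysis itself.
    messages = [{
        "role": "user",
        "content": (
            task
            + "\n\nBefore drafting anything, list only the clarifying "
              "questions you need answered. Do not produce the analysis yet."
        ),
    }]
    questions = ask_model(messages)  # cheap first turn
    messages.append({"role": "assistant", "content": questions})

    # Turn 2: feed the answers back and request the full response.
    messages.append({
        "role": "user",
        "content": answer_questions(questions)
        + "\n\nWith those answers, now produce the full analysis.",
    })
    return messages  # ready for the second ask_model(messages) call
```

The point of the structure is that the expensive, long-form generation only happens after the ambiguities have been resolved.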
What I've Noticed
The few extra minutes spent prompting and quickly answering clarifying questions are worth the effort if it means, at minimum, the AI is providing a backstop to ensure my prompt considers all necessary variables. Often the AI won't have any clarifying questions to ask, and it proceeds as normal.