One of the most underutilized features in AI prompting is also one of the simplest: the XML tag, a basic formatting technique that takes seconds to implement but can fundamentally change how an AI interprets your prompts. Here's why we should all be using them.
What are XML Tags?
XML tags are simply labels you put around different parts of your prompt to tell an AI what each section is. Think of them like the subject line dividers in a litigation binder—"Pleadings," "Discovery," "Exhibits." Same content, but now everyone knows exactly what they're looking at.
Here's what they look like:
<instructions>
Tell the AI what you want it to do
</instructions>
<document>
The actual text you want the AI to analyze
</document>

That's it. You're just putting labels around different sections using these brackets: <label> to start and </label> to end.
The Problem They Solve
Without these labels, the AI reads your prompt like one continuous document. Imagine sending an email that contains a contract excerpt, your analysis instructions, some background facts, and relevant case law, all in one long block of text with no formatting. The reader has to guess where the contract ends and your instructions begin.
That's exactly what happens when you paste content into the AI without XML tags. AI is sophisticated, but it's still guessing at boundaries. Sometimes it treats your instructions as part of the document. Sometimes it analyzes your example as if it were the actual assignment.
XML tags eliminate the guessing. You're literally drawing boxes around different parts of your prompt and labeling each box. Leading models, Claude in particular, are trained on structured text and handle these labels well: when the AI sees <contract>, it knows "everything inside here is the contract text," and when it sees </contract>, it knows "okay, the contract section is over."
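In code terms, the wrapping is just string formatting. Here's a minimal Python sketch (the helper name and the sample text are my own, invented for illustration, and nothing any model requires):

```python
def wrap(tag: str, content: str) -> str:
    """Wrap content in a matched <tag>...</tag> pair so the model sees clear boundaries."""
    return f"<{tag}>\n{content.strip()}\n</{tag}>"

# Separate the instructions from the contract text with labeled sections.
prompt = "\n\n".join([
    wrap("instructions", "Summarize the key obligations in this contract."),
    wrap("contract", "The Employee agrees that, for a period of twelve months..."),
])
print(prompt)
```

The same text without the labels forces the model to guess where the instructions end and the contract begins; the tags make that boundary explicit.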
A Specific Example - Deposition Transcript Analysis
Here's an example of a prompt without XML tags for a deposition transcript analysis:
Please help me analyze a deposition transcript and related case documents. I've attached the deposition transcript of [Witness Name], our complaint, the opposing party's deposition transcript, the exhibits to the complaint, and the exhibits that were used during the depositions.
First, please review and summarize the deposition transcript, identifying the key topics discussed and the major testimony provided. Then review and analyze all of the attached case documents so you understand the full context of the case, including what claims we've alleged in the complaint and what evidence supports those claims based on the exhibits.
Once you've reviewed everything, please analyze the deposition transcript and identify the key admissions made by the witness that help our case, as well as any other important statements. For each admission or important statement, explain why it matters in the context of the case and how it relates to the claims in our complaint. Also flag any statements that might be problematic for our position.
Please organize your analysis in a way that would be useful for trial preparation. Include the specific page and line citations for each statement you identify. The summary of the deposition should be comprehensive but not overly long. Focus on what matters most.

Here's the revised prompt with XML tags:
<role>
You are a senior litigation associate assisting with deposition analysis and trial preparation. Your task is to review case documents and provide strategic analysis of deposition testimony.
</role>
<documents_provided>
Deposition transcript of [Witness Name] ("Subject Deposition")
Complaint filed in this matter
Deposition transcript of [Opposing Party Name] ("Opposing Party Deposition")
Exhibits to the Complaint (Exhibits A through [X])
Exhibits used during depositions (Deposition Exhibits 1 through [X])
</documents_provided>
<task_sequence>
<task_1>
DOCUMENT REVIEW AND CONTEXT BUILDING
Review and internalize all attached case documents in the following order:
Complaint: Identify each cause of action, the key factual allegations supporting each claim, and the elements we must prove
Exhibits to Complaint: Understand what documentary evidence supports our allegations
Opposing Party Deposition: Note any prior testimony relevant to the Subject Deposition
Deposition Exhibits: Familiarize yourself with documents referenced during testimony
</task_1>
<task_2>
DEPOSITION SUMMARY
Provide a structured summary of the Subject Deposition including:
Witness background and relationship to the matter
Chronological overview of key topics addressed
Major areas of testimony with brief descriptions
</task_2>
<task_3>
ADMISSION AND KEY STATEMENT ANALYSIS
Identify and analyze the following categories of testimony from the Subject Deposition:
Category A - Favorable Admissions: Statements that directly support our claims or undermine the opposing party's defenses
Category B - Important Corroborating Statements: Testimony that aligns with or strengthens other evidence in the case
Category C - Potentially Problematic Statements: Testimony that could be harmful to our position or that opposing counsel might use against us
Category D - Impeachment Opportunities: Statements that contradict the Opposing Party Deposition or other evidence
</task_3>
</task_sequence>
<output_format>
SECTION 1: CASE CONTEXT SUMMARY
Brief overview of claims and key issues (3-5 paragraphs)
SECTION 2: DEPOSITION SUMMARY
Witness background (1 paragraph)
Topic-by-topic summary with page references
SECTION 3: KEY TESTIMONY ANALYSIS
For each identified statement, provide:
Direct quote with citation [Page:Line]
Category designation (A, B, C, or D)
Relevant complaint allegation or claim it relates to
Strategic significance (2-3 sentences explaining why this matters)
SECTION 4: PRIORITY STATEMENTS FOR TRIAL
Most significant statements ranked by importance to the case
</output_format>
<citation_requirements>
All references to deposition testimony must include page and line numbers in [Page:Line] format. When referencing complaint allegations, cite the paragraph number.
</citation_requirements>

It's a night-and-day difference. The AI immediately understood the structure and delivered a focused, thorough analysis. Here's the value the XML tags added:
1. Clear Role Definition
The XML version opens with an explicit role assignment, which frames the AI's perspective and expertise level. The basic version jumps straight into tasks without establishing context for how the AI should approach the work.
2. Document Inventory
The XML version explicitly lists and labels all documents, making it clear what's been provided and creating consistent terminology (e.g., "Subject Deposition" vs. "Opposing Party Deposition"). The basic version mentions documents in passing, which can create ambiguity about what exactly has been attached.
3. Sequenced Task Structure
The basic version has a subtle sequencing issue: it asks for a deposition summary, then says to review "everything," then asks for analysis—creating some ambiguity about whether the summary comes before or after full document review. The XML version uses numbered tasks that establish a clear logical order: context building first, then summary, then analysis.
4. Categorized Analysis Framework
The XML version provides specific categories for classifying testimony (Favorable Admissions, Corroborating Statements, Problematic Statements, Impeachment Opportunities). The basic version asks for "admissions" and "important statements" without defining what makes something important or how to categorize different types of significant testimony.
5. Explicit Output Structure
The XML version specifies exactly what sections the output should contain and what each section should include. The basic version says to "organize your analysis in a way that would be useful for trial preparation" but leaves the structure to interpretation.
6. Standardized Citation Format
The XML version explicitly requires [Page:Line] format for citations. The basic version asks for "page and line citations" but doesn't specify format, which could result in inconsistent citation styles throughout the output.
7. Scannable and Editable
If you need to modify just the output requirements or add a document, the XML structure makes it immediately clear where to make changes. The basic version would require re-reading the entire prompt to find the right place to edit.
Building Your XML Toolkit for Litigation
There's no official XML tag dictionary; you create tags that make sense for your work. For document review, for example, you might use <document> for the primary text to analyze, <instructions> for what you want the AI to do, <focus_areas> for specific issues to prioritize, <context> for background information, and <output_format> for how you want the response structured.
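As a sketch of how those pieces fit together, here's one way to assemble a review prompt from named sections in Python (the section contents here are invented placeholders, and the tag names are only suggestions):

```python
# Each key becomes a tag; each value is that section's content.
# Any consistent labels work -- these are just the ones suggested above.
sections = {
    "instructions": "Identify every indemnification obligation in the document.",
    "focus_areas": "Caps on liability; carve-outs for gross negligence.",
    "context": "Vendor services agreement governed by New York law.",
    "document": "[paste contract text here]",
    "output_format": "Numbered list with section citations.",
}
prompt = "\n\n".join(f"<{tag}>\n{text}\n</{tag}>" for tag, text in sections.items())
print(prompt)
```

Keeping the sections in a dictionary like this also makes the next point easy: you can change one section without touching the rest.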
Advanced Moves: Nested Tags
For complex litigation tasks, you can nest tags like Russian nesting dolls. This works particularly well for multi-issue analyses, where an AI can lose track of which facts relate to which claims. Below is the template skeleton for the XML prompt used above; notice the nested tags within the <task_sequence> tag.
<role>
[Define the AI's role and expertise level for this task]
</role>
<documents_provided>
1. [Document type] - [Brief description or label for reference]
2. [Document type] - [Brief description or label for reference]
3. [Document type] - [Brief description or label for reference]
[Add additional documents as needed]
</documents_provided>
<task_sequence>
<task_1>
[TASK NAME IN CAPS]
[Describe what the AI should do first]
- [Specific sub-instruction]
- [Specific sub-instruction]
- [Specific sub-instruction]
</task_1>
<task_2>
[TASK NAME IN CAPS]
[Describe the second task]
- [Specific sub-instruction]
- [Specific sub-instruction]
</task_2>
<task_3>
[TASK NAME IN CAPS]
[Describe the third task]
[Category/Classification A]: [Description of what falls in this category]
[Category/Classification B]: [Description of what falls in this category]
[Category/Classification C]: [Description of what falls in this category]
[Add additional categories as needed]
</task_3>
</task_sequence>
<output_format>
SECTION 1: [SECTION TITLE]
- [What this section should contain]
SECTION 2: [SECTION TITLE]
- [What this section should contain]
- [Specific elements to include]
SECTION 3: [SECTION TITLE]
For each item, provide:
- [Required element 1]
- [Required element 2]
- [Required element 3]
SECTION 4: [SECTION TITLE]
- [What this section should contain]
</output_format>
<citation_requirements>
[Specify citation format and standards]
</citation_requirements>
<constraints>
[Optional: Add any limitations, things to avoid, or guardrails]
</constraints>
<context>
[Optional: Add case-specific background, procedural posture, or other relevant context the AI should know]
</context>
The Practical Payoff
After using XML tags, here's what I've noticed. First, I spend much less time clarifying or reprompting because the AI gets it right the first time more often. Second, complex prompts with multiple components stay organized instead of turning into a confused mess. Third, I can easily swap out content within tags without rewriting my entire prompt structure. And fourth, the AI's responses mirror the organization I provide, making the output cleaner and more usable.
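That third benefit, swapping content within tags, is easy to automate. Here's a small Python sketch (regex-based, which is fine for simple flat tags like these but is not a general XML parser, and would need escaping if the new content contained backslashes):

```python
import re

def swap_tag(prompt: str, tag: str, new_content: str) -> str:
    """Replace the contents of the first <tag>...</tag> block, leaving everything else intact."""
    pattern = re.compile(rf"(<{tag}>).*?(</{tag}>)", re.DOTALL)
    return pattern.sub(rf"\g<1>\n{new_content}\n\g<2>", prompt, count=1)

template = (
    "<instructions>\nSummarize the deposition.\n</instructions>\n\n"
    "<document>\nOld transcript excerpt.\n</document>"
)
updated = swap_tag(template, "document", "New transcript excerpt.")
```

The instructions block never changes; only the material inside <document> does, which is exactly the reuse pattern described above.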
Common Pitfalls to Avoid
Not everything needs XML tags. If you're asking the AI a simple question, tags would be overkill. Save them for prompts with multiple components or when you need to clearly separate different types of content.
Also, consistency matters. If you start with <context> in one part of your prompt, don't switch to <factual_background> later; the AI will treat them as different containers. Consistent, well-matched tags ensure the AI understands exactly what role each piece of information plays in the overall task.
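You can even have a script catch mismatched tags for you. A small sketch that flags any tag whose open and close counts disagree (a renamed or forgotten closing tag is the most common slip):

```python
import re
from collections import Counter

def check_tags(prompt: str) -> list[str]:
    """Return the names of tags whose <tag> and </tag> counts don't match."""
    opens = Counter(re.findall(r"<([a-z0-9_]+)>", prompt))
    closes = Counter(re.findall(r"</([a-z0-9_]+)>", prompt))
    return sorted(tag for tag in opens | closes if opens[tag] != closes[tag])

# A prompt that opens <context> but closes <factual_background>:
mismatched = check_tags("<context>\nBackground facts.\n</factual_background>")
```

Running this before sending a long prompt takes a second and catches exactly the consistency mistake described above.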
The Bottom Line
For lawyers trying to get consistent, high-quality outputs from AI, XML tags might be the difference between hoping the AI understands your prompt and knowing it does.
Start simple. Next time you're asking the AI to review a document, wrap your instructions in <instructions> tags and the document in <document> tags. Once you see the improvement, you'll naturally start developing your own tagging system for different litigation tasks.
The learning curve is about five minutes. The improvement in output quality is lasting.