If you’ve spent any time seriously engineering prompts for Large Language Models (LLMs), you’ve probably felt the pain. Your once-simple instruction balloons into a complex wall of text, stuffed with <context>, <persona>, and <instructions> brackets. It gets the job done, but it’s unwieldy, hard to read, and even harder to debug.

We're told to provide clear context, but the "best practices" often lead to prompts that look like messy, first-draft code.

What if we could do better? What if we could structure our prompts in a way that’s not only more powerful for the AI but also dramatically clearer for us humans? What if we could use a specification that other humans have already standardized?

The idea is simple: borrow from HTML.

Introducing Semantic Prompting with <details> and <summary>

Instead of generic XML-style brackets, let's use a standard, semantic HTML element that LLMs already understand deeply from their training on billions of web pages: the <details> disclosure widget.

It looks like this:


<details>
  <summary>This is the high-level summary.</summary>
  All the detailed context and information goes right here.
</details>

The <summary> tag provides a clean, human-readable label. The <details> tag holds all the rich context related to that label.

Yes, technically these are still XML-style tags, but I'm proposing you use just one simple disclosure element, one that both the AI and humans can extract information from via its hierarchy and attributes.
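If you generate prompts programmatically, a tiny helper keeps every section consistent. This is a minimal sketch; the function name, signature, and the `name` attribute convention are illustrative choices from this article, not a standard API.

```python
def details_block(name: str, summary: str, body: str) -> str:
    """Render one named <details> section for inclusion in a prompt."""
    return (
        f'<details name="{name}">\n'
        f"  <summary>{summary}</summary>\n"
        f"  {body}\n"
        f"</details>"
    )

# Example: build a brand-voice context block.
print(details_block(
    "BRAND_VOICE",
    "Brand Voice and Tone Guidelines",
    "- Tone: Innovative, helpful, calming.",
))
```

Because every section goes through one function, the structure can't drift between prompts the way hand-edited brackets do.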

Let’s revisit a complex prompt, but structured this new way. Imagine you're asking an AI to act as a marketing strategist.


Your task is to act as a Senior Marketing Strategist. Analyze all the provided context within the <details> blocks. Then, complete the final task outlined in the "TASK" block.

<details name="PRODUCT_INFO">
  <summary>Product Information: The "Aura" Smart Lamp</summary>
  The "Aura" is a new smart lamp that syncs with your digital calendar and changes color based on your schedule...
</details>

<details name="TARGET_AUDIENCE">
  <summary>Target Audience Profile: "The Focused Professional"</summary>
  - Age: 28-45
  - Profession: Remote workers, tech employees, freelancers...
</details>

<details name="BRAND_VOICE">
  <summary>Brand Voice and Tone Guidelines</summary>
  - Tone: Innovative, helpful, calming, and slightly aspirational...
</details>

<details name="TASK">
  <summary>Your Specific Assignment</summary>
  Based on all the context above, generate a 3-part marketing campaign...
</details>
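A side benefit of using real HTML is that standard tooling can read your prompts back. As a sketch, Python's built-in `html.parser` can list every section summary in a prompt, handy for auditing or diffing large prompt files (the class here is a hypothetical example, not part of any library):

```python
from html.parser import HTMLParser

class SummaryExtractor(HTMLParser):
    """Collect the text content of every <summary> tag in a prompt."""
    def __init__(self):
        super().__init__()
        self.in_summary = False
        self.summaries = []

    def handle_starttag(self, tag, attrs):
        if tag == "summary":
            self.in_summary = True
            self.summaries.append("")

    def handle_endtag(self, tag):
        if tag == "summary":
            self.in_summary = False

    def handle_data(self, data):
        if self.in_summary:
            self.summaries[-1] += data

prompt = """
<details name="TASK">
  <summary>Your Specific Assignment</summary>
  Generate a 3-part marketing campaign.
</details>
"""

parser = SummaryExtractor()
parser.feed(prompt)
print(parser.summaries)  # ['Your Specific Assignment']
```

Running this over the marketing prompt above would give you a table of contents of its context blocks for free.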

Why This is a Game-Changer

This isn't just a formatting trick. It’s a fundamental improvement to the prompting workflow for two reasons.

1. It’s Better for the Human.

This structure is a dream to work with. Because modern text editors and note-taking apps (like Obsidian, Notion, or even a simple HTML file) render these elements, you get a clean, collapsible interface for your prompt.

[Screenshot: the prompt above rendered in Obsidian, each <details> block collapsed to its summary line]

Look at how that prompt renders. It's clean, and it's clear.

This is a user interface principle called "Progressive Disclosure." You see the high-level structure first and can dive into the details only when you need to. This drastically reduces cognitive load and makes editing and refining your prompts a thousand times easier.

2. It’s Better for the AI.

LLMs have seen the relationship between <summary> and <details> countless times in their training data. You’re giving the model more than just data; you’re giving it a pre-organized mental model.

  • Hierarchical Context: The AI knows the <summary> is a parent concept for the text inside <details>. This is much richer than a flat list of tags.

  • Scoped Attention: You’re essentially telling the model’s attention mechanism what to focus on. By providing a summary first, you prime the model: "Get ready, the following block of text is all about the Target Audience." This prevents context from bleeding together and leads to more precise, relevant outputs.
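Named sections are also mechanically addressable. As a rough sketch (the `name` attribute convention is this article's proposal, and the function is hypothetical), a few lines of regex can pull one block out of a larger prompt, for example to swap in a different audience profile without touching the rest:

```python
import re

def get_section(prompt: str, name: str):
    """Return the full <details> block with the given name, or None."""
    pattern = rf'<details name="{re.escape(name)}">.*?</details>'
    match = re.search(pattern, prompt, re.DOTALL)
    return match.group(0) if match else None

prompt = (
    '<details name="TASK">\n'
    "  <summary>Your Specific Assignment</summary>\n"
    "  Generate a 3-part campaign.\n"
    "</details>"
)
print(get_section(prompt, "TASK"))
```

Note this simple pattern assumes the blocks are not nested; a real implementation would want a proper HTML parser for that case.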

Let's Build on This

This approach moves us from simply "prompting" to a more robust practice of "Context Engineering." It’s a system where the very act of writing the prompt enforces a clear, structured, and effective way of thinking.

But this is just the beginning. There's a lot of potential to explore in creating a more formal standard around this idea.


I'm looking to connect with others who find this idea compelling. If you're a prompt engineer, a developer, or just someone passionate about the future of AI interaction, I'd love to connect. Let's discuss, experiment, and see if we can build a better standard for communicating with our AI counterparts. You can find me as watthem on Bluesky or on LinkedIn.