Alex Jitianu, Syncro Soft/Oxygen XML Editor
November 1, 2025

As artificial intelligence becomes an integral part of technical communication, one question persists: how can we manage prompts with the same rigor and efficiency we apply to structured content?

Prompt engineering has evolved rapidly, yet it still faces familiar challenges — duplication, inconsistency, and maintenance complexity. This article explores how structured authoring principles, particularly those from DITA XML, can be applied to AI prompt development. The goal: to achieve reuse, profiling, and maintainability in prompt engineering, using the same foundations that have guided technical documentation for decades.

The Current Landscape

Our industry has already integrated AI-assisted DITA XML content generation for creating drafts and reviews. But this is only the beginning. DITA XML has the potential to play a far greater role in AI workflows — from being a knowledge source in retrieval-augmented generation (RAG) pipelines to serving as structured input for chatbots via WebHelp outputs.

The next frontier is using DITA not just as input for AI, but as the framework to manage AI prompts themselves.

Why Prompt Development Matters

Every interaction with an AI model starts with a prompt — the set of instructions and context guiding the model’s response. A well-crafted prompt defines not only what the AI produces, but how it reasons, communicates, and structures information.

Prompt engineering, therefore, is the art and science of designing, refining, and maintaining these instructions to ensure clarity, consistency, and performance across tasks.

The Challenge of Complex Prompts

Simple tasks often need simple prompts. But as tasks grow in complexity — such as generating multi-audience proposals, performing conversions, or maintaining consistent tone and structure — the challenge escalates.

Take, for example, a task to generate a structured proposal for multiple audiences. Developers may require a technical overview, while executives want a business impact summary. The same base instruction must adapt across personas, tones, and deliverable types.

This demands audience-aware, style-aware, and context-rich prompts — a level of sophistication that quickly becomes difficult to maintain without structure or modularity.

Multi-Engine Complexity

Another emerging challenge is managing prompts across different AI models.

  • Lightweight models (like GPT-4o mini) perform best with explicit, step-by-step instructions.
  • Full models (such as GPT-4o) can handle more abstract reasoning and creativity.
  • Advanced reasoning models (like GPT-5) may require minimal guidance to avoid over-constraint.

Each variant of a task — from summarization to content generation — needs tailored prompt designs for different engines. The result? Duplication, divergence, and growing technical debt.

To solve this, we need structured reuse — a way to define, adapt, and maintain related prompts without rewriting from scratch.

Building a Prompt Toolbox

In our AI Positron Assistant project, we started with a modest set of ten predefined prompts — tasks like rewriting, summarizing, and converting content. We kept things simple, representing actions with JSON and Markdown.

But as the assistant evolved, the prompt library exploded to 47 prompts. With every new task or translation variant, copy–paste reuse became unsustainable. Small divergences crept in — sometimes as trivial as a Markdown separator difference (```xml vs. ---), yet they multiplied over time.

This experience led to a key insight: prompt reuse must be managed systematically, not manually.

Prompt Development Frameworks

Frameworks for prompt engineering are emerging, offering structured approaches for designing, refining, and testing prompts. Best practices consistently emphasize:

  • Clarity — precise, goal-oriented instructions
  • Iterative refinement — testing different cases and refining as needed
  • Modular design — building reusable components like personas, audiences, and style templates

These principles mirror those long established in structured content authoring. Which raises an interesting question:

“Do we already have a framework that supports reuse, profiling, and modular design — one that has been proven in industry?”

We do: DITA.

DITA Fundamentals

DITA (Darwin Information Typing Architecture) was designed for modular, topic-based authoring. Its key mechanisms — conref, keyref, and profiling attributes — enable large-scale content reuse and conditional variation.
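
For readers less familiar with these mechanisms, here is a minimal sketch of conref, assuming hypothetical file names and IDs: a paragraph is written once in a shared topic and pulled by reference into any topic that needs it.

  <!-- shared.dita: the single source of a reusable paragraph -->
  <topic id="shared">
    <title>Shared content</title>
    <body>
      <p id="backup-warning">Always back up your data before upgrading.</p>
    </body>
  </topic>

  <!-- any other topic reuses the paragraph by reference instead of copying it -->
  <p conref="shared.dita#shared/backup-warning"/>

Editing the shared paragraph updates every topic that references it; keyref and profiling attributes extend the same idea to indirect addressing and conditional inclusion.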

In documentation, DITA allows authors to manage thousands of topics efficiently, producing multiple outputs (like user guides, API references, or quick references) from a shared content base.

The same principles can be directly applied to prompt development.

Applying DITA to Prompt Development

1. Treat Prompts as DITA Topics

Each prompt can be represented as a DITA topic, containing structured components such as the following (a sketch follows the list):

  • Instruction – the main task directive
  • Persona – the role the AI should assume
  • Style – tone or format (e.g., concise, persuasive)
  • Input – variables or placeholders for user data
  • Output – desired structure or response format
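
As an illustration, a rewrite prompt might be modeled as follows. This is a sketch only: the section layout and the ${selection} placeholder for user data are assumptions, not the actual AI Positron markup.

  <topic id="rewrite-prompt">
    <title>Rewrite for clarity</title>
    <body>
      <section id="persona">
        <title>Persona</title>
        <p>You are an experienced technical writer.</p>
      </section>
      <section id="instruction">
        <title>Instruction</title>
        <p>Rewrite the text below to improve clarity and flow.</p>
      </section>
      <section id="style">
        <title>Style</title>
        <p>Use a concise, neutral tone and keep the original meaning.</p>
      </section>
      <section id="input">
        <title>Input</title>
        <p>${selection}</p>
      </section>
      <section id="output">
        <title>Output</title>
        <p>Return only the rewritten text, with no commentary.</p>
      </section>
    </body>
  </topic>

Because each component lives in its own addressable element, any of them can later be reused, overridden, or filtered independently.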

2. Reuse Through Components

DITA’s reuse mechanisms allow components like personas or tone definitions to be shared across multiple prompts (a sketch follows the list). For instance:

  • A “technical writer” persona could be referenced by dozens of task prompts.
  • Style modules could enforce consistent formatting across engines.
  • Profiling attributes can automatically adjust prompts for different AI engines (fast, full, or reasoning).
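
Concretely, that might look like the sketch below, where the persona topic, the engine names used as product values, and the DITAVAL file are all illustrative assumptions:

  <!-- the persona is written once and pulled into many prompts by conref -->
  <p conref="personas.dita#personas/technical-writer"/>

  <!-- engine-specific guidance carried by a profiling attribute -->
  <ol product="gpt-4o-mini">
    <li>Read the input text.</li>
    <li>List the changes you plan to make.</li>
    <li>Apply the changes one by one.</li>
  </ol>
  <p product="gpt-5">Improve the text, using your own judgment.</p>

  <!-- a DITAVAL file selects one engine variant at publish time -->
  <val>
    <prop att="product" val="gpt-4o-mini" action="include"/>
    <prop att="product" val="gpt-5" action="exclude"/>
  </val>

Publishing the same source once per DITAVAL file yields a tailored prompt for each engine without duplicating a single sentence.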

A Practical Implementation: Hybrid DITA–Markdown Project

In practice, we built a hybrid DITA–Markdown setup for AI Positron.

The project includes:

  • DITA-OT project files for structure and publication
  • Markdown for lightweight flexibility
  • Reusable resources such as personas, prompt components, and instructions

This structure allows different chat modes — e.g., Agentic Chat, DITA XML Chat, or Ask and Make Plans — each reusing and extending common modules.
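
A simplified map for such a project might look like the following sketch. The file names and folder layout are invented, and Markdown topics are assumed to be referenced with format="markdown", which DITA-OT's Markdown support allows:

  <map>
    <title>AI Positron prompt library</title>
    <!-- shared building blocks, resolved only by reference -->
    <keydef keys="personas" href="common/personas.dita"/>
    <keydef keys="style-rules" href="common/style-rules.md" format="markdown"/>
    <!-- one entry per chat mode, each reusing the common modules -->
    <topicref href="modes/agentic-chat.dita"/>
    <topicref href="modes/dita-xml-chat.dita"/>
    <topicref href="modes/ask-and-make-plans.md" format="markdown"/>
  </map>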

Translation prompts further illustrate the advantage of this approach. Each translation prompt (English, German, French, Japanese, etc.) references a shared base structure, reusing metadata and logic while applying language-specific parameters through profiling attributes.
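
For instance, the language-neutral part of the prompt can be written once and the target language injected through a key. In this sketch, the key name and file names are illustrative:

  <!-- translate.dita: the shared base prompt, language-neutral -->
  <p>Translate the following text into <ph keyref="target-language"/>.
  Preserve all XML markup and inline placeholders.</p>

  <!-- german.ditamap: binds the key for one language variant -->
  <map>
    <topicref href="translate.dita"/>
    <keydef keys="target-language">
      <topicmeta><keywords><keyword>German</keyword></keywords></topicmeta>
    </keydef>
  </map>

Adding another language then means adding one small map, not copying the whole prompt.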

Looking Ahead

The journey doesn’t end here. For us, the next phase focuses on:

  • Refining the project architecture.
  • Evolving from Prompts to AI Actions — executable units that combine prompts with metadata. Using DITA, we can define action properties (such as title, description, and type) within metadata sections and automatically generate YAML front matter (see the sketch below). This bridges structured content authoring with programmatic execution, enabling seamless integration between DITA topics and AI action configurations.
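
As a sketch of that direction, with element choices that are illustrative rather than final, the action properties could live in the topic prolog and be serialized to YAML front matter at publish time:

  <topic id="summarize-action">
    <title>Summarize</title>
    <prolog>
      <metadata>
        <othermeta name="description" content="Summarize the selected text"/>
        <othermeta name="type" content="generate"/>
      </metadata>
    </prolog>
    <body>
      <!-- the prompt body, assembled from reusable modules as before -->
    </body>
  </topic>

  <!-- a hypothetical publishing step would emit Markdown with front matter:
  title: Summarize
  description: Summarize the selected text
  type: generate
  -->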

Conclusion

Prompt development is evolving from ad-hoc experimentation to disciplined engineering. By applying DITA’s framework for reuse, profiling, and modular design, we can build AI systems that are scalable, consistent, and maintainable. In short, why reinvent the wheel? Structured content principles already solve many of the same problems prompt engineers face today.

DITA has long empowered human authors. Now, it can empower AI authors too.