
Ubiquitous Language as a Control System for LLM-Assisted Development

19 March 2026 - 6-minute read

Large Language Models (LLMs) can write code, generate architectural suggestions, and even propose domain models. Their linguistic fluency often creates the impression that they truly understand the system they are helping to design.

However, this fluency hides a fundamental limitation:

The model does not understand your domain; it simply reacts to the language you provide.

For this reason, the most critical discipline when collaborating with an LLM is not about writing sophisticated prompts or mastering specific techniques. It is the very same discipline at the heart of Domain-Driven Design: Ubiquitous Language.

When you use an LLM to generate code, design APIs, define events, or propose architectural structures, the quality of the output depends almost entirely on the precision and consistency of the language you use.

In practice, Ubiquitous Language becomes the control system for AI-assisted development.

Why Language Matters Even More with LLMs

In a traditional development team, ambiguous language is inconvenient but manageable: developers ask clarifying questions, review decisions, and domain experts correct any misunderstandings.

Humans naturally resolve ambiguity. LLMs do not!

When language is vague or inconsistent, the model cannot ask for clarification. Instead, it fills knowledge gaps by generating what appears statistically plausible. The result may look correct, but it might not reflect the true rules of your domain.

This means that ambiguity has a different consequence when collaborating with an LLM: it expands the space of possible interpretations.

To see this concretely, consider the following simple request made to an LLM agent:

Create the system events.

A teammate would immediately ask you a few questions: “Which system?”, “Which part of the system?”, “What naming conventions should I use?”, “Which system boundaries?”.

An LLM won’t ask these questions; instead, it will immediately generate an output that seems reasonable.

Now compare the result with what the same LLM generates for you from a prompt written in the relevant technical language:

“Define the domain events for the Order Bounded Context in the BrewUp platform. Use past-tense names and do not introduce new Aggregates.”

You will certainly notice a difference that is not just stylistic, but structural.

In the second case, the model has a clear vocabulary and explicit boundaries. Possible interpretations are drastically reduced, and the generated output becomes much more aligned with the intended domain.
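To make the difference tangible, here is a minimal sketch of what the constrained prompt could plausibly yield. This is an illustration, not output quoted from the article: the event classes and fields are assumptions, while `Order` and past-tense naming come from the prompt itself.

```typescript
// Illustrative sketch: events a constrained prompt might yield for the
// Order Bounded Context. Field names are hypothetical.

// Every event uses a past-tense name and refers to the existing Order
// aggregate; no new Aggregates are introduced.
interface DomainEvent {
  readonly eventName: string;   // past-tense, aggregate-prefixed
  readonly aggregateId: string; // identifies the existing Order aggregate
  readonly occurredAt: Date;
}

class OrderPlaced implements DomainEvent {
  readonly eventName = "OrderPlaced";
  constructor(
    readonly aggregateId: string,
    readonly occurredAt: Date = new Date()
  ) {}
}

class OrderPaid implements DomainEvent {
  readonly eventName = "OrderPaid";
  constructor(
    readonly aggregateId: string,
    readonly occurredAt: Date = new Date()
  ) {}
}
```

Notice how every name in the sketch is derivable from the prompt's vocabulary; the vague prompt gives the model no such anchor.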

In other words:

Language precision reduces ambiguity in the generated results.

Ubiquitous Language as a Boundary System

In Domain-Driven Design, Ubiquitous Language ensures that every important concept has a shared meaning within the team.

Terms like Order, Payment, or Inventory Reservation are not generic words. They are precise domain concepts that exist within well-defined boundaries.

When collaborating with an AI agent, these boundaries become even more critical.

If you were to alternate between multiple terms to indicate the same concept in your prompts (for example, Order, Purchase, Request), the model would interpret them as different signals and start generating structures that were never intended.

Let’s look at an example using the term Order. By asking an AI agent to define events for an Order, you might end up with results like PurchaseCreated or RequestSubmitted, simply because Order, Purchase, and Request often appear in similar contexts.

The model isn’t making a conceptual decision. It is following linguistic patterns.

For this reason, a disciplined Ubiquitous Language becomes essential.

If the domain concept is Order, then everything must consistently refer to Order:

  • OrderCreated
  • OrderPaid
  • OrderCancelled

Consistency stabilizes the generation process and prevents conceptual drift.
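This kind of consistency can even be checked mechanically. The helper below is a hypothetical sketch (not from the article): it flags event names that drift from the canonical term before they reach a prompt or a code review. The canonical term and the forbidden synonyms are the ones used in the examples above.

```typescript
// Hypothetical lint-style check: does an event name respect the
// team's Ubiquitous Language for the Order concept?

const CANONICAL = "Order";
const FORBIDDEN_SYNONYMS = ["Purchase", "Request"]; // terms the team agreed not to use

function checkEventName(name: string): string[] {
  const problems: string[] = [];
  if (!name.startsWith(CANONICAL)) {
    problems.push(`"${name}" does not start with the canonical term "${CANONICAL}"`);
  }
  for (const synonym of FORBIDDEN_SYNONYMS) {
    if (name.includes(synonym)) {
      problems.push(`"${name}" uses the forbidden synonym "${synonym}"`);
    }
  }
  return problems;
}

console.log(checkEventName("OrderCreated"));    // no problems
console.log(checkEventName("PurchaseCreated")); // wrong prefix AND forbidden synonym
```

The same idea extends to prompts: reviewing them against a single glossary catches drift before the model ever sees it.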

Writing Code with an LLM Requires Linguistic Discipline

When you ask an LLM to write code, language precision becomes even more critical.

Code generation depends heavily on how concepts are described. If the domain vocabulary you choose to use is unclear, the generated structures will be as well.

Let me give another example. Consider the following prompt:

Generate a service that handles purchases.

The model will inevitably have to guess the meaning of the term “purchase,” likely asking itself: “Is it the creation of an order?”, “The payment process?”, “The checkout?”.

Now, let’s make the same request using clear domain language:

Generate a service that manages Order placement within the Ordering bounded context. The Order aggregate emits an OrderPlaced event when the payment is confirmed.

You will notice that I used a specific vocabulary to define the system:

  • Order, the core concept
  • OrderPlaced, the event
  • Ordering, the context boundary

Because the language is structured, the generated code will tend to reflect that structure.

The AI agent does not understand the domain but, by following the signals present in the language, it generates the code. When these signals are precise, the resulting code aligns with the domain model.
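A plausible shape for the code that the constrained prompt describes is sketched below. This is an assumption for illustration, not the article's actual output: the class and method names other than `Order`, `OrderPlaced`, and the Ordering context are invented.

```typescript
// Hedged sketch of what the constrained prompt could produce inside the
// Ordering bounded context. Method names are hypothetical.

interface OrderPlaced {
  readonly type: "OrderPlaced";
  readonly orderId: string;
}

class Order {
  private placed = false;
  readonly pendingEvents: OrderPlaced[] = [];

  constructor(readonly id: string) {}

  // The Order aggregate emits OrderPlaced only when the payment is confirmed,
  // exactly as the prompt states.
  confirmPayment(paymentConfirmed: boolean): void {
    if (!paymentConfirmed) {
      throw new Error("Cannot place an Order without a confirmed payment");
    }
    this.placed = true;
    this.pendingEvents.push({ type: "OrderPlaced", orderId: this.id });
  }
}

// The service that manages Order placement; it delegates to the aggregate.
class OrderPlacementService {
  placeOrder(order: Order, paymentConfirmed: boolean): OrderPlaced[] {
    order.confirmPayment(paymentConfirmed);
    return order.pendingEvents;
  }
}
```

Every identifier in this sketch traces back to a word in the prompt; that traceability is what the precise vocabulary buys you.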

Ubiquitous Language as a Constraint System

Now, let’s look at Ubiquitous Language as a system of constraints.

As previously seen, when the language of a prompt is vague, the LLM can take multiple directions to complete the request. Conversely, when the language is precise, the set of possible responses narrows and becomes more predictable. This is particularly useful when generating:

  • Domain Events
  • Aggregates
  • Commands
  • APIs
  • Service boundaries

To better illustrate this concept, let’s compare two prompts: the first written without constraints, the second using constrained language.

Unconstrained prompt:

“Create the system commands.”

Constrained prompt:

“Create the commands for the Order aggregate within the SalesOrder bounded context. Commands must represent user intentions and must not bypass aggregate invariants.”

As you may have noticed, the second prompt directly embeds domain knowledge into the language. The output generated by the LLM will therefore tend to align with that knowledge, because the vocabulary limits what is statistically plausible.

The LLM becomes more useful not because it is more “intelligent,” but because the language drives the result.
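To ground the constrained prompt, here is a minimal sketch of commands it could lead to. It is an assumption, not the article's code: the specific commands and the invariant are invented, while `Order` and the rule that commands express user intentions and respect aggregate invariants come from the prompt.

```typescript
// Hypothetical commands for the Order aggregate in the SalesOrder
// bounded context. Commands are named as user intentions, in the imperative.

interface PlaceOrder  { readonly type: "PlaceOrder";  readonly orderId: string; }
interface CancelOrder { readonly type: "CancelOrder"; readonly orderId: string; }
type OrderCommand = PlaceOrder | CancelOrder;

class OrderAggregate {
  private placed = false;

  constructor(readonly id: string) {}

  // All state changes go through handle(), so commands cannot bypass
  // the aggregate's invariants.
  handle(command: OrderCommand): void {
    switch (command.type) {
      case "PlaceOrder":
        if (this.placed) throw new Error("Order already placed"); // invariant
        this.placed = true;
        break;
      case "CancelOrder":
        if (!this.placed) throw new Error("Cannot cancel an unplaced Order"); // invariant
        this.placed = false;
        break;
    }
  }

  get isPlaced(): boolean { return this.placed; }
}
```

The constraint "must not bypass aggregate invariants" shows up here as a single entry point, `handle()`, that enforces the rules on every command.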

When the LLM Drifts, Check the Language

When a model produces an incorrect result, the most common reaction is to think that the model made a mistake.

Often, however, the real problem lies in the language being used.

If the output produced by the AI agent introduces unexpected concepts or boundaries, it usually means that the domain language used was generic or lacked detail.

Instead of asking, “Why did the model get it wrong?” it is better to ask:

“Which parts of the domain language were unclear?”

Some of the most common causes of weak domain language are:

  • Poorly defined boundaries between Bounded Contexts;
  • Inconsistent naming of Aggregates;
  • Domain rules that are implicit but never made explicit;
  • A mix of technical jargon and business terms.

Whenever a prompt suffers from these gaps, the LLM fills them by applying patterns learned elsewhere.

In this sense, LLMs also act as a mirror for the clarity of your domain model:

If the language is precise, the output tends to remain consistent.
If the language is confused, the output reveals that confusion.

The Role of the Software Architect Does Not Disappear

Using an LLM to generate code or explore architectural solutions does not eliminate the need for architectural ownership.

The model can help to:

  • Structure ideas;
  • Explore alternatives;
  • Generate code or domain artifacts.

However, the model itself cannot verify the validity of the results in relation to business rules. This remains the responsibility of software architects and domain experts.

The software architect’s responsibility is shifting: from the manual production of every artifact to the design of linguistic constraints that guide the generation process.

The work is therefore evolving from hand-coding to the engineering of the language that drives collaboration.

The Key Insight

When working with LLMs, the relationship between language and modeling becomes more direct than ever.

Domain models are expressed through language.
LLM behavior is driven by language.

This creates a very powerful alignment, but it also introduces a risk: any imprecision in the language propagates directly into the model’s output.

If we treat LLMs as language-driven collaborators constrained by Ubiquitous Language, they become extremely useful assistants for exploration and code generation.

The same discipline that Domain-Driven Design introduced for human collaboration becomes just as vital in the collaboration between humans and AI agents.

The core principle remains simple:

Precise language produces precise systems.

And when an LLM enters the development process, Ubiquitous Language becomes the rudder that keeps the system on course with the domain.
