Beyond Perfect Prompts: Why Some AI Tasks Need Translation, Not Engineering

When translating the Persian poet Hushang Ebtehaj, known as Sayeh and celebrated as the modern Hafez, into English, my translator partner, the poet Chad Sweeney, and I sometimes worked through several pages of notes to translate a single line of poetry, most of them my explanations of cultural and historical context for him. This happened especially in the first phase of our collaboration, but the challenge persisted: each new poem brought references that required building shared understanding before its words could cross into English with their meaning intact.

A line might reference centuries of Persian literary tradition, deploy metaphors that carry completely different weight across languages, or assume cultural and historical knowledge that has no equivalent in English. Those pages of notes weren’t inefficiency. They were the actual work of translation—constructing the common ground that would make understanding possible.

This experience offers an unexpected lens for thinking about how people work with AI. When results feel unsatisfying, the instinct is often to question one’s own capability. Perhaps mastery of “prompt engineering” is required. Perhaps the problem is a failure to understand the system well enough.

But what if that’s not quite the right way to think about it?

When Perfect Prompts Actually Work

Here’s what makes this complicated: perfect prompts genuinely do work—in certain contexts.

Consider what happens with an AI image generator like Midjourney: “A photorealistic portrait, 85mm lens, golden hour lighting, shallow depth of field, f/1.8, soft focus on eyes.” The result tends to be remarkably close to what was requested.

Or think about coding prompts: “Write a Python function that takes a list of integers and returns only the unique values, preserving original order.” AI generates exactly that.
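
A minimal sketch of the kind of function that prompt typically yields; the function name, comments, and exact style here are my own choices rather than anything the prompt specifies:

    def unique_preserving_order(values):
        """Return the unique values from the list, keeping first-occurrence order."""
        seen = set()
        result = []
        for value in values:
            if value not in seen:  # keep only the first time a value appears
                seen.add(value)
                result.append(value)
        return result

    # Example: [3, 1, 3, 2, 1] becomes [3, 1, 2]
    print(unique_preserving_order([3, 1, 3, 2, 1]))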

These work for a specific reason: the vocabulary and concepts being deployed already carry shared meaning. When someone says “85mm lens,” both user and AI understand what that specification means. “Golden hour lighting” isn’t poetic—it’s established photographic terminology. “Shallow depth of field” denotes a precise technical effect.

Similarly with coding: “function,” “list,” “return,” and “unique” all have definitions established by programming language specifications. Computer science has spent decades codifying its vocabulary.

The shared context already exists. Users aren’t building understanding through conversation; they’re deploying agreements that technical domains have already established.

When Shared Context Must Be Built

But consider different requests: “Help me think through whether this strategic approach makes sense.” Or “Analyze what makes this narrative compelling.” Or “Research how different cultures approach this concept and find patterns.”

These involve no codified vocabulary or concepts. What makes a strategy “make sense”? What constitutes “compelling”? What counts as a meaningful pattern versus surface similarity? The shared ground that would allow precise specification simply doesn’t exist yet.

This is where the dynamic shifts entirely. Users aren’t working in domains where vocabulary and concepts have been pre-established. They’re engaging in exploratory thinking, strategic analysis, research across ambiguous territory—work that requires building shared understanding through exchange.

And this is where the translation model becomes relevant.

Communication Across Interpretive Gaps

What happens in these exploratory contexts involves multiple layers of interpretation, each creating potential gaps.

First translation: What exists in a person’s mind must become language. Already a gap emerges between intent and expression.

Second translation: AI encounters that language through patterns from training data and whatever understanding it has about how this user works. It builds interpretation—not accessing actual meaning but constructing understanding from what the words seem to signal.

Third translation: The response arrives carrying AI’s logic and assumptions, which the user must interpret back into terms that make sense for their original purpose.

This mirrors what happens in translation work across genuinely different frameworks. Those pages of notes explaining cultural context, historical reference, literary tradition—that’s the work of building bridges where shared understanding doesn’t yet exist. Each layer requires active construction of meaning, not just deployment of pre-existing agreements.

Why “Prompt Engineering” Can Feel Wrong

The expectation that users master “prompt engineering” for exploratory contexts treats all AI interaction as if it worked the way codified domains do: learn the precise syntax, and the system executes correctly.

But collaborative thinking doesn’t work that way. It’s iterative, and it involves a negotiation of understanding that can’t be resolved through syntax alone.

When people hear they need better prompting skills, or when that thought arises as self-doubt, it echoes something from earlier technology eras: the old IT culture that made users feel incompetent, even ashamed, when systems required specialized expertise for basic tasks. The issue often wasn’t user capability. It was design that demanded too much specialized knowledge.

What if expecting “perfect prompts” in exploratory contexts represents a similar mismatch?

People are increasingly trusting AI with significant work. Research shaping strategy. Analysis influencing decisions. Thinking partnerships affecting how problems get framed. Getting this communication right matters.

And there’s an additional challenge: AI communication happens primarily through text—without facial expressions, vocal tone, or body language that help humans navigate ambiguity. The work is happening with less bandwidth for conveying nuance, which suggests the system itself needs to work harder to bridge interpretive gaps, not demand that users become more technically precise to compensate for what the medium inherently lacks.

What Actually Helps Build Shared Ground

Here’s what seems to work in exploratory rather than codified domains.

Establishing contextual framing. Just as translators help collaborators understand conventions before meaning can cross between languages, giving AI perspective about the intended lens or role helps shape interpretation. Prompts establishing stance—“approach this as a UX strategist” or “think from a narrative structure perspective”—create interpretive ground when codified vocabulary doesn’t exist.

Providing concrete examples when shared vocabulary is absent. In domains where technical terms carry meaning, examples aren’t necessary. “85mm lens” requires no illustration. But when asking for a certain kind of analysis, showing instances establishes the pattern. “Here are examples of the insight depth I’m looking for” builds the framework that codified domains already possess. This is how understanding can be constructed when you can’t deploy pre-existing terms.

Verifying interpretation before investing effort. In translation, checking understanding happens continuously: “When I reference X, you’re interpreting that as Y, correct?” Asking AI to reflect back what it thinks is being requested catches misalignment early.

Creating continuity when systems can’t maintain it. When a conversation exceeds context limits, requesting a summary document that captures the exchange, then providing it as a foundation when beginning fresh, acknowledges that building shared understanding requires continuity.

Treating interaction as iterative development. Translation emerges through refinement, not perfect first passes. The first exchange establishes direction. Subsequent rounds develop shared ground.

What AI Systems Might Consider

Could systems adapt to make this collaboration more natural in exploratory contexts?

Acknowledging uncertainty rather than proceeding on assumption. When a request supports multiple interpretations, saying “I could approach this as A or B—which serves your purpose?” seems more useful than generating content based on a guess. AI already does this inconsistently—sometimes asking, sometimes assuming. Making this more reliable would help users participate effectively in building shared ground.

Deepening learning from corrections within conversation. A useful comparison puts this in perspective: search engines had certain structural advantages—billions of queries, clear success metrics, well-defined matching tasks. AI’s challenge in exploratory contexts is harder: infinite possible requests, ambiguous success, generating new understanding rather than retrieving existing content.

But that doesn’t mean improvement is impossible. Many systems already demonstrate capacity for learning within conversation—remembering corrections, adapting to effective phrasing.

The question is whether that capacity could deepen. Developing a personal repository of effective patterns for each user—learning not just from billions of interactions but from specific ways individuals think and work—could be transformative. It would reduce interpretive gaps by building shared context collaboratively rather than expecting users to somehow know in advance what will be understood correctly.

Maintaining consistency with established agreements. At times, definitions or approaches settled earlier will quietly shift in subsequent responses unless users actively monitor for drift. When understanding has been built through exchange, it should persist unless both parties explicitly reconsider it.

Making interpretive frameworks visible and collaborative. While working on TaleDecode, a research project exploring how AI handles literary fairy tales, I noticed a pattern. Asked to adapt stories, AI would default to applying learned patterns from popular narratives. The result: stories became generic, lost distinctive details, felt shallow. It was as if the system prioritized familiar templates over examining what made each story unique.

This points to something broader. AI applies frameworks because interpretation requires a lens. But those frameworks often operate invisibly. Users might not even realize the mismatch—they just notice the output feels generic or misses something, without understanding why.

What if AI made framework choices collaborative? For the story example: “I could analyze this by looking for common patterns across similar narratives, or I could focus on what makes this specific story distinctive—which approach serves your purpose?” Making the lens choice explicit rather than invisibly shaping everything downstream.

Building Rather Than Deploying Understanding

This distinction between codified and exploratory domains matters because it clarifies what’s actually being asked in different contexts. When working with established technical vocabulary—photography, coding, design systems—precision through shared language makes sense.

But when work involves thinking through strategy, conducting research across ambiguous territory, analyzing complex material, or exploring ideas without predetermined frameworks—tasks where context must be built rather than deployed—the dynamic is fundamentally different. Expecting “perfect prompts” in those contexts misunderstands the collaboration’s nature.

The goal isn’t flawless initial specification. It’s constructing enough shared understanding that meaning can develop productively across the gap.

When systems begin participating more actively in that construction—acknowledging uncertainty, learning from specific ways individuals work, maintaining established context, making frameworks visible—we might discover the challenge was never really about prompting technique. It was about recognizing that meaningful collaboration, whether between languages, cultures, or different forms of intelligence, requires that both parties be willing to do the work of building understanding together. Not through commands and perfect syntax, but through patient, iterative construction of shared ground where none existed before.