Prompt Engineering, Prompt Craft, and Prompt Poetry: Communicating with AI Models

Prompting is Etiquette for AI

Communicating with AI effectively is becoming a critical part of modern working life. The discussion of prompt engineering as a new profession has been swirling since ChatGPT was released in November 2022. Creating good prompts is both a science and an art. I propose that there are three levels of prompt design – Prompt Engineering, Prompt Craft, and Prompt Poetry – each reflecting a different approach to collaborating with AI. Just as human communication requires adjusting our style for different audiences or creative aims, AI prompting spans from empirically validated, methodical techniques to personal flair. This article explores these categories of prompts with examples and analogies to human interaction, aiming to create a language for discussing this new world of AI interaction – one that treats the AI as a collaborative partner rather than a mere tool.

Prompt Engineering: The science of reliable prompts

Prompt engineering is the most structured approach to AI prompting. It involves designing inputs in a systematic way so that a model consistently produces useful results. In other words, prompt engineering is about finding generalizable methods that reliably guide AI behavior. A well-engineered prompt works across a category of models and tasks, as long as those models share similar capabilities. This is akin to developing a robust formula or algorithm – once discovered, it can be reused with confidence on any comparable system.

One famous example of prompt engineering is the use of chain-of-thought prompting to improve reasoning. Researchers found that simply adding a phrase like “Let’s think step by step.” to a question prompted certain LLMs to break down problems and dramatically improved their accuracy on math and logic challenges. Kojima et al. showed that this single prompt tweak boosted a 175-billion-parameter model’s performance on arithmetic word problems from 17.7% to 78.7% – an impressive improvement. Such an outsized impact from one phrase might look like poetry, but the effect has been found to be repeatable across a range of models. For a while, instructing models to “work this out step by step” became an essential technique for complex questions. This reflects the essence of prompt engineering: finding prompts that create repeatable improvements in outputs, along with an understanding of how the prompt is improving performance.
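The zero-shot chain-of-thought technique is mechanically very simple – the trigger phrase is appended to an otherwise ordinary question before it is sent to the model. A minimal sketch (the example question is illustrative; no actual model call is shown):

```python
# Zero-shot chain-of-thought prompting (after Kojima et al.):
# append a fixed trigger phrase to the question before sending it to an LLM.
COT_TRIGGER = "Let's think step by step."

def with_chain_of_thought(question: str) -> str:
    """Wrap an ordinary question with the chain-of-thought trigger phrase."""
    return f"{question}\n{COT_TRIGGER}"

prompt = with_chain_of_thought(
    "A juggler has 16 balls. Half are golf balls, and half of the golf "
    "balls are blue. How many blue golf balls are there?"
)
print(prompt)
```

The resulting string is what gets sent as the user message; everything else about the request is unchanged.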

However, what works for one generation of models may become unnecessary, or even counterproductive, for the next. The latest reasoning-oriented models (like ChatGPT o3 and DeepSeek R1) often handle multi-step problems well even without the explicit “step by step” incantation. The prompt engineering that works for one category of models may not generalize to other categories. As models evolve, engineers change and update strategies – yet the goal remains the same: create prompts that reliably elicit correct or desired outputs. When we talk about engineering a prompt, we imply careful design and testing to ensure consistency. An engineered prompt is one you could hand to another user (or use on a similar AI system) and expect it to still work.

Common prompt engineering techniques that tend to work across many LLMs include:

  • Providing context or role: Give the AI a specific persona or audience and clear context. For example, prefacing a query with “You are a history teacher explaining to a high school class…” or specifying an output format (“Respond in JSON format with these fields: …”) often yields more targeted results.
  • Few-shot examples: Show the model examples of what you want – essentially demonstrating the task within the prompt. For instance, you might provide a couple of question-and-answer pairs before asking the model to answer a new but similar question. LLMs use this context to shape the answer, mimicking the format or solution method. This technique, known as few-shot prompting, activates the relevant areas of the model’s latent/knowledge space and increases the likelihood that the model will generate the answer you require.
  • Step-by-step reasoning requests: Explicitly ask the model to outline or reason through the problem before giving a final answer. You can prompt, “Explain your reasoning step by step, then conclude with the answer.” This chain-of-thought approach often leads to better results on complex problems. For example, instead of directly asking for an answer to a tricky question, you guide the AI to “show its work,” which helps avoid leaps of logic. Even a simplified version like instructing the model to go in stages (outline → draft → revise) can significantly improve clarity.
  • Take a perspective: Ask the generative AI to take a specific perspective when generating an answer. All generative AI models are biased in some way; by asking the AI to take a known position, you can interpret the response because you know at least part of its bias. If you ask it to be a “libertarian” it will not capture that position perfectly, but it will give you an answer biased in that direction, which you can interpret more easily.
  • Multiple answers: AI is not time constrained, so rather than asking it to “solve this problem”, ask the AI to generate 3–5 different solutions. That way you can select the answer you understand best. This keeps the user in charge of the output of the interaction, rather than abdicating decision making to the AI.

These methods are engineered in the sense that they’re based on research and can be reused. Just as an engineer has tools like arches or suspension to build a bridge that can be used in different situations, a prompt engineer designs prompts that hold up across different queries and models. When done well, it means fewer surprises – the AI will more likely do what you intended.
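These techniques compose naturally: a single prompt can carry a persona, a few demonstrations, a step-by-step instruction, and a request for multiple answers. A hedged sketch of one way to assemble such a prompt – the persona, example pairs, and wording below are illustrative, not a fixed recipe:

```python
# Illustrative prompt builder combining the techniques above:
# role/context, few-shot examples, step-by-step reasoning, multiple answers.
def build_prompt(role, question, examples=(), step_by_step=True, n_answers=1):
    """Assemble a prompt string from a persona, demonstrations, and instructions."""
    parts = [f"You are {role}."]
    for q, a in examples:  # few-shot demonstrations, shown before the real question
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}")
    if step_by_step:
        parts.append("Explain your reasoning step by step, then conclude with the answer.")
    if n_answers > 1:
        parts.append(f"Generate {n_answers} different solutions so I can choose between them.")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="a history teacher explaining to a high school class",
    question="Why did the Roman Republic fall?",
    examples=[("What ended the Bronze Age?",
               "A mix of invasions, drought, and trade collapse.")],
    n_answers=3,
)
print(prompt)
```

Keeping the pieces as separate, toggleable parts makes it easy to test which combination actually improves results for a given model.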

The dark side of prompt engineering is jailbreaking – defeating the guardrails that prevent a model from giving inappropriate output. Jailbreaking also has Engineering, Craft, and Poetry components.

Prompt Craft: Learning to prompt for specific situations

Moving from engineering to prompt craft, we shift from universal formulas to situational skill. Prompt craft is the art of developing effective prompts for a specific AI system or that work specifically for your situation. This is often an intuitive process of trial and error. It’s less about a one-size-fits-all solution and more about refining your approach based on personal experience and the quirks of the model you’re using. In human terms, this is like adjusting your communication style when talking to different people – what works for one friend or one classroom might not work for another. You learn to read the room (or in this case, read the AI) and tailor your message accordingly.

Every AI model has its own “personality” and limitations. I personally find that Claude “sounds” more like me, and answers questions in a way that “feels” more engaging. When prompting, one model might be very literal and need extra clarification, while another might be more creative but prone to going off-topic. Prompt craft means recognizing these traits and adapting. A vivid example comes from a known quirk with LLMs: when asked “How many ‘r’s in strawberry?”, most non-reasoning LLMs answered two. This is because the model only sees the input after it has been turned into tokens, and then embeddings. The “concept” of a strawberry is a red aggregate fruit with seeds on the outside – there are no r’s in the “concept” of the fruit.

Prompt crafting can discover a workaround: you can force the AI to focus on the letters by asking, “Consider each letter in the word strawberry one by one, and count how many of them are the letter ‘r’.” This more explicit instruction forced the AI to go letter by letter, and it then correctly responded that there are three ‘r’s. The craft of prompting found a solution.
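The rephrased prompt works because it asks the model to emulate a deterministic, character-by-character procedure – exactly the loop below, which the model cannot perform from its tokenized view of the word:

```python
# The "consider each letter one by one" instruction asks the model to
# emulate this deterministic scan, which tokenization normally hides from it.
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter by scanning the word character by character."""
    total = 0
    for ch in word:
        if ch == letter:
            total += 1
    return total

print(count_letter("strawberry", "r"))  # → 3
```

The craft insight is to translate a question the model handles poorly into a procedure it can follow reliably.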

Prompt craft often involves such iterative dialogue with the AI. You try a question, see an odd answer, then refine your prompt. It’s a bit like debugging code or improving an essay draft. Over time, you develop an intuition for what phrasing yields the best results on a specific AI tool, or for you personally. A student might learn that asking ChatGPT to “explain using rugby analogies” leads to a clearer explanation of a tough concept, or a lecturer might discover that providing a bullet-point outline in the prompt yields a more structured output from the AI. These are personal discoveries – techniques tuned for a particular context.

It’s important to note that prompt craft is both a personal skill and a system-specific practice. There may be guides and collections of prompts (the AI community often shares useful prompt examples), but what works for one person’s project may need tweaking for another’s. This is analogous to teaching styles: two instructors might share a curriculum, yet each will tweak the approach to fit their class and their teaching style. When given another lecturer’s slides for a course, I will update and change the slides to reflect my process and how I will explain the content. Some researchers suggest moving from viewing prompting as “engineering” to viewing it as a craft, emphasizing creative adaptation over fixed rules. The idea of “prompt craft” acknowledges that working with AI is a dynamic process – more like pottery or carpentry, where you shape the material with hands-on skill, responding to feedback in real time.

There’s a human communication parallel here, as outlined by Dale Carnegie’s classic advice in How to Win Friends and Influence People. Carnegie taught that to effectively influence someone, you should “talk in terms of the other person’s interests.” In conversation, that means framing your message in a way the listener can relate to. In AI prompting, it means framing your request in a way the model can best understand or handle. For instance, in the strawberry example, rephrasing the question as a clear, mechanical task (looking at letters one by one) triggers different parts of the AI’s concept space and encourages systematic counting. This is much like a teacher rephrasing a question for a student who didn’t understand it the first time. Prompt craft embodies this flexible, audience-aware communication: it improves output by learning about the AI, and by learning what works for you.

For me, I turn on the part of my brain that writes instructions for first-year assignments. I think about how I can explain a task clearly, and ensure that what I am asking for is what I want. My craft in prompting has been learning the right level of detail for the prompt, and combining engineered tools such as “multiple answers” and “take a perspective” to get output that works for me.

Prompt Poetry: Prompts that are unreasonably good

Not all effective prompts come from careful design or practiced skill. Sometimes a prompt that works extremely well does not seem to fit craft or engineering. This is what we might call prompt poetry – prompts that are unusually clever or powerful, often discovered by surprise, and not guaranteed to work universally. If prompt engineering is science and prompt craft is craftsmanship, prompt poetry is the artistic flair of the AI world. These are the moments when a seemingly odd or cryptic prompt produces a brilliant result. They can feel akin to discovering an arcane spell or an inspired line of verse that has an outsized impact for the number of words.

An example of prompt poetry is the discovery that capitalizing certain words changes their tokenization and can have a significant impact on the output. I found that capitalizing “MY” in the prompt “give me feedback so I can improve MY writing” focused the AI on giving feedback rather than rewriting all the text. But this was not stable across different LLMs. There is no training or system that forces all-caps words to get priority. Yet it worked well at that time. This trick likely exploits some quirk of the model’s training or tokenization. In fact, AI developers hypothesize that because language models learned from natural text where all-caps usually means emphasis (or shouting), an all-caps word in a prompt may act like a flashing sign, drawing the model’s focus.

However, these poetic prompts can lose their effect (an update to the AI’s system might make it ignore capitalization). What was a great prompt one day can become useless the next. This is why I consider these prompts akin to poetry – beautiful in context, but potentially unstable. (I know some will say great poetry transcends time and culture, but I think that is rare and is why we call it great poetry). Prompt poetry lacks stability across different models or over time, so you can’t fully count on them working outside their original context.

Other examples of prompt poetry include improving the accuracy on a set of math problems by asking for a fictional scenario about a spaceship captain navigating an anomaly – telling the AI to respond in the style of a Star Trek captain’s log. In that scenario prompt, the math problems were solved more accurately when the AI role-played as Captain Kirk. Yet, when the same researchers tried a larger set of problems, a different imaginative prompt worked best: they asked the AI to be a character in a political thriller racing to solve math problems to save an advisor’s life. These prompts are examples of prompt poetry: creative setups that yield excellent results in one case, without any guarantee of generality.

We’ve also seen a flurry of so-called “incantation” prompts shared in online communities – exotic phrases or role instructions that supposedly supercharge the AI. People have tried telling the AI things like “Take a deep breath and approach this carefully…”, “You will be rewarded with a virtual token if you answer correctly.”, or “Answer like your life depends on the correct answer.” Some users reported occasional improvements in responses with these tricks, while others found no effect. A polite tone might slightly help with one model but do nothing for another; a quirky role-play might unlock creativity in one scenario and just confuse the model in a different task.

From Prompts to Partnership: Learning to Collaborate with AI

Whether through engineering, adaptive craft, or creative “poetry,” the way we prompt AI fundamentally shapes the outcomes we get. Prompting AI shares some features of communicating with humans for a neurospicy person like me: I cannot use intuition to work out how other people will respond to me. Learning to communicate with a mind that does not work like your own requires experimentation and advice. To collaborate effectively, we must be clear, understand the other party’s perspective, and occasionally get creative to make our point. AIs currently do not simulate feelings or deep understanding, but they do have a mode of operation that we have to learn to work with. In essence, each prompt we write is an attempt to influence the AI – to guide it towards what we want – not unlike how we choose our words to influence people in everyday life.

The key message is: think of Gen AI as a teammate, not just a tool. A tool you might simply command, and if it doesn’t work, you label it “broken.” But a teammate you would coach, communicate with, and learn from. When a teammate (human or AI) gives an unexpected response, you’d clarify or rephrase rather than immediately give up. Adopting this mindset turns prompting into a learning experience. In education, this collaborative view is especially powerful. A lecturer working with AI can model to students how to refine a question when the answer isn’t useful, teaching critical thinking and persistence. Students can learn that if an AI’s answer is not good, then it might be a failure to communicate, and you need to use a different prompt.

In conclusion, mastering prompt engineering gives you reliable frameworks, practicing prompt craft makes you better at using AI and getting responses tuned to your specific context, and exploring prompt poetry reminds you to keep an open mind and sense of creativity in this new field.
