Prompt Playbook: Prompting Fundamentals PART 2

Hey Prompt Entrepreneur,

If you've ever looked at a prompt engineering guide, you'll quickly find yourself in a world of strange terminology – zero-shot, few-shot, chain-of-thought, retrieval-augmented generation... it can get complicated fast, and maybe even a little intimidating.

In the last Part we talked about how prompt engineering is, at its core, about communicating.

All these layers of technical terminology obscure that simple fact.

You're probably already using many of these techniques intuitively – you just don't know the formal names for them!

I’ll describe ways of prompting and you’ll probably have a few “well, duh, I’m already doing that” moments. And that’s great!

We're going to explore this terminology for two important reasons: first, to expand our toolkit of options, and second, so you know what someone is talking about when they use these terms in articles or discussions.

But always remember that beneath all the technical jargon, it's still fundamentally about clear communication with AI systems!

Let’s get started:

Summary

Prompting Terminology

  • Widening our prompting toolkit with established techniques

  • Zero-shot, one-shot, and few-shot prompting explained

  • The power of examples (both positive and negative)

  • Alternative ways to provide context beyond the prompt itself

Expanding Your Prompting Vocabulary

Previously we covered the RISEN™ framework as a solid foundation for talking to AI. Today, we're widening out our approach.

You might already be using some of these approaches intuitively, but understanding their formal patterns and when to apply them gives you greater control over your AI interactions.

It’s sort of like learning a foreign language without necessarily knowing the finer points of grammar. That’s fine - you can talk to people no problem. But layering in grammar helps you understand why things work the way they do.

Most importantly, no single technique is universally "better" than others – each has specific strengths and ideal use cases. This is a recurring theme in this Playbook!

Think of these techniques as different tools in your toolkit, each designed for particular situations:

  • Need quick results for a straightforward task? Zero-shot prompting might be perfect.

  • Working on a highly structured output with specific formatting? Few-shot prompting with examples is likely your best bet.

  • Need to expand to a full knowledge base? File uploads, Projects and RAG are your friends.

The key is matching the technique to your objective – and often, combining multiple approaches for optimal results. Let’s review what we’re working with.

The "Shot" Spectrum: From Zero to Few

One of the most fundamental distinctions in prompting is how many examples you provide before asking for a specific output. This is commonly referred to as "shot" prompting, ranging from zero-shot (no examples) to few-shot (multiple examples).

Zero-Shot Prompting: No Examples Needed

Zero-shot prompting is the simplest approach – you provide instructions without any examples, asking the AI to perform a task it hasn't explicitly been shown how to do in your prompt.

For example: "Write a short poem about artificial intelligence."

Zero-shot works remarkably well for straightforward tasks, especially with more powerful models. It's quick and requires minimal setup. Most of your ad hoc quick queries with AI will be zero-shot.

When to use zero-shot:

  • For simple, common tasks

  • When you're not concerned about specific formatting

  • For creative or open-ended requests

  • When you want to test the AI's default approach

  • When you're in a hurry and need a quick response
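
If you ever run this kind of quick query through the API rather than the webapp, zero-shot is as bare-bones as it gets: one instruction, no examples. Here's a minimal sketch using the OpenAI Python SDK purely for illustration - the model name and setup are assumptions, not recommendations.

from openai import OpenAI  # assumes the v1+ SDK and an OPENAI_API_KEY in your environment

client = OpenAI()

# Zero-shot: just the instruction, no examples to imitate.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model will do
    messages=[
        {"role": "user", "content": "Write a short poem about artificial intelligence."}
    ],
)

print(response.choices[0].message.content)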

One-Shot Prompting: Learning from a Single Example

One-shot prompting involves giving the AI a single example before asking it to perform a similar task:

Here's an example of a product description:
"The Echo Dot is a voice-controlled smart speaker with Alexa. It has a sleek, compact design that fits anywhere in your home. Ask Alexa to play music, answer questions, read the news, check the weather, set alarms, control compatible smart home devices, and more."

Now, write a similar product description for the Philips Hue Smart Bulb.

This approach significantly improves consistency and helps the AI understand exactly what you're looking for in terms of style, tone, and format.

When to use one-shot:

  • When you have a specific format or style in mind

  • For tasks where consistency with previous work matters

  • When zero-shot responses aren't quite hitting the mark

  • When you want to guide the AI without extensive examples

One-shot is still pretty fast, especially if you're just copy-pasting in an example.

As an aside, it used to be recommended that you place examples first in ChatGPT and last in Claude. And I'm certain the different models all have different recommendations. Increasingly, though, it matters less and less - the models can work out what you're aiming for.

That said: try the example both before and after your instruction (in new chats) and see which works better. Then use that placement moving forward. Always test.
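
If you want to make that test systematic, a quick API sketch like the one below runs the same one-shot prompt twice - example first, then example last - so you can compare the outputs side by side. The SDK, model name and wording are illustrative assumptions.

from openai import OpenAI

client = OpenAI()

EXAMPLE = (
    "Here's an example of a product description:\n"
    '"The Echo Dot is a voice-controlled smart speaker with Alexa. It has a sleek, '
    'compact design that fits anywhere in your home."'
)
INSTRUCTION = "Now, write a similar product description for the Philips Hue Smart Bulb."

def run(prompt: str) -> str:
    # One fresh single-turn request per variant - the API equivalent of a new chat.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print("EXAMPLE FIRST:\n", run(f"{EXAMPLE}\n\n{INSTRUCTION}"))
print("\nEXAMPLE LAST:\n", run(f"{INSTRUCTION}\n\n{EXAMPLE}"))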

Few-Shot Prompting: Patterns from Multiple Examples

Few-shot prompting takes this concept further by providing multiple examples:

Here are three examples of professional email responses to customer inquiries:

Customer: "When will my order arrive?"
Response: "Thank you for your inquiry about your order status. Based on our records, your package is scheduled to arrive on [date]. You can track your shipment using the tracking number provided in your confirmation email. Please let us know if you need any further assistance."

Customer: "I received the wrong item in my order."
Response: "I'm sorry to hear that you received the incorrect item. We sincerely apologise for this inconvenience. Please provide your order number so we can resolve this issue promptly. We can arrange for a return of the incorrect item and ensure the correct item is sent to you as soon as possible."

Customer: "Do you offer refunds?"
Response: "Yes, we do offer refunds for items returned within 30 days of purchase in their original condition. Our full refund policy can be found on our website under 'Customer Service.' If you'd like to initiate a return, please provide your order number, and we'll guide you through the process."

Customer: "Can I change my shipping address after placing my order?"
Response:

and so on and so on…
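
As a side note, if you work through the API you can also hand over the same few-shot pattern as prior turns in the conversation - each example becomes a user/assistant exchange, and your real query goes last. A minimal sketch (the SDK, model name and abbreviated replies are assumptions for illustration):

from openai import OpenAI

client = OpenAI()

# Each worked example becomes a prior exchange the model can imitate.
messages = [
    {"role": "system", "content": "You write professional email responses to customer inquiries."},
    {"role": "user", "content": "When will my order arrive?"},
    {"role": "assistant", "content": "Thank you for your inquiry about your order status. ..."},
    {"role": "user", "content": "I received the wrong item in my order."},
    {"role": "assistant", "content": "I'm sorry to hear that you received the incorrect item. ..."},
    {"role": "user", "content": "Do you offer refunds?"},
    {"role": "assistant", "content": "Yes, we do offer refunds for items returned within 30 days. ..."},
    # The real query goes last - the model continues the established pattern.
    {"role": "user", "content": "Can I change my shipping address after placing my order?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)  # model is an assumption
print(response.choices[0].message.content)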

This technique is powerful for establishing complex patterns. With multiple examples, the AI can identify consistent elements across variations, leading to more nuanced understanding of your expectations.

When to use few-shot:

  • For complex tasks with subtle patterns

  • When consistency is critical

  • When working with formats that have specific structural requirements

  • When the task involves multiple dimensions (tone, format, reasoning approach)

  • For specialised knowledge domains

Few-shot is more advanced and requires more work to set up. Also, increasingly we don't put all the examples in the prompt itself but instead supply them "externally". I'll talk more about how later.

Examples: Quality, Variety, and Counterexamples

While the number of examples matters, their quality and type matter even more. A couple of pointers:

Quality Over Quantity

Boring but important. A single well-chosen example often outperforms multiple mediocre ones. Your examples should clearly demonstrate the exact pattern you want followed, with all the elements you consider important.

Hint: you can use AI to help you brainstorm good examples.

Include Variety

If you provide multiple examples that are too similar, the AI might fixate on irrelevant patterns. Include variety to help the model understand which aspects should remain consistent and which can vary.

For instance, if creating customer service responses, vary the scenarios, customer tones, and specific solutions while maintaining a consistent professional tone and structure.

Use Negative Examples

One of the most powerful but still underused strategies is providing negative examples: specifically, showing what you DON'T want alongside what you do want.

GOOD EXAMPLE:
The quarterly report shows a 15% increase in revenue, primarily driven by our new product line which captured significant market share in the enterprise segment.

BAD EXAMPLE:
Revenue went up by 15% this quarter because our new products sold really well to big companies.

Please write a professional summary of our customer satisfaction survey results in the style of the GOOD example, not the BAD example.

Negative examples help define boundaries and clarify expectations, often resolving ambiguities that positive examples alone might miss. They're particularly valuable when you've received outputs that have specific flaws you want to avoid.

If you get some bad results back from the AI then you can feed them back as bad examples. Even better - explain why they are bad.
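
In practice that can be as simple as pasting the rejected output back in as a BAD example with a one-line explanation of what's wrong. A rough sketch of how you might build that follow-up message (the variable and wording are placeholders):

bad_output = "..."  # paste the response you didn't like here

followup = f"""Here is your previous attempt:

BAD EXAMPLE:
{bad_output}

This is too informal and buries the key figures. Rewrite it in the style of the
GOOD example above: formal tone, numbers stated up front."""

# Send `followup` as your next message in the same chat (or via the API).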

System, Contextual, and Role Prompting

The "shot" techniques we've discussed are fundamentally about providing examples and context. But sometimes, including all necessary context directly in the prompt becomes impractical due to length limitations or complexity.

Beyond the "shot" techniques, there are several complementary approaches that help establish the foundation for your AI interactions. Increasingly you’ll use these tools rather than trying to put ALL the context into the prompt itself.

Think of these techniques as “external” to the prompt. We can have an external repository of examples and context that works alongside our prompts. These will (for the most part) replace few-shot prompting.

System Prompting

Many AI interfaces distinguish between system prompts and user prompts. System prompts define how the AI should operate throughout your entire conversation and even across conversations, while user prompts contain your specific requests.

System prompting involves establishing parameters that persist across the entire conversation:

For example: "You are an AI assistant helping with product design. Always approach questions from first principles, focus on user needs, and suggest specific, actionable next steps. Challenge assumptions when they seem ungrounded. Use simple language without jargon."

The main value here is allowing you to build context into all your interactions without having to repeat yourself.
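
In the API this split is explicit: the system message carries the persistent behaviour, and each user message carries the specific request. A minimal sketch (SDK and model name are assumptions):

from openai import OpenAI

client = OpenAI()

system_prompt = (
    "You are an AI assistant helping with product design. Always approach questions "
    "from first principles, focus on user needs, and suggest specific, actionable next steps. "
    "Challenge assumptions when they seem ungrounded. Use simple language without jargon."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[
        {"role": "system", "content": system_prompt},  # persists across the whole conversation
        {"role": "user", "content": "How should we prioritise features for our onboarding flow?"},
    ],
)
print(response.choices[0].message.content)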

External Reference Methods

All of these approaches serve a similar purpose: giving the AI more information to work with beyond what fits in a standard prompt. You can thus give many more examples than if you added them to the prompt itself.

File Upload lets you reference specific documents without copying their content into the prompt. When you upload a file, the AI can analyse it and respond to questions about it, making this ideal for working with specific documents, data sets, or images. Certain models have much larger context windows (think: memory), allowing you to upload more and larger files - Google's Gemini models in particular excel here.

Projects and Knowledge Bases create persistent collections of information the AI can access across multiple conversations. These might include company documentation, previous conversations, or specialised resources that inform the AI's responses without needing to be included in each prompt. Both ChatGPT and Claude offer Projects directly in their web and mobile apps.

Retrieval-Augmented Generation (RAG) takes this a step further by dynamically fetching relevant information from a knowledge base when needed. Your content is stored in a vector database, and the system pulls only the most relevant information for each query, effectively extending the AI's knowledge with your specific information.

This is more for when you're working with the API rather than the webapp. It's a more advanced technique, for sure.
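
To make the idea concrete, here's a stripped-down sketch of the retrieve-then-generate loop: embed your documents once, embed each incoming query, pull the closest chunks, and prepend them to the prompt. It uses the OpenAI SDK and numpy purely for illustration - a real system would use a proper vector database, and the model names are assumptions.

from openai import OpenAI
import numpy as np

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Standard shipping takes 3-5 business days; express takes 1-2.",
    "Premium members get early access to all new courses.",
]

def embed(texts):
    # One embedding vector per input string.
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)  # do this once and store it (the "vector database")

def answer(question, top_k=2):
    query_vector = embed([question])[0]
    # Cosine similarity between the query and every stored document.
    scores = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do I have to return an item?"))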

These approaches all address similar needs:

  • Working with information too extensive to fit in a prompt

  • Maintaining consistent access to proprietary information

  • Keeping references up-to-date without rewriting prompts (simply add to or change the uploaded information)

The goal remains the same as with our "shot" techniques: provide the AI with the context it needs to generate accurate, relevant, and helpful responses. The difference is merely in how and where that context is stored and accessed.

Which one you use depends entirely on how many examples and how much context the task at hand needs.

What's Next?

We've expanded your prompting vocabulary to include a range of established techniques – from zero-shot to few-shot and then how we can manage larger bodies of examples. Next, we'll explore how to control AI outputs more precisely, including the technical aspects of parameters like temperature and token management.

Keep Prompting,

Kyle

When you are ready

AI Entrepreneurship programmes to get you started in AI:

70+ AI Business Courses
✓ Instantly unlock 70+ AI Business courses ✓ Get FUTURE courses for Free ✓ Kyle’s personal Prompt Library ✓ AI Business Starter Pack Course ✓ AI Niche Navigator Course Get Premium 

AI Workshop Kit
Deliver AI Workshops and Presentations to Businesses with my Field Tested AI Workshop Kit  Learn More

AI Authority Accelerator 
Do you want to become THE trusted AI Voice in your industry in 30-days?  Learn More

AI Automation Accelerator
Do you want to build your first AI Automation product in 30-days?  Enrol Now

Anything else? Hit reply to this email and let’s chat.

If you feel this — learning how to use AI in entrepreneurship and work — is not for you → Unsubscribe here.