Prompt Playbook: Prompting Fundamentals PART 3
Hey Prompt Entrepreneur,
During our first AI Automation Accelerator, I made a silly mistake.
I try to keep my educational material as simple as possible, assuming no prior knowledge wherever I can.
However, early on in the Accelerator I jumped straight into discussing the ChatGPT API.
I was assuming that people knew what an API was (they didn’t) and how it was different to “normal” ChatGPT.
Oops. Mea culpa.
If you are currently thinking “what the hell is an API” then you’re in luck! We’re covering that today and why it matters.
For me, the distinction between the ChatGPT app and the API seemed obvious – I'd been building with APIs for years. But the confused messages from several participants quickly made me realise my error. What was second nature to me was completely new territory for many talented entrepreneurs.
That moment was eye-opening. I realised there's a fundamental divide in the AI world – between casual users of web interfaces and those building with APIs. And knowing which to use when is super important for how we build and how we prompt.
This fundamental difference changes everything about how we approach prompting. When we're just chatting with AI through a web interface, we can fudge our way through prompts and iterate until we get decent outputs. But when we're building with the API, we need solid, reliable prompts that work consistently – much closer to the "engineering" part of prompt engineering.
But I'm getting ahead of myself! First, let's look at what the app vs API distinction actually means!
Use AI as Your Personal Assistant
Ready to save precious time and let AI do the heavy lifting?
Save time and simplify your unique workflow with HubSpot’s highly anticipated AI Playbook—your guide to smarter processes and effortless productivity.
Let’s get started:
Summary
From App to API
Moving beyond web interfaces to API integration
API vs. App: When to graduate from chat interfaces to direct integration
No-code options for using AI APIs without programming
Temperature: Controlling creativity vs. consistency
Token length management and context windows
From App to API: The Next Level of AI Usage
So far in our series, we've primarily focused on techniques you can use in the standard app versions of AI systems – web applications like ChatGPT and Claude, or their mobile app equivalents.
These apps are what we call GUIs (pronounced "gooey", which is endlessly funny). GUI stands for Graphical User Interface. It's a term from the age of computing when having a graphical interface was actually novel. Before that, it was all text-based (think DOS if you are my age or above!).
Instead of GUI we can just use the catch-all word "app" as it's close enough. This includes web apps (via a browser like Chrome, going to a website like chatgpt.com), phone apps like the ChatGPT iPhone app, or even desktop apps like the ChatGPT you can install on your computer.
Either way these interfaces are where most people start their AI journey, and for many users, they're perfectly adequate.
The Limitations of App-Based AI
Whilst these apps are great starting points, they have significant limitations that become apparent when you start using AI more seriously:
Manual Intervention: Every interaction requires someone to type prompts and copy-paste results, making automation impossible.
Inconsistent Parameters: Settings may change between sessions or updates, affecting output consistency.
Limited Integration: These apps mainly exist as islands, separate from your existing systems and workflows.
Restricted Customisation: You're limited to the features and controls the provider chooses to expose in the interface.
As your AI usage evolves from casual experimentation to core business processes, these limitations become real bottlenecks. When we want to get more sophisticated, therefore, we need to move away from the apps and towards the API.
Understanding APIs: Direct Access to AI Power
Oh great. Another acronym!
API stands for Application Programming Interface. Software engineers aren’t great at naming things so we’ll have to excuse these complex names!
APIs are basically a way for different software systems to communicate with each other. That’s it.
In the context of AI models, an API allows your software to connect directly to the AI service, send prompts and receive answers automatically.
No more fussing about in the app interface writing prompts. No more manual entry. Everything gets sent back and forth behind the scenes without you. VERY different. And essential when you are building automations and software tools that need to work when you aren’t around.
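To make that concrete, here's a minimal sketch of what a direct API call looks like using OpenAI's Python library. (The model name is just an example, and you'd need your own API key set in your environment – treat this as an illustration, not a recipe.)

```python
from openai import OpenAI

# The client reads your API key from the OPENAI_API_KEY environment variable
client = OpenAI()

# Send a prompt straight to the model – no chat interface involved
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; swap in whichever model you use
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this customer email in one sentence: ..."},
    ],
)

print(response.choices[0].message.content)
```

That's the whole loop: your code sends the prompt, the model sends back text, and no human needs to be at the keyboard.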
Let’s hammer this home with the key differences:
Automation: APIs allow your systems to interact with AI automatically, without human intervention.
Consistency: You can set and maintain exact parameters for every request, ensuring consistent outputs.
Integration: APIs let you incorporate AI capabilities directly into your existing software and workflows.
Volume: You can process hundreds or thousands of requests efficiently.
Cost: API requests are fractions of a penny. You pay per usage rather than a flat $20/month.
Customisation: APIs offer more control parameters and options than are typically exposed in apps.
Using an AI app is like ordering food at a restaurant counter – you're limited to what's on the menu board and how the staff is trained to serve it. Using an API is like having direct access to the kitchen – you can customise ingredients, cooking techniques, and presentation to your exact specifications.
This does not mean that the API is always better than using the app!! There are pros and cons here. Ordering via the menu is much easier and you know what you’ll get. If you head back into the kitchen then yes, absolutely you have more control! BUT you are also more likely to make a huge mess. 🤣
A Shift in the Prompting Paradigm
This transition from GUI to API fundamentally changes how we approach prompting. Here's why:
In the app, prompt engineering is often exploratory and iterative:
You can try different approaches in real-time
Immediate feedback lets you adjust on the fly
Inconsistencies might be annoying but aren't catastrophic
You can clarify or refine through conversation
Basically you can fudge your way through with a combination of zero-shot, one-shot and few-shot prompting as we described before, mixed with providing context via file uploads and Projects.
We can get it done. Even if it’s a bit messy! The app versions give us this flexibility.
In API environments, prompt engineering becomes much more rigorous:
Your prompts need to work reliably without human intervention
Consistency across thousands of interactions becomes critical
Failures can affect automated systems and end users at high volume
Edge cases need to be handled gracefully without human supervision
This is where prompt engineering truly becomes engineering. You're no longer casually crafting messages – you're designing robust systems that need to work reliably at scale.
Your prompts need to work without you there nudging them along and fixing them on the fly. They need to work whilst you are sleeping. So we need to get more sophisticated. We’ll touch on this shortly, after I explain how exactly we use the API.
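To show what that rigour looks like in practice, here's a rough sketch (my own illustrative example in Python, using OpenAI's library – the JSON format and retry logic are assumptions, not a standard): a prompt that demands a strict output format, runs at temperature 0 (more on temperature in a moment), and validates the response before trusting it.

```python
import json
from openai import OpenAI

client = OpenAI()

# A strict system prompt: the model must return machine-readable JSON
SYSTEM_PROMPT = (
    "You are a product-review classifier. "
    'Reply ONLY with JSON in the form {"sentiment": "positive" | "negative" | "neutral"}.'
)

def classify_review(review: str, max_retries: int = 2) -> dict:
    """Classify a review, validating the output and retrying on bad responses."""
    for _ in range(max_retries + 1):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            temperature=0,        # low temperature for consistent outputs
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": review},
            ],
        )
        try:
            # Only trust the output if it parses as valid JSON
            return json.loads(response.choices[0].message.content)
        except json.JSONDecodeError:
            continue  # malformed output: retry rather than crash the workflow
    raise ValueError("Model never returned valid JSON")
```

Notice the difference in mindset: in the app you'd just eyeball a wonky answer and re-prompt; here the code has to catch and handle it for you.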
Two Ways to Access APIs
Once you decide to move beyond GUI interfaces, there are two main approaches to using APIs:
1. No-Code Tools: Platforms like Zapier, Make, and Bubble let you connect to AI APIs without writing code. You simply:
Get an API key (which looks something like "sk-ajhdf172kasdfy2...")
Connect this key to the no-code platform
Build workflows visually using drag-and-drop interfaces
This is perfect for automating straightforward tasks like generating content based on triggers, analysing incoming data, or connecting AI to other services like email or Slack.
This, by the way, is exactly what we do in the AI Automation Accelerator. We show you how to build your first API-based automation. And more than that how to build something you can actually sell.
2. Custom Development: For more complex needs, you can build custom applications that directly integrate with AI APIs. This requires some programming knowledge but offers maximum flexibility. Again, you'll need an API key to authenticate your requests.
Either way, the fundamental concept is the same: you're using an API key to send and receive messages directly to the model, bypassing the chat interface entirely. This direct connection is what enables automation and consistency, allowing you to build actual products with AI.
Temperature: The Creativity Dial
When using the API we have some additional dials we can play with.
One of the most important controls at your disposal is "temperature" – the setting that essentially determines how “creative” or predictable the AI will be.
Temperature controls randomness in the AI's selection process. Without getting into the weeds: when generating text, the AI assigns probabilities to possible next words or tokens. Temperature determines how funky it gets with those probabilities:
Low temperature (0.0-0.3): The AI almost always chooses the most probable next token, resulting in more predictable, consistent outputs. This is like following a recipe exactly as written.
Medium temperature (0.4-0.7): The AI sometimes chooses less probable tokens, introducing moderate variability. This is like a chef who mostly follows the recipe but occasionally adds their own twist.
High temperature (0.8-1.0+): The AI frequently chooses less probable tokens, creating more surprising, creative, and sometimes erratic outputs. This is like experimental cooking—exciting but not always successful!
When using APIs, temperature becomes especially important because you need to deliberately choose the right setting for each use case:
For factual tasks, structured outputs, or code generation, a lower temperature (0.0-0.3) provides consistency and accuracy.
For content creation, marketing copy, or ideas generation, a medium temperature (0.4-0.7) balances creativity with coherence.
For brainstorming, creative writing, or generating diverse alternatives, a higher temperature (0.8+) introduces more variety.
In API settings, you'll often set different temperatures for different endpoint functions within the same application – using low temperatures for data processing and higher temperatures for creative content generation.
How do you know which exact temperature will work best? You can use the above ranges as guidelines but ultimately: testing! When building prompts for repeat use we’ll test results at a range of temperatures and see which works best.
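For instance, here's a minimal sketch of that testing process – looping the same prompt through several temperatures and eyeballing the results (again using OpenAI's Python library; the model name is an example):

```python
from openai import OpenAI

client = OpenAI()

prompt = "Write a one-line tagline for an artisan bakery."

# Run the identical prompt at several temperatures and compare the outputs
for temp in [0.0, 0.4, 0.8]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        temperature=temp,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"temperature={temp}: {response.choices[0].message.content}")
```

At 0.0 you'll see near-identical taglines on every run; at 0.8 they'll wander all over the place. Pick the setting whose outputs match what your use case needs.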
Context Windows and Token Management
The next big consideration when using the API is dealing with context windows.
Every AI system has limits on how much text it can process at once—its "context window." In API settings, understanding and managing these limits becomes even more critical. Much more so than in the apps where we can just start a new chat!
The context window is the AI's working memory—everything it can "see" at once when generating a response. This includes your prompt, any examples, previous messages, and system instructions.
Context windows are measured in tokens. A token is roughly (don't come at me!) 3/4 of a word in English, so a 4,000-token context window can handle about 3,000 words of combined prompt and response. Ish. Sorta. More or less.
In API settings, you need to manage context explicitly:
Cost Considerations: Most API providers charge per token. Larger prompts = higher costs. We could just use the maximum context in each and every prompt but we’ll rack up costs much much faster. So we need to be smart.
Performance Impact: Larger contexts generally mean slower responses and higher computation costs. This makes sense. If every prompt you send includes a whole brand book the AI needs to read first then you’re clogging up the works.
Error Prevention: Exceeding context limits in an API call causes errors that can break automated workflows. Go over this memory and the AI will “forget” parts of the context and your results will get real bad, real soon.
This means that when building with the API we need to actually pay attention to the volume of information we are putting in. How much additional context do we really need to upload? Before with the apps we just chucked everything in and hoped for the best but when building with the API we need to be more strategic.
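One practical way to be strategic is to count tokens before you send anything. Here's a hedged sketch using OpenAI's tiktoken library (the encoding name and limit below are examples – check your model's documentation for the real figures):

```python
import tiktoken

# cl100k_base is the tokeniser used by many recent OpenAI models;
# check your model's docs for the exact encoding it uses
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarise the following customer feedback: ..."
token_count = len(enc.encode(prompt))
print(f"Prompt uses {token_count} tokens")

CONTEXT_LIMIT = 128_000  # example limit – varies by model
if token_count > CONTEXT_LIMIT:
    raise ValueError("Prompt exceeds the model's context window")
```

A few lines of checking like this keeps an automated workflow from silently racking up costs or erroring out mid-run.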
Context Window Sizes Are Growing
The good news is that context windows have expanded dramatically:
When ChatGPT launched in 2022, its window was just 4,000 tokens (about 3,000 words)
Current standard models typically offer ~128,000 tokens
Gemini models have context windows in the millions range.
Experimental systems like Magic.dev's LTM-2-Mini claim context windows of 100 million tokens
These numbers are shifting all the time - so whatever I write here will be out of date in a month or two! For the latest information on context window sizes, check comparison sites like Artificial Analysis (artificialanalysis.ai) that track capabilities across different models.
On a positive note, limits are expanding all the time, so the question of context window management may become moot as we move forward. That said, efficient token usage will likely remain important for cost, performance and environmental reasons. Just because we could use 1M tokens to help us write a tweet does not mean we should!
Playgrounds: The Bridge Between GUI and API
All of this may sound intimidating. Sorry! Here's a nice easy way into this more advanced level of prompt engineering: I strongly recommend spending time in "playground" environments:
OpenAI Playground (platform.openai.com/playground)
Anthropic Console (console.anthropic.com)
Google AI Studio (ai.google.dev)
These interfaces offer the best of both worlds:
The user-friendly experience of apps
The parameter control and visibility of APIs
Playgrounds let you:
Experiment with Parameters: Adjust temperature, context limits, and other settings with immediate feedback. Basically you are given the dials within a nice GUI.
See API Calls: View the exact API code that would generate your results.
Test Prompts: Validate that your prompts work consistently before integrating them into systems.
Compare Models: Try different model versions side-by-side.
Think of playgrounds as training wheels for API usage. They let you experiment and refine your approach before committing to full integration. I highly recommend you go and kick the tyres of the API over on a playground to see all of this in action.
What's Next?
Today we've explored the transition from apps to APIs and how this shift fundamentally changes our approach to prompting. We've covered the technical controls like temperature and context management that become crucial when working with APIs.
Tomorrow, we'll examine model selection and optimisation, helping you understand when to use different AI models and how to adapt your approach based on the specific capabilities of each system.
Keep Prompting,
Kyle

When you are ready
AI Entrepreneurship programmes to get you started in AI:
70+ AI Business Courses
✓ Instantly unlock 70+ AI Business courses ✓ Get FUTURE courses for Free ✓ Kyle’s personal Prompt Library ✓ AI Business Starter Pack Course ✓ AI Niche Navigator Course → Get Premium
AI Workshop Kit
Deliver AI Workshops and Presentations to Businesses with my Field Tested AI Workshop Kit → Learn More
AI Authority Accelerator
Do you want to become THE trusted AI Voice in your industry in 30-days? → Learn More
AI Automation Accelerator
Do you want to build your first AI Automation product in 30-days? → Enrol Now
Anything else? Hit reply to this email and let’s chat.