This is the second part of our series on AI implementation for small and medium-sized enterprises. In the first part, we explored the strategic foundations: Which questions should you answer before thinking about tools? Now we turn to the practical side: How do you write effective prompts for your business that actually deliver useful results and create real value?
Why Most Prompting Advice Falls Short in a Business Context
After the strategic groundwork (more on that in the first part of this series), many businesses face a familiar challenge: employees have access to AI systems, but the quality of results varies dramatically. Some colleagues produce useful outputs right away, while others end up with generic, AI-sounding text that creates more work than it saves.
The problem rarely lies with the technology itself. Rather, most prompting guides are written for individual users, designed to work across all AI systems, and simply not built for team deployment in a professional environment. These guides recommend tips and tricks that work well for personal use but fail in a business context. Because when it comes to AI implementation in SMEs and family businesses, the goal is not for one person to achieve impressive results, but for ten, twenty, or fifty employees to achieve consistently good results. That is precisely when AI delivers on its promise: genuine leverage across all departments of your organisation.
The crucial difference is that effective prompts in a business setting should not be individual tricks. They should be documented processes that can be shared and continuously improved.
The Four Building Blocks of an Effective Prompt
In our work with SMEs and family businesses, a straightforward framework has proven particularly effective. It consists of four elements that together help AI systems understand what is actually expected of them. This approach can be applied to virtually any AI application in your organisation.
1. Context – Who Is Speaking, and in What Situation?
Providing the right context is especially important. How else would AI systems acquire the necessary knowledge about your industry, your company, or your customers? The pitfall here is clear: everything you do not tell the AI will be replaced by assumptions, and these assumptions rarely hit the mark.
Consider an example from mechanical engineering: “Write an email about the delivery delay” produces a generic apology that could just as easily come from a furniture retailer. However, the instruction “You are supporting the sales team of a mid-sized special machinery manufacturer. Our customer is a long-standing partner in the automotive supply industry. Draft an email that explains the two-week delay in delivering a specialised milling machine while strengthening the relationship” yields an entirely different result.
The good news is that context does not need to be reformulated with every request. Many businesses create a baseline context containing the most important information: industry, company size, typical customer structure, and communication tone. This context is then supplemented with situation-specific details as needed, representing an important first step toward systematic AI implementation.
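The idea of a reusable baseline context can be sketched in a few lines. The following is a minimal illustration, not a prescribed implementation: the company details, function names, and wording are invented placeholders, and the assembled string would be sent to whichever AI system your organisation uses.

```python
# Illustrative sketch: a shared baseline context that is written once and
# combined with situation-specific details for each request.
# All company details below are invented placeholders.

BASELINE_CONTEXT = (
    "You are supporting a mid-sized special machinery manufacturer "
    "with around 200 employees. Customers are mainly long-standing "
    "partners in the automotive supply industry. Tone: professional, "
    "direct, relationship-oriented."
)

def build_prompt(situation: str, task: str, output_format: str) -> str:
    """Combine the shared baseline context with request-specific details."""
    return "\n\n".join([
        f"Context: {BASELINE_CONTEXT} Current situation: {situation}",
        f"Task: {task}",
        f"Format: {output_format}",
    ])

prompt = build_prompt(
    situation="Delivery of a specialised milling machine is delayed by two weeks.",
    task="Draft an email that explains the delay and strengthens the relationship.",
    output_format="Three short paragraphs, maximum one page.",
)
```

The point of the sketch is the separation: the baseline context is maintained centrally, while each employee only supplies the situation, task, and format.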
2. Task – What Exactly Should Happen?
Precision in describing the task matters more than length. The instruction “Write something about our new product” is vague. “Formulate three key messages for our new product, each highlighting a different customer benefit” is concrete.
A helpful rule of thumb: If you were assigning the same task to a new team member, what follow-up questions would they ask? Your prompt should answer those questions in advance.
Typical questions might include: Who is the intended audience? What is the objective – to inform, persuade, or document? Are there aspects that should be emphasised or deliberately omitted?
A practical tip that has proven valuable: test your prompt first with a rough draft. The gaps in the initial AI output will show you which contextual information is still missing.
3. Format – What Should the Result Look Like?
AI systems can deliver results in virtually any form: flowing prose, bullet points, tables, or drafts with placeholders. Without clear guidance, the system will pick a format itself, and its choice rarely fits your needs.
This is particularly relevant for SMEs and family businesses: many tasks already have established formats. A quotation follows a specific structure, meeting minutes have fixed components, and a complaint response has a proven layout. These existing structures should be described in the prompt or provided as a template.
Specifying “maximum one page” or “in three paragraphs” may seem trivial, but it saves considerable rework in daily operations.
4. Examples – What Would a Good Result Look Like?
This fourth building block is optional but highly effective. When you show the system what a successful result looks like, the hit rate increases significantly from the start. In technical terms, this is called “few-shot prompting” – you provide a few examples from which the system derives the pattern.
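As a rough sketch, few-shot prompting simply means placing one or two reference examples inside the prompt before the actual request. The examples below are invented for illustration; in practice you would use real, well-crafted texts from your own company.

```python
# Illustrative few-shot prompt: reference examples are placed in the prompt
# so the system can derive the pattern. Example texts are invented.

examples = [
    ("Initial enquiry about milling machines",
     "Thank you for your enquiry. To prepare a meaningful quotation, we ..."),
    ("Complaint about a late spare-parts delivery",
     "We take your feedback seriously. Here is what we will do next ..."),
]

few_shot_block = "\n\n".join(
    f"Example input: {inp}\nExample output: {out}" for inp, out in examples
)

prompt = (
    "Reply to customer emails in our house style.\n\n"
    + few_shot_block
    + "\n\nNow respond to: Enquiry about maintenance contracts."
)
```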
For businesses, this means: collect well-crafted texts, emails, and documentation as reference material for the AI. These serve not only as quality benchmarks for your employees but also as training material for your AI applications.
From Individual Queries to a Prompt Library
The real productivity leap comes not from better individual prompts but from systematisation. Imagine your sales department has five standard situations in customer communication: responding to initial enquiries, following up on quotations, handling complaints, communicating delivery delays, and preparing annual reviews.
For each of these situations, you can develop a tested prompt containing all four building blocks that every team member can use. The result is not standardisation in a negative sense, but quality assurance: the foundation is consistently good, while individual adjustments remain entirely possible.
This prompt library grows over time. Employees who discover better formulations can contribute them. What works gets documented. What does not work gets refined. This creates collective knowledge about the effective use of artificial intelligence in your organisation.
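In its simplest form, such a prompt library is nothing more than tested templates stored under the names of your standard situations. The sketch below assumes plain-text templates and invented wording; a shared document or wiki page serves the same purpose.

```python
# Illustrative prompt library: tested templates keyed by standard situation,
# so every team member starts from the same foundation. Wording is invented.

PROMPT_LIBRARY = {
    "delivery_delay": (
        "Context: You are supporting the sales team of a mid-sized special "
        "machinery manufacturer.\n"
        "Task: Draft an email explaining the delay while strengthening the "
        "customer relationship.\n"
        "Format: Maximum one page, three short paragraphs."
    ),
    "quotation_follow_up": (
        "Context: You are supporting the sales team of a mid-sized special "
        "machinery manufacturer.\n"
        "Task: Follow up politely on an open quotation.\n"
        "Format: Short email, friendly and concrete."
    ),
}

def get_prompt(situation: str, details: str) -> str:
    """Fetch a tested template and append situation-specific details."""
    template = PROMPT_LIBRARY[situation]
    return f"{template}\nDetails: {details}"
```

Individual adjustments remain possible: the template provides the consistently good foundation, and the details parameter carries whatever is specific to the case at hand.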
From the perspective of AI systems, two aspects are particularly crucial: processes and context. When processes are clearly described and context is available, AI can truly deliver the leverage effect you expect.
Common Mistakes and How to Avoid Them
Asking for too much at once: A prompt that simultaneously asks the AI to analyse, summarise, evaluate, and provide recommendations overwhelms many systems. It is better to break complex tasks into discrete steps.
Too little context, too much trust: AI systems are not experts in your industry. They generate plausible-sounding text that may be technically inaccurate. The more relevant context you provide, the lower the risk.
Accepting results without review: Even the best prompt does not guarantee error-free results. Review by a knowledgeable person remains essential – it simply becomes faster because the foundation is already sound.
What This Means for Your AI Strategy
Effective prompting is not technical specialist knowledge that you can delegate to your IT department. It is a competency that must be located where the subject matter expertise lies: in the specialist departments themselves.
The leadership task is to create the framework: time for testing and documenting prompts, a shared repository for proven templates, and regular exchange of experiences. Whether you organise this as internal AI training for employees or as a continuous learning process is secondary; businesses that approach it systematically report significantly higher acceptance and measurable efficiency gains.
The prompting fundamentals described here mark a decisive transition: away from unfocused experimentation by individual employees, toward the systematic use of AI as a working tool. In our model of the AI transformation journey, this corresponds to the step from Phase 1 – “tool available, benefit unclear” – to Phase 2, where AI actually becomes a personal productivity lever.
The good news: this step requires no major investments in new technology. It requires clarity about processes, context, and expectations – and the willingness to document and share approaches that work.
The question many businesses ask themselves at this point: How do we make the transition from individual power users to an organisation where AI competency is broadly anchored?
If you would like to answer this question for your business, let us talk.
In a no-obligation strategy conversation, we will analyse together where you currently stand and which next steps make sense for your situation.
Would you like to explore the strategic aspects of AI transformation in greater depth? Read our article on change management as an underestimated success factor.