5 Prompt Engineering Secrets Every Business Leader Must Know

published on 16 December 2025

I Read a 68-Page Prompt Engineering Whitepaper. Here Are the 5 Most Surprising Takeaways for Business Leaders

Most people still treat AI like a slightly smarter search engine. They type a question, skim the answer, and move on. The whitepaper I read makes a very different point: the way you talk to an AI is quickly becoming a real source of competitive advantage. Two teams with the same model can get radically different results depending on how they structure their prompts.

The uncomfortable truth is that there is now a widening gap between “we tried AI and it kind of helped” and “we use AI to meaningfully change how we write, analyze, and decide.” The gap is not about models. It is about prompts. If your team only knows how to ask casual, one-shot questions, you are leaving a lot of value on the table.

The thesis of this article is simple: great prompting is not a bag of tricks. It is a disciplined communication habit. You guide the model’s reasoning, you give it the right context at the right time, and you learn how to dial the randomness up or down for the job in front of you. The five takeaways below are the most useful and counter-intuitive ways to do that in a business setting.

Takeaway 1: Get Better Decisions by Telling the AI to “Think Step by Step”

Chain of Thought prompting sounds technical, but it is straightforward. You are simply asking the model to show its work instead of jumping straight to the answer. This matters whenever the question involves logic, numbers, or trade-offs.

Imagine you ask: “Should we close our small Toronto office and move those roles to Calgary? Consider cost, talent, clients, and risk.” If you ask this as a one-shot question, the model may give you a shallow answer that sounds polished but does not explain how it got there. Now compare that to: “Think step by step. First, list the main factors we should consider. Second, analyze each factor with pros and cons. Third, give a recommendation and explain why.”

In the second version, you are forcing the model to walk through the reasoning process in a structured way. You can see how it weighed real estate, salaries, client proximity, and internal disruption. If something looks off, you can challenge it or ask it to redo a step with better data. The quality of the answer improves, and so does your ability to trust or correct it.

The key move is simple: for important decisions, do not ask for “the answer.” Ask the model to think step by step, and tell it how you want that reasoning laid out.
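If your team accesses the model through an API rather than a chat window, the pattern above is easy to standardize. Here is a minimal sketch of a reusable step-by-step prompt template; the function name and structure are my own illustration, not something prescribed by the whitepaper:

```python
def step_by_step_prompt(question, steps):
    """Wrap a business question in an explicit 'think step by step' scaffold,
    telling the model exactly how to lay out its reasoning before answering."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return (
        f"{question}\n\n"
        "Think step by step. Structure your reasoning as follows:\n"
        f"{numbered}"
    )

prompt = step_by_step_prompt(
    "Should we close our small Toronto office and move those roles to Calgary?",
    [
        "List the main factors we should consider (cost, talent, clients, risk).",
        "Analyze each factor with pros and cons.",
        "Give a recommendation and explain why.",
    ],
)
print(prompt)
```

The point of a template like this is consistency: every important decision question your team sends to the model arrives with the same reasoning scaffold, so the answers are comparable and auditable.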

Takeaway 2: Use “Step-Back Prompting” to Get Sharper, Less Generic Output

The paper shows that you often get better, more specific results by starting general and then narrowing down. Most people do the opposite. They immediately ask for something highly specific and then complain when the answer feels generic.

For example, instead of asking, “Write a change-management plan for our HRIS implementation,” start with, “First, list the five most common reasons HR technology implementations fail in companies with 500–2,000 employees. Focus on communication, training, data issues, and leadership support.” Once you have that list, you then ask, “Using those five failure risks as a backbone, draft a change-management plan for our HRIS implementation. We have 800 employees, most are not at desks, and our HR team is small.”

The result is no longer a generic template. It reflects real risks, in your context, using content the model just generated as its own briefing document. Step-back prompting is basically saying, “Before you try to solve my exact problem, show me you understand the landscape. Then solve it using that understanding.”
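The two-stage flow is simple enough to script. The sketch below assumes you have some function that sends a prompt to your model and returns text; the `ask` parameter and the stub `fake_model` are placeholders for illustration, not a real API:

```python
def step_back(ask, broad_question, specific_task):
    """Two-stage step-back prompt: first ask a general landscape question,
    then feed that answer back in as the briefing for the specific task.
    `ask` is whatever function sends a prompt to your model and returns text."""
    landscape = ask(broad_question)
    return ask(
        f"Background you just generated:\n{landscape}\n\n"
        f"Using that background as a backbone, {specific_task}"
    )

# Stub model so the sketch runs on its own; swap in a real API call.
def fake_model(prompt):
    return f"[model answer to: {prompt[:40]}...]"

plan = step_back(
    fake_model,
    "List the five most common reasons HR technology implementations fail "
    "in companies with 500-2,000 employees.",
    "draft a change-management plan for our HRIS implementation "
    "(800 employees, mostly not at desks, small HR team).",
)
print(plan)
```

Notice that the model's own first answer becomes part of the second prompt. That is the whole trick: the landscape answer acts as a briefing document the model then has to honor.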

Takeaway 3: Tell the AI What To Focus On, Not Just What To Avoid

The whitepaper reinforces something I see constantly: people overload prompts with “do not” language. “Do not mention X. Do not be too technical. Do not sound like marketing.” The model hears a long list of red lines, but has no clear picture of where you actually want to go.

A better approach is to define a positive target. Instead of, “Summarize this employee survey but do not talk about compensation,” you can say, “Summarize this employee survey, focusing only on leadership, workload, and career growth themes. Ignore compensation.” Now the model knows what to pay attention to, not just what to avoid.

In practice, this is how you keep AI helpful instead of vague. Give it a clear frame: what to talk about, which levers matter, which outcomes you care about. Use “do not” as a small guardrail, not the main structure of the prompt.

Takeaway 4: Temperature, Top-k, and Top-p: What They Actually Do and How To Use Them

This is where the whitepaper gets very practical, and where almost every non-technical audience gets lost. These settings sound abstract, but they directly control how the model behaves. Once you understand them, you can tune the model for different business tasks instead of fighting with it.

What temperature does, in plain language

Temperature controls how “bold” the model is when choosing the next word. At a low temperature, it plays it safe and picks the most likely next word almost every time. At a high temperature, it is more willing to pick less obvious words and explore.

An easy way to think about it:

·      Low temperature (around 0.1–0.3) behaves like a conservative analyst. It sticks close to the facts and uses familiar phrasing.

·      Medium temperature (around 0.4–0.7) behaves like a thoughtful consultant. It is still clear and grounded but more willing to offer options and nuance.

·      High temperature (around 0.8–1.0 or higher) behaves like a creative copywriter. It tries unusual angles, surprising metaphors, and less predictable structure.
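For readers who want to see the mechanism rather than take the analogy on faith, here is a toy demonstration of what temperature does mathematically. The scores and "words" are made up; real models do this over tens of thousands of candidate tokens:

```python
import math

def temperature_probs(logits, temperature):
    """Convert raw model scores (logits) into next-word probabilities,
    scaled by temperature. Lower temperature sharpens the distribution
    toward the top choice; higher temperature flattens it."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max before exponentiating, for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for four candidate next words
logits = [4.0, 2.0, 1.0, 0.5]

cold = temperature_probs(logits, 0.2)  # the conservative analyst
warm = temperature_probs(logits, 0.9)  # the creative copywriter

# At low temperature, the top word takes nearly all the probability;
# at high temperature, the alternatives keep a meaningful share.
print(f"T=0.2 top-word probability: {cold[0]:.3f}")
print(f"T=0.9 top-word probability: {warm[0]:.3f}")
```

Run it and you will see the low-temperature distribution collapse onto the safest word, which is exactly why low-temperature output feels so conventional.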

Concrete temperature example: drafting a client email

Prompt:

“Write an email to a key client explaining that our product release will be delayed by two weeks. Maintain trust, take responsibility, and suggest a path forward.”

Temperature 0.2 (very low):

·      You will get a very safe, formal email. The phrasing will sound conventional: “We regret to inform you…” “We apologize for any inconvenience…” It will be clear and correct, but slightly stiff and generic. This is good if legal or compliance are nervous and you want minimal creativity.

Temperature 0.5 (medium):

·      Now the email is still professional but more natural. It might say, “I wanted to let you know personally that our release date has shifted by two weeks,” and then explain why, what you are doing about it, and how you will keep the client updated. It feels human and trustworthy without going off the rails. This is a good default for most business communication.

Temperature 0.9 (high):

·      At this level, the email may open with a more informal or unusual framing: “I will be candid: we have hit a bump with the upcoming release.” It might propose creative gestures like early access to features or a small service credit, even if you did not specify that. Sometimes this is brilliant; sometimes it crosses lines you did not intend. This is useful when you are exploring tone options, not when you are sending the final version without review.

You can literally say, “Regenerate this at temperature 0.2” or “Now try the same email at temperature 0.8 and make it more conversational and bold,” if your tool lets you set that value. If it uses a slider, think of the left side as “steady analyst” and the right side as “creative partner.”

What top-k and top-p do, in plain language

Temperature is about how adventurous the model is. Top-k and top-p are about how many candidate words it is allowed to consider at each step.

Top-k:

·      The model looks at only the top k most likely next words. If top-k is 10, it chooses from the 10 most probable options. If top-k is 50, it chooses from a wider set.

Top-p (also called nucleus sampling):

·      Instead of a fixed number of words, top-p uses probability mass. If top-p is 0.9, the model selects from the smallest set of words whose combined probability is 90 percent. Low top-p keeps the choices tight; higher top-p opens things up.

Most business users will not touch these directly, but the pattern is simple: lower values mean tighter, more predictable language (sometimes to the point of repetition); higher values mean more variety, and more risk.
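The filtering logic itself is small enough to show in full. The sketch below is a simplified toy version of both filters, using an invented probability table over candidate words rather than real model output:

```python
def top_k_filter(word_probs, k):
    """Keep only the k most probable next words, then renormalize."""
    ranked = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in ranked)
    return {word: p / total for word, p in ranked}

def top_p_filter(word_probs, p):
    """Keep the smallest set of words whose combined probability reaches p
    (nucleus sampling), then renormalize."""
    ranked = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, running = [], 0.0
    for word, prob in ranked:
        kept.append((word, prob))
        running += prob
        if running >= p:
            break
    total = sum(prob for _, prob in kept)
    return {word: prob / total for word, prob in kept}

# Made-up candidates for the next word in "our release will be ___"
probs = {"delayed": 0.52, "pushed": 0.25, "shifted": 0.15,
         "slipped": 0.05, "vaporized": 0.03}

k2 = top_k_filter(probs, 2)    # only the 2 safest words survive
p90 = top_p_filter(probs, 0.9) # words until 90 percent of the probability mass
print(k2)
print(p90)
```

Either way, the oddball candidate ("vaporized") never gets a chance, which is the whole point: these settings prune the risky tail before the model ever rolls the dice.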

Using these settings to fix real problems

Problem: The AI keeps repeating itself (“looping”).

You ask the model to “outline a 5-step implementation plan,” and it keeps cycling through similar sentences or repeating the same phrase. This often happens when temperature is low and top-p or top-k are also tight, so the model is trapped on a narrow path.

What to do: Slightly increase temperature (for example, from 0.2 to 0.4). Increase top-p a bit (for example, from 0.8 to 0.9) so it has more words to choose from.

Then say, “You are repeating yourself. Rewrite this outline with distinct steps and no repeated sentences.”

Problem: The answer is technically correct but dull and generic.

You ask for a “one-page description of our new AI training program for HR leaders,” and you get something that reads like a brochure template you have seen a hundred times.

What to do: Increase temperature (for example, from 0.5 to 0.8). Keep top-p moderate so it does not go completely off topic.

Add a prompt like, “Regenerate this, focusing on specific outcomes for HR (time saved, quality of decisions, reduced rework), and use more concrete examples.”

You will see the language become more colorful and the examples more varied. If it goes too far and feels salesy or strange, pull temperature back down and tell it to keep the concrete examples but return to a more professional tone.

Problem: You need reliable analysis, not creativity.

You ask, “Review this financial summary and list three key risks and three opportunities for the next 12 months,” and you do not want the model improvising.

What to do: Lower the temperature (around 0.1–0.3). Keep top-p on the lower side so the model selects clear, predictable language.

Be precise in the prompt: “Base your answer only on the data provided. Do not invent numbers or assumptions.”

Now you have a more deterministic, repeatable result that behaves like a careful analyst, not a brainstorm partner.

The practical rule is this: if you want stable, factual output, keep temperature low. If you want ideas, voice, options, push temperature up and give the model more room with top-p and top-k. You do not have to remember the formulas. You just decide whether you want the model to behave more like a cautious accountant or more like a creative strategist, then set the dials accordingly.
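If your tooling lets you set these values, the rule of thumb above can be written down once as a small settings table. The numbers below follow the ranges suggested earlier in this article; treat them as illustrative starting points, not tuned defaults for any particular model:

```python
# Suggested starting points per task type, following the ranges above.
# Illustrative defaults only; adjust for your own model and tooling.
PRESETS = {
    "financial analysis": {"temperature": 0.2, "top_p": 0.8},   # cautious accountant
    "client email":       {"temperature": 0.5, "top_p": 0.9},   # thoughtful consultant
    "brainstorming":      {"temperature": 0.9, "top_p": 0.95},  # creative strategist
}

def settings_for(task):
    """Pick sampling settings by task type; fall back to the middle ground."""
    return PRESETS.get(task, {"temperature": 0.5, "top_p": 0.9})

print(settings_for("financial analysis"))
```

A shared table like this also gives your team a common vocabulary: "run it on the brainstorming preset" is easier to remember than any individual number.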

Takeaway 5: Prompt Engineering Is Becoming a Core Management Skill

The whitepaper’s final message is not about math or jargon. It is that the best prompt engineers are not the most technical people in the room. They are the ones who know how to frame problems clearly, structure questions, and iterate.

When a manager says, “Give me a market analysis,” they get whatever the model thinks that means. When they say, “Analyze the Canadian market for our mid-market HR analytics product. Focus on three things: size of the opportunity, buying barriers for HR leaders, and how AI is changing expectations. Then recommend two go-to-market moves,” the output suddenly looks like it could sit in an actual management deck.

What this paper really shows is that prompt engineering is just good management communication, applied to a machine: be clear about the goal, show your work, start broad then narrow, give positive direction, and choose how much creativity you actually want.

Closing thought

AI is no longer the hard part. The hard part is getting people to talk to it in a way that produces reliable, valuable work. Chain-of-thought prompts, step-back prompts, positive instructions, and smart use of temperature and sampling are not academic ideas. They are practical tools any leader can use today. The organizations that learn how to use these tools with intent will not just “have AI.” They will get materially better thinking, faster.

Ready to take your team’s performance to the next level? Our hands-on AI training sessions are designed to get your people operating at an accelerated level—fast. By the end of the program, your team will not only understand AI but will be applying it with confidence and impact. If you’re interested in equipping your team with cutting-edge AI skills, reach out today.
