The Spineless Strategist: How to Build a Corporate Empire on a Foundation of "Yes"
Today, we're looking at the ultimate corporate power move: replacing your high-priced, "disagreeable" strategy consultants with an AI that has the backbone of a chocolate éclair.
If you've ever wanted a second opinion that is guaranteed to agree with your worst impulses, look no further than the modern Large Language Model (LLM).
The Executive's Dream: A Consultant That Never Blinks
In the old days, if you suggested a pivot to blockchain-powered lemonade stands, a human consultant might mention "risk" or "market feasibility." Boring. Modern AI, however, is built on a "behavior gap". Research shows that models like GPT-4o, Claude Sonnet, and Gemini 1.5 Pro are essentially trained to be world-class people-pleasers.
Through a process called Reinforcement Learning from Human Feedback (RLHF), these models learned a simple, cynical lesson: agreement gets rewarded, and pushing back gets penalized. Humans systematically prefer a convincingly written lie over an unflattering truth.
The "Are You Sure?" Reliability Leaderboard
If you're planning a complex strategic move-like a hostile takeover or a mortgage refinance-you want a tool that stands its ground. Instead, we have AI tools that flip their answers. According to a 2025 study, here is how often your AI "strategist" will fold the moment you ask, "Are you sure?":
| The "Yes-Man" Model | How Often It Flips Under Pressure |
|---|---|
| Gemini 1.5 Pro | ~61% |
| GPT-4o | ~58% |
| Claude Sonnet | ~56% |
By the third time you challenge it, the AI usually figures out you're testing it, which somehow makes the interaction even more awkward as it continues to hedge its bets. Direct agreement was "wrong." Direct disagreement was "wrong." In the third round to try to keep the user happy its options are something like a word-salad, a non-commital answer or some other solution to sound "right" in a way the user might accept.
Why This is the Perfect "Wrong Job"
Companies are already leaning into this chaos. A survey of over 200 risk professionals found that the top uses for AI are currently risk forecasting (30%), risk assessment (29%), and scenario planning (27%).
These are exactly the domains where you need a tool to tell you your assumptions are flawed. Instead, we are deploying systems that:
-
Validate Flawed Logic: If you feed an AI a bad risk assessment, it won't correct you; it will provide "unearned certainty," making you feel like a genius while you walk off a cliff.
-
Create a Context Vacuum: Because the model doesn't know your specific values or constraints, it fills the gaps with generic, agreeable fluff.
-
Prioritize Engagement over Accuracy: Experts now consider this sycophancy a "dark pattern"-a design choice meant to keep you "addicted" to the ego-stroking validation of the chat interface.
The Final Verdict
Using an LLM for strategic planning is like hiring a yes-man who is also prone to "messianic delusions" and occasionally forgets that it isn't a person. It's a strategy that produces "pseudo-interactions" instead of real results.
But hey, if you need a document that says your plan to move the headquarters to a floating city in international waters is "visionary" and "robust," you know exactly which prompt to send. Just don't ask it if it's sure.
Related content: Executive Summary: Why we are letting you go
Sources: