Manifesto: If you don’t run a Plan B, you don’t have one—and then you call it “Realpolitik.”
In early January 2026, Bob Sternfels, Global Managing Partner at McKinsey, revealed during an appearance at the Consumer Electronics Show (CES) in Las Vegas that AI technology alone had saved approximately 1.5 million hours in research and synthesis over the past year.
He argued that:
- AI cannot develop ambition, nor can it set the right ambitions.
- AI models, by their nature, lack judgment: “In these models, there is no right or wrong.” Humans, however, can develop the ability to set the right parameters based on stable foundations.
- AI possesses no true creativity. Instead, the technology merely calculates probabilities and is thus dependent on constant input.
As a result, Artificial Intelligence is very poor at acting orthogonally—outside of patterns—and developing new, original approaches.
On the other hand, the futurist Sven Gábor Jánszky shows who is building the future regardless: those who implement and scale, and those who have the interest (and the money) to do so.
(Links to Jánszky’s interview and Sternfels’ full talk are at the end of this post.)
Between these two poles lies the actual question: Who remains in charge when forecasts tip over—and geopolitics or war flips the rules?

There are moments when a speech sounds “reasonable”—and is dangerously reassuring precisely for that reason.
McKinsey boss Bob Sternfels essentially says: AI is a productivity lever. It can accelerate research, summarization, and routine work. But it does not replace the human. And then he names three things AI doesn’t really “have”: Ambition, Judgment, Creativity.
At first, this sounds like a consolation prize: Don’t worry—in the end, we are needed.
But as soon as you take these three terms seriously, they cease to be a comfort and become a burden. Because: If AI does not have these things, they are not simply “left over”. They become our jurisdiction—and not as a private virtue, but as societal infrastructure.
And this is exactly where Sternfels meets Jánszky.
1) Three Human Residual Competencies—or Three Final Responsibilities?
Sternfels’ triad is easy to narrate:
- Ambition: AI sets no goals. It has no inner drive, no dignity, no hope, no fear.
- Judgment: AI knows no “right” and “wrong” in the human sense. It calculates, it sorts, it weights—but it cannot bear responsibility.
- Creativity: AI follows patterns because it is built from patterns. It can do variation, remix, combination—but true pattern-breaking is something else.
Up to this point, this almost looks like a career tip: “Focus on the things AI can’t do.”
Except: Career tips are not the format we need right now.
Because if AI mills away the routine and accelerates decisions at the same time, the question shifts: not whether humans have ambition, judgment, and creativity, but who actually exercises them in practice.
And that is where Jánszky’s image of the future comes into frame.
2) Jánszky: Technology Doesn’t Decide. Implementation Decides.
For Jánszky, the focus is not on technology as a “miracle,” but on Implementation Power: Who shapes the future?
Those who invest, execute, scale—and push things into everyday life.
He pushes this logic further: Robotics (“AI with hands”), autonomous systems, drones; agents that act; and transactions that increasingly run between machines.
As a description, this isn’t automatically “wrong.” It is simply a perspective: Future as a result of Speed + Capital + Scaling.
And a critical reading hits the sore spot: This view is honest about power—but it often treats democratic counter-forces, cultural ruptures, regulation, crises, and questions of dignity merely as friction.
In other words: Jánszky explains how something can happen if no one pulls the plug. But he does not explain when society must pull the plug—or if it even still can.
3) The Confrontation: “Skills” Become Questions of Fate
When you harness Sternfels and Jánszky together, something interesting happens: the three “human advantages” are suddenly no longer abilities, but three political questions.
A) Ambition: Who sets the goals? Sternfels says: Humans set goals, AI does not. Jánszky shows: In practice, a few actors set the goals—those who implement. Thus, Ambition is no longer “motivation,” but Governance: Whose ambition becomes the standard—and who pays for the side effects?
B) Judgment: Who is liable—who controls? Sternfels says: AI has no normative judgment. Jánszky often describes development as value-free and shifts ethics into individual choice (“Advantage vs. Renunciation”). But as soon as systems shape large decisions, “Judgment” does not become smaller, but more systemic: Rules, liability, chains of legitimacy, and protected spaces become infrastructure.
C) Creativity: What must not be sacrificed to pattern-fit? Sternfels emphasizes pattern logic. Jánszky emphasizes scaling. The more perfect patterns become, the more valuable pattern-breaking becomes: Art, taboo-breaking, moral imagination, “useless” questions. Simultaneously, creativity threatens to shrink into marketable variation. This is the point where the “AI Debate” tips into a Dignity Debate—without pathos, just logic.
4) Then Comes the Crystal Ball Reality: Geopolitics
And now, the sentence that relativizes everything again—but does not devalue it:
We have no crystal ball.

Today we live in a geopolitical situation where a single event can flip framework conditions within days: wars, sanctions, blockades, energy shocks, financial shocks, information warfare.
In such moments, much of what sounded “reasonable” yesterday suddenly seems old.
This doesn’t mean everything is “gibberish.” It means: Forecast is fragile. Posture is more robust.
The correct conclusion follows from this: away from Forecast (“How will it be exactly?”) toward Posture (“How do we remain capable of acting if it turns out differently?”).
And exactly there, it finally becomes practical.
5) CONCRETE – A Call to Action for Politics
To politics—to governments, parliaments, administrations, parties, and institutions: If AI takes over work, it shifts power. Not someday. Now.
And precisely for this reason, “AI Politics” must not end at mission statements, ethics councils, and Sunday speeches.
It must organize jurisdiction—so concretely that it functions in the noise and holds up in the shock.
Many people form and consume their opinions on the internet and social media, fed by propagandists, public broadcasters, ideological reporting, and distorted worldviews. They have neither the time nor the nerves for footnotes and independent opinion formation.
Whoever claims democratic steering must cast it into forms that are faster than outrage—and more stable than the next crisis.
This is not cynicism. This is realism.
What follows is not a technical program. It is a Governance Program. It is the translation of “We need judgment” into a series of decisions that can actually be made.
Six steps that a government—and yes, even a parliament—must set in motion.

1. Map Power: Who has the levers—and who only has the speaking time?
Before you write rules, you must see power. Create a state “Power Map” as a 2-pager: the most important actors, their levers (money, energy, chips, cloud, data, narratives), their stoppers (courts, parliaments, dependencies), and three triggers that actually change behavior. These maps do not belong in the archive—they belong on the table of every cabinet meeting, every faction retreat, and every relevant committee meeting. Why? Because otherwise, you play politics in the fog: You regulate what is visible and overlook what is effective. Power mapping is not a conspiracy theory—it is the prerequisite for democratic sobriety.
2. Enforce Jurisdiction: Liability, Audit, Complaint Path—Before Scaling
If AI systems accelerate decisions, responsibility must not become smaller, but clearer. The principle is simple: No relevant decision without a liable human authority—and without a verifiable process. Whoever uses AI must be able to explain: Who decided? With what data? According to what rules? With what audit trail? This is not an innovation brake. It is the only way to make innovation democracy-compatible. Concrete politics here means: (a) A binding liability principle for “high-impact” applications, (b) Audit obligations (internal controls + external verifiability), (c) A complaint and correction path for those affected. Without these three elements, “AI introduction” is not modernization, but delegation into the uncontrolled.
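The “verifiable process” demanded here can be made mechanical. A minimal sketch in Python, assuming illustrative field and function names (`decided_by`, `data_sources`, `rules_applied`); this is not a legal standard, only a picture of what an audit trail must be able to answer:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One AI-assisted decision with a liable human authority attached."""
    decision_id: str
    decided_by: str      # the liable human authority, never "the model"
    data_sources: tuple  # with what data was it decided?
    rules_applied: tuple # according to what rules?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditTrail:
    """Append-only log: records are added and read, never altered."""
    def __init__(self):
        self._records = []

    def record(self, rec: DecisionRecord):
        self._records.append(rec)

    def explain(self, decision_id: str) -> DecisionRecord:
        """Answer: who decided, with what data, under what rules?"""
        return next(r for r in self._records if r.decision_id == decision_id)

# Hypothetical example entry:
trail = AuditTrail()
trail.record(DecisionRecord("D-001", "Ministry officer A. Example",
                            ("benefits-db",), ("Directive 7(2)",)))
print(trail.explain("D-001").decided_by)  # → Ministry officer A. Example
```

The design choice is the point: the liable human is a mandatory field, so a record without one cannot exist.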
3. Define Red Lines: Three Sentences That Apply Even in a State of Emergency
In a shock, everything becomes an “exception.” That is exactly when it is decided whether legitimacy has substance. That is why politics needs a Red Lines Charter—not a brochure, but three sentences. Three. Examples (adaptable, but not dilutable):
- First: No decision on existential questions without a liable human authority.
- Second: No mass measure without a clear purpose, clear limit, and Sunset (automatic end).
- Third: No emergency communication without a source and a duty to correct.
And the decisive point: Every red line needs an Owner—an authority allowed to say “No.” Every exception gets a logbook, a review date, and ends automatically if no one actively extends it. This is how morality becomes mechanics. And only mechanics survive the pressure.
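The logbook-plus-sunset mechanic can likewise be expressed as data. A minimal sketch with hypothetical names and dates; only the fields themselves (owner, logbook entry, review, automatic end) come from the text above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Waiver:
    """A logged exception to a red line. It ends unless actively extended."""
    red_line: str
    owner: str    # the authority allowed to say "No"
    purpose: str
    granted: date
    sunset: date  # automatic end date

    def is_active(self, today: date) -> bool:
        # No active extension means no validity: the default is expiry.
        return today < self.sunset

# Hypothetical logbook entry:
logbook = [
    Waiver("No mass measure without limit", "Data Protection Authority",
           "pandemic contact tracing", date(2026, 1, 10), date(2026, 4, 10)),
]

def active_waivers(log, today):
    return [w for w in log if w.is_active(today)]

print(len(active_waivers(logbook, date(2026, 5, 1))))  # → 0 (sunset has passed)
```

Note that nobody has to act for the exception to end; someone has to act for it to continue. That asymmetry is the whole mechanism.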
4. Reduce Dependencies: A National Dependency Heatmap with Switch-Over Times
Strategic autonomy is not a slogan. It is a list. And this list must have traffic lights. Create a Dependency Heatmap (12 fields) at the national or EU level: Energy, Payments, Cloud, Chips/Hardware, Logistics, Personnel, Communications, Data, Law/Compliance, Suppliers, Location, Reputation. For each field, three things become politically relevant: Single Point of Failure, Plan B, and Switch-Over Time. Politics becomes concrete here when it shortens switch-over times: 24 hours, 7 days, 30 days, 120 days—and not in PowerPoint, but in real contracts, capacities, and jurisdictions. Whoever knows no switch-over time has no plan—only hope.
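The heatmap itself is a small table. A sketch, assuming an arbitrary illustrative traffic-light rule (no Plan B is red, a Plan B slower than 30 days is yellow); the 12 field names come from the text, the numbers are invented:

```python
# Dependency Heatmap sketch:
# field -> single point of failure?, Plan B available?, switch-over time (days)
HEATMAP = {
    "Energy":   {"spof": True,  "plan_b": True,  "switch_days": 30},
    "Payments": {"spof": True,  "plan_b": False, "switch_days": 120},
    "Cloud":    {"spof": False, "plan_b": True,  "switch_days": 7},
    # ... plus the remaining nine fields: Chips/Hardware, Logistics,
    # Personnel, Communications, Data, Law/Compliance, Suppliers,
    # Location, Reputation
}

def traffic_light(entry) -> str:
    """Illustrative rule: no Plan B is red; a slow Plan B is yellow."""
    if not entry["plan_b"]:
        return "red"
    if entry["switch_days"] > 30:
        return "yellow"
    return "green"

for name, entry in HEATMAP.items():
    print(name, traffic_light(entry))
```

Shortening a switch-over time then has a visible effect: a field changes color, and the list, not the slogan, records the progress.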
5. Institutionalize Shock-Capability: 72-Hour Playbooks in Government and Administration
Geopolitics can turn everything around. So politics needs a standard tool: the 72-Hour Playbook. One page, four blocks: Trigger, Roles, Communication, Minimal Operation.
- Trigger: When do we switch to crisis mode? (Cyber incident, payment disruption, supply stop, energy shock, violence/unrest, regulatory breach).
- Roles: Who decides (Name/Role)? Who implements? Who speaks? Who logs?
- Communication: 1 Spokesperson, 3 Messages (what we know / don’t know / do next), and an update rhythm.
- Minimal Operation: Which five state functions must run, no matter what happens?
This is not crisis romanticism. It is administrative hygiene and governance business. And yes: It must be practiced. Resilience is practice, not hope.
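Because the playbook has a fixed structure, its completeness can be checked mechanically. A sketch with invented content; only the four block names follow the text, everything else (the named roles, the chosen state functions) is a hypothetical example:

```python
# Hypothetical 72-Hour Playbook as structured data:
PLAYBOOK = {
    "triggers": ["cyber incident", "payment disruption", "supply stop",
                 "energy shock", "violence/unrest", "regulatory breach"],
    "roles": {"decides": "State Secretary X", "implements": "Unit Y",
              "speaks": "Spokesperson Z", "logs": "Situation-room clerk"},
    "communication": {"spokespersons": 1,
                      "messages": ["what we know", "what we don't know",
                                   "what we do next"],
                      "update_rhythm_hours": 6},
    "minimal_operation": ["payments", "energy", "healthcare",
                          "emergency services", "public communication"],
}

REQUIRED_BLOCKS = {"triggers", "roles", "communication", "minimal_operation"}

def validate(playbook: dict) -> bool:
    """A playbook only counts if every block is present and non-empty."""
    return (REQUIRED_BLOCKS <= playbook.keys()
            and all(playbook[b] for b in REQUIRED_BLOCKS))

print(validate(PLAYBOOK))  # → True
```

The validation is the practice element in miniature: an incomplete playbook fails before the crisis does, not during it.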
6. Recapture the Information Space: Not Louder—Clearer, Repeatable, Verifiable
When people consume opinion, form decides effect. That is why politics needs two public standard formats that disrupt propaganda—without moral preaching.
- Format 1: Claim – Cost – Choice. Three sentences: What is the point? Who pays if we ignore it? What decision is pending (A/B)? Rule: No debate without Choice. Otherwise, it is entertainment.
- Format 2: The 15-Second Filter against Manipulation. Source, Incentive, Proof, Alternative, Action. This is banal—and precisely for that reason, suitable for the masses. If a government fails to spread simple checking routines, it will be overtaken in the next information war by those who have no scruples.
A 100-Day Plan so it doesn’t evaporate again:
- Day 1–10: Create publishable Power Map (internal), clarify responsibilities, first Red Lines Draft (3 sentences).
- Day 11–30: Define Liability Principle + Audit Framework for “high-impact” applications, design complaint path, create Dependency Heatmap in 12 fields (Traffic Light + Switch-Over Time).
- Day 31–60: Roll out 72-Hour Playbooks in core ministries/authorities; conduct first exercise; implement two communication formats (Claim-Cost-Choice + 15-Second Filter) as standard in press and crisis communication.
- Day 61–100: Finalize draft laws/ordinances (Liability/Audit/Complaint), public accountability (What is red? What is yellow? Which switch-over times were shortened?), and schedule the next exercise—before the next shock forces it.
This is the core: AI Politics is not a debate about “The Future”. It is a decision about whether democratic systems organize jurisdiction—or whether they silently surrender it to implementation power. Whoever wants to protect democracy must operate it: visibly, verifiably, practicably.
6) The Human Advantage is Not a Skill Set, But a Jurisdiction
If I have to pull all this together into one punchline, it is this:
AI takes over work. And exactly thereby, it shifts power.
That is why Ambition, Judgment, and Creativity are not “things AI cannot do,” but things we must not delegate if dignity, legitimacy, and the ability to steer are not to erode.
And that is perhaps the most honest, non-hysterical form of future thinking:
We do not know what will happen tomorrow. But we can decide what must not happen tomorrow—and which structures allow us to actually enforce that in a shock.
© Robert F. Tjón, January 2026 | Creative Commons CC BY-NC-ND 4.0 International
Legend
- AI: Artificial Intelligence
- CES: Consumer Electronics Show
- Switch-over Time: The time required to activate a viable backup system (Plan B) when the primary system fails.
- Posture: A state of readiness/robustness under uncertainty (as opposed to forecast).
Related Articles:
Jánszky’s Future Picture | How AI, Robotics, and Longevity Could Reshape the Next 50 Years (Amazing!)

Schumpeter, Frankenstein, and Jánszky

Links to the original sources in German:
- Bob Sternfels (McKinsey) Talk: Link
- Sven Gabor Jánszky Interview (with Philip Hopf): YouTube Link
- Related Article: “Wie wird die Welt 2030 aussehen?” Substack Link
