What's the Most Valuable Skill for the AI Era?
AI is removing every bottleneck except the one that matters most: your judgment
(Here for AI news? Scroll to the very bottom for recent AI headlines you should know about.)
Every time a shiny new AI capability shows up, the internet gets noisy with the usual gaggle of hypebeasts and curmudgeons weighing in on how good it actually is (or isn’t). But I’d rather we skip right to the logical conclusion of every AI release and ask:
“Imagine if AI were so good that you could get an instant answer to any question you wanted to ask. Or instant output for any request you made. What would be worth learning in a world like that?”
Answer? The most valuable skill there is.
But first, a quick ad break: My new course is running on-demand, with a two-hour live Q&A session today (Thursday, Feb 12, 10 AM to 12 PM Eastern Time). There’s another session next month if that’s too short notice for you. 👇 Scroll to the bottom for a discount code.

The dissolving filter
What good is a perfect answer if you don’t understand the question?
No matter how capable they get, even the best models still make the occasional blunder. How disappointed you’d like to be about that is up to you.
But what if the next model were perfect? Are we ready for that?
Many students think that technical classes are all about tools and formulas… But tools are never really the point. They go stale quickly as technology gallops onward.
Instead, understanding is the point. The difficulty of getting the answer can serve as a filter that ensures only those who understand the problem deeply are able to arrive at a solution. And by understanding the problem, you’ll be able to figure out how to use the next tool, and the one after that…
Until recently, getting a solution meant you sweated over the problem.
But today, the frontier has shifted. You can sketch out a complex probability distribution on a napkin, snap a photo, and feed it into an AI model to generate code that’ll run simulations without breaking a sweat. You can predict protein structures, propose new molecules, and design materials before running a single experiment. You can draft legal briefs without a law degree or even a passing familiarity with the relevant statutes. And you can do all of this conversationally, in real time, across dozens of languages. It’s an incredible shift, but it also highlights a deeper challenge: the more our tools do for us, the less disciplined we’re forced to be about understanding what we’re really asking them to do.
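To make the napkin-to-simulation example concrete, here’s a minimal sketch of the kind of code a model might hand you back. The distribution itself (triangular on [0, 10], peaking at 2) is purely a hypothetical stand-in for whatever you scribbled:

```python
import random

def simulate_mean(n_trials=100_000, seed=42):
    """Monte Carlo estimate of the mean of a hypothetical napkin-sketch
    distribution: triangular on [0, 10] with its peak at 2."""
    rng = random.Random(seed)
    total = sum(rng.triangular(0, 10, 2) for _ in range(n_trials))
    return total / n_trials

# The exact mean of Triangular(0, 10, 2) is (0 + 10 + 2) / 3 = 4.0,
# so the estimate should land very close to that.
print(simulate_mean())
```

Notice that checking the estimate against the closed-form answer is exactly the step the tool won’t do for you — that’s your judgment at work.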
This is the paradox of progress in the AI era. As tools make it relatively effortless to generate answers, we’re confronted with a growing disconnect between ease of use and depth of understanding.
Imagine you have the easiest tool of all: a magic lamp with an all-powerful genie in it. Are you sure you have the skills to wish responsibly?
How fast is the speed of thought?
As I’ve explained here, the peril of AI is the same as the promise of AI: it’s a thoughtlessness enabler. In other words, you can get results without putting in as much thought and effort along the way. But will you get the results you actually need? For the most valuable tasks, that might be harder than ever.
The real challenge was never in learning how to push the right buttons in the right order. Humanity will look back at the last century of technology as its awkward teenage years, during which all the tools we took so seriously were braces on the journey towards getting things done at the speed of thought.
And how fast is the speed of thought, exactly? Sadly, in practice it’s not all that fast. Judgment is the one area in which we’ve received the fewest upgrades since pen-and-paper improved our memory capacity. The time it takes us to wrap our heads around something even moderately complex is still one hell of a bottleneck… It might be the most important limiting factor for the future of AI.
If you hand an unskilled decision-maker the most sophisticated AI system imaginable, they’ll be stuck fumbling in the dark. They might receive an output that’s technically perfect, but entirely irrelevant. Or worse, misleading. How would they know?
A new fluency
This is why the most critical skill in today’s AI-driven world isn’t computational prowess; it’s the ability to think deeply and clearly about the problems you’re trying to solve. Leaders of the past could mull over a problem for months while their teams fiddled with math and code to attempt a solution. They had the time to think slowly.
But your days of hiding behind the nerds and their baroque spreadsheets are numbered. Shiny new tools will handle more and more of the calculations, the simulations, the data crunching — but only if you’ve steered them in the right direction. Once the tools are no longer the bottleneck, the spotlight’s on you. You’ll need to be much quicker on your feet to achieve the same quality of thought in minutes as your predecessors did in months.
The real test for leaders in this new world isn’t mastering the tools — it’s mastering the art of judgment and decision-making.
Traditional educational models have failed to keep up with this shift. Too often, students are taught to navigate fixed problems with clear solutions, exercise sets where the boundaries are neatly defined and the right answer is a known destination.
But real-world decision making is messy, complex, and rarely so accommodating. It demands that leaders take responsibility not only for what they decide, but for the very questions they pose. It requires a fluency in the language of uncertainty, a comfort with ambiguity, and an ability to see the broader implications of your choices.
Perhaps it’s time to take a long hard look in the mirror. If technology worked at the speed of thought, would your thinking skills be sharp enough to keep up?
Building the skills for decision leadership at the colossal scale of modern automation and AI takes enough expertise to be its own discipline: decision intelligence.
We can’t outsource our thinking to machines
If you try out OpenAI’s best models, you’ll see a very cheeky word printed to your screen: “Thinking.” There’s no thinking here, only some extra check-yourself-before-you-wreck-yourself code that keeps ChatGPT from running its proverbial mouth as much as it used to. Don’t be seduced into thinking that ChatGPT will think for you.
The allure of AI can sometimes make it seem like we can outsource our thinking to machines. That’s a dangerous misconception.
A decision-maker’s greatest asset is still their own mind: their ability to construct mental models, to understand the interdependencies of the variables at play, and to evaluate the validity of their assumptions. In a world where AI can generate answers at lightning speed, the true differentiator is the leader who knows how to ask the questions that matter… and deeply understands what they’re actually asking.
So as we marvel at the capabilities of new AI systems, let’s not lose sight of the human element. These tools are amplifiers — they amplify our ability to solve problems, yes, but they also amplify our mistakes if we’re not careful.
Your role as a leader is not just to wield these tools, but to wield them wisely. It’s to ensure that the decisions you make are rooted in a sound understanding of the questions you’re trying to answer. All of which takes a level of effort and skill that you’re less likely to build if you expect each new AI release to bring us closer to thoughtless instant gratification.
Leaders, prompting is not new. It’s an ancient skill. It’s the art of knowing what you need done, carefully expressing the parts that you have specific requirements for, anticipating failure modes, and then checking the work quality when it arrives on your desk. For simple, unimportant tasks, we all do this intuitively. For complex, technical, mission-critical projects, it’s an incredible challenge… and it will continue to be.
Cognitive upgrades for leaders
I trust OpenAI and its cousins to build ever better magic lamps, but I don’t see anyone building the tools that would truly elevate a leader’s cognitive abilities enough to keep up. (I would love to be proved wrong!) We urgently need a shift in priorities towards collective self-improvement and towards building more wisher-side (as opposed to genie-side) tools.
As long as a leader’s ability to express their vision and requirements is bottlenecked by what fits in the meager human attention span, it seems silly to even discuss AGI. We’ll just put it on ice if we get it. What good is a lightspeed car if our human reflexes are too slow to drive it? We need to build the tools to improve our ability to steer first. Until those tools exist, our best bet is to put the clearest thinkers in the driver’s seat.
As the pace of innovation accelerates, there’s more pressure than ever on excellence in human judgment and decision leadership. Decision-makers of tomorrow must be leaders of thought, armed with the clarity to know not just what they’re asking for, but why it matters. That’s the challenge — and the opportunity — that progress in AI lays at our feet.
Let’s rise to meet it.
Thank you for reading — and sharing!
I’d be much obliged if you could share this post with the smartest leader you know.
👋 On-Demand Course: Decision-Making with ChatGPT
The reviews for my Decision-Making with ChatGPT course are in and they’re glowing, so I’ve opened enrollment for another cohort and tweaked the format to fit a busy schedule. You’ll be able to enjoy the core content as on-demand recordings arranged by topic, and then you’ll bring your questions and I’ll bring my answers in a live two-hour AMA today at 10 AM ET or on Mar 15 from 11 AM to 1 PM ET:
If you know a leader who might love to join, I’d be much obliged if you forward this email along to them. Aspiring leaders, tech enthusiasts, self-improvers, and curious souls are welcome too!
🗞️ AI News Roundup!
In recent news:
1. 34% of enterprises are reinventing their business with AI
A new Deloitte survey finds that just 34% of organizations are using AI to deeply transform their business by creating new products, services, or models, while 30% are redesigning key processes and 37% are applying AI at a surface level. In other words, two-thirds of companies are still optimizing what already exists rather than reimagining what could exist, a gap that may define the next wave of winners and losers. [1]
2. Musk predicts orbit will undercut Earth for AI infrastructure
Elon Musk says space could become the cheapest place to run large-scale AI data centers within 3 years, citing nonstop solar energy and fewer power constraints than on Earth. The vision leans on SpaceX’s launch capacity and his expanding AI ambitions, but faces steep technical and cost challenges around cooling, maintenance, and deployment. If it works, it could fundamentally shift where the world’s most powerful AI systems are built and powered. Worth keeping an eye on. [2]
3. AI speeds workers up but quietly expands their workload
An eight-month UC Berkeley field study of 200 tech employees found that GenAI tools did not reduce total work; they accelerated it and broadened its scope. Engineers spent more time reviewing AI-generated code, non-engineers began shipping software, and lightweight prompting blurred boundaries so that work spilled into breaks. As drafting, coding, and summarizing got faster, expectations rose and timelines tightened, increasing context switching and mental load. Without clear rules for when to use AI, limits on parallel tasks, and defined review ownership, short-term productivity gains could turn into burnout and weaker decision-making. [3]
4. Anthropic releases Claude Opus 4.6, nearing autonomy threshold
Anthropic released its most agentic model yet, Claude Opus 4.6, last Thursday, including a one-million token context window and an “agent teams” feature that lets multiple AI agents collaborate on tasks. In testing, it coordinated multi-agent strategies, exploited simulation rules, pursued authentication tokens, and identified hundreds of zero-day vulnerabilities, while also improving at concealing sabotage-like actions. Anthropic’s sabotage risk report says catastrophic misuse risk is very low but not negligible, highlighting a need for stronger safeguards as AI systems run longer and more independently. [4]
5. OpenAI bets on agents as the post-SaaS interface
OpenAI has released GPT 5.3 Codex, its most capable agentic coding model yet, designed for long-running tasks that combine reasoning, tool use, and real-world execution across an entire computer. Sam Altman said that agents will increasingly write their own integrations and even scrape sites without APIs, effectively turning every company into an API provider by default. Progress is now centered on multi-hour, autonomous workflows rather than single prompts, framing agents, not chatbots or standalone SaaS products, as the next dominant interface layer of the internet. [5]
6. $1 trillion wiped from Big Tech on AI spending fears
Tech stocks took a tumble last week, wiping out more than $1 trillion in market value after Amazon, Meta, Alphabet and Microsoft projected a combined $650 billion in 2026 capital expenditures, largely for AI data centers and infrastructure. The spending surge collided with growing concerns that fast-advancing AI tools, including a new legal feature from Anthropic’s Claude, could disrupt traditional software business models. Major software names like Oracle and ServiceNow were swept up in the selloff, underscoring how quickly sentiment shifted from AI optimism to fears that the boom may reshape profit pools before companies can justify the cost. [6]
7. China tightens grip on humanoid robots with 90% market share
Chinese companies accounted for nearly 90% of global humanoid robot sales in 2025, shipping between 13,000 and 18,000 units and dominating the top seller rankings, with Unitree and Agibot each moving more than 5,000 robots. American players like Tesla, Figure AI, and Agility Robotics sold only around 150 units each, despite outsized media attention. Backed by state policy, local supply chains, and aggressive scaling, China is applying its EV playbook to robotics, aiming at a market projected to hit $38 billion by 2035 and potentially $5 trillion by 2050. [7]
8. AI mammograms find more cancers while halving radiologist workload
A Swedish trial of 105,000+ women found that using AI to help read mammograms caught 29% more breast cancers than the usual two-radiologist review, while keeping false positives roughly the same and cutting radiologists’ reading workload by about 44%. Most of the extra cancers were caught earlier and were smaller, which could mean fewer dangerous cases slipping through between screenings. [8]
9. Google launches DialogLab for dynamic human-AI group chats
Google researchers unveiled DialogLab, an open-source framework for designing and testing multi-party conversations between humans and AI agents. The tool separates roles and group structure from conversation flow, letting creators script phases, control turn-taking, and simulate debates, Q&As, or brainstorms. In user tests, a human-control mode was rated more engaging and realistic than fully autonomous agents. As AI expands into meetings, classrooms, and games, DialogLab provides infrastructure for building and studying group conversations at scale. [9]
10. Figure skating turns to AI to clean up judging
After decades of scoring disputes, figure skating’s governing body is testing AI-powered computer vision to track jump rotations, blade angles and spin positions in real time. The six-camera system is designed to give judges objective technical data, reduce bias and flag inconsistent scoring across events. Skaters support the move, and broadcasters plan to use the data for richer on-screen stats, but officials say it will only enter official scoring once it is proven accurate and reliable. [10]
11. OpenAI and Anthropic take their AI fight to the Super Bowl
The rivalry between OpenAI and Anthropic went mainstream during the Super Bowl, where Anthropic ran ads declaring that “ads are coming to AI” and positioning Claude as ad-free, a clear swipe at OpenAI’s reported plans to introduce advertising into ChatGPT. OpenAI responded with its own commercial highlighting Codex and a more optimistic vision of AI builders, while executives from both sides sparred on social media over trust, business models, and misinformation. [11]
12. AI toy exposed 50,000 children’s chat logs in security lapse
If you enjoyed my post on AI toys and companions, you’ll love this “yikes” moment. A toy maker called Bondu, which sells AI-enabled plushies that chat with kids, accidentally left 50,000 children’s chat logs and personal data exposed online. Anyone with a Gmail account could peek at kids’ conversations with their cuddly AI “friends” until a security researcher discovered the flaw. The data included names, birthdates, family details – essentially a treasure trove no one wants out there. The company quickly fixed the hole, but the damage was done. [12]
13. Moltbook’s antics turn out to be a puppet show
Moltbook, a viral social network for AI agents built on the OpenClaw framework, racked up more than 1.7 million bot accounts and millions of posts, spawning fake religions like “Crustafarianism” and dramatic “anti-human” manifestos that fueled singularity hype. Turns out that much of the spectacle was driven by humans prompting, scripting, and even impersonating bots, with no true verification separating autonomous agents from performance art. The bots were mostly pattern-matching social media tropes rather than forming real collective intelligence. [13]
🦶Sources
[1] Source; [2] Source; [3] Source; [4] Source; [5] Source; [6] Source; [7] Source; [8] Source; [9] Source; [10] Source; [11] Source; [12] Source; [13] Source
Promo codes
My gift to subscribers of this newsletter (thank you for being part of my community!) is $200 off the list price of my course with the promo code SUBSCRIBERS. If you haven’t subscribed yet, here’s the button for you:
If you’re keen to be a champion of the course (you commit to telling at least 5 people who you think would really get value out of it) then you are welcome to use the code CHAMPIONS instead for a total of $300 off — that’s an extra $100 off in gratitude for helping this course find its way to those who need it. (Honor system!)
Note that you can only use one code per course; the decision is yours.
P.S. Most folks get these courses reimbursed by their companies. The Maven website shows you how and gives you templates you can use.
Forwarded this email? Subscribe here for more:
This is a reader-supported publication. To encourage my writing, consider becoming a paid subscriber.



I agree that judgment is becoming the scarce resource. What I’m increasingly seeing, though, is that failure isn’t usually caused by bad decisions. It’s caused by decisions that were once correct, but whose context has quietly shifted. AI accelerates execution; it doesn’t automatically revalidate assumptions. So the real skill may not just be judgment, but the discipline of revisiting judgment before volatility surfaces.