
In 1997, a machine beat the best chess player who ever lived. Garry Kasparov sat across from IBM’s Deep Blue, lost Game 6 in 19 moves, and the world collectively panicked.
Chess was dead. Humans were obsolete. The machines had won.
That was almost 30 years ago. And you know what happened to chess? It exploded.
62 million households watched The Queen’s Gambit. Chess.com saw a 500% spike in sign-ups. Book sales jumped 603%. More people play chess today than at any point in human history.
The relationship between chess and AI is one of the longest-running experiments in human-machine collaboration. And the lessons are finally relevant to everyone.
The game didn’t die. It transformed. And as someone who spent years as a competitive player and chess instructor, I watched that transformation happen in real time. The patterns I saw then are the same patterns I see playing out right now with AI across every industry.
Here’s what I mean.
How Chess Players Responded to AI: The Five Stages
When engines first got strong enough to matter, the chess world went through something that looks a lot like grief.
Denial. Top players insisted computers couldn’t really “understand” chess. They were just calculating. Hikaru Nakamura famously claimed his brain was “better than Rybka” (the strongest engine at the time). He wasn’t alone. Most grandmasters believed human intuition would always have the edge.
Anger. Kasparov accused IBM of cheating during the 1997 match. He demanded a rematch. IBM refused and dismantled Deep Blue. Players complained that engines were “ruining the game.” Sound familiar? Swap “chess” for “creative work” and you’ve got half the internet in 2024.
Bargaining. Kasparov proposed a compromise. What if humans and computers played together? He organized the first “Advanced Chess” event in León, Spain in 1998. Kasparov with an engine versus Topalov with an engine. A month earlier, Kasparov had crushed Topalov 4-0 in regular chess. With engines? The match ended 3-3. The computer equalized the skill gap.
Depression. By 2005, even mid-range laptops could beat grandmasters. Magnus Carlsen admitted: “I rarely play against engines at all, because they just make me feel so stupid and useless.” The existential crisis was real. If a $500 laptop plays better chess than you after a lifetime of study, what’s the point?
Acceptance. And then something shifted. Players stopped fighting the engines and started working with them. That’s when things got really interesting.
How Chess Engines Made Human Players Better
Here’s a fact that surprised even the researchers. Kenneth Regan at the University at Buffalo spent years analyzing the quality of chess decisions across decades. His finding? The quality of play among top players improved steadily, and the improvement maps directly to when strong engines became available in the mid-1990s.
Engines didn’t replace human thinking. They raised it.
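Regan’s actual model is considerably more sophisticated, but the core idea behind measuring “quality of play” is simple: compare each move a player made against the engine’s evaluation of the best available move. Here’s a toy sketch of that idea, average centipawn loss; the function and sample numbers are invented for illustration, not Regan’s method.

```python
def average_centipawn_loss(moves):
    """Average gap (in centipawns) between the engine's best move
    and the move actually played. Lower = closer to engine-perfect.

    `moves` is a list of (best_eval, played_eval) pairs, both in
    centipawns from the player's point of view."""
    if not moves:
        return 0.0
    losses = [max(0, best - played) for best, played in moves]
    return sum(losses) / len(losses)

# Invented sample: a player whose moves trail the engine's choice
# by 0, 30, and 90 centipawns across three positions.
game = [(25, 25), (40, 10), (-15, -105)]
print(average_centipawn_loss(game))  # → 40.0
```

Track that number across decades of games and you can see, in hard data, whether decision quality is rising. Regan’s finding was that it is.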
Before computers, opening preparation meant studying printed encyclopedias and memorizing maybe 8-12 moves deep. Today, grandmasters prepare lines 20-30 moves deep using cloud-based engine clusters. Fabiano Caruana put it simply: “Computers just taught us that pretty much anything is playable in the opening.”
Entire opening systems were resurrected from the dead. Vladimir Kramnik revived the Berlin Defense against Kasparov in the 2000 World Championship. Everyone thought it was a boring, dead-end sideline. Engine analysis proved it was rock solid. Kramnik drew all four games he played it. Kasparov never recovered. The Berlin is now one of the main lines of the entire Ruy Lopez.
But the real mind-bender? The concept of “computer moves.”
What Are “Computer Moves” in Chess?

A computer move is one that looks wrong to every experienced player in the room. It violates principles you’ve spent years learning. It makes no intuitive sense. But it’s objectively, provably correct.
Let me give you a specific example.
World Championship 2018. Carlsen versus Caruana. Game 6. Caruana had a piece versus three pawns in what looked like a drawn endgame. The Norwegian supercomputer “Sesse” running Stockfish found a forced checkmate. The winning move? 68…Bh4. A quiet bishop retreat.
Not a flashy sacrifice. Not an attacking move. A retreat. In a position where every human instinct screams “push forward.”
The engine found a forced mate in 58 moves from that position. Caruana didn’t find it over the board. Neither did the commentating grandmasters watching live. When Carlsen was told about it afterward, he said: “I am not going to disagree with the computers, I just don’t understand it.”
The reigning world champion looked at the best move in the position and said “I don’t understand it.”
That sentence should be tattooed on every wall in every boardroom discussing AI strategy right now.
What Is Centaur Chess? The Human-AI Collaboration Experiment
In 2005, a freestyle chess tournament on Playchess.com let anyone enter with any combination of human and computer assistance. Grandmasters entered with top engines. Expert players entered with multiple engines.
The winners? Steven Cramton and Zackary Stephen. Two amateurs. Cramton was rated 1685. Stephen was rated 1398. For context, a strong club player is around 1800. These guys weren’t even that.
They beat a team that included a grandmaster.
Kasparov analyzed the result and wrote what became maybe the most important observation about human-machine collaboration ever published:
“Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”
Read that again. The amateurs with a better workflow beat grandmasters with a worse workflow. The differentiator wasn’t chess skill. It wasn’t computing power. It was process.
Cramton and Stephen ran three ordinary computers with four different engines simultaneously. They cross-referenced evaluations. They knew when to trust which engine and when to override. Their chess was mediocre. Their system for managing AI outputs was world-class.
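You can caricature that workflow in a few lines: query several engines, trust the move only when the evidence agrees, and escalate to human judgment when it doesn’t. Everything below (the engine names, moves, and agreement threshold) is invented for illustration; real centaur play involved far more judgment than any voting rule.

```python
def cross_reference(evaluations, agreement=3):
    """Accept a move only when at least `agreement` engines rank it
    first; otherwise flag the position for human review.

    `evaluations` maps engine name -> its preferred move."""
    votes = {}
    for engine, move in evaluations.items():
        votes[move] = votes.get(move, 0) + 1
    best_move, count = max(votes.items(), key=lambda kv: kv[1])
    if count >= agreement:
        return best_move, "trust the consensus"
    return best_move, "engines disagree - human decides"

# Hypothetical example with four engines, echoing the 2005 setup.
evals = {"engine_a": "Bh4", "engine_b": "Bh4",
         "engine_c": "Bh4", "engine_d": "Ng5"}
print(cross_reference(evals))  # → ('Bh4', 'trust the consensus')
```

The point isn’t the code. It’s that the human’s job shifted from generating answers to arbitrating between them.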
If that doesn’t sound exactly like the skill gap opening up right now between people who use AI well and people who don’t, I don’t know what does.
Did AI Kill Chess? What Actually Happened
So the machines got better. And then what?
The number of grandmasters exploded. In the first 25 years of the title’s existence, about 4.7 players per year earned the GM title. Over the past 25 years? That number is 53.6. More than a tenfold increase. There are roughly 1,823 living grandmasters today.
Kids are reaching grandmaster level at younger and younger ages. Abhimanyu Mishra earned the title at 12 years, 4 months, and 25 days in 2021. Before 2010, only four players under 14 had ever achieved it. Since 2010, twelve have.
Engines didn’t kill the grandmaster. They mass-produced grandmasters.
But here’s the nuance. A Harvard/BCG study from 2023 found something similar in knowledge work. Below-average consultants improved 43% with AI assistance. Above-average consultants improved only 17%.
The floor rises. The ceiling barely moves.
Engines made average chess players significantly better. They made great chess players somewhat better. They made the very best players only marginally better, because those players were already operating near the limits of what pattern recognition and calculation could achieve.
The same thing is happening right now in writing, coding, analysis, design. AI in the workplace is raising the baseline fast. The top tier is still the top tier, but their relative advantage is shrinking.
The End of Centaur Chess: When AI Passed Humans
Here’s where I have to be honest with you about something. The centaur story has a sequel that most people leave out.
Tyler Cowen wrote about it in February 2024: “Centaur chess is now run by computers.” By 2007, engines got so strong that the human contribution in freestyle chess became negligible. Pure engines outperformed human-engine teams. The major freestyle tournaments stopped running by the late 2000s.
The window where humans added meaningful value to the machine’s output was real. But it was also temporary.
That’s worth sitting with for a minute.
It doesn’t mean the collaboration phase is useless. Those years of centaur chess produced genuine insights. Players learned to think differently about positions. The process skills they developed translated into better solo play. The collaboration period was a bridge, not a destination.
And right now, across every industry, we’re on that bridge. The question isn’t whether to cross it. You’re already on it. The question is what you’re learning while you’re here.
3 Lessons from Chess for Working with AI Today
1. The people who thrive aren’t the ones who resist the tool. They’re the ones who learn to work with it first.
Kramnik didn’t fight engines. He used engine analysis to find the Berlin Defense and beat Kasparov. Caruana used engine-assisted preparation to post a 3103 performance rating at the 2014 Sinquefield Cup and beat Magnus Carlsen. The players who adapted fastest got the biggest advantage. But that advantage was temporary. Eventually everyone caught up.
First-mover advantage in learning to use the new tool is real. But it’s a window, not a permanent state.
2. Process beats raw talent when machines are involved.
Cramton and Stephen proved it. Two amateurs with great process beat grandmasters with poor process. In your work, the question isn’t “how smart are you?” It’s “how well do you manage the interaction between your judgment and the AI’s output?”
3. The machine will show you moves you don’t understand. Your job is to figure out when to trust them anyway.
Anish Giri said it best: “Sometimes the computer moves are so sophisticated, if you try to understand them, it would take a year.” In chess, we learned to accept that the engine’s suggestion might be right even when we can’t explain why. Then we worked backward to understand the logic.
That’s exactly what working with AI looks like now. It’ll give you outputs that feel wrong. Sometimes they are wrong. Sometimes they’re a bishop retreat that leads to a forced mate in 58 moves. Learning to tell the difference? That’s the skill of the decade.
Frequently Asked Questions
Did AI kill chess?
No. The opposite happened. After Deep Blue beat Kasparov in 1997, chess exploded in popularity. Today more people play chess than at any point in history. Chess.com saw a 500% spike in sign-ups after The Queen’s Gambit. The game didn’t die. It transformed.
What is centaur chess?
Centaur chess (also called “Advanced Chess” or “freestyle chess”) is a format where human players use chess engines during the game. Garry Kasparov invented the concept in 1998. The most famous centaur chess result came in 2005 when two amateur players with great process beat grandmasters with better engines but worse workflows.
Can humans still beat chess computers?
No. Since roughly 2005, even consumer-grade laptops running chess engines can beat the best human players. The last serious human victory against a top engine was in the early 2000s. Today’s engines like Stockfish are rated around 3500 Elo, while the best humans peak around 2850.
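For a sense of how lopsided that rating gap is, the standard Elo formula converts a rating difference into an expected score (win probability, with draws counting as half a point):

```python
def expected_score(rating_a, rating_b):
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 2850-rated human against a 3500-rated engine:
print(round(expected_score(2850, 3500), 3))  # ≈ 0.023
```

At a 650-point gap, the human expects roughly one point out of every 43 games. In practice, against a modern engine, even that overstates the human’s chances.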
What are “computer moves” in chess?
Computer moves are positions where the engine recommends something that looks wrong to experienced human players. They violate intuitive principles but are objectively correct. The famous example is from the 2018 World Championship where Stockfish found a forced checkmate starting with a quiet bishop retreat that no grandmaster saw.
How has AI changed how chess is played?
AI transformed chess preparation. Before engines, players studied printed opening encyclopedias and memorized 8-12 moves deep. Today, grandmasters prepare 20-30 moves deep using engine analysis. Entire opening systems like the Berlin Defense were resurrected after engines proved they were sound. The quality of play at the top level has measurably improved since engines became available.
What’s Coming Next
This is part one of a series. I spent years in competitive chess as a National Master and chess instructor, and I keep seeing the same patterns repeat as AI reshapes how we work and think.
Next time, I’ll dig into something specific: how chess engines changed the way we teach. Because the shift from “memorize the right moves” to “understand the patterns” is exactly what education needs to figure out right now. And chess already ran that experiment.
If you’ve played competitive chess, you know exactly what I’m talking about. If you haven’t, stick around. The parallels are going to surprise you.