
There’s a particular kind of exhaustion that comes from watching people confuse convenience with intelligence. Students now use chatbots to finish homework in minutes, while employees increasingly rely on AI-generated emails and summaries without fully engaging with the material themselves.
The tools get faster, answers arrive instantly, and somewhere in that shift, AI stopped being merely a technological advance and became a cultural transformation.
That transformation is also where the idea of “brain rot” enters the conversation, not as a scientific diagnosis, but as a growing fear that overdependence on AI may gradually weaken concentration, memory, and critical thinking skills.
Mark Cuban seems deeply aware of that distinction. Once among the loudest voices championing artificial intelligence, the billionaire entrepreneur is now offering a more layered perspective. He isn’t against AI itself, but against the passive dependence forming around it.
His warning isn’t really about technology. It’s about what happens to people when they stop engaging with their own minds.
The Two Types of AI Users
Speaking at the Dallas Regional Chamber’s Convergence AI event, Cuban described what he sees as a growing divide between AI users. According to him, there are people who use AI to learn everything, and people who use AI so they no longer have to learn anything at all.
The difference sounds subtle at first. In practice, it changes everything.
One type of user is curiosity-driven. They ask questions not to escape effort, but to deepen understanding. For them, AI becomes less a replacement for thought and more a companion to it: a tutor, researcher, translator, or sounding board capable of making knowledge feel suddenly accessible.
The other type uses AI as an escape hatch from concentration itself. Tasks are delegated before they are understood. Opinions are formed before reflection happens. Work gets produced, but the person behind it remains strangely absent from the process.
During the event, Cuban compared careless AI usage to relying on a “drunk intern,” warning that people who depend on AI without understanding the work themselves will eventually struggle.
AI as the Great Democratizer of Knowledge
There’s something undeniably transformative about the way AI has collapsed barriers to expertise. A teenager in a small town can now ask advanced coding questions, receive business feedback, or learn philosophy without entering a classroom. Knowledge that once required institutional access now exists inside a conversational window.
Cuban calls this “the great democratizer of knowledge.” And in many ways, he’s right. His earlier comments about using AI as a tool for smarter decision-making and learning further reinforce that perspective.
But democratized access does not automatically create deeper understanding. Information has become easier to obtain than ever before, while discernment has quietly become harder. The abundance of answers often disguises the absence of thinking.
This is perhaps the contradiction at the center of modern AI culture: people have unprecedented access to learning tools, yet many use them primarily to avoid learning altogether.
The Quiet Appeal of Cognitive Offloading
Avoiding effort has always carried a certain seduction. AI simply makes that temptation frictionless.
Instead of wrestling with an idea, users can ask a chatbot to summarize it. Instead of drafting an argument, they can request one. Instead of enduring the uncomfortable process of uncertainty, they can receive instant confidence wrapped in polished language.
The danger is not merely incorrect information. It’s the gradual erosion of intellectual ownership.
Researchers at the MIT Media Lab recently found that students who relied heavily on AI-generated essays showed weaker memory retention and reduced brain activity compared to those who wrote independently. Many could barely quote from work they had supposedly authored minutes earlier.
Another study uncovered a negative correlation between frequent AI usage and critical thinking abilities. Meanwhile, researchers at the Wharton School observed that participants trusted chatbot responses nearly 80 percent of the time, even when those responses were incorrect.
The language researchers use is clinical: “cognitive surrender,” “cognitive debt,” “offloading difficult tasks.” But outside academic circles, the feeling is easier to describe. The mind grows softer when it stops carrying weight.
When Efficiency Starts Replacing Understanding
There’s an important distinction between removing friction and removing engagement. AI excels when handling repetitive tasks, sorting large datasets, cleaning formatting errors, or accelerating tedious workflows. These are meaningful efficiencies that free people for more valuable thinking.
But problems emerge when it begins replacing the very mental work that builds expertise.
Cal Newport recently argued that many people use AI not because it improves their work, but because it helps them avoid sustained concentration. The concern isn’t technological dependency alone; it’s the slow decline of cognitive endurance.
Thinking deeply has always required discomfort. Reflection is slow. Creativity is often repetitive and frustrating before it becomes rewarding. AI can assist those processes, but it cannot substitute the intellectual muscle built through them.
The irony is difficult to ignore: the same technology capable of expanding human capability can also quietly weaken it.
AI Works Best as a Tutor, Not a Substitute
Human learning has never been purely about receiving answers. It involves confusion, interpretation, failure, repetition, and eventually clarity. AI cannot fully replicate that emotional architecture of learning, but it can support it remarkably well.
When used intentionally, AI becomes less like an employee and more like an endlessly patient teacher. It can explain concepts in multiple ways, personalize examples, identify gaps in understanding, and compress hours of searching into minutes of direction.
Research from the Brookings Institution suggests AI tutoring systems can meaningfully improve educational outcomes and help scale individualized learning. The potential is enormous, particularly for people without access to expensive education or mentorship.
But even here, intention matters.
A person asking AI to explain a concept step by step is engaging their curiosity, while a person asking AI to complete the task entirely is often disengaging from the learning process itself. The technology may appear identical from the outside, but cognitively, the experiences are worlds apart.
The Real Question Isn’t About AI
What makes Cuban’s warning resonate is that it ultimately says more about people than machines.
Technology rarely invents human tendencies from scratch. It amplifies what already exists. AI amplifies curiosity for some and intellectual laziness for others. It sharpens ambition in one person while dulling discipline in another.
The divide Cuban describes is not really between AI users and non-users. It’s between people who still wish to participate in their own thinking and people increasingly comfortable outsourcing it.
That distinction may shape careers more than technical skill alone.
Learning Still Requires Presence
There’s a strange cultural assumption forming that speed automatically equals progress. But understanding has never operated at the pace of automation. Some things still require lingering with confusion, struggling through complexity, and arriving at insight slowly.
AI can shorten the path to information. It cannot eliminate the need for intellectual presence.
For everyday users, that may ultimately be the most important takeaway from Mark Cuban’s warning: use artificial intelligence as a tool for learning and productivity, but not as a replacement for thinking itself. The future may belong to people who use it thoughtfully rather than passively.
