
Overreliance on AI: Painting A Scary Future

  • ottilife
  • Sep 12
  • 5 min read

We live in an age where artificial intelligence has become as commonplace as morning coffee. From university students crafting homework essays to C-suite executives making strategic decisions, people from all walks of life are using AI daily. I mean, why wouldn’t you? AI promises enhanced productivity and solutions to complex problems at the click of a button.


While these benefits are real, we at Otti are acutely aware of the other side of the story: the negative effects of AI usage. We call this phenomenon “overreliance on AI”.


The signs are everywhere:


"I don't know how I used to spend so long writing," people say, marveling at AI's speed. 


"I can't imagine doing this work without AI anymore." 


“Why do my workers keep sending me AI-sounding work? I can’t use this!”


Sound familiar? 


Just as GPS navigation has left many of us unable to find our way without digital assistance, AI dependency is eroding our cognitive abilities and motivation. While getting lost without our handphones might mean a detour, losing our capacity for clear thinking has far more serious consequences.



How Did We Get Here? Falling Into AI Reliance

Technology: From Fire to AI

Humans have long used technology to make life simpler. Inventions like on-demand fire, electricity, calculators, and handphones have all made life far easier for us. As humans, we have a natural tendency to lean on technology to reduce effortful work wherever possible. As such, it is only too easy to lean heavily on AI, one of the most powerful technologies of the 21st century, if not the most powerful. Let’s take a closer look at the factors surrounding this.


So what else makes overrelying on AI ever so easy? AI systems, particularly advanced ones like ChatGPT and Claude, have mastered something that humans find irresistible: confident presentation. When AI delivers an answer, it doesn't express uncertainty. It presents information with the same authoritative tone whether it's discussing established scientific facts or completely fabricating details that sound plausible.





Next, let's be honest: thinking is effortful, hard work. Our brains are naturally wired to seek cognitive shortcuts, and AI offers the ultimate shortcut. Why spend hours wrestling with a complex analysis when AI can provide a comprehensive breakdown in seconds? Why struggle through writer's block when AI can generate polished essays instantly?


This tendency toward "automation bias" means we automatically trust and depend on automated systems, especially when they make our lives easier. 



Inside the black box of AI, hallucinations go undetected

Many people using AI daily don't fully understand how these systems actually work. They don't know about training data limitations, the potential for "hallucinations" (when AI confidently presents made-up information), or the biases embedded in AI responses.

Without this understanding, users are less likely to critically evaluate AI output. This knowledge gap creates a dangerous blind spot in our decision-making process.


What’s more, in today’s workplace the pressure for speed and results is relentless. Deadlines loom, expectations soar, and the complexity of tasks seems to increase daily. In this environment, AI appears as a lifeline: a way to meet impossible demands and tackle overwhelming challenges.


What’s The Future For Humans?

Fictional Future?

Or Realistic Outcome?


The convenience of AI is immediately apparent, but its costs are often invisible until it's too late. These hidden consequences affect not just our work quality, but our fundamental capacity to think, learn, and grow.


Neuroscience has found that our brains operate on a "use it or lose it" principle. When we consistently outsource our thinking to AI, we're gradually weakening our cognitive abilities.


Consider this: one of the clearest paths to critical thinking is writing clearly. Much of our thinking happens in natural language, primarily through internal dialogue. We are thinking when we:

  1. ask ourselves questions

  2. reason through problems

  3. form judgments


When we contract out this critical cognitive process to AI, we lose more than just writing skills; we lose our ability to think through complex issues independently.


Research reveals the scope of this problem. Studies have found that overreliance on AI chat systems significantly impacts crucial thinking abilities, encouraging what researchers call "passive thinking habits." Instead of actively wrestling with information and forming our own judgments, we become accustomed to receiving pre-packaged answers.


Perhaps even more concerning is how overreliance affects our decision-making quality. A study published in Nature found that 27.7% of the decline in decision-making quality among university students was directly connected to AI's influence. When we consistently defer to AI recommendations without proper critical evaluation, we develop a dangerous habit of unquestioning acceptance.


This creates a vulnerability in our judgment. We might accept AI suggestions even when they're wrong, incomplete, or unsuitable for our specific situation. The ability to distinguish between AI-generated insights and human wisdom diminishes, making us more susceptible to AI's inherent biases and potential misinformation.


Another major side effect of overreliance on AI is demotivation. Research has found that AI usage is correlated with increased human laziness. When AI takes over the thinking work, people become less willing to put in mental effort, leading to what researchers call "inert thinking": a state of mental sluggishness where active cognitive processing is avoided.


AI systems aren't perfect—they make mistakes, exhibit biases, and sometimes fabricate information that sounds completely believable. When overreliant users don't critically examine AI outputs, they miss these flaws, leading directly to poorer results and less effective performance. This undermines the very efficiency that AI promises to deliver.



The Six-Year-Old Genius Problem


“Here's a useful way to think about AI: imagine a six-year-old who has memorized every encyclopedia in the world but still has the problem-solving skills of a six-year-old. AI has access to vast amounts of information, but it lacks the wisdom, contextual understanding, and nuanced judgment that comes from real-world experience.” - Ashvin Pravin, Cleve AI.


Otti agrees with this viewpoint wholeheartedly. This becomes particularly problematic with complex, interdisciplinary challenges that require deep integration of multiple factors—which describes most real-world decisions. AI might provide solutions that look comprehensive but miss crucial contextual elements that a human expert would immediately recognize.


Human-AI Synergy As The Solution


Recognizing the dangers of overreliance doesn't mean we should abandon AI altogether. The answer lies in what experts call Human-AI synergy: a collaborative approach where human intelligence and artificial intelligence leverage each other's unique strengths.


This means using AI as a thinking partner, not a thinking replacement. It means maintaining our critical faculties while leveraging AI's computational power. It means staying in the driver's seat of our own cognitive processes while using AI as a sophisticated navigation system.


Building healthy AI habits requires intentional effort and ongoing vigilance. Here are some practical strategies:


  1. Maintain Critical Distance. Always approach AI output with healthy skepticism. Ask yourself: Does this make sense? What might be missing? What assumptions is the AI making?

  2. Preserve Core Skills. Regularly practice essential skills without AI assistance. Write important communications yourself. Analyze data manually. Solve problems through your own reasoning process.

  3. Verify and Cross-Reference. Never rely on AI as your sole source of information. Cross-check important facts, verify statistics, and consult multiple sources for critical decisions.

  4. Stay Connected to Your Process. Understand not just what AI recommends, but why. Be able to explain and defend your AI-assisted work based on your own understanding and judgment.


Conclusion: What Next?

The challenge of AI overreliance reflects a broader question about human agency in an increasingly automated world. We need people who can think clearly when the systems fail, who can spot the biases and errors that AI perpetuates, and who can provide the wisdom and contextual understanding that no algorithm can replicate.


Your Next Step

At Otti NeuroLearning Institute, we develop AI training programs that are specifically designed to enhance human thinking rather than replace it. We focus on developing Human-AI synergy, a capability where human intelligence and artificial intelligence work together to bring out the best of both worlds in productivity, decision-making, and value generation. Get in touch with us to find out more!



Published on: 12 September 2025

Written by:  WOON Ken Xhen


© 2025 Centre of Applied Metacognition (CAM)


 
 
 
