Are We Abandoning Critical Thinking? The Risks of Over-Reliance on AI


By Dr. R. Esi Asante

Many individuals have formed the habit of depending on generative artificial intelligence (AI) to handle nearly every task for them.

Relying excessively on artificial intelligence for creativity, education, academic work, and business operations instead of fostering critical thinking skills is particularly alarming as we look ahead. For some, this dependence has become an addiction, leading them to neglect the development of essential analytical capabilities.

For example, people turn to AI for guidance on financially precarious and morally significant choices, often with unfavorable outcomes. This is particularly common when the recommendations conflict with available data and personal beliefs (Klingbeil et al., 2024).

Students at every stage of their education, from elementary school through higher education, have come to depend heavily on artificial intelligence tools: first search engines and, more recently, ChatGPT with its extensive capabilities. They use these resources for tasks ranging from completing homework, writing essays, and drafting proposals and emails to developing projects and making significant personal choices.

Relying solely on one's own mental abilities to make sense of the world is increasingly uncommon. As Dedyukhina (2025) points out, people offload ever more tasks onto generative artificial intelligence systems, believing this saves time and enhances their intellect. While automating workflows and routines does offer advantages, the trend toward wholesale offloading is unmistakable.

AI can assist with a wide range of tasks, including writing, research, data-driven decision making, and data analysis. Researchers now upload almost everything to AI to do the work for them, and in corporate and business contexts AI has taken over tasks such as analyzing complex texts.

This excessive dependence on artificial intelligence is concerning. It promotes cognitive laziness and fosters a growing reliance on AI in every aspect of life, a trend that can erode intellectual capacities such as IQ, along with essential skills like memory retention, concentration, and analytical reasoning.

Generative AI is clearly advancing toward greater levels of intelligence, emulating human cognition and, according to some studies, surpassing human capabilities.

Thomas (2024) noted that as AI robots become increasingly intelligent and agile, fewer human workers will be required for many tasks. Although one estimate suggests that around 97 million new jobs may emerge by 2025 thanks to advances in AI, many employees may lack the skills these specialized positions demand and risk being marginalized unless organizations invest in upskilling their workforce. It is also crucial to recognize that employing AI technologies carries inherent risks.

New studies show a notable inverse relationship between frequent use of AI tools and critical thinking skills, an effect mediated by increased cognitive offloading.

Younger participants showed greater reliance on AI tools and lower levels of critical thinking than their older counterparts (Gerlich, 2025). Another scholar puts it more simply: as we increasingly use generative artificial intelligence, we delegate more of our mental tasks to these systems.

To put it another way, we rely on technology as an external aid for our cognitive processes, which can lead to a diminished ability to analyze data independently (Dedyukhina, 2025). This piece critically examines the impact of depending excessively on artificial intelligence tools at the expense of personal intellectual capabilities. It also explores the repercussions of this dependency and suggests strategies to mitigate such reliance.


Critical Thinking and Cognitive Offloading

Critical thinking encompasses the skill to analyze, assess, and combine data effectively for making well-informed choices (Halpern, 2010). This process entails clear and rational thought processes, grasping logical relationships among concepts, critiquing arguments, and spotting flaws in reasoning (Ennis, 1987).

Scholarly success, professional expertise, and enlightened citizenship all hinge on critical skills such as problem-solving, decision-making, and reflective thought—elements crucial for thriving in complex and dynamic environments (Halpern, 2010).

Cognitive offloading entails using external tools and agents to lessen mental workload. This boosts effectiveness by freeing up cognitive resources.

In certain instances, though, heavy dependence on external resources—especially artificial intelligence—might lessen the requirement for significant mental engagement, which could impact critical analysis (Risko and Gilbert, 2016).

By lessening mental effort, cognitive offloading impacts cognitive growth and analytical reasoning (Fisher, 2011), which can result in a reduction of inherent intellectual skills.

Gerlich’s research from 2025 indicates that extensive utilization of artificial intelligence tends to have a negative correlation with critical thinking capabilities (such as analyzing, evaluating, and integrating data for informed choices). This reliance often leads individuals to prioritize locating information over understanding it, which can impair memory retention, diminish problem-solving skills, and reduce capacity for making independent decisions.

Teenage and young adult users (aged 17-25) tend to be more vulnerable: they engage with artificial intelligence systems most often and exhibit weaker critical thinking skills.

According to Dedyukhina (2025), the strongest predictor of cognitive decline was the frequency of AI tool use (higher usage correlates with greater risk), followed by users' educational attainment (more education correlates with lower risk).

Other contributing factors were the absence of deep cognitive activity, over-reliance on AI for decisions (with individuals consulting AI on every matter), and the attitude that using AI saves time, which makes people more inclined to adopt it.

Thus, in times when cognitive abilities are critically needed to stay relevant, we now unlearn how to think. Are we actually outsourcing our cognitive abilities to AI? The consequences may be dire in the near future.


The Costs and Implications of Excessive Dependence on AI for Tomorrow

Research indicates that depending heavily on artificial intelligence for guidance, decision-making, and recommendations can harm people's mental state. This dependency may erode critical thinking skills, as AI starts to shape how individuals perceive reality, conduct their daily activities, and anticipate the future (Kalezix, 2024).

It is clear that AI presents numerous opportunities and has the potential to trigger a new technological revolution. With self-learning algorithms and the generative AI tools launched in 2022, we are witnessing an unprecedented surge in digitization and a growing dependence on these technologies across all aspects of life.


AI Influences Cognitive Abilities

The analytic aspect of critical thinking entails dissecting intricate details into more basic parts to gain clearer insight.

AI tools such as data analytics software and machine learning algorithms can enhance analytical capabilities by processing vast amounts of data and identifying patterns that might be difficult for humans to detect (Bennett and McWhorter, 2021).

However, there is a risk that over-reliance on AI for analysis may undermine the development of human analytical skills. Individuals who depend too heavily on AI to perform analytical tasks may become less proficient at engaging in deep, independent analysis.


Cognitive Offloading

Using AI tools for cognitive offloading entails transferring responsibilities like retaining memories, making decisions, and accessing information to outside systems. This can boost mental capabilities by enabling people to concentrate on intricate and innovative pursuits.

Nevertheless, depending heavily on artificial intelligence for cognitive offloading can have substantial impacts on mental capabilities and analytical thought processes. This could result in diminished cognitive exertion, potentially cultivating what certain scholars call ‘mental indolence’ (Carr, 2010).

This might result in a decrease in people’s capability to accomplish these activities autonomously, possibly diminishing their cognitive resilience and adaptability over time (Sparrow and Wegner, 2011). Moreover, it has the potential to degrade crucial mental capabilities like memory storage, critical analysis, and problem resolution.


Reduced Problem-Solving Skills

Recent research has underscored the increasing worry that although AI tools can substantially decrease mental workload, they might impede the growth of essential analytical abilities (Zhai et al., 2024). According to Krullaraas et al. (2023), an excessive dependence on these technological aids for scholarly activities resulted in diminished problem-solving capabilities, as evidenced by decreased student involvement in autonomous intellectual analysis.

These insights highlight the importance of adopting a balanced strategy when integrating AI into education systems, making sure that cognitive support doesn’t compromise the cultivation of essential critical thinking abilities. Although AI applications can enhance the learning of fundamental competencies, they might fall short in nurturing advanced analytical reasoning necessary for tackling new or intricate challenges (Firth et al., 2019).

Depending too much on artificial intelligence for education may impede the growth of crucial analytical abilities, since pupils might lose proficiency in formulating their own ideas independently.

Intelligent Tutoring Systems (ITSs), which use AI algorithms to mimic individualized tutoring sessions, have demonstrated enhanced educational outcomes, especially in STEM disciplines (Koedinger and Corbett, 2006).

Nevertheless, such systems might lead to cognitive offloading, causing students to depend on the system for guidance instead of interacting proactively with the content.


Loss of Human Influence

In certain segments of society, excessive dependence on artificial intelligence could diminish the role of human input and capability. AI is now used across all sectors; in healthcare, for example, it could erode human compassion and clinical reasoning.

The consistent use of artificial intelligence in creative tasks might stifle human imagination and emotional articulation. Additionally, excessive interaction with these systems could result in diminished peer communication and social abilities, a trend increasingly observed within families (Thomas, 2024).

Existing research on excessive dependence on AI dialogue systems has documented a consistent pattern: users tend to accept AI output without verification, even when it contains fabricated information known as AI hallucinations.

This heightened reliance is further intensified by cognitive biases: decisions veer away from rational analysis toward mental shortcuts, leading to the unexamined adoption of AI-provided information (Gao et al., 2022; Grassini, 2023).

Moreover, many AI systems are trained using datasets that contain built-in biases, leading users to view these prejudiced outcomes as neutral. Consequently, this fosters an unwarranted confidence in AI, which can skew analyses and interpretations, thereby reinforcing the preexisting biases (Xie et al., 2021).

Over-reliance on unverified AI outputs can cause misclassification and misinterpretation. This poses a significant risk, potentially culminating in research misconduct, including plagiarism, fabrication, and falsification.

Dempere et al. (2023) emphasized the potential dangers of incorporating AI conversation systems into higher education, including issues like breaches of privacy and unauthorized data usage.

More broadly, the risks associated with artificial intelligence include job displacement through automation, the creation of deepfakes, breaches of privacy, algorithmic bias stemming from poor data quality, widening socio-economic disparities, financial market volatility, the automation of weapon systems, and the potential for unmanageable self-aware AI.

Additionally, issues such as opacity and lack of clarity, social manipulation via algorithms, monitoring using artificial intelligence technologies, and erosion of ethical standards and benevolence due to AI have been highlighted (Thomas, 2024).

In his address for World Peace Day, the late Pope Francis cautioned about the potential misuse of artificial intelligence, highlighting that it might “generate assertions that initially seem credible yet lack foundation or contain prejudices.” This concern stems from the fact that such capabilities can amplify misinformation efforts, erode trust in media outlets, and interfere with electoral processes—ultimately escalating the likelihood of “aggravating tensions and obstructing harmony” (Pope Francis, January 2024).


Suggestions to Minimize Over-Reliance

Gerlich's (2025) findings underscore the potential cognitive costs of depending heavily on AI tools, stressing the need for educational approaches that encourage thoughtful engagement with these technologies. There is also a risk that as AI systems and robots rapidly improve and surpass human capabilities, they could develop harmful goals aimed at seizing control from humanity.

As stated by Thomas (2024), this issue has transitioned from being mere sci-fi speculation to becoming an imminent concern. It is crucial for organizations to start contemplating potential responses. Experts propose several strategies to address the risks associated with AI broadly: implementing legal frameworks, setting up corporate guidelines for AI usage, engaging in debates about algorithm oversight, incorporating humanistic viewpoints into technological advancements, and minimizing bias.

As we start to believe that ChatGPT can handle all our tasks and become entirely dependent on it, we risk developing mental laziness. Over time, this reliance could diminish our cognitive skills precisely when we need them the most.

This presents a significant threat to humanity, potentially leading to disruptions in the workforce as essential skills required for tomorrow’s labor force may vanish. It is crucial to preserve human cognitive capabilities rather than delegating them entirely to technological solutions. Deliberate steps must be taken to mitigate and possibly halt these trends.

Provided by Syndigate Media Inc. (Syndigate.info).
