Critical thinking for the unstoppable advance of artificial intelligence

Author: Martha R. Villabona works at Subdirección General de Cooperación Territorial e Innovación Educativa of the Spanish Ministry of Education and Vocational Training, where she coordinates the area of multiple literacies.

The expansion of generative artificial intelligence (GenAI) such as that offered by ChatGPT has multiplied the amount of content that can be produced quickly and easily. In this scenario, two skills become decisive for safe and effective use: critical thinking (the ability to analyze and evaluate information) and media literacy (knowing how to locate, evaluate, and produce information responsibly).


A recently published study [1] analyzed data from 500 secondary school students (aged 12-17), examining the associations between the use of GenAI, innovation capacity (a construct that integrates critical thinking, creative problem solving, and adaptive learning), and digital literacy. The analysis found high standardized coefficients between the application of GenAI and innovation and between GenAI and digital literacy, as well as a positive relationship between innovation and digital literacy. It should be noted that the study defines digital literacy in terms that include critical evaluation of information, content creation, and ethical use. It is also important to emphasize that these associations were observed in a non-experimental design; even so, the results are consistent with the idea that well-structured interactions with AI can coexist with analytical and verification processes.

AI can offer poor-quality information: inaccuracies, outdated facts, inconsistent reasoning, or hallucinated references. It also carries biases linked to the opacity of its training data. Studies such as Naqvi’s [2] warn that the very fluency of GenAI can reduce user vigilance and lead to cognitive complacency, that is, acceptance of the system’s output because of its convincing appearance and the time it saves, displacing independent analysis, problem solving, and other metacognitive functions. GenAI can expand capabilities, but it demands critical thinking to verify the information it provides through different mechanisms.

The deployment of large-scale models has made it easy to generate text, images, audio, and video that appear credible but are false. When such content is used maliciously in democratic processes or to erode public trust, it constitutes disinformation. Recent research [3] reflects the social impact of deepfakes generated with GenAI and highlights that the ability to produce convincing content at scale increases the reach and speed of disinformation campaigns. Among the measures proposed to strengthen citizens’ resilience to false content produced with GenAI are educational programs based on critical thinking.

Furthermore, the problem is not limited to detecting falsehoods; it also lies in the fact that repeated exposure to dubious content causes even legitimate information to be received with skepticism, undermining social trust.

Among the best practices suggested in the research are verification proportional to impact, systematic external verification, and explicit recognition of limitations. The first consists of assessing the consequences of sharing content generated with GenAI (a report, a recommendation, an email, a decision), since higher stakes demand greater traceability and evidence (sources, date, method). The second consists of checking each key statement against at least one independent and recent source before adopting it; in sensitive areas, primary sources or meta-analyses should be preferred. The third is to make explicit the assumptions, the uncertainties, and what the system “does not know” or cannot verify. Making these gaps visible helps to identify biases and decide when additional verification is required.
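The three practices above can be read as a pre-publication checklist. As a purely illustrative sketch (the class, field names, and source thresholds below are assumptions for the example, not part of the cited research), they might look like this:

```python
from dataclasses import dataclass

@dataclass
class GenAIClaim:
    statement: str
    impact: str                      # "low", "medium", or "high" (assumed scale)
    independent_sources: int = 0     # recent, independent confirmations found
    limitations_noted: bool = False  # assumptions/uncertainties made explicit?

def verification_gaps(claim: GenAIClaim) -> list:
    """Return the checks still unmet before sharing GenAI-assisted content."""
    gaps = []
    # 1. Verification proportional to impact: higher-impact content
    #    demands more traceability (illustrative thresholds).
    required = {"low": 1, "medium": 1, "high": 2}[claim.impact]
    # 2. Systematic external verification: at least one independent,
    #    recent source per key statement.
    if claim.independent_sources < required:
        gaps.append(f"needs {required} independent source(s), "
                    f"has {claim.independent_sources}")
    # 3. Explicit recognition of limitations: state assumptions and
    #    what the system cannot verify.
    if not claim.limitations_noted:
        gaps.append("state assumptions and unverifiable points")
    return gaps

print(verification_gaps(GenAIClaim("X cures Y", impact="high")))
```

An empty list would mean the checklist is satisfied; any entries name the verification work still pending for that claim.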

The results obtained so far on GenAI literacy and critical thinking point to a dual scenario. On the one hand, the use of AI can coexist with better performance in skills related to critical evaluation and digital literacy when implemented in an active and structured way. On the other hand, cognitive complacency and low media literacy increase the likelihood of error and exposure to misinformation in environments saturated with plausible content.

Critical thinking must be the counterweight to what AI offers. Only by achieving balance can GenAI be used effectively.

References

  1. Wu, D., & Zhang, J. (2025). Generative artificial intelligence in secondary education: Applications and effects on students’ innovation skills and digital literacy. PLoS ONE, 20(5), e0323349. https://doi.org/10.1371/journal.pone.0323349
  2. Naqvi, W. M., Ganjoo, R., Rowe, M., Pashine, A. A., & Mishra, G. V. (2025). Critical thinking in the age of generative AI: Implications for health sciences education. Frontiers in Artificial Intelligence, 8, 1571527. https://doi.org/10.3389/frai.2025.1571527
  3. Shoaib, M. R., Wang, Z., Taleby Ahvanooey, M., & Zhao, J. (2023). Deepfakes, misinformation, and disinformation in the era of frontier AI, generative AI, and large AI models. arXiv. https://arxiv.org/abs/2311.17394
