Artificial Intelligence in Applied Cognitive Psychology: A Commentary

Authors

B. T. Sharpe, M. Rod, & G. Horne

DOI:

https://doi.org/10.23947/2334-8496-2026-14-1-115-124

Keywords:

Artificial Intelligence, Applied Cognitive Psychology, Expertise Development

Abstract

 AI integration in applied cognitive psychology demands critical evaluation beyond efficiency metrics. Despite widespread institutional adoption, emerging research reveals concerning patterns including high hallucination rates, deteriorating retention with prolonged exposure, and a consistent tendency to support surface-level task completion at the expense of deeper cognitive processing. These findings align with established principles regarding desirable difficulties, metacognitive monitoring, skill acquisition, and vigilance, suggesting that applications prioritising task completion over cognitive development risk undermining the adaptive expertise essential for complex professional contexts. Methodological weaknesses in existing research, including brief interventions, inadequate control comparisons, and reliance on satisfaction measures, further constrain confident conclusions. Nonetheless, several domains including cognitive accessibility, rehabilitation, vigilance, and adaptive tutoring represent areas of genuine promise where AI’s architecture may complement rather than conflict with established cognitive science. This commentary synthesises emerging evidence, examines methodological limitations, proposes research priorities for responsible integration, and reflects on where cautious optimism is warranted.


References

Ahmed, I., Jeon, G., & Piccialli, F. (2022). From artificial intelligence to explainable artificial intelligence in industry 4.0: A survey on what, how, and where. IEEE Transactions on Industrial Informatics, 18(8), 5031–5042. https://doi.org/10.1109/TII.2022.3146552

Akgun, S., & Toker, S. (2024). Evaluating the effect of pretesting with conversational AI on retention of needed information. arXiv. https://doi.org/10.48550/arXiv.2412.13487

Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Addison-Wesley.

Baars, M., Vink, S., van Gog, T., de Bruin, A., & Paas, F. (2014). Effects of training self-assessment and using assessment standards on retrospective and prospective monitoring of problem solving. Learning and Instruction, 33, 92–107. https://doi.org/10.1016/j.learninstruc.2014.04.004

Barcaui, A. (2025). ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention. Social Sciences & Humanities Open, 12, 102287. https://doi.org/10.1016/j.ssaho.2025.102287

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). Worth Publishers.

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). MIT Press.

Bjork, R. A., & Bjork, E. L. (2020). Desirable difficulties in theory and practice. Journal of Applied Research in Memory and Cognition, 9(4), 475–479. https://doi.org/10.1016/j.jarmac.2020.09.003

Chelli, M., Descamps, J., Lavoué, V., Trojani, C., Azar, M., Deckert, M., Raynier, J. L., Clowez, G., Boileau, P., & Ruetsch-Chelli, C. (2024). Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews: Comparative analysis. Journal of Medical Internet Research, 26, e53164. https://doi.org/10.2196/53164

Chi, M. T. H. (2009). Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1(1), 73–105. https://doi.org/10.1111/j.1756-8765.2008.01005.x

Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329–335. https://doi.org/10.1136/medethics-2020-106820

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406. https://doi.org/10.1037/0033-295X.100.3.363

Essel, H. B., Vlachopoulos, D., Essuman, A. B., & Amankwa, J. O. (2024). ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Computers and Education: Artificial Intelligence, 6, 100198. https://doi.org/10.1016/j.caeai.2023.100198

Exintaris, B., Karunaratne, N., & Yuriev, E. (2023). Metacognition and critical thinking: Using ChatGPT-generated responses as prompts for critique in a problem-solving workshop (SMARTCHEMPer). Journal of Chemical Education, 100(8), 2972–2980. https://doi.org/10.1021/acs.jchemed.3c00481

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911. https://doi.org/10.1037/0003-066X.34.10.906

Shumate, J. N., Rozenblit, E., Flathers, M., Larrauri, C. A., Hau, C., Xia, W., ... & Torous, J. (2025). Governing AI in mental health: 50-state legislative review. JMIR Mental Health, 12, e80739.

Gilbert, S. J., Boldt, A., Sachdeva, C., Scarampi, C., & Tsai, P.-C. (2023). Outsourcing memory to external tools: A review of ‘intention offloading’. Psychonomic Bulletin & Review, 30(1), 60–76. https://doi.org/10.3758/s13423-022-02139-4

Grinschgl, S., Papenmeier, F., & Meyerhoff, H. S. (2021). Consequences of cognitive offloading: Boosting performance but diminishing memory. Quarterly Journal of Experimental Psychology, 74(9), 1477–1496. https://doi.org/10.1177/17470218211008060

Hancock, P. A. (2013). In search of vigilance: The problem of iatrogenically created psychological phenomena. American Psychologist, 68(2), 97–109. https://doi.org/10.1037/a0030214

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31. https://doi.org/10.1207/S15326985EP3801_4

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., ... Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Koriat, A., & Bjork, R. A. (2005). Illusions of competence in monitoring one’s knowledge during study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(2), 187–194. https://doi.org/10.1037/0278-7393.31.2.187

Koriat, A., & Helstrup, T. (2007). Metacognitive aspects of memory. In Everyday memory (pp. 251–274). Psychology Press.

Li, T., Ji, Y., & Zhan, Z. (2024). Expert or machine? Comparing the effect of pairing student teacher with in-service teacher and ChatGPT on their critical thinking, learning performance, and cognitive load in an integrated-STEM course. Asia Pacific Journal of Education, 44(1), 45–60. https://doi.org/10.1080/02188791.2024.2305163

MacDonald, R. (2023). Dude, where’s my citations? ChatGPT’s hallucination of citations. Mind Pad: The Canadian Psychological Association’s Magazine for Psychology Students and Instructors, Winter 2023. https://cpa.ca/docs/File/Students/MindPad/MindPad_Winter2023.pdf

Marshall, T., Keville, S., Cain, A., & Adler, J. R. (2022). Facilitating reflection: A review and synthesis of the factors enabling effective facilitation of reflective practice. Reflective Practice, 23(4), 483–496. https://doi.org/10.1080/14623943.2022.2064444

Martinez-Martin, N., Luo, Z., Kaushal, A., Adeli, E., Haque, A., Kelly, S. S., ... Char, D. S. (2020). Ethical issues in using ambient intelligence in health-care settings. The Lancet Digital Health, 2(2), e115–e123. https://doi.org/10.1016/S2589-7500(20)30275-2

Marton, F., & Säljö, R. (1976). On qualitative differences in learning: I, Outcome and process. British Journal of Educational Psychology, 46(1), 4–11. https://doi.org/10.1111/j.2044-8279.1976.tb02980.x

Meskó, B., & Topol, E. J. (2023). The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digital Medicine, 6(1), 120. https://doi.org/10.1038/s41746-023-00873-0

Mostafa, M. A. M. (2025). Bridging the digital divide: AI, VR, and AR for equitable K-12 education. SSRN Electronic Journal. https://dx.doi.org/10.2139/ssrn.5124551

Ododo, E. P., Iniobong, U. B., Udoessien, A. I., Ukpe, I. U., & James, O. D. (2024). Artificial intelligence in the classroom: Perceived challenges to vocational education student retention and critical thinking in tertiary institutions. American Journal of Interdisciplinary Innovative Research, 6(9), 30–39. https://doi.org/10.37547/tajiir/Volume06Issue09-05

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002

Roediger, H. L., III, & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255. https://doi.org/10.1111/j.1467-9280.2006.01693.x

Rottner, R., Porter, L., Bock, J., Jannone, J., Senerchia, R. W., Ward, J., & Whittinghill, J. (2025). AI and the digital divide. In J. R. Corbeil & M. E. Corbeil (Eds.), Teaching and learning in the age of generative AI (pp. 309–331). Routledge.

Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review of the testing effect. Psychological Bulletin, 140(6), 1432–1463. https://doi.org/10.1037/a0037559

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460–475. https://doi.org/10.1006/ceps.1994.1033

Sharpe, B. T., Cotten, E. L., & Sivyer, L. R. (2026). The dangers of occupational vigilance: A scoping review. Safety and Reliability, 1–23. https://doi.org/10.1080/09617353.2025.2607194

Sharpe, B. T., Smith, M. S., Williams, S. C., Hampshire, A., Balaet, M., Trender, W., ... Smith, J. (2024). Cognition and lifeguard detection performance. Applied Cognitive Psychology, 38(1), e4139. https://doi.org/10.1002/acp.4139

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176–199. https://doi.org/10.1177/1745691615569000

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4

Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2(1), 59–89. https://doi.org/10.1207/s1532690xci0201_3

Sweller, J., Van Merrienboer, J. J., & Paas, F. G. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. https://doi.org/10.1023/A:1022193728205

Van Dijk, J. A. G. M. (2020). The digital divide. Polity Press.

Vansteenkiste, P., Bourgois, J. G., & Lenoir, M. (2025). Baywatch in the laboratory: Differences in visual surveillance between lifeguards and non-lifeguards. Applied Cognitive Psychology, 39(5), e70110. https://doi.org/10.1002/acp.70110

Wang, X., & Fan, Y. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12, 621. https://doi.org/10.1057/s41599-025-04787-y

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2), 140–154. https://doi.org/10.1086/691462

Warm, J. S., Parasuraman, R., & Matthews, G. (2008). Vigilance requires hard mental work and is stressful. Human Factors, 50(3), 433–441. https://doi.org/10.1518/001872008X312152

Williamson, B. (2021). Meta-edtech: Digital education governance, data science and algorithmic power. Learning, Media and Technology, 46(1), 1–2. https://doi.org/10.1080/17439884.2021.1876089

Yang, Y., Luo, J., Yang, M., & Chen, J. (2024). From surface to deep learning approaches with generative AI in higher education: An analytical framework of student agency. Studies in Higher Education, 49(5), 817–830. https://doi.org/10.1080/03075079.2024.2327003


Published

2026-05-13

How to Cite

Sharpe, B. T., Rod, M., & Horne, G. (2026). Artificial Intelligence in Applied Cognitive Psychology: A Commentary. International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE), 14(1), 115–124. https://doi.org/10.23947/2334-8496-2026-14-1-115-124


Received 2026-03-14
Accepted 2026-05-06
Published 2026-05-13
