Algorithmic Interviewers and the Reflexivity of Knowledge Production: Toward an Intelligent Critical Methodology in AI-Based Qualitative Research

Document Type: Scientific-Research (original research article)

Authors

1 Professor, Department of Social Sciences, University of Tabriz, Tabriz, Iran

2 Assistant Professor, Department of Sport Management, University of Tabriz, Tabriz, Iran

Abstract

Extended Abstract
 
Introduction and Objectives: The rapid evolution of artificial intelligence (AI) technologies has made the rethinking of knowledge production methods an undeniable necessity. AI, as a swiftly advancing field, is poised to transform the very foundations of research methodology. One of the most remarkable manifestations of this transformation is the emergence of algorithmic interviewers—tools that, relying on natural language processing algorithms, establish quasi-human interactions with participants.
AI language models are recognized for their extraordinary ability to write academic papers, design experiments, develop theories, transcribe, translate, conduct thematic analysis, code qualitative data, summarize articles, recommend journals, guide researchers, complete sentences and paragraphs, and generate human-like text. These capabilities make them invaluable resources for qualitative researchers. However, their entry into research is not merely a technological advancement; it entails a redefinition of the relationships among humans, machines, power, and knowledge.
Given the growing difficulty of conducting contemporary qualitative research and data analysis in the humanities and social sciences without computational tools, the adoption of modern technologies in computer-assisted qualitative data analysis (CAQDAS) reshapes interpretive frameworks and transforms our understanding of research phenomena. The aim of this paper is to critically re-examine the relationship between AI and knowledge production in qualitative research—with a particular focus on algorithmic interviewers and their potential to disrupt or reproduce epistemic and social relations.
Drawing on Michel Foucault’s power-knowledge theory and Manuel Castells’ network society theory, this study proposes a methodological model termed “Intelligent Critical Methodology”—a framework that enables researchers to develop both theoretical and practical sensitivity toward the biases, limitations, and ethical-political dimensions of AI-based research tools.
Method: This study investigates the role of algorithmic interviewers in redefining social relations and power structures in qualitative research. The research was conducted in two stages: a comparative-documentary phase and an intelligent critical phase.
In the first phase, Foucault’s and Castells’ theories were synthesized to develop a theoretical framework for analyzing AI-driven interviews. Foucault’s perspective aids in examining the power dynamics embedded in algorithmic tools, while Castells’ network-society lens frames large language models (LLMs) such as ChatGPT as new sources of epistemic power.
The second phase focused on developing the Intelligent Critical Methodology, which integrates AI technologies with human interpretation. This hybrid methodology enhances analytical transparency and precision, reduces analysis time, and simultaneously strengthens data validity through human interpretive oversight—thereby mitigating the biases inherent in purely human or purely algorithmic analyses.
To assess the validity of this indigenous Intelligent Critical Methodology, the study examined its alignment with power-knowledge and network-society theories. Triangulation through combined human–algorithmic analysis, joint coding, and participant review of findings contributed to the study’s credibility and dependability.
Ultimately, the research designed an indigenous intelligent critical framework that allows researchers to simultaneously leverage human experience and algorithmic processing power for deeper insights into digital-era social interactions. This approach underscores the importance of algorithmic transparency and data literacy, urging researchers to act as creative co-producers of knowledge in collaboration with AI.
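To make the joint human–algorithmic coding described above more concrete, the following is a minimal sketch, not the authors’ actual protocol, of how an LLM could propose candidate codes for an interview excerpt while every suggestion is logged for human acceptance, revision, or rejection. It assumes the OpenAI Python SDK; the model name, prompt wording, and file path are illustrative.

```python
# A minimal sketch of hybrid human-LLM coding, assuming the OpenAI Python SDK.
# Model name, prompt wording, and file path are illustrative, not a published protocol.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_codes(excerpt: str) -> list[str]:
    """Ask the LLM for candidate qualitative codes for one interview excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You assist with inductive coding of interview data. "
                        "Return 3-5 short candidate codes as a JSON list of strings."},
            {"role": "user", "content": excerpt},
        ],
    )
    # Sketch assumption: the model returns a plain JSON list as requested.
    return json.loads(response.choices[0].message.content)

def log_for_human_review(excerpt: str, codes: list[str], path: str = "audit_log.jsonl"):
    """Append the excerpt and machine-suggested codes so a human coder can accept,
    revise, or reject each one; the final interpretive decision stays with the researcher."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"excerpt": excerpt,
                            "llm_codes": codes,
                            "human_decision": None}) + "\n")

excerpt = "I feel the algorithm decides what counts as a 'good' answer before I even speak."
log_for_human_review(excerpt, suggest_codes(excerpt))
```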
Findings: The findings indicate that large language models such as ChatGPT can substantially enrich qualitative research by identifying latent patterns, formulating questions, transcribing, translating, coding, developing theory, and producing coherent text. However, these models are not neutral tools—they function as epistemic agents capable of influencing discursive structures and interpretive directions.
They can also detect social and cultural patterns, offering deeper insight into power relations. One key outcome of this study is the recognition that algorithmic interviewers can generate transparent and well-documented datasets, enabling researchers to trace analytic processes and substantiate results more effectively. This feature enhances transparency and trustworthiness in qualitative research, facilitating more precise analysis of complex data.
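As a brief illustration of this traceability claim, and assuming the hypothetical audit log sketched in the Method section above, the analytic chain can be re-read and summarized so that reviewers see which interpretations were accepted, revised, or left pending by human coders. Field names follow that illustrative log format, not a published standard.

```python
# A minimal sketch of re-reading the hypothetical audit trail to make the
# analytic chain inspectable; field names are illustrative.
import json
from collections import Counter

def summarize_audit_trail(path: str = "audit_log.jsonl") -> Counter:
    """Count how machine-suggested codes were handled by the human coder,
    so the division of interpretive labor can be traced and reported."""
    decisions = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            decisions[record.get("human_decision") or "pending"] += 1
    return decisions

print(summarize_audit_trail())  # e.g. Counter({'accepted': 42, 'revised': 11, 'pending': 3})
```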
Moreover, integrating algorithmic and human analysis helps reduce bias and improve accuracy—particularly in sociocultural contexts where emotional and contextual interpretation is critical. This hybrid approach represents a paradigm shift in how qualitative data are understood and interpreted.
Finally, the study emphasizes the need to rethink research methodologies and design new approaches to address the ethical and social challenges of AI use in research. The findings provide a foundation for developing research and educational policies on AI-assisted qualitative inquiry, contributing to the improvement of research quality and validity in the social sciences.
Discussion and Conclusion: Recent advances in AI and data-mining technologies have profoundly influenced qualitative and social research methods. This study explored the impact of algorithmic interviewers on qualitative research, particularly regarding social relations and power–knowledge structures.
The results show that large language models (LLMs) such as ChatGPT function not merely as facilitative tools but as epistemic actors that participate in data interpretation and meaning-making. Such transformations also reshape social structures and human relationships.
Foucault’s theory highlights the pervasive role of power in everyday interactions, while Castells conceptualizes information as a new source of power. Together, these perspectives illuminate how AI reconfigures social relations and epistemic authority.
The study concludes that a critical and informed use of AI tools can enhance the precision and coherence of qualitative analyses—especially in complex phases like transcription and thematic analysis. The proposed indigenous intelligent critical methodology enables researchers to combine human interpretation with algorithmic processing, transforming them into creative co-producers of knowledge.
Ultimately, the research underscores the necessity of redefining the researcher’s role and designing new frameworks for documenting human–machine interactions, emphasizing critical and responsible approaches to AI use in social research.
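As one possible starting point for the documentation frameworks called for here, the sketch below shows a hypothetical record format for a single human–machine exchange; all field names and values are illustrative rather than a prescribed schema.

```python
# A minimal sketch of one possible record format for documenting human-machine
# interactions in AI-assisted qualitative research; all fields are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class InteractionRecord:
    """One documented exchange between researcher and algorithmic interviewer."""
    timestamp: str     # when the exchange happened (ISO 8601)
    model: str         # model identifier and version actually used
    prompt: str        # exact text sent to the model
    response: str      # exact text returned by the model
    human_action: str  # how the researcher used it: accepted / revised / rejected
    rationale: str     # the researcher's stated reason, keeping interpretation human

record = InteractionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model="gpt-4o-mini",
    prompt="Suggest follow-up questions about perceived algorithmic authority.",
    response="1. How do you decide whether to trust the system's framing? ...",
    human_action="revised",
    rationale="Reworded to avoid leading the participant.",
)
print(json.dumps(asdict(record), ensure_ascii=False, indent=2))
```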

Keywords


Abram, M. D., Mancini, K. T., & Parker, R. D. (2020). Methods to integrate natural language processing into qualitative research. International Journal of Qualitative Methods, 19, 1609406920984608. https://doi.org/10.1177/1609406920984608
Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2). https://doi.org/10.7759/cureus.35179
Anis, S., & French, J. A. (2023). Efficient, explicatory, and equitable: Why qualitative researchers should embrace AI, but cautiously. Business & Society, 62(6), 1139-1144. https://doi.org/10.1177/00076503231163286
Bahn, S., & Weatherill, P. (2013). Qualitative social research: A risky business when it comes to collecting ‘sensitive’ data. Qualitative Research, 13(1), 19-35. https://doi.org/10.1177/1468794112439016
Bahrini, A., Khamoshifar, M., Abbasimehr, H., Riggs, R. J., Esmaeili, M., Majdabadkohne, R. M., & Pasehvar, M. (2023). ChatGPT: Applications, opportunities, and threats. Systems and Information Engineering Design Symposium (SIEDS).
Baig, K., Altaf, A., & Azam, M. (2024). Impact of AI on Communication Relationship and Social Dynamics: A Qualitative Approach. Bulletin of Business and Economics (BBE), 13(2), 282-289. https://doi.org/10.6101/02283/506
Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 1-41. https://doi.org/10.1007/s40593-021-00285-9
Bala, R. (2022). Challenges and ethical issues in data privacy: academic perspective. International Journal of Information Retrieval Research (IJIRR), 12(2), 1-7. https://doi.org/10.4018/IJIRR.299938
Belli, S., & Leon, M. (2024). Emotions, Attitudes, and Challenges in the Perception of Artificial Intelligence in Social Research. International Conference on Applied Informatics.
Bijker, R., Merkouris, S. S., Dowling, N. A., & Rodda, S. N. (2024). ChatGPT for automated qualitative research: Content analysis. Journal of Medical Internet Research, 26, e59050. https://doi.org/10.2196/59050
Bircan, T. (2024). AI, big data, and quest for truth: the role of theoretical insight. Data & Policy, 6, e44. https://doi.org/10.1017/dap.2024.36
Bishop, L. (2023). A computer wrote this paper: What ChatGPT means for education, research, and writing. SSRN. https://doi.org/10.2139/ssrn.4338981
Boateng, O., & Boateng, B. (2025). Algorithmic bias in educational systems: Examining the impact of AI-driven decision making in modern education. World Journal of Advanced Research and Reviews, 25(1), 2012-2017. https://doi.org/10.30574/wjarr.2025.25.1.0253
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576. https://doi.org/10.1126/science.aaf2654
Bryda, G., & Costa, A. P. (2023). Qualitative research in digital era: innovations, methodologies and collaborations. Social Sciences, 12(10), 570. https://doi.org/10.3390/socsci12100570
Burnard, P. (1991). A method of analysing interview transcripts in qualitative research. Nurse education today, 11(6), 461-466. https://doi.org/10.1016/0260-6917(91)90009-Y
Burton, S. L., Burrell, D. N., White, Y. W., Nobles, C., Dawson, M. E., Brown-Jackson, K. L., Muller, S. R., & Bessette, D. I. (2024). An In-Depth Qualitative Interview: The Impact of Artificial Intelligence (AI) on Privacy Challenges and Opportunities. Intersections Between Rights and Technology, 19-39. https://doi.org/10.4018/979-8-3693-1127-1.ch002
Cano, C. A. G. (2024). Research, Ethics and Artificial Intelligence Challenges and Opportunities. The International Conference on Artificial Intelligence and Smart Environment.
Castells, M. (2011). The rise of the network society. John Wiley & Sons.
Castleberry, A., & Nolen, A. (2018). Thematic analysis of qualitative research data: Is it as easy as it sounds? Currents in pharmacy teaching and learning, 10(6), 807-815. https://doi.org/10.1016/j.cptl.2018.03.019
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the ‘good society’: the US, EU, and UK approach. Science and engineering ethics, 24, 505-528. https://doi.org/10.1007/S11948-017-9901-7
Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., Chen, H., Yi, X., Wang, C., & Wang, Y. (2024). A survey on evaluation of large language models. ACM transactions on intelligent systems and technology, 15(3), 1-45. https://doi.org/10.1145/3641289
Chitty, N., & Dias, S. (2018). Artificial intelligence, soft power and social transformation. Journal of Content, Community and Communication, 7, 1-14.
Christou, P. A. (2023). The use of artificial intelligence (AI) in qualitative research for theory development. The Qualitative Report, 28(9). https://doi.org/10.46743/2160-3715/2023.6536
Costa, A. P. (2023). Qualitative Research Methods: do digital tools open promising trends? Revista Lusófona de Educação, 59.
Cui, T., & Li, S. (2020). System movement space and system mapping theory for reliability of IoT. Future generation computer systems, 107, 70-81. https://doi.org/10.1016/j.future.2020.01.040
Dang, H., Goller, S., Lehmann, F., & Buschek, D. (2023). Choice over control: How users write with large language models using diegetic and non-diegetic prompting. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108-116.
De Paoli, S. (2024). Performing an inductive thematic analysis of semi-structured interviews with a large language model: An exploration and provocation on the limits of the approach. Social Science Computer Review, 42(4), 997-1019. https://doi.org/10.1177/08944393231220483
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., & Eirug, A. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International journal of information management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Floridi, L. (2021). Artificial intelligence, deepfakes and a future of ectypes. Ethics, governance, and policies in artificial intelligence, 307-312. https://doi.org/10.1007/978-3-030-81907-1_17
Foucault, M. (1972). Questions on geography [Interview]. In Power/knowledge: Selected interviews and other writings.
Foucault, M. (1979). Discipline and punish: The birth of the prison. Random House. (Original work published 1975)
Gao, J., Shu, Z., & Yeo, S. Y. (2025). Using Large Language Model to Support Flexible and Structural Inductive Qualitative Analysis. arXiv preprint arXiv:2501.00775. https://doi.org/10.1177/16094069241231168
Gordon, C. (Ed.). (1980). Power/knowledge: Selected interviews and other writings, 1972-1977, by Michel Foucault. Pantheon Books.
Goyanes, M., Lopezosa, C., & Jordá, B. (2024). Thematic analysis of interview data with ChatGPT: Designing and testing a reliable research protocol for qualitative research. Center for Open Science. https://osf.io/8mr2f/download
Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729-754.
Günther, W., Thompson, M., Joshi, M. P., & Polykarpou, S. (2023). Algorithms as Co-Researchers: Exploring Meaning and Bias in Qualitative Research. In Cambridge Handbook of Qualitative Digital Research (pp. 211-228). Cambridge University Press. https://doi.org/10.1017/9781009106436.018
Hajkowicz, S., Sanderson, C., Karimi, S., Bratanova, A., & Naughtin, C. (2023). Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021. Technology in Society, 74, 102260. https://doi.org/10.1016/j.techsoc.2023.102260
Haugaard, M. (2022). Foucault and power: A critique and retheorization. Critical Review, 34(3-4), 341-371. https://doi.org/10.1080/08913811.2022.2133803
Ho, J. Q., Hartanto, A., Koh, A., & Majeed, N. M. (2025). Gender biases within artificial intelligence and ChatGPT: evidence, sources of biases and solutions. Computers in Human Behavior: Artificial Humans, 100145. https://doi.org/10.1016/j.chbah.2025.100145
Isangula, K. G. (2025). Navigating Barriers: Challenges and Strategies for Adopting Artificial Intelligence in Qualitative Research in Low-Income African Contexts. Tanzania Journal of Health Research, 25(3), 2048-2059. https://doi.org/10.4314/thrb.v26i3.14
Jalali, M. S., & Akhavan, A. (2024). Integrating AI language models in qualitative research: Replicating interview data analysis with ChatGPT. System Dynamics Review, 40(3), e1772.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature machine intelligence, 1(9), 389-399.
Kasperiuniene, J. (2021). The use of artificial intelligence in social research: Multidisciplinary challenges. Computer Supported Qualitative Research: New Trends in Qualitative Research (WCQR2021) 5.
Kerry, C. F. (2020). Protecting privacy in an AI-driven world. Brookings Institution.
Khanbhai, M., Anyadi, P., Symons, J., Flott, K., Darzi, A., & Mayer, E. (2021). Applying natural language processing and machine learning techniques to patient experience feedback: a systematic review. BMJ Health & Care Informatics, 28(1), e100262. https://doi.org/10.1136/bmjhci-2020-100262
Kooli, C. (2023). Chatbots in education and research: A critical examination of ethical implications and solutions. Sustainability, 15(7), 5614. https://doi.org/10.3390/su15075614
Lee, E. A. (2023). Deep neural networks, explanations, and rationality. International Conference on Bridging the Gap between AI and Reality.
Leeson, W., Resnick, A., Alexander, D., & Rovers, J. (2019). Natural language processing (NLP) in qualitative public health research: a proof of concept study. International Journal of Qualitative Methods, 18, 1609406919887021. https://doi.org/10.1177/1609406919887021
Madanchian, M., & Taherdoost, H. (2025). The Impact of Artificial Intelligence on Research Efficiency. Results in Engineering, 104743. https://doi.org/10.1016/j.rineng.2025.104743
Mardani, A. (2023). Research Misconduct in Medical Research [Ethics in research]. Encyclopedia of Islamic Medical Ethics, 1(1), 1-30. http://eime.tums.ac.ir/article-1-107-fa.html
Marshall, D. T., & Naff, D. B. (2024). The ethics of using artificial intelligence in qualitative research. Journal of Empirical Research on Human Research Ethics, 19(3), 92-102. https://doi.org/10.1177/15562646241262659
McCradden, M. D., Baba, A., Saha, A., Ahmad, S., Boparai, K., Fadaiefard, P., & Cusimano, M. D. (2020). Ethical concerns around use of artificial intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers: a qualitative study. Canadian Medical Association Open Access Journal, 8(1), E90-E95. https://doi.org/10.9778/cmajo.20190151
Mehrjoo, P. (2020). The study and analysis of Michel Foucault’s educational thoughts, emphasizing purpose, method and content [Analysis]. Rooyesh-e-Ravanshenasi Journal (RRJ), 9(2), 117-126.
Mensah, G. B. (2023). Artificial intelligence and ethics: a comprehensive review of bias mitigation, transparency, and accountability in AI Systems. Preprint, November, 10(1). https://doi.org/10.13140/RG.2.2.23381.19685/1
Mills, K. A. (2019). Big data for qualitative research. Taylor & Francis.
Mittelstadt, B. D. (2019). AI ethics: Too principled to fail? CoRR.
Morgan, D. L. (2023). Exploring the use of artificial intelligence for qualitative data analysis: The case of ChatGPT. International Journal of Qualitative Methods, 22, 16094069231211248. https://doi.org/10.1177/16094069231211248
Nashwan, A. J., & Abukhadijah, H. J. (2023). Harnessing artificial intelligence for qualitative and mixed methods in nursing research. Cureus, 15(11). https://doi.org/10.7759/cureus.48570
Neuberger, C., Bartsch, A., Fröhlich, R., Hanitzsch, T., Reinemann, C., & Schindler, J. (2023). The digital transformation of knowledge order: A model for the analysis of the epistemic crisis. Annals of the International Communication Association, 47(2), 180-201. https://doi.org/10.1080/23808985.2023.2169950
Nii Laryeafio, M., & Ogbewe, O. C. (2023). Ethical consideration dilemma: systematic review of ethics in qualitative data collection through interviews. Journal of Ethics in Entrepreneurship and Technology, 3(2), 94-110. https://doi.org/10.1108/JEET-09-2022-0014
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
Olyaee, S., Montazer, G. A., & Hosseini Moghaddam, M. (2024). Policy Recommendations for the Realization of Intelligent Higher Education in Iran Based on Global Trends. Journal of Science and Technology Policy, 17(2), 69-88. https://doi.org/10.22034/jstp.2024.11659.1784
Oprescu, A. M., Miró-Amarante, G., García-Díaz, L., Rey, V. E., Chimenea-Toscano, A., Martínez-Martínez, R., & Romero-Ternero, M. d. C. (2022). Towards a data collection methodology for Responsible Artificial Intelligence in health: A prospective and qualitative study in pregnancy. Information Fusion, 83, 53-78. https://doi.org/10.1016/j.inffus.2022.03.011
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.
Rädiker, S., & Kuckartz, U. (2020). Focused analysis of qualitative interviews with MAXQDA. MaxQDA Press.
Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256-266. https://doi.org/10.1016/j.chb.2019.04.001
Siiman, L. A., Rannastu-Avalos, M., Pöysä-Tarhonen, J., Häkkinen, P., & Pedaste, M. (2023). Opportunities and challenges for AI-assisted qualitative data analysis: An example from collaborative problem-solving discourse data. International Conference on Innovative Technologies and Learning.
Steelman, Z. R., Hammer, B. I., & Limayem, M. (2014). Data collection in the digital age. MIS Quarterly, 38(2), 355-378.
Theelen, H., Vreuls, J., & Rutten, J. (2024). Doing Research with Help from ChatGPT: Promising Examples for Coding and Inter-Rater Reliability. International Journal of Technology in Education, 7(1), 1-18. https://doi.org/10.46328/ijte.537
Tian, G. Y. (2016). Current issues of cross-border personal data protection in the context of cloud computing and trans-pacific partnership agreement: Join or withdraw. Wis. Int’l LJ, 34, 367.
Vetrivel, S., Sabareeshwari, V., & Sowmiya, K. (2025). Artificial Intelligence in Communications. In Convergence of Antenna Technologies, Electronics, and AI (pp. 209-238). IGI Global. https://doi.org/10.4018/979-8-3693-3775-2.ch008
Wachinger, J., Bärnighausen, K., Schäfer, L. N., Scott, K., & McMahon, S. A. (2024). Prompts, pearls, imperfections: comparing ChatGPT and a human researcher in qualitative data analysis. Qualitative Health Research, 10497323241244669. https://doi.org/10.1177/10497323241244669
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
Wibawa, A. P., & Kurniawan, F. (2024). Advancements in natural language processing: Implications, challenges, and future directions. Telematics and Informatics Reports, 16, 100173. https://doi.org/10.1016/j.teler.2024.100173
Woods, M., Paulus, T., Atkins, D. P., & Macklin, R. (2016). Advancing qualitative research using qualitative data analysis software (QDAS)? Reviewing potential versus practice in published studies using ATLAS.ti and NVivo, 1994-2013. Social Science Computer Review, 34(5), 597-617. https://doi.org/10.1177/0894439315596311
Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., Liu, X., Wu, Y., Dong, F., & Qiu, C.-W. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4). https://doi.org/10.1016/j.xinn.2021.100179
Zadeh, P. (2023). Use of Natural Language Processing (NLP) to Support Assuring the Internal Validity of Qualitative Research. Canadian Society of Civil Engineering Annual Conference.
Zhang, H., Wu, C., Xie, J., Lyu, Y., Cai, J., & Carroll, J. M. (2025). Harnessing the power of AI in qualitative research: Exploring, using and redesigning ChatGPT. Computers in Human Behavior: Artificial Humans, 4, 100144. https://doi.org/10.1016/j.chbah.2025.100144
Zhang, Y., Wu, M., Tian, G. Y., Zhang, G., & Lu, J. (2021). Ethics and privacy of artificial intelligence: Understandings from bibliometrics. Knowledge-Based Systems, 222, 106994. https://doi.org/10.1016/j.knosys.2021.106994
Zuboff, S. (2023). The age of surveillance capitalism. In Social theory re-wired (pp. 203-213). Routledge. https://doi.org/10.4324/9781003320609-27