Exploring South African news media framings of trust and mistrust in ChatGPT and human integrity within (higher) education

Prof. Susan Brokensha, University of the Free State

Ensovoort, volume 47 (2026), number 3: 1

Abstract

Although there are myriad affordances with respect to utilising ChatGPT in educational spaces, embracing this conversational chatbot pivots to some degree around trust. Through the theoretical frameworks of sociotechnical imaginaries and technological frames, this paper explores South African news outlets’ depictions of (mis)trust in the adoption of ChatGPT in (higher) education, while simultaneously considering their framings of academic integrity or lack thereof in educator-student interactions mediated by the tool. The concept of sociotechnical imaginaries encompasses the shared beliefs, visions, and values societies hold about the integration of generative artificial intelligence into education. In recent years, the framework has gained traction for the insights it fosters into the complex co-constitutive relationship between humans and artificial intelligence. Examining which educational imaginaries about ChatGPT are made more or less salient through South African news articles is important, given that these representations contribute towards shaping trust or mistrust in the tool. These representations also underscore how academic integrity and trust in the educator-student relationship hang in the balance. Using thematic analysis as a broad methodological approach, an examination of relevant articles published in six South African news outlets demonstrates overwhelmingly positive representations of ChatGPT in education. However, several framings also point to issues of trust around our entanglement with a potentially destructive technology in the sense that it may harm relationships between academics and their students, where the former act as gatekeepers of academic integrity and the latter are perceived as potential cheaters.
Within the framework of a technomoral virtue ethic espoused by Shannon Vallor, this study also considers how a relationship of trust and dignity between academics and students can flourish and be repaired by advocating narratives that shift from merely punitive measures used to address academic integrity to ones that embrace an ethics of care geared towards facilitating student reflection on qualities such as agency, creativity, self-disclosure, honesty and authenticity.

Keywords: ChatGPT; Higher education; News media framings; Sociotechnical imaginaries; Trust

Introduction

Artificial intelligence (AI) tools are becoming pervasive in a variety of sectors in South Africa, and the educational sphere at tertiary level is no exception, with generative AI (GenAI) being used across several disciplines. While the affordances of this technology are not in doubt, what is cause for concern are ethical considerations that pivot around workforce displacement, algorithmic biases, misinformation, data privacy violations, and plagiarism. Whether positive or negative, sentiment reflects trust and mistrust, which are currently understudied phenomena in the context of educational spaces (cf. Amoozadeh et al. 2024). While conceptualisations of trust and its rhetorical opposite are elusive (see Literature Review), suffice it to say that with respect to a professional or pedagogical relationship, trust hinges on expecting students and staff to “perform their role competently and fulfil all their obligations without the need to be policed or controlled” (Madikizela-Madiya 2018:416). Mistrust, on the other hand, embodies uncertainty and even misgivings about each other’s ability to carry out expected tasks in a competent way (Madikizela-Madiya 2018:417). Viewing these broad definitions in the context of GenAI technology such as ChatGPT, we might conceive of students and staff as harbouring trust or mistrust not only with respect to their interactions with this technology, but also in terms of their interactions with one another. A specific example of the erosion of trust in both types of interaction relates to epistemic mistrust, which may be defined as regarding information as unreliable (Li 2023). If a student drafts an essay with the assistance of ChatGPT and the educator deems the information in it to be false and to have been generated by the tool, the result may not only be a reduction in or lack of trust in the tool, but also a breakdown of trust in the student.
The student may in turn exhibit less trust in the educator’s assessment if he or she did not in fact use the tool to commit plagiarism, but simply as a brainstorming aid.

Grappling with trust issues and in an attempt to understand the nature of an emerging and uncertain technology that is beginning to gain traction at universities across the globe, educators might engage not only with one another, but also with news media framings of ChatGPT’s risks and affordances. This is because such framings reflect the voices of stakeholders such as university leaders, researchers, students, and technology industry experts. How these stakeholders portray ChatGPT plays an important part in how the tool is then defined, interpreted, and judged (cf. Entman 1993:52). As Roe and Perkins (2023:2) observe, news frames “define topics represented in the media” and “can lead to additional societal effects when individuals engage with the news” (cf. Freeman and Aoki 2023:28). To date, however, no scholarly work has examined how South African news media outlets are framing ChatGPT and its role in higher education.

To address this lacuna, the present study reports on a framing analysis that was conducted to examine several South African news articles’ discursive representations of ChatGPT’s deployment in (higher) education. The specific aim was to explore how these articles (taken from six news outlets) either explicitly or implicitly framed trust and mistrust with respect to education stakeholders’ use of the tool. To this end, the following research questions were posed: (i) In the given news articles, whose voices were represented in conveying themes around the use of ChatGPT in (higher) education? (ii) What were common themes in the news articles, and which were related to issues of trust and mistrust? (iii) How did the news articles frame trust and mistrust with respect to ChatGPT’s affordances and risks in (higher) education spaces? (iv) How did the news articles frame trust and mistrust in humans in relation to the use of ChatGPT in (higher) education spaces?

In addition to filling a specific research gap, the study is intended to provide useful analytical frameworks according to which scholars may examine news framings of a novel technology’s early adoption in the context of higher education.

The various concepts and theoretical frameworks that informed the study are considered in the next section before the methods are detailed and the analysis that was carried out is documented.

Literature Review

Whether at school or university level, trust between educators and their students is a key element for facilitating meaningful teaching and learning environments (Felten, Forsyth and Sutherland 2023). For students, the specific benefits of trusting relationships include increased motivation and engagement (Kim and Lundberg 2016), better receptivity to formative assessment practices (Winstone, Nash, Parker and Rowntree 2016), and a sense of well-being that positively influences critical thinking and experiential learning (Hupper 2009; cf. Eloff, O’Neil and Kanegoni 2021). For educators, trusting relationships reduce burnout (Roffey 2012) and are associated with the positive experience of encountering students who are academically engaged as well as more autonomous (cf. Russell, Wentzel and Donlan 2016).

Although not easy to define, trust in the context of educator-student relationships refers to exhibiting confidence in five facets that Hoy and Tschannen-Moran (1999) have identified as the socio-emotional facets of benevolence, openness, and honesty, as well as the cognitive dimensions of reliability and competence. While this trust model is well over two decades old, it is widely employed by scholars who study trust in educational settings (e.g., Bradley, Dowell and Csaszar 2023; Howe, Hallam and Hilton 2023). According to the model, benevolence reflects care and protection of another person’s well-being; openness pertains to a willingness to share/disclose information with others; and honesty is synonymous with integrity, authenticity, and responsibility. Reliability refers to information, behaviour or actions that are consistent and predictable, while competence encompasses the knowledge and skills required to successfully accomplish goals. When the model is linked to the deployment of ChatGPT in education, the socio-emotional dimensions of trust become crucial: Luo (2024a) analysed the generally harsh GenAI policies of 20 universities with respect to plagiarism, and one of the conclusions reached was that students might perceive their lecturers to be less than benevolent in light of their punitive approaches to lack of student honesty. Students might resort to going on the offensive in retaliation. In such a climate of mistrust, lecturers might be predominantly framed as plagiarism gatekeepers rather than as pedagogical bridge builders (Luo 2024b:12), while lecturers in turn might perceive students as antagonistic. Furthermore, trust could be eroded if students do not fully disclose that they used AI and how they did so. With respect to the cognitive facets of the trust model, it goes without saying that educators’ trust in students’ academic competence (and thus reliability) could be eroded if they discover an over-reliance on ChatGPT.
Similarly, students want to trust that their educators have domain-specific knowledge and skills that will make them effective teachers (Zhou 2023). At the same time, Luo (2024b) points out that according to a recent study by Chan (2023), doubt may exist that teachers have the competence to detect whether or not students have in fact used AI: “With the rise of GenAI, students now also expect teachers to demonstrate AI literacy or, at the very least, a growth mindset towards continuously learning about emerging technologies” (Luo 2024b:12).

The trust model aligns well with Vallor’s (2016) socio-technical framework, which sets out technomoral virtues such as honesty and self-control that we argue could be employed to enhance academic integrity when ChatGPT is employed (cf. Harfield 2021). It is worth noting that Vallor’s (2016) conceptualisation of honesty is closely connected to the notion of trust and is defined as “the appropriate and morally expert communication of information” (Vallor 2016:121).

What underpins all five facets proposed by Hoy and Tschannen-Moran (1999) is the notion of vulnerability; as they express it, “Where there is no vulnerability there is no need for trust” (Hoy and Tschannen-Moran 1999:186-187). With respect to the use of GenAI, the risk of vulnerability is particularly high among students who have just commenced their studies, since they do not yet possess domain-specific knowledge and may be struggling to master (academic) skills. Given these deficits, they might over-rely on GenAI tools. Educators too might experience vulnerability in the sense that they could face a (legal) backlash if they make false allegations about AI plagiarism or if they fail to teach students certain foundational skills before GenAI is used.

In this study, both the socio-emotional and cognitive dimensions of trust are considered in the analysis of the data collected. In the context of higher education, studies of trusting relationships seem to be mainly carried out from the perspective of the cognitive facets, the reason being that at tertiary level, trust is viewed as hinging mainly on knowledge trust rather than on trust at the interpersonal level, where the former entails trusting the knowledge or expertise of the educator/student and the latter encompasses trusting in the educator’s kindness or benevolence (Kovač and Kristiansen 2010:279). However, neither type of trust can be ignored: trust is certainly anchored in positive student-teacher relationships (Platz 2021:690), yet the professional (pedagogical) component of this relationship cannot be overlooked, as it buttresses these interactions (Hagenauer, Muehlbacher and Ivanova 2023), given that certain learning and assessment goals need to be met. Leighton and Bustos Gómez (2018) refer to this kind of relationship as a pedagogical alliance which, according to them, originates with the socio-emotional facets of trust, but is maintained through the cognitive facets. The notion of a pedagogical alliance emanates from attachment theory, which proposes that learning success originates in part from secure relationships between teachers and their learners (Leighton and Bustos Gómez 2018:383).

How the use of ChatGPT in higher education spaces mediates this relationship of trust is at this stage an under-researched topic. However, it is unsurprising that early research points to a steady erosion of trusting relationships between students and educators. Hasanein and Sobaih (2023), for example, point out that students who subordinate their own writing efforts to an over-reliance on ChatGPT not only weaken trust in their own educational endeavours, but also devalue teaching and learning in general. Moorhouse, Yeo and Wan (2023) observe that a preoccupation with implementing adversarial measures to punish students who over-rely on GenAI may further erode trusting relationships, while Sokol (2023) notes that students live in fear of being punished in the face of less than robust measures to accurately detect AI plagiarism.

Methods

Data Gathering

To explore media framings of ChatGPT, we initially collected 94 articles containing the keywords ‘ChatGPT’, ‘artificial intelligence’, ‘AI’, ‘generative AI’, and ‘GenAI’ from the Daily Maverick, News24, The Citizen, The Witness, SowetanLIVE, and the Mail & Guardian. We made use of purposive sampling, collecting news articles that were published within a year of ChatGPT’s launch on 30 November 2022, since our interest lay in early-phase perceptions and adoption of this tool. Ultimately, we analysed 16 articles from the six South African news outlets (Table 1).

Table 1. Overview of the dataset

Nr News outlet Headline
1 Daily Maverick Let’s equip students with the skills to use ChatGPT critically and responsibly
2 Daily Maverick Varsities, students hail ‘really useful’ ChatGPT, but many worry about the dodgy stuff it spits out – and plagiarism
3 Daily Maverick Navigating the AI challenge in education – banishing ChatGPT is not the answer
4 Daily Maverick Real danger of ChatGPT lies in its robbing us of our ability to read and research critically
5 Daily Maverick We must embrace new technology that challenges assumptions about higher education
6 News24 How SA universities plan to deal with ChatGPT
7 News24 ChatGPT: What are the ethical challenges of AI language models in higher education?
8 News24 How artificial intelligence can revolutionise education
9 The Citizen ChatGPT can help students who are non-native English speakers
10 The Citizen ChatGPT: More of a cheating aid?
11 The Witness Concerns over education system’s ability to handle issues of AI misuse
12 SowetanLIVE ChatGPT can be a valuable tool to enhance teaching and learning
13 Mail & Guardian A practical guide to ethical use of ChatGPT in essay writing
14 Mail & Guardian Humanities and social science educators must embrace ChatGPT (for now). Here’s why
15 Mail & Guardian Effective methods of teaching students in the age of AI
16 Mail & Guardian Alarm around AI is not warranted

As shown in Table 1, five articles were sourced from the Daily Maverick, three from News24, two from The Citizen, one each from The Witness and SowetanLIVE, and four from the Mail & Guardian. Our criteria for inclusion were (1) a sustained focus on the use of ChatGPT and (2) a setting of (higher) education in South Africa; hence the small number of articles ultimately analysed. The reason for including articles on both basic and higher education was that several articles included references to both levels. Duplicate articles were eliminated, as were those sponsored by companies such as Investec or Dell.

Analytical Frameworks

The argument made here is that the technological frames approach espoused by Orlikowski and Gash (1994) constitutes one useful framework for analysing news media framings of ChatGPT in (higher) education settings. In terms of conceptualisation, Orlikowski and Gash (1994) identify three overlapping domains they refer to as the nature of technology, technology strategy, and technology-in-use domains that describe the knowledge, assumptions and expectations individuals within organisations may draw on to make sense of technology (Table 2). Depending on, among other factors, contexts of use as well as their technological expertise or lack thereof, individuals’ assessments of the value of a given technology will differ (Orlikowski and Gash 1994:197).

Table 2. Technological taxonomies (adapted from Orlikowski and Gash 1994:192-193)

Orlikowski and Gash’s (1994) taxonomy
Nature of technology The nature of technology frame reflects understandings of a given technology’s capabilities.
Technology strategy This frame evokes individuals’ understandings with respect to why their organisation is/was motivated to adopt a given technology. Some individuals may view the adoption of the technology with skepticism, unable to discern any value in it.
Technology-in-use This frame describes what individuals will do with a given technology as well as understandings of what its ramifications will be and what solutions need to be considered to manage the ramifications.

The nature of technology domain describes individuals’ understandings of what a given technology is, while the technology strategy domain refers to their views around why their particular organisation feels motivated to adopt (or reject) the technology. The third domain reflects their understandings of how the technology will be deployed and what the ramifications of this deployment might be. It also reflects possible solutions to these ramifications.

In this study, a further argument proposed is that these three domains of frames are useful for exploring how news articles depict educators’ perceptions of trust or mistrust in ChatGPT and in educator-student interactions because they implicitly reflect human values: as Grünloh (2021) succinctly puts it, although the frames do not explicitly address human values, “[they] conceptualise people’s assumptions, expectations, and knowledge, which can be used to examine people’s interpretive relations with the technology, and from which corresponding values are then recognised” (Grünloh 2021:56). These are values such as trust, well-being, transparency, and autonomy (Grünloh 2021).

The absence of frames that speak directly to such values necessitates the inclusion of Hoy and Tschannen-Moran’s (1999) model of trust. Since neither this model nor Orlikowski and Gash’s (1994) typology focuses on the material elements of AI per se or on mass media framings of this technology, the study added a third analytical framework with a view to accommodating framings of ChatGPT in the news. To this end, it drew on Jones’ (2015) taxonomy of AI frames: nature, artifice, and competition (Table 3).

Table 3. Mass media framings of artificial intelligence (adapted from Jones 2015:32-42)

Jones’ (2015) taxonomy
Nature The nature frame describes AI’s attributes and questions individuals’ relationship with it.
Artifice Artifice frames AI in terms of magical/other-worldly abilities, often suggesting that AI matches or exceeds human intelligence.
Competition This frame portrays AI as threatening human and material resources.

Within Jones’ (2015) discourse-analytic framework, what underpins each frame are the risks that AI holds and how they are understood and addressed. The first frame identified in Table 3 describes the nature of the relationship between humans and machines, which may evoke images of humans struggling to control AI (Jones 2015:36-37). The frame of artifice goes further, portraying AI as an artefact that may surpass human intelligence and/or deceive humans. The competition frame depicts AI as depleting our physical resources and as replacing human labour (Jones 2015:31). To return to Hoy and Tschannen-Moran’s (1999) model of trust, this study proposes that being willing to address the risks that ChatGPT holds entails being vulnerable to these risks. To some extent Jones’ (2015) taxonomy has connections to Orlikowski and Gash’s (1994) framework: both exhibit the frame of nature, although Jones’ (2015) frame emphasises romanticising and/or anthropomorphising technology to the extent that it may be depicted as magical through the frame of artifice. The technology-in-use frame overlaps with the frame of competition, since both may view applications of technology in a negative light.

Analysis

(i) In the given news articles, whose voices were represented in conveying themes around the use of ChatGPT in (higher) education?

Several stakeholders’ voices were represented in the news articles, with most being those of academics who also wrote 10 of the 16 articles. Eighteen academics, twelve of whom were also in leadership positions, were directly quoted and/or paraphrased. Several of these academics or the journalists themselves summarised the views of other educators, referring, for example, to university “tutors” (articles 1, 2 and 15), “various academics” (articles 1 and 5), “professors” (article 5), “educators in the higher education sector” (article 7), school “teachers” (article 12), and “lecturers” (article 13). In one case, a journalist shared the perceptions of “some observers” with respect to the need to train students to use “critical [AI] tools” (article 7), although it is not clear who these observers were supposed to be. The educational views of a “trend analyst” and “developer” were quoted or paraphrased in articles 2 and 7, while article 8 featured the perceptions of an independent researcher in industry who was also the author of the article. Other voices on ChatGPT and education included a child psychologist, a politician, the United Nations, and a representative of the United Nations Educational, Scientific and Cultural Organization (article 11). With respect to student voices, one of the Mail & Guardian reports – article 15 – was written by a doctoral student, while only two articles (articles 1 and 2) directly quoted or described the personal experiences of three university students.

(ii) What were common themes in the news articles, and which were related to issues of trust and mistrust?

Themes that were common across the 16 news articles were, in order of prevalence, (1) the identification of pedagogical and/or ethical solutions to the affordances and risks that ChatGPT poses for teaching and learning; (2) discussions around its pedagogical and ethical consequences (particularly as these relate to plagiarism and academic integrity); (3) considerations of its current and future uses in higher education; (4) definitions of its functionalities; and (5) understandings of why universities may be motivated to use or ban it. Categorising these themes in terms of Orlikowski and Gash’s (1994) typology, the study posits that themes (1) to (3) constituted the domain of technology-in-use; theme (4) reflected the nature of technology domain; and theme (5) was constitutive of the technology strategy domain. Within Jones’ (2015) framework, the study found that all themes evoked the competition frame, while themes (4) and (5) reflected the frame of nature. Of significance is that although various stakeholders expressed amazement at ChatGPT’s capabilities, artifice was detected in only two articles (see (iii) below). All five themes explicitly or implicitly spoke to issues of trust and mistrust, either in ChatGPT itself or in students using the tool.

(iii) How did the news articles frame trust and mistrust with respect to ChatGPT’s affordances and risks in (higher) education spaces?
Nature of technology domain/Frame of nature/Frame of artifice

With reference to Orlikowski and Gash’s (1994) nature of technology domain as well as Jones’ (2015) frame of nature, different stakeholders generally exhibited similar understandings of ChatGPT’s architecture. Some expressed implicit trust in the tool’s capabilities when they claimed, for example, that it is “a user-friendly AI chatbot” (article 1), “able to generate code” (article 2), and “[provide] detailed human-like responses” (article 3). However, they also demonstrated either tacit or explicit mistrust of ChatGPT through their awareness of this large language model’s ambiguities and flaws. With respect to implied mistrust, it was, for example, described as a tool that “can rapidly […] produce written pieces” (article 5), but that “sometimes combines text in a way that the answers it gives are factually incorrect, controversial, biased or made up” (article 2). Regarding more overt expressions of mistrust, typical statements were those such as “Framed as a means to cheat, ChatGPT is treated with suspicion” (article 3) and “Relying on ChatGPT to write essays is not a viable option for obtaining good grades” (article 13). As indicated under (i), there were academics (quoted in articles 3 and 5 respectively) who exaggerated ChatGPT’s functionalities through the frame of artifice: in article 3, two academics claimed, “It can respond, synthesise knowledge and produce essays”, while academics in article 5 described ChatGPT as a tool that can generate texts that “would result in full marks if submitted by an undergraduate”.

Technology strategy domain

There were few direct references to understanding why a particular university has adopted or plans to adopt ChatGPT, with academics in only three articles (articles 3, 7 and 10) explicitly referencing their own institutions’ motivations for deploying or intending to deploy it. In all cases, whether explicitly or implicitly expressed, the rationale for adopting GenAI like ChatGPT lay in the need to prepare graduates for the (future) world of work in which AI is becoming pervasive. A common suggestion for addressing ChatGPT’s ethical pitfalls was making sure that “[universities] equip students with the skills to use [it] critically and responsibly” (article 1). This is similar to the doctoral student’s argument that “it takes critical thinking skills to figure out errors in academic writing, especially those from the ‘smarter’ algorithms” (article 15). Comments such as these point to education stakeholders’ mistrust of the tool. Of significance is that in article 7, one academic explicitly argued that institutions are obligated to adopt ChatGPT as a technology strategy because “whether we like it or not, AI language tools, like ChatGPT, are not going away”. The perception of a lack of choice in the matter was also evident in other articles in the dataset. In article 3, for instance, two academics argued that a ban on ChatGPT’s use “will result in a disconnect between education and the real world it prepares students for”, while the author of article 12 observed that “The general sentiment from those not hostile to this tool is that we need to figure out a way to adjust to it, not to just ban it”.

Technology-in-use domain

Regardless of what type of education stakeholder individuals represented, all commented extensively on ChatGPT’s (potential) uses, focusing predominantly on its positive and negative effects in the contexts of teaching, learning, and assessment. Among academics, researchers and trend analysts, typical comments around deployment, which may signal trust, included those such as ChatGPT can “produce a sample essay […] as a guideline for […] students’ answers” (article 2), “improve the writing of students” (article 4), and “grade and provide feedback on assignments” (article 12). In the only article written by a doctoral student, mention was made of ChatGPT being beneficial on condition that students “use it concomitantly with critical thinking skills” (article 15). Observations that point to mistrust in using the tool revolved mainly around concerns about its design flaws. These concerns related, among other things, to “[the] potential for bias in the data used to train the models” (article 6), lack of “authorship verification” (article 7), the production of “misinformation” (article 7) or “false information” (article 8), and the fact that “its responses are limited to the information it has access to” (article 10). With respect to student voices in article 2, sentiment was ambivalent, with students expressing both trust and mistrust when using the tool for research and assignments. One student noted the following: “[…] with studying it’s an incredible tool to use to explain concepts in simple terms. I ask it things like that all the time and it’s very good at rewording information in a way that makes certain concepts much easier to understand. I use it all the time and I think it’s a complete game-changer.” Another student cited in the article claimed that using ChatGPT helped her save time: “Instead of searching for what I’m looking for and then trying to find a site with the right information, I just ask ChatGPT to let me know which sites will have the answers to what I’m looking for. I also use it for troubleshooting in coding and things like that”. Expressing mistrust in the chatbot’s capabilities, another student majoring in law was paraphrased as remarking that “she stopped using ChatGPT after it made up a law case as an answer to one of her questions”.

Frame of competition

Discussions of ChatGPT in the three domains evoked Jones’ (2015) competition frame and thus reflected mistrust of the tool. For example, with respect to the nature of technology domain, some stakeholders framed ChatGPT’s architecture as one that is difficult for humans to compete with: “We make a fundamental mistake when we try to compete with the machine” (article 2). Concerning the technology strategy domain, it has already been noted that some stakeholders voiced their perception that universities have little choice but to adopt ChatGPT. Exactly how this perception relates to mistrust in the tool is considered in the discussion section of this paper. With respect to the technology-in-use domain, a typical response by academics was related to “its potential threat to human intelligence and academic integrity” (article 12). The students represented in the dataset did not express any concerns about having to compete with a GenAI tool.

(iv) How did the news articles frame trust and mistrust in humans in relation to the use of ChatGPT in (higher) education spaces?

In this section, specific findings are summarised as they pertain to the technology-in-use domain, since the focus falls on perceptions of educator-student interactions in the context of using ChatGPT. For the most part, academics expressed either explicit or implicit mistrust in students’ use of ChatGPT, referring either to plagiarism and lack of academic integrity or to problems around academic performance. In article 13, for example, an academic cited “[t]he risk of undetectable plagiarism” as being “a major concern”. Other stakeholders referred to students knowing how to ‘beat the system’, so to speak: the tutor who was referenced in article 2 was described as finding it “difficult to pick up when a student had used ChatGPT in academic work, especially because many had found routes to trick plagiarism detectors, such as using [programmes] that rewrite ChatGPT’s copy to sound more human-like”. Comments that typified concerns about academic performance were those such as “ChatGPT disrupts [the] assumed association between process and product by producing an assessment (output) without adequate learning (process)” (article 3). Solutions for addressing concerns about poor academic performance and plagiarism included engaging in active learning (articles 1 and 15), employing process-driven assessment (article 1), encouraging critical thinking (articles 1, 13, 15 and 16), and subordinating the chasing of a grade to an emphasis on “a transformative relationship to knowledge” (article 5).

Incidentally, it appears that students too were aware that using ChatGPT may diminish returns on academic growth and achievement. One of the students referenced in article 1 stated, “I think it’s really useful, but it can become dangerous if you’re too reliant on it.” One solution proposed by academics for dealing with threats to academic integrity constituted “[adapting] TLA [teaching and learning assessment] practices to AI” (article 1). This would entail reverting to traditional practices such as “oral or even handwritten assignments to ensure a student’s knowledge is genuinely tested” (article 1). In article 3, academics proposed “deliberate discussions about the language model’s ethical use” to “free students and educators from the stalemate the argument has reached, caught between a natural desire to tap into ChatGPT as a resource and a more conservative condemnation and restriction of its use”. This solution was echoed in article 1 in which an academic was quoted as suggesting that educators need to make students aware of what AI tools are to help them “understand what the correct and incorrect usage entails” (article 1). Another solution entailed having students take sit-down examinations (article 11). One academic spoke about the need for students to be transparent about their use of ChatGPT: “If students choose to use ChatGPT to help with an assignment, they need to reference it, and make it clear which parts were generated by [the tool] and which were their own writing” (article 2). With respect to trust in educators, four articles (articles 2, 3, 4 and 6) implicitly addressed the need to demonstrate benevolence towards students when acting on suspicions of GenAI/ChatGPT plagiarism. For example, the university spokesperson who featured in article 6 was quoted as recommending that educators should “take steps to mitigate the risks and ensure that students receive accurate, unbiased and personalised feedback”. 
Similarly, an academic in article 3 stated, “[…] we propose [an] open-minded approach that embraces the challenge ChatGPT presents to us as a teachable moment. Moving away from punitive measures, a developmental and adaptive educational stance, in our opinion, will be much more helpful”. The academic who wrote article 4 called for the need to understand why students may commit plagiarism in the first place: “contemporary students plagiarise for perfectly explicable reasons: a lack of confidence in their comprehension of prescribed readings and in their writing (often not in their first language); because they’ve left their work to the last minute; and, paradoxically, because they are constantly being warned against plagiarising and must already feed their work into plagiarism detecting software”.

Discussion

Cognitive facets of trusting relationships

What can be deduced from the analysis is that how various stakeholders are currently framing ChatGPT’s applications in (higher) education is shaping (mis)trust in the tool’s capabilities as well as (mis)trust between educators and students. It is clear that, while singing the praises of ChatGPT’s functionalities, academics and students alike were also sceptical about the so-called hype surrounding it. This contradiction is not unexpected, given that whenever individuals are faced with an emerging and thus uncertain technology, ambivalence may emerge (Zhang, Yang and Tong 2024:10). The element of mistrust that this engenders among students should not be viewed in a negative light, since it may induce the scepticism and contemplation necessary for critical and reflective thinking (Zhang et al. 2024:8). It may also encourage students to be cautious about using ChatGPT, perceiving it to be a tool that is useful only for accomplishing undemanding academic tasks such as the retrieval of information (Zhang et al. 2024:8). This approach aligns with that taken by a student cited in one of the articles who did not unreservedly trust in ChatGPT’s capabilities, but instead used it to save time as well as complete mundane tasks such as searching for information. Academics’ mistrust of ChatGPT’s capabilities may also generate positive outcomes. It was noted, for example, that they reported identifying fabricated or factually incorrect information as well as biases in the tool’s training data as significant flaws. Awareness of these flaws should enable academics to sensitise students to the ethical consequences of ChatGPT’s architecture, showing them how to address them and in this way pre-empting exaggerated expectations of its capabilities. Of course, knowledge of ChatGPT’s ‘nuts-and-bolts’ and the ability to role model both ethical and responsible use of the tool demands a certain degree of AI literacy on the part of educators.
As noted earlier, there is an expectation among students that their teachers will be AI literate (Luo 2024b). It is argued that such an expectation extends the definition of educator competence, which is traditionally associated with domain-specific expertise (Zhang 2023; cf. Luo 2024b). It is proposed that as GenAI becomes more sophisticated, students’ trust will increasingly depend not only on their educators’ subject knowledge, but also on their AI literacy or AI readiness, which may be defined as “the understanding and implementation of AI-based technologies in education and beyond” (Sperling, Stenberg, McGrath et al. 2024:100169; cf. Luo 2024b:13).

With respect to mistrust’s rhetorical opposite, unqualified trust in ChatGPT can have negative consequences. It was noted that some academics amplified the tool’s functionalities, claiming, for example, that it can synthesise texts and draft responses that reflect good writing at undergraduate level. Such misplaced trust in the tool could be embraced by students with serious consequences for academic competence and achievement. Exploring the role of ChatGPT in academic research, Rahman, Terano, Salamzadeh et al. (2023:5) concluded that as far as drafting a literature review is concerned, although ChatGPT has the capacity to summarise information, synthesising prior literature findings is beyond its capabilities. It also cannot use existing literature to construct a story (Rahman et al. 2023:9). With regard to writing, these scholars have noted that although the texts generated by ChatGPT are fairly well-structured, they do not demonstrate creativity or originality (Rahman et al. 2023:8).

Socio-emotional facets of trusting relationships

In terms of trust on the interpersonal level, the inference can be made that for educators represented in the news articles, trust hinged more on considerations of student behaviour as this relates to plagiarism and academic integrity when ChatGPT is used and less on critiques of the tool’s ethical flaws. The fact that several academics recommended revisiting more traditional assessment methods such as oral presentations, handwritten assignments, and examinations points to the absence of trust in students’ honesty: it is expected that some students might commit academic misconduct. Such recommendations and views are not new, with several scholars noting that educators are considering reverting to traditional assessment practices in attempts to prevent AI cheating (e.g., Neumann, Rauschenberger and Schön 2023).

In speaking about plagiarism and academic integrity in general, several educators framed ChatGPT as a tool that students are using or may use to cheat rather than to learn. This finding aligns with that of Sullivan, Kelly and McLaughlan (2023), who, in their examination of news articles’ framings of ChatGPT in higher education, found that ChatGPT may be regarded by some students as a ‘cheat machine’ rather than as a vehicle for meaningful learning. However, of interest is that in the dataset collected for this study, academics went beyond positioning ChatGPT in this way to consider how best to adapt their teaching, learning and assessment practices with a view to pre-empting plagiarism and enhancing academic integrity. The emphasis fell, among other things, on recommending the construction of an ethical GenAI foundation upon which students can then build. Azoulay, Hirst and Reches (2023:5) contend that such an approach allows trust to develop between educators and their students. They further claim that “[t]he purpose of this is twofold – to maintain integrity and to enable students to acquire the necessary experience for their future professional tasks” (Azoulay et al. 2023:5).

With respect to lack of openness, it has been noted that in one article, a university tutor referred to students as being deceitful by running their ChatGPT-generated responses through different programmes in an attempt to pass off these responses as their own work. In other articles, several academics suggested that, in the interests of transparency, students should disclose how and where they have made use of ChatGPT. This type of openness is critical, not only because it promotes academic integrity and fosters trusting relationships, but also because viewing ChatGPT as an author or co-author is questionable. Zeilinger (2021), for example, calls for discussions around the notion of what authorship and creative expression entail. The general consensus is that ChatGPT is not capable of functioning as an author: “The thinking abilities inherent to humans remain irreplaceable, and ChatGPT should be regarded as a supplementary tool rather than a substitute for human authors” (Rane, Choudhary, Tawde et al. 2023:870). Students need to be aware of debates around authorship and appreciate that with respect to their written assignments, it is their competence that educators want to assess and not that of ChatGPT.

This study has reported that in a few articles, educators implicitly addressed the issue of benevolence, calling for a shift away from an adversarial approach to punishing students for AI plagiarism. They specifically called for educators to subordinate punitive measures to a developmental approach and to understand that students do not necessarily commit GenAI plagiarism to deceive, but may instead be using AI to remediate poor writing skills or because they have encountered difficulties comprehending the prescribed texts. Such calls speak to what Hoy and Tschannen-Moran (1999:187) refer to as benevolence or “the most common face of trust”, since they acknowledge the vulnerability of students and reflect educators’ attempts to act in their best interests (Hoy and Tschannen-Moran 1999:187).

The following observation is not of course generalisable to a larger cohort of students, but it is worth noting that by acknowledging that she ceased using ChatGPT when she realised that it had falsified a legal case, the law student cited in article 1 demonstrated an awareness that human behaviour needs to change when the tool is employed. This awareness encompasses honesty and authenticity, which in turn foster accountability and transparency (Hoy and Tschannen-Moran 1999:188).

Conclusions and Future Research

By exploring news media coverage of early applications of ChatGPT in (higher) education, this paper has sought to make a contribution to existing knowledge of trust and mistrust not only in GenAI itself, but also in educator-student interactions mediated by this technology. While the analysis has demonstrated overwhelmingly positive sentiment towards using ChatGPT in higher education, various stakeholders also currently exhibit some ambivalence with respect to cognitive and socio-emotional facets of trust in interactions mediated by the tool. In terms of cognition, both educators and students tend to display a degree of mistrust in ChatGPT’s cognitive functions, although some are exaggerating these functions, thus obscuring the tool’s ethical and epistemological drawbacks. On the other hand, some educators and students are more concerned about the latter group’s cognitive development, questioning whether an over-reliance on ChatGPT may impede academic skills such as reasoning, critical thinking, and problem-solving. One recommendation is to consider best practice principles for enhancing the AI literacy of educators and students. In this regard, Strauß (2021:45) calls for encouraging critical AI literacy, defined as understanding AI’s architecture and appreciating that its applications should be viewed within sociotechnical contexts and not from the point of view of a one-sided technocratic perspective.

Regarding the socio-emotional dimension of trust, the analysis points to educators’ preoccupation with the absence of students’ academic integrity when ChatGPT is used. One recommendation made is that future research should explore innovative approaches to dealing with GenAI plagiarism and student misconduct in this area. Traditionally, plagiarism is addressed through punitive measures, but a developmental approach could be more appropriate when attempting to resolve GenAI plagiarism, since students may, for example, not be aware that GenAI has the potential to generate false information, leading to accusations of academic dishonesty. Such a holistic approach is advocated by Luo (2024a), who argues that “[r]ather than stressing originality from a surveillance angle, policies can place more emphasis on the available support to students in producing original work that is meaningful to their learning” (Luo 2024a:662). This approach takes into account that several factors such as levels of language proficiency and access to GenAI have an impact on the originality of assignments (cf. Luo, 2024a:662). To cultivate a culture of academic integrity among students in the context of GenAI use, academics too need to uphold values of academic honesty in their own tasks, disclosing when and how they themselves have used GenAI tools to conduct research, write a paper, set up assignments, and the like. Current research on trust in AI for education contexts underscores how educators should go about instilling academic honesty in their students (e.g., Rasul, Nair, Kalendra et al. 2024; Sevnarayan and Potter 2024). However, an under-researched area is how educators themselves should be role modelling ethical behaviours. This type of modelling conveys academics’ own vulnerabilities to this tool and may help students perceive them as more than simply plagiarism ‘watchdogs’.
Although it constitutes a paradox, “the more vulnerable teachers make themselves, the stronger students’ trust in teachers becomes” (Romney and Holland 2023:84). Future projects could explore strategies that academics could employ to exhibit vulnerability while simultaneously upholding their credibility as educators and researchers.

Finally, the analysis has shown that the voices of educators are well represented in the news media, whereas student voices are scarce. Since under-representation of student voices has been reported by other scholars such as Sullivan et al. (2023), our current research endeavours have begun capturing these voices through focus groups. Allowing students to participate in developing guidelines that address GenAI plagiarism and that help them learn how to use ChatGPT ethically is aligned with what Sullivan et al. (2023:6) refer to as “a proactive role in collaborating with university staff in policy development”. Such a proactive role promotes student agency and self-disclosure, which is in keeping with Vallor’s (2016) technomoral virtue ethic.

In terms of limitations, one weakness of this study is its use of a small sample size, which means that its findings cannot be generalised to a larger pool of news outlets. We argue that since the adoption of ChatGPT and other GenAI tools is at an early stage often marked by uncertainty and controversy, exploratory research that relies on small datasets is warranted (Swedberg 2020). Another limitation of the study lies in the six news outlets’ different modes of delivery and focus areas: the Daily Maverick is regarded as an alternative news outlet, while News24 and The Witness are mainstream newspapers. The Mail & Guardian focuses on political analysis as well as investigative reporting, whereas both The Citizen and SowetanLIVE provide tabloid-like coverage of events and phenomena. Of interest is that at this stage, framings of ChatGPT in the context of (higher) education appear to be similar across these outlets, a pattern that could be attributed to ChatGPT being a novel technology towards which attitudes remain ambivalent. Future research could involve tracking and comparing how articles published in these outlets frame GenAI over an extended period of time as people become increasingly AI literate.

Reference List

Amoozadeh, M., Daniels, D., Nam, D., Kumar, A., Chen, S., Hilton, M., Srinivasa Ragavan, S. and Alipour, M. A. 2024. Trust in generative AI among students: An exploratory study, in Stephenson, B., Stone, J. A., Battestilli, L., Rebolsky, S.A. and Shoop, L. (eds.). Proceedings of the 55th ACM technical symposium on computer science education V. 1 conference. Portland, Oregon: ACM:67–73. doi: 10.1145/3626252.3630830.

Azoulay, R., Hirst, T. and Reches, S. 2023. Let’s do it ourselves: Ensuring academic integrity in the age of ChatGPT and beyond. Authorea Preprints. doi: 10.36227/techrxiv.24194874.v1.

Bradley, A., Dowell, M. M. M. S. and Csaszar, I. E. 2023. Situating servant leadership within educational leadership: Case study of trust as a relational element in teacher-principal relationships, in Thomas, U. (ed.). Cases on servant leadership and equity. Hershey, PA: IGI Global:1–28.

Chan, C. K. Y. 2023. A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1):1–25. doi: 10.1186/s41239-023-00408-3.

Eloff, I., O´Neil, S. and Kanengoni, H. 2021. Students’ well-being in tertiary environments: Insights into the (unrecognised) role of lecturers. Teaching in Higher Education, 28(7):1777–1797. doi: 10.1080/13562517.2021.1931836.

Entman, R. M. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4):51–58. doi: 10.1111/j.1460-2466.1993.tb01304.x.

Felten, P., Forsyth, R. and Sutherland, K. A. 2023. Building trust in the classroom: A conceptual model for teachers, scholars, and academic developers in higher education. Teaching and Learning Inquiry, 11. doi: 10.20343/teachlearninqu.11.20.

Freeman, B. and Aoki, K. 2023. ChatGPT in education: A comparative study of media framing in Japan and Malaysia, in Proceedings of the 2023 7th international conference on education and e-learning. New York, NY: Association for Computing Machinery:26–32. doi: 10.1145/3637989.3638020.

Grünloh, C. 2021. Using technological frames as an analytic tool in value sensitive design. Ethics and Information Technology, 23(1):53–57. doi: 10.1007/s10676-018-9459-3.

Hagenauer, G., Muehlbacher, F. and Ivanova, M. 2023. “It’s where learning and teaching begins ‒ is this relationship” – Insights on the teacher-student relationship at university from the teachers’ perspective. Higher Education, 85(4):819–835. doi: 10.1007/s10734-022-00867-z.

Harfield, C. 2021. Was Snowden virtuous? Ethics and Information Technology, 23(3):373–383. doi: 10.1007/s10676-021-09580-4.

Hasanein, A. M. and Sobaih, A. E. E. 2023. Drivers and consequences of ChatGPT use in higher education: Key stakeholder perspectives. European Journal of Investigation in Health, Psychology and Education, 13(11):2599–2614. doi: 10.3390/ejihpe13110181.

Howe, T., Hallam, P. and Hilton, S. 2023. Principal trust: Factors that influence faculty trust in the principal, in Zajda, J., Hallam, P. and Whitehouse, J. (eds.) Globalisation, values education and teaching democracy. Cham: Springer International Publishing:159–179.

Hoy, W. K. and Tschannen-Moran, M. 1999. Five faces of trust: An empirical confirmation in urban elementary schools. Journal of School Leadership, 9(3):184–208. doi: 10.1177/105268469900900301.

Huppert, F. A. 2009. Psychological well‐being: Evidence regarding its causes and consequences. Applied Psychology: Health and Well‐Being, 1(2):137–164. doi: 10.1111/j.1758-0854.2009.01008.x.

Jones, S. A. 2015. Reading risk in online news articles about artificial intelligence. Unpublished MA dissertation. Edmonton, Alberta: University of Alberta.

Kim, Y. K. and Lundberg, C. A. 2016. A structural model of the relationship between student-faculty interaction and cognitive skills development among college students. Research in Higher Education, 57(3):288–309. doi: 10.1007/s11162-015-9387-6.

Kovač, V. B. and Kristiansen, A. 2010. Trusting trust in the context of higher education: The potential limits of the trust concept. Power and Education, 2(3):276–287. doi: 10.2304/power.2010.2.3.276.

Leighton, J. P. and Bustos Gómez, M. C. 2018. A pedagogical alliance for trust, wellbeing and the identification of errors for learning and formative assessment. Educational Psychology, 38(3):381–406. doi: 10.1080/01443410.2017.1390073.

Li, E., Campbell, C., Midgley, N. and Luyten, P. 2023. Epistemic trust: a comprehensive review of empirical insights and implications for developmental psychopathology. Research in Psychotherapy: Psychopathology, Process, and Outcome, 26(3):704. doi: 10.4081/ripppo.2023.704.

Luo, J. 2024a. A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 49(5):651–664. doi: 10.1080/02602938.2024.2309963.

Luo, J. 2024b. How does GenAI affect trust in teacher-student relationships? Insights from students’ assessment experiences. Teaching in Higher Education:1–16. doi: 10.1080/13562517.2024.2341005.

Luo, J. and Chan, C. K. Y. 2022. Conceptualising evaluative judgement in the context of holistic competency development: results of a Delphi study. Assessment & Evaluation in Higher Education, 48(4):513–528. doi: 10.1080/02602938.2022.2088690.

Madikizela-Madiya, N. 2018. Mistrust in a multi-campus institutional context: A socio-spatial analysis. Journal of Higher Education Policy and Management, 40(5):415–429. doi: 10.1080/1360080X.2018.1478609.

Moorhouse, B. L., Yeo, M. A. and Wan, Y. 2023. Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Computers and Education Open, 5:100151. doi: 10.1016/j.caeo.2023.100151.

Neumann, M., Rauschenberger, M. and Schön, E. M. 2023. “We need to talk about ChatGPT”: The future of AI and higher education, in 2023 IEEE/ACM 5th international workshop on software engineering education for the next generation (SEENG). IEEE:29–32. doi: 10.1109/SEENG59157.2023.00010.

Orlikowski, W. J. and Gash, D. C. 1994. Technological frames: making sense of information technology in organizations, in Allen, R. B. (ed.) ACM transactions on information systems (TOIS), 12(2), New York, NY: Association for Computing Machinery:174–207.

Platz, M. 2021. Trust between teacher and student in academic education at school. Journal of Philosophy of Education, 55(4–5):688–697. doi: 10.1111/1467-9752.12560.

Rahman, M. M., Terano, H. J., Rahman, M. N., Salamzadeh, A. and Rahaman, M.S. 2023. ChatGPT and academic research: A review and recommendations based on practical examples. Journal of Education, Management and Development Studies, 3(1):1–12. doi: 10.52631/JEMDS.V3i1.175.

Rane, N. L., Choudhary, S. P., Tawde, A. and Rane, J. 2023. ChatGPT is not capable of serving as an author: Ethical concerns and challenges of large language models in education. International Research Journal of Modernization in Engineering Technology and Science, 5(10):581–874. doi: 10.56726/IRJMETS45212.

Rasul, T., Nair, S., Kalendra, D., Balaji, M. S., de Oliveira Santini, F., Ladeira, W. J., Rather, R. A., Yasin, N., Rodriguez, R. V., Kokkalis, P. and Murad, M. W. 2024. Enhancing academic integrity among students in GenAI era: A holistic framework. The International Journal of Management Education, 22(3):101041. doi: 10.1016/j.ijme.2024.101041.

Roe, J. and Perkins, M. 2023. ‘What they’re not telling you about ChatGPT’: Exploring the discourse of AI in UK news media headlines. Humanities and Social Sciences Communications, 10(1):753. doi: 10.1057/s41599-023-02282-w.

Roffey, S. 2012. Pupil wellbeing – Teacher wellbeing: Two sides of the same coin? Educational and Child Psychology, 29(4):8–17. doi: 10.53841/bpsecp.2012.29.4.8.

Romney, A. C. and Holland, D. V. 2023. The vulnerability paradox: Strengthening trust in the classroom. Management Teaching Review, 8(1):84–90. doi: 10.1177/2379298120978362.

Russell, S. L., Wentzel, K. R. and Donlan, A. E. 2016. Teachers’ beliefs about the development of teacher–adolescent trust. Learning Environments Research, 19:241–266. doi: 10.1007/s10984-016-9207-8.

Sevnarayan, K. and Potter, M. A. 2024. Generative artificial intelligence in distance education: Transformations, challenges, and impact on academic integrity and student voice. Journal of Applied Learning & Teaching, 7(1):1–11. doi: 10.37074/jalt.2024.7.1.41.

Sokol, D. 2023. It is too easy to falsely accuse a student of using AI: A cautionary tale, Times Higher Education, 10 July. https://www.timeshighereducation.com/blog/it-too-easy-falsely-accuse-student-using-ai-cautionary-tale. (Accessed 24 February, 2025).

Sperling, K., Stenberg, C-J., McGrath, C., Åkerfeldt, A., Heintz, F. and Stenliden, L. 2024. In search of artificial intelligence (AI) literacy in teacher education: A scoping review. Computers and Education Open:100169. doi: 10.1016/j.caeo.2024.100169.

Strauß, S. 2021. “Don’t let me be misunderstood”: Critical AI literacy for the constructive use of AI technology. TATuP-Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis/Journal for Technology Assessment in Theory and Practice, 30(3):44–49. doi: 10.14512/tatup.30.3.44.

Sullivan, M., Kelly, A. and McLaughlan, P. 2023. ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1):1–10. doi: 10.37074/jalt.2023.6.1.17.

Swedberg, R. 2020. Exploratory research, in Elman, C., Gerring, J. and Mahoney, J. (eds.) The production of knowledge: Enhancing progress in social science. Cambridge: Cambridge University Press:17–41. doi: 10.1017/9781108762519.002.

Vallor, S. 2016. Technology and the virtues: A philosophical guide to a future worth wanting. New York, NY: Oxford University Press. doi: 10.1093/acprof:oso/9780190498511.001.0001.

Winstone, N. E., Nash, R. A., Parker, M. and Rowntree, J. 2016. Supporting learners’ agentic engagement with feedback: A systematic review and taxonomy of recipience processes. Educational Psychologist, 1–21. doi: 10.1080/00461520.2016.1207538.

Zeilinger, M. 2021. Tactical entanglements: AI art, creative agency, and the limits of intellectual property. Lüneburg, Germany: Meson Press. doi: 10.14619/1839.

Zhang, Y., Yang, X. and Tong, W. 2024. University students’ attitudes toward ChatGPT profiles and their relation to ChatGPT intentions. International Journal of Human-Computer Interaction, 41(5):3199–3212. doi: 10.1080/10447318.2024.2331882.

Zhou, Z. 2023. Towards a new definition of trust for teaching in higher education. International Journal for the Scholarship of Teaching and Learning, 17(2):2. doi: 10.20429/ijsotl.2023.17202.