Killer robots, humanoid companions, and super-intelligent machines: The anthropomorphism of AI in South African news articles


Authors: Dr. Susan Brokensha and Dr. Thinus Conradie, University of the Free State.

Ensovoort, volume 42 (2021), number 6: 3

Abstract

How artificial intelligence (AI) is framed in news articles is significant as framing influences society’s perception and reception of this emerging technology. Journalists may depict AI as a tool that merely assists individuals in performing a variety of tasks or as a (humanoid) agent that is self-aware, capable of both independent thought and creativity. The latter type of representation may be harmful, since anthropomorphism of AI not only generates unrealistic expectations of this technology, but also instils fears about technological singularity, a hypothetical future in which technological growth becomes unmanageable. To determine how and to what extent the media in South Africa anthropomorphise AI, we employed framing theory to conduct a qualitative content analysis of articles on AI published in four South African online newspapers. We distinguished between social anthropomorphism, which frames AI in terms of exhibiting a human form and/or human-like qualities, and cognitive anthropomorphism, which refers to the tendency to conflate human and machine intelligence. Most articles reflected social anthropomorphism of AI, while a few framed it only in terms of cognitive anthropomorphism. Several reflected both types of anthropomorphism. Based on the findings, we concluded that anthropomorphism of AI may hinder the conceptualisation of the epistemological and ethical consequences inherent in this technology.

 

Keywords: Artificial intelligence; Framing theory; News articles; Cognitive anthropomorphism; Social anthropomorphism

1. Introduction

1.1 Social and cognitive anthropomorphism of artificial intelligence

Anthropomorphism describes our inclination to attribute human-like shapes, emotions, mental states and behaviours to inanimate objects/animals, and depends neither on the physical features nor on the ontological status of those objects/animals (Giger, Piçarra, Alves‐Oliveira, Oliveira and Arriaga, 2019:89). Artificial intelligence (AI), including computers, drones and chatbots, can be anthropomorphised regardless of material dissimilarities between these technologies and humans, and despite the absence of an evolutionary relationship (Giger et al., 2019:89). In this study, we examine both social and cognitive anthropomorphism of AI, where the former process ascribes human traits and forms to AI (cf. Giger et al., 2019:112), and the latter designates the expectation that AI mimics human intelligence (Mueller, 2020:12). We base this distinction on our datasets, which reflect a focus either on the anthropomorphic form and/or human-like qualities of AI, especially when framing human-robot interaction, or on cognitive processes when describing the intelligence of machine learning and deep learning, for instance. Several articles reflect both cognitive and social anthropomorphism (see Section 4).

Cognitive anthropomorphism saturates news coverage of both weak and strong AI; that is, when AI is framed as merely simulating human thinking or as matching human intelligence (Bartneck, 2013; Damiano and Dumouchel, 2018:5; Salles, Evers and Farisco, 2020). The penchant to conflate artificial and human intelligence is unsurprising, historically speaking. Watson (2019:417) traces the practice especially to Alan Turing’s (1950) eponymous Turing Test for determining whether a machine can ‘think’. Since then, technology experts and laypeople have framed AI in epistemological terms, constructing it as capable of thinking, learning, and discerning. Human intelligence is notoriously resistant to easy definition, and AI might be more challenging still (Kaplan and Haenlein, 2020). Consequently, when humans envision AI, human intelligence offers a ready touchstone (cf. Cave, Craig, Dihal, Dillon, Montgomery, Singler and Taylor, 2018:8; Kaplan and Haenlein, 2019:17). In part, the propensity to anthropomorphise AI in cognitive and/or social terms derives from speculative fiction (Salles et al., 2020:91). However, many AI researchers also employ anthropomorphic descriptions (Salles et al., 2020:91). Salles et al. (2020:91) suggest that the practice is driven by “a veritable inflation of anthropocentric mental terms that are applied even to non-living, artificial entities” or to “an intrinsic epistemic limitation/bias” on the part of AI scholars. They also speculate that anthropomorphism could stem from the human need to both understand and control AI in order to experience competence (Salles et al., 2020:91). We propose that journalists too are motivated to anthropomorphise AI to understand and control it, particularly because it is an emerging and therefore uncertain science: “people are more likely to anthropomorphize when they want to […] understand their somewhat unpredictable environment” (Salles et al., 2020:89-90).

When news media anthropomorphise AI, one epistemological consequence is the risk of exposing the public to exaggerated or erroneous claims (cf. Proudfoot, 2011; Samuel, 2019; Watson, 2019). To appraise this risk, we used framing theory to conduct a content analysis of anthropomorphism in articles published in four South African newspapers, namely, the Citizen, the Daily Maverick, the Mail & Guardian Online, and the Sowetan LIVE. We addressed the following research questions:

Research question 1: What were the most salient topics in the coverage of AI?

Research question 2: How was AI anthropomorphised?

Our analysis does not intend to, in Salles et al.’s (2020:93) words, defend “moral human exceptionalism”. Instead, we are interested, from an ontological vantage, in AI-human differences. Moreover, we wish to interrogate the potential epistemological and ethical impacts of anthropomorphic framing of AI for public consumption. Therefore, we question how news media constitute and interrelate ‘humans’ and ‘machines’. This undertaking is important. How we anthropomorphise AI compels us to re-evaluate how we conceptualise human and artificial intelligences (cf. Curran, Sun and Hong, 2019). For the second research question, we focused on the nature of anthropomorphic framing in South African news articles, instead of unpacking why coverage might differ across the outlets. Providing a methodical and nuanced account of inter-outlet variability exceeds the purview of this study; however, in future, we plan to examine this variability by attending, among other things, to the agenda of each outlet, its target audience, and gatekeeping by editorial boards.

2. Framing theory and anthropomorphising AI

Media framings of AI shape its public reception and perception. Unlike technology experts who are au fait with the architecture of AI, the public rely on mediatised knowledge to learn about this technology (Vergeer, 2020:375). Consequently, the agendas of media outlets and the writers they employ inflect what is learned. Framing is also influenced by variables including the pressure to increase ratings and readership (Obozintsev, 2018:12) and by what Holguín (2018:5) terms the “myth-making of journalists” and the “public relations strategies of scientists”. Given this gamut of variables, how and the extent to which AI is reported on varies within and across media outlets. Nevertheless, the findings of some studies concur. For example, in a study of how AI is represented in mainstream news articles in North America, the researchers found that the media generally depict AI technologies as being beneficial to society and as offering solutions to problems related to areas including health and the economy (Sun, Zhai, Shen and Chen, 2020:1). United Kingdom-based scholars echo this picture of AI as a problem-solving technology (Brennen, Howard and Nielsen, 2018), as do Garvey and Maskal (2020), who completed a sentiment analysis of the news media on AI in the context of digital health. A study of the Dutch press by Vergeer (2020) reports a balance of positive and negative sentiments. Fast and Horvitz (2017) examined The New York Times’ coverage of AI over a 30-year period and found that, despite a broadly positive outlook, framings have grown increasingly pessimistic over the last decade, with loss of control over AI cited as a concern. Within the literature, few studies of anthropomorphic framing of AI by journalists have been conducted, although several studies of the framing of AI in general touch upon anthropomorphism (Garvey and Maskal, 2020; Ouchchy, Coin and Dubljević, 2020; Vergeer, 2020; Bunz and Braghieri, 2021). A few studies indicate that anthropomorphism of AI is common in news articles that focus specifically on human-robot interaction (Bartneck, 2013; Złotowski, Proudfoot, Yogeeswaran and Bartneck, 2015).

Framing theory has proven fruitful for ascertaining what journalists and other writers for online news elect to foreground and background when writing on AI (Brennen, Howard and Nielsen, 2018; Obozintsev, 2018; Chuan, Tsai and Cho, 2019; Vergeer, 2020). It entails “the process of culling a few elements of perceived reality and assembling a narrative that highlights connections among them to promote a particular interpretation” (Entman, 2010:336). Frames selectively define a specific problem in terms of its costs and benefits, allege causes of the problem, make moral judgements about the agents or forces involved, and offer solutions (Entman, 2010:336).

Our deductive analysis of framing combines Nisbet’s (2009) typology with that proposed by Jones (2015) (Table 1). Using existing frames circumvents what Hertog and McLeod (2001:150) decry as “one of the most frustrating tendencies in the study of frames and framing, the tendency for scholars to generate a unique set of frames for every study”. However, Nisbet’s (2009) coding scheme addresses how science is broadly framed in public discourse, rather than spotlighting AI. Therefore, we amalgamate it with Jones’s (2015) exhaustive analysis of news articles about AI. From Nisbet’s (2009) typology of frames, we omitted the ‘scientific and technical uncertainty’ frame. Instead, we retained Nisbet’s ‘social progress’ frame and employed Jones’s (2015) ‘competition’ frame. We propose that these competing frames may be evoked simultaneously by a journalist to reflect uncertainty about the various facets of AI: “[t]he alternation between different perspectives, with an apparently contradictory identification in the journalist’s report, contributes above all to construct an image of an emergent scientific field” (Hornmoen, 2009:16; cf. Kampourakis and McCain, 2020:152).

All nine frames in Table 1 can be expressed through anthropomorphic tropes. For example, in ‘‘Call me baby’: Talking sex dolls fill a void in China’ (the Sowetan LIVE, 4 February 2018), the journalist employs anthropomorphic tropes to evoke the frame of nature, referring to one doll by name (“Xiaodie”) (cf. Keay and Graduand, 2011) and describing others as “shapely”, “hot”, and “beautiful”. Similarly, in ‘Prepare for the time of the robots’ (Mail & Guardian Online, 16 February 2018), the journalist employs the frame of artifice when he anthropomorphises AI as having the potential to “outperform [humans] in nearly every job function” in the future.

 

Table 1: A typology of frames employed to study AI in the media

Nisbet’s (2009) coding scheme

Accountability: Science is framed as needing to be controlled and regulated in order to counter the risks it might pose to society and to the environment (e.g., “The human element in AI decision-making needs to be made visible, and the decision-makers need to be held to account”: the Daily Maverick, 18 July 2019).
Morality/Ethics: Science is framed as reflecting moral and ethical risks (e.g., “Artificial intelligence (AI) is meant to be better and smarter than humans but it too can succumb to bias and a lack of ethics”: Weekend Argus, 8 September 2019).
Middle way: A compromise position between polarised views on a scientific issue is generated (e.g., “[…] the combined forces between human and machine would be better than either alone”: News24, 22 January 2020).
Pandora’s Box: Science is depicted as having the potential to spiral out of control (e.g., “[…] robots […] would take targeting decisions themselves, which could ‘open an even larger Pandora’s box’, he warned”: News24, 23 May 2013).
Social progress: Science is framed as enhancing the quality of life of people in areas such as health, education, or finance and as protecting/improving the environment (e.g., “[…] AI has made the detection of the coronavirus easier”: the Daily Maverick, 7 December 2020).

Jones’s (2015) coding scheme

Artifice: AI is framed as an arcane technology in the sense that it could surpass human intelligence (e.g., “[…] AI may soon surpass [human intelligence] due to superior memory, multi-tasking ability, and its almost unlimited knowledge base”: IOL, 18 December 2020).
Competition: AI is framed in terms of depleting human and/or material resources (e.g., “[…] advancements in the tech world mean [AI technologies] are coming closer to replacing humans”: the Sowetan LIVE, 31 July 2018).
Nature: AI is framed in terms of the human-machine relationship and often entails romanticising AI or describing/questioning its nature/features (e.g., a robotic model called ‘Noonoouri’ “describes herself as cute, curious and a lover of couture”: the Sowetan LIVE, 20 September 2018).

 

3. Methods

3.1 Sample

Jones’s (2015:20) approach to data gathering informs our qualitative content analysis, because we “[mimicked] the results that a person (or machine) would have been presented with had they searched for the complete term ‘Artificial Intelligence’ in [popular news articles]”. We also adhered to Krippendorff’s (2013) guidelines for stratified sampling: first, we focused on collecting news articles from South African media outlets that have an online presence and that exhibit high circulations online. To determine which media outlets reach a wide readership, we used Feedspot, a content reader that allowed us to read top news websites in one place while keeping track of which articles we had read. We identified the Citizen, the Daily Maverick, the Mail & Guardian, and the Sowetan LIVE as newspapers with a high readership online. Second, concentrating on the period between January 2018 and February 2021, we collected articles from the four news outlets by searching for the term ‘artificial intelligence’. The third step involved limiting the sample to articles with a sustained focus on AI (Jones, 2015:25; Burscher, Vliegenthart and de Vreese, 2016). Ultimately, we conducted exhaustive analyses of 126 articles and discarded 260: 52 articles were collected from the Citizen, 36 from the Daily Maverick, 26 from the Mail & Guardian Online, and 12 from the Sowetan LIVE. This uneven sample matches similar studies of AI in the media (Ouchchy et al., 2020; Sun et al., 2020; Vergeer, 2020), and is rationalised by our curation process: articles with only passing allusions to AI were discarded along with advertorials, sponsored content or articles that were not text-based. Our unit of analysis was each complete article (cf. Chuan et al., 2019).
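Although the curation described above was performed manually, the sampling logic can be summarised as a simple filtering pipeline. The sketch below is purely illustrative: the article records, field names and the has_sustained_ai_focus check are hypothetical stand-ins for our human judgements, not an actual tool we used.

```python
from datetime import date

# Hypothetical article records; in the study these were inspected by hand.
articles = [
    {"outlet": "Citizen", "date": date(2019, 6, 3), "type": "news",
     "text": "A feature on artificial intelligence in local banking ..."},
    # ... further records ...
]

SEARCH_TERM = "artificial intelligence"
START, END = date(2018, 1, 1), date(2021, 2, 28)

def has_sustained_ai_focus(article: dict) -> bool:
    """Placeholder for the manual judgement that an article treats AI
    substantively rather than in passing (illustrative proxy only)."""
    return article["text"].lower().count(SEARCH_TERM) > 1

sample = [
    a for a in articles
    if START <= a["date"] <= END                      # January 2018 - February 2021
    and SEARCH_TERM in a["text"].lower()              # retrieved via the search term
    and a["type"] not in {"advertorial", "sponsored"} # discard non-editorial content
    and has_sustained_ai_focus(a)                     # sustained focus on AI
]
print(f"{len(sample)} articles retained for analysis")
```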

3.2 Analytic framework

As already noted, framing theory and the existing frames adumbrated in the previous section guided our inquiry. The strength of this directed approach to content analysis is that it corroborates and extends well-established theory and avoids cluttering the field with yet another idiosyncratic set of frames. Such an approach, “makes explicit the reality that researchers are unlikely to be working from the naïve perspective that is often viewed as the hallmark of naturalistic designs” (Hsieh and Shannon, 2005:1283). Of course, this approach is imperfect. Particularly, “researchers approach the data with an informed but, nonetheless, strong bias” (Hsieh and Shannon, 2005:1283). In response, we maintained an audit trail (White, Oelke and Friesen, 2012:244) and employed “thick description” (Geertz 1973) to bolster the transparency of the analysis and interpretation (Stahl and King, 2020:26).

3.3 Dominant topics and valence

We determined that unpacking how AI is anthropomorphised demands more than discerning the various frames through which this technology was represented in the data (Research question 2). It also proved necessary to examine how framing entwines with the topics that dominated our data. After all, repeated media exposure, “causes the public to deem a topic important and allows it to transfer from the media agenda to the public agenda” (Fortunato and Martin, 2016:134; cf. Freyenberger, 2013:16; McCombs, 1997:433).

After expounding how topics related to frames centred on anthropomorphism, a final step in our data analysis involved coding the overall valence of each article as positive, negative or mixed. Given the unreliability of automated content analysis for determining tone (Boukes, Van de Velde, Araujo and Vliegenthart, 2020), we manually coded each article by examining the presence of multiple keywords that reflected, amongst other things, uncertainty versus certainty and optimism versus pessimism about AI (cf. Kleinnijenhuis, Schultz and Oegema, 2015). 
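Since valence was coded manually, the following sketch is only an approximation of the underlying logic; the cue words are invented examples rather than the actual keyword sets we applied, and human coders also weighed context that no keyword count captures.

```python
# Illustrative cue words only; the study's coding was done by human coders.
POSITIVE_CUES = {"breakthrough", "benefit", "improve", "promising"}
NEGATIVE_CUES = {"threat", "fear", "risk", "job losses", "out of control"}

def code_valence(text: str) -> str:
    """Assign an overall valence of 'positive', 'negative' or 'mixed'."""
    t = text.lower()
    pos = sum(cue in t for cue in POSITIVE_CUES)
    neg = sum(cue in t for cue in NEGATIVE_CUES)
    if pos and neg:
        return "mixed"
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "mixed"  # no clear cues: coder judgement required

print(code_valence("AI promises a breakthrough in screening, but experts fear job losses."))
# -> "mixed"
```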

4. Findings

4.1 Salient topics and valence

Two topics prevailed across the four media outlets: ‘Business, finance, and the economy’ and ‘Human-AI interaction’. Each appeared in 18.25% of all articles. The second most salient topic was ‘Preparedness for an AI-driven world’, which featured in 13.49%. ‘Healthcare and medicine’ was next and received coverage in 11.90% of all articles. ‘Big Brother’ and ‘Control over AI’ were the fourth most prevalent topics, with each featuring in 10.31% of all articles. Less salient topics were the ‘News industry’ (3.17%) followed by the ‘Environment’, ‘Killer robots’, ‘Strong AI’, and the ‘Uncanny Valley’ (2.38%). ‘Singularity’ featured in 1.58% of all articles, while ‘Education’ was covered in 0.79% of all articles. All news outlets reported on ‘Business, finance, and the economy’, ‘Human-AI interaction’, and ‘Healthcare and medicine’. The only newspaper that omitted ‘Preparedness for an AI-driven world’ was the Sowetan LIVE. ‘Big Brother’ featured only in the Citizen, while ‘Control over AI’ was addressed in all newspapers barring the Citizen. The ‘Environment’ and the ‘News Industry’ were covered only in the Citizen and the Daily Maverick. ‘Killer robots’ appeared only in the Daily Maverick and the Mail & Guardian Online, while ‘Strong AI’ featured in the Citizen and the Mail and Guardian Online. The ‘Uncanny Valley’ was absent from the Mail & Guardian Online. Only the Daily Maverick reported on ‘Singularity’, while ‘Education’ appeared only in the Citizen.
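For readers who prefer raw counts, the percentages above can be converted back to approximate article numbers, assuming each share is calculated over the full sample of 126 articles; the short sketch below does the arithmetic.

```python
TOTAL = 126  # articles analysed

shares = {
    "Business, finance, and the economy": 18.25,
    "Human-AI interaction": 18.25,
    "Preparedness for an AI-driven world": 13.49,
    "Healthcare and medicine": 11.90,
    "Big Brother": 10.31,
    "Control over AI": 10.31,
    "News industry": 3.17,
    "Environment / Killer robots / Strong AI / Uncanny Valley (each)": 2.38,
    "Singularity": 1.58,
    "Education": 0.79,
}

for topic, pct in shares.items():
    print(f"{topic}: ~{round(pct / 100 * TOTAL)} articles")
# e.g. 18.25% of 126 is roughly 23 articles; 0.79% is roughly 1 article.
```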

Positive valence characterised the topics ‘Business, finance, and the economy’, ‘Education’, the ‘Environment’, ‘Healthcare and medicine’, ‘Human-AI interaction’, and ‘Strong AI’. Negative valence marked ‘Big Brother’ and ‘Killer robots’. ‘Control over AI’, the ‘News Industry’, ‘Preparedness for an AI-driven world’, ‘Singularity’, and the ‘Uncanny Valley’ were coded with mixed valence.

Although this does not form part of the discussion in Section 5, we noted that nineteen articles did not reflect the use of anthropomorphic tropes; topics across these articles included ‘Preparedness for an AI-driven world’ (six articles), ‘Big Brother’ (five articles), ‘Control over AI’ (three articles), ‘Business, finance, and the economy’ (two articles), the ‘News Industry’ (two articles), and ‘Strong AI’ (one article). ‘Preparedness for an AI-driven world’ and ‘Business, finance, and the economy’ were mostly coded with positive valence, while ‘Big Brother’ was coded with negative valence. The two articles on the ‘News Industry’ reflected negative and mixed valence, and ‘Control over AI’ was mostly coded with mixed valence.

4.2 Anthropomorphising AI

Dataset 1: Cognitive anthropomorphism. Approximately 12.7% of all articles (i.e., 16 of 126) reflected cognitive anthropomorphism. A closer reading also indicates that in articles featuring this type of anthropomorphism, the most salient topics were ‘Healthcare and medicine’, which featured in six articles, followed by ‘Business, finance, and the economy’, which was the focus of three articles. ‘Strong AI’ was covered in two articles, as was ‘Preparedness for an AI-driven world’. ‘Big Brother’, the ‘News Industry’, and ‘Singularity’ featured in one article each. When we examined how the two most salient topics were overwhelmingly framed, we noted that four articles focusing on ‘Healthcare and medicine’ were framed in terms of nature, one in terms of social progress, and one in terms of accountability. For the three articles addressing ‘Business, finance, and the economy’, the social progress frame predominated.

Dataset 2: Social anthropomorphism. Almost 47% of all articles (i.e., 59 of 126) reflected social anthropomorphism. The most prevalent topics were ‘Human-AI interaction’ (21 articles), followed by ‘Business, finance, and the economy’ and ‘Control over AI’ (with eight articles each). Less salient topics were ‘Preparedness for an AI-driven world’ (six articles), ‘Big Brother’ (five articles), ‘Healthcare and medicine’ (four articles), the ‘Environment’ (three articles), the ‘Uncanny Valley’ (two articles), ‘Education’ (one article), and ‘Singularity’ (one article). With respect to framing in the most salient articles, 14 articles on ‘Human-AI interaction’ evoked the frame of nature, six evoked the frame of social progress, and one reflected the frame of accountability. Three articles on ‘Control over AI’ reflected the morality/ethics frame and three evoked the frame of accountability. One article on this topic reflected the frame of nature and one evoked the frame of competition. Five articles on ‘Business, finance, and the economy’ evoked the frame of social progress, while the remaining three reflected the frames of competition, accountability, and nature respectively.

Dataset 3: Cognitive and social anthropomorphism. Thirty-two articles (25.39%) reflected both types of anthropomorphism. The most salient topics were ‘Business, finance, and the economy’ (nine articles) and ‘Healthcare and medicine’ (five articles). ‘Control over AI’ and ‘Human-AI interaction’ were the next most prevalent topics, with four articles each. Less prevalent topics were ‘Preparedness for an AI-driven world’ (three articles), ‘Killer robots’ (three articles), ‘Big Brother’ (two articles), the ‘News Industry’ (one article) and the ‘Uncanny Valley’ (one article). With respect to salient topics and frames, five articles on ‘Business, finance, and the economy’ evoked the frame of social progress. Three evoked the frame of nature, and one the frame of accountability. Four articles on ‘Healthcare and medicine’ reflected the frame of social progress and one evoked the frame of nature. With respect to ‘Control over AI’, the frame of nature was dominant in two articles, while the accountability frame was prevalent in the other two. Finally, two articles on ‘Human-AI interaction’ reflected social progress as the dominant frame and two evoked the frame of nature.

5. Discussion

A detailed discussion of all three datasets exceeds the scope of this study. Instead, we foreground articles from the first two datasets, based on the most salient topics.  Space constraints aside, we noted that news articles that reflected a dominant type of anthropomorphism coincided with a sustained focus on specific types of AI, which in turn impacted the topic under discussion. Thus, articles that accented cognitive anthropomorphism also topicalised AI technologies that simulate human cognition, including machine learning and neural networks. This finding rationalises our decision to discuss cognitive anthropomorphism of these technologies in relation to ‘Healthcare and medicine’ and ‘Business, finance, and the economy’, not only because these were the most salient topics in the first dataset, but also because both sectors demand types of AI that augment human thinking. The second dataset, where social anthropomorphism prevailed, essentially focused on AI-driven digital assistants/social robots and on human engagement with these technologies. We therefore explicate the topics ‘Human-AI interaction’, ‘Business, finance, and the economy’, and ‘Control over AI’, which were the most prevalent topics in this dataset. We did, however, review the third dataset, and noted that the findings mirrored those identified in the first two datasets.

5.1 Articles in which cognitive anthropomorphism predominated

All 16 articles in which cognitive anthropomorphism predominated also struck sensational and/or alarmist tones, where sensational reporting was “entertainment-oriented” or “tabloid-like” (Uribe and Gunter, 2007:207; cf. Vettehen and Kleemans, 2018:114) and alarmist reporting framed AI as warranting fear (cf. Ramakrishna, Verma, Goyal and Agrawal, 2020:1558). Typically, these articles portrayed technology as equalling or rivalling human intelligence. To illustrate, “A team […] taught an artificial intelligence system to distinguish dangerous skin lesions from benign ones” (the Daily Maverick, 29 May 2018), and “A computer programme […] learnt to navigate a virtual maze and take shortcuts, outperforming a flesh-and-blood expert” (the Citizen, 9 May 2018). Interestingly, the writers of these articles also deployed various discursive strategies to mitigate an alarmist and/or sensational tone. Rather than relying solely on their own reporting, journalists commonly moderated exaggerated claims about machine intelligence by quoting or paraphrasing sceptical scholars/experts and other AI stakeholders, or by simply enclosing key terms in scare quotes (cf. Schmid-Petri and Arlt, 2016:269). Journalists were thus able to maintain authorial distance from potentially false or overstated claims (cf. Johannson, 2019:141). Indeed, in 12 of the 16 articles, we noted that journalists built their articles predominantly on quotations and/or paraphrases of various actors’ voices. In Johannson’s (2019:138) view, constructing news reports around quotations enables, “journalistic positioning […] based on the detachment of responsibility”. In ‘Will your financial advisor be replaced by a machine?’ (the Citizen, 10 March 2018), for instance, although the journalist reported that “Technology has the ability […] to […] analyse a full array of products, potentially identifying suitable [financial] solutions” and that it “can process and analyse all kinds of data far quicker and more accurately than humans”, he also cited an industry expert as predicting that “the human element” will remain. Doing so enabled him to maintain distance from claiming that artificial intelligence can outpace humans.

Another strategy that several of the journalists in our dataset adopted to attenuate alarmist/sensational claims about the cognitive abilities of AI was to frame this technology in contradictory terms. This was particularly evident in articles centred on AI in the healthcare industry. In ‘Could AI beat humans in spotting tumours?’ (the Citizen, 22 January 2020), for example, a statement such as “Machines can be trained to outperform humans when it comes to catching breast tumours on mammograms” was followed by a reference to a study that highlighted AI’s flaws and misdiagnoses. Similarly, in ‘AI better at finding skin cancer than doctors’ (the Daily Maverick, 29 May 2018), the journalist reported that according to researchers, “A computer was better than human dermatologists at detecting skin cancer”; yet the journalist also quoted a medical expert as stating that “there is no substitute for a thorough clinical examination”. Citing contradictions around AI represents one option in the range of strategies journalists can leverage to resolve the uncertainty and conflict surrounding this novel technology (cf. Hornmoen, 2009:1; Kampourakis and McCain, 2020:152). They may also disregard any uncertainties and simply treat scientific claims as factual (Peters and Dunwoody, 2016:896). However, this strategy surfaced in only two of the 16 articles in which cognitive anthropomorphism was apparent. For example, in ‘Wits develops artificial intelligence project with Canadian university to tackle Covid-19 in Africa’ (the Daily Maverick, 6 December 2020), the journalist quoted an academic as claiming that, in the fight against COVID-19, “Artificial intelligence is the most advanced set of tools to learn from the data and transfer that knowledge for the purpose of creating realistic modelling”.

Depicting AI in terms of competing interpretations that allow journalists to manage scientific uncertainty is typical of post-normal journalism, which blurs the boundaries between journalism and science (Brüggemann, 2017:57-58). The term ‘post-normal science’, coined by Funtowicz and Ravetz (1993), describes science conducted under high levels of uncertainty, where the phenomena under investigation are characterised as “novel”, “complex” and “not well understood” (Funtowicz and Ravetz, 1993:87). AI is, undoubtedly, a contested technology. Some praise its power to distribute social and economic benefits evenly, while others decry its ontological threat to humanity (Ulnicane, Knight, Leach, Stahl and Wanjiku, 2020:8-9). Knowledge about AI remains limited and disputed. Unsurprisingly, then, journalists may generate “a plurality of perspectives” (Brüggemann, 2017:58).

By citing competing frames, journalists can “balance […] conflicting views” (Skovsgaard, Albæk, Bro and De Vreese, 2013:25) on a given topic and encourage readers to formulate judgements independently. Thus, in ‘Big data a game-changer for universities’ (the Mail & Guardian Online, 25 July 2019), readers must decide for themselves whether they support the view that AI is “capable of predicting lung cancer with greater accuracy than highly trained and experienced radiologists” or whether they believe that “humans are indispensable” in the detection of lung cancer. This particular example indicates that employing competing frames is not without flaws. In this respect, Boykoff and Boykoff (2004:127) contend that the balance norm could constitute a false balance in that journalists may “present competing points of view on a scientific question as though they had equal scientific weight, when actually they do not”.[1] This false balance may confuse readers and hinder their ability to distinguish fact from fiction (Brüggemann, 2017:57-58). Research suggests that the public resist competing frames because they neutralise each other, complicating the process of taking a position on a particular issue (Sniderman and Theriault, 2004:139; cf. Chong and Druckman, 2012:2; Obozintsev, 2018:15). Consider the mixed messages in ‘X-rays and AI could transform TB detection in South Africa, but red tape might delay things’ (the Daily Maverick, 13 December 2020). A layperson would be hard pressed to reconcile the claim made by the World Health Organisation that, “the diagnostic accuracy and the overall performance of [AI-driven] software were similar to the interpretation of digital chest radiography by a human reader” with the view expressed by an expert from the Radiological Society of South Africa that this software requires human oversight. Significantly, what was omitted from this article, and from most articles centred on healthcare, was an account of why AI for disease detection requires human input. Instead, journalists merely reported that AI-driven diagnostic tools could err and require enormous datasets to enhance accuracy.

Indeed, five of the six healthcare articles in our dataset described AI as outperforming humans in the detection, interpretation or prediction of diseases. What is absent from these articles is the fact that human and artificial intelligence cannot be conflated: “Seeking to compare the reasoning of human and artificial intelligence (AI) in the context of medical diagnoses is an overly optimistic anthropomorphism” argues David Burns (2020:E290) in the Canadian Medical Association Journal. This position is premised on the observation that machine learning algorithms, which are employed in computer-based applications to support the detection of diseases, are mathematical formulae that are unable to reason, and so they are not intelligent. Quer, Muse, Nikzad, Topol and Steinhubl (2017:221) echo this argument, asserting that there is no explanatory power in medical AI: “It cannot search for causes of what is observed. It recognizes and accurately classifies a skin lesion, but it falls short in explaining the reasons causing that lesion and what can be done to prevent and eliminate disease”. Furthermore, although AI mimics human intelligence, it requires vast archives of data to ‘learn’. By contrast, humans can learn through simple observation. A good example is a scenario in which a human learns to recognise any given object after observing it only once or twice. AI software would need to view the object repeatedly to recognise it, and even then, it would be unable to distinguish this object from a new object (Pesapane et al., 2020:5).

The healthcare articles in our dataset that reflected cognitive anthropomorphism also failed to address ethical issues surrounding medical AI. A key ethical issue pertains to the consequences of using algorithms in healthcare. In ‘Could AI beat humans in spotting tumours?’ (the Citizen, 22 January 2020), the journalist reported on a deep learning AI model designed to detect breast tumours[2], quoting a medical doctor and researcher as stating that experts are unable to explain why the model ‘sees’ or ‘overlooks’ tumours: “At this point, we can observe the patterns […]. We don’t know the ‘why’”. This constitutes the so-called “black-box problem” (Castelvecchi, 2016:1), which arises when the processes between the input and output of data are opaque. Put differently, computers are programmed to function like neural networks (that are supposedly superior to standard algorithms), but as is the case with the human brain, “[i]nstead of storing what they have learned in a neat block of digital memory, they diffuse the information in a way that is exceedingly difficult to decipher” (Castelvecchi, 2016:1). Problematically, this entails that while doctors can interpret the outcomes of an algorithm, they cannot explain how the algorithm made the diagnosis (cf. Durán and Jongsma, 2021:1; cf. Gerke, Minssen and Cohen, 2020:296), which generates a host of ethical problems: “Can physicians be deemed responsible for medical diagnosis based on AI systems that they cannot fathom? How should physicians act on inscrutable diagnoses?” (Durán and Jongsma, 2021:1). On an epistemological level, we should be concerned, not only about biased algorithms, but also about the degree to which black-box algorithms could damage doctors’ epistemic authority (Durán and Jongsma, 2021:1). Most of the articles in our dataset acknowledged that medical AI requires huge volumes of data to accurately screen for diseases, but overlooked such ethical and epistemic concerns. Additionally, the articles omitted any discussion of the potential for algorithmic biases related to race, gender, age, and disabilities, among others (Gerke et al., 2020:303-304). While several articles reported that AI can misdiagnose diseases, they overlooked arguments that inaccuracies may stem from the fact that algorithms are usually trained on Caucasian patients, instead of diverse patient data, thereby exacerbating health disparities (Adams, 2020:1). To educate the public about medical AI’s benefits and potential ethical and social risks, Ouchchy et al. (2020:927) suggest a multifaceted approach which, “could include increasing the accessibility of correct information to the public in the form of fact-sheets” and collaborating with AI ethicists to improve public debate.
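The ‘black-box problem’ described above can be made concrete with a small sketch. The synthetic data and network below are illustrative assumptions, not any system mentioned in the articles: the point is simply that, after training, the model’s ‘knowledge’ is spread across thousands of numeric weights that carry no clinical meaning, so a prediction can be read off but not explained.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))               # stand-in for 500 scans, 64 features each
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # synthetic 'tumour'/'no tumour' labels

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

new_case = rng.normal(size=(1, 64))
print("prediction:", model.predict(new_case)[0])
print("confidence:", model.predict_proba(new_case)[0].max())

# The learned 'reasoning' is just weight matrices: inspectable as numbers,
# but not as clinical criteria a doctor could audit or explain to a patient.
n_weights = sum(w.size for w in model.coefs_)
print("number of learned weights:", n_weights)
```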

Articles on ‘Business, finance, and the economy’ that used cognitive anthropomorphism were mainly framed in terms of social progress. This finding is not unexpected, since applications of AI in business, finance, and the economy are generally associated with benefits including increased economic wealth, greater productivity and efficiency (cf. Vergeer, 2020:377). Nevertheless, journalists also struck an alarmist and/or sensational tone by claiming that AI can emulate human intelligence to make independent financial decisions (the Citizen, 23 May 2018; 23 October 2019). Alarmist and/or sensational coverage of AI may, “maximize ratings, readership, clicks, and views”; yet it may also retard public understanding of such technologies (Lea, 2020:329). As was the case in articles featuring healthcare and medicine, journalists focusing on personal finance framed AI in contradictory terms, reporting, for example, that while AI either matches or exceeds human intelligence, it will not replace human financial advisors in the near future: “The machine can emulate, but it can’t innovate – yet” (the Citizen, 23 October 2019). As already indicated, couching AI in contradictory terms may help journalists resolve uncertainty about this novel technology. On the other hand, such framings also risk befuddling the public, as noted earlier (cf. Brüggemann, 2017:57-58). Claims that AI will either mimic or rival human intelligence without replacing humans are unhelpfully vague, and might obstruct public confidence in and perceptions of the technology (cf. Cave et al., 2018:2).

A 2018 article in The Guardian quotes Zachary Lipton, a machine learning expert based at Carnegie Mellon University, as lamenting that, “as […] hyped-up stories [about AI] proliferate, so too does frustration among researchers with how their work is being reported on by journalists and writers who have a shallow understanding of the technology” (Schwartz, 2018:4). Although several articles in our dataset made vague assurances that AI cannot yet substitute human intelligence, none engaged rigorously with arguments among AI scholars and industry experts that artificial general intelligence (AGI) might remain unrealised (Bishop, 2016; Fjelland, 2020; Lea, 2020:324). A 2021 book that offers interesting insights into AI’s so-called superintelligence is The myth of artificial intelligence by Erik Larson, who asserts that the scientific aspect of the AI myth is based on the assumption that we will achieve AGI as long as we make inroads in the area of weak or narrow AI. However, “[a]s we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress” (Larson, 2021:2; cf. Lea, 2020:323). In fact, creating an algorithm for general intelligence “will require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like” (Larson, 2021:2). In Watson’s (2019:417) view, drawing on anthropomorphic tropes to conflate human intelligence and AI is misleading and even dangerous. Supposing that algorithms share human traits implies that “we implicitly grant [AI] a degree of agency that not only overstates its true abilities, but robs us of our own autonomy” (Watson, 2019:434).

5.2 Articles in which social anthropomorphism predominated

The 59 articles in which social anthropomorphism featured mirrored the above-mentioned discursive strategies. Generally, journalists blended their own reports with strategically selected quotes and paraphrases from AI researchers and technology experts. Scare quotes also registered a note of scepticism, and journalists continued to frame AI in contradictory terms. Anthropomorphic framing of social robots or digital assistants shaped articles on human-AI interaction as well as many articles on business/financial issues, pointing to the human tendency to regard AI-driven technologies as social actors (Duffy and Zawieska, 2012), despite an awareness that they are inanimate (Scholl and Tremoulet, 2000). This is predictable, given that creators of social robots and digital assistants rely on anthropomorphic design to enhance acceptance of and interaction with them (Fink, 2012:200; cf. Darling, 2015:3). Adopting a predominantly pro-AI stance, most journalists in our dataset portrayed social robots/digital assistants through the frames of nature and social progress, imbuing them with a human-like form and/or human-like behaviours. For example, with respect to having a human-like form, several social robots/digital assistants were described as exhibiting “human likeness” (the Daily Maverick, 10 November 2019), “remarkable aesthetics” (the Citizen, 5 September 2018) and “complex facial expressions” (the Sowetan LIVE, 4 February 2018). With respect to human-like traits, journalists variously described AI as “capable of handling routine tasks” (the Daily Maverick, 9 May 2018), as “a colleague or personal assistant – in the human sense of the term” (the Mail & Guardian Online, 26 August 2019), and as offering emotional or mental support in the workplace (the Sowetan LIVE, 30 November 2020). Quoting or paraphrasing industry experts, one writer of a Sowetan LIVE article (30 November 2020) actually claimed that AI surpasses humans’ capacity to function as assistants or companions: “[AI] doesn’t judge you and doesn’t care about your race or class or gender. It gives you non-biased responses”. The reality is that if AI systems are trained on a biased dataset, they will replicate bias (Borgesius, 2018:11).
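The point that models trained on biased data reproduce that bias can be shown in miniature. The sketch below uses invented data rather than any system discussed in the articles: a classifier fitted to historical decisions that disadvantaged one group learns to disadvantage that group too.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)   # 0 or 1: a protected attribute
skill = rng.normal(size=n)      # the legitimately relevant feature

# Biased historical labels: group 1 was approved less often at equal skill.
approved = ((skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# Identical skill, different group membership:
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
# The second probability is lower: the model has 'learned' the historical bias.
```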

With respect to bias, we noted that six Daily Maverick and Mail & Guardian Online articles in which social anthropomorphism was apparent briefly addressed AI bias and its ethical impact on society. This finding aligns with that of Ouchchy et al. (2019), who note that media coverage of ethical issues surrounding AI is broadly realistic, but superficial (cf. Ouchchy et al., 2020:1). Typical utterances in these articles humanised AI algorithms, as evidenced in: “AI algorithms […] will reflect and perpetuate the contexts and biases of those that create them” (the Mail & Guardian Online, 8 January 2018), “Fix AI’s racist, sexist bias” (the Mail & Guardian Online, 14 March 2019), and “[…] machines, just like humans, discriminate against ethnic minorities and poor people” (the Daily Maverick, 16 October). Epistemologically speaking, these utterances assign moral agency to AI. This misleading belief about AI’s capabilities detracts from debates around policies that need to address and prevent algorithmic bias on the part of humans (cf. Salles et al., 2020:93; Kaplan, 2015:36).

A few journalists focusing on human-AI interaction and on business/financial issues framed AI and human attributes as nearly indistinguishable. In such articles, journalists claimed that AI technologies possess “human-sounding voice[s] complete with ‘ums’ and ‘likes’” (the Daily Maverick, 9 May 2018) and that they “can be programmed to […] chat with customers and answer questions” (the Sowetan LIVE, 2 March 2018). Of course, AI-driven technologies are limited to predetermined responses (Heath, 2020:4), which Highfield (2018:3) terms “canned responses to fixed situations that give humans a sense that the [AI] is alive or capable of understanding [them]”. An examination of the dataset indicated that several journalists mitigated claims about AI’s ability to imitate human traits and behaviours through contradictory views that also evoked the frame of nature. Thus, in ‘Chip labour: Robots replace waiters in restaurant’ (the Mail & Guardian Online, 5 August 2018), although the journalist described a “little robotic waiter” as wheeling up to a table and serving patrons with food, he also employed the frame of nature to emphasise its “mechanical tones”. Similarly, in ‘Is your job safe from automation?’ (the Sowetan LIVE, 20 March 2018), the journalist referred to a humanoid robot as being able “to recognise voice, principal human emotions, chat with customers and answer questions”, but also averred that “Robots have no sense of emotion or conscience”. Although the ‘Uncanny valley’ is not discussed here because it was not a salient topic (featuring in only three articles), we noted that a few journalists referred to AI-driven robots as “uncanny” (the Daily Maverick, 10 November 2019) and “eerie” (the Sowetan LIVE, 8 March 2018). These references acknowledge the uncanny valley, “the point at which something nonhuman has begun to look so human that the subtle differences left appear disturbing” (Samuel, 2019:12), which has prompted robot designers to produce machines that are easily distinguishable from humans in appearance.
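The ‘canned responses’ characterisation discussed above can be illustrated with a deliberately crude sketch: a scripted assistant that maps recognised phrases to fixed replies and otherwise falls back to a default, with no understanding involved. All triggers and responses here are invented.

```python
# A scripted 'assistant': fixed pattern -> fixed reply, nothing learned or understood.
CANNED_REPLIES = {
    "opening hours": "We are open from 9am to 5pm, Monday to Friday.",
    "order status": "Your order is being processed and will ship soon.",
    "speak to a human": "Transferring you to an agent now.",
}
FALLBACK = "Sorry, I didn't quite get that. Could you rephrase?"

def reply(utterance: str) -> str:
    text = utterance.lower()
    for trigger, canned in CANNED_REPLIES.items():
        if trigger in text:
            return canned
    return FALLBACK

print(reply("What are your opening hours?"))   # canned reply
print(reply("Why do I feel lonely today?"))    # fallback: no 'understanding' to draw on
```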

Only four journalists focusing on human-AI interaction or on business/financial issues expressed a negative or mixed stance on AI by questioning its ability to emulate human emotion and sentience. In a Sowetan LIVE article (17 May 2018), for example, the journalist questioned what the future would hold were robots to prepare our meals or care for our children. The journalist evoked the frame of nature to insist that “Robots can’t replace human love, laughter and touch”. In ‘Is your job safe from automation?’ (the Sowetan LIVE, 20 March 2018), the journalist observed that AI “lack[s] empathy” and has “no conscience”. These arguments align with expert conclusions that AI-driven robots cannot possess sentience/consciousness (cf. Hildt, 2019). Put differently, AI remains emotionally unaware and, as Kirk (2019:3) argues, even if we train AI to recognise emotions, humans programme the labelling and interpretation process.

Reasons for attributing a human form and/or human-like attributes to AI are speculative and vary across the literature. Still, adopting a psychological explanation, Epley, Waytz and Cacioppo (2007) propose that people tend to anthropomorphise a non-human agent when they possess insufficient knowledge about the agent’s mental model, when they need to understand and control it, or when they desire to form social bonds (cf. Złotowski et al., 2015:348). Scholars are divided over whether or not anthropomorphism in the context of social robots/digital assistants should concern us. Turkle (2007, 2010), for instance, argues that human-robot interaction undermines authentic human relationships and exploits human vulnerabilities, while Breazeal (2003) is of the view that social robots may be useful to humans as helpmates and social companions. As far as benefits are concerned, Darling (2015:9) observes that “[s]tate of the art technology is already creating compelling use cases in health and education, only possible as a result of engaging people through anthropomorphism”.

We argue that while the benefits of social robots/digital assistants should not be dismissed, anthropomorphising AI does have several potentially negative consequences (Sparrow and Sparrow, 2006; Bryson, 2010; Hartzog, 2015). We have already touched on the idea that anthropomorphism may dupe people into believing that AI systems are human-like (cf. Kaplan, 2015:36). This concern is echoed by Engstrom (2018:19), who cautions that humanising AI may cause society to raise its expectations of this technology’s capabilities while ignoring its social, economic, and ethical consequences. In articles focused on human-AI interaction and business/financial issues, we noted that journalists either reported the risks of AI in a superficial manner or omitted them entirely. Thus, for example, in ‘AI tech to assist domestic abuse victims’ (the Citizen, 23 November 2018), an AI-driven programme that is accessed via Facebook’s Messenger was described as “a companion” that is “non-judgmental”, but the ethical risks around this AI-mental healthcare interface remained unaddressed. Using AI applications for mental healthcare raises several ethical concerns that have been widely discussed in the literature (Riek, 2016; Fiske, Henningsen and Buyx, 2019; Ferretti, Ronchi and Vayena, 2019). Some of these concerns revolve around possible abuse of the applications (in the sense that they could replace established healthcare professionals and thus widen healthcare inequalities), privacy issues, and the role and nature of non-human therapy in the context of vulnerable populations (Fiske et al., 2019). In ‘‘Call me baby’: Talking sex dolls fill a void in China’ (the Sowetan LIVE, 4 February 2018), the journalist employed derogatory female framing, describing “sex dolls that can talk, play music and turn on dishwashers” for “lonely men and retirees”. While the journalist conceded that “On social media, some say the products reinforce sexist stereotypes”, this observation ended the interrogation of sexism. Across the four media outlets – and quoting AI developers’ own words – journalists described AI companions or assistants as female, “endowed with remarkable […] aesthetics” (the Citizen, 5 September 2018), and as “lean” or “slender, with dark flawless skin”, etc. (the Sowetan LIVE, 28 September 2018). These descriptions echo mass media proclivities for framing human-AI relationships in terms of stereotypical gender roles instead of questioning such representations (cf. Döring and Poeschl, 2019:665). The fact is that most AI-driven companions/assistants are designed according to stereotypical femininity (cf. Edwards, Edwards, Stoll, Lin and Massey, 2019). Informative journalism should challenge the entrenchment of these stereotypes that often “[come] with framing [AI] in human terms” (Darling, 2015:3).

Another interesting example of how journalists may frame ethical concerns related to the application of AI is reflected in ‘Online chatbot suspended for hate speech, “despising” gays and lesbians’, published in the Citizen (1 January 2021). The article reports on ‘Lee Luda’, a chatbot that was recently ‘accused’ of hate speech after ‘attacking’ minorities online. Of significance is that although the journalist indicated that the chatbot “learned” from data taken from billions of conversations, this fact was backgrounded in favour of foregrounding the chatbot’s human-like behaviour; according to her designers, “Lee Luda is […] like a kid just learning to have a conversation. It has a long way to go before learning many things”. Emphasising the chatbot’s supposed ability to learn to avoid generating hate speech inadvertently frames this technology as having human intentions and moral agency, which are myths. AI does not possess intentionality (Abbass, 2018:165), which Searle defines as “that property of many mental states and events by which they are directed at or about or of objects and states of affairs in the world”. Without intentionality to act freely, AI does not have moral agency (Van de Poel, 2020:387).

Social anthropomorphism featured in eight articles on ‘Control over AI’ in the Daily Maverick, the Mail & Guardian Online, and the Sowetan LIVE. The anthropomorphic tropes, which were evoked mainly through the frames of accountability and morality/ethics, typically carried a mixed valence in the proposition that humans must regulate AI. Concerns related mainly to controlling or curtailing algorithmic/data bias (particularly racist and sexist bias), autonomous weapons, and job losses. With respect to bias, in a Daily Maverick article (3 October 2019), the journalist claimed in the lead that “AI can end up very biased”, but nevertheless repeatedly averred that AI is designed by humans and trained on datasets selected by humans. Similarly, in a Sowetan LIVE article (30 January 2021), the journalist described AI as “dangerous” and “prone to errors”, but mainly topicalised the development of AI software “by Africans, for Africans” that helps combat privacy violations and discrimination. Both journalists therefore checked the tendency to frame AI as a moral agent that exhibits autonomous decision-making processes, thus mitigating fears and unfounded expectations about this technology’s capabilities (cf. Salles et al., 2020:93). With respect to autonomous weapons, although the journalist in a Daily Maverick article (3 December 2019) referred to “killer robots”, she foregrounded the need for the international community to protect societies from “machines [that] can’t read between the lines or operate in the grey zone of uncertainty”. Regarding job lay-offs, a Daily Maverick journalist (26 November 2020) predicted that AI will ultimately take human jobs, which “will usher in an era of techno-feudalism”. Yet, he also mitigated this prediction by arguing that humans need to ensure that they regulate AI. It is not surprising that across the eight articles, the words “(human) control”/“controls” frequently appeared in relation to algorithmic/data bias, autonomous weapons, and job losses: studies by Ouchchy et al. (2020) and Sun et al. (2020) suggest that regulation of AI is a frequent topic in the media amidst fears of the ethical consequences of this technology.

5.3 A comparison of the news outlets with regard to topics and anthropomorphism of AI

Whether journalists published in a mainstream paper, such as the Mail & Guardian Online, an alternative media outlet, such as the Daily Maverick, or in tabloid-style newspapers, such as the Citizen or the Sowetan LIVE, all of them employed similar strategies to reflect uncertainty and conflict surrounding AI and its applications. Articles typically combined journalists’ own reports, scare quotes, direct and indirect speech of different actors, and contradictory framing of AI. All four outlets also anthropomorphised AI. The topics of ‘Human-AI interaction’, ‘Healthcare and medicine’, and ‘Business, finance, and the economy’ featured across all four outlets, with anthropomorphic framing of AI under the first two topics being uniform across the outlets. AI was overwhelmingly framed positively and depicted as exhibiting a human-like form/human traits or as mimicking human cognitive capabilities. Although articles published in the Sowetan LIVE also anthropomorphised AI when discussing business, finance, and the economy, these articles were coded with mixed valence, while articles published in the other newspapers were predominantly coded with positive valence. We eschew speculation as to why this was the case, given that between 2018 and the beginning of 2021, we identified only three articles in this newspaper that focused on AI and business/financial issues. Indeed, only 12 Sowetan LIVE articles satisfied our data collection criteria, suggesting that AI’s application in the business/financial world is an unpopular topic among this newspaper’s readers. ‘Control over AI’ featured in the Daily Maverick, the Mail & Guardian Online as well as in the Sowetan LIVE, and again, anthropomorphism of AI under this topic was uniform: AI was described as biased, as taking people’s jobs and as having the ability to kill humans. With a sensational soubriquet like ‘Killer robots’, one might assume that any reports on autonomous weapons would be the purview of tabloid-style newspapers, but this topic appeared only in three articles in the Daily Maverick and the Mail & Guardian Online. Despite some sensational/alarmist claims, such as the Mail & Guardian Online’s (19 March 2018) “[…] weapons may be able to learn on their own, adapt and fire”, journalists questioned the potential for AI to progress to a level where it will have moral agency and demanded that this type of AI be banned. Another topic that has the potential to be sensationalised is ‘Big Brother’, so it is unsurprising that it appeared 13 times in the Citizen. Five of the 13 articles did not anthropomorphise AI as a ‘spy’, but highlighted the human element that drives surveillance technology. As noted in Section 4.1, the only news outlet that did not cover ‘Preparedness for an AI-driven world’ was the Sowetan LIVE. Since this topic was generally framed around the need for South Africans to equip themselves with the skills necessary to cope with AI, which is ‘taking’ people’s jobs, we find the omission of this topic surprising, given that the newspaper’s readership consists mainly of working-class South Africans. The remainder of the topics reflected in our datasets are not discussed, since they each constituted less than 4% of the entire dataset.

6. Conclusions

This study has revealed that anthropomorphism of AI was pervasive in the four South African online newspapers, with only 19 of the 126 articles reflecting no anthropomorphic tropes. Most articles (59) reflected social anthropomorphism of AI, while a few (16) evoked cognitive anthropomorphism. A total of 32 reflected both types of anthropomorphism. When cognitive anthropomorphism was evoked, journalists typically portrayed AI as matching or exceeding human intelligence, and when social anthropomorphism was elicited, AI technologies were typically framed as social actors. Whichever type of anthropomorphism was dominant, AI was overwhelmingly represented as benefitting humankind. Although journalists generally attempted to mitigate exaggerated claims about AI by using a variety of discursive strategies, the construction of anthropomorphic tropes to some extent overtook the reality of what AI technologies currently encompass, essentially obscuring these technologies’ epistemological and ethical challenges. It is critical that journalists interrogate how they contextualise and qualify AI, given that it is disrupting almost every aspect of our lives.

While the content analysis yielded insights into how AI is framed by the media in South Africa, a limitation of the study is that the sample is not necessarily representative of the anthropomorphic framing employed in other online news outlets, which may feature different or more polarised views of AI. Nevertheless, Obozintsev (2018:45) observes that “it seems unlikely that artificial intelligence would be framed in a markedly different manner” in other outlets, since “opinions about [AI] are not as politically divisive as scientific issues such as climate change and evolution”, for example.

References

Abbass, H.A. 2019. Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. Cognitive Computation 11: 159–171.

Adams, K. 2020. 3 hospital execs: How to ensure medical AI is trained on sufficiently diverse patient data. Becker’s Health IT, 30 November. Available:  https://www.beckershospitalreview.com/artificial-intelligence/3-hospital-execs-how-to-ensure-medical-ai-is-trained-on-sufficiently-diverse-patient-data.html (Date of access: 9 April 2021).

Bartneck, C. 2013. Robots in the theatre and the media. Design and Semantics of Form and Movement: 64–70.

Birkenshaw, J. 2020. What is the value of firms in an AI world? Pp. 23-35 in J. Canals and F. Heukamp (Eds.), The future of management in an AI world. USA: Palgrave Macmillan/Springer International Publishing.

Bishop, J.M. 2016. Singularity, or how I learned to stop worrying and love artificial intelligence. Pp. 267-281 in V.C. Müller (Ed.), Risks of general intelligence. London, UK: CRC Press – Chapman & Hall.

Borgesius, F.Z. 2018. Discrimination, artificial intelligence, and algorithmic decision-making. Available: https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decisionmaking/1680925d73 (Date of access: 3 March 2021).

Boukes, M., Van de Velde, B., Araujo, T. and Vliegenthart, R. 2020. What’s the tone? Easy doesn’t do it: Analyzing performance and agreement between off-the-shelf sentiment analysis tools. Communication Methods and Measures 14(2): 83-104.

Boykoff, M. and Boykoff, J. 2004. Balance as bias: Global warming and the US prestige press. Global Environmental Change 14(2): 125-136.

Breazeal, C. 2003. Toward sociable robots. Robotics and Autonomous Systems 42(3): 167-175.

Brennen, J.S., Howard, P.N. and Nielsen, R.K. 2018. An industry-led debate: How UK media cover artificial intelligence. RISJ Fact-Sheet. Oxford, UK: University of Oxford.

Brüggemann, M. 2017. Post-normal journalism. Climate journalism and its changing contribution to an unsustainable debate. Pp. 57-73 in P. Berglez, U. Olausson and M. Ots (Eds.), What is sustainable journalism? Integrating the environmental, social, and economic challenges of journalism. New York, NY: Peter Lang.

Bryson, J. 2010. Robots should be slaves. Pp. 63-74 in Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues. Amsterdam: John Benjamin Publishing Company.

Bunz, M. and Braghieri, M. 2021. The AI doctor will see you now: Assessing the framing of AI in news coverage. AI & SOCIETY: 1-14.

Burns, D.M. 2020. Artificial intelligence isn’t. Canadian Medical Association Journal 192(11): E290-E290.

Burscher, B., Vliegenthart, R. and Vreese, C.H.D. 2016. Frames beyond words: Applying cluster and sentiment analysis to news coverage of the nuclear power issue. Social Science Computer Review 34(5): 530-545.

Castelvecchi, D. 2016. Can we open the black box of AI? Nature News 538(7623): 20-23.

Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B. and Taylor, L. 2018. Portrayals and perceptions of AI and why they matter. Available: https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf  (Date of access: 2 February 2020).

Chong, D. and Druckman, J.N. 2012. Counterframing effects. Journal of Politics 75(1): 1-16.

Chuan, C.H., Tsai, W.H.S. and Cho, S.Y. 2019. Framing artificial intelligence in American newspapers. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: 339-344.

Colom, R., Karama, S., Jung, R.E. and Haier, R.J. 2010. Human intelligence and brain networks. Dialogues in Clinical Neuroscience 12(4): 489-501.

Curran, N.M., Sun, J. and Hong, J.W. 2019. Anthropomorphizing AlphaGo: A content analysis of the framing of Google DeepMind’s AlphaGo in the Chinese and American press. AI & SOCIETY 35: 727-735.

Damiano, L. and Dumouchel, P. 2018. Anthropomorphism in human–robot co-evolution. Frontiers in Psychology 9: 1-9.

Darling, K. 2015. ‘Who’s Johnny? Anthropomorphic framing in human-robot interaction, integration, and policy. Pp. 3-21 in P. Lin, G. Bekey, K. Abney and R. Jenkins (Eds.), Robotic Ethics 2.0. Oxford: Oxford University Press.

Döring, N. and Poeschl, S. 2019. Love and sex with robots: A content analysis of media representations. International Journal of Social Robotics 11(4): 665-677.

Duffy, B.R. and Zawieska, K. 2012. Suspension of disbelief in social robotics. 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): 484-489.

Durán, J.M. and Jongsma, K.R. 2021. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics 0: 1-7.

Edwards, C., Edwards, A., Stoll, B., Lin, X. and Massey, N. 2019. Evaluations of an artificial intelligence instructor’s voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior 90: 357-362.

Engstrom, E. 2018. Gendering of AI/robots: Implications for gender equality amongst youth generations. Report written by Eugenia Novoa (Speaker), Siddhesh Kapote (Speaker), Ebba Engstrom (Speaker), Jose Alvarez (Speaker) and Smriti Sonam (Special Rapporteur). Images provided by AFI Changemakers and UNCTAD Youth Summit Delegates 2018. Available: https://arielfoundation.org/wp-content/uploads/2019/01/AFIChangemakers-and-UNCTAD-Delegates-Report-on-Technology-2019.pdf#page=13 (Date of access: 8 January 2021).

Entman, R.M. 2010. Framing media power. Pp. 331-355 in P. D’Angelo and J. Kuypers (Eds.), Doing news framing analysis. New York, NY: Routledge.

Epley, N., Waytz, A. and Cacioppo, J.T. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review 114(4): 864-886.

Erickson, R.P. 2014. Are humans the most intelligent species? Journal of Intelligence 2(3): 119-121.

Fast, E. and Horvitz, E. 2017. Long-term trends in the public perception of artificial intelligence. In: Proceedings of the AAAI Conference on Artificial Intelligence 31(1): 963-969.

Ferretti, A., Ronchi, E. and Vayena, E. 2019. From principles to practice: Benchmarking government guidance on health apps. The Lancet Digital Health 1(2): e55-e57.

Fjelland, R. 2020. Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications 7(1): 1-9.

Fiske, A., Henningsen, P. and Buyx, A. 2019. Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research 21(5): e13216.

Fortunato, J.A. and Martin, S.E. 2016. The intersection of agenda-setting, the media environment, and election campaign laws. Journal of Information Policy 6(1): 129-153.

Freyenberger, D. 2013. Amanda Knox: A content analysis of media framing in newspapers around the world. Available: http://dc.etsu.edu/cgi/viewcontent.cgi, article=2281&context=etd (Date of access: 22 February 2021).

Funtowicz, S.O. and Ravetz, J.R. 1993. Science for the post-normal age. Futures 25(7): 739-755.

Garvey, C. and Maskal, C. 2020. Sentiment analysis of the news media on artificial intelligence does not support claims of negative bias against artificial intelligence. Omics: A Journal of Integrative Biology 24(5): 286-299.

Geertz, C. 1973. The interpretation of cultures: Selected essays. New York, NY: Basic Books.

Gerke, S., Minssen, T. and Cohen, G. 2020. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare: 295-336.

Giger, J.C., Piçarra, N., Alves‐Oliveira, P., Oliveira, R. and Arriaga, P. 2019. Humanization of robots: Is it really such a good idea? Human Behavior and Emerging Technologies 1(2): 111-123.

Heath, N. 2020. What is AI? Everything you need to know about artificial intelligence. ZDNet, 11 December. Available: https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/ (Date of access: 19 April 2021).

Hertog, J. and McLeod, D. 2001. A multi-perspectival approach to framing analysis: A field guide. Pp. 141-162 in S. Reese, O. Gandy and A. Grant (Eds.), Framing public life. Mahwah, NJ: Erlbaum.

Highfield, V. 2018. Can AI really be emotionally intelligent? Alphr, 27 June. Available: https://www.alphr.com/artificial-intelligence/1009663/can-ai-really-be-emotionally-intelligent/ (Date of access: 19 April 2021).

Hildt, E. 2019. Artificial intelligence: Does consciousness matter? Frontiers in Psychology 10: 1-3.

Holguín, L.M. 2018. Communicating artificial intelligence through newspapers: Where is the real danger?. Available: https://mediatechnology.leiden.edu/images/uploads/docs/martin-holguin-thesis-communicating-ai-through-newspapers.pdf (Date of access: 3 April 2020).

Hornmoen, H. 2009. What researchers now can tell us: Representing scientific uncertainty in journalism. Observatorio 3(4): 1-20.

Hsieh, H.F. and Shannon, S.E. 2005. Three approaches to qualitative content analysis. Qualitative Health Research 15(9): 1277-1288.

Johannson, M. 2019. Digital and written quotations in a news text: The hybrid genre of political news opinion. Pp. 133-162 in P.B. Franch and P.G.C. Blitvich (Eds.), Analyzing digital discourse: New insights and future directions. Cham, Switzerland: Springer.

Jones, S. 2015. Reading risk in online news articles about artificial intelligence. Unpublished MA dissertation. Edmonton, Alberta: University of Alberta.

Kampourakis, K. and McCain, K. 2020. Uncertainty: How it makes science advance. USA: Sheridan Books, Incorporated.

Kaplan, A. and Haenlein, M. 2019. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons 62(1): 15-25.

Kaplan, A. and Haenlein, M. 2020. Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons 63(1): 37-50.

Kirk, J. 2019. The effect of Artificial Intelligence (AI) on emotional intelligence (EI). Capgemini, 19 November. Available: https://www.capgemini.com/gb-en/2019/11/the-effect-of-artificial-intelligence-ai-on-emotional-intelligence-ei/ (Date of access: 5 May 2021).

Kleinnijenhuis, J., Schultz, F. and Oegema, D. 2015. Frame complexity and the financial crisis: A comparison of the United States, the United Kingdom, and Germany in the period 2007–2012. Journal of Communication 65(1): 1-23.

Krippendorff, K. 2013. Content analysis: An introduction to its methodology. Los Angeles, CA: Sage.

Larson, E.J. 2021. The myth of artificial intelligence: Why computers can’t think the way we do. Cambridge, MA: Harvard University Press.

Lea, G.R. 2020. Constructivism and its risks in artificial intelligence. Prometheus 36(4): 322-346.

McCombs, M. 1997. Building consensus: The news media’s agenda-setting roles. Political Communication 14(4): 433-443.

Monett, D., Lewis, C.W. and Thórisson, K.R. 2020. Introduction to the JAGI special issue “On defining Artificial Intelligence” – Commentaries and author’s response. Journal of Artificial General Intelligence 11(2): 1-100.

Mueller, S.T. 2020. Cognitive anthropomorphism of AI: How humans and computers classify images. Ergonomics in Design 28(3): 12-19.

Nelson, T.E. and Kinder, D.R. 1996. Issue frames and group-centrism in American public opinion. The Journal of Politics 58(4): 1055-1078.

Nisbet, M.C. 2009. Framing science. A new paradigm in public engagement. Pp. 1-32 in L. Kahlor and P. Stout (Eds.), Understanding science: New agendas in science communication. New York, NY: Taylor and Francis.

Nisbet, M.C. 2016. The ethics of framing science. Pp. 51-74 in B. Nerlich, R. Elliott and B. Larson (Eds.), Communicating biological sciences. USA: Routledge.

Obozintsev, L. 2018. From Skynet to Siri: An exploration of the nature and effects of media coverage of artificial intelligence. Unpublished Doctoral thesis. Newark, Delaware: University of Delaware.

Ouchchy, L., Coin, A. and Dubljević, V. 2020. AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & SOCIETY 35(4): 927-936.

Pesapane, F., Tantrige, P., Patella, F., Biondetti, P., Nicosia, L., Ianniello, A., Rossi, U.G., Carrafiello, G. and Ierardi, A.M. 2020. Myths and facts about artificial intelligence: Why machine-and deep-learning will not replace interventional radiologists. Medical Oncology 37(5): 1-9.

Peters, H.P. and Dunwoody, S. 2016. Scientific uncertainty in media content: Introduction to this special issue. Public Understanding of Science 25(8): 893–908.

Peters, M.A. and Jandrić, P. 2019. Artificial intelligence, human evolution, and the speed of learning. Pp. 195-206 in J. Knox, Y. Wang and M. Gallagher (Eds.), Artificial intelligence and inclusive education. Perspectives on rethinking and reforming education. Singapore: Springer.

Proudfoot, D. 2011. Anthropomorphism and AI: Turingʼs much misunderstood imitation game. Artificial Intelligence 175(5-6): 950-957.

Quer, G., Muse, E.D., Nikzad, N., Topol, E.J. and Steinhubl, S.R. 2017. Augmenting diagnostic vision with AI. The Lancet 390(10091): 221.

Ramakrishna, K., Verma, I., Goyal, M.I. and Agrawal, M.M. 2020. Artificial intelligence: Future employment projections. Journal of Critical Reviews 7(5): 1556-1563.

Riek, L.D. 2016. Robotics technology in mental health care. Pp. 185-203 in D.D. Luxton (Ed.), Artificial intelligence in behavioral and mental health care. USA: Academic Press.

Salles, A., Evers, K. and Farisco, M. 2020. Anthropomorphism in AI. AJOB Neuroscience 11(2): 88-95.

Samuel, J.L. 2019. Company from the uncanny valley: A psychological perspective on social robots, anthropomorphism and the introduction of robots to society. Ethics in Progress 10(2): 8-26.

Schmid-Petri, H. and Arlt, D. 2016. Constructing an illusion of scientific uncertainty? Framing climate change in German and British print media. Communications 41(3): 265-289.

Scholl, B.J. and Tremoulet, P.D. 2000. Perceptual causality and animacy. Trends in Cognitive Sciences 4(8): 299-309.

Schwartz, O. 2018. “The discourse is unhinged”: How the media gets AI alarmingly wrong. The Guardian, 25 July. Available: https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong (Date of access: 14 April 2021).

Skovsgaard, M., Albæk, E., Bro, P. and De Vreese, C. 2013. A reality check: How journalists’ role perceptions impact their implementation of the objectivity norm. Journalism 14(1): 22-42.

Sniderman, P.M. and Theriault, S.M. 2004. The structure of political argument and the logic of issue framing. Pp. 133-165 in W.E. Saris and P.M. Sniderman (Eds.), Studies in public opinion: Attitudes, nonattitudes, measurement error, and change. USA: Princeton University Press.

Sparrow, R. and Sparrow, L. 2006. In the hands of machines? The future of aged care. Minds and Machines 16(2): 141-161.

Stahl, N.A. and King, J.R. 2020. Expanding approaches for research: Understanding and using trustworthiness in qualitative research. Journal of Developmental Education 44(1): 26-29.

Sun, S., Zhai, Y., Shen, B. and Chen, Y. 2020. Newspaper coverage of artificial intelligence: A perspective of emerging technologies. Telematics and Informatics 53: 1-9.

Turing, A.M. 1950. Computing machinery and intelligence. Mind 59(236): 433-460.

Turkle, S. 2007. Simulation vs. authenticity. Pp. 244-247 in J. Brockman (Ed.), What is your dangerous idea?: Today’s leading thinkers on the unthinkable. USA: Simon & Schuster.

Turkle, S. 2010. In good company? On the threshold of robotic companions. Pp. 3-10 in Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues. Amsterdam/Philadelphia: John Benjamins Publishing Company.

Uribe, R. and Gunter, B. 2007. Are sensational news stories more likely to trigger viewers’ emotions than non-sensational news stories? A content analysis of British TV news. European Journal of Communication 22(2): 207-228.

Ulnicane, I., Knight, W., Leach, T., Stahl, B.C. and Wanjiku, W.G. 2020. Framing governance for a contested emerging technology: Insights from AI policy. Policy and Society: 1-20.

Van de Poel, I. 2020. Embedding values in artificial intelligence (AI) systems. Minds and Machines 30(3): 385-409.

Vergeer, M. 2020. Artificial intelligence in the Dutch press: An analysis of topics and trends. Communication Studies 71(3): 373-392.

Vettehen, H.P. and Kleemans, M. 2018. Proving the obvious? What sensationalism contributes to the time spent on news video. Electronic News 12(2): 113-127.

Wang, P. 2019. On defining artificial intelligence. Journal of Artificial General Intelligence 10(2): 1-37.

Watson, D. 2019. The rhetoric and reality of anthropomorphism in artificial intelligence. Minds and Machines 29(3): 417-440.

White, D.E., Oelke, N.D. and Friesen, S. 2012. Management of a large qualitative data set: Establishing trustworthiness of the data. International Journal of Qualitative Methods 11(3): 244-258.

Złotowski, J., Proudfoot, D., Yogeeswaran, K. and Bartneck, C. 2015. Anthropomorphism: opportunities and challenges in human–robot interaction. International Journal of Social Robotics 7(3): 347-360.

[1] We are not suggesting that frames can be reduced to one of two arguments: “Frames are constructions of the issue: they spell out the essence of the problem, suggest how it should be thought about, and may go so far as to recommend what (if anything) should be done […]” (Nelson and Kinder, 1996:1057).

[2] Deep learning is a subset of machine learning in which models learn through an artificial neural network. In simple terms, this network is loosely modelled on the human brain and enables an AI model to ‘learn’ from huge amounts of data.
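To make this footnote concrete, the following minimal sketch (our own illustration, not drawn from any of the articles analysed) shows a tiny artificial neural network, written in Python with the numpy library, that ‘learns’ a simple input–output mapping (XOR) from example data rather than being explicitly programmed. The network size, learning rate and number of training steps are arbitrary choices made only for this toy example.

import numpy as np

rng = np.random.default_rng(0)

# Toy training data: four input pairs and their XOR targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units; weights start random and are adjusted by training.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(20000):
    # Forward pass: the network's current predictions for the training inputs.
    hidden = sigmoid(X @ W1 + b1)
    predictions = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge each weight slightly to reduce the prediction error.
    error = predictions - y
    grad_output = error * predictions * (1 - predictions)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

# After training, the outputs should approximate [0, 1, 1, 0]:
# the behaviour was learned from the example data, not hand-coded.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))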
