Friend or foe? How online news outlets in South Africa frame artificial intelligence

Author: Susan Brokensha, University of the Free State.
Ensovoort, volume 41 (2020), number 7: 2

Abstract

The influence that the media have in shaping public opinion about artificial intelligence (AI) cannot be overestimated, since the frames they employ to depict this technology may be adopted into the public's socio-cultural frameworks. Employing framing theory, we conducted a content analysis of online news articles published by four outlets in South Africa with a view to gaining insights into how they portray AI. We were particularly interested in determining whether AI was represented as friend or foe. Our analysis indicated that although most articles reflected a pro-AI stance, many were framed in terms of both anti- and pro-technology discourse, and that this dualistic discourse was to some degree resolved by adopting a middle way frame in which a compromise between the polarised views was proposed. The analysis also signalled that several of the articles in our dataset called for human agency to regulate and govern AI in (South) Africa. This is an important call, as it is in keeping with the need to ensure that AI is applied in such a way that it benefits Africa, its cultures, and its contexts.

1. Introduction

Key milestones in the evolution of artificial intelligence (AI) cannot be neatly mapped, given the need to take into account not only definitive discoveries and events in AI, but also hardware innovations, software platforms, and developments in robotics, all of which have had a significant impact on AI systems. Most AI scholars would agree that a defining moment in the field's history was the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy (Dartmouth College) and Marvin Minsky (Harvard University) at Dartmouth College in the United States in 1956 (Haenlein and Kaplan, 2019; Lele, 2019; Mondal, 2020). Working alongside Minsky, Nathaniel Rochester (IBM Corporation) and Claude Shannon (Bell Telephone Laboratories), McCarthy coined the term 'artificial intelligence' in a 1955 proposal for the DSRPAI. In this document, the four proposed that the 1956 brainstorming session be based on "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" (McCarthy, Minsky, Rochester and Shannon, 1955:2). Prior to the DSRPAI, crucial discoveries in AI included McCulloch and Pitts' computational model of a neuron (1943) – a model further developed by Frank Rosenblatt when he formulated the Perceptron learning algorithm in 1958 – and Alan Turing's Turing Test (1950), designed to determine whether a machine could 'think'.
Since the 1950s, other significant events and innovations that have influenced AI systems are simply too numerous to summarise in one paper, but scholars such as Perez, Deligianni, Ravi and Yang (2018) helpfully describe the evolution of AI in terms of positive and negative seasons, with the events and innovations just described constituting the birth of AI. The period between 1956 and 1974 is described by Perez et al. (2018:9) as AI's first spring, marked by advances in designing computers that could solve mathematical problems and process strings of words (cf. De Spiegeleire, Maas and Sweijs, 2017:31). The first winter refers to the period between 1974 and 1980, when the public and media alike began interrogating whether AI held any benefits for humankind amidst over-inflated claims at the time that AI would surpass human intelligence (cf. Curioni, 2018:11). Since emerging technologies (such as machine translation) did not live up to lofty expectations, funding for AI research was heavily curtailed by major agencies such as DARPA (the Defense Advanced Research Projects Agency). The so-called second spring, generally regarded as the period between 1980 and 1987, was characterised by the revival of neural network models for speech recognition (cf. Shin, 2019:71). This brief cycle gave way to AI's second winter (1987-1993), during which desktop computers gained in popularity and threatened the survival of the specialised AI hardware industry (cf. Maruyama, 2020:383). The period between 1997 and 2000 is not described in terms of a particular season, but during this time machine-learning methods such as Bayesian networks and evolutionary algorithms dominated the field. The period from 2000 to the present is described by scholars as the third spring of AI, a season distinguished by big data tools (including Hadoop, Apache Spark, and Cassandra) as well as by other emerging technologies such as cloud computing, robotics and the Internet of Things (cf. Maclure, 2019:1).
Although some authorities in the technology sector maintain that this spring will not endure owing to AI's cyclic nature (Piekniewski, 2018; Schuchmann, 2019), others argue that it is here to stay (Bughin and Hazan, 2017; Lorentz, 2018; Sinur, 2019). Indeed, Andrew Ng, a leading expert in machine learning and author of the AI Transformation Playbook (2018), is of the view that "[we] may be in the eternal spring of AI" – that "[the] earlier periods of hype emerged without much actual value created, but today, it's creating a flood of value" (Ray, 2018:1). It appears that media outlets throughout the world, whether positively or negatively disposed towards AI, have jumped onto the AI bandwagon, if newspaper headlines are anything to go by:

  • South Africa: ‘The robots are coming for your jobs’ (News24, 29 September 2016)
  • Nigeria: ‘Machine learning may erase jobs, says Yudala’ (Daily Times, 28 August 2017)
  • Brazil: ‘In Brazil, “AI Gloria” will help women victims of domestic violence’ (The Rio Times, 29 April 2019).

Meredith Broussard (2018) argues that some journalists and researchers have succumbed to technochauvinism, the utopian belief that technology will solve all our problems. At the other extreme are those who may have exaggerated the risks that accompany AI. In this regard, robotics expert Sabine Hauert decries, amongst other things, "hyped headlines that foster fear [...] of robotics and artificial intelligence" (Hauert, 2015:416). Hauert (2015:417) laments the public being faced with "a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat".
In this paper, we aim to explore how South African mainstream news articles published online frame AI, with a view to determining whether it is depicted as a friend or foe to humans. The main reason for undertaking such an exploration is that online media outlets play a critical role not only in disseminating information, but also in helping the public gain insights into scientific and technological innovations (cf. Brossard, 2013:14096). In accordance with framing theory, "the way an issue is framed and discussed through specific perspectives can influence how audiences make sense of the issue" (Chuan, Tsai and Cho, 2019:340). A review of the literature indicates that although scholars have analysed how journalists frame scientific and technological news about, for instance, biomedicine, chemistry, physics, and renewable energy (Gastrow, 2015; Kabu, 2017; Rochyadi-Reetz, Arlt, Wolling and Bräuer, 2019), no studies have yet determined how AI is covered by the South African online press.

2. Framing theory and AI coverage in the media

From the outset, we would like to point out that we are not claiming that there is a causal relationship between journalists' framing of issues and society's opinions about those issues. Instead, "media as a powerful cultural institution [...] may influence [the] public's attitudes towards an emerging technology, particularly in the early stage when most people feel uncertain, wary, or anxious about an unfamiliar yet powerful technology" (Chuan et al., 2019:340). A number of researchers have, in recent years, studied media coverage of AI (Holguín, 2018; Jones, 2018; Brennen, Howard and Nielsen, 2018; Obozintsev, 2018; Chuan et al., 2019; Cui and Wu, 2019), and framing theory in particular appears to be useful for understanding how the media depict utopian and/or dystopian views of this type of technology. Nisbet (2009a:51) points out that frames enable us to gain insights into "how various actors in society define science-related issues in politically strategic ways" as well as "why an issue might be important, who or what might be responsible, and what should be done". How AI is framed "may differ substantially across outlets" (Obozintsev, 2018:65), given that the complex relationship between journalism and science is influenced by a number of variables such as the "myth-making of journalists, constraints, biases, public relations strategies of scientists" (Holguín, 2018:5), and the like. Some scholars and AI experts argue that media coverage of AI leaves much to be desired – that it is "bogus" and "overblown" (Siegel, 2019:1) and that the media may "resort to headlines and images that are both familiar and sensationalist" (Cave, Craig, Dihal, Dillon, Montgomery, Singler and Taylor, 2018:17). Others have noted that AI "is typically discussed as an innovation that can impact humanity in a positive way, making the lives of individuals better or easier" (Obozintsev, 2018:65). We are of the view that it is critical to explore how AI is framed in South African news outlets to obtain a better understanding of the various perspectives that will inevitably inform public opinion of this technology. This might in turn "help bridge the diverse conversations occurring around AI and facilitate a richer public dialogue" (Chuan et al., 2019:340).
Adopting an approach employed by Chuan et al. (2019) and Strekalova (2015), we differentiated between topics and frames in our dataset, acknowledging that each news article may reflect multiple topics and frames. A topic “is […] a manifest subject, an issue or event”, while a frame “is a perspective through which the content is presented” (Chuan et al., 2019:340). To determine whether AI was framed as friend or foe, we posed the following questions:

  • Research question 1: Which topics and sub-topics were prominent in widely circulated South African online news articles?
  • Research question 2: How was AI framed in widely circulated South African online news articles?

3. Methods

3.1 Sample

Since we made use of stratified sampling (Krippendorff, 2013:116), we adhered to specific strata when collecting suitable online news articles. First, following an approach adopted by Jones (2015), we selected online news outlets that have a very high distribution in South Africa: using Feedspot, a site that, amongst other things, curates news sites, as a guide, we chose to collect articles from The Citizen, the Daily Maverick, the Mail & Guardian, and the SowetanLIVE. Second, we used ProQuest and LexisNexis Academic to collect articles using the search term 'artificial intelligence'. Like Jones (2015), we did not search with the abbreviation 'AI' because it returned too many results, given that it is a common letter combination in English. Third, we selected or eliminated articles based on whether or not they had a sustained focus on AI; we also discarded articles that were not text-based, as well as items such as movie listings and letters to the editor. Finally, we collected only articles published between January 2018 and April 2020. In this way we ultimately selected 73 articles across the four news outlets.1 The unit of analysis was the entire news article.
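The screening logic described above can be summarised in a short script. This is a minimal illustrative sketch only, not our actual retrieval tooling: the field names ('outlet', 'published', 'kind', 'text', 'sustained_ai_focus') and the sample records are hypothetical placeholders.

```python
# Minimal sketch of the article-screening strata described above.
# All field names are hypothetical placeholders, not fields from real tooling.
from datetime import date

OUTLETS = {"The Citizen", "Daily Maverick", "Mail & Guardian", "SowetanLIVE"}
START, END = date(2018, 1, 1), date(2020, 4, 30)

def include(article: dict) -> bool:
    return (
        article["outlet"] in OUTLETS
        and START <= article["published"] <= END
        and article["kind"] == "news"  # excludes movie listings, letters, etc.
        and "artificial intelligence" in article["text"].lower()  # not 'AI'
        and article["sustained_ai_focus"]  # a researcher judgement, not automated
    )

candidates = [
    {"outlet": "The Citizen", "published": date(2018, 3, 10), "kind": "news",
     "text": "Will your financial advisor be replaced by artificial intelligence?",
     "sustained_ai_focus": True},
    {"outlet": "The Citizen", "published": date(2017, 6, 1), "kind": "letter",
     "text": "A letter about artificial intelligence.", "sustained_ai_focus": False},
]
print([a["outlet"] for a in candidates if include(a)])  # only the first item passes
```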
We are mindful that a qualitative study such as this one is open to criticism, since it is not possible for a researcher to distance him- or herself from the subject under investigation (Jones, 2015:26). With this in mind, we kept a detailed memo in which we compiled a thick description of our design, methodology, and analyses, and this "detailed reckoning" (Jones, 2015:26) is available for perusal.

3.2 Framework of analysis

We made use of existing frames to answer Research question 2, combining and adapting frames first proposed by Nisbet (2009b, 2016). These are the frames of social progress, the middle way, morality/ethics, Pandora’s Box, and accountability. We also employed three frames proposed by Jones (2015), namely, the frames of competition, nature, and artifice. These frames are summarised in Table 1 below.

Table 1: A typology of frames employed to study AI in the media

Social progress: Evoked when journalists wish to draw attention to the benefits of AI. Nisbet (2009b) restricts this frame to improvements to quality of life, but we have expanded the definition to include benefits in other areas such as the economy, health, and education.

Competition: Reflects the threats that AI may pose, such as job losses, automated weapons, data breaches, and the like (Jones, 2015).

Middle way: Employed to propose what Obozintsev (2018:86) refers to as a "third way between conflicting or polarized views of options".

Nature: Jones (2015:32) argues that articles evoking this frame "tend to discuss our continuing relationships with current technology, question the direction that this relationship is taking, and are often couched in romantic terms. Anthropomorphism is abundant in this discourse". Typically, the virtues of AI "are identified as superior" (Jones, 2015:32), although journalists may use the frame to pass judgement on this technology.

Artifice: Depicts AI as an arcane technology, one that will surpass us in intelligence and ultimately engulf us (Jones, 2015:37).

Morality/Ethics: Questions the rights and wrongs as well as the thresholds and boundaries of AI in terms of issues such as data privacy, surveillance, and the development of biased algorithms (Obozintsev, 2018:86).

Pandora's Box: Also referred to by Nisbet (2009b) as Frankenstein's monster or as runaway science, this frame portrays AI as technology that may spiral out of control.

Accountability: Frames AI as technology that requires control and regulation to prevent, for example, algorithmic bias or abuses of power. (Nisbet (2009b) refers to this as the frame of public accountability/governance.)

4. Findings: Research question 1

4.1 Main topics and sub-topics

Many articles in the dataset covered multiple topics, but Table 2 below provides a summary of the main topics reflected in the title and first paragraph of each article. The most popular topic in the dataset was 'Business, finance, and the economy' (18 articles); under this topic, the sub-topic of AI and job losses was most prominent, followed by AI and job creation and AI-driven technology that functions as a personal financial advisor. Less frequent sub-topics dealt with under 'Business, finance, and the economy' are also provided in Table 2. The next most popular topics revolved around describing 'AI-human interaction' in terms of the anthropomorphism of AI (12 articles) and reporting on 'Big Brother' as it pertains to the use of AI to surveil online users and gain access to their personal data (eight articles). The next three prominent topics (with six articles each) were 'Healthcare and medicine' (AI being used to detect cancer and to function as doctors), 'Human control over AI' (the need to control and regulate AI), and 'South Africa's preparedness for an AI-driven world', which reflected concerns about the country's AI skills shortage within the context of the fourth industrial revolution. Three articles each were devoted to the topics of the 'Environment' (food production, crop management, and ecology), the 'News industry' (deepfaking and the curation of news by AI), and the 'Uncanny valley' (considered in the discussion section of this paper). 'Defence weapons' (so-called 'killer robots'), 'Singularity' (which describes a hypothetical future in which technology will become uncontrollable), and 'Strong AI' (which describes the goal of some in the field of AI to create machines whose intellectual capability will match that of human beings) featured in two articles each. Finally, one article in the dataset focused on 'Education' (specifically on robot teachers) and one on 'Cyborgs'. The latter topic was not filed under the 'Uncanny valley' because it was an outlier in the dataset: it revolved around an individual who had a cybernetic implant attached to the base of his skull, and no other article featured human enhancement with in-the-body AI technology.
All news outlets covered 'Business, finance, and the economy', 'AI-human interaction', 'Healthcare and medicine', and 'South Africa's preparedness for an AI-driven world'. 'Big Brother' was addressed only in articles published by The Citizen, as were 'Cyborgs', 'Education', and 'Strong AI'. Both the Daily Maverick and the Mail & Guardian reported on 'Defence weapons' and 'Human control over AI'. AI in the 'News industry' was covered by The Citizen and the Daily Maverick. The 'Uncanny valley' featured in both The Citizen and the SowetanLIVE. AI as it relates to the 'Environment' received attention in The Citizen, the Daily Maverick, and the SowetanLIVE. The topic of 'Singularity' was addressed only in Daily Maverick articles. It is not surprising that 'Business, finance, and the economy' dominated, as this dominance has been detected in other studies on how AI is framed in the media (cf. Chuan et al., 2019); beyond this, however, we hesitate to draw specific conclusions from the data. One reason is that, as stated at the beginning of this section, most articles reflected multiple topics. Another is that some of the titles in the dataset were quite misleading. One article published in The Citizen, for example, was entitled 'Ramaphosa becomes first head of state to appear as hologram' (5 July 2019), but the article itself considered South Africa's preparedness to cope with the fourth industrial revolution, including AI and robotics.

Table 2: Topics and sub-topics in the dataset (n=73 articles)

Main topic and sub-topic(s), if applicable                          Articles
AI-human interaction (robot companions/colleagues/assistants)             12
Big Brother (AI surveillance/data privacy, including discussions
  of ethics/algorithmic bias)                                              8
Business, finance, and the economy                                        18
  AI and job losses                                                        6
  AI and job creation                                                      3
  AI as financial advisors                                                 3
  AI to help businesses grow/become more efficient                         4
  AI to assist in human resources (including discussions of
    ethics/algorithmic bias)                                               1
  AI as insurance brokers                                                  1
Cyborgs (human enhancement with in-the-body, AI-driven technology)         1
Defence weapons (automated weapons)                                        2
Education (AI teachers)                                                    1
Environment                                                                3
  AI to improve food and crops                                             2
  AI to improve the ecosystem                                              1
Healthcare and medicine                                                    6
  AI diagnosticians                                                        5
  AI doctors                                                               1
Human control over AI (the importance of human agency in the
  development and implementation of AI)                                    6
News industry                                                              3
  AI and deepfaking                                                        2
  AI as curating the news (including discussions of
    ethics/algorithmic bias)                                               1
Singularity                                                                2
  Human identity under singularity                                         1
  Human beings in a post-work world                                        1
South Africa's preparedness for an AI-driven world (AI skills
  shortage; training of human beings in an AI-driven world)                6
Strong AI (AI modelled on the human brain)                                 2
Uncanny valley (appearance of AI as human-like or robot-like)              3

5. Findings: Research question 2

5.1 The frames of social progress and competition

An exhaustive analysis of all articles in terms of whether the frame of social progress or the frame of competition was more salient indicates that across all four news outlets, 50.68% of the articles (37 of 73) foregrounded social progress, while 15.06% (11 of 73) foregrounded competition (Table 3). Since the frame of social progress reflects the benefits that AI holds for humankind, while the frame of competition reflects the risks and threats inherent in AI, we concluded that most of the articles across the outlets were positively disposed towards AI, a conclusion supported by other studies (Obozintsev, 2018; Cui and Wu, 2019; Garvey and Maskal, 2019). One important reason for considering the salience of the two frames is that, in terms of the distribution of frames by source, both frames were employed in 54.79% of the articles (40 of 73; Table 4), a phenomenon considered in detail in the discussion section. We return to the frames of social progress and competition just before the discussion section, once all frames have been considered in context.

Table 3: Most salient frame by source

Newspaper                Social    Compe-  Middle  Nature  Artifice  Morality/  Pandora's  Accounta-
                         progress  tition  way                       Ethics     Box        bility
The Citizen (n=33)           19       4       0       2       0          6          0          2
Daily Maverick (n=16)         8       1       0       0       0          3          0          4
Mail & Guardian (n=14)        6       3       0       1       0          0          0          4
SowetanLIVE (n=10)            4       3       0       3       0          0          0          0

Table 4: Distribution of frames in the dataset (without taking salience into account)

News outlet              Social    Compe-  Both frames*  Middle way  Nature  Artifice  Morality/  Pandora's  Accounta-
                         progress  tition                only**                        Ethics     Box        bility
The Citizen (n=33)           8        8      14 (10)         3          22       6         9          1          2
Daily Maverick (n=16)        2        4      10 (6)          1          12       8         6          1          6
Mail & Guardian (n=14)       2        2      10 (7)          4          10       6         5          1          7
SowetanLIVE (n=10)           2        2       6 (3)          0           9       4         1          0          1

* Articles employing both the social progress and competition frames; in brackets, the number of these in which the middle way frame was employed.
** Articles in which the middle way frame was evoked without the social progress and competition frames.
In total, the middle way frame was used in 34 articles.

5.2 The middle way frame and its presence or absence in articles employing the frames of social progress and competition

In her study of the framing of AI in news articles (n=64), Obozintsev (2018:40) found that only 3.1% were framed in terms of a middle way frame. In our dataset, by contrast, 26 of the 40 articles that employed both the frames of social progress and competition also reflected the middle way frame, amounting to 35.61% of the entire dataset (Table 4): of the 14 articles published in The Citizen that evoked both frames, ten employed a middle way frame; of the ten articles in the Daily Maverick that used the two frames, six employed it; of the ten Mail & Guardian articles that used both frames, seven did so; and the SowetanLIVE dataset contained six articles employing the two frames, three of which evoked a middle way frame. The possible reasons why a middle way frame was constructed in these articles are considered in the discussion section.

5.3 The remainder of the frames

Although the frame of nature was made salient in only six articles, it was nevertheless employed in 72.60% of all articles. This is not surprising, given that the media commonly question or embrace the potential for AI to match or surpass human intelligence and interrogate its capacity to form bonds with human beings. The frame of artifice did not appear at all as a salient frame, although it was used in 32.87% of all articles. The frame of morality/ethics was salient in only nine articles, but it occurred in 28.76% of the articles in the dataset. Pandora's Box did not feature as a salient frame, but was employed in 4.10% of the articles. Accountability is a frame that Obozintsev (2018) reports was rare in her dataset (7.8%), but we found that it was salient in 13.69% of the articles under investigation, while it was touched upon in 21.91% of all articles. We speculate that its salience in particular articles was partly due to the fact that these articles also reflected the frame of morality/ethics and/or Pandora's Box, frames which typically question issues of control and power. In addition, the writers of these articles included academics, computer scientists, mechanical engineers, and social commentators, all of whom have a vested interest in ethical issues around AI and in technology that leverages AI for the public good.
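The percentages reported in sections 5.1 to 5.3 can be recomputed from the per-outlet counts in Tables 3 and 4. The sketch below does so, truncating (rather than rounding) to two decimals, which appears to be how the figures in the text were derived; the counts are transcribed directly from the tables.

```python
# Recomputing the frame percentages (sections 5.1-5.3) from Tables 3 and 4 (n=73).
N = 73

def pct(count: int) -> float:
    """Percentage of the dataset, truncated to two decimals as in the text."""
    return int(count / N * 10_000) / 100

# Table 3 (most salient frame): social progress = 19+8+6+4, competition = 4+1+3+3
print(pct(19 + 8 + 6 + 4))    # 50.68
print(pct(4 + 1 + 3 + 3))     # 15.06
# Table 4: articles evoking both social progress and competition = 14+10+10+6
print(pct(14 + 10 + 10 + 6))  # 54.79
# ...of which 10+6+7+3 = 26 also employed the middle way frame
print(pct(10 + 6 + 7 + 3))    # 35.61
print(26 + (3 + 1 + 4 + 0))   # 34 middle way articles in total
# Table 4 frame usage: nature, artifice, morality/ethics, Pandora's Box, accountability
print(pct(22 + 12 + 10 + 9))  # 72.6
print(pct(6 + 8 + 6 + 4))     # 32.87
print(pct(9 + 6 + 5 + 1))     # 28.76
print(pct(1 + 1 + 1 + 0))     # 4.1
print(pct(2 + 6 + 7 + 1))     # 21.91
```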

5.4 Revisiting social progress and competition in relation to all frames

Although a pro-AI stance was evident across the news outlets when we examined the salience of the frames of social progress and competition, we also had to test this stance against (1) an analysis of all frames and (2) a close reading of each article. Based on the data in Table 4 alone, one might conclude that most articles did not in fact reflect a pro-AI stance, given that frames other than social progress may reflect negative views of AI. An exhaustive analysis of each article, however, allowed us to reject this conclusion. The analysis indicated, for example, that across the dataset the frame of nature was evoked in 53 articles: in 35 of these, AI was depicted in positive terms, 15 reflected a negative view, and three were neutral (in the sense that they did not adopt a specific tone). For every article, we tracked each frame and determined whether, overall, AI was portrayed in a positive, negative or neutral light. We concluded that 41 articles (56.16%) reflected a positive view of AI, 29 (39.72%) conveyed a negative view, and three (4.10%) were neutral (Table 5). AI was overwhelmingly viewed in a positive light in 'AI-human interaction', 'Business, finance, and the economy', 'Education', the 'Environment', and 'Healthcare and medicine', while it was depicted in a negative light in discussions around 'Big Brother', 'Defence weapons', 'Human control over AI', the 'News industry', and 'South Africa's preparedness for an AI-driven world'. In terms of social progress (i.e., benefits) and competition (i.e., threats), the news coverage of AI across outlets and sources was thus more positive than negative.
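As a check, the overall tallies can be recomputed from the per-topic counts in Table 5 below; a minimal sketch, with the counts transcribed from the table:

```python
# Tone tallies per topic from Table 5: (positive, negative, neutral), n=73.
tones = {
    "AI-human interaction": (11, 1, 0), "Big Brother": (1, 7, 0),
    "Business, finance, and the economy": (13, 5, 0), "Cyborgs": (0, 0, 1),
    "Defence weapons": (0, 2, 0), "Education": (1, 0, 0),
    "Environment": (3, 0, 0), "Healthcare and medicine": (6, 0, 0),
    "Human control over AI": (0, 6, 0), "News industry": (0, 3, 0),
    "Singularity": (1, 1, 0),
    "South Africa's preparedness for an AI-driven world": (2, 3, 1),
    "Strong AI": (1, 0, 1), "Uncanny valley": (2, 1, 0),
}
positive, negative, neutral = (sum(t[i] for t in tones.values()) for i in range(3))
print(positive, negative, neutral)          # 41 29 3 (summing to all 73 articles)
for count in (positive, negative, neutral):
    print(int(count / 73 * 10_000) / 100)   # 56.16, 39.72, 4.1 (truncated)
```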

Table 5: Positive, negative or neutral views of dominant AI topics

Topic                                                Articles  Positive  Negative  Neutral
AI-human interaction                                    12        11         1        0
Big Brother                                              8         1         7        0
Business, finance, and the economy                      18        13         5        0
Cyborgs                                                  1         0         0        1
Defence weapons                                          2         0         2        0
Education                                                1         1         0        0
Environment                                              3         3         0        0
Healthcare and medicine                                  6         6         0        0
Human control over AI                                    6         0         6        0
News industry                                            3         0         3        0
Singularity                                              2         1         1        0
South Africa's preparedness for an AI-driven world
  (AI skills shortage)                                   6         2         3        1
Strong AI                                                2         1         0        1
Uncanny valley                                           3         2         1        0

6. Discussion

6.1 Nature and artifice

In an insightful 2018 Royal Society report, the researchers point to a tendency in fictional narratives to anthropomorphise AI (Craig et al., 2018), and this tendency was apparent in many of the articles in our dataset that evoked the frames of nature and artifice. Below are some typical examples of the anthropomorphisation of AI:

  • Intellectual superiority: “Technology has the ability both to remove [a financial advisor’s] biases and analyse a full array of products, potentially identifying suitable solutions that the advisor may have missed on their own” (The Citizen, 10 March 2018).
  • Human-like senses: “AI noses are now able to smell toxic materials, AI tongues can now taste wines and offer opinions on their taste scores, and robots are now able to touch and feel objects” (Daily Maverick, 12 November 2018).
  • Robot domination: “…robots decide who gets to live and who dies” (Mail & Guardian, 11 April 2018).
  • Life-like robots: a robotic model called ‘Noonoouri’ “is said to be 18 years old and 1.5m tall. The Parisian describes herself as cute, curious and a lover of couture” (SowetanLIVE, 20 September 2018).

The Royal Society report (2018:4) notes that what is concerning about such descriptions is that they instil certain "[e]xaggerated expectations and fears about AI" and unfortunately also "contribute to misinformed debate, with potentially significant consequences for AI research, funding, regulation and reception". It is important to point out that not all the articles in our dataset portrayed AI in a way that disconnected it from reality. In an article in The Citizen entitled 'China's doctor shortage prompts rush for AI healthcare' (20 September 2018), the journalist evoked the frame of nature when she subtly judged AI's capacity for emotional intelligence by quoting some patients as claiming that they still "prefer the human touch." She also quoted a technology officer as observing that "It doesn't feel the same as a doctor yet. I also don't understand what the result means." These sentiments echo those of medical informatician Sandeep Reddy (2018:93), who contends that "[c]ontemporary healthcare delivery models are very dependent on human reasoning, patient-clinician communication and establishing professional relationships with patients to ensure compliance".

6.2 AI as superior to human intelligence

Yet many articles in the dataset that evoked the frame of nature to portray AI as matching or surpassing human intelligence also questioned this intelligence, either by suggesting that AI should be regulated by human beings or by arguing that AI can neither feel nor think creatively. In a SowetanLIVE article published on 20 March 2018, for instance, the journalist questioned AI's intelligence through the frame of nature: "[a major concern] is the fact that although robots may have AI (Artificial Intelligence), they are not as intelligent as humans. They can never improve their jobs outside their pre-defined programming because they simply cannot think for themselves. Robots have no sense of emotions or conscience. They lack empathy and this is one major disadvantage of having an emotionless workplace." Only seven articles reflected the view that AI is unequivocally superior to human intelligence (although they often did so through the use of reported speech and/or multiple voices). Such a view "may be detrimental to the public's understanding of A.I. as an emerging beneficial technology" (Obozintsev, 2018:1), and readers could be forgiven for feeling anxious when confronted by statements such as "educators must consider what skills graduates will need when humans can no longer compete with robots" (Mail & Guardian, 16 February 2018) and "there will come a time where technology will advance so exponentially that the human systems we know will be obliterated" (Mail & Guardian, 4 October 2019). Another article that framed AI as transcending human intelligence was 'Self-navigating AI learns to take shortcuts: Study' (The Citizen, 9 May 2018). As was typically the case when the frame of nature was employed, the AI system in this article was romanticised and anthropomorphised through messages such as "A computer programme modelled on the human brain learnt to navigate a virtual maze and take shortcuts, outperforming a flesh-and-blood expert." Although the journalist framed the AI system in terms of the claims made about it by its designers, she did not go on to question those claims. David Watson (2019:417), who studies the epistemological foundations of machine learning, argues that "[d]espite the temptation to fall back on anthropomorphic tropes when discussing AI [...] such rhetoric is at best misleading and at worst downright dangerous. The impulse to humanize algorithms is an obstacle to properly conceptualizing the ethical challenges posed by emerging technologies". (We consider how ethical issues surrounding AI were represented in our dataset later in this paper, when we discuss morality/ethics, accountability, and Pandora's Box.)

6.3 AI that looks/sounds like a human or AI that looks/sounds like a robot?

In a number of articles, the frame of nature or the frame of artifice was evoked to depict AI as human-like; one example was evident in 'Who's afraid of robots?' (Daily Maverick, 5 March 2019), in which the suggestion was made that human and robotic news anchors could become indistinguishable from one another in the near future. By contrast, AI in other articles was described as looking more robot-like. In 'Robot teachers invade Chinese kindergartens' (The Citizen, 29 August 2018), for instance, an educational robot called 'Keeko' was described as "[r]ound and white with a tubby body" and as an "armless robot" that "zips around on tiny wheels". In the same article, the journalist quoted a teacher as describing the robot as "adorable". It is no coincidence that in the dataset, robots that looked like 'Keeko' were variously described as "adorable" (The Citizen, 29 August 2018) and "client-friendly" (SowetanLIVE, 20 March 2018), while those that looked or sounded like human beings were framed as "eerie" (SowetanLIVE, 8 March 2018) or "uncanny" (Daily Maverick, 10 November 2019). These descriptions constitute a reference to the 'uncanny valley', a phenomenon "which describes the point at which something nonhuman has begun to look so human that the subtle differences left appear disturbing" (Samuel, 2019:12). Research indicates that individuals perceive robots to be less creepy if they are designed in such a way that they are distinguishable from human beings (MacDorman, 2006; Greco, Anerdi and Rodriguez, 2009). In 'The rise of the machines looks nothing like the movies' (Daily Maverick, 10 November 2019), the journalist briefly speculated on why most machines do not look like humans: "most do not resemble us, they do not walk on two feet, they do not have pre-programmed facial expressions and human gestures for us to study, for us to suspect, to imbue with sinister motives, real or imagined." A number of scholars have likewise speculated on why most machines do not have a human-like appearance. Samuel (2019:12), for example, argues that "[while] eliciting social responses in humans is easier when the robot in front of them is human-like in design, this does not mean that robots automatically become more accepted the more human they look. This may initially be the case, but human design appears to reach a point at which positive social responses turn into negative ones and robots are rejected for seeming 'too human'". The journalist of an article published in The Citizen on 5 September 2018 showed awareness of this problem when, in evoking the frame of nature, he noted that because a machine called 'Sophia' "is designed to look as much like a robot as a human, with its mechanical brain exposed, and no wig in place to humanise her further", people who encounter her "know they are dealing with a robot and don't feel fooled into believing it is human."
Of course, it is not a given that individuals simply dislike machines that look human. Indeed, Samuel (2019:9) puts paid to the notion that people are invariably inclined to favour anthropomorphic robots, arguing instead that "people show a preference for robots' design to be matched to their task" (Samuel, 2019:9). In this respect, people tend to be positively disposed to human-like features if the robot in question is a social robot. Alternatively, "an industrial robot may be thought of in a different manner and thus does not appear to need to look human in order to be deemed acceptable for their task by a human observer" (Samuel, 2019:9). In the dataset, a number of journalists appeared to be aware of the connection between appearance and task. For example, in '"Call me baby": Talking sex dolls fill a void in China' (SowetanLIVE, 4 February 2018), the reporter evoked the frame of nature to describe sex dolls as "shapely" (although the reporter also questioned just how life-like the dolls were, describing one as possessing a "robotic voice" and as having lips that do not move). In addition, the dolls were described as being able to "talk, play music and turn on dishwashers". Clearly, in order to be regarded as a social companion, such dolls are required to look, sound and act more life-like.
On the subject of robots serving a social function, the journalist of the article just referred to also reported that "buyers can customise each doll for height, skin tone, amount of pubic hair, eye colour and hair colour". Tellingly, the journalist went on to claim that "the most popular dolls have pale skin, disproportionately swelled breasts (sic) and measure between 158 and 170 centimetres". The way in which these dolls were described is similar to the way in which robots were described in other articles in the dataset. For instance, quoting Hanson Robotics (probably to maintain authorial distance), the journalist of an article published in The Citizen (5 September 2018) described the robot 'Sophia' as being "endowed with remarkable [...] aesthetics", while in a SowetanLIVE article (28 September 2018), robots serving as fashion models were described as female as well as "lean" or "slender, with dark flawless skin". In the entire dataset, most robots driven by AI technology were reported as being female: in addition to Sophia, sex dolls, and robotic fashion models (such as 'Shudu' and 'Noonoouri'), journalists also referred to 'Alexa', a virtual assistant (SowetanLIVE, 8 March 2018), 'Vera', a robot that assists in interviewing prospective job candidates (The Citizen, 27 April 2018), and 'Rose', a robot that sells insurance (The Citizen, 22 January 2020). According to Döring and Poeschl (2019:665), the media tend to represent human-robot relationships in terms of "stereotypical gender roles" and "heteronormativity" (cf. Stassa, 2016; Ndonye, 2019), although it should be added that in our dataset, the various journalists did not themselves encourage these representations, but merely framed female robots in terms of how they were described by their designers or by the AI industry in general. Disappointingly, with the exception of three journalists who (1) alluded to individuals on Chinese social media platforms expressing their concerns that sex dolls "reinforce(d) sexist stereotypes" (SowetanLIVE, 4 February 2018), (2) questioned whether AI may discriminate against bank loan applicants on the basis of gender (Mail & Guardian, 14 March 2019), and (3) observed that "[w]omen and minorities are grossly [...] underrepresented" when AI-driven algorithms are employed, no other journalist in our dataset questioned how the AI industry reinforces power relations in which the objectification of women is normalised. This is highly problematic in another important sense: the media need to challenge gender stereotypes because applying gender to an AI-driven application may have serious consequences. In this respect, McDonnell and Baxter (2019:116) point out that "[t]he application of gender to a conversational agent [such as a chatbot system] brings along with it the projection of user biases and preconceptions".
Remaining with the subject of how AI-driven technology is anthropomorphised, it is interesting to note that in the dataset, the term 'AI' or 'artificial intelligence' was often replaced by the word 'robot', and there appear to be two reasons for this. First, in articles in which AI was regarded as dangerous, the word 'robot' served as a "spacing device" (Jones, 2015:40) in the sense that "it [set] another barrier between the reader and the developer of the technology and [provided] a focus for any negative will". Jones (2015:40) observes that "[it] seems far easier for the journalist to focus on a physical being than an abstract concept called artificial intelligence". This was evident in 'South Africa should lead effort to ban killer robots' (Mail & Guardian, 11 April 2018), in which the journalist referred to governments around the world producing "killer robots" that "decide who gets to live and who dies". Here, the term 'killer robots' was used in place of the term AI/artificial intelligence, "[providing] a focus for risk concerns about the Other" (Jones, 2015:44). Second, in articles in which AI was regarded in a more positive light, the term 'robot' "[served] to assist in the anthropomorphizing of the technology as it is far easier to draw comparisons between a human body and a robot body" (Jones, 2015:40). In an article that appeared in The Citizen on 29 August 2018, the journalists used their own voices as well as that of a teacher to describe Keeko, an educational robot, as "adorable" and as "[reacting] with delight" when children answer questions correctly. The journalists did not interrogate the societal or ethical consequences of placing such educational robots in classrooms, and even quoted the principal of the kindergarten where Keeko is based as stating that robots are "more stable" than human teachers. According to scholars such as Engstrom (2018:19), "in humanising robots and AI, we have to ask ourselves whether our perception of them as machines changes – for example, whether it causes us to feel empathy or even love for them, and whether it will make us have higher expectations [of] the technologies to carry out human responsibilities". This is unfortunately not a theme that the journalists in our dataset interrogated.
Why did the journalists in our dataset show a tendency to portray AI-driven technology in human form? A partial answer may lie in how AI is portrayed in film and on television. Brennan (2016:1) speculates that "[i]dentification depends on viewers' ability to understand characters through the lens of their own experience. As such, it relies on recognisable social categories like gender, age, nationality, class and so on. [...] Writers must construct the characters of technological protagonists, or antagonists, using recognisable human traits". Brennan (2016:1) goes on to argue that this may unfortunately "limit the ways that AI and robotics are represented and imagined". Interestingly, the journalist of a Daily Maverick article published on 10 November 2019 speculated that we are inclined to depict robots in human form "perhaps because of our elevated view of ourselves."

6.4 Morality/ethics, Pandora’s Box, and accountability: AI as uncontrollable and unregulated

Raquel Magalhães (2019:1), editorial manager of Understanding with Unbabel, argues that what is problematic about humanoid representations of AI is that they detract from real issues, particularly from those that pertain to ethical considerations around data privacy concerns (owing to facial recognition algorithms), the use of biased algorithms in decision making, ‘killer robots’, and the absence of clear policies that help control and regulate the development of AI. However, there does appear to be some light at the end of the tunnel; a recent study by Ouchchy, Coin and Dubljević (2020:1) has found that although the media’s coverage of the ethics of AI is somewhat superficial, it does nevertheless have “a realistic and practical focus”, and our dataset confirmed this finding.
As already noted, a total of 21 articles in the dataset (28.76%) employed the morality/ethics frame. Within this frame, journalists questioned the ethics of fake news ('Misinformation woes could multiply with "deepfake" videos', The Citizen, 28 January 2018), data breaches ('#FaceApp sparks data privacy concerns', The Citizen, 17 July 2019), video surveillance ('CCTV networks are "driving an AI-powered apartheid" in SA', The Citizen, 9 December 2019), and biased algorithms ('Developing countries need to wake up to the risks of new technologies', Mail & Guardian, 8 January 2018). All these threats constitute reasonable concerns in the area of AI (Stahl, Timmermans and Mittelstadt, 2016; O'Carroll and Driscoll, 2018), yet they are sometimes overlooked in favour of what Bartz-Beielstein (2019:1) refers to as "the well-known ones such as the weaponisation of AI or the loss of employment opportunities". In a study in which, amongst other things, she explores the nature of media coverage of AI, Obozintsev (2018) reports that the frame of morality/ethics was rarely employed in her dataset. It is possible that this frame was evoked more often in ours because a number of writers were also academics, computer scientists, and technology experts; we speculate that writers in these fields may be particularly interested in the political, socio-cultural, economic, and ethical implications of AI in (South) Africa.
As noted in the findings section, 21.91% of all articles evoked the frame of accountability. In doing so, these articles reflected (1) fears about how AI is controlling human beings in terms of their movements or online speech, or (2) an emphasis on the need for human beings to control and/or regulate this technology in some way. An example of fears about AI controlling human beings is evident in the title of an article published in The Citizen on 5 August 2019, 'Whatsapp could soon start censoring what you are saying', while an example of a call for human beings to control and/or regulate AI is reflected in an 18 July 2019 Daily Maverick article in which the journalist claimed that "At least four rights are threatened by the unregulated growth of AI: freedom of expression, privacy, equality and political participation." Fears about AI controlling human beings, or about AI being uncontrollable, were particularly evident in the Pandora's Box frame, which was constructed in three articles (4.10% of the dataset). In 'Prepare for the time of the robots' (Mail & Guardian, 16 February 2018), for example, the journalist argued that AI-driven technology could unleash a Pandora's Box and that students in Africa have to be properly trained "so that they gain the insight that will be needed to defend people from forces that may seek to turn individuals into disposable parts".
What is interesting about these fears is that they mirror one of the findings of Fast and Horvitz (2017:966), namely that "[t]he fear of loss of control [...] has become far more common in recent years" when it comes to public opinion of AI. Readers will no doubt feel unnerved when they encounter statements such as "[the future] looks to be dominated by machines" (Mail & Guardian, 16 February 2018). Such predictions conjure up AI as arcane, a technology understood by only a few; readers may be compelled to conclude that AI is beyond their control (cf. Nelkin, 1995:162).
On a positive note, of the three articles that evoked Pandora's Box, only one offered no solutions as to how we should control and regulate AI. The remaining articles referred to putting policies in place that will protect users in (South) Africa from the dangers of AI, such as those pertaining to privacy issues and autonomous weapons, as well as to ensuring that human beings take responsibility for the performance of AI systems. To provide a specific example, in a Daily Maverick article (29 January 2019), the writer called for technology and data to be democratised: "We [...] need to incorporate into the devices of the 4IR a character of the world as we desire it and not make these devices reflect biases, prejudices and unequal economic spaces as they currently exist." By offering up solutions such as these, journalists challenged the notion that "technology is developed in a vacuum, with the suggestion being that the human user is an afterthought" (Jones, 2015:36). Holguín (2018:17) points out that the importance of taking responsibility for AI is sometimes overlooked, "but it seems crucial for understanding the development of technology as depending on human agency. Thus, the improvements and goals of these intelligent systems are not self-driven by the force of technology, but by the decisions of the human actors behind their creation".
It is encouraging that some of the articles in our dataset (i) identified AI's potential threats as they relate to morality and ethics and (ii), within the frame of accountability, considered policies and principles that could help regulate these threats in (South) Africa. In 'Why we need an AI-resilient society', Bartz-Beielstein (2019:1) refers to strategy (i) as "awareness" and to strategy (ii) as "agreements". The former refers to helping society recognise the dangers that AI may pose, and can be generated "by publishing papers and giving public talks" (Bartz-Beielstein, 2019:6), for example. The latter calls for society to generate principles and laws that regulate different aspects of AI.

6.5 Constructing dualistic frames

It is not surprising that just over half of the articles under investigation in this study reflected both pro- and anti-technology discourse, since this paradox is inherent in many news articles about technology, including AI (Jones, 2015:42; Brennen, Howard and Nielsen, 2018; Chuan et al., 2019). In the dataset, the polarised discourse around AI was typically framed in terms of both competition and social progress. This dualistic frame is apparent in a Mail & Guardian article of 16 March 2020, in which a World Economic Forum claim – "automation will displace 75-million jobs worldwide by 2022" – was juxtaposed with the statement that "AI is reducing the time it takes to generate reports, analyse risks and rewards, make decisions and monitor financial health." The question is: why frame AI simultaneously in terms of competition and social progress? Some scholars are of the view that AI may be constructed in dualistic terms to offer up a "digital opiate for the masses" (Floridi, 2016:1), as it were. Philosopher and ethics scholar Luciano Floridi (2019:1) puts it bluntly when he observes that "Fear always sells well, like vampire or zombie movies": in other words, it is appealing for the mass media to frame AI around dystopian, dualistic narratives (Holguín, 2018:5) because this increases readership and ratings (cf. Obozintsev, 2018:1). This view is shared by Dorothy Nelkin who, in her cynically titled Selling science, argues that "too often science in the press is more a subject for consumption than for public scrutiny, more a source of entertainment than for information" (Nelkin, 1995:162). This entertainment factor was apparent in the more sensationalist or alarmist titles in the dataset, such as '"Call me baby": Talking sex dolls fill a void in China' (SowetanLIVE, 4 February 2018), 'Prepare for the time of the robots' (Mail & Guardian, 16 February 2018), and 'Ballerina bots of the Amazon job-pocalypse' (Mail & Guardian, 1 March 2019). Of course, articles about AI may also be alarmist and/or sensationalist because the media are under pressure to succeed within what Davenport and Beck (2001:2) refer to as the "attention economy" (cf. Cave et al., 2018:17), in which clicks and views are highly sought after.
Of interest is that research on dualistic or competing frames indicates that individuals are averse to such frames and therefore attempt to resist them (Sniderman and Theriault, 2004). Obozintsev (2018:15) observes that "exposure to two competing frames can render one frame ineffective, or even counter-effective", particularly if a frame is not aligned with readers' belief systems.

6.6 Framing uncertainty through dualistic frames

We argue that making use of a dualistic frame in which AI is couched in terms of both competition and social progress is not necessarily a reflection of bad journalism (cf. Kampourakis, 2019). Indeed, it is not surprising that the media employ competing frames, given that AI is an emerging technology characterised by uncertainty and conflict (cf. Hornmoen, 2009:1; Kampourakis and McCain, 2020:152). Notwithstanding the fact that the relationship between science and journalism is complex, Holguín (2018:5) contends that "[when] the scientific community is not able to agree on the possible risks or impact of a new scientific or technological breakthrough, this subject may become salient in newspapers". Here we propose that the media may highlight uncertainty around AI and its risks or impact by framing this technology in dualistic terms. Hornmoen (2009:16) points out that "[t]he alternation between different perspectives, with an apparently contradictory identification in the journalist's report, contributes above all to construct an image of an emergent scientific field". We further suggest that journalists may attempt to resolve the competing frames of competition and social progress in specific ways. In an article published in The Citizen on 19 June 2019, the journalist evoked both the frame of competition, to depict AI as destroying jobs, and the frame of social progress, to portray this technology as creating jobs. The journalist attempted to resolve this dualistic frame by mitigating it: he quoted the vice-president of a software company as claiming that while AI may render some jobs redundant, it will also generate labour switching in the sense that it will create "new categories of work." Quoting Deloitte, he qualified this by reporting that AI will replace menial tasks/manual labour, thus "augmenting the workforce and enabling human work to be reframed in terms of problem solving and the ability to create new knowledge." What is interesting about this article (and many others in the dataset) is that it did not question the veracity of the claim, often made in the field of AI, that manual labour and menial tasks will be replaced by automated technology. In failing to provide readers with this context, we argue, such articles may compel readers to adopt an anti-AI view (cf. Jones, 2015:41). A typical claim about AI and menial tasks is epitomised in "Menial [...] tasks that might once have needed the human touch are slowly but surely being replaced with the accuracy of computers" (SowetanLIVE, 31 July 2018). Although it is undeniable that automation is replacing and will continue to replace certain jobs, in the short and medium term at least, "[manual] work is likely to remain surprisingly resistant to automation" (Heath, 2014:1, in conversation with Massachusetts Institute of Technology economist Erik Brynjolfsson). This is due to a phenomenon known as Moravec's Paradox, according to which AI researchers have observed that machines find it difficult to perform tasks that humans find easy, and vice versa. The journalist of an article published in the Daily Maverick (12 November 2018) referenced this paradox when he stated that "Generally, jobs that require gross motor skills are easier to automate than those that require fine motor skills. The jobs that will remain will be those that require a human touch".

6.7 Employing the middle way frame

In addition to mitigating claims, the journalists in our dataset appeared to resolve the ‘AI as competition’ and ‘AI as social progress’ paradox by adopting a middle way frame. Typically, the journalists recommended a compromise position in which human beings and AI should work together in order to complete a variety of tasks. In an article published in The Citizen on 22 January 2020, for example, the journalist quoted a clinical professor of imaging sciences as suggesting that “the combined forces of human and machine would be better than either alone” in the context of breast cancer detection.
Of the 40 articles in the dataset that evoked the frames of competition and social progress, 26 employed the middle way frame and 14 did not. We argue that the presence or absence of the middle way frame in articles that reflect the competing frames may influence how readers perceive AI – whether they regard it as threatening or not. In the 14 articles that did not evoke the middle way frame, the coverage of AI was overwhelmingly alarmist in the sense that this technology was framed as replacing or being about to replace human beings. No room was made for a future in which human beings would be able to exercise control over AI-driven technology. A typical example is reflected in an article in The Citizen (4 October 2019) in which the journalist evoked the frame of nature and quoted Elon Musk as claiming that "computers actually are already much smarter than people on so many dimensions." We noted that some journalists took this claim a step further, employing the frame of artifice to argue that the lines between AI and human beings will blur to such an extent that the former will entirely replace the latter (cf. Jones, 2015:37). In 'Prepare for the time of the robots' (Mail & Guardian, 16 February 2018), the journalist used the frame of artifice to claim that human beings will be "cannibalised" by machines that will "outperform [them] in nearly every job function" in the future. It appears that articles in which AI is portrayed as matching or surpassing human intelligence, but in which a middle way frame is used, may be less alarming to readers because the human element is not dismissed. In a SowetanLIVE article of 2 January 2020, a researcher was quoted as claiming that "[a] computer programme can identify breast cancer from routine scans with greater accuracy than human experts." However, the journalist tempered this claim when he used a middle way frame to quote the same researcher as observing that "[there's] the opportunity for this technology to support the existing excellent service of the (human) reviewers."

6.8 Using reported speech and multiple voices

Whether or not the middle way frame is employed, we also propose that journalists may attempt to resolve uncertainties around AI through the use of quotations/reported speech (Cotter, 2010:174) and multiple perspectives (Hornmoen, 2009:78): “Due to the technical complexity of the latest developments [in] the field and the uncertainty of its predictions around the impact, it seems probable that journalists will count on external sources that to a greater or lesser extent allow them to report on the topic and ‘validate’ their claims and arguments” (Holguín, 2018:7). Examples of the use of reported speech are evident in the preceding section. Studying why and how journalists employ reported speech would constitute a research paper in its own right, but Calsamiglia and Ferrero (2003:44) observe that journalists may use reported speech “as a means of orientating their position on the topic of reference” and absolving themselves from “their responsibility to inform objectively”. Another device we identified was the use of multiple voices through reference to formal reports, tests, and academic studies. We see this device operating in ‘Is your job safe from automation?’ (SowetanLIVE, 20 March 2018), in which the journalist stated that “According to a new Accenture report, one in three jobs in South Africa (5.7 million jobs) is currently at risk of total automation.” The use of reported speech and reference to formal reports, tests, and studies allow journalists to establish multiple perspectives which “play a major role in constructing popular understanding of the science in question” (Dunwoody, 1999:69; cf. Hornmoen, 2009:4), particularly if that science is marked by controversy and/or uncertainty. As far as uncertainty is concerned, Holguín (2018:7) suggests that an over-reliance on ‘experts’ means that journalists avoid providing critical judgements about the risks and impact of AI. In a 4 June 2019 Daily Maverick article, for instance, the journalist claimed that “Soon AI will drive our cars, stock our warehouses and take care of our loved ones. It holds much promise, and industry players say it is on the brink of explosion.” To validate the promise that AI holds, the journalist then quoted a number of experts and referred to “the AI Maturity Report”, which reports that “local organisations invested around R23.5-billion in AI over the last decade.” Other than briefly acknowledging that AI must be driven by human beings, the journalist did not critically interrogate the possible risks of AI.
Looking more closely at our dataset, what is problematic is that the use of multiple voices did not necessarily mean that an article was “multiperspectival” (Hornmoen, 2009:79): “closer inspection may reveal that the text is primarily advancing ‘ways of seeing’ and the rhetoric of a particular group of researchers” (Hornmoen, 2009:79). When it came to articles in our dataset that reflected competing frames, we had to determine which frames were made more salient to promote a particular view of AI (cf. Hornmoen, 2009:81). Consider, for example, ‘Will your financial advisor be replaced by a machine?’, published in The Citizen on 10 March 2018. In this article, AI was framed as a paradox in the sense that it was described both in terms of competition (i.e., as leading to loss of jobs for financial advisors) and in terms of social progress (i.e., as helping financial advisors become more creative). The question in the title was repeated in the article: “will [financial advisors] become redundant altogether?” Through reference to multiple voices (in the form of quotations from financial experts), the journalist constructed a middle way frame when he argued that AI will not replace financial advisors and that machines and human beings will work together to provide clients with financial advice. Returning to Calsamiglia and Ferrero’s (2003) study of reported speech, it appears that the use of reported speech in this case allowed the journalist to orientate his position on the topic of machines replacing human beings.

7. Conclusions

Like other studies on media portrayals of AI, our study signals that coverage in widely circulated South African newspapers tended to veer between utopian and dystopian views of this technology, although most articles reflected a more positive view since they evoked the frame of social progress more frequently than they evoked the frame of competition. In other words, AI was portrayed as friend more frequently than it was portrayed as foe. A pro-AI stance was particularly evident in the areas of ‘AI-human interaction’, ‘Business, finance, and the economy’, ‘Education’, the ‘Environment’, and ‘Healthcare and medicine’. Those articles that had an anti-technology stance, and that focused on threats/competition, were dominated by moral/ethical considerations around ‘Big Brother’, ‘Defence weapons’, ‘Human control over AI’, the ‘News industry’, and ‘South Africa’s preparedness for an AI-driven world’.
We argue that the employment of both the frames of social progress and competition may enable journalists to construct AI as an emerging and uncertain technology. We propose that future research should explore how uncertainty/conflict generated by journalists about AI is processed by readers, since the effect of this uncertainty/conflict is not known: “identifying and testing uncertainty-inducing message features is crucial as uncertainty is a complex cognition that can trigger or reduce both positive states […] and negative states” (Jensen and Hurley, 2012:690). As already mentioned, it does appear that a reader’s exposure to conflicting frames may cause a frame to be rendered ineffective or even counter-effective (Obozintsev, 2018:15).
In most articles in which AI was framed in terms of both pro- and anti-technology ideologies, journalists also made use of the middle way frame, which we argue allowed them to establish a compromise position between AI as friend and foe.
Of interest is that many articles made use of anthropomorphic tropes when discussing the nature of AI, and these tropes overwhelmingly and unrealistically framed AI as either matching or surpassing human intelligence. Yet several articles also subtly judged the intelligence of machines by questioning whether they had the capacity to think and feel. Others touched upon human agency in the development of AI in (South) Africa or considered human agency in more detail, discussing how AI’s growth and implementation should be governed and regulated for the sake of transparency and accountability. The call for human agency is critical as it steers society in the direction of an AI paradigm that draws on Ubuntu and that constructs AI as a technology that orbits around humanity, social justice, and community engagement (Nayebare, 2019:50-51). The public in South Africa and the rest of Africa constitute underrepresented voices in the field of AI (cf. Cisse, 2018), and the media have an important role to play in ensuring that they are informed about AI and have a say in its implications for their lives and futures.

Footnotes

1 We acknowledge that since we examined articles published by only four news outlets, our results may not be representative of frames employed by other outlets.

Reference list

Badenhorst, J. 2016. The robots are coming for your jobs. News24, 29 September. Available: https://www.news24.com/xArchive/Voices/the-robots-are-coming-for-your-jobs-20180719 [Date of access: 18 March 2020].
Barocas, S. & Selbst, A.D. 2016. Big data’s disparate impact. California Law Review 104: 671-732.
Bartz-Beielstein, T. 2019. Why we need an AI-resilient society. arXiv preprint arXiv:1912.08786.
Bergstein, B. 2017. The great AI paradox. MIT Technology Review, 15 December. Available: https://www.technologyreview.com/s/609318/the-great-ai-paradox/ [Date of access: 2 April 2020].
Bishop, J.M. 2016. Singularity, or how I learned to stop worrying and love artificial intelligence. Pp. 267-281 in V.C. Müller (Ed), Risks of general intelligence. London, UK: CRC Press – Chapman & Hall.
Borgesius, Z.F. 2018. Discrimination, artificial intelligence, and algorithmic decision-making. Strasbourg: Council of Europe, Directorate General of Democracy.
Brennan, E. 2016. Why does film and television sci-fi tend to portray machines as being human? Communicating with Machines. ICA Post-conference. Fukuoka Sea Hawk Hotel, Fukuoka, Japan, 14 June 2016. Available: https://arrow.tudublin.ie/cgi/viewcontent.cgi?article=1043&context=aaschmedcon [Date of access: 2 April 2020].
Brennen, J.S., Howard, P.N. & Nielsen, R.K. 2018. An industry-led debate: How UK media cover artificial intelligence. RISJ Fact-Sheet. Oxford, UK: University of Oxford.
Broadbent, E. 2017. Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology 68: 627-652.
Brossard, D. 2013. New media landscapes and the science information consumer. Proceedings of the National Academy of Sciences 110(Suppl. 3): 14096-14101.
Broussard, M. 2018. Artificial unintelligence: How computers misunderstand the world. Cambridge, MA: MIT Press.
Bughin, J. and Hazan, E. 2017. The new spring of artificial intelligence: A few early economies. VOX, 21 August. Available: https://voxeu.org/article/new-spring-artificial-intelligence-few-early-economics [Date of access: 17 March 2020].
Calsamiglia, H. and Ferrero, C. 2003. Role and position of scientific voices: Reported speech in the media. Discourse Studies 5(2): 147-173.
Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B. and Taylor, L. 2018. Portrayals and perceptions of AI and why they matter. Available: https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf [Date of access: 2 February 2020].
Chuan, C.H., Tsai, W.H.S. and Cho, S.Y. 2019. Framing artificial intelligence in American newspapers. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: 339-344.
Cisse, M. 2018. Look to Africa to advance artificial intelligence. Nature 562(7728): 461-462.
Cook, T.S. 2019. The importance of imaging informatics and informaticists in the implementation of AI. Academic Radiology 27: 113-116.
Cotter, C. 2010. News talk: Investigating the language of journalism. New York, NY: Cambridge University Press.
Curioni, A. 2018. Artificial intelligence: Why we must get it right. Informatik-Spektrum 41(1): 7-14.
Davenport, T. and Beck, J. 2001. The attention economy: Understanding the new currency of business. Boston, MA: Harvard Business Review Press.
De Spiegeleire, S., Maas, M. and Sweijs, T. 2017. Artificial intelligence and the future of defense: Strategic implications for small- and medium-sized force providers. The Netherlands: The Hague Centre for Strategic Studies.
Döring, N. and Poeschl, S. 2019. Love and sex with robots: A content analysis of media representations. International Journal of Social Robotics 11(4): 665-677.
Dunwoody, S. 1999. Scientists, journalists, and the meaning of uncertainty. Pp. 59-79 in S.M. Friedman, S. Dunwoody and C.L. Rogers (Eds), Communicating uncertainty: Media coverage of new and controversial science. Mahwah, NJ: Lawrence Erlbaum Associates.
Elish, M.C. and Boyd, D. 2018. Situating methods in the magic of Big Data and AI. Communication Monographs 85(1): 57-80.
Emanuel, C.K. 2016. The end of radiology? Three threats to the future practice of radiology. Journal of the American College of Radiology 13: 1415-1420.
Entman, R.M. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication 43: 51-68.
Fast, E. and Horvitz, E. 2017. Long-term trends in the public perception of artificial intelligence. Pp. 963-969 in S. Singh and S. Markovitch (Eds), Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. San Francisco, CA: AAAI Press.
Feliciano, D. 2019. In Brazil, ‘AI Gloria’ will help women victims of domestic violence. The Rio Times, 29 April. Available: [Date of access: 17 April 2020].
Floridi, L. 2019. What the near future of artificial intelligence could be. Philosophy & Technology 32: 1-15.
Garvey, C. and Maskal, C. 2019. Sentiment analysis of the news media on artificial intelligence does not support claims of negative bias against artificial intelligence. Omics: A Journal of Integrative Biology 23(0): 1-14.
Gastrow, M. 2015. The stars in our eyes: Representations of the Square Kilometre Array telescope in the South African media. Unpublished Doctoral dissertation. Stellenbosch, South Africa: Stellenbosch University.
Gockley, R., Bruce, A., Forlizzi, J., Michalowski, M., Mundell, A., Rosenthal, S., Sellner, B., Simmons, R., Snipes, K., Schultz, A. and Wang, J. 2005. Designing robots for long-term social interaction. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IROS 2005: 1338-1343.
Greco, A., Anerdi, G. and Rodriguez, G. 2009. Acceptance of an animaloid robot as a starting point for cognitive stimulators supporting elders with cognitive impairments. Revue d’Intelligence Artificielle 23(4): 523-37.
Haenlein, M. and Kaplan, A. 2019. A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review 61(4): 5-14.
Hauert, S. 2015. Shape the debate, don’t shy from it. Nature 521(7553): 416-417.
Heath, N. 2014. Why AI could destroy more jobs than it creates, and how to save them. TechRepublic, 18 August. Available: https://www.techrepublic.com/article/ai-is-destroying-more-jobs-than-it-creates-what-it-means-and-how-we-can-stop-it/ [Date of access: 7 April 2020].
Hoes, F. 2019. The importance of ethics in artificial intelligence (or any other form of technology for that matter). Towards Data Science, 30 December. Available: https://towardsdatascience.com/the-importance-of-ethics-in-artificial-intelligence-16af073dedf8?gi=5314dcbcb910 [Date of access: 3 April 2020].
Holguín, L.M. 2018. Communicating artificial intelligence through newspapers: Where is the real danger? Available: https://mediatechnology.leiden.edu/images/uploads/docs/martin-holguin-thesis-communicating-ai-through-newspapers.pdf [Date of access: 3 April 2020].
Hornmoen, H. 2009. What researchers now can tell us: Representing scientific uncertainty in journalism. Observatorio 3(4): 1-20.
Jensen, J.D. and Hurley, R.J. 2012. Conflicting stories about public scientific controversies: Effects of news convergence and divergence on scientists’ credibility. Public Understanding of Science 21(6): 689-704.
Jones, S. 2015. Reading risk in online news articles about artificial intelligence. Unpublished MA dissertation. Edmonton, Alberta: University of Alberta.
Kabu, N. 2017. A content analysis of scientific news in two South African daily newspapers. Unpublished Doctoral dissertation. Johannesburg, South Africa: University of the Witwatersrand.
Kampourakis, K. 2019. How are the uncertainties in scientific knowledge represented in the public sphere? The genetics of intelligence as a case study. Pp. 288-305 in K. McCain and K. Kampourakis (Eds), What is scientific knowledge? New York, NY: Routledge.
Kampourakis, K. and McCain, K. 2020. Uncertainty: How it makes science advance. USA: Sheridan Books, Incorporated.
Kanda, T., Hirano, T., Eaton, D. and Ishiguro, H. 2004. Interactive robots as social partners and peer tutors for children: A field trial. Human-Computer Interaction 19(1): 61-84.
Kirk, J. 2019. The effect of artificial intelligence (AI) on emotional intelligence (EI). Capgemini, 19 November. Available: https://www.capgemini.com/gb-en/2019/11/the-effect-of-artificial-intelligence-ai-on-emotional-intelligence-ei/ [Date of access: 3 April 2020].
Krippendorff, K. 2013. Content analysis: An introduction to its methodology. Los Angeles, CA: Sage.
Krittanawong, C. 2018. The rise of artificial intelligence and the uncertain future for physicians. European Journal of Internal Medicine 48: e13-e14.
Lara, F. and Deckers, J. 2019. Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics: 1-13.
Lele, A. 2019. Disruptive technologies for the militaries and security. Singapore: Springer.
Lorentz, C. 2018. Is the second artificial intelligence winter just around the corner? NetApp, 13 February. Available: https://blog.netapp.com/is-the-second-artificial-intelligence-winter-just-around-the-corner/ [Date of access: 17 March 2020].
MacDorman, K.F. 2006. Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science: 26-29.
Maclure, J. 2019. The new AI spring: A deflationary view. AI & SOCIETY: 1-4.
Magalhães, R. 2019. Expectations vs. Reality: AI narratives in the media. Understanding with Unbabel, 18 October. Available: unbabel.com/blog/artificial-intelligence-media/ [Date of access: 16 April 2020].
Mbuthia, W. 2018. The rise of sex robots and the controversy that comes with them. Available: https://www.standardmedia.co.ke/evewoman/article/2001266355/sex-robots-a-necessary-evil-or-a-pure-curse [Date of access: 3 April 2020].
Maruyama, Y. 2020. Quantum physics and cognitive science from a Wittgensteinian perspective: Bohr’s classicism, Chomsky’s universalism, and Bell’s contextualism. Pp. 375-408 in S. Wuppuluri and N. da Costa (Eds), WITTGENSTEINIAN (adj.): Looking at the world from the viewpoint of Wittgenstein’s philosophy. Cham, Switzerland: Springer Nature Switzerland AG.
McCarthy, J., Minsky, M.L., Rochester, N. and Shannon, C.E. 2006. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine 27(4): 12-14.
McCulloch, W.S. and Pitts, W. 1943. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 5(4): 115-133.
McDonnell, M. and Baxter, D. 2019. Chatbots and gender stereotyping. Interacting with Computers 31(2): 116-121.
Mondal, B. 2020. Artificial intelligence: State of the art. Pp. 389-425 in V.E. Balas, R. Kumar and R. Srivastava (Eds), Recent trends and advances in artificial intelligence and Internet of Things. Cham, Switzerland: Springer.
Müller, V.C. 2016. Editorial: Risks of artificial intelligence. Pp. 1-8 in V.C. Müller (Ed), Risks of general intelligence. London, UK: CRC Press – Chapman & Hall.
Nayebare, M. 2019. Artificial intelligence policies in Africa over the next five years. XRDS: Crossroads, The ACM Magazine for Students 26(2): 50-54.
Ndonye, M.M. 2019. Mass-mediated feminist scholarship failure in Africa: Normalised body-objectification as artificial intelligence (AI). Editon Consortium Journal of Media and Communication Studies (ECJMCS) 1(1): 1-8.
Nelkin, D. 1995. Selling science: How the press covers science and technology. New York, NY: W.H. Freeman and Company.
Ng, A. 2018. AI transformation playbook. Landing AI, 13 December. Available: https://landing.ai/ai-transformation-playbook/ [Date of access: 3 April 2020].
Nisbet, M.C. 2009a. The ethics of framing science. Pp. 51-73 in B. Nerlich, B. Larson and R. Elliott (Eds), Communicating biological sciences: Ethical and metaphorical dimensions. London, UK: Ashgate.
Nisbet, M.C. 2009b. Framing science. A new paradigm in public engagement. Pp. 1-32 in L. Kahlor and P. Stout (Eds), Understanding science: New agendas in science communication. New York, NY: Taylor and Francis.
Nisbet, M.C. 2016. The ethics of framing science. Pp. 51-74 in B. Nerlich, R. Elliott and B. Larson (Eds), Communicating biological sciences. USA: Routledge.
Obozintsev, L. 2018. From Skynet to Siri: An exploration of the nature and effects of media coverage of artificial intelligence. Unpublished Doctoral thesis. Newark, Delaware: University of Delaware.
O’Carroll, E. and Driscoll, M. 2018. ‘2001: A Space Odyssey’ turns 50: Why HAL endures. The Christian Science Monitor, 3 April. Available: https://www.csmonitor.com/Technology/2018/0403/2001-A-Space-Odyssey-turns-50-Why-HAL-endures [Date of access: 16 April 2020].
Orr, D. 2017. At last, a cure for feminism: Sex robots. Available: https://www.theguardian.com/commentisfree/2016/jun/10/feminism-sex-robots-women-technology-objectify [Date of access: 3 April 2020].
Osoba, O.A. and Welser, W. 2017. The risks of artificial intelligence to security and the future of work. Santa Monica, California: Rand Corporation.
Ouchchy, L., Coin, A. and Dubljević, V. 2020. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & SOCIETY: 1-10.
Pakdemirli, E. 2019. Artificial intelligence in radiology: Friend or foe? Where are we now and where are we heading? Acta Radiologica Open 8(2): 1-5.
Perez, J.A., Deligianni, F., Ravi, D. and Yang, G.Z. 2018. Artificial intelligence and robotics. London: UK-RAS Network.
Piekniewski, F. 2018. AI winter is well on its way. Piekniewski’s Blog, 28 May. Available: https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/ [Date of access: 17 March 2020].
Proudfoot, D. 2011. Anthropomorphism and AI: Turing’s much misunderstood imitation game. Artificial Intelligence 175(5-6): 950-957.
Quer, G., Muse, E.D., Nikzad, N., Topol, E.J. and Steinhubl, S.R. 2017. Augmenting diagnostic vision with AI. The Lancet 390(10091): 31764-31766.
Ray, R. 2018. Andrew Ng sees an eternal springtime for AI. ZDNet, 13 December. Available: https://www.zdnet.com/article/andrew-ng-sees-an-eternal-springtime-for-ai/ [Date of access: 17 March 2020].
Reddy, S. 2018. Use of artificial intelligence in healthcare delivery. Pp. 81-97 in T.F. Heston (Ed), eHealth-Making Health Care Smarter. London, UK: IntechOpen.
Rochyadi-Reetz, M., Arlt, D., Wolling, J. and Bräuer, M. 2019. Explaining the media’s framing of renewable energies: An international comparison. Frontiers in Environmental Science 7: Article 119.
Rosenblatt, F. 1958. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 65(6): 386-408.
Samuel, J.L. 2019. Company from the uncanny valley: A psychological perspective on social robots, anthropomorphism and the introduction of robots to society. Ethics in Progress 10(2): 8-26.
Schumann, S. 2019. Probability of an approaching winter. Towards Data Science, 17 August. Available: https://towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a [Date of access: 17 March 2020].
Shin, Y. 2019. The spring of artificial intelligence in its global winter. IEEE Annals of the History of Computing 41(4): 71-82.
Siau, K. and Wang, W. 2018. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal 31(2): 47-53.
Siegel, E. 2019. The media’s coverage of AI is bogus. Scientific American, 20 November. Available: https://blogs.scientificamerican.com/observations/the-medias-coverage-of-ai-is-bogus/ [Date of access: 16 April 2020].
Sinur, J. 2019. So how goes that AI spring? Forbes, 29 April. Available: https://www.forbes.com/sites/cognitiveworld/2019/04/29/so-how-goes-that-ai-spring/#1c18387a23d4 [Date of access: 17 March 2020].
Sniderman, P.M. and Theriault, S.M. 2004. The structure of political argument and the logic of issue framing. Pp. 133-165 in W.E. Saris and P.M. Sniderman (Eds), Studies in public opinion. Princeton, NJ: Princeton University Press.
Staff reporter. 2017. Machine learning may erase jobs, says Yudala. Daily Times, 28 August. Available: https://dailytimes.ng/machine-intelligence-ai-may-erase-jobs-says-yudala/ [Date of access: 17 April 2020].
Stahl, B.C., Timmermans, J. and Mittelstadt, B.D. 2016. The ethics of computing: A survey of the computing-oriented literature. ACM Computing Surveys (CSUR) 48(4): 1-38.
Stassa, E. 2016. Are sex robots unethical or just unimaginative as hell? Available: https://jezebel.com/are-sex-robots-unethical-or-just-unimaginative-as-hell-1769358748 [Date of access: 3 April 2020].
Strekalova, Y.A. 2015. Informing dissemination research: A content analysis of US newspaper coverage of medical nanotechnology news. Science Communication 37(2): 151-172.
Turing, A.M. 1950. Computing machinery and intelligence. Mind 59(236): 433-460.
Vincent, J. 2016. These are three of the biggest problems facing today’s AI. The Verge, 10 October. Available: https://www.theverge.com/2016/10/10/13224930/ai-deep-learning-limitations-drawbacks [Date of access: 3 April 2020].
Watson, D. 2019. The rhetoric and reality of anthropomorphism in artificial intelligence. Minds and Machines 29: 417-440.
Złotowski, J., Yogeeswaran, K. and Bartneck, C. 2017. Can we control it? Autonomous robots threaten human uniqueness, safety, and resources. International Journal of Human-Computer Studies 100: 48-54.