{"id":1253,"date":"2021-06-07T17:37:13","date_gmt":"2021-06-07T17:37:13","guid":{"rendered":"http:\/\/ensovoort.co.za\/?p=1253"},"modified":"2022-01-27T10:53:37","modified_gmt":"2022-01-27T10:53:37","slug":"killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles","status":"publish","type":"post","link":"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/","title":{"rendered":"Killer robots, humanoid companions, and super-intelligent machines: The anthropomorphism of AI in South African news articles"},"content":{"rendered":"<p>Title: Killer robots, humanoid companions, and super-intelligent machines: The anthropomorphism of AI in South African news articles<\/p>\n<p>Author: Dr. Susan Brokensha and dr. Thinus Conradie, University of the Free State.<\/p>\n<p><em>Ensovoort, volume 42 (2021), number 6: 3<\/em><\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_45_1 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\">Table of Contents | Inhoudsopgawe<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" area-label=\"ez-toc-toggle-icon-1\"><label for=\"item-69dc2f6f4436e\" aria-label=\"Table of Content\"><span style=\"display: flex;align-items: center;width: 35px;height: 30px;justify-content: center;direction:ltr;\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" 
xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/label><input  type=\"checkbox\" id=\"item-69dc2f6f4436e\"><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#Abstract\" title=\"Abstract\">Abstract<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#1_Introduction\" title=\"1. Introduction\">1. 
Introduction<\/a><ul class='ez-toc-list-level-4'><li class='ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#11_Social_and_cognitive_anthropomorphism_of_artificial_intelligence\" title=\"1.1 Social and cognitive anthropomorphism of artificial intelligence\">1.1 Social and cognitive anthropomorphism of artificial intelligence<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#2_Framing_theory_and_anthropomorphising_AI\" title=\"2. Framing theory and anthropomorphising AI\">2. Framing theory and anthropomorphising AI<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#3_Methods\" title=\"3. Methods\">3. 
Methods<\/a><ul class='ez-toc-list-level-4'><li class='ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#31_Sample\" title=\"3.1 Sample\">3.1 Sample<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#32_Analytic_framework\" title=\"3.2 Analytic framework\">3.2 Analytic framework<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#33_Dominant_topics_and_valence\" title=\"3.3 Dominant topics and valence\">3.3 Dominant topics and valence<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#4_Findings\" title=\"4. Findings\">4. 
Findings<\/a><ul class='ez-toc-list-level-4'><li class='ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#41_Salient_topics_and_valence\" title=\"4.1 Salient topics and valence\">4.1 Salient topics and valence<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#42_Anthropomorphising_AI\" title=\"4.2 Anthropomorphising AI\">4.2 Anthropomorphising AI<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#5_Discussion\" title=\"5. Discussion\">5. 
Discussion<\/a><ul class='ez-toc-list-level-4'><li class='ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-13\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#51_Articles_in_which_cognitive_anthropomorphism_predominated\" title=\"5.1 Articles in which cognitive anthropomorphism predominated\">5.1 Articles in which cognitive anthropomorphism predominated<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-14\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#52_Articles_in_which_social_anthropomorphism_predominated\" title=\"5.2 Articles in which social anthropomorphism predominated\">5.2 Articles in which social anthropomorphism predominated<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-15\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#53_A_comparison_of_the_news_outlets_with_regard_to_topics_and_anthropomorphism_of_AI\" title=\"5.3 A comparison of the news outlets with regard to topics and anthropomorphism of AI\">5.3 A comparison of the news outlets with regard to topics and anthropomorphism of AI<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-16\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#6_Conclusions\" title=\"6. Conclusions\">6. 
Conclusions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-17\" href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/#References\" title=\"References\">References<\/a><\/li><\/ul><\/nav><\/div>\n<h3><span class=\"ez-toc-section\" id=\"Abstract\"><\/span>Abstract<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>How artificial intelligence (AI) is framed in news articles is significant as framing influences society\u2019s perception and reception of this emerging technology. Journalists may depict AI as a tool that merely assists individuals to perform a variety of tasks or as a (humanoid) agent that is self-aware, capable of both independent thought and creativity. The latter type of representation may be harmful, since anthropomorphism of AI not only generates unrealistic expectations of this technology, but also instils fears about technological singularity, a hypothetical future in which technological growth becomes unmanageable. To determine how and to what extent the media in South Africa anthropomorphise AI, we employed framing theory to conduct a qualitative content analysis of articles on AI published in four South African online newspapers. We distinguished between social anthropomorphism, which frames AI in terms of exhibiting a human form and\/or human-like qualities and cognitive anthropomorphism, which refers to the tendency to conflate human and machine intelligence. Most articles reflected social anthropomorphism of AI, while a few framed it only in terms of cognitive anthropomorphism. Several reflected both types of anthropomorphism. 
Based on the findings, we concluded that anthropomorphism of AI may hinder the conceptualisation of the epistemological and ethical consequences inherent in this technology.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Keywords:<\/strong> Artificial intelligence; Framing theory; News articles; Cognitive anthropomorphism; Social anthropomorphism<\/p>\n<h3><span class=\"ez-toc-section\" id=\"1_Introduction\"><\/span>1. Introduction<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<h4><span class=\"ez-toc-section\" id=\"11_Social_and_cognitive_anthropomorphism_of_artificial_intelligence\"><\/span>1.1 Social and cognitive anthropomorphism of artificial intelligence<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>Anthropomorphism describes our inclination to attribute human-like shapes, emotions, mental states and behaviours to inanimate objects\/animals, and depends neither on the physical features nor on the ontological status of those objects\/animals (Giger, Pi\u00e7arra, Alves\u2010Oliveira, Oliveira and Arriaga, 2019:89). Artificial intelligence (AI), including computers, drones and chatbots, can be anthropomorphised regardless of material dissimilarities between these technologies and humans, and despite the absence of an evolutionary relationship (Giger <em>et al.<\/em>, 2019:89). In this study, we examine both social and cognitive anthropomorphism of AI, where the former process ascribes human traits and forms to AI (cf. Giger <em>et al.<\/em>, 2019:112), and the latter designates the expectation that AI mimics human intelligence (Mueller, 2020:12). We base this distinction on our datasets, which reflect a focus either on the anthropomorphic form and\/or human-like qualities of AI, especially when framing human-robot interaction, or on cognitive processes when describing the intelligence of machine learning and deep learning, for instance. 
Several articles reflect both cognitive and social anthropomorphism (see Section 4).<\/p>\n<p>Cognitive anthropomorphism saturates news coverage of both weak and strong AI; that is, when AI is framed as merely simulating human thinking or as matching human intelligence (Bartneck, 2013; Damiano and Dumouchel, 2018:5; Salles, Evers and Farisco, 2020). The penchant to conflate artificial and human intelligence is unsurprising, historically speaking. Watson (2019:417) traces the practice especially to Alan Turing\u2019s (1950) eponymous Turing Test for determining whether a machine can \u2018think\u2019. Since then, technology experts and laypeople have framed AI in epistemological terms, constructing it as capable of thinking, learning, and discerning. Human intelligence is notoriously resistant to easy definition, and AI might be more challenging still (Kaplan and Haenlein, 2020). Consequently, when humans envision AI, human intelligence offers a ready touchstone (cf. Cave, Craig, Dihal, Dillon, Montgomery, Singler and Taylor, 2018:8; Kaplan and Haenlein, 2019:17). In part, the propensity to anthropomorphise AI in cognitive and\/or social terms derives from speculative fiction (Salles <em>et al<\/em>., 2020:91). However, many AI researchers also employ anthropomorphic descriptions (Salles <em>et al<\/em>., 2020:91). Salles <em>et al.<\/em> (2020:91) suggest that the practice is driven by \u201ca veritable inflation of anthropocentric mental terms that are applied even to non-living, artificial entities\u201d or to \u201can intrinsic epistemic limitation\/bias\u201d on the part of AI scholars. They also speculate that anthropomorphism could stem from the human need to both understand and control AI in order to experience competence (Salles <em>et al<\/em>., 2020:91). 
We propose that journalists too are motivated to anthropomorphise AI to understand and control it, particularly because it is an emerging and therefore uncertain science: \u201cpeople are more likely to anthropomorphize when they want to [\u2026] understand their somewhat unpredictable environment\u201d (Salles <em>et al<\/em>., 2020:89-90).<\/p>\n<p>When news media anthropomorphise AI, one epistemological consequence is the risk of exposing the public to exaggerated or erroneous claims (cf. Proudfoot, 2011; Samuel, 2019; Watson, 2019). To appraise this risk, we used framing theory to conduct a content analysis of anthropomorphism in articles published in four South African newspapers, namely, the <em>Citizen<\/em>, the <em>Daily Maverick<\/em>, the <em>Mail &amp; Guardian Online<\/em>, and the <em>Sowetan LIVE<\/em>. We addressed the following research questions:<\/p>\n<p><em>Research question 1:\u00a0\u00a0 <\/em>What were the most salient topics in the coverage of AI?<\/p>\n<p><em>Research question 2:<\/em>\u00a0 How was AI anthropomorphised?<\/p>\n<p>Our analysis does not intend to, in Salles <em>et al.<\/em>\u2019s (2020:93) words, defend \u201cmoral human exceptionalism\u201d. Instead, we are interested, from an ontological vantage, in AI-human differences. Moreover, we wish to interrogate the potential epistemological and ethical impacts of anthropomorphic framing of AI for public consumption. Therefore, we question how news media constitute and interrelate \u2018humans\u2019 and \u2018machines\u2019. This undertaking is important. How we anthropomorphise AI compels us to re-evaluate how we conceptualise human and artificial intelligences (cf. Curran, Sun and Hong, 2019). For the second research question, we focused on the nature of anthropomorphic framing in South African news articles, instead of unpacking <em>why<\/em> coverage might differ across the outlets. 
Providing a methodical and nuanced account of inter-outlet variability exceeds the purview of this study; however, in future, we plan to question this variability by attending, among other things, to the agenda of each outlet, its target audience, and gatekeeping by editorial boards.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"2_Framing_theory_and_anthropomorphising_AI\"><\/span>2. Framing theory and anthropomorphising AI<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Media framings of AI shape its public reception and perception. Unlike technology experts who are <em>au fait<\/em> with the architecture of AI, the public rely on mediatised knowledge to learn about this technology (Vergeer, 2020:375). Consequently, the agendas of media outlets and the writers they employ inflect what is learned. Framing is also influenced by variables including the pressure to increase ratings and readership (Obozintsev, 2018:12) and by what Holgu\u00edn (2018:5) terms the \u201cmyth-making of journalists\u201d and the \u201cpublic relations strategies of scientists\u201d. Given this gamut of variables, how and the extent to which AI is reported on varies within and across media outlets. Nevertheless, the findings of some studies concur. For example, in a study of how AI is represented in mainstream news articles in North America, the researchers found that the media generally depict AI technologies as being beneficial to society and as offering solutions to problems related to areas including health and the economy (Sun, Zhai, Shen and Chen, 2020:1). United Kingdom-based scholars echo this picture of AI as a problem-solving technology (Brennen, Howard and Nielsen, 2018) as do Garvey and Maskal (2020), who completed a sentiment analysis of the news media on AI in the context of digital health. A study of the Dutch press by Vergeer (2020) reports a balance of positive and negative sentiments. 
Fast and Horvitz (2017) examined <em>The New York Times\u2019<\/em> coverage of AI over a 30-year period, and found that despite a broadly positive outlook, framings have grown increasingly pessimistic over the last decade, with loss of control over AI cited as a concern. Within the literature, few studies of <em>anthropomorphic<\/em> <em>framing<\/em> of AI by journalists have been conducted, although several studies of the framing of AI in general touch upon anthropomorphism (Garvey and Maskal, 2020; Ouchchy, Coin and Dubljevi\u0107, 2020; Vergeer, 2020; Bunz and Braghieri, 2021). A few studies indicate that anthropomorphism of AI is common in news articles that focus specifically on human-robot interaction (Bartneck, 2013; Z\u0142otowski, Proudfoot, Yogeeswaran and Bartneck, 2015).<\/p>\n<p>Framing theory has proven fruitful for ascertaining what journalists and other writers for online news elect to foreground and background when writing on AI (Brennen, Howard and Nielsen, 2018; Obozintsev, 2018; Chuan, Tsai and Cho, 2019; Vergeer, 2020). It entails \u201cthe process of culling a few elements of perceived reality and assembling a narrative that highlights connections among them to promote a particular interpretation\u201d (Entman, 2010:336). Frames selectively define a specific problem in terms of its costs and benefits, allege causes of the problem, make moral judgements about the agents or forces involved, and offer solutions (Entman, 2010:336).<\/p>\n<p>Our deductive analysis of framing combines Nisbet\u2019s (2009) typology with that proposed by Jones (2015) (Table 1). Using existing frames circumvents what Hertog and McLeod (2001:150) decry as \u201cone of the most frustrating tendencies in the study of frames and framing, the tendency for scholars to generate a unique set of frames for every study\u201d. However, Nisbet\u2019s (2009) coding scheme engages how science is broadly framed in public discourse, rather than spotlighting AI. 
Therefore, we amalgamate it with Jones\u2019s (2015) exhaustive analysis of news articles about AI. From Nisbet\u2019s (2009) typology of frames, we omitted the \u2018scientific and technical uncertainty\u2019 frame. Instead, we retained Nisbet\u2019s \u2018social progress\u2019 frame and employed Jones\u2019s (2015) \u2018competition\u2019 frame. We propose that these competing frames may be evoked simultaneously by a journalist to reflect uncertainty about the various facets of AI: \u201c[t]he alternation between different perspectives, with an apparently contradictory identification in the journalist\u2019s report, contributes above all to construct an image of an emergent scientific field\u201d (Hornmoen, 2009:16; cf. Kampourakis and McCain, 2020:152).<\/p>\n<p>All eight frames in Table 1 can be expressed through anthropomorphic tropes. For example, in \u2018\u201cCall me baby\u201d: Talking sex dolls fill a void in China\u2019 (<em>Sowetan LIVE<\/em>, 4 February 2018), the journalist employs anthropomorphic tropes to evoke the frame of nature, referring to one doll by name (\u201cXiaodie\u201d) (cf. Keay and Graduand, 2011) and describing others as \u201cshapely\u201d, \u201chot\u201d, and \u201cbeautiful\u201d. 
Similarly, in \u2018Prepare for the time of the robots\u2019 (<em>Mail &amp; Guardian Online<\/em>, 16 February 2018), the journalist employs the frame of artifice when he anthropomorphises AI as having the potential to \u201coutperform [humans] in nearly every job function\u201d in the future.<\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p><strong>Table 1: <\/strong>A typology of frames employed to study AI in the media<\/p>\n<table width=\"636\">\n<tbody>\n<tr>\n<td colspan=\"2\" width=\"636\"><strong>Nisbet\u2019s (2009) coding scheme<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><strong>Frame<\/strong><\/td>\n<td width=\"522\"><strong>Definition<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Accountability<\/em><\/td>\n<td width=\"522\">Science is framed as needing to be controlled and regulated in order to counter the risks it might pose to society and to the environment (e.g., \u201cThe human element in AI decision-making needs to be made visible, and the decision-makers need to be held to account\u201d: the <em>Daily Maverick<\/em>, 18 July 2019).<\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Morality\/Ethics<\/em><\/td>\n<td width=\"522\">Science is framed as reflecting moral and ethical risks (e.g., \u201cArtificial intelligence (AI) is meant to be better and smarter than humans but it too can succumb to bias and a lack of ethics\u201d: <em>Weekend Argus<\/em>, 8 September 2019).<\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Middle way<\/em><\/td>\n<td width=\"522\">A compromise position between polarised views on a scientific issue is generated (e.g., \u201c[\u2026] the combined forces between human and machine would be better than either alone\u201d: <em>News24<\/em>, 22 January 2020).<\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Pandora\u2019s Box<\/em><\/td>\n<td width=\"522\">Science is depicted as having the potential to spiral out of control (e.g., \u201c[\u2026] robots [\u2026] would take targeting decisions themselves, which could \u2018open an even 
larger Pandora\u2019s box\u2019, he warned\u201d: <em>News24<\/em>, 23 May 2013).<\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Social progress<\/em><\/td>\n<td width=\"522\">Science is framed as enhancing the quality of life of people in areas such as health, education, or finance and as protecting\/improving the environment (e.g., \u201c[\u2026] AI has made the detection of the coronavirus easier\u201d: the <em>Daily Maverick<\/em>, 7 December 2020).<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" width=\"636\"><em>\u00a0<\/em><\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" width=\"636\"><strong>Jones\u2019s (2015) coding scheme<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><strong>Frame<\/strong><\/td>\n<td width=\"522\"><strong>Definition<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Artifice<\/em><\/td>\n<td width=\"522\">AI is framed as an arcane technology in the sense that it could surpass human intelligence (e.g., \u201c[\u2026] AI may soon surpass [human intelligence] due to superior memory, multi-tasking ability, and its almost unlimited knowledge base\u201d: <em>IOL<\/em>, 18 December 2020).<\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Competition<\/em><\/td>\n<td width=\"522\">AI is framed in terms of depleting human and\/or material resources (e.g., \u201c[\u2026] advancements in the tech world mean [AI technologies] are coming closer to replacing humans\u201d: the <em>Sowetan LIVE<\/em>, 31 July 2018).<\/td>\n<\/tr>\n<tr>\n<td width=\"114\"><em>Nature<\/em><\/td>\n<td width=\"522\">AI is framed in terms of the human-machine relationship and often entails romanticising AI or describing\/questioning its nature\/features (e.g., a robotic model called \u2018Noonoouri\u2019 \u201cdescribes herself as cute, curious and a lover of couture\u201d: the <em>Sowetan LIVE<\/em>, 20 September 2018).<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>\u00a0<\/strong><\/p>\n<h3><span class=\"ez-toc-section\" id=\"3_Methods\"><\/span>3. 
Methods<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<h4><span class=\"ez-toc-section\" id=\"31_Sample\"><\/span>3.1 Sample<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>Jones\u2019s (2015:20) approach to data gathering informs our qualitative content analysis, because we \u201c[mimicked] the results that a person (or machine) would have been presented with had they searched for the complete term \u2018Artificial Intelligence\u2019 in [popular news articles]\u201d. We also adhered to Krippendorff\u2019s (2013) guidelines for stratified sampling: first, we focused on collecting news articles from South African media outlets that have an online presence and a high online circulation. To determine which media outlets reach a wide readership, we used <em>Feedspot<\/em>, a content reader that allowed us to read top news websites in one place while keeping track of which articles we had read. We identified the <em>Citizen<\/em>, the <em>Daily Maverick<\/em>, the <em>Mail &amp; Guardian<\/em>, and the <em>Sowetan LIVE<\/em> as newspapers with a high readership online. Second, concentrating on the period between January 2018 and February 2021, we collected articles from the four news outlets by searching for the term \u2018artificial intelligence\u2019. The third step involved limiting the sample to articles with a sustained focus on AI (Jones, 2015:25; Burscher, Vliegenthart and de Vreese, 2016). Ultimately, we discarded 260 articles and conducted exhaustive analyses of the remaining 126: 52 were collected from the <em>Citizen<\/em>, 36 from the <em>Daily Maverick<\/em>, 26 from the <em>Mail &amp; Guardian Online<\/em>, and 12 from the <em>Sowetan LIVE<\/em>. 
This uneven sample matches similar studies of AI in the media (Ouchchy <em>et al.<\/em>, 2020; Sun <em>et al<\/em>., 2020; Vergeer, 2020), and is rationalised by our curation process: articles with only passing allusions to AI were discarded, along with advertorials, sponsored content, and articles that were not text-based. Our unit of analysis was each complete article (cf. Chuan <em>et al.<\/em>, 2019).<\/p>\n<h4><span class=\"ez-toc-section\" id=\"32_Analytic_framework\"><\/span>3.2 Analytic framework<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>As already noted, framing theory and the existing frames adumbrated in the previous section guided our inquiry. The strength of this directed approach to content analysis is that it corroborates and extends well-established theory and avoids cluttering the field with yet another idiosyncratic set of frames. Such an approach \u201cmakes explicit the reality that researchers are unlikely to be working from the na\u00efve perspective that is often viewed as the hallmark of naturalistic designs\u201d (Hsieh and Shannon, 2005:1283). Of course, this approach is imperfect. Particularly, \u201cresearchers approach the data with an informed but, nonetheless, strong bias\u201d (Hsieh and Shannon, 2005:1283). In response, we maintained an audit trail (White, Oelke and Friesen, 2012:244) and employed \u201cthick description\u201d (Geertz, 1973) to bolster the transparency of the analysis and interpretation (Stahl and King, 2020:26).<\/p>\n<h4><span class=\"ez-toc-section\" id=\"33_Dominant_topics_and_valence\"><\/span>3.3 Dominant topics and valence<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>We determined that unpacking how AI is anthropomorphised demands more than discerning the various frames through which this technology was represented in the data (Research question 2). It also proved necessary to examine how framing entwines with the topics that dominated our data. 
After all, repeated media exposure \u201ccauses the public to deem a topic important and allows it to transfer from the media agenda to the public agenda\u201d (Fortunato and Martin, 2016:134; cf. Freyenberger, 2013:16; McCombs, 1997:433).<\/p>\n<p>After expounding how topics related to frames centred on anthropomorphism, a final step in our data analysis involved coding the overall valence of each article as positive, negative, or mixed. Given the unreliability of automated content analysis for determining tone (Boukes, Van de Velde, Araujo and Vliegenthart, 2020), we manually coded each article by examining the presence of multiple keywords that reflected, amongst other things, uncertainty versus certainty and optimism versus pessimism about AI (cf. Kleinnijenhuis, Schultz and Oegema, 2015).<\/p>\n<h3><span class=\"ez-toc-section\" id=\"4_Findings\"><\/span>4. Findings<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<h4><span class=\"ez-toc-section\" id=\"41_Salient_topics_and_valence\"><\/span>4.1 Salient topics and valence<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>Two topics prevailed across the four media outlets: \u2018Business, finance, and the economy\u2019 and \u2018Human-AI interaction\u2019. Each appeared in 18.25% of all articles. The second most salient topic was \u2018Preparedness for an AI-driven world\u2019, which featured in 13.49% of all articles. \u2018Healthcare and medicine\u2019 was next and received coverage in 11.90% of all articles. \u2018Big Brother\u2019 and \u2018Control over AI\u2019 were the fourth most prevalent topics, with each featuring in 10.31% of all articles. Less salient topics were the \u2018News industry\u2019 (3.17%), followed by the \u2018Environment\u2019, \u2018Killer robots\u2019, \u2018Strong AI\u2019, and the \u2018Uncanny Valley\u2019 (2.38% each). \u2018Singularity\u2019 featured in 1.58% of all articles, while \u2018Education\u2019 was covered in 0.79% of all articles. 
All news outlets reported on \u2018Business, finance, and the economy\u2019, \u2018Human-AI interaction\u2019, and \u2018Healthcare and medicine\u2019. The only newspaper that omitted \u2018Preparedness for an AI-driven world\u2019 was the <em>Sowetan LIVE<\/em>. \u2018Big Brother\u2019 featured only in the <em>Citizen<\/em>, while \u2018Control over AI\u2019 was addressed in all newspapers barring the <em>Citizen<\/em>. The \u2018Environment\u2019 and the \u2018News Industry\u2019 were covered only in the <em>Citizen<\/em> and the <em>Daily Maverick<\/em>. \u2018Killer robots\u2019 appeared only in the <em>Daily Maverick<\/em> and the <em>Mail &amp; Guardian Online<\/em>, while \u2018Strong AI\u2019 featured in the <em>Citizen<\/em> and the <em>Mail &amp; Guardian Online<\/em>. The \u2018Uncanny Valley\u2019 was absent from the <em>Mail &amp; Guardian Online<\/em>. Only the <em>Daily Maverick<\/em> reported on \u2018Singularity\u2019, while \u2018Education\u2019 appeared only in the <em>Citizen<\/em>.<\/p>\n<p>Positive valence characterised the topics \u2018Business, finance, and the economy\u2019, \u2018Education\u2019, the \u2018Environment\u2019, \u2018Healthcare and medicine\u2019, \u2018Human-AI interaction\u2019, and \u2018Strong AI\u2019. Negative valence marked \u2018Big Brother\u2019 and \u2018Killer robots\u2019. 
\u2018Control over AI\u2019, the \u2018News Industry\u2019, \u2018Preparedness for an AI-driven world\u2019, \u2018Singularity\u2019, and the \u2018Uncanny Valley\u2019 were coded with mixed valence.<\/p>\n<p>Although this does not form part of the discussion in Section 5, we noted that nineteen articles did not reflect the use of anthropomorphic tropes; topics across these articles included \u2018Preparedness for an AI-driven world\u2019 (six articles), \u2018Big Brother\u2019 (five articles), \u2018Control over AI\u2019 (three articles), \u2018Business, finance, and the economy\u2019 (two articles), the \u2018News Industry\u2019 (two articles), and \u2018Strong AI\u2019 (one article). \u2018Preparedness for an AI-driven world\u2019 and \u2018Business, finance, and the economy\u2019 were mostly coded with positive valence, while \u2018Big Brother\u2019 was coded with negative valence. The two articles on the \u2018News Industry\u2019 reflected negative and mixed valence respectively, and \u2018Control over AI\u2019 was mostly coded with mixed valence.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"42_Anthropomorphising_AI\"><\/span>4.2 Anthropomorphising AI<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p><strong><em>Dataset 1: Cognitive anthropomorphism.<\/em><\/strong> Approximately 12% of all articles in the first dataset (i.e., 16 of 126) reflected cognitive anthropomorphism. A closer reading also indicates that in articles featuring this type of anthropomorphism, the most salient topics were \u2018Healthcare and medicine\u2019, which featured in six articles, followed by \u2018Business, finance, and the economy\u2019, which was the focus of three articles. \u2018Strong AI\u2019 was covered in two articles, as was \u2018Preparedness for an AI-driven world\u2019. \u2018Big Brother\u2019, the \u2018News Industry\u2019, and \u2018Singularity\u2019 featured in one article each. 
When we examined how the two most salient topics were predominantly framed, we noted that four articles focusing on \u2018Healthcare and medicine\u2019 were framed in terms of nature, one in terms of social progress, and one in terms of accountability. For the three articles addressing \u2018Business, finance, and the economy\u2019, the social progress frame predominated.<\/p>\n<p><strong><em>Dataset 2: Social anthropomorphism.<\/em><\/strong> Almost 47% of all articles (i.e., 59 of 126) reflected social anthropomorphism. The most prevalent topics were \u2018Human-AI interaction\u2019 (21 articles), followed by \u2018Business, finance, and the economy\u2019 and \u2018Control over AI\u2019 (with eight articles each). Less salient topics were \u2018Preparedness for an AI-driven world\u2019 (six articles), \u2018Big Brother\u2019 (five articles), \u2018Healthcare and medicine\u2019 (four articles), the \u2018Environment\u2019 (three articles), the \u2018Uncanny Valley\u2019 (two articles), \u2018Education\u2019 (one article), and \u2018Singularity\u2019 (one article). With respect to framing in the most salient articles, 14 articles on \u2018Human-AI interaction\u2019 evoked the frame of nature, six evoked the frame of social progress, and one reflected the frame of accountability. Three articles on \u2018Control over AI\u2019 reflected the morality\/ethics frame and three evoked the frame of accountability. One article on this topic reflected the frame of nature and one evoked the frame of competition. Five articles on \u2018Business, finance, and the economy\u2019 evoked the frame of social progress, while the remaining three reflected the frames of competition, accountability, and nature respectively.<\/p>\n<p><strong><em>Dataset 3: Cognitive and social anthropomorphism.<\/em><\/strong> Thirty-two articles (25.39%) reflected both types of anthropomorphism. 
The most salient topics were \u2018Business, finance, and the economy\u2019 (nine articles) and \u2018Healthcare and medicine\u2019 (five articles). \u2018Control over AI\u2019 and \u2018Human-AI interaction\u2019 followed, with four articles each. Less prevalent topics were \u2018Preparedness for an AI-driven world\u2019 (three articles), \u2018Killer robots\u2019 (three articles), \u2018Big Brother\u2019 (two articles), the \u2018News Industry\u2019 (one article) and the \u2018Uncanny Valley\u2019 (one article). With respect to salient topics and frames, five articles on \u2018Business, finance, and the economy\u2019 evoked the frame of social progress. Three evoked the frame of nature, and one the frame of accountability. Four articles on \u2018Healthcare and medicine\u2019 reflected the frame of social progress and one evoked the frame of nature. With respect to \u2018Control over AI\u2019, the frame of nature was dominant in two articles, while the accountability frame was prevalent in the other two. Finally, two articles on \u2018Human-AI interaction\u2019 reflected social progress as the dominant frame and two evoked the frame of nature.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"5_Discussion\"><\/span>5. Discussion<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>A detailed discussion of all three datasets exceeds the scope of this study. Instead, we foreground articles from the first two datasets, based on the most salient topics. Space constraints aside, we noted that news articles that reflected a dominant type of anthropomorphism coincided with a <em>sustained focus<\/em> on specific types of AI, which in turn impacted the topic under discussion. Thus, articles that accented cognitive anthropomorphism also topicalised AI technologies that simulate human cognition, including machine learning and neural networks. 
This finding informed our decision to discuss cognitive anthropomorphism of these technologies in relation to \u2018Healthcare and medicine\u2019 and \u2018Business, finance, and the economy\u2019, not only because these were the most salient topics in the first dataset, but also because both sectors demand types of AI that augment human thinking. The second dataset, where social anthropomorphism prevailed, essentially focused on AI-driven digital assistants\/social robots and on human engagement with these technologies. We therefore explicate the topics \u2018Human-AI interaction\u2019, \u2018Business, finance, and the economy\u2019, and \u2018Control over AI\u2019, which were the most prevalent topics in this dataset. We did, however, review the third dataset, and noted that the findings mirrored those identified in the first two datasets.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"51_Articles_in_which_cognitive_anthropomorphism_predominated\"><\/span>5.1 Articles in which cognitive anthropomorphism predominated<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>All 16 articles in which cognitive anthropomorphism predominated also struck sensational and\/or alarmist tones, where sensational reporting was \u201centertainment-oriented\u201d or \u201ctabloid-like\u201d (Uribe and Gunter, 2007:207; cf. Vettehen and Kleemans, 2018:114) and alarmist reporting framed AI as warranting fear (cf. Ramakrishna, Verma, Goyal and Agrawal, 2020:1558). Typically, these articles portrayed technology as equalling or rivalling human intelligence. To illustrate, \u201cA team [\u2026] taught an artificial intelligence system to distinguish dangerous skin lesions from benign ones\u201d (the <em>Daily Maverick<\/em>, 29 May 2018), and \u201cA computer programme [\u2026] learnt to navigate a virtual maze and take shortcuts, outperforming a flesh-and-blood expert\u201d (the <em>Citizen<\/em>, 9 May 2018). 
Interestingly, the writers of these articles also deployed various discursive strategies to mitigate an alarmist and\/or sensational tone. Rather than relying solely on their own reporting, journalists commonly moderated exaggerated claims about machine intelligence by quoting or paraphrasing sceptical scholars\/experts and other AI stakeholders, or by simply enclosing key terms in scare quotes (cf. Schmid-Petri and Arlt, 2016:269). Journalists were thus able to maintain authorial distance from potentially false or overstated claims (cf. Johannson, 2019:141). Indeed, in 12 of the 16 articles, we noted that journalists built their articles predominantly on quotations and\/or paraphrases of various actors\u2019 voices. In Johannson\u2019s (2019:138) view, constructing news reports around quotations enables \u201cjournalistic positioning [\u2026] based on the detachment of responsibility\u201d. In \u2018Will your financial advisor be replaced by a machine?\u2019 (the <em>Citizen<\/em>, 10 March 2018), for instance, although the journalist reported that \u201cTechnology has the ability [\u2026] to [\u2026] analyse a full array of products, potentially identifying suitable [financial] solutions\u201d and that it \u201ccan process and analyse all kinds of data far quicker and more accurately than humans\u201d, he also cited an industry expert as predicting that \u201cthe human element\u201d will remain. Doing so enabled him to maintain distance from claiming that artificial intelligence can outpace humans.<\/p>\n<p>Another strategy that several of the journalists in our dataset adopted to attenuate alarmist\/sensational claims about the cognitive abilities of AI was to frame this technology in contradictory terms. This was particularly evident in articles centred on AI in the healthcare industry. 
In \u2018Could AI beat humans in spotting tumours?\u2019 (the <em>Citizen<\/em>, 22 January 2020), for example, a statement such as \u201cMachines can be trained to outperform humans when it comes to catching breast tumours on mammograms\u201d was followed by a reference to a study that highlighted AI\u2019s flaws and misdiagnoses. Similarly, in \u2018AI better at finding skin cancer than doctors\u2019 (the <em>Daily Maverick<\/em>, 29 May 2018), the journalist reported that according to researchers, \u201cA computer was better than human dermatologists at detecting skin cancer\u201d; yet the journalist also quoted a medical expert as stating that \u201cthere is no substitute for a thorough clinical examination\u201d. Citing contradictions around AI represents one option in the range of strategies journalists can leverage to resolve the uncertainty and conflict surrounding this novel technology (cf. Hornmoen, 2009:1; Kampourakis and McCain, 2020:152). They may also disregard any uncertainties and simply treat scientific claims as factual (Peters and Dunwoody, 2016:896). However, this strategy surfaced in only two of the 16 articles in which cognitive anthropomorphism was apparent. For example, in \u2018Wits develops artificial intelligence project with Canadian university to tackle Covid-19 in Africa\u2019 (the <em>Daily Maverick<\/em>, 6 December 2020), the journalist quoted an academic as claiming that, in the fight against COVID-19, \u201cArtificial intelligence is the most advanced set of tools to learn from the data and transfer that knowledge for the purpose of creating realistic modelling\u201d.<\/p>\n<p>Depicting AI in terms of competing interpretations that allow journalists to manage scientific uncertainty is typical of post-normal journalism, which blurs the boundaries between journalism and science (Br\u00fcggemann, 2017:57-58). 
The term \u2018post-normal science\u2019, coined by Funtowicz and Ravetz (1993), denotes research conducted under high levels of uncertainty, given that the phenomena under investigation are characterised as \u201cnovel\u201d, \u201ccomplex\u201d and \u201cnot well understood\u201d (Funtowicz and Ravetz, 1993:87). AI is, undoubtedly, a contested technology. Some praise its power to evenly distribute social and economic benefits, while others decry its ontological threat to humanity (Ulnicane, Knight, Leach, Stahl and Wanjiku, 2020:8-9). Knowledge about AI remains limited and disputed. Unsurprisingly, then, journalists may generate \u201ca plurality of perspectives\u201d (Br\u00fcggemann, 2017:58).<\/p>\n<p>By citing competing frames, journalists can \u201cbalance [\u2026] conflicting views\u201d (Skovsgaard, Alb\u00e6k, Bro and De Vreese, 2013:25) on a given topic and encourage readers to formulate judgements independently. Thus, in \u2018Big data a game-changer for universities\u2019 (the<em> Mail &amp; Guardian Online<\/em>, 25 July 2019), readers must decide for themselves whether they support the view that AI is \u201ccapable of predicting lung cancer with greater accuracy than highly trained and experienced radiologists\u201d or whether they believe that \u201chumans are indispensable\u201d in the detection of lung cancer. This particular example indicates that employing competing frames is not without flaws. In this respect, Boykoff and Boykoff (2004:127) contend that the balance norm could constitute a false balance in that journalists may \u201cpresent competing points of view on a scientific question as though they had equal scientific weight, when actually they do not\u201d.<a href=\"#_ftn1\" name=\"_ftnref1\"><sup>[1]<\/sup><\/a> This false balance may confuse readers and hinder their ability to distinguish fact from fiction (Br\u00fcggemann, 2017:57-58). 
Research suggests that the public resist competing frames because such frames neutralise each other, complicating the process of taking a position on a particular issue (Sniderman and Theriault, 2004:139; cf. Chong and Druckman, 2012:2; Obozintsev, 2018:15). Consider the mixed messages in \u2018X-rays and AI could transform TB detection in South Africa, but red tape might delay things\u2019 (the <em>Daily Maverick<\/em>, 13 December 2020). A layperson would be hard pressed to reconcile the claim made by the World Health Organisation that \u201cthe diagnostic accuracy and the overall performance of [AI-driven] software were similar to the interpretation of digital chest radiography by a human reader\u201d with the view expressed by an expert from the Radiological Society of South Africa that this software requires human oversight. Significantly, what was omitted from this article, and from most articles centred on healthcare, was an account of why AI for disease detection requires human input. Instead, journalists merely reported that AI-driven diagnostic tools could err and require enormous datasets to enhance accuracy.<\/p>\n<p>Indeed, five of the six healthcare articles in our dataset described AI as outperforming humans in the detection, interpretation or prediction of diseases. What is absent from these articles is the fact that human and artificial intelligence cannot be conflated: \u201cSeeking to compare the reasoning of human and artificial intelligence (AI) in the context of medical diagnoses is an overly optimistic anthropomorphism\u201d argues David Burns (2020:E290) in the <em>Canadian Medical Association Journal<\/em>. This position is premised on the observation that machine learning algorithms, which are employed in computer-based applications to support the detection of diseases, are mathematical formulae that are unable to reason, and so they are not intelligent. 
Quer, Muse, Nikzad, Topol and Steinhubl (2017:221) echo this argument, asserting that there is no explanatory power in medical AI: \u201cIt cannot search for causes of what is observed. It recognizes and accurately classifies a skin lesion, but it falls short in explaining the reasons causing that lesion and what can be done to prevent and eliminate disease\u201d. Furthermore, although AI mimics human intelligence, it requires vast archives of data to \u2018learn\u2019. By contrast, humans can learn through simple observation. A good example is a scenario in which a human learns to recognise any given object after observing it only once or twice. AI software would need to view the object repeatedly to recognise it, and even then, it would be unable to distinguish this object from a new object (Pesapane <em>et al<\/em>., 2020:5).<\/p>\n<p>The healthcare articles in our dataset that reflected cognitive anthropomorphism also failed to address ethical issues surrounding medical AI. A key ethical issue pertains to the consequences of using algorithms in healthcare. In \u2018Could AI beat humans in spotting tumours?\u2019 (the <em>Citizen<\/em>, 22 January 2020), the journalist reported on a deep learning AI model designed to detect breast tumours<a href=\"#_ftn2\" name=\"_ftnref2\"><sup>[2]<\/sup><\/a>, quoting a medical doctor and researcher as stating that experts are unable to explain why the model \u2018sees\u2019 or \u2018overlooks\u2019 tumours: \u201cAt this point, we can observe the patterns [\u2026]. We don\u2019t know the \u2018why\u2019\u201d. This constitutes the so-called \u201cblack-box problem\u201d (Castelvecchi, 2016:1), which arises when the processes between the input and output of data are opaque. 
Put differently, computers are programmed to function like neural networks (that are supposedly superior to standard algorithms), but as is the case with the human brain, \u201c[i]nstead of storing what they have learned in a neat block of digital memory, they diffuse the information in a way that is exceedingly difficult to decipher\u201d (Castelvecchi, 2016:1). Problematically, this entails that while doctors can interpret the outcomes of an algorithm, they cannot explain how the algorithm made the diagnosis (cf. Dur\u00e1n and Jongsma, 2021:1; cf. Gerke, Minssen and Cohen, 2020:296), which generates a host of ethical problems: \u201cCan physicians be deemed responsible for medical diagnosis based on AI systems that they cannot fathom? How should physicians act on inscrutable diagnoses?\u201d (Dur\u00e1n and Jongsma, 2021:1). On an epistemological level, we should be concerned, not only about biased algorithms, but also about the degree to which black-box algorithms could damage doctors\u2019 epistemic authority (Dur\u00e1n and Jongsma, 2021:1). Most of the articles in our dataset acknowledged that medical AI requires huge volumes of data to accurately screen for diseases, but overlooked such ethical and epistemic concerns. Additionally, the articles omitted any discussion of the potential for algorithmic biases related to race, gender, age, and disabilities, among others (Gerke <em>et al<\/em>., 2020:303-304). While several articles reported that AI can misdiagnose diseases, they overlooked arguments that inaccuracies may stem from the fact that algorithms are usually trained on Caucasian patients, instead of diverse patient data, thereby exacerbating health disparities (Adams, 2020:1). 
To educate the public about medical AI\u2019s benefits and potential ethical and social risks, Ouchchy <em>et al.<\/em> (2020:927) suggest a multifaceted approach which \u201ccould include increasing the accessibility of correct information to the public in the form of fact-sheets\u201d and collaborating with AI ethicists to improve public debate.<\/p>\n<p>Articles on \u2018Business, finance, and the economy\u2019 that used cognitive anthropomorphism were mainly framed in terms of social progress. This finding is not unexpected, since applications of AI in business, finance, and the economy are generally associated with benefits including increased economic wealth, greater productivity and efficiency (cf. Vergeer, 2020:377). Nevertheless, journalists also struck an alarmist and\/or sensational tone by claiming that AI can emulate human intelligence to make independent financial decisions (the <em>Citizen<\/em>, 23 May 2018; 23 October 2019). Alarmist and\/or sensational coverage of AI may \u201cmaximize ratings, readership, clicks, and views\u201d; yet it may also retard public understanding of such technologies (Lea, 2020:329). As was the case in articles featuring healthcare and medicine, journalists focusing on personal finance framed AI in contradictory terms, reporting, for example, that while AI either matches or exceeds human intelligence, it will not replace human financial advisors in the near future: \u201cThe machine can emulate, but it can\u2019t innovate \u2013 yet\u201d (the <em>Citizen<\/em>, 23 October 2019). As already indicated, couching AI in contradictory terms may help journalists resolve uncertainty about this novel technology. On the other hand, such framings also risk befuddling the public, as noted earlier (cf. Br\u00fcggemann, 2017:57-58). Claims that AI will either mimic or rival human intelligence without replacing humans are unhelpfully vague, and might obstruct public confidence in and perceptions of the technology (cf. 
Cave <em>et al<\/em>., 2018:2).<\/p>\n<p>A 2018 article in <em>The Guardian<\/em> quotes Zachary Lipton, a machine learning expert based at Carnegie Mellon University, as lamenting that \u201cas [\u2026] hyped-up stories [about AI] proliferate, so too does frustration among researchers with how their work is being reported on by journalists and writers who have a shallow understanding of the technology\u201d (Schwartz, 2018:4). Although several articles in our dataset made vague assurances that AI cannot yet substitute human intelligence, none engaged rigorously with arguments among AI scholars and industry experts that artificial general intelligence (AGI) might remain unrealised (Bishop, 2016; Fjelland, 2020; Lea, 2020:324). A 2021 book that offers interesting insights into AI\u2019s so-called superintelligence is <em>The myth of artificial intelligence<\/em> by Erik Larson, who asserts that the scientific aspect of the AI myth is based on the assumption that we will achieve AGI as long as we make inroads in the area of weak or narrow AI. However, \u201c[a]s we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress\u201d (Larson, 2021:2; cf. Lea, 2020:323). In fact, creating an algorithm for general intelligence \u201cwill require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like\u201d (Larson, 2021:2). In Watson\u2019s (2019:417) view, drawing on anthropomorphic tropes to conflate human intelligence and AI is misleading and even dangerous. 
Supposing that algorithms share human traits implies that \u201cwe implicitly grant [AI] a degree of agency that not only overstates its true abilities, but robs us of our own autonomy\u201d (Watson, 2019:434).<\/p>\n<h4><span class=\"ez-toc-section\" id=\"52_Articles_in_which_social_anthropomorphism_predominated\"><\/span>5.2 Articles in which social anthropomorphism predominated<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>The 59 articles in which social anthropomorphism featured mirrored the above-mentioned discursive strategies. Generally, journalists blended their own reports with strategically selected quotes and paraphrases from AI researchers and technology experts. Scare quotes also registered a note of scepticism, and journalists continued to frame AI in contradictory terms. Anthropomorphic framing of social robots or digital assistants shaped articles on human-AI interaction as well as many articles on business\/financial issues, pointing to the human tendency to regard AI-driven technologies as social actors (Duffy and Zawieska, 2012), despite an awareness that they are inanimate (Scholl and Tremoulet, 2000). This is predictable, given that creators of social robots and digital assistants rely on anthropomorphic design to enhance acceptance of and interaction with them (Fink, 2012:200; cf. Darling, 2015:3). Adopting a predominantly pro-AI stance, most journalists in our dataset portrayed social robots\/digital assistants through the frames of nature and social progress, imbuing them with a human-like form and\/or human-like behaviours. For example, with respect to having a human-like form, several social robots\/digital assistants were described as exhibiting \u201chuman likeness\u201d (<em>Daily Maverick<\/em>, 10 November 2019), \u201cremarkable aesthetics\u201d (the <em>Citizen<\/em>, 5 September 2018) and \u201ccomplex facial expressions\u201d (the <em>Sowetan LIVE<\/em>, 4 February 2018). 
With respect to human-like traits, journalists variously described AI as \u201ccapable of handling routine tasks\u201d (the <em>Daily Maverick<\/em>, 9 May 2018), as \u201ca colleague or personal assistant \u2013 in the human sense of the term\u201d (the <em>Mail &amp; Guardian<\/em>, 26 August 2019), and as offering emotional or mental support in the workplace (the <em>Sowetan LIVE<\/em>, 30 November 2020). Quoting or paraphrasing industry experts, one writer of a <em>Sowetan LIVE<\/em> article (30 November 2020) even claimed that AI surpasses humans\u2019 capacity to function as assistants or companions: \u201c[AI] doesn\u2019t judge you and doesn\u2019t care about your race or class or gender. It gives you non-biased responses\u201d. The reality is that if AI systems are trained on a biased dataset, they will replicate bias (Borgesius, 2018:11).<\/p>\n<p>With respect to bias, we noted that six <em>Daily Maverick<\/em> and <em>Mail &amp; Guardian Online<\/em> articles in which social anthropomorphism was apparent briefly addressed AI bias and its ethical impact on society. This finding aligns with Ouchchy <em>et al.<\/em> (2019), who note that media coverage of ethical issues surrounding AI is broadly realistic, but superficial (cf. Ouchchy <em>et al<\/em>., 2020:1). Typical utterances in these articles humanised AI algorithms, as evidenced in: \u201cAI algorithms [\u2026] will reflect and perpetuate the contexts and biases of those that create them\u201d (the <em>Mail &amp; Guardian Online<\/em>, 8 January 2018), \u201cFix AI\u2019s racist, sexist bias\u201d (the <em>Mail &amp; Guardian Online<\/em>, 14 March 2019), and \u201c[\u2026] machines, just like humans, discriminate against ethnic minorities and poor people\u201d (the <em>Daily Maverick<\/em>, 16 October). Epistemologically speaking, these utterances assign moral agency to AI. 
This misleading belief about AI\u2019s capabilities detracts from debates around policies that need to address and prevent algorithmic bias on the part of humans (cf. Salles <em>et al<\/em>., 2020:93; Kaplan, 2015:36).<\/p>\n<p>A few journalists focusing on human-AI interaction and on business\/financial issues framed AI and human attributes as nearly indistinguishable. In such articles, journalists claimed that AI technologies possess \u201chuman-sounding voice[s] complete with \u2018ums\u2019 and \u2018likes\u2019\u201d (the <em>Daily Maverick<\/em>, 9 May 2018) and that they \u201ccan be programmed to [\u2026] chat with customers and answer questions\u201d (the <em>Sowetan LIVE<\/em>, 2 March 2018). Of course, AI-driven technologies are limited to predetermined responses (Heath, 2020:4), which Highfield (2018:3) terms \u201ccanned responses to fixed situations that give humans a sense that the [AI] is alive or capable of understanding [them]\u201d. An examination of the dataset indicated that several journalists mitigated claims about AI\u2019s ability to imitate human traits and behaviours through contradictory views that also evoked the frame of nature. Thus, in \u2018Chip labour: Robots replace waiters in restaurant\u2019 (the <em>Mail &amp; Guardian Online<\/em>, 5 August 2018), although the journalist described a \u201clittle robotic waiter\u201d as wheeling up to a table and serving patrons with food, he also employed the frame of nature to emphasise its \u201cmechanical tones\u201d. Similarly, in \u2018Is your job safe from automation?\u2019 (the <em>Sowetan LIVE<\/em>, 20 March 2018), the journalist referred to a humanoid robot as being able \u201cto recognise voice, principal human emotions, chat with customers and answer questions\u201d, but also averred that \u201cRobots have no sense of emotion or conscience\u201d. 
Although the \u2018Uncanny Valley\u2019 is not discussed here because it was not a salient topic (featuring in only three articles), we noted that a few journalists referred to AI-driven robots as \u201cuncanny\u201d (the <em>Daily Maverick<\/em>, 10 November 2019) and \u201ceerie\u201d (the <em>Sowetan LIVE<\/em>, 8 March 2018). These references acknowledge the uncanny valley, \u201cthe point at which something nonhuman has begun to look so human that the subtle differences left appear disturbing\u201d (Samuel, 2019:12). This has prompted robot designers to produce machines that are easily distinguishable from humans in appearance.<\/p>\n<p>Only four journalists focusing on human-AI interaction or on business\/financial issues expressed a negative or mixed stance on AI by questioning its ability to emulate human emotion and sentience. In a <em>Sowetan LIVE<\/em> article (17 May 2018), for example, the journalist questioned what the future would hold were robots to prepare our meals or care for our children. The journalist evoked the frame of nature to insist that \u201cRobots can\u2019t replace human love, laughter and touch\u201d. In \u2018Is your job safe from automation?\u2019 (the <em>Sowetan LIVE<\/em>, 20 March 2018), the journalist observed that AI \u201clack[s] empathy\u201d and has \u201cno conscience\u201d. These arguments align with expert conclusions that AI-driven robots cannot possess sentience\/consciousness (cf. Hildt, 2019). Put differently, AI remains emotionally unaware and, as Kirk (2019:3) argues, even if we train AI to recognise emotions, humans programme the labelling and interpretation process.<\/p>\n<p>Reasons for attributing a human form and\/or human-like attributes to AI are speculative and vary across the literature. 
Still, adopting a psychological explanation, Epley, Waytz and Cacioppo (2007) propose that people tend to anthropomorphise a non-human agent when they possess insufficient knowledge about the agent\u2019s mental model, when they need to understand and control it, or when they desire to form social bonds (cf. Z\u0142otowski <em>et al<\/em>., 2015:348). Scholars are divided over whether or not anthropomorphism in the context of social robots\/digital assistants should concern us. Turkle (2007, 2010), for instance, argues that human-robot interaction undermines authentic human relationships and exploits human vulnerabilities, while Breazeal (2003) is of the view that social robots may be useful to humans as helpmates and social companions. As far as benefits are concerned, Darling (2015:9) observes that \u201c[s]tate of the art technology is already creating compelling use cases in health and education, only possible as a result of engaging people through anthropomorphism\u201d. We argue that while the benefits of social robots\/digital assistants should not be dismissed, anthropomorphising AI does have several potentially negative consequences (Sparrow and Sparrow, 2006; Bryson, 2010; Hartzog, 2015). We have already touched on the idea that anthropomorphism may dupe people into believing that AI systems are human-like (cf. Kaplan, 2015:36). This concern is echoed by Engstrom (2018:19), who cautions that humanising AI may cause society to raise its expectations of this technology\u2019s capabilities while ignoring its social, economic, and ethical consequences. In articles focused on human-AI interaction and business\/financial issues, we noted that journalists either reported the risks of AI in a superficial manner or omitted them entirely. 
Thus, for example, in \u2018AI tech to assist domestic abuse victims\u2019 (the <em>Citizen<\/em>, 23 November 2018), an AI-driven programme accessed via Facebook\u2019s Messenger was described as \u201ca companion\u201d that is \u201cnon-judgmental\u201d, but the ethical risks around this AI-mental healthcare interface remained unaddressed. Using AI applications for mental healthcare raises several ethical concerns that have been widely discussed in the literature (Riek, 2016; Fiske, Henningsen and Buyx, 2019; Ferretti, Ronchi and Vayena, 2019). Some of these concerns revolve around possible abuse of the applications (in the sense that they could replace established healthcare professionals and thus widen healthcare inequalities), privacy issues, and the role and nature of non-human therapy in the context of vulnerable populations (Fiske <em>et al<\/em>., 2019). In \u2018\u201cCall me baby\u201d: Talking sex dolls fill a void in China\u2019 (the <em>Sowetan LIVE<\/em>, 4 February 2018), the journalist employed derogatory female framing, describing \u201csex dolls that can talk, play music and turn on dishwashers\u201d for \u201clonely men and retirees\u201d. While the journalist conceded that \u201cOn social media, some say the products reinforce sexist stereotypes\u201d, this observation ended the interrogation of sexism. Across the four media outlets \u2013 and quoting AI developers\u2019 own words \u2013 journalists described AI companions or assistants as female, \u201cendowed with remarkable [\u2026] aesthetics\u201d (the <em>Citizen<\/em>, 5 September 2018), as \u201clean\u201d or \u201cslender, with dark flawless skin\u201d etc. (the <em>Sowetan LIVE<\/em>, 28 September 2018). These descriptions echo mass media proclivities for framing human-AI relationships in terms of stereotypical gender roles instead of questioning such representations (cf. D\u00f6ring and Poesch, 2019:665). 
The fact is that most AI-driven companions\/assistants are designed to embody stereotypical femininity (cf. Edwards, Edwards, Stoll, Lin and Massey, 2019). Informative journalism should challenge the entrenchment of these stereotypes that often \u201c[come] with framing [AI] in human terms\u201d (Darling, 2015:3). Another interesting example of how journalists may frame ethical concerns related to the application of AI is reflected in \u2018Online chatbot suspended for hate speech, \u201cdespising\u201d gays and lesbians\u2019, published in the <em>Citizen<\/em> (1 January 2021). The article reports on \u2018Lee Luda\u2019, a chatbot that was recently \u2018accused\u2019 of hate speech after \u2018attacking\u2019 minorities online. Of significance is that although the journalist indicated that the chatbot \u201clearned\u201d from data taken from billions of conversations, this fact was backgrounded in favour of foregrounding the chatbot\u2019s human-like behaviour; according to her designers, \u201cLee Luda is [\u2026] like a kid just learning to have a conversation. It has a long way to go before learning many things\u201d. Emphasising the chatbot\u2019s supposed ability to learn to avoid generating hate speech inadvertently frames this technology as having human intentions and moral agency, both of which are myths. AI does not possess intentionality (Abbass, 2019:165), which Searle defines as \u201cthat property of many mental states and events by which they are directed at or about or of objects and states of affairs in the world\u201d. Without the intentionality to act freely, AI does not have moral agency (Van de Poel, 2020:387).<\/p>\n<p>Social anthropomorphism featured in eight articles on \u2018Human control over AI\u2019 in the <em>Daily Maverick<\/em>, the<em> Mail &amp; Guardian Online<\/em>, and the <em>Sowetan LIVE<\/em>. 
The anthropomorphic tropes, which were evoked mainly through the frames of accountability and morality\/ethics, typically reflected a mixed valence in their propositions that humans must regulate AI. Concerns related mainly to controlling or curtailing algorithmic\/data bias (particularly racist and sexist bias), autonomous weapons, and job losses. With respect to bias, in a <em>Daily Maverick<\/em> article (3 October 2019), the journalist claimed in the lead that \u201cAI can end up very biased\u201d, but repeatedly averred that AI is designed by humans and trained on datasets selected by humans. Similarly, in a <em>Sowetan LIVE<\/em> article (30 January 2021), the journalist described AI as \u201cdangerous\u201d and \u201cprone to errors\u201d, but mainly topicalised the development of AI software \u201cby Africans, for Africans\u201d that helps combat privacy violations and discrimination. Both journalists therefore checked the tendency to frame AI as a moral agent that exhibits autonomous decision-making, thus mitigating fears and unfounded expectations about this technology\u2019s capabilities (cf. Salles <em>et al<\/em>., 2020:93). With respect to autonomous weapons, although the journalist in a <em>Daily Maverick<\/em> article (3 December 2019) referred to \u201ckiller robots\u201d, she foregrounded the need for the international community to protect societies from \u201cmachines [that] can\u2019t read between the lines or operate in the grey zone of uncertainty\u201d. Regarding job lay-offs, a <em>Daily Maverick<\/em> journalist predicted in an article (26 November 2020) that AI will ultimately take human jobs, a development that \u201cwill usher in an era of techno-feudalism\u201d. Yet he also mitigated this prediction by arguing that humans need to ensure that they regulate AI. 
It is not surprising that across the eight articles, the words \u201c(human) control\u201d\/\u201ccontrols\u201d frequently appeared in relation to algorithmic\/data bias, autonomous weapons, and job losses: studies by Ouchchy <em>et al.<\/em> (2020) and Sun <em>et al.<\/em> (2020) suggest that regulation of AI is a frequent topic in the media amidst fears of the ethical consequences of this technology.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"53_A_comparison_of_the_news_outlets_with_regard_to_topics_and_anthropomorphism_of_AI\"><\/span>5.3 A comparison of the news outlets with regard to topics and anthropomorphism of AI<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>Whether journalists published in a mainstream paper such as the<em> Mail &amp; Guardian Online<\/em>, an alternative media outlet such as the <em>Daily Maverick<\/em>, or tabloid-style newspapers such as the <em>Citizen<\/em> and the <em>Sowetan LIVE<\/em>, they employed similar strategies to reflect uncertainty and conflict surrounding AI and its applications. Articles typically combined journalists\u2019 own reports, scare quotes, direct and indirect speech of different actors, and contradictory framing of AI. All four outlets also anthropomorphised AI. The topics of \u2018AI-human interaction\u2019, \u2018Healthcare and medicine\u2019, and \u2018Business, finance, and the economy\u2019 featured across all four outlets, with anthropomorphic framing of AI under the first two topics being uniform across the outlets. AI was overwhelmingly framed positively and depicted as exhibiting human-like form\/human traits or as mimicking human cognitive capabilities. Although articles published in the <em>Sowetan LIVE<\/em> also anthropomorphised AI when discussing business, finance, and the economy, these articles were coded with mixed valence, while articles published in the other newspapers were predominantly coded with positive valence. 
We eschew speculation as to why this was the case, given that between 2018 and the beginning of 2021, we identified only three articles in this newspaper that focused on AI and business\/financial issues. Indeed, only 12 <em>Sowetan LIVE<\/em> articles satisfied our data collection criteria, suggesting that AI\u2019s application in the business\/financial world is an unpopular topic among readers. \u2018Human control over AI\u2019 featured in the <em>Daily Maverick<\/em> and the <em>Mail &amp; Guardian Online<\/em>, as well as in the <em>Sowetan LIVE<\/em>, and again, anthropomorphism of AI under this topic was uniform: AI was described as biased, as taking people\u2019s jobs, and as having the ability to kill humans. With a sensational sobriquet like \u2018Killer robots\u2019, one might assume that any reports on autonomous weapons would be the purview of tabloid-style newspapers, but this topic appeared in only three articles in the <em>Daily Maverick<\/em> and the <em>Mail &amp; Guardian Online<\/em>. Despite some sensational\/alarmist claims, such as the <em>Mail &amp; Guardian Online<\/em>\u2019s (19 March 2018) assertion that \u201c[\u2026] weapons may be able to learn on their own, adapt and fire\u201d, journalists questioned the potential for AI to progress to a level where it will have moral agency and demanded that this type of AI be banned. Another topic that has the potential to be sensationalised is \u2018Big Brother\u2019, so it is unsurprising that it appeared 13 times in the <em>Citizen<\/em>. Five of the 13 articles did not anthropomorphise AI as a \u2018spy\u2019, but instead highlighted the human element that drives surveillance technology. 
As noted in Section 4.1, the only news outlet that did not cover \u2018Preparedness for an AI-driven world\u2019 was the <em>Sowetan LIVE.<\/em> Since this topic was generally framed around the need for South Africans to equip themselves with the skills necessary to cope with AI that is \u2018taking\u2019 people\u2019s jobs, we find the omission of this topic surprising, given that the newspaper\u2019s readership consists mainly of working-class South Africans. The remainder of the topics reflected in our datasets are not discussed, since they constituted less than 4% of the entire dataset.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"6_Conclusions\"><\/span>6. Conclusions<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>This study has revealed that anthropomorphism of AI was pervasive in the four South African online newspapers, with only 19 articles reflecting no anthropomorphic tropes. Most articles (59) elicited social anthropomorphism of AI, while a few (16) evoked cognitive anthropomorphism. A total of 32 reflected both types of anthropomorphism. When cognitive anthropomorphism was evoked, journalists typically portrayed AI as matching or exceeding human intelligence, and when social anthropomorphism was elicited, AI technologies were typically framed as social actors. Whichever type of anthropomorphism was dominant, AI was overwhelmingly represented as benefitting humankind. Although journalists generally attempted to mitigate exaggerated claims about AI by using a variety of discursive strategies, the construction of anthropomorphic tropes to some extent overtook the reality of what AI technologies currently encompass, essentially obscuring these technologies\u2019 epistemological and ethical challenges. 
It is critical that journalists interrogate how they contextualise and qualify AI, given that it is disrupting almost every aspect of our lives.<\/p>\n<p>While the content analysis yielded insights into how AI is framed by the media in South Africa, a limitation of the study is that the sample is not necessarily representative of the anthropomorphic framing employed in other online news outlets that may feature different or more polarised views of AI. Nevertheless, Obozintsev (2018:45) observes that \u201cit seems unlikely that artificial intelligence would be framed in a markedly different manner\u201d in other outlets, since \u201copinions about [AI] are not as politically divisive as scientific issues such as climate change and evolution\u201d, for example.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"References\"><\/span>References<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Abbass, H.A. 2019. Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. <em>Cognitive Computation<\/em> 11: 159\u2013171.<\/p>\n<p>Adams, K. 2020. 3 hospital execs: How to ensure medical AI is trained on sufficiently diverse patient data. <em>Becker\u2019s Health IT<\/em>, 30 November. Available: <a href=\"https:\/\/www.beckershospitalreview.com\/artificial-intelligence\/3-hospital-execs-how-to-ensure-medical-ai-is-trained-on-sufficiently-diverse-patient-data.html\">https:\/\/www.beckershospitalreview.com\/artificial-intelligence\/3-hospital-execs-how-to-ensure-medical-ai-is-trained-on-sufficiently-diverse-patient-data.html<\/a> (Date of access: 9 April 2021).<\/p>\n<p>Bartneck, C. 2013. Robots in the theatre and the media. <em>Design and Semantics of Form and Movement<\/em>: 64\u201370.<\/p>\n<p>Birkenshaw, J. 2020. What is the value of firms in an AI world? Pp. 23-35 in J. Canals and F. Heukamp (Eds.), <em>The future of management in an AI world<\/em>. 
USA: Palgrave Macmillan.<\/p>\n<p>Bishop, J.M. 2016. Singularity, or how I learned to stop worrying and love artificial intelligence. Pp. 267-281 in V.C. M\u00fcller (Ed.), <em>Risks of general intelligence<\/em>. London, UK: CRC Press \u2013 Chapman &amp; Hall.<\/p>\n<p>Borgesius, F.Z. 2018. <em>Discrimination, artificial intelligence, and algorithmic decision-making<\/em>. Available: <a href=\"https:\/\/rm.coe.int\/discrimination-artificial-intelligence-and-algorithmic-decisionmaking\/1680925d73\">https:\/\/rm.coe.int\/discrimination-artificial-intelligence-and-algorithmic-decisionmaking\/1680925d73<\/a> (Date of access: 3 March 2021).<\/p>\n<p>Boukes, M., Van de Velde, B., Araujo, T. and Vliegenthart, R. 2020. What\u2019s the tone? Easy doesn\u2019t do it: Analyzing performance and agreement between off-the-shelf sentiment analysis tools. <em>Communication Methods and Measures<\/em> 14(2): 83-104.<\/p>\n<p>Boykoff, M. and Boykoff, J. 2004. Balance as bias: Global warming and the US prestige press. <em>Global Environmental Change<\/em> 14(2): 125-136.<\/p>\n<p>Breazeal, C. 2003. Toward sociable robots. <em>Robotics and Autonomous Systems<\/em> 42(3): 167-175.<\/p>\n<p>Brennen, J.S., Howard, P.N. and Nielsen, R.K. 2018. An industry-led debate: How UK media cover artificial intelligence. <em>RISJ Fact-Sheet<\/em>. Oxford, UK: University of Oxford.<\/p>\n<p>Br\u00fcggemann, M. 2017. Post-normal journalism. Climate journalism and its changing contribution to an unsustainable debate. Pp. 57-73 in P. Berglez, U. Olausson and M. Ots (Eds.), <em>What is sustainable journalism? Integrating the environmental, social, and economic challenges of journalism<\/em>. New York, NY: Peter Lang.<\/p>\n<p>Bryson, J. 2010. Robots should be slaves. Pp. 63-74 in Y. Wilks (Ed.), <em>Close engagements with artificial companions: Key social, psychological, ethical and design issues<\/em>. 
Amsterdam: John Benjamins Publishing Company.<\/p>\n<p>Bunz, M. and Braghieri, M. 2021. The AI doctor will see you now: Assessing the framing of AI in news coverage. <em>AI &amp; SOCIETY<\/em>: 1-14.<\/p>\n<p>Burns, D.M. 2020. Artificial intelligence isn\u2019t. <em>Canadian Medical Association Journal<\/em> 192(11): E290-E290.<\/p>\n<p>Burscher, B., Vliegenthart, R. and Vreese, C.H.D. 2016. Frames beyond words: Applying cluster and sentiment analysis to news coverage of the nuclear power issue. <em>Social Science Computer Review<\/em> 34(5): 530-545.<\/p>\n<p>Castelvecchi, D. 2016. Can we open the black box of AI? <em>Nature News<\/em> 538(7623): 20-23.<\/p>\n<p>Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B. and Taylor, L. 2018. Portrayals and perceptions of AI and why they matter. Available: https:\/\/royalsociety.org\/-\/media\/policy\/projects\/ai-narratives\/AI-narratives-workshop-findings.pdf (Date of access: 2 February 2020).<\/p>\n<p>Chong, D. and Druckman, J.N. 2012. Counterframing effects. <em>Journal of Politics<\/em> 75(1): 1-16.<\/p>\n<p>Chuan, C.H., Tsai, W.H.S. and Cho, S.Y. 2019. Framing artificial intelligence in American newspapers. In: <em>Proceedings of the 2019 AAAI\/ACM Conference on AI, Ethics, and Society<\/em>: 339-344.<\/p>\n<p>Colom, R., Karama, S., Jung, R.E. and Haier, R.J. 2010. Human intelligence and brain networks. <em>Dialogues in Clinical Neuroscience<\/em> 12(4): 489-501.<\/p>\n<p>Curran, N.M., Sun, J. and Hong, J.W. 2019. Anthropomorphizing AlphaGo: A content analysis of the framing of Google DeepMind\u2019s AlphaGo in the Chinese and American press. <em>AI &amp; SOCIETY<\/em> 35: 727-735.<\/p>\n<p>Damiano, L. and Dumouchel, P. 2018. Anthropomorphism in human\u2013robot co-evolution. <em>Frontiers in Psychology <\/em>9: 1-9.<\/p>\n<p>Darling, K. 2015. \u2018Who\u2019s Johnny?\u2019 Anthropomorphic framing in human-robot interaction, integration, and policy. Pp. 3-21 in P. Lin, G. Bekey, K. Abney and R. 
Jenkins (Eds.), <em>Robotic Ethics 2.0. <\/em>Oxford: Oxford University Press.<\/p>\n<p>D\u00f6ring, N. and Poeschl, S. 2019. Love and sex with robots: A content analysis of media representations. <em>International Journal of Social Robotics<\/em> 11(4): 665-677.<\/p>\n<p>Duffy, B.R. and Zawieska, K. 2012. Suspension of disbelief in social robotics. <em>21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)<\/em>: 484-489.<\/p>\n<p>Dur\u00e1n, J.M. and Jongsma, K.R. 2021. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. <em>Journal of Medical Ethics<\/em> 0: 1-7.<\/p>\n<p>Edwards, C., Edwards, A., Stoll, B., Lin, X. and Massey, N. 2019. Evaluations of an artificial intelligence instructor&#8217;s voice: Social Identity Theory in human-robot interactions. <em>Computers in Human Behavior<\/em> 90: 357-362.<\/p>\n<p>Engstrom, E. 2018. Gendering of AI\/robots: Implications for gender equality amongst youth generations. Report written by Eugenia Novoa (Speaker), Siddhesh Kapote (Speaker), Ebba Engstrom (Speaker), Jose Alvarez (Speaker) and Smriti Sonam (Special Rapporteur). Images provided by AFI Changemakers and UNCTAD Youth Summit Delegates 2018. Available: https:\/\/arielfoundation.org\/wp-content\/uploads\/2019\/01\/AFIChangemakers-and-UNCTAD-Delegates-Report-on-Technology-2019.pdf#page=13 (Date of access: 8 January 2021).<\/p>\n<p>Entman, R.M. 2010. Framing media power. Pp. 331-355 in P. D\u2019Angelo and J. Kuypers (Eds.), <em>Doing news framing analysis.<\/em> New York, NY: Routledge.<\/p>\n<p>Epley, N., Waytz, A. and Cacioppo, J.T. 2007. On seeing human: A three-factor theory of anthropomorphism. <em>Psychological Review<\/em> 114(4): 864\u2013886.<\/p>\n<p>Erickson, R.P. 2014. Are humans the most intelligent species? <em>Journal of Intelligence<\/em> 2(3): 119-121.<\/p>\n<p>Fast, E. and Horvitz, E. 2017. 
Long-term trends in the public perception of artificial intelligence. In: <em>Proceedings of the AAAI Conference on Artificial Intelligence<\/em> 31(1): 963-969.<\/p>\n<p>Ferretti, A., Ronchi, E. and Vayena, E. 2019. From principles to practice: Benchmarking government guidance on health apps. <em>The Lancet Digital Health<\/em> 1(2): e55-e57.<\/p>\n<p>Fiske, A., Henningsen, P. and Buyx, A. 2019. Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. <em>Journal of Medical Internet Research<\/em> 21(5): e13216.<\/p>\n<p>Fjelland, R. 2020. Why general artificial intelligence will not be realized. <em>Humanities and Social Sciences Communications <\/em>7(1): 1-9.<\/p>\n<p>Fortunato, J.A. and Martin, S.E. 2016. The intersection of agenda-setting, the media environment, and election campaign laws. <em>Journal of Information Policy<\/em> 6(1): 129-153.<\/p>\n<p>Freyenberger, D. 2013. Amanda Knox: A content analysis of media framing in newspapers around the world. Available: http:\/\/dc.etsu.edu\/cgi\/viewcontent.cgi?article=2281&amp;context=etd (Date of access: 22 February 2021).<\/p>\n<p>Funtowicz, S.O. and Ravetz, J.R. 1993. Science for the post-normal age. <em>Futures<\/em> 25(7): 739-755.<\/p>\n<p>Garvey, C. and Maskal, C. 2020. Sentiment analysis of the news media on artificial intelligence does not support claims of negative bias against artificial intelligence. <em>Omics: A Journal of Integrative Biology<\/em> 24(5): 286-299.<\/p>\n<p>Geertz, C. 1973. <em>The interpretation of cultures: Selected essays<\/em>. New York, NY: Basic Books.<\/p>\n<p>Gerke, S., Minssen, T. and Cohen, G. 2020. Ethical and legal challenges of artificial intelligence-driven healthcare. <em>Artificial Intelligence in Healthcare<\/em>: 295-336.<\/p>\n<p>Giger, J.C., Pi\u00e7arra, N., Alves\u2010Oliveira, P., Oliveira, R. and Arriaga, P. 2019. 
Humanization of robots: Is it really such a good idea? <em>Human Behavior and Emerging Technologies<\/em> 1(2): 111-123.<\/p>\n<p>Heath, N. 2020. What is AI? Everything you need to know about artificial intelligence. <em>ZDNet<\/em>, 11 December. Available: <a href=\"https:\/\/www.zdnet.com\/article\/what-is-ai-everything-you-need-to-know-about-artificial-intelligence\/\">https:\/\/www.zdnet.com\/article\/what-is-ai-everything-you-need-to-know-about-artificial-intelligence\/<\/a> (Date of access: 19 April 2021).<\/p>\n<p>Hertog, J. and McLeod, D. 2001. A multi-perspectival approach to framing analysis: A field guide. Pp. 141-162 in S. Reese, O. Gandy and A. Grant (Eds.), <em>Framing public life<\/em>. Mahwah, NJ: Erlbaum.<\/p>\n<p>Highfield, V. 2018. Can AI really be emotionally intelligent? <em>Alphr<\/em>, 27 June. Available: <a href=\"https:\/\/www.alphr.com\/artificial-intelligence\/1009663\/can-ai-really-be-emotionally-intelligent\/\">https:\/\/www.alphr.com\/artificial-intelligence\/1009663\/can-ai-really-be-emotionally-intelligent\/<\/a> (Date of access: 19 April 2021).<\/p>\n<p>Hildt, E. 2019. Artificial intelligence: Does consciousness matter? <em>Frontiers in Psychology<\/em> 10: 1-3.<\/p>\n<p>Holgu\u00edn, L.M. 2018. Communicating artificial intelligence through newspapers: Where is the real danger? Available: https:\/\/mediatechnology.leiden.edu\/images\/uploads\/docs\/martin-holguin-thesis-communicating-ai-through-newspapers.pdf (Date of access: 3 April 2020).<\/p>\n<p>Hornmoen, H. 2009. What researchers now can tell us: Representing scientific uncertainty in journalism. <em>Observatorio<\/em> 3(4): 1-20.<\/p>\n<p>Hsieh, H.F. and Shannon, S.E. 2005. Three approaches to qualitative content analysis. <em>Qualitative Health Research<\/em> 15(9): 1277-1288.<\/p>\n<p>Johansson, M. 2019. Digital and written quotations in a news text: The hybrid genre of political news opinion. Pp. 133-162 in P.B. Franch and P.G.C. Blitvich 
(Eds.), <em>Analyzing digital discourse: New insights and future directions<\/em>. Cham, Switzerland: Springer.<\/p>\n<p>Jones, S. 2015. Reading risk in online news articles about artificial intelligence. Unpublished MA dissertation. Edmonton, Alberta: University of Alberta.<\/p>\n<p>Kampourakis, K. and McCain, K. 2020. <em>Uncertainty: How it makes science advance<\/em>. USA: Sheridan Books, Incorporated.<\/p>\n<p>Kaplan, A. and Haenlein, M. 2019. Siri, Siri, in my hand: Who\u2019s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. <em>Business Horizons<\/em> 62(1): 15-25.<\/p>\n<p>Kaplan, A. and Haenlein, M. 2020. Rulers of the world, unite! The challenges and opportunities of artificial intelligence. <em>Business Horizons<\/em> 63(1): 37-50.<\/p>\n<p>Kirk, J. 2019. The effect of Artificial Intelligence (AI) on emotional intelligence (EI). <em>Capgemini<\/em>, 19 November. Available: <a href=\"https:\/\/www.capgemini.com\/gb-en\/2019\/11\/the-effect-of-artificial-intelligence-ai-on-emotional-intelligence-ei\/\">https:\/\/www.capgemini.com\/gb-en\/2019\/11\/the-effect-of-artificial-intelligence-ai-on-emotional-intelligence-ei\/<\/a> (Date of access: 5 May 2021).<\/p>\n<p>Kleinnijenhuis, J., Schultz, F. and Oegema, D. 2015. Frame complexity and the financial crisis: A comparison of the United States, the United Kingdom, and Germany in the period 2007\u20132012. <em>Journal of Communication<\/em> 65(1): 1-23.<\/p>\n<p>Krippendorff, K. 2013. <em>Content analysis: An introduction to its methodology.<\/em> Los Angeles, CA: Sage.<\/p>\n<p>Larson, E.J. 2021. <em>The myth of artificial intelligence: Why computers can\u2019t think the way we do<\/em>. Cambridge, MA: Harvard University Press.<\/p>\n<p>Lea, G.R. 2020. Constructivism and its risks in artificial intelligence. <em>Prometheus <\/em>36(4): 322-346.<\/p>\n<p>McCombs, M. 1997. Building consensus: The news media&#8217;s agenda-setting roles. 
<em>Political Communication<\/em> 14(4): 433-443.<\/p>\n<p>Monett, D., Lewis, C.W. and Th\u00f3risson, K.R. 2020. Introduction to the JAGI special issue \u201cOn defining Artificial Intelligence\u201d \u2013 Commentaries and author\u2019s response. <em>Journal of Artificial General Intelligence <\/em>11(2): 1-100.<\/p>\n<p>Mueller, S.T. 2020. Cognitive anthropomorphism of AI: How humans and computers classify images. <em>Ergonomics in Design<\/em> 28(3): 12-19.<\/p>\n<p>Nelson, T.E. and Kinder, D.R. 1996. Issue frames and group-centrism in American public opinion. <em>The Journal of Politics<\/em> 58(4): 1055-1078.<\/p>\n<p>Nisbet, M.C. 2009. Framing science. A new paradigm in public engagement. Pp. 1-32 in L. Kahlor and P. Stout (Eds.), <em>Understanding science: New agendas in science communication<\/em>. New York, NY: Taylor and Francis.<\/p>\n<p>Nisbet, M.C. 2016. The ethics of framing science. Pp. 51-74 in B. Nerlich, R. Elliott and B. Larson (Eds.), <em>Communicating biological sciences<\/em>. USA: Routledge.<\/p>\n<p>Obozintsev, L. 2018. From Skynet to Siri: An exploration of the nature and effects of media coverage of artificial intelligence. Unpublished Doctoral thesis. Newark, Delaware: University of Delaware.<\/p>\n<p>Ouchchy, L., Coin, A. and Dubljevi\u0107, V. 2020. AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. <em>AI &amp; SOCIETY<\/em> 35(4): 927-936.<\/p>\n<p>Pesapane, F., Tantrige, P., Patella, F., Biondetti, P., Nicosia, L., Ianniello, A., Rossi, U.G., Carrafiello, G. and Ierardi, A.M. 2020. Myths and facts about artificial intelligence: Why machine-and deep-learning will not replace interventional radiologists. <em>Medical Oncology<\/em> 37(5): 1-9.<\/p>\n<p>Peters, H.P. and Dunwoody, S. 2016. Scientific uncertainty in media content: Introduction to this special issue. <em>Public Understanding of Science<\/em> 25(8): 893\u2013908.<\/p>\n<p>Peters, M.A. and Jandri\u0107, P. 2019. 
Artificial intelligence, human evolution, and the speed of learning. Pp. 195-206 in J. Knox, Y. Wang and M. Gallagher (Eds.), <em>Artificial intelligence and inclusive education. Perspectives on rethinking and reforming education.<\/em> Singapore: Springer.<\/p>\n<p>Proudfoot, D. 2011. Anthropomorphism and AI: Turing\u2019s much misunderstood imitation game. <em>Artificial Intelligence<\/em> 175(5-6): 950-957.<\/p>\n<p>Quer, G., Muse, E.D., Nikzad, N., Topol, E.J. and Steinhubl, S.R. 2017. Augmenting diagnostic vision with AI. <em>The Lancet<\/em> 390(10091): 221.<\/p>\n<p>Ramakrishna, K., Verma, I., Goyal, M.I. and Agrawal, M.M. 2020. Artificial intelligence: Future employment projections. <em>Journal of Critical Reviews <\/em>7(5): 1556-1563.<\/p>\n<p>Riek, L.D. 2016. Robotics technology in mental health care. Pp. 185-203 in D.D. Luxton (Ed.), <em>Artificial intelligence in behavioral and mental health care<\/em>. USA: Academic Press.<\/p>\n<p>Salles, A., Evers, K. and Farisco, M. 2020. Anthropomorphism in AI. <em>AJOB Neuroscience<\/em> 11(2): 88-95.<\/p>\n<p>Samuel, J.L. 2019. Company from the uncanny valley: A psychological perspective on social robots, anthropomorphism and the introduction of robots to society. <em>Ethics in Progress<\/em> 10(2): 8-26.<\/p>\n<p>Schmid-Petri, H. and Arlt, D. 2016. Constructing an illusion of scientific uncertainty? Framing climate change in German and British print media. <em>Communications<\/em> 41(3): 265-289.<\/p>\n<p>Scholl, B.J. and Tremoulet, P.D. 2000. Perceptual causality and animacy. <em>Trends in Cognitive Sciences<\/em> 4(8): 299-309.<\/p>\n<p>Schwartz, O. 2018. \u201cThe discourse is unhinged\u201d: How the media gets AI alarmingly wrong. <em>The Guardian<\/em>, 25 July. 
Available: <a href=\"https:\/\/www.theguardian.com\/technology\/2018\/jul\/25\/ai-artificial-intelligence-social-media-bots-wrong\">https:\/\/www.theguardian.com\/technology\/2018\/jul\/25\/ai-artificial-intelligence-social-media-bots-wrong<\/a> (Date of access: 14 April 2021).<\/p>\n<p>Skovsgaard, M., Alb\u00e6k, E., Bro, P. and De Vreese, C. 2013. A reality check: How journalists\u2019 role perceptions impact their implementation of the objectivity norm. <em>Journalism<\/em> 14(1): 22-42.<\/p>\n<p>Sniderman, P.M. and Theriault, S.M. 2004. The structure of political argument and the logic of issue framing. Pp. 133-165 in W.E. Saris and P.M. Sniderman (Eds.), <em>Studies in public opinion: Attitudes, nonattitudes, measurement error, and change<\/em>. USA: Princeton University Press.<\/p>\n<p>Sparrow, R. and Sparrow, L. 2006. In the hands of machines? The future of aged care. <em>Minds and Machines<\/em> 16(2): 141\u2013161.<\/p>\n<p>Stahl, N.A. and King, J.R. 2020. Expanding approaches for research: Understanding and using trustworthiness in qualitative research. <em>Journal of Developmental Education<\/em> 44(1): 26-29.<\/p>\n<p>Sun, S., Zhai, Y., Shen, B. and Chen, Y. 2020. Newspaper coverage of artificial intelligence: A perspective of emerging technologies. <em>Telematics and Informatics<\/em> 53: 1-9.<\/p>\n<p>Turing, A.M. 1950. Computing machinery and intelligence. <em>Mind<\/em> 59(236): 433-460.<\/p>\n<p>Turkle, S. 2007. Simulation vs. authenticity. Pp. 244-247 in J. Brockman (Ed.), <em>What is your dangerous idea?: Today&#8217;s leading thinkers on the unthinkable<\/em>. USA: Simon &amp; Schuster.<\/p>\n<p>Turkle, S. 2010. In good company? On the threshold of robotic companions. Pp. 3-10 in Y. Wilks (Ed.), <em>Close engagements with artificial companions: Key social, psychological, ethical and design issues.<\/em> Amsterdam\/Philadelphia: John Benjamins Publishing Company.<\/p>\n<p>Uribe, R. and Gunter, B. 2007. 
Are sensational news stories more likely to trigger viewers\u2019 emotions than non-sensational news stories? A content analysis of British TV news. <em>European Journal of Communication<\/em> 22(2): 207-228.<\/p>\n<p>Ulnicane, I., Knight, W., Leach, T., Stahl, B.C. and Wanjiku, W.G. 2020. Framing governance for a contested emerging technology: Insights from AI policy. <em>Policy and Society<\/em>: 1-20.<\/p>\n<p>Van de Poel, I. 2020. Embedding values in artificial intelligence (AI) systems. <em>Minds and Machines <\/em>30(3): 385-409.<\/p>\n<p>Vergeer, M. 2020. Artificial intelligence in the Dutch press: An analysis of topics and trends. <em>Communication Studies<\/em> 71(3): 373-392.<\/p>\n<p>Vettehen, H.P. and Kleemans, M. 2018. Proving the obvious? What sensationalism contributes to the time spent on news video. <em>Electronic News<\/em> 12(2): 113-127.<\/p>\n<p>Wang, P. 2019. On defining artificial intelligence. <em>Journal of Artificial General Intelligence<\/em> 10(2): 1-37.<\/p>\n<p>Watson, D. 2019. The rhetoric and reality of anthropomorphism in artificial intelligence. <em>Minds and Machines<\/em> 29(3): 417-440.<\/p>\n<p>White, D.E., Oelke, N.D. and Friesen, S. 2012. Management of a large qualitative data set: Establishing trustworthiness of the data. <em>International Journal of Qualitative Methods<\/em> 11(3): 244-258.<\/p>\n<p>Z\u0142otowski, J., Proudfoot, D., Yogeeswaran, K. and Bartneck, C. 2015. Anthropomorphism: opportunities and challenges in human\u2013robot interaction. 
<em>International Journal of Social Robotics <\/em>7(3): 347-360.<\/p>\n<p><a href=\"#_ftnref1\" name=\"_ftn1\">[1]<\/a> We are not suggesting that frames can be reduced to one of two arguments: \u201cFrames are constructions of the issue: they spell out the essence of the problem, suggest how it should be thought about, and may go so far as to recommend what (if anything) should be done [\u2026]\u201d (Nelson and Kinder, 1996:1057).<\/p>\n<p><a href=\"#_ftnref2\" name=\"_ftn2\">[2]<\/a> Deep learning is a subset of machine learning and learns through an artificial neural network. In simple terms, this network mimics the human brain and enables an AI model to \u2018learn\u2019 from huge amounts of data.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Title: Killer robots, humanoid companions, and super-intelligent machines: The anthropomorphism of AI in South African news articles Author: Dr. Susan Brokensha and dr. Thinus Conradie, University of the Free State. 
Ensovoort, volume 42 (2021), number 6: 3 Abstract How artificial intelligence (AI) is framed in news articles is significant as framing influences society\u2019s perception and &hellip; <a href=\"http:\/\/ensovoort.co.za\/index.php\/2021\/06\/07\/killer-robots-humanoid-companions-and-super-intelligent-machines-the-anthropomorphism-of-ai-in-south-african-news-articles\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Killer robots, humanoid companions, and super-intelligent machines: The anthropomorphism of AI in South African news articles&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[14,3,11],"tags":[45,430,428,429,431],"_links":{"self":[{"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/posts\/1253"}],"collection":[{"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/comments?post=1253"}],"version-history":[{"count":7,"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/posts\/1253\/revisions"}],"predecessor-version":[{"id":1333,"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/posts\/1253\/revisions\/1333"}],"wp:attachment":[{"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/media?parent=1253"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/categories?post=1253"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/ensovoort.co.za\/index.php\/wp-json\/wp\/v2\/tags?post=1253"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}