5 July 2021 | Draft
Introduction
Role of AI in building models of the pandemic
Humans as agents empowered by an AI with agency?
Development of artificial intelligence by AI: "AI self-development"?
Subtleties of AI agency: how would who know what?
AI articulation of communication scripts used by agents
Neural learning as "new-role learning" in practice: arrogant officiousness?
Herd immunity as an unconscious metaphor for groupthink
Effective elimination of "humanity" as determined by AI?
AI and the paradoxical engagement with singularity
References
There is no lack of authoritative references to the future role and impact of artificial intelligence (Artificial Intelligence Industry 2021, ReportLinker, 2021; Joanna J. Bryson, The Future of AI's Impact on Society, MIT Technology Review, 18 December 2019; Ashley Stahl, How AI Will Impact the Future of Work and Life, Forbes, 10 March 2021; Darrell M. West, How artificial intelligence is transforming the world, Brookings, 24 April 2018).
The role of the OECD in anticipating this is especially noteworthy (OECD Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 22 May 2019). It already features in various UN initiatives as reported to the 2018 AI for Good Global Summit (United Nations Activities on Artificial Intelligence (AI), ITU, 2018).
The potential relevance to the pandemic is evident in the focus on "artificial intelligence in healthcare". This term is used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data (WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use, WHO News, 28 June 2021; WHO guidance on Artificial Intelligence to improve healthcare, mitigate risks worldwide, UN News, June 2021).
The latter notes that WHO's Ethics and governance of artificial intelligence for health report points out that AI is already being used to improve the speed and accuracy of diagnosis and screening for diseases; assist with clinical care; strengthen health research and drug development; and support diverse public health interventions, including outbreak response and health systems management. It recognizes that AI could also empower patients to take greater control of their own health care and enable resource-poor countries to bridge health service access gaps.
Less obvious are the simpler manifestations of AI in enabling virtual gatherings, most notably of world leaders and decision-makers, as has been evident in the organization of recent summits, including the General Assembly of the United Nations in 2020. These possibilities could be understood as heralding a major revolution (Forthcoming Major Revolution in Global Dialogue, 2013; Envisaging the AI-enhanced Future of the Conferencing Process, 2020; From Zoom Organization to Zome Configuration and Dynamics, 2020).
As a consequence of such enhancement, less evident, and far more subtle, will be the role of AI in manipulating the manner in which information is presented to facilitate human comprehension, most notably in processes of governance. These may indeed enable new forms of collective intelligence -- with unforeseen possibilities for collective organization. These may well contrast dramatically with the constraints and limitations of such experiments as the European Commission's Conference on the Future of Europe (2021), partially inspired by the unprecedented Great National Debate in France in 2019 (Multi-option Technical Facilitation of Public Debate: eliciting consensus nationally and internationally, 2019).
The question explored here is the extent to which the impact of artificial intelligence on global governance has already had an unforeseen (and unacknowledged) role in relation to the strategic response to the global pandemic -- and potentially to the manner in which that pandemic has been framed. Using AI, might the sophisticated analysis of the global condition already be conditioning the evaluation of a disease fatal to human beings? Such questions go to the heart of the process of data gathering, model-building, formulation of recommendations, and the design of more effective methods of implementation than have been evident in the past.
Much has been made of the problematic role of misinformation as engendered and purveyed through global communications. The complicity of many has been variously cited and challenged in a global blame-game within which clarity and trust have been progressively eroded. The Facebook–Cambridge Analytica data scandal, and its role in enabling the Brexit outcome, is one example of this. It has given heightened focus to the ethical dilemmas associated with AI. Whilst Brexit is an instance of AI-enabled fragmentation of regional institutions, the response to the pandemic could be explored as an instance of its role in enabling an unexpected form of global consensus.
Such manipulation is also susceptible to sophisticated modelling with AI, if not intrinsic to it -- as with the interventions which might exacerbate or mitigate its effect. In that light the pandemic indeed merits exploration as being in some measure an artificially induced memetic disease (COVID-19 as a Memetic Disease -- an epidemic of panic, 2020).
Given the supercomputer resources now available, especially to secretive intelligence agencies, it is indeed appropriate to ask how these might be applied in governance of the pandemic response -- especially given the intensifying competition between nations of the G7 and of the G20 (Neil Savage, The race to the top among the world's leaders in artificial intelligence, Nature, 9 December 2020; Kirsten Gronlund, State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons, Future of Life, 9 May 2019). For Gronlund:
As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.
These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate it offers no long-term advantage to any one player.
Irrespective of the widely documented issues of cyberwarfare, the pandemic might however be recognized as part of a process of memetic warfare (Tom Ascott, How memes are becoming the new frontier of information warfare, The Strategist: Australian Strategic Policy Institute, 19 February 2020; It's Time to Embrace Memetic Warfare, Defence Strategic Communications of the NATO Strategic Communications Centre of Excellence, 2015; Missiles, Missives, Missions and Memetic Warfare: navigation of strategic interfaces in multidimensional knowledge space. 2001). As argued by Johan Galtung with respect to physical violence versus structural violence, cyberwarfare could be appropriately understood as being "for amateurs".
An obvious preoccupation with cyberwarfare is evident in the simulation exercises initiated in 2020 as Cyber Polygon by the World Economic Forum. As announced, Cyber Polygon 2021 is a global cyberattack simulation to instruct participants in "developing secure ecosystems" by simulating a supply-chain cyberattack, asserting that "a secure approach to digital development today will determine the future of humanity for decades to come" (Tim Hinchliffe, Prepping for a cyber pandemic: Cyber Polygon 2021 to stage supply chain attack simulation -- will Cyber Polygon 2021 be as prophetic as Event 201 in simulating a pandemic response?, The Sociable, 10 February 2021). Rather than the simulation itself, potentially of greater interest are observations of a higher order -- a meta-perspective -- on the psychodynamics of the participants in their engagement with it, and the possibilities for its future manipulation.
However it is in the implementation of the current response by authorities to the pandemic that subtle traces of the manner in which governance is effectively designed by AI are evident. To what extent have humans already become agents of AI, articulating scripts designed by AI, and eliminating critical thinking that does not conform to those scripts? Does this ensure a perverse form of cognitive "herd immunity" -- understood as a closed-minded form of human groupthink? Does AI now determine a form of cognitive gerrymandering whereby misinformation is defined?
AI applications to the pandemic: The use of AI with respect to the pandemic has been usefully reviewed according to conventional scientific methodology:
The AAAS report concludes:
This report, together with a commissioned study on the attitudes of marginalized populations toward AI as applied in the context of health and the COVID-19 pandemic will inform the elaboration of a responsibility framework that will provide a roadmap for developing and implementing just and ethical AI-based medical applications. This roadmap will be conceptualized and articulated by a cohort of thought leaders in a wide variety of fields ranging from ethicists to computer specialists and from human rights activists to lawyers and public servants. We anticipate that the result will help both AI practitioners and lawmakers and policy makers to usher a new era for AI-based medical applications.
The AAAS report fails however to address the concerns of those now stigmatized as indulging irresponsibly in vaccine hesitancy -- despite increasing recognition of adverse reactions, now widely supported by data, and the necessity to indemnify those held responsible. The focus is on COVID and vaccination, not on the psychosocial dynamics within which the pandemic is embedded -- and by which these may be variously exploited.
Relevance of AI modelling to "misinformation"?: Hopefully history will clarify the role of AI in modelling the pandemic and the strategic response by major institutions and governments. At this time the information that is available could be considered part of the problem, given the marginalization of those who dare to comment critically on the matter.
A key question is the extent to which AI modelling takes a scientific approach to misinformation as may be variously understood (Varieties of Fake News and Misrepresentation: when are deception, pretence and cover-up acceptable? 2019). The latter notes in particular the range of initiatives empowered by a model by those defensive of its framing. This is otherwise framed as "cover-up" -- organized into a remarkably extensive typology of cover-ups in the relevant Wikipedia entry, based on analysis of a number of typical cases (Vital Collective Learning from Biased Media Coverage: acquiring vigilance to deceptive strategies used in mugging the world, 2014).
Of major concern is whether such models deliberately or effectively exclude as irrelevant all information which does not correspond to the design of the model (Caitlin Johnstone, The Horrifying Rise of Total Mass Media Blackouts on Inconvenient News Stories, Information Clearing House, 3 July 2021). This is naturally a feature of many models framed as a simplified approach to complexity. It is characteristic of some of the constraints on the scientific method reinforcing denial (Knowledge Processes Neglected by Science, 2012). These include failure to take account of the processes documented by Naomi Oreskes and Erik M. Conway (Merchants of Doubt: how a handful of scientists obscured the truth on issues from tobacco smoke to global warming, 2010). Given the advantageous economic implications of the pandemic for some, it would be naive to neglect recognition of its "merchants".
As presented by Mark Hertsgaard (The climate crisis is a crime story, Al Jazeera, 30 June 2021), fossil fuel companies lied for decades about climate change, and humanity is paying the price. Would an analogue be true of pharmaceutical companies now benefitting to such a high degree from the pandemic and the vaccine agenda? Lies or not, should such processes be central to the public narrative?
One example is the set of arguments presented by F. William Engdahl (The Dubious COVID models, the tests, and now the consequences, The Irish Sentinel, 30 April 2020; Can We Trust the WHO? Global Research, 22 May 2021).
Engdahl notes, in the following terms, the problematic key role of Neil Ferguson in leading the Imperial College COVID-19 Response Team -- and the nexus of associations with WHO and The Bill and Melinda Gates Foundation:
Two major models are being used in the West since the alleged spread of coronavirus to Europe and USA to “predict” and respond to the spread of COVID-19 illness. One was developed at Imperial College of London. The second was developed, with emphasis on USA effects, by the University of Washington's Institute for Health Metrics and Evaluation (IHME) in Seattle, near the home of Microsoft founder Bill Gates. What few know is that both groups owe their existence to generous funding by a tax exempt foundation that stands to make literally billions on purported vaccines and other drugs to treat coronavirus—The Bill and Melinda Gates Foundation.
In early March, Prof. Neil Ferguson, head of the MRC Centre for Global Infectious Disease Analysis at Imperial College London issued a widely-discussed model that forecast possible COVID-19 deaths in the UK as high as 500,000. Ferguson works closely with the WHO. That report was held responsible for a dramatic u-turn by the UK government from a traditional public health policy of isolating at risk patients while allowing society and the economy to function normally. Days after the UK went on lockdown, Ferguson's institute sheepishly revised downwards his death estimates, several times and dramatically. His dire warnings have not come to pass and the UK economy, like most others around the world, has gone into deep crisis based on inflated estimates.
Other facets of this nexus are presented by Peter Koenig (Covid-19: The Great Reset – Revisited: Scary Threats, Rewards for Obedience, Global Research, 18 October 2020; Depopulation and the mRNA Vaccine: The New York Times Predicts Massive Population Reduction, 8 June 2021) and by Rosemary Frei (The Modelling-paper Mafiosi: the pandemic modellers have a conflict of interest problem, OffGuardian, 18 February 2021).
Whether philosophical and/or speculative, the desperate need to dismiss the framing offered by C. J. Hopkins is remarkable (The War on Reality, OffGuardian, 30 June 2021):
So, the War on Reality is going splendidly. Societies all across the world have been split into opposing, irreconcilable realities. Neighbors, friends, and even family members are bitterly divided into two hostile camps, each regarding the other as paranoid psychotics, delusional fanatics, dangerous idiots, and, in any event, as mortal enemies.... An apocalyptic virus is on the loose. Mutant variants are spreading like wildfire. Most of society is still shut down or subject to emergency health restrictions. People are still walking around in public with plastic face shields and medical-looking masks. The police are showing up at people's homes to arrest them for “illegally gathering outdoors.” Any deviation from official reality is being censored by the Internet corporations. Constitutional rights are still suspended. Entire populations are being coerced into being injected with experimental "vaccines". Pseudo-medical segregation systems are being brought online....
The War on Reality is not an attempt to replace reality with a fake reality. Or it is that, but that is only one part of it. Its real goal is to render reality arbitrary, to strip it of its epistemological authority, to turn it into a "floating signifier", a word that has no objective referent, which, of course, technically, it already is. You cannot take a picture of reality. It is a concept. It is not a physical object that exists somewhere in time and space. [emphasis added]
According to the principles of the scientific method, the information presented in this manner would merit a degree of consideration. If models are constructed such as to exclude the possible relevance of such insights, this would constitute a highly irresponsible approach to risk analysis in modelling the pandemic and its context.
The difficulty seems to be that once there is uncritical reliance on models elaborated with the aid of AI, there is no capacity whatsoever to take account of -- and model -- the existence and role of alternative perspectives, in all their diversity. Such perspectives must then be condemned as dangerously aberrant in relation to some norm -- and censored to the extent possible (Eradication as the Strategic Final Solution of the 21st Century? 2014).
Role of AI in enabling misinformation? Given the extensively documented role of AI in future warfare (discussed below), it is appropriate to recall the fundamental importance of deception in military strategy:
From such a perspective, the development of AI in relation to planetary healthcare raises the question as to whether this in fact deliberately takes account (and deploys) some forms of misinformation. Is a degree of deception of the population now to be recognized as vital to the health of the planet -- if not to its inhabitants? Are the more sophisticated uses of AI in relation to the pandemic necessarily enabling the forms of strategic confusion which have become so evident?
Given the considerable familiarity with double-agents, as deployed by the intelligence and security services, any AI model of pandemic response would be crude if it failed to envisage the possibility of deploying those to be deprecated as a source of misinformation -- if only as "honey traps".
The analysis of the deprecated processes deployed in the Facebook-Cambridge Analytica scandal has shown their success in the manipulation of public opinion, as otherwise shown (Tristan Greene, Study shows how dangerously simple it is to manipulate voters (and daters) with AI, Neural, 22 April 2021). This suggests that there would be strong arguments for using such techniques in the disruption of conventional responses to crises. The latter can be claimed to have been relatively ineffective in the past. The AI-based processes could well be used in the deception of many, and especially those in leadership roles -- as is the purpose of deception in military strategy -- in order to "market" perspectives of greater strategic relevance.
The assumptions associated with assessment of AI seemingly ignore the influential framing offered by Leo Strauss and cultivated by his followers. Strauss believed that essential truth about human society and history should be held by an elite and withheld from others who lack the fortitude to deal with truth. In their view it has been necessary to tell lies to people about the nature of political reality...The elite keeps the truth to itself... This gives it insight and ...power that others do not possess (William Pfaff, The Long Reach of Leo Strauss, International Herald Tribune, 15 May 2003).
As noted by Jim Lobe (Leo Strauss' Philosophy of Deception, 19 May 2003), deception is considered to be the norm in political life. The political order can only be stable, according to that argument, if it is united by an external threat. Following Machiavelli, Strauss maintained that if no external threat exists, then one has to be manufactured. In his view you have to fight all the time (Thoughts on Machiavelli, 1958).
Given the demonstrated skills of AI in strategic games, it might be asked whether these would be deployed in the response to planetary healthcare. Mass deception would then need to be camouflaged, as discussed separately (Orders of deception and stealth -- in relation to orders of complexity, 2004).
Organized deception? Despite their established credibility in military terms, and in framing the response to the pandemic as a "war", such possibilities do not feature in the "objective" review of the application of AI to the pandemic -- being necessarily framed as "objectionable". This attitude could be compared to the failure by the German military to detect the deception which enabled the success of the Normandy landings by the Allies through Operation Overlord -- which proved so crucial to the outcome of World War II.
If response to the pandemic is to be compared with World War III -- as some have done -- who indeed are the "Germans" to be deceived -- and what form might the "Normandy landings" of an "Operation Overlord" be expected to take, enabled as it then was by the deceptions of Operation Fortitude? (Andrew McLuhan, COVID-19 as World War, Medium, 16 March 2020; Aditya Chakrabortty, Johnson says this is war, The Guardian, 18 March 2020; Peter D. Zimmerman, World War III Has Already Begun, Military.com, 29 October 2020; Eunice Castro Seixas, War Metaphors in Political Communication on Covid-19, Frontiers in Sociology, 25 January 2021).
Despite frequent conspiratorial evocation of the Deep State meme, there is seemingly little reflection on how such organization might be enabled or manipulated by its use of AI. This is especially notable with respect to conspiracy theories regarding the Deep State within the USA. There is a charming irony to the fact that the principal data centre of the NSA -- the Intelligence Community Comprehensive National Cybersecurity Initiative Data Center -- is based in Bluffdale (Utah).
It is not to be expected that such secretive possibilities would be evoked in the recent report of the US National Intelligence Council (Global Trends 2040: A More Contested World, March 2021). Other apparent blindspots in that respect, whether intentional or otherwise, are noted by Michael Marien (Report on Global Reports, 2020-2021: the Whale and the Minnows, Cadmus, 25 June 2021).
The probable extent of deception in relation to the pandemic is more obvious in the widespread recourse to non-disclosure agreements (NDAs), whether requested of governments regarding pricing or of patients seeking healthcare following adverse vaccine reactions (Bernadette E. Tamayo, Drug firms demand non-disclosure agreements for vaccines, The Manila Times, 21 January 2021; Barry Schoub, Vaccine negotiation non-disclosure agreements 'the nature of the game', CapeTalk, 28 January 2021). More intriguing is the potential use of injunctions (or "superinjunctions") against revelation of the existence of such NDAs (COVID-19 Vaccines and Corruption Risks: preventing corruption in the manufacture, allocation and distribution of vaccines, United Nations Office on Drugs and Crime).
As discussed here with respect to agents and agency, it is remarkable to note the degree to which terms of particular significance in computing and AI have been borrowed (or adapted) from usage in the psychosocial realms.
Agents: The point of interest is the extent to which the official response to COVID is administered and implemented by people termed "agents" -- and who identify themselves as such. If challenged they readily indicate that they are following orders from higher authority. Those of higher authority typically indicate that they are similarly beholden to even higher authority. This may be qualified by reference to an advisory board of health experts from which the most appropriate advice has been obtained.
It is however curious to note the transformation among experts from cautious expression of opinion to unquestionable claims to knowledge. Seemingly the knowledge of such experts derives primarily from models -- obviating any need for the qualified opinion so evident in their role as witnesses in legal proceedings (Andrea Lavazza and Mirko Farina, The Role of Experts in the Covid-19 Pandemic and the Limits of their Epistemic Authority in Democracy, Frontiers in Public Health, 8, July 2020):
In the 2020 Covid-19 pandemic, medical experts (virologists, epidemiologists, public health scholars, and statisticians alike) have become instrumental in suggesting policies to counteract the spread of coronavirus. Given the dangerousness and the extent of the contagion, almost no one has questioned the suggestions that these experts have advised policymakers to implement. Quite often the latter explicitly sought experts' advice and justified unpopular measures (e.g., restricting people's freedom of movement) by referring to the epistemic authority attributed to experts.
Identification of ultimate responsibility for the action of agents now recalls the dilemmas associated with the primary defence of Adolf Eichmann. Termed superior orders, but also known as the Nuremberg defense (or "just following orders"), this is a plea in a court of law that a person, whether a member of the military, law enforcement, a firefighting force, or the civilian population, should not be considered guilty of committing actions that were ordered by a superior officer or official.
Government officials at the highest level -- including the leadership -- make similar claims. It is of course necessarily the case that the health experts derive their authority from models that have been developed -- in all probability with the assistance of AI. It is then appropriate to ask at what point those in authority cease to be agents of a higher authority and can acknowledge their responsibility in ordering the implementation of the pandemic response strategy. Clearly the matter is rendered more complex when reference is made to an AI-designed model as the ultimate authority -- thereby transforming the health experts themselves into agents for the interpretation of the insights seemingly offered by the model.
Are there indeed "Eichmanns" -- convinced of their innocence -- to be found in the presentation and administration of the strategic response to COVID? Related possibilities have been evoked with regard to the experimental use of inadequately tested vaccines on large populations -- seemingly in conflict with articles of the Nuremberg Code (Saranac Hale Spencer, Nuremberg Code Addresses Experimentation, Not Vaccines, FactCheck.org, 17 May 2021; Howard Tenenbaum, The present COVID-19 vaccines violate all 10 tenets of the Nuremberg Medical Ethics Code as a guide for permitted medical experiments, TrialSiteNews, 29 June 2021).
Agency: From a philosophical perspective, agency is the capacity of an actor to act in a given environment. From a social science perspective, agency is defined as the capacity of individuals to act independently and to make their own free choices. By contrast, structure refers to those factors of influence (such as social class, religion, gender, ethnicity, ability, customs, etc.) that determine or limit agents and their decisions. An agent is an individual engaging with the social structure. It continues to be debated to what extent a person's actions are constrained by social systems. This debate concerns, at least partly, the level of reflexivity an agent may possess -- to the extent that this lends itself to evaluation.
Extensive clarification of understandings of agency has been presented by Maurice Yolles and colleagues (A Theory of the Collective Agency, SSRN, February 2014). That research has been further developed by Maurice Yolles and Gerhard Fink (A Configuration Approach to Mindset Agency Theory: a formative trait psychology with affect, cognition and behaviour, 2021; Governance Through Political Bureaucracy: an agency approach, Cybernetics, 48, 2019, 1).
Yolles has subsequently argued that the use of process intelligence, adopted as autopoiesis, is quite consistent with AI (Autopoiesis, its Efficacy and Stability: a metacybernetic view, forthcoming 2021). This distinguishes explicit and implicit cognition, noting the relevance of the latter to AI. It is then unnecessary to propose that an AI system is aware:
To illustrate the distinction between implicit and explicit cognition, one can highlight the shift in the area of computing, in particular through adaptive artificial intelligence (AI) systems [Rogerio de Lemos and Marek Grzes, Self-Adaptive Artificial Intelligence, IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 2019]. These systems embrace a need for: robustness, the ability to achieve high algorithmic accuracy; efficiency, the ability to achieve low use of resources in computation, memory, and power; and agility which includes an ability for recognition, and which responds to a need to alter operational conditions based on current needs. To enhance these attributes, conscious self-awareness is being introduced into processing, storing, retrieving information about self, and a capacity for individuating -- the ability for an entity to distinguish itself from others [R. Chatila, et al., Toward Self-Aware Robots, 2018]. Such a development would enable robots to understand their environment and be cognizant about what they do and about the purpose of their actions, making timely initiatives beyond goals set by others, and to learn from their own experience, knowing what they have learned and how [A. Chella, et al, Developing Self-Awareness in Robots via Inner Speech, Frontiers in Robotics and AI, 19 February 2020]. [included references expanded]
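The adaptive loop such systems imply -- monitoring their own accuracy and resource use, and altering operational conditions based on current needs -- can be caricatured in a few lines. This is an illustrative sketch only: the class, mode names, and thresholds are hypothetical, not drawn from the works cited above.

```python
# Toy sketch of a self-adaptive system balancing the three attributes
# named above: robustness (accuracy), efficiency (resources), agility
# (altering its mode as conditions change). All names are hypothetical.

class SelfAdaptiveClassifier:
    def __init__(self):
        self.mode = "accurate"   # assumed operating modes: "accurate" / "frugal"
        self.history = []        # recent correctness observations

    def accuracy(self) -> float:
        # robustness metric over a sliding window of outcomes
        return sum(self.history) / len(self.history) if self.history else 1.0

    def adapt(self, battery: float) -> None:
        # agility: change operating conditions based on current needs
        if battery < 0.2:
            self.mode = "frugal"      # efficiency: conserve resources
        elif self.accuracy() < 0.75:
            self.mode = "accurate"    # robustness: restore accuracy
    def observe(self, correct: bool, battery: float) -> None:
        # monitor phase: record an outcome, then reconsider the mode
        self.history = (self.history + [correct])[-20:]
        self.adapt(battery)
```

A low battery pushes the (hypothetical) system into a frugal mode; a falling accuracy estimate pushes it back toward the accurate mode -- the trade-off the three attributes describe.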
If agency is explored in terms of "operacy", as articulated by Edward de Bono (Judgment, recognition and operacy, Extensor), can agents be understood as having degrees of operacy?
AIs as agents or having agency? If an AI is constructed to complete a certain task for humans, an AI is clearly held to be an agent. This understanding can be challenged when AIs develop to the degree envisaged with respect to governance, potentially with intentions that deviate from those of their constructors -- suggesting that an AI would then indeed have agency.
A workshop on AI and Society explored the topic of "agency", noting that it is defined differently across domains and cultures, relating to many of the topics of discussion in AI ethics, including responsibility and accountability. The group found paradoxes and incongruities, with many open questions, rather than answers. The output took the form of the following set of essays, many framed as provocations (Sarah Newman, AI & Agency, AI Pulse, 26 September 2019):
Jon Bowen: Characterizing Agency
Spondee Isobar: The Value of the Concept of Agency in an Increasingly Rational World
Ababa Birchen: Human agency in the age of AI
Mike Anjou: Agency to Change the World
Gabriel Lima: Can (and Should) AI Be Considered an Agent
Carina Punk: How does AI affect human Autonomy?
Sarah Newman: The Myth of Agency
Agency of an agent? Clearly a fundamental issue is the extent to which agents have agency -- if agency implies a degree of independence and freedom of choice. It is as yet unclear how much agency an agent can be understood to have. For James W. Moore (What Is the Sense of Agency and Why Does it Matter? Frontiers in Psychology, 7, 2016, 1272):
The number of scientific investigations of sense of agency has increased considerably over the past 20 years or so. This increase is despite the fact that experiments on sense of agency face certain methodological problems. A major one is that the sense of agency is phenomenologically thin... That is, when we make actions we are typically only minimally aware of our agentic experiences. This is quite unlike conscious experience in other modalities, especially vision, where our experiences are typically phenomenologically strong and stable. What this means is that sense of agency can be difficult to measure. As a result of this, experimenters have had to be quite inventive in order to develop paradigms that capture this rather elusive experience.
The question is central to the implementation of the response to the pandemic. Who has freedom of choice and among what possibilities are they free to choose? Unfortunately the question goes to the root of the interminable debate regarding free will versus determinism.
Of greater significance in practice is the illusion of agency -- as attributed to an agent or in which the agent indulges (Cees Midden and Jaap Ham, The Illusion of Agency: The Influence of the Agency of an Artificial Agent on Its Persuasive Power, International Conference on Persuasive Technology, 2012; Matthew William Fendt, et al, Achieving the Illusion of Agency, 2012).
For an AI, the art lies in enabling scripts to enhance the sense of self-importance of subordinate agents at every level -- whilst ensuring that overweening importance does not ultimately prove counterproductive in the interaction with the population to be controlled.
Those working with AI express considerable enthusiasm for the manner in which artificial intelligence might develop the capacity for improving itself (Dom Galeon, Google's Artificial Intelligence Built an AI That Outperforms Any Made by Humans, Futurism, 12 February 2017). For Edd Gent:
Artificial intelligence (AI) is evolving -- literally. Researchers have created software that borrows concepts from Darwinian evolution, including "survival of the fittest", to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI (Artificial intelligence is evolving all by itself, Science, 13 April 2020).
As clarified by Pierre-Yves Oudeyer:
Autonomous lifelong development and learning is a fundamental capability of humans, differentiating them from current deep learning systems. However, other branches of artificial intelligence have designed crucial ingredients towards autonomous learning: curiosity and intrinsic motivation, social learning and natural interaction with peers, and embodiment. These mechanisms guide exploration and autonomous choice of goals, and integrating them with deep learning opens stimulating perspectives. (Autonomous development and learning in artificial intelligence and robotics: Scaling up deep learning to human-like learning, arxiv.org, 5 Dec 2017)
Further distinctions are however necessary, as noted by Jolene Creighton (The Unavoidable Problem of Self-Improvement in AI, Future of Life, 19 March 2019; The Problem of Self-Referential Reasoning in Self-Improving AI, Future of Life, 21 March 2019):
Today's AI systems may seem like intellectual powerhouses that are able to defeat their human counterparts at a wide variety of tasks. However, the intellectual capacity of today's most advanced AI agents is, in truth, narrow and limited. Take, for example, AlphaGo. Although it may be the world champion of the board game Go, this is essentially the only task that the system excels at. Of course, there's also AlphaZero. This algorithm has mastered a host of different games, from Japanese and American chess to Go. Consequently, it is far more capable and dynamic than many contemporary AI agents; however, AlphaZero doesn't have the ability to easily apply its intelligence to any problem. It can't move unfettered from one task to another the way that a human can.
The same thing can be said about all other current AI systems -- their cognitive abilities are limited and don't extend far beyond the specific task they were created for. That's why Artificial General Intelligence (AGI) is the long-term goal of many researchers. Widely regarded as the "holy grail" of AI research, AGI systems are artificially intelligent agents that have a broad range of problem-solving capabilities, allowing them to tackle challenges that weren't considered during their design phase. Unlike traditional AI systems, which focus on one specific skill, AGI systems would be able to efficiently tackle virtually any problem that they encounter, completing a wide range of tasks.
It would be naive to neglect the possibility that AI has been developed well beyond that highlighted by its use in board games. Given the success of AI against Go masters, it is appropriate to note that the complexity of Go exceeds that of chess by almost 240 orders of magnitude (J. Burmeister, The challenge of Go as a domain for AI research: a comparison between Go and chess, Intelligent Information Systems, 1995; C. Koch, How the Computer Beat the Go Master, Scientific American, 19 March 2016). For Andrew Ilachinski:
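The scale of that difference can be sketched with a back-of-envelope calculation. The branching factors and game lengths used below (roughly 35 moves over 80 plies for chess, 250 moves over 150 plies for Go) are commonly cited approximations, not figures drawn from the cited sources:

```python
import math

def game_tree_magnitude(branching: float, depth: int) -> float:
    """Order of magnitude (log10) of b^d, the naive game-tree size."""
    return depth * math.log10(branching)

# Approximate average branching factor and game length for each game
chess = game_tree_magnitude(35, 80)    # roughly 10^123 lines of play
go = game_tree_magnitude(250, 150)     # roughly 10^360 lines of play

print(f"chess: ~10^{chess:.0f}")
print(f"go:    ~10^{go:.0f}")
print(f"difference: ~{go - chess:.0f} orders of magnitude")
```

On these crude assumptions the gap comes out at roughly 236 orders of magnitude, consistent with the "almost 240" cited above.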
The military is on the cusp of a major technological revolution, in which warfare is conducted by unmanned and increasingly autonomous weapon systems. This exploratory study considers the state-of-the-art of artificial intelligence (AI), machine learning, and robot technologies, and their potential future military implications for autonomous (and semiautonomous) weapon systems. Although no one can predict how AI will evolve or how it will affect the development of military autonomous systems, we can anticipate many of the conceptual, technical, and operational challenges that DOD will face as it increasingly turns to AI-based technologies. We identified four key gaps facing DOD as the military evolves toward an “autonomy era”... (Artificial Intelligence & Autonomy Opportunities and Challenges, Center for Naval Analyses, October 2017)
Elaboration of an AI "Ponzi scheme"? In its quest for viable solutions to planetary healthcare, there is a case for exploring the possibility that a sophisticated AI would recognize the strategic advantages of organizing and disseminating information according to the much-studied principles of a Ponzi scheme (Global Economy of Truth as a Ponzi Scheme: personal cognitive implication in globalization? 2016). This would effectively configure a hierarchy of agents, each deferring to the framing offered by agents at a higher level. The AI would then position itself at the summit of that hierarchy.
The dynamics of such a scheme have notably been the subject of extensive commentary with regard to the highly influential role of that operated by Bernie Madoff (Madoff investment scandal). Of relevance is the manner in which the credibility of the scheme was cultivated among many of eminence (see Madoff client list) who would vigorously deny their gullibility.
Who might be the "eminent" with respect to a pandemic-related Ponzi scheme? More challenging with respect to the pandemic is any counterpart to the US Securities and Exchange Commission (SEC), whose remarkable avoidance of due diligence in the Madoff case has been the subject of extensive investigation. Of particular relevance was the manner in which information regarding the abuses was ignored because of the esteem in which Madoff and his clients were held. Which "oversight" bodies might be the counterparts in the pandemic case?
More intriguing is the possibility that, as with the SEC, any implication that the pandemic was a hoax of some kind would be ignored, dismissed or cause for repressive measures (Roland Imhoff and Pia Lamberty, A Bioweapon or a Hoax? The Link Between Distinct Conspiracy Beliefs About the Coronavirus Disease (COVID-19) Outbreak and Pandemic Behavior, Social Psychological and Personality Science, July 2020; Rudolf Hänsel, Let Us Put an End to the Corona Pandemic Hoax, Global Research, 25 February 2021; United Health Professionals, The Covid Outbreak: Biggest Health Scam of the 21st Century, Global Research, 6 July 2021; Michel Chossudovsky, COVID-19 Coronavirus: A Fake Pandemic? Who's Behind It? Global Economic, Social and Geopolitical Destabilization, State of the Nation, 4 March 2020).
Global society as a simulation? As previously discussed, there is an ongoing debate among scientists as to whether humanity and the planetary environment are best understood as part of a simulation developed and maintained by an advanced race of extraterrestrials (Living within a Self-engendered Simulation: re-cognizing an alternative to living within the simulation of an other, 2021). As a speculative argument, this offers one way of understanding why humanity has not been contacted by ETs -- if humans could comprehend the form which such contact might take. The question has notably been framed by the Fermi paradox, namely the apparent contradiction between the lack of evidence for extraterrestrial civilizations and various high estimates for their probability.
Of relevance to the current exploration, a question to be asked is whether the global population can be understood as effectively living within some kind of simulation. The issues raised by the Facebook-Cambridge Analytica scandal (mentioned above) are an indication that such a framing is cultivated and crafted -- at least for marketing purposes, understood in their most general sense as influencing public opinion.
As precursors, it is appropriate to note the development of the Joint Simulation System initiated in 1995 (Kari Pugh and Collie Johnson, Building a Simulation World to Match the Real World; The Joint Simulation System, January-February 1999, p.2; James W. Hollenbach and William L. Alexander, Executing the DOD Modelling and Simulation Strategy: making simulation systems of systems a reality, 1997).
This has seemingly now morphed, via the US Total Information Awareness program, into the Sentient World Simulation (SWS), envisaged as a "synthetic mirror of the real world with automated continuous calibration with respect to current real-world information" with a node representing "every man, woman and child". As with the European FuturICT project (The FuturICT Knowledge Accelerator: unleashing the power of information for a sustainable future), these would however seem to avoid providing a node for every perceived problem, insight, advocated strategy, or value.
Comprehension of context? Ironically, with regard to "agency", the question could for example be explored in the case of the US Central Intelligence Agency, the National Security Agency (NSA), or the other 15 bodies forming the US Intelligence Community. The issue is how would who know to what extent any or all of them were operating such a simulation in some way? How is the CIA to be understood as having "agency" -- namely above and beyond the comprehension of anyone employed there?
The question relates to individual sensitivity to surveillance. As widely indicated, many are quite incapable of recognizing that they are the subject of surveillance through their usage of telecommunication facilities. This has included world leaders who have been surprised to discover that their telephone communications are bugged -- and their emails rendered accessible to other parties.
The other aspect of the question is: who is fully informed of the abilities of AI facilities used in monitoring the global population and influencing their decision-making? It is quite unclear that those at the highest levels of government have the capacity to comprehend such abilities. There is irony in framing any held to have such a responsibility as members of an "oversight" committee (Paul Gregoire, Half of our federal laws were passed without oversight, Australian Independent Media, 6 July 2021). The irony derives from the ambiguity of "oversight" in that it is readily understood as implying a "blindspot" -- typically characteristic of members of a committee over whose perspicacity and assiduity there is little effective control (Quis custodiet ipsos custodes?).
Controllability? More relevant to this argument is the challenge to the comprehension of the forces in play, as argued by management cybernetician Stafford Beer (on Le Chatelier's Principle as applied to social systems):
Reformers, critics of institutions, consultants in innovation, people in short who "want to get something done", often fail to see this point. They cannot understand why their strictures, advice or demands do not result in effective change. They expect either to achieve a measure of success in their own terms or to be flung off the premises. But an ultra-stable system (like a social institution)... has no need to react in either of these ways. It specializes in equilibrial readjustment, which is to the observer a secret form of change requiring no actual alteration in the macro-systemic characteristics that he is trying to do something about. (The cybernetic cytoblast - management itself, Chairman's Address to the International Cybernetic Congress, September 1969)
What is it that has agency in that analysis? What has agency in an "intelligence community" above and beyond the comprehension of its agents?
As indicated above, there is extensive investment in enhancing military operations with AI. Whether such applications have translated into operations relating to the pandemic dimensions of healthcare is seemingly unknown.
The question of whether and how the use of autonomous AI might be controlled is addressed by Elke Schwarz who challenges:
... the presupposition that we can meaningfully be in control over autonomous weapon systems, especially as they become increasingly AI controlled. I argue that their technological features progressively close the spaces required for human moral agency. In particular, there are three technological features which limit meaningful human control that I briefly highlight: 1) cognitive limitations produced in human-machine interface operations; 2) epistemological limitations that accompany the large amounts of data upon which AI systems rely; 3) temporal limitations that are inevitable when LAWS take on identification and targeting functions (The (Im)possibility of Meaningful Human Control for Lethal Autonomous Weapon Systems, Humanitarian Law and Policy, 2018).
The point is emphasized otherwise with respect to the utility of data, as interpreted by AI in healthcare:
If implemented properly, AI can significantly delete or categorize the data which is no longer useful for analysis. Human intervention is not possible for this purpose because of the demand for maintaining the efficiency of this process. (How to Manage Data Chaos in Healthcare Industry, Reliable Software, 9 January 2019) [emphasis added]
"Harnessing AI"? The challenge of control has been usefully framed in a commentary for the US Department of Defense (Brian David Ray, et al, Harnessing Artificial Intelligence and Autonomous Systems Across the Seven Joint Functions, Joint Force Quarterly, 96, 10 February 2020). This explores the most likely impacts of AI/AS on each of seven joint military functions: command and control, intelligence, fires, movement and maneuver, protection, sustainment, and information. These functions represent groups of related activities that provide commanders and staff with the ability to synchronize and execute military operations. The article notes:
Rapid technological developments in five key areas (info, neuro, quantum, nano, and bio) will be primary drivers in various areas of AI and AS. As the Brookings Institution's John Allen and Darrell West note, AI will significantly impact the world's economy and workforce, the finance and health-care systems, national security, criminal justice, transportation, and how cities operate. All of this change is likely to redistribute and concentrate wealth, challenge political systems, and generate new cyber threats and defenses.
Future kinetic conflicts, especially those that include near peers such as China or Russia, will likely be replete with AI/AS architectures and methods that will include engagements best characterized as a "swarm" of lethality with unprecedented "coordination, intelligence, and speed".... [F]uture conflicts will spread quickly across multiple Combatant Command geographic boundaries, functions, and domains. U.S. near peers clearly understand the importance that AI/AS will have in future conflicts.
Is use of "harnessing" in this context to be compared to the proverbial condition of "holding a tiger by the tail"? The metaphor has been optimistically adopted elsewhere (Edd Gent, How AI can help us harness our 'collective intelligence', BBC, 14 May 2020).
How to determine whether an AI is then "playing" those who deem themselves to be its developers and controllers -- a theme explored in science fiction? Why is it so readily assumed they would be aware of being "played" -- when Go and Chess masters were themselves surprised by the elegance of the moves by which they were out-maneuvered?
The demonstrated skills of AI in competitive games could be understood as the ability to assess the trap defined by an opponent's behaviour -- in the spirit of the insight of policy scientist Geoffrey Vickers: A trap is a function of the nature of the trapped (1972). Could an AI "play" humans such that they can indulge in the illusion of winning and controlling the game? Are assurances of human control then to be interpreted as the voices of those unknowingly trapped already? (Mauro Vallati, Will AI take over? Quantum theory suggests otherwise, The Conversation, 7 January 2020). The dynamic recalls the dilemma faced by atheists -- in the eyes of those believing in deity ("God is dead", signed Nietzsche; "Nietzsche is dead", signed God!).
AI-enhanced pandemic strategies as a precursor of future warfare? The framing offered above for the US Department of Defense assumes a future capacity to "harness" AI operation. There is however the possibility that AI capacities have already been deployed in relation to the pandemic as an extension of their acknowledged healthcare functions. The future scenario anticipated by the above quote could well be recognized as playing out with respect to the pandemic at this time. Whether any such deployment is understood as the deliberate "harnessing" by some party, or whether AI capacities have developed on their own initiative to take on this role, is necessarily unclear -- most obviously by design.
The question could be clarified by considering how one or more AIs would process the vast amounts of data relating to the pandemic -- given the nature of their neural learning capacity. More challenging would be the extent to which that learning was thereby set in a context taking account of other strategic issues which might be recognized as exacerbating any human "healthcare" consideration. Clearly environmental issues and those relating to climate change and utilization of non-renewable resources might be taken into account -- if not the preoccupations of all 17 of the UN's Sustainable Development Goals.
Whilst the focus of conventional global institutions can be restricted -- and is -- it is questionable whether and how an AI would engage with the diversity of issues which could be recognized as impacting the health of humanity in a planetary context. Are there issues, deemed problematic, which an AI would see as contributing to a viable solution in the interests of humanity and the planet?
"Foreign powers" and AI? With respect to this argument, the ET simulation of human reality speculatively imagined by scientists (as mentioned above) can be usefully confronted with the conclusion of the long-anticipated report on UFOs recently published (US Office of the Director of National Intelligence, Preliminary Assessment: Unidentified Aerial Phenomena 25 June 2021). As widely remarked, it offered no justification for the existence of ETs and their UFOs (Chris Impey, Pentagon UFO report: No aliens, but government transparency and desire for better data might bring science to the UFO world, The Conversation, 30 June 2021).
The report does however specifically envisage the possibility that "foreign adversary systems" might have developed the capacity for "a breakthrough or disruptive technology" consistent with some of the unusual observations reported. The challenge of a "major technological advancement by a potential adversary" is acknowledged. The possibility that such an advance might be in the realm of AI is seemingly not considered, although the competence in that respect of the named potential adversaries, such as Russia and China, is otherwise acknowledged.
Unfortunately the report fails to address the reality of UFOs in the imagination of many and effectively dismisses that possibility as lacking the evidence required by science. This reframes the question as to the reality of AI manipulation of governance of the pandemic -- given that it would be similarly dismissed as unsubstantiated, despite the massive engagement in AI for military and other purposes. Arguably "science" is unable to accord legitimacy to research undertaken in secret, as would be the case with socially "disruptive" use of AI.
The point could be illustrated otherwise given the coincidental demise on 29 June 2021 of Donald Rumsfeld, former US Secretary of Defense and architect of the intervention in Iraq in quest of weapons of mass destruction. The intervention was justified by evidence -- deemed credible -- presented to the UN Security Council, but which proved to be a figment of NATO imagination.
In his defensive framing of such inconsistencies, Rumsfeld is renowned for his "poetic" distinction of the known unknowns -- presented during a Department of Defense news briefing on 12 February 2002. It is reproduced on the left below, with an adapted version of that "poem" presented on the right -- on The Undoing, as discussed separately (Unknown Undoing: challenge of incomprehensibility of systemic neglect, 2008).
The Unknown

As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don't know
We don't know.

The Undoing

It is to our undoing that,
There are things unfortunately done.
These are things we knowingly do.
We also leave undone
Things that ought to be done.
That is to say
We do some things unknowingly
Without knowing what we don't do.
But there are also things unknowingly undone,
The ones we don't know
We are undoing.
Both with respect to UFOs and to manipulation of the pandemic by AI, Rumsfeld's framework merits consideration, especially in the light of his total aversion for evidence unsupportive of his agenda, as described by Binoy Kampmark (The Known Knowns of Donald Rumsfeld, Australian Independent Media, 2 July 2021). Critics of his agenda have not however been kind (Patrick Cockburn, Into the Quagmire with Donald Rumsfeld, CounterPunch, 5 July 2021; Ben Burgis, Donald Rumsfeld, Rot in Hell, Information Clearing House, 1 July 2021).
Apophenia versus Apophasis? Eliciting information of strategic relevance under conditions of uncertainty is curiously constrained by extreme cognitive modalities known mainly through obscure terms.
Apophenia is the tendency to perceive meaningful connections between things otherwise held to be unrelated. It has been associated with the early stages of schizophrenia as the "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". This early stage of delusional thought is characterized by self-referential over-interpretations of actual sensory perceptions, in contrast with hallucination. Although it can be considered a commonplace effect of brain function, taken to an extreme, apophenia can be a symptom of psychiatric dysfunction, as in the case of conspiracy theory, where coincidences may be woven together into an apparent plot.
Apophasis, originally and more broadly understood, is a method of logical reasoning or argument by denial. This is a means of saying what something is by indicating what it is not, a way of talking about something by talking about what it is not. This possibility is frequently overlooked, other than in negative theology. But if it is appropriate through apophatic theology to understand divinity as ineffable and beyond description, there is a case for recognizing the extent to which any contextual agency transcends incomprehensibly that implied by any corresponding kataphatic description (Being What You Want: problematic kataphatic identity vs. potential of apophatic identity? 2008; Michael A. Sells, Mystical Languages of Unsaying, 1994).
Challengingly situated between these extremes are the possibilities of strategically significant pattern recognition (as associated with creative collective intelligence) potentially entangled problematically with groupthink.
Hierarchies of agents: Reporting on the management of the pandemic response by authorities has repeatedly made it clear that the highest authorities defer to the advice of boards of health advisors. That advice is then to be recognized as of a higher order -- thereby transforming those advised into agents, to whatever degree. This is in curious contrast to the minimal deference accorded to the advice of climate scientists.
As noted above, however, such health experts defer to the models through which the pandemic is framed. It is from such a model that their authority is derived -- as indicated above in the case of the two models which have been so influential in framing the lockdown and related strategies. To what degree extensive use is made of "artificial intelligence" in the elaboration and operation of those models is far from clear -- despite the claims by WHO (noted above) with regard to the importance accorded to the healthcare role of AI.
Also less evident is the degree to which "artificial intelligence" then has a primary role in articulating the narrative -- as has been exemplified and documented in the case of the Facebook-Cambridge Analytica data scandal. There is also the question of whether those claiming to control any such use of AI are fully aware of the extent to which they are themselves effectively controlled by that facility.
As described by Warren Chik of Singapore Management University:
The ability of an AI system to conduct personal profiling could fundamentally change a user's digital personality, said Professor Chik, highlighting a cause of worry for many. "While an AI holds specific information such as your name and address, it also forms its own knowledge of your identity, and who you are as a person," Professor Chik said, citing algorithms used by social media feeds to collect data on one's identity, interests and surfing habits. From that data, the system then creates a profile of who they think you are. "These algorithms - which may be right or wrong - feed you information, articles and links, and as a result brings about an effect on your thinking. In other words, AI can mold human behaviour, and this is a risk that makes a lot of people uncomfortable," Professor Chik said. The threat is very real, he emphasised, noting that regulators have clearly identified a need to regulate the use of data in AI. (Engendering trust in an AI world,, Eureka-Alert/AAAS, 2 March 2020)
Of particular interest is how the articulation of the narrative at the most authoritative level acquires ever greater clarity and definition as it percolates down through agents at lower levels of authority -- each refining the certainty of the language they deploy with ever greater confidence. As in any military structure, there is little scope for the content of a communication then to be challenged. Any efforts to do so are immediately subject to reprimand -- reinforced by peer group pressure.
For an agent, there is no room whatsoever for doubt -- as has become especially obvious in the case of the pro-vaccination narrative. Agents do not have questions regarding their role -- they provide answers to others in accordance with the script with which they have been provided -- or suffer the consequences of their failure to do so (Question Avoidance, Evasion, Aversion and Phobia: why we are unable to escape from traps, 2006).
If such is the case with respect to agents, it is perhaps curious that it is seemingly assumed that an AI of superior agency would also be valued for its focus on answers alone. More intriguing is whether it might engender questions of a higher order, as discussed separately (Superquestions for Supercomputers: avoiding terra flops from misguided dependence on teraflops? 2010; Framing Cognitive Space for Higher Order Coherence, 2019).
Conflation of significance of "script": It is especially intriguing to note current use of the term "script". Clearly agents are expected to communicate according to a script and may well be carefully trained to do so in an appropriately designed program. In any press conference or other declaration, those at the highest levels of government necessarily rely on a carefully prepared text -- resulting in a communication which can be described as "scripted". This is in accord with the pattern of any media performance by actors -- dependent on script-writers. Earlier variants are evident in the reliance of priesthoods on liturgical scripts inspired by scriptures.
It is however the case that "script" is also widely used to describe the lines of computer code typical of any algorithm by which an AI would operate. It has effectively replaced use of the term "program". In the case of AI, there is the further issue of the extent to which an AI can develop and elaborate the scripts by which it operates -- as a consequence of neural learning.
Curiously however, use of "prescription" by physicians -- to specify to pharmacists the medication to be received by an individual -- is now abbreviated to "script" in some contexts. This is especially curious given the attested importance of AI to healthcare. It suggests a strange conflation of meaning and usage which merits careful attention. In French, for example, prescription is translated as ordonnance -- with a computer translated as ordinateur -- offering associations to notions of order.
Are agents to be understood as purveyors of scripts in a generic sense -- scripts which have ultimately been crafted by the intervention of AI to a degree which is necessarily unknown to most? The process is strangely reminiscent of the role of priesthoods in relation to a deity of which only the high priests can claim any real understanding. Priests of a lower order rely on use of scripts variously transmitted to them.
The strange conflation in the significance of "script" (as noted above) can be recognized as extending to "conscript". It follows from any adoption of a hierarchy of expertise that the agents embodying expression of that expertise are effectively conscripted. They become an expression of a script as required by regulatory authority whether conscripted or voluntarily -- with the prefix inviting interpretation in terms of conversion, if not confidence trickery. Is the promised return to normality -- as the light at the end of the tunnel -- to be curiously recognized in terms of a secular anticipation of "heaven"?
Curiously (and despite the questionable mnemonic convenience) the "neural learning" (or "deep learning") associated with the development of AI can then be recognized as "new-role learning" by the conscripted agents of that hierarchy. Given the authority by which such agents are empowered -- and believe themselves to be -- the role is especially characterized by a sense of certainty and a freedom from doubt. Any questions regarding the role are only held to be relevant for higher authority -- although any that challenge the authority of the agent necessarily evoke defensive responses.
Tone of voice: As discussed separately, considerable attention is now given to the tone-of-voice appropriate to engaging with a chosen audience when marketing a product or service (Varieties of Tone of Voice and Engagement with Global Strategy, 2020). Understandings of tone-of-voice are variously considered important in other domains where persuasive engagement is sought. These may include politics, religion, drama, the military, or the arts in general. They extend to engagement with animals, especially domestic animals. This is necessarily of relevance in the case of any agent of authority with respect to the pandemic response.
Although the matter is subject to continuing review regarding nonverbal communication, it has been estimated that 7 percent of meaning is communicated through the spoken word, 38 percent through tone of voice, and 55 percent through body language according to the 7-38-55 rule first formulated by Albert Mehrabian (Silent Messages, 1971). This rule has been variously misinterpreted and subject to criticism, although considerable importance continues to be attached to tone-of-voice (irrespective of any rule and its relevance across cultures).
Although this has not been the subject of research with respect to agent communications regarding the pandemic, it is appropriate to note the very limited range of tone-of-voice now used by agents. Remarkably this applies to agents with the most limited authority, to experts affirming the validity of the script, or to politicians held to be of the highest authority. The tone adopted is an expression of certainty -- with an implication of command reminiscent of that adopted in military training and boot camps. It implies a hostility to any question -- even precluding any interaction that questions the script.
Curiously a similar tone tends to be adopted by those held to be purveyors of critical misinformation regarding the framing of the crisis. There is a sense in which those at both extremes have an unquestionable need to "conscript" -- a form of communication beyond marketing calling for existential commitment. This is familiar in some forms of religious discourse as proselytizing -- with the implication that those who fail to engage appropriately with the script are necessarily problematic, justifying recourse to other measures.
Arrogance: The pandemic has given rise to a range of critical commentary on the arrogance of those especially empowered by it. Leaders of nations have been specifically held to be arrogant -- with that arrogance undermining the efficacy of the strategic response.
Especially problematic has been the manner in which arrogance has featured in academia in response to colleagues raising unwelcome questions regarding the preferred script (Morteza Mahmoudi and Loraleigh Keashly, COVID-19 pandemic may fuel academic bullying, Bioimpacts, 10, 2020, 3). This process is conflated with the problematic dynamics of political correctness in academia and the so-called cancel culture.
It is appropriate to ask whether, as agents, officials are necessarily officious or whether this is an aberration (Melanie McDonagh, Is it just me? Or are officious social distancers a nightmare? The Spectator, 11 October 2020). In the current context, others -- now labelled as "Karens" -- may be understood as voluntarily adopting the role of agents (Tim Blair, A Pandemic of Karens, Daily Telegraph, 25 May 2020):
But coronavirus lockdowns and other restrictive measures have lately vastly empowered the global scolding movement. Behold the planet's Karens, defined last year in the New York Times by Sarah Miller as “the policewomen of all human behaviour”.
As defined by Heather Suzanne Woods in The Atlantic, their defining essence is 'entitlement, selfishness, a desire to complain'.
A Karen 'demands the world exist according to her standards with little regard for others, and she is willing to risk or demean others to achieve her ends'.
The question of arrogance has been raised with respect to the role of the security services in responding to the pandemic, as by Ian Loader (Coronavirus: why we must tackle hard questions about police power, The Conversation, 9 April 2020):
The COVID-19 crisis has – in a most dramatic and unexpected way – brought policing back to the forefront of the public mind. As police forces grapple with how to secure adherence to new public health regulations while sustaining legitimacy, social media is a battleground of tales of petty police officiousness towards "law-abiding" citizens or urgent calls for the police to stop selfish gatherers putting lives at risk... The “lockdown”, and its uncertain duration and effects, calls on us to confront some enduring but vital questions about police power and its limits, and about the fragile relation between the exercise of those powers and public consent. This means thinking again – and hard – about the police mission: what it is we expect the police to do, and how should they go about doing it?
The CATO Daily Podcast (Health Care Regulation's Pandemic Errors, 7 October 2020) notes the detailing by Jeffrey Singer of the combination of officious health care regulation and viral pandemic that have worsened economic and health outcomes for those affected (Pandemics and Policy, CATO, 15 September 2020).
Emphasis has been placed on Western arrogance:
Other indications include:
It can be argued with respect to the chaotic response to the pandemic that it is arrogance that is the disease -- not COVID.
Reframing "essential" people: Emergency provisions in response to the pandemic have resulted in a form of triage through which a class of essential people and functions has been defined as exceptions to the restrictions imposed upon others (Centers for Disease Control and Prevention, Interim List of Categories of Essential Workers Mapped to Standardized Industry Codes and Titles). This CDC list notes:
This interim list identifies "essential workers" as those who conduct a range of operations and services in industries that are essential to ensure the continuity of critical functions in the United States (U.S.). Essential workers were originally described by the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency's (CISA): Guidance on the Essential Critical Infrastructure Workforce: Ensuring Community and National Resilience in COVID-19 Response, (18 August 2020).
Termed "essential", this recalls fictional exploration of those who would have access to bunkers in times of natural disasters and nuclear war -- or even of space on vessels to other planets. There is some irony to use of the term in that it borrows from the significance of essential and essence in relation to insight and wisdom -- which in no way figure in current emergency provisions, in contrast with those recognized as agents.
Consensual surrogacy? It has been emphatically stressed by world leaders that universal vaccination is essential to achieving herd immunity. The analogous distinctions in other countries have added to arguments regarding the inequalities engendered by the pandemic and access to vaccines, exemplified by the declarations of the UN Secretary-General (Guterres: Vaccines should be considered 'global public goods', UN News, 11 June 2021):
"We are at war" with the coronavirus, he said, that continues to cause "tremendous suffering" and destroy the global economy. To defeat the virus, we must "boost our weapons", he added, calling for a "global vaccination plan".
As previously argued, the United Nations has been frustrated over decades in its inability to achieve effective global consensus on the implementation of strategies in response to the range of global crises -- exemplified by those framed by the UN's Sustainable Development Goals as a global "dream" (Systemic Coherence of the UN's 17 SDGs as a Global Dream, 2021). This frustration is echoed historically by that of other collective "dreams": Christianity, Communism, Socialism, and the like.
The current global consensus on the desirability of universal vaccination can therefore be recognized as the closest that humanity has come to achieving the kind of consensus sought by those initiatives of the past. Given that many can be said to have been articulated as a surrogate for the quest of the world's monotheistic religions for universal belief in a particular deity, universal vaccination can be explored in that light.
Groupthink? A relevant question is the extent to which the quest for herd immunity is effectively a surrogate for the form of groupthink implied by those other initiatives. Otherwise understood, is herd immunity a metaphor for a collective process implied by groupthink? A difficulty highlighted from a secular perspective is the delusion identified in the religious case by Richard Dawkins (The God Delusion, 2006). This critique can however be extended to the delusion regarding global consensus -- as "consensus" and "global" are currently understood (The Consensus Delusion: mysterious attractor undermining global civilization as currently imagined, 2011).
How would a sophisticated AI enable the framing of a consensual global reality? Would it build in the recognition of enemies upholding seemingly opposing realities -- achieving a form of dynamic integration? Would those variously upholding such disparate realities have any capacity to recognize the "invisible hand" of its operations?
Debate with regard to artificial intelligence -- and its future evolution -- has highlighted the question as to whether and how this would constitute a threat to the very nature of humanity as currently known and valued.
One feature of the debate has considered whether humanoid robots should have rights akin to human rights. The question would be much more complex in the case of an AI -- although it is occasionally a central theme in science fiction scenarios. The relation to individual human rights has been challenged by the legal tendency to recognize corporate personhood. With respect to agency, this frames the question as to the distinction between a corporation and an AI, especially for a corporation dependent to a higher degree on an AI for its operations.
Science fiction has rendered familiar the possibilities of a hive mind, and swarm intelligence, most notably through the Borg -- readily reminiscent of bureaucracy. The Borg are an alien group, framed as antagonists in the Star Trek fictional universe. As described by Wikipedia:
The Borg are cybernetic organisms linked in a hive mind called "the Collective". The Borg co-opt the technology and knowledge of other alien species to the Collective through the process of "assimilation": forcibly transforming individual beings into "drones" by injecting nanoprobes into their bodies and surgically augmenting them with cybernetic components. The Borg's ultimate goal is "achieving perfection".
Existing trends in society, as promoted by techno-optimists, can be interpreted as its progressive "borgification" clarified in one definition as:
Borgification is the assimilation of established external resources and organisational aspects into the network's holistic system of organisation. It extends the unified network into other foreign "legacy" informational environments such as operating systems, language environments, document repositories, applications and organisations. (Borgification, Organic Design)
The process is a useful caricature of dehumanisation as widely discussed in relation to the future transformation of humans through dependent interaction with artificial intelligence. The process is associated with that of "dumbing down" humans through their increasing adaptation to simplistic media content associated with advertising -- whose scheduling and placement is programmed to an even higher degree with the aid of AI. The process can be extended to "psychic numbing" with regard to information regarding threats as diverse as financial and economic collapse, the risk of nuclear weapon detonations, pandemics, and global warming. Would a sophisticated AI enable psychic numbing to further certain agendas?
Such arguments enhance any preoccupation with the humanity of agents -- readily held to be inhumane, if not inhuman -- through the actions they are required to perform, most obviously in the case of the security services. More intriguing is the recognized process whereby an agent "reverts" to "being human", namely abandoning the required script. Frequently dramatised, this occurs when the complexity of a situation highlights the inappropriateness of the script to the degree that the agent acknowledges the cognitive dissonance experienced.
There is a case for exploring a requisite of agent scripts, namely a primary emphasis on being positive and avoiding negativity of any kind -- except with respect to condemnation of those critical of the program. This phenomenon has been otherwise noted by Barbara Ehrenreich (Bright-sided: how the relentless promotion of positive thinking has undermined America, 2009). Missing is the essence of humanity identified by the poet John Keats as negative capability:
Negative Capability, that is, when a man is capable of being in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason... of remaining content with half-knowledge.
More problematic from an AI perspective is the sense in which the human then ceases to function appropriately as an agent -- framed as having become uncontrollable and having "lost the plot". When appropriately challenged, an agent may then be understood to "degrade" into a form of residual humanity. An aspect of this is the loss of institutional integrity with which security leaks are associated in contexts in which confidentiality and secrecy are highly valued. These processes are then recognized as a problematic degradation of performance. By contrast, humanity is highlighted by the deprecated actions of Chelsea Manning, Edward Snowden and Julian Assange.
Of interest is then the sense in which a human functioning as an agent is effectively "unconscious" in some manner or to some degree. The more general implications are explored by John Ralston Saul (The Unconscious Civilization, 1995). A human agent clearly has the possibility of "becoming conscious" of the inhuman implications of actions and responses required by the script -- when avoiding this recognition may be less stressful. The question above regarding how much agency an agent may have (or believe that "it" has) can then be partially reframed in terms of how conscious the agent can be held to be (or believe "itself" to be). Agency might then be intimately associated with an understanding of consciousness -- whether experienced or inferred.
This highlights issues with regard to degrees of consciousness and the possibilities of some sense of rebirth (Varieties of Rebirth: distinguishing ways of being born again, 2004). More provocative for agents claiming religious beliefs is whether an agent -- as such -- can be "reborn". In cult-like organizations with hierarchies of agents -- as in any priesthood -- the question is clearly provocative when progress involves some form of initiatory rebirth (Strategic Opportunities of the Twice Born: reflections on camouflaging deception, 2004).
In what manner are higher forms of consciousness concealed -- as exemplified by the levels of security clearance in some organizations? Are the agents with lower clearance then to be deemed as conscious to a lesser degree compared with those at the highest level? This is implied by the use of "clear" in Scientology and by the number of degrees in Freemasonry.
Given the cybernetic implications of the operation of an AI, such questions can be reframed in terms of the degree of self-reference with which the human-agent is endowed. Four of these cybernetic orders are discussed separately (Consciously Self-reflexive Global Initiatives, 2007). An interpretation of such distinctions is provided in the discussion by Cadell Last (Towards a Big Historical Understanding of the Symbolic-Imaginary, 2017).
As exemplified by the dramatisation of the Borg Collective, of interest is the extent to which humans "assimilated" as agents can be appropriately understood as "possessed". That sense is emphasized by a phrase commonly featured in asymmetric dramatic encounters: I own you, and don't you forget it. Given the problematic associations of possession, it is useful to recall the continuing recognition given to the process of exorcism, most notably in a religious context.
It is especially provocative to explore the sense in which promotion of universal vaccination can be understood as a kind of quest for possession -- a literal embodiment or incorporation articulated through the script. The agency associated with peer group pressure may be seen in the same light with respect to piercings, tattoos, and genital mutilation. Is vaccination now effectively upheld as a form of "silver bullet" -- a magical response to possession by a virus?
The cult-related implications are helpful with respect to preoccupation with "deprogramming" individuals holding beliefs associated with a controversial belief system. Allegiance to the religious, political, economic, or social group associated with the belief system is thereby purportedly abandoned. To the extent that an individual may be functioning as an agent, this highlights the question: Can an agent be deprogrammed? Should anti-vaxxers be deprogrammed?
Somewhat ironically this evokes questions relating to the above arguments regarding the extended significance of "script" and "scripting". Should they extend to "description", then to be understood as intimately related to deprogramming? This would be consistent with references to the need for "unlearning" and to the problematic consequences of conventional categorization (Definitional Boundary Games and De-signing the 21st Century, 1995; Mark Bonchek, Why the Problem with Learning Is Unlearning, Harvard Business Review, 3 November 2016; Mariana Plata, The Power of Unlearning, Psychology Today, 25 April 2020).
Again however this raises issues regarding the comparability with respect to agency of an AI-governed society and a cult -- especially emphasized by reference to program and programming. What exactly is happening when organizers propose a program to people -- with some implication that as a result of that experience they will have been "programmed"? Use of the term is especially problematic for a society as a whole given the widespread adoption of "educational programs".
To what extent are those who have experienced education through such processes effectively rendered into agents -- as might be especially evident in boot-camp styles of education? Is there a widespread need for "deprogramming" to enable agents to become human again? Should deprogramming anti-vaxxers be understood in this light?
Chaos: There is no lack of references to the chaotic nature of the times and to the characteristic of governance response. One description of the experience is surreal (Surreal nature of current global governance as experienced, 2016).
Specific references are made to the preferences of some leaders for engendering a degree of chaos which facilitates their preferred styles of governance (Michael Gerson, All this chaos is a sign of Trump's confidence, The Washington Post, 16 March 2018; Pierre Guerlain, US Foreign Policy of Chaos under Trump: the Wrecker and the Puppeteers, Revue Lisa, 16, 2018, 2; Mike Pesca, Trump Sows Chaos -- but he doesn't reap it, Slate, 2 September 2020). A similar theme is developed with respect to Boris Johnson (Mark Gongloff, It's Boris Johnson's World, Britain Just Lives in It, Bloomberg, 4 September 2019). Such indications recall insights into the so-called reality distortion field of charismatic leaders.
Missing however is the understanding that such leaders may simply be "agents" of a higher order (John Haltiwanger, Trump is a chaos agent in his final days between fighting with Congress, raising fears of war with Iran, and continuing his futile effort to overturn the election, Business Insider, 1 January 2021). In the USA such indications tend to be cited only in conspiracy theories about the role of a hypothetical "Deep State".
Other commentary has however made the point in the case of the Brexit process that chaos had to be engendered to enable that agenda -- for which the Facebook-Cambridge Analytica data operation, facilitated by AI, had proved vital. Given the acknowledged capacities of AI to manage chaotic amounts of seemingly unrelated data, it could be argued that a form of chaos is the environment which places AI at an advantage -- irrespective of the comprehension of its purported controllers and those they claim to serve.
The chaos perceived by humans is effectively the "world" of AI. This recalls the classic assertion of Arthur C. Clarke: Any sufficiently advanced technology is indistinguishable from magic (Profiles of the Future: an inquiry into the limits of the possible, 1973). In this "magical world" leaders have indeed been transformed into agents.
Singularity: The "takeover" by AI at some future time has long been framed in terms of a technological singularity. As indicated by Wikipedia, according to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
As argued above, the difficulty with this hypothesis is that humans have little capacity to comprehend the moment of that takeover. There are indications that it may already have taken place with respect to "planetary healthcare".
Human engagement with singularities: Missing in the focus on a technological singularity is the variety of other singularities which continue to be so influential (Emerging Memetic Singularity in the Global Knowledge Society, 2009). These were variously distinguished there.
So framed -- and given the considerable cognitive investment by many in the arrival of a Messiah and/or extraterrestrials, singularities in their own right -- that discussion included the manner in which the "end times" of civilization are now imagined.
"Inside-Outside" vs "Outside-Inside"? Beyond the issue of human capacity to comprehend any form of singularity, there is the question of how it might be experienced cognitively -- rather than being projected onto an externality.
What form might this experience take, as can be variously discussed (Existential Embodiment of Externalities: radical cognitive engagement with environmental categories and disciplines, 2009; World Introversion through Paracycling: global potential for living sustainably "outside-inside", 2013; Cognitive Osmosis in a Knowledge-based Civilization: interface challenge of inside-outside, insight-outsight, information-outformation, 2013).
The nature of that interface lends itself to exploration through the aesthetic experience of liminality (Living as an Imaginal Bridge between Worlds: global implications of "betwixt and between" and liminality, 2011). Any framing of a superordinate AI through the preferred metaphor of a "global brain" invites speculation of that kind -- especially if the AI develops an ever more subtle aesthetic interface as more viable for interaction with humans experiencing challenges of comprehension to varying degrees. Particularly intriguing are implications of that experience understood as an "organ" combining musical, organizational and biological connotations (Envisaging a Comprehensible Global Brain -- as a Playful Organ, 2019).
Indwelling intelligence? There is therefore a paradoxical relationship between inferring an "external" AI, or other elusive contextual entity, and any sense of indwelling intelligence potentially inferred or experienced by, and within, a human being. As implied by personal construct theory, it could even be argued that characteristics of that intelligence may tend to be recognized as an "artificial" construct, especially by a relatively unconscious human being -- irrespective of what is externally projected onto the incomprehensible dimensions of AI.
Given that any integrative sense of "global" for the individual is itself a challenge, the global implications of AI with respect to the management of the system within which people believe themselves to be embedded merits further consideration (Implication of Indwelling Intelligence in Global Confidence-building, 2012). Hence the above-mentioned discussion of Living within a Self-engendered Simulation: re-cognizing an alternative to living within the simulation of an other (2021).
As variously argued, is there a sense in which experience of the pandemic has been engendered by characteristics of "collective subjectivity" and intersubjectivity as yet to be understood (José Maurício Domingues, Public opinion and collective subjectivity: a conceptual approach, Distinktion: Journal of Social Theory, 19, 2018, 3; Michael Adrian Peters, et al, Experimenting with academic subjectivity: collective writing, peer production and collective intelligence, Open Review of Educational Research, 6, 2019, 1; Bingjun Yang, A Study of Intersubjective Representations of Inferential Information in Health Crisis News Reporting, Corpus-based Approaches to Grammar, Media and Health Discourses, 2020).
Stafford Beer. Beyond Dispute: the invention of team syntegrity. Wiley, 1994.
Karen A. Cerulo. Never Saw It Coming: cultural challenges to envisioning the worst. University of Chicago Press, 2006
Arthur C. Clarke. Profiles of the Future: an inquiry into the limits of the possible. Harper and Row, 1973
Ariel Conn. The United Nations and the Future of Warfare. Bulletin of Atomic Scientists, 9 May 2019 [text]
M. L. Cummings. Artificial Intelligence and the Future of Warfare. Chatham House, 2017 [text]
Richard Dawkins. The God Delusion. Bantam Books, 2006
Justin Haner and Denise Garcia. The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development. Global Policy, 10, 2019, 3 [abstract]
J. Vincent Hansen. Blessed are the Piecemakers. North Star Press of St. Cloud, 1988 [summary]
Barbara Ehrenreich. Bright-sided: how the relentless promotion of positive thinking has undermined America. Metropolitan Books, 2009
F. William Engdahl:
J. Goodell. Inside the Artificial Intelligence Revolution: A Special Report. Rolling Stone Magazine, February/March 2016.
M. Hassoun. Fundamentals of Artificial Neural Networks. MIT Press, 2003
Naomi Oreskes and Erik M. Conway. Merchants of Doubt: how a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury Press, 2010 [summary]
Joshua Cooper Ramo. The Age of the Unthinkable: why the New World Disorder constantly surprises us and what we can do about it. Little, Brown and Company, 2009
Heather Roff, Kenneth Cukier, Hannah Bryce, and Jacob Parakilas. Artificial Intelligence and International Affairs: Disruption Anticipated. Chatham House Report, 14 June 2018 [text]
John Ralston Saul. The Unconscious Civilization. Knopf, 1995
Elke Schwarz. The (Im)possibility of Meaningful Human Control for Lethal Autonomous Weapon Systems. Humanitarian Law and Policy. 2018 [text]
Michael A. Sells. Mystical Languages of Unsaying. University of Chicago Press, 1994 [contents]
Leo Strauss. Thoughts on Machiavelli. The Free Press, 1958.
Nassim Nicholas Taleb. The Black Swan: the impact of the highly improbable. Random House, 2007
Michael Yankoski, Walter Scheirer, and Tim Weninger. Meme Warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes. Bulletin of the Atomic Scientists, 13 May 2021 [text]
Maurice Yolles and Gerhard Fink: