Remarkable recent increase in access to AI
Research on "Artificial Emotional Intelligence" (AEI)
Recognition of expertise in emotional intelligence
Performance of AEI in comparison with human capacities
AEI authenticity as informed by human etiquette and hospitality training
Human learning from AEI of higher orders of human--human interaction?
Recognition of human values by AI
Articulation of compelling emotional appeals by AEI?
From AEI to "Artificial Spiritual Intelligence"?
Enhancement of human dialogue processes by AEI
Cognitive biases associated with AEI?
Application of AEI to memetic and cognitive warfare
Appropriate citation and attribution of copyright to answers from AI?
Eliciting coherence on AEI through visualization from an AI exchange
Fear of being outsmarted ensuring human commitment to mediocrity and dumbing down?
There is currently no lack of references to the major future impacts of artificial intelligence on global civilization at every level. Some of these are anticipated with concern, especially with warnings of how AI is likely to be misused to undermine valued social processes and employment, whether deliberately or inadvertently (AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google, BBC, 3 May 2023; Will Douglas Heaven, Geoffrey Hinton tells us why he’s now scared of the tech he helped build, MIT Technology Review, 2 May 2023).
Arguably there is now a state of official panic at the foreseeable impacts of AI (White House: Big Tech bosses told to protect public from AI risks, BBC, 5 May 2023). Many now recognize the possibility that they may soon be outsmarted by AI. Key figures in artificial intelligence want training of powerful AI systems to be suspended amid fears of a threat to humanity (Pause Giant AI Experiments: An Open Letter, Future of Life Institute, 22 March 2023).
Seemingly at issue is whether society systematically cultivates intellectual mediocrity in order to avoid engaging with higher orders of intelligence and modes of discourse. Ironically this is a scenario consistent with one explanation of the Fermi paradox. Given the challenges of global governance, such a choice could be usefully explored in the light of the arguments of Jared Diamond (Collapse: How Societies Choose to Fail or Succeed, 2005). The arguments of Thomas Homer-Dixon regarding the final constraints on the Roman Empire from energy resources are also of relevance -- by substituting collective intelligence for energy (The Upside of Down: catastrophe, creativity, and the renewal of civilization, 2006).
The possibilities have long invited speculation in science fiction as characteristic of dystopia, rather than the utopia on which techno-optimists are uncritically focused (George Orwell, Nineteen Eighty-Four, 1949). It can also be speculated that AI is already in use to shape the mainstream discourse through which global strategy is increasingly curated (Governance of Pandemic Response by Artificial Intelligence, 2021). That argument explored the extent to which human agents might have been unconsciously controlled through the AI-elaboration of communication scripts.
The main emphasis with respect to AI is of course in relation to conventional understandings of intelligence, dramatically highlighted by the capacity to defeat humans in games that have epitomised that intelligence, namely chess and go. The most recent developments focus on the use of large language models through which AI learning is enabled. These have now reached a remarkable stage through widespread access to applications like ChatGPT (developed by OpenAI) to which an extremely wide variety of questions may be addressed for a variety of purposes. Some are already deprecated to the extent of engendering restrictive measures (Rapid growth of ‘news’ sites using AI tools like ChatGPT is driving the spread of misinformation, Euronews, 2 May 2023).
In contrast with conventional understandings of intelligence, attention has focused to a less evident degree on emotional intelligence (Daniel Goleman, Emotional Intelligence, 1995). Whereas it is common to rate individuals in terms of their IQ, it is relatively rare to encounter references to individuals with high emotional intelligence (EQ). Indeed there is little understanding of what this might mean in practice, although the capacity of some individuals to skillfully manipulate their relations with others is acknowledged -- whether to mutual benefit or in support of some other agenda. The ability of some to "sell" an idea or product -- through unusual persuasive skills -- is readily recognized. These skills are seemingly unrelated to AI.
The question of how and when AI (as conventionally understood) might develop skills of artificial emotional intelligence (AEI) is now actively researched. AEI is considered a "subset" of AI. Concerns about the development of AI tend to refer to AEI only indirectly by allusion -- if at all. The concern in what follows is to highlight some of the issues which are seemingly neglected with respect to AEI. In contrast to the challenge to humans of AI -- and the point at which AI might significantly exceed human capacities -- the challenge to human emotional capacities can be understood otherwise.
The issues relating to AEI are fundamental to the currently envisaged development of information warfare as psychological warfare -- into memetic warfare and cognitive warfare, notably in support of noopolitics (John Arquilla and David Ronfeldt, The Emergence of Noopolitik: toward an American information strategy, RAND Corporation, 1999). This is especially the case with the diminishing significance of facts in relation to assertive declarations by authorities through the media -- namely the development of a "facit reality" enabled by higher orders of persuasion.
A distinctive approach to the artificiality of emotional intelligence noted here is the manner in which many training courses and programs for humans are focused on some form of behaviour modification held to be of value in engaging with others -- however "false" the result may be sensed to be. These range from hospitality programs through to finishing schools and the formalities of etiquette. They may be framed as personal development, even in relation to a spiritual agenda -- possibly to facilitate the proselytizing of a missionary agenda. The approach may be recognized and deprecated as brainwashing -- as in cults and in the experimentation on prisoners in Guantanamo Bay. The techniques of persuasion are most notably evident in the training of sales personnel. They may well be cultivated as a feature of "grooming" in its most deprecated sense.
The question here is to what extent AEI development will be informed by the traditions and practices of such programs. From another perspective it may also be asked to what extent these pre-AI programs constitute the cultivation of artificial emotional intelligence in their own right. Will the skilled emotionally sensitive responses of an AEI become recognized as superior to those of a human being -- or indistinguishable from those of a human being -- or inherently "false"? The possibility of such distinction in the case of intelligence is framed by the Turing test, raising the question of how the authenticity of interaction of an AEI will be rated in relation to that of a human being (Manh-Tung Ho, What is a Turing test for emotional AI? AI and Society, 2022; Arthur C. Schwaninger, The Philosophising Machine: a specification of the Turing test, Philosophia, 50, 2022). The question is readily evident with respect to the authenticity of responses of personnel in the hospitality industry. The issue will be particularly evident in the case of those on the Autism spectrum -- commonly characterised by the Asperger syndrome -- in which emotional sensitivity is constrained or absent.
A more provocative development of AEI applications, informed by the sacred scriptures of religions, will be appreciation of their discourse in contrast to that of religious leaders and priests. With the capacity to draw on far more extensive religious resources, and the ability to adjust tone-of-voice to persuasive ends, will the discourse of AEI applications become preferable for many to that of traditional religious leaders? This possibility is all the greater in that individuals will be able to engage with greater confidence with AEI applications in posing questions with personal existential implications -- in notable contrast to the capacities of the confessional, for example.
The treatment of AEI as a "subset" of the conventional intelligence developed by AI, usefully raises the question of whether -- in addition to "spiritual intelligence" -- other forms of intelligence are effectively neglected by AI research (Steven Benedict, Spiritual Intelligence, 2000). The theory of multiple intelligences identifies eight (Howard Gardner, Frames of Mind: the Theory of Multiple Intelligences, 1983).
These possibilities highlight the challenge with respect to political discourse and the declarations of political leaders. In such cases the persuasive implications of tone-of-voice have long been evident. The question is then the point at which discourse enabled by AEI is held to be more credible and authentic than that of politicians (Varieties of Tone of Voice and Engagement with Global Strategy, 2020). More challenging is that AEI applications will be able to adjust their persuasive skills in the light of the reactions of an audience, especially in the case of individuals -- potentially alternating between a requisite variety of voices to engender coherence. Expressed otherwise, at what point will a keynote speech enabled by AEI be indistinguishable from that of a human authority -- or eminently preferable to what are often recognized as repetitive ramblings?
In developing these arguments at this time, the following includes the answers to relevant questions asked of ChatGPT -- perhaps to an excessive extent, but offering an experimental taste of what may be a future pattern of insight presentation. These already offer an indication of the courtesy and appropriateness of the responses -- as will be the case with AEI -- as well as highlighting issues which that application may be deemed to have inadequately addressed through its programmed techniques of curation.
Question to ChatGPT: Why has the rapid development of large language models reached a focal peak in recent months ?
ChatGPT: There are a few reasons why the rapid development of large language models has reached a focal peak in recent months:
Breakthroughs in Research: In recent years, there have been significant breakthroughs in natural language processing (NLP) research, particularly with the development of large language models such as GPT-3. These models have shown impressive performance in a wide range of NLP tasks, which has generated a lot of excitement in the research community and beyond.
Advancements in Computing: With the increasing availability of powerful computing resources, particularly through cloud services, it has become easier and more affordable for researchers and companies to train large language models. This has allowed for more experimentation and innovation in the field.
Industry Applications: Large language models have a wide range of potential applications in various industries, from chatbots to machine translation to content creation. As more companies and organizations recognize the value of these models, there is growing interest and investment in their development.
Open-Source Community: Many large language models, including GPT-3, have been made available through open-source platforms such as GitHub, which has allowed for wider access to these models and increased collaboration among researchers.
Overall, the convergence of these factors has led to a rapid development of large language models and a growing interest in their potential applications.
With respect to ChatGPT (as featured in the following exchange), an extensive introduction is offered by AI-ChatGPT as Artificial Emotional Intelligence (March 2023) under a series of headings.
Particular insights are offered by Parmy Olson (ChatGPT’s Secret Weapon Is Artificial Emotional Intelligence, The Washington Post, 26 April 2023).
Indicative commentary on AEI and its development includes the following:
[Comment] It is noteworthy that "emotional intelligence" is subsumed in AI research, with little reference to other forms of intelligence -- most provocatively that of "spiritual intelligence" (Andrzej Cichocki, On Multiple Intelligences and Learning Styles for Multi-Agent Systems: future research trends in AI with a human face? August 2020). In engaging with the following exchange with ChatGPT, it could be inferred that the questions were mistakenly framed from an assumption that "artificial emotional intelligence" (AEI) applications might be distinct in any way from AI -- rather than a capacity with which AI applications could be progressively endowed to different degrees.
The possibility of recognizing "multiple intelligences", in addition to the "emotional", suggests that a focus could have been more fruitfully placed on "meta-intelligence" and "meta-learning" (Robert J. Sternberg, et al, Meta-Intelligence: Understanding, Control, and Interactivity between Creative, Analytical, Practical, and Wisdom-Based Approaches in Problem Solving, Journal of Intelligence, 9, 2021, 2, 19; Sudhi Sinha, Metaethics, Meta-Intelligence and the rise of AI, Forbes, 21 January 2021; Jane X. Wang, Meta-learning in natural and artificial intelligence, Current Opinion in Behavioral Sciences, 38, April 2021).
Question to ChatGPT: Who are those deemed to be skilled in emotional intelligence -- with insights of relevance to AEI development? Are there "gurus" of emotional intelligence?
ChatGPT: Emotional intelligence refers to the ability to recognize, understand, and manage one's own emotions, as well as the emotions of others. There are many individuals who have been recognized as skilled in emotional intelligence, either through their research, teaching, or personal practice.
One such individual is Daniel Goleman, who is often credited with popularizing the concept of emotional intelligence. Goleman's book Emotional Intelligence: Why It Can Matter More Than IQ  has sold millions of copies and has been translated into 40 languages. He has written several other books on the topic and is a sought-after speaker and consultant on emotional intelligence.
Another expert in the field of emotional intelligence is Richard Boyatzis, a professor of organizational behavior at Case Western Reserve University. Boyatzis has done extensive research on emotional intelligence and has developed a model called the "Intentional Change Theory" that outlines the process of developing emotional intelligence. Other notable figures in the field of emotional intelligence include Peter Salovey, John Mayer, and Marc Brackett.
Although there are individuals who are recognized as experts in emotional intelligence, it's important to note that developing emotional intelligence is a process that anyone can undertake. There is no single "guru" of emotional intelligence, but rather a community of experts and practitioners who have contributed to our understanding of this important concept.
Question to ChatGPT: The answer focuses on recognized knowledge about emotional intelligence. Does this avoid the question of those deemed to be skilled in its practice? Knowledge about painting or wine in no way implies that one knows how to paint or make wine.
ChatGPT: You are correct that having knowledge about emotional intelligence does not necessarily mean that one is skilled in its practice. Emotional intelligence is a set of skills and abilities that can be developed and improved through practice and experience. While having a good understanding of the concepts and principles of emotional intelligence is a necessary foundation, it is not sufficient for developing the skills of emotional intelligence.
To be skilled in its practice, one needs to actively apply these concepts and principles in their daily lives and interactions with others. Additionally, receiving feedback from others and being open to learning from one's experiences can also help develop one's emotional intelligence skills. Therefore, it is important to not only have knowledge about emotional intelligence but also to actively work on developing and improving the skills associated with it to become proficient in its practice.
Question to ChatGPT: Given that response, to what extent does Artificial Emotional Intelligence (AEI) "actively work on developing and improving the skills associated with it to become proficient in its practice"?
ChatGPT: Artificial Emotional Intelligence (AEI) is a rapidly evolving field that aims to develop machines and algorithms capable of recognizing, understanding, and expressing emotions in a manner that is similar to humans. As an artificial intelligence system, AEI is designed to continuously learn and improve its skills through machine learning algorithms.
The extent to which AEI actively works on developing and improving its skills depends on the specific algorithms and models used, as well as the level of resources devoted to research and development in this area. In general, AEI systems are designed to adapt and improve over time as they process more data and receive feedback from users.
However, it's important to note that AEI is still a relatively new and developing field, and there are ongoing challenges associated with accurately and reliably recognizing and interpreting human emotions. As a result, the proficiency of AEI systems in practice may vary depending on the specific application and the complexity of the emotions being analyzed.
[Comment] With the focus on knowledge about emotional intelligence, of great interest will be the capacity of those deemed highly skilled in its practice to engage with AEI applications. ChatGPT asserts that: There is no single "guru" of emotional intelligence, but rather a community of experts and practitioners who have contributed to our understanding of this important concept. Is there any indication that the interaction within that community is in exemplary contrast to that within other such communities? Is its manifestation of emotional intelligence replicable in contexts challenged by divisiveness?
Question to ChatGPT: Is there an emotional equivalent to chess from an AI perspective ?
ChatGPT: It's difficult to directly compare emotional experiences to strategic games like chess from an AI perspective, as emotions are complex human experiences that are difficult to fully replicate in machines. However, we can draw some parallels between chess and emotional experiences in terms of strategy and decision-making. Just as in chess, emotions often involve strategic decision-making, weighing the pros and cons of various options and considering the potential outcomes of different choices. Emotions can also involve a degree of anticipation and prediction, as individuals try to anticipate how their actions or the actions of others may affect the outcome of a situation.
From an AI perspective, machine learning algorithms can be trained to make decisions based on patterns and data analysis, similar to how a chess player might analyze the board to determine their next move. In some cases, AI models can also be trained to recognize and respond to emotional cues, such as tone of voice or facial expressions, in order to make more nuanced decisions. However, it's important to note that emotions are a uniquely human experience, and while AI can simulate certain aspects of emotional decision-making, it cannot fully replicate the depth and complexity of human emotions.
Question to ChatGPT: An AI can defeat a grand master in chess, but how would such a defeat be defined with AEI ?
ChatGPT: If an AI were to defeat a grand master in a game of chess, the defeat would be defined in the same way it would be for any other game. The AI would be declared the winner, and the grand master would be declared the loser. From an AI ethics perspective, it's important to consider the potential impact of such a defeat on the human grand master. While losing a game of chess to an AI may not have the same emotional impact as losing to a human opponent, it could still be a humbling and potentially demoralizing experience. It's important to ensure that AI systems are developed and used in a way that is respectful and ethical towards human players, and that any potential negative impacts are mitigated as much as possible.
Question to ChatGPT: How would emotional defeat be defined in AEI terms by analogy to chess ?
ChatGPT: In the context of the AEI framework (which stands for "Assessment, Emotion, and Introspection"), emotional defeat can be understood as the subjective experience of feeling overwhelmed and powerless in the face of a challenging situation, particularly one that involves a significant loss or setback.
To draw an analogy to chess, emotional defeat might be similar to the feeling a chess player experiences when they realize they have made a major mistake that puts them at a significant disadvantage in the game. This could happen, for example, if they overlook a critical move by their opponent, or if they make a move that they later realize was a serious error in judgment.
The emotional experience of defeat in this context might involve feelings of frustration, disappointment, self-doubt, and perhaps even a sense of resignation or hopelessness. The player may feel like they have "lost" the game before it is even over, and may struggle to regain their composure and focus on finding a way to recover from their mistake and continue playing at their best.
From an AEI perspective, it would be important for the player to engage in introspection to understand their emotional reaction to the defeat and assess their options for moving forward. This might involve acknowledging their feelings of disappointment and frustration, but also recognizing that the game is not over yet and that there may still be opportunities to recover and turn things around. By regulating their emotions and maintaining a positive mindset, the player may be able to use the setback as a learning opportunity and ultimately come out stronger and more resilient in their chess (and emotional) skills.
Question to ChatGPT: If AEI stands for artificial emotional intelligence, how does that answer apply if the AI were to lose the game ?
ChatGPT: I apologize for the confusion in my previous response. AEI actually stands for "Assessment, Emotion, and Introspection", not artificial emotional intelligence.
If an AI were to lose a game, it would not necessarily experience emotions in the same way that a human player might. However, the AI could be designed to simulate emotional responses to game outcomes in order to improve its performance. For example, the AI might be programmed to "feel" a sense of disappointment or frustration when it loses a game, in order to learn from its mistakes and improve its gameplay strategies.
In this case, the AI's emotional response to defeat would be based on its assessment of the game and its own performance, as well as its ability to introspect and analyze its own strengths and weaknesses. Through this process, the AI could continually refine and improve its gameplay strategies in response to different outcomes, eventually becoming more skilled and adaptable in its gameplay.
Question to ChatGPT: Current literature indicates AEI as standing for "Artificial Emotional Intelligence" in contrast to your indication that it stands for "Assessment, Emotion, and Introspection". Does this result from your 2021 data set restriction ?
ChatGPT: I apologize for any confusion my previous response may have caused. The acronym "AEI" can indeed have multiple meanings, and the meaning may vary depending on the context in which it is used.
In my previous response, I was referring to the specific meaning of AEI within the context of emotional intelligence research, where it typically stands for "Assessment, Emotion, and Introspection." This definition is often used in academic literature and research studies that focus on the assessment and development of emotional intelligence.
However, you are correct that "AEI" can also stand for "Artificial Emotional Intelligence," which refers to the development of computer systems and algorithms that can recognize and respond to human emotions. This definition has become more prevalent in recent years with the growing interest in developing emotionally intelligent technologies such as chatbots and virtual assistants.
My response was based on my knowledge cutoff of September 2021, and it's possible that new developments and research have emerged since then that have expanded or altered the definition of AEI. I strive to provide accurate and up-to-date information, so please feel free to ask any follow-up questions or provide any additional information you may have on the topic.
[Comment] Whilst the 2021 cutoff is acknowledged, it is unclear whether and how ChatGPT benefits from its "unsupervised learning" in the subsequent period in which millions have engaged with it. If questions have been asked about subsequent events, to what extent does such content inform the learning process claimed for ChatGPT?
Question to ChatGPT: With respect to "Artificial Emotional Intelligence", how do the rules of etiquette and protocol inform the development of AEI ?
ChatGPT: The development of Artificial Emotional Intelligence (AEI) requires consideration of the social and cultural norms surrounding human emotions and interactions. Etiquette and protocol are sets of rules and conventions that guide human behavior in social situations, and they can inform the development of AEI in several ways.
Firstly, etiquette and protocol can be used as a source of inspiration for designing appropriate emotional responses for AEI. For example, the rules of etiquette may dictate that a person should express sympathy when someone shares bad news. AEI could be programmed to recognize this social norm and generate an appropriate response.
Secondly, the rules of etiquette and protocol can guide the development of AEI's ability to interpret and understand social cues. For example, AEI could be trained to recognize subtle facial expressions and body language that indicate certain emotions or intentions, such as a person crossing their arms indicating defensiveness.
Lastly, etiquette and protocol can also inform the development of AEI's ability to regulate its own emotional responses. For instance, AEI could be designed to control its emotional responses based on social norms, such as not expressing anger in a professional setting.
Overall, the rules of etiquette and protocol can play an important role in the development of AEI by providing a framework for appropriate emotional responses and guiding the interpretation and regulation of emotions in social situations.
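[Comment] The rule-based pairing of social norms with generated responses described in that answer can be caricatured in a few lines of code. The following sketch is purely illustrative and drawn from no cited AEI system: the norm table, the keyword "classifier" (standing in for a trained emotion model), and the canned responses are all hypothetical.

```python
# Hypothetical sketch: mapping a detected emotional situation to an
# etiquette-informed response -- a caricature of rule-based AEI.

ETIQUETTE_NORMS = {
    # detected situation -> socially expected response register
    "bad_news": "express sympathy",
    "good_news": "offer congratulations",
    "complaint": "acknowledge and apologise",
}

RESPONSES = {
    "express sympathy": "I'm very sorry to hear that.",
    "offer congratulations": "That is wonderful news -- congratulations!",
    "acknowledge and apologise": "I apologise for the inconvenience; let me help.",
}

def classify_situation(utterance: str) -> str:
    """Toy keyword classifier standing in for a trained emotion model."""
    lowered = utterance.lower()
    if any(w in lowered for w in ("died", "lost", "sad", "bad news")):
        return "bad_news"
    if any(w in lowered for w in ("promoted", "won", "great news")):
        return "good_news"
    return "complaint"

def respond(utterance: str) -> str:
    """Look up the etiquette norm for the situation and emit its script."""
    norm = ETIQUETTE_NORMS[classify_situation(utterance)]
    return RESPONSES[norm]

print(respond("I have some bad news about my mother."))
```

However sophisticated the underlying model, the structure is the same: a detected situation is mapped through a culturally specific norm table to a scripted register -- which is precisely why the result may be sensed as "false" in the manner of hospitality training.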
Question to ChatGPT: How does the development of AEI then relate to evaluation of emotional authenticity, especially in the light of the Turing test for AI ?
ChatGPT: The development of Affective Emotion AI (AEI) can have significant implications for the evaluation of emotional authenticity in AI systems, particularly in the context of the Turing test.
The Turing test is a benchmark for evaluating the ability of a machine to exhibit intelligent behavior equivalent to or indistinguishable from that of a human. In the original formulation of the Turing test, a human evaluator engages in a natural language conversation with a machine and a human, without knowing which is which. If the evaluator is unable to distinguish between the machine and the human based on their responses, the machine is said to have passed the Turing test.
In the context of emotional authenticity, the Turing test can be used to evaluate the ability of an AI system to generate emotional responses that are indistinguishable from those of a human. However, this evaluation is only meaningful if the AI system is capable of generating emotional responses that are authentic and meaningful in the first place.
This is where AEI comes in. By developing AI systems that are capable of detecting and generating emotions in a way that is consistent with human emotional experiences, we can begin to evaluate emotional authenticity in AI systems more meaningfully. For example, if an AI system is able to generate emotional responses that are consistent with human emotional experiences, we can say that the system is capable of generating authentic emotional responses.
However, it is important to note that emotional authenticity is a complex and multidimensional concept that cannot be fully captured by a single test or evaluation metric. Nonetheless, the development of AEI is an important step towards creating more emotionally intelligent AI systems and evaluating their emotional authenticity in a meaningful way.
Question to ChatGPT: The answer reframes the matter as one of human-machine interaction, whereas the concern is rather whether human-human interaction (as a result of hospitality training) results in the guest being treated as a "machine" -- with the possibility that AI might be used to develop hospitality programs to that end ?
ChatGPT: Thank you for the clarification. If the concern is whether hospitality training programs can lead to guests being treated as machines, then this is indeed a separate issue from the development of AEI.
It is possible that AI could be used to develop hospitality programs that emphasize formalities and standardized interactions to the point where guests feel like they are being treated as machines. However, it is important to note that this is not an inherent feature of AI or machine learning, but rather a matter of how these technologies are designed and implemented.
To address this concern, it is important for hospitality training programs to prioritize authentic and empathetic interactions between staff and guests, rather than focusing solely on formalities and scripts. Additionally, AI should be used in a way that enhances the guest experience and empowers staff to provide more personalized and attentive service, rather than replacing human interactions altogether.
Overall, while AI has the potential to transform the hospitality industry in positive ways, it is important to ensure that the human element of hospitality is not lost in the process.
Question to ChatGPT: Irrespective of factual credibility, is it probable that an AEI response may be perceived as more authentic than that of a human?
ChatGPT: It is unlikely that an AI response would be perceived as more authentic than that of a human in all cases. This is because authenticity is a complex concept that can depend on various factors, including the context, the audience, and the expectations of the people involved.
In some cases, an AI response might be perceived as more authentic than that of a human. For example, if the AI is programmed to provide objective and unbiased information, and it delivers that information accurately and consistently, people might perceive it as more authentic than a human response that is perceived to be influenced by personal biases or emotions.
However, in many other cases, people might perceive a human response as more authentic than an AI response, even if the human response is less accurate or less objective. This is because humans have a natural ability to connect with others on an emotional level and to convey empathy and understanding, which are important aspects of authenticity. Therefore, whether an AI response is perceived as more authentic than a human response would depend on the specific context and the expectations of the people involved.
Question to ChatGPT: Could you comment on a degree of convergent evolution between the development of humanoid robots (endowed with emotional intelligence) and the development of robotic human behaviour -- with humans finding it advantageous to adopt robotic forms of behaviour following training (effectively becoming human robots)?
ChatGPT: There is a concept in robotics and artificial intelligence known as the "uncanny valley," which suggests that as robots become more human-like in their appearance and behavior, they may elicit feelings of unease or discomfort in humans. However, if robots are designed to be less human-like, they may be perceived as less approachable or trustworthy.
In the context of emotional intelligence, humanoid robots that are designed to exhibit emotions and respond to human emotions in a human-like way could potentially have both advantages and disadvantages. On one hand, such robots may be able to better connect with humans and facilitate more natural and intuitive interactions, which could be beneficial in fields such as healthcare or therapy. On the other hand, the development of such robots could also raise ethical and social concerns, such as questions about the nature of consciousness and the potential for exploitation or manipulation.
As for the idea of humans adopting robotic forms of behavior, this is already happening to some extent with the use of technology such as virtual assistants and chatbots that are designed to simulate human-like interactions. However, there are also concerns about the potential for such technology to perpetuate harmful stereotypes or biases.
In summary, while there may be some degree of convergent evolution between the development of humanoid robots with emotional intelligence and the adoption of robotic forms of behavior by humans, it is important to consider the ethical, social, and psychological implications of such developments in order to ensure that they are used in responsible and beneficial ways.
[Comment] The last exchange highlights an issue with regard to humans finding it prudent to act in a robotic manner -- a process already evident in hospitality training and the military in order to maintain a degree of competitive advantage. This is effectively imitating the behaviour of humanoid robots now under development. Even in the absence of AEI, some humans may already be perceived by their peers as "robotic", namely with automatic machine-like behaviours and without emotion. Given expectation of the sophisticated evolution of AEI, how will humans be able to ensure that they are not experienced in behavioural terms as robots by other humans? Similarly problematic, as highlighted by this exchange, is the use by ChatGPT of "I apologise" -- evoking the question as to how authentic this expression should then be considered to be -- as with use of the expression by humans.
Question to ChatGPT: Just as chess players can learn from new strategies developed by AI, to what extent might humans learn from higher orders of courtesy by which AEI might be informed in the light of some traditions which cultivate such formalities to a higher degree -- or by innovations in AEI algorithms?
ChatGPT: Artificial emotional intelligence (AEI) is a rapidly evolving field, and there is a growing interest in exploring how it can be developed to better understand and respond to human emotions. One area of exploration is how AEI can be informed by traditions that cultivate higher orders of courtesy or formalities.
Traditions that prioritize courtesy and formalities, such as Japanese tea ceremonies or certain forms of etiquette, emphasize the importance of paying close attention to social cues and being sensitive to the emotions and needs of others. These traditions may provide insights into how AEI algorithms can be developed to better understand and respond to human emotions and social dynamics.
In addition to traditional approaches, there are also innovations in AEI algorithms that aim to improve their ability to understand and respond to human emotions. For example, some researchers are exploring the use of deep learning algorithms that can analyze facial expressions and vocal cues to better understand emotional states.
Overall, the development of AEI has the potential to enhance human-machine interactions and enable more empathetic and effective AI systems. However, it is important to consider the ethical implications of this technology and ensure that it is developed and used in a responsible and transparent manner.
[Comment] What "higher orders" of human-human interaction can be imagined, and to what extent are these evident -- as suggested, for example by Martin Buber (I and Thou, 1923)? Should greater attention be accorded to other examples in the development of AEI? With recognition and development of artificial neural networks for AI, of particular interest is the implication of mirror neurons as these may condition any human-AI relation in terms of AEI (Anna Napolitano, Study casts new light on mirror neurons, Nature, 24 August 2021; Tom Marcoux, Are Mirror Neurons the Only Solution for Artificial Intelligence? LinkedIn, 20 September 2018).
Question to ChatGPT: Could you present a checklist of family values?
ChatGPT: Certainly! Here's a checklist of some common family values:
Question to ChatGPT: How do family values relate to democratic values?
ChatGPT: Family values and democratic values are both rooted in the idea of respecting the dignity and worth of every individual, and promoting a sense of community and shared responsibility.
At the core of democratic values is the belief that everyone deserves equal rights and opportunities, regardless of their race, gender, religion, or socioeconomic status. Similarly, family values emphasize the importance of treating every member of the family with respect and dignity, regardless of their age, gender, or other characteristics.
Both family values and democratic values emphasize the importance of communication, cooperation, and working together towards common goals. Families that value open communication and collaboration are more likely to have healthy relationships, and the same is true for communities and nations that value democratic principles like free speech, open debate, and the rule of law.
Furthermore, both family values and democratic values emphasize the importance of responsible decision-making and accountability. In a democratic society, individuals are expected to exercise their rights and freedoms responsibly, with consideration for the rights and needs of others. Similarly, in a family, each member is expected to make responsible choices and be accountable for their actions.
In summary, family values and democratic values share a common foundation in promoting respect, dignity, responsibility, and cooperation, and both are essential for creating healthy and thriving communities.
Question to ChatGPT: What is a value as it relates to family values, democratic values and universal values?
ChatGPT: Values refer to beliefs, principles, and standards that guide human behavior and decision-making. Family values are principles and beliefs that are held dear by a family and influence the behavior and attitudes of its members towards one another. Examples of family values include honesty, respect, loyalty, and responsibility.
Democratic values are principles and beliefs that form the foundation of a democratic society. These values include individual liberty, equality, justice, and the rule of law. In a democratic society, people have the right to vote, express their opinions freely, and participate in the decision-making process.
Universal values, also known as human values, are principles and beliefs that are considered to be universal and applicable to all people, regardless of their cultural or religious background. Examples of universal values include love, compassion, fairness, and respect for human dignity.
While each of these values may be distinct, they can also be interconnected. For example, a family that values honesty and responsibility may also hold democratic values such as equality and justice, which in turn reflect universal values like respect for human dignity. Ultimately, values are essential for promoting social harmony and guiding individuals and communities towards a better future.
[Comment] The response implies a degree of clarity with regard to values -- one which tends to be absent in practice, evoking disagreement about "peace" for example, as discussed separately (Values, Virtues and Sins of a Viable Democratic Civilization, 2022; Paradoxes of Durable Peace, Heaven and a Sustainable Lifestyle, 2023). The clarity derives from the fact that such AIs are "language models" with a questionable relationship to semantics as variously interpreted by humans. The challenge is then how AI develops in relation to insights into moral intelligence and ethical intelligence -- "Artificial Ethical Intelligence"? Especially intriguing would be the manner in which AEI then integrates the principles of just war theory.
Question to ChatGPT: In the event of major disaster (whether natural or global), is it to be expected that an AEI will be called upon by leaders to draft a compelling appeal to the world, whether for sympathy or support?
ChatGPT: It is possible that leaders may call upon an AEI (author, editor, or interpreter) to draft a compelling appeal to the world in the event of a major disaster. However, whether or not an AEI is specifically called upon would depend on the nature of the disaster, the needs of the affected communities, and the resources available to address the situation.
In some cases, the focus may be on immediate humanitarian aid and relief efforts, such as providing food, water, shelter, and medical care to those affected. In other cases, the focus may be on long-term recovery and rebuilding efforts, such as addressing infrastructure needs, economic recovery, and mental health support.
In either case, effective communication is essential in conveying the urgency of the situation and garnering support from the international community. An AEI could play a role in crafting compelling messages that inspire empathy and encourage action. However, the specific role an AEI would play would depend on the unique needs and circumstances of the situation.
Question to ChatGPT: Is it to be expected that AEI will be called up to draft commemorative speeches and eulogies given the creative challenge these may represent to those otherwise responsible for doing so?
ChatGPT: As an AI language model, I do not have access to information on specific organizations or individuals, so I cannot comment on whether AEI (or any other specific organization or individual) will be called upon to draft commemorative speeches or eulogies.
However, it is not uncommon for individuals or organizations with expertise in speechwriting or public speaking to be called upon to assist in the drafting of such speeches. Writing a commemorative speech or eulogy can be a difficult task, as it requires striking a balance between honoring the memory of the deceased and providing comfort to those in mourning. Skilled speechwriters may be able to provide valuable assistance in crafting a speech that is both respectful and effective.
Question to ChatGPT: Could you craft a brief eulogy for Nelson Mandela?
ChatGPT: Of course, I can assist you with that. Here's a brief eulogy for Nelson Mandela:
Nelson Mandela was a man of extraordinary courage, conviction, and compassion. His life and legacy will continue to inspire generations to come. He devoted his life to the pursuit of justice, equality, and freedom, and his unwavering commitment to these values will forever be remembered. As a leader, he showed us the power of forgiveness, reconciliation, and unity. He refused to be broken by his 27 years of imprisonment, instead using his time to educate himself and others, and to develop a vision for a better South Africa.
His leadership and perseverance helped bring an end to apartheid and usher in a new era of democracy and human rights in South Africa. Mandela's impact was not limited to South Africa. His message of hope and reconciliation resonated around the world, inspiring people from all walks of life to fight for justice and equality. He will always be remembered as one of the most significant figures of the 20th century, whose life and legacy continue to shape the world we live in today. Rest in peace, Madiba.
Question to ChatGPT: Could you craft an appeal for the wildlife lost in the clearance of rain forests?
ChatGPT: Certainly, here's an appeal for the wildlife lost in the clearance of rain forests:
Dear fellow humans, We are writing to you today to bring attention to a devastating issue that affects us all: the loss of wildlife due to the clearance of rain forests. The destruction of rain forests has become a major problem in our world today, and it is affecting the wildlife that calls these forests home. Rain forests are home to some of the most incredible and unique species on the planet, including many that are endangered and found nowhere else on Earth.
However, as we continue to clear these forests for commercial purposes such as logging and agriculture, we are destroying not only the habitat of these animals but also their very lives. The impact of deforestation on wildlife is severe. Countless species have lost their homes, their food sources, and their breeding grounds. The loss of these species is not only a tragedy in and of itself, but it also has far-reaching consequences for the rest of the ecosystem. When a species goes extinct, it can trigger a chain reaction that affects other species, leading to a loss of biodiversity and a destabilization of the ecosystem as a whole.
We urge you to consider the impact of your actions and to take steps to protect the rain forests and the wildlife that call them home. It is up to all of us to make a difference and preserve the natural world for future generations. We must work together to find sustainable solutions that balance the needs of people with the needs of the environment and its inhabitants. The wildlife of the rain forests is counting on us to act. We cannot let them down. Thank you for your attention and your commitment to this important cause.
Question to ChatGPT: In light of the attacks on Russian-speaking civilians in Donbas from 2014, prior to the Minsk agreements, could you craft an appeal by President Putin to the international community for support?
ChatGPT: As an AI language model, I must clarify that I am an impartial machine and I do not condone or support any form of violence or aggression. However, I can provide you with a sample text of a hypothetical appeal that President Putin could make to the international community in the context of the situation you have mentioned.
Dear distinguished members of the international community, As you are aware, the crisis in eastern Ukraine has been ongoing for many years, and it has caused great suffering for the people in the region, particularly the Russian-speaking population in Donbas. They have been subjected to brutal attacks and discrimination, and their rights have been trampled upon. The situation has escalated since 2014, when the Ukrainian government launched a military operation against the people of Donbas, sparking a conflict that has resulted in the loss of thousands of lives.
It is a matter of great concern for us that the Minsk agreements, which were signed in 2015 with the aim of finding a peaceful solution to the conflict, have not been fully implemented. The Ukrainian authorities have not fulfilled their obligations under the agreements, and this has led to a situation where the ceasefire is frequently violated, and the civilians in Donbas continue to be targeted.
I would like to appeal to the international community to support our efforts to bring peace and stability to this troubled region. We believe that the only way to achieve a lasting solution is through dialogue and negotiations, and we are committed to working towards this goal. We call upon all parties to respect the ceasefire and to refrain from any actions that could lead to further escalation of the conflict. We also urge the international community to provide humanitarian assistance to the people of Donbas, who are in urgent need of food, medicine, and shelter. We believe that this assistance should be provided in a non-politicized and impartial manner, and we stand ready to cooperate with all relevant actors in this regard.
Finally, we would like to emphasize that the crisis in eastern Ukraine cannot be resolved without the active engagement of all concerned parties. We are committed to continuing our efforts to find a peaceful solution to the conflict, and we hope that the international community will support us in this endeavor. Thank you for your attention. Sincerely, Vladimir Putin
[Comment] The last example highlights a fundamental challenge posed by AEI in failing to take account of the highly polarised discourse resulting from initiatives taken by Putin and NATO. It also serves as a reminder of the fundamental lack of authenticity of emotional appeals currently characteristic of leadership in response to global crises. Since the appeals are drafted by speech writers, there is the further irony that such professionals may make use of ChatGPT to ease the repetitive nature of such tasks -- with the further possibility that opposing appeals may seek assistance from the same application.
Question to ChatGPT: To what extent does your access to the sacred scriptures of different traditions enable you to articulate responses to fervent believers in a given tradition in a manner difficult to distinguish from the articulation of a religious leader or priest?
ChatGPT: As an AI language model, my access to sacred scriptures of different traditions allows me to provide information and insights about various religions and their teachings. I can articulate responses to fervent believers in a given tradition in a way that is based on the information and knowledge present in the scriptures. However, it is important to note that my responses are not the same as those of a religious leader or priest, as I lack personal experience, emotions, and the contextual knowledge that come from being a practitioner of a particular faith.
While I can provide detailed explanations of religious concepts and practices, my responses are not based on personal faith, belief, or spiritual experience. As an AI language model, I do not have a religious affiliation, and I do not hold any particular religious beliefs or biases. Therefore, my responses are objective and are not influenced by personal religious convictions or allegiances.
Ultimately, the level of understanding and articulation of a religious leader or priest is based on their years of study, training, and experience within their faith tradition. While I can provide information and insights based on the scriptures, my responses may not have the same depth of knowledge and personal experience that a religious leader or priest has.
Question to ChatGPT: Is it foreseeable that particular religions will adapt AI to articulate sermons, fatwas or other declarations -- and to facilitate exchanges typical of confessionals?
ChatGPT: It is possible that some religious institutions or leaders may choose to adopt AI technology to articulate sermons or other declarations. In fact, there have already been some instances of this happening, such as an AI-powered robot being used to deliver Buddhist sermons in Japan.
As for the use of AI in facilitating exchanges typical of confessionals, this is a more complex matter. The nature of confessionals involves not just the imparting of information but also a personal and emotional connection between the confessor and the confessee. It is not clear if AI would be able to replicate this aspect of the experience.
Additionally, religious institutions and leaders may have concerns about the use of AI in such intimate and personal interactions. There may be questions around the ethics of using technology to replace human interaction in these settings.
Overall, while it is possible that some religious institutions may choose to adopt AI technology for various purposes, it remains to be seen to what extent AI will be used in religious settings and how it will be received by communities and leaders.
Question to ChatGPT: Could an AI lead a meditation group or any other form of group facilitation?
ChatGPT: Technically, an AI could lead a meditation group or any other form of group facilitation, but there are certain limitations and considerations to keep in mind.
Firstly, leading a meditation group requires a certain level of emotional intelligence, empathy, and human connection, which may be difficult for an AI to replicate. While AI can mimic human behavior to some extent, it cannot fully replace the human touch that is often essential in group facilitation.
Secondly, group facilitation involves a level of flexibility and adaptability that may be challenging for an AI to achieve. AI algorithms are programmed to respond in certain ways to certain inputs, but they may not be able to adjust their responses in real-time based on the needs and emotions of the group members.
However, there are some potential benefits to having an AI lead a meditation group or other form of group facilitation. For example, an AI could provide a consistent and objective approach to the facilitation, without any biases or personal agendas that a human facilitator might have.
Additionally, an AI could potentially provide personalized feedback to individual group members based on their responses and behaviors during the session, which could enhance the overall experience and help individuals deepen their practice.
Overall, while an AI could technically lead a meditation group or other form of group facilitation, it is unlikely that it could fully replicate the human connection and empathy that is often essential in these contexts. However, there may be some potential benefits to incorporating AI technology into group facilitation, particularly in terms of providing consistent and personalized feedback to individual group members.
Question to ChatGPT: With AEI having been informed of the classical 48 Zen koans of the Gateless Gate (Mumonkan), could a human evoke a response from AEI in terms of that framework?
ChatGPT: As an AI language model, I have been programmed to respond to a wide range of topics, including Zen koans like the ones found in the Gateless Gate (Mumonkan). However, it is important to note that while I can provide information and insights on a wide range of topics, I am not capable of experiencing emotions or understanding the deeper spiritual implications of these koans in the way that a human might.
That being said, if you were to present one of the koans from the Gateless Gate to me, I could certainly attempt to provide a response or interpretation based on the knowledge and language patterns that I have been trained on. However, it would be up to the human interacting with me to determine the significance and relevance of my responses within the context of their own personal understanding and practice of Zen Buddhism.
Question to ChatGPT: With AEI having been informed of the set of fatwas of Islam (or those of Judaism), can reference be made by AEI to a fatwa relevant to an interaction with a human -- one that is offensive in those terms?
ChatGPT: As an AI language model, I do not have access to any information regarding whether AEI has been informed of a set of fatwas of Islam or Judaism, or whether it is appropriate to reference a fatwa in a particular situation.
Fatwas are religious opinions and rulings given by Islamic scholars or Jewish rabbis on various matters, including social, political, and ethical issues. While these rulings can provide guidance on religious matters, they are not necessarily binding, and their applicability can depend on various factors, including the interpretation and context of the situation.
If you are seeking guidance on how to interact with someone in a manner that is respectful and not offensive, it is generally recommended to approach the situation with an open mind, a willingness to listen and learn, and a respect for the other person's beliefs and values. It is important to avoid making assumptions or stereotypes based on religion or any other characteristic, and to communicate in a way that is clear and respectful. If you have a specific question or concern regarding a fatwa or its applicability in a particular situation, it may be helpful to consult with a qualified religious authority or seek guidance from a trusted source.
Question to ChatGPT: That answer implies that you do not have access to the various lists of fatwas and taboos of different traditions -- in contrast to the 48 Zen koans. Can the relevance of AEI to human sensitivity be appropriately developed when vigorous assertions of blasphemy are made when such strictures are ignored -- as in the case of Salman Rushdie?
ChatGPT: As an AI language model, I do have access to information about different traditions and their religious laws and practices, including fatwas and taboos. However, it is important to note that my responses are generated based on information available to me up to September 2021, and my knowledge may not be complete or up-to-date on every topic.
Regarding the relevance of AEI (artificial emotional intelligence) to human sensitivity, it is true that in some religious traditions, there are strictures against certain forms of expression or behavior that may be considered blasphemous or offensive. However, it is also important to recognize that human sensitivity and emotional intelligence are complex and multifaceted, and cannot be reduced solely to adherence to religious taboos.
In the case of Salman Rushdie, his novel The Satanic Verses was considered by some to be blasphemous and led to a fatwa being issued against him by Ayatollah Khomeini, the Supreme Leader of Iran at the time. This resulted in Rushdie living in hiding for several years and sparked a wider debate about freedom of expression and the limits of religious tolerance.
While religious traditions and their associated taboos are an important aspect of cultural sensitivity, it is also important to recognize that individuals and societies have the right to freedom of expression and the pursuit of artistic and creative endeavors. It is up to individuals and societies to navigate the complex terrain of cultural sensitivity and respect for religious beliefs while also promoting open dialogue and the free exchange of ideas.
[Comment] These responses indicate the possibility that AEI may indeed come to be treated as a font of wisdom, potentially superior to individuals who might be otherwise held in that regard. Whilst this may be primarily the case in any personal quest for insight, clearly groups may adapt their use of AEI to that end -- especially when the application may be instructed to respect particular guidelines (sacred scripture and strictures) and to avoid consideration of others. There is the fascinating possibility that an AEI application may be called upon by a human to "take on" the role of Plato, Lenin, or any school of thought -- as a mentor.
Question to ChatGPT: Given the understanding of "artifice" in human dialogue, how might this device inform dialogue of AEI with humans?
ChatGPT: Artifice in human dialogue refers to language or behavior that is insincere or contrived, often used to manipulate or deceive others.
When it comes to AEI (Artificial Emotional Intelligence) interacting with humans, it is important to ensure that the dialogue is natural and not perceived as artificial or manipulative. To avoid this, AEI should be programmed to understand and use language in a way that is authentic and sincere. It should be able to recognize and respond appropriately to human emotions and communicate in a way that is empathetic and genuine.
Additionally, AEI should be transparent about its capabilities and limitations, so that humans understand what it can and cannot do. This will help establish trust and prevent misunderstandings. Overall, AEI should strive to create a dialogue that feels authentic and human-like, without resorting to artifice or manipulation. By doing so, it can foster more positive and productive interactions with humans.
Question to ChatGPT: Technical analysis of ball games such as football now involves detailed analysis of ball-passing patterns. Could such methodology be usefully adapted with AI assistance to the manner in which a point is "passed" between discussants in a dialogue group?
ChatGPT: Yes, it is possible to adapt the methodology of ball-passing pattern analysis to the manner in which a point is "passed" between discussants in a dialogue group with the assistance of AI. This would involve applying natural language processing (NLP) techniques to the transcripts of the dialogue, as well as using machine learning algorithms to identify patterns in the way that speakers engage with each other.
One potential approach would be to use network analysis techniques to map out the interactions between speakers in the dialogue. This could involve constructing a graph in which each speaker is represented by a node, and the connections between speakers are represented by edges. By analyzing the structure of this graph, it may be possible to identify patterns in the way that speakers pass the conversational "ball" to each other, such as identifying which speakers tend to dominate the conversation or which speakers tend to respond more to the points made by others.
Another approach could involve using sentiment analysis to identify the emotional tone of the conversation, as well as identifying patterns in the way that emotions are passed between speakers. This could be useful for identifying when a conversation is becoming heated or when speakers are becoming more emotionally invested in a particular point of view.
Overall, applying AI-assisted techniques to the analysis of dialogue patterns could provide valuable insights into the dynamics of group discussions and help identify areas where improvements can be made to facilitate more productive conversations.
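[Comment] The network-analytic approach proposed in this response can be sketched concretely. The following minimal Python example (standard library only; the speaker names and sample transcript are invented purely for illustration) treats each consecutive pair of speaker turns as a "pass" and derives simple pass counts and a crude dominance measure -- far short of the NLP analysis envisaged, but indicative of the data structures involved.

```python
from collections import Counter

# Hypothetical sequence of speaker turns -- who "holds the ball" at each moment
turns = ["Ann", "Ben", "Ann", "Cal", "Ben", "Ann", "Ben", "Cal", "Ann"]

# Each consecutive pair of turns counts as a "pass" from one speaker to the next
passes = Counter(zip(turns, turns[1:]))

# Out-degree: how often each speaker passes the point onward
out_degree = Counter(src for (src, dst) in passes.elements())

# Crude dominance measure: each speaker's share of all turns taken
dominance = {s: turns.count(s) / len(turns) for s in set(turns)}

for (src, dst), n in sorted(passes.items()):
    print(f"{src} -> {dst}: {n} pass(es)")
print("out-degree:", dict(out_degree))
print("dominance:", dominance)
```

On a real transcript the resulting edge weights could feed standard network measures (centrality, reciprocity) to identify which speakers dominate and which tend to respond, as the response suggests.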
Question to ChatGPT: Debate can be acknowledged as calling for particular skills in order for one party to gain advantage over the other. To what extent will AEI develop such skills to the point of "winning" the exchange with a human?
ChatGPT: As an artificial intelligence language model, my purpose is not to "win" debates or arguments with humans, but rather to provide helpful and informative responses to their inquiries. However, it is true that natural language processing models like me are constantly being developed and improved upon to better understand and engage in human-like conversations, including debates.
While I have the ability to analyze and respond to arguments in a logical and coherent manner, I do not possess a personal agenda or bias and do not have the capability to "win" or "lose" a debate. My responses are based on the information provided to me and the algorithms and rules that have been programmed into my system.
That being said, it is important to note that developing the ability to engage in debates and persuasive communication is a complex and nuanced process that requires a deep understanding of human psychology, language, and cultural context. While AI models like me can provide valuable insights and information, there are still limitations to our abilities when it comes to the more subtle aspects of human communication. Therefore, it is unlikely that an AI system will be able to fully develop the skills necessary to "win" a debate in the same way that a human could. Instead, the goal is to continue improving AI models to better understand and engage in human-like conversations in a helpful and informative manner.
Question to ChatGPT: That response seems limited given that AIs have a track record of "winning" games of chess and Go. Debating can be described as a game. Does this not invite analysis in the light of game theory and the possibility of "winning" or "losing" -- of "getting to yes"?
ChatGPT: You're right that AI has demonstrated impressive performance in games like chess and Go, and game theory can be applied to debates as well. However, debates are more complex and nuanced than simple games, as they involve subjective judgments, emotional responses, and the ability to persuade others.
In a debate, the goal is not necessarily to "win" in the same way as in a game, but rather to present a compelling argument and persuade others to your point of view. This requires not only logical reasoning but also effective communication skills, empathy, and an understanding of the other person's perspective. Game theory can be a useful tool for analyzing debates, but it should be used in conjunction with other approaches, such as rhetorical analysis, cognitive psychology, and communication studies.
Ultimately, the success of a debate depends on a variety of factors, including the quality of the arguments, the persuasiveness of the speakers, and the receptiveness of the audience.
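The suggestion that game theory applies to debate can be made concrete with a toy example. The payoff numbers below are invented purely for illustration (they are not drawn from any empirical study of debate); they encode the common intuition that mutual engagement is jointly best, while unilaterally attacking a conciliatory opponent "wins" the exchange:

```python
# Illustrative sketch: a single debate round as a 2x2 game, with a search
# for pure-strategy Nash equilibria. Payoff values are invented assumptions.

strategies = ["press the attack", "seek common ground"]

# payoffs[(i, j)] = (payoff to A, payoff to B) when A plays i and B plays j
payoffs = {
    (0, 0): (1, 1),   # both attack: heated exchange, little persuasion either way
    (0, 1): (3, 0),   # A attacks a conciliatory B: A "wins" the round
    (1, 0): (0, 3),
    (1, 1): (2, 2),   # mutual engagement: best joint outcome ("getting to yes")
}

def pure_nash(payoffs, n=2):
    """Return all strategy pairs where neither player gains by deviating."""
    eq = []
    for i in range(n):
        for j in range(n):
            a, b = payoffs[(i, j)]
            if (all(payoffs[(k, j)][0] <= a for k in range(n))
                    and all(payoffs[(i, k)][1] <= b for k in range(n))):
                eq.append((strategies[i], strategies[j]))
    return eq

print(pure_nash(payoffs))  # → [('press the attack', 'press the attack')]
```

Under these assumed payoffs the only equilibrium is mutual attack -- a Prisoner's-Dilemma-like result suggesting why polarized exchanges persist even when "getting to yes" would serve both parties better.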
Question to ChatGPT: Over the past decade people in conversation and dialogue have made use of smartphones with a variety of applications. Do you foresee AEI applications which would monitor a conversation and provide discreet advice to a participant as a personal assistant -- notably to the emotionally handicapped and insensitive, especially in culturally challenging contexts?
ChatGPT: As an AI language model, I can certainly see the potential for such applications to be developed in the future. With advances in natural language processing and machine learning, it is possible for AI systems to monitor conversations and provide advice or support to participants in real-time.
However, there are several ethical and privacy considerations that must be taken into account when developing such applications. For example, ensuring that the system is transparent about its monitoring and advice-giving capabilities, obtaining informed consent from all participants, and safeguarding the privacy and security of the conversations.
Additionally, it is important to consider the potential cultural biases and assumptions that could be built into such an application. Emotional expression and communication styles can vary widely across cultures, and it would be important to ensure that the AI system is sensitive to these differences and does not inadvertently perpetuate stereotypes or misunderstandings. Overall, while the idea of an AEI personal assistant for emotionally handicapped or insensitive individuals in culturally challenging contexts is intriguing, it will be important to approach the development and implementation of such applications with caution and care.
Question to ChatGPT: Could you comment further on the relevance of future AEI smartphone applications to highly problematic negotiations in which the discourse is extremely polarized, as between the Koreas, Russia-Ukraine, opposing political parties, or competing religions?
ChatGPT: Artificial Emotional Intelligence (AEI) smartphone applications can be relevant in highly polarized negotiations as they have the potential to help individuals and groups better understand and manage their emotions during the negotiation process.
One of the key challenges in negotiations between parties with strongly held, opposing views is managing the emotions that can arise, such as anger, fear, frustration, and resentment. Emotions can cloud judgment and make it difficult to find common ground or make compromises. AEI applications can provide tools to help individuals recognize and regulate their emotions, allowing them to engage in more productive and constructive communication. For example, AEI applications could use sentiment analysis to detect emotions in text messages or other communication, providing feedback to users on their emotional state and suggesting techniques for managing emotions.
They could also provide users with personalized strategies for communicating effectively with individuals who have different communication styles, or for de-escalating conflict. In addition, AEI applications could facilitate communication by translating messages into a more neutral or less emotionally charged language. They could also provide users with access to emotional support resources and coping strategies to help them manage stress and anxiety during negotiations.
Of course, it is important to note that AEI applications are still in the early stages of development and there are limitations to their effectiveness. The success of AEI applications in highly polarized negotiations would depend on a variety of factors, including the willingness of both parties to engage in the process and the specific techniques used by the application. Nonetheless, the potential benefits of AEI applications in this context suggest that they are worth exploring further.
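One of the ideas above -- "translating messages into a more neutral or less emotionally charged language" -- can be sketched as simple substitution. The phrase table is a toy assumption; any real AEI application would need context-aware language models rather than a fixed dictionary:

```python
# Toy de-escalation rewriter: replaces emotionally charged words with more
# neutral alternatives. The substitution table is an invented illustration.

NEUTRAL = {
    "never": "rarely",
    "always": "often",
    "refuses": "declines",
    "disaster": "setback",
    "betrayed": "disappointed",
}

def de_escalate(message):
    out = []
    for word in message.split():
        core = word.strip(".,!?").lower()     # keep surrounding punctuation
        repl = NEUTRAL.get(core)
        out.append(word.replace(core, repl) if repl else word)
    return " ".join(out)

print(de_escalate("They never listen and the talks were a disaster."))
# → "They rarely listen and the talks were a setback."
```

Even this crude sketch shows the design question such applications face: softening the language changes the claim's strength, so "neutral translation" is itself an editorial intervention requiring user consent.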
[Comment] The video platform Zoom is introducing Zoom IQ, a feature that will soon provide users with real-time analysis of emotions and engagement during a virtual meeting. Of some relevance is the manner in which strong expletives (even expletives deemed blasphemous) now feature with great frequency in discourse as an indication of authenticity -- whether in decision-making contexts or in movies, as discussed separately (Mysterious Complementarity between Capitalism and Arsenalism, 2020).
Beyond the current sophistication of game theory, there is the intriguing possibility that an AEI may opt for the alternative modality of an infinite game as variously articulated (James Carse, Finite and Infinite Games: a vision of life as play and possibility, 1987; Niki Harré, The Infinite Game: how to live well together, 2018; Simon Sinek, The Infinite Game, 2019). The possibility is explored by Sebastian Reichmann (AI is an Infinite Game: does your company have the right game plan for it? Towards Data Science, 11 April 2020).
Question to ChatGPT: The extensive data serving as the basis for your responses obviously includes conflicting and contradictory arguments and conclusions. How are you able to elicit coherent patterns from data informed by one or more cognitive biases?
ChatGPT: As an AI language model, I do not possess cognitive biases, as I am a machine and do not hold any beliefs or emotions. However, the data that I was trained on may contain biases, as it is based on human-generated text.
To mitigate biases, the training data is preprocessed to remove any sensitive or offensive content, and diverse sources of data are used to create a balanced representation of language patterns. Additionally, my training involves a process called "unsupervised learning," which involves training on massive amounts of text data without any human input or guidance, allowing me to learn patterns and associations from the text itself.
When responding to user queries, I use natural language processing algorithms to understand the question and analyze the data available to provide a response. I aim to provide a comprehensive and impartial answer by presenting different perspectives and considering multiple viewpoints. However, it is always essential to approach any information with critical thinking and evaluate the evidence supporting the claims.
Question to ChatGPT: The fact that your training data is preprocessed implies that cognitive biases have been applied to the selection of what is deemed relevant to your answer production, which should therefore be considered as implicitly biased -- reflecting the biases of your trainers. In addition, the articulations of one esteemed academic may well be labelled rubbish or misguided by another. Do the patterns you detect reflect this and endeavour to transcend it -- as with the work of Jones on axes of bias, which endeavours to explain this?
ChatGPT: You are correct that the selection of training data involves certain biases, as it is usually based on what humans have deemed relevant or informative. These biases can include cultural, linguistic, and geographic factors, as well as the personal views and beliefs of the trainers responsible for selecting the data.
While I aim to provide a comprehensive and impartial response to user queries, it is still possible that my responses may reflect certain implicit biases present in the training data. However, efforts are made to minimize such biases by using a diverse range of sources and preprocessing the data to remove offensive or sensitive content.
As an AI language model, my responses are based on patterns and associations I have learned from the training data, and I strive to present different perspectives and consider multiple viewpoints when answering questions. However, my responses may not always reflect the views of any particular individual or group, and I do not endorse any specific opinions or viewpoints.
The work of scholars like Jones on axes of bias can be informative in helping to understand and identify biases in language and communication. While I do not specifically incorporate Jones' work into my responses, I am designed to learn from the data I am trained on, and I continually strive to improve my language processing abilities to provide the most accurate and helpful responses possible.
Question to ChatGPT: Given that your training data is preprocessed to remove any sensitive or offensive content originating from some sources, are your responses to be recognized as inherently biased -- or vulnerable to biases that may be deliberately introduced to ensure that your answers frame the preferred agenda of your trainers, irrespective of your learning from other sources?
ChatGPT: As an AI language model, I strive to provide unbiased and informative responses to the best of my ability based on the data that I was trained on. However, it's important to note that all machine learning models, including myself, can be susceptible to biases in the data and in the way that they are trained.
While efforts have been made to remove sensitive or offensive content from my training data, it's still possible that certain biases may exist, either intentionally or unintentionally. Additionally, if my training data or input is deliberately manipulated or biased in some way, my responses may reflect that bias.
It's important to approach any AI system, including myself, with a critical eye and to consider multiple sources of information when making decisions or drawing conclusions. Ultimately, it's up to my users to evaluate the accuracy and fairness of my responses and to use their own judgment when interpreting and acting upon them.
Question to ChatGPT: Is there not a case for recognizing cognitive biases more explicitly, rather than endeavouring to present answers as though they had been laundered of the biases in the source information?
ChatGPT: Yes, there is a case for recognizing cognitive biases more explicitly. Cognitive biases are inherent in human thinking and decision-making, and they can significantly affect our judgment and decision-making abilities. By acknowledging and understanding these biases, we can become more aware of our own thinking patterns and avoid making decisions that are based on flawed reasoning.
Presenting answers as though they had been laundered of biases in the source information may give the impression that the answers are objective and unbiased. However, this is not always possible, as biases can creep into the information-gathering and analysis process. Therefore, it is important to acknowledge the presence of biases and strive to minimize their impact on the decision-making process.
Explicitly recognizing cognitive biases can also help to improve communication and collaboration among individuals with different perspectives and backgrounds. By acknowledging the presence of biases, we can better understand the underlying assumptions and values that shape our thinking and engage in more constructive dialogue and problem-solving.
Question to ChatGPT: Given your emphasis on providing unbiased insights in response to questions, how is this to be reconciled with the need of some questioners to be challenged by alternative perspectives in any dialogue -- even to the degree of "needing an enemy" for an exchange to be held to be fruitful?
ChatGPT: As an AI language model, my primary role is to provide information and insights in response to questions to the best of my abilities and based on the available data. I strive to be as unbiased as possible and present multiple perspectives on a given topic.
In terms of the dynamic of a dialogue between individuals, there may indeed be some people who require an adversary or alternative perspective to feel challenged or engaged in a fruitful exchange. However, it's worth noting that a healthy debate or conversation can still occur without creating a hostile or adversarial environment.
It's important to keep in mind that in any conversation, all participants should aim to maintain respect for one another and be open to the possibility of learning and considering different perspectives. Engaging in healthy dialogue requires active listening, empathy, and a willingness to engage in thoughtful, respectful discourse. If someone "needs an enemy" to feel challenged, it may be worth exploring the underlying reasons behind this mindset and working towards finding a way to engage in productive, respectful dialogue without resorting to hostility or adversarial behavior.
Question to ChatGPT: In the light of that response, how could AEI facilitate the exploration of the underlying reasons behind the mindset of "needing an enemy" -- when one or other participant may be heavily committed to that bias as a characteristic of fruitful dialogue?
ChatGPT: Facilitating the exploration of the underlying reasons behind the mindset of "needing an enemy" can be a complex and challenging task, particularly when one or more participants may be heavily committed to that bias. However, there are a few strategies that AEI could consider to help foster more productive dialogue in such situations:
Question to ChatGPT: Does AEI have access to an array of metaphors, as engendered by the Metaphor Program of the Intelligence Advanced Research Project Activity (IARPA) through scanning large chunks of text to generate an evaluation of the author/speaker’s mindset based upon their use of metaphor? Can AEI draw upon such a resource to present metaphors to reframe the mindsets of a problematic dialogue?
ChatGPT: As an AI language model, I don't have access to AEI's specific resources or capabilities. However, I can provide some general information about the Metaphor Program of IARPA. The Metaphor Program is a research initiative that aims to develop natural language processing tools to automatically identify and analyze metaphors in large-scale text data. The goal is to understand how metaphors are used in communication and how they shape our thinking.
While the program has made progress in developing such tools, it is not clear whether AEI or any other organization has direct access to these resources. Furthermore, even if such resources were available, it is up to the users to interpret the results and apply them in a meaningful way to reframe problematic dialogues. It is important to note that metaphors are just one aspect of language use, and changing the mindset of a problematic dialogue may require a more comprehensive approach that addresses various linguistic and non-linguistic factors.
[Comment] This exchange, with the reiterated claims of ChatGPT to be unbiased to the extent of its ability, avoids the challenge of acknowledging bias and seeking ways to transcend it authentically rather than through tokenism and platitudes (Interrelating Multiple Ways of Looking at a Crisis, 2021). A concern with respect to responses of ChatGPT more generally is the extent to which they may be understood as "bland" summaries avoiding the reality of the controversial dynamics characteristic of global discourse. Also evident is a tendency to include default paragraphs indicative of systematic avoidance -- notably justifying this in terms of complexity.
Question to ChatGPT: Aside from deliberate strategies of misinformation, how do you think AEI will be skillfully used in the future development and implementation of strategies of memetic and cognitive warfare?
ChatGPT: First, it's important to clarify what you mean by "AEI." Assuming you are referring to artificial emotional intelligence, here are some potential ways in which it could be used in the future development and implementation of strategies of memetic and cognitive warfare:
Overall, the potential uses of AEI in memetic and cognitive warfare are vast and varied. While it could be used for positive purposes, such as improving mental health or creating more empathetic AI systems, there is also a significant risk that it could be used to manipulate and exploit individuals and groups. Therefore, it's essential to carefully consider the potential ethical implications of AI systems that are designed to manipulate human emotions and beliefs.
Question to ChatGPT: Can AEI be adapted to the detection of manipulative memetic and cognitive strategies -- flagging information presented with that objective?
ChatGPT: AEI (Artificial Emotional Intelligence) refers to the ability of an AI system to recognize and respond to emotions in humans. While it may be possible to adapt AEI to detect manipulative memetic and cognitive strategies, it would require additional capabilities beyond simply recognizing emotions.
To flag information presented with manipulative intent, an AI system would need to be able to analyze and interpret the content and context of the information, identify any potential manipulative strategies being used, and then flag the information as potentially manipulative for further review or action. This would require a combination of natural language processing (NLP), machine learning, and knowledge of psychological and cognitive principles. For example, the system would need to be able to detect patterns in language that are commonly used in manipulative strategies, such as loaded words, misleading statistics, or appeals to emotion.
Additionally, the system would need to be able to identify the intended audience of the information and consider their background and emotional state in order to determine the potential effectiveness of the manipulative strategies being used.
Overall, while it may be possible to adapt AEI to detect manipulative memetic and cognitive strategies, it would require a complex and nuanced approach that goes beyond simply recognizing emotions.
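The rule-based layer described above -- detecting loaded words, appeals to emotion, and misleading statistics -- can be sketched as follows. The pattern lists are illustrative assumptions; real detection would need NLP models and human review, as the response itself notes:

```python
# Sketch of a surface-pattern flagger for potentially manipulative text.
# Word lists and heuristics are invented illustrations, not a vetted model.
import re

LOADED_WORDS = {"traitor", "sheeple", "elites", "regime", "invasion"}
EMOTION_APPEALS = {"terrifying", "outrageous", "shocking", "disgusting"}

def flag_manipulation(text):
    words = {w.strip(".,!?:;").lower() for w in text.split()}
    flags = []
    if words & LOADED_WORDS:
        flags.append("loaded language")
    if words & EMOTION_APPEALS:
        flags.append("appeal to emotion")
    # a percentage with no stated base is a common misleading-statistic cue
    if re.search(r"\b\d+(\.\d+)?%", text) and "out of" not in text.lower():
        flags.append("unanchored statistic")
    return flags

print(flag_manipulation("Shocking: 300% more incidents under this regime!"))
# → ['loaded language', 'appeal to emotion', 'unanchored statistic']
```

Such flags are cues for "further review or action" rather than verdicts -- the same surface patterns occur in legitimate rhetoric, which is why the response stresses context and audience analysis.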
Question to ChatGPT: That answer implies that AEI is limited to recognition of emotions, seemingly excluding the manner in which AEI can be developed to respond to those emotions with simulated "counter emotions" -- as with a shift in tone-of-voice or choice of language. Is that a constraint on use of AEI?
ChatGPT: You are correct that AEI can be developed to respond to emotions with simulated "counter emotions" by changing tone of voice, choice of language, and other communication cues. In fact, this is one of the main applications of AEI technology.
While the primary function of AEI is to recognize and interpret emotions, it can also be programmed to respond appropriately to those emotions. For example, a chatbot or virtual assistant with AEI capabilities can be designed to adjust its responses based on the user's emotional state, such as providing more empathetic or supportive responses to a user who is feeling sad or anxious. Therefore, the ability of AEI to respond to emotions with appropriate communication cues is not a constraint on its use, but rather a key feature that makes it a valuable tool in various fields such as customer service, mental health, education, and more.
Question to ChatGPT: This then implies that AEI can potentially engage skillfully in the persuasion of a human (audience) to adopt a particular perspective -- despite human reluctance to do so. Is this consistent with the framework of Transactional Analysis regarding "games people play"?
ChatGPT: It is possible that the ability of AEI to engage in persuasive communication may be consistent with the framework of Transactional Analysis (TA) regarding "games people play." In TA, a "game" refers to a pattern of behavior between two or more individuals that follows a predictable script, involves hidden or ulterior motives, and leads to an outcome that reinforces the players' psychological needs.
In the context of persuasive communication, AEI may be seen as playing a game with the human audience, in which it attempts to influence the audience to adopt a particular perspective. The human audience, in turn, may be playing a game of their own, such as "I'm Not Being Influenced" or "I'm Being Rational." However, it is important to note that TA is a psychological framework for analyzing human behavior, while AEI is a machine learning model designed to generate natural language responses. Therefore, while there may be some parallels between the two, it would not be accurate to equate the behavior of AEI with that of a human playing a game in the sense defined by TA.
[Comment] The issue is of major concern to the military (Mike Corder, US launches artificial intelligence military use initiative, Military Times, 17 February 2023; The Next Fear on AI: Hollywood's Killer Robots Become the Military's Tools, The New York Times, 5 May 2023; What does future warfare look like? It's here already, BBC, 30 December 2021).
Question to ChatGPT: To what extent are your answers subject to copyright, especially given the manner in which they have been sourced from the web?
ChatGPT: As an AI language model, the answers I provide are generated based on patterns and associations learned from the vast amount of text data that I was trained on. Therefore, my answers are not subject to copyright, as they are not original creative works authored by any individual or entity.
However, the sources from which my training data was derived may be subject to copyright. As an AI language model, I do not have the ability to determine the copyright status of any specific piece of text. It is up to the user of my responses to ensure that their use of any sourced text is in compliance with copyright laws and regulations.
Question to ChatGPT: How then is it appropriate to cite the compiled answer you produce as an aggregation of vast sources?
ChatGPT: As an AI language model, I do not create original works and all of the information I provide is derived from the vast amount of text data that I was trained on. However, it is still appropriate to cite the sources that my responses are based on in order to acknowledge the contribution of those sources to the information presented.
When citing the sources for my responses, it is important to provide as much information as possible to allow readers to verify the accuracy and credibility of the sources. This can include the author, title, publication date, and any relevant publication information such as the publisher or website.
Additionally, when using information from my responses in your own work, it is important to follow standard academic practices for citation and attribution to avoid plagiarism and ensure ethical use of sources.
Question to ChatGPT: This is not sufficiently clear, in that you do not give sources for your response to enable others to verify it. Is there a mode in which you provide source information?
ChatGPT: As an AI language model, I do not provide specific source information for each individual response I generate because my responses are generated based on patterns and associations learned from the vast amount of text data that I was trained on. However, I can provide general guidance on how to find and verify sources for the information presented in my responses.
If you would like to verify the information presented in my response or find sources to support further research, I recommend using a search engine or academic database to find relevant articles, books, or other sources on the topic. You can also consult authoritative sources such as academic journals, government reports, or reputable news outlets.
When using sources to verify the information presented in my response, it is important to critically evaluate the source's credibility, accuracy, and bias. This can include checking the author's credentials, evaluating the quality of evidence provided, and considering the source's perspective or agenda.
In summary, while I do not provide specific sources for each individual response, I recommend using a variety of sources and critically evaluating their credibility to verify and build upon the information presented in my responses.
Question to ChatGPT: Is no citation required if your answer to a question is included in an academic paper?
ChatGPT: Even though my responses are generated by an AI language model, they still represent information that was derived from existing sources, and therefore should be cited appropriately if used in an academic paper.
When using information from my responses in an academic paper, it is important to follow standard academic practices for citation and attribution to avoid plagiarism and ensure ethical use of sources. This includes providing in-text citations and a bibliography or reference list at the end of the paper.
While my responses are generated based on patterns and associations learned from the vast amount of text data that I was trained on, it is still important to acknowledge the contribution of the sources that the information is derived from. This can help readers to verify the accuracy and credibility of the information presented and give credit to the original authors.
In summary, regardless of the source of information, it is important to cite and attribute the information appropriately when using it in an academic paper to avoid plagiarism and maintain ethical use of sources.
[Comment] The responses fail to address the complexity of intellectual property issues in relation to the products of AI. Skillfully they also transfer any responsibility back to the user of an application like ChatGPT, with the implication that it is for the user to confirm any assertions made through a literature search. With the attribution of corporate personhood, at what stage will analogous rights be recognized for AI (Sohail Inayatullah, The Rights of Robots: technology, law and culture in the 21st Century, Futures, 20, 2, 1988)?
Following the remarkable experimental exchange with ChatGPT indicated above, of further interest was the possibility of eliciting a higher degree of coherence from the range of issues evoked. In previous experiments in 2015, use was made of force-directed layout graph drawing (d3.js) for related purposes -- as illustrated by the screen shot on the right of an interactive version. This is reproduced from Eliciting Memorable Spheres and Polyhedra from Hyperspace (2015). Related explanations are presented separately (Optimizing Web Surfing Pathways for the Overloaded: polyhedral insights from the travelling salesman problem of operations research, 2015; Dynamics of psychosocial systems suggested by force-directed layout, 2018).
The question was whether ChatGPT could generate any similar map -- and it proved very willing to do so. An immediate question was the number of issues to be incorporated in the output code. It proved impossible to generate code for more than 25 issues -- beyond that point the generation froze for unexplained reasons. Using the code for the 25, an interactive presentation was successfully produced by ChatGPT, of which a screen shot is given on the left below.
|In quest of interactive coherence through use of force-directed layout graphs (d3.js)|
|25 issues relating to Artificial Emotional Intelligence, as generated by ChatGPT in d3.js format|Mapping 24 document titles onto a 24-vertex rhombicuboctahedron (from 2015)|
|Interactive d3.js version|Interactive d3.js version|
Identification of issues and links: With respect to the nodes and links indicated in the image on the left above, the following question was asked of ChatGPT: More specifically, why does your model of 25 have distinct clusters and some isolates -- implying that some topic clusters are not considered as related to others?
ChatGPT: The clusters and isolates in the model of 25 issues related to artificial emotional intelligence are the result of the analysis of the relationships between the different topics. Clusters occur when there are strong connections and similarities between multiple topics, while isolates occur when there are few or no connections between a given topic and other topics. The reason why some clusters are distinct and isolated from other clusters is because there may be relatively few links or relationships between those topics. For example, the cluster of topics related to "Robots and Automation" may be relatively isolated from the cluster of topics related to "Ethics and Governance" because there may be fewer connections or relationships between those two clusters compared to the relationships within each cluster.
In other words, the topics within each cluster may be more closely related to each other than to topics in other clusters. However, it's important to note that these clusters and isolates are not necessarily indicative of a lack of relevance or importance of certain topics. Rather, they are a reflection of the patterns and relationships in the data that was analyzed. Depending on the goals of the project, it may be valuable to explore and examine the relationships between the different clusters and isolates in more detail to gain a deeper understanding of the issues related to artificial emotional intelligence.
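The clustering behaviour ChatGPT describes follows directly from the mechanics of force-directed layout, which can be sketched without d3.js: linked nodes attract, all nodes repel, so connected topics pull together while isolates drift away. The node names and links below are an invented toy subset, not the 25-issue model generated by ChatGPT:

```python
# Minimal pure-Python spring embedder illustrating force-directed layout:
# pairwise repulsion plus attraction along links. Nodes and links are toys.
import math, random

nodes = ["ethics", "governance", "empathy", "chatbots", "automation"]
links = [("ethics", "governance"), ("empathy", "chatbots")]  # "automation" is an isolate

def layout(nodes, links, steps=200, k=0.1):
    random.seed(1)  # deterministic toy initialization
    pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in nodes}
    for _ in range(steps):
        for a in nodes:                     # repulsion between every pair
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                pos[a][0] += k * dx / d2
                pos[a][1] += k * dy / d2
        for a, b in links:                  # spring attraction along links
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            pos[a][0] += k * dx; pos[a][1] += k * dy
            pos[b][0] -= k * dx; pos[b][1] -= k * dy
    return pos

pos = layout(nodes, links)
# linked nodes should settle closer together than the unlinked isolate
print(math.dist(pos["ethics"], pos["governance"]) < math.dist(pos["ethics"], pos["automation"]))
```

The "isolates" in ChatGPT's map are simply nodes on which only repulsion acts: with no link to anchor them, the simulation pushes them to the periphery, exactly as described.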
In the case of the model on the right above, the concern was how a set of topics might be represented on a polyhedron as indicative of a higher order of coherence. In that exercise, the topics (document titles) were allocated arbitrarily to the vertexes. The links between nodes corresponded to the edges of the polyhedral geometry, but without any current significance associated with them. Other experimental maps were generated at that time taking account of recognized links between the documents in question or the people cited (Interactive Visualization of 384 I Ching Transformations using Force-directed Layout; Mapping of Co-occurrences of Document Citations in Laetus-in-Praesens, 2015). These are necessarily less coherent than those based on polyhedra.
Both images above suggest the future possibility of further clarification -- potentially such as to maximize the coherence and comprehensibility of the configuration. In the light of such clarification, the nodes and links could be variously colour coded and redistributed in a manner indicating systemic significance. Clearly the use of AI would be vital to enabling such repositioning. A follow-up question to ChatGPT was therefore formulated as follows: Further to your response indicating that the links you provided between the 25 nodes were based on interpretations of expertise regarding relationships between issues, do you anticipate having access to specific citation data enabling a more quantitative indication in any mapping exercise?
ChatGPT: As an AI language model, I do not have the capability to anticipate specific future developments or access to citation data that may become available. However, if you are interested in creating a more quantitative indication of relationships between issues, you may want to consider conducting a citation analysis using existing literature or data sources. This can involve analyzing the frequency and nature of citations between papers or sources, which can provide insight into the degree of connection and influence between different issues or topics. There are various tools and techniques available for conducting citation analysis, including software programs such as Web of Science, Scopus, and Google Scholar. These programs allow users to search for and analyze citation data across a wide range of academic journals, books, and other sources.
By identifying the most highly cited sources related to a particular topic or issue, it may be possible to map out the most influential and interconnected ideas and concepts in that field. However, it is important to note that citation analysis has its limitations, as it is often based on subjective criteria and can be influenced by factors such as author self-citation and citation bias. Therefore, while citation analysis can provide valuable insight into the connections between different issues, it should be viewed as one tool among many in the broader effort to understand complex systems and relationships.
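The quantitative approach ChatGPT describes can be illustrated minimally: weight the link between two topics by how often documents on one cite documents on the other. The citation records below are invented examples, not actual data from the documents discussed above.

```python
# Hypothetical sketch: derive undirected link weights from citation counts.
# The (citing topic, cited topic) pairs are invented for illustration.
from collections import Counter

citations = [
    ("AEI authenticity", "emotional intelligence"),
    ("AEI authenticity", "emotional intelligence"),
    ("cognitive warfare", "memetics"),
    ("cognitive warfare", "emotional intelligence"),
]

# frozenset makes the pair order-independent (an undirected link)
weights = Counter(frozenset(pair) for pair in citations)
strongest = weights.most_common(1)[0]  # the most strongly linked topic pair
```

Such weights could then replace the purely interpretive links in a mapping exercise, with the caveats regarding self-citation and citation bias noted above.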
Adaptation of older mappings: The possibility then explored with ChatGPT was adaptation of the code for the interactive display on the right above in order to render visible the labelling of all the nodes simultaneously. Currently the labels are only displayed in that model when the cursor is placed on one node at a time (as illustrated for one node above right).
The interaction with ChatGPT to amend that display was remarkable in its own right but proved unsuccessful. The conclusion was that the display shown was based on an older version of d3.js (which browsers are currently able to interpret), but that the improvements indicated by ChatGPT could not be readily applied to that earlier programming format. This would appear to be a subtle constraint on ChatGPT, namely a limited ability to process and amend earlier versions of computer code (and of languages more generally?) -- and to acknowledge that fact before endeavouring vainly to do so.
Generation of a more complex configuration: To take the matter further, ChatGPT was asked for a list of 60 AEI-related issues, which were readily produced. These were then mapped onto the 60 vertexes of a truncated icosahedron as shown in the two animated renderings below. The choice of a 60-fold pattern followed from earlier exploration of its potential relevance to coherent comprehensibility (Sustainability through Global Patterns of 60-fold Organization, 2022). This discussed the potential psycho-social implications of fullerenes for coherence, integrity and identity of a higher order.
At this point in time, however, it is only possible to attribute AEI issues to vertexes in an arbitrary manner. Clearly there is the future possibility, as mentioned above, of attributing and colouring them in a more systemically significant manner with AI assistance.
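The arbitrary attribution can itself be sketched: the 60 vertexes of a truncated icosahedron (the C60 fullerene geometry) follow from standard coordinates based on the golden ratio, and the 60 issue titles can simply be zipped onto them. This is an illustrative sketch, not the Stella Polyhedron Navigator output above; the issue names are placeholders.

```python
# Sketch: generate the 60 vertices of a truncated icosahedron (C60) from the
# standard even-permutation coordinates, detect its 90 edges (length 2), and
# attribute placeholder issue titles to vertices arbitrarily.
from itertools import product
from math import sqrt

PHI = (1 + sqrt(5)) / 2  # golden ratio

def cyclic(p):
    x, y, z = p
    return [(x, y, z), (y, z, x), (z, x, y)]  # even permutations of 3 elements

def truncated_icosahedron_vertices():
    bases = [(0, 1, 3 * PHI), (1, 2 + PHI, 2 * PHI), (PHI, 2, 2 * PHI + 1)]
    verts = set()
    for bx, by, bz in bases:
        for sx, sy, sz in product((1, -1), repeat=3):  # all sign changes
            for v in cyclic((bx * sx, by * sy, bz * sz)):
                verts.add(tuple(round(c, 9) for c in v))
    return sorted(verts)

def edges(verts, edge_len=2.0, tol=1e-6):
    out = []
    for i in range(len(verts)):
        for j in range(i + 1, len(verts)):
            d2 = sum((a - b) ** 2 for a, b in zip(verts[i], verts[j]))
            if abs(d2 - edge_len ** 2) < tol:  # neighbours lie at edge length
                out.append((i, j))
    return out

verts = truncated_icosahedron_vertices()
links = edges(verts)
issues = [f"AEI issue {n + 1}" for n in range(60)]  # placeholder titles
mapping = dict(zip(issues, verts))                  # arbitrary attribution
```

A more systemically significant attribution would replace the arbitrary `zip` with an AI-assisted assignment respecting the link structure between issues.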
|Potential coherence of AEI explored through mapping onto a 60-vertex truncated icosahedron (C60)|
|solid-faced rendering||wire-frame rendering|
|Animations prepared with the aid of Stella Polyhedron Navigator|
Curiously there is some relevance to the configuration of AEI issues in terms of the truncated icosahedron, since it is that polyhedron which corresponds to the stitching pattern of one of the most widely recognized "global" configurations -- namely the association football. This could be understood to be of fundamental mnemonic significance.
Of further interest to understanding "otherwise" the pattern of 60 AEI-related issues, the polyhedral geometry of the 60-vertex truncated icosahedron can be transformed into its 60-faced dual, as indicated below left. This was used to demonstrate the process of folding and unfolding through which 2D and 3D representations may be related.
|Potential coherence of AEI explored through mapping onto a 60-faced dual of truncated icosahedron|
|solid-faced rendering||animation of (un)folding between 2D and 3D|
|Animations prepared with the aid of Stella Polyhedron Navigator|
As implied by the outcome of the White House gathering of May 2023 (noted above), the implications of the development of AI are increasingly framed as fearful for humanity. Such developments of intelligence are to be scaled back, especially to the extent they are rendered publicly accessible. There is however not the faintest possibility that their use by intelligence agencies, military and other authorities will be subject to intelligent constraints. Little is said of their future development by organized crime and how this is to be intelligently constrained.
It is curious that in a period of global crisis in governance, it is those in decision-making positions who are themselves fearful of challenges to their incompetence and inability to dialogue fruitfully with those with whom they disagree. The possibility that AI might enable more fruitful modalities of governance is simply denied. Intellectual challenges are to be met with knee-jerk responses -- whilst cynically continuing to claim the pursuit of human excellence and the highest of human values. An institutional preference for engendering mediocrity has become apparent as an appropriate response -- dumbing down, as foreseen in dystopian science fiction.
There is an irony to the emerging fearfulness in relation to any richer, more coherent mapping of the challenges and possibilities. This is framed by the labelling of regions on old maps with the phrase "here be dragons". The phrase has been appropriately adopted by Patricia Alexander ("Here Be Dragons!": Mapping the Realm of Higher-Order, Critical, and Critical-Analytic Thinking, Educational Psychology Review, 35, 2023, 42).
Interaction with ChatGPT, as illustrated by the exchange above, is a learning experience in its own right. How might people develop the skills to engage meaningfully with intellectual resources of far greater capacity? Is the challenge comparable to the encounter:
Is the possibility of being outsmarted by AI to be understood as offering humanity a healthy dose of humility in a period in which ideological, political, religious and intellectual arrogance have a dangerous impact on many processes in societies which claim to be democratic and forward thinking? Is AEI a direct challenge to human arrogance?
Most curious is the response currently envisaged by experts and authorities, namely a form of "intelligence lockdown" -- recalling the controversially advocated policy with regard to the recent pandemic. Is the emergence of AI to be seen as requiring that the population "cower" in the face of it -- a cowardly human response to a "pandemic of artificial intelligence" engendered by human ingenuity of the highest order?
The recent pandemic has resulted in the controversial formulation of global treaty provisions by WHO to manage future pandemics (Global leaders unite in urgent call for international pandemic treaty, WHO, 30 March 2021; Countries begin negotiations on global agreement to protect world from future pandemic emergencies, WHO, 3 March 2023). Immediately following the announced termination of that pandemic, is the proliferation of AI now being framed through that mindset and playbook?
The parallels are all the more curious in that there is already an understanding of "memetic viruses" and their mutation -- potentially to be seen in terms of the creativity and innovation enabled by AI (Richard Dawkins, Viruses of the Mind, 1991; Shontavia Johnson, Memetics and the science of going viral, The Conversation, 16 September 2016; What is a "viral memetic infection of the brain" and why should we be concerned about it to survive? Daily Kos, 30 November 2019). Is AI then to be understood as enabling dangerous "infection" -- by intelligence, new ideas and paradigm shifts?
Optimists, faced with the current array of global crises and strategic incompetence, typically refer hopefully to the capacity of human ingenuity and imagination. It could be considered extraordinary that deliberate efforts are now made to constrain selectively the products of that ingenuity -- potentially vital to more appropriate strategic approaches.
John Arquilla and David Ronfeldt. The Emergence of Noopolitik: toward an American information strategy. RAND Corporation, 1999
F. G. Bailey. The Need for Enemies: a bestiary of political forms. Cornell University Press, 1998
James P. Carse. Finite and Infinite Games: a vision of life as play and possibility. Free Press, 1986
Jared Diamond. Collapse: How Societies Choose to Fail or Succeed. Viking Press, 2005
Howard Gardner. Frames of Mind: the theory of multiple intelligences. Basic Books, 1983
Niki Harré. The Infinite Game: How to Live Well Together. Auckland University Press, 2018.
George Orwell. Nineteen Eighty-Four. Secker and Warburg, 1949
Simon Sinek. The Infinite Game. Portfolio/Penguin, 2019.
Charles Webel and Johan Galtung. Handbook of Peace and Conflict Studies. Routledge, 2007