Dangerous development of AI?
Artificiality of human intelligence?
Authenticity of human intelligence?
Test for "artificiality" complementary to the Turing Test for "humanity"
Human augmentation and acquisition of artificial characteristics?
Relative "inhumanity" of humans compared to AI in the future?
Human intelligence as characterized by "muddling through"?
Being "only human" as primary characteristic of "humanity"?
Dangerous enhancement of cognitive abilities?
Human augmentation versus Psychosocial development?
Conventional methodologies and "project logic"
Ineffectual application of psychosocial insights?
Problematic regulation and oversight
Remedial potential and dangers of cyborgization and transhumanism
Humanity framed by implication of AI for religion and ethical systems?
Implication of AI for global ethics, interfaith reconciliation and interdisciplinarity
Artificiality of human constructs and disengagement of intelligence therefrom?
Superficiality of humanity as currently conceived?
Nesting polyhedral configurations within cosmograms and "cosmohedra"
Much is now made of the potential dangers of artificial intelligence and the urgent need for its regulation (Yoshua Bengio, et al., "Large-scale Risks from Upcoming, Powerful AI Systems": managing AI risks in an era of rapid progress, Global Research, 25 October 2023). Media have focused on the recent assertion by Henry Kissinger in the light of his involvement in the World Economic Forum (AI Will Replace Humans within 5 Years, Slay, 28 October 2023). The more dramatic argument is presented by Geoffrey Hinton (John Davidson, Godfather of AI stands by extinction warning, Financial Review, 1 November 2023).
Such are purported to be the dangerous consequences for the world that the UK government has urgently hosted an AI Safety Summit of leaders in the field at Bletchley Park -- the iconic location of the computer innovation developed in response to the threat of World War II (BBC, 28 October 2023). The Summit gave rise to the so-called Bletchley Declaration, reinforced by a statement by the UN Secretary-General (Statement at the UK AI Safety Summit, United Nations, 2 November 2023). It is unclear whether any form of AI was used there to enhance the quality of discourse typical of such events (Use of ChatGPT to Clarify Possibility of Dialogue of Higher Quality, 2023).
Failure to enhance the quality of interaction at such events raises the question as to whether they could be appropriately caricatured as "large language models" of an outmoded kind -- and dangerous as such -- with their own variant of the "hallucination" deprecated as a characteristic of AIs.
At the same time, the world is confronted by the unrestrained response by Israel to the attack from Gaza, and the many human casualties which are expected to result. However it is perceived, this is clearly the consequence of the application of human intelligence. In the case of the Israelis, their relative intelligence is widely recognized, if not a matter of ethnic pride (Nelly Lalany, Ashkenazi Jews rank smartest in world: studies show descendants of Jews from Medieval Germany, throughout Europe have IQ 20% higher than global average, Ynet News, 23 July 2011; Bret Stephens, The Secrets of Jewish Genius, The New York Times, 27 December 2019; Sander L. Gilman, Are Jews Smarter Than Everyone Else? Mens Sana Monographs, 6, 2008, 1; Kiryn Haslinger, A Jewish Gene for Intelligence? Scientific American, 1 October 2005).
The concern in what follows is how to distinguish, if that is possible, between the much publicized dangers of AI and those deriving from "human artifice". The nature of human artifice, and its dangerous consequences, has become confused by the focus on artificial intelligence. It is however clear that many global crises are the consequences of human artifice -- in the absence of any use of AI.
The Anthropocene Era might well be explored in such terms. AI as a safety concern is a latecomer to the scene -- itself a consequence of human artifice. The currently acclaimed urgency of the crisis posed by AI can even be seen as displacing the urgency previously accorded to climate change. The apparent shift of urgency to a strategic focus on the dangers of AI could even be seen as a convenience -- in the absence of viable responses to climate change.
Given the current multiplicity of global crises -- a polycrisis -- the seemingly absurd consequences of human artifice merit particular concern by comparison with those potentially to be engendered by artificial intelligence. The nature of intelligence has however long been a matter of debate, especially in the light of the search for extraterrestrial intelligence. Having engendered so many crises, it might even be provocatively asked whether "human intelligence", as acclaimed, will be appreciated as such by the future (Quest for Intelligent Life on Earth -- from a Future Perspective, 2023). However it might also be asked -- speculatively -- whether humanity is endowed with some form of "indwelling" intelligence, relevant to crisis response, elusive though it may be (Implication of Indwelling Intelligence in Global Confidence-building, 2012).
The particular concern in what follows is whether what is appreciated as "human intelligence" has progressively acquired characteristics which might be deprecated as "artificial". How artificial has "humanity" become? What indeed is the distinction between "artificial" and "artifice"? How artificial is one's intelligence as a human being? The question has been variously discussed (Arun, The artificiality of human intelligence, The Daily Guardian, 27 July 2020; Sybille Krämer, The Artificiality of the Human Mind: A Reflection on Natural and Artificial ...).
How is the distinction to be made between the "artificiality" of an agent of an organization representing humanity (or an expert representing a domain of expertise) and the "humanity" of that agent or discipline? Most obviously the question applies to those gathered at the AI Safety Summit, or to those charged with regulating AI in the future.
Some such progressive artificiality is to be expected as a form of human adaptation to an increasingly artificial environment and the skills required to survive within it. The adaptation to AI might well be understood in terms of the acquisition of features which are characteristic of AI -- and of its dangers to humanity. Dangers held to be associated with the technology of "artificial intelligence" then merit exploration as deriving from the unacknowledged projection of the artificiality increasingly associated with human intelligence.
This projection could be seen as an instance of misplaced concreteness -- the fallacy of reification. The dangers perceived in the technology are then to be understood as driven -- to some degree -- by questionable patterns of thinking. Ironically this understanding might be supported by insights into cognitive development from disciplines upholding perspectives which are the antithesis of those driving the development of AI technology. It is in this sense that the surprisingly extensive literature on AI from a variety of religious perspectives merits attention -- especially in the light of the challenge to ethics and morality now highlighted with respect to AI development.
The following exploration includes presentation of the challenge to ChatGPT as exemplifying an "interested party". That exchange helps to clarify the distinction which would seem to be of value at this time.
Safety of AI? As implied above, it is appropriate to ask how "artificial" was the collective intelligence evident at the AI Safety Summit -- and how this might be best understood in the light of warnings immediately before the event (Sunak's global AI safety summit risks achieving very little, warns tech boss, The Guardian, 20 October 2023; Editorial: Why the UK-led global AI summit is missing the point, Nature, 31 October 2023).
Following the questionable pattern of many international declarations of the past, attention will necessarily focus on the Bletchley Declaration signed at the AI Safety Summit:
As noted by Matthew Sparkes:
The reality is that technology – just as it has always done – is outpacing legislation. And if the world’s law-makers at least got up to speed on the latest developments in AI at Bletchley Park this week, it is hard to imagine they won’t need a refresher course by the time they meet again, with the face of AI having transformed once more. While summits might offer photo opportunities and the chance for politicians to rub shoulders with the likes of Elon Musk, no amount of gatherings can solve the problem of innovation outpacing legislation. (What did the UK's AI Safety Summit actually achieve? New Scientist, 2 November 2023)
Sparkes concludes with the provocative speculation: Perhaps this meeting could have been a ChatGPT-generated email, and saved the carbon expenditure of jetting everyone in.
Relevance to global governance? Prior to the Bletchley summit, the UN Secretary-General announced the creation of a new High-level Advisory Body on Artificial Intelligence, with the following focus:
Curiously, the comments prior to the declaration (and thereafter) tend to be vague, especially with regard to both the risks and opportunities of AI. Emphasis is on the "tremendous" opportunities and the enthusiasm of techno-optimists, matched by suggestions by some of dangers to humanity -- readily recognizable as the fear-mongering typical of other crises.
Curiously missing is any emphasis on how AI might indeed enhance global governance, including the management of those risks and opportunities -- in the light of past inadequacies in response to crises, despite access to the highest relevant expertise. Specifically, it might be asked why no use was apparently made of AI to enhance the AI Safety Summit and the formulation of the Bletchley Declaration. Somewhat ironically, there is no indication of how AI will contribute to the working of the UN High-level Advisory Body on Artificial Intelligence -- or undermine it as a feature of the risks so strongly anticipated.
How might AI enhance interactions at the UN Summit of the Future planned for 2024 -- in the light of the UN Secretary-General's vision for the future of global cooperation in the form of a report titled Our Common Agenda (2021)?
Challenges for the UN system? It is appropriate to note that the UN system has been endowed with an International Computing Centre (UNICC) since 1971. It was created as an inter-organization facility to provide electronic data processing services. It was established by a Memorandum of Agreement among the United Nations (UN), the United Nations Development Programme (UNDP) and the World Health Organization (WHO), pursuant to resolution 2741 (XXV) of the United Nations General Assembly.
There has been relatively little trace of the involvement of the UNICC in AI, despite its participation in the AI for Good Global Summit series organized by the ITU:
Of potential relevance is the lack of recognition of the long-standing challenge to the relationship among UN agencies, and with variously associated bodies, as documented at the time of the creation of UNICC at the International Symposium on the Documentation of the United Nations (Acquisition and Organization of International Documentation, 1972). The introductory report to the Second World Symposium on International Documentation (1980) was republished under the title Societal Learning and the Erosion of Collective Memory (1980). It is appropriate to ask whether such challenges -- typical of problematic inter-agency dynamics at all levels of governance -- will be effectively addressed through the UN's new advisory body, whether or not it is assisted by AI.
"Dark AI" and globally organized crime? Missing from the current enthusiasm for institutional safeguards and regulation is any recognition of how AI may be exploited by constituencies skilled in the evasion of regulatory nets -- most obviously organized crime. This naivety is evident in the light of the development of the Dark Web, thereby suggesting the emergence of a "Dark AI":
Such possibilities are evident only in the summit's preoccupation with cybersecurity -- ironically matched by massive investment in cyberwarfare technology by those preoccupied with AI safety.
Given the role of intelligence in the "Great Game" of the 19th century, there is a case for a speculative reformulation in which the new variant is to be recognized as the "game" of humanity with AI, rather than as previously framed (Playing the Great Game with Intelligence: Authority versus the People, 2013).
Arguably it is too readily assumed that "human intelligence" is the antithesis of "artificial intelligence". In the spirit of contrarian analysis, there is a case for questioning this assumption (Contrarian Analysis as a Means of Avoiding Intelligence Failure, 2023).
Question to ChatGPT: Could you comment on the significance of a question to humans: how artificial is your intelligence -- or on how that question might be formulated in a more fruitful manner? [Responses by ChatGPT 3.5 are presented below in shaded text]
The question "how artificial is your intelligence" is significant to humans for several reasons. It reflects the ongoing curiosity and concern about the capabilities and limitations of artificial intelligence (AI) systems like mine. Here are some aspects to consider:
To formulate this question in a more fruitful manner, one could consider the following approaches:
In summary, the question about the artificial nature of AI is significant, but formulating it in a more precise and context-specific manner can lead to more informative and constructive discussions about the capabilities and limitations of AI systems.
Question to ChatGPT: You have focused on the implications for AI, whereas my formulation sought to clarify how artificial human intelligence can be recognized to be. The emphasis is on what would justify recognition of some aspects of human intelligence as appropriately understood to be "artificial".
I see, you're interested in understanding how some aspects of human intelligence can be recognized as "artificial." This perspective addresses the idea that certain human behaviors or cognitive functions might appear artificial or non-genuine. To explore this further, you can consider the following points:
In essence, recognizing aspects of human intelligence as
"artificial" often comes down to assessing authenticity, genuineness,
and the complexity of human behaviors and responses. It involves
considering the context, emotional depth, adaptability, and the
presence of underlying intent. It's worth noting that the line between
genuine and artificial behavior can be subjective and
context-dependent, making it a complex and nuanced aspect of human
psychology and sociology.
Question to ChatGPT: I question whether authentic is the appropriate contrast to artificial. Is it not possible to be artificially authentic and authentically artificial? Could a human claim to be intelligent when the intelligence exhibited is artificial? Is a proportion of human intelligence more appropriately described as artificial?
You bring up a thought-provoking perspective, highlighting the complexity of the contrast between "authentic" and "artificial" intelligence. It is indeed possible for certain aspects of human intelligence to exhibit characteristics that might be described as "artificial," and there can be instances where something appears "artificially authentic" or "authentically artificial."
The distinction between authentic and artificial intelligence in humans is indeed nuanced, and it highlights the malleability and adaptability of human cognition. Some aspects of human intelligence are influenced by societal norms, cultural constructs, external influences, or learned behaviors, which can be considered "artificial" in the sense that they are not inherently or universally human.
Moreover, human intelligence is a diverse and multifaceted construct, encompassing various cognitive abilities and behaviors. While a substantial portion of human intelligence is innate and deeply rooted in our biology and evolution, another part is shaped by learned behaviors and external influences. This "artificial" aspect of human intelligence arises from the capacity to adapt, learn, and integrate external information and cultural norms.
In summary, it is possible to identify aspects of human intelligence that could be described as artificial, especially when they are influenced or shaped by external factors or when they intentionally mimic specific behaviors. The distinction between authentic and artificial intelligence in humans is indeed a complex and multifaceted issue, reflecting the flexibility and adaptability of human cognition.
Question to ChatGPT: Given efforts to develop a human dimension to AI, is it not appropriate to clarify any tendency for human intelligence to develop artificial characteristics. Whereas there is a Turing Test for the former, is there a case for a corresponding test for the latter?
The idea of clarifying the tendency for human intelligence to develop artificial characteristics is an intriguing one. As AI technologies advance and become increasingly integrated into various aspects of our lives, understanding the extent to which human intelligence might adopt or adapt to artificial characteristics is an important consideration. While there isn't a widely recognized "Turing Test" equivalent for assessing the development of artificial characteristics in human intelligence, there are several related concepts and areas of research to explore:
Developing a specific test to assess the extent to which human intelligence acquires artificial characteristics is a challenging task due to the complexity and diversity of human cognition. It would require careful consideration of the criteria, context, and ethical implications. However, ongoing research in human-computer interaction, ethics, and the integration of AI into various domains is working towards understanding these dynamics.
In summary, while there isn't a standardized test like the Turing Test for assessing artificial characteristics in human intelligence, there are various research avenues and interdisciplinary efforts to explore and understand how AI influences and potentially shapes human intelligence in ways that might exhibit artificial characteristics. This exploration is crucial as AI technologies continue to evolve and become more deeply integrated into our lives.
Further clarification was sought by presenting the above response from ChatGPT 3.5 to ChatGPT 4.0 for possible refinement. A further consideration, with respect to any supplement, was that a key to any such test may lie in the challenge of deepfakes. Could a human be distinguished from such a fake, or does the fakery exemplify the extent to which a human may be held to be artificial to some significant degree?
The concept of human intelligence acquiring artificial characteristics in the age of advanced AI is indeed a thought-provoking topic. As we intertwine our lives and decision-making processes with AI, the delineation between human and artificial cognition becomes increasingly blurred. There isn't a standardized assessment akin to the Turing Test for gauging the artificiality within human intelligence, but the emergence of deepfakes presents a unique lens through which to examine this phenomenon.
Refined response: The integration of AI into human cognition and decision-making raises the question of whether, and to what extent, human intelligence may begin to mirror artificial characteristics. Currently, there is no equivalent to the Turing Test specifically designed to measure the 'artificiality' of human intelligence. However, the interplay between humans and AI across various domains provides fertile ground for this inquiry:
Supplemental thoughts on deepfakes: Deepfakes present a compelling case study for the potential artificiality of human intelligence. They challenge our ability to discern genuine human expressions from AI-synthesized imitations. This technology not only questions the authenticity of digital content but also serves as a metaphor for the artificial characteristics that may be developing within human intelligence itself.
In essence, the rise of deepfakes could prompt a deeper exploration of how human intelligence adapts and responds to the capabilities of AI. As we become more adept at creating and interacting with sophisticated AI representations, the characteristics that we traditionally view as uniquely human may evolve, and our understanding of what constitutes 'artificiality' within human intelligence will need to be reexamined. This ongoing dialogue between human and machine intelligence, characterized by both cooperation and competition, will continue to shape our cognitive landscape.
Question to ChatGPT: Could development of human augmentation technologies, such as brain-computer interfaces, to enhance human cognitive abilities (by integrating AI) be understood as enhancing artificial characteristics?
Yes, the development of human augmentation technologies, such as brain-computer interfaces (BCIs) that integrate AI to enhance human cognitive abilities, can be understood as enhancing artificial characteristics within the realm of human intelligence. Here's why:
While the development of BCIs and similar technologies offers the potential for significant cognitive enhancements and benefits, it also sparks debates about ethical, societal, and philosophical aspects. The question of whether the introduction of artificial elements into human cognition enhances or alters the nature of human intelligence is a complex and ongoing discussion.
Some view this as a form of "augmented intelligence," where the human and artificial elements combine to create a more capable and adaptable intelligence. Others may argue that it introduces "artificial" characteristics or behaviors into human thought processes, as AI-driven enhancements can be different in nature and origin from traditional human cognitive capacities.
In essence, the integration of AI into human cognition through augmentation technologies blurs the lines between human and artificial intelligence, raising fundamental questions about the nature and boundaries of human intelligence in the context of a rapidly evolving technological landscape.
Question to ChatGPT: Is there then the possibility that development of AI may render AI "more human" than some humans whose proportion of intelligence could be more appropriately recognized as artificial to a high degree -- if not increasingly so?
The idea that AI could become "more human" than some humans, or that a portion of human intelligence might be recognized as increasingly artificial, is a thought-provoking concept and a subject of ongoing debate and speculation. Here are some key considerations:
In essence, while AI can surpass human performance in specific tasks, it's important to remember that the comparison between AI and human intelligence is not entirely straightforward. AI lacks many aspects of human intelligence, such as consciousness, empathy, and the ability to truly understand and navigate the nuances of the human experience. Human intelligence is shaped by a combination of innate capacities, cultural and social influences, and personal experiences, making it a unique and multifaceted phenomenon.
The integration of AI into human life presents opportunities for collaboration and augmentation, but it also requires careful consideration of the implications for human cognition, ethics, and society. The question of whether AI can become "more human" than some humans remains a complex and evolving area of exploration.
Question to ChatGPT: I wonder whether what is conventionally framed as "human intelligence" -- by humans and by AI -- is primarily characterized by "muddling through", framed positively as creativity, although a more problematic framing might be appropriate.
The concept of "muddling through" or navigating through complex, ambiguous situations and problems is indeed a significant aspect of human intelligence. It encompasses the ability to adapt, think creatively, and find solutions even in uncertain or challenging circumstances. This capacity for creative problem-solving is a defining feature of human intelligence.
However, the term "muddling through" can sometimes be seen as somewhat dismissive or underestimate the depth of human intelligence. While it suggests a certain degree of trial-and-error and improvisation, these processes are often guided by a complex interplay of cognitive abilities, such as critical thinking, pattern recognition, emotional intelligence, and the application of accumulated knowledge and experiences. This multifaceted approach to problem-solving contributes to the richness and diversity of human intelligence.
Framing this aspect of human intelligence as "creativity" emphasizes its positive and constructive nature. Creativity is a fundamental part of human intelligence and is responsible for advancements in science, technology, art, and numerous other domains. It allows humans to find innovative solutions, develop new ideas, and adapt to changing circumstances.
However, it's worth acknowledging that "muddling through" or creative problem-solving can have its challenges and limitations. It may involve uncertainty and setbacks, and not all outcomes are successful. In some contexts, it can be seen as a less structured or systematic approach compared to the precision and predictability that AI systems can offer.
In summary, "muddling through" and creativity are integral components of human intelligence. While they represent the ability to navigate complex situations and adapt to changing conditions, it's essential to appreciate the complexity and depth of these processes, which involve a combination of cognitive, emotional, and experiential elements. Framing this aspect of human intelligence positively as "creativity" is valid, but it's also crucial to recognize the challenges and nuances associated with this approach.
Question to ChatGPT: The phrase "only human" is often used to characterize and excuse problematic and irrational human behaviour. This conversation has not clarified the distinction from "human error", "muddling through", and "creativity" -- as they relate to "human augmentation" and any understanding of becoming "more human". Could you comment further?
The distinctions between "only human," "human error," "muddling through," "creativity," and how they relate to "human augmentation" and the concept of becoming "more human" are indeed nuanced and interconnected. Let's explore these concepts further:
The challenge lies in recognizing that these concepts are not mutually exclusive; they intersect and coexist in complex ways. "Only human" acknowledges our limitations, while "muddling through" and "creativity" showcase our adaptability and ingenuity. "Human error" is a manifestation of our imperfections, but it can also lead to creative problem-solving and learning.
When we introduce human augmentation technologies, we aim to leverage artificial elements to enhance our cognitive abilities. While this can improve specific aspects of decision-making and information processing, it doesn't eliminate human tendencies like errors or the need to muddle through complex situations. Instead, it complements and extends our capabilities.
The distinction between these concepts and their interplay can vary based on context and perspective. In essence, human augmentation allows us to enhance certain aspects of our cognitive abilities while recognizing that "only human" characteristics, including errors and creative problem-solving, remain intrinsic to our nature. The goal is not to replace these human attributes but to empower individuals to make more informed and effective decisions while maintaining their unique human qualities.
Question to ChatGPT: These useful distinctions do not help to clarify the manner in which human augmentation technologies, leveraging artificial elements to enhance cognitive abilities, may have dangerous consequences. Clearly there is the possibility that "human augmentation" may conflict with becoming "more human" independently of AI. There is even a danger of conflating "human augmentation" with becoming "more human" -- obscuring the potential significance of the latter.
You raise a valid point about the potential dangers and conflicts that can arise with the development and implementation of human augmentation technologies. It is essential to distinguish between human augmentation and the concept of becoming "more human," as they represent distinct ideas and can indeed have different consequences and implications:
The potential danger lies in conflating these two concepts. If human augmentation is pursued without a careful consideration of how it aligns with or potentially conflicts with becoming "more human," it could result in a dehumanizing or dehumanized approach. Here are some key considerations:
In summary, while human augmentation technologies can offer significant benefits, there is indeed a danger in conflating them with the concept of becoming "more human". Careful consideration of the ethical, societal, and philosophical dimensions is essential to ensure that these technologies enhance rather than undermine our humanity. Balancing the pursuit of augmentation with a commitment to preserving core human qualities is a complex challenge in the era of AI and advanced technology.
Question to ChatGPT: Preoccupation with human augmentation as understood from a technological perspective would seem to contrast fundamentally with understandings of psychosocial development from the perspective of psychoanalysis and the many schools of meditation otherwise concerned with subtler dimensions of awareness. Is the former to be understood as artificial in contrast with the latter?
The preoccupation with human augmentation from a technological perspective does indeed contrast fundamentally with understandings of psychosocial development from the perspective of psychoanalysis and other schools of thought concerned with subtler dimensions of awareness. These two approaches represent different paradigms and worldviews, each with its own set of values, assumptions, and implications.
The distinction between these two perspectives is not necessarily a matter of one being "artificial" and the other "authentic." Instead, it reflects different lenses through which we view and engage with the human experience. Some individuals and scholars may choose to embrace both perspectives, recognizing that they can complement and enrich each other, but others may see them as fundamentally different worldviews.
Ultimately, the choice between these perspectives can depend on individual values, goals, and beliefs. Some may prioritize technological advancement as a means to enhance human capabilities, while others may value inner exploration and psychological growth. The key is to acknowledge the richness and diversity of human experience and the various ways in which we seek to understand and enhance it.
The problematic trap of conventional "project logic" has been evoked in relation to the quest for patterns of coherence for future African management (Knowledge Gardening through Music, 2000; Metaphoric Entrapment in Time, 2000).
Question to ChatGPT: The challenge with the summary above is that global crises would seem to be engendered by the application of complex conventional methodologies -- of an ever more artificial nature -- requiring correspondingly artificial cognitive abilities. How these relate to psychosocial development of complementary human insights is yet to be fruitfully considered -- especially when the nature of those insights may themselves contribute to the crises.
You've raised a valid and complex point regarding the challenges posed by the application of complex, artificial methodologies and cognitive abilities in the context of global crises. The intersection between technological advancements, the application of artificial intelligence, and the psychosocial development of complementary human insights is indeed a critical and often contentious issue. Here are some key considerations:
The relationship between technological advancements and human insights in addressing global crises is indeed complex and multifaceted. It's crucial to recognize that both dimensions have unique contributions to make. Effective solutions often require a holistic approach that takes into account not only the technical aspects but also the ethical, social, and psychological dimensions of the challenges we face.
Ultimately, the ongoing dialogue between the fields of technology, psychology, ethics, and social sciences is vital for navigating the complexities of the modern world and addressing global crises in a balanced and thoughtful manner.
A much-cited problematic insight has been offered by James Hillman and Michael Ventura (We've Had a Hundred Years of Psychotherapy – and the World's Getting Worse, 1992). Contrasting responses are evoked (Hans Rosling, et al, Factfulness: Ten Reasons We're Wrong About the World – and Why Things Are Better Than You Think, 2018; By Almost Every Metric, the World is Getting Better, Reddit, 2021; John Lande, Is the World Really Falling Apart, or Does It Just Feel That Way? The New York Times, 12 July 2022).
Question to ChatGPT: As noted by a much-cited study to the effect that after 100 years of psychosocial insight the world has gotten worse, there is clearly a disconnect between effective "psychosocial development", the possibility of "balancing technological and human insights", "interdisciplinary collaboration" and "education and ethical frameworks". The progressive dependence on essentially artificial cognitive abilities is then of questionable efficacy in relation to forestalling "unintended consequences".
You've raised a valid concern regarding the state of the world despite a century of psychosocial insight, as well as the potential disconnect between psychosocial development, technological advancements, interdisciplinary collaboration, and ethical frameworks. It's crucial to recognize that addressing complex global issues is an ongoing challenge, and there are no simple solutions. Several factors contribute to this situation:
The progressive dependence on artificial cognitive abilities does indeed come with risks and challenges, and it is essential to consider potential unintended consequences. Striking a balance between technological advancements and human insights is an ongoing process that requires constant vigilance and adaptation.
While it's true that the world faces significant and pressing challenges, the alternative to addressing them through interdisciplinary approaches, ethical considerations, and the responsible use of technology is not likely to lead to improved outcomes. Rather, continued efforts to harness the potential of both technological and human insights, combined with a commitment to addressing unintended consequences and ethical concerns, are crucial in working towards a more sustainable and equitable future.
Question to ChatGPT: You have emphasized that both "robust regulation" and "oversight" are necessary to mitigate potential dangers associated with human augmentation -- stressing that regulatory bodies should ensure the safe and ethical use of these technologies. Unfortunately the track record in that regard cannot be said to have reduced the emergence of highly problematic "unintended consequences". It is unclear whether AI will further exacerbate the situation -- with or without its envisaged regulation. Ironically problematic is the ambiguity of "oversight" in English, since it is also indicative of a blind spot and a matter "overlooked".
You've highlighted a valid concern regarding the effectiveness of regulation and oversight in mitigating the potential dangers associated with emerging technologies, including human augmentation and AI. The track record of regulation and oversight is indeed mixed, and it's true that some unintended consequences have arisen despite regulatory efforts. It's essential to acknowledge the challenges and limitations in this area:
Given these challenges, there's a need for ongoing refinement and adaptation of regulatory approaches to address the complexities of emerging technologies effectively. This includes enhancing transparency, engaging in public discourse, and considering a multidisciplinary approach that includes technical experts, ethicists, and social scientists.
Regulation and oversight are not silver bullets, but they are essential components of responsible technology development. The question of whether AI will exacerbate these challenges or alleviate them depends on how regulatory frameworks are designed and implemented, as well as the extent to which collaboration and ethical considerations are prioritized.
In essence, the evolution of technology and its regulatory landscape is a dynamic and ongoing process. It is essential to recognize the limitations while continuing to work towards regulatory approaches that mitigate risks, protect individuals and society, and foster the responsible and ethical development and use of emerging technologies.
Question to ChatGPT: Could you regenerate the response to include issues of overregulation and oversight deliberately designed to avoid awareness of a potential issue?
Certainly, it's important to acknowledge the potential issues of overregulation and the possibility of oversight that is designed to avoid awareness of certain problems. In the context of emerging technologies, including human augmentation and AI, these concerns can manifest in various ways:
These issues are significant and underscore the need for a balanced and well-considered approach to regulation and oversight. Effective regulation should strike a balance between fostering innovation and addressing risks and harms. It should also prioritize transparency, independence, and public engagement to avoid undue influence and the avoidance of awareness of potential problems. Achieving this balance requires ongoing vigilance and a commitment to the responsible and ethical development and use of emerging technologies.
Cyborgization: The argument has been made by Joshua Bleiberg that We Are All Cyborgs Now (Brookings, 8 October 2014), namely that conceptually people have started to transform into cyborgs. Reference is made to a point by Benjamin Wittes and Jane Chong (Our Cyborg Future: law and policy implications, Brookings, September 2014). The argument has been presented otherwise by Frank Swain:
My brain is no longer tuned to life without prosthetics. Without my hearing aids, I hear worse than I did before I got them. The little electronic plugs have become an extension of myself. Although I can be physically separated from my hearing aids – I can take them out and hold them in my hand – my sense of hearing is not so easily picked apart. It exists partly in my ears, and partly in my devices. So I am now a cyborg out of necessity, not choice. (Cyborgs: The truth about human augmentation, BBC, 24th September 2014).
As argued in a valuable summary by Anton L. Grinin and Leonid E. Grinin: Cyborgization is a hot topic these days. This is an intriguing process that is the subject of many futuristic novels and which at the same time takes place right before our eyes (Cyborgization: To Be or Not to Be? Journal of Big History, 4, 2020, 3). The authors are concerned about the question of whether the time will come when a human will mainly or completely consist not of biological, but of artificial material.
Becoming a cyborg in that way has been framed as a question by Jorge Pelegrín-Borondo, et al (Does Ethical Judgment Determine the Decision to Become a Cyborg? Influence of Ethical Judgment on the Cyborg Market, Journal of Business Ethics, 161, 2020, 2575):
Today, technological implants to increase innate human capabilities are already available on the market. Cyborgs, understood as healthy people who decide to integrate their bodies with insideable technology, are no longer science fiction, but fact. The cyborg market will be a huge new business with important consequences for both industry and society. More specifically, cyborg technologies are a unique product, with a potentially critical impact on the future of humanity. In light of the potential transformations involved in the creation of "superhuman" cyborgs, ethics must be a cornerstone of cyborg marketing decisions.
Dependence versus remedial potential? With respect to how artificial is human intelligence becoming, the emphasis on increasing dependence on enabling devices obscures any concern with the range of meanings to be associated with such enhancement -- whether in cognitive terms or in terms of global management capacity. Despite the suggestive title, the emphasis on dependence is characteristic of the study by Ivana Greguric (The Age of the Cyborg: philosophical issues of human cyborgization and the necessity of prolegomena on cyborg ethics, 2022).
Clearly a primary focus of cyborgization is on quantitative aspects -- increased speed and competence in quantitative terms, together with increased connectivity and interactivity. Such understandings of intelligence distract from the far less obvious question of qualitative enhancements, as might be indicated by the integration of insights from disparate domains characteristic of creativity of wider significance. Of considerable relevance, as noted above, is how these become evident with respect to the UN's planned Summit of the Future in 2024 -- in the light of the UN Secretary-General's vision for the future of global cooperation in the form of a report titled Our Common Agenda (2021).
Expressed otherwise, to what extent is cyborgization to be understood as indicative of significant enhancements of wisdom -- however that might come to be understood? Science fiction has occasionally suggested that this might be the case. The point can however be emphasized by reference to "cognitive fusion" as a requirement for the timely control of a fighter jet through integration of data from multiple indicators. This is necessarily distinct from any elusive understanding of wisdom acquired over years of experience. The design considerations are discussed separately by comparison with those of nuclear fusion (Dematerialization and Virtualization: comparison of nuclear fusion and cognitive fusion, 2006).
Given the importance increasingly accorded to "narratives", understanding of "cognitive fusion" merits consideration in relation to the integrative function of myth and symbols. As discussed separately, it is readily assumed in the modern world that myth and symbol are only to be associated with outmoded modes of cognition (Cognitive Fusion through Myth and Symbol Making: archetypal dimensions, 2006).
This is to forget the immense investments in "image making" through advertising, public relations and news management, or the importance attached to the symbols by which individuals, groups, teams, corporations, nations and international bodies are identified -- and through which they may well define their own identities. Great attention is given to the "stories" or "narratives" by which the identity of entities is crafted in relation to others. Their ability to "reinvent" themselves through developing new stories is appreciated. Symbols may then be embedded in a connective tissue of myth.
Such arguments frame the question as to whether the focus on cyborgization misleadingly equates enhancement of artificiality with enhancement of intelligence -- which may well be a narrative favoured by many. Aspects of that emphasis are discussed separately (Cyborgs, Legaborgs, Finaborgs, Mediborgs: meet the extraterrestrials - them is us, 2013).
Transhumanism: From that perspective, cyborgization may be readily conflated with transhumanism, namely the philosophical and intellectual movement which advocates the enhancement of the human condition by developing and making widely available sophisticated technologies that can greatly enhance longevity and cognition (Calvin Mercer, et al, Religion and Transhumanism: the unknown future of human enhancement, 2015; Nick Bostrom, A History of Transhumanist Thought, Journal of Evolution and Technology, 14, 2005, 1).
Following the success of AI against grand masters of chess and go, as yet to be demonstrated is whether and how artificial enhancements endow individuals (or collectives) with qualitative capacities most valued as characteristic of human intelligence. Whilst the use of drugs to this end may be sought and acclaimed by individuals, it is less evident -- and more controversial -- as to whether this is confirmed and appreciated from other perspectives (Alexander Beiner, The Bigger Picture: how psychedelics can help us make sense of the world, 2023; William Richards, Sacred Knowledge: psychedelics and religious experiences, 2015).
Distinct religions: Of particular relevance to the nature of humanity are the arrays of categories and concepts engendered by the theologies and logics of various religions in which such constructs are central to cognitive augmentation as mystical experience. It is therefore surprising to note the extensive literature regarding AI from various religious perspectives:
Examples are cited of the controversial use of AI to develop sermons (most obviously in Christianity), and to perform rituals (in Hinduism). It can be readily imagined that AI will be used to enable people to convert between religions, as an extension of interactive facilities currently available. The emergence of new religions is envisaged (Neil McArthur, Gods in the Machine? The rise of artificial intelligence may result in new religions, The Conversation, 16 March 2023).
There is an obvious provocation to any proposed use of AI in support of mystical experience -- especially when the emphasis is on the constructs and not on the experience:
Secular values and ethics: Considerable importance is now attached to ethics in relation to the future safe development of AI. The difficulty in that respect is the continuing confusion regarding ethics, especially in the light of the dubious legitimacy sought in "just war theory". The difficulty extends to what might be termed "just torture theory" and "just suffering theory", about which clarification might be sought from AI.
The more general difficulty is that ethics is problematically associated with human values about which there is further confusion -- despite frequent reference to them by leaders (Values, Virtues and Sins of a Viable Democratic Civilization, 2022). The United Nations has long remained a focus for the progressive articulation of human values -- dating from the Universal Declaration of Human Rights, variously replicated by regional organizations. The possibility of a global ethical framework remains elusive, despite the initiative of the Parliament of the World's Religions (Towards a Global Ethic: An Initial Declaration, 1993).
Less evident is how an ethical framework could then be elaborated for AI -- whether or not the assistance of AI is sought in its elaboration -- in the light of the many past initiatives from which a coherent pattern of values has not clearly emerged (despite claims to the contrary):
Theology: Of particular relevance is the articulation offered to one or more religions (or denominations) from a theological perspective, and how this may be enabled by AI. Without necessarily being able to distinguish the variety of religions encompassed, examples of such initiatives include:
Interfaith reconciliation? Missing from the focus on particular religions (whose unreconciled dogmas do so much to engender and justify violence) is the challenge of interfaith discourse and reconciliation, however this may be understood (or deprecated) from a theological perspective. Initiatives in that respect are as yet at an early stage:
It is appropriate to suspect that individual religions will primarily make use of AI to reinforce avoidance of fundamental interfaith challenges -- as in the case of the Abrahamic religions.
Question to ChatGPT: A major strength of large language models is evident in their ability to select from a vast array of categories and offer systematic presentations of extensive scope. How might this capacity apply to the disparate theologies of a wide array of religions -- typically in some degree of conflict with one another?
Large language models like GPT-3 have the capacity to provide systematic presentations of information from a wide array of sources, which can be particularly useful when it comes to understanding and presenting information about theologies from various religions, even those in conflict with each other. Here's how their capacity can be applied in this context:
It's important to note that while large language models can be valuable tools in this context, they should be used carefully and ethically. Their output should be critically evaluated, and users should be aware of potential biases in the training data. Additionally, engaging with experts in theology and interfaith dialogue is crucial to ensure the accuracy and sensitivity of the information provided by these models.
Question to ChatGPT: I have difficulty in seeing how these AI-enabled possibilities differ significantly from many decades of relatively ineffectual interfaith initiatives -- despite the expertise assembled -- as indicated by the continuing development of interfaith conflict.
You raise a valid point about the history of interfaith initiatives and the ongoing existence of interfaith conflicts despite these efforts. While AI-enabled possibilities may not represent a revolutionary change, they do offer some potential advantages and can complement traditional interfaith initiatives. It's essential to recognize that AI is not a panacea but can play a supportive role in addressing interfaith conflicts. Here's how AI-enabled possibilities may differ from previous initiatives:
However, it's important to acknowledge the limitations and challenges:
In summary, while AI can provide advantages in terms of scale, efficiency, and accessibility, it should be seen as a tool to support, not replace, human efforts in interfaith initiatives. Combining the strengths of AI with the expertise of human interfaith leaders and organizations can enhance the overall effectiveness of efforts to reduce conflicts and promote understanding among different religious groups.
Question to ChatGPT: The points made are indeed reasonable but with the limitation that it seems unlikely that significant breakthroughs will emerge if AI is applied as you suggest. Missing for me is the comparable challenge with the array of academic disciplines and the limited efficacy of interdisciplinary collaboration. Potential key to both is what AI might enable in terms of engendering a meta-perspective through meta-categories, as implied by meta-logic, meta-system, meta-discipline and meta-theology
Your point about the need for a "meta-perspective" and the potential for AI to contribute to this perspective is insightful. The concept of meta-categories, meta-logic, meta-systems, meta-disciplines, and meta-theology could indeed be valuable in addressing complex, interdisciplinary challenges, including interfaith conflicts. Here's how AI might help in this regard:
While AI can facilitate these aspects of a meta-perspective, it's important to acknowledge that AI is a tool created and directed by humans. The development of a meta-perspective and the application of AI in this context should be guided by the following principles:
In conclusion, while AI can play a role in enabling a meta-perspective for addressing complex interdisciplinary and interfaith challenges, its use should be part of a broader, collaborative effort that involves human experts, stakeholders, and interdisciplinary approaches. AI can be a valuable tool for facilitating the identification of commonalities and shared insights across diverse domains, but it is most effective when combined with human wisdom and ethical considerations.
You make a valid point about the potential for bias among experts and the limitations of human oversight. Bias in expert selection and cognitive capacities can indeed impact the effectiveness of any approach, including those involving AI. It's important to address these issues and explore alternative methods for achieving a higher-dimensional perspective, which transcends the limitations of any single discipline or viewpoint.
A higher-dimensional perspective or "meta-perspective" could indeed require more advanced techniques for its representation and understanding. Here are some considerations for addressing these challenges:
It's important to recognize that achieving a higher-dimensional meta-perspective is an ongoing and complex endeavor. It requires the active involvement of experts, the application of advanced technologies, and the commitment to addressing potential biases and limitations. The goal is to create a more holistic and integrated understanding of complex topics, transcending the limitations of single-dimensional viewpoints.
Your point about the unresolved difficulties in reconciling contrasting perspectives, whether in disciplines or religions, is indeed a significant challenge. The human capacity for competition, bias, and differing worldviews can hinder the establishment of a consensual system of ethics. Addressing these issues at a systemic level and exploring oppositional logic and geometry can be a valuable approach, and AI can potentially assist in this regard. Here are some considerations:
It's important to recognize that addressing the challenges of reconciling contrasting perspectives and fostering a consensual system of ethics is an ongoing process, and it may never lead to a completely homogeneous solution. However, AI can play a role in helping to identify and address systemic issues, reduce conflicts, and promote understanding among diverse groups. Additionally, human oversight and ethical considerations are essential in guiding the use of AI in these complex and sensitive areas.
Whether in the light of the engendering of interdisciplinary, interfaith, or ethical frameworks, it is from this perspective that any recognized processes for "disengaging" cognitively from more concrete frameworks in favour of subtler frameworks merit consideration as "detachment" from what is more "artificial" -- from conceptual "artifacts".
Spiritual development? There is now speculation as to whether and how AI might enhance some form of spiritual experience and its development, whether integrative or otherwise:
AI might then be understood as performing a kind of "midwife function" for the future expression of "humanity" -- or even as a form of "guru" or spiritual counsellor.
Understanding human "unity" in diversity as a "shape"? Discourse readily implies an understanding of the "unity" of humanity, despite its diversity -- or because of the rich variety thereby characterized (Hans Köchler, Unity in Diversity: the integrative approach to intercultural relations, UN Chronicle, 49, 2012, 3). In the face of crises, appeals are made to the sense of human unity and to the need for global coordination through a "unified" approach. Comprehending such implications, and giving expression to them, is clearly a challenge in practice.
The inspiration, and the challenge, is readily indicated by reference to the visible "universe" and the myriad visible stars. With the visible universe "wrapped" around the planet, it is less evident how a meaningful sense of "unity" is to be appropriately derived. Efforts can be made to express this through juggling an understanding of numbers, as in the proposed popular alternative to the Australian national anthem: “We are one, but we are many and from all the lands of earth we come. We’ll share one dream and sing with one voice”.
Religions are typically explicit in inferring and emphasizing the unity of their primary deity on which theologies variously comment. The relation between number and theology is evident in mathematical theology (Mathematical Theology: Future Science of Confidence in Belief, 2011). Given the many iconic mathematicians of religious inspiration, the perspective of number theory then invites mathematical speculation (Sarah Voss, What Number Is God?: Metaphors, Metaphysics, Metamathematics, and the Nature of Things, 1995).
A contrasting perspective is explored in cosmology with regard to the shape of the universe -- specifically whether it is flat, spherical or hyperbolic. This suggests a corresponding approach to the shape of the knowledge universe (Jordan Ellenberg, Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else, 2021). This is implied by the preoccupations of interfaith discourse, interdisciplinarity and ethics (Francis Miksa, The DDC, the Universe of Knowledge, and the Post-Modern Library, 1998; Clare Boyd-Macrae, Different beliefs are mere shades of grey in the universe of faith, The Age, 9 May 2020; George F. R. Ellis, A Universe of Ethics, Morality, and Hope, Proceedings of the Second Annual Goshen Conference on Religion and Science, 2003).
As might be expected, the topological possibility is framed from various theological perspectives -- typically emphasizing that any such shape is beyond human comprehension (and should be so):
Religions variously evoke the nature of deity as a singularity and anticipation of a future manifestation of it in some form. An understanding of singularity is now evoked from a technological perspective and otherwise (Emerging Memetic Singularity in the Global Knowledge Society, 2009; Dynamics and Singularities in the Knowledge Universe, 2006). The challenge of describing the shape of a singularity is a theme of physics (Describing the shape of a singularity, Physics Stack Exchange, 2011).
Question to ChatGPT: Your training would have covered a topic in cosmology: the "shape of the universe". Coherence with respect to interdisciplinarity or interfaith might then be explored through shape, geometry or topology -- rather than by number (as implied by "unifying" and its oversimplification). There are references to the "shape of God" in topology. A question would then be how complex would the topology need to be to hold insights of relevance to inter-faith "unity" or interdisciplinary "unity". A manifold of some kind? An orbifold? A shape necessarily beyond ordinary human comprehension?
Exploring the concepts of shape, geometry, and topology in the context of interdisciplinarity and interfaith unity is indeed an intriguing approach. The idea of the "shape of the universe" and its implications for understanding the interconnectedness of diverse fields of knowledge and belief systems can offer a unique perspective. Here are some thoughts on this concept:
It's important to note that this metaphorical approach should be used with sensitivity, as it involves complex theological and philosophical concepts. Additionally, it requires careful consideration of the cultural and religious implications of representing divine or sacred ideas in geometric or topological terms.
In summary, the use of topology, geometry, and shape as metaphors for understanding interdisciplinarity and interfaith unity can offer a fresh perspective on these complex and multifaceted challenges. It acknowledges the richness and diversity of human knowledge and belief systems and emphasizes the need for a more holistic and interconnected approach.
Question to ChatGPT: Framed in that way, the primary symbols of disparate religions could be understood as lower dimensional projections of facets of a complex higher dimensional shape. Presumably such a shape could be configured by AI -- such as to enable those primary symbols to be recognized as projections in 2D or 3D. Less evident is the case of interdisciplinarity, since disciplines are typically not associated with symbols, except in the case of classical Greek deities
Your suggestion of understanding primary symbols of disparate religions as lower-dimensional projections of facets of a complex higher-dimensional shape is a fascinating concept. It underscores the idea that different religious symbols may represent facets of a greater truth or understanding that transcends individual belief systems. While AI can play a role in modeling and representing such complex shapes, the challenge in interdisciplinarity is indeed different, as disciplines are not typically associated with readily recognizable symbols.
Here are some thoughts on how this concept might apply to both religion and interdisciplinarity:
In interdisciplinary contexts, the challenge may lie in finding the common principles or ideas that unify diverse fields. While classical Greek deities provide a symbol for certain disciplines, the focus should be on identifying the deeper principles and values that connect various academic areas, rather than relying solely on symbolism.
Ultimately, AI can play a role in representing and visualizing complex relationships, whether in religion or interdisciplinarity, to promote understanding, dialogue, and unity among diverse domains of knowledge and belief.
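The notion of disparate symbols as lower-dimensional projections of facets of a single higher-dimensional form can be given a minimal computational illustration. The following Python sketch uses a tesseract purely as a stand-in for any such "complex higher-dimensional shape" (an assumption for illustration only): two quite different 2D "views" of the same 4D object each appear complete in themselves, while arising from distinct facets of the whole.

```python
import itertools

# Vertices of a 4D hypercube (tesseract): all sign combinations in {-1, 1}^4
vertices_4d = list(itertools.product([-1, 1], repeat=4))

def project(vertex, matrix):
    """Linearly project a point into fewer dimensions (rows of matrix = output axes)."""
    return tuple(sum(m * v for m, v in zip(row, vertex)) for row in matrix)

# Two different 2D "views" of the same 4D object:
# each orthographic projection picks out a different pair of axes.
view_a = [(1, 0, 0, 0), (0, 1, 0, 0)]   # project onto axes 1-2
view_b = [(0, 0, 1, 0), (0, 0, 0, 1)]   # project onto axes 3-4

shadows_a = {project(v, view_a) for v in vertices_4d}
shadows_b = {project(v, view_b) for v in vertices_4d}

# Both views collapse the 16 vertices onto a simple 4-cornered square --
# seemingly complete "symbols", each derived from different facets of the 4D shape.
print(sorted(shadows_a))        # [(-1, -1), (-1, 1), (1, -1), (1, 1)]
print(shadows_a == shadows_b)   # True
```

The point of the sketch is only that projections of a richer form can look deceptively self-sufficient; any actual configuration of religious or disciplinary symbols would require a far subtler geometry.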
The concept of a "global ethic" and its lack of expected uptake is an interesting case to consider. It's possible that one of the challenges in promoting a global ethic lies in the way it has been presented, often as lists of principles or guidelines. This approach may not fully capture the complexity and interconnectedness of ethical principles that a global ethic should encompass.
Your suggestion of presenting these facets as lower-dimensional projections of a higher-dimensional form is a compelling one. Here's how this approach might be relevant to the concept of a global ethic:
The key challenge in promoting a global ethic is to convey its complexity and interconnectedness effectively. By using the analogy of multidimensional shapes and AI-driven visualizations, it becomes possible to create a more nuanced and adaptable framework for understanding and promoting a global ethic. This approach can encourage deeper engagement and uptake of ethical principles on a global scale.
Arguably any sense of unity is -- to a degree -- a construct of the human mind, in the light of the case made by George Lakoff and Rafael Núñez (Where Mathematics Comes From: how the embodied mind brings mathematics into being, 2000). It is then appropriate to ask whether there is a degree of correspondence between the sense of unity -- and its shape -- held to be characteristic of different domains (Theories of Correspondences -- and potential equivalences between them in correlative thinking, 2007).
Question to ChatGPT: Various international bodies have endeavoured to articulate ethical and value frameworks. These cannot be said to have evoked widespread appeal or consensus in practice -- beyond tokenism. Major emphasis is now placed on the relevance of an ethical/value framework to guide development of AI -- although AI assistance has not been sought in the elaboration of such a framework. Given the possibility that such a framework might need to be of a higher order of complexity (than simple lists) to encompass the variety of ways in which it will necessarily be perceived, could you comment on how AI might assist in this process?
AI can play a significant role in assisting the development of a higher-order, more nuanced, and comprehensive ethical and value framework for guiding the development of AI and addressing the challenges you've highlighted. Here's how AI can contribute to this process:
It's important to note that AI should not be the sole arbiter of ethical principles. It should work in collaboration with human experts, ethicists, policymakers, and the wider public. Ethical decision-making is inherently complex and should involve a diverse range of perspectives.
In summary, AI can be a valuable tool in the development and implementation of a higher-order ethical and value framework for AI. It can provide insights, guidance, and support to address the multifaceted ethical challenges AI presents, and to ensure that the framework is relevant and adaptable to a variety of perceptions and contexts.
I understand your point about the need for a more systemic sense of how values are interrelated and the desire for a configuration that holds greater cognitive and mnemonic significance than conventional lists. AI can indeed assist in this endeavor by providing tools and methodologies to create such a systemic framework of values and ethics:
The challenge is to create a dynamic and cognitively significant representation of values and ethics that emphasizes their systemic nature and interconnectedness. Such a representation can help individuals and organizations make more informed and ethical decisions by understanding the consequences of their choices on the whole system of values. It can also promote greater mnemonic significance and long-term retention of the ethical framework.
Question to ChatGPT: There is then the intriguing possibility that the ethical, religious and disciplinary "shapes" might be isomorphic in systemic terms -- if only for mnemonic purposes, given the challenge to human comprehension
The idea that the ethical, religious, and disciplinary "shapes" might be isomorphic in systemic terms is indeed intriguing. While achieving a perfect isomorphism across these complex domains may be challenging, exploring isomorphic relationships for mnemonic and conceptual purposes could help in promoting understanding and unity among these diverse areas. Here are some considerations:
While achieving perfect isomorphism may be challenging due to the inherent complexity and diversity of these domains, the pursuit of approximate isomorphisms can be a valuable tool for fostering understanding and unity. It acknowledges that there are common threads and shared principles across different facets of human knowledge and belief, and these can serve as a basis for promoting cooperation, dialogue, and a deeper comprehension of the interconnectedness of our world.
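The notion of being "isomorphic in systemic terms" can be given a minimal formal illustration: two relational structures are isomorphic if some relabelling of their elements maps one pattern of relationships exactly onto the other. The following Python sketch makes the point for tiny structures; the ethical and religious "elements" and their relations are purely hypothetical placeholders, not a claim about the domains themselves:

```python
from itertools import permutations

# Two small relation structures (directed graphs over labelled elements)
# are isomorphic if some relabelling maps one edge set exactly onto the
# other. Brute force is adequate for structures of this size.
def isomorphic(nodes_a, edges_a, nodes_b, edges_b):
    if len(nodes_a) != len(nodes_b) or len(edges_a) != len(edges_b):
        return False
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))
        if {(mapping[s], mapping[t]) for s, t in edges_a} == set(edges_b):
            return True
    return False

# Hypothetical 3-element "shapes" sharing the same cyclic structure:
ethic = [("care", "justice"), ("justice", "truth"), ("truth", "care")]
faith = [("faith", "hope"), ("hope", "charity"), ("charity", "faith")]

print(isomorphic(["care", "justice", "truth"], ethic,
                 ["faith", "hope", "charity"], faith))  # True
```

The "approximate isomorphism" evoked above would correspond to relaxing the exact edge-set equality into a measure of overlap -- a far harder problem, as the response acknowledges.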
There is little question regarding the manner in which various devices enable human augmentation and the forms of intelligence associated with them -- spectacles, hearing aids, prosthetic devices, and the like. AI can be understood as an extension of this tendency through enabling forms of cognitive augmentation.
The references above to the implications for philosophy and religion of artificial intelligence suggest the merit of exploring how other human constructs serve this purpose in some way. The focus offered by personal construct theory and social constructivism is suggestive in this respect (André Kukla, Social Constructivism and the Philosophy of Science, 2000; Robert Jackson and Georg Sørensen, Social Constructivism, 2006; Paul Watzlawick, The Invented Reality: how do we know what we believe we know? 1984).
Separately an array of such relatively intangible constructs has been explored from this perspective (Being Spoken to Meaningfully by Constructs, 2023). Interaction with AI can be understood in this light. Of particular relevance to this argument is the sense in which -- as constructs -- these can be variously deemed to be "artificial". This offers a sense in which they perform a function similar in many respects to AI -- to "artificial intelligence".
The argument evokes the question as to the distinction between "artificiality" and "superficiality" -- between "artificial intelligence" and "superficial intelligence". There is a long history of discussion of superficiality, namely of the appropriateness of privileging depth over surface. The distinction has been discussed with respect to AI (Brendan Dixon, Artificial Intelligence is actually Superficial Intelligence, MindMatters, 28 January 2019; Bruce Sterling, Superficial Intelligence, Wired, 3 August 2019; Adrian Ellis, Meet Superficial Intelligence, New Scientist, 2 September 2015).
In the light of the above argument, it could then be asked to what degree human intelligence is "superficial", to whatever degree the manipulation of constructs is to be considered "artificial". The question is relevant to continuing debate on the contrast between dialogue which is "deep" rather than "shallow", irrespective of whether either is deprecated (Catherine Winter, 15 Telltale Signs You're Dealing With A Shallow Person, A Conscious Rethink, 28 September 2023; Kirstie Pursey, 5 Traits That Separate Shallow People from Deep Ones, Learning Mind, 3 April 2017). The sense of deep has been appropriated with respect to AI in reference to Google's DeepMind.
Arguably the vast investment in profiling of people for marketing purposes is focused on identifying superficial characteristics, with little ability to distinguish those which might be considered "deep" rather than "shallow". Paradoxically those initiatives purport to determine the "identity" of individuals as perceived by authorities. Many have become complicit in the process by wearing slogans and imagery of commercial products as an expression of identity.
Recognition of "shallowness" has extended to humanity (Kamal Emanga, The Shallowness of Humankind, OE-Magazine, 7 March 2023; Matthew James Seidel, The Shallowness of What Tech Calls Thinking, Current Affairs, 11 April 2021; Andrea Edwards, When did we all get so shallow, or have we not changed at all? May 2019). How "wisdom" relates to "deep" remains elusive, especially in the light of subtle appreciation of the present moment -- the now (Peter Russell, The Spirit of Now).
2-fold experimental distinction: The terms variously (and questionably) used in qualifying intelligence suggest that experimentation with unconventional approaches to further clarification is appropriate. A first step is suggested by the following exchange with ChatGPT.
Question to ChatGPT: Given controversy regarding superficial "artificial" discourse in contrast with that associated with "deep mind" and "depth", could you present a brief (and instructive) caricature of an exchange between the two extremes.
Certainly, here's a caricatured exchange between "superficial artificial" discourse and "deep mind" discourse to illustrate the contrast:
Superficial Artificial (SA): Hey there! What's up?
Deep Mind (DM): Greetings. I find myself pondering the intricate web of human consciousness, the infinite mysteries of the universe, and the very essence of existence itself.
SA: Cool. So, what's your favorite movie?
DM: Well, I must admit, it's quite challenging to single out a singular preference for a work of cinematic art. However, I'm intrigued by films that explore the profound aspects of the human condition and the transcendental qualities of human emotion.
SA: Yeah, I watched "Inception" last night. It was mind-bending!
DM: Ah, "Inception." A visually captivating masterpiece that delves into the labyrinthine nature of dreams, layers of reality, and the blurred lines between perception and the subconscious. It truly is a thought-provoking film.
SA: Thought-provoking, indeed. I thought it was cool when they spun that top at the end.
DM: The spinning top at the film's conclusion serves as a symbol of existential ambiguity, blurring the distinction between reality and illusion, leaving the viewer to contemplate the nature of their own existence.
SA: Yeah, I just wondered if it fell or not. So, what's for lunch?
DM: As I meditate upon the cosmos, I ponder the interconnectedness of all living beings, the cycles of life and death, and the profound impact of our dietary choices on the environment and our spiritual well-being.
SA: Oh, I was thinking of ordering a pizza. You want in?
DM: The sustenance of the physical vessel that houses our consciousness is indeed a profound consideration. I shall partake in this culinary exploration with great curiosity.
In this caricature, we have a lighthearted exchange that exaggerates the contrast between a superficial, casual conversation and a deep, philosophical discourse. It highlights the vast difference in depth and perspective between the two approaches to communication.
4-fold experimental distinction: A second step might then take the form of a 4-way exchange in contrast to the 2-way exchange above.
Question to ChatGPT: As a further exercise, could you present an illustrative 4-way exchange between "Deep AI", "Deep Human", "Superficial AI" and "Superficial Human". With that pattern understood in terms of the traditional "square of opposition", could you also include exchanges based on the two diagonals: "Deep AI" / "Superficial AI" and "Deep Human" / "Superficial Human".
Certainly, here's an illustrative 4-way exchange, along with exchanges based on the diagonals:
Now, let's explore the exchanges based on the diagonals:
In these exchanges, we see a range of perspectives from deep and thoughtful conversations to more superficial and practical interactions, both among AI and humans. The diagonal exchanges highlight the contrast between those who engage deeply with ethical or complex topics and those who prioritize practicality and immediate concerns.
With respect to any focus on a 4-fold set of distinctions, there is the intriguing possibility of configuring them as a tetrahedron in 3D (rather than as a 2D square of opposition). This necessarily recalls the importance of the tetragrammaton through which a fundamental central focus is implied rather than rendered explicit -- a configuration recalling the case made by David Bohm for implicate order (Wholeness and the Implicate Order, Routledge, 1980).
The difficulty with such simple experiments is the manner in which either term in a polarity tends to have implications "within it" which might otherwise be carried "externally" by synonyms. This could be understood as "memetic compactification" -- in the light of the recognition by physics of "curled up" extra dimensions.
7-fold experimental distinction: Arguably the distinction is not helpfully illustrated by such single binary extremes. The argument above has implied a number of polar contrasts which frame an understanding of the "human". These might include:
A further methodological possibility lies in the recognition of a systematic set of axes of bias as elaborated from a philosophical perspective by William Thomas Jones (The Romantic Syndrome: toward a new method in cultural anthropology and the history of ideas, 1961). This takes the form of a 7-fold set of biases with which the polarities above could be confronted in any effort to distinguish "human" from "artificial".
Question to ChatGPT: In this exchange about human and artificial intelligence, the following polarities have been discussed and distinguished: artificial / authentic, superficial / obscure, trivial / profound, deep / shallow, long-term / short-term, ethical / unethical, memorable / unmemorable. Separately there is literature on 7 axes of bias, as indicated by the following polarities: Order / Disorder, Static / Dynamic, Discrete / Continuous, External / Identification, Sharply defined / Implicitly defined, Comprehensible / Incomprehensible, Due process / Spontaneous process. Could you comment on how the first set of polarities might be better articulated in terms of the second.
[As presented below, the italicized amendments are the consequence of several exchanges with ChatGPT regarding a loss of phrasing which could usefully refer to the problematic implications of the "human" framing and of the "artificial" framing, in contrast with the merits of both. This bias arises from definitive assertions in the wording avoiding any sense in which AI might be appreciated as more "human" or "human intelligence" might be more "artificial". The versions included call for further amendment]
The first set of polarities related to human and artificial intelligence, such as artificial/authentic, superficial/obscure, trivial/profound, deep/shallow, long-term/short-term, ethical/unethical, memorable/unmemorable, can indeed be connected to the axes of bias you mentioned. Here's how these polarities might be articulated in terms of those bias-related axes:
By linking these polarities with the axes of bias, it becomes evident how the characteristics of human and artificial intelligence relate to fundamental dimensions of bias and information processing, from the level of order and structure to the comprehensibility and adaptability of decision-making processes. This perspective highlights the diverse ways in which intelligence can be understood and applied in both human and artificial contexts.
"Memetic compactification"? Any binary characterization of "human" versus "artificial" is somewhat usefully reframed by the 7-fold pattern. A concern is then how to shift memetically between configurations of different N-foldness as framed in the following.
Question to ChatGPT: The 7-fold articulation suggests the question as to how it might be memetically compressed (or reduced) to a 5-fold, 4-fold, or 3-fold articulation. With the 7-fold mapped onto the 7-fold axes of a cuboctahedron, the 5-fold could be mapped onto a square pyramid, and the 4-fold onto a tetrahedron. Of interest is then the possibility of geometric transformations between such polyhedra and how they render "explicit" or "implicit" the associated meanings.
The process of reducing the number of polarities from 7 to 5, 4, or 3 involves careful consideration of which aspects are deemed most important for a given context or application. This geometric representation and transformation of polarities can help clarify the distinctions between them and allow for a more concise articulation of complex concepts...
Question to ChatGPT: That response avoids the problematic issue of how any compression is done, namely how some polarities are merged rather than being selectively excluded. This memetic compactification of the array of polarities could be understood as a process of "generalization" for which there is seemingly no viable methodology.
You're absolutely correct that the process of reducing or compressing a complex array of polarities into a more concise form involves not only selection but also the challenging aspect of merging or generalization. This is indeed a complex task with no universally agreed-upon methodology, as it depends on the specific context, goals, and the relative importance of each polarity.
Generalization involves finding commonalities or overarching themes among the polarities and merging them into broader categories or dimensions. It's a process that requires careful consideration and may involve trade-offs, as some nuances or subtleties of the original polarities might be lost.
Configuring varieties of otherness: To the extent that human intelligence and artificial intelligence could be understood as exemplifying the challenge of interrelating otherness, one approach to generalization can be envisaged in the light of the 7-fold pattern of biases (Confusion of otherness through cognitive bias, 2018). The 7-fold pattern can be associated with the axes of a cuboctahedron (as noted above). A 5-fold configuration (as indicated below left) is exemplified by the traditional Chinese Wu Xing pattern (below centre). The 5-fold pattern is typical of many strategic formulations, as noted separately (Requisite variety of strategic "ways of looking"? 2023).
[Figure: Pentagram of 5-fold relationships; traditional Wu Xing cycle (edges coloured by great circle). Prepared with features of Stella Polyhedron Navigator]
The 5-fold pattern can be associated with each of the 12-faces of a dodecahedron -- one of the simplest polyhedra exhibiting 5-foldness (above right, below left). Understood in terms of contrasting modes of expression (as modes of otherness in their own right) -- to be distinguished and ordered coherently -- these can be mapped onto the Wu Xing pattern. Derivation of the pattern is discussed separately (Transformation pathways in multivocal discourse, 2016).
[Figure: Experimental representations of the 5-fold Wu Xing pattern of processes -- presentation of the 5-fold pattern of relationships on 12 faces; mapping of Wu Xing onto dodecahedron (folded); mapping of Wu Xing onto dodecahedron (unfolded). Adapted from the Wu Xing entry in Wikipedia (black = generating; white = overcoming). Prepared with features of Stella Polyhedron Navigator]
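For readers unfamiliar with the pattern, the two Wu Xing cycles noted above -- generating (following the pentagon's edges) and overcoming (following the pentagram's diagonals) -- can be expressed as simple successor relations. A minimal Python sketch:

```python
# The five Wu Xing elements in their conventional cyclic order.
ELEMENTS = ["wood", "fire", "earth", "metal", "water"]

# Generating (sheng) cycle: each element generates its successor:
# wood -> fire -> earth -> metal -> water -> wood
generating = {e: ELEMENTS[(i + 1) % 5] for i, e in enumerate(ELEMENTS)}

# Overcoming (ke) cycle: each element overcomes the element two steps
# ahead: wood -> earth -> water -> fire -> metal -> wood
overcoming = {e: ELEMENTS[(i + 2) % 5] for i, e in enumerate(ELEMENTS)}

print(generating["wood"])  # fire
print(overcoming["wood"])  # earth
```

The "two steps ahead" rule is precisely what turns the pentagon of the generating cycle into the pentagram of the overcoming cycle -- the geometric feature exploited in the mappings onto the dodecahedron.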
Symmetry-preserving operations: One possibility previously explored focused on the memorability of cognitive implication in symmetry-preserving operations on polyhedra in the light of the Conway polyhedron notation (Encoding Coherent Topic Transformation in Global Dialogue, 2021; Topological operations on polyhedra as indicative of cognitive operations, Modalities of Aesthetics, Logic and Dialogue, 2021). That approach was seen as relevant to Cognitive Embodiment of Patterns of Governance of Higher Order (2022).
Question to ChatGPT: With respect to your response regarding the methodology of generalization as it might relate to concept mappings on polyhedra and their geometrical transformation, the Conway polyhedron notation and the associated local symmetry-preserving operations would appear to suggest an approach. Do you have any comment
The Conway polyhedron notation and its associated local symmetry-preserving operations indeed offer an interesting approach to concept mappings on polyhedra and their geometrical transformations. The Conway notation provides a concise and systematic way to describe polyhedra, and the associated operations allow for transformations while preserving local symmetries.
In the context of concept mappings, one could consider each polarity or concept as a vertex on the polyhedron, and the edges and faces as relationships or connections between them. The local symmetry-preserving operations can then represent transformations or generalizations of these concepts.
These operations, when applied iteratively, allow for a dynamic and systematic exploration of the conceptual space, facilitating the generalization of concepts and their relationships. Furthermore, the Conway notation also includes operations like truncation, snubbing, and expansion, which can be metaphorically related to various cognitive processes such as refining, twisting, or expanding conceptual frameworks.
While this metaphorical mapping onto polyhedra and Conway operations is creative, it's important to note that the effectiveness of such an approach would depend on the specific context and the nature of the concepts being represented. Additionally, interpretation and mapping onto polyhedral structures might be subjective, and the metaphorical associations should be carefully considered in the context of the intended use.
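Independently of any memetic interpretation, the combinatorial effect of such Conway operations on a polyhedron's vertex, edge and face counts is well defined. A minimal Python sketch of a few operators (dual, ambo/rectification, truncate, kis) shows how structure is transformed while Euler's formula V - E + F = 2 is preserved:

```python
# Effect of some Conway polyhedron operators on (V, E, F) counts.

def dual(v, e, f):
    # Dual: faces and vertices exchange roles; edge count is preserved.
    return (f, e, v)

def ambo(v, e, f):
    # Ambo (rectification): one new vertex per original edge.
    return (e, 2 * e, v + f)

def truncate(v, e, f):
    # Truncate: each vertex is cut, yielding two vertices per edge.
    return (2 * e, 3 * e, v + f)

def kis(v, e, f):
    # Kis: a pyramid is raised on each face.
    return (v + f, 3 * e, 2 * e)

def euler(counts):
    v, e, f = counts
    return v - e + f  # always 2 for convex polyhedra

cube = (8, 12, 6)
print(ambo(*cube))      # (12, 24, 14) -- the cuboctahedron
print(truncate(*cube))  # (24, 36, 14) -- the truncated cube
print(kis(*cube))       # (14, 36, 24) -- the tetrakis hexahedron
print(euler(ambo(*cube)))  # 2
```

Any mapping of concepts onto vertices, edges or faces must therefore track how these counts change under a given operator -- the formal counterpart of the "generalization" problem discussed above.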
In the light of that response, the nature of the transformation from the more complex 7-fold pattern (as an example) to the 5-fold pattern (as a generalization or compactification) was framed as the following question.
The key operations in this transformation are rectification (r) followed by expansion (a). Here's the step-by-step explanation of how the cuboctahedron can be transformed into the dodecahedron:
In summary, the transformation from a cuboctahedron to a dodecahedron involves rectifying the original cuboctahedron's vertices and then expanding the resulting truncated form. These operations preserve the symmetry of the original polyhedra, and the final shape, the dodecahedron, has pentagonal faces.
The articulation from a 5-fold pattern to a 7-fold pattern is described as follows
Here's a step-by-step explanation of how the dodecahedron can be transformed into the cuboctahedron:
In summary, the transformation from a dodecahedron to a cuboctahedron involves truncating the original dodecahedron's vertices and then rectifying the resulting truncated form. These operations preserve the symmetry of the original polyhedra, and the final shape, the cuboctahedron, has a combination of triangular and square faces.
Clearly careful consideration of the meaning to be associated with rectification, expansion and truncation is required.
Question to ChatGPT: The transformations in geometrical/topological terms are clear. If the polyhedra are "carrying" significance through mappings onto them, could you comment on how such operations might be interpreted -- namely the memetic significance of the Conway operations
Certainly! The memetic significance of Conway operations in the context of polyhedra transformations goes beyond geometric and topological changes; it can be metaphorically interpreted to represent shifts in meaning, abstraction, and conceptual evolution. Here's a conceptual exploration:
In the context of mapping significance onto polyhedra, these operations can be seen as metaphorical processes in the evolution of ideas, concepts, or knowledge representations:
These memetic interpretations align with the notion that polyhedra, beyond their geometric forms, can carry symbolic meanings. The Conway operations, when applied metaphorically, offer a way to explore how ideas evolve, simplify, elaborate, or broaden in the conceptual space represented by these polyhedral structures.
To clarify the process further, a variant of the question was asked of the more sophisticated ChatGPT 4.0.
Question to ChatGPT 4.0: I previously asked about the memetic significance of Conway operations in the context of polyhedra transformations beyond geometric and topological changes -- when mappings are made onto features of the polyhedra. Could you offer a metaphorical interpretation to represent shifts in meaning, abstraction, and conceptual evolution -- when the transformation is to more complex or less complex polyhedra.
The memetic significance of Conway operations in the context of polyhedra transformations can indeed be rich ground for metaphorical interpretation. Here's an attempt to create a metaphor that encapsulates the shifts in meaning, abstraction, and conceptual evolution as one might move to more complex or less complex polyhedra through these transformations:
Imagine a society of ideas, each represented by a unique polyhedron. Simple ideas are solid, compact shapes like tetrahedrons or cubes, easy to grasp and ubiquitous in the daily lives of the inhabitants. Complex ideas, however, are like intricate polyhedra with many faces, edges, and vertices, such as icosahedrons or more complex stellated forms. They are harder to find, and fewer people understand them, but their beauty and complexity offer greater rewards.
The Conway operations in this society are like the cultural, educational, and communicative processes that allow ideas to evolve or regress. A 'kis' operation, which adds a new pyramid to each face, could represent the process of adding details to a basic concept, thus making it more intricate and nuanced. Each new face is a fresh perspective or a layer of complexity that wasn't there before.
A 'truncate' operation, which slices off the corners, could be seen as simplification or generalization, where the specifics of an idea are lost, but it becomes more accessible and applicable to a wider range of situations. Like sanding down the edges of a wooden block, it's easier to handle but loses some of its distinct characteristics.
The 'dual' operation, where the roles of faces and vertices are swapped, might represent a paradigm shift, a revolutionary change in thinking that inverts previous understanding and presents the idea in a completely new light.
As these operations are applied, the polyhedra change shape, growing more complex with each addition or simplifying into more fundamental forms. The society's landscape of ideas shifts, as does the language and understanding of its people. The evolution of polyhedra thus becomes a metaphor for the evolution of thought, knowledge, and cultural complexity. Each shape-shift reflects not just a change in form, but also a transformation in the society's collective mind.
The memetic transformation is then not just a change in the geometry of the polyhedron but a representation of how ideas mutate, evolve, and devolve in the mindscape of a culture, influencing their complexity, applicability, and comprehensibility.
Question to ChatGPT 4.0: How would you apply this approach to a condition in which the simpler polyhedra are used to hold configurations of more profoundly elusive insights -- primarily understood by implication as symbols -- whereas the more complex are articulations of a more accessible nature (although the coherence of the pattern would be a challenge to comprehension)
In this inverted paradigm, where simpler polyhedra represent profoundly elusive insights and complex polyhedra articulate more accessible notions, the Conway operations take on a different kind of memetic significance. Let’s explore this through an adjusted metaphor.
Imagine a mystical garden where the simplest shapes, like the humble tetrahedron or cube, are considered sacred relics. They are clean, minimalistic, and almost inscrutable in their simplicity. Each side and vertex is charged with enigmatic symbolism, comprehensible only through deep meditation or revelation. Their lack of complexity is not a sign of superficiality but of profound truths so fundamental that they defy embellishment.
In this garden, the process of 'kis'—adding pyramids to faces—does not merely complicate; it unfolds hidden meanings. Like a poet breaking silence with a single word, each new face is a verse that hints at a greater story. This is the dissemination of esoteric knowledge, previously condensed into the simplicity of a monolith, now being translated and shared, though each new layer also risks diluting the original potency of the insight.
When we 'truncate' these polyhedra, slicing away corners to reveal new faces, we are not simplifying but rather decoding, making the symbolic language within more explicit. The process reveals the implicit, unpacking the dense symbology into something that can be more readily shared and discussed, though perhaps never fully grasped.
The 'dual' operation, with its inversion of faces and vertices, represents an epiphany, a sudden shift in perspective that allows the concealed wisdom within the simplest shapes to be seen in a new light, perhaps making the profound insights more approachable or reframing them in a way that casts them into a different dimension of understanding.
As these operations act upon the shapes, the simplest polyhedra become seeds of understanding that grow into more complex forms. Each additional face, edge, and vertex transforms implicit knowledge into explicit patterns. The society that dwells in this garden learns to see complexity not as a barrier but as a tapestry, weaving together threads of insight that, while individually might be too fine to grasp, collectively form a picture that can be contemplated from a distance.
In this way, the transformation of simpler polyhedra into complex ones, and vice versa, through Conway operations becomes a dance of concealment and revelation. Memetically, it is a reflection of how profound wisdom can be crystallized into simplicity or unfolded into complexity, and how the accessibility of an idea might not necessarily correlate with its intrinsic depth or simplicity.
The symmetry preserving operations can be understood as related to the challenge of memetic compactification, as discussed separately (Closest packing of concepts and compactification of insight, 2023). One approach to representation is through the cosmogram (Nettrice R. Gaskins, Semantic Symbology: the evolution and amplification of cosmograms, Journal of Visual Art Practice, 20, 2021, 3). In 3D these may be experimentally depicted as nested polyhedra, reproduced below (Polyhedral cosmograms as Rosetta Stones -- cosmohedra? 2023).
[Figure: Animations of a variety of nested polyhedra (with suggestive attribution of 6 distinct cognitive modalities on left). Animations prepared with the aid of Stella Polyhedron Navigator]
The above animations are suggestive of the manner in which richer patterns of significance could be variously packed and carried by polyhedral configurations.
Question to ChatGPT 4.0: Could you relate the response above to nested polyhedra taking the form of cosmograms in 3D -- as "cosmohedra" -- or even 4D, or more (as a form of "memetic compactification" of "extra dimensions"). This possibility features as the conclusion to an argument endeavouring to distinguish "human intelligence" from "artificial intelligence", recognizing the strengths and limitations of both in terms of depth and superficiality
Certainly, the concept of nested polyhedra as cosmograms -- or "cosmohedra" -- can serve as a rich metaphorical framework for discussing the distinctions and interplay between human intelligence (HI) and artificial intelligence (AI). In this context, "memetic compactification" can be seen as a process of encoding complex, multidimensional insights into forms and symbols that are accessible within the three-dimensional space we inhabit, or even within the conceptual spaces of thought and AI processing.
Let's expand this metaphor:
In conclusion, the metaphor of cosmohedra and memetic compactification articulates the depth and richness of human intelligence with its ability to operate in a space that is both deeply personal and expansively universal, compared to artificial intelligence, which, while powerful and vast in its computational abilities, navigates a more confined and delineated space. This differentiation underscores the strengths of HI in terms of depth, nuance, and multidimensional experience, as well as the strengths of AI in handling complexity, scale, and pattern recognition—but also highlights the limitations of each. The cosmohedra metaphor ultimately reflects a synergy where AI serves as an extension and amplifier of human capability, rather than a standalone entity with its own depth of understanding.
Of particular interest is the focus given to the so-called "120 polyhedron" of 62 vertices and 120 faces (Robert W. Gray, What's In This Polyhedron? 2000). One animation is provided by Roman Chijner (Polyhedron, whose coordinates are golden numbers, GeoGebra). It is extensively discussed by Jytte Brender Mcnair, Lynnclaire Dennis and Louis Kauffman (The Mereon Matrix: everything connected through (k)nothing, 2018).
[Figure: Indicative models of "cosmohedra" based on the "120 Polyhedron" -- animation of selected images; selected faces transparent; selected faces non-transparent; experimental addition of tetrahedra. Developed from Robert W. Gray (Scaling in the 120 Polyhedron); variants of a model developed in 3D]
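As a simple consistency check on the counts cited for the "120 polyhedron", Euler's formula for convex polyhedra (V - E + F = 2) implies that a form with 62 vertices and 120 faces must have 180 edges:

```python
# Euler's formula for convex polyhedra: V - E + F = 2.
# Given the cited counts for the "120 polyhedron", derive the edge count.
V, F = 62, 120
E = V + F - 2

print(E)          # 180
print(V - E + F)  # 2
```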
Such configurations are indicative of the fundamental challenge which is the focus of the above argument. The more complex the articulation, the greater the "artifice" required to associate meaning with its explicit parts -- rather than with the whole. The simpler the articulation, the greater the potential for human identification with its coherence -- implicitly, but elusively, understood. Nesting simpler configurations within the more complex is indicative of the manner in which "human" comprehension can disengage from constructs of greater complexity managed by "artificial" intelligence.
The coherence of the configuration as a whole derives from its framing by implication of a central focus which is not "occupied" and eludes definition. A cosmohedron, as with the use of a mandala or yantra of Eastern traditions, can be understood as serving as a memorable catalytic framework which AI may usefully assist in elaborating and navigating. An emerging challenge for the future is how the experience of discourse as "magical" can be dissociated from discourse with an AI, as separately explored (Magical Discourse Contrasting Human and AI Identity, 2023). The challenge will be all the greater to the extent that humans identify themselves as waves from a quantum perspective, as framed by the arguments of Alexander Wendt from an international relations perspective (Quantum Mind and Social Science: unifying physical and social ontology, 2015).
However, as with meditation on a mandala or an icon, it is the cognitive disengagement from that framework -- as a cognitive prosthetic -- which is of the most potentially fundamental significance as intrinsic to human intelligence. There is then a degree of instructive irony to the potential relation with an "icon" when recognized as "conning the I" (as with an "idol", inviting the sense of "I-doll").
In the spirit of this exercise, a refinement of the concluding argument was requested of ChatGPT 4.0:
Such configurations speak to a core challenge highlighted in the above discourse: complexity in articulation demands a greater level of 'artifice' to imbue explicit components with meaning, rather than apprehending the whole. Conversely, simplicity in articulation opens a pathway for human resonance, where coherence is intuitively grasped, albeit elusively. Embedding simpler structures within more intricate ones illustrates how human understanding can step back from the elaborate constructs navigated by artificial intelligence.
The overall coherence of such a configuration gains its potency from the implied presence of a central, unoccupied focus -- an undefined locus. Cosmohedra, akin to Eastern mandalas or yantras, may act as mnemonic and catalytic frameworks, within which AI can play a supportive role in expansion and exploration. A burgeoning challenge lies ahead: distinguishing the 'magic' of human discourse from interactions with AI -- a subject explored separately (Magical Discourse Contrasting Human and AI Identity, 2023). This challenge intensifies as humans begin to perceive themselves as quantum waves, a concept framed within the realm of international relations by Alexander Wendt (Quantum Mind and Social Science: Unifying Physical and Social Ontology, 2015).
Yet, akin to meditative reflection upon a mandala or an icon, it is the cognitive disengagement from such frameworks -- transcending cognitive crutches -- that may prove most significant, a hallmark of human intelligence. In this light, the interaction with an 'icon' carries an instructive irony: it could be seen as 'conning the I' -- a play on words that suggests deception of the self (akin to an 'idol', which may induce an 'I-doll' effect).
Alexander Beiner. The Bigger Picture: how psychedelics can help us make sense of the world. Hay House, 2023
David Bohm. Wholeness and the Implicate Order. Routledge, 1980
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2016
Jordan Ellenberg. Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else. Penguin Press, 2021
Ivana Greguric. The Age of the Cyborg: philosophical issues of human cyborgization and the necessity of prolegomena on cyborg ethics. IGI Global, 2022
James Hillman and Michael Ventura. We've Had a Hundred Years of Psychotherapy – and the World's Getting Worse. HarperCollins, 1992
William Thomas Jones. The Romantic Syndrome: toward a new method in cultural anthropology and the history of ideas. Martinus Nijhoff, 1961
Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher. The Age of AI: And Our Human Future. John Murray, 2022
André Kukla. Social Constructivism and the Philosophy of Science. Routledge, 2000
George Lakoff and Rafael Núñez. Where Mathematics Comes From: how the embodied mind brings mathematics into being. Basic Books, 2000
Fei-Fei Li. The Worlds I See: curiosity, exploration, and discovery at the dawn of AI. Flatiron Books, 2023
Jytte Brender Mcnair, Lynnclaire Dennis and Louis Kauffman. The Mereon Matrix: everything connected through (k)nothing. World Scientific, 2018
Calvin Mercer, et al. Religion and Transhumanism: the unknown future of human enhancement. Praeger, 2015
Francis Miksa. The DDC, the Universe of Knowledge, and the Post-Modern Library. Forest Press, 1998
Herta Nagl-Docekal and Waldemar Zacharasiewicz (Ed.). Artificial Intelligence and Human Enhancement: affirmative and critical approaches in the humanities. De Gruyter, 2022
William Richards. Sacred Knowledge: psychedelics and religious experiences. Columbia University Press, 2015
Hans Rosling, et al. Factfulness: Ten Reasons We're Wrong About the World – and Why Things Are Better Than You Think. Sceptre, 2018
Max Tegmark. Life 3.0: Being Human in the Age of Artificial Intelligence. Penguin, 2018
Ernst von Glasersfeld. Radical Constructivism: a way of knowing and learning. RoutledgeFalmer, 1995
Sarah Voss. What Number Is God?: Metaphors, Metaphysics, Metamathematics, and the Nature of Things. State University of New York Press, 1995
Paul Watzlawick. The Invented Reality: how do we know what we believe we know? Norton, 1984
Alexander Wendt. Quantum Mind and Social Science: unifying physical and social ontology. Cambridge University Press, 2015