This is a reflection on the press release Supercomputers ramp up to tackle global societal problems (Science and Technology Facilities Council, 17 November 2010), which argues that supercomputers of the future, capable of rapidly crunching vast amounts of data far beyond the capabilities of current technology, will spearhead the development of new drugs, new sources of energy and environmental monitoring. This reflection indicates that:
In claiming to focus on "global societal problems" and the "important issues facing society", the initiative follows in a long tradition of approaches to global simulation and world modelling (Balaton Group, Society for Modeling and Simulation International, Sentient World Simulation, Joint Simulation System, and the European FuturICT project). These all raise questions such as: who defines the "global societal problems" and the "important issues facing society" -- and to the satisfaction of whom? How is such definition achieved? Which problems are included in any such definition -- or excluded (possibly without consideration)? When is such definition undertaken -- and where and why?
The concern here is the nature of the questions for which such computing power is being developed, as well as the manner in which questions that some would consider relevant to "global societal problems" are likely to be omitted from such exploration. The point is well made by the exclusion of a particular dimension from climate change considerations by the IPCC. In its Fourth Assessment Report (H.-H. Rogner, et al., Introduction, in Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change) it is stated that:
The Kaya identity (Kaya, 1990) is a decomposition that expresses the level of energy related CO2 emissions as the product of four indicators: (1) carbon intensity (CO2 emissions per unit of total primary energy supply (TPES)), (2) energy intensity (TPES per unit of GDP), (3) gross domestic product per capita (GDP/cap) and (4) population....
The challenge - an absolute reduction of global GHG emissions - is daunting. It presupposes a reduction of energy and carbon intensities at a faster rate than income and population growth taken together. Admittedly, there are many possible combinations of the four Kaya identity components, but with the scope and legitimacy of population control subject to ongoing debate, the remaining two technology-oriented factors, energy and carbon intensities, have to bear the main burden.... [emphasis added]
No further reference is made to this factor by IPCC.
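For reference, the four-factor decomposition named in that passage can be written out explicitly. A minimal rendering follows; the symbols F, E, G and P are chosen here for compactness and are not taken from the IPCC text:

```latex
% Kaya identity as described in the quoted passage (requires amsmath for \underbrace and \text).
% F = energy-related CO2 emissions, E = total primary energy supply (TPES),
% G = gross domestic product, P = population.
F \;=\;
  \underbrace{\frac{F}{E}}_{\text{carbon intensity}} \times
  \underbrace{\frac{E}{G}}_{\text{energy intensity}} \times
  \underbrace{\frac{G}{P}}_{\text{GDP per capita}} \times
  P
```

The force of the emphasis added above is then immediately visible: with P treated as off-limits and G/P expected to grow, only the two intensity ratios remain available as levers for reducing F.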
The concern here is the nature of the "superquestions" meriting answers -- in the light of the engagement of those who would want them answered, and who may well be called upon as taxpayers to finance the development of supercomputers and their use.
It follows from earlier interest in questions and what may be readily neglected (Generating a Million Questions from UIA Databases: Problems, Strategies, Values, 2006; Global Strategic Implications of the Unsaid: from myth-making towards a wisdom society, 2003; Unknown Undoing: challenge of incomprehensibility of systemic neglect, 2008). The challenge with respect to governance has been previously summarized in a somewhat analogous checklist (Governing Civilization through Civilizing Governance: global challenge for a turbulent future, 2008) to which the "superquestions" highlighted here are tentatively linked.
A listing of the most powerful, known (non-distributed) computers in the world, necessarily including "supercomputers", is maintained by the TOP500 project. Currently the Tianhe-1A supercomputer at the National Supercomputing Center in China is ranked as the fastest at 2.566 petaFLOPS, or over 2½ quadrillion floating point operations per second. The concern here is however as much with the intended use of such devices as with their raw processing power.
The Joint Simulation System was initiated in 1995 (Kari Pugh and Collie Johnson, Building a Simulation World to Match the Real World; The Joint Simulation System, January-February 1999, p.2; James W. Hollenbach and William L. Alexander, Executing the DOD Modelling and Simulation Strategy: making simulation systems of systems a reality, 1997).
This has seemingly now morphed, via the Total Information Awareness program, into the Sentient World Simulation (SWS) and is intended as a "synthetic mirror of the real world with automated continuous calibration with respect to current real-world information" with a node representing "every man, woman and child" -- presumably including those responsible for the SWS itself. "Sophisticated physics" were integrated into the simulation in 2007. Regrettably, as might be expected, this is being undertaken entirely in the interests of US strategic defence, on behalf of the US Department of Defense (Mark Baard, Sentient World: war games on the grandest scale -- Sim Strife, The Register, 23 June 2007).
Understandably SWS will necessarily acquire a bias of defensiveness, as argued with respect to ECHELON with which SWS would presumably be functionally integrated (From ECHELON to NOLEHCE: enabling a strategic conversion to a faith-based global brain, 2007). Of interest is how it might be integrated with:
The FuturICT Knowledge Accelerator is an unprecedented multidisciplinary international scientific endeavour focused on techno-socio-economic-environmental systems, with the aim of "unleashing the power of information for a sustainable future". FuturICT is a response to the European Flagship call. It intends to unify hundreds of the best scientists in Europe in a 10-year, 1 billion EUR programme to explore social life on earth and everything it relates to. It proposes to produce historic breakthroughs and provide powerful new ways to manage challenges that make the modern world so difficult to predict, including the financial crisis. The three intended achievements of the FuturICT flagship will be the establishment of:
Aside from individual supercomputers, the increasing power of distributed computer networks is potentially of greater significance -- most obviously in the case of search engine facilities, social networking and intelligence/surveillance systems (see the Wikipedia List of distributed computing projects). Of relevance to the following argument is the extent to which these elicit, through volunteer distributed computing, the engagement of volunteers connected through their personal computers (see the Wikipedia List of volunteer distributed computing projects). A striking example in the case of the security services is offered by Tom Burghardt (FBI Wiretapping of Internet Users -- "All Your Data Belongs to Us": a seamless global surveillance web, Global Research, 21 November 2010). The existence of the Secret Internet Protocol Router Network (Siprnet), a system of interconnected computer networks used by the US Department of Defense and the US Department of State to transmit classified information, has been highlighted by the current release via WikiLeaks of data stored on it (Julian Borger and David Leigh, Siprnet: where America stores its secret cables, The Guardian, 29 November 2010).
There have been many variant simulations of world dynamics -- dating from that originally promoted by the Club of Rome. The Limits to Growth had been based on the World3 model, a computer simulation model of interactions between population, industrial growth, food production and limits in the ecosystems of the Earth (Club of Rome Reports and Bifurcations: a 40-year overview, 2010). Curiously these have tended to fragment into specialized models, notably climate models and economic models. As shown by Graham Turner (A Comparison of the Limits to Growth with Thirty Years of Reality, CSIRO 2007), the original study provoked many criticisms which falsely stated its conclusions in order to discredit it.
On a much more modest scale, but sharing the concern with the use of information to address "global societal problems", is the Encyclopedia of World Problems and Human Potential. This is an initiative developed since the 1970s by the Union of International Associations and Mankind 2000 to interrelate in an online database highly disparate institutional initiatives and concepts. This was notably presented to a meeting of the Global Brain Group (Simulating a Global Brain: using networks of international organizations, world problems, strategies, and values, 2001). This was last funded to develop its biodiversity and multimedia applications through the INFO2000 project of the European Community. It was subsequently positively evaluated for funding through the World Bank INFODEV project in order to augment its application to development.
Three fundamental questions might be asked, given the resources deployed on the initiatives of the past:
The point of the last question has perhaps been most sharply made by the work of a team of 26 scientists (Johan Rockstrom and Will Steffen, Planetary Boundaries: exploring the safe operating space for humanity, 2009) presented at the Club of Rome General Assembly (Amsterdam, 2009). These boundaries are necessarily environmental constraints and boundary conditions, and the focus was on the degree to which they are already exceeded or in process of being exceeded. In discussion of action to constrain the marked tendency to exceed these boundaries, and the initiatives which might be collectively undertaken, the point has been made that a complementary analysis is necessary (Recognizing the Psychosocial Boundaries of Remedial Action: constraints on ensuring a safe operating space for humanity, 2009). A complementary analysis would factor in remedial capacity in the light of the disastrously inadequate track record of collective action, namely the probability that any advocated collective action could be effectively undertaken -- even if agreement was reached on what needed to be done.
In an interview prior to the UN Climate Change Conference (Cancun, November 2010), Britain's chief scientist John Beddington indicated little possibility of any agreement (No hope for climate talks, The Australian, 29 November 2010). Unusually he added: "It's not just climate change... There is also a demographic boom, with world population rising by 70 million a year. We have got major issues with food security interacting with climate change." This is curiously reminiscent of the reflections on the challenge of overpopulation of John Farrands, former Head of the Australian Department of Science and the Environment (Don't Panic, Panic: the use and abuse of science to create fear, 1993).
Whilst supercomputers may indeed significantly enhance "environmental monitoring" (as claimed above), the need for attention to "superquestions" can be brought into focus by asking whether the capacity for collective response at Cancun would be any greater if the predicted temperature rise from global warming were 1 °C per decade, or even 1 °C per year -- rather than between 1.1 and 6.4 °C over the 21st century, as currently estimated.
An appropriate reservation is the tendency for claims to be made for the relevance of sophisticated technology to a wide range of "motherhood" issues which few consider it appropriate to question. Examples include:
The health and financial crises, borrowing metaphors from each sector, serve to indicate many constraints as discussed separately (Remedies to Global Crisis: "Allopathic" or "Homeopathic"? Metaphorical complementarity of "conventional" and "alternative" models, 2009). Difficulties are further exacerbated by the degree to which irrational or nonrational criteria are now a major factor in global decision-making (Cultivating Global Strategic Fantasies of Choice: learnings from Islamic Al-Qaida and the Republican Tea Party movement, 2010).
The capacity to monitor conditions with ever greater precision, and to predict their cumulative effects, is typically understood as 90% of solving any problem -- in the case of technical problems. Practice indicates, however, that it might be better understood as 10% of any remedial response in the case of psychosocial problems. In that case 90% of the challenge lies in focusing attention, eliciting resources and ensuring their effective application.
As discussed separately (Psychosocial energy through a metaphorical technology, 2007) in relation to transformation between epistemological modes, various authors refer to technology seen as metaphor (Robert Romanyshyn, Technology as Symptom and Dream, Routledge, 1989; David Weinberger, Technology as Metaphor, 2000; Jason Ohler, Seeing Technology Through Metaphor, 2005; Tamo Chattopadhay, Technology as a Metaphor: mechanics of power in the global development marketplace, 2005; Jason Balck, et al., The Metaphors of Emerging Technologies, 2006). There is also a case for seeing metaphor as a form of technology (cf. Digital Humanities, Metaphor as Technology: critical thinking through understanding metaphor). The significance of the use of metaphor in this context is well stated by Maurice Yolles (Knowledge Cybernetics: a new metaphor for social collectives, 2005):
Having defined the metaphorical nature of knowledge cybernetics, there is a question of whether any of the metaphorical models provided have any practical value. Whether they do depends on how one sees the nature of metaphors. They are not simple comparitors, and for Brown (2003) they provide a very important way of creating a basis for new knowledge. We do not say that the models given here are true, indeed we cannot say this because of their constructivist nature. They are simply representations that will have to be evaluated and believed if there is evidence that they are practically useful to explain and perhaps to diagnose and intervene in situations that we see.
It may then be asked what humanity might be engaged in enabling "unconsciously" in its quest for ever faster computing power. Some questionable aspects may be seen in terms of the association of national identity with the ever increasing cost of ever-taller buildings and other processes (John Ralston Saul, The Unconscious Civilization, 1995). To the extent that an "unconscious civilization" offers signals to the conscious world through the key terms associated with this quest, some attention could be given to those applied to current supercomputers -- as is done in the selection of new commercial brands:
Potentially more usefully provocative is the conflation of "teraflops" and "petaflops" through the mnemonics of "terra peta", especially given that the global population is estimated to reach anywhere between 5,500 million and 14,000 million by 2100 (5.5 x 10^9 to 14 x 10^9).
However questionable such associations, it is unquestionable that recent use of such technology cannot be said to have constrained the level of disaster associated with the financial crisis triggered in 2008-2009 or the dimensions of the climate change disaster. Potentially more pertinent is the level of risk now enabled by dependence on such technological sophistication. The dangers of automated trading on electronic financial markets have already been made evident. One form is termed high-frequency trading. In the U.S., high-frequency trading firms represent 2% of the approximately 20,000 firms operating today, but account for 73% of all equity trading volume.
The above-mentioned "simulation of a global brain" was primarily understood as an exercise in "global modelling" (Simulating a Global Brain: using networks of international organizations, world problems, strategies, and values, 2001). In reviewing the outcome of this project, which took the form of an Encyclopedia of World Problems and Human Potential, a contrast was made with the equation-focused, number crunching of conventional global models -- with which supercomputers are typically associated (Global modelling perspective, 1995):
As noted in that review, global or world modelling may be understood as the attempt to represent rigorously the economic, political, social, demographic and/or ecological issues and their interdependencies on a global scale. The models map these relationships as explicit mathematical equations which may be "run" forward in time to study their dynamic behaviour. They can thus be used to simulate future developments under a variety of conditions. Such modelling may be considered as the most sophisticated approach to dealing systematically with the nature of, and solution to, world problems.
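To indicate what "running" such equations forward in time involves in practice, the following is a deliberately minimal sketch in that spirit -- two toy stocks with invented coefficients, not the World3 equations:

```python
# A toy, equation-based "world model" run forward in time by simple Euler stepping.
# Two stocks (population, resources) with invented coefficients -- illustrative only, not World3.

def run_model(years=200, dt=1.0):
    population, resources = 1.0, 10.0       # arbitrary initial units
    history = []
    for step in range(int(years / dt)):
        per_capita = resources / population
        births = 0.03 * population * min(per_capita, 1.0)              # birth rate falls as resources per head fall
        deaths = 0.01 * population * (1.0 + 1.0 / (1.0 + per_capita))  # death rate rises as resources per head fall
        depletion = 0.02 * population                                  # resources consumed in proportion to population
        population += dt * (births - deaths)
        resources = max(resources - dt * depletion, 0.0)
        history.append((step * dt, population, resources))
    return history

if __name__ == "__main__":
    for year, pop, res in run_model()[::40]:
        print(f"year {year:5.0f}   population {pop:6.2f}   resources {res:6.2f}")
```

Conventional global models are of this kind, if vastly larger and more carefully calibrated; the output is only as meaningful as the relationships encoded in the equations.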
But, as also noted, following the appropriation of the term "global modelling" by those designing models based on mathematical equations, it might be assumed that no other forms of "modelling" of the global problematique are possible. Yet models of systems can also be explored using analog methods, and a number of disciplines use other kinds of models in order to grasp the nature of complex systems. In the case of chemistry, molecular structures made up of many thousands of atoms are displayed graphically under conditions where the real complexities of the system do not lend themselves to mathematical analysis. The review highlights various "Weaknesses of an equation-based perspective".
The question to be asked with respect to future use of supercomputers is whether they can enable "qualitative" exploration in contrast with "quantitative" exploration -- and what this might imply. Two possibilities merit consideration, in contrast to "number crunching", where the alternative objective is to give primacy to enabling widespread "comprehension" of complexity rather than deriving analytical "solutions" which are meaningless to all but a small elite (typically protective of their special understanding):
Of course mathematically these may indeed be dependent on "equations" and "number crunching". The outcomes sought however -- and the nature of the cognitive engagement with them -- are quite distinct and imply additional constraints on conventional approaches.
The issue is how questions of a higher qualitative order might be framed as a focus for exploration. Pointers may be offered by the insights of other cultures, as suggested by the work of Susantha Goonatilake (Toward a Global Science: mining civilizational knowledge, 1999) as discussed separately (Enhancing the Quality of Knowing through Integration of East-West metaphors, 2000). Those fundamental to the culture of China offer a point of departure (9-fold Higher Order Patterning of Tao Te Ching Insights, 2006).
The complexity of global psychosocial civilization can notably suggest the need to engage with higher (or more fundamental) orders of "twistedness" (Engaging with Questions of Higher Order: cognitive vigilance required for higher degrees of twistedness, 2004). Of related interest is the degree of complementarity between such questions of a higher order and the clues to such exploration offered by mathematics (Functional Complementarity of Higher Order Questions: psycho-social sustainability modelled by coordinated movement, 2004). Again "qualitative" has potential cognitive implications (Cognitive Feel for Cognitive Catastrophes: Question Conformality, 2006; Conformality of 7 WH-questions to 7 Elementary Catastrophes: an exploration of potential psychosocial implications, 2006).
The challenge in deriving "answers" from the use of supercomputers is whether they are comprehensible -- and to whom. Clearly global models, since the time of Limits to Growth (1972), cannot be said to have engaged attention fruitfully (Club of Rome Reports and Bifurcations: a 40-year overview, 2010). The problem in relation to the marvels discovered by mathematics is that again they are comprehensible to only the very few, possibly requiring years for any complex "proof" to be verified by a team of mathematicians (Dynamics of Symmetry Group Theorizing: comprehension of psycho-social implication, 2008). The issue has proven particularly acute in relation to the financial crisis of 2008-2009 and the vulnerability arising from limited comprehension of a risk-assessment formula, in that case the Gaussian copula (Uncritical Strategic Dependence on Little-known Metrics: the Gaussian Copula, the Kaya Identity, and what else?, 2009).
Do the results to be derived from supercomputers call for a form of potentially questionable "hypercomprehension", capable of enabling initiatives insensitive to their own blindspots (Hyperaction through Hypercomprehension and Hyperdrive: necessary complement to hypertext proliferation in hypersociety, 2006)? What are the "superquestions" that could correct for such tendencies, and how is their requisite complexity to be ensured without disenabling the capacity for many to comprehend them?
The following set of questions is necessarily far from comprehensive or appropriately articulated. It emerges from a particular set of biases but in so doing makes the point that there is a need for an information context in which questions can be articulated for consideration. A major limitation of supercomputer initiatives is that their institutional context takes little account of its own possible blindspots, whatever the level of computer sophistication. This engenders a form of groupthink of the kind which characterized the intelligence failure of 9/11, notably described as lacking in imagination (Groupthink: the Search for Archaeoraptor as a Metaphoric Tale, 2002).
It is appropriate to note that reference is made to "superquestions" by Paul Dekker (Optimal Inquisitive Discourse, 2007), in a section on Superquestions and 'Mention Some', within the context of a discussion on the nature of questions in dynamic semantics. Dekker remarks:
...it is important to distinguish the decision problem which an agent faces, which is inherently indexical and subjective, and the objective question which she actually asks....So while it normally does not make sense that people directly express their decision problem...we may realize that the objective questions they actually ask... can originate from such subjective decision problems. I believe this distinction between subjective decision problems and factual questions, and their relation, together with some pragmatic reasoning, also throws light on the so-called 'mention some' problem. (p. 98)
Mathematicians have responded with enthusiasm to a set of intractable problems -- identified in Wikipedia (List of unsolved problems in mathematics) -- raising fundamental issues regarding the nature of "proof", both more generally and specifically as an acceptable mathematical proof. The current relevance of the adequacy of proof is highlighted by the domain in which proof can be offered regarding the existence of entities such as Al-Qaida (Reality and existence, 2010) in a world characterized by both "reality" and "fantasy" (Cultivating Global Strategic Fantasies of Choice, 2010). In addition to the field of mathematics, Wikipedia offers lists of unsolved problems by discipline: Biology · Chemistry · Computer science · Economics · History · Linguistics · Neuroscience · Philosophy · Physics · Statistics. Presented in this way, the unsolved problems of governance, the future of the planet, and the challenges of a higher quality of meaningful life, do not emerge very clearly from this primarily "academic" perspective.
What can be said about the unsolved problems of mathematics as a set -- and of those of other disciplines? And, cognitively, with the inclusion of other disciplines, what do those sets imply as a set of sets?
The argument here is for the formulation of analogous checklists of "intractable problems" -- "superquestions" to which the power of supercomputers could be applied. What is then implied by "problem", "question", "solution" and "proof"? What makes a problem or a question "interesting"? Framed in this way, what is to be understood as an "intractable question"? With respect to governance, do such questions and problems correspond to what is described in the literature as "wicked"?
By what means may they be approached -- by analogy with the approach of mathematicians to the unsolved problems for which supercomputers may be used to supply a form of brute-force "proof" questioned by some mathematicians? How have "intractable problems" emerged in various domains, most notably in the field of mathematics -- where they have acquired iconic status (Fermat's Last Theorem, P versus NP, etc)? What questions might be said to have acquired "iconic status" in relation to governance and meaningful quality of life? What might constitute a "meaningful proof" in a non-academic context -- meaningful to those who supposedly mandate democratic "global strategy-making" and fund the use of supercomputers?
In contrast to the set of unsolved "academic" problems, a relevant characteristic of the qualitative "superquestions" of concern here is their fundamentally "existential" nature -- whether in terms of issues of survival, thrival or sense of self-fulfilment. It is of course the case that such questions might well have been formulated and addressed by religions and theology -- notably absent from the Wikipedia list. Clues from such non-academic perspectives have been explored elsewhere (Navigating Alternative Conceptual Realities: clues to the dynamics of enacting new paradigms through movement, 2002).
The use of the 7 classic WH-questions is consistent with the method advocated by Paris Arnopoulos (Nova Magna Moralia -- physics-ethics-politics: neoclassic concepts for postmodern times, Skepsis: a journal of philosophy and interdisciplinary research, 2002-3) in exploring the possibility of a "neoethics". Following his earlier work, he emphasizes a trilateral pattern of global morality combining physics, politics and ethics: physics because nature is the underlying context of global existence, politics because culture is the highest creation of human evolution, and ethics because it provides the conjunction between the other two. Consequently, neo-macro-morals take into account ecology, ethology and sociology. As he notes, to demonstrate this thesis:
... our method combines the four Aristotelian causes with the W5 (who, what, where, when, why) journalistic questions by reformulating his material, formal, efficient and final causes as what, how, who and why of ethics. To these, for the sake of completeness, we have added five more questions as to where, when, whether, whence and how much. We believe that by answering these questions as correctly as possible, one can explain a subject matter as completely as possible. [emphasis added]
These questions were previously used in considering the nature of any New Renaissance (Missing the New Renaissance? 2010) and in generating and clustering entities in various databases (Generating a Million Questions from UIA Databases: Problems, Strategies, Values, 2006).
The tentative (and necessarily presumptuous) approach taken here is to use as a framework for the identification of such questions an earlier exercise (Governing Civilization through Civilizing Governance: global challenge for a turbulent future, 2008) prepared for the 3rd Annual Conference organized by the Global Governance Group of the New School of Athens (NSOA) on the theme Making Global Governance Work: Lessons from the Past, Solutions for the Future (Athens, 2008). The request was to highlight a set of practical possibilities -- correlating "thinking" with "doing". It is this set which is assumed here to be indicative of a set of underlying problems implying fundamental questions.
As an initial effort, the set of complementary "superquestions" might then be clustered as indicated in the following 4-part table, adapted from Potential response conventionally presented: "Thinking" and "Doing" (Fig. 2), with links to explanatory details and documents in that paper (Governing Civilization through Civilizing Governance, 2008), including discussion of the significance of problematique, resolutique, imaginatique and irresolutique.
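Since the rows of the following tables were generated by mechanically applying the seven WH-questions to each indicative label, that procedure can be indicated by a small sketch (the templates and labels below are simplified for illustration and do not reproduce the exact published phrasing):

```python
# Sketch of the "mechanical" generation of WH-question variants from a set of indicative labels,
# in the spirit of the tables below (templates and labels simplified; not the exact published phrasing).

WH_TEMPLATES = [
    'What is "{topic}"?',
    'Where is {topic} to be sought?',
    'When is {topic} expected?',
    'How is {topic} to be achieved?',
    'Which {topic} is appropriate?',
    'Who can develop {topic}?',
    'Why seek {topic}?',
]

def generate_questions(topic):
    """Apply the seven WH-question templates to a single topic label."""
    return [template.format(topic=topic) for template in WH_TEMPLATES]

for label in ("resolution", "simulation"):
    print(label.upper())
    for question in generate_questions(label):
        print("   " + question)
```

The mechanical character of the result is precisely what is acknowledged below in describing the set as requiring iterative refinement.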
Fig. 1a: Resolutique -- Tentative clustering of "Superquestions" in relation to "Superthinking" items
(Indicative labels, each with corresponding "Superquestions" and "Superthinking" items; bracketed numbers are the Docs linked in the source paper)

What is "resolution"? Where is resolution to be sought? When is resolution expected? How is resolution to be achieved? Which resolution is appropriate? Who can find resolution? Why seek resolution?

Exploratory simulation (gaming)
Superquestions: What merits "simulation"? Where is simulation to be undertaken? When will simulation be undertaken? How is simulation to be done? Which simulation is appropriate? Who can develop a simulation? Why seek simulation?
Superthinking: Designing simulations to elicit (unconventional) options, associating them with openly accessible, attractive gaming to elicit cognitive entrainment [01]

Sustainable dynamics
Superquestions: What is a "sustainable dynamic"? Where is a sustainable dynamic to be found? When will a sustainable dynamic be achieved? How is a sustainable dynamic to be achieved? Which sustainable dynamic is appropriate? Who can develop a sustainable dynamic? Why seek a sustainable dynamic?
Superthinking: Exploring unforeseen potentials of complex dynamics of non-linear systems involving multiple actors [02]

Appropriate organizational architecture
Superquestions: What is "appropriate organization"? Where is appropriate organization to be found? When will appropriate organization be achieved? How is appropriate organization to be achieved? Which appropriate organization? Who can develop appropriate organization? Why seek appropriate organization?
Superthinking: Identifying organizations of requisite complexity, viability and coherence and ensuring their emergence [03]

Recognition of higher order challenges
Superquestions: What constitutes "higher order"? Where is higher order to be found? When will higher order emerge? How is higher order to be enabled? Which higher order is appropriate? Who can enable higher order? Why seek higher order?
Superthinking: Articulating challenges and possibilities beyond conventional polarization (and demonisation) [04]
***
Fig. 1b: Imaginatique -- Tentative clustering of "Superquestions" in relation to "Superthinking" items

What is "imagination"? Where is imagination to be sought? When is imagination expected? How is imagination to be achieved? Which imagination is appropriate? Who can find imagination? Why seek imagination?

Quality and Significance enhancement
Superquestions: What is "qualitative significance"? Where is qualitative significance required? When will qualitative significance be enabled? How is qualitative significance achieved? Which qualitative significance is appropriate? Who can develop qualitative significance? Why seek qualitative significance?
Superthinking: Reframing "quality of life" and "pursuit of happiness"; implications of voluntary simplicity [05]

Experimental alternatives
Superquestions: What is an "experimental alternative"? Where are experimental alternatives found? When will experimental alternatives be enabled? How is an experimental alternative achieved? Which experimental alternative is appropriate? Who can develop experimental alternatives? Why seek experimental alternatives?
Superthinking: Recognizing and monitoring the viability of the widest spectrum of alternatives, in isolation and as necessary complements in any system [06]

Reframing assumptions (engaging with "nasty questions")
Superquestions: What is a "reframed assumption"? Where are reframed assumptions found? When will reframed assumptions be possible? How is assumption reframing achieved? Which reframed assumption is appropriate? Who can reframe assumptions? Why reframe assumptions?
Superthinking: Cognitive vigilance and critical thinking appropriate to detection of vital insights readily suppressed by spin and advocacy of positive thinking [07]

Self-reflexivity and Internalization
Superquestions: What is "self-reflexivity"? Where is self-reflexivity enacted? When will self-reflexivity be possible? How is self-reflexivity achieved? Which self-reflexivity is appropriate? Who can develop self-reflexivity? Why seek self-reflexivity?
Superthinking: Identifying the conceptual challenges of cognitive embodiment of "external" reality and its role in psycho-social sustainability [08]
***
Fig. 1c: Problematique -- Tentative clustering of "Superquestions" in relation to "Superthinking" items

What is a "problem"? Where is a problem to be sought? When is a problem expected? How is a problem to be achieved? Which problem is appropriate? Who can find a problem? Why seek a problem?

Insight capture
Superquestions: What is "insight capture"? Where is insight capture required? When will insight capture be enabled? How is insight capture achieved? Which insight capture is appropriate? Who can develop insight capture? Why seek insight capture?
Superthinking: Designing open processes for gathering, configuring and disseminating insight -- in anticipation of it proving valuable [09]

Enabling and Facilitation
Superquestions: What is "enabling facilitation"? Where is enabling facilitation found? When will facilitation be enabled? How is facilitation achieved? Which facilitation is appropriate? Who can develop enabling facilitation? Why seek enabling facilitation?
Superthinking: Designing processes to identify opportunities for enabling and facilitating innovative, regulatory and "best practice" initiatives [10]

Strategic comprehension and engagement
Superquestions: What is "strategic engagement"? Where is strategic engagement found? When will strategic engagement be possible? How is strategic engagement achieved? Which strategic engagement is appropriate? Who can engage strategically? Why engage strategically?
Superthinking: Identifying the nature of coherent strategic representations capable of eliciting appropriate engagement; challenge of comprehension of complexity [11]

Crisis preparedness
Superquestions: What is "crisis preparedness"? Where is crisis preparedness enabled? When will crisis preparedness be enabled? How is crisis preparedness achieved? Which crisis preparedness is appropriate? Who can develop crisis preparedness? Why seek crisis preparedness?
Superthinking: Identifying implications for social systems of the adaptive cycle, resilience and degrading gracefully under conditions of collapse [12]
***
Fig. 1d: Irresolutique -- Tentative clustering of "Superquestions" in relation to "Superthinking" items

What is "irresolution"? Where is irresolution to be sought? When is irresolution expected? How is irresolution to be achieved? Which irresolution is appropriate? Who can find irresolution? Why seek irresolution?

Credibility ("hearts and minds")
Superquestions: What is "credibility"? Where is credibility required? When will credibility be ensured? How is credibility achieved? Which credibility is appropriate? Who can develop credibility? Why seek credibility?
Superthinking: Rethinking destructive loss of confidence (as recognized by the military); meaning of confidence (as modelled by the financial system) and eroded by tokenism, secrecy and abuse of faith in authorities [13]

"Access" and Feedback to authorities
Superquestions: What is "appropriate feedback"? Where is appropriate feedback found? When will appropriate feedback be enabled? How is appropriate feedback achieved? Which feedback is appropriate? Who can develop appropriate feedback? Why seek appropriate feedback?
Superthinking: Identifying processes to enable meaningful access to authoritative focal points in highly asymmetric conditions (information overload and underuse) [14]

Participation and Social networking
Superquestions: What is "participative networking"? Where is participative networking found? When will participative networking be possible? How is participative networking achieved? Which participative networking is appropriate? Who can network participatively? Why engage in participative networking?
Superthinking: Exploring implications of web-enhanced (social) networking for new approaches to governance of requisite complexity [15]

Dialogue (engaging with otherness)
Superquestions: What is "dialogue"? Where is dialogue enabled? When will dialogue be enabled? How is dialogue achieved? Which dialogue is appropriate? Who can develop dialogue? Why seek dialogue?
Superthinking: Exploring the challenge of "designing in" otherness and disagreement beyond comfort zones (rather than harmonizing them "out") [16]
As noted above, "questions" have potential cognitive implications of mathematical relevance (Cognitive Feel for Cognitive Catastrophes: Question Conformality, 2006; Conformality of 7 WH-questions to 7 Elementary Catastrophes: an exploration of potential psychosocial implications, 2006). Will the future find the current categories of "question", "problem" and "solution" or "proof" to be appropriate? What other formulations might be of greater relevance?
The challenge implied above is that of ensuring an iterative process which "confronts" any proposed set of questions with questions which do not appear to be adequately encompassed by that set. In other words the "set" calls reflexively for a design which enables such challenge and evolves as a consequence of it, as discussed separately (Strategic Embodiment of Time: configuring questions fundamental to change, 2010). In arguing for the need for a qualitative dimension to superquestions, with which people find it meaningful to engage, it is however also useful to consider more philosophical questions.
One such exercise (Clustering Questions of Existential Significance, 2010) clustered 31 questions formulated by Acarya Shambhushivananda Avadhuta (Eternal Philosophy: Questions and Answers). The latter document had been produced for the College of Neohumanist Studies (Sweden), of which he is rector, and in which capacity he is chancellor of the global education network Ananda Marga Gurukula (AMGK), which runs over 1,200 educational institutions in over 80 countries. The merit of the document for this exercise is the general nature of the questions as they variously relate to any concern with religion or philosophy.
That exercise serves to highlight the question of whether the superquestions are assumed to respond to the world of "reality" or to an imaginal world that might be pejoratively described as "fantasy". It is however a characteristic of contrasting approaches to governance, notably at the national level, that political parties readily uphold their own views as "realistic" and frame those opposed to them as "fantasy" -- as is evident in any parliamentary debate. The above table took account of this dynamic in distinguishing between the superordinate clusters of problematique, resolutique, imaginatique and irresolutique (Imagining the Real Challenge and Realizing the Imaginal Pathway of Sustainable Transformation, 2007). The interface between reality and fantasy is also central to current challenges of global governance (Cultivating Global Strategic Fantasies of Choice: learnings from Islamic Al-Qaida and the Republican Tea Party movement, 2010).
As is implied by understandings of "correspondences", a degree of "fantasy" may be vital to any ability to engage with the complexities confronted by governance. This was highlighted in terms of poetry by biologist Gregory Bateson (Steps to an Ecology of Mind, 1972) in explaining why "we are our own metaphor" at a conference on the effects of conscious purpose on human adaptation:
One reason why poetry is important for finding out about the world is because in poetry a set of relationships get mapped onto a level of diversity in us that we don't ordinarily have access to. We bring it out in poetry. We can give to each other in poetry the access to a set of relationships in the other person and in the world that we're not usually conscious of in ourselves. So we need poetry as knowledge about the world and about ourselves, because of this mapping from complexity to complexity.
Clearly there is a case for exploring the relevance of aesthetics and rhythm to formulating superordinate questions -- and for developing the capacity of computers to enable associated insights, as is evident in the visual renderings of complex mathematical objects (Mandelbrot set, Lie group, etc).
Given that the above set is a "mechanical" adaptation of an earlier set, the nature of this iterative process might be fruitfully "tested" in the light of other inputs. Issues which could, for example, be used to challenge the above formulation of superquestions, or superordinate questions, might then include:
The potential of supercomputers and distributed computation networks raises the question as to the role of artificial intelligence in relation to any superordinate questions. The prospects outlined in 2000 by Bruce G. Buchanan, as president of the (then) American Association for Artificial Intelligence, imply a possibility of engaging with such questions (Creativity at the Meta-level, 2000). It might even be asked whether "meta-level creativity" corresponds to a "superordinate creativity" at which the nature of "problem", "question", "solution/proof" and "interest" are integrated in a form of cognitive fusion.
At the time of writing, the termination of a project exploiting artificial intelligence was reported in The Economist (No command, and control, 27 November 2010). The 5-year project ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks) was a multidisciplinary research programme undertaken within the framework of the Decentralised Data and Information Systems (DDIS) Strategic Capability Partnership between BAE Systems and the University of Southampton (Nicholas R. Jennings, ALADDIN End of Project Report, 2010). It was jointly funded by BAE Systems and the Engineering and Physical Sciences Research Council (EPSRC).
The project was concerned with developing mechanisms, architectures and techniques to deal with the dynamic and uncertain nature of distributed and decentralised intelligent systems. Disaster management was chosen as the application domain, given the urgent need for better means of dealing with situations in which a number of actors have to coordinate their activities in the face of significant uncertainty and a highly dynamic context. The project took a total systems view of information and knowledge fusion and considered feedback between sensing, decision making and acting in such a system.
As noted by The Economist, disasters are similar to battlefields in their degree of confusion and complexity, and in the consequent unreliability and incompleteness of the information available:
What works for disaster relief should therefore also work for conflict. BAE Systems has said that it plans to use some of the results from ALADDIN to improve military logistics, communications and combat-management systems... ALADDIN, and systems like it, should help them keep afloat by automating some of the data analysis and the management of robots. Among BAE Systems' plans, for example, is the co-operative control of drones, which would allow a pilot in a jet to fly with a squadron of the robot aircraft on surveillance or combat missions.
The potential with respect to such well-defined command-and-control applications may well highlight the weaknesses with respect to the kind of situations raised by the "superordinate questions" above. This could be well illustrated by the example cited by The Economist with respect to the use of intelligent agents:
In the case of an earthquake, for instance, the agents bid among themselves to allocate ambulances. This may seem callous, but the bids are based on data about how ill the casualties are at different places. In essence, what is going on is a sophisticated form of triage designed to make best use of the ambulances available. No human egos get in the way. Instead, the groups operating the ambulances loan them to each other on the basis of the bids. The result does seem to be a better allocation of resources than people would make by themselves. In simulations run without the auction, some of the ambulances were left standing idle.
The key issue relates to the phrase "No human egos get in the way". The question is how, as currently envisaged, ALADDIN and systems like it can integrate intelligent agents (like drones) with human beings (frequently framed as "drones") -- the latter having a variety of tendencies to challenge systems of command and control and the views of others. The clever title of The Economist summary, "No command, and control", suggests a paradox rather than a resolution.
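As an indication of what such agent bidding involves computationally, the following is a minimal sketch of an auction-style allocation -- with invented site names, severity scores and a hypothetical bid function, not the ALADDIN mechanism itself:

```python
# Toy illustration of agents bidding to allocate scarce ambulances to incident sites,
# loosely following the auction mechanism described by The Economist.
# Site names, severity scores and the bid function are invented -- not the ALADDIN implementation.

from dataclasses import dataclass

@dataclass
class Incident:
    site: str
    severity: float      # higher means casualties in greater need
    distance_km: float   # distance from the nearest available ambulance

def bid(incident):
    """An agent's bid: the value of sending an ambulance, discounted by travel distance."""
    return incident.severity / (1.0 + incident.distance_km)

def allocate(incidents, ambulances):
    """Assign the available ambulances to the highest-bidding incidents."""
    return sorted(incidents, key=bid, reverse=True)[:ambulances]

incidents = [
    Incident("north district", severity=9.0, distance_km=4.0),
    Incident("harbour", severity=6.0, distance_km=1.0),
    Incident("old town", severity=3.0, distance_km=0.5),
]

for chosen in allocate(incidents, ambulances=2):
    print(f"dispatch to {chosen.site} (bid {bid(chosen):.2f})")
```

The sketch also makes the point at issue visible: the quality of the allocation is entirely a function of the bid function, which is exactly where human judgement -- and human egos -- have been designed out.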
Despite the creative optimism of Buchanan, the above-mentioned "superordinate" dimensions would appear to be effectively (if not creatively) designed out of artificial intelligence initiatives. As a fruitful example of meta-level creativity, he notes:
In haiku poetry, perhaps the most creative person was the person a few centuries ago who changed an art form called hokku into an art form called haiku by adding one more constraint on the semantics:
Hokku Syntax:
Hokku Semantics:
Framed in such terms, one might ask what are the creative constraints that could be usefully imposed on the use of supercomputers in the quest for solutions to "global societal problems" and the "important issues facing society"? The cognitive challenge of haiku is not itself without its strategic significance (Ensuring Strategic Resilience through Haiku Patterns: reframing the scope of the "martial arts" in response to strategic threats, 2006). As noted at various points above, a key issue would seem to be the manner in which complexity is encompassed to enable cognitive engagement with it. In the case of haiku this is achieved by referring to "one of the seasons". This is reminiscent of a previously suggested mapping of "reality" and "fantasy" in relation to data, information and knowledge in order to indicate the viable cognitive environment where they intersect (Integrative relationship between reality and fantasy? 2010).
It would appear that there is a form of "disconnect" between the applications for which supercomputers are currently envisaged and the "superordinate questions" which might ensure their relevance to "global societal problems" and the "important issues facing society" -- to the issues with which human beings are ultimately concerned. The capacity to consider such superquestions has in effect been carefully designed out, as have the challenges faced by individuals and collectivities in engaging with complexity.
"Excessive complexity": It is typically argued that so many of the challenges of governance inhibiting effective response to global challenges are "too complex" to be amenable to the quantitative power of supercomputers -- despite the decades of research on complex systems dynamics. As with the exclusion of the population factor from the response to climate change framed by the above-mentioned Kaya Identity, "simpler" problems are then selected for attention in what amounts to an exercise in conceptual gerrymandering. In terms of the larger systemic chellanges, might this be described as "keyhole science"? The problems and questions that are "too complex" for mathematicians are then left to the capacities of those "of more limited intelligence" in positions of authority -- with minimal computer support. In the case of climate change, this will in all likelihood result in the recommendation and implementation of geo-engineering solutions of the most questionable technical simplicity -- since these lend themselves to exploration with the kinds of models amenable to supercomputer calculation (Geo-engineering Oversight Agency for Thermal Stabilization (GOATS), 2008).
Surprise: Despite the widespread use of computers for economic and financial modelling, clearly the methodology is fundamentally defective in relation to the case for superquestions. The point may be succinctly made through the now famous question of the Queen of England to the assembled faculty of the London School of Economics with regard to the financial crisis: Why did nobody notice it? (The Queen asks why no one saw the credit crunch coming, The Telegraph, 5 November 2008). The LSE director of research responded: At every stage, someone was relying on somebody else and everyone thought they were doing the right thing.
However, the point to be made is that neither the superordinate question nor that response is appropriately integrated into the models for which supercomputers are currently designed. The pattern of the response can be usefully elaborated further (Responsibility for Global Governance: Who? Where? When? How? Why? Which? What?, 2008). Given the proven strategic expertise of computers in chess, a strategic distinction might be fruitfully made, using chess metaphors, between:
It might be asked how many teraFLOPS (and misguided dependency on their results) are required to increase the risk of a "terra flop" -- a global crisis of the form increasingly anticipated -- in contrast with the number required to avoid the kind of "flop" which characterizes so many current global plans.
Reformulation: Given the case made for supercomputer capacity, it is useful to explore the argument of physicist Peter Rowlands in a domain where such capacity is currently required (Removing redundancy in relativistic quantum mechanics, 2005):
Essentially, the problem arises from the use of an intrinsically 2-dimensional mathematical structure to represent a 3-dimensional reality; we can, for example, show that the problem is immediately solved and the singularity removed when the intrinsically 3-dimensional quaternions are introduced. Exactly the same kind of reasoning can be applied to relativistic quantum mechanics, where problems emerge from the imposition of a matrix representation. Relativistic quantum mechanics as represented by the Dirac equation and quantum field theory produces at least one type of singularity that appears to be an artefact of the system - the infrared divergence. It also leads to infinities that have to be removed by renormalization, even in the ideal case of free particles where there is apparently no real source for the divergent terms. In addition, there appears to be a great deal of redundancy. For example, QCD calculations using Feynman diagrams derived from the standard gamma matrix representation require ten million calculations for a six gluon interaction, whereas the alternative algebraic approach using twistor space... reduces the calculations required to only six. Even with this method, it is clear that redundancies are still visible; so the question we should ask is whether it is possible to find a coordinate system for the fermionic state which removes redundancy entirely. [emphasis added]
It might then be asked whether the governance problems that are alleged to be "too complex" are a consequence of their formulation using the 2-dimensional matrices (characteristic of the input/output charts of spreadsheets), as is typically the case -- and is notably the case in the 4-fold figure above. Formulated otherwise, the impossible "millions" of calculations might be reduced "to only six" -- a theme explored separately (Geometry of Thinking for Sustainable Global Governance, 2009; Metaphorical Geometry in Quest of Globality, 2009), notably in terms of the implications for identity and engagement (Geometry, Topology and Dynamics of Identity, 2009; Topology of Valuing: dynamics of collective engagement with polyhedral value configurations, 2008). How might the tentative list of superquestions be better formulated, perhaps as "superordinate questions", to render them amenable to sophisticated analysis -- with output suitably constrained to be widely comprehensible?
Avoidance of innovation: Of relevance to the general theme of this argument is that an earlier paper of Peter Rowlands, submitted to the arXiv open archive for physics, was suppressed by the administrators as being "inappropriate" and "of no interest" to users -- and without further explanation (see paper trail at: The Suppression of Dr. Rowlands' Quantum Physics Paper). Rowlands' comment on the process is relevant to the control of innovation in a global knowledge society -- but in this case raises questions relevant to the mindset governing investment in supercomputers:
However, it was novel and original in its approach, which, of course, is the whole reason for doing research in the first place. The arXiv is not a journal with specific stated policies for inclusion. It claims to represent the whole of physics, and it does not say anywhere that it will refuse to publish papers that fall outside the narrow interests of its moderators. This covert censorship is even more insidious in the light of arXiv's pretended policy of being open.
Physics would seem to remain insensitive to a problem of knowledge management recognizable in governance. This insensitivity notoriously engendered assertions such as the following:
This is curious in the light of the widely-cited assertion by Niels Bohr that innovative theories in fundamental physics need to be "crazy enough". Worse still is the possibility that the future will judge the mathematics and physics of today -- convinced as it is of the need for supercomputers -- as being "not even wrong". More curiously, to an entry on the Status of Superstring and M-theory (in a blog entitled Not Even Wrong, 13 December 2008), Samuel Prime responds:
You wonder if the enormous advancements in physics (and science) are prompting researchers to delve into, speculate, and think about superquestions and more adventerous ideas that, by nature, go beyond the currently accepted methods of science. It's hard to imagine science not evolving in its methods.... We are hitting the boundaries of experiment. (More specifically, experiment that can be humanly done.) These many superquestions and supercuriosities show that we are more curious by questions that now seem untestable by our current state of abilities, at least to a high degree of certainty.
What questions might the future consider it appropriate to have been formulated now? The point to be stressed is that such "interesting questions" are of a kind that needs to be used to refine "superquestions" -- if the same mindset is not to be used to define the (above-mentioned) "global societal problems", and the "important issues facing society", for which supercomputers are assumed to enable solutions. It is unfortunate that in practice the greater the sophistication of analysis, as enabled by supercomputers, the less the relevance to the practical challenges of governance and sustaining its credibility.
Unconventional pattern exploration: Following his widely-cited work on a pattern language and his subsequent work on order in nature, Christopher Alexander has suggested a program of research of great potential relevance to governance (New Concepts in Complexity Theory: an overview of the four books of the Nature of Order with emphasis on the scientific problems which are raised. 2003; Harmony-Seeking Computations: a science of non-classical dynamics based on the progressive evolution of the larger whole, International Journal for Unconventional Computing (IJUC), 5, 2009). Its implications, notably in the light of "superquestions", are discussed separately (Harmony-Comprehension and Wholeness-Engendering: eliciting psychosocial transformational principles from design, 2010).
Given enhanced human pattern recognition capacity, whether through geometry or dynamic rhythm, is it the case that superquestions do not need supercomputers -- provided the capacity of the human brain is appropriately enabled to formulate and engage with them? Is the issue rather how to reframe the "coordinate system" which currently introduces redundant singularities? In this respect it is appropriate to look for a mnemonic reframing of FLOPS:
The mnemonic highlights the need to encompass multiple perspectives (effectively "floating" between them), enabling appropriate action (operacy) in their associated (sub)systems.
Enabling collective intelligence: It is striking, at the time of writing, that individual taxpayers (via dubious governmental intermediaries) are being called upon to bail out various "developed" countries (Ireland, following Greece, potentially to be followed by Portugal, Spain and Belgium) -- after previously bailing out banks and corporations "too big to fail". Most curiously, no one has been effectively held responsible for such extreme mismanagement and there is every sign that "business as usual" is expected to continue -- appropriately rewarded by bonuses acknowledged to be obscene, awarded to those taking the most extreme risks (Extreme Financial Risk-taking as Extremism -- subject to anti-terrorism legislation?, 2009). Mismanagement with impunity has become the primary characteristic of governance at the highest level (Emergence of a Global Misleadership Council: misleading as vital to governance of the future? 2007).
The academic community has been significantly complicit in this process, whilst disclaiming any responsibility whatsoever. It is therefore appropriate to ask whether that mindset will ensure appropriate use of supercomputers -- to remedy, rather than exacerbate, the challenge for the population at large in continuing to bail out systemic stupidity, the antithesis of collective intelligence.
The set of tentative superquestions might be usefully understood as derivative of a more fundamental, superordinate question for a global knowledge-based society. This might be framed as: how does rapidly developing computing power, distributed or otherwise, enable integrative dialogue capable of fostering the emergence of collective intelligence?
A danger is that techno-enthusiasm considers this already to be an inherent characteristic of current web interactivity, social networking and the like, when these have yet to demonstrate a quantum leap in the capacity to respond coherently to emerging crises -- as demonstrated on a minor scale by the Gulf oil spill (Enabling Collective Intelligence in Response to Emergencies, 2010) and, at the time of writing, by the situation in Haiti.
"Purpose, culture, process, and people replace strategy, structure, and systems as our superordinate questions."
James Champy, Reengineering Management: mandate for new leadership. 1995
Christopher Alexander:
New Concepts in Complexity Theory: an overview of the four books of the Nature of Order with emphasis on the scientific problems which are raised, 2003
Harmony-Seeking Computations: a science of non-classical dynamics based on the progressive evolution of the larger whole. International Journal for Unconventional Computing (IJUC), 5, 2009
Paris Arnopoulos. Nova Magna Moralia -- physics-ethics-politics: neoclassic concepts for postmodern times. Skepsis: a journal of philosophy and interdisciplinary research, 2002-3
Gregory Bateson. Steps to an Ecology of Mind: collected essays in anthropology, psychiatry, evolution, and epistemology. University of Chicago Press, 1972
Jason Balck, et al. The Metaphors of Emerging Technologies. 2006 [summary]
Bruce G. Buchanan. Creativity at the Meta-level. American Association for Artificial Intelligence (AAAI-2000 Presidential Address) [text]
Karen A. Cerulo. Never Saw It Coming: cultural challenges to envisioning the worst. University of Chicago Press, 2006
Tamo Chattopadhay. Technology as a Metaphor: mechanics of power in the global development marketplace. 2005 [abstract]
Paul Dekker. Optimal Inquisitive Discourse. 2007
John L. Farrands. Don't Panic, Panic: the use and abuse of science to create fear. Melbourne, Text Publishing, 1993
Susantha Goonatilake. Toward a Global Science: mining civilizational knowledge. Indiana University Press, 1999
Jason Ohler. Seeing Technology Through Metaphor. 2005 [text]
Joshua Cooper Ramo. The Age of the Unthinkable: Why the New World Disorder Constantly Surprises Us And What We Can Do About It. Little, Brown and Company, 2009
H. H. Rogner, D. Zhou, R. Bradley, P. Crabbé, O. Edenhofer, B. Hare, L. Kuijpers and M. Yamaguchi. Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [text]
Robert Romanyshyn. Technology as Symptom and Dream. Routledge, 1989 [summary]
Nassim Nicholas Taleb. The Black Swan: the impact of the highly improbable. Random House, 2007
Graham Turner. A Comparison of the Limits to Growth with Thirty Years of Reality. CSIRO 2007 [text]
David Weinberger. Technology as Metaphor. 2000 [text]