9 April 2009 | Draft

Considering All the Strategic Options

Whilst ignoring alternatives and disclaiming cognitive protectionism

- / -


Produced on the occasion of the NATO Summit (Strasbourg, April 2009)
immediately following the G20 Summit (London, April 2009). Also PDF version (0.5 mb)


Introduction
Considering "all the options"
"Listening to everyone" and considering "all the feedback"
Designing out options and feedback
Learnings from democratic voting and polling systems
Misleading feedback solicitation: implications for democracy and consensual strategies
Implications for e-democracy and crowdsourcing
Simulation of communication challenges in democracy and strategy formulation
Information processing insights from large telescope design
Forms of cognitive protectionism in the light of trade protectionism
Framework for exploring attentiveness to new information
Cognitive corruption: deficiencies in feedback processes and identification of strategic options
Options for global governance: the Holbrooke Option Selection Quotient
E-democracy, swarm behaviour and swarm intelligence
Possible extensions to representation of option selection
Cycle-proof regulation of confidence
Conclusion


Introduction

This exploration is inspired by the decision in March 2009 -- immediately before the NATO Summit in April 2009 -- that further military resources should be allocated to the Afghanistan/Pakistan arena as the prime source of "terror" on the planet. This decision was announced despite a succession of flawed assessments over the years by arrogantly overconfident military experts. Richard Holbrooke, the US president's special representative for that area, asserted on CNN (Transcript: David Petraeus and Richard Holbrooke on CNN, 29 March 2009) that in concluding on this policy "all the options were considered" as a means of eliminating terror as the greatest national security threat for the USA:

And in these discussions... I can assure you, and through you everyone who's watching, that every single option was considered, its pros and cons.

The assertion that "all options have been considered" is made relatively frequently to justify questionably repetitive international actions, or the lack thereof. Bill Clinton, as president of the USA, had asserted that "no stone had been left unturned" in exploring options for resolution of the Middle East crisis (pun not intended). A similar unilateral strategic response may be expected in support of geoengineering -- despite disastrous initiatives justified by similar patterns in the past.

Such strategic decisions typically involve "more of the same". This implies that the situation had been inadequately evaluated on previous occasions -- despite recognition of fundamental "intelligence failures" and "lack of imagination". The pattern must therefore be set against the assessments of:

Albert Einstein: To repeat the same thing over and over again, and yet to expect a different result, this is a form of insanity.
George Santayana: Those who cannot remember the past are condemned to repeat it.

This pattern is placed in a wider context here in relation to the emerging process of online solicitation of feedback from large numbers of people ("send in your comments", "join the dialogue", "make your views known", etc). These processes are typically highly misleading in that they seek to engender engagement but are obliged by simple logistics to restrict themselves to extremely selective consideration of what they receive and how they use it -- whatever their claims to the contrary.

Considering "all the options"

When it is stated so categorically, by Holbrooke or Clinton, that "all the options" have been considered, it is typically far from clear:

  • what other options were considered
  • where they are identified
  • what were considered to be the "pros and cons"
  • by whom "pros" or "cons" were identified with respect to particular options
  • how such options were collected for consideration
  • how "option" was defined for the purpose of this process

The obvious response to such questions is that these all touch on matters of "national security" and are therefore the subject of the highest confidentiality. In this light:

  • the population at large is expected to have every confidence that the highest level of expertise has been brought to bear on these considerations (despite the extreme loss of credibility of such expertise as demonstrated with respect to the financial crisis of 2008)
  • no account is to be taken of the official recognition of the disastrously faulty "intelligence failure" and "lack of imagination" associated with the use of this expertise in relation to the "weapons of mass destruction" in Iraq
  • no account is to be taken of the complicity of official thinking and (in)action in processing intelligence in relation to the instabilities of the financial system -- despite the disaster to which these led
  • the possibility of groupthink on the part of those involved in the process of considering "all the options" is not to be considered
  • the possibility of deliberate duplicity on the part of the highest authorities is to be considered ridiculous (if not insulting), despite the prime example of this in the case made for the invasion of Iraq through the solemn assertions of Colin Powell to the UN Security Council in 2003 -- which might be held to be analogous to those made by Richard Holbrooke in the presence of David Petraeus

It is especially intriguing in the case of Afghanistan, following failure of the current strategy there, when the exercise is repeated -- again asserting that "all the options have been considered". What was not considered on the previous occasion with equivalent expertise that enables such an assertion then to be made so confidently? How many times can the situation in Afghanistan be reassessed -- thereby questioning the process of previous assessments -- without recognizing that there is some assumption in the pattern of assessment which is fundamentally flawed?

What is wrong with the associated learning process? A remarkable account of the challenge is provided by Josh Kerbel, Studies Coordinator in the Lessons Learned Center (Office of the US Director of National Intelligence), in Lost for Words: the Intelligence Community's struggle to find its voice (US Army War College Quarterly, Parameters, Summer 2008). Kerbel introduces his commentary as follows:

In the wake of the 9/11 attacks and the Iraq intervention, most of the national security components of the US government have had some -- mostly overdue -- introspective moments. Such reviews can only be considered healthy. For as Sun Tzu, the Chinese military and intelligence theorist, said, Know the enemy and know yourself; in a hundred battles you will never be in peril. The fact is, however, that many of those governmental components did not necessarily like what they saw looking back at them from the mirror. This result was particularly true of the intelligence community, which found its own self-identity issues staring back with an unnerving intensity. To be blunt, the intelligence community, which for the purposes of this article refers mainly to the analytic component, still does not 'know itself.'

As he notes:

For the intelligence community, the linear mechanical metaphor remains the dominant linguistic and consequently mental model; it is the default setting.

Hence the exploration elsewhere of Engaging with Globality -- through cognitive lines, circlets, crowns or holes (2009).

"Listening to everyone" and considering "all the feedback"

Somewhat analogous to "considering all the options" in governance is the process whereby major media-based institutions claim to "listen" attentively to public opinion and to solicit feedback and comment. This is most evident in the case of proactive broadcasting services (BBC, CNN) and newspapers (The Guardian, etc) but is also evident on other scales in relation to the websites of interest groups, proactive search engines (Google, etc) or intergovernmental agencies seeking to claim legitimacy from such "wide consultation". The claim is then variously made that:

  • such feedback is highly valued
  • careful attention is paid to it
  • the quantity of input is (unfortunately) such that individual communications cannot be individually processed so that only an automated response is possible
  • communicants should be assured that their message is transferred to the appropriate department, person, etc for further careful consideration -- and possible action
  • if appropriate this may then engender further communication
  • a possible alternative, if the communicant so desires, is for the communication to be directly displayed on a blog, etc

To ensure the viability of the process, it typically involves:

  • registration of the communicant in order to be able to make the comment
  • provision of an e-mail address, possibly with some indication of field of interest, etc -- possibly to enable future promotional and publicity communications to the communicant, namely a means of building an address list for such purposes
  • automated acknowledgement to that e-mail address -- whether to confirm identity or to enable non-abusive login

Designing out options and feedback

Communicant perceptions: Less consideration is given to the logistics of the feedback process, as perceived by the communicant:

  • resemblances to the more irritating features of multiple choice voice-mail menu systems, ultimately involving lengthy and expensive queuing (especially in the case of long-distance calls), etc
  • lack of transparency regarding explanations for delays (or failures) in confirmation processes
  • use of feedback forms which do not provide the communicant with any record (or proof) of the communication process (unless a copy is deliberately made into some other document) whilst allowing the receiving service to maintain a complete trace of the communication from that e-mail address
  • degree of use by the system of "blacklisting" communicants -- readily framed as abusive -- to speed the process and to avoid engaging with problematic communicants
  • lack of facility for communication regarding any problems in registration and communication, notably any form of appeal processes
  • developing sense that the communication is simply sent to a "black hole" in cyberspace, possibly immediately trashed

Exclusion criteria: Of particular interest are the criteria for any software algorithm designed to filter or channel incoming communications automatically. They may well be designed to label certain communications as inappropriate and to be trashed. This may be done, as with "nanny" programs, by the detection of problematic expressions or keywords. The content of such word lists is necessarily confidential. The receiving system may also use one or more blacklists to exclude communications from particular e-mail addresses, IP addresses or countries -- again without such information being available, or known to more than a very few. If one of the blacklists is maintained internally, the manner in which a communicant is placed on that list -- possibly automatically -- is also neither known nor the subject of any appeal. Given the increasing concern with security issues, any such blacklisting may be done on the basis of criteria supplied by security consultants possibly acting for government agencies with special mandates.
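The kind of opaque automatic filtering described above can be caricatured in a few lines of code. This is a minimal sketch only: the keyword list, blacklists and addresses are entirely hypothetical stand-ins for the confidential criteria such systems never disclose.

```python
# Hypothetical sketch of an opaque feedback filter: a "nanny" keyword
# list plus blacklists, with no trace returned to the communicant.

BANNED_KEYWORDS = {"fraud", "lawsuit", "boycott"}    # confidential in practice
BLACKLISTED_SENDERS = {"critic@example.org"}         # never published
BLACKLISTED_COUNTRIES = {"XX"}                       # supplied by "consultants"

def admit(message: str, sender: str, country: str) -> bool:
    """Return True if the communication reaches a human reader; False
    means it is silently trashed, with no appeal and no acknowledgement."""
    if sender in BLACKLISTED_SENDERS or country in BLACKLISTED_COUNTRIES:
        return False
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not (words & BANNED_KEYWORDS)

# The communicant receives the same "thank you for your feedback" either way.
print(admit("Thank you for the programme", "fan@example.com", "GB"))  # True
print(admit("This looks like fraud to me", "fan@example.com", "GB"))  # False
```

The point of the sketch is that both outcomes are indistinguishable from the communicant's side: the function's return value is visible only to the operator.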

Given that the priority for the service is to be able to claim to be "listening", any issues held to be problematic can be framed as "marginal" and simply ignored as irrelevant irritants. The system is necessarily designed to solicit and cultivate interaction with a spectrum of "average" communicants and to avoid the costs associated with "long tails".

Unsubstantiated claims: The service may publicly make unconfirmable claims that "hundreds" (if not "thousands" or more) have responded to any specific request for feedback. The implication is that from this process communications have been carefully clustered in such a way that the most representative comments can be further publicized -- due consideration having thereby been effectively given to all. Since, if done as claimed, this is potentially very labour intensive (or beyond the allocated budget), there is every motivation simply to "pick out" a few messages and imply that some intelligent processing has been done. This mechanism may be seen in its simplest form when participants in a meeting are invited to submit written messages to the "chair" -- a few being selected (or apparently so) to be addressed by the "panel" as judged convenient by the "chair" (within the time constraints).

Whereas "libellous" and discriminatory statements, "misleading advertising" and "fraudulent trading practices" are subject to a more intensive degree of monitoring, there is no mechanism to verify the integrity of electronic feedback processes. These processes are effectively introduced and used as an extension of promotional and public relations processes. Any exaggerated claims that are contested can then be legally legitimated as "puffery".

Institutional abuse: A well-documented example of institutional abuse of such a process is the so-called Blue Peter competition-rigging scandal at the BBC in 2006. Phone-in feedback was solicited from child viewers, at some cost to them, but the "responses" of such communicants were ignored in preference to fake responses fabricated as "more suitable" by the editorial service of the BBC programme in question. This is even more abusive when the purpose is to select correct answers to a phone-in quiz for which prizes are awarded.

More generally however, there is no mechanism to confirm that such logistically convenient abusive practices are not endemic and -- even when isolated cases are detected and corrected -- that the policy does not continue in some other form (as at the BBC?). Claims are made to the contrary and communicants are invited to have every confidence in a trustworthy institution -- such as the BBC or CNN. But there are no robust checks and balances as might be otherwise expected.

Spectrum of admissible communicants: The issue is what kinds of anomalous communication the feedback service wants to handle and how to frame and design out other forms of communication. Of particular concern is the range of problematic communications:

  • communicants with abusive messages in terms of content and argument
  • communicants exploiting the process for ends other than intended by those offering the service (identity theft, spammers, etc)
  • communicants with content offensive to "our sponsors" or the editorial ambitions of the service (whether explicit or not)
  • communicants that have been placed on some form of "blacklist" for reasons not subject to justification or appeal (by analogy with the "no fly list" precluding some people from use of airlines in the USA), especially where it is quite unclear how people get listed in this way (possibly by analogy to the controversial retention of records in crime databases of those who have been innocent witnesses to crimes, or interviewed as potential suspects at some time, even in the distant past).

Transparency: Of particular interest is the fact that there is no obligation to indicate how many communications have been received on a given topic as distinct from how many are used in some way. Some systems indicate, seemingly transparently, how many registered comments have been received -- but there is rarely any sense of how many have been "removed" because they infringed some unexplained editorial constraint or "objection" from others. Processing inconveniences are presented as appropriate editorial sensitivity in what are increasingly non-transparent systems -- purportedly emblematic of emergent democratic processes of the future.

Fig. 1: Illustration of option/feedback selectivity processes from an authoritative perspective

Project methodology for the Assessment of Future National and International Problems (reproduced from a document of that title, published by the National Science Foundation, 1977, NSF/STP76-02573)

This methodology was criticized in the commentary justifying the more open methodology used for processing the 56,564 problems profiled in the Encyclopedia of World Problems and Human Potential

Although there is agreement that there are many problems and that many are serious, little concerted effort has been made to determine how many problems there are. Such efforts as have been made have generally been limited to identifying major or critical problems, usually guided either by political expediency or by the particular objective of a major agency.

For example, as illustrated by the diagram, one study for the President of the USA, resulting in 6 problems analyzed in detail, was based on a procedure whereby 1000 problems were deliberately filtered through a succession of phases down to 100, to 50, to 20 before the "final sort and aggregation". Only the final 6 were submitted to the President. No further mention is made of the 994, whatever their importance to particular constituencies. At that time UNESCO engaged in an exercise to identify the "major world problems" with which it was concerned and identified 12 (Medium-Term Plan 1977-1982. UNESCO, 19 C/4).
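The successive filtering illustrated by the diagram (1000 down to 100, 50, 20 and finally 6) can be sketched as a pipeline in which the discarded 994 simply vanish from the record. The "salience" scores below are invented for illustration, standing in for whatever criteria (political expediency, agency mandate) were actually applied.

```python
import random

random.seed(0)
# Each "problem" gets an arbitrary salience score as a stand-in for
# the undisclosed criteria used in the actual filtering exercise.
problems = [(f"problem-{i:04d}", random.random()) for i in range(1000)]

def filter_top(candidates, n):
    """Keep the n highest-scoring items; the rest are never mentioned again."""
    return sorted(candidates, key=lambda p: p[1], reverse=True)[:n]

stage = problems
for cutoff in (100, 50, 20, 6):          # the NSF-style funnel
    stage = filter_top(stage, cutoff)

submitted = [name for name, _ in stage]  # the 6 reaching the President
discarded = len(problems) - len(submitted)
print(len(submitted), discarded)         # 6 994
```

Whatever scoring function is substituted, the structure of the funnel guarantees that 994 problems leave no trace in the final report -- which is precisely the methodological criticism made above.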

Related methodological issues were also discussed in Representation, Comprehension and Communication of Sets: the Role of Number (1978) where Fig.1 was reproduced.

Selection bias in politics: An unsuspected bias is implicit in the results of a recent study of why different countries favour politicians of particular disciplinary backgrounds -- and why some professions are so well represented in their political systems. To find out, The Economist trawled through a sample of almost 5,000 politicians in International Who's Who, a reference book, to examine their backgrounds (Selection bias in politics: there was a lawyer, an engineer and a politician... The Economist, 16 April 2009). The most common professions worldwide were: law, business, diplomacy, journalism, economics, medicine, academia and engineering. The study comments on the implication of any encounter between the USA (favouring lawyers) and China (favouring engineers). Implicit in such disciplinary bias, however, is the manner in which information is selected and prioritised -- on which The Economist does not comment.

Learnings from democratic voting and polling systems

Since such feedback mechanisms are increasingly presented as a desirable characteristic of an open, responsive, transparent society, it is appropriate to note the challenges faced by the forms of feedback that preceded them -- and remain of vital importance:

  • surveys (opinion polling, etc): For these to be credible, some indication of the methodology and sample size is typically required -- consistent with published standards. Since such results can easily be claimed rather than demonstrated, confirmation can only be achieved through surveys by others providing comparable data. The reputation of any institution discovered to be engaged in some degree of fraudulent polling would be severely damaged -- although proving such fraud may be difficult, and assertions to that effect are readily denied. Polling institutions are not regulated in any way, whatever the standards to which they purport to adhere; they are "self-regulating". As with the BBC Blue Peter scandal, damage limitation would typically enable the institution to survive -- possibly with only token rectification of the deficiencies. A typical problem for surveys conducted to test scientific hypotheses is the manner in which anomalous data points are "removed" from the results in order to "clean up the data" and "provide a good result". In the analysis of scientific fraud, this may lead to such research being discovered to have been fraudulent.

  • voting systems: The variety of abuses possible with democratic voting systems has resulted in the process being monitored by teams of "observers" in countries where the potential for abuse is purportedly higher. In the USA the process is typically monitored by teams of lawyers from the opposing parties -- given a history of abuse, notably with the introduction of electronic voting machines. Many of the issues to which they are sensitive take analogous form in electronic feedback systems.

Given the vulnerability to abuse in both cases, it is clear that the current approach to web feedback and voting is relatively naive -- if not extremely so -- and vulnerable to every form of abuse. An interesting contrast is provided by the process whereby an individual sends a communication (in the form of an envelope or a package) to a destination by courier service. A key feature of this process is the provision of a tracking number enabling the movement and receipt of the communication to be followed and confirmed. What kind of feedback service would result from the incorporation of such facilities enabling those supplying the feedback to trace its movement to its final destination or to determine where it might have been "held up"? As things stand, there is a developing sense that there is every probability that feedback communications are freely trashed when they are deemed inappropriate by the receiving party according to criteria that are seldom made clear.
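A courier-style tracking facility for feedback, as suggested above, might look something like the following sketch. The class, statuses and tracking-number format are purely illustrative assumptions, not any existing system's API.

```python
import uuid
from datetime import datetime, timezone

class FeedbackTracker:
    """Toy model of courier-style tracking for feedback: every submission
    gets a tracking number, and every transition -- including "trashed" --
    is recorded and queryable by the sender."""

    def __init__(self):
        self._log = {}

    def submit(self, message: str) -> str:
        ref = uuid.uuid4().hex[:12]          # the "tracking number"
        self._log[ref] = [("received", datetime.now(timezone.utc))]
        return ref

    def advance(self, ref: str, status: str) -> None:
        self._log[ref].append((status, datetime.now(timezone.utc)))

    def trace(self, ref: str):
        """What courier services offer and feedback systems do not."""
        return [status for status, _ in self._log[ref]]

tracker = FeedbackTracker()
ref = tracker.submit("Please reconsider the policy")
tracker.advance(ref, "routed-to-department")
tracker.advance(ref, "trashed")              # at least it would be visible
print(tracker.trace(ref))  # ['received', 'routed-to-department', 'trashed']
```

Even the "trashed" outcome is recorded here: the design choice is not to prevent disposal but to make disposal visible, which is exactly what current feedback systems avoid.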

USA: Citizen's Briefing Book
The challenge of using any form of e-democracy to enable such a process is illustrated by the Citizen's Briefing Book (2009) -- a compilation of recommendations for change in the USA, made electronically to Barack Obama in anticipation of his inauguration. Estimates variously indicated that 400,000 suggestions were proffered by over 100,000 respondents, with some 1.4 million votes on various proposals. The most popular proposal, ending marijuana prohibition, was dismissed out of hand by the recipient as not worthy of serious discussion (despite its current active consideration in other countries of the region). The contents of the exercise are no longer electronically accessible. This gives a strong sense of how meaningful such consultation is considered to be in practice. It is of a piece with the rules for invited commentators on the website of the New Scientist, whereby any comment not deemed to be based on fact is deleted (New Scientist house rules on commenting, 2009).
South Africa: Presidential hotline goes into meltdown

The Democratic Alliance says it tested the hotline 46 times during the past three weeks but has been able to register only four complaints, and has not been given a reference number for any of them. The party leader, Helen Zille, said that 42 calls failed to get through and its callers spent 572 minutes on hold - a total of nine hours and 32 minutes.

"When President Zuma launched the hotline, he said he wanted to create an 'ethos of accountability'," Zille said. "During his state of the nation address he undertook to treat every complaint as if it were the only one. As every day passes, that commitment rings increasingly hollow....At the end of the first week, it was clear the presidency had designed a system that could not meet the requirements of the undertaking. By the second week, things had not improved. And, by the third week the situation is completely inadequate and, at least in practical terms, dysfunctional."

On its first day of operation, manned by 43 liaison staff at the presidential headquarters, the free hotline took 27,000 calls. More than 2,500 were received in the first hour, increasing to 7,000 in the third hour.

(David Smith, Jacob Zuma's presidential hotline goes into meltdown Complaints line overwhelmed by angry callers, The Guardian, 8 October 2009)

Misleading feedback solicitation: implications for democracy and consensual strategies

Some general conclusions may be drawn both with respect to policy consultation in which it is claimed that "all options are considered" and electronic feedback solicitation ("we want to hear your views"):

  • a prime purpose may well be:
    • tokenism, namely creating the appearance of listening and consultation, possibly even as an exercise in procrastination
    • use of feedback mechanisms as a means of building up promotional mailing lists
    • use of feedback mechanisms as a means of building up lists of troublemakers for other purposes
  • whether the view expressed by a particular communicant is taken into account in some way may be as improbable as winning a lottery -- well-recognized as being for the "numerically challenged". In the lottery case, however, the degree of attention given to the drawing mechanism is impressive -- even though the actual "winners" are unknown and not subject to confirmation
  • there is every opportunity for
    • selecting supportive feedback and excluding critical feedback, with no obligation to take any account of inconvenient feedback
    • "sponsors" to pay for "placement questions" -- notably in the light of the "cash for questions" scandal in the UK, even involving members of parliament.
  • procrastination, by claiming that a wider selection of views is being sought (through due process) to frame appropriate action, when one purpose is to waste the time of those solicited and to induce a sense of expectation in them

Binding contractual agreements are increasingly significant in determining constraints on public policy, as indicated by the unquestionable respect for the contractually-binding exorbitant executive payouts to those responsible in some way for major corporate failures in 2008 -- leading to the financial crisis and considerable public anger.

Curiously no such constraints are attached to the electoral or other commitments of politicians and policy-makers. Whereas legislation provides for commercial malpractice taking the form of "misleading advertising" or "fraudulent trading practices", no such provisions are envisaged for those involved in policy-making. They are free to make any claims (possibly to be excused as legitimate puffery) and are not required to substantiate them. Indeed failure to do so may be justified as being a matter of "national security". Furthermore they are typically well protected by forms of parliamentary immunity (or its diplomatic analogue) against being held legally to account for any abuse.

Challenge of effective communication as symbolized by the European Parliament hemicycle
-- a case for exploratory simulation of representation and misrepresentation?

Implications for e-democracy and crowdsourcing

Since EU elections by direct universal suffrage began 30 years ago, turnout has dwindled progressively. On the last occasion the average turnout was less than 46 percent. To raise public awareness and encourage voter engagement in anticipation of the European elections of June 2009, Europe's voters are being invited by the European Parliament to step into room-sized, interactive multimedia cubes in prominent public places, such as city centre squares. Within such a "Choice Box", the intention is to prompt people to record a video message, giving their views and opinions on what the European Parliament should do. Is the Choice Box an unintended metaphor of the in-the-box thinking by which authorities frame the choices they are prepared to consider?

Might this offer an avenue for Tony Blair to accelerate his ongoing campaign to be President of the EU (Blair speech sparks EU presidency speculation, Euractiv, 14 January 2008; Tony Blair for president of Europe? The Guardian, 9 January 2009)? Perhaps by encouraging his supporters to express this "choice" in multiple video recordings? How will these then be processed and used?

Some video recordings will be screened daily outside but it is quite unclear how the views so expressed in (hundreds of) thousands of video recordings are then to be processed for the Parliament. Presumably a small selection will be made -- purely as an exercise in public relations -- some to be shown in the hemicycle above? Will they be uploaded to YouTube and tagged by enthusiasts -- as a means of communicating effectively with Members of Parliament? Or perhaps rendered succinctly into text messages for them via Twitter?

Would such representatives be satisfied by presenting their own views for their peers in the hemicycle by a similar process? Would that assist in the consideration and selection of strategic options -- especially if representatives in the hemicycle were to Twitter?

How effectively can the views of millions of people be "represented" through such mechanisms -- or through "voting" -- with so many poorly explored constraints on meaningful communication? Or is the undeclared purpose merely tokenism and pretence? At what cost?

Technically there is considerable potential for widespread electronic consultation, leading to enthusiastic proposals for e-democracy, participatory democracy and crowdsourcing. The basic challenge involves use of the scarcest resource -- the "attention time" of those charged with processing large quantities of input, notably elected representatives -- as discussed elsewhere (Practicalities of Participatory Democracy with International Institutions: Attitudinal, Quantitative and Qualitative Challenges, 2003; Possibilities for Massive Participative Interaction: including voting, questions, metaphors, images, constructs, melodies, issues, symbols, 2007).

As Head of the Unit for e-Government of DG Information Society, Paul Timmers (Coherent Agenda for e-Democracy: an EU perspective, 2005) outlines the initial efforts at Interactive Policy Making for input to policy-making through spontaneous feedback and online consultation. As an example, an internet-enabled consultation resulted in the collection of 6,500 contributions published on a Commission website for full transparency to show which organization, company or individual had advocated which amendments. This does not clarify how final decisions were made in the light of this consultation. Furthermore it is not clear how the evolution of such a possibility (in 2005) is related to current decision-making with the benefit of Choice Box input as a means of popular consultation (in 2009).

It is intriguing that the hemicycle is an architectural configuration dating from the construction of arenas in classical Greece and Rome. Whilst these were admired for their acoustics, the acoustics of the European Parliament are deplored. Is this too a metaphor of the unexplored challenges of communication -- even amongst those charged with considering strategic options and selecting amongst them? Curiously electronic communication equipment was only permitted in the French model of that hemicycle, in the Palais Bourbon, in 2008 (Les ordinateurs débarquent dans l'Hémicycle, avril 2008). Whilst a major proportion of the EU administrative budget is devoted to interpretation and translation -- perhaps to be augmented for Choice Box feedback -- it would appear that no significant funding is devoted to other challenges of the communication process, if they are recognized.

The technical possibilities have been widely, enthusiastically and insightfully debated (Frank Bannister, e-Democracy: an information systems perspective; Steven L. Clift, Government and Democracy: representation and citizen engagement in the information age, 2004). But beyond the unquestionable technical feasibility, where are the above issues given due consideration through meaningful simulations? At the World e-Democracy Forum? Where is the feasibility of simulating the democratic challenge considered -- despite recognition of the democratic deficit and voter apathy? Curiously the point has been ironically made that (as a sham) many processes of democracy are already, as they stand, "simulations" of what they might become (Dmitriy Yefimovich Furman, Simulation of Democracy Seen as Possibly Developing Into Real Thing, Nezavisimaya Gazeta, 11 April 2007).

At the same time, it is clear that aspects of the process can give rise to satisfactory outcomes in some cases -- in the form of open source projects, including the development of software, hardware and databases, otherwise known as community-based design or distributed participatory design. An unusual example is the (playful) collective design of dynamic mechanisms by the Soda Constructor community -- suggesting organizational and strategic analogues (Animating the Representation of Europe: visualizing the coherence of international institutions using dynamic animal-like structures, 2004). The successful extension of such paradigms to community democracy has yet to be demonstrated -- if only as a test of assumptions about alternative social forms. The operation of virtual or cyberparliaments remains to be effectively explored by simulation (The Challenge of Cyber-Parliaments and Statutory Virtual Assemblies, 1998; Using Research in the Participative Orchestration of Europe, 2004).

In terms of the challenge for representatives of managing information overload, there is a vital need to simulate how this is handled, especially when some vital information is excluded from consideration by that process. It is indicative that one of the processes typical of MBA educational programmes is to give students far more information than they can possibly process (each evening) in the expectation that they will develop techniques of selectivity that do not select out vital anomalies (when tested on the following day). The challenge also lends itself to analysis in terms of techniques of information clustering and the attention span with respect to clusters exceeding a certain size or requiring "drill down" beyond a certain level. These issues are discussed in Representation, Comprehension and Communication of Sets: the Role of Number (1978). The challenge calls for innovative use of mnemotechnics (In Quest of Mnemonic Catalysts -- for comprehension of complex psychosocial dynamics, 2007).

Simulation of communication challenges in democracy and strategy formulation

The communication challenge can be expressed mathematically and lends itself to computer simulation. Such simulation is to be contrasted with use of the term to refer to mock-parliaments or model parliaments, as is the practice of the EYP Parliament Simulations Programme or the internet game Become a Member of the European Parliament. The computer simulation could be defined in terms of:

  • number of issues that merit articulation, whether in the opinion of experts or the voting public, namely tax payers -- as discussed separately (Global Solutions Wiki, 2009)
  • number of topic "threads" in which these can be appropriately clustered
    • probability of manipulative clustering into such threads to reduce the challenge of the variety of issues and to exclude those for which no framework currently exists ("conceptual gerrymandering")
    • degree of explicit interrelationship of threads to reflect a coherent pattern of concerns
    • manner of linking between issues, within or between threads
    • manner of "weaving" the threads into a coherent "carpet", especially if they are variously "coloured", calling for a sense of design
    • determining the perspective enabling comprehension of the whole design
    • determining what is not there and how to elicit it, possibly from resistant sources to avoid an inappropriately skewed result
  • number of voters per representative in any constituency
  • percentage of electorate engaged in endeavouring (or desiring) to communicate with the representative, physically or electronically, frequently or infrequently
    • proportion of electorate communications processed administratively by a representative within any period (with the best will in the world) -- notably in the light of the quantity the representative is prepared to read and their own reading constraints
    • proportion of such communications acknowledged minimally, and/or substantively, to the satisfaction of the constituent (with the best will in the world) -- assuming (possibly) the representative has a staff allocated to that processing
    • proportion of information absorbed by the representative -- other than in a gross form (as being "for" or "against") after any "clustering" into threads
    • proportion of relevant documents, formally issued for consideration by the representative, that can be appropriately processed
    • proportion of information so absorbed that misrepresents the view of those submitting it (in seeking to present new possibilities or to raise questions) -- especially in the light of what can be pre-processed by filterers/synthesizers
    • proportion of information formed into some initiative by the representative to be presented through due process (such as in the parliamentary hemicycle above) -- notably in relation to the concerns of other voices
    • degree to which the above information/communication processes are technically enabled rather than constrained by cumbersome hardcopy technologies -- and the extent to which the representative is empowered to benefit from these advances in response to an increasingly empowered electorate
    • level of confidence sustained in the ordinary constituent in such communication processes or in their result
  • degree to which features of the process should appropriately be labelled as tokenism -- but acclaimed as exemplary due process
  • degree to which representatives succumb to inappropriate pressures -- as in case of the UK House of Commons "cash for questions" affair, or to some non-financial variant
  • proportion of "back-channel" input, supposedly received by the representative according to due process -- in the light of the pattern highlighted by the recent BBC Blue Peter phone-in scandal?
  • degree to which representatives take their role seriously rather than as a source of perks -- as recently highlighted in the case of the European Parliament regarding representative absenteeism?
  • effect of additional constraints, beyond the case constituted by the European Parliament, with respect to a World Parliament or the proposed United Nations Parliamentary Assembly -- in the light of any preferred cultural communication patterns in the General Assembly of the United Nations
  • constraints in the manner of selection (or deselection) of options for consideration in the light of the range of alternatives of which voters are aware
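The multiplicative attenuation implied by such parameters can be sketched in a few lines of code. All proportions below are hypothetical, chosen only to show how plausible-seeming stage-by-stage losses compound into a minute overall throughput:

```python
# Minimal sketch of the communication-attenuation simulation suggested above.
# Every proportion is hypothetical -- the point is the compounding, not the values.

def throughput(stages):
    """Multiply together the proportion surviving each successive stage."""
    result = 1.0
    for name, proportion in stages:
        result *= proportion
    return result

stages = [
    ("electorate engaging with representative",    0.10),
    ("communications processed administratively",  0.50),
    ("communications substantively acknowledged",  0.30),
    ("information absorbed beyond for/against",    0.20),
    ("absorbed without misrepresentation",         0.50),
    ("formed into an initiative via due process",  0.10),
]

t = throughput(stages)
print(f"fraction of constituent insight surviving: {t:.5f} ({t:.3%})")
```

Even with generous assumptions at each stage, the product falls rapidly towards zero -- a quantitative expression of how little may "get through" the democratic decision-making process.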

Curiously no consideration is given to such challenges to assumptions about representative or participatory democracy. As with the absenteeism issue, they may even be subject to internal sanction on the person raising the issue. There is an unexamined assumption that the process of filtering input can be handled appropriately and transparently by some form of self-regulation (by worthy people within a worthy institution) whose constraints have not been explored, notably in simulations.

Also of relevance, despite the feasibility, is the apparent lack of comparative simulation of the variety of voting systems whereby representatives may be (s)elected. Such simulations, suitably adapted for the media, would inform debate on alternatives when such are under consideration (as with the constitutional crisis in the UK in 2009).

The set of constraints might be combined to constitute a measure of what is absorbed by representatives from the pool of concerns, expertise, insights and suggestions of the people represented. Bluntly:

  • What gets through the democratic decision-making process?
  • How does any meaningful outcome emerge?

A simulation might usefully lead to the elaboration of a Group Think Index (GTI) -- a measure of the ability of the process to break out of reinforcement of comfortable, constraining patterns of the past. Potentially as vital as GDP as an indication of capacity to respond to challenges of the future, a GTI would be a measure of the degree to which the democratic process enables emergent insight rather than repressing it.

Information processing insights from large telescope design

One useful approach to analyzing the challenge of information processing is indeed through conventional forms of simulation. Another might be developed as an analogue based on the challenges of designing very large telescopes, such as those associated with the European Southern Observatory.

As an analogue, the gathering of light from distant parts of the universe (with maximum resolution and minimum distortion) has called for telescope mirrors of ever increasing diameter. The design of such mirrors is extremely challenging because of their tendency to distort under their own weight -- thereby losing the appropriate curvature and the ability to focus. One design approach is to ensure that a single large mirror adjusts its shape a thousand times per second to compensate for distortion. Current innovation lies in the use of a segmented mirror, namely an array of smaller mirrors designed to act as segments of a single large curved mirror.

It might be argued that the configuration of the parliamentary hemicycle (above) constitutes a design choice that is likely to have given inadequate attention to the challenge of avoiding "distortion" and maximizing "resolution" in the collection of "light" from very distant members of the population. In particular, to the extent that members of parliament function like "hexagonal mirrors", there is a real challenge to ensure that they are appropriately adjusted to one another in the hemicycle to constitute a "curved mirror" capable of focusing and achieving resolution. The pressure to design ever larger telescopes is an indication of what should be the pressure to design more adequate arrays of members of parliament to achieve appropriate resolution of distant "objects" (whether issues or objections). The possibilities for such design are ever more feasible using web technology to configure arrays of decision-makers in response to macro-systemic challenges -- as intimated in an early study by Joel de Rosnay (The Macroscope, 1979).

To the extent that collective strategy development remains heavily dependent on optical metaphors in clarifying any "vision" for the future, careful attention to the insights from the optical systems of telescope design would seem appropriate, if not essential.
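The optical metaphor can even be made quantitative. In adaptive optics, the standard Maréchal approximation relates residual wavefront error (the "misalignment" of the mirror) to the Strehl ratio, a conventional measure of how far the focused image falls short of perfection. A minimal sketch, in which the wavelength and error values are purely illustrative:

```python
import math

def strehl(rms_error_nm, wavelength_nm=550.0):
    """Maréchal approximation: image quality (Strehl ratio, 1.0 = perfect focus)
    as a function of RMS wavefront error relative to the observing wavelength."""
    sigma = 2 * math.pi * rms_error_nm / wavelength_nm
    return math.exp(-sigma ** 2)

# How rapidly small residual misalignments degrade the focused image
for err in (10, 30, 60, 120):  # RMS wavefront error in nanometres
    print(f"{err:4d} nm error -> Strehl ratio {strehl(err):.3f}")
```

The steep, non-linear degradation suggests why a "hemicycle" of imperfectly adjusted segments may fail to resolve distant objects -- whether galaxies or grassroots concerns.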

Forms of cognitive protectionism in the light of trade protectionism

Another approach to framing the challenge that merits consideration is through the learnings to be derived from the extensively studied processes of trade protectionism -- especially the various subterfuges employed to disguise and disclaim such protectionism. Generically it could be argued that the tangible features of the trade case offer a template through which the intangible features of the cognitive situation can be more clearly comprehended.

The argument is that any group, especially of those most closely associated with processes of governance (or other vested interests) frames a boundary -- a "circle of trust" -- within which it operates and which it seeks to protect from the disruptive effects of outsiders (and their information). This cognitive closure is increasingly replicated in "administrative complexes" and gated communities -- in both cases emblematic of forms of cognitive closure offering a requisite sense of belonging and identity (Dynamically Gated Conceptual Communities: emergent patterns of isolation within knowledge society, 2004). It has been argued by Matt Frei (Taming the cyber beast, The Guardian, 24 January 2009) that:

The Bush White House circled the wagons and lived in a bubble; it turned loyalty into a test of service and largely disdained the clutter of opinions from the world outside...

As is widely recognized, even within government administrations, or between the UN Specialized Agencies, information may be jealously guarded and not shared in ways that might be assumed as of value to the purpose for which the bodies were created. Information may then be selectively "traded" and any failure to do so may be subject to criticism -- justifying the exploration of the parallel with trade protectionism. Intergovernmental agencies purporting to operate in a knowledge society might themselves benefit from a form of Doha Round!

The degree of openness or closure to new insights or patterns may be expressed through a generalization of the notion of a "glass ceiling". It might take account of a set or sequence of conditions including (in no particular order):

  • avoidance of learning, namely focusing on "this problem", without considering whether it forms part of a series that should be considered as such
  • transfer of cognitive centre of gravity to an emergent problem, namely a form of "turncoat phase" in which there is denial of complicity in adherence to the framing now held to be outmoded (only too evident in the aftermath of the 2008 financial crisis)
  • tokenism and lip service, involving processes like:
    • rewarding the whistleblower as a substitute for action on the issue
    • patronising believers
  • reframing the debate in order to cast blame
  • penalising those committed to the emergent issue (withholding loans, funding, etc)
  • denial of the relevance of the issue
  • demonising whistleblowers
  • silencing protagonists ("dirty tricks", etc)
  • ignoring indications as symptoms of inappropriate methodology and lack of credibility

Such processes are articulated in the row labels of Fig. 2 (below). At the time of writing a valuable case study of such processes is offered (The Guardian, 9 April 2009) as a consequence of the controversial crowd control operations by security services on the occasion of the G20 Summit:

  • accusations by protesters of excessive violence were denied by the police, indicating that any violence on their part had been necessitated in response to a small minority of anarchists
  • the death of a person present was framed by the police (in an anodyne press release) as "by natural causes", not provoked by any contact with them -- they had, they stated, simply provided assistance, protecting him from missiles they claimed were being thrown at them as they did so
  • the Independent Police Complaints Commission was perceived as having a "cosy" uncritical relationship with the police encouraging them initially to deny the need for any inquiry and to delay their investigation of the incident
  • only following global web dissemination of a video (via The Guardian website) by another crowd member, showing violent police action against the person (in the presence of other officers), did the police reframe their official account of the "facts"
  • under public pressure the IPCC was then obliged to assert its independence taking over the investigation of the incident from the police force whose officers had observed the violent action and failed to report it
  • police statements, whether on or off the record, appear to have misled the media, omitting details that must have been known and including false claims:
    • police refused to indicate why they had delayed indication that the person had died or how they were able to affirm that missiles had been thrown at them
    • police hindered contact with the family who had depended on public support to obtain evidence of the actions surrounding the person's death, with the IPCC declaring to journalists that there was nothing in the story
  • public pressure is now ensuring a stringent inquiry given its serious implications for policing and public policy

Such a case study implies some form of "standard operating procedure" with which authorities respond through various stages to unwelcome investigation as it becomes progressively more difficult to deny. It is appropriate to note that "human sacrifice" is required before such incidents are taken seriously.

Framework for exploring attentiveness to new information

A single diagram (Fig. 3) may be used for such a framework, of which the basic structure is presented in Fig. 2.

Fig. 2: General framework for representation of issue recognition and denial over time
Strategic issue recognition and denial over time

Explanation (valid for Fig. 2 above and Fig. 3 below):

  • Rows: succession of socio-political processes from isolated issue recognition (at the bottom) to mainstream recognition (at the top); the upper rows are those authoritatively considered as "relevant", with the bottom rows typically authoritatively framed as "irrelevant". The number and spacing of rows is purely indicative; they might even be spaced logarithmically. The gradation of colouring from the darker colour of the lower rows to the lighter colours of the upper rows is indicative of the decreasing constraints on communication -- with the lower rows typically associated with more "heated" controversy, especially in the light of the condemnation of its isolated "local" perspectives by the "enlightened" authoritative "global" perspectives associated with the higher rows.

  • Columns: periods of time from the past (left) to the future (right); the present being indicated in the centre, with indication of "short-term" recognition on each side of the present, as well as "medium-term" and "longer-term"

  • S-curves: indicative of a succession of emergent Issues -- the left-most necessarily emerging first -- suggestive of how confirmation of each is first only accepted in isolated locations, progressing to the emergence of each over time into mainstream (global) recognition. Their different colours are only used here better to distinguish them -- although clearly they could be variously colour-coded. Their succession might include: human rights, poverty, development, environment, energy, climate change, terrorism, etc. -- "waves" of emergent crises ("parading past the central stand of public attention"). The time of their mainstream recognition might be marked by their institutionalization -- as in the UN Specialized Agencies -- in contrast with any earlier recognition by isolated civil society bodies or individuals associated with the much lower (and less-representative) rows. (A quantitative measure of the emergence of an issue might be a count of the words devoted to it in the media.)

    An argument for the S-curve can be derived by comparison with that for the diffusion of innovation developed by Everett M. Rogers (Diffusion of Innovations, 1962). The S-curves here might be understood as a variant. Rogers proposes that adopters of any new innovation or idea can be categorized as innovators (2.5%), early adopters (13.5%), early majority (34%), late majority (34%) and laggards (16%), based on the bell curve. Innovation adoption is then a process that occurs over time through five stages: Knowledge, Persuasion, Decision, Implementation and Confirmation. However in Fig. 2, the "innovations" are unforeseen problematic issues -- even "wicked problems" or crises -- to whose recognition various vested interests and authorities are initially resistant, even vigorously so. Relevant insights may also be gained from the Technology Acceptance Model (TAM) in information systems theory, which indicates how users come to accept and use a technology. From that perspective, Fig. 2 might be understood as an effort towards modelling the cognitive process of progressive relaxation of "issue rejection".

    The Issues might also be understood as "strategies" rather than as "problems", then highlighting the progressive recognition of their viability, especially that of "alternatives". There is the possibility that the point of inflection in the S-curve at which it rises more rapidly to mainstream acceptance is triggered by globally mediatised tragedy -- as with the homelessness and joblessness resulting from the financial crisis of 2008, or the deaths from earthquakes -- effectively a form of "option legitimation" by a modern form of "human sacrifice".
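Such S-curves can be given the conventional logistic form used for diffusion-of-innovation curves. A minimal sketch, in which the inflection years and the rate parameter are purely hypothetical illustrations of the Fig. 2 pattern:

```python
import math

def recognition(t, t_inflection, rate=1.0):
    """Logistic S-curve: fraction of mainstream recognition of an issue
    at time t, rising most rapidly at t_inflection."""
    return 1.0 / (1.0 + math.exp(-rate * (t - t_inflection)))

# Hypothetical issues with different (assumed) inflection years
issues = {"environment": 1990, "climate change": 2005, "overpopulation": 2020}
for name, t0 in issues.items():
    level = recognition(2009, t0, rate=0.3)
    print(f"{name:15s} mainstream recognition in 2009: {level:.2f}")
```

The point of inflection -- where recognition accelerates towards the mainstream rows -- is the moment that, as suggested above, may be triggered by globally mediatised tragedy.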
Fig. 3: Superposition on Fig. 2 of curves indicative of the focus of authoritative attention and recognition
Focus of authoritative attention and recognition over time

Explanation (valid only for those items in Fig. 3 not already mentioned in relation to Fig. 2):

  • White bell curve (E-E*): Indicative of the scope of collective memory and attention span, whether of the few in isolation at the horizontal extremes (of which most are unaware), or as the shared focus in the present (shared by most). The curve encompasses that which is authoritatively approved and accepted as well as that which is, notably at the grassroots level, partially accepted or considered marginal -- namely both upper and lower rows

  • White inverted curves (A-A*, B-B*, C-C*, D-D*): Indicative of the focus of authoritative memory and attention (constrained by information overload), possibly to be interpreted successively as:
    • A-A*: Governance and security:
    • B-B*: Belief systems (science, religion, politics)
    • C-C*: Business
    • D-D*: Communications (media, surveillance)

    Other possible curves of that form could be added, just as those identified could be split or modified in size. These curves are suggestive of the challenge to institutions and disciplines in sustaining a longer (and more open) attention span than can be encompassed by governance. Whether particular belief systems have a larger or narrower scope (in temporal terms) might be more fruitfully indicated with the aid of a much larger number of such curves -- some being more skewed to the future or to the past. Functions such as those associated with curve D-D*, necessarily more sensitive to popular opinion, may descend more deeply into the lower rows -- especially given the increasingly invasive operation of the broadcast media and the surveillance services. Any such curve would also offer a sense of how (democratic) feedback systems function in selecting or excluding information considered to be relevant or exemplary.

    The nesting of the functions centered on governance, and their number, could also be explored in the light of the priorities of the need hierarchy of Abraham Maslow (A Theory of Human Motivation, 1943) as understood for a collectivity: basic needs (food, energy, etc), safety/security, social, esteem, self-actualization.

  • Core zone (common to both A-A* and E-E*):
    • This suggests the scope of the zone associated with consideration of "all options" on the part of governance, necessarily constrained by the curve E-E* by which any legitimacy accorded by the political majority is supposedly determined in a democracy
    • The zone is necessarily limited to the highest rows and those Issues which have emerged by that present period to that level, therefore excluding as irrelevant (or deprioritising) Issues that have not yet achieved that degree of recognition or are outside the short-term framework characteristic of strategic decision-making.
    • The focus of this core zone is encompassed and constrained by the zones defined by the curves B-B*, C-C* and D-D* representing progressively lesser degrees of pressure on governance -- but constituting authoritative sources of advice or legitimation, possibly articulated through powerful vested interest groups.
    • Encompassed as is this governance core by the curves associated with other authoritative worldviews, it may be understood as a cognitive community nestled within several concentric "circles" (cf Dynamically Gated Conceptual Communities: emergent patterns of isolation within knowledge society, 2004)

  • Lower rows: These rows are labelled as "potentially dangerous" in that preoccupation with them by the population is considered a distraction and irrelevant to the focus which is authoritatively considered to be relevant. There is therefore a "disconnect" between popular, grassroots understanding of issues and that held to be significant (and deserving of grassroots taxpayer resources) in the processes of governance. The horizontal line marking that disconnect might of course be more appropriately placed higher or lower.

Figure 3 serves to indicate how there are issues to which some are sensitive even though their implications have negligible impact on the short-term priorities of governance -- whether or not there is any recognition in the processes of governance of their longer term implications.

It should be stressed that it is the general form of Figure 3 that merits consideration rather than:

  • specific labels added for illustrative purposes. Thus the succession of avoidance and denial processes describing the rows could be articulated in more detail and with more precision, or presented logarithmically (as noted above).
  • the shape of the S-curves. This is purely indicative of the emergence of issues, whether problems or strategies (as noted above). Rather than being parallel, as presented, the lines could be quite differently oriented, even crossing one another (even shifting as with weather isobars).
  • the location of the "present", denoted here as the central vertical axis. This might be fruitfully understood as a "slider" displaced to the left or the right with respect to particular constituencies as they relate to the Issues. Of major interest is how and when different Issues emerge in different cultures, especially in comparison with their emergence in the "western world" -- projected from that perspective as an unquestionable, universal norm. This blithe assumption should be seen in the light of western savagery in the 20th century, as well as its late according of rights to women and non-whites -- very late in some cases, with as yet unresolved challenges of discrimination.

    Tentatively, and perhaps controversially, if the central axis represents a universal, normative "present", several such "sliders" could be used to denote the relative position of countries, cultures or regions that either do not yet fully accept such norms (relative years "behind"), or that have already endeavoured to move beyond their limitations (relative years "ahead"). Such a facility might also be used with respect to constituencies within a country or culture. Capital punishment provides a useful example for discussion in terms of which countries "get it" and which have yet to repudiate it -- especially when compared by some with the barbarity of flogging favoured by others.

As a whole, Figure 3 provides a framework in which the form and descriptors of the various curves could be adjusted for purposes of discussion about the probability of tunnel vision, silo thinking and group think -- and the nature of the disconnect with any grassroots sense of reality that has not been crafted or recognized by authority structures. Any such adjustment could best be done dynamically and interactively (with an applet) in support of reflection and discussion.

Cognitive corruption: deficiencies in feedback processes and identification of strategic options

Assertions to the effect that due attention is given to feedback ("trust us") and that every strategic option has been considered ("trust us") merit testing for the validity of such claims -- just as careful attention is given to testing the deficiencies of security systems (Quis custodiet ipsos custodes?). In the latter case the test is for effective closure, here the required test is for effective openness to enable detection of unforeseen potentials and anomalies -- namely testing openness to feedback.

There is a case for an institution like Transparency International, primarily known for its focus on corruption, to develop indicators for closure to options and feedback -- despite vigorous claims to the contrary. In effect there is a need to measure "cognitive corruption". Such matters are of interest in the detection of anomalies in reporting procedures, notably in the light of unforeseen disasters. What rating system would be appropriate, analogous to that developed for corruption?

A classic example, after the tsunami disaster of December 2004, was the discovery that a detailed report documenting its probability had been provided in 1998 -- when the head of the Thai meteorological office had been obliged to retire under a shadow for having warned that the coast was dangerously vulnerable to such effects. He was accused of scaremongering and jeopardising the tourist industry around the island of Phuket. Similarly an Italian seismologist had predicted the disastrous earthquake in Italy in April 2009 (Seismologist predicted L'Aquila quake, Euronews, 6 April 2009). He had been reported to the authorities for spreading panic, and his warnings were suppressed from the web.

In such cases it is not a question of whether there are deficiencies but of who checks whether there are and whether this can be done systematically -- as is done by health and safety inspectors (whether or not their recommendations are implemented or "adapted for a consideration"). Electronic feedback systems lend themselves to such testing -- possibly facilitated with intelligent "bots". Ironically such agents are used by Wikipedia to mark up profiles for possible exclusion -- on the basis of unspecified criteria.

How might major solicitors of feedback, like broadcasting services and newspapers, fare under such testing? What processes do they have for complaints about the procedure or is such information itself excluded as an unwelcome anomaly -- as is apparently the case with the BBC?

Fig. 4: Detail of Fig. 3, highlighting constraints on claims to have considered "all the options"
constraints on claims to have considered "all the options"

In its manner of considering "all the options", governance is necessarily constrained to the central (darkened) zone as explained with respect to Fig. 3.

The detail indicates how only three of the Issues (the coloured curves) are selected into this arena, since those that have not yet emerged into the mainstream (top row) are necessarily excluded (coloured curves in the lower right part of the detail)

Although this is a purely indicative diagram, those Issues selected in (as at the G20 Summit) might be "economic meltdown", "terrorism" and "climate change". Within the central zone, clearly lower priority is necessarily given to "climate change", since its S-curve only reaches the mainstream row outside the short-term focus and the constraints of the various bell-curves. As an Issue, "overpopulation", indicated by one of the excluded S-curves (lower right), is rising outside consideration within the various authoritative (inverted) bell-curves -- and will only subsequently be recognized as an "unforeseen" mainstream crisis.

Options for global governance: the Holbrooke Option Selection Quotient

The challenge in determining how to process potentially available information is: what might a viable option look like? And in whose eyes? How can one be sure that a viable option is recognized? And what if two or more non-viable options together trigger a creative response that enables a viable option to emerge (by analogy with binary weapons)?

A major lesson of the financial crisis of 2008-2009 is the totally unforeseen, and extremely rapid, transformation in the status of giants of the conventional economy -- General Motors and Chrysler -- from motors of the economy to beggars in need of a safety net. But the question is whether the range of management expertise on which those giants can draw enables them to envisage anything more than "business as usual" -- and whether government has access to the imagination and expertise to encourage them appropriately to do otherwise.

Metaphor is much used in selling new approaches to management and policy-making. Thus a former editor of the Harvard Business Review authored a book entitled When Giants Learn to Dance: the challenge of strategy, management and careers (Rosabeth Kanter, 1989). Another by Dudley Lynch and Paul Kordis is entitled Strategy of the Dolphin: scoring a win in a chaotic world (1988). Is General Motors capable of "learning to dance" to the sound of a "different drummer", in the words of Henry David Thoreau, "however measured or far away"? The phrase was echoed by M. Scott Peck (The Different Drum: Community Making and Peace, 1987) in contrast with his study of the "people of the lie" (People of the Lie: The Hope For Healing Human Evil, 1983).

Who are the GMs and Chryslers of the "option production process" -- whose strategic insights are considered "too big to fail"? Is that a reason for the repeated convergence on "more of the same", notably in Afghanistan? Those complicit in the process seemingly include:

There would seem to be no process whereby large numbers of options are collected, held, refined and commented upon in a common database (a WikiOptions or a WikiStrategies) -- one that is open to a wide variety of creative input. This would be a complete contrast to the premature cognitive closure illustrated by Fig. 1 above. Such a database could then be subject to data mining techniques (notably using intelligent agents) to identify viable new possibilities -- the "green shoots" of genuine strategic recovery. It would also give all concerned a sense of what was on the table and why -- and what was not, and why. [A major step in that direction has been the compilation of the online Global Strategies Project profiling some 30,000 strategies of significance to various international constituencies.] This possibility is discussed separately (Global Solutions Wiki, 2009).
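As an indication of how such a database might be mined, the following minimal sketch scores pairs of stored options by keyword overlap -- a crude proxy for spotting candidates that might combine into a viable hybrid. The option names, keywords and the use of Jaccard similarity are purely illustrative assumptions, not features of any actual system:

```python
from itertools import combinations

# A toy "WikiOptions" store: option id -> descriptive keywords (illustrative).
options = {
    "troop-surge":        {"military", "afghanistan", "escalation"},
    "regional-diplomacy": {"diplomacy", "pakistan", "afghanistan"},
    "aid-led-stability":  {"development", "aid", "afghanistan"},
    "status-quo":         {"military", "escalation"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity of two keyword sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Rank option pairs by similarity; highly similar pairs may be duplicates,
# moderately similar pairs are candidates for creative combination.
pairs = sorted(
    ((jaccard(options[i], options[j]), i, j)
     for i, j in combinations(options, 2)),
    reverse=True,
)
for score, i, j in pairs[:3]:
    print(f"{i} <-> {j}: {score:.2f}")
```

Even this toy ranking immediately surfaces "troop-surge" and "status-quo" as near-duplicates -- the "more of the same" convergence the text describes.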

Such an approach would go far to guard against the risks of dangerous groupthink and tunnel vision. It would also ensure that alternatives are juxtaposed with conventional proposals -- avoiding the accusation that alternatives are systematically ignored (Framing the Global Future by Ignoring Alternatives: unfreezing categories as a vital necessity, 2009).

This frames the challenge as one of "insight capture" rather than "insight exclusion" -- currently engendering "electronic middens" outside cyberdomains into which insightful creative chat and listserv waste is dumped. Such middens may be fruitfully mined -- preferably now rather than by a future civilization.

Any assertion, such as Richard Holbrooke's, that all the strategic options have been considered should perhaps be evaluated in terms of a "Holbrooke Option Selection Quotient" (HOSQ) -- namely an estimate of the percentage of extant or potential options that had been effectively open to consideration. The challenge for governance would be to ensure that the Quotient is increased from 0.6% (as in Fig. 1) to a healthy proportion -- perhaps 30-40%. As an example, when the flaw in the mirror of the Hubble Space Telescope was discovered in 1990, some 25 proposals were put on the table as possible strategies for remedial action. Creative response to unforeseen crises is not facilitated by focusing on predetermined "more of the same".

As with IQ, a higher HOSQ could be considered a measure of organizational creativity and intelligence -- or as a measure of organizational learning capacity. Any such measure should however be integrated with an indicator derived from the slope of the S-curves: the delay between crisis recognition and global response, namely a measure of policy lag. The examples of the delays between isolated recognition of Thai tsunami vulnerability and Italian earthquake prediction highlight the problem. Given the fatal delays in emergency response once the disasters struck (as with Hurricane Katrina), such indicators should also be related to indicators of the institutional capacity to act on crises rather than simply to predict them (Remedial Capacity Indicators Versus Performance Indicators, 1981).
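The two indicators suggested above can be made concrete in a minimal sketch -- assuming, purely for illustration, that the Quotient is a simple percentage and that policy lag is measured in days (neither is formally defined in the text, so the function names and signatures are hypothetical):

```python
from datetime import date

def hosq(options_considered: int, options_available: int) -> float:
    """Holbrooke Option Selection Quotient: percentage of extant or
    potential options effectively open to consideration."""
    if options_available <= 0:
        raise ValueError("options_available must be positive")
    return 100.0 * options_considered / options_available

def policy_lag_days(crisis_recognized: date, global_response: date) -> int:
    """Policy lag: delay between crisis recognition and global response."""
    return (global_response - crisis_recognized).days

# e.g. 3 options seriously considered out of an estimated 500 extant
print(round(hosq(3, 500), 2))  # 0.6 -- the percentage suggested by Fig. 1
```

On these assumptions, narrowing 500 conceivable options down to the 3 actually debated yields precisely the 0.6% Quotient of Fig. 1; raising it to the suggested 30-40% would require 150-200 options on the table.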

In a NATO context, for example, it would seem curious that its rapid emergency response facilities are so ill-equipped to respond to non-military disasters (NATO's Split Personality: Why The Rapid Response Force Is Not Fully Operational, Atlantic Review, 6 September 2007). Curiously the delays in response to the Italian earthquake disaster of April 2009 were immediately preceded by the NATO Summit in Strasbourg -- headquarters of Eurocorps, the multinational army corps within the framework of the European Union and NATO common defence initiatives, declared operational in 1995.

Ironically, in the case of the Italian earthquake on the occasion of the NATO Summit, more police and resources were probably assembled to contain the protesters in favour of consideration of alternatives than were marshalled in rapid response to the disaster. This offers a splendid example of the capacity of the purportedly best coordinated institutions of governance to respond to emerging crises -- even when scientifically predicted and marked by foreshocks felt over a wider area -- all the while engaging in processes of self-congratulation and expansion. For what other issues does this offer a powerful metaphor?

E-democracy, swarm behaviour and swarm intelligence

Fig. 3 might also be used as a framework within which to consider the operation of e-democracy in focusing grassroots and marginal options for global consideration. Specifically, the question is how various approaches to e-democracy could be represented in relation to the main bell curve (E-E*), and how any filtration of their insights could be positioned in relation to the inverted bell curves within which global governance is nested.

Given the increasing interest in swarm behaviour (and simulation) as a model for intelligent agents including human beings, the question arises as to how the rapid shifts of public opinion -- and the temporary emergence of fashionable focal issues -- might be understood in relation to the operation of e-democracy (cf Dynamically Gated Conceptual Communities: emergent patterns of isolation within knowledge society, 2004). Provocatively, for example, when is such swarm behaviour to be compared with that of destructive locust swarming?
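How purely local alignment can produce the rapid convergence on "fashionable focal issues" evoked above can be illustrated with a bounded-confidence opinion model (of the Hegselmann-Krause type) -- a deliberately minimal sketch in which the confidence radius, adjustment rate and agent count are arbitrary assumptions:

```python
import random

def swarm_step(opinions, radius=0.2, rate=0.5):
    """One step of a bounded-confidence opinion swarm: each agent moves
    part-way toward the mean of neighbours whose opinion lies within
    `radius`. No agent surveys the full range of positions, yet the
    population rapidly collapses onto a few focal clusters."""
    new = []
    for x in opinions:
        neighbours = [y for y in opinions if abs(y - x) <= radius]
        mean = sum(neighbours) / len(neighbours)
        new.append(x + rate * (mean - x))
    return new

random.seed(1)
opinions = [random.random() for _ in range(50)]  # 50 agents, opinions in [0, 1]
for _ in range(30):
    opinions = swarm_step(opinions)

# After a few dozen steps the initial spread collapses into a handful
# of clusters -- swarm convergence without any global deliberation.
clusters = sorted({round(x, 3) for x in opinions})
print(clusters)
```

The design point is that the clustering is an emergent property of the interaction radius, not of the merit of any position -- which is exactly why such dynamics invite comparison with destructive locust swarming rather than with collective intelligence.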

It is appropriate to note, for example, the development of the Joint Simulation System initiated in 1995 (Kari Pugh and Collie Johnson, Building a Simulation World to Match the Real World; The Joint Simulation System, January-February 1999, p.2; James W. Hollenbach and William L. Alexander, Executing the DOD Modelling and Simulation Strategy: making simulation systems of systems a reality, 1997).

This has seemingly now morphed, via the Total Information Awareness program, into the Sentient World Simulation (SWS), intended to be a "synthetic mirror of the real world with automated continuous calibration with respect to current real-world information" with a node representing "every man, woman and child" -- presumably including those responsible for the SWS itself. "Sophisticated physics" were integrated into the simulation in 2007. Regrettably, as might be expected, this is being undertaken entirely in the interests of US strategic defence, on behalf of the US Department of Defense (Mark Baard, Sentient World: war games on the grandest scale -- Sim Strife, The Register, 23 June 2007).

[More recently, as part of an EU research initiative named FuturIcT, a "Living Earth Simulator" is under development (Largest Supercomputers to Simulate Life on Earth, Including Economies and Whole Societies, ScienceDaily, 28 May 2010). The FuturIcT project (The FuturIcT Knowledge Accelerator: Unleashing the Power of Information for a Sustainable Future) aims to interrelate many distinct data gathering initiatives in order to simulate the entire globe, including all the diverse interactions of social systems and of the economy with the environment. The concept for the project has already been deeply explored within several European research projects. Again, however, it is unclear whether such initiatives, and the groups responsible, will themselves feature as integral elements of the simulation.]

However, it is unclear that any such approaches will be taken to enrich the selection of strategic options -- except by the intelligence services to inhibit unrest -- whatever the more fruitful possibilities (From ECHELON to NOLEHCE: enabling a strategic conversion to a faith-based global brain, 2007). One example of an expanded initiative by the US National Security Agency is described by James Bamford (The Spy Factory -- the New Thought Police: the NSA wants to know how and what you think, 2009). Known as AQUAINT ("Advanced QUestion Answering for INTelligence"), the project has been under development for many years at the NSA Advanced Research and Development Activity.

Curiously the highly controversial Total Information Awareness programme, abandoned as such, was a means of eliciting opinion from the local level -- effectively benefitting from the vigilance of neighbourhood watch committees. Whatever its actual current form, it seems now to be related to the Terrorist Watch List (which maintains the No Fly List). It is unfortunate that such capacities cannot be adapted to elicit creative strategic options and early warning signals -- beyond the questionably narrow focus on terror (Terror as Distractant from More Deadly Global Threats, 2009).

It is also curious that understandings of "global consciousness" would seem to be failing to distinguish themselves significantly from swarm consciousness, as manifested in the often beautiful collective behaviour of large shoals of fish, flocks of birds and insects. From a perspective of e-democracy, how indeed is the exploration of swarm behaviour to be related through swarm intelligence into any genuine manifestation of collective intelligence?

In a review of the possibility of democratic choice via cyberspace, Gustavo Lins Ribeiro (Bodies and Culture in the Cyberage: a review essay, Culture Psychology, 1998, 4) concludes with the reservation:

As we see, it is not only the nature of community but of power in the contemporary world that needs to be debated in light of the discussion prompted by the cyber prefix.... Full-fledged electronic, direct democracy is a fascinating possibility. But it can also transform the democratic process... into a string of dull, sometimes meaningless referenda performed not at the open public scene but at ascetic and shielded individual electronic homes. The choice frenzy of consumer culture unequivocally migrates to the political market. Just push the button on any kind of issue and you will be partici-choosing. The very core of democracy, the transformative, discursive, and hopefully knowledgeable mediation of conflicts and interests, can be reduced to a technical and numerical event. If such a simulation of democracy (simdemo) is ever installed it will certainly represent a most effective mode of reproducing the status quo

A valuable recent compilation (Mark Tovey, Collective Intelligence: creating a prosperous world at peace, 2008), with regard to the Global Knowledge (GK97) conference (Toronto, 1997), also calls for research:

However, an examination of the GK97 online discussions suggests the need for more research -- and more experimentation with findings of existing research -- on how to conduct effective online group thinking. Research is needed on what works and what does not work. How can people really be helped to think together online when they are scattered across five continents?

The compilation reports on current challenges and possibilities in the contributions of:

  • Mark Klein: Achieving collective intelligence via large-scale argumentation
  • Paul Martin and Thomas Homer-Dixon: The Internet and the revitalization of democracy

Robert Steele, as publisher of that compilation, provides related studies of relevance. These are intermeshed with various understandings of Open Source Intelligence -- including its overlap with the intelligence community. Challenges remain. Simon French (Web-enabled strategic GDSS, e-democracy and Arrow's theorem: A Bayesian perspective, Decision Support Systems, 43, 4, 2007) points out with respect to any more substantive approach to participative democracy that:

Web-technologies bring the possibility of supporting geographically and temporally dispersed decision making. However, although technically feasible, it is not clear that there are valid methodologies for the use of web-based group decision support (wGDSS). Many approaches to decision support are driven by the perspective of a single decision maker. Yet there are many reasons to expect that the extension of individualistic theories to a group context will be fraught with difficulty.

E-Democracy has been the focus of a Council of Europe Forum for the Future of Democracy (Madrid, 2008) prior to the finalization of the investigation of its Ad Hoc Committee on e-democracy (CAHDE) leading to adoption of a very comprehensive Recommendation CM/Rec(2009) 1 of the Committee of Ministers to member states on electronic democracy (e-democracy) (18 February 2009). It notes:

  • the alarming shortcomings in democratic processes that may be observed in Council of Europe member states and have been contributing to the growing feeling of political discontent and disaffection among citizens;
  • the need to promote, ensure and enhance transparency, accountability, responsiveness, engagement, deliberation, inclusiveness, accessibility, participation, subsidiarity and social cohesion;
  • the need to provide opportunities for meaningful and effective public deliberation and participation in all stages of the democratic process, responsive to people's needs and priorities;

Despite the comprehensiveness of its guidelines with respect to ensuring that e-democracy methods and tools:

  • should be devised in such a way that citizens can take part in a ubiquitous, non-stop democracy where participation is possible round the clock, at the same time and wherever they may be.

the Recommendation seems to assume full understanding of the challenges to communication in consideration and selection of strategic options -- implementation "within-the-box" of what is already known and assumed. For example, the closest it gets to any form of exploratory simulation of the strategic challenge is with respect to "e-democracy games" for education regarding existing processes. These are to be devised:

  • to involve, for example, parliamentary procedure and budgeting in such a way as to provide citizens with a better understanding of the tasks and processes of democratic institutions.

It is unclear whether the Recommendation asks effective questions about the communication link between e-democracy and consideration of policy options, or whether these are deliberately avoided through a form of unintegrated definitional "divide and rule" with respect to the components of e-democracy as encompassing:

  • e-parliament, e-legislation, e-justice, e-mediation, e-environment, e-election, e-referendum, e-initiative, e-voting, e-consultation, e-petitioning, e-campaigning, e-polling and e-surveying; it makes use of e-participation, e-deliberation and e-forums.

The closest it gets to identifying the need for further research to highlight the problems of communication in decision-making between these functions is:

  • Given the various approaches to, and views on, e-democracy in academia and the need to harness quality expertise in many sectors, governments, representative assemblies, the business community and international institutions should encourage and fund research on e-democracy.

The more fundamental challenge is well expressed by Olivia Judson (To expand knowledge, we must first admit ignorance, The Guardian, 26 February 2009) with respect to science:

Of all the limits on expanding our knowledge, unexamined, misplaced assumptions are the most insidious. Often, we don't even know that we have them: they are essentially invisible. Discovering them and investigating them takes curiosity, imagination, and the willingness to risk looking ridiculous. And that, perhaps, is one of the hardest tasks in science.

The argument could usefully be extended to the manner in which strategic options are considered and selected.

Possible extensions to representation of option selection

Figs. 2 and 3 suggest the possibility of other representations of the challenge -- as presented below.

Fig. 5: Indication of emergence of issues to global recognition with corresponding progressive embodiment locally
(stars on the line of emergence for Issue 1 indicate progressive switch
from strong resistance to proposals to their enthusiastic acceptance)
Fig. 6: Succession of issues of Fig. 4 arrayed in a circle around the central governance zone
(effectively a "vertical" view assuming "circular" time with phased emergence of issues in cybernetic response to the dominance of each)

Fig. 6 offers a means of highlighting the relationship between different functions (as issues or viable remedial strategies) in a psycho-social system -- where their progressive recognition is itself a challenge requiring learning (over time). The nature and urgency of any learning in response to crisis then leads to a form of excessive fixation on a particular function (or closely related set of functions). This fixation is effectively resistant to the recognition of the emergence of other issues associated with other functions. Excessive fixation necessarily neglects and engenders other issues -- effectively chaining the issues into a pattern of emergence as implied by Fig. 5. The "timeless" stability of such a dynamic system as a whole is however dependent on the ability to shift continually between functions according to the feedback on each such issue.

Fig. 6 also raises the question of how many distinct critical "issues" might be fruitfully recognized as vital to systemic stability over time -- perhaps in terms of knowledge cybernetics (Maurice Yolles, Knowledge Cybernetics: a new metaphor for social collectives, Intellect, 3, 2006, 1; Y. Zude and M. Yolles, From Knowledge Cybernetics to Feng Shui). Clearly any checklist of seemingly tangible "issues" can be transformed into generic terms and variously clustered for that purpose (Checklist of Peak Experiences Challenging Humanity, 2008; Memetic and Information Diseases in a Knowledge Society, 2008; Patterning the Problematique, 1995).

Cycle-proof regulation of confidence

Fig. 6 also highlights the merit of generalizing the valuable comment of Raghuram Rajan (Cycle-proof Regulation, The Economist, 8 April 2009):

As the G20 summit showed, we typically regulate in the midst of a bust. That is when righteous politicians feel the need to do something, bankers' frail balance-sheets and vivid memories make them eschew risk, and regulators have their backbones stiffened by public disapproval of past laxity.

But we reform under the delusion that the regulated, and the markets they operate in, are static and passive, and that the regulatory environment will not vary with the cycle. Ironically, faith in draconian regulation is strongest at the bottom of the cycle, when there is little need for participants to be regulated. By contrast, the misconception that markets will take care of themselves is most widespread at the top of the cycle, at the point of most danger to the system. We need to acknowledge these differences and enact cycle-proof regulation.

As a former chief economist of the IMF, Rajan's focus is necessarily on the tangibles of economics understood through the eyes of finance. However the essential crisis in the financial system, which the IMF did little to prevent, is now framed in terms of the subtleties of the ultimate intangible, namely "confidence" -- for which economics has no measure. In this current period the British government has framed a solution to the crisis of confidence termed "quantitative easing" -- traditionally understood and disparaged as "printing money". Fundamentally money is of course a token of confidence in the validity of a "promise to pay". The British government, with other governments of the G20, has effectively framed its response in terms of "printing confidence".

Curiously, but consistently, the typical output of a summit conference is also a form of "quantitative easing", namely the press release. Perhaps this is to be considered an exercise in "printing promises" to relieve public pressure regarding hopes and expectations -- "qualitative easing"?

Rajan's valuable commentary would be even more valuable if "confidence" were to be substituted for its surrogates therein with respect to the cycle-proof regulation to which he refers. It would then highlight the systemic significance of "bank capital requirements" at different points in the cycle. This would then relate his commentary to a variety of specific distortions of "confidence" that might be recognized as the destabilizing issues (of the above diagrams) -- issues that challenge systemic governance over time under various conditions. Effectively "confidence" underlies the dynamic pattern of relationships on which knowledge cybernetics may be expected to focus. "Business cycles" are but one manifestation of cycles of confidence and credibility (Credibility Crunch engendered by Hope-mongering: "credit crunch" focus as symptom of a dangerous mindset, 2008).

A similar point might be made with respect to the commentary of Nassim Nicholas Taleb (Ten Principles for a Black Swan-proof World, Financial Times, 7 April 2009), transforming his focus on financially significant surprises into those relating generically to the processing of surprising new information -- as highlighted by the above diagrams.

Rajan's analysis is especially valuable when placed in the context of C. S. Holling's adaptive cycle and the necessary resilience required in any civilization to be able to navigate it, as notably stressed by Thomas Homer-Dixon (The Upside of Down: catastrophe, creativity, and the renewal of civilization, 2006). In particular Rajan's commentary would then offer an understanding of appropriate responses at different parts of any cycle illustrated by Fig. 6 -- a cycle of confidence in its most generic sense. As he says:

To have a better chance of creating stability through the cycle -- of being cycle-proof -- new regulations should be comprehensive, contingent and cost-effective.... A crisis offers us a rare opportunity to implement reforms. The temptation will be to over-regulate, as we have done in the past, only to liberalise excessively over time. It would be better to think of regulation that is immune to the cycle.

More fundamentally, this is the concern with respect to eliciting, managing and communicating confidence, represented in the arguments above by confidence in the process of eliciting (local) knowledge relevant to the stability of a dynamic (global) system -- and grounding any emergent global insight locally.

Conclusion

The argument is essentially an exploration of how "global" coherence emerges and is sustained -- and challenged -- given the selective partiality imposed by an attention span faced with ever increasing information overload. Whilst the argument has been developed in relation to global governance, the diagrammatic representation is also of value in representing the coherence of individual (or group) understanding in the face of emerging issues -- most evidently in how information available electronically is attentively processed or ignored (e-mails, web links, news feeds, blogs, etc). The fact that 97% of e-mail is now understood to be spam is symptomatic of the challenges for a knowledge society.

As stressed, the various diagrams are purely indicative but they do suggest the possibility of varying the number, form and labelling of elements -- preferably dynamically using an applet -- in support of discussion and the representation and comparison of a range of variants.

The diagrams facilitate understanding of what is as yet "unknown" in relation to what is assumed to be coherently "known" -- progressively challenged by emergent "unknowns" (Unknown Undoing: challenge of incomprehensibility of systemic neglect, 2008). The focus on emergence helps to highlight the manner in which the focus on a currently prominent issue may obscure or distort others that are emerging in its wake -- possibly of even greater significance (Climate Change and the Elephant in the Living Room, 2008; Systemic Crises as Keys to Systemic Remedies: a metaphorical Rosetta Stone for future strategy? 2007).

The diagrams have the advantage of integrating recognition of the process of "irrational" resistance to information regarding emergent issues that are necessarily destabilizing (threatening) to the current sense of coherence and relevance. Such information appears to emerge from "incoherence" appropriately understood as chaos. In this sense the "disconnect" in Figs. 2 and 3, represented by the horizontal separation, is between a sense of coherence (order, etc) and incoherence (chaos, etc.). The inverted bell curves of Figs. 3 and 4 might then be understood as nesting "governance" within the forms of coherence offered by various belief systems, business, media, etc.


This work is licensed under a Creative Commons licence.