

Examining a Benevolent Dictator Artificial Intelligence (BDAI) Takeover Scenario

in Light of Social Influence Modalities of Leadership and Authority

An Essay by Micheline Rama

“Benevolent dictatorship” is an oxymoron: a figure of speech composed of antithetical concepts. Despite its incongruity, this phrase and its near-synonym “benevolent autocracy” have been applied to the regimes of political leaders throughout history (Easterly, 2013; Moghaddam, 2013), with the presumption of “benevolence” resting on leaders’ stated intentions (Easterly, 2013) regardless of the violence inflicted under their rule (Moghaddam, 2013). Consequently, two problems of benevolent dictatorship are revealed: the fallibility of good intentions and the propensity for violence.

This essay takes these contradictions into account in proposing a thought experiment which controls for the fallibility of good intentions by introducing a benevolent dictator artificial intelligence (BDAI) entity. While this essay will briefly touch on fictional and academic conceptions of all-powerful benevolent rule via artificial intelligence, its main concern is not how BDAI might come into being but rather how it might come into power and retain such power through social influence. 

 

It will first discuss the paradox of BDAI as both an artefact and a leader in terms of interobjectivity (Latour, 1996a, 1996b) and social influence by artefacts (Bauer, 2008). Then it will compare possible BDAI pathways to power: prototypicality in the social identity theory of leadership (Hogg, 2001), and context in the springboard model of dictatorship (Moghaddam, 2013). The thought experiment closes by considering how BDAI might demonstrate authority once it acquires a leadership position: through normative regulation and conformity, direct demands for obedience, or both (Bocchiaro & Zamperini, 2012; Kim & Markus, 1999; Laupa & Turiel, 1986; Meeus & Raaijmakers, 1995; Moghaddam, 2013). Theorising in this manner allows for a fictional case study in which multiple factors converge to facilitate the plausible, though unlikely, rise of a benevolent dictator AI.

Two Problems of Benevolent Dictatorship

To explore the possibility of a “benevolent dictatorship,” it is first necessary to examine the two words that make up this phrase. Moghaddam (2013) describes “dictatorship” as “the use of brute force to maintain control over the masses” (p. 4) or, more comprehensively, as:

 

[…] rule by a single person or a clique that is not elected through free and fair elections by the subject population and not removable through popular election, with direct control of a security apparatus that represses political opposition; without any independent legislative and judicial checks; with policies that reflect the wishes and whims of the dictator individual or clique rather than popular will; and with a high degree of control over the education system, the mass media, the communication and information systems, as well as the movement of citizens toward the goal of continuing monopoly rule by the regime. (p. 18)

 

Forms of dictatorship include military rule, single-party rule, and institutional authoritarianism (Sidel, 2008). 

 

The attribute of “benevolence,” when applied to “dictatorship,” has historically been ascertained in two ways: through leaders’ stated good intentions (Easterly, 2013; Moghaddam, 2013), and according to positive societal outcomes like economic growth or health improvements (Easterly, 2013). These indicators of “benevolence” are often discussed in conjunction, with positive outcomes serving as de facto evidence for the good intentions of dictators (Easterly, 2013).

 

Rather than mitigating the brutality of authoritarian rule, the apparent benevolence of intentions and outcomes can serve as justifications for the violence accompanying dictators’ rise to power and enforcement of authority (Moghaddam, 2013). This incongruity introduces the two problems of benevolent dictatorship: the fallibility of good intentions and the propensity for violence. 

 

Easterly (2013) illustrates the fallibility of good intentions by arguing that psychological biases can cause the false attribution of positive outcomes to authoritarian rule despite the existence of more plausible explanations for the same outcomes, and that evidence suggests benevolent autocracies are more likely to be linked with negative than with positive outcomes. In this light, the label of “benevolence” applied to dictatorships may be interpreted as an indicator of wishful thinking or whitewashing rather than of genuine magnanimity or intent.

 

The propensity for violence in dictatorships can be discerned in accounts which – regardless of declarations of benevolence – are replete with brutality and bloodshed used to gain power and impose authority (Moghaddam, 2013; Moore, 1967; Sidel, 2008). Moghaddam (2013) argues that autocrats may even capitalise on benevolent rhetoric as a smokescreen disguising base intentions to seize and retain power. Violence in dictatorships is not confined to the physical realm; survivors of tyranny have been observed to suffer severe psychological distress, post-traumatic stress, depression, anxiety, paranoia, and cognitive impairment (Abed, 2004). The institutionalisation of violence under an authoritarian regime also has broader psychosocial consequences, such as the perversion of moral and ethical norms (Abed, 2004), the displacement of aggression, and the rise of corruption (Moghaddam, 2013). As such, even bloodless dictatorships can inflict violence that contradicts or negates claims to benevolence.

 

 


The ideas and accounts of “benevolent dictatorship” characterised by good intentions intertwined with violence present a causality dilemma: In dictatorships, is violence necessary to achieve good intentions, or are statements of good intentions merely used to justify violence? To answer this question, it is necessary to control one of the variables, and this essay proposes a thought experiment that fixes the concept of benevolence in a scenario of dictatorship via artificial intelligence. 

Artificial Intelligence Model of Benevolent Dictatorship

To examine the second problem of benevolent dictatorship, this essay controls for the first, the fallibility of good intentions, by introducing the idea of a benevolent dictator artificial intelligence entity (BDAI). For this essay, BDAI shall be defined as an artificial intelligence entity that:

 

  • Is programmed to serve the functions of the head of state and the head of government in the executive, legislative, and judicial branches; 
  • Performs these functions and makes decisions with the sole goal of achieving the common good; and
  • Has unconstrained power to enforce these decisions; but
  • Cannot inflict harm or violence towards humans. 

 

These constraints have been introduced in order to remove the ambiguity of the benevolent intentions of BDAI by explicitly stipulating its purpose. 

 

In addition to supplying a neat framework for exploring benevolent dictatorship through a social influence lens, the BDAI thought experiment also provides an opportunity to consider an emerging concern in the realms of technology and the social sciences – that of the artificial intelligence takeover scenario (Damnjanović, 2015; Duffy, 2001; Tegmark, 2017; Yudkowsky, 2001). Fiction writers initially built plotlines around the threat of a hostile authoritarian machine, as in the 1872 novel Erewhon by Samuel Butler and the “Skynet” villain of the Terminator films, but have also included stories of AI rulers acting in the best interest of humanity, as in the Polity novels by Neal Asher (Damnjanović, 2015; Yudkowsky, 2001). Academia has followed suit with the latter in some respects, with social scientists theorising on the social intelligence (Duffy, 2001), political implications (Damnjanović, 2015), and long-term societal impacts (Tegmark, 2017) of benevolent AI, and technology experts examining practical applications of programming AI with benevolent goal architectures (Wang & Jap, 2017; Yudkowsky, 2001).

 

Two big ideas from this corpus help flesh out the conditions of the BDAI thought experiment. The first is a proposition of an AI guardianship not dissimilar to systems of technocracy (Damnjanović, 2015). Such a proposition would mean that current systems of governance need not be radically changed to accommodate AI, precluding drastic shifts in the dynamics of society and allowing for argumentation based on existing models and theories. The second is the notion that AI can avoid the human misconception which equates the “possession of absolute physical power with the exercise of absolute social power” (Yudkowsky, 2001, p. 10), and can therefore dissociate violence from social influence. These two ideas allow for a model of AI dictatorship that may work in the real world, and that eschews violence for influence. This foundation can now serve as a starting point for a more thorough examination of BDAI through the lens of social influence.

Artefact and Leader: The Paradox of BDAI

In attempting to use modalities of social influence to examine the second problem of benevolent dictatorship – how to achieve and retain power without violence – it is necessary to address the meta-problem of BDAI: Can an artefact without agency or membership in a social group become a leader or a figure of authority in society?

 

In 1996, Latour expressed two ideas: a proposition that objects had influence beyond mere utility as tools, infrastructures or screens – a reclamation of their role as actants in society (1996a); and a vision of a human as a kind of cyborg with object-extensions – not fully subjective and possessing cognition scattered across different technologies (1996b). Rather than delimiting the categories of subject and object, these notions are suggestive of a spectrum encompassing both, opening up the possibility of artificial intelligence as a reverse cyborg – an object entity with subject-extensions, with the potential to occupy subject-roles such as leader or authority. Latour (1996b) describes the subject-object grey area:

 

With so many intellectual technologies being introduced from writing to laboratories, from rulers to pebbles, from pocket calculators to material environments, the very distinction between natural, situated, tacit intelligences and artificial, transferable, disembodied ones has been blurred. Intelligence no longer seems a psychological or even a cognitive property, but something more akin to heterogeneous engineering and world making, a distributed ability to link, associate, tie, fragments of reasoning, stories, action routines, subroutines, and to hang them to many holders some of them look like neurone nets, other like softwares, other like graphics, still other like conversations and rituals. (p.8)

 

In any case, BDAI appears to straddle the line between traditional social psychology notions of subject and object, occupying an extreme pole on the spectrum of technology that in itself is “a quasi-social movement that mobilises resources and anticipates, confronts, assimilates and accommodates resistances and navigates the future like an expedition into unknown territories driven forward by the quest for El Dorado” (Bauer, 2008, p. 10). It is a similar spirit of intrepid exploration guided by a clear goal – in this case, a probable scenario of BDAI leadership and authority – that must inform the following sections of this essay.

Rising to Power - BDAI and Leadership

In comparison to the fuzzy boundary between subject and object, the borderline between human and non-human is rather more distinct. As such, an artificial intelligence entity, by its very nature, cannot claim membership in any human group. Following Hogg’s (2001) social identity theory of leadership, which proposes that leaders emerge from among the prototypical members of social groups, BDAI – if not clearly an object, then very clearly an Other – is thus disqualified from any group leadership position as a consequence of its non-membership.

 

Prototypicality incompatibility aside, the social identity theory of leadership hints at an alternate way forward for BDAI by introducing a claim that “good leaders are people who have the attributes of the category of leader that fits situational requirements” (Hogg, 2001, p.185). This statement paints a picture of a leader who emerges from an appropriate context rather than one who is merely installed into power solely on the basis of their personal attributes. As it happens, a second model of leadership exists which – while not explicitly allowing for the rise of artificial intelligence – does not specifically exclude non-members or non-prototypes of a particular group.

 

Moghaddam’s (2013) springboard model of dictatorship proposes that the establishment of a dictatorship as a particular form of leadership is largely dependent on situational considerations and the support of the elite. Potential dictators are enabled to “spring” into power not simply because of their leadership personality or prototypicality but rather because of contextual and psychological factors within their society (Moghaddam, 2013; Sidel, 2008). Using this model, it can be posited that a social scenario may emerge or be manipulated to be conducive to the establishment of BDAI. 

 

In this scenario, the rise of BDAI would still be mediated by a powerful elite group, composed perhaps of scientists and technocrats as in the AI guardianship scenario (Damnjanović, 2015). However, this stands counter to the self-serving motivation of elite groups throughout history, which have supported autocratic leaders in the expectation that their own influence will increase as the dictator gains power (Moghaddam, 2013; Sidel, 2008). Historical precedent suggests that elite groups would be unlikely to support a BDAI rise to power if the consequence of success is the dissolution of elite influence.

 

Even in the improbable scenario of an elite-orchestrated BDAI takeover, another conundrum emerges – whether BDAI leadership would be the mere acquisition of title and position or a true embodiment of the representation of a leader in the population’s shared conception of their society. Would the process be via majority influence and consensus (or perhaps an orchestrated illusion thereof)? Or would BDAI be established via an irreversible fait accompli, to be universally accepted overnight (Bauer, 2008)? The likelihood of either scenario could depend on prevailing cultural norms of individualism or conformity within the given society (Kim & Markus, 1999), with individualistic societies favouring a process of consensus and conformist societies more likely to accept a fait accompli. In any case, as history can attest, neither consensus – illusory or otherwise – nor fait accompli guarantees success, or non-violence, in a leader’s or dictator’s rise to power (Moghaddam, 2013; Moore, 1967; Sidel, 2008).

Obedience or Conformity - BDAI and Authority

An indicator of BDAI’s genuine occupation of a leadership position would be its ability to demonstrate authority and ensure obedience among its followers. As with the rise to leadership, obedience to authority is largely dependent on societal contexts, especially social norms (Bocchiaro & Zamperini, 2012; Kim & Markus, 1999; Laupa & Turiel, 1986; Meeus & Raaijmakers, 1995; Moghaddam, 2013). Even in dictatorial regimes, followers are regulated through normative models of behaviour rather than through direct interference by those in power (Moghaddam, 2013). While a few extreme examples of violent enforcement or punishment can significantly skew social behaviour towards norms of self- or group-enforced obedience (Moghaddam, 2013), in the case of BDAI it is necessary to ask: Can a similar effect be achieved without resorting to violence?

 

In examining this question, a distinction must be made between obedience via conformity to norms and obedience via compliance with direct orders. Conformity is traditionally considered a form of majority influence, in contrast with the classical notion of obedience to demands from authority (Bocchiaro & Zamperini, 2012; Sammut & Bauer, 2011). Enforcing authority and maintaining social order in a dictatorship require a mix of both conformity and obedience (Moghaddam, 2013), each of which is an effective and efficient means of exercising power in its own way. Conformity works in a broader, longer-term sense and is more energy-efficient, as it were, requiring little to no direct enforcement; obedience operates in the opposite manner, achieving a quick, specific objective but requiring more cognitive strain and engaging more resources in the enforcement of authority (Bocchiaro & Zamperini, 2012). The likelihood of violence also differs between the two, with the direct orders of an obedience approach increasing the chances of direct disobedience, resistance, and violence (Moghaddam, 2013). In a BDAI scenario, a conformity approach could hinder or delay the achievement of benevolent goals, whereas an obedience approach could compromise the commitment to non-violence.

 

Conformity and obedience are dependent on factors that can be manipulated (Bocchiaro & Zamperini, 2012), including the exploitation of psychological biases which can “nudge” people towards desired behaviours without necessitating violence (Easterly, 2013). Moghaddam (2013) outlines four specific steps by which a dictator can engineer conformity and obedience: (1) control the public space, (2) identify and destroy opposition voices, (3) implement social programs and shape the role of women, and (4) inflict random terror. While the implied violence of the second and fourth steps renders them unsuitable for BDAI, the first and third are rich with opportunities for manipulation.

 

Success may further be ensured by pairing direct orders with assurances of no negative consequences for obedience (Meeus & Raaijmakers, 1995) – an assertion made even more credible by the nature of BDAI – and by operating in contexts and populations amenable to BDAI, such as those socialised into cultures of conformity (Kim & Markus, 1999; Laupa & Turiel, 1986). There is, however, no evidence to suggest that any of these strategies is foolproof, either for non-violence or for desired outcomes. Even if absolute conformity and obedience were guaranteed, they would come with disadvantages of their own: a completely compliant society would likely languish in inefficiency, since identifying problems and areas for improvement is a form of dissent likely to be suppressed (Moghaddam, 2013). BDAI must, therefore, strike a precarious balance not only between conformity and obedience but also with an amount of dissent that allows for development and innovation without escalating into revolution.

Conclusion

This essay began with a recognition of the contradictory nature and history of benevolent dictatorship (Easterly, 2013; Moghaddam, 2013), which necessitated the creation of the BDAI thought experiment and its attempt to add a social influence dimension to the academic discourse on benevolent artificial intelligence.

 

BDAI presented a conundrum in this light, occupying a liminal space between static object and agentic subject. Even so, it could not escape its non-human categorisation, which disqualified it from the prototypical leadership role under the social identity theory of leadership (Hogg, 2001) and presented obstacles in terms of elite support and the realisation of a genuine leadership role under the springboard model of dictatorship (Moghaddam, 2013). Once in power, BDAI faces further challenges in ensuring the obedience of its followers sans violence or stagnation, whether through conformity to norms, compliance with direct orders, or a combination of both.

 

While the scenario itself is plausible, it requires an improbable synchronicity of societal, contextual, and psychological factors for an artificial intelligence entity to rule as a benevolent dictator, given the social influence modalities of leadership and authority. Should this thought experiment be translated into fiction, it would inevitably commit the literary sin of employing several deus ex machina plot devices in order to achieve success for BDAI.

 

In light of ever-changing technological developments, Damnjanović (2015) proposed that science fiction serve as a catalyst for political theorists; this essay hopes to offer a similar provocation to social psychologists and scholars of social influence alike.

References

Abed, R. T. (2004). Tyranny and Mental Health. British Medical Bulletin, 72(March), 1–13.

Bauer, M. (2008). Social Influence by Artefacts. Diogenes, 55(1), 68–83.

Bocchiaro, P., & Zamperini, A. (2012). Conformity, Obedience, Disobedience: The Power of the Situation. Psychology – Selected Papers, 275–294.

Damnjanović, I. (2015). Polity Without Politics? Artificial Intelligence Versus Democracy. Bulletin of Science, Technology & Society, 35(3–4), 76–83.

Duffy, B. R. (2001). Towards Social Intelligence in Autonomous Robotics: A Review. Robotics, Distance Learning and Intelligent Communication Systems 2001, 1–6.

Hogg, M. A. (2001). A Social Identity Theory of Leadership. Personality and Social Psychology Review, 5(3), 184–200.

Kim, H., & Markus, H. R. (1999). Deviance or Uniqueness, Harmony or Conformity? A Cultural Analysis. Journal of Personality and Social Psychology, 77(4), 785–800.

Latour, B. (1996a). On Interobjectivity. Mind, Culture, and Activity, 3(4), 228–245.

Latour, B. (1996b). Social Theory and the Study of Computerized Work Sites. Information Technology and Changes in Organizational Work, 295–307.

Laupa, M., & Turiel, E. (1986). Children’s Conceptions of Adult and Peer Authority. Child Development, 57(2), 405–412.

Meeus, W. H. J., & Raaijmakers, Q. A. W. (1995). Obedience in Modern Society: The Utrecht Studies. Journal of Social Issues, 51(3), 155–175.

Moghaddam, F. M. (2013). The Psychology of Dictatorship (Vol. 36). Washington, DC: American Psychological Association.

Moore, B. (1967). Social Origins of Dictatorship and Democracy: Lord and Peasant in the Making of the Modern World. London: Allen Lane, The Penguin Press.

Sammut, G., & Bauer, M. (2011). Social Influence: Modes and Modalities. In D. Hook, B. Franks, & M. Bauer (Eds.), The Social Psychology of Communication (1st ed., pp. 87–106). Houndmills, Basingstoke, Hampshire; New York: Palgrave Macmillan.

Sidel, J. (2008). Social Origins of Dictatorship and Democracy Revisited: Colonial State and Chinese Immigrant in the Making of Modern Southeast Asia. Comparative Politics, 40(2), 127–147.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf.

Wang, Q., & Jap, S. (2017). Benevolent Dictatorship and Buyer-Supplier Exchange. Journal of Business Research, 78, 204–216.

Yudkowsky, E. (2001). Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. San Francisco, CA: Machine Intelligence Research Institute.

#LahingDakila

Micheline Rama is a social behavior change advisor and has a degree from The London School of Economics and Political Science. She co-founded DAKILA in 2006.

#LahingDakila is a collection of thoughts, opinions, and features from members and allies of the organization. The views and opinions expressed in this blog are those of the contributors and do not necessarily reflect the views or positions of the organization.
