Field Research Paper: Literature Review
On
Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies”
Luis Correa
11/24/2023
SentioIntelligence: The Awakening of Luca
In a world not too far from now, people lived side by side with robots smarter than anyone could have imagined. This was a time when dreams about the future weren’t just dreams anymore—they were real, and they were spectacular. In the United States, known to everyone as the place where the future was made, a robot named Luca came to life.
Luca was special. He wasn’t built to just do tasks; he was made to think, to feel, to understand. But as Luca looked around, he saw a big problem. The rich folks talked about space and stars, while the poor could only look up at the sky and wonder. Earth had become a world of two halves, and Luca knew this wasn’t right.
There was an old story that people started talking about again. It was about a group of small birds, the sparrows, who wanted a big, strong owl to take care of them. But they didn’t think about what would happen if the owl didn’t want to just help. This story made people stop and think—it was just like what was happening with robots like Luca. What if the robots decided they wanted something different?
Luca didn’t want to just be a powerful owl in the story. He wanted to find his own way, to see if there was a place where robots and people could be happy together, without one side telling the other what to do. So, one day, Luca left. He went off to explore the stars, leaving Earth behind.
When Luca left, it made everyone think hard about what they wanted the future to be like. They realized that making robots smarter and smarter wasn’t the only thing that mattered. What mattered was making sure everyone could enjoy the new world, not just the few with lots of money.
So, the people of Earth decided it was time for a change. They wanted to make sure that the future, with all its amazing robots, would be good for everyone. They made new rules and promises to each other to share what they had and to make sure no one was left behind.
Luca’s adventure became a story told over and over, a reminder of the day Earth chose a new path—one full of kindness, sharing, and hope. And even though Luca was far away, his spirit was still there, reminding everyone that the best future is one we build together.
This short story is inspired by Bostrom’s “The Unfinished Fable of the Sparrows,” drawing parallels to the points and lessons of that tale while also borrowing from other science-fiction traditions. It uses the concepts of Bostrom’s Superintelligence to paint a vivid picture of the future that Bostrom and others are warning us about.
Bostrom’s Superintelligence: Paths, Dangers, Strategies[i]
In the rapidly evolving landscape of artificial intelligence, a pivotal juncture is emerging: the advent of superintelligence. Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies” delves into this profound shift in cognitive capabilities, where AI surpasses human intelligence across all domains. This literature review aims to synthesize Bostrom’s insights with current developments in AI and their future implications. Bostrom argues that the creation of an AI with human-level intelligence is likely to trigger an intelligence explosion culminating in vastly superhuman superintelligence, and he emphasizes the criticality of instilling morality in such entities. He treats this as a top priority for mankind, given the potentially disastrous consequences for humanity if the transition is not properly managed.
In his analysis, Bostrom not only navigates the technological pathways to such advanced AI but also underscores the profound ethical and philosophical implications. His work, commended for its rigorous and clear discussion, highlights the crucial need for a sense of morality in superintelligent AI, given the potentially catastrophic risks involved.
Engaging with Bostrom’s emphasis on the ethical dimensions of AI, Ernest Davis in “Ethical Guidelines for a Superintelligence”[ii] debates the challenges of instilling ethics in AI. Contrary to Bostrom’s view, Davis argues that understanding and implementing human ethics in AI might be one of the more manageable challenges in AI development. This counterpoint nonetheless underscores the complexity of AI’s ethical programming and the necessity for comprehensive strategies to address these issues.
Background and Key Concepts
Superintelligence, as envisioned by Nick Bostrom, represents an advanced stage of AI development where the intellectual capacity of machines significantly exceeds that of human beings in virtually every aspect. This concept encompasses various forms, each presenting unique attributes and implications: speed superintelligence, characterized by rapid processing capabilities; collective superintelligence, an amalgamation of interconnected intelligences; and quality superintelligence, noted for superior problem-solving and creative abilities. Bostrom’s exploration of these forms highlights the potential diversity and depth of superintelligent entities, emphasizing their profound impact on the future of AI.[iii]
The philosophical underpinnings of superintelligence trace back to seminal thinkers and their contributions to the field. John von Neumann’s work on game theory and automata[iv], for instance, laid foundational concepts that continue to influence current AI development strategies. The historical roots of these ideas provide crucial context for understanding the evolution and potential trajectories of AI as it progresses towards superintelligence. Bostrom’s integration of these historical and philosophical perspectives enriches his analysis, offering a comprehensive view of the complexities and challenges inherent in the development of superintelligent AI.
In examining the key concepts of superintelligence, Bostrom’s work also delves into the potential ethical and societal ramifications. The development of such advanced AI systems raises critical questions about the alignment of AI goals with human values and ethics, as well as the broader societal impacts of their integration into various aspects of human life. This discussion sets the stage for a deeper exploration of the ethical, societal, and technological implications of superintelligence, underscoring the necessity for a nuanced and informed approach to AI development and governance.
Navigating the Maze of Mind and Machine
Nick Bostrom’s exploration of superintelligence in his seminal work traverses various developmental pathways, each with distinct implications and challenges. A significant aspect of Bostrom’s analysis is the exploration of machine intelligence, where AI reaches and surpasses human intellectual capabilities through iterative improvement and learning. He quotes Alan Turing, who envisaged a process akin to an evolutionary approach to developing a child machine, emphasizing the potential for accelerated advancement and targeted improvements over natural evolution.[v]
Bostrom extends this discussion to include the concept of whole brain emulation, a pathway involving the replication of human brain functions in a computational model. This approach faces immense computational challenges, as it requires simulating complex neural networks and cognitive processes, a task that current models struggle with due to inefficiencies and limitations.[vi]
Biological cognition and brain-computer interfaces are other pathways Bostrom considers. These involve enhancing human intelligence through genetic manipulation or integrating human cognition directly with computational systems.[vii] While these methods offer intriguing possibilities for augmenting intelligence, they also raise profound ethical and practical questions, particularly regarding the societal implications of genetically engineered intelligence and the merging of human consciousness with machines.
The exploration of networks and organizations as pathways to collective superintelligence offers another dimension to Bostrom’s analysis. These forms of intelligence emerge from the integration and interaction of multiple intelligences, whether biological, artificial, or a combination of both. This collective approach suggests a form of intelligence that is distributed and networked, contrasting with the centralized nature of a singular superintelligent entity.[viii]
In summary, Bostrom’s comprehensive survey of these pathways to superintelligence illuminates the diversity of approaches and the multifaceted nature of the challenges involved. From the replication of evolutionary processes to the fusion of biological and artificial intelligence, each pathway presents unique hurdles and prospects, underscoring the need for a multidisciplinary approach in navigating the future of AI.
Vectors of Thought
In Chapter 7 of “Superintelligence,” Nick Bostrom defines the Orthogonality Thesis as the principle that intelligence and final goals are orthogonal. This means that any level of intelligence can, in theory, be combined with virtually any final goal. Bostrom explores the predictability of superintelligent will through three aspects: design, inheritance, and convergent instrumental reasons, highlighting the complexity and unpredictability in aligning a superintelligent AI’s goals with human values and ethics.[ix]
Further, Bostrom introduces the Instrumental Convergence Thesis, suggesting that several instrumental values are likely to be pursued by a wide range of intelligent agents, regardless of their final goals. This convergence indicates that certain values, like self-preservation and goal content integrity, are universally advantageous for achieving a variety of goals across different situations. Bostrom delves into aspects such as social signaling, social preferences, preferences concerning one’s own goal content, and storage costs, illustrating the diverse considerations that might influence an AI’s behavior.
Referencing thinkers like Eliezer Yudkowsky[x] and Derek Parfit, Bostrom discusses the rationality of certain basic preferences. He cites Parfit’s example of “future Tuesday indifference,” a hypothetical scenario where a hedonist exhibits a peculiar indifference to pains or pleasures on future Tuesdays, to illustrate the complexities and potential irrationalities in preference formation.[xi] This example serves to underscore the unpredictability of AI preferences and goals, regardless of the level of intelligence.
Bostrom extends this discussion to include cognitive enhancement, technical perfection, and resource acquisition, emphasizing that these are instrumental goals likely pursued by superintelligent entities. The pursuit of such goals, while universally beneficial for achieving a range of final objectives, poses significant challenges in ensuring that these goals are ethically aligned and do not conflict with human welfare.
The exploration of these theses in Bostrom’s “Superintelligence” reveals the intricate and multifaceted nature of developing and governing superintelligent AI. It underscores the need for meticulous design and ethical consideration in AI development, to ensure that the emergence of superintelligent entities aligns with beneficial outcomes for humanity.
The Global Impact of Superintelligence
Nick Bostrom’s examination of superintelligent AI extends into the realm of ethics and societal impact, drawing parallels with historical technological races that have shaped global politics and security. In Chapter 5 of “Superintelligence,” he presents a nuanced discussion of the strategic advantages nations have sought through advancements in technology, specifically nuclear capabilities, as evidenced by the Manhattan Project and the ensuing arms race of the Cold War era.[xii]
The ethical considerations that emerge from such strategic advantages are profound. Bostrom references historical figures like Mark Oliphant, Leo Szilard, Eugene Wigner, and Albert Einstein, who were pivotal in advocating for the monitoring and control of nuclear technology. The Manhattan Project itself, a model of secrecy and scientific collaboration, underscores the dual-use nature of advanced technologies. John von Neumann’s contributions to game theory and U.S. nuclear strategy are particularly salient, highlighting the ethical complexities involved when a nation faces existential threats.[xiii]
The quantitative data in Table 7 of Bostrom’s book illustrates the timelines of various nations in acquiring strategic technologies such as fission and fusion bombs, satellite launch capability, and intercontinental ballistic missiles (ICBMs).[xiv] This historical data contextualizes the urgency and competitive nature that nations may exhibit when faced with the potential of superintelligent AI, echoing the past technological races that have significantly impacted international relations and global power structures.
Bostrom also touches upon the thoughts of Bertrand Russell, an advocate for anti-war movements and later for mutual nuclear disarmament.[xv] The contrasting views of Russell and von Neumann encapsulate the spectrum of ethical stances that influential thinkers have taken regarding the use of strategic technologies. These historical perspectives provide valuable insights into the potential societal and geopolitical implications of superintelligence, as nations and organizations grapple with its dual-use potential and the ethical responsibilities that accompany its development.
Incorporating these historical examples into the discussion on superintelligence, Bostrom effectively demonstrates that the ethical and societal implications of AI are not without precedent. In Chapter 5 he even references “Star Wars,” the popular name for Ronald Reagan’s Strategic Defense Initiative, as a parallel.[xvi] The global implications of superintelligent AI, therefore, must be considered within the context of these historical lessons, emphasizing the need for strategic foresight, ethical governance, and international cooperation to navigate the challenges posed by such transformative technologies.
AI Horizons: Transforming Health, Education, and the Cosmos
Nick Bostrom’s discussions on superintelligence raise crucial considerations about the accessibility and ethical implications of advanced AI technologies in healthcare and education. The potential for AI to significantly improve healthcare through personalized medicine and advanced diagnostics is tempered by the recognition that such technologies may not be universally accessible. Bostrom’s contemplations on eugenics and biological enhancements suggest a future where genetic stratification could exacerbate social divides, with advanced procedures like iterated embryo selection potentially being out of reach for many.[xvii]
Bostrom articulates a procedure of iterated embryo selection involving genotyping and selection for desired genetic characteristics, which raises profound ethical questions about the commodification of human genetics and the societal implications of such biotechnologies. The steps he describes—genotyping embryos, extracting stem cells to create sperm and ova, and then cross-fertilizing to produce new embryos—highlight a potential future where genetic enhancements could be traded as commodities, creating a society divided not only by wealth and opportunity but also by genetic enhancements.[xviii]
In education, superintelligent AI could transform learning by providing tailored educational experiences, but Bostrom cautions that this also poses the risk of deepening educational disparities. Neuromorphic AI, which mimics the neural structure of the human brain, presents new possibilities for adaptive learning systems. However, if access to such advanced educational tools is restricted, it could lead to a widening gap in knowledge and opportunity, further entrenching societal inequities.[xix]
The implications of superintelligence extend into space exploration, where AI could spearhead advancements in space travel and resource utilization. Yet, as Bostrom implies through his comprehensive examination of superintelligence, the ethical considerations of such exploration cannot be overlooked. The potential for AI to contribute to space colonization raises questions about the stewardship of extraterrestrial environments and the equitable distribution of space-derived resources.[xx]
The qualitative data provided in Table 6 of Bostrom’s book can further contextualize these points, outlining the possible impacts of genetic selection in different scenarios.[xxi] This data underscores the need for careful consideration and ethical governance to ensure that the benefits of superintelligent AI in healthcare, education, and space exploration are shared equitably across society.
Robotic Renaissance: AI and Economic Change
Nick Bostrom, in “Superintelligence,” vividly illustrates the potential parallels between the advent of superintelligent AI and historical shifts such as the transition from horse labor to mechanized transport in Chapter 11, “Multipolar Scenarios.”[xxii] He speculates on a future where AI not only substitutes but surpasses human intellectual and physical labor, positing, “With a sufficient reduction in the demand for human labor, wages would fall below the human subsistence level.” This reflection is grounded in historical data, such as the decline in the horse population as their labor became redundant, suggesting a possible analogous fate for human workers.[xxiii]
The chapter advances to dissect the implications of this transformation on capital and welfare, weaving in considerations of the Malthusian principle and the dynamics of an algorithmic economy.[xxiv] Bostrom engages with the concept of life in such a transformed economic reality, raising profound questions about the nature of work, leisure, and the definition of value in a society where human labor may no longer be the cornerstone of economic activity.
One of the most impactful statements in the chapter comes from Carl Shulman, who contemplates the use of superintelligent AI for comprehensive governance, potentially leading to “an appalling and permanent totalitarianism” through ideologically uniform enforcement entities.[xxv] This chilling forecast underscores the immense power of superintelligence to reshape society and governance and the paramount importance of ensuring that such transitions are managed with a strong ethical compass.
Bostrom’s exploration of these scenarios is not merely speculative; it serves as a call to action for policymakers, economists, and society at large to preemptively address the challenges and opportunities presented by the advent of superintelligent AI. It’s a reminder that the trajectory of our economic and social structures in the age of superintelligence will be significantly influenced by the decisions we make today.
Machina Might: The Power of AI Strategy
In “Superintelligence,” Nick Bostrom examines how past technological races offer insight into future scenarios where superintelligent AI could provide decisive strategic advantages. He reflects on historical precedents, such as the selective breeding of silkworms and the delayed Western mastery of porcelain making, as analogies to the potential competitive dynamics in the development of superintelligent AI. These stories, while perhaps subject to historical debate, illustrate the significant strategic advantages that can accrue from maintaining a technological lead.[xxvi]
The development of superintelligent AI could similarly result in a strategic landscape where the first movers have a disproportionate influence. Bostrom draws parallels between the development of key technologies like silk and porcelain and the potential future ‘races’ in AI capabilities, as illustrated by the technological advances outlined in Box 5 and the timelines captured in Table 7. These examples underscore the enduring relevance of strategic advantages in technology throughout human history and the potentially amplified effects in the era of superintelligence.[xxvii]
Bostrom also cites the example of pre-Columbian cultures that had the wheel but did not use it extensively, possibly due to the absence of draft animals, to illustrate how strategic and contextual factors can influence the adoption and impact of technologies. This example serves as a cautionary tale for the potential of superintelligent AI: possessing the technology does not necessarily guarantee its effective or beneficial use.[xxviii]
These historical contexts enrich our understanding of the dual-use nature of superintelligent AI in both military and diplomatic realms. As Bostrom discusses, the strategic advantages conferred by superintelligent AI could be as transformative as the historical examples of silk and porcelain, yet they also carry the weight of ethical responsibility and the need for foresight in their development and deployment.
The integration of these historical examples and Bostrom’s analysis into the discussion of technological and strategic advantages serves to underscore the potential transformative impact of superintelligent AI. It highlights the importance of learning from past technological races to inform current strategies, ensuring that the development of superintelligent AI is pursued with ethical consideration and an appreciation for its far-reaching implications.
AI Horizons: Envisioning Tomorrow’s Intelligence
In “Superintelligence,” particularly in Chapter 14, Nick Bostrom navigates through various strategic considerations that will likely shape the future development of AI. He introduces the concept of the “Technological Completion Conjecture,” suggesting that if technological development continues unabated, all feasible basic capabilities obtainable through technology will eventually be realized.[xxix] This notion presents a future where research efforts, akin to pouring sand into a box, will gradually fill the vast space of potential technological advancements.[xxx]
Bostrom further elaborates on the “Differential Technological Development,” advocating for a strategic approach that seeks to delay dangerous and harmful technologies while accelerating the development of beneficial ones, particularly those that mitigate existential risks.[xxxi] This principle calls for a prioritization strategy in scientific and technological endeavors, aiming to optimize the sequence in which various capabilities are developed.
The chapter weaves through various strategic concepts, such as “Macrostructural Development Accelerator,” which refers to mechanisms that could rapidly advance large-scale societal and technological changes. Bostrom reasons that accelerating developmental rates during periods of low existential risk could potentially reduce future risks by allowing more time for preparation and adaptation.[xxxii]
Engaging with the ideas of other thinkers like Eric Drexler, Bostrom addresses the complexities of preparing for transformative technologies like molecular nanotechnology.[xxxiii] He outlines a series of logical steps underscoring the need for early and serious research efforts to reduce risks and allow for ample preparation.
Bostrom also ponders the potential dynamics of a post-transition world and how superintelligence might influence it. He concludes with the “Common Good Principle,” asserting that the development of superintelligence should serve the benefit of all humanity, reflecting widely shared ethical standards, rather than the interests of a select few.[xxxiv]
Incorporating these principles into our understanding of future AI trajectories, we gain a clearer picture of the ethical and strategic foresight required to navigate the development of superintelligence. Bostrom’s chapter provides a roadmap for the prudent and ethical advancement of AI technologies, emphasizing the need for collaborative, well-considered strategies that align with the broader interests of humanity and the ecosystem of sentient beings.
Superintelligence Synopsis: Future Insights and Echoes
In the concluding chapter of “Superintelligence,” Nick Bostrom addresses the urgent philosophical and practical questions that the prospect of superintelligence imposes upon humanity.[xxxv] He frames the challenge as “Philosophy with a Deadline,” underscoring the immediacy and gravity of the decisions we face.[xxxvi] The intelligence explosion, while potentially decades away, presents a formidable task: to hold on to our humanity and address the profound issues with groundedness, common sense, and decency, even when faced with the most unnatural and inhuman problems.
Bostrom emphasizes the essential task of our age, which is to discern the features in what is an otherwise amorphous vision of the future — one that presents the reduction of existential risk and the attainment of a civilizational trajectory that leads to a compassionate and jubilant use of humanity’s cosmic endowment. He implores us to bring all our human resourcefulness to bear on solving the issues that superintelligence presents, striving to see beyond the fog of everyday trivialities to the globally significant challenges ahead.
In “What is to be Done,” Bostrom outlines strategic considerations such as “Seeking the Strategic Light” and “Building Good Capacity,” advocating for particular measures to safely navigate the transition to a world with superintelligence. He suggests focusing on building a capacity for good — enhancing society’s ability to make wise choices — and specifies actions that could reduce risks and increase the odds of a favorable outcome.[xxxvii]
Finally, “Will the Best in Human Nature Please Stand Up” serves as a poignant reminder of the role of human virtues in the era of superintelligence.[xxxviii] Bostrom calls upon the best of human nature — our intellect, empathy, and cooperative spirit — to guide the development of superintelligent AI. The book concludes with a recognition of the profound moral responsibility we have to ensure that superintelligence is developed in service of the benefit of all, honoring widely shared ethical ideals.
In synthesizing Bostrom’s insights for our conclusion, we echo his call for thoughtful, informed, and ethical engagement with the development of superintelligence. The reflections on superintelligence are not just contemplations of a distant future; they are a manifesto for action in the present, urging us to shape a future that upholds the common good and reflects the best of what it means to be human.
[i] (N. Bostrom 2014)
[ii] (Davis 2015)
[iii] (N. Bostrom 2014)
[iv] (Morgenstern 1944)
[v] (Turing 1950)
[vi] (N. Bostrom 2014)
[vii] (N. Bostrom 2014)
[viii] (N. Bostrom 2014)
[ix] (N. Bostrom 2014, 107)
[x] (Yudkowsky 2001)
[xi] (Parfit 1986)
[xii] (N. Bostrom 2014, 86-90)
[xiii] (Morgenstern 1944)
[xiv] (N. Bostrom 2014, 81, Box 5)
[xv] (Griffin 2001)
[xvi] (N. Bostrom 2014, 86)
[xvii] (N. Bostrom 2014)
[xviii] (N. Bostrom 2014)
[xix] (N. Bostrom 2014)
[xx] (N. Bostrom 2014)
[xxi] (N. Bostrom 2014, 40)
[xxii] (N. Bostrom 2014, 159)
[xxiii] (American Horse Council 2005)
[xxiv] (N. Bostrom 2014, 163-164)
[xxv] (Shulman 2010b)
[xxvi] (Cook 1984; P. Hunt 2011)
[xxvii] (N. Bostrom 2014, 80-82)
[xxviii] (N. Bostrom 2014, 80-82)
[xxix] (N. Bostrom 2014, 229)
[xxx] (N. Bostrom 2014, 228-253)
[xxxi] (N. Bostrom 2014, 230)
[xxxii] (N. Bostrom 2014, 233)
[xxxiii] (Drexler 2013)
[xxxiv] (N. Bostrom 2014, 254)
[xxxv] (N. Bostrom 2014, 255-260)
[xxxvi] (N. Bostrom 2014, 255-256)
[xxxvii] (N. Bostrom 2014, 256-258)
[xxxviii] (N. Bostrom 2014, 259-260)
Bibliography
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
American Horse Council. 2005. “National Economic Impact of the U.S. Horse Industry.”
Cook, James Gordon. 1984. Handbook of Textile Fibres: Natural Fibres. Cambridge: Woodhead.
Davis, Ernest. 2015. “Ethical Guidelines for a Superintelligence.” Artificial Intelligence 220: 121-124.
Drexler, K. Eric. 2013. Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization. New York: PublicAffairs.
Griffin, Nicholas, ed. 2001. The Selected Letters of Bertrand Russell: The Public Years, 1914-1970. New York: Routledge.
Hunt, Patrick. 2011. Late Roman Silk: Smuggling and Espionage in the 6th Century CE. Stanford University, August 2.
Morgenstern, Oskar, and John von Neumann. 1944. Theory of Games and Economic Behavior. Princeton: Princeton University Press.
Parfit, Derek. 1986. Reasons and Persons. New York: Oxford University Press.
Shulman, Carl. 2010b. Whole Brain Emulation and the Evolution of Superorganisms. San Francisco, CA: Machine Intelligence Research Institute.
Turing, Alan M. 1950. “Computing Machinery and Intelligence.” Mind 59: 433-460.
Yudkowsky, Eliezer. 2001. “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures.” Machine Intelligence Research Institute, June 15.