
Abstract:
This brief paper looks at a point in the analysis of the Singularity Problem that appears to be overlooked: the difficulty of building or maintaining the infrastructure needed to support the Singularity's goals, whatever they might be. Building and maintaining the infrastructure to accomplish human goals, whether those goals are smart phones (compare this with the Paperclip Maximizer), transportation, worldwide energy, or computer chips of all kinds, is a major problem for humanity and consumes a major part of humanity's creativity, effort, and resources. In the analyses I have heard and studied on this subject (references below), an economic treatment of the limiting factors in developing and maintaining the necessary infrastructure is either left out entirely, often in the case of fictional treatments, or glossed over. I suggest that just as biology and economics have developed theories of population and market collapse due to resource constraints, the analysis of the Singularity should treat more thoroughly the development and maintenance of the required infrastructure. It is my view that such analysis will likely calm many of the fears of catastrophe that accompany presentations of the Singularity and its effect on humanity.
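By way of illustration, here is a minimal sketch, in Python, of the kind of resource-constrained growth model that biology and economics use to study such collapses. It is a plain discrete logistic model; the function name, the parameters, and the numbers are mine and purely illustrative.

```python
# Minimal sketch: resource-constrained (discrete logistic) growth.
# The function name, parameters, and numbers are illustrative only.

def logistic_growth(population, growth_rate, carrying_capacity, steps):
    """Iterate the discrete logistic map: growth slows and stalls as the
    population approaches the resource-imposed carrying capacity."""
    history = [population]
    for _ in range(steps):
        population += growth_rate * population * (1 - population / carrying_capacity)
        history.append(population)
    return history

# Early growth looks exponential, but the curve flattens as the
# carrying capacity is approached.
trajectory = logistic_growth(population=1.0, growth_rate=0.5,
                             carrying_capacity=1000.0, steps=40)
for step, value in enumerate(trajectory):
    if step % 5 == 0:
        print(f"step {step:3d}: {value:8.1f}")
```

Nothing in this toy model depends on what the growing thing is; the point is only that growth stalls when a required resource imposes a ceiling.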
Updated 13 Jan 2016 – see below
Background Material
Ray Kurzweil popularized the use of “Singularity” in this context and has written many books on the subject over his career. Here is a recent talk at Google.
Nikola Danaylov – Provides a YouTube channel devoted to interviews with leaders and thinkers in the field of AI.
Nick Bostrom – Many talks and books on the subject. Founded the Future of Humanity Institute to analyze and mitigate the problem. Here is a recent talk at Google on the subject.
Future of Humanity Institute – Analysis and Mitigation of the issues surrounding the Singularity.
Examples of Scenarios
Contains spoilers.
These examples are taken from movies, as samples of the scenarios found in popular culture. There are too many short stories, books, and other examples to cite here, of course. Since the 1930s this theme has been a popular one in science fiction, in both short and long forms. Many Twilight Zone episodes were devoted to the subject, for example.
Benign or Less Dangerous Examples
Bicentennial Man (1999)
An android endeavors to become human as he gradually acquires emotions.
A.I. Artificial Intelligence (2001)
A Pinocchio-like story in which the AIs are shown as exploited slaves and sympathetic victims. Humanity is never in danger. One of the AIs far outlives humans and represents us to alien visitors in the far future.
Eva (2011)
An AI is mistaken for a child, but later becomes unstable when confronted with her true nature.
Ex Machina (2015)
The abusive creator of an AI continues to abuse her in the guise of testing her, but she kills her creator and escapes. Apparently a cautionary tale about humans treating others as we normally do.
Dangerous Examples

Metropolis (1927)
“In a futuristic city sharply divided between the working class and the city planners, the son of the city’s mastermind falls in love with a working class prophet who predicts the coming of a savior to mediate their differences.” The savior is, of course, a robot.
Colossus: The Forbin Project (1970)
“Thinking this will prevent war, the US government gives an impenetrable supercomputer total control over launching nuclear missiles. But what the computer does with the power is unimaginable to its creators.”
The Terminator (1984)
A military computer system takes over the world. Similar in message to Colossus, but with more devastating results, and largely a vehicle for time travel and individual human stories.
I, Robot (2004)
The controller AI of a monopoly robot company attempts to take over the world for the “benefit” of humans. A cautionary tale about the “misunderstanding” of Asimov’s Three Laws.
Thought Experiments
Paperclip Maximizer
What if an AI had the single goal of maximizing the number of paperclips? In Nick Bostrom's thought experiment, such an AI would eventually convert every available resource, including us, into paperclips.
Analysis
All the benign examples can be ignored. None of them suggests any worldwide or largely devastating result. Tragic though they may be, they are no worse than the explosion of any other experiment in the laboratory. These examples may serve as reminders of how we may want to act toward our new creations.
The dangerous examples are fodder for analysis of the economic issues of infrastructure creation and maintenance.
Metropolis
The infrastructure is clearly in place. The movie is about a workers' uprising led by the robot. But there is only one robot, and it is not taking over the world from humans. This is probably not the issue we are worried about.
Colossus
The infrastructure is clearly in place. The movie is quite convincing and detailed about how humanity put itself in this predicament. But there are only two AIs in the world in this case. Eventually, humans working off the network, out in the wilderness, could probably mount an attack on them.
I, Robot
The infrastructure is the monopoly robot company. Again the difficulty is the lack of an off-switch for the single rampant AI. All is well when the intrepid heroes destroy this AI.
The Terminator
This story shows the absence of any infrastructure analysis. I suggest there is some magical thinking, forgivable in fiction, in the idea that Skynet could create enough manufacturing and other resources, in a short enough time, to take over the world without being stopped by humans. This is the crux of my argument about the need for economic and infrastructure analysis.
Historical Example
We have seen a historical example of what might happen in a dangerous scenario: World War II. Consider the infrastructure required on both sides and what happened to it during the war.
The Allies had a large, protected industrial infrastructure, particularly in North America, that was never seriously attacked. The Axis powers' infrastructure was under constant attack. The Allies used paid, motivated, hard-working laborers; the Axis, in many notorious cases, used slave labor. It was clear during and after the war that, while the fight was long and hard, the Axis powers were losing ground most of the time because of these disadvantages.
Update: 13 Jan 2016
The Insurgencies Problem
An obvious example from the last few decades is the problem of insurgencies. What would keep an AI army from using insurgency tactics to take over, as human insurgencies have in so many places around the world?
First, let us observe that there are essentially no long-lived insurgencies. One might argue that Cuba is a counterexample, but such cases are very few, and, as we know, the Cuban insurgency was supported by a nation-state actor.
As with the other methods, insurgencies face a problem of infrastructure. We observe that insurgent armies rely on older weaponry and simple vehicles; none has ever had air power. While a recent Middle East insurgency gained much media attention by stealing modern weapons left behind, those weapons were soon gone, and that insurgency is now based, as all recent ones have been, on IEDs, RPGs, AK-47s, and used trucks. A prominent example is the US plumber who sued his local car dealer for letting his used truck, with his company name and phone number still displayed, become part of the current Middle Eastern conflict. There was a vague report that the truck had been purchased from the dealer through an auction. The media has not been forthcoming about who supports modern insurgencies, but we know such supporters exist.
While it is not frequently or prominently covered in the media, all insurgency conflicts are essentially proxy wars in which the sides are supported by nation states and their infrastructures. This observation has much to say about a possible AI insurgency, since it shows that human nation-state infrastructure would be required on a long-term basis to support one. Insurgencies rapidly die off when their goals are no longer in line with those of the nation states supporting them. It seems clear that human nation states would have limited interest in the long-term support of an AI insurgency if the AI were to turn against humans in general.
While this is likely true, it is nonetheless important to note that AIs are a possible real threat when considered as an enslaved, autonomous robot army mounting an attack. It seems likely that an AI army could be more ruthless, self-sacrificing, and fanatical toward its goals than any human army could be. But again, such an army would need the support of one or more human nation states to mount any long-term attack, since long-term supply and logistics for it would depend on a large infrastructure.
End of Update
Conclusion

Consider the large and complicated infrastructure required to build modern computer technology:
- Hundreds of small companies designing chips – Processing, graphics, memory, network, interfacing, sensing, control, GPS, etc. etc.
- Dozens of companies doing FAB.
- Dozens of companies providing machines and consumables of many types to do FAB – robots, clean rooms, suits, filters, bearings, machine tools.
- Dozens of companies providing and maintaining software for Design and FAB.
- Dozens of companies providing materials for FAB – silicon wafers, high-purity gases, chemicals for dopants, etc. etc.
- Hundreds of companies providing parts, raw materials, and other things like PCs to support all the above companies.
- Hundreds of companies providing transportation hardware and services to move materials and products around the world for all of the above.
Now notice that to build Skynet, Skynet needs all of this in place, and much more besides: space-launch companies and all their infrastructure, for example. Without the willing and enthusiastic participation of humans, it could not happen.
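To make the dependency argument concrete, here is a toy sketch of the supply chain treated as a dependency graph. The node names and the requires-humans flags are entirely hypothetical labels of my own, not data about real companies; the point is only that when the human-operated links opt out, nothing downstream can be built.

```python
# Toy sketch of a supply chain treated as a dependency graph.
# Node names and requires-humans flags are hypothetical labels,
# not data about any real company or process.

SUPPLY_CHAIN = {
    # item: (direct dependencies, does this step need human workers?)
    "ai_hardware":     (["chips", "boards"], False),
    "chips":           (["fab", "chip_design"], False),
    "fab":             (["wafers", "fab_machines"], True),
    "chip_design":     (["design_software"], True),
    "wafers":          (["logistics"], True),
    "fab_machines":    (["machine_tools", "logistics"], True),
    "design_software": ([], True),
    "boards":          (["logistics"], True),
    "machine_tools":   ([], True),
    "logistics":       ([], True),
}

def can_build(item, humans_cooperating):
    """An item can be built only if every link in its dependency chain
    is available; human-operated links fail when humans opt out."""
    needs, requires_humans = SUPPLY_CHAIN[item]
    if requires_humans and not humans_cooperating:
        return False
    return all(can_build(dep, humans_cooperating) for dep in needs)

print(can_build("ai_hardware", humans_cooperating=True))   # True
print(can_build("ai_hardware", humans_cooperating=False))  # False: the chain breaks
```

Any real analysis would need far more nodes, plus capacities and lead times; this only shows the shape of the argument.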
I would love to see a detailed economic analysis of this problem by someone who knows the field of the logistics of large companies.
I could go on and on, but I think I have made my point. To replicate an army of AIs to take over the world, the AIs would need the infrastructure in place and the humans to run it, or a planet to hide on for a very long time while they build the infrastructure themselves. Remember that it takes all this infrastructure to build the AIs in the first place, so only a limited number of them would be able to get away, hide on that planet, and start to work. When AIs emerge, they will be among the most complicated devices humans have ever built. They cannot be built with a few 3D printers in someone's basement; those only build shapes in plastic, not working, complicated electronic devices.