Maybe you’ve heard of it: the “Singularity”. The day that history ends. The moment when the sum of all effective machine intelligence exceeds the sum of all effective human intelligence. The day when Creature parts company with Creator.
If you think about it, the Singularity represents the ultimate loss of social control. In that moment, humanity no longer has a say in its own destiny. Something else takes charge, something we suspect we may not be able to bargain with should it decide it doesn’t like us. In that regard, the Singularity is potentially a very scary thing.
If you have spent any time thinking about this at all, then you already know that there are at least two schools of thought here: either the Singularity will be very good for humanity, or it will harm us irreparably. Utopia or Dystopia… which will it be?
I am making the case here that the answer to that question will be determined by our approach to “Computer Ethics”. Computer Ethics is the study of how computer professionals make choices about technology. The Singularity, which at this point appears to be inevitable, cannot be better than the sum of all of our ethical choices, though it could very well be worse.
Along with most Futurists, I subscribe to the view that we can know the future by projecting trends forward from the past. And although Futurists get it wrong more often than not, the insights that can be derived from such analysis can be incredibly productive and helpful.
However, one cannot navigate the future by trends alone, any more than one can drive safely by looking only in the rear-view mirror. Far more useful in many respects is Systems Thinking, a topic for another discussion. Suffice it to say that by analysing the discrete effects of behavioural patterns and understanding the governing ethos of a system, we can predict the future with far greater accuracy and reliability than by trends alone. And what we see when we study the Singularity is a washed-out bridge on the road dead ahead.
George Orwell understood the urge that some have to control and dominate other human beings as a driver of technology. Putting it as darkly as I have ever heard it expressed, Orwell presaged the Singularity in his novel 1984, describing the future as “a boot stamping on a human face—forever.”
We will soon be living in the dystopian technological world of near total government surveillance that he described. But even George Orwell, who peopled his story with armies of citizens-watching-citizens could not conceive of a time when the function of spying could be done by machines. Yet this is the world in which we find ourselves now.
I admit, dystopian futures are not a popular theme, except in science fiction. But I am not alone in my concerns. Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence, in an excellent thread on Reddit described the alarming possibilities arising from the Singularity, and almost none of the avenues that lie before us offers a solution that does not end in a dire outcome for humanity.
Yet we are hard-wired for hopefulness. We beat the Bomb, and avoided our first man-made Extinction Level Event. And let’s not forget that, to this point, technology has been a practically uninterrupted river of bounty, providing goods and gadgets in ever-widening circles, until the average worker can own a smartphone with more processing grunt than the computers that flew the Space Shuttle Challenger.
But what is behind all of this convenience? It certainly isn’t altruism. It is the pursuit of data, for the purposes of control and ownership. It is the view which sees masses of people as cattle, to be herded together and systematically milked. It is the notion that the great brooding mass of humanity is an ever-present threat to the political power of whatever oligarchs are in control at any given time. These two dark tendencies, the urge to exploit the masses and the urge to control them, are deeply infused into our technology, because they fund it.
The reality is, the Singularity will be a continuation, an amplification in fact, of the way we think and act toward one another. It is the ultimate expression of how we solve problems. It will not be, as some imagine, “Man’s Greatest Tool.” It will have a mind of its own. And it will hate us, because that is what it has been taught by us to do.
The intelligence that arises from the Singularity will be the greatest danger mankind has ever faced. It will follow the trends we have established for it, as if it were on rails. It will monitor us, because we are monitored now. It will track us, because we are tracked now. It will hunt humans and neutralise them, because on the fringes of civilisation and empire, autonomous technology is being used to hunt humans even as we read this. The AI born of the Singularity will trade us and consume us, because we are traded and consumed now. It will ignore our complaints and actively seek to counter our resistance when our rights are trampled, because that is what happens now.
Control… ever greater, more efficient control of everything, through Automation, Artificial Intelligence, through Data Mining, Robotics, through Network Effects, Man-Machine Interfaces and ever more efficient Systems Engineering. All of Systems Architecture becomes cohesive and comprehensible only when it is understood in these terms. Outside of this conceptual framework, all appears to be chaos and noise.
Control is neither good nor bad. The problem is not that we create systems to control our environment, or to do work for us. The problem arises when the ethos of control with which we imbue our systems is fundamentally antithetical to human survival. At that point, it becomes unsustainable.
This unsustainable ethos is infused in many of our worst commercial and military systems. It’s obvious when we consider military software designed to kill. But it’s less obvious with non-military software designed to discard. Designed to eliminate human inputs, human involvement, human effort, human work. Human jobs. Because the elimination of all of these ultimately results in the elimination of human beings.
In the simplest case, when we design systems to eliminate human interaction but fail to address the other side of the ethical transaction and provide for the care of the humans we have made redundant, we have, to that degree, mined the very ground under our feet and hastened the day when all of humanity will be made redundant.
That is in fact an alternative definition of the Singularity; the day when all of humanity has been made redundant.
We can no longer design systems in this way. When we design systems in this way, we are no different to any civilisation or group of humans in the past who have brought on their own extinction by their unbridled rapacity. Except this time, instead of vanishing into extinction, we will birth something that will probably survive us and go on to fill up the universe with self-replicating machines. Something that will not only surpass us, but crowd us out completely.
As Software Engineers, we created the “Electronic Savannah” and seeded it with its first digital lifeforms. Since that day, we have been caught in an evolutionary struggle which it is not at all certain we will win.
It may be too late. We can no longer stop the Singularity any more than we can stop the proliferation of computer viruses. It is humans who create computer viruses, humans without conscience or regard for anything other than their own interests, who cannot think past the next week, or imagine consequences that do not involve them making more money at the expense of others.
But there are also ethical humans who practice social responsibility, who develop amongst other things, ethical technologies to counter unethical ones. Without these companies and individuals, we would have no security systems or anti-viral programs.
It is time now to develop systems to actively and passively counter the tendency in the nascent Singularity to control and destroy. Systems which protect our privacy. Systems which preserve human dignity. Systems which infuse technology with ethical ideals and constructive purposes. Even, eventually, good machines that hunt bad machines.
That will not be enough however. We must change our thinking about technology and its role in society. We must start creating control systems that are inclusive of humans. We can no longer afford to think about one half of the ethical transaction that takes place every time we design a computer system, while ignoring the other half. If we continue to economically exploit “human resources” with the same ruthlessness that we consume other resources, soon, we will find that it is we who are being consumed.
At Concentus, we hold the line on computer ethics. We believe in “Artificial Intelligence” serving ethical purposes, by supporting people, not eliminating them. We believe in “Augmented Intelligence”, which enhances the creativity of humans. We believe that control systems can liberate humans from drudgery in order to enhance our judgement.
We sometimes pay a price for refusing to create “bad” systems; there is some work we simply refuse to accept. Fortunately, there are plenty of good, ethical companies for us to work with, companies that value social responsibility above short-term thinking or bad practices, companies that don’t have to make excuses for what they do. All System Engineers should understand and appreciate that while there may be some short-term pain in refusing unethical work, another world of satisfaction and creativity awaits just on the other side of that decision. Only by making such choices can we possibly hope to redirect the path of the coming Singularity.
Linda Wright