The Last Invention

Those who are interested in the cutting edge of computing will be familiar with the opportunities and benefits afforded by simulators and simulation in general.  We’ve come a long way from Conway’s “Game of Life”.  Exciting new opportunities, in both the short and the long term, are now starting to open up.

One of the most important branches of the field of simulation involves the use of “Intelligent Agents” in both continuous and discrete simulations.  Intelligent Agents are “actors” who inhabit a “world”, and who respond to events in that world according to various sets of rules which govern their behavior. If you would like to play around with these concepts, I heartily recommend MASON, a very powerful Multiagent Simulation engine written in Java.
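
To make the idea concrete, here is a minimal sketch of such a simulation in Python (rather than MASON’s Java, for brevity).  The world, the food resources, and the movement rule are all invented for illustration: agents are “actors” whose only behavioral rule is to walk toward the nearest food item and consume it.

```python
import random

class Agent:
    """A rule-based actor: each step, walk one cell toward the nearest
    food item and eat it on arrival."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.eaten = 0

    def step(self, world):
        if not world.food:
            return
        # Rule: head for the nearest food (Manhattan distance).
        fx, fy = min(world.food,
                     key=lambda f: abs(f[0] - self.x) + abs(f[1] - self.y))
        self.x += (fx > self.x) - (fx < self.x)
        self.y += (fy > self.y) - (fy < self.y)
        if (self.x, self.y) in world.food:
            world.food.remove((self.x, self.y))
            self.eaten += 1

class World:
    """A tiny discrete world: a square grid holding agents and food."""
    def __init__(self, size, n_agents, n_food, seed=0):
        rng = random.Random(seed)
        self.food = {(rng.randrange(size), rng.randrange(size))
                     for _ in range(n_food)}
        self.agents = [Agent(rng.randrange(size), rng.randrange(size))
                       for _ in range(n_agents)]

    def run(self, steps):
        for _ in range(steps):
            for agent in self.agents:
                agent.step(self)

world = World(size=20, n_agents=3, n_food=10, seed=42)
world.run(steps=50)
```

Everything the agents “know” is expressed as rules over the state of their world; richer rule sets and richer worlds give richer behavior, which is the essence of engines like MASON.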

[Image: simulated NFL players so realistic that you could be forgiven for thinking this was a snapshot from an actual professional football game.]

If you aren’t into Java, you may be familiar with the notion of Intelligent Agents through computer gaming.  3D first-person simulation games often use “bots”, which populate the world, challenging and often opposing the player.  The more intelligent these bots are, the more interesting the game becomes.

Bots are normally easy to distinguish from human players in games, because of the rigidity of their behaviors and the limited range of their responses.  For decades, the games industry has put the lion’s share of its investment into improving the realism of graphics, but as the rendering engines of even ordinary games have begun to rival HD video footage of the real world, attention has turned to developing more sophisticated AI engines.  These newer intelligence models will dramatically change the realm of computer simulations forever.  Already, bots are starting to pass the Turing Test in limited gaming scenarios.  Real progress is being made.

Mechanical robotic intelligence is still lagging far behind.  Humanoid robots which move, act and appear like us under unconstrained real-world circumstances are still decades away.  Physical robots fail the Turing Test miserably.  But at a certain point there will be a crossover between gaming and robotics, and the highly sophisticated and very realistic mental reasoning and behavioral models pioneered by “Intelligent Agents” in games will be transplanted into mechanical robots.  Mechanical robots will then be able to do a vast new array of things, limited only by their mechanical and physical configurations.  This transplantation will result in a Cambrian Explosion of new robotic forms.  Quite quickly, the questions of what is ethical and permissible in robotics will be raised again with a new sense of urgency.

The development of simulated Intelligent Agents, driven primarily by games designers, will not only far outstrip the development of robotic intelligence, but will also radically change the way we approach complex problem-solving in general computing.

First, developers of robotic systems will gradually begin to accept the primacy of software and software systems in control system design.  Use of modern software development techniques will drive greater and greater separation of concerns, cleaner abstraction into API interfaces, more software unit testing, the widespread introduction of continuous integration and delivery, and greater generalisation of software logic.  Robots will become “dumb clients” possessing tools (like hands and feet) and telemetry (eyes and ears), superior to ours but nonetheless devoid of any actual intelligence of their own.  The best robots will be modular, serving as adjuncts to larger robotic systems, or drones, operated remotely.  Uniquely, the drivers of these drone machines will increasingly not be human.  They will be intelligent agents contained in simulated environments, which get their sensory input from telemetry, and react according to sophisticated rules, just as game bots do.  They will “emit” control signals back to the drones in real time from within their environment, without needing to know anything about the external world, according to Model-View-Presenter principles.  Within these “containerised” simulated worlds, their intelligence and capabilities can continue to evolve in isolation, at the pace of software, not hardware, development.  Within these worlds, they will be able to “think” and “evolve” in real time, enhanced and maintained independently of physical hardware.
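
A sketch of that separation, with hypothetical telemetry fields and control signals: the agent reacts purely to what its simulated world presents, while a presenter layer shuttles data across the boundary, roughly in the Model-View-Presenter spirit described above.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    """Hypothetical sensor readings forwarded from the physical drone."""
    obstacle_distance: float  # metres to the nearest obstacle ahead
    heading: float            # current heading, in degrees

class ContainedAgent:
    """Lives entirely inside its simulated world: it consumes telemetry,
    emits control signals, and never touches external state directly."""
    SAFE_DISTANCE = 2.0

    def react(self, t: Telemetry) -> dict:
        # Simple rule-based policy, in the spirit of a game bot.
        if t.obstacle_distance < self.SAFE_DISTANCE:
            return {"throttle": 0.0, "turn": 15.0}  # stop and turn away
        return {"throttle": 0.5, "turn": 0.0}       # cruise

class DronePresenter:
    """The presenter layer: marshals telemetry in and control signals
    out, keeping the agent ignorant of the hardware behind them."""
    def __init__(self, agent):
        self.agent = agent

    def tick(self, raw_sensors: dict) -> dict:
        return self.agent.react(Telemetry(**raw_sensors))

presenter = DronePresenter(ContainedAgent())
command = presenter.tick({"obstacle_distance": 1.2, "heading": 90.0})
```

Because the agent sees only `Telemetry` objects, the same agent could drive a physical drone, a recorded log, or a wholly simulated body without modification.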

This “containment” of the agent within the simulated environment is vital to the quality of the simulation and the outcome, and will be a compelling direction for both simulation and robotics technology to evolve in.  Not only is it efficient and effective, it brings with it many seductive advantages over older approaches.

In this paradigm, from the perspective of the Intelligent Agent, the world it inhabits is the “real world”.  It may be oblivious to whatever external interfaces or monitors exist, so long as there is no interference.  Indeed, a “bot” in a simulation could not know for certain what, if anything, exists outside of its world.  It would perceive its world as the only reality, experiencing mathematical constraints as physical laws, and injurious collisions with other mathematical objects as real “pain”.  The more accurately and precisely these internal mathematical rules and constraints are implemented, the more reliable and predictable are the actions and behaviors of the agent(s).

While mechanical models and simulations have been with us for a long time, this new type of simulation is barely in its infancy.  I maintain, in fact, that this mode of simulation will eventually become the most powerful tool humanity has ever invented, or will ever invent, because it is possible to conceive of this mode of Artificial Intelligence eclipsing even the notion of the “Singularity”, which I have written about previously.

The current obsession in Artificial Intelligence is the creation of a “thinking” machine.  While I believe this is important research, I also believe that such an approach inevitably leads us into a cul-de-sac.

The first reason for this is due to the fundamental limits of analytical computing.

Analytical reasoning, and the software which facilitates it, will always have its place.  But ultimately, it is possible to envisage a future in which it is no longer the primary means of solving intractable problems.

Simply put, analytical computer programs are naturally limited by the complexity of their structure.  The more capable they become, the more complex they become, and thus the less maintainable, to the point where humans are no longer able to comprehend them and intelligently direct their advancement.  At that point, the program must take over its own maintenance and enhancement, directing its own development, and in the process, potentially diverging in evolutionary terms from anything that human beings might consider useful or relevant to the original purpose.  This is implied in the notion of the Singularity.  At some point, as computational complexity increases, humanity and the thinking machine must part ways.

The second reason why an analytical machine approach is ultimately limited is the limitation of its perspective.  Cognitive processing modelled on human reasoning will inevitably share the failings and limitations of that model.  For instance, a learning machine will naturally suffer from cognitive bias.  This paper by Eliezer Yudkowsky illustrates the way in which one seemingly simple but highly consequential question cannot be answered without the answer containing highly dangerous hidden assumptions.  If even a single proposition cannot be answered in a reliable and failsafe manner by means of analytical reasoning, what of an entire monolithic analytical program fueled by a world of data compounded and conflated to bursting with such assumptions?  Such a program cannot fail to factor back into its reasoning all past conclusions it has ever made on any given subject, influencing all future reasoning on that subject, for better or for worse.

Cognitive bias is unavoidable in analytical reasoning.  The alternative to this failing, if reasoning is to be used, is to recompute everything from first principles every time new information is encountered.  Eventually, of course, the cost of re-computation becomes prohibitive, and no further learning development is possible.  However long it takes, the cumulative effects will see the analytical machine stall at some future point, unable to progress.  And stalling out might be the best of a large set of bad outcomes; there are also very serious issues with flawed super-intelligences that we cannot ignore.

Nature solves both of these problems handily, not by investing all of its evolutionary assets in a single super-intelligent individual, but by diversifying into a multitude of less capable, disposable units, each one examining its own tiny part of the puzzle with its limited reasoning power.  This multitude constantly feeds its conclusions upward into a common pool, which is ever changing.  The sum of all human knowledge has already well exceeded the limitations of any individual brain (about 5 petabytes, or 300 years of television), yet human beings collectively are still able to function perfectly well within their individual spheres and cooperate with others when need be.  The sum of all of this activity has given us the technological civilisation we have today.

This natural model of distributed intelligence has served us very well.  Flawed as it may be, it is highly fault-tolerant, and can scale nearly without limit, provided there are enough natural resources to support the endeavor.  Indeed, IBM’s Watson relies entirely on source material generated by this collective knowledge culture, our knowledge culture, and in that sense, Watson is not so much a “thinking” machine as it is an “interface” to this corpus.

Nature then provides us with a new model of artificial intelligence beyond the limitations of costly super-intelligent programs.  This model is based on the use of massively parallel multi-agent simulations.

It works like this: as simulation technology improves, and our raw computing power grows, we learn how to re-frame our challenges from analytical problems to situational problems, suitable for solution through simulation.  “Genetic algorithms” are the most famous means of doing this; there are others, and many are yet to be developed.
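
As an illustration of the re-framing, here is a minimal genetic algorithm in Python solving the classic “OneMax” toy problem (maximise the number of 1s in a bit string).  The problem, operators and parameters are all chosen for brevity, not taken from any particular library: instead of analysing the problem, we breed a population of candidate solutions and let selection do the work.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and occasional bit-flip mutation over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two: fitter individual survives
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < 0.1:             # bit-flip mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "OneMax": the situational re-framing of "maximise the number of 1s".
best = evolve(fitness=sum)
```

Nothing in `evolve` knows anything about the structure of the problem; only the fitness function does, which is precisely what makes the approach situational rather than analytical.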

The ultimate approach may be to create non-deterministic “worlds” in which the problems we face are solved by the inhabitants of these worlds as a necessity of their survival.  In this way, we “culture” simulations, and then learn from them by indirect observation.  The richer these worlds, and the greater the scope of freedom of the agents within them, the more “general” the system becomes as a problem-solving tool.

We have already learned that we do not have to model the entire universe to make this work, only the relevant parts, and even then, not atom for atom.  We are already very good at simulating the physical world programmatically.  We are capable of programmatically generating an entire galaxy of stars, planets and planetary surfaces using procedural generation.  The computer game Elite: Dangerous, a stunningly rendered space trading and combat simulation, includes the entire Milky Way Galaxy, containing billions of stars and trillions of objects and small bodies, every one of which, when inspected, will reveal incredible surface details.  It is populated with bots that are almost indistinguishable from humans when first encountered, though they lack any real motivation beyond fighting with players.  It would be relatively trivial, for example, to imbue these bots with a simple “profit motive” and “consequences” in the game.  Coupled with the ability of bots to evolve over time, Elite: Dangerous could simulate a complete market system that could actually be studied.
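
The essential trick of procedural generation can be shown in a few lines: derive every detail of an object deterministically from a seed, so that nothing needs to be stored and “inspection” simply re-runs the derivation.  The star classes, orbit formula and field names below are invented for illustration; this is not how Elite: Dangerous actually computes its galaxy.

```python
import random

STAR_CLASSES = ["O", "B", "A", "F", "G", "K", "M"]

def generate_system(galaxy_seed: int, sector: tuple) -> dict:
    """Procedurally generate one star system on demand.  The same
    (seed, sector) pair always re-derives identical details, which is
    how a game can 'contain' billions of stars without storing them."""
    rng = random.Random(f"{galaxy_seed}:{sector[0]}:{sector[1]}")
    system = {"star_class": rng.choice(STAR_CLASSES), "planets": []}
    for i in range(rng.randint(0, 8)):
        system["planets"].append({
            # Toy orbit spacing, loosely geometric, with some jitter.
            "orbit_au": round(0.3 * (1.7 ** i) * rng.uniform(0.8, 1.2), 2),
            "radius_km": rng.randint(2_000, 70_000),
        })
    return system

a = generate_system(42, (10, -3))
b = generate_system(42, (10, -3))
```

Because the generator is a pure function of its seed, the “galaxy” exists only as potential: any system can be materialised, inspected, and discarded at will.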

Passively observing and studying the internal interactions of intelligent agents within a game system like Elite: Dangerous may seem odd now, because we don’t consider these systems to be anything more than mere entertainment.  But seen in a different light, we come to a startling realisation: if Elite: Dangerous were to be made intelligent in this way, with bots having the ability to evolve their own behaviours and optimise themselves based on learned experience, the game would quickly become unplayable.  The bots in the game would optimise themselves and their interactions with one another to the point of dominating every niche and crevice in the inhabited universe, spreading out and colonising the whole of the rest of the game galaxy.  Titanic struggles for domination would no doubt rage back and forth until they eventually reached some sort of equilibrium.  In the face of all of this, it would be impossible for any human player, or even a team of players, to identify even the smallest opportunity in such a system – the bots would see it and occupy it first.  Elite: Dangerous would then cease to be a game.  Instead, it would become a vast source of knowledge about markets, competition and evolution which we could learn from, but not compete with, or even emulate.  The same simulation, “tuned down” from a competitive emphasis on exploitation to a cooperative emphasis on sustainability, could however provide us with valuable solutions to the issues we face today.  And all of this from only one such simulation.

I use the example of an existing game because I would like to make the point that doing this is well within our present capabilities.  As our ability to produce realistic simulations improves, the value of the information derived from them can only increase.  And, as our ability to harness computing power grows, so does our ability to use simulation to explore divergent solution pathways.  As we improve our ability to create genuinely productive simulations, we also learn techniques for interacting with them without interference, and ways of inferring information from them.  Along this journey, we would also learn to rethink our 3,000-year love affair with the analytical method.  We would develop a discipline I call “Simulalogy”: studying our own non-deterministic simulacra to learn how they have solved the problems that confront them, as a means of making new discoveries and inventions that will help us solve ours.

Just as with anthropology, there would be rules governing Simulalogy, to prevent the tainting of our subjects and our observations.  These simulations would have to be as realistic as possible, from the perspective of the agents, to ensure the results obtained were as relevant as possible to our real-world existence.  Agents would be the prime actors in the simulation, with all else being inanimate mathematical constructs and information objects.  There could be no interference in the simulation by its creators in any way – no “smiting” of agents, or bending of rules, no matter how tempted we may be, or how much we may disapprove of them or their conduct.  These simulations would be non-deterministic; that is, they would have certain initial conditions, but no predetermined outcomes or ends.  Their universe would be based ultimately on pure information.  The individual and collective experience of the agents is the record of the experiment.  And as with all good science, there is not just one experiment, one simulation, but many, as many as may be required to explore all of the parameters of the problem, all happening simultaneously, but separately, in their own space/time continuum.  In theory, to be completely thorough, one simulation for every consequential combination of every consequential interaction in the “world” would be the ultimate aim, if we could afford the computing power.  All of this is possible, though not immediately doable, given our current understanding of simulation and computing.  Scott Aaronson has recently described computing models beyond “ordinary” quantum computers which could potentially achieve this.  Quantum computers are said to be able to contain a near infinity of states simultaneously, making them ideal for such simulations.
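
In miniature, that experimental protocol might look like the following sketch: each world shares the same initial conditions, unfolds differently under its own seed, and leaves behind a complete record.  Here a trivial one-dimensional random walk stands in for a whole universe; the field names are invented for illustration.

```python
import random

def run_world(seed: int, steps: int = 100) -> dict:
    """One non-deterministic world: identical initial conditions every
    time, a different unfolding per seed.  The returned record is the
    complete 'experience' of the experiment."""
    rng = random.Random(seed)
    position = 0                        # shared initial condition
    trajectory = [position]
    for _ in range(steps):
        position += rng.choice((-1, 1))  # no predetermined outcome
        trajectory.append(position)
    return {"seed": seed, "final": position, "trajectory": trajectory}

# Many separate experiments, each in its own space/time continuum.
archive = [run_world(seed) for seed in range(1000)]
```

Each world is reproducible from its seed, so the archive of records, not the running simulations, becomes the object of study, observed after the fact and without interference.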

A collective system which could perform all such simulations, and not the single super-intelligence we currently call the “Singularity”, would likely be humanity’s actual Last Invention; all subsequent inventions would be made by our simulacra, while we merely examine them and use the ones we like.  Modern physics has postulated a hypothesis in which the entire universe as we know it is in fact just such a simulation, governed almost certainly by some external control interface, for the purpose, as far as we may determine, of solving all possible problems combinatorially, populating the entire problem space with solutions, without direct resort to analytics, and in advance of the asking.

This possible future is much to be preferred over the creation of a “Singularity” for one extraordinarily good reason: a vast array of parallel world simulations like this is not a threat to humanity on the same scale as the Singularity.  The Intelligent Agents in the simulation cannot escape, and cannot gain control of our world, provided we deny them the ability to communicate directly with us, and deny ourselves the ability to interact.  Unfortunately, the same cannot be said for a Singularity, if it can be created.  Such a super-intelligence, given any kind of executive control whatsoever, even the power of persuasion, could easily end our future in the blink of an eye.  We can put a stop to the threat of the Singularity only by refusing to create it.  There is no such problem with our multi-world, multi-agent AI approach.

There is only one real place for a single “superior” intelligence in this scheme, but it is not as a “thinking” machine.  This is the role of The Experimenter.  The Experimenter is our proxy, and our firewall.  It simply runs all of the simulations, and provides an intelligent human-machine interface to the yottabytes of data and experiential information that reside within them.  The Experimenter can answer any question, simply by drawing on the experiences of trillions of simulated worlds, all operating non-deterministically to explore every possible combination of events, and in the process answering every conceivable question that may arise.  Literally so, because if it does not perform any analytical functions, but merely collates, it is completely free from the problem of analytical rigidity and prejudice.  It will always provide the most direct answer to any question, based on its direct observation of the simulations it is running, or has run.
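
A toy sketch of that collating role: the Experimenter below stores finished simulation records and answers questions by pure lookup, performing no analysis of its own.  The record fields are, of course, invented for illustration.

```python
class Experimenter:
    """Collates, never analyses: answers any question by direct lookup
    over the records of completed simulations."""
    def __init__(self):
        self.records = []        # one dict per finished simulated world

    def ingest(self, record: dict):
        self.records.append(record)

    def ask(self, predicate):
        """Return every recorded world that satisfies the question."""
        return [r for r in self.records if predicate(r)]

exp = Experimenter()
# Hypothetical records from three finished simulations.
exp.ingest({"world": 1, "fusion_invented": True, "year": 812})
exp.ingest({"world": 2, "fusion_invented": False, "year": None})
exp.ingest({"world": 3, "fusion_invented": True, "year": 1490})

answers = exp.ask(lambda r: r["fusion_invented"])
```

Because `ask` only filters what has already been observed, there is no reasoning step in which bias or rigidity could accumulate; the quality of the answers depends entirely on the breadth of the archive.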

Just as has been postulated by the Multiverse Hypothesis, anything that can happen will happen in at least one simulated universe.  With enough simulations of our universe, one could ultimately, if only in theory, simulate every possible and practical event.  And if an event had occurred, it would be possible to interrogate it and learn from it.  Thus, you could (in theory) ask the Experimenter anything.  You could ask, “What would have happened if Hitler had died in infancy?”  The Experimenter would return answers based on thousands of simulated worlds where this scenario occurred.  Ask it how to make a fusion power generator the size of a hot water heater, and it would return a hundred different models in various colors, invented by scientists in a number of different simulations.  Are you curious what happens to time at the center of a black hole in the instant it is formed?  You can ask the Experimenter, and it will tell you that in at least 6000 simulations, the laws governing this event, and the calculations that go with it, have all been independently arrived at by mathematical agents, and here they are.  Do you lie awake at night wondering what will happen to humanity as we approach the Heat Death of our universe?  The Experimenter can tell you – this has happened over a billion times in a billion simulated universes, and if you like, you can see a composite view of what the last human saw in each of them, in 3D living color, and with every sensation they experienced.

With the ability to simulate every possible combination of meaningful interaction in the universe, there is no need any longer to “think through” complex problems, one merely needs to simulate and observe.  This is why I now believe the creation of Super-Intelligences is a dead end.  A single, cogitative thinking machine, even if it could be devised, would swiftly be overtaken by the simplicity and power inherent in the replication of a multitude of computationally simpler general problem-solving simulacra.

If this is true, then this has the makings of a new branch of computer science; Simulalogy, the study and use of massively parallel, non-deterministic multi-agent simulations as a means of solving problems by pre-populating a problem space combinatorially with all possible and relevant outcomes.

Perhaps the easiest way to compare the radical difference between these two approaches to problem solving and artificial intelligence is to consider the problem of chess.  The current approach, the “thinking” approach, uses familiar “expert systems” techniques.  The best chess systems have big game databases and highly complex rule systems.  Each move that an opponent makes is analysed and played forward, much like the way that humans do.  This works, and it is reasonably efficient and compact.  Note that “playing forward” is a form of simulation bent to the service of analytics.

The “simulalogical” approach would be to simulate every game, by means of evolving multiple agents pairing up and competing with each other, playing many billions of games until the entire problem space had been explored to the last consequential end – that is, until simulations are no longer able to add to our repository of experience.  This is not a brute-force analytical approach, which mathematically could lead to infinity, but rather a more conservative mapping of experiential pathways, some of which are quickly terminated by the end of the game, some of which go further, but most of which merge back into repeating patterns which expose a conservation of potential.  Assuming the storage and the computational power to do this, a complete mapping of the problem space would show every consequential (possible and practicable) outcome.  The rules required are only the rules of chess.  The simulacra will discover every brilliant and stupid play that can be played (which is not the same as every possible combination), with no “thinking” required on the part of the program.

Any chess move a chess program of this nature might be challenged with would be addressed almost immediately by a lookup (without analysis) of all of the stored experience showing this precise board pattern where play ended in victory.  Such a chess player would theoretically be impossible to beat.  No human player, or any number of human players in combination, would be able to find a pattern of moves that had not been previously encountered.  With analytical and cognitive systems, by contrast, it will always be theoretically possible for a human or humans working in concert to find the flaws in a program’s logic, requiring a patch, adding to the complexity of the system, and potentially introducing new flaws as a consequence.
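
Chess itself is far too large to map this way today, but the idea can be demonstrated on tic-tac-toe: exhaustively explore the game by simulated self-play, memoise the outcome of every reachable position, and thereafter answer any move by lookup rather than fresh analysis.  A minimal Python sketch, with tic-tac-toe standing in as a toy analogue of the full chess problem:

```python
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def outcome(board, player):
    """Best achievable result for `player` to move: 1 win, 0 draw,
    -1 loss.  Memoisation means each position is explored only once."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if "." not in board:
        return 0
    other = "O" if player == "X" else "X"
    return max(-outcome(board[:i] + player + board[i + 1:], other)
               for i, c in enumerate(board) if c == ".")

book = {}  # (position, player) -> best move: the repository of experience

def best_move(board, player):
    """First call explores and caches; later calls are pure lookup."""
    key = (board, player)
    if key not in book:
        other = "O" if player == "X" else "X"
        moves = [i for i, c in enumerate(board) if c == "."]
        book[key] = max(moves, key=lambda i:
                        -outcome(board[:i] + player + board[i + 1:], other))
    return book[key]
```

Once `book` is filled, answering a move costs a dictionary lookup, and no sequence of human moves can reach a position the exploration has not already recorded – exactly the property claimed above, at toy scale.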

Still, many challenges lie ahead for this hypothesis.  Solving problems using simulalogical techniques may only have theoretical value at our current stage of evolution – it may not be possible to produce such a system now, or even in 100 years.  But the problems which confront us in the attempt are very interesting; these have to do with the representation, compression, conservation and storage of information, and the study of these subjects can have tremendous benefits today.  The search for a simpler, general-purpose, low-order simulated intelligence, versus a high-order, “single-mind” intelligence, will also no doubt be a fruitful avenue of exploration.

This brings me to my final point;  do we live in a simulation?  There is growing evidence that we ourselves may be part of a simulation, but at this point, we have no conclusive evidence one way or another.  If we do live in a simulation, it is an exceptionally good one.  If we cannot objectively distinguish between a physical and a simulated reality from where we sit, does it matter?  While we may never be able to contact our “makers”, if they exist, we may enjoy becoming makers in our own right, by participating in the creation of a general-purpose simulation framework that could be used to solve the many problems that face humanity.  We may never encounter The Experimenter who governs our Universe, nor would we wish to if we could. But we might relish the opportunity of experimenting ourselves, giving “life” to uncountable trillions of simulacra made in our own image, who might one day be able to contemplate the question of our existence, while revealing to us their many wonderful experiences and discoveries. Should we live so long!