Note: this is the first post in a series I am writing about why I am pursuing a career in AI safety research.
Humans evolved in an environment full of conflicting pressures. Every extra cubic centimeter of brain volume increases the difficulty and risk of childbirth, because the human birth canal can’t widen much without compromising mobility and making bipedalism impractical. Every extra molecule of glucose consumed by the brain is one that can’t be put to use somewhere else. Larger, more complicated brains lengthen childhood, prolong dependence, and increase mortality before reproductive age. Larger brains also have more genetic “moving parts” that can be disrupted, creating higher rates of developmental disorders. In general, humans were selected for intelligence only insofar as it had instrumental value for reproduction. Viewed specifically as a mechanism for producing intelligence, the evolution of animals on Earth is not particularly effective.
In fact, our intelligence and agency have let us beat evolution, again and again, at the task of designing toward an objective. Our designs aren’t constrained by opposing evolutionary pressures or by the requirement to be built from DNA-encoded cells, nor are they limited to the characteristics evolution happens to optimize for. The billions of years of selection that produced aerodynamic falcon bodies, which dive faster than any other animal, did not produce the scramjets we designed to hurtle the X-43 at nearly ten times the speed of sound. We designed Saturn V rocket engines that carry a 310,000 pound payload out of Earth’s atmosphere by generating the power of 160 million horses, neutrino detectors that register uncharged, nearly massless particles after they have passed through eight hundred miles of rock, something no biological sensor can do anything with, and transistors that switch states ten million times faster than human neurons fire. Now, in the 21st century, we have started seriously attempting to recreate the characteristic that enabled all of these innovations.
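As a quick sanity check on that last figure, here is the arithmetic; both input numbers are my own rough assumptions rather than figures from the post:

```python
# Back-of-the-envelope check on the transistor/neuron comparison.
# Both figures below are rough assumptions, not measurements:
# ~1 GHz is a modest switching rate for modern transistors, and ~100 Hz
# is near the high end of sustained firing rates for human neurons.
transistor_switches_per_sec = 1e9
neuron_spikes_per_sec = 1e2

ratio = transistor_switches_per_sec / neuron_spikes_per_sec
print(f"transistors switch ~{ratio:,.0f}x faster than neurons fire")
# -> transistors switch ~10,000,000x faster than neurons fire
```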
Imagine a factory that, in cycles, grows enormous numbers of disembodied brains in vats. Each cycle, it presents every brain with an extremely broad, shifting battery of cognitive tasks (idealized IQ tests on steroids, essentially). For the next cycle, the factory keeps only a small fraction of the best-performing brains, spawns many slightly mutated descendants from those survivors, discards the rest, and iterates. This factory applies selection pressure with intelligence as the sole objective, rather than searching for a Pareto frontier of intelligence, energy use, size, developmental risk, and so on under biophysical limits. Because so many conflicting pressures shaped the human brain, it is a suboptimal intelligence compared to an alternative engineered with focus and fewer constraints: after sufficiently many generations, the factory’s brains would drastically outperform human brains at most intelligence-related tasks, unless we assume a priori that humans are the pinnacle of intelligence.
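To make the factory’s loop concrete, here is a minimal sketch of single-objective evolutionary selection in Python. Everything in it is a stand-in: the `score` function is a toy placeholder for the battery of cognitive tasks, and the numeric parameters are arbitrary.

```python
import random

# A "brain" here is just a vector of floats; score() stands in for the
# shifting battery of cognitive tasks (toy stand-in: sum of the genome).
def score(brain):
    return sum(brain)

def mutate(brain, rate=0.02):
    # Spawn a slightly mutated descendant: perturb a few genes at random.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in brain]

def evolve(population_size=500, generations=60, survivor_fraction=0.05,
           genome_length=64):
    population = [[random.gauss(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection with the score as the SOLE objective: no penalty for
        # size, energy use, or developmental risk, unlike natural selection.
        population.sort(key=score, reverse=True)
        survivors = population[:int(population_size * survivor_fraction)]
        # Discard the rest; refill the vats with mutated descendants.
        population = [mutate(random.choice(survivors))
                      for _ in range(population_size)]
    return max(population, key=score)

best = evolve()
print(f"best score after selection: {score(best):.1f}")
```

The crucial design choice is the single scalar objective: natural selection would instead be scoring each brain against a tangle of competing costs.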
That human brains exist and can be created is itself evidence for the feasibility of some kinds of superintelligence. Many human brains working together can outperform a single brain; markets, for example, can make superhuman predictions. Even if no alternative approach pans out, then barring specific defeat conditions it could be possible to simulate an ensemble of human brains collaborating faster than real time, and the result would do every intellectual task a human can, only quicker and better. That would be one example of a superintelligence.
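One way to see why aggregation helps: under the toy assumption that each brain produces an unbiased but independently noisy estimate of some quantity, averaging many estimates drives the typical error down roughly like one over the square root of the ensemble size. A minimal simulation, with all numbers my own made-up assumptions:

```python
import random
import statistics

# Toy model of "many brains outperform one": each brain reports the true
# value plus independent Gaussian noise; averaging N such estimates
# shrinks the typical error roughly like 1/sqrt(N).
TRUE_VALUE = 42.0
NOISE_STD = 10.0
TRIALS = 2000

def estimate():
    return TRUE_VALUE + random.gauss(0, NOISE_STD)

for n_brains in (1, 10, 100):
    errors = [abs(statistics.mean(estimate() for _ in range(n_brains))
                  - TRUE_VALUE)
              for _ in range(TRIALS)]
    print(f"{n_brains:>3} brains: mean absolute error ~ "
          f"{statistics.mean(errors):.2f}")
# Expect errors near 8.0, 2.5, 0.8: the ensemble beats any single estimator.
```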
Not only are smarter-than-human systems plausible, but we also have no strong evidence that creating non-biological implementations of intelligence is a particularly intractable problem. We do have evidence that we have already built systems with many aspects of biological intelligence, and that these may lead to superintelligence; that is the subject of a few upcoming posts.