Some misconceptions about the Intelligence Explosion
The intelligence explosion (IE) is the idea that once we can create a mind smarter than the human baseline, that mind will be tasked with designing an even more intelligent successor. Each generation will use its greater abilities to build a more capable successor in less time, producing runaway progress that culminates in a superintelligence far beyond human abilities. Presumably, that mind will be able to manipulate the physical world, transforming it beyond human understanding and ushering in the Singularity.
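To make the dynamic concrete, here is a minimal toy model in Python. It assumes, purely for illustration, that each generation is a fixed factor GAIN more capable than its predecessor and that design time shrinks in proportion to capability; none of the numbers mean anything beyond that.

    # Toy model of recursive self-improvement. GAIN is a hypothetical
    # capability multiplier per generation; all units are arbitrary.
    GAIN = 1.5
    capability = 1.0    # human baseline
    design_time = 10.0  # years for generation 0 to design generation 1
    elapsed = 0.0

    for generation in range(1, 21):
        elapsed += design_time
        capability *= GAIN
        design_time /= GAIN  # a smarter designer finishes the next step sooner
        print(f"gen {generation:2d}: capability {capability:8.1f}, "
              f"elapsed {elapsed:5.2f} years")

    # The design times form a geometric series converging to
    # 10 * GAIN / (GAIN - 1) = 30 years, while capability grows without
    # bound: unbounded intelligence within finite wall-clock time.

Under these toy assumptions the process does not merely accelerate; the total time for all generations is bounded while capability is not, which is the sense in which the growth is an "explosion".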
The first fallacy is that Moore's Law either leads to or is required for IE. Because Moore's Law is slowing down, some treat this as a refutation of IE. In fact, modern computers already far exceed human minds in raw serial speed and memory capacity; they do not necessarily need to get faster or smaller. Machine intelligence calls for new hardware architectures and algorithms, not necessarily smaller transistors or quantum computers.
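The speed claim is easy to sanity-check with rough, commonly cited orders of magnitude (both figures below are approximations, not measurements):

    # Serial-speed comparison, order-of-magnitude only.
    neuron_max_rate = 1e3  # Hz; neurons rarely sustain firing above ~1 kHz
    cpu_clock = 4e9        # Hz; a typical modern CPU core
    print(f"serial speed ratio: ~{cpu_clock / neuron_max_rate:.0e}x")  # ~4e+06x

A millions-fold gap in serial speed is why the bottleneck looks architectural and algorithmic rather than a matter of shrinking transistors further.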
The second fallacy is that IE requires an AGI -- a humanlike, sentient intelligence. All that is needed is a recursive algorithm capable of building a superior successor. It could be a very narrow intelligence that only knows how to write algorithms that optimize certain classes of other algorithms. For example, it might optimize a neural network that produces WW2 documentaries; after a few generations, it would produce documentaries far superior to anything a human team could make. An organization that can produce superior content for any subject or medium could quickly take over the world.
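As a sketch of how little generality the loop strictly requires, here is a toy narrow optimizer. The benchmark, the hill climber, and the self-tuning step are all invented for illustration; the only point is that the same improvement procedure can be aimed at its own machinery.

    import random

    def benchmark(solution):
        # Stand-in for "output quality": higher is better, peak at all zeros.
        return -sum(x * x for x in solution)

    def improve(solution, step, rounds=200):
        # Plain stochastic hill climbing with a fixed mutation step size.
        best = list(solution)
        for _ in range(rounds):
            candidate = [x + random.uniform(-step, step) for x in best]
            if benchmark(candidate) > benchmark(best):
                best = candidate
        return best

    def improve_self(step):
        # The recursive twist: score a step size by how well the optimizer
        # performs with it, then search for a better step size the same way.
        def optimizer_score(s):
            return benchmark(improve([5.0] * 4, s, rounds=50))
        best = step
        for _ in range(20):
            candidate = max(step * random.uniform(0.3, 3.0), 1e-6)
            if optimizer_score(candidate) > optimizer_score(best):
                best = candidate
        return best

    random.seed(0)
    solution, step = [5.0] * 4, 1.0
    for gen in range(3):
        step = improve_self(step)           # the optimizer tunes itself...
        solution = improve(solution, step)  # ...then improves its product
        print(f"gen {gen}: step={step:.3f}, quality={benchmark(solution):.4f}")

Nothing here understands documentaries or anything else; it is a narrow search procedure pointed partly at itself, which is all the argument requires.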
The third fallacy is that IE requires an AI at all. The superior intelligence could be an augmented human mind, an uploaded human mind, or an AI. There may be other ways to augment intelligence as well, such as directly linked minds or some scalable super-productivity tool. Once the process gets going, it will tend toward whatever medium is most efficient for further optimization.
The fourth fallacy is that there are fundamental physical limits on intelligence, and the human brain has evolved close to these limits. There are various justifications for this, such as the need to cool a massively parallel computer, or purported quantum effects exploited by neurons. While we can't categorically dismiss this view, I have three objections:
First, the basic computing properties of transistors seem far superior to those of neurons: they switch vastly faster, support far higher energy use, and can scale independently of the size and energy constraints of living beings (a back-of-envelope sketch follows these three objections). References to "quantum effects" are ignorant handwaving.
Second, we already have evolved organic brains as a starting point: we can copy biological designs and then improve on them.
Third, evolved intelligences may be highly optimized, but they are stuck at local maxima. That is, evolution can make incremental changes, but not leaps in design, which is why, for example, wheels are very rare in nature, as are animals that can use both photosynthesis and respiration.
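On the first objection, a back-of-envelope Landauer calculation suggests how much thermodynamic headroom remains. The mapping from synaptic events to bit operations is loose and every figure is a rough estimate; the orders of magnitude are the point.

    import math

    # Landauer bound: erasing one bit costs at least k*T*ln(2) joules.
    k_B = 1.380649e-23                # Boltzmann constant, J/K
    T = 310.0                         # body temperature, K
    landauer = k_B * T * math.log(2)  # ~3e-21 J per bit erased

    brain_power = 20.0  # W, a common estimate for the human brain
    max_bit_ops = brain_power / landauer
    print(f"thermodynamic ceiling: ~{max_bit_ops:.1e} bit-ops/s")  # ~6.7e21

    # Even a generous ~1e16 synaptic events/s leaves five to six orders
    # of magnitude of headroom below this ceiling.
    print(f"headroom: ~{max_bit_ops / 1e16:.0e}x")

If a 20 W biological budget sits this far below the thermodynamic ceiling, whatever limits the brain is not the fundamental energy cost of computation itself.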
To conclude, major open questions remain, such as whether IE requires an AGI or merely a narrow AI, but it is premature to rule the intelligence explosion out.