The Market As A Superintelligence + 3 Reasons To Believe AGI Is Possible
Markets as a form of super-human intelligence
The Market Economy As A Superintelligence
The market economy (i.e. capitalism) is already a form of superintelligence:
Superintelligence denotes an entity whose intelligence far surpasses that of the brightest and most gifted human minds. This description fits the market because it harnesses and aggregates the knowledge and abilities of all its participants, allowing a system more intelligent than any single human or group to emerge.
F.A. Hayek's concept of spontaneous order is pivotal here. This refers to the emergence of order out of seeming chaos, where individuals pursuing their own interests within certain rules or constraints result in an organized system. The market economy is a prime example of this phenomenon. Each participant in the economy, by pursuing their own interests, contributes to the overall intelligence and efficiency of the market.
Moreover, just like a superintelligence, the market processes vast amounts of information. Every transaction, every change in supply and demand, and every innovation is information that is processed and integrated into the market system. This allows the market to adapt to changes in circumstances, technologies, and preferences much more efficiently than any centrally planned system could.
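The information-processing claim above can be made concrete with a toy model. The sketch below simulates a single price adjusting in proportion to excess demand, so that dispersed buyer and seller behavior is aggregated into one number with no central planner. The demand and supply functions and all the constants are illustrative assumptions, not data from any real market.

```python
# Toy price-adjustment sketch: one price aggregates dispersed information.
# The demand and supply curves below are invented for illustration.

def demand(price):
    return max(0.0, 100.0 - 2.0 * price)  # buyers want less at higher prices

def supply(price):
    return 10.0 * price                    # sellers offer more at higher prices

def find_clearing_price(price=1.0, step=0.01, iters=10_000):
    """Adjust the price in proportion to excess demand until the market clears."""
    for _ in range(iters):
        excess = demand(price) - supply(price)
        if abs(excess) < 1e-9:
            break
        price += step * excess  # excess demand raises the price; a surplus lowers it
    return price

p = find_clearing_price()
print(round(p, 4))  # → 8.3333, the price where demand equals supply
```

No participant in this loop knows the whole demand or supply curve; the clearing price emerges from repeated local adjustments, which is the sense in which the market "computes" an answer.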
Of course, being superintelligent does not make the market infallible. As an information-processing system, however, it is superior to anything a single human or centrally planned alternative could achieve.
3 Reasons To Believe AGI Is Possible
What might serve as evidence that Artificial General Intelligence (AGI), or human-level AI, is within reach, short of its actual realization? Is this worth pondering? Given AGI's immense value, if it is feasible, it seems bound to occur. I believe contemplating this is vital, because the advent of AGI paves the way for superintelligent AI, which presents existential challenges for humanity.
Three compelling arguments bolster the case that AGI is possible:
Firstly, the successful recreation of biological behavior in artificial neural networks suggests a promising pathway toward AGI. Take the roundworm Caenorhabditis elegans as an example. It has a simple nervous system of just 302 neurons, a complete map of which we have at our disposal. A 2023 study demonstrated that we could effectively model the worm's behavior, such as egg-laying and carbon dioxide avoidance, using the Gene Ontology Causal Activity Modelling (GO-CAM) framework. This suggests that if we can replicate simple biological behaviors with a simple neural network, we might be able to scale up to more complex ones. If a connectome is sufficient to recreate animal behavior, we may not need to model minds at the molecular, atomic, or subatomic level.
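To make the connectome idea tangible, here is a deliberately tiny sketch of simulating behavior directly from fixed wiring. This is illustrative only; it is not the GO-CAM framework, and the four-neuron circuit is invented (the real C. elegans connectome has 302 neurons). A hypothetical CO2-sensing neuron excites an interneuron, which drives an "avoidance" motor neuron and inhibits a "forward motion" motor neuron.

```python
import numpy as np

# Hypothetical 4-neuron connectome (invented for illustration):
# sensor -> interneuron -> motor neurons.
W = np.array([
    [0.0,  0.0, 0.0, 0.0],   # 0: sensory neuron (CO2 detector, hypothetical)
    [1.0,  0.0, 0.0, 0.0],   # 1: interneuron excited by the sensor
    [0.0,  1.0, 0.0, 0.0],   # 2: motor neuron driving avoidance
    [0.0, -1.0, 0.0, 0.0],   # 3: motor neuron for forward motion, inhibited
])

def step(state, stimulus):
    """One update: a neuron fires if its weighted input crosses a threshold."""
    inputs = W @ state
    inputs[0] = stimulus  # the external CO2 signal drives the sensor
    return (inputs > 0.5).astype(float)

state = np.zeros(4)
for _ in range(3):              # let activity propagate along the chain
    state = step(state, stimulus=1.0)
print(state)  # → [1. 1. 1. 0.]: avoidance active, forward motion suppressed
```

The point is not biological realism but that fixed wiring plus simple update rules already yields stimulus-appropriate behavior, which is the intuition behind scaling connectome-level models up.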
Secondly, AI algorithms have proven capable of simulating aspects of human-level cognition. GPT-like language models, for instance, can mimic key elements of human cognition such as memory recall, inference, learning from context, problem-solving, and understanding language semantics. We don’t know how far this paradigm will scale, but it appears that aspects of cognition that we previously considered uniquely human require no “special” biological process.
Lastly, the theoretical feasibility of creating artificial biological intelligence provides another avenue towards AGI. While humanity has not earnestly pursued this path due to the superior efficiency of electronic systems, there's no logical barrier preventing us from creating Blade Runner-style synthetic intelligences.
In conclusion, while significant hurdles clearly remain, the evidence leans toward the possibility of achieving AGI. This underscores the importance of such discussions, since the advent of AGI, and subsequently of superintelligent AI, could carry existential risks for humanity.