Timeline of the future of language models like ChatGPT 🤖
2023:
GPT-4 and competing large language models (LLMs) are integrated into most search engines 🔎.
Wave of startup launches with promises to integrate LLMs into every industry 🚀.
LLMs integrate real-time data sources and logic engines (e.g., WolframAlpha) 🧠.
2025:
LLMs are used in a rapidly increasing share of white-collar fields 💼.
Voice increasingly replaces text interfaces 🗣️.
Almost all LLM use is with a specialized agent: marketer, therapist, teacher, doctor, mechanic, etc. 📚💉🔧
Most human communications are mediated by computers and altered by augmented reality. For example, we can specify appearance, tone, and eye contact in calls 📞🔮.
LLMs become capable of incorporating new information ("memories"), so they don't need to be versioned from scratch 🧪.
2027:
Most consumer interaction is partially or fully with LLMs: medical care, counseling, education, restaurant ordering, etc. Access to humans is limited to high-end venues 🤖🏥🏫🍽️.
Most white-collar workers and an increasing number of blue-collar workers use a productivity tool that integrates an LLM 🛠️.
The vast majority of production for TV, film, and other arts uses LLMs to enhance details 🎬🎨.
An LLM passes the Turing test with 50% of untrained people 🏆.
Film and television can place the viewer in scenes with VR, using LLMs to generate detail as needed. Many video games use LLMs to enhance detail and interactivity 🎥🎮.
Nation states consider LLMs to be a strategic resource and sponsor local LLMs to control the narrative 🌐.
Experiments with robots use LLMs to navigate the world in a more compelling manner: for example, an LLM-powered AI dog behaves in a very lifelike way 🐶.
2030:
Most white-collar workers are completely dependent on an LLM for productivity. For example, most programmers use English directions to create software and cannot compete by writing machine code 💻.
An LLM passes the Turing test with 70% of untrained people 🎖️.
LLMs generate real-time experiences in response to user actions. While some prefer old-format "static" media, most films and video games place the viewer in the experience. The plot is pre-scripted, but participants have the ability to interact with the world and modify the plot, to the extent the author allows 🌟.
Much of recreational activity is within virtual worlds. Adapters enable VR sports, conferences, and sex (to a degree). Many of these worlds use dynamically generated LLM-powered content to enhance realism 🏞️.
VR is used to hide currency inflation by directing many experiences to cheaper virtual worlds 💸.
VR addiction is a major social problem 🚨.
2035:
AIs can synthesize LLMs and logical reasoning. They are not close to human level, but good enough to automate most jobs. A wave of deflation follows 💹.
Human labor needs are dramatically reduced. Most workers focus on higher-order tasks: both white-collar and blue-collar workers are mostly supervising AIs 👷‍♀️👨‍💼.
AI robot agents are everywhere and automate many fields. Plumbers, appliance repairmen, etc. are replaced by robots that install and replace modular systems 🤖🔧.
Most people have replaced computers and smartphones with augmented reality systems 🕶️.
2040:
Under human supervision, AIs make rapid scientific, engineering, and medical progress in many fields 🧬🔬🚀.
Virtually all construction is done by robots creating modular systems in factories 🏗️🤖.
AI is starting to be used for functional human brain mapping with the goal of uploading minds 🧠💾.
AIs start to resemble the human brain, with regions dedicated to input, language, logical reasoning, short- and long-term memory, and limb control 🌐.
AIs pass the Turing test with a 95% success rate among untrained people 🏅.
Some people are starting to use direct brain-computer interfaces 🧠💻.