From Punch Cards to AI Agents: The Evolution of Computing Interfaces
The Future Is AI Personal Agents
Let’s begin with a recap of the history of computing interfaces:
I predict that, starting in 2024, AI agents will begin to replace screens and keyboards with voice and video interfaces. These aren't today's Alexa or Siri - they're a new interaction paradigm.
Agentic Interaction: The New Paradigm
Consider how high-level executives or world leaders interact with technology. They rarely touch devices directly. Instead, they have human assistants who aggregate information, present narratives, and execute orders. Future AI agents will function similarly, serving as our primary interface to the digitally mediated world. When executives do consume information on screens, it's usually to read narratives prepared by others, such as status reports, recommendations, or books.
This may seem inefficient if you're used to visual interfaces. Why explain that you want to buy a book when you can browse a list of books and pick the one you want? But agents aren't an interface - they're an interaction paradigm. Let's compare three approaches to finding a book:
Physical: Browse a bookstore, scanning covers.
Online: View a webpage, scanning digital covers.
Agentic: Discuss recommendations with a personal AI librarian who knows your entire reading history.
Future AI agents will be much like executive assistants. Rather than a device or form factor, they will be an ever-present personality that provides our primary interface to the non-immediate (i.e., digitally mediated) world. The agent can still render visual (and eventually physical) experiences tailored to your preferences, but it handles all transactions and interactions.
What if you still just want to look at book covers and decide for yourself? Well, you can, but the agent will still render a catalog experience tailored to your preferences. When you choose a book to read, you will indicate it verbally or physically (picking it up in VR or real space) and the agent will handle the transaction.
Beyond Consumption: Agents for Creation
Agents will excel at content creation too. Here's a recent example:
I asked Claude about Intel's performance vs. AMD and Nvidia.
Claude presented historical share prices, market caps, and revenue.
Claude created custom bar charts visualizing this data.
Claude summarized my insights for social media posting.
This process incorporates:
Information collection
On-the-fly visual creation
Output transformation and distribution
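The three stages above can be sketched as a simple pipeline. Everything here is hypothetical illustration - the function names, the sample revenue figures, and the text bar chart that stands in for the custom visuals an agent would actually render:

```python
# Hypothetical sketch of the three-stage agent workflow:
# collect -> visualize -> distribute. Names and numbers are illustrative only.

def collect(tickers):
    # Stage 1: information collection (stubbed with made-up sample data;
    # a real agent would query live financial sources).
    sample = {
        "INTC": {"revenue_b": 54.2},
        "AMD": {"revenue_b": 22.7},
        "NVDA": {"revenue_b": 60.9},
    }
    return {t: sample[t] for t in tickers}

def visualize(data):
    # Stage 2: on-the-fly visual creation (a text bar chart stands in
    # for the rendered charts an agent would produce).
    scale = max(v["revenue_b"] for v in data.values())
    return {t: "#" * round(10 * v["revenue_b"] / scale) for t, v in data.items()}

def distribute(data, chart):
    # Stage 3: output transformation into a social-media-ready summary.
    lines = [f"{t}: {chart[t]} (${v['revenue_b']}B revenue)"
             for t, v in data.items()]
    return "\n".join(lines)

data = collect(["INTC", "AMD", "NVDA"])
post = distribute(data, visualize(data))
print(post)
```

The point of the sketch is the shape of the pipeline, not the stubs: each stage hands a structured result to the next, which is exactly the orchestration an agent performs on your behalf.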
The Agentic Future
Future AI agents will become our primary interaction paradigm for content creation and consumption. Desktops, mice, keyboards, and monitors will fade away. There will still be ever-present displays, but the hours spent on low-level data processing (writing and reading emails, status reports, code, customer service replies) will disappear, in the same way that word processors eliminated the need for typists. Screens will become occasional, as-needed experiences rather than dedicated terminals that we sit at for 8+ (16+ in my case) hours a day.
I am not proposing that our interactions with agents will be solely via voice. The new paradigm is the ever-present personal assistant, not a voice agent. The agent will decide whether voice, a screen, or another medium (a robot, etc.) is appropriate for each engagement. Sometimes the agent will present a menu to tap on -- the point is that the menu will be generated dynamically by the agent, not served by a static app.
Key Changes:
The cooperative aspects of productivity will become more important than deep technical expertise (for everyone other than innovators at the edge of knowledge).
Personalized, context-aware personal assistants operating at a high level of abstraction
Seamless integration of information gathering, analysis, and creation
Reduction of cognitive load in navigating digital interfaces