A new interface is coming to everything.
Welcome to People vs Algorithms #76.
I look for patterns in media, business and culture. My POV is informed by 30 years of leadership in media and advertising businesses.
Sometimes it’s nice to read in the browser.
Listen to the PvA podcast. This Thursday we dig into interface. Find it here.
You can tell we’ve reached peak phone when the latest iPhone release is distinguished by the kind of metal it’s wrapped in… the Titanium iPhone 15 is our most boring phone yet.
Fear not, a new era is fast approaching, one that will dramatically challenge existing notions of how we interact with the technology in our hands — in particular, the iOS / touch / app / search / webpage paradigm that has defined mobile computing for more than 15 years.
Two events last week helped me see the contours of the change — a flurry of announcements at OpenAI’s Dev Day (see keynote here), and the launch of a screenless lapel computer from Humane called the “AI Pin.” Let’s put them in context.
Rethinking inputs and outputs
Simplistically, human-machine interactions are just a connected cycle of inputs and outputs. We put information into a device, and a series of applications and services returns something back to us.
Inputs come from a keyboard and mouse, a touch screen, voice or camera: these are interface points. A bunch of cloud-based systems crunch away at your input and dutifully deliver the reward. An airline booking query checks inventory systems and responds with a list of flights and prices. A Google search query scans an index of the internet and returns a list of relevant URLs. A headline triggers a web server that pulls down an article from the New York Times. These interface interactions happen billions of times a day. Requests cascade into a vast interconnected web of inputs, applications, algorithms, databases and outputs that collectively form our digital cosmos.
Of course, all of this used to unfold with the old interfaces — people, paper and pens and typewriters, carbon paper and filing cabinets. Slowly old routines were replaced with mainframes and dumb terminals, then servers, evolving into vast data centers connected to desktop and portable computers.
Today, most of what we do is mediated through a cellular connection to ubiquitous mobile screens, guided by fingers. Fingers manipulating screens. Screens containing interfaces.
We’re surrounded by them. Interfaces are there to help us navigate routine and complex tasks. They haven’t changed in a while. They are about to.
The 30 year path to a screen and your fingers
The evolution to our present “screen and finger age” began in 1993 with the introduction of the Apple Newton. I had one. It was a clumsy low-utility device powered by a flawed pen-based input mechanism. This was before cellular and the internet and GPS and maps made mobile devices useful.
The Palm Pilot further popularized the idea of mobility in 1996, seducing a larger group of tech-hungry devotees excited by the promise of unshackling calendars, to-do lists and note taking from the desktop. We embraced its promise and light, gray, hand-sized form factor. We hated how you had to learn Graffiti, Palm’s proprietary writing-recognition system. We longed for something more powerful.
BlackBerry came along in 1999 with the first truly addictive mobile computing unlock, the ability to communicate with text wirelessly. Its screen was an afterthought, the keyboard was what mattered. BlackBerry’s Canadian maker, Research In Motion, had nailed the use case, even if they sucked at interface design. We loved it so much we renamed it “Crackberry.”
A series of hardware / software innovations followed: the iPod, the Palm Treo, Windows Mobile, dozens of iterations from Nokia and Motorola. All showed how, when innovating greenfield use cases, especially with hardware products, many must die so few can live. All paved the way for the iPhone’s glorious 2007 debut, a device that introduced an interface model that has sustained us ever since.
A bunch of enabling technologies, like cellular communication, GPS, powerful microprocessors, batteries and touch screens, emerged to make this new world possible. All told, our current paradigm was 30 years in the making.
The interface remained consistent, more or less.
The routines are familiar and ubiquitous. Touch screens, typing, searching, reading and specific app-guided functionality like ordering an Uber. Structured, dumb input fields triggering useful, if impersonal, outputs.
AI is changing all of that. Mostly because, as my designer podcast friend Alex (Threads: @alexoid) astutely put it, “An interface layer that understands humans changes everything.”
Think about what that means. Something smart can now sit between what you want and what a whole bunch of other AI enabled smart processes can do for you. It’s the difference between typing a request into a form or asking a helpful human for something.
AI will be a new disruptive ecosystem enabler, more so than preceding technologies. Last week we started to see pieces of a new paradigm come together.
The new intelligent interface
What if much of that structured input could be replaced by a smart assistant? Instead of filling out a detailed form to book a flight, we could simply ask the computer to find and book the best flight option that fits our schedule and preferences. A helper that knows everything about you. Our interfaces would change accordingly:
Inputs shift to conversational requests via a text box or voice assistant: “I am thinking of going to San Francisco next week. Find a good day to go and a flight time that works with my schedule.”;
Mobile apps lose relevance because we no longer need them to structure inputs to tasks. Why open the Uber app when you can just ask the AI to request a car? Apps are replaced with AI-assisted queries;
Discrete things can become chains of things that combine to accomplish a task. AI agents will be designed to stitch together all manner of services;
Why spend time bouncing between media sites or apps when you can ask the machine for a personalized summary? There’s been a lot of talk about how media evolves in this scenario. Obviously video does better than text. Broadly, media becomes more personal, intimate or otherwise differentiated, or it’s just grist for the AI summarizer;
Naturally, Google’s role as connective tissue and toll booth to a set of pages on the open web is challenged in unpredictable ways;
Images can be deconstructed as inputs on their own. A photo of the inside of a fridge becomes a grocery order, a picture of a completed dish, a recipe query;
Outputs become infinitely more personal. The notion of a page-based response, mediated on a browser through Google will begin to feel outdated;
Platform owners, especially those anchored by a physical device, enjoy dominant power in these new scenarios. If the starting point is just a text box or voice request, the company that controls this entry point enjoys mind-boggling power over discovery and its economics;
The chain of events from your device to an endpoint steadily gets refactored, with humans playing smaller and smaller roles. Customer service is the obvious first domino to fall. But all computer / human processes are rationalized;
We are able, though the change will be slow, to move further away from the screen as the primary input device. Entire new categories of devices will emerge, but as the previous mobile era demonstrated, most will become roadkill on the road to a dominant new hardware / ecosystem steady state.
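The shift from structured forms to conversational requests can be sketched in a few lines of code. This is a toy illustration, not a real implementation: the “intent parsing” is hard-coded keyword matching standing in for an LLM, and the flight service, airport codes and preference fields are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical structured "form" a booking site asks us to fill in today.
@dataclass
class FlightForm:
    origin: str
    destination: str
    date: str

def search_flights(form: FlightForm) -> list[str]:
    # Stub standing in for an airline inventory system.
    return [f"{form.origin}->{form.destination} on {form.date}: $199"]

def assistant(request: str, preferences: dict) -> list[str]:
    """Toy 'intelligent interface': turn a loose request into a structured call.

    A real assistant would use an LLM to extract intent and consult a
    profile of the user; here hard-coded parsing shows the shape of it.
    """
    destination = "SFO" if "San Francisco" in request else "UNKNOWN"
    form = FlightForm(
        origin=preferences["home_airport"],
        destination=destination,
        date=preferences["free_day"],
    )
    return search_flights(form)

# The user speaks in plain language; the assistant fills in the form.
results = assistant(
    "I am thinking of going to San Francisco next week.",
    preferences={"home_airport": "JFK", "free_day": "2023-11-20"},
)
print(results[0])  # prints "JFK->SFO on 2023-11-20: $199"
```

The point of the sketch is who does the structuring: the form object still exists downstream, but the user never sees it.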
The death of skeuomorphism
Set in this light, the OpenAI and Humane announcements last week can be heard as starter pistols to a new era of human-machine interaction.
OpenAI now allows anyone to effortlessly roll their own GPT. What does this really mean? Chat interfaces slowly replace websites and apps. You-GPT is a smarter, more personalized version of the web page, one that connects your knowledge and applications to OpenAI’s infinite AI brain and then to any downstream API. Now you can create useful things with natural language. Brains meet action.
This is not some empty prognostication. The OpenAI keynote shows how easy this will be to do in the near term. This video shows what people are already doing.
Say you are a local real estate agent. Your collection of unique market knowledge can be uploaded as the core of your new chat-powered AI offering. Prospective customers can now query price trends in your area, ask what to look for in a starter home or which areas are best for kids. Real Estate-Agent GPT will connect to the listing database API and surface the latest listings that fit their needs. It will automatically book house tours and notify them when new listings come to market. We will still have real agents, just far fewer of them.
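The Real Estate-Agent GPT flow above reduces to a simple pattern: route a question to the uploaded knowledge when it matches, otherwise call a downstream service. A minimal sketch, where the knowledge base, the listings “API” and the keyword routing are all invented placeholders for an LLM plus real services:

```python
# Hypothetical uploaded market knowledge (the agent's expertise).
KNOWLEDGE_BASE = {
    "price trends": "Median prices in Maplewood rose 4% year over year.",
    "starter home": "Look for good bones, a dry basement and a short commute.",
}

# Stub standing in for a live listing database API.
LISTINGS_API = [
    {"address": "12 Elm St", "beds": 3, "price": 450_000},
    {"address": "98 Oak Ave", "beds": 2, "price": 380_000},
]

def agent_gpt(query: str) -> str:
    """Toy router: answer from knowledge when possible, else hit the 'API'."""
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in query.lower():
            return answer
    # No knowledge match: fall through to the downstream listings service.
    matches = [l for l in LISTINGS_API if l["beds"] >= 3]
    return "; ".join(f"{l['address']} (${l['price']:,})" for l in matches)

print(agent_gpt("What are the price trends here?"))
print(agent_gpt("Show me three-bedroom listings"))
```

A real custom GPT would make both hops with a model in the loop, but the division of labor — your knowledge in front, other people’s APIs behind — is the same.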
And Humane’s AI Pin, however imperfect, gives us some sense of how hardware evolves with AI in the mix. The ambitious new product is an OpenAI-powered 2” x 2”, $699 thingy without a screen that magnetically fastens to your lapel. Among its promises: intelligent voice-prompted access to all of your communications (ok, maybe cool, but probably better in concept), hand gesture interaction (touch your fingers together to navigate, this will be a thing), a neato projector to cast images onto your hand (seems gimmicky, but if you’re taking away the screen you might need it), the ability to recognize objects in your hand (like how many calories in this handful of almonds… marginal utility), an ability to translate conversations in real time (useful), record videos (a new GoPro) and play music via its tiny speaker and a Tidal partnership (sounds unsatisfying).
This will be a bumpy road to innovate along, as it has been for any technology that gets too close to the body and demands visible new behaviors. Not to mention our never-ending devotion to our phones and the ecosystems that power them. Google Glass was an imperfect technology on your face and spawned the justified moniker “glassholes” for those who eagerly adopted it. AI Pin enthusiasts will quickly be cast as “Pinheads,” I suspect.
None of this matters. We are in a new phase of use case exploration that will eventually consolidate into a very different way of interacting with technology. Again, it took 30 years from the launch of the Apple Newton to get to peak iPhone. The next wave will come faster and with far more downstream consequences.
The digital ecosystem created through a combination of powerful personal devices, a Google-powered web, Meta-owned connections to your friends and Amazon as a shopping destination was disruptive to everyone who enjoyed a protected distribution position in the physical world. The new tech Goliaths concentrated huge power in the process, but media was still media, advertising still advertising, shopping still shopping.
We’ve never had a helper on the front end doing all the work for us.
When AI elbows its way into every part of a value chain, everything changes. It starts with a new interface. Our skeuomorphic understanding of discrete brands and the utility associated with them (picture your phone desktop) morphs into a conversational back and forth. Media, especially text-based media, disintegrates in this AI vortex. Advertising struggles to find a comfortable place to exist.
But that’s just the beginning. In the AI mediated world the process of finding and sorting and evaluating and comparing is displaced by the smart computer. Service is automated. Markets become more efficient. Brands struggle to reinforce their uniqueness.
And when the interface needs change, the hardware that carries it naturally evolves. AI Pin is the first take on what it might look like. This time, the change will come more quickly because the software and servers, chips and APIs already exist. Last time we had to build all of the enablers.
Interface seems like the benign domain of a UX team. In reality, it's the best way to understand the new world and how it’s about to change fundamentally... / Troy
"Telephone Line" is a song by English rock band Electric Light Orchestra (ELO). It was released in May 1977 through Jet Records and United Artists Records as part of the album A New World Record. It was very successful, reaching the Top 10 in Australia, US, and UK, and number 1 in Canada. The song appears in the 1995 Adam Sandler film Billy Madison.
Thanks for reading People vs Algorithms! Subscribe for free to receive new posts and support my work.