Can Media Survive the AI Onslaught
The coming AI battle will reshape the internet and everything else. What will it leave for media?
Welcome to People vs Algorithms #63.
I look for patterns in media, business and culture. My POV is informed by 30 years of leadership in media and advertising businesses.
Sometimes it’s nice to read in the browser.
Note: Monday? You never send Monday. Well, I came down with something… which may explain why the note meanders a bit more than usual!
I will be traveling next week… T
There's an epic battle shaping up concerning the future of AI in society. Closely connected are the fortunes of media — how it sustains value in a world overwhelmed by AI information flows. I wanted to explore media's potential role in future scenarios a bit more, building on previous posts (on chat, and plug-ins), as the future seems to be arriving more quickly than I would have expected.
As I see it, in this future, the role media brands play moves beyond content to 1) marks of trust and informational integrity; 2) purveyors of proprietary data sets; 3) connective tissue for communities with shared interests and, as such, a bridge between digital and offline worlds. The traditional way we think about media as content creator is going to change fundamentally. Some context…
Battle lines
Just six months ago, ChatGPT was dismissed by skeptics as fancy "autocomplete," a statistical sleight of hand that cleverly, if sometimes inaccurately, processed the giant corpus of public online data into coherent natural language responses.
It quickly became clear to anyone who played with the tools that generative AI was becoming deceptively powerful, opening a revolutionary new chapter in the history of technology. Creative tools from newcomers like Midjourney and Runway.ai demonstrated how the same tech will soon underpin all creative activities, with jaw-dropping results. Incumbents Adobe (Firefly), Microsoft (Bing, Copilot) and Google (Bard) are scrambling to show how AI integrations will fundamentally change the promise of their product suites. If writers and creatives feel threatened, it's the programming class whose world is changing most rapidly. The structured nature of code as a work product has proven incredibly well suited to AI augmentation, promising 10x developer productivity gains. The notion that computers can program themselves is not far off.
Two weeks ago, the introduction of OpenAI's GPT-4 added momentum to the tidal wave of AI hype and, with it, ignited a lively debate around the potential dangers of the technology.
The Future of Life Institute, a non-profit focused on AI safety issues, called for a six-month pause on training systems more powerful than GPT-4, fearing "loss of control of our civilization." Essays in the New York Times, the Atlantic and Time highlighted the dangers with a heavy dose of hyperbole. AI researcher Eliezer Yudkowsky, writing in Time, issued the most dire of warnings: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." In the NYT essay, Yuval Harari, Tristan Harris and Aza Raskin issued a similarly ominous red flag: "Democracy is a conversation, conversation relies on language, and when language itself is hacked, the conversation breaks down, and democracy becomes untenable."
It was economist Tyler Cowen who seemed to provide the most even-handed assessment, one that I suspect is far more rational:
I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!”
Meanwhile, entrepreneurs and the venture capitalists that back them are downright giddy at the prospect of a fertile new AI investment thesis, grateful to have an offramp from the now third-rail Web 3 hype cycle. Three quarters of Y Combinator startups were focused on AI last year. The grind of techno-capitalism is one that will not be halted.
And this battle has the added geopolitical gravity of the next cold war. Nobody wants to be seen as ceding AI supremacy to the Chinese. The voices of cautionary skeptics will provide an important counterbalance but will be overwhelmed by a human addiction to technological progress.
The battle lines are not neatly drawn around technological optimism or political affiliation, simply because the effects are too hard to discern. For now, they are divided between those that inherently embrace change, those that see the opportunity to make money, and others who are much more cautious about the disruptive social impact of technology — particularly those who view the impact of the last round, the rise of social media and, more broadly, the empowerment of a dominant technological class, as having had an adverse impact on mental health and social and political stability.
Lessons of social media
There's a straight line to be drawn from our media past to our social media present to our AI future. The equation is a function of 1) the elimination of barriers to the creation and distribution of content; 2) inadequate systems to manage the identity of media creators and the legitimacy of what is being circulated; 3) an inability to understand or control who sees what; and 4) the marginalization of established content creators and, certainly, the destabilization of existing economic models.
These are challenges we have struggled to come to grips with since social media's ascendance in the early 2000s. AI pours gas on them. Content grows in fidelity and volume at a cost that approaches zero. The human/machine interaction evolves from today's algorithmic presentation of content in the feed to a conversational interaction so seductive and human-like that it will be hard to separate truth from hallucination. The economics of traditional media will no doubt deteriorate. More complexity, the same root causes.
Maybe this is slightly optimistic, but the social media era may have prepared us for the next chapter. It has taught us to examine information's veracity and provenance more carefully. We pay far more attention to privacy rights and our personal data sovereignty. We've ratcheted up controls for malicious information spread.
And we are demanding much more transparency from the black boxes that control what we see. Like him or not, two recent moves from Elon point to a material change in how platforms are managed. The public release of Twitter's algorithm is a fresh example, one that portends not just a new era of transparency, but the first step in giving users the ability to roll their own feed. And the controversial step of limiting feed access to paid Twitter Blue subscribers points to a new way of thinking about access. Heretofore, scaled access to an audience was purchased with advertising. Even if media orgs chafe at Elon's idea of verification as pay-to-play (sorry, NYT), the idea of audience access as a benefit of membership points the way to a future where the commons is underwritten by participants, not advertisers. Free open access, the pursuit of scale at all costs, opaque algorithms and ad-leveraged systems look increasingly dubious in the rearview mirror.
But while the opportunity seems tantalizing, the risk of an AI reality that completely overwhelms our information space and, per the NYT essay above, finds us "living inside the hallucinations of nonhuman intelligence," seems worthy of some paranoia. Certainly, media will be forced to navigate a reality of ever more abundance, decreased relevance and consumer confusion. The ways that we think of value creation will change pretty fundamentally in the years ahead.
Future breaks models of past
What does it start to look like? In the media continuum, access, curation, content and activation all become connected in a conversational back-and-forth mediated by AI. We may attempt to enforce old structures on a new model, forcing chat interfaces to surface media brands, link to pages, and exchange downstream traffic for content. I doubt this will last. The incentives of new utility and frictionless access will win the day, and a new structure will emerge.
The article (or media artifact) construct will exist tangentially to chat, but it will be less important to the reader experience and more important as a reference source. The media brand will surface as a validator in the chat experience, a signature of quality like "Intel Inside." Only the most recognizable will have value. Evergreen content, like the hundreds of articles of the "how to stop a bleeding nose" variety, will languish. Personal, timely, authoritative POV will do much better.
The few media companies that cross the line between content and data provider will find valuable new opportunities in the chat world. A rich, faceted database of all the roofing providers, senior living centers or drug conditions will find a way to capture value connected to AI and upstream of clear economic events. Similarly, marketplaces that represent a unique catalog of products will do well. Value increases with proximity to a transaction.
As will providers of real-life activations. Brands that connect legitimate communities of shared interest will find value by bringing this unique product to the table instead of refactorable content whose value is extracted by AI.
We are about to witness a competitive race not unlike the search wars of the late 90s and early 2000s. Google won, taking almost all of the search share. I suspect something similar will happen in chat. AI functionality will proliferate all over the place. We will have right-wing chatbots, uncensored chatbots, domain-expert chatbots and so on. But one or two big ones will build ecosystems that dominate mind share in the same way Google did. They will reinforce their position with a better product on the margin, supported by an ecosystem that incents broad economic participation. This is why the plug-in notion is so important as a sign of what is to come. The network effects it generates will reinforce the winner-take-most outcome. Google will have a huge advantage because of sheer surface area. It will be interesting to watch OpenAI compete, evolving from research non-profit to the world's most valuable consumer tech startup.
And critically, as Sam Altman highlighted in the Lex Fridman interview, it is impossible for a single model to be unbiased. What he calls "system level requests" will become a more important way of influencing the flavor of results. Asking ChatGPT to respond from the POV of a libertarian, for example, sets the context for the interaction that follows. This is a critical new concept, one that will enable an AI system to transcend a single ideological frame and, one can imagine, morph into "brand responses" that filter information through the narrow point of view of a media outlet.
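To make the idea concrete, here is a minimal sketch of how a system-level request works in practice, using OpenAI's chat API as it stood around the GPT-4 launch; the persona text and the question are illustrative placeholders of my own, not anything Altman or OpenAI prescribe.

```python
# A minimal sketch: a "system level request" frames the point of view
# before the user's question is answered. The persona and prompt below
# are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets the context for everything that follows.
        {"role": "system",
         "content": "Respond from the point of view of a libertarian commentator."},
        {"role": "user",
         "content": "Should cities regulate short-term rentals?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Swap that system message for a publication's editorial voice and you have the seed of a "brand response."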
It's hard to see a world where much of what we used to make money from sustains its economic value like it did in the past. Too many people, helped by an infinite AI bench, will overwhelm supply. Enterprising media brands will need to become much more like marketplaces — of products, services, experiences, even humans — things that preserve value inside of a new distribution system that will inevitably marginalize anything without highly unique data value or IP. Unique point of view, personality and entertainment will always find a place because, at least for now, it's pretty hard to replicate.
Have a good week.../ Troy
ON THE PODCAST
The death of ad formats
It's been 20 years, and despite endless attempts to bend the internet to its will, advertising remains a fraught part of the digital media experience. The internet put the consumer in control, connected the impression to the transaction, and in the process completely transformed how we think about advertising, personal data sovereignty, and the underlying economics.
AI is about to change things all over again.
We delve into digital advertising's messy history, failed attempts at innovation, and the impact of new interfaces that flatten the space between desire and fulfillment.
This week, we invite our Gen Z friend and my son, Seb, to provide a generational perspective.
Plus, Succession is a good product. Click to listen. Enjoy!
ON DECK
Andy from Regina
Andy Shauf writes lush, cinematic pop songs. Plus, he is from my hometown, Regina, Saskatchewan. Yes, the one everyone was making fun of when the tourist board recently came up with the most imaginative new ad campaign. It's a great town, and Andy is a special musician who occasionally rocks out with the clarinet. Anyhow, "Norm" is his new album, "Halloween Store" is a glorious track, and I encourage you to have a listen. If you like it, here are Andy and co. doing an NPR Tiny Desk Concert.
God and the devil have preoccupied Andy Shauf’s work since the beginning. Sometimes, he paints sympathetic portraits; on 2009’s “The Devil,” he describes a lovesick Satan, clutching a bottle and weeping over the souls he’s ushered to hell. On the 2012 murder ballad “Wendell Walker,” the title character hears the voice of God urging him to rescue his wife from an affair by killing her lover. For over a decade, these visitations have helped to contextualize the complex characters in the Canadian songwriter’s elaborate storylines. His eighth album, Norm, is his most meticulous and beguiling, straying from his semi-autobiographical past work to span three perspectives and tactfully downplaying its philosophical quandaries with his lushest arrangements to date. More»
Source: Pitchfork