06/08/2024
Tech evangelists like to say that AI will eat the world, a reference to a famous line about software from the venture capitalist Marc Andreessen. In the past few weeks, we've finally gotten a sense of what they mean.
This spring, tech companies have made clear that AI will be a defining feature of online life, whether people want it to be or not. First, Meta surprised users with an AI chatbot that lives in the search bar on Instagram and Facebook. It has since informed European users that their data are being used to train its AI, a notice presumably sent only to comply with the continent's privacy laws.
OpenAI released GPT-4o, billed as a new, more powerful and conversational version of its large language model. (Its announcement event featured an AI voice named Sky that Scarlett Johansson alleged was based on her own voice without her permission, an allegation OpenAI's CEO, Sam Altman, has denied.) Around the same time, Google launched, and then somewhat scaled back, "AI Overviews" in its search engine. OpenAI also entered into new content partnerships with numerous media organizations (including The Atlantic) and platforms such as Reddit, which seem to be operating on the assumption that AI products will soon be a primary means of receiving information on the internet. (The Atlantic's deal with OpenAI is a corporate partnership.
The editorial division of The Atlantic operates with complete independence from the business division.) Nvidia, a company that makes microchips used to power AI applications, reported record earnings at the end of May and subsequently saw its market capitalization increase to more than $3 trillion. Summing up the moment, Jensen Huang, Nvidia's centibillionaire CEO, got the rock-star treatment at an AI conference in Taipei this week and, uh, signed a woman's chest like a member of Mötley Crüe.
The pace of implementation is dizzying, even alarming, including to some of those who understand the technology best. Earlier this week, employees and former employees of OpenAI and Google published a letter declaring that "strong financial incentives" have led the industry to dodge meaningful oversight.
Those same incentives have seemingly led companies to produce a lot of trash as well. Chatbot hardware products from companies such as Humane and Rabbit were touted as attempts to unseat the smartphone, but were shipped in a barely functional state. Google's rush to launch AI Overviews, an attempt to compete with Microsoft, Perplexity, and OpenAI, resulted in comically flawed and potentially dangerous search results.
Technology companies, in other words, are racing to capture money and market share before their competitors do, and they are making unforced errors as a result. But though tech corporations may have built the hype train, others are happy to ride it. Leaders in all industries, terrified of missing out on the next big thing, are signing checks and inking deals, perhaps not knowing precisely what they're getting into, or whether they are unwittingly helping the companies that will ultimately destroy them.
The Washington Post's chief technology officer, Vineet Khosla, has reportedly told staff that the company intends to "have A.I. everywhere" inside the newsroom, even if its value to journalism remains, in my eyes, unproven and ornamental. We are watching as the plane is haphazardly assembled in midair.
As an employee at one of the publications that has recently signed a deal with OpenAI, I have some minor insight into what it's like when generative AI turns its hungry eyes to your small corner of an industry. What does it feel like when AI eats the world? It feels like being trapped.
There's an element of these media partnerships that feels like a shakedown. Tech companies have trained their large language models with impunity, claiming that harvesting the internet's content to develop their programs is fair use.
This is the logical end point of Silicon Valley's classic "Ask for forgiveness, not for permission" growth strategy. The cynical way to read these partnerships is that media companies have two choices: Take the money offered, or accept OpenAI scraping their data anyway. These conditions resemble a hostage negotiation more than they do a mutually agreeable business partnership, an observation that media executives are making in private to one another, and occasionally in public, too.
Publications can obviously turn down these deals. They have other options, but these options are, to use a technical term, not great. You can sue OpenAI and Microsoft for copyright infringement, which is what The New York Times has done, and hope to set a legal precedent where extractive generative-AI companies pay fairly for any work they use to train their models.
This process is prohibitively costly for many organizations, and if they lose, they get nothing but legal bills. That leaves a third option: Abstain on principle from the generative-AI revolution altogether, block the web-crawling bots from companies such as OpenAI, and take a justified moral stand while your competitors capitulate and take the money. This third path requires a bet on the hope that the generative-AI era is overhyped, that the Times wins its lawsuit, or that the government steps in to regulate this extractive business model, which is to say, it's uncertain.
The situation that publishers face seems to perfectly illustrate a broader dynamic: Nobody knows exactly what to do. That's hardly surprising, given that generative AI is a technology that has so far been defined by ambiguity and inconsistency. Google users encountering AI Overviews for the first time may not understand what they're there for, or whether they're more useful than the usual search results. There is a gap, too, between the tools that exist and the future we're being sold. The innovation curve, we're told, will be exponential.
The paradigm, we're cautioned, is about to shift. Regular people, we're to believe, have little choice in the matter, especially as the computers scale up and become more powerful: We can only experience a low-grade disorientation as we shadowbox with the notion of this promised future. Meanwhile, the ChatGPTs of the world are here, foisted upon us by tech companies that insist these tools should be useful in some way.
But there is an alternative framing for these media partnerships, one that suggests a moment of cautious opportunity for beleaguered media organizations. Publishers are already suppliers for algorithms, and media companies have been getting a raw deal for decades, allowing platforms such as Google to index their sites and receiving only traffic referrals in exchange. Signing a deal with OpenAI, under this logic, isn't capitulation or good business; it's a way to fight back against platforms and set ground rules: You have to pay us for our content, and if you don't, we're going to sue you.
Over the past week, after conversations with several executives at different companies that have negotiated with OpenAI, I was left with the sense that the tech company is less interested in publisher data to train its models and far more interested in real-time access to news sites for OpenAI's forthcoming search tools. (I agreed to keep these executives anonymous to allow them to speak freely about their companies' deals.) Having access to publisher-partner data is helpful for the tech company in two ways:
First, it allows OpenAI to cite third-party organizations when a user asks a question on a sensitive issue, which means OpenAI can claim that it is not making editorial decisions in its product.
Second, if the company has ambitions to unseat Google as the dominant search engine, it needs up-to-date information.
Here, I'm told, is where media organizations may have leverage for ongoing negotiations: OpenAI will, theoretically, continue to want updated news information. Other search engines and AI companies, wanting to compete, would also need that information, only now there's a precedent that they should pay for it.
This would potentially create a consistent revenue stream for publishers through licensing. This isn't unprecedented: Record companies fought platforms such as YouTube on copyright issues and have found ways to be compensated for their content. That said, news organizations aren't selling Taylor Swift songs.
(Spokespeople for both OpenAI and The Atlantic did clarify to me that The Atlantic's contract, which is for two years, allows the tech company to train its products on Atlantic content. But when the deal ends, unless it is renewed, OpenAI would not be permitted to use Atlantic data to train new foundation models.)
Zoom out, however, and even this optimistic line of thinking becomes fraught. Do we actually want to live in a world where generative-AI companies have greater control over the flow of information online? A transition from search engines to chatbots would be immensely disruptive. Google is imperfect, and its product is arguably degrading, but it has provided a foundational business model for creative work online by allowing optimized content to reach audiences.
Perhaps the search paradigm needs to change, and it's only natural that the webpage becomes a relic. Still, the magnitude of the disruption, and the blithe nature with which tech companies suggest everyone get on board, give the impression that none of the AI developers is concerned about finding a sustainable model for creative work to flourish.
As Judith Donath and Bruce Schneier wrote recently in this publication, AI "threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences." Follow this logic and things get existential quickly: What incentive do people have to create work if they can't make a living doing it?
If you feel your brain start to pretzel up inside your skull, then you are getting the full experience of the generative-AI revolution barging into your industry. This is what disruption actually feels like. It's chaotic. It's rushed. You're told it's an exhilarating moment, full of opportunity, even if what that means in practice is not quite clear.
Nobody knows what's coming next. Generative-AI companies have built tools that, although popular and nominally useful in boosting productivity, are but a dim shadow of the ultimate goal of constructing a human-level intelligence. And yet they are exceedingly well funded, aggressive, and capable of leveraging a breathless hype cycle to amass power and charge head-on into any industry they please with the express purpose of making themselves central players.
Will the technological gains of this moment be worth the disruption, or will the hype slowly peter out, leaving the internet even more broken than it is now? After roughly two years of the most recent wave of AI hype, all that is clear is that these companies do not need to build Skynet to be destructive.
"AI is eating the world" is meant, by the technology's champions, as a triumphant, exciting phrase. But that is not the only way to interpret it. One can read it menacingly, as a battle cry of rapid, forceful colonization.
Lately, I've been hearing it with a tone of resignation, the kind that accompanies shrugged shoulders and forced hands. Left unsaid is what happens to the raw material, the food, after it's consumed and digested, its nutrients extracted. We don't say it aloud, but we know what it becomes.
The web itself is being shoved into a great unknown.