In early July, the Associated Press made a deal with OpenAI, maker of ChatGPT, to license “part of AP’s text archive” and get access to “OpenAI’s technology and product expertise.” A few days later, OpenAI announced a $5 million grant, accompanied by $5 million in software use “credits,” to the American Journalism Project, an organization that supports nonprofit newsrooms. Meanwhile, Google has reportedly been presenting major news organizations, including the New York Times, the Washington Post, and the Wall Street Journal, with a new software “personal assistant” for journalists, code-named Genesis, which promises to “take in information — details of current events, for example — and generate news content,” with a pitch described by some in attendance as unsettling. A number of news organizations, including G/O Media, which owns Gizmodo, Jezebel, and The Onion, are experimenting with blog-style content generated from scratch, and plenty of others, with varying degrees of transparency, have started to dabble.
Last week, Semafor reported that the next significant meeting between news organizations and AI firms might occur in court: Barry Diller’s IAC, along with “a handful of key publishers,” including the Times, News Corp, and Axel Springer, are reportedly “formalizing a coalition that could lead a lawsuit as well as press for legislative action.” They’re not looking for small grants or exploratory collaborations. In their view, AI companies are systematically stealing content in order to train software models to copy it. They’re looking for compensation that could “run into the billions.”
These are, it is fair to say, the inconsistent actions of a mixed-up industry confronting speculative disruption from a position of weakness. This is not ideal if you’re the sort of person who places much stock in a functional Fourth Estate, but it’s also not unique: In conference rooms around the world, white-collar workers are stumbling through mind-numbing conversations about incoherent presentations on the imminent approach of AI with the assignment or intention of making some — any! — sort of plan. It’s also understandable. It’s easier to get the leadership at OpenAI and Google to talk about the apocalypse than it is to get a clear sense of even their own plans for making money with large language models, much less how those plans might affect the reporting and distribution of the news. The media industry’s particular expressions of panic are a result of a comprehensive sense of exposure to these new forms of automation — which is arguably the best way to think about artificial intelligence — combined with a sense of profound confusion about what the challenges are and for whom.
The industry’s scattered early responses to AI do, however, seem to contain some assumptions, and from those assumptions we can extrapolate some possible futures — if not the likely ones, then at least the ones that the people in charge of the news business are most excited about or most afraid of. The news media’s flailing early responses to AI are, in their own ways, predictions. There are, so far, a few dominant schools of thought.
Continue reading: https://nymag.com/intelligencer/2023/08/how-ai-will-change-the-news-business.html