By Charles Ferguson
AI is coming for “content,” meaning everything from advertising and novels to movies and journalism. The result, starting very soon, is likely to be simultaneously horrific, wonderful, depressing, and exciting. There will be not only creative destruction, but also lots of plain old destruction.
Having spent most of my adult life producing research, journalism, and documentaries, as well as consuming many escapist novels and movies, I have great sympathy for creators. But for the past three years, I have been an investor and venture capitalist in AI, and this experience has shaped the message I would offer to everyone in journalism, publishing, music, advertising, and Hollywood: you ignore the potential of this technology at your peril.
Towering Inferno
First, consider the prospects for Hollywood. The film and television industry has already been contracting for years, owing to the new forms of media delivery (like streaming services) enabled by the internet, laptops, tablets, and mobile phones. The decline of cable TV and DVDs reflects a variety of factors, including video streaming, the rise of user-generated content, the democratization of creation through inexpensive cameras and software, and the resulting competition for eyeballs from YouTube, Facebook, and TikTok.
Yet throughout this decade of painful contraction, the fundamental techniques of video production didn’t change much. You still used real cameras to film real people and things.
Soon, though, all these real-world inputs will be obsolete, replaced by AI. The pioneers of this new world will be, without exception, startups, some of them less than a year old. Not a single legacy studio, producer, or distributor is at the forefront of AI filmmaking or distribution. The first such startup, Runway, was founded eight years ago, and it has since been joined by Arcana, Flick, Koyal, Zingroll, and others.
I have spoken with founders and senior executives at each of these companies in recent months, and asked them all the same question: How long will it take before a non-technical person can make a complete feature-length AI film with characters and production values as good as a typical Hollywood product? Their answers fall within a tight range: one to three years, averaging around two. For simpler short films and commercials, we are already there.
HOLYWATER, a Ukrainian startup founded in 2020, enables anyone to make “vertical” short films (formatted for phone screens) by using AI to generate huge numbers of text stories whose popularity then guides film production. HOLYWATER’s revenues already exceed $100 million and are more than doubling annually. Similarly, Wide Worlds, founded in 2024, enables fans to make short films set in their favorite fan-fiction universes.
The $600 billion digital advertising industry is next. The leading startup in AI commercials, Higgsfield, was founded only in 2023, but its business has exploded, with revenues doubling every month, on track to exceed $1 billion this year.
For longer, more complex series and films, the technology isn’t there yet. But it is advancing rapidly. Within a decade, human actors will become historical artifacts, as will cinematographers, stunt performers, art directors, costume designers, line producers, and location scouts. While a few studios are quietly using a lot of AI (Lionsgate is often mentioned), most of Hollywood is preparing for this impending tsunami by doing … virtually nothing. Studios, producers, distributors, and agencies are dreaming (or pretending) that AI will be just one more technological wave to ride, like cable TV, CGI, DVDs, and streaming.
By contrast, the unions representing actors, writers, art directors, and other industry professionals are terrified – and have responded by blindly opposing all uses of AI, which is futile at best. Still, they are right to worry. The technology is advancing so fast that the transition from physical to AI video production will probably be brutal and brief, destroying thousands of careers and companies virtually overnight. I have already seen friends leave the industry.
Hollywood is just one example of how the AI revolution will cause enormous social pain unless it is managed humanely and carefully (and there is little sign of that happening). Similar statements can be made about fiction writing, commercial photography, radio, and, above all, music, where multiple rapidly growing AI startups (including Udio, Suno, and Mozart) are enabling non-musicians to create music.
To be sure, Udio and Suno engaged in massive IP theft, were sued, and recently reached settlements with the major music labels. But none of the legacy music companies are at the forefront of AI, except in filing lawsuits.
The Day After Tomorrow
So, the AI revolution is coming to the arts, and the carnage in legacy industries will be awful. What the day after will look like, however, is a far more complicated question.
Personally, as a once and future filmmaker, I am excited about AI filmmaking. I would love to be able to write treatments and screenplays, feed them to my AI “studio,” get back a good rough cut, and then hone and hone with AI until I have exactly the film I want to make, with every character, setting, movement, line of dialogue, and camera angle perfect. There will be no need to beg for financing, employ a producer’s girlfriend, indulge an egomaniacal movie star, or worry about whether someone on set loaded a gun with live ammunition.
There is, however, an urgent need for new laws, systems, and institutions to protect intellectual property and its creators. The most discussed issue is the very real need to compensate traditional creators whose prior work is being used to train AI models. But there is also a need to protect AI creators and creations.
The idea that AI-generated art can’t or shouldn’t be protected is misguided. When human artists – writers, photographers, film directors – use AI to create new work, they deserve protection just as much as human artists using conventional tools.
In fact, I expect AI to create the conditions for major new genres and artists of genius. For a glimpse of what I mean, check out the Runway AI Film Festival, especially the superb grand prize winner, Total Pixel Space. Such work shows why I welcome the AI era in artistic creation, even while recognizing that the AI arts revolution will also have major downsides. Many good people – hundreds of thousands, perhaps millions – will be unemployed with little warning, often late in their careers. There will also be oceans of AI slop – literally millions of new novels, songs, and films every year – making it difficult for gifted new artists to stand out. And, of course, there will be more AI girlfriends and AI pornography, as well as gruesome creations ranging from resurrected Nazis to depictions of child abuse.
The Life of an Illusion
Far more frightening to me, however, is what is happening to the world of nonfiction – news, information sources, and reference services. Here, we are already witnessing the blurring of the boundaries – to the point of indistinguishability – between fact and fabrication. While the AI era of art excites me more than it worries me, the balance is different in the realm of truth and reality. As much as there is to celebrate, I am terrified by what AI might bring.
Journalism, like Hollywood, has already contracted. The internet forced daily newspapers, weekly magazines, radio, and television news all into the same market; it destroyed the classified advertising revenues that newspapers depended on; and it spawned thousands of low-quality new entrants. The news sources upon which most people previously relied – magazines such as Time and Newsweek, and network television news – were decimated as social media, YouTube, and aggregators took over, offering summaries that were barely short of copyright infringement – when they were true. Junk and falsehoods proliferated, and the quality of news consumed by the general population plunged.
To be sure, after multiple near-death experiences, a small number of high-quality English-language news organizations emerged even stronger and with larger global audiences than before: the New York Times, the Financial Times, the Guardian, Bloomberg News, the Economist, Politico, and the Reuters and AP wire services. But these outlets reach only a small minority of the population. They are also expensive to produce, and their finances are fragile. AI threatens not only the remaining institutions of high-quality journalism, but, more fundamentally, the capacity of anyone to deliver truthful information and maintain an informed public capable of rational judgment.
The obvious, most frequently discussed issue is AI deepfakes. These are indeed a huge problem, considering that YouTube, Facebook, Snap, X, and TikTok face few obligations with regard to truth or accuracy. For all the damage that conspiracy theorists like Alex Jones wrought, at least we knew that we were hearing from the real Alex Jones. Soon, it will be possible to synthesize nearly undetectable fake versions of almost anyone, and almost any event.
Even the most carefully trained AI models can be misused, and some open-source AI models have no controls whatsoever. Available to anyone, they can be “trained” to produce nearly any kind of text or video, ranging from high-quality (carefully fact-checked) to insane distortions.
Yet at the same time, AI has greatly improved the quality of news and information available to the public, at least for anyone interested enough to look. The major models (mainly those from OpenAI, Anthropic, and Google), and the many value-added services built on them, are now remarkably good. Hallucination is still a problem, but far less so than even a year ago. The model companies have also made deals – mostly secret, but a few publicly known – with some of the serious news providers. The Financial Times has an agreement with OpenAI, the New York Times has one with Amazon, and the AP wire service has agreements with OpenAI and Google.
Already, AI models provide a miraculous portal to knowledge for more than a billion users. I use Perplexity at least a dozen times a day, and I used it repeatedly in writing this essay – far more often than I referred to legacy publications (or Google Search).
Similarly, there has been an explosion of specialized AI services, including reference resources for lawyers, scientists, doctors and patients, and now also AI therapists, through providers such as Ash and Lovon. Make light of it if you wish, but several friends have told me that Ash, Lovon, and even ChatGPT have proven surprisingly helpful in times of need, comparing favorably with most human therapists.
A Sloppery Slope
But there is a dark side. AI models do not create knowledge. They harvest and distribute knowledge superbly, but they are totally dependent on information created by others. We (and the models) still need Politico, the New York Times, the Financial Times, the Kyiv Independent, eKathimerini, the Guardian, Le Monde, Asahi Shimbun, El País, Der Spiegel, AP, Reuters, ProPublica, and the whole world of news organizations. They alone have commissioning editors, full-time journalists, fact-checkers, and networks of stringers, fixers, and sources on the ground. AI models do not hire investigative journalists or war correspondents willing to take risks and work hard to get the truth.
Yet as much as AI models depend on legacy journalism, they also profoundly threaten it in at least two ways. As in the case of Hollywood, these threats are further amplified by the fact that the legacy industry isn’t paying attention.
The first problem is direct competition. If you want to know something specific, or want to stay current with some issue, you don’t need a news publication anymore; you can just ask a model. And you can ask exactly the question you want answered, at exactly the level of detail you desire. You can even ask for the equivalent of the news section or home page of your favorite newspaper.
Moreover, the currently available models can answer many questions that news organizations cannot, on topics ranging from appliance repair to psychotherapy and medical advice. Perhaps worst of all, they are cheaper – much cheaper. For individual users, they typically charge about $10 per month, whereas the New York Times costs about $25 per month, and the Financial Times and Bloomberg News much more.
The AI models have a cost advantage in part because they can amortize their fixed costs across huge numbers of users. But they also benefit greatly from not paying for most of the information they use. The major AI vendors have been exceptionally ruthless and amoral in using valuable publications, including books and news periodicals, to train their systems, usually with little or no compensation to writers or publishers.
Anthropic, regarded as the most responsible of the major model companies, recently settled a lawsuit supported by the Authors Guild for $1.5 billion. The New York Times sued OpenAI and Microsoft in 2023 (both defendants continue to fight), and shortly before I finished this essay (with plenty of assistance from Perplexity), the New York Times and the Chicago Tribune sued that company, too.
There is a strong moral and practical argument for forcing model vendors to compensate creators fairly. But this will probably require new court decisions or new laws. In the meantime, there is a very real risk that unless news organizations, journalists, writers, and documentary filmmakers are compensated sufficiently, the AI industry will eventually kill the very sources on which it depends to provide accurate results.
This brings us to the second problem posed by AI: the potential destruction of trustworthy news sources as a result of overwhelming pollution from AI junk and fraud. Innumerable AI services will arise, and even the major foundation models and the most careful news organizations might be degraded by skillful AI fakery that cannot be distinguished from reality. So far, the models have been trained on reality, but soon most training “content” will be AI-generated.
Heads in the Sand
The legacy industry itself bears some responsibility for this impending crisis. Faced with the looming threat of AI, the major news organizations have, like Hollywood, done … virtually nothing. The New York Times and the Financial Times cover the AI industry fairly well, employing reporters who know what’s going on. But does either of them have a chat interface so that subscribers can ask questions? No. Just try using the New York Times (or Financial Times, or Guardian, or Politico) search feature and then compare it to ChatGPT or even conventional Google Search. It’s not even close, and we are talking about searching within a single publication.
Nor do most news organizations use AI more broadly. Their journalists may use chat services, but they could be doing much more. They could deploy AI systems that continuously scan reliable sources for news developments, automate the production of first drafts, attach citations and footnotes to articles (as Perplexity does), streamline fact-checking, and handle copyediting.
Then there’s language translation. The New York Times offers limited services in Chinese and Spanish, but if you read only Arabic, Japanese, Polish, Ukrainian, or Vietnamese, you’re out of luck. That is absurd, given the current quality of AI translation.
The legacy publications still mostly deny the foundation AI models access to their content. But if they think this will slow things down, they are deluded. The digital universe is incomparably larger than any single publication, and the AI models and their tools are getting very good at finding everything they need. The news industry as a whole is no match for the forces being unleashed upon it, or for the companies doing the unleashing.
This last point is under-appreciated. In 2024, the New York Times Company had revenues of slightly under $3 billion; the Financial Times, the Guardian, and the Economist have combined revenues of under $2 billion. Only Bloomberg has real financial muscle. Even the entire global book and scientific publishing industry (dominated by Bertelsmann, Springer, and Elsevier) has total revenues of less than $50 billion.
In contrast, Google’s annual revenues are around $400 billion; Microsoft’s, $300 billion; Meta’s, $200 billion; Amazon’s, $700 billion; Apple’s, $400 billion. Google’s profits alone top $100 billion annually. Even OpenAI and Anthropic already have revenues far larger than those of any news provider except Bloomberg. The New York Times’ revenue is growing at about 10% per year, which means that, next to AI companies whose revenues are growing many times faster, it is losing market share, fast. In a fight for the eyeballs of the planet, who do you think will win, particularly if news organizations are falling ever further behind in using AI technology?
One can hope that news organizations will wake up, that courts and legislatures and popular demand will force AI companies to compensate journalists and researchers fairly, and that AI will give rise to a new industry of high-quality journalism. But one can also reasonably worry, as I do, that none of this will happen.
Charles Ferguson is an angel investor, a limited partner in six AI venture capital funds, and a nonexclusive partner in Davidovs Venture Collective. His direct investment positions include three technology incumbents (Apple, Microsoft, and Nvidia) and many AI startups, including Perplexity, Etched, CopilotKit, Paradigm, Browser Use, FuseAI, and Pally.
Charles Ferguson, AI Editor at Project Syndicate, is a technology investor, policy analyst, and the director of many documentary films, including the Oscar-winning Inside Job.
Copyright: Project Syndicate, 2026.
www.project-syndicate.org