AI won’t just reshape work and markets, Joseph Stiglitz says; it will quietly rot the information those systems depend on. As large language models (LLMs) scrape our sarcastic Reddit comments and the loud marginal voices on extremist forums, the Nobel laureate warns of a world where everything looks more data‑driven, yet the underlying data is increasingly, well, “garbage.”

“In the case of AI, I think there are a couple of other deeper problems,” the economist told Fortune. “We have not only a problem in the labor market … but there’s another side of what I would call information externalities,” which Stiglitz describes simply as garbage in, garbage out (GIGO).

The risk isn’t just lost jobs; it’s a broken feedback loop between truth and the systems we use to interpret reality—from prediction markets to financial models to political debate. In essence, AI is only as smart as the input it receives: when it keeps scraping inaccurate information, its output comes out just as distorted.

In his view, today’s models are built on a faulty bargain: They voraciously scrape journalism, research, and online chatter while undermining the very institutions that produce high‑quality knowledge in the first place. The result, he fears, is a world where people are driven by the online rhetoric AI perpetuates—think of the market downturn prompted by a Citrini Research paper publicizing “ghost GDP,” or Matt Shumer’s viral AI doomsday essay—rather than by actual reality.

AI is ‘stealing information’ from the sources it needs

Stiglitz’s starting point is blunt. “AI is basically stealing information from legacy media,” he said, “and that means the legacy media doesn’t have the resources or incentives to produce information.” To be sure, some AI companies do pay for certain journalism. OpenAI, for example, has a content deal with Wall Street Journal owner News Corp.

Still, Stiglitz said, AI has neither the interest nor the capacity to produce new, high-quality information. “And the result of all this is that there is a real risk of a deterioration of the overall information ecosystem.”

If the best sources of information are slowly defunded while the cheapest forms—like comment threads, partisan memes, and low‑effort content—proliferate, the training data tilts toward whatever is most abundant and least expensive. Chatbots, in turn, will overwhelmingly regurgitate what they scrape from online forums.
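A toy projection makes that tilt concrete. The starting stocks and rates in the Python sketch below are invented for illustration; the point is the compounding, not the numbers.

```python
# Illustrative only: assumed rates, not estimates of the real media economy.
quality_docs, junk_docs = 1_000.0, 10_000.0  # hypothetical starting stocks

for year in range(1, 6):
    quality_docs *= 0.8  # assume quality output shrinks 20%/yr as revenue erodes
    junk_docs *= 1.3     # assume cheap content grows 30%/yr unchecked
    junk_share = junk_docs / (junk_docs + quality_docs)
    print(f"year {year}: junk share of scrapeable text = {junk_share:.1%}")
```

Under these made-up rates, junk climbs from about 91% of the pool to over 99% within five years, and any model trained on a uniform scrape inherits that mix.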

That’s the first way AI’s hunger for what’s online can backfire: by cannibalizing the business models that sustain serious work and shifting the mix of what exists to be scraped in the first place.

Garbage in, garbage out, at an industrial scale

Stiglitz, who takes up the information ecosystem in his 2024 book, The Road to Freedom: Economics and the Good Society, returned to the GIGO cliché. “If you are processing and disseminating garbage, all you get at the end is garbage—garbage in, garbage out, GIGO.”

The phrase might be old, but Stiglitz says it still applies. AI systems are superb at processing whatever we give them, but they are far less skilled at distinguishing knowledge from noise. “There is a real risk that in spite of the potential for the new technologies to improve the information ecosystem in critical areas … we actually may wind up in a worse situation,” he said. The more junk goes in—unverified claims, conspiracies, astroturf campaigns, low‑quality commentary—the more polished junk comes out.

He worries that users will mistake that polish for truth. “They’ll think that they’ve gotten highly processed information without realizing fully the extent to which all that they’ve been doing is reprocessing garbage,” he said. “AI processing garbage isn’t a substitute for a single good research paper.”

When anti‑vaxxers outweigh scientists

Nowhere is that risk clearer than in the far corners of the internet, where extreme viewpoints are often the loudest. Think of a stereotypical message board devoted to a single topic: shielded by the internet’s anonymity, users freely sound off on the latest political decision or cultural flashpoint. These corners are where misinformation proliferates, while the science that debunks it receives little mention, if any. Vaccines are a perfect case study, Stiglitz says.

“Anti‑vaxxers are much more active on the internet than people who say that vaccines work,” he said. Scientists run trials, publish a few dense papers, and move on. Conspiracy theorists flood forums and social platforms every day.

“So there may be many more articles on the anti- side than the one critical article that says, ‘Here’s the test of the vaccine, and it works … Here’s the efficacy,’” Stiglitz explained. “Do the AIs today have the ability to say that one article is all we need? They don’t.”

For models trained on raw frequency and engagement, the loudest voices win. AI’s hunger for more information can warp reality by elevating the passionate minority over the careful majority, especially in domains where the public good depends on trust in slow, methodical science.
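The mechanism is easy to sketch in code. The counts and weights below are entirely hypothetical (ours, not Stiglitz’s): a learner that follows raw document frequency lands on the abundant claim, while one that weights evidence quality lets a single trial outweigh the flood.

```python
import random
from collections import Counter

# Hypothetical corpus: one rigorous efficacy study vs. a flood of posts.
corpus = ["vaccine_works"] * 1 + ["vaccine_harmful"] * 500

# Frequency-driven learner: sample documents and adopt the majority claim,
# with no regard for where each claim comes from.
random.seed(0)
sample = random.choices(corpus, k=101)
print(Counter(sample).most_common(1)[0][0])   # almost surely "vaccine_harmful"

# Quality-weighted learner: score each claim by the strength of its sourcing,
# so one randomized trial can outweigh thousands of reposts.
weight = {"vaccine_works": 1_000.0,   # assumed weight for a controlled trial
          "vaccine_harmful": 0.1}     # assumed weight for an unsourced post
scores = Counter()
for doc in corpus:
    scores[doc] += weight[doc]
print(scores.most_common(1)[0][0])            # "vaccine_works"
```

Whether today’s models have anything like that weight table is exactly the question Stiglitz raises; on his reading, they don’t.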

Prediction markets built on missing information

In a 1980 paper with Sanford Grossman, Stiglitz argued that there’s a paradox at the heart of efficient markets: If prices fully reflect all available information, then no one has an incentive to pay to collect that information, so the very information that makes markets “efficient” disappears.
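The logic can be sketched in a few lines. The notation below is ours rather than the 1980 paper’s, and the AI coda is only a plausible reading of the extension he describes next, not a summary of it.

```latex
% Stylized Grossman-Stiglitz logic (notation ours, not the original paper's).
% c > 0: cost of acquiring information.
% \pi(\lambda): gross excess return to informed traders when a share
% \lambda of traders is informed; decreasing in \lambda, with \pi(1) = 0,
% since fully revealing prices leave nothing to earn on the information.
\[
  \pi(1) - c = -c < 0
  \quad\Longrightarrow\quad
  \lambda = 1 \text{ cannot be an equilibrium,}
\]
\[
  \text{so markets settle where } \pi(\lambda^{*}) = c ,
\]
% i.e., prices stay just noisy enough to pay for the information that keeps
% them informative. The AI analogue: if unpaid scraping drives the private
% return to producing information toward zero while its cost stays positive,
% \lambda^* falls and the ecosystem's informational quality falls with it.
```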

He says AI and modern prediction markets are replaying that story at a larger scale. “It’s interesting you mentioned Grossman‑Stiglitz,” he told Fortune, “because I wrote a paper with one of my graduate students, Max Ventura, extending the Grossman‑Stiglitz to AI, and the result I described before about how we can worsen the information ecosystem was actually a reference to that extension.”

When “you don’t force AI companies who are scraping the data from Fortune and from every other producer of media” to pay for what they take, “they don’t get the returns, and so the incentives to do the basic quality research that leads to a good information ecosystem is attenuated.” Prediction markets and trading algorithms then lean on the outputs of those models, further decoupling their bets from any underlying investment in truth.

“It has undermined the incentives for producing high-quality information, increasing the ability to produce low-quality information, and therefore there’s more garbage going in, and therefore more garbage coming out,” he said. A system meant to aggregate knowledge ends up amplifying whatever is cheapest and most plentiful instead.

AI as prop, not oracle

Despite all this, Stiglitz doesn’t think the answer is to ban or ignore AI. He uses it himself, and he’s trying to teach his students to do the same—without mistaking a slick answer for a solid argument.

“We try to teach them to use AI as a research tool,” he said. “So, you know, we’re not walking away from AI. I use AI as part of my research. So it is an amazing research tool, but it’s not a substitute for thinking, and it’s not a substitute for analysis.

“It can help you find sources, develop ideas,” he added. “But in the end, you have to do the hard work.” For him, the outputs of a model are “really props for me to start thinking about things maybe slightly differently,” not verdicts to be accepted unchanged.

Still, he believes some government intervention is needed to keep the information ecosystem from deteriorating further. “In the absence of government regulation,” he warned, “there is at least a significant risk that we will wind up with a worse information ecosystem in a number of areas of concern.”

This story was originally featured on Fortune.com
