Commentary: AI chatbots have been used to create hundreds of news websites

Tribune Content Agency

Consider Hollywood’s evolving villains over the years: Nazis, Russians, South African white nationalists, Middle Eastern terrorists, South American drug lords, dinosaurs, aliens, Wall Street cutthroats, and, now, Tim Robbins.

It’s Robbins as creepy “Bernard,” a dour IT director in “Silo,” an Apple TV+ sci-fi series about thousands of people living underground for reasons they do not understand — and that Bernard is loath to explain.

An IT boss as bad guy (so far) taps into our growing cultural anxiety about technology, be it a threat to jobs or the expansion of artificially generated misinformation.

The apprehension was even evident at the University Club of Chicago the other night during a discussion of development on the South and West sides. There, Discover Financial Services Chief Executive Officer Roger Hochschild detailed a giant new South Side call center built by the Riverwoods-based banking and payments giant, which has 30 million customers worldwide and 20,000 employees.

When the floor opened to questions, the first two had nothing to do with development or shameful poverty. They were about how Discover uses artificial intelligence.

A heretofore esoteric topic is now mainstream, with evidence ranging from headlines such as “Microsoft’s AI reaches Indian villages” to a Chicago sports radio host fretting that the chatbot ChatGPT spit out a 14-page paper on the French Revolution for her sixth grader.

The unease was the catalyst for a U.S. Senate Judiciary Committee hearing featuring Sam Altman, founder of San Francisco startup OpenAI, the creator of ChatGPT.

My colleagues at NewsGuard, which tracks online misinformation and rates the credibility of news sources, did a pleasant double-take as Democratic Sen. Amy Klobuchar, the daughter of a journalist, raised AI’s potential perils with witness Gary Marcus, a professor emeritus of psychology and neural science at New York University.

“A lot of news is going to be generated by these systems,” Marcus, who was sitting next to Altman, said. “They are not reliable. NewsGuard already has a study … showing that something like 50 websites are already generated by bots. … The quality of the sort of overall news market is going to decline as we have more generated content by systems that aren’t actually reliable.”

In fact, NewsGuard has identified more than 125 websites, ranging from news to lifestyle reporting and published in 10 languages, with substantial content written by AI tools. They include a health information portal that has published more than 50 AI-generated articles offering medical advice, some imprecise or even bogus, including on subjects such as end-stage bipolar disorder.

Some sites have respectable-sounding names such as News Live 79, Daily Business Post, iBusiness Day, Ireland Top News, and Daily Time Update, and feature benign, if poorly written, stories on politics, entertainment, and travel.

There may be a tipoff here or there that a human wasn’t involved (language like “I cannot complete this prompt”). More important, there are false claims, such as celebrity death hoaxes and fabricated events.

CelebritiesDeaths.com, which posts generic obituaries and news on famous figures who have supposedly died, published an April 2023 article titled “Biden dead. Harris acting President, address 9 a.m. ET.” The article began, “BREAKING: The White House has reported that Joe Biden has passed away peacefully in his sleep …”

As Bloomberg News wrote, NewsGuard’s work is “raising questions about how the technology may supercharge established fraud techniques,” and exploit so-called programmatic advertising placed by ad tech firms. That means large amounts of advertising can wind up on suspect “news” sites that were created with a few simple prompts. The tech firms and advertisers are probably unwitting victims.

“It seems to me that AI chatbots generating news will finally realize (former Donald Trump aide) Kellyanne Conway’s vision of alternative facts by automating misinformation,” said Rich Neimand, a Washington political and corporate strategist.

New AI tools are “a toadstool masquerading as a morel,” an edible fungus, Neimand said. “It is poisonous in a society that lacks emotional intelligence and operates on relative ethics.”

For Chicago political consultant Tom Bowen, the new tools sharply lower the barrier to creating dubious websites for political campaigns. Such ideologically driven sites, which generally hide the identities of their sponsors, are proliferating.

As with pornography, “propagandists are eager to embrace new technologies and often move faster than traditional journalists,” Jeremy Gilbert said. Gilbert is the Knight Chair in Digital Media Strategy at Northwestern’s Medill School of Journalism and was formerly a digital strategist at The Washington Post.

That is why, he said, NewsGuard and others “can play a valuable role in determining valuable sources of trusted news and information.”

“The same algorithms that generate text should be capable of identifying generated text,” Gilbert said. “AI companies that offer these generative tools should also offer complementary detection systems so propaganda and clickbait will be harder to spread.”

Like the confused captives of “Silo,” we can only hope technology proves a liberating friend, not a deceiving foe.

____

(Jim Warren, former managing editor of the Tribune, is executive editor of NewsGuard)
