Book fans around the world are expressing outrage over an AI-generated “summer reading list” that included several major errors and was distributed in multiple American print newspapers over the weekend.
The list, published as part of a “summer guide” insert in the Chicago Sun-Times on May 18 and The Philadelphia Inquirer on May 15, included 15 recommended novels, “new and old,” that promised to “deliver the perfect summer escape.”
People on social media were quick to point out that 10 of the entries were novels that do not exist.
Some of the fake titles include:
- “Tidewater Dreams” by Chilean-American novelist Isabel Allende, “a multigenerational saga set in a coastal town where magical realism meets environmental activism.”
- “Salt and Honey” by Delia Owens, author of “Where the Crawdads Sing,” described as “an atmospheric novel that blends science with a coming-of-age narrative.”
- “The Last Algorithm” by science fiction writer Andy Weir, about “a programmer who discovers that an AI system has developed consciousness — and has been secretly influencing global events for years.”
The list does not include a byline but was created by Marco Buscaglia, a Chicago-based freelance writer and content strategist, who also wrote most of the content for the summer guide insert.
In an interview, Buscaglia said that he was “completely embarrassed” by the errors and takes full responsibility. “I can’t believe I missed it because it’s so obvious,” he said. “No excuses.”
Buscaglia said that he uses AI “for background at times,” but always double-checks the material.
“We are aware that a supplement published by The Inquirer on May 15 contains material generated by AI that is apparently fabricated, outright false, or misleading,” Lisa Hughes, publisher and CEO of the Inquirer, told the Star. “We do not know the extent of this but are taking it seriously and investigating.”
Hughes said that the 56-page printed supplement, called “Heat Index,” also appeared on The Inquirer’s e-edition, but has since been removed.
“The Inquirer newsroom is not involved in the production of these syndicated features,” Hughes added. “Using artificial intelligence to produce content, as was apparently the case with some of the Heat Index material, is a violation of our own internal policies and a serious breach.”
Victor Lim, vice president of marketing and communications at Chicago Public Media, told the Star that the list was licensed content and “was not created by, or approved by, the Sun-Times newsroom.” Lim said the company is investigating how the list made it to print, adding that “it is unacceptable for any content we provide to our readers to be inaccurate.”
“We value our readers’ trust in our reporting and take this very seriously,” he said.
‘Undermines the credibility of media organizations’
“Too often, the embrace of AI is done without considering the potential for unfortunate side effects,” said Jeffrey Dvorkin, a senior fellow at Massey College and the former director of the journalism program at the University of Toronto.
Dvorkin told the Star that when erroneous AI-generated articles are published, it “undermines the credibility of media organizations, making readers even more skeptical and suspicious of the media, which it can ill afford.”
“This is just another nail in our reputation as providers of reliable information,” he added.
Angela Misri, a mystery novelist and journalism professor at º£½ÇÉçÇø¹ÙÍøMetropolitan University who researches AI, told the Star that it is not surprising to see mistakes like this generated by AI chatbots, which are powered by large language models (LLMs).
These models, which are trained on large amounts of digital data scraped from across the internet, are not capable of creating new content, Misri explained. Instead, they compile online content and “mix it up” into something that makes sense, though it doesn’t have to be factual. Sometimes, these models generate fake or made-up material, errors commonly referred to as “hallucinations.”
This sort of error can create ripple effects. Earlier in the day, the Star searched Isabel Allende’s name and the made-up title on Google, and was given a similar yet slightly different synopsis of the non-existent novel. The results included a disclaimer: “AI responses may include mistakes.”

The AI-generated results of a search conducted by the Star. (Google)

Misri said newsrooms are increasingly using AI tools to generate content, while failing to insert human editors to check for mistakes before publication.
“I just don’t understand the comfort level with that,” she said. “Most journalists are so terrified of getting something wrong, we stay up nights because we think we might have gotten something wrong, or when we do get something wrong, it haunts us, right? And I don’t understand how that has been removed from the process of journalism. We’ve removed that editorial brain from the production.”
Misri added that it was “so weird” to see the work she does “literally represented by AI.”
“This damages all of us,” she continued, pointing out that Canadian media is already suffering from a diminishing number of people who are willing to pay for journalism. “New subscriptions are going to drop with this kind of garbage, because who’s gonna pay for that?”