Google is testing a tool that uses AI to write news stories and has started pitching it to publications, according to a new report from The New York Times. The tech giant has pitched the AI tool to The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp.
The tool, internally codenamed “Genesis,” can take in information and then generate news copy. Google reportedly believes that the tool can serve as a personal assistant for journalists by automating some tasks in order to free up time for others. The tech giant sees the tool as a form of “responsible technology.”
The New York Times reports that some executives who were pitched on the tool saw it as “unsettling,” noting that it seemed to disregard the effort that went into producing accurate news stories.
“In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work,” a Google spokesperson said in a statement to TechCrunch.
“For instance, AI-enabled tools could assist journalists with options for headlines or different writing styles,” the spokesperson added. “Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity, just like we’re making assistive tools available for people in Gmail and in Google Docs. Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles.”
The report comes as several news organizations, including NPR and Insider, have notified employees that they intend to explore how AI could responsibly be used in their newsrooms.
Some news organizations, including The Associated Press, have long used AI to generate stories for things like corporate earnings, but these news stories represent a small fraction of the organization’s articles overall, which are written by journalists.
Google’s new tool will likely spur anxiety, as AI-generated articles that aren’t fact-checked or thoroughly edited have the potential to spread misinformation.
Earlier this year, American media website CNET quietly began producing articles using generative AI, a move that ended up backfiring for the company. CNET had to issue corrections on more than half of the articles generated by AI. Some of the articles contained factual errors, while others may have contained plagiarized material. Some of the website’s articles now have an editor’s note reading, “An earlier version of this article was assisted by an AI engine. This version has been substantially updated by a staff writer.”