Google is testing ‘Genesis’, a news-writing AI platform
As if AI were ever far from the field of journalism, Google is now testing a new AI tool named ‘Genesis’ with news organisations, aimed at “assisting journalists” in writing news stories. Google’s latest effort in the rapidly advancing AI race is said to streamline tasks and produce news copy directly from the data fed into it, whether current events or other types of information.
In fact, Google seems to position the tool as an assistant that automates routine tasks and frees journalists up for other work. A report from The New York Times states that Google has already demonstrated the AI tool for executives from the Times itself, as well as for those from The Washington Post and News Corp, which owns The Wall Street Journal.
Two of the executives did not seem keen on the AI tool, saying that it “seemed to take for granted the effort that went into producing accurate and artful news stories.” Google, for its part, maintained that its work on the tool is still at an early, exploratory stage.
Ever since the onset of the ChatGPT-led AI era, journalism has been one area where the deployment of generative AI models seemed imminent. While some have attempted it, the effort hasn’t brought much success, leading neither to the increased engagement nor the better content quality that was expected. Additionally, ChatGPT is prone to outputting false and inaccurate information, keeping it further from use by journalists. So far, though, no one has taken a focused approach to developing an AI model tailored specifically to creating journalistic content.
Jenn Crider, a Google spokeswoman, said in a statement that “in partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide A.I.-enabled tools to help their journalists with their work.”
Should Genesis actually roll out as a “helpmate” to journalists, the positives could be worth considering. AI-generated content could offer personalized news experiences tailored to individual readers’ preferences, increasing user engagement and loyalty, and it could make news coverage more comprehensive and accurate by providing journalists with data-driven insights and proper context. If used carelessly, however, as more of a quantity-generating, journalist-replacing tool, the credibility of news organisations could well be questioned, along with that of the tool itself.
“For instance, AI-enabled tools could assist journalists with options for headlines or different writing styles. Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity, just like we’re making assistive tools available for people in Gmail and in Google Docs. Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles,” the company wrote in its official statement.
That said, the use of AI has disturbing implications for journalism as well. So far, the likes of ChatGPT, Bing Chat, and Bard have been known to provide inaccurate responses, and using an AI tool to generate news copy and stories carries a massive risk of generating misinformation and disseminating it through media channels. This can, obviously, lead to greater and graver consequences, but that is a different can of worms. Furthermore, the use of AI in journalism, even as an assistant, carries significant ethical concerns around transparency and accountability.
The use of AI tools to aid jobs in professional sectors has already set a bad precedent: a New York lawyer found out the hard way that ChatGPT had invented cases and lawsuits out of thin air, while US-based media website CNET got the short end of the stick after it published several articles produced with generative AI. CNET later had to issue corrections on more than half of those AI-generated articles owing to plagiarism or numerous factual errors.