Google is testing an AI tool that can write news articles

Google is testing a tool that uses AI to write news stories and has started pitching it to publications, according to a new report from The New York Times. The tech giant has demonstrated the AI tool to The New York Times, The Washington Post, and News Corp, owner of The Wall Street Journal.

The tool, internally code-named “Event”, can take in information and then generate news copy. Google reportedly believes the tool could serve as a journalist’s personal assistant, automating some tasks to free up time for others. The tech giant sees the tool as a form of “responsible technology”.

The New York Times reported that some executives who saw the tool’s pitch found it “unsettling,” noting that it seemed to take for granted the effort that goes into producing accurate news.

“In partnerships with news publishers, particularly smaller publishers, we are in the early stages of exploring ideas for providing AI-enabled tools to assist journalists with their work,” a Google spokesperson said in a statement to Zero2Billions.

“For example, an AI-enabled tool can help journalists with their choice of headlines or different writing styles,” the spokesperson added. “Our goal is to give journalists options to use this new technology in ways that enhance their work and productivity, just as we provide helpful tools for people in Gmail and Google Docs. Simply put these tools are not intended to, and cannot, replace the important role that journalists have in reporting, creating and fact checking their articles.”

The report comes as several news organizations, including NPR and Insider, have informed employees that they intend to explore how AI can be used responsibly in their newsrooms.

Some news organizations, including The Associated Press, have long used AI to generate stories for things like company earnings reports, but these stories represent only a small fraction of their overall output, most of which is written by journalists.

Google’s new tool is likely to spur anxiety, as AI-generated articles that aren’t fact-checked or thoroughly edited can spread misinformation.

Earlier this year, American media website CNET quietly started producing articles using generative AI, a move that eventually backfired for the company. CNET had to issue corrections on more than half of its AI-generated articles: some contained factual errors, while others appeared to include plagiarized material. Some of the site’s articles now carry an editor’s note that reads, “An earlier version of this article was assisted by an AI engine. This version has been substantially updated by a staff writer.”
