On Tuesday, my colleague Ren LaForme wrote about how The New York Times put out a rare editors’ note, trying to explain why it rewrote a headline about last week’s bombing at a hospital in Gaza. Many news outlets, including the Times, initially relied on claims from Hamas that Israel was responsible for the attack.
The Times’ note read, “The Times’s initial accounts attributed the claim of Israeli responsibility to Palestinian officials, and noted that the Israeli military said it was investigating the blast. However, the early versions of the coverage — and the prominence it received in a headline, news alert and social media channels — relied too heavily on claims by Hamas, and did not make clear that those claims could not immediately be verified. The report left readers with an incorrect impression about what was known and how credible the account was.”
The Times changed its early headline and said that within two hours, “the headline and other text at the top of the website reflected the scope of the explosion and the dispute over responsibility.”
It added, “Given the sensitive nature of the news during a widening conflict, and the prominent promotion it received, Times editors should have taken more care with the initial presentation, and been more explicit about what information could be verified.”
Of course, it’s easy to see that in hindsight.
But now, Vanity Fair’s Charlotte Klein reports that there was concern in real time inside the Times about its coverage. That’s based on Times Slack messages obtained by Klein.
Klein wrote, “… senior editors appear to have dismissed suggestions from an international editor, along with a junior reporter stationed in Israel who has been contributing to the paper’s coverage of the war, that the paper hedge in its framing of events.”
In the Times’ first online story about the bombing, the headline read, “Israeli Strike Kills Hundreds in Hospital, Palestinian Officials Say.”
According to Klein’s story, a senior news editor tagged two senior editors on the Times’ live team and wrote in Slack, “I think we can be a bit more direct in the lead: At least 500 people were killed on Tuesday by an Israel airstrike at a hospital in Gaza City, the Palestinian authorities said.”
One of those editors said, “You don’t want to hedge it?”
A junior reporter covering the war from Jerusalem wrote, “Better to hedge.”
The senior news editor replied, “We’re attributing.”
Later, an editor on the international desk said in the same Slack channel, “The (headline) on the (home page) goes way too far.”
When questioned about that, the international editor wrote, “I think we can’t just hang the attribution of something so big on one source without having tried to verify it. And then slap it across the top of the (homepage). Putting the attribution at the end doesn’t give us cover, if we’ve been burned and we’re wrong.”
Klein didn’t identify any of the editors by name, and the Times declined her request for comment, but it would appear the international editor was correct.
Meanwhile, New York Times executive editor Joe Kahn spoke with the Times’ Lulu Garcia-Navarro about the editors’ note and how the Times plans to cover the widening conflict between Israel and Hamas.
Also, check out the latest from NPR’s David Folkenflik: “News outlets backtrack on Gaza blast after relying on Hamas as key source.”
Gannett takes down Reviewed articles after outcry from staff
For this item, I turn it over to my Poynter colleague, Angela Fu.
Reviewed, Gannett’s product reviews site, took down several affiliate marketing articles that some of its journalists claimed were generated by artificial intelligence.
The articles in question first went up on Friday and included reviews of products that Reviewed does not typically cover, like dietary supplements, according to the Reviewed Union, which represents journalists and lab and operations workers at the outlet. The posts, which were part of a new shopping page, did not have bylines, and union members decried the work as an attempt to replace their labor. By Tuesday morning, the page was gone. Reviewed then republished the stories in the afternoon with a disclaimer that they had not been written by staff before taking the page down again.
As of Tuesday evening, the shopping page was still down, though links to individual stories still worked.
The articles were created by third-party freelancers hired by a marketing agency partner, not AI, Reviewed spokesperson Lark-Marie Anton wrote in an emailed statement: “The pages were deployed without the accurate affiliate disclaimers and did not meet our editorial standards.”
Reviewed follows USA Today’s ethical guidelines regarding AI-generated content, Anton added. Those guidelines stipulate that journalists disclose the use of AI and its limitations when publishing AI-assisted content.
One of the freelancers credited on the shopping page wrote on his LinkedIn profile that he has experience in “(d)etail-oriented and eloquent copywriting and editing focused on polishing AI generative text.”
On Tuesday, the Reviewed Union, part of the NewsGuild of New York, publicly blasted the company, claiming that the articles had been made using AI tools. They pointed to “a mechanical tone and repetitive phrases” within the reviews.
The union also suggested that several of the freelancers listed on Reviewed’s page were not real. They identified “nondescript” and “stilted” biographies and said that Google searches had failed to uncover past work or LinkedIn profiles.
The dispute comes a few weeks after unionized staff at Reviewed staged a one-day strike. Workers there went public with their union drive in December and will soon begin negotiating a first contract.
“The timing of it is no accident,” said senior editor Alex Kane. “It’s an ugly card for them to play because it goes against everything we’ve heard for the years that I’ve been at Reviewed about how authorities, expertise, quality and helping readers are what matters.”
Unionized journalists at other Gannett newsrooms are currently trying to negotiate protections against the use of AI. They seek to ensure that their work will not be replaced by AI and that any content produced with the help of AI meets journalistic standards. Gannett faced heavy criticism in August after it partnered with Lede AI to generate high school sports recaps that contained awkward phrasing and errors.
33 states sue Meta