Start your year strong. Submit your best 2025 work to the Poynter Journalism Prizes.
Did you produce great journalism in 2025? The 2026 Poynter Journalism Prizes are now open for entries across 12 award categories, including two brand new prizes recognizing reporting on climate change and poverty.
The two new categories are the Poynter Journalism Prize for Excellence in Climate Change Reporting, which carries the contest's largest cash prize ever at $10,000, sponsored by the Hennecke Family Foundation, and the Poynter Journalism Prize for Distinguished Reporting on Poverty, which offers a $2,500 award. Other categories honor writing excellence, accountability reporting, justice reporting, diversity leadership, editorial and opinion work, commentary and innovation in journalism. See the full list of categories here.
All U.S. news organizations are eligible to enter, regardless of size or platform. The entry fee is $75 through Jan. 31, then increases to $85. The deadline for entries is 6 p.m. Eastern on Friday, Feb. 13. Read the full press release here.
Visit the contest site to review the full rules and entry steps, see all the categories and submit your work.
If you’re still looking for extra fingers or melting faces to spot AI-generated images, you’re working from an outdated playbook. Experts used to flag synthetic images by checking whether they obeyed basic physics: shadows fell the wrong way, reflections didn’t match, bodies bent strangely. Those tells still show up sometimes, but you can’t rely on them.
I hear this often from accredited fact-checkers around the world confronting AI-driven falsehoods every day. In 2025, AI advanced enough that even experts often struggled to tell, at first glance, whether a viral image or video was authentic or created with AI. The tools keep improving, from image generators such as Google’s Nano Banana to video models like OpenAI’s Sora, and the same is now true for audio: voice clones can fool people who know the speaker.
The practical implication is simple: you can’t treat realistic-looking content as evidence it’s authentic.
If you mistakenly share something false, the damage is not limited to that one post or that one story. It can undercut credibility you’ve built over years, and the correction rarely travels as far or as fast as the mistake.
Three tips to avoid being misled online:

1. Don’t mistake polish for proof. A clean image, a natural voice, and a plausible video no longer earn trust by default. When a post creates urgency, anger, or shock, pause and ask two questions: Where did this come from, and what evidence would confirm it?

2. Check the source, not the shortcut. AI detectors can be wrong in both directions, missing synthetic content and flagging authentic material. Use them, if at all, as a minor input, not a conclusion. Put your weight on accountability instead: who posted it first, what is their record for accuracy, and who is willing to answer questions on the record?

3. Confirm off-platform, on trusted channels. Posts, clips, and screenshots are leads, not evidence, and the account sharing them may not be the source. Look for confirmation beyond the platform through official statements, records, and direct outreach using contact information you already trust.
AI has lowered the cost of making convincing fakes. For those of us who depend on audience trust, it has raised the cost of getting it wrong. When in doubt, slow down, trace the origin, and confirm independently.
💡 See IFCN’s latest articles on developments in fact-checking. Subscribe to Factually, IFCN’s newsletter about navigating misinformation and getting at the truth.
Need a language and style primer? Check out these pieces from Poynter.