Over the last few months, I’ve taught generative AI ethics and literacy to hundreds of journalists and media professionals from Alabama to Egypt. One thing stands out: Few organizations have AI ethics guidelines.
That’s troubling because your audience wants AI policies, according to a large-scale study Poynter conducted with the University of Minnesota. And it seems every week we witness an AI-related mishap at a news organization, which further frays the tenuous trust we have left with our readers.
When crafting a policy, start with your values. Your ethics guidelines and AI principles should flow downstream from those values. For example:
Fairness as a value might lead you to create an AI steering committee with diverse members from across the newsroom to develop guidelines, and to conduct regular bias audits to make sure your AI use isn’t unfair to certain communities.
Transparency means a strategy for disclosing when and where you use AI. The more public the work and the heavier the AI assistance, the more transparent you should be.
Accountability means you should have a corrections policy for AI-related errors and consequences for when a reporter or editor runs afoul of your guidelines.
And a very specific example: If environmental sustainability is among your values, consider guardrails against excessive use of paid AI models, and guidance for using open-source models on your own computers.
Change is hard. Especially in the media industry. But AI guidelines can be a useful springboard to prepare your organization or team for the upheaval the technology will bring, and creating them is easier than you think.