Many AI companies claim to be dedicated to seemingly conflicting goals: accelerating technological progress as rapidly—but also as safely—as possible.
According to RAND's Douglas Yeung, these companies don't seem to be succeeding. Their claims about the safety of AI products are undercut by reports of negligence and secrecy aimed at preserving profits, accelerating progress, or simply sparing the feelings of leaders.
So what can AI companies do differently?
To start, they could eliminate nondisparagement and confidentiality clauses that discourage employees from raising safety concerns. They could go a step further by encouraging employees and customers to identify flaws or safety issues in their software. Finally, they could learn from how other organizations and teams cultivate constructive dissent and promote transparency.
Taking steps such as these is vital to ensuring the safe development of AI technologies. “The stakes of silencing those who don't toe the company line, instead of viewing them as vital sources of mission-critical information,” Yeung says, “are too high to ignore.”