Artificial intelligence could bring many benefits, but its proliferation also comes with potential dangers. For example, if an AI model is stolen or leaked, it could easily end up in the wrong hands, with dire consequences.
This particular risk isn't hypothetical. It has already happened. Earlier this year, an AI model developed by Meta (which was not intended to be publicly accessible) was leaked online. Fortunately, Meta's model is relatively harmless.
But as RAND president and CEO Jason Matheny writes in the Washington Post, the next AI model that is compromised may not be so benign. That's why it's time to establish a system of oversight that focuses on the three parts of the AI supply chain: hardware, training, and deployment.
Taking action to increase the safety of the U.S. AI industry would go a long way toward boosting public confidence, he says, especially as consumers grow wary of just what sort of future AI might bring.