AI tools are already changing the finance sector, opening up new ways for individuals to interact with their banks and service providers. But without thoughtful regulation, they could destabilize the financial system, Roosevelt Fellow Todd Phillips argues in a new brief. As more financial services firms begin to use generative AI agents, Phillips explores the risks they pose and how policymakers should respond.
Generative AI agents can include everything from customer service chatbots to models that determine credit ratings—and their potential harms are just as wide-ranging. AI agents could allow malicious actors to engage in fraud, manipulate markets, or conduct cyberattacks; they could hallucinate, generating false outputs that harm customers; or they could engage in herding behavior—reacting to market conditions in nearly identical ways—that leads to bank runs or flash crashes.
Concerns around AI agents are “not simply about the use of algorithms in finance,” Phillips writes, “but about a world in which AI agents are widely available to individuals and small businesses as well as the largest financial firms; in which malicious actors may easily use Generative AI to scam financial institutions and their customers; and in which financial institutions use Generative AI to interact with their customers, rather than human employees.”
“To ensure that the worst potential outcomes do not become reality,” Phillips writes, policymakers must act.
Read on for the six things Congress can do.