From: Project Liberty <[email protected]>
Subject: 🖥 How to protect yourself from AI-generated fraud
Date: January 9, 2024, 3:48 PM
  Links have been removed from this email. Learn more in the FAQ.
We explore how online scams are changing with AI and what we can do about it

View in browser ([link removed] )

January 9, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here ([link removed] ) .

Image from Project Liberty

The rise of AI-generated fraud

Last year, Jennifer DeStefano received a horrifying call.

It was a call from an unknown number, but when the Arizona mother picked up, her 15-year-old daughter’s voice was unmistakable: “Mom, I messed up. Help me, Mom. Please help me.”

With her daughter crying in the background, a man’s voice came on the phone demanding a $1 million ransom for her safe return.

“It was completely her voice. It was her inflection. It was the way she would have cried. I never doubted for one second it was her,” DeStefano said ([link removed] ) . “A mother knows her child.”

Fortunately, she quickly confirmed that her daughter was actually safe, at which point she realized she had a scammer on the phone. But how had the callers reproduced her daughter's voice so accurately?

According to FBI investigators, the voice of her daughter on the phone was the work of an AI-generated voice deepfake ([link removed] ) , an increasingly common type of fraud.

We’ve entered a dangerous renaissance in AI-powered online fraud, and this week, we’re exploring the latest in scam tech and the solutions to tackle it.

// How synthetic fraud works

Creating a clone of a loved one’s voice is an example of synthetic fraud ([link removed] ) , where scammers use real data found online to create fake personas and new identities.

- Last year, the news program 60 Minutes hired an ethical hacker to generate a voice deepfake to show how convincing it could be, successfully deceiving a 60 Minutes employee ([link removed] ) .

To create a voice deepfake, a scammer can download an audio clip ([link removed] ) of someone's voice from a social media video or voice message—even if the clip is only 30 seconds long—and use AI-powered synthesizing tools to make that voice say anything.

Given the size of our digital footprints ([link removed] ) , scammers have a lot to work with. They can also transform stolen or illegally-purchased data ([link removed] ) , like Social Security Numbers, addresses, and bank account details, into entirely new identities.

Alain Meier, Head of Identity at Plaid ([link removed] ) , a company that builds tech products for the banking and financial industries, said that scammers are finding photos of victims from their social media ([link removed] ) and 3D-printing realistic masks to create fake IDs that bypass facial recognition technology. Plaid has begun to analyze the texture and translucency of skin to authenticate when users are logging in with facial-recognition technology.

//

Globally, the cost of cybercrime was projected to hit $8 trillion in 2023, or more than the GDP of Japan.

//

// The AI x-factor

AI is making scams more convincing, harder to detect, and easier to perpetrate at scale.

- Deepfakes: There were 550% more deepfake videos ([link removed] ) online in 2023 than there were in 2019. Last year, a Russian-generated fake image of an explosion at the Pentagon impacted US financial markets ([link removed] ) .
- AI chatbots: AI chatbots are making text-based online fraud far easier to perpetrate because they eliminate obvious grammatical mistakes and typos ([link removed] ) . SlashNext, a cybersecurity company, found through its own data that AI chatbots helped drive a 1,265% increase ([link removed] ) in the number of malicious emails sent between 2022 and 2023.
- The rise of “cheapfakes”: Cheapfakes ([link removed] ) are manipulated media made with readily available technology (from in-camera effects to Photoshop) that is often easy to use and either cheap or completely free.

// The numbers don't lie

- According to Federal Trade Commission (FTC) data, Americans lost nearly $8.8 billion ([link removed] ) to all scams (not just cybercrime) in 2022, a 44% increase from 2021.
- Globally, the cost of cybercrime was projected to hit $8 trillion in 2023 ([link removed] ) , more than the GDP of Japan, the world’s third-largest economy. By 2025, cybercrime costs are projected to exceed $10 trillion, roughly a 3x increase in a single decade.
- Consumers are not the only victims. In the last year alone, 37% of global businesses ([link removed] ) have been impacted by synthetic voice fraud.

// The solutions

Where there is growing fraud, there is a growing effort to combat it.

The individual level

Consumer education and action are critical.

- The FBI provides a list of recommendations ([link removed] ) on how to avoid becoming a victim (for example, do not purchase or send sensitive information when connecting to a public Wi-Fi network), and the Federal Trade Commission offers guidance on how to protect personal data ([link removed] ) (for example, use multi-factor authentication to secure your accounts).
- For those who have been scammed, one of the biggest dangers is the “recovery scam ([link removed] ) ” where victims get re-scammed by fake scam mitigation services that promise to recover the initial money lost.
- By filing a fraud report with the FTC ([link removed] ) and sharing what happened with their communities, consumers can reduce the likelihood that others will become victims.
- The MIT Media Lab built Detect DeepFakes ([link removed] ) to help people learn the telltale signs that distinguish deepfakes from authentic media.
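The FTC's advice to turn on multi-factor authentication is easy to follow without knowing the mechanics, but for curious readers, here is a minimal sketch of the TOTP algorithm (RFC 6238) behind most authenticator apps: a shared secret plus the current time yields a short code that expires every 30 seconds. This is illustrative only; real systems should use a vetted library.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """Derive a time-based one-time code from a shared secret (RFC 6238)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59s -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

Because both your phone and the server derive the code from the same secret and clock, a scammer who steals only your password still can't produce the current code.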

The technology level

The same technology enabling online fraud can also be used to detect and prevent it ([link removed] ) .

- AI can be used to analyze millions of digital financial transactions ([link removed] ) and identify patterns or anomalies that might indicate fraud.
- AI can ingest massive amounts of data and identify if an image is authentic or just a deepfake ([link removed] ) . The startup DeepMedia, which recently won a contract with the Pentagon ([link removed] ) to use its technology to detect deepfakes, creates its own fake images, audio, and video to refine and improve the accuracy of its detection tools.
- AI can also help in fraud detection on social media platforms. Platforms like Meta use AI to moderate harmful content ([link removed] ) and remove deepfakes, which are banned across their platforms. In 2020, Meta ran a competition ([link removed] ) for new tech innovations to identify deepfakes.
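The transaction-monitoring idea in the first bullet can be illustrated with a toy example (not any company's actual system): flag charges whose amounts sit far outside a customer's typical spending, here scored with a robust median-based "modified z-score" so the outlier itself doesn't skew the baseline.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of amounts whose modified z-score exceeds `threshold`.

    Uses the median and median absolute deviation (MAD), which are less
    distorted by the very outliers we are trying to find than mean/stdev.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # no spread in the history, nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

history = [42.0, 19.5, 33.2, 27.8, 51.0, 38.4, 22.1, 4800.0]  # one suspicious charge
print(flag_anomalies(history))  # → [7]: the $4,800 charge is flagged
```

Production fraud systems combine many more signals (merchant, location, device, timing) and learned models, but the core pattern is the same: score each transaction against the customer's own history.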

The corporate level

Corporations are aggressively fighting online fraud with software that detects unusual consumer behavior ([link removed] ) and transaction histories that might indicate a scam is afoot.

- The company Pindrop ([link removed] ) monitors audio calls and provides anti-fraud and authentication technologies to major financial institutions.
- In the face of more sophisticated fraud, banks and other financial institutions are turning to more advanced biometrics ([link removed] ) to authenticate customers.

The government level

Regulators and government agencies are also stepping up.

- In the US, the Fraud and Scam Reduction Act was passed in 2022 ([link removed] ) . The act strengthens the FTC’s efforts to prevent and respond to fraud perpetrated against elderly Americans.
- Last year, American lawmakers put pressure on banks to refund consumers ([link removed] ) who unknowingly participated in a fraud with the peer-to-peer cash transfer service, Zelle.
- The US federal government has a Consumer Financial Protection Bureau ([link removed] ) that protects customers from unfair practices and fraud in banking, lending, and financial services.

// From vulnerability to vigilance

While a phone call from a kidnapper might sound like an extreme, worst-case scenario, we’ve entered an age when it’s harder than ever to distinguish truth from fiction and actual voices from synthetic ones. This creates opportunities for fraudsters, but it also requires consumers, companies, and government agencies to step up.

Who do you know who needs to be prepared for AI-powered fraud? Consider forwarding this email.

Project Liberty news

// 🤔 Can disruptive technology be responsible in 2024 and beyond? That’s the question posed in a recent policy brief in Tech Policy Press ([link removed] ) , co-written by Project Liberty’s Institute and the Centre for International Governance Innovation.

Other notable headlines

// 🍪 In the biggest change in the $600 billion-a-year online-ad industry, Google is finally killing cookies, according to an article in the Wall Street Journal ([link removed] ) .

// 🗣 An article in The Markup ([link removed] ) reported on new research that found assigning a role to a chatbot doesn’t result in more accurate responses.

// 🤖 An article in WIRED ([link removed] ) explored a new generation of robots built to provide care and emotional support for people with dementia.

// 👂 New AI technology can differentiate between tuberculosis and other respiratory conditions just by hearing the sound of a cough, according to an article in the MIT Technology Review ([link removed] ) .

// 👩‍💻 An article in Tech Policy Press ([link removed] ) considered the high cost everyone will pay for big tech laying off trust and safety teams.

// 🧱 What’s ahead for crypto and Web3 technologies in 2024? A series of articles from CoinDesk ([link removed] ) predicted the future.

// 📞 The most radical new year’s resolution: switching to a flip phone. Kashmir Hill, a reporter at The New York Times, gave up her iPhone for a month and shared her experience ([link removed] ) .

// 🧮 An article in TechCrunch ([link removed] ) explored how the world is shifting from software to data, and how data ownership will lead the next tech megacycle.

Partner news & opportunities

// Virtual conference on open data & civic tech

January 17-19

The U.S. Census Bureau ([link removed] ) is hosting the Census Open Innovation Summit 2024, an annual innovation conference showcasing technology built with open data, and highlighting government innovations, cross-sector collaboration, and federal-community partnership. Register here ([link removed] ) .

// Responsible AI Symposium

January 18-19, Washington DC

The National Fair Housing Alliance ([link removed] ) is hosting a Responsible AI Symposium in Washington DC where researchers, innovators, civil and human rights experts, and regulators will explore algorithmic fairness. Register here ([link removed] ) .

// Virtual celebration of works entering the public domain

January 25 at 1pm ET

Creative Commons ([link removed] ) , Internet Archive ([link removed] ) , and other leaders are hosting Public Domain Day 2024, a virtual celebration of previously copyrighted works that are entering the public domain in 2024 (like the mouse that became Mickey). Register here ([link removed] ) .

// New collaboration between Georgetown & Ukrainian government

A new collaboration between Georgetown’s McCourt School of Public Policy ([link removed] ) and the Ukrainian government will connect Ukrainian digital leaders with McCourt and Georgetown faculty for an immersive learning experience in areas including public policy, digital public infrastructure, data privacy, and cybersecurity. Learn more ([link removed] ) .

/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )

X Logo (formerly Twitter) ([link removed] )

Instagram ([link removed] )

PLslashes_logo_green ([link removed] )

501 W 30th Street, Suite 40A,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2023 Project Liberty