A new AI-powered deepfake tool called ProKYC that allows nefarious actors to bypass high-level KYC measures on crypto exchanges demonstrates a "new level of sophistication" in crypto fraud, says cybersecurity firm Cato Networks.
In an Oct. 9 report, Cato Networks' chief security strategist Etay Maor said the new AI tool represents a significant step up from the old-fashioned methods cybercriminals have used to beat two-factor authentication and KYC checks.
Instead of purchasing forged ID documents on the dark web, AI-powered tools allow fraudsters to spin up brand-new identities out of thin air.
Cato said the new AI tool had been customized specifically to target crypto exchanges and financial firms whose KYC protocols involve matching a webcam picture of a new user's face against their government-issued ID document, such as a passport or driver's license.
A video provided by ProKYC demonstrated how the tool can generate fake ID documents and accompanying deepfake videos to pass the facial recognition challenges used by one of the world's largest crypto exchanges.
In the video, the user creates an AI-generated face and integrates the deepfake image into a template of an Australian passport.
Next, the ProKYC tool creates an accompanying deepfake video and image of the AI-generated person, which is then used to successfully bypass the KYC protocols on the Dubai-based crypto exchange Bybit.
Cato said that with AI-powered tools like ProKYC, threat actors are now far more capable of creating new accounts on crypto exchanges, a practice known as New Account Fraud (NAF).
The ProKYC website offers a package with a camera, virtual emulator, facial animation, fingerprints, and verification photo generation for $629 as part of an annual subscription. Outside of crypto exchanges, it also claims to be capable of bypassing KYC measures for payment platforms Stripe and Revolut, among others.
Related: AI deepfake attacks will extend beyond videos and audio -- Security firms
Maor said properly detecting and safeguarding against this new breed of AI fraud is quite challenging: overly strict systems can trigger false positives, while lax controls let fraudulent actors slip through the net.
"Creating biometric authentication systems that are super restrictive can result in many false-positive alerts. On the other hand, lax controls can result in fraud."
However, there are several potential methods for detecting these AI tools, some of which rely on humans manually identifying unusually high-quality images and videos, as well as inconsistencies in facial movements and image quality.
The penalties for identity fraud in the United States can be severe and vary depending on the nature and extent of the crime, with the maximum penalty being up to 15 years imprisonment and heavy fines.
In September, software firm Gen Digital, the parent company of antivirus firms Norton, Avast, and Avira, reported that crypto scammers using deepfake AI videos to lure victims into fraudulent token schemes have grown increasingly active over the past 10 months.