As artificial intelligence features proliferate, it is becoming easier to be fooled by fake photos, audio and videos created with AI tools. Among the most widely used of these tools are OpenAI's GPT-4o, the model behind ChatGPT's voice features, and Google's Gemini. Now, researchers have shown that GPT-4o can be abused to run convincing voice-based scams.
According to a report by BleepingComputer, researchers have demonstrated that OpenAI's real-time voice API for GPT-4o can be abused to build automated voice-scam agents. The scams they tested achieved low to moderate success rates, the report suggests, and rely primarily on the model's ability to produce a convincing synthetic voice.
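For context, the real-time voice API referenced in the report is accessed over a WebSocket connection rather than the usual HTTP chat endpoints. The snippet below is a minimal sketch of what an ordinary client connection looks like, assuming the `websocket-client` Python package and the preview model name `gpt-4o-realtime-preview`; the exact model identifier and event names may differ from OpenAI's current documentation.

```python
import json
import os
import websocket  # pip install websocket-client

# Assumed preview model name; check OpenAI's docs for the current identifier.
URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = [
    "Authorization: Bearer " + os.environ["OPENAI_API_KEY"],
    "OpenAI-Beta: realtime=v1",
]

def on_open(ws):
    # Ask the model to respond with speech plus a text transcript.
    ws.send(json.dumps({
        "type": "response.create",
        "response": {
            "modalities": ["audio", "text"],
            "instructions": "Greet the caller politely.",
        },
    }))

def on_message(ws, message):
    # The server streams back events such as audio chunks and text deltas.
    event = json.loads(message)
    print(event.get("type"))

ws = websocket.WebSocketApp(URL, header=HEADERS,
                            on_open=on_open, on_message=on_message)
ws.run_forever()
```

Because the API speaks and listens in real time, the researchers' concern is that the same plumbing that powers a helpful voice assistant can be scripted to hold a scam phone call with no human on the attacker's end.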
Voice scams are already a "multi-million dollar problem", the source notes, and combining them with deepfake technology and AI-powered text-to-speech tools could make matters worse. The report further suggests that such tools could be used to run large-scale scam operations with little to no human effort.
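The text-to-speech capability the report refers to is also exposed as a standalone API. As an illustration only, here is a minimal sketch using OpenAI's official Python SDK; the model name `tts-1`, the voice `alloy` and the sample phrase are assumptions based on the publicly documented speech endpoint and may change.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate spoken audio from text; "tts-1" and "alloy" are assumed values
# from the publicly documented text-to-speech endpoint.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello, this is an automated assistant calling to confirm your appointment.",
)

# Write the returned audio bytes to an MP3 file.
with open("greeting.mp3", "wb") as f:
    f.write(speech.content)
```

A few lines like these are enough to turn arbitrary text into natural-sounding speech, which is why the report argues that pairing such tools with automated dialling could scale scam calls cheaply.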
Speaking to BleepingComputer, OpenAI explained that its latest model, which is currently in preview, supports "advanced reasoning" and was built to better detect these kinds of abuse: "We're constantly making ChatGPT better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity," the company's spokesperson told the publication.
Notably, while GPT-4o's voice capabilities make it possible to build AI agents that converse like a real person, ChatGPT does include safeguards intended to prevent its tools from being abused in this way. Even so, the researchers were able to work around those protections, the source suggests, and the success rate of the resulting scams varied from one target to another, with stealing credentials from Gmail cited as one example.