In a stark warning, Starling Bank has highlighted the rise of AI-powered voice cloning scams that could ensnare millions, including many who are unaware such tactics exist. The online lender explains that with just a brief audio clip, likely gleaned from social media or other online platforms, fraudsters can use artificial intelligence to mimic an individual's voice with alarming precision. This technology enables them to deceive friends and family into parting with money under false pretenses.
Starling Bank's recent survey of over 3,000 adults, in collaboration with Mortar Research, reveals a distressing reality: more than a quarter of those surveyed reported being targeted by voice cloning scams within the last year. A significant portion of respondents, 46%, were found to be oblivious to the existence of these scams, and 8% admitted they would likely comply with a request for funds, even from a suspicious call.
Lisa Grahame, the bank's Chief Information Security Officer, underscores the urgency of the situation, noting that the casual sharing of voice recordings online significantly increases the risk of falling prey to such scams. In response, Starling Bank advocates for the adoption of a "safe phrase" strategy. This involves establishing a pre-agreed, memorable, and unique phrase known only to trusted individuals, used to authenticate identity during phone calls.
The bank also advises against communicating this safe phrase via text, as the message could be intercepted by scammers; if it must be shared by text, the message should be deleted as soon as it has been read. As AI continues to advance, concerns are growing about its misuse in fraud and disinformation, and OpenAI's decision not to release its Voice Engine to the public over similar concerns underscores the wariness now surrounding synthetic voice technology.
The warning from Starling Bank serves as a call to action for individuals to remain vigilant and to take proactive measures to safeguard themselves against the evolving landscape of AI-enabled fraud.