A recent incident has raised concerns about the use of AI to impersonate someone’s voice without their consent. YouTuber Jeff Geerling, a Raspberry Pi expert, claims that his voice was cloned and used to narrate a tech company’s product tutorial videos.
It’s likely that the voice was created by feeding audio from Geerling’s existing videos into an AI voice-cloning tool. The resulting clone was then used to narrate tutorial videos posted on the YouTube channel of Elecrow, the company in question. Geerling notes that he has no bad blood with Elecrow, having reviewed one of its products in the past.
Geerling is unsure how to proceed, as there is no clear legal precedent for unauthorized AI voice cloning. President Biden has called for a ban on AI voice impersonation, but no concrete law is yet in place. There is, however, long-standing precedent against using someone’s voice in commercial work without their consent.
This is not the first time AI voice cloning has raised concerns. OpenAI was previously embroiled in a controversy over a voice assistant that sounded strikingly like Scarlett Johansson; the company pulled the voice after Johansson hired legal counsel. Similarly, actor Ned Luke blasted a company for using his voice without permission to power a chatbot.
The incident highlights the need for clearer laws and regulations around AI voice cloning. As the technology evolves, the rights and consent of the people whose voices are used must come first. Geerling’s situation serves as a cautionary tale, and a reminder that steps need to be taken to prevent similar incidents in the future.