
Apple’s New AI Feature Lets You Clone Your Voice

Apple released a new feature this week as part of its ongoing iOS 17 public beta – the ability to clone your voice and use it across the iPhone’s native and third-party communication applications. 

Personal Voice for the iPhone uses artificial intelligence (AI) to create a near-exact replica of your voice that is then stored on the phone, relying on what Apple refers to as “on-device machine learning” to help ensure user privacy.
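For developers wondering how a third-party app could tap into a user’s stored voice, Apple’s WWDC23 speech-synthesis additions describe an authorization flow in AVFoundation. The Swift sketch below is illustrative only and assumes the iOS 17 APIs shown at that session (AVSpeechSynthesizer’s personal-voice authorization request and the `.isPersonalVoice` voice trait); it is not a complete integration.

```swift
import AVFoundation

// Keep the synthesizer alive beyond the function call; a local instance could
// be deallocated before speech playback finishes.
let synthesizer = AVSpeechSynthesizer()

func speakWithPersonalVoice(_ text: String) {
    // Ask the user for permission to use voices created with Personal Voice.
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }

        // Voices created with Personal Voice carry the `.isPersonalVoice` trait.
        guard let voice = AVSpeechSynthesisVoice.speechVoices()
            .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) else { return }

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}
```

Because authorization is explicit and the model never leaves the device, an app only ever sees the synthesized audio, not the underlying voice data.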

Apple first teased these new software features back in May, saying they would cater specifically to cognitive, vision, hearing, and mobility accessibility and were expected to arrive later this year.

The following month, the Cupertino-based tech company first announced iOS 17 at WWDC23, its annual developers conference, where it spoke in more detail about new features – Contact Posters, Live Voicemail, FaceTime audio and video messages, Personal Voice, and more.

What is ‘Personal Voice’?

Last month, Apple released the second iOS 17 public beta, which added Personal Voice to a growing lineup of previously introduced features, including, but not limited to, Contact Posters, Live Voicemail, and StandBy Mode.

For users at risk of losing their ability to speak, including those diagnosed with ALS or other conditions that progressively impair speech, Personal Voice is that bridge: a speech accessibility feature that uses on-device machine learning and invites users to read a randomized set of text prompts so the device can capture the individual’s voice.

“Accessibility is part of everything we do at Apple,” said Sarah Herrlinger, Apple’s senior director of Global Accessibility Policy and Initiatives, in a press release. “These groundbreaking features were designed with feedback from members of disability communities every step of the way, to support a diverse set of users and help people connect in new ways.”

On Tuesday, CNET’s Nelson Aguilar shared his experience as he tested out the new Personal Voice feature, which lives under Settings → Accessibility → Live Speech → Voices → Personal Voice.

“You’ll have to read out loud 150 phrases, which differ in length,” he said, noting that if you make a mistake during recording, you can simply hit the record button to re-record the phrase. Aguilar added that, depending on how quickly an individual speaks, the process may take anywhere from 20 to 30 minutes to complete.

“At the end of the day, the most important thing is being able to communicate with friends and family,” said Philip Green, board member and ALS advocate at the Team Gleason nonprofit, who has experienced significant changes to his voice since receiving his ALS diagnosis in 2018. “If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary.”

The general release of iOS 17 is expected sometime in September.
