In a world dominated by conversations about artificial intelligence, some new technologies may sound scarier than they are. Two of Apple's newly announced accessibility features fit that description: the first is called Live Speech, and the second, designed to enhance Live Speech, is called Personal Voice.
Live Speech allows users on iPhone, iPad and Mac to type something and have it spoken aloud during phone, FaceTime and in-person conversations. Users will be able to save certain common phrases for quick use during a conversation.
This feature was designed, according to the update, to “support millions of people globally who are unable to speak or who have lost their speech over time.”
Building on Live Speech, Personal Voice will allow users at risk of losing their voice to create a digital voice that sounds like them. This digital vocal clone takes only about 15 minutes to create.
Apple added that the technology making this possible is “on-device machine learning,” which keeps users’ data private and secure while integrating with Live Speech.
“At the end of the day, the most important thing is being able to communicate with friends and family,” Philip Green, a board member and ALS advocate, said in a statement. “If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world.”
The tech giant revealed a number of other accessibility updates designed with disabled communities in mind, notably “Point and Speak,” a feature that “identifies text users point toward and reads it out loud to help them interact with physical objects such as household appliances.”
“At Apple, we’ve always believed that the best technology is technology built for everyone,” Tim Cook, Apple’s CEO, said.