After David DuBois, a U.S. Army veteran and former chief of physical security for the U.S. Capitol Police, was diagnosed with ALS in 2022, his family went more than two years without hearing his voice.
Using less than 40 minutes of old voicemails and videos, voice AI company WellSaid created a custom voice that matched DuBois's real voice, complete with his emotion and humor, including his favorite phrase, "You're killin' me, Smalls."
WellSaid says it ultimately wants to focus on artificial intelligence that augments humanity rather than replacing humans.
Michael Petrochuk developed a realistic AI voice algorithm just three months after graduating from the University of Washington, while working at Paul Allen's Ai2, a nonprofit research institute exploring the possibilities of artificial intelligence. It was there that he met Matt Hocking, who would later become WellSaid's co-founder.
Petrochuk, who has autism, was inspired to turn his challenges into opportunities, and WellSaid was one of the ways he sought to make a positive impact on the world.
“I was born with more neurons in my brain than most people, which means my brain is always working at full speed – thinking, processing, feeling,” Petrochuk says. “I often generate ideas that many miss. I notice patterns throughout my work and use them to generate important insights.”
WellSaid has been a strong advocate for responsible AI in the text-to-speech space: by the time reports about AI risks became widely known, it already had programs up and running around revenue sharing, content moderation, and voice actor anonymity.
WellSaid launched in 2018 and was initially built to help educators create informational content. Today, WellSaid is used to inform and support millions of people, including voice actors, seniors, customers with disabilities, and affiliated organizations like Audible Sight, which provides captioning with realistic, human-like audio for the visually impaired.
What differentiates WellSaid from its competitors?
WellSaid's competitors include ElevenLabs and Murf AI, but WellSaid differentiates itself by training tightly controlled models that don't use publicly available open-source data.
Companies like ElevenLabs were founded out of a desire to translate text and languages into seamless, lifelike speech, and, like WellSaid, they work with people with degenerative diseases such as ALS. Murf AI, for its part, offers a Voice Data Sourcing program through which people can submit voice recordings and get paid.
But when voices are drawn from open-source data, autonomy over one's likeness isn't always guaranteed, whether you pay $5 to try out a voice doppelganger or submit your own recordings for compensation.
After all, it’s natural to worry about the misuse of AI-generated voices. Remember OpenAI and Scarlett Johansson? Or the AI-generated robocalls impersonating President Joe Biden that the Federal Communications Commission declared illegal?
With the proliferation of media reports and lawsuits over flawed trust and safety policies around AI-generated content, WellSaid's private sourcing of its data is not just a wise decision, it's a necessary one.
WellSaid says it wants to focus on its mission of using AI for good rather than aggregating millions of voices from the internet. In that model, high-quality output and explicit permission to use the voices on the platform are key.
"All of our voices are borrowed from actors," WellSaid CEO Cook said. "They're recorded in professional environments. They've been vetted and approved. We've paid them for their time and training, and we pay them ongoing royalties."
How ‘ethical’ AI can support the human experience
The question remains whether people actually want artificial intelligence tools in their daily lives. According to a CNET survey published in September, based on data collected by YouGov, 25% of respondents said they don't find AI tools useful and don't want them built into their phones.
Additionally, 34% are concerned about privacy when using AI on their devices, and 45% would not pay a subscription fee for an AI tool.
Cook said comfort and trust will play a big role in how humans interact with AI, and he believes AI will eventually become integrated into everyday life, allowing people to interact with the technology and make judgments about it based on their personal experiences.
So is there a world in which ethical AI can exist comfortably in ordinary people’s homes?
“If you think about AI as a tool that can enable things that we can’t do today, I have a pretty good feeling about the role that AI can play in preventing disease or preventing the spread of disease or providing quality healthcare to disadvantaged or marginalized people,” Cook said.
“I think we’re going to look back in 30, 40, 50 years and say, ‘That was groundbreaking.’ This is a really big thing that’s going to make a lot of people’s lives better.”