ATU317 – Voiceitt with Sara Smolley & Katie Ehlers



Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Voiceitt – Sara Smolley (smohwley), VP Business Development, Voiceitt & Katie Ehlers, Speech Language Consultant, Voiceitt (Danny Weissberg – co-founder and CEO)
If you have an AT question, leave us a voice mail at: 317-721-7124 or email
Check out our web site:
Follow us on Twitter: @INDATAproject
Like us on Facebook:

——-transcript follows ——

KATIE EHLERS:  Hi, I’m Katie Ehlers, Voiceitt’s speech language consultant.

SARA SMOLLEY:  Hi everyone, this is Sara Smolley. I’m VP of business development at Voiceitt, and this is your Assistive Technology Update.

WADE WINGLER:  Hi, this is Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana with your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Welcome to episode number 317 of Assistive Technology Update. It’s scheduled to be released on June 23, 2017.

Today we are going to break from our format of news and apps and focus the entire show on an interview with Sara Smolley and Katie Ehlers of Voiceitt. They’ve got an interesting new speech generating device, or augmentative and alternative communication system. We hope you check out our website, give us a note on Twitter @INDATAProject, or call our listener line at 317-721-7124.


So you in my audience always know I am constantly scanning the inter-webs to find out what’s happening in the world of assistive technology. Recently I found a great article in TechCrunch about a thing called Voiceitt. At first I thought this is augmentative communication, but then I realized it’s something different. It isn’t something I’ve seen before. Of course I had to have the folks on the show. I’m super excited today to have Sara Smolley, who is the vice president of business development, and also Katie Ehlers, who is a speech and language consultant for Voiceitt, which is a company based in Tel Aviv but has a presence in the US and a very international, world-traveling group of exciting folks to talk to. Before we jump into the content of the show, Sara, Katie, thank you so much for being with us today.

SARA SMOLLEY:  Thanks so much for inviting us. Really happy to be here.

KATIE EHLERS:  Very happy, thank you.

WADE WINGLER:  We are happy to have you on the show. I really want to dig in pretty hard to what Voiceitt is and what it does and how it’s different from some of the systems I’ve seen before. But first I really want to spend time on the origin story. I want to know why you and the folks at Voiceitt became interested in speech issues for people with disabilities.

SARA SMOLLEY:  So everyone on our team really comes to this project with some sort of personal connection. For me, I actually don’t have professional experience in assistive technologies, but I was always interested in combining business with a social impact and finding ways to use technology to truly improve lives. Our cofounder Danny Weissberg, who is also now the CEO, came to this idea through a personal experience with his grandmother, who he was extremely close to and who experienced a stroke and was never able to communicate clearly again. From this idea, he and our other cofounder Stas Tiomkin decided to create Voiceitt, which is a speech recognition technology that aims to give people with speech impairments their voices back.

WADE WINGLER:  Excellent. I want to talk a little bit about the people who might use Voiceitt. What are the disability groups we are talking about, and some of the social situations as well?

KATIE EHLERS:  So far within the development of Voiceitt, we’ve seen a lot of positive feedback from our beta testers. A lot of our participants typically have a diagnosis of cerebral palsy, multiple sclerosis, or a traumatic brain injury. We’ve also worked with a couple of children who are diagnosed with autism as well as Down syndrome. What we typically tell people is, we give them the examples of our testers’ diagnoses, but we also say two things. The first being that in order for the Voiceitt application to work for someone, that user has to have consistent speech patterns that are linked with consistent intention to communicate. It doesn’t necessarily have to be an intelligible speech pattern. It can be whatever that user is capable of pronouncing. If that pronunciation is consistently linked with, let’s say, “I need water” or “hi, how are you,” Voiceitt will be a great communicative tool for that person to use their voice and be heard and understood.

WADE WINGLER:  Just so I understand, if I, for example, said “wah-wah” instead of “water” as my little kid often does, and if I say it consistently, then Voiceitt will say water?  Is that where we are going with this?

KATIE EHLERS:  Exactly. The other detail that I wanted to touch on as well is we have found that a lot of our testers have dysarthric vocal characteristics. Typically, if somebody comes to us wanting to trial the application, we often ask them about the qualities of their voice. When they describe dysarthria, that’s typically one of the first types of vocal qualities that seems to work very well with the Voiceitt technology.

SARA SMOLLEY:  Just to emphasize, we see our community as very diverse. Some of our users are children with cerebral palsy, but they could also be older adults with different degenerative diseases or even a traumatic brain injury. The one thing they have in common is a consistent motor speech disorder. What’s really special about our technology – and I guess we will delve deeper into this – is it’s not a question of the severity of the speech impairment or the severity of the dysarthria, but rather whether or not it’s a consistent speech pattern. As long as there is consistency in the speech, we’ve seen our algorithm is powerful enough to recognize it and translate the unintelligible speech into clear speech so that the person is able to communicate freely and spontaneously.

WADE WINGLER:  As I’m understanding it, if somebody’s disability means that their speech pattern changes dramatically throughout the day based on fatigue or other factors, then this probably isn’t the best solution for them.

KATIE EHLERS:  I will say that that’s not completely true.

WADE WINGLER:  I love it when I’m wrong.

KATIE EHLERS:  I’ve been asked this question multiple times. It opens up a new conversation regarding the tech and science behind Voiceitt. The beautiful thing about what Voiceitt has created is it is a system that is trained by the user. What we mean by that is the speaker is actually creating their own personal model for their speech recognition. If the user is recording their voice saying certain words and phrases at the beginning of the day before they take their medication, for example, and at the end of the day after they take their medication and they are a little bit fatigued, then as long as they are using Voiceitt at all hours of the day, they are going to be able to train the system so that the slight variations within their speech patterns can still be recognized. We say, if you don’t use it, you lose it. The more you use Voiceitt, the more it learns your speech, and the better recognition you’ll end up having.

WADE WINGLER:  I’ve already taken us down the rabbit hole before I even asked the most basic questions. I’m going to back us up, and we will probably end up here again. Let’s talk about what Voiceitt is and how it works. Is it an app, an algorithm, software, a system?  Let’s do the basics. What is it?

SARA SMOLLEY:  Our core technology is an algorithm that can recognize nonstandard speech patterns. What we’ve done is created a mobile application that can translate dysarthric speech, or the speech of a person who has a nonstandard speech pattern for whatever reason, into clear speech in real time. The output is text as well as a voice output. In the same way that a lot of your listeners will be familiar with something like Apple’s Siri, the speech recognition on the iPhone, it works and looks similar. The person will simply say the word or vocalize the sound, our algorithm will recognize it, and it will come out on the phone or iPad as text or clear voice output.

WADE WINGLER:  Excellent. I think some of the answers to this question are inherent in the responses you’ve given me. Tell me how this is different from other speech generating devices or other augmentative and alternative communication systems that we may be more familiar with.

KATIE EHLERS:  When it comes to a speech generating device, typically, first and foremost, they don’t utilize the user’s natural voice. That is something that Voiceitt alone enables a user to have access to: the ability to communicate using their own voice and have the translation that Sara is describing as a communicative tool so they can be better understood. They are better able to maintain eye contact. They are able to use body language, talk with their hands, or move throughout their environment without having to sit with a large device with a large screen in front of them. I have personally done a lot of research on speech generating device usage. From what I’ve seen, the average amount of time it takes someone to write a message is around five minutes. That’s just a short sentence. You can imagine how much of the natural flow of the communication gets lost. Voiceitt not only allows the user to have their own voice to speak, but they get all those other parts of human interaction that get lost when someone is just looking at a screen.

The other nice thing about Voiceitt is that it will allow someone to have multiple utterances. It’s not just a predetermined utterance that they have to scan through a screen to find. They can say in real time what is on their mind. They don’t have to scan through pages of different topics of conversation. They can say what they want to say, Voiceitt will recognize it and say it with them, and then the listener is able to respond, and they have that back-and-forth communication that everyone should have.

WADE WINGLER:  As I’m listening to you describe this, I’m sitting here nodding my head as I often do, because I have a lot of friends who use augmentative communication. Specifically, as I think about folks who have cerebral palsy, I find myself in situations where the head nods and the body language are a very important part of that communication. I also know it’s no myth that the more I learn someone’s speech pattern, the more I understand. It seems like the app is doing a little bit of what humans tend to do in that situation anyway.

KATIE EHLERS:  Absolutely. We would hope that a lot of people would take that time. I can say the three of us definitely take the time to listen and understand, but there are a lot of people who don’t. We will get into that a little bit, I’m sure: what our testers have told us about how Voiceitt will change their lives.

WADE WINGLER:  Let’s talk about the setup and configuration. From the user perspective, what does it mean to start using Voiceitt and how does that work?

KATIE EHLERS:  Sara, you can definitely jump in whenever you want to add something to what I’m going to say. I’ll explain it as if you’ve just gotten the Voiceitt app: what do you do?  What we would do is ask our users to create a login. They can use an email or create a username with a password, and that will secure their data. What will happen next is a page will pop up asking them to type in a certain number of utterances, maybe short phrases, or even single words in isolation – whatever the characteristics of their speech may be within the language. They can decide what it is they want to say.

The next page is where the user will pair their speech patterns with the text that they typed on the first page. The app will ask the user, please say, “Hi, my name is Katie.” I would hit the record button, there would be a beep sound, I would say “hi, my name is Katie,” and I would hit stop recording. Right there I am starting to build my own phonetic speech recognition within the Voiceitt software.

What we ask each user to do right now is around three repetitions for each utterance. We do three to five repetitions for each utterance. That helps train the model. Then we have the user test the recognition. I would pull up the next page of the app, where you go through the buttons of word bank, recording, and recognition, and I would say “hi, my name is Katie,” and Voiceitt should say “hi, my name is Katie.” There is no delay within the recognition. It is instantaneous. The user isn’t waiting a couple of seconds for the app to say what it was that the individual spoke. That’s an example of what the usability features of Voiceitt would be and how you would start using it as a brand-new user.

WADE WINGLER:  That makes sense. In fact, it reminds me of the days of training Dragon NaturallySpeaking, where you read stories and it is preprogrammed. In this situation, you define what the vocabulary is going to be. That process sounds familiar to me.

SARA SMOLLEY:  It’s interesting that you mention Dragon, because people often ask us how we are different from Dragon. One of the things that we talk about sometimes is that for our cofounders, English is not their first language. When they speak English, there is an accent. Our developers also speak with accents. We are a very diverse team. We always joke that something like Dragon or Siri or Amazon Alexa – all of these speech recognition technologies – don’t work for us. That’s because these speech recognition systems are based on a standard speech pattern. As soon as your speech pattern diverges from this standard, how can it recognize your voice?  It can’t. That’s where Voiceitt comes in. It learns a person’s unique pronunciation, learns a person’s unique speech pattern, and continues to learn and adapt over time. Even if the person’s voice changes over time, for the better or for the worse, our machine learning technology will continue to work; the idea is that the person wouldn’t necessarily have to calibrate again. It’s also powerful enough to pick up the very nuanced discrimination between the sounds of someone with a speech disability, and with an accent as well. That’s a major difference between our speech recognition technology and the app that we’ve developed, and a standard speech recognition system like Dragon or Siri.

WADE WINGLER:  That’s a great point. It leads me to a couple of other questions. Once it’s calibrated and working well for someone, what does accuracy look like?  The second part, which is related to that question: what languages are you supporting or planning to support?

SARA SMOLLEY:  The question of accuracy is a little bit hard to answer because it varies between users. Something important that we are working on is a standard of accuracy in different noise environments, which, intuitively, you understand is quite important if a person is using it at home versus in a school or hospital setting or in the workplace. Overall, amongst all of our current testers and users, we’ve seen a consistent accuracy of about 85 percent, and we are moving towards more of a 90 or 95 percent accuracy in our recognition.

WADE WINGLER:  Tell me about the reception you’ve gotten from the users, family members, and caregivers, the people who are trying this. What do they say?

KATIE EHLERS:  They love it. I have the privilege of doing a lot of testing in Western New York. I’ve been able to spend quite a bit of time with all different types of users. I’ve had a woman in her 70s who’s been married to her husband for 20 years; they both have a diagnosis of cerebral palsy. She looked at me one day and said, I can finally say good night to my husband, because he has trouble hearing me. When she says good night, Voiceitt will help her be heard. I thought that was incredible.

We have another young woman in her 30s who is very active within the Buffalo community. She also has a diagnosis of cerebral palsy. She said to me one day, I really wish that this had been around when I was a little girl. I had such a hard time learning how to pronounce my words and had to work so hard at it. Even still, so many people just didn’t understand me, and it was so frustrating growing up in a public school system.

I’ve had a very business-savvy man who tells me all the time about how he’s going to be able to hire and fire his own employees on his own and not have to have someone else have that awkward conversation for him. He will be able to do it himself because he will be understood.

We get all these different types of stories that are heartfelt, whether from a family perspective, growing up in a school system, or owning your own small business. There are so many types of people and use cases that will come out of Voiceitt being the technology that it’s meant to be for these users.

I’ve had parents cry. It’s so emotional because they are so thrilled to see that their child can finally be understood by their peers and teachers. It’s really been a very rewarding journey to see the hard work that our developers put into this technology and the hard work that Danny and Stas have done, and Sara as well. We all together want this to be the best possible technology it can be now that we’ve seen it working and get to see real people using it. It’s inspiring. We are very passionate about this.

WADE WINGLER:  Those are impactful stories. I understand why you are passionate about them. We are getting close on time for the interview, so I have a few questions I want to make sure we get to. I know people are listening now and saying, I want to try it; how do I take the next action steps?  Talk to me about the platforms that you plan to support and what cost and availability might look like.

SARA SMOLLEY:  The app right now is being developed for use on iOS devices, meaning iPhones or iPads. That’s what we are testing on. We do plan in the future to be compatible with Android as well. It is very important to us that the product be accessible and affordable. We are quite aware that there are amazing products and technologies in the industry, but they are accompanied by quite a hefty price tag. Sometimes, but not always, they are covered by various insurance programs that aren’t necessarily accessible to everyone who needs them. It is one of our goals as a company, and the promise we feel we’ve made to our community, to make sure that the technology we have built and the product we are providing is affordable. It will most likely be a subscription fee on the app store or a one-time fee. Later on, we are exploring incorporation into various insurance or government funding programs.

We are currently doing closed testing. We are working with volunteers, individuals mostly in the Western New York area. We will be doing another round of testing this summer in collaboration with our institutional partners, some speech and hearing clinics, and disability organization networks as well, with the goal of the product being commercially available in the beginning of 2018.

WADE WINGLER:  I know people in the audience are going to want to follow your journey. I know I want to follow it and see what happens. How should people learn more, stay in touch?  What kind of contact information would you like us to share?

SARA SMOLLEY:  We will be launching the website very soon. It will be You can also reach us at Any of your listeners are also welcome to reach out to us personally. I’m Sara@voiceitt. Katie is with us as well, We emphasize that we are building a community as well as a product. We are working very closely with members of the community: users, caregivers, medical professionals, speech and occupational therapists. Getting all this feedback and support along the way is extremely important to us. We always love to hear from people, their stories, and how they would like to use the product or just be part of our project. We welcome people to reach out to us.

WADE WINGLER:  For those who are listening, Voiceitt has two T’s, right?


WADE WINGLER:  I’ll pop that in the show notes so folks who are clicking around will be taken directly there. Sara Smolley is vice president of business development. Katie Ehlers is the speech language consultant for Voiceitt. They’ve been our delightful guests today. Thank you so much for being on the show.

SARA SMOLLEY:  Thanks so much.

KATIE EHLERS:  Thanks for having us.

WADE WINGLER:  Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? Call our listener line at 317-721-7124, shoot us a note on Twitter @INDATAProject, or check us out on Facebook. Looking for a transcript or show notes from today’s show? Head on over to Assistive Technology Update is a proud member of the Accessibility Channel. Find more shows like this plus much more over at That was your Assistive Technology Update. I’m Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana.

***Transcript provided by TJ Cortopassi.  For requests and inquiries, contact***