
—– Transcript Starts Here—–
Chris Hamblin:
Hi, this is Chris Hamblin and I’m a senior assistive technology specialist at CareScribe. And this is your Assistive Technology Update.
Josh Anderson:
Hello, and welcome to your Assistive Technology Update, a weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs. I’m your host, Josh Anderson, with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. Welcome to episode 769 of Assistive Technology Update. It is scheduled to be released on February 20th, 2026.
On today’s show, we’re super excited to welcome Chris Hamblin, senior assistive technology specialist for CareScribe, to the show. Chris is going to tell us all about Caption.Ed and TalkType and how these tools can help out individuals with different needs. We’re also joined by our friends from BridgingApps with an app worth mentioning. We’ve got a couple of stories: one about Google creating AI to help with accessibility, as well as a really great interview with Therese Willkomm.
We want to thank you for taking time out of your day to give us a listen, but if you’d like to reach out, you can always send us an email at tech@eastersealscrossroads.org, or call our listener line at (317) 721-7124. Now let’s go ahead and get on with the show.
You find yourself with a little bit more time on your hands, or maybe listening to this has you thinking, “Well, what about this? Well, what about that?” Well, if you have questions about assistive technology, we have another podcast that might just fit your needs: Assistive Technology Frequently Asked Questions, or ATFAQ. The show is hosted by Brian Norton and features yours truly, along with Belva Smith, as we all talk about assistive technology with questions that come in from email, phone calls, and other means. We also don’t always know the answer, so it’s very important that we have listeners that can help us out with some of those questions, because while we like to think every once in a while that we may know everything, we’re proven wrong almost daily on that one. So if you’re looking for more podcasts to listen to, or if you have questions about assistive technology, make sure to check out Assistive Technology Frequently Asked Questions wherever you get your podcasts.
Listeners, I found a story over at Google’s blog, The Keyword. It’s written by Sam Sepah and it’s called Natively Adaptive Interfaces: A New Framework for AI Accessibility. I’ll put a link to the story over in the show notes because it’s got a really great video on it to learn a little bit more about it, but it talks a little bit about natively adaptive interfaces and what these are. So basically, natively adaptive interfaces, or NAIs, put adaptability directly into a product’s design from the beginning. It gives an example here. So let’s say an AI agent built with the NAI framework can help you accomplish tasks with your guidance and oversight, intelligently reconfiguring itself to deliver a more accessible, personalized experience. So this could be something like making a document more accessible by changing the text, by adding audio descriptions, or maybe changing the layout for folks with other needs, but actually understanding those needs and being able to do it on its own.
It does say that developers are collaborating with different members of the disability community throughout the design and development process, just making sure that everything’s going to be able to work for individuals with disabilities. Some of these organizations include the Rochester Institute of Technology’s National Technical Institute for the Deaf, The Arc of the United States, RNID, and Team Gleason. And then it goes on to talk about, and this is what the video on there is about, something called Grammar Lab, which was made by English lecturer Erin Finton at the National Technical Institute for the Deaf. This program uses AI to really strengthen students’ skills and language goals in both ASL and English. So I’m going to put a link to this over in the show notes, and you can go and watch the video and learn a little bit more about some of the cool things Google’s doing to make sure that AI is accessible and how they’re using natively adaptive interfaces to make AI work for all.
Listeners, I want to share a story from over at IEEE Spectrum. The title of the story is “How YouTube and Adhesive Tape Are Disrupting Assistive Technology: The MacGyver approach lets disabled users reconfigure their tech.” This is written by Jason Hahr, and it’s basically an interview with Therese Willkomm. Now for folks in the world of assistive technology, you probably know who Therese Willkomm is. She’s an emeritus professor of occupational therapy at the University of New Hampshire. She’s written three books cataloging more than 2,000 different assistive technology hacks. She also directed an Assistive Technology Act program, so she has years and years and years of assistive technology experience and really is just known for making low-cost assistive technology solutions, just little things that are made on the fly in order to meet a need.
But I’ll put a link to the story down in the show notes that you can go check it out for yourself. I won’t get into it too much because it is an interview. So I think you should definitely go and check it out, but she talks about a bunch of different things, everywhere from where she got her start in assistive technology, what are some of her most memorable solutions, and then how things have evolved over the decades and maybe where they’re going in the future. She talks a little bit about how legislation impacts assistive technology and then just where focus needs to be moved in order to be able to help the most individuals.
So it’s a very good interview, a very good story. And again, Therese Willkomm has been around the AT world at least as long as I’ve been in it. She’s presented for us here at INDATA before at some of our full-day trainings, and it’s a very good and thoughtful interview. So I thought I would share it with everybody. So again, I’ll put the link down in the show notes so that you can go check out the interview with Therese Willkomm from IEEE Spectrum.
Listeners, next up on the show, we’re very excited to welcome back Ale Gonzalez from BridgingApps with an app worth mentioning.
Ale Gonzalez:
This is Ale Gonzalez with BridgingApps, and this is an app worth mentioning. This week’s featured app is called Read&Write. The Read&Write app for iPad and Android is an alternative keyboard and an incredibly useful toolbar. The app is designed for maximum literacy support, allowing students to read and complete literacy assignments independently.
Features like text to speech, highlighting text, dictionary access, and visual representations of what a word means can be used to practice vocabulary with students who have processing disorders, cognitive disabilities, or even ELL students. Most students benefit from experiencing new vocabulary in a variety of ways. The Read&Write app and software is a great resource for ELL learners and those diagnosed with dyslexia, autism spectrum disorder, ADD, and even typically developing students and adults. Homeschool families and students completing homework may also find it helpful.
Read&Write is currently available for both iOS and Android devices and is free to download. For more information on this app and others like it, visit www.bridgingapps.org.
Josh Anderson:
Listeners, please join me in welcoming Chris Hamblin to the show. He’s here to tell us about CareScribe and the unique accommodations they offer, and I, for one, cannot wait to learn more. Chris, welcome to the show.
Chris Hamblin:
Thank you, Josh. It’s a pleasure to be here.
Josh Anderson:
Yeah, it is a pleasure to have you on. Before we start talking about CareScribe, can you tell our listeners a little bit about yourself?
Chris Hamblin:
Yes, I can indeed. I am very much a family man. I’m based in the UK. I live in a place called Cardiff, which is in Wales. So if people know the UK, then they’ll know that’s probably a couple of hours further south of London. I’m a former professional golfer, but I clearly wasn’t good enough at the time, so I had to go and get a, quote, “real job.” I still enjoy golf. I have two wonderful daughters and a wonderful wife. Yeah, and I enjoy gardening, golf, sports in general, and I’m a huge American sports fan, both ice hockey and football.
Josh Anderson:
Nice. Very nice. I was going to say, and I think I know where Cardiff is from English Premier League Soccer and those things over there. So it’s fun that you’re a fan of American football, and I always do like to watch football from across the pond as well. Well, Chris, on to CareScribe, to start us off, can you tell us a little bit about the company as a whole, maybe when it was started and things like that?
Chris Hamblin:
Of course, absolutely. So CareScribe is an assistive technology company. We are based in Bristol in the UK, so slightly closer to London than I live. We have built software to help neurodivergent and disabled people study and work more independently. The company was founded just before 2020, so that lockdown period, and both of our founders and directors are neurodivergent themselves. One of them, Dr. Richard Purcell, is actually an NHS doctor, and it was during his time studying medicine while living with dyslexia and dealing with highly complex language that his interest in assistive tech really began.
It’s from that lived experience that CareScribe has grown to create two core tools: Caption.Ed, which provides real-time, accurate captioning, transcription, and a note-taking solution, helping people capture and retain spoken information, and TalkType, highly accurate, lightning-fast dictation software that allows individuals to get their thoughts down quickly. Together they remove those barriers in education and in the workplace and support people to thrive and work at their best.
Josh Anderson:
Awesome. And you led me straight into my next question. So let’s start with Caption.Ed, and I would’ve called it Caption Ed, so I’m so glad you pronounced it beforehand. I guess, as you said, it can help with a lot of different things, but what all can Caption.Ed do?
Chris Hamblin:
It’s a good point with the name. Yes, it’s pronounced “Caption Ed,” but a lot of people, because of the period and then the E-D, say “Caption dot E-D.” It’s the same software, that’s what I’ll say, however it’s pronounced. So I guess in terms of maybe some of the problems that people were facing, certainly in the early days, by the nature of its name, Caption.Ed was really designed to support individuals with hearing loss. As you can imagine, during the pandemic period, certainly from an education perspective, Caption.Ed’s ability to provide live, accurate captions for any live or pre-recorded media supported those students when everything went online. The human support disappeared, and then suddenly you’re on a Zoom or a Teams or a Google Meet meeting, and people with hearing loss struggled, and that’s where Caption.Ed really supported them.
But it’s changed quite dramatically over the last… Well, as I said, nearly five years that I’ve been working with the company. It can support cognitive overload, information loss, physical and mental strain, that fatigue of having to really concentrate and listen in and focus on… And multitasking, being able to make notes at the same time. And it’s capturing without friction. So you can caption what’s being said. It will provide those captions. It will provide speaker labels. It will provide a summary for you. It will provide a transcript. You have notes available. And then more importantly, you can review these things retrospectively. So you’re not going to need to worry too much about retaining all the information from a call. You could go and review that retrospectively and play it all back. And then we’ve got cool features where we’re using AI to get summaries, et cetera, and even create templates and documents. But it’s the levels of accuracy and the ease of use are the reason why it’s so popular.
Josh Anderson:
Nice, excellent. And expand a little bit on the note-taking and the summaries, just because I know for me, if I’ve got a generated caption of an entire hour-long meeting, let’s not lie, I’m not going to read that whole darn thing when I really go back, maybe to parse and pull out some information. So how do those work and how can those be helpful for individuals?
Chris Hamblin:
Yeah, really good point. So what we’ve tried to do is be mindful that each individual likes to work and operate in a slightly different way, and therefore there is real flexibility in how you utilize the software. If you’re somebody that likes to listen along and then type out your notes, then you can certainly do that. Some people like to follow along with the captions on the transcription and then use maybe some flags to mark something as important, or maybe highlight certain blocks in various colors with a color key relevant to them.
You can also copy sections across, because whilst I’ve mentioned hearing loss, it can support neurodivergent traits. If we’re thinking about dyslexia, then typing out notes can be a real difficulty, so copying sections of the accurate transcription across at the push of a button becomes really useful. And then what you do, Josh, is when you review it after the fact, to your point, you’re not going to read it all back and play it all back and listen to it, maybe. So you’ve got those bespoke sections and a summary that you’ve created yourself as a snapshot that you can access, with synchronized timestamps to play it back if you need to, and then push a button and get your own summary. So there’s lots you can do depending on how you work and operate.
Josh Anderson:
Oh, that’s awesome. And the captions themselves, I have to ask how they’re generated and is the information safe and secure? So if I was maybe not in a classroom, but maybe having an important meeting that involved… I don’t know, private health information or client information or that kind of stuff, just how are those generated and is it safe and secure?
Chris Hamblin:
Yeah, of course. So our captions aren’t generated by using AI. It’s our software working from a speech recognition perspective and converting speech into accurate text. And then when it comes to saving sessions, it’s a cloud-based solution. So you’re right, if you save it… Because you don’t have to save it, you could use it purely for the captions and then turn it off. But if you’re using it to save and then review, sessions are all cloud-based around a really stringent security setup, and we’ve jumped through a lot of security hoops. We are mindful of the concerns around the sensitive nature of conversations that can be had, as you said, à la the healthcare industry. They are stored safe and secure. Yes, we are ISO 27001 certified, and HIPAA compliance is in there as well. And then depending on the license type, Josh, you can add additional layers of security. So single sign-on comes in. Data purging can come in. And then even toggled organization controls of AI and various other bits and pieces. So yeah, security is at the forefront of what we do.
Josh Anderson:
No, that’s awesome. It used to be that it never had to be such a concern, but yeah, especially with AI and things like that these days, you just want to make sure you know where your information’s going. So that’s awesome. Well, on to TalkType, I guess start us off by just telling us, what is it?
Chris Hamblin:
TalkType is really simple, but so effective. It is highly accurate, lightning-fast dictation software. And just like Caption.Ed, it also works across all devices. So both our solutions will work across Windows and Mac and Chromebook and mobiles and tablets. So you’re opening up all devices, and it allows individuals to simply sign in, hit the microphone button, and then just get their thoughts out and onto the screen without any extensive training, profile creation, or speaking a certain way. Just speak naturally, and you’ve got accurate dictation. Another really great feature is the ability to dictate wherever you want to type. So by pressing a button, you can then dictate into any third party application. That could be a Word doc, Google Doc, Excel spreadsheet, Slack, Teams, even your own regulatory platform. The software will allow you to dictate effortlessly.
Josh Anderson:
Wow. No, that is a big difference. That was actually going to be my next question, will this work inside the programs that I want to use? But thanks for answering that before I even had a chance. That’s really great, because I know sometimes that can be a big limitation with different ones, where I have to use a different thing for each piece and part. Chris, can I set up custom commands and phrases within TalkType?
Chris Hamblin:
Absolutely. Yes, you can, Josh. So there are the commands that naturally exist within the software, but you have control over creating custom words. Now, these could be the spelling of certain names. They could be acronyms, to support organizational acronyms that are used. And then we also have what we call shortcuts. So again, to reduce workload, you can create your own custom shortcuts. You could put a long paragraph in, for example, if you’re using it repetitively, and then you’re able to just use a trigger word, say that word, and it will deliver that shortcut for you. So nice and easy: simple commands, shortcuts, and custom dictionaries.
Josh Anderson:
That’s awesome. Chris, what really sets TalkType apart from other dictation software or built-in accommodations?
Chris Hamblin:
Perfect. So as far as inbuilt is concerned, I would say two things. One is quite simply the accuracy levels are… As we say in the UK, it’s like chalk and cheese. So we are far more accurate. The next thing I’ll point out there with inbuilt is that you’re not limited to the environment in which you are working. For example, if you’re working in Microsoft, then you’re able to use inbuilt in Microsoft Word. But if you jump to say a Slack message or another third party application, you’re limited there. So TalkType works across the board.
And as far as other solutions are concerned, I will be completely honest and say if individuals need a tool that supports using the power of someone’s voice to control their machine, then maybe not TalkType, I’ll be honest. But if you’re looking for easy to operate, accurate, and intuitive dictation software, then that’s what it does very well. And as I mentioned earlier, it’s not about speaking a certain way or training it. It’s sign in, turn the microphone on, and start talking on any device, wherever you need to. Simple as that.
Josh Anderson:
Awesome. Awesome. Very, very cool. Chris, you’ve probably got a ton of these, but can you tell me a story of someone’s experience using these tools? Maybe if you have it, maybe a story about each one.
Chris Hamblin:
Let’s talk about a Caption.Ed use case. There are lots of students that, as I mentioned, struggle online, and if they have hearing loss, for example, lose that in-person support. So covering a Teams or a Google Meet session, those highly accurate captions make it far easier. But there’s one workplace case where an individual that worked in a call center really struggled to perform his job effectively. He has hearing loss and was really struggling in his role when on calls. He found that Caption.Ed, to use his quote, “changed his life.” It made it far easier for him to understand exactly what was being said with the plethora of different accents that he would experience on a day-to-day basis from all the calls that he would take. He could not only follow along with the call, he was able to compile some notes and then very easily review all of his sessions retrospectively and create the summaries that he would need to support his input into the CRM, et cetera. So it really helped him in that case.
And as far as TalkType is concerned, the first thing is the amount of individuals that we support that use Mac is incredible, because a lot of tools that exist are maybe Windows only. So I could list off a lot of people that are using our software because they have a Mac. But I can use myself as an example, and there are a lot of other people like me. Whilst I don’t necessarily have an official diagnosis, Josh, maybe I would if I got assessed later in life, but I struggle sometimes to get thoughts out of my head, maybe when composing an email or writing up a document. I can’t type at speed very well. I do get RSI now because I’m at a computer a lot and my shoulders hurt.
So for me and others I know being able to turn the microphone on and then just simply get out what I need to, even if it’s not punctuated properly, I can get my thoughts out, I can input it into our CRM, I can input it into my email, I can input it into Slack, and now I can just use our tools to just jazz it up with a bit of AI if I want to, but it just makes me more streamlined, makes me more productive, makes me more confident.
Josh Anderson:
Nice. Nice. And yeah, you brought up really great points there about just all the things. I know a lot of folks that I’ve worked with that use dictation. You mentioned being able to get your thoughts out. For some folks, that can just be the biggest frustration and can really slow things down, and not just slow things down, but even cause people to… I don’t know, have anxiety or other things come up, which can really affect not just productivity, but the actual finished product. So I love that you brought up those needs in there. And I also love that you found out that Caption.Ed is good for so many folks, not just folks that need the captions due to being hard of hearing. So that is very cool. Chris, if our listeners want to find out more about CareScribe, about Caption.Ed, about TalkType, what’s a good way for them to do that?
Chris Hamblin:
So the obvious one would be to visit our websites. And whilst we have websites for both Caption.Ed and TalkType, our company is CareScribe. So I would suggest visiting carescribe.io. That will provide you with a high-level overview of who we are and our solutions. And then, look, you can by all means fill in a contact us form, or if you need me to provide my details, I’d be happy to have some conversations and give you more information on our solutions.
Josh Anderson:
So Chris, what’s next for CareScribe? What are you all working on?
Chris Hamblin:
So Josh, we’ve been very fortunate to see strong adoption and success in the UK, particularly across education and the workplace. And the natural next step for us would be for North America. We find there’s a real and a growing focus on inclusion and accessibility and supporting diverse ways of working, which aligns really closely with what we’re trying to achieve. So my role now is very much about raising awareness, showcasing what our solutions can do, and then building relationships so that individuals and organizations across North America can benefit hopefully from our technology in the same way that people in the UK do. Look, ultimately it’s about making sure access to the right support isn’t limited by geography.
Josh Anderson:
Most definitely. There are enough barriers facing individuals with disabilities. There’s no reason to put borders and things like that on there as well.
Chris Hamblin:
Exactly.
Josh Anderson:
Awesome. We will put all the links down in the show notes so that folks can easily be able to learn more, to reach out and to really be able to find how these solutions are able to help them. Chris, thank you so much for coming on today for telling us all about CareScribe, Caption.Ed, TalkType, and all the great things that you all do.
Chris Hamblin:
Josh, it’s been a pleasure. Thank you so much for having me.
Josh Anderson:
Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on an Assistive Technology Update? If so, call our listener line at (317) 721-7124. Send us an email at tech@eastersealscrossroads.org or shoot us a note on Twitter @indataproject. Our captions and transcripts for the show are sponsored by the Indiana Telephone Relay Access Corporation or InTRAC. You can find out more about InTRAC at relayindiana.com.
A special thanks to Nikol Prieto for scheduling our amazing guests and making a mess of my schedule. Today’s show was produced, edited, hosted, and fretted over by yours truly. The opinions expressed by our guests are their own and may or may not reflect those of the INDATA Project, Easterseals Crossroads, our supporting partners, or this host. This was your Assistive Technology Update, and I’m Josh Anderson with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. We look forward to seeing you next time. Bye-bye.


