ATU742 – SensePilot with Mike Hazlewood

Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.
Special Guest:
Mike Hazlewood – CEO & Co-Founder – SensePilot
For More on SensePilot:
IG: @sensepilot
FB: @sensepilot
For more about Bridging Apps: www.bridgingapps.org
Stories:
Hearing Technology Story:
——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: http://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA  
—– Transcript Starts Here —–
Mike Hazlewood:

Hi, this is Mike Hazlewood and I’m the co-founder and CEO of SensePilot, and this is Your Assistive Technology Update.

Josh Anderson:

Hello and welcome to Your Assistive Technology Update, a weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs. I’m your host, Josh Anderson with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. Welcome to episode 742 of Assistive Technology Update. It is scheduled to be released on August 15th, 2025. On today’s show, we are super excited to welcome Mike Hazlewood, CEO and co-founder of SensePilot, to tell us all about how SensePilot can help you control your computer using head movements and other means. We are joined by our friends from Bridging Apps with an app worth mentioning. We also have a story from The New Yorker talking about technology that benefits those with hearing loss. Don’t forget, listeners, we always love hearing from you. You can shoot us an email at tech@eastersealscrossroads.org, call our listener line at 317-721-7124, or you can also reach us through our website at Eastersealstech.com. Thank you so much for listening today, and let’s go ahead and get on with the show.

Folks, we cannot thank you enough for giving us a listen here at Assistive Technology Update, but did you know that this is not the only podcast that we have? You can also check out our sister show, Assistive Technology Frequently Asked Questions. This show comes out once a month and features panelists Belva Smith, Brian Norton, and myself as we try to answer the questions that are plaguing your mind about assistive technology. We gather up all the questions we get during the month from emails, phone calls, and many other means, and then we do our best to answer them. But I’ve got to tell you folks, believe it or not, we do not know everything. So we rely on our listeners a lot to reach out to us and give us some of those answers, or maybe just talk about their personal experiences and things that have happened to them.

So if you like Assistive Technology Update, you may very well love Assistive Technology Frequently Asked Questions. Again, it’s Assistive Technology Frequently Asked Questions, where you can get your questions about assistive technology answered. Or if you happen to have the answers to some of the questions asked on that show, please, please, please do reach out and let us know so that we can help the community with the answers that they so desperately seek. Much like Assistive Technology Update, you can find Assistive Technology Frequently Asked Questions wherever you prefer to get your podcasts. And as always, listeners, thank you for listening.

Listeners, our story today comes to us from The New Yorker, and it’s titled Subtitling Your Life, written by David Owen. And it really begins by saying there’s no better time in human history to be a person with hearing loss. This is a pretty long story. Now, the nice thing is, I’ll put a link to it, and when you first get in there you can always hit play and it will read the whole thing to you. It starts off by talking just a little bit about hearing loss. It talks about the author having a friend who was losing his hearing, that person’s experience working with hearing aids and other devices like that, and just how far they’ve come over the last however many years, how much better the hearing aids and all these different things work. It digs a little bit deeper into technology and into some of the newer technologies, specifically a couple of things that we’ve had here on this show, including TranscribeGlass and XanderGlasses, and talks about how these devices allow people to have real-time captioning on their glasses while having a conversation.

It also talks a little bit about how some folks’ experiences changed a lot during the pandemic. Instead of being in in-person meetings with lots of background noise, they were now in a kind of secluded area in their home without all that background noise, so maybe they could hear a little bit better. But not only that, Teams, Zoom, all of these tools have transcription built in, so you do have that other component. As I said, it’s a pretty long story with a lot of deep dives, and then it even talks about ASL, which is American Sign Language, and some different kinds of teaching, I mean, how individuals who are deaf were forced to kind of learn in a speaking world. So instead of teaching classes in ASL, they would teach classes and make folks learn how to lip-read and speak orally, as opposed to just teaching ASL and working in that language. It also talks about the formation of Gallaudet and how that all went.

So it’s a very, very deep story that really gets into a lot of the challenges faced by individuals who are deaf or hard of hearing, but then also gets into some of the really great tools and technology that are out there to assist. So again, pretty long story, and I don’t want to get too much into it, but if you want to go check it out, it’s a really great take on hearing loss as well as on the technology that’s out there to assist. We’ll put a link to the New Yorker story over in the show notes. Next up on the show, listeners, please join me in welcoming back Bridging Apps with an app worth mentioning.

Ali Gonzales:

This is Ali Gonzales with Bridging Apps, and this is an app worth mentioning. This week’s featured app is called Evidence 111. Evidence 111 is an interactive detective audio game. You play as Chief Inspector Alice Wells, who receives a strange phone call instructing her to bring an envelope marked Evidence 111 to a hotel. Listen to the story as it unfolds and make choices along the way. The story is about an hour and a half long and has around 10 endings, so there’s a lot to discover. When the app starts, you are instructed to swipe right to start a new game, left to load an existing game, up to toggle accessibility for blind and low vision players, or down to purchase the full game. After the game starts, you will be presented with a tutorial that teaches you all of the controls you need to know in order to play.

When you reach a point in the story where you must make a choice, it will pause and Alice will ask what she should do. After the tutorial has concluded, the story begins. The demo allows you to play for about 20 or 30 minutes. If you want to find out how the story ends, you must purchase the rest of the content for $5. This game provides hours of entertainment for those who are interested in detective stories. The story is engaging and the production value is excellent. Evidence 111 is currently available only for iOS devices and is free to download. For more information on this app and others like it, visit Bridgingapps.org.

Josh Anderson:

Listeners, today we are very excited to welcome Mike Hazlewood, CEO and co-founder of SensePilot, and he’s here to tell us all about how this new input method can help individuals control their computer using their head and facial expressions, without the need for outside hardware. Mike, welcome to the show.

Mike Hazlewood:

Thank you very much. Great to be here.

Josh Anderson:

Yeah, I am excited to get into talking about SensePilot, but before we do that, could you tell our listeners a little bit about yourself?

Mike Hazlewood:

Yeah, certainly. So I live in London. My background, and how I got here, is actually probably a bit bizarre. I was a material scientist by trade, but I worked in sales for over a decade within aerospace and medical, selling raw material into medical devices, before moving into product management in tech, and then I started on this venture with my family.

Josh Anderson:

Oh, really? Well, I guess that kind of just leads me into, can you tell me how SensePilot came about?

Mike Hazlewood:

Yeah, so it started with my brother-in-law, Linus Yonica, who’s our CTO. He previously worked as a developer for an AAC company, working on Predictable, and the idea came out of that work: he was using a lot of eye gaze and specialist head cameras, among other things, to test the software, just to make sure it was fully accessible in all capacities. And then he realized the cost of those devices. So that’s where this idea came about.

So he came with the idea, and we decided to see how we could go about it, basically. At a hackathon as part of AI Lithuania back in May last year, we decided to assemble a team: Linus, myself, my wife Erica, who does some software development for us, and their cousin Iveta, who specializes in social media and marketing. As part of the hackathon, we just kind of came together, wanted to see what was possible, did some research, and built a very rough prototype. And then, yeah, we came second in that, so it gave some initial validation of what we could do. So we started digging in a little bit deeper, asking end users and occupational therapists just to see if there’s any kind of need for this tech and what sort of areas could be improved.

Josh Anderson:

How was SensePilot funded?

Mike Hazlewood:

So yeah, we started working on it from there. We launched the company in September last year and started out bootstrapping, just seeing what we could do as a scrappy startup. We’ve been lucky enough to get grant funding from a few places. We’re actually a Lithuanian company, so we’ve had funding from a few European Union health programs, EIT Health, and we’re also part of an accelerator program called Social Tides by Grow AI, which is powered by INCO and supported by Google.org. So yeah, we’ve been very fortunate to get funding through those. And we also won an Accessibility Innovation Prize from the ContentSquare Foundation, which gave us support and also some mentoring from Microsoft, L’Oréal Group, and Skyscanner.

Josh Anderson:

Nice. And then I guess just what is SensePilot?

Mike Hazlewood:

So it’s software that turns your webcam, any webcam, whether integrated into the device or an external one, into a tool for accessing the device. So with SensePilot, you can control the cursor using your head movements. It picks up just the standard webcam feed and picks up very small movements. And then you can also map individual facial gestures, so blinking, raising your eyebrows, smiling, sticking your tongue out, opening your mouth, blowing a kiss, those sorts of things, to different clicks, so left click, right click, scrolling, et cetera, but also to keyboard presses. So what it allows you to do is access your device, access AAC apps, but also play quite complex video games, all completely hands-free.
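
For a concrete sense of how software like this can do head pointing and gesture clicking from a plain webcam feed (a general sketch of the technique, not SensePilot’s actual code), the snippet below uses MediaPipe’s face landmarker, which reports facial landmarks plus blendshape scores for expressions like smiling, and maps them to cursor movement and clicks. The model file, landmark index, and thresholds are assumptions for illustration.

```python
# A minimal sketch of webcam head pointing + gesture clicking.
# Not SensePilot's implementation; thresholds are illustrative.
import cv2
import mediapipe as mp
import pyautogui
from mediapipe.tasks.python import BaseOptions
from mediapipe.tasks.python.vision import FaceLandmarker, FaceLandmarkerOptions

# Google's published face landmarker model, downloaded separately.
options = FaceLandmarkerOptions(
    base_options=BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # expression scores: smile, brow raise, jaw open...
    num_faces=1,
)
landmarker = FaceLandmarker.create_from_options(options)
screen_w, screen_h = pyautogui.size()
smiling = False  # simple debounce so one smile produces one click

cap = cv2.VideoCapture(0)  # any built-in or external webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = landmarker.detect(mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb))
    if not result.face_landmarks:
        continue
    nose = result.face_landmarks[0][1]  # landmark 1 is roughly the nose tip
    pyautogui.moveTo(nose.x * screen_w, nose.y * screen_h)  # crude absolute mapping
    scores = {c.category_name: c.score for c in result.face_blendshapes[0]}
    smile = scores.get("mouthSmileLeft", 0.0) > 0.6  # threshold is illustrative
    if smile and not smiling:
        pyautogui.click()
    smiling = smile
```

A real tool would smooth the cursor, let the user pick the gesture and threshold, and handle gestures relative to a calibrated neutral face, but the pipeline (frame in, landmarks and expression scores out, synthetic input events) is the same idea.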

Josh Anderson:

Nice. And I guess that’s what I wanted to dig into: some of the customization. So you said I can do a lot of different things with the different expressions. What kind of customizations are available, and how do I set that all up?

Mike Hazlewood:

So it’s all within the app. We’ve put a lot of thought into the design of the app and the user interface, thinking around independence specifically: how can a person easily change all of their settings as needed if some of the settings are wrong? We actually designed, or had a lot of help designing, the app with SpecialEffect here in the UK, the disability gaming charity. They were amazing with testing it out; we’d send them screenshots of what we thought, and they also tested the app out quite heavily for us.

But in terms of personalizing it, you can just go through and map it out. We have an initial landing page where you can get an understanding of your individual face’s biomechanics. Every person’s face is different and your preferences are different, so you can map it out to your ranges of motion and set the range and the size of the facial gesture at which the trigger will happen. It’s also switch compatible, and you can even turn a standard keyboard into a switch, so if you still want to click with a switch, you’re still able to. And then we also have profiles, which is what really makes this super powerful: you’re able to have different profiles for different apps and different games, so you can set up all the different controls based on your preferences and abilities.
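
A hypothetical sketch of what a per-app profile could look like (the structure, names, and thresholds here are my own illustration, not SensePilot’s actual format): each profile is essentially a table mapping a gesture to an action, with a per-user trigger threshold from calibration.

```python
# Illustrative profile layout, not SensePilot's file format.
PROFILES = {
    "default": {   # email, browsing
        "raise_eyebrows": {"threshold": 0.50, "action": "left_click"},
        "open_mouth":     {"threshold": 0.40, "action": "right_click"},
    },
    "shooter": {   # e.g. Call of Duty
        "smile":          {"threshold": 0.30, "action": "key:w"},
        "raise_eyebrows": {"threshold": 0.50, "action": "key:space"},
    },
}
active_profile = "default"

def on_gesture(name: str, strength: float) -> str | None:
    """Return the bound action when a gesture crosses its calibrated threshold."""
    binding = PROFILES[active_profile].get(name)
    if binding and strength >= binding["threshold"]:
        return binding["action"]
    return None

# Switching the active profile (via a reserved gesture or a switch press)
# swaps the whole mapping at once, so nothing needs to be redone by hand.
```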

Josh Anderson:

Excellent. So if I go from playing a video game to sending an email, I don’t have to actually get in behind the scenes and redo everything. I just change the profile and I’m ready to roll.

Mike Hazlewood:

So you can switch between profiles, either from a facial gesture again, or you can do a switch press. So you can have one set of controls for playing Fortnite, Call of Duty, whatever you want, and then come back, send your emails, browse the web with your standard default profile.

Josh Anderson:

Very nice. And I know you said that nothing but a standard webcam is necessary. Does it have to be up to a certain standard, or will it work with pretty much any webcam that’s built in or external?

Mike Hazlewood:

Pretty much any, whether it’s built in or external. If you get a more expensive webcam, right now we ignore some of that extra feed. We will support some of the higher frame rate webcams in the future, just so you get that extra responsiveness, but there’s no real lag or delay or anything like that. So really, the only constraints are what you want to do. If you want to play a game, you need a bit more of a powerful computer, that sort of thing.

Josh Anderson:

Sure. Yeah, exactly. So I could definitely understand how that could make a change, but I just wanted to make sure that you didn’t have to have anything higher end, and it sounds like pretty much any webcam will work on there. Mike, with all the different kinds of controls, how many different ones can I use at once? Is it as many as I need for whatever I’m trying to do?

Mike Hazlewood:

Exactly, yeah. So if you look at clicking, what we don’t let you do is have the same facial gesture for left click and right click, for example, just because that would cause a bit of confusion and a bit of an issue. But yes, you can go through and set left click, right click, scroll up, scroll down, double click. We’ve actually got three different clicking methods. So we’ve got one where you do the facial gesture and it clicks on and off for you. One where the click will be held down for as long as you hold that facial gesture, so if you’re dragging and dropping or kind of pressing to hold things, that can help, and it’ll unclick when you relax your face. And then a toggle, where you do the facial gesture you want and it’ll hold down the click for you.

You can relax your face, move whatever you need to, and then do the gesture again and it’ll release. For keyboard presses, we do let you combine facial gestures. What that allows you to do is keyboard shortcuts, so bringing up the onscreen keyboard, copying and pasting, Control C, Control V, those sorts of things. You can map those to the same facial gesture. And then if you look at that from a gaming perspective, you can do some pretty cool things with it. So you can do a smaller smile to make a character walk forwards, but then you can do a bigger smile to make them sprint.
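
The walk-versus-sprint idea boils down to banding one continuous gesture score into tiers. A minimal sketch, with illustrative thresholds that in practice would come from the user’s calibrated range of motion:

```python
# Illustrative thresholds only; in the real app they would come from the
# user's calibrated range of motion for that gesture.
def smile_to_key(score: float) -> str | None:
    """Band one continuous smile score into walk / sprint / nothing."""
    if score >= 0.70:       # big smile
        return "shift+w"    # sprint
    if score >= 0.30:       # small smile
        return "w"          # walk
    return None             # relaxed face: release movement keys
```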

Josh Anderson:

Nice. I love that. It’s just so customizable to the needs of the individual. And does this work on just Windows PC? Does it work on Windows and Mac? What operating systems support this?

Mike Hazlewood:

Yeah, just Windows currently. We’re a small team, so we decided to focus just on Windows-based systems. We do have plans to go after Mac and potentially Android in the future; it’s just about how we can get more developer resources in. But yeah, it’s Windows 10 and 11 compatible, and you don’t really need to do too much. You don’t need that powerful a computer to operate it. We’ve been testing on some older Microsoft Surface Pros, and it works great on there. Your limitation then is if you want to do anything more complex, like playing a game or things like that.

I should add as well that all of our facial recognition is done locally on your device, so we are not taking any data back to the cloud. So it’s great for your data privacy; we’re not retraining any models based off of it. But what it also allows, and what really makes this great, is that it reduces the CPU usage, and it also just means that it’s more responsive, so it works faster. And equally, for AAC users, you shouldn’t be relying on cellular internet for your voice. The tool should work if you are out and about in a rural area with limited cell reception, or if you’re in a very busy area where cell reception drops out.

Josh Anderson:

Oh, definitely, definitely. And I know, especially in today’s day and age, it’s great that that information is just stored locally and nothing’s going out, especially when it’s pictures of your face and the things that you’re doing. You definitely don’t want that out there. Mike, we talked a little bit about the different customizations, the different things that I can do. You said that it picks up on very slight movements, so I don’t have to have a whole lot of head movement left, right, or anything to be able to control it. Is that correct?

Mike Hazlewood:

Correct, yeah. So you don’t really need a lot of movement, and what we’re also building right now is a pathway for individuals to move the cursor purely by facial gesture. So you can raise your eyebrows, look left, look up, look down, or smile, and the cursor will move in that direction for you. We’re just testing that out at the moment to see how it goes, but yeah, it’s quite promising for individuals who have very limited movement.

Josh Anderson:

No, and it’s exciting as somebody who used to spend hours and hours trying to set up a system to measure someone’s eyes or something, and then the wind blows and it loses calibration, and you have to start from complete and utter scratch. So having something that an individual can turn on, look at, and be able to start functioning with, and, like you said, even independently be able to change those settings and do things, is just super, super exciting. Mike, I’ve got a ton of these, but can you tell me a story about someone’s experience using SensePilot?

Mike Hazlewood:

Yeah, I’ve been very fortunate to help a few people set up as we go along the journey. I guess it was one of the first people we started developing with, right after the hackathon. We have a friend who unfortunately suffered a spinal cord injury, and he uses a rollable mouse to operate his computer. He’s still able to work, but when it came to downtime, playing first-person video games, something like Call of Duty, he really struggled. We’d initially tried bumping up the sensitivities, but it still took quite a lot of fine scrolling.

So he was actually one of the people we were testing the software out with a lot, and we were working on the principle that if we can solve video gaming, where you need that responsiveness, that accuracy and stability, everything else on the computer should follow in line. Obviously, there are some other considerations to think about, but everything should be a little bit easier, so to say. But yeah, he was very patient with us. He tested some interesting software in the early days, but then got to a point where he is able to, in his downtime, play and keep up with everyone.

Josh Anderson:

Nice. And I love that. I love accessible gaming anyway, because it’s such a level playing field. You never know who you’re playing with, you don’t have to worry about any of those kinds of things, and you can do it from the privacy of your own home and interact with folks all over the world of all different kinds of abilities. So I love that this gives them a tool to be able to do that without having to set up an entire system in their house or even be all that techie. Mike, if our listeners want to find out more or download SensePilot for themselves, what’s a good way for them to do that?

Mike Hazlewood:

Yeah, so you can visit SensePilot.tech. That’s our website, and you can download it from there. We do a 30-day free trial, so no obligations. We don’t take any card details or anything like that, so it’s purely try it out and see if you like it. We think it’s important for people to be able to try it for a longer term, just to really understand how it works and see all the different settings and what could work for you as a person. It also helps to build up those muscle memory pathways; you are learning a new access method, so it takes time to work through that. And yeah, we’ve got a few tutorial videos on there, and if anyone wants to reach out and set up a personal walkthrough, we’re more than happy to facilitate that. You can just shoot us an email at info@SensePilot.tech.

Josh Anderson:

Awesome. We’ll put those down in the show notes. And I do love that you give a little bit of an extended trial, because, like you said, you need time to really get used to those facial expressions, but also, we all use our computers differently. We use them for different things, so it’s about making sure that it works really well in the way you want it to, for what you want to do with it. I mean, just opening up and surfing the Internet is one thing, but if you want to get into some deeper gameplay and some other stuff, it’s nice to have a little bit of time to test it out and really try. Mike, you kind of hinted at this just a little bit, but what are some exciting things coming down the pipeline for SensePilot that you’re allowed to share with us? No insider information that’ll get you in trouble, but…

Mike Hazlewood:

No, no, no, it’s all good. So we’ve been working on, we’ve actually got an early prototype for sound recognition. For individuals with atypical speech patterns, that’s going to be quite an interesting one. If you can repetitively make a sound, you can train a model, again, locally on device, we’re not taking anything back. Think whistling, clicking, humming, et cetera, whatever noise you can make repetitively: if you repeat that five to ten times, you can map it to a clicking action. We’re actually working right now with a music specialist who specializes heavily in training these models and how to do this. What we’re trying to increase right now is the confidence that it is the individual making that noise and not external interference. Clicking, for example, sounds very similar to someone dropping some keys or a room full of people applauding.

So we need to see how we can cut out that noise and have absolute certainty that it is the individual controlling the device who wants to make that command. That’s one of the things we’re working through now. We’ve also set ourselves a goal for the future to see if we could tackle eye gaze with a webcam. I think that’s going to be quite a complex one, but it’s just to see what we can do. So it’s something to play around with: how far can we go, how would it work, those sorts of things. I think that’ll be an interesting thing for us to dig into. Again, it’s seeing if we can create assistive tech without needing any additional hardware. There is still very much a place for dedicated eye trackers and things, but we just wanted to see what’s possible without them. Right now we’re picking up eye direction, but that’s major up, down, left, right movement, so how can we turn that into a more accurate eye gaze system?
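
Mike doesn’t describe the model here, but one lightweight way to “train” on five to ten repetitions of a sound is template matching: average the MFCC features of the recordings into a template, then accept new audio only when its features are close enough to it; the confidence threshold is what rejects look-alike background noise such as jangling keys or applause. A minimal sketch under that assumption (the library choice and threshold are mine, not SensePilot’s):

```python
# Assumed approach for illustration: MFCC template matching, not
# SensePilot's actual sound-recognition model.
import numpy as np
import librosa

def mfcc_template(clips: list[np.ndarray], sr: int = 16000) -> np.ndarray:
    """Average the MFCC features of 5-10 user recordings into one template."""
    feats = [librosa.feature.mfcc(y=c, sr=sr, n_mfcc=13).mean(axis=1) for c in clips]
    return np.mean(feats, axis=0)

def matches(template: np.ndarray, clip: np.ndarray, sr: int = 16000,
            min_confidence: float = 0.92) -> bool:
    """Accept new audio only when it is close enough to the trained template."""
    feat = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=13).mean(axis=1)
    cos = float(np.dot(template, feat) /
                (np.linalg.norm(template) * np.linalg.norm(feat)))
    return cos >= min_confidence  # below threshold: treat as background noise
```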

Josh Anderson:

No, that’d be pretty cool. Like you said, there’s always a time and a place for the dedicated stuff, but maybe for the more, I don’t want to say casual user, that’s not really it, but maybe for the folks who don’t have funding, or who just need another input method, maybe they don’t need that all the time, or just another way to be able to use it. That would be amazing. And I know other folks have tried, but nobody’s ever really got that to completely work. I’ve played with some of them, and it’s not… Well, it’s there, but it’s not wonderful.

Mike Hazlewood:

So that’s something we want to try. Why not set ourselves a challenge? So yeah, let’s see what we can do.

Josh Anderson:

Well, I mean, especially if you can mix that with all the other input methods. I just think of folks I’ve worked with who had very complex switch systems; if you can take away some of those and use facial expressions, use your eyes, use your head, use sound, as you said, it just makes it so much easier. And for folks with all different kinds of needs, it opens up so many more possibilities and so many new ways to control things.

Mike Hazlewood:

Yeah.

Josh Anderson:

So that is awesome. Well, Mike, we’ll put all the links down in the show notes so that folks can reach out, so that folks can check out SensePilot for themselves and be able to see how that might be able to meet their needs. But thank you so much for coming on today and telling us all about it. We’re very excited.

Mike Hazlewood:

Thank you for having me.

Josh Anderson:

Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? If so, call our listener line at 317-721-7124, send us an email at tech@eastersealscrossroads.org, or shoot us a note on Twitter @INDATAproject. Our captions and transcripts for the show are sponsored by the Indiana Telephone Relay Access Corporation, or InTRAC. You can find out more about InTRAC at relayindiana.com. A special thanks to Nicole Prieto for scheduling our amazing guests and making a mess of my schedule. Today’s show was produced, edited, hosted, and fretted over by yours truly. The opinions expressed by our guests are their own and may or may not reflect those of the INDATA Project, Easterseals Crossroads, our supporting partners, or this host. This was Your Assistive Technology Update. I’m Josh Anderson with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. We look forward to seeing you next time. Bye-bye.
