ATU513 – AVA App Project with Nicholas Giudice and Richard Corey

Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Special Guests: Dr. Richard Corey and Dr. Nicholas Giudice – VEMI Lab at the University of Maine – AVA App Project
Also find them on Twitter and Instagram
Bridging Apps: www.bridgingapps.org
——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: http://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA
——————  Transcript Starts Here ———————
Richard Corey:
Hi, I’m Richard Corey.

Nicholas Giudice:
And I’m Nicholas Giudice. We run the VEMI lab at the University of Maine, and this is your Assistive Technology Update.

Josh Anderson:
Hello, and welcome to your Assistive Technology Update, a weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs. I’m your host, Josh Anderson, with the INDATA project at Easterseals Crossroads in beautiful Indianapolis, Indiana. Welcome to Episode 513 of Assistive Technology Update. It’s scheduled to be released on March 26th, 2021.

Josh Anderson:
On today’s show we’re excited to welcome Richard Corey and Nicholas Giudice on. They’re from the VEMI lab at the University of Maine, and they’re going to talk all about the AVA app project that they’re currently working on. We also have Amy Fuchs back from BridgingApps with an app worth mentioning. Now let’s go ahead and get on with the show.

Josh Anderson:
After all these months of lockdown, maybe you’re looking for some new podcasts to listen to. Well, make sure to check out our sister podcasts, Accessibility Minute and ATFAQ, or Assistive Technology Frequently Asked Questions. If you’re super busy and don’t have time to listen to a full podcast, be sure to check out Accessibility Minute, our one-minute-long podcast that gives you just a little taste of something assistive technology based so that you’re able to get your assistive technology fix without taking up the whole day. Hosted by Tracy Castillo, this show comes out weekly.

Josh Anderson:
Our other show is ATFAQ, or Assistive Technology Frequently Asked Questions. Brian Norton leads our panel of experts, including myself, Belva Smith, and our own Tracy Castillo, as we try to answer your assistive technology questions. This show does rely on you, so we’re always looking for new questions, comments, or even your answers to assistive technology questions. So remember, if you’re looking for more assistive technology podcasts, you can check out our sister shows, Accessibility Minute and ATFAQ, wherever you get your podcasts, now including Spotify and Amazon Music.

Josh Anderson:
Next up on the show we’re happy to welcome back Amy Fuchs from BridgingApps with an app worth mentioning. Amy, take it away.

Amy Fuchs:
This is Amy Fuchs with BridgingApps, and this is an app worth mentioning. This week’s featured app is called Speech Sounds Visualized. Speech Sounds Visualized is an app that can be used by a speech-language pathologist to help teach children or adults how to pronounce speech sounds. The app presents a video x-ray of the mouth as a sound is produced. The therapist can use the video to show the client how to move each articulator to produce a sound correctly. The app provides videos of 23 consonant sounds and 12 vowel sounds; additional sound combinations can be bought for 99 cents. A front-facing video of a woman producing the sounds appears first, and then a side view of her mouth in x-ray appears with her producing the sound several times. There are written instructions on how to produce the sound, common errors made while producing the sounds, and suggested activities to teach the production of the sound.

Amy Fuchs:
A microphone is available for the client to record their own productions. The recording can be made while the video runs so that the client can compare their production to the correct one. There are also practice words in the initial, medial, and final positions, and practice sentences with initial- and final-position words. Each level has an example recording and the capacity to record the client’s voice. This app was reviewed with older elementary students working on a specific sound production. They were fascinated by the videos and gained important information that resulted in improved sound production. Specific phonemes were isolated, and the clients could compare their productions to the model. Improvement in speech sound production was noted at the sound and word levels. This app is an excellent resource for teaching sound production to older children and adults. The x-ray videos are of excellent quality and give clients valuable information on how to manipulate each articulator when making a particular phoneme. Speech Sounds Visualized is available for free on the iTunes store and is compatible with iPad and iPhone. For more information on this app and others like it, visit bridgingapps.org.

Josh Anderson:
Folks, on today’s show I’m really excited to introduce Richard Corey and Nicholas Giudice. They’re working on the AVA app project which will help open the door, no pun intended, to autonomous travel for individuals with disabilities. Today we’re going to learn a little bit more about them and about this exciting technology that they’re working on.

Josh Anderson:
Nicholas, Richard, welcome to the show.

Richard Corey:
Hey, thanks for having us.

Nicholas Giudice:
Thanks, Josh.

Josh Anderson:
You guys, I’m really excited to kind of talk about this technology and how it can really help folks, but before we do that could you tell our listeners a little bit about yourselves?

Richard Corey:
Yeah, sure. We’re from the VEMI lab at the University of Maine. This is a laboratory that’s focused on human-technology interfaces. We’ve been in existence for 12, almost 13, years now. Nick and I have been collaborative partners for that time, and we almost jokingly say that we have this academic marriage. But yeah, we’re studying the way in which humans interface with the technology in front of us, and we do a lot of work in wayfinding, especially assistive technology for people who are blind or visually impaired. We’ve been doing a lot of work in virtual and augmented reality and how to use some of that technology to assist with even simple tasks like riding a bike, for example, in the fog. We’ve also started to move into biotechnology as well.

Nicholas Giudice:
Just as a little background, we bring kind of an interesting mix to our studies. My background is experimental psychology, and Rick has a background in lots of things, including interactive design and collaboration, and we’re in kind of a computer science department, to broadly define it. So we’re really interested in saying, “Well, how can we take knowledge of human understanding and human interactions and lead to better technology and information access?” We come very strongly from the human side and also from our own first-person experiences. I’m congenitally blind, and so I bring a lot of my own kind of phenomenology and my own use of technology, frustrations with technology, frustrations with information access, into the types of things that we design to be multisensory. As Rick mentioned, bio-inspired designs: how can we make technology be more like how our brain works, using all of our senses instead of just visual design?

Nicholas Giudice:
So that’s kind of what gets us into this autonomous vehicle stuff that we’ve now been working in for a couple of years, and this newest project is one leg of that. This is an amazing opportunity for blind folks and for older individuals in terms of increasing transportation accessibility and independence, dealing with a lot of the challenges that are out there in just getting from place to place. But these vehicles are not currently accessible, and that’s not on the design table. The engineers are thinking about how to keep these things on the road, understandably, and how to keep them from crashing, and thinking about the sensors and the control algorithms, but not the human factors, and certainly not the accessibility aspects of the human factors. So we’re really interested in this end-to-end process. It’s not just making the car work once you’re in it; you have to figure out where it is and how to find it and how to get to it safely, which is what the AVA project is about. So it’s kind of a multistage process, and we’re trying to think about it from beginning to end so this new technology can be truly accessible for underrepresented drivers and people who would hugely benefit from it.

Josh Anderson:
I think we need to say yet. They’re not thinking about it yet.

Richard Corey:
Yeah, exactly. [crosstalk 00:08:35] In talks with the people in the autonomous vehicle industry, it’s on the agenda, but it’s not a priority yet.

Nicholas Giudice:
Well, and the DOT posting this challenge, to Rick’s point, shows that they are beginning to think about it, at least at kind of the policy and administrative level.

Josh Anderson:
But you guys brought up a ton of great points there, so I was going to kind of dissect it a little bit. Tell us a little bit about the US Department of Transportation Inclusive Design Challenge. We’ve had some other folks on who are involved in it, but tell us a little bit about that.

Nicholas Giudice:
It’s a really neat mechanism. They’ve set this up as groups that are kind of competing for a prize. So it’s not set up as a traditional research grant, which has lots of formal aspects and everyone in competition with each other. Instead, this is set up as: here’s a real problem, a growing problem that’s being recognized by the DOT and other governmental agencies, and even beginning to be recognized by the car manufacturers. How can we get people to think about this, to work together, to kind of build synergies and leverage expertise to solve some of these problems? So they used this idea of coopetition, so cooperating. It is still kind of a competition in some ways, because we are competing for these prizes, but there’s a real effort by the DOT to get the 10 groups that got this initial prize to work together. I say hats off to them. It’s a very innovative mechanism.

Richard Corey:
Yeah. I mean, you have to understand how unusual this is, for a government agency to be giving out a prize this way. Both Nick and I are absolutely amazed and thrilled. What a great way to get inclusive design out into the forefront, and not just for autonomous vehicles but in general, to say, “Hey, listen, we really need to start talking about this and we need to start investing in it”, and for them to give out these prizes and have this kind of collaborative structure set up. We’re talking to other people in the IDC prize phase, too. I couldn’t be more impressed, because sometimes you don’t really get government working this efficiently, so it’s kind of nice.

Nicholas Giudice:
And it’s not just academics. That’s what happens a lot of times when you have an NSF or NIH prize or award, and we have other grants like that, but this is actually getting people out of academia, which is really important, because as many listeners may know, you get academics doing things, and you write a paper and do a conference presentation, and then that’s it. What’s really driven Rick and me is that we want the work that we do at the VEMI lab to get out and actually make a difference, to get out to people and to be something that transcends any academic silos, and I think this mechanism is really encouraging that.

Josh Anderson:
Oh, definitely. Sometimes just getting all the different groups in the same room is amazing, because you never know what everyone else is working on. So being able to feed off each other, really, that collaboration can get some great things done.

Richard Corey:
Yeah, and it’s been really interesting, especially when we talk to people who are currently working in the industry. We’ve had talks with people in Washington who are thinking about policy on this, and you realize that everybody’s kind of got a different slant on it, and it sort of changes the way in which you start to think about how this is going to work, in particular for autonomous vehicles: how is that going to work down the road? What does it look like? Where are people going? In thinking about inclusivity, it’s a much broader term right now, because there are ideas out there of using fully autonomous vehicles to get your kids to school, and it’s like, oh, well, there’s a different design problem. How do you have a four-year-old travel in a car by itself? I don’t know.

Josh Anderson:
So guys, just getting back, tell me all about the AVA app project. You kind of mentioned it a little bit, but let’s dig a little bit deeper into it.

Richard Corey:
I’ll start. Nick will likely interrupt or jump in because that’s how we work.

Richard Corey:
The AVA app is an application that right now is looking at helping people who are using a ride-sharing fully autonomous vehicle get from the location they’re at to the vehicle, and we’re looking at ways in which-

Nicholas Giudice:
Let’s just stop for a second. This is important because almost all models/projections are saying that these vehicles will not necessarily be personally owned, but will be following this kind of ride-share Uber/Lyft-type model.

Richard Corey:
That’s a good point. Yeah. No matter what we’re looking at, there seems to be a big, big push towards this ride-sharing model. Actually, I’m going to switch this around. Let’s look at the current models of ride sharing that are out there with Uber and Lyft. You have a human driver who is going to pull up and park, and they can communicate with a person and say, “Hey, I’m over here”, or when they’re parking their vehicle they can be aware of obstacles or trash cans or, I don’t know, snowbanks. We’re up in Maine. So-

Nicholas Giudice:
Or you can text them. So I’ll often text them and say, “Hey, I can’t see, so when you get here you need to look for me and call out and say ‘I’m over here'”.

Richard Corey:
Yeah, yeah. And so-

Nicholas Giudice:
Because showing me the picture of the car and the license plate isn’t going to do much.

Richard Corey:
Yeah. It’s not going to help. So we are looking at this going, “Okay, that’s the current model, and we currently have this human-to-human interaction. How do we look at human-vehicle interaction, human-vehicle collaboration?”, which is a term that Nick and I have coined. How do we look at this? Because we know that these AI, these robotic vehicles, will be picking us up, and in some cases it will just be a single human getting in the back as a passenger. So now you go back to the same problem. Does the vehicle think in terms of “Don’t park so the door you’re going to enter is next to the trash cans or the snowbank or next to an icy patch that you can see”? And then the question is, how do you know? Because there’s no talk back. There’s nobody saying to you through a window, “Hey, I’m over here.” So what do you do in that case? What sort of technology is going to be available in the vehicle, on your phone, on your person to get there? That’s really the heart of what AVA is: how do you get from A to B safely just through technology? Because we’re at this junction in society where we’re kind of handing manual control over to an AI, and nobody’s saying it’s a good or a bad thing. We’re just saying it’s happening, and asking how we can use this technology to connect.

Nicholas Giudice:
And this is a general problem. Part of our vision of inclusivity is that it gets at inclusive design, universal design. How can something that might help a blind person or an older adult also be relevant for a 20-year-old who has normal vision? So the issue of just finding these things is... A lot of people experienced this back when we went to concerts or what have you. You come out and you’re trying to find your Uber and there are 25 cars. How do you know which one’s yours? This is going to be even worse when everything is an AI autonomous vehicle. There’s no human. They’re all often going to look similar within fleets, because that’s easier for the technology and the various sensors, and so a lot of people are going to experience this problem. But without the human in the loop, as Rick said, it’s particularly difficult for this last-meter problem, which may be a few meters: when I’m standing on the sidewalk, how do you actually get me to the actual car, when we don’t currently have a way of communicating between the human, me, and the AI driver?

Nicholas Giudice:
So what we’re working on within this app are ways that the car and the human can communicate more and techniques that we can use to help. Imagine when you’re using the app, you have your smartphone and it is connecting and talking with the desired vehicle. How can we provide other cues to kind of guide you in that would be very similar to what you might otherwise use?

Josh Anderson:
So what are some of those cues that you’re working on?

Richard Corey:
The simplest one right now is... I mean, all the technology has pretty much gone electronic. The simplest one is honk the horn, just so you can at least get kind of a spatialized angle for where the vehicle is. The other thing we’re looking at is using computer vision on the phones, using the cameras on our phones to help identify vehicles or the door, or even down to the handle, if there’s going to be a handle. There seems to be some argument over that. But looking down to how to get there. Basically, Nick and I have been working on this for 12 years, and the idea is how do we use current technology, simple consumer technology, to get people there, so you don’t have to buy these $100,000 whatever items to get through it. So we just want to use the phones. Current standards, maybe future standards, of where phones are and what’s in that technology that we can use, and what sort of wireless technology we can use to start to identify location: the combination of GPS, and then maybe we can get something a little shorter range out of a WiFi signal, or maybe improved Bluetooth. But yeah.
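
For a rough picture of that GPS layer, here is a minimal Python sketch: given the rider’s fix and the vehicle’s reported position, it computes how far away the car is and in what direction. The coordinates and function names below are illustrative only, not code from the AVA project.

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees)
    from a rider's GPS fix to a vehicle's reported position."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # Haversine formula for distance
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))

    # Initial bearing, normalized to 0-360 degrees clockwise from north
    y = math.sin(dlmb) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

# Made-up fixes: rider on the sidewalk, vehicle parked a short distance away
rider = (44.9012, -68.6672)    # hypothetical coordinates near Orono, ME
vehicle = (44.9013, -68.6669)
d, b = distance_and_bearing(*rider, *vehicle)
print(f"Vehicle is {d:.0f} m away at bearing {b:.0f} degrees")
```

Consumer GPS alone is only good to a few meters at best, which is why Rick mentions fusing it with shorter-range signals like WiFi or Bluetooth for the final approach.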

Nicholas Giudice:
And this is where working with the OEM car manufacturers will be helpful in this type of coopetition in the IDC grant and other formal connections with car companies that say, “We’re developing stuff. We need to be able to connect into your central system so we can talk more seamlessly with a vehicle”, and I think that’s what will start happening.

Nicholas Giudice:
Part of it’s targeting, as Rick said, beeping the horn, but you will also be doing this through an app, so the thing will know it’s Josh, and it could say, “Josh, over here”, and we can actually have it speaking. From a standpoint of hearing, that’s very useful because it provides a spatialized location; you’re hearing it coming from a specific place in space. Our auditory system is very good at that, and that’s the beauty of having a human driver doing that, because you know where that thing is.
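
To get a feel for why a spatialized “Josh, over here” is so useful, here is a toy Python sketch, not from the project, that writes a stereo beep whose left/right balance encodes the vehicle’s direction. Real spatial audio engines also model timing and spectral cues; this is the simplest possible version of the idea.

```python
import numpy as np
import wave

def panned_beep(azimuth_deg, path="over_here.wav", freq=880.0,
                dur=0.5, rate=44100):
    """Write a stereo beep whose left/right balance encodes azimuth:
    -90 = hard left, 0 = straight ahead, +90 = hard right."""
    t = np.linspace(0, dur, int(rate * dur), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    pan = (np.clip(azimuth_deg, -90, 90) + 90) / 180.0  # 0..1, left to right
    # Equal-power panning keeps loudness roughly constant as the source moves.
    left = tone * np.cos(pan * np.pi / 2)
    right = tone * np.sin(pan * np.pi / 2)
    stereo = (np.stack([left, right], axis=1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(2)
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(rate)
        f.writeframes(stereo.tobytes())

panned_beep(azimuth_deg=45)  # vehicle ahead and to the right
```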

Nicholas Giudice:
Rick mentioned using visual overlays and what we think of as compensatory augmentations, or visual augmented reality. You can say, “Well, why is that useful?”, especially if your focus is on blind people and older adults, but most blindness is not total. The vast majority of people who are blind have residual vision that’s usable, and a lot of assistive technologies don’t account for that. People are very interested in developing stuff that’s purely nonvisual, which essentially ignores 90-95% of the distribution, the population of people who are legally blind or have visual impairments. So in many cases, if we can make something that highlights the edge of something (imagine a train platform that has high-contrast edges), that’s hugely important for reducing falls, and it attracts people’s attention. So if we can use the phone and some computer vision to detect the outline of the car and highlight the handle to draw people’s attention, that’s really useful for making navigation efficient and limiting the awkwardness. Even if you know you’re at the car, you don’t want to be feeling around trying to find the handle to get into it. You want to make these things as graceful and seamless as possible, and so the more we can use these different senses and different technologies on commercial hardware, as Rick said, the better. We really feel this is hugely important.
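
Detecting a specific car or its door handle would take a trained object detector, but the high-contrast overlay idea itself can be sketched in a few lines of OpenCV. The snippet below is illustrative only (the file names are placeholders): it finds strong edges in a camera frame and repaints them in saturated yellow, the kind of compensatory augmentation described above.

```python
import cv2
import numpy as np

# Placeholder frame; in a real app this would come from the phone's
# live camera feed.
frame = cv2.imread("camera_frame.jpg")
assert frame is not None, "replace camera_frame.jpg with a real image"

# Detect strong edges, then thicken them so they stay visible to
# someone with reduced acuity.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))

# Paint the detected edges in saturated yellow over the original frame.
overlay = frame.copy()
overlay[edges > 0] = (0, 255, 255)  # BGR yellow
cv2.imwrite("highlighted_frame.jpg", overlay)
```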

Nicholas Giudice:
We can couple that with other nonvisual senses, too. We do a lot of work with vibration on the phone, so we can also use that for our guidance mode: when the phone is pointing the right way, it’s going to be vibrating. We can use different auditory cues. Combining all this multisensory information into our interfaces is kind of our bread and butter, and it’s something we’ve just always found so interesting. Why does so much technology rely on one sensory mode when that’s just not at all how we actually experience the world?
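
The exact vibration scheme isn’t spelled out here, but a common pattern for this kind of guidance mode is to pulse faster as the phone’s heading converges on the vehicle’s bearing. A minimal sketch, with made-up timing parameters:

```python
def pulse_interval(heading_error_deg, min_s=0.1, max_s=1.0):
    """Map the angle between where the phone points and where the
    vehicle is into a vibration pulse interval: dead-on means rapid
    pulses, pointing away means slow ones."""
    err = abs((heading_error_deg + 180) % 360 - 180)  # fold into 0..180
    return min_s + (max_s - min_s) * (err / 180.0)

for err in (0, 30, 90, 180):
    print(f"{err:3d} degrees off target -> pulse every {pulse_interval(err):.2f} s")
```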

Richard Corey:
It’s funny Nick brought that up. One of the projects we did years ago tackled just that question: why do we rely so heavily on vision? So we made an entire VR simulation with the idea that it all goes black, and then you have to navigate without vision in a virtual reality simulator, which seems completely counter to the idea of having a head-mounted display that is visual, but then we basically break it down to audio cues only.

Josh Anderson:
I’d like to try that out. I bet that actually would be kind of fun or at least give you a better idea of those folks that you’re working with and what they’re dealing with all the time.

Nicholas Giudice:
Josh, you have no idea how badly I want you to try that out, and as soon as the darn pandemic is over we’ll come and have some fun.

Josh Anderson:
Yes, yes, yes. We definitely will. And I really like the way that you guys are making this not one-size-fits-all. Like you said, you’ve got all different kinds of cues and all different kinds of ways, because I know even when they first started talking about autonomous vehicles, all of us were very excited, because transportation is just such a barrier for folks with all different kinds of disabilities, just getting around and being independent, not having to rely on other folks. Depending on where you live, sometimes there’s transportation available and it may or may not be good. Then if you live other places, there isn’t even anything available, so you’re always relying on others. So I love that you guys are thinking of this part that may have gotten completely overlooked.

Richard Corey:
I know. It’s funny you bring that up, because I’m not 100% sure that it has been overlooked. I think the autonomous vehicle companies are very aware there’s an entire demographic that they haven’t even touched yet, and it’s not just an accessibility issue. I mean, you’re looking at older adults. How many people have had their licenses taken away for one reason or another, whether just age-related or they’ve shrunk down below the steering wheel or something like that?

Nicholas Giudice:
Well, that’s the aspect of inclusion. Thinking about accessibility as being tied to disability, I think, is kind of a limited way to think about it. We talk about it in terms of information access. Information can be lots of different types of information in lots of different scenarios for lots of different people. Maine has the oldest population in the country, so it’s a great place for us to be studying this. It’s also one of the things that we have thought about, and we don’t necessarily have data to support this yet, but this is something that we’re beginning to look into. As Rick said, for older adults it’s a real challenge, because taking a license away is a big deal. If you look at the statistics for accidents and injury, it’s huge in people over 70 and particularly over 80. So older adults have really adopted ride sharing in a lot of ways, because it’s a way to get around. You don’t have to worry about driving. You could [crosstalk 00:23:44]

Richard Corey:
[crosstalk 00:23:43]

Nicholas Giudice:
Yeah, exactly. So there may be a scenario here where older adults are actually some of the early adopters of autonomous vehicles, because they see it as something that can really solve a problem. Normally, when you’re thinking about high-tech technology, the early adopters are the 20-somethings or teens. So I think this has the potential to reach a very large demographic of our population. One of the fastest-growing demographics is aging adults, corresponding with vision loss, and so I think that these groups really get the potential here if we can make it work.

Josh Anderson:
Well, guys, if our listeners want to find out more about you, about the AVA project, or about any of that stuff, what’s the best way for them to do that?

Richard Corey:
The VEMI lab at the University of Maine. Pretty simple. Look it up, the University of Maine. Either do a search online for the VEMI lab, V-E-M-I, or we’re on Twitter, we’re on Instagram. I don’t know. We’re all over the place.

Josh Anderson:
[inaudible 00:24:42] the actual website?

Richard Corey:
Do we still have the… I think it’s… What is it now? It’s vemilab.org?

Nicholas Giudice:
They can go there. Yeah. vemilab.org. Or you can go through the University of Maine.

Richard Corey:
Yeah, the University of Maine, which [crosstalk 00:24:55]

Nicholas Giudice:
umaine.edu/vemi. There’s also a specific website for our autonomous vehicle research group; if you search for it, you’ll find work on this project and other things we’re doing in the autonomous realm.

Richard Corey:
Which is under the VEMI lab.

Nicholas Giudice:
If there are people out there who are interested, or who are part of the demographics that we’re interested in, we’d encourage them to reach out, because we’re going to be doing some surveys. We’re going to be doing some stuff where we really want to get a broader range of people. You have a big base of listeners, and a lot of them are in the demographics we’re particularly interested in. If we can get them to be part of the grassroots to make this work, that’d be great.

Josh Anderson:
All right. Excellent, guys. We’ll put links to those over in the show notes so that folks can easily find those.

Josh Anderson:
Well, Richard, Nicholas, thank you so much for coming on the show today, telling us all about the AVA app project, but also just making sure that we’re thinking about these things as we make new technologies and making sure that we make everything inclusive for all. I’m glad you guys are involved in that US Department of Transportation Inclusive Design Challenge along with some other folks we’ve talked to here on the show, because I think some really great things are going to come out of it, and I really do like that inclusive design is being included early in the process as opposed to something that folks are just trying to add on later.

Richard Corey:
Fully agree with that. Thank you.

Nicholas Giudice:
Thanks, Josh. It was fun to be here.

Josh Anderson:
Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? If you do, call our listener line at (317) 721-7124, shoot us a note on Twitter @INDATAproject, or check us out on Facebook. Are you looking for a transcript or show notes? Head on over to our website at www.eastersealstech.com.

Josh Anderson:
Assistive Technology Update is a proud member of the Accessibility Channel. For more shows like this plus so much more, head over to accessibilitychannel.com.

Josh Anderson:
The views expressed by our guests are not necessarily those of this host or the INDATA project. This has been your Assistive Technology Update. I’m Josh Anderson with the INDATA project at Easterseals Crossroads in Indianapolis, Indiana. Thank you so much for listening, and we’ll see you next time.
