ATU559 – ATLAS II with Dr. Julian Brinkley

Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Special Guest:

Dr. Brinkley – Assistant Professor of Human-Centered Computing at Clemson University and Director of the DRIVE Lab

ATLAS II (Accessible Technology Leveraged for Autonomous Vehicle Systems)

Twitter – @julianbrinkley

website – www.drivelab.org

Stories:

Wordle Story: https://bit.ly/338EFPu

INDATA Full Day Training: Job Accommodation Bootcamp
Registration and more information: https://bit.ly/3qBRBXc

Info on all of our Full Day Trainings: https://bit.ly/3472bK7
——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: http://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA

—– Transcript Starts Here —–

Dr. Julian Brinkley:
Hi, this is Dr. Julian Brinkley and I’m an assistant professor of human-centered computing at Clemson University and also the director of the DRIVE Lab, and this is your Assistive Technology Update.

Josh Anderson:
Hello, and welcome to your Assistive Technology Update, a weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs. I’m your host, Josh Anderson with the INDATA project at Easterseals Crossroads in beautiful Indianapolis, Indiana. Welcome to episode 559 of Assistive Technology Update, which is scheduled to be released on February 11th, 2022. On today’s show we’re very excited to have Dr. Julian Brinkley on to talk about the ATLAS II project, and that stands for Accessible Technology Leveraged for Autonomous Vehicle Systems. Dr. Brinkley and his crew are among the finalists in the Department of Transportation’s Inclusive Design Challenge. He’s going to tell us not just about ATLAS II, but also about the DRIVE Lab, which he’s the director of at Clemson University. We also have a quick story about those Wordle results that you’re all sharing online and what they sound like to someone who uses a screen reader.

Josh Anderson:
Please don’t forget, if you would ever like to reach out to us, you can send us an email at tech@eastersealscrossroads.org, call our listener line at (317) 721-7124 or drop us a line on Twitter @INDATAproject. We’re always looking for your questions or comments, ways we can make the show better, things we can add, things we could remove, or if you’ve got an idea for a guest or maybe something you’ve just always wanted to know more about, please reach out to us, we would love to hear from you. As always, we thank you so much for taking time out of your day to listen to us. Now, let’s go ahead and get on with the show.

Josh Anderson:
We at INDATA are very excited to announce that our next full day training will be coming up on Thursday, February 17th from 9:00 AM to 2:00 PM Eastern. This online training is called Job Accommodation Bootcamp, and we will have some great presentations about reasonable accommodations, a panel discussion with different consumers talking about their experiences with job accommodations, and then I’ll be talking in the afternoon about the whole job accommodation process: hiccups and glitches, things that can go wrong, as well as showing some examples of some different job accommodations. If you ever want to learn a little bit more about the ADA, what reasonable accommodations might be, and the job accommodation process as a whole, please join us for our next full day training online on February 17th from 9:00 AM to 2:00 PM. I’ll put a link down in the show notes so that you can register, and I look forward to seeing you there, at least virtually.

Josh Anderson:
Maybe you’re looking for some new podcasts to listen to. Make sure to check out our sister podcasts, Accessibility Minute and ATFAQ, or Assistive Technology Frequently Asked Questions. If you’re super busy and don’t have time to listen to a full podcast, be sure to check out Accessibility Minute, our one-minute-long podcast that gives you just a little taste of something assistive technology based so that you’re able to get your assistive technology fix without taking up your whole day. Hosted by Tracy Castillo, the show comes out weekly. Our other show is Assistive Technology Frequently Asked Questions, or ATFAQ. On Assistive Technology Frequently Asked Questions, Brian Norton leads our panel of experts, including myself, Belva Smith, and our own Tracy Castillo, as we try to answer your assistive technology questions. The show does rely on you, so we’re always looking for new questions, comments, or even your answers to assistive technology questions. Remember, if you’re looking for more assistive technology podcasts to check out, you can find our sister shows Accessibility Minute and ATFAQ wherever you get your podcasts, now including Spotify and Amazon Music.

Josh Anderson:
So our first story today is about Wordle. Now, if you’ve ever played this game, or if you have any social media, you probably know what this game is. Wordle is just a small game; it gives you six chances to guess a five-letter word, tells you if you get the letters right, and things like that. It was created by a gentleman who made it for his partner so she’d have something to do. She’s always loved word games, so he programmed it and made it for her, and it has turned into a pretty big thing. Well, if you have social media and you have friends who play Wordle, they probably send you these little readouts to let you know how they did. At the top it’ll have the number of the Wordle, whatever it is. And then it’ll have five out of six, six out of six, three out of six, telling you how many tries it took for them to get it.

Josh Anderson:
And then below that, it’s got a bunch of squares, white squares, green squares, yellow squares, to let you know which letters that person got right. Then you can compare and fight with each other and see who got it faster and things like that. Well, I found a story over at Slate and it’s titled, Your Wordle Results Are Annoying, but Not for the Reasons You Think. It goes on to talk about individuals who use a screen reader or other kinds of assistive technology and what these results actually sound like or look like to them. Much like emojis and other things, these Wordle results are very hard to understand and fill up your feed if you are using a screen reader or anything like that. Just to hear what it sounds like to an individual who’s using that, this was shared on Twitter by Crystal Preston-Watson. She’s an accessibility engineer and a screen reader user, and this is what those Wordle results sound like on her Pixel 6 using TalkBack.

Speaker 3:
Wordle 199 and five, six, large square, green square, white large square yellow, square yellow, square white large, square green, square yellow, square white large, square green, square green, square green, square white large, square yellow, square white large, square green, square green, square green, square white large, square green, square green, square green, square green, square green, square green, square.

Josh Anderson:
As you can see, that’s not only really hard to understand, it’s obnoxious. I mean, could you imagine if you’ve got 10 friends playing Wordle and they’re all sharing their stuff, and you come through these, and this is what you’re hearing every damn time? I mean, that would just make you probably get rid of the app, quit going through it, not follow or friend folks again, just because you don’t want to have to keep hearing that. Not only do you probably really not care how they’re doing on Wordle, you probably don’t want to have that filling up your feed and taking up all of your time. This is an issue that’s been around for a while; I think we’ve talked about it on this show before. If you do have a friend or somebody who’s using a screen reader, do your best to maybe leave out some of those emojis and don’t do too awful many of them.

Josh Anderson:
The story doesn’t really talk about the game itself, or whether the game itself is inaccessible. And if you really think about it, this was made by one individual for someone else, and it just took off and blew up; he probably had no idea that this many people were going to use it, so he probably didn’t even really consider the need for accessibility. He made a simple game, simple rules, simple way to play, and didn’t even think about those things. Well, Wordle recently was bought by the New York Times for more than a million dollars. The New York Times definitely has the funds to make it a little bit more accessible and include everyone in being able to use it. What I really like about this story is, as you get down towards the bottom, it shows you how to share a more accessible Wordle score.

Josh Anderson:
And it walks you through some steps. Instead of just sharing your results directly from Wordle, you would instead share them as an image with alt text behind it. It actually walks through how to do all of those steps. I’ll put a link in the show notes over to this, but really I think it’s just a great reminder to all of us, whether we have a disability or not, to make sure that if we’re sharing things, we try to make that stuff accessible. I mean, if you’re sharing stuff on social media, it means you want someone to read it, I assume, or you wouldn’t be putting it out there. If you make it more accessible, more folks can access your content. We talk on here a lot about website accessibility and things like that, but we really have to think of this on a personal level as well.
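
Just to make the general idea concrete, here is a minimal Python sketch of turning a copied Wordle result grid into a short plain-text summary that reads sensibly with a screen reader. The emoji mapping and the summary wording are illustrative assumptions, not steps taken from the article.

# Minimal sketch (not from the Slate article): convert a shared Wordle emoji
# grid into a short plain-text summary that reads well with a screen reader.
WORDLE_SQUARES = {
    "🟩": "correct",   # right letter, right spot
    "🟨": "present",   # right letter, wrong spot
    "⬜": "absent",    # letter not in the word (light theme)
    "⬛": "absent",    # letter not in the word (dark theme)
}

def describe_wordle(share_text: str) -> str:
    """Turn copied Wordle share text into a screen-reader-friendly summary."""
    lines = share_text.strip().splitlines()
    header, rows = lines[0], [line for line in lines[1:] if line.strip()]
    parts = [header + "."]
    for i, row in enumerate(rows, start=1):
        counts = {"correct": 0, "present": 0, "absent": 0}
        for ch in row:
            if ch in WORDLE_SQUARES:
                counts[WORDLE_SQUARES[ch]] += 1
        parts.append(
            f"Guess {i}: {counts['correct']} correct, "
            f"{counts['present']} in the wrong spot, {counts['absent']} not in the word."
        )
    return " ".join(parts)

example = "Wordle 199 4/6\n\n⬜🟨🟨⬜🟩\n🟨⬜🟩🟩🟩\n⬜🟩🟩🟩🟩\n🟩🟩🟩🟩🟩"
print(describe_wordle(example))

Run on a four-guess result, that prints something like "Wordle 199 4/6. Guess 1: 1 correct, 2 in the wrong spot, 2 not in the word…" instead of a long run of "green square, yellow square."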

Josh Anderson:
We have to make sure the information we’re sharing, the information we’re trying to get out there, can be accessed by anyone regardless of how they access that information. Again, we’ll put a link to this over in the show notes so that if you are going to share those Wordle results and how well you did today, maybe there’s a way to make them a little more accessible so all your friends and anyone else can access them as needed. Listeners, this week we’re super excited to welcome to the show Dr. Julian Brinkley from the DRIVE Lab at Clemson University to talk about the ATLAS II project that he’s a part of for the Department of Transportation’s Inclusive Design Challenge. Dr. Brinkley, welcome to the show.

Dr. Julian Brinkley:
Hello. Thank you for having me.

Josh Anderson:
Thank you so much for coming on. Before we get into talking about ATLAS II and the really cool things you guys are doing, could you tell our listeners a little bit about yourself?

Dr. Julian Brinkley:
Yes. I am an assistant professor of human-centered computing at Clemson University. I run a research lab there, located on the automotive engineering campus in Greenville, South Carolina, called the DRIVE Lab. Within that lab, we work on a variety of different topics and technologies specifically focused on assisting persons with disabilities. We work on things like accessible autonomous vehicles, accessible social networking sites, and a variety of other technologies of that nature. A lot of the work that we’ve been doing over the past four to five years or so has revolved around issues of personal mobility and transportation, which is what has led us to the work that we’re currently doing, focused on understanding how to make autonomous vehicles accessible for people with a variety of different disabilities.

Josh Anderson:
Well, you led me right into my next question, because your project for the Department of Transportation’s Inclusive Design Challenge is ATLAS II. Tell us about that.

Dr. Julian Brinkley:
The ATLAS II project largely stems from work that I did at the University of Florida while I was working on my PhD under the direction of Dr. Juan Gilbert. Within that initial ATLAS project specifically, the goal was really to prototype an accessible Human-Machine Interface specifically for visually impaired persons.

Dr. Julian Brinkley:
People who are blind, who have no useful vision, or people who may have low vision. The goal with that project was really to prototype, not anything that you would necessarily put directly in an actual vehicle, but more so a technology to really explore some of the different aspects of accessibility for people who are visually impaired. So the goal of that project was really to prototype an early-stage technology intended to be accessible for visually impaired persons.

Dr. Julian Brinkley:
The ATLAS II project, which has been largely funded through the Department of Transportation’s Inclusive Design Challenge, has been more or less focused on expanding the ATLAS I system and prototype to really further what we’re able to do with it. That has been the goal of the ATLAS II project. ATLAS I: dissertation research, University of Florida. ATLAS II: really building on that and going a lot further in terms of the features and capabilities of that specific prototype.

Josh Anderson:
Excellent. What are some of those features and capabilities that you’re trying to get up and working?

Dr. Julian Brinkley:
We used a user-centered design process in designing ATLAS I, really with the goal of trying to understand what the specific needs of visually impaired persons are with respect to using an autonomous vehicle. The goal with that project was to really get a foundational understanding of what the specific needs are: what does a person with limited usable vision really need from a Human-Machine Interface? And by Human-Machine Interface, just for the purposes of your listeners, I basically mean the mechanism with which a person may interact with a technology, in this case an autonomous vehicle.

Dr. Julian Brinkley:
The goal in this case was to really try to understand what those needs are from a really foundational perspective. With the ATLAS I project, similar to ATLAS II, we really wanted to allow our visually impaired co-designers to lead. We basically brought in people who are visually impaired, with a variety of different degrees of vision loss, to really assist us and lead the effort in basically designing this Human-Machine Interface. The work that we did was largely steered by actual human beings, which is what I prefer in terms of doing this type of work, and they really led us down the path of what accessibility looks like in this context. Some of the specific features, to more directly answer your question: the ATLAS system basically is very similar to what you would find with just a standard navigation system in an automobile.

Dr. Julian Brinkley:
But what it does is things like monitoring the user’s affect. Basically, you can think about that as the system trying to get an understanding of what the user’s degree of comfort is in the vehicle in order to take certain actions one way or the other. And by that, I mean the system basically continuously monitors the user’s face to detect whether there’s any degree of discomfort, for instance, and to adjust the vehicle dynamics and ride performance and things of that nature accordingly. And again, this is a prototype, so a lot of these things are not necessarily completely integrated into an actual vehicle. We use a process called [REVS 00:14:49], which was developed at Stanford, to allow us to integrate our prototype in a safe way into a conventional vehicle that we then configure to appear to be an autonomous vehicle.
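
To make that a little more concrete, here is a minimal sketch of the kind of comfort-monitoring loop being described. It is illustrative only: the discomfort detector, the ride parameters, and the vehicle hookup are hypothetical placeholders, not the actual ATLAS II implementation.

# Illustrative sketch only, not ATLAS II code. The discomfort detector and
# the ride parameters below are hypothetical placeholders.
import time
from dataclasses import dataclass

@dataclass
class RideProfile:
    max_accel_mps2: float = 2.0    # forward acceleration limit
    max_lateral_g: float = 0.30    # cornering aggressiveness
    following_gap_s: float = 2.0   # time gap kept to the vehicle ahead

def detect_discomfort(frame) -> float:
    # Placeholder: a real system would run a facial-affect model on the camera
    # frame and return a score from 0.0 (relaxed) to 1.0 (visibly distressed).
    return 0.0

def soften_ride(profile: RideProfile) -> None:
    # Nudge the driving style toward a gentler ride.
    profile.max_accel_mps2 = max(1.0, profile.max_accel_mps2 - 0.2)
    profile.max_lateral_g = max(0.15, profile.max_lateral_g - 0.05)
    profile.following_gap_s = min(3.0, profile.following_gap_s + 0.2)

def comfort_loop(read_frame, apply_profile, profile: RideProfile, interval_s: float = 1.0):
    # Continuously watch the rider's face and hand updated ride limits to the vehicle.
    while True:
        if detect_discomfort(read_frame()) > 0.7:
            soften_ride(profile)
        apply_profile(profile)
        time.sleep(interval_s)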

Dr. Julian Brinkley:
One of the other features is that the system uses spatial audio to basically broadcast within the vehicle where potential hazards are. One of the things that we identified in some of our work was that if you’re a person with limited to no useful vision, it may be particularly challenging if you’re thinking about an autonomous vehicle used as part of a ride-sharing service, an autonomous vehicle that may actually drop you off at a location and then leave. If you can’t physically see to verify that you arrived at the correct location, that can be potentially disconcerting and fundamentally dangerous. So one of the things that the system does as well is to basically vocalize what’s outside: it uses computer vision to see for you, in a sense, what is outside of the vehicle and then describes that internally. For instance, if you intend to go to a shopping center and the system vocalizes that you’ve arrived at a field with cows, you’re probably not at the appropriate location. Things like that.
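
The arrival check described there could look something like the following rough sketch. The scene-description function, the destination name, and the keyword match are assumptions made purely for illustration; the real system would use its own computer vision pipeline.

# Sketch of the drop-off check described above; describe_scene() stands in
# for whatever computer-vision captioning model the real system uses.
def describe_scene(image) -> str:
    # Placeholder caption; a real system would run a vision model on the image.
    return "an open field with several cows"

def verify_arrival(image, expected_destination: str, announce) -> bool:
    """Speak what the camera sees and warn if it doesn't match the destination."""
    description = describe_scene(image)
    announce(f"You have arrived. Outside the vehicle I see {description}.")
    # Naive keyword check; a production system would need something far more robust.
    keywords = [w for w in expected_destination.lower().split() if len(w) > 3]
    matched = any(w in description.lower() for w in keywords)
    if not matched:
        announce(f"Warning: this does not appear to be {expected_destination}. "
                 "You may want to confirm the location before leaving the vehicle.")
    return matched

# Example: the cow-field scenario from the interview (destination name is made up).
verify_arrival(image=None, expected_destination="Westgate Shopping Center", announce=print)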

Josh Anderson:
Oh, that’s excellent. I know that’s always been a problem, maybe not so much with ride sharing, but with a lot of the paratransit things. They do a lot of dropping off at the back door or on the other side of the street and just not always giving you that information. From a safety perspective and just from a comfort perspective, that’s great that it’s able to do that for these individuals.

Dr. Julian Brinkley:
Yes.

Josh Anderson:
And Dr. Brinkley, talking about the design challenge, how has that been going?

Dr. Julian Brinkley:
That’s been interesting. That’s been one of those types of things; I really have enjoyed the design challenge, and it has been challenging. I will say that we’ve had a relatively large, multi-university team basically working on this project. We’ve had faculty from George Mason University collaborate with us on this project, Dr. Vivian Genaro Motti; we have faculty in automotive engineering, Dr. Jerome McClendon, working on this project. Then of course we have people from our own lab, to include human factors psychology PhD students, data science students, and so on and so forth. We’ve had a relatively large team actually working on this project. And I don’t want to reveal too much in terms of what we are intending to ultimately produce, but one of the things that we wanted to do for ATLAS II, in terms of building on some of the capabilities of the first prototype, was to basically build in an adaptive component.

Dr. Julian Brinkley:
By adaptive, I mean essentially the ability for the system to basically learn from the user and adapt accordingly. To describe that a little bit more concretely, what we really envisioned was a system that, instead of presenting one graphical user interface that’s just pretty much standard, so your interface looks like the next person’s interface, which looks like the next person’s interface, could learn from the user. We wanted to try to design an interface that could basically learn from the user’s interactions. And by interactions, I mean interactions both within the system and interactions external to the system, so smartphone interactions via an app, for instance. Basically, it learns from the interactions that the user may have on their mobile device and also within the system, using that data to modify the user interface in such a way as to provide a more accessible and usable user experience.
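
As a very rough illustration of what learning from interactions to adapt an interface could look like, here is a small sketch. The feature names, thresholds, and settings are invented for illustration and are not the ATLAS II implementation.

# Illustrative only: adapt a couple of interface settings from logged
# interactions (in-vehicle and from a companion smartphone app).
from dataclasses import dataclass

@dataclass
class InterfaceProfile:
    font_scale: float = 1.0        # ignored if the user relies mostly on speech
    speech_rate_wpm: int = 160
    prefer_voice_input: bool = False

@dataclass
class InteractionLog:
    voice_commands: int = 0
    touch_interactions: int = 0
    repeat_requests: int = 0       # times the user asked to hear something again
    zoom_gestures: int = 0         # pinch-to-zoom events logged by the phone app

def adapt(profile: InterfaceProfile, log: InteractionLog) -> InterfaceProfile:
    total = log.voice_commands + log.touch_interactions
    if total and log.voice_commands / total > 0.7:
        profile.prefer_voice_input = True                                 # user mostly talks to the system
    if log.repeat_requests >= 3:
        profile.speech_rate_wpm = max(120, profile.speech_rate_wpm - 10)  # slow the speech down
    if log.zoom_gestures >= 5:
        profile.font_scale = min(2.0, profile.font_scale + 0.25)          # enlarge on-screen text
    return profile

# Example: a rider who mostly uses voice and has asked for repeats several times.
log = InteractionLog(voice_commands=40, touch_interactions=5, repeat_requests=4)
print(adapt(InterfaceProfile(), log))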

Dr. Julian Brinkley:
We’ve been trying to build in that adaptive component, and that has been very challenging. My hope is that we’ll be able to have that completed by the conclusion of the challenge to demo, but that has been really significantly challenging. One of the other big things that we’ve done with respect to the challenge is think about the prospect of interaction within the vehicle. We wanted to focus on interaction within the vehicle and what that basically looks like and consists of, but also to think about the transition from the vehicle to the final destination. How do we get the user from the vehicle to that final destination, to that doorway or entryway where he or she is trying to get to? As part of that process, we have basically built some computer vision based glasses that read the external environment. They can be used inside of the vehicle to basically provide input on what’s going on, where you are, and what you’re actually seeing when you look out, and they can also be used to basically guide and navigate the user to that final destination.

Josh Anderson:
Nice. I’m glad you guys are doing that, because so many times the focus is just on getting the vehicle to where you’re going, but not on that last step, and that last step sometimes is the hardest part. So that’s great that you guys are thinking about that and working on that as well. Now, I know we had you on today to talk about ATLAS II and the design challenge and everything else, but I know you’ve also done a lot of other research on leveraging technology for the social good, and I’m always interested in the good things that technology can do. Could you give us a little taste of some of your studies, findings, or passions for these projects?

Dr. Julian Brinkley:
One of the big things that motivates me is basically how we can use technology to make the world a better place. How can we use technology to basically connect people, to improve quality of life, to address some of the social ills that we deal with, like homelessness and some of these other things? That’s been a big motivating factor for me in some of that research, including some of the recent research that we’ve been doing. I’ve been doing research on social networking site accessibility at this point for about 10 years. Some of that work, I think, really reflects what we’re trying to do and really the mission of the lab in a broader sense. Social networking sites can have a number of positive impacts in terms of basically reducing social isolation for people who may have mobility challenges and increasing feelings of social inclusion, so they can have a number of positive impacts.

Dr. Julian Brinkley:
I know we hear a lot about some of the negative aspects of social networking sites. But really, properly utilized, they can have a number of positive benefits. Some of the work that we’ve been doing in the lab has specifically focused on how do we make social networking sites truly accessible and inclusive for people with a variety of different disabilities. My students have been leading the effort on a lot of that work, so we’ve been trying to come up with strategies and technologies that basically improve accessibility. Another area that we do a lot of work in, and this is really championed by my student Earl Huff, Jr., who’s actually going to be Dr. Earl Huff, Jr. here in the next few months.

Josh Anderson:
Oh, good.

Dr. Julian Brinkley:
That work is specifically focused on understanding how we can make computer science education more accessible for people who are visually impaired. We often hear about many of these challenges that people with disabilities face just in terms of securing employment and being successful in the workplace and things of that nature. We take a different approach. Based on what we have looked at and the information that we’ve explored, people who are visually impaired may have a number of unique capabilities in terms of just things that they have learned how to do as a result of their visual impairment. Things like memory, being able to commit certain things to memory that, just by the nature of the type of disability, are really necessary. We try to take a look at how we can basically leverage those capabilities and skills and really apply those to different workplace activities.

Dr. Julian Brinkley:
My student Earl has been looking at how do we make K through 12 computer science education more accessible so that we can get some of these highly skilled and highly capable visually impaired coders into the workforce. Instead of taking a deficit approach, where we basically look at the situation from the perspective that visually impaired persons may need additional support, or they can’t do certain things, or things of that nature, we really try to look at it from a perspective of what are the abilities that they basically bring to the table that may really benefit their work as coders. And we think there’s a lot that they can potentially bring to the table in that regard.

Josh Anderson:
That is awesome. Well, Dr. Brinkley, if our listeners want to find out more about you or about the DRIVE Lab, what’s the best way for them to do that?

Dr. Julian Brinkley:
I’m on Twitter. I normally follow and like and retweet more than I actually post, but we do try to make sure as a lab that we post and keep everyone abreast of what we’re doing via Twitter. Your listeners can follow me @JulianBrinkley, J-U-L-I-A-N B-R-I-N-K-L-E-Y. Also, our website, drivelab.org, is where we basically try to keep everyone abreast of the new papers that we have coming out and things of that nature. Those two mechanisms are normally the best way to follow what we’re doing and what we’re working on.

Josh Anderson:
Excellent. We’ll put that down in the show notes where listeners can keep up on all the amazing things that you all are doing. Well, Dr. Julian Brinkley, thank you again for coming on the show and telling us about all the amazing work that you and your team are doing, not just in the Department of Transportation’s Inclusive Design Challenge, but really in just making things a little bit more accessible for all.

Dr. Julian Brinkley:
Thank you. Thank you for having me.

Josh Anderson:
Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? If so, call our listener line at (317) 721-7124, send us an email at tech@eastersealscrossroads.org, or shoot us a note on Twitter @INDATAproject. Our captions and transcripts for the show are sponsored by the Indiana Telephone Relay Access Corporation, or InTRAC. You can find out more about InTRAC at relayindiana.com. A special thanks to Nicole Preto for scheduling our amazing guests and making a mess of my schedule. Today’s show was produced, edited, hosted, and fraught over by yours truly. The opinions expressed by our guests are their own and may or may not reflect those of the INDATA project, Easterseals Crossroads, our supporting partners, or this host. This was your Assistive Technology Update, and I’m Josh Anderson with the INDATA project at Easterseals Crossroads in beautiful Indianapolis, Indiana. We look forward to seeing you next time. Bye-bye.

 
