ATU204 – Tongue Mapping, Daredevil on Netflix, Free K-12 Android Apps, RESNA early bird registration, ATIA call for papers, Functional Communication System Lite


Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Show notes:

Tongue Mapping: JJ Moritz & Dr. Leslie Stone, Colorado State University

ATFAQ show with Brian Norton www.ATFAQShow.com 317-721-7124 or use #ATFAQ on Twitter

Daredevil and Netflix http://buff.ly/1IH8VLy | http://buff.ly/1IH8X6d

K-12 Assistive Technology Professionals Group http://buff.ly/1yREBOf | http://linkd.in/1ErziY6

Registration | Rehabilitation Engineering & Assistive Technology Society of North America http://buff.ly/1EaGvKc

ATIA call for papers: http://bit.ly/1PgQJMI

App: Functional Communication System Lite www.BridgingApps.org

——————————

Listen 24/7 at www.AssistiveTechnologyRadio.com

If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org

Check out our web site: https://www.eastersealstech.com

Follow us on Twitter: @INDATAproject

Like us on Facebook: www.Facebook.com/INDATA

——-transcript follows ——-

 

LESLIE STONE: Hi, this is Leslie Stone, and I’m an assistant professor at Colorado State University.

JOE MORITZ: I’m Joe Moritz, Jr. I am the lead researcher on this project.

LESLIE STONE: And this is your Assistive Technology Update.

WADE WINGLER: Hi, this is Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana with your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Welcome to episode number 204 of Assistive Technology Update. It’s scheduled to be released on April 24 of 2015.

Today we are going to have JJ Moritz and Dr. Leslie Stone, both of Colorado State University, talk to us about a very interesting tongue mapping project that’s designed to help people who are deaf or hard of hearing better understand the cues from their environment.

Also we spent a little bit of an extended time talking about Daredevil and the fact that it wasn’t audio described on Netflix originally, but now it is. We include some of those audio descriptions so you can get a feel for what that’s like.

We’ve got some K-12 Android apps that are free; also a couple of things related to the RESNA and ATIA conferences. We have an app review from BridgingApps on the Functional Communication System Lite, and other great information.

We hope you’ll check out our show notes at www.eastersealstech.com, give us a call on our listener line at 317-721-7124, or shoot us a note on Twitter @INDATAproject.

***

I just grabbed Brian Norton, who is the new host of ATFAQ. Brian, how goes the new show?

BRIAN NORTON: The show is going great. I’m really enjoying it. We are having lots of fun just answering everyone’s questions about assistive technology.

WADE WINGLER: In fact, this morning after I’m done recording this show, we are going to record an episode about augmentative and alternative communication with guest John Effinger from Missouri, right?

BRIAN NORTON: That’s correct.

WADE WINGLER: So if people want to send questions to your show so you and your experts can answer them, how do they send them?

BRIAN NORTON: They can call our listener line at 317-721-7124. They can go on to Twitter and with the hashtag #ATFAQ we can find your questions there. We are always searching for them there. You can also send us an email at tech@eastersealscrossroads.org.

WADE WINGLER: Why do you want questions?

BRIAN NORTON: Well, because without your questions we don’t have a show.

WADE WINGLER: There you go. Brian, thank you for stopping in unannounced and letting me poke into your morning like that.

BRIAN NORTON: No problem at all. Thanks, Wade.

***

WADE WINGLER: There’s been a lot in the news lately about the Daredevil comic. Matt Murdock is the attorney who is blind, who is Daredevil, and is frankly a character that I have followed since I was a kid. The show has been the center of some drama because it was recently released on Netflix, but it wasn’t released with audio description, which is a little bit ironic because the show is about a blind person who is a hero to many folks who are blind. But it wasn’t made in a way that was accessible to people who are blind or visually impaired. So for a week or two there was kind of a lot of back-and-forth about Netflix not doing this. But then they fixed it. I found an article here on the polygon.com website that actually includes a little bit of the narrative description. So I’m going to play a couple of clips to let you experience what this is like, the narrative description for the newly described Netflix series. Check this out.

>> At the end of a long industrial hallway, a fluorescent light colors the walls and floors sickly green. One of the Russians carries a plate of food and an apple. He unbolts the outside lock of a door beneath the light.

[Door creaks]

>> I want to go home. I want my daddy.

[Speaks Russian]

>> The man comes back out, eating the apple himself, looking bored. He goes into another room off the hall where four men play poker around a round table.

[Russian continues]

>> A shadow appears first as Matt, head lowered and mask down, moves slowly around the corner. The ropes from the rooftop are strapped around his hands and on his forearms. He looks ferocious and calm, taking an iconic stance.

[Sounds of fighting]

>> He keeps knocking them down, but they keep getting up again. Finally they are down but not quite out. Matt staggers, resting against the doorjamb, when the apple eater charges again. Matt demolishes him with some hard face punches.

WADE WINGLER: Two things. I’ve watched a little bit of the Netflix series on Daredevil and I have to say I’m not into violent shows. It’s only partially appealing to me. But Justin McElroy, the writer over at the Polygon site who talks about this, makes the note, and I think it’s important, that the narration you just heard, the description you just heard, isn’t mechanical. It is not just perfunctory. It’s actually well done. He describes the narrator’s voice as gravelly, with a tone that’s perfectly at home echoing through Hell’s Kitchen, which is where the show is set.

Kudos to Netflix. Maybe it should’ve been done right out of the gate, but I’m glad that it’s there. Congratulations to the blind community, who now have not only this but are also talking about other shows that Netflix is going to audio describe. In fact, there is an article over at the Washington Post that quotes Tracy Wright, who is Netflix’s director of content operations. She says, “We are expanding our accessibility options by adding audio descriptions on select titles.” They are starting with Daredevil. And then she says in coming weeks they are going to add some more like House of Cards, Orange Is the New Black, and Tina Fey’s Unbreakable Kimmy Schmidt.

I’m glad that this is kind of coming to the forefront. I know there is some recent legislation that’s going to increase the amount of described content. But I wanted to bring this up because it’s been in the press, and I’m glad to know that not only did Netflix make this right, but they did it well. I hope you enjoyed that, and I’ll stick the links in the show notes to the Washington Post article that talks about where Netflix is going with this, as well as over to the Polygon website where you can actually listen to the full audio-described scene. I just gave you a couple of clips there. Check our show notes.

***

LinkedIn has a very robust community called the K-12 Assistive Technology Professionals. It’s a private group, so you have to join it to be included in the conversation. But recently, there was an interesting announcement from the folks over at IDEAL Group. In fact, Steve Jacobs, who is the manager of the LinkedIn group, posted recently a list of free Android apps that are relevant to the K-12 space. There is a long list of them here. There are 12 different apps. Some of the names you’re going to see are IDEAL Web Math: Algebra, Trigonometry, Plots and Graphs, and General Math; there is even a currency identifier, item identifiers, talking tags, group readers, magnifiers, and then easy access for the Khan Academy.

I encourage you to check out this LinkedIn group. I’m going to put a link to this article, and then I’ll also take you to the K-12 Assistive Technology Professionals LinkedIn group where you can join in the conversation and learn more about what’s happening in the space. You don’t always see a ton of stuff about free Android apps, and I always want to encourage collaboration. So I think this is a great place for you to check out. I’ll pop a link right there in our show notes.

***

We love talking about ways to learn about assistive technology here on the show. RESNA and ATIA are two of the big conferences that happen in our industry. A couple of news items related to both of those conferences: April 29 will be the last day for early bird registration for the RESNA conference. The conference is going to happen in June in Denver. It’s going to have more than 40 workshop sessions, research-oriented platform and poster sessions, an AT Pavilion, and a bunch of instructional courses. If you want to save money and get the early bird registration for the RESNA conference, you need to do that by April 29. Go to RESNA.org.

ATIA has announced that their call for presentations is open for the ATIA 2016 conference. That one’s going to happen February 2-6 of 2016 in Orlando. But now you can submit a conference presentation proposal. From April 20 through June 19, there’s going to be an opportunity to submit. They are suggesting that people who are interested in assistive technology in the school, home, or recreation, or people who are accessibility professionals, administrators, AT specialists, communications specialists, IT professionals, OT, PT, and speech folks, all those kinds of people should submit. They are looking for poster sessions. They are looking for long-form sessions. They have 11 different strands of assistive technology topics this year, and they have advisors associated with each one of those. It’s an online submission. It’s not that tough. If you ever thought you wanted to present at one of the major AT conferences, well, here’s your chance with ATIA. I’m going to pop a couple of links in the show notes so you can figure out how to register and take advantage of that RESNA early bird registration, or how you can be a presenter at ATIA. Check the show notes.

***

Each week, one of our partners tells us what’s happening in the ever-changing world of apps, so here’s an app worth mentioning.

AMY BARRY: This is Amy Barry with BridgingApps, and this is an app worth mentioning. Today’s app is called Functional Communication System Lite. This app is a customizable communication tool that uses speech, real pictures, and video to present words and phrases for communication. The unique feature of this system is that, attached to each button or word, is a video that shows a situation for use and describes what each word means. So, when the user touches and holds the icon, a screen appears where the user can listen to the word, look at a larger picture, or even watch a video that tells what the word or action means.

The app is completely customizable so that, using the camera and microphone embedded in the iPad, a custom user interface can be created. When the app opens, the user touches a category and a screen with labeled pictures appears. Under the conversation category, there are five pages of up to 12 icons that can be chosen. The icons are listed in alphabetical order. When an icon is touched, the picture appears at the top of the screen as a message. Words can be put together in the message box. Once the message is complete, the message box is touched and the complete message is spoken.

The lite version has one premade category called “Conversation” with more than 50 items included and allows the user to create five additional categories and 10 custom items. This app presents a unique and useful way to help students understand when to use specific words for communication. It also allows students to be photographed so that they can see themselves as the speaker in the message board. And they do not have to learn a symbol system for communication.

Functional Communication System Lite is free in the iTunes store, and is compatible with the iPad. For more information on this app and others like it, visit BridgingApps.org.

***

WADE WINGLER: I spend a lot of time watching the news of assistive technology. I look at keywords. And recently I haven’t been able to search for keywords about deaf or hard of hearing and technology without seeing some stories coming out of Colorado State having to do with a tongue mapping system. It made my eyebrows go up, and I thought, well, we really need to see if we can reach out to the folks there and find out more about this tongue mapping system.

I’m excited to have JJ Moritz and Dr. Leslie Stone on the Internet with me. JJ, Leslie, are you there?

LESLIE STONE: Yes, we are here.

JOE MORITZ: Yep.

WADE WINGLER: Good. Hey, thank you so much for taking time out of your day and explaining to us what’s going on with a tongue mapping system there in Colorado. I’m going to ask you guys first to tell me a little bit about what is this and where did it come from and how does it work.

JOE MORITZ: Well, John Williams, he was the one who kind of started this project. He has pretty bad tinnitus from working around propulsion systems and vacuum pumps. He was researching a way to cure his tinnitus when he came up with this initial idea and realized that it could be used to help people with other hearing problems as well.

LESLIE STONE: The technology has been done by people at the University of Wisconsin in a little bit more of a rudimentary format. So they’ve used tongue stimulation to help people with balance issues as well as using it in some vision studies. So John had the idea of using this type of technology, advancing it quite a bit, and using it for the auditory system.

WADE WINGLER: I don’t think the field of assistive technology is new to having mouth- and tongue-based interfaces. I know of some devices that work at the keyboard. I’ve heard of some other lollipop-style sensors. But tell me a little bit about how somebody who is deaf or hard of hearing would use this particular kind of technology to get meaningful information out of their environment.

JOE MORITZ: The idea is that the final device, the one we are working on developing right now, would be something about the size of a dental retainer, and it would fit entirely in the mouth. It would be discreet. No one would know you were wearing it. It would stimulate the tongue in response to audio signals that would be sent from a wireless device that you could wear on your ear or on your clothing, and then over time the brain can – and Leslie can talk about this a lot better than I could – the brain can learn to interpret those sensations as auditory information.

LESLIE STONE: So what’s happening is, once JJ gets the device to electrically stimulate the tongue, you get a feeling of touch sensation kind of like Pop Rocks or champagne bubbles on your tongue. That somatosensory information is going to be taken up by your brain. What’s different between this and, for example, braille, where people use bumps to represent letters and words, is that we are actually taking the sound frequency information and mapping that frequency information onto the tongue. So it’s sound information rather than specific words or letters. The somatosensory system will take that, and over time your somatosensory cortex can develop so that it can interpret those signals more precisely, because there is a large amount of the cortex that’s devoted to processing information from the tongue, because it’s very sensitive, similar to your fingertips.

WADE WINGLER: So if I’m understanding this right, then if somebody is receiving information from their environment, and a trash truck drove by, which would be kind of a deep, lower-frequency, loud sound, it might have sort of a big feeling on one area of the tongue. But then if a fire truck went by with a higher-pitched, intermittent sound, you might get a different area of the tongue with a sort of intermittent signal? Am I thinking of that the right way?

JOE MORITZ: That’s exactly right. We are doing more research to find out exactly – a big part of what we are doing right now is finding where exactly to stimulate on the tongue for different frequencies. Is there a way that we can try to replicate the signal that the cochlea sends to the brain, and can we do that on the tongue?

LESLIE STONE: And sort of mapping the frequency, because the auditory system maps out the frequency very nicely in a tonotopic map in the inner ear. So we are trying to figure out if there is a way we can put that tonotopic map on the tongue. The challenge is that the innervation of the tongue varies. It’s not consistent across the tongue surface. So our current research is devoted to mapping how precise we can get that information to be at different areas of the tongue.

WADE WINGLER: So give me some details about those locations on the tongue. I’m kind of fascinated with that. How does the mapping work right now? Is it the left side is lower frequencies and the right side is higher? Tell me some more about that. That’s fascinating.

JOE MORITZ: Right now what we are doing is kind of a front-to-back frequency scale. So right now the front of the tongue gets a lot of the high-frequency information, or that’s where we put the high-frequency information. As you move back on the tongue, we change to lower-frequency information. That’s something that we are studying more and might change a little bit in the future.
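For listeners who think in code, here is a minimal sketch of the front-to-back, tonotopic layout JJ describes, mapping audio frequencies onto rows of an in-mouth electrode array. The row count, frequency bounds, and function names are illustrative assumptions, not the team’s actual implementation.

```python
import math

N_ROWS = 8                    # assumed electrode rows, front (0) to back (7)
F_MIN, F_MAX = 100.0, 8000.0  # assumed audio band of interest, in Hz

def frequency_to_row(freq_hz: float) -> int:
    """Map a frequency onto an electrode row, log-spaced like the cochlea's
    tonotopic map: the highest frequencies land on row 0, at the tongue tip."""
    f = min(max(freq_hz, F_MIN), F_MAX)
    # pos runs from 0.0 at F_MAX (front of tongue) to 1.0 at F_MIN (back).
    pos = 1.0 - (math.log(f) - math.log(F_MIN)) / (math.log(F_MAX) - math.log(F_MIN))
    return round(pos * (N_ROWS - 1))

print(frequency_to_row(4000))  # high pitch -> near the front
print(frequency_to_row(200))   # low pitch  -> near the back
```

The log spacing is a guess modeled on how the cochlea lays out frequency; JJ notes the actual layout is still under study and may change.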

But finding the nerve density and the discrimination distance on different parts of the tongue, as well as the intensity threshold at different parts of the tongue – for that, we have a large array we put on the tongue, and we move it to map the full tongue, or about 4 square centimeters of the tongue. We activate two electrodes on this electrode array that are a certain distance apart, and then we randomly move them closer together and farther apart. We do that all over the surface of the tongue. It’s completely randomized. And then the participant records whether they feel one, two, or zero distinct sensations, kind of like a traditional touch discrimination – what are those tests called, Leslie?

LESLIE STONE: Two-point discrimination test.

JOE MORITZ: Yeah, two-point discrimination test. And then the participant also records the perceived intensity. So on a scale from 1 to 10, how intense does that sensation feel? That varies across the surface of the tongue depending on whether we are in an area where we are activating more than one nerve ending with this electrode, a single one, or none.

We are able to get both intensity information – relative intensity information because we stimulate the tongue the same way across the entire surface. But that perceived intensity changes. So we are able to get intensity information as well as discrimination, two-point discrimination information across the surface of the tongue.
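As a rough illustration of that protocol, here is a hedged sketch of how the randomized two-point discrimination trials could be organized in software. The grid size, separations, and function names are assumptions for illustration; the interview says only that electrode pairs are presented at random distances all over the tongue and that participants report zero, one, or two distinct sensations plus a 1-10 intensity rating.

```python
import random

GRID = [(x, y) for x in range(10) for y in range(10)]  # assumed 10x10 array
SEPARATIONS = [1, 2, 3, 4, 5]                          # assumed electrode spacings

def build_trials(n_per_condition: int = 3):
    """Build every valid electrode pair at every separation, repeated
    n_per_condition times, then fully randomize the presentation order."""
    trials = []
    for (x, y) in GRID:
        for sep in SEPARATIONS:
            if x + sep < 10:  # keep both electrodes of the pair on the array
                trials += [((x, y), (x + sep, y))] * n_per_condition
    random.shuffle(trials)
    return trials

def record_response(pair, points_felt: int, intensity: int):
    """points_felt: 0, 1, or 2 distinct sensations; intensity: 1-10."""
    assert points_felt in (0, 1, 2) and 1 <= intensity <= 10
    return {"pair": pair, "points": points_felt, "intensity": intensity}

trials = build_trials()
print(len(trials), trials[0])  # e.g. 1050 randomized trials
```

Aggregating those responses by location would yield exactly the two maps JJ mentions: discrimination distance and relative perceived intensity across the tongue surface.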

WADE WINGLER: So then are you finding a lot of variation from tongue to tongue? You said the amount of variation varies, so you kind of have to map each person’s tongue to find reliable points of contact. Is that fair to say?

LESLIE STONE: But there are some consistencies across people. For example, in general, the tip of the tongue is much more sensitive, and you can discriminate two points that are much closer together near the tip of the tongue. So what we are looking for is common patterns across the tongue surface so that we can develop arrays that fit the way the tongue is in most people. We may have to make a few different arrays because there are probably different populations of people. But we can categorize them a little bit, and that helps, or we may determine that we do need to do individual arrays per person to make it the most effective possible. But that’s part of what we are studying.

WADE WINGLER: That’s fascinating. This is very interesting. So right now I assume it’s about frequency and duration of tone. What kind of real-world sounds are you mapping on the tongue right now? Or has it gotten that far?

JOE MORITZ: We can map any sound right now. We’ve mapped real-time speech, music, different tones from tone generators, different waveforms. A technical challenge that we are working on is the dynamic range of the tongue. There is a large dynamic range between —

LESLIE STONE: The softest sound you can hear and where it starts to be painful. So that dynamic range is a little hard to replicate on the tongue’s surface, as far as intensity.

JOE MORITZ: There are some things we can do. We can use the taste and geometry and things like that to get higher dynamic range on the tongue. But there is a certain amount of lost information when you map a sound through the surface of the tongue. But we’ve done all sorts of different stuff right now. Music is actually really fun to listen to on the tongue.
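To make the dynamic range problem concrete: hearing spans roughly 0 to 120 dB SPL, far wider than the comfortable range of electrical stimulation on the tongue, so loudness has to be compressed into a narrower band. Here is a minimal sketch of one way that could be done; the stimulation bounds and the simple compression curve are invented for illustration, not taken from the project.

```python
def sound_level_to_stimulation(db_spl: float,
                               stim_min: float = 0.2,
                               stim_max: float = 1.0) -> float:
    """Compress 0-120 dB SPL into a normalized stimulation amplitude.
    Sounds at or below the hearing threshold produce no stimulation;
    everything else lands between the assumed stim_min and stim_max."""
    if db_spl <= 0:
        return 0.0
    level = min(db_spl, 120.0) / 120.0
    return stim_min + level * (stim_max - stim_min)

print(sound_level_to_stimulation(60))   # conversational speech -> 0.6
print(sound_level_to_stimulation(110))  # siren -> near the top of the range
```

This is the "lost information" JJ mentions: squeezing 120 dB of acoustic range into a narrow stimulation band necessarily discards some loudness detail, however the compression curve is shaped.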

WADE WINGLER: It kind of makes sense, because as we’re sitting here recording this, I am watching the waveform of our conversation. I spend a lot of time editing audio. I know that I can discern different voices just by looking at the waveforms. Although I started out this conversation thinking that’s kind of a disconnect, I sit here and realize that if I say “Um” in my recording, I know exactly what that looks like. I can find that visually. That’s an interesting kind of revelation I just had.

JOE MORITZ: It’s very similar. It’s kind of exactly the same thing. Phoneticians, people who study aspects of language, can actually look at the signal, the spectrogram of an audio signal, and pull out specific sounds and words and derive meaning from that. So if you are looking at that all day, then you can kind of see how you can pick that up. Or if you are experiencing that information on your tongue all day, you can see how that would be kind of a natural way to learn.
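That analogy maps directly onto how such a device would process sound. As a hedged sketch, assuming nothing about the team’s actual signal chain, here is how one frame of audio can be reduced to per-band energies, the same spectral slice a spectrogram shows and the kind of signal a row-per-band tongue array could present. The band edges and frame size are arbitrary illustrations.

```python
import numpy as np

SAMPLE_RATE = 16000
BAND_EDGES = [100, 300, 600, 1200, 2400, 4800, 8000]  # Hz, assumed bands

def band_energies(frame: np.ndarray) -> list:
    """Return the energy in each frequency band for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in zip(BAND_EDGES, BAND_EDGES[1:])]

# Example: a 440 Hz tone concentrates its energy in the 300-600 Hz band.
t = np.arange(512) / SAMPLE_RATE
print(band_energies(np.sin(2 * np.pi * 440 * t)))
```

Each band's energy would then drive one region of the array, which is exactly the spectrogram-on-the-tongue idea JJ is describing.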

WADE WINGLER: So the kind of things that can be discerned at this point in your research, obviously people aren’t feeling the audio sensations on their tongue and discerning speech. But what kind of things can be picked up at this point?

JOE MORITZ: Right now we’ve been focusing on the nerve density mapping. We’ve done some initial unpublished studies. You can tell the duration of a sound very easily, and then you can tell one and two, X and O. We’ve just done some very preliminary tests with that, but we are gearing up for a larger-scale study where we are teaching enough words to communicate a simple sentence, things out of children’s books. That’s something we’ll be starting here in the next few weeks.

WADE WINGLER: So it sounds to me like you are still kind of early in the research. As I think about an ultimate goal of discriminating speech or understanding other kinds of signals in the environment, it sounds like you guys are getting a lot of interesting and fascinating stuff figured out. But it sounds kind of early. Is there a goal for a commercialized product at some point?

JOE MORITZ: Yeah. Ultimately we want to have something that somebody can buy in the store or get from their doctor. We expect it will probably be two years before we see anything like that, after we do clinical trials, and of course we need to get FDA approval on this device.

LESLIE STONE: Yeah, we are still pretty early in the research. Part of the problem right now is funding. All of us are working on this on the side. This is not our primary job. We’ve applied for a couple of grants and are waiting to hear. If we can get some funding coming into the study, things will progress much more rapidly.

WADE WINGLER: Let’s fast-forward a little bit. Let’s say that there is some funding and the research goes very well and there is a commercialized product available. If somebody is walking down the street in five years or whatever using this product, what does that look like? What kind of benefit would that person get?

LESLIE STONE: It depends on their disability or where they started from. Walking down the street, you wouldn’t be able to tell they have it in their mouth at all, because it would be completely in the mouth with receivers behind the ear. If they, for example, are losing half their frequency range for hearing and can just hear some sounds, then it would be tuned to pick up the sounds that they can’t hear with their auditory system. And if something came at them from the side, a car or bike or something, they might be able to pick up the screeching of tires where they couldn’t pick it up before. Or they could be listening to their iPod at the same time as it’s playing on their tongue and just be in a totally immersive music experience.

JOE MORITZ: In some ways, you can compare it to cochlear implants. We are using a lot of the same methods of stimulation, and roughly the same resolution, as a cochlear implant. We are stimulating a different area of the body, but we expect to have comparable effectiveness.

WADE WINGLER: Excellent. So a very un-technical question: does it taste funny?

JOE MORITZ: Sometimes. Leslie, do you want to elaborate on that?

LESLIE STONE: Yes, it does. And it’s actually very exciting. So occasionally with certain waveforms that JJ produces with his device, or in certain areas of the tongue, you can get a sour taste. JJ said he’s also tasted bitter. I’ve only tasted sour when I’ve used it. But that’s also very exciting. My background is actually in taste. That’s where I’ve done the majority of my research. We are also looking at ways to refine that technology to where we can produce a sour taste on the tongue surface.

WADE WINGLER: That’s interesting. If people wanted to reach out to you and wanted to learn more about the tongue mapping system and research, what would you recommend they do? Is there a website or something for them?

JOE MORITZ: It’s advancing.colostate.edu/eng/give. Anyone who wants to contribute any funding can put in the comments that it’s for the cochlear substitution fund. They would donate through the plasma engineering research laboratory, which is where we operate right now. John Williams is a propulsion expert, a rocket scientist. So we operate primarily out of his lab, and Leslie’s lab too.

WADE WINGLER: JJ Moritz and Dr. Leslie Stone are both with Colorado State University and are working on this tongue mapping system. JJ, Leslie, thank you so much for being with us today.

LESLIE STONE: Thank you, Wade.

JOE MORITZ: Thank you.

WADE WINGLER: Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? Call our listener line at 317-721-7124. Looking for show notes from today’s show? Head on over to EasterSealstech.com. Shoot us a note on Twitter @INDATAProject, or check us out on Facebook. That was your Assistive Technology Update. I’m Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana.

