ATU568 – MagTrack with Nordine Sebkhi and Arpan Bhavsar

Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Special Guests:

Nordine Sebkhi – Co-Creator of MagTrack
Arpan Bhavsar – Co-Creator of MagTrack
Website: https://magtrack.ece.gatech.edu/

More on the study: https://b.gatech.edu/3tDYBmG

Register for our INDATA Full-Day Training here: https://www.eastersealstech.com/our-services/fulldaytraining/

Find out more about INTRAC at www.indianarelay.com

——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: http://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA

—– Transcript Starts Here —–

Nordine Sebkhi:
Hi, this is Nordine Sebkhi and I am the co-creator of MagTrack.

Arpan:
Hey, this is Arpan. I’m a research engineer at Georgia Tech and co-creator of MagTrack.

Nordine Sebkhi:
And this is your Assistive Technology Update.

Josh Anderson:
Hello and welcome to your Assistive Technology Update, a weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs. I’m your host, Josh Anderson with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. Welcome to episode 568 of Assistive Technology Update. It’s scheduled to be released on April 15th, 2022. On today’s show, we have the co-creators of MagTrack on to talk about this amazing new way to access a computer, a wheelchair and other devices. We also want to remind you that a transcript of today’s show is available at eastersealstech.com, and those are generously sponsored by InTRAC. You can find out more about InTRAC at indianarelay.com. We also love to hear from you, so don’t forget to reach out. You can shoot us an email at tech@eastersealscrossroads.org, or call our listener line at 317-721-7124. But for now, let’s go ahead and get on with the show, listeners.

Josh Anderson:
Listeners, I wanted to make you aware that registration for our next full-day training, Assistive Technology in Transitioning from High School, is now open. The training will take place on Thursday, April 21st, from 9:00 AM until about 2:30 PM. The training is all online, and CEUs are available for attendees. Anyone interested in learning how to assist students with disabilities in transitioning from high school to work or college is encouraged to attend, including individuals with disabilities, healthcare workers, parents, families and professionals. The training will feature Transitioning into Adult Life, presented by InSource, and a panel featuring college students and college disability reps talking about their experiences, best practices and other things that go into this. The afternoon will be filled with all kinds of different assistive technology for note taking, capturing information, reading, writing, arithmetic and all those other things that come up as we transition into college. I’ll put a link down in the show notes so that you can easily go over to our website and register for the training. Again, please join INDATA and all of us for our next full-day training, completely online. It’s Assistive Technology in Transitioning from High School, and it will be Thursday, April 21st from 9:00 AM till 2:30 PM. Check out the show notes for a link, or check out eastersealstech.com for more information. We look forward to seeing you, at least virtually, there.

Josh Anderson:
Listeners, we’re always excited around here when a new way to access the technology that we use on a daily basis comes about. Well, our guests today are here to talk about MagTrack and how its users can control their connected devices and even their wheelchairs with a new alternative multimodal controller. Nordine, Arpan, welcome to the show.

Nordine Sebkhi:
Thank you, Josh. Really, really [crosstalk 00:03:21] happy to be here.

Josh Anderson:
Yeah, guys, I’m really excited to talk about this technology, but before we get into talking about it, could you tell our listeners a little bit about yourselves?

Nordine Sebkhi:
Sure. Yeah. So my name is Nordine Sebkhi and I am a post-doctoral researcher at Georgia Tech. My thesis was all about the magnetic tracking technology that we have now created a version of for this field, helping people with disabilities. I am French, I’ve been in the US for about 10 years, and I work with Arpan Bhavsar.

Arpan:
Yeah, and I’m Arpan. I did my undergrad at Georgia Tech, and during my undergrad I was looking for research to do, so I fortunately found Nordine and started doing undergrad research with him, also on the magnetic tracking technology. Since then, several years now, I have completed my master’s at Georgia Tech, also on this technology, and am now a research engineer trying to further this technology for commercialization.

Josh Anderson:
Excellent. Well, it sounds like it was great that you guys met and got to work on this together so, let’s get into talking about the technology. So what exactly is MagTrack?

Nordine Sebkhi:
Yeah, so actually MagTrack is a body motion tracking technology. It’s a different way that we can track the motion of anything that moves in your body. And the way that it works is by using a tracer that we have developed that has a bunch of inertial sensors in it, and by using some properties of magnetic fields. We have a bunch of magnets next to this tracer, and using the magnetic field of those magnets and using those inertial sensors, we put all of that together into one algorithm that can tell you how this tracer is moving. So that’s the basics of MagTrack. That’s where MagTrack comes from: magnetic tracking. So cleverly put together, that’s the name, MagTrack.
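
The exact fusion algorithm isn’t described in the interview, but the magnetic side of the idea can be illustrated with the standard point-dipole field model: a sensor’s reading depends strongly on its position relative to a small magnet, which is what makes position recoverable from field measurements. The function name and example values below are illustrative, not from MagTrack.

```python
import math

def dipole_field(moment, pos):
    """Magnetic flux density of a point dipole (SI units).

    moment: (mx, my, mz) dipole moment in A*m^2
    pos:    (x, y, z) observation point relative to the magnet, in meters
    Returns (Bx, By, Bz) in tesla.
    """
    MU0_OVER_4PI = 1e-7  # mu_0 / (4*pi)
    r = math.sqrt(sum(c * c for c in pos))
    rhat = tuple(c / r for c in pos)
    m_dot_rhat = sum(m * u for m, u in zip(moment, rhat))
    # B = (mu0 / 4pi) * (3 (m . r_hat) r_hat - m) / r^3
    return tuple(
        MU0_OVER_4PI * (3.0 * m_dot_rhat * u - m) / r**3
        for m, u in zip(moment, rhat)
    )

# Field 5 cm above a small magnet whose moment points along +z:
B = dipole_field((0.0, 0.0, 0.1), (0.0, 0.0, 0.05))
```

Because the field falls off with the cube of the distance, even millimeter-scale motion of a sensor near the magnets produces a measurable change, which is roughly why this kind of tracking can resolve such small movements.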

Nordine Sebkhi:
And for this specific application of helping people in a power wheelchair, the idea is to track the motion of facial gestures, head motion, anything that somebody can still move, specifically above the neck. We try to track that small motion and then we translate it into an action, into a command, and that command can be driving your wheelchair, so giving a forward, backward, left and right, changing the mode on your wheelchair, all these kinds of things. But we also added control of connected devices, smartphone, computer, all of that, using the same type of motion that the person has. So that’s it in brief.

Josh Anderson:
Excellent. And you said it can kind of track motion. How much motion does there need to be? I know some folks it’s very, oh, we just can’t move a whole lot. They might have some control, but kind of how much motion does there need to be?

Nordine Sebkhi:
Yeah. So that’s the beauty of MagTrack, is that you can track a very small movement, and that can be movement at a millimeter scale. Actually, the technology was originally developed for speech, and specifically tongue tracking. So one of our research focuses, alongside helping people in [inaudible 00:06:43] wheelchair, is also helping people with a speech impediment or a speech motor disorder. And for that, we want to be able to track the tongue, and when you make a sound, it’s really a few millimeters between one sound and another, like the position of your tongue in your mouth to make that sound. So this is why we tried to make MagTrack able to track movements of just a few millimeters. So this is what it’s able to do. The technology can track wide motion, like wide head movement, but also minute, small facial gestures. The way it works right now, if you twitch your cheek or your eyebrow [inaudible 00:07:23], small twitching, we are able to detect that.

Josh Anderson:
Oh, nice, nice. So really anything you can control on the face, it can track and you can use that as an input device, basically.

Nordine Sebkhi:
Yes, exactly. Yeah. You got it.

Josh Anderson:
Excellent.

Arpan:
Yeah. And the good thing about our device is that it’s completely customizable to the user’s range of motion. So if one user has a larger range of motion, they’ll end up training the system with their range of motion. And if someone has a very small range of motion, the system will end up learning that and use that to relay commands.
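
As a rough sketch of what training to a user’s range of motion could look like (an illustration, not MagTrack’s actual calibration code), the system can record the extremes a user comfortably reaches and then rescale every raw reading into a fixed command range:

```python
def calibrate(samples):
    """Record the extremes of a user's comfortable range of motion
    from a list of raw sensor readings taken during training."""
    return min(samples), max(samples)

def to_command_axis(raw, lo, hi):
    """Map a raw reading into the command range [-1.0, 1.0],
    relative to the calibrated range [lo, hi], clamping outliers."""
    if hi == lo:
        return 0.0
    scaled = 2.0 * (raw - lo) / (hi - lo) - 1.0
    return max(-1.0, min(1.0, scaled))

# A user with only ~2 mm of cheek movement still gets the full
# command range after calibration:
lo, hi = calibrate([0.0, 0.5, 1.3, 2.0])
```

The point of the rescaling is that downstream consumers (wheelchair speed, cursor velocity) only ever see the normalized axis, so a large and a small range of motion drive them identically.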

Josh Anderson:
Perfect. And how many different kinds of inputs could one person have? I just think of controlling all the different stuff. Can they have more than one kind of input? Can I, I don’t know, use my tongue and a facial gesture, kind of different ones? How many can it track at once?

Nordine Sebkhi:
Yeah. So that’s a very good question. The answer is as many as we can connect to our system. So right now we have only three tracers. You can stick three tracers on your face, on your tongue. Plus there is one other sensor in the pair of smart glasses, which might change in the future. We can talk about that later on, but right now it’s a pair of smart glasses, and you connect the tracers to that pair of smart glasses. We call it MagWear, because saying “pair of smart glasses” is a mouthful. So MagWear is the system. And that system [inaudible 00:08:47] has a tracer inside it, and this is nothing new. Any type of smart glasses now has some sort of inertial sensor in there that can track head movements. So we can do the same thing, but the differentiator is that now you can plug the tracers into that system.

Nordine Sebkhi:
And so all of them are fed into one cohesive, integrated system that can now be a little bit smarter about understanding the movement and what kind of intent the user has. So right now it’s three tracers, but this is not a physical limitation. It’s just that we had to start somewhere, and we thought three would be a good way to start. But in the future, if we can make them really small and people want more, then we can add as many as it’s physically possible to put on somebody’s face.

Josh Anderson:
Sure, sure. You don’t want to have too many or…

Nordine Sebkhi:
Exactly. Yeah. So this is kind of the future thing we’re going to try. Now we’re going to start getting MagTrack out to some testers, and we want to get their feedback. One part of that feedback would be, “Hey, how many tracers would be reasonable for you?” Because you have to glue them on in the morning, and you have to go through the whole day with them, so we don’t want them to be too obtrusive for you. So how many are you okay with? And then based on the feedback, we will select a certain number, and we will start with that. But yeah, in the future, you can add more, you can use less. It would be whatever the user would want.

Josh Anderson:
Very cool. And kind of talking about getting users and stuff, I actually found out about this by reading about a study that you guys were doing at Georgia Tech. Could you tell me a little bit about that?

Nordine Sebkhi:
So the study that we just published some of the results of recently was in collaboration with the Brooks Rehabilitation Center in Jacksonville, Florida. So that’s a rehab center, and we worked with a team of physical therapists and the director of the spinal cord injury program. The idea was to test an earlier version of our system, and this one used head motion and tongue motion. The participants were [inaudible 00:11:05] individuals from the patient population of Brooks, so that’s why it was great collaborative research between the two institutions: Georgia Tech with all the engineers here, so myself, Arpan and our team in the School of Electrical and Computer Engineering at Georgia Tech, and the clinicians at the Brooks Rehab Center.

Nordine Sebkhi:
So anyway, what we were asking them to do, basically, is perform some simple tasks. Some of them were driving a wheelchair, and we had three simple tasks: you drive [inaudible 00:11:40] your wheelchair forward between two cones, you go backwards, and you swerve between cones. Those are some of the standard simple tests that you do at the beginning when you try to validate a new type of alternative controller. And then we did something a little bit more complex, where there was a small course that we designed, and this one had U-turns, going around a kind of roundabout, backing up, all these kinds of things that you would use in an everyday driving experience indoors and outdoors.

Nordine Sebkhi:
So we had that, and we asked people to complete that course, and we timed them to see how they were improving over time. We asked them to do that with their own personal alternative controller, which was mostly sip and puff, head array or specialized joysticks, so we could compare how good our system is in terms of completing those tests.

Nordine Sebkhi:
So anyway, this is one part of the test that we did. Another one was control of a computer. We were just asking them to do simple things. We had [inaudible 00:12:54] solitaire, we had a maze where you have to move the mouse cursor through the maze, try to keep it inside and try to get to the end, sending an email, those kinds of things. So simple things, but this shows a lot about what capabilities a system has, because it integrates a lot of common human-computer interaction behaviors.

Nordine Sebkhi:
For example, drag and drop. This is one of the examples we use a lot in the paper that we published. Drag and drop is not easy to do. For us, it is pretty easy, right? With your mouse, we can do that easily. But now think about trying to do drag and drop with an alternative controller. You have to get to something, select it, actively hold it, move it while holding it, and then release it. That’s not a simple thing to do. And this is kind of the thing with our technology: because it’s [inaudible 00:13:50] and it has a proportional controller, your head, if you still have this motion of your head, or the motion of your tongue if you don’t have any head motion anymore, you can use that as a [inaudible 00:14:03], moving it around or dragging it around, and then you can use a facial gesture or tongue motion to select it, hold it and release it.

Nordine Sebkhi:
So this is why the [inaudible 00:14:14] aspect of our system is really important, and because everything is integrated into one system, we can make this kind of complex human-computer interaction behavior happen more efficiently. That is something else.

Arpan:
And I just want to add something really quickly to that. Because it’s one system that can control both your power wheelchair and connected devices, whatever you calibrate or whatever you set up for one will work for the other. So once you have it set up for a power wheelchair, all you have to do is switch over to phone control or computer control, and the same exact modalities will work. They’ll just control different things, like a mouse cursor and clicking, left click, right click, all that kind of stuff.
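
To make the multimodal idea concrete, here is a minimal sketch (hypothetical names, not MagTrack’s actual software) of how a continuous proportional input such as head motion and a discrete facial gesture could combine to perform the drag-and-drop sequence Nordine describes:

```python
class DragDropController:
    """Sketch: a proportional input (e.g. head motion) moves the cursor,
    while a discrete gesture (e.g. a cheek twitch) grabs and releases."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0   # cursor position
        self.holding = False        # is an item currently grabbed?

    def on_proportional(self, dx, dy):
        # Continuous channel: always moves the cursor,
        # and drags the grabbed item when holding is True.
        self.x += dx
        self.y += dy

    def on_gesture(self, name):
        # Discrete channel: one gesture toggles grab/release.
        if name == "cheek_twitch":
            self.holding = not self.holding

# Grab an item, drag it 10 units right and 5 up, then drop it:
ctrl = DragDropController()
ctrl.on_gesture("cheek_twitch")   # grab
ctrl.on_proportional(10.0, 5.0)   # move while holding
ctrl.on_gesture("cheek_twitch")   # release
```

The same two handlers could just as easily be routed to wheelchair commands instead of a cursor, which reflects Arpan’s point that one calibration serves every output target.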

Josh Anderson:
Very nice. And there’s a lot to unpack there. It’s funny you mentioned drag and drop. I always try to use that to explain what we do, because people are like, “Oh, you know, individuals with disabilities use computers. That can’t be that hard.” And I’m like, “Oh, do a drag and drop without using your mouse.” And people are always like, “I have no idea how to do that.” I say, “Yeah, it’s not as easy as you think, is it?” But it’s funny that you happened to mention that. Nordine, with the study and everything, can you tell me kind of what some of the feedback you got from the participants was?

Nordine Sebkhi:
Yeah, it was really positive, so virtually all of them. This was 17 [inaudible 00:15:32] that participated throughout those studies. They were overwhelmingly positive about the value of the system, and said they would want to use the system if it was available. What was great about that study is that we got people of all ages, but also people with all levels of experience with assistive technology. We had people that had just gotten an injury and just started using assistive technology, right? But we also had somebody who, I believe, has been using an alternative controller for more than 20 years, so this person was really experienced with that. So we got all of that, and virtually all of them said that they can see the value of the system and they know that people will benefit from it.

Nordine Sebkhi:
So that was great, because for us, for engineers here at Georgia Tech, Georgia Tech doesn’t have a medical facility, a medical school or access to rehab centers, right, that other universities can have. So it’s really hard for us to have this user-centric approach, right? When you try to design something like that, you would want to have an end user as part of the team who can give you a lot of feedback while you are developing the system.

Nordine Sebkhi:
With Brooks, it was one of the first times that we had this contact with the population that we’re trying to help. And it was just great to see how tolerant people were about small problems here and there, because this is still a research prototype, but also just being so positive about it and being like, “Wow, yes. Guys, I’ve never seen that before. If this was on the market, I would love to use it.” And it gave us a little bit of motivation to keep going, because as you know, it can be tough in this field to try to get new technology out, for many, many different reasons. So having the end user saying, “Please, we would love to have something like that, because I can see a lot of value in it,” this is what helps us keep going.

Josh Anderson:
Oh, for sure. And that’s good that you’re involving them. They’re kind of at the beginning so that you can actually find out if it’s going to work and if there’s a use for it. I mean [crosstalk 00:17:52]

Nordine Sebkhi:
Yes.

Josh Anderson:
Yeah. We see a lot of assistive technology where it seems like they bring in the end user at the very end. They’ve already made it, they know that it works, it’s great, and then they find out that, well, there’s already something that does it, or that people prefer a different method, and it just kind of fizzles away. So that’s great.

Nordine Sebkhi:
Yeah, yeah. And this is going a little bit beside this conversation, but it’s something we experience a lot, and it may be our responsibility also, but it’s what I was arguing before. It’s really hard for us to find the partners that would give us easy access to end users, right, without a lot of barriers around it. So I understand why a lot of other technologies cannot make it, because it is difficult to go through that, to get something new and put it on this market. And this is very sad, because technology exists that can [inaudible 00:18:49] individuals, right? We have wonderful technology in all of these research labs throughout the US, throughout the world, all of these talented people, engineers and others that have great ideas, but then they die because it’s hard for them to actually get in front of end users and get things going. So at least we try to stick to this mission and try to get something to them.

Josh Anderson:
Excellent. Excellent. Nordine, I know you said that this kind of started with speech, but where did, really, the idea come from?

Nordine Sebkhi:
So this idea came from... we were using a different technology before. When I was doing my PhD, there was this other technology that was using a similar idea, a magnetic localization or tracking kind of thing. But they were using a magnet as the tracer and then a bunch of sensors outside of the head. And it’s a great idea. I worked on that as part of my thesis, and there is a lot of good stuff about that technology. The problem is that it was very limiting, because you can only track one magnet. You can have only one thing you can track, because if you add a bunch of magnets as tracers, then it’s hard for you to know which one is which. They all kind of add together.

Nordine Sebkhi:
So at some point, when I was done with my PhD, Arpan and I sat down with our advisor, the advisor of the lab, who was also my PhD advisor. We kind of sat down and thought about, okay, what else can be better, because we know it’s going to be limited? There was only so much we could do with only one tracer, right? And we sat down and thought about it, and we came up with this idea: okay, listen, there are all these inertial sensors that exist. They’re becoming cheaper and cheaper because they’ve been used in smartphones and computers. I mean, pretty much anything that has electronics has one of those [inaudible 00:20:53] sensors, right? And we didn’t see too much of that being used in this field. It’s starting, you kind of see some of it being used now, but for us, it was like, okay, we have access to those, we know how to use them, and we have this new magnetic localization technique that we have invented. So we were like, okay, let’s put them together and let’s see what we can do with that.

Nordine Sebkhi:
So really, this was born out of the limitation and frustration we had with the previous technology, where we knew that it was great for research, but we could not take it out of research. It’s not going to be practical for people. It has this user-centric design problem that we were talking about. And so, right, we sat down as engineers, thought about what’s the best way to do it and overcome those barriers, and then we said, okay, let’s do it. Let’s try this idea. We didn’t know at all, because nobody had done it the way that we are doing it. So we tried it, and it turned out that it was working great. We could track that tracer really well. And now we can add as many tracers as we want, because there is no difference between them. So that’s kind of the story, and now, kind of going full circle, we’re using this new technology, this new track [inaudible 00:22:08], to do a better speech tracking system.

Josh Anderson:
Very cool. And I know that you’d said, you mentioned this a little bit earlier that right now it’s kind of in a set of glasses, but you said that might kind of change. Where does MagTrack go from here? What’s kind of next?

Nordine Sebkhi:
Yeah. So what’s next is we are going to do a few more studies this year. We are partnering with the Shepherd Center in Atlanta, Georgia, and what we are going to do is have a limited number of individuals in the [inaudible 00:22:46] wheelchair, people [inaudible 00:22:47], try and use this. But what we want to do is get more of their feedback, so doing more focus groups. It’s not about, okay, do this task, how long did it take you to do this task, right? That was more kind of quantifying the performance of our system. What we are going to do with Shepherd is really get it in the hands of some of the patients and ask: what do you think about it? What do you want to do with it, right? What kinds of everyday things would you love to be able to do that you cannot do with your current system? So really see what the added value of our system is compared to what exists right now.

Nordine Sebkhi:
So that’s what’s going to happen this year. And then from that, we are going to refactor the system, redesign it to make it a bit better, more comfortable, more practical, and then get to a bigger study where we would love some participants to take it home, use it at home every day for a few months, tell us what the big problems are, if there are any, and we try to fix those problems, [inaudible 00:23:53] fixing, giving a new version to those participants, until we reach a point where they’re like, “Yeah, listen. The system works fine. I think it’s good.” Then we can move on, and moving on means really creating a system that we can actually commercialize for people. So this is where we are heading, and for that, the design of the system might change.

Nordine Sebkhi:
So one idea that we are floating around is, why not have a chest band where all of our system would be built in, and then the tracers would just be put on the face? Because what we try to do is remove as much electronics or things from your head, from your face, right? We want to be as inconspicuous as possible. So this is kind of the thing. There’s going to be a limit, because you have to have some electronics somewhere. So it’s working with them and seeing what is the best embodiment of the system that’s still going to be as inconspicuous as possible.

Arpan:
And all these designs we’re going to be showing to end users constantly, really just going back and forth, having an open-ended conversation with them to make sure that whatever design we come up with, it’s not just us in the loop. It’s really us and the users in the loop, trying to make a practical design that these users really want to have and take home with them.

Josh Anderson:
Well guys, if our listeners want to find out more, what’s the best way for them to do that?

Nordine Sebkhi:
Yeah. So we have a website, which is magtrack.ece.gatech.edu. It’s going to be in the show notes because it’s a little bit long. That’s where we’re going to start posting more information about the system, and how people can reach out directly to us, because we are looking for people that would be interested in being participants in our future human studies, specifically the at-home studies that we’re going to do, hopefully later this year. So that would be, I think, the best way to reach out to us.

Josh Anderson:
Excellent. We will make sure to put that down in the show notes so that folks can easily reach out. Well, Nordine, Arpan, thank you guys so much for coming on, telling us about this amazing technology that you’re working on and we can’t wait to kind of see where it all goes.

Nordine Sebkhi:
Thank you, Josh.

Arpan:
Awesome. We appreciate it [crosstalk 00:26:06] Josh.

Josh Anderson:
Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? If so, call our listener line at 317-721-7124, send us an email at tech@eastersealscrossroads.org, or shoot us a note on Twitter @INDATAproject. Our captions and transcripts for the show are sponsored by the Indiana Telephone Relay Access Corporation, or InTRAC. You can find out more about InTRAC at relayindiana.com. A special thanks to Nikol Prieto for scheduling our amazing guests and making a mess of my schedule. Today’s show was produced, edited, hosted and fraught over by yours truly. The opinions expressed by our guests are their own and may or may not reflect those of the INDATA Project, Easterseals Crossroads, our supporting partners or this host. This was your Assistive Technology Update, and I’m Josh Anderson with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. We look forward to seeing you next time. Bye-bye.
