ATU329 – Self Driving Wheelchair

Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Self Driving Wheelchair – Jonathan Kelly, PhD, University of Toronto Institute for Aerospace Studies | jkelly@utias.utoronto.ca | www.starslab.ca | cyberworksrobotics.com

——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: https://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA

——-transcript follows ——

 

JONATHAN KELLY:  Hi, my name is Jonathan Kelly and I’m an assistant professor at the University of Toronto Institute for Aerospace Studies, and this is your Assistive Technology Update.

WADE WINGLER:  Hi, this is Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana with your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Welcome to episode number 329 of Assistive Technology Update. It’s scheduled to be released on September 15, 2017.

Today I have a fascinating and honestly futuristic conversation with Dr. Jon Kelly, who is at the University of Toronto Institute for Aerospace Studies. We talk about a real self-driving wheelchair.

We hope you’ll check out our website at www.eastersealstech.com, send us a note on Twitter @INDATAProject, or call our listener line at 317-721-7124.

Recently I saw a video of a wheelchair with nobody in the seat, driving itself around an office building. It wasn’t bumping into stuff or people or anything like that. It blew my mind. I thought this has to be a trick, technology that’s been Photoshopped or edited. Then I thought, we are talking these days about driverless cars, and this is not that far off in terms of technology. I thought I had to learn more, because I had a million questions when I saw a self-driving wheelchair. We reached out to Dr. Jonathan Kelly, who is an assistant professor at the University of Toronto in the Institute for Aerospace Studies. He agreed to come on the show and we were super excited. Dr. Kelly, thank you so much for being with us today.

JONATHAN KELLY:  My pleasure.

WADE WINGLER:  Tell us a little bit about yourself and your background and how that translated into you becoming interested in self-driving wheelchairs.

JONATHAN KELLY:  My background is in visual navigation systems for robots. My doctoral work, my PhD, was in visual navigation, using multiple sensors in addition to vision to allow robots to navigate in various types of environments, indoors, outdoors, and so on.

I run a lab at the University of Toronto that does research in this area. We were approached a couple of years ago by a company called Cyberworks Robotics, who had this very compelling idea to apply a lot of this visual sensing technology to wheelchairs and assistive devices. Once their CEO explained what they were thinking of, it struck me that this could be a tremendous opportunity to take what I had learned and studied over many years and actually deploy it for assistive applications that could be game changing for a very large number of individuals.

It was incredibly exciting, so that was the genesis of our getting involved in the project.

WADE WINGLER:  I know wheelchairs have been around for a long time in many forms. Even the modern power wheelchair has been around for a while and is fairly mature technology. Why do we need one that drives itself? Why do we care about that?

JONATHAN KELLY:  We do have power chairs today that are quite capable in terms of their ability to drive able-bodied users around, users who are accustomed to and adept at using the standard joystick interface for a power chair, which is how many individuals use their chairs. Initially we were looking at a particular population of individuals, and this was one of the exciting parts: a group with spinal cord injuries, ALS, or severe hand tremors due to Parkinson’s, who are simply not able to operate power chairs using the standard joystick and are relegated to using somewhat arcane devices like sip and puff switches that are very difficult to use, very tedious, exhausting, and basically slow.

We looked at this segment of the population of power chair users and thought, wow, this group currently is left out of the power chair game, as it were. What can we do to actually enable them to achieve the same level of mobility that someone able to use a joystick would be able to achieve?

WADE WINGLER:  The current status of this chair: is it in production? Is it a prototype? What is the current status of the project?

JONATHAN KELLY:  At the moment it is a prototype chair. In cooperation with the company I mentioned, Cyberworks, and one other university here in Canada, the University of Sherbrooke, we all collaborated on building a quite sophisticated and capable prototype that is able to demonstrate all of the basic capabilities that you would want from this type of system. We are able to do corridor following and narrow doorway traversal, which is difficult even for individuals who are relatively good at using a joystick with a chair. We can go through doorways that have clearances of four inches on each side relative to the chair. That’s all done automatically. We have a whole mapping component that allows you to go point-to-point. You pick a location such as the kitchen, and if you are in the living room, you would simply indicate, via one of a number of different methods, that you would like to travel to the kitchen, and the chair would take you there automatically.
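
To make the doorway figure concrete, here is a minimal sketch, in Python, of the kind of clearance check involved: given a measured doorway width, is there at least the required margin on each side of the chair? The chair width, the four-inch margin, and the function itself are illustrative assumptions, not the project’s actual code.

# Minimal sketch (not the project's code): is a doorway wide enough to leave
# the required margin on each side of the chair? Dimensions are assumptions, in metres.

CHAIR_WIDTH_M = 0.66          # assumed overall width of the power chair
SIDE_CLEARANCE_M = 0.10       # roughly four inches of margin required per side

def doorway_is_traversable(doorway_width_m: float) -> bool:
    """Return True if the doorway leaves the required margin on both sides."""
    required = CHAIR_WIDTH_M + 2 * SIDE_CLEARANCE_M
    return doorway_width_m >= required

for width in (0.80, 0.86, 0.95):
    print(f"{width:.2f} m doorway -> traversable: {doorway_is_traversable(width)}")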

We are at the prototype stage with the technology, and now, in cooperation with some additional folks at the University of Toronto, we are working on the next step of the project, which is essentially a series of user studies, and also working towards regulatory approval here in Canada and also in the United States.

WADE WINGLER:  That answers one question. Now I’ve got a bunch more. My audience tends to be nerdy and enjoys the technical details of how these things work, so I want to get into some of the how-to. It has to be sensing the environment, processing that information, making navigational decisions, and all kinds of stuff. From my research before our talk today – does it use videogame technology for part of this?

JONATHAN KELLY:  Yeah. That’s a really interesting part of it. At the moment, for sensing the environment, it uses a Microsoft Kinect sensor, which many people would be familiar with from their Xbox. Xbox users are using the exact technology that the chair is using. The interesting thing about these Kinect sensors – and there are a variety of competitors to the Kinect that have this capability – is that they give you not just a picture, like a flat photograph, which ordinary cameras can provide; the Kinect also provides depth information. It provides reliable information about distances to all the objects it can see within its field of view. That’s actually one of the critical components of the system: being able to sense how far away things are. It sounds like a trivial problem, but if you try to do it using just simple cameras, it actually is quite complicated and difficult. The Kinect enables us to sense this distance information and use it for mapping and other tasks.
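
A minimal sketch of the depth-sensing idea described above, assuming a Kinect-style depth image represented as a 2-D array of per-pixel distances in metres; the array shape, the validity threshold, and the function are illustrative, not the chair’s actual perception code.

import numpy as np

def nearest_obstacle_distance(depth_m: np.ndarray, min_valid_m: float = 0.3) -> float:
    """Return the distance to the closest valid depth reading in the frame."""
    valid = depth_m[depth_m >= min_valid_m]        # drop zero / too-close readings
    return float(valid.min()) if valid.size else float("inf")

# Synthetic 480x640 depth frame: background at 3.0 m, an obstacle patch at 1.2 m.
frame = np.full((480, 640), 3.0)
frame[200:280, 300:340] = 1.2
print(nearest_obstacle_distance(frame))            # -> 1.2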

The other great thing about the Kinect is that, because it has been mass-produced by Microsoft and sold with millions of Xbox systems, the price point for the unit has come down so dramatically compared to other equivalent sensing technology that it has been a game changer in robotics for a variety of applications beyond wheelchairs, because it is so inexpensive. We are then able to consider putting it on a wheelchair and having an affordable device, as opposed to spending $10,000 on sensors that used to be required to do what the Kinect now does for $200.

WADE WINGLER:  Thank you Microsoft.

JONATHAN KELLY:  Indeed. It was a company they partnered with that developed the technology. It’s very neat.

WADE WINGLER:  So you’re using that technology to build a map, but then you have to interpret that map and decide what to do with it. How does that work?

JONATHAN KELLY:  In partnership between the University of Sherbrooke, ourselves, and several others, we put together a mapping package based on an open-source piece of mapping software that was developed at Sherbrooke. It does this interpretation. It allows us to take in a variety of information from the Kinect sensor and put it all together to assemble a map that is geometric, so it would look much like a floorplan that you would look at on paper.

However, this map contains additional information. As we build it, we incorporate information about visual details in the environment so the chair can recognize when it is somewhere it has been before and be confident about where it is going and where it has been. That is critical, because the chair always needs to know a good approximation of where it is in the environment it operates in. The map also includes information about clearances and obstacles. We make sure that there is a buffer around any type of obstacle that could be dangerous or anything you would want to avoid. There is clearance information so that we make sure we don’t drive too close to walls or too close to doors. We are also able to sense things like stairs, so the chair is always aware that it must not travel too close to a stairwell. All this information is incorporated into the map.
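
The clearance buffer described above can be illustrated with a small sketch: inflating occupied cells in a 2-D occupancy grid so a planner never plots a path too close to walls, doors, or a stairwell edge. The grid format and inflation radius are assumptions for illustration only, not the real map format.

import numpy as np

def inflate_obstacles(grid: np.ndarray, clearance_cells: int) -> np.ndarray:
    """Mark every cell within clearance_cells of an occupied cell as off-limits."""
    inflated = grid.copy()
    h, w = grid.shape
    for r, c in zip(*np.nonzero(grid)):            # indices of occupied cells
        r0, r1 = max(0, r - clearance_cells), min(h, r + clearance_cells + 1)
        c0, c1 = max(0, c - clearance_cells), min(w, c + clearance_cells + 1)
        inflated[r0:r1, c0:c1] = 1
    return inflated

grid = np.zeros((8, 8), dtype=int)
grid[3, 4] = 1                                     # a single obstacle cell
print(inflate_obstacles(grid, clearance_cells=1))  # obstacle plus a one-cell buffer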

WADE WINGLER:  My next question is more about where it works. Are we talking about indoor environments exclusively, or can you go outdoors with this? Does there need to be a previous familiarity with the environment, or can it go into unfamiliar environments and figure it out?

JONATHAN KELLY:  Both great questions. At the moment, we have primarily targeted indoor environments, places like care homes, individual homes, office spaces, shopping centers, warehouses, wherever a person may live or work. The project has evolved, and the company is currently looking at moving to outdoor spaces, where one of the benefits is that you have access to GPS information. You can actually get a GPS signal that gives you a pretty good idea of where you are. Of course, GPS doesn’t work indoors, so that’s why we use the visual technology.

The one difficulty with the current sensing devices like the Kinect that are available on the market is that they don’t work well outside in bright sunlight. They rely on infrared light, which they project into the world and read back. It happens that the sun, in addition to putting out a lot of bright light that we can see, also puts out a spectrum of light that essentially partially blinds the Kinect-type sensors. There are already sensors coming down the pipe that address this and will be in the same price range, but they are not quite at the market level yet.

I’ve now missed the second question.

WADE WINGLER:  That’s okay. How familiar does the environment need to be and how good is the system at figuring it out?

JONATHAN KELLY:  It does two things. It has multiple modes of operation. The first mode is what we call the mapping mode, where a user would be given a chair to operate, but first an occupational therapist who has diagnosed the individual would take the chair to the home or workplace where they are going to operate the device and simply drive the chair, manually or through a semiautomated approach, around the places the person is likely to travel. The system builds a larger map, which it stores in its onboard computer, at the moment an Intel i7 that sits bolted on the side of the chair. The full map is stored on board, and at that point, once that map is built, the user is able to go from any point to any point in that map automatically.
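
A minimal sketch of the two modes just described: a mapping pass that tags named locations, and a point-to-point request against the stored map. The class, the location names, and the poses are hypothetical placeholders, not the real software interface.

from dataclasses import dataclass, field

@dataclass
class ChairNavigator:
    locations: dict = field(default_factory=dict)  # place name -> (x, y) in the map frame

    def record_location(self, name: str, pose_xy: tuple) -> None:
        """Mapping mode: the therapist drives the chair and tags key places."""
        self.locations[name] = pose_xy

    def go_to(self, name: str) -> str:
        """Navigation mode: request a point-to-point trip inside the prebuilt map."""
        if name not in self.locations:
            return f"'{name}' is not in the map; visit it once in mapping mode first."
        x, y = self.locations[name]
        return f"Planning a path to '{name}' at ({x:.1f}, {y:.1f})."

nav = ChairNavigator()
nav.record_location("kitchen", (4.2, 1.5))
print(nav.go_to("kitchen"))
print(nav.go_to("garage"))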

Of course, as you pointed out quite rightly, there are going to be many situations where it may not be possible to build such a map in advance. The chair has additional modes of operation that enable it to do certain useful things that do not require a full map. For example, the user can choose to move to a location that may not be mapped simply by indicating they want to drive in a certain direction. If there is a doorway, the chair will realize and understand that there is a narrow corridor or passage, and automatically drive through it. Also, if the user is in a hallway that has not been previously mapped, he or she can simply indicate they would like to drive down the hallway to the end, and the chair, without any prior map, is able to drive down the middle of the hallway, understanding that it should maintain a proper distance from both walls.
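
The map-free hallway behaviour can be sketched as a simple balancing rule: steer so the measured distances to the left and right walls stay equal. The gain and the sensor inputs below are illustrative assumptions, not the system’s actual controller.

def hallway_steering(dist_left_m: float, dist_right_m: float, gain: float = 0.8) -> float:
    """Return an angular velocity command in rad/s; positive means turn left."""
    # If the right wall is closer, dist_left_m - dist_right_m is positive, so the
    # chair steers left toward the middle of the hallway, and vice versa.
    return gain * (dist_left_m - dist_right_m)

print(hallway_steering(1.4, 0.6))   # drifted toward the right wall -> turn left
print(hallway_steering(1.0, 1.0))   # centred -> 0.0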

Most recently, in cooperation with us, the company has begun to investigate desk docking: an automated procedure for driving up and positioning the chair at the appropriate distance from a desk or table, sliding in underneath it if there is clearance or stopping in front of it if there is not. That, too, can be performed without a prebuilt map.

WADE WINGLER:  We know it knows to stop at the top of stairs because it is measuring clearance. I would have to assume it also deals with an area that doesn’t have enough head clearance, like going under the bottom of a staircase or something like that.

JONATHAN KELLY:  That’s exactly correct. It is also aware of clearance above and below the body of the chair. If you were to try to pass through a doorway that does not have sufficient vertical clearance, maybe a low door, the chair would know this and stop, and not allow the user to drive through. It would simply post a warning or error message saying the path is not traversable.
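
A minimal sketch of that vertical-clearance test: compare the lowest overhead reading along the intended path against the chair-plus-occupant height plus a safety margin. All heights here are assumed values for illustration.

CHAIR_AND_USER_HEIGHT_M = 1.45   # assumed seated height including the chair
SAFETY_MARGIN_M = 0.10

def path_is_traversable(lowest_overhead_m: float) -> bool:
    """True only if the lowest overhead point leaves room for chair, user, and margin."""
    return lowest_overhead_m >= CHAIR_AND_USER_HEIGHT_M + SAFETY_MARGIN_M

for overhead in (2.10, 1.50, 1.30):
    status = "ok" if path_is_traversable(overhead) else "warning: path is not traversable"
    print(f"overhead clearance {overhead:.2f} m -> {status}")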

WADE WINGLER:  What about fast-moving objects that are coming at it from inside its path or from alongside or behind?  Can it detect those?

JONATHAN KELLY:  Yes, it can. It does in fact have to. The system operates in a number of layers. You can think of it almost like a layer cake where there is a high-level module that understands where the user would like to go – for example, living room to kitchen – and there are a series of lower-level layers that handle situations that may be unexpected.

For example, if there is a fast-moving object that pops into view – a dog runs in front of the chair or something, or a child runs by – there is a lower-level module that handles identifying and categorizing fast-moving objects and then determines what the chair should do. Of course, those objects are usually not in the map that the chair has stored, because the map was usually captured when there was nobody around or when there was not a dog running back and forth.

The lower-level module will recognize that there is an object moving in the path of the chair and will do one of two things. Depending on the speed and direction of the obstacle, it will either smoothly come to a stop and inform the user that there is an obstacle and that it will wait until the obstacle moves; or, if the obstacle is slow-moving or stationary but has not been seen before, then, if the chair considers it safe based on a series of safety criteria, it will attempt to plot a course that drives around the obstacle. This is used for scenarios where, in someone’s home, for example, a caretaker or the individual themselves may put a laundry basket full of laundry on the floor. The laundry basket is not going to jump out and do anything dangerous, but it is blocking part of the path that was previously wider and has not been noted before. The chair will then recognize this laundry basket, realize that it is an obstacle that was not there before but is stationary, and if there is sufficient clearance, again taking into account those clearance constraints to make sure we’re not going to come close to bumping into walls, a low-level module will simply plan a short path that takes it around the obstacle. After that, the higher level of this layer cake will resume control and proceed to wherever the user has selected as their ultimate destination.
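
The layered decision just described can be sketched roughly as follows: the low-level layer classifies an unexpected obstacle and either stops and waits, detours around it, or lets the high-level goal continue. The speed threshold and clearance test are illustrative assumptions, not the system’s actual safety criteria.

from enum import Enum

class Action(Enum):
    CONTINUE = "continue toward the goal"
    STOP_AND_WAIT = "stop and wait for the obstacle to move"
    DETOUR = "plan a short path around the obstacle"

def handle_obstacle(speed_mps: float, in_path: bool, side_clearance_m: float) -> Action:
    """Low-level layer: decide how to react to an obstacle that is not in the map."""
    if not in_path:
        return Action.CONTINUE                 # hand control back to the high-level layer
    if speed_mps > 0.3:                        # fast mover, e.g. a dog or a running child
        return Action.STOP_AND_WAIT
    if side_clearance_m >= 0.10:               # enough room to pass safely
        return Action.DETOUR                   # e.g. a laundry basket left on the floor
    return Action.STOP_AND_WAIT

print(handle_obstacle(1.2, True, 0.30).value)  # fast mover -> stop and wait
print(handle_obstacle(0.0, True, 0.25).value)  # stationary basket with room -> detour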

I should also add briefly that, thanks to the great work from the folks at Sherbrooke, one of the things the mapping module is able to do is realize, after a period of time, if it sees the same altered state of the environment – for example, you’ve moved in a new table or put a new chest of drawers in your bedroom – and that object is there consistently at an approximately fixed position, then the mapping software, after seeing it several times, will realize this is something that is likely to remain for a period of time. It is not an obstacle it would expect to see only once in a while; it is something that has become a semipermanent part of the environment, so it will simply update its map to incorporate that new piece of furniture. From that point on it doesn’t need to run this low-level obstacle avoidance procedure, because it already knows that object is expected to be there.
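
A minimal sketch of that map-update behaviour: an obstacle observed at roughly the same spot on several separate runs is promoted from transient to semipermanent and folded into the map. The sighting threshold is an assumption for illustration.

from collections import defaultdict

PROMOTION_THRESHOLD = 3        # sightings on separate runs before an object joins the map

class ObstacleMemory:
    def __init__(self):
        self.sightings = defaultdict(int)
        self.semipermanent = set()

    def observe(self, cell: tuple) -> None:
        """Record one sighting of an unexpected obstacle at a map cell."""
        self.sightings[cell] += 1
        if self.sightings[cell] >= PROMOTION_THRESHOLD:
            # Fold it into the map so low-level avoidance no longer has to react to it.
            self.semipermanent.add(cell)

memory = ObstacleMemory()
for _ in range(3):
    memory.observe((12, 7))                    # a new chest of drawers, seen on three runs
print((12, 7) in memory.semipermanent)         # -> True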

WADE WINGLER:  I could do this all day with you. This is fascinating. Sadly we are about out of time for the interview. I have a couple of questions we have to cover. When do you think something like this might be available?  What kind of cost do you anticipate for such a technology?

JONATHAN KELLY:  “When” is a very interesting question. It largely depends on regulatory approval. We are working toward that now and hope that a device with basic functionality would be on the market in perhaps two years, depending on how fast we are able to proceed through the regulatory process. We believe the technology is there, but the regulatory process of demonstrating safety and utility to individuals is quite stringent. We are working through that with partners here at the University of Toronto in the medical field.

The cost is an interesting question. When we were approached by Cyberworks, the other fantastic challenge we found with their proposal was that they wanted to do this for a cost of less than – and this is ballpark – $1500 as an addition to existing power chairs. The technology and device that has been built is what you might call chair-agnostic. It is built for any power chair that operates with a joystick. We utilize the joystick interface, so if you are able to plug into the joystick port, this device can essentially be added. That was one of the major goals. We obviously don’t want individuals who already have perfectly functioning power chairs, but who may not be using them to their full capacity due to mobility impairments or other issues, to have to spend thousands of dollars on a new chair just to get this technology. From the very beginning, it was designed to simply be a bolt-on addition that could be added to existing chairs. That’s the goal. There are the visual sensing components, the computer that is added, and smart additions to the wheels that tell you how many times the wheels have rotated and how fast. That’s essentially it. It is designed to be a bolt-on kit that would be installed by a technician in a couple of hours. After that you would roll out of the shop with your new self-driving power chair.
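
To give a concrete picture of what those “smart additions to the wheels” contribute, here is a minimal sketch of wheel-encoder odometry: converting tick counts into distance travelled and speed, which the navigation software can fuse with the visual sensing. The tick resolution and wheel radius are assumed values, not the kit’s actual specifications.

import math

TICKS_PER_REVOLUTION = 1024    # assumed encoder resolution
WHEEL_RADIUS_M = 0.17          # assumed drive-wheel radius

def wheel_odometry(delta_ticks: int, dt_s: float) -> tuple:
    """Return (distance_m, speed_mps) for one wheel over an interval of dt_s seconds."""
    revolutions = delta_ticks / TICKS_PER_REVOLUTION
    distance = revolutions * 2 * math.pi * WHEEL_RADIUS_M
    return distance, distance / dt_s

dist, speed = wheel_odometry(delta_ticks=512, dt_s=0.5)
print(f"moved {dist:.3f} m at {speed:.3f} m/s")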

WADE WINGLER:  Very cool. Before we go, what kind of contact information can you provide us so that our listeners can follow up with this fascinating story?

JONATHAN KELLY:  If interested listeners would like to get in touch, they are more than welcome to contact me. I would love to answer questions by email. My email address is jkelly@utias.utoronto.ca. You can also find us at the URL www.starslab.ca. That’s our Stars Lab website, where we have some information on a paper that was recently published, which interested individuals can read to get more information about the technology. The last thing I would point you to is the partner company that we work with, which is now continuing to take the development to the next level. They are at cyberworksrobotics.com.

WADE WINGLER:  Dr. Jonathan Kelly is an assistant professor at the University of Toronto, working on a self-driving wheelchair. Thank you so much for being our guest today.

JONATHAN KELLY:  It was my pleasure. Thank you for having me.

WADE WINGLER:  Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? Call our listener line at 317-721-7124, shoot us a note on Twitter @INDATAProject, or check us out on Facebook. Looking for a transcript or show notes from today’s show? Head on over to www.EasterSealstech.com. Assistive Technology Update is a proud member of the Accessibility Channel. Find more shows like this plus much more over at AccessibilityChannel.com. That was your Assistive Technology Update. I’m Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana.

***Transcript provided by TJ Cortopassi.  For requests and inquiries, contact tjcortopassi@gmail.com***
