
Our Autonomous Future with Service Robots

POSTED 07/24/2014 | By: Tanya M. Anandan, Contributing Editor

Robots are increasingly broadening our horizons beyond the factory floor. From robotic vacuums, bomb retrievers, exoskeletons and drones, to robots used in surgery, space exploration, agriculture and construction, service robots are building a formidable résumé. Yet the world of service robotics is still a largely untapped mine of possibilities.

By all accounts, the growth potential for service robotics is huge, destined to eventually surpass its big brother, industrial robotics. Many have likened this phenomenon to the proliferation of personal computers in the 1980s.

While there have been remarkable advances in the technologies enabling robots to augment our strengths and talents, there is still much work ahead. Experts agree that we have a long way to go before robots are omnipresent in our everyday lives.

Autonomous videoconferencing mobile robot bridges the distance between coworkers in different locations. (Courtesy of iRobot Corporation)

The future of service robotics lies in the research taking place today in our universities, research institutes and corporate R&D labs. Advances in algorithms, haptics, mobile manipulation, hybrid actuation, and soft robotics are getting us closer to the ultimate goal, true autonomy.

At Home, the Office and First on the Scene
Perhaps the best-known ambassador for service robots is the iRobot Corporation of Bedford, Massachusetts. With its robots for domestic cleaning (most notably the Roomba®), defense and security, and now telepresence, iRobot commands the widest breadth in professional and personal service robotics.

“When you start thinking about the future of the service robotics industry, in particular systems that will be operating in more unstructured environments or in environments that have close interactions with people, you end up with some additional challenges or opportunities as compared to the industrial automation setting,” says Dr. Chris Jones, Director of Strategic Technology Development at iRobot.

“Whether they’re in a hospital, or the office, or in the home, you end up with some new challenges given the dynamic nature of those environments, where the people aren’t necessarily savvy with robots,” says Jones. “They may not even know what they are, or what the robots are trying to do. You have to thoughtfully design your robot systems to be effective in those types of environments.”

iRobot has invested considerable time and energy into mobility, navigation and manipulation, and these continue to be key areas of development.

Mobile Autonomy? Here!
The iRobot Ava® 500 videoconferencing mobile robot recently made its debut. We’ve seen mobile telepresence before, but this system has something the others don’t – autonomy.

“Once you’re in the human environment, you can never know what you’re going to encounter, so that robot needs to be able to effectively sense and navigate whatever might be thrown at it in that type of office environment,” says Jones. “One day there might be a bunch of boxes in the hallway that it needs to navigate around, and sometimes the hallway may be completely blocked. It needs to recognize that it’s blocked, and then effectively find another way to get to its destination.”



Whether it’s cruising around the office or the factory floor, the mobile collaborator finds its way. Jones describes how it maneuvers through its environment on its own once users schedule a time and location for their video conference.

“The Ava 500 will drive to that location at that time. The person logs into the robot and they’re exactly where they want to be. When they’re done with their meeting, they just log out and the robot will autonomously drive itself back to its charging station and recharge, or move onto its next meeting location. That autonomous operation distinguishes us in the field.”
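
By way of illustration, here is a minimal Python sketch of that scheduled-meeting behavior: drive to the booked location at the booked time, detour if a hallway is blocked, report if no route exists, and return to the dock afterward. It is not iRobot's software; the map, route names, and functions are invented.

```python
# Minimal sketch (not iRobot's software) of the scheduled-meeting behavior Jones
# describes. All names, maps and routes here are invented for illustration.
from dataclasses import dataclass


@dataclass
class Meeting:
    location: str    # e.g. "Conference Room B"
    start_time: str  # e.g. "09:00"


def plan_route(start, goal, blocked_edges):
    """Toy planner: return a list of waypoints, or None if every route is blocked."""
    # A real system would run a graph or grid planner over a map the robot builds itself.
    candidate_routes = {
        ("dock", "Conference Room B"): [
            ["hall-1", "Conference Room B"],
            ["hall-2", "atrium", "Conference Room B"],
        ],
    }
    for route in candidate_routes.get((start, goal), []):
        edges = list(zip([start] + route[:-1], route))
        if not any(edge in blocked_edges for edge in edges):
            return route
    return None


def run_meeting(position, meeting, blocked_edges):
    route = plan_route(position, meeting.location, blocked_edges)
    if route is None:
        print("All routes blocked; cannot reach", meeting.location)
        return position
    for waypoint in route:
        print("Driving to", waypoint)
    print(f"Arrived at {meeting.location} for the {meeting.start_time} meeting; user logs in")
    # After the user logs out, head back to the charging station (or the next meeting).
    print("Meeting over; returning to charging dock")
    return "dock"


if __name__ == "__main__":
    # Boxes block the direct hallway today, so the robot detours through the atrium.
    blocked = {("dock", "hall-1")}
    run_meeting("dock", Meeting("Conference Room B", "09:00"), blocked)
```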

Cost-Effective Navigation and Perception
One of the development areas iRobot is focusing on is cost-effective, vision-based navigation and the use of commodity cameras or image sensors to navigate. The challenge lies in the robot not only being able to “see” its surroundings, but also perceive, understand and adapt to them.

“Let’s say you have a fictitious mobile robot with an arm, which can move around an office building,” says Jones. “It might even need to open and close doors. To do that, it needs to be able to effectively know what a door is. It needs to be able to perceive the environment around it and say oh, that’s a door, and there’s the door handle. The environmental factors around recognizing that door are going to be more challenging.”

“Lighting conditions could be changing all the time. One day you could see the door with bright sunlight coming through the window, and the next day it could be overcast and that door could look very different to an imaging sensor.”

“We believe there’s much more we can do to provide a more valuable solution,” says Jones. “That’s going to be very important if you’re going to see robots deployed in more environments outside of an industrial setting.”

He also thinks manipulation will be an important area of development, and says that mobility poses additional challenges compared to traditional fixed-base manipulators common in industrial settings.

“When you’re talking about mobile platforms with manipulation capability, you have to be very sensitive to the size and weight of those arms,” says Jones. “Yet, you still want to have very robust capabilities. The types of costs that we would be targeting are two to three orders of magnitude less expensive than the state of the art. There are real challenges in developing a highly capable manipulation system at significantly lower cost to make it viable for many of these applications.”

Soft Robotics for Compliance
Jones believes soft robotics and compliant arms and hands will play an important role in the service sector. A soft, pliable bot that grabbed the tech media’s attention a few years ago could be the precursor.

All-terrain mobile robot used for defense and security applications equipped with an experimental inflatable arm (Courtesy of iRobot Corporation)

ChemBot was the product of DARPA-funded research activities in physical compliance, and iRobot was a major contributor to the program. Affectionately nicknamed the blob, the ChemBot was designed around jamming transition technology.

This video illustrates jamming transition and shows the ChemBot in action.

Our article on the future of robot end effectors featured an ingenious gripper that uses jamming transition technology to grasp everything from eggs to shards of glass. Researchers at iRobot experimented with the jamming transition gripper on its PackBot® line of defense and security robots. Check out the video.

Jones says although ChemBot is no longer active and the jamming gripper on PackBot was early-stage research, that line of thinking is still very much alive.
 
“As part of the DARPA ARM program, we’re building a three-finger hand that has physical compliance built into the knuckles of the fingers,” says Jones. “Those fingers will passively flex and deflect whenever they make contact with the environment. The lineage goes back to the ChemBot.”

This video shows the ARM-H being put through its paces and scoring a home run.

Mobile Manipulation
Algorithm development to control soft mobile manipulation systems will also play an important role in new and varied service robot applications.

This video collage provides a snapshot of the various research projects in mobile manipulation taking place around the globe and the variety of robots leading the way.

iRobot partnered with Professor Dmitry Berenson at Worcester Polytechnic Institute to create an exclusive software framework for its PackBot manipulator. This video demonstrates the process. 
 
“People like Dmitry are doing great work in terms of how you do control and planning when you’re talking about using highly compliant manipulation mechanisms,” says Jones. “That’s a new area. It’s still very fresh and Dmitry is one of a handful of people that are starting to look into control and planning using those types of mechanisms.”

Motion Planning
Software frameworks focusing on motion planning are one of the breeding grounds for greater degrees of autonomy, a vital element in both professional and personal service robotics.

“To make a robot that can do useful tasks in your home, like cleaning up your kitchen or dining room, or helping someone with disabilities get out of bed or into the bathtub, means interacting with the world. In that case, we need serious research into algorithms and robot intelligence,” says Dmitry Berenson, Assistant Professor of Computer Science and Robotics Engineering at Worcester Polytechnic Institute (WPI) in Massachusetts.

“There’s not going to be a useful domestic robot that manipulates the environment until you have more intelligence capabilities,” he says. “That’s where we’re actively doing research.”

A Robotic Industries Association member, WPI is the first school to offer an undergraduate degree in robotics, as well as master’s and doctoral programs in the discipline. Professor Berenson leads the Autonomous Robotic Collaboration Lab (ARC), which focuses its research on human-robot collaboration, machine learning, and manipulation planning.

The challenge, or opportunity says Berenson, is that in the personal service robotics sector in particular, the environment is unstructured and constantly changing. The robot needs to be able to adapt to its surroundings and the task at hand.

Motion planning algorithm testing with humanoid robot in preparation for valve-turning task in DARPA Robotics Challenge (Courtesy Worcester Polytechnic Institute)

“We want robots to be intelligent enough to work with us and interact with the environment as collaborators,” says Berenson. “The two biggest projects that we are working on right now are collaboration in a factory setting and collaboration in a more autonomous domain where the person simply says I need you to manipulate these kinds of objects, now go and do it.”

“Our focus is to make robots that can interact with people in their environments as autonomously as possible,” explains Berenson. “What that means is that the robot must understand what the human is doing. They have to understand it at two levels. The first level is the ‘what’ and the second is the ‘how’ level. The ‘what’ is what are you trying to achieve. Why did you pick that thing up? Well, it’s because I know that thing goes into this slot and that’s part of the assembly plan. The ‘how’ level is how you’re actually doing this.”

In our recent collaborative robots article, we featured research underway at the Massachusetts Institute of Technology where Professor Julie Shah’s Interactive Robotics Group is studying the ‘what’ that Berenson referenced. This is the task planning side of human-robot collaboration, or the reason for doing certain movements and their proper sequence. Meanwhile, Berenson is focused on the ‘how’ or the motion planning side.

“The how level is actually something very subtle. It has to do with how the person moves,” explains Berenson. “Just by looking at how the person moves, we can infer how they are doing a certain task. Are you going to pick it up this way or that way? Are you going to slide the object along the table like this, or are you going to pick it up and put it into its place like that? We want to be able to understand this as quickly as possible, how people do that, so the robot can do a complementary action in a way that is both efficient and safe around the human.”
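
As a rough illustration of that “how”-level inference, the sketch below compares the start of an observed hand trajectory against a couple of invented motion templates (slide along the table versus lift and place) and picks the closer match, so the robot could choose a complementary action early. It is a toy stand-in, not Berenson's method.

```python
# Toy "how"-level inference: match a partial hand trajectory to invented templates.
import numpy as np

# Hypothetical 2D templates: sequences of (x, z) hand positions.
TEMPLATES = {
    "slide_along_table": np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0]]),
    "lift_and_place":    np.array([[0.0, 0.0], [0.05, 0.1], [0.15, 0.15], [0.3, 0.05]]),
}


def infer_motion_style(partial_trajectory):
    """Return the template whose prefix best matches the observed partial motion."""
    observed = np.asarray(partial_trajectory, dtype=float)
    n = len(observed)
    scores = {}
    for name, template in TEMPLATES.items():
        prefix = template[:n]
        scores[name] = float(np.linalg.norm(observed - prefix))  # simple distance score
    return min(scores, key=scores.get), scores


if __name__ == "__main__":
    # The hand rises off the table almost immediately: looks like lift-and-place.
    seen_so_far = [[0.0, 0.0], [0.06, 0.09], [0.14, 0.16]]
    style, scores = infer_motion_style(seen_so_far)
    print("Inferred style:", style, scores)
```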

Whether they’re studying human-robot collaboration for a factory setting or for a less-structured environment in the home, Berenson says the goal is the same.

“I can make a robot that goes into your kitchen and loads the dishwasher, or I can make a robot that goes into a warehouse and packs cups into boxes. It doesn’t really matter to us. It’s using exactly the same code. The key ability is that it knows how to pick up cups. Now that’s very specific to cups, but in general, that it knows how to manipulate objects in the world. If it can do that, then it can do a lot of general-purpose applications.”

“We provide these fundamental methods – algorithms – that enable all of these tasks. The applications are manufacturing, domestic service robots, and someday I think we can automate some parts of surgery.”

The Power of Algorithms
Algorithms are Berenson’s area of expertise. They give robots their intelligence and are the step-by-step paths to greater autonomy.

“We focus on algorithms for motion planning, a way to generate movement for the robot that is intelligent. The robot is not just playing back a sequence of motions (as in a repetitive industrial task), nor is it just reacting to you pushing against it by shifting to one side (some collaborative robots are capable of this).

“The robot is actually saying, okay, there’s that cup on the table, I need to figure out a whole sequence of motions for how I reach for that cup, how I pick it up, and put it somewhere,” he says. “I need to make sure that the motion is going to be safe, that it doesn’t take too long, and that it obeys all the constraints. For example, if the cup has water in it, I want to move it so the water doesn’t spill out.”

The task may be even more open-ended, suggests Berenson, such as taking something out of the refrigerator, which might require moving stuff out of the way.

“There’s no manual or sequence of steps. You have to look at the refrigerator, think about how you’re going to get this item out of there, and come up with a plan on the spot. People are very good at doing that, but for a robot that’s a very difficult task. We need to create algorithms to program all of that ability.”
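
A toy sketch of that kind of on-the-spot plan might look like the following, where the robot works out which objects block the target, moves them aside, grasps the goal item, and then restores the shelf. The shelf layout and action names are invented; a real planner would also reason about geometry and grasps.

```python
# Simplified sketch of "get the item out of the refrigerator" rearrangement planning.
def plan_retrieval(target, shelf_front_to_back):
    """shelf_front_to_back lists objects from the shelf edge inward; earlier items
    block access to later ones. Returns an ordered list of primitive actions."""
    if target not in shelf_front_to_back:
        raise ValueError(f"{target!r} is not on this shelf")
    blockers = shelf_front_to_back[:shelf_front_to_back.index(target)]
    plan = []
    for obj in blockers:
        plan.append(("move_aside", obj))   # place each blocker on the counter
    plan.append(("grasp", target))
    plan.append(("retract", target))
    for obj in reversed(blockers):
        plan.append(("put_back", obj))     # restore the shelf afterward
    return plan


if __name__ == "__main__":
    shelf = ["milk carton", "jam jar", "leftovers"]
    for step in plan_retrieval("leftovers", shelf):
        print(step)
```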

Rising to the Challenge
One of Berenson’s algorithms is playing a recurring role in the DARPA Robotics Challenge (DRC). The Constrained Bi-directional Rapidly-Exploring Random Tree (CBiRRT) algorithm was developed during Berenson’s doctoral work at Carnegie Mellon University (CMU) and was used on the DRC-Hubo robot that competed in the DRC Trials this past December. Berenson is now a member of Team WPI-CMU, which will compete in the DRC Finals in June 2015.

This video shows the algorithm running on the DRC-Hubo robot, allowing it to autonomously rotate a valve wheel, one of the task challenges in the DRC.

“It’s a kind of general-purpose algorithm for doing motion planning for robotic arms and humanoids,” explains Berenson. “Say you have a robot in a certain position and it needs to reach for an object, or turn around, or pick up a box. The algorithm will generate the motion for the robot.”

“The hard part is that those tasks require you to take into account constraints on your motion. Imagine when you pick up a box, if the box is really heavy, you pick it up in a different way than if it’s light. With a heavy box you have to pay more attention to your center of gravity. That algorithm allows you to create motion that obeys constraints such as balance and closed chain kinematics.”
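
The sketch below conveys the overall structure of such a constrained bidirectional RRT, assuming a toy 2D constraint in place of balance or closed-chain kinematics. It is not Berenson's implementation, and path extraction is omitted.

```python
# Toy sketch in the spirit of CBiRRT: two trees, constraint projection, connection test.
import random

STEP = 0.2          # extension step size
CONNECT_TOL = 0.25  # trees are considered joined within this distance


def project(q):
    """Project a 2D configuration onto the constraint manifold y = x."""
    m = (q[0] + q[1]) / 2.0
    return (m, m)


def nearest(tree, q):
    return min(tree, key=lambda n: (n[0] - q[0]) ** 2 + (n[1] - q[1]) ** 2)


def extend(tree, q_target):
    """Take one step from the nearest node toward q_target, then re-project."""
    q_near = nearest(tree, q_target)
    dx, dy = q_target[0] - q_near[0], q_target[1] - q_near[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 1e-9:
        return q_near
    q_new = project((q_near[0] + STEP * dx / dist, q_near[1] + STEP * dy / dist))
    tree.append(q_new)
    return q_new


def cbirrt_sketch(start, goal, iterations=2000):
    tree_a, tree_b = [project(start)], [project(goal)]
    for _ in range(iterations):
        q_rand = (random.uniform(-2, 2), random.uniform(-2, 2))
        q_new = extend(tree_a, q_rand)   # grow one tree toward the sample
        q_conn = extend(tree_b, q_new)   # grow the other tree toward the new node
        if (q_new[0] - q_conn[0]) ** 2 + (q_new[1] - q_conn[1]) ** 2 < CONNECT_TOL ** 2:
            return True                  # trees met: a constrained path exists
        tree_a, tree_b = tree_b, tree_a  # swap so both trees grow evenly
    return False


if __name__ == "__main__":
    print("Connected:", cbirrt_sketch(start=(-1.0, -1.0), goal=(1.5, 1.5)))
```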

Humanoid robot running on a pose-constrained whole-body motion planning algorithm stacks boxes (Courtesy of Dmitry Berenson)

This video shows one of Berenson’s algorithms running on a humanoid robot stacking boxes, among other applications where robot pose constraints play a pivotal role.

The algorithms Berenson and his ARC Lab team have created are freely available online on the lab’s GitHub page. While Berenson says the open-source software is mostly used in a research context, the algorithms have found use in some in-house industrial projects.

Berenson says his CBiRRT algorithm doesn’t care what kind of robotic arm it’s running on. “I’ve had it work on everything from a six degree-of-freedom arm to a 28 degree-of-freedom humanoid robot.”

He says the core of the DRC valve-turning code was the CBiRRT algorithm. “The majority of things you see in the DARPA challenge are very tightly teleoperated. There was a person commanding every motion of the robot. Our approach was to use more autonomy, which is our research direction.”

“The way that the DRC-Hubo Team did it is that we locate the valve in the world and then we tell the robot where it is,” says Berenson. “Then the robot autonomously plans a sequence of motions to turn the valve.”

This video shows a summary of the ARC Lab’s work for the valve-operating part of the DRC Trials in December. It includes actual footage from the event and test runs of the CBiRRT algorithm on the DRC-Hubo, PR2 and Hubo2 humanoid robots.

“Because we create these kinds of fundamental algorithms, it doesn’t matter if it’s disaster response or helping you in your kitchen, or packing boxes in a factory, it’s the fundamental capability to manipulate objects in the world autonomously that’s very important,” says Berenson. “That’s why I wanted to get involved in the DRC in the first place.”

Autonomy Continuum
So how far away is true autonomy? Berenson says the big leap will have to come from the software side. He feels he’s in the right place with his algorithms work.

“There are still some things that will have to come from hardware,” he says. “Robots need to get less expensive, lighter (you don’t want a robot that weighs 300 lbs to fall on you), and softer in case they make unintended contact. But even if you gave me one of those tomorrow, we would still have to do research on how to make it do useful tasks.”

“I see it more as a continuum. I think it will be a lot like how cell phones evolved. It didn't go from the clunky old phones to an iPhone overnight. I think it will be similar with robots.”

That continuum runs all the way to the tips of our fingers, or should we say the robot’s fingers. Haptic technology is allowing us to feel our way around the robot’s environment.

Feeling Haptic Technology
Haptics is giving robots a sense of touch and taking service robots to new depths and far reaches of our world.

Francois Conti is a visiting lecturer at Stanford University in the Artificial Intelligence Laboratory, which continues to be a hotbed for haptics research in Silicon Valley. He teaches a course in Experimental Haptics with Professor Kenneth Salisbury, a renowned researcher in haptics and medical robotics at Stanford.

“Both fields (robotics and haptics) have a lot of common aspects and are growing to become really one field at this point,” says Conti. He explains how 3D haptic devices work.

“It’s like holding a computer mouse, but these devices can be moved at any point in 3D space. There are mechanisms and motors that produce forces at the tip where you’re holding the device. Now imagine a cursor on your screen in a 3D environment, where you have an object in front of you. As you move the cursor towards the object, you’re now able to actually feel the shape, feel the contours, and the textures.”
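
A common way to turn that idea into forces is the penalty model: when the cursor penetrates a virtual object's surface, the device is commanded to push back proportionally to the penetration depth along the surface normal, which is roughly what lets you "feel" the shape. The sphere, stiffness value, and function below are a minimal, library-agnostic illustration, not any particular device's API.

```python
# Minimal penalty-based haptic rendering sketch for a virtual sphere.
import numpy as np

STIFFNESS = 800.0  # N/m, a typical order of magnitude for a stiff virtual wall


def render_force(cursor_pos, sphere_center, sphere_radius):
    """Return the 3D force to command at the device tip (zero outside the object)."""
    offset = np.asarray(cursor_pos, dtype=float) - np.asarray(sphere_center, dtype=float)
    distance = np.linalg.norm(offset)
    penetration = sphere_radius - distance
    if penetration <= 0.0 or distance == 0.0:
        return np.zeros(3)          # not touching the object
    normal = offset / distance      # push the cursor back out along the surface normal
    return STIFFNESS * penetration * normal


if __name__ == "__main__":
    # Cursor 2 mm inside a 5 cm sphere centered at the origin -> about 1.6 N outward.
    force = render_force([0.048, 0.0, 0.0], [0.0, 0.0, 0.0], 0.05)
    print("Commanded force (N):", force)
```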

Haptic device provides force sensing in robotic teleoperation (Courtesy of Force Dimension)

Haptics in the OR and in Orbit
Conti is also a cofounder of Force Dimension, a developer and manufacturer of haptic devices in Nyon, Switzerland. He says his company developed the first commercial haptic device to be approved for clinical usage in an operating room.

“Today, there are robotic catheters used in cardiac surgery that allow you to go inside the heart and perform procedures with the aid of this haptic device,” he says. “You can actually feel the workspace constraints inside of the heart.”

This video from a British television news clip shows the haptic device being used to guide a robotic catheter system for atrial fibrillation ablation.

Force Dimension’s first haptic interface was the brainchild of Conti and at the time three other master’s students at the Swiss Federal Institute of Technology in Lausanne, Switzerland.

“People would visit the lab, we would run demos, and visitors would try the technology,” says Conti. “Suddenly, someone asked if they could buy one.”

Conti says he and his colleagues didn’t take the offer seriously at first. Then a week later when the interested party asked how soon for delivery, they realized they were onto something.

“We bootstrapped the company by selling our haptic devices to universities and research institutions,” says Conti.

The year was 2001. The four founders were all starting their doctoral studies while launching a new start-up.

Conti had first set foot on Stanford’s campus as part of a special project during his Swiss-based master’s program. He was invited by Professor Oussama Khatib, another renowned Stanford researcher focused on human-friendly robot design, motion planning and control, and haptic interaction. Now Conti was back in California pursuing his doctorate, with professors Salisbury and Khatib as his research advisors.

“I lived a hybrid life working on research and at the same time running this company,” says Conti. “Being in Silicon Valley was an incredible experience, because there are so many companies around there working in all sorts of fields. I began working with some medical companies and we started to integrate our haptic devices in various medical products.”

In addition to medical robotics, the haptic devices are used in the aerospace (as shown in the movie at this site), pharmaceutical, and entertainment industries. Conti says less-expensive versions have been designed for computer gaming platforms such as the Novint Falcon controller.

As his company flourished and he earned his Ph.D., Conti kept his connection to Stanford through the haptics class he co-teaches with Salisbury once a year.

Open-Source Haptics Framework
An offshoot of that annual course is a set of open-source haptic visualization libraries written in C++ that Conti helped develop. The software set is called CHAI 3D, pronounced just like the tea.

“We needed software that could talk to different haptic devices and even speak to a virtual system that students could debug at home if they didn’t have their own device,” says Conti. “At one point, we said let’s put this all online. People are using it around the world. Every year we’ve grown it by releasing a new version.”

Open-source repositories such as CHAI 3D and the ARC Lab’s algorithm library bring a whole new meaning to collaborative robotics.

“Hansen Medical (maker of the robotic catheter system featured in the previous news clip) was one of the first companies I worked with when we were building our first device,” says Conti. “They have CHAI 3D integrated in their system. Different medical firms have embraced robotics technology and just adapt it for their needs.”

Haptics in Different Degrees
Just like robots, haptic devices come in different degrees of freedom. One of the simplest is a three-dimensional haptic device, meaning you can move it in space along three axes. This only allows for positioning an object.

“If you want to provide an orientation, for instance move an object to a position but adjust the angle, this requires additional degrees of freedom,” explains Conti. “It requires additional mechanical parts that would allow you to generate forces not only in space, in x, y and z, but also in rotational forces and translational forces. You may also want to add a gripper for grasping capabilities.”

“So you assemble all those components together and you get a device like our sigma.7, which offers seven degrees of freedom,” says Conti. “You have three for translation, three for rotation, and one for grasping. With seven degrees of freedom you can pretty much do any task at that point, and this is what is used to operate the most advanced types of robots.”
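
A seven-degree-of-freedom command of that kind, three translations, three rotations, and one grasp value, can be represented very simply in software, as in the hypothetical sketch below. The class and field names are invented, not Force Dimension's API.

```python
# Hypothetical representation of a 7-DOF haptic command (translation, rotation, grasp).
from dataclasses import dataclass


@dataclass
class HapticCommand7DOF:
    translation: tuple = (0.0, 0.0, 0.0)    # x, y, z in meters
    rotation_rpy: tuple = (0.0, 0.0, 0.0)   # roll, pitch, yaw in radians
    gripper: float = 0.0                    # 0.0 = open, 1.0 = fully closed

    def as_vector(self):
        return list(self.translation) + list(self.rotation_rpy) + [self.gripper]


if __name__ == "__main__":
    cmd = HapticCommand7DOF(translation=(0.10, 0.0, 0.25),
                            rotation_rpy=(0.0, 0.3, 0.0),
                            gripper=0.8)
    print(cmd.as_vector())  # seven numbers: enough to position, orient, and grasp
```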

Semi-Autonomy
Surgical robotics was an early adopter in the professional service robotics sector. As others have noted, it gets trickier in personal service robotics where the environments are less structured. Conti explains how haptics helps bridge the gap until full autonomy is a reality.

“When you bring robots into the medical field, it’s a very well-controlled environment. It’s the doctor or surgeon who controls those robots. The robots are not autonomous.”

“What we’re trying to do in (personal) service robotics is make the robots as autonomous as possible,” says Conti. “As soon as you move out of a controlled environment, you add a lot of uncertainties. The algorithms today remain limited and that’s why a lot of those robotic applications are still limited to labs or inside companies where they can control all the uncertainties.”

“Artificial intelligence just isn’t there yet. Until there are really big breakthroughs in AI, it will be some time until we see robots running through the streets.”

In the Stanford AI Lab, researchers are working with humanoid robots such as the PR2 of Willow Garage fame and Honda’s ASIMO to experiment with different service scenarios. You may recall the virally celebrated PR2 coffee run. Here’s the video in case you missed it, while this article does a nice job of noting some of the technology behind the demo.

Haptic-enabled humanoid robot for underwater exploration uses hybrid actuation (Courtesy of Stanford University)

Under the Sea, and Above
Perhaps more practical and in closer reach than a robot replacing an intern for that next latte fix is a semi-autonomous manipulator exploring the depths of our waterways. The Red Sea Robotics Research Exploratorium is an ongoing project at Stanford in collaboration with King Abdullah University of Science and Technology (KAUST).

“We’re currently designing an underwater robot in partnership with a company recently acquired by Google called Meka Robotics to go underwater and perform fine, dexterous tasks for biologists,” explains Conti. “We’re building an underwater humanoid robot that has two arms and we’re controlling it with two haptic devices. You’re looking at a huge screen with 3D glasses, and now you’re controlling the hands of this robot as if you were underwater.”

“You can feel and interact with the environment,” says Conti. “So you can really control something that’s at a remote location. We could have robots working in mines, in dangerous areas like Fukushima, or on a volcano. There might be some local intelligence on the robot, but really the knowledge will come from the operator.”

Haptics is also providing force feedback in industrial applications. Conti describes an example.

“In big foundries where you create huge metal parts for maybe creating bridges or ships, you basically create a mold made out of sand and you pour in hot metal to make a casting. Once you separate the mold, the part is far from perfect. There are many surface defects that need to be smoothed out. But these parts are huge and can weigh 20 to 50 tons.”

Conti and his Swiss colleagues tapped their expertise in surgical haptics and merely scaled it for the foundry.

“You’re sitting behind a glass window and you’re holding one of these haptic devices, but instead of controlling a surgical instrument, you’re controlling this giant robot with a powerful milling machine at the end,” explains Conti. “So you’re using the milling machine to clean up the part, and at the same time you’re feeling the forces through the robot. You’re feeling forces that are sometimes thousands of pounds, but you scale them down to human scale. Now, 8 hours a day they’re using those devices in foundries to work on these really heavy parts.”
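
The scaling Conti describes can be illustrated with a small sketch: the operator's hand motion is multiplied up to robot scale, while the huge forces measured at the milling tool are divided down, and clamped, to something a human hand can safely feel. The gains and limits below are invented, not the actual controller.

```python
# Toy sketch of motion up-scaling and force down-scaling in a teleoperated foundry robot.
MOTION_SCALE = 20.0        # 1 cm at the haptic device -> 20 cm at the robot
FORCE_SCALE = 1.0 / 500.0  # 500 N at the tool -> 1 N at the operator's hand
FORCE_LIMIT = 12.0         # never command more than ~12 N back to the hand


def robot_motion_command(device_displacement_m):
    """Scale operator motion up to the robot's workspace."""
    return [MOTION_SCALE * d for d in device_displacement_m]


def operator_feedback_force(tool_force_n):
    """Scale measured tool forces down to hand scale, with a safety clamp."""
    scaled = [FORCE_SCALE * f for f in tool_force_n]
    return [max(-FORCE_LIMIT, min(FORCE_LIMIT, f)) for f in scaled]


if __name__ == "__main__":
    print(robot_motion_command([0.01, 0.0, -0.005]))       # meters at the device
    print(operator_feedback_force([4000.0, 0.0, -900.0]))  # newtons at the tool
```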

Conti says when he first presented a demonstration of this application at an industry conference last year, the audience was astounded.

“For some this was science fiction, driving this giant robot and being able to feel what it’s touching. But for us, it was pretty straightforward coming from robotic surgery.”

Hybrid Actuation
Another area of development destined to significantly impact both industrial and service robotics is new hybrid actuation. We touched upon this in last month’s article on collaborative robots where we featured research underway with the SAPHARI program and variable impedance actuation. Stanford researchers are also exploring alternative forms of actuation and the algorithms to control them, to not only make robots safer around humans, but also more energy efficient.

Schematic of hybrid actuation system using series elastic brake actuator for haptic operation (Courtesy of Stanford University)

“Hybrid actuation – you’re going to find this term coming up in robotics a lot,” says Conti. “It’s one of the technologies being integrated in our next generation of haptic devices.”

He describes the advantages of series elastic brake actuation.

“Electrical motors have been primarily designed to spin. You turn on a hair dryer or a mixer, and the motors spin really fast to produce mechanical energy. But robots aren’t always moving really fast. Such situations occur when a robot is holding a heavy load, for instance. As high currents are draining through the motors to produce high static forces, all the electrical energy is converted into heat instead.”

“Using small brakes, springs and small motors, we can produce actuators that combine these different technologies to produce much higher forces for much less energy,” explains Conti. “We can also make those devices more compact, so when you move the haptic devices, they feel much lighter. In robotics we have the same problem. We want to produce those forces and motion in ways that are more efficient, but that don’t require as much energy. That’s where hybrid actuation is coming into play and we’re exploring different ways of creating those forces.”

This is the same idea as the series elastic actuators in Rethink Robotics’ cobot. The flex in Baxter’s joints comes from the spring in its SEAs.

“You’re building a robot that’s operating in an environment where there is more uncertainty,” says Conti. “You can’t have a very stiff robot without any force sensing capabilities. You need a robot that can feel and comply with its environment. That’s what this Rethink robot can do. This is where hybrid actuation has come into place, because those robots now have the capability of finally controlling the forces.”

“This is why I say robotics is merging with haptics. Robots are now machines that can sense and feel the environment.”
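
The series elastic idea mentioned above reduces to a simple relationship: the torque delivered to the joint is the spring stiffness times the measured deflection between motor and joint, and the motor is servoed to hold the deflection that yields the desired torque. The sketch below illustrates this with invented numbers; it is not Rethink's or Stanford's controller.

```python
# Minimal series elastic actuator (SEA) torque-control sketch with invented constants.
SPRING_K = 350.0  # N*m/rad, an invented spring stiffness


def measured_torque(motor_angle, joint_angle):
    """Torque transmitted through the spring, from its measured deflection."""
    return SPRING_K * (motor_angle - joint_angle)


def motor_velocity_command(desired_torque, motor_angle, joint_angle, gain=5.0):
    """Simple proportional loop: wind the spring up or down toward the target torque."""
    torque_error = desired_torque - measured_torque(motor_angle, joint_angle)
    return gain * torque_error  # rad/s sent to the motor's velocity controller


if __name__ == "__main__":
    # Spring deflected by 0.02 rad -> 7 N*m measured; we want 10 N*m at the joint.
    print("measured torque:", measured_torque(0.52, 0.50))
    print("motor velocity command:", motor_velocity_command(10.0, 0.52, 0.50))
```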

Technologies enabling robots to perceive, manipulate and feel their environment are all leading toward greater autonomy. Nowhere will this leap be felt more profoundly than by persons with acute and chronic physical disabilities hoping to lead more independent lives.

Assistive and Rehabilitation Robotics
One of the foremost researchers in assistive and rehabilitation robotics is Dr. Rory Cooper, Founding Director of the Human Engineering Research Laboratories (HERL), a VA Rehabilitation Research and Development Center of Excellence in partnership with the University of Pittsburgh in Pennsylvania. He is also a Distinguished Professor of the university’s Department of Rehabilitation Science and Technology, and a professor of Bioengineering, Mechanical Engineering, Physical Medicine & Rehab, and Orthopedic Surgery.

Cooper was recently distinguished with the highest honor in the robotics industry, the Engelberger Robotics Award, for his career-spanning contributions to the field.

As a U.S. Army veteran with a spinal cord injury, Cooper knows firsthand how advancements in assistive technology devices are vital to wheelchair users. He has dedicated his time, talents and unique perspective to furthering technologies that not only enable robots to achieve high levels of autonomy, but more importantly, allow people to live more autonomously mobile lives.

This video produced by a local television station provides an overview of HERL research.

Cooper, who holds a doctorate in electrical and computer engineering from the University of California at Santa Barbara, says robotics is where many of the future breakthroughs in assistive technology will emerge. “Intelligent systems and devices will eventually be able to permit people with disabilities to perform, or at least direct, many daily living tasks independently.”

He says HERL became involved in developing algorithms to improve control of electric powered wheelchairs (EPW) early after digital controllers became available.

“Our work focuses on allowing as many people as possible to have independent mobility,” says Cooper. “HERL contributed to such advances as bias axis for interfaces, virtual templates, super quadratic templates, gain memory control, virtual reality assessment of driving, and tuning control parameters. These tools are embedded in many of the EPW controllers in use around the world.”

Additional advancements include the SMARTWheel, which measures and transmits key biomechanical parameters of wheelchair propulsion. It’s used by laboratories around the world to optimize mobility and reduce ergonomic barriers. The Virtual Seating Coach uses machine learning and adaptation to help people at risk of further injury due to their inability to position themselves.

Experimental electric powered wheelchair with robotic arms and optional haptic teleoperation (Courtesy of Human Engineering Research Laboratories)

Under development is the PerMMA electric powered wheelchair (pictured), which has 32 degrees of freedom and uses JACO arms by Kinova mounted on a custom carriage and track of HERL’s design. The PerMMA, or Personal Mobility and Manipulation Appliance, runs the open-source Robot Operating System (ROS) alongside a real-time OS to integrate Internet of Things capabilities in the device.

“We’re in the process of completing the third-generation unit,” says Cooper. “It has a vision system. It can ‘see’ objects and do path planning.”

Autonomy for Independence
You may have noticed the uniformed researcher operating the haptic device in the background.

“One of the options is to provide remote operation, so you can have a remote caregiver or assistant, and that’s a way to overcome some of the barriers of autonomy,” explains Cooper. “There are so many degrees of freedom. It can be more efficient to basically call for a friend to assist with device manipulation.”

“The other thing that we’re really working on is this concept of sliding autonomy,” says Cooper. “When does the user want to do the task by themselves, where they direct the entire steps of the task, and when do they want the robot to operate autonomously? And where do you go in between? Our goal is to have the system open-ended enough, so that the user can make that decision on a case-by-case basis.”

“It’s one thing to do it in the lab. It’s another thing to do it in the real world. And of course, the autonomy part is the biggest challenge.”
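
A minimal sketch of that sliding-autonomy idea might look like the following, where the user chooses, per task, whether each step is commanded manually, confirmed step by step, or executed autonomously. The names and levels are invented, not HERL's software.

```python
# Invented sliding-autonomy sketch: the user picks how much the robot does on its own.
AUTONOMY_LEVELS = ("manual", "assisted", "autonomous")


def execute_task(task_steps, autonomy, confirm_step):
    """confirm_step(step) asks the user; in 'assisted' mode each step needs approval."""
    if autonomy not in AUTONOMY_LEVELS:
        raise ValueError(f"unknown autonomy level: {autonomy}")
    for step in task_steps:
        if autonomy == "manual":
            print("Waiting for the user to command:", step)
        elif autonomy == "assisted" and not confirm_step(step):
            print("User skipped:", step)
        else:
            print("Robot executes:", step)


if __name__ == "__main__":
    steps = ["reach to cup", "grasp cup", "bring cup to user"]
    # The user approves every step except the final hand-off, on a case-by-case basis.
    execute_task(steps, "assisted", confirm_step=lambda s: s != "bring cup to user")
```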

A spin-off of PerMMA research is the MeBot, or Mobility Enhanced Robotic Wheelchair. Designed to seamlessly negotiate a wide range of indoor and outdoor terrains, the MeBot ascends and descends 8-inch curbs, self-levels the seat on slopes, and crosses soft terrain (sand, gravel, grass). The third-generation lab prototype is expected in August 2014.

Cooper says the development process takes time. “We’re trying to build a robot that people are going to sit in and use 16 hours a day, 365 days a year, for up to 5 years in any environment, and with a budget considerably smaller than NASA sending something to Mars.”

“The ideal product would be a combination of MeBot and PerMMA,” says Cooper. “It would go anywhere and manipulate anything. My long-term goal is to go anywhere that anyone else could go, and do anything anybody else could do, and do it in the same amount of time.”

Ever since he was paralyzed in a cycling accident more than 30 years ago, it’s been his dream to develop a more agile wheelchair. Cooper is not alone in taking this vision personally. He says many of the students and researchers he works with at HERL are also wheelchair users with their own aspirations for greater independence and a better quality of life.

This video profiling the ongoing development of the PerMMA robotic platform was produced by some of those students.

For these researchers, autonomy can’t come fast enough.
