Trust Me, I'm a Robot
In all the Singularity-based angst over whether robots are going to take over, few have considered the human qualities that might allow our silicon cousins to prevail. Specifically, will robots gull us into doing something stupid or dangerous because we trust them too much?
PCMag discussed this recently with robotics expert Dr. Ayanna Howard after her keynote at the first IEEE Multi-Robot Systems (MRS) conference. Dr. Howard spent 12 years at NASA JPL as a Senior Robotics Researcher but is now the founder and CTO of Zyrobotics, which creates advanced technology to assist children living with disabilities. She's also director of the Human-Automation Systems Lab (HumAnS) at the Georgia Institute of Technology.
Here are edited and condensed excerpts from our conversation.
First, can you talk about your work at NASA on future Mars missions?
At NASA, I was looking at how rovers can mimic behaviors of the scientists, especially geologists, and that's when I started thinking about the whole human-in-the-loop scenario, knowing that humans have a lot to offer in the Human-Robot Interaction field. Back then, though, I considered the human primarily as the input factor from which the system could learn.
So you wanted the robot to have some measure of autonomy as opposed to Houston controlling it, saying "turn left, please"?
Right, but by mimicking human behavior.
In fact, when I came to Georgia Tech, my first project was the SnoMotes, funded by NASA, which focused on developing autonomous-exploration robots that could work in ice environments, collecting data in ways similar to how a scientist would get it, in an area where we couldn't send human geologists.
So you went wider with your robotic application work.
Yes. I was looking for what I wanted to do next, and I was interested in developing ideas that could benefit a wider human population, in everyday situations. At Georgia Tech, as Director of the Human-Automation Systems (HumAnS) Lab, we draw on the disciplines of robotics, cognitive sensing, machine learning, computational intelligence, and human-robot interaction, carrying out research around the concept of humanized intelligence and the process of embedding human cognitive capability into the control path of autonomous systems.
Sounds like you're coming at this field from the engineering perspective, as opposed to a psychological, or design, aspect.
Yes, I'm a hybrid: a classical engineer by training and a computer scientist by trade. So I examine problems through systems thinking. I came to robotics with the groundwork of looking at robotics as a system of components, with the human as one of those components whose inputs need to be tweaked, with well-defined parameters. For instance, in my work, "emotions" provide a valuable input into the robot, and vice versa. Within this systems thinking framework, human feedback influences robot behavior, robot behavior influences human behavior, and so on.
Which robots do you use in your research studies?
We use several, including the Darwin-OP Humanoid Research Robot, Nao, and the Robotis Darwin Mini.
Do you program them using the ROS middleware?
Yes, we use a bunch of software, including ROS; TensorFlow machine intelligence to process the data; and the Microsoft Emotion API, which we add to with additional tools on top.
Can you give us some examples of your work?
Once I'd transitioned to Georgia Tech, I started to pivot my research into robots within healthcare. There are currently 150 million children living with disabilities worldwide and, in the US alone, the pediatric rehabilitation industry is worth $1.6 billion, so it's a significant addressable market.
We have run studies with children with cerebral palsy, using a robot to extend their physical therapy into the home environment to improve outcomes. In a co-authored paper, which appeared in the journal Applied Bionics and Biomechanics, we show how robots are effective when integrated into therapy instruction for upper-arm rehabilitation.
These children are often in acute pain, but in your studies, you found they were willing to go the extra mile for the robot.
Sometimes children don't always want to do an extensive exercise task for their human carer because it's physically uncomfortable. But we found that children want to cooperate with the robot. Also, the robot doesn't get tired or lose patience with the child, and that helps with maintaining longer-term interactions. We quantified our results of upper-arm exercises involving adduction and abduction and lateral and medial movements, observed in therapeutic situations by the robot, using a variety of computer vision techniques including Motion History Imaging (MHI), edge detection, and Random Sample Consensus (RANSAC). The results showed improvement when doing these physical rehabilitation exercises with the robot as aide. Also, through recording their emotional state, we could see their facial expressions correlated with a happy state when interacting with the robot.
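As a rough illustration of the RANSAC technique mentioned here, the sketch below fits a dominant 2D line (for example, a limb's direction among tracked points) while ignoring outliers. The data, threshold, and function names are hypothetical, invented for this example, not taken from the lab's studies.

```python
import numpy as np

def ransac_line(points, iters=200, threshold=0.05, seed=0):
    """Fit a 2D line y = m*x + b to noisy points with RANSAC.

    Repeatedly samples two points, fits a candidate line, and keeps
    the model with the most inliers (points within `threshold` of it),
    then refits by least squares on those inliers.
    """
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # this simple sketch skips vertical candidates
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # perpendicular distance from every point to the candidate line
        d = np.abs(m * points[:, 0] - points[:, 1] + b) / np.hypot(m, 1.0)
        inliers = d < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    return np.polyfit(x, y, 1)  # (slope, intercept)

# Synthetic "trajectory": points on y = 2x + 1 with a few gross outliers
xs = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([xs, 2 * xs + 1])
pts[::10, 1] += 5.0  # corrupt every tenth point
m, b = ransac_line(pts)
```

Despite the corrupted points, the recovered slope and intercept land back on the underlying line, which is the whole appeal of RANSAC over a plain least-squares fit.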
Can you now talk about the robots dancing as another example of your human-in-the-loop systems thinking?
Children like to share what they know. So, in another one of our research studies on improving physical and cognitive abilities for children with disabilities, we asked them to instruct the robot to play Angry Birds on a tablet [video below]. In the game, if the building toppled and points increased, the robot's eyes lit up, it emitted a happy sound, and it did a little dance. This was to provide positive feedback to the child. We found that we could extend the length of the interaction by providing appropriate robot feedback.
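The feedback rule described here reduces to a simple condition. This toy sketch captures that logic; the action names are hypothetical placeholders, and the real robot's behaviors are obviously richer than three strings.

```python
def robot_feedback(prev_score, new_score, building_toppled):
    """Pick positive-feedback actions after a turn of the game.

    Celebrate only when the child's shot worked: the building
    fell and the score went up.
    """
    if building_toppled and new_score > prev_score:
        return ["light_eyes", "happy_sound", "dance"]
    return []

success = robot_feedback(100, 150, True)   # successful shot
miss = robot_feedback(100, 100, False)     # nothing happened
```

The point of gating the celebration on an actual score increase is that the reward signal stays honest, which matters when the goal is sustaining the child's engagement over many turns.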
You found that this feedback response increased efficacy.
Yes, if the robot displays appropriate human-like behavior, such as joy, there is a form of trust developed, and the child is willing to engage longer in the game to teach the robot how to improve their score. So, using our methods for extracting the child's motor performance, including position, arm trajectory, and movement units during active movements, we can show that the robot's animated state, which correlates with the child's behavior, is effective in engaging the child correctly.
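One simple way to quantify a claim like "the robot's animated state correlates with the child's behavior" is a Pearson correlation between the two signals. The numbers below are invented for illustration and are not data from these studies.

```python
import numpy as np

# Hypothetical per-session measurements: how animated the robot's
# feedback was (normalized 0-1) and how many movement units the
# child produced in that session.
robot_animation = np.array([0.1, 0.3, 0.4, 0.6, 0.8, 0.9])
movement_units = np.array([12.0, 15.0, 18.0, 22.0, 27.0, 30.0])

# Pearson correlation: values near +1 mean more animated feedback
# coincided with more child movement.
r = np.corrcoef(robot_animation, movement_units)[0, 1]
```

A strong positive `r` on data like this would support, though not prove, the engagement effect; establishing causation still requires the kind of controlled comparison the studies describe.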
Through your spin-off company, Zyrobotics, you're now taking some of this research into the commercial realm. Can you talk about any specific products?
In our Access4Kids research study [video below], we developed a wireless controller for tablet accessibility to allow people with limited fine motor control to still use the common pinch-and-swipe gestures required for tablet control. This work is now commercially available, licensed to Zyrobotics, as the TabAccess product.
In all your work, you found that trust is an essential part of human-in-the-loop robotics, but it's not always the wisest decision.
Trust is not what people say; it's their actions, what they're doing in any given situation. At Georgia Tech we've done several studies looking at trust. In our study of a simulated fire [video below], the subjects unquestioningly followed the robot, because it was clear they perceived it as an authority figure. And this was even after it made mistakes and led people into a room with no visible exits. We know that teaching the robot to mimic human behavior encourages interaction and builds trust.
But even when the robot doesn't have a clue?
Worryingly so. In our research paper...we presented work that suggested people tend to be overly trusting and overly forgiving of robots in certain situations. Our experiments showed that, at best, human participants in our simulated emergencies focus on guidance provided by robots, regardless of a robot's prior performance or other guidance information, and at worst, believe that the robot is more capable than other sources of information.
You mean, as long as the robot illustrated learning from its mistakes, by doing a "re-think" and a possible reboot, it denoted intelligence, and so people trusted it?
We found that, even when the robots do break trust, a properly timed statement can convince a participant to follow it.
How are you using this work in your lab at Georgia Tech?
Trust is a two-edged sword. We need trust in the healthcare domain to ensure that children are compliant with their exercise goals. We're pushing the trust aspect to help children see the robot as a friend who helps them to improve life outcomes, but we also need to ensure that they do not wholly over-trust the robot. The key is to maximize the rewards while minimizing any potential risk.
So what are roboticists doing about the way humans seem to trust robots?
We're continuing to do research studies on this issue, in order to better prepare the industry when developing autonomous systems, as well as contributing to the growing body of work, and suggested guidelines, at IEEE.
To learn more about Dr. Howard's research, she'll be speaking at the Tigers Advance Distinguished Speaker Series in South Carolina on Feb. 20.
Source: https://sea.pcmag.com/news/19216/trust-me-im-a-robot