Kenneth Shaw
CV | Google Scholar | Twitter
Hi, I'm Kenny. I'm a first-year PhD student (as of Fall 2023) at the Robotics Institute at Carnegie Mellon University, advised by Prof. Deepak Pathak.
My research focuses on anthropomorphic robot hands and dexterous manipulation. How should robot hands be designed to be dexterous and usable in our daily lives? How do we teach robot hands to act human-like through imitation of humans such as in videos? Finally, how do we unlock new dexterous manipulation behavior using simulation and large-scale data?
More broadly I'm interested in the intersection of hardware and data-driven machine learning for robotic systems. How can we design new robots that leverage machine learning to have new behaviors? And how can these data-driven policies inform our robot design?
Previously, I graduated from Georgia Tech with a degree in Computer Engineering, where I worked on multi-agent systems and human-robot interaction with Prof. Sonia Chernova and Prof. Harish Ravichandar.
Please contact me via email at kshaw2 -at- andrew dot cmu dot edu. I'm always looking for new and interesting collaborations!
DEFT: Dexterous Fine-Tuning for Real-World Hand Policies
Aditya Kannan*, Kenneth Shaw*, Shikhar Bahl, Pragna Mannam, Deepak Pathak
CoRL 2023
webpage |
abstract |
bibtex |
CoRL
Dexterity is often seen as a cornerstone of complex manipulation. Humans are able to perform a host of skills with their hands, from making food to operating tools. In this paper, we investigate these challenges, especially in the case of soft, deformable objects as well as complex, relatively long-horizon tasks. However, learning such behaviors from scratch can be data-inefficient. To circumvent this, we propose a novel approach, DEFT (DExterous Fine-Tuning for Hand Policies), that leverages human-driven priors, which are executed directly in the real world. In order to improve upon these priors, DEFT involves an efficient online optimization procedure. With the integration of human-based learning and online fine-tuning, coupled with a soft robotic hand, DEFT demonstrates success across various tasks, establishing a robust, data-efficient pathway toward general dexterous manipulation.
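As a rough illustration of the online fine-tuning idea (not the exact procedure from the paper), a human-provided grasp prior can be refined with a cross-entropy-method style update, where each sample would correspond to a real-world rollout; `rollout_reward` below is a toy stand-in for executing and scoring a parameterized grasp:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_reward(params):
    # Toy stand-in for executing the parameterized grasp on the robot
    # and scoring the outcome; here a simple quadratic objective.
    target = np.array([0.3, -0.1, 0.5])
    return -np.sum((params - target) ** 2)

# Start from a human-derived prior (e.g. an affordance-based grasp).
mean = np.zeros(3)
std = np.ones(3) * 0.5

# Cross-entropy-method style online fine-tuning of the prior:
# sample around the current estimate, keep the best rollouts,
# and refit the sampling distribution to those elites.
for _ in range(20):
    samples = rng.normal(mean, std, size=(32, 3))
    rewards = np.array([rollout_reward(s) for s in samples])
    elites = samples[np.argsort(rewards)[-8:]]
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
```

On a real system each sample is expensive (one physical rollout), which is why starting from a strong human prior matters for data efficiency.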
@article{kannan2023deft,
title={DEFT: Dexterous Fine-Tuning for Real-World Hand Policies},
author={Kannan, Aditya* and Shaw, Kenneth* and Bahl, Shikhar and Mannam, Pragna and Pathak, Deepak},
journal={CoRL},
year={2023}
}
Dexterous Functional Grasping
Ananye Agarwal, Shagun Uppal, Kenneth Shaw, Deepak Pathak
CoRL 2023
webpage |
abstract |
bibtex |
arXiv
While there have been significant strides in dexterous manipulation, most of this progress is limited to benchmark tasks like in-hand reorientation, which are of limited utility in the real world. The main benefit of dexterous hands over two-fingered ones is their ability to pick up tools and other objects (including thin ones) and grasp them firmly to apply force. However, this task requires both a complex understanding of functional affordances as well as precise low-level control. While prior work obtains affordances from human data, this approach doesn't scale to low-level control. Similarly, simulation training cannot give the robot an understanding of real-world semantics. In this paper, we aim to combine the best of both worlds to accomplish functional grasping for in-the-wild objects. We use a modular approach. First, affordances are obtained by matching corresponding regions of different objects, and then a low-level policy trained in sim is run to grasp it. We propose a novel application of eigengrasps to reduce the search space of RL using a small amount of human data and find that it leads to more stable and physically realistic motion. We find that the eigengrasp action space beats baselines in simulation, outperforms hardcoded grasping in the real world, and matches or outperforms a trained human teleoperator.
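The eigengrasp idea can be sketched in a few lines: run PCA over recorded hand configurations and let the policy act in the low-dimensional coefficient space instead of commanding every joint. This is only a minimal sketch; the random `demos` array is a stand-in for real recorded grasp data, and the dimensions are assumptions:

```python
import numpy as np

# Stand-in for a dataset of recorded 16-DoF hand joint configurations
# (rows = grasp snapshots); real data would come from human demos.
demos = np.random.default_rng(0).normal(size=(200, 16))

# PCA via SVD on the mean-centered data.
mean = demos.mean(axis=0)
U, S, Vt = np.linalg.svd(demos - mean, full_matrices=False)
k = 3
eigengrasps = Vt[:k]  # top-k principal directions, shape (k, 16)

# An RL policy now outputs only k coefficients instead of 16 joint
# targets, shrinking the search space while staying near human-like poses.
coeffs = np.array([0.5, -0.2, 0.1])
joint_targets = mean + coeffs @ eigengrasps
```

Because every reachable configuration is a blend of human-derived components, the resulting motions tend to stay physically plausible.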
@inproceedings{agarwal2023dexterous,
title={Dexterous Functional Grasping},
author={Agarwal, Ananye and Uppal, Shagun and Shaw, Kenneth and Pathak, Deepak},
booktitle={Conference on Robot Learning},
pages={3453--3467},
year={2023},
organization={PMLR}
}
DASH: A Framework for Designing Anthropomorphic Soft Hands through Interaction
Pragna Mannam*, Kenneth Shaw*, Dominik Bauer, Jean Oh, Deepak Pathak, Nancy Pollard
IEEE Humanoids 2023 Best Oral Paper Award Finalist
webpage |
abstract |
bibtex |
arXiv
Modeling and simulating soft robot hands can aid in design iteration for complex and high degree-of-freedom (DoF) morphologies. This can be further supplemented by iterating on the design based on its performance in real-world manipulation tasks. However, iterating in the real world requires a framework that allows us to test new designs quickly at low cost. In this paper, we present a framework that leverages rapid prototyping of the hand using 3D printing and utilizes teleoperation to evaluate the hand in real-world manipulation tasks. Using this framework, we design a 3D-printed 16-DoF dexterous anthropomorphic soft hand (DASH) and iteratively improve its design over five iterations. Rapid prototyping techniques such as 3D printing allow us to directly evaluate the fabricated hand without modeling it in simulation. We show that the design improves over five design iterations through evaluating the hand's performance in 30 real-world teleoperated manipulation tasks. Testing over 900 demonstrations shows that our final version of DASH can solve 19 of the 30 tasks compared to Allegro, a popular rigid hand on the market, which can only solve 7 tasks. We open-source our CAD models as well as the teleoperated dataset for further study.
@article{mannam2023Dashhand,
title={DASH: A Framework for Designing Anthropomorphic Soft Hands through Interaction},
author={Mannam, Pragna* and Shaw, Kenneth* and Bauer, Dominik and Oh, Jean and Pathak, Deepak and Pollard, Nancy},
journal={IEEE Humanoids},
year={2023}
}
LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning
Kenneth Shaw, Ananye Agarwal, Deepak Pathak
RSS 2023
Start your dexterous manipulation journey here!
webpage |
abstract |
bibtex |
RSS
Dexterous manipulation has been a long-standing challenge in robotics. While machine learning techniques have shown some promise, results have largely been limited to simulation. This can mostly be attributed to the lack of suitable hardware. In this paper, we present LEAP Hand, a low-cost, dexterous, and anthropomorphic hand for machine learning research. In contrast to previous hands, LEAP Hand has a novel kinematic structure that allows maximal dexterity regardless of finger pose. LEAP Hand is low-cost and can be assembled in 4 hours at a cost of 2000 USD from readily available parts. It is capable of consistently exerting large torques over long durations of time. We show that LEAP Hand can be used to perform several manipulation tasks in the real world—from visual teleoperation to learning from passive video data and sim2real. LEAP Hand significantly outperforms its closest competitor Allegro Hand in all our experiments while being 1/8th of the cost. We release the URDF model, 3D CAD files, tuned simulation environment, and a development platform with useful APIs on our website.
@article{shaw2023Leaphand,
title={LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning},
author={Shaw, Kenneth and Agarwal, Ananye and Pathak, Deepak},
journal={RSS},
year={2023}
}
Learning Dexterity from Human Hand Motion in Internet Videos
Kenneth Shaw*, Shikhar Bahl*, Aravind Sivakumar, Aditya Kannan, Deepak Pathak
IJRR 2022 Special Issue
abstract |
bibtex
To build general robotic agents that can operate in many environments, it is often useful for robots to collect experience in the real world. However, unguided experience collection is often not feasible due to safety, time, and hardware restrictions. We thus propose leveraging the next best thing as real world experience: videos of humans using their hands. To utilize these videos, we develop a method that retargets any 1st person or 3rd person video of human hands and arms into the robot hand and arm trajectories. While retargeting is a difficult problem, our key insight is to rely on only internet human hand video to train it. We use this method to present results in two areas: First, we build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. This enables the robot to collect real-world experience safely using supervision. Second, we retarget in-the-wild human internet video into task-conditioned pseudo-robot trajectories to use as artificial robot experience. This learning algorithm leverages action priors from human hand actions, visual features from the images, and physical priors from dynamical systems to pretrain typical human behavior for a particular robot task. We show that by leveraging internet human hand experience, we need fewer robot demonstrations compared to many other methods.
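The core geometric idea behind retargeting human hand video to a robot hand can be illustrated very simply: map human fingertip keypoints (relative to the wrist) to robot fingertip targets. The sketch below is only an illustration under assumed dimensions and a hand-size ratio; the papers above learn this mapping from internet video rather than using a fixed scale:

```python
import numpy as np

# Hypothetical 3-D fingertip keypoints for the five fingers, relative
# to the wrist, as a hand-pose estimator might output them (meters).
human_tips = np.array([
    [0.03, 0.08, 0.02],   # thumb
    [0.02, 0.10, 0.00],   # index
    [0.00, 0.11, 0.00],   # middle
    [-0.02, 0.10, 0.00],  # ring
    [-0.04, 0.09, 0.00],  # pinky
])

# A naive geometric prior: scale the human fingertip vectors by an
# assumed robot-to-human hand-size ratio and treat the result as
# inverse-kinematics targets for the robot fingers.
SIZE_RATIO = 1.25
robot_targets = SIZE_RATIO * human_tips

# Inter-fingertip distances are preserved up to the scale factor,
# which keeps the grasp shape semantically similar.
d_human = np.linalg.norm(human_tips[0] - human_tips[1])
d_robot = np.linalg.norm(robot_targets[0] - robot_targets[1])
```

A learned retargeter additionally enforces temporal smoothness and collision-free trajectories, which this fixed scaling ignores.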
@article{shaw_internetvideos,
title={Learning Dexterity from Human Hand Motion in Internet Videos},
author={Shaw, Kenneth and Bahl, Shikhar and Sivakumar, Aravind and Kannan, Aditya and Pathak, Deepak},
journal={IJRR},
year={2022}
}
VideoDex: Learning Dexterity from Internet Videos
Kenneth Shaw*, Shikhar Bahl*, Deepak Pathak
CoRL 2022
webpage |
abstract |
bibtex |
arXiv |
demo
To build general robotic agents that can operate in many environments, it is often imperative for the robot to collect experience in the real world. However, this is often not feasible due to safety, time, and hardware restrictions. We thus propose leveraging the next best thing as real-world experience: internet videos of humans using their hands. Visual priors, such as visual features, are often learned from videos, but we believe that more information from videos can be utilized as a stronger prior. We build a learning algorithm, VideoDex, that leverages visual, action, and physical priors from human video datasets to guide robot behavior. These action and physical priors in the neural network dictate the typical human behavior for a particular robot task. We test our approach on a robot arm and dexterous hand based system and show strong results on many different manipulation tasks, outperforming various state-of-the-art methods.
@article{shaw_videodex,
title={VideoDex: Learning Dexterity from Internet Videos},
author={Shaw, Kenneth and Bahl, Shikhar and Pathak, Deepak},
journal={CoRL},
year={2022}
}
Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on YouTube
Aravind Sivakumar*, Kenneth Shaw*, Deepak Pathak
RSS 2022
Best Paper Award Finalist in Scaling Robot Learning Workshop
webpage |
abstract |
bibtex |
arXiv |
demo |
in the media
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible and we hope that it can aid robots that learn to act autonomously in the real world.
@article{telekinesis,
title={Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on YouTube},
author={Sivakumar, Aravind and Shaw, Kenneth and Pathak, Deepak},
journal={RSS},
year={2022}
}
Modified version of template from here