Kenneth Shaw

CV | Google Scholar | Twitter

Hi, I'm Kenny. I'm a first-year PhD student (as of Fall 2023) at the Robotics Institute at Carnegie Mellon University, advised by Prof. Deepak Pathak.

My research focuses on anthropomorphic robot hands and dexterous manipulation. How should robot hands be designed so that they are dexterous and usable in our daily lives? How can we teach robot hands to act human-like by imitating humans, for example from videos? Finally, how do we unlock new dexterous manipulation behaviors using simulation and large-scale data?

More broadly, I'm interested in the intersection of hardware and data-driven machine learning for robotic systems. How can we design new robots that leverage machine learning to achieve new behaviors? And how can these data-driven policies, in turn, inform robot design?

Previously, I graduated from Georgia Tech with a degree in Computer Engineering, where I worked on multi-agent systems and human-robot interaction (HRI) with Prof. Sonia Chernova and Prof. Harish Ravichandar. Please contact me via email at kshaw2 -at- andrew dot cmu dot edu. I'm always looking for new and interesting collaborations!


Recent Publications

DEFT: Dexterous Fine-Tuning for Real-World Hand Policies
Aditya Kannan*, Kenneth Shaw*, Shikhar Bahl, Pragna Mannam, Deepak Pathak
CoRL 2023

webpage | abstract | bibtex | CoRL

  @inproceedings{kannan2023deft,
    title={DEFT: Dexterous Fine-Tuning for Real-World Hand Policies},
    author={Kannan, Aditya* and Shaw, Kenneth* and Bahl, Shikhar and Mannam, Pragna and Pathak, Deepak},
    booktitle={Conference on Robot Learning},
    year={2023}
  }

Dexterous Functional Grasping
Ananye Agarwal, Shagun Uppal, Kenneth Shaw, Deepak Pathak
CoRL 2023

webpage | abstract | bibtex | arXiv

@inproceedings{agarwal2023dexterous,
  title={Dexterous Functional Grasping},
  author={Agarwal, Ananye and Uppal, Shagun and Shaw, Kenneth and Pathak, Deepak},
  booktitle={Conference on Robot Learning},
  pages={3453--3467},
  year={2023},
  organization={PMLR}
}

DASH: A Framework for Designing Anthropomorphic Soft Hands through Interaction
Pragna Mannam*, Kenneth Shaw*, Dominik Bauer, Jean Oh, Deepak Pathak, Nancy Pollard
IEEE Humanoids 2023
Best Oral Paper Award Finalist

webpage | abstract | bibtex | arXiv

  @inproceedings{mannam2023Dashhand,
    title={DASH: A Framework for Designing Anthropomorphic Soft Hands through Interaction},
    author={Mannam, Pragna* and Shaw, Kenneth* and Bauer, Dominik and Oh, Jean and Pathak, Deepak and Pollard, Nancy},
    booktitle={IEEE-RAS International Conference on Humanoid Robots (Humanoids)},
    year={2023}
  }

LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning
Kenneth Shaw, Ananye Agarwal, Deepak Pathak
RSS 2023

Start your dexterous manipulation journey here!

webpage | abstract | bibtex | RSS

  @inproceedings{shaw2023Leaphand,
    title={LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning},
    author={Shaw, Kenneth and Agarwal, Ananye and Pathak, Deepak},
    booktitle={Robotics: Science and Systems (RSS)},
    year={2023}
  }

Learning Dexterity from Human Hand Motion in Internet Videos
Kenneth Shaw*, Shikhar Bahl*, Aravind Sivakumar, Aditya Kannan, Deepak Pathak
IJRR 2022 Special Issue

abstract | bibtex

  @article{shaw_internetvideos,
    title={Learning Dexterity from Human Hand Motion in Internet Videos},
    author={Shaw, Kenneth and Bahl, Shikhar and Sivakumar, Aravind and Kannan, Aditya and Pathak, Deepak},
    journal={International Journal of Robotics Research (IJRR)},
    year={2022}
  }

VideoDex: Learning Dexterity from Internet Videos
Kenneth Shaw*, Shikhar Bahl*, Deepak Pathak
CoRL 2022

webpage | abstract | bibtex | arXiv | demo

  @inproceedings{shaw_videodex,
    title={VideoDex: Learning Dexterity from Internet Videos},
    author={Shaw, Kenneth and Bahl, Shikhar and Pathak, Deepak},
    booktitle={Conference on Robot Learning},
    year={2022}
  }

Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube
Aravind Sivakumar*, Kenneth Shaw*, Deepak Pathak
RSS 2022
Best Paper Award Finalist at the Scaling Robot Learning Workshop

webpage | abstract | bibtex | arXiv | demo | in the media

We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible and we hope that it can aid robots that learn to act autonomously in the real world.
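The retargeting step described in this abstract can be pictured, very roughly, as a per-frame optimization: solve for robot joint angles whose fingertips track the detected human fingertips, with a penalty that keeps each frame's solution close to the previous one so the trajectory stays temporally smooth. The Python sketch below is only a minimal illustration of that general recipe, not the system from the paper: robot_fingertips stands in for a real forward-kinematics model, and all names, shapes, and weights are hypothetical.

  # Minimal sketch of per-frame kinematic retargeting (illustrative only,
  # not the paper's actual pipeline). Joint angles are fit so that stand-in
  # robot fingertip positions match "detected" human fingertips, with a
  # smoothness penalty tying each frame's solution to the previous one.
  import numpy as np
  from scipy.optimize import minimize

  def robot_fingertips(q):
      # Hypothetical forward kinematics: 12 joint angles -> 4 fingertip xyz.
      # A real system would evaluate the robot hand's kinematic model here.
      return np.tanh(q).reshape(4, 3)

  def retarget(human_tips, q_prev, smooth_weight=0.1):
      # Cost = fingertip tracking error + temporal smoothness penalty.
      def cost(q):
          tip_err = np.sum((robot_fingertips(q) - human_tips) ** 2)
          smooth = smooth_weight * np.sum((q - q_prev) ** 2)
          return tip_err + smooth
      return minimize(cost, q_prev, method="L-BFGS-B").x

  # Usage: warm-start each frame from the last solution for a smooth stream.
  q = np.zeros(12)
  rng = np.random.default_rng(0)
  for human_tips in rng.uniform(-1.0, 1.0, size=(5, 4, 3)):  # fake detections
      q = retarget(human_tips, q)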

  @inproceedings{telekinesis,
    title={Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube},
    author={Sivakumar, Aravind and Shaw, Kenneth and Pathak, Deepak},
    booktitle={Robotics: Science and Systems (RSS)},
    year={2022}
  }


Modified version of template from here