Kenneth Shaw
CV |
Google Scholar |
Twitter |
Formal Bio |
Hi, I'm Kenny. I'm a final-year PhD student at the Robotics Institute at Carnegie Mellon University,
advised by Prof. Deepak Pathak.
My research centers on dexterous manipulation. I have designed several low-cost, highly capable dexterous robotic hands aimed at making manipulation research and education more accessible, and I develop dexterous AI policies for these hands by leveraging human demonstrations from internet videos, teleoperation, and simulation.
More broadly, I'm interested in how we can create new, democratized robotic hardware systems whose capabilities are unlocked by machine learning. How does the design of a robot's hardware shape the way it learns, and how should what it learns shape the way the hardware is designed?
Previously, I graduated from Georgia Tech with a degree in Computer Engineering, where I worked on multi-agent systems and human-robot interaction (HRI) with Prof. Sonia Chernova and Prof. Harish Ravichandar.
Please contact me via email at kshaw2 -at- andrew dot cmu dot edu .
|
|
IFG: Internet-Scale Functional Grasping
Ray Muxin Liu*,
Mingxuan Li*,
Kenneth Shaw,
Deepak Pathak
In Submission
website |
abstract |
bibtex |
arXiv
Large Vision Models trained on internet-scale data have demonstrated strong capabilities in segmenting and semantically understanding object parts, even in cluttered, crowded scenes. However, while these models can direct a robot toward the general region of an object, they lack the geometric understanding required to precisely control dexterous robotic hands for 3D grasping. To overcome this, our key insight is to leverage simulation with a force-closure grasp generation pipeline that captures local hand–object geometries. Because this pipeline is computationally slow and requires ground-truth observations, the resulting data is distilled into a diffusion model that operates in real time on camera point clouds. By combining the global semantic understanding of internet-scale models with the geometric precision of simulation-based local grasp optimization, IFG achieves high-performance functional grasping without any manually collected training data.
In Submission
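To make the distillation step concrete, here is a minimal sketch of how offline force-closure grasps could supervise a point-cloud-conditioned diffusion model. This is an illustrative assumption, not the IFG implementation: the encoder, grasp dimensionality, and DDPM-style objective are stand-ins.
# Minimal sketch (assumed, not the released IFG code): distilling grasps from a
# slow force-closure pipeline into a point-cloud-conditioned diffusion model.
import torch
import torch.nn as nn

class GraspDenoiser(nn.Module):
    """Predicts the noise added to a grasp pose, conditioned on a point cloud."""
    def __init__(self, grasp_dim=22, pc_feat_dim=256):  # grasp_dim is a placeholder
        super().__init__()
        self.pc_encoder = nn.Sequential(  # stand-in for a PointNet-style encoder
            nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, pc_feat_dim))
        self.denoiser = nn.Sequential(
            nn.Linear(grasp_dim + pc_feat_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, grasp_dim))

    def forward(self, noisy_grasp, point_cloud, t):
        pc_feat = self.pc_encoder(point_cloud).max(dim=1).values  # (B, pc_feat_dim)
        t_emb = t.float().unsqueeze(-1) / 1000.0
        return self.denoiser(torch.cat([noisy_grasp, pc_feat, t_emb], dim=-1))

def distillation_loss(model, grasp, point_cloud, alphas_cumprod):
    """Standard DDPM noise-prediction loss on grasps produced by the slow simulator."""
    t = torch.randint(0, len(alphas_cumprod), (grasp.shape[0],))
    noise = torch.randn_like(grasp)
    a = alphas_cumprod[t].unsqueeze(-1)
    noisy = a.sqrt() * grasp + (1 - a).sqrt() * noise
    return nn.functional.mse_loss(model(noisy, point_cloud, t), noise)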
|
|
|
Deep Reactive Policy: Learning Reactive Manipulator Motion Planning for Dynamic Environments
Jiahui Yang*,
Jason Jingzhou Liu*,
Yulong Li,
Youssef Khaky,
Kenneth Shaw,
Deepak Pathak
CoRL 2025
website |
abstract |
bibtex |
arXiv
Generating collision-free motion in dynamic, partially observable environments is a fundamental challenge
for robotic manipulators. Classical motion planners can compute globally optimal trajectories but require
full environment knowledge and are typically too slow for dynamic scenes. Neural motion policies offer a
promising alternative by operating in closed-loop directly on raw sensory inputs but often struggle to generalize
in complex or dynamic settings. We propose Deep Reactive Policy (DRP), a visuo-motor neural motion policy designed
for reactive motion generation in diverse dynamic environments, operating directly on point cloud sensory input.
At its core is IMPACT, a transformer-based neural motion policy pretrained on 10 million generated expert trajectories
across diverse simulation scenarios. We further improve IMPACT's static obstacle avoidance through iterative
student-teacher finetuning. We additionally enhance the policy's dynamic obstacle avoidance at inference time
using DCP-RMP, a locally reactive goal-proposal module. We evaluate DRP on challenging tasks featuring cluttered
scenes, dynamic moving obstacles, and goal obstructions. DRP achieves strong generalization, outperforming prior
classical and neural methods in success rate across both simulated and real-world settings. We will release the
dataset, simulation environments, and trained models upon acceptance.
@article{yang2025deep,
title={Deep Reactive Policy: Learning Reactive Manipulator
Motion Planning for Dynamic Environments},
author={Jiahui Yang and Jason Jingzhou Liu and
Yulong Li and Youssef Khaky and
Kenneth Shaw and Deepak Pathak},
journal={9th Annual Conference on Robot Learning},
year={2025},
}
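As a rough illustration of the closed-loop structure described above (not the released DRP code), the sketch below shows a policy queried on point clouds at every control step, with a locally reactive goal-proposal step that detours around nearby dynamic obstacles; the robot and camera interfaces are assumed.
# Rough sketch of the closed loop (assumed interfaces, not the released DRP code).
import time
import numpy as np

def reactive_goal(goal, obstacle_points, ee_pos, margin=0.15):
    """Detour the commanded goal away from a nearby dynamic obstacle."""
    if len(obstacle_points) == 0:
        return goal
    dists = np.linalg.norm(obstacle_points - ee_pos, axis=1)
    if dists.min() < margin:
        away = ee_pos - obstacle_points[dists.argmin()]
        return goal + margin * away / (np.linalg.norm(away) + 1e-8)
    return goal

def control_loop(policy, robot, camera, goal, hz=30):
    """policy(point_cloud, joint_positions, goal) -> joint target (assumed signature)."""
    while not robot.reached(goal):
        pc = camera.point_cloud()
        local_goal = reactive_goal(goal, robot.dynamic_points(pc), robot.ee_position())
        robot.command_joints(policy(pc, robot.joint_positions(), local_goal))
        time.sleep(1.0 / hz)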
|
|
|
LEAP Hand V2 Advanced: Dexterous, Low-cost Hybrid Rigid-Soft Hand for Robot Learning
Kenneth Shaw, Deepak Pathak
IEEE Humanoids 2025
webpage |
abstract |
bibtex |
IEEE Humanoids
The human hand is a remarkable feat of biology, with the ability to handle intricate tools with great precision and strength yet softly handle delicate objects. Robot hands attempting to emulate this have often fallen into one of two categories: soft or rigid. Soft hands, while compliant and yielding, lack the precision and strength of human hands. Conversely, rigid hands are brittle to bumps and do not conform naturally to their environment. We call our solution LEAP Hand v2 Adv, a dexterous, $3000, simple anthropomorphic hybrid rigid-soft hand that bridges this gap. First, it achieves a balance of human-hand-like softness and stiffness via a 3D-printed soft exterior combined with a 3D-printed internal bone structure. Next, LEAP Hand v2 Adv incorporates two powered articulations in the foldable palm: one spanning the four fingers and another near the thumb, mimicking the essential palm flexibility for human-like grasping. Lastly, LEAP Hand v2 Adv boasts a dexterous metacarpophalangeal (MCP) kinematic structure, making it highly human-like, easy to assemble, and versatile. Through thorough real-world experiments, we show that LEAP Hand v2 Adv exceeds the capabilities of many existing robot hands for grasping, teleoperated control, and imitation learning. We release 3D printer files and assembly instructions for the dexterous hand research community to use on our website.
@inproceedings{shaw2025leapv2adv,
title={LEAP Hand V2 Advanced: Dexterous, Low-cost Hybrid Rigid-Soft Hand for Robot Learning},
author={Shaw, Kenneth and Pathak, Deepak},
booktitle={2025 IEEE-RAS International Conference on Humanoid Robots (Humanoids)},
year={2025}
}
|
|
|
DexWild: Dexterous Human Interactions for In-the-Wild Robot Policies
Tony Tao*, Mohan Kumar Srirama*, Jason Jingzhou Liu, Kenneth Shaw, Deepak Pathak
RSS 2025
Best Paper Award at EgoAct Workshop 2025
website |
abstract |
bibtex |
arXiv
Large-scale, diverse robot datasets have emerged as a promising path toward enabling dexterous manipulation policies to generalize to novel environments, but acquiring such datasets presents many challenges. While teleoperation provides high-fidelity datasets, its high cost limits its scalability. Instead, what if people could use their own hands, just as they do in everyday life, to collect data? In DexWild, a diverse team of data collectors uses their hands to collect hours of interactions across a multitude of environments and objects. To record this data, we create DexWild-System, a low-cost, mobile, and easy-to-use device. The DexWild learning framework co-trains on both human and robot demonstrations, leading to improved performance compared to training on each dataset individually. This combination results in robust robot policies capable of generalizing to novel environments, tasks, and embodiments with minimal additional robot-specific data. Experimental results demonstrate that DexWild significantly improves performance, achieving a 68.5% success rate in unseen environments (nearly four times higher than policies trained with robot data only) and offering 5.8x better cross-embodiment generalization.
@article{tao2025dexwild,
title={DexWild: Dexterous Human Interactions for In-the-Wild Robot Policies},
author={Tao, Tony and Srirama, Mohan Kumar and Liu, Jason Jingzhou and Shaw, Kenneth and Pathak, Deepak},
journal={Robotics: Science and Systems (RSS)},
year={2025},
}
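The co-training recipe can be illustrated with a small sketch: each training batch mixes the large human-collected dataset with the smaller robot dataset at a fixed ratio. The function and ratio below are assumptions for illustration, not the DexWild code.
# Minimal sketch (an assumption, not the DexWild release): mixed human/robot batches.
import random

def cotrain_batches(human_data, robot_data, batch_size=64, robot_frac=0.25, steps=10000):
    """human_data / robot_data: lists of (observation, action) pairs mapped to a
    shared action space; each yielded batch mixes both at a fixed ratio."""
    n_robot = int(batch_size * robot_frac)
    n_human = batch_size - n_robot
    for _ in range(steps):
        batch = random.choices(robot_data, k=n_robot) + random.choices(human_data, k=n_human)
        random.shuffle(batch)
        yield batch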
|
|
|
FACTR: Force-Attending Curriculum Training
for Contact-Rich Policy Learning
Jason Jingzhou Liu*, Yulong Li*, Kenneth Shaw, Tony Tao, Ruslan Salakhutdinov, Deepak Pathak
RSS 2025
website |
abstract |
bibtex |
arXiv |
code
Many contact-rich tasks humans perform, such as picking up a box or rolling dough, rely on force feedback for reliable execution. However, this force information, which is readily available in most robot arms, is not commonly used in teleoperation and policy learning. Consequently, robot behavior is often limited to quasi-static kinematic tasks that do not require intricate force feedback. In this paper, we first present a low-cost, intuitive, bilateral teleoperation setup that relays external forces on the follower arm back to the teacher arm, facilitating data collection for complex, contact-rich tasks. We then introduce FACTR, a policy learning method that employs a curriculum that corrupts the visual input with decreasing intensity throughout training. The curriculum prevents our transformer-based policy from overfitting to the visual input and guides the policy to properly attend to the force modality. We demonstrate that by fully utilizing the force information, our method significantly improves generalization to unseen objects by 43% compared to baseline approaches without a curriculum.
@article{liu2025factr,
title={FACTR: Force-Attending Curriculum Training for
Contact-Rich Policy Learning},
author={Jason Jingzhou Liu and Yulong Li and Kenneth Shaw
and Tony Tao and Ruslan Salakhutdinov and Deepak Pathak},
journal={arXiv preprint arXiv:2502.17432},
year={2025},
}
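A minimal sketch of the curriculum idea, assuming a simple linearly decaying Gaussian corruption (the released FACTR code may differ): visual features are noised heavily early in training while the force input is left clean, so the policy learns to attend to force.
# Minimal sketch of the curriculum (assumed schedule, not the released FACTR code).
import torch

def corruption_scale(step, total_steps, max_sigma=1.0):
    """Noise scale decays linearly to zero over training."""
    return max_sigma * max(0.0, 1.0 - step / total_steps)

def corrupt_visual(visual_tokens, step, total_steps):
    """Gaussian-corrupt visual features only; the force input is never corrupted."""
    sigma = corruption_scale(step, total_steps)
    return visual_tokens + sigma * torch.randn_like(visual_tokens)

# Inside the training loop (sketch):
#   vis = corrupt_visual(encode_images(batch.images), step, total_steps)
#   frc = encode_forces(batch.forces)   # clean force modality
#   loss = policy_loss(policy(vis, frc), batch.actions)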
|
|
|
Demonstrating LEAP Hand v2: Low-Cost, Easy-to-Assemble, High-Performance Hand for Robot Learning
Kenneth Shaw, Deepak Pathak
RSS 2025
webpage |
abstract |
bibtex |
RSS
Replicating human-like dexterity in robotic hands has been a long-standing challenge in robotics. Recently, with the rise of robot learning and humanoids, the demand for dexterous robot hands that are reliable, affordable, and easy to reproduce has grown significantly. To address these needs, we present LEAP Hand v2, a $200, 8-DOF, highly dexterous robotic hand designed for robot learning research. It is strong yet compliant, using a hybrid rigid-soft structure that is very durable. Its universal dexterous MCP joint provides exceptional finger mobility, enabling a variety of different grasps. The parts are all 3D printed and can be assembled very easily in under two hours using our instructions. Importantly, we offer a suite of advanced open-source software tools to support robot learning research. This includes human video retargeting code from MANO and Vision Pro, motion-capture teleoperation code using the Manus Glove, and a URDF with simulation examples for various simulation engines. We will showcase LEAP Hand v2, designed specifically for this demonstration, alongside our previous robot hands with real-robot interactive demos. Following our successful demos at RSS 2023 and 2024, we will again offer an engaging opportunity for attendees to get hands-on experience and information about the accessibility of low-cost, open-source robotic hands.
@inproceedings{shaw2025leapv2,
title={Demonstrating LEAP Hand v2: Low-Cost, Easy-to-Assemble, High-Performance Hand for Robot Learning},
author={Shaw, Kenneth and Pathak, Deepak},
booktitle={2025 Robotics: Science and Systems},
year={2025}
}
|
|
|
Bimanual Dexterity for Complex Tasks
Kenneth Shaw*, Yulong Li*, Jiahui Yang, Mohan Kumar Srirama, Ray Liu, Haoyu Xiong, Russell Mendonca†, Deepak Pathak†
CoRL 2024
webpage |
abstract |
bibtex |
CoRL
To train generalist robot policies, machine learning methods often require a substantial amount of expert human teleoperation data. An ideal robot for humans collecting data is one that closely mimics them: bimanual arms and dexterous hands. However, creating such a bimanual teleoperation system with over 50 DoF is a significant challenge. To address this, we introduce Bidex, an extremely dexterous, low-cost, low-latency, and portable bimanual teleoperation system that relies on motion capture gloves and teacher arms. We compare Bidex to a Vision Pro teleoperation system and a SteamVR system and find that Bidex produces better quality data for more complex tasks at a faster rate. Additionally, we show Bidex operating a mobile bimanual robot for in-the-wild tasks. The robot hands (5k USD) and teleoperation system (7k USD) are readily reproducible and can be used on many robot arms, including two xArms (16k USD).
@inproceedings{shaw2024bimanual,
title={Bimanual Dexterity for Complex Tasks},
author={Shaw, Kenneth and Li, Yulong and Yang, Jiahui and Srirama, Mohan Kumar and Liu, Ray and Xiong, Haoyu and Mendonca, Russell and Pathak, Deepak},
booktitle={8th Annual Conference on Robot Learning},
year={2024}
}
|
|
Adaptive Mobile Manipulation for Articulated Objects In the Open World
Haoyu Xiong, Russell Mendonca, Kenneth Shaw, Deepak Pathak
ArXiv 2024
webpage |
abstract |
bibtex |
arXiv
Deploying robots in open-ended, unstructured environments such as homes has been a long-standing research problem. However, robots are often studied only in closed-off lab settings, and prior mobile manipulation work is restricted to pick-move-place, which is arguably just the tip of the iceberg in this area. In this paper, we introduce the Open-World Mobile Manipulation System, a full-stack approach to tackle realistic articulated object operation, e.g., real-world doors, cabinets, drawers, and refrigerators in open-ended unstructured environments. We propose an adaptive learning framework in which the robot initially learns from a small set of data through behavior cloning, followed by learning from online self-practice on novel variations that fall outside the BC training domain. We develop a low-cost mobile manipulation hardware platform capable of repeated, safe, and autonomous online adaptation in unstructured environments at a cost of around 20k USD. We conducted a field test on 20 novel doors across 4 different buildings on a university campus. In a trial period of less than one hour, our system demonstrated significant improvement, boosting the success rate from 50% after BC pre-training to 95% after online adaptation, without any human intervention.
@article{xiong2024adaptive,
title={Adaptive Mobile Manipulation for Articulated Objects In the Open World},
author={Xiong, Haoyu and Mendonca, Russell and Shaw, Kenneth and Pathak, Deepak},
journal={arXiv preprint arXiv:2401.14403},
year={2024}
}
|
|
SPIN: Simultaneous Perception Interaction and Navigation
Shagun Uppal, Ananye Agarwal, Haoyu Xiong, Kenneth Shaw, Deepak Pathak
CVPR 2024 (Oral)
webpage |
abstract |
bibtex |
arXiv
While there has been remarkable progress recently in the fields of manipulation and locomotion, mobile manipulation remains a long-standing challenge. Compared to locomotion or static manipulation, a mobile system must make a diverse range of long-horizon tasks feasible in unstructured and dynamic environments. While the applications are broad and interesting, there are a plethora of challenges in developing these systems, such as coordination between the base and arm, reliance on onboard perception for perceiving and interacting with the environment, and, most importantly, simultaneously integrating all these parts together. Prior works approach the problem using disentangled modular skills for mobility and manipulation that are trivially tied together. This causes several limitations, such as compounding errors, delays in decision-making, and no whole-body coordination. In this work, we present a reactive mobile manipulation framework that uses an active visual system to consciously perceive and react to its environment. Similar to how humans leverage whole-body and hand-eye coordination, we develop a mobile manipulator that exploits its ability to move and see; more specifically, to move in order to see and to see in order to move. This allows it not only to move around and interact with its environment but also to choose when and what to perceive using an active visual system. We observe that such an agent learns to navigate around complex cluttered scenarios while displaying agile whole-body coordination using only ego-vision, without needing to create environment maps.
@InProceedings{Uppal_2024_CVPR,
author = {Uppal, Shagun and Agarwal, Ananye and Xiong, Haoyu and Shaw, Kenneth and Pathak, Deepak},
title = {SPIN: Simultaneous Perception Interaction and Navigation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024},
pages = {18133-18142}
}
|
|
|
DEFT: Dexterous Fine-Tuning for Real-World Hand Policies
Aditya Kannan*, Kenneth Shaw*, Shikhar Bahl, Pragna Mannam, Deepak Pathak
CoRL 2023
webpage |
abstract |
bibtex |
CoRL
Dexterity is often seen as a cornerstone of complex manipulation. Humans are able to perform a host of skills with their hands, from making food to operating tools. In this paper, we investigate these challenges, especially in the case of soft, deformable objects as well as complex, relatively long-horizon tasks. However, learning such behaviors from scratch can be data-inefficient. To circumvent this, we propose a novel approach, DEFT (DExterous Fine-Tuning for Hand Policies), that leverages human-driven priors, which are executed directly in the real world. In order to improve upon these priors, DEFT involves an efficient online optimization procedure. With the integration of human-based learning and online fine-tuning, coupled with a soft robotic hand, DEFT demonstrates success across various tasks, establishing a robust, data-efficient pathway toward general dexterous manipulation.
@article{kannan2023deft,
title={DEFT: Dexterous Fine-Tuning for Real-World Hand Policies},
author={Kannan, Aditya* and Shaw, Kenneth* and Bahl, Shikhar and Mannam, Pragna and Pathak, Deepak},
journal= {CoRL},
year={2023}
}
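The online fine-tuning step can be pictured as sampling-based optimization around the human-derived prior. The CEM-style loop below is a stand-in for illustration, not the paper's exact procedure; the reward function and parameterization are assumptions.
# CEM-style stand-in for online fine-tuning of a human-derived prior
# (illustrative assumption, not the paper's exact procedure).
import numpy as np

def online_finetune(prior_mean, prior_std, rollout_reward, iters=5, pop=10, n_elite=3):
    """rollout_reward(params) -> scalar score from executing one real-world rollout."""
    mean, std = np.asarray(prior_mean, float), np.asarray(prior_std, float)
    for _ in range(iters):
        samples = mean + std * np.random.randn(pop, mean.shape[0])
        rewards = np.array([rollout_reward(s) for s in samples])
        elites = samples[np.argsort(rewards)[-n_elite:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mean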
|
|
|
Dexterous Functional Grasping
Ananye Agarwal, Shagun Uppal, Kenneth Shaw, Deepak Pathak
CoRL 2023
webpage |
abstract |
bibtex |
arXiv
While there have been significant strides in dexterous manipulation, most of it is limited to benchmark tasks like in-hand reorientation, which are of limited utility in the real world. The main benefit of dexterous hands over two-fingered ones is their ability to pick up tools and other objects (including thin ones) and grasp them firmly to apply force. However, this task requires both a complex understanding of functional affordances as well as precise low-level control. While prior work obtains affordances from human data, this approach doesn't scale to low-level control. Similarly, simulation training cannot give the robot an understanding of real-world semantics. In this paper, we aim to combine the best of both worlds to accomplish functional grasping for in-the-wild objects. We use a modular approach: first, affordances are obtained by matching corresponding regions of different objects, and then a low-level policy trained in sim is run to grasp the object. We propose a novel application of eigengrasps to reduce the search space of RL using a small amount of human data and find that it leads to more stable and physically realistic motion. We find that the eigengrasp action space beats baselines in simulation, outperforms hardcoded grasping in the real world, and matches or outperforms a trained human teleoperator.
@inproceedings{agarwal2023dexterous,
title={Dexterous Functional Grasping},
author={Agarwal, Ananye and Uppal, Shagun and Shaw, Kenneth and Pathak, Deepak},
booktitle={Conference on Robot Learning},
pages={3453--3467},
year={2023},
organization={PMLR}
}
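A minimal sketch of the eigengrasp idea, under the assumption that it amounts to PCA over retargeted human grasp poses (the paper's implementation details may differ): the policy acts in a few principal-component coefficients, which are mapped back to joint angles.
# Minimal sketch (assumed mechanics): eigengrasps as PCA over human grasp poses.
import numpy as np

def fit_eigengrasps(human_joint_angles, k=5):
    """human_joint_angles: (N, dof) array of retargeted human grasp poses."""
    mean = human_joint_angles.mean(axis=0)
    _, _, vt = np.linalg.svd(human_joint_angles - mean, full_matrices=False)
    return mean, vt[:k]                      # mean (dof,), basis (k, dof)

def eigengrasp_to_joints(coeffs, mean, basis, joint_low, joint_high):
    """Map a low-dimensional RL action (k,) back to joint-space targets (dof,)."""
    return np.clip(mean + coeffs @ basis, joint_low, joint_high)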
|
|
|
DASH: A Framework for Designing Anthropomorphic Soft Hands through Interaction
Pragna Mannam*, Kenneth Shaw*, Dominik Bauer, Jean Oh, Deepak Pathak, Nancy Pollard
IEEE Humanoids 2023 Best Oral Paper Award Finalist (Top 3)
webpage |
abstract |
bibtex |
arXiv
Modeling and simulating soft robot hands can aid in design iteration for complex and high degree-of-freedom (DoF) morphologies. This can be further supplemented by iterating on the design based on its performance in real world manipulation tasks. However, iterating in the real world requires a framework that allows us to test new designs quickly at low costs. In this paper, we present a framework that leverages rapid prototyping of the hand using 3D-printing, and utilizes teleoperation to evaluate the hand in real world manipulation tasks. Using this framework, we design a 3D-printed 16-DoF dexterous anthropomorphic soft hand (DASH) and iteratively improve its design over five iterations. Rapid prototyping techniques such as 3D-printing allow us to directly evaluate the fabricated hand without modeling it in simulation. We show that the design improves over five design iterations through evaluating the hand’s performance in 30 real-world teleoperated manipulation tasks. Testing over 900 demonstrations shows that our final version of DASH can solve 19 of the 30 tasks compared to Allegro, a popular rigid hand in the market, which can only solve 7 tasks. We open-source our CAD models as well as the teleoperated dataset for further study.
@article{mannam2023Dashhand,
title={DASH: A Framework for Designing Anthropomorphic Soft Hands through Interaction},
author={Mannam, Pragna* and Shaw, Kenneth* and Bauer, Dominik and Oh, Jean and Pathak, Deepak and Pollard, Nancy},
journal= {IEEE Humanoids},
year={2023}
}
|
|
|
LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning
Kenneth Shaw, Ananye Agarwal, Deepak Pathak
RSS 2023
Start your dexterous manipulation journey here!
webpage |
abstract |
bibtex |
RSS
Dexterous manipulation has been a long-standing challenge in robotics. While machine learning techniques have shown some promise, results have so far largely been limited to simulation. This can be mostly attributed to the lack of suitable hardware. In this paper, we present LEAP Hand, a low-cost, dexterous, and anthropomorphic hand for machine learning research. In contrast to previous hands, LEAP Hand has a novel kinematic structure that allows maximal dexterity regardless of finger pose. LEAP Hand is low-cost and can be assembled in 4 hours at a cost of 2000 USD from readily available parts. It is capable of consistently exerting large torques over long durations of time. We show that LEAP Hand can be used to perform several manipulation tasks in the real world, from visual teleoperation to learning from passive video data and sim2real. LEAP Hand significantly outperforms its closest competitor, Allegro Hand, in all our experiments while being 1/8th of the cost. We release the URDF model, 3D CAD files, tuned simulation environment, and a development platform with useful APIs on our website.
@article{shaw2023Leaphand,
title={LEAP Hand: Low-Cost, Efficient,
and Anthropomorphic Hand for Robot Learning},
author={Shaw, Kenneth and Agarwal, Ananye
and Pathak, Deepak},
journal= {RSS},
year={2023}
}
|
|
|
Learning Dexterity from Human Hand Motion in Internet Videos
Kenneth Shaw*, Shikhar Bahl*, Aravind Sivakumar, Aditya Kannan, Deepak Pathak
IJRR Special Issue
Featured on the front page of IJRR Special Issue April 2024
abstract |
bibtex
To build general robotic agents that can operate in many environments, it is often useful for robots to collect experience in the real world. However, unguided experience collection is often not feasible due to safety, time, and hardware restrictions. We thus propose leveraging the next best thing to real-world experience: videos of humans using their hands. To utilize these videos, we develop a method that retargets any first-person or third-person video of human hands and arms into robot hand and arm trajectories. While retargeting is a difficult problem, our key insight is to rely only on internet videos of human hands to train it. We use this method to present results in two areas. First, we build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real time. This enables the robot to collect real-world experience safely under supervision. Second, we retarget in-the-wild human internet video into task-conditioned pseudo-robot trajectories to use as artificial robot experience. This learning algorithm leverages action priors from human hand actions, visual features from the images, and physical priors from dynamical systems to pretrain typical human behavior for a particular robot task. We show that by leveraging internet human hand experience, we need fewer robot demonstrations compared to many other methods.
@article{shaw_internetvideos,
title={Learning Dexterity from Human Hand Motion in Internet Videos},
author={Shaw, Kenneth and Bahl,
Shikhar and Sivakumar, Aravind and Kannan, Aditya and Pathak, Deepak},
journal= {IJRR},
year={2022}
}
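One way to picture the retargeting step is as a per-frame optimization that matches robot fingertips (via forward kinematics) to scaled human fingertip keypoints, with a temporal smoothness term tying consecutive frames together. The sketch below is an assumed formulation for illustration; robot_fk and the weights are hypothetical.
# Assumed per-frame retargeting objective (illustrative; robot_fk is hypothetical).
import numpy as np
from scipy.optimize import minimize

def retarget_frame(human_tips, robot_fk, q_prev, scale=1.2, smooth_w=0.05):
    """human_tips: (5, 3) fingertip keypoints from a hand-pose detector.
    robot_fk(q) -> (5, 3) robot fingertip positions via forward kinematics."""
    def cost(q):
        tip_err = np.sum((robot_fk(q) - scale * human_tips) ** 2)
        return tip_err + smooth_w * np.sum((q - q_prev) ** 2)   # temporal smoothness
    return minimize(cost, q_prev, method="L-BFGS-B").x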
|
|
|
VideoDex: Learning Dexterity from Internet Videos
Kenneth Shaw*, Shikhar Bahl*, Deepak Pathak
CoRL 2022
webpage |
abstract |
bibtex |
arXiv |
demo
To build general robotic agents that can operate in many environments, it is often imperative for the robot to collect experience in the real world. However, this is often not feasible due to safety, time, and hardware restrictions. We thus propose leveraging the next best thing to real-world experience: internet videos of humans using their hands. Visual priors, such as visual features, are often learned from videos, but we believe that more information from videos can be utilized as a stronger prior. We build a learning algorithm, VideoDex, that leverages visual, action, and physical priors from human video datasets to guide robot behavior. These action and physical priors in the neural network dictate the typical human behavior for a particular robot task. We test our approach on a system based on a robot arm and dexterous hand and show strong results on many different manipulation tasks, outperforming various state-of-the-art methods.
@article{shaw_videodex,
title={VideoDex: Learning Dexterity
from Internet Videos},
author={Shaw, Kenneth and Bahl,
Shikhar and Pathak, Deepak},
journal= {CoRL},
year={2022}
}
|
|
Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube
Aravind Sivakumar*, Kenneth Shaw*, Deepak Pathak
RSS 2022
Best Paper Award Finalist in Scaling Robot Learning Workshop
webpage |
abstract |
bibtex |
arXiv |
demo |
in the media
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible and we hope that it can aid robots that learn to act autonomously in the real world.
@article{telekinesis,
title={Robotic Telekinesis: Learning a
Robotic Hand Imitator by Watching Humans
on Youtube},
author={Sivakumar, Aravind and
Shaw, Kenneth and Pathak, Deepak},
journal={RSS},
year={2022}
}
|
Demos
Come meet me at CoRL, ICRA and RSS! I have attended CoRL 2023, 2024 and 2025, RSS 2023, 2024 and 2025, and ICRA 2025, and I plan to attend many more. I would love to chat with you, and you can even try out the robot hands yourself!
Education & Outreach
I am deeply committed to supporting educational use of low-cost robot hands, from university labs to K-12 classrooms. My goal is to inspire curiosity, teach core robotics and machine learning skills, and make hands-on research accessible to everyone.
Please reach out if you need additional resources or support beyond what's available on our website. I'd love to help!
Modified version of template from here and here