Boston Dynamics and Toyota Research Institute have created Large Behavior Models that enable the Atlas humanoid robot to learn complex tasks through human demonstration.
The system uses VR-based teleoperation where operators control Atlas through full-body motion capture, teaching it everything from delicate manipulation to whole-body coordination.
The 450-million-parameter neural network processes visual and sensor data to predict sequences of movements, much as ChatGPT predicts sequences of words.
Atlas can now autonomously perform multi-step tasks such as organizing a workshop, adapting to unexpected situations without task-specific programming for each scenario.
This approach shifts robotics from traditional coding-based control to demonstration-based learning, potentially making advanced automation more accessible and scalable across diverse applications.
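The demonstration-based learning described above is, at its core, behavior cloning: a policy network is fit to (observation, action) pairs recorded during teleoperation, then replays that mapping autonomously. The toy sketch below illustrates the idea only; the network size, data, and variable names are illustrative assumptions, not Boston Dynamics' or TRI's actual architecture (their Large Behavior Model has roughly 450 million parameters and consumes camera and proprioceptive streams).

```python
import numpy as np

# Illustrative behavior cloning: fit a tiny policy to synthetic
# "demonstration" data. This is NOT the real Large Behavior Model,
# just the supervised-learning recipe behind demonstration learning.

rng = np.random.default_rng(0)

# Synthetic demonstrations: observations (e.g. joint angles + object
# pose) paired with the operator's commanded actions.
obs = rng.normal(size=(256, 8))
true_w = rng.normal(size=(8, 4))
actions = np.tanh(obs @ true_w)          # hypothetical expert actions

# One-hidden-layer policy trained with gradient descent on MSE.
w1 = rng.normal(scale=0.1, size=(8, 32))
w2 = rng.normal(scale=0.1, size=(32, 4))
lr = 0.05

loss0 = float(np.mean((np.tanh(obs @ w1) @ w2 - actions) ** 2))
for _ in range(500):
    h = np.tanh(obs @ w1)                # hidden activations
    pred = h @ w2                        # policy's predicted actions
    err = pred - actions                 # gradient of 0.5 * MSE
    g2 = h.T @ err / len(obs)
    g1 = obs.T @ ((err @ w2.T) * (1 - h**2)) / len(obs)
    w2 -= lr * g2
    w1 -= lr * g1

loss = float(np.mean((np.tanh(obs @ w1) @ w2 - actions) ** 2))
```

The design point the article makes is visible even at this scale: nothing in the training loop encodes the task itself, only a generic fit to demonstrations, so the same code would learn a different skill from a different set of recordings.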