AI researchers at Anthropic have successfully programmed a robot dog, the Unitree Go2 quadruped, using their Claude AI model. The experiment, dubbed Project Fetch, aimed to understand how Claude could automate tasks in robotics and pave the way for more complex AI systems that can interact with the physical world.
In the study, two groups of researchers were asked to program a robot dog to complete specific tasks. One group used Claude's coding model, while the other wrote code without AI assistance. The results showed that the team using Claude was able to complete some tasks faster than the human-only programming group, demonstrating the agentic coding abilities of modern AI models.
However, it's essential to note that these systems are not yet capable of taking full control of a robot, according to Logan Graham, a member of Anthropic's red team. Nevertheless, future models might be able to do so, and studying how people leverage large language models (LLMs) to program robots could help the industry prepare for this possibility.
Anthropic's experiment highlights a growing trend: AI systems extending beyond software into the physical world. This raises concerns about potential risks and misuse, but it could also make robots more useful and autonomous.
The results of Project Fetch are significant because they show that LLMs can already help program robots to perform tasks, and they hint at more capable embodied AI systems to come. However, much work remains to ensure these systems are developed responsibly and safely.