- These robots weren't really "walking" in the sense that humans walk, i.e. through continuous dynamic balancing, which is essentially controlled falling forward.
These used quasi-static walking, where the zero moment point (roughly, a moving projection of the centre of gravity) is kept within the support polygon of the footprint. This is what gives them their weird swaying gait and extremely conservative movement. You could never make a bipedal robot run, jump, or respond to large, sudden external forces using this method. It's essentially a balance-free movement hack.
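To make the criterion concrete, here's a toy sketch (not Honda's actual controller, and the function name and footprint numbers are made up for illustration): quasi-static stability just requires the ZMP to stay inside the support polygon, which for a convex polygon reduces to a point-in-polygon test.

```python
def zmp_is_stable(zmp, support_polygon):
    """zmp: (x, y); support_polygon: counter-clockwise list of (x, y) vertices.

    For a convex CCW polygon, the point is inside iff it lies on the
    left of (or on) every edge, i.e. every edge cross product is >= 0.
    """
    px, py = zmp
    n = len(support_polygon)
    for i in range(n):
        x1, y1 = support_polygon[i]
        x2, y2 = support_polygon[(i + 1) % n]
        # Cross product of the edge vector with the vector to the ZMP.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False  # ZMP is outside this edge: robot would tip
    return True

# A single rectangular footprint, 10 cm x 20 cm (hypothetical numbers).
foot = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.2), (0.0, 0.2)]
print(zmp_is_stable((0.05, 0.1), foot))  # ZMP centred in the foot -> True
print(zmp_is_stable((0.15, 0.1), foot))  # ZMP ahead of the foot -> False
```

A quasi-static controller keeps this test true at every instant, which is exactly why the gait has to be so slow and swaying; dynamic walking deliberately lets the check fail between footfalls.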
- Computer science research tends to look like this. Take a seemingly ambitious idea, then spend eons making a version of that idea which doesn't scale and probably doesn't work. But no one is really sure how far this incomplete idea will go. After demonstrating it we realize what the limits and next steps are.
- MIT's Leg Lab was doing dynamic walking around the same time. http://www.ai.mit.edu/projects/leglab/robots/robots.html
- yeah came here to say that. Leg lab was doing robotic locomotion in the 80s
Leg Lab evolved into Boston Dynamics, which has been (and maybe continues to be) the leader in real dynamic bipedal walking.
- Ya, they walk like old people. By keeping their CoG over their feet, they are able to stop at any moment without tipping over. That's how old people with diminished motor neuron function walk. Both play it safe because they know they lack the reaction time to prevent a fall once the CoG is outside their footprint. It is also how one walks on very slippery surfaces.
- According to the article E0 was static, E3 was dynamic.
What none of them did, however, was “learn” (as the title suggests). They used hardcoded algorithms.
- There's no robot that isn't built around hardcoded algorithms.
They use neural networks these days, which are just a different kind of hardcoded algorithm, one that requires bazillions of node-hours on NVIDIA GPUs to compile instead of humans drawing diagrams with pen and paper. The resulting binaries are still 100% static and hardcoded.
Some humanoid demos incorporate LLMs. So what? A GGUF file is always static. They don't change or improve as you interact with them. So still 100% hardcoded.
- This is 12 years old now:
- The sad part though: what has Honda even done with their humanoid robotics research? I remember being starry-eyed, excited as a kid to see ASIMO and all the amazing things it was doing. Past a couple of hardware revisions, they basically just let the thing rot and die, and it hurts.
https://www.youtube.com/watch?v=X23jNzL3wuE
This commercial still holds a lot of spirit and heart to it. I really wish we could tap progress on the shoulder and ask it to move forward again...
- All those car companies use A LOT of robotics, automation, and simulation to build cars. They just don't pursue autonomous sentient humanoids as the means to do it.
They all have their own predecessors to things like NVIDIA Isaac, used for things like worker toolpath planning or for absorbing variances in worker height. They just don't pair those systems with humanoid robots; they use human laborers.
Anyone with even workshop-level knowledge or experience in robotics knows we are at minimum a whole decade away from humanoids building cars, let alone economically, and there aren't going to be many first-mover advantages carrying over from building pre-viability humanoids.
And so they just keep raking in money from robotics-assisted, hand-built hybrid cars. Some more, some less.
- They've done nothing. Because there was nothing to do, back then.
Humanoid robotics wasn't a hardware problem back then, and isn't a hardware problem today. It was, and is, an AI problem at its core. You can make a humanoid robot, but you can't make it do useful things.
This is what's changing today. AI tech is actually advancing enough that "useful humanoid robots" might be within reach.
- > Humanoid robotics wasn't a hardware problem back then, and isn't a hardware problem today.
It definitely still is a hardware problem today: humanoids force you to miniaturize gears, motors, and other parts (especially in the hands), which makes them incredibly fragile and inaccurate. You're basically fighting against the laws of physics, so improvement on this front has been pretty slow. And tactile sensors, which are key for complex manipulation tasks, still have a long way to go in terms of resolution and reliability, so most robotics startups tend to rely on cameras for everything.
I think that in order to have humanoids that actually match or exceed the mechanical capability of humans, you need large advancements not just in AI but in materials science as well: no machine can yet match the efficiency of the human body, with its biological muscles, tendons, the skin/fat that surrounds them, and its vast array of sensory input.
- Not a hardware problem, and never was.
The problem of today's robots isn't somehow "the hands are insufficiently dextrous". That isn't a capability bottleneck at all: the performance of modern robots isn't somehow limited by the fingers being too stiff or too brittle. The capability bottleneck is "the AI sucks ass". Robots underperform because the AI can't get them to perform to the limits of their hardware - or anywhere near, really.
- Doraemon is the one you befriend. Gundam is the one you ride to lead the way. The round thing that cleans the whole room is the Roomba. Robots these days are there to help with the housework. Next up, emotional comfort? We might not even need arms and legs.