In December 2011 I wrote that forgetting is the key to AI. I still stand by it. To function independently, robots will need a compact offline model (intelligence), limited memory (short-term and long-term), and a hierarchy of goals (meaning of life). The long-term memory will need to be purged periodically (sleep) and the model retrained (learning).
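That is speculation, not a spec, but here is a minimal sketch of the loop I have in mind, in Python. Every name in it (`plan`, `still_useful`, `retrain`) is a placeholder, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    model: object                                  # compact offline model (intelligence)
    goals: list = field(default_factory=list)      # hierarchy of goals (meaning of life)
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def act(self, observation):
        self.short_term.append(observation)
        # let the model pick an action that advances the highest-priority goal
        return self.model.plan(observation, self.goals, self.long_term)

    def sleep(self):
        # consolidate the day's experience into long-term memory
        self.long_term.extend(self.short_term)
        self.short_term.clear()
        # purge what the model no longer finds useful (forgetting)
        self.long_term = [m for m in self.long_term if self.model.still_useful(m)]
        self.model.retrain(self.long_term)         # learning
```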
In January 2012 I read "Facing the Singularity" and concluded that if we cannot ensure that AI is friendly to us, we need to ban its development.
In April 2012 I wrote that I was scared by the progress in humanoid robots and AI development. Boston Dynamics' PETMAN looked like this then:
In October 2012 I read "Google Speaks" where I found this quote by Larry Page: "Google will fulfill its mission only when its search engine is AI-complete. You guys know what that means? That's artificial intelligence."
In July 2016 I read about the math equation for universal AI and this definition of intelligence by Shane Legg, co-founder of DeepMind:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
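The equation is Legg and Hutter's universal intelligence measure, which scores an agent's expected performance across every computable environment, weighted by the environment's simplicity:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here $\pi$ is the agent, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ the agent's expected total reward in $\mu$: doing well in many environments counts, and simple environments count the most.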
On his blog Shane wrote in December 2010:
My longest running prediction, since 1999, has been the time until roughly human level AGI. It’s been consistent since then, though last year I decided to clarify things a bit and put down an actual distribution and some parameters. Basically, I gave it a log-normal distribution with a mean of 2028, and a mode of 2025. Over the last year computer power has increased as expected, and so it looks like we’re still on target to have supercomputers with 10^18 FLOPS around 2018. In terms of neuroscience and machine learning, I think things are progressing well, maybe a little faster than I’d expected. I was toying with the idea of moving the prediction very slightly closer, but decided to play it safe and keep the prediction unmoved at 2028. With many people thinking I’m too optimistic, showing restraint is perhaps wise 🙂 I can always move my prediction nearer in a year or two.
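The two quoted numbers pin the distribution down. A quick Python check, under my own assumption that the log-normal runs over years elapsed since 1999, the year his prediction series started (the quote does not state the offset):

```python
from math import log, sqrt

BASE = 1999                 # assumed reference year, not stated in the quote
mean = 2028 - BASE          # E[T]    = exp(mu + sigma^2 / 2)
mode = 2025 - BASE          # Mode[T] = exp(mu - sigma^2)

# Dividing the two identities: mean / mode = exp(1.5 * sigma^2)
sigma2 = (2 / 3) * log(mean / mode)
mu = log(mode) + sigma2

print(f"mu = {mu:.3f}, sigma = {sqrt(sigma2):.3f}")  # mu = 3.331, sigma = 0.270
```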
Two days ago, Boston Dynamics and Google DeepMind announced a partnership. Here is the prototype of the new Atlas, which walks "like it's going to steal your girl":
Boston Dynamics is planning to make about 30 thousand Atlas units in 2026.
Boston Dynamics was the pioneer and remains the best known, but multiple robot companies, primarily Chinese and American, are planning to ramp up production in 2026. Out of the plethora of Chinese robot makers, I will show these: Unitree, XPeng, EngineAI, and UBtech:
Unitree may become another DJI (a leader in flying drones). They make multiple humanoids:
The smallest and cheapest R1: 121 cm tall, 25 kg:
The bigger and heavier G1: 127 cm tall (Basic version), 35 kg:
And the full-size humanoid H2: 180 cm tall, 70 kg:
XPeng is an electric car maker; it built a robot so lifelike that many thought it was a person in a costume, and the company had to cut the fabric open on stage to prove otherwise:
IRON: 178 cm tall, 70 kg:
EngineAI introduced the T800 with a video shot like a movie; so many people thought it was CGI that the company released behind-the-scenes footage:
T800: 173 cm tall, 75 kg:
..........
The world changed in November 2022, when OpenAI released ChatGPT.
We programmers have always had to cope with a constantly changing work environment: new languages, new tools, new platforms, new frameworks, new libraries. We were still the ones writing the code, though. With AI we got help: at first a smarter search - we no longer google or ask questions on Stack Overflow, which not long ago was our source of solutions for all problems:
Number of questions asked on Stack Overflow per month. Jeff Atwood left in 2012.
Then AI became a more intelligent auto-complete, a unit-test generator, an API implementer, a smart colleague who can introduce us to a new framework or concept and build a prototype for us, a senior developer who can find a bug we have struggled with for weeks. The pace has accelerated so much that even the people at the bleeding edge feel they are falling behind:
Andrej Karpathy was director of AI at Tesla, co-founded OpenAI, and taught AI at Stanford.
In November 2025 Anthropic released Claude Opus 4.5, which may be a superhuman-level AGI for coding: it reportedly outscored every human candidate on the take-home exam Anthropic gives prospective engineers.
The risk of human extinction because of AI is real; some leading researchers put it at around 20%. You can learn more by listening to interviews with Stuart Russell and Tristan Harris.