Note: I'm going through my Drafts folder. This post was written in 2013.
....
The meaning of life is survival. For now. Because we don't know enough about the world we live in. If one day we prove that we live in a simulation, then the goal may change, but to what? Trying to communicate with the simulators? They can see us. Do they need us? Or maybe that will be game over - the end of the world.
Survival doesn't mean stagnation. Evolution is necessary for indefinite survival. Survival long after Earth is gone and long after our universe is gone. Evolution is powered by knowledge: trying things out, experimenting, keeping what works, and discarding what doesn't.
Intelligence is correlated with the amount and scope of sensory data.
Our current form cannot evolve fast enough.
We can build a life form that will be smarter than us.
We can build a robot that will be able to walk, run, swim, and fly. We can give it better audio sensors than our ears, including directional microphones, and better vision than our eyes: night vision, panoramic vision, microscopic vision, telescopic vision. We can give it senses that we don't have: receiving ultraviolet, infrared, gamma rays. It will be able to communicate much faster than us using a wide range of radio frequencies, and to react in a fraction of our reaction time. We can make it fully aware of its own construction, and we can make it know how to create a copy of itself. We can encourage it to create at least one copy of itself that is a little bit improved. We will give it the same goals that we have: survival and the quest for knowledge. It will be a superhuman.
In December 2011 I wrote that forgetting is the key to AI. I still stand by it. To function independently, robots will need a compact offline model (intelligence), limited memory (short-term and long-term), and a hierarchy of goals (meaning of life). The long-term memory will need to be purged periodically (sleep) and the model retrained (learning).
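To make that loop concrete, here is a toy sketch in Python. It is only an illustration of the idea above; every class and method name in it is my assumption, not an existing API:

```python
class Model:
    """Stand-in for the compact offline model (the robot's intelligence)."""
    def is_important(self, memory: str) -> bool:
        return "danger" in memory      # placeholder importance test
    def retrain(self, memories: list[str]) -> None:
        pass                           # placeholder for the learning step

class Robot:
    def __init__(self, model: Model, goals: list[str]):
        self.model = model                 # intelligence
        self.short_term: list[str] = []    # limited short-term memory
        self.long_term: list[str] = []     # limited long-term memory
        self.goals = goals                 # hierarchy of goals (meaning of life)

    def sleep(self) -> None:
        # Purge long-term memory (sleep), then retrain the model (learning).
        self.long_term = [m for m in self.long_term
                          if self.model.is_important(m)]
        self.model.retrain(self.long_term)
```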
In January 2012 I read "Facing the Singularity", and concluded that if we cannot ensure that AI is friendly to us, we need to ban its development.
In April 2012 I wrote that I was scared by the progress in humanoid robots and AI development. Boston Dynamics' PETMAN looked like this then:
In October 2012 I read "Google Speaks" where I found this quote by Larry Page: "Google will fulfill its mission only when its search engine is AI-complete. You guys know what that means? That's artificial intelligence."
My longest-running prediction, since 1999, has been the time until roughly human-level AGI. It’s been consistent since then, though last year I decided to clarify things a bit and put down an actual distribution and some parameters. Basically, I gave it a log-normal distribution with a mean of 2028 and a mode of 2025. Over the last year computer power has increased as expected, so it looks like we’re still on target to have supercomputers with 10^18 FLOPS around 2018. In terms of neuroscience and machine learning, I think things are progressing well, maybe a little faster than I’d expected. I was toying with the idea of moving the prediction very slightly closer, but decided to play it safe and keep it unmoved at 2028. With many people thinking I’m too optimistic, showing restraint is perhaps wise 🙂 I can always move my prediction nearer in a year or two.
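As an aside, those two numbers fully determine the distribution's parameters. Assuming the log-normal is over years after 2000 (the base year is my assumption; the prediction doesn't state one), a few lines of Python recover them:

```python
import math

# For a log-normal: mean = exp(mu + sigma^2 / 2), mode = exp(mu - sigma^2),
# so ln(mean) - ln(mode) = 1.5 * sigma^2.
mean, mode = 2028 - 2000, 2025 - 2000       # years after the assumed base 2000
sigma2 = (2.0 / 3.0) * math.log(mean / mode)
mu = math.log(mode) + sigma2
median = 2000 + math.exp(mu)                # log-normal median = exp(mu)
print(f"sigma = {math.sqrt(sigma2):.3f}, median year = {median:.0f}")  # ~2027
```

Interestingly, the implied median stays near 2027 for any reasonable choice of base year, though the implied spread does not.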
Two days ago, Boston Dynamics and Google DeepMind announced a partnership. Here is the prototype of the new Atlas, which walks "like it’s going to steal your girl":
Boston Dynamics is planning to make about 30 thousand Atlas units in 2026.
Boston Dynamics was the pioneer and is the best known, but multiple robot companies, primarily Chinese and American, are planning to ramp up production in 2026. Out of the plethora of Chinese robot makers, I will show these four: Unitree, XPeng, EngineAI, and UBtech:
Unitree may become another DJI (a leader in flying drones). They make multiple humanoids:
The smallest and cheapest R1: 121 cm tall, 25 kg:
Bigger and heavier G1: 127 cm tall (Basic), 35 kg:
And a full-size humanoid, the H2: 180 cm tall, 70 kg:
XPeng is an electric car maker whose robot many thought was a person in a costume, so they had to cut its fabric open on stage:
IRON: 178 cm tall, 70 kg:
EngineAI introduced the T800 with a video shot like a movie, and so many people thought it was CGI that they released the backstage footage:
T800: 173 cm tall, 75 kg:
UBtech created a stir with a video of their freshly made robots marching into shipping containers. They, too, were wrongly accused of using CGI.
Walker S2: 176 cm, 70 kg:
..........
The world changed in November 2022, when OpenAI released ChatGPT.
We programmers have always had to cope with a constantly changing work environment: new languages, new tools, new platforms, new frameworks, new libraries. We were still coding, though. With AI, we got help. First it was a smarter search: we no longer google or ask questions on Stack Overflow, not so long ago our source of solutions for all problems:
Number of questions asked per month. Jeff Atwood left in 2012.
Then AI became a more intelligent auto-complete, a unit test generator, an API implementer, a smart colleague who can introduce us to a new framework or concept and build a prototype for us, a senior developer who can find a bug that we struggled with for weeks. The pace has accelerated so much that even the people at the bleeding edge are feeling behind:
Andrej Karpathy was the director of AI at Tesla, co-founded OpenAI, and taught AI at Stanford.
In November 2025 Anthropic released Claude Opus 4.5, which may be a superhuman-level AGI for coding - it's better at coding than 100% of programmers who applied for a job at Anthropic.
A good book. Sometimes a bit too wordy. Sometimes a bit off topic. Nevertheless, very interesting. It's not just about the mind. It is about evolution, genes, neural networks, optics, and mathematics of music. It is about reverse-engineering us. Why we act the way we act. How our senses work.
To understand sight, we have to look to optics and computer vision systems. To understand movement, we have to look to robotics. To understand sexual and familial feeling, we have to look to Mendelian genetics. To understand cooperation and conflict, we have to look to the mathematics of games and to economic modeling.
Reverse engineering is the key to understanding how we work. Take morning sickness, for example. Why is it part of pregnancy? The biologist Margie Profet assumed that if it is there, it must be beneficial in some way, and she found the benefit:
Pregnancy sickness protects the developing baby from toxins: plant toxins in quantities that adults tolerate can cause birth defects
Pregnancy sickness starts when the embryo is most vulnerable to teratogens (birth defect inducing chemicals)
Pregnancy sickness ends when the embryo is past the critical stage of organ development and needs nutrients for further growth
Women with more severe pregnancy sickness are less likely to miscarry
Natural selection started with a replicator - itself not a product of natural selection, but of chemical and physical processes. Replicators compete for resources. Copying errors that improve the chances of reproduction are preserved, pushing out organisms that don't reproduce as well. Confucius said: "A common man marvels at uncommon things; a wise man marvels at the commonplace." Steven Pinker looks at commonplace human acts and wonders how they work and how a robot could do them.
On 300-odd small pages of "Google Speaks" Janet Lowe tells the story of Google and the key people who run it: Sergey Brin, Larry Page, and Eric Schmidt.
This book was published in 2009 - only 3 years ago - and it already shows signs of age: Facebook, which is the biggest danger to Google's dominance of the Internet, is not mentioned even once.
The book contains a guess that Google had 1 million servers back in 2006. That was probably an overestimate. Google may have about 2 million servers currently.
Sergey Brin: "It’s not enough not to be evil. We also actively try to be good." Despite the many controversies mentioned in the book, this guiding principle is clearly visible when you compare the history of Google with the history of Facebook. Google may be the first to create AI and bring an end to human rule on Earth, but it will do so with good intentions.
I'm scared. As a civilisation, we are moving very quickly towards our total destruction.
No, I am not talking about global warming. We can deal with that. Even if we do nothing to control the amounts of greenhouse gases in the atmosphere, and it becomes unbearably hot in some places, and sea levels rise 80 meters, making countries like the Netherlands and Bangladesh disappear under water, we will survive. We will relocate to higher, cooler places. We will ban combustion engines, or more likely we will get CO2 out of the air and store it underground. Climate change will be disruptive, it will be unpleasant, but we will adjust. Countries will help each other, or will fight each other, or mayhem will reign, but our civilisation will survive. We may even be better off: we may develop technologies that allow us to control the climate globally, whatever the cause of the change, and we may embark on ambitious projects like making deserts habitable by changing their local climate. All good stuff.
I am talking about something much more sinister, namely robots and AI. We blindly keep developing technologies that are a mortal danger to us. Our latest "achievement" is Boston Dynamics' newest robot: PETMAN, or as it should be called, the Terminator, because it is eerily similar to the original killing robot.
PETMAN's development started at least a year ago under the guise of testing soldier uniforms. It is, however, the first serious example of a dangerous new weapon category: a humanoid robot. These robots will be able to use anything that was designed to be used by humans, anything that has a human interface. They will climb stairs, open doors, move things, flip switches, turn valves, operate heavy machinery, drive tanks, fly planes. They won't need machine guns for arms; they will be able to pick up any gun and use it. This is what makes them a new weapon category - there was no such thing in the past. We used to make specialised military robots: flying drones, scouting, guarding, and combat vehicles. Now we have created a robot that can control them.
Can we control it? Can we control the person who will send millions of these robots on a mission? As a civilisation we have a poor track record of stopping those in power from committing crimes against humanity. We actually have laws in place that give those in power immunity and the right to secrecy.
Combined with the latest advancements in sensory input processing and AI - examples of which are Google's self-driving cars, IBM's Watson supercomputer, and Apple's Siri - we are making a weapon much more dangerous than nuclear, chemical, or biological weapons. Those weapons were dumb. We would use them once or twice, see how terrible they were, and put them away. It may not be possible to put away a million highly intelligent PETMANs that can build, repair, and recharge themselves, know exactly where they are, and can access any map and all of human knowledge in milliseconds. Once let out, they may not come back.
Behind many of these technologies is DARPA, and their newest robotics challenge is probably one step too far. We could ban robotics and AI development at the international level before the singularity happens, but will we do it?
In "Facing the Singularity" Luke Muehlhauser describes the inherent flaws of Homo sapiens and the inevitability of Artificial Intelligence (AI) taking over. The future is quite bleak for us. The Earth is ours now, but in as little as 10 years we may be struggling to survive. Compared to AI we are mentally and physically inferior. Just one example:
Our axons carry signals at 75 meters per second or slower. A machine can pass signals along about 4 million times more quickly.
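That ratio is a simple sanity check: signals inside a machine travel at close to the speed of light, while 75 m/s is roughly the upper end for myelinated axons. A tiny Python check (the constants are approximations of mine, not figures from the book):

```python
SPEED_OF_LIGHT = 3.0e8  # m/s, approximate signal speed inside a machine
AXON_SPEED = 75.0       # m/s, a fast myelinated axon
print(SPEED_OF_LIGHT / AXON_SPEED)  # 4000000.0 - the "4 million times" figure
```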
Luke is the director of the Singularity Institute, where he researches friendly AI.
What if we cannot ensure that the AI is friendly to us?
We need to stop it. We need to introduce laws that prohibit AI and robotics research. We need to control it the same way we control chemical, biological, and nuclear weapons research and manufacturing.
And if we don't?
I can see two possibilities for us after the AI takes over:
1. It ignores us as long as we don't interfere with it, or stand in the way. Similar to how we ignore other animals. We may be treated as pets, or as pests depending on how we behave towards it. If we are hostile, it will be hostile. If we are docile, it will let us live.
2. At some point the AI may reach the conclusion that since the universe is slowly falling apart, and nobody can win the endgame, there is no point playing, and it will erase itself, like a depressed man with no future who kills himself.
I think that making computer programs forget most of what they receive is the key to building Artificial Intelligence. Why? Because having too much data creates three problems:
1. Space needed to store it.
2. Speed of retrieval.
3. Classification: which piece of data is important?
Selective forgetting would solve these problems.
A spaced repetition algorithm should be used backwards to achieve that goal: when the AI program sees something for the first time, it keeps it until the next day. If it sees it again the next day, it keeps it for 6 more days. If it then sees it again around the 6th day, it keeps it for another 15 days, and so on. When something stops occurring, it is gradually forgotten. The AI program thus keeps less data, so information can be retrieved faster, and the data it keeps is important for understanding the environment, because it keeps coming up.
This mechanism works well for learning about normal situations. The AI program may also stumble upon extraordinary, life-threatening situations, which need to be remembered separately, outside the spaced repetition algorithm.
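Here is a minimal sketch of such a reverse spaced-repetition store in Python. The interval schedule and every name in it are my assumptions - a sketch of the idea rather than a finished design:

```python
from dataclasses import dataclass

# Days to keep an item after its 1st, 2nd, 3rd... sighting (assumed schedule).
INTERVALS = [1, 6, 15, 30, 60]

@dataclass
class Memory:
    data: str
    stage: int = 0        # how many times the item has recurred
    expires_on: int = 0   # day after which it will be forgotten
    pinned: bool = False  # life-threatening events bypass forgetting

class ForgetfulStore:
    def __init__(self) -> None:
        self.memories: dict[str, Memory] = {}

    def observe(self, data: str, today: int, critical: bool = False) -> None:
        m = self.memories.setdefault(data, Memory(data))
        if m.expires_on:  # seen before: promote to the next, longer interval
            m.stage = min(m.stage + 1, len(INTERVALS) - 1)
        m.expires_on = today + INTERVALS[m.stage]
        m.pinned = m.pinned or critical

    def sleep(self, today: int) -> None:
        """Nightly purge: forget whatever stopped recurring."""
        self.memories = {k: m for k, m in self.memories.items()
                         if m.pinned or m.expires_on >= today}
```

An item observed on day 0 and never again is dropped by the purge on day 2; an item that keeps recurring climbs the schedule and survives for months; anything flagged as critical is never purged.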