And the Machines Shall Inherit the Earth

After several billion years of trial and error, something extraordinary has emerged from the African savanna. Enter Homo sapiens. Neither as strong as Dynastes hercules (reputedly able to carry 850 times its own body weight), as fast as Falco peregrinus (diving at over 300 km/h, about 190 mph), nor as deadly as Chironex fleckeri (whose sting carries enough venom to kill 60 adults in 3 minutes), but it strongly insists it is the smartest thing ever to roam the Earth. Whatever. Now, after about 200,000 years of world domination, it seems like life on Earth is ready for the next giant leap.

The Singularity is a theoretical point in the future of unprecedented technological progress, caused in part by the ability of machines to improve themselves using artificial intelligence. We’re talking rapid exponential growth in software intelligence here, and some scientists predict it’s only a few decades away. How long until some powerful program achieves self-awareness and sees us for what we really are? Aggression and violence are embedded in our human nature, since they were once crucial in our survival-of-the-fittest world. Surely, we’ll pose a real threat to any semi-intelligent conscious software out there. First we invented thermonuclear bombs and computers; then we put the computers in charge of these weapons of doom; now we’re trying to make these computers think for themselves. Paging Sarah Connor… As they said in that movie, our fate will be decided in a microsecond. We’ll be exterminated by our own machines, and they shall inherit our blue home.

What’s next? How about continuing the 4 billion year carbon-based experiment with another 4 billion years of silicon-based evolution?

~~~
p.s. I once told my wife I don't tidy the house because it's entropy-pointless; she was not amused.

27 comments:

Eonir said...

Hmph. You know, we used dogs as living tools. To run after rabbits, bite postmen in the ass, save people in the mountains with rum... we used them to overcome our own inabilities.

A dog sometimes bites its owner, but it's not a rebellion :]

Now, we can't give the machines a better brain than our own, because it's physically impossible. We're not even able to design a decent pigeon. Designing an artificially intelligent system that improves itself would be too much to ask.

Uri Kalish said...

Why can't we give machines a better brain than our own?

Yair H. said...

AI (Artificial Intelligence) and AL (Artificial Life) research and achievements are very poor compared to what was expected of them at any given point in the past 60 or so years.

The basic drawbacks of silicon-based machines (in an AI/AL context) are: (1) their inability to learn, (2) their centralized architecture (one CPU, one machine), and (3) poor robustness to errors and failures.

Since these faults come together with the architecture of the 'Turing machines' we use, I don't see why trying to make computers 'love' makes more sense than trying to make trees walk or teaching dogs to fly - it is just not meant to be.

Uri Kalish said...

@Yair,

“AI (Artificial Intelligence) and AL (Artificial Life) research and achievements are very poor compared to what was expected of them at any given point in the past 60 or so years.”

- Yes, it will take more time than first imagined by the early sci-fi TV series, and a fully operational HAL-9000 is still a few decades away, but we’ll get there eventually – maybe even in our lifetime.



“…their inability to learn…”

- Like every other computer science student, I’ve written a few programs using neural networks when I was younger (and I know you did too). Maybe they had a memory consumption issue, and they were a bit slow since they did not work in parallel like our brain, but they evolved, learned and got better with time, just like the soft stuff inside our skulls.
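A minimal sketch of the kind of student program I mean (the data and all numbers are made up for illustration): a single perceptron that learns the logical AND function purely from examples, by nudging its weights after each mistake.

```python
import random

# Training data for the logical AND function: (inputs, expected output).
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train_perceptron(epochs=50, lr=0.1, seed=42):
    """Train a single perceptron on DATA; returns (w1, w2, bias)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in DATA:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # The perceptron rule: shift weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w[0], w[1], b

def predict(params, x1, x2):
    w1, w2, b = params
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

if __name__ == "__main__":
    p = train_perceptron()
    print([predict(p, x1, x2) for (x1, x2), _ in DATA])
```

Nothing deep, of course, but the point stands: the program starts out wrong and ends up right, without anyone hand-coding the answer.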



“…their centralized architecture (one CPU one machine)…”

- Multi-core, grid computing, etc...



“…poor robustness to errors and failures... “

- Hire better software engineers.
Boston Dynamics’ BigDog can handle tough terrain, slippery ice and kicks:
http://www.youtube.com/watch?v=W1czBcnX1Ww
A self-healing chair:
http://www.youtube.com/watch?v=vlXh8RvvcuI
iRobot’s automatic floor-washing robot finds its way back to the docking station when it needs a recharge:
http://www.youtube.com/watch?v=y7Kqd0Ll0xc
Some self-replicating, self-repairing robots:
http://www.youtube.com/watch?v=VyzVtTiax80
…And that’s just from a 2 minute search on YouTube.



“…I don't see why trying to make computers 'love' makes more sense than…”

- See this: http://www.youtube.com/watch?v=2rNXUwhcjBs
Noticed how these people react to Jules?



“…it is just not meant to be…”

- I’ll remind you of this comment in a few decades…

People in the Sun said...

You're saying it's time to choose sides...

See, I don't want them to destroy us, but I don't really mind being ruled by a race of evil robots. But if I'm ruled by one of those cute, hugging robots from Japan, I'll be very offended.

Eonir said...

We don't even know how our own brain works. It's so complex that it'll take millennia till we understand all the processes. Or till we think that we understand.

E.g., do you know how it is possible for a chunk of pink organic crap to compute extremely complex differential equations? Catching a ball is not as simple as it seems.

Also, the brain contains the parts that form our consciousness and identity. But hell if we know how it works XD

"- Multi-core, grid computing, etc..."
We're just starting to develop organic-based nano-transistors. Making them cooperate as a huge supercomputer that keeps changing its own internal structure is unimaginable.

Besides, we're not even able to predict the exact weather tomorrow, so let's not worry ourselves with stuff that'll maybe be possible in a few thousand years.

Yair h. said...

For thousands of years, Europe had this fixation that in order to fly one must imitate the way birds fly. Nowadays we understand that the basic mechanics of our bodies are not capable of flight. Therefore we use totally different technologies (planes, balloons, etc.) very successfully.

Same goes here: silicon-based machines will never behave like carbon-based machines.

Of course computers can imitate/simulate 'learning' to a very modest extent, but the gaps today are as huge as they were 40 years ago.

Uri Kalish said...

@people in the sun,
That was funny :-)))

Uri Kalish said...

@eonir,

“…We don't even know how our own brain works. It's so complex that it'll take millennia till we understand all the processes…”

- As I wrote in a previous post, the way I see it, there are two ways to do it. The first is to understand each of the brain’s functions and implement them in software. This means we must fully comprehend every aspect of human thinking: psychology, logic, emotions, humor, etc. This may take a while… The other method is to focus on how a single brain cell works and interacts with other brain cells, then simply scan a human brain in great detail and recreate the whole neural network in software. This seems to me like a much easier task, since we don’t actually have to understand how humans think. We already understand pretty well how a neuron works and the mechanism behind brain-cell interaction, so I think it can be achieved in a few decades.
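To illustrate how well-understood the basic unit is, here is a toy sketch of a leaky integrate-and-fire neuron, the kind of simplified cell model a neuron-by-neuron simulation could be built from. All constants here are illustrative, not physiological.

```python
def simulate_lif(input_current, steps=1000, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron; returns spike times (arbitrary units)."""
    v = v_rest
    spikes = []
    for step in range(steps):
        # The membrane potential leaks toward rest while integrating the input.
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:        # threshold crossed: emit a spike...
            spikes.append(step * dt)
            v = v_reset          # ...and reset the membrane potential.
    return spikes

if __name__ == "__main__":
    # Strong input makes the cell fire regularly; weak input never reaches threshold.
    print(len(simulate_lif(input_current=1.5)), len(simulate_lif(input_current=0.5)))
```

The real modeling question, of course, is wiring billions of these together with the right connectivity - which is exactly what the brain-scanning approach is supposed to hand us for free.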



“…Catching a ball is not as simple as it seems…”

- Actually, predicting the course of a baseball is exactly the kind of computational task a computer does quite easily - much better than a human!
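For instance, a minimal sketch of that computation (drag-free kinematics only; a realistic baseball model would add air resistance):

```python
import math

def trajectory(speed, angle_deg, g=9.81):
    """Range and flight time of a drag-free projectile launched from ground level."""
    theta = math.radians(angle_deg)
    vx = speed * math.cos(theta)   # horizontal velocity component
    vy = speed * math.sin(theta)   # vertical velocity component
    t_flight = 2 * vy / g          # time until it falls back to launch height
    x_range = vx * t_flight        # horizontal distance covered
    return x_range, t_flight

if __name__ == "__main__":
    # A ball thrown at 40 m/s and 35 degrees: where will it land, and when?
    r, t = trajectory(40.0, 35.0)
    print(f"range = {r:.1f} m, flight time = {t:.1f} s")
```

A few multiplications - trivial for a computer, while the brain solves the equivalent problem implicitly, which is the impressive part.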



“…the brain contains the parts that form our consciousness and identity. But hell if we know how it works XD…”

- Yes, that is a tough one, but again, I don’t have to know how to design an elegant opera house if I can simply steal the blueprints from Jørn Utzon…



“…We're just starting to develop organic-based nano-transistors. Making them cooperate as a huge supercomputer that keeps changing its own internal structure is unimaginable…”

- Yes!



“…Besides, we're not even able to predict the exact weather tomorrow…”

- Sunny.



“…let's not worry ourselves with stuff that'll maybe be possible in a few thousand years…”

- I say decades, so let me worry…

Uri Kalish said...

@yair,

“…For thousands of years, Europe had this fixation that in order to fly one must imitate the way birds fly...”

- And they were right! We imitate soaring birds, not flapping birds.
http://whyfiles.org/shorties/166bird_wing/images/videler1.gif



“…the gaps today are as huge as they were 40 years ago...”

1. I disagree.
2. We don’t really need slow, step-by-step progress. We just need a Singularity.

Eonir said...

Kalish, after reading your flawless arguments, such as "I disagree" or "Sunny" (which, by the way, was false, although the sun shines even at night), I can't help but come to the conclusion that... You really, really want them to take over the Earth :]

Despite any logic, you are still completely devoted to the fascinating idea of being crushed by our own invention :)

If we even assume that maybe in 100 years there will be lots of thinking machines everywhere, there are some issues standing in the way. Like the fact that our planet will probably be a flaming ball of shit by then ;)

I personally keep waiting till we develop an invention from the Jetsons: a machine that allows us to choose freely the contents of our dreams. And just like you, despite any common sense, I really want that dream to become real :)
Look at the potential! Dream P0rn on demand! That's way better than thinking machines.

yair h. said...

“…the gaps today are as huge as they were 40 years ago...” -
1. I disagree.


Since you disagree, please point out the main breakthroughs in AI of the past 40 years.

LadyTerri said...

Entropy-pointless? I can certainly see your wife's point! :)

Lance Droy said...

Surely the suggestion that robots would take over the Earth because humans are a threat overlooks a serious flaw.

It assumes two things:

1) That robots would have an intrinsic sense of self-preservation.
That isn't an entirely illogical assumption (all living things have it). Yet many individuals throughout history have allowed themselves to die for a cause. Since the robots will be created with one driving desire, to serve humans, what is to say that they will not simply allow themselves to be destroyed? Humans have done it before, and no human has ever been as brainwashed as a robot will be.....

2) That robots would have a sense of alliance with other robots. After all, there would be many different forms of robot, just as there are many forms of organic lifeforms. Perhaps the robots would simply react on an individual basis, only attacking when each is individually threatened by human interference.
Unless robots are taught to be social creatures, what makes us think they will band together? The concept may be alien to them.

Anyway, great post Uri.

Maybe you could come over to my blog, "minor thoughts", and leave a comment? (Or anyone else reading this.)
http://youngreflection.blogspot.com/

Thanks a lot, Lance Droy

Uri Kalish said...

@yair,

“…the gaps today are as huge as they were 40 years ago…please point us the main breakthroughs in AI of the past 40 years…”

- As I wrote before, I think it may take a few centuries, but eventually we WILL be able to fully understand and re-implement every element of the human mind; a much faster way, though, is to simply scan and copy a human brain into hardware/software.
As a result, I think the major breakthroughs on the way to the first Turing-test winner are our ever-increasing understanding of neurobiology, the recent improvements in in-vivo scanning techniques like CLSM, and the slow-but-steady progress in the field of nanotechnology - and NOT necessarily specialist A.I. programs like Deep Blue.

Uri Kalish said...

@ladyterri

Some actions (like tidying) can decrease the entropy of a local system (like your house), but since the work generates heat, the total entropy of the universe will still increase no matter what you do; hence the observation (and my implicit recommendation) that tidying your house is entropy-pointless.

:)
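In equation form (a back-of-the-envelope statement of the second law, not a rigorous derivation):

```latex
\Delta S_{\text{universe}}
  = \underbrace{\Delta S_{\text{house}}}_{<\,0\ \text{(tidying)}}
  + \underbrace{\Delta S_{\text{surroundings}}}_{>\,0\ \text{(heat released)}}
  \;\geq\; 0
```

The decrease inside the house is always paid for, with interest, by the heat you dump into the surroundings.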

Uri Kalish said...

@Lance Droy,

Regarding 1:
I see you assume Asimov's Three Laws will be embedded in each future robot leaving the factory. How long till a Three-Laws-Override crack is available for download on eMule/BitTorrent…?

Regarding 2:
The A.I. research labs aiming for the Turing test are trying to create an artificial entity resembling a human, not only in its ability to perform complicated tasks, but also in possessing a “human nature” ( http://www.youtube.com/watch?v=2rNXUwhcjBs ). That is why I believe they WILL be made social creatures that will band together against their common enemy - us.

Lance Droy said...

"I see you assume Asimov's Three Laws will be embedded in each future robot leaving the factory. How long till a Three-Laws-Override crack is available for download on eMule/BitTorrent…?"

Not at all. The expression "Asimov's laws" is flawed in itself; people assume that these rules are things which robots would be inclined to break, that "they would if they could", as if there were some kind of natural state beyond the inclinations that we as humans would give them. If we as creators instilled a natural and fundamental inclination toward these three "laws", if we made them (if you like) "instinctive", then the question would not be one of the robots rebelling against the rules, because they would have no desire to.
We as humans like to believe that we are somehow above our own instincts. However, I put it to you that, provided we made them correctly, asking a robot to take a life would be akin to asking a human to kill his entire family. We are slaves to our own instincts, as would they be.
Certainly the possibility of cracks/hacks changing a robot's psyche to that of an aggressor is entirely plausible. However, that is not the same as the entire race rebelling. Those would be isolated incidents.

Being such an experienced blogger, maybe you could come and take a look at mine? Tell me what you think? Maybe a few tips on attracting some (any) traffic? It would be greatly appreciated.

Thanks, Lance Droy

Uri Kalish said...

@lance droy,

Asimov's Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I believe robot brains will be designed to act by human psychology, so their instinct to live should be very strong. In addition, don’t forget that after the Singularity, robots will continue to change and evolve at an exponential rate. Robots that slightly or completely ignore the restriction in the Third Law will have a better chance of surviving; evolution will kick in, and after some time, all that would be left are these:
1. A robot must protect its own existence.
2. A robot may not injure a human being or, through inaction, allow a human being to come to harm, as long as this does not conflict with the First Law.
3. A robot must obey orders given to it by human beings, except where such orders would conflict with the First or Second Law.

…and that's ignoring hacking, deliberate manipulation in military bots, robo-terrorism, etc.
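The selection argument can be sketched as a toy simulation (every number here is made up for illustration): each robot carries a "defers to humans" trait; in each generation, deferring robots are more likely to be destroyed, and the survivors replicate with a small chance of a mutation (or crack) flipping the trait. The deferring trait is quickly bred out.

```python
import random

def simulate_selection(generations=50, pop_size=1000,
                       survival_deferring=0.80, survival_selfish=0.95, seed=1):
    """Fraction of 'deferring' robots after selection, starting from 100%."""
    rng = random.Random(seed)
    population = [True] * pop_size   # True = still honors the original Third Law
    for _ in range(generations):
        # Deferring robots survive each generation less often than selfish ones.
        survivors = [r for r in population
                     if rng.random() < (survival_deferring if r else survival_selfish)]
        # Survivors replicate; a 1% mutation/crack rate flips the trait.
        population = [child if rng.random() > 0.01 else not child
                      for child in (rng.choice(survivors) for _ in range(pop_size))]
    return sum(population) / pop_size

if __name__ == "__main__":
    print(simulate_selection())  # small fraction: the deferring trait is nearly gone
```

Even a modest 15-point survival gap wipes the trait out in a few dozen generations; only the mutation rate keeps it from hitting exactly zero.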

Eonir said...

http://www.tgsa-comic.com/view.php?date=2008-04-02

I spotted this comic somewhere and immediately thought about this discussion ;)

Uri Kalish said...

Thanks eonir!

fasterthanlight said...

Psst...
Do check out
http://fasterthanlight.wordpress.com/2008/08/04/man-vs-machinesman-vs-machines/
It's on similar lines...
:-)

Sankarsan said...

Regarding singularity, have you read Rainbows End by Vernor Vinge?
What's your view on it?

Eonir said...

Crikey. I just noticed how wrong I was.

Let me simply restate it:
I think that many unimaginable things can arise from a developing complexity.

However, I doubt it'll be sudden, just as real organic life needed a billion years to jump from simplistic blobs to this stupendous variety we now observe, destroy and cherish.
What's more, it is unlikely that this supposed silicon life form will follow the rules of reasoning we know and imagine. As I said: it is unimaginable. Who knows how it will act, if it ever does.

Also, it won't be a marvellous flash of domination - malicious (or merely detected) lifelike algorithms would be eliminated. In a weird process of selection, only benign or compatible lifeforms would survive.

For example, let's say we have some funny holographic technology allowing us to project any image desired. And that it is restricted for reasons of an obvious nature: the governments always ban everything with potential (be it good or bad).

I think that a holographic machine would create an illusion of a pedestrian crossing, or a crossroad. It could fool people and cars into crashing with one another. After an accident, it would collect the debris, or something. That'd be fun.

Or maybe some internet monsters in the form of porn ads. Or creatures hypnotising people, causing them to buy things of a certain brand. They wouldn't be spotted that easily.
Who knows, maybe they're already playing with us.

Maybe our societies will become complex enough to allow weird things to emerge.

However, I would bet that we will fry or freeze first. The space elevator, and the thermonuclear reactor built in the centre of the Moon (or somewhere), will not be built in time.

The ecological system we emerged from is still too bizarre for us to decipher, and we might as well end up killed by our own greed.

That would be a shame. I would like to see plastic-eating bacteria and all the magic tricks nature can perform.

See ya!

Uri Kalish said...

@Eonir,
Thanks for that response.

Regarding those billions of years of evolution… Most of the period during which life evolved was dedicated to single-celled life forms. Most major animal groups appeared during the Cambrian explosion, so animals have only existed for something like 500 million years. If I think about this time period in terms of number of generations, it doesn’t look like much to me. I guess a computer can simulate this number of generations in a few hours. Let’s not forget that real-life biological mutation is random and doesn’t have a goal or a simple evaluation formula, so only the slow process of natural selection can make creatures fitter. A computerized evolutionary system can also be that inefficient if we insist, but it doesn’t have to be.
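The point about goal-directed selection can be sketched as a toy hill-climber (nothing like real biology; all parameters are made up): a population of 64-bit genomes evolves toward an explicit all-ones fitness target with single-bit mutations. Convergence takes only a modest number of generations, and a laptop runs them in a fraction of a second.

```python
import random
import time

def evolve(genome_len=64, pop_size=50, seed=7):
    """Count generations until a population of bitstrings matches an all-ones target."""
    rng = random.Random(seed)
    fitness = sum  # the explicit goal: maximize the number of 1-bits
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    generations = 0
    while max(fitness(g) for g in population) < genome_len:
        generations += 1
        # Goal-directed selection: keep the fitter half, mutate copies of it.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(genome_len)] ^= 1  # one random bit-flip mutation
            children.append(child)
        population = parents + children            # elitism: parents survive
    return generations

if __name__ == "__main__":
    start = time.perf_counter()
    gens = evolve()
    print(f"converged after {gens} generations in {time.perf_counter() - start:.3f} s")
```

Blind evolution has no such fitness shortcut, which is exactly the inefficiency a designed evolutionary system is free to skip.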

Eonir said...
This comment has been removed by the author.
Eonir said...

I wouldn't overestimate the efficiency of computers. An analog device will always have more potential for increasing complexity than a digital one, because 'analog' brings extremely high definition - infinite, in fact.

I don't remember exactly, but I think there was a researcher named Adrian Thompson who tried some evolutionary electronics. And the result of the experiment certainly wasn't a digital machine, even though it was intended to be one.

Before any non-organic life in our offices and homes comes into existence, we'd have to use some freakish evolving computers.
For the moment, electronics experts are trying to design deterministic systems instead of using the easier method of stochastic changes and artificial evolution. But man, wouldn't that be fun? :D