Killer robots with A.I. are worrying scientists

Listen above: a discussion with Keelin Shanley on the dangers of killer robots with A.I., on Today with Sean O’Rourke (broadcast 5th August 2015).

Killer robot: scientists are worried about how mankind will control robots with advanced built-in artificial intelligence (Credit: Warner Bros)

Huge advances in robotics and artificial intelligence mean that intelligent ‘killer robots’ could be ‘living’ among us in just a few years, and scientists and experts in the field are worried.

Origins

Artificial intelligence is the name given to the effort by scientists to replicate human intelligence in a computer. At its most basic, it is software based on mathematics.

The scientific ‘father’ of A.I., as it is called, is Alan Turing, the brilliant English mathematician and code-breaker whose life was portrayed in last year’s film The Imitation Game, which many listeners will have seen.

We can, in fact, lay claim to Turing for Ireland, as he was half Irish. His mother, Ethel Sara Stoney, was Irish, attended Alexandra College in Milltown, Dublin, and was part of a famous Anglo-Irish scientific family.

Ethel’s relations included George Stoney, the scientist who coined the term ‘electron’, and after whom a street in Dublin’s Dundrum is named; as well as Edith Stoney, regarded as the first woman medical physicist.

Turing’s idea was that a machine, using a mathematical alphabet consisting of just two symbols, 0 and 1, could solve any problem that can be broken down into a sequence of logical steps.

This machine was the universal Turing machine, and Turing came up with the idea as far back as 1936, when he was just 24 years old.
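
To make that abstract idea concrete, here is a minimal sketch in Python (my own illustration, not anything from the broadcast; the machine, its rule table and all the names are invented for this example). It follows the Turing scheme: a tape of 0s and 1s, a read/write head, and a table of rules. This toy version merely flips every bit on the tape and halts, but the same scheme, given the right rule table, can in principle carry out any computation.

```python
# Illustrative sketch only: a toy Turing-style machine, not a real A.I. system.
def run_turing_machine(tape, rules, state="start", head=0):
    """Repeatedly look up (state, symbol), write a symbol, move the head, change state."""
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"  # '_' marks a blank cell
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1

    return "".join(tape)

# Rules for one toy machine: flip each bit and move right; halt on reaching a blank.
invert_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("01101", invert_rules))  # prints 10010
```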

Many identify the birth of A.I. as occurring at a now famous scientific conference, the Dartmouth Summer Research Project on Artificial Intelligence, held at Dartmouth College in the U.S.A. in 1956.

After that, in the 1960s and 70s, A.I. researchers developed programs that could solve basic problems of algebra, prove mathematical theorems and speak English.

The public was astonished, and this was the background to the creation of the HAL 9000 computer aboard the spaceship Discovery in 2001: A Space Odyssey.

The US government poured money into A.I. research and it was predicted that an intelligent machine, to rival or surpass a human, would be built inside 20 years.

As is often the case in science, however, the predictions were overly optimistic, and didn’t anticipate the scale of the technical problems that had to be overcome.

Yet, in the past decade, A.I. researchers have started to create more intelligent software which learns from its environment, like a toddler, and can think for itself.

Earlier A.I. systems could only respond to direct commands, and were unable to learn from, or adapt to, the environment around them.
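
As a rough illustration of the difference (a minimal sketch of my own, not a system described in the broadcast; the corridor, rewards and all names are invented), the short Python program below uses tabular Q-learning, one simple ‘self-learning’ technique. The agent is never given a direct command; it wanders a six-cell corridor and, through trial, error and reward, learns on its own that moving right reaches the goal.

```python
# Illustrative sketch only: a tiny 'learning from the environment' example.
import random

N_STATES = 6                      # cells 0..5; cell 5 is the goal
ACTIONS = ("left", "right")
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """The 'environment': move one cell; reward 1 only on reaching the goal."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose(state, eps=0.2):
    """Mostly pick the best-known action, sometimes explore at random."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for _ in range(200):              # 200 episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = reward + 0.9 * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (target - q[(state, action)])
        state = nxt

# After training, the learned policy is simply 'move right' in every cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```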

The advances in A.I. mean that machines can now start to do really useful things, like helping humans to be better pilots, doctors and teachers.

But creating machines that can help us in these many ways also means that we have created another intelligence, which may, or may not, be under our control.

As in the Frankenstein story, written by Mary Shelley in 1818, we may ultimately create a life form which we cannot control, and which destroys us.

Concerns

For a long time, A.I. was not very intelligent at all, and came nowhere close to replicating the miracle of bio-engineering that is the human brain.

However, recent ‘self-learning systems’, which interact with the environment and learn from it as humans do, have become a lot better.

So much so that the Campaign to Stop Killer Robots was set up in 2013, with the support of about 1,000 A.I. scientists and researchers.

The main aim was to ban the development of what they called ‘autonomous weapons’ before they became a reality. That is, weapons that can ‘think’ for themselves.

The United States already uses ‘drones’ in its conflicts in Afghanistan and Syria, and in March an ISIS-operated drone was shot down by the US near Fallujah, Iraq.

Drones are robotic planes that require an operator to select and kill targets, but with advances in A.I., drones could select targets and kill without an operator.

Obama likes drones because they don’t risk pilots’ lives and they cost less: a drone costs about $12 million, while a new fighter jet costs about $120 million.

Earlier this year, an open letter signed by leading lights of science and industry warned that mankind is heading for a dark future without controls on A.I.

This was signed by Stephen Hawking, the top A.I. researcher and pioneer Professor Stuart Russell, and Elon Musk of the space company SpaceX.

Hawking said: “The development of full artificial intelligence could spell the end of the human race.”

Musk said that allowing A.I. to develop freely without controls would be akin to “summoning the demon.”

Science, one of the world’s most highly regarded scientific journals, entered the debate with a special edition on the subject of A.I., which also highlighted many concerns.

The issue right now is that A.I. is no longer confined to the realm of science fiction, and scientists are worried it will become a reality without proper human controls.

Awareness

There are two schools of thought among A.I. researchers on whether robots will ever develop an intelligence that is truly ‘self-aware’ like humans.

One view holds that A.I. will always be artificial and will never truly replicate life and become self-aware like we are.

The other believes that A.I. systems will, at some point in the future, become ‘self-aware’ in precisely the same way that humans are aware they exist.

It’s hard to decide who might be correct as, right now, we understand little about what makes us humans conscious and self-aware.

It is one of the great mysteries of science, and this means it will be very hard for us to determine whether a machine is truly conscious or not.

Essentially, we don’t know what we are looking for.

However, we can say that, as humans, by whatever magic of biology, we are aware of ourselves existing, and can take decisions based on our own moral code.

The point is, should we risk allowing A.I. machines, with weapons and killing capacity, to become ‘self-aware’ and autonomous? It seems a big risk to take.

Benefits

A.I. can be used to make our environment and our devices more intelligent, leading, arguably, to a higher standard of living for us all in the future.

All our devices will be connected to the ‘Net, the Internet of Things, and these things, equipped with A.I., will be better able to serve the needs of their human masters.

A robot that prepares your dinner, using fresh ingredients, and has it ready for you when you come home tired from work? It’s already happening.

A driverless car that takes you safely home from the pub on a Friday night, when you have been out enjoying a few drinks with friends.

A robotic surgeon, which doesn’t make mistakes, and has all the skills learned from hundreds of years of surgery to call on, may save your life on the operating table.

Superior speech-recognition systems that let you talk to devices as you move around your home, and have them do things for you as required.

A domestic robot that can do chores around the house, or act as a companion or carer to people, with built-in empathy or personality?

Machines with A.I. could rapidly go through the massive amount of data that is out there in the world now, and make sense of it, in a way that humans struggle to do.

The sky’s the limit with A.I., but, as always with science, there can be a dark side.

Super intelligence

Super intelligence, as articulated by the Oxford University philosopher Nick Bostrom of the Future of Humanity Institute, is a huge concern for some A.I. thinkers.

At some point in the not-too-distant future, machines will surpass humans in general intelligence. At that point, machines will replace humans as the dominant ‘life form’ on Earth. Life here will have entered its post-biological phase. We’ll be extinct.

Sufficiently intelligent machines could improve themselves to reach an even higher level of intelligence, without the need for humans.

The fate of humans, whether they continued to exist or not, would be dependent on the whim of the machine super intelligence.

Our relationship to the super intelligence would be like the relationship gorillas, for example, have with humans today. We’d be endangered, or doomed.

Thinkers like Bostrom, and futurist Ray Kurzweil, talk about a moment called a ‘technological singularity’ when A.I. becomes truly super intelligent.

This is the moment when a computer or a robot with A.I. becomes capable of designing better, more intelligent versions of itself.

Rapid repetitions of this would result in an intelligence explosion, and very quickly, a super intelligence would emerge, way beyond human intelligence.

It would be like putting evolution into super-fast forward, and our own slow biological evolution would be unable to compete with this.

This super intelligence might be able to solve problems and answer questions that have proved beyond the capabilities of human beings.

Scientists argue over when this moment might arrive: Kurzweil predicts it will be with us by 2045, while some have argued it could come as early as 2030.

Threat

No one agrees on how best to deal with unregulated ‘autonomous weapons’ or with the prospect of hostile super intelligent machines.

The aforementioned Elon Musk, the SpaceX entrepreneur, has put $10 million of his money into projects aimed at keeping A.I. ‘under control’ and ‘beneficial’.

We could try to build in elements that would prevent A.I. machines from turning on humans, like the protective Terminator in the Hollywood films.

We might do well to take on board ‘The Three Laws of Robotics’ devised by the brilliant science fiction author Isaac Asimov (author of I, Robot) back in 1942.

These are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Future 

Or perhaps our future is to become cyborgs, adopting and incorporating this immense artificial intelligence as part of our own existence.

We could decide to ditch our biology, and to become a race of super intelligent, immortal machines.

Our ‘primitive’, fragile, biological beginnings may, in time, be forgotten.