What does it mean to be human in the age of AI and robotics?

Professor Kathleen Richardson, who will give a talk at UCD in November, is Director of the Campaign Against Sex Robots (Source: UCD)

The advance of AI, robotics, and other new technologies is leading to an unprecedented transformation of the human world.

The Plotting the Future series of public lectures at UCD seeks to explore how the place of humans in the world is changing, and the implications of that change.

Kathleen Richardson, Professor of Ethics and Culture of Robotics and AI at De Montfort University, Leicester, will deliver the fifth lecture in the series on 9th November with a talk entitled ‘Turning Persons into Property and Property into Persons’.

Kathleen Richardson is the Director of the Campaign Against Sex Robots, a Senior Research Fellow in the Ethics of Robotics, and part of the Europe-wide DREAM project (Development of Robot-Enhanced Therapy for Children with AutisM).

Kathleen is the author of An Anthropology of Robots and AI: Annihilation Anxiety and Machines. She is now working on her second manuscript, The Robot Intermediary? An Anthropology of Attachment and Robots for Children with Autism.

This is a public lecture and all are welcome, but registration is required in advance. To register visit http://www.ucd.ie/humanities/events/plottingthefuture/

Killer robots with A.I. are worrying scientists

CLICK ABOVE to listen to a discussion with Keelin Shanley on the dangers of killer robots with A.I. on Today with Sean O’Rourke (broadcast 5th August 2015)

Scientists are worried about how mankind will control robots with advanced built-in artificial intelligence (Credit: Warner Bros)

Huge advances in robotics and artificial intelligence mean that intelligent ‘killer robots’ could be ‘living’ among us in just a few years, and scientists and experts in the field are worried.

Origins

Artificial intelligence is the name given to the attempt by scientists to replicate human intelligence in a computer. At its most basic, it is software based on mathematics.

The scientific ‘father’ of A.I., as it is called, is Alan Turing, the brilliant English mathematician and code-breaker whose life was portrayed in last year’s film The Imitation Game, which many listeners will have seen.

We can, in fact, lay claim to Turing for Ireland, as he was half Irish. His mother, Ethel Sara Stoney, was Irish, attended Alexandra College in Milltown, Dublin, and was part of a famous Anglo-Irish scientific family.

Ethel’s relations included George Stoney, the scientist who coined the term ‘electron’, and after whom a street in Dublin’s Dundrum is named, as well as Edith Stoney, regarded as the first woman medical physicist.

Turing’s idea was that a machine, using a mathematical alphabet consisting of just two symbols, 0 and 1, could solve any problem that can be computed.

This machine was the universal Turing machine, and Turing came up with the idea as far back as 1936, when he was just 24 years old.
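To make Turing’s idea concrete, here is a minimal sketch of a Turing-style machine in Python. It is purely illustrative: Turing’s 1936 formulation was on paper, and the toy ‘flip the bits’ machine and its transition table below are invented for the example.

```python
# A minimal Turing machine simulator (an illustrative sketch, not Turing's notation).
# The machine reads a tape of 0s and 1s and is driven entirely by a transition
# table mapping (state, symbol) -> (symbol to write, move, next state).

def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):          # extend the tape with blanks as needed
            tape.append(blank)
        write, move, state = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Toy machine: flip every bit, then halt on reaching the blank after the input.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", FLIP))  # prints 1001_
```

Everything the machine does, however complex, reduces to this table-lookup loop over 0s and 1s, which is Turing’s point.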

Many identify the birth of A.I. with a now-famous scientific gathering, the Dartmouth College Summer Research Project on Artificial Intelligence, held in the U.S.A. in 1956.

After that, in the 1960s and 70s, A.I. researchers developed programs that could solve basic problems of algebra, prove mathematical theorems and speak English.

The public was astonished, and this was the background to the creation of the HAL 9000 computer on the Discovery spaceship in 2001: A Space Odyssey.

The US government poured money into A.I. research, and it was predicted that an intelligent machine, to rival or surpass a human, would be built within 20 years.

As is often the case in science, however, the predictions were overly optimistic, and didn’t anticipate the scale of the technical problems that had to be overcome.

Yet, in the past decade, A.I. researchers have started to create more intelligent software which learns from its environment like a toddler, and can think for itself.

Earlier A.I. systems could only respond to direct commands, and were unable to learn from, or adapt to, the environment around them.
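One standard modern way to build such a system is reinforcement learning, where software improves its behaviour by trial and error. The article names no particular algorithm, so the sketch below, a tabular Q-learning agent in a five-cell ‘corridor’ world with parameters invented for the example, is only a rough illustration of the idea.

```python
import random

# A toy "self-learning" agent (tabular Q-learning). It is never told the rules;
# it learns, from reward alone, to walk right along a 5-cell corridor.

N_STATES, ACTIONS = 5, ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """The environment: reward 1 for reaching the rightmost cell, else 0."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    return nxt, 1.0 if nxt == N_STATES - 1 else 0.0

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has been learned so far; occasionally explore.
        action = (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy is (almost always) "right" in every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Note the contrast with older systems: nothing in the code tells the agent which way to go; the behaviour emerges from feedback.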

The advances in A.I. mean that machines with A.I. can now start to do really useful things, like helping humans to be better pilots, doctors and teachers.

But creating machines that can help us in these many ways also means that we have created another intelligence, which may, or may not, be under our control.

As in the Frankenstein story, written by Mary Shelley in 1818, we may ultimately create a life form which we cannot control, and which destroys us.

Concerns

For a long time, A.I. was not very intelligent at all, and came nowhere close to replicating the miracle of bio-engineering that is the human brain.

However, ‘self-learning systems’, which interact with the environment and learn from it much as humans do, have recently become a lot better.

So much so that the Campaign to Stop Killer Robots was set up in 2013, with the support of about 1,000 A.I. scientists and researchers.

The main aim was to ban the development of what they called ‘autonomous weapons’ before they became a reality. That is, weapons that can ‘think’ for themselves.

The United States already uses ‘drones’ in its conflicts in Afghanistan and Syria, and in March an ISIS-operated drone was shot down by the US near Fallujah, Iraq.

Drones are robotic planes that require an operator to select and kill targets, but with advances in A.I., drones could select targets and kill without an operator.

Obama likes drones because they don’t risk pilots’ lives and they cost less. A drone costs about $12 million, while a new fighter costs about $120 million.

Earlier this year, an open letter signed by leading lights of science and industry said that mankind is heading for a dark future without controls on A.I.

This was signed by Stephen Hawking, the top A.I. researcher and pioneer Professor Stuart Russell, and Elon Musk of the space company SpaceX.

Hawking said: “The development of full artificial intelligence could spell the end of the human race.”

Musk said that allowing A.I. to develop freely without controls would be akin to “summoning the demon.”

Science, one of the world’s most highly regarded scientific journals, entered the debate with a special issue on the subject of A.I. which also highlighted many concerns.

The issue right now is that A.I. is no longer confined to the realm of science fiction, and scientists are worried it will become a reality without proper human controls.

Awareness

There are two schools of thought among A.I. researchers on whether robots will ever develop an intelligence that is truly ‘self-aware’ like humans.

One view holds that A.I. will always be artificial and will never truly replicate life and become self-aware like we are.

The other believes that A.I. systems will, at some point in the future, become ‘self-aware’ in precisely the same way that humans are aware they exist.

It’s hard to decide who might be correct, as right now we understand little about what makes us humans conscious and self-aware.

It is one of the great mysteries of science, and this means it will be very hard for us to determine whether a machine is truly conscious or not.

Essentially, we don’t know what we are looking for.

However, we can say that, as humans, by whatever magic of biology, we are aware of ourselves existing, and can take decisions based on our own moral code.

The point is, should we risk allowing A.I. machines with weapons and killing capacity to become ‘self-aware’ and autonomous? It seems a big risk to take.

Benefits

A.I. can be used to make our environment and our devices more intelligent, leading to a higher standard of living, arguably, for us all in the future.

All our devices will be connected to the ‘Net, the Internet of Things, and these things, equipped with A.I., will be better able to serve the needs of their human masters.

A robot that prepares your dinner, using fresh ingredients, and has it ready for you when you come home tired from work? It’s already happening.

A driverless car that takes you safely home from the pub on a Friday night when you have been out enjoying a few drinks with friends.

A robotic surgeon that doesn’t make mistakes, and has all the skills learned from hundreds of years of surgery to call on, may save your life on the operating table.

Superior speech recognition systems, which mean you can talk to devices as you move around your home and have them do things for you, as required.

A domestic robot that can do chores around the house, or act as a companion or carer to people, with built-in empathy, or personality?

Machines with A.I. could rapidly go through the massive amount of data that is out there in the world now, and make sense of it, in a way that humans struggle to do.

The sky’s the limit with A.I. but, as always with science, there can be a dark side.

Super intelligence

Super intelligence, as articulated by the Oxford University philosopher Nick Bostrom at the Future of Humanity Institute, is a huge concern for some A.I. thinkers.

At some point in the not-too-distant future, machines will surpass humans in general intelligence. At that point, machines will replace humans as the dominant ‘life form’ on Earth. Life here will have entered its post-biological phase. We’ll be extinct.

Sufficiently intelligent machines could improve themselves to reach an even higher level of intelligence, without the need for humans.

The fate of humans, whether they continued to exist or not, would be dependent on the whim of the machine super intelligence.

Our relationship to the super intelligence would be like the relationship gorillas, for example, have with humans today. We’d be endangered, or doomed.

Thinkers like Bostrom and the futurist Ray Kurzweil talk about a moment called the ‘technological singularity’, when A.I. becomes truly super intelligent.

This is the moment when a computer or a robot with A.I. becomes capable of designing better, more intelligent versions of itself.

Rapid repetitions of this would result in an intelligence explosion, and very quickly, a super intelligence would emerge, way beyond human intelligence.

It would be like putting evolution into super-fast forward, and our own slow biological evolution would be unable to compete with this.
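To see why ‘explosion’ is the word used, consider a toy calculation. If each redesign multiplied a machine’s capability by even a modest fixed factor, capability would grow exponentially with the number of generations; the 1.5x gain per generation below is an invented figure, used purely for illustration.

```python
# Illustrative arithmetic only: compounding gains from recursive self-improvement.
capability = 1.0           # define the starting point as human-level
GAIN_PER_GENERATION = 1.5  # invented factor, for illustration only

for generation in range(1, 11):
    capability *= GAIN_PER_GENERATION
    print(f"generation {generation}: {capability:.1f}x human level")

# Ten redesigns already give ~58x; thirty give roughly 190,000x.
```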

This super intelligence might be able to solve problems, and answer questions which have proved beyond the capabilities of human beings to solve.

Scientists argue over when this moment might arrive: Kurzweil predicts it will be with us by 2045, while some have argued it will be with us as early as 2030.

Threat

No-one is agreed on how best to deal with unregulated ‘autonomous weapons’ or with the prospect of hostile super intelligent machines.

The aforementioned Elon Musk, the SpaceX entrepreneur, has put $10 million of his money into projects aimed at keeping A.I. ‘under control’ and ‘beneficial’.

We would try to build in elements that would prevent A.I. machines from turning on humans, like the protective Terminator in the Hollywood film.

We might do well to take on board ‘The Three Laws of Robotics’ devised by brilliant science fiction author Isaac Asimov (author of I, Robot) back in 1942.

These are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
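The Laws amount to a strict priority ordering over a robot’s possible actions. As a rough sketch of what ‘building in’ such constraints might look like, here is an illustrative Python version; the Action fields and the scenario are invented for the example, and real A.I. safety research is far harder than this.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would the action injure a human?
    neglects_human: bool     # would it let a human come to harm through inaction?
    obeys_order: bool        # does it follow the human's current order?
    self_destructive: bool   # would it destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: overrides everything else.
    return not (action.harms_human or action.neglects_human)

def choose(actions: list[Action]) -> Action:
    # Among First-Law-compliant actions (assumed non-empty here), prefer
    # obedience (Second Law), then self-preservation (Third Law).
    candidates = [a for a in actions if permitted(a)]
    return max(candidates, key=lambda a: (a.obeys_order, not a.self_destructive))

options = [
    Action("fire weapon", harms_human=True, neglects_human=False,
           obeys_order=True, self_destructive=False),
    Action("stand down", harms_human=False, neglects_human=False,
           obeys_order=False, self_destructive=False),
]
print(choose(options).name)  # "stand down": the First Law overrides the order
```

Asimov’s own stories, of course, were largely about how such rules break down in practice.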

Future 

Or perhaps our future is to become cyborgs, to adopt and incorporate this immense A.I. intelligence as part of our own existence.

We could decide to ditch our biology, and to become a race of super intelligent, immortal machines.

Our ‘primitive’, fragile, biological beginnings may, in time, be forgotten.

Can we live very long life-spans? If we can, is it moral?

 

Can we defeat death? A growing number of scientists and philosophers believe we can. But do we want to live forever? And should we? [Credit: Columbia University Mailman School of Public Health]

Many scientists believe that children born today, in the western world at least, will live to an average of 100 years. What happens after that is uncertain, but many influential thinkers believe it will be ultimately possible for mankind to defeat death – to become immortal.

The questions we pose in this episode of Life Matters (episode 10) are: Do we want to live forever? And if we did, would it be the right thing to do?

Click HERE to LISTEN

This week’s contributors:

Professor Andrew Jackson 

An evolutionary biologist at TCD, Andrew gave us examples of animals that live very long lives and are hard to kill, as well as the routine growing of ‘immortal’ cell lines by scientists to study cancer. He ‘wouldn’t be surprised’ if science found a way, if not in his own lifetime then in the lifetime of his children, for people to become immortal.

Click here for more

Zoltan Istvan 

A transhumanist who wants to live for several centuries, he knows the value of life, having survived many near-death experiences on a number of exploring adventures around the world. He has been nominated as the presidential candidate of the Transhumanist Party of the US in the 2016 US election. The goal of his party is to conquer death, and he and his colleagues believe that ageing can be stopped and reversed by science and technology.

Click here for more

————————————

Dr John Messerly 

Dr Messerly is an author, philosopher and transhumanist who has written extensively on God and religion. He believes that technology will make it possible for humans to defeat death, the greatest evil. He would like to live forever, to do the many things that there is no time to do over a normal human life. There may be some people who don’t want to live forever, he says, and for those people suicide should be permissible if they want to opt out.

Click here for more

———————————–

Stephen Cave

An author and former British diplomat, he wrote the award-winning book ‘Immortality: The Quest to Live Forever and How it Drives Civilization’. He believes that humans tell themselves four types of stories to cope with the idea of death and to provide an illusion that they will somehow live forever. These stories are common across all cultures and religions. He believes that these stories are all appealing, but false, so every moment of life on Earth is precious.

Click here for more

————————————

Professor Christine Overall 

A Canadian philosopher who believes that it is ethically justifiable to enable people to live very long lives, perhaps up to the maximum possible, around 120 or 140. It is important, Professor Overall states, that people who have often lived very hard lives are permitted, at the end of their lives, to do some of the things that they have not been able to do. She rejects the idea that there is a ‘duty to die’ at a certain age. This is sexist, she says, because women live longer and so would suffer more from such a duty, and ageist, because it assumes that elderly people are a ‘burden’ on society.

Click here for more

————————————

Professor John Hardwig

An American philosopher and the leading proponent of the idea that people have an ethical ‘duty to die’ once they reach a certain age. Prof Hardwig believes that it is wrong for people to become a burden on the younger generation and consume society’s resources long past the point where they can look after themselves or contribute to society. He believes that people living longer, perhaps indefinitely long lives, would create unsustainable pressures on society.

Click here for more

——————————