The device pictured above – called the Nu Nrg Reformer and invented in Ireland – has been scientifically proven to increase fuel-burning efficiency in engines by 15%.
How it Works
The Reformer device pictured on the right works by extracting water from the reservoir tank above, and splitting it into hydrogen and oxygen gas, the two elements that make up water.
The electrical power needed to get the Reformer working initially comes through a link to the combustion engine’s battery.
When the Reformer gets going, it starts to produce the two gases, and this causes the water in the tank to circulate and heat up.
This in turn causes gases to form and bubbles of gas rise to the top of the water.
The hydrogen gas exits the Reformer via a pipe which carries it to the engine, entering through the air intake valve.
The hydrogen is burned as a fuel, which reduces the engine’s need to burn petrol or diesel. This improves combustion efficiency and almost eliminates the smoke and soot, which are particularly associated with some diesel engines.
Water vapour is also generated along with hydrogen and oxygen, and it is also piped to the engine via the air intake valve.
This cools the combustion temperature and ensures that oxides of nitrogen (commonly known as NOx) which are damaging to human health are significantly reduced.
Meanwhile, the electronic control module – the ‘brain’ of the device – adjusts the electrical charge going into the Reformer.
This ensures that only the required amount of electricity is going into the Reformer for the job in hand.
The water in the reservoir can be re-filled, and is ‘deionized’, which removes charged molecules that conduct electricity and could interfere with the electrical current.
The steel enclosure acts to hold the unit safely and secure it in position on the vehicle.
“The device is an electrolyser,” explained Professor Cassidy, “which means it splits water into its basic parts, hydrogen and oxygen.”
“There is a lot of work going on trying to produce hydrogen from water and other sources, as a cleaner fuel option to burning fossil fuels.”
Fossil fuels are running out at an alarming rate, explained Prof Cassidy, and this is being exacerbated by the growing fuel demand in China and India, yet there has been little progress on identifying new fuel sources.
While fracking and nuclear power have competed with renewable forms of energy such as wind, tide and solar, Prof Cassidy continued, there is scope for what’s called the ‘hydrogen economy’ to expand.
The majority of hydrogen is synthesised using ‘steam reforming’, said Prof Cassidy, which requires fossil fuel and steam to produce hydrogen.
There is another method for producing hydrogen which is called electrolysis, where water is split into hydrogen and oxygen using a direct electrical current. This accounts for only 4% of hydrogen production.
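To get a feel for the electrolysis arithmetic, Faraday's law gives the hydrogen yield for a given current. This is a textbook sketch with illustrative numbers, not measurements from the Reformer:

```python
# Back-of-the-envelope: hydrogen produced by electrolysis (Faraday's law).
# Splitting water (2 H2O -> 2 H2 + O2) takes 2 electrons per H2 molecule.
FARADAY = 96485.0      # charge of one mole of electrons, in coulombs
H2_MOLAR_MASS = 2.016  # grams per mole of hydrogen gas

def hydrogen_grams(current_amps: float, seconds: float) -> float:
    """Mass of H2 (grams) produced by an ideal cell at the given current."""
    charge = current_amps * seconds       # total charge passed, coulombs
    moles_h2 = charge / (2 * FARADAY)     # 2 electrons per H2 molecule
    return moles_h2 * H2_MOLAR_MASS

# e.g. a 10 A current running for one hour (illustrative figures only):
print(round(hydrogen_grams(10, 3600), 3))  # ~0.376 g of H2
```

Real cells lose some energy to heat and side reactions, so actual yields would be somewhat lower than this ideal figure.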
The Reformer device invented by David Harvey is an electrolyser.
The electrolysing method of producing hydrogen, said Prof Cassidy, is attracting greater interest because it offers the possibility of obtaining large amounts of hydrogen without the consumption of fossil fuels, emission of pollutant gases or use of nuclear power.
Cian O’Reilly, a chemistry graduate from DIT, worked on the portable electrolysis cell there.
“The idea is to produce hydrogen, which can be used as a fuel in a combustion engine, reducing the need to burn petrol or diesel.”
“In a combustion engine, whether it uses petrol or diesel, this device supplies hydrogen to the engine via the air intake valve.”
“Since the device works at high temperature water vapour is also generated and introduced to the engine – essentially cooling it.”
“This cooling reduces the combustion temperature of petrol or diesel fuel in the engine, and, thus, the amount of NOx emissions produced.”
“Research has shown that introducing water vapour into the air intake valve of a diesel engine will reduce emissions and lead to a decrease in fuel consumption as less hydrocarbons will be burnt.”
The net result of fitting this portable device to an engine should be a 15% increase in fuel-burning efficiency and a reduction in emissions, said Prof Cassidy.
Inventor David Harvey says that preliminary emissions tests in the laboratory showed reductions in CO and NOx emissions – two gases which are poisonous to humans – of 91 and 93.4 per cent respectively.
Prof Cassidy said that while the emissions tests, which were carried out by Dr Jack Tracey at DIT, look promising, more testing is required.
The recent VW emissions scandal centred on Volkswagen claiming falsely low NOx emissions from some of its popular diesel-powered cars.
Car manufacturers generally have struggled to meet stringent NOx emissions regulations for diesel engines, particularly in the US, while maintaining diesel engine performance.
“This device works at low direct electrical current, which means it requires less power input to achieve its efficiencies,” said Prof Cassidy.
“This also means it is safe, and we have shown it to be reliable and durable, as well as portable and inexpensive.”
A climate deal in Paris looks unlikely, and storms will hit the Irish coastline with greater frequency and intensity in coming years and decades as a result (Credit: Independent.ie)
The COP 21 UN Climate Conference is underway in Paris, and hopes are high, yet, it will take a political miracle for a solid, lasting, and enforceable agreement to be reached.
The one common denominator, of course, is that we are all potentially facing into a climate Armageddon. The problem is we don’t know precisely when that will happen or how bad it will be.
Click above to hear discussion of COP 21 deal prospects with Anne Marie Donelan, host of The Grapevine, on CRC FM.
Politics and human nature being what they are, politicians are very good at dealing with a ‘clear and present danger’ but very bad at dealing with a vague, not easily identifiable threat.
This is what unites the problems of dealing with ISIL and climate change. The enemy is out there, but not in full view.
Meanwhile, there have already been rumblings of ‘breaches of trust’ between the rich nations at COP 21 and the poorer nations. Meetings have been held outside of the main group, and this has increased the sense of paranoia and tension that surrounds the meeting.
Think about how hard it can be to get two nations to agree on difficult issues. Ireland and the UK perhaps, or Israel and Palestine? Imagine trying to get agreement from all the nations of the world, on climate, when each and every nation is working to a different agenda.
This time around, as opposed to the last attempt to get a climate deal, in Copenhagen in 2009, each country has been asked to submit its own assessment of what it can achieve on emissions reduction.
The plans, when taken all together, are, one analyst reported, likely to lead to a disastrous 3 Celsius rise in global temperatures over pre-industrial levels. So, a lot of painful compromise will be required.
For a multitude of political reasons that looks an almost impossible task, and a fudge of some sort looks the likely outcome.
Expect the announcement of a deal – as no deal would look bad for all the politicians gathered in Paris – but the reality to be different.
Countries like China, India, Japan, and the western nations are not going to risk damaging their economies by making a real deal.
There is too much to lose, and not enough – concrete – to gain. Each nation wants a deal, but they want others to do the compromising.
For example, Ireland’s emissions are high, largely due to agricultural practices here, and we are coming under pressure to reduce them. This will not be easy, and will come at a financial and political cost.
Does the Government have the political will to do this, and risk annoying rural voters with an election coming? I think not.
The US, meanwhile, is a main contributor to greenhouse gas emissions, but they don’t want to do anything to reduce emissions which will hurt their economy. They have an election coming up too.
China, the other main offender, is busy burning cheap coal, trying to get into the elite club of developed nations, and it too, is not inclined to do anything to hinder its progress.
Then there is India. The west is in a very weak position when trying to preach about reductions in greenhouse gas emissions to a country where some 240 million people are living without electricity.
Then there are the agreed targets. Everyone is talking about limiting the damage to a rise of 2 Celsius. However, 2 Celsius is far too high for low lying nations, which could be underwater with that kind of rise. These countries need something of the order of 1.5 C or less to survive.
There is talk about eliminating the use of coal, which will have ‘no future’ as a result. Yet, try telling that to countries like India, and China and, even Japan, that are still burning huge amounts of cheap coal, to provide for their growing demand for energy.
For countries to develop they need cheap energy. Coal provides that, while renewable sources don’t – for now at least.
The best hope of success is if the rich western nations, led by the USA, agree to allow developing nations to continue to burn coal, while paying for technology which will reduce the emissions of carbon dioxide from that burning, or to bury it safely under the ground.
The west also needs to make massive investment in developing renewable sources of energy, such as wind, wave, and solar so that it can truly start to meet growing energy demand – at the right price.
The problem of course is that there is no immediate threat here, which could be the impetus to push a deal over the line.
Think about how Europe dealt with the financial crisis. The can was kicked down the road continually until a gun was put to the leaders’ heads, and the break-up of the EU looked imminent.
Then there is the question of enforcement. How will leaders ensure that everyone is adhering to the deal, if one is reached? What will the penalties be like? Will they be sufficient?
The European Central Bank only enforced its will on reluctant nations like Ireland by threatening to stop money being available in the ATMs. Something just as drastic will be required here if a deal is to work.
However, in the absence of Paris, New York, and London being hit this week by a climate-change inspired Superstorm, a deal looks unlikely.
The history of this issue is one of fudge, from the first climate change conference in Rio in 1992, up to 2009 and failure at Copenhagen. If something real emerges from Paris it will be truly historic.
Click above to hear discussion broadcast on Today with Sean O’Rourke, RTE Radio 1, 30th November, ’15
Google plans to bring a driverless electric car to market in 2018, and is already road testing driverless vehicles in California (Credit: Google)
Electric cars have been around since the late 19th century, but they have never matched the appeal of cars run on either petrol or diesel.
That is all set to change, as the most popular cars on the market in coming decades are likely to be both electric and driverless.
The question is, is Ireland ready for electric, driverless cars? How do they work, are they safe, and how will they potentially make our lives better?
The first commercial electric cars appeared as early as the 1880s, and ‘electric drive’ cars, as they were called, were popular with early drivers.
However, from the turn of the 20th century, there was a growing demand for cheaper automobiles, from the general public.
From the 1920s, petrol was becoming more easily available and cheaper, petrol driven cars had a longer range, had greater horsepower, and the introduction of automatic starting mechanisms in petrol cars increased their appeal to all groups.
Indeed, from as early as 1908, when the first Model T Fords were mass produced, the popularity of the electric car was waning.
In the mid 1960s the United States Congress introduced the first bills recommending support for the development of a new generation of commercial electric cars to try and deal with the issue of air pollution.
This paved the way for a revival of interest in electric cars in the 1970s, a revival further helped by soaring oil prices after the Oil Crisis of 1973, and by the birth of the environmental movement.
It seemed to many back then, 40 years ago, that the time had come for electric cars, but people resisted buying them, due to their cost, so-called ‘range anxiety’ and the daily hassle of recharging their batteries.
The situation stayed like that for the following decades, with electric cars remaining a niche market, but in the last decade two things happened.
Governments, including the Irish government, began actively promoting e cars as a way to reduce emissions of the greenhouse gas carbon dioxide, and to reduce reliance on imports of fossil fuels from the Middle East.
In Ireland this meant grants for people buying e cars (there is a €5,000 grant in place) and tax relief. Allied to that, the ESB began building a network of public charging points, and there are now about 2,000 on the island.
The other thing that happened is that battery technology – which has been slow to develop for technical reasons – has started to improve.
Fully electric cars (there are also electric/petrol and electric/diesel hybrids) are totally dependent on batteries, usually lithium ion types.
These batteries, like the ones in our smartphones, are efficient, but they are expensive. This, of course, affects the sale price of e cars.
The e car batteries need to be 80 per cent cheaper, some industry analysts say, in order for e cars to break through into mass use, and truly compete with cars based on the internal combustion engine (ICE).
Some believe it will be possible to make cost cutting improvements to the lithium ion battery, while others say a new battery technology is needed.
Electric cars are based on pretty simple technology, which hasn’t changed all that much since the first electric cars appeared in the 19th century.
One hundred per cent electric cars such as the Nissan Leaf, the Ford Focus Electric and the VW e-Golf all make use of an electric motor.
There is a battery, or a series of connected batteries, that links to the electric motor and provides the power to drive the car forward.
They are green because they are based on electricity rather than petrol or diesel, but, of course, electricity can be produced by burning fossil fuels.
The battery is vital, as it powers the electric motor, and determines how far the car can travel without a charge, and its performance.
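As a rough sketch of how capacity translates into range (the figures below are illustrative assumptions, not the specifications of any particular model):

```python
# Rough range estimate: battery capacity divided by energy use per distance.
# Both input figures are illustrative assumptions, not real-car specs.
def estimated_range_km(battery_kwh: float, kwh_per_100km: float) -> float:
    """Distance a full battery can cover at a given rate of consumption."""
    return battery_kwh / kwh_per_100km * 100

# e.g. a 24 kWh pack consuming 15 kWh per 100 km:
print(estimated_range_km(24, 15))  # 160.0 km
```

In practice cold weather, speed and driving style all push consumption up, so real-world range is usually below this kind of headline figure.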
The first battery used in any electric vehicle was an old fashioned lead-acid battery which was itself invented in 1859.
The batteries that are, these days, used in electric cars are lithium ion batteries which are light, and have a good ability to store energy.
The problem with lithium ion batteries, as many of us will know from using smartphones, is that they need to be regularly recharged, and that after hundreds of recharges, they can become depleted, and just ‘die’.
So, there is a desperate need for new battery technologies that do not need to be recharged as often, and don’t die after many recharges.
From the buyer’s point of view, the big downside with electric cars is that they have to be recharged for hours, overnight, and that on a long journey the driver might still feel he needs a top-up recharge.
This is something called ‘range anxiety’ and it’s a well known factor that turns off buyers and that e car makers are trying to address.
There are a few competing options. Perhaps the most promising is one being developed in the UK at Cambridge University.
Scientists there last month announced they had found a way to develop batteries that are one-fifth the cost and weight of current e car batteries.
The technology is called lithium air technology and it’s important because it can reduce the cost of electric cars, while also enabling them to match the range of petrol and diesel cars.
Electric cars based on these, the scientists say, could drive from London to Edinburgh on a single charge, hugely increasing the range of e cars.
This new technology also produces batteries which can store a lot of energy, and can recharge thousands of times without the battery dying.
Yet, lithium ion batteries, as we all know from our smartphones, have to be recharged often, and after repeated charging they can gradually die.
A lithium air battery creates a voltage from oxygen molecules – air – in the vicinity of the positive electrode. It appears to be a big breakthrough.
This all looks promising, but it is just emerging from the lab, is at the development stage, and may be a decade before it enters the real world.
Sales of e cars in Ireland remain disappointingly low, despite the efforts of Government to promote e cars through subsidies, grants and tax breaks.
The ESB has been actively promoting the greater use of e cars in Ireland by building a network of public charging points, while grants of €5,000 are available from the Sustainable Energy Authority of Ireland for buyers of new e cars.
Minister Coveney has been pictured driving a fully electric Nissan Leaf, and the ESB has been busy building infrastructure to support e cars.
Yet, in 2014, Ireland’s Central Statistics Office reported that just 222 electric cars were sold, which is poor, but significantly up on the 55 cars sold in 2013.
The Government has set itself a target of 230,000 e cars being in use in Ireland by 2020. We currently have a little over 10,000 e cars here.
To compare, there were 13,929 petrol cars sold in 2014, and 47,559 diesel cars. So, electric is still very much a niche market in Ireland.
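To put those figures in proportion, a quick calculation using only the numbers quoted above gives the electric share of those 2014 sales:

```python
# Electric share of the 2014 Irish new-car sales quoted above
# (petrol and diesel only included for comparison).
electric, petrol, diesel = 222, 13_929, 47_559
total = electric + petrol + diesel
share = 100 * electric / total
print(f"{share:.2f}% of {total:,} new cars were electric")
# -> 0.36% of 61,710 new cars were electric
```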
Ireland might use Norway as a comparison, a country of similar size, where 23,390 electric vehicles were registered in 2014 alone.
The Norwegians have encouraged this by exempting e cars from VAT, and through free car parking, free access to bus lanes and free public charging points for e car owners. Ireland has followed some of these measures.
People are still reluctant to purchase e cars, and one of the main reasons is the ‘range anxiety’ already mentioned, as well as the perceived hassle of charging batteries for hours overnight.
People might also enjoy driving, and feel that an electric car, running silently without gear changes, is not what they traditionally enjoy.
For e cars to really take hold here, the Government might have to follow Norway’s lead and allow e cars travel in bus lanes, and park for free.
Allied to that, the cost of e cars needs to come down. I think they really need to be cheaper than existing petrol or diesel cars to break through.
They might also need to have a ‘unique selling point’ that marks them out as distinctly different or superior to petrol or diesel cars.
There are signs that this might happen, as electric cars are set to become driverless, and that this will happen a lot faster than we might imagine.
Hard-nosed analysts of the global car industry are convinced driverless cars WILL happen, and will happen in the near future.
Certainly, companies with huge reputations, like Google and Apple, are reportedly investing in developing driverless, electric cars.
Volvo are working on one too, as are BMW, and legislation has already been passed in some US states permitting cars to be driverless.
VW too, who are under huge pressure these days of course, are reportedly working on an electric driverless car of their own.
The people who look at these things closely are expecting that a driverless car will be for sale inside the next five years.
The market potential is huge, according to the Boston Consulting Group, who estimate the driverless car market will be worth $42 billion by 2025.
The Google X driverless car is expected to hit the market in 2018, with Apple’s Project Titan to arrive in or around the same time.
It is very interesting that technology companies like Google and Apple are investing so heavily and secretively in driverless cars.
These giants clearly believe that people will be travelling in driverless, electric cars in future, using the Net, Apps, or whatever else freely.
Inside a Google car, Google have a captive audience to promote all kinds of other technology which people will use freely on their way to work.
Many of the barriers that would have blocked the development of the driverless car are being removed.
The two biggest blocks are legislation and the willingness of people to use them. A lot is happening on the legislation side.
For example, six states in the US have already passed legislation allowing the testing of driverless cars out on the public roads.
The world has already had its first driverless car crash, which happened last July, when a driverless Lexus crashed and three Google employees suffered minor injuries.
Also, just last week the Google driverless car had an encounter with the law in Silicon Valley, California, for driving 24 mph in a 35 mph zone.
The police officer pulled over the prototype car and spoke with the people inside, but no ticket was issued.
Irish and UK legislation would have to be substantially changed to allow for driverless cars to operate here, but it needs to happen urgently.
The UK is addressing this in law, and we need to too.
The other legal issue is who is to blame if a driverless car crashes. People don’t want to be held accountable for something that is not under their control – understandably.
This led Volvo last month to say that it would take liability for any crash of any of its driverless cars – others will probably follow.
But, generally speaking the driverless car will be far safer than a car piloted by a human, who may be tired, distracted, or drunk.
We have had technologies in our cars that are not under our control for years already.
The best example perhaps would be ABS braking. This has been around since the 1980s; control of the braking is taken from the driver to ensure that the wheels don’t lock and send the car spinning out of control.
There are also systems which help us to park – self-parking systems, where sensors guide a car – as well as cruise control.
But the vision for the driverless car goes way beyond these familiar features, to a situation where a person, or persons, sit in, type or speak a destination, and then sit back and relax, read or work.
The driverless car will be able to sense its surroundings using existing technologies like radar, GPS and computer vision.
They will update their maps based on sensory input, and be able to track their position everywhere and adjust to all driving conditions.
Most of the ideas for driverless cars envisage a person in the driver’s seat, with a cloud or wifi connection to other vehicles all around them.
The vehicles will communicate each other’s position and destination, and share the sensory input on road blocks, accidents or weather conditions.
All that intelligence will get everyone from A to B more safely. Dublin might have a swarm of electric vehicles, efficiently moving all of us.
A giant traffic management system, with zero pollution, and an order of magnitude safer than what we have. Safety and efficiency might drive this.
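The vehicle-to-vehicle sharing described above can be pictured with a toy sketch. Everything here – the `Car` class, the hazard strings – is hypothetical, invented purely for illustration:

```python
# Toy sketch of vehicle-to-vehicle sharing (hypothetical and simplified):
# each car reports the hazards it has sensed, and every car then merges
# the reports into one shared picture of the road.
from dataclasses import dataclass, field

@dataclass
class Car:
    name: str
    position: tuple                       # (x, y) on a simple grid
    hazards_seen: set = field(default_factory=set)

def share_reports(cars):
    """Merge every car's hazard reports into a common map all cars hold."""
    shared = set()
    for car in cars:
        shared |= car.hazards_seen        # pool everyone's observations
    for car in cars:
        car.hazards_seen = set(shared)    # everyone now knows everything
    return shared

a = Car("A", (0, 0), {"roadworks at junction 3"})
b = Car("B", (5, 2), {"ice on bridge"})
print(sorted(share_reports([a, b])))
# ['ice on bridge', 'roadworks at junction 3']
```

A real system would of course need authentication, radio protocols and far richer data, but the core idea – pooling sensory input so every vehicle sees more than its own sensors can – is the one the paragraph describes.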
It is not about breakthrough technology; it is about incorporating a range of existing technologies into a 21st century vehicle, which has, up to now, been run on an internal combustion engine born in the 19th century.
Click above to listen to discussion with Ann-Marie Donelan, presenter of The Grapevine show, on CRC 102.9 FM
The diesel engine was designed more than a century ago, yet it remains the engine that, more than any other, powers our 21st century world.
The diesel is used everywhere, from mines and cars to trains, ships and lorries, yet it has changed little since it was invented by Rudolf Diesel in 1892.
There are many remarkable aspects to the history and development of the diesel, still the world’s favourite engine.
A Volkswagen Passat CC car is tested for its exhaust emissions at a testing station in London (Credit: John Stillwell/PA)
Diesel engines are in the news because it is a diesel engine that is at the heart of the Volkswagen pollution emissions scandal, which is still playing out.
The background to the scandal is the tightening restrictions by the US, the European Union and others on emissions of certain gases in cars.
There is a dual demand on car manufacturers to produce cars that perform well, run smoothly, are fuel efficient, and ‘clean and green’.
Car manufacturers must deliver both, because if they don’t, their cars will be taxed heavily, and people don’t want to buy ‘dirty’ cars.
The problem is, according to some engineers, that our law-makers were essentially asking VW and the other car makers to do the impossible.
We can’t have our cake and eat it, the engineers say. We can either have clean, green, fuel efficient cars, or we can have high performing cars, we can’t have both.
People buy diesel cars in particular, because they want to buy a car that is cheaper to run, reliable, fuel efficient, and performs well.
The noose has been tightening around the necks of VW and others because the regulations on emissions have been steadily tightening.
At some point, a decision was obviously made that the only option – faced with the impossible – was to cheat the regulator’s tests.
It was relatively easy to cheat the tests, as EPA car tests in the US are standardised, and done on machines. Who else is doing this, we must ask?
A 1906 diesel engine built by MAN AG (Source: Wikipedia)
How does a Diesel engine work?
Diesels work by converting chemical energy in diesel fuel into mechanical energy which is put to use by the engine.
The energy in diesel is released in an uncontrolled explosion when the fuel comes into contact with very hot, pressurised air.
This ignition, or explosion, occurs when diesel, which has first been atomised, is sprayed by fuel injection into the compressed air.
This creates energy which initially drives a linear, up-and-down motion of a piston, which is transferred to a rotary motion of the crankshaft.
Because the diesel ignition is uncontrolled, it is not smooth like a petrol engine, and the cylinders must be contained inside a heavy engine block.
The energy from the ignition pushes the piston down, inlet valves open, and fresh air is allowed into the engine from the outside.
The diesel engine effectively takes an ‘in breath’.
When the energy is expended, the piston moves up again, the ‘second stroke’ of the engine, and the fresh air is compressed.
The inlet and exhaust valves are closed so that the air cannot escape and is compressed. The temperature and pressure of the air rise to a value that is higher than the self-ignition value of the diesel.
This means the diesel ignites immediately on contact with the pressurised air. The air is circulated by a bowl (during the compression stroke) at the top of the piston which ensures an even spread of fuel.
Each engine cycle requires two strokes – a breath in, and a breath out, if you like. However, most diesel engines are four stroke, so that the energy produced is more evenly spread, and there is less shaking.
There are different amounts of energy produced by the uncontrolled explosion of diesel via each stroke. The more strokes, the more even the energy spread.
In a four-cylinder engine, with 4 driving pistons, the 4 power strokes are staggered, so a power stroke is always under way somewhere in the engine.
The more cylinders a diesel engine has, the smoother it will operate. A heavy flywheel also helps to smooth out non-uniformity of power, as do various weights applied to the crankshaft.
The operation of a diesel engine is all about producing high temperature and high pressure air continuously.
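The link between compression and efficiency can be illustrated with the textbook air-standard Diesel cycle formula. This is an idealised approximation, not a model of any real engine, and the compression and cutoff ratios below are illustrative assumptions:

```python
# Ideal (air-standard) Diesel-cycle thermal efficiency -- a textbook
# approximation, not a model of any real engine.
def diesel_efficiency(r: float, cutoff: float, gamma: float = 1.4) -> float:
    """r: compression ratio, cutoff: cutoff ratio,
    gamma: heat-capacity ratio of air (~1.4)."""
    return 1 - (1 / r ** (gamma - 1)) * (
        (cutoff ** gamma - 1) / (gamma * (cutoff - 1))
    )

# e.g. compression ratio 18, cutoff ratio 2 (illustrative values):
print(round(diesel_efficiency(18, 2), 3))  # ~0.632
```

The formula shows why diesels benefit from high compression ratios: raising `r` raises the ideal efficiency, which is part of the reason a diesel extracts more useful work per unit of fuel than a typical petrol engine.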
Rudolf Diesel, the inventor of the diesel engine (Source: Wikipedia)
It was invented by German engineer Rudolf Diesel, who took out patents on a diesel engine in 1892 and 1893.
Diesel became famous and successful very quickly, as his engines went into production all around the world.
In 1897 the American brewery magnate Adolphus Busch acquired a license to make the machine for about one million marks, or about $50,000.
Soon the Busch Diesel engines were being built in the USA and Canada for locomotives, factories and ships.
Diesel now became primarily a salesman for his engine, and he and his family moved into a palatial mansion in Munich.
From there, he spent much of his time taking legal action to block patent applications by other engineers seeking to improve on his engine.
This cost him a lot of time and nervous energy. It was mostly a waste of effort, as he was not usually successful in court.
Overworked, stressed by patent trials, and pressurised by his family’s expensive lifestyle, Diesel got sick, and his fortune was gobbled up, without his knowledge.
When he became aware that his fortune was gone, he took it very badly. In 1913, Rudolf Diesel vanished from the ferry S.S. Dresden as she sailed to England.
The date of his death is marked in his diary by a cross, which suggests Diesel chose to take his own life.
Diesel versus petrol
Diesel fuel is far less refined than petrol. It is a mixture of hydrocarbon molecules produced by the distillation of crude oil.
Petrol is far more explosive, and will light instantly when a match is put to it. Petrol is volatile even at room temperature and lets off fumes, and the vapour is flammable, so it is a dangerous fuel to have in an engine.
Diesel engines are based on a design where fuel is atomised and sprayed into a chamber of compressed air, which results in small explosions.
This provides a lot of power potentially, but it also means that the engine can be subject to shaking, and needs a hard body to contain it.
The petrol in petrol engines is ignited by spark plugs, which light a fuel that has been highly refined and premixed before entering the engine.
The petrol engine, because it uses a more refined fuel, and because its ignition is less explosive, tends to be smoother running than the diesel.
Both engines convert chemical energy present in the fuel into mechanical energy, which does useful work in driving the pistons up and down.
Diesel engines are better at converting more chemical energy into useful work, so they are said to be more efficient engines with less energy loss.
So, in most petrol engines, petrol and air are pre-mixed before being compressed. This was done in the ‘old days’ by a carburetor, but in cars today there is electronically controlled fuel injection.
In a diesel engine, the fuel is injected into very hot compressed air at the end of a compression ‘stroke’, and self-ignites.
Diesel is a thicker, heavier fuel than petrol, which works best in an engine going at a constant speed, and can solidify at low temperatures.
The world’s biggest and most powerful engines, like this one built for a supertanker, are invariably diesel engines (Source:
Why is the diesel so important ?
Diesel is crucial because it is the workhorse of industry. Diesel engines are reliable, powerful and safe, as they don’t use a highly volatile fuel.
They are used everywhere, particularly where a lot of power is required such as trains, boats, lorries, submarines and tractors.
They are also used in cars, where they are touted to provide power, performance, as well as low emissions of pollutants.
No other engine today is so versatile and used in so many applications. The vast majority of the world’s commercial, industrial, agricultural, mining and military vehicles are diesel powered.
It is remarkable that a 19th century invention is still the most important engine in the world in the 21st century. Diesel engines power the world.
What kind of pollutants do diesel engines emit?
Diesel exhaust emissions contain toxic air contaminants, some of which are listed as cancer-causing.
Diesel cars emit around 20 times more NOx (nitric oxide and nitrogen dioxide) than petrol engines, as a result of their combustion design, as well as small amounts of carbon monoxide.
NOx is formed when nitrogen and oxygen from the air are combined – under heat and pressure. More heat and pressure gives you more NOx.
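As a simplified sketch of that chemistry (real combustion chemistry involves several intermediate steps), the two key reactions can be written as:

```latex
% Thermal NOx formation, simplified:
% nitrogen and oxygen combine under heat and pressure,
% and the nitric oxide formed can oxidise further
\mathrm{N_2 + O_2 \;\xrightarrow{\;\text{heat, pressure}\;}\; 2\,NO}
\qquad
\mathrm{2\,NO + O_2 \;\rightarrow\; 2\,NO_2}
```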
Diesel also contains sulphur, which can be hazardous to human health, as can exposure to diesel particulate matter (or DPM, such as soot particles).
The US Environmental Protection Agency (EPA) says that even short term exposures to NO2, of 30 minutes, can result in airway inflammation in healthy people, and worse effects for those with asthma.
Exposure to NO2 is linked with increased visits to A&E for respiratory issues. NO and NO2 together are often referred to as NOx, and both are potentially harmful.
People living near roadways have been shown to be exposed to 30 to 100% higher concentrations of NO2 than those living away from roads, the US EPA says.
When nitrogen oxides are subjected to heat and sunlight, they react with volatile organic compounds to produce ozone, which is also linked with all kinds of respiratory problems.
What did the ‘real world’ tests on VW diesels show up?
The hidden damage from the 11 million affected VW vehicles could equate to all of the UK’s NOx emissions from power stations, vehicles, industry and agriculture combined.
The EPA tests have known practices and profiles. In many cases, the test vehicles are put on rollers and run at a certain speed for a certain time, then at another known speed for another known period.
The car’s central computer can detect whether inputs match those expected in test conditions.
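The detection logic amounts to simple condition-checking. Here is a purely hypothetical sketch in Python (the sensor names and thresholds are invented for illustration; this is not VW’s actual code): on a dynamometer the driven wheels turn but the steering wheel never moves, and the run follows a known duration.

```python
def looks_like_emissions_test(speed_kmh, steering_angle_deg, duration_s):
    """Toy heuristic for 'are we on a test rig?': wheels are turning,
    the steering wheel is dead still, and the run length matches a
    known test profile. All thresholds are illustrative."""
    steady_steering = abs(steering_angle_deg) < 1.0
    scripted_duration = 500 <= duration_s <= 1400  # typical test lengths
    return speed_kmh > 0 and steady_steering and scripted_duration

# On the road the steering wheel moves constantly, so the check fails
on_road = looks_like_emissions_test(60, 15.0, 900)
on_rig = looks_like_emissions_test(60, 0.0, 900)
```

A real defeat device would feed many more signals into a check like this; the point is only that a car’s computer can cheaply recognise the fingerprint of a scripted test.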
A non governmental agency, the International Council on Clean Transportation (ICCT), performed independent – and crucially on-road – emissions tests, on the VW Passat, the VW Jetta, and a BMW X5.
The Jetta was found to be emitting up to 35 times the allowable limit of nitrogen oxide and the Passat up to 20 times.
These tests followed five routes on similar lines to the EPA simulations: highway, urban, suburban and rural up/downhill driving.
The emissions performance of the Volkswagen, but not the BMW, cars was so much worse than expectations that the ICCT ran further tests on a dynamometer.
In these circumstances, the cars passed with flying colours. It was at this point that the ICCT contacted the EPA.
The first diesel powered car was the 1936 Mercedes Benz 260D (Credit: Zoltan Glas)
How has the engine improved over the years?
Before World War I, submarines were built with diesel engines, whose fuel was less flammable and therefore less dangerous to the submariners.
After World War I, in which the diesel was widely used, the engine was adapted for an increasing number of peacetime uses.
The first big improvement was the move away from the cumbersome air-blast injection system for diesels. The fact that a large compressor had to be attached to the engine prevented it being used in many situations.
Then in the 1920s, engineers developed something called the jerk-type pump. This pump measured out a precise amount of fuel, to be delivered as a spray to the engine at the precise moment it was required.
This fuel injection technology got rid of the need for an air compressor, and allowed for smaller, lighter diesels to be built and widely used.
Only on the roads was the diesel engine slow to arrive, and it was not until 1924 that MAN and Daimler-Benz built the first diesel lorry.
It took even longer, until 1936, for the first diesel-powered car to appear: the Mercedes-Benz 260D.
The traditional design for the diesel engine was too noisy and heavy for road vehicles. It was only in the late 20th century that engineers gave diesel cars better ‘road manners’.
Achieving this, however, came at the expense of the environment. The more efficient combustion of diesel engines meant that the soot particles in diesel car exhausts became smaller and smaller, and more harmful to health, since they could be inhaled more easily.
Engineers came up with a particle filter under the car, which collects and burns the soot particles. This type of filter is used in race cars and light aircraft.
The diesel became the engine of choice for military equipment on the ground and at sea during World War II.
After the war, it was adapted for use in construction machinery, large tractors, most large trucks and buses.
Ultra-reliable diesel engines came into use in hospitals, telephone exchanges, and airports to provide power during power outages.
What happens if I put gasoline into a diesel engine?
Diesel in a gasoline engine will not even cause ‘firing’, because diesel is less volatile and will not mix with the air properly; sparking will not initiate combustion.
But, if you put gasoline in a diesel engine, you are putting a highly volatile fuel into a chamber of highly compressed and hot air.
This will lead to detonations, rather than smooth combustion, and eventually the engine components can get damaged!
Why is it so hard to develop clean diesel engines?
Diesel fuel is full of long hydrocarbon chains, and a gallon of diesel fuel contains more energy than a gallon of petrol.
The problem is that diesel burns in a less controlled way, so it is hard to control the waste products, which often include sulphur compounds.
The old diesel cars, such as the 1979 Oldsmobile in the US spewed lots of tiny sulphur-containing particles into the air.
These days diesel car makers are good at trapping this kind of emission and the use of ultra-low sulphur diesel fuel helps.
However, it has proved more difficult for car makers to deal with the NOx gases, NO and NO2.
These form naturally at the high temperatures which are essential for a diesel engine to work. They react in sunlight to form ozone, which is O3.
Ozone is an irritant and bad for human health. It makes our eyes water, makes our throats hurt, worsens asthma, and causes heart problems too.
Diesel cars produce far more NOx than petrol cars.
The problem for engineers is that the temperatures and pressures under which a diesel engine runs best (in terms of pep and fuel efficiency) are also the conditions which will convert the maximum amount of oxygen and nitrogen into NOx.
With spontaneous ignition of diesel it’s not easy to keep track of what compounds have formed, and then to clean them up.
Also, it should be noted that in Europe, where about half of all cars run on diesel, there is less regulatory focus on NOx than greenhouse gases.
Does the Diesel engine have a future?
Many engineers believe the diesel has a bright future. It’s an engine that can run on peanut oil, and other biodiesels as Diesel himself showed.
There is no cheaper or more environmentally friendly form of power today than combining a diesel engine with plant oil. If eco fuels catch on, it will be the final fulfilment of Rudolf Diesel’s dream.
Battery technology has changed little since Alessandro Volta’s stacked battery of 1799, here being demonstrated above to Napoleon. But, a technological breakthrough may finally be on the way (Source: Jean Loup Charmet/ Science Photo Library)
We take batteries for granted, but it is hard to imagine a world without them. Think about it for a moment: almost everything that requires power makes use of battery power.
The list includes cars (electrical and fuel powered), children’s toys, bicycle lights, recording devices, hearing aids, and, of course, our beloved laptops, tablets and smartphones.
Batteries have, however, become a limiting technology, and for years have been acting like a brake on the development of ever faster, more powerful electronic devices and gadgets.
Whereas the power of a microchip – the brain of our electronic devices – has doubled every two years or so since the 1970s, battery power, upon which our devices rely, hasn’t kept pace.
While the microchip has been doubling its power relentlessly every couple of years, engineers have struggled to get an extra 30 per cent of power from batteries over the same time frame.
The remarkable thing is that until recently, the technology upon which batteries are based hadn’t changed much since the first working battery designed by Alessandro Volta in 1799.
Yet there are many new technologies in development which could provide the long-sought breakthrough that would finally give us batteries able to provide enough power, for long enough, to suit our needs.
In 1791 Luigi Galvani noticed that an electrical circuit created with two different metals, when touched on two ends of the leg of a dead frog, would cause the frog’s leg to twitch.
The two metals were creating an electric current within the frog’s leg, causing its muscles to contract. This was a transfer of chemical energy into electrical energy – a primitive battery.
The first simple, working battery, as we would recognise it today, which became known as the Voltaic pile, was built by Alessandro Volta, an Italian physicist in 1799.
Volta’s battery was not the first device created by humans which could produce electricity, as the famous ‘Baghdad Battery’ dates back to about 200 BC.
These batteries were discovered by an archaeologist called Wilhelm Konig, outside Baghdad in 1938. They were small jars, each holding an iron rod inside a copper cylinder.
Tests on the batteries indicated that the jars had been filled with some kind of acidic substance like vinegar or wine, leading researchers to theorise that they were ancient batteries.
However, the Volta battery was the first to produce a steady, lasting electrical current.
Volta’s battery had two electrodes. An electrode, to explain, is a conductor through which an electric current enters or leaves the non-metallic part of a circuit, such as the chemical mixture inside a battery.
So, a lamp that is connected to a battery draws its current through the battery’s electrodes, which pass the electricity between the chemicals inside and the external wires.
The electrodes in the Volta battery were circular disks of zinc metal and copper metal, separated by pieces of cardboard soaked in salty water.
An electrolyte is a liquid, or molten, substance full of ions: atoms or molecules carrying a positive or negative electrical charge, which allow it to conduct electricity. Volta’s electrolyte was salty water.
Chemical reactions in the electrolyte led to a negative charge building up at the zinc electrode – the anode – and a positive charge at the copper end – the cathode.
The electricity in the battery flows from the anode towards the positive cathode, because the electrons that carry it are negatively charged, and in Volta’s battery this flow could not be reversed.
One problem with Volta’s battery was a buildup of hydrogen gas, a by-product of the chemical reactions, which formed a barrier between the electrolyte and the electrodes.
Thus, the effectiveness of the Volta battery diminished over time. Furthermore, when more acidic electrolytes came into use, batteries could often be dangerous to handle.
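The hydrogen problem follows directly from the pile’s chemistry. In simplified outline, zinc dissolves at one disk while hydrogen gas forms at the other:

```latex
% At the zinc disk (anode): zinc dissolves, releasing electrons
\mathrm{Zn \;\rightarrow\; Zn^{2+} + 2e^-}
\qquad
% At the copper disk (cathode): hydrogen ions from the brine take up
% the electrons, forming the hydrogen gas that fouled the electrodes
\mathrm{2H^+ + 2e^- \;\rightarrow\; H_2}
```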
Another problem was that because the Volta battery was built in a stack, the weight of the stack would, after a certain height, begin to squeeze the brine out of the cardboard.
The 1926 Model T Ford, pictured here, was the first mass-produced car to have an automatic starting key. This was made possible by an induction coil designed by Irish priest Fr Nicholas Callan in 1837 (Credit: automotivehistoryonline.com)
Fr Callan’s Battery
One of the key researchers in what scientific historians call ‘the electric century’ – the 19th century – when electricity was harnessed and made widely available – was an Irish priest.
Fr Nicholas Callan was a 19th-century battery pioneer and Catholic priest, based at what was then part of The Catholic University of Ireland (now called Maynooth University).
He built some of the most powerful batteries and magnets ever made in his workshop at Maynooth, and he spent long hours there, immersed in his research.
Callan, unlike scientists today, did not publish his findings, but when he had mastered some aspect of knowledge, he simply moved on to the next topic that he was interested in.
This meant that he did not get credit for the extent of his contribution to the development of the battery, and to the widespread availability of electricity until relatively recent times.
One of his inventions, the induction coil, was a quantum leap for battery technology when he invented it in 1837: it allowed a modest battery to deliver immensely powerful surges of electricity.
Our modern cars can be started by a simple turn of a key thanks to technology designed by an Irish priest in the 19th century and put into a Model T Ford in 1926.
Up until the 1920s, cars had to be started manually by turning a hand crank. This was physically demanding, and people who were not young and fit often couldn’t manage it.
Callan’s induction coil of 1837, almost a century before, provided a way to massively ramp up the electrical power available from a small Model T car battery.
The 1926 Model T Ford was the first car that went into mass production with an electrical starting mechanism, and this meant anyone, regardless of age or health, could drive a car.
The technical trick that Callan uncovered was to repeatedly break the electrical circuit in a battery by dipping copper wires in liquid mercury cups.
Callan found that the more rapidly he could break the current, using his ‘repeater’, the more intense the flow of electricity produced would become.
He was a quiet, intense man, who spent hours in his laboratory at Maynooth. His fellow clerics wondered at his interest in science, and regarded his lab work as useless.
Around the same time Fr Callan was working, in the 1830s, a British scientist, John Daniell, developed an improved version of Volta’s battery, which became known as the Daniell cell.
The so-called Daniell Cell was made up of two metal plates, one of copper and one of zinc, and two solutions, of copper sulfate and zinc sulfate, all in a simple glass jar.
Copper sulfate is denser than zinc sulfate, so it sank to the bottom of the glass jar and surrounded a copper plate. The lighter zinc sulfate floated on top of the copper sulfate and was surrounded by a zinc plate. The zinc plate was negative and the copper plate positive.
This worked well for stationary applications, such as powering doorbells and early telephones, but it didn’t work for mobile applications such as powering a flashlight.
The principles behind what happens when you put a battery into your remote control or flashlight today, in September 2015, are similar to those of the early batteries, going back more than 200 years.
Basically, chemistry is being used to generate electricity, and move it from one part of the battery to the other, and then into the device where the electrical power is consumed.
In simplest terms, chemical reactions at the anode, or negative end of the battery, release electrons, which are the basic carriers of electricity.
These electrons flow as a current from the anode, through the device being powered, to the positive end of the battery, the cathode, while ions move through the electrolyte – a liquid of some sort, often an acid – to complete the circuit.
At the cathode, chemical reactions occur which essentially absorb the electrons and their energy, after the current has done its work in the device running on battery power.
The battery will continue to produce electricity until one, or both, of the electrodes run out of the substances needed to release and absorb electrons respectively.
Modern batteries are still based on using chemistry to produce, absorb and transfer electricity. We have got better – somewhat – at manipulating the chemistry to make better batteries.
There are zinc-carbon batteries, alkaline batteries, lithium ion batteries and lead-acid batteries in common usage today.
The lead-acid battery, which is used in a typical car battery has electrodes made of lead oxide and metallic lead, while the electrolyte is a sulfuric acid solution.
These are dangerous to handle, and an environmental nightmare, but they produce enough electricity to get a car started in the morning, and that is what we all ultimately want.
Alkaline batteries are the kind of batteries we buy in shops to put into children’s toys, for example. The cathode here is a manganese dioxide mixture, and the anode a zinc powder.
The battery gets its name, however, from its potassium hydroxide electrolyte, an alkaline substance.
Acids are often excellent electrolytes, because they ionize strongly in solution, producing a lot of ions, both positive and negative.
Either way, acids don’t form stable molecules when put in solution. They create ions, which are highly mobile in solution, and facilitate the conduction of electricity.
In the modern era it has become important to develop decent rechargeable batteries, such as those in mobile phones, which can be plugged in and recharged on the move.
However, rechargeable batteries have been around a long time. In fact they date back as far as 1859, when Gaston Plante, a French physicist, invented the humble lead acid battery.
We know that the lead acid batteries in our cars can run out, hence the jump leads we carry in the boot. The jump leads are usually used to re-charge the battery from another battery.
The difference between a rechargeable battery and a non-rechargeable one is that the chemical reactions producing electric current in a rechargeable battery are reversible.
As the world became increasingly mobile, it was vital to invent a powerful, rechargeable battery. Along came the lithium-ion battery in 1991, commercialised by Sony and Asahi Kasei.
In this battery, charge could be reversed, and the products that were in the battery were not going to be used up rapidly, or diminished in power with multiple weekly charges.
The lithium-ion battery, which goes into so many of our devices, is one such rechargeable battery. These are high-performance batteries, which often use lithium cobalt oxide as the cathode and carbon as the anode. These materials, lithium and carbon, are also very light.
However, lithium-ion batteries still need an electrolyte, typically a lithium salt in solution. So, these high-technology batteries are still limited by the need for a liquid solution.
There are many competing technologies working to develop the breakthrough that will move batteries on to the next stage.
There are solid state batteries, and solar batteries, and even batteries, recently proposed by scientists, which could be based on thin air.
Solid state batteries will be made of solid electrodes and solid electrolytes. They can be easily miniaturised and have long shelf lives. They are also not prone, as liquid electrolytes are, to reduced performance when exposed to near-freezing or near-boiling temperatures.
The technical problem with solid state batteries, however, is that it is proving difficult for engineers to get high electrical currents moving easily across solid to solid surfaces.
Solar batteries are another technology being explored, as the next big thing in batteries, and these are based on converting the energy in sunlight directly into electrical energy.
The materials used are those that change their electrical characteristics in response to sunlight. They work in a similar way to solar panels, but they need to be smaller of course.
Tesla Motors, from the US, meanwhile, are developing industrial scale batteries, which can be used to power the home, they say, or to store energy from renewable sources like wind.
In May Tesla and an Irish company Gaelectric announced they were going to work on a large utility scale battery power project in Ireland.
The plan is to demonstrate that Tesla batteries can store energy from the sun and the wind, of which there is plenty in Ireland, and release it in quantities sufficient for utilities to use.
Tesla also wants to enable businesses and homes to store renewable energy from the sun and wind, to manage their power needs and reduce reliance on fossil fuels.
However, when it comes to our electronic devices, it seems that a workable solar battery, which is powerful, cheap and reliable, is still nowhere in sight.
Solar-powered batteries can be sluggish on start-up when they are cold, and they don’t deliver enough power, of the kind that an iPhone requires, for example, to do the job.
Their role may be as a back-up solar battery on the iPhone, to use in an emergency when the main battery is running low and there is no electrical socket in sight.
Future iPhones could run on hydrogen refuelled via the headphone socket. Intelligent Energy, a British firm behind the breakthrough, expects there’ll be a gas cartridge slot (Credit: iFixit)
Intelligent Energy says it made an iPhone 6 with a fuel cell which creates electricity by combining hydrogen and oxygen – the latter straight from the air! – and which can power the phone for a week.
The bonus is that the combination of hydrogen and oxygen produces only small amounts of water and heat as waste products.
The announcement has been shrouded in secrecy because, if correct, this will be a massive breakthrough. The company said its fuel cell system was incorporated into the current iPhone 6 without any alteration to the size or shape of the device.
The only difference, compared to other handsets is that there are rear vents where a tiny amount of water vapour waste is allowed to escape.
Intelligent Energy, who are reportedly working closely with Apple, said they are considering what price to sell their hydrogen cartridges at; the fuel, then, is not going to be part of the iPhone per se.
It’s likely the cartridge might sell for just the cost of a latte, company executives said, and even so, a market worth 300 billion pounds Sterling per year could open up.
Click above to listen to discussion with Keelin Shanley on Today with Sean O’Rourke, broadcast on RTE Radio 1 on 27th August 2015
DNA-based computers have already been built and they look set to replace silicon computers in coming years (Source: http://www.news.discovery.com)
We love our electronics, or most of us do, and every year or two, when we go to buy a new phone, computer or laptop we all expect to buy a faster, more intelligent device.
The microchips inside our electronics are ‘the brain’ of the device. They are currently made up of silicon, an abundant material found in sand.
However, some time soon, perhaps very soon, silicon-based chips will no longer be able to provide devices with the extra speed and functionality that buyers demand.
The big question is, if electronic devices are not based on silicon, as they have been for decades now, what will they be based on?
It might come as a surprise to some to learn that DNA, the genetic material inside every human cell, is a leading contender to fill silicon’s shoes.
In a way, it makes perfect sense to use DNA for computers. DNA is brilliant at storing and processing information, and is made up of a simple, reliable code.
Yet the idea of using DNA in computers didn’t emerge until as late as 1994.
That was when Leonard Adleman, of the University of Southern California showed that DNA could solve a well-known mathematical problem.
The problem was a variation of what mathematicians call the ‘directed Hamiltonian path problem’. In English, that translates to ‘the travelling salesman problem’.
In brief, the problem is to find the shortest route between a number of cities going through each city just once.
The problem gets more difficult the more cities are added. Adleman solved the problem, using DNA, for seven cities in the US.
The thing is, at that size it is not a hugely difficult problem, and a clever enough human using paper and pencil could probably work it out faster than Adleman’s DNA computer.
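To see why the problem is easy at small sizes but explodes as cities are added, here is a minimal brute-force sketch in Python (the four cities and their distances are invented for illustration). It simply tries every ordering, and the number of orderings grows factorially with the number of cities:

```python
from itertools import permutations

def shortest_tour(cities, dist):
    """Brute-force the travelling salesman problem: try every ordering
    of the cities and keep the cheapest path that visits each exactly once."""
    best_order, best_cost = None, float("inf")
    for order in permutations(cities):
        # Sum the distances between consecutive cities on this path
        cost = sum(dist[a][b] for a, b in zip(order, order[1:]))
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

# A made-up four-city map: 4! = 24 orderings to check.
# Seven cities would already mean 7! = 5,040.
dist = {
    "A": {"B": 1, "C": 4, "D": 3},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"A": 3, "B": 5, "C": 1},
}
order, cost = shortest_tour(list(dist), dist)
```

Adleman’s insight was to let DNA strands explore all those orderings chemically, in parallel, rather than one at a time as this loop does.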
The importance of what Adleman did was to show that DNA could be used to solve computational problems – what we might call a proof of concept today.
He used synthesised DNA strands to represent each one of the seven cities and other strands were made for each of the possible flight paths between the cities.
He then performed a number of experimental techniques on the DNA strands to get the single answer that he wanted. Like putting a jigsaw puzzle together.
It was slow, but he showed it could be done.
The question now was, what else can we do with DNA?
Purified silicon, pictured here, is sourced primarily from sand and is an abundant element in the Earth’s crust (Source: Wikipedia)
The most important element is silicon, pictured here on the right, which is the material used to make the microchip: the brain of our phones, pads and laptops, if you like.
The first silicon chip was made in 1968, and it became the material of choice for the emerging computer industry in the years and decades that followed.
It is an abundant material, found in sand, and in rocks like granite and quartzite, and this abundance means it is cheap, and easy to find, all over the world.
It is also a semiconductor, which means it conducts electricity, though poorly. It is halfway between a conductor, such as metal, and an insulator, such as rubber.
It would be very hard to control electricity – in terms of switching transistors on and off – using a material that either conducted electricity freely or blocked its flow entirely.
This semiconducting property makes it easier to control the flow of electricity in a silicon microchip, which is crucial to the success of microchip technology.
Aside from silicon, there are plastics, which make up a lot of the weight of many devices and laptops, in the body, circuit boards, wiring, insulation and fans.
These plastics, like polystyrene, a common one, are made up of carbon and hydrogen, two of the most common elements in nature.
There are metals, but usually light metals, such as aluminium, which is popular because it is light, and strong and has a sleek, modern appearance.
Aluminium comes from bauxite mining, and a lot of energy is spent in extracting the metal from the bauxite ore in big producer nations like Australia.
There is some steel for structural support and for things like screws, and copper is still used in wiring on circuit boards and to connect electrical parts.
The battery is key, of course, and typically it is a lithium-ion battery these days. These batteries also contain cobalt, oxygen and carbon.
There are also small amounts of rare and precious materials, such as gold and platinum, and rare earths such as neodymium, which is used for tiny magnets inside tiny motors.
Rare earths are found throughout electronic devices, including iPhones. This has proved controversial, as the process that extracts them from the ground is environmentally risky, some believe.
Minerals such as neodymium are used in magnets inside the iPhones to make speakers vibrate and create sound.
Europium is a material that creates a bright red colour on an iPhone screen, and cerium is used by workers to polish phones as they go along the assembly line.
The iPhone wouldn’t work without the various rare earths contained in it. Ninety per cent of the rare earths are mined in China, where environmental rules are slacker.
There is a human price to be paid – elsewhere – for our shiny, fast, new devices.
For example, a centre of rare earth mining is a place called Baotou, in Inner Mongolia. The town has dense smog, and a radioactive ‘tailings’ lake west of the city, where rare earth processors dump their waste, described as “an apocalyptic sight”.
Radioactive waste has seeped into the ground, plants won’t grow, animals are sick, and people report their teeth falling out, and their hair turning white.
The people who risk their lives mining the rare materials needed to make the electronics we love usually live far away from Europe or North America.
China is a major centre for such mining, and Australia is significant too.
DNA is ‘clean’
When scientists in Israel built a computer running on DNA in 2003, it contained none of the silicon, metals or rare earths used in our devices today.
It could also perform 330 trillion operations per second, which was a staggering 100,000 times faster than silicon-based personal computers.
A DNA computer would be much ‘greener’ and more in keeping with our 21st century ideas of sustainability and reducing the carbon footprint.
DNA computers don’t need much energy to work. It is just a case of putting DNA molecules into the right chemical soup, and controlling what happens next.
If built correctly – and that is where the technical challenge lies – a DNA computer will sustain itself on less than one millionth of the energy used in silicon chip technology.
There have been a few important milestones since the pioneering work of Adleman in California opened the door to DNA computers back in 1994.
Between 2002 and 2004, scientists in Israel produced a computer based on DNA and other biological materials, rather than silicon.
They came up with a DNA computer which was, they said, capable of diagnosing cancer activity inside a cell, and releasing an anti-cancer drug after diagnosis.
More recently, in 2013, researchers stored a JPEG photo, the text of a set of Shakespearean sonnets, and an audio file of Martin Luther King’s famous ‘I have a dream’ speech using DNA.
This proved that DNA computers were very good at storing data, which is something that DNA has evolved to do over millions of years in the natural world.
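The storage idea can be illustrated with a toy encoding: DNA has four bases, so each base can carry two bits of information, and one byte maps to four letters. (Real DNA-storage schemes, including the 2013 work, use more elaborate error-resistant codes; the mapping below is invented purely for illustration.)

```python
# Toy binary-to-DNA mapping: two bits per base, four bases per byte.
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Turn each byte into four DNA letters, two bits at a time."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def decode(dna: str) -> bytes:
    """Reverse the mapping: four letters back into one byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for ch in dna[i : i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return bytes(out)

strand = encode(b"Hi")  # two bytes become an eight-letter 'strand'
```

At this density, even a short synthetic strand carries real data, which hints at why a molecule evolved to store genomes is attractive for archives.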
DNA computers are on the way that will be far better at storing data than existing computers, which use cumbersome magnetic tape or hard-drive storage systems.
The reason is simple. DNA is a very dense, highly coiled molecule that can be packed tightly into a small space.
It lives in nature inside tiny cells. These cells are only visible under a microscope, yet the DNA from one cell would stretch to 2 metres long if uncoiled and pulled straight.
The information stored in DNA also can be stored safely for a long time. We know this because DNA from extinct creatures, like the Mammoth, has lasted 60,000 years or more when preserved in ice, in dark, cold and dry conditions.
One of the few advantages of our Irish weather is that it makes Ireland an attractive place for high-technology companies to base their data centres.
It was a factor in the announcement by Google last week that it was to locate a second data centre in Dublin.
Many industry experts believe the days of the silicon chip, like this one, are numbered, and some believe DNA will replace it as the material of choice in our future devices (Source: http://www.tested.com)
A DNA computer chip – if we can call it that – will have to be far more powerful than existing silicon chips to establish itself as a new technology.
This will be ‘disruptive’, as a lot of money has been invested in manufacturing plants like Intel in Leixlip, which were set up and fitted out to make silicon chips.
But, regardless of the level of investment, and Intel have invested something like $12.5 billion in their Leixlip plant since 1990, silicon’s days are numbered.
In 1965, Gordon Moore, one of the founders of Intel, came up with a law governing the production of faster and faster computing speeds, which has proved accurate.
He said that the number of transistors on an ‘integrated circuit’ – the technical name for a chip – would double every two years.
This doubling has continued every two years since 1965, but engineers say that they are fast reaching the point where they have exhausted silicon transistor capacity.
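Moore’s observation is simple compounding arithmetic, and a short sketch makes clear why the growth has been so dramatic (the starting count of 64 transistors in 1965 is illustrative, not a historical figure):

```python
def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count forward, doubling once per period."""
    doublings = (year - start_year) // doubling_period
    return start_count * 2 ** doublings

# 25 doublings between 1965 and 2015 multiply the starting
# count by 2**25 - more than 33 million times over
count_2015 = projected_transistors(64, 1965, 2015)
```

That factor of tens of millions over 50 years is what battery chemistry, improving by mere tens of per cent, has had to chase.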
The need for something to replace silicon is becoming urgent, and this is why a recent breakthrough in DNA computing in the UK is especially timely.
Scientists at the University of East Anglia have just announced that they have found a way to change the structure of DNA – twice – using a harmless, common material.
The material is called EDTA, and it is found in shampoo, soaps and toiletries, where it keeps their colour, texture and fragrance intact.
The scientists used EDTA to change DNA to another structure, and then, after changing it, to change it back into its original structure again.
In silicon, the transistors switch between ‘on’ and ‘off’ states and this provides the means of controlling the way that the silicon chip works.
Similarly, this breakthrough has shown, for the first time, that scientists can now also switch DNA between two ‘states’ or forms.
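The parallel can be pictured with a tiny sketch: two states standing in for a transistor’s ‘on’ and ‘off’, or for DNA’s two structural forms. Nothing below goes beyond that two-state picture; it is purely an illustration:

```python
# A toy picture of two-state switching: 1 and 0 stand in for a
# transistor's 'on' and 'off' states, or for DNA's two structural forms.

ON, OFF = 1, 0

def toggle(state):
    """Switch between the two states, like a transistor being driven."""
    return ON if state == OFF else OFF

bits = [OFF, ON, ON, OFF]
print([toggle(b) for b in bits])  # [1, 0, 0, 1]
```

Everything a chip computes ultimately comes down to vast numbers of these two-state switches, which is why being able to flip DNA between two forms matters.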
CLICK ABOVE to listen to discussion with Keelin Shanley on the dangers of killer robots with A.I. on Today with Sean O’Rourke (broadcast 5th August 2015)
Scientists are worried about how mankind will control robots with advanced built in artificial intelligence (Credit: Warner Bros)
Huge advances in robotics and artificial intelligence mean that intelligent ‘killer robots’ could be ‘living’ among us in just a few years, and scientists and experts in the field are worried.
Artificial intelligence is the name given to how scientists try to replicate human intelligence in a computer. At its most basic, it is software based on mathematics.
The scientific ‘father’ of A.I., as it is called, is Alan Turing, the brilliant English mathematician and code-breaker whose life was portrayed in last year’s film The Imitation Game, which many listeners will have seen.
We can, in fact, lay claim to Turing for Ireland, as he was half Irish. His mother, Ethel Sara Stoney, was Irish, attended Alexandra College in Milltown, Dublin, and was part of a famous Anglo-Irish scientific family.
Ethel’s relations included George Stoney, the scientist who coined the term ‘electron’, and after whom a street in Dublin’s Dundrum is named; as well as Edith Stoney, regarded as the first woman medical physicist.
Turing’s idea was that a machine, using a mathematical alphabet consisting of just two symbols, 0 and 1, could solve any computable problem.
This machine was the Universal Turing Machine, and Turing came up with the idea as far back as 1936, when he was just 24 years old.
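Turing’s idea can be sketched in a few lines of Python: a tape of symbols, a read/write head, and a table of rules. The toy machine below, which simply flips every bit on the tape, is our own illustration, and the rule format is an assumption for the sketch, not Turing’s original notation:

```python
# A minimal Turing-machine sketch: a tape of 0s and 1s, a head position,
# and a transition table mapping (state, symbol) -> (new_symbol, move, new_state).
# This toy machine flips every bit, then stops when it runs off the tape.

def run(tape, rules, state="start", halt="halt"):
    tape = list(tape)
    pos = 0
    while state != halt and 0 <= pos < len(tape):
        symbol = tape[pos]
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol          # write the new symbol
        pos += 1 if move == "R" else -1  # move the head left or right
    return "".join(tape)

# Flip each bit, moving right until the tape ends.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run("0110", flip_rules))  # prints "1001"
```

The remarkable point is that a machine no more complicated than this, given a big enough rule table and tape, can in principle compute anything a modern computer can.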
At some point in the not-too distant future, machines will surpass humans in general intelligence. At that point machines will replace humans as the dominant ‘life form’ on Earth. Life here will have entered its post-biological phase. We’ll be extinct.
Sufficiently intelligent machines could improve themselves, reaching an even higher level of intelligence, without the need for humans.
The fate of humans, whether they continued to exist or not, would be dependent on the whim of the machine super intelligence.
Our relationship to the super intelligence would be like the relationship gorillas, for example, have with humans today. We’d be endangered, or doomed.
Thinkers like Bostrom and the futurist Ray Kurzweil talk about a moment called a ‘technological singularity’, when A.I. becomes truly super intelligent.
This is the moment when a computer or a robot with A.I. becomes capable of designing better, more intelligent versions of itself.
Rapid repetitions of this would result in an intelligence explosion, and very quickly, a super intelligence would emerge, way beyond human intelligence.
It would be like putting evolution into super-fast forward, and our own slow biological evolution would be unable to compete with this.
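The runaway growth described above can be illustrated with a toy calculation. The starting level and the 1.5x improvement per generation below are arbitrary assumptions, chosen only to show how quickly the compounding takes off:

```python
# A toy model of the 'intelligence explosion' idea: each generation of
# machine designs a successor a fixed factor smarter than itself.
# The starting level and factor are arbitrary illustrative assumptions.

def explosion(start=1.0, factor=1.5, generations=20):
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * factor)  # each machine improves on the last
    return levels

levels = explosion()
print(f"after 20 generations: {levels[-1]:.0f}x the starting intelligence")
```

With just 20 cycles of a 50% improvement, the final machine is over 3,000 times as capable as the first, which is the ‘fast forward’ effect the thinkers above are worried about.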
This super intelligence might be able to solve problems, and answer questions, which have proved beyond the capabilities of human beings.
Scientists disagree as to when this moment might arrive: Kurzweil predicts it will be with us by 2045, while some have argued it will be with us as early as 2030.
No-one is agreed on how best to deal with unregulated ‘autonomous weapons’ or with the prospect of hostile super intelligent machines.
The aforementioned Elon Musk, the SpaceX entrepreneur, has put $10 million of his money into projects aimed at keeping A.I. ‘under control’ and ‘beneficial’.
We would try to build in elements that would prevent A.I. machines from turning on humans, like the protective Terminator in the Hollywood films.
We might do well to take on board ‘The Three Laws of Robotics’ devised by brilliant science fiction author Isaac Asimov (author of I, Robot) back in 1942.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
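To make the strict precedence between the laws concrete, here is a toy sketch in Python that checks a proposed robot action against the laws in order. The action fields (harms_human, disobeys_order, endangers_self) are invented for this illustration, and real A.I. safety engineering is, of course, vastly harder than a three-line checklist:

```python
# A toy encoding of Asimov's Three Laws as a strict precedence check on a
# proposed robot action. The boolean action fields are invented for this
# illustration; this is a sketch of the laws' ordering, not real safety code.

def permitted(harms_human, disobeys_order, endangers_self):
    if harms_human:        # First Law outranks everything
        return False
    if disobeys_order:     # Second Law applies unless it breaks the First
        return False
    if endangers_self:     # Third Law has the lowest priority
        return False
    return True

# A harmless, obedient, safe action is allowed; a harmful one never is.
print(permitted(harms_human=False, disobeys_order=False, endangers_self=False))
print(permitted(harms_human=True, disobeys_order=False, endangers_self=False))
```

The ordering of the checks is the whole point: an order to harm a human fails at the first test, no matter what the later laws would say, which is exactly the hierarchy Asimov wrote into his stories.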
Or perhaps our future is to become cyborgs, to adopt and incorporate this immense A.I. intelligence as part of our own existence.
We could decide to ditch our biology, and to become a race of super intelligent, immortal machines.
Our ‘primitive’, fragile, biological beginnings may, in time, be forgotten.
This perfectly customised arm cast was produced using a three-dimensional printer [Credit: http://www.3ders.org]
The 3D printing revolution is here, with printers using wood, plastic, metal, concrete and even living tissue to make things as diverse as engine parts, replica guns or bionic ears.
Ireland is well placed to be at the forefront of this high-tech manufacturing technology.
The personal genomics industry is exploding. This is the business where DNA kits are sold, over the counter, or online, which use saliva samples to provide reports of risk of Alzheimer’s disease, hereditary cancers and more.
Last month, two British women gave birth to baby boys with the help of wombs donated by their mothers. This world first took place in Sweden, and plans are being made to make the procedure available in other countries.