Evolutionary computation has been promising self-programming machines for 60 years – so where are they?

March 27, 2018 by Graham Kendall, The Conversation

What if computers could program themselves? Instead of the laborious job of working out how a computer could solve a problem and then writing precise coded instructions, all you would have to do is tell it what you want and the computer would generate an algorithm that solves your problem.

Enter evolutionary computation, which can be seen as a form of artificial intelligence and a branch of machine learning. First suggested in the 1950s, evolutionary computation is the idea that a computer can evolve its own solutions to problems, rather than humans having to go through a series of possibly complex steps to write the program themselves. In theory, this would mean computer programs that might take weeks to write manually could be ready in a matter of minutes.

The idea allows computers to solve complex problems that may not be well understood and that are difficult for humans to tackle. Computer scientists have used evolutionary computation on many problems, including formulating the best mix of ingredients for shrimp feed, portfolio optimisation, telecommunications, playing games and automated packing.

And researchers who have been studying evolutionary computation for over 60 years have made tremendous advances. It is even the subject of several scientific journals. Yet, as I noted in a recent paper, the idea still isn't used widely outside the research community. So why isn't evolutionary computing evolving faster?

How does evolutionary computation work?

Evolutionary computation draws on Charles Darwin's principles of natural evolution, commonly known as survival of the fittest. That is, the weakest (less well adapted) members of a species die off and the strongest survive. Over many generations, the species will evolve to become better adapted to its environment.

Genetic programming tree. Credit: Wikimedia

In evolutionary computation, the computer creates a population of potential solutions to a problem. These are often random solutions, so they are unlikely to solve the problem being tackled or even come close. But some will be slightly better than others. The computer can discard the worst solutions, retain the better ones and use them to "breed" more potential solutions. Parts of different solutions will be combined (this is often called "crossover") to create a new generation of solutions that can then be tested and the process begins again.

Another important element of evolutionary computation, as with natural selection, is mutation. Every so often a small, random change is made to one of the solutions being tested. This means new potential solutions can be created that wouldn't be possible from just using crossover.
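To make these two operators concrete, here is a minimal sketch (not from the original article) of one-point crossover and bit-flip mutation on fixed-length bit strings, one of the classic encodings. The 16-bit length and 1% mutation rate are arbitrary illustrative choices.

```python
import random

def crossover(parent_a, parent_b):
    """One-point crossover: splice two parents at a random cut point."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(solution, rate=0.01):
    """Bit-flip mutation: flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in solution]

parent_a = [random.randint(0, 1) for _ in range(16)]
parent_b = [random.randint(0, 1) for _ in range(16)]
child = mutate(crossover(parent_a, parent_b))  # a new candidate solution
```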

Hopefully, a combination of crossover and mutation will produce new potential solutions that are better than their "parents". This might not happen every time, but as more generations are produced, better solutions are more likely to emerge. It's not unusual for evolutionary computation to involve many millions of generations, just as evolution can take many millions of years to noticeably alter a living species.
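Putting selection, crossover and mutation together, a complete (if toy) generational loop might look like the following hypothetical sketch, which evolves 32-bit strings to maximise the number of 1s, the standard "OneMax" exercise. The operators are redefined so the snippet runs on its own, and all the sizes and rates are illustrative.

```python
import random

def fitness(solution):
    return sum(solution)                     # count of 1 bits; higher is better

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(solution, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in solution]

# Start from random solutions, unlikely to be any good at first.
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                # discard the worst solutions
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(80)                   # "breed" the next generation
    ]
print("best fitness:", max(map(fitness, population)), "out of 32")
```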

One of the most popular types of evolutionary computation is genetic programming. This involves one computer program evolving another working program to tackle a specific problem. The user provides some measure of what comprises a good program and then the evolutionary process takes over, hopefully returning a program that solves the problem.
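As an illustration only, here is a hedged, minimal sketch of that idea: candidate programs are arithmetic expression trees, the user-supplied quality measure is squared error against a made-up target function f(x) = x*x + x, crossover swaps random subtrees, and a small parsimony term discourages bloated trees. None of this reflects any particular research or commercial system.

```python
import operator
import random

FUNCS = [operator.add, operator.sub, operator.mul]   # all binary
TERMINALS = ["x"] + [float(c) for c in range(-2, 3)]

def random_tree(depth=3):
    """Grow a random expression: a terminal or (function, [children])."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(FUNCS), [random_tree(depth - 1), random_tree(depth - 1)])

def evaluate(tree, x):
    if tree == "x":
        return float(x)
    if isinstance(tree, float):
        return tree
    func, (left, right) = tree
    return func(evaluate(left, x), evaluate(right, x))

def error(tree):
    """Squared error on sample points, plus a small size penalty."""
    sse = sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))
    return sse + 0.1 * sum(1 for _ in paths(tree))   # parsimony pressure

def paths(tree, path=()):
    """Yield the position of every node in the tree."""
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1]):
            yield from paths(child, path + (i,))

def subtree(tree, path):
    for i in path:
        tree = tree[1][i]
    return tree

def replaced(tree, path, new):
    """Return a copy of `tree` with the node at `path` swapped for `new`."""
    if not path:
        return new
    func, children = tree
    children = list(children)
    children[path[0]] = replaced(children[path[0]], path[1:], new)
    return (func, children)

def crossover(a, b):
    """Swap a random subtree of `a` for a random subtree of `b`."""
    return replaced(a, random.choice(list(paths(a))),
                    subtree(b, random.choice(list(paths(b)))))

def mutate(tree):
    """Replace a random subtree with a freshly grown one."""
    return replaced(tree, random.choice(list(paths(tree))), random_tree(2))

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=error)              # lower error = fitter
    parents = population[:50]               # discard the worst programs
    population = parents \
        + [crossover(random.choice(parents), random.choice(parents)) for _ in range(140)] \
        + [mutate(random.choice(parents)) for _ in range(10)]
print("best error:", error(min(population, key=error)))
```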

We can trace genetic programming back to the late 1980s, with one of the main proponents being John Koza. But even though it has since made significant research advances, genetic programming is not used on a daily basis by commercial organisations or home computer users. Given how tricky it can be to develop software systems that work effectively and efficiently, it would seem sensible to get computers to help in the same way they are changing many other industries.

Selection, Mutation and Crossover.

Why hasn't evolutionary computation been adopted?

The commercial sector hasn't embraced evolutionary computation as it has other technologies developed by researchers. For example, 3-D printing was invented in the 1980s and, after a long period of development, is now being used in industrial manufacturing and even by people in their homes. Similarly, augmented reality, virtual reality and artificial intelligence have emerged from research labs to become major products for big tech companies.

One of the key issues holding evolutionary computation back is the failure of researchers to focus on problems that the commercial sector would recognise. For example, computer scientists have intensively studied how evolutionary computation could be used to schedule exam timetables or work out routes for vehicles.

But researchers often only study simplified versions of problems that are of little use in the real world. For example, many vehicle routing simulations calculate the distance between two points using a straight line. Vehicle routes in the real world rarely follow straight lines, and have to contend with one-way systems, breakdowns, legal issues (such as how long before a driver must rest), time constraints and a whole lot more. However, this complexity is actually where evolutionary computation could help. If you can adequately define the problem as it occurs in the real world, then the evolutionary algorithm should be able to deal with its complexity.
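To illustrate the point rather than prescribe a method, here is a hedged sketch of how such real-world complications could be folded into a single fitness function for an evolutionary algorithm. The road_time function, the 9-hour driving limit and the penalty weight are all invented for the example.

```python
MAX_DRIVING_HOURS = 9.0     # e.g. an assumed legal daily driving limit
PENALTY_PER_HOUR = 100.0    # cost charged for each hour of violation

def route_fitness(route, road_time, deadlines):
    """Lower is better: total travel time plus penalties for violations.

    route     -- ordered list of stops, e.g. ["depot", "a", "b", "depot"]
    road_time -- function (a, b) -> hours along actual roads (one-way
                 systems included), assumed to come from a routing API
    deadlines -- dict mapping stop -> latest acceptable arrival time
    """
    total = clock = 0.0
    for a, b in zip(route, route[1:]):
        leg = road_time(a, b)
        clock += leg
        total += leg
        late = clock - deadlines.get(b, float("inf"))
        if late > 0:
            total += PENALTY_PER_HOUR * late   # missed a time window
    if clock > MAX_DRIVING_HOURS:              # driver over the legal limit
        total += PENALTY_PER_HOUR * (clock - MAX_DRIVING_HOURS)
    return total

# Toy usage with hand-made (non-straight-line) travel times:
times = {("depot", "a"): 1.5, ("a", "b"): 2.0, ("b", "depot"): 1.0}
cost = route_fitness(["depot", "a", "b", "depot"],
                     lambda x, y: times[(x, y)],
                     {"b": 3.0})
```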

Another problem is that the solutions evolutionary computation generates are often hard to explain. Even if a system evolves a program with a perfect outcome, how it actually works might be a mystery to a human programmer, as the system may have produced complex code that is difficult to interpret and understand.

An evolutionary computation system is also complex to implement and support, and this may put off some commercial organisations. It would help if there were an easy-to-use framework that hid much of the underlying complexity. While such frameworks exist in the scientific community, they are not easily accessible to the commercial sector, never mind home users.
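One example of such a research framework is DEAP, an open-source Python library. A minimal sketch of the classic OneMax problem (evolving a bit string of all 1s) looks roughly like this, with the population size, rates and generation count chosen arbitrarily for illustration:

```python
import random
from deap import base, creator, tools, algorithms

# Maximise a single objective: the number of 1s in a 100-bit string.
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_bool, 100)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", lambda ind: (sum(ind),))  # fitness is a tuple
toolbox.register("mate", tools.cxTwoPoint)             # crossover
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=300)
pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
                             ngen=40, verbose=False)
best = tools.selBest(pop, 1)[0]
print(sum(best), "ones out of 100")
```

The appeal of a framework like this is that selection, crossover and mutation come ready-made; the user mostly supplies the fitness function.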

Frederick Brooks, the famed IBM software architect, observed that you cannot tackle increasingly large software development projects simply by throwing more people at them. It would be an immense help to the software development industry if, instead of having to manually develop every piece of a system, developers could specify the requirements of its key parts and let an evolutionary process deliver the solutions.



Comments

Eikka, Mar 27, 2018
so where are they?


Hype Alley 7, Buzzword Town, Silicon Valley, California.

Genetic programming requires you to define the selection criteria in a way that is narrow enough to only let through programs that solve the task without introducing errors or bugs that would lead to undesired behaviour. To do that, you must understand the problem so thoroughly that you know what the program absolutely shouldn't be doing, even when the end result looks right.

For example, if you tell the program to earn you $10, you must specify that it absolutely must not do so by breaking into a bank's computer system and stealing the money for you.

In this way, defining the "victory condition" becomes just as complicated as, if not more complicated than, writing the program the old-fashioned way. The hype was that you could give the program some vague criteria and it would fill in the blanks, but then you wouldn't understand why the program works, or how, and you couldn't use it.

Eikka, Mar 27, 2018
3-D printing was invented in the 1980s and after a long period of development...


3D printing was patented in the 1980s, and the patents expired around 2009 to 2014, at which point the industry picked up the technology and actually started using it. In 2016 the patents for SLS printing in metals expired, and that's when we started reading reports about 3D printed rocket nozzles etc.

Not much development went on in between, because the patents already covered every imaginable way to potentially implement a 3D printer, all the way down to the computer algorithms that would control the tool paths.

Companies like 3D Systems and Stratasys were set up to commercialize the technology back in the day, but the machines they offered were madly expensive, thanks to the monopoly, so of course practically nobody used them for anything. They sold a few, but the 3D printing industry didn't start to emerge until the IP on the machines expired.
Eikka, Mar 27, 2018
3D printing experienced the Xerox effect. Xerographic printing was patented in the 1940s, but it wasn't until the 1970s that photocopiers started appearing everywhere: Xerox's patents on the method expired and the FTC forced the company to license some of the supporting technologies to its competitors, so cheaper and better alternatives flooded the market, taking Xerox from a 100% market share to 14% in just four years.

In this way, patents delay the onset of disruptive technologies by about 25 years from the technology trigger to wider market adoption. One can only imagine what's in store for 2043 when today's patents start to expire.
