
Bingo. So many genetic/evolutionary algorithms assume you need a large population with crossover to avoid these supposed 'local minima'.

Very often, the most efficient evolutionary search algorithm involves a population of one (or two, depending on how you count them): generate a mutation, compare it to the current best. If it's worse, throw it away; if it's better or the same, keep it and throw the original away. Rinse and repeat.
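The loop described here is essentially the classic (1+1) EA. A minimal sketch on bitstrings, assuming an illustrative OneMax-style fitness and the standard 1/n bit-flip mutation rate (both are my choices, not from the comment):

```python
import random

def one_plus_one_ea(fitness, genome, steps=10000, rate=None):
    """(1+1) EA: mutate a single parent; keep the child if it is
    no worse than the parent, otherwise discard it."""
    n = len(genome)
    rate = rate if rate is not None else 1.0 / n  # standard 1/n bit-flip rate
    best, best_fit = genome[:], fitness(genome)
    for _ in range(steps):
        child = [b ^ (random.random() < rate) for b in best]  # flip each bit w.p. rate
        f = fitness(child)
        if f >= best_fit:  # accepting ties is what lets the search drift along neutral networks
            best, best_fit = child, f
    return best, best_fit

# Example: OneMax (maximize the number of 1 bits)
random.seed(0)
sol, fit = one_plus_one_ea(sum, [0] * 30, steps=5000)
```

Note the `>=` in the acceptance test: accepting equal-fitness mutants (rather than only strict improvements) is what connects this algorithm to the neutral-network research mentioned above.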

Search for "neutral networks" for more information about research into determining the "shape" of the fitness landscape. (Unfortunately, Google throws in plenty of search results about neural networks and network neutrality).



There's a bit of goofiness about crossover in this thread.

So far as I know, since its inception in the 1970s, crossover has never been thought of as a mechanism for "avoiding local minima". Indeed, the early trend of crossover without mutation was what led to studies of so-called "premature convergence," where the population would converge to a single genome and be permanently locked in a local optimum.

Rather, crossover is a procedure for spreading good subsolutions (so-called "building blocks") rapidly through the population. This effectively projects the search space into many smaller spaces, and so has the potential of searching much faster if -- and this is a big if -- the problem in question has very low dependence among its parameters. In the GA world, this dependence is known as parameter "linkage" or "epistasis". Other techniques do something similar: for example, EDAs, coevolution, and ant colony optimization all perform a related projection and likewise make a related assumption of low dependence.
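To make the building-block mixing concrete, a minimal sketch of uniform crossover on bitstrings (the two-block fitness structure is illustrative, chosen to show the low-linkage case where crossover shines):

```python
import random

def uniform_crossover(p1, p2):
    """Uniform crossover: each gene is drawn independently from one of
    the two parents. With low linkage between parameters, good
    'building blocks' from both parents can combine in a single child."""
    return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

# Two parents, each carrying a different good subsolution:
random.seed(1)
p1 = [1, 1, 1, 1, 0, 0, 0, 0]  # good first half
p2 = [0, 0, 0, 0, 1, 1, 1, 1]  # good second half
child = uniform_crossover(p1, p2)
```

Under high linkage (epistasis) the same operator tends to destroy building blocks rather than combine them, which is exactly the "big if" above.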

> Very often, the most efficient evolutionary search algorithm involves a population of one (or two, depending how you count them): generate a mutation, compare it to the current best.

Not really true. There are lots of EAs which are touted as "the most efficient". A few, like CMA-ES, are by and large mutation-only algorithms. But others, like cooperative coevolution or hBOA (and other EDAs), are essentially extremes in crossover-like mixing.


Just nitpicking here: how is cooperative coevolution a crossover-dependent approach? Are you referring to some specific algorithm?


> how is cooperative coevolution a crossover-dependent approach?

It's not. What I meant was that CCEAs and univariate EDAs (and most forms of ACO) make the same basic linkage assumption inherent in crossover, indeed to even more of an extreme: that you can assemble individuals from bits and pieces scattered about the space without any dependence issues.
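A univariate EDA makes that assumption explicit: it models each gene position with an independent marginal probability and samples whole individuals from those marginals, with no notion of dependence between positions. A minimal UMDA-style sketch (population size, elite fraction, clamping bounds, and the OneMax fitness are all illustrative choices, not from the comment):

```python
import random

def umda(fitness, n, pop=50, elite=10, gens=60):
    """UMDA-style univariate EDA: estimate an independent per-position
    marginal from the elite individuals, then sample each gene
    independently -- the 'no linkage' assumption in its purest form."""
    probs = [0.5] * n
    best, best_fit = None, float("-inf")
    for _ in range(gens):
        population = [[int(random.random() < p) for p in probs] for _ in range(pop)]
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) > best_fit:
            best, best_fit = population[0], fitness(population[0])
        top = population[:elite]
        # Re-estimate each marginal from the elite, clamped to preserve diversity
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in top) / elite))
                 for i in range(n)]
    return best, best_fit

random.seed(2)
sol, fit = umda(sum, 20)  # OneMax: maximize the number of 1 bits
```

Because every sampled individual is assembled position-by-position from independent marginals, any pairwise dependence in the problem is invisible to the model -- the same assumption CCEAs make when they evolve subcomponents separately.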




