Plotkin insists we will soon enter the Automatic Invention Age, and I have no reason to doubt it. Still, I'm not going to recommend the book: Plotkin spends entirely too much time on the legal process of patents, and on how this new era will affect the legal world, and not enough on the fun geek stuff.
But here's an excerpt from the book that caught my eye, perhaps because it stirred a memory from something I read in the late 90s (italics mine):
"Thinking Machines founder Danny Hillis used evolutionary computation software to create programs for sorting numbers. When you give such a program a list of scrambled numbers, 9 8 2 7 3, it gives you back a sorted list of the same numbers: 2 3 7 8 9. Hillis examined the number-sorting programs that his software had evolved but could not understand how they worked. 'I have carefully examined their instruction sequences, but I do not understand them: I have no simpler explanation of how the programs work than the instruction sequences themselves. It may be that the programs are not understandable.'"
In other words, Hillis could not find any simpler way of describing the programs other than the program statements themselves, or by watching a computer execute them. This, as it happens, is a problem mathematicians and physicists face all the time.
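Hillis's real experiments evolved sorting networks on a Connection Machine, with far more machinery (including host-parasite coevolution) than anything that fits in a blog post. But the basic idea, breeding a fixed-length sequence of compare-and-swap instructions until it sorts, can be sketched in a few dozen lines. Everything below (population size, mutation scheme, fitness function) is my own toy choice, not Hillis's setup:

```python
import random

N = 5        # length of the lists to sort
POP = 60     # population size
GENS = 200   # generations of selection and mutation
GENOME = 30  # compare-and-swap instructions per program

def apply_network(net, xs):
    """Run a comparator program: for each (i, j) with i < j,
    swap xs[i] and xs[j] if they are out of order."""
    xs = list(xs)
    for i, j in net:
        if xs[i] > xs[j]:
            xs[i], xs[j] = xs[j], xs[i]
    return xs

def fitness(net, tests):
    """Count how many positions end up correct across the test lists."""
    return sum(
        a == b
        for t in tests
        for a, b in zip(apply_network(net, t), sorted(t))
    )

def random_comparator():
    i, j = sorted(random.sample(range(N), 2))
    return (i, j)

def mutate(net):
    """Copy a program, replacing one random instruction."""
    net = list(net)
    net[random.randrange(len(net))] = random_comparator()
    return net

def evolve(seed=0):
    random.seed(seed)
    tests = [random.sample(range(10), N) for _ in range(20)]
    pop = [[random_comparator() for _ in range(GENOME)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda n: fitness(n, tests), reverse=True)
        elite = pop[:POP // 4]  # keep the best quarter unchanged
        pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]
    return pop[0]

best = evolve()
print(apply_network(best, [9, 8, 2, 7, 3]))
```

The punchline survives even at this scale: the winning program is just thirty opaque (i, j) pairs. Nothing in it looks like bubble sort or any algorithm a person would write; you can only verify what it does by running it, which is exactly the situation Hillis describes.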
In order to understand the behavior of something, say, a waterspout or an avalanche, scientists try to create a simpler, more idealized model. In doing so, they have to eliminate or ignore some parts of what they are studying, and hope that the parts they ignore are not too important in determining the general behavior. (You can imagine that, prior to the age of computers, this was a dull, dreary, boring series of steps involving a great deal of arithmetical drudgery, and you'd be right. Even now, in the age of computers, it can still be a difficult task.)
But there is a problem with simulation. Make the model too simple and it is just a toy: it doesn't properly simulate the thing you want to study, and it may fail to show behaviors you know, through careful observation, to occur in the real thing. Include too many parts, though, and your model becomes too complex. In fact, what you find, to your great dismay, is that the model is even more complicated than the actual thing you are studying.
To put it in a silly way (silly as in philosophical, since philosophy is generally silly): if I were to create a model of the entire universe and run it on a computer, I would need a computer bigger than the universe, running a program more complex than the universe. And well, that's just very, very silly.