Knowledge Acquisition Engines
In the 1970s the Science Museum in London had a computer terminal running a relatively simple program - one which learned as you used it. It played a guessing game, trying to identify an animal you were thinking of by asking a series of yes/no questions. If it guessed wrong, it would ask you for a yes/no question that distinguished your animal, so that it could get the answer right in future.
Other programs of similar vintage learned games of skill such as checkers, using similar principles.
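The guessing game can be sketched as a binary tree of yes/no questions whose leaves are animal guesses: a wrong guess is replaced by a new question node. This is a minimal illustrative reconstruction, not the museum's actual code; all names are invented.

```python
# Sketch of the animal guessing game: a binary decision tree that grows
# each time the program guesses wrong. Illustrative names throughout.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text      # a question, or an animal name at a leaf
        self.yes = yes        # subtree for a "yes" answer
        self.no = no          # subtree for a "no" answer

def learn(leaf, animal, question, yes_means_new):
    """Replace a wrong leaf guess with a distinguishing question node."""
    old_guess = Node(leaf.text)
    new_guess = Node(animal)
    leaf.text = question
    if yes_means_new:
        leaf.yes, leaf.no = new_guess, old_guess
    else:
        leaf.yes, leaf.no = old_guess, new_guess

# Start with a single guess, then teach the tree one new animal:
# the player was thinking of a dog, and "Does it bark?" separates them.
root = Node("cat")
learn(root, "dog", "Does it bark?", True)
print(root.text, "/", root.yes.text, "/", root.no.text)
# → Does it bark? / dog / cat
```

Each wrong guess adds exactly one question, so the tree's knowledge grows directly from play.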
Self Improving GUIs
A limited form of learning is found in some Microsoft programs. After a time, menu items which are infrequently used are no longer shown by default. In theory at least, the GUI learns which menu items you use most often, and helps you improve your productivity by showing those first.
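One simple way such a menu might work is to count how often each item is chosen and show only the most-used items by default. The class, threshold, and item names below are illustrative assumptions, not Microsoft's actual implementation.

```python
# Hedged sketch of an adaptive menu: usage counts decide which items
# are shown by default. Names and the visibility limit are invented.
from collections import Counter

class AdaptiveMenu:
    def __init__(self, items, visible=3):
        self.items = list(items)   # full menu, in original order
        self.visible = visible     # how many items to show by default
        self.uses = Counter()      # how often each item was selected

    def select(self, item):
        self.uses[item] += 1

    def shown(self):
        # Most frequently used first; Python's sort is stable, so
        # unused items keep their original menu order.
        return sorted(self.items, key=lambda i: -self.uses[i])[:self.visible]

menu = AdaptiveMenu(["Open", "Save", "Print", "Export", "Close"])
for _ in range(5):
    menu.select("Save")
menu.select("Print")
print(menu.shown())  # → ['Save', 'Print', 'Open']
```

Items you never touch drop out of the default view, while the full list stays available underneath.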
Programs that Write Programs
Most modern processors have built-in multiplication hardware. This was not the case in the late 1980s. Around 1988 an interesting program was written to search for fast multiplication algorithms built from additions, subtractions, conditional jumps and shifts. Because the computer could do an exhaustive search, it was able to find algorithms for multiplying a number by fixed constants that were faster than those devised by expert programmers.
Perhaps even more interestingly, the speed of these algorithms was data-dependent. One algorithm for multiplying by 376152 might be faster for small numbers than another that was faster for larger numbers. Given a stream of numbers to multiply, a refinement of the idea enabled the computer to pick the algorithm which statistically gave the best results for that data stream.
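The search idea can be reconstructed in miniature: starting from the multiplier 1, each step derives a new multiple of x by shifting a known multiple left or by adding or subtracting two known multiples, and an exhaustive, depth-limited search finds the shortest such sequence. This is an illustrative sketch in the spirit of the 1988 program, not the original algorithm, and the bounds chosen are arbitrary.

```python
# Exhaustive search for a short shift/add/subtract sequence that
# multiplies by a fixed constant. Each step produces a new multiplier
# from multipliers already computed; iterative deepening finds the
# shortest sequence. Illustrative reconstruction, not the 1988 code.

def search(target, depth, known=(1,)):
    """Depth-limited search for a sequence of steps reaching `target`."""
    if depth == 0:
        return None
    cands = set()
    for a in known:
        for s in range(1, 6):                    # left shifts by 1..5
            cands.add(("shl", a, s, a << s))
        for b in known:
            cands.add(("add", a, b, a + b))
            if a > b:
                cands.add(("sub", a, b, a - b))
    for step in sorted(cands, key=lambda t: t[3]):   # cheapest value first
        v = step[3]
        if v in known or v > 4 * target:             # prune useless values
            continue
        if v == target:
            return [step]
        rest = search(target, depth - 1, known + (v,))
        if rest is not None:
            return [step] + rest
    return None

def find_sequence(target, max_steps=5):
    # Iterative deepening: the first depth that succeeds is shortest.
    for depth in range(1, max_steps + 1):
        seq = search(target, depth)
        if seq is not None:
            return seq
    return None

# Multiplying by 10 takes three cheap operations: x*2, then x*8, then x*10.
seq = find_sequence(10)
print([v for _, _, _, v in seq])  # → [2, 8, 10]
```

Real targets like 376152 need larger bounds and cleverer pruning, but the principle is the same: the machine tries combinations no programmer would bother to enumerate.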
Profile-Based Optimisation
Princeton University researchers have designed algorithms that learn from data whose details they don't know ahead of time. The algorithms tune themselves to better handle those types of data.
Even though any individual piece of data is random, an algorithm can learn the statistical properties of the data as a whole, and it can improve after learning from a relatively small number of samples.
The researchers built two self-improving algorithms: a sorting algorithm and a clustering algorithm. Sorting algorithms put pieces of data into some type of order; clustering algorithms group similar pieces of data together.
Sorting is a particularly good case for this approach. An algorithm that is good for sorting data that is already almost in order may do better in those cases than an algorithm which beats it on more chaotic data.
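A self-tuning sorter can be sketched as follows: it measures how disordered each incoming list is, keeps a running estimate, and picks insertion sort (fast on nearly-sorted input) or a general-purpose sort accordingly. The disorder measure, threshold, and class names are illustrative assumptions, not the Princeton researchers' actual method.

```python
# Hedged sketch of a self-tuning sorter: it learns from recent inputs
# whether they tend to arrive nearly sorted, and chooses an algorithm
# that suits that tendency. Threshold and names are invented.

def insertion_sort(a):
    # Fast when the input is already almost in order.
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def disorder(a):
    """Fraction of adjacent pairs that are out of order."""
    if len(a) < 2:
        return 0.0
    return sum(x > y for x, y in zip(a, a[1:])) / (len(a) - 1)

class SelfTuningSorter:
    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.history = []          # disorder of inputs seen so far

    def sort(self, a):
        self.history.append(disorder(a))
        avg = sum(self.history) / len(self.history)
        if avg < self.threshold:
            return insertion_sort(a)   # inputs tend to be nearly sorted
        return sorted(a)               # general-purpose fallback

sorter = SelfTuningSorter()
nearly = list(range(100))
nearly[10], nearly[11] = nearly[11], nearly[10]   # one swap out of place
print(sorter.sort(nearly) == sorted(nearly))  # → True
```

Either branch returns correctly sorted output; what the learning changes is only which algorithm does the work, which is exactly the kind of default-configuration tuning the next paragraph describes.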
These self-tuning Princeton algorithms promise to be forerunners of software that alters its default configuration on its own as it learns how it is used.