John Chamberlain
Developer Diary · You Heard It Here First · 3 December 2003
Can AI Write Algorithms? |
Perhaps the most direct test of whether AI has been achieved is whether a program can write another program, or more specifically an algorithm. Sometimes this is called "program generation". A hot topic now is "genetic algorithms", which evolve a population of candidate solutions by gradually mutating and recombining their parameters. Some researchers are trying to accomplish program generation with genetic algorithms.
An algorithm is a sequence of instructions that takes a range of inputs and generates a range of outputs. For example, a text search algorithm has as its input two arrays of characters: the text to be searched and the substring to be found. The algorithm tries to find the occurrences of the substring in the search text. Ideally we would like an algorithm that does this by executing the fewest instructions. It would also be nice if the algorithm itself were as small as possible and used as little memory as possible. The challenge of generating a program to do this is to make the computer understand what is wanted.

Imagine we had a computer that could execute an infinite number of instructions per second. One strategy would be to first generate a sample set of desired inputs and outputs ("jobs") and then have the computer generate every possible program of I instructions, as sketched in the code at the end of this entry. We could start with I small, say 15 instructions, and then repeatedly increment it by one. The computer might then come back with the result that a sequence of 24 instructions is the minimum necessary to solve all the sample jobs. If the sample jobs were representative, the algorithm might be a general solution to the problem. We could then continue finding larger algorithms that also solve all the jobs and record the average number of operations each one takes to solve the sample jobs. Thus we would end up with a graph of algorithm size against speed. We could then add the constraint of how much extra memory is used, making the graph three-dimensional. By deciding what constraints to impose we could pick the optimal algorithm for our purpose. A certain algorithm might also be clearly identified as the best if the alternatives were only slightly faster but used a lot more resources or were a lot larger. In other words, there might be a "naturally best" algorithm.

The problem with this scenario is that an infinitely fast computer does not exist. There are 256 instruction types (opcodes) in the Java virtual machine, and many of them take arguments. If we restrict the arguments to a dozen possibilities, that still makes for thousands of possible instructions. Investigating all possible 10-instruction programs would therefore require on the order of thousands^10, or roughly 10^35, tests, and for the 24-instruction programs mentioned above the number of candidates exceeds the number of atoms in the universe.

One improvement would be to aggregate instructions into larger functional units and then permute the larger units instead. More than this, we would like to build up a solution to the problem from pieces we already have. For example, if we already know how to find a single character in a string, then we could start with this algorithm and expand it in various ways. Breaking the problem into sub-components like this involves knowing more than inputs and outputs: we need to characterize those inputs and outputs and relate them to algorithms we have already devised and remembered. The problem of program generation can hypothetically be solved by brute force, but with knowledge representation there is the possibility of reducing the number of steps needed to find a solution.
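To make the enumerate-and-test strategy concrete, here is a minimal sketch in Java. It does not generate real JVM bytecode; instead it searches over a tiny invented two-register machine with six opcodes, and the sample jobs encode a made-up target function f(a, b) = 2 * (a + b). The instruction set, the jobs, and the class and method names are all assumptions chosen purely for illustration, not a finished program generator.

import java.util.Arrays;

/*
 * A minimal sketch of the brute-force "enumerate and test" strategy described
 * above. The two-register machine, its six opcodes, and the sample jobs are
 * invented for illustration -- this is not real JVM bytecode.
 */
public class BruteForceProgramSearch {

    // Toy instruction set: each opcode transforms the register pair (r0, r1).
    enum Op { ADD, SUB, SWAP, COPY, INC, NEG }

    // Sample jobs: { input a, input b, expected output }.
    // The hidden target function here is f(a, b) = 2 * (a + b).
    static final long[][] JOBS = { {1, 2, 6}, {3, 4, 14}, {0, 5, 10}, {7, -2, 10} };

    // Run a candidate program on one job; the result is the final value of r0.
    static long run(Op[] program, long a, long b) {
        long r0 = a, r1 = b, tmp;
        for (Op op : program) {
            switch (op) {
                case ADD:  r0 = r0 + r1;                break;
                case SUB:  r0 = r0 - r1;                break;
                case SWAP: tmp = r0; r0 = r1; r1 = tmp; break;
                case COPY: r1 = r0;                     break;
                case INC:  r0 = r0 + 1;                 break;
                case NEG:  r0 = -r0;                    break;
            }
        }
        return r0;
    }

    // A program "solves" the sample jobs if it reproduces every expected output.
    static boolean solvesAllJobs(Op[] program) {
        for (long[] job : JOBS) {
            if (run(program, job[0], job[1]) != job[2]) return false;
        }
        return true;
    }

    // Enumerate every program of length 1, 2, ... maxLength and return the
    // first (hence shortest) one that solves all the sample jobs.
    static Op[] search(int maxLength) {
        Op[] ops = Op.values();
        for (int length = 1; length <= maxLength; length++) {
            int[] choice = new int[length];                   // odometer over opcode choices
            long total = (long) Math.pow(ops.length, length); // ops.length^length candidates
            for (long n = 0; n < total; n++) {
                Op[] program = new Op[length];
                for (int i = 0; i < length; i++) program[i] = ops[choice[i]];
                if (solvesAllJobs(program)) return program;
                for (int i = length - 1; i >= 0; i--) {       // advance the odometer
                    if (++choice[i] < ops.length) break;
                    choice[i] = 0;
                }
            }
        }
        return null; // no program of at most maxLength instructions solves the jobs
    }

    public static void main(String[] args) {
        Op[] best = search(5);
        System.out.println(best == null
                ? "no solution up to 5 instructions"
                : "shortest program found: " + Arrays.toString(best));
        // Expected to print: shortest program found: [ADD, COPY, ADD],
        // which computes r0 = 2 * (a + b).
    }
}

Even on this toy machine the number of candidate programs grows as 6^I, so the combinatorial explosion discussed above sets in almost immediately; that is why aggregating instructions into larger units, or reusing known sub-algorithms, matters so much.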