Integration of Probabilistic Evolutionary Program Learning and Inference

MOSES (Meta-Optimizing Semantic Evolutionary Search) is an evolutionary learning algorithm that extends John Koza's “genetic programming” learning framework in several important ways.

Genetic programming seeks to automatically learn computer programs by emulating the process of evolution through natural selection. In genetic programming, a population of programs is generated and evaluated on a fitness function. The unfit programs are discarded. The fittest programs survive and are combined and mutated to form a new generation. The new generation of programs then undergoes the same process of evaluation, selection, and so on.
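In outline, one run of this loop might look like the following Python sketch, where `random_program`, `fitness`, `crossover`, and `mutate` are hypothetical problem-specific functions rather than any particular library's API:

```python
import random

def evolve(random_program, fitness, crossover, mutate,
           pop_size=100, generations=50, survival_rate=0.2):
    """Generic generational loop: evaluate, select, recombine, mutate."""
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank programs by fitness and keep only the fittest fraction.
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:max(2, int(survival_rate * pop_size))]
        # Refill the population by combining and mutating survivors.
        population = list(survivors)
        while len(population) < pop_size:
            parent_a, parent_b = random.sample(survivors, 2)
            population.append(mutate(crossover(parent_a, parent_b)))
    return max(population, key=fitness)
```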

MOSES extends this paradigm by

● considering a collection of subpopulations of programs, each focused on searching a different region of “program space”;

● placing programs into a novel hierarchical “elegant normal form” which allows them to be analyzed more effectively; and

● supplementing mutation and combination with a probabilistic model of which programs will be fit, and using this probabilistic model to generate new programs (sketched in code below).
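As a rough sketch of how these extensions reshape the generational loop (not the actual MOSES implementation, which is considerably more involved), assuming hypothetical helpers `reduce_to_normal_form`, `fit_probabilistic_model`, and `sample_from_model`:

```python
def moses_step(demes, fitness, reduce_to_normal_form,
               fit_probabilistic_model, sample_from_model, samples=100):
    """One simplified MOSES-style iteration. Each deme is a list of
    programs exploring its own region of program space."""
    next_demes = []
    for population in demes:
        # Place each program in a hierarchical normal form so that
        # structurally equivalent candidates look alike and are easier
        # to analyze and model.
        normalized = [reduce_to_normal_form(p) for p in population]
        scored = [(p, fitness(p)) for p in normalized]
        # Learn a probabilistic model of which programs in this region
        # tend to be fit, then sample new candidates from the model
        # rather than relying solely on mutation and combination.
        model = fit_probabilistic_model(scored)
        next_demes.append([sample_from_model(model) for _ in range(samples)])
    return next_demes
```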

These extensions have been shown to provide superior learning performance in a variety of cases. Applications have included genomic data analytics, financial prediction from heterogeneous data sources, and control of virtual agents in game worlds. The same sophistication, however, means that MOSES can require significant customization for each new application area.

Most of MOSES’s computation is not explicitly framed as reasoning. This choice buys efficiency at the cost of flexibility: fitness functions may be run in parallel with extreme efficiency, evaluating millions of candidates in seconds, but a great deal of transparency is lost in the process. For instance, since only the best candidate programs are kept for subsequent analysis, the bulk of the computation is simply discarded.
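The efficiency side of that trade-off can be pictured as a generic parallel evaluation-and-pruning step (a sketch using Python’s standard library, not MOSES’s actual C++ internals):

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_and_prune(fitness, population, keep=10):
    """Score every candidate in parallel, then keep only the best few;
    the rest of the computation is discarded, trading transparency for
    speed. Assumes `fitness` is a picklable, top-level function."""
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(fitness, population))
    ranked = sorted(zip(population, scores), key=lambda ps: ps[1],
                    reverse=True)
    return ranked[:keep]
```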

The key, however, is that some of MOSES’s computation is framed as reasoning: it reasons about the probability that exploring a specific region of the search space will be fruitful. These decisions may be infrequent compared to the total volume of computation, but they are critical to the success of the search. Such anchor points constitute the bridges between the efficient forms of computation (which are opaque) and the more holistic forms (which are transparent), and they are the opening that allows the benefits of cognitive synergy to flow in.

Fusing MOSES with PLN and other forms of reasoning and learning has been part of the plan since MOSES was created in 2005–2007. This fusion is expected to allow the algorithm to scale up to learn much more complex programs than is currently possible, progressing toward AGI and enabling a great variety of additional applications. To enable MOSES and PLN to work together effectively, we are now porting MOSES to the unified rule engine, with the critical decisions explicitly framed as reasoning and the rest remaining encapsulated as efficient, non-transparent computation. The existing mechanisms for inference control and meta-learning, currently present in the unified rule engine for use with PLN, will then become available to MOSES.
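As a toy illustration of the kind of decision framed as reasoning above, namely judging whether a region of program space merits further exploration, the sketch below scores demes with a standard upper-confidence-bound rule. This rule is a stand-in for the PLN-based judgment, which is not specified here in detail; all names are illustrative.

```python
import math

def choose_deme(deme_stats, exploration=1.4):
    """Pick the deme whose region of program space looks most fruitful.

    `deme_stats` maps a deme id to (mean_fitness, times_explored). A
    standard upper-confidence-bound rule stands in for the PLN-style
    judgment of how probable it is that exploration will pay off.
    """
    total_visits = sum(n for _, n in deme_stats.values())
    def score(stats):
        mean, n = stats
        # Favor regions with high observed fitness, but keep some
        # exploration pressure on regions visited only a few times.
        return mean + exploration * math.sqrt(
            math.log(total_visits + 1) / (n + 1))
    return max(deme_stats, key=lambda d: score(deme_stats[d]))

# For example: a less-explored but still-promising region can win out.
stats = {"deme_a": (0.80, 50), "deme_b": (0.65, 3)}
print(choose_deme(stats))  # prints "deme_b"
```

The design point is that this one small, transparent decision steers a large volume of opaque computation; swapping the stand-in rule for genuine PLN inference is exactly the kind of anchor point described above.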
