Guiding Inference with Nonlinear Attention Allocation

In an AI system containing a large amount of data and/or a large number of cognitive processes, the allocation of attention becomes critical. OpenCog handles this via a system called Economic Attention Allocation (ECAN), which allocates tokens of "artificial money" among the nodes and links in its knowledge hypergraph; these tokens represent units of short-term and long-term importance to the system and its overall goals.
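The economic metaphor can be made concrete with a toy sketch. The class and method names below are hypothetical illustrations, not the real ECAN API: each atom holds short-term importance (STI) and long-term importance (LTI) values, and a central bank stimulates useful atoms with currency while collecting "rent" from all atoms, so the total money supply is conserved and unused atoms gradually fade from focus.

```python
class Atom:
    def __init__(self, name, sti=0.0, lti=0.0):
        self.name = name
        self.sti = sti   # short-term importance (attention currency)
        self.lti = lti   # long-term importance

class AttentionBank:
    def __init__(self, funds=100.0):
        self.funds = funds  # currency not yet assigned to any atom

    def stimulate(self, atom, amount):
        # Move currency from the bank into an atom's STI.
        amount = min(amount, self.funds)
        self.funds -= amount
        atom.sti += amount

    def collect_rent(self, atoms, rate=0.1):
        # Tax each atom's STI back into the bank, so atoms that are
        # no longer stimulated slowly lose attention.
        for atom in atoms:
            rent = atom.sti * rate
            atom.sti -= rent
            self.funds += rent

bank = AttentionBank(funds=100.0)
cat, dog = Atom("cat"), Atom("dog")
bank.stimulate(cat, 30.0)
bank.stimulate(dog, 10.0)
bank.collect_rent([cat, dog], rate=0.1)
total = bank.funds + cat.sti + dog.sti  # conserved at 100.0
```

The key design point the sketch illustrates is conservation: because importance is a fixed currency rather than an unbounded score, attention given to one atom is implicitly taken from the rest of the system.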

In collaboration with the Hanson AI team, Osiris has put significant effort into making the ECAN framework operate on large AtomSpaces and verifying that the way it directs attention is cognitively sensible and pragmatically effective.

ECAN has many practical uses today. It directs OpenCog's attention as the system generates natural language dialogue for the Sophia robot. When MOSES learns models of biological datasets and imports them into AtomSpaces for PLN to analyze, ECAN is essential in directing PLN's attention. It will also be critical for the general guidance of the URE's rule applications.

Learning good inference control rules is very important, but even with them, controlling reasoning remains complicated because combining rules optimally takes a lot of computation. If the unified rule engine had too many control rules and had to weigh every potentially relevant rule before reaching the best decision, it could deliberate for an indefinite amount of time, stalling the system.
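One simple way to bound that deliberation cost, sketched below with hypothetical names rather than the actual URE control mechanism, is to let attention prune the candidate set first: only rules whose importance clears an attentional-focus threshold are considered at all, and only the best few of those are weighed.

```python
def select_rules(rule_weights, focus_boundary, max_rules):
    """Pick at most max_rules rules whose importance is inside the
    attentional focus, highest-importance first.

    rule_weights: dict mapping rule name -> importance score.
    """
    in_focus = [(w, r) for r, w in rule_weights.items() if w >= focus_boundary]
    in_focus.sort(reverse=True)          # most important rules first
    return [r for _, r in in_focus[:max_rules]]

# Toy importance scores for a handful of inference rules.
rules = {"deduction": 0.9, "abduction": 0.4,
         "induction": 0.7, "modus-ponens": 0.2}
chosen = select_rules(rules, focus_boundary=0.5, max_rules=2)
# chosen == ["deduction", "induction"]
```

The deliberation step is now linear in the number of rules and bounded by `max_rules` thereafter, instead of growing with every possible rule combination.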

Happily, we can also use reasoning to improve ECAN itself. ECAN's behavior is encoded in a hypergraph of Hebbian links expressing how attention should be spread across data and processes, and this hypergraph is itself amenable to reasoning. Thus any component that can produce Hebbian links can be used to improve ECAN: pattern mining can discover basic Hebbian links, while PLN and MOSES can discover finer ones.
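The two halves of that loop can be sketched in a few lines. The function names and update rules below are illustrative assumptions, not the OpenCog implementation: a Hebbian link's strength is nudged upward when two atoms are in the attentional focus together, and ECAN then diffuses a fraction of each atom's STI along those links so attention spreads to related atoms.

```python
def update_hebbian(strength, a_in_focus, b_in_focus, lr=0.1):
    # Simple Hebbian learning: move strength toward 1 when both atoms
    # are simultaneously in the attentional focus, toward 0 otherwise.
    target = 1.0 if (a_in_focus and b_in_focus) else 0.0
    return strength + lr * (target - strength)

def diffuse(sti, links, fraction=0.2):
    # Spread a fraction of each atom's STI to its neighbours in
    # proportion to Hebbian strength; total STI is conserved.
    out = dict(sti)
    for (a, b), strength in links.items():
        flow = sti[a] * fraction * strength
        out[a] -= flow
        out[b] += flow
    return out

# Repeated co-occurrence in the focus strengthens the link...
links = {("cat", "animal"): 0.0}
for _ in range(10):
    links[("cat", "animal")] = update_hebbian(
        links[("cat", "animal")], True, True)

# ...so attention paid to "cat" now leaks toward "animal".
sti = {"cat": 10.0, "animal": 0.0}
sti = diffuse(sti, links)
```

Because any process that emits such links reshapes where diffusion sends attention, learners like pattern mining, PLN, or MOSES can tune ECAN simply by writing better Hebbian links into the hypergraph.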
