Analogies and Intelligence

At Existor, we have formed the view that analogy is at the heart of general intelligence. Intelligence has no heart except by analogy. This page is a “page” only by analogy to the pages of a book. You recognise each letter here by analogy to all those you have seen before. Nothing means anything except by analogy.

We suggest that analogies to “analogy” include contextuality, semantic similarity, categorisation, generalisation, memory recall, insight and inference.

Researching optimal ways to model analogy in language is one of the major strands of our Machine Learning work, on a path towards a Cleverbot 2.0, which will demonstrate new levels of natural language understanding and build user engagement further.

The way people learn language involves contexts of many kinds: words and sequences of words in relation to each other, and the same in relation to sights, sounds, touch, feelings, time, place, who we are with, and many more. Though of varying significance, those contexts are in many senses all equivalent to each other.

At Existor right now we have only text to work with, but a very large amount – over two billion interactions between our machine and real people. Working with it all is not easy, and so far we’ve concentrated on just a small subset.

Yet would you expect a machine to be able to work out for itself that the closest equivalents to “a machine” are as follows?

  a bot

  a robot

  a computer

  a program

  a computer program

  an ai


Or that what comes next is “a human”? Our model can do that with “a lot of” the language we give it (a toy sketch of this kind of lookup follows the list below)…

  lots of

  many

  some

  all

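
As promised above, here is a toy sketch of that kind of lookup. It is not our model – the four-line corpus is invented – but it shows how “what comes next” can fall out of raw successor counts:

    from collections import Counter, defaultdict

    # Invented toy corpus; the real dataset holds over two billion interactions.
    corpus = [
        "a machine is a robot",
        "a machine is a computer",
        "a machine is a human",
        "a machine is a human",
    ]

    # Count which word follows each two-word left context,
    # using raw counts rather than a compressed model.
    successors = defaultdict(Counter)
    for line in corpus:
        words = line.split()
        for i in range(2, len(words)):
            left = " ".join(words[i - 2:i])
            successors[left][words[i]] += 1

    # Most frequent continuation of "is a" in this toy data:
    print(successors["is a"].most_common(1))  # [('human', 2)]

With billions of real interactions in place of four toy lines, the same counting principle can rank one continuation above the rest.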

Even more, would you expect “humanity” to be understood as analogous, amongst others, to these…?

  reality

  knowledge

  consciousness

  artificial intelligence

  technology

  the internet


Most models of language have focused on words as the principal unit of measure. We treat words and phrases of all lengths equally, each seen in the contexts of the others. Doing so encodes sequence and provides much greater contextual information than single words alone. If you doubt it, consider the difference between “you doubt” and “doubt” alone.
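
To make the phrase-level idea concrete, here is a minimal sketch in Python, with an invented sentence and an arbitrary maximum phrase length, of extracting words and phrases of all lengths together with their immediate contexts, so that “you doubt” and “doubt” each get a context profile of their own:

    from collections import defaultdict

    sentence = "if you doubt it consider the difference".split()
    max_len = 3  # longest phrase extracted; an arbitrary choice here

    # Map every phrase of length 1..max_len to the words immediately
    # around it, so phrases of all lengths are treated equally.
    contexts = defaultdict(list)
    for length in range(1, max_len + 1):
        for i in range(len(sentence) - length + 1):
            phrase = " ".join(sentence[i:i + length])
            left = sentence[i - 1] if i > 0 else "<s>"
            right = sentence[i + length] if i + length < len(sentence) else "</s>"
            contexts[phrase].append((left, right))

    print(contexts["doubt"])      # [('you', 'it')]
    print(contexts["you doubt"])  # [('if', 'it')]

The longer phrase carries strictly more sequence information: its context already records that “if” precedes it, which the context of “doubt” alone cannot show.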

Most models of language compress information into as few dimensions of data as possible. We may well do so too once everything is understood, but for the moment we work with raw, complete data at very high dimensionality. That would normally pose a performance problem, but we have found major new optimisations that require no compression. As a result we are able to look at data from “multiple directions” and test many hypotheses. Complete data enables still greater contextuality to be “seen”, and the words and phrases of this specific sentence to be interpreted in their own unique contexts.
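
We will not detail those optimisations here, but the uncompressed setting itself is easy to picture. Below is a generic sketch, not our implementation, with invented counts: sparse context-count vectors compared by cosine similarity directly in the raw, high-dimensional space, with no dimensionality-reduction step:

    import math
    from collections import Counter

    # Sparse context-count vectors: context feature -> raw count.
    # The features and counts here are invented for illustration.
    machine = Counter({"talk to _": 12, "_ learns": 7, "a _ is": 5})
    robot   = Counter({"talk to _": 10, "_ learns": 6, "_ beeps": 2})
    banana  = Counter({"eat a _": 9, "yellow _": 4})

    def cosine(u, v):
        # Similarity computed on the raw sparse counts; no SVD or
        # embedding step compresses the features away first.
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        nu = math.sqrt(sum(c * c for c in u.values()))
        nv = math.sqrt(sum(c * c for c in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    print(round(cosine(machine, robot), 2))   # high: shared contexts
    print(round(cosine(machine, banana), 2))  # 0.0: no shared contexts

Because no feature is compressed away, every individual context remains inspectable, which is one way to picture examining the data from “multiple directions”.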

Okay let’s consider an example: the phrase at the start of this paragraph, “okay let’s”, whose closest analogies include…

  how about we

  then let’s

  well let’s

  let’s just

  i think we should

  could we

  do you wanna


On this page, at this stage, we are not ready to expand on the details of how our analogies are generated, but we will show a topical example – a complete sentence that was unique, the only one exactly like it in our training dataset. It was “do you know mathematical equations?”, and its analogies included:

  what equations do you know?

  do you know a mathematical formula?

  can you solve equations?

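Since we are not describing here how those analogies are actually generated, a generic baseline is the fairest point of comparison. The sketch below is such a baseline, not our method: it ranks the candidate sentences above by Jaccard overlap of their word and two-word-phrase features:

    def ngrams(sentence, n_max=2):
        # All word n-grams up to length n_max, as a set of surface
        # features; punctuation is left attached for simplicity.
        words = sentence.split()
        return {
            " ".join(words[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(words) - n + 1)
        }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    query = "do you know mathematical equations?"
    candidates = [
        "what equations do you know?",
        "do you know a mathematical formula?",
        "can you solve equations?",
    ]

    q = ngrams(query)
    for c in sorted(candidates, key=lambda c: -jaccard(q, ngrams(c))):
        print(round(jaccard(q, ngrams(c)), 2), c)

Surface overlap ranks the near-paraphrase highest but scores the reorderings poorly – exactly where an analogy model has to do better than counting shared phrases.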

We believe our early results are showing an extraordinary ability to hit the nail on the head… by analogy.

  yes i think so

  i guess so

  that’s great

  same here