Our unique, universal contextual machine learning techniques are key. Memory of what happened when, where and why enables predictions of what should come next.
From past conversations, whether with the public or with specialist trainers, we predict the most appropriate thing for the AI to say. In the same way, we predict and time the reactions and emotions for an avatar to display, and the virtual typing style to present.
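The idea of predicting a reply from past conversations can be illustrated with a deliberately simple sketch: find the most similar previously seen utterance and reuse its recorded reply. The similarity measure, data and function names here are all invented for illustration; the actual techniques are far more sophisticated.

```python
# Hypothetical sketch of contextual next-response prediction:
# given a new user utterance, find the most similar past exchange
# and reuse the reply that worked then. All names are illustrative.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    # Word-overlap similarity between two token sets.
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict_reply(utterance, history):
    """history: list of (past_utterance, past_reply) pairs."""
    scored = [(jaccard(tokens(utterance), tokens(u)), r) for u, r in history]
    return max(scored)[1]

history = [
    ("hello there", "Hi! How are you today?"),
    ("what is the weather like", "I hear it is sunny out."),
]
print(predict_reply("hello friend", history))  # closest match is the greeting
```

A production system would of course use far richer context (when, where, why) than bag-of-words overlap, but the retrieve-and-reuse shape is the same.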
With enough learning, these techniques enable lifelike, entertaining interactions on any subject, in any language. And all that is needed for the learning to work is interaction itself, setting up a positive feedback loop.
Using Evie in commercial applications calls for more than lifelike probabilities. It calls for certainty in delivering and gathering information, and in completing processes. So we can write the script.
Any software can provide a branching tree of possibilities, loops, asides, searches and more. Any software could provide fixed outputs along the way. Our outputs - what Evie says - are highly dynamic, reflecting each user's needs and language style.
The real difference, however, is understanding. People can say things in a nearly infinite number of ways, yet we humans understand seemingly without effort. Machines have usually failed at this task - for example, picking up on a few words and all too often getting the sense wrong.
You will see Evie expressing emotions as you type, and as she responds. Our AI has learned how to do this from user feedback over several years. Words describing reactions to input, and emotions felt as she replies, including their intensity, are converted to dynamic, subtly changing motion in our avatars.
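The conversion from an emotion word plus intensity to avatar motion can be pictured as blending per-emotion offsets into a neutral pose. The emotion labels and animation channels below are invented for illustration; in practice the mapping is learned from years of user feedback rather than hand-written.

```python
# Hedged sketch: map a labelled emotion and its intensity (0..1)
# to avatar animation parameters. Channel names are hypothetical.

BASE_POSE = {"smile": 0.0, "brow_raise": 0.0, "head_tilt": 0.0}

EMOTION_OFFSETS = {
    "joy":      {"smile": 1.0, "brow_raise": 0.3},
    "surprise": {"brow_raise": 1.0, "head_tilt": 0.2},
    "sadness":  {"smile": -0.6, "head_tilt": 0.4},
}

def pose_for(emotion, intensity):
    """Blend an emotion's offsets into the neutral pose, scaled by intensity."""
    pose = dict(BASE_POSE)
    for channel, offset in EMOTION_OFFSETS.get(emotion, {}).items():
        pose[channel] += offset * intensity
    return pose

print(pose_for("joy", 0.5))  # a half-intensity smile with a slight brow raise
```

Interpolating between successive poses over time is what would give the subtly changing motion described above; this sketch shows only the target pose for a single moment.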