Could Chance Bridge the AI Empathy Gap?

James Marshall
July 3, 2017

Feedback is the root power of any artificial intelligence learning system. Like a child, an AI grows through contact and interaction with the world around it. And its world is by definition our world: all that we as a species have made and written down, logged and drawn, inputted and recorded. Impressions from every sense – all we hear and smell and touch and see – must one day be made available to a constructed mind if true AI is to be achieved.

But it is not enough to merely provide the rolling footage and layer it with texture down to every granular detail. To fully experience life as a sentient being, our prospective AI must have a stake in the game, a part to play and some compelling reason to play it. And lest we forget, it must also learn to play nicely with others. As Elon Musk is fond of pointing out (though possibly not verbatim), it would be a terrible irony for the cream of the crop of computer scientists to spend their best years and immense corporate fortunes only to give birth to something that turns out like Skynet.

No-one likes a smug scifi writer, and everybody hates that tinny phrase “I told you so”, so the race is on to teach AIs empathy, a shared sensory bond with one’s fellow beings on this most fragile of planets. For an AI to develop a simulacrum of a moral conscience, a sense of empathy will be essential to the mix. Empathy certainly isn’t easy to teach – a disappointing number of humans comprehensively fall by the wayside on this score – so how best, then, to teach what must be taught?

“Bingo” (CC BY 2.0) by cote, on Flickr

Step forward the William Hill online bingo product. No, there’s no such entity as things stand, but in the future? Of course, it doesn’t have to be bingo – it could be backgammon, poker, any game, really, where chance is a factor. The thinking runs something along these lines: equip an AI with a tool-set of simulated emotions – hope, frustration, triumph, disappointment and so on – then expose the entity to the vagaries of fortune ensconced within a structured ruleset. Measure the feedback as it rolls in, quantify the data, and draw up a set of parameters that encompasses the raw spectrum of human feeling as “discovered” by the AI, in a manner that it can “learn” to apply to others.
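The loop described above – simulated emotions, a game of chance, quantified feedback – can be sketched as a toy simulation. Everything here is illustrative: a simple dice game stands in for bingo, and the hand-written mapping from outcomes to emotion labels is a placeholder for whatever emotional model a real system might use.

```python
import random

# Illustrative emotion labels only -- not a real taxonomy of feeling.
EMOTIONS = ("hope", "frustration", "triumph", "disappointment")

def play_round(rng, target=10):
    """One round of a toy dice game standing in for bingo: roll until the
    running total reaches the target, logging a simulated emotion per roll."""
    total, log = 0, []
    while total < target:
        roll = rng.randint(1, 6)
        total += roll
        if total >= target:
            log.append("triumph")        # the round is won
        elif roll >= 5:
            log.append("hope")           # a big roll: the goal feels close
        elif roll == 1:
            log.append("frustration")    # barely any progress
        else:
            log.append("disappointment")
    return log

def emotion_profile(n_rounds=1000, seed=42):
    """Quantify the feedback: the frequency of each simulated emotion
    across many rounds -- the 'set of parameters' drawn from experience."""
    rng = random.Random(seed)
    counts = dict.fromkeys(EMOTIONS, 0)
    for _ in range(n_rounds):
        for emotion in play_round(rng):
            counts[emotion] += 1
    total = sum(counts.values())
    return {e: counts[e] / total for e in EMOTIONS}

profile = emotion_profile()
print(profile)
```

The profile is just a frequency table, but it captures the shape of the idea: repeated exposure to chance, filtered through a ruleset, yields measurable data about the agent’s simulated inner life.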

The upshot of this is (or should be) an AI with a measure of emotional intelligence, drawn from experience and applicable beyond the limits of said experience. Which was the object of the exercise in the first place.

Can it be done? It must be done; if we are to let loose the full potential of the future AI, it has to be bound within its own moral constraints. To be elevated to the status of a thinking being, rather than a sophisticated tool, an AI needs personality, and for us to survive side by side with our electronic progeny, it has to be a fully-rounded personality.

A machine with narcissistic personality disorder doesn’t bear thinking about. And there’s no need to do so: Arthur C. Clarke has covered this area already. He called his monster HAL, and look where that led.

The future may hold all sorts of interesting developments, particularly in a field as shifting and complex as AI, where iterative evolution is pitched as close to the organic model as it might conceivably be. We can only hope that where we consistently fail, machine minds of the future might be better equipped. If machine intelligence is the next evolutionary step, it behoves us all to make sure we get it right.
