Thursday, January 19, 2012

how to get computers to understand human language

The fundamental problem in getting computers to understand human language is that they can't understand what it feels like to be a human.

This appears to be crucial to understanding natural speech as used by live human beings.  As a recurrent learner of languages, I find that 90% of following running speech is being able to predict what the speaker is going to say next.  Much of this is about collocationality, to be sure, but a lot is about understanding the flow of ideas involved---which is itself an extralinguistic matter---and then using that to predict what ideas will be expressed next, through the medium/technology of language.

One of the helpful things here is that it's not necessary for this device to model what the world IS---only a normative person's PERCEPTION of the world (and its objects, especially people): how they attend to and process their sensory input (input deriving both from external senses and internal senses/states).  In short, we just need to recapitulate the rupa loka, the world of the senses that the Buddha noted we were all trapped in.

But how DO we model what it's like to be a human, especially with regard to internal states?  (Modeling those of others---i.e. programming in a theory of mind---should be comparatively simple, once we can get a device to model its own: just tell it to identify humans and attribute the same internal system to them.)  Here again, the Buddhist tradition of cognitive-emotional science is helpful: desire is the foundation of much of human experience.  Most emotional states, and pretty much all judgements, can be modeled along a graded spectrum of positive and negative desirability.  What distinguishes most emotions from one another are generally the specific sensory experiences tied into that wanted-unwanted continuum.  The physiological differences associated with different emotions are for all intents and purposes mere facts---worth knowing, if you are this device, as part of world/self knowledge---but not nearly as important to understanding others' current and future states of mind (and actions) as knowing where and how they sit on the desirability cline.  Knowledge of desires---and particularly, of complexes of desires---predicts this best.
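To make the idea concrete, here is a minimal sketch of that model in Python.  Everything here is hypothetical illustration, not a worked-out design: an emotional state is just a position on the desirability cline plus the sensory features bound to it, and the theory-of-mind move is just reusing the same structure for an observed other.

```python
# Illustrative sketch only: emotions as positions on a desirability cline,
# distinguished from one another by their attached sensory features.
from dataclasses import dataclass

@dataclass
class EmotionalState:
    desirability: float          # -1.0 (unwanted) .. +1.0 (wanted)
    sensory_features: frozenset  # the percepts tied into that valence

def attribute_mind(own_state, observed_features):
    """Theory of mind by self-attribution: identify a human, then
    assume the same internal system, seeded with their percepts."""
    return EmotionalState(own_state.desirability,
                          frozenset(observed_features))

# Two emotions at the same point on the cline, differing only in the
# sensory experiences bound to that valence:
fear = EmotionalState(-0.8, frozenset({"looming object", "racing heart"}))
grief = EmotionalState(-0.8, frozenset({"absence of a loved one"}))
```

The point the sketch makes is the one above: the valence dimension does most of the predictive work, while the sensory bundle is what tells fear apart from grief.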

Which brings us to a point about knowledge itself.  Knowledge is basically a means to predict from current percepts (again, both internal and external in origin) what future percepts will be. 
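That definition of knowledge can itself be sketched as code.  The following is a deliberately crude stand-in (all names are mine, not the post's): knowledge as a table of observed percept-to-percept transitions, and prediction as reading the most frequent successor off that table.

```python
# Sketch: knowledge as a means to predict future percepts from current ones.
from collections import defaultdict

class Knowledge:
    def __init__(self):
        # counts of observed (current percept -> next percept) transitions
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, current, nxt):
        self.transitions[current][nxt] += 1

    def predict(self, current):
        """Return the most-often-seen successor of this percept."""
        successors = self.transitions[current]
        if not successors:
            return None  # the unknown: no prediction available
        return max(successors, key=successors.get)

k = Knowledge()
k.observe("dark cloud", "rain")
k.observe("dark cloud", "rain")
k.observe("dark cloud", "sun")
print(k.predict("dark cloud"))  # rain
```

Note that the table is agnostic about where the percepts come from: internal and external percepts go through the same machinery, as the paragraph above requires.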

Science, whatever its limits, has made a name for itself in being the most reliable fortuneteller around: and this is simply because science is an art of knowledge.  Which is an art of prediction.

And prediction is important to human cognitive systems (or humans, if you will).  Because the predictable, the known, is safe.  In the world of the known, we can act to meet our desires (individual and affinity group integrity being a big one) with the confidence that things will work out as we, well, desire.  The unknown is scary (and felt to be undesirable) for that same reason: the set of circumstances that do not keep us healthy and happy is far greater than that of ones that do---consider our warm wet airy planet in the great cold vacuum of space, for a start, but also all those cliffs you can fall off of and rivers you can drown in.  And in the unknown is where much of that set of unhelpful, undesirable circumstances lurks.  Once shifted to the known, the familiar, we are at ease: the salience/intensity of the desire, the need to predict, drops back down to zero.

This is the difference between instinct and rationality.  Instinct is a set of---formally speaking, at least---arbitrary subroutines of sensation and action that are tripped off by certain percepts.  Arbitrary only in that they are not products of a built-in system deriving them: they do indeed happen to be the subroutines that jibe well with the ongoing constraints of natural selection.  Rationality or logic, in contrast, is internal narrative: the ability to tell stories in which this happens after that, and then attribute that event-ordinalizing narrative to the future, based on current percepts.  In short, rationality is again an effort at prediction.  It can of course be used to explain the past---or rather, our currently accessible memories of the past (which are mostly already (re-)constructed narratives anyways), plus external percepts of their present consequences.  But we mostly only ever bother to do that because we think the knowledge, the narratives so built up, will help us in current and future predictions.
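The instinct/rationality contrast above maps cleanly onto two tiny programs.  This is a hedged illustration with invented names: instinct as a fixed lookup of percept-triggered subroutines, rationality as a narrative learned from experienced event sequences and then projected onto the future.

```python
# Instinct: arbitrary (underived) subroutines tripped off by percepts.
INSTINCTS = {
    "looming shadow": "flinch",
    "sweet taste": "swallow",
}

def instinct(percept):
    # Fires or stays silent; tells no story about why.
    return INSTINCTS.get(percept)

# Rationality: build "this happens after that" narratives from
# experienced event sequences, then attribute them to the future.
def learn_narrative(episodes):
    story = {}
    for events in episodes:
        for a, b in zip(events, events[1:]):
            story[a] = b
    return story

def predict_next(story, current_event):
    return story.get(current_event)

story = learn_narrative([["clouds", "rain", "flood"]])
print(predict_next(story, "rain"))  # flood
```

The instinct table is fixed at "compile time"; the narrative is assembled at runtime from whatever was experienced, which is exactly why only the latter can explain a past or anticipate a novel future.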

So prediction is a desire, an important one, and that drive for prediction is the drive for knowledge.  And therein lies the interesting bit: while sticking to the known, keeping conservative, is by and large the more reliable way to keep on surviving, we still have this thing called curiosity.  Why?

Well, in principle, all curiosity does is get us to mess about in the novel, the oh-so-dangerous unknown.  But in so doing, curiosity is the same thing as sexual reproduction: it enhances overall survival chances in an ever-changing world by introducing a measured degree of new information, of variation from the established, conservative norm.  Sometimes the risks of curiosity, just like mutation and meiotic recombination, don't pay off, and out goes the individual via the natural-selectional filter.  But the overall population survives better, and so does the curious individual over a lifetime of experiments (a.k.a. experience), provided they gain more from their investigations than they lose---when their operating system runs with this small back door open, always welcoming in a bit of the potentially helpful unknown.
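The "small back door" can be sketched as a one-line policy, familiar from exploration strategies in machine learning (my framing, not the post's): mostly exploit the best-known option, but with some small fixed probability sample the unknown instead.

```python
# Sketch: curiosity as a measured degree of variation from the
# conservative norm---a small chance of choosing the unknown.
import random

def choose(known_values, novel_options, curiosity=0.1, rng=random):
    """Mostly pick the best-known option; occasionally explore."""
    if novel_options and rng.random() < curiosity:
        return rng.choice(sorted(novel_options))    # the small back door
    return max(known_values, key=known_values.get)  # the conservative norm

known = {"forage here": 0.7, "forage there": 0.4}
rng = random.Random(0)
picks = [choose(known, {"new valley"}, curiosity=0.1, rng=rng)
         for _ in range(1000)]
# About a tenth of choices wander into the unknown; the rest stay safe.
```

Tuning `curiosity` is the whole trade-off in the paragraph above: at 0.0 the agent never leaves the known, and too high a value means the natural-selectional filter claims it.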

So this is hardly even a start.  There is so much to modeling a human mind, even just to create something that can truly, properly engage with natural human language beyond oblique, brute-force approximations like formally arbitrary preprogrammed responses (rather than knowledge built from the ground up) or clever statistical guessing.  Knowledge, i.e. predictive expectation, of force dynamics is one thing that will also be of foundational use for a device that needs to understand human experience; same again for knowledge of social dynamics.  What I hope to introduce here is the idea that this kind of knowledge is perhaps best built into such devices indirectly, by designing them with the core human sensation---desire---and then atop that base, analogs of the other internally- and externally-derived sensations; top that off with what I suspect is a very constrained set of algorithms for processing all that input, all those percepts...and then wind it up and let it go create its own world within itself, just as we do.
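The layering proposed above can be summarized in one last rough sketch.  Every name here is hypothetical scaffolding: a desire core, a single percept stream merging internal and external sensation analogs, and a deliberately small processing step that lets the device build its own world-model rather than having one preloaded.

```python
# Very rough sketch of the proposed layering: desire at the core,
# sensation analogs on top, a constrained processing loop above that.

class Agent:
    def __init__(self, desires):
        self.desires = desires   # core: percept -> desirability
        self.world_model = {}    # built up by the agent itself, not preloaded

    def sense(self, external, internal):
        # External and internal sensation analogs feed one percept stream.
        return list(external) + list(internal)

    def process(self, percepts):
        # Constrained algorithm set: here, just bind each percept to its
        # desirability and fold it into the agent's own world-model.
        for p in percepts:
            self.world_model[p] = self.desires.get(p, 0.0)
        return self.world_model

a = Agent({"warmth": 0.9, "cliff edge": -0.9})
model = a.process(a.sense(["cliff edge"], ["warmth", "hunger pang"]))
```

After enough cycles of `sense` and `process`, the world-model is the agent's own construction---the "world within itself" the closing sentence asks for---rather than anything handed to it up front.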
