Motivations, Values and Emotions
Posted 14 Nov 2007 at 21:26 UTC by steve
University of Memphis researchers Stan Franklin and Uma Ramamurthy have
released a new paper titled Motivations,
Values and Emotions: Three Sides of the Same Coin (PDF format). The
paper talks about the interrelationships of the three concepts in
autonomous agents, whether they are robots or humans. "Motivations
prime actions, values serve to choose between motivations, emotions
provide a common currency for values, and emotions implement
motivations." As always, one needs to understand how the researchers
use the terms. In this case, the authors seem to be using the words feelings
and emotions in opposite roles from those I've seen defined in the past.
They define feelings as raw sensory inputs, such as heat, pain, or thirst,
and emotions as feelings with cognitive content. Normally, I
see feeling defined as the subjective or phenomenological aspect of
emotion rather than the other way round. Of course,
many people still use the two words interchangeably, so any
distinction is helpful. Beyond the terminology,
the authors rely on the LIDA (Learning Intelligent Distributed Agent)
model of cognition.
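For readers who think in code, here is a minimal sketch of how the quoted relationships might fit together in an agent's action-selection loop. To be clear, this is not the authors' LIDA implementation; every class name, field, and weighting below is a hypothetical illustration, assuming the paper's usage of the terms: feelings as raw sensory inputs, emotions as feelings with cognitive content, and emotional intensity as the common currency by which values choose between motivations.

```python
# A hypothetical sketch of the paper's scheme -- not the authors' LIDA code.
# All names, fields, and weightings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Feeling:
    """Raw sensory input, in the authors' usage: heat, pain, thirst."""
    sense: str
    intensity: float  # 0.0 (absent) to 1.0 (overwhelming)

@dataclass
class Emotion:
    """A feeling plus cognitive content, in the authors' usage."""
    feeling: Feeling
    appraisal: str  # the cognitive content, e.g. "danger: burn"

    def currency(self) -> float:
        """Emotions provide a common currency: reduce disparate feelings
        to one comparable scalar. The danger weighting is an arbitrary
        stand-in for a value system."""
        weight = 2.0 if self.appraisal.startswith("danger") else 1.0
        return weight * self.feeling.intensity

@dataclass
class Motivation:
    """Motivations prime actions; an emotion implements the motivation."""
    primed_action: str
    emotion: Emotion

def choose(motivations: list[Motivation]) -> str:
    """Values choose between motivations by comparing the common currency."""
    return max(motivations, key=lambda m: m.emotion.currency()).primed_action

# Two competing motivations: thirst primes drinking, heat primes fleeing.
options = [
    Motivation("seek water", Emotion(Feeling("thirst", 0.8), "goal: drink")),
    Motivation("flee fire", Emotion(Feeling("heat", 0.6), "danger: burn")),
]
print(choose(options))  # prints "flee fire": 2.0 * 0.6 outweighs 1.0 * 0.8
```

Running the sketch picks "flee fire": the arbitrary danger weighting stands in for a value system that converts two otherwise incomparable feelings into one comparable number.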
Thank you for pointing out the article. I appreciate your summary and
comments. I believe all should ponder those relationships, even if not
everyone agrees on the terminology.
Not trying to cause more debate, but... I think we all deal with "moving
targets" often. I see this as one of them. In the past I have been
motivated to debate definitions when I think someone has missed root
meanings of specific human language. Lately, however, I observe that the
more effort we spend debating terminology, the less time we have to
collaborate on what to do with the resulting terms.
What is more important? Which term means what: less important. Keeping
terms consistent between scientific minds: more important.
I believe that we should allow some
distinguished organization to solidify the terminology, and then use it
as a foundation for work. Has this been done before? If so, then why debate
terminology any more? Of course, there would be a requirement that there
are sufficient terms to distinguish distinct elements without leaving many
holes. Still, since I feel smart today, I believe that I can handle
adapting to whatever resources I find available. Adaptation is our human
strength. If we give that to our creations, fine. Until then, we should
use ours. Those elements may evolve
the same way any development suite does, but they would give a common
foundation for universal research at a higher level. I don't mean too
much by all this, but I see a need for some common tool that uses
fundamental "English" terminology and helps us
model systems, make libraries, etc.
It seems to me some software "suite" should already exist, but I'm new
to this collaboration. What is already out there (but is apparently
dismissed because humans tend to resist personal adaptation)?
That said, I do have some opinions:
- I agree that "feeling" serves better as a term for raw sensory input.
- I observe that other words used in the original post are often confused,
like "choose". How does a human choose? Not the same way an "autonomous
agent" does. We choose using something termed "free agency". I see obvious
differences. We only called it a "decision" in software because it was
the best word we could come up with at the time. This brings me to the
next point.
- While it may be common to define both humans and robots as
autonomous agents, that is often the reason for our failure in designing
a good autonomous robot. We are most definitely not the same. No matter
how advanced all our "human technology" has become, there is absolutely
no invention that gives a robot "free agency" as humans have it. Even if
I wanted to believe otherwise with all my soul, I could not argue
against that.
semantics, posted 15 Nov 2007 at 21:42 UTC by steve »
Thanks for the insightful comments.
I've found most debates over anything eventually boil down to semantics.
But precisely defining words and concepts is a necessary step in
science. Emotion, feeling, consciousness, free will and the like are all
terms that have been used with little or no precise meaning for
hundreds of years, and it will take a while to nail down what,
if anything, they describe in the real world. These words have been
mostly used in the realm of religion and philosophy until recently.
Psychologists and now cognitive scientists are trying to sort out the
mess and refine the definitions into something, as you say, that can be
used as a foundation for work.
Regarding your points, it sounds like you're saying you don't think
humans are autonomous agents! But, again, you may mean something
different by "autonomous agent" than I do. Free will is as hotly debated
today as ever, and there seem to be relatively few arguments on either side
that I find coherent, with the exception of Dennett's.
There seem to be roughly three classes of opinion on free will: 1) nobody
has it; 2) agents made of meat (humans) can have it, but agents made of
silicon (robots) can't; 3) any sufficiently autonomous agent can have it.
Sounds like you fall into group 2. I'd put myself in group 3. In any
case, though, it's purely speculation at this point.