KurzweilAI has republished an old but good article by Daniel Dennett
in Humans and Robots. The article presents and addresses many of the
reasons given by AI skeptics for the "impossibility" of creating a
conscious machine. Like many articles on the problem of consciousness,
it avoids the issue of actually defining what might be meant by the
word. But it's quite interesting and definitely worth the time to read,
in any case.
An earlier article, Consciousness an
Electromagnetic Field?, interestingly put forward an unorthodox
theory which accounts for some phenomena that had not been explained before.
This time we are presented with the idea of consciousness in humans
and robots. Most of the arguments go back and forth in circles,
suggesting that scientists will never come to an agreement until
they are all proven wrong (or right, depending on which way you look at it).
Simply put, consciousness is the system that gives us the ability to
recognize and be aware of the surrounding environment and act
appropriately upon it. When someone is asked the question "Are you
conscious?", what is really being asked is whether the person is able
to receive information from the surrounding environment, process it,
and respond to it accordingly. There are a few ways we receive information:
visually (recognizing patterns), by touch, by smell, by taste, and by
listening (recognizing various sound frequencies).
After processing the information, we act in the environment. We are
aware of the environment as we keep track of it, sorting (remembering)
it by time, events, or arrangements. So, according to the previous
environment state and the newly received information, we act or respond
either physically (using our hands, legs, and/or body) or by sending out
patterns of sounds (i.e. speaking).
I have made an attempt to map everything a human does into
computational terms. The same thing has been applied to many programs that
appear "intelligent", or have "consciousness". When we build robots, we
arm the robot with those intelligent programs so the robot can keep
track of its internal and external states, and autonomously
respond or act in the environment upon receiving new information from
it. My main idea was to understand human consciousness in terms
of computer science, while having a robotic design in mind.
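The receive-process-respond loop described above can be sketched as a minimal agent in code. This is only an illustrative sketch of the idea, not an implementation from the article; the class and method names (Agent, sense, decide) are my own assumptions.

```python
# A minimal sense-process-act loop, mirroring the description above.
# All names here are illustrative, not taken from any real robotics library.

class Agent:
    def __init__(self):
        # Internal state: a time-ordered memory of past observations,
        # standing in for "keeping track of the environment".
        self.memory = []

    def sense(self, observation):
        """Receive new information from the environment (vision, touch, ...)."""
        self.memory.append(observation)

    def decide(self):
        """Choose a response from the previous state plus the newest input."""
        if not self.memory:
            return "wait"
        # Toy policy: respond to the most recent observation.
        return f"respond to {self.memory[-1]}"

# One step of the loop: sense, then act on the updated state.
agent = Agent()
agent.sense("obstacle ahead")
print(agent.decide())  # -> respond to obstacle ahead
```

A real robot would, of course, run this loop continuously and with a far richer policy, but the skeleton of internal state plus new input producing a response is the same.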
If you define consciousness, as you did above, it's much easier to do
something useful. The problem is getting anyone else to agree that
you've defined consciousness correctly. Your definition sounds similar
to the one proposed by Philip Johnson-Laird in his book, The Computer and
the Mind - he defines it (I'm paraphrasing here) as an operating system
at the top of a hierarchy of processors, some of which relay messages
about the world and others which transmit signals about how to act within
it.
But there are all sorts of other definitions out there. Some say
consciousness is a mental mechanism for coping with complex social
interactions by allowing the mind to imagine what others are thinking by
analogy to its own thoughts. Some that it simply
means to be alert or aware of the world around us. Some that it means to
be self-aware. Others say it is a "court of appeals" for resolving
internal conflicts that arise within the brain which might otherwise
result in endless loops or deadlocks (if you've ever read the Marvin
Minsky/Harry Harrison sci-fi book, The Turing Option, this idea plays
into the plot at one point when the AI under construction repeatedly gets
stuck because of a lack of such an appeals process).
If you ask a psychologist, they'll point out that you are considered
conscious when you are daydreaming or dreaming in your sleep, even though
you may not be aware of sensory data from the real world (and there are
other "altered states" of consciousness that can be induced by hypnosis or
drugs). William James defined it as a "river of awareness" containing
our thoughts and sensations. A fair number of psychologists, on the
other hand, have maintained that there is no such thing as consciousness,
or suggested that if you can't measure or directly analyze a thing, then
it is not suitable for scientific inquiry in the first place.
That's why I was a little disappointed the Dennett article didn't define
the term - once you agree on a definition, there's a much higher chance of
learning something useful about it.