The Guardian published a piece by Charles Arthur titled, "Artificial
intelligence: God help us if machines ever think like people", in
which he questions the idea that humans are a good model on which to
base machine consciousness. Why? "We don't build skyscrapers based on
the same principles as the human spine; if we did, then they'd be
constantly falling down or showing signs of significant weakness. We
don't build transport systems that work like the human body, using
muscle-like elastic bands snapping back and forth to power them." He
notes that even the human body is a mess because "evolution is a
terrible designer". His premise is based on a recent book by Gary
Marcus, Kluge: The Haphazard Construction of the Human Mind, which
investigates the conflict between the brain's ancient features, millions
of years old, and its relatively recently acquired, language-based ones.
based on language. In the end he suggests that giving a machine a mind
as badly designed as ours would be an act of cruelty.
"I just think that humans are a terrible example to follow if you want to develop something that's conscious".
Depending on the definition of 'conscious', a conscious machine may have no choice but to turn out like a human, because the definition itself potentially includes everything that makes us human.
The point about skyscrapers and transport systems seems to be a red herring. You could argue that buildings designed to bend so they withstand earthquakes are similar to a human spine, and that any stored-energy element used to power transport is just a complex version of the elastic potential energy stored in a stretched band.
I would have to agree with much of what Mr Arthur writes. I can think
of no good reason why an intelligent machine would have to be
organized or function in a manner analogous to an intelligent animal.
On the other hand, the human brain, with all of its evolutionary
baggage, is amazingly capable.
Although progress in neuroscience has been slow, I am inclined to
believe that a day will come when we have a thorough understanding
of how the human brain operates, and are able to construct
artificial minds that are far more effective and efficient than our own.
I'm pretty sure most attempts at "replicating human thought" aren't trying to copy every detail of us outright, just the parts that whoever is building that particular intelligence thinks are important. Most likely, any flaws in the human reasoning system (or at least the most obvious ones) will be passed over as details that can be abstracted away and replaced with whatever is more efficient, much as most robot developers don't intentionally use defective cameras in order to simulate the human eye. So no, I don't think our machines will be riddled with weird quirks of human biology. They will, however, be riddled with weird quirks of human culture. ;)