A new ACM
Ubiquity article addresses what the author, Alexandru
Tugui, believes are limitations of artificial intelligence that
cannot be overcome. The article is primarily concerned with AI as a
simulation of biological intelligence rather than with the creation of
real machine intelligence. Even so, some of his objections seem a bit
odd, such as the claim that AI can never truly simulate biological
intelligence because it is limited to 1s and 0s whereas biological
intelligence can have intermediate values. A CD player is a computer
that deals only with 1s and 0s, yet it seems to simulate analog music.
It seems to me that he is arguing in a similar way to Penrose (The
Emperor's New Mind). But don't forget that Penrose still said
intelligence could be artificially created, by overcoming the
limitations of deterministic systems.
I've seen determinism raised as an argument against free will but never
against intelligence itself. How does he explain human intelligence?
Does he claim the Universe itself is non-deterministic? (Come to think
of it, he's in the school of thought that relies on some kind of quantum
weirdness as an escape from determinism, isn't he?)... I just
read Daniel C.
Dennett's book, Freedom
Evolves, which spends a lot of time dealing with these ideas and he
doesn't see determinism as particularly
relevant to either. He thinks the problem stems from people confusing
determinism with the idea that sequences of events are inevitable.
Some philosophers and even neuroscientists (who ought to know better)
share common misconceptions about what computers are. They get too
hung up on the fundamental aspects of computing: Turing machines and
the 1s and 0s. What they miss is the most powerful aspect of TMs: that
they can simulate other types of machine, provided those machines are
capable of being described by some definite procedure.
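That simulation point can be illustrated with a toy sketch. Everything below (the interpreter, the transition-table format, and the even-parity machine it runs) is made up for illustration, not anything from the article: a single generic interpreter executes whatever machine its table describes.

```python
# Toy sketch: one generic interpreter, many machines. Any machine that
# can be written down as a definite procedure (here, a state-transition
# table) can be run by the same piece of code.

def run_machine(transitions, start_state, accept_states, tape):
    """Step through `tape` one symbol at a time, following the table."""
    state = start_state
    for symbol in tape:
        state = transitions[(state, symbol)]
    return state in accept_states

# A description of one particular machine: it accepts bit-strings
# containing an even number of 1s.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_machine(even_ones, "even", {"even"}, "1100"))  # two 1s -> True
print(run_machine(even_ones, "even", {"even"}, "1101"))  # three 1s -> False
```

Swap in a different table and the same interpreter runs a different machine; that is the sense in which the substrate (1s and 0s) says nothing about what is being simulated.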
They also usually fail to recognise that within any complex system there
can be multiple levels of organisation, all of which are equally valid
in their own right. As an example, if I take my car to a garage because
it won't start I don't go to the mechanic and say "the atoms in this
part of my car are not moving with a specific amount of energy in order
to bring about a self-sustaining electro-chemical reaction", but instead
say something like "the damn thing won't work, can you check the spark
plugs?". Both levels of description are valid, and you could describe
the higher levels (spark plugs and pistons) as emergent properties of
the interactions between the lower levels (atoms and forces).
In short, computers should be thought of as modelling tools rather than
just things that shuffle symbols or 1s and 0s.
I think the binary argument was particularly weak. With 1s and 0s
you can represent whatever analogue quantity you like, subject to
quantisation levels. Most of the artificial neural nets using delta
rules and things that I've seen are digital simulations and make use
of this sort of thing. I think a bigger limitation is our own
understanding of how best we can devise a method to solve the problems of
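The quantisation point is easy to demonstrate. Here is a small sketch (the function name and the chosen [-1, 1] range are my own, purely illustrative): an n-bit code pins an "analogue" value down to within half a quantisation step, and that error shrinks as fast as you care to add bits.

```python
# Toy sketch: representing an "analogue" value with 1s and 0s. With
# n bits over a fixed range, the worst-case error is half a
# quantisation step, which can be made arbitrarily small.

def quantise(x, n_bits, lo=-1.0, hi=1.0):
    """Map x in [lo, hi] to the nearest of 2**n_bits levels and back."""
    levels = 2 ** n_bits - 1
    step = (hi - lo) / levels
    code = round((x - lo) / step)   # the integer that the bits store
    return lo + code * step         # the reconstructed "analogue" value

x = 0.123456
for n in (4, 8, 16):
    print(n, abs(quantise(x, n) - x))  # error shrinks as bits are added
```

A digital neural-net simulation does exactly this: its "continuous" weights and activations are quantised values, and nothing about the binary substrate stops them approximating the analogue quantities as closely as needed.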