When we first reported on the Ouroboros Model last
year, it was just a paper containing a description of a conceptual
model. Now, Knud
Thomsen of the Paul Scherrer Institute has released a new paper, Flow of Activity in the
Ouroboros Model (PDF format), with lots of color
diagrams to make the Ouroboros Model easier to understand. If you
haven't heard of it, the Ouroboros Model is a proposed cognitive
algorithm for self-steered, self-learning agents such as robots. The
idea behind the algorithm is to take a top-down engineering approach to
general intelligence and, the author notes, perhaps consciousness. The
model's name is derived from Jung's interpretation of
Ouroboros as an archetype of the human psyche. There
does not appear to have been any attempt yet to actually build a robot with
this architecture, so it's anyone's guess whether it would really work. If
there are any adventurous robot builders
looking for a new project, have at it!
Any reference to the word "Ouroboros" is by itself cause for celebration, but "Ouroboros Model Snakes its way..." just takes the cake as far as I'm concerned.
A kudo to you, sir. Well done.
That's funny, I thought of our old friend Mentifex too when I saw it.
I almost linked to one of his ASCII diagrams in the article. Actually
implementing his algorithm on a working robot was where the Mentifex
idea ran out of steam too, I think. Unless someone has built one since we
last heard from him.
It's hard to get excited about vaporware, but lately there have been several advancements that are exciting. Is this one of them? It's very hard to know, especially when it's all just buzzwords and smoke and mirrors at this point. So many buzzwords and nothing to show for it makes me very suspicious, though. The flow chart looks like doublespeak spaghetti to me. Anyone able to decipher it? Anyone really have a glimmer of what it does? Anyway, there it is! Eat it up!
I once approached a guy after a lecture where he had described a new idea in the area of motion control. I offered to help him build a prototype and see if it would work. He plainly stated that he didn't think the technology could actually be implemented in real life, in spite of the amount of work he had done and the enthusiasm he had for it. I appreciated his honesty, but was amazed that someone could work on something they knew could not work. I'm afraid this is common in academia, and one of the reasons I bailed.