I stumbled across this entry by Wesner Moise about intelligence. I was particularly interested in the material on Stephen Wolfram, who developed Mathematica. I haven't used Mathematica, but I intend to correct that over the next few weeks; some of the theoretical points Wesner raised fit nicely with things I've been pondering over the past twelve months.

Enterprise applications are all well and good, but most of the time they aren't terribly interesting to write, so I entertain my brain from time to time with subjects that are, quite frankly, beyond my mental reach. One of those areas is artificial intelligence.

While those close to the AI community are no doubt proud of the work they have done, I can't help but feel that the majority of the population is unaffected by AI. Sure, there are specialised applications in fields such as lending evaluation and entertainment, and in related areas such as speech recognition and linguistics, but we haven't uncovered some central truth yet. Or at least I don't think we have; remember, it's beyond my mental reach.

To quote Wesner on Jeff Hawkins' work:

Jeff found that observation surprising given that sight, hearing and touch seemed very different, with fundamentally different qualities. He concludes that the human brain is fundamentally a memory-driven machine using pattern recognition techniques—essentially a rules-based machine.

Are we trying to find a complex solution to a simple problem? I subscribe to the theory that our brain is a rules-based engine (probably less sophisticated than the one in BizTalk), but I also think that looking at the rules alone means you miss quite a big piece of the picture.

I believe that humans are essentially a series of devices attached to an organic network. The organic network pushes signals between those devices and our brain. Every time we get some kind of sensory input we evaluate that input, and if the evaluation turns out to be true we take some kind of action. In the human body that might mean the release of a chemical from a gland, for example. Diagrammatically, this is how I think of it.


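The sensor → rule → action loop above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not a real framework; every class and signal name here is invented.

```python
class Rule:
    """Pairs a condition (a predicate over a signal) with an action."""
    def __init__(self, condition, action):
        self.condition = condition
        self.action = action

    def evaluate(self, signal):
        # If the evaluation turns out to be true, take the action.
        if self.condition(signal):
            self.action(signal)
            return True
        return False


class OrganicNetwork:
    """Routes signals from attached sensory devices to a set of rules."""
    def __init__(self):
        self.rules = []

    def add_rule(self, condition, action):
        self.rules.append(Rule(condition, action))

    def receive(self, signal):
        # Every sensory input is evaluated against every rule.
        for rule in self.rules:
            rule.evaluate(signal)


# Example: a "pain" signal triggers a gland-like device.
released = []
network = OrganicNetwork()
network.add_rule(lambda s: s == "pain",
                 lambda s: released.append("adrenaline"))
network.receive("pain")   # rule fires, "adrenaline" is released
network.receive("light")  # no rule matches, nothing happens
```

The interesting part is that the network knows nothing about pain or glands; it just pushes signals at rules and lets the actions do the procedural work.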
Let's apply this to some software. Let's say we are writing an agent that plays an online game; for the sake of simplicity, let's choose a text-based MUD. If I were to write an agent/bot to play this game, I could build a sensor that feeds in the stream of text coming over the socket, where it could be evaluated by a rule.

In this case the rule could be a regular expression looking for some kind of pattern in the text; if it found one, it could send some data back and get ready for the next round of input. Here we can see visually how we would build those rules up.


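A regex-driven rule table for the bot might look something like this. The prompts and replies are made up for illustration; a real MUD would have its own login sequence, and the socket plumbing that feeds lines into `respond` is left out.

```python
import re

# Hypothetical handshake rules: (pattern to look for, reply to send).
HANDSHAKE_RULES = [
    (re.compile(r"What is your name\?"), "mybot\n"),
    (re.compile(r"Password:"), "secret\n"),
    (re.compile(r"Press ENTER to continue"), "\n"),
]


def respond(line):
    """Evaluate each rule against a line of incoming text and return
    the reply for the first pattern that matches, or None."""
    for pattern, reply in HANDSHAKE_RULES:
        if pattern.search(line):
            return reply
    return None


respond("What is your name?")   # matches the first rule
respond("You see a dark cave")  # no rule fires
```

Each round of input runs through the same table; the bot just loops, reading a line, calling `respond`, and writing whatever comes back to the socket.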
Assuming we were handling the text correctly, this little algorithm neatly defines how we would handshake with the game. The system would need to be trained to partition the rules so that there was a reduced possibility of one being invoked in the wrong context. Our brains do this automatically, but with a piece of software we might need to be explicit about what context we are in. I see it like this:


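One way to make that context explicit is to scope the rule table by context, so that only the rules for the current context are ever evaluated. The sketch below assumes made-up contexts ("login", "playing") and patterns; the point is just that a pattern cannot fire while the agent is in the wrong context.

```python
import re


class ContextualAgent:
    def __init__(self, rules, start_context):
        # rules: {context: [(pattern, reply, next_context), ...]}
        self.rules = rules
        self.context = start_context

    def handle(self, line):
        # Only rules belonging to the current context are considered.
        for pattern, reply, next_context in self.rules.get(self.context, []):
            if pattern.search(line):
                self.context = next_context  # a rule may switch context
                return reply
        return None


rules = {
    "login": [
        (re.compile(r"name\?"), "mybot\n", "playing"),
    ],
    "playing": [
        (re.compile(r"attacks you"), "flee\n", "playing"),
    ],
}
agent = ContextualAgent(rules, "login")
```

Once the name rule fires, the agent moves to the "playing" context, and the login patterns simply stop being evaluated; there is no chance of the bot re-sending its name mid-game.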
I see a system like this sitting on top of a whole heap of devices which deal with the procedural aspects of getting the job done, leaving this higher-level system to orchestrate them. I see great applications in robotics and semi-autonomous systems. It's certainly interesting to think about: the possibilities are both amazing and scary.