As part of my thoughts on the internals of my droid, I've come across two very interesting things. Firstly, a few people have ported OpenCV, an open-source computer vision project, to the BeagleBoard-xM, running under Linux. Very nice: a 1GHz processor, very low power consumption, quite powerful really... Vision would be handy to have, of course, but it depends on the power required to process it - and on the software required to make best use of it.
I could easily use a board or three of these - especially as they will happily cluster. However, my personal preference is to have boards running dedicated processes, rather than a single board (or two) doing everything. Additionally, one board could be run just for any orbital mechanics calculations - what's the point of having an astromech if it can't do any astrodynamics calculations? *g*
Secondly, I came across (a couple of years back) a company which apparently makes Neural Network processor chips... *cough* MwahahahaHaHaHAHAHA! *ahem* Again, it's the programming which is the problem. They have boards dedicated to vision processing, plus the idea of speaker-independent speech recognition is interesting. If neural network ICs are available, of course, it makes possible the option of having a few chips processing environmental and other data on the fly; alternatively, you log a run (start to stop of operation) with a suitable sampling of data, and use it later to provide some action/reaction mental processing.
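To make the "log a run with a suitable sampling of data" idea concrete, here's a minimal sketch - the `RunLogger` class, the interval, and the sensor/actuator fields are all hypothetical, just one way a run might be captured for later off-line use:

```python
class RunLogger:
    """Hypothetical sketch: capture one run (start to stop of operation)
    at a fixed sampling interval, for later off-line processing."""

    def __init__(self, interval_ms=100):
        self.interval_ms = interval_ms
        self.samples = []
        self._last_ms = None

    def sample(self, t_ms, sensors, actuators):
        # Keep only one record per sampling interval.
        if self._last_ms is None or t_ms - self._last_ms >= self.interval_ms:
            self.samples.append((t_ms, dict(sensors), dict(actuators)))
            self._last_ms = t_ms

# Simulated run: sensor readings arriving every 20 ms for one second.
logger = RunLogger(interval_ms=100)
for t in range(0, 1000, 20):
    logger.sample(t, {"range_cm": 50 - t // 40}, {"drive": "forward"})

print(len(logger.samples))  # 10 samples: one per 100 ms
```

The point of the interval is just to keep the log small enough to replay later without drowning in redundant readings.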
Here's the thing, though... Say you have a few neural processors, and you log sufficient data. NNs have two modes - Training and Operation. During a run, you set the processors to their Operation mode and log operational data. The robot then goes back to recharge and drops into Sleep Mode. During sleep mode - having taken care to have motors, sensors, etc. turned off - you use the time to Train the network...
Is the robot dreaming?
If you didn't quite fully disable everything, would the robot twitch while it sleeps?
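The operate/log/sleep/train loop above could be sketched in ordinary code - here with a single logistic "neuron" standing in for a real neural-network processor, and a made-up obstacle-avoidance task standing in for the logged run (everything here is a hypothetical illustration, not any particular vendor's chip):

```python
import math
import random

class TinyNeuron:
    """One logistic unit: a toy stand-in for a neural-network processor."""

    def __init__(self, n_inputs):
        self.w = [0.0] * n_inputs
        self.b = 0.0

    def forward(self, x):
        # Operation mode: sensors in, action out.
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def train(self, log, rate=0.5, epochs=200):
        # Sleep mode: replay the logged run and adjust the weights.
        for _ in range(epochs):
            for x, target in log:
                err = target - self.forward(x)
                for i in range(len(self.w)):
                    self.w[i] += rate * err * x[i]
                self.b += rate * err

# Operation mode: act and log (sensor reading, desired action) pairs.
# Desired action is simulated: turn (1.0) when the two range sensors
# together read close, otherwise drive straight (0.0).
random.seed(42)
log = []
for _ in range(100):
    sensors = [random.random(), random.random()]
    action = 1.0 if sensors[0] + sensors[1] > 1.0 else 0.0
    log.append((sensors, action))

# Sleep mode: motors and sensors off, train on the logged run.
brain = TinyNeuron(n_inputs=2)
brain.train(log)

print(brain.forward([0.9, 0.9]) > 0.5)  # obstacle close -> turn
print(brain.forward([0.1, 0.1]) < 0.5)  # path clear -> straight
```

The training pass is the "dream": the robot replays its day with everything powered down. If a motor driver were left enabled during that replay, the twitch wouldn't be far-fetched at all.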