Astromech Diary
Saturday, January 5, 2013
An update
Well, I haven't posted much in ages, so a quick summary...
Over the past 15 months, I've been working through pretty much the equivalent of a Computer Science degree, unaccredited and online... So, my (frame) building has taken a back seat.
I'm still counting this as progress towards building though, as I'm doing these courses to help on the "autonomous" idea. Seriously, when some big names and big universities are offering to help you understand what to do, and how to do it, you sit down and listen.
So - find some time, get on Coursera, Udacity, and edX (plus whatever other good ones are around), and start learning things.
This is, of course, assuming that you don't just want a static model, or radio controlled... and I don't.
Not to say that I haven't been doing some hardware-related stuff. My Raspberry Pi is encased, and I'm currently waiting on some analog-to-digital converter chips, and some input/output expander chips that interface very, very easily to the Pi. The RPi (if you weren't aware) has an input/output header on it, giving you some interfacing - the chips just make it easier. The only catch is that the RPi runs at 3.3V rather than 5V, so you have to allow for that.
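Since the 3.3V/5V mismatch trips up a lot of first-time Pi interfacers, here's the arithmetic for the simplest fix - a two-resistor divider to drop a 5V output to a level the Pi's GPIO can tolerate. The resistor values are just illustrative; check them against your own parts and current requirements:

```python
# Resistor divider: Vout = Vin * R2 / (R1 + R2).
# Feeding a 5V signal through R1 = 1k with R2 = 2k to ground
# gives roughly 3.3V at the junction - safe for a 3.3V GPIO pin.
# (Illustrative values only - not a substitute for a proper level shifter
# on fast or bidirectional lines.)

def divider_out(vin, r1, r2):
    """Voltage at the junction of a two-resistor divider."""
    return vin * r2 / (r1 + r2)

vout = divider_out(5.0, 1000, 2000)
print(round(vout, 2))  # prints 3.33
```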
I've also taken part in the crowd-funding of Adapteva's Parallella boards - a 16-core processor, easy to program and interface with. Hence, I'm currently taking a course in parallel programming. Particle filtering can really benefit from parallel processing. And if the mathematics on the wiki page are indecipherable, you should really check out Sebastian Thrun's robot navigation course on Udacity.
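For anyone wondering why particle filtering parallelises so well: every particle's predict-and-weight step is independent of every other particle's, so they can be farmed out across cores. A minimal 1D sketch in plain Python (the noise values and motion model are made up for illustration):

```python
import math
import random

def gaussian(mu, sigma, x):
    """Probability density of x under a normal distribution N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def particle_filter_step(particles, motion, measurement,
                         motion_noise=0.5, sense_noise=1.0):
    # Predict: move each particle independently (this loop is trivially parallel).
    moved = [p + motion + random.gauss(0, motion_noise) for p in particles]
    # Weight: likelihood of the measurement given each particle (also parallel).
    weights = [gaussian(p, sense_noise, measurement) for p in moved]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 20.0) for _ in range(1000)]
# Simulated robot starts at 10 and moves +1 per step; measurements are exact here.
for t in range(1, 6):
    particles = particle_filter_step(particles, motion=1.0, measurement=10.0 + t)
estimate = sum(particles) / len(particles)
```

The mean of the particle cloud ends up near the true position (15 after five steps); only the resampling step needs all the particles in one place, which is exactly the structure a board like the Parallella can exploit.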
What else... I've found about $3,000 worth of chips and sensors that I want. I've fallen in love with the listed capabilities of the CMUcam4 - so I've ordered one... and am thinking about a second, which would mean stereo vision, making passive visual ranging possible... I've also found a laser ranger that could be worth the cost and effort of getting... Of course, this means I'd have three small cameras... and am moving towards an R5 head... Just going to need plenty of computing power... So maybe another RPi or two...
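On the passive visual ranging idea: with two cameras a known baseline apart, depth falls straight out of the disparity between the two images, Z = f·B/d. A back-of-the-envelope sketch (the focal length, baseline, and disparity are all hypothetical numbers, and this assumes a rectified stereo pair):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a feature from a rectified stereo pair: Z = f * B / d.

    focal_px     - camera focal length, in pixels
    baseline_m   - distance between the two cameras, in metres
    disparity_px - horizontal shift of the feature between the images, in pixels
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 400-pixel focal length, cameras 10cm apart,
# feature shifted 8 pixels between the two views.
print(stereo_depth(400, 0.10, 8))  # prints 5.0 (metres)
```

The catch is visible in the formula: depth resolution degrades quadratically with distance, since far-away objects produce tiny disparities.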
Saturday, June 2, 2012
Still around, still experimenting...
It's just that I haven't had time or inclination to write.
Slowly getting the electronics played with... I've come across OpenKinect and OpenNI, which might prove easier to port to other platforms than the Microsoft Kinect SDK. Primarily because I've ordered my Raspberry Pi, and want to use that for such processing...
Anyone who wants to buy me $2,000 worth of Neural Network processing chips is welcome to, of course.
A friend and I are looking at designing and building a search and rescue robot, and a lot of my experimentation is ostensibly for that... but finding money for it is the issue. Even though I have about $70 worth of various gas sensors sitting unused on my bench at the moment...
Saturday, December 17, 2011
Down the track...
I've got photos to post, just haven't had the time...
Well, I've completed the final exam for Stanford's open AI course, and I still have to finish off some material for the Machine Learning one... Not really happy with the ML course - I had massive trouble getting the videos to work.
Am getting the legs together, although I've found I needed to go back over the seams with JB-Weld. I'm still going to put carbon fibre inside in order to add enough strength. I also got the A&A R5 head kit - I need somewhere to put sensors. I interact, therefore I am...
Saturday, October 15, 2011
Back in to the swing of things...
Well, after an interim, here's what's been happening...
Firstly, I've inspired a friend to get into robotics - which means that I'm having to up my game and actually get things done... which is very, very good. I've actually gotten around to getting my MOSFET-based H-bridge to work. I'm going with a complementary-pair arrangement, rather than using just N-channel devices... I'll go into my reasons why later, though. And I should actually check the values of the resistors I'm using with it. I have found, however, that you can't leave the gates floating - the transistors tend to overheat... but it's an easy enough problem to solve.
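For the record, the part of an H-bridge the control logic absolutely has to get right is never switching both transistors on the same side at once (shoot-through, which shorts the supply through the bridge). A sketch of that logic as a truth table - the command names and pin layout are hypothetical, not the actual controller code:

```python
# H-bridge drive logic sketch (hypothetical command names).
# Each side of the bridge has a high-side and a low-side MOSFET;
# turning both on at once on the same side shorts the supply rail
# to ground through the transistors - "shoot-through".

def bridge_states(command):
    """Map a drive command to (left_high, left_low, right_high, right_low)."""
    states = {
        "forward": (True, False, False, True),   # left-high -> motor -> right-low
        "reverse": (False, True, True, False),   # right-high -> motor -> left-low
        "brake":   (False, True, False, True),   # both low-side on: motor shorted, dynamic braking
        "coast":   (False, False, False, False), # all off - but the gates must still be
                                                 # actively held at a rail, never left floating
    }
    lh, ll, rh, rl = states[command]
    # The invariant the hardware depends on:
    assert not (lh and ll) and not (rh and rl), "shoot-through!"
    return lh, ll, rh, rl
```

In real firmware you'd also insert a dead-time delay between turning one transistor off and its partner on, since MOSFETs don't switch instantly.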
Secondly, Stanford University have started their online, unofficial Artificial Intelligence and Machine Learning courses - very timely when you're attempting to build an autonomous robot.
And finally, my legs and feet are on their way care of A&A! Of course, I'm now looking at ways to... improve things. The way that I figure things, it doesn't look like it will be difficult to make a few moulds, and build them out of fibreglass... or carbon fibre... or carbon/kevlar...
This is becoming a really good way to learn plenty of stuff.
Friday, September 16, 2011
After an interim...
Short post, covering a few topics...
Let's see... Am going to list a couple of more projects that I'm working on...
1. SpeakJet-based text-to-speech reader. I'm using an EtherTen Arduino board, primarily because it has a micro-SD slot; the text will go into the SpeakJet via the matching dictionary chip. The idea is to be able to download text files from, for example, Project Gutenberg, and have the board read them aloud. Obviously, this isn't going to be as good as a properly produced audio book, or even the best-quality text-to-speech programs, but it's an easy enough project...
2. Electronic binoculars. Or rather, a monocular. Okay, I'm a Star Wars freak - I'm building my own astromech, after all. But these aren't that difficult to do, really... Will start off with a 3-axis accelerometer, outputting to a video overlay processor, on top of a camera signal. Can then add magnetometer, GPS, etc., info. The hard parts will be rangefinding, autofocus... and making my own zoom lens.
3. Have ordered legs and feet for my droid! Now just waiting...
Friday, March 4, 2011
Toward a Theory of Mind, The Precis.
Tacto, ergo Sum. I touch, therefore I am.
I was trying to mentally compose this post earlier today, but found myself lost for words. Basically, what it comes down to is that intelligence seems to depend as much on being able to sense the environment around oneself as it does on sheer processing power.
A robot requires sensors, both proprioceptive and exteroceptive - it needs to know about itself and about the world around it. Of course, this is potentially a lot of information - hence my interest in distributing such processing. Mind is a function of the brain and the information it can process.
The simplest robots have only touch sensors, and wander aimlessly. Such a robot cannot have much awareness of what's going on around it. Navigation, planning, and so on are all impossible.
We introduce proprioception. With some simple additions, a robot can learn when its batteries are low - it becomes hungry. It learns how far and how fast it is moving; it can start to sense when its motors are overworking. We add additional sensors - it becomes more aware of what's going on around it...
The more sensors we add, the more processing we have to do for the inputs, but the more the robot knows what's going on around it. I will write more later... but the thrust of it is - we interact with the environment, therefore we are.
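As a toy illustration of the "hungry robot" idea - a proprioceptive check that maps a battery reading onto an internal drive state. The thresholds here are made-up numbers, not measurements from my droid:

```python
def battery_state(voltage, hungry=11.5, critical=10.5):
    """Classify a battery reading into an internal 'drive'.

    Thresholds are hypothetical, loosely based on a 3-cell lithium pack:
    below `hungry` the robot should start favouring a recharge,
    below `critical` everything else gets dropped.
    """
    if voltage <= critical:
        return "starving"   # seek the charger now, ignore other goals
    if voltage <= hungry:
        return "hungry"     # bias planning towards recharging
    return "content"        # carry on with the current task

print(battery_state(11.2))  # prints hungry
```

The interesting part isn't the thresholds - it's that an internal sensor reading becomes an input to behaviour selection, exactly the proprioception-to-awareness step described above.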
On a different note, I actually made some progress on the motor controllers - the interfacing between the microcontroller(s) and the MOSFETs that control power to the motors. Although, thinking about it, I should perhaps use optoisolators to eliminate the chance of interference.