For my honours dissertation, I decided to undertake a project involving the programming of a Lego Mindstorm robot. Granted, this blog post is a bit late (2 years later!), but I figured I’d share my experiences so that nobody else attempts the same 🙂. The goal of the application was to have a robotic “guide dog” that a blind user would be able to follow. This was purely a concept, as anyone following this robot would probably have died 3 times over 🙂 But a concept nonetheless.
Lessons Learnt from this project:
- Microsoft Robotics Studio is a terrible idea.
- Lego Mindstorm robots are pretty pathetic in their abilities: they’re fine for basic stuff, but as soon as you want to get into complex object-avoidance algorithms, they fail dismally.
- The real world and the theoretical world are two completely different things. When somebody says “theoretically this should work”, DO NOT believe them.
- It’s better to simulate real life than to actually be in it.
In the end, the system had 3 components:
1. An Android application that took in voice commands and determined whether the robot should stop, turn around, go left, go right, etc.
2. The robotic dog
3. A desktop application that connected to the Android phone via Bluetooth and issued commands to the robot, also via Bluetooth.
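The desktop application was essentially a relay between the phone and the robot. A minimal sketch of that command dispatch (the names, command strings, and safe-fallback behaviour are my illustration, not the original code):

```python
# Hypothetical sketch of the desktop relay: recognised voice commands arrive
# from the Android phone over Bluetooth and are mapped to simple robot
# instructions, which are then forwarded to the Mindstorm brick.
VOICE_TO_ROBOT = {
    "stop": "HALT",
    "turn around": "ROTATE_180",
    "left": "TURN_LEFT",
    "right": "TURN_RIGHT",
    "go": "FORWARD",
}

def dispatch(voice_command: str) -> str:
    """Translate a recognised voice command into a robot instruction.

    Anything unrecognised defaults to HALT, since stopping is the only
    safe fallback for a robot a person is following.
    """
    return VOICE_TO_ROBOT.get(voice_command.strip().lower(), "HALT")
```

For example, `dispatch("Stop")` yields `"HALT"`, while garbled speech-recognition output also falls back to `"HALT"`.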
Ideally, I would have liked the application to run entirely on the robot, but unfortunately the Lego Mindstorm brick has a tiny amount of memory (maybe a Raspberry Pi would have been a better choice?).
So the lesson learnt from this experience: a robotic dog is a bad idea for a blind person, and something like a wearable ultrasound device would probably be a better choice than a physical robot that the user has to carry around.
Regardless, below is an example video of how the robot detected obstacles and moved around them.
This video wasn’t the final version, as the final version had the voice commands controlling the robot 😀
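For context, the obstacle avoidance in the video is about the level of sophistication an NXT ultrasonic sensor realistically supports: drive forward until something is too close, then turn away until the path reads clear. A minimal sketch of that threshold-based logic (the threshold value and names are assumptions for illustration, not the project’s actual code):

```python
# Hypothetical threshold-based obstacle avoidance, one decision per
# ultrasonic reading: turn away if an obstacle is within range, otherwise
# keep driving forward.
OBSTACLE_THRESHOLD_CM = 30

def next_action(distance_cm: float) -> str:
    """Decide the robot's next move from a single ultrasonic reading."""
    if distance_cm < OBSTACLE_THRESHOLD_CM:
        return "TURN_RIGHT"  # steer away until the sensor reads clear
    return "FORWARD"

# Simulated readings as the robot approaches and then skirts an obstacle.
readings = [100, 60, 25, 20, 45, 90]
actions = [next_action(d) for d in readings]
```

It’s this simplicity that made the “guide dog” concept fall over: a single forward-facing distance reading tells you nothing about drop-offs, moving pedestrians, or anything outside the sensor cone.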