Brushed motors & simple path navigation - conceptual help requested

I’ve been working on my little FEZDA for a while now and am happy with my “hello world” project.

The next project I have in mind for my bot is a 2-pass maze navigation. By 2 pass I mean this:

  • 1st pass through the maze is essentially to map out the maze using my sensors.
  • 2nd pass is a “high speed” run based on a map of the maze created in the 1st pass.

What I need to address before going too far into this is navigation, and doing it without positioning sensors like GPS, accelerometer, magnetometer, etc.

With the DC Motor Drivers, one just sets the speed of the respective motors and it goes and turns. This works perfectly fine with my wandering obstacle avoiding bot.

But with any type of navigation, things are a bit different. The path through the map of the maze may require this (see the sketch after the list):

  1. go straight for 10 feet/meters/whatever
  2. turn 90 degrees left
  3. go 1 foot/meters/whatever
  4. turn 45 degrees right
  5. go 2 feet/meters/whatever
  6. etc, etc, etc
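
In code I picture that path simply as a list of commands to replay on the second pass. A rough sketch (the type names and units here are made up for illustration, not from any library):

```csharp
// Sketch only: the planned path as a replayable list of commands.
public enum MoveKind { Forward, Turn }

public struct MoveCommand
{
    public readonly MoveKind Kind;
    public readonly double Amount;   // distance for Forward; degrees for Turn (negative = left)

    public MoveCommand(MoveKind kind, double amount)
    {
        Kind = kind;
        Amount = amount;
    }
}

public static class ExamplePath
{
    // The path from the list above: 10 forward, 90° left, 1 forward, 45° right, 2 forward.
    public static readonly MoveCommand[] Path =
    {
        new MoveCommand(MoveKind.Forward, 10),
        new MoveCommand(MoveKind.Turn,   -90),
        new MoveCommand(MoveKind.Forward,  1),
        new MoveCommand(MoveKind.Turn,    45),
        new MoveCommand(MoveKind.Forward,  2),
    };
}
```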

All I have to control this bot is power levels and timing. But different floor surfaces have different friction levels and different battery power levels will throw off any “precise” timings.

If I ignore the friction and battery issues, my thought is that I would essentially create a grid for the map. Instead of saying go forward 10 feet, it would be to go forward 10 units.

Each unit would be based on a timing of (to be calculated, but for discussion’s sake) 50 milliseconds. Once a movement is completed, then come to a full stop.

So to go forward 10 units, it would be 100% power for 500 ms total. But the problem with this is that there are ramp-up/down times for the motor. I guess I could avoid this by running at a constant speed; then there are no ramp times any more.
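Something like this is what I have in mind, sketched with a made-up IMotorDriver interface standing in for whatever driver class actually controls the motors (the 50 ms figure is still to be calibrated):

```csharp
using System.Threading;

public interface IMotorDriver
{
    void SetSpeed(int leftPercent, int rightPercent);   // hypothetical stand-in API, -100..100
}

public class TimedDrive
{
    const int MsPerUnit = 50;            // placeholder; to be calibrated per floor/battery
    readonly IMotorDriver _motors;

    public TimedDrive(IMotorDriver motors) { _motors = motors; }

    public void ForwardUnits(int units)
    {
        _motors.SetSpeed(100, 100);          // full power ahead
        Thread.Sleep(units * MsPerUnit);     // e.g. 10 units -> 500 ms
        _motors.SetSpeed(0, 0);              // come to a full stop
        Thread.Sleep(100);                   // let the bot settle before the next command
    }
}
```

One thing worth noting: the ramp-up/down error is roughly the same for every move, so it hurts a 1-unit move far more than a 10-unit one.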

Any suggestions for this idea?

How about suggestions on different ways to accomplish this?

Would doing this type of navigation be better handled with servos? or steppers?

Is some type of positioning sensor, like a GPS, pretty much required for doing this? I would like to keep the cost/hardware complexity down, but am not 100% opposed to it, as I could reuse the sensor for other projects.

Thanks!

Hi,

I’m busy trying to build a room mapping robot.

I have settled on a quad encoder on each motor. It’s a lot cheaper than GPS, although dead reckoning is still just guessing. If a wheel slips then you are lost without an external reference…
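For reference, the dead reckoning itself is just counting pulses and scaling by the wheel, something like this sketch (181 PPR matches my encoders; the wheel size is a made-up example):

```csharp
// Sketch: distance rolled by one wheel, from its encoder count.
public static class DeadReckoning
{
    const double PulsesPerRev   = 181.0;    // quadrature counts per wheel revolution
    const double WheelDiameterM = 0.065;    // 65 mm wheel - example value only

    public static double DistanceMeters(long pulses)
    {
        double circumference = System.Math.PI * WheelDiameterM;
        return (pulses / PulsesPerRev) * circumference;
    }
}
```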

For your application, how about not using distance to map, but counting “doors” that must be passed before turning? Not elegant, but should work… :slight_smile:

Thanks,
Errol

I like the idea of using landmarks and will look at how I would mount the encoders.

Making the house itself the maze seemed too complicated to me, as there’s so much “noise” - chair legs, sofas, etc, etc.

To simplify the environment, I was just thinking of some lengths of building lumber - 2x4s - laid out on the floor to make the maze.

Here’s what I am thinking: (YouTube link)

Think about how humans who are blind navigate. They use a combination of sound cues and the touching of objects in the environment to figure out where they are. What does this have to do with your robot? Well, if your map says that there should be a wall 10cm ahead of you, that is something you can test. Drive towards the wall until you bump into it or sense it with IR or an ultrasonic ranger. At that point you have a very good idea where you are. Now let’s say your map says it should be clear to the right: turn right and see if it is clear…
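As a rough sketch of what I mean, with hypothetical sensor and motor interfaces standing in for whatever hardware you have:

```csharp
using System.Threading;

public interface IRangeFinder { double ReadCentimeters(); }                           // hypothetical
public interface IMotorDriver { void SetSpeed(int leftPercent, int rightPercent); }   // hypothetical

public class WallAnchoredDrive
{
    readonly IMotorDriver _motors;
    readonly IRangeFinder _frontRanger;

    public WallAnchoredDrive(IMotorDriver motors, IRangeFinder frontRanger)
    {
        _motors = motors;
        _frontRanger = frontRanger;
    }

    // Drive forward until the front ranger says the wall is within stopCm, then stop.
    // When this returns you know where you are relative to that wall, regardless of
    // floor friction or battery level.
    public void DriveToWall(double stopCm)
    {
        _motors.SetSpeed(60, 60);                        // moderate speed toward the wall
        while (_frontRanger.ReadCentimeters() > stopCm)
        {
            Thread.Sleep(10);                            // poll the ranger every 10 ms
        }
        _motors.SetSpeed(0, 0);
    }
}
```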

Probably overkill, but using the Xbox Kinect as a sensor opens some options: The Kinect Sensor in Mobile Robotics: Initial Experiments - YouTube

[quote]Probably overkill, but using the Xbox Kinect as a sensor opens some options: The Kinect Sensor in Mobile Robotics: Initial Experiments - YouTube
[/quote]

Overkill is a good term for my needs; this doesn’t mean that I didn’t think of this already :smiley:

Laser range finder - starting at $2300 USD
Atom-powered PC - minimum (just a guess) $350 USD (+ power issues)
Natal sensor bar - $140 USD

I would love to just have the laser range finder.

This is essentially what I’m doing now with my wandering obstacle avoiding bot.

I too was thinking that I would still need to use my sensors even when in “mapped” mode - for collision avoidance.

This thread though was to see how I could follow a mapped path using DC motors - since these have to ramp up/down and are imprecise.

I do appreciate your comparison to a visually impaired person; it’s a good way to think about it.

With “Doors” I meant “possible turn points”. Example:
After you have explored the maze you know that you must take the second left turn, fourth right turn, etc… A lot simpler than forward 1.2 meters, left 90deg, forward 0.6 meters… :slight_smile:
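In code the whole route could be as small as this (sketch only; the type names are my own invention):

```csharp
// Sketch: the route stored as "take the Nth opening on this side".
public enum Side { Left, Right }

public struct TurnStep
{
    public readonly Side Side;       // which side the opening is on
    public readonly int  Ordinal;    // take the Nth opening (1 = first)

    public TurnStep(Side side, int ordinal) { Side = side; Ordinal = ordinal; }
}

public static class ExampleRoute
{
    // "Take the second left, then the fourth right."
    public static readonly TurnStep[] Route =
    {
        new TurnStep(Side.Left,  2),
        new TurnStep(Side.Right, 4),
    };
}
```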

Thanks,
Errol

[quote]With “Doors” I meant “possible turn points”. Example:
After you have explored the maze you know that you must take the second left turn, fourth right turn, etc… A lot simpler than forward 1.2 meters, left 90deg, forward 0.6 meters… :)[/quote]

Gotcha. I like this idea, it’s certainly a different way of looking at the problem than I had. It also is based on the hardware I have now.

Currently I have 2 IRs in place scanning from about -11 degrees to +11 degrees. These may not have the sensor coverage I need for this solution. When in mapping mode I can sweep the bot left/right to overcome this, but not when doing the 2nd phase. So I may want to servo-mount these two, or get a servo and one more IR.

With this I get to learn another aspect of bot-building with a servo, and the bot looks “cooler” to my non-techie friends/family.

So I gather from what you are saying that you want to make a maze mapper. On the first pass it tries every direction it can go; in the second phase it knows the route and just does the shortest run possible.

I have one of these coded on an AVC. The initial mapping part is pretty simple: use blind sensor mapping to get to your destination. The real hard part is what you describe: how do I then turn that into where and how I go?

The hardest part was removing the dead-end turns from my calculation and/or course map. This gets even more complicated with multiple-turn dead ends. Then, after a dead end, you need to make sure it doesn’t turn back the way you already came.

Some people solve that by only following one direction: wall-follow on the left to map out the whole maze, then remove the dead ends, which is determined by comparing the route you have mapped against the route you have just taken.

All this happens while recording sensor input, so you know what distance each sensor reads at certain turns. So when the left sensor says the wall is 3 inches away, you know it’s the first turn; then you drive until the 4th sensor dead spot and make your left turn for the next part of the course there.

Sadly I just made that sound really complicated. The reason you use the logic I am describing is that it doesn’t matter what the course is, what you are driving on, or how the walls are made. How far forward you go is determined by side or front sensors that tell you this is the spot to turn. They don’t know whether you have gone 10 or 50 feet; they know: first wall in front of me, I turn right. Then I drive till the left side sensor shows no wall for the 4th time. Then I turn left and drive till the right sensor shows no wall, and so on.
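As a rough sketch of that loop (the interfaces and the 20 cm “no wall” threshold are only placeholders):

```csharp
using System.Threading;

public interface IRangeFinder { double ReadCentimeters(); }                           // hypothetical
public interface IMotorDriver { void SetSpeed(int leftPercent, int rightPercent); }   // hypothetical

public class GapCounter
{
    const double NoWallCm = 20.0;            // beyond this reading we call it an opening
    readonly IMotorDriver _motors;
    readonly IRangeFinder _leftIr;

    public GapCounter(IMotorDriver motors, IRangeFinder leftIr)
    {
        _motors = motors;
        _leftIr = leftIr;
    }

    // Drive forward until the left sensor has seen an opening appear n times,
    // then stop, ready to turn left into that opening.
    public void DriveUntilNthLeftGap(int n)
    {
        int gapsSeen = 0;
        bool inGap = false;

        _motors.SetSpeed(60, 60);
        while (gapsSeen < n)
        {
            bool gapNow = _leftIr.ReadCentimeters() > NoWallCm;
            if (gapNow && !inGap) gapsSeen++;    // count only the wall-to-gap transition
            inGap = gapNow;
            Thread.Sleep(10);
        }
        _motors.SetSpeed(0, 0);
    }
}
```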

Again, the real hard part in this is how you prune your first run down to the straightest path. For that you will need to design an algorithm that can back out to your maximum dead-end turn cycle. If this is 3 then they are simple dead ends; if it gets up to 9-12 turns or more then you have to make something that can thread back that far. In mine I did this by marking a turn invalid in my saved file; it would then know there was a gap to the left to ignore.
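One generic way to do the pruning (not exactly how mine works, just the textbook version): record every turn as a net heading change, and whenever the log shows “turn, U-turn, turn” you drove into a dead end, so those three collapse into the single turn with the same net heading change. Repeating that also unwinds multi-turn dead ends from the inside out:

```csharp
using System.Collections.Generic;

// Sketch: prune dead ends from a recorded turn sequence.
// Turns are encoded as heading change in degrees: L = -90, S = 0, R = +90,
// B (the U-turn made at a dead end) = 180.
public static class PathPruner
{
    public static List<int> Prune(List<int> turns)
    {
        var result = new List<int>(turns);
        bool changed = true;

        while (changed)
        {
            changed = false;
            for (int i = 0; i + 2 < result.Count; i++)
            {
                if (result[i + 1] == 180)                   // middle move is the dead-end U-turn
                {
                    int net = (result[i] + 180 + result[i + 2]) % 360;
                    if (net > 180) net -= 360;              // keep in (-180, 180]
                    result.RemoveRange(i, 3);
                    result.Insert(i, net);                  // e.g. L,B,L -> S ; L,B,S -> R
                    changed = true;
                    break;                                  // rescan from the start
                }
            }
        }
        return result;
    }
}
```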

Sorry for the extended babble.

First I would focus on making it solve the maze. Then make it write out its path, then go through the path by hand to work out the best course. From that you can come up with an algorithm that should be able to solve it without you going through it by hand.

Thanks for the input, bstag.

My initial phase would be a simple no dead-end maze. The bot would map it, and then run it.
A second phase would involve what you are describing.

I was thinking that I’d set up a WPF app to display the generated map. Then I could work out the algorithm on screen and see the results mapped there, instead of code, deploy, observe, repeat over and over and over.

Just a heads up, I have run into a problem with my quad encoders.

My Panda can decode 2 quad encoders with my motors running at full speed (±90 rpm, very slow). But if it does ANYTHING else then it starts missing pulses. :frowning:

Thus I’m now busy building an I2C dual quad decoder.

BTW, has anybody considered using an optical mouse chip on the bottom of a robot? Most of those chips have a quad encoder output for X and Y. And the one I looked at last night does 400 pulses per millimeter… :slight_smile:

Thanks,
Errol

Tried the mouse thing many years ago with an Avago 2610. The focal length and distance from the surface are critical. Floor surface is critical too. It worked OK on a desktop at low velocity. What PPR is your encoder? The other option is to use an absolute 360° encoder. Then it’s a matter of reading an analog signal once per revolution if you don’t need direction, or 3 times per revolution if you want to track direction. Some continuous-rotation pots have up to 357 electrical degrees. Others use SPI with no dead band. We use a Cherry AN8 encoder in FIRST Robotics, but it’s a 5-volt device. One thing I always wanted to try was to take a camera and point it up at the ceiling. If there was a pattern of recessed lights and the camera was filtered to black and white, would this give precise navigation? Didn’t our ancestors do it that way?

My encoders are 181 PPR.

The mouse optical sensor I’m looking at is spec’d at 16 feet per second, so speed shouldn’t be an issue. And I saw that some sensors can give you the 16x16 pixel frame they use for tracking over I2C. :slight_smile:

Mounting any kind of pot or wheel sensor is always an issue for me. That is why I’m using Faulhaber motors with encoders built in… :slight_smile:

You do get systems that use a camera and “landmarks” on the ceiling. The price is a bit steep for me… :slight_smile:
(RobotShop product link)

Thanks,
Errol

As far as the speed and timing choices go, I think you would be far better off using sensor data and a proper control system with compensation to account for noise.
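For example, a simple closed speed loop fed by wheel encoders; the gains, interfaces, and update period below are only placeholders to show the shape of it:

```csharp
public interface IEncoder { long ReadCount(); }                 // hypothetical
public interface IMotor   { void SetPower(double percent); }    // hypothetical, -100..100

public class SpeedController
{
    readonly IMotor _motor;
    readonly IEncoder _encoder;
    readonly double _kp;
    readonly double _ki;
    double _integral;
    long _lastCount;

    public SpeedController(IMotor motor, IEncoder encoder, double kp, double ki)
    {
        _motor = motor;
        _encoder = encoder;
        _kp = kp;
        _ki = ki;
        _lastCount = encoder.ReadCount();
    }

    // Call at a fixed rate (say every 20 ms) with the desired speed expressed as
    // encoder counts per update period. The PI terms trim the motor power so the
    // measured speed tracks the target despite floor friction and battery sag.
    public void Update(double targetCountsPerPeriod)
    {
        long count = _encoder.ReadCount();
        double measured = count - _lastCount;
        _lastCount = count;

        double error = targetCountsPerPeriod - measured;
        _integral += error;

        double power = _kp * error + _ki * _integral;
        if (power > 100) power = 100;
        if (power < -100) power = -100;
        _motor.SetPower(power);
    }
}
```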