| Motor power | Observation |
| --- | --- |
| 55 | Movement, but too small angle |
| 60 | Movement, seems ok, but turns in opposite direction of light and keeps moving after last check |

From these small motor power value experiments we realized a flaw in our program, namely that we had forgotten to stop the light sensor motor after trying to reposition the sensor at the original spot, which meant that it just kept going to the left until a new light registration was initiated by the follow behavior.
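
The fix was a single missing call to stop the sensor motor. Below is a minimal sketch of the corrected repositioning step in leJOS NXJ, assuming the sensor is driven by an unregulated motor on port B and that a fixed delay controls the sweep angle; `sensorMotor`, `SWEEP_DELAY_MS` and the power value are illustrative, not our exact lab code:

```
import lejos.nxt.MotorPort;
import lejos.nxt.NXTMotor;

public class RepositionSketch {
    // Assumed setup: the light sensor is mounted on the motor at port B.
    static final NXTMotor sensorMotor = new NXTMotor(MotorPort.B);
    static final int SWEEP_DELAY_MS = 500; // delay that determines the sweep angle

    static void repositionSensor() throws InterruptedException {
        sensorMotor.setPower(60);      // one of the powers from the experiments
        sensorMotor.backward();        // swing the sensor back toward its start
        Thread.sleep(SWEEP_DELAY_MS);  // wait roughly as long as the sweep out
        sensorMotor.stop();            // the missing call: without it the sensor
                                       // keeps rotating left indefinitely
    }
}
```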
With this flaw in our algorithm corrected, we were able to see the actual effect of the parameters. We therefore observed the robot with a motor power of 60 and a delay before stopping of 100 ms. This turned out to give a much too small angle, with the result that the difference between the left and right measured light values was not large enough, whereby the driving motors were simply set to move forward rather than following the light properly. We therefore tried increasing the delay to 300 ms. This didn't seem to make much of a difference, so we increased it to 500 ms, which was the initial value. The reason we concluded that this might be a solution was that we had realized that the initial observations couldn't be trusted due to the algorithmic flaws. This seemed to work better, as seen in video 7. The video shows that the light sensor was now turning at a satisfying speed up to a big enough angle, but it ended up reporting the opposite behavior to the driving motors, as the robot drives right when the light source is to the left of the robot and vice versa.
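
The mirrored response suggests that the comparison between the two sampled light values, or its mapping onto the driving motors, was inverted. The sketch below shows such a steering decision; all names and values (`DIFF_THRESHOLD`, the power levels, `steer`) are illustrative assumptions, not our lab code:

```
import lejos.nxt.NXTMotor;

class SteerSketch {
    static final int DIFF_THRESHOLD = 5;  // assumed: minimum difference to react to
    static final int HIGH = 80, LOW = 40; // assumed driving-motor power levels

    // Decide the driving-motor powers from the light values sampled to the
    // left and right. With a too small sweep angle the difference rarely
    // exceeds the threshold, so the robot just drives straight ahead; swapping
    // the two turn branches flips the turning direction seen in video 7.
    static void steer(NXTMotor left, NXTMotor right, int lightLeft, int lightRight) {
        int diff = lightLeft - lightRight;
        if (Math.abs(diff) <= DIFF_THRESHOLD) {
            left.setPower(HIGH); right.setPower(HIGH);  // drive straight ahead
        } else if (diff > 0) {
            left.setPower(LOW);  right.setPower(HIGH);  // light is left: turn left
        } else {
            left.setPower(HIGH); right.setPower(LOW);   // light is right: turn right
        }
        left.forward();
        right.forward();
    }
}
```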
[![Motorized Follower](http://img.youtube.com/vi/63BJjsBgtrg/0.jpg)](https://www.youtube.com/watch?v=63BJjsBgtrg)

*Video 7: Motorized LightSensor working well with the proper variables.*

```
while ( frontLight > lightThreshold )
{
    // ... (sensor sweep and driving-motor control omitted) ...

    frontLight = light.getLightValue();
}
```
*Code snippet 1: Control loop of Follow behavior with separate motor for light sensor control*

Believing that the follow behavior now worked with the additional motor controlling the light sensor, we decided to test it in the dark to obtain a bigger light difference. Here (Suze's bathroom), the robot's response was easy to observe, as seen in video 8. Note, though, that the big difference in measured light values caused it to increase the speed of one motor a lot, thereby turning much faster than in the room with a normal ambient light setting.

Fred Martin's arbiter [3, pp. 214-218] on the other hand takes a third approach.

The three approaches to implementing behavior control are more or less the same, and from our point of view one wouldn't notice the difference of swapping one approach for another unless the program had a lot of different behaviors. If that were the case, Martin's arbiter would have to run through all of the behaviors, while Jones, Flynn and Sieger's approach would be much faster, as it breaks out of the control loop after finding the most important enabled behavior, thereby decreasing the robot's reaction time.
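
To make the difference concrete, here is a hedged sketch of the two arbitration styles; the `Behavior` interface and all names are our own illustration, not code from [3] or from the lab exercises:

```
// Illustrative only: a minimal behavior interface and the two arbitration styles.
interface Behavior {
    boolean isEnabled();  // does this behavior's stimulus currently fire?
    void takeControl();   // command the motors for this behavior
}

class ArbiterSketch {
    private final Behavior[] behaviors; // ordered from highest to lowest priority

    ArbiterSketch(Behavior[] behaviors) {
        this.behaviors = behaviors;
    }

    // Jones, Flynn and Sieger style: break out of the loop at the first
    // (highest-priority) enabled behavior, keeping the reaction time low.
    void arbitrateWithBreak() {
        for (Behavior b : behaviors) {
            if (b.isEnabled()) {
                b.takeControl();
                break;
            }
        }
    }

    // Martin style: always run through all behaviors, remember the
    // highest-priority enabled one, and hand it control afterwards.
    void arbitrateAll() {
        Behavior winner = null;
        for (Behavior b : behaviors) {
            if (winner == null && b.isEnabled()) {
                winner = b;
            }
        }
        if (winner != null) {
            winner.takeControl();
        }
    }
}
```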
## Conclusion
We managed to stay well within our allocated timeframe, spending only four hours in total on building and performing experiments. We attribute this to the nature of the exercises; this time the focus was mainly on observation and *understanding* the structure and implementation of the behavior control paradigm, rather than on making something function (such as trying to get the robot to balance on two wheels). The avoid behavior was somewhat trivial, though we had to carefully consider why the robot was looking to each side the way it was. Modifying it also quickly showed results, though it required some calibration to find the right amount of time to turn. Combining the different behaviors was more complicated. The way they were implemented made it easy to observe the individual behaviors, though, since we could just comment out a single line of code to disable each of them. Implementing the escape behavior came fairly easily with the provided inspiration, and it worked. The motorized light sensor might have been the most difficult task of this lab session, since it required more fine-tuning than the rest of the exercises, and while in hindsight a quick GUI might have been nice, we ended up finding appropriate variables by manually recompiling each time without too much hassle. We also had to consider the wire connected to the light sensor, as it was now attached to a moving part. The resulting robot worked fairly well, and the priority of behaviors made sense when held up against other arbitration schemes. All in all, the lab was a success: we completed all the exercises, and we learned how to combine different behaviors triggered by different kinds of stimuli and to have the robot react to them in the prioritized order of the behaviors.
## References
[1] Rodney Brooks, [A robust layered control system for a mobile robot](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1087032), IEEE Journal of Robotics and Automation, RA-2(1):14-23, 1986