Fred Martin's arbiter [3, pp. 214-218], on the other hand, takes a third approach.
From our point of view the three approaches to implementing behavior control are largely equivalent, and one would hardly notice a difference when swapping one approach for another unless the program contained a large number of behaviors. In that case Martin's arbiter would still have to run through all of the behaviors on every cycle, whereas Jones, Flynn and Seiger's approach would be faster, since it breaks out of the control loop as soon as it has found the most important enabled behavior, thereby decreasing the robot's reaction time.
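
The contrast between the two arbitration styles can be sketched as follows. This is a minimal illustration, not the code from the lab: the `Behavior` interface, method names and the `"idle"` default are our own assumptions.

```java
// Sketch of the two arbitration styles discussed above (hypothetical
// Behavior interface; not the lab's actual code).
public class ArbiterSketch {

    interface Behavior {
        boolean enabled();   // does this behavior currently want control?
        String action();     // the motor command it would issue
    }

    // Jones, Flynn and Seiger style: scan behaviors from highest to lowest
    // priority and stop at the first enabled one, so the loop usually ends early.
    static String fixedPriorityArbiter(Behavior[] highToLow) {
        for (Behavior b : highToLow) {
            if (b.enabled()) {
                return b.action(); // break out of the control loop immediately
            }
        }
        return "idle"; // assumed default when no behavior is enabled
    }

    // Martin style: always run through every behavior; a later (higher-priority)
    // enabled behavior simply overwrites the command of an earlier one.
    static String overwriteArbiter(Behavior[] lowToHigh) {
        String command = "idle";
        for (Behavior b : lowToHigh) {
            if (b.enabled()) {
                command = b.action(); // overwritten by each enabled behavior
            }
        }
        return command;
    }

    public static void main(String[] args) {
        Behavior cruise = new Behavior() {
            public boolean enabled() { return true; }
            public String action() { return "forward"; }
        };
        Behavior avoid = new Behavior() {
            public boolean enabled() { return true; }
            public String action() { return "turn"; }
        };
        // avoid outranks cruise in both schemes, so both pick "turn",
        // but the first arbiter never even inspects cruise.
        System.out.println(fixedPriorityArbiter(new Behavior[]{avoid, cruise}));
        System.out.println(overwriteArbiter(new Behavior[]{cruise, avoid}));
    }
}
```

With only two behaviors the outcome is identical; with many behaviors the early `return` is what saves reaction time.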
## Conclusion
We managed to stay well within our allocated timeframe, spending only four hours in total on building and performing experiments. We attribute this to the nature of the exercises: this time the focus was mainly on observation and *understanding* the structure and implementation of the behavior control paradigm, rather than on making something function (such as trying to get the robot to balance on two wheels). The avoid behavior was fairly trivial, though we had to consider carefully why the robot was looking to each side the way it was. Modifying it also quickly showed results, though it required some calibration to find the right amount of time to turn. Combining the different behaviors was more complicated. The way they were implemented made it easy to observe each behavior in isolation, though, since we could just comment out a single line of code to disable any of them. Implementing the escape behavior came fairly easily with the provided inspiration, and it worked. The motorized light sensor may have been the most difficult task of this lab session, since it required more fine tuning than the rest of the exercises; while in hindsight a quick GUI for adjusting parameters might have been nice, we found appropriate values by manually recompiling each time without too much hassle. We also had to take care of the wire connected to the light sensor, as it was now attached to a moving part. The resulting robot worked fairly well, and the chosen priority of behaviors made sense when compared with other arbitration schemes. All in all, the lab was a success: we completed all the exercises, and we learned how to combine different behaviors triggered by different kinds of stimuli and to have the robot react to them in the prioritized order of the behaviors.
## References
[1] Rodney Brooks, [A robust layered control system for a mobile robot](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1087032), IEEE Journal of Robotics and Automation, RA-2(1):14-23, 1986