Furthermore, observing the robot running with the Follow behavior, we became aware that whether the torch light registers as blocked depends on how much light the blocking surface reflects back to the sensor, as illustrated in Figure 4.
|
![Pale hand and dark shirt](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week9/img/handshirt.PNG)
|
|
*Figure 4: Pale hand and t-shirt made of dark fabric. The hand reflected too much light back to the sensor, causing it not to register that the torch light was blocked, while the dark fabric successfully blocked the torch light.*
|
|
|
|
|
|
We speculate that this might lead to unintuitive behavior. For instance, if the robot gets close to a white wall, the Follow behavior might cause it to be drawn to the wall before the Avoid behavior kicks in and drives it away from the surface. We concluded, however, that this would not affect the robot's Follow behavior in other cases, as the sensor would simply pick up the red part of the spectrum of the ambient light or the mobile torch and still measure a higher value.
|
|
|
|
|
|
Additionally, we discovered a slight nuisance during closer observation of the Follow behavior: when left alone, the robot would constantly stop and check its sides for light sources, even when not given any significant light source from a torch, as shown in Video 4.
|
|
|
|
|
This was caused by the Follow behavior being programmed to trigger a 'search' for light sources at regular intervals, regardless of whether a torch was actually shining on the robot.
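
To make the observation concrete, the following is a minimal, hypothetical sketch of such a periodic search. It is not our actual Follow code; the motor ports, sensor port, speeds, and delays are placeholders, and it uses the plain leJOS NXJ `Motor` and `LightSensor` classes. Note that nothing in it checks whether a torch is actually present before searching, which is exactly the nuisance we observed.

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.util.Delay;

// Hypothetical sketch of a periodic light 'search': the robot stops, samples
// the light level to each side, and turns toward the brighter side. Ports,
// speeds, and delays are illustrative placeholders.
public class SearchSketch {
    public static void main(String[] args) {
        LightSensor light = new LightSensor(SensorPort.S3, false); // ambient mode, floodlight off
        Motor.A.setSpeed(300);
        Motor.C.setSpeed(300);

        while (true) {
            // Drive forward for a while.
            Motor.A.forward();
            Motor.C.forward();
            Delay.msDelay(2000);

            // Stop and look left.
            Motor.A.backward();
            Motor.C.forward();
            Delay.msDelay(300);
            Motor.A.stop();
            Motor.C.stop();
            int left = light.readValue();

            // Look right (spin back past the starting heading).
            Motor.A.forward();
            Motor.C.backward();
            Delay.msDelay(600);
            Motor.A.stop();
            Motor.C.stop();
            int right = light.readValue();

            // Turn toward the brighter side; this happens even when both
            // readings only reflect ordinary ambient light.
            if (left > right) {
                Motor.A.backward();
                Motor.C.forward();
                Delay.msDelay(600);
                Motor.A.stop();
                Motor.C.stop();
            }
        }
    }
}
```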
|
|
|
|
|
|
|
|
|
### Implementing the Escape behavior
|
|
As suggested in the lesson plan, we used the implementation on page 305 in [2] as inspiration. The final result is included in [6].
|
|
|
|
|
|
The turns in the pseudocode on page 305 in [2] were initially implemented by powering just one motor, but that did not work, so we now drive one motor forward and the other backward. This worked well even though the other behaviors were turned on and affected the run; we also tried with them turned off, and it worked even better.
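
A minimal sketch of the difference between the two ways of turning (written against the plain leJOS NXJ `Motor` API rather than the provided `Car` class; speeds and durations are chosen only for illustration):

```java
import lejos.nxt.Motor;
import lejos.util.Delay;

// Illustrative sketch only: the one-motor turn we tried first vs. the
// spin turn (one motor forward, one backward) we ended up using.
public class EscapeTurnSketch {
    // First attempt: power only one motor. The robot pivots slowly around
    // the stopped wheel, which was not enough to escape.
    static void pivotTurn(int ms) {
        Motor.A.forward();
        Motor.C.stop();
        Delay.msDelay(ms);
        Motor.A.stop();
    }

    // Current approach: drive one motor forward and the other backward, so
    // the robot spins in place and turns much more sharply.
    static void spinTurn(int ms) {
        Motor.A.forward();
        Motor.C.backward();
        Delay.msDelay(ms);
        Motor.A.stop();
        Motor.C.stop();
    }

    public static void main(String[] args) {
        Motor.A.setSpeed(400);
        Motor.C.setSpeed(400);
        spinTurn(500); // duration depends on surface and battery level
    }
}
```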
|
|
|
|
|
|
[![Implemented Escape behavior](http://img.youtube.com/vi/nINQb163pyQ/0.jpg)](https://www.youtube.com/watch?v=nINQb163pyQ)
|
|
*Video 5: Robot avoiding obstacles using an Escape behavior via the TouchSensor.*
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Motor for light sensor
|
|
We simply moved the searching movement from the car's drive motors to a new motor that turns the light sensor. We used too much power at first, though. [TODO: insert video of the motorized sensor choking the robot.] The delay was 500 ms at first, but we changed it to 100 ms. [TODO: insert picture of the motorized light sensor.]
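
A hypothetical sketch of how the sweep could look with the sensor mounted on its own motor (motor and sensor ports, angles, and the use of the plain leJOS NXJ API are our own placeholders), showing where the delay of 500 vs. 100 enters:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.util.Delay;

// Illustrative sketch: the light sensor sits on Motor.B and is swept from
// side to side instead of turning the whole car. Ports and angles are
// placeholders, not taken from our actual program.
public class SensorSweepSketch {
    public static void main(String[] args) {
        LightSensor light = new LightSensor(SensorPort.S3, false);
        Motor.B.setSpeed(200); // keep the sweep gentle; too much power choked the robot

        while (true) {
            Motor.B.rotateTo(45);   // look to one side
            int left = light.readValue();
            Delay.msDelay(100);     // was 500 at first, reduced to 100

            Motor.B.rotateTo(-45);  // look to the other side
            int right = light.readValue();
            Delay.msDelay(100);

            Motor.B.rotateTo(0);    // back to centre
            // left and right would feed the Follow behavior's decision here
        }
    }
}
```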
|
|
|
|
|
Switching on everything else again: all behaviors were observable - we saw the avoid, follow, and escape reactions while the robot drove around, as shown in Video 9.
|
*Video 9: Robot driving and reacting to all 4 behaviors*
|
|
TODO: insert program code
|
|
|
|
|
|
### SharedCar and Arbiter
|
|
In the provided code, the arbitration suggested by Jones, Flynn, and Seiger [2, pp. 306] is implemented in the classes ***SharedCar*** and ***Arbiter*** in the "reverse order" compared to the code presented by Jones et al.: the Arbiter goes through the list of behaviors and, for the first ***SharedCar*** whose ***CarCommand*** instance is not null, calls ***CarDriver***'s *perform()* method with that ***CarCommand*** and reports the *SharedCar* as the winner. It then breaks the for loop and starts over from index 0 of the *SharedCar* array, thus always starting from the most competent behavior layer. The reason this order is "reverse" is that in the code presented by Jones et al., the arbiter goes through the behaviors in increasing order of competence, each overruling the previous setting of the motor input.
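
The following condensed sketch shows how we read that control loop; it paraphrases the provided code rather than reproducing it, and the accessor name *getCommand()* and the stub types are our own for illustration:

```java
// Paraphrased sketch of the provided Arbiter's loop; not the verbatim code.
class CarCommand {
    // desired motor powers, turn durations, etc.
}

interface CarDriver {
    void perform(CarCommand command); // actuate the motors accordingly
}

interface SharedCar {
    CarCommand getCommand(); // null when this behavior has nothing to request
    String getName();
}

class Arbiter {
    private final SharedCar[] cars;   // index 0 = most competent behavior layer
    private final CarDriver driver;

    Arbiter(SharedCar[] cars, CarDriver driver) {
        this.cars = cars;
        this.driver = driver;
    }

    void run() {
        while (true) {
            // Scan from the most competent layer downwards; the first behavior
            // with a non-null command wins this round.
            for (int i = 0; i < cars.length; i++) {
                CarCommand command = cars[i].getCommand();
                if (command != null) {
                    driver.perform(command);                  // let the winner drive
                    System.out.println(cars[i].getName() + " wins");
                    break;                                    // start over from index 0
                }
            }
        }
    }
}
```

The break followed by a restart from index 0 is what lets the most competent layer pre-empt the lower layers on the very next iteration.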
|
|
|
|
|
|
Fred Martin's arbiter [3, pp. 214-218] takes a third approach: it keeps a list of priorities and a list of enabled/disabled statuses (represented by the associated priority value for *enabled* vs. 0 for *disabled*). Each behavior is in charge of enabling and disabling itself, and a ***prioritize()*** function continuously checks these lists and sets the motor power to the values specified by the enabled behavior with the highest priority.
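
In the same paraphrased style, Martin's scheme could be sketched as below (Martin's original targets the Handy Board and is not Java; names, array sizes, and the motor call are placeholders):

```java
// Java-flavored sketch of Martin's arbitration idea [3, pp. 214-218]:
// every behavior owns a slot, writes its requested motor powers there, and
// enables itself by storing its priority (0 means disabled). prioritize()
// keeps handing the motors to the enabled behavior with the highest priority.
class MartinStyleArbiter {
    static final int BEHAVIORS = 4;               // placeholder count

    static int[] enabled = new int[BEHAVIORS];    // priority value, or 0 if disabled
    static int[] leftPower = new int[BEHAVIORS];  // requested power, left motor
    static int[] rightPower = new int[BEHAVIORS]; // requested power, right motor

    static void prioritize() {
        while (true) {
            int winner = -1;
            int best = 0;
            for (int i = 0; i < BEHAVIORS; i++) { // always visits every behavior
                if (enabled[i] > best) {
                    best = enabled[i];
                    winner = i;
                }
            }
            if (winner >= 0) {
                setMotorPower(leftPower[winner], rightPower[winner]);
            }
        }
    }

    static void setMotorPower(int left, int right) {
        // placeholder for the actual motor call
    }
}
```

Note that *prioritize()* scans all behaviors on every pass, which is the point of comparison in the next paragraph.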
|
|
|
|
|
|
The three approaches to implementing behavior control are much the same, and from our point of view one would not notice a difference from swapping one approach for another unless the program had a large number of behaviors. In that case, Martin's arbiter would have to run through all of the behaviors on every iteration, while the provided Arbiter (based on Jones, Flynn, and Seiger's suggestion) would be faster, since it breaks the control loop as soon as it has found the highest-priority behavior with a command, thereby decreasing the robot's reaction time.
|
|
|
|
|
|
## Conclusion
|
|
We managed to stay well within our allocated timeframe, spending only four hours in total on building and performing experiments. We attribute this to the nature of the exercises; this time the focus was mainly on observation and *understanding* the structure and implementation of the behavior control paradigm, rather than on making something function (such as trying to get the robot to balance on two wheels).
|
|
|
|
|
|
TODO:
|
|
- summarize the sub-conclusions
|
|
- overall conclusion (a success? what have we learned? - especially in relation to next week's climbing exercise)
|
|
- Is there anything we did not get done?
|
|
|
|
|
|
## References
|