|
|
|
|
|
From these small motor power experiments we realised a flaw in our program, namely that we had forgotten to stop the light sensor motor after repositioning the sensor at its original spot, which meant that it just kept turning left until a new light registration was initiated by the Follow behavior.
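The fix itself is a single missing stop() call. A minimal mock (not the actual leJOS robot code; `MockMotor` and its methods are illustrative stand-ins) shows why forgetting it makes the sensor drift past its target:

```java
// Illustrative mock, not the actual robot code: a motor that keeps
// rotating until stop() is called, like the NXT motor did.
class MockMotor {
    boolean running = false;
    int position = 0;

    void backward() { running = true; }   // start turning left
    void stop()     { running = false; }

    // advance simulated time; the motor drifts while running
    void tick(int steps) {
        if (running) position -= steps;
    }
}

public class RepositionDemo {
    public static void main(String[] args) {
        // Buggy version: reposition without stopping.
        MockMotor buggyMotor = new MockMotor();
        buggyMotor.backward();
        buggyMotor.tick(5);               // reaches the original spot...
        buggyMotor.tick(5);               // ...but keeps drifting left
        System.out.println("without stop(): " + buggyMotor.position);

        // Fixed version: stop once the original spot is reached.
        MockMotor fixedMotor = new MockMotor();
        fixedMotor.backward();
        fixedMotor.tick(5);
        fixedMotor.stop();                // the forgotten call
        fixedMotor.tick(5);               // no further drift
        System.out.println("with stop(): " + fixedMotor.position);
    }
}
```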
|
|
|
|
|
|
|
|
|
With this flaw in our algorithm corrected, we were able to see the actual effect of the parameters. We therefore observed the robot with a motor power of 60 and a delay before stopping of 100 ms. This turned out to give a much too small angle, with the result that the difference between the left and right measured light values was not large enough, so the driving motors were simply set to move forward rather than following the light properly. We therefore tried increasing the delay to 300 ms. This didn't seem to make much of a difference, so we increased it to 500 ms, which was the initial value. We considered this worth retrying because we had realized that the initial observations couldn't be trusted due to the algorithmic flaw. This worked better, as seen in video 7: the light sensor now turned at a satisfying speed through a big enough angle, but reported the opposite behavior to the driving motors, as the robot drove right when the light source was to the left of the robot and vice versa.
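Why a too-small sweep angle degenerates into driving straight can be seen from the decision logic alone. The snippet below is a hedged illustration, not our robot code; the threshold value and method names are assumptions:

```java
// Illustrative decision logic (threshold and names are assumptions,
// not the robot's actual code): with too small a sweep angle the two
// readings are nearly equal, so their difference stays inside the
// dead-band and the robot just drives forward.
public class TurnDecisionDemo {
    static final int THRESHOLD = 5; // assumed dead-band on the difference

    static String decide(int leftLight, int rightLight) {
        int diff = leftLight - rightLight;
        if (Math.abs(diff) < THRESHOLD) return "forward";
        return diff > 0 ? "turn left" : "turn right";
    }

    public static void main(String[] args) {
        // Small sweep (100 ms delay): readings almost identical.
        System.out.println(decide(52, 50));
        // Larger sweep (500 ms delay): readings clearly differ.
        System.out.println(decide(60, 45));
    }
}
```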
|
|
|
|
|
|
[![Motorized Follower](http://img.youtube.com/vi/63BJjsBgtrg/0.jpg)](https://www.youtube.com/watch?v=63BJjsBgtrg)
|
|
|
*Video 7: Motorized LightSensor working well with the proper variables.*
|
|
|
|
|
|
|
|
|
The issue with the robot avoiding light rather than following it was quickly solved by simply inverting the direction of the light sensor motor everywhere it is used, resulting in the control loop shown in code snippet 1.
|
|
|
|
|
|
``` java
// TODO: insert the control loop of the Follow behavior here
```

TODO: insert program code - check the syntax for code blocks - I think it is correct.
|
|
|
*Code snippet 1: Control loop of Follow behavior with separate motor for light sensor control*
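Until the real snippet is pasted in, the loop described above can be sketched as follows. This is a reconstruction under assumptions, not the actual leJOS code: the hardware sweep/read calls are reduced to suppliers, and the class name, base power, and gain are ours; only the 60 motor power and 500 ms delay come from the experiments.

```java
import java.util.function.IntSupplier;

// Hedged sketch of the Follow control loop with a separate sensor
// motor. The leJOS motor and sensor calls are replaced by suppliers
// so the control math can run anywhere.
public class FollowLoopSketch {
    static final int BASE_POWER = 60;   // forward power from the experiments
    static final int SWEEP_MS   = 500;  // delay before stopping the sweep

    // One iteration: sample light on both sides, derive wheel powers.
    static int[] iterate(IntSupplier leftReading, IntSupplier rightReading) {
        // sweep sensor left, wait SWEEP_MS, stop, read  (hardware calls omitted)
        int left = leftReading.getAsInt();
        // sweep sensor right past center, wait, stop, read
        int right = rightReading.getAsInt();
        // brighter on the left -> speed up the right wheel, and vice versa
        int diff = left - right;
        return new int[] { BASE_POWER - diff, BASE_POWER + diff };
    }

    public static void main(String[] args) {
        // Light source to the left of the robot: left reads brighter,
        // so the right wheel speeds up and the robot turns toward the light.
        int[] powers = iterate(() -> 60, () -> 45);
        System.out.println("left wheel: " + powers[0] + ", right wheel: " + powers[1]);
    }
}
```

Note how the sign of `diff` is exactly what the direction inversion fixed: with the sensor motor directions swapped, left and right readings trade places and the robot steers away from the light instead of toward it.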
|
|
|
|
|
|
After believing that the Follow behavior now worked with the additional motor controlling the light sensor, we decided to test it in the dark to obtain a bigger light difference. Here (the bathroom of Suze), the robot's response was easy to observe, as seen in video 8. Note, though, that the big difference in measured light values caused it to increase the speed of one motor a lot, thereby turning much faster than in the room with a normal ambient light setting.
|
|
|
|
|
|
[![Follower in dark room](http://img.youtube.com/vi/bbRn_WVlDwk/0.jpg)](https://www.youtube.com/watch?v=bbRn_WVlDwk)
|
|
|
*Video 8: Testing the Follower behavior in a darker ambient lighting environment.*
|
Switching on everything else again, all behaviors were observable - we saw the avoid behavior among them, as seen in video 9.
|
|
[![All behaviors active](http://img.youtube.com/vi/YZ3ukgE0WIk/0.jpg)](https://www.youtube.com/watch?v=YZ3ukgE0WIk)
|
|
|
*Video 9: Robot driving and reacting to all 4 behaviors*
|
|
|
|
|
|
TODO: insert program code
|
|
|
|
|
|
|
|
|
### SharedCar and Arbiter
|
|
|
In the provided code, the arbitration suggested by Jones, Flynn, and Seiger [2, pp. 306] is implemented in the classes ***SharedCar*** and ***Arbiter*** in the "reverse order" compared to the code presented by Jones et al.: the Arbiter goes through the list and, for the first ***SharedCar*** whose ***CarCommand*** instance is not null, calls ***CarDriver***'s *perform()* method with that *SharedCar*'s ***CarCommand*** instance and reports the *SharedCar* as the winner. It then breaks the for loop and starts over from index 0 of the *SharedCar* array, thus always starting from the most competent behavior layer. The order is "reverse" because in the code presented by Jones et al., the arbiter goes through the behaviors in increasing order of competence, overruling the previous setting of the motor input.
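This pass can be sketched in a few lines. The real *SharedCar*, *CarCommand*, and *Arbiter* live in the provided code; the bodies below are our reconstruction (the method names `getCommand()` and `arbitrateOnce()` are assumptions, and *CarDriver* is reduced to a comment):

```java
import java.util.List;

// Reconstruction of the arbitration pass described above, not the
// provided code itself. Index 0 is the most competent behavior layer.
class CarCommand {
    final String name;
    CarCommand(String name) { this.name = name; }
}

interface SharedCar {
    CarCommand getCommand(); // null while the behavior has nothing to say
}

class Arbiter {
    private final List<SharedCar> cars;

    Arbiter(List<SharedCar> cars) { this.cars = cars; }

    // One pass: the first non-null command wins.
    CarCommand arbitrateOnce() {
        for (SharedCar car : cars) {
            CarCommand cmd = car.getCommand();
            if (cmd != null) {
                // in the real code: carDriver.perform(cmd), report the
                // winner, then break and start over from index 0
                return cmd;
            }
        }
        return null;
    }
}

public class ArbiterDemo {
    public static void main(String[] args) {
        SharedCar avoid  = () -> null;                     // nothing to avoid
        SharedCar follow = () -> new CarCommand("follow"); // wants control
        SharedCar cruise = () -> new CarCommand("cruise"); // always wants control

        Arbiter arbiter = new Arbiter(List.of(avoid, follow, cruise));
        // avoid is most competent but silent, so follow wins over cruise
        System.out.println(arbiter.arbitrateOnce().name);
    }
}
```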
|
|
|
|