Camilla M. V. Frederiksen, Ida Larsen-Ledet, Nicolai Nibe and Emil Platz
|
|
**Activity duration:** 4 hours
|
|
|
|
|
|
## Goal
|
|
|
|
|
|
To implement observable behaviors on a vehicle fitted with three kinds of sensors, using the behavior control paradigm described by Brooks [1] and by Jones, Flynn & Seiger [2]. The sensors used will be an ultrasonic sensor, a light sensor and two touch sensors. This is further described in [lesson plan 7](http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson7.dir/Lesson.html).
|
|
|
|
|
|
## Plan
|
|
|
We will attempt to complete the exercises within 6 hours, since we have not managed to complete any of the previous lessons in less time than that.
|
We rebuilt the robot to use four sensors: two touch sensors, a light sensor, and an ultrasonic sensor.
|
|
![rebuilt robot](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week9/img/TODO.PNG)
|
|
|
*Figure 1: The robot, refitted with two touch sensors, one light sensor, and one ultrasonic sensor.*
|
|
|
|
|
|
|
|
|
### Observing the Avoid behavior
|
|
|
We were provided with the program ***AvoidFigure9_3.java***, which implements a behavior that tries to avoid obstacles using the ultrasonic sensor. Figure 2 shows a diagram representing the robot's overall behavior when running it. We ran the program in order to observe the resulting conduct of the robot.
|
|
|
|
|
|
|
|
|
![the avoid behavior](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week9/img/fig93.PNG)
|
|
|
*Figure 2: Diagram of the avoid behavior. The image is originally Figure 9.3 in [2].*
|
|
|
|
|
|
The program worked quite well. The robot successfully avoided obstacles registered by the ultrasonic sensor, and when approaching a corner it seemed to be looking for a way around the obstacle by turning its body to scan from side to side, increasing the angle of its left turns with each try. When we looked into the program code, we saw that this observation was correct, albeit a little too specific: the robot increases its turn angle in the direction where it measures the largest distance to any obstacle - i.e. it is not necessarily the left-turn angle that is increased. Video 1 shows the robot avoiding different obstacles. [TODO: more thorough description]
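The turn-selection logic we inferred above can be sketched in a few lines. Note that this is our own reconstruction, not the actual code of *AvoidFigure9_3.java*: the method name, the base angle of 45 degrees and the 10 degree step per retry are assumptions for illustration.

```java
// Hypothetical sketch of the turn-direction logic we observed: the robot
// compares the distance measured to the left and to the right, turns toward
// the larger reading, and widens the turn a little on every retry.
// The base angle (45) and step (10) are our assumptions, not the program's.
public class AvoidSketch {
    static final int STEP = 10; // degrees added per failed attempt (assumed)

    /** Returns a signed turn angle: negative = turn left, positive = turn right. */
    static int chooseTurn(int leftDistanceCm, int rightDistanceCm, int attempt) {
        int angle = 45 + attempt * STEP;               // widen the turn each retry
        return (leftDistanceCm >= rightDistanceCm) ? -angle : angle;
    }

    public static void main(String[] args) {
        System.out.println(chooseTurn(80, 30, 0)); // more room on the left -> -45
        System.out.println(chooseTurn(20, 90, 2)); // more room on the right -> 65
    }
}
```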
|
|
|
|
|
|
|
|
|
![robot running avoid behavior](TODO)
|
|
|
*Video 1: The robot running AvoidFigure9_3.java, implementing the avoid behavior*
|
|
|
|
|
|
|
|
|
#### Incorporating a 180 degree escape turn
|
We tried modifying *AvoidFigure9_3.java* to make the robot perform a 180 degree escape turn when encountering a corner.
|
|
|
|
|
We tried changing the program by making the robot drive backwards a little when encountering an obstacle and then spin 180 degrees (by making one motor drive forward and the other drive backwards). Initially we made the robot perform the turn for 1 second (1000 ms), which wasn't enough, but when we changed it to 2 seconds (2000 ms) it spun approximately 180 degrees. [TODO: video description]
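Our timing experiment amounts to a simple linear calibration: if 2000 ms of spinning turns the robot roughly 180 degrees, the spin time for other angles can be estimated by scaling. A minimal sketch (our own helper, not part of the provided code), which assumes the turn rate stays roughly constant:

```java
// Linear spin-time calibration based on our measurement: 2000 ms ~= 180 degrees.
// Assumes a constant turn rate, which is only approximately true on the real robot.
public class SpinCalibration {
    static final int MS_PER_180 = 2000; // measured empirically

    /** Estimated spin duration in milliseconds for the given angle. */
    static int spinTimeMs(int degrees) {
        return degrees * MS_PER_180 / 180;
    }

    public static void main(String[] args) {
        System.out.println(spinTimeMs(180)); // 2000
        System.out.println(spinTimeMs(90));  // 1000
    }
}
```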
|
|
|
|
|
|
|
|
|
![Improved avoid behavior](TODO)
|
|
|
*Video 2: The robot performing a 180 degree turn when encountering a corner (approximately TODO seconds in)*
|
|
|
|
|
|
|
|
|
|
|
|
### Observing the Avoid, Follow and Cruise behaviors
|
|
|
We were provided with the program ***RobotFigure9_9.java***, which implements a behavior control network incorporating three behaviors: Avoid (as described earlier), Follow (which seeks out bright light and follows it), and Cruise (which simply drives the robot forwards). Figure 3 shows a diagram of the behavior control network. We ran the program in order to observe the resulting conduct of the robot.
|
|
|
|
|
|
![behavior control network](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week9/img/fig99.PNG)
|
|
|
*Figure 3: Diagram of the behavior control network of section 9.5 in [2]. The image is originally Figure 9.9 in [2].*
|
|
|
|
|
|
|
|
|
The car stopped and looked around every time it was bothered by either an obstacle or light; it was hard to distinguish whether it was reacting to light or to an obstacle. [TODO: video of avoid program + description - EMIL]
|
|
|
|
|
|
Only:
|
We realized that a GUI would have been a good idea, to ease testing of different parameter values.
|
|
|
|
|
The reason for lowering the power and the interval time: there is less resistance on the motor, so less force is needed.
|
|
|
|
|
|
|
|
|
We switched off all other behaviors to be able to test it in isolation.
|
|
|
|
|
|
A power of 70 made the robot go crazy (see video TODO).
|
|
|
We changed the length of the interval from 500 ms to 100 ms.
|
Stopping the motors at the end helped, but it seemed that the sensor wasn't turned on.
|
|
|
|
|
We tested in the dark to obtain a bigger light difference. Here, the robot's response was easy to observe (TODO: video). Note, though, that the big difference in measured light values caused it to turn a lot - a little too much.
|
|
|
|
|
|
|
|
|
Switching everything else on again, all behaviors were observable: we saw avoid overrule follow, and we saw the robot respond immediately to bumping into something (escape overruling the rest). TODO: video. In general, we did not observe follow very often, but as we reasoned earlier, this is because the ambient light in the room yields too small a difference from the flashlight, so the motor power for the turn is not very large.
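Our reasoning about why follow is hard to observe in ambient light can be illustrated with a small sketch: if the turn power is proportional to the difference between two light readings, a small difference yields only a small power. The method name, the gain and the clamping are our assumptions for illustration, not identifiers from *RobotFigure9_9.java*.

```java
// Hypothetical follow-turn sketch: turn power proportional to the difference
// between two light readings, clamped to the motor's maximum power.
// A bright room leaves little headroom for the flashlight to create a
// difference, so the resulting power (and the visible turn) stays small.
public class FollowSketch {
    /** Positive result = turn toward the left reading, negative = toward the right. */
    static int turnPower(int lightLeft, int lightRight, int gain, int maxPower) {
        int diff = lightLeft - lightRight;   // positive -> brighter on the left
        int power = gain * diff;
        return Math.max(-maxPower, Math.min(maxPower, power));
    }

    public static void main(String[] args) {
        System.out.println(turnPower(60, 40, 5, 100)); // big difference (dark room) -> clamped to 100
        System.out.println(turnPower(52, 50, 5, 100)); // small difference (bright room) -> only 10
    }
}
```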
|
|
|
|
|
|
TODO: insert program code
|
|
|
|
|
|
### SharedCar and Arbiter
|
|
|
|
|
|
|
|
|
In the provided code, the arbitration suggested by Jones, Flynn, and Seiger [2, pp. 306] is implemented in the classes ***SharedCar*** and ***Arbiter*** in the "reverse order" compared to the code presented by Jones et al.: the Arbiter goes through the list of *SharedCar*s and, for the first one whose ***CarCommand*** instance is not null, calls ***CarDriver***'s *perform()* method with that ***CarCommand*** and reports the *SharedCar* as the winner. It then breaks out of the for loop and starts over from index 0 of the *SharedCar* array, thus always starting from the most competent behavior layer. The reason this order is "reverse" is that in the code presented by Jones et al., the arbiter goes through the behaviors in increasing order of competence, each overruling the previous setting of the motor input.
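The arbitration described above can be modeled in a few lines of plain Java. This is a simplified sketch, not the provided code: we model a ***CarCommand*** as a plain string, and we show only a single arbitration round (the real Arbiter repeats this scan from index 0 forever), so the first non-null command from the most competent layer wins.

```java
import java.util.Arrays;
import java.util.List;

// Simplified model of the fixed-priority arbitration: behaviors offer a
// command or null; the first non-null command, scanning from the most
// competent layer down, wins the round. CarCommand is modeled as a String.
public class ArbiterSketch {
    interface SharedCar { String getCommand(); } // null = no wish this round

    /** One arbitration round over layers ordered most-competent-first. */
    static String arbitrate(List<SharedCar> carsMostCompetentFirst) {
        for (SharedCar car : carsMostCompetentFirst) {
            String cmd = car.getCommand();
            if (cmd != null) return cmd; // winner; the real Arbiter then rescans from 0
        }
        return null; // no layer wants control
    }

    public static void main(String[] args) {
        SharedCar avoid  = () -> null;       // nothing to avoid right now
        SharedCar follow = () -> "turnLeft"; // sees brighter light to the left
        SharedCar cruise = () -> "forward";  // always wants to drive
        System.out.println(arbitrate(Arrays.asList(avoid, follow, cruise))); // turnLeft
    }
}
```

With avoid quiet, follow wins over cruise; if follow also returned null, cruise's "forward" would win, which matches the observed default driving.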
|
|
|
|
|
|
TODO: Compare this with the arbiter of Fred Martin, [2, pp. 214-218].
|
|
|
|
|
|
|
|
|
## Conclusion
|
|
|
|
|
|
We managed to stay well within our allocated timeframe, spending only four hours in total on building and performing experiments. We attribute this to the nature of the exercises; this time the focus was mainly on observation and *understanding* the structure and implementation of the behavior control paradigm, rather than on making something function (such as trying to get the robot to balance on two wheels).
|
|
|
|
|
|
TODO:
|
|
|
- summarize sub-conclusions
|