|
|
# Group 15
|
|
|
|
|
|
## Lab Notebook - Lesson 7
|
|
|
|
|
|
**Date:** 21-04-2016
|
|
|
|
|
|
**Group members participating:** Kirstine, Simon, Martin, Leander
|
|
|
**Activity duration:** 3 hrs
|
|
|
|
|
|
## Goal
|
|
|
The goal of this project is to experiment with the behavior control paradigm and implement robots exhibiting several different behaviors at the same time.
|
|
|
|
|
|
## Plan
|
|
|
The plan for the project is to start by building the base design of the robot and mounting the light sensor, the distance sensor, and the two touch sensors, so that everything is in place for the rest of the exercises. Then the single-behavior program in the AvoidFigure9_3 Java class will be tested and modified to make the car change direction after avoiding obstacles.
|
|
|
After this we will start to experiment with the multiple-behavior program found in the RobotFigure9_9 Java class, where we will again observe the robot's behavior without alterations. Then we will add a new behavior to this program, such that triggering one of the touch sensors makes the program escape the obstacle responsible for triggering the sensor.
|
|
|
Finally we will add a horizontally turning motor to the top of the robot, which allows us to turn the light sensor without moving the robot, and we will adapt the relevant programs accordingly.
|
|
|
|
|
|
## LEGO vehicle that exhibits a single behavior
|
|
|
We have used the program AvoidFigure9_3.java [1] and filmed the behaviour, which can be seen in the video [2]. The observed behaviour was that the robot continuously drove forward until an obstacle within a certain distance was detected by the ultrasonic sensor. When an obstacle was detected, the robot turned to the left, then to the right, and then continued in the direction with the greatest distance to the nearest obstacle. Reading through the code, we confirmed that the logic matches the observed behaviour.
|
|
|
|
|
|
Changing the program so that the car reverses and turns around when all readings are below the stopThreshold was done by adding another control structure to the code. The control structure checks the values and turns the car when all three readings are below the threshold. Spinning the car has been implemented as a spin method on the car, which runs the left motor backwards and the right motor forwards. Using a stopwatch, we measured that the car takes 6.5 seconds to turn 360 degrees, so in the code we spin for half that time. This can be seen in the following code:
|
|
|
|
|
|
```java
if (leftDistance < stopThreshold
        && rightDistance < stopThreshold
        && frontDistance < stopThreshold) {
    // Drive backwards for 2 seconds
    car.backward(power, power);
    Delay.msDelay(ms * 4);
    car.stop();

    // Turn approximately 180 degrees
    car.spin(power);
    Delay.msDelay(spin_time / 2);
    car.stop();
}
```
|
|
|
|
|
|
A video of the car in action can be seen at [3].
|
|
|
## Behaviors as concurrent threads
|
|
|
|
|
|
The program RobotFigure9_9.java [4] has been uploaded to the robot. The program makes use of an Arbiter class, which acts as a director deciding which behaviour gets to overrule the others at any given time. This setup makes it simple to keep the behaviours loosely coupled, meaning that they don't need to know about each other. It also ensures that race conditions between behaviours don't occur, for instance one behaviour detecting an obstacle and trying to reverse while another behaviour, using a different sensor, detects nothing and tries to move forward. Instead, the arbiter decides which behaviours overrule which. This arbitration method is described in detail in the Arbitration section below.
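
To make the handover mechanism concrete, the following is a minimal sketch of how a SharedCar-style command slot could look. The names SharedCar and CarCommand come from the lesson code, but the fields and methods here are our simplified reconstruction, not the actual lesson source:

```java
// Simplified sketch of the shared-command handover (our reconstruction,
// not the actual lesson source). A behavior writes a command into its
// SharedCar; noCommand() clears it, letting lower priorities win.
public class SharedCarSketch {
    // Stand-in for CarCommand: just left/right motor powers.
    static class CarCommand {
        final int left, right;
        CarCommand(int left, int right) { this.left = left; this.right = right; }
    }

    static class SharedCar {
        private CarCommand command;                         // null = no request
        synchronized void forward(int l, int r) { command = new CarCommand(l, r); }
        synchronized void noCommand()           { command = null; }
        synchronized CarCommand getCommand()    { return command; }
    }

    // Arbitration: the first non-null command in the array wins.
    static int arbitrate(SharedCar[] cars) {
        for (int i = 0; i < cars.length; i++)
            if (cars[i].getCommand() != null) return i;
        return -1; // no behavior wants control
    }

    public static void main(String[] args) {
        SharedCar[] cars = { new SharedCar(), new SharedCar(), new SharedCar() };
        cars[2].forward(80, 80);             // only the lowest priority is active
        System.out.println(arbitrate(cars)); // prints 2
        cars[0].forward(-80, -80);           // highest priority takes over
        System.out.println(arbitrate(cars)); // prints 0
    }
}
```

The key property is that a behavior never talks to another behavior, only to its own SharedCar slot.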
|
|
|
Both the Avoid class and the Follow class perform three checks that determine their effect on the behavior. The Avoid class handles the behavior resulting from the ultrasonic sensor readings. The first check is in the forward direction, where the measurement is compared to the stopThreshold. As long as the measured distance is larger than the threshold, the noCommand() method of the SharedCar is called, which allows the Cruise class, or others with lower priority, to control the behavior. If the measurement drops below the threshold, i.e. the robot has come too close to an obstacle, the robot rotates slightly to the left and takes a measurement, then rotates back past the initial angle slightly to the right and measures once again. Finally the two measurements are compared: the robot rotates towards the left if that side had the most space, and otherwise it stays rotated to the right, making this the new forward direction. The following figure illustrates the behavior when, for example, an obstacle is in close proximity in the forward and left directions.
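
The core decision of the Avoid class can be reduced to a pure function over the three distance readings. The sketch below is our own reconstruction of that logic; the threshold value and the return-value encoding are illustrative and not taken from the lesson code:

```java
// Reconstruction of the Avoid decision logic as a pure function.
// Returns 0 = no command (yield to lower priorities), -1 = rotate back
// to the left, +1 = stay rotated to the right. Values are illustrative.
public class AvoidSketch {
    static final int STOP_THRESHOLD = 25; // cm, illustrative value

    static int decide(int frontDistance, int leftDistance, int rightDistance) {
        if (frontDistance > STOP_THRESHOLD) {
            return 0; // free ahead: call noCommand() and yield control
        }
        // Too close: compare the side measurements taken while rotated.
        if (leftDistance > rightDistance) {
            return -1; // most space on the left: rotate back to the left
        }
        return 1;      // otherwise stay rotated to the right
    }

    public static void main(String[] args) {
        System.out.println(decide(100, 40, 40)); // free ahead -> 0
        System.out.println(decide(10, 60, 20));  // blocked, left open -> -1
        System.out.println(decide(10, 20, 60));  // blocked, right open -> 1
    }
}
```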
|
|
|
|
|
|
*Figure 2* - Illustration of the behavior resulting from an obstacle
|
|
|
|
|
|
The Follow class implements a similar behavior. At the beginning of the while loop the forward light measurement is compared to the lightThreshold. As long as the light level is below the threshold, the noCommand() method is once again called. In the same way as in the Avoid class, measurements are taken to both the left and the right. After returning to the forward direction, the light levels from the two sides are compared, and the robot rotates towards the light.
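
Mirroring the Avoid sketch above, the Follow decision can also be written as a pure function. This is our reconstruction; the threshold and encoding are illustrative:

```java
// Reconstruction of the Follow decision logic; threshold and encoding
// are illustrative, not taken from the lesson code.
public class FollowSketch {
    static final int LIGHT_THRESHOLD = 50; // illustrative raw light level

    // 0 = noCommand(), -1 = rotate left towards light, +1 = rotate right.
    static int decide(int forwardLight, int leftLight, int rightLight) {
        if (forwardLight < LIGHT_THRESHOLD) {
            return 0; // not enough light ahead: yield control
        }
        return (leftLight > rightLight) ? -1 : 1; // rotate towards the brighter side
    }

    public static void main(String[] args) {
        System.out.println(decide(30, 70, 40)); // dim ahead -> 0
        System.out.println(decide(80, 70, 40)); // brighter left -> -1
        System.out.println(decide(80, 40, 70)); // brighter right -> 1
    }
}
```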
|
|
|
The Cruise class simply requests the drive forward behavior.
|
|
|
## Adding an Escape behavior
|
|
|
Because an arbiter is used, implementing a new behaviour is simple. The escape behaviour uses the touch sensors and should allow the robot to avoid obstacles that the ultrasonic sensor did not detect, for instance a low-profile object on the ground. When the robot bumps into an object, one or both touch sensors will trigger, and the car should then drive away from the object.
|
|
|
|
|
|
This has been implemented by handling three different cases. A press event on both sensors at the same time results in the robot reversing for 2 seconds, then turning a bit to the left and continuing.
|
|
|
|
|
|
A press event on either one of the sensors results in the car running the opposite motor backwards for 2 seconds, turning the car away from the obstacle. The arbiter then resumes and decides what the robot should do.
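
The three cases can be summarized as a small decision function. This sketch is our own; the character encoding of the actions is purely illustrative:

```java
// Reconstruction of the three Escape cases as a pure decision function.
// 'B' = reverse both motors for 2 s, then turn a bit to the left,
// 'R' = run the opposite (right) motor backwards, turning right,
// 'L' = run the opposite (left) motor backwards, turning left,
// 'N' = nothing pressed: noCommand(). Encoding is illustrative.
public class EscapeSketch {
    static char decide(boolean leftPressed, boolean rightPressed) {
        if (leftPressed && rightPressed) return 'B'; // head-on collision
        if (leftPressed)  return 'R';                // obstacle on the left
        if (rightPressed) return 'L';                // obstacle on the right
        return 'N';                                  // yield to lower priorities
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true));   // prints B
        System.out.println(decide(true, false));  // prints R
        System.out.println(decide(false, true));  // prints L
        System.out.println(decide(false, false)); // prints N
    }
}
```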
|
|
|
## Adding a third motor to turn the light sensor
|
|
|
When reimplementing the follow behaviour, we decided that the motor for turning the sensor would be controlled directly from the Follow class using the Motor.A instance. The main drawback of this is that if other behaviors wanted access to this motor, we could get concurrency problems. An alternative solution would be to reimplement the CarDriver class, enabling it to control the third motor given the right car commands, but this would be considerably more involved than simply using the motor directly from the Follow behavior, which is why we chose the direct approach.
|
|
|
|
|
|
The newly implemented follow behaviour works by stopping the car, turning the motor to -45 degrees, measuring the light, turning to 45 degrees and measuring again. It then applies power to the motors depending on the reading on each side: a higher reading in one direction means more power to the motor on the other side, so the car turns towards the light. This can be seen in the code snippet below:
|
|
|
|
|
|
|
|
|
```java
// Follow light for a while
// leftLight  = light value from the sensor turned to the left
// rightLight = light value from the sensor turned to the right
delta = leftLight - rightLight;
car.forward(power - delta, power + delta);
```
|
|
|
|
|
|
The reason the car stops while reading the light values is that turning the sensor takes about a second, and the car might pass a light source in that time frame.
|
|
|
|
|
|
|
|
|
## Arbitration
|
|
|
The Arbiter Java class [6] implements arbitration in a rather simple way. First an array of SharedCar objects is created:

```java
SharedCar[] car = { new SharedCar(), new SharedCar(),
                    new SharedCar(), new SharedCar() };
```
|
|
|
|
|
|
Then the objects from this array are used to initialize the four behaviors:

```java
Escape escape = new Escape(car[0]);
Avoid avoid = new Avoid(car[1]);
FollowSensorMotor follow = new FollowSensorMotor(car[2]);
Cruise cruise = new Cruise(car[3]);
```
|
|
|
|
|
|
The arbiter has access to the array of SharedCar objects, and arbitration is performed by looping through it and picking the first SharedCar that has a valid command ready:

```java
for (int i = 0; i < car.length; i++)
{
    CarCommand carCommand = car[i].getCommand();
    if (carCommand != null)
    {
        cd.perform(carCommand);
        winner = i;
        break;
    }
}
```
|
|
|
|
|
|
The different behaviors give up control by calling the noCommand() method on their own SharedCar instance, which sets the carCommand field to null and in turn makes the arbiter skip to the next behavior in the list. In this way a hierarchy of behaviors is established, with the 0th behavior, Escape, overruling all other behaviors whenever it is active.
|
|
|
|
|
|
Fred Martin implements similar logic in his prioritization code [5, p. 216]. In this program each process is assigned a priority, and an array of enabled processes is maintained by setting the i'th entry to process i's priority when the i'th process is enabled and to 0 when it is disabled. Arbitration is performed by scanning the array of enabled processes and finding the maximum priority, and then scanning the array of process priorities to find the first process with that priority, which is then allowed to run. If the maximum active priority is tied between several processes, the arbiter picks the first one it encounters in the list of process priorities.
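
The scheme just described can be sketched in a few lines. This is our paraphrase of the logic from [5], not Martin's original code (which is written for the Handy Board environment):

```java
// Sketch of the prioritization scheme described by Fred Martin [5, p. 216]:
// enabled[i] holds process i's priority when it is enabled and 0 when it
// is disabled. Our paraphrase, not the original Handy Board code.
public class PriorityArbiterSketch {
    // Returns the index of the winning process, or -1 if none is enabled.
    static int arbitrate(int[] enabled) {
        // First pass: find the maximum active priority.
        int maxPriority = 0;
        for (int p : enabled) {
            if (p > maxPriority) maxPriority = p;
        }
        if (maxPriority == 0) return -1; // no process enabled
        // Second pass: ties go to the earliest entry in the array.
        for (int i = 0; i < enabled.length; i++) {
            if (enabled[i] == maxPriority) return i;
        }
        return -1; // unreachable
    }

    public static void main(String[] args) {
        System.out.println(arbitrate(new int[]{0, 0, 0})); // prints -1
        System.out.println(arbitrate(new int[]{2, 5, 5})); // tie -> prints 1
        System.out.println(arbitrate(new int[]{3, 0, 1})); // prints 0
    }
}
```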
|
|
|
|
|
|
The main difference between the two implementations is that Fred Martin's explicitly gives each process a priority, whereas in the implementation we use, every process is implicitly given a priority based on its position in the arbiter's array. This makes ties impossible in our implementation, while they are a special case in Fred Martin's.
|
|
|
|
|
|
## Conclusion
|
|
|
In this project we experimented with the behavior control paradigm and subsumption-style control. We first constructed a robot with a single behavior, which took distance readings from an ultrasonic sensor and steered away from nearby objects. After testing this behavior, we added logic that avoided getting stuck in corners by having the robot back away when there were nearby objects on all sides.
|
|
|
|
|
|
The next program was implemented to exhibit multiple behaviors at the same time, and an arbiter algorithm chose between avoiding objects using the ultrasonic sensor, moving in the direction of the most light using the attached light sensor, and simply moving forward when neither of the other two behaviors was applicable. Arbitration was done in a simple hierarchical manner, with the active behavior with the lowest index subsuming the other two.
|
|
|
After experimenting with this design, we added a new behavior to the program which steered away when obstacles collided with one of the two touch sensors that made up the bumper in front. We also added a new motor to the robot, which steered the light sensor independently of the rest of the robot, such that light values could be read without moving the robot.
|
|
|
These two additions were tested and found to be reliable. We did not have time to implement an adaptive light threshold, but we assume it would have improved the light-following behavior and spread its activity out more evenly, instead of it being constantly active for long stretches of time.
|
|
|
|
|
|
Lastly we reflected on the usage of arbitration in our multiple behavior program and compared it to similar logic from other implementations.
|
|
|
## References
|
|
|
|
|
|
[1] http://goo.gl/cmf22O (http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson7.dir/Programs/AvoidFigure9_3.java)
|
|
|
|
|
|
[2] https://goo.gl/UJkzmG (https://drive.google.com/a/ceromedia.dk/file/d/0Bxn4zObl7UglQUZnSlplMlF4RWc/view)
|
|
|
|
|
|
[3] https://goo.gl/blhYwf (https://drive.google.com/a/ceromedia.dk/file/d/0Bxn4zObl7UglTkJTMG1qMWFVblE/view)
|
|
|
|
|
|
[4] http://goo.gl/9rJl3F (http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson7.dir/Programs/RobotFigure9_9.java)
|
|
|
|
|
|
[5] Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Prentice Hall, 2001.
|
|
|
|
|
|
[6] https://goo.gl/NMXJq2 (https://gitlab.au.dk/lego15/NXT_Programming_Lesson_7/blob/master/src/Arbiter.java)
|
|
|
|