|
To get the race car around the bends, we decided to hardcode a turn which would be triggered when the light sensor read a series of white readings. This caused some issues: the robot, guided by the PID controller, would reach the bend at different positions, so the hardcoded turn would not always line up correctly and reach the next part of the ramp at the right angle. If the robot turned too little, it would simply miss the black line and drive off the side of the ramp.
|
|
|
|
|
|
|
|
This finding has led us to the conclusion that we need to add another sensor. We think that a gyro sensor could better detect the change in the angle of the slope on the ramp, so the next experiment is going to incorporate this.
|
|
|
|
|
|
|
|
|
|
# Experiment 3: The Race Car with gyro, color sensor and light sensor
|
|
|
|
|
|
|
|
## Goal
|
|
|
|
To use a behavior pattern to make the robot drive the course.
|
|
|
|
Implement the color sensor and the gyro sensor in the car build.
|
|
|
|
|
|
|
|
## Plan
|
|
|
|
We first made a gyro test setup on a separate LEGO NXT to approximate the change in angle and see if the setup worked (see fig. 10). We hoped to use a gyro to detect the change in angle of the slope and to use it to determine when a turn was coming up.
|
|
|
|
We wanted to use the knowledge gained from our last lab report, so we implemented three different behaviors as concurrent threads, one for each sensor: one to control the robot during straight passages, one to handle turns, and one to stop the robot in the end zone. The behaviors are prioritised to decide which one controls the robot.
|
|
|
|
|
|
|
|
![gyro test setup](http://gitlab.au.dk/uploads/group-22/lego/82c4927ec0/gyro_test_setup.jpg)
|
|
|
|
##### Fig. 10: NXT gyro test setup.
|
|
|
|
|
|
|
|
On the track, the robot has to act according to colors, corners and the black line. To do this we implemented a ‘behavior pattern’ [1] with different prioritised behaviors. The color sensor has the highest priority and indicates when the robot reaches the goal zone marked by green color. The gyro sensor has second priority and is used to sense when the robot has reached a turn. The lowest-prioritised behavior is the light sensor, which is used for the PID controller and tells the robot to follow the black line along the racetrack. The behavior diagram and the code used for implementation are shown below (see fig. 11):
|
|
|
|
|
|
|
|
![BehaviorPattern](http://gitlab.au.dk/uploads/group-22/lego/2691ec786c/BehaviorPattern.png)
|
|
|
|
##### Fig. 11: Diagram showing the prioritised behaviors implemented in the robot.
|
|
|
|
|
|
|
|
```java
import lejos.nxt.Button;
import lejos.util.Delay;

public class RaceCarMain {

    public static void main(String[] args) throws Exception {
        // One SharedCar slot per behavior; each behavior writes its
        // suggested motor commands into its own slot
        SharedCar[] car = { new SharedCar(), new SharedCar(),
                new SharedCar() };
        CarDriver cd = new CarDriver();

        Goal goal = new Goal(car[0]);
        Bump bump = new Bump(car[1]);
        LineFollowerPID follow = new LineFollowerPID(car[2]);
        // Room for more behaviors

        // The arbiter passes the highest-priority active command to the driver
        Arbiter arbiter = new Arbiter(car, cd);
        Button.waitForAnyPress();

        arbiter.setDaemon(true);
        arbiter.start();

        follow.setDaemon(true);
        follow.start();

        goal.setDaemon(true);
        goal.start();

        // Delay starting the gyro-based Bump behavior until the robot is under way
        Delay.msDelay(2000);

        bump.setDaemon(true);
        bump.start();
    }
}
```
|
|
|
|
##### Fig. 12: Code snippet from RaceCarMain.java showing the hierarchical behavior implementation. The robot prioritises behaviors in the following order: ‘Goal’, ‘Bump’ and ‘PID’.
|
|
|
|
|
|
|
|
## Results
|
|
|
|
Results from the gyro test gave us a value to trigger the behavior. When the robot is level, the gyro measures a raw value of roughly 600. When the robot drives over a bump at the plateau, the value dips to around 500. When testing the behavior on the ramp, we found that a threshold of 560 would trigger the turn while still not registering any small bumps made by the robot itself.
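The resulting trigger condition can be sketched as a simple threshold check (the raw values are the ones measured above; the class and method names are our own illustration, not the actual implementation):

```java
public class GyroTrigger {
    // Raw gyro value is roughly 600 when the robot is level and dips to
    // around 500 over the plateau bump; 560 separates the two without
    // reacting to small bumps made by the robot itself
    static final int TURN_THRESHOLD = 560;

    // A reading below the threshold means the robot has hit the plateau bump
    static boolean turnTriggered(int rawGyroValue) {
        return rawGyroValue < TURN_THRESHOLD;
    }
}
```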
|
|
|
|
|
|
|
|
After testing, we mounted the extra sensors to our pivot mounting rig as seen in fig. 13.
|
|
|
|
|
|
|
|
![Racecar_withText](http://gitlab.au.dk/uploads/group-22/lego/6dcb59aa37/Racecar_withText.jpg)
|
|
|
|
##### Fig. 13: Our robot for this experiment. A light sensor, a color sensor and a gyroscope are mounted at the front, and the wheels are mounted directly to the motors.
|
|
|
|
|
|
|
|
#### Gyroscope
|
|
|
|
The racetrack is equipped with three flat plateaus separated by three steep paths. One of the track’s challenges is to detect when the robot is on a plateau. Multiple approaches can be used to overcome this challenge. In order to detect the three plateaus on the racetrack, we use a gyroscope. The idea is that angular changes to the robot’s path can be detected by the gyroscope (see Fig. 14) and thus indicate that it is time to turn.
|
|
|
|
|
|
|
|
![racetrack_sideview2](http://gitlab.au.dk/uploads/group-22/lego/e360f5dbee/racetrack_sideview2.jpg)
|
|
|
|
##### Fig. 14: Shows the robot on the racetrack. The gyroscope’s job is to detect when there is a downwards angular change and a plateau has been reached.
|
|
|
|
|
|
|
|
#### Troubleshooting
|
|
|
|
During testing we encountered a strange glitch where the robot would ignore the given PID values and the input from the gyro, despite the gyro having the highest priority.
|
|
|
|
We still do not know why this happened. In the end we gave the gyro second priority, after the color sensor.
|
|
|
|
|
|
|
|
The way we programmed the PID controller, the robot is only able to work its way from right to left when looking for a black line to follow. This meant that the robot always had to be started to the right of the black line.
|
|
|
|
Finding: Target power (TP) in the PID controller heavily affects how well the robot is able to locate the black line it is supposed to follow.
|
|
|
|
We then encountered a more serious problem: the robot would pass over the black line and not correct itself enough to actually follow the direction of the line. We tried to implement a behavior where the robot would stop after turning a corner and slowly adjust to find the black line again (see fig. 15).
|
|
|
|
|
|
|
|
![soft_turn](http://gitlab.au.dk/uploads/group-22/lego/b850ac6eb2/soft_turn.jpg)
|
|
|
|
##### Fig. 15: Illustration of the soft turn needed for the robot to slowly find the black line again.
|
|
|
|
|
|
|
|
We had an issue with the gyro being triggered when starting the robot, because of the movement of the operator. This made the robot think it was at the first corner, so it would turn right after getting out of the green start zone. To solve this issue, we registered the time when the program was started and only started the inner loop of the gyro sensor class once 1.5 seconds had passed. This ensured the robot would be on the slope of the ramp before measuring for bumps at the plateau.
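A minimal sketch of this timing guard could look as follows (the names are hypothetical, and the current time is passed in explicitly to keep the logic easy to test):

```java
public class GyroStartupGuard {
    // Ignore the gyro for the first 1.5 seconds after program start,
    // so operator movement at startup cannot trigger a turn
    static final long WARMUP_MS = 1500;

    private final long startMs;

    GyroStartupGuard(long startMs) {
        this.startMs = startMs;
    }

    // The gyro class only runs its inner loop once the warm-up has passed
    boolean gyroActive(long nowMs) {
        return nowMs - startMs >= WARMUP_MS;
    }
}
```

On the NXT, `startMs` would be taken from `System.currentTimeMillis()` when the program starts.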
|
|
|
|
|
|
|
|
# Iteration 4: Hardcoded car with no sensors
|
|
|
|
|
|
|
|
## Goal
|
|
|
|
Our goal with our last experiment was to see if we could, within a reasonable time, make a robot follow the track without using any sensors. To do this we wanted to use the NXTRegulatedMotor class, so we could give specific rotation commands to the two electrical motors instead of just a power level and duration.
|
|
|
|
|
|
|
|
## Plan
|
|
|
|
We intended to accomplish this by measuring the length of the track the robot had to drive. We then measured the diameter of the wheels and calculated the circumference. We could then use the length of the track and the circumference of the wheel to calculate how many rotations the wheels would have to turn in order to cover the desired distance.




We then converted the number of rotations to degrees, since this is the required input for the NXTRegulatedMotor.
|
|
|
|
|
|
|
|
In order to turn the facing of the robot 90 degrees, we would turn one wheel 180 degrees (190 degrees in practice, due to the motors) in one direction and the other wheel 180 degrees (again 190 in practice) in the opposite direction. To turn the facing of the robot 180 degrees, we would turn each wheel 360 degrees in opposite directions.
|
|
|
|
|
|
|
|
Diameter * Pi = circumference
|
|
|
|
81.6mm * Pi = ~ 256mm
|
|
|
|
|
|
|
|
Distance to cover / circumference = number of rotations




Distance of turns: 380mm / 256mm = 1.48 ≈ 1.5 rotations




Distance of the long side of the track: 1850mm / 256mm = 7.22 ≈ 7.2 rotations
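These conversions can be collected in a small helper that turns a distance into the degree input the NXTRegulatedMotor expects (a sketch; the class and method names are our own):

```java
public class DriveMath {
    static final double WHEEL_DIAMETER_MM = 81.6;

    // circumference = diameter * pi (about 256 mm for our wheels)
    static double circumferenceMm() {
        return WHEEL_DIAMETER_MM * Math.PI;
    }

    // rotations = distance / circumference; rotate() on the motor takes degrees,
    // so the rotation count is multiplied by 360
    static int distanceToDegrees(double distanceMm) {
        return (int) Math.round(distanceMm / circumferenceMm() * 360);
    }
}
```

For example, `distanceToDegrees(380)` gives roughly 534 degrees for a turn segment and `distanceToDegrees(1850)` roughly 2598 degrees for the long side of the track.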
|
|
|
|
|
|
|
|
We also did two small experiments to make sure that the mathematical theory was correct, as there is always the chance of the electrical motors being inaccurate because of wear and tear.
|
|
|
|
The first experiment was to see if the distance covered was the same as the circumference of the wheel:
|
|
|
|
|
|
|
|
![Car distance](http://gitlab.au.dk/uploads/group-22/lego/dee869bbcd/Car_distance.jpg)
|
|
|
|
##### Fig 16: The test setup for measuring the distances covered by one rotation of the motor. The arrow indicates where the robot started and the line at the wheel is where it ended after one rotation.
|
|
|
|
|
|
|
|
We learned that the robot only covered a distance of 24.7 cm, and not the 25.6 cm it should have. Here we also noticed that the robot ended up slightly at an angle compared to where it started.
|
|
|
|
|
|
|
|
We then conducted a small experiment to see if this was also the case when turning, and we found that it was. Moreover, it was inconsistent how much the electrical motors would turn, resulting in a slight variation in angle on every turn.
|
|
|
|
|
|
|
|
We came up with the following route that the robot would have to drive to make it up the track and all the way down again. We hoped there would be enough room on the track for the robot to make small mistakes.
|
|
|
|
|
|
|
|
Pseudo code:
|
|
|
|
7.2 rotations forward, 90 degrees right, 1.5 rotations forward, 90 degrees right,
|
|
|
|
7.2 rotations forward, 90 degrees left, 1.5 rotations forward, 90 degrees left,
|
|
|
|
7.2 rotations forward,
|
|
|
|
180 degrees,
|
|
|
|
7.2 rotations forward, 90 degrees right, 1.5 rotations forward, 90 degrees right,
|
|
|
|
7.2 rotations forward, 90 degrees left, 1.5 rotations forward, 90 degrees left,
|
|
|
|
7.2 rotations forward,
|
|
|
|
(Stop)
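The pseudo code above can be written out as a table of per-motor rotation commands (our own sketch, using 7.2 rotations = 2592 degrees, 1.5 rotations = 540 degrees, and the 190-degree wheel rotations for a 90-degree turn described earlier):

```java
public class HardcodedRoute {
    static final int LONG_SIDE = 2592;   // 7.2 rotations * 360 degrees
    static final int SHORT_SIDE = 540;   // 1.5 rotations * 360 degrees
    static final int QUARTER_TURN = 190; // per wheel; 180 in theory, 190 in practice
    static final int HALF_TURN = 360;    // per wheel, for a 180-degree turn

    // Each step holds the degrees for the left and right motor; a negative
    // value runs that motor in reverse so the robot turns on the spot.
    // Which motor reverses for a right turn depends on the build, so the
    // signs below are illustrative.
    static int[][] route() {
        return new int[][] {
            { LONG_SIDE, LONG_SIDE },   { QUARTER_TURN, -QUARTER_TURN }, // up, 90 right
            { SHORT_SIDE, SHORT_SIDE }, { QUARTER_TURN, -QUARTER_TURN }, // across, 90 right
            { LONG_SIDE, LONG_SIDE },   { -QUARTER_TURN, QUARTER_TURN }, // up, 90 left
            { SHORT_SIDE, SHORT_SIDE }, { -QUARTER_TURN, QUARTER_TURN }, // across, 90 left
            { LONG_SIDE, LONG_SIDE },                                    // top stretch
            { HALF_TURN, -HALF_TURN },                                   // 180-degree turn
            { LONG_SIDE, LONG_SIDE },   { QUARTER_TURN, -QUARTER_TURN },
            { SHORT_SIDE, SHORT_SIDE }, { QUARTER_TURN, -QUARTER_TURN },
            { LONG_SIDE, LONG_SIDE },   { -QUARTER_TURN, QUARTER_TURN },
            { SHORT_SIDE, SHORT_SIDE }, { -QUARTER_TURN, QUARTER_TURN },
            { LONG_SIDE, LONG_SIDE },                                    // back down
        };
    }
}
```

On the robot, each pair would be executed with `rotate` calls on the two NXTRegulatedMotor instances before moving on to the next step.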
|
|
|
|
|
|
|
|
![Robot race track route measurements](http://gitlab.au.dk/uploads/group-22/lego/3ad939910b/Robot_race_track_route_measurements.png)
|
|
|
|
##### Fig. 17 - Distance that the robot is supposed to drive.
|
|
|
|
|
|
|
|
## Results
|
|
|
|
|
|
|
|
We managed to finish the track with a hardcoded robot in 57 seconds.
|
|
|
|
When running this way, the robot is far from reliable, since the smallest error in the deployment of the robot can result in a failed run.
|
|
|
|
Likewise, any changes in the environment (track) will almost certainly result in a failed run. We encountered dust and dirt at some point on the track, which made one of the wheels spin out. This made the robot drive off at a slight angle, something that made it drive off the track on the way down again.
|
|
|
|
Theoretically, the fastest possible run for this rigid, hardcoded approach would be 27 seconds, but the robot is not able to accelerate accurately at that speed (x3), as one wheel often spins out. This is because of the cheap rubber tyres' lack of grip and the robot trying to accelerate instantly to the required speed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## References
|
|
|
|
|
|
|
|
[1] Behavior pattern: Jones, Flynn, and Seiger, "Mobile Robots: Inspiration to Implementation", Second Edition, 1999, http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson7.dir/11128_Mobile_Robots_Inspiration_to_Implementation-Flynn_and_Jones.pdf
|
|
|