|
|
|
|
|
**Group members participating:** Camilla M. V. Frederiksen, Ida Larsen-Ledet, Nicolai Nibe and Emil Platz
|
|
|
|
|
|
**Activity duration:** 29 hours
|
|
|
|
|
|
|
|
8 hours on Thursday the 28th (All)
|
|
|
|
|
|
|
|
3 hours on Monday 2nd (Nicolai)
|
|
|
|
|
|
|
|
1 hour on Wednesday the 4th (Ida)
|
|
|
|
|
|
|
|
9 hours on Thursday the 5th (Emil, Ida, Nicolai)
|
|
|
|
|
|
|
|
8 hours on Monday the 9th (Camilla, Emil, Ida and Nicolai)
|
|
|
|
|
|
## **Goal**
|
|
|
|
|
|
|
|
Our goal is to implement a program that autonomously takes the robot to the top of the given track and back again as fast as possible, as described in [lesson plan 9](http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson8.dir/Lesson.html). The "as fast as possible" part is a secondary focus to succeeding in getting the robot to the top and back down.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## **Plan**
|
|
|
|
|
We plan to work on Thursday the 28th of April and on Thursday the 5th of May.
|
|
|
|
|
In the first session, we will discuss ideas and experiment with them, using the track and the videos provided in the lesson plan as inspiration. Based on an initial informal brainstorm, we will make a plan for the experiments.
|
|
|
|
|
|
We have decided not to make a clear division into roles, due to the rather ad hoc nature of our task. We will most likely have some sort of unspoken division of labor, but we want it to be flexible so that everyone can take part in all tasks.
|
|
|
|
|
|
## **Brainstorming**
|
|
|
|
|
|
We began by discussing how to make a trigger for sensing whether or not the robot is located in the green start field. A suggestion was to interpret green after having seen white as "goal". One implication of this is knowing whether or not the robot has successfully completed its run on the track - it could have just briefly left the start field and returned again without going up the track, in which case it should not stop (obviously, returning to the start field after entering the track implies going the wrong way on the track, which is an undesirable behavior). An additional point that was brought up was the fact that we do not want the robot to stop until it is entirely back in the green field. If the sensor is placed on the front of the robot, we will want it to continue forward for a bit after registering green.
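The "green after white" trigger described above can be sketched as a tiny state machine. The class and method names below are our own illustration (not code from the lesson), and the color classification is assumed to happen elsewhere:

```java
// Sketch of the "green after white means goal" trigger discussed above.
// Color classification from raw light values is assumed to exist already.
class GoalDetector {
    private boolean seenWhite = false;

    // Feed one color classification per sensor sample. Returns true once
    // green is seen after white, i.e. the robot is re-entering the start
    // field after having actually been out on the (white-edged) track.
    public boolean update(String color) {
        if (color.equals("white")) {
            seenWhite = true;
            return false;
        }
        return color.equals("green") && seenWhite;
    }
}
```

Note that this alone does not rule out the "briefly left the field and came back" case; that would require additionally confirming that the robot made it up the track.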
|
|
|
|
|
|
Next, we discussed how to follow the track and turn on the platforms. We discussed how to handle the Y-shaped track lines on the platform - assuming that we use a line-following behavior, how will the robot know which line to continue along? It was suggested that we simply keep to the right side of the line (if possible). However, when reaching the second platform, the robot would need to keep to the left side. Aside from that, the assumption that the robot will never accidentally drive past the line seems unrealistic.
|
|
|
|
|
|
The discussion regarding the Y-shaped lines led to a suggestion that rather than having the robot turn on the platform, the shape might be utilized to have the robot go down the "leg" of the Y and then drive backwards up to the next platform, in the hopes of saving time. The idea is illustrated in Figure 1. There was debate as to whether or not it was doable (or practical) to do this. We might try it out when we get further into the project.
|
|
|
|
|
It was further suggested that we could let the robot race as fast as possible up the ramp.
|
|
|
|
|
Continuing along the lines of going straight at high speed, we decided to investigate how fast we can make the robot go when using the LineFollower; i.e. we will attempt to minimize wasteful oscillations from side to side.
|
|
|
|
|
|
Finally, we briefly continued our discussion on how to perform the turns on the platforms. It was suggested that we use a pressure sensor (like the bumper used in lesson 7) to know when the robot had reached the edge of the platform after a turn. Another suggestion was to make use of the tachocount to make a full turn.
|
|
|
|
|
|
|
|
In Appendix A (Unconstrained idea generation), we have presented some of the less realistic ideas that were brought forth during our brainstorming session.
|
|
|
|
|
|
## Resulting plan
|
|
|
|
|
|
We decided to begin by testing the use of different sensors, with regards to following the track and making turns on the platforms. Later on, it might be interesting to focus on testing the efficiency of different ways of turning on the platforms.
|
|
|
|
|
|
We ended up with the following concrete plan for the first steps:
|
|
|
|
|
|
1. Run the LineFollower program from Lesson 1 on the track to see how it performs
|
|
|
|
|
|
2. Experiment with modifications of LineFollower
|
|
|
|
|
In order to use the LineFollower program, we rebuilt the robot so that the light sensor faces the ground.
|
|
|
|
|
*Figure 2: The robot rebuilt to use the light sensor for following a line on the ground.*
|
|
|
|
|
|
In order to have some clear goals for our experiments, we began by identifying issues with using the LineFollower program, in order to try and solve them individually before attempting to figure out a solution for the entire task - sort of a divide and conquer approach. This resulted in the following problems and motivations to solve them:
|
|
|
|
|
|
|
|
| Problem | Motivation to solve |
| --- | --- |
| Getting out of the start area: since the robot starts on an entirely green surface, it is hard for it to find the track. | We need to get out of the start zone to complete the track. |
| Following the track up the ramp. | We need to drive straight up the ramp to perform a turn on the platform, and to avoid driving off the track. |
| Detecting platforms: detecting the platform context is not straightforward using just LineFollower. | We want to turn on the platform, and therefore have to know when the robot is on one, to successfully perform the turn and reach the next part of the track. |
| Recognizing the top platform. | We need to know when the robot is at the top, as the turn is different from the others: the robot needs to turn 180° to go back down. |
|
|
|
|
|
|
|
|
|
|
*Table 1: Issues with the LineFollower program.*
|
|
|
|
|
|
|
|
We became aware that some of the identified issues express behaviors, while others express triggers.
|
|
|
|
|
|
|
|
#### Running and modifying
|
|
|
|
|
|
|
|
We performed an initial run with the LineFollower program as it was in Lesson 1. Following the line went fine when going up the first ramp, but then the robot lost track of the line and nearly toppled over the edge. Going down, the robot followed the line for a while before turning and driving into a wall. Later on in our experiments, we realized that the inexplicable turns were caused by the robot's rear support wheel.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The LineFollower program from Lesson 1, when attempting to correct the robot's course, turns the robot by only having one motor turned on. This was observable when we ran the program. We had already, in the very beginning of our experiments, discussed modifying the program so that the robot would always use both motors. The purpose of this modification would be to increase the forward motion along the desired path such that less time would be wasted in the turns.
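The modification can be sketched as follows. This is a minimal sketch of the steering logic only, assuming a single light sensor; the (20, 65) turn powers are the ones our modified program started with, and which steering direction maps to black depends on which side of the line is followed:

```java
// Sketch of the discussed modification: instead of correcting by stopping
// one motor entirely, both motors stay on at different powers, so the
// robot keeps some forward motion through every correction.
class BothMotorsFollower {
    static final int LOW = 20, HIGH = 65; // initial turn powers, later raised

    // Returns {leftPower, rightPower} given whether the sensor sees black.
    public static int[] powers(boolean seesBlack) {
        // Seeing black: steer one way; seeing white: steer the other.
        return seesBlack ? new int[] { HIGH, LOW } : new int[] { LOW, HIGH };
    }
}
```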
|
|
|
|
|
|
|
|
We decided to give it a try - a description of the outcome follows.
|
|
|
|
|
|
|
|
The robot still succeeded in following the line when going up the first ramp. However, it turned completely off course when going down. This seemed to be fixed by switching the robot’s wheels, which indicated to us that the issue was not due to bugs in the code or lighting issues.
|
|
|
|
|
|
|
|
The robot was still moving very slowly, so we changed the motor powers in the turns from (20, 65) to (80, 100). This modification caused the robot to race off the track, and inspired us to temporarily disable line following and see if we could get the robot to reach the first platform faster simply by placing it facing the correct direction and letting it go full speed ahead.
|
|
|
|
|
|
|
|
#### Experimenting with faster initial ramp climb
|
|
|
|
|
|
|
|
We wanted to find a faster method for climbing the first ramp than using the old LineFollower program. As described, we initially disabled line following and instead used the program FullDowney.java, which simply makes the robot drive forward at full speed. We did this to see if, by positioning it correctly, we could get the robot to go straight up and not drive off the ramp.
|
|
|
|
|
|
|
|
The robot performed very poorly, often making a sudden turn off the track. After taking a closer look, we ended up replacing the left motor, a slightly bent LEGO axle, and the support wheel. This seemed to improve the robot's driving slightly, but did not completely fix the issues with seemingly random turns. We gave up on trying to get the robot to go straight without sensor input and went on to experimenting with the gyro sensor.
|
|
|
|
|
|
### Experimenting with the gyro sensor
|
|
|
|
|
|
|
|
The next experiment was to see whether equipping the robot with a gyro sensor would enable it to detect the movement when driving onto a platform.
|
|
|
|
|
|
|
|
As in the other experiments, we started out by rebuilding the robot - this time using the gyro sensor. Initially, we intended to place the sensor at the center of the robot (directly above the wheels). However, we realized that there would be less motion on the center axis than at either the front or the back of the robot, resulting in steadier gyro readings - an unwanted property in our case, where we want to detect small changes. We therefore decided to mount the gyro at the front of the robot, as seen in Figure 3.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
*Figure 3: The robot rebuilt to use a gyro sensor for detecting a level change.*
|
|
|
|
|
|
|
|
To test whether the gyro sensor can be used to detect when the robot reaches a platform, we wrote a small program, GyroTest.java, that records and displays the minimum and maximum values seen so far while the robot drives forward at a steady pace (both motors run with a power of 80). We intended to log the gyro readings throughout each test, but ran into errors that we could not find an immediate solution for. Instead, we moved on without logs and used the displayed minimum and maximum values.
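The bookkeeping in GyroTest.java amounts to a running min/max. A minimal sketch of that part (the actual program additionally drives the motors and writes the values to the NXT display, which is omitted here):

```java
// Running minimum/maximum tracker, as used to inspect gyro readings.
class MinMaxTracker {
    private int min = Integer.MAX_VALUE;
    private int max = Integer.MIN_VALUE;

    // Feed one gyro reading per sample.
    public void add(int reading) {
        if (reading < min) min = reading;
        if (reading > max) max = reading;
    }

    public int min() { return min; }
    public int max() { return max; }
}
```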
|
|
|
|
|
|
|
|
We ran the GyroTest.java program twice. First, we started the program on the ramp just before the platform, letting the robot drive onto the platform while we read off the maximum and minimum gyro values. Second, we decided to let the robot gain some momentum before driving onto the platform, and therefore started it at the bottom of the ramp and let it drive all the way up onto the platform. Neither of these runs gave a result that we felt we could rely on for detecting that the robot had reached a platform and should start behaving differently. We ended up discarding the idea of using the gyro sensor to detect the platform, and moved on to building a robot that, with no regard for time consumption, climbed the ramp and returned to the floor.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Line following with two light sensors
|
|
|
|
|
|
|
|
As the robot was going way too slowly with the initial values, we decided to increase the motor power of the non-turning motor in the Line2Sensor program to 50 instead of 0. This made the robot turn off the ramp.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This reaction was probably caused by the robot not responding rapidly enough to losing the line: we actually saw the robot trying to correct its path a bit when driving off the line, but it ended up driving off the ramp either way. This reaction, however, gave us the idea of introducing a PID controller to our program, such that the robot would turn more rapidly towards the last direction in which it saw the line when deviating from its correct path.
|
|
|
|
|
|
|
|
A related suggestion was PID-inspired integral control, so that the most recent measurements are taken into account (also an optimization).
|
|
|
|
|
|
|
|
We decided, however, that the initial goal was to build a robot able to climb the ramp, with no concern for speed. We therefore noted the above optimization suggestions and moved on to experiment with using the gyro sensor to detect entry onto a platform.
|
|
|
|
|
|
|
|
### Making a basic up-down robot
|
|
|
|
|
|
|
|
At this point, we decided to stop experimenting and generating ideas for optimizations, and instead simply tried to implement a program that enabled the robot to go up and down the ramp with no special concern for time. This program could then later be optimized with the earlier findings to give a better resulting time.
|
|
|
|
|
|
|
|
We started out by rebuilding the robot so that it now had two light sensors instead of one (see Figure 4) and no gyro sensor. We also made it easy to change the distance between the two sensors, in case adjustment became necessary later on.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
*Figure 4: The robot rebuilt to use two light sensors for following a line on the ground.*
|
|
|
|
|
|
|
|
After rebuilding the robot, we began implementing the **InitialClimber.java** program. The program started out as a copy of **LineFollower.java**, and the idea was to use one of the two sensors to follow the line on the ramp, and the other to detect and initiate a turn at the big black line at the end of the platforms. The general idea can be seen in figure 5.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
*Figure 5: An illustration of the robot driving forward onto the platform and backwards off it.*
|
|
|
|
|
|
|
|
The extension of the code consisted of an if-statement checking whether both sensors were reading black. If so, the robot should turn a little less than 180 degrees and thereby drive towards the next ramp, where the hope was that it would rediscover the black line. We purposefully set the motor power very low, in order to more easily observe errors. The turn was implemented by rotating the robot while letting the thread sleep for a certain number of milliseconds - the sleep time was found by performing test runs with different values.
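The both-black check can be sketched as below. The threshold value is an illustrative placeholder (in the real program it comes from calibration), and the timed turn itself is only indicated in comments, since it depends on the leJOS motor API:

```java
// Sketch of the InitialClimber.java extension: trigger a timed turn when
// both light sensors read black. BLACK_THRESHOLD is a placeholder value.
class TurnTrigger {
    static final int BLACK_THRESHOLD = 40; // illustrative light value

    public static boolean bothBlack(int leftReading, int rightReading) {
        return leftReading < BLACK_THRESHOLD && rightReading < BLACK_THRESHOLD;
    }

    // On the robot, a positive trigger leads to roughly:
    //   leftMotor.forward(); rightMotor.backward();
    //   Thread.sleep(turnMillis); // tuned by test runs, as described above
}
```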
|
|
|
|
|
|
|
|
Running this program resulted in the robot driving exceptionally slowly up the ramp. As the robot drove onto the platform it kept following the line, but instead of driving all the way to the end of the platform it started turning in the middle, where the three black lines of the Y-shape meet. This can be seen in the accompanying video.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We reasoned that the premature turn in the middle of the platform was caused by the two sensors coincidentally being positioned above their own "arm" of the Y and thus both reading black at the same time, thereby triggering the turn. Instead of moving the sensors further away from each other to stop them from seeing black in the middle of the platform, we decided to decrease the turn, as turning in the center instead of at the edge would decrease the total time spent on the track and thereby improve our solution. We decreased the turn time and tried again.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
As seen in the accompanying video, we succeeded in getting the robot to drive onto the next ramp by adjusting the turn time. However, as the robot reached the second platform and read black on both sensors, it began the turning behavior of the previous platform: it turned the wrong way and started driving down the ramp it had just climbed. This was as expected, as we had only implemented the turn for the first ramp. We then introduced a boolean variable, **turnRight**, initially set to true. The turn behavior (left or right) would be selected according to this value, which would be negated upon entry to a platform. Running this program, we observed the robot behave exactly as before the introduction of **turnRight**.
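The **turnRight** bookkeeping, and the suspected double-flip failure mode discussed below, can be sketched as follows (our own illustration of the logic, not the actual InitialClimber code):

```java
// Sketch of the turnRight bookkeeping: each both-black event selects the
// turn direction and then flips the variable for the next platform. If a
// spurious both-black event occurs *during* a turn, the variable flips
// twice and the next platform gets the wrong direction.
class TurnDirection {
    private boolean turnRight = true;

    // Called on every both-black event; returns the direction to turn now.
    public boolean nextTurnIsRight() {
        boolean dir = turnRight;
        turnRight = !turnRight;
        return dir;
    }
}
```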
|
|
|
|
|
|
|
|
We discussed whether the angle was too pointed when the robot was climbing on the left side of the line on the second platform, but concluded that this still wouldn't make it turn to the right, as it should turn left according to our new boolean variable, **turnRight**. We concluded that something was wrong with how this variable was updated and decided to display its value on the robot's screen. Running the program again, we now observed that **turnRight** was *true* as the robot went up the second ramp. We were never sure why this was happening, but speculated that it might be because the sensors were both reading black at some point during the right turn on the first platform, which would flip the variable one time too many and result in the robot making a right turn on the second platform when it should in fact turn left. Another explanation might be that the robot performed the first observed turn simply as part of its correction in order to follow the line, and not because both sensors read black, in which case **turnRight** would not be flipped on the first platform.
|
|
|
|
|
|
|
|
After closely observing several runs of the same program, we concluded that the robot was not behaving in exactly the same way each time, as its entrance onto the platform differed between runs. We therefore decided to improve the line-following behavior so that the robot follows the line more closely - this way, the platform entrances should be more similar, as the robot's entrance angle would not diverge as much. We figured that the best way to obtain smoother line following would be to use PID control.
|
|
|
|
|
|
|
|
#### Introducing PID control
|
|
|
|
|
|
|
|
We made the class **_PIDClimber.java_** which uses the same kind of correction as **_PIDpolice.java_** in Lesson 4, simply multiplying the deviation from the setpoint by a constant. In **_PIDClimber_**, the robot interprets black as a sensor measuring less than the black/white threshold (used as the setpoint). This resulted in the robot triggering a turn when its front was being pushed over the edge of the platform, as the sensors were raised too far from the platform to register the white surface. Instead, we tried comparing with the light reading for black (plus a small margin), which successfully got rid of the undesired triggering of turns. The robot still missed a few turns once in a while, most likely due to the same issue that was described in the previous section with the line following behavior performing the turn on its own and missing the double black reading. To remedy this, we tried lowering the speed of the robot. This reduced the number of occurrences of this kind of error.
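The core of this correction, and the revised black check, can be sketched as below. The gain, setpoint and margin values are illustrative placeholders, not the calibrated values from **_PIDClimber_**:

```java
// Sketch of the PIDClimber-style correction: the deviation from the
// black/white threshold (the setpoint) is multiplied by a constant.
class ProportionalCorrection {
    static final int SETPOINT = 50; // placeholder black/white threshold
    static final int GAIN = 2;      // placeholder gain

    // Positive result steers one way, negative the other.
    public static int correction(int lightReading) {
        return GAIN * (lightReading - SETPOINT);
    }

    // Revised turn trigger: compare against the calibrated black value
    // plus a small margin instead of the midpoint threshold, so that
    // sensors lifted over the platform edge do not trigger a turn.
    public static boolean isBlack(int reading, int blackValue, int margin) {
        return reading < blackValue + margin;
    }
}
```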
|
|
|
|
|
|
|
|
The observations described above were not filmed, as we felt that our video documentation was turning into a giant blob of videos, and the improvements seemed minor.
|
|
|
|
|
|
|
|
##### Using two sensors to follow the line
|
|
|
|
|
|
|
|
We experimented with placing the two light sensors on either side of the black line, so that each sensor follows its own edge of the tape.
|
|
|
|
|
|
### Modifying the turn in InitialClimber

We modified **InitialClimber.java** to make a larger turn, by increasing the turn time from 1000 ms to 2000 ms. We considered whether this should be changed to use the tacho counter instead.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We divided the work amongst us so that we could try out a larger number of approaches concurrently. While discussing issues and how to handle them with each other, we each worked on separate implementations. We also regularly watched when another group member ran some code on the robot. As group members, we mainly served as rubber ducks for the other members’ verbalization of current problems being tackled, although we did also provide concrete input to each others’ solutions.
|
|
|
|
|
|
|
|
As described previously, Nicolai and Emil each worked on their own implementation of a PID-controlled line follower. Below, we present a description of both of these attempts, as well as a discussion of the advantages and disadvantages of the two compared to each other.
|
|
|
|
|
|
|
|
### PID LineFollower with high precision
|
|
|
|
|
|
|
|
In an attempt to improve our line follower, we wanted to make several changes. First off, we calibrated the two light sensors with separate black/white light values using the BlackWhiteSensor.java program, to guard against potential small differences in the values read by the two sensors. We then implemented a basic controller utilizing both light sensors by calculating an *error* value for each light sensor, *leftError* and *rightError* (the black/white threshold subtracted from the light reading), multiplying *leftError* by -1, and then adding it to *rightError* for a combined *turn* value. Inverting one sensor's error ensured that the two sensors worked together to correct towards the center of the black line, as they were placed so that each sensor attempted to stay on one of the black tape's borders (the left sensor on the left border, and the right sensor on the right border). The combined *turn* value was then used by subtracting it from the robot's left motor power and adding it to the right motor power.
|
|
|
|
|
|
|
|
Running this program with *targetPower* = 70, the robot followed the line for the most part, but reacted far too violently to the line, causing big oscillations. It occurred to us that even though we hadn't implemented explicit P, I or D terms in the controller, the current program was basically a P controller with P = 2, since we were getting error values from two sensors and adding them together, doubling the effect on motor power. We solved this by simply dividing *leftError + rightError* by 2 before adding it to the motor power. This resulted in a robot that follows the line decently, although still oscillating a fair bit. We reduced *targetPower* to 60, resulting in a *very* slow robot, but one that follows the line more steadily. As speed wasn't our top priority at the moment, we decided this was fine for an initial attempt at getting all the way to the top and back down.
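The controller after this fix can be sketched as follows. The threshold is a placeholder (in practice each sensor has its own calibrated value, as described above):

```java
// Sketch of the two-sensor controller after the halving fix:
// turn = (-leftError + rightError) / 2, subtracted from the left motor
// power and added to the right motor power.
class TwoSensorController {
    static final int THRESHOLD = 50;    // placeholder black/white threshold
    static final int TARGET_POWER = 60; // the reduced target power

    public static int turn(int leftReading, int rightReading) {
        int leftError = leftReading - THRESHOLD;
        int rightError = rightReading - THRESHOLD;
        return (-leftError + rightError) / 2; // halved: avoids implicit P = 2
    }

    // Returns {leftPower, rightPower}.
    public static int[] powers(int leftReading, int rightReading) {
        int t = turn(leftReading, rightReading);
        return new int[] { TARGET_POWER - t, TARGET_POWER + t };
    }
}
```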
|
|
|
|
|
|
|
|
The robot now followed the line up until the end of the first ramp, but had trouble staying on the line during the sharp turn. Making use of the fact that we have two light sensors, we started storing a variable called *lastBlack*. This variable is a string set to "left" or "right" whenever the left or right light sensor reads a black light value. Whenever both light sensors read white (meaning we have driven off the black line), the robot starts steering right or left, according to which sensor most recently saw black, to get back on track. This allowed the robot to very consistently make it past the first Y-section, to the back of the first platform.
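The *lastBlack* recovery can be sketched as a small step function (our own illustration of the logic; the return values stand in for the actual steering commands):

```java
// Sketch of the lastBlack recovery: when both sensors read white, steer
// towards the side whose sensor most recently saw black.
class LastBlackRecovery {
    private String lastBlack = "right"; // can be preset, e.g. after a turn

    // Returns "left"/"right" for recovery steering, or "follow" to keep
    // following the line normally.
    public String step(boolean leftSeesBlack, boolean rightSeesBlack) {
        if (leftSeesBlack) lastBlack = "left";
        if (rightSeesBlack) lastBlack = "right";
        if (!leftSeesBlack && !rightSeesBlack) return lastBlack;
        return "follow";
    }
}
```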
|
|
|
|
|
|
|
|
The next problem was turning around on the platform and following the line up the next ramp. Once the robot had seen black on both sensors a couple of program loops in a row (3-4), we made it turn right (setting the left and right motor powers to 70 and -70, respectively) for a defined number of milliseconds, and then drive forward for another set amount of time, in an attempt to connect with the black line on the next ramp. In order to give ourselves some room for error in *shooting* for the next black line, we didn't attempt to hit it dead on; instead, we just wanted the robot to end up facing roughly the right direction, on the left side of the black line. We could then set our *lastBlack* variable to *right*, which would cause the robot to search on its right side for the black line and, once found, follow it up the next ramp. We could have used the tacho counter for a more reliable turn, but in this iteration we felt the hardcoded time windows worked well enough.
|
|
|
|
|
|
|
|
We used the previously mentioned *turnRight* boolean to switch between the two turn behaviors when hitting the second platform. Additionally, after the robot's second turn (the first left turn), we set a boolean *nextStopIsTop* to true. When the robot then saw black on both sensors a few times in a row, it would first check whether *nextStopIsTop* was true, and if so, drive forward for a set amount of time to get completely up on the top platform, turn around 180 degrees, and drive forward for a short time to get down onto the ramp, before continuing its line following.
|
|
|
|
|
|
|
|
The program as a whole worked some of the time, but it was especially difficult to get the robot to drive down the ramps again. The black tape's Y-sections were different on the two platforms, which meant we had to do extensive tweaking of the variables deciding how long the robot should turn when hitting the back of the platforms, as the numbers would vary depending on which platform the robot was turning on and whether it was going up or down. Despite a lot of calibration and tweaking of our variables, we never got the robot to go all the way up and down; it would usually crash somewhere along the way, most often on the way down.
|
|
|
|
|
|
|
|
##### Putting the P, I and D in PID.
|
|
|
|
|
|
|
|
One recurring problem when running our simple `GodBot.java` program was that it didn't follow the line very precisely, which caused inconsistencies in the angle at which the robot would enter the Y-section of the black tape trail, or the big black line at the end of the platforms. This inconsistency in entrance angle made it difficult to get the robot properly out of the Y-section and onto the black line up the next ramp, as we couldn't make any solid assumptions about where exactly the robot would be when seeing the thick black ramp line. On top of that, it didn't always follow the line correctly past the Y-section to begin with.
|
|
|
|
|
|
|
|
In an attempt to follow the black line more precisely for more reliable results, we decided to implement a proper PID line follower. Loosely based on our `Sejway.java` code from our previous PID balancing robot, `PIDGod.java` was made.
|
|
|
|
|
|
|
|
After experimenting for a long time with the P, I and D variables, based on past experience, we arrived at the values `targetPower` = 60, P = 5, I = 0.4 and D = 8. We dampened the integral error by multiplying it by ⅔ on every iteration to prevent it from accumulating indefinitely. This PID line follower followed the black line incredibly smoothly, even around the sharp corners of the tape at the top of the ramps, and as such gave us a much stronger foundation for handling our turns.
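The damped-integral PID update can be sketched as follows. This is a minimal illustration of the scheme described above, not the actual `PIDGod.java` code; the error sign convention and method names are our assumptions:

```java
// Sketch of one PID iteration with the ⅔ integral damping described above.
public class PidStep {
    public static final double KP = 5, KI = 0.4, KD = 8;

    private double integral = 0;
    private double lastError = 0;

    // error: deviation from the line, e.g. (leftLight - rightLight).
    // Returns the correction to apply to the motor powers.
    public double update(double error) {
        integral = integral * 2.0 / 3.0 + error; // damp old error by ⅔
        double derivative = error - lastError;
        lastError = error;
        return KP * error + KI * integral + KD * derivative;
    }
}
```

Because the old integral is multiplied by ⅔ each iteration, a constant error converges to a bounded integral term (a factor of 3 times the error) instead of growing without limit.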
|
|
|
|
|
|
|
|
However, a new problem arose. The robot would go crazy when hitting the Y-section intersections, due to the sudden unexpected values read by both sensors, since two black lines go off in separate directions there. The result was a completely unpredictable robot flying in all directions when hitting the intersection, due to its powerful attempts at correcting the sudden, large error values being read.
|
|
|
|
|
|
|
|
To solve this problem, we decided to use it to our advantage. The wild behavior exhibited by the robot clearly indicated the sensors reading values they otherwise never would. In this case, since the robot followed the line very reliably centered, it would always enter the Y-section in a way that both sensors were reading black (or very close to black). We simply did a check for both sensors reading black, and when this happened, we wanted to turn. Now that we could check for a turn on the exact Y intersection, finding the next line became much easier. We just had the robot drive forward a very short distance, and then rotate right until the right sensor read black (it hit the black line going up the next ramp), and then continue its line following up the ramp. The same method was used for the second platform’s turn, although we had to adjust the values for what was acceptable to be considered ‘black’ on both sensor for it to trigger the turn, as the intersection was slightly differently skewed on the second ramp, causing a different pattern of sensor values compared to the first platform. This implementation worked extremely well, and the robot would now very reliably drive all the way to the top of the track, while now also cutting off some time by not having to drive all the way to the back of the platforms. As for turning around the robot on the top platform, we switched to using the Tacho counter to reliably turn exactly 180 degrees. We experimented in a separate program for a while to figure out which values equated to a 180 degree turn, and ended up with a tacho value of 385 working well.
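The empirically found tacho value of 385 has a geometric starting point: for an in-place turn, each wheel must rotate through `robotDegrees * trackWidth / wheelDiameter` degrees. The sketch below uses an assumed track width of 112 mm and wheel diameter of 56 mm (typical NXT values, not measurements of our robot); it predicts 360 wheel degrees for a 180-degree turn, so 385 is plausibly the geometric figure plus some wheel slip:

```java
// Sketch: wheel rotation (in degrees, per wheel, opposite directions)
// needed to turn the robot in place by a given angle.
public class TurnMath {
    public static double wheelDegreesForTurn(double robotDeg,
                                             double trackWidthMm,
                                             double wheelDiameterMm) {
        // Each wheel drives an arc of (robotDeg/360) * pi * trackWidth,
        // and one wheel revolution covers pi * wheelDiameter.
        return robotDeg * trackWidthMm / wheelDiameterMm;
    }
}
```

Measuring the two lengths and then calibrating around the predicted value is usually faster than searching for the tacho target blindly.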
|
|
|
|
|
|
|
|
The turns mostly worked fine on the way down as well, although the robot did not follow the line as reliably down the ramps as it did up. The reduced power needed because of gravity meant the robot would at times overcorrect, and we tried to remedy this by changing our PID variables once the top of the track was reached. After much experimentation, we arrived at the values P = 5, I = 0.5, D = 15. Additionally, we started limiting the motor power by cutting the final power variable down to 80 if it was above 80. This eliminated the situations where the robot would adjust its power too strongly and completely lose control.
|
|
|
|
|
|
|
|
Now we hit a new obstacle. When hitting the second platform on the way down, our light sensors scraped the floor. This meant they weren't reading light values correctly and would lose track of the black line. As a result, we had to rebuild the robot to place the light sensors higher up. When we placed the robot at the top platform, it was now able to drive successfully all the way down to the green zone. However, we were no longer able to get the robot to act appropriately when driving up the track. The increased height of the light sensors meant that, as the robot was driving up a ramp and onto the platform, the sensors would be raised so far above the ground that they weren't reading any reflected light. As a result, they read 'black', causing the robot to think it was at the Y-section and should turn.
|
|
|
|
|
|
|
|
We were at an impasse trying to solve this problem. Lowering the light value that we accepted as 'black' on both sensors meant that the robot no longer recognized the Y-sections either. We briefly attempted to simply accept the reading when driving onto a platform as 'black' and hard-coded a short drive onto the platform followed by finding the next black line. The problem then became that no value for what counted as 'black' on both sensors resulted in reliable behavior: either the robot wouldn't react to one or more of the four 'black' triggers (the two platforms and the two Y-sections), both on the way up and on the way down, or it would trigger at times when it wasn't supposed to at all.
|
|
|
|
|
|
|
|
We spent a long time fiddling with these triggers, but were never able to get the robot to behave reliably at all turns, both on the way up and down.
|
|
|
|
|
|
|
|
### PID LineFollower with tacho counter in turns
|
|
|
|
|
|
|
|
In an attempt to slow down as little as possible while following a line, we tried to build a robot that takes advantage of having two light sensors whose relative positions are known to us. Having two light sensors makes it possible to know whether we are to the left or to the right of the line we're following. Partially knowing our position (or at least which side of the line we're on) means that we can make good use of the integral part of a PID controller, since that enables us to drive smoothly in a (sine-like) curve instead of the erratic behavior we get from reacting every time we sense black. Currently it only uses the integral part of PID, since that is where we thought the robot would benefit the most. The robot remembers which sensor last saw black and decreases the power of the corresponding motor exponentially until it sees black again. This way we can keep both motors running with a lot of power continuously.

A very naive implementation worked pretty well, and it didn't seem to benefit much from more advanced techniques, though that might have something to do with the light in the room, since the readings became more unstable. An improvement we tried was to note how many consecutive readings of black a sensor made and decrease the turn rate proportionally with the number of readings, since we want the turns to be as smooth as possible when we are close to following the direction of the line. This change hasn't improved anything so far, but some calibration might help. This robot reached the second plateau a couple of times without any turning mechanism, because it was lucky and avoided the black tape 'cross'. Maybe it can be improved to deterministically avoid the cross, but we haven't looked into that yet. Another way to handle the plateaus could be to determine that the robot has reached one and then do a hard-coded turn each time, e.g. using the tachometer.
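The "remember which sensor last saw black, decay that motor's power exponentially" idea can be sketched as follows. The full-power value and decay factor are illustrative assumptions, not our calibrated numbers:

```java
// Sketch of the two-sensor exponential-decay follower described above.
public class ExpFollower {
    static final double FULL_POWER = 90; // assumed starting power
    static final double DECAY = 0.8;     // assumed per-loop decay factor

    private double leftPower = FULL_POWER, rightPower = FULL_POWER;
    private boolean lastBlackWasLeft = false;

    // Call once per control loop with the two sensor classifications.
    public void update(boolean leftBlack, boolean rightBlack) {
        if (leftBlack)  lastBlackWasLeft = true;
        if (rightBlack) lastBlackWasLeft = false;
        if (leftBlack || rightBlack) {
            // On the line again: restore full power on both motors.
            leftPower = FULL_POWER;
            rightPower = FULL_POWER;
        } else if (lastBlackWasLeft) {
            leftPower *= DECAY;  // steer back toward the side that saw black
        } else {
            rightPower *= DECAY;
        }
    }

    public double left()  { return leftPower; }
    public double right() { return rightPower; }
}
```

Because the inner power decays geometrically rather than dropping to a fixed turn value, the correction starts gentle and only sharpens the longer the robot stays off the line, which is what produces the smooth curve back.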
|
|
|
|
|
|
|
|
### Introducing DifferentialPilot
|
|
|
|
|
|
|
|
Since we had so much trouble getting the robot to turn correctly on the platform by following a line, we decided to also try a different approach using the leJOS class `DifferentialPilot`. At first, we made an implementation, `CantTouchThis`, that did not attempt to follow the line at all, but simply drove straight ahead until reading black on both sensors (i.e. reaching the end of a platform), at which point it performed a turn using the following sequence of commands:
|
|
|
|
|
|
|
|
1. stop (`pilot.stop();`)
|
|
|
|
|
|
|
|
2. back up 10 cm (`pilot.travel(-100);`)
|
|
|
|
|
|
|
|
3. turn 90 degrees (`pilot.rotate(angle);` where `angle` is originally set to 90 and then multiplied by -1 at each platform in order to switch between turning right and left)
|
|
|
|
|
|
|
|
4. go forward 40 cm (`pilot.travel(400);`)
|
|
|
|
|
|
|
|
5. turn 90 degrees
|
|
|
|
|
|
|
|
6. go straight ahead until next black platform end (`pilot.forward();`)
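The sequence above, including the sign flip on `angle`, can be sketched against a minimal pilot interface. In the real program these calls go to `lejos.robotics.navigation.DifferentialPilot`; the interface and method below are our illustration:

```java
// Sketch of one CantTouchThis-style platform turn. The Pilot interface
// mirrors the four DifferentialPilot calls used by the sequence.
public class PlatformTurn {
    public interface Pilot {
        void stop();
        void travel(double mm);
        void rotate(double deg);
        void forward();
    }

    // Performs one platform turn and returns the angle to use at the
    // next platform (sign-flipped, so right and left turns alternate).
    public static int turn(Pilot p, int angle) {
        p.stop();
        p.travel(-100);  // back up 10 cm
        p.rotate(angle);
        p.travel(400);   // cross the platform, 40 cm
        p.rotate(angle);
        p.forward();     // continue until the next black platform end
        return -angle;
    }
}
```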
|
|
|
|
|
|
|
|
The turn on the top platform was performed similarly, and on the way down the program simply made the robot turn to the right and topple over the edge onto the starting platform.
|
|
|
|
|
|
|
|
In most runs, the robot succeeded in passing the second platform onto the third ramp, but would then run into the wall or off the track because it had drifted too far off course. The turn on the top platform, as well as the final turn, were therefore tested separately by commenting out sections of the code and placing the robot appropriately on the track.
|
|
|
|
|
|
|
|
[TODO: Video from Monday the 9th showing a run of CantTouchThis]
|
|
|
|
|
|
|
|
We had foreseen the issues with keeping the robot going in the right direction and on the track: expecting to be able to place the robot perfectly aligned with the track, and to have it keep that direction later on, is unrealistic in a real-world environment. Aside from human motor control and eye-measuring skills not being precise enough, irregularities in the surface and the tires of the robot, as well as imbalances in the robot's construction, all introduce drift into its course.
|
|
|
|
|
|
|
|
The robot may have had trouble getting safely up the ramps using this approach, but it was able to clear the platforms a lot faster compared to line following, so we decided to try and combine the two approaches.
|
|
|
|
|
|
|
|
#### Combining line-following ramp climbing with piloted platform turns
|
|
|
|
|
|
|
|
Initially, we included code from `GodBot` in a copy of `CantTouchThis`, `LiberationBot`, to drive the robot up the ramp instead of using `pilot.forward()`. The [API for DifferentialPilot](http://www.lejos.org/nxt/nxj/api/lejos/robotics/navigation/DifferentialPilot.html) states that the results of other objects making calls to the motors while DifferentialPilot is in use are unpredictable. We had trouble figuring out how to separate line-following control from the differential pilot's control so that we could switch between them. Our first attempt was to instantiate the DifferentialPilot within the if-statement that captures the case where the robot reads black on both sensors (TODOQ: maybe a code example), so that it would only run in this isolated case and would be terminated upon exit from the if-statement's scope. The robot had no trouble getting up the first ramp using line following and individual control of the motors, and it performed the turn on the first platform - but after this, when exiting the if-statement, the robot started randomly turning and racing around. We spent several hours trying different approaches to getting the differential pilot to work along with the GodBot line following, but did not succeed.
|
|
|
|
|
|
|
|
##### DifferentialPilot line following (unsuccessful)
|
|
|
|
|
|
|
|
Deciding to take a break from the combined approach, we resorted to implementing line following using the DifferentialPilot's arc-method, in the class `SimpleLibBot.java`. It proved difficult to properly steer the robot according to divergences in the black/white readings of the sensors. Though this was most likely due to a limited overview of how to utilize our error measurements in the arguments to `arc()`, and/or to increasing tiredness and frustration levels, we decided to give up on creating a satisfactory implementation. Had we continued, a PID implementation of DifferentialPilot line following would most likely have worked far better.
|
|
|
|
|
|
|
|
[TODOQ (for Ida): new attempt at the combination using an interface etc.]
|
|
|
|
|
|
|
|
##### Using a bumper
|
|
|
|
|
|
|
|
An implementation similar to `CantTouchThis.java` that used a touch sensor to detect when the robot hit the wall by the track, *and then* made the second 90-degree turn, was also written (`FRIse.java`), but was never tested. To test it, we would have had to refit the robot with a touch sensor, preferably with a bumper to ensure that a collision with the wall would be detected no matter the angle of the robot. Since the difference from CantTouchThis seemed minor, and we figured that this version would take longer to complete the track due to having to wait until hitting the wall and then backing up, we decided to postpone refitting with a touch sensor until we might see an actual gain from using one. We never got to this point. A benefit we might have seen would be the possibility of letting the robot use the bumper to align with the wall, by waiting a little before stopping upon impact, thus enabling it to enter the next ramp perpendicularly after making the 90-degree turn following the collision.
|
|
|
|
|
|
|
|
[TODOQ (for Ida): rename CantTouchThis, FRIse and LiberationBot]
|
|
|
|
|
|
|
|
### Other notes (no fixed place in the report yet)
|
|
|
|
|
|
NOTE, write about: it would probably be better to use behaviors, handled so that one behavior runs to completion before a lower-priority behavior can take over (right now we run on if-statements and therefore have to sleep while turning 180 degrees, so that the turn does not get messed up by reading white along the way)
|
|
|
|
|
|
|
|
## *Video 1: TODO*
|
|
Idea for going even faster: gearing
|
|
|
|
|
|
|
|
Idea for Thursday the 5th of May: use the ultrasonic sensor to measure the distance to the platform, and thereby detect whether the robot is about to go over a platform edge.
|
|
|
|
|
|
|
|
Possibly use the tacho counter for turning.
|
|
|
|
|
|
#### **TODO SUBSUBHEADING**
|
|
Random notes from Thursday the 5th of May:
|
|
|
|
|
|
|
|
Switched back to GodCar (after giving up on combining line following with DifferentialPilot, ed.), attempting to replace the hard-coded millisecond turns with tacho counter turns for more reliability.
|
|
|
|
|
|
|
|
Attempting to implement a PID controller with I and D parts to follow the line more smoothly, for more consistent entry angles onto the black line.
|
|
|
|
|
|
## **Conclusion**
|
|
|
|
|
|
|
|
…
|
|
TODO: Overall success
|
|
|
|
|
|
|
|
We ended up having to work for more than the two days originally planned; we worked a total of three days on getting the robot to climb and descend the track. We spent a total of TODO hours, spread over 7 hours on the 28th of April, 9 hours on the 5th of May and TODO hours on Monday the 9th of May.
|
|
|
|
|
|
|
|
TODOQ: Conclusion on line following and PID - Emil and Nicolai
|
|
|
|
|
|
|
|
Our attempt to use the **DifferentialPilot** class was illustrative of the inconsistencies that the real world brings into robot programming; although **DifferentialPilot** is very effective at controlling the robot’s movements precisely, it is not enough to make a functioning hill-climbing robot as external factors bring the robot off a perfectly straight course.
|
|
|
|
|
|
|
|
[TODOQ: line following with DifferentialPilot - Ida and Camilla]
|
|
|
|
|
|
|
|
Using **DifferentialPilot** was different from the other approaches we have tried during the course, as **DifferentialPilot** provides methods that perform calculations based on the tacho count and physical parameters (i.e. wheel diameter and the distance between the wheels). These calculations are far more precise than the ones we have attempted to do using light measurements and time (e.g. `sleep(1000);`), and even if we had used the tacho count ourselves, we would have had to spend a long time working out how to turn e.g. 90 degrees given our robot's construction. Thus, **DifferentialPilot** relieves us of some basic but important calculations.
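As an example of the kind of calculation DifferentialPilot does internally: converting a requested travel distance (as in `pilot.travel(400);`) into wheel rotation requires only the wheel diameter it is constructed with. A sketch, assuming an example wheel diameter of 56 mm (not a measurement of our robot):

```java
// Sketch: wheel rotation in degrees needed to drive a straight distance,
// the conversion behind DifferentialPilot's travel().
public class TravelMath {
    public static double wheelDegreesForDistance(double distanceMm,
                                                 double wheelDiameterMm) {
        // One wheel revolution covers pi * wheelDiameter millimeters.
        return distanceMm / (Math.PI * wheelDiameterMm) * 360.0;
    }
}
```

Doing this once per command, from measured physical parameters, is exactly the precision advantage over our timed `sleep(1000);`-style movement.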
|
|
|
|
|
|
|
|
Lack of (video) documentation has been a recurring theme in this report - and a mistake in our experimental work. We should have documented our experiments better, a lesson we have learned while writing the report. To keep our video documentation manageable, we should perhaps consider re-introducing the use of title post-its, as in Lesson 5 (this was actually suggested at the first lab session of this lesson, but for some reason we elected not to do it).
|
|
|
|
|
|
|
|
TODOQ: Overall conclusion
|
|
|
|
|
|
## **References**
|
|
|
|
|
|
|
|
|
|
TODO
|
|
|
|
|
|
[1] Rodney Brooks, [A robust layered control system for a mobile robot](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1087032), IEEE Journal of Robotics and Automation, RA-2(1):14-23, 1986
|
|
|
|
|
|
|
|
[2] Jones, Flynn, and Seiger, ["Mobile Robots, Inspiration to Implementation"](http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson7.dir/11128_Mobile_Robots_Inspiration_to_Implementation-Flynn_and_Jones.pdf), Second Edition, 1999.
|
|
|
|
|
|
|
|
[3] Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Prentice Hall, 2001.
|
|
|
|
|
|
|
|
|
|
## **Appendix A: Unconstrained idea generation**
|
|
|
|
|
|
TODO: Write about ideas from the "dumb" idea generation (comment on it further up in the report)
|
|
|
|
|
|
|
|
|
|
In the experiments with the faster initial ramp climb, where we had the robot simply race up the track, it was suggested that we give the robot "arms" to allow it to hold onto the sides of the ramp while going up. This way, the robot would be prevented from running off the track. However, we foresaw an abundance of issues with raising the arms in time before reaching the platform. The idea is illustrated in Figure TODO-x.
|
|
|
|
|
|
|
|
![image alt text](image_0.png)![image alt text](image_1.png)
|
|
|
|
|
|
|
|
*Figure TODO-x: The robot (Frej) with arms extending below the track to keep Frej in place.*
|
|
|
|
|
|
|
|
Another idea along the same lines involved extendable arms that would reach for the second platform and pull the robot up from the starting area. This led to the even more unrealistic suggestion of letting the robot throw a grappling hook and hoist itself to the second platform.
|
|
|
|
|
|
|
|
Regarding turns on the platforms, it was suggested to have the robot lower a cane and somehow lean on it in order to turn. The idea of having the robot respond to shouts in order to make turns or stop was also brought forth. This latter solution, as it turned out, is not allowed, as the robot must be completely autonomous.
|
|
|
|
|
|
|
|
Regarding the temptation of letting the robot drop from the second platform to the start area upon return from the top, it was suggested that we create some sort of parachute for it, so as not to destroy the robot on impact.
|
|
|
|
|