In order to use the LineFollower program, we rebuilt the robot so that the light sensor pointed towards the ground (see Figure 2).
![Robot rebuilt for line following](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week1011/img/[TODO])
*Figure 2: The robot rebuilt to use the light sensor for following a line on the ground.*
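
For reference, this style of line following amounts to a simple bang-bang controller on a single downward-pointing light sensor. A minimal sketch in leJOS NXJ, assuming this style of control - the port, threshold, and motor assignments are illustrative and not taken from the actual **_LineFollower.java_**:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

// Minimal single-sensor follower: zigzag along the edge of the line.
// Port, threshold, and motor assignments are illustrative assumptions.
public class LineFollowerSketch {
    public static void main(String[] args) {
        LightSensor light = new LightSensor(SensorPort.S3); // assumed port
        final int THRESHOLD = 45; // somewhere between typical black and white readings
        Motor.B.setSpeed(200);
        Motor.C.setSpeed(200);

        while (true) {
            if (light.readValue() < THRESHOLD) {
                Motor.B.forward(); // reading black: turn away from the line
                Motor.C.stop();
            } else {
                Motor.B.stop();    // reading white: turn back towards the line
                Motor.C.forward();
            }
        }
    }
}
```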
As in the other experiments, we started out by rebuilding the robot - this time using the gyro sensor. Initially we intended to place the sensor at the center of the robot (directly above the wheels). However, we realized that there would be less motion at the center axis than at either the front or the back of the robot, resulting in steadier gyro readings - an unwanted property in our case, since we want to detect small changes. We therefore decided to mount the gyro at the front of the robot, as seen in Figure 3.
![Robot with gyro sensor](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week1011/img/[TODO])
*Figure 3: The robot rebuilt to use a gyro sensor for detecting a level change.*
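
A minimal sketch of the kind of level-change detection we had in mind, assuming the leJOS NXJ **_GyroSensor_** class from **_lejos.nxt.addon_**; the port and threshold are illustrative and would have to be tuned on the track:

```java
import lejos.nxt.SensorPort;
import lejos.nxt.Sound;
import lejos.nxt.addon.GyroSensor;

// Detect a level change (ramp to platform) as a spike in angular
// velocity around the pitch axis. Port and threshold are assumptions.
public class GyroLevelChangeSketch {
    public static void main(String[] args) throws InterruptedException {
        GyroSensor gyro = new GyroSensor(SensorPort.S2); // assumed port
        gyro.recalibrateOffset(); // keep the robot still during calibration

        final int THRESHOLD = 20; // deg/s; to be found experimentally
        while (true) {
            if (Math.abs(gyro.getAngularVelocity()) > THRESHOLD) {
                Sound.beep(); // the robot just tipped onto or off a ramp
            }
            Thread.sleep(10);
        }
    }
}
```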
We started out by rebuilding the robot such that it now had two light sensors instead of one (see Figure 4) and no gyro sensor. We also made the distance between the two sensors easy to adjust, should that prove necessary later on.
![Robot rebuilt to use two light sensors](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week1011/img/[TODO])
*Figure 4: The robot rebuilt to use two light sensors for following a line on the ground.*
After rebuilding the robot, we began implementing the **_InitialClimber.java_** [TODO ref] program. The program started out as a copy of **_LineFollower.java_** [TODO ref], and the idea was to use one of the two sensors to follow the line on the ramp, and the other to detect and initiate a turn at the big black line at the end of the platforms. The general idea can be seen in Figure 5.
![180 degree turn](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week1011/img/InitClimber-idea.png)
*Figure 5: An illustration of the robot driving forward onto the platform and turning nearly 180 degrees to go back onto the next ramp.*
The extension of the code consisted of an if-statement checking whether both sensors were reading black. If this was the case, the robot should turn a little less than 180 degrees and thereby drive towards the next ramp, where the hope was that it would be able to rediscover the black line. We purposely set the power of the motors very low in order to more easily observe errors. The turn was implemented by rotating the robot while letting the thread sleep for a certain number of milliseconds; the sleep time was determined by performing test runs with different values.
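
A minimal sketch of what this extension could look like, assuming leJOS NXJ - the ports, black threshold, motor speed, and sleep time are illustrative stand-ins for the values we found by testing:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

// Sketch of the InitialClimber extension: follow the line and trigger a
// roughly 180 degree timed turn when both sensors read black.
// Ports, threshold, speed, and sleep time are illustrative assumptions.
public class InitialClimberSketch {
    static final int BLACK_THRESHOLD = 45; // readings at or below count as black
    static final int TURN_MS = 1900;       // tuned by test runs: a little less than 180 degrees

    public static void main(String[] args) throws InterruptedException {
        LightSensor left = new LightSensor(SensorPort.S1);
        LightSensor right = new LightSensor(SensorPort.S2);
        Motor.B.setSpeed(180); // deliberately slow, to observe errors
        Motor.C.setSpeed(180);

        while (true) {
            boolean leftBlack = left.readValue() <= BLACK_THRESHOLD;
            boolean rightBlack = right.readValue() <= BLACK_THRESHOLD;

            if (leftBlack && rightBlack) {
                // The end line on a platform: rotate in place for a fixed time
                Motor.B.forward();
                Motor.C.backward();
                Thread.sleep(TURN_MS);
            } else if (leftBlack) {
                Motor.B.stop();    // steer back towards the line on the left
                Motor.C.forward();
            } else if (rightBlack) {
                Motor.B.forward(); // steer back towards the line on the right
                Motor.C.stop();
            } else {
                Motor.B.forward(); // on white: drive straight
                Motor.C.forward();
            }
            Thread.sleep(10);
        }
    }
}
```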
Now we hit a new obstacle. When reaching the second platform on the way down, our light sensors were hitting the floor. This meant they weren’t reading light values correctly and would lose track of the black line. As a result, we had to rebuild the robot to place the light sensors higher up. When placing the robot at the top of the platform initially, we were now able to get it to drive successfully all the way down to the green zone. However, we were no longer able to get the robot to act appropriately when driving up the track. The newly increased height of the light sensors meant that when the robot was driving onto a platform on its way up, the sensors would be raised so far above the ground that they weren’t reading any reflected light. As a result, they were reading ‘black’, causing the robot to think it was at the Y-section and should turn.
We were at an impasse trying to solve this problem. Lowering the light value below which a reading was considered ‘black’ on both sensors meant that the robot no longer recognized the Y-sections either. We briefly attempted to simply accept the reading produced when driving onto a platform as ‘black’, and added code to drive slightly onto the platform and find the next black line. The problem then became that no threshold for what counted as ‘black’ on both sensors resulted in reliable behavior. Either the robot wouldn’t react to one or more of the four ‘black’ triggers (the two platforms and the two Y-sections), both on the way up and on the way down, or it would trigger at times when it wasn’t supposed to at all. Video 4 shows this problem, where the robot triggers two left turns when hitting the top of the second ramp, as a result of somehow still seeing black after hitting the line on the platform.
https://www.youtube.com/watch?v=B-lPjFO3Bh0
*Video 4: The robot triggering two left turns at the top of the second ramp after erroneously reading black on the platform*
We spent a long time fiddling with these triggers, but were never able to get the robot to behave reliably at all turns, both on the way up and on the way down.
As an alternative to the above approach, which was beginning to seem complex, we began simultaneous work on a simpler approach, with the additional intention of focusing more on track completion time.
The two light sensors’ only purpose on this robot was to make sure we knew on which side of the robot the line was. We took advantage of their relative positions by keeping a state of which sensor last saw the black line. Partially knowing our position (or at least which side of the line we are on) means that we can make good use of the integral part of a PID controller, since that enables us to drive smoothly in a (sine-like) curve instead of the erratic behavior we get from reacting every time we sense black. The controller mainly uses the integral and proportional parts of PID, since those were the parts we thought the robot would benefit the most from. The robot remembers which sensor last saw black and decreases the power of the respective motor exponentially until it sees black again. This way we can have both motors running with a lot of power continuously.

A very naive implementation worked pretty well, and it didn’t seem to benefit much from more advanced techniques - though that might have something to do with the light in the room, since the readings became more unstable. One improvement we tried was to note how many readings of black a sensor made and decrease the turn rate proportionally to the number of readings, since we want the turns to be as smooth as possible when we are close to following the direction of the line. This change didn’t appear to improve anything in the testing we did. Another approach was to introduce more states, so that we also react while sensing black, where the simple robot only starts turning after sensing black. This worked alright most of the time, but it made the robot a lot more complex without much to show for it. We therefore continued with the simple implementation, since time was definitely an issue for us at this point.

This simple robot reached the second plateau a couple of times without any turning mechanism, mainly because it was lucky and avoided the black tape ‘cross’. An improvement we considered was to deterministically avoid the cross, but we didn’t look into that due to lack of time. What we ended up doing was to approximate what the tachometer would read at the different plateaus. The turns were approximated and hard-coded as well. This took a lot of tuning and calibration, but let us reach the top relatively easily. Getting down never happened, since the light readings (especially going down) became very unstable at night; considering our experience driving up to the top, getting back down shouldn’t take that much effort. This robot is implemented in the program **_OldPIDTest.java_**; some of the attempted improvements can be found in **_PIDTest.java_**. The program runs a loop with a sampling time of a few milliseconds. In each sample, it first checks whether the tachometer is above the threshold at which it is supposed to do something, and if so does exactly that; otherwise it drives using the LineFollower functionality. The thresholds are approximated from observation and testing. The first turn should happen at approximately 2200 on the tachometer, which is supposed to be when the robot hits the first plateau. The next threshold is a bit different, according to new approximations stemming from observation and testing. As mentioned, this works all the way to the top when the stars align. The LineFollower functionality works by taking a light reading on each light sensor; these readings lead into different cases, where the previous state is taken into consideration as well.

An improvement to this implementation could be to separate the functionality into behaviors, such as LineFollowing and Turning, with an arbitrator, since it could be useful to have the light sensing running while a hard-coded turn was happening. This way the robot would know where to look for the line after a turn, since this was mainly the place where the approximations failed: the robot would either over- or understeer and end up placed differently from where it thought it was. The robot drove fairly fast and did reach the top every once in a while, and even seemed consistent at times, but the approximations, combined with external factors such as battery power affecting the motor speeds, made it increasingly unstable the further up the ramp it got. Video 5 shows the robot reaching the second plateau, but failing due to an oversteer. The improvements we considered could probably have made it a lot more stable.
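
A minimal sketch of the core loop in this style, assuming leJOS NXJ; all ports, constants, and the left/right motor assignment are illustrative rather than the tuned values from **_OldPIDTest.java_**:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

// Sketch of the simple state-based follower: remember which sensor last
// saw black and exponentially reduce that side's motor speed until black
// is seen again; at a tachometer threshold, hand over to a hard-coded turn.
// All constants are illustrative assumptions, not our tuned values.
public class SimplePidSketch {
    static final int BLACK_THRESHOLD = 45;    // assumed black threshold
    static final int MAX_SPEED = 700;         // deg/s; both motors run near full speed
    static final int FIRST_TURN_TACHO = 2200; // approx. tacho count at the first plateau

    public static void main(String[] args) throws InterruptedException {
        LightSensor left = new LightSensor(SensorPort.S1);
        LightSensor right = new LightSensor(SensorPort.S2);
        Motor.B.setSpeed(MAX_SPEED); // B assumed to be the left motor
        Motor.C.setSpeed(MAX_SPEED);
        Motor.B.forward();
        Motor.C.forward();

        boolean lastBlackOnLeft = true; // state: which side the line was last seen on
        double factor = 1.0;            // speed factor on the turning side

        // One wheel's tacho count serves as a rough odometer up the ramp
        while (Motor.C.getTachoCount() < FIRST_TURN_TACHO) {
            if (left.readValue() <= BLACK_THRESHOLD)  { lastBlackOnLeft = true;  factor = 1.0; }
            if (right.readValue() <= BLACK_THRESHOLD) { lastBlackOnLeft = false; factor = 1.0; }

            factor *= 0.95; // exponential decrease until black is seen again
            if (lastBlackOnLeft) {
                Motor.B.setSpeed((int) (MAX_SPEED * factor)); // curve left, towards the line
                Motor.C.setSpeed(MAX_SPEED);
            } else {
                Motor.B.setSpeed(MAX_SPEED);
                Motor.C.setSpeed((int) (MAX_SPEED * factor)); // curve right
            }
            Thread.sleep(5); // sampling time of a few ms
        }
        // The hard-coded, hand-tuned plateau turn would go here.
    }
}
```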
[TODO: Fix to markdown format & number]
https://www.youtube.com/watch?v=w5JwM_dEYVQ
*Video 5:...*
### Introducing DifferentialPilot
https://www.youtube.com/watch?v=EdtbRaXm6NA
*Video 6: SimplePilot doing well but eventually running off the track*
We had foreseen the issues with keeping the robot going in the right direction and not driving off the track: expecting to place the robot in complete alignment with the track, and to have it never diverge from this direction later on, is unrealistic in a real-world environment. Aside from human motor skills and eye-measuring not being precise enough, irregularities in the surface and in the robot’s tires, as well as imbalances in the robot’s construction, all introduce drift in its course.
##### DifferentialPilot line following
Deciding to take a break from attempting the combined approach, we resorted to implementing line following using the DifferentialPilot’s arc method, in the class **_PilotedLineFollower.java_** [TODO ref]. It proved difficult to properly steer the robot according to the divergence in the black/white readings of the sensors. Though this was most likely due to a limited overview of how to utilize our error measurements in the arguments to **_arc()_**, and/or increasing tiredness and frustration levels, we temporarily gave up on creating a satisfactory implementation. Instead, we began an implementation using a switcher to switch between behaviors implemented in an interface, described in the section below. After some debate regarding that solution, we continued the attempt at implementing line following using DifferentialPilot. We experimented with different parameters for **_arc()_** and introduced a turn utilizing **_rotateRight()_** to turn until aligned with the bottom bar of the Y on the platform, in order to let the robot more easily find its way back on track after making a platform turn. We also made other minor additions. We managed to get the robot to the second platform, successfully performing the intermediate turn operation on the first platform (see Video 7). At some point, however, the robot began failing to detect the black end line - perhaps due to poor lighting in the later hours of the day, perhaps due to the sensors being raised as they were sometimes scraping on the track, or perhaps simply due to low battery. As black sensing was still working for the other two ongoing implementations, we decided to leave the DifferentialPilot at this.
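
A minimal sketch of the arc-based steering we were attempting, assuming the leJOS NXJ **_DifferentialPilot_**; the wheel measurements, ports, threshold, and arc radius are illustrative, and the pilot’s generic **_rotate()_** stands in for our turn helper:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.robotics.navigation.DifferentialPilot;

// Sketch: line following with DifferentialPilot.arcForward(), curving
// towards whichever sensor last read black. Wheel diameter, track width,
// ports, and constants are illustrative assumptions.
public class PilotedLineFollowerSketch {
    static final int BLACK_THRESHOLD = 45;  // assumed black threshold
    static final double ARC_RADIUS = 150;   // mm; smaller radius = sharper correction

    public static void main(String[] args) {
        // Assumed geometry: 56 mm wheels, 120 mm track width, B = left motor
        DifferentialPilot pilot = new DifferentialPilot(56, 120, Motor.B, Motor.C);
        LightSensor left = new LightSensor(SensorPort.S1);
        LightSensor right = new LightSensor(SensorPort.S2);
        pilot.setTravelSpeed(200); // mm/s

        while (true) {
            boolean leftBlack = left.readValue() <= BLACK_THRESHOLD;
            boolean rightBlack = right.readValue() <= BLACK_THRESHOLD;

            if (leftBlack && rightBlack) {
                // End line on a platform: turn in place until roughly aligned again
                pilot.rotate(160); // a little less than 180 degrees
            } else if (leftBlack) {
                pilot.arcForward(ARC_RADIUS);  // positive radius: curve left, towards the line
            } else if (rightBlack) {
                pilot.arcForward(-ARC_RADIUS); // negative radius: curve right
            } else {
                pilot.forward();
            }
        }
    }
}
```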
[TODO: Fix to markdown video format]
https://www.youtube.com/watch?v=LqK0Zw65I-c
*Video 7: PilotedLineFollower succeeding in performing the turn on the first platform*
##### Switching between navigation types