|
|
|
|
|
We ended up with the following initial, concrete plan for the first steps:
|
|
|
|
|
|
|
|
|
1. Run the LineFollower [4] program from Lesson 1 on the track to see how it performs
|
|
|
|
|
|
2. Experiment with modifications of LineFollower
|
|
|
|
|
|
|
|
|
#### Experimenting with faster initial ramp climb
|
|
|
|
|
|
|
|
|
We wanted to find a faster method for climbing the first ramp than using the old LineFollower program. As described, we initially disabled line following and instead used the program **FullDowney.java** [5], which simply makes the robot drive forward at full speed. We did this to see if, by positioning it correctly, we could get the robot to go straight up without driving off the ramp.
|
|
|
|
|
|
The robot performed very poorly, often making a sudden turn off the track. After taking a closer look, we ended up replacing the left motor, a slightly bent LEGO rod (swapped for a straight one), and the support wheel. This seemed to improve the robot's driving slightly, but did not completely fix the issue of seemingly random turns. We gave up on trying to get the robot to go straight without sensor input and went on to experimenting with the gyro sensor.
|
|
|
|
|
|
|
|
|
*Figure 2: The robot rebuilt to use a gyro sensor for detecting a level change.*
|
|
|
|
|
|
|
|
|
To test whether the gyro sensor can be used to detect when the robot reaches a platform, we wrote a small program **GyroTest.java** [6], which records and displays the minimum and maximum values seen so far while the robot drives forward at a steady pace (both motors run with a power of 80). We intended to log the gyro readings throughout each test, but ran into errors that we couldn't immediately solve. Instead we moved on without logs and used the displayed minimum and maximum values instead.
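The min/max bookkeeping at the core of GyroTest.java can be sketched as follows (a minimal sketch; the class and method names are ours, and the LeJOS-specific parts - the gyro sensor reads, LCD display calls, and motor control at power 80 - are omitted):

```java
// Sketch of the extreme-value tracking in GyroTest.java.
// Each gyro sample is fed to update(); the extremes seen so far
// would be shown on the NXT display in the real program.
public class GyroExtremes {
    private int min = Integer.MAX_VALUE;
    private int max = Integer.MIN_VALUE;

    /** Feed one gyro reading and update the extremes seen so far. */
    public void update(int reading) {
        if (reading < min) min = reading;
        if (reading > max) max = reading;
    }

    public int getMin() { return min; }
    public int getMax() { return max; }
}
```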
|
|
|
|
|
|
We ran the GyroTest.java program twice. First we started the program on the ramp just before the platform, letting the robot drive onto the platform and reading the maximum and minimum gyro values seen. Second, we decided to let the robot gain some momentum before driving onto the platform, and therefore started it at the bottom of the ramp and let it drive all the way up onto the platform. Neither of these runs gave a noticeable result that we felt we could rely on for detecting that the robot had reached a platform and therefore should start behaving differently. This might have been a result of our fairly slow initial speed; a faster robot would have had an easier time triggering significant gyro sensor readings. From our experiences in previous lessons, we did not want to risk spending several hours on an approach that did not seem to promise a useful result. Therefore, we ended up discarding the idea of using a gyro sensor to detect the platform, and moved on to building a robot that climbed the ramp and returned to the floor with no concern for time consumption.
|
|
|
|
|
|
|
|
|
*Figure 3: The robot rebuilt to use two light sensors for following a line on the ground.*
|
|
|
|
|
|
|
|
|
After rebuilding the robot, we began implementing the **_InitialClimber.java_** [7] program. The program started out as a copy of **_LineFollower.java_** [4], and the idea was to use one of the two sensors to follow the line on the ramp, and the other to detect and initiate a turn at the big black line at the end of the platforms. The general idea can be seen in Figure 4.
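The split between the two sensor roles can be sketched as follows (a minimal sketch; which sensor handles which task, the steering directions, and the threshold value are our illustrative assumptions, not taken from the actual program):

```java
// Sketch of the InitialClimber.java idea: one sensor does
// LineFollower-style edge following, the other triggers the turn
// at the thick black end line. All concrete values are placeholders.
public class InitialClimberLogic {
    private final int threshold; // calibrated black/white threshold

    public InitialClimberLogic(int threshold) {
        this.threshold = threshold;
    }

    /** Decide the next action from the two light readings. */
    public String action(int followReading, int turnReading) {
        if (turnReading < threshold) {
            return "TURN";        // turn sensor sees the big black line
        }
        if (followReading < threshold) {
            return "STEER_RIGHT"; // follow sensor on the line: steer off
        }
        return "STEER_LEFT";      // follow sensor off the line: steer back
    }
}
```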
|
|
|
|
|
|
![180 degree turn](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week1011/img/InitClimber-idea.png)
|
|
|
|
|
|
|
|
|
#### Introducing PID control
|
|
|
|
|
|
|
|
|
We made the class **_PIDClimber.java_** [8], which uses the same kind of correction as **_PIDpolice.java_** [9] in Lesson 4, simply multiplying the deviation from the setpoint by a constant. In **_PIDClimber_**, the robot interprets black as a sensor measuring less than the black/white threshold (used as the setpoint). This resulted in the robot triggering a turn when its front was being pushed over the edge of the platform, as the sensors were raised too far from the platform to register the white surface. Instead, we tried comparing with the light reading for black (plus a small margin), which successfully got rid of the undesired triggering of turns. The robot still missed a turn once in a while, most likely due to the same issue described in the previous section, where the line following behavior performed the turn on its own and missed the double black reading. To remedy this, we tried lowering the speed of the robot, which reduced the number of occurrences of this kind of error.
|
|
|
|
|
|
The observations described above were not filmed, as we felt that our video documentation was turning into a giant blob of videos, and the improvements seemed minor. Most of our work has not been video recorded, because almost all of our progress was an extremely incremental process in which we ran a program dozens of times with minor changes between each run before accomplishing a small goal, and then moved on to tackling the next problem. It was therefore very difficult to judge at the time when it made sense to record videos, which resulted in very few videos overall.
|
|
|
|
|
|
|
|
|
### PID LineFollower with high precision
|
|
|
|
|
|
|
|
|
The goal of this implementation was to use PID control to very accurately follow the black line, in order to use the resulting reliable entrance angle of the Y-sections and black borders on the platforms to guide the robot. As such, **_GodBot.java_** [10] was made.
|
|
|
|
|
|
|
|
|
In an attempt to improve our line follower we wanted to make several changes. First, we calibrated the two light sensors for separate black/white light values using the **_BlackWhiteSensor.java_** [11] program, to guard against potential small differences in the values read by two different sensors. Then a basic controller utilizing both light sensors was implemented by calculating the *error* value for each light sensor, *leftError* and *rightError* (black/white threshold subtracted from the light reading), multiplying the *leftError* value by -1, and then adding it to *rightError* for a combined *turn* value. Inverting one sensor's error ensures that the two sensors work together to correct toward the center of the black line, as they were placed so that each sensor attempted to stay on one of the black tape's borders (the left sensor on the left border, and the right sensor on the right). This combined *turn* value was then subtracted from the robot's left motor's power, and added to the right motor's power.
|
|
|
|
|
|
Running this program with *targetPower* = 70 followed the line for the most part, but reacted far too violently to it, causing big oscillations. It occurred to us that even though we didn't explicitly implement P, I or D terms in the controller, the current program is basically a P controller with P = 2, since we are getting error values from two sensors and adding them together, doubling the effect on motor power. We solved this by simply dividing *leftError + rightError* by 2 before adding it to the motors' powers. This resulted in a robot that follows the line decently, although still oscillating a fair bit. We reduced *targetPower* to 60, resulting in a *very* slow robot, but one that follows the line more steadily. As speed wasn't our top priority at the moment, we decided this was fine for an initial attempt at getting all the way to the top and back down.
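The resulting controller step can be sketched as follows (method and class names are ours; the threshold value in the test is a placeholder, the halving and the power values come from the text):

```java
// Sketch of the two-sensor controller in GodBot.java: invert the
// left error, add the right error, halve the sum (to avoid the
// implicit P = 2), and apply the result to the motor powers.
public class TwoSensorController {
    private final int threshold;   // calibrated black/white threshold
    private final int targetPower; // e.g. 60 after the speed reduction

    public TwoSensorController(int threshold, int targetPower) {
        this.threshold = threshold;
        this.targetPower = targetPower;
    }

    /** Combined turn value from both light sensor readings. */
    public int turn(int leftReading, int rightReading) {
        int leftError = leftReading - threshold;
        int rightError = rightReading - threshold;
        // Inverted left error plus right error, divided by 2.
        return (-leftError + rightError) / 2;
    }

    public int leftPower(int turn)  { return targetPower - turn; }
    public int rightPower(int turn) { return targetPower + turn; }
}
```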
|
|
|
|
|
|
|
|
|
##### Putting the P, I and D in PID.
|
|
|
|
|
|
|
|
|
One recurring problem when running our simple **_GodBot.java_** [10] program, was that it didn’t follow the line very precisely, which caused inconsistencies in the angle at which the robot would enter the Y-section of the black tape trail, or the big black line at the end of the platforms. This inconsistency in entrance angle meant it was difficult to get the robot to properly get out of the Y-section, and follow the black line up the next ramp, as we couldn’t make any solid assumptions as to where exactly the robot would be when seeing the thick black ramp line, not to mention it didn’t always follow the line correctly past the Y-section to begin with.
|
|
|
|
|
|
|
|
|
In an attempt to follow the black line more precisely for more reliable results, we decided to implement a proper PID LineFollower. Based slightly on our **_Sejway.java_** [12] code from our previous PID balancing robot, **_PIDGod.java_** [13] was made.
|
|
|
|
|
|
After experimenting for a long time with the P, I and D variables, based on past experiences we arrived at the values *targetPower* = 60, P = 5, I = 0.4 and D = 8. We dampened the integral error by multiplying it by ⅔ every iteration to prevent it from accumulating indefinitely. This PID line follower followed the black line incredibly smoothly, even around the sharp corners of the tape at the top of the ramps, and as such gave us a much stronger foundation for handling our turns.
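One update step of such a controller can be sketched as follows (the constants are the ones from the text; the structure, and in particular the exact order of the ⅔ damping relative to the accumulation, is our assumption):

```java
// Sketch of one PIDGod.java control step with P = 5, I = 0.4,
// D = 8, and the integral damped by 2/3 each iteration so it
// cannot accumulate indefinitely.
public class PidStep {
    private static final double P = 5, I = 0.4, D = 8;
    private double integral = 0;
    private double lastError = 0;

    /** Returns the turn correction for the given combined error. */
    public double update(double error) {
        integral = integral * (2.0 / 3.0) + error; // damped accumulation
        double derivative = error - lastError;
        lastError = error;
        return P * error + I * integral + D * derivative;
    }
}
```

The damping factor acts like a leaky integrator: old error contributions decay geometrically instead of piling up, which is what keeps the integral term from winding up on long straight stretches.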
|
|
|
|
|
|
|
|
|
As an alternative to the above approach, which was beginning to seem complex, we began simultaneous work on a simpler approach, with the additional intention of focusing more on the track completion time.
|
|
|
|
|
|
|
|
|
The two light sensors’ only purpose in this robot was to make sure we knew on which side of the robot the line was. We took advantage of their relative positions by keeping track of which sensor last saw the black line. Partially knowing our position (or at least which side of the line we’re on) means that we can make good use of the integral part of a PID controller, since that enables us to drive smoothly in a (sine-like) curve instead of the erratic behavior we get from reacting every time we sense black. The controller mainly uses the integral and proportional parts of PID, since those are the parts we thought the robot would benefit the most from. The robot remembers which sensor last saw black and decreases the power of the respective motor exponentially until it sees black again. This way we can have both motors running with a lot of power continuously. A very naive implementation worked pretty well, and it didn’t seem to benefit much from more advanced techniques, though that might have had something to do with the light in the room, since the readings became more unstable. One improvement we tried was to count how many readings of black a sensor made and decrease the turn rate proportionally with the number of readings, since we want the turns to be as smooth as possible when we are close to following the direction of the line. This change didn’t appear to improve anything in our testing. Another approach was to introduce more states so that we react while sensing black as well, where the simple robot only starts turning after sensing black. This worked alright most of the time, but it made the robot a lot more complex without much to show for it. We therefore continued with the simple implementation, since time was definitely an issue at this point. This simple robot reached the second plateau a couple of times without any turning mechanism, mainly because it was lucky and avoided the black tape ‘cross’.

An improvement we considered was to deterministically avoid the cross, but we didn’t look into that due to lack of time. What we ended up doing was to approximate what the tachometer would read at the different plateaus. The turns were approximated and hard-coded as well. This took a lot of tuning and calibration, but let us reach the top relatively easily. Getting down never happened, since the light readings (especially going down) became very unstable at night. Considering our experience driving up to the top, getting back down wouldn’t take that much effort, since we already had experience in driving down. This robot is implemented in the program **_OldPIDTest.java_** [14]. Some of the attempted improvements can be found in **_PIDTest.java_** [15]. The program runs a loop with a sampling time of a few ms. In each sample, it first checks whether the tachometer is above the threshold at which it is supposed to do something; if so it does exactly that, and otherwise it drives using the LineFollower functionality. The thresholds are approximated from observations and testing. The first turn should happen at approximately 2200 on the tachometer, which is supposed to be when the robot hits the first plateau. The next threshold is a bit different, according to new approximations stemming from observation and testing. As mentioned, this works all the way to the top when the stars align. The LineFollowing functionality works by taking a light reading on each light sensor. These readings lead into different cases, where the previous state is taken into consideration as well. An improvement to this implementation could be to separate the functionality into behaviors, such as LineFollowing and Turning, with an arbitrator, since it could be useful to have the light sensing running while a hard-coded turn was happening. This way the robot would know where to look for the line after a turn, since this was mainly where the approximations failed.

The robot would either over- or understeer and end up placed differently from where it thought it was. The robot drove fairly fast and did reach the top every once in a while, and even seemed consistent at times, but the approximations combined with external factors, such as battery power affecting the motor speeds, made it increasingly unstable the further up the ramp it got. Video 5 shows the robot reaching the second plateau, but failing due to an oversteer. The different improvements we considered could probably have made it a lot more stable.
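The per-sample decision in the loop can be sketched as follows (only the first threshold of roughly 2200 comes from our measurements; the second threshold and the action labels are illustrative):

```java
// Sketch of the sampling-loop dispatch in OldPIDTest.java: when
// the tachometer passes the next hard-coded threshold, perform the
// pre-programmed turn for that plateau; otherwise keep line
// following. 2200 is from our observations; 4600 is a placeholder.
public class TachoDispatch {
    private final int[] thresholds; // tacho counts that trigger turns
    private int nextIndex = 0;

    public TachoDispatch(int[] thresholds) {
        this.thresholds = thresholds;
    }

    /** Returns "TURN" when a threshold is crossed, else "FOLLOW". */
    public String step(int tachoCount) {
        if (nextIndex < thresholds.length
                && tachoCount >= thresholds[nextIndex]) {
            nextIndex++;
            return "TURN";   // hard-coded turn for this plateau
        }
        return "FOLLOW";     // two-sensor line following
    }
}
```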
|
|
|
|
|
|
*Video 5: The robot reaches the second plateau, but fails due to an oversteer.*
|
|
|
|
https://www.youtube.com/watch?v=w5JwM_dEYVQ
|
|
|
|
|
### Introducing DifferentialPilot
|
|
|
|
|
|
|
|
|
Since we had so much trouble getting the robot to turn correctly on the platform by following a line, we decided to also try a different approach using the LeJOS class DifferentialPilot. At first, we made an implementation, **_SimplePilot.java_** [16], that did not attempt to follow the line at all, but simply drove straight ahead until reading black on both sensors (i.e. reaching the end of a platform), at which point it performed a turn using the following sequence of commands:
|
|
|
|
|
|
1. stop (**_pilot.stop();_**)
|
|
|
|
|
|
|
|
|
#### Combining line-following ramp climbing with piloted platform turns
|
|
|
|
|
|
|
|
|
Initially, we included code from **_GodBot.java_** [10] in a copy of **_SimplePilot.java_** [16], named **_PilotedCarLineFollower.java_** [17], to drive the robot up the ramp instead of using pilot.forward(). The API for [DifferentialPilot](http://www.lejos.org/nxt/nxj/api/lejos/robotics/navigation/DifferentialPilot.html) states that the results of other objects making calls to the motors while DifferentialPilot is in use are unpredictable. We had trouble figuring out how to separate line following control from the differential pilot’s control, so as to be able to switch between them. Our first attempt was to instantiate the DifferentialPilot within the if-statement that captures the case where the robot reads black on both sensors, so that it would only run in this isolated case and would be terminated upon exit from the if-statement’s scope. The robot had no trouble getting up the first ramp using line following and individual control of the motors, and it performed the turn on the first platform - but after this, when exiting the if-statement, the robot started randomly turning and racing around. We spent several hours trying out different approaches to getting the differential pilot to work along with the GodBot line following, but did not succeed.
|
|
|
|
|
|
##### DifferentialPilot line following
|
|
|
|
|
|
|
|
|
Deciding to take a break from attempting the combined approach, we resorted to implementing line following using the DifferentialPilot’s arc-method, in the class **_PilotedLineFollower.java_** [18]. It proved difficult to properly steer the robot according to divergence in the black/white readings of the sensors. As this was most likely due to a limited overview of how to utilize our error measurements in the arguments to **_arc()_** and/or increasing tiredness and frustration levels, we temporarily gave up on creating a satisfactory implementation. Instead, we began an implementation using a switcher to switch between behaviors implemented in an interface, as described in the section below. After some debate regarding this solution (also to be found in the section below), we continued the attempt at implementing line following using DifferentialPilot. We experimented with different parameters for **_arc()_** and introduced a turn utilizing **_rotateRight()_** to turn until aligned with the bottom bar of the Y on the platform, in order to let the robot more easily find its way back on track after making a platform turn. We also made other minor additions. We managed to get the robot to the second platform, successfully performing the intermediate turn operation on the first platform (see Video 7). At some point, however, the robot began failing to detect the black end line - perhaps due to poor lighting in the later hours of the day, perhaps due to the sensors being raised as they were sometimes scraping on the track, or perhaps simply due to low battery. As black sensing was still working for the other two ongoing implementations, we decided to leave the DifferentialPilot at this.
|
|
|
|
|
|
*Video 7: The robot successfully performing the intermediate turn on the first platform.*
|
|
|
|
https://www.youtube.com/watch?v=LqK0Zw65I-c
|
|
|
|
|
##### Switching between navigation types
|
|
|
|
|
|
|
|
|
We constructed an architecture with a class, **_Switcher.java_** [19], with a field variable of the interface type **_NavigatingRobot.java_** [20], initially instantiated as a **_PilotedBot.java_** [21], which was intended to navigate using DifferentialPilot. The class **_SensorSteeringBot.java_** [22] implements line following similar to that of GodBot.java. The idea was then to switch between a PilotedBot instance and a SensorSteeringBot instance on different triggers, such as reading black on both sensors. We never finished implementing this idea: it dawned on us that this was very similar to using an arbiter, and upon investigating how to implement it using an arbiter, we concluded that it was not certain to free us from the trouble of combining the individual motor control of Car with DifferentialPilot. We therefore went back to attempting to implement line following, now with a fresh mindset (as the arbiter discussion took place at the beginning of our session on Monday the 9th of May).
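The switching architecture can be sketched as below. Class and interface names follow the report, but the method signatures and stub bodies are illustrative assumptions standing in for the real navigation code, and the single both-sensors-black trigger is just one of the triggers mentioned above.

```java
// Minimal sketch of the Switcher architecture: a Switcher delegates to
// whichever NavigatingRobot implementation is active, and swaps between a
// PilotedBot and a SensorSteeringBot when both sensors read black.
// The step() bodies are stubs, not our actual navigation code.
interface NavigatingRobot {
    String step(boolean leftBlack, boolean rightBlack); // returns the action taken
}

class PilotedBot implements NavigatingRobot {
    public String step(boolean l, boolean r) { return "pilot-drive"; }
}

class SensorSteeringBot implements NavigatingRobot {
    public String step(boolean l, boolean r) { return "line-follow"; }
}

public class Switcher {
    private NavigatingRobot active = new PilotedBot(); // initial navigation style

    public String tick(boolean leftBlack, boolean rightBlack) {
        if (leftBlack && rightBlack) {
            // Trigger: both sensors on black -> switch navigation style.
            active = (active instanceof PilotedBot)
                    ? new SensorSteeringBot() : new PilotedBot();
        }
        return active.step(leftBlack, rightBlack);
    }

    public static void main(String[] args) {
        Switcher s = new Switcher();
        System.out.println(s.tick(false, false)); // pilot-drive
        System.out.println(s.tick(true, true));   // both black: switches behavior
        System.out.println(s.tick(false, true));  // line-follow
    }
}
```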
|
|
|
|
|
|
##### Using a bumper
|
|
|
|
|
|
|
|
|
An implementation similar to **_SimplePilot.java_** [16] was also made (**_PilotWithBumper.java_** [23]) that used a touch sensor to detect when the robot hit the wall by the track *and then* made the second 90 degree turn, but it was never tested. To test it, we would have to refit the robot with a touch sensor, preferably with a bumper to ensure that collision with the wall would be detected no matter the angle of the robot. Since the difference from using SimplePilot seemed minor, and since we figured that this version would take longer to complete the track due to having to wait until hitting the wall and then backing up, we decided to postpone refitting with a touch sensor until we might see an actual gain in using one. We never got to that point. One benefit might have been the possibility of letting the robot use the bumper to align with the wall, by waiting a little before stopping upon impact, thus enabling it to enter the next ramp perpendicularly after making the 90 degree turn following the collision.
|
|
|
|
|
|
### Discussion of alternative approaches
|
|
|
|
|
|
|
|
|
On the 5th of April we decided to split the group between two different approaches to a PID line follower and a third approach using the DifferentialPilot. One PID follower focused on very precise line following, giving predictable light sensor readings when encountering the Y-sections so they could trigger the turns on the platforms, whereas the other used a simple notion of which side of the line the robot was on, while using the tacho counter to handle the turns on the platforms.
|
|
|
|
|
|
|
|
|
The precision line follower, **_PIDGod.java_** [13], had a lot of success getting to the top of the track, and once we raised the light sensors it was able to drive down as well. However, the raised sensors read black when they were too far off the ground while driving onto the platforms on the way up. This meant we were never able to get a reliable reaction to all of our desired turning triggers, and could never get the robot to drive up and down in a single build and program.
|
|
|
|
|
|
The other PID line follower approach, **_OldPIDTest.java_** [14], was less consistent in reaching the top, but did so a lot faster. It was intended to be a simple solution, but it became obvious that the implementation could also easily become complex. At times it performed consistently, but it could also be very unstable. It was not entirely clear which factors affected the performance, but it seemed that especially the battery level, and the resulting motor power, affected both the tacho counter and the line following. The low level of ambient light also seemed to affect our light readings when attempting to drive down. With a few very good results and a relatively simple implementation, it definitely shows promise and serves as a fine proof of concept of what we initially attempted to achieve.
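The computation at the heart of a PID line follower like the ones above can be sketched as plain arithmetic on the error between a calibrated setpoint (the light value at the line edge) and the current reading. The gains below are illustrative placeholders, not the values we actually used.

```java
// Sketch of the PID step at the core of a PID line follower: the error
// between a calibrated setpoint and the current light reading is turned
// into a motor-power offset (add to one motor, subtract from the other).
// KP, KI and KD are assumed example gains, not our tuned values.
public class PidStep {
    static final double KP = 2.0, KI = 0.01, KD = 8.0; // assumed gains
    private double integral = 0, lastError = 0;

    /** Returns the power offset for the current reading. */
    double correction(int setpoint, int reading) {
        double error = setpoint - reading;
        integral += error;                      // I: accumulated error
        double derivative = error - lastError;  // D: change since last step
        lastError = error;
        return KP * error + KI * integral + KD * derivative;
    }

    public static void main(String[] args) {
        PidStep pid = new PidStep();
        System.out.println(pid.correction(45, 45)); // on the edge: 0.0
        // Drifting toward the bright side yields a negative offset that
        // steers the robot back toward the line edge.
        System.out.println(pid.correction(45, 55));
    }
}
```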
|
|
|
|
|
|
|
|
|
Our attempt to use the **_DifferentialPilot_** class was illustrative of the inconsistencies that the real world brings into robot programming; although **_DifferentialPilot_** is very effective at controlling the robot’s movements precisely, the simple approach in **_SimplePilot.java_** [16] is not enough to make a functioning hill-climbing robot as external factors bring the robot off a perfectly straight course. The work on **_PilotedLineFollower.java_** [18], which adds line following to SimplePilot, showed promise, but due to time constraints we chose to opt out of our experiments with it before having implemented a fully functional up-down robot.
|
|
|
|
|
|
Using **_DifferentialPilot_** was different from the other approaches we have tried during the course, as **_DifferentialPilot_** provides us with methods that utilize calculations based on the tacho count and physical parameters (i.e. wheel diameter and the distance between the wheels). These calculations are far more precise than the ones we have attempted to do using light measurements and time (e.g. **_sleep(1000);_**), and even if we had used the tacho count ourselves, we would have had to spend a very long time finding out how to turn e.g. 90 degrees given our robot’s construction. Thus, **_DifferentialPilot_** relieves us of some basic but important calculations.
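The kind of geometry DifferentialPilot handles for us can be illustrated with two basic conversions. The wheel diameter and track width below are made-up example values, not our robot's measurements.

```java
// Illustration of the geometry DifferentialPilot encapsulates: converting
// tacho degrees to distance travelled, and computing how many degrees each
// wheel must turn (in opposite directions) to rotate the robot in place.
// WHEEL_DIAMETER and TRACK_WIDTH are example values, not our robot's.
public class WheelGeometry {
    static final double WHEEL_DIAMETER = 5.6; // cm (assumed)
    static final double TRACK_WIDTH = 12.0;   // cm between wheel centers (assumed)

    /** Distance travelled (cm) for a given number of tacho degrees. */
    static double tachoToDistance(int tachoDegrees) {
        return Math.PI * WHEEL_DIAMETER * tachoDegrees / 360.0;
    }

    /** Tacho degrees each wheel must turn (with opposite signs) to rotate
     *  the robot in place by robotDegrees. */
    static double wheelDegreesForRotation(double robotDegrees) {
        // Each wheel traces an arc of radius TRACK_WIDTH / 2.
        double arc = Math.PI * TRACK_WIDTH * robotDegrees / 360.0;
        return 360.0 * arc / (Math.PI * WHEEL_DIAMETER);
    }

    public static void main(String[] args) {
        System.out.println(tachoToDistance(360));        // one full wheel turn
        System.out.println(wheelDegreesForRotation(90)); // in-place 90 degree turn
    }
}
```

With these values, a 90 degree in-place turn needs roughly 193 tacho degrees per wheel, which is exactly the kind of construction-specific number we would otherwise have had to find by trial and error.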
|
|
|
|
|
|
|
|
|
[3] Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Prentice Hall, 2001.
|
|
|
|
|
|
|
|
|
[4] The [LineFollower.java program]() from Lesson 1.
|
|
|
|
|
|
|
|
|
[5] The [FullDowney.java program]().
|
|
|
|
|
|
|
|
|
[6] The [GyroTest.java program]().
|
|
|
|
|
|
|
|
|
[7] The [InitialClimber.java program]().
|
|
|
|
|
|
|
|
|
[8] The [PIDClimber.java program]().
|
|
|
|
|
|
[9] The [PIDPolice.java program]().
|
|
|
|
|
|
[10] The [GodBot.java program]().
|
|
|
|
|
|
[11] The [BlackWhiteSensor.java program]().
|
|
|
|
|
|
[12] The [Sejway.java program]() from lab session 7.
|
|
|
|
|
|
[13] The [PIDGod.java program]().
|
|
|
|
|
|
[14] The [OldPIDTest.java program]().
|
|
|
|
|
|
[15] The [PIDTest.java program]().
|
|
|
|
|
|
[16] The [SimplePilot.java program]().
|
|
|
|
|
|
[17] The [PilotedCarLineFollower.java program]().
|
|
|
|
|
|
[18] The [PilotedLineFollower.java program]().
|
|
|
|
|
|
[19] The [Switcher.java program]().
|
|
|
|
|
|
[20] The [NavigatingRobot.java program]().
|
|
|
|
|
|
[21] The [PilotedBot.java program]().
|
|
|
|
|
|
[22] The [SensorSteeringBot.java program]().
|
|
|
|
|
|
[23] The [PilotWithBumper.java program]().
|
|
|
|
|
|
|
|
|
|
|
|
## **Appendix A: Unconstrained idea generation**
|
|
|
|