## **Goal**
Our goal is to get experience with behavior based programming and use this knowledge to make a sumo wrestling robot that drives other robots off the track and wins the tournament.
## **Plan**
We planned to follow the [9th lesson plan](http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson9.dir/Lesson.html), starting out by implementing a BumperCar followed by a sumo wrestling car. Since this week’s lesson plan looked fairly well-structured (as opposed to lesson plan 8), we planned to go by our usual protocol and have specific roles: since Camilla and Nicolai met up a few hours before the rest of the group, we had Camilla be in charge of the coding while Nicolai took notes. Emil and Ida were then to take care of testing and building. The role distribution quickly became muddled, however, due to theoretical discussions and a separation of the group into two halves (see the section on motivation functions). Having learned from our previous experience with splitting up the group work too much (see the report on lesson 8), we agreed that the split was only temporary and that we would keep each other updated frequently.
Camilla would not be able to participate in our second session, on Friday the 13th, so only three people would be available. We planned on having one or two people coding, depending on whether or not we would decide to test out alternative approaches, and one or two people working on (re-)construction and reinforcement of the robot as well as testing of the functionality of the robot’s physical aspects.
As our robot was currently wearing all the gear for track climbing, we started out by rebuilding it.

After rebuilding the robot, we uploaded the leJOS sample program **_BumperCar.java_** [3] and, after checking that the correct sensor ports were used, we ran the program and played around with the touch sensors.
We observed that when the touch sensors are pressed, the robot immediately breaks out of the behavior for driving forward and starts backing up, before doing a small backwards left turn. This means that the arbiter used to control the different behaviors of the *BumperCar* must be suppressing the basic drive-forward behavior, thereby letting the wall-detecting behavior take over.
During the experimentation with the wall detection behavior we realized that the *BumperCar* program only made use of one touch sensor, despite our intention to use two. We decided not to change this as we, at the time, just needed to test the program behavior and did not need a fully functional robot.
##### Additional behavior
*Code Snippet 1: The Exit behavior class with no implementation of the suppress method*
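As a sketch of what this looks like: here the *Behavior* interface is stubbed locally and the escape-button check is replaced by a plain boolean flag (the real class reads leJOS’ *Button.ESCAPE* on the brick), so the sketch compiles and runs without the NXT libraries:

```java
// Local stub of the leJOS Behavior interface, so the sketch is self-contained.
interface Behavior {
    boolean takeControl();
    void action();
    void suppress();
}

class Exit implements Behavior {
    // Stand-in for Button.ESCAPE.isDown() on the real brick.
    static volatile boolean escapePressed = false;

    public boolean takeControl() {
        return escapePressed;
    }

    public void action() {
        System.exit(0); // terminate the whole program at once
    }

    public void suppress() {
        // deliberately left empty, mirroring the leJOS sample's style
    }
}
```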
We chose not to implement *suppress()* despite knowing that this is not best practice: having a behavior with an empty *suppress* method makes it impossible to add a new behavior with a higher priority without first implementing the *suppress* method of the old behavior, thus (somewhat) defeating the purpose of modularity. We speculated that this might not be an issue in the case of *Exit*, as its implementation of *action()* is not continuous or prolonged and thus will finish execution (and thereby the entire program) rather quickly. However, it is at least theoretically possible for a potentially higher-motivated behavior to become triggered before *Exit*’s *action()* is done. Then again, we find it hard to imagine a behavior that should have a higher priority than the exit behavior. All this aside, we chose to follow the poor coding example of leJOS for the sake of consistency.
After implementing the *Exit* behavior we had to make sure that it could be used by the *BumperCar* program. We therefore added the behavior to the *BumperCar* program’s behavior list and began adding the suppress method for the *DetectWall* behavior that leJOS had left out. To do this, however, we had to change the *action*-method of the *DetectWall* behavior to ensure that it would return immediately after *suppress()* was called. This decision seemed correct, as the robot should not finish backing away from the wall before stopping once the Escape button has been pressed. This means that the *action*-method of *DetectWall* will now terminate after only two short statements if the *suppress*-method is called, as seen in Code Snippet 2.
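The shape of that change can be sketched as follows, with the motor commands replaced by a step counter and the backing-up duration invented, so the suppression logic can be exercised on its own (a real reusable behavior would also reset the flag on entry to *action()*; that is left out here to keep the sketch deterministic):

```java
// Sketch of a DetectWall whose action() bails out as soon as it is suppressed.
class DetectWall {
    private volatile boolean suppressed = false;
    int stepsDriven = 0; // stands in for actual motor commands

    public void suppress() {
        suppressed = true; // the flag action() checks between steps
    }

    public void action() {
        // Back up for up to 100 steps, but return immediately once suppressed.
        for (int step = 0; step < 100 && !suppressed; step++) {
            stepsDriven++; // real code: keep the motors running in reverse here
        }
        // real code: stop the motors here, whether finished or suppressed
    }
}
```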
##### Arbitrator's call sequence
The next task was to investigate whether the leJOS **_Arbitrator.java_** [5] called *takeControl()* on a behavior if another behavior with a higher priority wants to take (or keep) control. By looking into the code of the *Arbitrator* class we found that it starts up a monitor thread with the responsibility of monitoring which behaviors want to take control. It does this by going through the arbitrator’s list of behaviors from the one with the highest priority to the lowest, each time calling the *takeControl*-method, and if this call returns true it breaks the loop and starts over. Thereby *DriveForward.takeControl()* will not be called when *DetectWall.takeControl()* returns true.
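The scan can be illustrated with a small self-contained model (the behavior set and the return convention are simplified; the real *Arbitrator* runs this loop inside its monitor thread):

```java
import java.util.function.BooleanSupplier;

// Model of the monitor's scan: behaviors ordered from highest priority
// (index 0) down to lowest; the first one that wants control wins.
class ArbitratorScan {
    static int highestReadyBehavior(BooleanSupplier[] takeControl) {
        for (int i = 0; i < takeControl.length; i++) {
            if (takeControl[i].getAsBoolean()) {
                return i; // break: lower-priority behaviors are never asked
            }
        }
        return -1; // nobody wants control
    }

    public static void main(String[] args) {
        boolean[] asked = new boolean[2];
        BooleanSupplier detectWall = () -> { asked[0] = true; return true; };
        BooleanSupplier driveForward = () -> { asked[1] = true; return true; };
        int winner = highestReadyBehavior(
                new BooleanSupplier[]{ detectWall, driveForward });
        // winner is 0 and asked[1] stays false: DriveForward was never consulted
        System.out.println("winner=" + winner + ", driveForwardAsked=" + asked[1]);
    }
}
```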
##### Design pattern for takeControl()
In the leJOS tutorial on behavioral programming [1], leJOS specifies a design pattern for behaviors which basically states that *takeControl()* should return without spending time, *suppress()* should just update a variable, and *action()* should be aborted as soon as possible when the suppressed-variable is true. The *DetectWall* behavior does not comply with this design pattern, as its *takeControl()* uses the ultrasonic sensor’s *ping*- and *getDistance*-methods, which should be separated by at least 20 milliseconds. Therefore *takeControl()* does not return right away and thus breaks the pattern.
To solve this issue we added an **_Updater_** class to the *BumperCar.java* file and used an instance of this class in the *DetectWall* behavior. The *Updater* would, given an ultrasonic sensor, keep pinging it and updating a local variable. Whenever the *takeControl*-method of *DetectWall* was called, it would simply ask the updater object for the last measured distance and use this number. This returns much faster than 20 ms and therefore complies better with the design pattern.
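Our *Updater* is not shown here; a simplified stand-alone version could look like the following, with the sensor hidden behind a tiny interface so the class can run (and be tested) without hardware, and the 20 ms interval taken from the sensor’s timing constraint:

```java
// Stand-in for the ultrasonic sensor's slow, blocking read.
interface DistanceSource {
    int readDistance();
}

class Updater extends Thread {
    private final DistanceSource sensor;
    private volatile int lastDistance = 255; // 255 = nothing in range yet

    Updater(DistanceSource sensor) {
        this.sensor = sensor;
        setDaemon(true); // don't keep the program alive on our account
    }

    // Called from takeControl(): returns at once with the cached reading.
    public int getDistance() {
        return lastDistance;
    }

    // One polling step, factored out so it can also be driven manually.
    void pollOnce() {
        lastDistance = sensor.readDistance();
    }

    public void run() {
        while (!isInterrupted()) {
            pollOnce();
            try {
                Thread.sleep(20); // respect the sensor's ~20 ms downtime
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
```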
We chose to implement a separate thread class rather than have a local one, as done by leJOS’ *Arbitrator*, as we expected that we could reuse the *Updater* class for other sensor-updaters.
##### Moving further backwards
##### Interrupting *DetectWall*'s behavior by itself
The next task was to change the behavior of *DetectWall* such that it would re-start the evasive action whenever the bumper was pressed while the *action*-method was running. We discussed several ways of doing this, including:
* having a separate thread perform the behavior, which could then be interrupted by the *action*-method if another touch was detected
* checking whether the touch sensors were pressed again during execution and breaking if this was the case
There would be certain additional intricacies to the problem if we were to distinguish between continuous pressure and a second push following a release of the touch sensor. Although we figured that re-starting *action()* should only happen upon a push following a release, we decided to simply focus on how to re-start the evasive action and refine triggering later.
We spent a lot of time discussing different solutions, all with different pros and cons, and we had a hard time choosing. The solution with adding checks for positive touch sensor readings seemed like the easiest: we could simply perform a check using *touch.isPressed()* within *action()* and break if the check cleared. But we worried about the program timing as it relied on *isPressed()* to return true long enough for the arbitrator to call *takeControl()* on *DetectWall* and have it return true.
The thread solution could perhaps work, but it was relatively hard for us to grasp and implement, and it makes the behaviors act asynchronously, which would introduce more complexity in the case of driving. If it was in a behavior that had full control over the motors or sensors used (such as our *FlipEnemy* behavior in the sumo car, where we however still had several issues), then it could probably work rather well, since it is relatively easy to restart or interrupt a thread in case of suppression or a trigger in *takeControl()*. It is very likely that this would make it more complicated to have the behavior comply with the behavior design pattern, though.
We decided to go with a simple solution of adding a *touch.isPressed()* check to our last while loop (the one that lets the robot turn). Even though this solution doesn’t restart the *DetectWall.action()* if it encounters a touch on the bumper while backing up, it seemed like a reasonable solution.
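In outline the change looks like this, with the motor calls and the loop bound stubbed out (the real loop condition is the turn’s duration):

```java
// Sketch of the turn loop with the added bumper check.
class TurnLoop {
    interface Touch {
        boolean isPressed();
    }

    // Returns true if the turn was cut short by a bumper press.
    static boolean turn(Touch touch, int turnSteps) {
        for (int step = 0; step < turnSteps; step++) {
            if (touch.isPressed()) {
                return true; // abort; the arbitrator can then re-run action()
            }
            // real code: right motor backward, left motor stopped
        }
        return false; // turn completed undisturbed
    }
}
```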
We added a *beep* sound to the beginning of the *action*-method to clearly indicate that the *action*-method had been restarted. This would make it easy for us to test if the *action()* started over before turning if the touch sensors were pressed.
We ended up concluding that the leJOS Arbitrator is not implemented to ease interruption of a running behavior by one of the same priority, which in the case of this implementation is exactly what restarting the behavior before it has ended amounts to.
#### Division of work
The next steps of re-implementing the behavior paradigm to use Krink’s [2] motivation functions included:

4. The old *Exit* behavior has been renamed to *MotivatedExit* and its *takeControl()* returns either 0 or 100 depending on whether or not the escape button is pressed
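In outline, that mapping is all-or-nothing (with the escape-button check replaced by a boolean flag so the method can run on its own):

```java
// Sketch of MotivatedExit's motivation function.
class MotivatedExit {
    static boolean escapePressed = false; // stands in for the real button check

    public int takeControl() {
        return escapePressed ? 100 : 0; // all-or-nothing motivation
    }
}
```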
The motivation mapping function of *MotivatedDetectWall* is both the most complicated and interesting of the three behaviors’ *takeControl* methods. It is implemented around the idea that the motivation should change depending on whether the behavior is active or not (as described earlier). Initially we simply let it return the combined readings from the touch sensor and the ultrasonic sensor, mapped to values between 0 and 100: If the robot were to bump into something so that the touch sensor was pressed, the returned motivation would be 100. Otherwise it would be determined by the distance read by the ultrasonic sensor such that a value of 255 would result in 0 motivation for avoiding a wall while an ultrasonic distance reading of 0 would result in a motivation of 100 (note that this would never happen with our construction, as the touch sensor’s bumper was placed further in the front than the ultrasonic sensor - see Figure 1). The code for this mapping can be seen in Code Snippet 6.
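The described mapping reduces to a few lines; here it is as a pure function over the two sensor readings (the names are ours, not the ones from the snippet):

```java
// Maps the sensor state to a motivation value in [0, 100].
class WallMotivation {
    static int motivation(boolean touchPressed, int distance) {
        if (touchPressed) {
            return 100; // bumper contact: maximum urgency
        }
        // distance 255 maps to 0 motivation, distance 0 maps to 100
        return 100 - (distance * 100) / 255;
    }
}
```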
At this point, the *takeControl* method had become rather long and complex.

```java
public int takeControl()
{
    int motivation = sensorContact.getMotivation();

    // If active, raise the motivation
    if(active && motivation > 80) {
        motivation++;
    }

    // Before
    if(active && motivation < 50) {
        motivation = 50;
    }
```
We were tweaking the robot, both in construction and in program implementation, right up until the tournament.

The most important next behavior we would have looked at implementing was some kind of *searchEnemy* behavior, as a possible replacement for our base *Drive* behavior. Since our robot currently would just drive straight until hitting a white edge, and if our opponent happens to be in front of it, charge at it, it would be a vast improvement if the robot could actively search for our opponent itself, rather than hoping to bump into it randomly. A very basic way we considered doing this was to simply have the robot spin around on the spot, until it found the opponent with the ultrasonic sensor, in which case *PursueEnemy* would take over. Alternatively, we could have changed our *Drive* behavior to not just drive straight, but drive around in some random fashion. This would have also been made much easier, had we been able to get our **_MotivatedBumperCar_** to work, as it would have allowed us to, for example, have a *Turn* behavior that would randomly get higher motivation than *Drive*.
We also still had one sensor port available, which could have been used for a single touch sensor bumper somewhere, perhaps in the back, as an input for a defensive behavior when being shoved from behind. Another use for the sensor port could be a second ultrasonic sensor, which would enable binocular vision and make it a lot easier to locate an enemy. A problem with just one ultrasonic sensor is that we risk picking up an enemy on either side of our periphery and speeding up while driving away from it, whereas with two sensors we could take advantage of knowing the enemy’s relative position. The tournament winner Hans took advantage of this mechanism.
As far as the bumper goes, we also initially considered making a purely defensive robot using two bumpers on each side of the robot, and then attempt to win with defensive maneuvers reacting to which side we’d get shoved from. We ended up deciding that this would be a lot more difficult to implement cleverly, and also an incredibly obnoxious strategy for the other competitors to fight against, so we never ran with it. However, after seeing the tournament battles and how often some robots would lose due to shoving their opponent to the edge, and then the attacked robot would turn around (from seeing the white edge), and drag the attacking robot off the edge as a result, we realized that a defensive strategy of sorts might not have been a bad idea.
#### Problems
In addition to not getting to implement all the behaviors we would have liked, we also didn’t have time to properly debug our current ones. All our behaviors worked individually, and when running our SumoBot, they also all seemed to work initially… until a couple had activated, at which point it seemed both *PursueEnemy* and *FlipEnemy* would stop ever taking control. This would indicate something went wrong with our *Updater*, as these two behaviors relied on readings from the ultrasonic sensor. One cause we considered was having a separate updater in each behavior, which could mean that the updaters competed over the sensor, given the amount of downtime the ultrasonic sensor has between pings. The problem can be seen in video 2, where *FlipEnemy* triggers correctly initially, but then ceases to function after hitting the white edge, although the robot continues to drive around and dodge the edges.
[![Behavior problems](http://img.youtube.com/vi/1-54uEO9aJc/0.jpg)](https://www.youtube.com/watch?v=1-54uEO9aJc)
Our sumo wrestler managed a solid 4th place in our group in the tournament.

*Video 3: Frej’s victory over Praktikanten*
Two of our battles ended in a tie, as a result of the two robots getting stuck in each other, both trying to drive forward but getting nowhere. After seeing this, we realized it would have been a good idea to include some sort of trigger in one of our behaviors for when too much time had passed since some event (such as seeing a white edge), and then back up slightly, as this could possibly have prevented the stalemates.
#### Implementation
#### Attempted solution to ForkLift behavior
When we decided to hardcode the forklift behavior, we had already considered other solutions, one of which was to sample over an amount of time and take the minimum reading as the distance we compared with the trigger. We had some initial problems with the libraries we wanted to use for this solution, so it wasn’t ready for the tournament, but in the end we had something close to a proof of concept. This solution also tried to resolve the competition over the ultrasonic sensor by sharing the updater (and sampler) between the behaviors. *PursuitEnemy* didn’t need the sampling over time, but it didn’t really harm the behavior either, and it would be quite easy to solve. The only tests we had time to run used a FIFO queue of 50 samples and returned the smallest distance in the *getDistance()* method. Since the ultrasonic sensor has a downtime of around 20 ms, the queue holds approximately a second’s worth of samples at all times (except for the first second, while the queue is being filled). This meant that a reading of distance < 10 would stay in effect for a whole second, which is far longer than it should. Further testing would have allowed us to calibrate these sampling parameters, though. On the other hand, there is no delay in this method, since it updates the currently smallest distance after every ultrasonic ping. The current version of this solution can be found in **_SharedThreadSumoBot.java_**.
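The FIFO-minimum idea can be sketched on its own; the window length of 50 is from the text above, while everything else (names, queue type) is ours:

```java
import java.util.ArrayDeque;

// Keeps the last `capacity` distance samples and reports the smallest one.
class MinSampler {
    private final int capacity;
    private final ArrayDeque<Integer> window = new ArrayDeque<>();

    MinSampler(int capacity) { // 50 samples is roughly 1 s at one ping per 20 ms
        this.capacity = capacity;
    }

    void addSample(int distance) {
        if (window.size() == capacity) {
            window.removeFirst(); // drop the oldest reading
        }
        window.addLast(distance);
    }

    // Smallest distance currently in the window (255 = nothing seen).
    int getDistance() {
        int min = 255;
        for (int d : window) {
            min = Math.min(min, d);
        }
        return min;
    }
}
```

A short close-range spike thus stays visible for up to a full window, which is exactly the over-triggering described above; shrinking the window trades that against robustness to noisy readings.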
## **Conclusion**
From the preliminary exercises we learned that the leJOS *Arbitrator* is not fit for letting a currently active behavior be overruled by itself, as the fixed priorities do not enable this functionality. We realized, however, that this could be amended by using Krink’s motivation functions [2].
Although we were short on time and did not get to try out all the attachments and behaviors we would have liked to, we managed to implement all of our core functionality: driving around at a reasonably fast speed, avoiding the edge of the platform, and speeding up against opponents. This allowed our robot to do reasonably well in the tournament. While our forklift behavior worked on its own, we didn’t get to fix the bug that caused it to stop working early in the execution of the program. The build and weight of the robot worked well, and the plates appeared to prevent the other robots’ shovels from interfering with our wheels. Despite our bugs, all of this led us to a respectable 4th place (in our division).
Even though we thought that this lab session looked pretty structured and that we would therefore be able to specify roles for each group member, we ended up shifting between roles and responsibilities. This was, however, mainly caused by our own decision to split the group into two smaller groups, each working on their own part of the lab exercises and sharing experiences.
## **References**
[3] [BumperCar.java](https://gitlab.au.dk/LEGO/lego-kode/blob/master/src/Lesson9programs/BumperCar.java) containing classes for the Bumper Car exercises
[4] The leJOS [Behavior.java](http://www.lejos.org/p_technologies/nxt/nxj/api/lejos/subsumption/Behavior.html) interface
[5] The leJOS [Arbitrator.java](http://www.lejos.org/p_technologies/nxt/nxj/api/lejos/subsumption/Arbitrator.html) class
[6] Our implementation of the [Behavior](https://gitlab.au.dk/LEGO/lego-kode/blob/master/src/Lesson9programs/Behavior.java) interface, using motivational functions
[10] Ole Caprani’s implementation of [BumperCar.java](http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson9.dir/BumperCar.java)