|
|
|
|
|
We started out by changing the *Behavior* interface so that the *takeControl* method returns an integer value instead of a boolean. We considered which range of numbers would be appropriate for a behavior to return, and concluded that a motivation between 0 and 10 was too narrow if more behaviors were added; we therefore settled on a range from 0 to 100 (analogous to a sort of motivation percentage). This change can be found in our new interface, **_Behavior.java_** [6].
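A minimal sketch of the resulting interface is shown below, with a trivial constant-motivation behavior added purely for illustration (the method names follow the leJOS *Behavior* interface; the motivation range is our own convention):

```java
/**
 * Sketch of the modified Behavior interface: takeControl now returns
 * an integer motivation in [0; 100] instead of a boolean.
 */
interface Behavior {
    /** Returns the behavior's current motivation, from 0 (none) to 100 (maximal). */
    int takeControl();

    /** Performs the behavior's action until done or suppressed. */
    void action();

    /** Signals the running action to terminate as quickly as possible. */
    void suppress();
}

/** Illustrative example: a behavior with a constant motivation. */
class ConstantBehavior implements Behavior {
    private final int motivation;

    ConstantBehavior(int motivation) { this.motivation = motivation; }

    public int takeControl() { return motivation; }
    public void action() { /* no-op in this sketch */ }
    public void suppress() { /* no-op in this sketch */ }
}
```

A behavior that always returns a low constant, as the basic driving behavior does, then naturally acts as the fallback when nothing else is motivated.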
|
|
|
|
|
|
|
|
The next thing we had to do was change the arbitrator to account for the new return value of *takeControl()*. The changes are implemented in **_Arbitrator.java_** [7]. In this class we simply find the behavior with the highest current motivation instead of finding the one with the highest priority that wanted control. This is done by looping through the **behavior** list in reverse order, i.e. starting from the behavior with the highest priority, and checking if the motivation of the current behavior (**currentMotivation**) is greater than the one currently registered as highest (**highestMotivation**). If it is, we update the value of **highestMotivation** and set the **highestMotivated** variable to the behavior currently being checked. Otherwise we simply move on to checking the motivation of the previous behavior in the list. Since we begin from the behavior with the highest priority and only update it if the value currently being checked is strictly greater, we are also ensured that ties between behaviors are resolved such that the highest-prioritized wins.
|
|
|
|
|
|
The for loop, which is found in our *Arbitrator*’s local thread *Monitor* [7], is shown in Code Snippet 5.
|
|
|
|
|
|
```java
|
|
|
|
|
|
for (int i = maxPriority; i >= 0; i--)
{
    int currentMotivation = behavior[i].takeControl();
    // The higher queue number will be prioritized if several
    // behaviors are equally motivated
    if (currentMotivation > highestMotivation)
    {
        highestMotivation = currentMotivation;
        highestMotivated = behavior[i];
    }
}
|
|
|
|
|
|
```
|
|
|
|
|
|
*Code Snippet 5: The behavior-choosing loop of the Monitor thread.*
|
|
|
|
|
|
The next step in re-implementing the behavior paradigm to use Krink's [2] motivation functions was to start using motivation values in the actual behaviors. We therefore made a copy of **_BumperCar.java_** [3] and named it **_MotivatedBumperCar.java_** [8]. In this file we made the following changes:
|
|
|
|
|
|
1. Each behavior implements our *Behavior* interface [6]
|
|
|
|
|
|
2. The old *DriveForward* behavior has been renamed to *MotivatedDriveForward* and returns 10 in *takeControl()* so as to easily be overruled by more urgent behaviors
|
|
|
|
|
|
3. The old *DetectWall* behavior has been renamed to *MotivatedDetectWall* and returns different values from *takeControl()* depending on the current state of the *action*-method and the values returned by the ultrasound and touch sensors
|
|
|
|
|
|
4. The old *Exit* behavior has been renamed to *MotivatedExit* and its *takeControl()* returns either 0 or 100 depending on whether or not the escape button is pressed
|
|
|
|
|
|
The motivation mapping function of *MotivatedDetectWall* is both the most complicated and most interesting of the three behaviors' *takeControl* methods. It is implemented around the idea that the motivation should change depending on whether the behavior is active or not (as described earlier). Initially we simply let it return the combined readings from the touch sensor and the ultrasonic sensor, mapped to values between 0 and 100: If the robot were to bump into something so that the touch sensor was pressed, the returned motivation would be 100. Otherwise it would be determined by the distance read by the ultrasonic sensor, such that a value of 255 would result in 0 motivation for avoiding a wall, while an ultrasonic distance reading of 0 would result in a motivation of 100 (note that this would never happen with our construction, as the touch sensor's bumper was placed further forward than the ultrasonic sensor - see Figure 1). The code for this mapping can be seen in Code Snippet 6.
|
|
|
|
|
|
```java
|
|
|
|
|
|
int motivation;

if (touch.isPressed()) {
    motivation = 100;
} else {
    // Map distance 255 -> motivation 0 and distance 0 -> motivation 100
    float percentage = distance / 255.0f;
    motivation = (int) (100 - percentage * 100);
}
return motivation;
|
|
|
|
|
|
```
|
|
|
|
|
|
*Code Snippet 6: Mapping function from touch and ultrasonic sensor readings to motivation*
|
|
|
|
|
|
##### Re-activation of the action method during execution
|
|
|
|
|
|
We attempted to extend the mapping further, in order to incorporate reactivation of the *action* method when the touch sensor is pressed before the method has returned, as described previously and in the lesson plan. We began by looking at the implementation provided in the lesson plan [10]. Not yet being quite comfortable with the concept of motivation functions, we misread the code as if a second touch sensed during execution of the behavior would introduce a lower motivation (50) than the initial touch sensing. Attempting to copy this approach, we added a local boolean variable, **active**, that keeps track of whether the *MotivatedDetectWall* behavior is currently executing. The idea was then to similarly set the value to something lower, but using the motivation mapping. The implementation in *MotivatedBumperCar* attempts to do this by setting the motivation value to 50 if the behavior is active and the calculated motivation is less than 50, and otherwise simply setting it to the calculated motivation. This implementation is incorrect, which we understood upon later inspection and analysis of the code provided in [10]. (Looking at it now, it also seems that we did not even implement what we intended, but we see no point in discussing this further, as the idea was incorrect to begin with.)
|
|
|
|
|
|
What actually happens in the provided code is that, regardless of whether the method is still executing, a distance of less than 25 cm will trigger a motivation of 100. Otherwise, if the distance is at least 25 cm but the method is executing, *takeControl()* will return 50 - more than an inactive wall-detection behavior, but less than if the robot is currently less than 25 cm from an object. Thereby a newly registered near-wall presence will cause the motivation to go from 50 (currently executing) to 100 (the *action* method should be started from the beginning), allowing the behavior to overrule itself.
|
|
|
|
|
|
As we discovered this error very late, we did not have time to test a new implementation, and so we simply describe and argue for an alternative idea below.
|
|
|
|
|
|
Our alternative idea consists of letting a positive touch reading during execution of the behavior's *action* method result in a higher motivation than the previous triggering reading, thus ensuring that the *Arbitrator* restarts the behavior's *action*. This would be accomplished by adding 1 to the current motivation whenever the touch sensor registers pressure. To leave room for this, the motivation resulting from the initial trigger should not be 100, as there would then be no room for incrementing, but it should still be high enough that the behavior is prioritized over others. A good choice could be 90; it is highly unlikely that an extra touch would be registered ten times before the behavior is done executing, and so we would most likely not exceed our limit of 100 (and even if we did, we are not enforcing this limit - this could be an issue with respect to, for instance, the Exit behavior).
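As a rough sketch of this untested idea (the state handling, constant, and names here are hypothetical, not code from *MotivatedBumperCar*):

```java
/**
 * Sketch of the alternative re-triggering idea: a touch while the
 * action is running bumps the motivation one above its current value,
 * so the Arbitrator restarts the action. The initial trigger is 90
 * rather than 100, leaving headroom for the increments.
 */
class ReTriggerSketch {
    static final int TRIGGER_MOTIVATION = 90; // initial trigger, below the 100 ceiling

    int motivation = 0;

    /** Computes the next motivation from the activity flag and a touch reading. */
    int nextMotivation(boolean active, boolean touchPressed) {
        if (touchPressed) {
            // Touch during execution: raise by one so the behavior overrules itself;
            // touch while inactive: (re)trigger at the base level of 90.
            motivation = active ? motivation + 1 : TRIGGER_MOTIVATION;
        } else if (!active) {
            motivation = 0; // no contact and not running: nothing to do
        }
        return motivation;
    }
}
```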
|
|
|
|
|
|
An alternative more closely resembling the implementation in [10] could be to simply return 100 if the distance was below a triggering threshold (or if pressure was registered), 90 if the behavior was active, and otherwise to return values mapped to [0;89]. We have not analyzed this idea very closely, but it seems to run along the lines of the mapping in [10].
|
|
|
|
|
|
We have not figured out how to differentiate between continuous pressure on the touch sensor and a second push following a release. It was suggested that we include yet another boolean variable (*recentlyReleased* perhaps) that would be set if the pressure on the touch sensor was released while *action()* was running and then reset right before *action()* was to return. We never got round to implementing this solution either.
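The suggested bookkeeping could be sketched roughly as follows (untested; the class and method names are invented for illustration, and only *recentlyReleased* comes from the discussion above):

```java
/**
 * Sketch of distinguishing continuous pressure on the touch sensor from
 * a second push following a release, while action() is running.
 * recentlyReleased is set when the pressure is released during execution
 * and reset right before action() returns.
 */
class TouchEdgeDetector {
    private boolean recentlyReleased = false;

    /** Call with every touch-sensor sample taken while action() is running. */
    void sample(boolean pressed) {
        if (!pressed) {
            recentlyReleased = true; // pressure was released during execution
        }
    }

    /** True only for a new push following a release, not for held pressure. */
    boolean isSecondPush(boolean pressed) {
        return pressed && recentlyReleased;
    }

    /** To be called right before action() returns. */
    void reset() {
        recentlyReleased = false;
    }
}
```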
|
|
|
|
|
|
##### Introducing a separate thread for sensor readings
|
|
|
|
|
|
At this point, the *takeControl* method had become rather long and complex. We have therefore introduced a local variable, **sensorContact**, containing a thread of the type **SensorContact** (implemented in [8] as well), that queries the sensors like *Updater* of *BumperCar.java* [3]. Instead of just returning a reading, the *SensorContact* class also calculates the motivation (in the same way as in Code Snippet 6). Thereby *takeControl()* in *MotivatedDetectWall* has become a lot easier to read, as shown in Code Snippet 7.
|
|
|
|
|
|
```java
|
|
|
|
|
|
public int takeControl()
{
    int motivation = sensorContact.getMotivation();

    // If active and highly motivated, raise the motivation further
    if (active && motivation > 80) {
        motivation++;
    }

    // If active, keep the motivation at 50 or above
    if (active && motivation < 50) {
        motivation = 50;
    }
    return motivation;
}
|
|
|
|
|
|
```
|
|
|
|
|
|
*Code Snippet 7: The takeControl-method of MotivatedDetectWall*
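The *SensorContact* class itself is not shown above. A self-contained sketch of such an updater thread, with the hardware sensor calls abstracted behind small placeholder interfaces (the real leJOS classes need the NXT hardware) and the mapping mirroring Code Snippet 6, might look like:

```java
/**
 * Sketch of a SensorContact-style updater: a daemon thread that
 * continuously samples the sensors and caches the resulting motivation,
 * so that takeControl() only has to read a field.
 */
class SensorContact extends Thread {
    interface TouchReader { boolean isPressed(); }
    interface DistanceReader { int getDistance(); } // 0..255, as from the ultrasonic sensor

    private final TouchReader touch;
    private final DistanceReader sonar;
    private volatile int motivation = 0;

    SensorContact(TouchReader touch, DistanceReader sonar) {
        this.touch = touch;
        this.sonar = sonar;
        setDaemon(true);
    }

    /** Maps a touch/distance reading to a motivation in [0; 100]. */
    static int toMotivation(boolean pressed, int distance) {
        if (pressed) return 100;
        return 100 - (int) ((distance / 255.0f) * 100); // 255 -> 0, 0 -> 100
    }

    public void run() {
        while (true) {
            motivation = toMotivation(touch.isPressed(), sonar.getDistance());
            try { Thread.sleep(20); } catch (InterruptedException e) { return; }
        }
    }

    int getMotivation() { return motivation; }
}
```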
|
|
|
|
|
|
##### (Not) testing our implementation and mapping functions
|
|
|
|
|
|
When we came round to testing our implementation, Emil and Nicolai were in the middle of rebuilding the robot into a functioning sumo wrestler, and so we could not use the robot for testing. We did, however, plan to use the motivation-function implementations for our sumo robot, and so we report on the results of this attempt in the next section.
|
|
|
|
|
|
An actual test of the implementation described in this section would have enabled a discussion, and investigation, of appropriate mapping functions. As this was not possible, we instead argue for our choices below.
|
|
|
|
|
|
We chose to have the basic *MotivatedDriveForward* behavior return a motivation of 10 at all times. Thereby, if no other behavior was motivated, the robot would drive forward, indeed making it the basic behavior. The *MotivatedExit* behavior, on the other hand, had to always overrule the others. Therefore it returns a motivation of 100 (the maximum of the motivational interval) if the escape button has been pressed, to make sure that it would always have the highest priority. The *MotivatedDetectWall* behavior was, as described, more complicated. We fairly quickly concluded that it should be able to have a motivation down to 0, but had a hard time deciding how far up the motivational interval this behavior should go. This was set to 100, to have a smooth reading-to-percentage mapping - as *MotivatedExit* has higher priority than *MotivatedDetectWall*, a press on the escape button would still overrule anything going on in the wall-detection behavior.
|
|
|
|
|
|
### Sumo wrestler
|
|
|
|
|
|
As mentioned, after splitting the workforce, Emil & Nicolai began constructing and programming the sumo wrestler, and Ida & Nicolai continued on it the next day. We started out by brainstorming a lot of ideas for how we wanted to build the robot to deal with potential threats, as well as how most effectively to build and program it to go on the offensive.
|
|
|
|
|
|
#### Construction ideas
|
|
|
|
|
|
We thought of several possibilities to enhance our sumo robot in order to beat our opponents in the ring.
|
|
|
|
|
|
**Border-detecting light sensors** - The most basic requirement for our wrestler was the ability not to drive out of the ring of its own volition. This would require at least one light or color sensor to detect the white border, allowing our wrestler to react accordingly and drive away from the edge.
|
|
|
|
|
|
**Ultrasonic sensor 'eyes'** - One or more ultrasonic sensors placed on the robot would allow us to spot not only where our opponent was relative to us, but also how far away. Detecting opponents could be invaluable in seeking them out to shove them out of the ring.
|
|
|
|
|
|
**Touch sensor bumpers** - Attaching touch sensor bumpers to the front or back would allow us to detect contact with our opponent. Having one in the front would let us know when we hit our opponent, allowing us to go full speed forward to shove them out, and having one in the back would let us know when we were being shoved, telling us when to commence some sort of defensive/evasive maneuver.
|
|
|
|
|
|
**Microphone** - Since the rules stated the robot had to be autonomous, attaching a microphone and using shouted commands as input to trigger behaviors would sadly be against the rules. Furthermore, with all the noise generated by both robots and spectators, and the potential for our opponents to maliciously trigger this input intentionally, we pretty quickly decided that this sensor was a poor fit for us in many ways.
|
|
|
|
|
|
**Third motorized wheel** - We briefly considered replacing the static back wheel with a third motorized wheel, as this would give us significantly more motor strength to shove the other wrestlers with. We initially thought this would be against the rules, as we misinterpreted what it meant that the robot had to be *based* on the ExpressBot; apparently this base did not limit us to the static back wheel. Even so, we also felt it would be more interesting to use the third motor for the following idea.
|
|
|
|
|
|
**Forklift** - Attaching a vertical third motor to the robot, with some form of forklift attachment, would allow us to dig under the opposing robot and either raise it enough for its wheels to lose ground contact (and thereby all control) or possibly even flip it over entirely. By combining this with another sensor, either the touch sensors or the ultrasonic sensor, we could know when the opposing wrestler was immediately in front of us, and then raise the forklift.
|
|
|
|
|
|
**Side/rear guards** - Having seen most other groups work with some sort of front *shovel* on their robot, we discussed adding plates to the sides and rear of our robot. Assuming we secured the plates low enough to the ground, they would prevent the other teams' shovels from digging under our robot and potentially hitting our wheels, causing our robot to lose traction and thereby control.
|
|
|
|
|
|
#### Construction
|
|
|
|
|
|
After debating all our options, we ended up deciding to use three sensors: two front-facing light sensors and an ultrasonic sensor. Additionally, we used a third motor for a motorized forklift in the front. We also attached side/rear guards in the form of vertical black LEGO plates, although we ultimately decided only to have them on the rear and right side, as we didn't have time to find a stable construction solution for attaching the guard on the left side.
|
|
|
|
|
|
We decided on using two light sensors positioned on the left and right front of our robot, since this would allow us to not only detect when we hit an edge, but also which direction we were hitting with, enabling us to turn back onto the platform more effectively.
|
|
|
|
|
|
The ultrasonic sensor would serve one primary purpose - enable our robot to react to an opponent being positioned in front of us. We ended up using this in a few ways, as will be described later.
|
|
|
|
|
|
The motorized forklift would serve as a secondary means of attack. In addition to just driving at our opponent when they were in front of us, the forklift would allow us to raise them, causing them to lose control, or perhaps flip them entirely for an easy win.
|
|
|
|
|
|
![sumo build](https://gitlab.au.dk/LEGO/lego-kode/raw/master/week12/img/build.PNG)
|
|
|
|
|
|
*Figure 2: The robot rebuilt with a third motor for a fork and with side and rear guards to protect from being swept up by other robots*
|
|
|
|
|
|
#### Programming the wrestler
|
|
|
|
|
|
We wrote our sumo wrestler in **_SumoBot.java_** [9], based heavily on **_BumperCar_**. Initially we tried basing it on the **_MotivatedBumperCar_** that we created in previous exercises but never had time to test. However, we couldn’t immediately get it to work, and did not feel we had the time to debug it, so we simply went with the **_BumperCar_**. We then implemented the following behaviors, presented in order of their priority:
|
|
|
|
|
|
Exit
|
|
|
|
|
|
Simple behavior that stops the program execution, copied directly from what we implemented in BumperCar in the previous exercises.
|
|
|
|
|
|
AvoidEdge
|
|
|
|
|
|
The most important behavior apart from simply exiting - if we're close to the edge, get away. As far as *takeControl()* goes, we wouldn't really be able to calibrate the light sensors on each run in the sumo tournament, so we just used a hardcoded value: if either light sensor reads a light value over 40, this behavior wants control.
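The decision itself boils down to a threshold check on the two readings; as a sketch (the 40 threshold is the hardcoded value mentioned above, while the class and method names are illustrative):

```java
/**
 * Sketch of AvoidEdge's takeControl decision: either light-sensor
 * reading above the hardcoded threshold of 40 means we are over the
 * white border, and the behavior wants control.
 */
class AvoidEdgeLogic {
    static final int WHITE_THRESHOLD = 40;

    /** True if either front light sensor is reading the white border. */
    static boolean wantsControl(int leftLightValue, int rightLightValue) {
        return leftLightValue > WHITE_THRESHOLD || rightLightValue > WHITE_THRESHOLD;
    }
}
```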
|
|
|
|
|
|
To drive away from the edge we first turn up the motor speeds (to get away quickly) and then check which sensor read white. If the right sensor did, we turn away to the left, and if the left sensor did, we turn away to the right.
|
|
|
|
|
|
```java
|
|
|
BumperCar.leftMotor.setSpeed(600);
BumperCar.rightMotor.setSpeed(600);

// Back up slightly before turning away from the edge
BumperCar.leftMotor.rotate(-90, true);
BumperCar.rightMotor.rotate(-90);

if (leftLight.readValue() > 40) {
    // Left sensor read white: turn away to the right
    BumperCar.leftMotor.rotate(720, true);
    BumperCar.rightMotor.rotate(-720, true);
}
if (rightLight.readValue() > 40) {
    // Right sensor read white: turn away to the left
    BumperCar.rightMotor.rotate(720, true);
    BumperCar.leftMotor.rotate(-720, true);
}
|
|
|
|
|
|
```
|
|
|
|
|
|
*Code Snippet 8: Action-method of AvoidEdge behavior*
|
|
|
|
|
|
FlipEnemy
|
|
|
|
|
|
We wanted to use our forklift to lift an enemy immediately in front of us and drive them off the platform. To do this we used the **_Updater_** class from the previous exercises to keep an updated variable of our ultrasonic sensor's readings. Once the sensor read distances close enough to be considered as being on our forklift (we did some testing and decided this to be '10'), this behavior would want control. The behavior would then raise the forklift, increase the speed of our two wheel motors, and drive straight ahead a set distance, or until the behavior got suppressed. Once the robot reached the end of its dash, it would lower the forklift and end the action. Additionally, we added a check for a boolean **forkRaised** in the suppress function, which would lower the fork before suppressing in case it was true. Otherwise, the behavior could get suppressed with the forklift still raised, which would be problematic.
|
|
|
|
|
|
```java
|
|
|
public void action() {
    _suppressed = false;
    forkRaised = true;

    SumoBot.flipMotor.rotate(-30);
    SumoBot.rightMotor.setSpeed(600);
    SumoBot.leftMotor.setSpeed(600);

    SumoBot.leftMotor.rotate(720, true);
    SumoBot.rightMotor.rotate(720, true);

    while (!_suppressed) {
        if (!SumoBot.rightMotor.isMoving() &&
            !SumoBot.leftMotor.isMoving()) { // motors are not going forward

            SumoBot.flipMotor.rotate(30);
            forkRaised = false;
            break;
        }
    }
}
|
|
|
|
|
|
```
|
|
|
|
|
|
*Code Snippet 9: Action-method of FlipEnemy behavior*
|
|
|
|
|
|
The reason we went for a static "lift fork -> dash -> lower fork" attack in this behavior was that implementing it by activating the lift whenever the ultrasonic sensor read 10 was very difficult, for a few reasons. First of all, the sensor's readings are a bit erratic, and simply raising the forklift on readings under 10 and lowering it otherwise led to *very* problematic behavior, as can be seen in Video 1. Secondly, our forklift would end up blocking our ultrasonic sensor when raised, possibly causing it to assume an enemy was in front of it, when in reality it was seeing its own forklift.
|
|
|
|
|
|
[![Forklift being dodgy](http://img.youtube.com/vi/lLlhimXuy6I/0.jpg)](https://www.youtube.com/watch?v=lLlhimXuy6I)
|
|
|
|
|
|
*Video 1: Forklift behaving erratically*
|
|
|
|
|
|
PursueEnemy
|
|
|
|
|
|
A simple behavior that would increase the speed of our wheel motors when the ultrasonic sensor detected a presence within 30 cm. This was intended to let our robot drive around fairly steadily until it detected an opponent in front of it, and then charge at it with full power.
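A sketch of the decision logic (the 30 cm threshold is the one mentioned above; the speed values of 200 and 400 follow our working notes below, and the class and method names are illustrative):

```java
/**
 * Sketch of PursueEnemy's decision: within 30 cm of an obstacle the
 * behavior wants control and the wheel motors switch from cruising
 * speed to charging speed.
 */
class PursueEnemyLogic {
    static final int DETECTION_DISTANCE = 30; // cm
    static final int CRUISE_SPEED = 200;      // base Drive speed from our notes
    static final int CHARGE_SPEED = 400;      // PursueEnemy speed from our notes

    static boolean wantsControl(int distance) {
        return distance < DETECTION_DISTANCE;
    }

    static int speedFor(int distance) {
        return wantsControl(distance) ? CHARGE_SPEED : CRUISE_SPEED;
    }
}
```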
|
|
|
|
|
|
Drive
|
|
|
|
|
|
Our lowest-priority, absolute base behavior - drive steadily around at a medium speed until suppressed by a higher-priority behavior.
|
|
|
|
|
|
#### Other implementation considerations
|
|
|
|
|
|
We were tweaking the robot both in construction and program implementation up to the very minute of the weigh-in of the sumo wrestling tournament, so we did not have time to test all of the potential behaviors to improve our wrestler.
|
|
|
|
|
|
The most important next behavior we would have looked at implementing was some kind of *searchEnemy* behavior, as a possible replacement for our base *Drive* behavior. Since our robot currently would just drive straight until hitting a white edge and, if our opponent happened to be in front of it, charge at it, it would be a vast improvement if the robot could actively search for the opponent itself, rather than hoping to bump into it randomly. A very basic way we considered doing this was to simply have the robot spin around on the spot until it found the opponent with the ultrasonic sensor, at which point *PursueEnemy* would take over. Alternatively, we could have changed our *Drive* behavior to not just drive straight, but drive around in some random fashion. This would also have been made much easier had we been able to get our **_MotivatedBumperCar_** to work, as it would have allowed us to, for example, have a *Turn* behavior that would randomly get higher motivation than *Drive*.
|
|
|
|
|
|
We also still had one sensor port available, which could have been used for a single touch sensor bumper somewhere, perhaps in the back as an input for a defensive behavior when being shoved from behind.
|
|
|
|
|
|
As far as the bumper goes, we also initially considered making a purely defensive robot using two bumpers, one on each side of the robot, and then attempting to win with defensive maneuvers reacting to which side we'd get shoved from. We decided that this would be a lot more difficult to implement cleverly, and also an incredibly obnoxious strategy for the other competitors to fight against, so we never ran with it. However, after seeing the tournament battles and how often some robots would lose by shoving their opponent to the edge, whereupon the attacked robot would turn around (having seen the white edge) and drag the attacking robot off the edge as a result, we realized that a defensive strategy of sorts might not have been a bad idea.
|
|
|
|
|
|
#### Problems
|
|
|
|
|
|
In addition to not getting to implement all the behaviors we would have liked, we also didn't have time to properly debug our current ones. All our behaviors worked individually, and when running our SumoBot, they also all seemed to work initially... until a couple had activated, at which point it seemed both *PursueEnemy* and *FlipEnemy* would stop ever taking control. This would indicate something went wrong with our *Updater*, as these two behaviors relied on readings from the ultrasonic sensor. The problem can be seen in Video 2, where *FlipEnemy* triggers correctly initially, but then ceases to function after hitting the white edge, although the robot continues to drive around and dodge the edges.
|
|
|
|
|
|
[![Behavior problems](http://img.youtube.com/vi/1-54uEO9aJc/0.jpg)](https://www.youtube.com/watch?v=1-54uEO9aJc)
|
|
|
|
|
|
*Video 2: FlipEnemy behavior dying*
|
|
|
|
|
|
#### Sumo Wrestling Tournament
|
|
|
|
|
|
Our sumo wrestler managed a solid 4th place in our group in the tournament, with 3 wins, 2 ties and 2 losses. Due to our aforementioned problems with *FlipEnemy*, we never saw our motorized forklift in action, but we still saw the forklift oftentimes disrupting our opponents' robots by simply being in the way of their wheels. A video of one of our decent victories can be seen in Video 3.
|
|
|
|
|
|
[![Frej Victory](http://img.youtube.com/vi/70-1zJPpsRc/0.jpg)](https://www.youtube.com/watch?v=70-1zJPpsRc)
|
|
|
|
|
|
*Video 3: Frej’s victory over Praktikanten*
|
|
|
|
|
|
Two of our battles ended in a tie as a result of the two robots getting stuck in each other, trying to drive forward but getting nowhere. After seeing this, we realized it would have been a good idea to include some sort of trigger in one of our behaviors in case too much time had passed since some event (such as seeing a white edge), and then try to back up slightly, as this could possibly have prevented the stalemates.
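Such a trigger could be sketched as a simple watchdog on the time since the last notable event (all names and the concrete 10-second timeout are invented here; we never implemented this):

```java
/**
 * Sketch of a stalemate watchdog: if too much time passes without any
 * edge detection (a sign both robots may be locked together mid-ring),
 * signal that the robot should back up slightly.
 */
class StalemateWatchdog {
    private final long timeoutMs;
    private long lastEventMs;

    StalemateWatchdog(long timeoutMs, long nowMs) {
        this.timeoutMs = timeoutMs;
        this.lastEventMs = nowMs;
    }

    /** Call whenever an edge is seen (or another notable event happens). */
    void onEvent(long nowMs) {
        lastEventMs = nowMs;
    }

    /** True if the robot should attempt a short backup maneuver. */
    boolean shouldBackUp(long nowMs) {
        return nowMs - lastEventMs > timeoutMs;
    }
}
```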
|
|
|
|
|
|
#### Implementation
|
|
|
|
|
|
Done:
|
|
|
|
|
|
* implemented AvoidEdge behavior (motivation 100 on detection of edge)
|
|
|
|
|
|
* our own arbitrator and motivatedcar do not work - we use Ole’s and see if we have time to figure out what’s wrong with our own implementation later
|
|
|
|
|
|
* robot turns too much (values -360 for one wheel, -1300 for other wheel); sometimes drives backwards off edge
|
|
|
|
|
|
* → implementation that turns on the spot. Works OK. Will optimize later if desired.
|
|
|
|
|
|
* Next: Implement detection of other robot and go to it. After that: Implement detection of other robot and try to flip it.
|
|
|
|
|
|
* On detection and following of other robot:
|
|
|
|
|
|
* How to test? Make robot go faster if it sees an enemy - that way we can tell that it has detected its enemy. Implement behavior Drive that only drives with speed 60 - makes it very obvious when the robot switches to PursueEnemy (where we set speed to 400)
|
|
|
|
|
|
* Don’t try to make it follow the enemy - simply go faster straight ahead if enemy is within vision
|
|
|
|
|
|
* Consideration: We let avoiding the edge outweigh enemy detection, as crossing the edge leads to losing the match with much higher certainty than not pursuing an enemy does. Perhaps we should even stop for a second when losing track of the enemy, so as to minimize the risk of being lured to the edge (or would stopping make us more vulnerable to attacks?).
|
|
|
|
|
|
* Using distance 30 cm for detection; have not tested other distances
|
|
|
|
|
|
* On detection and flipping other robot
|
|
|
|
|
|
* Change base speed to 200
|
|
|
|
|
|
* Use tacho counter to make sure that we don’t flip forklift "all over the place" like when Frej was almost strangled in a previous lesson
|
|
|
|
|
|
* Make test program for testing flip: FlipTest
|
|
|
|
|
|
* NB: Robot must not begin to chase because it is seeing its own forklift (should not happen, as pursue has lower priority)
|
|
|
|
|
|
* First test: -30 rotation on flip motor (motorport A, FlipMotor). Looks okay.
|
|
|
|
|
|
* Note: Use of motivation functions would be nice: Could lower motivation for flipping after a flip has been performed, such that the robot would rather push the other robot by running into it again
|
|
|
|
|
|
* Implement so that suppression or loss of sight of the enemy makes the forklift go back down
|
|
|
|
|
|
* Sometimes robot sees (bogus?) far distances even if something is very close in front of it → forklift flips out [VIDEO]
|
|
|
|
|
|
* → make update thread to keep track of distance over a timeslice. This did not work out!
|
|
|
|
|
|
* Note: Can FlipEnemy be suppressed in between setting ForkLifted to true and flipping the fork? What happens then?
|
|
|
|
|
|
Notes on Emil's thread-based solution idea for the FlipEnemy problem:
|
|
|
|
|
|
We tried putting all the ultrasound sensor functionality in a thread shared by the PursueEnemy behavior and the FlipEnemy behavior, where both call getDistance. Initially it was meant only for the FlipEnemy behavior, where it should sample distances over a set time (e.g. 1 second) and then take the minimal distance in the array, but this didn't work. When it was extracted from FlipEnemy it seemed obvious to share it with PursueEnemy, but that behavior didn't need the sampling. This wouldn't be too bad though, since that behavior doesn't need to look for an enemy more than a few times every second. We never got to the point where the sampling worked, but we made a solution that continuously ran over 50 elements in a FIFO array and took the minimal distance. The ultrasound sensor has a downtime of around 15-20 ms for every ping, and with 50 elements this would [TODO Nicolai and Ida didn't consider the downtime, and called Thread.sleep(20) as another sampling time - to investigate: what happens if you call a method on a sleeping thread? The thread that calls the method isn't sleeping, so it can process the method even if the runnable is sleeping?] be taking the minimum of the distances over the last second. This would make it lift the fork more than once at a sighting of an enemy (which is still a problem, since we need to lift our enemy and keep it high, not raise the fork further (or what?)).
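The 50-element FIFO minimum filter can be sketched like this (the ring-buffer details are illustrative, not our exact code; with the sensor's 15-20 ms ping downtime, 50 samples span roughly the last second):

```java
/**
 * Sketch of a min-over-last-N-samples distance filter: a fixed-size
 * FIFO ring buffer of ultrasonic readings whose minimum is reported,
 * smoothing out the bogus far readings the sensor sometimes returns.
 */
class MinDistanceFilter {
    private final int[] samples;
    private int next = 0;

    MinDistanceFilter(int size) {
        samples = new int[size];
        java.util.Arrays.fill(samples, 255); // 255 = "nothing seen" for the NXT sensor
    }

    /** Adds a new reading, overwriting the oldest one. */
    void add(int distance) {
        samples[next] = distance;
        next = (next + 1) % samples.length;
    }

    /** Minimum distance over the buffered samples. */
    int getDistance() {
        int min = 255;
        for (int s : samples) min = Math.min(min, s);
        return min;
    }
}
```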
|
|
|
|
|
|
Note (regarding separate thread for vision): if sampling over an entire second, we would get a perhaps undesirable delay (shorter sampling timeslices would help with this)
|
|
|
|
|
|
## **Conclusion**
|
|
|
|
|
|
From the preliminary exercises we learned that the leJOS *Arbitrator* was not fit for letting a currently active behavior be overruled by itself, as the static priorities didn't enable this functionality. We realized, however, that this could be fixed by using Krink's motivation functions [2].
|
|
|
|
|
|
TODO:
|
|
|
|
|
|
* what did we get out of the preliminary exercises?
|
|
|
|
|
|
Although we were short on time and didn't get to try out all the attachments and behaviors we would have liked to, we managed to implement all of our core functionality: driving around at a reasonably fast speed while avoiding the edge of the platform, and speeding up against opponents, which allowed our robot to do reasonably well in the tournament. While our forklift behavior worked on its own, we didn't get to fix the bug that caused it to stop working early in the execution of the program. The build and weight of the robot worked well, and the plates appeared to prevent the other robots' shovels from interfering with our wheels. Despite our bugs, all of this led us to a respectable 4th place.
|
|
|
|
|
|
Even though we thought that this lab session looked pretty structured, and that we would therefore be able to assign roles to each group member, we ended up shifting between roles and responsibilities. This was, however, mainly caused by our own decision to split the group into two smaller groups, each working on their own part of the lab exercises and sharing experiences.
|
|
|
|
|
|
## **References**
|
|
|
|
|
|
[1] [LeJOS tutorial on behavior programming](http://www.lejos.org/nxt/nxj/tutorial/Behaviors/BehaviorProgramming.htm)
|
|
|
|
|
|
[2] Thiemo Krink (in prep.), [Motivation Networks - A Biological Model for Autonomous Agent Control](http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson9.dir/Krink.pdf)
|
|
|
|
|
|
[3] [BumperCar.java](https://gitlab.au.dk/LEGO/lego-kode/blob/master/src/Lesson9programs/BumperCar.java) containing classes for the Bumper Car exercises
|
|
|
|
|
|
[4] The LeJOS [Behavior.java](http://www.lejos.org/p_technologies/nxt/nxj/api/lejos/subsumption/Behavior.html) interface
|
|
|
|
|
|
[5] The LeJOS [Arbitrator.java](http://www.lejos.org/p_technologies/nxt/nxj/api/lejos/subsumption/Arbitrator.html) class
|
|
|
|
|
|
[6] Our implementation of the [Behavior](https://gitlab.au.dk/LEGO/lego-kode/blob/master/src/Lesson9programs/Behavior.java) interface, using motivational functions
|
|
|
|
|
|
[7] Our implementation of the [Arbitrator](https://gitlab.au.dk/LEGO/lego-kode/blob/master/src/Lesson9programs/Arbitrator.java) class, using motivational functions
|
|
|
|
|
|
[8] [MotivatedBumperCar.java](https://gitlab.au.dk/LEGO/lego-kode/blob/master/src/Lesson9programs/MotivatedBumperCar.java) containing classes for the exercises on motivational functions
|
|
|
|
|
|
[9] [SumoBot.java](https://gitlab.au.dk/LEGO/lego-kode/blob/master/src/Lesson9programs/SumoBot.java) containing behavior classes for sumo exercises
|
|
|
|
|
|
[10] Ole Caprani’s implementation of [BumperCar.java](http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson9.dir/BumperCar.java)
|
|
|
|
|
|
[TODO more program refs?]