# Week 11: Sumo Wrestling Robots
The goal is to experiment with programming a robot with different behaviors, resulting in a sumo wrestling LEGO robot.
# Plan
To experiment with the leJOS behavior framework using the sample bumper car, and to try different sensor configurations, robot designs and behaviors to decide which works best for a sumo wrestling LEGO robot.
# Results
### Plan
Use the *Express bot*[REF] mounted with a bumper and an ultrasonic sensor to do the experiments.
### Result
#### Taking control
The arbitrator has a monitor that looks at all entries in the list of behaviors and checks whether each behavior wants to take control. This is done by running through the list from the highest priority to the lowest and returning the highest-priority entry that wants control. The monitor breaks out of the loop as soon as it finds a behavior whose takeControl returns true, so a high-priority behavior will shadow a lower-priority one. In particular, takeControl of DriveForward is not called while the triggering condition of DetectWall is true.
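As an illustration of this scanning logic, here is a minimal sketch (our own simplified version, not the actual leJOS source; the class and method names are ours) of how the monitor can find the highest-priority behavior that wants control:

```java
import lejos.robotics.subsumption.Behavior;

// Simplified sketch of the priority scan described above. In leJOS the
// behavior with the highest priority is the last element of the array.
class PriorityScanSketch {
    static int highestPriorityWantingControl(Behavior[] behaviors) {
        // Run from the highest priority down to the lowest ...
        for (int i = behaviors.length - 1; i >= 0; i--) {
            if (behaviors[i].takeControl()) {
                return i; // ... and stop at the first behavior that wants control
            }
        }
        return -1; // no behavior wants control
    }
}
```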
#### Separating taking control and distance measurements
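One way to separate the two is to sample the ultrasonic sensor in a background thread, so that takeControl never blocks on a measurement but only reads the most recent value. Below is a minimal sketch of this idea for a DetectWall-style behavior; the port, threshold and class name are illustrative, not the actual sample code:

```java
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;
import lejos.robotics.subsumption.Behavior;

// Sketch: a sampler thread keeps the latest distance in a field, and
// takeControl() only inspects that field, so it returns immediately.
class DetectWallSketch implements Behavior {
    private final UltrasonicSensor sonar = new UltrasonicSensor(SensorPort.S3);
    private int lastDistance = 255; // "nothing in sight" until the first sample

    DetectWallSketch() {
        Thread sampler = new Thread() {
            public void run() {
                while (true) {
                    lastDistance = sonar.getDistance(); // blocks here, not in takeControl()
                    try { Thread.sleep(20); } catch (InterruptedException e) { return; }
                }
            }
        };
        sampler.start();
    }

    public boolean takeControl() {
        return lastDistance < 25; // just a comparison, no sensor I/O
    }

    public void action() { /* back up and turn; omitted in this sketch */ }

    public void suppress() { }
}
```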
We have gained knowledge about the built-in Arbitrator and the Behavior interface.
## Sumo Wrestler Robot
### Building the robot
The original building plan for the robot was to build a big wall around the robot to protect against whatever unpredictable things our opponent might do. Among other things, we wanted to prevent another robot from driving something below our robot and lifting it from the floor.
Behind each wall we wanted 2 touch sensors, one in each end, for a total of 8. The idea was that we could use the ultrasonic sensor to detect the direction of an opponent, and use the touch sensors to detect when the opponent's robot was right next to ours. By having touch sensors all the way around our robot and only trying to push the opponent's robot when they are pressed, we should be able to avoid the case where the opponent's robot drops something and our robot, using the ultrasonic sensor, interprets the dropped item as the opponent's robot, since that item probably won't be heavy enough to press the touch sensors.
To detect the white edge of the arena plate, we wanted 2 light sensors in front connected to one port and 2 light sensors in the back connected to another port.
After cutting down, we had only small walls and no wall behind the robot; we also removed the light sensor behind the robot and made some other small optimizations, and we managed to get below 1 kg.
![small_IMAG0692](http://gitlab.au.dk/uploads/lego-group-3/lego/a6438303cf/small_IMAG0692.jpg)
Image 2: The ultrasonic-sensor version of the robot.
However, we found that the motor-mounted ultrasonic sensor was simply too slow. It took about one second to do the entire reading of forward and 45 degrees to the left and right, meaning that the estimate of the other robot's position would always be far too outdated to use for anything constructive. One second is a lot of time in a LEGO sumo match. By discarding the motor and the ultrasonic sensor we suddenly had a bit more weight to play with, so we could remount our back wall with touch sensors. And because we no longer needed a sensor port for the ultrasonic sensor, we were able to put the front wall's touch sensors on a separate port from those of the three other walls.
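For reference, the sweep we abandoned looked roughly like the sketch below (the motor port, angles and class name are illustrative). Each blocking rotation plus a ping adds up, which is where the roughly one second per sweep comes from:

```java
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

// Sketch of the abandoned scan: sweep the sensor to -45, 0 and +45 degrees
// and take one reading at each position.
public class SonarSweepSketch {
    public static void main(String[] args) {
        UltrasonicSensor sonar = new UltrasonicSensor(SensorPort.S3);
        int[] angles = { -45, 0, 45 };
        int[] distances = new int[angles.length];
        for (int i = 0; i < angles.length; i++) {
            Motor.C.rotateTo(angles[i]);        // blocks until the sensor is in position
            distances[i] = sonar.getDistance(); // blocks for the ping as well
        }
        Motor.C.rotateTo(0); // re-centre the sensor
        // By now the readings in distances[] are already outdated.
    }
}
```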
With this design, however, we have another problem. We only know whether we have something in front of us, which is what we want, or whether we have been touched on one of the three other sides, which we do not want. The question is whether we can distinguish those three walls from each other by small differences in the raw value from the touch sensors.
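To investigate this, a small test program like the sketch below could log the raw ADC value of the shared port while we press the sensors one at a time (the port and class name are illustrative):

```java
import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.SensorPort;

// Sketch: show the raw value of the port shared by the three side/back walls,
// so we can see whether pressing different touch sensors gives different values.
public class RawTouchLogger {
    public static void main(String[] args) throws InterruptedException {
        SensorPort shared = SensorPort.S2;
        while (!Button.ESCAPE.isDown()) {
            LCD.clear();
            LCD.drawInt(shared.readRawValue(), 0, 0); // around 1023 when nothing is pressed
            LCD.refresh();
            Thread.sleep(100);
        }
    }
}
```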
### Programming the robot
The prioritising of behaviors can be either dynamic or fixed. Krink[REF] looks at autonomous robots as simple animals, which is a good abstraction if we want dynamic behavior; however, we have not found a fitting use for this in our design and have gone with a simple fixed behavior priority queue.
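As a concrete example of such a fixed priority queue, the overall structure could look like the sketch below; the behaviors, ports and thresholds are illustrative placeholders, not our final program:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;
import lejos.robotics.subsumption.Arbitrator;
import lejos.robotics.subsumption.Behavior;

// Sketch of a fixed behavior priority. Ports, thresholds and the behaviors
// themselves are placeholders.
public class SumoSketch {

    // Shared helper so each behavior follows the usual "run until suppressed" idiom.
    static abstract class SumoBehavior implements Behavior {
        boolean suppressed;
        public void suppress() { suppressed = true; }
        void waitUntilSuppressed() {
            while (!suppressed) Thread.yield();
            Motor.A.stop(); Motor.B.stop();
        }
    }

    static class Seek extends SumoBehavior {         // lowest priority: default motion
        public boolean takeControl() { return true; }
        public void action() {
            suppressed = false;
            Motor.A.forward(); Motor.B.forward();
            waitUntilSuppressed();
        }
    }

    static class Push extends SumoBehavior {          // push when the front bumper is pressed
        final TouchSensor front = new TouchSensor(SensorPort.S1);
        public boolean takeControl() { return front.isPressed(); }
        public void action() {
            suppressed = false;
            Motor.A.forward(); Motor.B.forward();     // full speed ahead while pressed
            waitUntilSuppressed();
        }
    }

    static class AvoidEdge extends SumoBehavior {     // highest priority: stay in the arena
        final LightSensor light = new LightSensor(SensorPort.S4);
        public boolean takeControl() { return light.getLightValue() > 50; } // white edge
        public void action() {
            suppressed = false;
            Motor.A.backward(); Motor.B.backward();   // back away from the edge
            waitUntilSuppressed();
        }
    }

    public static void main(String[] args) {
        Behavior[] byPriority = { new Seek(), new Push(), new AvoidEdge() };
        new Arbitrator(byPriority).start();
    }
}
```

Since the leJOS Arbitrator gives the last element of the array the highest priority, AvoidEdge shadows Push, which in turn shadows Seek, so staying inside the arena always wins.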
*(TODO: we edited the robot again; an image of the new look is to be added.)*
# Conclusion
# References