|
|
# Group 7
|
|
|
|
|
|
## Lab Notebook - Lesson 5
|
|
|
|
|
|
**Date:** 12/03 2015
|
|
|
|
|
|
**Group members participating:** Ivan Grujic, Lasse Brøsted Pedersen, Steffan Lildholdt, René Nilsson
|
|
|
|
|
|
**Activity duration:** 5 hours
|
|
|
|
|
|
## Goal
|
|
|
The goal of this exercise is to test three different sensors and physical robot constructions in a self-balancing robot context.
|
|
|
|
|
|
## Plan
|
|
|
The plan is to follow the instructions for Lesson 5 [1]. This plan is divided into three parts:
|
|
|
|
|
|
* Build the NXTWay robot [2] and test its self-balancing capabilities using a light sensor and PID control.
|
|
|
* Build the NXT Segway with Rider [3] and test its self-balancing capabilities using a color sensor and PID control.
|
|
|
* Use the NXT Segway with Rider and test self-balancing capabilities using a gyro sensor and alternative control algorithm.
|
|
|
|
|
|
|
|
|
## Exercise 1
|
|
|
Self-balancing robot with light sensor
|
|
|
|
|
|
### Setup
|
|
|
|
|
|
##### Physical setup
|
|
|
|
|
|
For this exercise we used a LEGO model built according to the description in [2]. The robot includes two motors, which are connected to ports A and C because ports B and C are connected to the same H-bridge. A light sensor is mounted on the lower part of the robot at a height of approximately 1.5 cm above the surface. In general it is a simple construction with a relatively high center of gravity, which will likely pose a challenge for the control mechanism. Our final model is shown in the following image.
|
|
|
|
|
|
![NXTWay robot with light sensor mounted](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/LightSensorRobot.JPG)
|
|
|
|
|
|
We know that the surface which the robot is placed on affects the control mechanism and therefore three different materials are tested. These are a clean white table, a wooden table and a carpet floor which are shown in the following image.
|
|
|
|
|
|
![Surfaces which the self-balancing NXTWay robot is tested on](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/LightSensorSurfaces.jpg)
|
|
|
|
|
|
We also know that the surrounding light can be a key factor when using a light sensor in a PID control context. Two cases of surrounding light are analyzed: natural light and artificial light.
|
|
|
|
|
|
##### Software setup
|
|
|
|
|
|
We have created a standard PID software architecture which makes the replacement of a sensor easy. The architecture is shown in the following image.
|
|
|
|
|
|
![General software architecture with a specific class for the light sensor extending the generic controller class](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/LightSensorSoftware.png)
|
|
|
|
|
|
The architecture consists of a generic PID controller with abstract methods to calculate the error and control signal. The specific PID controller, in this case the LightPIDController, will extend this class and define logic for these methods. The program uses the PCconnection class to establish a Bluetooth connection between the NXT and the PC which is used to pass parameters from a GUI on the PC to the NXT. The GUI is seen in the following image.
|
|
|
|
|
|
![PC GUI that offers modification of the control parameters on the NXT](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GUI.PNG)
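The split between the generic controller and the sensor-specific subclass can be sketched in plain Java as follows. This is a PC-runnable toy with the sensor I/O replaced by a plain field; all names except the two abstract methods are illustrative:

```java
// Toy version of the generic/specific controller split (no lejOS dependencies).
abstract class Controller {
    protected float kp;

    // The generic controller defers these two steps to the sensor-specific subclass.
    abstract int calculateError();
    abstract void controlSignal(float controlledValue);

    // One P-only control iteration, enough to show the template-method flow.
    void step() {
        controlSignal(kp * calculateError());
    }
}

// Stand-in for LightPIDController: the "sensor reading" is just a field.
class FakeLightController extends Controller {
    int reading;
    final int offset;
    float lastOutput; // captured instead of driving motors

    FakeLightController(float kp, int offset) {
        this.kp = kp;
        this.offset = offset;
    }

    @Override int calculateError() { return reading - offset; }
    @Override void controlSignal(float v) { lastOutput = v; }
}
```

With `kp = 2` and `offset = 460`, a reading of 470 produces a control output of 20, without the generic class knowing anything about the sensor.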
|
|
|
|
|
|
The `Segway` class is the main class which controls the flow of the program by first letting the user choose a balancing point which then is defined as an offset making the set point of the PID controller 0. Afterwards the control algorithm is applied while the parameters are shown on the LCD. The core control algorithm is located in the `GenericPIDController` class and is shown in the following.
|
|
|
|
|
|
```java
|
|
|
// Calculate parameters
|
|
|
error = calculateError();
|
|
|
integral = integral + error;
|
|
|
derivative = (error - lastError) / dt;
|
|
|
|
|
|
// Reset the integral part if the error crosses zero
|
|
|
if(Math.signum(error) != Math.signum(lastError))
|
|
|
integral = 0;
|
|
|
|
|
|
// Apply the PID regulation
|
|
|
controlledValue = (kp * error) + (ki * integral) + (kd * derivative);
|
|
|
controlSignal(controlledValue);
|
|
|
|
|
|
lastError = error;
|
|
|
```
|
|
|
|
|
|
where the functions `calculateError()` and `controlSignal()` are overridden by the specific PID controller, in this case the `LightPIDController`. The functions are shown in the following.
|
|
|
|
|
|
```java
|
|
|
@Override
|
|
|
protected int calculateError() {
|
|
|
LCD.drawString("Light value: " + lightsensor.readNormalizedValue() + " ",0,8);
|
|
|
log.writeSample(lightsensor.readNormalizedValue());
|
|
|
return lightsensor.readNormalizedValue() - offset;
|
|
|
}
|
|
|
```
|
|
|
|
|
|
```java
|
|
|
@Override
|
|
|
protected void controlSignal(float controlledValue) {
|
|
|
if (controlledValue > max_power)
|
|
|
controlledValue = max_power;
|
|
|
if (controlledValue < -max_power)
|
|
|
controlledValue = -max_power;
|
|
|
|
|
|
// Power derived from PID value:
|
|
|
int power = Math.abs((int)controlledValue);
|
|
|
power = default_power + (power * (max_power - default_power)) / max_power; // normalize power
|
|
|
|
|
|
if (controlledValue > 0) {
|
|
|
rightMotor.controlMotor(power, BasicMotorPort.FORWARD);
|
|
|
leftMotor.controlMotor(power, BasicMotorPort.FORWARD);
|
|
|
} else {
|
|
|
rightMotor.controlMotor(power, BasicMotorPort.BACKWARD);
|
|
|
leftMotor.controlMotor(power, BasicMotorPort.BACKWARD);
|
|
|
}
|
|
|
}
|
|
|
```
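The power normalization step maps the magnitude of the PID output from `[0, max_power]` onto `[default_power, max_power]`, so the motors always receive at least the minimum power ("Min power" in the table below). A PC-runnable sketch of the intended mapping, with our own names; note that the subtraction `max_power - default_power` must be parenthesized so the whole range is scaled:

```java
class PowerNormalization {
    // Map pidPower in [0, maxPower] onto [defaultPower, maxPower].
    static int normalize(int pidPower, int defaultPower, int maxPower) {
        return defaultPower + (pidPower * (maxPower - defaultPower)) / maxPower;
    }
}
```

For example, `normalize(0, 55, 100)` yields 55 and `normalize(100, 55, 100)` yields 100, so the output never drops below the minimum power that actually moves the robot.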
|
|
|
|
|
|
Due to our results from experimenting with the light sensor in lesson 1 we chose to initialize the sensor with the flood light turned on.
|
|
|
|
|
|
|
|
|
The entire code can be seen in [4].
|
|
|
|
|
|
### Results
|
|
|
|
|
|
By inspecting the performance we concluded that the best of the three surfaces was the clean white surface and therefore the following tuning and analysis of the PID parameters are performed on this surface. Despite our expectations the surrounding light did not seem to affect the light sensor control mechanism significantly.
|
|
|
|
|
|
In order to find the most suitable control parameters we started out by setting `Kp`, `Ki` and `Kd` to 0 and then adjusting one parameter at a time through the PC GUI. The procedure was the following:
|
|
|
|
|
|
1. Setting the parameter to a large value to inspect its effect.
|
|
|
2. Lowering the value until the performance began to worsen.
|
|
|
|
|
|
By doing this procedure for each parameter we ended up with the following estimate of the best possible setting of the control parameters:
|
|
|
|
|
|
| Parameter | Value |
|
|
|
| ------------- |:------:|
|
|
|
| Kp | 2 |
|
|
|
| Ki | 0.22 |
|
|
|
| Kd | 1.5 |
|
|
|
| Offset | 460 |
|
|
|
| Min power | 55 |
|
|
|
|
|
|
With this configuration the robot was able to self-balance in short intervals of approximately 1-2 seconds as seen in the video in the references section.
|
|
|
In order to investigate this behavior the data logger is used to collect the light sensor readings during the execution of the program. The end result of this is seen in the following image.
|
|
|
|
|
|
![Output of the light sensor when performing self-balancing](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Measurement/LightMeasurements.png)
|
|
|
|
|
|
This plot shows the PID controller's offset as the red line and the output of the light sensor as the blue graph.

When the LEGO robot tilts forward the light sensor gets closer to the surface, which lets in less surrounding light. However, the output value will increase because the sensor's own flood light is reflected off the nearby surface. In contrast, when the robot tilts backward the output value of the sensor decreases because less reflected light comes in.
|
|
|
|
|
|
In order for the LEGO robot to keep its balance it must constantly try to keep an upright position by applying motor force in the tilting direction (forward or backward). Due to the LEGO robot's high center of gravity it is difficult to maintain an upright position, resulting in the output toggling back and forth around the offset until the robot is no longer able to adjust for the tilting.
|
|
|
|
|
|
|
|
|
## Exercise 2
|
|
|
|
|
|
Self-balancing robot with color sensor
|
|
|
|
|
|
### Setup
|
|
|
|
|
|
##### Physical setup
|
|
|
|
|
|
For this exercise we used a LEGO model built according to the description in [3] with some minor modifications. Since the upright motor is not used in the case of a segway, the movement of the rider's upper body has been fixed. Furthermore, the motors driving the wheels are connected to motor ports A and C instead of A and B, in order to utilize both H-bridges.
|
|
|
|
|
|
In general this robot construction is taller than the NXTway [2], but it still has a lower center of gravity. The wheels have been changed to a slightly smaller and flatter pair, giving the robot more contact with the surface. We estimate that these changes will improve the control mechanism. A color sensor is mounted on the lower part of the robot at a height of approximately 1.5 cm above the surface, similar to the sensor placement in exercise 1.
|
|
|
|
|
|
The robot is shown in the following image.
|
|
|
|
|
|
![NXT Segway with Rider with light sensor mounted](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/ColorSensorRobot.JPG)
|
|
|
|
|
|
The procedure for testing this robot is similar to the procedure in exercise 1. The same surfaces and light conditions are analyzed with respect to the control mechanism.
|
|
|
|
|
|
##### Software setup
|
|
|
|
|
|
In this exercise the same software architecture as in exercise 1 is used, but the LightPIDController class is substituted with the ColorPIDcontroller class.
|
|
|
|
|
|
![General software architecture with a specific class for the color sensor extending the generic controller class](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/ColorSensorSoftware.png)
|
|
|
|
|
|
Due to our results from experimenting with the color sensor in lesson 1, we again chose to initialize the sensor with the flood light turned on.
|
|
|
|
|
|
### Results
|
|
|
|
|
|
The initial test with rough control parameters showed by inspection that, for this robot construction and color sensor as well, the clean white surface was the best of the three. Regarding the surrounding light we experienced a significant difference between natural light and artificial light, where the robot performed much better when surrounded by artificial light.
|
|
|
|
|
|
In order to find the most suitable control parameters the robot is placed on the white surface in a room lit only by artificial light, and then the same procedure as in exercise 1 is followed: starting out by setting `Kp`, `Ki` and `Kd` to 0 and then adjusting one parameter at a time through the PC GUI.
|
|
|
|
|
|
By doing this procedure for each parameter we ended up with the following estimate of the best possible setting of the control parameters:
|
|
|
|
|
|
| Parameter | Value |
|
|
|
| ------------- |:------:|
|
|
|
| Kp | 7.5 |
|
|
|
| Ki | 10 |
|
|
|
| Kd | 2.5 |
|
|
|
| Offset | 480 |
|
|
|
| Min power | 80 |
|
|
|
|
|
|
A video of the robot balancing with these control parameters applied on the white surface in natural lighting is found in the references section. During self-balancing the robot is not able to keep the exact position in which it was initially placed. The algorithm causes the robot to drift across the table, making it finally fall off the table. Until this occurs the robot self-balances for ~25 seconds (the video starts after the robot has balanced for ~10 seconds).
|
|
|
|
|
|
In order to compare these results with the results of exercise 1 the data logger is used to collect the color sensor readings during the execution of the program. The end result of this is seen in the following image.
|
|
|
|
|
|
![Output of the color sensor when performing self-balancing](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Measurement/ColorMeasurements.png)
|
|
|
|
|
|
This plot shows the light intensity given by the color sensor as the blue graph and the offset as the red line.
|
|
|
|
|
|
It is clear that the configuration in this exercise is far better than the configuration from exercise 1. The light intensity continues to fluctuate around the offset for a much longer time. Three times during the run of this program the robot needed human assistance in order to stay balanced. This is at `time = 16s`, `time = 26s` and `time = 37s`, where the light intensity drops, indicating that the robot is tilting backwards.
|
|
|
|
|
|
With the defined parameters the robot is again tested on the aforementioned surfaces. This resulted in the following.
|
|
|
|
|
|
* **Carpet**: The robot could not balance at all
|
|
|
* **Wooden table**: The robot could balance for 1-3 seconds
|
|
|
* **White surface**: The robot could balance for ~25 seconds. Ended by falling off the table.
|
|
|
|
|
|
From these results the color sensor seems superior to the light sensor analyzed in exercise 1. However, the robot configurations in these two exercises are incomparable due to differences in the general robot construction and in the center of gravity.
|
|
|
|
|
|
To perform a proper comparison between the two sensors, in a self-balancing context, the color sensor on the NXT Segway with Rider [3] is replaced by the light sensor and the same procedure from this exercise is carried out.
|
|
|
Despite our expectations the light sensor performed worse than the color sensor under the same conditions. The reason could be a difference in power level between the tests; otherwise the reason is unknown.
|
|
|
|
|
|
## Exercise 3
|
|
|
|
|
|
Self-balancing robot with gyro sensor
|
|
|
|
|
|
### Setup
|
|
|
|
|
|
##### Physical Setup
|
|
|
|
|
|
For this exercise, we used the LEGO model described in the "Physical Setup" section for exercise 2 as a base. Additionally, a gyro sensor was mounted on the robot. During the exercise we tried two different mounting points for the gyro sensor; these are discussed in the "Results" section of this exercise.
|
|
|
|
|
|
![Robot with sensor mounted on the shoulder.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroSetup2.png)
|
|
|
|
|
|
![Robot with sensor mounted on the lower part.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroSetup1.png)
|
|
|
|
|
|
##### Software Setup
|
|
|
|
|
|
An overview of the software architecture for this exercise is seen in the following image.
|
|
|
|
|
|
![Software architecture for exercise 3](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroSensorSoftware.png)
|
|
|
|
|
|
The controller used for this exercise is inspired by the controller described in [5]. Unlike the previous exercises, this controller is not a regular PID controller, and it monitors two values instead of one: the angular velocity and the tacho counts of the two motors. The angular velocity is integrated over time to obtain the current angle of the robot. The current motor position is calculated as the sum of the tacho counts from the two motors, and the current motor speed is then derived by differentiating the position with respect to time. The controller thus relies on four terms:
|
|
|
|
|
|
* **GyroAngle:** This term is responsible for causing the robot to return to the starting position (standing upright). If the robot starts in the upright position, that angle will be 0 degrees. This value will be positive if the robot is leaning forward, and negative if it is leaning backward, thus contributing to the control output.
|
|
|
* **GyroSpeed:** This term is non-zero when the robot is accelerating, and thus contributes to the control output when the robot is falling.
|
|
|
* **MotorPosition:** This term is non-zero when the wheels of the robot have turned from the starting point. Thus, this term is responsible for keeping the robot stationary, and is not related to staying upright. This term could be used to drive the robot to a desired location; however, we did not exploit this in this exercise.
|
|
|
* **MotorSpeed:** This term is non-zero when the wheels of the robot are rotating. This term keeps the robot from oscillating.
|
|
|
|
|
|
As in the previous exercises, a PC connection class is used to establish a Bluetooth connection between the NXT and the PC, letting us choose values for the control parameters on the fly.
|
|
|
|
|
|
The full implementation can be found at [6], with the corresponding PC program located at [7].
|
|
|
|
|
|
The important parts of the logic are shown in this section. The following listing shows how the angular velocity (`gyroSpeed`) and the angle (`gyroAngle`) are calculated from the raw gyro output:
|
|
|
|
|
|
```java
|
|
|
void calcGyroValues(float interval)
|
|
|
{
|
|
|
int gyroRaw = gyro.readValue();
|
|
|
|
|
|
// adjust the offset dynamically to correct for the drift of the sensor
|
|
|
offset = EMAOFFSET * gyroRaw + (1-EMAOFFSET) * offset;
|
|
|
|
|
|
//calculate speed and angle. Interval is the actual time for the last loop iteration.
|
|
|
gyroSpeed = gyroRaw - offset;
|
|
|
gyroAngle += gyroSpeed*interval;
|
|
|
}
|
|
|
```
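The offset tracking above is an exponential moving average: because `EMAOFFSET` is small, slow drift in the raw signal is absorbed into the offset while fast rotations pass through to `gyroSpeed`. A standalone sketch of the same idea (the `EMAOFFSET` value and the sample readings are illustrative, not the ones used on the robot):

```java
class GyroIntegration {
    static final float EMAOFFSET = 0.0005f; // illustrative smoothing factor
    float offset;
    float gyroSpeed, gyroAngle;

    GyroIntegration(float initialOffset) {
        this.offset = initialOffset;
    }

    void calcGyroValues(int gyroRaw, float interval) {
        // Slowly track the resting value of the sensor to compensate for drift.
        offset = EMAOFFSET * gyroRaw + (1 - EMAOFFSET) * offset;
        gyroSpeed = gyroRaw - offset;      // offset-corrected angular velocity
        gyroAngle += gyroSpeed * interval; // integrate velocity into an angle
    }
}
```

Feeding a constant resting reading leaves the angle near zero, while a burst of higher readings integrates into a clearly non-zero angle; this also illustrates why any small bias in `gyroSpeed` accumulates in `gyroAngle` over time.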
|
|
|
|
|
|
The motor terms `motorSpeed` and `motorPosition` are calculated as shown in the following listing:
|
|
|
|
|
|
```java
|
|
|
private void calcMotorValues(float interval)
|
|
|
{
|
|
|
//get encoder values
|
|
|
int left = leftMotor.getTachoCount();
|
|
|
int right = rightMotor.getTachoCount();
|
|
|
|
|
|
//position is the sum of the two tacho counts
|
|
|
int sum = left + right;
|
|
|
int delta = sum - oldMotorSum;
|
|
|
motorPosition += delta;
|
|
|
|
|
|
//calculate motorspeed as the average motor speed of the last 4 iterations.
|
|
|
motorSpeed = (delta + old1MotorDelta + old2MotorDelta + old3MotorDelta)
|
|
|
/ (4 * interval);
|
|
|
|
|
|
// Save old values for future use
|
|
|
oldMotorSum = sum;
|
|
|
|
|
|
old3MotorDelta = old2MotorDelta;
|
|
|
old2MotorDelta = old1MotorDelta;
|
|
|
old1MotorDelta = delta;
|
|
|
}
|
|
|
```
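The four-iteration average smooths the coarse integer tacho deltas into a usable speed estimate. A PC-runnable sketch with the same structure (class and parameter names are ours, and the tacho values are fed in directly instead of read from motors):

```java
class MotorOdometry {
    int oldMotorSum;
    int old1MotorDelta, old2MotorDelta, old3MotorDelta;
    int motorPosition;
    float motorSpeed;

    void calcMotorValues(int leftTacho, int rightTacho, float interval) {
        // Position is the sum of the two tacho counts.
        int sum = leftTacho + rightTacho;
        int delta = sum - oldMotorSum;
        motorPosition += delta;

        // Averaging over four iterations smooths the coarse integer deltas.
        motorSpeed = (delta + old1MotorDelta + old2MotorDelta + old3MotorDelta)
                / (4 * interval);

        // Save old values for the next iterations.
        oldMotorSum = sum;
        old3MotorDelta = old2MotorDelta;
        old2MotorDelta = old1MotorDelta;
        old1MotorDelta = delta;
    }
}
```

Advancing each tacho by one count per 0.01 s iteration settles `motorSpeed` at about 200 summed counts per second once the four-sample window has filled.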
|
|
|
|
|
|
The control loop calculates the average loop period, then the gyro and motor values. It then calculates and applies the control output:
|
|
|
|
|
|
```java
|
|
|
while(running)
|
|
|
{
|
|
|
interval = calcInterval(i, startTime);
|
|
|
calcGyroValues(interval);
|
|
|
calcMotorValues(interval);
|
|
|
|
|
|
float fpower = (KGYROSPEED * gyroSpeed +
|
|
|
KGYROANGLE * gyroAngle) / WHEELRATIO +
|
|
|
KPOS * motorPosition +
|
|
|
KSPEED * motorSpeed;
|
|
|
|
|
|
int power = Math.min(100, Math.abs((int)fpower));
|
|
|
int direction = fpower < 0 ? MotorPort.FORWARD : MotorPort.BACKWARD;
|
|
|
|
|
|
leftMotor.controlMotor(power, direction);
|
|
|
rightMotor.controlMotor(power, direction);
|
|
|
i++;
|
|
|
}
|
|
|
```
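The last few lines of the loop turn the signed float output into an NXT power command; this clamp-and-direction logic can be checked in isolation. A toy reproducing only those lines (the gain arithmetic is left out):

```java
class MotorCommand {
    // Clamp the magnitude of the control output to the NXT's 0-100 power range.
    static int power(float fpower) {
        return Math.min(100, Math.abs((int) fpower));
    }

    // As in the loop above, a negative control output drives the motors forward.
    static boolean forward(float fpower) {
        return fpower < 0;
    }
}
```

For example, `power(-42.7f)` is 42 applied in the forward direction, while `power(250f)` saturates at 100.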
|
|
|
|
|
|
### Results
|
|
|
|
|
|
In order to investigate the properties of the gyro sensor, it was mounted on the shoulder of the robot. The robot was then rotated about 90 degrees in one direction (1. movement), then 180 degrees in the opposite direction (2. movement), then rotated back 90 degrees to the starting point (3. movement). This procedure was carried out for all three axes, one at a time, as shown in video [Exercise 3.1].
|
|
|
|
|
|
![Raw output from gyro sensor during testing.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroData.png)
|
|
|
|
|
|
The data shows that the offset for this sensor is around 600. Furthermore, it can be seen from the data that the supplied gyro sensor only senses changes in angular velocity about one axis. This can be seen from the three spikes starting just before T = 8, which correspond to the three movements about a single axis.

The small fluctuations in the graph are mainly due to an unsteady hand when turning the robot. In order to test whether the gyro drifts, a test was performed where the gyro sensor was lying completely still. The results from this test are shown in the figure below.
|
|
|
|
|
|
![Raw output from gyro sensor during drift test.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Measurement/GyroDriftData.png)
|
|
|
|
|
|
The high variations in the beginning and the end of the graph are due to the button presses used to start and stop the test.

The output from the gyro sensor is an integer, which causes the output to take discrete values such as 601, 602 and 603.

The small changes seen in the graph might be due to drift or to inaccuracy in the measurements.

From the test we can conclude that the gyro sensor drifts very little over a period of half a minute, at least when the environment is not changing.

Other similar tests could be performed to test how the environment (temperature, humidity etc.) affects the drift.
|
|
|
|
|
|
To test the effect of the different gains on the robot, as described in the setup, we tried to control the robot with one gain at a time:
|
|
|
|
|
|
* **KGyroAngle**: Controlling the robot only with the gyro angle was not possible. The problem encountered was that the angle drifted with time. In the beginning of the test the angle was correctly 0 degrees when the robot was in an upright position, but during the test this changed to +5-10 degrees. This meant that the robot tried to maintain a position which was not upright, and therefore fell over.

* **KGyroSpeed**: The gyro speed did not drift in the same way as the angle, and as a consequence control using only the gyro speed was much better than using the angle, although it was not sufficient for the robot to maintain balance. Since this term quickly becomes non-zero when the robot is falling, it causes the control to act faster than relying on the angle alone does.

* **KSpeed**: The motor speed gain determines the resistance in the motor. The faster the wheel was spun, the higher the resistance from the motor.
|
|
|
* **KPos**: The motor position gain makes the wheels turn back to their original position. Increasing the gain increases the speed with which the motor turns back to the set-point. If increased too much, the motors start oscillating.
|
|
|
|
|
|
A possible cause for the erroneous angle could be that the gyro sensor is more sensitive in the forward direction than in the backward direction, causing a small error on each gyro speed calculation, which the integration over time then accumulates.

In the first test of the robot, the gyro sensor was placed on the shoulder of the robot (see the pictures in the "Physical Setup" section). This gave rise to a lot of fluctuations in the gyro sensor data, because the robot's upper part is very loosely connected to the lower part and therefore shakes a lot.

In the second part, the gyro sensor was attached to the lower part of the robot. This removed a lot of the fluctuations caused by the tremors.
|
|
|
|
|
|
## Conclusion
|
|
|
|
|
|
In this lesson we have performed experiments with various robot constructions and software implementations in order to create a self-balancing robot. We found that this is not a straightforward task and that many external factors play an important role.
|
|
|
|
|
|
Two sensors, a light sensor and a color sensor, were analyzed in a PID control context. With the right surface and the right lighting we were able to make the robot balance for ~25 seconds. However, just by introducing natural light the robot was only able to balance for ~2 seconds, which is a significant deterioration. The same applies to the surface, where a non-uniform surface also yields a deterioration. From this we can conclude that when using these types of sensors in a control context the surroundings should be kept in mind.
|
|
|
|
|
|
According to our results the color sensor performed better than the light sensor.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The self-balancing performance of the gyro based robot was low. This was mostly due to small errors in the angle calculation accumulating over time, causing the robot to lose track of the current angle and then fall over. Since the error in the calculated value was positive in all experiments, we think the sensor may have been more sensitive in the positive direction than in the negative. Other sources of error include the looseness of the construction, causing the robot, and in turn the gyro measurements, to oscillate. We also believe that the motors introduced tremors; however, we did not test this.
|
|
|
The setup of this robot could possibly be enhanced by introducing sensor fusion using a light or color sensor. This would introduce an absolute value corresponding to the angle of the robot, as opposed to the dead-reckoning style approach used in this exercise.
|
|
|
|
|
|
## References
|
|
|
|
|
|
[1] http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson5.dir/Lesson.html
|
|
|
|
|
|
[2] http://www.philohome.com/nxtway/nxtway.htm
|
|
|
|
|
|
[3] http://www.nxtprograms.com/NXT2/segway/index.html
|
|
|
|
|
|
[4] https://gitlab.au.dk/rene2014/lego/tree/master/Lesson5/Programs/SegwayOnTheFlyNXT
|
|
|
|
|
|
[5] http://www.hitechnic.com/blog/gyro-sensor/htway/
|
|
|
|
|
|
[6] https://gitlab.au.dk/rene2014/lego/tree/master/Lesson5/Programs/SegwayGyroSensor
|
|
|
|
|
|
[7] https://gitlab.au.dk/rene2014/lego/tree/master/Lesson5/Programs/SegwayGyroOnTheFlyPC
|
|
|
|
|
|
### Videos
|
|
|
|
|
|
[Exercise 1] - https://www.youtube.com/watch?v=kqeq5SVmsWQ&feature=youtu.be
|
|
|
|
|
|
[Exercise 2] - https://www.youtube.com/watch?v=tngSdW6aB80
|
|
|
|
|
|
[Exercise 3.1] - http://youtu.be/fDe9IyF4uy8
|
|
\ No newline at end of file |
|
|
# Group 7
|
|
|
|
|
|
## Lab Notebook - Lesson 5
|
|
|
|
|
|
**Date:** 12/03 2015
|
|
|
|
|
|
**Group members participating:** Ivan Grujic, Lasse Brøsted Pedersen, Steffan Lildholdt, René Nilsson
|
|
|
|
|
|
**Activity duration:** 5 hours
|
|
|
|
|
|
## Goal
|
|
|
The goal of this exercise is to test three different sensors and physical robot constructions in a self-balancing robot context.
|
|
|
|
|
|
## Plan
|
|
|
The plan is to follow the instructions for Lesson 5 [1]. This plan is divided into three parts:
|
|
|
|
|
|
* Build the NXTWay robot [2] and test its self-balancing capabilities using a light sensor and PID control.
|
|
|
* Build the NXT Segway with Rider [3] and test its self-balancing capabilities using a color sensor and PID control.
|
|
|
* Use the NXT Segway with Rider and test self-balancing capabilities using a gyro sensor and alternative control algorithm.
|
|
|
|
|
|
|
|
|
## Exercise 1
|
|
|
Self-balancing robot with light sensor
|
|
|
|
|
|
### Setup
|
|
|
|
|
|
##### Physical setup
|
|
|
|
|
|
For this exercise we used a LEGO model build according to the description in [2]. The robot includes two motors which are connected to port A and C due to port B/C being connected to the same H-bridge. A light sensor is mounted on the lower part of the robot in a height of approximately 1.5 cm above the surface. In general it is a simple construction with a relatively high center of gravity which will likely pose a challenge for the control mechanism. Our final model is shown in the following image.
|
|
|
|
|
|
![NXTWay robot with light sensor mounted.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/LightSensorRobot.JPG)
|
|
|
|
|
|
We know that the surface which the robot is placed on affects the control mechanism and therefore three different materials are tested. These are a clean white table, a wooden table and a carpet floor which are shown in the following image.
|
|
|
|
|
|
![Surfaces which the self-balancing NXTWay robot is tested on](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/LightSensorSurfaces.jpg)
|
|
|
|
|
|
We also know that the surrounding light can be a key factor when using a light sensor in a PID control context. Two cases of surrounding light is analyzed; natural light and artificial light.
|
|
|
|
|
|
##### Software setup
|
|
|
|
|
|
We have created a standard PID software architecture which makes the replacement of a sensor easy. The architecture is shown in the following image.
|
|
|
|
|
|
![General software architecture with a specific class for the light sensor extending the generic controller class](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/LightSensorSoftware.png)
|
|
|
|
|
|
The architecture consists of a generic PID controller with abstract methods to calculate the error and control signal. The specific PID controller, in this case the LightPIDController, will extend this class and define logic for these methods. The program uses the PCconnection class to establish a Bluetooth connection between the NXT and the PC which is used to pass parameters from a GUI on the PC to the NXT. The GUI is seen in the following image.
|
|
|
|
|
|
![PC GUI that offsers modification of the control parameters on the NXT](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GUI.PNG)
|
|
|
|
|
|
The `Segway` class is the main class which controls the flow of the program by first letting the user choose a balancing point which then is defined as an offset making the set point of the PID controller 0. Afterwards the control algorithm is applied while the parameters are shown on the LCD. The core control algorithm is located in the `GenericPIDController` class and is shown in the following.
|
|
|
|
|
|
```java
|
|
|
// Calculate parameters
|
|
|
error = calculateError();
|
|
|
integral = integral + error;
|
|
|
derivative = (error - lastError) / dt;
|
|
|
|
|
|
// Reset the the integral part if the error crosses zero
|
|
|
if(Math.signum(error) != Math.signum(lastError))
|
|
|
integral = 0;
|
|
|
|
|
|
// Apply the PID regulation
|
|
|
controlledValue = (kp * error) + (ki * integral) + (kd * derivative);
|
|
|
controlSignal(controlledValue);
|
|
|
|
|
|
lastError = error;
|
|
|
```
|
|
|
|
|
|
where the functions `calculateError()` and `controlSignal()` are overridden by the specific PID controller, in this case the `LightPIDController`. The functions are shown in the following.
|
|
|
|
|
|
```java
|
|
|
@Override
|
|
|
protected int calculateError() {
|
|
|
LCD.drawString("Light value: " + lightsensor.readNormalizedValue() + " ",0,8);
|
|
|
log.writeSample(lightsensor.readNormalizedValue());
|
|
|
return lightsensor.readNormalizedValue() - offset;
|
|
|
}
|
|
|
```
|
|
|
|
|
|
```java
|
|
|
@Override
|
|
|
protected void controlSignal(float controlledValue) {
|
|
|
if (controlledValue > max_power)
|
|
|
controlledValue = max_power;
|
|
|
if (controlledValue < -max_power)
|
|
|
controlledValue = -max_power;
|
|
|
|
|
|
// Power derived from PID value:
|
|
|
int power = Math.abs((int)controlledValue);
|
|
|
power = default_power + (power * max_power-default_power) / max_power; // NORMALIZE POWER
|
|
|
|
|
|
if (controlledValue > 0) {
|
|
|
rightMotor.controlMotor(power, BasicMotorPort.FORWARD);
|
|
|
leftMotor.controlMotor(power, BasicMotorPort.FORWARD);
|
|
|
} else {
|
|
|
rightMotor.controlMotor(power, BasicMotorPort.BACKWARD);
|
|
|
leftMotor.controlMotor(power, BasicMotorPort.BACKWARD);
|
|
|
}
|
|
|
}
|
|
|
```
|
|
|
|
|
|
Due to our results from experimenting with the light sensor in lesson 1 we chose to initialize the sensor with the flood light turned on.
|
|
|
|
|
|
|
|
|
The entire code can be seen in [4].
|
|
|
|
|
|
### Results
|
|
|
|
|
|
By inspecting the performance we concluded that the clean white surface was the best of the three, and the following tuning and analysis of the PID parameters are therefore performed on this surface. Contrary to our expectations, the surrounding light did not seem to affect the light sensor control mechanism significantly.
|
|
|
|
|
|
In order to find the most suitable control parameters we started out by setting `Kp`, `Ki` and `Kd` to 0 and then adjusting one parameter at a time through the PC GUI. The procedure was the following:
|
|
|
|
|
|
1. Setting the parameter to a large value to inspect its effect.

2. Lowering the value until the performance began to worsen.
|
|
|
|
|
|
By doing this procedure for each parameter we ended up with the following estimate of the best possible setting of the control parameters:
|
|
|
|
|
|
| Parameter | Value |
|
|
|
| ------------- |:------:|
|
|
|
| Kp | 2 |
|
|
|
| Ki | 0.22 |
|
|
|
| Kd | 1.5 |
|
|
|
| Offset | 460 |
|
|
|
| Min power | 55 |
|
|
|
|
|
|
With this configuration the robot was able to self-balance for short intervals of approximately 1-2 seconds, as seen in the video in the references section.
|
|
|
In order to investigate this behavior the data logger is used to collect the light sensor readings during the execution of the program. The end result of this is seen in the following image.
|
|
|
|
|
|
![Output of the light sensor when performing self-balancing](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Measurement/LightMeasurements.png)
|
|
|
|
|
|
This plot shows the PID controller's offset as the red line and the output of the light sensor as the blue graph.
|
|
|
When the LEGO robot tilts forward the light sensor gets closer to the surface, which lets in less surrounding light. Nevertheless, the output value increases because the sensor's own flood light is reflected more strongly off the nearby surface. In contrast, when the robot tilts backward the output value decreases because less reflected light reaches the sensor.
|
|
|
|
|
|
In order for the LEGO robot to keep its balance it must constantly try to maintain an upright position by applying motor force in the tilting direction (forward or backward). Due to the robot's high center of gravity this is difficult, resulting in oscillation back and forth around the offset until the robot is no longer able to compensate for the tilting.
|
|
|
|
|
|
|
|
|
## Exercise 2
|
|
|
|
|
|
Self-balancing robot with color sensor
|
|
|
|
|
|
### Setup
|
|
|
|
|
|
##### Physical setup
|
|
|
|
|
|
For this exercise we used a LEGO model built according to the description in [3] with some minor modifications. Since the upright motor is not used in the Segway case, the movement of the rider's upper body has been fixed. Furthermore, the motors driving the wheels are connected to motor ports A and C instead of A and B, in order to utilize both H-bridges.
|
|
|
|
|
|
In general this robot construction is taller than the NXTWay [2], yet it has a lower center of gravity. The wheels have been changed to a slightly smaller and flatter pair, giving the robot more contact with the surface. We expect these changes to improve the control mechanism. A color sensor is mounted on the lower part of the robot at a height of approximately 1.5 cm above the surface, similar to the sensor placement in exercise 1.
|
|
|
|
|
|
The robot is shown in the following image.
|
|
|
|
|
|
![NXT Segway with Rider with light sensor mounted.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/ColorSensorRobot.JPG)
|
|
|
|
|
|
The procedure for testing this robot is similar to the procedure in exercise 1. The same surfaces and lighting conditions are analyzed with respect to the control mechanism.
|
|
|
|
|
|
##### Software setup
|
|
|
|
|
|
In this exercise the same software architecture as in exercise 1 is used, but the `LightPIDController` class is substituted with the `ColorPIDcontroller` class.
|
|
|
|
|
|
![General software architecture with a specific class for the color sensor extending the generic controller class](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/ColorSensorSoftware.png)
|
|
|
|
|
|
Based on our results from experimenting with the color sensor in lesson 1, we again chose to initialize the sensor with the flood light turned on.
|
|
|
|
|
|
### Results
|
|
|
|
|
|
An initial test with arbitrary control parameters showed by inspection that, also for this robot construction and color sensor, the clean white surface was the best of the three. Regarding the surrounding light we experienced a significant difference between natural and artificial light: the robot performed much better when surrounded by artificial light.
|
|
|
|
|
|
In order to find the most suitable control parameters the robot is placed on the white surface in a room lit only by artificial light, and the same procedure as in exercise 1 is then followed: starting out by setting `Kp`, `Ki` and `Kd` to 0 and then adjusting one parameter at a time through the PC GUI.
|
|
|
|
|
|
By doing this procedure for each parameter we ended up with the following estimate of the best possible setting of the control parameters:
|
|
|
|
|
|
| Parameter | Value |
|
|
|
| ------------- |:------:|
|
|
|
| Kp | 7.5 |
|
|
|
| Ki | 10 |
|
|
|
| Kd | 2.5 |
|
|
|
| Offset | 480 |
|
|
|
| Min power | 80 |
|
|
|
|
|
|
A video of the robot balancing with these control parameters, applied on the white surface in natural lighting, is found in the references section. During self-balancing the robot is not able to keep the exact position it was initially placed in. The algorithm causes the robot to drift across the table, making it eventually fall off. Until this occurs the robot self-balances for ~25 seconds (the video starts after the robot has balanced for ~10 seconds).
|
|
|
|
|
|
In order to compare these results with the results of exercise 1 the data logger is used to collect the color sensor readings during the execution of the program. The end result of this is seen in the following image.
|
|
|
|
|
|
![Output of the color sensor when performing self-balancing.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Measurement/ColorMeasurements.png)
|
|
|
|
|
|
This plot shows the light intensity given by the color sensor as the blue graph and the offset as the red line.
|
|
|
|
|
|
It is clear that the configuration in this exercise is far better than the configuration from exercise 1. The light intensity continues to fluctuate around the offset for a much longer time. Three times during the run of this program the robot needs human assistance in order to stay balanced. This is at `time = 16s`, `time = 26s` and `time = 37s`, where the light intensity drops, indicating that the robot is tilting backwards.
|
|
|
|
|
|
With the defined parameters the robot is again tested on the aforementioned surfaces. This resulted in the following.
|
|
|
|
|
|
* **Carpet**: The robot could not balance at all
|
|
|
* **Wooden table**: The robot could balance for 1-3 seconds
|
|
|
* **White surface**: The robot could balance for ~25 seconds, ending by falling off the table.
|
|
|
|
|
|
From these results the color sensor seems superior to the light sensor analyzed in exercise 1. However, the robot configurations in the two exercises are not directly comparable due to differences in the general robot construction and in the center of gravity.
|
|
|
|
|
|
To perform a proper comparison between the two sensors, in a self-balancing context, the color sensor on the NXT Segway with Rider [3] is replaced by the light sensor and the same procedure from this exercise is carried out.
|
|
|
Contrary to our expectations, the light sensor performed worse than the color sensor under the same conditions. The reason could be a difference in battery power level between the tests; otherwise the reason is unknown.
|
|
|
|
|
|
## Exercise 3
|
|
|
|
|
|
Self-balancing robot with gyro sensor
|
|
|
|
|
|
### Setup
|
|
|
|
|
|
##### Physical Setup
|
|
|
|
|
|
For this exercise, we used the LEGO model described in the "Physical setup" section for exercise 2 as a base. Additionally, a gyro sensor was mounted on the robot. During the exercise we tried two different mounting points for the gyro sensor; these are discussed in the "Results" section of this exercise.
|
|
|
|
|
|
![Robot with sensor mounted on the shoulder.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroSetup2.png)
|
|
|
|
|
|
![Robot with sensor mounted on the lower part.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroSetup1.png)
|
|
|
|
|
|
##### Software Setup
|
|
|
|
|
|
An overview of the software architecture for this exercise is seen in the following image.
|
|
|
|
|
|
![Software architecture for exercise 3](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroSensorSoftware.png)
|
|
|
|
|
|
The controller used for this exercise is inspired by the controller described in [5]. Unlike the previous exercises, this controller is not a regular PID controller, and it monitors two quantities instead of one: the angular velocity from the gyro and the tacho counts of the two motors. The angular velocity is integrated over time to obtain the current angle of the robot. The current motor position is calculated as the sum of the tacho counts from the two motors, and the current motor speed is derived by differentiating the position with respect to time. The controller thus relies on four terms:
|
|
|
|
|
|
* **GyroAngle:** This term is responsible for causing the robot to return to the starting position (standing upright). If the robot starts in the upright position, that angle will be 0 degrees. This value will be positive if the robot is leaning forward, and negative if it is leaning backward, thus contributing to the control output.
|
|
|
* **GyroSpeed:** This term is non-zero when the robot is accelerating, and thus contributes to the control output when the robot is falling.
|
|
|
* **MotorPosition:** This term is non-zero when the wheels of the robot have turned from the starting point. Thus, this term is responsible for keeping the robot stationary, and is not related to staying upright. This term can be used to drive the robot to a desired location; however, we did not exploit this in this exercise.
|
|
|
* **MotorSpeed:** This term is non-zero when the wheels of the robot are rotating. This term keeps the robot from oscillating.
|
|
|
|
|
|
Like in the previous exercises a PC connection class is used to establish a Bluetooth connection between the NXT and the PC letting us choose values for the control parameters on the fly.
|
|
|
|
|
|
The full implementation can be found at [6], with the corresponding PC program located at [7].
|
|
|
|
|
|
The important parts of the logic are shown in this section. The following listing shows how the angular velocity (`gyroSpeed`) and the angle (`gyroAngle`) are calculated from the raw gyro output:
|
|
|
|
|
|
```java
|
|
|
void calcGyroValues(float interval)
{
    int gyroRaw = gyro.readValue();

    // Adjust the offset dynamically to correct for the drift of the sensor
    offset = EMAOFFSET * gyroRaw + (1 - EMAOFFSET) * offset;

    // Calculate speed and angle. Interval is the actual time of the last loop iteration.
    gyroSpeed = gyroRaw - offset;
    gyroAngle += gyroSpeed * interval;
}
|
|
|
```
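The offset update above is an exponential moving average (EMA). As a standalone sketch (the `EMAOFFSET` value of 0.0005 is a hypothetical smoothing factor, not necessarily the one in our program), feeding a constant raw value shows the offset slowly converging to it while reacting only weakly to any single sample:

```java
// Standalone sketch of the EMA offset tracking in calcGyroValues().
// EMAOFFSET = 0.0005 is a hypothetical smoothing factor.
public class EmaOffsetSketch {
    static final double EMAOFFSET = 0.0005;

    static double update(double offset, int gyroRaw) {
        return EMAOFFSET * gyroRaw + (1 - EMAOFFSET) * offset;
    }

    public static void main(String[] args) {
        double offset = 0;
        // feed a constant raw value of 600 (roughly the offset we measured)
        for (int i = 0; i < 20000; i++)
            offset = update(offset, 600);
        System.out.println(offset); // close to 600 after many samples
    }
}
```

A single outlier sample moves the offset by only 0.05% of the difference, which is what makes the estimate robust against the robot's own motion.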
|
|
|
|
|
|
The motor terms `motorSpeed` and `motorPosition` are calculated as shown in the following listing:
|
|
|
|
|
|
```java
|
|
|
private void calcMotorValues(float interval)
{
    // Get encoder values
    int left = leftMotor.getTachoCount();
    int right = rightMotor.getTachoCount();

    // Position is the sum of the two tacho counts
    int sum = left + right;
    int delta = sum - oldMotorSum;
    motorPosition += delta;

    // Calculate the motor speed as the average speed of the last 4 iterations
    motorSpeed = (delta + old1MotorDelta + old2MotorDelta + old3MotorDelta)
                 / (4 * interval);

    // Save old values for future use
    oldMotorSum = sum;
    old3MotorDelta = old2MotorDelta;
    old2MotorDelta = old1MotorDelta;
    old1MotorDelta = delta;
}
|
|
|
```
|
|
|
|
|
|
The control loop first calculates the average loop period, then the gyro and motor values, and finally computes and applies the control output:
|
|
|
|
|
|
```java
|
|
|
while (running)
{
    interval = calcInterval(i, startTime);
    calcGyroValues(interval);
    calcMotorValues(interval);

    // Combine the four terms into a single control output
    float fpower = (KGYROSPEED * gyroSpeed +
                    KGYROANGLE * gyroAngle) / WHEELRATIO +
                   KPOS * motorPosition +
                   KSPEED * motorSpeed;

    int power = Math.min(100, Math.abs((int) fpower));
    int direction = fpower < 0 ? MotorPort.FORWARD : MotorPort.BACKWARD;

    leftMotor.controlMotor(power, direction);
    rightMotor.controlMotor(power, direction);
    i++;
}
|
|
|
```
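As a standalone illustration of the combination step, the same arithmetic can be run with plain numbers. All gains and the `WHEELRATIO` below are hypothetical example values, not values tuned on the robot:

```java
// Standalone sketch of the control-output combination in the loop above.
// All gains and WHEELRATIO are hypothetical example values.
public class GyroControlSketch {
    static final float KGYROSPEED = 1.3f, KGYROANGLE = 7.5f;
    static final float KPOS = 0.07f, KSPEED = 0.1f;
    static final float WHEELRATIO = 1.0f;

    static float fpower(float gyroSpeed, float gyroAngle,
                        float motorPosition, float motorSpeed) {
        return (KGYROSPEED * gyroSpeed + KGYROANGLE * gyroAngle) / WHEELRATIO
                + KPOS * motorPosition + KSPEED * motorSpeed;
    }

    static int power(float fpower) {
        // magnitude clamped to the motor range [0, 100]
        return Math.min(100, Math.abs((int) fpower));
    }

    public static void main(String[] args) {
        // robot leaning slightly forward, wheels a bit ahead of start
        float f = fpower(4f, 2f, 10f, 5f);
        System.out.println(f);
        System.out.println(power(f));
        System.out.println(f < 0 ? "FORWARD" : "BACKWARD");
    }
}
```

Note that the sign of `fpower` only selects the direction; the magnitude is clamped before being sent to both motors.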
|
|
|
|
|
|
### Results
|
|
|
|
|
|
In order to investigate the properties of the gyro sensor, it was mounted on the shoulder of the robot. The robot was then rotated about 90 degrees in one direction (1st movement), then 180 degrees in the opposite direction (2nd movement), then rotated back 90 degrees to the starting point (3rd movement). This procedure was carried out for all three axes, one at a time, as shown in video [Exercise 3.1].
|
|
|
|
|
|
![Raw output from gyro sensor during testing.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Images/GyroData.png)
|
|
|
|
|
|
The data shows that the offset for this sensor is around 600. Furthermore, it can be seen that the supplied gyro sensor only senses angular velocity around one axis. This is evident from the three spikes starting just before T = 8, which correspond to the three movements around a single axis.
|
|
|
The small fluctuations in the graph are mainly due to an unsteady hand when turning the robot. In order to test whether the gyro drifts, a test was performed where the gyro sensor was lying completely still. The results from this test are shown in the figure below.
|
|
|
|
|
|
![Raw output from gyro sensor during drift test.](https://gitlab.au.dk/rene2014/lego/raw/master/Lesson5/Measurement/GyroDriftData.png)
|
|
|
|
|
|
The high variations at the beginning and the end of the graph are due to the button presses used to start and stop the test.

The output from the gyro sensor is an integer, which causes the output to take discrete values such as 601, 602 and 603. The small changes seen in the graph might be due to drift or to inaccuracy in the measurements.

From the test we can conclude that the gyro sensor drifts very little over a period of half a minute, at least when the environment is not changing. Other similar tests could be performed to examine how the environment (temperature, humidity, etc.) affects the drift.
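The offset and spread we read off the graph can also be computed directly from logged samples. The sample values below are made up to resemble the quantized readings described above:

```java
// Sketch: estimating the stationary offset and spread from logged
// gyro samples. The sample array is made up for illustration.
public class DriftEstimate {
    static double mean(int[] samples) {
        long sum = 0;
        for (int s : samples) sum += s;
        return (double) sum / samples.length;
    }

    static int peakToPeak(int[] samples) {
        int min = samples[0], max = samples[0];
        for (int s : samples) {
            if (s < min) min = s;
            if (s > max) max = s;
        }
        return max - min;
    }

    public static void main(String[] args) {
        int[] still = {601, 602, 601, 603, 602, 601, 602, 602};
        System.out.println(mean(still));       // estimated offset
        System.out.println(peakToPeak(still)); // spread of the readings
    }
}
```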
|
|
|
|
|
|
To test the effect of the different gains, as described in the setup, we tried to control the robot with one gain at a time:
|
|
|
|
|
|
* **KGyroAngle**: Controlling the robot with only the gyro angle was not possible. The problem encountered was that the angle drifted over time. At the beginning of the test the angle was correctly 0 degrees when the robot was in an upright position, but during the test this changed to +5-10 degrees. This meant that the robot tried to maintain a position which was not upright, and therefore fell over.
|
|
|
* **KGyroSpeed**: The gyro speed did not drift in the same way as the angle, and as a consequence, control using only the gyro speed was much better than with the angle alone, although it was not sufficient for the robot to maintain balance.
|
|
|
* **KSpeed**: The motor speed gain determines the resistance in the motor: the faster the wheels are spun, the higher the resistance from the motor.
|
|
|
* **KPos**: The motor position gain makes the wheels turn back to their original position. Increasing the gain increases the speed with which the motor returns to the set point. If increased too much, the motors start oscillating.
|
|
|
|
|
|
A possible cause for the erroneous angle could be that the gyro sensor is more sensitive in the forward direction than in the backward direction, causing a small error in each gyro speed calculation which the integration over time then accumulates.
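This accumulation effect is easy to reproduce numerically: a small constant bias in the measured angular velocity grows linearly in the integrated angle. A minimal sketch, with a hypothetical bias and loop interval:

```java
// Sketch: a small constant bias in gyroSpeed accumulates linearly in
// the integrated gyroAngle. Bias and interval values are hypothetical.
public class GyroDriftSketch {
    static float integrate(float bias, float interval, int iterations) {
        float angle = 0;
        for (int i = 0; i < iterations; i++)
            angle += bias * interval; // same update as gyroAngle += gyroSpeed * interval
        return angle;
    }

    public static void main(String[] args) {
        // a 0.5 deg/s bias integrated at 100 Hz for 20 s
        System.out.println(integrate(0.5f, 0.01f, 2000)); // roughly 10 degrees of drift
    }
}
```

A bias of only half a degree per second thus explains an error of the magnitude we observed within tens of seconds.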
|
|
|
In the first test, the gyro sensor was placed on the shoulder of the robot (see the pictures in the "Physical Setup" section). This gave rise to a lot of fluctuations in the gyro sensor data, because the robot's upper part is very loosely connected to the lower part and therefore shakes a lot.
|
|
|
In the second test, the gyro sensor was attached to the lower part of the robot. This removed many of the high-frequency fluctuations caused by the tremors.
|
|
|
|
|
|
## Conclusion
|
|
|
|
|
|
In this lesson we have performed experiments with various robot constructions and software implementations in order to create a self-balancing robot. We found that this is not a straightforward task and that many external factors play an important role.
|
|
|
|
|
|
Two sensors, a light sensor and a color sensor, were analyzed in a PID control context. With the right surface and the right lighting we were able to make the robot balance for ~25 seconds. However, just by introducing natural light the robot was only able to balance for ~2 seconds, which is a significant deterioration. The same applies to the surface, where a non-uniform surface also yields a deterioration. From this we can conclude that when using these types of sensors in a control context, the surroundings must be kept in mind.
|
|
|
|
|
|
According to our results the color sensor performed better than the light sensor.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The self-balancing performance of the gyro based robot was low. This was mostly due to small errors in the angle calculation accumulating over time, causing the robot to lose track of the current angle and then fall over. Since the error in the calculated value was positive in all experiments, we think the sensor may be more sensitive in the positive direction than in the negative. Other sources of error include the looseness of the construction, causing the robot, and in turn the gyro measurements, to oscillate. We also believe that the motors introduced tremors; however, we did not test this.
|
|
|
The setup of this robot could be enhanced by introducing sensor fusion using a light or color sensor. This would provide an absolute value corresponding to the angle of the robot, as opposed to the dead-reckoning style approach used in this exercise.
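One common way to realize such sensor fusion is a complementary filter, where the integrated gyro angle supplies the fast dynamics and the absolute angle estimate slowly pulls out the accumulated drift. We did not implement this; the following is only a sketch with a hypothetical blending factor:

```java
// Sketch of a complementary filter fusing the integrated gyro angle
// with an absolute angle estimate (e.g. derived from a light sensor).
// ALPHA is a hypothetical blending factor; we did not implement this
// on the robot.
public class ComplementaryFilter {
    static final float ALPHA = 0.98f;
    private float angle = 0;

    float update(float gyroSpeed, float absoluteAngle, float interval) {
        // trust the gyro for fast changes, the absolute sensor for the mean
        angle = ALPHA * (angle + gyroSpeed * interval)
              + (1 - ALPHA) * absoluteAngle;
        return angle;
    }

    public static void main(String[] args) {
        ComplementaryFilter f = new ComplementaryFilter();
        // stationary robot: the gyro reports a constant bias of 0.5 deg/s,
        // while the absolute sensor correctly reports 0 degrees
        float a = 0;
        for (int i = 0; i < 5000; i++)
            a = f.update(0.5f, 0f, 0.01f);
        System.out.println(a); // stays bounded instead of drifting to 25 degrees
    }
}
```

With pure integration the same bias would accumulate to 25 degrees over these 50 seconds; the absolute reference keeps the estimate bounded near zero.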
|
|
|
|
|
|
Although no specific test of motor and sensor performance versus battery level was performed, it is our clear perception that all the tested Segway applications perform better when the battery level is high. This might be because the performance of the sensors and motors is lower when the battery is low, or because the motors affect the sensors more.
|
|
|
|
|
|
## References
|
|
|
|
|
|
[1] http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson5.dir/Lesson.html
|
|
|
|
|
|
[2] http://www.philohome.com/nxtway/nxtway.htm
|
|
|
|
|
|
[3] http://www.nxtprograms.com/NXT2/segway/index.html
|
|
|
|
|
|
[4] https://gitlab.au.dk/rene2014/lego/tree/master/Lesson5/Programs/SegwayOnTheFlyNXT
|
|
|
|
|
|
[5] http://www.hitechnic.com/blog/gyro-sensor/htway/
|
|
|
|
|
|
[6] https://gitlab.au.dk/rene2014/lego/tree/master/Lesson5/Programs/SegwayGyroSensor
|
|
|
|
|
|
[7] https://gitlab.au.dk/rene2014/lego/tree/master/Lesson5/Programs/SegwayGyroOnTheFlyPC
|
|
|
|
|
|
### Videos
|
|
|
|
|
|
[Exercise 1] - https://www.youtube.com/watch?v=kqeq5SVmsWQ&feature=youtu.be
|
|
|
|
|
|
[Exercise 2] - https://www.youtube.com/watch?v=tngSdW6aB80
|
|
|
|
|
|
[Exercise 3.1] - http://youtu.be/fDe9IyF4uy8 |