Lab Notebook 5
Date: 27/03/2015.
Group members participating: Tine Hansen, Daniel Moltzen, Mads Eriksen, Lars Høeg.
Activity duration: 30 hours.
The report is structured as follows: the first two sections describe our overall goal and how we are going to achieve it. The next sections cover the three different types of self-balancing robots we are going to experiment with; each of these sections contains our results and findings, and a conclusion on them.
In the last section we will draw an overall conclusion on the three types of robots and compare them to each other. We will further reflect upon what we could have done differently and what impact that could have had.
The overall goal of this lab report is to experiment with three different ways of creating a self-balancing robot: with a light sensor, with a color sensor, and lastly with a gyro sensor.
We are planning to conduct the different robot experiments under the same conditions, namely a white surface placed in a semi-dark room. The light and color sensors are placed orthogonal to the surface. We are further going to experiment with other conditions and use these as a means of reflection. In general we will perform the experiments with the opportunity to manipulate the parameters in the code on-the-fly. We have therefore implemented a GUI in which we can alter the kP, kI and kD values, and programmed the right and left buttons on the robot to increase and decrease the offset.
Self-balancing robot with light sensor
Picture 1: Robot setup and height from sensor to surface, inspired by Philippe Hurbain.
Below is a video of our end result when experimenting with the self-balancing robot that uses a light sensor.
Video 1: Self-balancing robot with light sensor
As seen in Video 1, we were not able to make the robot balance completely by itself, only for 1 or 2 seconds, after which it tends to lose balance and tip over. Before this video was recorded we observed a better result, but unfortunately we did not get it on tape. We tried many different setups and parameters, and this constellation was the best one we obtained. We also changed the wheels to see if that made the robot steadier, but did not see any difference in the balancing. Our parameter tuning and code optimisation, together with the context for the robot, will be presented and discussed in the following.
Our program is based on the code in sejway.java by Brian Bagnall . We have extended it so the PID variables and the offset can be controlled on-the-fly; it can be found in references [8,9]. We have further tweaked the original program and changed some of the choices he made, as commented below.
// Adjust far and near light readings:
if (error < 0)
    error = (int)(error * 1.8F);
// Integral Error:
int_error = ((int_error + error) * 2) / 3;
Code Snippet 1
As seen in the code snippet above, Bagnall tries to keep the integral from growing out of proportion by multiplying it by 2/3. We have removed this and replaced it with a check on whether the error changes signum, in which case the integral is reset (set to 0). He also adjusts the error by multiplying it by 1.8 when it is less than 0, as a means of adjusting the far and near light readings. We tried removing this, because we were not quite sure why it should be done, and actually obtained better results.
Bagnall further scales the entire calculated PID value, as seen in Code Snippet 2. It looks like he does this because the parameters use integer data types, meaning that he cannot use decimals and instead corrects for this with a scale variable.
int pid_val = (int)(KP * error + KI * int_error + KD * deriv_error) / SCALE;
Code Snippet 2
At first, we changed the data types of the variables from integer to float. Unfortunately we had to revert this, because we ran into trouble when implementing the on-the-fly change of parameters: the GUI would not accept decimal digits, even though we told it to read a float, so we had to rely on integers and the scale variable. This is not optimal, as we can lose some decimals along the way, but in our case it is necessary for changing the parameters on-the-fly, and thereby for an easier and better way of tuning the robot.
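To illustrate the precision trade-off of the scale variable, here is a small, self-contained sketch (plain Java, no NXT classes; the gain values are made up for illustration). An intended gain of 4.00 is stored as kp = 400 with scale = 100, and the final integer division truncates any remaining fraction:

```java
// Illustration of the integer + scale workaround. Hypothetical values:
// an intended gain of 4.00 is stored as kp = 400 with scale = 100.
public class ScaleDemo {
    // Multiply first, divide last, so precision is only lost once.
    static int scaledGain(int kp, int error, int scale) {
        return (kp * error) / scale;
    }

    public static void main(String[] args) {
        System.out.println(scaledGain(400, 3, 100)); // 4.00 * 3 = 12
        System.out.println(scaledGain(433, 3, 100)); // 4.33 * 3 = 12.99, truncated to 12
    }
}
```

The second call shows the decimals we can lose: both gains produce 12, even though the true products differ.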
In the next experiment (with the other type of self-balancing robots) we will be using the same program as a base layout, and as mentioned before it can be found in references [8,9]. Below is a snippet (Code Snippet 3) from the program that illustrates how we calculate the PID value and resets the integral.
// Proportional Error:
int error = normVal - offset;
// Reset integral when the error is zero or changes signum:
if (error == 0 || !hasSameSignum(error, prev_error))
    int_error = 0;
// Integral Error:
int_error = int_error + error;
// Derivative Error:
int deriv_error = error - prev_error;
prev_error = error;
int pid_val = (int)(kp * error + ki * int_error + kd * deriv_error) / scale;
Code Snippet 3[8,9]
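The control step in Code Snippet 3 can be exercised off-robot. Below is a minimal, self-contained sketch in plain Java (no NXT classes; the gains and sensor readings are illustrative only, not our tuned values):

```java
// Minimal sketch of the PID step from Code Snippet 3, including the
// signum-based integral reset. Plain Java, no NXT classes; the gains
// and sensor readings below are illustrative only.
public class PidSketch {
    private final int kp, ki, kd, scale;
    private int intError = 0;
    private int prevError = 0;

    public PidSketch(int kp, int ki, int kd, int scale) {
        this.kp = kp; this.ki = ki; this.kd = kd; this.scale = scale;
    }

    private static boolean hasSameSignum(int a, int b) {
        return (a >= 0) == (b >= 0);
    }

    // One control step: normVal is the current sensor reading.
    public int step(int normVal, int offset) {
        int error = normVal - offset;               // proportional error
        if (error == 0 || !hasSameSignum(error, prevError))
            intError = 0;                           // reset integral on signum change
        intError += error;                          // integral error
        int derivError = error - prevError;         // derivative error
        prevError = error;
        return (kp * error + ki * intError + kd * derivError) / scale;
    }

    public static void main(String[] args) {
        PidSketch pid = new PidSketch(400, 100, 600, 100);
        System.out.println(pid.step(55, 50)); // error 5  -> 55
        System.out.println(pid.step(57, 50)); // error 7  -> 52
        System.out.println(pid.step(48, 50)); // error -2, integral resets -> -64
    }
}
```

The third step shows the reset in action: the error flips sign, so the accumulated integral is discarded before the new error is added.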
We have found our PID values without complex math, as mentioned in , meaning that we used trial and error until we found the best value for each parameter. We further tuned them in relation to Table 1, which states the effects of increasing the parameters independently. The best values we found were Kp=400, Ki=0, Kd=600 (remember we scale the values by 100); these gave the best balance, although only for a few seconds, as seen in Video 1. We found it difficult to tune the Ki parameter: every time we increased it by more than 1, the robot's acceleration became too big and it overshot and crashed. The best value for Ki we found was 0.01 (1 before scaling); an example of this can be seen in Video 2, where we used Kp=400, Ki=1, Kd=600 as parameters.
We further tried to scale all the values up to Kp=800, Ki=200, Kd=1600, but this gave an even worse result, as seen in Video 3. Lastly, we tried even more tuning, but eventually stopped when realising that it was almost impossible for the robot to self-balance using the light sensor only.
Table 1: Effects of increasing a parameter independently (Wiki) 
Different types of surfaces were used as the floor for the experiment, including a grey linoleum floor, a wooden table and white sheets of plain paper. The white surface gave the best results and was therefore chosen as the general surface. The reason for this was that the values detected by the light sensor varied far more at the same spot on the grey and wooden surfaces, compared to the white one.
Picture 2: Different types of surfaces; to the left grey linoleum, in the middle wood and to the right white A3 paper.
We did not find this test worth trying, because of the difficulties we already experienced on the planar surface.
In regards to how the light level in the room affected the robot, we experimented with a setting where the lights in the room were turned on, and with a semi-dark room. We found that there were too many shadows in the setting with the lights turned on, and that these conflicted with the light sensor and its detected values. Hence, the best context was the semi-dark room with drapes closed and lights turned off. The problems that arose with the lights turned on could also be caused by the light source not being a fluorescent lamp. A fluorescent lamp emits almost no infrared light, meaning that the sensor to some extent is insensitive to its light, as mentioned in .
We further tried to use the light sensor mounted on another robot design (The Segway with Rider) , where the light sensor is placed higher above the surface and the center of gravity is different. Here we experienced very little change, and the results were as poor as with the other design.
Picture 3: Robot setup and height from sensor to surface, inspired by “The Segway with Rider” .
In the process of making the robot self-balance, we performed different experiments. By choosing the PID values with a trial and error method we found that the values Kp=400, Ki=0, Kd=600, all scaled by 100, worked best, even though this resulted in no more than a few seconds of balance. We further tried to change the wheels on the robot to see if that changed the behaviour; the results were more or less the same. As we have seen in the experiments in the earlier lessons, the reflection of the surface impacts the sensor, which is why we also tried different surfaces with different colors and textures. One of the most important factors in the surroundings is still the shadows that interfere with the surface; darkening the room also helped our robot balance better.
Further experimentation led to mounting the light sensor on another robot design, as seen in Picture 3, but the results here were just as poor as with the first design.
Self-balancing robots with color sensor
Picture 4: Robot setup and height from sensor to surface, inspired by “The Segway with Rider”.
Below is a video of our end result when experimenting with the self-balancing robot that uses a color sensor.
Video 4: Self-balancing robot with color sensor
Video 4 shows that the robot can balance when the room is dark. It balances on white paper using the white light from the sensor. We had to find the perfect balancing point when setting the offset and starting the program: the robot compares the live sensor data to the offset and tries to reach the offset's value. As seen in the video, the wheels are moving all the time, trying to find the right angle. In the end, when it begins to oscillate, it is not possible for the robot to get back to the perfect offset.
We further tried using only the red light from the color sensor, but did not see any remarkable changes and therefore did not document that experiment. Together with the red light we also tried the robot in a completely dark room, but again we did not get better results.
In the following we will discuss how we tuned the parameters and reflect more upon the context for the robot.
The code implemented [12,13] to achieve the results for this type of robot is almost identical to the previous one [8,9]. Both the GUI and the NXT software are the same, except that we use the color sensor instead of the light sensor, which also gives us the opportunity to change the color projected from the sensor, as seen in Code Snippet 4.
cs = new ColorSensor(SensorPort.S2);
cs.setFloodLight(Color.WHITE); // Could also be RED, etc.
Code Snippet 4[12,13]
We found the PID values in the same way as in the first experiment, i.e. by trial and error inspired by Table 1.
The best values we found were Kp=300, Ki=20, Kd=1000 with an overall scale of 100. If we increase Kp the robot starts to overshoot and becomes unstable; with a smaller value the robot has problems with the rise time and starts drifting one way or the other. This time Ki was most effective with a value of 20. We tried once more with a Ki value of 0, but this time it had another effect: without a Ki value the robot could not find any steady state and continued to overshoot a bit every time. We ended up with a Kd value of 1000, which helped us handle the overshooting, although with a larger margin. The effect of values between 800 and 1200 was almost the same, but for some reason 1000 had a slightly more positive effect on the result.
In the main experiments we used a horizontal surface. This worked fine, but we wanted to see how well the robot would stay in balance on an angled surface. As seen in Video 5, the robot balances for about 10 seconds on an 8° surface. A steeper angle does not work; it seems the motors become insufficient and the values from the sensor too diverse for the robot to stay in balance.
Video 5: Self-balancing robot with color sensor on non-planar surface
The surface color on which we had the best results was white. When trying on a grey textured floor there was too much disturbance for the robot to self-balance. We tried a wooden table in the surface-angle experiment; this seemed to work better than when the wooden table was horizontal. We do not know if this is a coincidence, or if the table's reflection of the light from the sensor is better when the angle of reflection is away from the sensor.
We tried almost the same as in the first experiment; the only exception was that we also wanted to try the robot in a completely dark room, with no light source at all. This did not seem to affect the robot in any particular way: the results from the dark room and the semi-dark room were too similar to say which one was better. We used light for the angle experiment, where the light source was the phone recording the video. This light source did not seem to affect the robot's sensor. This can be due to the angle of the light source, which most of the time did not cast shadows that could interfere, or it may indicate that the color sensor is not as sensitive to shadows as the light sensor. Whether this could be due to a stronger light on the color sensor that drowns out the shadows, we do not know.
Our results indicate that the color sensor worked better than the light sensor: the robot did not work at all with the light sensor, but worked very well with the color sensor. We cannot say exactly why, but discussed several options, such as the color sensor containing better technology and possibly updating faster to the NXT. Another, more plausible, possibility is that the color sensor is less disturbed by light and shadows. We further saw that the robot could balance for about 10 seconds on a non-planar surface, and that it was not as sensitive to surface color and texture when the surface was non-planar. The best PID values we found for the color sensor were Kp=300, Ki=20, Kd=1000 with an overall scale of 100, which resulted in the robot staying in balance for at least 15 seconds.
Self-balancing robots with gyro sensor
Picture 5: Robot setup with GYRO sensor attached. Design inspired by .
Below is a video of our end result when experimenting with the self-balancing robot that uses a gyro sensor.
Video 7: Self-balancing robot with gyro sensor
In the last experiment we got the robot to work and balance by itself (until the battery runs out), as seen in Video 7. It even kept working when we gave it a little push. After trying a lot of different parameters we finally found the ones that gave a good result. In the sections below we comment on how we achieved this.
Video 6: Self-balancing robot with gyro sensor - Drifting
In most of the experiments with the gyro sensor we had a big problem with drifting. When executing the program, the robot almost instantly started drifting, and we therefore tried to change the code to use different parameters. This did not make the robot balance perfectly, and we ended up not being able to find a solution to this problem. It was a problem for many other groups as well, so we did the best we could to make a self-balancing robot with the gyro sensor; even when we tried some of the same variables as the other groups, it still would not work.
Another problem when running the program on the NXT was, as seen in Video 6, that the offset kept changing by itself. This resulted in the robot leaning forward by itself, unable to find a balancing offset afterwards. We tried to remove the offset from the code and place the robot flat on the table, but the offset still kept moving forwards, ultimately causing the robot to fall.
We tested the gyro sensor and its capabilities to get a better understanding of how it works. Three tests were performed: a stationary gyro sensor without motors running, movement of the gyro sensor without motors running, and a stationary gyro sensor with motors running. The gyro sensor was placed on the robot as seen in Picture 5.
Graph 1: Stationary gyro sensor without motors running.
In the first test, the gyro sensor was lying still with no running motors. We hoped that this would give us a steadier offset, because the voltage of the NXT would not change.
Graph 1 shows that the median of the gyro sensor's data is about 603, varying by ±2. This gives a good idea of the gyro's offset state, since it is lying flat with no other movement and no motors running. However, there are some factors that could be in play here, e.g.  states that the gyro has to be warm for proper readings. It is further mentioned that the power level of the battery also plays a role, as does the voltage of the NXT when the motors are running.
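A resting offset like the ~603 above can be estimated at startup by averaging a number of readings taken while the robot is held still. A minimal sketch (plain Java; the sample values are made up, chosen around our measured median, and on the NXT the loop would read the sensor instead):

```java
// Sketch of estimating the gyro's resting offset by averaging samples
// taken while the robot is held still. The sample values are made up,
// chosen around the ~603 median we measured; on the NXT the array
// would be filled by reading the sensor in a loop.
public class GyroOffset {
    static double estimateOffset(int[] samples) {
        long sum = 0;
        for (int s : samples) sum += s;  // accumulate raw readings
        return (double) sum / samples.length;
    }

    public static void main(String[] args) {
        int[] samples = {603, 604, 602, 603, 605, 601, 603, 603};
        System.out.println(estimateOffset(samples)); // 603.0
    }
}
```

Subtracting such an averaged offset from each raw reading yields the angular velocity the balance loop actually needs.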
In the next test no motors were running, and the gyroscope was moved along with the robot in a forward, backward, backward and then forward movement pattern. We were trying to mimic the behavior of the robot when correcting itself to regain balance.
Graph 2: Movement of gyro sensor without motors running.
As given in Graph 2, we see that the gyro values for the different movement patterns are generally the same. However, there is a difference in how high the values peak, which could be due to the force and intensity of the movement; compare the first forward movement with the second, and likewise the first backward movement with the second. All this gives us a good understanding of what the gyro registers in a self-balancing context.
The last test we conducted with the gyro sensor was performed with motors running and the gyro stationary. We wanted to see whether the running motors would interfere with the voltage of the NXT and thereby add noise to the readings of the gyro, which  also mentions as a possible source of noise.
Graph 3: Stationary gyro sensor with motors running.
Graph 3 shows that when the motors are running there are more peaks in the readings from the gyro, which tells us that the motors have an impact on the voltage of the NXT. It is further seen that there are stretches in which the offset remains between the same values as in Graph 1, even though the motors are running.
The code we have implemented is a translation to Java of the program given in , written in NXC. The program was tweaked to fit our needs; e.g. one of the things we removed was the ability to steer the robot remotely. The entire program can be found in .
In the program we tried different parameters to make the robot balance. We changed aGyro to 24 and KsGyro to 2.9, keeping the other values at aPos = 0.12 and aSpeed = 0.035. This eliminated most of the drift, although instead the robot moved aggressively forwards and backwards until it lost its balance after a few seconds. We kept EMAOFFSET at 0.0005, as in the NXC code. Lastly, we made the robot keep its balance by using the parameters:
private double aGyro = 22;
private double KsGyro = 3.4;
private double aPos = 0.12;
private double aSpeed = 0.15;
private static final double EMAOFFSET = 0.0005;
Code Snippet 5
Using these parameters the robot was able to keep its balance for at least 2.5 minutes, although pushing it backwards or forwards made it fall to the ground.
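Our understanding of EMAOFFSET is that it controls an exponential moving average that lets the assumed gyro offset slowly track the sensor's resting value, absorbing slow drift while fast rotations still register. A sketch under that assumption (plain Java, made-up readings):

```java
// Sketch of the exponential-moving-average offset update that we
// understand EMAOFFSET to control: the offset creeps toward the raw
// gyro reading, absorbing slow drift, while quick rotations barely
// move it. The reading values below are made up for illustration.
public class EmaOffset {
    static final double EMAOFFSET = 0.0005;

    // One update: blend the old offset with the new raw reading.
    static double update(double offset, double gyroRaw) {
        return offset * (1.0 - EMAOFFSET) + gyroRaw * EMAOFFSET;
    }

    public static void main(String[] args) {
        double offset = 600.0;
        // Feed a constant reading of 604 for many iterations;
        // the offset slowly creeps toward it but never overshoots.
        for (int i = 0; i < 10000; i++) offset = update(offset, 604.0);
        System.out.println(offset); // close to, but below, 604
    }
}
```

With a weight this small, a single update moves the offset by only 0.05% of the gap, which is why a brief push reads as rotation rather than as a new offset.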
In the beginning of the gyro experiments we placed the sensor on the right side of the robot's "body", as seen in Picture 5. After a couple of tries we observed that, due to the construction of the robot, the rider on top of the Segway, on which the gyro is placed, was shaking while the robot was moving. We therefore needed to rebuild the robot to make the gyro sensor steadier during the experiments; this should prevent the shaking of the rider and give us more stable gyro sensor data.
The gyro placement was changed during the experiments to see if that would help. We placed the sensor right above the right wheel, as seen in Picture 6, because we thought this setup would make the movements and the gyro's "observations" more alike. However, we did not see any huge improvements.
Picture 6: The new placement of the Gyro sensors
To make a better balancing robot we could try to combine the gyro sensor with the color sensor. By doing so we could perhaps prevent the gyro sensor's drift from moving the robot, and use the values from the color sensor to "help" the robot get back to the right offset where it can balance by itself.
Because the gyro sensor tends to drift, we had a lot of problems with the experimentation and were not able to make the robot balance much in this exercise. We tried both changing the setup of the robot and optimising the code, but neither helped much. Finally we got the robot balancing by itself by changing the parameters and thereby finding the best working ones.
In this paper we have tried to follow a structured plan so we could compare the different experiments. We have also tried to explain all the parameters, conditions and results very thoroughly, to make sure that the reader understands what we have been through in the exercises.
Across the different experiments with the different sensors, our results show that the light-sensor self-balancing robot is the hardest to get to function properly, as explained in its section. With the color sensor we had a good result and got the robot to balance for a while. Of course we had to ensure certain conditions when making the tests, e.g. the room was darkened to eliminate shadows, and the surface on which the robot balanced was white.
- Philippe Hurbain, NXTway.
- Brian Bagnall, Maximum Lego NXT: Building Robots with Java Brains, Chapter 11, pp. 243-284.
- nxtprograms, NXT Segway with Rider.
- J. Sluka (http://www.inpharmix.com/jps/PID_Controller_For_Lego_Mindstorms_Robots.html)
- Video 1: Self-balancing robot with Light Sensor.
- Video 2: LSblance2.
- Video 3: LSbalance3.
- Table 1.
- Video 4: Self-balancing robot with Color Sensor
- Video 5: Self-balancing robot with Color Sensor - Non planar surface
- Video 6: Self-balancing robot with Gyro Sensor - Drifting.
- Gyro offset and drift.
- HiTechnic, HTWay - A Segway type robot.
- Video 7: Self-balancing robot with Gyro Sensor