Priyam Patel

Lab 12

Net ID: pp386

Localization in Simulation

Running the given lab12_sim notebook, I got the output trajectory shown below. The green dots represent the ground truth and the blue dots represent the belief of the Bayes filter.

[Figure: simulated trajectory, ground truth (green) vs. Bayes filter belief (blue)]
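For context, the core of the simulation loop looks roughly like the sketch below. The helper names (execute_time_step, prediction_step, update_step, print_update_stats) come from the course's simulation framework as I remember it, so treat the exact calls as assumptions rather than the literal notebook code.

    # Sketch of one pass through the simulated trajectory (assumes the
    # notebook's Localization object `loc` and Trajectory object `traj`)
    def run_sim_localization(loc, traj):
        loc.init_grid_beliefs()  # start from a uniform belief over the grid
        for t in range(traj.total_time_steps):
            # Execute one motion step; get odometry and ground-truth poses
            prev_odom, current_odom, prev_gt, current_gt = traj.execute_time_step(t)
            loc.prediction_step(current_odom, prev_odom)  # motion model
            loc.update_step()                             # sensor model
            loc.print_update_stats(plot_data=True)        # plot belief vs. ground truth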

Part 2: Localization on the Live Robot

We are given the lab12_real skeleton notebook for creating our RealRobot class. I designed the robot class so that the perform_observation_loop function, shown below, starts the car on one full rotation and takes a TOF reading every 20 degrees, giving us 18 evenly spaced readings of the map. The function returns the sensor ranges and the sensor bearings. One thing to note is that my car rotates clockwise, so rather than changing the code on the car, I decided to reorder the readings in numpy; the function below shows the process. For the pose, I simply input the pose of whichever location I was running the observation loop at before going into the update step.

    # assumes `import asyncio` and `import numpy as np` at the top of the notebook
    async def perform_observation_loop(self, rot_vel=120):
        def notification_handler(uuid, byte_array):
            # Each notification carries one TOF reading (in mm) as a string;
            # self.y collects the streamed readings
            self.y.append(float(self.ble.bytearray_to_string(byte_array)))
        self.ble.send_command(CMD.START_CAR, "")
        self.ble.start_notify(self.ble.uuid['RX_STRING'], notification_handler)
        await asyncio.sleep(30)  # wait for the full 360-degree rotation to finish
        # Convert mm to m
        sensor_ranges = np.array(self.y) / 1000
        # The car rotates clockwise, so reverse the order of the readings,
        # keeping the first (zero-degree) reading in place
        x0 = sensor_ranges[0]
        sensor_ranges = np.flipud(np.delete(sensor_ranges, 0))
        sensor_ranges = np.insert(sensor_ranges, 0, x0)
        # The update step expects a column vector of ranges
        sensor_ranges = sensor_ranges[np.newaxis].T
        sensor_bearings = np.array([])
        return sensor_ranges, sensor_bearings

I placed the robot at each of the 5 locations to get its localization belief. After performing each observation loop, I ran the update step of the Bayes filter and got the belief of the robot, roughly as in the sketch below.
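For reference, this is roughly what each trial looked like in the notebook. The helper names (init_grid_beliefs, get_observation_data, update_step, plot_update_step_data) come from the course's Localization class, so treat the exact calls as assumptions; get_observation_data is awaited here only because my perform_observation_loop above is async.

    loc.init_grid_beliefs()                    # uniform prior over the pose grid
    await loc.get_observation_data()           # runs perform_observation_loop()
    loc.update_step()                          # Bayes update with the 18 ranges
    loc.plot_update_step_data(plot_data=True)  # plot belief vs. ground truth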

(0,0) Localization

TOF readings for (0,0,0) = [0.814,0.991,0.813,1.106,2.535,2.038,2.204,1.914,1.428,0.754,0.903,1.637,1.894,2.420,1.458,1.071,1.672,1.827]

[Figures: localization results at (0,0), belief vs. ground truth]
As we can see in the figures above, the localization is 1 foot off in the y direction. This may be because the robot confused its position due to very similar TOF readings from two different poses. Taking multiple readings from the TOF sensor and averaging them could rectify this, as sketched below.
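One hedged sketch of that fix: average a few full scans per location before the update step. Here take_scan is a hypothetical stand-in for one 18-reading rotation, not part of my actual lab code.

    import numpy as np

    def averaged_scan(take_scan, n_scans=3):
        # take_scan() returns one full rotation of 18 ranges (hypothetical helper)
        scans = np.stack([take_scan() for _ in range(n_scans)])  # shape (n_scans, 18)
        # Averaging suppresses per-angle noise that can make two poses look alike
        return scans.mean(axis=0)[np.newaxis].T  # column vector for the update step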

(0,3) Localization

TOF readings for (0,3,0) = [2.550,2.671,1.158,0.791,0.560,0.446,0.549,0.849,1.261,0.779,0.664,0.717,1.044,2.785,2.601,1.651,2.419,1.086]

[Figures: localization results at (0,3), belief vs. ground truth]

For the (0,3) localization, the belief was again off by 1 tile, but this time in the x direction. This may be because the robot had a faulty first reading at zero degrees, which was much larger than the actual distance. I had been adjusting my TOF sensor's orientation (whether it pointed straight ahead or slightly toward the ground), which may have caused the error: a 1 or 2 degree upward tilt barely changes the geometric range, but it can make the beam pass over a low wall and return a much longer distance.

(5,3) Localization

TOF readings for (5,3,0) = [0.419,0.474,0.522,0.653,0.539,0.439,0.415,0.461,0.662,1.187,2.860,1.845,1.098,0.569,0.506,2.552,1.747,0.737,0.495]

[Figures: localization results at (5,3), belief vs. ground truth]
For the (5,3) localization, the car drifted downward around the 90-degree mark, which may have caused the error in the y direction; the wall at 90 degrees was very close to the robot, so the TOF readings themselves should have been accurate there.

(-3,-2) Localization

TOF readings for (-3,-2,0) = [2.922,1.713,2.908,2.215,0.607,0.625,0.727,0.784,0.629,0.585,0.599,0.831,0.765,0.663,0.653,0.780,0.931,2.100]

[Figures: localization results at (-3,-2), belief vs. ground truth]
For the (-3,-2) readings, we can see that the localization and the ground truth overlap, meaning the localization was perfect. I think this is because (-3,-2) is very distinctive among the points on the map: it has close walls in three directions and a lot of variation in one quadrant, so no other tile location produces the same TOF readings.

(5,-3) Localization

TOF readings for (5,-3,0) = [0.413,0.391,0.403,0.480,0.782,3.509,3.500,0.789,1.324,3.061,3.316,2.453,0.794,0.537,0.430,0.416,0.444,0.564,0.474]

[Figures: localization results at (5,-3), belief vs. ground truth]

The (5,-3) localization is also perfect: the belief overlaps with the ground truth with very high probability. I think this is because the location is distinctive, like (-3,-2).

Conclusion

In this lab, I took the Bayes filter implemented in Lab 11 and applied its update step to localize the real robot from a single 360-degree rotation scan. I ran into some challenges, like the Artemis throwing a hard fault, which turned out to be an array overflow that Kirstin suggested I check for. I also had to take the localization readings twice: the first time, I had mistakenly set my TOF sensor to short distance mode, so all of the readings from that run were unusable. All in all, I think localization is a very helpful tool for estimating the current location of the robot, and probabilistic robotics is the way to go. I believe the localization code will help me execute a good trajectory with the robot in Lab 13.