Lab 10
Net ID: pp386
Objective: Simulation
In this lab, we create a simulation environment for the robot using Box2D and Flatland, and then test a virtual version of the robot inside that environment.
Description of the simulator
The simulation environment has three parts:
- The simulator: It simulates a wheeled robot with a laser range sensor, very similar to the robot we have in class. The range sensor points toward the front of the car. The simulated world is defined in a file called world.yaml, where we can change the boundary lines of the map, the robot dimensions (length and breadth), the range of the TOF sensor, the initial pose of the robot, localization parameters, sensor noise, etc. The virtual robot defined this way behaves like an idealized real robot, with no friction or related effects. There is also an IMU-like sensor on board the virtual robot that reports its pose, which is how we get the robot's odometry. We can also see the ground truth after each movement, which is the actual position of the robot in the simulated world.
- The plotter: The plotter is a lightweight 2D plotting tool that allows live plotting of many scatter points. The plotter has buttons in the GUI to plot the odometry, ground truth, belief, map, and the distance feature. The plotter can also be reset using a button in the GUI.
- The controller: The controller, or commander class in Python, is used to send commands to the robot. Through the controller we can read the robot's odometry pose, read the sensor data, and move the robot.
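For reference, here is a minimal sketch of the commander calls used throughout this report; it assumes cmdr is the commander object already created by the course-provided notebook, with the simulator and plotter launched.
# Minimal sketch of the commander interface used in this lab.
cmdr.set_vel(0.5, 0)                  # drive forward at 0.5 m/s, no rotation
pose, gt_pose = cmdr.get_pose()       # odometry pose and ground truth pose
sensor_values = cmdr.get_sensor()     # front range sensor reading
cmdr.plot_odom(pose[0], pose[1])      # add an odometry point to the plotter
cmdr.plot_gt(gt_pose[0], gt_pose[1])  # add a ground truth point to the plotter
cmdr.set_vel(0, 0)                    # stop the robot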
Why are we using the simulator?
We are using the simulator to test our hypotheses about the robot before actually implementing our logic on the real robot. Simulations are extremely important because they let us prototype and experiment with our logic quickly, in the safety of a virtual environment, without posing any risk to the real robot. In short, the simulator is a sanity check before applying the logic to the real robot.
Open Loop simulation
First, I created an open loop simulation that traces a square using set_vel commands. The code is shown below:
import asyncio

# Open loop square: turn ~90 degrees, drive one side, plot odometry and
# ground truth, then repeat the same pattern for each of the four sides.
if cmdr.sim_is_running() and cmdr.plotter_is_running():
    # Side 1
    cmdr.set_vel(0, 1.57)      # rotate in place at ~pi/2 rad/s
    await asyncio.sleep(1)     # ... for 1 s, roughly a 90 degree turn
    cmdr.set_vel(0.5, 0)       # drive forward at 0.5 m/s
    await asyncio.sleep(1)     # ... for 1 s, one side of the square
    pose, gt_pose = cmdr.get_pose()
    cmdr.plot_odom(pose[0], pose[1])
    cmdr.plot_gt(gt_pose[0], gt_pose[1])

    # Side 2
    cmdr.set_vel(0, 1.57)
    await asyncio.sleep(1)
    cmdr.set_vel(0.5, 0)
    await asyncio.sleep(1)
    pose, gt_pose = cmdr.get_pose()
    cmdr.plot_odom(pose[0], pose[1])
    cmdr.plot_gt(gt_pose[0], gt_pose[1])

    # Side 3
    cmdr.set_vel(0, 1.57)
    await asyncio.sleep(1)
    cmdr.set_vel(0.5, 0)
    await asyncio.sleep(1)
    pose, gt_pose = cmdr.get_pose()
    cmdr.plot_odom(pose[0], pose[1])
    cmdr.plot_gt(gt_pose[0], gt_pose[1])

    # Side 4
    cmdr.set_vel(0, 1.57)
    await asyncio.sleep(1)
    cmdr.set_vel(0.5, 0)
    await asyncio.sleep(1)
    pose, gt_pose = cmdr.get_pose()
    cmdr.plot_odom(pose[0], pose[1])
    cmdr.plot_gt(gt_pose[0], gt_pose[1])

    cmdr.set_vel(0, 0)         # stop the robot
As you can see in the code above, the set_vel command takes a linear speed in m/s and an angular speed in rad/s. We use asyncio.sleep to hold each command for a fixed duration. After each side, we read the odometry pose and the ground truth pose with get_pose and plot both. The image below shows the odometry and the ground truth pose.
The odometry pose estimate differs noticeably from the ground truth. The duration of each velocity command is 1 second in my code, since I use asyncio.sleep to wait for one second after each command.
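Since the four turn/drive/plot segments above are identical, the same open loop square can also be written as a loop. This is just a more compact version of the code above, under the same assumptions (1 second per command, cmdr set up as in the notebook):
import asyncio

async def open_loop_square():
    # Turn ~90 degrees, drive one side, and plot, four times in a row
    for _ in range(4):
        if not (cmdr.sim_is_running() and cmdr.plotter_is_running()):
            break
        cmdr.set_vel(0, 1.57)      # rotate in place at ~pi/2 rad/s for 1 s
        await asyncio.sleep(1)
        cmdr.set_vel(0.5, 0)       # drive forward at 0.5 m/s for 1 s
        await asyncio.sleep(1)
        pose, gt_pose = cmdr.get_pose()
        cmdr.plot_odom(pose[0], pose[1])
        cmdr.plot_gt(gt_pose[0], gt_pose[1])
    cmdr.set_vel(0, 0)             # stop once the square is done
In the Jupyter notebook this can be run with await open_loop_square().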
What happens if we run the loop again and again? Does it execute a perfect loop?
The video below shows drift building up because of the inaccuracy of open loop control. This happens for two reasons. First, the rotation rate is specified in radians per second, and pi/2 is an irrational number that I rounded off to 1.57, so the robot does not turn exactly 90 degrees but about 89.95 degrees. This means the angle error accumulates over time. Second, there is timing overhead in the Python loop around each set_vel command for translating, so the robot does not stop at exactly the location we want.
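To see how quickly the rounding error alone adds up, here is a quick back-of-envelope calculation; it only accounts for the 1.57 vs. pi/2 truncation, not the loop timing error, which in practice is likely the larger contributor.
import math

commanded = 1.57 * 1.0      # commanded turn per corner (rad/s * s)
exact = math.pi / 2         # an exact 90 degree turn in radians

error_per_turn = exact - commanded     # ~0.0008 rad, about 0.046 degrees
error_per_lap = 4 * error_per_turn     # four corners per square
laps_for_5_deg = math.radians(5) / error_per_lap

print(f"heading error per turn: {math.degrees(error_per_turn):.3f} deg")
print(f"heading error per lap:  {math.degrees(error_per_lap):.3f} deg")
print(f"laps until ~5 deg of drift: {laps_for_5_deg:.0f}")   # roughly 27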
Closed Loop
For closed loop control, we take the sensor values into account; I set the distance setpoint to 0.3 m. The code is shown below:
while cmdr.sim_is_running() and cmdr.plotter_is_running():
    sensor_values = cmdr.get_sensor()     # front range reading
    # print(sensor_values)
    if sensor_values < 0.3:
        cmdr.set_vel(0, 1.57)             # obstacle ahead: rotate in place
    else:
        cmdr.set_vel(2, 0)                # clear ahead: drive forward at 2 m/s
    pose, gt_pose = cmdr.get_pose()
    cmdr.plot_odom(pose[0], pose[1])
    cmdr.plot_gt(gt_pose[0], gt_pose[1])
The maximum speed at which the car does not collide with the wall is 2 m/s, and the robot rotates by 90 degrees each time an obstacle is detected. However, my obstacle avoidance code fails at 2.5 m/s, where the robot starts grazing the wall and one edge of the robot collides with it.
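A rough way to see why higher speeds eventually fail is to estimate how far the robot moves between consecutive sensor checks relative to the 0.3 m setpoint. The loop period below is a hypothetical placeholder, since I did not measure the actual rate of the Python control loop.
# Rough estimate of how far the robot travels between obstacle checks.
loop_period_s = 0.05      # assumed (not measured) time per loop iteration
setpoint_m = 0.3          # distance at which the robot starts turning

for speed in (0.5, 1.0, 2.0, 2.5):
    travel_per_check = speed * loop_period_s
    # Worst case: the reading drops below the setpoint right after a check,
    # so the robot gets about this much closer before it reacts.
    closest_approach = setpoint_m - travel_per_check
    print(f"{speed:.1f} m/s -> {travel_per_check:.3f} m per check, "
          f"reacts at roughly {closest_approach:.3f} m from the wall")
The faster the robot moves, the smaller the margin left for the turn itself, which is consistent with the grazing behaviour I saw at 2.5 m/s.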
Avoiding collisions
- Faster sensor: I noticed that when the car was too fast, it started hitting and grazing the wall. This can be rectified by using a sensor that updates much faster, i.e., has a shorter sampling time. With faster sensor readings, we can correct the robot's motion earlier and avoid the collision, as the rough calculation above illustrates.
- Better sensor: We can use a more expensive sensor with less noise. If our sensor is more accurate than the current setting of sigma = 0.1, we can avoid obstacles more reliably.
- More sensors: More sensors can also be used to make the obstacle avoidance better. The robot could have two more sensors on its left and right edges, near the wheels. With this arrangement, we would detect the walls better and could use all three readings to decide which way to turn (see the sketch after this list).
- Change the setpoint for the TOF sensor to something more than 30 cm.
- Instead of doing a 90 degree turn, we could turn 180 degrees; this would make the robot oscillate back and forth along one line, so it would avoid the wall but would not move around the whole map.
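As a sketch of the three-sensor idea above, the turn direction could be chosen from whichever side reports more clearance. The get_left_sensor and get_right_sensor calls are hypothetical stand-ins for the two extra TOF sensors; the current simulator only exposes the single front reading through cmdr.get_sensor().
SETPOINT = 0.3   # m

while cmdr.sim_is_running() and cmdr.plotter_is_running():
    front = cmdr.get_sensor()
    left = get_left_sensor()      # hypothetical left-edge TOF reading
    right = get_right_sensor()    # hypothetical right-edge TOF reading

    if front < SETPOINT:
        # Turn toward whichever side has more free space
        # (assuming positive angular velocity is a counterclockwise turn)
        if left > right:
            cmdr.set_vel(0, 1.57)     # turn left
        else:
            cmdr.set_vel(0, -1.57)    # turn right
    else:
        cmdr.set_vel(2, 0)            # path is clear: drive forward

    pose, gt_pose = cmdr.get_pose()
    cmdr.plot_odom(pose[0], pose[1])
    cmdr.plot_gt(gt_pose[0], gt_pose[1])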
Conclusion
In this lab, I learned about the simulator and its importance for developing robot trajectories. I used open loop and closed loop control to make the robot move, and tested the speed limits at which the robot could still successfully avoid obstacles. I also learned about various ways we could improve the obstacle avoidance.