Klamp't Tutorial: Simulating Sensors

In this tutorial we will learn how to use the sensor simulation capabilities in the Klamp't framework for each of the supported sensor types: joint, inertial, visual, and tactile.

Difficulty: moderate

Time: 30 minutes

Currently Klamp't supports the following sensors:

  • Joint Sensors
    • Position
    • Velocity
    • Torque
  • Inertial Sensors
    • Accelerometer
    • Tilt sensor
    • Gyroscope
    • Inertial Measurement Unit (IMU)
  • Visual Sensors
    • RGB camera
    • Depth camera
    • Laser rangefinder
  • Tactile Sensors
    • Contact sensor
    • Force/torque sensor

These sensors can be used in the simulation environment to monitor the robot's state and provide feedback for your robot's controller. Each sensor has a set of properties that describe its location on the robot (or in the world) and govern its behavior, such as resolution, noise level, and update rate. In the case of a camera sensor, these properties include the field of view and image resolution.

To use sensors in a simulation, they must first be defined in a simulation world file, after which API functions can be called to access their measurements and visualize them.

Sensors are defined in a world file, just like terrain, robots, and other rigid geometry. (See Tutorial 2 for instructions on how to define a world file.)

Sensors are added under the simulation heading and must be associated with a robot. When adding a sensor, it must be identified by its class name as defined in the Klamp't C++ API. The currently supported sensor types are as follows: JointPositionSensor, JointVelocitySensor, Accelerometer, Gyroscope, IMUSensor, ForceTorqueSensor, ContactSensor, CameraSensor, and LaserRangeSensor.

You may set the properties of sensors using attributes in their XML tags. An excerpt from a world definition file is shown below to demonstrate the structure. The complete file may be found here.

<simulation>
  <robot index="0">
    <sensors>
      <JointPositionSensor name="encoders" />
      <JointVelocitySensor name="dencoders" />
      <ContactSensor name="contact" link="6" Tsensor="1 0 0 0 1 0 0 0 1 0 0 0.03"
        patchMin="-0.01 -0.01" patchMax="0.01 0.01" patchTolerance="0.005"
        hasForce="0 0 1" />
      <ForceTorqueSensor name="f/t" link="6" hasForce="1 1 1" hasTorque="1 1 1" />
      <Accelerometer name="accelerometer" link="6" hasAxis="1 1 1" />
      <IMUSensor name="imu" link="6" hasAxis="1 1 1" hasAngAccel="1" hasAngVel="1" />
      <LaserRangeSensor name="lidar" link="6" Tsensor="0 1 0 -1 0 0 0 0 1 0 0 0"
        depthMaximum="4.0" depthMinimum="0.1" depthResolution="0.01"
        depthVarianceLinear="0.0001" />
      <CameraSensor name="rgbd_camera" link="6" Tsensor="0 -1 0 -1 0 0 0 0 -1 0 0.1 0"
        xres="256" yres="128" xfov="1.05" yfov="0.06"
        zmin="0.4" zresolution="0.01" zvarianceLinear="0.00001" />
    </sensors>
  </robot>
</simulation>

Sensors can be visualized in the SimTest app, providing a convenient way to test their behavior without writing a custom simulation script. Assuming you have defined your sensors correctly in the world file, they can be accessed in SimTest under Windows->Sensor Plot, or by pressing Ctrl+P. Enabling the Sensor Drawing Options window will produce the screen below.

Sensor plot menu

Using this window, you can enable a scrolling 2D line plot of the sensor readings by checking the "Plot On" box, and you can also render the sensor measurements directly in the world. An example of this is shown below.

Sensor plot example

Sensors are accessed using the names defined in the world file. In addition, you can select which parts of the sensor output you would like to plot under the Measurements heading using the buttons on the right side of the window.

You may download the code for this example here. Throughout, we will assume that your code is placed in a directory that shares the same parent as Klamp't. We also assume you have learned how to build applications that link to Klamp't using CMake, e.g. by completing the simulation tutorial.

In this section we will walk through a simple Python script that changes a sensor's characteristics before running a simulation. We'll be using Klampt/data/tx90scenario0.xml as the base world file.

The script begins with import statements to include all the necessary modules to run our simulation. The vis module is used to generate visualizations of the robot geometry and sensor data being simulated. The so3, se3, and vectorops modules are used for vector math and rigid body transformations.

from klampt import vis
from klampt import *
from klampt.math import so3,se3,vectorops
from klampt.vis.glcommon import *
import time

The following code defines a function which will run once each simulation time step to process the depth data. In this case, we simply take the minimum and maximum over every pixel of our depth camera to determine the depth range of each frame and print it to the console. This snippet demonstrates two of the methods provided by the sensor class in the Python interface: getMeasurements and getSetting.

As the name indicates, getMeasurements is used to get the state of the sensors for the current time step. The getSetting method allows you to query the sensor model for its parameters. The form of the data returned by getMeasurements and the available settings vary for each sensor. Refer to the documentation for details on what parameters need to be set for each sensor model.


def processDepthSensor(sensor):
  data = sensor.getMeasurements()
  w = int(sensor.getSetting("xres"))
  h = int(sensor.getSetting("yres"))
  mind,maxd = float('inf'),float('-inf')
  for i in range(h):
    for j in range(w):
      pixelofs = (j+i*w)
      rgb = int(data[pixelofs])    #packed RGB measurement (unused here)
      depth = data[pixelofs+w*h]   #the w*h depth values follow the w*h rgb values
      mind = min(depth,mind)
      maxd = max(depth,maxd)
  print("Depth range",mind,maxd)


The next part of the code initializes a world model and configures it by reading in a world file. The simulator is also created, and a reference to a sensor is obtained using the sensor method of the SimRobotController class. Here the sensor is referred to by its name, but it is also possible to use its integer index (i.e., sim.controller(0).sensor(0)).

world = WorldModel()
world.readFile("../data/simulation_test_worlds/sensortest.xml")
#world.readFile("../../data/tx90scenario0.xml")
robot = world.robot(0)

vis.add("world",world)

sim = Simulator(world)
#look up the camera sensor defined in the world file by its name
sensor = sim.controller(0).sensor("rgbd_camera")
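
Since sensors can also be looked up by index, it is possible to enumerate everything the controller exposes. Below is a small sketch, assuming your Klamp't version returns a sensor with an empty name for an out-of-range index and provides the name(), type(), and measurementNames() methods on the sensor object (check your version's documentation if this loop misbehaves):

index = 0
while True:
  s = sim.controller(0).sensor(index)
  if len(s.name()) == 0:
    break   #an empty name indicates an invalid (out-of-range) sensor
  print(index,s.name(),s.type(),s.measurementNames())
  index += 1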

In the following lines, the getSetting method is used to query the index of the link the sensor is attached to, as well as its transformation relative to that link's origin. The setSetting method is then used to modify the sensor's parent link, attaching it to the world (link -1) instead of the robot. The sensor's pose is also changed to a random orientation at the fixed position (0,0,1). Note that the Tsensor setting is serialized as a string of 12 numbers: the 9 entries of the rotation matrix in Klamp't's so3 format, followed by the 3 translation coordinates.

print(sensor.getSetting("link"))
print(sensor.getSetting("Tsensor"))
sensor.setSetting("link",str(-1))   #link -1 attaches the sensor to the world frame
T = (so3.sample(),[0,0,1.0])        #random orientation, fixed position (0,0,1)
sensor.setSetting("Tsensor",' '.join(str(v) for v in T[0]+T[1]))

The remainder of the code adds the sensor to the visualization, defines the plugin class that interfaces with the visualization system, and sets up the loop that performs the simulation stepping.

vis.add("sensor",sensor)

class SensorTestWorld (GLPluginInterface):
  def __init__(self):
    GLPluginInterface.__init__(self)
    robot.randomizeConfig()
    sim.controller(0).setPIDCommand(robot.getConfig(),[0.0]*7)

  def idle(self):
    #advance the simulation by 10 ms and process the camera data every frame
    sim.simulate(0.01)
    sim.updateWorld()
    processDepthSensor(sensor)
    return True

  def keyboardfunc(self,c,x,y):
    #press the space bar to send the robot to a new random configuration
    if c == ' ':
      robot.randomizeConfig()
      sim.controller(0).setPIDCommand(robot.getConfig(),[0.0]*7)
      return True
    return False

vis.pushPlugin(SensorTestWorld())
vis.show()
while vis.shown():
  time.sleep(0.1)
vis.kill()