Sensors¶
Klamp’t can emulate a handful of sensors typically found on robots. These sensors can be used in the simulation environment to monitor the robot’s state and provide feedback for your robot’s controller. Each sensor has a set of properties that describe its location on the robot (or in the world) and govern its behavior, e.g. resolution, noise level, update rate, etc. In the case of a camera sensor, these parameters include the field of view and resolution.
The following sensors are natively supported:
- Joint sensors
  - JointPositionSensor: standard joint encoders.
  - JointVelocitySensor: velocity sensors. Here velocities are treated as raw measurements, rather than differences from a position encoder, and hence they are rarely found in real sensors. However, these will be good approximations of differenced velocity estimates from high-rate encoders.
  - DriverTorqueSensor: torques fed back from a robot’s motors.
- Visual sensors
  - CameraSensor: an RGB or RGB-D camera.
  - LaserRangeSensor: a laser rangefinder sensor.
- Tactile sensors
  - ContactSensor: a contact switch/sensor defined over a rectangular patch.
  - ForceTorqueSensor: a force/torque sensor at a robot’s joint. Can be configured to report values from 1 to 6 DOF.
- Inertial sensors
  - Accelerometer: an accelerometer. Can be configured to report values from 1 to 3 channels.
  - TiltSensor: a tilt sensor. Can be configured to report values from 1 to 2 axes, and optionally tilt rates.
  - GyroSensor: a gyroscope. Can be configured to report accelerations, velocities, or absolute rotations.
  - IMUSensor: an inertial measurement unit that uses an accelerometer and/or gyroscope to provide estimates of a link’s transformation and its derivatives. It fills in the gaps that are not provided by the accelerometer / gyro using either integration or differencing.
- Miscellaneous “virtual sensors”
  - FilteredSensor: simply filters the measurements provided by another sensor.
  - CorruptedSensor: corrupts the measurements provided by another sensor.
  - TransformedSensor: transforms the measurements provided by another sensor using some scaling and bias (good for simulating sensors that provide non-MKS units).
  - TimeDelayedSensor: delays the measurements provided by another sensor.
At the user’s level of abstraction, sensors generically provide streaming numerical-valued measurements. It is up to the controller to process these raw measurements into meaningful information.
Each sensor obeys a standard interface for configuring the sensor’s settings and
retrieving its measurements. A sensor’s settings are configured via
attribute/value pairs, which are fed to the sensor’s setSetting
method.
More details on measurements and sensor-specific settings are
listed below.
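Since all settings are exchanged as strings, transform-valued settings such as a camera's Tsensor are encoded as whitespace-separated number strings: 9 rotation-matrix entries (in Klampt's so3 format) followed by 3 translation entries. The helper names below are hypothetical; this is a minimal sketch of the encoding:

```python
# Minimal sketch of the string encoding used by transform-valued settings
# such as "Tsensor": 9 rotation entries followed by 3 translation entries,
# separated by whitespace. Helper names are hypothetical.

def transform_to_setting(R, t):
    """Pack a rotation (9 floats) and translation (3 floats) into a setting string."""
    return ' '.join(str(v) for v in list(R) + list(t))

def setting_to_transform(s):
    """Unpack a 12-number setting string back into (R, t) lists."""
    vals = [float(v) for v in s.split()]
    return vals[:9], vals[9:]

R, t = setting_to_transform("0 1 0 -1 0 0 0 0 1 0 0.1 0")
print(t)  # -> [0.0, 0.1, 0.0]
```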
XML configuration¶
To use sensors in a simulation, they must first be defined in a simulation world file.
Note
Sensors cannot yet be defined programmatically in the Python API. This functionality may be supported in the future.
Sensors are defined, with initial settings, via an XML tag of the form
<sensors>
  <TheSensorType name="some_name" attr1="value1" attr2="value2" />
</sensors>
These XML strings can be inserted into several places:
- .rob files, under a line: property sensors [SENSORS_XML_FILE]
- URDF files, under the <klampt> element, and
- World XML files, under the <simulation> and <robot> elements.
See the Hubo-II+
model Klampt-examples/data/robots/huboplus/huboplus_col.rob
for an example of
configuring sensors in a .rob file, and the simulation sensor test
environment Klampt-examples/data/simulation_test_worlds/sensortest.xml
for an
example of configuring sensors in a world XML file.
API summary¶
The main interface to sensors is SimRobotSensor:
- sensor = controller.sensor(index or name): retrieves a SimRobotSensor reference.
- sensor.name(): gets the sensor’s name string.
- sensor.type(): gets the sensor’s type string.
- sensor.measurementNames(): returns a list of strings naming the sensor’s measurements.
- sensor.getMeasurements(): returns a list of floats giving the sensor’s measurements at the current time step.
- sensor.setSetting(name,value): sets a setting for the sensor; value must be a string.
- sensor.getSetting(name): retrieves a setting for the sensor, returned as a string.
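Because measurementNames() and getMeasurements() return parallel lists, a common pattern is to zip them into a name-to-value dictionary. The sketch below uses placeholder lists standing in for the live calls:

```python
# Placeholder lists standing in for sensor.measurementNames() and
# sensor.getMeasurements() on a hypothetical 3-DOF joint position sensor.
names = ["q_1", "q_2", "q_3"]
values = [0.05, -1.2, 0.73]

# Pair each measurement name with its value for convenient lookup.
readings = dict(zip(names, values))
print(readings["q_2"])  # -> -1.2
```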
It is often useful to retrieve hypothetical sensor data without actually running a simulation, in particular for visual sensors.
- sensor.kinematicSimulate(world,dt): kinematically simulates the sensor for its corresponding robot in the given world.
The sensing module contains utility functions for reading sensor transforms and converting camera measurements to images (Numpy arrays) and point clouds.
Sensor measurements and attributes¶
Formal documentation is not yet complete for some sensors. For the most part, the attributes of a sensor match the members of the corresponding C++ class. Please see the C++ class attributes and comments for the most complete information.
JointPositionSensor
¶
Settings are:
- indices (list of ints): a list of link indices actually read. May also be empty to indicate that all DOFs on the robot are read.
- qresolution (list of floats): resolution of each reading, in radians. E.g., “0.01 … 0.01” indicates that each reading will be rounded to the nearest 0.01 radian.
- qvariance (list of floats): variance of each reading, in radians.
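To make the effect of these settings concrete, here is a minimal emulation of how one noisy, quantized reading could be produced. This is an illustration of the assumed behavior (Gaussian noise, then rounding to the resolution), not Klampt's exact implementation:

```python
import math, random

def simulated_reading(q_true, resolution, variance):
    """Illustrative model: add zero-mean Gaussian noise with the given
    variance, then round to the nearest multiple of the resolution."""
    q = q_true + random.gauss(0.0, math.sqrt(variance))
    if resolution > 0:
        q = round(q / resolution) * resolution
    return q

# With zero variance, only quantization remains:
print(simulated_reading(0.123, 0.01, 0.0))  # -> 0.12
```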
JointVelocitySensor
¶
Settings are:
- indices (list of ints): a list of link indices actually read. May also be empty to indicate that all DOFs on the robot are read.
- qresolution (list of floats): resolution of each reading, in rad/s. E.g., “0.1 … 0.1” indicates that each reading will be rounded to the nearest 0.1 rad/s.
- qvariance (list of floats): variance of each reading, in rad/s.
CameraSensor
¶
Simulates a camera or RGB-D sensor. Measurements give the pixel measurements of the RGB sensor (if present) followed by the pixel measurements of the depth sensor (if present). RGB measurements give one value per pixel in scan-line order, with each pixel’s 24-bit RGB color packed into a single float. Depth measurements are in meters.
Settings are:
- link (int): the link on which this sensor lies.
- rgb (bool): if true, the camera provides RGB output.
- depth (bool): if true, the camera provides depth output.
- xres, yres (int): the x and y resolution of the sensor.
- xfov, yfov (float): the x and y field of view, in radians.
- zmin, zmax (float): minimum and maximum range of the depth sensor.
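Assuming the measurement layout described above (w*h RGB values, each a 24-bit 0xRRGGBB color cast to a float, followed by w*h depth values in scan-line order), a single pixel can be unpacked as follows; the helper name is hypothetical:

```python
def unpack_pixel(data, x, y, w, h):
    """Return ((r, g, b), depth) for pixel (x, y), assuming data holds
    w*h packed 0xRRGGBB values cast to float, then w*h depths in meters."""
    ofs = x + y * w
    packed = int(data[ofs])
    r = (packed >> 16) & 0xff
    g = (packed >> 8) & 0xff
    b = packed & 0xff
    return (r, g, b), data[ofs + w * h]

# A 2x1 image: one red pixel, one green pixel, with depths 1.5 and 2.0.
data = [float(0xff0000), float(0x00ff00), 1.5, 2.0]
print(unpack_pixel(data, 0, 0, 2, 1))  # -> ((255, 0, 0), 1.5)
```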
LaserRangeSensor
¶
See the C++ API documentation for attributes.
DriverTorqueSensor
¶
See the C++ API documentation for attributes.
ContactSensor
¶
See the C++ API documentation for attributes.
ForceTorqueSensor
¶
See the C++ API documentation for attributes.
Accelerometer
¶
See the C++ API documentation for attributes.
TiltSensor
¶
See the C++ API documentation for attributes.
GyroSensor
¶
See the C++ API documentation for attributes.
IMUSensor
¶
See the C++ API documentation for attributes.
FilteredSensor
¶
See the C++ API documentation for attributes.
Example¶
You may set the properties of a robot’s sensors using the <sensors>
XML tag. An excerpt from a world definition file is shown below to demonstrate
how to give a robot simulated sensors.
<world>
<terrain file="Klampt-examples/data/terrains/plane.off" translation="0 0 0"/>
<robot name="tx90" file="Klampt-examples/data/robots/tx90ball.rob">
<sensors>
<JointPositionSensor name="encoders"/>
<JointVelocitySensor name="dencoders"/>
<!-- <ContactSensor name="contact" link="6" Tsensor="1 0 0 0 1 0 0 0 1 0 0 0.03" patchMin="-0.01 -0.01" patchMax="0.01 0.01" patchTolerance="0.005" hasForce="0 0 1"/>
<ForceTorqueSensor name="f/t" link="6" hasForce="1 1 1" hasTorque="1 1 1"/>
<Accelerometer name="accelerometer" link="6" hasAxis="1 1 1"/>
<IMUSensor name="imu" link="6" hasAxis="1 1 1" hasAngAccel="1" hasAngVel="1"/>
<LaserRangeSensor name="lidar" link="6" Tsensor="0 1 0 -1 0 0 0 0 1 0 0 0" depthMaximum="4.0" depthMinimum="0.1" depthResolution="0.01" depthVarianceLinear="0.0001"/> -->
<CameraSensor name="rgbd_camera" link="6" Tsensor="0 1 0 -1 0 0 0 0 1 0 0.1 0" xres="256" yres="128" xfov="1.05" yfov="0.6" zmin="0.4" zresolution="0.01" zvarianceLinear="0.00001"/>
</sensors>
</robot>
<rigidObject name="sphere" position="2 0 1.00000">
<geometry mesh="Klampt-examples/data/objects/sphere.geom" scale="0.1"/>
<physics mass="1.000000" automass="1" kRestitution="1" kFriction="0.500000" kStiffness="inf" kDamping="inf"/>
</rigidObject>
<simulation>
<terrain index="0">
<geometry kRestitution="0.500000" kFriction="0.500000" kStiffness="inf" kDamping="inf" padding="0.001" preshrink="1"/>
</terrain>
</simulation>
</world>
To get this to work, you may need to edit the location of the Klampt-examples repository.
Note that sensors can be enabled / disabled using the XML comment tags <!--
and -->
.
SimTest App¶
If you have built from source, sensors can be visualized in the SimTest app,
providing a convenient way
to test their behavior without writing a custom simulation script.
Copy the above XML code into a file sensortestworld.xml, then launch
SimTest sensortestworld.xml.
Sensor plots can be accessed in SimTest under the Windows->Sensor Plot
menu, or by pressing Ctrl+P. The Sensor Drawing Options
window will then pop up.
If you select the rgbd_camera
sensor and check the Render
checkbox,
you can see a live display of what the simulated rgbd_camera
is recording.
Begin the simulation by pressing the Play
button, and move the configuration around
so the end effector points toward the ball. The simulation environment will then show
a live depth image, indicating that depth information is being recorded.
Reading Sensors in Code¶
Copy the above XML code into a file sensortestworld.xml
.
Let’s now create a new Python file with the following code.
import klampt
from klampt import vis
from klampt.math import so3,se3,vectorops
from klampt.model import sensing   # used later for camera_to_images
from klampt.vis.glinterface import GLPluginInterface
import time
The first part of the code initializes a world model and
configures it by reading in a world file. The simulator is also created,
and a reference to a sensor is created using the sensor method of the
SimRobotController
class. In this instance, the sensor is referred to by its name, but it
is also possible to use its integer index (i.e.
sim.controller(0).sensor(0))
world = klampt.WorldModel()
world.readFile("sensortestworld.xml")
robot = world.robot(0)
vis.add("world",world)
sim = klampt.Simulator(world)
sensor = sim.controller(0).sensor("rgbd_camera")
In the following lines, the getSetting
method is used to query the link
index the sensor is attached to, and its relative transformation to that
link’s origin. The setSetting
method is used to modify the sensor’s
parent link, attaching it to the world instead of the robot. The sensor’s
relative position and orientation are also changed to a random
location/direction.
print(sensor.getSetting("link"))
print(sensor.getSetting("Tsensor"))
sensor.setSetting("link",str(-1))
T = (so3.sample(),[0,0,1.0])
sensor.setSetting("Tsensor",' '.join(str(v) for v in T[0]+T[1]))
The remainder of the code adds the sensor to the visualization, defines the object that interfaces with the visualization system, and sets up the loop that performs the simulation stepping.
vis.add("sensor",sensor)
class SensorTestWorld (GLPluginInterface):
    def __init__(self):
        GLPluginInterface.__init__(self)
        robot.randomizeConfig()
        sim.controller(0).setPIDCommand(robot.getConfig(),[0.0]*7)

    def idle(self):
        sim.simulate(0.01)
        sim.updateWorld()
        return True

    def keyboardfunc(self,c,x,y):
        if c == ' ':
            robot.randomizeConfig()
            sim.controller(0).setPIDCommand(robot.getConfig(),[0.0]*7)

vis.run(SensorTestWorld())
If we run this file, the simulation visualization shows the camera detached from the robot, viewing the scene from its new fixed pose. Every time you press the spacebar, the robot will go to a new configuration, and you can watch this on the sensor visualization.
Now, let’s see how to process the sensor data.
The following code defines a function which we can run once each
simulation time step to process the depth data. In this case, we simply
perform a min and max operation over every pixel of our depth camera to
determine the depth range of each frame and print it to the console.
This snippet demonstrates two of the methods provided by the sensor
class: getMeasurements and getSetting.
As the name indicates, getMeasurements is used to get the state of the
sensor for the current time step. The getSetting method allows you to
query the sensor model for its parameters. The form of the data returned
by getMeasurements and the available settings vary for each sensor.
def processDepthSensor(sensor):
    data = sensor.getMeasurements()
    w = int(sensor.getSetting("xres"))
    h = int(sensor.getSetting("yres"))
    mind,maxd = float('inf'),float('-inf')
    for i in range(h):
        for j in range(w):
            pixelofs = (j+i*w)
            rgb = int(data[pixelofs])    # packed RGB value (unused here)
            depth = data[pixelofs+w*h]
            mind = min(depth,mind)
            maxd = max(depth,maxd)
    print("Depth range",mind,maxd)
Now, change the idle function to read this:
def idle(self):
    processDepthSensor(sensor)
    sim.simulate(0.01)
    sim.updateWorld()
    return True
We should now see many print statements on the console that change as the robot moves.
Extracting data for camera sensors into an array is so common that we
have provided a convenience routine klampt.model.sensing.camera_to_images()
that processes the camera measurements into images (either Python Imaging Library (PIL) Images,
or Numpy arrays). For example, the following code will save the images to disk
each time it is called.
def processDepthSensor(sensor):
    rgb,depth = sensing.camera_to_images(sensor)
    rgb.save("last_rgb_image.jpg","JPEG")
    depth.save("last_depth_image.jpg","JPEG")
(Note that another convenience routine, sensing.camera_to_points, processes
the camera measurements into point clouds.)
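For intuition about what camera_to_points computes, the pinhole-style back-projection below converts a flat depth buffer into camera-frame 3D points using the xfov/yfov settings. This is an illustrative approximation, not Klampt's exact implementation:

```python
import math

def depth_to_points(depth, w, h, xfov, yfov):
    """Back-project a flat list of w*h depth values (meters, scan-line
    order) into 3D points in the camera frame, using focal lengths
    derived from the fields of view."""
    fx = (w / 2.0) / math.tan(xfov / 2.0)
    fy = (h / 2.0) / math.tan(yfov / 2.0)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    points = []
    for i in range(h):
        for j in range(w):
            z = depth[j + i * w]
            points.append(((j - cx) / fx * z, (i - cy) / fy * z, z))
    return points

# A flat wall 1 m away, seen by a tiny 3x3 sensor: the center ray
# back-projects to (0, 0, 1).
pts = depth_to_points([1.0] * 9, 3, 3, 1.05, 1.05)
print(pts[4])  # -> (0.0, 0.0, 1.0)
```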