A camera capturing a 3D point cloud.
This sensor generates a depth 'image' from the camera's perspective.
“Depth images are published as sensor_msgs/Image encoded as 32-bit float. Each pixel is a depth (along the camera Z axis) in meters.” [ROS Enhancement Proposal 118](http://ros.org/reps/rep-0118.html) on Depth Images.
If you are looking for point cloud data, you can use external tools like [depth_image_proc](http://ros.org/wiki/depth_image_proc), which uses the intrinsic matrix and the depth image to generate it, or alternatively the XYZCameraClass in this module.
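The conversion these tools perform is the standard pinhole back-projection: each pixel's depth value is combined with the intrinsic parameters (focal lengths fx, fy and principal point cx, cy) to recover the 3D point. A minimal sketch of that computation, assuming a NumPy depth array (the function name and the illustrative intrinsics are not part of MORSE or depth_image_proc):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters along the camera Z axis)
    to an N x 3 array of XYZ points in the camera frame using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z)).reshape(-1, 3)

# Toy 2 x 2 depth image with every pixel 1 m away
depth = np.ones((2, 2), dtype=np.float32)
points = depth_to_pointcloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

With real camera data you would take fx, fy, cx, cy from the intrinsic matrix published alongside the image.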
You can set these properties in your scripts with <component>.properties(<property1> = ..., <property2> = ...).
(no documentation available yet)
This sensor exports these data fields at each simulation step:
Z-buffer captured by the camera, converted to meters: a memoryview of 32-bit floats, (cam_width * cam_height * sizeof(float)) bytes in size.
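Since the field is exposed as a raw buffer of 32-bit floats in row-major order, consumers typically unpack it back into rows of depth values. A minimal sketch using only the standard library, with a stand-in buffer in place of the sensor output (the dimensions and the 1.5 m fill value are illustrative):

```python
import struct

# Illustrative dimensions; the real values come from the camera settings
cam_width, cam_height = 4, 3
n_pixels = cam_width * cam_height

# Stand-in for the buffer exported by the sensor: raw little-endian
# 32-bit floats, one depth value (meters) per pixel, row-major order
raw = struct.pack('<%df' % n_pixels, *([1.5] * n_pixels))
buf = memoryview(raw)

# Unpack the flat buffer, then slice it into image rows
floats = struct.unpack('<%df' % n_pixels, buf)
depth_rows = [floats[r * cam_width:(r + 1) * cam_width]
              for r in range(cam_height)]
```

With NumPy available, `np.frombuffer(buf, dtype=np.float32).reshape(cam_height, cam_width)` achieves the same result in one step.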
Interface support:
(Warning: this sensor currently has no interface support.)
Capture n images
Returns the current data stored in the sensor.
Return value
A dictionary containing the sensor's current data.
The following example shows how to use this component in a Builder script:
from morse.builder import *
robot = ATRV()
# create a new instance of the sensor
depthvideocamera = DepthVideoCamera()
# place your component at the correct location
depthvideocamera.translate(<x>, <y>, <z>)
depthvideocamera.rotate(<rx>, <ry>, <rz>)
robot.append(depthvideocamera)
# define one or several communication interfaces, like 'socket'
depthvideocamera.add_interface(<interface>)
env = Environment('empty')
(This page has been auto-generated from MORSE module morse.sensors.depth_camera.)