Bases: morse.core.sensor.Sensor
This sensor emulates an Accelerometer/Podometer, measuring the distance that a robot has moved, the current speed and current acceleration. Measurements are done for the 3 axes (X, Y, Z) for velocity and acceleration. The values for velocity and acceleration are measured at each tic of the Game Engine, measuring the difference in distance from the previous tic, and the estimated time between tics (60 tics per second is the default in Blender).
Compute the speed and acceleration of the robot.
The speed and acceleration are computed using the Blender tics to measure time. When computing velocity as v = d / t, with t = 1 / frequency, we get v = d * frequency, where the frequency is derived from the Blender tic rate and the number of skipped logic steps for this sensor.
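For illustration only (a minimal sketch, not the actual MORSE code), the per-axis differentiation described above can be written as:
def differentiate(previous, current, frequency):
    # v = d / t with t = 1 / frequency, hence v = d * frequency (per axis)
    return [(c - p) * frequency for p, c in zip(previous, current)]

# Example: 1 cm of motion along X between two tics at 60 Hz
velocity = differentiate([0.0, 0.0, 0.0], [0.01, 0.0, 0.0], 60.0)
# velocity == [0.6, 0.0, 0.0] (m/s); acceleration is obtained the same
# way, by differentiating two successive velocity samples.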
Bases: morse.core.sensor.Sensor
The sensor streams the joint state (i.e. the rotation or translation value of each joint belonging to the armature) of its parent armature.
Note
This sensor must be added as a child of the armature you want to sense, like in the example below:
from morse.builder import *

robot = ATRV()

arm = KukaLWR()
robot.append(arm)
arm.translate(z=0.9)

arm_pose = ArmaturePose('arm_pose')
arm.append(arm_pose)
This component only allows reading the armature configuration. To change the armature pose, you need an armature actuator.
Important
To be valid, special care must be taken when creating armatures. If you want to add a new one, please carefully read the armature creation documentation.
Note
The data structure exported on the datastream by the armature sensor depends on the armature. It is a dictionary of pairs (joint name, joint value). Joint values are either in radians (for revolute joints) or in meters (for prismatic joints).
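For instance, for the KUKA LWR arm appended above, the exported dictionary could look like the following (joint names and values are purely illustrative; the actual names come from the armature definition):
joint_states = {
    "kuka_1": 0.0,    # revolute joint, value in radians
    "kuka_2": 1.57,
    "kuka_3": -0.35,
}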
See also: armature actuator
Get the x, y, z, yaw, pitch and roll of the armature, and the rotation angle for each of the segments.
Returns the value of a given joint, either:
- its absolute rotation in radians along its rotation axis, or
- its absolute translation in meters along its translation axis.
Throws an exception if the joint does not exist.
Parameters: joint – the name of the joint in the armature.
Returns the list of joints of the armature.
Returns: the (ordered) list of joints in the armature, from root to tip.
Bases: morse.core.sensor.Sensor
This sensor emulates the remaining charge of a battery on the robot. It is meant to be used only as an informative measure, to be taken in consideration by the planning algorithms. It does not prevent the robot from working.
The charge of the battery decreases with time, using a predefined Discharge rate specified as a property of the Blender object. This rate is independent of the actions performed by the robot, and only dependent on the time elapsed since the beginning of the simulation.
A planned feature is to allow for designated Charging Zones where the battery will gradually recharge. However, this is not implemented yet.
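As a hedged builder sketch (assuming the battery sensor is exposed in the builder as Battery and that its discharge-rate property is named DischargingRate, in percent per second):
from morse.builder import *

robot = ATRV()

# Battery sensor with a discharge rate of 5% per second
# (the DischargingRate property name is an assumption).
battery = Battery()
battery.properties(DischargingRate = 0.05)
robot.append(battery)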
Bases: morse.core.sensor.Sensor
A generic camera class, expected to be used as a base class for real cameras, such as the video and semantic cameras documented below.
Note
The streaming of data from this sensor can be toggled off and on by pressing the SPACE key during the simulation. This will affect all the video cameras on the scene.
Toggling off the cameras can help make the simulation run faster, especially when there are several cameras. However, the lack of data on the stream may cause problems for some middlewares.
Bases: morse.core.sensor.Sensor
This special sensor is constructed by passing a list of other sensors, and creates a new datastream from the concatenation of other sensors’ local_data.
More accurately, it streams a dictionary of {<sensor name>: <sensor local_data>}.
Note that services exposed by original sensors are not exposed by the compound sensor.
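A minimal builder sketch, assuming the compound sensor is exposed in the builder as CompoundSensor and takes the list of already-created sensors to merge:
from morse.builder import *

robot = ATRV()

pose = Pose()
robot.append(pose)

gyro = Gyroscope()
robot.append(gyro)

# Stream {'pose': <pose local_data>, 'gyro': <gyro local_data>} as one sensor
# (a CompoundSensor class taking a list of sensors is an assumption).
compound = CompoundSensor([pose, gyro])
robot.append(compound)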
Bases: morse.sensors.depth_camera.AbstractDepthCamera
This sensor generates a 3D point cloud from the camera perspective.
Bases: morse.sensors.depth_camera.AbstractDepthCamera
This sensor generates a Depth ‘image’ from the camera perspective.
“Depth images are published as sensor_msgs/Image encoded as 32-bit float. Each pixel is a depth (along the camera Z axis) in meters.” [ROS Enhancement Proposal 118](http://ros.org/reps/rep-0118.html) on Depth Images.
If you are looking for PointCloud data, you can use external tools like [depth_image_proc](http://ros.org/wiki/depth_image_proc) which will use the intrinsic_matrix and the image to generate it, or eventually the XYZCameraClass in this module.
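Since each pixel is a 32-bit float depth in meters, a raw image buffer can be turned into a 2D array with numpy along these lines (a sketch; raw, width and height are placeholders for data obtained from your middleware):
import numpy as np

def depth_image_to_array(raw, width, height):
    # Interpret the raw buffer as a (height, width) array of depths in meters
    return np.frombuffer(raw, dtype=np.float32).reshape(height, width)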
Bases: morse.sensors.depth_camera.AbstractDepthCamera
This sensor gets raw Z-Buffer from the camera perspective.
Bases: morse.core.sensor.Sensor
This sensor emulates a GPS, providing the exact coordinates in the Blender scene. The coordinates provided by the GPS are with respect to the origin of the Blender coordinate reference.
Bases: morse.core.sensor.Sensor
This sensor emulates a Gyroscope, providing the yaw, pitch and roll angles of the sensor object with respect to the Blender world reference axes.
Angles are given in radians.
Bases: morse.core.sensor.Sensor
This sensor collects the positions of the bones in the human armature for the file $MORSE_ROOT/data/robots/human.blend.
It stores the position and orientation of the general armature object, as well as the local rotation of each individual bone. The rotation angles are given in radians.
This sensor will only work for the human.blend model, as it uses a specific naming convention for each of the bones.
You can also check the general documentation of the human component.
Bases: morse.core.sensor.Sensor
This sensor emulates an Inertial Measurement Unit (IMU), measuring the angular velocity and linear acceleration including acceleration due to gravity.
If the robot has a physics controller, the velocities are directly read from its properties localAngularVelocity and worldLinearVelocity. Otherwise the velocities are calculated by simple differentiation. Linear acceleration is always computed by differentiation of the linear velocity. The measurements are given in the IMU coordinate system, so the location and rotation of the IMU with respect to the robot are taken into account.
Get the speed and acceleration of the robot and transform them into the IMU frame.
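An illustrative sketch of the differentiation fallback and the frame change (not the actual MORSE code; the world-to-IMU rotation matrix is assumed to be known):
import numpy as np

def imu_readings(p_prev, p_curr, v_prev, rot_world2imu, frequency):
    # Differentiate position to get the linear velocity (world frame)
    v_curr = (np.asarray(p_curr) - np.asarray(p_prev)) * frequency
    # Differentiate velocity and include the gravity contribution
    gravity = np.array([0.0, 0.0, -9.81])
    a_world = (v_curr - np.asarray(v_prev)) * frequency - gravity
    # Express both measurements in the IMU coordinate system
    return rot_world2imu @ v_curr, rot_world2imu @ a_world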
Bases: morse.core.sensor.Sensor
This sensor emulates the Kinect output, i.e. both a depth image and an RGBA image.
Bases: morse.core.sensor.Sensor
This is a generic sensor class used to emulate laser range scanners, including a variety of SICK and Hokuyo sensors.
This sensor works by generating a series of rays in predefined directions, and then computing whether any active object is found within a certain distance from the origin of the sensor.
The resolution and detection range can be completely configured using the MORSE Builder API. This will generate a flat mesh with a semi-circular shape, where its vertices represent the directions in which the rays of the sensor are cast. It is also possible to create a sensor with multiple scan layers, such as the SICK LD-MRS. This is configured using the parameters specified below.
Note
Objects in the scene with the No collision setting in their Game properties will not be detected by this sensor.
[Images: SICK LMS500, SICK LD-MRS and Hokuyo scanner models]
The number and direction of the rays emitted by the sensor is determined by the vertices of a semi-circle mesh parented to the sensor. The sensor will cast rays from the center of the sensor in the direction of each of the vertices in the semi-circle.
Three preconfigured scanners are available: a SICK LMS500 laser scanner, a Hokuyo and a SICK LD-MRS. The example below shows how to add them in a simulation:
from morse.builder import *
# Append the preconfigured laser scanners
sick = Sick() # range: 30m, field: 180deg, 180 sample points
hokuyo = Hokuyo() # range: 30m, field: 270deg, 1080 sample points
sick_ld_mrs = SickLDMRS() # range: 30m, field 100deg, 4 layers, 400 points per layer
All these default parameters can be changed (cf. the configuration parameters below). An example of how to change the arc object using the Builder API is shown below:
from morse.builder import *
# Append a sick laser
sick = Sick()
sick.properties(resolution = 5)
sick.properties(scan_window = 90)
sick.properties(laser_range = 5.0)
Note
In some special cases (like multi-robot setups), you may need to additionally call sick.create_sick_arc() after setting your scanner properties.
The rays are created from (-scan_window/2) to (+scan_window/2), so range_list will contain the ranges in clockwise order.
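As an illustration of the angular coverage (a sketch; the exact sample count and ordering conventions of the generated arc may differ):
def ray_angles(scan_window, resolution):
    # Directions (in degrees) covered by the rays, from -scan_window/2
    # to +scan_window/2, one ray every `resolution` degrees
    n_rays = int(scan_window / resolution) + 1
    return [-scan_window / 2.0 + i * resolution for i in range(n_rays)]

angles = ray_angles(90, 5)   # [-45.0, -40.0, ..., 40.0, 45.0]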
Another example for the SICK LD-MRS:
from morse.builder import *
sick = SickLDMRS()
sick.properties(Visible_arc = True)
sick.properties(resolution = 1.0)
sick.properties(scan_window = 100)
sick.properties(laser_range = 50.0)
sick.properties(layers = 4)
sick.properties(layer_separation = 0.8)
sick.properties(layer_offset = 0.25)
As with any other component, it is possible to adjust the refresh frequency of the sensor, after it has been defined in the builder script. For example, to set the frequency to 1 Hz:
sick.frequency(1.0)
Bases: morse.core.sensor.Sensor
This sensor collects the positions of the bones in the human armature for the file $MORSE_ROOT/data/robots/mocap_human.blend.
It stores the position and orientation of the general armature object, as well as the local rotation of each individual bone. The rotation angles are given in radians. It exports the same interface as the human posture sensor, but some joints are not reflected by the Kinect, and so they keep their initial values.
This sensor will only work for the mocap_human.blend model, as it uses a specific naming convention for each of the bones.
You can also check the general documentation of the human component.
Bases: morse.core.sensor.Sensor
This sensor produces the relative displacement with respect to the position and rotation at the previous Blender tick. It can also compute the position of the robot with respect to its original position, and the associated speed.
The angles for yaw, pitch and roll are given in radians.
Note
This sensor always provides perfect data. To obtain more realistic readings, it is recommended to add modifiers.
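For example, Gaussian noise can be added from the builder script (a sketch; the 'Noise' modifier and its pos_std / rot_std parameters are assumptions):
from morse.builder import *

robot = ATRV()

odometry = Odometry()
robot.append(odometry)
# Add Gaussian noise to the exported displacement
# ('Noise' modifier with pos_std / rot_std keyword arguments assumed).
odometry.alter('Noise', pos_std=0.05, rot_std=0.01)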
Bases: morse.core.sensor.Sensor
This sensor returns the full pose of the sensor, i.e. both translation and rotation with respect to the Blender world frame.
Bases: morse.core.sensor.Sensor
This sensor can be used to determine which other objects are within a certain radius of the sensor. It performs its test based only on distance. The type of tracked objects can be specified using the Track property.
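A builder sketch (the Proximity builder class and the Track / Range property names are assumptions; Track holds the Game Property tag of the objects to follow):
from morse.builder import *

robot = ATRV()

# Report every object tagged 'Robot_Tag' within 5 m of the sensor
# (Track and Range property names are assumptions).
proximity = Proximity()
proximity.properties(Track = "Robot_Tag", Range = 5.0)
robot.append(proximity)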
Bases: morse.core.sensor.Sensor
Simple sensor that provides the current rotation angles of the pan and tilt segments of the PTU actuator. The angles returned are in radians in the range (-pi, pi).
Note
This sensor must be added as a child of the PTU you want to sense, like in the example below:
from morse.builder import *

robot = ATRV()

ptu = PTU()
robot.append(ptu)
ptu.translate(z=0.9)

ptu_pose = PTUPosture('ptu_pose')
ptu.append(ptu_pose)
Note
The angles are given with respect to the orientation of the robot.
See also: PTU actuator
Bases: morse.core.sensor.Sensor
This is a multi-functional component specific to Search and Rescue scenarios, where the robot must be able to aid human victims. The sensor is capable of detecting any victim located within a cone in front of the robot, with a range delimited in the properties of the Blender object. The output of the sensor is a list of the detected victims and their positions in the simulated world. This sensor works only with the human victim object.
Additionally, the sensor provides a number of services related to the capabilities of the robot to help the nearest victim:
- Report on the condition of a victim
- Report the capabilities of the robot
- Heal a victim (if the robot has compatible capabilities with the requirements of the victim)
In the test scenarios, human victims are shown in red. When a robot with the adequate capabilities approaches, it is able to help the victims. The sensor detects whether a victim is in front of the robot; when instructed to heal the victim, it changes the Game Properties of the object to reduce its injured value, gradually changing the colour of the victim to green and its status to healthy.
Returns the list describing the abilities with which the robot is equipped. It must match the requirements of the victim for the robot to be able to heal it.
Bases: morse.sensors.camera.Camera
This sensor emulates a high-level camera that outputs the names of the objects located within the field of view of the camera.
The sensor first determines which objects are to be tracked (objects marked with a Logic Property called Object; cf. the documentation on passive objects for more on that). If the Label property is defined, it is used as the exported name; otherwise the Blender object name is used.
Then a test is made to identify which of these objects are inside of the view frustum of the camera. Finally, a single visibility test is performed by casting a ray from the center of the camera to the center of the object. If anything other than the test object is found first by the ray, the object is considered to be occluded by something else, even if it is only the center that is being blocked.
The cameras make use of Blender’s bge.texture module, which requires a graphics card capable of GLSL shading. Also, the 3D view window in Blender must be set to draw Textured objects.
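The exported datastream is essentially the list of visible tracked objects; a purely hypothetical excerpt could look like this (field names may differ between MORSE versions):
visible_objects = [
    {
        "name": "RollingChair",               # Label property or Blender name
        "position": [2.4, -1.1, 0.3],         # world coordinates, in meters
        "orientation": [0.0, 0.0, 0.0, 1.0],  # quaternion
    },
]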
Bases: morse.core.sensor.Sensor
The purpose of this component is to link together one or more cameras, and provide them with the possibility to move together as a single unit. It will also provide the connection interface to use the information of the cameras attached to it. In the case of two cameras, it will provide the stereo information generated from the two camera images.
A stereo unit needs to be the parent of one or more cameras. Otherwise, it does no useful function.
The movement of the stereo unit is implemented by making it the child of a Pan-Tilt unit actuator.
Here is an example of how to construct the whole stereo system to mount on top of a robot, using the Builder API. Note the order in which components are appended to each other, as this is important to get the desired functionality:
from morse.builder import *
# Add a robot
atrv = ATRV()
atrv.translate(z=0.1000)
# A pan-tilt unit to be able to orient the cameras
Platine = PTU()
Platine.translate(x=0.2000, z=0.9000)
atrv.append(Platine)
# The STEREO UNIT, where the two cameras will be fixed
Stereo = StereoUnit()
Stereo.translate(z=0.0400)
Platine.append(Stereo)
# Left camera
CameraL = VideoCamera()
CameraL.translate(x=0.1000, y=0.2000, z=0.0700)
Stereo.append(CameraL)
CameraL.properties(capturing = True)
CameraL.properties(cam_width = 320)
CameraL.properties(cam_height = 240)
CameraL.properties(cam_focal = 25.0000)
# Right camera
CameraR = VideoCamera()
CameraR.translate(x=0.1000, y=-0.2000, z=0.0700)
Stereo.append(CameraR)
CameraR.properties(capturing = True)
CameraR.properties(cam_width = 320)
CameraR.properties(cam_height = 240)
CameraR.properties(cam_focal = 25.0000)
Bases: morse.core.sensor.Sensor
This sensor emulates a Thermometer, measuring the temperature with respect to the distance to heat sources. It defines a default temperature throughout the scenario, which is affected by local fire sources. The temperature rises exponentially when the distance between the sensor and the heat source decreases.
The default temperature is specified as a parameter Temperature of the Scene_Script_Holder Empty object in the simulation file. It is expressed in degrees Celsius.
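An illustrative model of this behaviour, not the exact MORSE formula, could look like the following (alpha is a hypothetical decay parameter):
import math

def temperature(default_temp, heat_sources, alpha=0.2):
    # heat_sources: list of (source_temperature, distance_to_sensor) pairs;
    # each source raises the reading exponentially as the distance shrinks
    temp = default_temp
    for source_temp, distance in heat_sources:
        temp += source_temp * math.exp(-alpha * distance)
    return temp

# A 200 degC fire 3 m away on top of a 15 degC ambient temperature
reading = temperature(15.0, [(200.0, 3.0)])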
Bases: morse.sensors.camera.Camera
This sensor emulates a single video camera. It generates a series of RGBA images. Images are encoded as binary char arrays, with 4 bytes per pixel.
The cameras make use of Blender’s bge.texture module, which requires a graphics card capable of GLSL shading. Also, the 3D view window in Blender must be set to draw Textured objects.
The camera configuration parameters implicitly define a geometric camera in Blender units. Knowing that the cam_focal attribute represents the distance, in Blender units, at which the largest image dimension spans 32.0 Blender units, the camera intrinsic calibration matrix is defined as:
| alpha_u    0      u_0 |
|    0    alpha_v   v_0 |
|    0       0       1  |
where:
- alpha_u == alpha_v = cam_focal * (largest image dimension in pixels) / 32.0 (pixels are assumed to be square),
- u_0 = cam_width / 2,
- v_0 = cam_height / 2.
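Following these definitions, the matrix can be assembled directly from the camera properties, for instance (a sketch, assuming square pixels as above):
import numpy as np

def intrinsic_matrix(cam_width, cam_height, cam_focal):
    # Focal length in pixels, derived from the cam_focal definition above
    alpha = cam_focal * max(cam_width, cam_height) / 32.0
    return np.array([[alpha, 0.0,   cam_width / 2.0],
                     [0.0,   alpha, cam_height / 2.0],
                     [0.0,   0.0,   1.0]])

# Example with a 320x240 image and cam_focal = 25.0 (as in the stereo example above)
K = intrinsic_matrix(320, 240, 25.0)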