#include <Inventor/nodes/SoCamera.h>
Inheritance diagram for SoCamera:
Public Types | |
enum | ViewportMapping { CROP_VIEWPORT_FILL_FRAME, CROP_VIEWPORT_LINE_FRAME, CROP_VIEWPORT_NO_FRAME, ADJUST_CAMERA, LEAVE_ALONE } |
enum | StereoMode { MONOSCOPIC, LEFT_VIEW, RIGHT_VIEW } |
Public Member Functions | |
void | pointAt (const SbVec3f &targetpoint) |
void | pointAt (const SbVec3f &targetpoint, const SbVec3f &upvector) |
virtual void | scaleHeight (float scalefactor)=0 |
virtual SbViewVolume | getViewVolume (float useaspectratio=0.0f) const =0 |
void | viewAll (SoNode *const sceneroot, const SbViewportRegion &vpregion, const float slack=1.0f) |
void | viewAll (SoPath *const path, const SbViewportRegion &vpregion, const float slack=1.0f) |
SbViewportRegion | getViewportBounds (const SbViewportRegion &region) const |
void | setStereoMode (StereoMode mode) |
StereoMode | getStereoMode (void) const |
void | setStereoAdjustment (float adjustment) |
float | getStereoAdjustment (void) const |
void | setBalanceAdjustment (float adjustment) |
float | getBalanceAdjustment (void) const |
virtual void | doAction (SoAction *action) |
virtual void | callback (SoCallbackAction *action) |
virtual void | GLRender (SoGLRenderAction *action) |
virtual void | getBoundingBox (SoGetBoundingBoxAction *action) |
virtual void | handleEvent (SoHandleEventAction *action) |
virtual void | rayPick (SoRayPickAction *action) |
virtual void | getPrimitiveCount (SoGetPrimitiveCountAction *action) |
Static Public Member Functions | |
void | initClass (void) |
Public Attributes | |
SoSFEnum | viewportMapping |
SoSFVec3f | position |
SoSFRotation | orientation |
SoSFFloat | aspectRatio |
SoSFFloat | nearDistance |
SoSFFloat | farDistance |
SoSFFloat | focalDistance |
Protected Member Functions | |
SoCamera (void) | |
virtual | ~SoCamera () |
virtual void | viewBoundingBox (const SbBox3f &box, float aspect, float slack)=0 |
virtual void | jitter (int numpasses, int curpass, const SbViewportRegion &vpreg, SbVec3f &jitteramount) const |
To be able to view a scene, one needs to have a camera in the scene graph. A camera node will set up the projection and viewing matrices for rendering of the geometry in the scene.
This node just defines the abstract interface by collecting the fields common to all camera type nodes. Use one of the non-abstract camera node subclasses within a scene graph. The ones included by default in the Coin library are SoPerspectiveCamera and SoOrthographicCamera, which use the two projection types their names indicate.
Note that the viewer components of the GUI glue libraries for Coin (SoXt, SoQt, SoWin, etc.) will automatically insert a camera into a scene graph if none has been defined.
It is possible to have more than one camera in a scene graph. One common trick is to use a second camera to display static geometry or overlay geometry (e.g. for head-up displays ("HUD")), as shown by this example code:
#include <Inventor/Qt/SoQt.h>
#include <Inventor/Qt/viewers/SoQtExaminerViewer.h>
#include <Inventor/nodes/SoNodes.h>

int main(int argc, char ** argv)
{
  QWidget * mainwin = SoQt::init(argv[0]);

  SoSeparator * root = new SoSeparator;
  root->ref();

  // Adds a camera and a red cone. The first camera found in the
  // scene graph by the SoQtExaminerViewer will be picked up and
  // initialized automatically.
  root->addChild(new SoPerspectiveCamera);
  SoMaterial * material = new SoMaterial;
  material->diffuseColor.setValue(1.0, 0.0, 0.0);
  root->addChild(material);
  root->addChild(new SoCone);

  // Set up a second camera for the remaining geometry. This camera
  // will not be picked up and influenced by the viewer, so the
  // geometry will be kept static.
  SoPerspectiveCamera * pcam = new SoPerspectiveCamera;
  pcam->position = SbVec3f(0, 0, 5);
  pcam->nearDistance = 0.1;
  pcam->farDistance = 10;
  root->addChild(pcam);

  // Adds a green cone to demonstrate static geometry.
  SoMaterial * greenmaterial = new SoMaterial;
  greenmaterial->diffuseColor.setValue(0, 1.0, 0.0);
  root->addChild(greenmaterial);
  root->addChild(new SoCone);

  SoQtExaminerViewer * viewer = new SoQtExaminerViewer(mainwin);
  viewer->setSceneGraph(root);
  viewer->show();

  SoQt::show(mainwin);
  SoQt::mainLoop();

  delete viewer;
  root->unref();
  return 0;
}
Enumerates the available possibilities for how the render frame should map the viewport.

Enumerates the possible stereo modes.
Constructor.

Destructor.
Sets up initialization for data common to all instances of this class, like submitting necessary information to the Coin type system. Reimplemented from SoNode. Reimplemented in SoOrthographicCamera and SoPerspectiveCamera.
Reorients the camera so that it points towards targetpoint. The positive y-axis is used as the up vector of the camera, unless the new camera direction is parallel to this axis, in which case the positive z-axis will be used instead.
Reorients the camera so that it points towards targetpoint, using upvector as the camera's up vector. This method is a Coin extension not present in the original Open Inventor API.
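As a short illustration of both overloads (a sketch; the `camera` pointer is assumed to refer to a camera node, e.g. an SoPerspectiveCamera, already in the scene graph):

```cpp
// Place the camera and aim it at the origin. The up vector is chosen
// automatically (positive y-axis, or positive z-axis as a fallback).
camera->position = SbVec3f(5.0f, 5.0f, 5.0f);
camera->pointAt(SbVec3f(0.0f, 0.0f, 0.0f));

// Or aim it with an explicit up vector (the Coin extension overload):
camera->pointAt(SbVec3f(0.0f, 0.0f, 0.0f), SbVec3f(0.0f, 0.0f, 1.0f));
```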
Sets a scalefactor for the height of the camera viewport. What "viewport height" means exactly in this context depends on the camera model. See documentation in subclasses. Implemented in SoOrthographicCamera and SoPerspectiveCamera.
Returns the total view volume covered by the camera under the current settings. This view volume is not adjusted to account for viewport mapping. If you want the same view volume as the one used during rendering, you should do something like this:
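The code example that belongs here seems to have been lost in extraction. A sketch of the intended idiom (assuming a `camera` pointer and an SbViewportRegion `myviewport` are available from the surrounding code):

```cpp
// Compute the view volume actually used for rendering, adjusted
// according to the camera's viewportMapping setting.
SbViewVolume vv;
float aspectratio = myviewport.getViewportAspectRatio();

switch (camera->viewportMapping.getValue()) {
case SoCamera::CROP_VIEWPORT_FILL_FRAME:
case SoCamera::CROP_VIEWPORT_LINE_FRAME:
case SoCamera::CROP_VIEWPORT_NO_FRAME:
  // The cropped mappings render with the camera's own aspect ratio.
  vv = camera->getViewVolume(0.0f);
  break;
case SoCamera::ADJUST_CAMERA:
  // Adjust the view volume to the viewport's aspect ratio.
  vv = camera->getViewVolume(aspectratio);
  if (aspectratio < 1.0f) vv.scale(1.0f / aspectratio);
  break;
case SoCamera::LEAVE_ALONE:
  vv = camera->getViewVolume(0.0f);
  break;
default:
  break;
}
```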
Also, for the CROPPED viewport mappings, the viewport might be changed if the viewport aspect ratio is not equal to the camera aspect ratio. See SoCamera::getViewportBounds() to see how this is done. Implemented in SoOrthographicCamera and SoPerspectiveCamera.
Position the camera so that all geometry of the scene from sceneroot is contained in the view volume of the camera, while keeping the camera orientation constant. Finds the bounding box of the scene and calls SoCamera::viewBoundingBox().
Position the camera so that all geometry of the scene in path is contained in the view volume of the camera. Finds the bounding box of the scene and calls SoCamera::viewBoundingBox().
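A minimal usage sketch for viewAll() (the `camera` and `root` pointers are assumed to come from the surrounding application code):

```cpp
// Fit the whole scene under 'root' into the camera's view volume,
// keeping the current camera orientation.
SbViewportRegion vpr(640, 480);
camera->viewAll(root, vpr);        // default slack of 1.0f
camera->viewAll(root, vpr, 1.2f);  // or with 20% extra space around the scene
```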
Based on the SoCamera::viewportMapping setting, convert the values of region to the viewport region we will actually render into.
Sets the stereo mode.

Returns the stereo mode.

Sets the stereo adjustment.

Returns the stereo adjustment.

Sets the stereo balance adjustment.

Returns the stereo balance adjustment.
This function performs the typical operation of a node for any action. Reimplemented from SoNode.
Action method for SoCallbackAction. Simply updates the state according to how the node behaves for the render action, so the application programmer can use the SoCallbackAction for extracting information about the scene graph. Reimplemented from SoNode.
Action method for the SoGLRenderAction. This is called during rendering traversals. Nodes that influence the rendering state in any way, or that want to send geometry primitives to OpenGL, override this method. Reimplemented from SoNode.
Action method for the SoGetBoundingBoxAction. Calculates the bounding box and center coordinates for the node, and modifies the values of the action to encompass the bounding box for this node and to shift the center point for the scene more towards the one for this node. Nodes that influence how geometry nodes calculate their bounding box also override this method to change the relevant state variables. Reimplemented from SoNode.
Picking actions can be triggered during handle event action traversal, and to do picking we need to know the camera state. Reimplemented from SoNode.
Action method for SoRayPickAction. Checks the ray specification of the action and tests for intersection with the data of the node. Nodes that influence the relevant state variables for how picking is done also override this method. Reimplemented from SoNode.
Action method for the SoGetPrimitiveCountAction. Calculates the number of triangle, line segment and point primitives for the node and adds these to the counters of the action. Nodes that influence how geometry nodes calculate their primitive count also override this method to change the relevant state variables. Reimplemented from SoNode.
Convenience method for setting up the camera definition to cover the given bounding box with the given aspect ratio. Multiplies the exact dimensions with a slack factor to have some space between the rendered model and the borders of the rendering area. If you define your own camera node class, be aware that this method should not set the orientation field of the camera, only the position, focal distance and near and far clipping planes. Implemented in SoOrthographicCamera and SoPerspectiveCamera.
"Jitter" the camera according to the current rendering pass (curpass), to get an antialiased rendering of the scene when doing multipass rendering. |
|
Set up how the render frame should map the viewport. The default is SoCamera::ADJUST_CAMERA.
Camera position. Defaults to <0, 0, 1>.
Camera orientation specified as a rotation value from the default orientation, where the camera is pointing along the negative z-axis with "up" along the positive y-axis.
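As a small illustration (a sketch; the `camera` pointer is assumed to refer to a camera node in the scene graph), a 45 degree rotation about the y-axis away from the default view direction could be set like this:

```cpp
// Rotate the camera 45 degrees about the y-axis, away from its default
// orientation (pointing along the negative z-axis, "up" along positive y).
camera->orientation = SbRotation(SbVec3f(0.0f, 1.0f, 0.0f), float(M_PI) / 4.0f);
```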
Aspect ratio for the camera (i.e. width / height). Defaults to 1.0.
Distance from camera position to the near clipping plane in the camera's view volume. Default value is 1.0. Value must be larger than 0.0, or it will not be possible to construct a valid viewing volume (for perspective rendering, at least). If you use one of the viewer components from the So[Xt|Qt|Win|Gtk] GUI libraries provided by Systems in Motion, they will automatically update this value for the scene camera according to the scene bounding box. Ditto for the far clipping plane.
Distance from camera position to the far clipping plane in the camera's view volume. Default value is 10.0. Must be larger than the SoCamera::nearDistance value, or it will not be possible to construct a valid viewing volume. Note that the range [nearDistance, farDistance] decides the dynamic range of the Z-buffer in the underlying polygon-rendering rasterizer. What this means is that if the near and far clipping planes of the camera are wide apart, the possibility of visual artifacts will increase. The artifacts will manifest themselves in the form of flickering of primitives close in depth. It is therefore a good idea to keep the near and far clipping planes of your camera(s) as closely fitted around the geometry of the scene graph as possible.
Distance from camera position to center of scene. |