-Added copying of shaders and attributes in osg::Program copy constructor.
-Changed StateSet::compare function to compare Uniforms and their
override values. Previously it compared a RefUniformPair."
Also, there was a small bug in osgDB's CMakeLists.txt that was causing an error when I tested with CMake 2.4.4.
IF(${OSG_DEFAULT_IMAGE_PLUGIN_FOR_OSX} STREQUAL "quicktime")
was changed to
IF(OSG_DEFAULT_IMAGE_PLUGIN_FOR_OSX STREQUAL "quicktime")
"
Node::Node(const Node& node, const CopyOp& copyop) :
    _stateSet(copyop(node.getStateSet())),
It doesn't call the setStateSet method of osg::Node (or osg::Drawable). So the parent
list of the state set is not updated with the new node (drawable)."
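A hedged sketch of the kind of fix described above (not necessarily the committed patch): route the copied StateSet through setStateSet() so the StateSet's parent list is updated with the newly copied node.

    Node::Node(const Node& node, const CopyOp& copyop) :
        Object(node, copyop),
        _stateSet(0)                       // don't copy the pointer directly
    {
        // setStateSet() registers this node in the StateSet's parent list
        setStateSet(copyop(node.getStateSet()));
    }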
On destruction of some static variables, the global Referenced mutex is used
to lock access to the parent lists of state attributes, nodes and so on.
This can even happen after the mutex itself has already been destroyed.
This change to Referenced.cpp revision 9851 uses the same technique as the
DeleteHandlerPointer already in Referenced.cpp to return a null pointer for
the global Referenced lock once it has been destroyed."
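A hedged sketch of the guard technique referred to above (names are illustrative, not the actual Referenced.cpp symbols): a static holder object nulls its pointer when it is destroyed, so code running during late static destruction receives a zero pointer instead of a dangling mutex.

    #include <OpenThreads/Mutex>

    struct GlobalMutexPointer
    {
        OpenThreads::Mutex* _ptr;
        GlobalMutexPointer(OpenThreads::Mutex* ptr) : _ptr(ptr) {}
        ~GlobalMutexPointer() { _ptr = 0; }        // mark the mutex as gone
        OpenThreads::Mutex* get() const { return _ptr; }
    };

    static OpenThreads::Mutex* getGlobalMutexSketch()
    {
        static OpenThreads::Mutex s_mutex;
        static GlobalMutexPointer s_pointer(&s_mutex);
        return s_pointer.get();                    // callers must cope with 0
    }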
DisplaySettings.cpp: OSG_COMPIlE_CONTEXTS -> OSG_COMPILE_CONTEXTS
AnimtkViewer.cpp: is a 3d poker game client -> is an example for viewing
osgAnimation animations"
It's a one-line change against OSG 2.8.0 (see line 196). I've already tested the change and confirmed it fixes the crashes described above."
used but never restored to the decimal notation. That made OSG print messages
like the following after some notifications:
Warning: detected OpenGL error 'invalid value' after RenderBin::draw(,)
RenderStage::drawInner(,) FBO status= 0x8cd5
[...]
Scaling image 'brick_side.JPG' from (1b4,24f) to (200,200) <--- Values in hex
because of previous error.
[...]"
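A hedged sketch of the kind of fix implied above (illustrative, not the exact patch): when a value is printed in hexadecimal on the notify stream, switch the stream back to decimal afterwards so later messages are unaffected.

    // 'status' is the GLenum returned by the FBO completeness check
    osg::notify(osg::WARN) << "RenderStage::drawInner(,) FBO status = 0x"
                           << std::hex << status << std::dec << std::endl;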
circumstances under which this bug occurs are rather specific, but the
basic problem occurs when one translation unit other than libosg.so
constructs an object that is a subclass of osg::Shape and another
translation unit other than libosg.so tries to perform a dynamic_cast or
other RTTI-based operation on that object. Under these circumstances,
the RTTI operation will fail. In my case, the translation units involved
were an application and osgdb_ive.so. The application constructed a
scene graph that included instantiations of subclasses of osg::Shape.
Depending on how the user ran the application, it would write the scene
graph to an IVE file using osgDB::writeNodeFile(). The dynamic_cast
operations in DataOutputStream::writeShape() would fail on the first
subclass of osg::Shape that was encountered. This is because there were
two different RTTI data objects for all osg::Shape subclasses being
compared: one in the application and one in osgdb_ive.so.
The fix for this is simple. We must ensure that at least one member
function of each of the subclasses of the polymorphic type osg::Shape is
compiled into libosg.so so that there is exactly one RTTI object for
that type in libosg.so. Then, all code linking against libosg.so will
use that single RTTI object. The following message from a list archive
sort of explains the issue and the solution:
http://aspn.activestate.com/ASPN/Mail/Message/1688156
While the posting has to do with Boost.Python, the problem applies to
C++ libraries in general."
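A hedged, generic illustration of the fix idea (the names below are illustrative, not the actual osg::Shape hierarchy): give each polymorphic class at least one out-of-line member function, typically the destructor, defined in a single translation unit of the shared library, so its vtable and RTTI data are emitted exactly once.

    // sketch.h -- installed header
    struct PolymorphicBase
    {
        virtual ~PolymorphicBase();      // declared, but not defined inline
    };

    struct ConcreteShape : PolymorphicBase
    {
        virtual ~ConcreteShape();        // the "key function"
    };

    // sketch.cpp -- compiled into the shared library only
    PolymorphicBase::~PolymorphicBase() {}
    ConcreteShape::~ConcreteShape() {}
    // With the destructors emitted here, every client performing
    // dynamic_cast<ConcreteShape*>(p) compares against the single type_info
    // object exported from this library.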
The fix was to convert osg::State to use C pointers for the set of applied PerContextProgram objects, and to use the osg::Observer mechanism to avoid dangling pointers being maintained in osg::State.
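A hedged sketch of the osg::Observer idea mentioned above (illustrative, not the actual osg::State code; it assumes the Observer/addObserver API of the OSG of that era): the observer is told when the observed object is deleted, so a raw pointer to it can be cleared instead of dangling.

    #include <osg/Observer>
    #include <osg/Referenced>

    struct AppliedProgramTracker : public osg::Observer
    {
        osg::Referenced* _applied;                   // raw C pointer, not a ref_ptr

        AppliedProgramTracker() : _applied(0) {}

        void track(osg::Referenced* object)
        {
            _applied = object;
            object->addObserver(this);               // request deletion notification
        }

        virtual void objectDeleted(void* object)
        {
            if (object == _applied) _applied = 0;    // pointer never dangles
        }
    };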
The methods
getProjectionMatrixAsOrtho()
getProjectionMatrixAsFrustum()
getProjectionMatrixAsPerspective()
getViewMatrixAsLookAt() (2x)
are now const, as they only call const methods of osg::Matrixf/d.
"
returns true if (the extension string is supported or the GL version is greater than or
equal to a specified version) and the extension has not been disabled. This makes it
possible to disable extensions that are now available as part of the core OpenGL spec.
Updated Texture.cpp to use this method.
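A hedged usage sketch, assuming the method described above is osg::isGLExtensionOrVersionSupported() from include/osg/GLExtensions:

    #include <osg/GLExtensions>

    bool supportsGenerateMipmap(unsigned int contextID)
    {
        // true if GL_SGIS_generate_mipmap is in the extension string or the
        // context's GL version is >= 1.4, unless the user has disabled the
        // extension (e.g. via OSG_GL_EXTENSION_DISABLE)
        return osg::isGLExtensionOrVersionSupported(contextID, "GL_SGIS_generate_mipmap", 1.4f);
    }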
VertexBufferObject::compileBuffer().
The offsets of newly added Arrays were not properly
calculated. This submission tries to find a
matching empty slot when the total size of
the VBO has not changed (e.g. when an array
is replaced by another array of the same size).
This fixes the overwriting issue that I showed in my posting
"Bug in VertexBufferObject::compileBuffer" on OSG-Users.
"
unsigned int Image::computeNumComponents(GLenum pixelFormat)
so I added these types to the switch statement:
case(GL_RED_INTEGER_EXT): return 1;
case(GL_GREEN_INTEGER_EXT): return 1;
case(GL_BLUE_INTEGER_EXT): return 1;
case(GL_ALPHA_INTEGER_EXT): return 1;
case(GL_RGB_INTEGER_EXT): return 3;
case(GL_RGBA_INTEGER_EXT): return 4;
case(GL_BGR_INTEGER_EXT): return 3;
case(GL_BGRA_INTEGER_EXT): return 4;
case(GL_LUMINANCE_INTEGER_EXT): return 1;
case(GL_LUMINANCE_ALPHA_INTEGER_EXT): return 2;
That's all... now it computes the number of components and, thus, the image size
correctly."
--------------------------------------------
Description:
The patch provides a new class, PixelDataBufferObject, which is capable of allocating GPU-side memory (PBO memory) of arbitrary size. The memory can then be bound in read mode (GL_PIXEL_UNPACK_BUFFER_ARB) or in write mode (GL_PIXEL_PACK_BUFFER_ARB). Binding the buffer in write mode forces the driver to write data from bound textures into the buffer (i.e. glGetTexImage). Using the buffer in read mode makes it possible to read data from the buffer into a texture with e.g. glTexSubImage or other instructions. Hence no data is copied over the CPU (host memory); all the operations are done in GPU memory.
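A hedged usage sketch based on the description above; the method names (setDataSize, bindBufferInWriteMode, bindBufferInReadMode, unbindBuffer) are assumptions drawn from this description rather than from the final committed header, and the GL calls assume a texture is currently bound.

    #include <osg/BufferObject>

    void roundTripViaPBO(osg::State& state, unsigned int width, unsigned int height)
    {
        osg::ref_ptr<osg::PixelDataBufferObject> pbo = new osg::PixelDataBufferObject;
        pbo->setDataSize(width*height*4);                 // allocate PBO memory on the GPU

        // write mode (GL_PIXEL_PACK_BUFFER_ARB): GL packs texture data into the buffer
        pbo->bindBufferInWriteMode(state);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        pbo->unbindBuffer(state.getContextID());

        // read mode (GL_PIXEL_UNPACK_BUFFER_ARB): GL unpacks from the buffer into the texture
        pbo->bindBufferInReadMode(state);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        pbo->unbindBuffer(state.getContextID());
    }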
--------------------------------------------
Compatibility:
The new class requires the unbindBuffer method of the base class BufferObject to be virtual, which shouldn't break any functionality of existing classes. Apart from that, the new class is fully orthogonal to the existing ones, so it can be safely added to the existing osg system.
--------------------------------------------
Testing:
The new class was tested with the current svn version of osgPPU. I am using the new class to copy data from textures into the PBO and hence provide them to CUDA kernels. Reading the results back from CUDA is also implemented using the provided patch. The patch thus enables easy interoperability between CUDA and osg (osgPPU ;) ).
--------------------------------------------
I think it would in general be better to derive the PixelBufferObject class from PixelDataBufferObject, since the latter is a generalisation of the former. However, this could break current functionality, hence I haven't implemented it that way. I would push that onto the stack of wished-for osg 3.x features, since it would reflect the OpenGL PBO functionality better through the classes.
"
"To reproduce the bug:
1. Create a template osg::Sequence node (and underlying geometry) but do not attach the node to the current active scenegraph.
2. At some point during the rendering loop (perhaps on a keystroke), clone the sequence node. I use the call:
dynamic_cast<osg::Node*>(templateNode->clone(osg::CopyOp((osg::CopyOp::CopyFlags)osg::CopyOp::DEEP_COPY_NODES)))
3. Set the cloned sequence node duration to a value that makes the animation run slower (e.g. 2.0).
4. Start the cloned sequence (using setMode()).
5. Repeat steps 2 - 4 and observe that the cloned sequences do not run slower but run at the original speed, appearing to ignore the duration that has been set on them.
Looking at the 'good documentation' (the 2.4 source code), I see that _start is being set to _now (osg::Sequence::setMode(), line 192). Should _start not be set to -1.0 instead?"
* When PDS is used, RenderStage::runCameraSetUp sets a flag indicating that the FBO already has stencil/depth buffers attached. This prevents adding another depth buffer.
* Sets correct traits for the p-buffer if PDS is used and something goes wrong with the FBO setup, or if the p-buffer is used directly.
* Adds a warning to the camera if the user adds a depth/stencil attachment that is already attached through PDS.
* Sets blitMask when blit is used to resolve the buffer.
There is also a new example using a multisampled FBO."
I have not reverted the added compiler options. I assume that one may want to have warnings enabled for the application but may not want to see them while the OSG libraries and examples compile.
Modified files:
osg/Export - now explicitly includes osg/Config to make sure OSG_DISABLE_MSVC_WARNINGS is read
osg/Config.in - declares OSG_DISABLE_MSVC_WARNINGS flag to be added to autogenerated osg/Config
CMakeLists.txt - declares OSG_DISABLE_MSVC_WARNINGS as option with default ON setting
"
The fix was to check whether glGetString( GL_VERSION ) returned a null pointer (Ref. svn diff below). The altered src/osg/GLExtensions.cpp is zipped and attached to this email."
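A hedged sketch of the kind of check described above (not the verbatim diff; it assumes the check lives in a function returning the GL version number as a float):

    const char* versionString = (const char*)glGetString(GL_VERSION);
    if (!versionString)
    {
        osg::notify(osg::WARN) << "Warning: glGetString(GL_VERSION) returned NULL;"
                                  " is a valid graphics context current?" << std::endl;
        return 0.0f;                 // bail out instead of dereferencing NULL
    }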
set.
The optimization is based on the observation that matrix-matrix multiplication
with a dense 4x4 matrix takes 4^3 operations, whereas multiplication with a
translate or scale matrix takes only 4^2 operations, a gain of a
factor of 4 for these special cases.
The change implements these special cases, provides a unit test for the
implementation, and converts uses of the more expensive dense matrix-matrix
routine to the specialized versions.
Depending on the transform nodes in the scenegraph, this change gives a
noticeable improvement.
For example, the osgforest code using the MatrixTransform is about 20% slower
than the same codepath using the PositionAttitudeTransform instead of the
MatrixTransform with this patch applied.
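A hedged sketch of one such special case (illustrative, not the actual osg::Matrix code): with OSG's row-vector convention, pre-multiplying by a pure translation only touches the last row, so the work drops from the 4^3 dense product to roughly 4^2 operations.

    void preMultTranslate(double m[4][4], double tx, double ty, double tz)
    {
        // m = T * m, where T is the identity with (tx, ty, tz, 1) in its last row
        for (unsigned int col = 0; col < 4; ++col)
        {
            m[3][col] += tx*m[0][col] + ty*m[1][col] + tz*m[2][col];
        }
    }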
If I remember right, the SSE-type optimizations did *not* provide a factor-4
improvement. Also, these changes are totally independent of any CPU or
instruction-set architecture, so I would prefer to have this kind of
change instead of hand-coded, CPU-dependent assembly. If we need that
hand-tuned code, it can go on top of these changes, which would then have
to provide hand-optimized variants of the specialized versions to give an
even better result in the end.
Another change included here is to the rotation-matrix-from-quaternion
code. There is a sqrt call which can be optimized away, since we in
effect divide by sqrt(length)*sqrt(length), which is just length.
"
they are written to disk, either inline or as an external file. Added support for
this in the .ive plugin. The default WriteHint is NO_PREFERENCE, in which case it's
up to the reader/writer to decide.
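A hedged usage sketch, assuming the hint is exposed on osg::Image as setWriteHint() ('image' and 'scene' are placeholders for an existing image and scene graph):

    image->setWriteHint(osg::Image::EXTERNAL_FILE);   // or STORE_INLINE / NO_PREFERENCE
    osgDB::writeNodeFile(*scene, "model.ive");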
occlusion query result is not ready for retrieval when the app tries to
retrieve it. This fix adds an application-level wait loop to ensure the
result is ready for retrieval. This code is not compiled by default; add "-D
FORCE_QUERY_RESULT_AVAILABLE_BEFORE_RETRIEVAL" to get this code.
Full, gory details, to the best of my recollection:
The conditions under which we encountered this issue are as follows: 64-bit
processor, Mac/Linux OS, multiple NVIDIA GPUs, multiple concurrent draw
threads, VRJuggler/SceneView-based viewer, and a scene graph containing
OcclusionQueryNodes. Todd wrote a small test program that produces an almost
instant crash in this environment. We verified the crash does not occur in a
similar environment with a 32-bit processor, but we have not yet tested on
Windows and have not yet tested with osgViewer.
The OpenGL spec states clearly that, if an occlusion query result is not yet
ready, an app can go ahead and attempt to retrieve it, and OpenGL will
simply block until the result is ready. Indeed, this is how
OcclusionQueryNode is written, and this has worked fine on several platforms
and configurations until Todd's test program.
By trial and error and dumb luck, we were able to work around the crash by
inserting a wait loop that forces the app to only retrieve the query after
OpenGL says it is available. As this should not be required (OpenGL should
do this implicitly, and more efficiently), the wait loop code is not
compiled by default. Developers requiring this work around must explicitly
add "-D FORCE_QUERY_RESULT_AVAILABLE_BEFORE_RETRIEVAL" to the compile
options to include the wait loop."
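A hedged sketch of the wait loop described above (not the exact committed code; 'ext' stands for the per-context query extensions object and 'queryID' for the occlusion query being retrieved):

    #ifdef FORCE_QUERY_RESULT_AVAILABLE_BEFORE_RETRIEVAL
        // spin until OpenGL reports the result as available, so the retrieval
        // below never has to block inside the driver
        GLint available = 0;
        while (!available)
        {
            ext->glGetQueryObjectiv(queryID, GL_QUERY_RESULT_AVAILABLE, &available);
        }
    #endif
        GLint numPixels = 0;
        ext->glGetQueryObjectiv(queryID, GL_QUERY_RESULT, &numPixels);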
A new attribute, DatabasePager::_expiryFrames, sets the number of frames a PagedLOD child is kept in memory. The attribute is set with the DatabasePager::setExpiryFrames method or the OSG_EXPIRY_FRAMES environment variable.
A new attribute, PagedLOD::PerRangeData::_frameNumber, contains the frame number of the last cull traversal.
Children of a PagedLOD are expired when the time _AND_ the number of frames since the last cull traversal exceed OSG_EXPIRY_DELAY _AND_ OSG_EXPIRY_FRAMES respectively. By default OSG_EXPIRY_FRAMES = 1, which means that nodes from the last cull/rendering traversal will not be expired even if the last cull time exceeds OSG_EXPIRY_DELAY. Setting OSG_EXPIRY_FRAMES = 0 restores the previous behaviour of PagedLOD.
Setting OSG_EXPIRY_FRAMES > 0 fixes problems with children reloading in lazy-rendering applications. The required behaviour is achieved by manipulating OSG_EXPIRY_DELAY and OSG_EXPIRY_FRAMES together.
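A hedged usage sketch of the new setting described above (using setExpiryFrames as named in this submission, and assuming an osgViewer viewer from which the pager can be fetched):

    osgDB::DatabasePager* pager = viewer.getDatabasePager();
    pager->setExpiryDelay(5.0);    // seconds since the last cull traversal
    pager->setExpiryFrames(10);    // frames since the last cull traversal
    // or from the environment: OSG_EXPIRY_DELAY=5 OSG_EXPIRY_FRAMES=10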
Two interface changes are made:
DatabasePager::updateSceneGraph(double currentFrameTime) is replaced by DatabasePager::updateSceneGraph(const osg::FrameStamp& frameStamp). The previous method is kept inside an #if 0 clause in the header file; Robert, please decide whether you want to include it.
PagedLOD::removeExpiredChildren(double expiryTime, NodeList& removedChildren) is deprecated (a warning is printed); when subclassing, use PagedLOD::removeExpiredChildren(double expiryTime, int expiryFrame, NodeList& removedChildren) instead. "