This issue can be reproduced as follows:
1. Create an osgViewer window.
2. Press the right and left mouse buttons over the osgViewer window.
3. Move the mouse out of the window and release both buttons.
The osgViewer window handles only the first mouse release, so it believes the second mouse button was never released.
I attached a fix for this issue.
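As a minimal, hedged sketch (not part of the attached fix, with a hypothetical handler name), the following program logs every mouse PUSH/RELEASE event the viewer window delivers, which makes the missing second RELEASE easy to spot when following the steps above:

    #include <iostream>
    #include <osg/Notify>
    #include <osgGA/GUIEventHandler>
    #include <osgViewer/Viewer>

    // Logs every mouse PUSH/RELEASE delivered to the viewer window.
    class ButtonLogger : public osgGA::GUIEventHandler
    {
    public:
        virtual bool handle(const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter&)
        {
            switch (ea.getEventType())
            {
                case osgGA::GUIEventAdapter::PUSH:
                    osg::notify(osg::NOTICE) << "PUSH,    button mask = " << ea.getButtonMask() << std::endl;
                    break;
                case osgGA::GUIEventAdapter::RELEASE:
                    osg::notify(osg::NOTICE) << "RELEASE, button mask = " << ea.getButtonMask() << std::endl;
                    break;
                default:
                    break;
            }
            return false; // never consume the event
        }
    };

    int main()
    {
        osgViewer::Viewer viewer;
        viewer.setUpViewInWindow(100, 100, 800, 600);
        viewer.addEventHandler(new ButtonLogger);
        return viewer.run();
    }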
A Transform with an absolute reference frame is meant to ignore the model/view matrices calculated up to that point. The IntersectionVisitor, however, would keep the view matrices calculated up to that point, even though the Transform class throws out the calculated model matrix via computeLocalToWorldMatrix().
The change I made pushes an identity matrix as the view matrix when running into a transform with an absolute reference frame and pops the matrix off after the traversal.
To test this, I created a camera with a perspective view and added a transform with some geometry in it. Afterwards, I set the transform's reference frame to ABSOLUTE_RF and spun the camera around using the trackball manipulator. When trying to pick with a LineSegmentIntersector, it would not pick the geometry in the transform whose reference frame was set to ABSOLUTE_RF.
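For illustration only, a minimal sketch of such a test might look like the following; the box geometry, window setup and centre-of-viewport pick are assumptions, not the original test code:

    #include <iostream>
    #include <osg/Geode>
    #include <osg/MatrixTransform>
    #include <osg/Notify>
    #include <osg/Shape>
    #include <osg/ShapeDrawable>
    #include <osgGA/TrackballManipulator>
    #include <osgUtil/IntersectionVisitor>
    #include <osgUtil/LineSegmentIntersector>
    #include <osgViewer/Viewer>

    int main()
    {
        // Geometry placed under a transform whose reference frame is ABSOLUTE_RF.
        osg::ref_ptr<osg::Geode> geode = new osg::Geode;
        geode->addDrawable(new osg::ShapeDrawable(new osg::Box(osg::Vec3(), 1.0f)));

        osg::ref_ptr<osg::MatrixTransform> transform = new osg::MatrixTransform;
        transform->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
        transform->addChild(geode.get());

        osgViewer::Viewer viewer;
        viewer.setSceneData(transform.get());
        viewer.setCameraManipulator(new osgGA::TrackballManipulator);
        viewer.realize();
        viewer.frame();

        // Pick through the centre of the viewport; before the fix this reported
        // no intersections for geometry under an ABSOLUTE_RF transform.
        osg::Viewport* vp = viewer.getCamera()->getViewport();
        osg::ref_ptr<osgUtil::LineSegmentIntersector> picker =
            new osgUtil::LineSegmentIntersector(osgUtil::Intersector::WINDOW,
                                                vp->width() * 0.5, vp->height() * 0.5);
        osgUtil::IntersectionVisitor iv(picker.get());
        viewer.getCamera()->accept(iv);

        osg::notify(osg::NOTICE) << (picker->containsIntersections()
                                         ? "picked the geometry"
                                         : "no intersection") << std::endl;
        return viewer.run();
    }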
I fixed some bugs and did some more tests with both of the video plugins. I integrated CoreVideo with osgPresentation: ImageStream has a new virtual method called createSuitableTexture, which returns NULL in the default implementation; specialized implementations like the QTKit plugin return a CoreVideo texture. I refactored the code in SlideShowConstructor::createTexturedQuad to use a texture returned from ImageStream::createSuitableTexture.
I did not use osgDB::readObjectFile to get the texture object, as that would have required refactoring a lot of image-related code in SlideShowConstructor to use a texture. My changes are minimal and should not break existing code.
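As a rough, hedged illustration of how the new hook can be used (the exact signature of ImageStream::createSuitableTexture is inferred from the description above; the helper function name and fallback path are assumptions, not the SlideShowConstructor code):

    #include <string>
    #include <osg/ImageStream>
    #include <osg/Texture2D>
    #include <osgDB/ReadFile>

    // Returns a texture for the given image file, preferring a texture supplied
    // by the image stream itself (e.g. a CoreVideo texture from the QTKit plugin).
    osg::ref_ptr<osg::Texture> createTextureForImage(const std::string& filename)
    {
        osg::ref_ptr<osg::Image> image = osgDB::readImageFile(filename);
        if (!image) return 0;

        osg::ref_ptr<osg::Texture> texture;

        osg::ImageStream* stream = dynamic_cast<osg::ImageStream*>(image.get());
        if (stream) texture = stream->createSuitableTexture(); // NULL in the default implementation

        if (!texture)
        {
            // Fallback: wrap the image in an ordinary Texture2D, as before.
            osg::Texture2D* texture2D = new osg::Texture2D(image.get());
            texture2D->setResizeNonPowerOfTwoHint(false);
            texture = texture2D;
        }
        return texture;
    }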
There's one minor issue with CoreVideo in general: as the implementation is asynchronous, there might be no texture available yet when the first frame of the video is shown. I am a bit unsure how to tackle this problem; any input on this is appreciated.
Back to the AVFoundation plugin: the current implementation does not support CoreVideo the way the QTKit plugin does. There is no way to get decoded frames from AVFoundation that are already stored on the GPU, which is unfortunate. I added some CoreVideo support to transfer decoded frames back to the GPU, but in my tests the performance was worse than the normal approach using glTexSubImage, which is why I disabled CoreVideo for AVFoundation. You can still request a CoreVideoTexture via readObjectFile, though.
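A hedged sketch of that last point from the client side, assuming a plain readObjectFile call (no extra plugin options) is enough to obtain the texture object and using a placeholder file name:

    #include <iostream>
    #include <osg/Notify>
    #include <osg/Texture>
    #include <osgDB/ReadFile>

    int main()
    {
        // Ask the plugin for an object rather than an image; per the note above,
        // the plugin can answer with a CoreVideoTexture wrapping the movie stream.
        osg::ref_ptr<osg::Object> object = osgDB::readObjectFile("movie.mov");
        osg::Texture* texture = dynamic_cast<osg::Texture*>(object.get());

        osg::notify(osg::NOTICE) << (texture ? "got a texture object"
                                             : "got a non-texture object or nothing")
                                 << std::endl;
        return texture ? 0 : 1;
    }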
"
Added a template readFile(..) function to make it more convenient to cast to a specific object type.
Added support for osgGA::Device to osgViewer.
Added an sdl plugin that provides very basic joystick osgGA::Device integration.
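A hedged usage sketch tying these three additions together; the exact template signature, the device file name, and selecting the sdl plugin via the ".sdl" extension are assumptions:

    #include <osgDB/ReadFile>
    #include <osgGA/Device>
    #include <osgViewer/Viewer>

    int main()
    {
        osgViewer::Viewer viewer;
        viewer.setUpViewInWindow(100, 100, 800, 600);

        // Read a device through the templated readFile(..), which casts the
        // loaded object to the requested type; "joystick.sdl" is a placeholder
        // name assumed to route to the sdl plugin.
        osg::ref_ptr<osgGA::Device> device = osgDB::readFile<osgGA::Device>("joystick.sdl");
        if (device)
        {
            // Forward the device's events into the viewer's event queue.
            viewer.addDevice(device.get());
        }

        return viewer.run();
    }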