There was code in osgViewer/Viewer.cpp and osgViewer/CompositeViewer.cpp that transformed the Y-coordinates of an event; the code in the CompositeViewer, however, missed the touch-data of the event. I thought that it should really be the GUIEventAdapter that knows about this, and hence I added
GUIEventAdapter::setMouseYOrientationAndUpdateCoords, which re-computes the coordinates. First I simply added a boolean to the setMouseYOrientation function:
setMouseYOrientation( MouseYOrientation, bool updatecoords=false );
but then the serializer complained.
This function is called from both the Viewer and the CompositeViewer. We have not tested it from the Viewer, but from visual inspection I cannot see why it would not work.
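A minimal sketch of the idea only (a hypothetical helper, not the actual implementation): when the Y orientation changes, the mouse Y is mirrored within the window's Y range, and setMouseYOrientationAndUpdateCoords applies the same mapping to the touch-data.

#include <osgGA/GUIEventAdapter>

// Hypothetical helper: mirror the mouse Y coordinate within the window's Y range.
// The real setMouseYOrientationAndUpdateCoords also re-computes each touch point's Y.
void flipMouseY(osgGA::GUIEventAdapter& ea)
{
    ea.setY(ea.getYmax() - ea.getY() + ea.getYmin());
}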
The other change is in MultiTouchTrackballManipulator::handleMultiTouchDrag, where I have removed the normalisation. The reason is that it normalised into screen coordinates from 0,0 to 1,1. The problem with that is that if you have a pinch event and keep the distance between your fingers at, say, 300 pixels, those 300 pixels might represent 0.2 of the screen width but 0.3 of the screen height (e.g. on a 1500x1000 display). A rotation of the pinching fingers therefore results in a zoom, as the normalised distance between them changes.
A consequence of this is that I have changed the pan code to use the same algorithm as the middle-mouse pan.
The rest is very similar to the previous revision, with some fine-tuning here and there.
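A small self-contained illustration of the point (hypothetical values, not the manipulator's code): measuring the pinch distance in pixels keeps the zoom factor at 1.0 when the fingers merely rotate, whereas distances normalised separately in x and y would not.

#include <cmath>

// Hypothetical finger positions in window (pixel) coordinates.
struct Point { float x, y; };

float pixelDistance(const Point& a, const Point& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Zoom factor derived from the change of the pinch distance, measured in pixels.
// Rotating the fingers while keeping the same pixel spacing leaves this at 1.0.
float pinchZoom(const Point& a0, const Point& b0, const Point& a1, const Point& b1)
{
    return pixelDistance(a1, b1) / pixelDistance(a0, b0);
}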
"
To select the standard OpenGL 1/2 build with full backwards and forwards compatibility use:
./configure
make
OR
./configure -DOPENGL_PROFILE=GL2
To select OpenGL 3 core profile build using GL3/gl3.h header:
./configure -DOPENGL_PROFILE=GL3
To select OpenGL Arb core profile build using GL/glcorearb.h header:
./configure -DOPENGL_PROFILE=GLCORE
To select OpenGL ES 1.1 profile use:
./configure -DOPENGL_PROFILE=GLES1
To select OpenGL ES 2 profile use:
./configure -DOPENGL_PROFILE=GLES2
Using OPENGL_PROFILE selects all the appropriate features required, so no other CMake settings need to be adjusted.
The new configuration options are stored in the include/osg/OpenGL header, which deprecates the old include/osg/GL header.
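For application code the change is simply to include the new header; a sketch (the feature macros shown are ones these GL configuration headers already define):

// Include the new header instead of the deprecated <osg/GL>.
#include <osg/OpenGL>

// The selected OPENGL_PROFILE is reflected in feature macros such as
// OSG_GL_FIXED_FUNCTION_AVAILABLE, OSG_GL3_AVAILABLE and OSG_GLES2_AVAILABLE,
// so code can adapt at compile time:
#ifndef OSG_GL_FIXED_FUNCTION_AVAILABLE
    // core-profile / ES path: no fixed-function state, shaders required
#endif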
Added the CMake option OSG_USE_LOCAL_LUA_SOURCE to control whether to build and use the Lua source code in the Lua plugin, or to look for Lua as an external dependency.
Added support to the Lua plugin for assigning Lua functions to C++ osg::Objects via the new osg::CallbackObject mechanism. To invoke the script's function from C++ one must get the CallbackObject and call run() on it.
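A hedged sketch of that pattern, assuming the CallbackObject has been attached to the object's user data container and that run() takes input/output parameter lists (check the osg/Callback header for the exact signature in your OSG version):

#include <osg/Object>
#include <osg/UserDataContainer>
#include <osg/Callback>
#include <string>

// Look up a CallbackObject by name on an object and invoke it.
bool runCallback(osg::Object* object, const std::string& name)
{
    osg::UserDataContainer* udc = object->getUserDataContainer();
    if (!udc) return false;

    for (unsigned int i = 0; i < udc->getNumUserObjects(); ++i)
    {
        osg::CallbackObject* co = dynamic_cast<osg::CallbackObject*>(udc->getUserObject(i));
        if (co && co->getName() == name)
        {
            osg::Parameters inputs, outputs;          // assumed parameter-list types
            return co->run(object, inputs, outputs);  // runs the attached Lua function
        }
    }
    return false;
}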
Renamed ScriptCallback to ScriptNodeCallback to avoid possible confusion between osg::CallbackObject and the ScriptNodeCallback.
From Robert Osfield, changed the example so that the vertical and horizontal scalar bars are rotated into the XZ plane, so they are visible with the default viewer camera orientation.
Tweaked the positioning of the title text of the vertical scalar bar to avoid text overlap.
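A generic sketch of that kind of rotation (not the example's exact code): rotate a subgraph 90 degrees about the X axis so geometry laid out in the XY plane ends up in the XZ plane, facing the default camera.

#include <osg/MatrixTransform>
#include <osg/Math>

osg::MatrixTransform* rotateIntoXZPlane(osg::Node* subgraph)
{
    osg::MatrixTransform* transform = new osg::MatrixTransform;
    // +90 degrees about X maps the +Y axis onto +Z, i.e. XY plane -> XZ plane.
    transform->setMatrix(osg::Matrix::rotate(osg::inDegrees(90.0), 1.0, 0.0, 0.0));
    transform->addChild(subgraph);
    return transform;
}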
I added a new tag to p3d called forward_touch_event_to_device and renamed the existing forward_event_to_device to forward_mouse_event_to_device. The new tag transmits touches to the virtual trackpad as touch events. I added the MultitouchTrackball to the p3d app, so zooming and moving a model remotely should now work if you use forward_touch_event_to_device. I kept (and fixed) forward_mouse_event_to_device for backward compatibility, so old presentations work as in previous versions, without the ability to zoom + scale, of course.
forward_touch_event_to_device needs some more testing (e.g. with image-streams and keystone; as far as I know there's no support for touch-events...), but for a first version it works nicely.
"
with the --write-to-source-file-directory option added to allow one to retain the original behaviour
of writing to the same directory as the original source file.
* force _OPENTHREADS_ATOMIC_USE_BSD_ATOMIC for IOS device + simulator as the test does not pick the right implementation
* fixed a small compile-bug for iphone-example
* added a check to prevent multiple realization of a GraphicsWindowIOS-object
"
* MultiTouchTrackball: some code cleanup and support for normalized touch-points
* oscdevice: receiving and sending multi-touch-events via the Cursor2D-profile from TUIO
* added some documentation
* fixed a bug with multi-touch and touch-id-generation on iOS and OS X. (will fix a bug reported by Colin Cochran, without ditching the existing logic)
* removed unnecessary warning-flags when generating Xcode projects via CMake, enabling the usage of OSG_AGGRESSIVE_WARNING_FLAGS
* added support for 10.9 (OS X)
* new cmake-variable: IPHONE_VERSION_MIN, which sets the deployment target (previously hard-coded). If you set IPHONE_VERSION_MIN to something like 7.0, osg also gets compiled for 64 bit (amd64)
* cmake now defaults to the clang compiler if IPHONE_VERSION_MIN > 4.2
* cmake now sets some Xcode settings so the compiler uses the C++98 standard (clang defaults to C++11; without this I got a lot of linking errors)
* removed include-dir for avfoundation-plugin as not needed on OSX/IOS.
* enhanced the ios-example: it now shows multitouch information on a HUD (similar to the osgmultitouch example) and, more importantly, compiles + links out of the box
* small enhancements for the osc-device-plugin (send only one msg for MOVE/DRAG, even if multiple msgs/event is enabled)
* better memory-handling for the zeroconf-plugin
* fixed a possible bug in the rest-http-plugin when receiving mouse-events.
* incorporated a fix from Colin Cochran: "forwarded touch events are not transformed into the GL UIView"
"
1) The osgmultiplemovies example does not use SDL, so it needs no link to SDL.
2) Added header files to the "Plugins osg" project, so Visual Studio can find the source of messages such as:
OSG_WARN << "AsciiInputIterator::readProperty(): Unmatched property "
"
Provided are Lua, Python and V8 (for JavaScript) plugins that just open up enough of a link to the respective libs to run a script; there is no scene graph <-> script communication in the current implementation.
This version adds:
- an encapsulation of the entire Depth Peeling procedure into a class (not currently a scene graph node) for easier integration in other projects.
- compositing with opaque (solid) geometry is possible and the opaque model is only rendered once. This requires some depth buffer blitting between FBOs (see the sketch below).
- mix and match with GLSL shaders in the transparent objects is possible, as demonstrated with a 3D heat map intersecting an opaque truck model.
Some Drawbacks:
- the display framebuffer does not receive any depth information from the compositing camera. This could be fixed by compositing with a GLSL shader and writing to FragDepth."
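A minimal sketch (not the submission's code) of the depth-buffer blit between FBOs mentioned above, using raw OpenGL 3.0 calls; 'opaqueFbo', 'peelFbo', 'width' and 'height' are hypothetical handles and sizes, and the GL entry points are assumed to be resolved by the platform's usual loading mechanism.

#include <GL/glcorearb.h>   // prototypes; resolve entry points via your GL loader

// Copy the depth produced by the opaque pass into the peeling FBO so that
// transparent fragments behind opaque geometry are rejected.
void blitOpaqueDepth(GLuint opaqueFbo, GLuint peelFbo, GLint width, GLint height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, opaqueFbo);   // source: opaque pass
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, peelFbo);     // destination: peel pass
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_DEPTH_BUFFER_BIT, GL_NEAREST);  // depth only
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}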
From Robert Osfield, ported the code to work under Linux and without the automatic ref_ptr to C* conversion.
New methods osg::Geometry::containsDeprecatedData() and osg::Geometry::fixDeprecatedData() provide a means of converting geometries that still use array indices and BIND_PER_PRIMITIVE across to compliant
versions.
Cleaned up the rest of the OSG where array indices and BIND_PER_PRIMITIVE were still accessed or used.
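A hedged sketch of how these methods can be used when loading legacy data (the visitor itself is hypothetical, not part of the submission):

#include <osg/Geode>
#include <osg/Geometry>
#include <osg/NodeVisitor>

// Walk a loaded scene graph and fix any osg::Geometry still using the
// deprecated array-indices / BIND_PER_PRIMITIVE setup.
class FixDeprecatedGeometryVisitor : public osg::NodeVisitor
{
public:
    FixDeprecatedGeometryVisitor()
        : osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN) {}

    virtual void apply(osg::Geode& geode)
    {
        for (unsigned int i = 0; i < geode.getNumDrawables(); ++i)
        {
            osg::Geometry* geometry = dynamic_cast<osg::Geometry*>(geode.getDrawable(i));
            if (geometry && geometry->containsDeprecatedData())
            {
                geometry->fixDeprecatedData();   // convert to compliant per-vertex arrays
            }
        }
        traverse(geode);
    }
};

Applying it is the usual loadedModel->accept(visitor) after reading the file.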
path. Also the arrays are moved back to static storage, since this is the data
that is actually referenced in draw; the earlier change that moved this onto the
stack broke this."
GeometryNew is only temporary and will be renamed to Geometry on completion of the refactoring work and feedback from the community.
Ported osggeometry across to use GeometryNew.
* avfoundation: added support for IOS (CoreVideo support is still in development and works only for SDK >= 6.0; set IPHONE_SDKVER in CMake accordingly)
* zeroconf: added a ZeroConf device plugin (Mac/Win only, Linux implementation missing) to advertise and discover services via ZeroConf/Bonjour; on Windows you'll need the Bonjour SDK from Apple
* osgosc: modified the example to demonstrate the usage of the ZeroConf plugin (start the example with the command-line argument --zeroconf)
* SlideShowConstructor: enable/disable CoreVideo via an environment variable (P3D_ENABLE_CORE_VIDEO)
* RestHttp: mouse-motion events get interpolated
* RestHttp: unhandled http requests get sent as a user-event to the event queue, with all arguments attached as user-values to the event (see the sketch after this entry)
* modified some CMakeModules to work correctly when compiling for IOS
* fixed a compile-error for IOS in GraphicsWindowIOS
* some minor bugfixes"
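A hedged sketch of consuming such a user-event in an event handler; the user-value name "path" is purely an illustrative assumption.

#include <osgGA/GUIEventHandler>
#include <string>

// Handle USER events coming from the REST/HTTP device and read attached user-values.
class RestEventHandler : public osgGA::GUIEventHandler
{
public:
    virtual bool handle(const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter&)
    {
        if (ea.getEventType() != osgGA::GUIEventAdapter::USER) return false;

        std::string path;
        if (ea.getUserValue("path", path))   // hypothetical key name
        {
            // react to the request argument here
            return true;
        }
        return false;
    }
};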
Not sure why my system seems to be so sensitive to these problems,
but attached is a fix which seems to stabilise the example.
Note: it only seems to crash intermittently when spinning the object with
the mouse, so I assume this is a threading issue caused by the data variance
missing in some of the text node setups in the example.
"
- add non-square matrix uniforms
- add double uniforms
- add all uniform types available in OpenGL 4.2
- backward compatibility for Matrixd to set/get a float uniform matrix (see the sketch after this entry)
- update of IVE / Wrapper ReadWriter
implementation of AtomicCounterBuffer based on BufferIndexBinding
added an example that uses AtomicCounterBuffer and shows the rendering order of fragments,
original idea from geeks3d.com."
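A hedged sketch of the Matrixd backward compatibility listed above: a FLOAT_MAT4 uniform is set from and read back into an osg::Matrixd, with the float conversion handled internally.

#include <osg/Uniform>
#include <osg/Matrixd>

void setAndReadMat4Uniform()
{
    // A float mat4 uniform, driven from double-precision matrices.
    osg::ref_ptr<osg::Uniform> mvp = new osg::Uniform(osg::Uniform::FLOAT_MAT4, "u_mvp");

    osg::Matrixd matrix = osg::Matrixd::translate(1.0, 2.0, 3.0);
    mvp->set(matrix);       // stored as floats in the uniform

    osg::Matrixd readBack;
    mvp->get(readBack);     // read back as doubles
}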
osgshaders.cpp and demonstrates the use of GLSL vertex and fragment
shaders with a simple animation callback. I found the osgshaders.cpp
too complex to serve as a starting point for GLSL programming"
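A compact sketch of the pattern such a simpler example revolves around: attach an osg::Program with vertex/fragment shaders to a StateSet and animate a uniform from a callback. The shader sources and the uniform name are illustrative only.

#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>
#include <osg/Uniform>
#include <osg/Node>
#include <osg/NodeVisitor>
#include <cmath>

// Update callback that animates a float uniform over time.
class PulseCallback : public osg::Uniform::Callback
{
public:
    virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv)
    {
        double t = nv->getFrameStamp() ? nv->getFrameStamp()->getSimulationTime() : 0.0;
        uniform->set(static_cast<float>(0.5 + 0.5 * std::sin(t)));
    }
};

void attachAnimatedShaders(osg::Node* node)
{
    static const char* vertSrc =
        "void main() { gl_Position = ftransform(); }\n";
    static const char* fragSrc =
        "uniform float pulse;\n"
        "void main() { gl_FragColor = vec4(pulse, 0.0, 1.0 - pulse, 1.0); }\n";

    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::VERTEX, vertSrc));
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSrc));

    osg::StateSet* stateset = node->getOrCreateStateSet();
    stateset->setAttributeAndModes(program);

    osg::Uniform* pulse = new osg::Uniform("pulse", 0.0f);
    pulse->setUpdateCallback(new PulseCallback);
    stateset->addUniform(pulse);
}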
10.6), which will forward all multi-touch events from a trackpad to the
corresponding osgGA-event-structures.
The support is switched off by default, but you can enable multi-touch
support via a new flag for GraphicsWindowCocoa::WindowData or directly
via the GraphicsWindowCocoa-class.
After switching multi-touch support on, all mouse events from the
trackpad get ignored; otherwise you'd have multiple events for the same
pointer, which is very confusing (as the trackpad reports absolute
movement, and a mouse relative movement).
I think this is not a problem, as multi-touch input is a completely
different beast from a mouse, so you'll have to code your own
event handlers anyway.
While coding this stuff, I asked myself if we should refactor
GUIEventAdapter/EventQueue and assign a specific event-type for
touch-input instead of using PUSH/DRAG/RELEASE. This will make it
clearer how to use the code, but will break the mouse-emulation for the
first touch-point and with that all existing manipulators. What do you
think? I am happy to code the proposed changes.
Additionally I created a small (and ugly) example osgmultitouch which
makes use of the osgGA::MultiTouchTrackballManipulator, shows all
touch-points on a HUD and demonstrates how to get the touchpoints from
an osgGA::GUIEventAdapter.
There's even a small example video here: http://vimeo.com/31611842"
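A hedged sketch of reading the touch points from an osgGA::GUIEventAdapter inside an event handler, along the lines of what the osgmultitouch example demonstrates:

#include <osgGA/GUIEventHandler>
#include <osg/Notify>

// Print every touch point carried by a multi-touch event.
class TouchDumpHandler : public osgGA::GUIEventHandler
{
public:
    virtual bool handle(const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter&)
    {
        if (!ea.isMultiTouchEvent()) return false;

        const osgGA::GUIEventAdapter::TouchData* touchData = ea.getTouchData();
        for (unsigned int i = 0; i < touchData->getNumTouchPoints(); ++i)
        {
            const osgGA::GUIEventAdapter::TouchData::TouchPoint touch = touchData->get(i);
            OSG_NOTICE << "touch id=" << touch.id
                       << " phase=" << touch.phase
                       << " pos=(" << touch.x << ", " << touch.y << ")" << std::endl;
        }
        return false;   // let manipulators see the event as well
    }
};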
to ensure the correct methods on constraints and callbacks are called for each Command. Also fixed the handling of
Constraints when applied to composite Draggers.
parameter in osg::Image. To support this, Image::setData(..) now has a new optional rowLength parameter which
defaults to 0, which provides the original behaviour; Image::setRowLength(int) and int Image::getRowLength() are also provided.
With the introduction of RowLength support in osg::Image it is now possible to create a sub-image where
the t size of the image is smaller than the row length, useful for when you have a large image on the CPU
and wish to use a small portion of it on the GPU. However, when these sub-images are created the data
within the image is no longer contiguous, so data access can no longer assume that all the data is in
one block. The new method Image::isDataContiguous() enables the user to check whether the data is contiguous;
if not, one can either access the data row by row using the Image::data(column,row,image) accessor, or use the
new Image::DataIterator for stepping through each block of memory associated with the image.
To support the possibility of non-contiguous osg::Image data, usage of image objects has had to be updated to
check isDataContiguous() and handle the non-contiguous case either via the DataIterator or row by row. To achieve
this a relatively large number of files have had to be modified, in particular the texture classes and
image plugins that do writing.
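A hedged sketch of consuming image data that may be non-contiguous; the destination buffer is hypothetical, and the loop shows the DataIterator access pattern described above.

#include <osg/Image>
#include <cstring>

// Copy the pixel data of an image that may be non-contiguous (e.g. a sub-image
// whose row length is larger than its width) into a destination buffer.
void copyImageData(const osg::Image& image, unsigned char* destination)
{
    if (image.isDataContiguous())
    {
        // single block: copy in one go
        std::memcpy(destination, image.data(), image.getTotalSizeInBytes());
    }
    else
    {
        // non-contiguous: step through each contiguous block of memory
        for (osg::Image::DataIterator itr(&image); itr.valid(); ++itr)
        {
            std::memcpy(destination, itr.data(), itr.size());
            destination += itr.size();
        }
    }
}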