changed extensions from .c to .cpp and got them compiling as C++ files as part of the osg core library.
Updated and cleaned up the rest of the OSG to use the new internal GLU.
type is supported at present. The attached osgparticleshader.cpp will
show how it works. It can also be placed in the examples folder. But I
just wonder how this example should co-exist with the other two
(osgparticle and osgparticleeffect)?
Member variables in Particle, including _alive, _current_size and
_current_alpha, are now merged into one Vec3 variable. This lets us
use the set...Pointer() methods to treat them as vertex attributes in
GLSL. The user interface is unchanged.
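For illustration, a rough sketch of what exposing such a packed attribute
to GLSL could look like (the attribute index, the "particleProperty" name
and the 'geom' geometry are placeholders, not the actual osgParticle
internals):

    // Hypothetical packing: x = alive, y = current size, z = current alpha.
    osg::ref_ptr<osg::Vec3Array> props = new osg::Vec3Array( numParticles );
    geom->setVertexAttribArray( 1, props.get() );
    geom->setVertexAttribBinding( 1, osg::Geometry::BIND_PER_VERTEX );

    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addBindAttribLocation( "particleProperty", 1 );
    // GLSL vertex shader side:
    //   attribute vec3 particleProperty;   // alive, current size, current alpha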
Additional methods of ParticleSystem are introduced, including
setDefaultAttributesUsingShaders(), setSortMode() and
setVisibilityDistance(). You can see how they work in
osgparticleshader.cpp.
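Something along these lines, for example (the texture file and parameter
values here are just placeholders; osgparticleshader.cpp is the
authoritative reference):

    osg::ref_ptr<osgParticle::ParticleSystem> ps = new osgParticle::ParticleSystem;
    // Shader-based counterpart of setDefaultAttributes():
    ps->setDefaultAttributesUsingShaders( "Images/smoke.rgb" );
    // Sort back-to-front so alpha blending composites correctly:
    ps->setSortMode( osgParticle::ParticleSystem::SORT_BACK_TO_FRONT );
    // Hide particles farther than this distance from the eye:
    ps->setVisibilityDistance( 500.0 );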
An additional user-defined particle type is introduced: set the
particle type to USER and attach a drawable to the template. Be
careful of the potentially huge memory consumption; using display
lists here is highly recommended.
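A rough sketch of the intended usage, assuming the drawable hook on the
template is a setDrawable() method (createSmallShape() is a hypothetical
helper returning a small osg::Geometry):

    osgParticle::Particle pTemplate;
    pTemplate.setShape( osgParticle::Particle::USER );      // user-defined particle type
    osg::ref_ptr<osg::Geometry> shape = createSmallShape(); // hypothetical helper
    shape->setUseDisplayList( true );                       // keep per-particle drawing cheap
    pTemplate.setDrawable( shape.get() );
    ps->setDefaultParticleTemplate( pTemplate );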
The ParticleSystemUpdater can accept ParticleSystem objects as child
drawables now. I think this is a little simpler in structure than
creating a new Geode for each particle system. Of course, the latter
is still supported, and can be used to transform the entire particle
system in world space.
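A sketch of the two structures ('root', 'ps' and 'transform' are
placeholders):

    osg::ref_ptr<osgParticle::ParticleSystemUpdater> updater =
        new osgParticle::ParticleSystemUpdater;
    updater->addParticleSystem( ps.get() );  // the updater keeps the system as a child drawable
    root->addChild( updater.get() );

    // Alternatively, the traditional Geode-based setup still works, e.g. when
    // the whole particle system should follow a transform in world space:
    // osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    // geode->addDrawable( ps.get() );
    // transform->addChild( geode.get() );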
New particle operators: bounce, sink, damping, orbit and explosion.
The bounce and sink operators both use a concept of domains, and can
simulate very basic collisions between particles and objects.
New composite placer. It contains a set of placers and emits
particles from them at random. The newly added virtual method size()
of each placer helps determine the probability of choosing it.
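A rough sketch of wiring these up (the domain and placer calls are
assumptions based on the description above; 'program' is a ModularProgram
and 'emitter' a ModularEmitter):

    osg::ref_ptr<osgParticle::BounceOperator> bounce = new osgParticle::BounceOperator;
    bounce->addPlaneDomain( osg::Plane(0.0f, 0.0f, 1.0f, 0.0f) );  // bounce off the ground plane z = 0
    program->addOperator( bounce.get() );

    osg::ref_ptr<osgParticle::CompositePlacer> placer = new osgParticle::CompositePlacer;
    placer->addPlacer( new osgParticle::PointPlacer );
    placer->addPlacer( new osgParticle::SectorPlacer );
    emitter->setPlacer( placer.get() );  // each child is picked with probability proportional to its size()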
New virtual method operateParticles() for the Operator class. By
default it simply calls operate() for each particle, but it can be
overridden to use speed-up techniques like SSE, or even shaders in
the future.
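For example, a batched operator might look roughly like this (the
operateParticles() signature is inferred from the description above):

    class BatchedOperator : public osgParticle::Operator
    {
    public:
        META_Object( osgParticle, BatchedOperator )
        BatchedOperator() {}
        BatchedOperator( const BatchedOperator& copy, const osg::CopyOp& op = osg::CopyOp::SHALLOW_COPY )
            : osgParticle::Operator(copy, op) {}

        virtual void operateParticles( osgParticle::ParticleSystem* ps, double dt )
        {
            // Process the whole list in one go instead of per-particle virtual
            // calls; this is where SSE, threads or shaders could be plugged in.
            for ( int i=0; i<ps->numParticles(); ++i )
            {
                osgParticle::Particle* P = ps->getParticle(i);
                if ( P->isAlive() ) operate( P, dt );
            }
        }

        virtual void operate( osgParticle::Particle* P, double dt )
        { /* per-particle fallback */ }
    };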
Partly fixed a floating-point error with 'delta time' in the emitter,
program and updaters. Previously each of them kept its own _t0
variable and computed its own copy of dt, which made some operators,
especially the BounceOperator, work incorrectly (because the dt seen
by operators and updaters differed slightly). Now a getDeltaTime()
method is maintained in ParticleSystem and returns the unique dt
value (passed by reference) for all of them to use. This makes things
better, but a few unexpected behaviours still remain at present...
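For illustration, the kind of call site this enables (the currentTime
parameter is an assumption, not the confirmed signature):

    // Inside an emitter/program/updater traversal:
    double t  = nv.getFrameStamp()->getSimulationTime();
    double dt = ps->getDeltaTime( t );  // the same dt seen by emitters, programs and updaters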
All dotosg and serializer wrappers for the functionality above are provided.
...
According to some simple tests, the new shader support is only
slightly more efficient than the ordinary glBegin()/glEnd() path.
That is, I haven't achieved a big improvement yet. The bottleneck
seems to be the cull traversal time, because the operators walk
through the particle list again and again (for example, the fountain
in the shader example requires 4 operators working all the time).
An ideal solution would be to implement the particle operators in
shaders too, and copy the results back into the particle attributes.
The GPGPU concept is a good fit for this. But in my opinion the
Camera class seems too heavyweight for realizing such functionality
in a particle system. Maybe a lightweight ComputeDrawable class would
be enough, receiving data as textures and outputting the results to
the FBO render buffer. What do you think?
The floating-point error of emitters
(http://lists.openscenegraph.org/pipermail/osg-users-openscenegraph.org/2009-May/028435.html)
is not solved this time. But an idea I think is worth testing is to
compute the node path from the emitter to the particle system
directly, rather than multiplying the worldToLocal and localToWorld
matrices. I'll try this idea later.
"
short oit. This rendering technique is also known as depth peeling.
Attached is the example that makes depth peeling work with the fixed
function pipeline. Ok, this is 'old fashioned', but it is required for
our use case, which still has to work on older UNIX OpenGL
implementations as well as together with a whole existing application
that makes use of the fixed function pipeline. I can imagine adding
support for shaders once we have that shader composition framework
where we can add a second depth test in a generic way.
This does *not* implement the dual depth peeling described in a paper from the
ETH Zurich.
This example could serve as a test case for the feature of removing
pre-render cameras on the fly, which you made work a while ago.
It is also a test case for the new TraversalOrderBin that is used to composite
the depth layers in the correct blend order.
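A minimal sketch of how a compositing quad could be assigned to that bin
('layerQuad' and 'layerIndex' are placeholders, and the bin is assumed to
be registered under the name "TraversalOrderBin"):

    osg::StateSet* ss = layerQuad->getOrCreateStateSet();
    ss->setRenderBinDetails( layerIndex, "TraversalOrderBin" );  // draw strictly in traversal order
    ss->setAttributeAndModes( new osg::BlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA),
                              osg::StateAttribute::ON );         // blend each layer over the previous ones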
This example also stresses your new texture object cache since you can change
some parameters for the oit implementation at runtime.
You can just load any model with osgoit and see how it works.
Use the usual help key to see what you can change.
There is already an osgdepthpeeling example that I could not really make sense
of up to now. So I just made something new without touching what I do not
understand."
creation of main shader to ShaderComposer and
collection of ShaderComponents to osg::State.
Also added a very basic shader setup in the osgshadercomposition example.
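A sketch only, since the shader composition API is still experimental and
the exact entry points may differ ('attribute' stands for any
osg::StateAttribute, and the shader sources are placeholders):

    osg::ref_ptr<osg::ShaderComponent> sc = new osg::ShaderComponent;
    sc->addShader( new osg::Shader(osg::Shader::VERTEX,   vertSource) );
    sc->addShader( new osg::Shader(osg::Shader::FRAGMENT, fragSource) );

    // osg::State collects the active ShaderComponents and the ShaderComposer
    // builds the main shader from them.
    attribute->setShaderComponent( sc.get() );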
to get QWidgetImage to a point where it can fill a need we have: to be
able to use Qt to make HUDs and to display widgets over / inside an OSG
scene.
---------------
Current results
---------------
I've attached what I have at this point. The modified QWidgetImage +
QGraphicsViewAdapter classes can be rendered fullscreen (i.e. the Qt
QGraphicsView's size follows the size of the OSG window) or on a quad in
the scene as before. It will let events go through to OSG if no widget
is under the mouse when they happen (useful when used as a HUD with
transparent parts - a click-focus scheme could be added later too). It
also supersedes Martin Scheffler's submission because it adds a
getter/setter for the QGraphicsViewAdapter's background color (and the
user can set their widget to be transparent using
widget->setAttribute(Qt::WA_TranslucentBackground) themselves).
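A sketch of the intended usage ('quad' stands for the geometry in the
scene, and the InteractiveImageHandler wiring is assumed to follow the
existing browser example):

    QWidget* widget = new QTextEdit;                       // any QWidget
    widget->setAttribute( Qt::WA_TranslucentBackground );  // transparency is left to the user

    osg::ref_ptr<osgQt::QWidgetImage> image = new osgQt::QWidgetImage( widget );
    image->getQGraphicsViewAdapter()->setBackgroundColor( QColor(0, 0, 0, 0) );

    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D( image.get() );
    quad->getOrCreateStateSet()->setTextureAttributeAndModes( 0, texture.get() );

    // Forward mouse/keyboard events from OSG to the Qt widget:
    osg::ref_ptr<osgViewer::InteractiveImageHandler> handler =
        new osgViewer::InteractiveImageHandler( image.get() );
    quad->setEventCallback( handler.get() );
    quad->setCullCallback( handler.get() );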
The included osgQtBrowser example has been modified to serve as a test
bed for these changes. It has lots more command line arguments than
before, some of which can be removed eventually (once things are
tested). Note that it may be interesting to change its name or split it
into two examples. Though if things go well, the specific QWebViewImage
class can be removed completely and we can consolidate to using
QWidgetImage everywhere, and then a single example to demonstrate it
would make more sense, albeit not named osgQtBrowser... You can try this
path by using the --useWidgetImage --useBrowser command line arguments -
this results in an equivalent setup to QWebViewImage, but using
QWidgetImage, and doesn't work completely yet for some unknown reason,
see below.
----------------
Remaining issues
----------------
There are a few issues left to fix, and for these I request the
community's assistance. They are not blockers for me, and with my
limited Qt experience I don't feel like I'm getting any closer to fixing
them, so if someone else could pitch in and see what they can find, it
would be appreciated. It would be really nice to get them fixed, that
way we'd really have a first-class integration of Qt widgets in an OSG
scene. The issues are noted in the osgQtBrowser.cpp source file, but
here they are too:
-------------------------------------------------------------------
QWidgetImage still has some issues, some examples are:
1. Editing in the QTextEdit doesn't work. Also when started with
--useBrowser, editing in the search field on YouTube doesn't
work. But that same search field when using QWebViewImage
works... And editing in the text field in the pop-up getInteger
dialog works too. All these cases use QGraphicsViewAdapter
under the hood, so why do some work and others don't?
a) osgQtBrowser --useWidgetImage [--fullscreen] (optional)
b) Try to click in the QTextEdit and type, or to select text
and drag-and-drop it somewhere else in the QTextEdit. These
don't work.
c) osgQtBrowser --useWidgetImage --sanityCheck
d) Try the operations in b), they all work.
e) osgQtBrowser --useWidgetImage --useBrowser [--fullscreen]
f) Try to click in the search field and type, it doesn't work.
g) osgQtBrowser
h) Try the operation in f), it works.
2. Operations on floating windows (--numFloatingWindows 1 or more).
Moving by dragging the titlebar, clicking the close button,
resizing them, none of these work. I wonder if it's because the
OS manages those functions (they're functions of the window
decorations) so we need to do something special for that? But
in --sanityCheck mode they work.
a) osgQtBrowser --useWidgetImage --numFloatingWindows 1
[--fullscreen]
b) Try to drag the floating window, click the close button, or
drag its sides to resize it. None of these work.
c) osgQtBrowser --useWidgetImage --numFloatingWindows 1
--sanityCheck
d) Try the operations in b); they all work.
e) osgQtBrowser --useWidgetImage [--fullscreen]
f) Click the button so that the getInteger() dialog is
displayed, then try to move that dialog or close it with the
close button, these don't work.
g) osgQtBrowser --useWidgetImage --sanityCheck
h) Try the operation in f), it works.
3. (Minor) The QGraphicsView's scrollbars don't appear when
using QWidgetImage or QWebViewImage. QGraphicsView is a
QAbstractScrollArea and it should display scrollbars as soon as
the scene is too large to fit the view.
a) osgQtBrowser --useWidgetImage --fullscreen
b) Resize the OSG window so it's smaller than the QTextEdit.
Scrollbars should appear but don't.
c) osgQtBrowser --useWidgetImage --sanityCheck
d) Try the operation in b), scrollbars appear. Even if you have
floating windows (by clicking the button or by adding
--numFloatingWindows 1) and move them outside the view,
scrollbars appear too. You can't test that case in OSG for
now because of problem 2 above, but that's pretty cool.
4. (Minor) In sanity check mode, the widget added to the
QGraphicsView is centered. With QGraphicsViewAdapter, it is not.
a) osgQtBrowser --useWidgetImage [--fullscreen]
b) The QTextEdit and button are not in the center of the image
generated by the QGraphicsViewAdapter.
c) osgQtBrowser --useWidgetImage --sanityCheck
d) The QTextEdit and button are in the center of the
QGraphicsView.
-------------------------------------------------------------------
As you can see I've put specific repro steps there too, so it's clear
what I mean by a given problem. The --sanityCheck mode is useful to see
what should happen in a "normal" Qt app that demonstrates the same
situation, so hopefully we can get to a point where it behaves the same
with --sanityCheck and without."
graphics context and link it with a slave camera. I don't know why
we do it that way; it causes a problem where the GUIEventHandler may
not obtain correct window coordinates, because the main camera uses a
default input range while receiving events from the slave camera's
graphics context. It is also odd to see addSlave() used in
non-cluster applications, which confuses beginners.
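Attaching the graphics context to the viewer's main camera avoids the
slave path altogether; a minimal sketch (the traits are assumed to be
created from the MFC window handle as in the existing example):

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext( traits.get() );
    osg::Camera* camera = viewer->getCamera();   // use the main camera directly
    camera->setGraphicsContext( gc.get() );
    camera->setViewport( new osg::Viewport(0, 0, traits->width, traits->height) );
    // ...instead of viewer->addSlave(slaveCamera), which left the master camera
    // with a default input range and broke GUIEventHandler coordinates.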
I've made a slight modification to the osgviewerMFC example to make
it work without setting up slave cameras. I've tested it with the MDI
framework and everything seems fine."