Initially I described the issue in this message:
http://forum.openscenegraph.org/viewtopic.php?t=13820
It solves an issue with compiling a texture from an ico image with mipmaps.
I added an environment variable, OSG_GL_TEXTURE_STORAGE_ENABLE, to control the usage of glTexStorage2D. Initially it is disabled.
It is used only if the image has mipmaps.
Another issue is converting from internalFormat + type to a sized internal format. I created a sizedInternalFormats[] struct where the sized internal formats are ordered from worst to best.
This struct also has commented-out lines. The commented formats are listed at
http://www.opengl.org/wiki/GLAPI/glTexStorage2D
but do not appear to be used in osg."
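To illustrate the idea, a minimal sketch of such a lookup (the entries, their ordering and the helper below are illustrative assumptions, not the table from the patch):
<code>
#include <osg/GL>

// Sketch only: map an unsized internalFormat + type pair to a sized internal
// format usable with glTexStorage2D.
struct SizedInternalFormat
{
    GLenum internalFormat;   // unsized format, e.g. GL_RGBA
    GLenum type;             // component type, e.g. GL_UNSIGNED_BYTE
    GLenum sizedFormat;      // sized format required by glTexStorage2D
};

static const SizedInternalFormat sizedInternalFormats[] =
{
    { GL_RGB,  GL_UNSIGNED_BYTE, GL_RGB8  },
    { GL_RGBA, GL_UNSIGNED_BYTE, GL_RGBA8 }
};

static GLenum selectSizedInternalFormat(GLenum internalFormat, GLenum type)
{
    const unsigned int numFormats = sizeof(sizedInternalFormats)/sizeof(sizedInternalFormats[0]);
    for(unsigned int i=0; i<numFormats; ++i)
    {
        if (sizedInternalFormats[i].internalFormat==internalFormat)
        {
            if (sizedInternalFormats[i].type==type) return sizedInternalFormats[i].sizedFormat;
        }
    }
    return 0; // no sized equivalent known, fall back to the glTexImage2D path
}
</code>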
Note from Robert Osfield: changed the env var control to OSG_GL_TEXTURE_STORAGE and made its value true by default when the feature is supported by the OpenGL driver. To disable the
use of glTexStorage2D use OSG_GL_TEXTURE_STORAGE="OFF" or "DISABLE".
git-svn-id: http://svn.openscenegraph.org/osg/OpenSceneGraph/trunk@14275 16af8721-9629-0410-8352-f15c8da7e697
The pvr format, which can be used as a wrapper for different compressed and uncompressed formats, supports this compression algorithm. The original pvr compression uses the pvrtc format. The handling of pvrtc is already implemented in the pvr plugin. PVR provides wrapper functionality for some formats, e.g. etc or even dxt/dds.
Our target system (gles2) is able to use the etc compression format. With minor changes to the submitted files, there is no need to write a separate plugin. However, the original pvr texture compression formats are not supported on our target, which is the reason for this extension.
The changes mainly consist of the definition of new enum values in the classes and headers of ReaderWriterPVR, Image and Texture. I also found some locations where the handling of the original pvr textures was not implemented. These are also part of this submission."
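For illustration, a minimal sketch of the kind of mapping involved (the pvr header value below is hypothetical; only GL_ETC1_RGB8_OES is the standard OES_compressed_ETC1_RGB8_texture token):
<code>
#include <osg/GL>

// Standard ETC1 token, in case the GL headers in use do not define it.
#ifndef GL_ETC1_RGB8_OES
#define GL_ETC1_RGB8_OES 0x8D64
#endif

// Sketch only: translate a pixel-format id read from the pvr container header
// into the GL enum stored on the osg::Image. 'etc1FormatId' is a placeholder,
// not the real constant used by the pvr header or the plugin.
static GLenum pvrPixelFormatToGLEnum(unsigned int pvrPixelFormat)
{
    const unsigned int etc1FormatId = 6;   // hypothetical id for ETC1
    if (pvrPixelFormat == etc1FormatId) return GL_ETC1_RGB8_OES;
    return 0; // not handled here - fall back to the existing pvrtc handling
}
</code>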
Texture.cpp:applyTexImage2D_subload:
<code>
unsigned char* data = (unsigned char*)image->data();
if (needImageRescale) {
// allocates rescale buffer
data = new unsigned char[newTotalSize];
// calls gluScaleImage into the data buffer
}
const unsigned char* dataPtr = image->data();
// subloads 'dataPtr'
// deletes 'data'
</code>
In effect, the scaled data would never be used.
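A sketch of the corrected flow (the fix as described, not the exact patched code):
<code>
unsigned char* data = (unsigned char*)image->data();
if (needImageRescale) {
    // allocates rescale buffer
    data = new unsigned char[newTotalSize];
    // calls gluScaleImage into the data buffer
}
const unsigned char* dataPtr = data;   // was image->data(), which bypassed the rescaled buffer
// subloads 'dataPtr'
// deletes 'data' (only when it was newly allocated)
</code>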
I've also replaced bits of duplicate code in Texture1D/2D/2DArray/3D/Cubemap/Rectangle
that checks if the texture image can/should be unref'd with common functionality in
Texture.cpp.
"
Texture2DMultisample, as the name suggests, provides a means to directly access sub-samples of a rendered FBO target (via the GLSL 1.5 texelFetch call).
Recently I was working on a deferred renderer with OSG, during which I noticed there is no support for multisampled textures (the GL_ARB_texture_multisample extension). After consultations with Paul Martz and Wojtek Lewandowski I added a Texture2DMultisample class and made a few necessary changes around the osg::FrameBufferObject, osg::Texture and osgUtil::RenderStage classes."
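A minimal usage sketch of the new class (the accessor names used here are my reading of the texture API and may differ from the actual header):
<code>
#include <osg/Texture2DMultisample>
#include <osg/Camera>

// Create a 4x multisampled colour target for an FBO camera. No filter/wrap
// TexParameters are set - see the follow-up note below.
osg::ref_ptr<osg::Texture2DMultisample> createMultisampleColorTarget(int width, int height)
{
    osg::ref_ptr<osg::Texture2DMultisample> tex = new osg::Texture2DMultisample;
    tex->setTextureSize(width, height);
    tex->setNumSamples(4);
    tex->setInternalFormat(GL_RGBA8);
    return tex;
}

// Attaching it (assumed usage):
//   camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
//   camera->attach(osg::Camera::COLOR_BUFFER, createMultisampleColorTarget(1024,768).get());
</code>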
and from a follow-up email:
"Fixed. According to ARB_texture_multisample extension specification multisample textures don't need TexParameters since they can only be fetched with texelFetch."
1) Add getShadowComparison() accessor function to osg::Texture class
2) Modify ReaderWriterTiff::writeTifStream() and _readColor() (in Image.cpp) to handle pixelFormat==GL_DEPTH_COMPONENT as if it were GL_LUMINANCE
3) Modify the Texture classes of the ive and osg plug-ins so that they save/load the following Texture members: _use_shadow_comparison, _shadow_compare_func and _shadow_texture_mode
"
The Texture Pool can be enabled by setting the env var OSG_TEXTURE_POOL_SIZE=size_in_bytes.
Note, setting a size of 1 will result in the TexturePool allocating the minimum number of
textures it can without having to reuse TextureObjects from within the same frame.
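For example, a minimal sketch of one way to set it programmatically (setenv is POSIX; on Windows _putenv_s would be used instead):
<code>
#include <cstdlib>

int main(int argc, char** argv)
{
    // reserve ~256MB for the texture pool before any contexts are created,
    // equivalent to: export OSG_TEXTURE_POOL_SIZE=268435456
    setenv("OSG_TEXTURE_POOL_SIZE", "268435456", 1);

    // ... construct the osgViewer::Viewer and run as usual ...
    return 0;
}
</code>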
- osg::Texture sets GL_TEXTURE_MAX_LEVEL if the image uses fewer mipmaps than
the number from computeNumberOfMipmaps (and it works!) - see the sketch after this list
- DDS fix to read only available mipmaps
- DDS fixes to read / save 3D textures with mipmaps ( packing == 1 is
required)
- A few cosmetic DDS modifications and comments to make the code cleaner (I hope)
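A sketch of the first point above (where exactly osg::Texture issues this call is paraphrased; only the GL parameter is standard):
<code>
// Sketch only: clamp the mipmap chain when the osg::Image carries fewer
// mipmap levels than the texture dimensions would normally produce.
GLint availableLevels = image->getNumMipmapLevels();   // levels actually present
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, availableLevels-1);
</code>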
Added _isTextureMaxLevelSupported variable to texture extensions. It
could be removed if OSG requires OpenGL version 1.2 by default.
Added a simple ComputeImageSizeInBytes function in the DDS ReaderWriter. In
my opinion it would be better if a similar static method were defined for
Image. Then it could be used not only in DDS but in other modules as well (I
noticed that Texture/Texture2D do similar computations).
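A sketch of such a helper for uncompressed images (the real DDS version also has to handle the block-compressed formats; the signature below is a guess):
<code>
#include <osg/Image>

unsigned int ComputeImageSizeInBytes(int width, int height, int depth,
                                     GLenum pixelFormat, GLenum type,
                                     int packing)
{
    // packing == 1 is required for the DDS 3D-texture case mentioned above
    unsigned int rowWidthInBytes = osg::Image::computeRowWidthInBytes(width, pixelFormat, type, packing);
    return rowWidthInBytes * height * depth;
}
</code>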
Also attached is an example test.osg model with a DDS without the last mipmaps to
demonstrate the problem. When loaded into the Viewer with the current code and moved
far away, so that the cube occupies 4 pixels, the cube becomes red due to the issue
I described in an earlier post. When you patch the DDS ReaderWriter with the attached
code but not osg::Texture yet, the cube becomes blank (at least on my
Windows/NVidia). When you also merge the osg::Texture patch the cube will look right
and the mipmaps will be correct."
GL_GENERATE_MIPMAP_SGIS is very slow (over half a second for a 720*576
texture). However, glGenerateMipmapEXT() performs well (16ms for the
same texture), so I have modified the attached files to use
Texture::generateMipmap() if glGenerateMipmapEXT is supported, instead
of enabling & disabling GL_GENERATE_MIPMAP_SGIS."
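A sketch of the resulting preference (the surrounding Texture code is paraphrased; only the GL calls and tokens are standard):
<code>
#include <osg/GL>

#ifndef GL_GENERATE_MIPMAP_SGIS
#define GL_GENERATE_MIPMAP_SGIS 0x8191
#endif

// Sketch only: prefer explicit mipmap generation via glGenerateMipmapEXT when
// the EXT_framebuffer_object entry point is available, otherwise fall back to
// the automatic SGIS hint.
typedef void (* GenerateMipmapProc)(GLenum target);

void generateMipmapsFor(GLenum target, GenerateMipmapProc glGenerateMipmapEXT)
{
    if (glGenerateMipmapEXT)
    {
        // fast path: one-shot mipmap generation for the currently bound texture
        glGenerateMipmapEXT(target);
    }
    else
    {
        // fallback: ask the driver to regenerate mipmaps when the base level changes
        glTexParameteri(target, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
    }
}
</code>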
Notes from Robert Osfield: I've tested out the previous patch using
GL_GENERATE_MIPMAP_SGIS and non power of two textures on an NVidia 7800GT and
NVidia Linux drivers with the image size 720x576 and only get compile times
of 56ms, so the above half-second speed looks to be a driver bug. With
Michael's changes the cost goes down to less than 5ms, so it's certainly
an effective change, even given that Michael's poor experiences with
GL_GENERATE_MIPMAP_SGIS do look to be a driver bug.