The Texture Pool can be enabled by setting the env var OSG_TEXTURE_POOL_SIZE=size_in_bytes.
Note, setting a size of 1 will result in the TexturePool allocating the minimum number of
textures it can without having to reuse TextureObjects from within the same frame.
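As a sketch, the variable can also be set from code before the first graphics
context is created (the 64 MB value below is arbitrary; setenv() is POSIX,
Windows would use _putenv()):

    #include <cstdlib>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

    int main()
    {
        // Reserve ~64 MB for the texture pool; must be set before the
        // first graphics context is realized.
        setenv("OSG_TEXTURE_POOL_SIZE", "67108864", 1);

        osgViewer::Viewer viewer;
        viewer.setSceneData(osgDB::readNodeFile("cow.osgt"));
        return viewer.run();
    }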
- osg::Texture sets GL_TEXTURE_MAX_LEVEL if the image supplies fewer mipmaps
  than the number computed by computeNumberOfMipmaps (and it works!); see the
  sketch after this list.
- DDS fix to read only the available mipmaps.
- DDS fixes to read/save 3D textures with mipmaps (packing == 1 is required).
- A few cosmetic DDS modifications and comments to make the code cleaner (I hope).
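A rough sketch of the first item above, assuming an isTextureMaxLevelSupported()
accessor for the new extension flag (the real member name may differ): the
mipmap chain is clamped to the levels the image actually provides, so the
driver never samples a missing level.

    // inside a Texture::apply-style code path, with the texture bound
    GLint availableLevels = image->getNumMipmapLevels();
    if (extensions->isTextureMaxLevelSupported())  // requires OpenGL 1.2
    {
        // levels are numbered from 0, so the last valid level is count-1
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, availableLevels - 1);
    }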
Added _isTextureMaxLevelSupported variable to texture extensions. It
could be removed if OSG requires OpenGL version 1.2 by default.
Added a simple ComputeImageSizeInBytes function in the DDS ReaderWriter. In
my opinion it would be better if a similar static method was defined on
Image; then it could be used not only in DDS but in other modules as well (I
noticed that Texture/Texture2D do similar computations).
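As a rough illustration of the suggested Image-level helper, a sketch for
uncompressed formats only (the plugin's actual function also has to handle
block-compressed DDS formats, and the name and signature here are illustrative):

    #include <osg/Image>

    // Size in bytes of one image slice stack; 'packing' matches the
    // glPixelStorei(GL_UNPACK_ALIGNMENT, ...) value.
    unsigned int computeImageSizeInBytes(int width, int height, int depth,
                                         GLenum pixelFormat, GLenum dataType,
                                         int packing = 1)
    {
        unsigned int rowWidth =
            osg::Image::computeRowWidthInBytes(width, pixelFormat, dataType, packing);
        return rowWidth * height * depth;
    }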
Also attached is an example test.osg model with a DDS missing its last mipmaps
to demonstrate the problem. When loaded into the viewer with the current code
and moved far away, so that the cube occupies 4 pixels, the cube becomes red
due to the issue I described in an earlier post. When you patch the DDS reader
writer with the attached code but not osg::Texture yet, the cube becomes blank
(at least on my Windows/NVidia setup). When you also merge the osg::Texture
patch the cube will look right and the mipmaps will be correct."
GL_GENERATE_MIPMAP_SGIS is very slow (over half a second for a 720x576
texture). However, glGenerateMipmapEXT() performs well (16ms for the
same texture), so I have modified the attached files to use
Texture::generateMipmap() if glGenerateMipmapEXT is supported, instead
of enabling & disabling GL_GENERATE_MIPMAP_SGIS."
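A sketch of the preferred path described above, assuming glGenerateMipmapEXT
has already been resolved through the extension mechanism:

    if (glGenerateMipmapEXT)  // EXT_framebuffer_object path: fast
    {
        // the texture object is already bound at this point
        glGenerateMipmapEXT(GL_TEXTURE_2D);
    }
    else                      // fallback: let the driver regenerate on upload
    {
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
    }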
Notes from Robert Osfield: I've tested the previous code path using
GL_GENERATE_MIPMAP_SGIS and non power of two textures on an NVidia 7800GT with
NVidia Linux drivers at an image size of 720x576, and only get compile times
of 56ms, so the above half-second speed looks to be a driver bug. With
Michael's changes the cost goes down to less than 5ms, so it's certainly
an effective change, even given that Michael's poor experiences with
GL_GENERATE_MIPMAP_SGIS do look to be a driver bug.
- Implementation of integer textures as in EXT_texture_integer
- setBorderColor(Vec4) changed to setBorderColor(Vec4d) to pass double values
  as the border color. (Probably we have to provide an overloaded function to
  still support Vec4f?)
- new method Texture::getInternalFormatType() added. It tells you whether the
  internal format is normalized, float, signed integer or unsigned integer.
  Can help people to write better code ;-) (a usage sketch follows below)
"
Further changes to this submission by Robert Osfield: changed the dirty mipmap
flag into a buffered_value<> vector to ensure safe handling of multiple
contexts, and used a local function pointer to avoid compiler warnings related
to casts from void*.
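A minimal sketch of the multi-context pattern this change adopts (the class and
member names are illustrative, not the real osg::Texture code):

    #include <osg/buffered_value>

    class SketchTexture
    {
    public:
        // mark the mipmaps dirty for every graphics context at once
        void dirtyMipmap() { _mipmapDirtyList.setAllElementsTo(1); }

        void applyTexture(unsigned int contextID)
        {
            if (_mipmapDirtyList[contextID] != 0)
            {
                // ... regenerate mipmaps for this context only ...
                _mipmapDirtyList[contextID] = 0;
            }
        }

    protected:
        // one dirty flag per graphics context; expands on demand
        mutable osg::buffered_value<unsigned int> _mipmapDirtyList;
    };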
Moved various OSG classes across to using setGLExtensions instead of getGLExtensions,
and changed them to use typedef declarations in the headers rather than casts in
the .cpp files.
Updated wrappers
"A new texture class Texture2DArray derived from
Texture extends the osg to support the new
EXT_texture_array extensions. Texture arrays provides
a feature for people interesting in GPGPU programming.
Faetures and changes:
- Full support for layered 2D textures.
- New uniform types were added (sampler2DArray)
- FrameBufferObject implementation were changed to
support attaching of 2D array textures to the
framebuffer
- StateSet was slightly changed to support texture
arrays. NOTE: array textures can not be used in fixed
function pipeline. Thus using the layered texture as a
statemode for a Drawable produce invalid enumerant
OpenGL errors.
- Image class was extended to support handling of
array textures
Tests:
I have used this class as a new feature of my
application. It works for me without problems (Note:
Texture arrays were introduced only for shading
languages and not for fixed function pipelines!!!).
RTT with Texture2DArray works, as I have tested them
as texture targets for a camera with 6 layers/faces
(i.e. replacement for cube maps). I am using the array
textures in shader programming. Array textures can be
attached to the FBO and used as input and as output."
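A short sketch of the render-to-texture usage described above (the sizes, the
layer index and the someDrawable variable are illustrative; note
setTextureAttribute rather than setTextureAttributeAndModes, since there is no
fixed-function mode to enable):

    #include <osg/Camera>
    #include <osg/Texture2DArray>
    #include <osg/Uniform>

    // a 6-layer array texture, e.g. as a cube-map replacement
    osg::ref_ptr<osg::Texture2DArray> tex = new osg::Texture2DArray;
    tex->setTextureSize(512, 512, 6);
    tex->setInternalFormat(GL_RGBA);

    // render into layer 2: the 'face' argument selects the layer
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    camera->attach(osg::Camera::COLOR_BUFFER, tex.get(), 0 /*level*/, 2 /*layer*/);

    // bind for sampling in a shader via the new sampler2DArray uniform type
    osg::StateSet* ss = someDrawable->getOrCreateStateSet();
    ss->setTextureAttribute(0, tex.get());
    osg::Uniform* sampler = new osg::Uniform(osg::Uniform::SAMPLER_2D_ARRAY, "layeredTex");
    sampler->set(0);  // texture unit 0
    ss->addUniform(sampler);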
in incorrect texture assignment. The solution was to add a compareTextureObjects() test to the Texture*::compare(..) method that
the osgUtil::Optimizer::StateSetVisitor uses to determine uniqueness.
along with support for this in Texture1D, 2D, 3D, TextureCubeMap and TextureRectangle. The
new SourceFormat and SourceType parameters are only used when no osg::Image is assigned to
an osg::Texture; their main use is for render to texture effects.
Added support for a --hdr option in osgprerender, which utilises the new Texture::setSourceFormat/Type() methods.
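A sketch of the render-to-texture case these parameters target: with no
osg::Image attached, SourceFormat/SourceType describe the pixel data for the
texture allocation (the sizes and formats below are illustrative):

    #include <osg/Camera>
    #include <osg/Texture2D>

    osg::ref_ptr<osg::Texture2D> hdrTex = new osg::Texture2D;
    hdrTex->setTextureSize(1024, 1024);
    hdrTex->setInternalFormat(GL_RGBA16F_ARB);  // float internal format for HDR
    hdrTex->setSourceFormat(GL_RGBA);
    hdrTex->setSourceType(GL_FLOAT);

    osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
    rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    rttCamera->attach(osg::Camera::COLOR_BUFFER, hdrTex.get());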
Note, this required adding an unsigned int context ID to the osg::isGLUExtensionSupported(,)
and osg::isGLExtensionSupported(,) functions. This may require reimplementation
of end user code to accommodate the new calling convention.
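The new calling convention in a nutshell, assuming an osg::State object is in
scope (the extension string is just an example):

    // extension queries are now per graphics context
    unsigned int contextID = state.getContextID();
    bool fboSupported =
        osg::isGLExtensionSupported(contextID, "GL_EXT_framebuffer_object");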
Removed the rescaling of images in osg::Image during texture apply, moving
the rescale so it is calculated locally. This solves an outstanding threading
problem which occurred when multiple draw threads all tried to rescale the same
image at the same time.
Made the osg::Image pointer in osg::Texture2D non-mutable as it is no longer
modified during apply.
any reference to these in the distribution across to using unsigned char,
unsigned short, etc. This has been done to make it clearer throughout the OSG
code what the underlying types are.
Added new osg::State::setInterleavedArray() method.
Added new osg::Group::setNode(uint,Node*) method.
Cleaned up and fixed osg::Texture's handling of dirtyTextureParameters().
initializes the array to the number of graphics contexts, and automatically
expands the array when indices outside the current size are required.
Added new osg::Texture::Extensions nested class to handle extensions on a per
context basis.
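A sketch of how such a per-context Extensions record is typically looked up
(the query method is illustrative of the pattern):

    // each graphics context gets its own Extensions record
    unsigned int contextID = state.getContextID();
    osg::Texture::Extensions* extensions = osg::Texture::getExtensions(contextID, true);
    if (extensions && extensions->isMultiTexturingSupported())
    {
        // safe to issue multi-texturing calls on this context
    }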