Age | Commit message | Author |
|
A bad test caused us to select a 4444 texture format, which hit the
intel_update_wrapper error path; despite the appearance of error handling
in it and its callers, the desired behavior (software fallback) doesn't occur.
|
|
|
|
|
|
Too much commit -a while debugging.
|
|
This lets swrast produce an fbconfig suitable for the root visual now that
the server's not allowing mismatched fbconfigs.
|
|
Some apps won't run with indirect rendering contexts.
Also, consolidate some context-init code in a new init_glx_context() function.
|
|
|
|
Only 15 generic vertex attributes actually worked before, since we always
reserved generic[0] as an alias for vertex position.
The case of vertex attribute 0 is tricky. The spec says that there is no
aliasing between generic vertex attributes 0..MAX_VERTEX_ATTRIBS-1 and the
conventional attributes. But it also says that calls to glVertexAttrib(0, v)
are equivalent to glVertex(v). The distinction seems to be in glVertex-mode
versus vertex array mode.
So update the VBO code so that if the shader uses generic[0] but not gl_Vertex,
the attribute data set with glVertex() is routed to shader input generic[0].
No change needed for the glDrawArrays/Elements() path.
This is a potentially risky change so regressions are possible. All the usual
tests seem OK though.
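A minimal toy model of that routing rule (made-up names and slot numbers,
not the actual VBO code):

    /* Toy model of the attribute-0 routing rule described above; the
     * uses_* flags and slot numbers are illustrative, not Mesa's. */
    #include <stdbool.h>
    #include <stdio.h>

    enum { SLOT_POS = 0, SLOT_GENERIC0 = 16 };

    /* Decide which shader input receives data written via glVertex(). */
    static int route_glvertex_data(bool uses_generic0, bool uses_gl_vertex)
    {
        /* If the shader reads generic[0] but never gl_Vertex, immediate-mode
         * glVertex() calls feed generic[0]; otherwise they feed the
         * conventional position attribute as before. */
        if (uses_generic0 && !uses_gl_vertex)
            return SLOT_GENERIC0;
        return SLOT_POS;
    }

    int main(void)
    {
        printf("generic[0] only -> slot %d\n", route_glvertex_data(true, false));
        printf("gl_Vertex used  -> slot %d\n", route_glvertex_data(false, true));
        return 0;
    }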
|
|
This info will be used in the linker for allocating generic vertex attribs.
|
|
|
|
|
|
|
|
|
|
No real-world impact on performance seen. Even glxgears seems to be, if
anything, happier.
|
|
|
|
This still leaves us with a broken depth 32 visual, but now it's the server's
visual setup that's at fault.
|
|
In a normal build system this is generated by configure.
|
|
Remove some unneeded fields. Rename some function parameters.
|
|
Rename some structs and fields to be more consistent with the rest of Mesa.
|
|
|
|
|
|
Rendering and textures are limited to 8k x 8k, but Mesa limits things to
4k x 4k, and the magic guard-band stuff may break on 8k x 8k drawing. This is
safe though, and makes compiz work on bigger screens.
Signed-off-by: Keith Packard <keithp@keithp.com>
|
|
|
|
Fixes failed assertion / segfault for particular proxy texture tests.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
adjust_buffer_object_ref_counts()
Fixes bug 19835. However, a more elaborate fix that uses proper reference
counting for gl_array_object should be implemented someday.
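A minimal sketch of the kind of reference counting suggested above, with a
hypothetical ArrayObject type standing in for gl_array_object:

    /* Reference-counting sketch; the type and helpers are hypothetical,
     * not Mesa's real gl_array_object API. */
    #include <stdlib.h>

    typedef struct {
        int refcount;
        /* ...array state would live here... */
    } ArrayObject;

    static ArrayObject *array_object_create(void)
    {
        ArrayObject *obj = calloc(1, sizeof(*obj));
        obj->refcount = 1;
        return obj;
    }

    static void array_object_reference(ArrayObject *obj)
    {
        obj->refcount++;
    }

    static void array_object_unreference(ArrayObject *obj)
    {
        if (--obj->refcount == 0)
            free(obj);   /* last holder gone: safe to delete */
    }

    int main(void)
    {
        ArrayObject *obj = array_object_create();
        array_object_reference(obj);    /* a second holder */
        array_object_unreference(obj);  /* first holder drops it */
        array_object_unreference(obj);  /* last reference frees it */
        return 0;
    }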
|
|
This is a 10% win on the ever-important glxgears not-a-benchmark.
|
|
This saves an inadvertent round-trip to the X Server on DrawBuffers, which was
hurting some metaops.
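A generic illustration of the round-trip-avoidance idea, not the actual
DrawBuffers change; all names here are made up:

    /* Skip a server request when the requested state hasn't changed. */
    #include <stdio.h>

    static int last_sent = -1;

    /* Stand-in for a request that would travel to the X server. */
    static void send_to_server(int value) { printf("round trip: %d\n", value); }

    static void set_draw_buffer_cached(int value)
    {
        if (value == last_sent)
            return;             /* nothing changed; skip the round trip */
        send_to_server(value);
        last_sent = value;
    }

    int main(void)
    {
        set_draw_buffer_cached(0);  /* one round trip */
        set_draw_buffer_cached(0);  /* skipped */
        set_draw_buffer_cached(1);  /* one round trip */
        return 0;
    }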
|
|
We can support any combination of (a8r8g8b8, x8r8g8b8, r5g6b5) x (z0, z24, z24s8)
on either class of chipsets. The only restriction is that we can't mix bpp when
also mixing tiling, which shouldn't be occurring currently.
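A toy version of that compatibility rule (the Surface struct and values are
illustrative, not the driver's real format handling):

    /* Any color format may pair with any depth/stencil format as long as
     * we don't mix bpp while also mixing tiling. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { int bpp; bool tiled; } Surface;

    static bool pair_ok(Surface color, Surface depth)
    {
        bool mixed_bpp    = color.bpp   != depth.bpp;
        bool mixed_tiling = color.tiled != depth.tiled;
        return !(mixed_bpp && mixed_tiling);
    }

    int main(void)
    {
        Surface a8r8g8b8_tiled = { 32, true };
        Surface r5g6b5_tiled   = { 16, true };
        Surface z24s8_tiled    = { 32, true };
        Surface z24s8_linear   = { 32, false };

        printf("a8r8g8b8 + z24s8, both tiled:     %d\n",
               pair_ok(a8r8g8b8_tiled, z24s8_tiled));
        printf("r5g6b5 + z24s8, mixed bpp+tiling: %d\n",
               pair_ok(r5g6b5_tiled, z24s8_linear));
        return 0;
    }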
|
|
|
|
|
|
|
|
This is part of GL_EXT_framebuffer_blit and GL_ARB_framebuffer_object.
|
|
|
|
|
|
|
|
|
|
Anytime we're not rendering to the default/window FBO, we need to invert
rendering, not just when rendering to a texture. Otherwise, if an FBO
consists of a mix of textures and renderbuffers, the up/down orientation
is inconsistent.
Fixes bad rendering in shadowtex.c.
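A toy model of the new orientation rule, with made-up names:

    /* Invert for any user FBO, not only for render-to-texture. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool render_inverted(bool bound_to_winsys_fbo)
    {
        /* Previously this was keyed on "rendering to a texture"; now any
         * non-window FBO gets the inverted orientation. */
        return !bound_to_winsys_fbo;
    }

    int main(void)
    {
        printf("window FBO: inverted=%d\n", render_inverted(true));
        printf("user FBO:   inverted=%d\n", render_inverted(false));
        return 0;
    }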
|
|
When glTexImage() is called, we need to re-validate any FBOs that point to
the texture (i.e. render-to-texture) since changing the texture's size/format
will affect FBO completeness.
We don't keep a list of all FBOs rendering into each texture (which would be
a bit messy) so we check all FBOs in existence. To optimize this, the
gl_texture_object->_RenderToTexture flag is used to avoid checking textures
that have never been used as renderbuffers. So, we only walk over all FBOs
(there are usually only a few) when glTexImage() modifies an RTT texture.
Fixes a bug seen in shadowtex.c when toggling packed depth/stencil mode.
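A sketch of that validation walk, using hypothetical types rather than
Mesa's real gl_texture_object/gl_framebuffer:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { bool render_to_texture; } Texture;

    typedef struct Fbo {
        struct Fbo *next;
        Texture *attachment[4];
        int num_attachments;
        bool needs_validation;
    } Fbo;

    static void revalidate_fbos_for_texture(Fbo *all_fbos, Texture *tex)
    {
        /* Skip the walk entirely for textures never used as render targets. */
        if (!tex->render_to_texture)
            return;

        for (Fbo *fbo = all_fbos; fbo; fbo = fbo->next) {
            for (int i = 0; i < fbo->num_attachments; i++) {
                if (fbo->attachment[i] == tex)
                    fbo->needs_validation = true;  /* re-check completeness */
            }
        }
    }

    int main(void)
    {
        Texture tex = { .render_to_texture = true };
        Fbo fbo = { .next = NULL, .attachment = { &tex }, .num_attachments = 1 };
        revalidate_fbos_for_texture(&fbo, &tex);
        return fbo.needs_validation ? 0 : 1;
    }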
|
|
Move the remaining extension string enables to intel_extensions.c.
Make sure that GL_NV_texture_env_combine4 is not enabled on i830.
|
|
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
|