Conflicts:
    docs/install.html
    docs/relnotes-7.3.html
    src/mesa/shader/slang/slang_codegen.c
    src/mesa/shader/slang/slang_compile.c
    src/mesa/shader/slang/slang_emit.c
    src/mesa/shader/slang/slang_preprocess.c
    src/mesa/shader/slang/slang_preprocess.h
Don't overload the Size field with the texture target, to avoid confusion.
This was changed between GL 1.0 and 1.1. Mesa still had the 1.0 behaviour.
Previously MaxTextureUnits was used to validate both texture image
units and texture coordinate units in fragment programs. Instead, use
MaxTextureCoordUnits for texture coordinate units and
MaxTextureImageUnits for texture image units.
Fixes bugzilla #19468.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
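(For illustration only: a sketch of validating the two kinds of units against their own limits. The limit names are the ones mentioned above; the struct and function are hypothetical, not Mesa's actual code.)

    /* Hypothetical sketch: validate texture coordinate units and texture
     * image units against their own limits instead of a single
     * MaxTextureUnits value. */
    #include <stdbool.h>
    #include <stdio.h>

    struct limits {
       unsigned MaxTextureCoordUnits;   /* limit for TEXCOORD attributes */
       unsigned MaxTextureImageUnits;   /* limit for texture samplers */
    };

    static bool
    validate_tex_units(const struct limits *l,
                       unsigned coord_unit, unsigned image_unit)
    {
       if (coord_unit >= l->MaxTextureCoordUnits) {
          fprintf(stderr, "invalid texture coordinate unit %u\n", coord_unit);
          return false;
       }
       if (image_unit >= l->MaxTextureImageUnits) {
          fprintf(stderr, "invalid texture image unit %u\n", image_unit);
          return false;
       }
       return true;
    }

    int main(void)
    {
       struct limits l = { 8, 16 };   /* e.g. 8 coord units, 16 image units */
       /* Sampler index 12 is a valid image unit here, but would have been
        * rejected if checked against the 8-entry coord-unit limit. */
       return validate_tex_units(&l, 3, 12) ? 0 : 1;
    }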
Unfortunately both Mesa and Gallium use the same defines for memory
allocation (MALLOC, FREE, etc.), and worse, sometimes memory is allocated
with one set and freed with the other, causing the homegrown memory
debugger to trip over itself.
In the future Mesa and Gallium should use different names, but for now,
memory debugging on Windows will have to be done with different tools.
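(A minimal sketch of the failure mode, with an invented debug-allocator header; this is not the actual memory debugger. Memory handed out by the wrapped allocator must be released by the matching free, otherwise the bookkeeping misfires.)

    /* Illustrative only: a debugging allocator that hides a header in front
     * of each allocation.  Mixing it with the plain C allocator corrupts its
     * bookkeeping. */
    #include <stdlib.h>

    #define MAGIC 0x12345678u

    struct dbg_header {
       unsigned magic;
       size_t   size;
    };

    static void *debug_malloc(size_t size)
    {
       struct dbg_header *h = malloc(sizeof(*h) + size);
       if (!h)
          return NULL;
       h->magic = MAGIC;
       h->size = size;
       return h + 1;             /* caller sees the memory after the header */
    }

    static void debug_free(void *ptr)
    {
       struct dbg_header *h = (struct dbg_header *) ptr - 1;
       if (h->magic == MAGIC)    /* expects the header written above */
          free(h);
    }

    int main(void)
    {
       void *a = debug_malloc(32);   /* one allocation set... */
       debug_free(a);                /* ...freed with its matching free: fine */

       /* The problematic pattern, left commented out because it is undefined
        * behaviour: memory from one set released with the other.
        *
        *    void *b = malloc(32);
        *    debug_free(b);
        */
       return 0;
    }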
This allows MinGW and MSVC builds within the same directory.
Without the f, the constant is treated as a double, resulting in
slower arithmetic and libgcc conversion calls each time CEILF()
is used.
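(A small before/after illustration; the macros here are stand-ins rather than Mesa's CEILF() definition.)

    #include <math.h>

    /* Without the 'f' suffix, 0.5 is a double: x gets promoted, the addition
     * happens in double precision, and soft-float targets call libgcc
     * conversion/arithmetic helpers on every use. */
    #define ROUND_HALF_SLOW(x)  ((float) ((x) + 0.5))

    /* With the 'f' suffix the whole expression stays in single precision. */
    #define ROUND_HALF_FAST(x)  ((float) ((x) + 0.5f))

    float example(float x)
    {
       return ceilf(x);   /* likewise, ceilf() rather than ceil() stays float */
    }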
Two forms are supported:
Pragmas are silently ignored at this time.
Copied language from the glXSwapBuffers manual page about the implicit
glFlush and expected command completion. This just codifies what
people already expect from glXCopySubBufferMESA. The intention of
this command is to work like glXSwapBuffers but on a sub-rectangle of
the drawable.
Acked-by: Brian Paul <brianp@vmware.com>
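(A hedged usage sketch; the coordinates are arbitrary and the context/window setup is omitted.)

    #include <GL/glx.h>

    /* Copy a 100x50 region of the back buffer to the front buffer, with the
     * region's lower-left corner at (10, 20) in window coordinates.  Like
     * glXSwapBuffers(), this implies a glFlush() on the bound context. */
    void present_subrect(Display *dpy, GLXDrawable drawable)
    {
    #ifdef GLX_MESA_copy_sub_buffer
       glXCopySubBufferMESA(dpy, drawable, 10, 20, 100, 50);
    #endif
    }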
Previously all squares were textured with the same texture... not quite what
the demo was supposed to look like!
This also fixes a number of other cases where not all of the geometry was
drawn.
No real idea what this does... but a lot of things that misrendered and
made the GPU throw a DATA_ERROR now work. I'm wondering what side effects
we'll see from this :)
Not quite working yet, but I think the general idea is right.
Replace cell_batch_{align,alloc}*() with cell_batch_alloc16(), allocating
multiples of 16 bytes that are 16-byte aligned.
Opcodes are stored in the preferred slot of the SPU machine word.
Various structures are explicitly padded to 16-byte multiples.
Added STATIC_ASSERT().
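(Illustration only: one common way to write such a STATIC_ASSERT(), applied to a hypothetical 16-byte-padded command struct; these are not the driver's actual definitions.)

    #include <stdint.h>

    /* One common way to write a compile-time assertion: the array size goes
     * negative, and the build fails, when COND is false. */
    #define STATIC_ASSERT(COND)  do { (void) sizeof(char [1 - 2*!(COND)]); } while (0)

    /* Hypothetical command record, explicitly padded to a 16-byte multiple so
     * each command starts at a 16-byte-aligned offset in the batch buffer; the
     * 32-bit opcode then lands in word element 0 of the quadword, the SPU's
     * preferred slot for 32-bit values. */
    struct example_cmd {
       uint32_t opcode;
       uint32_t arg0;
       uint32_t arg1;
       uint32_t pad;      /* explicit padding up to 16 bytes */
    };

    int main(void)
    {
       STATIC_ASSERT(sizeof(struct example_cmd) % 16 == 0);
       return 0;
    }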
As for glBitmap, it needs to be an NDC coord in [-1,1].
When we use the do_blit_bitmap() function, it seems the fragment Z is always
1.0. If depth testing is on, that means that bitmap fragments are often
occluded by other rendering. So, the bitmap doesn't appear even if
rasterpos.Z==0.
The fix is to use the intel_texture_bitmap() path when depth testing is on.
Also, fix the incorrect Z coordinate. It needs to be an NDC value in [-1,1].
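(The coordinate remapping, assuming the raster position Z is a window-space depth in [0, 1]:)

    /* Map a window-space depth in [0, 1] to the NDC range [-1, 1] expected
     * for the emitted vertex Z (assumes the default glDepthRange of [0, 1]). */
    float window_z_to_ndc(float win_z)
    {
       return win_z * 2.0f - 1.0f;
    }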
Just call _mesa_append_fog_code() if the fragment program's FogOption is
not GL_NONE.
This allows us to remove some unnecessary i965 fog code.
Note, the arbfplight.c demo can be used to test this (see DO_FRAGMENT_FOG).
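(A rough sketch of the driver-side check, using stand-in types; _mesa_append_fog_code()'s exact parameters are not shown and the helper call is only indicated in a comment.)

    /* Stand-in types only; the real helper is _mesa_append_fog_code() and its
     * call is just indicated in the comment below. */
    enum { FOG_NONE, FOG_LINEAR, FOG_EXP, FOG_EXP2 };

    struct fake_fragment_program {
       int FogOption;   /* stand-in for the program's GL_NONE / GL_FOG_* option */
    };

    void maybe_append_fog(struct fake_fragment_program *fp)
    {
       if (fp->FogOption != FOG_NONE) {
          /* here the driver would ask core Mesa to append the fog
           * instructions, e.g. via _mesa_append_fog_code(), instead of
           * keeping its own fog code */
       }
    }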
According to feature_test_macros(7), _GNU_SOURCE encompasses all the
other feature macros we were setting, so we can just dispose of them.
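(Illustration; the macro list in the comment is an assumed example of what might have been defined before, not the exact set removed.)

    /* Before (assumed example): several feature macros defined one by one,
     *    #define _POSIX_C_SOURCE 199309L
     *    #define _XOPEN_SOURCE 500
     *    #define _BSD_SOURCE
     * After: _GNU_SOURCE implies all of them (see feature_test_macros(7)),
     * provided it is defined before any system header is included. */
    #define _GNU_SOURCE
    #include <string.h>   /* strcasestr() is a GNU extension enabled by it */

    int main(void)
    {
       return strcasestr("Hello, World", "world") ? 0 : 1;
    }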