Some platforms use 64-bit pointers and 32-bit longs, so operations like
casting a pointer to unsigned long and back to a pointer won't work.
glheader.h now includes files that define uintptr_t, which should be used
instead for this sort of operation; it is an integer type that is the same
size as a pointer.
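For illustration, a minimal standalone sketch of the safe round-trip (using
<stdint.h> directly rather than glheader.h):

    #include <stdint.h>   /* defines uintptr_t */
    #include <stdio.h>

    int main(void)
    {
        int value = 42;
        void *ptr = &value;

        /* On LLP64 platforms (e.g. 64-bit Windows), unsigned long is 32
         * bits while pointers are 64 bits, so casting through unsigned
         * long would truncate the address.  uintptr_t is guaranteed to
         * be the same size as a pointer, so this round-trip is safe. */
        uintptr_t bits = (uintptr_t) ptr;
        void *back = (void *) bits;

        printf("round-trip ok: %d\n", back == ptr);
        return 0;
    }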
Main driver impacts:
- new code for creating the Mesa GLframebuffer
- new span/pixel read/write code
Some drivers not yet updated/tested.
The texture environment can now be implemented by a fragment program which
is automatically generated to match the current texture environment state.
This introduces a new value, ctx->FragmentProgram._Active, which is true
when either _Enabled is true or such a generated fragment program is ready
to run.
To test this out on a driver running the software rasterizer, set
MESA_TEX_PROG=t in the environment. It goes without saying that performance
is lower for the software rasterizer in this mode.
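A rough sketch of the derived-state rule, with stand-in types; only
_Enabled and _Active appear above, and _TexEnvProgram is an invented name
for whatever field holds the generated program:

    #include <stdbool.h>
    #include <stddef.h>

    /* Minimal stand-in for the relevant piece of Mesa's context. */
    struct fragment_program_state {
       bool _Enabled;           /* user-enabled fragment program? */
       void *_TexEnvProgram;    /* invented: auto-generated texenv program */
       bool _Active;            /* derived: should a program be run? */
    };

    /* _Active is true when the user enabled a program, or when a
     * generated texenv program is ready to run. */
    static void
    update_fragment_program_active(struct fragment_program_state *fp)
    {
       fp->_Active = fp->_Enabled || fp->_TexEnvProgram != NULL;
    }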
regenerate get.c
Fix _mesa_GetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS_ARB): the values must
be converted with ENUM_TO_INT. mesa/progs/tests/texenv now lists supported
compressed texture formats correctly.
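An illustrative sketch of the conversion (ENUM_TO_INT is written out here
as a plain cast; the real get.c is generated code):

    #include <stddef.h>

    typedef unsigned int GLenum;
    typedef int GLint;

    /* Enum values must be handed back as plain integers, not scaled or
     * normalized the way colors and other state are. */
    #define ENUM_TO_INT(E)  ((GLint) (E))

    /* Copy the supported compressed-format enums into the caller's array. */
    static void
    get_compressed_formats(const GLenum *formats, size_t n, GLint *params)
    {
       for (size_t i = 0; i < n; i++)
          params[i] = ENUM_TO_INT(formats[i]);
    }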
- remove input/output fields; input tracking removed.
- remove state fields; the validate function is now called on every
  state change.
- add an explicit 'create' function.
Add code to build a vertex program implementing the current T&L state.
Still disabled, but it can be turned on with a #define in t_vp_build.h.
Reduce the size of the internal program representations by switching to
packed structures for registers and instructions.
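A hypothetical illustration of the technique; every field name and width
below is invented, not the actual Mesa layout:

    #include <stdint.h>

    /* Packing a register reference into bitfields shrinks it from
     * several ints to one 32-bit word. */
    struct packed_src_register {
       uint32_t file    : 4;    /* register file (temp, input, const...) */
       uint32_t index   : 12;   /* index within that file */
       uint32_t swizzle : 12;   /* 4 x 3-bit component selects */
       uint32_t negate  : 1;
       uint32_t pad     : 3;
    };

    struct packed_instruction {
       uint32_t opcode : 8;
       uint32_t pad    : 24;
       struct packed_src_register src[3];
       /* ... destination register, saturation flag, etc. */
    };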
In TLS builds, GET_CURRENT_CONTEXT now reads the current context from the
thread-local variable. Without this, GET_CURRENT_CONTEXT would *always*
result in a call to _glapi_get_context (because _glapi_Context is a const
pointer to NULL in TLS builds).
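A hedged sketch of the two lookup paths (the TLS variable name and the
exact macro shape are assumptions):

    struct gl_context;   /* opaque here */

    extern struct gl_context *_glapi_get_context(void);

    #ifdef GLX_USE_TLS
    /* TLS build: read the context straight from a thread-local variable.
     * _glapi_Context is a const pointer to NULL in this configuration,
     * so testing it would always fall through to the function call. */
    extern __thread struct gl_context *_glapi_tls_Context;
    #define GET_CURRENT_CONTEXT(C) \
       struct gl_context *C = _glapi_tls_Context
    #else
    /* Non-TLS build: fast path through the global pointer, falling back
     * to the lookup function when no context is current. */
    extern struct gl_context *_glapi_Context;
    #define GET_CURRENT_CONTEXT(C) \
       struct gl_context *C = _glapi_Context ? _glapi_Context \
                                             : _glapi_get_context()
    #endif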
Fragment shader derivative hint
reorder GL_ARB_shader_objects and GL_ARB_fragment/vertex_shader stuff
remove FEATURE_shading_language;
add ARB_fragment/vertex_shader max values
libGL builds.
The shading-language compiler is hooked in. It may be enabled on the
compiler command line by setting -DFEATURE_shading_language=1.
Add special handling of size elements for the glGet functions. There are
two parts to this. First, a size element with the name "Get" is shorthand
for four separate size elements named "GetIntegerv", "GetDoublev",
"GetFloatv", and "GetBooleanv". Second, a count of "?" is treated
specially: it causes a call to a hand-coded function named
"__gl<base name>_variable_size". This is *only* needed to support
GL_COMPRESSED_TEXTURE_FORMATS, an enum that can return a variable number
of values depending on how many compressed texture formats the
implementation supports.
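A hedged sketch of the shape such a hand-coded helper might take (the
signature and the server-state query are assumptions):

    typedef unsigned int GLenum;
    typedef int GLint;

    #define GL_COMPRESSED_TEXTURE_FORMATS 0x86A3

    /* Stand-in for however the server queries its own state. */
    extern GLint server_num_compressed_formats(void);

    /* Report how many values a glGet on this enum returns; unlike almost
     * every other enum, the count is only known at run time. */
    GLint
    __glGetBooleanv_variable_size(GLenum e)
    {
       if (e == GL_COMPRESSED_TEXTURE_FORMATS)
          return server_num_compressed_formats();
       return 0;   /* not a variable-size enum */
    }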
Fix a problem with glGetProgram{Local,Env}Parameter[df]vARB,
glAreProgramsResidentNV, and glGetVertexAttribivNV. These changes only
affect code generated for the server side.
The changes to enum.c are caused by enums added for the server-side
__glGetBooleanv_size functions.
additional checks.
Replace _mesa_init_buffers() with _mesa_init_scissor() and _mesa_init_multisample().
The generated enum.c now contains 3 static tables. The first table is a single, large string of
all the enum names. The second table is an array, sorted by enum name, of
indexes to the string table and the matching enum value. The extra string
table is used to eliminate relocs (and save space) in the compiled file.
The third table is an array, sorted by enum value, of indexes into the
second table.
The [name, enum] table contains all of the enums, but the table sorted by
enum-value does not. This table contains one entry per enum value. For
enum values that have multiple names (e.g., 0x84C0 has GL_TEXTURE0_ARB and
GL_TEXTURE0), only an index to the "best" name will appear in the table.
gl_enums.py gives precedence to "core" GL versions of names, followed by ARB
versions, followed by EXT versions, followed, finally, by vendor versions
(i.e., anything that doesn't fall into one of the previous categories). By
filtering the unneeded elements from this table, not only can we guarantee
determinism in the generated tables, but we save 364 elements in the table.
The optimizations outlined above reduced the size of the stripped enums.o
(on x86) from ~80KB to ~53KB.
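A small sketch of the layout being described (the names and entries are
invented for illustration):

    /* 1. One large string: all enum names back to back.  A single char
     *    array avoids the per-name reloc that an array of char* costs. */
    static const char enum_string_table[] =
       "GL_TEXTURE0\0"
       "GL_TEXTURE0_ARB\0";

    /* 2. [string offset, enum value] pairs, sorted by name. */
    struct enum_elt {
       unsigned short offset;   /* into enum_string_table */
       unsigned int value;      /* the GLenum value */
    };

    static const struct enum_elt all_enums[] = {
       {  0, 0x84C0 },   /* GL_TEXTURE0 */
       { 12, 0x84C0 },   /* GL_TEXTURE0_ARB */
    };

    /* 3. Indexes into all_enums, sorted by enum value, one per value;
     *    only the "best" name (core > ARB > EXT > vendor) is kept. */
    static const unsigned short reduced_enums[] = {
       0,   /* 0x84C0 -> GL_TEXTURE0 */
    };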
The internal organization of gl_enums.py was also heavily modified.
Previously enums were stored in an unsorted list as [value, name] tuples
(basically). This list was then sorted, using a user-specified compare
function (i.e., VERY slow in most Python implementations) to generate a
table sorted by enum value. It was then sorted again, using another
user-specified compare function, to generate a table sorted by name.
Enums are now stored in a dictionary, called enum_table, with the enum value
as the key. Each dictionary element is a list of [name, priority] pairs.
The priority is determined as described above. The table sorted by enum
value is generated by sorting the keys of enum_table (i.e., very fast). The
tables sorted by name are generated by creating a list, called name_table,
of [name, enum value] pairs. This table can then be sorted by doing
name_table.sort() (i.e., very fast).
The result is a fair amount more Python code, but execution time was reduced
from ~14 seconds to ~2 seconds.