Age | Commit message | Author |
|
Merge the multiple egl_<platform>_<pipe>.so libraries into a single
egl_gallium_<pipe>.so. The environment variable EGL_PLATFORM is now
used to override the return value of _eglGetNativePlatform.
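A minimal sketch of how the override could look; the enum values and
platform names here are illustrative, not the exact Mesa code:

    #include <stdlib.h>
    #include <string.h>

    enum _egl_platform_type {
       _EGL_PLATFORM_X11,
       _EGL_PLATFORM_DRM,
       _EGL_NATIVE_PLATFORM = _EGL_PLATFORM_X11  /* build-time default */
    };

    static enum _egl_platform_type
    _eglGetNativePlatform(void)
    {
       /* EGL_PLATFORM, when set, overrides the built-in default */
       const char *name = getenv("EGL_PLATFORM");

       if (name) {
          if (strcmp(name, "x11") == 0)
             return _EGL_PLATFORM_X11;
          if (strcmp(name, "drm") == 0)
             return _EGL_PLATFORM_DRM;
       }
       return _EGL_NATIVE_PLATFORM;
    }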
|
Move native_get_name, native_create_probe, native_get_probe_result, and
native_create_display into struct native_platform, and add
native_get_platform to get a handle to the struct.
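A sketch of the consolidated interface, using the names from the message;
the exact signatures are assumptions:

    struct native_probe;
    struct native_display;
    struct native_event_handler;

    enum native_probe_result {
       NATIVE_PROBE_UNKNOWN,
       NATIVE_PROBE_SUPPORTED  /* values abridged */
    };

    struct native_platform {
       const char *(*native_get_name)(void);
       struct native_probe *(*native_create_probe)(void *dpy);
       enum native_probe_result (*native_get_probe_result)(struct native_probe *probe);
       struct native_display *(*native_create_display)(void *dpy,
                                                       struct native_event_handler *handler);
    };

    /* single accessor replacing the per-function exports */
    const struct native_platform *native_get_platform(void);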
|
This allows api_OpenGL.so to support OpenGL ES.
|
Add st_gl_api_create_es1 and st_gl_api_create_es2 to create OpenGL ES 1.1
and OpenGL ES 2.0 contexts respectively.
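The new entry points, with struct st_api left opaque; their exact
prototypes are assumptions:

    struct st_api;

    struct st_api *st_gl_api_create_es1(void);  /* OpenGL ES 1.1 */
    struct st_api *st_gl_api_create_es2(void);  /* OpenGL ES 2.0 */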
|
The callback is used by st/vega to check whether a visual specifies the
depth/stencil format. It forces st/vega to be loaded by st/egl just to
perform the check. As noted in the EGL spec, the depth/stencil format of a
visual should not affect OpenVG. It would be better to ignore the
field and always allocate the depth/stencil texture.
|
Lay down the foundation for everything and implement most of the
functionality. Linking, gl_VerticesIn, and multidimensional inputs are
still left to do.
|
Oops. This should possibly fix some bugs...
|
Conflicts:
src/gallium/state_trackers/egl/x11/native_dri2.c
src/gallium/state_trackers/egl/x11/native_x11.c
src/gallium/state_trackers/egl/x11/native_x11.h
src/gallium/state_trackers/xorg/xorg_driver.c
src/gallium/winsys/radeon/drm/radeon_drm.c
|
The cpu_access flag is redundant in a software rasterizer.
|
lp_setup_bind_framebuffer().
We were starting a scene whenever lp_setup_get_vertex_info() was called by
the draw module. So when all primitives were culled/clipped, not only did
we create a new scene for nothing, but we also ended up using the old scene
with the old framebuffer state instead of a new one.
The fix consists of:
- don't call lp_setup_update_state() in lp_setup_get_vertex_info() -- no
longer necessary
- always set the scene state before binning a command -- query commands
were bypassing it
- assert that no old scene is reused in lp_setup_bind_framebuffer() (see
the sketch below)
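A reduced sketch of the last point, with the llvmpipe internals replaced
by stand-ins:

    #include <assert.h>
    #include <stddef.h>

    struct scene;

    struct setup_context {
       struct scene *scene;  /* scene under construction, NULL when flushed */
    };

    static void
    flush_scene(struct setup_context *setup)
    {
       /* ... bin and execute the current scene ... */
       setup->scene = NULL;
    }

    static void
    bind_framebuffer(struct setup_context *setup /*, new fb state */)
    {
       flush_scene(setup);
       assert(!setup->scene);  /* an old scene must not carry over */
       /* ... copy the new framebuffer state; it is validated when the
        * next scene is created ... */
    }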
|
Now the question is whether we are allowed to ignore gl_rasterization_rules
and pipe_rasterizer_state::multisample. The former is invariant anyway, and
I think the latter would require re-emitting the AA state, which is quite
costly, considering that it implicitly flushes the whole pipeline (all AA
regs in the AA state are *unpipelined*).
|
Unlike other invariant states, this one must be emitted after VAP flush.
|
We used to create and cache an unlimited number of variants. This change
limits the number of variants kept around to a fixed number. The change is
based on a similar patch by Roland for llvmpipe fragment shaders.
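A sketch of the capping scheme; the limit, the eviction policy, and all
names are assumptions:

    #include <stdlib.h>

    #define MAX_VARIANTS 128

    struct variant {
       struct variant *next;
       unsigned key;
       /* ... compiled machine code ... */
    };

    struct shader {
       struct variant *variants;  /* singly-linked, most recent first */
       unsigned nr_variants;
    };

    static struct variant *
    get_variant(struct shader *sh, unsigned key)
    {
       struct variant *v;

       for (v = sh->variants; v; v = v->next)
          if (v->key == key)
             return v;                  /* cache hit */

       if (sh->nr_variants >= MAX_VARIANTS) {
          /* evict the tail (oldest) variant before adding a new one */
          struct variant **prev = &sh->variants;
          while ((*prev)->next)
             prev = &(*prev)->next;
          free(*prev);
          *prev = NULL;
          sh->nr_variants--;
       }

       v = calloc(1, sizeof *v);        /* stands in for compile_variant() */
       v->key = key;
       v->next = sh->variants;          /* newest at the head */
       sh->variants = v;
       sh->nr_variants++;
       return v;
    }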
|
The conditionals aren't needed.
|
The SSE vertex shader does not seem to honor the execution mask. Pad the
output array as a workaround.
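A sketch of the padding, with the SIMD width and all names as assumptions:

    #include <stdlib.h>

    #define SSE_WIDTH 4  /* vertices processed per SSE iteration */

    /* Round the output allocation up to a whole number of SSE groups so
     * that lanes the execution mask should have disabled can still store
     * without corrupting adjacent memory. */
    static void *
    alloc_vs_output(unsigned num_vertices, size_t vertex_size)
    {
       unsigned padded = (num_vertices + SSE_WIDTH - 1) & ~(SSE_WIDTH - 1u);
       return malloc((size_t)padded * vertex_size);
    }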
|
...instead of waiting until glGetString(GL_EXTENSIONS) is called.
This fixes a problem where the MESA_EXTENSION_OVERRIDE env var is
ignored if the app never calls glGetString(GL_EXTENSIONS).
NOTE: this is a candidate patch for the 7.8 branch.
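A sketch of the ordering change; the types and helper are stand-ins for
the Mesa internals:

    #include <stdlib.h>

    struct context {
       const char *extension_override;
       /* ... */
    };

    static void
    create_context(struct context *ctx)
    {
       /* read the override eagerly at context creation; previously this
        * happened only inside glGetString(GL_EXTENSIONS), so apps that
        * never queried the string never saw the override applied */
       ctx->extension_override = getenv("MESA_EXTENSION_OVERRIDE");
    }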
|
I'm not sure if I really got it right. This seems like one of those
"Duh, of course it works that way" things, but I'd like the
documentation to be readable by people not acquainted with OGL/D3D.
|
Only for the first RT at the moment, as there is no trivial way in galahad
to look at framebuffer state, and (sadly) people don't usually calloc
their CSOs, so flags could be wrongly set.
On the other hand, of course, galahad will hopefully encourage more
people to calloc their CSOs. :3
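An illustration of the calloc point with a stand-in CSO (the real gallium
structs live in p_state.h):

    #include <stdlib.h>

    struct blend_cso {
       unsigned blend_enable:1;
       unsigned dither:1;
       unsigned colormask:4;
    };

    int main(void)
    {
       /* calloc leaves every flag at a well-defined zero, so a validating
        * layer can trust fields the application never set */
       struct blend_cso *cso = calloc(1, sizeof *cso);
       if (!cso)
          return 1;
       cso->colormask = 0xf;  /* set only what you mean to */
       free(cso);
       return 0;
    }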