drm_driver_descriptor::driver_name is defined to be the name of the
kernel module. We should check against drm_driver_descriptor::name
instead of drm_driver_descriptor::driver_name.
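A minimal sketch of the intended check, assuming a descriptor layout
with separate name and driver_name fields as described above (the
lookup helper itself is hypothetical):

    #include <string.h>

    /* Sketch of the descriptor as described above: driver_name holds
     * the kernel module name, while name identifies the gallium
     * driver. */
    struct drm_driver_descriptor {
       const char *name;        /* gallium driver name, e.g. "r300" */
       const char *driver_name; /* kernel module name, e.g. "radeon" */
    };

    /* Hypothetical lookup helper: match the requested driver against
     * the descriptor's name, not the kernel module name. */
    static int
    descriptor_matches(const struct drm_driver_descriptor *dd,
                       const char *requested)
    {
       return strcmp(dd->name, requested) == 0;
    }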
|
|
KNOWN ISSUE: eglShowScreenSurfaceMESA in st/egl/kms fails
but st/egl/x11 works
|
|
st/egl/x11/x11_screen.c requests a driver named r300, not radeon.
KNOWN ISSUE: this breaks st/egl/kms, which requests a pipe named
"radeon" that will no longer be found.
So why not leave pipe_radeon there? That was possible as long as we
had only r300g; now there is also r600g, for which st/egl/kms likewise
requests a pipe named "radeon".
(A possible solution follows in a later commit.)
|
|
|
|
Make st/wgl include only main/core.h from core mesa.
|
|
Make st/glx include only main/core.h from core mesa.
|
|
Fixes #29771.
|
|
Ported from r600c.
|
|
Ported from r600c; fixes fp-cmp and the glsl1-sqrt* tests.
|
|
Should fix errors on the original nv30, reported by pmdata.
|
|
|
|
|
|
s->close_first was on the wrong side of the inequality.
Caught by Blender.
Thanks to AndrewR for reporting this.
|
|
We need to always export at least one component (whether it's depth
or color). Add a valid r7xx shader program for depth decompression.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
|
|
|
|
|
|
util_fill_rect could not handle formats with more than 32 bits,
since the fill color was a uint32_t value. Fix this by using
a util_color union instead, and also expand the union so it
works with formats which have up to 256 bits (the max of any
format currently defined).
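A sketch of the widened union (not the exact Mesa definition): the
largest members span 32 bytes, enough for any 256-bit format:

    #include <stdint.h>

    /* Sketch of a util_color-style union. */
    union util_color_sketch {
       uint8_t  ub;       /* 8-bit formats */
       uint16_t us;       /* 16-bit formats */
       uint32_t ui[8];    /* 8 x 32 bits = 256 bits */
       float    f[8];
       double   d[4];     /* 4 x 64-bit channels = 256 bits */
    };

    /* A fill routine then copies the format's blocksize in bytes per
     * pixel from the union, instead of storing a single uint32_t. */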
|
|
Should improve performance, possibly significantly.
|
|
Gallium always puts gl_PointCoord in GENERIC[0] if
point_quad_rasterization is enabled.
This is silly, but for now it makes mesa-demos/glsl/pointcoord work.
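A hedged sketch of what a driver does with this convention; the field
and semantic names come from Gallium's rasterizer state and TGSI, but
the structures here are simplified stand-ins:

    /* Stand-ins for the relevant Gallium bits; the real definitions
     * live in pipe/p_state.h and the TGSI headers (enum value
     * illustrative). */
    enum { TGSI_SEMANTIC_GENERIC = 5 };
    struct rasterizer_state { unsigned point_quad_rasterization : 1; };
    struct fs_input { unsigned semantic_name, semantic_index; };

    /* With point_quad_rasterization enabled, a driver should feed the
     * sprite coordinate to the fragment input declared GENERIC[0]. */
    static int
    input_is_pointcoord(const struct rasterizer_state *rast,
                        const struct fs_input *in)
    {
       return rast->point_quad_rasterization &&
              in->semantic_name == TGSI_SEMANTIC_GENERIC &&
              in->semantic_index == 0;
    }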
|
|
Before, we were discarding the compiled vertex program on each clip
plane state change.
Now we compile the program as if all 6 clip planes were enabled and
dynamically patch in an "end program" bit at the right place.
Also, nv30 should now work.
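A hedged sketch of the patching scheme, assuming each clip plane costs
a fixed number of instructions appended to the program and that the
hardware stops at an instruction carrying an "end" flag (all names and
the word layout are hypothetical):

    #include <stdint.h>

    /* Hypothetical instruction word layout with an end flag. */
    struct vp_insn { uint32_t words[4]; };
    #define VP_INSN_END_BIT 0x1u

    /* Compiled once as if all 6 clip planes were active; enabling
     * fewer planes just moves the end bit earlier, with no recompile.
     * insns must hold body_len + 6 * insns_per_plane entries and
     * body_len must be at least 1. */
    static void
    patch_end_bit(struct vp_insn *insns, unsigned body_len,
                  unsigned insns_per_plane, unsigned nr_enabled_planes)
    {
       unsigned total = body_len + 6 * insns_per_plane;
       unsigned end = body_len + nr_enabled_planes * insns_per_plane;
       unsigned i;

       /* Clear any stale end bit left by a previous patch. */
       for (i = 0; i < total; i++)
          insns[i].words[3] &= ~VP_INSN_END_BIT;
       /* Mark the last instruction that should actually execute. */
       insns[end - 1].words[3] |= VP_INSN_END_BIT;
    }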
|
|
The current code reuses the same VBO over and over; after a flush,
however, we'd stall waiting to map the VBO when we should just fire
and forget.
On a gears test this takes me from ~620 to ~750 FPS on my RV530 in
swtcl mode.
Signed-off-by: Dave Airlie <airlied@redhat.com>
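A sketch of the fire-and-forget idea, with entirely hypothetical
winsys helpers: rather than mapping a still-in-flight VBO (which
stalls), orphan it after a flush and start filling a fresh one:

    /* Hypothetical buffer handle and winsys helpers. */
    struct ws_buffer;
    extern struct ws_buffer *ws_buffer_create(unsigned size);
    extern void ws_buffer_unreference(struct ws_buffer **buf);
    extern int ws_buffer_is_busy(const struct ws_buffer *buf);

    /* After a flush the old VBO may still be referenced by the GPU.
     * Instead of stalling in the subsequent map, drop our reference
     * and allocate a new buffer; the old one is freed once the GPU is
     * done with it. */
    static struct ws_buffer *
    get_writable_vbo(struct ws_buffer *vbo, unsigned size)
    {
       if (vbo && ws_buffer_is_busy(vbo)) {
          ws_buffer_unreference(&vbo);   /* fire and forget */
          vbo = NULL;
       }
       if (!vbo)
          vbo = ws_buffer_create(size);
       return vbo;
    }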
|
|
Signed-off-by: Brian Paul <brianp@vmware.com>
|
|
|
|
Now we lie less when claiming OpenGL 2 support.
Also, the first piglit result group is now all green, except for
fdo25614-genmipmap, which seems to be mesa/st's fault.
|
|
|
|
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
|
|
Include draw_context.h for the draw_*_vertex_shader symbols.
This fixes the following GCC warnings:
nvfx_vertprog.c: In function 'nvfx_vp_state_create':
nvfx_vertprog.c:1276: warning: implicit declaration of function 'draw_create_vertex_shader'
nvfx_vertprog.c:1276: warning: assignment makes pointer from integer without a cast
nvfx_vertprog.c: In function 'nvfx_vp_state_delete':
nvfx_vertprog.c:1298: warning: implicit declaration of function 'draw_delete_vertex_shader'
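The fix itself is a one-line include; in the Gallium tree the
draw_*_vertex_shader prototypes live in the auxiliary draw module:

    #include "draw/draw_context.h"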
|
|
Changed by me to use movd instead of movss to avoid penalties.
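The penalty being avoided: between XMM registers, movss writes only
the low 32 bits and so carries a dependency on the destination's
previous value, while movd zero-extends and breaks that dependency.
A hedged C intrinsics sketch of the two choices (the surrounding
usage is hypothetical):

    #include <emmintrin.h>  /* SSE2 */

    /* movd: loads a 32-bit GPR value into the low lane and zeroes the
     * rest, so there is no dependency on the old register contents. */
    static __m128i
    load_u32_movd(unsigned v)
    {
       return _mm_cvtsi32_si128((int)v);
    }

    /* movss between registers merges into the destination, keeping
     * its upper lanes - a partial-register dependency that can cost
     * cycles. */
    static __m128
    merge_low_movss(__m128 dst, __m128 src)
    {
       return _mm_move_ss(dst, src);
    }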
|
|
|
|
|
|
|
|
Before a depth buffer can be used as a texture, it needs to be
decompressed (the depth buffer's tiling pattern differs from the one
used for color buffers and textures).
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
|
|
Introduce a new configuration option, XMESA_STRICT_INVALIDATE, to
switch between swapbuffers-based and glViewport-based buffer
invalidation.
Strict invalidate defaults to false, i.e. glViewport-based
invalidation, aka ST_MANAGER_BROKEN_INVALIDATE.
This means we will not call XGetGeometry after every swapbuffers,
which allows swapbuffers to remain asynchronous. For apps running at
100fps with synchronous swapping, a 10% boost is typical. For gears,
I see closer to a 20% speedup.
Note that the work of copying data on swapbuffers doesn't disappear -
this change just allows the X server to execute the PutImage
asynchronously instead of effectively blocking us until it completes.
This applies even to llvmpipe's threaded rasterization, as the
swapbuffers operation was a large part of the serial component of an
llvmpipe frame.
The downside is correctness: applications that don't call glViewport
on window resizes will get incorrect rendering unless
XMESA_STRICT_INVALIDATE is set.
The ultimate solution would be per-frame but asynchronous
invalidation. XCB almost looks as if it could provide this, but the
API doesn't quite seem to be there.
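A sketch of how such an option is typically wired up in Mesa, using
the real debug_get_bool_option helper from util/u_debug.h (the
wrapper function here is illustrative):

    #include "util/u_debug.h"

    /* Defaults to FALSE: glViewport-based invalidation, so
     * swapbuffers can stay asynchronous. Set
     * XMESA_STRICT_INVALIDATE=true to get the old
     * XGetGeometry-after-swap behaviour back. */
    static boolean
    get_strict_invalidate(void)
    {
       return debug_get_bool_option("XMESA_STRICT_INVALIDATE", FALSE);
    }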
|
|
Don't spend as much time comparing them.
|
|
|
|
If bins outside the current scene bounds are being corrupted, we'll
need to fix that separately. Currently seems ok though.
|
|
|
|
With this change, xmesa_get_window_size still does one round trip, but
that's better than doing two.
|
|
Try to eliminate some unnecessary X server round trips.
|
|
This version should hopefully be much clearer and thus less likely
to be subtly broken.
It also fixes point sprites on nv40, and possibly some other bugs.
|
|
|
|
|
|
|
|
There may actually be something mapped in that range, especially for
large buffers such as the GL drawable.
|
|
How to cope with NULL surface relocations should be left entirely to
the winsys's discretion.
|
|
Include st_api.h for st_api_create_OpenGL symbol.
|
|
The variable is used, but only in the body of an assert.
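The usual fix, sketched with hypothetical names: keep the assert but
reference the variable unconditionally, so release builds (where
assert expands to nothing) don't warn:

    #include <assert.h>

    static void
    check_count(int count)
    {
       int doubled = count * 2;  /* hypothetical assert-only variable */
       assert(doubled >= count);
       (void)doubled;  /* no-op reference for NDEBUG builds */
    }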
|
|
This fixes a typo in the earlier commit 0f3b3751b8643352dcc242567b3696bd1505df1d.
|
|
dump_cpu is used only when DEBUG is defined.
Fixes the following GCC warning on builds without DEBUG defined.
util/u_cpu_detect.c:76: warning: 'debug_get_option_dump_cpu' defined but not used
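A sketch of the standard fix: compile the helper only when DEBUG is
defined, mirroring its only use site (the body here is illustrative;
the real code uses Mesa's debug-option macros):

    #include <stdlib.h>

    #ifdef DEBUG
    /* Only referenced from debug builds, so only define it there;
     * this removes the "defined but not used" warning elsewhere. */
    static int
    debug_get_option_dump_cpu(void)
    {
       const char *v = getenv("GALLIUM_DUMP_CPU");
       return v && v[0] == '1';
    }
    #endif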
|
|
Initialize variables on error paths.
|