When we have DRI2 protocol 2.3 or later, we get an event from the
server when the back buffers get invalidated. When that's the case,
let the driver know that it can rely on invalidate events instead of
the glViewport polling.
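A minimal sketch of the loader-side check (the version fields and flag
name below are illustrative, not the actual Mesa code):

  /* Once connected, see which DRI2 protocol version the server speaks. */
  if (dri2_major > 2 || (dri2_major == 2 && dri2_minor >= 3)) {
     /* The server will send invalidate events for the back buffers,
      * so the driver need not treat every glViewport() call as a
      * potential buffer change. */
     screen->server_supports_invalidate = 1;
  }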
|
Now that transfers are context operations it is the driver's
responsibility to ensure that transfers happen in order with all other
context operations, so flushes and finishes inside Mesa should no
longer be necessary. The attached patch implements that.
This should provide significant improvements for hardware drivers
which are able to stream transfers in the command buffers.
You can use softpipe/llvmpipe_flush_resource() as a reference
implementation of the worst case scenario, where the driver is not
able to stream transfers. But the expectation is that driver
implementers will want to avoid flushing as much as possible.
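As a rough illustration of the worst case (names and signatures below
are hypothetical; see the actual softpipe/llvmpipe code for details):
before a transfer is mapped, the driver drains any queued commands
that still reference the resource:

  static void
  example_flush_resource(struct pipe_context *pipe,
                         struct pipe_resource *resource)
  {
     /* Worst case: we cannot stream the transfer, so if queued
      * commands still reference this resource, submit and wait for
      * them before the CPU touches the memory. */
     if (example_is_resource_referenced(pipe, resource))
        example_flush_and_wait(pipe);
  }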
|
for consistency
|
From http://www.opengl.org/registry/api/glext.h
GLAPI void APIENTRY glBindTransformFeedback (GLenum target, GLuint id);
Fixes MinGW build.
|
Update Makefile rules for st/egl.
|
Includes GL_ARB_transform_feedback2, which encapsulates transform
feedback state in objects.
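For reference, typical usage of the new object API (entry points as in
the ARB_transform_feedback2 spec; buffer setup omitted):

  GLuint xfb;
  glGenTransformFeedbacks(1, &xfb);
  glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, xfb);
  glBeginTransformFeedback(GL_POINTS);
  /* ... draws whose outputs are captured ... */
  glPauseTransformFeedback();    /* capture state saved in the object */
  /* ... unrelated rendering ... */
  glResumeTransformFeedback();
  glEndTransformFeedback();
  /* Replay the captured vertices without a query round trip. */
  glDrawTransformFeedback(GL_POINTS, xfb);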
|
This enables the GL_EXT_draw_buffers2 extension.
Tested with progs/tests/drawbuffers2.
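For reference, the per-render-target control the extension adds (entry
points as in the EXT_draw_buffers2 spec):

  /* Blend only into draw buffer 1; leave draw buffer 0 opaque. */
  glDisableIndexedEXT(GL_BLEND, 0);
  glEnableIndexedEXT(GL_BLEND, 1);

  /* Write only the red and alpha channels of draw buffer 0. */
  glColorMaskIndexedEXT(0, GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);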
|
This was missed in commit c4775a27e3aaa2006b98f225387499b79bc609ef.
Fixes i915 build.
|
Now that intel_flush() doesn't use the needs_mi_flush argument, we can
finally drop one of the two flush functions.
|
Without this patch, any old intel_flush() call will cause a round trip
to the server and do a copy from fake to real front. We only actually
guarantee that frontbuffer results show up when glFlush() is called, so
move the flushing to intel_glFlush().
We also need to flush fake to front before getting new buffers, but
we just handle that manually.
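A sketch of the resulting shape (field and helper names here are
illustrative, not the exact driver code):

  static void
  intel_glFlush(struct gl_context *ctx)
  {
     struct intel_context *intel = intel_context(ctx);

     intel_flush(ctx);              /* submit the pending batchbuffer */
     if (intel->front_buffer_dirty)
        intel_flush_front(ctx);     /* copy fake front to real front */
  }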
|
When we call intel_prepare_render() from intelReadPixels(), we'll mark
the front buffer dirty. That's silly, since we're only reading from it
and marking it dirty will cause us to copy from fake front to front
eventually.
Just clear the dirty flag after doing the read.
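Roughly (the dirty flag name is illustrative):

  intel_prepare_render(intel);       /* may mark front buffer dirty */
  /* ... perform the actual pixel read ... */

  /* Reading doesn't modify the front buffer, so undo the dirty
   * marking; otherwise we'd later copy fake front to front for
   * nothing. */
  intel->front_buffer_dirty = GL_FALSE;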
|
Now that we have intel_prepare_render() in place, we can use it to mark
the front buffer dirty if we're rendering to the front buffer once we
get there.
|
We only need this when the server may have swapped the buffers or
when we receive an invalidate event from the server. The default
behaviour is still that the DRI driver will invalidate its own buffers
when glViewport is called.
https://bugs.freedesktop.org/show_bug.cgi?id=27277
|
Seeing very weird crashes during std::cout initialization.
The fault probably lies in the way I build LLVM on MSVC, but disable
this for now to allow more time to investigate.
|
So taking the function address does not work.
|
NOTE: This is a candidate for the 7.8 stable branch
|
When internal_format and tex->format differ, st_finalize_texture will
surface_copy between surfaces with different formats. This commit works
around the issue by ignoring internal_format. A sane fix is needed
here.
|
In particular, don't use the clamped lod to compute level + 1, or an
lod in the [-1, 0] range will actually interpolate with level 1.
This makes Mipfilter DCT pass 100%.
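In code terms, the idea is roughly this (scalar sketch with a
Mesa-style CLAMP macro; the actual code operates on SIMD vectors):

  /* Derive both mip levels from the unclamped lod, then clamp each
   * level separately. For lod in [-1, 0], level0 = -1 and level1 = 0
   * both clamp to 0, so level 1 gets no weight. Clamping lod first
   * would give level1 = 1 and wrongly blend it in. */
  int level0 = (int)floorf(lod);
  int level1 = level0 + 1;
  float weight = lod - floorf(lod);
  level0 = CLAMP(level0, 0, max_level);
  level1 = CLAMP(level1, 0, max_level);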
|
Also start axing the code duplication for the scalar case. The solution
is to treat the scalar case specially in a few innermost functions, and
leave outer functions untouched.
|
It's quite a pain to remember the details after a while, and it is quite
likely we'll want to use this again, either for different polynomial
orders or different functions, so commit it here.
|
The advantage of the [-0.5, 0.5] range is that it doesn't require floor
(for which intrinsics are only available in SSE4.1).
But the EXP opcode pretty much forces us to use floor, and there is a
good floor approximation around truncation available anyway.
This fixes EXP failures in VShader DCT.
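The truncation-based floor approximation referred to is, in scalar form
(the actual code uses the SIMD equivalents, e.g. cvttps2dq on SSE2):

  /* Truncation rounds toward zero, so for negative non-integral
   * inputs it lands one above the floor; correct for that. */
  static inline float
  floor_via_trunc(float x)
  {
     float t = (float)(int)x;
     return t > x ? t - 1.0f : t;
  }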
|