May 27, 2016 - Derek Foreman
Wayland: Now Accepting Damage in Buffer Co-ordinates
Fill buffer. Attach buffer to surface. Damage surface. Commit surface.
Ok, now you’ve got a picture on screen, that was easy, right? This is the (somewhat simplified) sequence of requests that’s required to show an image on screen with Wayland.
A buffer is what your client draws into. A surface is what the compositor displays your buffer on. The commit operation tells the compositor it’s time to atomically perform all the surface operations you’ve been sending it. Any surface requests prior to the commit have been buffered, and could have been requested in any order (the compositor performs them in the appropriate order during the commit).
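The sequence above maps directly onto a handful of libwayland-client calls. Here's a minimal sketch, assuming the client has already created a `wl_surface` and drawn its pixels into a `wl_buffer` (e.g. via `wl_shm`); this is illustrative, not a complete client:

```c
/* Sketch of the attach/damage/commit sequence, using libwayland-client.
 * `surface` and `buffer` are assumed to exist already (e.g. the buffer
 * comes from a wl_shm pool the client has just finished drawing into). */
#include <wayland-client.h>

static void present(struct wl_surface *surface, struct wl_buffer *buffer,
                    int32_t width, int32_t height)
{
	/* Attach the freshly drawn buffer to the surface... */
	wl_surface_attach(surface, buffer, 0, 0);
	/* ...report which region changed (here: everything)... */
	wl_surface_damage(surface, 0, 0, width, height);
	/* ...then atomically apply all pending surface state. */
	wl_surface_commit(surface);
}
```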
How Wayland Uses Damage to Redraw Surfaces
Damage is today’s area of focus: it’s how you tell the compositor where a surface has changed since the last commit. While rendering into its buffer, the client should build a list of rectangles covering the changed areas to pass on to the compositor.
This is mostly just an optimization; in any event, your buffer must contain a completely up-to-date scene when it’s committed. The compositor could choose to ignore the damage report and re-draw the whole thing, or it could render the exact regions it was told were damaged.
Historically, Wayland applications were required to post damage in surface coordinates, not buffer coordinates. For a large number of applications these two coordinate systems are the same.
However, there are some interesting things we can do with Wayland that make surface coordinates and buffer coordinates different:
- It’s possible to add a scale factor to a buffer; this is how Wayland deals with high DPI displays.
- It’s possible to transform a buffer in 90 degree increments or flip it; this is useful when the screen is rotated. The application can render in the same orientation as the display, which might allow its buffer to be dropped into a hardware plane or scanned out directly if it’s running fullscreen.
- It’s possible to use a viewport to display a subset of a buffer on a surface.
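All three of these are plain surface requests. A sketch of what they look like from the client side, assuming a `wp_viewport` created from the viewporter protocol's `wp_viewporter` global (names taken from the wayland-scanner-generated headers):

```c
/* Sketch of the three ways buffer and surface coordinates can diverge. */
#include <wayland-client.h>
#include "viewporter-client-protocol.h" /* generated by wayland-scanner */

static void configure(struct wl_surface *surface, struct wp_viewport *viewport)
{
	/* 1. High DPI: at scale 2 the buffer is twice the surface size. */
	wl_surface_set_buffer_scale(surface, 2);

	/* 2. Rotation: the buffer content is pre-rotated 90 degrees. */
	wl_surface_set_buffer_transform(surface, WL_OUTPUT_TRANSFORM_90);

	/* 3. Viewport: show a 256x256 sub-rectangle of the buffer, scaled
	 * to a 512x512 surface.  set_source takes 24.8 fixed point. */
	wp_viewport_set_source(viewport,
			       wl_fixed_from_int(0), wl_fixed_from_int(0),
			       wl_fixed_from_int(256), wl_fixed_from_int(256));
	wp_viewport_set_destination(viewport, 512, 512);

	wl_surface_commit(surface);
}
```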
To use any of this stuff, it’s necessary to make sure the damage rectangles are properly transformed to match the surface. This likely means transforming them from buffer to surface coordinates before passing them along.
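For the simplest of these cases, an integer buffer scale, the conversion looks like this; `buffer_to_surface` is a hypothetical helper, and the key detail is rounding the rectangle outward so no damaged pixel is lost:

```c
/* Convert a damage rectangle from buffer to surface coordinates for
 * the simple case of an integer buffer scale (no transform/viewport).
 * Edges are rounded outward so no damaged pixel is ever missed. */
#include <stdint.h>

struct rect { int32_t x, y, w, h; };

static struct rect buffer_to_surface(struct rect b, int32_t scale)
{
	struct rect s;
	s.x = b.x / scale;                          /* round near edge down */
	s.y = b.y / scale;
	s.w = (b.x + b.w + scale - 1) / scale - s.x; /* round far edge up */
	s.h = (b.y + b.h + scale - 1) / scale - s.y;
	return s;
}
```

For example, at scale 2 a buffer rectangle at (3, 3) sized 5×5 becomes the surface rectangle at (1, 1) sized 3×3 — slightly larger than the exact mapping, never smaller.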
The compositor uses the surface coordinates to determine what part of your desktop needs to be re-rendered. If rendering with GL, the compositor also transforms these surface coordinates back into buffer coordinates so it can update a texture from the right part of the buffer.
It might seem like we’re going out of our way to do work the compositor will have to undo anyway, since it needs coordinates in both buffer and surface space. If clients could simply pass damage in buffer coordinates, some complexity could be removed from the client. Either way, the compositor still has to convert the damage rectangles into whichever coordinate space it wasn’t given.
How Tall is Your Surface?
There are places where using surface coordinates is more than just a minor annoyance, though. Some clever optimizations are ruined for us when using EGL buffers. The function eglSwapBuffersWithDamage() allows applications to tell the compositor that only some of the GL scene has changed. It takes a list of rectangles that specifies the damaged areas using the standard GL convention that 0, 0 is the bottom left corner of the buffer. Wayland treats the *top* left corner as the origin. That means the GL implementation must convert from a bottom-left-origin to a top-left-origin coordinate system internally before passing damage on to the compositor. This is trivial math: it’s just subtracting the Y coordinate from the surface height.
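That subtraction, spelled out (the helper name here is just for illustration):

```c
/* Flip a damage rectangle's Y coordinate from GL's bottom-left origin
 * to Wayland's top-left origin.  Only the surface height is needed. */
#include <stdint.h>

static int32_t gl_y_to_wayland_y(int32_t gl_y, int32_t rect_height,
				 int32_t surface_height)
{
	/* The GL rect's top edge sits at gl_y + rect_height above the
	 * bottom of the surface; measured from the top instead, that
	 * same edge is at surface_height - (gl_y + rect_height). */
	return surface_height - (gl_y + rect_height);
}
```

So a 10-pixel-tall rectangle at the GL origin of a 100-pixel-tall surface lands at Y = 90 in Wayland coordinates.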
The catch? EGL doesn’t know the surface height at all; it only knows its buffer height. So, when using scales, viewports, or transforms with a buffer height that isn’t the same as the surface height, it’s not possible to post damage accurately with eglSwapBuffersWithDamage(). In fact, it’s even worse than that: the EGL implementation doesn’t know whether you’re using these advanced features or not. To ensure no damage is missed, EGL has to respond to eglSwapBuffersWithDamage() by posting the maximum possible damage: 0, 0 – INT32_MAX, INT32_MAX. This is exactly the same thing it does for eglSwapBuffers(), without going through the effort of providing damage lists.
We’ve finally fixed this though, by adding a new surface request: wl_surface.damage_buffer. This takes damage in buffer coordinates instead of surface coordinates. Don’t worry, wl_surface.damage is still around and there’s no expectation that old code be changed to use the new interface! You can even mix damage and damage_buffer requests on the same surface, though I can’t possibly imagine why you would.
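From a client's point of view the new request is a drop-in sibling of wl_surface.damage; a sketch, assuming the client has bound wl_compositor at a high enough version to get a version 4 wl_surface:

```c
/* Sketch: posting damage in buffer coordinates with the new request.
 * The damage is given in buffer pixels; the compositor applies the
 * surface's scale/transform/viewport itself. */
#include <wayland-client.h>

static void post_buffer_damage(struct wl_surface *surface,
			       struct wl_buffer *buffer)
{
	wl_surface_attach(surface, buffer, 0, 0);
	wl_surface_damage_buffer(surface, 0, 0, 256, 256);
	wl_surface_commit(surface);
}
```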
Still, even with applications able to post damage in the more convenient buffer coordinates, one problem remained that blocked eglSwapBuffersWithDamage() from working as intended – Mesa had no way to know whether the wl_surface object it has on hand is new enough to support wl_surface.damage_buffer.
The solution to that comes in the form of a new function called wl_proxy_get_version(). From a client’s perspective Wayland objects are proxies to objects inside the compositor – this new function lets clients check what protocol version their proxy object conforms to. Mesa can now determine when a wl_surface proxy is new enough to support wl_surface.damage_buffer, and use it appropriately instead of posting full surface damage.
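In code, the check looks something like this sketch (the since-version macro comes from the generated wl_surface protocol header):

```c
/* Sketch: a wl_surface is a wl_proxy under the hood, so
 * wl_proxy_get_version() tells us whether damage_buffer (added in
 * wl_surface version 4) is available on this surface. */
#include <stdint.h>
#include <wayland-client.h>

static void post_damage(struct wl_surface *surface,
			int32_t x, int32_t y, int32_t w, int32_t h)
{
	uint32_t version = wl_proxy_get_version((struct wl_proxy *)surface);

	if (version >= WL_SURFACE_DAMAGE_BUFFER_SINCE_VERSION)
		wl_surface_damage_buffer(surface, x, y, w, h);
	else	/* fall back to maximal surface damage, as Mesa did */
		wl_surface_damage(surface, 0, 0, INT32_MAX, INT32_MAX);
}
```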
So what does all this mean in the real world? Well, GL-rendered EFL applications have always passed damage via eglSwapBuffersWithDamage(), so if your system libraries are new enough to support these features, these applications just magically become a little more efficient.
About Derek Foreman
Derek Foreman is a Senior Open Source Developer with Samsung's Open Source Group, specializing in graphics work. Previously, he worked on the graphics team at an open source consultancy where his work primarily focused on hardware enablement and software optimization for embedded systems. His career started at a biomedical institute where he developed analysis and control software for medical imaging equipment.
Image Credits: Kristian Høgsberg