Screen management in Wayland

One of the bigger things in the works in Plasma’s Wayland support is screen management. In most cases, that is reasonably easy: there’s one screen, and it has a certain resolution and refresh rate set. For mobile devices, this is almost always good enough. Only once we start thinking about convergence and using the same codebase on different devices do we need to be able to configure the screens used for rendering. Especially on desktops and laptops, where we often find multi-monitor setups or connected projectors, the user should be able to decide a bunch of things: the relative position of the screens, the resolution (“mode”) for each, and so on. Another thing that we haven’t touched yet is per-display scaling of the rendering, which becomes increasingly important as a wider range of displays gets connected; just imagine a 4K laptop running north of 300 pixels per inch (PPI) connected to a projector that throws 1024×768 pixels onto a 4×3 m wall.

The Wayland protocol currently does not provide a mechanism for setting up screens, nor does it tell us about displays that are not used for rendering, either because they’re disabled or because they have just been connected but not (yet) enabled automatically. For most applications that doesn’t matter: they’re fine with knowing about the rendering screens and some details about those, which is provided by the wl_output interface. For screen management, this interface is insufficient, though, since it lacks a few things: EDID information, enabled/disabled flags, and ways to set the mode, scaling, rotation and position. This makes clearly identifying displays and setting them up harder than necessary, and thus error-prone.
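
To make that concrete, here is roughly all a client learns about a screen today, through the core protocol’s wl_output events (a minimal sketch in plain libwayland, compilable as C or C++; the handler names are mine):

    #include <wayland-client.h>

    // wl_output only reports geometry, modes, a done marker and a scale
    // factor. There is no EDID, no enabled/disabled state, and no request
    // to change any of it.
    static void handleGeometry(void *data, struct wl_output *output,
                               int32_t x, int32_t y,
                               int32_t physicalWidth, int32_t physicalHeight,
                               int32_t subpixel,
                               const char *make, const char *model,
                               int32_t transform)
    {
        // Position in the compositor's global space, physical size in mm.
    }

    static void handleMode(void *data, struct wl_output *output, uint32_t flags,
                           int32_t width, int32_t height, int32_t refresh)
    {
        // One event per mode; WL_OUTPUT_MODE_CURRENT marks the active one.
    }

    static void handleDone(void *data, struct wl_output *output) {}
    static void handleScale(void *data, struct wl_output *output, int32_t factor) {}

    static const struct wl_output_listener outputListener = {
        handleGeometry, handleMode, handleDone, handleScale,
    };

Let’s look at the background first, however.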

Setting up X11

On the hardware side, this has been a complete mess in the past. One problem is X11’s asynchronous nature. The XRandR extension that is used for this basically works by throwing a bunch of calls at the X server (“use this mode”, “position display there”) and then seeing what sticks to the wall. The problem is that we never really know what happened; there’s no well-defined “OK, this works” result, and we also don’t know when the whole procedure is done. The result is a flicker-fest and the desktop trying to catch up with whatever X11 made of the xrandr calls. It can also be an unpleasant experience when a display gets connected: it is used for rendering, then the shell finds out about it and expands the desktop onto it, and then everything is resized again because there’s a predefined configuration for this setup. These kinds of race conditions are very hard to fix due to the number of components involved in the process and the lack of proper notification semantics around it.

X11 has the nasty habit of interacting with the hardware directly, rather than through well-defined and modern kernel interfaces. On the kernel side, this has been fixed: we now have atomic mode setting, which allows us to check whether a set of changes can be applied (through the DRM_MODE_ATOMIC_TEST_ONLY flag) and then apply them all at once, instead of in portions that may fail half-way, lock the user out, or simply be invalid in combination with each other.
For the user, getting this right across the whole stack means quicker reconfiguration of the hardware and only minimal flickering when switching screen setups. We won’t be able to completely prevent the flickering on most displays, as that is simply how the hardware works, but we will be able to make it a lot less jarring. With the compositor now being the one that calls into the DRM subsystem from userspace, we can coordinate these things well with visual effects, so we’ll be able to make the user experience while reconfiguring displays a bit smoother as well.
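
In code, the test-then-commit pattern of libdrm’s atomic API looks roughly like this (a hand-wavy sketch: it assumes the object and property IDs have already been looked up, and that DRM_CLIENT_CAP_ATOMIC has been enabled on the device):

    #include <xf86drm.h>
    #include <xf86drmMode.h>

    // Sketch: test a new CRTC configuration before applying it. fd is an
    // open DRM device on which drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1)
    // has been called; the IDs of the CRTC's MODE_ID and ACTIVE properties
    // are assumed to have been looked up beforehand.
    bool testAndApply(int fd, uint32_t crtcId,
                      uint32_t modePropId, uint32_t modeBlobId,
                      uint32_t activePropId)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        drmModeAtomicAddProperty(req, crtcId, modePropId, modeBlobId);
        drmModeAtomicAddProperty(req, crtcId, activePropId, 1);

        // Dry run: the kernel validates the whole request without touching
        // the hardware.
        if (drmModeAtomicCommit(fd, req,
                                DRM_MODE_ATOMIC_TEST_ONLY |
                                DRM_MODE_ATOMIC_ALLOW_MODESET,
                                nullptr) != 0) {
            drmModeAtomicFree(req);
            return false; // invalid in combination; nothing was changed
        }

        // The very same request is now known to be valid; apply it in one go.
        const int ret = drmModeAtomicCommit(fd, req,
                                            DRM_MODE_ATOMIC_ALLOW_MODESET,
                                            nullptr);
        drmModeAtomicFree(req);
        return ret == 0;
    }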

Atomic mode setting, DRM and kernel

From the kernel side, this needed quite radical changes, which have now landed throughout the DRM subsystem. The result is a kernel interface and helper library that allow interacting with the kernel using semantics that give tighter control over the process, better error prevention and handling, and more modern power management. Switching off the screen can now be done from the compositor, for example; this allows us to fix those cases where the display is black but still has its backlight on, or where the display is off but still used for rendering (in which case you get huge blind spots in your user interface).
Daniel Vetter’s two-part article (part 1, part 2) provides an excellent overview of the history, present and future of atomic mode setting on the kernel side. The point for us is that a reasonably recent Linux kernel with working DRM drivers now provides all that we need to fix this problem in userspace. X11 is still in the way of a completely smooth solution, though.

Screen setup in Plasma

In Plasma, the screens can be set up using the Display Configuration module in System Settings. This module is internally called “KScreen”. KScreen provides a visual interface to position displays, set their resolution, and so on. It’s backed by a daemon that can apply a configuration on login – useful stuff, but ultimately bound by the limits of the underlying software stack (X11, kernel, drivers, etc.).

KScreen is backed by libkscreen, a library that we ship with Plasma. libkscreen offers an API to list displays and their properties, including disabled displays. It is driven by backends that run out-of-process; the most commonly used one is the “xrandr” backend, which talks to the X server through the XRandR extension. libkscreen has other backends as well, notably a read-only QScreen backend and a “fake” backend used for unit tests. A native Wayland backend is a work in progress (you can find it in the libkscreen[sebas/wayland] branch).
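
As a rough illustration of what the API looks like (a sketch against the libkscreen of the Plasma 5 era; the header and method names are from memory and may differ in detail), listing the current outputs goes something like this:

    #include <QCoreApplication>
    #include <QDebug>
    #include <kscreen/getconfigoperation.h>
    #include <kscreen/config.h>
    #include <kscreen/output.h>

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);

        // Asynchronously fetch the current configuration from the backend.
        auto *op = new KScreen::GetConfigOperation();
        QObject::connect(op, &KScreen::GetConfigOperation::finished,
                         [](KScreen::ConfigOperation *op) {
            const KScreen::ConfigPtr config =
                qobject_cast<KScreen::GetConfigOperation *>(op)->config();
            // Unlike wl_output, disabled outputs are listed as well.
            for (const KScreen::OutputPtr &output : config->outputs()) {
                qDebug() << output->name()
                         << "connected:" << output->isConnected()
                         << "enabled:" << output->isEnabled();
            }
            QCoreApplication::quit();
        });
        return app.exec();
    }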

libkscreen has been developed for the purpose of screen configuration, but we have also started using it in the Plasma shell. QScreen, the natural starting point for this, was not up to the task yet, as it was missing some functionality. In Qt 5.6, Aleix Pol has now landed the necessary missing pieces, so we can move the Plasma shell back onto QScreen entirely. On X11, QScreen is backed by the XCB Qt platform plugin (QPA). One problem in Plasma has been that we were notified of changes through different code paths, which made it hard to set up the desktop, position panels, etc. In a Wayland session, this has to happen in a much more coordinated way, with clearly defined semantics for when the screen setup changes, and with as few of those changes as necessary.
KScreen should concentrate on doing what it’s good at: screen configuration. For X11, KScreen uses its xrandr backend; no changes there. In Plasma shell’s startup, we will be able to remove libkscreen and rely purely on QScreen as soon as we can depend on Qt 5.6, which probably puts us into the time-frame of Q2 next year. For read-only access on Wayland, we can use libkscreen’s QScreen backend for now; it comes with some limitations around multi-screen use, but these will be ironed out by spring next year. The QScreen backend is actually what is used to start Plasma Mobile’s shell on kwin_wayland. For configuration, QScreen is not an option, however: that is simply not its purpose, and it shouldn’t be.
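
QScreen itself is plain Qt API; the read-only shell side boils down to something like this (a generic Qt sketch, not Plasma’s actual startup code):

    #include <QGuiApplication>
    #include <QScreen>
    #include <QDebug>

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);

        // Enumerate the screens the platform plugin (XCB, Wayland, ...) reports.
        for (const QScreen *screen : QGuiApplication::screens()) {
            qDebug() << screen->name() << screen->geometry()
                     << screen->refreshRate() << screen->devicePixelRatio();
        }

        // React to screens coming and going, e.g. to create or tear down
        // desktop views and reposition panels.
        QObject::connect(&app, &QGuiApplication::screenAdded,
                         [](QScreen *screen) {
            qDebug() << "screen added:" << screen->name();
        });
        QObject::connect(&app, &QGuiApplication::screenRemoved,
                         [](QScreen *screen) {
            qDebug() << "screen removed:" << screen->name();
        });
        return app.exec();
    }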

In the Wayland protocol itself, there are no such semantics yet. Screen configuration has so far been outside the scope of the agreed-upon Wayland protocols. So if we don’t run on top of an X server, who does the actual hardware setup? Our answer is: KWin, the compositor.

KWin

KWin plays a more central role in a Wayland world. For rendering and compositing of windows, it interacts with the hardware. Since it already initializes the hardware when it starts a Wayland server, it makes a lot of sense to put screen configuration there as well. This means that we configure KWin at runtime through an interface designed around the semantics of atomic mode setting, and KWin picks a suitable configuration for the connected displays. KWin saves the configuration, applies it on startup or when a display gets switched off, connected or disconnected, and only then tells the workspace shell and the apps to use it. This design makes a lot of sense, since it is KWin that ultimately knows all the constraints related to dynamic display configuration, and it can orchestrate how the hardware is used and how changes to it are presented to the applications and the workspace.

KWayland and unit testing

Much of KWin/Wayland’s functionality is implemented in a library called KWayland. KWayland wraps the Wayland protocol in a Qt-style API for Wayland clients and servers, and offers threaded connections and type-safety on top of the basic C implementation of libwayland.
KWayland allows running Wayland servers, or just specific parts of one, with very little code. The KWayland server classes allow us to test a great deal of the functionality in unit tests, since we can run them against a “live” Wayland server. Naturally, this is used a lot in KWayland’s own autotests. In the libkscreen Wayland backend’s tests, we load different configuration scenarios from JSON definitions, so we not only test whether the library works in principle, but really test against live servers, covering a much larger part of the stack. This helps a lot to make sure that the code works in the first place, but also helps us catch problems as soon as they arise. The good unit test coverage also allows much swifter development, as a bonus.
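
Spinning up such a test server is short enough to show here (a sketch against the KWayland server API of the time; method names such as setSocketName() and createOutputDevice(), and the Mode struct, are from memory and may differ in detail):

    #include <QCoreApplication>
    #include <KWayland/Server/display.h>
    #include <KWayland/Server/outputdevice_interface.h>

    using namespace KWayland::Server;

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);

        // A minimal "live" Wayland server that tests can connect to.
        Display display;
        display.setSocketName(QStringLiteral("kwayland-test-0"));
        display.start();

        // Announce one fake output device, as a libkscreen test scenario would.
        OutputDeviceInterface *device = display.createOutputDevice(&display);
        device->setManufacturer(QStringLiteral("ACME"));
        device->setModel(QStringLiteral("Test Display"));
        device->setPhysicalSize(QSize(520, 290)); // millimeters

        // One mode, flagged as current (refresh rate in mHz).
        OutputDeviceInterface::Mode mode;
        mode.id = 0;
        mode.size = QSize(1920, 1080);
        mode.refreshRate = 60000;
        mode.flags = OutputDeviceInterface::ModeFlags(OutputDeviceInterface::ModeFlag::Current);
        device->addMode(mode);

        device->create();
        return app.exec();
    }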

Output management wayland interface design

The output management wayland protocol that we have implemented provides two things:

  • It lists the connected output hardware and all of its properties: EDID, modes, physical size, and runtime information such as the currently used mode and whether the output device is currently enabled for rendering
  • It provides an interface to change settings such as mode, rotation, scaling and position for that hardware, and to apply them atomically

This works as follows (a client-side code sketch of these steps follows the list):

  1. The server announces the global OutputManagement interface and a list of OutputDevices
  2. The configuration client (e.g. the Display Settings) requests the list of output devices and uses them to show the screen setup visually
  3. The user changes some settings and hits “apply”, the client requests an OutputConfiguration object from the OutputManagement global
  4. The configuration object is created on the server specifically for the client; it’s not exposed in the server API at this point
  5. The client receives the config object and calls setters with new settings for position, mode, rotation, etc.
  6. The server buffers these changes in the per-client configuration object
  7. The client is done changing settings and asks the server to apply them
  8. The compositor now receives a sealed configuration object, tests and applies the new settings, for example through the DRM kernel interface
  9. The compositor updates the global list of OutputDevices and changes its setup, then it signals the client failure or success back through the configuration object

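On the client side, with the KWayland wrappers, the middle steps of that dance look roughly like this (a sketch; it assumes the output devices have already been announced via the registry, and the exact setter names may differ):

    #include <QDebug>
    #include <QPoint>
    #include <KWayland/Client/outputmanagement.h>
    #include <KWayland/Client/outputconfiguration.h>
    #include <KWayland/Client/outputdevice.h>

    using namespace KWayland::Client;

    // management and device come from the client's Registry announcements.
    void applyNewSetup(OutputManagement *management, OutputDevice *device)
    {
        // Steps 3/4: request a fresh, per-client configuration object.
        OutputConfiguration *config = management->createConfiguration();

        // Step 5: buffer the desired changes; nothing is applied yet.
        config->setPosition(device, QPoint(1920, 0));
        config->setEnabled(device, OutputDevice::Enablement::Enabled);

        // Step 9: the compositor reports the outcome through the same object.
        QObject::connect(config, &OutputConfiguration::applied,
                         []() { qDebug("new setup applied"); });
        QObject::connect(config, &OutputConfiguration::failed,
                         []() { qDebug("setup rejected, nothing changed"); });

        // Step 7: seal the object and ask the compositor to test and apply.
        config->apply();
    }
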
The output management protocol, the client- and server-side libraries, unit tests and documentation are quite a hefty beast; combined, they come in at ca. 4700 lines of code. The API impact, however, has been kept low and easy to understand. The atomic semantics are reflected in the API, and it encourages doing the right thing, both for the client configuring the screens and for the compositor, which is responsible for applying the setup.
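
The compositor side is the mirror image (again a sketch against the KWayland server API; the header names and the shape of the change sets are assumptions on my part):

    #include <QObject>
    #include <KWayland/Server/display.h>
    #include <KWayland/Server/outputmanagement_interface.h>
    #include <KWayland/Server/outputconfiguration_interface.h>
    #include <KWayland/Server/outputchangeset.h>

    using namespace KWayland::Server;

    void setupOutputManagement(Display *display)
    {
        // Step 1: announce the OutputManagement global to clients.
        OutputManagementInterface *manager = display->createOutputManagement(display);
        manager->create();

        // Step 8: a client asked for its sealed configuration to be applied.
        QObject::connect(manager,
                         &OutputManagementInterface::configurationChangeRequested,
                         [](OutputConfigurationInterface *config) {
            bool ok = true;
            for (OutputChangeSet *set : config->changes()) {
                // ... translate each change set into a DRM atomic request,
                // test it with DRM_MODE_ATOMIC_TEST_ONLY, then commit ...
                Q_UNUSED(set)
            }
            // Step 9: report the outcome back through the configuration object.
            if (ok) {
                config->setApplied();
            } else {
                config->setFailed();
            }
        });
    }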

Next steps

I am currently working on a libkscreen module for screen configuration under Wayland that implements atomic mode setting semantics in libkscreen. It uses a new Wayland protocol that Martin Gräßlin and I have been working on over the past months. This protocol lands with the upcoming Plasma 5.5; the libkscreen module may or may not make the cut, which also depends on whether we get the necessary bits finished in KWin and its DRM backend. That said, we’re getting really close to closing the last gaps in the stack.

On the compositor side, we can now connect the OutputManagement changes, for example in the DRM backend, and implement the OutputDevices interface on top of real hardware.