Plasma 5.0 is out. I’ve compiled a (non-exhaustive) list of ingredients that have been put into this release, to give the reader an estimate of the dimensions of the project and the achievement of this milestone:
A swimming-pool full of tears, cried over graphics driver problems and crashers buried deep down in scripting engines and scenegraphs (the pool allegedly was previously used for skateboarding by Greg KH)
In many cases, high-quality code counts more than bells and whistles. Fast, reliable and well-maintained libraries provide a solid base for excellent applications built on top of them. Investing time into improving existing code increases the value of that code, and of the software built on top of it. For shared components, such as libraries, this value is often multiplied by the number of users. With this in mind, let’s have a closer look at how the Frameworks 5 transition affects the quality of the code that so many developers and users rely on.
KDE Frameworks 5 will be released two weeks from now. This fifth revision of what is currently known as the “KDE Development Platform” (or, technically, “kdelibs”) is the result of three years of effort to modularize the individual libraries (and “bits and pieces”) we shipped as the kdelibs and kde-runtime modules as part of KDE SC 4.x. KDE Frameworks contains about 60 individual modules: libraries, plugins, toolchain additions, and scripting extensions (QtQuick, for example).
One of the important aspects that has seen little exposure when talking about the Frameworks 5 project, but which is really at the heart of it, is the set of processes behind it. The Frameworks project, as often happens with such transitions, has created a new surge of energy for our libraries. The immediate result, KF5’s first stable release, is a set of software frameworks that induce minimal overhead, are source- and binary-stable for the foreseeable future, are well maintained, get regular updates, and consist of proven, high-quality, modern and performant code. There is a well-defined contribution process and no mandatory copyright assignment. In other words, it’s a reliable base to build software on, in many different respects.
Extension and improvement of existing software are two ways of increasing its value. KF5 does not contain revolutionary new code; instead of extending it, in this major cycle we’re concentrating on widening the use cases and improving quality. The initial KDE4 release contained a lot of rewritten code and changed APIs, and meant a major cleanup of hard-to-scale and sometimes outright horrible code. Even over the course of 4.x, we had a couple of quite fundamental changes to core functionality, for example the introduction of semantic desktop features, Akonadi, and, in Plasma, the move to QML 1.x.
All these new things have now seen a few years of work on them (and, in the case of Nepomuk, the replacement of its guts through the migration to the much more performant Baloo framework). These things are mature, stable and proven to work by now. The transition to Qt5 and KF5 doesn’t actually change a lot about that; we’ve worked out most of the kinks of this transition by now. For much application-level code using KDE Frameworks, the porting will be rather easy to do, though not zero effort. The APIs themselves haven’t changed a lot, and the changes needed to make something work usually involve updating the build system. From that point on, the application is often already functional, and can be gradually moved away from deprecated APIs. Frameworks 5 provides the necessary compatibility libraries to ease porting as much as possible.
Surely, with the inevitable and purposeful explosion of the user base following a first stable release, we will get a lot of feedback on how to further improve the code in Frameworks 5. Processes, requirements and tooling for this are in place. Also, being an open system, we’re ready to receive your patches.
Frameworks 5, in many ways, encodes more than 15 years of experience into a clearly structured, stable base to build applications on, for all kinds of purposes and on all kinds of platforms.
With the modularization of the libraries, we’ve looked for suitable maintainers for them, and we’ve been quite successful in finding responsible caretakers for most of them. This is quite important, as it reduces bottlenecks and single points of failure. It also scales up the throughput of our development process, as the work can be shared across more shoulders more easily. This means quicker feedback on development questions, code review requests, or help with bug fixes. We don’t actually require module maintainers to fix every single bug right away; they act much more as orchestrators and go-to guys for a specific framework.
More peer review of code is generally a good thing. It provides safety nets for code problems, catches potential bugs, and makes sure code doesn’t do dumb things, or smart things in the wrong way. It also allows transfer of knowledge by talking about each other’s code. We have already been using Review Board for some time, but the work on Frameworks 5 and Plasma 5 has really boosted our use of Review Board, and of review processes in general. It has become a more natural part of our collaboration process, and that’s a very good thing, both socially and code-quality-wise.
More code review also keeps us developers in check. It makes it harder to slip in a bit of questionable code (a psychological thing). If I know my patches will be looked at critically, line by line, I take more care when submitting them. The reasons for this vary: they range from saving other developers the time to point out issues I could have found myself had I gone over the code once more, to simply looking better when I submit a patch that is clean and nice, and can go in as-is.
Surely, code reviews can be tedious and can slow down development, but in the right dose, they lead to better code in the end, code which can be trusted down the line. The effects might not be immediately obvious, but they are usually positive.
Splitting up the libraries and getting the build system up to the task introduced major breakage at the build level. In order to make sure our changes would work, and actually result in buildable and working frameworks, we needed better tooling. One huge improvement in our process was the arrival of a continuous integration system. Pushing code into one of the Frameworks nowadays means that it is built in a clean environment and automated tests are run. The system is also used to build its dependencies, so problems in the code that might have slipped the developer’s attention are more often caught automatically. Usually, the results of the continuous integration system’s automated builds are available within a few minutes, and if something breaks, developers get notifications via IRC or email. Having these short turnaround cycles makes it easier to fix things, as the memory of the change that led to the problem is still fresh. It also saves others time: it’s less likely that I find a broken build when I update to the latest code.
The build also triggers running autotests, which have been extended already, but are still quite far away from complete coverage. Having automated tests available makes it easier to spot problems, and increases the confidence that a particular change doesn’t wreak havoc elsewhere.
Neither continuous builds nor autotests can make 100% sure that nothing ever breaks, but they make it less likely, and they save development resources. If a script can find a problem, that’s probably vastly more efficient than manual testing (which is still necessary, of course).
A social aspect here is that no single person is responsible when something breaks in autobuilds or autotests; rather, it should be considered a “stop-the-line” event that needs immediate attention — by anyone.
This harnessing allows us to concentrate more on further improvements. Software in general is subject to continuous evolution, and Frameworks 5.0 is “just another” milestone in that ongoing process. Better scalability of the development processes (including QA) is not just about getting to a stable release; it supports the further improvement. As much as we’ve updated code with more modern and better solutions, we’re also “upgrading” the way we work together, and the way we improve our software further. It’s the human build system behind the software.
The circle goes all the way around: the continuous improvement process and its backing tools and processes evolve over time. They do not just pop out of thin air, and they’re not dictated from the top down; rather, they are the result of the same level of experience that went into the software itself. The software as a product and its creation process are interlinked. Much of the important DNA of a piece of software is encoded in its creation and maintenance process, and they evolve together.
One of the things that take care of internationalization in Plasma is the locale. Locale is a container concept; Wikipedia defines it as “a set of parameters that defines the user’s language, country and any special variant preferences that the user wants to see in their user interface”. There have been some changes in this area between Plasma 4.x and Plasma Next. In this article, I will give an overview of some of the changes, and what they mean for the user. Although there is some overlap between locale and translations, I’ll concentrate exclusively on the locale for now.
In Qt5, the locale support has seen a lot of improvements compared to Qt4. John Layt has done some fantastic work in contributing the features that are needed by many KDE applications, to a point where in most cases, KLocale is not needed anymore, and code that used it can now rely on QLocale. This means less duplication of code and API (QLocale vs. KLocale), more compatibility across applications (as more apps move to use QLocale), fewer interdependencies between libraries, and a smaller footprint.
This is one of the areas where porting of applications from KDE Platform 4.x to KDE Frameworks 5 can cause a bit of work, but it has clear advantages. KLocale is also still there, in the kde4support library, but it’s deprecated, and included as a porting aid and compatibility layer.
In Plasma, we have already made this transition to QLocale, and we’re at a point where we’re mostly happy about it. This also means that we had to revisit the locale settings, which are probably the single component that is most visible to the user. Of course the locale matters everywhere, so the most fundamental thing is that the user gets units, number formats, currencies and all that presented in a way that is familiar and in line with overall regional settings. There’s a bunch of cases where users will want more fine-grained control over specific settings, and that is where the “Formats” settings interface in systemsettings comes in. In Plasma 4.x, the settings were very much based on using a common setting and overriding specific properties of it in great detail. You could, for example, specify the decimal separator as a string. This allows a lot of control, but it’s also easy to get wrong. It also does not cover all necessary cases, as the locale is much more subtle than can be expressed in a bunch of input boxes. The locale also has an impact on sorting and collation of strings, and has its own rules for appending or prepending the currency symbol.
QLocale, as opposed to the deprecated KLocale, doesn’t allow outside users to set specific properties. This is, in my opinion, a valid choice, and it can be translated into something that is more useful to the user as well. The Formats settings UI now allows the user to pick a regional/language setting per “topic”. If you pick, for example, “Netherlands” for currency and “United States” for time, you’ll get euros, but your time will be displayed with AM/PM. The UI has moved, so to say, to using a region-and-language combination instead of overriding locale internals.
The mechanism we’ve put behind it is simple, but it has a number of advantages as well. The basic premise is that systemsettings sets the locale(s) for the workspace, and apps obey that. This can be done quite easily, following POSIX rules, by exporting variables such as LANG, LC_MONETARY, LC_TIME, etc. Now, if the user has configured the locale in systemsettings, at next login these variables will be exported for apps that are run within that session to pick up. If the user didn’t specify her own locale settings, the default as set by the system is used. QLocale picks up these variables and Does The Right Thing. A “wanted side-effect” of this is that applications that do not use QLocale will also be able to pick up the locale settings, assuming they follow the POSIX standard described above. This means that GTK+ apps will follow these settings as well — just as it should be within the same session. It also means that if you run, for example, LXDE, it will also be able to have apps follow its locale, without doing special magic for Qt/KDE applications.
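As a rough sketch, a session startup script could export such a mixed, per-topic locale like this (the concrete locale values here are purely illustrative, matching the Netherlands/United States example above):

```shell
# Illustrative POSIX locale setup: US English as the general locale,
# but Dutch conventions for currency. Values are examples only.
export LANG="en_US.UTF-8"           # overall default for all categories
export LC_TIME="en_US.UTF-8"        # time/date formatting (AM/PM)
export LC_MONETARY="nl_NL.UTF-8"    # currency formatting (euros)
```

Any application that honors the POSIX locale categories, whether it uses QLocale or not, will pick these up when it starts within the session.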
In Plasma, we have traditionally relied on the font settings dictated by the distribution we run on. This means that we take whatever “Sans” font the distro has set up (or has left at something else), and work with that. The results of that were sub-optimal at best, as it meant we had almost no control over how things would look for end users. Fonts matter a lot, since they determine how readable the UI is, but also what impression it gives. They also have an effect on sizing, and even more so in Plasma Next.
The size of many widgets in a UI depends on the font: will this message actually fit into the space allowed for it? (And then: what about a translated version of this message?) In Plasma Next, we rely even more on sensible font settings and metrics in order to improve our support for high-DPI displays (displays that have more than 150 dots per inch). To achieve balance in the UI sizing, and to base sizing on what really matters (how much content fits in there?), we’ve put a much stronger emphasis on the font size as rendered on a given screen. I’ve explained the basic mechanics behind that in an earlier post, so I won’t go into too much detail here. Suffice it to say that the base unit for our sizing is the height of the letter M rendered on the screen. This gives us a good base metric that takes into account the DPI of the screen, but also the font preference as set up by the user. In essence, this means that we design UIs to fit a certain number of columns and rows of text (approximately, and with ample dynamic spacing, so longer translations fit well too). It also means that the size of UI elements is not expressed in pixels anymore, nor relative to the screen resolution; instead, you get roughly the same physical size on different displays. This seems to work rather well, and we have gotten few complaints about sizing being off.
Relying on font metrics for low-level sizing units also means that we need the font to actually tell us the truth about its sizing. We need to know, for example, how many pixels a given font on a given screen at a given point size will take, and we need this font to actually align with these values. This sounds quite logical, but there are fonts out there that don’t do a really good job of reporting their metrics. This can lead to over- or undersized UIs, alignment and margins being off, and a whole bunch of other visual and usability problems. It also looks bad. I personally find it quite frustrating when I see UIs that I or somebody else has spent quite some time “getting juuuuuust right”, and then see them completely misaligned and wrongly sized, just because some distro didn’t pay enough attention to choosing a well-working (by our standards, of course ;)) font.
So, to mitigate these cases, we’ve chosen to be a bit more bold about font selection in Plasma Next. We are now including the Oxygen font and setting it up as the default on new installs. This means that we know the defaults work, and that they work well across a range of displays and systems. We’re also defaulting to certain renderer settings, so the fonts look as smooth as possible on most machines. This fixes a slew of possible technical issues, but it also has a huge impact on aesthetics. By setting a default font, we provide a clearer idea of “with this setup, we feel it’s going to look just right”.
For this, we’ve chosen the Oxygen font, which has been created by Vernon Adams, is released under the SIL Open Font License and has been created under the Google webfonts project. It’s a really beautifully done, modern, simple and clean typeface. It is optimized for rendering with Freetype, and it mainly targets web browsers, desktops, laptops and mobile devices. Vern has created this font for Oxygen and in collaboration with some of the Oxygen designers. The font has actually been around for a while already, but we feel it’s now ready for prime-time, so limelight it is.
As it happens with Free software, this has been a long-lasting itch to scratch for me. One of the first things I had to do with every install of Plasma (or previously, even KDE 3) was to change the fonts to something bearable. Imagine finishing the installer, and being greeted with Helvetica — Barf. (And Helvetica isn’t even that bad a font; I’ve seen much worse.) I’m glad we could fix this now in Plasma Next, and I’m confident that this will give many users a nicer looking desktop without them changing anything.
Apart from the technicalities, there will always be users who have a strong preference for a certain font, or setting. For those, we have the font selection in our systemsettings, so you can always set up your personally preferred font. We’re just changing the default.
In the Plasma team, we’re working frantically towards the next release of the Plasma workspaces, code-named “Plasma Next”. With the architectural work well in place, we’ve been filling in missing bits and pieces in the past months, and are now really close to the intended feature set for the first stable release. A good time to give you an impression of what it’s looking like right now. Keep in mind that we’re talking Alpha software here, and that we still have almost three months to iron out problems. I’m sure you’ll be able to observe something broken, but also something new and shiny.
For the first stable release of Plasma Next, we decided to focus on core functionality. It’s impossible to get every single feature that’s available in our long-term support release KDE Plasma workspaces 4.11 into Plasma Next at once. We therefore decided to not spread ourselves too thin, and set aside some not-quite-core functionality for now. So we’re not aiming at complete feature parity yet, but at a stable core desktop that gets the work done, even if it doesn’t have all the bells and whistles that some of our users might be used to, yet.
Apart from “quite boring”, underlying “system stuff”, we’ve also worked on the visuals. In the video, you can see improved contrast in Plasma popups, and effects in KWin have been polished up to make the desktop feel snappier. We’ve started work on a new Plasma theme that will sport a flatter look with more pronounced typography than the venerable Air, and animations can now be globally disabled, so the whole thing runs more efficiently on systems with slow painting performance, for example across a network. These are only some of the changes; there are many more, visible and invisible.
We’re not quite done yet, but we have moved our focus from feature development to bugfixing, and the results of that are very visible if you follow the development closely. Annoying problems are being fixed every day, and at this rate of development, I think we’re looking at a very shiny first stable release. Between today and that unicorn-dances-on-rainbows release lie almost three months of hard work, though, and that’s what we’ll do. While the whole thing already runs very smoothly on my computers, we still have a lot of work to do in the integration department, and in translating this stability to the general case. Systems out there are diverse and different, and only wide-spread testing can help us make the experience a good one for everybody.
If you’re, like me, regularly building Qt, you probably have noticed a decent hunger for memory, especially when linking Webkit. This part of the build can take well over 8GB of RAM, and when it fails, you get to do it over again.
The unfortunate data point is that my laptop only has 4GB, which is enough for most (but one) of these cases. Short of buying a new laptop, here’s a trick for getting through this part of the build: create a swap file. Creating a swap file increases your virtual memory. This won’t make the build fast, but it at least gives Linux a chance not to run out of memory and kill the ‘ld’ process. Creating a swap file is actually really easy under Linux; you just have to know your toolbox. Here’s the quick run-down of steps:
First, create an empty file:
fallocate -l 4096M /home/swapfile
Using the fallocate call (which works on newer kernels, but only on btrfs, ext4, ocfs2, and xfs filesystems), this can be done fast. In this example, I have enough space on my home partition, so I decided to put the swapfile there. It’s 4GB in size, which should be plenty of virtual memory to finish even the greediest of builds — your mileage may vary. If you’re not able to use fallocate, you’ll need a bit more patience and dd.
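A sketch of the dd alternative, using the same size and path as above (dd actually writes all the zeroes out to disk, so it takes considerably longer than fallocate):

```shell
# Slower fallback: write 4 GiB of zeroes into the file, 1 MiB at a time.
dd if=/dev/zero of=/home/swapfile bs=1M count=4096
```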
As your swap should never be readable by anybody other than root, change its permissions:
chmod 600 /home/swapfile
Next, “format” the swapfile:
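On Linux, this is done with mkswap:

```shell
# Write a swap signature into the file so the kernel accepts it as swap:
mkswap /home/swapfile
```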
Then, add it to your virtual memory pool:
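This is done with swapon (as root):

```shell
# Activate the file as additional swap space:
swapon /home/swapfile
```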
You can now check with tools like `free` or `top` (tip: use the much nicer `htop` if you’re into fancy) that your virtual memory has actually increased. Once your build is done and you need your disk space back, that’s easy as pie:
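Deactivate the swap file and delete it (as root):

```shell
# Take the file out of the virtual memory pool, then reclaim the disk space:
swapoff /home/swapfile
rm /home/swapfile
```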
If you want to make this permanent (so it’ll survive a reboot), add a line like the following to your fstab:
/home/swapfile none swap defaults 0 0
This is just a really quick tip, more details on this process can be found in the excellent Arch Wiki.
The metaphor of a home make-over crossed my mind today, and I think it implies a few ideas that connect to Plasma Next quite well.
Plasma Next is a new iteration of the Plasma workspaces. This is the first time that KDE’s workspace is released independently from its underlying libraries and the applications accompanying it. It is built upon KDE Frameworks 5 and Qt5. “Plasma Next” is our name for the development version of Plasma; the final product will not be called “Next”. The planned release does not include the applications that are shipped today as part of the KDE Software Compilation. Applications, just like the Frameworks, follow a different release schedule. Of course it’s entirely possible to run KDE SC 4 applications on a Plasma Next desktop, and vice versa.
We have now settled on a release schedule for Plasma Next, which plans for the first stable release in June. It’s a good time to talk about what’s to expect in this first release. I think we’ve got some pretty exciting improved technology in the works. As our new baby shapes up, let’s take a look at what’s old and what’s new in more detail.
Plasma Next is based on KDE Frameworks 5, which is a modularized version of the venerable KDE development platform (kdelibs and friends, so to say). This results in a reduced footprint in terms of dependencies. The Plasma libraries and runtime components are themselves one of those frameworks, providing the tools to build applications and workspaces. Plasma has been one of the early adopters of Frameworks 5, mainly because we had to do some development in parallel. While building on an unstable base wasn’t always easy, it certainly had its benefits: we could check the viability of the Frameworks goals, and catch regressions early on. A good test harness only goes so far; actually building a real-world application with the libraries makes sure they’re stable and functioning. These days, the issues are gone: KDE Frameworks has stabilised enough that applications can be ported quite easily.
With the move to Frameworks 5 come some architectural changes that allow us to cut out some “runtime fat”, we can get rid of some separate processes with the new architecture, which results in a leaner system for the user.
QtQuick for the UI
The transition to QtQuick reaches a very significant milestone in Plasma Next: the whole UI is written in QML. This concludes a transition process we started in Plasma 4.x; as a side-effect, it solves a whole class of long-standing quality and maintenance problems in the code, which means higher quality for many users. The transition to a QtQuick-based UI has been fairly smooth so far, so we are not jumping onto something entirely new here, which means that we have been able to port much of the existing and matured functionality from Plasma 4.x over to Plasma Next. Our QML APIs have not changed radically, so porting existing code written in QML to Plasma Next is not a huge amount of work. Add-on UIs written in C++, Python and other languages are currently not supported. This is a conscious design decision, and unless QML bindings for these languages appear (which seems like a weird idea), such support is unlikely to come in the future. Instead, we’ve improved and matured the QML API to a level where it’s very easy to create almost anything you can dream of in QML.
New Graphics Stack
Plasma’s graphical stack reaches a new state of the art this year. We are now able to fully offload the rendering of the UI into threads and other processes, making the UI responsive by default. We also offload visual effects almost entirely onto the graphics card. This means that it’s faster, so you get better framerates; it frees up CPU time for the user; and it’s more energy-efficient, which saves some battery life.
In case the system falls back to software rendering, we can now centrally disable animations, so even on systems that paint slowly, we can reduce repaints drastically to give a snappy feel. This means that the visual effects in the UI degrade gracefully with the graphics features available on the system. For the vast majority of users, this isn’t an issue anyway; their systems run a fully hardware-accelerated desktop with complete ease. On these systems, we can improve the user experience by using the graphics card’s capability — and not only that: the transition to QtQuick 2, which is part of Qt5, means that all the work at the UI level can be offloaded onto the GPU as well.
Another hot topic these days is Wayland readiness. This is currently work in progress, Martin Gräßlin’s blog shows some impressive progress on running KDE applications on Wayland. This is one of many important milestones. The most complex case is running a full Wayland session with Plasma and KDE applications, and we are not there yet. Wayland support is continuously improved.
Next to these architectural changes, we’ve also put some work into the actual UI visuals and interaction. We are in the process of establishing a Visual Design Group in KDE, which already participates in the development of Plasma Next. The first results of this will be visible this summer, and the group is currently hashing out plans for the future. There is some serious design love appearing. You can follow its progress on the wheeldesign blog.
One of my favourite new features that has recently landed is Marco’s work on contrast behind translucent dialogs, which hugely improves readability in many cases, and makes “the old Plasma” almost look bland in comparison. We’ve cleaned up quite a lot of workflows, not by making them any different, but by removing visual noise in between. The idea is to polish common elements so they feel fresh, like an upgrade to users, but not entirely different. In the UI, known behavioral patterns are kept in place, with more pronounced core functions and less fuzz around them. We’re aiming at keeping all the functionality and adaptability in place. To the user, the migration to Plasma Next should feel like an upgrade: not something completely new, but something trusted after a bigger step in its evolution, yet recognizably true to its values.
We want to achieve this by concentrating on the core values, on what makes us good, and what users love us for. But we also do not want to pass the opportunity to fix what nags us and our users. Improvements in details mean that we listen to our users, a large portion of which do not want to be the subject of UI experiments, but who require a reliable system that supports and improves the personal workflows they have almost brought to perfection.
Of course, all of these workflows rely on many details behaving as they do, and these things are different for everyone. A small difference in behavior, or a missing seemingly minor feature might make or break any given workflow, there is no guarantee to be made, just a promise of best effort.
Many, but not all parts are co-installable, and it will be possible to use KDE SC 4 applications in a Plasma Next environment, and vice versa.
Plasma Next runs on top of a device-independent shell. The whole UI is structured into logical blocks that can be exchanged at runtime, this allows for dynamic switching the user interface to suit a different form factor. A target device will typically have the plasma-shell (and runtime bits) installed, and one or more device-specific shells. We are now readying the first of these “workspace user experiences”, Plasma Desktop. Others, such as the tablet-oriented Plasma Active UX will join in subsequent releases.
When will it be good enough for me?
While no .0 release will ever be perfect, I expect that Plasma Next’s .0 will feel very much like an evolutionary step to the user, and certainly miles away from the impact of Plasma in KDE 4.0. For those who still remember the transition from KDE 2.2 to KDE 3.0, this seems rather comparable. While KDE 2 was almost a complete rewrite of KDE 1, the 2-to-3 transition was much less radical and far-reaching. We saw the same pattern between KDE 3 and 4, which again was quite radical, especially in kdelibs and even more so for the workspace — Plasma, which was completely new in 4.0.
In the first stable release of Plasma Next we want to provide a stable and fully functional core desktop. We concentrate on making the most important default functionality available, and are polishing this first and foremost. I expect that most users can be just happy with this first release, but as I said, there’s no guarantee, and maybe you’re missing something. For those that want to make the switch later, we have of course our long-term maintained current Plasma stable, so there’s no rush.
This concentration on the core also means that not every single feature will be available immediately. We are, however, aiming at feature parity, so in most cases a missing feature in .0 will be brought back in a later release. The kdeplasma-addons module, which contains additional functionality, will likely not be ported by summer.
Ultimately, the way to make a good Plasma Next happen is to lend us a hand and take part. Even better, you don’t have to wait until summer for this. Take the plunge: build KDE Frameworks 5 and Plasma Next, try it, and help us complete the core functionality and fix bugs. That is exactly what we will be doing in the months leading up to the release this summer, and we hope the experience will be a delightful one for our users.
In this article, I’m describing a way to dynamically load Plasmoids into the systemtray. It’s interesting for you if you develop Plasma addons, or if you’re interested in the design of Plasma by KDE.
One of the wishes that came up during the latest Plasma sprint in Barcelona was a more dynamic way of showing functionality in the systemtray, that little notification area sitting in the panel next to the clock. The systemtray can contain different kinds of “things”: statusnotifiers, which are basically systray icons of applications, and Plasma widgets, which allow for much more functionality and freedom in UI development. Statusnotifiers are instantiated by applications, so their lifetime is entirely in the hands of the application they belong to. For Plasma widgets, this is a bit different; they’re currently loaded on startup. Ideally, we want to load specific services on demand, say when a specific service becomes available.
You may have guessed by the title already, this feature has now landed in Plasma Next. It was actually quite easy to do, yet it’s a very powerful feature. First, let’s see what it looks like:
This feature allows loading and unloading Plasmoids when dbus services come and go. Applets can specify in their metadata which service should control their lifecycle.
This has the following advantages:
We can load widgets dynamically, only when they're useful, reducing clutter in many cases
Applications can provide widgets that appear and disappear based on the application running
We can load controls for system services dynamically as they come and go
It makes it easier to delay loading of widgets in the systemtray until a specific service appears, cutting down startup time
It makes widgets and their features more discoverable as they’ll be able to appear automatically
One immediate user of this is the media controller widget, which will now only be loaded once an MPRIS2-compatible media player is running (as indicated by a dbus interface becoming available on the session bus).
How do you do that? It's quite easy. Two things need to be done: the widget needs to indicate that it should be loaded into the systemtray automatically, and it needs to specify a service which triggers loading it. That's two lines in your metadata.desktop file, looking like this for example:
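A sketch of what such an entry could look like, using the media controller case as an example. The exact key names are assumptions based on the systemtray implementation at the time of writing, so verify them against your Plasma version:

```ini
[Desktop Entry]
# Hedged example: key names may have changed since; check the
# systemtray sources if these don't work for you.
# 1. Ask to be shown in the notification area:
X-Plasma-NotificationAreaCategory=ApplicationStatus
# 2. Load/unload this applet as a matching dbus service comes and goes:
X-Plasma-DBusActivationService=org.mpris.MediaPlayer2
```

With these two keys, the systemtray watches the session bus and instantiates the applet only while a matching service is registered.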
We are currently looking into how we can improve Plasma (but in extension also other applications, including QWidget-based ones) on hardware that sports unusually high DPI (also called “reasonable DPI”). In this article, I’ll outline the problem space and explain our tools to improve support for high DPI devices in Plasma.
First, however, let's take a step back to explain the problem space to those who haven't yet spent much thought on this. Most of today's desktops and laptops have roughly the same number of pixels per unit of screen area. This pixel density is measured in DPI (dots per inch) or PPI (pixels per inch), and is around 100 DPI for laptops. Tablets and smartphones are already available with much higher DPI screens, making for sharper fonts and overall higher-fidelity graphics. A while ago, Apple started producing laptops with high DPI displays as well, the so-called Retina screens. There are Macbooks on the market with 220 DPI.
Some people have complained for years about the low DPI value of screens available on the market (and I am one of them): higher DPI simply allows for nicer looks, and reduces the need for “dirty tricks” such as subpixel rendering algorithms (which come with their own set of problems). Graphics chips have also become fast enough (and equipped with enough memory) to handle high resolutions without problems (think 4K on your laptop, for example).
I've done some research (well, lazy-webbing, mostly), and it seems that higher DPI screens for desktop systems, but also for laptops, are still quite rare, and when you find one, they're often really, really expensive. I believe there are two reasons for this. First, production-line quality is not always good enough to produce high DPI screens: the more pixels there are on a given device, the higher the chance that one or more of them are dead, making the display unsellable and thus increasing the price of the ones that are pixel-perfect. Second, tablets and smartphones, which often already sport high DPI screens, are currently taking up almost all of the production capacity. The obvious result is scarcity and a high price. I believe it's only a matter of time until high DPI displays become more common, however. A few friends of mine already have high DPI displays, so there is a real need to support this kind of screen well.
So what's the problem? Many applications assume a fixed DPI value, or at least that the DPI value of the screen is within a certain range, so that when you render an icon at 22 pixels, it will look like a small icon compared to text, and its details will be visible. Also, icon sizes and font sizes are loosely related, so an icon that is way smaller than the text as rendered will look weird and cause layout problems. (Imagine huge letters and cut-off text, for example.)
For graphical elements, this is a real problem, but less so for text. Today's text rendering engines (Freetype, for example) take the DPI value of the screen into account, which already solves part of our problem. Luckily, text rendering is among the most complex cases, so at least that part is not of great concern. It's only a partial solution, however, so at best it eases finding a complete one. We need to fix these problems in several different areas, and all of them need well-thought-out solutions.
Limitations in X11
The bad news is that this problem is only partly solvable in an X11 world. X11 has no real concept of different DPI values per screen. This is not so much a problem for single screen systems — we just use the DPI of the only screen. As soon as you attach a second screen that has a different DPI, this is going to be a problem, and it’s not possible to solve this completely in an X11 world. We’re planning to first make it work in a single DPI environment, and then work towards supporting different DPI values per screen. (Of course we’ll keep multi-DPI-screens in mind, so that we don’t have to take too many steps back once we are actually in a position to be able to support this. This means Wayland, basically.)
Cheating with fonts
A pretty easy, yet neat solution is to use font metrics to compute sensible dimensions for graphical elements on the screen. The very short version is to stop thinking in numbers of pixels, and to start thinking in lines of text and widths of characters as they end up on the screen. This is not a pure solution to the DPI problem, which in many cases is actually an advantage. The size (for example, the height) of a given letter rendered on the screen depends on a number of properties:
The DPI value of the screen which is used to render the text
The font size setting
The size of the font itself, as it is designed (this is usually more relevant for the aspect ratio between width and height)
This means that, taking the font height as rendered, we can compute sizes for elements that take not only the low-level technical properties of the hardware into account, but also user preferences (or system presets). In Plasma 2, we started using this mechanism in a number of places, and so far we're pretty happy with the results. Some examples of where we use this are the sizing of popups coming out of the panel (for the notification area and the calendar, for example), and the sizing of the icons in the notification area (or system tray). This means that instead of hardcoding pixel sizes, these UI elements grow and shrink depending on your settings. This solves part of the problem, but is obviously not a complete solution. If you would like to implement this mechanism, here are two snippets of code which, with some necessary adaptation, you can use to make your app work well on high DPI devices.
in Qt C++ code:
const int fontHeight = QFontMetrics(QApplication::font()).boundingRect("M").size().height();
This gives you the height of the letter “M” as it would be rendered on the screen. It's a useful mechanism to get a pixel size that depends on the DPI value of the screen. Note that you want to use an int here, in order not to end up aligning UI elements at half pixels, as this leads to blurriness in your UI.
We’ve bridged this API in the Plasma Theme class, which is available from QML applications by importing org.kde.plasma.core (the global property theme will be automatically set, which allows you easy access to Plasma::Theme from everywhere, in case you’re wondering where the “theme” variable is coming from).
import org.kde.plasma.core 2.0 as PlasmaCore

Rectangle {
    /* Paint an orange rectangle on the screen that scales with DPI.
       This example makes the rect big enough to paint about 8 rows
       of text (minus spacing), and allows for a column width of about
       60 characters. Mileage varies with the fonts used and the text
       itself, so this is an approximation. */
    color: "orange"
    width: theme.mSize(theme.defaultFont).width * 60
    height: theme.mSize(theme.defaultFont).height * 8
    /* ... more stuff ... */
}
Another useful property is “units.largeSpacing”, which you can use for margins and spacing that take DPI (actually, font-as-rendered settings) into account.
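As an illustration, here is a sketch of how this could be used. The surrounding Rectangle and the Text element are hypothetical; units.largeSpacing is the Plasma-provided property being discussed:

```qml
import QtQuick 2.0
import org.kde.plasma.core 2.0 as PlasmaCore

Rectangle {
    // Hypothetical container; what matters is the margin below.
    width: 400
    height: 200

    Text {
        anchors.fill: parent
        // DPI- and font-aware margin instead of a hardcoded pixel value
        anchors.margins: units.largeSpacing
        text: "Spacing that scales with the screen"
    }
}
```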
To get the size of a piece of text on the screen in QtQuick code, you can use the paintedWidth property of a Text element. Note that this can be tricky due to text elision, line breaks, etc., so it should be handled with care.
Icons and other graphical elements
Another interesting case which affects the usability of our graphical interfaces on high DPI screens is the sizing of icons. We are using standard sizes there, which you access via the properties “units.iconSizes.small”, “units.iconSizes.large”, etc. In Plasma, these sizes now get interpolated with the DPI value of the screen, while making sure the icons still get rendered correctly. The sizing is done in steps, not linearly, to avoid washed-out icons. This mechanism works well and automatically solves another piece of the puzzle. The result is a more balanced look, and better alignment and spacing across some widgets (notably, the battery widget gains quite a bit of beauty with this treatment).
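The stepping idea can be sketched in plain C++. This is an illustrative approximation, not Plasma's actual implementation; the reference DPI and the list of sizes are assumptions:

```cpp
#include <array>
#include <cmath>

// Illustrative sketch (not Plasma's actual code): scale a base icon
// size by the screen's DPI, then snap the result to the nearest
// standard icon size, so icons are always rendered at a size the
// icon theme can draw crisply (no washed-out, in-between sizes).
int snappedIconSize(int baseSize, double dpi)
{
    const double referenceDpi = 96.0; // assumed "normal DPI" baseline
    const double scaled = baseSize * (dpi / referenceDpi);

    // Standard icon sizes, as commonly provided by icon themes.
    static const std::array<int, 6> steps = {16, 22, 32, 48, 64, 128};

    int best = steps.front();
    for (int candidate : steps) {
        if (std::abs(candidate - scaled) < std::abs(best - scaled)) {
            best = candidate;
        }
    }
    return best;
}
```

On a 96 DPI screen this leaves standard sizes untouched, while on a 192 DPI screen a 22-pixel icon snaps to the 48-pixel step rather than being scaled to an arbitrary 44 pixels.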
In other UI elements, especially the toolkit-style widgets we provide in PlasmaComponents, we often already scale with the text, which works just fine for reasonably-but-not-excessively high DPI values. We have done some experiments with scaling up the graphical elements that are used to, for example, render a button. As we are using SVG graphics throughout the user interface, we don't have to resort to dirty tricks such as doubling pixels, and we can get quite neat results. This needs some more work, and it hasn't been merged into master yet.
The Higher-Level Picture
Now, having discussed lots of technical details, how does the user-facing side of this look and work? The changes we've made so far only affect the user in one way: more UI elements scale with the DPI of the screen, which means no change on displays around 100 DPI. On my laptop (which has a 180 DPI screen), this gives a more balanced picture.
As Plasma is making its way onto a wider range of target devices (think tablets and media centers, as well as high DPI laptops and monitors), this topic becomes increasingly important. I have good hopes that we will be able to solve this problem in a way that really leverages the power of this kind of hardware. In Plasma Next, we're now providing some of the basic tools to make UIs work well on high DPI displays, and we'll continue further down this route. Users will notice that in Plasma Next, most of the problems that could be seen on high DPI screens are gone, and that the whole system behaves much better without many tricks.
The Plasma team is meeting in Barcelona, Spain these days to work on the next major version of KDE’s popular workspaces. As we are in a transition period, technically and organisationally, this is a very important meeting. I won’t go into too many details in this post, as they are still being fleshed out, but to give you an idea what we are talking about, here’s a quick run-down of some of the things we talked about.
Process & Transparency
We do not have firm technical results yet; we are first sharing the proposals we come up with here at the meeting on the Plasma mailing list for feedback. This is a change we are making in our development and decision-making process. In the past, we sometimes got the feedback that the Plasma team, as a group, appears a bit too exclusive to the outside. This stands in contrast to its architectural position. One of the things that makes it very interesting to work on Plasma is its scope, which usually goes beyond work on the workspace UI itself, or on one specific technology. User interfaces don't stand on their own: they express something and allow access. Examples are hardware integration, where issues like power management, device management, etc. have to be presented in the workspace in a way that first of all makes sense technically, but that is also consistent with the way other “things” are presented, and that is beautiful and engaging for the user while getting out of the way of real work. This is a thin line to walk, and in order to achieve great results, it needs close involvement from all sides.
Related to this is transparency in decision-making. Some people have complained that decisions have been made inside a tight group, and that they don't feel part of this process. This stands in the way of team growth (and no growth means shrinking), making it hard to maintain a high level of quality on the one hand, and to improve existing functionality and develop new features on the other. We want to change this. Of course this must not stand in the way of firm direction, but the responsibility has to be shared by more people, meaning that no single person is seen as responsible for more controversial changes; we as a team stand behind them. This reduces stress on individuals, and leads to a fairer distribution of the negative sides of responsibility as well.
Lately, we've been struggling with an unwelcoming atmosphere in the Plasma communication channels. We've talked about this issue, and everybody present agreed that in order to keep our working environment pleasant, we have to be more friendly and respectful to each other. It is simply not acceptable to lash out at each other, to answer emails in condescending ways, to talk to people assuming they have bad intentions, or anything like that. We need to rebuild some mutual trust, but we also need to step in when things threaten to escalate. As an in-person meeting is a good opportunity to talk about these issues face to face, this was an important topic. My personal feeling is that we have reestablished strong standards, and that everybody is on the same page and willing to defend this newly found balance. We all share the same goals, and we want our working environment to be friendly and enjoyable, as this forms the baseline for being productive and achieving great results as a team.
Drinking from the firehose
Finally, another topic was the situation of Plasma issues in KDE's Bugzilla. We have a lot of bugs, and are currently lacking the resources to handle the stream of incoming reports. We do need to do something about this, since the tracker is an important tool for support and for increasing the quality of our codebase, and by extension the user experience. This means that, as part of the technological transition, we will close a number of bugs which we know fairly certainly no longer apply in Plasma 2. We'll also draw clearer boundaries between components supported by the core team (essential functionality) and community-supported addons. This means that we will be able to categorize and prioritize better, and hopefully get a grip on the rather messy situation in our issue tracker. Right now, trying to make sense of the issue reports for Plasma very much feels like this:
On the design side, we've started on visual guidelines. These are a tool for us to achieve greater beauty and consistency across components. Plasma 1 feels, in some places, like a collection of individual, separate components. This is of course true, and it has been specifically and purposefully designed like that: it's a good thing technically, but the architecture should not bleed into the visual appearance. We've done a lot of work to ensure visual consistency in our components, and we'd like to take this to the next level. For this reason, we've worked on visual design guidelines. We've taken the new Plasma calendar as a starting point, since it resonated very well with the community and with professional designers, and we started to extract guidelines that are commonly applicable to other parts as well. This is about the usage of fonts, spacing and alignment. On the technical side, Digia's Mitch Curtis, who works on the calendar components in QtQuick Components, has joined us for a day of design, planning and hacking, so we have some really nice collaboration going on there as well.