July 2nd, 2015

Convergence through Divergence

It’s that time of the year again, it seems: I’m working on KPluginMetaData improvements.

In this article, I am describing a new feature that allows developers to filter applications and plugins depending on the target device they are used on. The article targets developers and device integrators and is of a very technical nature.

Different apps per device

This time around, I’m adding a mechanism that allows us to list plugins, applications (and the general “service”) specific to a given form factor. In normal-people language, that means I want to make it possible to specify whether an application or plugin should be shown in the user interface of a given device. Let’s look at an example: KMail. KMail has two user interfaces: the desktop version, a traditional fat client offering all the features an email client could possibly have, and a touch-friendly version that works well on devices such as smartphones and tablets. If both are installed, which should be shown in the user interface, for example in the launcher? The answer is, unfortunately: we can’t really tell, as there is currently no reliable scheme to derive this information from. With the current functionality offered by KDE Frameworks and Plasma, we’d simply list both applications; they’re both installed, and there is no metadata that could possibly tell us the difference.

Now the same problem applies not only to applications, but also, for example, to settings modules. A settings module (in Frameworks terms, a “KCM”) can be useful on the desktop, but irrelevant on a media center. There may also be modules which provide similar functionality, but for a different use case. We don’t want to create a mess of overlapping modules, however, so again, we need some kind of filtering.

Metadata to the rescue

Enter KPluginMetaData. KPluginMetaData provides information about an application, a plugin or a similar component: name, icon, author, license and a whole bunch of other things. It lies at the base of things such as the Kickoff application launcher, KWin’s desktop effects listing, and basically everything that’s extensible or uses plugins.

I have just merged a change to KPluginMetaData that allows all of these things to specify which form factors they’re relevant and useful for. This means that you can install, for example, KDevelop on a system that can act as either a laptop or a media center, and an application listing can be adapted to only show KDevelop in desktop mode and to skip it in media center mode. This is of great value when you want to unclutter the UI by filtering out irrelevant “stuff”. As this mechanism is implemented at the base level, KPluginMetaData, it’s available everywhere, using the exact same mechanism. When listing or loading “something”, you simply check whether your current form factor is among the suggested useful ones for an app or plugin, and based on that you decide whether to list it or skip it.

With increasing convergence between user interfaces, this mechanism allows us to adapt the user interface and its functionality in a fully dynamic way, and reduces clutter.

Getting down and dirty

So, how does this look exactly? Let’s take KMail as example, and assume for the sake of this example that we have two executables, kmail and kmail-touch. Two desktop files are installed, which I’ll list here in short form.

For the desktop fat client:

[Desktop Entry]
Name=Email
Comment=Fat-client for your email
Exec=kmail
FormFactors=desktop

For the touch-friendly version:

[Desktop Entry]
Name=Email
Comment=Touch-friendly email client
Exec=kmail-touch
FormFactors=handset,tablet

Note that the “FormFactors” key does not take just one fixed value, but allows specifying a list of values — an application may support more than one form factor. This is reflected throughout the API, with the plural form being used. Now the only thing the application launcher has to do is check whether the current form factor is among the supplied ones, for example like this:

foreach (const KPluginMetaData &app, allApps) {
    if (app.formFactors().count() == 0 || app.formFactors().contains("desktop")) {
        shownAppsList.append(app);
    }
}

In this example, we check whether the plugin metadata specifies any form factors at all by counting the elements, and if it does, we check whether “desktop” is among them. For the example files above, this means that the fat client will be added to the list, and the touch-friendly one won’t. I’ll leave it as an exercise to the reader how one could filter only applications that are specifically suitable for, say, a tablet device.
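One possible answer to that exercise, sketched here with plain standard-library types rather than the real KPluginMetaData API (which is Qt-based), so the filtering logic stands on its own: on a tablet, you would likely invert the default and list only applications that explicitly declare support for the requested form factor.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for KPluginMetaData; the real class exposes
// formFactors() as a list of strings.
struct AppMetaData {
    std::string name;
    std::vector<std::string> formFactors;
};

// Filter for a specific device: keep only apps that explicitly declare
// support for the given form factor. Note the inverted default compared
// to the desktop launcher snippet above: an app that declares no form
// factors at all is skipped rather than shown.
std::vector<AppMetaData> appsFor(const std::vector<AppMetaData> &allApps,
                                 const std::string &formFactor)
{
    std::vector<AppMetaData> shown;
    for (const AppMetaData &app : allApps) {
        if (std::find(app.formFactors.begin(), app.formFactors.end(),
                      formFactor) != app.formFactors.end()) {
            shown.push_back(app);
        }
    }
    return shown;
}
```

With the two example desktop files above, `appsFor(allApps, "tablet")` would return only the touch-friendly client.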

What devices are supported?

KPluginMetaData does not itself check whether any of the values make sense. This is by design: we want to allow for a wide range of form factors, and we simply don’t know yet which devices this mechanism will be used on in the future. As such, the values are free-form and part of the contract between the “reader” (for example a launcher or a plugin listing) and the plugins themselves. There are a few commonly used values already (desktop, mediacenter, tablet, handset), but in principle, adding new form factors (such as smartwatches, toasters, spaceships or frobulators) is possible, and part of the design.

For application developers

Application developers are encouraged to add this metadata to their .desktop files. Simply adding a line like the FormFactors one in the above examples will help to offer the application on different devices. If your application is desktop-only, this is not really urgent: in the case of the desktop launchers (Kickoff, Kicker, KRunner and friends), we’ll likely use a mechanism like the above, where no form factors specified means it gets listed. For devices where most of the installed applications will likely not work, marking your app with a specific form factor will increase the chances of it being found. As applications are adapted to respect the form-factor metadata, its usefulness will increase. So if you know your app works well with a remote control, add “mediacenter”; if you know it works well on touch devices with a reasonably sized display, add “tablet”; and so on.

Moreover…

We now have the basic API, but nobody uses it yet (a chicken-and-egg situation, really). I expect that one of the first users will be Plasma Mediacenter. Bhushan is currently working on integrating Plasma widgets into its user interface, and he has already expressed interest in using this exact mechanism. As KDE software moves onto a wider range of devices, this functionality will be one of the cornerstones of the device-adaptable user interface. If we want to use device UIs to their full potential, we do not just need converging code; we also need divergence features that let us benefit from the differences between devices.

May 3rd, 2015

Cuba Diving.

Spectacular sunset at Maria la Gorda

Spectacular sunset at Maria la Gorda

I recently went on a vacation to Cuba. As I wanted to go scuba diving there, I researched a bit beforehand. The information I could dig up was spotty at times, so I decided to share my notes so that others can use them as anecdotal input when planning their own diving trips.

During the 3-week trip to Cuba, I visited three locations in the south-western part of the island. In total, I did 19 dives along the Cuban coast, all of them very enjoyable. On the list were shallow (10-18m) coral reef dives and wall dives, some of them deep; I clocked my maximum depth at 34.1m. One of the things I wanted to do was a cave dive in a Cenote. Cenotes are underwater cave systems found in this geological region.

General Considerations

Dive boat at Maria la Gorda

Dive boat at Maria la Gorda

Cuba, being a Caribbean island, has a tropical climate with warm waters around it, and climatically a wet and a dry season. As the wet season may make the sea choppy, reduce visibility and carry the risk of hurricanes, it’s advisable to pick the dry season, from November to May, for diving activities. The South coast, which is where I have been diving, had warm waters: 27°C at the surface and 26°C at depths down to about 35m. Visibility was generally excellent, commonly around 30m, sometimes up to 50-60m in calm water. In several spots, there are large and well-preserved coral reefs. The South coast usually has calmer waters than the North coast, so I picked locations in the South-West: Maria La Gorda at the far southwestern point of the island, Playa Girón at the Bay of Pigs, and Playa Ancón near Trinidad. All turned out to be worth visiting and made for some amazing dives. (We also visited Cayo Levisa on Cuba’s Northern shore, which has a nice beach, but was mediocre at best for snorkeling from the shore. Go to Cayo Jutia instead if you want good snorkeling, or book the boat to go diving at Cayo Levisa.)
Cuba is a communist country; instead of Coca-Cola advertisements, you’ll find billboards reminding you that “the revolution is invincible”. Economic trade embargoes make acquiring scuba diving gear a problem (although I haven’t noticed any shortages in this area myself). There’s usually just one dive center running the diving operations, so there is not much choice, but on the other hand, you’ll rarely encounter crowded dive sites or reduced visibility due to other divers silting up the water.

Tourist activities such as diving are usually run through government-owned dive centers. There’s a network of official travel agents across the country, which can help you with booking trips and getting in contact with dive centers. Many of them are not easily reachable by phone, but you can sometimes book online in advance of your trip. In my experience, it would have been fine to just show up at the dive center at the right time of day with your certification card and dive logs to prove your experience, and you’d be almost good to go. I decided to bring my own gear (regulator, jacket BCD, 3mm wetsuit, fins, mask and torch) in order to avoid any annoyances or unsafe situations due to flaky equipment.

My personal experience has been very positive: I loved the different dive sites, the guides were generally skilled, and I had a whole bunch of amazing dives in Cuba. Would recommend.

Maria la Gorda

Maria la Gorda's beach

Maria la Gorda’s beach

Maria la Gorda is a bit off the beaten path, one of the more remote locations on the Cuban mainland. We travelled there from Vinales in 3 hours by car. The location itself consists of a hotel, two restaurants and a dive center on a beautiful beach that also makes for some very nice snorkeling: you can basically walk in and enjoy lots of fish, even species you won’t often encounter while scuba diving. Kim spotted barracuda, jacks, parrot fish, a moray and even an octopus just a few meters off the beach.
Diving there is done by boat, three times a day, at 8:30, 11:00 and 15:00; almost all of the diving spots are within a 15-minute boat ride. It’s also possible to do a night dive, but that has to be arranged with the staff. If you’re doing a day trip from Vinales just for diving, you’ll arrive in time to do two dives before leaving; the dive center does accommodate day guests. The surface time in between was enough to not dip too deep into nitrogen levels with 3 daily dives, the first two of them deep. Dives are usually limited to 45 minutes of bottom time. If you simply can’t get enough of scuba diving, that’s a lot of diving.
The sites I visited were all amazing in their own right. Beautiful walls littered with coral, dropping down to 2000m right below you, really nice tunnels to dive through, large fan corals, barrel sponges of 3m and more, large groupers, jacks, and the usual variety of coral reef fish (parrot fish, box fish, angelfish, butterfly fish, etc.). Fish of more than 50cm in size were no exception, which seems like a sign that at least this part of the Caribbean is less overfished than other areas, especially compared to Asia.
The procedures on the boat were a bit unclear; I would have liked a better introduction there. Other people were happy to help, so this wasn’t much of a problem. The guides’ briefings were too short for my taste: knowing a bit about the navigation planned underwater helps to keep the group together more closely and, in the end, improves safety. I asked the guide to tell me about the planned route under water, which he did on the following dives. That allowed me to take some responsibility myself (I really like to know under water that everybody who went into the water comes out of it as well). That said, there’s always room for improvement, and it didn’t lead to any dangerous situations. Taking responsibility for your buddies is part of diving, and as long as everybody takes it seriously, no problem.

Bahía de Cochinos (Bay of Pigs)

Museum in Playa Giron

Museum in Playa Giron

The Bay of Pigs is historically known for an attempt by the CIA to invade Cuba with US-friendly troops and overthrow the then-young communist government. Lack of political support from the US government, underestimation of the Cuban revolutionary troops and insufficient secrecy led to an utter failure of the invasion attempt. Nowadays, the Bay of Pigs is a rather calm area with excellent scuba diving. Basically, the eastern shore of the bay is lined with a coral reef wall very close to the shore. Commonly, one does a shore entry here, swims out about 100m and then drops into the wall.
I dove with Ronel’s local dive operation. A converted bus would pick us up in the morning, go to the dive center (next to the government-run hotel in Playa Giron) to gear up, then drive north for 10-20 minutes to one of the dive spots along the bay, where we’d do two shore dives. We’d return around 1 o’clock in the afternoon, so there’s plenty of time for other activities (of which, to be fair, there aren’t that many apart from the beach and a “not-quite-neutral” museum about the failed invasion attempt).

The Cenote Dive

Entering the Cenote

Entering the Cenote

Cenotes are sinkholes in the shallow limestone ground near the coast. Small pools filled with fresh surface water lead to extended cave systems flooded with salt water, so one enters a fresh water pool in the woods, then descends through a halocline, the border between fresh and salt water. The caves fill with salt water seeping in from the sea, but as there are almost no currents, rain water that comes in from above stays on top as fresh water. The halocline produces a weird, disturbed visual effect when one dives through it, but above and underneath it, visibility is clear. These sinkholes are often quite deep: the one we entered was 26m deep at the entry point, and the deepest points of the cave system went down to 60m. We entered through a tunnel, a vertical crack in the limestone about 1m wide, wide enough to comfortably swim through. During the dive, we made our way about 350m into the cave to a maximum depth of 32m. As the shape of the cave determines the dive profile, I ended up having to add an extra decompression stop before surfacing.
We went through a lower tunnel into a larger cave, which had some beautiful sunlight shining in through cracks above in blue-green colors. Visibility was excellent, and the sunrays produced an almost magical ray of sunlight in the water of the deeper cave.
Descending into the Cenote

Descending into the Cenote

Through the halocline above us, the sunlight was broken up by the different densities of air, fresh water and salt water until it hit the particles drifting in the water, or the walls and bottom of the cave. This dive was guided by a specialized cave diving guide. Briefings were thorough, and after a first reef dive to check everyone’s buoyancy and general diving skills, we did our second dive of the day in the Cenote. I’ve found this video, which gives an impression of what such a dive looks like. If you’re an advanced diver, comfortable with overhead environments and experienced enough, I’d definitely recommend doing a Cenote dive. For me, it’s been an unforgettable experience.

Playa Ancón

Playa Ancón is the beach village close to Trinidad, on a peninsula about 7km from the town. I found it a bit complicated to book the diving there: tour operators in Trinidad would tell me that everything was fully booked, but inquiring at the dive center in Playa Ancón itself, I was told to just show up before 9am and I’d be fine. That’s what I did, and it was indeed no problem to go diving there. We’d board the boat from the beach and go out a few hundred meters, just a bit too far to swim comfortably.
Even with a somewhat choppy sea that day, the diving was excellent. Good guides led me over an interesting seascape with sandbed “roads” in between coral fields, and much life in between. Highlights of these dives were a wreck, which lay across two large rocks, creating a swim-through, a school of tuna (about 40 fish), and a 1.2m eagle ray. The water was warm, and visibility was in the range of 15m (considered quite bad for the location, so expect better when you get there). The dive shop was run professionally, but be prepared for “laid-back scheduling”, which means that, depending on the day, two boat dives with a surface interval on shore might run into the early afternoon. (I’m mentioning it here since every other dive center I dove with in Cuba was exceptionally punctual, contrary to what I had read beforehand.)

March 9th, 2015

Spring dive.

image

Spring is just showing its first rays of sun, so we went diving today. We did a shallow dive in the morning in a lake close by, which is known for decent diving. The water was 4°C, really chilly for a recreational dive; visibility under water was about 5-7m, which is quite good for this kind of water. During our dive, we had really nice light, as we didn’t go very deep and it was a really sunny day.

I used Lycra undergarments and a 2mm neoprene bodywarmer under a 5mm wetsuit, plus 5mm neoprene gloves, hood and boots. My coldest dive so far had been in 18°C water (in the same lake, in late summer), so water this cold was quite something new. Richard, my buddy, has a lot of experience in these conditions as well, so I was in excellent hands.

After an initial bit of a shock when we entered the water, and a quick weighting and buoyancy check, I regained my serene diving breathing rhythm and started trusting the suit to keep me warm enough to take it for a swim. We went under for almost half an hour, but took the first signs of hypothermia seriously in order to keep it safe and healthy. During the dive, we saw some fun freshwater lobsters, but otherwise a fairly dormant ecosystem. Aside from the different layers of the suit, I wanted to test out my new equipment: wireless integrated tank pressure gauge, jacket BCD, fins, regulator, and an air tank I borrowed from a neighbor.

I’m really happy with the new gear, everything functions perfectly, and there’s nothing that doesn’t have a clear purpose. I’ve taken a couple of notes for the next dive with this equipment, though those are only small adjustments, such as strapping on the tank a bit lower for a better weight distribution (which translates to a more hydrodynamic body position, meaning less exertion, lower air consumption and a more relaxed dive).

My next dive will be in 26°C water. Phew.

February 19th, 2015

Say hi to cuttlefish!

Cuttlefish icon previewer

Cuttlefish icon previewer

One of the things I’ve been sorely missing when doing UI design and development was a good way to preview icons. The icon picker that ships with KDE Frameworks is quite nice, but for development purposes it lacks a couple of handy features that allow previewing and picking icons based on how they’re rendered.

Over the Christmas downtime, I found some spare cycles to sit down and hammer out a basic tool which allows me to streamline that workflow. In the course of writing this little tool, I realised that it’s not only useful for a developer (like me), but also for artists and designers who often work on or with icons. I decided to target these two groups (UI developers and designers) and to streamline this tool as well as possible for their use cases.

Cuttlefish is the result of that work. It’s a small tool to list, pick and preview icons. It tries to follow the way we render icons in Plasma UIs as closely as possible, in order to make the previews as realistic as possible. I have just shown this little tool to a bunch of fellow Plasma hackers here at the sprint, and it was very well received. I’ve collected a few suggestions on what to improve, and of course, Cuttlefish being brand-new, it still has a few rough edges.

You can get the source code using the following command:

git clone kde:scratch/sebas/cuttlefish
# or, since the move mentioned in the edit below:
git clone kde:plasmate

and build it with cmake.

Enjoy cuttlefish!

[Edit] We moved cuttlefish to the Plasmate repository, it’s now part of Plasma’s SDK.

February 18th, 2015

“Killing the Cashew” done right.

Plasma Desktop's Toolbox

Plasma Desktop’s Toolbox

One of the important design cornerstones of Plasma is that we want to reduce the amount of “hidden features” as much as possible. We do not want to rely on the user knowing where to right-click in case she wants to find a certain desktop-related option, say adding widgets, opening the desktop settings dialog, or the activity switcher. For this, Plasma 4.0 introduced the toolbox, a small icon that, when clicked, opens a small dialog with actions related to the desktop. To many users, this is an important lifeline when they’re looking for a specific option.

In Plasma 4.x, there was a Plasmoid, provided by a third party, that used a pretty gross hack to remove the toolbox (which was depicted as the old Plasma logo, resembling a cashew a bit). We did not support this officially, but if people deliberately risk breaking their desktop, who are we to complain; they get to keep both pieces.

During the migration to QML (which began during Plasma 4.x times), one of the parts I ported to QtQuick was this toolbox. Like so many other components in Plasma, it is actually a small plugin. That means it’s easy to replace the toolbox with something else. This feature has not really been documented, as it’s more or less an internal thing, and we didn’t want to rob users of this important lifeline.

Some users want to reduce clutter on their desktop as much as possible, however, and the options offered in the toolbox are also accessible elsewhere (if you know where to find them). Replacing the toolbox is actually pretty easy. You can put a unicorn dancing on a rainbow around your desktop there, but you can also replace it with just an empty object, which means that you’re effectively hiding the toolbox.

For users who would rather have their toolbox gone, I’ve prepared a small package that overrides the installed toolbox with an empty one. Hiding the toolbox is as easy as installing this minimal package; the toolbox then doesn’t get shown, or even loaded.
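Such a package is tiny. As a rough sketch (the layout and comments here are my assumption based on how Plasma packages are commonly structured, not a listing of the actual downloadable file), it contains little more than a metadata file claiming the toolbox’s plugin name and an empty QML item:

```
emptytoolbox/
├── metadata.desktop       claims the plugin name org.kde.desktoptoolbox
└── contents/ui/main.qml   contains just an empty Item, so nothing is drawn
```

Because the package is registered under the same name the shell looks up (org.kde.desktoptoolbox, the same name used in the removal command below), it shadows the stock toolbox package.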

I would not recommend doing this, especially not as a default, but at the same time, I don’t want to limit what people do with their Plasma to what we as developers envision, so there you go.

Download this file, then install it as follows:


plasmapkg2 -t package -i emptytoolbox.plasmoid

Now restart the Plasma Shell (either by stopping the plasmashell process, or by logging out and in again), and your toolbox should be gone.

If you want it back, run

plasmapkg2 -t package -r org.kde.desktoptoolbox

Then restart Plasma and it’s back again.

Beyond just removing the toolbox, I’d like to invite and encourage everybody to come up with nice, crazy and beautiful ideas for how to display and interact with the toolbox. The toolbox being a QtQuick Plasmoid package, it’s easy to change and to share with others.

December 10th, 2014

Trusty Old Router

Linksys WRT54G
I decommissioned a fine piece of hardware today. This access point brought the first wireless connectivity to my place. It’s been in service for more than 11 years, and is still fully functional.

In the past years, the device has been running OpenWRT, which is a really nice and very powerful little Linux distribution specifically for this kind of router. OpenWRT actually sprang from the original firmware for this device, and was extended, updated, improved and made available for a wide range of hardware. It is OpenWRT that has kept this piece of hardware useful lately, and I’m really thankful for that. It also shows how much value releasing firmware under an Open Source license can add to a product. Aside from the long-term support effect of releasing the firmware, updated firmware added features to the router which were otherwise only available in much more expensive hardware.

The first custom firmware I ran on this device was Sveasoft. In the long run, this ended up not being such a good option, since the company producing the software really stretched the meaning of the GPL — while you were technically allowed to share software with others, doing so would end your support contract with the company — no updates for you. LWN has a good write-up about this story.

Bitter-sweet gadget-melancholy aside, the replacement access point brings a 4 times speed increase to the wifi in my home office: less finger-twiddling, more coding. :)

November 5th, 2014

Diving into Plasma’s 2015

Sea anemone with anemone fish

Sea anemone with anemone fish

TL;DR: The coming year is full of challenges, old and new, for the Plasma team. In this post, I’m highlighting end-user readiness, support for Wayland as display server and support for high-dpi displays.

Before you continue reading, have a gratuitous fish! (Photo taken by my fine scuba diving buddy Richard Huisman.)
Next year will be interesting for Plasma, and two things that are lined up are particularly so. In 2015, distributions will start shipping Plasma 5 as their default user interface. This is the point where more “oblivious” users will make their first contact with Plasma 5. As we’re navigating through the just-after-a-big-release phase, which I think we’re mastering quite nicely, we are approaching the point where a desktop that has had so much changed under its hood becomes a really polished and complete working environment: one that feels modern, supports traditional workflows well, and is built on top of a top-notch, modern, modularized set of libraries, KDE’s Frameworks.

In terms of user demographics, we’re almost certain to see one thing happening with the new Plasma 5 UI: as distros start to ship it by default, this is what new users are going to see. Not everybody in this group is interested in how cool the technology stack lines up; they just want to get their work done, and certainly don’t want to feel impeded in their daily workflows. This is the target group we’ve been focusing our work on in the months since summer, since the release of Plasma 5.0. “A wider group of users” sounds pretty abstract, so let’s take some numbers: while Plasma 5 is already run by a group of people, the number of users who get it via Linux distributions is much larger than the group of early adopters. This means that by the end of next year, Plasma 5 will be in the hands of millions of users, probably around 10 million, and increasing. (This is interpolated from an estimation of Plasma users in the tens of millions, with the technology adoption lifecycle taken as a base.)

The other day, I read on a forum a not particularly well-informed, yet interesting opinion: “Plasma 5 is not for end users, its Wayland support is still not ready”. With “Plasma 5 is not for end users”, I do actually agree, in a way. While I know that there is a very sizable group of people who have been having a blast running Plasma since 5.0, when talking about end users, one needs to look at the cases where it isn’t suitable yet. For one, these give concrete suggestions of what to improve, so they’re important for prioritization. This user feedback channel has been working very well so far: we’ve received hundreds of bug reports, which we could address in one way or another, we have been refining our release and QA processes, and we’ve filled in many smaller and bigger gaps. There’s still much more work to do, but the tendency is exactly right. Each fix for a real-world problem increases the group of users Plasma is ready for, and improves the base to build a more complete user experience upon.

What’s also true about the statement of the above “commenter on the Internet” is that our Wayland support isn’t ready. That, however, is entirely orthogonal to the “is it ready for end users?” question. Support for Wayland is a feature we’re gradually introducing, very much in a release-early-release-often fashion. I expect our support for this new display server to reach a point where one can run a full session on top of Wayland in the course of next year. Long-term, I expect most of our users to run the user interface on top of Wayland, effectively deprecating X11. Yet X11 will stay around for a long time; there’s so much code written on top of X11 APIs that we simply can’t expect it to vanish from one day to the next. Some Linux distros may switch relatively early, while for enterprise distros that switch might only happen in the far future, and that doesn’t even count existing installations. That is not a problem, though, since Wayland and X11 support are well encapsulated and not supposed to get in each other’s way; we do the same trick already on other operating systems, and it’s a proven and working solution.

Then, there’s the mission to finish high-DPI support, which means rendering a usable UI on displays with more than 200 DPI. UI elements have to scale, or be rendered with more detail and fidelity. One approach is to simply scale everything up in every direction by a fixed factor, but while that would get the sizing right, it would also negate any benefit of the increased number of pixels. Plasma 5 already solves many issues around high DPI, but not without fiddling and going over different settings to get them right. Our goal is to support high-DPI displays out of the box: no fiddling, just sensible defaults when a high-DPI display is connected. As there are 101 corner cases to this, it’s not easy to get right, and it will take time and feedback cycles. Qt 5.4, which is around the corner, brings some tools to support these displays better, and we’ll be adjusting our solutions to make use of that.
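To make the “fixed scale factor” idea concrete, here is a toy sketch. The function and the 96 DPI baseline are my illustration of the general approach, not Plasma’s or Qt’s actual code:

```cpp
#include <cmath>

// Derive an integer UI scale factor from a display's physical DPI,
// taking the traditional 96 DPI as the 1x baseline and never scaling
// below 1x. Real implementations are more subtle: fractional factors,
// per-screen settings, font DPI vs. rendering DPI, and so on.
int naiveScaleFactor(double physicalDpi)
{
    const long factor = std::lround(physicalDpi / 96.0);
    return factor < 1 ? 1 : static_cast<int>(factor);
}
```

A 220 DPI laptop panel would get a factor of 2 this way, which sizes everything correctly but, as noted above, throws away the extra pixels that could instead be spent on crisper rendering.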

It seems we are not quite yet running out of interesting topics that make Plasma development a lot of fun. :)

July 25th, 2014

Plasma’s Road to Wayland

Road to Wayland

With the Plasma 5.0 release out the door, we can lift our heads a bit and look forward, instead of just looking at what’s directly ahead of us and making that work by fixing bug after bug. One of the important topics which we have (kind of) excluded from Plasma’s recent 5.0 release is support for Wayland. The reason is that much of the work that has gone into renovating our graphics stack was also needed in preparation for Wayland support in Plasma. In order to support Wayland systems properly, we needed to lift the software stack to Qt 5 and make the X11 dependencies in our underlying libraries, Frameworks 5, optional. This part is pretty much done. We now need to ready support for non-X11 systems in our workspace components: the window manager and compositor, and the workspace shell.

Let’s dig a bit deeper and look at the aspects underlying, and resulting from, this transition.

Why Wayland?

The short answer to this question, from a Plasma perspective, is:

  • Xorg lacks modern interfaces and protocols, instead it carries a lot of ballast from the past. This makes it complex and hard to work with.
  • Wayland offers much better graphics support than Xorg, especially in terms of rendering correctness. X11’s asynchronous rendering makes it impossible to be sure about the correctness and timeliness of the graphics that end up on screen. Wayland, instead, provides the guarantee that every frame is perfect.
  • Security considerations. Under X11, it is almost impossible to shield applications properly from each other, as X11 allows applications to wiretap each other’s input and output. This makes it a security nightmare.

I could go deeply into the history of Xorg, and add lots of technicalities to that story, but instead of giving you a huge swath of text, hop over to YouTube and watch Daniel Stone’s presentation “The Real Story Behind Wayland and X” from last year’s LinuxConf.au, which gives you all the information you need, in a much more entertaining way than I could present it. H-Online also has an interesting background story, “Wayland — Beyond X”.

While Xorg is a huge beast that does everything, like input, printing, graphics (in many different flavours), Wayland is limited by design to the use-cases we currently need X for, without the ballast.
With all that in mind, we need to respect our elders and acknowledge Xorg for its important role in the history of graphical Linux, but we also need to look beyond it.

What is Wayland support?

KDE Frameworks 5 apps under Weston

Without communicating our goal, we might think of entirely different things when talking about Wayland support. Will Wayland retire X? I don’t think it will in the near future; the point where we can stop caring for X11-based setups is likely still a number of years away, and I would not be surprised if X11 was still a pretty common thing to find in enterprise setups ten years down the road from now. Can we stop caring about X11? Surely not, but what does this mean for Wayland? The answer is that support for Wayland will be added, and that X11 will no longer be required to run a Plasma desktop: it will be possible to run Plasma (and apps) under both X11 and Wayland systems. This, I believe, is the migration process that serves our users best, as the question “When can I run Plasma on Wayland?” can then be answered on an individual basis, and nobody is going to be thrown into the deep end (at least not by us; your distro might still decide to not offer support for X11 anymore — that is not in our hands). To me, while a quick migration to Wayland (once ready) is desirable, realistically people will be running Plasma on X11 for years to come. Wayland can be offered as an alternative at first, and then be promoted to primary platform once the whole stack matures further.

Where are we now?

With the release of KDE Frameworks 5, most of the issues in our underlying libraries have been ironed out; that means X11-dependent codepaths have become optional. Today, it’s possible to run most applications built on top of Frameworks 5 under a Wayland compositor, independent from X11. This means that applications can run under both X11 and Wayland with the same binary. This is already really cool, as without applications, having a workspace (which in a way is the glue between applications) would be a pointless endeavour. This chicken-and-egg situation plays both ways, though: without a workspace environment, just having apps run under Wayland is not all that useful. This video shows some of our apps under the Weston compositor. (This is not a pure Wayland session “on bare metal”, but one running in an X11 window in my Plasma 5 session for the purpose of the screen-recording.)

For a full-blown workspace, the porting situation is a bit different, as the workspace interacts much more intimately with the underlying display server than applications do at this point. These interactions are well-hidden behind the Qt platform abstraction. The workspace provides the host for rendering graphics onto the screen (the compositor) and the machinery to start and switch between applications.

We are currently missing a number of important pieces of the full puzzle: interfaces between the workspace shell, the compositor (KWin) and the display server are not yet well-defined or implemented, so some pioneering work is ahead of us. There are also a number of workspace components that need bigger adjustments, global shortcut handling being a good example. Most importantly, KWin needs to take over the role of Wayland compositor. While some support for Wayland has already been added to KWin, the work is not yet complete. Besides KWin, we also need to add support for Wayland to various bits of our workspace. Information about attached screens and their layout has to be made accessible. Global keyboard shortcuts only support X11 right now. The screen locking mechanism needs to be implemented. Information about windows for the task manager has to be shared. Dialog positioning and rendering needs to be ported. There are also a few assumptions in startkde and klauncher that currently prevent them from starting a session under Wayland, and more bits and pieces which need additional work to offer a full workspace experience under Wayland.

Porting Strategy

The idea is to be able to run the same binaries under both X11 and Wayland. This means that we need to decide at runtime how to interact with the windowing system. The following strategy is useful (in descending order of preference):

  • Use abstract Qt and Frameworks (KF5) APIs
  • Use XCB when there are no suitable Qt and KF5 APIs
  • Decide at runtime whether to call X11-specific functions

In case we have to resort to functions specific to a display server, X11 should be optional both at build-time and at run-time:

  • Make the build of X11-dependent code optional. This can be done through plugins, which are optionally included by the build system, or (less desirably) by #ifdef’ing blocks of code.
  • Even with X11 support built into the binary, calls into X11-specific libraries should be guarded at runtime (QX11Info::isPlatformX11() can be used to check at runtime).

Get your Hands Dirty!

Computer graphics are an exciting thing, and many of us are longing for the day we can remove X11 from our systems. This day will eventually come, but it won’t come by itself. It’s a very exciting time to get involved and make the migration happen. As you can see, we have a multitude of tasks that need work. An excellent first step is to build the thing on your system, try running it, fix issues, and send us patches. Get in touch with us on Freenode’s #plasma IRC channel, or via our mailing list plasma-devel(at)kde.org.

July 15th, 2014

Plasma 5 Ingredients

Plasma 5.0 is out!

Plasma 5.0 is out. I’ve compiled a (non-exhaustive) list of ingredients that have been put into this release, to give the reader an estimate of the dimensions of the project and the achievement of this milestone:

  • 46 kilos of espresso (pure arabica)
  • The milk of 3 cows
  • a Swiss mountain of chocolate
  • 140 sleepless nights mulling over code
  • 354 liters of pressurized air breathed during scuba dives
  • One encounter with a Mantis shrimp
  • The total length of 43 bathtubs full of tiger tails fixed in pixel-alignment problems
  • 817 hours spent in front of webcams
  • 189MB of irc lines written (compressed)
  • 80,000 automated builds to keep us in check
  • 2403 bugs in the code that had to die
  • A swimming-pool full of tears cried over graphics driver problems and crashers buried deep down in scripting engines and scenegraphs (the pool allegedly was previously used for skateboarding by Greg KH)
  • 5 magic wands
  • 800 million pixels
  • 37843200000 frames rendered
  • Too many puppies
  • 7 virtual goats sacrificed during a total of 28 full moon ceremonies
  • 450 ml of holy water
  • 76 rock bands
  • 119 beats per minute
  • 8 bits alpha channels
  • 52 WTFs
  • The equivalent of 3 dead trees in recycled paper
  • 2 small branches of cedarwood for pencils
  • 1 box of crayons

Nothing like entirely made-up statistics.

tl;dr:

Plasma == ♥

… but also some really hard work, made possible by the sacrifices (see above) of many great people.

June 18th, 2014

Five Musings on Frameworks Quality

Gratuitious ape photo

Musing…


In many cases, high-quality code counts more than bells and whistles. Fast, reliable and well-maintained libraries provide a solid base for excellent applications built on top of them. Investing time into improving existing code increases the value of that code, and of the software built on top of it. For shared components, such as libraries, this value is often multiplied by the number of users. With this in mind, let’s have a closer look at how the Frameworks 5 transition affects the quality of the code so many developers and users rely on.

KDE Frameworks 5 will be released 2 weeks from now. This fifth revision of what is currently known as the “KDE Development Platform” (or, technically, “kdelibs”) is the result of 3 years of effort to modularize the individual libraries (and “bits and pieces”) we shipped as the kdelibs and kde-runtime modules as part of KDE SC 4.x. KDE Frameworks contains about 60 individual modules: libraries, plugins, toolchain additions, and scripting extensions (QtQuick, for example).

One of the important aspects that has seen little exposure when talking about the Frameworks 5 project, but which is really at the heart of it, is the set of processes behind it. The Frameworks project, as happens with such transitions, has created a new surge of energy for our libraries. The immediate result, KF5’s first stable release, is a set of software frameworks that induce minimal overhead, are source- and binary-stable for the foreseeable future, are well maintained, get regular updates, and are proven, high-quality, modern and performant code. There is a well-defined contribution process and no mandatory copyright assignment. In other words, it’s a reliable base to build software on, in many different respects.

Maturity

Extension and improvement of existing software are two ways of increasing its value. KF5 does not contain revolutionary new code; rather than extending it, in this major cycle we’re concentrating on widening the use cases and improving quality. The initial KDE4 release contained a lot of rewritten code and changed APIs, and meant a major cleanup of hard-to-scale and sometimes outright horrible code. Even over the course of 4.x, we had a couple of quite fundamental changes to core functionality, for example the introduction of the semantic desktop features and Akonadi, and, in Plasma, the move to QML 1.x.
All these new things have now seen a few years of work on them (and, in the case of Nepomuk, a replacement of its guts with the migration to the much more performant Baloo framework). These things are mature, stable and proven to work by now. The transition to Qt5 and KF5 doesn’t actually change a lot about that; we’ve worked out most of the kinks of this transition by now. For much application-level code using KDE Frameworks, the porting will be rather easy to do, though not zero effort. The APIs themselves haven’t changed a lot, and the changes needed to make something work usually involve updating the build-system. From that point on, the application is often already functional, and can be gradually moved away from deprecated APIs. Frameworks 5 provides the necessary compatibility libraries to ease porting as much as possible.
Surely, with the inevitable and purposeful explosion of the user-base following a first stable release, we will get a lot of feedback on how to further improve the code in Frameworks 5. Processes, requirements and tooling for this are in place. Also, being an open system, we’re ready to receive your patches.
Frameworks 5, in many ways, encodes more than 15 years of experience into a clearly structured, stable base on which to build applications for all kinds of purposes, on all kinds of platforms.

Framework Caretakers

With the modularization of the libraries, we’ve looked for suitable maintainers for them, and we’ve been quite successful in finding responsible caretakers for most of them. This is quite important as it reduces bottlenecks and single points of failure. It also scales up the throughput of our development process, as the work can be shared across more shoulders more easily. This achieves quicker feedback for development questions, code review requests, or help with bug fixes. We don’t actually require module maintainers to fix every single bug right away, they are acting much more as orchestrators and go-to-guys for a specific framework.

More Reviews

More peer review of code is generally a good thing. It provides safety nets for code problems, catches potential bugs, and makes sure code doesn’t do dumb things, or smart things in the wrong way. It also allows transfer of knowledge by talking about each other’s code. We have already been using Review Board for some time, but the work on Frameworks 5 and Plasma 5 has really boosted our use of Review Board, and review processes in general. It has become a more natural part of our collaboration process, and that’s a very good thing, both socially and code-quality-wise.
More code review also keeps us developers in check. It makes it harder to slip in a bit of questionable code, a psychological thing. If I know my patches will be looked at critically, line by line, I take more care when submitting them. The reasons range from saving other developers the time to point out issues which I could have found myself, had I gone over the code once more, to looking better when I submit a patch that is clean and nice and can go in as-is.
Surely, code reviews can be tedious and can slow down development, but with the right dose, they lead in the end to better code, which can be trusted down the line. The effects might not be immediately obvious, but they are usually positive.

Tooling

Splitting up the libraries and getting the build-system up to the task introduced major breakage at the build level. In order to make sure our changes would work, and actually result in buildable and working frameworks, we needed better tooling. One huge improvement in our process was the arrival of a continuous integration system. Pushing code into one of the Frameworks nowadays means that it is built in a clean environment and automated tests are run. The system is also used to build a framework’s dependencies, so problems in the code that might have slipped the developer’s attention are more often caught automatically. Usually, the results of the continuous integration system’s automated builds are available within a few minutes, and if something breaks, developers get notifications via IRC or email. Having these short turnaround cycles makes it easier to fix things, as the memory of the change leading to the problem is still fresh. It also saves others time: it’s less likely that I find a broken build when I update to the latest code.
The build also triggers running autotests, which have been extended already, but are still quite far away from complete coverage. Having automated tests available makes it easier to spot problems, and increases the confidence that a particular change doesn’t wreak havoc elsewhere.
Neither continuous builds, nor autotests can make 100% sure that nothing ever breaks, but it makes it less likely, and it saves development resources. If a script can find a problem, that’s probably vastly more efficient than manual testing. (Which is still necessary, of course.)
A social aspect here is that not a single person is responsible if something breaks in autobuilds or autotests, it rather should be considered a “stop-the-line” event, and needs immediate attention — by anyone.

Continuous Improvement

This harnessing allows us to concentrate more on further improvements. Software in general is subject to continuous evolution, and Frameworks 5.0 is “just another” milestone in that ongoing process. Better scalability of the development processes (including QA) is not just about getting to a stable release; it supports further improvement. As much as we’ve updated code with more modern and better solutions, we’re also “upgrading” the way we work together, and the way we improve our software further. It’s the human build system behind the software.

The circle goes all the way round, the continuous improvement process, its backing tools and processes evolve over time. They do not just pop out of thin air, they’re not dictated from the top down, they are rather the result of the same level of experience that went into the software itself. The software as a product and its creation process are interlinked. Much of the important DNA of a piece of software is encoded in its creation and maintenance process, and they evolve together.