Interview on hostingadvice.com

How KDE’s Open Source community has built reliable, monopoly-free computing for 20+ years
A few days ago, Hostingadvice.com’s lovely Alexandra Leslie interviewed me about my work in KDE. The interview has just been published on their site. The resulting article gives an excellent overview of what KDE is and does, and why, along with some insights and personal stories from my own history in the Free software world.

At the time, Sebastian was only a student and was shocked that his work could have such a huge impact on so many people. That’s when he became dedicated to helping further KDE’s mission to foster a community of experts committed to experimentation and the development of software applications that optimize the way we work, communicate, and interact in the digital space.

“With enough determination, you can really make a difference in the world,” Sebastian said. “The more I realized this, the more I knew KDE was the right place to do it.”

“Plasma rocks Akademy” published

Marco and Sebas talk Plasma: State of the Union
I published an article about the recent Akademy conference on KDE’s news site, dot.kde.org. This article discusses the presentation Marco and I gave (Plasma: State of the Union), Wayland, Kirigami, Plasma Mobile and our award-winning colleague Kai. Enjoy the read!

Plasma’s Vision

Plasma — Durable, Usable, Elegant.
Over the past weeks, we (KDE’s Plasma team) have distilled the reasons why we do what we do and what we want to achieve into a vision statement. In this article, I’d like to present Plasma’s vision and explain a bit of what is behind it. Let’s start with the statement itself, though:

Plasma is a cross-device work environment by the KDE Community where trust is put on the user’s capacity to best define her own workflow and preferences.

Plasma is simple by default, a clean work area for real-world usage which intends to stay out of your way.
Plasma is powerful when needed, enabling the user to create the workflow that makes her more effective to complete her tasks.

Plasma never dictates the user’s needs, it only strives to solve them. Plasma never defines what the user is allowed to do, it only ensures that she can.

Our motivation is to enable actual work to happen, across devices, across different platforms, using any application needed.

We build to be durable, we create to be usable, we design to be elegant.

I’ve marked a few especially important bits in bold; let’s get into a bit more detail:

Cross-device — Plasma is a work environment for different classes of devices; it adapts to the form factor and offers a user interface suitable for the device’s characteristics (input methods such as touchscreen, mouse, or keyboard) and constraints (screen size, memory and CPU capabilities, etc.).

Define the workflow — Plasma is a flexible tool that can be set up as the user wishes and needs, to make her more effective and get the job done. Plasma is not a purpose in itself; rather, it enables and gets out of the way. This isn’t to say that we’ll introduce every single option one can think of, but we do strive to serve many users’ use cases.

Simple by default means that Plasma is self-explanatory to new users and that it offers a clean and sober interface in its default state. We don’t want to overwhelm the user, but rather present a serene and friendly environment.

Powerful when needed, on the other hand, means that under the hood, Plasma offers tremendous power that allows the user to get almost any job done efficiently and without flailing.

We build to be durable, we create to be usable, we design to be elegant. — The end result is a stable workspace that the user can trust, that is friendly and easy to use, and that is beautiful and elegant in how it works.

Software from the Source

In this article, I am outlining an idea for an improved process of deploying software to Linux systems. It combines the advantages of traditional, package-management-based systems with containerized software delivered through systems such as Flatpak, Snap, or AppImage. An improved process would allow us to make software deployment more efficient across the whole Free software community, put better-supported software on users’ systems, and achieve better quality at the same time.

Where we are going
In today’s Linux and Free software ecosystems, users usually receive all their software from one source. This usually means that the software is well integrated with the system, that packages can be tested in combination with each other, and that support comes from a single vendor. Compared to systems in which individual software packages are downloaded from their respective vendors and then installed manually, this has huge advantages: it makes it easy to get updates for everything installed on your system with a single command. The base system and the software come from the same hands and can be tested as a whole. This ease of upgrading is almost mind-boggling to people used to the Windows world, where you’d download 20 .exe installer files after installing the OS and then have to update each one individually, a hugely time-consuming process that is at times outright dangerous, as software easily gets out of date.
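On a Debian-based system, for instance, updating the base system and every installed application is a single step. A minimal sketch, purely to illustrate the traditional model:

    # Refresh the package lists, then upgrade everything installed,
    # base system and applications alike, from the distro's archives
    sudo apt update && sudo apt full-upgrade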

Traditional model of software deployment
There are also downsides to how we currently handle software deployment and installation, most of which revolve around update cycles. There is always a middleman who decides when and what to upgrade. This results in applications getting out of date, which leads to a number of problems, as security and bug fixes don’t make it to users in a timely fashion:

  • It’s not unusual that software installed on a “supported” Linux system is outdated and not at all supported upstream anymore on the day it reaches the user. Worse, policies employed by distributions (or, more generally, operating system vendors) will prevent some software packages from ever getting an update other than the most critical security fixes within the whole support cycle.
  • Software out in the wild isn’t supported upstream with its problems; bug reports reaching the upstream developers are often invalid because the problems have already been fixed in newer versions, or users are asked to test the latest version, which most of the time isn’t available for their OS. This makes it harder to address problems with the software, and it’s frustrating for both users and developers.
  • Even if bugs are fixed in a timely fashion, the likelihood of users of traditional distributions actually receiving these updates without manually installing them is small, especially if they are not aware of the fix.
  • Packaging software for a variety of different distributions is a huge waste of time. While this can be automated to some extent, it’s still less than ideal: work is duplicated, packaging bugs happen simply because distribution packagers don’t fully understand how a specific piece of software is best built and deployed (there’s a wide variety of software, after all), and software stacks often aren’t well aligned. (More on that later!)
  • Support cycles differ, leading to two problems:
      ◦ Distros need to guarantee support for software they didn’t produce.
      ◦ Developers are not sure how much value there is in shipping a release and subsequent bugfix releases, since it usually takes at least months until many users upgrade their OS and receive the new version. Related to that, it can take a long time until a user confirms a bug fix.
  • Only a small number of distributions can package every single piece of useful software available. This essentially limits the user’s choice, because her niche distro of choice may simply not have all the needed software available.

The value of downstreams

One argument that has been made is that downstreams do important work, too. An example: legal or licensing problems are often found during reviews at SUSE, one of KDE’s downstream partners. These are fed back to KDE’s developers, where the problems can be fixed and made part of upstream. This doesn’t have to change at all; in fact, with a quicker deployment process, we’d be able to ship these fixes to users faster. Likewise, QA that currently happens downstream should shift more towards upstream, so fixes get integrated and deployed quicker.

One big problem we are currently facing is the variety of software stacks our downstreams use. An example that often bites us: Linux distributions combine applications with different versions of Qt. This is not only problematic on desktop form factors, but has been a significant problem on mobile as well. Running an application against the same version of Qt that its developers built and tested it against means a smaller matrix of software stacks, and thus fewer user-visible bugs.

In short: we’d be better off if more of the work that currently happens downstream happened upstream anyway.

Upstream as software distributor

Software directly from its source
So, what’s the idea? Let me explain what I have in mind. It’s a bit radical, but given the train of thought above, it may well solve a whole range of the problems I’ve described.

Linux distributors supply the base system, but most of the UI layers, i.e. the user-visible parts, come directly from KDE (or other vendors, but let’s assume KDE for now). The user gets to run a stable base that boots a system supporting all her hardware and is updated according to the user’s flavor of choice, while the apps and relevant libraries come from upstream KDE and are maintained, tested, and deployed from there. For many of the applications, the middleman is cut out.

This leads to:

  • vastly reduced packaging effort for distros, as apps are packaged once, not once per distro,
  • much, much shorter delays until a bug fix reaches the user, and
  • stacks that are carefully put together by those who know the apps’ requirements best.

Granted, for the large part of the user’s system that stays relatively static, the current way of using packaged software works just fine. What I’m talking about are the bits and pieces the user relies on for her productivity: the apps that are fast-moving, where fixes are critical to getting work done, or simply where the user wants to stay more up to date.

Containerization allows systemic improvements

In practice, this can be done by making containerized applications more easily available to users. Discover, Plasma’s software center, can let the user install software supplied directly by KDE and keep it up to date. Users can pick where to get their software from, but distros can make smart choices on their behalf as well. Leaner distros could even rely entirely on KDE (or other upstreams) shipping applications, and fully concentrate on the base system and advancing that part.
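To make this concrete, here is a rough sketch of what such a workflow could look like on the command line today, using Flatpak. The repository URL and the choice of Okular are illustrative assumptions, not official KDE endpoints; Discover would drive the same steps graphically.

    # Add a KDE application repository as a Flatpak remote
    # (the URL is a placeholder for illustration, not a real endpoint)
    flatpak remote-add --if-not-exists kdeapps https://example.kde.org/kdeapps.flatpakrepo

    # Install an application directly from upstream KDE
    flatpak install kdeapps org.kde.okular

    # Pull application updates independently of the base system's cycle
    flatpak update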

Luckily, containerization technologies now allow us to rethink how we supply users with our software. They provide the opportunity for native Linux apps to catch up, with much shorter deployment cycles and less variety in the stack, resulting in higher-quality software on our users’ systems.

Plasma at Akademy

As every year, I will be going to KDE’s annual world summit, Akademy. This year, it takes place in Almería, Spain. In our presentation “Plasma: State of the Union”, Marco and I will talk about what’s going on in your favorite workspace, what we’ve been working on, what cool features are coming your way, and what our plans for the future are. Topics will range from Wayland, web browser integration, UI design, and mobile to release and support planning. Our presentation takes place on Saturday at 11:05, right after the keynote by Robert Kaye. If you can’t make it to Spain next week, there will likely be video recordings, which I will post here as soon as they’re widely available.

Hasta luego!

Parrotfish

I’ve been trying macro photography, using a shallow depth of field to make the subject of my photos stand out from the background. This photo of a parrotfish shows promising results beyond “blurry fish butt” quality. I’ll definitely use this technique more often in the future, especially for colorful fish with colorful coral in the background.

#photographyfirstworldproblems

White Tip

White Tip Reef Shark

This morning, on a dive at Sha’ab Iris, off the coast of Hurghada in the Red Sea, a white tip reef shark visited us for a swim-by.

Surfing progress!

As a founding member of our surf club, I decided to do what was long overdue and took my second surfing lesson.

Surfspot, Scheveningen’s harbour pier

I went to a surf school in Scheveningen on the North Sea, got in touch with an instructor, and after going over the water conditions, currents, swell, and technique, we went into the water for a good one and a half hours. There was a good swell, and next to the harbour’s pier we were mostly out of the wind. After building up some skills, like catching waves and paddling into them, I managed to ride a few waves out onto the beach. Not over a long distance, but at least I didn’t fall for a few seconds, a few times. Pretty good progress. The next try is planned for Sunday, weather allowing. It’s still the North Sea, and it’s still winter, so things can get nasty…

The water temperature was 9°C, which seems cold, but I wore my 7mm full-length suit, 3mm gloves and a 5mm hood, and it didn’t feel cold even in my fingertips after getting out of the water. So even in March, the North Sea is already very manageable.

Surfing was great fun; it’s an interesting break from diving in that it’s much more physically active. In diving, you tend to spend as little energy on anything as possible, which means that if you’re a good diver (and in the right conditions), you actually burn very little energy, and so you get cold much quicker. Bodysurfing, on the other hand, means you’re constantly moving through the swell: swimming, paddling, getting up, falling. You end up burning a lot of energy, and the cold splash of water is really welcome then.

As opposed to diving, there is no buddy system in surfing, so you can go surfing on your own (under the right conditions, of course). That makes it a bit more flexible than diving. It also trains different muscle groups, especially arms and shoulders, so it complements diving well.

Waves are awesome. :)