I am concerned. In the past years, it has become clear that real privacy is getting harder and harder to come by. Our society is quickly heading into a situation where an unknown number of entities and people can follow my every step, and where it is not possible to keep to myself what I don’t want others to know. With every step in that direction, there are fewer and fewer things about my life where I control who knows about them.
Privacy as product or weapon
Realistically, though, I won’t be able to keep my data entirely to myself: in this modern age, tools that need to share data are the norm rather than the exception. Most of the time, this sharing of data (even if only between my own devices) passes through the hands of a third party. On top of that, there’s a whole lot of spying going on, and of course there are malicious hackers keen to acquire large sets of personal identity data. My personal data can make me a product, and worse, it can be used as a weapon against me. It is really in my best interest to share only the absolute minimum of data with as few others as possible.
Traditionally, this urgency for privacy has been closely connected to the goals of Free software. This is not a coincidence. Free software was intended as a way to give control to the users, and copyleft is an effective tool to achieve “software democracy”, in the best interest of the user. Conversely, someone who is not in control of his data cannot truly be free. Privacy and freedom are in fact closely related concepts.
Software Freedom: economics and ideology
I prefer Free software over proprietary solutions. It puts me in control of what my machine does, and it allows me to fulfill my needs and steer the tools I use for communication, work and entertainment in a direction that is driven by value to the user, rather than by return-on-investment measured in money.
When I started using computers, Free software was sub-par to proprietary solutions. That is largely not the case anymore: in many cases, Free software surpasses what proprietary alternatives offer, and in a lot of areas it has come to dominate the market.
This is not surprising, given the economic model behind Free software. In the long run, building on the shoulders of giants and sharing the work across more stakeholders through open code and open processes is more economical, scales better and tends to be more sustainable.
The ideological point of view benefits from that: I can lead a fully functional digital life using almost exclusively Free software, and I enjoy certain guarantees of continuity that often go unmet in the proprietary world.
To me, the purpose of Free software has shifted a bit, or rather expanded, to enabling privacy. A good measurement of whether the Free software movement has achieved its goal is the degree of privacy it allows me to have while enabling all the modern amenities our digital age makes possible, or even just having a private conversation with a friend.
Effective privacy needs network effects, so it doesn’t work very well for niche products. Of what use is a secure and private communication tool if I can’t use it to talk to my friends? Luckily, the initial successes of Free software still work to our advantage: being able to develop collaboratively and share the work across many shoulders, we should be able not just to build all the pieces, but to put together a complete set of solutions that makes better privacy achievable for more people. In terms of achieving network effects, we’re not starting at zero, but our adversaries are strong, often ahead of our game, and some tend to play unfair.
Purpose means responsibility
Is it not our responsibility as a Free software community (or even just as citizens) to provide tools that maximize privacy for the users? If the answer is yes, then I suppose the measurement for success is: how much can we make possible while maximizing privacy? How attractive can we make the tools in terms of functionality, effectiveness and availability?
A happy user is one who finds that a useful and fun-to-use tool also protects him from threats that he often may not fully appreciate until it’s too late.
Marco has come over to the Netherlands to pay me a visit, and to hack a little bit together, in person. With the weather clearly suggesting to stay inside, that’s what we did over the weekend, and what better way to entertain yourself than to work on mobile software?
Marco has been working for a while on components that follow Plasma’s human interface guidelines and make it easy to implement applications with a common navigation pattern and look and feel. Obviously, these components use a lot of Plasma under the hood, so they get excellent integration at a visual and at a technical level. This high integration, however, comes at the price of having a non-trivial chain of dependencies. That’s not a problem in Plasma Mobile, or other Plasma workspaces, since all that is already there, anyway.
We thought that an interesting exercise would be to find out what really defines a “Plasma application”, and how we can make the concepts we ingrained in their design available to application developers more easily. How hard could it be to use Plasma components in an Android app, for example? The answer is: not entirely trivial, but it just became a whole lot easier. So what did we do?
For those reading this article via a feed aggregator, hop over to youtube to watch the demo video.
We took Subsurface, which is a piece of Free software used for logging and analysing scuba dives. Subsurface has a mobile version, which is still in its infancy, so it’s an excellent candidate to experiment with. We also took Marco’s set of Plasma components, which provide a reduced set of functionality; in fact, just enough to create what most applications will need. These components extend QtQuick components where we found them lacking. They’re very lightweight, carry no dependencies other than QtQuick, and they’re entirely written in QML. So basically, you add a bunch of QML files to your app and concentrate on what makes your app great, not on overall navigation components or on re-implementing a set of widgets for the n-th time.
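Since the components are pure QML, pulling them into an app essentially means shipping the QML files with the app and pointing the engine at them. Here is a minimal sketch of that idea in plain Qt; the resource paths are made up for illustration and are not Subsurface’s actual setup.

    #include <QGuiApplication>
    #include <QQmlApplicationEngine>
    #include <QUrl>

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);
        QQmlApplicationEngine engine;
        // The pure-QML component set is bundled into the app's resources,
        // so no native dependencies beyond QtQuick are needed on Android.
        engine.addImportPath(QStringLiteral("qrc:///imports")); // hypothetical path
        engine.load(QUrl(QStringLiteral("qrc:///main.qml")));   // hypothetical entry point
        return app.exec();
    }

Because everything lives in the app’s own resources, the dependency chain stays exactly as small as described above: QtQuick and nothing else.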
After solving some deployment issues, on Saturday night we had the Plasma mobile components loading in an Android app. A first success. Running the app did show a number of problems, however, so we spent most of Sunday looking into the problems one by one and trying to solve them. By early Monday morning, we had solved all of the glaring issues we found during our testing, and we got Subsurface mobile to a pretty solid appearance (pretty solid given its early state of development, not bug-free by any means).
So, what to take away from this? In a reduced form, Plasma can be a huge help in creating Android applications, too. The mobile components which we’re developing with Plasma Mobile as the target in mind have had their first real-world exposure and received a lot of fixes, and we got very useful feedback from the Subsurface community which we’re feeding directly back into our components.
A big thanks goes out to the Subsurface team and especially Dirk Hohndel for giving us excellent and timely feedback, for being open to our ideas and for being willing to play guinea pig for the Plasma HIG and our components. The state you can see in the above video has already been reviewed and merged into Subsurface’s master tree, so divers around the world will be able to enjoy it when the app becomes available to a wider audience.
That moment when the application “just works” after all your unit tests pass…
A really nice experience after working on these low-level bits was firing up the kscreen systemsettings module configured to use my wayland test server. I hadn’t done so in a while, so I didn’t expect much at all. The whole thing just worked right out of the box, however. Every single change I tried had exactly the expected effect.
This screenshot shows Plasma’s screen configuration settings (“kscreen”). The settings module uses the new kwayland backend to communicate with a wayland server (which you can see “running” on the left hand side). That means that another big chunk of getting Plasma Wayland-ready for multi-display use-cases is falling nicely into place.
I’m working on this part of the stack using test-driven development methods: I write unit tests for every bit of functionality, and then implement and polish the library parts. Something is done when all unit tests pass reliably, when others have reviewed the code, when everything works on the application side, and when I am happy with it.
The unit tests stay in place and are from then on compiled and run through our continuous integration system automatically on every code change. This system yells at us as soon as any of the unit tests breaks or shows problems, so we can fix it right away.
Interestingly, we run the unit tests live against a real wayland server. This test server is implemented using the KWayland library. The server runs headless, so it doesn’t do any rendering of windows, and it just implements the bits interesting for screen management. It’s sort of a mini kwin_wayland; the real kwin will use this exact same library on the server side, so our tests are not entirely synthetic. This wasn’t really possible for X11-based systems, because you can’t just fire up an X server that supports XRandR in automated tests — the machine running the test may not allow you to use its graphics card, if it even has one. It’s very easy to do, however, when using wayland.
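To give an idea of how little code such a headless server takes, here is a rough sketch, assuming KWayland’s server API roughly as it stands at the time of writing; the socket name and mode values are made up, and the real test server differs in detail.

    #include <QCoreApplication>
    #include <QSize>
    #include <KWayland/Server/display.h>
    #include <KWayland/Server/outputdevice_interface.h>

    using namespace KWayland::Server;

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);

        Display display;
        display.setSocketName(QStringLiteral("kscreen-test-0")); // made-up socket name
        display.start(); // headless: no rendering, just the protocol

        // Announce one "virtual screen" to connecting clients.
        OutputDeviceInterface *output = display.createOutputDevice(&display);
        OutputDeviceInterface::Mode mode;
        mode.id = 0;
        mode.size = QSize(1920, 1080);
        mode.refreshRate = 60000; // in mHz
        mode.flags = OutputDeviceInterface::ModeFlags(OutputDeviceInterface::ModeFlag::Current);
        output->addMode(mode);
        output->create(); // publish the global on the wayland socket

        return app.exec();
    }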
Our autotests fire up a wayland server from one of many example configurations. We have a whole set of example configurations that we run tests against, and it’s easy to add more that we want to make sure work correctly. (I’m also thinking about user support, where we can ask a user to send us a problematic configuration written out to a json file, which we can then add to our unit tests, fix, and ensure that it never breaks again.)
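To illustrate, such a json definition could look something like the following. This is a hypothetical shape for illustration, not the exact schema the backend tests use:

    {
        "outputs": [
            {
                "id": 1,
                "name": "OUTPUT-1",
                "enabled": true,
                "primary": true,
                "pos": { "x": 0, "y": 0 },
                "currentModeId": "3",
                "modes": [
                    { "id": "3", "size": { "width": 1920, "height": 1080 }, "refreshRate": 60 }
                ]
            }
        ]
    }

Each such file describes one complete screen setup the test server should announce, so adding a regression test for a reported problem is a matter of dropping in another file.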
The wayland test server is only about 500 lines of relatively simple code, but it provides full functionality for setting up screens using the wayland protocol.
The real kwin_wayland will use the exact same library on the server side as we do in our tests, but instead of using “virtual screens”, it actually interacts with the hardware, for example through libdrm on more sensible systems, or through libhybris on less sensible ones.
KWin takes a more central role in our wayland story; as we move initial mode-setting there, it just makes sense to have it do run-time mode setting as well.
The next steps are to hook the server side of the protocol up in kwin_wayland’s hardware backends.
In the back of my head are a few new features, which so far had a lower priority — first the core feature set needed to be made stable. There are three things which I’d like to see us doing:
per-display scaling — This is an interesting one. I’d love to be able to specify a floating-point scaling factor. Wayland’s wl_output interface, which represents outputs to application clients, only provides integer precision. I think that sucks, since there is a lot of hardware around where a scaling factor of 1 is too small, and 2 is too high. That’s pretty much everything between 140 and 190 DPI according to my eyesight; your mileage may vary here. I’m wondering if I should go ahead and add the necessary APIs, at least on our end of the stack, to allow better than integer precision.
Also, of course we want the scaling to be controlled per display (and not globally for all displays, as it is on X11), but that’s in fact already solved by just using wayland semantics — it needs to be fixed on the rendering side now.
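To make the 140 to 190 DPI range concrete, here is a toy calculation of a fractional scale factor from a display’s physical size. The 96 DPI baseline and the quarter-step rounding are my own assumptions for illustration, not anything the protocol prescribes.

    #include <cmath>

    // Derive a fractional scale factor from horizontal resolution and
    // physical width; 96 DPI is the traditional 1x baseline.
    double suggestedScale(int horizontalPixels, double physicalWidthMm)
    {
        const double dpi = horizontalPixels / (physicalWidthMm / 25.4);
        const double raw = dpi / 96.0;
        return std::round(raw * 4.0) / 4.0; // quarter steps: 1.25, 1.5, 1.75, ...
    }

    // Example: a 294 mm wide 1920-pixel panel comes out at about 166 DPI,
    // which yields 1.75: exactly the range where neither 1 nor 2 fits.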
pre-apply checks — at least the drm backend will allow us to ask it whether it will be able to apply a new configuration to the hardware. I’d love to hook that up to the UI, so we can do things like enable or disable the apply button, and warn the user about settings the hardware is not going to accept.
The low-level bits have arrived with the new drm infrastructure in the kernel, so we can hook it up in the libraries and the user interface.
configuration profiles — it would make sense to allow the user to save configurations for different situations and pick between them. It would be quite easy to allow the user to switch between setups not just through the systemsettings ui, but also, for example, when connecting or disabling a screen. I can imagine that this could be presented very nicely, in tune with graphical effects that get their timing juuuuust right when switching between graphics setups. Let’s see how glitch-free we can make it.
So, first of all, this is all very much work-in-progress and highly experimental. It’s related to the work on screen management which I’ve outlined in an earlier article.
I ran a few benchmarks across our wayland stack, especially measuring interprocess communication performance when switching from X11 (or, in fact, XCB and XRandR) to wayland. I haven’t done a highly scientific setup, just ran the same code with different backends to see how long it takes to receive information about connected screens, their modes, etc.
I also ran the numbers when loading the libkscreen backend in-process, more on that later.
The spreadsheet shows three data columns, with the results of 4–5 individual runs and their mean values in vertical blocks per backend: one column for the default out-of-process mode, one with the backend loaded in-process, and one showing the speedup between in- and out-of-process for the same backend.
The lower part contains some cross referencing of the mean values to compare different setups.
All values are nanoseconds.
My results show that querying screen information is between 2 and 2.5 times faster on wayland than on X11.
The qscreen and xrandr backends perform quite similarly; they’re both going through XCB. That checks out. The difference between wayland and xrandr/qscreen can then be attributed to either the wayland protocol or its implementation in KWayland being much faster than the corresponding XCB implementations.
But, here’s the kicker…
in- vs. out-of-process
The main overhead, as it turns out, is libkscreen loading the backend plugins out-of-process. That means that it starts a dbus-autolaunched backend process and then passes data over DBus between the libkscreen front-end API and the backend plugin. It’s done that way to shield the client API (for example the plasma shell process or systemsettings) from unsafe calls into X11, as it encapsulates some crash-prone code in the XRandR backend. When using the wayland backend, this is not necessary, as we’re using KWayland, which is much safer.
I went ahead and changed libkscreen so that these backends can be loaded in-process, which avoids most of the overhead. This change has an even more dramatic influence on performance: on X11, the speedup is 1.6x–2x; on wayland, loading the backend in-process makes it run 10 times faster. Of course, these speedups are complementary, so combined, querying screen information on wayland can be done about 20 times faster.
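Mechanically, in-process loading boils down to instantiating the plugin directly instead of talking to it over DBus, roughly like the sketch below; libkscreen’s actual plugin interface is more involved than this.

    #include <QObject>
    #include <QPluginLoader>
    #include <QString>

    // Load a backend plugin in-process: no helper process, no DBus
    // marshalling, just a dlopen behind the scenes.
    QObject *loadBackendInProcess(const QString &pluginPath)
    {
        QPluginLoader loader(pluginPath);
        QObject *backend = loader.instance(); // instantiates the plugin's root object
        // The trade-off: a crash in the backend now takes the client down with
        // it, which is why the X11 backends went out-of-process to begin with.
        return backend;
    }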
While this change from out-of-process to in-process backends introduces a bit more complexity in the library, it has a couple of other advantages in addition to the performance gains. In-process means that debugging is much easier. If there are crashes, we do not hide them anymore, but identify and fix them. It also makes development more worthwhile, since it’s much easier to debug and test the backends and frontend API together. And it means that we can load multiple backend plugins at the same time.
I’ve uploaded the benchmark data here. Before merging this, I’ll have to iron out some more wrinkles and have the code reviewed, so it’s not quite ready for prime-time yet.
One of the bigger things in the works in Plasma’s Wayland support is screen management. In most cases, that is reasonably easy: there’s one screen, and it has a certain resolution and refresh rate set. For mobile devices, this is almost always good enough. Only once we start thinking about convergence and using the same codebase on different devices do we need to be able to configure the screens used for rendering. Especially on desktops and laptops, where we often find multi-monitor setups or connected projectors, the user should be able to decide a bunch of things: the relative position of the screens, the resolution (“mode”) for each, etc. Another thing that we haven’t touched yet is scaling of the rendering per display, which becomes increasingly important with a wider range of displays connected: just imagine a 4K laptop running north of 300 pixels per inch (PPI) connected to a projector which throws 1024×768 pixels on a wall sized 4×3 m.
The Wayland protocol currently does not provide a mechanism for setting up the screen, or for telling us about displays that are not used for rendering, either because they’re disabled, or because they have just been connected but not (yet) enabled “automatically”. For most applications, that doesn’t matter; they’re just fine with knowing about the rendering screens and some details about those, which is provided by the wl_output interface. For screen management, this interface is insufficient, though, since it lacks a few things: EDID information, enabled/disabled flags, and also ways to set the mode, scaling, rotation and position. This makes clearly recognizing displays and setting them up harder than necessary, and thus error-prone. Let’s look at the background first, however.
Setting up X11
On the hardware side, this has been a complete mess in the past. One problem is X11’s asynchronous nature. The XRandR extension that is used for this basically works by throwing a bunch of calls at the X server (“use this mode”, “position display there”) and then seeing what sticks to the wall. The problem is that we never really know what happened; there’s no well-defined “OK, this works” result, and we also don’t know when the whole procedure is done. The result is a flicker-fest and the desktop trying to catch up with what X11 made of the xrandr calls. It can also be an unpleasant experience when a display gets connected: it is used for rendering, then the shell finds out about it and expands the desktop to it, and then everything is resized again because there’s a predefined configuration for this. These kinds of race conditions are very hard to fix due to the number of components involved in the process, and the lack of proper notification semantics around it.
X11 has the nasty habit of interacting with hardware directly, rather than through well-defined and modern kernel interfaces. On the kernel side, this has been fixed. We now have atomic mode setting, which allows us to check whether changes can be applied (through the DRM_MODE_ATOMIC_TEST_ONLY flag), and to apply them all at once, or in portions that are known not to screw up, lock the user out, or be simply invalid in combination with each other.
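As a rough sketch of what this looks like at the libdrm level (the property lookups are elided, and a real compositor adds many more properties per commit):

    #include <xf86drm.h>
    #include <xf86drmMode.h>

    // Ask the kernel whether a configuration would succeed, without touching
    // the hardware: the essence of atomic mode setting.
    bool testConfiguration(int drmFd, uint32_t connectorId,
                           uint32_t crtcIdPropId, uint32_t crtcId)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        drmModeAtomicAddProperty(req, connectorId, crtcIdPropId, crtcId);
        // ... add mode, plane and framebuffer properties the same way ...
        const int ret = drmModeAtomicCommit(drmFd, req,
                                            DRM_MODE_ATOMIC_TEST_ONLY |
                                            DRM_MODE_ATOMIC_ALLOW_MODESET,
                                            nullptr);
        drmModeAtomicFree(req);
        return ret == 0; // 0 means the kernel would accept this setup
    }

The same request can then be committed for real by dropping the TEST_ONLY flag, which is what makes “check first, then apply atomically” possible.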
For the user, getting this right across the whole stack means quicker reconfiguration of the hardware and only minimal flickering when switching screen setups. We won’t be able to completely prevent the flickering on most displays, as that is simply how the hardware works, but we will be able to make it a lot less jarring. With the compositor now being the one that calls into the DRM subsystem from userspace, we can coordinate these things well with visual effects, so we’ll be able to make the user experience while re-configuring displays a bit smoother as well.
Atomic mode setting, DRM and kernel
From the kernel side, this needed quite radical changes, which have now landed throughout the DRM subsystem. The result is a kernel interface and helper library that allows interacting with the kernel using semantics that give tighter control over the process, better error prevention and handling, and more modern power management semantics. Switching off the screen can now be done from the compositor, for example — this allows us to fix those cases where the display is black, but still has its backlight on, or where the display is off, but used for rendering (in which case you get huge blind spots in your user interface).
Daniel Vetter’s article series (part 1, part 2) provides an excellent overview of the history, present and future of atomic mode setting on the kernel side. The point pertinent here is that a reasonably recent Linux kernel with working DRM drivers now provides all that we need to fix this problem on the user side. X11 is still in the way of a completely smooth solution, though.
Screen setup in Plasma
In Plasma, the screens can be set up using the Display configuration module in system settings. This module is internally called “KScreen”. KScreen provides a visual interface to position displays, set resolutions, etc. It’s backed by a daemon that can apply a configuration on login – useful stuff, but ultimately bound by the limits of the underlying software stack (X11, kernel, drivers, etc.).
KScreen is backed by libkscreen, a library that we ship with Plasma. libkscreen offers an API to list displays and their properties, including disabled displays. libkscreen is driven by backends that run out-of-process; the commonly used one is the “xrandr” backend, which talks to the X server over the XRandR extension. libkscreen has other backends, notably a read-only QScreen backend and a “fake” backend used for unit tests. A native Wayland backend is work in progress (you can find it in the libkscreen[sebas/wayland] branch).
libkscreen has been developed for the purpose of screen configuration, but we have also started using it for the Plasma shell. QScreen, the natural starting point for this, was not up to the task yet, as it was missing some functionality. In Qt 5.6, Aleix Pol has now landed the missing pieces, so we can move the Plasma shell back onto QScreen entirely. QScreen is backed by the XCB Qt platform plugin (QPA). One problem in Plasma has been that we got notified of changes through different code paths, which made it hard to set up the desktop, position panels, etc. In a Wayland session, this has to happen in a much more coordinated way, with clearly defined semantics for when the screen setup changes, and with as few of those changes as necessary.
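As a small illustration of the read-only side: everything the shell needs can come from QScreen, with change notifications arriving through a single code path. A minimal sketch using stock Qt API:

    #include <QGuiApplication>
    #include <QScreen>
    #include <QDebug>

    // Dump what QScreen knows about the connected, enabled screens.
    void dumpScreens()
    {
        const QList<QScreen *> screens = QGuiApplication::screens();
        for (const QScreen *screen : screens) {
            qDebug() << screen->name() << screen->geometry()
                     << screen->refreshRate() << screen->devicePixelRatio();
        }
        // QGuiApplication::screenAdded() and QScreen::geometryChanged() then
        // deliver changes through one well-defined notification path.
    }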
KScreen should concentrate on doing what it’s good at: screen configuration. For X11, kscreen uses its xrandr backend; no changes there. In Plasma shell’s startup, we will be able to remove libkscreen and rely purely on QScreen as soon as we can depend on Qt 5.6, which probably puts us in the time-frame of Q2 next year. For read-only access on wayland, we can use the libkscreen QScreen backend for now; it comes with some limitations around multi-screen use, but these will be ironed out by spring next year. The QScreen backend is actually what is used to start Plasma Mobile’s shell on kwin_wayland. For configuration, QScreen is not an option, however — that’s simply not its purpose, and it shouldn’t be.
In the Wayland protocol itself, there are no such semantics yet. Screen configuration has, so far, been outside of the scope of the agreed-upon wayland protocols. If we don’t run on top of an X server, who’s doing the actual hardware setup? Our answer is: KWin, the compositor.
KWin plays a more central role in a Wayland world. For rendering and compositing of windows, it interacts with the hardware. Since it already initializes the hardware when it starts a Wayland server, it makes a lot of sense to put screen configuration exactly there as well. This means that we configure KWin at runtime through an interface that is designed around the semantics of atomic mode setting, and KWin picks a suitable configuration for the connected displays. KWin saves the configuration, applies it on startup or when a display gets switched off, connected or disconnected, and only then tells the workspace shell and the apps to use it. This design makes a lot of sense, since it is KWin that ultimately knows of all the constraints related to dynamic display configuration, and it can orchestrate how the hardware is used and how changes are presented to the applications and workspace.
KWayland and unit testing
Much of KWin/Wayland’s functionality is implemented in a library called KWayland. KWayland wraps the wayland protocol in a Qt-style API for wayland clients and servers, and offers threaded connections and type-safety on top of the basic C implementation of libwayland.
KWayland provides a library that allows running wayland servers, or just specific parts of one, with very little code. The KWayland server classes allow us to test a great deal of the functionality in unit tests, since we can run the unit tests against a “live” wayland server. Naturally, this is used a lot in KWayland’s own autotests. In the libkscreen wayland backend’s tests, we load different configuration scenarios from json definitions, so we can not only test whether the library works in principle, but really test against live servers, covering a much larger part of the stack in our tests. This helps us a lot to make sure that the code works in the first place, but also helps us catch problems easily as soon as they arise. The good unit test coverage also allows much swifter development, as a bonus.
Output management wayland interface design
The output management wayland protocol that we have implemented provides two things:
It lists connected output hardware and all of its properties: EDID, modes, physical size, and runtime information such as the currently used mode and whether this output device is currently enabled for rendering.
It provides an interface to change settings such as mode, rotation, scaling and position for the hardware, and to apply them atomically.
This works as follows (a client-side code sketch follows the list):
The server announces that the global OutputManagement interface and a list of OutputDevices are available
The configuration client (e.g. the Display Settings) requests the list of output devices and uses them to show the screen setup visually
The user changes some settings and hits “apply”; the client then requests an OutputConfiguration object from the OutputManagement global
The configuration object is created on the server specifically for the client, it’s not exposed in the server API at this point.
The client receives the config object and calls setters with new settings for position, mode, rotation, etc.
The server buffers these changes in the per-client configuration object
The client is done changing settings and asks the server to apply them
The compositor now receives a sealed configuration object, tests and applies the new settings, for example through the DRM kernel interface
The compositor updates the global list of OutputDevices and changes its setup, then signals success or failure back to the client through the configuration object
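Expressed in code, the client side of this flow looks roughly like the sketch below. It assumes KWayland’s client classes as they are shaping up for Plasma 5.5; header names and details may differ, and the mode id is a placeholder.

    #include <QObject>
    #include <QPoint>
    #include <KWayland/Client/outputmanagement.h>
    #include <KWayland/Client/outputconfiguration.h>
    #include <KWayland/Client/outputdevice.h>

    using namespace KWayland::Client;

    void moveAndEnable(OutputManagement *management, OutputDevice *device)
    {
        // The server creates a private, per-client configuration object.
        OutputConfiguration *config = management->createConfiguration();

        // Setters are buffered server-side; nothing is applied yet.
        config->setEnabled(device, OutputDevice::Enablement::Enabled);
        config->setPosition(device, QPoint(1920, 0));
        config->setMode(device, 0 /* a mode id taken from the device's mode list */);

        QObject::connect(config, &OutputConfiguration::applied,
                         [] { /* compositor tested and applied the setup */ });
        QObject::connect(config, &OutputConfiguration::failed,
                         [] { /* hardware or compositor rejected it */ });

        config->apply(); // seal the object and ask the compositor to apply it
    }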
The output management protocol, client- and server-side library, unit tests and documentation are quite a hefty beast; combined, they come in at ca. 4700 lines of code. The API impact, however, has been kept quite low and easy to understand. The atomic semantics are reflected in the API, and it encourages doing the right thing, both for the client configuring the screens and for the compositor, which is responsible for applying the setup.
I am currently working on a libkscreen module for screen configuration under wayland that implements atomic mode setting semantics in libkscreen. It uses a new wayland protocol which Martin Gräßlin and I have been working on over the past months. This protocol lands with the upcoming Plasma 5.5; the libkscreen module may or may not make the cut, as this also depends on whether we get the necessary bits finished in KWin and its DRM backend. That said, we’re getting really close to closing the last gaps in the stack.
On the compositor side, we can now wire up the OutputManagement changes, for example in the DRM backend, and implement the OutputDevices interface on top of real hardware.
Sol Lewitt’s wall drawing at Stedelijk Museum Amsterdam
A few weeks ago, I visited the Stedelijk Museum in Amsterdam for an exhibition of Henri Matisse’s works. What stuck with me is not a painting or collage by Matisse, but a wall drawing by Sol Lewitt. I took a photo with my phone and have used it as its wallpaper since then, and it works well: the colors are nicely toned, everything provides enough contrast, and I like how it looks on a screen.
Yesterday, when I needed a break from hacking on Plasma’s Wayland integration, I remade the photo into vector art to use it as wallpaper. You can download the result here (the wallpaper package has versions for all resolutions, including phone and landscape versions, you can just unzip the package into /usr/share/wallpapers).
Tomorrow, on Thursday at 12:00 CEST (10:00 UTC), we will hold a Plasma Mobile IRC meeting in #plasma on Freenode. We want to discuss some workflow and project management bits, and how we can effectively shape our workflows with the tools available. It’s also a great way to say “hi!” and get involved, or just lurk around if you’re interested in what’s going on.
Be there or be square!
For a short while, the Plasma Mobile forums were hosted outside of the official KDE Forums. In our quest to put everything under KDE governance, we have now moved the Plasma Mobile forums under KDE’s forums as well. Enjoy the new Plasma Mobile forums.
As a few users had already registered on the “old” forums, this means a smallish interruption as the threads could not be quickly moved to the new forums. We’re sorry for that inconvenience and would like to ask everyone to move to the new forums.
Thanks for your patience and sorry again for the hassle involved with that.
At Blue Systems, we have been working on making Plasma shine for a while now. We’ve contributed much to the KDE Frameworks 5 and Plasma 5 projects, and helped with the transition to Qt 5. Much of this work has involved porting, stabilizing and improving existing code. With the new architecture in place, we’ve also worked on new topics, such as Plasma on non-desktop (and non-laptop) devices.
Plasma Mobile on an LG Nexus 5
This work is coming to fruition now, and we feel that it has reached a point where we want to present it to a more general public. Today we unveil the Plasma Mobile project. Its aim is to offer a Free (as in Freedom), user-friendly, privacy-enabling and customizable platform for mobile devices. Plasma Mobile runs on top of Linux, uses Wayland for rendering graphics, and offers a device-specific user interface using the KDE Frameworks and Plasma library and tooling. Plasma Mobile is under development and not yet usable by end users. Missing functionality and stability problems are normal in this phase of development and will be ironed out. Plasma Mobile provides basic functionality and an opportunity for developers to jump in now and shape the mobile platform, and how we use our mobile devices.
As is necessary with development on mobile devices, we’ve not stopped at providing source code that “can be made to work”; rather, we’re doing a reference implementation of Plasma Mobile that can be used by those who would like to build a product based on Plasma Mobile on their platform. The reference implementation is based on Kubuntu, which we chose because there is a lot of expertise with Kubuntu in our team, and at Blue Systems we already have continuous builds and package creation in place. Much of the last year was spent getting the hardware to work and getting our code to boot on a phone. With pride, we’re now announcing the general availability of this project for public contribution. In order to make clear that this is not an in-house project, we have moved the project assets to KDE infrastructure and put them under Free software licenses (GPL and LGPL, according to KDE’s licensing policies). Plasma Mobile’s reference implementation runs on an LG Nexus 5 smartphone, using an Android kernel and an Ubuntu user space, and provides an integrated Plasma user interface on top of all that. We also have an x86 version, running on an ExoPC, which can be useful for testing.
Plasma Mobile uses the Wayland display protocol to render the user interface. KWin, Plasma’s window manager and compositor plays a central role. For apps that do not support Wayland, we provide X11 support through the XWayland compatibility layer.
Plasma Mobile is a truly converged user interface: more than 90% of its code is shared with the traditional desktop user interface. The mobile workspace is implemented as a shell suitable for mobile phones. The shell provides an app launcher, a quick settings panel and a task switcher. Other functionality, such as a dialer, settings, etc., is implemented using specialized components that can be mixed and matched to create a specific user experience or to provide additional functionality — some of them already known from Plasma Desktop.
Architecture diagram of Plasma Mobile
Plasma Mobile is developed in a public and open development process. Contributions are welcome and encouraged throughout the process. We do not want to create another walled garden, but an inclusive platform for the creation of mobile device user experiences. We do not want to create releases behind closed doors and throw them over the wall once in a while, but to create a level playing field for contributors to work together and share their work. Plasma Mobile’s code is available on git.kde.org, and its development is discussed on the plasma-devel mailinglist. In the course of Akademy, we have a number of sessions planned to flesh out more detailed plans for further development.
With the basic workspace and OS integration work done, we have laid a good base for further development, and for others to get their code to run on Plasma Mobile. More work which is already in our pipeline includes support for running Android applications, which potentially brings a great number of mature apps to Plasma Mobile, better integration with other Plasma Devices, such as your desktop or laptop through KDE Connect, an improved SDK making it very easy to get a full-fledged development environment set up in minutes, and of course more applications.
“Since when has the world of computer software design been about what people want? This is a simple question of evolution. The day is quickly coming when every knee will bow down to a silicon fist, and you will all beg your binary gods for mercy.” Bill Gates
For the sake of the users, let’s assume Bill was either wrong or (||) sarcastic.
Let’s say that we want to deliver Freedom and privacy to the users and that we want to be more effective at that. We plan to do that through quality software products and communication — that’s how we reach new users and keep them loving our software.
We can’t get away with half-assed software that more or less always shows clear signs of being “in progress”; we need to think our software through from a user’s point of view and then build the software accordingly. We need to present our work at eye level with commercial software vendors; it needs to be clear that we’re producing fully reliable software on a professional level. Our planning, implementation, quality and deployment processes need to be geared towards this same goal.
We need processes that allow us to deliver fixes to users within days, if not hours. Currently, in most end-user scenarios, it often takes months, and perhaps even a dist-upgrade, before a fix for a functional problem with our software reaches users.
The fun of all this lies in the more rewarding experience of making successful software, and in learning to work together across the whole stack (including communication) on this goal.
So, with these objectives in mind, where do we go from here? The answer is of course that we’re already underway, not at a very fast speed, but many of us have a good understanding of the structural goals above and have found solutions that work well.
Take tighter and more complete quality control, at the heart of the implementation, as an example. We have adopted better review processes, more unit testing, more real-world testing and better feedback cycles with the community; especially the KDE Frameworks and Plasma stacks are well maintained and stabilized at high speed. We can clearly say that the Frameworks idea worked very well, technically but also from an organizational point of view: we have spread the maintainership over many more shoulders, and have been able to vastly simplify the deployment model (away from x.y.z releases). This works out because we test especially the Frameworks automatically, and rather thoroughly, through our CI systems. Within one year of Frameworks 5, our core software layer has settled into a nice pace of stable incremental development.
On the user interaction side, the past years have seen our interaction designers joined by visual artists. This is clearly visible when comparing Plasma 4 to Plasma 5. We have had help from a very active group of visual designers for about one and a half years now, but we have also adopted stricter visual guidelines in our development process, along with forward-thinking UI and interaction design. These improvements in our processes have not just popped up; they are the result of a cultural shift towards opening KDE to non-coding contributors as well, and creating an atmosphere where designers feel welcome and can work productively in tandem with developers on a common goal. Again, this shows in many big and small usability, workflow and consistency improvements all over our software.
To strengthen the above processes and plug the remaining holes in the big picture, so we can make great products, we have to ask ourselves the right questions and then come up with solutions. Many of them will not be rocket science; some may take a lot of effort by many. This should not hold us back, as a commonly shared direction and goal is needed anyway, regardless of our ability to move. We need to be more flexible, and we need to be able to move swiftly on different fronts. Long-standing communities such as KDE can sometimes feel like they have the momentum of an ocean liner, which may be comfortable but takes ages to maneuver, while they really should have the velocity, speed and navigational capabilities of a Zodiac.
By design, Free Culture communities such as ours can operate more efficiently (through sharing and common ownership) than commercial players (who are restricted, but also boosted by market demands), so in principle, we should be able to offer competitive solutions promoting Freedom and privacy.
Our users need merciful binary source code gods and deserve top-notch silicon fists.