Like many other components of the Plasma Workspaces, Plasma Desktop’s default Containment is being ported to QML. A technology preview of the containment is now available and can be tested by a wider audience. While the port mainly replicates the current desktop containment in QML, its interaction scheme for positioning and manipulating widgets on the desktop has been improved.
First of all, a note: the code presented in this article is not part of the upcoming Plasma Desktop 4.10. It can easily be installed on top of it, however, and might see inclusion in the summer 2013 release.
In our Roadmap, you can read that KDE is currently porting many aspects of its workspaces to QML, with the goal of creating a more modern user experience on top of state-of-the-art technologies such as Qt5, the OpenGL scenegraph and Wayland. The move to QML is a gradual process, made possible by the componentized architecture of Plasma. Widgets and other components that make up the workspace are replaced with QML ports one by one. The premise is to not introduce regressions, by shipping each component “when it’s ready”. Ultimately, we need a complete set of QML components to run the whole desktop (and at some point also apps) directly on top of the graphics hardware, leading to higher graphics performance and more available resources on the CPU.
One of the important pieces is the Desktop containment. This is the component in Plasma that is responsible for managing and laying out widgets on the desktop, and for creating the toolbox (which makes some “workspace actions” available to the user). In general, a “Containment” is an area on the screen (a panel, the desktop background, the dashboard, …), and it takes care of managing the widgets within it, their position and sizing. It also offers access to actions specific to widgets, the containment, or the workspace.
The currently shipped (also in 4.10) default Desktop containment is written in C++ using QGraphicsWidgets, and offers free placement of widgets on the desktop, with a bit of edge and overlap detection and repositioning mixed in.
Most of the new containment is exactly the same as in the current default — this is done by design: we do not want to introduce radical changes to the workspace (and the users’ workflows), but rather improve gradually, in an iterative process. There are two closely related areas where we did change a few things: positioning/sizing and visual cleanliness. These are expressed in two changes: integration of the applet handle and positioning aids.
In order to reduce visual clutter, we integrated the applet handle into the applet’s background frame. Previously, it had its own frame and slid out from under the widget as a separate element. Merging handle and background frame reduces the number of distinct elements on the screen and allows for less intrusive transitions when the widget handle becomes visible.
The second important change is that we now provide helpers when the user moves and resizes a widget. When moving, we show a halo at the position the applet will snap to when dragged. This makes widget placement more predictable and allows the user to get it right in one go. We also align the widgets to an invisible grid, so applets automatically end up being aligned pixel-perfectly with each other, which leads to a more ergonomic workflow, cleaner appearance of the workspace, and again to less visual clutter.
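The snapping described above boils down to rounding a drop position to the nearest cell of the invisible grid. A minimal sketch of the idea in Python (the cell size is a made-up value; the actual containment implements this in QML/JavaScript):

```python
def snap_to_grid(x, y, cell=10):
    """Snap a widget's top-left corner to the nearest grid cell.

    The halo shown while dragging is drawn at exactly this snapped
    position, so the user sees where the applet will land before
    letting go of the mouse button.
    """
    snapped_x = round(x / cell) * cell
    snapped_y = round(y / cell) * cell
    return snapped_x, snapped_y

# A widget dropped at (23, 48) lands on the (20, 50) grid point:
print(snap_to_grid(23, 48))
```

Because every applet goes through the same rounding, widgets automatically end up pixel-aligned with each other, which is where the cleaner appearance comes from.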
Platform improvements: Bindings and Toolbox
An important aspect of the work on the QML containment was to improve the bindings which load declarative code (QML) into Plasma shells. These improvements are included in Plasma 4.10, due to be released in early February. This includes the necessary platform features to allow running fully-featured QML containments, something we have done in Plasma Active for a while, but within a more confined context.
As a result of this work, Plasma can now also load toolboxes written in QML. The Plasma Toolbox is the little widget with the Plasma icon you can see on top of many containments, which gives access to actions such as adding widgets, settings, shortcuts, etc. The toolbox used with the containment shown here is a 1:1 port of the current default (C++) toolbox. The name of the toolbox package is currently hard-coded in the bindings (they load it from the org.kde.toolbox package and silently fall back to the C++ implementation if that isn’t found — a 4.10 feature), but this also opens up this part of the workspace to QtQuick goodness. The toolbox is basically a translucent layer on top of the desktop, so much freedom is given to other implementations.
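The lookup-with-fallback behaviour can be sketched roughly as follows (a Python sketch with an assumed on-disk package layout of metadata.desktop plus contents/; the real logic lives inside the Plasma C++ bindings):

```python
import os

def locate_qml_toolbox(package_root, package="org.kde.toolbox"):
    """Return the QML toolbox package directory, or None.

    Returning None signals the shell to silently fall back to the
    C++ toolbox implementation (the 4.10 behaviour described above).
    """
    candidate = os.path.join(package_root, package)
    # A valid Plasma package contains metadata.desktop and contents/
    if (os.path.isfile(os.path.join(candidate, "metadata.desktop"))
            and os.path.isdir(os.path.join(candidate, "contents"))):
        return candidate
    return None
```

The important property is that a missing or broken QML package never leaves the user without a toolbox: the C++ one simply takes over.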
A template and a bridge
The code is not only there to replace the current containment, it also serves as a template for new developments. With the new containment bindings in place, it is now very easy to create your own containment, modify someone else’s, and publish it to share with others. The containment shown here is just one example of what we can do with the QML integration features in Plasma. As Plasmoid packages are architecture-independent, this of course works across devices and different workspaces.
The work that is upcoming in Plasma Desktop further bridges the gap between Plasma’s interfaces for different devices and form factors. Some of its code was introduced in Plasma Active, and is now available in a more generic fashion for Plasma Desktop (and Netbook) as well. This brings us closer to one of our goals: having only one shell that dynamically loads a suitable interface (Containment, Widgets) for a given form factor, use case, output device, etc.
Give it a spin
If you’re interested and would like to try it (we appreciate feedback, it’s especially valuable in this phase!), there are two ways to get this containment. The minimal requirement for it is Plasma 4.10-beta1.
If you’re using git, you will find the code in the branch called “plasma/sebas/desktop-qml”: just check it out, build and install it, run kbuildsycoca4, and you’re done.
If you are using the packages, you can easily install the following two Plasmoid packages to your system:
If your system is using a version prior to KDE SC 4.10-beta1, the packages will install, but not work.
The following commands install the necessary Plasma packages into your KDE install directory.
# create the package directory and go there
mkdir -p `kde4-config --prefix`/share/apps/plasma/packages/org.kde.toolbox
cd `kde4-config --prefix`/share/apps/plasma/packages/org.kde.toolbox
# unpack the plasmoid package (it is a zip archive;
# adjust the path to wherever you downloaded it)
unzip /path/to/the-toolbox-package.plasmoid
# check if it's installed correctly,
# this should list metadata.desktop and contents/
ls -la `kde4-config --prefix`/share/apps/plasma/packages/org.kde.toolbox
[Edit: changed --localprefix to --prefix, as we’ve found a bug in the --localprefix code.]
Then install the desktop containment package (if you’re updating the containment at a later stage, use plasmapkg -u):
plasmapkg -i desktop-git28012013.plasmoid
You can now choose the new containment from the Layout dropdown in the Desktop Settings; pick “Default Desktop (QML)” there.
I would like to thank Blue Systems for supporting my work on the above topics.
KDE’s Next Generation user interfaces will run on top of Qt5; on Linux, they will run atop Wayland or Xorg as the display server. The user interfaces move away from widget-based X11 rendering to OpenGL. Monolithic libraries are being split up, interdependencies removed, and dependencies cut through stronger modularization, improving portability.
For users, this means higher quality graphics, more organic user interfaces and availability of applications on a wider range of devices.
Developers will find an extensive archive of high-quality libraries and solutions on top of Qt. Solutions to complex problems and a high level of integration between apps and the workspace allow easy creation of portable, high-quality applications.
The projects to achieve this goal are KDE Frameworks 5 and Plasma 2. In this article, you’ll learn about the reasons for this migration and the status of the individual steps to be taken.
As this article is going to be a bit long, due to its level of detail, you can just skip to the end of every subsection to get the executive summary. Also, I would like to thank Blue Systems for sponsoring a lot of the work that is going into the future of KDE’s products, including mine.
Status Frameworks 5
Development of KDE’s Frameworks 5 focuses on modularizing the APIs currently contained in kdelibs and kde-runtime, loosening their internal structure and making it possible to use only specific parts, by splitting them into individual libraries and solutions.
The entire work to be done for Frameworks 5.0 is split into 7 topics. Three of these “Epics” are done:
Initial communication and documentation (Kevin Ottens)
Merging of code into Qt 5.0 (David Faure)
Reduction of duplication with Qt by removing classes and using their Qt alternatives (Stephen Kelly)
Four Epics are currently work in progress; three of them are monstrous:
Build system (Alex Neundorf, Stephen Kelly)
CMake (upstreaming some stuff, modularization, porting)
Modularization of CMake KDE settings (work in progress)
Modularization of macros
Review and inventory the Find* CMake modules
kdelibs cleanups (David Faure)
This is a large Epic, containing many bite-sized tasks. Roughly 50% of them are done, 37 tasks remain open and 7 are being worked on; an extensive list is on the wiki.
Qt 5.1 merging (David Faure)
This is the list of things that we haven’t been able to merge upstream into Qt 5.0, so we hope we can upstream as much as possible into Qt 5.1. This can potentially cause timing problems if we can’t get everything we need into Qt 5.1. 9 tasks are work in progress by David Faure, Thiago Macieira, Richard Moore and others. 52 tasks are on the todo list, most of them currently unclaimed.
Splitting kdelibs (blocked) (Kevin Ottens)
Another large Epic, in bigger chunks: going through all libraries one by one, porting their build systems to the changes in Frameworks 5, cutting out certain library dependencies and changing the translation system. 13 tasks are done, 12 are work in progress and 8 are on the todo list, not all of them assigned.
An extensive list of libraries and their status can be found on the wiki.
Frameworks 5 currently compiles on top of Qt 5.0 and basic system services run (kdeinit5), although not all of its dependencies have been ported to Qt 5. Work on Frameworks5 is ongoing, so it is currently quite a moving target, and will remain so for a while.
Plasma and KWin Direction
An architecture based on Qt5 and Wayland makes it possible to use a more modern graphics stack, which means moving from X11-based rendering to OpenGL graphics rendering. QtQuick2 (the QtQuick shipped with Qt5) makes it possible to offer a very nice and extensible development API, while using the full power of the graphics hardware to open up excellent visual possibilities. Plasma offers development APIs that make it easy to create well-integrated applications as well as workspaces that are flexible, extensible and fully featured on top of QtQuick, and in the future QtQuick2.
As KDE moves forward towards Frameworks 5, Plasma is taking the opportunity of Qt5’s source and binary compatibility break to do necessary updates to its architecture. The goal is to have a leaner Plasma development API and dependency chain, and to achieve a better user and developer experience by moving the UI fully to Plasma Quick, which is QtQuick plus a number of integration components for theming, compositor interaction, internationalization, data access and sharing, configuration, hardware, etc.
This constitutes a major refactoring of the Plasma libraries and components. First, their UI needs to be done in QML. This effort of porting workspace components to QML is already well underway. Second, the Plasma library and runtime components need to be ported from the QGraphicsView-based canvas to QML. This means cutting out dependencies on classes such as QGraphicsItem and QGraphicsWidget in favour of their equivalents in QML. In the case of painting and layout code, it means porting this code to QML.
Plasma Components (containing a basic QtQuick widget set)
QtExtras (containing components missing in Qt, such as MouseEventListener)
PlasmaExtras (containing additional UI widgets for better integration, such as animations, text layout helpers, Share-like-connect integration, etc.)
Making scriptengines (such as the Python scriptengine) export only QObject-derived classes to the QML runtime (needs investigation right now)
Port of widgets away from QGraphics*, also necessary for some QML code
Plans for KWin Plasma Compositor
Plasma Compositor refers, in a Wayland world, to the compositor used for Plasma workspaces, which is essentially KWin in disguise as Wayland compositor.
In KWin, we benefit from an ongoing effort to modularize it and clean it up architecturally. For most of its UI, KWin already supports QML (window decorations, tabswitcher, etc.). Some mechanisms which currently work through XAtoms will need to be ported; the API impact of that will likely be quite limited for application developers.
The strategy for KWin is to port it to Qt 5, then make it possible to run KWin outside of an X server on top of KMS, using the graphics hardware more directly. The next step is to use KWin as the compositor for Wayland display servers. The dependency on X11 can be removed once it is no longer needed to provide compatibility with X11 applications, or it can possibly be made optional.
Milestones for KWin (Martin Graesslin) (updated with further clarifications, thanks Martin):
KWin on Qt5 (work in progress, planned for 4.11): KWin will not depend on Qt 5 as of 4.11. The idea is to have KWin in a state that we could compile KWin with Qt 5/KF 5. But as it is unlikely that KF 5 will be allowed dependency for 4.11, we will not see a KWin on top of Qt 5 even if we achieve that goal. It’s a weak goal as we cannot release on it anyway.
on top of KMS (planned for 4.11): KWin in 4.11 will still run on top of the X-Server. This is mostly about adding a proof-of-concept. Whether that will be merged into 4.11 and compilation enabled will be seen once the code has been written. So in this case it will at most be an additional very hidden (env variable) mode for testing.
KWin as Wayland compositor (planned for 4.12): Again, only as an addition. As of 4.12 we will still be targeting the X-Server as default. If we succeed we might add an option. But this pretty much depends on the state of Qt 5/KF 5 and QtCompositor. If any of those dependencies is not ready to depend on, the code might exist, but will not be released.
no X11 dependency (planned for the distant future): There are no plans to drop X11 support. But we want to have the possibility to build a KWin without X for new targets like Plasma Active. For the desktop there are no such plans.
Once we have a working libplasma2 and a useful set of QML Plasmoids, we can think of running an entire workspace in QML and on top of QtQuick2, either on top of X11, or with KWin’s plans in mind, on Wayland.
Porting status of important widgets to QML / Plasma Quick needed for the workspace:
Taskbar (close to first review, target: 4.11) (Eike Hein)
Folderview (work in progress) (Ignat Semenov)
Desktop containment (second revision close to review, target: 4.11) (Sebastian Kügler)
Calendar (work in progress, target: 4.11) (Davide Bettio, Sebastian Kügler)
Kickoff (about to be merged into master, target: 4.11)
KRunner (work in progress, target: 4.11) (Aaron Seigo, Aleix Pol)
Done: System tray, pager, notifications, device notifier, battery, lock/logout, weather, Wallpaper, Containment support
others from kdeplasma-addons
and more (see wiki)
KDE’s Frameworks 5 project is well underway. It will allow us to move to a more modern graphics rendering engine, make our development platform more portable, and make it easier to reuse the solutions KDE has built. The work does not happen by itself, however, and it is time-critical. With Qt5.0 released, 3rd parties are already porting their code. These people will only consider using KDE’s technologies if they are actually available — and that means we need a Frameworks 5 release.
So is this going to be KDE 5? The answer to this question is still “No!”, for a number of reasons:
Frameworks 5, apps and the Plasma workspaces are not one singular entity. These parts are only released together (which might change in the future), and lumping them together under one name is really not helpful. (3rd party developers will think we’re only targeting Plasma workspaces, Plasma users will think you can only run “KDE apps”, and potential users of applications will assume that you can only use them inside Plasma workspaces — all of them untrue, all of them taken right out of my daily experience.)
Within the Plasma team, we tend to use the abbreviation PW2 to refer to the next generation of Plasma workspaces. It stands for Plasma Workspaces 2, and it will probably be named differently in the future.
So, now you’re fully up to date on the status, isn’t it time to get cracking?
ownCloud offers a Free software solution to synchronize your files across different devices. I’ve been working on a KDE Plasma client for this server technology. In this article, after giving a bit of background of the problem, I explain the design concepts behind the new ownCloud Plasma client and demonstrate how it works and integrates with different Plasma workspaces.
For a long time, I’ve been looking for a good solution to synchronize my data across the different devices I use. The need arose when I got my first laptop (next to my desktop machine): I wanted to be able to work on my projects on both devices without having to manually copy files between the machines. It turned out to be quite a hard problem. Sure, you can work on your stuff on a remote machine, and thereby always work on one version of a certain file. But what happens when you’re offline? I needed files to be synched, and was looking for a more or less automated solution that worked both offline and online. I took a look at the Coda filesystem, which seemed to solve this problem well, for others at least. It was a bit of a pain to set up (and to understand how exactly it worked), but after a bit of fiddling, I had copies of my stuff on different machines. It seemed to work initially, but later turned out to be a little too brittle for my use cases. (I don’t quite recall the details, but eventually I gave up on it because it meant more or less constant maintenance, and still it wasn’t quite as bulletproof as I had hoped it would be.) At university (where I both studied and worked for quite a while), we often used SVN for collaboration. There’s value in using the same tools in more places, so I put all my files into an SVN repository and used that for synchronization. Eventually, I moved these private data repositories to Git, as I grew more comfortable with this tool and it allowed useful things like offline commits and synching between “client devices”, not just between client and server. Still not an automatic process, and not the most beautiful solution to the problem either, as it wasn’t quite as automated as I’d have liked it to be. Also, as computing became more portable, I needed something that would blend in well with all the devices that produce, collect and are supposed to carry my data.
My problem is of course not unique: file synchronization plays an ever more important role as Internet connections become faster and more ubiquitous and the device spectrum expands. Where people used to have one computer for all their electronic communication and computing needs, this now spreads across different devices: desktops, laptops, tablets, smartphones, media centers, and likely many more form factors we can only dream of today (or maybe not even that!). Devices jumping between network infrastructures don’t make it easier, and the most flexible solution seems to be a central server / multiple clients model. ownCloud offers exactly that: it’s Free software (licensed under the AGPL), is easy to set up and runs on bog-standard LAMP setups. Its server side performs well enough to run even on woefully underpowered hardware such as NAS systems or other embedded devices. More importantly, it gives you full control over your data, which is nice and important for private use, and often a hard requirement for institutional use cases, such as document management and sharing in a company.
I’ve been keeping a keen eye on ownCloud for a while, and, somewhere in its 3.x cycle, I decided to set up ownCloud on my private server and started experimenting with it. Installation was quite easy, and the desktop client Mirall seemed to work well initially.
My own Plasma Cloud
After using it for a bit, I ran into a few problems: some simple bugs (most of which seem to have been fixed), some UI issues, and also a lack of integration. Mirall is a “standard systray application”: it always sits in the system tray, shows its status and allows you to set up directories (“Folders”) for synchronization with the ownCloud server. Its UI, however, bears all the traits of a “traditional” desktop application: it is unsuitable for touch interfaces, abuses the system tray as a task switcher, and is overall not quite up to the modern UI standards we use in Plasma. As Mirall is geared towards portability (it’s the official desktop client for Linux, Mac and Windows), it lacked good integration into Plasma — the usual lowest common denominator problem. What crossed my mind was redoing the UI using Plasma technologies, thereby updating its Look & Feel and at the same time making it suitable for touch interfaces, such as Plasma Active, KDE’s device-independent user experience.
Collaboration for a new design
At Akademy this summer, Klaas (who works for ownCloud Inc. on Mirall), I, and a few others sat down to talk about better integration of ownCloud into the Plasma Desktop. With ownCloud being a project very close to KDE (it sprang from the KDE community, had developer meetings and travel expenses funded by the KDE e.V., partly runs on KDE infrastructure, and some KDE developers are also working on ownCloud), that seemed like the right thing to do. In a BoF session, we sat down, talked about what we’d like to see improved and started designing a KDE Plasma client for ownCloud, based on the Mirall code base. What we came up with can be summarized as follows:
The UI bits and the synchronization code in Mirall will be separated, with the sync mechanism split out into a shared library
The shared library is used by a daemon, which on the one hand takes care of keeping files in sync, and on the other exports a DBus control interface
A Plasma widget shows synchronization status and allows basic control, like disabling syncing
A System Settings module allows setting up synchronization of the ownCloud client
A Plasma Active Settings module allows setting up the ownCloud client from your mobile device
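To illustrate the daemon/satellite split, here is a hypothetical sketch of how the daemon could aggregate per-folder states into the single status value it exposes over DBus (the names and states are illustrative, not Mirall’s actual API):

```python
from enum import Enum

class SyncStatus(Enum):
    IDLE = 0     # everything in sync
    SYNCING = 1  # at least one folder transferring
    ERROR = 2    # at least one folder failed

def overall_status(folder_states):
    """Collapse per-folder states into one workspace-wide status.

    The Plasma widget only needs this single value to pick its icon;
    per-folder details stay behind the DBus control interface.
    """
    if SyncStatus.ERROR in folder_states:
        return SyncStatus.ERROR
    if SyncStatus.SYNCING in folder_states:
        return SyncStatus.SYNCING
    return SyncStatus.IDLE
```

Keeping this aggregation in the daemon means the Plasmoid, the System Settings module and the Plasma Active module can all stay thin presentation layers over the same interface.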
With this solution, we can cover a range of devices, providing consistent operation and Look & Feel, and share code to keep development costs down. By doing the UI components in Plasma Quick (Plasma and KDE technologies + Qt Quick’s QML), we can share the maximum amount of code across different devices, and create a modern UI that adapts itself to form factors and input methods (meaning it will work well on both mouse/keyboard-driven devices and touch screens). As a bonus, doing the UI in Plasma Quick means it’s easy to hack and easy for others to contribute to.
Having a clear idea of how we want to go forward, and an agreed-upon plan de campagne, we sat down to implement all the designed goodness. Klaas was quick to split out the synchronization mechanism from ownCloud into its own shared library, and I started working on the bits needed for the Plasma client: the sync daemon, the UI components used by the configuration modules and the Plasmoid, and of course those UIs themselves. Progress has been quite good: I quickly got the synchronization daemon up and running, and could move on to implementing the various pieces of user interface needed. In the process, I’ve created a bunch of patches to Mirall, some of them merged, others still in the review queue. They’re mostly fairly trivial, splitting out a bit of QWidget-based remainders from the daemon (and thereby cutting down its size considerably), and extending the API (in a backwards-compatible way) to allow better status display and control from the user interface “satellites”.
File synchronization is of course the main feature here. This already works reasonably well, including setting up the server and folders. The UI has a few problems here and there, but the important pieces are in place. The plan is to ship a first stable version with the release of Plasma Active 4 in March, which will also mark the availability of the desktop client.
How to make it all happen faster?
I’d be most grateful if distro packagers would pick up the new ownCloud Plasma client at this point, package it and allow users to test it (with the necessary warning signs about eating pets and firstborns attached, of course). There’s still quite a lot of work to do, but the basics work, and getting feedback helps me prioritize what to work on first to get this into the hands of our dear users. Of course, if you’re interested in contributing to the client directly (in the form of code, UI design, bug fixing, polishing, performance, etc.), you’re also most welcome to do so. The code is currently hosted on Github, like other ownCloud subprojects (if you prefer a Free software solution, we can clone it on KDE infrastructure as well, and sync our upstream changes to Github as needed). In order to build it, check out and build the “plasmaclient” branch. This will install the various components to your system. After running “kbuildsycoca4” (or logging in again), you can enable the Plasma widget through your notification area configuration.
I attended FOSS.in in Bangalore two weeks ago. FOSS.in is the largest Indian Free software conference, and had been on my list of conferences to attend for a long time. I’ve been back home for a good week now, so it’s time to recap my experiences there a bit. I travelled together with Lydia, a.k.a. Nightrose, who was attending on behalf of Wikimedia to talk about Wikidata. At the conference, I was scheduled for a talk about Plasma Active, and we also did a workshop on creating device-adaptive interfaces. More on that later.
Lydia and I went a few days early, to have some time to see Bangalore and its surroundings. It was my first time in India, so it was also a good opportunity to see a few new things here and there, and to acclimatize. On the first day, we went around the city a bit, and later were invited to PES-IT, a renowned Indian IT college, where a 24-hour open source hackathon was taking place. Lydia and I held ad-hoc presentations about getting involved with KDE and about Plasma Active respectively, followed by hands-on demos and discussions about both technical and non-technical issues. The students and professors were very friendly, and it was awesome to see enthusiastic students spending their weekend hacking together. We arrived back at our hotel only late at night, after some long and enlightening discussions about Free software and Indian culture. What struck me in particular (and in a very positive way) was the number of girls attending: about one third. In most “Western” countries, information technology is very much a male trade; Dutch universities, for example, struggle to attract more than one or maybe two girls each year for their computer science courses. India is way ahead there, which on the one hand is great to see, but on the other hand raises the question of what is going wrong in my home country. Free software communities suffer from the same skewed demographic, so the same question applies here.
Jean-Baptiste (j-b) of VLC arrived two days after us, and we all hopped on a night bus to Hampi, a UNESCO heritage site, an old capital of a long-gone empire and a religious centre a few hours north-west of Bangalore. There, we spent an unforgettable day: from watching (and participating in) the morning ritual of washing one’s body in the river, sipping a glass of chai, and having a wonderful breakfast under the Mango Tree, to visiting temples in beautiful surroundings, more wonderful curries, chais, temples and friendly people, to enjoying the sunset from the top of a mountain.
On Thursday, FOSS.in started. One of the booths that struck me first was the stand of Aakash, a low-cost tablet meant for students. The tablet is procured by the Indian government under the supervision of the Indian Institute of Technology in Bombay (IIT-Bombay). It is running a dual-boot Linux/Android system right now. The Aakash people have already looked into Plasma Active (they much prefer it to Android), but there were some problems getting it to run on their hardware. The hardware is a 7″ tablet with a capacitive screen, 512MB RAM and otherwise an Allwinner A13 chip with a Mali400 GPU. That should be just powerful enough to run Plasma Active. I was shown a few applications, both under Android and Linux, which quickly revealed why Android was not the best choice: Android basically made a lot of apps run 3 times slower. In the course of the next days, I sat down with IIT’s developers to look into the problems they had getting PA to run. We made some progress, and fleshed out strategies for getting it to run. One bigger hurdle is the lack of a good graphics driver; other tasks involve “relatively simple” system integration work. Doable, it seems, and a wonderful opportunity to bring KDE’s software to a very large new group of users. One thing that struck me as genius in this project is that it is not limited to procuring hardware and getting it to boot: a large part (60+% of the budget) is allocated to content creation. Software is created under the GPL, content under Creative Commons non-commercial licenses. Translation of content is an integral part of the project, so this initial freeing of educational content has the potential to be very useful far outside of India as well. Visionary. As with any big project, there are also critical voices. Hardware is one issue: building a relationship of trust with Chinese manufacturers is not easy, as is getting the manufacturer to understand the constraints and requirements of Free software.
I wish the Aakash project all the success it needs however, and we will continue to support the goals of the project. This could be the beginning of a wonderful thing. :)
Plasma Active Presentation and Workshop
On the first day, I held a presentation about Plasma Active: its approach, technology, goals and so on. The talk took place in the main hall and was well attended. I collected some valuable feedback, and am happy that people understand the ideas and believe them to be right. The next day, we held a KDE miniconf, where Shantanu and I did a workshop on developing device-adaptive apps. In the workshop, we outlined the process from idea to running code on a device, and dug into the details. We had about 50 interested visitors, the workshop itself was quite interactive, and we did some live-coding; it was a lot of fun to do.
During the conference it became evident that the Indian KDE and Free software community would very much like to organize an Indian KDE conference again. After conf.kde.in 2011, which was a great success, this seemed like a good idea, so we did some planning, asked whether people were willing to volunteer in the organization, and outlined a few possible options. The discussion has moved on to the kde-india mailing list, so if you are among the people who would love to see conf.kde.in 2013 happen, join the list and add your ideas and man/girlpower!
The Internet of Things
One of the presentations I attended during FOSS.in was by Priya Kuber, who works for Arduino. Arduino produces an open source hardware microcontroller aimed at educational purposes. The talk was very inspiring, so I wondered if I could use this for some home automation tasks, as a simple example a remote power switch to turn on my workstation in the office, or somesuch. Priya sat down with me and quickly got me going with my own basic program for the Arduino microcontroller, and it was all very easy and fun. Back home I ordered an Arduino starter kit, which has already arrived and contains basically what I’d call a kid’s microcontroller wet dream: it has the Arduino Uno board, LEDs, sensors for light and temperature, an LCD display and a bunch of other small electronic components, along with a nice book. Surely something to spend the calmer Christmas days with, old style. :) Still in India, I sat down for an afternoon and hacked up some code to use with this little project, and got quite far already. The idea is to connect the Arduino to my RaspberryPi (which is energy-efficient enough to run 24/7), run a small HTTP server on the RPi and use that to remote-control physical devices at home from a remote location (I’d like to think of a tropical island here ;)). I’ve implemented the server in twisted Python; it presents a JSON interface, which can be directly consumed from a QML Plasmoid, on either my laptop or any Plasma Active device. I didn’t get around to doing the actually interesting hardware part yet. Maybe this is the feeble start of using KDE technologies for home automation and domotica?
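The post doesn’t include that server code, but the core of such a JSON interface can be sketched in a few lines. This is only an illustration of the idea: the device names, pin numbers and the `SET <pin> <state>` serial protocol below are invented for the example, not taken from the actual project.

```python
import json

# Hypothetical mapping of friendly device names to Arduino pins.
DEVICES = {"workstation": 7, "desk_lamp": 8}

def handle_request(payload):
    """Translate a JSON request such as
    {"device": "workstation", "state": "on"}
    into the line we would write to the Arduino's serial port,
    and return a JSON reply. Protocol invented for illustration."""
    req = json.loads(payload)
    pin = DEVICES.get(req.get("device"))
    if pin is None:
        return json.dumps({"ok": False, "error": "unknown device"})
    state = 1 if req.get("state") == "on" else 0
    serial_line = "SET {} {}\n".format(pin, state)  # what pyserial would send
    return json.dumps({"ok": True, "sent": serial_line.strip()})
```

A QML Plasmoid could then simply fetch this endpoint over HTTP and parse the JSON reply, which is what makes the approach attractive for Plasma Active devices.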
I would like to thank the KDE e.V., the foundation for the KDE community, for supporting my trip. You can also pitch in here, to make participation of KDE contributors in this kind of events possible by Joining the Game.
I’m on my way back from the Randa Meetings, where a few teams in KDE came together to collaborate on the next steps in their respective subprojects. I’ll post this to my blog after arrival, but as I’ve got some time to kill here in Basel, Switzerland, close to the German border, I decided to recap what we’ve been up to in the past days. I’ll concentrate on what the Plasma team has been working on. Present were Marco, Aaron, Afiestas, Giorgos, Antonis and myself. Giorgos and Antonis are still relatively new to the Plasma team; they concentrate on making Plasmate rock, but have also done some excellent work this week on libplasma2. I’m happy to see the influx of enthusiastic and skilled new developers in the team, as that reduces the bus-factor risk and makes it easier to achieve our audacious goals.
Speaking of goals, we came to Randa with the plan to achieve critical mass towards libplasma2. But what is libplasma2 exactly? Well, while we had some plans where to go with it, there were still a few items unclear — but not anymore! One of the big-ticket items was the future of QGraphicsView in Plasma. QGraphicsView was introduced in Qt 4.2. It’s basically a canvas on steroids that gained features to create a fluid, widget-based UI. In Qt4, QtQuick uses QGraphicsView as its rendering backend. QGraphicsView is heavily based on QPainter and employs a procedural approach to rendering the UI. In Qt5 and QtQuick2, graphical display is supposed to move to an OpenGL scenegraph, making it possible to move much of the work involved in displaying a UI into hardware, so your GPU can do its magic with your UI. The benefit for the user is mainly that we’re able to have a much better performing rendering engine underneath (so smoother graphics), and more CPU time left to do the real work of your application. One side effect will likely also be that we save a bit of power on mobile devices, as the GPU is much more efficient at these tasks – it has been optimized for them. (Experts say that the saving is in the range of an order of magnitude. Whether or not it will be noticeable to the user in the end, we’ll have to see later.)
As the OpenGL-scenegraph-based rendering is an entirely different paradigm compared to the procedural QGraphicsView, we have to rethink our use of QGraphicsView. Unfortunately, QGraphicsView is deeply ingrained into Plasma’s architecture. Even more unfortunate is that it’s not as bugfree as we’d like it to be, in fact much of the occasional rendering glitches you sometimes see in Plasma-based UIs are caused by QGraphicsView problems. Moving away from QGV and towards scenegraph is likely to solve this whole class of problems.
So one thing is clear: we want to move towards scenegraph. But what about all the old, QGraphicsView-based code? Well, we already started moving components of Plasma Desktop one by one to QML. This began with Plasma Workspaces 4.8, a lot more moved in the 4.9 cycle, and yet another batch will move with 4.10, which will be released in January 2013. Our credo has been that we want to ship feature-equivalent ports, in order to keep the impact on the user as minimal as possible. There will be a point, however, when we will have to remove support for QGraphicsView from libplasma, and that will likely be libplasma2. We expect this work to take more than another year, so third-party developers also get ample time to move their code to the (much nicer) QML way of doing UI. But why not keep support for QGraphicsView? Well, it’s not that easy, as scenegraph and QGV are, due to their respective paradigms, more or less mutually exclusive. We’ve spent quite some time trying to come up with solutions that guarantee a maximum amount of backwards compatibility, but we also had to ask ourselves if we have the manpower to implement and maintain workarounds for the incompatibilities between scenegraph and QGV. Moreover, what would the impact on our APIs and our code be? The tradeoffs were quite horrific, so in the end we decided to bite the bullet and remove QGV from the frameworks5 branch. But what will Plasma2 without QGraphicsView look like? What we came up with is actually a very neat and clean approach: our classes that currently manage the workspace (Corona, Containment, Applet) will become abstract managers that orchestrate how components work together. Containments and Applets will have their UI written in QML (so we can do the rendering in the scenegraph, and thus in the graphics hardware). They are extensible through C++ and various scripting languages that have bindings for Qt.
This way, you can choose to implement the business logic in your favourite procedural language (C++, Python, Ruby, QtScript, etc.) and do the UI in a declarative way. Things like theming, localization, distribution, and all that will still be offered by the platform.
In Marco’s preliminary branch, where he removed support for QGraphicsView from libplasma2, the result is quite spectacular. The library is already about 30% smaller, and will likely lose another big chunk of code. That means more maintainability, a smaller memory and disk footprint and faster startup. As the functionality of QGraphicsView is more or less a subset of what we can do with QML, it’s not any less powerful or flexible. Just smaller, leaner and meaner (and also a bit easier to grok for developers using our APIs).
As you can see, we have been quite productive during this year’s sprint in Randa, and the above is only one part of what we’ve worked on. We’ve also made quite a dent into our TODO list for the KDE Frameworks 5 port, reviewed lots of patches, fixed bugs left and right, made existing code faster, and caught up with each other on various side projects. This all would not have been possible without the sponsors of the event, and the 287 people who donated through our fundraiser campaign to make this great event in a scenic location possible. Thank you all!
The discussion around including online search results in the workspace, and especially in the app starter, reminded me of a discussion we had some time ago about including online search in KRunner queries. First of all, I think the idea of including online search results directly in the shell is great. It’s not new by any means, but it serves value to the user, and in fact, I use it daily and would not want to miss it.
In KDE Plasma, we do that for a few years already. I recall sitting down during the Gran Canaria Desktop Summit in 2009 with Richard Moore and hacking on a KRunner plugin that includes results from Wikipedia and Wikitravel in the KRunner search results. We got that working pretty quickly, and the plugin is shipped on most installations of Plasma Desktop out in the wild nowadays, and nobody complained. How come?
Privacy by Default
First of all, we do it quite differently from the way Canonical does it in Ubuntu. Sending every search query to an online service poses a privacy problem. Especially when not using SSL-encrypted HTTP requests, people around you can wiretap your traffic almost trivially, or intercept it using man-in-the-middle attacks. Also, the service receives all your queries, not something I’d want in general. That I trust someone in principle doesn’t mean I have to tell them everything I do.
While we ship plugins that promote Free culture (in this case Wikipedia and Wikitravel), one could easily add support for Amazon as well, and of course for all kinds of search engines. (We do include a couple of proprietary web services in KDE, but we’d never silently send them data when it’s not clear to the user or explicitly asked for.) What we, as Plasma maintainers, will not accept, however, is triggering these online requests on every query typed. Basically, we won’t send anything across the net without the user explicitly requesting us to do so.
Maybe we could raise some funds this way, but we think that our users are best served with a system that gets advertisement out of the way. I’m personally easily annoyed by commercial offerings which jump into my face without me asking for it, and I understand I’m not the only one.
Earning Money through affiliate programmes
A few months ago, David Faure, the maintainer of Konqueror, KIO and a lot of other important pieces in KDE, was contacted by the DuckDuckGo search engine. DuckDuckGo asked if KDE would be willing to take part in their affiliate programme. David passed this on to KDE e.V. and offered to do the necessary changes on the code side if we decided to go ahead with it. DDG offered KDE 25% of its earnings per clicked ad when the user searches through Konqueror (or in fact through the web shortcut). As we have already been shipping a search provider for DuckDuckGo for quite some time, it was enough to add KDE to the search query and sign a form with KDE e.V.’s banking details. That’s some free money; maybe not much, but who knows, and every bit helps. The impact, technically and to the user, is minimal, and it didn’t require any changes to our privacy principles and settings, so ahead we went. That means, if you feel like supporting KDE through your online search, that’s easy: use the ddg: search provider (see below). This works starting with 4.9.0.
Online, but respecting privacy
So, offline and private by default, but how can we still include all the goodness from the Internet in your local search results, so we save you a trip to your web browser when we can? There are a few ways you can easily query online services from your desktop:
Wikipedia, Wikitravel (and other MediaWiki-based services): ALT+F2, enter “wiki $YOURQUERY”
Videos on Youtube: ALT+F2, enter “videos $YOURQUERY”
Google search: ALT+F2, enter “gg:$YOURQUERY” (use ggi: for Google Images, dd: for DuckDuckGo, amz: for Amazon, qt: for Qt API documentation, php: for PHP docs; many, many more are available as well, have a look at Konqueror’s web shortcuts for a full list, all of which are transparently supported in KRunner as well)
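The prefix expansion that these web shortcuts perform can be sketched in a few lines of Python. Note that the URL templates below are rough approximations made up for the example, not the exact definitions shipped with Konqueror or KRunner.

```python
from urllib.parse import quote

# Illustrative web-shortcut templates; "{}" is replaced by the
# URL-encoded search terms. Real definitions live in KDE's config.
SHORTCUTS = {
    "gg":   "https://www.google.com/search?q={}",
    "dd":   "https://duckduckgo.com/?q={}",
    "wiki": "https://en.wikipedia.org/wiki/Special:Search?search={}",
}

def expand(query):
    """Expand a query like 'gg:plasma desktop' into a search URL,
    or return None if there is no recognized shortcut prefix."""
    prefix, sep, terms = query.partition(":")
    if not sep or prefix not in SHORTCUTS:
        return None
    return SHORTCUTS[prefix].format(quote(terms.strip()))
```

This is essentially all a runner has to do: anything without a known prefix falls through to the normal local search, so no network request is ever made unless the user explicitly asks for one.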
Starting this weekend, KDE e.V., the organisation supporting the KDE community, organizes a number of meetings in Randa, Switzerland. Randa is a village in the Swiss Alps, a bit of a remote location between mountains that reach up to 4500m, and it provides the perfect setting to sit down together and do nothing but concentrate on a single topic. You can help us to make this happen, but first, read on to understand what we will be doing in Randa, and why.
From Platform to Frameworks
One of the meetings planned there is the meeting of the KDE Plasma team. The goal of this meeting is to plan and make progress on the port of libplasma, the framework underlying the Plasma workspaces, to Qt 5 and KDE Frameworks 5.
Within the KDE Frameworks 5 effort, the KDE team splits up and reorganizes the whole KDE platform. This effort was started during last year’s meeting in Randa, and while it is a huge undertaking, we have already made good progress across many of the libraries and technologies that make up KDE Frameworks. KDE Frameworks 5 will be mostly source-compatible, meaning that no large rewrites will be needed in order to port an app to Frameworks 5; rather, we’re splitting dependencies differently across the board to make it easier to reuse only certain frameworks, without having to drag in larger dependency chains. The Frameworks 5 effort goes along with porting the codebase to Qt5 (which is also much less painful than the port from Qt3 to Qt4). The result will be a faster, more modular platform and, hopefully thanks to these new virtues, more widespread use of KDE technology.
The road to Plasma-Next
One of the technologies in Frameworks 5 is libplasma2. For libplasma2, there is still a lot of work to be done. While we have a rough plan of what we want to achieve, and have already taken the first steps to make it happen, there’s still a huge mountain of work ahead of us to create a new libplasma2 that is worthy of its place in Frameworks 5. One of the high-level goals is to be able to run Plasma workspaces (and apps) in an entirely hardware-accelerated environment. In Qt5 terms, this means we want to use QtQuick 2 to display our user interfaces using an OpenGL scenegraph. As that is quite a deviation from the current, QPainter-based approach, you can imagine that it won’t be easy to get this “just right”, and in any case, it involves touching and testing a lot of code that works well right now. Also for Plasma Addons, we want to achieve almost-source compatibility, so all the work that has gone into Plasma in KDE SC 4 will be immediately available in the new world order of Qt5. Not an easy task, and there are a few areas where we don’t have satisfactory solutions yet. A lot of sweat and brain cycles will be needed.
The good news is that we have a capable and motivated team to tackle this effort, and to really kick off our libplasma2 hacking spree, we will sit down in Randa and plan all this through in detail, then sit down and get hacking so we have hopefully reached a critical mass to continue on, and a clear and shared vision across the team how we want to go about the remaining work.
How you can help…
Free Software is a joint effort, and everybody can pitch in. In order to get travel costs funded for this event in the Swiss Alps, we have started a fundraiser campaign. The goal is to raise 10.000€, and we’ve already made good progress towards it. You can help reach it, make the sprint in Randa possible, and contribute to the future of the software you might (or might not yet!) be running!
So with the Apple vs. Samsung case drawing to a close, we’re all being reminded how bad patents are, for developers, for innovators, and also for consumers. I’ve written about Defensive Publications as a means to fight the war on patents before, and while reading the news this morning, it’s probably time to give everybody around a few hints how to best write a defensive publication so that, at least that idea, cannot be patented and used in the war again.
A defensive publication is a technical document that describes ideas, methods or inventions and is a form of explicit prior art. Defensive publications are published by the Open Invention Network in a database that is searched by patent offices during a patent examination. A good defensive publication will prevent software patents from being granted on ideas that are not new and inventive. These publications help protect your freedom to operate.
A defensive publication typically consists of the following:
a descriptive title
a few paragraphs documenting the idea, including how it would work
one or two (or more) diagrams describing control flow, interaction, network flow or communication between components, possibly with an example or two
It is important to make sure that a defensive publication does not go into too much detail: patent examiners typically have little time to read all materials, and too much detail makes it likely that they will miss the important bits. If possible, describe only one idea or invention per defensive publication. If there are two or more ideas that can be conceptually separated, split them up and write them as separate defensive publications. Make sure that the defensive publication completely describes the idea by explaining HOW it would work. This part is the most important: without the how, the idea is too abstract.
Things to watch out for when writing defensive publications:
don’t use technology specific terms, like program names. For example: don’t say MySQL, say database.
don’t refer to competitor’s products, or say that it works like a specific technology
try to use standard terms that are in common use
Examples of defensive publications can be found on IP.com. Via the “search” menu, there is an option for “non-patent literature”; the publications sent in via Linux Defenders can be chosen in the drop-down box for prior art databases. Good places to scrape for ideas for defensive publications:
issue tracker (especially wish lists)
Good tools for making the diagrams are:
Software patents are an evil thing which should die a horrible and painful death. Until that moment, recording prior art in a way that is understood by the system is an effective way to fight patents. By recording prior art in the form of defensive publications, we can make it much harder for a patent to be granted — and it does not have to be hard at all to do so.
Yesterday at FISL, I attended a panel on software patents in Brazil; the discussion revolved around the whys and why-nots. Unsurprisingly, there are a lot of good reasons against, and very few in favour. It’s an interesting topic, especially since I lately dived a bit deeper into it. Software patents were also high on the agenda of this year’s Akademy, which happened last month in Tallinn, Estonia. They are legal threats to Free software, so we need to figure out how to deal with them. Let’s first rehash why software patents are a bad thing:
Patents are valid for a ridiculously long time. This unrealistic timeframe makes them effectively stifle progress
For a casual developer, it’s impossible to know if she or he is infringing on any existing patents
Software patents are used in entirely wrong ways: often, patent claims are brought in after a product has shipped successfully, so rather than protecting prior investment in research, they’re used as weapons of economic warfare.
So, software patents are bad, really, really bad. Unfortunately they are just as well a reality we will have to deal with for the foreseeable future. Make no mistake, software patents should not exist, they are evil and they should die sooner rather than later. Until then, however, the threat is real and needs mitigation.
The process of creating a software patent is, very roughly: You come up with a new idea, you write it down in a formal way, you register it with a patent office, the patent is reviewed and rejected or granted. One part of this review process is research for prior art. In order to grant a patent, it has to be an original idea, and it must not already exist.
The “must not already exist” is also known as prior art. Prior art must of course be known, so it has to be visible in the public space. Prior art that is useful in the War on Patents is right under the nose of the reviewers at the USPTO (U.S. Patent and Trademark Office), for example. In order to prevent patents from being registered, having as many technologies, ideas and concepts as possible registered as prior art is an effective way to fight the system with its own weapons.
The Open Invention Network sent us three smart people to Akademy, which was really useful. Apart from being cool folks, the OIN’ers also showed us a relatively easy way to record prior art as a “Defensive Publication” and feed it into the system so it will be found, considered, and lead to rejection of similar patent applications. The nice thing here is that creating a defensive publication is a lot less work than applying for a patent; it is in fact very well doable for an individual developer. A typical defensive publication fits on one page, has a few paragraphs of text with a generally applicable explanation of the idea, and preferably a diagram or graphic that makes it easier to understand. The key is that it should be easy for someone alien to the specific technology (but technically savvy nevertheless) to understand and find again in case a related patent application ends up on the table.
There’s a few pleasant catches to this whole thing:
The process is lightweight: with a bit of practice, writing a defensive publication about a good idea can be done in half an hour
The idea you’re recording doesn’t even have to be your idea: You can help a fellow developer by recording his work in this form. As you don’t derive any kind of license or copyright to the actual work by writing the defensive publication, the work can be spread.
The idea does not need to have been already implemented. Just being able to prove “somebody has already come up with that!” is enough, so even (consistent) brainfarts are eligible
By now, you might already get why I’m writing this: In order to effectively fight the war on patents, we need more people to write these defensive publications. As it’s quite easy to do and we have people who help us in this, I’d like to encourage you all to record prior art we’ve created.
Update: I’ve uploaded an example defensive publication of prior art about Plasma’s Activities, find it here.
After a nice stroll through downtown Porto Alegre with a few fellow Free software activists from Spain, Mexico, Peru and Argentina, today was the first day of FISL13, and it was amazing. My usually screwed-up biorhythm in this part of the world means that I get up at a reasonable (for a conference visit) time in the morning, enough for a relaxed shower, a bit of emailing, breakfast, and still being on time for a fully packed conference day. The day started off by meeting a few well-known faces (Sandro, Filipe, Isabel, Knuth, I’m talking to you!), and getting to know a lot of new ones. Especially the Brazilian KDE community is really awesome (I knew that, but it doesn’t hurt to mention it nevertheless). I also went by SOLAR, the Argentinian Free software organisation, who offered me a nice cup of Mate tea, and an opportunity to share some ideas by means of an interview.
After a bit of booth-crawling, I went to attend two talks, which were both really excellent. The first was by Seth Schoen, who is with the Electronic Frontier Foundation (EFF). Seth talked about privacy and data security at the United States border. I’m sure anybody who has travelled to the U.S. in the past 10 years can relate to that. Seth explained how border control is organised, what your rights are (at the border, your normal civil rights are actually limited, so searching your luggage or detaining you for questioning can be done without good reason, unlike “in the streets”, so it’s very good to know what they can and can’t do), and how you can protect yourself in case you *DO* have something to hide (or rather, how to carry private information on digital devices across the border). There are a few methods. One is: wipe your device, do a clean install, and download the interesting data only after you have crossed the border; more sophisticated tricks involve temporarily encrypting your device with a key you don’t know and only receive after you’ve passed through border control. Lots of interesting information there, and it makes me more comfortable about travelling to the U.S. next time. Also, chapeau to the EFF for their protection of privacy and civil rights in general. The talk was excellently delivered, really interesting and absolutely worth my time.
Next up was Rick Falkvinge, one of the founders of the first Pirate Party (in Sweden). Rick talked about the history of copyright, how people who gain power all of a sudden “kick away the ladder” to secure their position, and how, in more general the Internet changes society. Another excellent talk that felt like it was really worth the time spent. Interestingly, these days, the Brazilian branch of the Pirate Party is being founded, so it’s very much a historical thing happening here. Excited to witness that.
Later in the afternoon, Daker Fernandes Pinheiro of Plasma Qt Quick Components fame and I did a workshop teaching everything you need to know to get started writing device-adaptive applications using Plasma. The workshop was well attended and, I think, well prepared, and people seemed to like it (even if we flooded everybody with a lot of information). It was quite practical; luckily Daker and I hit the right audience, so I think even though it was quite heavy on information, it was still manageable. After a good two and a half hours in which we taught the basics of Qt Quick, Plasma Quick, and how to write Plasma Components and apps, it was evident from the faces of our audience (and judging by their questions) that people enjoyed our sharing of knowledge, and who knows, maybe a few of the attendants will join our team and become new KDE hackers. As promised, I’ve put the slides online. They provide a good overview of the technologies involved, so they are probably quite useful as reading material. Of course, as sharing is good, let me know if you would like to reuse them in part or as a whole, and I’ll send you the sources (including a bit of example code). I enjoyed “teaching” as well. Especially Plasmate, our one-stop-shop, workflow-driven Plasma IDE, resonated really well with the audience, just in time for this summer’s 1.0 release.
Overall, I’m really exhilarated by FISL so far. People are more than welcoming, so it immediately feels like home (12,000km away from it!), I’ve had really interesting and inspiring conversations, and everything is really well organised. I’m now at my hotel (one that focuses on sustainability, it’s really nice, well done FISL peeps!), we’ll go for churrasco in a bit and then off to the first night’s party.
My talk on “Freeing the Device Spectrum” will be on Friday, at 11.00 o’clock in the GNU hall (that’s the big auditorium). It’s targeted at a general audience, so a lot less technical than the workshop we did today. Be there, or be square!