My Writings. My Thoughts.
Spending yesterday mostly away from the computer screen, I was shocked this morning when I read about the Ubuntu Community Council’s request for Jonathan Riddell to step down from the Kubuntu Council. I knew that things had been rough lately, and honestly there were some situations that Jonathan could have handled better, but I didn’t expect anything as drastic and sudden as this without seeing any warning signs.
Looking at the mails from the Kubuntu Council that Scott Kitterman posted, it seems like it’s been a surprise to the KC as well.
I’m disappointed in the way the Ubuntu Community Council has handled this and I think the way they treated Jonathan is appalling, even taking into account that he could’ve communicated his grievances better. I’m also unconvinced that the Ubuntu Community Council, in its current form, is as beneficial to the Ubuntu community as it could be. The way it is structured and reports to the SABDFL means that it will always favour Canonical when there’s a conflict of interest. I brought this up with two different CC members last year, and both provided shruggy answers in the vein of “Sorry, but we have a framework that’s set up on how we can work in here and there’s only so much we can do about it.” They seem to fear the leadership too much to question it, and it’s a pity, because everyone makes mistakes.
This request to step down is probably going to sour the Ubuntu project’s relationship with Jonathan Riddell even more, which is especially sad because he’s one of the really good community guys left, someone who keeps both the CoC and the original Ubuntu manifesto ethos in high regard while striving for technical excellence. On top of that, it seems like it may result in at least one other such person leaving.
I hope that the CC also takes this opportunity to take a step back and re-evaluate its structure and purpose, instead of just shrugging it off with a corporate-sounding statement. I’d also urge them to retract their statement to Jonathan Riddell and attempt to find a more amicable solution.
Last week I discovered The Fan Club’s Experiments page. It reminds me of the Debian Experiments community on Google+. I like the concept of trying out all kinds of different things and reporting back to the community on how it works. I’ve been meaning to do more of that myself so this is me jumping in and reporting back on something I’ve been poking at this weekend.
Squashfs is a read-only compressed filesystem commonly used on embedded devices, Linux installation media and remote file systems (as is done in LTSP). Typically, a system like tmpfs, unionfs or aufs is mounted over this read-only system to make it usable as a root filesystem. It has plenty of other use cases too, but for the purposes of this entry we’ll keep those ones in mind. It supports gzip, lzo and xz (lzma) as compression back-ends, and block sizes from 4K up to 1M.
Compression technique as well as block size can have major effects on both performance and file size. In most cases the defaults will probably be sufficient, but if you want to find a good balance between performance and space saving, then you’ll need some more insight.
My Experiment: Test effects of squashfs compression methods and block sizes
I’m not the first person to have done some tests on squashfs performance and reported on it. Bernhard Wiedemann and the Squashfs LZMA project have posted some results before, and while those are very useful, I wanted more information (especially compression/uncompression times). I was surprised not to find a more complete table elsewhere either. Even if such a table existed, I probably wouldn’t be satisfied with it. Each squashfs is different, and it makes a big difference whether it contains already-compressed data or largely uncompressed data like plain text. I’d rather be able to gather compression ratios/times for a specific image than for one that was used once-off for testing purposes.
So, I put together a quick script that takes a squashfs image, extracts it to tmpfs, re-compresses it using all the specified compression techniques and block sizes… and then uncompresses those same images to measure their read speeds.
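The core loop is roughly this. To be clear, this is a simplified sketch and not the actual script: the real one also extracts the source image, times each step and records the results, and the mksquashfs call is commented out here so the sketch runs without squashfs-tools installed.

```shell
#!/bin/sh
# Simplified sketch of the test loop. The real script also extracts the
# source image, times each run and records the results; the mksquashfs
# call is commented out so this sketch runs without squashfs-tools.
for comp in gzip lzo xz; do
    bs=4096
    while [ "$bs" -le 131072 ]; do
        echo "* Running a squashfs using compression $comp, blocksize $bs"
        # time mksquashfs ./extracted "results/squashfs-$comp-$bs.squashfs" \
        #     -comp "$comp" -b "$bs" -noappend
        bs=$((bs * 2))
    done
done
```

Doubling from 4096 up to 131072 gives six block sizes per compressor, which would account for 18 images in a default run (I’m assuming those bounds here; adjust them to taste).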
My Testing Environment
For this post, I will try out my script on the Ubuntu Desktop 14.04.2 LTS squashfs image. It’s a complex image that contains a large mix of different kinds of files. I’m extracting it to RAM since I want to avoid having disk performance as a significant factor. I’m compressing the data back to SSD and extracting from there for read speed tests. The SSD seems fast enough not to have any significant effect on the tests. If you have slow storage, the results for the larger images (the ones with smaller block sizes) may be skewed unfavourably.
As Bernhard mentioned in his post, testing the speed of your memory can also be useful, especially when testing on different kinds of systems and comparing the results:
# dd if=/dev/zero of=/dev/null bs=1M count=100000
104857600000 bytes (105 GB) copied, 4.90059 s, 21.4 GB/s
CPU is likely to be your biggest bottleneck by far when compressing. mksquashfs is SMP aware and will use all available cores by default. I’m testing this on a dual core Core i7 laptop with hyper-threading (so squashfs will use 4 threads) and with 16GB RAM apparently transferring around 21GB/s. The results of the squashfs testing script will differ greatly based on the CPU cores, core speed, memory speed and storage speed of the computer you’re running it on, so it shouldn’t come as a surprise if you get different results than I did. If you don’t have any significant bottleneck (like slow disks, slow CPU, running out of RAM, etc) then your results should more or less correspond in scale to mine for the same image.
How to Run It
Create a directory and place the filesystem you’d like to test as filesystem.squashfs, then:
$ apt-get install squashfs-tools
$ wget https://raw.githubusercontent.com/highvoltage/squashfs-experiments/master/test-mksquashfs.sh
$ bash test-mksquashfs.sh
With the default values in that file, you’ll end up with 18 squashfs images taking up about 18GB of disk space. I keep all the results for inspection, but I’ll probably adapt/fix the script to be more friendly to disk space usage some time.
You should see output that looks something like this, with all the resulting data in the ./results directory.
* Setting up...
 - Testing gzip
   * Running a squashfs using compression gzip, blocksize 4096
   * Running a squashfs using compression gzip, blocksize 8192
   * Running a squashfs using compression gzip, blocksize 16384
   ...
 - Testing lzo
   * Running a squashfs using compression lzo, blocksize 4096
   * Running a squashfs using compression lzo, blocksize 8192
   * Running a squashfs using compression lzo, blocksize 16384
   ...
 - Testing xz
   * Running a squashfs using compression xz, blocksize 4096
   * Running a squashfs using compression xz, blocksize 8192
   * Running a squashfs using compression xz, blocksize 16384
   ...
* Testing uncompressing times...
 * Reading results/squashfs-gzip-131072.squashfs...
 * Reading results/squashfs-gzip-16384.squashfs...
 * Reading results/squashfs-gzip-32768.squashfs...
 ...
* Cleaning up...
On to the Results
The report script will output the results into CSV.
Here’s the table with my results. Ratio is the compressed size as a percentage of the original uncompressed data; CTIME and UTIME are the compression and uncompression times for the entire image.
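As an aside, the ratio column is easy to recompute from the raw CSV with awk. The column layout and all the numbers below are made up for illustration only (check the script’s actual output format before relying on this):

```shell
# Hypothetical layout: compression,blocksize,size_bytes,ctime_s,utime_s
# (assumed columns and made-up numbers, for illustration only)
cat > results-sample.csv <<'EOF'
gzip,131072,1037565952,120.4,24.1
xz,131072,918230528,540.8,90.2
EOF

# Ratio = compressed size as a percentage of the uncompressed size
# (2900000000 here stands in for the image's uncompressed size in bytes)
awk -F, -v orig=2900000000 \
    '{printf "%s/%s: %.1f%%\n", $1, $2, 100 * $3 / orig}' results-sample.csv
# prints:
# gzip/131072: 35.8%
# xz/131072: 31.7%
```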
- Even though images with larger block sizes uncompress faster as a whole, they may introduce more latency on live media, since a whole block needs to be uncompressed even if you’re reading just 1 byte from a file.
- Ubuntu uses gzip with a block size of 131072 bytes on its official images. If you’re doing a custom spin, you can get improved performance on live media by using a 16384 block size, at a sacrifice of around 3% more squashfs image space.
- I didn’t experiment with Xdict-size (dictionary size) for xz compression yet; it might be worthwhile to sacrifice some memory for better performance / compression ratio.
- I also want stats for random byte reads on a squashfs, and typical per-block decompression for compressed and uncompressed files. That will give better insights on what might work best on embedded devices, live environments and netboot images (the above table is more useful for large complete reads, which is useful for installer images but not much else), but that will have to wait for another day.
- In the meantime, please experiment on your own images and feel free to submit patches.
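To put the latency point above in perspective: the worst case for reading a single uncached byte is fetching and decompressing one full block, so the read amplification grows linearly with block size. A trivial back-of-envelope illustration:

```shell
# Worst case for a 1-byte read from an uncached file: the whole block
# containing that byte must be decompressed first.
for bs in 4096 16384 131072 1048576; do
    echo "block size $bs: up to $((bs / 1024)) KiB decompressed to read 1 byte"
done
```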
Long story short, we put in a bid to host Debconf 16 in Cape Town, and we got it!
Back at Debconf 12 (Nicaragua), many people asked me when we’re hosting a Debconf in South Africa. I just laughed and said “Who knows, maybe some day”. During the conference I talked to Stefano Rivera (tumbleweed), who said that many people had asked him too. We came to the conclusion that we’d both really, really want to do it but just didn’t have enough time at that stage. I wanted to get to a point where I could take 6 months off for it and suggested that we prepare a bid for 2019. Stefano thought that this was quite funny; I think at some point we managed to get that estimate down to 2017-2018.
That date crept back even more with great people like Allison Randal and Bernelle Verster joining our team, along with other locals Graham Inggs, Raoul Snyman, Adrianna Pińska, Nigel Kukard, Simon Cross, Marc Welz, Neill Muller, Jan Groenewald, and our international mentors such as Nattie Mayer-Hutchings, Martin Krafft and Hannes von Haugwitz. Now we’re having that Debconf next year. It’s almost hard to believe; I’m not sure how I’ll sleep tonight. We’ve waited so long for this and we’ve got a mountain of work ahead of us, but we’ve got a strong team and I think Debconf 2016 attendees are in for a treat!
Since I happened to live close to Montréal back in 2012, I supported the idea of a Debconf bid for Montréal first, and then for Cape Town afterwards. Little did I know then that the two would be the only cities bidding against each other 3 years later. I think both are superb locations to host a Debconf, and I’m supporting Montréal’s bid for 2017.
Want to get involved? We have a mailing list and IRC channel: #debconf16-capetown on oftc. Thanks again for all the great support from everyone involved so far!
Everything is set! The Debconf committee approved my accommodation sponsorship, my leave is confirmed and my airline tickets are booked, I’m going to Debconf 14!
It’s going to be a 30 hour trip from Cape Town to Portland (I could shave off around 6 hours if I pay 120% more, doesn’t seem worth it) and I’ll be there for the full 9 days from 23 August to 31 August. I last attended Debconf 12 in Managua, Nicaragua and it was fun, educational and productive. I’m really excited to see the Debian folk in person again and it will also be my first time in Portland. See you there!
The death of Gnome Panel
Gnome Panel (or more properly, gnome-panel) is the main dock that you would see in the Gnome 2 series desktop, and in the Gnome Fallback session (also called Gnome “Classic” in many distributions) in Gnome 3.
To provide the typical desktop experience, it’s also accompanied by Nautilus and Metacity along with a few other libraries (henceforth, gnome-panel’s friends). Gnome Panel and friends have recently been deprecated so that developers have more time to focus on Gnome Shell, the new default shell for Gnome that has a vastly simplified (and better) technology stack. Last November, Vincent Untz announced that he would stop maintaining Gnome Panel and friends beyond the 3.6 release, which means death for the project unless someone else takes it up.
I’ve been an avid user of the Gnome 2.x series and also Gnome Fallback in the 3.x series. I’ve gotten rather good at supporting it too. We include it by default in Edubuntu, and even have an option in the installer to make it the default for installations over Unity. It provides a low-footprint, fast and simple desktop experience with very reasonable usability, while being very configurable and lockdownable (my spell check says that’s not a word, but I don’t care).
I’ve been considering whether we should switch to having Xfce or LXDE as an alternative to Unity, but after discussing it with other Edubuntu contributors, it became clear that if I wanted to do that, I’d have to be willing to maintain it for Edubuntu by myself. In Edubuntu we’ve been pretty good at having at least 2 people interested in any side-project we pick up, and I’d like to keep it that way if we can. It means that if someone gets a bit busy, there’s someone who can pick up the slack for a little while. Also, Xfce and LXDE had big holes in usability, especially when it came to things like having multiple displays and running on laptops. I decided to put that project on the backburner a little, since Ubuntu 13.04 would still be using Gnome 3.6, which meant that we’d have the Fallback session for one more release anyway.
The Inevitable Fork
Ikey Doherty forked Gnome Panel to create a new environment called Consort, and Metacity was forked to become Consortium. The website where the Consort desktop environment used to live seems to be gone now, but here’s a link to some screenshots from Google+.
This caused a bit of a stir. Vincent Untz posted a good chronology of what led up to it and why he believes that a fork is a bad idea when the Gnome project has effectively put the upstream code up for adoption.
I’ve been interested in the Consort family since it could potentially be something that we could use in Edubuntu once the upstream gnome-panel is no longer in the archives. Also, while Gnome Shell, KDE Plasma Desktop and Unity are great and have come incredibly far in terms of stability and performance, it’s just not always for me. I want to be able to use it for myself in virtual machines, older machines and some other special cases (most notably, on LTSP).
Josselin Mouette, maintainer of Gnome in Debian, approached Ikey after some requests had been made for it in Debian. If you’ve read the post and the IRC logs linked, then you’ll probably agree that it could’ve gone a lot better. I’m not on the SolusOS IRC channel so I only saw the conversation after the fact, but I was disappointed, since it would need to go into Debian if I wanted to support it in Edubuntu. I think both Josselin and Ikey could’ve handled it better, but humans are just that, and emotions and misunderstandings happen.
And so I Bite
I was chewing a bit on Josselin’s comment on how the former maintainer “decided to give the key to anyone who wanted to”, and it had been several weeks since Vincent invited people to take over maintainership. I decided that I’d at least be willing to do the absolute minimum just to keep the project releasable every six months so that it can be included in distributions, maintain its online presence pages, bug tracker status and keep up with component changes in the stack. So I e-mailed Vincent and explained what I’m willing to do. I had very little resistance; Vincent sent an email out to the other stakeholders in the gnome-panel project and after a week, there were no objections. So here I am, brand new maintainer of the Gnome Fallback session and its components!
This means that the project is, at least for now, alive again. It’s not going to be part of the official Gnome 3.8 release (I still have to figure out exactly what that means), but there will be a 3.8 release of Gnome Panel and friends as tarballs and for people who maintain it in distributions, things will continue to work exactly as it did before.
- My complete primary goal for this at the moment is to ensure that gnome-panel, metacity, etc. are releasable alongside the Gnome 3.8 release. This basically means making sure everything builds, including any patches that we can, and releasing.
- Do something about the long buglist. The Gnome bug tracker has an ugly long list of gnome-panel bugs (939 at my last count). I want to eliminate all the stale Gnome 2.x gnome-panel bugs, a very large number of which are no longer relevant (at least at first glance). Then I’d like to do some regular posts to the mailing list and blog about a few prominent bugs every now and again and try to fix them and get people involved.
- Porting Metacity to GTK3. So here’s a bit of really good news. Josselin is also involved with this and one of his mid-term goals is to port metacity to gtk3. It’s something that I knew would have to happen, but I don’t have the skills to do that (yet) and I’m glad that he has taken this up. Josselin’s mid-term goals also include possibly adding support for the new notification system (if necessary) and adding support for the new Gnome global menu.
- Create a nice project page with goals, a to-do list, who’s involved and what they’re doing, and encourage more people to get involved. The current page is rather outdated so it would be nice to fix it. For now that mostly involves bringing the Gnome Panel Gnome Wiki page up to date.
- My pet peeve… intelligent launcher icons. Windows 7, Mac OS X, KDE, Unity and Gnome Shell have docks that work very similarly in many ways. You click on a launcher and those same launcher entries are recycled as your window list. Gnome Panel is a bit old fashioned in this regard. Many people use 3rd party panels and launchers just to get around this. I have thought for a long time that this should be fixed in Gnome Panel and long-term, it’s something that I’d like to see happen.
- Make the stack as downstream-friendly as possible. Regarding Ikey and Consort, I don’t actually think it was a completely horrible idea at the time. We live in a free world where we use free software and anyone is allowed to do whatever they want and fork whenever they want, and while that doesn’t necessarily mean it’s a good idea, it also doesn’t mean that we need to get all hissy about it. I’d actually be very interested in working with people who want to fork, finding out why they want to fork, and trying to reel them in closer to upstream. In the case of Consort, I think it would be most beneficial for both projects and all their users if Consort was a branch of Gnome Fallback, rather than a fork. Both projects use Git, FFS. I’ll reach out and try to minimize duplication of effort while not blocking anyone on experimenting with new features or implementing distro-specific changes.
- More metacity features. Metacity’s compositing features have come quite a long way. There are still a few bugs that need to be sorted out, but more than that, there are many window manager features that users have become accustomed to in pretty much all the other environments. Ikey has indicated previously that he wants to do this for Consortium. It’s one of the reasons I’ll be super-nice to him: I’d really prefer that he submit as much of that upstream as possible.
- Make everything worth configuring configurable and lockdownable. There are some settings that I get requests for from the users I support so often that it’s just getting boring. The Gnome 2.x series proved to work well in educational and corporate environments. I say we should play on that strength and make it even more so, while sticking 100% with the Gnome Human Interface Guidelines, of course.
Very Long-term Goals
Well, the fact is, Gnome Fallback will die. There’s a new project called Gnome Legacy that implements a Gnome 2.x-like experience in Gnome 3. As time goes by, “older” machines become more powerful, the missing pieces will be implemented, and eventually there will be no good reason left for anyone to want to run what we now know as Gnome Fallback. I think it could still have a good 3-5 years or maybe even more in it. Who knows, by then Gnome 4 might even be in development and all of this will be ancient history.
So, my very quick “Eek, I’m now maintainer of Gnome Panel!” post has become quite a lengthy one. If you have any questions, I’ll respond in the comments.
The War on Time
Whoosh! I’ve been incredibly quiet on my blog for the last 2-3 months. It’s been a crazy time but I’ll catch up and explain everything over the next few entries.
Firstly, I’d like to get out a few details about the last Ubuntu Developer Summit that took place in Copenhagen, Denmark in October. I’m usually really good at getting my blog post out by the end of UDS or a day or two after, but this time it just flew by so incredibly fast for me that I couldn’t keep up. It was a bit shorter than usual at 4 days, as opposed to the usual 5. The reason I heard for that was that people commented in previous post-UDS surveys that 5 days were too long, which is especially understandable for Canonical staff, who are often in sprints (away from home) for the week before the UDS as well. I think the shorter period works well, though it might need a bit more fine-tuning. The summary session at the end wasn’t that useful because, like me, people hadn’t had enough time to process the vast amount of data generated during UDS and give nice summaries of it. Overall, it was a great get-together of people who care about Ubuntu and also many areas of interest outside of Ubuntu.
I didn’t take many photos this UDS, my camera is broken and only takes blurry pics (not my fault I swear!). So I just ended up taking a few pictures with my phone. Go tag yourself on Google+ if you were there. One of the first interesting things I saw when arriving in Copenhagen was the hotel we stayed in. The origami-like design reminded me of the design of the Quantal Quetzal logo that is used for the current stable Ubuntu release.
The Road ahead for Edubuntu to 14.04 and beyond
This release will mostly focus on the Edubuntu Server aspect. If everything works out, you will be able to use the standard Edubuntu DVD to also install an Edubuntu Server system that will act as a Linux container host as well as an Active Directory compatible directory server using Samba 4. The catch with Samba 4 is that it doesn’t have many administration tools for Linux yet. Stéphane has started work on a web interface for Edubuntu Server. I’m supposed to do some CSS work on it, but I have to say it looks really nice already; it’s based on the MAAS service theme, and Stéphane has already made some colour changes and fixes to it.
From the Edubuntu installer, you’ll be able to choose whether this machine should act as a domain server, or whether you would like to join an existing domain. Since Edubuntu Server is highly compatible with Microsoft Active Directory, the installer will connect to it regardless of whether it’s a Windows Domain or Edubuntu Domain. This should make it really easy for administrators in schools with mixed environments and where complete infrastructure migrations are planned.
You will be able to connect to the same domain whether you’re using Edubuntu on thin clients, desktops or tablets and everything is controllable using the Epoptes administration tool.
Many people are asking whether this is planned for Ubuntu / Ubuntu Server as well, since this could be incredibly useful in other organisations who have a domain infrastructure. It’s currently meant to be easily rebrandable and the aim is to have it available as a general solution for Ubuntu once all the pieces work together.
Empowering Ubuntu Flavours
This cycle, Ubuntu is making some changes to the release schedule. One of the biggest changes made this cycle is that the alpha and beta releases are being dropped for the main Ubuntu product. This session was about establishing how much divergence from the main release cycle the Ubuntu flavours (Ubuntu Studio, Mythbuntu, Kubuntu, Lubuntu and Edubuntu) could have. Edubuntu and Kubuntu decided to be a bit more conservative and maintain the snapshot releases. For Edubuntu it has certainly helped so far in identifying and finding some early bugs, and I’m already glad that we did that. Mythbuntu is also a notable exception since it will now only do LTS releases. We’re tempted to change Edubuntu’s official policy so that the LTS releases are the main releases and the releases in between are treated more like technology previews for the next LTS. It’s already not such a far stretch from the truth, but we’ll need to properly review and communicate that at some point.
Valve at UDS and Steam for Linux
One of the first plenaries was from Valve where Drew Bliss talked about Steam on Linux. Steam is one of the most popular publishing and distribution systems for games and up until recently it has only been available on Windows and Mac. Valve (the company behind Steam and many popular games such as Half Life and Portal) are actively working on porting games to run natively on Linux as well.
Some people have asked me what I think about it, since the system is essentially using a free software platform to promote a lot of non-free software. My views on this are pretty simple: I think it’s an overwhelmingly good thing for Linux desktop adoption, and it’s been proven to be a good thing for people who don’t even play games. Since the announcement from Valve, Nvidia has already doubled performance in many cases for its Linux drivers. AMD, who have been slacking on Linux support for the last few years, have beefed up their support drastically with the announcement of new drivers that were released earlier this month. This new collection of AMD drivers also adds support for a range of cards where the drivers were completely discontinued, giving new life to many older laptops and machines which would otherwise be destined for the dumpster. This benefits not only gamers, but everyone from an average office worker who wants snappy office suite performance and fast web browsing to designers who work with graphics, videos and computer aided design.
Also, it means that many home users who prefer Linux-based systems would no longer need to dual-boot to Windows or OS X for their games. While Steam will actively be promoting non-free software, it more than makes up for that by the enablement it does for the free software ecosystem. I think anyone who disagrees with that is somewhat of a purist and should be more willing to make compromises in order to make progress.
Ubuntu Release Changes
Last week, there was a lot of media noise stating that Ubuntu will no longer do releases and will become a rolling release except for the LTS releases. This is certainly not the case, at least not any time soon. One theme that I’ve noticed over the last few UDSs is a growing desire to improve the LTS releases and to use the regular Ubuntu releases more and more for experimentation purposes.
I think there’s more and more consensus that the current 6-month cycle isn’t really optimal and that there must be a better way to get Ubuntu to the masses; it’s just the details of that better way that still leave a lot to be figured out. There’s a desire among developers to provide better support (better SRUs and backports) for the LTS releases to make it easier for people to stick with them and still have access to new features and hardware support. Having fewer releases between LTS releases will certainly make that easier. In my opinion it will probably take at least another 2 cycles worth of looking at all the factors from different angles and getting feedback from all the stakeholders before a good plan will have formed for the future of Ubuntu releases. I’m glad to see that there is so much enthusiastic discussion around this and I’m eager to see how Ubuntu’s releases will continue to evolve.
Lightning talks are a lot like punk-rock songs. When it’s good, it’s really, really amazingly good and fun. When it’s bad, at least it will be over soon :)
Unfortunately, since it’s been a few months since the UDS, I can’t remember all the details of the lightning talks, but one thing that I find worth mentioning is that they’re not just awesome for the topic they cover (for example, the one lightning talk session I attended was on the topic of “Tests in your software”); since they are more demo-like than presentation-like, you also get to learn a lot of neat tricks and cool things that you didn’t know before. Every few minutes someone would do something and I’d hear someone say something like “Awesome! I didn’t know you could do that with apt-daemon!”. It’s fun and educational and I hope lightning talks will continue to be a tradition at future UDSs.
Stefano Rivera (fellow MOTU, Debianista, Capetonian, Clugger) wins the prize for person I’ve seen in the most countries in one year. In 2012, I saw him in Cape Town for Scaleconf, Managua during Debconf, Oakland for a previous UDS and Copenhagen for this UDS. Sometimes when I look at silly little statistics like that I realise what a great adventure the year was!
Between the meet ‘n’ greet, an evening of lightning talks and the closing party (which was viking themed and pretty awesome) there was just one free evening left. I used it to gather with the Debian folk who were at UDS. It was great to see how many Debian people were attending; I think we had around a dozen or so people at the dinner, and there were even more who couldn’t make it since they work for Canonical or Linaro and had to attend team dinners the same evening. It was, as usual, great to put some more faces to names and get to know some people better.
It was also great to have a UDS with many strong technical community folk present who were willing to engage in discussion. A few people’s absence was still felt, but less so than at some previous UDSs.
I also discovered my face on a few puzzles! They were a *great* idea, I saw a few people come and go to work on them during the week, they seem to have acted as good menial activities for people to fix their brains when they got fried during sessions :)
Overall, this was a good and punchy UDS. I’ll probably not make the next one in Oakland due to many changes in my life currently taking place (although I will remotely participate), but will probably make the one later this year, especially if it’s in Europe. I’ll also make a point of live-blogging a bit more, it’s just so hard remembering all the details a few months after the fact. Thanks to everyone who contributed their piece in making it a great week!
Last weekend I was in Southwest Harbor, Maine again for the annual LTSP hackfest (called By The Sea). It’s a fun and productive event and as always it’s been good catching up with LTSP folk, even though we were missing Oliver, Alkis and Vagrant.
Here is a summary of what I can recall from the discussions of the weekend…
Recent Happenings in LTSP
- New LTSP Website – After a really long time, LTSP finally has a new website. It’s a big improvement, there’s a new success stories page and the wiki is now self-hosted.
- LTSP PNP – This is a new package (ltsp-pnp) that can safely be installed on existing machines to allow a user to log into an LTSP server using LDM.
- Libpamssh – This has been ongoing work in the LTSP project that will allow you to authenticate against another machine using SSH with local PAM. This will make it somewhat trivial to adapt LightDM (Light Display Manager) as a remote login manager and we can then do away with LDM (LTSP Display Manager) which currently has several big limitations. Scott Balneaves and Stéphane Graber made big progress on this over the weekend and it’s close to an initial release.
- New LTSP Cluster Control Center – Simon Poirier has been working on a new LTSP Cluster Control Center, the old one has been rusting away and other attempts at rewriting it didn’t quite work out. He did a demonstration of the proof-of-concept code and it’s looking quite nice already.
- Squashed Bugs – Marc Gariépy took some time to squash some bugs: LP: #996533, LP: #1048689 and LP: #1062947
- Documentation – I’m taking it upon myself to fix some problems we have with documentation. The lts.conf documentation is incomplete and difficult to maintain, so I will be going through the client/server code and tag all the possible settings that there are so that we can auto-generate documentation from it. David Trask will be helping out there and will be writing some nice descriptions for the config settings.
- LTSP 6.0 and the Future of LTSP – There was a good opening discussion about the future of LTSP. The combination of all the recent partial rewrites that Alkis has been doing, together with the deprecation of LDM, will culminate in what will be called LTSP 6.0. There was also some interest in having an LTSP standalone distribution again (that can be installed on non-integrated distributions) and things like an LTSP live client disc. We were also wondering about the future of pure thin clients: many upstreams are writing software that isn’t at all thin-client friendly (Clutter-based software like Gnome Shell and Totem, Unity, etc.) and at the same time, thin client hardware is becoming increasingly powerful. It’s possible that there may be a focus on making diskless fat clients work even better with LTSP and making it easier to use remote apps for running only certain applications on the application servers. We’re also quite interested in projects like FreeRDP for users who would still require pure thin clients.
- On Friday, Chuck Liebow took us out for a boat ride around the harbour on the Sea Princess
- On Saturday night we had the big lobster dinner as per BTS tradition (I had steak since I’m not a big seafood person). We were too busy eating and telling stories to be taking any pictures :)
- And last but certainly not least, I finally met Eric Harrison, who did a lot of work on K12-LTSP in the Portland school district. He has an awesome hobby where he builds guitars out of… well, almost anything he can find.
He brought along one of the experimental guitars he slapped together recently and told me all about how it’s put together. Even more amazing, he said I could have it! I was planning on getting an acoustic guitar anyway so I’m very thrilled about it. Not only is it a completely unique guitar but it sounds great too. I’m going to have to think of something to make back for him!
It was great seeing everyone again. I first started using LTSP around 9 years ago and never imagined back then that I’d get to meet the people behind it. Ron Colcernian sourced us some really cool LTSP tops that you can see us wearing in the group photo. I hope to get to BTS again next year!