Friday, September 9, 2011

Sacrificing gratis for libre

        Gratis and libre can both be translated as free, but there is a difference: gratis is free as in beer, and libre is free as in freedom, to quote the common explanation. Usually a piece of open source software fits both words, but should it?

        Everyone likes to get something for nothing. That's why price is one of the most commonly cited benefits of Linux and other free software. Many businesses and individuals turn to open source every year to cut their operating costs. In fact, my first taste of Linux came for the same reason.

        On the other hand, what if our free software tries to force us to do something? Or saves our data in a format other programs can't read? Or what if we want to port it to a different platform and the license forbids it?

        That's where libre comes in. Everyone has the right to use, modify, and redistribute the software. If you are a developer, you can fix bugs the maintainers don't have time for, add new features, and more. If you aren't, you can reap the rewards of other people's changes or hire a developer to make the changes for you.

        So far it sounds like a win-win. But what if a free software project runs out of money? Unlike commercial software, it isn't the number of users that matters; it's the number of contributors. If everyone downloads it but no one donates, the project will fail and people will have to start using a different program, perhaps one that isn't open source.

        As the title suggests, that is when we have to make a decision. Is getting something for nothing more important than being free to use your software in any way you choose? In my opinion, no. It's better to pay for freedom than use closed source for free.

        What are our options, then? A project could charge for support contracts as Red Hat does, charge only for commercial use, or adopt some kind of shareware-style agreement.

Of course, all of that would be unnecessary if we, the users, did our part voluntarily. The next time you open Firefox, go ahead and make a contribution. When you upgrade your distribution to the next version, take the time to send its provider a donation. Remember, every little bit helps.

Otherwise, you may find yourself purchasing your next update or having to find a new project.

Monday, August 15, 2011

Has Microsoft defeated Linux?

Apparently, Microsoft thinks it has defeated Linux. At least, it has removed Linux from its list of competitors.

Steven J. Vaughan-Nichols declares that they are mistaken. Windows has the desktop, but Linux has the other markets. It is true that Microsoft doesn't have much market share outside the desktop. And if the desktop truly is declining, will Microsoft decline with it?

I won't repeat all the places where Linux holds significant market share, but, with regard to the desktop, I will say this: not winning is not the same as losing.

Sure, Windows runs on way more personal computers than Linux, but it's been that way for twenty years. Linux hasn't lost the desktop because it never had it.

In fact, just by refusing to disappear, it has shown much more resilience than other operating systems like Amiga. Even the great Apple, which came on the scene much earlier than Linux and had a GUI, was reduced to a mere 7% market share.

Linux started with nothing: no hardware support, no apps, no user-friendly interface, no users. From that, it has made massive strides to become an OS that my parents can use. That's quite an improvement for something that has no company to back it up and is given away for free.

Sure, commercial applications are kind of scarce and it's hard to find a reasonably priced computer with Linux preinstalled, but that's no reason to throw in the towel.

I'm not saying that Linux will be number one next year or anything. However, I am saying that Linux is just as capable of taking on Microsoft as it has ever been. Probably fifty times more capable, and I don't see it weakening one bit.

Open Surface vs Open Core

According to this article from ZDNet, Microsoft has introduced the term "Open Surface" to describe its goals for the cloud. It's very likely that the term will soon be used in other contexts as well.

Basically, the term means that the source code is closed, but through open standards, different cloud implementations can work together and share data. So it looks open, but it isn't.

This sounds good to lots of people. They can import and export their data from place to place and everything will work fine. I'm all for that, and I agree that you can't tell just from using something whether it is open source or not.

On the flip side, just because the consumer doesn't see something as important doesn't mean that it isn't. It is very important to me that the lower levels of software are open: the kernel, shell, graphics engine, compilers and interpreters, core libraries, and so on. Everything that happens above those levels depends on them to work properly, be secure, and not do something evil. I don't mind commercial applications as long as other software doesn't depend on them. That sounds like Open Core to me.

Changing sides again, if my system is open but anything I write in my word processor can't be opened on another computer without spending lots of $$$ for another license, I won't be too happy. And if the open core of my system has to meet the demands of the commercial software companies, it might not feel very open or work very well.

It's all about control. Is the user in control, or the software provider? Or, in the cloud, is it the company that uses and maintains the cloud, or the company that designed it?

So what is the conclusion? Which one is best? The answer, of course, is neither. Real Open Source is best: an open surface that extends clear to the core. But then, how will MS make money?

Sunday, August 14, 2011

Top secret productivity recipe

With all the phones, tablets, netbooks, and other devices running around, the desktop is starting to decline. Many people do some or all of their work on a mobile device.

Everyone has their own system, and it's not up to me or anyone else to tell you that yours is right or wrong. However, those little phone screens and buttons just don't work for me. I have a laptop, and I love it for the freedom and mobility it gives me, but after struggling with that touchpad and fumbling with those tiny Home/End/Delete buttons, I want to come back home to my ultimate productivity place. Here is the recipe:
  • Set up one fairly powerful desktop computer with a large monitor, a full-size keyboard, and a mouse.
  • Install whichever Linux distribution you prefer.
  • Connect it to the network, update it, and install a Secure Shell server.
  • Add the essential supporting software you need to get your work done; an X server, Python, Firefox, and G++ are enough for me.
  • Next, set up less powerful computers beside it with large monitors and keyboards, but no mice. Someday I hope to build a few tiny computers just for that purpose. Right now I just have an old desktop I picked up for free, though sometimes I'll use my old laptop as a second one.
  • Install an extremely minimal Linux on each of these, just enough to connect to the network and run a Secure Shell client. (A sketch of these steps follows the list.)
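
For the curious, here is a minimal sketch of those steps on Arch, which is what I run. The rc.d commands, package names, and the bigbox hostname are assumptions; adapt them to your own distribution and network.

    # On the main computer: install and start the Secure Shell server.
    pacman -S openssh
    /etc/rc.d/sshd start            # add sshd to the DAEMONS array in
                                    # /etc/rc.conf to start it at boot

    # Supporting software; swap in whatever your work needs.
    pacman -S xorg-server python firefox gcc

    # On each terminal machine: only the SSH client is needed.
    pacman -S openssh
    ssh bigbox                      # bigbox stands in for the main
                                    # computer's hostname or IP address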

Now you have it! The extra computers act as terminals to the big computer, allowing you to use its resources more effectively. I generally edit all my programs on one and test them on the other. It's even better if you use a console music player that supports multiple clients, like mpd or moc.
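
As an example of the multiple-client trick, mpd runs as a daemon on the main computer and takes commands from any machine on the network. This sketch assumes mpd is set to listen on the network (bind_to_address "any" in /etc/mpd.conf) and again uses bigbox as a stand-in hostname:

    # On the main computer, start the music player daemon:
    mpd

    # From any terminal, drive it with a client such as mpc:
    MPD_HOST=bigbox mpc toggle      # play or pause from across the room
    MPD_HOST=bigbox mpc playlist    # see what's queued up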

You might wonder why I don't just use multiple monitors. I could open a full-screen terminal in one of them.

I could, but that would not be nearly as awesome :), and I would have to share one keyboard between the two screens. Besides, the terminals can perform automated tasks, serve as guinea pigs for testing unstable software, and handle anything else I don't want to do on my main computer.

There you have it, my secret recipe for being productive. I don't recommend it for everyone, especially new users. It's only good for people who know the command line and use it more than the GUI. However, I do insist that the desktop is far more efficient, besides being cheaper, more powerful, and easier to fix. Happy birthday, IBM PC!

Friday, August 12, 2011

Mate review on Arch Linux

In my last post, I reported that a Gnome2 fork, Mate, was available. I use Arch Linux, so I followed the instructions here to add the custom repository. There is also source code available, and packages are being built for Debian/Ubuntu, Fedora, and Gentoo.
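
For anyone who hasn't added a custom repository on Arch before, the change to /etc/pacman.conf has this general shape. The URL below is a placeholder, not the real Mate server; take the actual address from the linked instructions:

    # /etc/pacman.conf: add the new section above the official repositories.
    [mate]
    Server = http://example.com/mate/$arch    # placeholder URL

    # Then refresh the package lists and install:
    pacman -Sy mate mate-applets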

Downloading took a while. The server was somewhat slow (slower than my Internet connection) and would drop the connection periodically. It wasn't too bad, though. After running the command four times, I had it and the mate-applets package installed.

Next, I stopped TWM and ran mate-session. For some reason, the process kept stopping when run in the background. Most people would use a display manager and never notice the problem.
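
My guess is that shell job control is the culprit: a background job that tries to read from the terminal is suspended with SIGTTIN. If you want to try Mate without a display manager, either of these should sidestep it (the log file name is just my choice, and back up any existing ~/.xinitrc first):

    # From a console, start X straight into Mate:
    echo "exec mate-session" > ~/.xinitrc    # overwrites ~/.xinitrc
    xinit

    # Or, inside a running X session, detach it from the terminal:
    mate-session < /dev/null > ~/mate-session.log 2>&1 &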

The resulting desktop looked exactly like Gnome2, and it felt good to be back :). I made the panels transparent, added a few applets and some panel shortcuts.

Then I tried to delete the default desktop icons with mate-conf-editor. However, I couldn't find a setting for it like the one in gconf-editor. Desktop icons have always bothered me for some reason, the same way that bugs on my windshield do. I guess I'll just have to overlook them for now.

I tried to change the default theme, but no others were available and the mate-themes package gave me errors. The background changed with no trouble, though.

I also noticed that the eyes applet is missing and that the transparent/opaque bar under panel properties doesn't move.

Well, that's all there is to report for now. The issues I listed are minor, and I'm sure they will be fixed in the future. Live on, Gnome2!

Here are some screenshots I took after installing the gnome-utils package:

Mate keeps Gnome2 alive

When I used Linux for the first time, the desktop environment was Gnome2. I liked it as soon as I saw it, and the more I used it, the more impressed I became with its simple default menu, how easily it could be customized, and the wealth of applets and themes.

As I advanced in my Linux knowledge, I started trying other desktop environments and window managers, but I kept coming back to Gnome because none of the others let me work half as efficiently.

When I became a command line guru, I began to favor smaller environments that were lighter on resources and more reliant on a terminal. Now I am firmly settled on TWM.

Nevertheless, I was still very sorry to see Gnome2 replaced by Gnome3 and Unity.

Fortunately, Gnome2 has been forked. The new DE is called Mate. The homepage can be found here.

Thursday, January 6, 2011

Ubuntu is a "real" Linux, no matter what anyone says

Recently I have heard several people remark that Ubuntu is not a "real" Linux distribution. It's not surprising that they got that idea, given the amount of criticism Ubuntu gets in a typical week. They assume that being beginner-oriented means it has to be dumbed down.

Ubuntu was my introduction to Linux. I used it as my primary OS for eight months. During that time I dual-booted it with a dozen other distros and ran countless more in VirtualBox and on my junk computers. Then my filesystem got corrupted and I decided not to reinstall it. Now I am happily using Arch and Debian testing while still keeping Ubuntu up to date in VirtualBox.

Anyway, I can tell you that Ubuntu is very much a traditional Linux distribution. You can get to the command line with a simple Ctrl+Alt+F1 through F6, updating is a simple sudo apt-get update && sudo apt-get upgrade, and you can use any window manager you please with the xinit command or from your display manager. You also benefit greatly from the Debian repositories.
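
For instance, switching to a different window manager takes only a couple of commands. Openbox is just an example here; any window manager in the repositories works the same way, and note that the echo overwrites any existing ~/.xinitrc:

    # Update the whole system:
    sudo apt-get update && sudo apt-get upgrade

    # Install an alternative window manager and start X with it:
    sudo apt-get install openbox
    echo "exec openbox" > ~/.xinitrc
    xinit          # run this from a console (Ctrl+Alt+F1), not inside X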

So what are the differences? It is easier to install drivers, proprietary plugins, and codecs; there are lots of visual enhancements, like putting the music player controls in the volume menu; and it uses newer software that in some cases is not fully tested.

There are also a lot of changes in store. Unity will replace Gnome in 11.04, and Wayland will replace Xorg sometime in the future. There's no point in worrying about Unity, since it's just a window manager, but Wayland will require applications to be rewritten and needs a faster video card. However, I suspect that X will still be provided as an optional package, and it may be possible to have both installed at once and start whichever one you want.

My feeling is this: every distro has the right to do things its own way. That's what Linux is all about. Proprietary codecs are a must for non-technical users. As for the software, you always have to balance being stable against being up to date; you can't have both, and where the happy medium lies depends on the usage.

There are things I don't like about Ubuntu, and since it is often seen as the face of Linux, any problem with it reflects on Linux as a whole. So if you don't think it is suitable, don't recommend it. If someone complains about an Ubuntu-only issue, refer them to a different distro. But if someone uses it and is happy with it, don't try to tell them that they shouldn't be, or that it is in some way inadequate. Ubuntu is happy to give its users complete control over their environment, and they can learn quite a bit if they are so inclined.