I've tried for over ten years to switch from Windows to an Open Source operating system, and my current conclusion is that no single Open Source operating system meets all my needs at this time. I had hoped this page would be about how to find and install the right Open Source OS for my desktop machine. I wanted to put together a page similar to the one I wrote detailing how I found a usable operating system for my old laptop. However, I've been unable to find that OS for my main computer.
Table of Contents:
- To the main page
- A list of what I need most from an operating system
There are situations when an Open Source operating system makes sense.
If you want to run a more modern operating system but have an older machine with very limited resources, and it doesn't pay to upgrade the hardware, then an Open Source operating system may be the way to go. Keep in mind that most modern operating systems aren't designed for older machines and usually don't work well on them. There are modern options that may work fine on older machines, such as the BSDs, a Debian Linux network installation and Windows Fundamentals for Legacy PCs. There are also specialty Linux distributions for machines with low resources, but most don't work well on very low memory machines. Also, many smaller specialty distributions have such small software repositories that you'll probably end up installing the software once and never wanting to update again, because it's too much work to rebuild everything. That negates the point of running a modern operating system.

I've heard several people recommend the Debian Linux network installation for low resource machines. Debian has one of the largest software repositories. The main drawback is that you'll need some type of working Internet support to use this installation route. You can sneakernet packages from another system with Internet access, but even with a few tools out there to help you, like the Keryx project, it's not a fast process and not always an easy one.

If you can get framebuffer support working in Linux, it can really improve performance on older machines. However, you need to run programs that make use of the framebuffer, and many older machines don't have the hardware needed to run programs that way. I don't know much about the Windows Fundamentals operating system, but at least Microsoft is making some effort to give a viable option for keeping older machines out of landfills. If a BSD system has support for whatever hardware you need on your old machine, it makes a very good alternative.
If you have to run a modern operating system on older hardware, I've found FreeBSD offers the most efficient blend of responsiveness and hardware support for legacy systems. All the other systems I tried on my laptop either lacked the hardware support or lacked the speed; FreeBSD provided the best trade-off for me on all these points. Depending on your machine's hardware and the types of software you want to run, you may find another system better suits your needs.
Finding the right operating system is mainly a matter of finding the right balance of trade-offs. I've seen some partial solutions to the backward compatibility issues. I've been looking into installation/package management solutions like XStow, Zero Install, NixOS, GoboLinux, PortableLinuxApps and AppImage. I've also seen some interesting options for binary compatibility, including Mastadon Linux, Glendix and sta.li. The LSB (Linux Standard Base) looks promising too and may be the best way to go, especially in combination with other solutions.
As a programmer who likes to build and install several open source programs on my operating systems, I keep coming back to the problem of what to do when it comes time to update the system. The way things currently work with the GNU compiler chain, you're supposed to rebuild and reinstall everything from scratch. This isn't an issue for interpreted programs. However, I'm a C/C++ programmer and I like the speed and performance of a good C/C++ program when I can use one. The way Linux distributions typically deal with this issue is either to have dedicated machines rebuild all the necessary source (like the openSUSE Build Service) or to have enough people in the community to spread the work of rebuilding out among volunteers who repackage their favorite programs (like Debian's packaging system). I consider neither solution highly satisfactory, and I keep coming back to the fact that none of this would be necessary if the libraries were more backward compatible. On Windows, MinGW (Minimalist GNU for Windows) links against msvcrt.dll (the Microsoft runtime library) already available on all modern Windows systems, and users never run into the issue of needing to recompile all their software when the compiler is updated. They often don't need to update their software when their operating system is updated, and if they do, there's usually some program that can provide emulation so the old software can continue to be used as is.
Software engineering emphasizes that programming interfaces (such as APIs) are a contract between the library or routine developer and the user of the library or routine. If you create a good design ahead of time before coding, you should never need to break that contract. At all the companies I've worked for, we've always emphasized separating the parts of the program that might change rapidly from the rest of the program using solid, stable, minimal interfaces. Of course, the parts of the program that changed most often were either hardware related or third party libraries (not our own). By separating these parts out, whenever a change was required, once the corresponding module that communicates with hardware, interfaces with drivers or encapsulates a library was updated, the whole system worked fine. We didn't need to rewrite every line of the program. Parts we depended upon that could change often were not intermingled throughout the rest of the program.
From what I've read, many Open Source users prefer rapid development and improvements in their software over backward compatibility. That may well be their choice, but as a programmer, my choice in design has always been backward compatibility first. That doesn't mean lumping all the backward compatibility into one program and making it inefficient or bloated. Take, for example, the common need to read a file with settings for a program. Often, the format changes over time as the program evolves. If forethought is put into the design, one could write a parsing routine smart enough to deal with the changes. That's where a keyword-based file format like XML has advantages over files with fixed-size, fixed-location strings. However, if the program started out with another, less useful file format and evolved to a better format over time, it doesn't need code to decipher every file format it has ever supported. It just needs to support the latest format and can provide separate conversion tools to bring older formats up to date.
Many Open Source advocates seem okay with trading backward and binary compatibility for faster development cycles. However, I don't feel I should have to give up the ability to take software to another version of a distribution just so someone can get in the latest and greatest changes, improvements, features and bug fixes. As a designer, it's just not a trade-off I would be willing to make. One of the ideas I keep hearing touted about Open Source is that you have freedom of choice and the ability to choose whatever software you want to run. There have to be other people like me who prefer backward compatibility in their design over rapid development. As evidence, there are groups working on projects that emphasize design decisions other than rapid development. Projects like sta.li are working on stable, statically linked executables that do one job well and an operating system to run them on. Other Open Source operating system alternatives to Linux, such as Syllable, claim to emphasize backward compatibility in their design goals.
The biggest obstacle to compatibility doesn't appear to be the Linux kernel; from what I've read, its interface is fairly stable. It's the C/C++ compiler and runtime. Maybe another C/C++ compiler option would solve the issue. There's a FreeBSD project to switch over to LLVM. There's talk of Android's new runtime library, Bionic. Windows provides many compiler options; if you don't find one that suits your needs, there's always another. Perhaps, if Open Source operating systems like Linux had such choices, compatibility would no longer be that much of an issue. A Windows compiler option that may become available for Linux users is Open Watcom. At one point, it was known for creating some of the most highly optimized binaries of any commercial C/C++ compiler available on Windows, and its developers are currently in the process of adding support for Linux development.

If you really need to use the GNU compiler chain to build something, using the LSB standards may help improve the situation. Statically linked programs are usually much easier to share between distributions or machines. Most compilers will let you build libraries that are either statically or dynamically linked. The GNU compiler suite complies in a sense, but you can never get a fully statically linked program with the latest GNU C/C++ compilers, because the NSS (Name Service Switch) functionality must be dynamically linked.
Coming from a Windows environment, I like the idea of having all my code in one executable. Having many DLLs on a system has only been a problem for me when they intermingle. Windows looks for DLLs in the same directory as the program first and then in the path. As long as you keep the DLL with the program and the program in its own directory, things work fine. POSIX operating systems usually commingle all their like files, so shared libraries end up together in a lib directory and executable programs end up in a bin directory. If two libraries have the same name, you have issues.

If you want to keep your shared libraries local to your executables and separated from other software, as you can do naturally on Windows, you need work-arounds like those used by PC-BSD or GoboLinux. You can set LD_LIBRARY_PATH to point to the location of needed shared libraries before you run an executable, or you can set up links to shared libraries so that they're found where your system normally expects them even when they're not physically stored there. There's an easier way to do the same thing that works much more like the Windows environment: you can use the -rpath linker switch along with $ORIGIN to specify relative paths to search for libraries. However, I did see mention of possible security issues with this route, though I couldn't find anything specific about what those issues were.

I tried asking other developers and searching for more information on why one would prefer shared libraries to static libraries. I have to say, none of the explanations I found were persuasive for my particular use cases. You can check the sta.li project for some interesting pros and cons. Unfortunately, from what I've read, just choosing to build all your software statically does not fix the backward compatibility issues in Linux. You would need a different compiler (or at least a different set of runtime libraries) with properties different from the standard GNU compiler to do that.
Finally, I need to discuss the trade-off of time. I've been thinking about all the time taken to look for solutions, to install or, in many cases, attempt to install Open Source operating systems, to set them up and then to set them up again when it's time to update. If you think of your hours based on how much money you could earn at your job, does it really pay? I finally understand Microsoft's argument that commercial operating systems cost less than Open Source ones. Unlike Microsoft, I don't consider training as a cost. It's a learning experience. I also don't consider getting to know the differences in configuration files between Open Source operating systems a significant cost. DOS and Windows have gone through similar drastic changes, losing autoexec.bat and config.sys files and going toward a registry. There's similar time and effort involved in either route. I do consider setting up my hardware and my software so that I have a functional system I can get work done on as a cost. If I can upgrade a commercial system in a matter of hours without fast Internet access, have all hardware drivers working and just copy my pre-built programs over and have them work, it more than pays for the cost of the commercial operating system. Some people will be able to do the exact same thing with an Open Source operating system based on their particular needs. However, I've yet to find a way to do so based on my needs. For me, it can take months of work to rebuild all my programs from source and figure out how to get specialty hardware to work if it's even possible.
I've seen a lot of glimpses of where I want to be with an Open Source operating system. I've seen a lot of possible solutions to get me closer to the particular set of trade-offs I want to make when choosing an operating system. I've yet to find a system that's ready to be installed and has enough of those solutions in place that I can concentrate my time on the tasks I want to do (building software and using applications). If someone knows of an Open Source system with design goals similar to those I'm personally looking for, please let me know. I'd be more than happy to help out development by using my software engineering skills to convert programs to work on it properly or help to document how to use the system. I've been trying for over ten years to switch and I intend to keep looking. I hope someone can prove me wrong, but I just don't think my ideal Open Source operating system has been created yet.
To the main page.