Installing software on Linux without root : managing packages in user space

First step : don’t. Reconsider. Is there really no alternative ? Using something else that is already installed ? Sweet-talking your sysadmin into doing the installation ? Giving up that particular task ? Giving up Computer Science altogether and moving to the country to raise pigs ?

Ok, so you really don’t have an alternative. May the gods have mercy on your soul, because Linux won’t. By necessity, this won’t be a step-by-step guide, because each system has its quirks. I’m not promising you heaven, just a tolerable purgatory instead of a living hell.

Take a deep breath.

(And as always : follow those instructions at your own risk. If you turn your Linux box into a Linux brick, or into Skynet, I’m not liable.)

The problem : dependency hell

There’s a reason why installing software from sources is so painful : dependencies. Sure, you just want to install Caffe, or Torch7, or Theano. But Theano needs python, python needs openssl, openssl needs… it’s endless.

High-level package managers like apt-get and yum are so popular because they deal with those. When installing from source, you’re on your own.

But here’s the catch : when installing from sources, you can almost always relocate the software to your ~home, bypassing the need for root access. High-level package managers, at least the current generation, can’t relocate.

Except for Homebrew.

The strategy : Linuxbrew

Homebrew was created as “the missing package manager for OS X”, and is required to do anything interesting on a Mac. It was designed around two principles : installation in the user’s home, and installation from sources.

Say that again ? Installation in the user’s home, without the need for root. From sources. Wow. If only there were a version for Linux ! Enter Linuxbrew. Homebrew’s concept was so successful that, in an ironic turn, it’s now becoming “the missing package manager for Linux”.

So, case closed ? Hardly. To start, Linuxbrew has dependencies of its own, and you have to take care of those by hand. Then, the software you want to install has to be available as a brew “formula” (but the list is quite comprehensive, and growing). Finally, it doesn’t always go smoothly. Linuxbrew is a much bumpier ride than Homebrew/OS X, at least for now. Most formulas will install without issue, but a good 20% will require tweaking, googling, and deciphering forum posts.

The strategy is most definitely not user-friendly. But, contrary to installing each and every package by hand, it is just user-bearable enough to be feasible. If you really need the software. (Are you sure you don’t prefer the pig farm ?)

Okay, you are sure.

Our strategy will be :

  1. Ensuring Linuxbrew dependencies ;
  2. Installing and configuring Linuxbrew ;
  3. Using Linuxbrew to install the desired software… ;
  4. …or if you’re unlucky, using Linuxbrew to install the desired software dependencies, and then installing the desired software by hand.

The tactics

Installing Linuxbrew dependencies

Linuxbrew has, fortunately, few dependencies : Ruby, GCC, Git, and… Linux (duh !). It runs on x86 or 32-bit ARM platforms. If you’re running Linux on another architecture, this is your cue to break down sobbing.

Most systems will, fortunately, have those dependencies already installed. You can check the minimal versions currently required by Linuxbrew, and then check the versions installed on your system (and whether they are installed at all) by calling the commands with --version :

$ ruby --version
ruby 1.8.7 (2011-12-28 patchlevel 357) [x86_64-linux]

$ gcc --version
gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973]
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

$ git --version
If 'git' is not a typo you can run the following command to lookup the package that contains the binary:
command-not-found git
-bash: git: command not found

Linuxbrew won’t turn your dependency hell into a package management heaven, but it might turn it into a tolerable purgatory.

As you see, I got almost lucky. Ruby and GCC are good to go, but I’ll have to install Git (and its dependencies).

The trick is being minimalist. Many packages have optional features : extra commands, documentation, GUI interfaces, etc., that will fire zillions of extra dependencies. Get rid of those optionals as much as possible.

I recommend a top-down plan of attack. Start with the package you want to install and try to configure–make–install it. If it breaks (and it will, it will), find out which dependency/configuration is missing and correct it. Do this recursively, depth-first, until you get everything installed.

Resist perfectionism. You might spend a lot of time smoothing out every wrinkle of package-1.73.03 just to find a bit later that it breaks your installation and has to be removed to make room for package-1.73.04. This is war, kid : not a time for bells and whistles. Once you get a dependency working, move on to the next.

In more detail, each cycle will consist of :

  1.  Finding, downloading, and unpacking the source package ;
  2.  Configuring the package to work with a $HOME prefix ;
  3. Building the package ;
  4. Installing the package.

Step 1 is usually trivial after a bit of googling. If your Linux distribution is Debian-based, you might be able to use a single command-line operation :

apt-get source git

There are similar facilities for other high-level package managers.
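
On RPM-based systems, something along these lines usually does the trick (a sketch, assuming the yum-utils package, which provides yumdownloader, is available) :

# fetch the source RPM for git into the current directory
yumdownloader --source git
# unpack it without installing anything system-wide
rpm2cpio git-*.src.rpm | cpio -idmv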

Otherwise, you might download either a compressed source file, or even the bleeding-edge version from the source repository (SourceForge, GitHub, etc.). In the case of Git, the latter would be at https://github.com/git/git. (Be careful, because those hyper-up-to-date versions might be unstable.)
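
For Git, a minimal download-and-unpack sketch would look like the lines below ; the URL and version are just the ones I used at the time, so check the project’s download page for the current release :

# grab a release tarball and unpack it
wget https://www.kernel.org/pub/software/scm/git/git-2.1.4.tar.gz
tar xzf git-2.1.4.tar.gz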

Step 2 varies a bit from package to package, but usually consists of calling a ./configure script. Sometimes pre-configuration is involved : a call to make configure or make config, or another script, e.g., ./buildconf. Sometimes it involves cmake (cross your fingers for having autoconf/automake already installed). Sometimes there’s no step 2 at all, all options being passed directly to make during step 3. It varies.

How will you know ? Try to locate an INSTALL.* or README.* file. Usually the instructions are there. No luck ? Try browsing the official website of the package for installation instructions. Googling <package> installation instructions will usually point you in the right direction.

For git, this will work :

cd git-2.1.4/
./configure --prefix=$HOME

Well, sort of. It will probably break, because one or more dependencies will be missing. Install those (and their recursive dependencies) and try again.

Step 3 is almost always :

make

or sometimes :

make all

Sometimes this is the moment when things break down for lack of dependencies (or wrong versions, or wrong settings, or the universe showing its lack of care). Sometimes the --prefix=$HOME option comes here instead of Step 2.

Step 4 is almost always :

make install

If you set the prefixes right, that will automagically put everything in the right place, under your ~home directory. And you won’t need root permissions.
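
A quick sanity check, assuming you used the $HOME prefix as above, is to look for the freshly installed binaries under your home, not under /usr :

ls ~/bin/git          # the executable should land here, not in /usr/bin
~/bin/git --version   # and it should run without any root involvement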

Got it ? Good. I hope you enjoy typing commands at the command line : you’ll be doing it all day. For extra enjoyment, get a second monitor and close the shades.

Installing Linuxbrew

Once you have all dependencies working, installing Linuxbrew itself is a breeze :

git clone https://github.com/Homebrew/linuxbrew.git ~/.linuxbrew

Aaand… that’s it. It won’t work immediately, because you have to set the paths (see below). After you do, you can simply type :

brew install $WHATEVER_YOU_WANT

And it should take care of everything.
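
For instance (wget here is just an illustration, any formula would do) :

brew install wget
which wget    # should now point to ~/.linuxbrew/bin/wget, not to the system copy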

Before you do that, however, it is a good idea to call

brew doctor

and check if everything is ok. Again, be minimalist : you don’t have to correct every tiny issue. Take a good look and make the smallest needed intervention.

Linuxbrew comes ready with a lot of recipes for installing packages, or, as they call them, formulas. You can keep them up to date by typing

brew update

Depending on what you want to install, however, you’ll need extra formulas. In Homebrew/Linuxbrew parlance this is called tapping. For example :

brew tap homebrew/science

will add a lot of new formulas related to science, data analysis, etc.
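
Once tapped, those formulas behave like any other. For example (OpenCV lived in homebrew/science when I tried this) :

brew search opencv    # the new formula should now show up in the search
brew install opencv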

PATH configurations

Both phases (the manual installation of dependencies, and Linuxbrew operation) won’t do you much good if your paths aren’t configured. There are at least four important paths, maybe more depending on your setup : executables (PATH), static libraries (LIBRARY_PATH), dynamic libraries (LD_LIBRARY_PATH), and include files (CPATH).

The usual place to set those up is your shell configuration file. The examples below assume you’re using bash. If that’s your case, decide whether .bashrc or .bash_profile is better for you (usually it’s the former).

During the manual installation of dependencies add the following lines :

# Manually installed packages
export PATH="$HOME/bin:$PATH"
export LIBRARY_PATH="$HOME/lib:$LIBRARY_PATH"
export LD_LIBRARY_PATH="$HOME/lib:$LD_LIBRARY_PATH"
export CPATH="$HOME/include:$CPATH"

During Linuxbrew operation, add these additional lines :

# HomeBrew / LinuxBrew
export HOMEBREW_PREFIX="$HOME/.linuxbrew"
export PATH="$HOMEBREW_PREFIX/bin:$PATH"
export LIBRARY_PATH="$HOMEBREW_PREFIX/lib:$LIBRARY_PATH"
export LD_LIBRARY_PATH="$HOMEBREW_PREFIX/lib:$LD_LIBRARY_PATH"
export CPATH="$HOMEBREW_PREFIX/include:$CPATH"

Remember that shell configurations are not effective immediately, only on the next start. You don’t have to reboot the system : simply closing and reopening the terminal, or logging out and back in, suffices.
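
Alternatively, you can apply the changes to the current shell right away by sourcing the file (assuming you put the lines in .bashrc) :

source ~/.bashrc
which brew    # should now point to ~/.linuxbrew/bin/brew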

An ugly ugly ugly workaround

During my installations, I faced an issue with CA certificates that I could not bypass. Many formulas would refuse to proceed, stopping during download with the error : “cURL error 60: SSL certificate problem: unable to get local issuer certificate”.

Yes : I tried downloading updated certificates from Mozilla Corp. Yes : I checked my curl-config --ca. Yes : I tried reinstalling cURL. And Git. And OpenSSL. I spent, literally, hours trying to solve the problem in an elegant way.

I concede defeat. Here’s the very inelegant solution. Be aware that it opens your package manager to man-in-the-middle attacks. That is more than a theoretical risk : it has been done. This is a huge security hole. If you decide to apply it, don’t do it preemptively : wait to see if you’ll actually get the SSL certificate problem.

So you got the error, and you’re willing to expose your neck to the wolves ? Sucks to be you. Open the file download_strategy.rb at ~/.linuxbrew/Library/Homebrew and find the lines below

# Curl options to be always passed to curl,
# with raw head calls (`curl -I`) or with actual `fetch`.
def _curl_opts
  copts = []
  copts << "--user" << meta.fetch(:user) if meta.key?(:user)
  copts
end

Change line four to

# Curl options to be always passed to curl,
# with raw head calls (`curl -I`) or with actual `fetch`.
def _curl_opts
  copts = ["-k"] # Disable certificate verification
  copts << "--user" << meta.fetch(:user) if meta.key?(:user)
  copts
end

And that’s it. You’re ready to proceed with installing source packages. And to be a victim of cyber mafias, and of terrorists, and of tyrannical governments.

(Note to security people : if your watertight security solution makes a system unusable, guess what will happen ?)

Extra tips

First and foremost, source code relocation is not a panacea. Some things require root access : driver installations, kernel recompilations, boot sector modifications, etc. You might want to check whether your software requires one of those before you start this whole adventure.

You can learn a lot about a formula with the info option

$ brew info python
python: stable 2.7.10 (bottled), HEAD
Interpreted, interactive, object-oriented programming language
https://www.python.org
Not installed
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/python.rb
==> Dependencies
Build: pkg-config ✘
Required: openssl ✘
Recommended: readline ✘, sqlite ✘, gdbm ✘
Optional: homebrew/dupes/tcl-tk ✘, berkeley-db4 ✘
==> Options
--universal
  Build a universal binary
--with-berkeley-db4
  Build with berkeley-db4 support
--with-poll
  Enable select.poll, which is not fully implemented on OS X (https://bugs.python.org/issue5154)
--with-quicktest
  Run `make quicktest` after the build (for devs; may fail)
--with-tcl-tk
  Use Homebrew's Tk instead of OS X Tk (has optional Cocoa and threads support)
--without-gdbm
  Build without gdbm support
--without-readline
  Build without readline support
--without-sqlite
  Build without sqlite support
--HEAD
  Install HEAD version

Take a good look at the --without-* options because they are sometimes a lifesaver. Some packages have optional GUI extras. They might fire hundreds of useless extra dependencies — especially if you are installing on a headless server.
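
If you’re not sure which knobs a formula exposes, brew will list them for you. The flags below are merely illustrative (taken from the python formula above) ; each formula has its own set :

brew options python
brew install python --without-gdbm    # skip an optional dependency you don't need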

Sometimes Linuxbrew breaks down for the lack of a dependency, refusing to install it automatically, but will gladly do it if you ask explicitly. For example : brew install package1 breaks for the lack of package2, and all it takes is typing brew install package2 and then retrying package1. Mysteries.

Installation time is highly unpredictable. Sometimes a small innocent little package will require a precise version of GCC… that Linuxbrew will then have to install from the sources. Time for a tea.

If your installation becomes so corrupted with conflicting packages that you have to restart from scratch (nooooooo !), it can be — small consolation — accomplished easily :

rm ~/.linuxbrew -rf

For extra OCD cred, clean up the download cache as well :

rm ~/.cache/Homebrew/ -rf

If the whole thing becomes so messed up that you have to scratch even the manual dependencies (two words : pig farm), it is also easily done :

rm ~/bin/ ~/lib/ -rf

You might also consider :

rm ~/include ~/etc -rf

but be careful because that might erase innocent third parties.

You might be forced to install multiple versions of the same package. That adds another nightmare layer to the ongoing nightmare, but it’s doable. Linuxbrew will usually be friendly enough to tell you what to do.

For example, when I had to install both opencv2 and opencv3 I got this :

opencv3 and opencv install many of the same files.

Generally there are no consequences of this for you. If you build your
own software and it requires this formula, you'll need to add to your
build variables:

    LDFLAGS:  -L/home/valle/.linuxbrew/opt/opencv3/lib
    CPPFLAGS: -I/home/valle/.linuxbrew/opt/opencv3/include
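
In practice, that means exporting those flags (with the paths adjusted to your own home, of course) before configuring whatever you’re building against the keg-only version :

export LDFLAGS="-L$HOME/.linuxbrew/opt/opencv3/lib $LDFLAGS"
export CPPFLAGS="-I$HOME/.linuxbrew/opt/opencv3/include $CPPFLAGS"
./configure --prefix=$HOME    # or whatever build step your own software uses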

Those little displays of care are the reason why I like Homebrew/Linuxbrew so much. Love breeds love : a truism even for human–computer interaction.

Oh Linux, you’ll never give me a boring weekend, will you ?

For the 100th time in my life I am installing Ubuntu on a machine — this time on my MacBook Pro.

Since it’s the third year of the second decade of the third millennium, I was expecting a dull “plug and play” procedure. But it came as a nice surprise that Linux is still a wonderful, unpredictable adventure. You never know whether installing your wireless card will take seconds or hours; whether or not upgrading your graphics card will result in a bricked system; or whether or not you’ll end the day throwing your computer out a window in frustration.

Just kidding — you know for sure that installing the wireless card will take weeks, and that you’ll end up throwing yourself out a window.

Linux Tidbits

Small bits of Linux wisdom I’ve learned this weekend:

  • If you are burning a LiveCD ISO, you’d better choose “Disc at Once” instead of “Track at Once”, lest at boot time you get this annoying error:
    end_request: I/O error, dev sr0 sector XXX
    Buffer I/O error on drive sr0, logical block XXX
  • It’s actually possible to run Microsoft Office 2007 in Linux outside a virtual machine, using a compatibility layer called Wine. It takes a few tricks to make it work, and the experience is not completely smooth, but it’s feasible.
    In my tests, Word worked perfectly, Excel crashed on its EUROTOOL.XLAM add-in (but then automatically disabled it and ran okay the second time), and PowerPoint was barely usable (the screen goes black when you try to draw or position a drawing element). In addition, Excel could not open a password-protected file. Still, it worked better than I had expected;
  • You are not supposed to share a Wine software installation between Linux users. I tried hard, playing with shared paths and symbolic links, and all I got was error messages and Office no longer working;
  • Also, apparently you can’t share a Mozilla (Firefox or Thunderbird) profile between Linux users. I’ve also tried playing with links and paths and only got burnt.

What have I learned ? That Linux software tends to be more customizable than Windows software, and much more customizable than MacOS software. That does not mean, however, that the desired customization will be straightforward and easy (try, for example, understanding Gnome’s menu customization scheme !), nor that it will always be possible to bend the system to work as you’d like. On the other hand, it makes a weekend pass quickly !

The Quest for Scholarly Tech: Virtualisation

As a computer scientist, I find OS monogamy an impossible commitment: I want to use my desktop applications (Microsoft Office, Adobe CS) in Windows but the staple of my experimental work has to be done in some flavour of Unix.

To handle both systems, I have tried all sorts of solutions:

  • a single dual-boot machine;
  • two complete boxes side by side on my desk;
  • two boxes and a single head (one monitor, keyboard and mouse) with a hardware “interface switch” to alternate between the systems;
  • a headless Linux box connected to the Windows system with an SSH shell prompt (giving up Linux’s GUI);
  • a headless Linux box connected to the Windows system using a free version of VNC (prompting the ire of the sysadmin guy — apparently I’d unwittingly opened some ports I shouldn’t have, but he eventually got it figured out).

The dual-boot configuration was, of course, the worst. Using two complete boxes was, surprisingly, the second-worst solution, not only because of the confusing alternation of keyboards and mice, but also because of the impossibility of doing “copy and paste” between the systems. With the “interface switch”, there was no keyboard juggling, but no “copy and paste” either. Worse, the switch was quite unreliable, causing video quality issues and an irritating “keyboard sudden-death syndrome” treatable only by a reboot.

The “Linux client in Windows” solution was not a bad choice. Actually, I’ve been using it for years, but… there is still the problem of having to use two machines. As soon as I take my laptop from its docking station and leave the building, I am cut off from my Linux half.

Enter virtualisation, a not-so-recent technology (the term was coined in the 1960s) that has recently gained buzzword status because of its potential benefits for grid computing applications. As the name implies, virtualisation is the creation of a virtual machine on top of which a system is executed. This allows a guest system to run on top of a host system in a controlled, isolated environment — and, best of all, the guest and host systems don’t have to be the same!

The possibilities are many:

  • I can run Linux on my Windows box and have my Linux machine just an “alt+tab” away;
  • I can have several Unixes (FreeBSD, Ubuntu, openSUSE, Mandriva, Debian…) to run as I fancy;
  • I can make a copy of the virtual machine’s files before doing potentially disruptive stuff (like installing suspicious software). If something goes wrong, I can easily and completely roll back the machine to its pristine state;
  • I can even run Windows on Windows for the same purpose. I can, for example, keep a virtual machine exclusively to open suspicious mail attachments.

Now, for the drawbacks:

Each virtual machine implementation has its quirks. Microsoft Virtual PC does not officially support Linux, Sun’s VirtualBox demands a lot of tweaking to get things running, and VMware Player (my favourite) has no support for creating virtual machines, only for executing them. By the way, all three solutions are free-as-in-beer, but only VirtualBox is (mostly) open source.

As I said, you can’t create virtual machines with VMware Player. To do that, you have to buy another VMware product, VMware Workstation, for about 190 bucks. Or (like me) you can be cheap and look for prêt-à-porter machines on VMware Application Marketplace — I’ve found neatly installed images of most Linux distros.

If you need something more exotic, customised, or… proprietary, there is EasyVMX, a useful website which creates, for free, an empty virtual machine where you can install the OS of your choice.

VMware Player does not come with VMware Tools, an optional (but absolutely essential) package which, among other things, enables copy & paste between the host and guest OSs. But this is not a deal breaker, as this website has a workaround, and this thread in the VMware community says it is legal, but use it at your own risk, or better yet, check with your favourite lawyer.*

Speaking of lawyers, I’ve found that software licensing for virtual machines is prone to hazards. I’ve discovered, for example, that my OEM Windows XP license does not allow me to run it in a virtual machine. If you, like me, can’t live without proprietary software, brace yourself for some entertaining EULA deciphering and a few interesting phone chats with the legal department.

* EDIT 30/03: I’ve found out that VMware has made the VMware Tools open source, so this is one less concern. The package is bundled with some distros (like openSUSE), and it is available at SourceForge.

The Quest for Scholarly Tech: Ubuntu

Since the last year of my thesis, I have been in such a hurry that I haven’t been able to keep up with technology. But now I have a few spare days, and I’m using them to evaluate some new (?) services and software, in my eternal quest for better tools for academic work.

Linux is by no means new software, and I have been using it on and off for more than ten years. During my undergraduate years, I had already noticed a cult following of the penguin, and nothing was deemed more uncool than proprietary OSs. But having anything Linux-related (including the OS itself) meant downloading the sources and compiling them, which was enough to keep me at bay (I made an exception when I took the Operating Systems course, one of the rare occasions when one should be expected to compile an OS kernel). Besides, those were the years of religious wars: the mere mention of the words “Windows” and “Linux” in the same room was enough to spark hours of heated debate with a very low signal-to-noise ratio — not the ideal environment for gathering useful information. In those troubled times, I used to ask my friends, provocatively: “what do you think will happen first: Windows getting stable or Linux getting usable?”

Ironically, I think both have (sort of) happened by now. We have been graced with some good Windows crops, like 2000 and XP, which have reached a nice point of stability. And some Linux distributions are almost user-friendly. I don’t think the old mantra of “Linux (or Unix) on the server, Windows (or MacOS) on the desktop” has been completely overthrown, but we are getting there, from both directions.

Specifically, I’ve been evaluating Ubuntu, the famous “Linux for human beings” distribution, which has been around since 2004. I downloaded and installed it last November, and my first impression was extremely positive: installation was a breeze, the interface is pretty, there are lots of GUI-accessible tweaks, and the package management system actually works.

Installation and hardware compatibility have always been the Achilles’ heel of Linux distributions, but they are also among the aspects that have improved the most. Not all equipment guarantees an equally worry-free process, but the list of supported hardware has grown dramatically. In my case, the installation, on a 3-year-old Dell Latitude D620 laptop, went uneventfully. My guess is that if a system is both popular and old enough, people will have its drivers figured out.

The first thing that strikes you about Ubuntu is how cute it is. From startup and shutdown screens to configuration applets, everything is designed to shield users from the Unix innards. Ubuntu is the first distro I’ve ever seen with a usable “Start Menu” (instead of the usual everything-but-the-kitchen-sink approach). The system feels easy, inviting exploration and discovery.

Ubuntu's "Start Menu" and Quick launch bar

Ubuntu's "Start Menu" and Quick launch bar

In the best tradition of Linux systems, (almost) everything is customisable, but, breaking with tradition, the system is (almost) comfortable to use fresh out of the box. That didn’t prevent me from trying out a bit of everything: tweaking themes, font rendering, keyboard layouts, shortcuts, and everything within reach of a mouse click. At some point I had played so much with the shortcuts that the system became practically unusable. What to do then? Well, in Ubuntu (courtesy of GNOME) customisation is done by adding files to the user folder, not by modifying existing ones, so deleting those files (located in the ~/.gconf folder) reverts the system to its pristine state.
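
In practice, the reset amounted to something like the line below (a blunt sketch: it wipes all of your user’s GNOME settings, so only do it if you really want the pristine state back, and log out and back in afterwards):

rm -rf ~/.gconf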

Tweaking Ubuntu's GUI

I was also quite pleased with the package manager: Debian-based distros are the ones that got it right. Or so I’ve heard — and based on the nightmarish experiences I’ve had with Mandriva’s URPMI, it is probably true.

Now, for the downsides.

Well, Ubuntu is not Windows, and I have to retrain my “muscle memory” to new ways of doing things. So far, what I’ve found most difficult to surmount is the difference between the two systems’ “US International” keyboard layouts. While Linux’s targets all Latin-script languages, Windows’ layout is optimised for Western European ones (which is great for me, who writes in English, French and Portuguese). This makes for unexpected results: in Windows, if you want to write “do’s” (as in “the do’s and don’ts of laying out keyboards”) you just type [d] [o] [‘] [s] and the system, recognising that “s” is not accented in any Western European language, interprets the [‘] as an apostrophe. In Ubuntu, you have to type [d] [o] [‘] [space] [s], otherwise you’ll get “doś”. Worse, if you want to write “don’t” you still have to type the space after the apostrophe, even though there is no accented “t” in Latin script, otherwise you just get an ugly beep. So far I’ve been unable to circumvent this annoying behaviour — editing the keyboard layout files seems to be useless; it has to do with how the system deals with dead keys.

I’ve hit quite a few bugs, too. Small ones (no keyboard tweak involving the Caps Lock key would work properly) and big ones (trying to record sound would freeze the system; no connection to wireless networks). Ubuntu comes with periodic updates, so the big ones have already disappeared, but some of the small ones are still around.

I had to fiddle quite a bit with the installed packages to get the system to work properly. Initially it had both GNOME and KDE installed, but KDE started to conflict with the GNOME configuration daemon. Since I don’t (so far) use anything KDE-based, I just removed the KDE packages and the problem disappeared. It was much more difficult to figure out that the problems I was experiencing with Eclipse 2.4 and RSSOwl 2.0 were due to an exotic version of the Java Virtual Machine (based on GNU’s GCJ) which came installed on the system. When I removed everything GCJ and kept only Sun’s Java, my life became much easier.

Incidentally, I was somewhat disappointed when I realised that the Eclipse version available via the package manager was 2.2 (almost 2 years old by now!), and that to install version 2.4 I would have to go the “Linux way” (download a tar.gz, create a folder, install the icons myself, etc., etc.).

Ubuntu has some small but irritating limitations. One example: the system has no consistent way to tell an application to run minimised (which comes in handy when you want something to run on system start-up): either the application has its own non-standard option, or it is impossible. Another annoying example is the ugly, loud beep you get every time the system is unhappy with you. This beep is emitted by the legacy “PC speaker” device, not by the modern sound system, so you have no control over volume or muting. Apparently, the only reliable way to disable this is to put the PC speaker on the “black list” of kernel modules, so it won’t be loaded. Not very user-friendly.
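
For the record, the blacklisting amounts to something like the sketch below. The module name (pcspkr) should be right, but the exact blacklist file path varies between distros and releases, and this one does require root:

echo "blacklist pcspkr" | sudo tee -a /etc/modprobe.d/blacklist.conf
sudo rmmod pcspkr    # silences it right away, without waiting for a reboot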

Those small nuisances are part of the Linux experience, and though surprising to a proprietary-software mind-frame, they are a foreseeable result of that Cathedral-and-Bazaar philosophy. Obviously, a system assembled from so many interlocking parts, each the fruit of a different mind, is bound to have some small inconsistencies.

In summary, I would say that, for a Linux desktop system, Ubuntu offers a nice experience. I don’t think it is smooth enough yet for technically unsavvy users, but it is perfect for middlebrow amateurs, who know their 0s and 1s but are not inclined to spend hours editing text-based configuration files.