Practical Package Administration under Debian

Dmitry Kalinovsky,



Technologies like Flatpak and Snap make it look like the concepts behind earlier package management systems are a thing of the past. The current state of software administration in Linux shows that this impression is wrong.

Over the last several months, there has been an uproar about package formats for Linux (see the "Existing Container Package Formats" box). Canonical garnered attention by announcing with great fanfare its new cross-distribution development [1] and its Snap format [2] (see the "Snap vs. Flatpak" box). The subsequent press coverage seemed oblivious to the fact that Canonical needs to maneuver over some thin ice with these innovations. Everything has a proprietary license and is currently only on Ubuntu with no support from Ubuntu derivatives. These shortcomings escaped the attention of the press but came to light with more attentive testers [3].

Existing Container Package Formats

Generally speaking, further development of existing package formats is welcome, as is standardizing applications so they work more easily across distributions. The Snap [1] and Flatpak [13] formats were introduced in a recent article [12]. Similar ideas about containerizing individual applications have been around, in various stages of development, for a long time. As with Google's Chrome web browser, the containers used by Snap and Flatpak include a sandbox. The sandbox shields the application from the operating system environment and exposes interfaces through which the application can communicate with the outside world. Alongside chroot environments, virtualization is the other established approach in this category, and there are various preferences here, including the Docker framework [14], the Open Container Format (OCF) [15], App Container Images (ACI or appc) [16], rkt [17], and Java virtual machines (JVMs). Firejail shows that an application's freedom of movement can be fenced off in a different way and without much overhead [18].

Snap vs. Flatpak

Snap and Flatpak set different priorities [18]. Snap sees itself primarily as a package format for servers, whereas Flatpak targets the desktop. Moreover, Snap is currently only available via a central repository, while Flatpak is distributed via individual archives. Each format is also available on a distribution-specific display server: Mir for Snap, Wayland for Flatpak. Because the formats are driven by Canonical and Red Hat, both are strongly influenced by commercial considerations and therefore sit at the outermost edges of free software.

Click packages [4] received similar press coverage, perhaps as filler during the slow summer news months. In retrospect, Click was a predecessor of Snap, and it continues to have only limited relevance for Linux systems. The situation may turn out differently for Snap; perhaps the format will establish itself in a particular area of application. So far, however, the formats Canonical celebrates with great fanfare have tended to tie up a lot of resources and then quickly disappear.

Therefore, before accepting claims made about a package system, it is a good idea to figure out how well designed the system is and how it can actually be of benefit. Then, the user is better able to decide whether to relegate the RPM and DEB formats, which have been around for the last 20 years, to the "Old Timers" category and send them out to pasture or whether to keep on using them.

Both of these formats set the bar high for competing alternatives. Even so, using their tools to tease out the desired information on package status [5] often feels like stumbling around an enchanted garden. In this article, I use the DEB format and selected examples to show the possibilities that open up once the user leaves the beaten path. All of the examples work on Debian as well as its derivatives, such as Ubuntu, Linux Mint, and Armbian.

Software Distribution

Up to now, the concept under Linux has been to disassemble a piece of software component by component and make the individual components available as separate packages. The developer or maintainer specifies which packages belong together and records this, along with the package's dependencies, in the package description. This makes decentralized development possible. It also optimizes the use of disk space: duplicate components and multiple versions are not an issue, and identical program code can be reused as needed.
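The dependency information lives in a package's control record, which `dpkg -s <package>` prints on a live system. The following sketch parses the Depends field out of such a record; the shortened sample record stands in for real `dpkg -s` output so the snippet is self-contained:

```shell
# A package's control record declares its relationships in fields such
# as Depends. On a live system, 'dpkg -s <package>' prints this record;
# the shortened sample below stands in so the snippet runs anywhere.
record='Package: nginx
Depends: nginx-core (>= 1.6.2) | nginx-full, lsb-base
Installed-Size: 100'

# List the declared dependencies, one per line:
printf '%s\n' "$record" | sed -n 's/^Depends: //p' | tr ',' '\n' | sed 's/^ *//'
```

Alternatives separated by `|` (here, nginx-core or nginx-full) stay on one line, since either package satisfies the dependency.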

Errors that occur in one component, for example in a library, will affect all programs using that particular component. The opposite also holds true. Corrections and cleanups immediately have a positive effect on all components. For the developer, this assumes knowledge of which components are needed and which already exist. Understanding the larger picture beyond the project at hand is always a good idea.

Underlying the "new" developments in container-based package formats is the desire to make software available by means of a concept linked to virtualization. This change of direction has had far reaching consequences that are not always apparent. See the "Changing Software Distribution" box. The real-world parallel is found in the area of throwaway consumer goods. Sustainability works differently.

Changing Software Distribution

Container formats transform the processes for rolling out and maintaining software. A container app makes it possible to deliver a distributed software solution, including all dependencies, in an encapsulated environment. This is known as application virtualization. The new formats minimize the effort required for installation and configuration but reduce the level of security. The user does not know what the container contains, what it does not contain, or whether there are collateral contents that might later cause problems. Trust in software has taken a lot of punishment over the last few years, with untested binary code being slipped past users. The mechanisms for verifying software packages and files with checksums exist for good reason.
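On a Debian system, those checksum mechanisms are concrete: dpkg stores per-file MD5 sums under /var/lib/dpkg/info/<package>.md5sums, and tools such as debsums compare the installed files against them. The same check can be reproduced by hand with md5sum, demonstrated here on a throwaway file rather than a real package:

```shell
# debsums compares installed files against the MD5 sums shipped in
# /var/lib/dpkg/info/<package>.md5sums. The same check by hand,
# performed on a temporary file for illustration:
tmp=$(mktemp -d)
echo 'payload' > "$tmp/file"
( cd "$tmp" && md5sum file > file.md5 && md5sum -c file.md5 )   # prints "file: OK"
rm -r "$tmp"
```

Running `debsums <package>` produces the same OK/FAILED report for every file the package installed.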

In addition, it is easy to lose sight of the big picture, because the user has to track each individual container and its installation status. Containers form large binary large objects (BLOBs) that are not as finely granular as packages. They also require more disk space and bandwidth, because the dependencies they bundle are potentially duplicated across containers.

On the other hand, containers promise to encapsulate software programs in a sandbox and limit them to desired communication interfaces. These interfaces might include ports, programs, channels, and resources. Containers are also intended to solve the problem of open unresolved package dependencies and use the most current components. Experience tells us that the latter advantage over familiar and somewhat older software does not always come with increased stability and error-free operation.

The responsibility for all of the delivered components lies with the container's provider. Fulfilling this responsibility requires knowledge of the actual software, all additional parts (the dependencies), and any known security holes and program errors. The provider may know what is being delivered but is often not familiar with the software itself. On top of this, various versions of the same component frequently end up in one container, each of which may or may not be buggy, something the provider needs to keep in mind.

Software found in a container has typically been assembled for a specific use. Designating container contents as throwaway means that updates for longer-term use are difficult. Security patches are not envisioned for individual components. Instead, the entire container has to be updated [19].

Proponents of the new developments frequently forget that the requirements placed on software components change, and so do the components themselves. Adaptation is therefore essential, and a brand-new rollout is not always the best solution: before each update of this kind, it takes a lot of effort to collect, save, and reinstall configuration files, user data, and logs. Fans of the status quo, on the other hand, only switch out what is necessary. Either approach can be appropriate depending on the circumstances; the user has to decide which makes more sense for the situation at hand. There is no single solution for everybody.

Finding the Disk Hog

Disk space can become limited depending on how much software a user loads. The tools dpkg (with the -s option) and dlocate (with the -du option) use the package name to figure out how much space the package requires.

Listing 1 combines dpkg with fgrep, as well as dlocate with tail and cut, to extract just the first column, showing the total amount of required space, from the last line of the output. By way of example, I show how this works for the chromium package, where the output value is displayed in kilobytes. Conversion into megabytes requires some shell scripting with the help of the command-line calculator bc; the echo command adds the unit of measurement (next to last line).

Listing 1

dpkg and dlocate

$ dpkg -s chromium-browser | fgrep Installed-Size:
Installed-Size: 155388
$ dlocate -du chromium-browser | tail -1 | cut -f1
$ echo $(echo $(dlocate -du chromium-browser | tail -1 | cut -f1) / 1024 | bc) "MByte"
151 MByte
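The bc call in Listing 1 can also be replaced by plain shell arithmetic, which truncates to whole megabytes the same way. A small helper function (my own wrapper, not part of any package) makes the conversion reusable:

```shell
# Convert a kilobyte figure, as printed by 'dlocate -du ... | tail -1
# | cut -f1', into whole megabytes using shell arithmetic instead of bc.
kb_to_mb() { echo "$(($1 / 1024)) MByte"; }

# Combined with the pipeline from Listing 1 (dlocate must be installed):
#   kb_to_mb "$(dlocate -du chromium-browser | tail -1 | cut -f1)"
kb_to_mb 155388    # prints "151 MByte"
```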

The dpigs ("diskspace pigs") tool from the debian-goodies package helps you find the packages that are taking up the most disk space. Listing 2 lists the five largest packages that have been installed. They appear in descending order by size and name. You can use the -H option to convert the unit of measurement into commonly understood sizes. Entering -n5 lets you limit the output to five packages.

Listing 2


$ dpigs -H -n5
 436.7M texlive-latex-extra-doc
 155.8M linux-image-3.16.0-4-amd64
 151.7M chromium
 120.9M libreoffice-core
 106.8M texlive-pstricks-doc
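If debian-goodies is not installed, a similar ranking can be assembled from dpkg-query alone; dpigs is essentially a convenience wrapper around this kind of pipeline. Note that the Installed-Size field is given in KiB rather than the human-readable sizes dpigs -H prints:

```shell
# Rank installed packages by Installed-Size (in KiB) and keep the
# five largest, roughly what 'dpigs -n5' does without the -H formatting:
dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -rn | head -n 5
```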

You use the -Z switch to have the aptitude front end calculate the space requirements for packages that have not yet been installed; the program even takes dependencies into account. The install subcommand, used together with the --simulate switch, simulates the entire process. Listing 3 shows the output for the packages belonging to the Nginx web server. If the process needs more packages, aptitude shows a prompt; pressing N interrupts the process.

Listing 3


$ aptitude -Z install --simulate nginx
The following NEW packages will be installed:
 nginx <+37.9 kB>  nginx-common{a} <+169 kB>  nginx-core{a} <+1275 kB>
0 packages upgraded, 3 newly installed, 0 to remove and 288 not upgraded.
Need to get 458 kB of archives. After unpacking 1482 kB will be used.
Do you want to continue? [Y/n/?]
Would download/install/remove packages.
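apt-get offers the same kind of dry run via its -s/--simulate switch (`apt-get -s install nginx`), and no root privileges are needed for a simulation. The summary line can then be scraped in a script; the sketch below runs the extraction against the sample line from Listing 3 so it works without touching apt:

```shell
# Pull the download size out of apt's summary line. The sample line from
# Listing 3 stands in for live 'apt-get -s install nginx' output.
line='Need to get 458 kB of archives. After unpacking 1482 kB will be used.'
echo "$line" | awk '{print $4, $5}'    # prints "458 kB"
```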
