Copyright © 2007-2010 Osamu Aoki
Abstract
This book is free; you may redistribute it and/or modify it under the terms of the GNU General Public License of any version compliant with the Debian Free Software Guidelines (DFSG).
This Debian Reference (version 2) (2010-12-10 17:20:34 UTC) is intended to provide a broad overview of Debian system administration as a post-installation user guide.
The target reader is someone who is willing to learn shell scripts but who is not ready to read all the C sources to figure out how the GNU/Linux system works.
All warranties are disclaimed. All trademarks are property of their respective trademark owners.
The Debian system itself is a moving target. This makes its documentation difficult to keep current and correct. Although the current unstable version of the Debian system was used as the basis for writing this, some content may already be outdated by the time you read this.
Please treat this document as a secondary reference. It does not replace any authoritative guides. The author and contributors do not take responsibility for the consequences of errors, omissions or ambiguity in this document.
The Debian Project is an association of individuals who have made common cause to create a free operating system. Its distribution is characterized by the following.
- availability of the unstable and testing archives
Free Software pieces in Debian come from GNU, Linux, BSD, X, ISC, Apache, Ghostscript, Common Unix Printing System, Samba, GNOME, KDE, Mozilla, OpenOffice.org, Vim, TeX, LaTeX, DocBook, Perl, Python, Tcl, Java, Ruby, PHP, Berkeley DB, MySQL, PostgreSQL, Exim, Postfix, Mutt, FreeBSD, OpenBSD, Plan 9 and many more independent free software projects. Debian integrates this diversity of Free Software into one system.
The following guiding rules were followed while compiling this document.
I tried to elucidate hierarchical aspects and lower levels of the system.
You are expected to make good efforts to seek answers by yourself beyond this documentation. This document only gives efficient starting points.
You must seek solutions by yourself from primary sources.
Documentation on your Debian system can be found in the following locations.

- the "/usr/share/doc/<package_name>" directory
- manpages, listed by "dpkg -L <package_name> | grep '/man/man.*/'"
- info pages, listed by "dpkg -L <package_name> | grep '/info/'"

For detailed documentation, you may need to install the corresponding documentation package named with "-doc" as its suffix.
This document provides information through the following simplified presentation style with bash(1) shell command examples.

# <command in root account>
$ <command in user account>

These shell prompts distinguish the account used and correspond to setting the environment variables as "PS1='\$'" and "PS2=' '". These values are chosen for the sake of readability of this document and are not typical on an actual installed system.
See the meaning of the "$PS1" and "$PS2" environment variables in bash(1).
Action required by the system administrator is written as an imperative sentence, e.g. "Type the Enter-key after typing each command string to the shell."
The description column and similar ones in tables may contain a noun phrase following the package short description convention, which drops leading articles such as "a" and "the". They may alternatively contain an infinitive phrase as a noun phrase without the leading "to", following the short command description convention in manpages. These may look funny to some people but are my intentional choices of style to keep this documentation as simple as possible. Following these short description conventions, these noun phrases neither capitalize their first word nor end with a period.
Proper nouns including command names keep their case irrespective of their location.
A command snippet quoted in a text paragraph is referred to by the typewriter font between double quotation marks, such as "aptitude safe-upgrade".
Text data from a configuration file quoted in a text paragraph is referred to by the typewriter font between double quotation marks, such as "deb-src".
A command is referred to by its name in the typewriter font optionally followed by its manpage section number in parentheses, such as bash(1). You are encouraged to obtain information by typing the following.
$ man 1 bash
A manpage is referred to by its name in the typewriter font followed by its manpage section number in parentheses, such as sources.list(5). You are encouraged to obtain information by typing the following.

$ man 5 sources.list

An info page is referred to by its command snippet in the typewriter font between double quotation marks, such as "info make". You are encouraged to obtain information by typing the following.

$ info make

A filename is referred to by the typewriter font between double quotation marks, such as "/etc/passwd". For configuration files, you are encouraged to obtain information by typing the following.

$ sensible-pager "/etc/passwd"

A directory name is referred to by the typewriter font between double quotation marks, such as "/etc/init.d/". You are encouraged to explore its contents by typing the following.

$ mc "/etc/init.d/"

A package name is referred to by its name in the typewriter font, such as vim. You are encouraged to obtain information by typing the following.

$ dpkg -L vim
$ apt-cache show vim
$ aptitude show vim
A documentation may indicate its location by the filename in the typewriter font between double quotation marks, such as "/usr/share/doc/sysv-rc/README.runlevels.gz" and "/usr/share/doc/base-passwd/users-and-groups.html"; or by its URL, such as http://www.debian.org. You are encouraged to read the documentation by typing the following.

$ zcat "/usr/share/doc/sysv-rc/README.runlevels.gz" | sensible-pager
$ sensible-browser "/usr/share/doc/base-passwd/users-and-groups.html"
$ sensible-browser "http://www.debian.org"

An environment variable is referred to by its name with leading "$" in the typewriter font between double quotation marks, such as "$TERM". You are encouraged to obtain its current value by typing the following.

$ echo "$TERM"
An asterisk "*" placed right after each package name is linked to the Debian bug tracking system (BTS) page of that package.
The popcon data is presented as the objective measure for the popularity of each package. It was downloaded on 2010-12-08 14:47:18 UTC and contains the total submission of 95150 reports over 109197 binary packages and 19 architectures.
Please note that the amd64 unstable archive contains only 30552 packages currently. The popcon data contains reports from many old system installations.
The popcon number preceded with "V:" for "votes" is calculated by "100 * (the popcon submissions for the package executed recently on the PC)/(the total popcon submissions)".
The popcon number preceded with "I:" for "installs" is calculated by "100 * (the popcon submissions for the package installed on the PC)/(the total popcon submissions)".
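As a worked example (the submission count here is hypothetical, not taken from the popcon data above), a package executed recently on 11418 of the 95150 reporting systems would be shown with a vote figure of 12.

```shell
# Hypothetical popcon counts, for illustration only
votes=11418    # submissions reporting recent execution of the package
total=95150    # total popcon submissions
echo "V:$(( 100 * votes / total ))"   # prints: V:12
```

The integer division truncates the result, matching the whole-number V: and I: figures shown in the tables below.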
The popcon figures should not be considered as absolute measures of the importance of packages. There are many factors which can skew the statistics. For example, some systems participating in popcon may have mounted directories such as "/bin" with the "noatime" option for system performance improvement, effectively disabling the "vote" from such systems.
The package size data is also presented as an objective measure for each package. It is based on the "Installed-Size:" value reported by the "apt-cache show" or "aptitude show" command (currently on the amd64 architecture for the unstable release). The reported size is in KiB (Kibibyte = unit for 1024 bytes).
A package with a small numerical package size may indicate that the package in the unstable release is a dummy package which installs other packages with significant contents by the dependency. The dummy package enables a smooth transition or split of the package.
A package size followed by "(*)" indicates that the package in the unstable release is missing and the package size for the experimental release is used instead.
Here are some interesting quotes from the Debian mailing list which may help enlighten new users.
I think learning a computer system is like learning a new foreign language. Although tutorial books and documentation are helpful, you have to practice it yourself. In order to help you get started smoothly, I elaborate a few basic points.
The powerful design of Debian GNU/Linux comes from the Unix operating system, i.e., a multiuser, multitasking operating system. You must learn to take advantage of the power of these features and similarities between Unix and GNU/Linux.
Don't shy away from Unix oriented texts and don't rely solely on GNU/Linux texts, as this robs you of much useful information.
If you have been using any Unix-like system for a while with command line tools, you probably know everything I explain here. Please use this as a reality check and refresher.
Upon starting the system, you are presented with the character based login screen if you did not install the X Window System with a display manager such as gdm. Suppose your hostname is foo; the login prompt looks as follows.
foo login:
If you did install a GUI environment such as GNOME or KDE, then you can get to a login prompt by Ctrl-Alt-F1, and you can return to the GUI environment via Alt-F7 (see Section 1.1.6, “Virtual consoles” below for more).
At the login prompt, you type your username, e.g. penguin, and press the Enter-key, then type your password and press the Enter-key again.
Following the Unix tradition, the username and password of the Debian system are case sensitive. The username is usually chosen only from lowercase letters. The first user account is usually created during the installation. Additional user accounts can be created with adduser(8) by root.
The system starts with the greeting message stored in "/etc/motd" (Message Of The Day) and presents a command prompt.
Debian GNU/Linux lenny/sid foo tty1

foo login: penguin
Password:
Last login: Sun Apr 22 09:29:34 2007 on tty1
Linux snoopy 2.6.20-1-amd64 #1 SMP Sun Apr 15 20:25:49 UTC 2007 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
foo:~$
Here, the main part of the greeting message can be customized by editing the "/etc/motd.tail" file. The first line is generated from the system information using "uname -snrvm".
Now you are in the shell. The shell interprets your commands.
If you installed the X Window System with a display manager such as GNOME's gdm by selecting the "Desktop environment" task during the installation, you are presented with the graphical login screen upon starting your system. You type your username and your password to login to the non-privileged user account. Use tab to navigate between username and password, or use the mouse and primary click.
You can gain the shell prompt under X by starting an x-terminal-emulator program such as gnome-terminal(1), rxvt(1) or xterm(1). Under the GNOME Desktop environment, clicking "Applications" → "Accessories" → "Terminal" does the trick.
See also Section 1.1.6, “Virtual consoles” below.
Under some other Desktop systems (like fluxbox), there may be no obvious starting point for the menu. If this happens, just try (right) clicking the center of the screen and hope for a menu to pop-up.
The root account is also called superuser or privileged user. From this account, you can perform the following system administration tasks.
This unlimited power of root account requires you to be considerate and responsible when using it.
Never share the root password with others.
File permissions of a file (including hardware devices such as CD-ROM etc. which are just another file for the Debian system) may render it unusable or inaccessible by non-root users. Although the use of root account is a quick way to test this kind of situation, its resolution should be done through proper setting of file permissions and user's group membership (see Section 1.2.3, “Filesystem permissions”).
Here are a few basic methods to gain the root shell prompt by using the root password.

- Login as root at the character based login prompt.
- Type "su -l" from any user shell prompt.
- Type "su" from any user shell prompt.
When your desktop menu does not start GUI system administration tools automatically with the appropriate privilege, you can start them from the root shell prompt of the X terminal emulator, such as gnome-terminal(1), rxvt(1), or xterm(1). See Section 1.1.4, “The root shell prompt” and Section 7.8.4, “Running X clients as root”.
Never start the X display/session manager under the root account by typing in root to the prompt of the display manager such as gdm(1).
Never run an untrusted remote GUI program under the X Window System when critical information is displayed, since it may eavesdrop on your X screen.
In the default Debian system, there are six switchable VT100-like character consoles available to start the command shell directly on the Linux host. Unless you are in a GUI environment, you can switch between the virtual consoles by pressing the Left-Alt-key and one of the F1 — F6 keys simultaneously. Each character console allows independent login to the account and offers the multiuser environment. This multiuser environment is a great Unix feature, and very addictive.
If you are under the X Window System, you gain access to the character console 1 by pressing the Ctrl-Alt-F1 key, i.e., the left-Ctrl-key, the left-Alt-key, and the F1-key pressed together. You can get back to the X Window System, normally running on the virtual console 7, by pressing Alt-F7.
You can alternatively change to another virtual console, e.g. to the console 1, from the commandline.
# chvt 1
You type Ctrl-D, i.e., the left-Ctrl-key and the d-key pressed together, at the command prompt to close the shell activity. If you are at the character console, you return to the login prompt with this. Even though these control characters are referred to as "control D" with the upper case, you do not need to press the Shift-key. The shorthand expression, ^D, is also used for Ctrl-D. Alternatively, you can type "exit".
If you are at an x-terminal-emulator(1), you can close the x-terminal-emulator window with this.
Just like any other modern OS where the file operation involves caching data in memory for improved performance, the Debian system needs the proper shutdown procedure before power can safely be turned off. This is to maintain the integrity of files, by forcing all changes in memory to be written to disk. If the software power control is available, the shutdown procedure automatically turns off power of the system. (Otherwise, you may have to press the power button for a few seconds after the shutdown procedure.)
You can shutdown the system under the normal multiuser mode from the commandline.
# shutdown -h now
You can shutdown the system under the single-user mode from the commandline.
# poweroff -i -f
Alternatively, you may type Ctrl-Alt-Delete (the left-Ctrl-key, the left-Alt-key, and the Delete-key pressed together) to shutdown if "/etc/inittab" contains "ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -h now" in it. See inittab(5) for details.
See Section 6.9.6, “How to shutdown the remote system on SSH”.
When the screen goes berserk after doing some funny things such as "cat <some-binary-file>", type "reset" at the command prompt. You may not be able to see the command echoed as you type. You may also issue "clear" to clean up the screen.
Although even the minimal installation of the Debian system without any desktop environment tasks provides the basic Unix functionality, it is a good idea for beginners to install a few additional commandline and curses based character terminal packages such as mc and vim with apt-get(8) to get started, by the following.
# apt-get update
 ...
# apt-get install mc vim sudo
 ...
If you already had these packages installed, no new packages are installed.
Table 1.1. List of interesting text-mode program packages
package | popcon | size | description |
---|---|---|---|
mc * | V:12, I:28 | 6508 | A text-mode full-screen file manager |
sudo * | V:42, I:71 | 668 | A program to allow limited root privileges to users |
vim * | V:15, I:33 | 1792 | Unix text editor Vi IMproved, a programmers text editor (standard version) |
vim-tiny * | V:16, I:92 | 776 | Unix text editor Vi IMproved, a programmers text editor (compact version) |
emacs23 * | V:3, I:4 | 13016 | GNU project Emacs, the Lisp based extensible text editor (version 23) |
w3m * | V:24, I:84 | 1992 | Text-mode WWW browser |
gpm * | V:3, I:4 | 484 | The Unix style cut-and-paste on the text console (daemon) |
It may be a good idea to read some informative documentation.
Table 1.2. List of informative documentation packages
package | popcon | size | description |
---|---|---|---|
doc-debian * | I:82 | 408 | Debian Project documentation, (Debian FAQ) and other documents |
debian-policy * | I:3 | 3500 | Debian Policy Manual and related documents |
developers-reference * | I:1.0 | 1388 | Guidelines and information for Debian developers |
maint-guide * | I:0.7 | 776 | Debian New Maintainers' Guide |
debian-history * | I:0.3 | 3736 | History of the Debian Project |
debian-faq * | I:66 | 1224 | Debian FAQ |
doc-linux-text * | I:82 | 8616 | Linux HOWTOs and FAQ (text) |
doc-linux-html * | I:0.7 | 62564 | Linux HOWTOs and FAQ (html) |
sysadmin-guide * | I:0.2 | 964 | The Linux System Administrators' Guide |
You can install some of these packages by the following.
# apt-get install package_name
If you do not want to use your main user account for the following training activities, you can create a training user account, e.g. fish, by the following.

# adduser fish

Answer all questions.

This creates a new account named fish. After your practice, you can remove this user account and its home directory by the following.
# deluser --remove-home fish
For the typical single user workstation such as the desktop Debian system on the laptop PC, it is common to deploy a simple configuration of sudo(8) as follows to let the non-privileged user, e.g. penguin, gain administrative privilege just with his user password but without the root password.
# echo "penguin ALL=(ALL) ALL" >> /etc/sudoers
Alternatively, it is also common to do as follows to let the non-privileged user, e.g. penguin, gain administrative privilege without any password.
# echo "penguin ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
This trick should only be used for the single user workstation which you administer and where you are the only user.
Do not set up accounts of regular users on multiuser workstation like this because it would be very bad for system security.
The password and the account of penguin in the above example require as much protection as the root password and the root account.
Administrative privilege in this context belongs to someone authorized to perform the system administration task on the workstation. Never give some manager in the Admin department of your company or your boss such privilege unless they are authorized and capable.
For providing access privilege to limited devices and limited files, you should consider using group to provide limited access instead of using the root privilege via sudo(8).
With more thoughtful and careful configuration, sudo(8) can grant limited administrative privileges to other users on a shared system without sharing the root password. This can help with accountability on hosts with multiple administrators so you can tell who did what. On the other hand, you might not want anyone else to have such privileges.
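As a sketch of such a limited grant (the user name and command path here are illustrative; always edit "/etc/sudoers" with visudo(8) so syntax errors are caught), a line like the following lets penguin run only apt-get as root, authenticating with his own password.

```
# /etc/sudoers fragment: penguin may run only apt-get as root
penguin ALL=(root) /usr/bin/apt-get
```

Because the grant names a single command rather than ALL, the user cannot start an arbitrary root shell through sudo with this line alone.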
Now you are ready to play with the Debian system without risks as long as you use the non-privileged user account.
This is because the Debian system is, even after the default installation, configured with proper file permissions which prevent non-privileged users from damaging the system. Of course, there may still be some holes which can be exploited but those who worry about these issues should not be reading this section but should be reading Securing Debian Manual.
We learn the Debian system as a Unix-like system with the following.
In GNU/Linux and other Unix-like operating systems, files are organized into directories. All files and directories are arranged in one big tree rooted at "/". It's called a tree because if you draw the filesystem, it looks like a tree but it is upside down.
These files and directories can be spread out over several devices. mount(8) serves to attach the filesystem found on some device to the big file tree. Conversely, umount(8) detaches it again. On recent Linux kernels, mount(8) with some options can bind part of a file tree somewhere else or can mount a filesystem as shared, private, slave, or unbindable. Supported mount options for each filesystem are available in "/usr/share/doc/linux-doc-2.6.*/Documentation/filesystems/".
Directories on Unix systems are called folders on some other systems. Please also note that there is no concept of a drive such as "A:" on any Unix system. There is one filesystem, and everything is included. This is a huge advantage compared to Windows.
Here are some Unix file basics.

- Filenames are case sensitive. That is, "MYFILE" and "MyFile" are different files.
- The root directory means the root of the filesystem, referred to as simply "/". Don't confuse this with the home directory for the root user: "/root".
- A directory name may use any letters or symbols except "/". The root directory is an exception; its name is "/" (pronounced "slash" or "the root directory") and it cannot be renamed.
- Each file or directory is designated by a fully-qualified filename, absolute filename, or path, giving the sequence of directories which must be passed through to reach it. All fully-qualified filenames begin with the "/" directory, and there's a "/" between each directory or file in the filename. The first "/" is the top level directory, and the other "/"'s separate successive subdirectories, until we reach the last entry which is the name of the actual file. The words used here can be confusing. Take the following fully-qualified filename as an example: "/usr/share/keytables/us.map.gz". However, people also refer to its basename "us.map.gz" alone as a filename.
- The root directory has a number of branches, such as "/etc/" and "/usr/". These subdirectories in turn branch into still more subdirectories, such as "/etc/init.d/" and "/usr/local/". The whole thing viewed collectively is called the directory tree. You can think of an absolute filename as a route from the base of the tree ("/") to the end of some branch (a file). You also hear people talk about the directory tree as if it were a family tree: thus subdirectories have parents, and a path shows the complete ancestry of a file. There are also relative paths that begin somewhere other than the root directory. You should remember that the directory "../" refers to the parent directory. This terminology also applies to other directory-like structures, such as hierarchical data structures.
- There's no pathname that names a physical device, such as "C:\". (However, directory entries do exist that refer to physical devices as a part of the normal filesystem. See Section 1.2.2, “Filesystem internals”.)
- While you can use almost any letters or symbols in a file name, in practice it is a bad idea to do so. It is better to avoid any characters that often have special meanings on the command line, including spaces, tabs, newlines, and other special characters: { } ( ) [ ] ' ` " \ / > < | ; ! # & ^ * % @ $. If you want to separate words in a name, good choices are the period, hyphen, and underscore. You could also capitalize each word, "LikeThis". Experienced Linux users tend to avoid spaces in filenames.
The word "root" can mean either "root user" or "root directory". The context of their usage should make it clear.
The word path is used not only for fully-qualified filename as above but also for the command search path. The intended meaning is usually clear from the context.
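The path rules above can be tried directly; the sketch below builds a scratch directory tree (the directory names are arbitrary) and walks it with an absolute path and then a relative path through "../".

```shell
#!/bin/sh
# Build a small scratch tree to demonstrate absolute vs. relative paths
base=$(mktemp -d)
mkdir -p "$base/usr/local" "$base/etc"
cd "$base/usr/local"   # an absolute path: starts from "/"
pwd                    # ends in .../usr/local
cd ../../etc           # a relative path: each ".." climbs to a parent
pwd                    # ends in .../etc
cd /                   # leave the scratch tree before removing it
rm -rf "$base"
```

The same navigation works anywhere in the tree, since "../" always means the parent of the current directory.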
The detailed best practices for the file hierarchy are described in the Filesystem Hierarchy Standard ("/usr/share/doc/debian-policy/fhs/fhs-2.3.txt.gz" and hier(7)). You should remember the following facts as a starter.
Table 1.3. List of usage of key directories
directory | usage of the directory |
---|---|
/ | the root directory |
/etc/ | system wide configuration files |
/var/log/ | system log files |
/home/ | all the home directories for all non-privileged users |
Following the Unix tradition, the Debian GNU/Linux system provides the filesystem under which physical data on hard disks and other storage devices reside, and the interaction with hardware devices such as console screens and remote serial consoles is represented in a unified manner under "/dev/".
Each file, directory, named pipe (a way two programs can share data), or physical device on a Debian GNU/Linux system has a data structure called an inode which describes its associated attributes such as the user who owns it (owner), the group that it belongs to, the time last accessed, etc. If you are really interested, see "/usr/include/linux/fs.h" for the exact definition of "struct inode" in the Debian GNU/Linux system. The idea of representing just about everything in the filesystem was a Unix innovation, and modern Linux kernels have developed this idea even further. Now, even information about processes running in the computer can be found in the filesystem.
This abstract and unified representation of physical entities and internal processes is very powerful since this allows us to use the same command for the same kind of operation on many totally different devices. It is even possible to change the way the kernel works by writing data to special files that are linked to running processes.
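For example, on Linux the kernel exposes per-process information as files under "/proc", and a process can read its own entries through the "/proc/self" alias.

```shell
#!/bin/sh
# Each running process appears as a directory /proc/<pid> on Linux;
# /proc/self refers to whichever process opens it.
cat /proc/self/comm    # the command name of the reader itself: "cat"
ls /proc/self          # cmdline, cwd, environ, status, and more
```

Here "cat" sees its own process entry, which is why the first command prints the name of cat itself rather than of the shell.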
If you need to identify the correspondence between the file tree and the physical entity, execute mount(8) with no arguments.
Filesystem permissions of Unix-like system are defined for three categories of affected users.
For the file, each corresponding permission allows the following actions.
For the directory, each corresponding permission allows the following actions.
Here, the execute permission on a directory means not only to allow reading of files in that directory but also to allow viewing their attributes, such as the size and the modification time.
ls(1) is used to display permission information (and more) for files and directories. When it is invoked with the "-l" option, it displays the following information in the order given.
Table 1.4. List of the first character of "ls -l" output

character | meaning |
---|---|
- | normal file |
d | directory |
l | symlink |
c | character device node |
b | block device node |
p | named pipe |
s | socket |
chown(1) is used from the root account to change the owner of the file. chgrp(1) is used from the file's owner or root account to change the group of the file. chmod(1) is used from the file's owner or root account to change file and directory access permissions. Basic syntax to manipulate a foo file is the following.
# chown <newowner> foo
# chgrp <newgroup> foo
# chmod [ugoa][+-=][rwxXst][,...] foo
For example, you can make a directory tree to be owned by a user foo and shared by a group bar by the following.
# cd /some/location/
# chown -R foo:bar .
# chmod -R ug+rwX,o=rX .
There are three more special permission bits.
Here the output of "ls -l" for these bits is capitalized if the execution bits hidden by these outputs are unset.
Setting set user ID on an executable file allows a user to execute the executable file with the owner ID of the file (for example root). Similarly, setting set group ID on an executable file allows a user to execute the executable file with the group ID of the file (for example root). Because these settings can cause security risks, enabling them requires extra caution.
Setting set group ID on a directory enables the BSD-like file creation scheme where all files created in the directory belong to the group of the directory.
Setting the sticky bit on a directory prevents a file in the directory from being removed by a user who is not the owner of the file. In order to secure contents of a file in world-writable directories such as "/tmp" or in group-writable directories, one must not only reset the write permission for the file but also set the sticky bit on the directory. Otherwise, the file can be removed and a new file can be created with the same name by any user who has write access to the directory.
Here are a few interesting examples of file permissions.
$ ls -l /etc/passwd /etc/shadow /dev/ppp /usr/sbin/exim4
crw------- 1 root root 108, 0 2007-04-29 07:00 /dev/ppp
-rw-r--r-- 1 root root 1427 2007-04-16 00:19 /etc/passwd
-rw-r----- 1 root shadow 943 2007-04-16 00:19 /etc/shadow
-rwsr-xr-x 1 root root 700056 2007-04-22 05:29 /usr/sbin/exim4
$ ls -ld /tmp /var/tmp /usr/local /var/mail /usr/src
drwxrwxrwt 10 root root 4096 2007-04-29 07:59 /tmp
drwxrwsr-x 10 root staff 4096 2007-03-24 18:48 /usr/local
drwxrwsr-x 4 root src 4096 2007-04-27 00:31 /usr/src
drwxrwsr-x 2 root mail 4096 2007-03-28 23:33 /var/mail
drwxrwxrwt 2 root root 4096 2007-04-29 07:11 /var/tmp
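The "t" and "s" bits seen in the listing above can be reproduced on a scratch directory; here is a minimal sketch.

```shell
#!/bin/sh
# Demonstrate the sticky bit and the set group ID bit on a directory
d=$(mktemp -d)
chmod 1777 "$d"            # rwxrwxrwt, the same mode as /tmp
ls -ld "$d" | cut -c1-10   # prints: drwxrwxrwt
chmod 2775 "$d"            # set group ID directory, like /usr/local
ls -ld "$d" | cut -c1-10   # prints: drwxrwsr-x
rm -rf "$d"
```

The leading octal digit (1 for the sticky bit, 2 for set group ID) corresponds to the optional first digit of the numeric mode described below.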
There is an alternative numeric mode to describe file permissions with chmod(1). This numeric mode uses 3 to 4 digit wide octal (radix=8) numbers.

Table 1.5. The numeric mode for file permissions in chmod(1) commands
digit | meaning |
---|---|
1st optional digit | sum of set user ID (=4), set group ID (=2), and sticky bit (=1) |
2nd digit | sum of read (=4), write (=2), and execute (=1) permissions for user |
3rd digit | ditto for group |
4th digit | ditto for other |
This sounds complicated but it is actually quite simple. If you look at the first few (2-10) columns from the "ls -l" command output and read them as a binary (radix=2) representation of file permissions ("-" being "0" and "rwx" being "1"), the last 3 digits of the numeric mode value should make sense as an octal (radix=8) representation of file permissions to you.
For example, try the following.

$ touch foo bar
$ chmod u=rw,go=r foo
$ chmod 644 bar
$ ls -l foo bar
-rw-r--r-- 1 penguin penguin 17 2007-04-29 08:22 bar
-rw-r--r-- 1 penguin penguin 12 2007-04-29 08:22 foo
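A script can also read these attributes programmatically rather than parsing "ls -l" output; the following sketch uses stat(1) (the format sequences are GNU coreutils ones).

```shell
#!/bin/sh
# Query file attributes directly instead of parsing "ls -l"
f=$(mktemp)
chmod 640 "$f"
stat -c '%a' "$f"            # octal permission bits: 640
stat -c '%U' "$f"            # owner name
test -r "$f" && echo "readable by the current user"
rm -f "$f"
```

Using stat(1) and test(1) avoids the fragility of splitting "ls -l" columns by hand.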
If you need to access the information displayed by "ls -l" in a shell script, you should use pertinent commands such as test(1), stat(1) and readlink(1). Shell builtins such as "[" or "test" may be used too.
What permissions are applied to a newly created file or directory is restricted by the umask shell builtin command. See dash(1), bash(1), and builtins(7).
(file permissions) = (requested file permissions) & ~(umask value)
Table 1.6. The umask value examples
umask | file permissions created | directory permissions created | usage |
---|---|---|---|
0022 | -rw-r--r-- | -rwxr-xr-x | writable only by the user |
0002 | -rw-rw-r-- | -rwxrwxr-x | writable by the group |
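The masking formula above can be checked directly; here is a sketch using a scratch directory (new files request permissions 666 and new directories 777).

```shell
#!/bin/sh
# Observe how umask masks the permission bits of newly created entries
tmp=$(mktemp -d)
(
  umask 0022
  touch "$tmp/f"; mkdir "$tmp/d"
  stat -c '%a' "$tmp/f"   # 666 & ~022 = 644
  stat -c '%a' "$tmp/d"   # 777 & ~022 = 755
)
rm -rf "$tmp"
```

The subshell keeps the umask change from leaking into your interactive shell.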
The Debian system uses a user private group (UPG) scheme as its default. A UPG is created whenever a new user is added to the system. A UPG has the same name as the user for which it was created and that user is the only member of the UPG. The UPG scheme makes it safe to set umask to 0002 since every user has their own private group. (In some Unix variants, it is quite common to set up all normal users as belonging to a single users group, and it is a good idea to set umask to 0022 for security in such cases.)
In order to make group permissions apply to a particular user, that user needs to be made a member of the group using "sudo vigr".
Alternatively, you may dynamically add users to groups during the authentication process by adding an "auth optional pam_group.so" line to "/etc/pam.d/common-auth" and setting "/etc/security/group.conf". (See Chapter 4, Authentication.)
The hardware devices are just another kind of file on the Debian system. If you have problems accessing devices such as CD-ROM and USB memory stick from a user account, you should make that user a member of the relevant group.
Some notable system-provided groups allow their members to access particular files and devices without root
privilege.
Table 1.7. List of notable system-provided groups for file access
group | description for accessible files and devices |
---|---|
dialout
|
full and direct access to serial ports ("/dev/ttyS[0-3] ")
|
dip
|
limited access to serial ports for Dialup IP connection to trusted peers |
cdrom
|
CD-ROM, DVD+/-RW drives |
audio
|
audio device |
video
|
video device |
scanner
|
scanner(s) |
adm
|
system monitoring logs |
staff
|
some directories for junior administrative work: "/usr/local ", "/home "
|
You need to belong to the dialout
group to reconfigure the modem, dial anywhere, etc. But if root
creates pre-defined configuration files for trusted peers in "/etc/ppp/peers/
", you only need to belong to the dip
group to create Dialup IP connection to those trusted peers using pppd
(8), pon
(1), and poff
(1) commands.
Some notable system-provided groups allow their members to execute particular commands without root
privilege.
Table 1.8. List of notable system provided groups for particular command executions
group | accessible commands |
---|---|
sudo
|
execute sudo without their password
|
lpadmin
|
execute commands to add, modify, and remove printers from printer databases |
plugdev
|
execute pmount (1) for removable devices such as USB memories
|
For the full listing of the system provided users and groups, see the recent version of the "Users and Groups" document in "/usr/share/doc/base-passwd/users-and-groups.html
" provided by the base-passwd
package.
See passwd
(5), group
(5), shadow
(5), newgrp
(1), vipw
(8), vigr
(8), and pam_group
(8) for management commands of the user and group system.
There are three types of timestamps for a GNU/Linux file.
Table 1.9. List of types of timestamps
type | meaning |
---|---|
mtime |
the file modification time (ls -l )
|
ctime |
the file status change time (ls -lc )
|
atime |
the last file access time (ls -lu )
|
ctime is not file creation time.
Even simply reading a file on the Debian system normally causes a file write operation to update atime information in the inode. Mounting a filesystem with "noatime
" or "relatime
" option makes the system skip this operation and results in faster file access for the read. This is often recommended for laptops, because it reduces hard drive activity and saves power. See mount
(8).
Use the touch
(1) command to change timestamps of existing files.
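For example, mtime and atime can be set to an arbitrary date with touch(1) and inspected with stat(1); a minimal sketch, assuming GNU coreutils. (ctime still records the moment of this change.)

```shell
touch foo
# Set mtime and atime of "foo" to an arbitrary date.
touch -d "2001-02-03 04:05:06" foo
ls -l foo                      # shows the new mtime
stat -c 'mtime=%y atime=%x ctime=%z' foo
```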
For timestamps, the ls
command outputs different strings under the modern English locale ("en_US.UTF-8
") from under the old one ("C
").
$ LANG=en_US.UTF-8 ls -l foo
-rw-r--r-- 1 penguin penguin 3 2008-03-05 00:47 foo
$ LANG=C ls -l foo
-rw-r--r-- 1 penguin penguin 3 Mar  5 00:47 foo
See Section 9.2.5, “Customized display of time and date” to customize "ls -l
" output.
There are two methods of associating a file "foo
" with a different filename "bar
".
The hardlink, a duplicate name for an existing file: "ln foo bar"
The symlink (symbolic link), a special file pointing to another file by name: "ln -s foo bar"
See the following example for changes in link counts and the subtle differences in the result of the rm
command.
$ echo "Original Content" > foo $ ls -li foo 2398521 -rw-r--r-- 1 penguin penguin 17 2007-04-29 08:15 foo $ ln foo bar # hard link $ ln -s foo baz # symlink $ ls -li foo bar baz 2398521 -rw-r--r-- 2 penguin penguin 17 2007-04-29 08:15 bar 2398538 lrwxrwxrwx 1 penguin penguin 3 2007-04-29 08:16 baz -> foo 2398521 -rw-r--r-- 2 penguin penguin 17 2007-04-29 08:15 foo $ rm foo $ echo "New Content" > foo $ ls -li foo bar baz 2398521 -rw-r--r-- 1 penguin penguin 17 2007-04-29 08:15 bar 2398538 lrwxrwxrwx 1 penguin penguin 3 2007-04-29 08:16 baz -> foo 2398540 -rw-r--r-- 1 penguin penguin 12 2007-04-29 08:17 foo $ cat bar Original Content $ cat baz New Content
A hardlink can be made only within the same filesystem, and it shares the same inode number, which the "-i
" option with ls
(1) reveals.
The symlink always has nominal file access permissions of "rwxrwxrwx
", as shown in the above example, with the effective access permissions dictated by permissions of the file that it points to.
It is generally a good idea not to create complicated symbolic links or hardlinks at all unless you have a very good reason. It may cause nightmares when the combination of symbolic links results in loops in the filesystem.
It is generally preferable to use symbolic links rather than hardlinks unless you have a good reason for using a hardlink.
The ".
" directory links to the directory that it appears in, thus the link count of any new directory starts at 2. The "..
" directory links to the parent directory, thus the link count of the directory increases with the addition of new subdirectories.
If you are just moving to Linux from Windows, it soon becomes clear how well-designed the filename linking of Unix is, compared with the nearest Windows equivalent of "shortcuts". Because it is implemented in the filesystem, applications can't see any difference between a linked file and the original. In the case of hardlinks, there really is no difference.
A named pipe is a file that acts like a pipe. You put something into the file, and it comes out the other end. Thus it's called a FIFO, or First-In-First-Out: the first thing you put in the pipe is the first thing to come out the other end.
If you write to a named pipe, the process which is writing to the pipe doesn't terminate until the information being written is read from the pipe. If you read from a named pipe, the reading process waits until there is nothing to read before terminating. The size of the pipe is always zero --- it does not store data, it just links two processes like the shell "|
". However, since this pipe has a name, the two processes don't have to be on the same command line or even be run by the same user. Pipes were a very influential innovation of Unix.
For example, try the following
$ cd; mkfifo mypipe
$ echo "hello" >mypipe & # put into background
[1] 8022
$ ls -l mypipe
prw-r--r-- 1 penguin penguin 0 2007-04-29 08:25 mypipe
$ cat mypipe
hello
[1]+  Done                    echo "hello" >mypipe
$ ls mypipe
mypipe
$ rm mypipe
Sockets are used extensively by all Internet communication, databases, and the operating system itself. A socket is similar to the named pipe (FIFO) and allows processes to exchange information even between different computers. For the socket, those processes do not need to be running at the same time nor to be running as the children of the same ancestor process. This is the endpoint for interprocess communication (IPC). The exchange of information may occur over the network between different hosts. The two most common kinds are the Internet socket and the Unix domain socket.
"netstat -an
" provides a very useful overview of sockets that are open on a given system.
Device files refer to physical or virtual devices on your system, such as your hard disk, video card, screen, or keyboard. An example of a virtual device is the console, represented by "/dev/console
".
There are two types of device files.
Character device
Block device
You can read and write device files, though the file may well contain binary data which may be incomprehensible gibberish to humans. Writing data directly to these files is sometimes useful for troubleshooting hardware connections. For example, you can dump a text file to the printer device "/dev/lp0
" or send modem commands to the appropriate serial port "/dev/ttyS0
". But, unless this is done carefully, it may cause a major disaster. So be cautious.
For normal access to a printer, use lp
(1).
The device node numbers are displayed by executing ls
(1) as follows.
$ ls -l /dev/hda /dev/ttyS0 /dev/zero
brw-rw---- 1 root cdrom    3,  0 2007-04-29 07:00 /dev/hda
crw-rw---- 1 root dialout  4, 64 2007-04-29 07:00 /dev/ttyS0
crw-rw-rw- 1 root root     1,  5 2007-04-29 07:00 /dev/zero
"/dev/hda" has the major device number 3 and the minor device number 0. This is read/write accessible by users who belong to the cdrom group.
"/dev/ttyS0" has the major device number 4 and the minor device number 64. This is read/write accessible by users who belong to the dialout group.
"/dev/zero" has the major device number 1 and the minor device number 5. This is read/write accessible by anyone.
In the Linux 2.6 system, the filesystem under "/dev/
" is automatically populated by the udev
(7) mechanism.
There are some special device files.
Table 1.10. List of special device files
device file | action | description of response |
---|---|---|
/dev/null
|
read | return "end-of-file (EOF) character" |
/dev/null
|
write | return nothing (a bottomless data dump pit) |
/dev/zero
|
read |
return "the \0 (NUL) character" (not the same as the number zero ASCII)
|
/dev/random
|
read | return random characters from a true random number generator, delivering real entropy (slow) |
/dev/urandom
|
read | return random characters from a cryptographically secure pseudorandom number generator |
/dev/full
|
write | return the disk-full (ENOSPC) error |
These are frequently used in conjunction with the shell redirection (see Section 1.5.8, “Typical command sequences and shell redirection”).
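For example, "/dev/zero" and "/dev/null" combine naturally with redirection; a minimal sketch (the filenames are just examples).

```shell
# Create a 1 MiB file filled with NUL bytes read from /dev/zero.
dd if=/dev/zero of=zeros.bin bs=1024 count=1024 2>/dev/null
ls -l zeros.bin
# Discard unwanted error messages by redirecting them to /dev/null.
ls /no/such/directory 2>/dev/null || echo "error output was discarded"
```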
The procfs and sysfs mounted on "/proc
" and "/sys
" are the pseudo-filesystem and expose internal data structures of the kernel to the userspace. In other word, these entries are virtual, meaning that they act as a convenient window into the operation of the operating system.
The directory "/proc
" contains (among other things) one subdirectory for each process running on the system, which is named after the process ID (PID). System utilities that access process information, such as ps
(1), get their information from this directory structure.
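For example, each shell can inspect its own entry under "/proc" using its PID; a small Linux-specific sketch ("$$" expands to the PID of the current shell).

```shell
ls /proc/$$/                   # per-process entries: cmdline, status, fd/, ...
cat /proc/$$/comm              # the command name as the kernel sees it
grep '^State:' /proc/$$/status
```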
The directories under "/proc/sys/
" contain interface to change certain kernel parameters at run time. (You may do the same through specialized sysctl
(8) command or its preload/configuration file "/etc/sysctrl.conf
".)
The Linux kernel may complain "Too many open files". You can fix this by increasing the "file-max
" value from the root shell, e.g., "echo "65536" > /proc/sys/fs/file-max
". (This was needed on older kernels.)
People frequently panic when they notice one file in particular - "/proc/kcore
" - which is generally huge. This is (more or less) a copy of the content of your computer's memory. It's used to debug the kernel. It is a virtual file that points to computer memory, so don't worry about its size.
The directory under "/sys
" contains exported kernel data structures, their attributes, and their linkages between them. It also contains interface to change certain kernel parameters at run time.
See "proc.txt(.gz)
", "sysfs.txt(.gz)
" and other related documents in the Linux kernel documentation ("/usr/share/doc/linux-doc-2.6.*/Documentation/filesystems/*
") provided by the linux-doc-2.6.*
package.
Midnight Commander (MC) is a GNU "Swiss army knife" for the Linux console and other terminal environments. This gives newbies a menu-driven console experience which is much easier to learn than standard Unix commands.
You may need to install the Midnight Commander package, which is titled "mc
" by the following.
$ sudo apt-get install mc
Use the mc
(1) command to explore the Debian system. This is the best way to learn. Please explore a few interesting locations just using the cursor keys and the Enter key.
"/etc" and its subdirectories
"/var/log" and its subdirectories
"/usr/share/doc" and its subdirectories
"/sbin" and "/bin"
In order to make MC change its working directory upon exit and cd to that directory, I suggest modifying "~/.bashrc" to include a script provided by the mc package.
. /usr/share/mc/bin/mc.sh
See mc
(1) (under the "-P
" option) for the reason. (If you do not understand what exactly I am talking here, you can do this later.)
MC can be started by the following.
$ mc
MC takes care of all file operations through its menu, requiring minimal user effort. Just press F1 to get the help screen. You can play with MC just by pressing cursor-keys and function-keys.
In some consoles such as gnome-terminal
(1), key strokes of function-keys may be stolen by the console program. You can disable these features by "Edit" → "Keyboard Shortcuts" for gnome-terminal
.
If you encounter a character encoding problem which displays garbage characters, adding "-a
" to MC's command line may help prevent problems.
If this doesn't clear up your display problems with MC, see Section 9.6.6, “The terminal configuration”.
The default is two directory panels containing file lists. Another useful mode is to set the right window to "information" to see file access privilege information, etc. Following are some essential keystrokes. With the gpm
(8) daemon running, one can use a mouse on Linux character consoles, too. (Make sure to press the shift-key to obtain the normal behavior of cut and paste in MC.)
Table 1.11. The key bindings of MC
key | key binding |
---|---|
F1
|
help menu |
F3
|
internal file viewer |
F4
|
internal editor |
F9
|
activate pull down menu |
F10
|
exit Midnight Commander |
Tab
|
move between two windows |
Insert or Ctrl-T
|
mark file for a multiple-file operation such as copy |
Del
|
delete file (be careful---set MC to safe delete mode) |
Cursor keys | self-explanatory |
The cd command changes the directory shown on the selected screen.
Ctrl-Enter
or Alt-Enter
copies a filename to the command line. Use this with cp
(1) and mv
(1) commands together with command-line editing.
Alt-Tab
shows shell filename expansion choices.
The starting directories of both windows can be given as arguments, e.g., "mc /etc /root".
Esc
+ n-key
→ Fn
(i.e., Esc
+ 1
→ F1
, etc.; Esc
+ 0
→ F10
)
Esc
before the key has the same effect as pressing the Alt
and the key together; i.e., type Esc
+ c
for Alt-C
. Esc
is called meta-key and sometimes noted as "M-
".
The internal editor has an interesting cut-and-paste scheme. Pressing F3
marks the start of a selection, a second F3
marks the end of selection and highlights the selection. Then you can move your cursor. If you press F6, the selected area is moved to the cursor location. If you press F5, the selected area is copied and inserted at the cursor location. F2
saves the file. F10
gets you out. Most cursor keys work intuitively.
This editor can be directly started on a file using one of the following commands.
$ mc -e filename_to_edit
$ mcedit filename_to_edit
This is not a multi-window editor, but one can use multiple Linux consoles to achieve the same effect. To copy between windows, use Alt-F<n> keys to switch virtual consoles and use "File→Insert file" or "File→Copy to file" to move a portion of a file to another file.
This internal editor can be replaced with any external editor of choice.
Also, many programs use the environment variables "$EDITOR
" or "$VISUAL
" to decide which editor to use. If you are uncomfortable with vim
(1) or nano
(1) initially, you may set these to "mcedit
" by adding the following lines to "~/.bashrc
".
export EDITOR=mcedit export VISUAL=mcedit
I do recommend setting these to "vim
" if possible.
If you are uncomfortable with vim
(1), you can keep using mcedit
(1) for most system maintenance tasks.
MC is a very smart viewer. This is a great tool for searching for words in documents. I always use this for files in the "/usr/share/doc
" directory. This is the fastest way to browse through masses of Linux information. This viewer can be directly started using one of the following commands.
$ mc -v path/to/filename_to_view
$ mcview path/to/filename_to_view
Press Enter on a file, and the appropriate program handles the content of the file (see Section 9.5.11, “Customizing program to be started”). This is a very convenient MC feature.
Table 1.12. The reaction to the enter key in MC
file type | reaction to enter key |
---|---|
executable file | execute command |
man file | pipe content to viewer software |
html file | pipe content to web browser |
"*.tar.gz " and "*.deb " file
|
browse its contents as if subdirectory |
In order to allow these viewer and virtual file features to function, viewable files should not be set as executable. Change their status using chmod
(1) or via the MC file menu.
MC can be used to access files over the Internet using FTP. Go to the menu by pressing F9
, then type "p
" to activate the FTP virtual filesystem. Enter a URL in the form "username:passwd@hostname.domainname
", which retrieves a remote directory that appears like a local one.
Try "[http.us.debian.org/debian]" as the URL and browse the Debian archive.
Although MC enables you to do almost everything, it is very important for you to learn how to use the command line tools invoked from the shell prompt and become familiar with the Unix-like work environment.
You can select your login shell with chsh
(1).
Table 1.13. List of shell programs
package | popcon | size | POSIX shell | description |
---|---|---|---|---|
bash
*
|
V:91, I:99 | 3536 | Yes | Bash: the GNU Bourne Again SHell (de facto standard) |
tcsh
*
|
V:4, I:27 | 768 | No | TENEX C Shell: an enhanced version of Berkeley csh |
dash
*
|
V:25, I:32 | 248 | Yes | Debian Almquist Shell, good for shell script |
zsh
*
|
V:3, I:6 | 12784 | Yes | Z shell: the standard shell with many enhancements |
pdksh
*
|
V:0.2, I:1.1 | 468 | Yes | public domain version of the Korn shell |
csh
*
|
V:0.6, I:2 | 404 | No | OpenBSD C Shell, a version of Berkeley csh |
sash
*
|
V:0.2, I:1.0 | 856 | Yes |
Stand-alone shell with builtin commands (Not meant for standard "/bin/sh ")
|
ksh
*
|
V:0.5, I:1.6 | 2800 | Yes | the real, AT&T version of the Korn shell |
rc
*
|
V:0.16, I:1.6 | 204 | No | implementation of the AT&T Plan 9 rc shell |
posh
*
|
V:0.01, I:0.11 | 228 | Yes |
Policy-compliant Ordinary SHell (pdksh derivative)
|
In this tutorial chapter, the interactive shell always means bash
.
You can customize bash
(1) behavior by "~/.bashrc
".
For example, try the following.
# CD upon exiting MC
. /usr/share/mc/bin/mc.sh

# set CDPATH to good one
CDPATH=.:/usr/share/doc:~:~/Desktop:~
export CDPATH

PATH="${PATH}":/usr/sbin:/sbin
# set PATH so it includes user's private bin if it exists
if [ -d ~/bin ] ; then
  PATH=~/bin:"${PATH}"
fi
export PATH

EDITOR=vim
export EDITOR
You can find more bash
customization tips, such as Section 9.2.7, “Colorized commands”, in Chapter 9, System tips.
In the Unix-like environment, there are a few key strokes which have special meanings. Please note that on a normal Linux character console, only the left-hand Ctrl and Alt keys work as expected. Here are a few notable key strokes to remember.
Table 1.14. List of key bindings for bash
key | description of key binding |
---|---|
Ctrl-U
|
erase line before cursor |
Ctrl-H
|
erase a character before cursor |
Ctrl-D
|
terminate input (exit shell if you are using shell) |
Ctrl-C
|
terminate a running program |
Ctrl-Z
|
temporarily stop program by moving it to the background job |
Ctrl-S
|
halt output to screen |
Ctrl-Q
|
reactivate output to screen |
Ctrl-Alt-Del
|
reboot/halt the system, see inittab (5)
|
Left-Alt-key (optionally, Windows-key )
|
meta-key for Emacs and the similar UI |
Up-arrow
|
start command history search under bash
|
Ctrl-R
|
start incremental command history search under bash
|
Tab
|
complete input of the filename to the command line under bash
|
Ctrl-V
Tab
|
input Tab without expansion to the command line under bash
|
The terminal feature of Ctrl-S
can be disabled using stty
(1).
Unix style mouse operations are based on the 3 button mouse system.
Table 1.15. List of Unix style mouse operations
action | response |
---|---|
Left-click-and-drag mouse | select and copy to the clipboard |
Left-click | select the start of selection |
Right-click | select the end of selection and copy to the clipboard |
Middle-click | paste clipboard at the cursor |
The center wheel on the modern wheel mouse is considered to be the middle mouse button and can be used for middle-click. Clicking the left and right mouse buttons together serves as the middle-click on a 2-button mouse system. In order to use a mouse in Linux character consoles, you need to have gpm
(8) running as a daemon.
less
(1) is the enhanced pager (file content browser). Hit "h
" for help. It can do much more than more
(1) and can be supercharged by executing "eval $(lesspipe)
" or "eval $(lessfile)
" in the shell startup script. See more in "/usr/share/doc/lessf/LESSOPEN
". The "-R
" option allows raw character output and enables ANSI color escape sequences. See less
(1).
You should become proficient in one of the variants of the Vim or Emacs programs, which are popular in the Unix-like system.
I think getting used to Vim commands is the right thing to do, since a Vi-editor is always there in the Linux/Unix world. (Actually, the original vi or the new nvi are programs you find everywhere. I chose Vim instead for newbies since it offers you help through the F1 key while being similar enough and more powerful.)
If you chose either Emacs or XEmacs instead as your choice of the editor, that is another good choice indeed, particularly for programming. Emacs has a plethora of other features as well, including functioning as a newsreader, directory editor, mail program, etc. When used for programming or editing shell scripts, it intelligently recognizes the format of what you are working on, and tries to provide assistance. Some people maintain that the only program they need on Linux is Emacs. Ten minutes learning Emacs now can save hours later. Having the GNU Emacs manual for reference when learning Emacs is highly recommended.
All these programs usually come with a tutoring program for you to learn them by practice. Start Vim by typing "vim" and press the F1 key. You should at least read the first 35 lines. Then do the online training course by moving the cursor to "|tutor|" and pressing Ctrl-].
Good editors, such as Vim and Emacs, can be used to handle UTF-8 and other exotic encoding texts correctly with proper option in the x-terminal-emulator on X under UTF-8 locale with proper font settings. Please refer to their documentation on multibyte text.
Debian comes with a number of different editors. We recommend installing the vim
package, as mentioned above.
Debian provides unified access to the system default editor via the command "/usr/bin/editor
" so other programs (e.g., reportbug
(1)) can invoke it. You can change it by the following.
$ sudo update-alternatives --config editor
The choice "/usr/bin/vim.basic
" over "/usr/bin/vim.tiny
" is my recommendation for newbies since it supports syntax highlighting.
Many programs use the environment variables "$EDITOR
" or "$VISUAL
" to decide which editor to use (see Section 1.3.5, “The internal editor in MC” and Section 9.5.11, “Customizing program to be started”). For the consistency on Debian system, set these to "/usr/bin/editor
". (Historically, "$EDITOR
" was "ed
" and "$VISUAL
" was "vi
".)
You can customize vim
(1) behavior by "~/.vimrc
".
For example, try the following
" ------------------------------- " Local configuration " set nocompatible set nopaste set pastetoggle=<f2> syn on if $USER == "root" set nomodeline set noswapfile else set modeline set swapfile endif " filler to avoid the line above being recognized as a modeline " filler " filler
The output of the shell command may roll off your screen and may be lost forever. It is a good practice to log shell activities into a file for you to review later. This kind of record is essential when you perform any system administration tasks.
The basic method of recording the shell activity is to run it under script
(1).
For example, try the following
$ script
Script started, file is typescript
Do whatever shell commands under script
.
Press Ctrl-D
to exit script
.
$ vim typescript
See Section 9.2.3, “Recording the shell activities cleanly” .
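If you only need to record a single command, the "-c" option of script(1) from the util-linux package can run it non-interactively; a short sketch (the log filename is just an example).

```shell
# Record the output of one command into "typescript.log".
script -q -c "date" typescript.log
# Review the recorded session.
cat typescript.log
```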
Let's learn basic Unix commands. Here I use "Unix" in its generic sense. Any Unix clone OS usually offers equivalent commands. The Debian system is no exception. Do not worry if some commands do not work as you wish now. If an alias
is used in the shell, its corresponding command output may differ. These examples are not meant to be executed in this order.
Try all the following commands from the non-privileged user account.
Table 1.16. List of basic Unix commands
command | description |
---|---|
pwd
|
display name of current/working directory |
whoami
|
display current user name |
id
|
display current user identity (name, uid, gid, and associated groups) |
file <foo>
|
display a type of file for the file "<foo> "
|
type -p <commandname>
|
display a file location of command "<commandname> "
|
which <commandname>
|
, , |
type <commandname>
|
display information on command "<commandname> "
|
apropos <key-word>
|
find commands related to "<key-word> "
|
man -k <key-word>
|
, , |
whatis <commandname>
|
display one line explanation on command "<commandname> "
|
man -a <commandname>
|
display explanation on command "<commandname> " (Unix style)
|
info <commandname>
|
display rather long explanation on command "<commandname> " (GNU style)
|
ls
|
list contents of directory (non-dot files and directories) |
ls -a
|
list contents of directory (all files and directories) |
ls -A
|
list contents of directory (almost all files and directories, i.e., skip ".. " and ". ")
|
ls -la
|
list all contents of directory with detail information |
ls -lai
|
list all contents of directory with inode number and detail information |
ls -d
|
list all directories under the current directory |
tree
|
display file tree contents |
lsof <foo>
|
list open status of file "<foo> "
|
lsof -p <pid>
|
list files opened by the process ID: "<pid> "
|
mkdir <foo>
|
make a new directory "<foo> " in the current directory
|
rmdir <foo>
|
remove a directory "<foo> " in the current directory
|
cd <foo>
|
change directory to the directory "<foo> " in the current directory or in the directory listed in the variable "$CDPATH "
|
cd /
|
change directory to the root directory |
cd
|
change directory to the current user's home directory |
cd /<foo>
|
change directory to the absolute path directory "/<foo> "
|
cd ..
|
change directory to the parent directory |
cd ~<foo>
|
change directory to the home directory of the user "<foo> "
|
cd -
|
change directory to the previous directory |
</etc/motd pager
|
display contents of "/etc/motd " using the default pager
|
touch <junkfile>
|
create an empty file "<junkfile> "
|
cp <foo> <bar>
|
copy an existing file "<foo> " to a new file "<bar> "
|
rm <junkfile>
|
remove a file "<junkfile> "
|
mv <foo> <bar>
|
rename an existing file "<foo> " to a new name "<bar> " ("<bar> " must not exist)
|
mv <foo> <bar>
|
move an existing file "<foo> " to a new location "<bar>/<foo> " (the directory "<bar> " must exist)
|
mv <foo> <bar>/<baz>
|
move an existing file "<foo> " to a new location with a new name "<bar>/<baz> " (the directory "<bar> " must exist but the directory "<bar>/<baz> " must not exist)
|
chmod 600 <foo>
|
make an existing file "<foo> " to be non-readable and non-writable by the other people (non-executable for all)
|
chmod 644 <foo>
|
make an existing file "<foo> " to be readable but non-writable by the other people (non-executable for all)
|
chmod 755 <foo>
|
make an existing file "<foo> " to be readable but non-writable by the other people (executable for all)
|
find . -name <pattern>
|
find matching filenames using shell "<pattern> " (slower)
|
locate -d . <pattern>
|
find matching filenames using shell "<pattern> " (quicker using regularly generated database)
|
grep -e "<pattern>" *.html
|
find a "<pattern>" in all files ending with ".html " in current directory and display them all
|
top
|
display process information using full screen, type "q " to quit
|
ps aux | pager
|
display information on all the running processes using BSD style output |
ps -ef | pager
|
display information on all the running processes using Unix system-V style output |
ps aux | grep -e "[e]xim4*"
|
display all processes running "exim " and "exim4 "
|
ps axf | pager
|
display information on all the running processes with ASCII art output |
kill <1234>
|
kill a process identified by the process ID: "<1234>" |
gzip <foo>
|
compress "<foo> " to create "<foo>.gz " using the Lempel-Ziv coding (LZ77)
|
gunzip <foo>.gz
|
decompress "<foo>.gz " to create "<foo> "
|
bzip2 <foo>
|
compress "<foo> " to create "<foo>.bz2 " using the Burrows-Wheeler block sorting text compression algorithm, and Huffman coding (better compression than gzip )
|
bunzip2 <foo>.bz2
|
decompress "<foo>.bz2 " to create "<foo> "
|
xz <foo>
|
compress "<foo> " to create "<foo>.xz " using the Lempel–Ziv–Markov chain algorithm (better compression than bzip2 )
|
unxz <foo>.xz
|
decompress "<foo>.xz " to create "<foo> "
|
tar -xvf <foo>.tar
|
extract files from "<foo>.tar " archive
|
tar -xvzf <foo>.tar.gz
|
extract files from gzipped "<foo>.tar.gz " archive
|
tar -xvjf <foo>.tar.bz2
|
extract files from "<foo>.tar.bz2 " archive
|
tar -xvJf <foo>.tar.xz
|
extract files from "<foo>.tar.xz " archive
|
tar -cvf <foo>.tar <bar>/
|
archive contents of folder "<bar>/ " in "<foo>.tar " archive
|
tar -cvzf <foo>.tar.gz <bar>/
|
archive contents of folder "<bar>/ " in compressed "<foo>.tar.gz " archive
|
tar -cvjf <foo>.tar.bz2 <bar>/
|
archive contents of folder "<bar>/ " in "<foo>.tar.bz2 " archive
|
tar -cvJf <foo>.tar.xz <bar>/
|
archive contents of folder "<bar>/ " in "<foo>.tar.xz " archive
|
zcat README.gz | pager
|
display contents of compressed "README.gz " using the default pager
|
zcat README.gz > foo
|
create a file "foo " with the decompressed content of "README.gz "
|
zcat README.gz >> foo
|
append the decompressed content of "README.gz " to the end of the file "foo " (if it does not exist, create it first)
|
Unix has a tradition of hiding filenames which start with ".
". They are traditionally files that contain configuration information and user preferences.
For the cd
command, see builtins
(7).
The default pager of the bare-bones Debian system is more (1), which cannot scroll back. By installing the less package using the command line "apt-get install less", less (1) becomes the default pager and you can scroll back with the cursor keys.
The "[
" and "]
" in the regular expression of the "ps aux | grep -e "[e]xim4*"
" command above enable grep
to avoid matching itself. The "4*
" in the regular expression means 0 or more repeats of character "4
" thus enables grep
to match both "exim
" and "exim4
". Although "*
" is used in the shell filename glob and the regular expression, their meanings are different. Learn the regular expression from grep
(1).
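The difference can be checked quickly by feeding grep some candidate names; a minimal sketch (the input lines are just examples).

```shell
# "^exim4*$" matches "exim" followed by zero or more "4"s.
printf 'exim\nexim4\nexim44\nexit\n' | grep -e '^exim4*$'
# prints exim, exim4, and exim44 but not exit
```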
Please traverse directories and peek into the system using the above commands as training. If you have questions on any console commands, please make sure to read the manual page.
For example, try the following
$ man man
$ man bash
$ man builtins
$ man grep
$ man ls
The style of man pages may be a little hard to get used to, because they are rather terse, particularly the older, very traditional ones. But once you get used to it, you come to appreciate their succinctness.
Please note that many Unix-like commands including ones from GNU and BSD display brief help information if you invoke them in one of the following ways (or without any arguments in some cases).
$ <commandname> --help
$ <commandname> -h
Now you have some feel for how to use the Debian system. Let's look deeper into the mechanism of command execution in the Debian system. Here, I have simplified reality for the newbie. See bash
(1) for the exact explanation.
A simple command is a sequence of components.
Variable assignments (optional)
Command name and its arguments (optional)
Redirections (optional: ">", ">>", "<", "<<", etc.)
Control operator (optional: "&&", "||", <newline>, ";", "&", "(", ")")
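A single line can contain all of these components at once; a minimal sketch (the filenames are just examples).

```shell
# Create some input first.
printf 'b\na\n' > names.txt
# variable assignment, command name + arguments, redirection, control operator:
LC_ALL=C sort names.txt > sorted.txt && echo "sorted"
cat sorted.txt                 # prints "a" then "b"
```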
Values of some environment variables change the behavior of some Unix commands.
Default values of environment variables are initially set by the PAM system and then some of them may be reset by some application programs.
The display manager such as gdm resets environment variables.
The shell resets environment variables in its startup files such as "~/.bash_profile" and "~/.bashrc".
The full locale value given to "$LANG
" variable consists of 3 parts: "xx_YY.ZZZZ
".
Table 1.17. 3 parts of locale value
locale value | meaning
---|---
xx | ISO 639 language codes (lower case) such as "en"
YY | ISO 3166 country codes (upper case) such as "US"
ZZZZ | codeset, always set to "UTF-8"
For language codes and country codes, see the pertinent description in "info gettext
".
For the codeset on the modern Debian system, you should always set it to UTF-8
unless you specifically want to use the historic one with good reason and background knowledge.
For fine details of the locale configuration, see Section 8.3, “The locale”.
The "LANG=en_US
" is not "LANG=C
" nor "LANG=en_US.UTF-8
". It is "LANG=en_US.ISO-8859-1
" (see Section 8.3.1, “Basics of encoding”).
Table 1.18. List of locale recommendations
locale recommendation | Language (area)
---|---
en_US.UTF-8 | English (USA)
en_GB.UTF-8 | English (Great Britain)
fr_FR.UTF-8 | French (France)
de_DE.UTF-8 | German (Germany)
it_IT.UTF-8 | Italian (Italy)
es_ES.UTF-8 | Spanish (Spain)
ca_ES.UTF-8 | Catalan (Spain)
sv_SE.UTF-8 | Swedish (Sweden)
pt_BR.UTF-8 | Portuguese (Brazil)
ru_RU.UTF-8 | Russian (Russia)
zh_CN.UTF-8 | Chinese (P.R. of China)
zh_TW.UTF-8 | Chinese (Taiwan R.O.C.)
ja_JP.UTF-8 | Japanese (Japan)
ko_KR.UTF-8 | Korean (Republic of Korea)
vi_VN.UTF-8 | Vietnamese (Vietnam)
Typical command execution uses a shell line sequence as follows.
$ date Sun Jun 3 10:27:39 JST 2007 $ LANG=fr_FR.UTF-8 date dimanche 3 juin 2007, 10:27:33 (UTC+0900)
Here, the program date
(1) is executed with different values of the environment variable "$LANG
".
Most command executions usually do not have a preceding environment variable definition. For the above example, you can alternatively execute as follows.
$ LANG=fr_FR.UTF-8 $ date dimanche 3 juin 2007, 10:27:33 (UTC+0900)
As you can see here, the output of the command is affected by the environment variable to produce French output. If you want the environment variable to be inherited by subprocesses (e.g., when calling a shell script), you need to export it instead as follows.
$ export LANG
When filing a bug report, running and checking the command under "LANG=en_US.UTF-8
" is a good idea if you use a non-English environment.
See locale
(5) and locale
(7) for "$LANG
" and related environment variables.
I recommend configuring the system environment just by the "$LANG
" variable and staying away from the "$LC_*
" variables unless absolutely needed.
When you type a command into the shell, the shell searches for the command in the list of directories contained in the "$PATH
" environment variable. The value of the "$PATH
" environment variable is also called the shell's search path.
In the default Debian installation, the "$PATH
" environment variable of user accounts may not include "/sbin
" and "/usr/sbin
". For example, the ifconfig
command needs to be issued with full path as "/sbin/ifconfig
". (The similar ip
command is located in "/bin
".)
You can change the "$PATH" environment variable of the Bash shell in the "~/.bash_profile" or "~/.bashrc" files.
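As a sketch, a line like the following is typically added to "~/.bashrc" to put a personal bin directory on the search path ("$HOME/bin" is an illustrative choice, not a Debian default):

```shell
# Prepend a personal bin directory to the command search path.
# Commands installed into ~/bin are then found without typing
# their full path.
export PATH="$HOME/bin:$PATH"
```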
Many commands store user specific configuration in the home directory and change their behavior according to its contents. The home directory is identified by the environment variable "$HOME".
Table 1.19. List of "$HOME" values

value of "$HOME" | program execution situation
---|---
/ | program run by the init process (daemon)
/root | program run from the normal root shell
/home/<normal_user> | program run from the normal user shell
/home/<normal_user> | program run from the normal user GUI desktop menu
/home/<normal_user> | program run as root with "sudo program"
/root | program run as root with "sudo -H program"
The shell expands "~/" to the current user's home directory, i.e., "$HOME/". The shell expands "~foo/" to foo's home directory, i.e., "/home/foo/".
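For example, the tilde expansion can be checked as follows ("~root" usually expands to "/root" on a Debian system):

```shell
# "~" alone expands to the current user's home directory ($HOME).
echo ~
# "~root" expands to root's home directory.
echo ~root
```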
Some commands take arguments. Arguments starting with "-
" or "--
" are called options and control the behavior of the command.
$ date Mon Oct 27 23:02:09 CET 2003 $ date -R Mon, 27 Oct 2003 23:02:40 +0100
Here the command-line argument "-R
" changes date
(1) behavior to output RFC2822 compliant date string.
Often you want a command to work with a group of files without typing all of them. The filename expansion pattern using the shell glob (sometimes referred to as wildcards) facilitates this need.
Table 1.20. Shell glob patterns
shell glob pattern | description of match rule
---|---
* | filename (segment) not starting with "."
.* | filename (segment) starting with "."
? | exactly one character
[…] | exactly one character with any character enclosed in brackets
[a-z] | exactly one character with any character between "a" and "z"
[^…] | exactly one character other than any character enclosed in brackets (excluding "^")
For example, try the following
$ mkdir junk; cd junk; touch 1.txt 2.txt 3.c 4.h .5.txt ..6.txt $ echo *.txt 1.txt 2.txt $ echo * 1.txt 2.txt 3.c 4.h $ echo *.[hc] 3.c 4.h $ echo .* . .. .5.txt ..6.txt $ echo .*[^.]* .5.txt ..6.txt $ echo [^1-3]* 4.h $ cd ..; rm -rf junk
See glob
(7).
Unlike normal filename expansion by the shell, the shell pattern "*
" tested in find
(1) with "-name
" test etc., matches the initial ".
" of the filename. (This is a new POSIX feature.)
Bash can be tweaked to change its glob behavior with its shopt builtin options such as "dotglob", "noglob", "nocaseglob", "nullglob", "extglob", etc. See bash(1).
Each command returns its exit status (variable: "$?
") as the return value.
Table 1.21. Command exit codes
command exit status | numeric return value | logical return value |
---|---|---|
success | zero, 0 | TRUE |
error | non-zero, -1 | FALSE |
For example, try the following.
$ [ 1 = 1 ] ; echo $? 0 $ [ 1 = 2 ] ; echo $? 1
Please note that, in the logical context of the shell, success is treated as the logical TRUE, which has 0 (zero) as its value. This is somewhat non-intuitive and worth remembering.
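The exit status drives shell logic directly; here is a small sketch (the file "fruits.txt" and its contents are made up):

```shell
# Exit status 0 (success) is treated as TRUE by "if", "&&" and "||".
printf 'apple\nbanana\n' > fruits.txt
if grep -q apple fruits.txt ; then
  echo "apple found"
fi
# The same logic as a one-line idiom:
grep -q cherry fruits.txt && echo "cherry found" || echo "no cherry"
rm -f fruits.txt
```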
Let's remember the following shell command idioms, each typed on one line as a part of a shell command.
Table 1.22. Shell command idioms
command idiom | description
---|---
"command &" | background execution of command in the subshell
"command1 | command2" | pipe the standard output of command1 to the standard input of command2 (concurrent execution)
"command1 2>&1 | command2" | pipe both standard output and standard error of command1 to the standard input of command2 (concurrent execution)
"command1 ; command2" | execute command1 and command2 sequentially
"command1 && command2" | execute command1; if successful, execute command2 sequentially (return success if both command1 and command2 are successful)
"command1 || command2" | execute command1; if not successful, execute command2 sequentially (return success if command1 or command2 is successful)
"command > foo" | redirect standard output of command to a file foo (overwrite)
"command 2> foo" | redirect standard error of command to a file foo (overwrite)
"command >> foo" | redirect standard output of command to a file foo (append)
"command 2>> foo" | redirect standard error of command to a file foo (append)
"command > foo 2>&1" | redirect both standard output and standard error of command to a file foo
"command < foo" | redirect standard input of command from a file foo
"command << delimiter" | redirect standard input of command to the following lines until "delimiter" is met (here document)
"command <<- delimiter" | redirect standard input of command to the following lines until "delimiter" is met (here document; the leading tab characters are stripped from input lines)
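The here document idiom from the table can be sketched as follows ("greeting.txt" is a made-up filename; variables and command substitutions inside the here document are expanded unless the delimiter is quoted):

```shell
# The lines up to the delimiter "EOF" become the standard input
# of cat(1), which is redirected into greeting.txt here.
cat << EOF > greeting.txt
Hello, $USER
Working directory: $(pwd)
EOF
cat greeting.txt
```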
The Debian system is a multi-tasking system. Background jobs allow users to run multiple programs in a single shell. The management of the background process involves the shell builtins: jobs
, fg
, bg
, and kill
. Please read the sections of bash(1) under "SIGNALS" and "JOB CONTROL", and builtins
(1).
For example, try the following
$ </etc/motd pager
$ pager </etc/motd
$ pager /etc/motd
$ cat /etc/motd | pager
Although all 4 examples of shell redirections display the same thing, the last example runs an extra cat
command and wastes resources for no reason.
The shell allows you to open files using the exec
builtin with an arbitrary file descriptor.
$ echo Hello >foo $ exec 3<foo 4>bar # open files $ cat <&3 >&4 # redirect stdin to 3, stdout to 4 $ exec 3<&- 4>&- # close files $ cat bar Hello
Here, "n<&-
" and "n>&-
" mean to close the file descriptor "n
".
File descriptors 0-2 are predefined.
Table 1.23. Predefined file descriptors
device | description | file descriptor
---|---|---
stdin | standard input | 0
stdout | standard output | 1
stderr | standard error | 2
You can set an alias for a frequently used command.
For example, try the following
$ alias la='ls -la'
Now, "la
" works as shorthand for "ls -la
", which lists all files in the long listing format.
You can list any existing aliases by alias
(see bash
(1) under "SHELL BUILTIN COMMANDS").
$ alias ... alias la='ls -la'
You can identify the exact path or the identity of the command by type
(see bash
(1) under "SHELL BUILTIN COMMANDS").
For example, try the following
$ type ls ls is hashed (/bin/ls) $ type la la is aliased to ls -la $ type echo echo is a shell builtin $ type file file is /usr/bin/file
Here ls
was recently searched while "file
" was not, thus "ls
" is "hashed", i.e., the shell has an internal record for quick access to the location of the "ls
" command.
In the Unix-like work environment, text processing is done by piping text through chains of standard text processing tools. This was another crucial Unix innovation.
There are a few standard text processing tools which are used very often on Unix-like systems.
No regular expression is used:
cat
(1) concatenates files and outputs the whole content.
tac
(1) concatenates files and outputs in reverse.
cut
(1) selects parts of lines and outputs.
head
(1) outputs the first part of files.
tail
(1) outputs the last part of files.
sort
(1) sorts lines of text files.
uniq
(1) removes duplicate lines from a sorted file.
tr
(1) translates or deletes characters.
diff
(1) compares files line by line.
Basic regular expression (BRE) is used:
grep
(1) matches text with patterns.
ed
(1) is a primitive line editor.
sed
(1) is a stream editor.
vim
(1) is a screen editor.
emacs
(1) is a screen editor. (somewhat extended BRE)
Extended regular expression (ERE) is used:
egrep
(1) matches text with patterns.
awk
(1) does simple text processing.
tcl
(3tcl) can do every conceivable text processing: re_syntax
(3). Often used with tk
(3tk).
perl
(1) can do every conceivable text processing. perlre
(1).
pcregrep
(1) from the pcregrep
package matches text with Perl Compatible Regular Expressions (PCRE) pattern.
python
(1) with the re
module can do every conceivable text processing. See "/usr/share/doc/python/html/index.html
".
If you are not sure what exactly these commands do, please use "man command
" to figure it out by yourself.
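A small pipeline combining several of the tools above can be sketched on made-up passwd-style sample data: extract the login shell field with cut(1), then count how many entries use each shell with "sort | uniq -c".

```shell
# Field 7 of passwd-style data is the login shell; count each
# distinct shell and sort the counts in descending order.
printf 'a:x:1:1::/h/a:/bin/sh\nb:x:2:2::/h/b:/bin/bash\nc:x:3:3::/h/c:/bin/sh\n' |
  cut -d: -f7 | sort | uniq -c | sort -rn
```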
Sort order and range expressions are locale dependent. If you wish to obtain the traditional behavior for a command, use the C locale instead of UTF-8 ones by prepending the command with "LANG=C
" (see Section 1.5.2, “"$LANG
" variable” and Section 8.3, “The locale”).
Perl regular expressions (perlre
(1)), Perl Compatible Regular Expressions (PCRE), and Python regular expressions offered by the re
module have many common extensions to the normal ERE.
Regular expressions are used in many text processing tools. They are analogous to the shell globs, but they are more complicated and powerful.
The regular expression describes the matching pattern and is made up of text characters and metacharacters.
The metacharacter is just a character with a special meaning. There are 2 major styles, BRE and ERE, depending on the text tools as described above.
Table 1.24. Metacharacters for BRE and ERE
BRE | ERE | description of the regular expression
---|---|---
\ . [ ] ^ $ * | \ . [ ] ^ $ * | common metacharacters
\+ \? \( \) \{ \} \| |  | BRE only "\" escaped metacharacters
 | + ? ( ) { } | | ERE only non-"\" escaped metacharacters
c | c | match non-metacharacter "c"
\c | \c | match a literal character "c" even if "c" is a metacharacter by itself
. | . | match any character including newline
^ | ^ | position at the beginning of a string
$ | $ | position at the end of a string
\< | \< | position at the beginning of a word
\> | \> | position at the end of a word
\[abc…\] | [abc…] | match any characters in "abc…"
\[^abc…\] | [^abc…] | match any characters except in "abc…"
r* | r* | match zero or more regular expressions identified by "r"
r\+ | r+ | match one or more regular expressions identified by "r"
r\? | r? | match zero or one regular expression identified by "r"
r1\|r2 | r1|r2 | match one of the regular expressions identified by "r1" or "r2"
\(r1\|r2\) | (r1|r2) | match one of the regular expressions identified by "r1" or "r2" and treat it as a bracketed regular expression
The regular expression of emacs
is basically BRE but has been extended to treat "+
" and "?
" as metacharacters as in ERE. Thus, there is no need to escape them with "\
" in the regular expression of emacs
.
grep
(1) can be used to perform the text search using the regular expression.
For example, try the following
$ egrep 'GNU.*LICENSE|Yoyodyne' /usr/share/common-licenses/GPL GNU GENERAL PUBLIC LICENSE GNU GENERAL PUBLIC LICENSE Yoyodyne, Inc., hereby disclaims all copyright interest in the program
For the replacement expression, some characters have special meanings.
Table 1.25. The replacement expression
replacement expression | description of the text to replace the replacement expression
---|---
& | what the regular expression matched (use \& in emacs)
\n | what the n-th bracketed regular expression matched ("n" being a number)
For Perl replacement string, "$n
" is used instead of "\n
" and "&
" has no special meaning.
For example, try the following
$ echo zzz1abc2efg3hij4 | \ sed -e 's/\(1[a-z]*\)[0-9]*\(.*\)$/=&=/' zzz=1abc2efg3hij4= $ echo zzz1abc2efg3hij4 | \ sed -e 's/\(1[a-z]*\)[0-9]*\(.*\)$/\2===\1/' zzzefg3hij4===1abc $ echo zzz1abc2efg3hij4 | \ perl -pe 's/(1[a-z]*)[0-9]*(.*)$/$2===$1/' zzzefg3hij4===1abc $ echo zzz1abc2efg3hij4 | \ perl -pe 's/(1[a-z]*)[0-9]*(.*)$/=&=/' zzz=&=
Here please pay extra attention to the style of the bracketed regular expression and how the matched strings are used in the text replacement process on different tools.
These regular expressions can be used for cursor movements and text replacement actions in some editors too.
The backslash "\
" at the end of a line in the shell command line escapes the newline as a whitespace character and continues the shell command line input on the next line.
Please read all the related manual pages to learn these commands.
The ed
(1) command can replace all instances of "FROM_REGEX
" with "TO_TEXT
" in "file
".
$ ed file <<EOF ,s/FROM_REGEX/TO_TEXT/g w q EOF
The sed
(1) command can replace all instances of "FROM_REGEX
" with "TO_TEXT
" in "file
".
$ sed -e 's/FROM_REGEX/TO_TEXT/g' file | sponge file
The sponge
(8) command is a non-standard Unix tool offered by the moreutils
package. It is quite useful when you wish to overwrite the original file.
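GNU sed also offers the "-i" option for in-place editing, which avoids the pipe through sponge(8); a small sketch with a made-up sample file:

```shell
# Create a sample file, then edit it in place with GNU sed's "-i".
echo 'hello world' > sample.txt
sed -i -e 's/world/Debian/' sample.txt
cat sample.txt
```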
The vim
(1) command can replace all instances of "FROM_REGEX
" with "TO_TEXT
" in "file
" by using ex
(1) commands.
$ vim '+%s/FROM_REGEX/TO_TEXT/gc' '+w' '+q' file
The "c
" flag in the above ensures interactive confirmation for each substitution.
Multiple files ("file1
", "file2
", and "file3
") can be processed with regular expressions similarly with vim
(1) or perl
(1).
$ vim '+argdo %s/FROM_REGEX/TO_TEXT/ge|update' '+q' file1 file2 file3
The "e
" flag in the above prevents the "No match" error from breaking a mapping.
$ perl -i -p -e 's/FROM_REGEX/TO_TEXT/g;' file1 file2 file3
In the perl(1) example, "-i
" is for in-place editing, "-p
" is for implicit loop over files.
Use of argument "-i.bak
" instead of "-i
" keeps each original file by adding ".bak
" to its filename. This makes recovery from errors easier for complex substitutions.
ed
(1) and vim
(1) are BRE; perl
(1) is ERE.
Let's consider a text file called "DPL
" in which some pre-2004 Debian project leaders' names and their initiation days are listed in a space-separated format.
Ian Murdock August 1993 Bruce Perens April 1996 Ian Jackson January 1998 Wichert Akkerman January 1999 Ben Collins April 2001 Bdale Garbee April 2002 Martin Michlmayr March 2003
See "A Brief History of Debian" for the latest Debian leadership history.
Awk is frequently used to extract data from these types of files.
For example, try the following
$ awk '{ print $3 }' <DPL # month started August April January January April April March $ awk '($1=="Ian") { print }' <DPL # DPL called Ian Ian Murdock August 1993 Ian Jackson January 1998 $ awk '($2=="Perens") { print $3,$4 }' <DPL # When Perens started April 1996
Shells such as Bash can also be used to parse this kind of file.
For example, try the following
$ while read first last month year; do echo $month done <DPL ... same output as the first Awk example
Here, the read
builtin command uses characters in "$IFS
" (internal field separators) to split lines into words.
If you change "$IFS
" to ":
", you can parse "/etc/passwd
" with shell nicely.
$ oldIFS="$IFS" # save old value $ IFS=':' $ while read user password uid gid rest_of_line; do if [ "$user" = "bozo" ]; then echo "$user's ID is $uid" fi done < /etc/passwd bozo's ID is 1000 $ IFS="$oldIFS" # restore old value
(If Awk is used to do the equivalent, use "FS=':'
" to set the field separator.)
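The Awk equivalent of the IFS-based passwd parsing above can be sketched on made-up sample data: "-F:" (equivalent to setting "FS=':'") makes ":" the field separator, and "$3" is the uid field.

```shell
# Print the uid of the account "bozo" from passwd-style data.
printf 'root:x:0:0::/root:/bin/sh\nbozo:x:1000:1000::/home/bozo:/bin/bash\n' |
  awk -F: '($1=="bozo") { print $3 }'
```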
IFS is also used by the shell to split results of parameter expansion, command substitution, and arithmetic expansion. These do not occur within double or single quoted words. The default value of IFS is <space>, <tab>, and <newline> combined.
Be careful about using these shell IFS tricks. Strange things may happen when the shell interprets some parts of the script as its input.
$ IFS=":," # use ":" and "," as IFS $ echo IFS=$IFS, IFS="$IFS" # echo is a Bash builtin IFS= , IFS=:, $ date -R # just a command output Sat, 23 Aug 2003 08:30:15 +0200 $ echo $(date -R) # sub shell --> input to main shell Sat 23 Aug 2003 08 30 36 +0200 $ unset IFS # reset IFS to the default $ echo $(date -R) Sat, 23 Aug 2003 08:30:50 +0200
The following scripts do nice things as a part of a pipe.
Table 1.26. List of script snippets for piping commands
script snippet (type in one line) | effect of command
---|---
"find /usr -print" | find all files under "/usr"
"seq 1 100" | print 1 to 100
"| xargs -n 1 <command>" | run command repeatedly with each item from pipe as its argument
"| xargs -n 1 echo" | split white-space-separated items from pipe into lines
"| xargs echo" | merge all lines from pipe into a line
"| grep -e <regex_pattern>" | extract lines from pipe containing <regex_pattern>
"| grep -v -e <regex_pattern>" | extract lines from pipe not containing <regex_pattern>
"| cut -d: -f3 -" | extract third field from pipe separated by ":" (passwd file etc.)
"| awk '{ print $3 }'" | extract third field from pipe separated by whitespaces
"| awk -F'\t' '{ print $3 }'" | extract third field from pipe separated by tab
"| col -bx" | remove backspace and expand tabs to spaces
"| expand -" | expand tabs
"| sort | uniq" | sort and remove duplicates
"| tr 'A-Z' 'a-z'" | convert uppercase to lowercase
"| tr -d '\n'" | concatenate lines into one line
"| tr -d '\r'" | remove CR
"| sed 's/^/# /'" | add "# " to the start of each line
"| sed 's/\.ext//g'" | remove ".ext"
"| sed -n -e 2p" | print the second line
"| head -n 2 -" | print the first 2 lines
"| tail -n 2 -" | print the last 2 lines
A one-line shell script can loop over many files using find
(1) and xargs
(1) to perform quite complicated tasks. See Section 10.1.5, “Idioms for the selection of files” and Section 9.5.9, “Repeating a command looping over files”.
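The find(1) + xargs(1) pattern can be sketched on a made-up scratch directory: run a command once per matching file, with "-print0"/"-0" keeping filenames containing whitespace safe.

```shell
# Create a few sample files, then echo "found" once per *.txt file.
mkdir -p junk
touch junk/a.txt junk/b.txt junk/c.log
find junk -name '*.txt' -print0 | xargs -0 -n 1 echo found
rm -rf junk
```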
When using the shell interactive mode becomes too complicated, please consider writing a shell script (see Section 12.1, “The shell script”).
This chapter is written assuming the latest stable release is codename: squeeze
.
Debian is a volunteer organization which builds consistent distributions of pre-compiled binary packages of free software and distributes them from its archive.
The Debian archive is offered by many remote mirror sites for access through HTTP and FTP methods. It is also available as CD-ROM/DVD.
The Debian package management system, when used properly, enables the user to install consistent sets of binary packages to the system from the archive. Currently, there are 30552 packages available for the amd64 architecture.
The Debian package management system has a rich history and many choices for the front-end user program and back-end archive access method. Currently, we recommend the following.
apt-get
(8) for all commandline operations, including package installation and removal, and dist-upgrades.
aptitude
(8) for an interactive text interface to manage the installed packages and to search the available packages.
update-manager
(8) for keeping your system up-to-date if you're running the default GNOME desktop.
Table 2.1. List of Debian package management tools
package | popcon | size | description
---|---|---|---
apt * | V:90, I:99 | 5600 | Advanced Packaging Tool (APT), front-end for dpkg providing "http", "ftp", and "file" archive access methods (apt-get/apt-cache commands included)
aptitude * | V:25, I:98 | 11916 | interactive terminal-based package manager with aptitude(8)
update-manager-gnome * | V:7, I:10 | 1221 | GNOME application that manages software updates with update-manager(8)
tasksel * | V:5, I:93 | 904 | tool for selecting tasks for installation on the Debian system (front-end for APT)
unattended-upgrades * | V:4, I:31 | 280 | enhancement package for APT to enable automatic installation of security upgrades
dselect * | V:2, I:30 | 2404 | terminal-based package manager (previous standard, front-end for APT and other old access methods)
dpkg * | V:92, I:99 | 6804 | package management system for Debian
synaptic * | V:13, I:40 | 6464 | graphical package manager (GNOME front-end for APT)
apt-utils * | V:51, I:99 | 516 | APT utility programs: apt-extracttemplates(1), apt-ftparchive(1), and apt-sortpkgs(1)
apt-listchanges * | V:11, I:17 | 280 | package change history notification tool
apt-listbugs * | V:1.4, I:2 | 508 | lists critical bugs before each APT installation
apt-file * | V:2, I:9 | 188 | APT package searching utility — command-line interface
apt-rdepends * | V:0.13, I:0.9 | 92 | recursively lists package dependencies
Here are some key points for package configuration on the Debian system.
The package configuration system using debconf
(7) helps the initial installation process of the package.
Do not install packages from a random mixture of suites. Doing so probably breaks the package consistency in a way that requires deep system management knowledge, such as compiler ABI, library version, interpreter features, etc., to fix.
The newbie Debian system administrator should stay with the stable release of Debian while applying only security updates. I mean that some of the following valid actions are better avoided, as a precaution, until you understand the Debian system very well. Here are some reminders.
- Do not include testing or unstable in "/etc/apt/sources.list".
- Do not mix standard Debian with other non-Debian archives in "/etc/apt/sources.list".
- Do not create "/etc/apt/preferences".
- Do not install random packages with "dpkg -i <random_package>".
- Never install random packages with "dpkg --force-all -i <random_package>".
- Do not erase or alter files in "/var/lib/dpkg/".
- Do not overwrite system files by installing software programs directly compiled from source.
- Install them into "/usr/local" or "/opt", if needed.
The non-compatible effects caused by the above actions to the Debian package management system may leave your system unusable.
The serious Debian system administrator who runs mission-critical servers should use extra precautions.
Do not install any packages including security updates from Debian without thoroughly testing them with your particular configuration under safe conditions.
Despite my warnings above, I know many readers of this document wish to run the testing
or unstable
suites of Debian as their main system for self-administered Desktop environments. This is because they work very well, are updated frequently, and offer the latest features.
For your production server, the stable
suite with the security updates is recommended. The same can be said for desktop PCs on which you can spend only limited administration effort, e.g. your mother's PC.
It takes no more than simply setting the distribution string in the "/etc/apt/sources.list
" to the suite name: "testing
" or "unstable
"; or the codename: "wheezy
" or "sid
". This makes you live the life of eternal upgrades.
The use of testing
or unstable
is a lot of fun but comes with some risks. Even though the unstable
suite of the Debian system looks very stable most of the time, there have been some package problems on the testing
and unstable
suites of the Debian system, and a few of them were not so trivial to resolve. It may be quite painful for you. Sometimes, you may have a broken package or missing functionality for a few weeks.
Here are some ideas to ensure quick and easy recovery from bugs in Debian packages.
- Make the system dual bootable by installing the stable suite of the Debian system to another partition.
- Install apt-listbugs to check the Debian Bug Tracking System (BTS) information before the upgrade.
(If you cannot do any of these precautionary actions, you are probably not ready for the testing
and unstable
suites.)
Enlightenment with the following saves a person from the eternal karmic struggle of upgrade hell and lets him reach Debian nirvana.
Let's look into the Debian archive from a system user's perspective.
Official policy of the Debian archive is defined at Debian Policy Manual, Chapter 2 - The Debian Archive.
For the typical HTTP access, the archive is specified in the "/etc/apt/sources.list
" file as the following, e.g. for the current stable
= squeeze
system.
deb http://ftp.XX.debian.org/debian/ squeeze main contrib non-free deb-src http://ftp.XX.debian.org/debian/ squeeze main contrib non-free deb http://security.debian.org/ squeeze/updates main contrib deb-src http://security.debian.org/ squeeze/updates main contrib
Please note "ftp.XX.debian.org
" must be replaced with an appropriate mirror site URL for your location, e.g. for the USA "ftp.us.debian.org
", which can be found in the list of Debian worldwide mirror sites. The status of these servers can be checked at the Debian Mirror Checker site.
Here, I tend to use the codename "squeeze
" instead of suite name "stable
" to avoid surprises when the next stable
is released.
The meaning of "/etc/apt/sources.list
" is described in sources.list
(5) and the key points are as follows.
The "deb" line defines the archive for binary packages.
The "deb-src" line defines the archive for source packages.
The "deb-src
" lines can safely be omitted (or commented out by placing "#" at the start of the line) if it is just for aptitude
which does not access source related meta data. It speeds up the updates of the archive meta data. The URL can be "http://
", "ftp://
", "file://
", ….
If "sid
" is used in the above example instead of "squeeze
", the "deb: http://security.debian.org/ …
" line for security updates in the "/etc/apt/sources.list
" is not required. This is because there is no security update archive for "sid
" (unstable
).
Here is the list of URL of the Debian archive sites and suite name or codename used in the configuration file.
Table 2.2. List of Debian archive sites
archive URL | suite name (codename) | purpose
---|---|---
http://ftp.XX.debian.org/debian/ | stable (squeeze) | stable (squeeze) release
http://ftp.XX.debian.org/debian/ | testing (wheezy) | testing (wheezy) release
http://ftp.XX.debian.org/debian/ | unstable (sid) | unstable (sid) release
http://ftp.XX.debian.org/debian/ | experimental | experimental pre-release (optional, only for developers)
http://ftp.XX.debian.org/debian/ | stable-proposed-updates | updates for the next stable point release (optional)
http://security.debian.org/ | stable/updates | security updates for the stable release (important)
http://security.debian.org/ | testing/updates | security updates for the testing release (important)
http://volatile.debian.org/debian-volatile/ | volatile | compatible updates for spam filter, IM clients, etc.
http://volatile.debian.org/debian-volatile/ | volatile-sloppy | non-compatible updates for spam filter, IM clients, etc.
http://backports.debian.org/debian-backports/ | squeeze-backports | newer backported packages for squeeze (official, optional)
Only the pure stable
release with security updates provides the best stability. Running mostly stable
release mixed with some packages from testing
or unstable
release is riskier than running pure unstable
release due to library version mismatches, etc. If you really need the latest version of some programs under the stable
release, please use packages from the debian-volatile project and http://backports.debian.org (see Section 2.7.4, “Volatile and Backports”) services. These services must be used with extra care.
You should basically list only one of stable
, testing
, or unstable
suites in the "deb
" line. If you list any combination of stable
, testing
, and unstable
suites in the "deb
" line, APT programs slow down while only the latest archive is effective. Multiple listing makes sense for these when the "/etc/apt/preferences
" file is used with clear objectives (see Section 2.7.3, “Tweaking candidate version”).
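As a sketch only (see apt_preferences(5) and Section 2.7.3 before using anything like this), a minimal "/etc/apt/preferences" entry for such a mixed listing might look like the following; the priority value 100 is an illustrative choice:

```
Package: *
Pin: release a=unstable
Pin-Priority: 100
```

With a pin priority below the default of 500, APT prefers the stable version of each package unless you explicitly request the unstable one (e.g. with "apt-get -t unstable install <package>").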
For the Debian system with the stable
and testing
suites, it is a good idea to include lines with "http://security.debian.org/
" in the "/etc/apt/sources.list
" to enable security updates as in the example above.
The security bugs for the stable
archive are fixed by the Debian security team. This activity has been quite rigorous and reliable. Those for the testing
archive may be fixed by the Debian testing security team. For several reasons, this activity is not as rigorous as that for stable
and you may need to wait for the migration of fixed unstable
packages. Those for the unstable
archive are fixed by the individual maintainer. Actively maintained unstable
packages are usually in a fairly good shape by leveraging latest upstream security fixes. See Debian security FAQ for how Debian handles security bugs.
Table 2.3. List of Debian archive area
area | number of packages | criteria of package component
---|---|---
main | 29887 | DFSG compliant and no dependency on non-free
contrib | 202 | DFSG compliant but having dependency on non-free
non-free | 463 | not DFSG compliant
Here the number of packages in the above is for the amd64 architecture. Strictly speaking, only the main
area archive shall be considered as the Debian system.
The Debian archive organization can be studied best by pointing your browser to each archive URL appended with dists
or pool
.
The distribution is referred to in two ways, by the suite or by the codename. The word distribution is alternatively used as a synonym for the suite in many documents. The relationship between the suite and the codename can be summarized as follows.
Table 2.4. The relationship between suite and codename
Timing | suite = stable | suite = testing | suite = unstable
---|---|---|---
after the squeeze release | codename = squeeze | codename = wheezy | codename = sid
after the wheezy release | codename = wheezy | codename = wheezy+1 | codename = sid
The history of codenames is described in Debian FAQ: 6.3.1 Which other codenames have been used in the past?
In the stricter Debian archive terminology, the word "section" is specifically used for the categorization of packages by the application area. (Although, the word "main section" may sometimes be used to describe the Debian archive area named as "main".)
Every time a new upload is done by the Debian developer (DD) to the unstable
archive (via incoming processing), the DD is required to ensure that the uploaded packages are compatible with the latest set of packages in the latest unstable
archive.
If a DD breaks this compatibility intentionally for an important library upgrade etc., there is usually an announcement to the debian-devel mailing list etc.
Before a set of packages is moved by the Debian archive maintenance script from the unstable archive to the testing archive, the script not only checks the maturity (about 10 days old) and the status of the RC bug reports for the packages but also tries to ensure they are compatible with the latest set of packages in the testing archive. This process makes the testing archive very current and usable.
Through the gradual archive freeze process led by the release team, the testing archive is matured, with some manual interventions, to make it completely consistent and bug free. Then the new stable release is created by assigning the codename of the old testing archive to the new stable archive and creating a new codename for the new testing archive. The initial contents of the new testing archive are exactly the same as those of the newly released stable archive.
Both the unstable and the testing archives may suffer temporary glitches due to several factors.
Broken package uploads to the archive (mostly for unstable)
Delays in accepting new packages into the archive (mostly for unstable)
Archive timing synchronization issues (both for testing and unstable)
Manual interventions in the archive such as package removals (more for testing) etc.
So if you ever decide to use these archives, you should be able to fix or work around these kinds of glitches.
For a few months after a new stable release, most desktop users should use the stable archive with its security updates, even if they usually use the unstable or testing archives. During this transition period, neither the unstable nor the testing archive is good for most people. Your system is difficult to keep in good working condition with the unstable archive since it suffers surges of major upgrades for core packages. The testing archive is not useful either since it contains mostly the same content as the stable archive but without its security support (Debian testing-security-announce 2008-12). After a month or so, the unstable archive may be usable if you are careful.
When tracking the testing archive, a problem caused by a removed package is usually worked around by installing the corresponding package from the unstable archive, which is uploaded for the bug fix.
See Debian Policy Manual for archive definitions.
The Debian system offers a consistent set of binary packages through its versioned binary dependency declaration mechanism in the control file fields. Here is a somewhat oversimplified definition of them.
"Depends"
"Pre-Depends"
"Recommends"
"Suggests"
"Enhances"
"Breaks"
"Conflicts"
"Replaces"
"Provides"
Please note that defining "Provides", "Conflicts" and "Replaces" simultaneously for a virtual package is the sane configuration. This ensures that only one real package providing this virtual package can be installed at any one time.
The official definition including source dependency can be found in the Policy Manual: Chapter 7 - Declaring relationships between packages.
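These control file fields can be inspected for any installed package with "dpkg -s <package_name>". As a minimal sketch of the field format, here the "Depends:" field is pulled out of a sample control stanza (the package and field contents are invented for illustration):

```shell
# A sample binary package control stanza, in the same
# "Field: value" format dpkg -s prints (invented data).
control_sample='Package: foo
Version: 1.0-1
Depends: libc6 (>= 2.7), libbar1
Recommends: foo-doc'

# Extract the value of the Depends: field.
depends=$(printf '%s\n' "$control_sample" | sed -n 's/^Depends: //p')
echo "$depends"
# → libc6 (>= 2.7), libbar1
```

Against a real system, "dpkg -s bash | sed -n 's/^Depends: //p'" does the same for an installed package.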
Here is a summary of the simplified event flow of package management by APT.
Update ("aptitude update" or "apt-get update"):
Upgrade ("aptitude safe-upgrade" and "aptitude full-upgrade", or "apt-get upgrade" and "apt-get dist-upgrade"):
Install ("aptitude install …" or "apt-get install …"):
Remove ("aptitude remove …" or "apt-get remove …"):
Purge ("aptitude purge …" or "apt-get purge …"):
Here, I intentionally skipped technical details for the sake of the big picture.
You should read the fine official documentation. The first document to read is the Debian specific "/usr/share/doc/<package_name>/README.Debian". Other documentation in "/usr/share/doc/<package_name>/" should be consulted too. If you set up your shell as in Section 1.4.2, “Customizing bash”, type the following.
$ cd <package_name>
$ pager README.Debian
$ mc
You may need to install the corresponding documentation package named with the "-doc" suffix for detailed information.
If you are experiencing problems with a specific package, make sure to check out the Debian bug tracking system (BTS) sites first.
Table 2.5. List of key web sites for resolving problems with a specific package
web site | command
---|---
Home page of the Debian bug tracking system (BTS) | sensible-browser "http://bugs.debian.org/"
The bug report of a known package name | sensible-browser "http://bugs.debian.org/<package_name>"
The bug report of a known bug number | sensible-browser "http://bugs.debian.org/<bug_number>"
Search Google with search words including "site:debian.org", "site:wiki.debian.org", "site:lists.debian.org", etc.
When you file a bug report, please use the reportbug(1) command.
Basic package management operations on the Debian system can be performed by any package management tool available on the Debian system. Here, we explain the basic package management tools: apt-get / apt-cache and aptitude.
For package management operations which involve package installation or updating of package metadata, you need root privilege.
The apt-get and apt-cache commands are the most basic package management tools.
apt-get and apt-cache offer only the commandline user interface.
apt-get is most suitable for the major system upgrade between releases, etc.
apt-get offers a robust and stable package resolver which uses the common package state data.
apt-get has been updated to support autoinstall and autoremove of recommended packages.
apt-get has been updated to support logging of package activities.
apt-cache offers a standard regex based search on the package name and description.
apt-get and apt-cache can manage multiple versions of packages using /etc/apt/preferences but it is quite cumbersome.
The aptitude command is the most versatile package management tool.
aptitude offers the fullscreen interactive text user interface.
aptitude offers the commandline user interface, too.
aptitude is most suitable for daily interactive package management such as inspecting installed packages and searching available packages.
aptitude offers an enhanced package resolver which also uses extra package state data used only by aptitude.
aptitude supports autoinstall and autoremove of recommended packages.
aptitude supports logging of package activities.
aptitude offers an enhanced regex based search on all of the package metadata.
aptitude can manage multiple versions of packages without using /etc/apt/preferences and it is quite intuitive.
Although the aptitude command comes with rich features such as its enhanced package resolver, this complexity has caused (or may still cause) some regressions such as Bug #411123, Bug #514930, and Bug #570377. In case of doubt, please use the apt-get and apt-cache commands over the aptitude command.
Here are basic package management operations with the commandline using aptitude(8) and apt-get(8) / apt-cache(8).
Table 2.6. Basic package management operations with the commandline using aptitude(8) and apt-get(8) / apt-cache(8)
aptitude syntax | apt-get / apt-cache syntax | description
---|---|---
aptitude update | apt-get update | update package archive metadata
aptitude install foo | apt-get install foo | install candidate version of "foo" package with its dependencies
aptitude safe-upgrade | apt-get upgrade | install candidate version of installed packages without removing any other packages
aptitude full-upgrade | apt-get dist-upgrade | install candidate version of installed packages while removing other packages if needed
aptitude remove foo | apt-get remove foo | remove "foo" package while leaving its configuration files
N/A | apt-get autoremove | remove auto-installed packages which are no longer required
aptitude purge foo | apt-get purge foo | purge "foo" package with its configuration files
aptitude clean | apt-get clean | clear out the local repository of retrieved package files completely
aptitude autoclean | apt-get autoclean | clear out the local repository of retrieved package files for outdated packages
aptitude show foo | apt-cache show foo | display detailed information about "foo" package
aptitude search <regex> | apt-cache search <regex> | search packages which match <regex>
aptitude why <regex> | N/A | explain the reason why <regex> matching packages should be installed
aptitude why-not <regex> | N/A | explain the reason why <regex> matching packages can not be installed
Since apt-get and aptitude share the auto-installed package status (see Section 2.5.5, “The package state for APT”) after lenny, you can mix these tools without major trouble (see Bug #594490).
The difference between "safe-upgrade"/"upgrade" and "full-upgrade"/"dist-upgrade" only appears when new versions of packages stand in different dependency relationships from old versions of those packages. The "aptitude safe-upgrade" command does not install new packages nor remove installed packages.
The "aptitude why <regex>" command can list more information with "aptitude -v why <regex>". Similar information can be obtained with "apt-cache rdepends <package>".
When the aptitude command is started in the commandline mode and faces some issue such as package conflicts, you can switch to the full screen interactive mode by pressing the "e" key later at the prompt.
You may provide command options right after "aptitude".
Table 2.7. Notable command options for aptitude(8)
command option | description
---|---
-s | simulate the result of the command
-d | download only but no install/upgrade
-D | show brief explanations before the automatic installations and removals
See aptitude(8) and the "aptitude user's manual" at "/usr/share/doc/aptitude/README" for more.
The dselect package is still available and was the preferred full screen interactive package management tool in previous releases.
For interactive package management, you start aptitude in interactive mode from the console shell prompt as follows.
$ sudo aptitude -u
Password:
This updates the local copy of the archive information and displays the package list in the full screen with menu. Aptitude places its configuration at "~/.aptitude/config".
If you want to use root's configuration instead of the user's one, use "sudo -H aptitude …" instead of "sudo aptitude …" in the above expression.
Aptitude automatically sets pending actions as it is started interactively. If you do not like them, you can reset them from the menu: "Action" → "Cancel pending actions".
Notable key strokes to browse status of packages and to set "planned action" on them in this full screen mode are the following.
Table 2.8. List of key bindings for aptitude
key | key binding
---|---
F10 or Ctrl-t | menu
? | display help for keystroke (more complete listing)
F10 → Help → User's Manual | display User's Manual
u | update package archive information
+ | mark the package for the upgrade or the install
- | mark the package for the remove (keep configuration files)
_ | mark the package for the purge (remove configuration files)
= | place the package on hold
U | mark all upgradable packages (function as full-upgrade)
g | start downloading and installing selected packages
q | quit current screen and save changes
x | quit current screen and discard changes
Enter | view information about a package
C | view a package's changelog
l | change the limit for the displayed packages
/ | search for the first match
\ | repeat the last search
The file name specification of the command line and the menu prompt after pressing "l" and "/" take the aptitude regex as described below. The aptitude regex can explicitly match a package name using a string starting with "~n" followed by the package name.
You need to press "U" to get all the installed packages upgraded to the candidate version in the visual interface. Otherwise only the selected packages and certain packages with a versioned dependency on them are upgraded to the candidate version.
In the interactive full screen mode of aptitude(8), packages in the package list are displayed as in the next example.
idA libsmbclient -2220kB 3.0.25a-1 3.0.25a-2
Here, this line means, from the left, the following.
The full list of flags is given at the bottom of the Help screen shown by pressing "?".
The candidate version is chosen according to the current local preferences (see apt_preferences(5) and Section 2.7.3, “Tweaking candidate version”).
Several types of package views are available under the menu "Views".
Table 2.9. List of views for aptitude
view | status | description of view
---|---|---
Package View | Good | see Table 2.10, “The categorization of standard package views” (default)
Audit Recommendations | Good | list packages which are recommended by some installed packages but not yet installed
Flat Package List | Good | list packages without categorization (for use with regex)
Debtags Browser | Very usable | list packages categorized according to their debtags entries
Categorical Browser | Deprecated | list packages categorized according to their category (use Debtags Browser instead)
Please help us improve tagging packages with debtags!
The standard "Package View" categorizes packages somewhat like dselect with a few extra features.
Table 2.10. The categorization of standard package views
category | description of view
---|---
Upgradable Packages | list packages organized as section → area → package
New Packages | , ,
Installed Packages | , ,
Not Installed Packages | , ,
Obsolete and Locally Created Packages | , ,
Virtual Packages | list packages with the same function
Tasks | list packages with different functions generally needed for a task
The "Tasks" view can be used to cherry pick packages for your task.
Aptitude offers several ways for you to search packages using its regex formula.
Shell commandline:
"aptitude search '<aptitude_regex>'" to list installation status, package name and short description of matching packages
"aptitude show '<package_name>'" to list the detailed description of the package
Interactive full screen mode:
"l" to limit the package view to matching packages
"/" for search to a matching package
"\" for backward search to a matching package
"n" for find-next
"N" for find-next (backward)
The string for <package_name> is treated as the exact string match to the package name unless it starts explicitly with "~" to be a regex formula.
The aptitude regex formula is mutt-like extended ERE (see Section 1.6.2, “Regular expressions”) and the meanings of the aptitude specific special match rule extensions are as follows.
Table 2.11. List of the aptitude regex formula
description of the extended match rule | regex formula
---|---
match on package name | ~n<regex_name>
match on description | ~d<regex_description>
match on task name | ~t<regex_task>
match on debtag | ~G<regex_debtag>
match on maintainer | ~m<regex_maintainer>
match on package section | ~s<regex_section>
match on package version | ~V<regex_version>
match archive | ~A{sarge,etch,sid}
match origin | ~O{debian,…}
match priority | ~p{extra,important,optional,required,standard}
match essential packages | ~E
match virtual packages | ~v
match new packages | ~N
match with pending action | ~a{install,upgrade,downgrade,remove,purge,hold,keep}
match installed packages | ~i
match installed packages with A-mark (auto installed package) | ~M
match installed packages without A-mark (administrator selected package) | ~i!~M
match installed and upgradable packages | ~U
match removed but not purged packages | ~c
match removed, purged or can-be-removed packages | ~g
match packages with broken relation | ~b
match packages with broken depends/predepends/conflict | ~B<type>
match packages which define relation <type> on the <term> package | ~D[<type>:]<term>
match packages which define broken relation <type> on the <term> package | ~DB[<type>:]<term>
match packages to which the <term> package defines relation <type> | ~R[<type>:]<term>
match packages to which the <term> package defines broken relation <type> | ~RB[<type>:]<term>
match packages on which some other installed package depends | ~R~i
match packages on which no other installed package depends | !~R~i
match packages on which some other installed package depends or recommends | ~R~i|~Rrecommends:~i
match <term> package with filtered version | ~S filter <term>
match all packages (true) | ~T
match no packages (false) | ~F
The regex part uses the same extended regular expression syntax with "^", ".*", "$" etc. as in egrep(1), awk(1) and perl(1).
When <regex_pattern> is a null string, place "~T" immediately after the command.
Here are some shortcuts.
"~P<term>" == "~Dprovides:<term>"
"~C<term>" == "~Dconflicts:<term>"
"…~W term" == "(…|term)"
Users familiar with mutt pick this up quickly, as mutt was the inspiration for the expression syntax. See "SEARCHING, LIMITING, AND EXPRESSIONS" in the "User's Manual" at "/usr/share/doc/aptitude/README".
With the lenny version of aptitude(8), the new long form syntax such as "?broken" may be used for regex matching in place of its old short form equivalent "~b". Now the space character " " is considered one of the regex terminating characters in addition to the tilde character "~". See the "User's Manual" for the new long form syntax.
The selection of a package in aptitude pulls in not only the packages defined in its "Depends:" list but also those defined in its "Recommends:" list if the menu "F10 → Options → Dependency handling" is set accordingly. These auto installed packages are removed automatically if they are no longer needed under aptitude.
Before the lenny release, apt-get and other standard APT tools did not offer the autoremove functionality.
You can check package activity history in the log files.
Table 2.12. The log files for package activities
file | content
---|---
/var/log/dpkg.log | log of dpkg level activity for all package activities
/var/log/apt/term.log | log of generic APT activity
/var/log/aptitude | log of aptitude command activity
In reality, it is not so easy to get a meaningful understanding quickly out of these logs. See Section 9.2.10, “Recording changes in configuration files” for an easier way.
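As a sketch of pulling something useful out of "/var/log/dpkg.log" anyway, here is an awk one-liner run against sample log lines (package names invented; the exact field layout varies between dpkg versions, so treat the field positions as an assumption):

```shell
# Sample lines in the general format of /var/log/dpkg.log
# (date time action package version fields; data invented).
log_sample='2010-12-01 10:00:00 install foo:amd64 <none> 1.0-1
2010-12-01 10:00:05 status installed foo:amd64 1.0-1
2010-12-02 09:00:00 remove bar:amd64 2.0-1 <none>'

# Keep only the install/remove actions: date, action, package.
actions=$(printf '%s\n' "$log_sample" | \
    awk '$3 == "install" || $3 == "remove" {print $1, $3, $4}')
echo "$actions"
# → 2010-12-01 install foo:amd64
# → 2010-12-02 remove bar:amd64
```

The same filter applied to the real file would be "awk '$3 == \"install\" || $3 == \"remove\"' /var/log/dpkg.log".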
Here are a few examples of aptitude(8) operations.
The following command lists packages with regex matching on package names.
$ aptitude search '~n(pam|nss).*ldap'
p libnss-ldap - NSS module for using LDAP as a naming service
p libpam-ldap - Pluggable Authentication Module allowing LDAP interfaces
This is quite handy for you to find the exact name of a package.
The regex "~dipv6" in the "New Flat Package List" view with the "l" prompt limits the view to packages with a matching description and lets you browse their information interactively.
You can purge all remaining configuration files of removed packages.
Check results of the following command.
# aptitude search '~c'
If you think the listed packages are OK to be purged, execute the following command.
# aptitude purge '~c'
You may want to do something similar in the interactive mode for fine grained control.
You provide the regex "~c" in the "New Flat Package List" view with the "l" prompt. This limits the package view only to regex matched packages, i.e., "removed but not purged". All these regex matched packages can be shown by pressing "[" at the top level headings.
Then you press "_" at top level headings such as "Installed Packages". Only regex matched packages under the heading are marked to be purged by this. You can exclude some packages from being purged by pressing "=" interactively for each of them.
This technique is quite handy and works for many other command keys.
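At dpkg level, the "removed but not purged" state roughly corresponds to the "deinstall" selection state. A sketch filtering it out of sample "dpkg --get-selections"-style output (package names invented; the real command needs a live dpkg database):

```shell
# Sample output in the format of "dpkg --get-selections '*'"
# (package and selection state; data invented).
selections_sample='foo install
bar deinstall
baz install'

# "deinstall" marks packages removed while their configuration remains.
removed=$(printf '%s\n' "$selections_sample" | awk '$2 == "deinstall" {print $1}')
echo "$removed"
# → bar
```

On a real system, "dpkg --get-selections '*' | awk '$2 == \"deinstall\"'" gives a dpkg-level cross-check of what "aptitude search '~c'" reports.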
Here is how I tidy the auto/manual install status for packages (after using a non-aptitude package installer etc.).
Start aptitude in interactive mode as root.
Type "u", "U", "f" and "g" to update and upgrade the package list and packages.
Type "l" to enter the package display limit as "~i(~R~i|~Rrecommends:~i)" and type "M" over "Installed Packages" to mark them as auto installed.
Type "l" to enter the package display limit as "~prequired|~pimportant|~pstandard|~E" and type "m" over "Installed Packages" to mark them as manual installed.
Type "l" to enter the package display limit as "~i!~M" and remove unused packages by typing "-" over each of them after exposing them by typing "[" over "Installed Packages".
Type "l" to enter the package display limit as "~i" and type "m" over "Tasks" to mark them as manual installed.
Exit aptitude.
Run "apt-get -s autoremove|less" as root to check what is not used.
Start aptitude in interactive mode and mark needed packages as "m".
Run "apt-get -s autoremove|less" as root to recheck that REMOVED contains only expected packages.
Run "apt-get autoremove|less" as root to autoremove unused packages.
The "m" action over "Tasks" is an optional one to prevent a mass package removal situation in the future.
When moving to a new release etc., you should consider performing a clean installation of the new system even though Debian is upgradable as described below. This provides you a chance to remove accumulated garbage and exposes you to the best combination of the latest packages. Of course, you should make a full backup of the system to a safe place (see Section 10.1.6, “Backup and recovery”) before doing this. I recommend making a dual boot configuration using a different partition to have the smoothest transition.
You can perform a system wide upgrade to a newer release by changing the contents of the "/etc/apt/sources.list" file to point to a new release and running the "apt-get update; apt-get dist-upgrade" command.
To upgrade from stable to testing or unstable, you replace "squeeze" in the "/etc/apt/sources.list" example of Section 2.1.4, “Debian archive basics” with "wheezy" or "sid".
In reality, you may face some complications due to package transition issues, mostly due to package dependencies. The larger the difference of the upgrade, the more likely you are to face larger troubles. For the transition from the old stable to the new stable after its release, you can read its new Release Notes and follow the exact procedure described in it to minimize troubles.
When you decide to move from stable to testing before its formal release, there are no Release Notes to help you. The difference between stable and testing could have grown quite large after the previous stable release, which makes the upgrade situation complicated.
You should make precautionary moves for the full upgrade while gathering the latest information from mailing lists and using common sense.
Record the session with script(1).
Unmark the auto-installed status of needed packages, e.g., "aptitude unmarkauto vim", to prevent their removal.
Remove the "/etc/apt/preferences" file (disable apt-pinning).
Upgrade step wise: oldstable → stable → testing → unstable.
Update the "/etc/apt/sources.list" file to point to the new archive only and run "aptitude update".
Install, optionally, new core packages first, e.g., "aptitude install perl".
Run the "apt-get -s dist-upgrade" command to assess the impact.
Run the "apt-get dist-upgrade" command at last.
It is not wise to skip a major Debian release when upgrading between stable releases.
In previous "Release Notes", GCC, the Linux kernel, initrd-tools, Glibc, Perl, the APT tool chain, etc. have required some special attention for a system wide upgrade.
For daily upgrades in unstable, see Section 2.4.3, “Safeguarding for package problems”.
Here is a list of other package management operations for which aptitude is too high-level or lacks required functionality.
Table 2.13. List of advanced package management operations
command | action
---|---
COLUMNS=120 dpkg -l <package_name_pattern> | list status of an installed package for the bug report
dpkg -L <package_name> | list contents of an installed package
dpkg -L <package_name> | egrep '/usr/share/man/man.*/.+' | list manpages for an installed package
dpkg -S <file_name_pattern> | list installed packages which have matching file names
apt-file search <file_name_pattern> | list packages in the archive which have matching file names
apt-file list <package_name_pattern> | list contents of matching packages in the archive
dpkg-reconfigure <package_name> | reconfigure the exact package
dpkg-reconfigure -p=low <package_name> | reconfigure the exact package with the most detailed questions
configure-debian | reconfigure packages from the full screen menu
dpkg --audit | audit system for partially installed packages
dpkg --configure -a | configure all partially installed packages
apt-cache policy <binary_package_name> | show available version, priority, and archive information of a binary package
apt-cache madison <package_name> | show available version and archive information of a package
apt-cache showsrc <binary_package_name> | show source package information of a binary package
apt-get build-dep <package_name> | install required packages to build a package
apt-get source <package_name> | download a source (from the standard archive)
dget <URL for dsc file> | download source packages (from another archive)
dpkg-source -x <package_name>_<version>-<debian_version>.dsc | build a source tree from a set of source packages ("*.orig.tar.gz" and "*.debian.tar.gz"/"*.diff.gz")
debuild binary | build package(s) from a local source tree
make-kpkg kernel_image | build a kernel package from a kernel source tree
make-kpkg --initrd kernel_image | build a kernel package from a kernel source tree with initramfs enabled
dpkg -i <package_name>_<version>-<debian_version>_<arch>.deb | install a local package to the system
debi <package_name>_<version>-<debian_version>_<arch>.dsc | install local package(s) to the system
dpkg --get-selections '*' > selection.txt | save dpkg level package selection state information
dpkg --set-selections < selection.txt | set dpkg level package selection state information
echo <package_name> hold | dpkg --set-selections | set dpkg level package selection state for a package to hold (equivalent to "aptitude hold <package_name>")
Lower level package tools such as "dpkg -i …" and "debi …" should be used carefully by the system administrator. They do not automatically take care of required package dependencies. Dpkg's commandline options "--force-all" and similar (see dpkg(1)) are intended to be used by experts only. Using them without fully understanding their effects may break your whole system.
Please note the following.
Unlike aptitude which uses regex (see Section 1.6.2, “Regular expressions”), other package management commands use patterns like the shell glob (see Section 1.5.6, “Shell glob”).
apt-file(1) provided by the apt-file package must run "apt-file update" in advance.
configure-debian(8) provided by the configure-debian package runs dpkg-reconfigure(8) as its backend.
dpkg-reconfigure(8) runs package scripts using debconf(1) as its backend.
The "apt-get build-dep", "apt-get source" and "apt-cache showsrc" commands require a "deb-src" entry in "/etc/apt/sources.list".
dget(1), debuild(1), and debi(1) require the devscripts package.
See the (re)packaging procedure using "apt-get source" in Section 2.7.10, “Porting a package to the stable system”.
The make-kpkg command requires the kernel-package package (see Section 9.7, “The kernel”).
The installation of debsums enables verification of installed package files against the MD5sum values in the "/var/lib/dpkg/info/*.md5sums" files with debsums(1). See Section 10.4.5, “The MD5 sum” for how the MD5sum works.
Because the MD5sum database may be tampered with by an intruder, debsums(1) is of limited use as a security tool. It is only good for checking local modifications by the administrator or damage due to media errors.
Many users prefer to follow the unstable release of the Debian system for its new features and packages. This makes the system more prone to be hit by critical package bugs.
The installation of the apt-listbugs package safeguards your system against critical bugs by checking the Debian BTS automatically for critical bugs when upgrading with the APT system.
The installation of the apt-listchanges package provides important news in "NEWS.Debian" when upgrading with the APT system.
Although visiting the Debian site http://packages.debian.org/ provides easy ways to search the package metadata these days, let's look into more traditional ways.
The grep-dctrl(1), grep-status(1), and grep-available(1) commands can be used to search any file which has the general format of a Debian package control file.
The "dpkg -S <file_name_pattern>" command can be used to search for package names which contain files with the matching name installed by dpkg. But this overlooks files created by the maintainer scripts.
If you need to make a more elaborate search on the dpkg metadata, you need to run the "grep -e regex_pattern *" command in the "/var/lib/dpkg/info/" directory. This lets you search for words mentioned in package scripts and installation query texts.
If you wish to look up package dependencies recursively, you should use apt-rdepends(8).
Let's learn how the Debian package management system works internally. This should help you create your own solution to some package problems.
Metadata files for each distribution are stored under "dists/<codename>" on each Debian mirror site, e.g., "http://ftp.us.debian.org/debian/". Its archive structure can be browsed by the web browser. There are 6 types of key metadata.
Table 2.14. The content of the Debian archive meta data
file | location | content
---|---|---
Release | top of distribution | archive description and integrity information
Release.gpg | top of distribution | signature file for the "Release" file signed with the archive key
Contents-<architecture> | top of distribution | list of all files for all the packages in the pertinent archive
Release | top of each distribution/area/architecture combination | archive description used for the rule of apt_preferences(5)
Packages | top of each distribution/area/binary-architecture combination | concatenated debian/control for binary packages
Sources | top of each distribution/area/source combination | concatenated debian/control for source packages
In the recent archive, these metadata are stored as compressed and differential files to reduce network traffic.
The top level "Release" file is used for signing the archive under the secure APT system.
Each suite of the Debian archive has a top level "Release" file, e.g., "http://ftp.us.debian.org/debian/dists/unstable/Release", as follows.
Origin: Debian
Label: Debian
Suite: unstable
Codename: sid
Date: Sat, 26 Jan 2008 20:13:58 UTC
Architectures: alpha amd64 arm hppa hurd-i386 i386 ia64 m68k mips mipsel powerpc s390 sparc
Components: main contrib non-free
Description: Debian x.y Unstable - Not Released
MD5Sum:
 e9f11bc50b12af7927d6583de0a3bd06 22788722 main/binary-alpha/Packages
 43524d07f7fa21b10f472c426db66168 6561398 main/binary-alpha/Packages.gz
 ...
Here, you can find my rationale for using the "suite" and the "codename" in Section 2.1.4, “Debian archive basics”. The "distribution" is used when referring to both the "suite" and the "codename". All archive "area" names offered by the archive are listed under "Components".
The integrity of the top level "Release" file is verified by the cryptographic infrastructure called secure apt.
The cryptographic signature file "Release.gpg" is created from the authentic top level "Release" file and the secret Debian archive key.
The public Debian archive key can be seeded into "/etc/apt/trusted.gpg";
automatically, by installing the latest base-files package, or
manually, by the gpg or apt-key tool with the latest public archive key posted on ftp-master.debian.org.
The secure APT system verifies the downloaded top level "Release" file cryptographically with this "Release.gpg" file and the public Debian archive key in "/etc/apt/trusted.gpg".
The integrity of all the "Packages" and "Sources" files is verified by using the MD5sum values in the top level "Release" file. The integrity of all package files is verified by using the MD5sum values in the "Packages" and "Sources" files. See debsums(1) and Section 2.4.2, “Verification of installed package files”.
Since the cryptographic signature verification is a much more CPU intensive process than the MD5sum value calculation, the use of MD5sum values for each package while using the cryptographic signature for the top level "Release" file provides good security with good performance (see Section 10.4, “Data security infrastructure”).
The archive level "Release" files are used for the rule of apt_preferences(5).
There are archive level "Release" files for all archive locations specified by the "deb" lines in "/etc/apt/sources.list", such as "http://ftp.us.debian.org/debian/dists/unstable/main/binary-amd64/Release" or "http://ftp.us.debian.org/debian/dists/sid/main/binary-amd64/Release", as follows.
Archive: unstable Component: main Origin: Debian Label: Debian Architecture: amd64
For the "Archive:" stanza, suite names ("stable", "testing", "unstable", …) are used in the Debian archive while codenames ("dapper", "feisty", "gutsy", "hardy", "intrepid", …) are used in the Ubuntu archive.
For some archives, such as experimental
, volatile-sloppy
, and squeeze-backports
, which contain packages which should not be installed automatically, there is an extra line, e.g., "http://ftp.us.debian.org/debian/dists/experimental/main/binary-amd64/Release
" as follows.
Archive: experimental Component: main Origin: Debian Label: Debian NotAutomatic: yes Architecture: amd64
Please note that for normal archives without "NotAutomatic: yes
", the default Pin-Priority value is 500, while for special archives with "NotAutomatic: yes
", the default Pin-Priority value is 1 (see apt_preferences
(5) and Section 2.7.3, “Tweaking candidate version”).
When APT tools, such as aptitude
, apt-get
, synaptic
, apt-file
, auto-apt
…, are used, we need to update the local copies of the meta data containing the Debian archive information. These local copies have the following file names corresponding to the specified distribution
, area
, and architecture
names in the "/etc/apt/sources.list
" (see Section 2.1.4, “Debian archive basics”).
/var/lib/apt/lists/ftp.us.debian.org_debian_dists_<distribution>_Release
"
/var/lib/apt/lists/ftp.us.debian.org_debian_dists_<distribution>_Release.gpg
"
/var/lib/apt/lists/ftp.us.debian.org_debian_dists_<distribution>_<area>_binary-<architecture>_Packages
"
/var/lib/apt/lists/ftp.us.debian.org_debian_dists_<distribution>_<area>_source_Sources
"
/var/cache/apt/apt-file/ftp.us.debian.org_debian_dists_<distribution>_Contents-<architecture>.gz
" (for apt-file
)
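The mapping from a "deb" line to these file names can be sketched with a hypothetical helper that flattens the mirror URL and dists path the way APT does, replacing "/" with "_". This is an illustration only, not APT's actual code.

```shell
#!/bin/sh
# Sketch: derive the /var/lib/apt/lists/ file name for a Packages list
# from a mirror URL, distribution, area, and architecture.
list_name() {
  # $1: mirror URL, $2: distribution, $3: area, $4: architecture
  host_path=$(echo "$1" | sed -e 's|^http://||' -e 's|/$||' -e 's|/|_|g')
  echo "${host_path}_dists_$2_$3_binary-$4_Packages"
}

list_name http://ftp.us.debian.org/debian/ unstable main amd64
# expected: ftp.us.debian.org_debian_dists_unstable_main_binary-amd64_Packages
```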
The first 4 types of files are shared by all the pertinent APT commands and updated from the command line by "apt-get update" and "aptitude update". The "Packages" meta data are updated if there is a "deb" line in "/etc/apt/sources.list". The "Sources" meta data are updated if there is a "deb-src" line in "/etc/apt/sources.list".
The "Packages" and "Sources" meta data contain a "Filename:" stanza pointing to the file location of the binary and source packages. Currently, these packages are located under the "pool/" directory tree for the improved transition over the releases.
Local copies of "Packages
" meta data can be interactively searched with the help of aptitude
. The specialized search command grep-dctrl
(1) can search local copies of "Packages
" and "Sources
" meta data.
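In the absence of grep-dctrl(1), the stanza-wise search it performs can be illustrated with plain awk in paragraph mode (RS=""), which treats each blank-line-separated stanza as one record. The sample stanzas below are invented.

```shell
#!/bin/sh
# Sketch: paragraph-wise search of "Packages"-style meta data, roughly
# what "grep-dctrl -P bar" does, on an inline sample instead of the
# real files under /var/lib/apt/lists/.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
Package: foo
Version: 1.0-1
Description: sample package one

Package: bar
Version: 2.0-1
Description: sample package two
EOF

# Print the whole stanza whose Package field matches "bar".
match=$(awk -v RS='' '/^Package: bar\n/' "$tmp")
echo "$match"
```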
The local copy of "Contents-<architecture>" meta data can be updated by "apt-file update" and its location is different from those of the other 4 files. See apt-file(1). (The auto-apt command uses a different location for the local copy of "Contents-<architecture>.gz" by default.)
In addition to the remotely fetched meta data, the APT tool after lenny stores its locally generated installation state information in "/var/lib/apt/extended_states" which is used by all APT tools to track all automatically installed packages.
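A stanza in "/var/lib/apt/extended_states" looks roughly like the following (the package name is a placeholder, and the field layout may vary between APT versions).

```
Package: libfoo
Auto-Installed: 1
```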
In addition to the remotely fetched meta data, the aptitude
command stores its locally generated installation state information in the "/var/lib/aptitude/pkgstates
" which is used only by it.
All the packages remotely fetched via the APT mechanism are stored in "/var/cache/apt/archives/" until they are cleaned.
Debian package files have particular name structures.
Table 2.15. The name structure of Debian packages
package type | name structure |
---|---|
The binary package (a.k.a. deb)
|
<package-name>_<epoch>:<upstream-version>-<debian.version>-<architecture>.deb
|
The binary package (a.k.a. udeb)
|
<package-name>_<epoch>:<upstream-version>-<debian.version>-<architecture>.udeb
|
The source package (upstream source) |
<package-name>_<epoch>:<upstream-version>-<debian.version>.orig.tar.gz
|
The 1.0 source package (Debian changes)
|
<package-name>_<epoch>:<upstream-version>-<debian.version>.diff.gz
|
The 3.0 (quilt) source package (Debian changes)
|
<package-name>_<epoch>:<upstream-version>-<debian.version>.debian.tar.gz
|
The source package (description) |
<package-name>_<epoch>:<upstream-version>-<debian.version>.dsc
|
Here only the basic source package formats are described. See more on dpkg-source
(1).
Table 2.16. The usable characters for each component in the Debian package names
name component | usable characters (regex) | existence |
---|---|---|
<package-name>
|
[a-z,A-Z,0-9,.,
|
required |
<epoch>:
|
[0-9]+:
|
optional |
<upstream-version>
|
[a-z,A-Z,0-9,.,
|
required |
<debian.version>
|
[a-z,A-Z,0-9,.,
|
optional |
You can check the package version order with dpkg(1), e.g., "dpkg --compare-versions 7.0 gt 7.~pre1 ; echo $?".
The debian-installer (d-i) uses udeb as the file extension for its binary packages instead of the normal deb. An udeb package is a stripped down deb package which removes some non-essential contents such as documentation to save space while relaxing the package policy requirements. Both deb and udeb packages share the same package structure. The "u" stands for micro.
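The components of a binary package file name can be picked apart with plain shell parameter expansion; the file name "foo_1.2.3-4_amd64.deb" used here is a made-up example.

```shell
#!/bin/sh
# Sketch: split a binary package file name into name, version, and
# architecture.  The helper and example name are hypothetical.
parse_deb() {
  base=${1%.deb}
  name=${base%%_*}          # everything before the first "_"
  arch=${base##*_}          # everything after the last "_"
  version=${base#"${name}"_}
  version=${version%_"${arch}"}
  echo "name=$name version=$version arch=$arch"
}

parse_deb foo_1.2.3-4_amd64.deb
# expected: name=foo version=1.2.3-4 arch=amd64
```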
dpkg
(1) is the lowest level tool for the Debian package management. This is very powerful and needs to be used with care.
While installing a package called "<package_name>", dpkg processes it in the following order.
Unpacking of the deb file ("ar -x" equivalent)
Execution of "<package_name>.preinst" using debconf(1)
Installation of the package contents ("tar -x" equivalent)
Execution of "<package_name>.postinst" using debconf(1)
The debconf system provides standardized user interaction with I18N and L10N (Chapter 8, I18N and L10N) support.
Table 2.17. The notable files created by dpkg
file | description of contents |
---|---|
/var/lib/dpkg/info/<package_name>.conffiles
|
list of configuration files. (user modifiable) |
/var/lib/dpkg/info/<package_name>.list
|
list of files and directories installed by the package |
/var/lib/dpkg/info/<package_name>.md5sums
|
list of MD5 hash values for files installed by the package |
/var/lib/dpkg/info/<package_name>.preinst
|
package script run before the package installation |
/var/lib/dpkg/info/<package_name>.postinst
|
package script run after the package installation |
/var/lib/dpkg/info/<package_name>.prerm
|
package script run before the package removal |
/var/lib/dpkg/info/<package_name>.postrm
|
package script run after the package removal |
/var/lib/dpkg/info/<package_name>.config
|
package script for debconf system
|
/var/lib/dpkg/alternatives/<package_name>
|
the alternative information used by the update-alternatives command
|
/var/lib/dpkg/available
|
the availability information for all the packages |
/var/lib/dpkg/diversions
|
the diversion information used by dpkg(1) and set by dpkg-divert(8)
|
/var/lib/dpkg/statoverride
|
the stat override information used by dpkg(1) and set by dpkg-statoverride(8)
|
/var/lib/dpkg/status
|
the status information for all the packages |
/var/lib/dpkg/status-old
|
the first-generation backup of the "/var/lib/dpkg/status" file
|
/var/backups/dpkg.status*
|
the second-generation backup and older ones of the "/var/lib/dpkg/status" file
|
The "status
" file is also used by the tools such as dpkg
(1), "dselect update
" and "apt-get -u dselect-upgrade
".
The specialized search command grep-dctrl
(1) can search the local copies of "status
" and "available
" meta data.
In the debian-installer environment, the udpkg
command is used to open udeb
packages. The udpkg
command is a stripped down version of the dpkg
command.
The Debian system has a mechanism to install somewhat overlapping programs peacefully using update-alternatives(8). For example, you can make the vi command select to run vim while installing both the vim and nvi packages.
$ ls -l $(type -p vi) lrwxrwxrwx 1 root root 20 2007-03-24 19:05 /usr/bin/vi -> /etc/alternatives/vi $ sudo update-alternatives --display vi ... $ sudo update-alternatives --config vi Selection Command ---------------------------------------------- 1 /usr/bin/vim *+ 2 /usr/bin/nvi Enter to keep the default[*], or type selection number: 1
The Debian alternatives system keeps its selection as symlinks in "/etc/alternatives/". The selection process uses the corresponding file in "/var/lib/dpkg/alternatives/".
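The symlink chain behind this can be reproduced in a scratch directory. The paths below mimic /usr/bin and /etc/alternatives but nothing here touches the real alternatives database managed by update-alternatives(8); the "vi" stand-ins are invented.

```shell
#!/bin/sh
# Sketch of the alternatives symlink chain in a throwaway directory.
set -e
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/etc/alternatives"

# Two competing implementations of "vi"
printf '#!/bin/sh\necho vim\n' > "$root/usr/bin/vim"
printf '#!/bin/sh\necho nvi\n' > "$root/usr/bin/nvi"
chmod +x "$root/usr/bin/vim" "$root/usr/bin/nvi"

# vi -> /etc/alternatives/vi -> the selected implementation
ln -s "$root/usr/bin/vim" "$root/etc/alternatives/vi"
ln -s "$root/etc/alternatives/vi" "$root/usr/bin/vi"
"$root/usr/bin/vi"    # runs the vim stand-in

# Switching the selection retargets only the middle symlink.
ln -sf "$root/usr/bin/nvi" "$root/etc/alternatives/vi"
"$root/usr/bin/vi"    # now runs the nvi stand-in
```

This is why only the "/etc/alternatives/" link needs to change when you run "update-alternatives --config vi".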
Stat overrides provided by the dpkg-statoverride(8) command are a way to tell dpkg(1) to use a different owner or mode for a file when a package is installed. If "--update" is specified and the file exists, it is immediately set to the new owner and mode.
Direct alteration of the owner or mode of a file owned by a package using the chmod or chown commands by the system administrator is reset by the next upgrade of the package.
I use the word file here, but in reality this can be any filesystem object that dpkg
handles, including directories, devices, etc.
File diversions provided by the dpkg-divert
(8) command are a way of forcing dpkg
(1) not to install a file into its default location, but to a diverted location. The use of dpkg-divert
is meant for the package maintenance scripts. Its casual use by the system administrator is deprecated.
When running the unstable system, the administrator is expected to recover from broken package management situations.
Some methods described here are high risk actions. You have been warned!
If a desktop GUI program experiences instability after a significant upstream version upgrade, you should suspect interference from old local configuration files created by it. If it is stable under a newly created user account, this hypothesis is confirmed. (This is a packaging bug and usually avoided by the packager.)
To recover stability, you should move the corresponding local configuration files aside and restart the GUI program. You may need to read the old configuration file contents to recover configuration information later. (Do not erase them too quickly.)
Archive level package management systems, such as aptitude
(8) or apt-get
(1), do not even try to install packages with overlapped files using package dependencies (see Section 2.1.5, “Package dependencies”).
Errors by the package maintainer or deployment of inconsistently mixed sources of archives (see Section 2.7.2, “Packages from mixed source of archives”) by the system administrator may create a situation with incorrectly defined package dependencies. When you install a package with overlapped files using aptitude(8) or apt-get(1) under such a situation, dpkg(1), which unpacks the package, makes sure to return an error to the calling program without overwriting existing files.
The use of third party packages introduces significant system risks via maintainer scripts which are run with root privilege and can do anything to your system. The dpkg
(1) command only protects against overwriting by the unpacking.
You can work around such broken installation by removing the old offending package, <old-package>
, first.
$ sudo dpkg -P <old-package>
When a command in a package script returns an error for some reason and the script exits with the error, the package management system aborts its action and ends up with partially installed packages. When a package contains bugs in its removal scripts, the package may become impossible to remove and quite nasty.
For the package script problem of "<package_name>", you should look into the following package scripts.
/var/lib/dpkg/info/<package_name>.preinst
"
/var/lib/dpkg/info/<package_name>.postinst
"
/var/lib/dpkg/info/<package_name>.prerm
"
/var/lib/dpkg/info/<package_name>.postrm
"
Edit the offending package script as root using the following techniques.
Comment out the offending line by prepending "#" to it.
Force success by appending "|| true" to the offending line.
Configure all partially installed packages with the following command.
# dpkg --configure -a
Since dpkg is a very low level package tool, it can function under very bad situations such as an unbootable system without a network connection. Let's assume the foo package is broken and needs to be replaced.
You may still find cached copies of an older bug-free version of the foo package in the package cache directory: "/var/cache/apt/archives/". (If not, you can download it from the archive at http://snapshot.debian.net/ or copy it from the package cache of a functioning machine.)
If you can boot the system, you may install it by the following command.
# dpkg -i /path/to/foo_<old_version>_<arch>.deb
If the system breakage is minor, you may alternatively downgrade the whole system as in Section 2.7.7, “Emergency downgrading” using the higher level APT system.
If your system is unbootable from the hard disk, you should seek other ways to boot it.
Boot the system with a rescue medium and mount the unbootable system on the hard disk to "/target".
Install the older version of the foo package by the following.
# dpkg --root /target -i /path/to/foo_<old_version>_<arch>.deb
This example works even if the dpkg
command on the hard disk is broken.
Any GNU/Linux system started by another system on the hard disk, a live GNU/Linux CD, a bootable USB-key drive, or netboot can be used similarly to rescue a broken system.
If attempting to install a package this way fails due to some dependency violations and you really need to do this as the last resort, you can override dependencies using dpkg's "--ignore-depends", "--force-depends" and other options. If you do this, you need to make a serious effort to restore the proper dependencies later. See dpkg(1) for details.
When your system is seriously broken, you should make a full backup of the system to a safe place (see Section 10.1.6, “Backup and recovery”) and should perform a clean installation. This is less time consuming and produces better results in the end.
If "/var/lib/dpkg/status
" becomes corrupt for any reason, the Debian system loses package selection data and suffers severely. Look for the old "/var/lib/dpkg/status
" file at "/var/lib/dpkg/status-old
" or "/var/backups/dpkg.status.*
".
Keeping "/var/backups/
" in a separate partition may be a good idea since this directory contains lots of important system data.
For serious breakage, I recommend making a fresh re-install after making a backup of the system. Even if everything in "/var/" is gone, you can still recover some information from directories in "/usr/share/doc/" to guide your new installation.
Reinstall a minimal (desktop) system.
# mkdir -p /path/to/old/system
Mount old system at "/path/to/old/system/
".
# cd /path/to/old/system/usr/share/doc # ls -1 >~/ls1.txt # cd /usr/share/doc # ls -1 >>~/ls1.txt # cd # sort ls1.txt | uniq | less
Then you are presented with package names to install. (There may be some non-package names such as "texmf
".)
You can seek packages which satisfy your needs with aptitude
from the package description or from the list under "Tasks".
When you encounter two or more similar packages and wonder which one to install without "trial and error" efforts, you should use some common sense. I consider the following points good indications of preferred packages.
Packages selected by a dependency (e.g., python2.4 selected by python)
Debian being a volunteer project with a distributed development model, its archive contains many packages with different focuses and quality. You must make your own decision on what to do with them.
Installing packages from mixed source of archives is not supported by the official Debian distribution except for officially supported particular combinations of archives such as stable
with security updates and volatile updates.
Here is an example of operations to include specific newer upstream version packages found in unstable while tracking testing for a single occasion.
Change the "/etc/apt/sources.list" file temporarily to a single "unstable" entry.
Run "aptitude update".
Run "aptitude install <package-name>".
Restore the original "/etc/apt/sources.list" file for testing.
Run "aptitude update".
You do not create the "/etc/apt/preferences
" file nor need to worry about apt-pinning with this manual approach. But this is very cumbersome.
When using a mixed source of archives, you must ensure compatibility of packages by yourself since the Debian Project does not guarantee it. If a package incompatibility exists, you may break the system. You must be able to judge these technical requirements. The use of mixed sources of random archives is a completely optional operation and not something I encourage you to use.
General rules for installing packages from different archives are the following.
Non-binary packages ("Architecture: all
") are safer to install.
Binary packages (non "Architecture: all") usually face many roadblocks and are unsafe to install.
In order to make a package safer to install, some commercial non-free binary program packages may be provided with completely statically linked libraries. You should still check ABI compatibility issues etc. for them.
Except to avoid a broken package for a short term, installing binary packages from officially unsupported archives is generally a bad idea. This is true even if you use apt-pinning (see Section 2.7.3, “Tweaking candidate version”). You should consider chroot or similar techniques (see Section 9.8, “Virtualized system”) to run programs from different archives.
Without the "/etc/apt/preferences" file, the APT system chooses the latest available version as the candidate version using the version string. This is the normal state and most recommended usage of the APT system. All officially supported combinations of archives do not require the "/etc/apt/preferences" file since some archives which should not be used as the automatic source of upgrades are marked as NotAutomatic and dealt with properly.
The version string comparison rule can be verified with, e.g., "dpkg --compare-versions ver1.1 gt ver1.1~1; echo $?
" (see dpkg
(1)).
When you install packages from mixed source of archives (see Section 2.7.2, “Packages from mixed source of archives”) regularly, you can automate these complicated operations by creating the "/etc/apt/preferences
" file with proper entries and tweaking the package selection rule for candidate version as described in apt_preferences
(5). This is called apt-pinning.
Use of apt-pinning by a novice user is a sure call for major trouble. You must avoid using apt-pinning except when you absolutely need it.
When using apt-pinning, you must ensure compatibility of packages by yourself since the Debian Project does not guarantee it. The apt-pinning is a completely optional operation and not something I encourage you to use.
Archive level Release files (see Section 2.5.3, “Archive level "Release" files”) are used for the rule of apt_preferences
(5). Thus apt-pinning works only with "suite" name for normal Debian archives and security Debian archives. (This is different from Ubuntu archives). For example, you can do "Pin: release a=unstable
" but can not do "Pin: release a=sid
" in the "/etc/apt/preferences
" file.
When you use a non-Debian archive as a part of apt-pinning, you should check what it is intended for and also check its credibility. For example, Ubuntu and Debian are not meant to be mixed.
Even if you do not create the "/etc/apt/preferences
" file, you can do fairly complex system operations (see Section 2.6.4, “Rescue with the dpkg command” and Section 2.7.2, “Packages from mixed source of archives”) without apt-pinning.
Here is a simplified explanation of the apt-pinning technique.
The APT system chooses the highest Pin-Priority upgrading package from the available package sources defined in the "/etc/apt/sources.list" file as the candidate version package. If the Pin-Priority of the package is larger than 1000, this version restriction for upgrading is dropped to enable downgrading (see Section 2.7.7, “Emergency downgrading”).
The Pin-Priority value of each package is defined by "Pin-Priority" entries in the "/etc/apt/preferences" file or takes its default value.
Table 2.18. List of the default Pin-Priority value for each package source type
default Pin-Priority | package source type |
---|---|
990 | target release archive |
500 | normal archive |
100 | installed package |
1 | NotAutomatic archive |
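The selection rule above can be sketched with a toy pipeline: among the available versions of a package, the highest Pin-Priority wins, and equal priorities are broken by the higher version. The version/priority pairs below are invented, and GNU sort's "-V" (version sort) is assumed; real APT implements this logic internally with more subtlety.

```shell
#!/bin/sh
# Sketch: pick the candidate version from "version priority" pairs.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
1.2-1 990
1.3-1 990
1.1-1 500
EOF

# Highest priority first; ties broken by the higher version string.
candidate=$(sort -k2,2nr -k1,1Vr "$tmp" | head -n1 | cut -d' ' -f1)
echo "candidate: $candidate"
# expected: candidate: 1.3-1
```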
The target release archive can be set by several methods.
The "/etc/apt/apt.conf" configuration file with the "APT::Default-Release "stable";" line
The command line option, e.g., "apt-get install -t testing some-package"
The NotAutomatic archive is set by the archive server having its archive level Release file (see Section 2.5.3, “Archive level "Release" files”) containing "NotAutomatic: yes".
The apt-pinning situation of <package> from multiple archive sources is displayed by "apt-cache policy <package>
".
A line starting with "Package pin:" lists the package version of the pin if an association just with <package> is defined, e.g., "Package pin: 0.190".
No "Package pin:" line exists if no association just with <package> is defined.
The Pin-Priority value associated just with <package> is listed on the right side of all version strings, e.g., "0.181 700".
"0" is listed on the right side of all version strings if no association just with <package> is defined, e.g., "0.181 0".
The Pin-Priority values of archives (defined as "Package: *" in the "/etc/apt/preferences" file) are listed on the left side of all archive paths, e.g., "200 http://backports.debian.org/debian-backports/ squeeze-backports/main Packages".
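Putting these items together, an "apt-cache policy <package>" output might look roughly like the following. All version numbers, priorities, and archive paths here are invented, and the exact layout varies with the APT version.

```
<package>:
  Installed: 0.181
  Candidate: 0.190
  Package pin: 0.190
  Version table:
     0.190 700
        200 http://backports.debian.org/debian-backports/ squeeze-backports/main Packages
 *** 0.181 700
        100 /var/lib/dpkg/status
```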
Here is an example of the apt-pinning technique to include specific newer upstream version packages found in unstable and regularly upgraded, while tracking testing. You list all the required archives in the "/etc/apt/sources.list" file as the following.
deb http://ftp.us.debian.org/debian/ testing main contrib non-free deb http://ftp.us.debian.org/debian/ unstable main contrib non-free deb http://security.debian.org/ testing/updates main contrib
Set the "/etc/apt/preferences" file as the following.
Package: * Pin: release a=testing Pin-Priority: 500 Package: * Pin: release a=unstable Pin-Priority: 200
When you wish to install a package named "<package-name>" with its dependencies from the unstable archive under this configuration, you issue the following command which switches the target release with the "-t" option (the Pin-Priority of unstable becomes 990).
$ sudo apt-get install -t unstable <package-name>
With this configuration, the usual execution of "apt-get upgrade" and "apt-get dist-upgrade" (or "aptitude safe-upgrade" and "aptitude full-upgrade") upgrades packages which were installed from the testing archive using the current testing archive and packages which were installed from the unstable archive using the current unstable archive.
Be careful not to remove the "testing" entry from the "/etc/apt/sources.list" file. Without the "testing" entry in it, the APT system upgrades packages using the newer unstable archive.
I usually edit the "/etc/apt/sources.list" file to comment out the "unstable" archive entry right after the above operation. This avoids the slow update process caused by too many entries in the "/etc/apt/sources.list" file, although it prevents upgrading packages which were installed from the unstable archive using the current unstable archive.
If "Pin-Priority: 20" is used instead of "Pin-Priority: 200" in the "/etc/apt/preferences" file, already installed packages having the Pin-Priority value of 100 are not upgraded by the unstable archive even if the "testing" entry in the "/etc/apt/sources.list" file is removed.
If you wish to track particular packages in unstable automatically without the initial "-t unstable" installation, you must create the "/etc/apt/preferences" file and explicitly list all those packages at the top of it as the following.
Package: <package-1> Pin: release a=unstable Pin-Priority: 700 Package: <package-2> Pin: release a=unstable Pin-Priority: 700
These set the Pin-Priority value for each specific package. For example, in order to track the latest unstable version of this "Debian Reference" in English, you should have the following entries in the "/etc/apt/preferences" file.
Package: debian-reference-en Pin: release a=unstable Pin-Priority: 700 Package: debian-reference-common Pin: release a=unstable Pin-Priority: 700
This apt-pinning technique is valid even when you are tracking the stable archive. Documentation packages have always been safe to install from the unstable archive in my experience, so far.
Here is another example of the apt-pinning technique to include specific newer upstream version packages found in experimental while tracking unstable. You list all the required archives in the "/etc/apt/sources.list" file as the following.
deb http://ftp.us.debian.org/debian/ unstable main contrib non-free deb http://ftp.us.debian.org/debian/ experimental main contrib non-free deb http://security.debian.org/ testing/updates main contrib
The default Pin-Priority value for the experimental archive is always 1 (<<100) since it is a NotAutomatic archive (see Section 2.5.3, “Archive level "Release" files”). There is no need to set the Pin-Priority value explicitly in the "/etc/apt/preferences" file just to use the experimental archive unless you wish to track particular packages in it automatically for the next upgrading.
There are the debian-volatile project and backports.debian.org archives which provide upgrade packages for stable.
Do not use all the packages available in NotAutomatic archives such as squeeze-backports and volatile-sloppy. Use only selected packages which fit your needs.
Archive level Release files (see Section 2.5.3, “Archive level "Release" files”) are used for the rule of apt_preferences(5). Thus apt-pinning works only with the codename for volatile Debian archives. This is different from other Debian archives. For example, you can do "Pin: release a=squeeze" but can not do "Pin: release a=stable" in the "/etc/apt/preferences" file for volatile Debian archives.
Here is an example of the apt-pinning technique to include specific newer upstream version packages found in squeeze-backports while tracking squeeze and volatile. You list all the required archives in the "/etc/apt/sources.list" file as the following.
deb http://ftp.us.debian.org/debian/ squeeze main contrib non-free deb http://security.debian.org/ squeeze/updates main contrib deb http://volatile.debian.org/debian-volatile/ squeeze/volatile main contrib non-free deb http://volatile.debian.org/debian-volatile/ squeeze/volatile-sloppy main contrib non-free deb http://backports.debian.org/debian-backports/ squeeze-backports main contrib non-free
The default Pin-Priority value for the backports.debian.org and volatile-sloppy archives is always 1 (<<100) since they are NotAutomatic archives (see Section 2.5.3, “Archive level "Release" files”). There is no need to set the Pin-Priority value explicitly in the "/etc/apt/preferences" file just to use the backports.debian.org and volatile-sloppy archives unless you wish to track packages automatically for the next upgrading.
So whenever you wish to install a package named "<package-name>" with its dependencies from the squeeze-backports archive, you use the following command while switching the target release with the "-t" option.
$ sudo apt-get install -t squeeze-backports <package-name>
If you wish to upgrade particular packages, you must create the "/etc/apt/preferences" file and explicitly list all those packages in it as the following.
Package: <package-1> Pin: release o=Backports.org archive Pin-Priority: 700 Package: <package-2> Pin: release o=volatile.debian.org Pin-Priority: 700
Alternatively, you can set the "/etc/apt/preferences" file as the following.
Package: * Pin: release a=stable, o=Debian Pin-Priority: 500 Package: * Pin: release a=squeeze, o=volatile.debian.org Pin-Priority: 500 Package: * Pin: release a=squeeze-backports, o=Backports.org archive Pin-Priority: 200 Package: * Pin: release a=squeeze-sloppy, o=volatile.debian.org Pin-Priority: 200
Execution of "apt-get upgrade" and "apt-get dist-upgrade" (or "aptitude safe-upgrade" and "aptitude full-upgrade") upgrades packages which were installed from the stable archive using the current stable archive and packages which were installed from other archives using the current corresponding archives, for all archives in the "/etc/apt/sources.list" file.
The apt
package comes with its own cron script "/etc/cron.daily/apt
" to support the automatic download of packages. This script can be enhanced to perform the automatic upgrade of packages by installing the unattended-upgrades
package. These can be customized by parameters in "/etc/apt/apt.conf.d/02backup
" and "/etc/apt/apt.conf.d/50unattended-upgrades
" as described in "/usr/share/doc/unattended-upgrades/README
".
The unattended-upgrades package is mainly intended for the security upgrade of the stable system. If the risk of breaking an existing stable system by the automatic upgrade is smaller than that of the system being broken by an intruder using a security hole which has been closed by the security update, you should consider using this automatic upgrade with configuration parameters as the following.
APT::Periodic::Update-Package-Lists "1"; APT::Periodic::Download-Upgradeable-Packages "1"; APT::Periodic::Unattended-Upgrade "1";
If you are running an unstable system, you do not want to use the automatic upgrade since it will certainly break the system some day. Even in such an unstable case, you may still want to download packages in advance to save time for the interactive upgrade with configuration parameters as the following.
APT::Periodic::Update-Package-Lists "1"; APT::Periodic::Download-Upgradeable-Packages "1"; APT::Periodic::Unattended-Upgrade "0";
If you want to limit the download bandwidth for APT to e.g. 800Kib/sec (=100kiB/sec), you should configure APT with its configuration parameter as the following.
APT::Acquire::http::Dl-Limit "800";
Downgrading is not officially supported by Debian by design. It should be done only as a part of an emergency recovery process. Despite this situation, it is known to work well in many incidents. For critical systems, you should back up all important data on the system after the recovery operation and re-install the new system from scratch.
You may be lucky enough to downgrade from a newer archive to an older archive to recover from a broken system upgrade by manipulating the candidate version (see Section 2.7.3, “Tweaking candidate version”). This is a lazy alternative to the tedious actions of many "dpkg -i <broken-package>_<old-version>.deb" commands (see Section 2.6.4, “Rescue with the dpkg command”).
Search lines in the "/etc/apt/sources.list" file tracking unstable, such as the following.
deb http://ftp.us.debian.org/debian/ sid main contrib non-free
Replace it with the following to track testing
.
deb http://ftp.us.debian.org/debian/ wheezy main contrib non-free
Set the "/etc/apt/preferences
" file as the following.
Package: * Pin: release a=testing Pin-Priority: 1010
Run "apt-get dist-upgrade
" to force downgrading of packages across the system.
Remove this special "/etc/apt/preferences
" file after this emergency downgrading.
It is a good idea to remove (not purge!) as many packages as possible to minimize dependency problems. You may need to manually remove and install some packages to get the system downgraded. The Linux kernel, bootloader, udev, PAM, APT, and networking related packages and their configuration files require special attention.
Although the maintainer name listed in "/var/lib/dpkg/available" and "/usr/share/doc/package_name/changelog" provides some information on "who is behind the packaging activity", the actual uploader of the package is somewhat obscure. who-uploads(1) in the devscripts package identifies the actual uploader of Debian source packages.
If you are to compile a program from source to replace the Debian package, it is best to make it into a real local debianized package (*.deb) and use a private archive.
If you choose to compile a program from source and install it under "/usr/local" instead, you may need to use equivs as a last resort to satisfy the missing package dependency.
Package: equivs Priority: extra Section: admin Description: Circumventing Debian package dependencies This is a dummy package which can be used to create Debian packages, which only contain dependency information.
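For example, a control file fed to the equivs-control(1)/equivs-build(1) workflow might look like the following sketch; the package and dependency names are placeholders.

```
Section: misc
Priority: optional
Standards-Version: 3.9.1

Package: foo-local-dummy
Version: 1.0
Depends: bar
Description: dummy package to satisfy a local dependency on foo
 This package contains no files; it only records dependency
 information for a program compiled under /usr/local.
```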
For partial upgrades of the stable
system, rebuilding a package within its environment using the source package is desirable. This avoids massive package upgrades due to their dependencies.
Add the following entries to the "/etc/apt/sources.list
" of a stable
system.
deb-src http://http.us.debian.org/debian unstable main contrib non-free
Install the required packages for the compilation and download the source package as the following.
# apt-get update
# apt-get dist-upgrade
# apt-get install fakeroot devscripts build-essential
$ apt-get build-dep foo
$ apt-get source foo
$ cd foo*
Adjust installed packages if needed.
Execute the following.
$ dch -i
Bump package version, e.g. one appended with "+bp1
" in "debian/changelog
"
Build packages and install them to the system as the following.
$ debuild
$ cd ..
# debi foo*.changes
Since mirroring a whole subsection of the Debian archive wastes disk space and network bandwidth, deploying a local proxy server for APT is worth considering when you administer many systems on a LAN. APT can be configured to use generic web (http) proxy servers such as squid
(see Section 6.10, “Other network application servers”) as described in apt.conf
(5) and in "/usr/share/doc/apt/examples/configure-index.gz
". The "$http_proxy
" environment variable can be used to override proxy server setting in the "/etc/apt/apt.conf
" file.
There are proxy tools made specifically for the Debian archive. You should check the BTS before using them.
Table 2.19. List of proxy tools specifically for the Debian archive
package | popcon | size | description |
---|---|---|---|
approx * | V:0.3, I:0.3 | 3896 | caching proxy server for Debian archive files (compiled OCaml program) |
apt-cacher * | V:0.3, I:0.4 | 308 | caching proxy for Debian package and source files (Perl program) |
apt-cacher-ng * | V:0.3, I:0.4 | 1092 | caching proxy for distribution of software packages (compiled C++ program) |
debtorrent * | V:0.12, I:0.17 | 1185 | BitTorrent proxy for downloading Debian packages (Python program) |
When Debian reorganizes its archive structure, these specialized proxy tools tend to require code rewrites by the package maintainer and may not be functional for a while. On the other hand, generic web (http) proxy servers are more robust and cope with such changes more easily.
Here is an example for creating a small public package archive compatible with the modern secure APT system (see Section 2.5.2, “Top level "Release" file and authenticity”). Let's assume a few things.
Account name: "foo
"
Host name: "www.example.com
"
Installed packages: apt-utils
, gnupg
, and other packages
Archive URL: "http://www.example.com/~foo/
" ( → "/home/foo/public_html/index.html
")
Package architecture: "amd64
"
Create an APT archive key of Foo on your server system as the following.
$ ssh foo@www.example.com
$ gpg --gen-key
...
$ gpg -K
...
sec   1024D/3A3CB5A6 2008-08-14
uid                  Foo (ARCHIVE KEY) <foo@www.example.com>
ssb   2048g/6856F4A7 2008-08-14
$ gpg --export -a 3A3CB5A6 >foo.public.key
Publish the archive key file "foo.public.key
" with the key ID "3A3CB5A6
" for Foo
Create an archive tree called "Origin: Foo" as the following.
$ umask 022
$ mkdir -p ~/public_html/debian/pool/main
$ mkdir -p ~/public_html/debian/dists/unstable/main/binary-amd64
$ mkdir -p ~/public_html/debian/dists/unstable/main/source
$ cd ~/public_html/debian
$ cat > dists/unstable/main/binary-amd64/Release << EOF
Archive: unstable
Version: 4.0
Component: main
Origin: Foo
Label: Foo
Architecture: amd64
EOF
$ cat > dists/unstable/main/source/Release << EOF
Archive: unstable
Version: 4.0
Component: main
Origin: Foo
Label: Foo
Architecture: source
EOF
$ cat >aptftp.conf <<EOF
APT::FTPArchive::Release {
  Origin "Foo";
  Label "Foo";
  Suite "unstable";
  Codename "sid";
  Architectures "amd64";
  Components "main";
  Description "Public archive for Foo";
};
EOF
$ cat >aptgenerate.conf <<EOF
Dir::ArchiveDir ".";
Dir::CacheDir ".";
TreeDefault::Directory "pool/";
TreeDefault::SrcDirectory "pool/";
Default::Packages::Extensions ".deb";
Default::Packages::Compress ". gzip bzip2";
Default::Sources::Compress "gzip bzip2";
Default::Contents::Compress "gzip bzip2";

BinDirectory "dists/unstable/main/binary-amd64" {
  Packages "dists/unstable/main/binary-amd64/Packages";
  Contents "dists/unstable/Contents-amd64";
  SrcPackages "dists/unstable/main/source/Sources";
};

Tree "dists/unstable" {
  Sections "main";
  Architectures "amd64 source";
};
EOF
You can automate repetitive updates of APT archive contents on your server system by configuring dupload
.
Place all package files into "~foo/public_html/debian/pool/main/
" by executing "dupload -t foo changes_file
" in client while having "~/.dupload.conf
" containing the following.
$cfg{'foo'} = {
  fqdn => "www.example.com",
  method => "scpb",
  incoming => "/home/foo/public_html/debian/pool/main",
  # The dinstall on ftp-master sends emails itself
  dinstall_runs => 1,
};
$cfg{'foo'}{postupload}{'changes'} = "
  echo 'cd public_html/debian ;
  apt-ftparchive generate -c=aptftp.conf aptgenerate.conf;
  apt-ftparchive release -c=aptftp.conf dists/unstable >dists/unstable/Release ;
  rm -f dists/unstable/Release.gpg ;
  gpg -u 3A3CB5A6 -bao dists/unstable/Release.gpg dists/unstable/Release'|
  ssh foo@www.example.com 2>/dev/null ;
  echo 'Package archive created!'";
The postupload hook script initiated by dupload
(1) creates updated archive files for each upload.
You can add this small public archive to the apt-line of your client system by the following.
$ sudo bash
# echo "deb http://www.example.com/~foo/debian/ unstable main" \
   >> /etc/apt/sources.list
# apt-key add foo.public.key
If the archive is located on the local filesystem, you can use "deb file:///home/foo/debian/ …
" instead.
You can make a local copy of the package and debconf selection states by the following.
# dpkg --get-selections '*' > selection.dpkg
# debconf-get-selections > selection.debconf
Here, "*
" makes "selection.dpkg
" to include package entries for "purge" too.
You can transfer these 2 files to another computer, and install them there with the following.
# dselect update
# debconf-set-selections < myselection.debconf
# dpkg --set-selections < myselection.dpkg
# apt-get -u dselect-upgrade    # or dselect install
If you are thinking about managing many servers in a cluster with practically the same configuration, you should consider using a specialized package such as fai
to manage the whole system.
alien
(1) enables the conversion of binary packages provided in Red Hat rpm
, Stampede slp
, Slackware tgz
, and Solaris pkg
file formats into a Debian deb
package. If you want to use a package from a Linux distribution other than the one you have installed on your system, you can use alien
to convert it from your preferred package format and install it. alien
also supports LSB packages.
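A minimal session might look like the following; the rpm filename is hypothetical, and note that alien increments the package revision by default (1.0-1 becomes 1.0-2) unless told otherwise.

```
# alien --to-deb foo-1.0-1.i386.rpm
# dpkg -i foo_1.0-2_i386.deb
```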
alien
(1) should not be used to replace essential system packages, such as sysvinit
, libc6
, libpam-modules
, etc. Practically, alien
(1) should only be used for non-free binary-only packages which are LSB compliant or statically linked. For free software, you should use the source packages to make real Debian packages.
The current "*.deb
" package contents can be extracted without using dpkg
(1) on any Unix-like environment using standard ar
(1) and tar
(1).
# ar x /path/to/dpkg_<version>_<arch>.deb
# ls
total 24
-rw-r--r-- 1 bozo bozo  1320 2007-05-07 00:11 control.tar.gz
-rw-r--r-- 1 bozo bozo 12837 2007-05-07 00:11 data.tar.gz
-rw-r--r-- 1 bozo bozo     4 2007-05-07 00:11 debian-binary
# mkdir control
# mkdir data
# tar xvzf control.tar.gz -C control
# tar xvzf data.tar.gz -C data
You can also browse package content using the mc
command.
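To see this layout without touching a real package, you can assemble and take apart a toy archive with the same three members; the package name "hello" is made up and the tarballs here are empty.

```shell
# Build a minimal .deb-shaped ar archive (toy example, empty contents)
mkdir -p demo/control demo/data
printf '2.0\n' > demo/debian-binary
tar czf demo/control.tar.gz -C demo/control .
tar czf demo/data.tar.gz -C demo/data .
(cd demo && ar rc hello_1.0_all.deb debian-binary control.tar.gz data.tar.gz)
# Take it apart again with ar(1) alone, as shown above
mkdir -p out
(cd out && ar x ../demo/hello_1.0_all.deb)
ls out
```

The three extracted members mirror what "ar x" produces for any real "*.deb" file.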
You can learn more about package management from the following documentation.
Primary documentation on package management:
aptitude
(8), dpkg
(1), tasksel
(8), apt-get
(8), apt-config
(8), apt-key
(8), sources.list
(5), apt.conf
(5), and apt_preferences
(5);
/usr/share/doc/apt-doc/guide.html/index.html
" and "/usr/share/doc/apt-doc/offline.html/index.html
" from the apt-doc
package; and
/usr/share/doc/aptitude/html/en/index.html
" from the aptitude-doc-en
package.
Official and detailed documentation on the Debian archive:
Tutorial for building a Debian package for Debian users:
It is wise for you as the system administrator to know roughly how the Debian system is started and configured. Although the exact details are in the source files of the packages installed and their documentation, it is a bit overwhelming for most of us.
I did my best to provide a quick overview of the key points of the Debian system and their configuration for your reference, based on my current and previous knowledge and that of others. Since the Debian system is a moving target, the situation may have changed. Before making any changes to the system, you should refer to the latest documentation for each package.
The computer system undergoes several phases of the bootstrap process from the power-on event until it offers a fully functional operating system (OS) to the user.
For simplicity, I limit discussion to the typical PC platform with the default installation.
The typical bootstrap process is like a four-stage rocket. Each stage hands system control over to the next stage.
Of course, these can be configured differently. For example, if you compiled your own kernel, you may be skipping the step with the mini-Debian system. So please do not assume this is the case for your system until you check it yourself.
For non-legacy PC platforms such as SUN or Macintosh systems, the BIOS in ROM and the partitioning of the disk may be quite different (Section 9.3.1, “Disk partition configuration”). Please seek platform specific documentation elsewhere in such a case.
The BIOS is the 1st stage of the boot process which is started by the power-on event. The BIOS residing on the read only memory (ROM) is executed from the particular memory address to which the program counter of CPU is initialized by the power-on event.
This BIOS performs the basic initialization of the hardware (POST: power on self test) and hands the system control to the next step which you provide. The BIOS is usually provided with the hardware.
The BIOS startup screen usually indicates what key(s) to press to enter the BIOS setup screen to configure the BIOS behavior. Popular keys used are F1, F2, F10, Esc, Ins, and Del. If your BIOS startup screen is hidden by a nice graphics screen, you may press some keys such as Esc to disable this. These keys are highly dependent on the hardware.
The hardware location and the priority of the code started by the BIOS can be selected from the BIOS setup screen. Typically, the first few sectors of the first found selected device (hard disk, floppy disk, CD-ROM, …) are loaded to the memory and this initial code is executed. This initial code can be any one of the following.
Typically, the system is booted from the specified partition of the primary hard disk. The first sector of the hard disk on a legacy PC contains the master boot record (MBR). The disk partition information including the boot selection is recorded at the end of this MBR. The first boot loader code executed by the BIOS occupies the rest of this MBR.
The boot loader is the 2nd stage of the boot process which is started by the BIOS. It loads the system kernel image and the initrd image to the memory and hands control over to them. This initrd image is the root filesystem image and its support depends on the bootloader used.
The Debian system normally uses the Linux kernel as the default system kernel. The initrd image for the current 2.6 Linux kernel is technically the initramfs (initial RAM filesystem) image. The initramfs image is a gzipped cpio archive of files in the root filesystem.
The default install of the Debian system places first-stage GRUB boot loader code into the MBR for the PC platform. There are many boot loaders and configuration options available.
Table 3.1. List of boot loaders
bootloader | package | popcon | size | initrd | description |
---|---|---|---|---|---|
GRUB Legacy | grub-legacy * | V:0.4, I:1.1 | 1984 | Supported | This is smart enough to understand disk partitions and filesystems such as vfat, ext3, …. (lenny default) |
GRUB 2 | grub-pc * | V:7, I:25 | 2480 | Supported | This is smart enough to understand disk partitions and filesystems such as vfat, ext3, …. |
GRUB 2 | grub-rescue-pc * | V:0.04, I:0.5 | 3896 | Supported | This is GRUB 2 bootable rescue images (CD and floppy) (PC/BIOS version) |
Lilo | lilo * | V:0.5, I:2 | 1236 | Supported | This relies on the sector locations of data on the hard disk. (Old) |
Isolinux | syslinux * | V:1.3, I:8 | 204 | Supported | This understands the ISO9660 filesystem. This is used by the boot CD. |
Syslinux | syslinux * | V:1.3, I:8 | 204 | Supported | This understands the MSDOS filesystem (FAT). This is used by the boot floppy. |
Loadlin | loadlin * | V:0.03, I:0.2 | 144 | Supported | New system is started from the FreeDOS/MSDOS system. |
MBR by Neil Turton | mbr * | V:0.8, I:5 | 96 | Not supported | This is free software which substitutes MSDOS MBR. This only understands disk partitions. |
Do not play with boot loaders without having bootable rescue media (CD or floppy) created from images in the grub-rescue-pc
package. Such media let you boot your system even without a functioning bootloader on the hard disk.
For GRUB Legacy, the menu configuration file is located at "/boot/grub/menu.lst
". For example, it has entries as the following.
title           Debian GNU/Linux
root            (hd0,2)
kernel          /vmlinuz root=/dev/hda3 ro
initrd          /initrd.img
For GRUB 2, the menu configuration file is located at "/boot/grub/grub.cfg
". It is automatically generated by "/usr/sbin/update-grub
" using templates from "/etc/grub.d/*
" and settings from "/etc/default/grub
". For example, it has entries as the following.
menuentry "Debian GNU/Linux" {
        set root=(hd0,3)
        linux   /vmlinuz root=/dev/hda3
        initrd  /initrd.img
}
For these examples, these GRUB parameters mean the following.
Table 3.2. The meaning of GRUB parameters
GRUB parameter | meaning |
---|---|
root | use the 3rd partition on the primary disk by setting it as "(hd0,2)" in GRUB Legacy or as "(hd0,3)" in GRUB 2 |
kernel | use the kernel located at "/vmlinuz" with the kernel parameter "root=/dev/hda3 ro" |
initrd | use the initrd/initramfs image located at "/initrd.img" |
The partition number used by the GRUB Legacy program is one less than the normal one used by the Linux kernel and utility tools. The GRUB 2 program fixes this problem.
UUID (see Section 9.3.2, “Accessing partition using UUID”) may be used to identify a block special device instead of its file name such as "/dev/hda3
", e.g."root=UUID=81b289d5-4341-4003-9602-e254a17ac232 ro
".
You can start a boot loader from another boot loader using techniques called chain loading.
See "info grub
" and grub-install
(8).
The mini-Debian system is the 3rd stage of the boot process which is started by the boot loader. It runs the system kernel with its root filesystem in memory. This is an optional preparatory stage of the boot process.
The term "the mini-Debian system" is coined by the author to describe this 3rd stage boot process for this document. This system is commonly referred to as the initrd or initramfs system. A similar in-memory system is used by the Debian Installer.
The "/init
" script is executed as the first program in this root filesystem on the memory. It is a shell script program which initializes the kernel in user space and hands control over to the next stage. This mini-Debian system offers flexibility to the boot process such as adding kernel modules before the main boot process or mounting the root filesystem as an encrypted one.
You can interrupt this part of the boot process to gain root shell by providing "break=init
" etc. to the kernel boot parameter. See the "/init
" script for more break conditions. This shell environment is sophisticated enough to make a good inspection of your machine's hardware.
Commands available in this mini-Debian system are stripped-down ones and mainly provided by a tool called busybox
(1).
You need to use the "-n
" option of the mount
command when you are on the read-only root filesystem.
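For example, remounting the root filesystem read-write from such a shell could look like this; "-n" avoids writing to "/etc/mtab", which would fail while the root filesystem is still read-only.

```
# mount -n -o remount,rw /
```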
Table 3.3. List of boot utilities for the Debian system
package | popcon | size | description |
---|---|---|---|
initscripts * | V:91, I:99 | 284 | scripts for initializing and shutting down the system |
sysvinit * | V:85, I:99 | 208 | System-V-like init(8) utilities |
sysv-rc * | V:91, I:99 | 300 | System-V-like runlevel change mechanism |
sysvinit-utils * | V:91, I:99 | 224 | System-V-like utilities (startpar(8), bootlogd(8), …) |
lsb-base * | V:91, I:99 | 36 | Linux Standard Base 3.2 init script functionality |
insserv * | V:22, I:26 | 292 | tool to organize boot sequence using LSB init.d script dependencies |
upstart * | V:0.15, I:0.2 | 700 | event-based init(8) daemon for concurrency (alternative to sysvinit) |
readahead-fedora * | V:0.3, I:0.5 | 144 | readahead(8) to preload boot process files |
uswsusp * | V:4, I:14 | 536 | tools to use userspace software suspend provided by Linux |
kexec-tools * | V:0.17, I:0.5 | 320 | kexec tool for kexec(8) reboots (warm reboot) |
bootchart * | V:0.13, I:0.7 | 132 | boot process performance analyser |
bootchart-view * | V:0.10, I:0.6 | 280 | boot process performance analyser (visualisation) |
mingetty * | V:0.2, I:0.5 | 64 | console-only getty(8) |
mgetty * | V:0.19, I:0.6 | 416 | smart modem getty(8) replacement |
This section describes the classical System V style boot system used in lenny
. Debian is moving to an event-driven boot system. See The future of the boot system in Debian and Dependency based boot sequence.
All boot mechanisms are compatible through "/etc/init.d/rc
", "/etc/init.d/rcS
", "/usr/sbin/update-rc.d
", and "/usr/sbin/invoke-rc.d
" scripts.
The readahead-fedora
package can speed up the start of a system with a decent amount of DRAM.
The normal Debian system is the 4th stage of the boot process which is started by the mini-Debian system. The system kernel for the mini-Debian system continues to run in this environment. The root filesystem is switched from the one in memory to the one on the real hard disk filesystem.
The "/sbin/init
" program is executed as the first program and performs the main boot process. The Debian normally uses the traditional sysvinit scheme with the sysv-rc
package. See init
(8), inittab
(5), and "/usr/share/doc/sysv-rc/README.runlevels.gz
" for the exact explanation. This main boot process essentially goes through the following.
The boot process follows the "/etc/inittab
" description.
The initial runlevel used for multi-user mode is specified by a runlevel number given as a kernel boot parameter or in the "initdefault" line of the "/etc/inittab
". The Debian system as installed starts at the runlevel 2.
All actual script files executed by the init system are located in the directory "/etc/init.d/
".
Each runlevel uses a directory for its configuration and has specific meaning as the following.
Table 3.4. List of runlevels and description of their usage
runlevel | directory | description of runlevel usage |
---|---|---|
N | none | system bootup (NONE) level (no "/etc/rcN.d/" directory) |
0 | /etc/rc0.d/ | halt the system |
S | /etc/rcS.d/ | single-user mode on boot (alias: "s") |
1 | /etc/rc1.d/ | single-user mode switched from multi-user mode |
2 | /etc/rc2.d/ | multi-user mode |
3 | /etc/rc3.d/ | ,, |
4 | /etc/rc4.d/ | ,, |
5 | /etc/rc5.d/ | ,, |
6 | /etc/rc6.d/ | reboot the system |
7 | /etc/rc7.d/ | valid multi-user mode but not normally used |
8 | /etc/rc8.d/ | ,, |
9 | /etc/rc9.d/ | ,, |
You can change the runlevel from the console to, e.g., 4 by the following.
$ sudo telinit 4
The Debian system does not pre-assign any special meaning differences among the runlevels between 2 and 5. The system administrator on the Debian system may change this. (I.e., Debian is not Red Hat Linux nor Solaris by Sun Microsystems nor HP-UX by Hewlett Packard nor AIX by IBM nor …)
The Debian system does not populate directories for the runlevels between 7 and 9 when the package is installed. Traditional Unix variants don't use these runlevels.
In Debian squeeze
, dependency based boot order provided by the insserv
package is used instead of the classical alphabetical one. The "CONCURRENCY
" value in "/etc/default/rcS
" controls its concurrency: "none
" for no concurrency, "startpar
" for concurrency within the same sequence number, or "makefile
" for full concurrency. See "/usr/share/doc/insserv/README.Debian
".
The name of the symlink in each runlevel directory has the form "S<2-digit-number><original-name>
" or "K<2-digit-number><original-name>
". The 2-digit-number is used to determine the order in which to run the scripts. "S
" is for "Start" and "K
" is for "Kill".
For "CONCURRENCY=none
", when init
(8) or telinit
(8) commands goes into the runlevel to "<n>", it execute following scripts.
Scripts whose names begin with "K
" in "/etc/rc<n>.d/
" are executed in alphabetical order with the single argument "stop
". (killing services)
Scripts whose names begin with "S
" in "/etc/rc<n>.d/
" are executed in alphabetical order with the single argument "start
". (starting services)
For example, if you had the links "S10sysklogd
" and "S20exim4
" in a runlevel directory, "S10sysklogd
" which is symlinked to "../init.d/sysklogd
" would run before "S20exim4
" which is symlinked to "../init.d/exim4
".
For "CONCURRENCY=makefile
" (new default), package dependency defined in the header of init scripts are used to order them.
It is not advisable to make any changes to symlinks in "/etc/rcS.d/
" unless you know better than the maintainer.
For example, let's set up the runlevel system somewhat like Red Hat Linux as the following.
init
starts the system in runlevel=3 as the default.
init
does not start gdm
(1) in runlevel=(0,1,2,6).
init
starts gdm
(1) in runlevel=(3,4,5).
This can be done by using an editor on the "/etc/inittab
" file to change the starting runlevel and by using user-friendly runlevel management tools such as sysv-rc-conf
or bum
to edit the runlevels. If you prefer the command line instead, here is how you do it (after the default installation of the gdm
package and selecting it to be the choice of display manager).
# cd /etc/rc2.d ; mv S21gdm K21gdm
# cd /etc ; perl -i -p -e 's/^id:.:/id:3:/' inittab
Please note the "/etc/X11/default-display-manager
" file is checked when starting the display manager daemons: xdm
, gdm
, kdm
, and wdm
.
You can still start X from any console shell with the startx
(1) command.
The default parameter for each init script in "/etc/init.d/
" is given by the corresponding file in "/etc/default/
" which contains environment variable assignments only. This choice of directory name is specific to the Debian system. It is roughly the equivalent of the "/etc/sysconfig
" directory found in Red Hat Linux and other distributions. For example, "/etc/default/cron
" can be used to control how "/etc/init.d/cron
" works.
The "/etc/default/rcS
" file can be used to customize boot-time defaults for motd
(5), sulogin
(8), etc.
If you cannot get the behavior you want by changing such variables then you may modify the init scripts themselves. These are configuration files editable by system administrators.
The kernel maintains the system hostname. The init script in runlevel S which is symlinked to "/etc/init.d/hostname.sh
" sets the system hostname at boot time (using the hostname
command) to the name stored in "/etc/hostname
". This file should contain only the system hostname, not a fully qualified domain name.
To print out the current hostname run hostname
(1) without an argument.
Although the root filesystem is mounted by the kernel when it is started, other filesystems are mounted in the runlevel S by the following init scripts.
"/etc/init.d/mountkernfs.sh
" for kernel filesystems in "/proc
", "/sys
", etc.
"/etc/init.d/mountdevsubfs.sh
" for virtual filesystems in "/dev
"
"/etc/init.d/mountall.sh
" for normal filesystems using "/etc/fstab
"
"/etc/init.d/mountnfs.sh
" for network filesystems using "/etc/fstab
"
The mount options of the filesystem are set in "/etc/fstab
". See Section 9.3.5, “Optimization of filesystem by mount options”.
The actual mounting of network filesystems waits for the start of the network interface.
After mounting all the filesystems, temporary files in "/tmp
", "/var/lock
", and "/var/run
" are cleaned for each boot up.
Network interfaces are initialized in runlevel S by the init scripts symlinked to "/etc/init.d/ifupdown-clean
" and "/etc/init.d/ifupdown
". See Chapter 5, Network setup for how to configure them.
Many network services (see Chapter 6, Network applications) are started under multi-user mode directly as daemon processes at boot time by the init script, e.g., "/etc/rc2.d/S20exim4
" (for RUNLEVEL=2) which is a symlink to "/etc/init.d/exim4
".
Some network services can be started on demand using the super-server inetd
(or its equivalents). The inetd
is started at boot time by "/etc/rc2.d/S20inetd
" (for RUNLEVEL=2) which is a symlink to "/etc/init.d/inetd
". Essentially, inetd
allows one running daemon to invoke several others, reducing load on the system.
Whenever a request for service arrives at super-server inetd
, its protocol and service are identified by looking them up in the databases in "/etc/protocols
" and "/etc/services
". inetd
then looks up a normal Internet service in the "/etc/inetd.conf
" database, or a Open Network Computing Remote Procedure Call (ONC RPC)/Sun RPC based service in "/etc/rpc.conf
".
Sometimes, inetd
does not start the intended server directly but starts the TCP wrapper program, tcpd
(8), with the intended server name as its argument in "/etc/inetd.conf
". In this case, tcpd
runs the appropriate server program after logging the request and doing some additional checks using "/etc/hosts.deny
" and "/etc/hosts.allow
".
For system security, disable as many network service programs as possible. See Section 4.6.3, “Restricting access to some server services”.
See inetd
(8), inetd.conf
(5), protocols
(5), services
(5), tcpd
(8), hosts_access
(5), hosts_options
(5), rpcinfo
(8), portmap
(8), and "/usr/share/doc/portmap/portmapper.txt.gz
".
The system message can be customized by "/etc/default/syslogd
" and "/etc/syslog.conf
" for both the log file and on-screen display. See syslogd
(8) and syslog.conf
(5). See also Section 9.2.2, “Log analyzer”.
The kernel message can be customized by "/etc/default/klogd
" for both the log file and on-screen display. Set "KLOGD='-c 3'
" in this file and run "/etc/init.d/klogd restart
". See klogd
(8).
You may directly change the error message level by the following.
# dmesg -n3
Table 3.5. List of kernel error levels
error level value | error level name | meaning |
---|---|---|
0 | KERN_EMERG | system is unusable |
1 | KERN_ALERT | action must be taken immediately |
2 | KERN_CRIT | critical conditions |
3 | KERN_ERR | error conditions |
4 | KERN_WARNING | warning conditions |
5 | KERN_NOTICE | normal but significant condition |
6 | KERN_INFO | informational |
7 | KERN_DEBUG | debug-level messages |
For Linux kernel 2.6, the udev system provides a mechanism for automatic hardware discovery and initialization (see udev
(7)). Upon discovery of each device by the kernel, the udev system starts a user process which uses information from the sysfs filesystem (see Section 1.2.12, “procfs and sysfs”), loads required kernel modules supporting it using the modprobe
(8) program (see Section 3.5.12, “The kernel module initialization”), and creates corresponding device nodes.
If "/lib/modules/<kernel-version>/modules.dep
" was not generated properly by depmod
(8) for some reason, modules may not be loaded as expected by the udev system. Execute "depmod -a
" to fix it.
The name of device nodes can be configured by udev rule files in "/etc/udev/rules.d/
". Current default rules tend to create dynamically generated names resulting non-static device names except for cd and network devices. By adding your custom rules similar to what cd and network devices do, you can generate static device names for other devices such as USB memory sticks, too. See "Writing udev rules" or "/usr/share/doc/udev/writing_udev_rules/index.html
".
Since the udev system is somewhat of a moving target, I leave the details to other documentation and describe only the minimum information here.
For mounting rules in "/etc/fstab
", device nodes do not need to be static ones. You can use UUID to mount devices instead of device names such as "/dev/sda
". See Section 9.3.2, “Accessing partition using UUID”.
The modprobe
(8) program enables us to configure a running Linux kernel from a user process by adding and removing kernel modules. The udev system (see Section 3.5.11, “The udev system”) automates its invocation to help the kernel module initialization.
There are non-hardware modules and special hardware driver modules as the following which need to be pre-loaded by listing them in the "/etc/modules
" file (see modules
(5)).
iptables
(8), Section 5.9, “Netfilter infrastructure”), and
The configuration files for the modprobe
(8) program are located under the "/etc/modprobe.d/
" directory as explained in modprobe.conf
(5). (If you want to prevent some kernel modules from being auto-loaded, consider blacklisting them in the "/etc/modprobe.d/blacklist
" file.)
The "/lib/modules/<version>/modules.dep
" file generated by the depmod
(8) program describes module dependencies used by the modprobe
(8) program.
If you experience issues with boot-time module loading or with modprobe
(8), "depmod -a
" may resolve these issues by reconstructing "modules.dep
".
The modinfo
(8) program shows information about a Linux kernel module.
The lsmod
(8) program nicely formats the contents of the "/proc/modules
", showing what kernel modules are currently loaded.
You can identify exact hardware on your system. See Section 9.6.3, “Hardware identification”.
You may configure hardware at boot time to activate expected hardware features. See Section 9.6.4, “Hardware configuration”.
You can add support for your device by recompiling kernel. See Section 9.7, “The kernel”.
When a person (or a program) requests access to the system, authentication confirms the identity to be a trusted one.
Configuration errors of PAM may lock you out of your own system. You must have a rescue CD handy or set up an alternative boot partition. To recover, boot the system with one of these and correct things from there.
Normal Unix authentication is provided by the pam_unix
(8) module under the PAM (Pluggable Authentication Modules). Its 3 important configuration files, with ":
" separated entries, are the following.
Table 4.1. 3 important configuration files for pam_unix
(8)
file | permission | user | group | description |
---|---|---|---|---|
/etc/passwd | -rw-r--r-- | root | root | (sanitized) user account information |
/etc/shadow | -rw-r----- | root | shadow | secure user account information |
/etc/group | -rw-r--r-- | root | root | group information |
"/etc/passwd
" contains the following.
...
user1:x:1000:1000:User1 Name,,,:/home/user1:/bin/bash
user2:x:1001:1001:User2 Name,,,:/home/user2:/bin/bash
...
As explained in passwd
(5), each ":
" separated entry of this file means the following.
The second entry of "/etc/passwd
" was used for the encrypted password entry. After the introduction of "/etc/shadow
", this entry is used for the password specification entry.
Table 4.2. The second entry content of "/etc/passwd
"
content | meaning |
---|---|
(empty) | passwordless account |
x | the encrypted password is in "/etc/shadow" |
* | no login for this account |
! | no login for this account |
"/etc/shadow
" contains the following.
...
user1:$1$Xop0FYH9$IfxyQwBe9b8tiyIkt2P4F/:13262:0:99999:7:::
user2:$1$vXGZLVbS$ElyErNf/agUDsm1DehJMS/:13261:0:99999:7:::
...
As explained in shadow
(5), each ":
" separated entry of this file means the following.
(The "$1$
" indicates use of the MD5 encryption. The "*" indicates no login.)
"/etc/group
" contains the following.
group1:x:20:user1,user2
As explained in group
(5), each ":
" separated entry of this file means the following.
"/etc/gshadow
" provides the similar function as "/etc/shadow
" for "/etc/group
" but is not really used.
The actual group membership of a user may be dynamically added if the "auth optional pam_group.so
" line is added to "/etc/pam.d/common-auth
" and configured in "/etc/security/group.conf
". See pam_group
(8).
The base-passwd package contains an authoritative list of the users and groups: "/usr/share/doc/base-passwd/users-and-groups.html".
Here are a few notable commands to manage account information.
Table 4.3. List of commands to manage account information
command | function |
---|---|
getent passwd <user_name> | browse account information of "<user_name>" |
getent shadow <user_name> | browse shadowed account information of "<user_name>" |
getent group <group_name> | browse group information of "<group_name>" |
passwd | manage password for the account |
passwd -e | set one-time password for the account activation |
chage | manage password aging information |
You may need root privilege for some functions to work. See crypt (3) for the password and data encryption.
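For example, account data can be inspected read-only with getent (1); this works the same whether the data comes from local files or from other NSS sources.

```shell
# Show the account entry for root (7 ":"-separated fields as in /etc/passwd)
getent passwd root
# Extract a single field, e.g. the login shell (7th field)
getent passwd root | cut -d: -f7
```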
On a system set up with PAM and NSS such as the Debian alioth machine, the contents of the local "/etc/passwd", "/etc/group" and "/etc/shadow" may not be actively used by the system. The above commands are valid even under such an environment.
When creating an account during your system installation or with the passwd
(1) command, you should choose a good password which consists of 6 to 8 characters including one or more characters from each of the following sets according to passwd
(1).
Do not choose guessable words for the password.
There are independent tools to generate encrypted passwords with salt.
Table 4.4. List of tools to generate password
package | popcon | size | command | function |
---|---|---|---|---|
whois | V:10, I:88 | 396 | mkpasswd | over-featured front end to the crypt (3) library |
openssl | V:56, I:91 | 2380 | openssl passwd | compute password hashes (OpenSSL), passwd (1ssl) |
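As a sketch of how such tools are used (the salt value here is arbitrary), an MD5-crypt hash like the ones shown in "/etc/shadow" above can be generated as follows.

```shell
# Generate an MD5-crypt ("$1$...") password hash with an explicit salt
# (openssl from the openssl package; mkpasswd from whois is similar)
openssl passwd -1 -salt Xop0FYH9 'example-password'
# The output has the form $1$<salt>$<hash> and could serve as the
# second field of an "/etc/shadow" entry
```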
Modern Unix-like systems such as the Debian system provide the PAM (Pluggable Authentication Modules) and NSS (Name Service Switch) mechanisms for the local system administrator to configure the system. The role of these can be summarized as the following.
ls
(1) and id
(1).
These PAM and NSS systems need to be configured consistently.
The notable packages of PAM and NSS systems are the following.
Table 4.5. List of notable PAM and NSS systems
package | popcon | size | description |
---|---|---|---|
libpam-modules | V:88, I:99 | 1036 | Pluggable Authentication Modules (basic service) |
libpam-ldap | V:2, I:4 | 408 | Pluggable Authentication Module allowing LDAP interfaces |
libpam-cracklib | V:2, I:2 | 104 | Pluggable Authentication Module to enable cracklib support |
libpam-doc | I:0.6 | 1208 | Pluggable Authentication Modules (documentation in html and text) |
libc6 | V:97, I:99 | 10012 | GNU C Library: shared libraries which also provide the "Name Service Switch" service |
glibc-doc | I:3 | 2008 | GNU C Library: manpages |
glibc-doc-reference | I:1.4 | 12156 | GNU C Library: reference manual in info, pdf and html format (non-free) |
libnss-mdns | I:49 | 116 | NSS module for Multicast DNS name resolution |
libnss-ldap | I:4 | 268 | NSS module for using LDAP as a naming service |
libnss-ldapd | V:0.18, I:0.5 | 144 | NSS module for using LDAP as a naming service (new fork of libnss-ldap) |
libpam-doc
is essential for learning PAM configuration.
glibc-doc-reference
is essential for learning NSS configuration.
You can see a more extensive and current list with the "aptitude search 'libpam-|libnss-'" command. The acronym NSS may also mean "Network Security Service", which is different from "Name Service Switch".
PAM is the most basic way to initialize environment variables for each program with the system wide default value.
Here are a few notable configuration files accessed by the PAM.
Table 4.6. List of configuration files accessed by the PAM
configuration file | function |
---|---|
/etc/pam.d/<program_name> | set up PAM configuration for the "<program_name>" program; see pam (7) and pam.d (5) |
/etc/nsswitch.conf | set up NSS configuration with the entry for each service; see nsswitch.conf (5) |
/etc/nologin | limit the user login by the pam_nologin (8) module |
/etc/securetty | limit the tty for the root access by the pam_securetty (8) module |
/etc/security/access.conf | set access limit by the pam_access (8) module |
/etc/security/group.conf | set group based restraint by the pam_group (8) module |
/etc/security/pam_env.conf | set environment variables by the pam_env (8) module |
/etc/environment | set additional environment variables by the pam_env (8) module with the "readenv=1" argument |
/etc/default/locale | set locale by the pam_env (8) module with the "readenv=1 envfile=/etc/default/locale" argument (Debian) |
/etc/security/limits.conf | set resource restraint (ulimit, core, …) by the pam_limits (8) module |
/etc/security/time.conf | set time restraint by the pam_time (8) module |
The limitation of the password selection is implemented by the PAM modules, pam_unix
(8) and pam_cracklib
(8). They can be configured by their arguments.
PAM modules use suffix ".so
" for their filenames.
The modern centralized system management can be deployed using the centralized Lightweight Directory Access Protocol (LDAP) server to administer many Unix-like and non-Unix-like systems on the network. The open source implementation of the Lightweight Directory Access Protocol is OpenLDAP Software.
The LDAP server provides the account information through the use of PAM and NSS with the libpam-ldap and libnss-ldap packages on the Debian system. Several actions are required to enable this (I have not used this setup; the following is purely secondary information, so please read it in that context).
You set up the LDAP server, slapd (8).
You change the PAM configuration files in the "/etc/pam.d/
" directory to use "pam_ldap.so
" instead of the default "pam_unix.so
".
You set "/etc/pam_ldap.conf" as the configuration file for libpam-ldap and "/etc/pam_ldap.secret" as the file to store the password of the root.
You change the NSS configuration in the "/etc/nsswitch.conf
" file to use "ldap
" instead of the default ("compat
" or "file
").
You set "/etc/libnss-ldap.conf" as the configuration file for libnss-ldap.
You must make libpam-ldap use an SSL (or TLS) connection for the security of the password.
You should make libnss-ldap use an SSL (or TLS) connection to ensure the integrity of data at the cost of the LDAP network overhead.
You should run nscd (8) locally to cache any LDAP search results in order to reduce the LDAP network traffic.
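As an illustrative sketch only (not a tested configuration), the NSS side of such an LDAP setup typically ends up with "/etc/nsswitch.conf" entries like the following, consulting local files first and then the LDAP server.

```
passwd: files ldap
group:  files ldap
shadow: files ldap
hosts:  files dns
```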
See the documentation in pam_ldap.conf (5) and "/usr/share/doc/libpam-doc/html/" offered by the libpam-doc package, and "info libc 'Name Service Switch'" offered by the glibc-doc package.
Similarly, you can set up alternative centralized systems with other methods.
This is the famous phrase at the bottom of the old "info su
" page by Richard M. Stallman. Not to worry: the current su
command in Debian uses PAM, so that one can restrict the ability to use su
to the root
group by enabling the line with "pam_wheel.so
" in "/etc/pam.d/su
".
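On a Debian system, this amounts to uncommenting (enabling) a line of the following form in "/etc/pam.d/su"; note that pam_wheel (8) by default falls back to the group with GID 0 (root) when no "wheel" group exists.

```
auth       required   pam_wheel.so
```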
Installing the libpam-cracklib package enables you to force stricter password rules, for example, by having active lines in "/etc/pam.d/common-password" such as the following.
For lenny
:
password required pam_cracklib.so retry=3 minlen=9 difok=3
password required pam_unix.so use_authtok nullok md5
For squeeze
:
password required pam_cracklib.so retry=3 minlen=9 difok=3
password [success=1 default=ignore] pam_unix.so use_authtok nullok md5
password requisite pam_deny.so
password required pam_permit.so
See Section 9.5.15, “Alt-SysRq key” for restricting the kernel secure attention key (SAK) feature.
sudo
(8) is a program designed to allow a sysadmin to give limited root privileges to users and log root activity. sudo
requires only an ordinary user's password. Install sudo
package and activate it by setting options in "/etc/sudoers
". See configuration example at "/usr/share/doc/sudo/examples/sudoers
".
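For instance, a minimal "/etc/sudoers" (always edited via visudo (8)) might contain entries like the following; the group name and membership here are illustrative.

```
# User privilege specification
root    ALL=(ALL) ALL
# Allow members of group sudo to execute any command after
# authenticating with their own password
%sudo   ALL=(ALL) ALL
```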
My usage of sudo
for the single user system (see Section 1.1.12, “sudo configuration”) is aimed to protect myself from my own stupidity. Personally, I consider using sudo
a better alternative to using the system from the root account all the time. For example, the following changes the owner of "<some_file>
" to "<my_name>
".
$ sudo chown <my_name> <some_file>
Of course if you know the root password (as self-installed Debian users do), any command can be run under root from any user's account using "su -c
".
Security-Enhanced Linux (SELinux) is a framework to tighten the privilege model beyond the ordinary Unix-like security model with mandatory access control (MAC) policies. The root power may be restricted under some conditions.
For system security, it is a good idea to disable as many server programs as possible. This becomes critical for network servers. Having unused servers, activated either directly as a daemon or via a super-server program, is considered a security risk.
Many programs, such as sshd
(8), use PAM based access control. There are many ways to restrict access to some server services.
"/etc/default/<program_name>"
"/etc/inetd.conf" for the super-server
"/etc/hosts.deny" and "/etc/hosts.allow" for the TCP wrapper, tcpd (8)
"/etc/rpc.conf" for Sun RPC
"/etc/at.allow" and "/etc/at.deny" for atd (8)
"/etc/cron.allow" and "/etc/cron.deny" for crontab (1)
See Section 3.5.3, “The runlevel management example”, Section 3.5.4, “The default parameter for each init script”, Section 4.5.1, “Configuration files accessed by the PAM and NSS”, Section 3.5.8, “Network service initialization”, and Section 5.9, “Netfilter infrastructure”.
If you have problems with remote access in a recent Debian system, comment out the offending configuration, such as "ALL: PARANOID" in "/etc/hosts.deny", if it exists. (But you must be careful about the security risks involved with this kind of action.)
The information here may not be sufficient for your security needs but it should be a good start.
Many popular transport layer services communicate messages, including password authentication, in plain text. It is a very bad idea to transmit passwords in plain text over the wild Internet where they can be intercepted. You can run these services over "Transport Layer Security" (TLS) or its predecessor, "Secure Sockets Layer" (SSL), to secure the entire communication, including the password, by encryption.
Table 4.7. List of insecure and secure services and ports
insecure service name | port | secure service name | port |
---|---|---|---|
www (http) | 80 | https | 443 |
smtp (mail) | 25 | ssmtp (smtps) | 465 |
ftp-data | 20 | ftps-data | 989 |
ftp | 21 | ftps | 990 |
telnet | 23 | telnets | 992 |
imap2 | 143 | imaps | 993 |
pop3 | 110 | pop3s | 995 |
ldap | 389 | ldaps | 636 |
The encryption costs CPU time. As a CPU friendly alternative, you can keep communication in plain text while securing just password with the secure authentication protocol such as "Authenticated Post Office Protocol" (APOP) for POP and "Challenge-Response Authentication Mechanism MD5" (CRAM-MD5) for SMTP and IMAP. (For sending mail messages over the Internet to your mail server from your mail client, it is recently popular to use new message submission port 587 instead of traditional SMTP port 25 to avoid port 25 blocking by the network provider while authenticating yourself with CRAM-MD5.)
The Secure Shell (SSH) program provides secure encrypted communications between two untrusted hosts over an insecure network with the secure authentication. It consists of the OpenSSH client, ssh
(1), and the OpenSSH daemon, sshd
(8). This SSH can be used to tunnel the insecure protocol communication such as POP and X securely over the Internet with the port forwarding feature.
The client tries to authenticate itself using host-based authentication, public key authentication, challenge-response authentication, or password authentication. The use of public key authentication enables the remote password-less login. See Section 6.9, “The remote access server and utility (SSH)”.
Even when you run secure services such as Secure Shell (SSH) and Point-to-point tunneling protocol (PPTP) servers, there are still chances for the break-ins using brute force password guessing attack etc. from the Internet. Use of the firewall policy (see Section 5.9, “Netfilter infrastructure”) together with the following secure tools may improve the security situation.
Table 4.8. List of tools to provide extra security measures
package | popcon | size | description |
---|---|---|---|
knockd | V:0.15, I:0.3 | 164 | small port-knock daemon knockd (1) and client knock (1) |
denyhosts | V:2, I:2 | 356 | utility to help sysadmins thwart ssh hackers |
fail2ban | V:4, I:5 | 660 | ban IPs that cause multiple authentication errors |
libpam-shield | V:0.01, I:0.05 | 104 | lock out remote attackers trying password guessing |
To prevent people from accessing your machine with root privilege, you need to take the following actions.
With physical access to the hard disk, resetting the password is relatively easy with the following steps.
Edit "/etc/passwd" in the root partition and make the second entry for the root account empty.
If you have the edit access to the GRUB menu entry (see Section 3.3, “Stage 2: the boot loader”) for grub-rescue-pc at the boot time, it is even easier with the following steps.
Change the kernel boot parameter to something like "root=/dev/hda6 rw init=/bin/sh".
Edit "/etc/passwd" and make the second entry for the root account empty.
The root shell of the system is now accessible without a password.
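Emptying the second entry mechanically looks like the following; this demonstration works on a throwaway copy, never on the real "/etc/passwd" of a running system.

```shell
# Make a disposable copy to demonstrate the edit safely
f=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/bash\n' > "$f"
# Empty the second ":"-separated field of the root entry (passwordless root)
sed -i 's/^root:[^:]*:/root::/' "$f"
cat "$f"    # root::0:0:root:/root:/bin/bash
rm -f "$f"
```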
Once one has root shell access, he can access everything on the system and reset any passwords on the system. Furthermore, he may compromise passwords for all user accounts using brute force password cracking tools such as the john and crack packages (see Section 9.6.11, “System security and integrity check”). Such cracked passwords may lead to the compromise of other systems.
The only reasonable software solution to avoid all these concerns is to use a software encrypted root partition (or "/etc" partition) using dm-crypt and initramfs (see Section 9.4, “Data encryption tips”). You always need a password to boot the system, though.
For general guide to the GNU/Linux networking, read the Linux Network Administrators Guide.
Let's review the basic network infrastructure on the modern Debian system.
Table 5.1. List of network configuration tools
packages | popcon | size | type | description |
---|---|---|---|---|
ifupdown | V:60, I:99 | 228 | config::ifupdown | standardized tool to bring up and down the network (Debian specific) |
ifplugd | V:0.4, I:0.9 | 244 | , , | manage the wired network automatically |
ifupdown-extra | V:0.04, I:0.2 | 124 | , , | network testing script to enhance "ifupdown" package |
ifmetric | V:0.02, I:0.10 | 100 | , , | set routing metrics for a network interface |
guessnet | V:0.07, I:0.3 | 516 | , , | mapping script to enhance "ifupdown" package via "/etc/network/interfaces" file |
ifscheme | V:0.03, I:0.08 | 132 | , , | mapping scripts to enhance "ifupdown" package |
ifupdown-scripts-zg2 | V:0.00, I:0.04 | 232 | , , | Zugschlus' interface scripts for ifupdown's manual method |
network-manager | V:24, I:33 | 2628 | config::NM | NetworkManager (daemon): manage the network automatically |
network-manager-gnome | V:17, I:29 | 5616 | , , | NetworkManager (GNOME frontend) |
network-manager-kde | V:2, I:3 | 264 | , , | NetworkManager (KDE frontend) |
cnetworkmanager | V:0.05, I:0.2 | 208 | , , | NetworkManager (command-line client) |
wicd | V:0.5, I:2 | 88 | config::wicd | wired and wireless network manager (metapackage) |
wicd-cli | V:0.04, I:0.2 | 128 | , , | wired and wireless network manager (command-line client) |
wicd-curses | V:0.15, I:0.4 | 236 | , , | wired and wireless network manager (Curses client) |
wicd-daemon | V:1.9, I:2 | 1780 | , , | wired and wireless network manager (daemon) |
wicd-gtk | V:1.6, I:2 | 772 | , , | wired and wireless network manager (GTK+ client) |
iptables | V:27, I:99 | 1316 | config::Netfilter | administration tools for packet filtering and NAT (Netfilter) |
iproute | V:41, I:88 | 1044 | config::iproute2 | iproute2, IPv6 and other advanced network configuration: ip (8), tc (8), etc |
ifrename | V:0.2, I:0.6 | 236 | , , | rename network interfaces based on various static criteria: ifrename (8) |
ethtool | V:4, I:13 | 208 | , , | display or change Ethernet device settings |
iputils-ping | V:36, I:99 | 96 | test::iproute2 | test network reachability of a remote host by hostname or IP address (iproute2) |
iputils-arping | V:0.6, I:6 | 36 | , , | test network reachability of a remote host specified by the ARP address |
iputils-tracepath | V:0.4, I:2 | 72 | , , | trace the network path to a remote host |
net-tools | V:70, I:99 | 1016 | config::net-tools | NET-3 networking toolkit (net-tools, IPv4 network configuration): ifconfig (8) etc. |
inetutils-ping | V:0.03, I:0.12 | 296 | test::net-tools | test network reachability of a remote host by hostname or IP address (legacy, GNU) |
arping | V:0.5, I:3 | 104 | , , | test network reachability of a remote host specified by the ARP address (legacy) |
traceroute | V:13, I:99 | 192 | , , | trace the network path to a remote host (legacy, console) |
dhcp3-client | V:32, I:92 | 60 | config::low-level | DHCP client |
wpasupplicant | V:28, I:39 | 828 | , , | client support for WPA and WPA2 (IEEE 802.11i) |
wireless-tools | V:7, I:22 | 420 | , , | tools for manipulating Linux Wireless Extensions |
ppp | V:6, I:26 | 1016 | , , | PPP/PPPoE connection with chat |
pppoeconf | V:0.4, I:3 | 344 | config::helper | configuration helper for PPPoE connection |
pppconfig | V:0.2, I:2 | 964 | , , | configuration helper for PPP connection with chat |
wvdial | V:0.5, I:2 | 416 | , , | configuration helper for PPP connection with wvdial and ppp |
mtr-tiny | V:2, I:26 | 120 | test::low-level | trace the network path to a remote host (curses) |
mtr | V:0.7, I:3 | 180 | , , | trace the network path to a remote host (curses and GTK+) |
gnome-nettool | V:2, I:33 | 2848 | , , | tools for common network information operations (GNOME) |
nmap | V:6, I:31 | 7112 | , , | network mapper / port scanner (Nmap, console) |
zenmap | V:0.2, I:1.3 | 2400 | , , | network mapper / port scanner (GTK+) |
knmap | V:0.10, I:0.6 | 712 | , , | network mapper / port scanner (KDE) |
tcpdump | V:3, I:24 | 1020 | , , | network traffic analyzer (Tcpdump, console) |
wireshark | V:1.4, I:9 | 2052 | , , | network traffic analyzer (Wireshark, GTK+) |
tshark | V:0.4, I:3 | 276 | , , | network traffic analyzer (console) |
nagios3 | V:1.0, I:1.8 | 32 | , , | monitoring and management system for hosts, services and networks (Nagios) |
tcptrace | V:0.05, I:0.4 | 436 | , , | produce a summarization of the connections from tcpdump output |
snort | V:0.6, I:0.8 | 1260 | , , | flexible network intrusion detection system (Snort) |
ntop | V:1.2, I:2 | 11124 | , , | display network usage in web browser |
dnsutils | V:14, I:90 | 412 | , , | network clients provided with BIND: nslookup (8), nsupdate (8), dig (8) |
dlint | V:0.4, I:6 | 96 | , , | check DNS zone information using nameserver lookups |
dnstracer | V:0.11, I:0.5 | 92 | , , | trace a chain of DNS servers to the source |
Choosing the domain name is tricky for normal PC workstation users. The PC workstation may be a mobile one hopping around the network, or located behind a NAT firewall inaccessible from the Internet. In such cases, you may not want its domain name to be a valid domain name, to avoid name collision.
When you use an invalid domain name, you need to spoof the domain name used by some programs such as MTA for their proper operation. See Section 6.3.3, “The mail address configuration”.
According to rfc2606, "invalid
" seems to be a choice for the top level domain (TLD) to construct domain names that are sure to be invalid from the Internet.
The mDNS network discovery protocol (Apple Bonjour / Apple Rendezvous, Avahi on Debian) uses "local" as the pseudo-top-level domain. Microsoft also seems to promote "local" for the TLD of local area network.
If the DNS service on your LAN uses "local" as the TLD for your LAN, it may interfere with mDNS.
Other popular choices for the invalid TLD seem to be "localdomain
", "lan
", "localnet
", or "home
" according to my incoming mail analysis.
The hostname resolution is currently supported by the NSS (Name Service Switch) mechanism too. The flow of this resolution is the following.
The "/etc/nsswitch.conf" file with a stanza like "hosts: files dns" dictates the hostname resolution order. (This replaces the old functionality of the "order" stanza in "/etc/host.conf".)
The files method is invoked first. If the hostname is found in the "/etc/hosts" file, it returns all valid addresses for it and exits. (The "/etc/host.conf" file contains "multi on".)
The dns method is invoked next. If the hostname is found by a query to the Internet Domain Name System (DNS) identified by the "/etc/resolv.conf" file, it returns all valid addresses for it and exits.
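This resolution chain can be exercised with getent (1), which follows the "hosts" line of "/etc/nsswitch.conf".

```shell
# Resolve through NSS: files first ("/etc/hosts"), then DNS
getent hosts localhost
```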
The "/etc/hosts" file, which associates IP addresses with hostnames, contains the following.
127.0.0.1 localhost
127.0.1.1 <host_name>.<domain_name> <host_name>

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Here, the <host_name> matches the hostname defined in "/etc/hostname", and <host_name>.<domain_name> gives the fully qualified domain name (FQDN) of this host.
For <domain_name> of the mobile PC without the real FQDN, you may pick a bogus and safe TLD such as "lan
", "home
", "invalid
", "localdomain
", "none
", and "private
".
The "/etc/resolv.conf" is a static file if the resolvconf package is not installed. If installed, it is a symbolic link. Either way, it contains information that initializes the resolver routines. If the DNS server is found at IP="192.168.11.1", it contains the following.
nameserver 192.168.11.1
The resolvconf
package makes this "/etc/resolv.conf
" into a symbolic link and manages its contents by the hook scripts automatically.
The hostname resolution via Multicast DNS (using Zeroconf, aka Apple Bonjour / Apple Rendezvous) which effectively allows name resolution by common Unix/Linux programs in the ad-hoc mDNS domain "local
", can be provided by installing the libnss-mdns
package. The "/etc/nsswitch.conf
" file should have stanza like "hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
" to enable this functionality.
The network interface name, e.g. eth0, is assigned to each piece of hardware by the Linux kernel through the user space configuration mechanism, udev (see Section 3.5.11, “The udev system”), as it is found. The network interface name is referred to as the physical interface in ifup (8) and interfaces (5).
In order to ensure each network interface is named persistently across reboots using the MAC address etc., there is a record file "/etc/udev/rules.d/70-persistent-net.rules". This file is automatically generated by the "/lib/udev/write_net_rules" program, probably run by the "persistent-net-generator.rules" rules file. You can modify it to change the naming rule.
When editing the "/etc/udev/rules.d/70-persistent-net.rules
" rules file, you must keep each rule on a single line and the MAC address in lowercase. For example, if you find "Firewire device" and "PCI device" in this file, you probably want to name "PCI device" as eth0
and configure it as the primary network interface.
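A rule in this file looks roughly like the following (simplified sketch; the MAC address is illustrative); each rule must stay on a single line.

```
# Net device with MAC 00:1c:25:00:00:01 is always named eth0
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:1c:25:00:00:01", NAME="eth0"
```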
Let us be reminded of the IPv4 32 bit address ranges in each class reserved for use on the local area networks (LANs) by rfc1918. These addresses are guaranteed not to conflict with any addresses on the Internet proper.
Table 5.2. List of network address ranges
Class | network addresses | net mask | net mask /bits | # of subnets |
---|---|---|---|---|
A | 10.x.x.x | 255.0.0.0 | /8 | 1 |
B | 172.16.x.x — 172.31.x.x | 255.255.0.0 | /16 | 16 |
C | 192.168.0.x — 192.168.255.x | 255.255.255.0 | /24 | 256 |
If one of these addresses is assigned to a host, then that host must not access the Internet directly but must access it through a gateway that acts as a proxy for individual services or else does Network Address Translation (NAT). The broadband router usually performs NAT for the consumer LAN environment.
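The rfc1918 membership test can be sketched in shell as follows (pattern matching only, not a full IPv4 validator).

```shell
# Return success (0) if the IPv4 address is in an rfc1918 private range
is_private() {
  case "$1" in
    10.*)                                   return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*)  return 0 ;;
    192.168.*)                              return 0 ;;
    *)                                      return 1 ;;
  esac
}
is_private 192.168.11.1 && echo "private"   # prints "private"
is_private 203.0.113.1  || echo "public"    # prints "public"
```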
Although most hardware devices are supported by the Debian system, there are some network devices which require DFSG non-free external hardware drivers to support them. Please see Section 9.7.8, “Non-free hardware drivers”.
Debian squeeze
systems can manage the network connection via management daemon software such as NetworkManager (NM) (network-manager and associated packages) or Wicd (wicd and associated packages).
These are meant to replace the legacy ifupdown package.
Do not use these automatic network configuration tools for servers. They are aimed primarily at mobile desktop users on laptops.
These modern network configuration tools need to be configured properly to avoid conflicting with the legacy ifupdown
package and its configuration file "/etc/network/interfaces
".
Some features of these automatic network configuration tools may suffer regressions. These are not as robust as the legacy ifupdown
package. Check BTS of network-manager and BTS of wicd for current issues and limitations.
Official documentations for NM and Wicd on Debian are provided in "/usr/share/doc/network-manager/README.Debian
" and "/usr/share/doc/wicd/README.Debian
", respectively.
Essentially, the network configuration for desktop is done as follows.
Make the desktop user, e.g. foo, belong to the "netdev" group by the following (alternatively, do it automatically via D-bus under modern desktop environments such as GNOME and KDE).
$ sudo adduser foo netdev
Keep the configuration of "/etc/network/interfaces" as simple as the following.
auto lo
iface lo inet loopback
Restart NM or Wicd by the following.
$ sudo /etc/init.d/network-manager restart
$ sudo /etc/init.d/wicd restart
Only interfaces which are not listed in "/etc/network/interfaces
" or which have been configured with "auto …
" or "allow-hotplug …
" and "iface … inet dhcp
" (with no other options) are managed by NM to avoid conflict with ifupdown
.
If you wish to extend network configuration capabilities of NM, please seek appropriate plug-in modules and supplemental packages such as network-manager-openconnect
, network-manager-openvpn-gnome
, network-manager-pptp-gnome
, mobile-broadband-provider-info
, gnome-bluetooth
, etc. The same goes for those of Wicd.
These automatic network configuration tools may not be compatible with esoteric configurations of legacy ifupdown
in "/etc/network/interfaces
" such as ones in Section 5.5, “The basic network configuration with ifupdown (legacy)” and Section 5.6, “The advanced network configuration with ifupdown (legacy)”. Check BTS of network-manager and BTS of wicd for current issues and limitations.
When the method described in Section 5.2, “The modern network configuration for desktop” does not satisfy your needs, you should use the legacy network connection and configuration method, which combines many simpler tools.
The legacy network connection is specific for each method (see Section 5.4, “The network connection method (legacy)”).
There are 2 types of programs for the low level network configuration on Linux (see Section 5.7.1, “Iproute2 commands”).
The net-tools programs (ifconfig (8), …) are from the Linux NET-3 networking system. Most of these are obsolete now.
The iproute2 programs (ip (8), …) are the current Linux networking system.
Although these low level networking programs are powerful, they are cumbersome to use. So high level network configuration systems have been created.
The ifupdown package is the de facto standard for such a high level network configuration system on Debian. It enables you to bring up the network simply by running, e.g., "ifup eth0". Its configuration file is the "/etc/network/interfaces" file and its typical contents are the following.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
The resolvconf package was created to supplement the ifupdown system to support smooth reconfiguration of network address resolution by automating the rewrite of the resolver configuration file "/etc/resolv.conf". Now, most Debian network configuration packages are modified to use the resolvconf package (see "/usr/share/doc/resolvconf/README.Debian").
Helper scripts to the ifupdown package such as ifplugd, guessnet, ifscheme, etc. were created to automate dynamic configuration of the network environment, such as one for a mobile PC on wired LANs. These are relatively difficult to use but play well with the existing ifupdown system.
These are explained in detail with examples (see Section 5.5, “The basic network configuration with ifupdown (legacy)” and Section 5.6, “The advanced network configuration with ifupdown (legacy)”).
The connection test methods described in this section are meant for testing purposes. They are not meant to be used directly for the daily network connection. You are advised to use them via NM, Wicd, or the ifupdown package (see Section 5.2, “The modern network configuration for desktop” and Section 5.5, “The basic network configuration with ifupdown (legacy)”).
The typical network connection method and connection path for a PC can be summarized as the following.
Table 5.3. List of network connection methods and connection paths
PC | connection method | connection path |
---|---|---|
Serial port (ppp0) | PPP | ⇔ modem ⇔ POTS ⇔ dial-up access point ⇔ ISP |
Ethernet port (eth0) | PPPoE/DHCP/Static | ⇔ BB-modem ⇔ BB service ⇔ BB access point ⇔ ISP |
Ethernet port (eth0) | DHCP/Static | ⇔ LAN ⇔ BB-router with network address translation (NAT) (⇔ BB-modem …) |
Here is a summary of the configuration scripts for each connection method.
Table 5.4. List of network connection configurations
connection method | configuration | backend package(s) |
---|---|---|
PPP | pppconfig to create deterministic chat | pppconfig, ppp |
PPP (alternative) | wvdialconf to create heuristic chat | ppp, wvdial |
PPPoE | pppoeconf to create deterministic chat | pppoeconf, ppp |
DHCP | described in "/etc/dhcp3/dhclient.conf" | dhcp3-client |
static IP (IPv4) | described in "/etc/network/interfaces" | net-tools |
static IP (IPv6) | described in "/etc/network/interfaces" | iproute |
The network connection acronyms mean the following.
Table 5.5. List of network connection acronyms
acronym | meaning |
---|---|
POTS | plain old telephone service |
BB | broadband |
BB-service | e.g., the digital subscriber line (DSL), the cable TV, or the fiber to the premises (FTTP) |
BB-modem | e.g., the DSL modem, the cable modem, or the optical network terminal (ONT) |
LAN | local area network |
WAN | wide area network |
DHCP | dynamic host configuration protocol |
PPP | point-to-point protocol |
PPPoE | point-to-point protocol over Ethernet |
ISP | Internet service provider |
The WAN connection services via cable TV are generally served by DHCP or PPPoE. The ones by ADSL and FTTP are generally served by PPPoE. You have to consult your ISP for exact configuration requirements of the WAN connection.
When a BB-router is used to create the home LAN environment, PCs on the LAN are connected to the WAN via the BB-router with network address translation (NAT). In such a case, the PC's network interfaces on the LAN are served static IPs or DHCP leases from the BB-router. The BB-router must be configured to connect to the WAN following the instructions from your ISP.
The typical modern home and small business network, i.e. LAN, is connected to the WAN (Internet) using some consumer grade broadband router. The LAN behind this router is usually served by the dynamic host configuration protocol (DHCP) server running on the router.
Just install the dhcp3-client
package for the Ethernet served by the dynamic host configuration protocol (DHCP).
No special action is needed for the Ethernet served by the static IP.
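For reference, a static IP stanza in "/etc/network/interfaces" looks like the following (addresses illustrative); see interfaces (5) for the full syntax.

```
auto eth0
iface eth0 inet static
    address 192.168.11.100
    netmask 255.255.255.0
    gateway 192.168.11.1
```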
The configuration script pppconfig
configures the PPP connection interactively just by selecting the following.
Table 5.6. List of configuration files for the PPP connection with pppconfig
file | function |
---|---|
/etc/ppp/peers/<isp_name>
|
The pppconfig generated configuration file for pppd specific to <isp_name>
|
/etc/chatscripts/<isp_name>
|
The pppconfig generated configuration file for chat specific to <isp_name>
|
/etc/ppp/options
|
The general execution parameter for pppd
|
/etc/ppp/pap-secrets
|
Authentication data for the PAP (security risk) |
/etc/ppp/chap-secrets
|
Authentication data for the CHAP (more secure) |
The "<isp_name>" value of "provider
" is assumed if pon
and poff
commands are invoked without arguments.
You can test configuration using low level network configuration tools as the following.
$ sudo pon <isp_name>
 ...
$ sudo poff <isp_name>
See "/usr/share/doc/ppp/README.Debian.gz
".
A different approach to using pppd
(8) is to run it from wvdial
(1) which comes in the wvdial
package. Instead of pppd
running chat
(8) to dial in and negotiate the connection, wvdial
does the dialing and initial negotiating and then starts pppd
to do the rest.
The configuration script wvdialconf
configures the PPP connection interactively just by selecting the following.
wvdial
succeeds in making the connection in most cases and maintains authentication data list automatically.
Table 5.7. List of configuration files for the PPP connection with wvdialconf
file | function |
---|---|
/etc/ppp/peers/wvdial
|
The wvdialconf generated configuration file for pppd specific to wvdial
|
/etc/wvdial.conf
|
The wvdialconf generated configuration file
|
/etc/ppp/options
|
The general execution parameter for pppd
|
/etc/ppp/pap-secrets
|
Authentication data for the PAP (security risk) |
/etc/ppp/chap-secrets
|
Authentication data for the CHAP (more secure) |
You can test configuration using low level network configuration tools as the following.
$ sudo wvdial
 ...
$ sudo killall wvdial
See wvdial
(1) and wvdial.conf
(5).
When your ISP serves you with a PPPoE connection and you decide to connect your PC directly to the WAN, the network of your PC must be configured with PPPoE. PPPoE stands for PPP over Ethernet. The configuration script pppoeconf
configures the PPPoE connection interactively.
The configuration files are the following.
Table 5.8. List of configuration files for the PPPoE connection with pppoeconf
file | function |
---|---|
/etc/ppp/peers/dsl-provider
|
The pppoeconf generated configuration file for pppd specific to pppoe
|
/etc/ppp/options
|
The general execution parameter for pppd
|
/etc/ppp/pap-secrets
|
Authentication data for the PAP (security risk) |
/etc/ppp/chap-secrets
|
Authentication data for the CHAP (more secure) |
You can test configuration using low level network configuration tools as the following.
$ sudo /sbin/ifconfig eth0 up
$ sudo pon dsl-provider
 ...
$ sudo poff dsl-provider
$ sudo /sbin/ifconfig eth0 down
See "/usr/share/doc/pppoeconf/README.Debian
".
The traditional TCP/IP network setup on the Debian system uses the ifupdown
package as a high level tool. There are 2 typical cases.
For the dynamic IP: install the resolvconf
package to enable easy switching of your network configuration (see Section 5.5.4, “The network interface served by the DHCP”).
For the static IP: skip the resolvconf
package to keep your system simple (see Section 5.5.5, “The network interface with the static IP”).
These traditional setup methods are quite useful if you wish to set up advanced configuration (see Section 5.5, “The basic network configuration with ifupdown (legacy)”).
The ifupdown
package provides the standardized framework for the high level network configuration in the Debian system. In this section, we learn the basic network configuration with ifupdown
with simplified introduction and many typical examples.
The ifupdown
package contains 2 commands: ifup
(8) and ifdown
(8). They offer high level network configuration dictated by the configuration file "/etc/network/interfaces".
Table 5.9. List of basic network configuration commands with ifupdown
command | action |
---|---|
ifup eth0
|
bring up a network interface eth0 with the configuration eth0 if "iface eth0 " stanza exists
|
ifdown eth0
|
bring down a network interface eth0 with the configuration eth0 if "iface eth0 " stanza exists
|
Do not use low level configuration tools such as ifconfig
(8) and ip
(8) commands to configure an interface in up state.
There is no command ifupdown
.
The key syntax of "/etc/network/interfaces
" as explained in interfaces
(5) can be summarized as the following.
Table 5.10. List of stanzas in "/etc/network/interfaces
"
stanza | meaning |
---|---|
"auto <interface_name> "
|
start interface <interface_name> upon start of the system |
"allow-auto <interface_name> "
|
alias of "auto <interface_name> " |
"allow-hotplug <interface_name> "
|
start interface <interface_name> when the kernel detects a hotplug event from the interface |
Lines started with "iface <config_name> … "
|
define the network configuration <config_name> |
Lines started with "mapping <interface_name_glob> "
|
define mapping value of <config_name> for the matching <interface_name> |
A line starting with a hash "# "
|
ignore as comments (end-of-line comments are not supported) |
A line ending with a backslash "\ "
|
extend the configuration to the next line |
Lines starting with the iface
stanza have the following syntax.
iface <config_name> <address_family> <method_name>
 <option1> <value1>
 <option2> <value2>
 ...
For the basic configuration, the mapping
stanza is not used and you use the network interface name as the network configuration name (See Section 5.6.5, “The mapping stanza”).
Do not define duplicates of the "iface
" stanza for a network interface in "/etc/network/interfaces
".
The following configuration entry in the "/etc/network/interfaces
" file brings up the loopback network interface lo
upon booting the system (via auto
stanza).
auto lo
iface lo inet loopback
This one always exists in the "/etc/network/interfaces
" file.
After preparing the system by Section 5.4.1, “The DHCP connection with the Ethernet”, the network interface served by the DHCP is configured by creating the configuration entry in the "/etc/network/interfaces
" file as the following.
allow-hotplug eth0
iface eth0 inet dhcp
 hostname "mymachine"
When the Linux kernel detects the physical interface eth0
, the allow-hotplug
stanza causes ifup
to bring up the interface and the iface
stanza causes ifup
to use DHCP to configure the interface.
The network interface served by the static IP is configured by creating the configuration entry in the "/etc/network/interfaces
" file as the following.
allow-hotplug eth0
iface eth0 inet static
 address 192.168.11.100
 netmask 255.255.255.0
 broadcast 192.168.11.255
 gateway 192.168.11.1
 dns-domain lan
 dns-nameservers 192.168.11.1
When the Linux kernel detects the physical interface eth0
, the allow-hotplug
stanza causes ifup
to bring up the interface and the iface
stanza causes ifup
to use the static IP to configure the interface.
Here, I assumed the following.
IP address range of the LAN: 192.168.11.0 - 192.168.11.255
IP address of the gateway router: 192.168.11.1
IP address of the PC: 192.168.11.100
resolvconf package: installed
domain name: "lan"
IP address of the DNS server: 192.168.11.1
When the resolvconf
package is not installed, DNS related configuration needs to be done manually by editing the "/etc/resolv.conf
" as the following.
nameserver 192.168.11.1
domain lan
The IP addresses used in the above example are not meant to be copied literally. You have to adjust IP numbers to your actual network configuration.
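When you adjust these numbers, the broadcast address must stay consistent with the address and netmask. The following is a minimal shell sketch of that arithmetic, using the example values from above (your real values will differ):

```shell
# Derive the IPv4 broadcast address from an address and netmask with
# POSIX shell arithmetic (example values from the text above).
ip=192.168.11.100
mask=255.255.255.0
oldIFS=$IFS; IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS
# each broadcast octet = address octet OR (inverted mask octet)
bcast="$((i1 | (255 - m1))).$((i2 | (255 - m2))).$((i3 | (255 - m3))).$((i4 | (255 - m4)))"
echo "$bcast"
```

For the example values, this prints 192.168.11.255, matching the broadcast line in the configuration entry above.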
The wireless LAN (WLAN for short) provides the fast wireless connectivity through the spread-spectrum communication of unlicensed radio bands based on the set of standards called IEEE 802.11.
The WLAN interfaces are almost like normal Ethernet interfaces but require some network ID and encryption key data to be provided when they are initialized. Their high level network tools are exactly the same as that of Ethernet interfaces except interface names are a bit different like eth1
, wlan0
, ath0
, wifi0
, … depending on the kernel drivers used.
The wmaster0
device is the master device, an internal device used only by SoftMAC drivers with the new mac80211 API of Linux.
Here are some keywords to remember for the WLAN.
Table 5.11. List of acronyms for WLAN
acronym | full word | meaning |
---|---|---|
NWID | Network ID | 16 bit network ID used by pre-802.11 WaveLAN network (very deprecated) |
(E)SSID | (Extended) Service Set Identifier | network name of the Wireless Access Points (APs) interconnected to form an integrated 802.11 wireless LAN, Domain ID |
WEP, (WEP2) | Wired Equivalent Privacy | 1st generation 64-bit (128-bit) wireless encryption standard with 40-bit key (deprecated) |
WPA | Wi-Fi Protected Access | 2nd generation wireless encryption standard (most of 802.11i), compatible with WEP |
WPA2 | Wi-Fi Protected Access 2 | 3rd generation wireless encryption standard (full 802.11i), non-compatible with WEP |
The actual choice of protocol is usually limited by the wireless router you deploy.
You need to install the wpasupplicant
package to support the WLAN with the new WPA/WPA2.
In case of the DHCP served IP on WLAN connection, the "/etc/network/interfaces
" file entry should be as the following.
allow-hotplug ath0
iface ath0 inet dhcp
 wpa-ssid homezone
 # hexadecimal psk is encoded from a plaintext passphrase
 wpa-psk 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
See "/usr/share/doc/wpasupplicant/README.modes.gz
".
You need to install the wireless-tools
package to support the WLAN with the old WEP. (Your consumer grade router may still be using this insecure infrastructure but this is better than nothing.)
Please note that your network traffic on WLAN with WEP may be sniffed by others.
In case of the DHCP served IP on WLAN connection, the "/etc/network/interfaces
" file entry should be as the following.
allow-hotplug eth0
iface eth0 inet dhcp
 wireless-essid Home
 wireless-key1 0123-4567-89ab-cdef
 wireless-key2 12345678
 wireless-key3 s:password
 wireless-defaultkey 2
 wireless-keymode open
See "/usr/share/doc/wireless-tools/README.Debian
".
You need to configure the PPP connection first as described before (see Section 5.4.3, “The PPP connection with pppconfig”). Then, add the "/etc/network/interfaces
" file entry for the primary PPP device ppp0
as the following.
iface ppp0 inet ppp provider <isp_name>
You need to configure the alternative PPP connection with wvdial
first as described before (see Section 5.4.4, “The alternative PPP connection with wvdialconf”). Then, add the "/etc/network/interfaces
" file entry for the primary PPP device ppp0
as the following.
iface ppp0 inet wvdial
For PC connected directly to the WAN served by the PPPoE, you need to configure system with the PPPoE connection as described before (see Section 5.4.5, “The PPPoE connection with pppoeconf”). Then, add the "/etc/network/interfaces
" file entry for the primary PPPoE device eth0
as the following.
allow-hotplug eth0
iface eth0 inet manual
 pre-up /sbin/ifconfig eth0 up
 up ifup ppp0=dsl
 down ifdown ppp0=dsl
 post-down /sbin/ifconfig eth0 down
# The following is used internally only
iface dsl inet ppp
 provider dsl-provider
The "/etc/network/run/ifstate
" file stores the intended network configuration states of all the currently active network interfaces managed by the ifupdown
package. Unfortunately, even if the ifupdown
system fails to bring up an interface as intended, the "/etc/network/run/ifstate
" file still lists it as active.
Unless the output of the ifconfig
(8) command for an interface has a line like the following example, the interface cannot be used as a part of the IPv4 network.
inet addr:192.168.11.2 Bcast:192.168.11.255 Mask:255.255.255.0
For the Ethernet device connected to the PPPoE, the output of the ifconfig
(8) command lacks a line like the above example.
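This check can be scripted. The sketch below uses a captured sample line as a stand-in for real ifconfig(8) output (interface names and addresses vary per system):

```shell
# Decide whether captured "ifconfig <interface>" output indicates a usable
# IPv4 setup by looking for the "inet addr:" line described in the text.
# sample_output is a stand-in for real command output on your system.
sample_output='inet addr:192.168.11.2  Bcast:192.168.11.255  Mask:255.255.255.0'
if printf '%s\n' "$sample_output" | grep -q 'inet addr:'; then
  state=up_with_ipv4
else
  state=no_ipv4
fi
echo "$state"
```

On a live system you would pipe "/sbin/ifconfig eth0" into the same grep instead of the sample variable.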
When you try to reconfigure the interface, e.g. eth0
, you must disable it first with the "sudo ifdown eth0
" command. This removes the entry of eth0
from the "/etc/network/run/ifstate
" file. (This may result in some error messages if eth0
is not active or was configured improperly previously. So far, it seems safe to do this on a simple single-user workstation at any time.)
You are now free to rewrite the "/etc/network/interfaces
" contents as needed to reconfigure the network interface, eth0
.
Then, you can reactivate eth0
with the "sudo ifup eth0
" command.
You can (re)initialize the network interface simply by "sudo ifdown eth0;sudo ifup eth0
".
The ifupdown-extra
package provides easy network connection tests for use with the ifupdown
package.
The network-test
(1) command can be used from the shell.
Automatic checks are performed upon the ifup
command execution.
The network-test
command frees you from the execution of cumbersome low level commands to analyze the network problem.
The automatic scripts are installed in "/etc/network/*/
" and perform the following.
checking the network connection
adding static routes per the "/etc/network/routes
" definition
recording the results in the "/var/log/syslog
" file
This syslog record is quite useful for administration of the network problem on the remote system.
The automatic behavior of the ifupdown-extra
package is configurable with the "/etc/default/network-test
". Some of these automatic checks slow down the system boot-up a little bit since it takes some time to listen for ARP replies.
The functionality of the ifupdown
package can be improved beyond what was described in Section 5.5, “The basic network configuration with ifupdown (legacy)” with the advanced knowledge.
The functionalities described here are completely optional. I, being lazy and minimalist, rarely bother to use these.
If you could not set up your network connection with the information in Section 5.5, “The basic network configuration with ifupdown (legacy)”, you may make the situation worse by using the information below.
The ifplugd
package is an older automatic network configuration tool which can manage only Ethernet connections. It solves the unplugged/replugged Ethernet cable issues for mobile PCs, etc. If you have NetworkManager or Wicd (see Section 5.2, “The modern network configuration for desktop”) installed, you do not need this package.
This package runs a daemon and replaces the auto or allow-hotplug functionalities (see Table 5.10, “List of stanzas in "/etc/network/interfaces
"”) and starts interfaces upon their connection to the network.
Here is how to use the ifplugd
package for the internal Ethernet port, e.g. eth0
.
Remove the "auto eth0 " or "allow-hotplug eth0 " stanza in "/etc/network/interfaces ".
Keep the "iface eth0 inet … " and "mapping … " stanzas in "/etc/network/interfaces ".
Install the ifplugd package.
Run "sudo dpkg-reconfigure ifplugd ".
Put eth0 as the "static interfaces to be watched by ifplugd".
Now, the network reconfiguration works as you desire.
Upon power-on or upon hardware discovery, the interface is not brought up by itself.
The arguments for the ifplugd
(8) command can set its behaviors such as the delay for reconfiguring interfaces.
The ifmetric
package enables us to manipulate the metrics of routes a posteriori, even for DHCP.
The following sets the eth0
interface to be preferred over the wlan0
interface.
Install the ifmetric package.
Add an option line with "metric 0 " just below the "iface eth0 inet dhcp " line.
Add an option line with "metric 1 " just below the "iface wlan0 inet dhcp " line.
The metric 0 means the highest priority route and is the default one. Larger metric values mean lower priority routes. The IP address of the active interface with the lowest metric value becomes the originating one. See ifmetric
(8).
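With the two option lines described above, the pertinent part of "/etc/network/interfaces" would look like the following sketch (interface names eth0 and wlan0 are the ones assumed in the text; adjust to your system):

```
iface eth0 inet dhcp
 metric 0
iface wlan0 inet dhcp
 metric 1
```

With both interfaces up, traffic then prefers eth0 until it goes down, at which point routes via wlan0 take over.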
A single physical Ethernet interface can be configured as multiple virtual interfaces with different IP addresses. Usually the purpose is to connect an interface to several IP subnetworks. For example, IP address based virtual web hosting by a single network interface is one such application.
For example, let's suppose the following.
The LAN uses the IP subnet 192.168.0.x/24 .
The Internet is reached via DHCP on the physical interface eth0 .
The address 192.168.0.1 is assigned to the virtual interface eth0:0 for the LAN.
The following stanzas in "/etc/network/interfaces
" configure your network.
iface eth0 inet dhcp
 metric 0
iface eth0:0 inet static
 address 192.168.0.1
 netmask 255.255.255.0
 network 192.168.0.0
 broadcast 192.168.0.255
 metric 1
Although this configuration example with network address translation (NAT) using netfilter/iptables (see Section 5.9, “Netfilter infrastructure”) can provide a cheap router for the LAN with only a single interface, there is no real firewall capability with such a setup. You should use 2 physical interfaces with NAT to secure the local network from the Internet.
The ifupdown
package offers advanced network configuration using the network configuration name and the network interface name. I use slightly different terminology from one used in ifup
(8) and interfaces
(5).
Table 5.12. List of terminology for network devices
manpage terminology | my terminology | examples in the following text | description |
---|---|---|---|
physical interface name | network interface name |
lo , eth0 , <interface_name>
|
name given by the Linux kernel (using udev mechanism)
|
logical interface name | network configuration name |
config1 , config2 , <config_name>
|
name token following iface in the "/etc/network/interfaces "
|
Basic network configuration commands in Section 5.5.1, “The command syntax simplified” require the network configuration name token of the iface
stanza to match the network interface name in the "/etc/network/interfaces
".
Advanced network configuration commands enable the separation of the network configuration name and the network interface name in the "/etc/network/interfaces
" as the following.
Table 5.13. List of advanced network configuration commands with ifupdown
command | action |
---|---|
ifup eth0=config1
|
bring up a network interface eth0 with the configuration config1
|
ifdown eth0=config1
|
bring down a network interface eth0 with the configuration config1
|
ifup eth0
|
bring up a network interface eth0 with the configuration selected by mapping stanza
|
ifdown eth0
|
bring down a network interface eth0 with the configuration selected by mapping stanza
|
We skipped explaining the mapping
stanza in the "/etc/network/interfaces
" in Section 5.5.2, “The basic syntax of "/etc/network/interfaces"” to avoid complication. This stanza has the following syntax.
mapping <interface_name_glob>
 script <script_name>
 map <script_input1>
 map <script_input2>
 map ...
This provides advanced feature to the "/etc/network/interfaces
" file by automating the choice of the configuration with the mapping script specified by <script_name>
.
Let's follow the execution of the following.
$ sudo ifup eth0
When the "<interface_name_glob>
" matches "eth0
", this execution produces the execution of the following command to configure eth0
automatically.
$ sudo ifup eth0=$(echo -e '<script_input1> \n <script_input2> \n ...' | <script_name> eth0)
Here, script input lines with "map
" are optional and can be repeated.
The glob for mapping
stanza works like shell filename glob (see Section 1.5.6, “Shell glob”).
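The calling convention above can be mimicked with a toy mapping script. In this sketch, the function name map_script and its pick-the-first-candidate logic are illustrative assumptions (a real mapping script such as guessnet-ifupdown actually tests each candidate); only the interface: the interface name as the argument, "map" lines on stdin, chosen configuration name on stdout — is as described in the text.

```shell
# Toy mapping script: receives the interface name as $1 and the "map"
# lines on stdin, prints the chosen <config_name> on stdout.
map_script() {
  # $1 = interface name (unused in this toy version)
  read -r first_candidate
  echo "$first_candidate"
}

# Simulates what "ifup eth0" does with "map config1" and "map config2" lines
chosen=$(printf 'config1\nconfig2\n' | map_script eth0)
echo "$chosen"
```

Here the toy script always picks the first candidate, so "ifup eth0" would effectively become "ifup eth0=config1".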
Here is how to switch manually among several network configurations without rewriting the "/etc/network/interfaces
" file as in Section 5.5.13, “The basic network reconfiguration” .
For all the network configuration you need to access, you create a single "/etc/network/interfaces
" file as the following.
auto lo
iface lo inet loopback

iface config1 inet dhcp
 hostname "mymachine"

iface config2 inet static
 address 192.168.11.100
 netmask 255.255.255.0
 broadcast 192.168.11.255
 gateway 192.168.11.1
 dns-domain lan
 dns-nameservers 192.168.11.1

iface pppoe inet manual
 pre-up /sbin/ifconfig eth0 up
 up ifup ppp0=dsl
 down ifdown ppp0=dsl
 post-down /sbin/ifconfig eth0 down

# The following is used internally only
iface dsl inet ppp
 provider dsl-provider

iface pots inet ppp
 provider provider
Please note that the network configuration name, i.e. the token after iface
, does not use the network interface name here. Also, there is no auto
stanza nor allow-hotplug
stanza to start the network interface eth0
automatically upon events.
Now you are ready to switch the network configuration.
Let's move your PC to a LAN served by the DHCP. You bring up the network interface (the physical interface) eth0
by assigning the network configuration name (the logical interface name) config1
to it by the following.
$ sudo ifup eth0=config1
Password:
 ...
The interface eth0
is up, configured by DHCP and connected to LAN.
$ sudo ifdown eth0=config1
 ...
The interface eth0
is down and disconnected from LAN.
Let's move your PC to a LAN served by the static IP. You bring up the network interface eth0
by assigning the network configuration name config2
to it by the following.
$ sudo ifup eth0=config2
 ...
The interface eth0
is up, configured with the static IP and connected to LAN. The additional parameters given as dns-*
configure the "/etc/resolv.conf
" contents. This "/etc/resolv.conf
" is better managed if the resolvconf
package is installed.
$ sudo ifdown eth0=config2
 ...
The interface eth0
is down and disconnected from LAN, again.
Let's move your PC to a port on BB-modem connected to the PPPoE served service. You bring up the network interface eth0
by assigning the network configuration name pppoe
to it by the following.
$ sudo ifup eth0=pppoe
 ...
The interface eth0
is up, configured with PPPoE connection directly to the ISP.
$ sudo ifdown eth0=pppoe
 ...
The interface eth0
is down and disconnected, again.
Let's move your PC to a location without LAN or BB-modem but with POTS and modem. You bring up the network interface ppp0
by assigning the network configuration name pots
to it by the following.
$ sudo ifup ppp0=pots
 ...
The interface ppp0
is up and connected to the Internet with PPP.
$ sudo ifdown ppp0=pots
 ...
The interface ppp0
is down and disconnected from the Internet.
You should check the "/etc/network/run/ifstate
" file for the current network configuration state of the ifupdown
system.
You may need to adjust numbers at the end of eth*
, ppp*
, etc. if you have multiple network interfaces.
The ifupdown
system automatically runs scripts installed in "/etc/network/*/
" while exporting environment variables to scripts.
Table 5.14. List of environment variables passed by the ifupdown system
environment variable | value passed |
---|---|
"$IFACE "
|
physical name (interface name) of the interface being processed |
"$LOGICAL "
|
logical name (configuration name) of the interface being processed |
"$ADDRFAM "
|
<address_family> of the interface |
"$METHOD "
|
<method_name> of the interface. (e.g., "static") |
"$MODE "
|
"start" if run from ifup , "stop" if run from ifdown
|
"$PHASE "
|
as per "$MODE ", but with finer granularity, distinguishing the pre-up , post-up , pre-down and post-down phases
|
"$VERBOSITY "
|
indicates whether "--verbose " was used; set to 1 if so, 0 if not
|
"$PATH "
|
command search path: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin "
|
"$IF_<OPTION> "
|
value for the corresponding option under the iface stanza
|
Here, each environment variable, "$IF_<OPTION>
", is created from the name for the corresponding option such as <option1> and <option2> by prepending "$IF_
", converting the case to the upper case, replacing hyphens with underscores, and discarding non-alphanumeric characters.
See Section 5.5.2, “The basic syntax of "/etc/network/interfaces"” for <address_family>, <method_name>, <option1> and <option2>.
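The mangling rule just described can be reproduced with a line of shell. In this sketch, the option name dns-nameservers is merely an example:

```shell
# Reproduce the "$IF_<OPTION>" name mangling described in the text:
# prepend IF_, uppercase, hyphens to underscores, drop other
# non-alphanumeric characters.
opt='dns-nameservers'
var="IF_$(printf '%s' "$opt" | tr 'a-z-' 'A-Z_' | tr -cd 'A-Z0-9_')"
echo "$var"
```

So an option line "dns-nameservers 192.168.11.1" under the iface stanza appears to the hook scripts as the environment variable "$IF_DNS_NAMESERVERS".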
The ifupdown-extra
package (see Section 5.5.14, “The ifupdown-extra package”) uses these environment variables to extend the functionality of the ifupdown
package. The ifmetric
package (see Section 5.6.2, “The ifmetric package”) installs the "/etc/network/if-up.d/ifmetric
" script which sets the metric via the "$IF_METRIC
" variable. The guessnet
package (see Section 5.6.8, “Mapping with guessnet”), which provides a simple and powerful framework for the auto-selection of the network configuration via the mapping mechanism, also uses these.
For more specific examples of custom network configuration scripts using these environment variables, you should check example scripts in "/usr/share/doc/ifupdown/examples/*
" and scripts used in ifscheme
and ifupdown-scripts-zg2
packages. These additional scripts have some overlaps of functionalities with basic ifupdown-extra
and guessnet
packages. If you install these additional scripts, you should customize them to avoid interference.
Instead of manually choosing configuration as described in Section 5.6.6, “The manually switchable network configuration”, you can use the mapping mechanism described in Section 5.6.5, “The mapping stanza” to select network configuration automatically with custom scripts.
The guessnet-ifupdown
(8) command provided by the guessnet
package is designed to be used as a mapping script and provides powerful framework to enhance the ifupdown
system.
Put test conditions as the values of guessnet
options for each network configuration under the iface
stanza.
Mapping chooses the iface
with the first non-ERROR result as the network configuration.
This dual usage of the "/etc/network/interfaces
" file by the mapping script, guessnet-ifupdown
, and the original network configuration infrastructure, ifupdown
, does not cause negative impacts since guessnet
options only export extra environment variables to scripts run by the ifupdown
system. See details in guessnet-ifupdown
(8).
When multiple guessnet
option lines are required in "/etc/network/interfaces
", use option lines started with guessnet1
, guessnet2
, and so on, since the ifupdown
package does not allow starting strings of option lines to be repeated.
Iproute2 commands offer complete low-level network configuration capabilities. Here is a translation table from obsolete net-tools commands to new iproute2 etc. commands.
Table 5.15. Translation table from obsolete net-tools
commands to new iproute2
commands
obsolete net-tools | new iproute2 etc. | manipulation |
---|---|---|
ifconfig (8)
|
ip addr
|
protocol (IP or IPv6) address on a device |
route (8)
|
ip route
|
routing table entry |
arp (8)
|
ip neigh
|
ARP or NDISC cache entry |
ipmaddr
|
ip maddr
|
multicast address |
iptunnel
|
ip tunnel
|
tunnel over IP |
nameif (8)
|
ifrename (8)
|
name network interfaces based on MAC addresses |
mii-tool (8)
|
ethtool (8)
|
Ethernet device settings |
See ip
(8) and IPROUTE2 Utility Suite Howto.
You may safely use the following low level network commands since they do not change the network configuration.
Table 5.16. List of low level network commands
command | description |
---|---|
ifconfig
|
display the link and address status of active interfaces |
ip addr show
|
display the link and address status of active interfaces |
route -n
|
display all the routing table in numerical addresses |
ip route show
|
display all the routing table in numerical addresses |
arp
|
display the current content of the ARP cache tables |
ip neigh
|
display the current content of the ARP cache tables |
plog
|
display ppp daemon log |
ping yahoo.com
|
check the Internet connection to "yahoo.com "
|
whois yahoo.com
|
check who registered "yahoo.com " in the domains database
|
traceroute yahoo.com
|
trace the Internet connection to "yahoo.com "
|
tracepath yahoo.com
|
trace the Internet connection to "yahoo.com "
|
mtr yahoo.com
|
trace the Internet connection to "yahoo.com " (repeatedly)
|
dig [@dns-server.com] example.com [{a|mx|any}]
|
check DNS records of "example.com " by "dns-server.com " for a "a ", "mx ", or "any " record
|
iptables -L -n
|
check packet filter |
netstat -a
|
find all open ports |
netstat -l --inet
|
find listening ports |
netstat -ln --tcp
|
find listening TCP ports (numeric) |
dlint example.com
|
check DNS zone information of "example.com "
|
Some of these low level network configuration tools reside in "/sbin/
". You may need to issue full command path such as "/sbin/ifconfig
" or add "/sbin
" to the "$PATH
" list in your "~/.bashrc
".
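For a Bash login shell, one way to follow the advice above is a line like the following in "~/.bashrc" (the exact directories to add are your choice):

```shell
# Append the system binary directories to the command search path
# so that e.g. "ifconfig" resolves without typing the full /sbin path.
export PATH="$PATH:/sbin:/usr/sbin"
echo "$PATH"
```

After re-sourcing "~/.bashrc", commands such as ifconfig and route work without their full paths.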
Generic network optimization is beyond the scope of this documentation. I touch only subjects pertinent to the consumer grade connection.
Table 5.17. List of network optimization tools
packages | popcon | size | description |
---|---|---|---|
iftop
*
|
V:1.3, I:7 | 72 | display bandwidth usage information on an network interface |
iperf
*
|
V:0.5, I:3 | 200 | Internet Protocol bandwidth measuring tool |
apt-spy
*
|
V:0.17, I:1.7 | 204 |
write a "/etc/apt/sources.list " file based on bandwidth tests
|
ifstat
*
|
V:0.2, I:1.2 | 88 | InterFace STATistics Monitoring |
bmon
*
|
V:0.2, I:0.9 | 188 | portable bandwidth monitor and rate estimator |
ethstatus
*
|
V:0.10, I:0.7 | 84 | script that quickly measures network device throughput |
bing
*
|
V:0.08, I:0.6 | 96 | empirical stochastic bandwidth tester |
bwm-ng
*
|
V:0.2, I:1.2 | 152 | small and simple console-based bandwidth monitor |
ethstats
*
|
V:0.05, I:0.3 | 52 | console-based Ethernet statistics monitor |
ipfm
*
|
V:0.04, I:0.19 | 156 | bandwidth analysis tool |
The Maximum Transmission Unit (MTU) value can be determined experimentally with ping
(8) with "-M do
" option which sends ICMP packets with data size starting from 1500 (with offset of 28 bytes for the IP+ICMP header) and finding the largest size without IP fragmentation.
For example, try the following.
$ ping -c 1 -s $((1500-28)) -M do www.debian.org
PING www.debian.org (194.109.137.218) 1472(1500) bytes of data.
From 192.168.11.2 icmp_seq=1 Frag needed and DF set (mtu = 1454)

--- www.debian.org ping statistics ---
0 packets transmitted, 0 received, +1 errors
Try 1454 instead of 1500.
You see ping
(8) succeed with 1454.
This process is Path MTU (PMTU) discovery (RFC1191) and the tracepath
(8) command can automate this.
The above example with the PMTU value of 1454 is for my previous FTTP provider which used Asynchronous Transfer Mode (ATM) as its backbone network and served its clients with the PPPoE. The actual PMTU value depends on your environment, e.g., 1500 for my new FTTP provider.
Table 5.18. Basic guidelines of the optimal MTU value
network environment | MTU | rationale |
---|---|---|
Dial-up link (IP: PPP) | 576 | standard |
Ethernet link (IP: DHCP or fixed) | 1500 | standard and default |
Ethernet link (IP: PPPoE) | 1492 (=1500-8) | 2 bytes for PPP header and 6 bytes for PPPoE header |
Ethernet link (ISP's backbone: ATM, IP: DHCP or fixed) | 1462 (=48*31-18-8) | author's speculation: 18 for Ethernet header, 8 for SAR trailer |
Ethernet link (ISP's backbone: ATM, IP: PPPoE) | 1454 (=48*31-8-18-8) | see "Optimal MTU configuration for PPPoE ADSL Connections" for rationale |
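The arithmetic in the rationale column can be verified directly with shell arithmetic (no network access involved):

```shell
# Verify the MTU arithmetic quoted in the table above.
mtu_pppoe=$((1500 - 8))                  # Ethernet PPPoE: 1492
mtu_atm_dhcp=$((48 * 31 - 18 - 8))       # ATM backbone, DHCP/fixed IP: 1462
mtu_atm_pppoe=$((48 * 31 - 8 - 18 - 8))  # ATM backbone, PPPoE: 1454
echo "$mtu_pppoe $mtu_atm_dhcp $mtu_atm_pppoe"
```

This reproduces the 1492, 1462, and 1454 values listed in the table.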
In addition to these basic guidelines, you should know the following.
Here are examples for setting the MTU value from its default 1500 to 1454.
For the DHCP (see Section 5.5.4, “The network interface served by the DHCP”), you can replace pertinent iface
stanza lines in the "/etc/network/interfaces
" with the following.
iface eth0 inet dhcp
 hostname "mymachine"
 pre-up /sbin/ifconfig $IFACE mtu 1454
For the static IP (see Section 5.5.5, “The network interface with the static IP”), you can replace pertinent 'iface
' stanza lines in the "/etc/network/interfaces
" with the following.
iface eth0 inet static
 address 192.168.11.100
 netmask 255.255.255.0
 broadcast 192.168.11.255
 gateway 192.168.11.1
 mtu 1454
 dns-domain lan
 dns-nameservers 192.168.11.1
For the direct PPPoE (see Section 5.4.5, “The PPPoE connection with pppoeconf”), you can replace pertinent "mtu
" line in the "/etc/ppp/peers/dsl-provider
" with the following.
mtu 1454
The maximum segment size (MSS) is used as an alternative measure of packet size. The relationship between the MSS and the MTU is the following: MSS = MTU - 40 for IPv4 and MSS = MTU - 60 for IPv6.
The iptables
(8) (see Section 5.9, “Netfilter infrastructure”) based optimization can clamp packet size by the MSS and is useful for the router. See "TCPMSS" in iptables
(8).
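For IPv4, the MSS is commonly the MTU minus 40 bytes (a 20-byte IP header plus a 20-byte TCP header). A quick arithmetic check using the 1454 PPPoE MTU from earlier in this section:

```shell
# MSS from MTU for IPv4: subtract the 20-byte IP + 20-byte TCP headers.
mtu=1454
mss=$((mtu - 40))
echo "$mss"
```

This yields an MSS of 1414 for an MTU of 1454.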
The TCP throughput can be maximized by adjusting TCP buffer size parameters as described in "TCP Tuning Guide" and "TCP tuning" for the modern high-bandwidth and high-latency WAN. So far, the current Debian default settings serve well even for my LAN connected by the fast 1 Gbps FTTP service.
Netfilter provides infrastructure for stateful firewall and network address translation (NAT) with Linux kernel modules (see Section 3.5.12, “The kernel module initialization”).
Table 5.19. List of firewall tools
packages | popcon | size | description |
---|---|---|---|
iptables
*
|
V:27, I:99 | 1316 | administration tools for netfilter |
iptstate
*
|
V:0.14, I:0.9 | 152 |
continuously monitor netfilter state (similar to top (1))
|
shorewall-perl
*
|
V:0.15, I:0.5 | 76 |
Shoreline Firewall, netfilter configuration file generator (Perl-based, recommended for lenny )
|
shorewall-shell
*
|
I:1.9 | 76 |
Shoreline Firewall, netfilter configuration file generator (shell-based, alternative for lenny )
|
The main user space program of netfilter is iptables
(8). You can manually configure netfilter interactively from the shell, save its state with iptables-save
(8), and restore it via init script with iptables-restore
(8) upon system reboot.
Configuration helper scripts such as shorewall ease this process.
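As a sketch of this save-and-restore workflow, the saved state can be reloaded automatically when an interface comes up; the file name "/etc/iptables.rules" here is an arbitrary choice for illustration, not a Debian convention.

```
# first, as root, dump the running ruleset once:
#   iptables-save > /etc/iptables.rules
# then restore it from "/etc/network/interfaces" before the interface comes up
iface eth0 inet dhcp
    pre-up iptables-restore < /etc/iptables.rules
```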
See the documentation at http://www.netfilter.org/documentation/ (or in "/usr/share/doc/iptables/html/
").
Although these were written for Linux 2.4, both the iptables
(8) command and the netfilter kernel functions apply to the current Linux 2.6.
After establishing network connectivity (see Chapter 5, Network setup), you can run various network applications.
There are many web browser packages to access remote contents with Hypertext Transfer Protocol (HTTP).
Table 6.1. List of web browsers
package | popcon | size | type | description of web browser |
---|---|---|---|---|
iceweasel
*
|
V:30, I:48 | 3761 | X | unbranded Mozilla Firefox |
iceape-browser
*
|
V:1.4, I:2 | 35686 | , , | unbranded Mozilla, removed due to security concerns (bug #505565) |
epiphany-browser
*
|
V:13, I:34 | 1060 | , , | GNOME, HIG compliant, Epiphany |
galeon
*
|
V:0.9, I:1.4 | 1776 | , , | GNOME, Galeon, superseded by Epiphany |
konqueror
*
|
V:8, I:15 | 3584 | , , | KDE, Konqueror |
w3m
*
|
V:24, I:84 | 1992 | text | w3m |
lynx
*
|
I:22 | 252 | , , | Lynx |
elinks
*
|
V:2, I:5 | 1448 | , , | ELinks |
links
*
|
V:3, I:9 | 1380 | , , | Links (text only) |
links2
*
|
V:0.7, I:3 | 3288 | graphics | Links (console graphics without X) |
You may be able to use the following special URL strings for some browsers to confirm their settings.
"about:"
"about:config"
"about:plugins"
Debian offers many free browser plugin packages in the main archive area which can handle not only Java (software platform) and Flash but also MPEG, MPEG2, MPEG4, DivX, Windows Media Video (.wmv), QuickTime (.mov), MP3 (.mp3), Ogg/Vorbis files, DVDs, VCDs, etc. Debian also offers helper programs to install non-free browser plugin packages from the contrib or non-free archive areas.
Table 6.2. List of browser plugin packages
package | popcon | size | area | description |
---|---|---|---|---|
icedtea6-plugin
*
|
V:0.9, I:1.6 | 272 | main | Java plugin based on OpenJDK and IcedTea |
sun-java6-plugin
*
|
I:10 | 100 | non-free | Java plugin for Sun's Java SE 6 (i386 only) |
mozilla-plugin-gnash
*
|
V:0.4, I:4 | 60 | main | Flash plugin based on Gnash |
flashplugin-nonfree
*
|
V:1.3, I:15 | 132 | contrib | Flash plugin helper to install Adobe Flash Player (i386, amd64 only) |
mozilla-plugin-vlc
*
|
V:3, I:4 | 128 | main | Multimedia plugin based on VLC media player |
totem-mozilla
*
|
V:20, I:34 | 544 | main | Multimedia plugin based on GNOME's Totem media player |
gecko-mediaplayer
*
|
V:0.6, I:0.8 | 724 | main | Multimedia plugin based on (GNOME) MPlayer |
nspluginwrapper
*
|
V:1.8, I:3 | 472 | contrib | A wrapper to run i386 Netscape plugins on amd64 architecture |
Although use of the above Debian packages is much easier, browser plugins can still be manually enabled by installing "*.so" into plugin directories (e.g., "/usr/lib/iceweasel/plugins/
") and restarting browsers.
Some web sites refuse connections based on the user-agent string of your browser. You can work around this situation by spoofing the user-agent string. For example, you can do this by adding the following line into user configuration files such as "~/.gnome2/epiphany/mozilla/epiphany/user.js
" or "~/.mozilla/firefox/*.default/user.js
".
user_pref("general.useragent.override","Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)");
Alternatively, you can add and reset this variable by typing "about:config
" into the URL bar and right clicking on the displayed contents.
A spoofed user-agent string may cause bad side effects with Java.
If you are to set up a mail server to exchange mail directly with the Internet, you should read a more advanced guide than this elementary document.
The following configuration examples are only valid for the typical mobile workstation on consumer grade Internet connections.
In order to contain spam (unwanted and unsolicited email) problems, many ISPs which provide consumer grade Internet connections are implementing countermeasures.
When configuring your mail system or resolving mail delivery problems, you must consider these new limitations.
In light of this hostile Internet situation and these limitations, some independent Internet mail ISPs such as Yahoo.com and Gmail.com offer secure mail services which can be connected to from anywhere on the Internet using Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL).
It is not realistic to run an SMTP server on a consumer grade network to send mail directly to remote hosts reliably. Such mail is very likely to be rejected. You must use smarthost services offered by your connection ISP or by independent mail ISPs. For simplicity, I assume that the smarthost is located at "smtp.hostname.dom
", requires SMTP AUTH, and uses the message submission port (587) in the following text.
The simplest mail configuration is that mail is sent to the ISP's smarthost and received from the ISP's POP3 server by the MUA (see Section 6.4, “Mail user agent (MUA)”) itself. This type of configuration is popular with full featured GUI based MUAs such as icedove
(1), evolution
(1), etc. If you need to filter mail by type, you use the MUA's filtering function. For this case, the local MTA (see Section 6.3, “Mail transport agent (MTA)”) needs to do local delivery only.
The alternative mail configuration is that mail is sent via the local MTA to the ISP's smarthost and received from the ISP's POP3 server by the mail retriever (see Section 6.5, “The remote mail retrieval and forward utility”) into the local mailbox. If you need to filter mail by type, you use an MDA with filter (see Section 6.6, “Mail delivery agent (MDA) with filter”) to sort mail into separate mailboxes. This type of configuration is popular with simple console based MUAs such as mutt
(1), gnus
(1), etc., although this is possible with any MUA (see Section 6.4, “Mail user agent (MUA)”). For this case, the local MTA (see Section 6.3, “Mail transport agent (MTA)”) needs to do both smarthost delivery and local delivery. Since a mobile workstation does not have a valid FQDN, you must configure the local MTA to hide and spoof the real local mail name in outgoing mail to avoid mail delivery errors (see Section 6.3.3, “The mail address configuration”).
You may wish to configure MUA/MDA to use Maildir for storing email messages somewhere under your home directory.
For a normal workstation, the popular choice for mail transport agent (MTA) is either exim4-*
or postfix
packages. It is really up to you.
Table 6.3. List of basic mail transport agent related packages for workstation
package | popcon | size | description |
---|---|---|---|
exim4-daemon-light
*
|
V:60, I:65 | 1104 | Exim4 mail transport agent (MTA: Debian default) |
exim4-base
*
|
V:62, I:68 | 1688 | Exim4 documentation (text) and common files |
exim4-doc-html
*
|
I:0.6 | 3440 | Exim4 documentation (html) |
exim4-doc-info
*
|
I:0.3 | 556 | Exim4 documentation (info) |
postfix
*
|
V:18, I:20 | 3492 | Postfix mail transport agent (MTA: alternative) |
postfix-doc
*
|
I:1.9 | 3420 | Postfix documentation (html+text) |
sasl2-bin
*
|
V:2, I:5 | 448 | Cyrus SASL API implementation (supplement postfix for SMTP AUTH) |
cyrus-sasl2-doc
*
|
I:2 | 284 | Cyrus SASL - documentation |
Although the popcon vote count of exim4-*
looks several times larger than that of postfix
, this does not mean postfix
is unpopular with Debian developers. The Debian server system uses both exim4
and postfix
. The mail header analysis of mailing list postings from prominent Debian developers also indicates that both of these MTAs are popular.
The exim4-*
packages are known to have very small memory consumption and to be very flexible in their configuration. The postfix
package is known to be compact, fast, simple, and secure. Both come with ample documentation and are good in both quality and license.
There are many choices of mail transport agent (MTA) packages with different capabilities and focus in the Debian archive.
Table 6.4. List of choices for mail transport agent (MTA) packages in Debian archive
package | popcon | size | capability and focus |
---|---|---|---|
exim4-daemon-light
*
|
V:60, I:65 | 1104 | full |
postfix
*
|
V:18, I:20 | 3492 | full (security) |
exim4-daemon-heavy
*
|
V:1.7, I:1.9 | 1220 | full (flexible) |
sendmail-bin
*
|
V:1.9, I:2 | 2052 | full (only if you are already familiar) |
nullmailer
*
|
V:0.7, I:0.8 | 436 | stripped down, no local mail |
ssmtp
*
|
V:1.2, I:1.7 | 0 | stripped down, no local mail |
courier-mta
*
|
V:0.14, I:0.15 | 12316 | very full (web interface etc.) |
xmail
*
|
V:0.14, I:0.16 | 836 | light |
masqmail
*
|
V:0.04, I:0.05 | 624 | light |
esmtp
*
|
V:0.09, I:0.2 | 172 | light |
esmtp-run
*
|
V:0.07, I:0.11 | 64 |
light (sendmail compatibility extension to esmtp )
|
msmtp
*
|
V:0.3, I:0.8 | 340 | light |
msmtp-mta
*
|
V:0.11, I:0.15 | 32 |
light (sendmail compatibility extension to msmtp )
|
For the Internet mail via smarthost, you (re)configure exim4-*
packages as the following.
$ sudo /etc/init.d/exim4 stop
$ sudo dpkg-reconfigure exim4-config
Select "mail sent by smarthost; received via SMTP or fetchmail" for "General type of mail configuration".
Set "System mail name:" to its default as the FQDN (see Section 5.1.2, “The hostname resolution”).
Set "IP-addresses to listen on for incoming SMTP connections:" to its default as "127.0.0.1 ; ::1".
Unset contents of "Other destinations for which mail is accepted:".
Unset contents of "Machines to relay mail for:".
Set "IP address or host name of the outgoing smarthost:" to "smtp.hostname.dom:587".
Select "<No>" for "Hide local mail name in outgoing mail?". (Use "/etc/email-addresses
" as in Section 6.3.3, “The mail address configuration”, instead.)
Reply to "Keep number of DNS-queries minimal (Dial-on-Demand)?" as one of the following.
Set "Delivery method for local mail:" to "mbox format in /var/mail/".
Select "<Yes>" for "Split configuration into small files?:".
Create password entries for the smarthost by editing "/etc/exim4/passwd.client
".
$ sudo vim /etc/exim4/passwd.client
 ...
$ cat /etc/exim4/passwd.client
^smtp.*\.hostname\.dom:username@hostname.dom:password
Start exim4
by the following.
$ sudo /etc/init.d/exim4 start
The host name in "/etc/exim4/passwd.client
" should not be the alias. You check the real host name with the following.
$ host smtp.hostname.dom
smtp.hostname.dom is an alias for smtp99.hostname.dom.
smtp99.hostname.dom has address 123.234.123.89
I use regex in "/etc/exim4/passwd.client
" to work around the alias issue. SMTP AUTH probably works even if the ISP moves host pointed by the alias.
You must execute update-exim4.conf
(8) after manually updating exim4
configuration files in "/etc/exim4/
".
Starting exim4
takes a long time if "No" (the default value) was chosen for the debconf query "Keep number of DNS-queries minimal (Dial-on-Demand)?" and the system is not connected to the Internet while booting.
Please read the official guide at: "/usr/share/doc/exim4-base/README.Debian.gz
" and update-exim4.conf
(8).
Local customization file "/etc/exim4/exim4.conf.localmacros
" may be created to set MACROs. For example, Yahoo's mail service is said to require "MAIN_TLS_ENABLE = true
" and "AUTH_CLIENT_ALLOW_NOTLS_PASSWORDS = yes
" in it.
If you are looking for a lightweight MTA that respects "/etc/aliases
" for your laptop PC, you should consider configuring exim4
(8) with "QUEUERUNNER='queueonly'
", "QUEUERUNNER='nodaemon'
", etc. in "/etc/default/exim4
".
For the Internet mail via smarthost, you should first read postfix documentation and key manual pages.
Table 6.5. List of important postfix manual pages
command | function |
---|---|
postfix (1)
|
Postfix control program |
postconf (1)
|
Postfix configuration utility |
postconf (5)
|
Postfix configuration parameters |
postmap (1)
|
Postfix lookup table maintenance |
postalias (1)
|
Postfix alias database maintenance |
You (re)configure postfix
and sasl2-bin
packages as follows.
$ sudo /etc/init.d/postfix stop
$ sudo dpkg-reconfigure postfix
Choose "Internet with smarthost".
Set "SMTP relay host (blank for none):" to "[smtp.hostname.dom]:587
" and configure it by the following.
$ sudo postconf -e 'smtp_sender_dependent_authentication = yes'
$ sudo postconf -e 'smtp_sasl_auth_enable = yes'
$ sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
$ sudo postconf -e 'smtp_sasl_type = cyrus'
$ sudo vim /etc/postfix/sasl_passwd
Create password entries for the smarthost.
$ cat /etc/postfix/sasl_passwd
[smtp.hostname.dom]:587 username:password
$ sudo postmap hash:/etc/postfix/sasl_passwd
Start postfix
by the following.
$ sudo /etc/init.d/postfix start
Here the use of "[
" and "]
" in the dpkg-reconfigure
dialog and "/etc/postfix/sasl_passwd
" ensures not to check MX record but directly use exact hostname specified. See "Enabling SASL authentication in the Postfix SMTP client" in "usr/share/doc/postfix/html/SASL_README.html
".
There are a few mail address configuration files for mail transport, delivery and user agents.
Table 6.6. List of mail address related configuration files
file | function | application |
---|---|---|
/etc/mailname
|
default host name for (outgoing) mail |
Debian specific, mailname (5)
|
/etc/email-addresses
|
host name spoofing for outgoing mail |
exim (8) specific, exim4-config_files (5)
|
/etc/postfix/generic
|
host name spoofing for outgoing mail |
postfix (1) specific, activated after postmap (1) command execution.
|
/etc/aliases
|
account name alias for incoming mail |
general, activated after newaliases (1) command execution.
|
The mailname in the "/etc/mailname
" file is usually a fully qualified domain name (FQDN) that resolves to one of the host's IP addresses. For the mobile workstation which does not have a hostname with resolvable IP address, set this mailname to the value of "hostname -f
". (This is safe choice and works for both exim4-*
and postfix
.)
The contents of "/etc/mailname
" is used by many non-MTA programs for their default behavior. For mutt
, set "hostname
" and "from
" variables in ~/muttrc
file to override the mailname value. For programs in the devscripts
package, such as bts
(1) and dch
(1), export environment variables "$DEBFULLNAME
" and "$DEBEMAIL
" to override it.
The popularity-contest
package normally sends mail from the root account with the FQDN. You need to set MAILFROM
in /etc/popularity-contest.conf
as described in the /usr/share/popularity-contest/default.conf
file. Otherwise, your mail will be rejected by the smarthost SMTP server. Although this is tedious, this approach is safer than rewriting the source address of all mails from root by the MTA, and it should be used for other daemons and cron scripts.
When setting the mailname to "hostname -f
", the spoofing of the source mail address via MTA can be realized by the following.
"/etc/email-addresses
" file for exim4
(8) as explained in the exim4-config_files
(5)
"/etc/postfix/generic
" file for postfix
(1) as explained in the generic
(5)
For postfix
, the following extra steps are needed.
# postmap hash:/etc/postfix/generic # postconf -e 'smtp_generic_maps = hash:/etc/postfix/generic' # postfix reload
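As an illustration, the "/etc/postfix/generic" file referenced above might contain entries like the following (all names here are placeholders for this sketch).

```
# rewrite unqualified local addresses in outgoing mail to the ISP account
username@mymachine.localdomain    username@hostname.dom
root@mymachine.localdomain        username@hostname.dom
```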
You can test the mail address configuration using the following.
exim
(8) with the -brw, -bf, -bF, -bV, … options
postmap
(1) with the -q option
Exim comes with several utility programs such as exiqgrep
(8) and exipick
(8). See "dpkg -L exim4-base|grep man8/
" for available commands.
There are several basic MTA operations. Some may be performed via sendmail
(1) compatibility interface.
Table 6.7. List of basic MTA operation
exim command | postfix command | description |
---|---|---|
sendmail
|
sendmail
|
read mails from standard input and arrange for delivery (-bm )
|
mailq
|
mailq
|
list the mail queue with status and queue ID (-bp )
|
newaliases
|
newaliases
|
initialize alias database (-I )
|
exim4 -q
|
postqueue -f
|
flush waiting mails (-q )
|
exim4 -qf
|
postsuper -r ALL deferred; postqueue -f
|
flush all mails |
exim4 -qff
|
postsuper -r ALL; postqueue -f
|
flush even frozen mails |
exim4 -Mg queue_id
|
postsuper -h queue_id
|
freeze one message by its queue ID |
exim4 -Mrm queue_id
|
postsuper -d queue_id
|
remove one message by its queue ID |
N/A |
postsuper -d ALL
|
remove all messages |
It may be a good idea to flush all mails by a script in "/etc/ppp/ip-up.d/*
".
If you subscribe to Debian related mailing lists, it may be a good idea to use an MUA such as mutt
or gnus
, which are the de facto standard for participants and are known to behave as expected.
Table 6.8. List of mail user agent (MUA)
package | popcon | size | type |
---|---|---|---|
iceweasel
*
|
V:30, I:48 | 3761 | X GUI program (unbranded Mozilla Firefox) |
evolution
*
|
V:16, I:34 | 4724 | X GUI program (part of a groupware suite) |
icedove
*
|
V:8, I:12 | 38864 | X GUI program (unbranded Mozilla Thunderbird) |
mutt
*
|
V:26, I:83 | 6004 |
character terminal program probably used with vim
|
gnus
*
|
V:0.06, I:0.3 | 6453 |
character terminal program under (x)emacs
|
Customize "~/.muttrc
" as the following to use mutt
as the mail user agent (MUA) in combination with vim
.
#
# User configuration file to override /etc/Muttrc
#
# spoof source mail address
set use_from
set hostname=example.dom
set from="Name Surname <username@example.dom>"
set signature="~/.signature"
# vim: "gq" to reformat quotes
set editor="vim -c 'set tw=72 et ft=mail'"
# "mutt" goes to Inbox, while "mutt -y" lists mailboxes
set mbox_type=Maildir    # use qmail Maildir format for creating mbox
set mbox=~/Mail          # keep all mail boxes in $HOME/Mail/
set spoolfile=+Inbox     # mail delivered to $HOME/Mail/Inbox
set record=+Outbox       # save fcc mail to $HOME/Mail/Outbox
set postponed=+Postponed # keep postponed in $HOME/Mail/postponed
set move=no              # do not move Inbox items to mbox
set quit=ask-yes         # do not quit by "q" only
set delete=yes           # always delete w/o asking while exiting
set fcc_clear            # store fcc as non encrypted
# Mailboxes in Maildir (automatic update)
mailboxes `cd ~/Mail; /bin/ls -1|sed -e 's/^/+/' | tr "\n" " "`
unmailboxes Maillog *.ev-summary
## Default
#set index_format="%4C %Z %{%b %d} %-15.15L (%4l) %s"
## Thread index with senders (collapse)
set index_format="%4C %Z %{%b %d} %-15.15n %?M?(#%03M)&(%4l)? %s"
## Default
#set folder_format="%2C %t %N %F %2l %-8.8u %-8.8g %8s %d %f"
## just folder names
set folder_format="%2C %t %N %f"
Add the following to "/etc/mailcap
" or "~/.mailcap
" to display HTML mail and MS Word attachments inline.
text/html; lynx -force_html %s; needsterminal;
application/msword; /usr/bin/antiword '%s'; copiousoutput; description="Microsoft Word Text"; nametemplate=%s.doc
Mutt can be used as an IMAP client and a mailbox format converter. You can tag messages with "t
", "T
", etc. These tagged messages can be copied with ";C
" between different mailboxes and deleted with ";d
" in one action.
Although fetchmail
(1) has been the de facto standard for remote mail retrieval on GNU/Linux, the author now prefers getmail
(1). If you want to reject mail before downloading it to save bandwidth, mailfilter
or mpop
may be useful. Whichever mail retriever utility is used, it is a good idea to configure the system to deliver retrieved mails to an MDA, such as maildrop
, via a pipe.
Table 6.9. List of remote mail retrieval and forward utilities
package | popcon | size | description |
---|---|---|---|
fetchmail
*
|
V:2, I:5 | 2588 | mail retriever (POP3, APOP, IMAP) (old) |
getmail4
*
|
V:0.3, I:0.9 | 668 | mail retriever (POP3, IMAP4, and SDPS) (simple, secure, and reliable) |
mailfilter
*
|
V:0.00, I:0.07 | 332 | mail retriever (POP3) with regex filtering capability |
mpop
*
|
V:0.01, I:0.08 | 324 | mail retriever (POP3) and MDA with filtering capability |
getmail
(1) configuration is described in the getmail documentation. Here is my setup to access multiple POP3 accounts as a user.
Create "/usr/local/bin/getmails
" as the following.
#!/bin/sh
set -e
if [ -f $HOME/.getmail/running ]; then
  echo "getmail is already running ... (if not, remove $HOME/.getmail/running)" >&2
  pgrep -l "getmai[l]"
  exit 1
else
  echo "getmail has not been running ... " >&2
fi
if [ -f $HOME/.getmail/stop ]; then
  echo "do not run getmail ... (if not, remove $HOME/.getmail/stop)" >&2
  exit
fi
if [ "x$1" = "x-l" ]; then
  exit
fi
rcfiles="/usr/bin/getmail"
for file in $HOME/.getmail/config/* ; do
  rcfiles="$rcfiles --rcfile $file"
done
date -u > $HOME/.getmail/running
eval "$rcfiles $@"
rm $HOME/.getmail/running
Configure it as the following.
$ sudo chmod 755 /usr/local/bin/getmails
$ mkdir -m 0700 $HOME/.getmail
$ mkdir -m 0700 $HOME/.getmail/config
$ mkdir -m 0700 $HOME/.getmail/log
Create configuration files "$HOME/.getmail/config/pop3_name
" for each POP3 accounts as the following.
[retriever]
type = SimplePOP3SSLRetriever
server = pop.example.com
username = pop3_name@example.com
password = secret

[destination]
type = MDA_external
path = /usr/bin/maildrop
unixfrom = True

[options]
verbose = 0
delete = True
delivered_to = False
message_log = ~/.getmail/log/pop3_name.log
Configure it as the following.
$ chmod 0600 $HOME/.getmail/config/*
Schedule "/usr/local/bin/getmails
" to run every 15 minutes with cron
(8) by executing "sudo crontab -e -u <user_name>
" and adding following to user's cron entry.
5,20,35,50 * * * * /usr/local/bin/getmails --quiet
Problems of POP3 access may not come from getmail
. Some popular free POP3 services may violate the POP3 protocol and their spam filters may not be perfect. For example, they may delete messages just after receiving the RETR command, before receiving the DELE command, and may quarantine messages into a Spam mailbox. You should minimize damage by configuring them to archive accessed messages and not to delete them. See also "Some mail was not downloaded".
Most MTA programs, such as postfix
and exim4
, function as an MDA (mail delivery agent). There are specialized MDAs with filtering capabilities.
Although procmail
(1) has been the de facto standard MDA with filter on GNU/Linux, the author now prefers maildrop
(1). Whichever filtering utility is used, it is a good idea to configure the system to deliver filtered mails to a qmail-style Maildir.
Table 6.10. List of MDA with filter
package | popcon | size | description |
---|---|---|---|
procmail
*
|
V:19, I:84 | 368 | MDA with filter (old) |
mailagent
*
|
V:0.3, I:5 | 1692 | MDA with Perl filter |
maildrop
*
|
V:0.3, I:0.8 | 1000 | MDA with structured filtering language |
maildrop
(1) configuration is described in the maildropfilter documentation. Here is a configuration example for "$HOME/.mailfilter
".
# Local configuration
MAILROOT="$HOME/Mail"
# set this to /etc/mailname contents
MAILHOST="example.dom"
logfile $HOME/.maildroplog
# rules are made to override the earlier value by the later one.
# mailing list mails ?
if ( /^Precedence:.*list/:h || /^Precedence:.*bulk/:h )
{
    # rules for mailing list mails
    # default mailbox for mails from mailing list
    MAILBOX="Inbox-list"
    # default mailbox for mails from debian.org
    if ( /^(Sender|Resent-From|Resent-Sender): .*debian.org/:h )
    {
        MAILBOX="service.debian.org"
    }
    # default mailbox for mails from bugs.debian.org (BTS)
    if ( /^(Sender|Resent-From|Resent-sender): .*@bugs.debian.org/:h )
    {
        MAILBOX="bugs.debian.org"
    }
    # mailbox for each properly maintained mailing list with "List-Id: foo" or "List-Id: ...<foo.bar>"
    if ( /^List-Id: ([^<]*<)?([^<>]*)>?/:h )
    {
        MAILBOX="$MATCH2"
    }
}
else
{
    # rules for non-mailing list mails
    # default incoming box
    MAILBOX="Inbox-unusual"
    # local mails
    if ( /Envelope-to: .*@$MAILHOST/:h )
    {
        MAILBOX="Inbox-local"
    }
    # html mails (99% spams)
    if ( /DOCTYPE html/:b ||\
         /^Content-Type: text\/html/ )
    {
        MAILBOX="Inbox-html"
    }
    # blacklist rule for spams
    if ( /^X-Advertisement/:h ||\
         /^Subject:.*BUSINESS PROPOSAL/:h ||\
         /^Subject:.*URGENT.*ASISSTANCE/:h ||\
         /^Subject: *I NEED YOUR ASSISTANCE/:h )
    {
        MAILBOX="Inbox-trash"
    }
    # whitelist rule for normal mails
    if ( /^From: .*@debian.org/:h ||\
         /^(Sender|Resent-From|Resent-Sender): .*debian.org/:h ||\
         /^Subject: .*(debian|bug|PATCH)/:h )
    {
        MAILBOX="Inbox"
    }
    # whitelist rule for BTS related mails
    if ( /^Subject: .*Bug#.*/:h ||\
         /^(To|Cc): .*@bugs.debian.org/:h )
    {
        MAILBOX="bugs.debian.org"
    }
    # whitelist rule for getmails cron mails
    if ( /^Subject: Cron .*getmails/:h )
    {
        MAILBOX="Inbox-getmails"
    }
}
# check existence of $MAILBOX
`test -d $MAILROOT/$MAILBOX`
if ( $RETURNCODE == 1 )
{
    # create maildir mailbox for $MAILBOX
    `maildirmake $MAILROOT/$MAILBOX`
}
# deliver to maildir $MAILBOX
to "$MAILROOT/$MAILBOX/"
exit
Unlike procmail
, maildrop
does not create missing maildir directories automatically. You must create them manually using maildirmake
(1) in advance as in the example "$HOME/.mailfilter
".
Here is a similar configuration with "$HOME/.procmailrc
" for procmail
(1).
MAILDIR=$HOME/Maildir
DEFAULT=$MAILDIR/Inbox/
LOGFILE=$MAILDIR/Maillog
# clearly bad looking mails: drop them into X-trash and exit
:0
* 1^0 ^X-Advertisement
* 1^0 ^Subject:.*BUSINESS PROPOSAL
* 1^0 ^Subject:.*URGENT.*ASISSTANCE
* 1^0 ^Subject: *I NEED YOUR ASSISTANCE
X-trash/
# Delivering mailinglist messages
:0
* 1^0 ^Precedence:.*list
* 1^0 ^Precedence:.*bulk
* 1^0 ^List-
* 1^0 ^X-Distribution:.*bulk
{
:0
* 1^0 ^Return-path:.*debian-devel-admin@debian.or.jp
jp-debian-devel/
:0
* ^Resent-Sender.*debian-user-request@lists.debian.org
debian-user/
:0
* ^Resent-Sender.*debian-devel-request@lists.debian.org
debian-devel/
:0
* ^Resent-Sender.*debian-announce-request@lists.debian.org
debian-announce
:0
mailing-list/
}
:0
Inbox/
You need to manually deliver mails to the sorted mailboxes in your home directory from "/var/mail/<username>
" if your home directory becomes full and procmail
(1) fails. After freeing disk space in the home directory, run the following.
# /etc/init.d/${MAILDAEMON} stop
# formail -s procmail </var/mail/<username>
# /etc/init.d/${MAILDAEMON} start
If you are to run a private server on a LAN, you may consider running a POP3 / IMAP4 server for delivering mail to LAN clients.
Table 6.11. List of POP3/IMAP4 servers
package | popcon | size | type | description |
---|---|---|---|---|
qpopper
*
|
V:1.1, I:4 | 636 | POP3 | Qualcomm enhanced BSD POP3 server |
courier-pop
*
|
V:1.6, I:2 | 244 | POP3 | Courier mail server - POP3 server (maildir format only) |
ipopd
*
|
V:0.10, I:0.18 | 212 | POP3 | The University of Washington POP2 and POP3 server |
cyrus-pop3d-2.2
*
|
V:0.18, I:0.3 | 852 | POP3 | Cyrus mail system (POP3 support) |
xmail
*
|
V:0.14, I:0.16 | 836 | POP3 | ESMTP/POP3 mail server |
courier-imap
*
|
V:2, I:3 | 1624 | IMAP | Courier mail server - IMAP server (maildir format only) |
uw-imapd
*
|
V:0.7, I:4 | 280 | IMAP | The University of Washington IMAP server |
cyrus-imapd-2.2
*
|
V:0.4, I:0.6 | 2632 | IMAP | Cyrus mail system (IMAP support) |
In old Unix-like systems, the BSD Line printer daemon (lpd) was the standard. Since the standard print output format of free software on Unix-like systems is PostScript, a filter system was used along with Ghostscript to enable printing to non-PostScript printers.
Recently, the Common UNIX Printing System (CUPS) has become the new de facto standard. CUPS uses the Internet Printing Protocol (IPP). IPP is now supported by other OSs such as Windows XP and Mac OS X and has become the new cross-platform de facto standard for remote printing with bi-directional communication capability.
The standard printable data format for applications on the Debian system is PostScript (PS), which is a page description language. Data in PS format is fed into the Ghostscript PostScript interpreter to produce printable data specific to the printer. See Section 11.3.1, “Ghostscript”.
Thanks to the file format dependent auto-conversion feature of the CUPS system, simply feeding any data to the lpr
command should generate the expected print output. (In CUPS, lpr
can be enabled by installing the cups-bsd
package.)
The Debian system has some notable packages for the print servers and utilities.
Table 6.12. List of print servers and utilities
package | popcon | size | port | description |
---|---|---|---|---|
lpr
*
|
V:2, I:2 | 440 | printer (515) | BSD lpr/lpd (Line printer daemon) |
lprng
*
|
V:0.6, I:1.3 | 2904 | , , | , , (Enhanced) |
cups
*
|
V:33, I:44 | 15540 | IPP (631) | Internet Printing CUPS server |
cups-client
*
|
V:17, I:46 | 908 | , , |
System V printer commands for CUPS: lp (1), lpstat (1), lpoptions (1), cancel (1), lpmove (8), lpinfo (8), lpadmin (8), …
|
cups-bsd
*
|
V:7, I:41 | 420 | , , |
BSD printer commands for CUPS: lpr (1), lpq (1), lprm (1), lpc (8)
|
cups-driver-gutenprint
*
|
V:12, I:38 | 1212 | Not applicable | printer drivers for CUPS |
You can configure the CUPS system by pointing your web browser to "http://localhost:631/".
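For example, once a print queue is configured, the BSD-style commands from the cups-bsd package can be used as follows ("queue_name" and the file name are placeholders in this sketch).

```
$ lpr -P queue_name report.ps   # print a PostScript file
$ lpq -P queue_name             # list jobs in this queue
$ lprm -P queue_name 123        # remove job number 123
```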
The Secure SHell (SSH) is the secure way to connect over the Internet. A free version of SSH called OpenSSH is available as openssh-client
and openssh-server
packages in Debian.
Table 6.13. List of remote access server and utilities
package | popcon | size | tool | description |
---|---|---|---|---|
openssh-client
*
|
V:52, I:99 | 2104 |
ssh (1)
|
Secure shell client |
openssh-server
*
|
V:70, I:83 | 700 |
sshd (8)
|
Secure shell server |
ssh-askpass-fullscreen
*
|
V:0.08, I:0.4 | 92 |
ssh-askpass-fullscreen (1)
|
asks user for a pass phrase for ssh-add (GNOME2) |
ssh-askpass
*
|
V:0.7, I:5 | 156 |
ssh-askpass (1)
|
asks user for a pass phrase for ssh-add (plain X) |
See Section 4.7.3, “Extra security measures for the Internet” if your SSH is accessible from the Internet.
Please use the screen
(1) program to enable the remote shell process to survive an interrupted connection (see Section 9.1, “The screen program”).
"/etc/ssh/sshd_not_to_be_run
" must not be present if one wishes to run the OpenSSH server.
SSH has two authentication protocols.
Table 6.14. List of SSH authentication protocols and methods
SSH protocol | SSH method | description |
---|---|---|
SSH-1 |
"RSAAuthentication "
|
RSA identity key based user authentication |
, , |
"RhostsAuthentication "
|
".rhosts " based host authentication (insecure, disabled)
|
, , |
"RhostsRSAAuthentication "
|
".rhosts " based host authentication combined with RSA host key (disabled)
|
, , |
"ChallengeResponseAuthentication "
|
RSA challenge-response authentication |
, , |
"PasswordAuthentication "
|
password based authentication |
SSH-2 |
"PubkeyAuthentication "
|
public key based user authentication |
, , |
"HostbasedAuthentication "
|
"~/.rhosts " or "/etc/hosts.equiv " based host authentication combined with public key client host authentication (disabled)
|
, , |
"ChallengeResponseAuthentication "
|
challenge-response authentication |
, , |
"PasswordAuthentication "
|
password based authentication |
Be careful about these differences if you are using a non-Debian system.
See "/usr/share/doc/ssh/README.Debian.gz
", ssh
(1), sshd
(8), ssh-agent
(1), and ssh-keygen
(1) for details.
Following are the key configuration files.
Table 6.15. List of SSH configuration files
configuration file | description of configuration file |
---|---|
/etc/ssh/ssh_config
|
SSH client defaults, see ssh_config (5)
|
/etc/ssh/sshd_config
|
SSH server defaults, see sshd_config (5)
|
~/.ssh/authorized_keys
|
default public SSH keys that clients use to connect to this account on this SSH server |
~/.ssh/identity
|
secret SSH-1 RSA key of the user |
~/.ssh/id_rsa
|
secret SSH-2 RSA key of the user |
~/.ssh/id_dsa
|
secret SSH-2 DSA key of the user |
See ssh-keygen
(1), ssh-add
(1) and ssh-agent
(1) for how to use public and secret SSH keys.
Make sure to verify settings by testing the connection. In case of any problem, use "ssh -v
".
You can change the pass phrase to encrypt local secret SSH keys later with "ssh-keygen -p
".
You can add options to the entries in "~/.ssh/authorized_keys
" to limit hosts and to run specific commands. See sshd
(8) for details.
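For example, the following single "~/.ssh/authorized_keys
" line (the key data, command, and network shown are hypothetical) restricts a key to running one command from one network and disables forwarding.

```
from="192.0.2.*",command="/usr/local/bin/backup.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAAB3NzaC1yc2E... user1@client
```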
The following starts an ssh
(1) connection from a client.
Table 6.16. List of SSH client startup examples
command | description |
---|---|
ssh username@hostname.domain.ext
|
connect with default mode |
ssh -v username@hostname.domain.ext
|
connect with default mode with debugging messages |
ssh -1 username@hostname.domain.ext
|
force to connect with SSH version 1 |
ssh -1 -o RSAAuthentication=no -l username hostname.domain.ext
|
force to use password with SSH version 1 |
ssh -o PreferredAuthentications=password -l username hostname.domain.ext
|
force to use password with SSH version 2 |
If you use the same user name on the local and the remote host, you can eliminate typing "username@
". Even if you use different user name on the local and the remote host, you can eliminate it using "~/.ssh/config
". For Debian Alioth service with account name "foo-guest
", you set "~/.ssh/config
" to contain the following.
Host alioth.debian.org svn.debian.org git.debian.org
    User foo-guest
For the user, ssh
(1) functions as a smarter and more secure telnet
(1). Unlike the telnet
command, the ssh
command does not bomb on the telnet
escape character (initial default CTRL-]).
To establish a pipe to connect to port 25 of remote-server
from port 4025 of localhost
, and to port 110 of remote-server
from port 4110 of localhost
through ssh
, execute on the local host as the following.
# ssh -q -L 4025:remote-server:25 -L 4110:remote-server:110 username@remote-server
This is a secure way to make connections to SMTP/POP3 servers over the Internet. Set the "AllowTcpForwarding
" entry to "yes
" in "/etc/ssh/sshd_config
" of the remote host.
One can avoid having to remember passwords for remote systems by using "RSAAuthentication
" (SSH-1 protocol) or "PubkeyAuthentication
" (SSH-2 protocol).
On the remote system, set the respective entries, "RSAAuthentication yes
" or "PubkeyAuthentication yes
", in "/etc/ssh/sshd_config
".
Generate authentication keys locally and install the public key on the remote system by the following.
"RSAAuthentication
": RSA key for SSH-1 (deprecated because it is superseded.)
$ ssh-keygen
$ cat .ssh/identity.pub | ssh user1@remote "cat - >>.ssh/authorized_keys"
"PubkeyAuthentication
": RSA key for SSH-2
$ ssh-keygen -t rsa
$ cat .ssh/id_rsa.pub | ssh user1@remote "cat - >>.ssh/authorized_keys"
"PubkeyAuthentication
": DSA key for SSH-2 (deprecated because it is slow.)
$ ssh-keygen -t dsa
$ cat .ssh/id_dsa.pub | ssh user1@remote "cat - >>.ssh/authorized_keys"
Use of a DSA key for SSH-2 is deprecated because the key is limited in size and slow. There is no longer any reason to work around the RSA patent by using DSA, since that patent has expired. DSA stands for Digital Signature Algorithm. Also see DSA-1571-1.
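The openssh-client
package also provides ssh-copy-id
(1), which installs the public key into "~/.ssh/authorized_keys
" on the remote host like the pipelines above; a sketch (the host name is illustrative).

```
$ ssh-copy-id -i ~/.ssh/id_rsa.pub user1@remote
```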
For "HostbasedAuthentication
" to work in SSH-2, you must adjust the settings of "HostbasedAuthentication
" to "yes
" in both "/etc/ssh/sshd_config
" on the server host and "/etc/ssh/ssh_config
" or "~/.ssh/config
" on the client host.
There are some free SSH clients available for other platforms.
Table 6.17. List of free SSH clients for other platforms
environment | free SSH program |
---|---|
Windows | PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/) (GPL) |
Windows (cygwin) | SSH in cygwin (http://www.cygwin.com/) (GPL) |
Macintosh Classic | macSSH (http://www.macssh.com/) (GPL) |
Mac OS X |
OpenSSH; use ssh in the Terminal application (GPL)
|
It is safer to protect your SSH authentication secret keys with a pass phrase. If a pass phrase was not set, use "ssh-keygen -p
" to set it.
Place your public SSH key (e.g. "~/.ssh/id_rsa.pub
") into "~/.ssh/authorized_keys
" on a remote host using a password-based connection to the remote host as described above.
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
Enter passphrase for /home/<username>/.ssh/id_rsa:
Identity added: /home/<username>/.ssh/id_rsa (/home/<username>/.ssh/id_rsa)
From here on, no remote password is needed for commands such as the following.
$ scp foo <username>@remote.host:foo
Press ^D to terminate the ssh-agent session.
For the X server, the normal Debian startup script executes ssh-agent
as the parent process. So you only need to execute ssh-add
once. For more, read ssh-agent
(1) and ssh-add
(1).
You need to protect the process doing "shutdown -h now
" (see Section 1.1.8, “How to shutdown the system”) from the termination of SSH using the at
(1) command (see Section 9.5.13, “Scheduling tasks once”) by the following.
# echo "shutdown -h now" | at now
Running "shutdown -h now
" in screen
(1) (see Section 9.1, “The screen program”) session is another way to do the same.
If you have problems, check the permissions of configuration files and run ssh
with the "-v
" option.
Use the "-P
" option if you are root and have trouble with a firewall; this avoids the use of server ports 1 — 1023.
If ssh
connections to a remote site suddenly stop working, it may be the result of tinkering by the sysadmin, most likely a change in "host_key
" during system maintenance. After making sure this is the case and nobody is trying to fake the remote host by some clever hack, one can regain a connection by removing the "host_key
" entry from "~/.ssh/known_hosts
" on the local host.
Here are other network application servers.
Table 6.18. List of other network application servers
package | popcon | size | protocol | description |
---|---|---|---|---|
telnetd
*
|
V:0.4, I:1.1 | 156 | TELNET | TELNET server |
telnetd-ssl
*
|
V:0.10, I:0.3 | 152 | , , | , , (SSL support) |
nfs-kernel-server
*
|
V:12, I:21 | 412 | NFS | Unix file sharing |
samba
*
|
V:18, I:31 | 23096 | SMB | Windows file and printer sharing |
netatalk
*
|
V:5, I:9 | 3428 | ATP | Apple/Mac file and printer sharing (AppleTalk) |
proftpd-basic
*
|
V:6, I:7 | 4064 | FTP | General file download |
wu-ftpd
*
|
V:0.4, I:0.6 | 820 | , , | , , |
apache2-mpm-prefork
*
|
V:38, I:42 | 68 | HTTP | General web server |
apache2-mpm-worker
*
|
V:6, I:7 | 68 | , , | , , |
squid
*
|
V:6, I:7 | 1848 | , , | General web proxy server |
squid3
*
|
V:1.5, I:1.8 | 3600 | , , | , , |
slpd
*
|
V:0.14, I:0.2 | 180 | SLP | OpenSLP Server as LDAP server |
bind9
*
|
V:10, I:17 | 1080 | DNS | IP address for other hosts |
dhcp3-server
*
|
V:5, I:10 | 64 | DHCP | IP address of client itself |
Common Internet File System Protocol (CIFS) is the same protocol as Server Message Block (SMB) and is used widely by Microsoft Windows.
Use of a proxy server such as squid
is much more efficient for saving bandwidth than use of a local mirror server holding the full Debian archive contents.
Here are other network application clients.
Table 6.19. List of network application clients
package | popcon | size | protocol | description |
---|---|---|---|---|
netcat
*
|
I:28 | 36 | TCP/IP | TCP/IP swiss army knife |
openssl
*
|
V:56, I:91 | 2380 | SSL | Secure Socket Layer (SSL) binary and related cryptographic tools |
stunnel4
*
|
V:0.6, I:2 | 512 | , , | universal SSL Wrapper |
telnet
*
|
V:13, I:89 | 200 | TELNET | TELNET client |
telnet-ssl
*
|
V:0.2, I:1.1 | 208 | , , | , , (SSL support) |
nfs-common
*
|
V:49, I:81 | 660 | NFS | Unix file sharing |
smbclient
*
|
V:6, I:35 | 45200 | SMB | MS Windows file and printer sharing client |
smbfs
*
|
V:5, I:24 | 56 | , , | mount and umount commands for remote MS Windows file |
ftp
*
|
V:9, I:85 | 168 | FTP | FTP client |
lftp
*
|
V:1.3, I:6 | 1876 | , , | , , |
ncftp
*
|
V:1.4, I:7 | 1276 | , , | full screen FTP client |
wget
*
|
V:33, I:99 | 2364 | HTTP and FTP | web downloader |
curl
*
|
V:7, I:23 | 352 | , , | , , |
bind9-host
*
|
V:43, I:91 | 188 | DNS |
host (1) from bind9, "Priority: standard "
|
dnsutils
*
|
V:14, I:90 | 412 | , , |
dig (1) from bind, "Priority: standard "
|
dhcp3-client
*
|
V:32, I:92 | 60 | DHCP | obtain IP address |
ldap-utils
*
|
V:2, I:7 | 672 | LDAP | obtain data from LDAP server |
The telnet
program enables manual connection to system daemons for their diagnosis.
For testing plain POP3 service, try the following.
$ telnet mail.ispname.net pop3
For testing the TLS/SSL enabled POP3 service of some ISPs, you need a TLS/SSL enabled telnet
client provided by the telnet-ssl
or openssl
packages.
$ telnet -z ssl pop.gmail.com 995
$ openssl s_client -connect pop.gmail.com:995
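Once connected, the POP3 session is driven by typing protocol commands by hand; a hypothetical dialogue (the credentials are placeholders).

```
USER yourname
PASS yourpassword
STAT
LIST
QUIT
```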
The following RFCs provide required knowledge to each system daemon.
The port usage is described in "/etc/services
".
The X Window System on the Debian system is based on the source from X.Org. As of July 2009, the versions in use are X11R7.1 (etch), X11R7.3 (lenny), X11R7.3 (squeeze) and X11R7.4 (sid).
There are a few (meta)packages provided to ease installation.
Table 7.1. List of key (meta)packages for X Window
(meta)package | popcon | size | description |
---|---|---|---|
xorg
*
|
I:43 | 80 | X libraries, an X server, a set of fonts, and a group of basic X clients and utilities (metapackage) |
xserver-xorg
*
|
V:30, I:51 | 228 | full suite of the X server and its configuration |
xbase-clients
*
|
V:3, I:47 | 132 | miscellaneous assortment of X clients |
x11-common
*
|
V:41, I:92 | 568 | filesystem infrastructure for the X Window System |
xorg-docs
*
|
I:6 | 1956 | miscellaneous documentation for the X.Org software suite |
menu
*
|
V:28, I:52 | 2060 | generate the Debian menu for all menu-aware applications |
gksu
*
|
V:23, I:46 | 540 |
Gtk+ frontend to su (1) or sudo (8)
|
menu-xdg
*
|
I:47 | 76 | convert the Debian menu structure to the freedesktop.org xdg menu structure |
xdg-utils
*
|
V:16, I:46 | 300 | utilities to integrate desktop environment provided by the freedesktop.org |
gnome-desktop-environment
*
|
I:29 | 44 | standard GNOME desktop environment (metapackage) |
kde-standard
*
|
I:3 | 36 | core KDE desktop environment (metapackage) |
xfce4
*
|
I:4 | 40 | Xfce lightweight desktop environment (metapackage) |
lxde-core
*
|
I:2 | 36 | LXDE lightweight desktop environment (metapackage) |
fluxbox
*
|
V:0.9, I:2 | 4424 | Fluxbox: package for highly configurable and low resource X window manager |
For the basics of X, refer to X
(7) and the LDP XWindow-User-HOWTO.
A desktop environment is usually a combination of an X window manager, a file manager, and a suite of compatible utility programs.
You can set up a full desktop environment such as GNOME, KDE, Xfce, or LXDE, from the aptitude
(8) task menu.
The task menu may be out of sync with the latest package transition state under the Debian unstable
/testing
environment. In such a situation, you need to deselect some (meta)packages listed under the aptitude
(8) task menu to avoid package conflicts. When deselecting (meta)packages, you must manually select certain packages providing their dependencies, to avoid having them deleted automatically.
You may alternatively set up a simple environment manually with just an X window manager such as Fluxbox.
See Window Managers for X for a guide to X window managers and desktop environments.
Debian menu system provides a general interface for both text- and X-oriented programs with update-menus
(1) from the menu
package. Each package installs its menu data in the "/usr/share/menu/
" directory. See "/usr/share/menu/README
".
Each package which is compliant to Freedesktop.org's xdg menu system installs its menu data provided by "*.desktop
" under "/usr/share/applications/
". Modern desktop environments which are compliant to Freedesktop.org standard use these data to generate their menu using the xdg-utils
package. See "/usr/share/doc/xdg-utils/README
".
In order to obtain access to the traditional Debian menu under GNOME desktop environment, you must install the menu-xdg
package, click "System" → "Preference" → "Main Menu", and check the box for "Debian".
You may need to do something similar for other modern desktop environments compliant with the Freedesktop.org standard.
The X Window System is activated as a combination of the server and client programs. The meaning for the words server and client with respect to the words local and remote requires attention here.
Table 7.2. List of server/client terminology
type | description |
---|---|
X server | a program run on a local host connected to the user's display and input devices. |
X client | a program run on a remote host that processes data and talks to the X server. |
application server | a program run on a remote host that processes data and talks to the clients. |
application client | a program run on a local host connected to the user's display and input devices. |
See xorg
(1) for X server information.
X server (post-lenny
) has been rewritten to take more of its configuration from standardized OS services such as HAL and D-Bus, rather than from "/etc/X11/xorg.conf
". So the contents of "/etc/X11/xorg.conf
" are shrinking. You may need to work around transitional problems of the X server.
The following (re)configures an X server by generating a new "/etc/X11/xorg.conf
" file using dexconf
(1).
# dpkg-reconfigure --priority=low x11-common
# dpkg-reconfigure --priority=low xserver-xorg
If you have manually edited this "/etc/X11/xorg.conf
" file but would like it to be automatically updated again, run the following command.
$ sudo dpkg-reconfigure -phigh xserver-xorg
Please check your X configuration with respect to the specification of your monitor carefully. For a large high-resolution CRT monitor, it is a good idea to set the refresh rate as high as your monitor can handle (85 Hz is great, 75 Hz is OK) to reduce flicker. For an LCD monitor, the slower standard refresh rate (60 Hz) is usually fine due to its slower response.
Be careful not to use too high a refresh rate, which may cause fatal hardware failure of your monitor system.
There are several ways of getting the "X server" (display side) to accept connections from an "X client" (application side).
Table 7.3. List of connection methods to the X server
method | package | popcon | size | user | encryption | pertinent use |
---|---|---|---|---|---|---|
xhost command
|
xbase-clients
*
|
V:3, I:47 | 132 | unchecked | no | deprecated |
xauth command
|
xbase-clients
*
|
V:3, I:47 | 132 | checked | no | local connection via pipe |
ssh -X command
|
openssh-client
*
|
V:52, I:99 | 2104 | checked | yes | remote network connection |
GNOME display manager |
gdm
*
|
V:22, I:33 | 16548 | checked | no(XDMCP) | local connection via pipe |
KDE display manager |
kdm
*
|
V:8, I:11 | 5510 | checked | no(XDMCP) | local connection via pipe |
X display manager |
xdm
*
|
V:0.7, I:1.8 | 780 | checked | no(XDMCP) | local connection via pipe |
WindowMaker display manager |
wdm
*
|
V:24, I:84 | 1992 | checked | no(XDMCP) | local connection via pipe |
LTSP display manager |
ldm
*
|
V:0.02, I:0.09 | 392 | checked | yes | remote SSH network connection (thin client) |
Do not use a remote TCP/IP connection over an unsecured network for X connections unless you have a very good reason, such as use of encryption. A remote TCP/IP socket connection without encryption is prone to eavesdropping attacks and is disabled by default on the Debian system. Use "ssh -X
".
Do not use an XDMCP connection over an unsecured network either. It sends data via UDP/IP without encryption and is prone to eavesdropping attacks.
You can dare to enable remote TCP/IP connection by setting "DisallowTCP=false
" in "/etc/gdm/gdm.conf
" to override "/usr/share/gdm/defaults.conf
" and by removing "-nolisten
" from lines found by "find /etc/X11 -type f -print0 | xargs -0 grep nolisten
", if you are in the fully secured environment.
LTSP stands for Linux Terminal Server Project.
The X Window System is usually started as an X session which is the combination of an X server and connecting X clients. For the normal desktop system, both of them are executed on a workstation.
The X session is started by the following.
startx
command started from the command line
*dm
started from the end of the start up script in "/etc/rc?.d/
" ("?
" corresponding to the runlevel) directory
The start up script for the display manager daemons checks the content of the "/etc/X11/default-display-manager
" file before actually executing themselves. This ensures to have only one X display manager daemon program activated.
See Section 8.3.5, “Specific locale only under X Window” for initial environment variables of the X display manager.
Essentially, all these programs execute the "/etc/X11/Xsession
" script. Then the "/etc/X11/Xsession
" script performs run-parts
(8) like action to execute scripts in the "/etc/X11/Xsession.d/
" directory. This is essentially an execution of a first program which is found in the following order with the exec
builtin command.
Script specified as the argument of "/etc/X11/Xsession
" by the X display manager, if it is defined.
"~/.xsession
" or "~/.Xsession
" script, if it is defined.
"/usr/bin/x-session-manager
" command, if it is defined.
"/usr/bin/x-window-manager
" command, if it is defined.
"/usr/bin/x-terminal-emulator
" command, if it is defined.
This process is affected by the content of "/etc/X11/Xsession.options
". The exact programs to which these "/usr/bin/x-*
" commands point, are determined by the Debian alternative system and changed by "update-alternatives --config x-session-manager
", etc.
gdm
(1) lets you select the session type (or desktop environment: Section 7.2, “Setting up desktop environment”), and language (or locale: Section 8.3, “The locale”) of the X session from its menu. It keeps the selected default value in "~/.dmrc
" as the following.
[Desktop]
Session=default
Language=ja_JP.UTF-8
On a system where "/etc/X11/Xsession.options
" contains a line "allow-user-xsession
" without preceding "#
" characters, any user who defines "~/.xsession
" or "~/.Xsession
" is able to customize the action of "/etc/X11/Xsession
" by completely overriding the system code. The last command in the "~/.xsession
" file should use form of "exec some-window/session-manager
" to start your favorite X window/session managers.
Here are new methods to customize the X session without completely overriding the system code as above.
gdm
can select a specific session and set it as the argument of "/etc/X11/Xsession
".
"~/.xsessionrc
" file is executed as a part of start up process. (desktop independent)
"~/.gnomerc
" file is executed as a part of start up process. (GNOME desktop only)
"~/.gnome2/session
" file etc.
The use of "ssh -X
" enables a secure connection from a local X server to a remote application server.
Set "X11Forwarding
" entries to "yes
" in "/etc/ssh/sshd_config
" of the remote host, if you want to avoid "-X
" command-line option.
Start the X server on the local host.
Open an xterm
in the local host.
Run ssh
(1) to establish a connection with the remote site as the following.
localname @ localhost $ ssh -q -X loginname@remotehost.domain
Password:
Run an X application command, e.g. "gimp
", on the remote site as the following.
loginname @ remotehost $ gimp &
This method can display the output from a remote X client as if it were locally connected through a local UNIX domain socket.
A secure X terminal via the Internet, which displays an entire remotely run X desktop environment, can easily be achieved by using a specialized package such as ldm
. Your local machine becomes a secure thin client to the remote application server connected via SSH.
If you want to add similar feature to your normal display manager gdm
, create an executable shell script at "/usr/local/bin/ssh-session
" as the following.
#!/bin/sh -e
# Based on gdm-ssh-session in gdm source (GPL)
ZENITY=$(type -p zenity)
TARGETHOST=$($ZENITY --width=600 \
  --title "Host to connect to" --entry \
  --text "Enter the name of the host you want to log in to as user@host.dom:")
TARGETSESSION=$($ZENITY --width=600 --height=400 \
  --title "Remote session name" --list --radiolist --text "Select one" \
  --column " " --column "Session" --column "description" --print-column 2 \
  TRUE  "/etc/X11/Xsession" "Debian" \
  FALSE "/etc/X11/xinit/Xclients" "RH variants" \
  FALSE "gnome-session" "GNOME session" \
  FALSE "xterm" "Safe choice" \
  FALSE "rxvt" "Safe choice" \
  FALSE "gnome-terminal" "Safe choice")
echo "Connecting to "$TARGETHOST" with $TARGETSESSION"
/usr/bin/ssh -A -X -T -n "$TARGETHOST" "$TARGETSESSION"
#SSH_ASKPASS=/usr/bin/ssh-askpass /usr/bin/ssh -A -X -T -n "$TARGETHOST" "$TARGETSESSION"
Add the following to "/etc/dm/Sessions/ssh.desktop
".
[Desktop Entry]
Encoding=UTF-8
Name=SSH
Comment=This session logs you into a remote host using ssh
Exec=/usr/local/bin/ssh-session
Type=Application
Fontconfig 2.0 was created in 2002 to provide a distribution independent library for configuring and customizing font access. Debian after squeeze
uses Fontconfig 2.0 for its font configuration.
Font support on the X Window System can be summarized as follows.
Legacy X server side font support system
Modern X client side font support system
fonts.conf
(5) for its configuration.
Table 7.4. Table of packages to support X Window font systems
package | popcon | size | description |
---|---|---|---|
xfonts-utils
*
|
V:23, I:71 | 516 | X Window System font utility programs |
libxft2
*
|
V:44, I:74 | 148 | Xft, a library that connects X applications with the FreeType font rasterization library |
libfreetype6
*
|
V:58, I:87 | 740 | FreeType 2.0 font rasterization library |
fontconfig
*
|
V:21, I:73 | 472 | Fontconfig, a generic font configuration library — support binaries |
fontconfig-config
*
|
I:81 | 440 | Fontconfig, a generic font configuration library — configuration data |
You can check font configuration information by the following.
"xset q
" for core X11 font path
"fc-match
" for fontconfig font default
"fc-list
" for available fontconfig fonts
"The Penguin and Unicode" is a good overview of modern X Window System. Other documentations at http://unifont.org/ should provide good information on Unicode fonts, Unicode-enabled software, internationalization, and Unicode usability issues on free/libre/open source (FLOSS) operating systems.
There are 2 major types of computer fonts.
While scaling of bitmap fonts produces a jagged image, scaling of outline/stroke fonts produces a smooth image.
Bitmap fonts on the Debian system are usually provided by compressed X11 pcf bitmap font files having their file extension ".pcf.gz
".
Outline fonts on the Debian system are provided by the following.
PostScript Type 1 fonts with the file extensions ".pfb
" (binary font file) and ".afm
" (font metrics file).
TrueType fonts with the file extension ".ttf
".
OpenType is intended to supersede both TrueType and PostScript Type 1.
Table 7.5. Table of corresponding PostScript Type 1 fonts
font package | popcon | size | sans-serif font | serif font | monospace font | source of font |
---|---|---|---|---|---|---|
PostScript | N/A | N/A | Helvetica | Times | Courier | Adobe |
gsfonts * | V:18, I:66 | 4632 | Nimbus Sans L | Nimbus Roman No9 L | Nimbus Mono L | URW (Adobe compatible size) |
gsfonts-x11 * | I:30 | 116 | Nimbus Sans L | Nimbus Roman No9 L | Nimbus Mono L | X font support with PostScript Type 1 fonts. |
t1-cyrillic * | I:1.9 | 5008 | Free Helvetian | Free Times | Free Courier | URW extended (Adobe compatible size) |
lmodern * | V:2, I:16 | 45644 | LMSans* | LMRoman* | LMTypewriter* | scalable PostScript and OpenType fonts based on Computer Modern (from TeX) |
Table 7.6. Table of corresponding TrueType fonts
font package | popcon | size | sans-serif font | serif font | monospace font | source of font |
---|---|---|---|---|---|---|
ttf-mscorefonts-installer * | I:11 | 200 | Arial | Times New Roman | Courier New | Microsoft (Adobe compatible size) (This installs non-free data) |
ttf-liberation * | I:43 | 1724 | Liberation Sans | Liberation Serif | Liberation Mono | Liberation Fonts project (Microsoft compatible size) |
ttf-freefont * | V:10, I:26 | 4204 | FreeSans | FreeSerif | FreeMono | GNU freefont (Microsoft compatible size) |
ttf-dejavu * | I:77 | 68 | DejaVu Sans | DejaVu Serif | DejaVu Sans Mono | DejaVu, Bitstream Vera with Unicode coverage |
ttf-dejavu-core * | I:72 | 2592 | DejaVu Sans | DejaVu Serif | DejaVu Sans Mono | DejaVu, Bitstream Vera with Unicode coverage (sans, sans-bold, serif, serif-bold, mono, mono-bold) |
ttf-dejavu-extra * | I:69 | 5788 | N/A | N/A | N/A | DejaVu, Bitstream Vera with Unicode coverage (oblique, italic, bold-oblique, bold-italic, condensed) |
ttf-unifont * | I:4 | 16060 | N/A | N/A | unifont | GNU Unifont, with all printable character code in Unicode 5.1 Basic Multilingual Plane (BMP) |
DejaVu fonts are based on, and a superset of, Bitstream Vera fonts.
aptitude
(8) helps you find additional fonts easily.
"~Gmade-of::data:font
"
"~nxfonts-
"
"~nttf-
"
Since Free fonts are sometimes limited, installing or sharing some commercial TrueType fonts is an option for Debian users. In order to make this process easy for the user, some convenience packages have been created.
ttf-mathematica4.1
ttf-mscorefonts-installer
You'll have a really good selection of TrueType fonts at the expense of contaminating your Free system with non-Free fonts.
Here are some key points focused on fonts of CJK characters.
Table 7.7. Table of key words used in CJK font names to indicate font types
font type | Japanese font name | Chinese font name | Korean font name |
---|---|---|---|
sans-serif | gothic, ゴチック | hei, gothic | dodum, gulim, gothic |
serif | mincho, 明朝 | song, ming | batang |
A font name such as "VL PGothic" with "P" indicates a proportional font corresponding to the fixed width "VL Gothic" font.
For example, Shift_JIS code table comprises 7070 characters. They can be grouped as the following.
Double-byte characters occupy double width on console terminals which use CJK fixed width fonts. In order to cope with such a situation, a Hanzi Bitmap Font (HBF) file with the file extension ".hbf
" may be deployed for fonts containing single-byte and double-byte characters.
In order to save space for TrueType font files, a TrueType font collection file with the file extension ".ttc
" may be used.
In order to cover the complicated code space of characters, CID-keyed PostScript Type 1 fonts are used with CMap files which start with "%!PS-Adobe-3.0 Resource-CMap
". These are rarely used for normal X display but are used for PDF rendering etc. (see Section 7.7.2, “X utility applications”).
Multiple glyphs are expected for some Unicode code points due to Han unification. Among the most annoying are "U+3001 IDEOGRAPHIC COMMA" and "U+3002 IDEOGRAPHIC FULL STOP", whose character positions differ among CJK countries. Configuring the priority of Japanese-centric fonts over Chinese ones using "~/.fonts.conf
" should give peace of mind to Japanese users.
Here is a list of basic office applications (OO is OpenOffice.org).
Table 7.8. List of basic X office applications
package | popcon | package size | type | description |
---|---|---|---|---|
openoffice.org-writer
*
|
V:21, I:41 | 26892 | OO | word processor |
openoffice.org-calc
*
|
V:21, I:40 | 20524 | OO | spreadsheet |
openoffice.org-impress
*
|
V:18, I:40 | 4208 | OO | presentation |
openoffice.org-base
*
|
V:16, I:39 | 10708 | OO | database management |
openoffice.org-draw
*
|
V:18, I:40 | 10720 | OO | vector graphics editor (draw) |
openoffice.org-math
*
|
V:17, I:40 | 2712 | OO | mathematical equation/formula editor |
abiword
*
|
V:6, I:10 | 4776 | GNOME | word processor |
gnumeric
*
|
V:5, I:11 | 7860 | GNOME | spreadsheet |
gimp
*
|
V:12, I:44 | 13560 | GTK | bitmap graphics editor (paint) |
inkscape
*
|
V:15, I:32 | 87436 | GNOME | vector graphics editor (draw) |
dia-gnome
*
|
V:1.4, I:2 | 576 | GNOME | flowchart and diagram editor |
planner
*
|
V:0.4, I:4 | 6704 | GNOME | project management |
kword
*
|
V:0.6, I:1.5 | 5334 | KDE | word processor |
kspread
*
|
V:0.6, I:1.6 | 8792 | KDE | spreadsheet |
kpresenter
*
|
V:0.5, I:1.3 | 2877 | KDE | presentation |
kexi
*
|
V:0.2, I:1.6 | 7625 | KDE | database management |
karbon
*
|
V:0.6, I:1.4 | 2403 | KDE | vector graphics editor (draw) |
krita
*
|
V:0.6, I:1.6 | 11822 | KDE | bitmap graphics editor (paint) |
kchart
*
|
V:0.8, I:1.9 | 2503 | KDE | graph and chart drawing program |
kformula
*
|
V:0.4, I:1.3 | 2065 | KDE | mathematical equation/formula editor |
kplato
*
|
V:0.15, I:1.4 | 5978 | KDE | project management |
Here is a list of basic utility applications which caught my eye.
Table 7.9. List of basic X utility applications
package | popcon | package size | type | description |
---|---|---|---|---|
evince
*
|
V:26, I:38 | 1116 | GNOME | document(pdf) viewer |
okular
*
|
V:4, I:6 | 3408 | KDE | document(pdf) viewer |
evolution
*
|
V:16, I:34 | 4724 | GNOME | Personal information Management (groupware and email) |
kontact
*
|
V:1.3, I:8 | 1326 | KDE | Personal information Management (groupware and email) |
scribus
*
|
V:0.5, I:3 | 26888 | KDE | desktop page layout editor |
glabels
*
|
V:0.16, I:0.7 | 1148 | GNOME | label editor |
kbarcode
*
|
V:0.05, I:0.3 | 2180 | KDE | barcode and label printing application |
gnucash
*
|
V:0.7, I:2 | 5748 | GNOME | personal accounting |
homebank
*
|
V:0.09, I:0.4 | 1092 | GTK | personal accounting |
kmymoney2
*
|
V:0.06, I:0.5 | 144 | KDE | personal accounting |
xsane
*
|
V:5, I:36 | 748 | GTK | scanner frontend |
The poppler-data
package (previously non-free, see Section 11.3.1, “Ghostscript”) needs to be installed for evince
and okular
to display CJK PDF documents using Cmap data (Section 7.6.3, “CJK fonts”).
Installing software such as scribus
(KDE) on the GNOME desktop environment is quite acceptable since the corresponding functionality is not available under the GNOME desktop environment. But installing too many packages with duplicated functionality clutters your menu.
xmodmap
(1) is a utility for modifying keymaps and pointer button mappings in the X Window System.
To get the keycode, run xev
(1) under X and press keys. To get the meaning of a keysym, look into the MACRO definitions in "/usr/include/X11/keysymdef.h
" file (x11proto-core-dev
package). All "#define
" statements in this file are named as "XK_
" prepended to keysym names.
Most traditional X client programs, such as xterm
(1), can be started with a set of standard command line options to specify geometry, font, and display.
They also use the X resource database to configure their appearance. The system-wide defaults of X resources are stored in "/etc/X11/Xresources/*
" and application defaults of them are stored in "/etc/X11/app-defaults/*
". Use these settings as the starting points.
The "~/.Xresources
" file is used to store user resource specifications. This file is automatically merged into the default X resources upon login. To make changes to these settings and make them effective immediately, merge them into the database using the following command.
$ xrdb -merge ~/.Xresources
See x
(7) and xrdb
(1).
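As a sketch, a user's "~/.Xresources
" might contain entries such as the following (the resource values are illustrative).

```
! hypothetical xterm(1) settings
XTerm*background: black
XTerm*foreground: grey90
XTerm*scrollBar: true
XTerm*saveLines: 1000
```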
Learn everything about xterm
(1) at http://dickey.his.com/xterm/xterm.faq.html.
Never start the X display/session manager under the root account by typing in root
to the prompt of the display manager such as gdm
because it is considered unsafe (insecure), even when you plan to perform administrative activities. The entire X architecture is considered insecure if run as root. You must always use the lowest privilege level possible, such as a regular user account.
Easy ways to run a particular X client, e.g. "foo
" as root is to use sudo
(8) etc. as the following.
$ sudo foo &
$ sudo -s
# foo &
$ gksu foo &
$ ssh -X root@localhost
# foo &
Use of ssh
(1) just for this purpose as above is a waste of resources.
In order for the X client to connect to the X server, please note the following.
"$XAUTHORITY
" and "$DISPLAY
" environment variables must be copied to the new user's ones.
The file specified by the "$XAUTHORITY
" environment variable must be readable by the new user.
The gksu
package (popcon: V:23, I:46) is a specialized GTK+ GUI program for gaining root privileges. It can be configured to use su
(1) or sudo
(8) as its backend depending on the "/apps/gksu/sudo-mode
" gconf key. You can edit the gconf key using gconf-editor
(1) (menu: "Applications" → "System Tools" → "Configuration Editor").
Multilingualization (M17N) or Native Language Support for application software is done in 2 steps.
There are 17, 18, and 10 letters between "m" and "n", "i" and "n", and "l" and "n" in multilingualization, internationalization, and localization, which correspond to M17N, I18N, and L10N respectively.
Modern software such as GNOME and KDE is multilingualized. It is internationalized by making it handle UTF-8 data and localized by providing translated messages through the gettext
(1) infrastructure. Translated messages may be provided as separate localization packages. They can be selected simply by setting pertinent environment variables to the appropriate locale.
The simplest representation of text data is ASCII, which is sufficient for English and uses fewer than 128 characters (representable with 7 bits). In order to support many more characters for international use, many character encoding systems have been invented. The modern and sensible encoding system is UTF-8, which can handle practically all the characters known to humans (see Section 8.3.1, “Basics of encoding”).
See Introduction to i18n for details.
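The ASCII-compatibility of UTF-8 and conversion between encodings can be demonstrated with iconv(1); this is a sketch, not from the original text, and the sample word is arbitrary.

```shell
# "café" encoded in ISO-8859-1 uses the single byte 0xE9 for "é".
latin1=$(printf 'caf\351')
# Converting to UTF-8 turns "é" into the 2-byte sequence 0xC3 0xA9.
utf8=$(printf '%s' "$latin1" | iconv -f ISO-8859-1 -t UTF-8)
# Converting back recovers the original bytes exactly.
back=$(printf '%s' "$utf8" | iconv -f UTF-8 -t ISO-8859-1)
[ "$back" = "$latin1" ] && echo 'round-trip OK'
```

The pure-ASCII prefix "caf" is byte-identical in both encodings, illustrating why ASCII data is always valid UTF-8.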
The international hardware support is enabled with localized hardware configuration data.
The Debian system can be configured to work with many international keyboard arrangements.
Table 8.1. List of keyboard reconfiguration methods
environment | command |
---|---|
Linux console |
dpkg-reconfigure --priority=low console-data
|
X Window |
dpkg-reconfigure --priority=low xserver-xorg
|
This supports keyboard input for accented characters of many European languages with its dead-key function. For Asian languages, you need more complicated input method support such as IBus discussed next.
Setup of multilingual input for the Debian system is simplified by using the IBus family of packages with the im-config
package. The list of IBus packages is the following.
Table 8.2. List of input method supports with IBus
package | popcon | size | supported locale |
---|---|---|---|
ibus * | V:0.2, I:0.2 | 4220 | input method framework using dbus |
ibus-anthy * | V:0.04, I:0.10 | 684 | Japanese |
ibus-skk * | V:0.00, I:0.03 | 404 | Japanese |
ibus-pinyin * | V:0.06, I:0.09 | 1184 | Chinese (for zh_CN) |
ibus-chewing * | V:0.01, I:0.02 | 252 | Chinese (for zh_TW) |
ibus-hangul * | V:0.01, I:0.03 | 216 | Korean |
ibus-table * | V:0.05, I:0.10 | 680 | table engine for IBus |
ibus-table-thai * | I:0.00 | 156 160 | Thai |
ibus-unikey * | V:0.00, I:0.00 | 316 | Vietnamese |
ibus-m17n * | V:0.02, I:0.05 | 180 | Multilingual: Indic, Arabic and others |
The kinput2 method and other locale-dependent Asian classic input methods still exist but are not recommended for the modern UTF-8 X environment. The SCIM and uim tool chains are a slightly older approach to international input methods for the modern UTF-8 X environment.
I find the Japanese input method started under an English environment ("en_US.UTF-8
") very useful. Here is how I did this with IBus.
ibus-anthy
with its recommended packages such as im-config
.
im-config
" from user's shell and select "ibus
".
im-config
".
Please note the following.
im-config
(8) behaves differently depending on whether the command is executed by root or not.
im-config
(8) enables the best input method on the system as the default without any user action.
im-config
(8) is disabled by default to prevent cluttering.
If you wish to input without going through XIM, set "$XMODIFIERS
" value to "none" while starting a program. This may be the case if you use Japanese input infrastructure egg
on emacs
(1). From shell, execute as the following.
$ XMODIFIERS=none emacs
In order to adjust the command executed by the Debian menu, place customized configuration in "/etc/menu/
" following method described in "/usr/share/doc/menu/html
".
The Linux console can display only a limited set of characters. (You need to use a special terminal program such as jfbterm
(1) to display non-European languages on the non-X console.)
X Window can display any characters in UTF-8 as long as the required font data exists. (The encoding of the original font data is taken care of by the X Window System and is transparent to the user.)
The following focuses on the locale for applications run under X Window environment started from gdm
(1).
The environment variable "LANG=xx_YY.ZZZZ
" sets the locale to language code "xx
", country code "yy
", and encoding "ZZZZ
" (see Section 1.5.2, “"$LANG
" variable”).
Current Debian system normally sets the locale as "LANG=xx_YY.UTF-8
". This uses the UTF-8 encoding with the Unicode character set. This UTF-8 encoding system is a multibyte code system and uses code points smartly. The ASCII data, which consist only with 7-bit range codes, are always valid UTF-8 data consisting only with 1 byte per character.
Previous Debian system used to set the locale as "LANG=C
" or "LANG=xx_YY
" (without ".UTF-8
").
LANG=C
" or "LANG=POSIX
".
LANG=xx_YY
".
The actual traditional encoding system used for "LANG=xx_YY
" can be identified by checking "/usr/share/i18n/SUPPORTED
". For example, "en_US
" uses "ISO-8859-1
" encoding and "fr_FR@euro
" uses "ISO-8859-15
" encoding.
For meaning of encoding values, see Table 11.2, “List of encoding values and their usage”.
The UTF-8 encoding is the modern and sensible text encoding system for I18N and enables the representation of Unicode characters, i.e., practically all characters known to humans. UTF stands for Unicode Transformation Format.
I recommend using a UTF-8 locale for your desktop, e.g., "LANG=en_US.UTF-8
". The first part of the locale determines messages presented by applications. For example, gedit
(1) (text editor for the GNOME Desktop) under "LANG=fr_FR.UTF-8
" locale can display and edit Chinese character text data while presenting menus in French, as long as required fonts and input methods are installed.
I also recommend setting the locale using only the "$LANG
" environment variable. I do not see much benefit in setting a complicated combination of "LC_*
" variables (see locale
(1)) under UTF-8 locale.
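The relation between "$LANG" and the "LC_*" variables can be checked quickly with locale(1); this is a sketch, not from the original text.

```shell
# With LC_ALL unset, each LC_* category inherits its value from LANG;
# locale(1) prints such implied values in double quotes.
LC_ALL= LANG=C locale | grep '^LC_TIME'
# Setting LC_TIME explicitly overrides LANG for time formats only;
# locale(1) then prints it without quotes.
LC_ALL= LANG=C LC_TIME=en_US.UTF-8 locale | grep '^LC_TIME'
```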
Even plain English text may contain non-ASCII characters, e.g. left and right quotation marks are not available in ASCII.
“double quoted text” ‘single quoted text’
When ASCII plain text data is converted to UTF-8, it has exactly the same content and size as the original ASCII data. So you lose nothing by deploying the UTF-8 locale.
Some programs consume more memory after supporting I18N. This is because they are coded to use UTF-32 (UCS4) internally to support Unicode for speed optimization, and consume 4 bytes per ASCII character regardless of the locale selected. Again, you lose nothing by deploying the UTF-8 locale.
The vendor-specific old non-UTF-8 encoding systems tend to have minor but annoying differences in some characters, such as graphic ones, for many countries. The deployment of UTF-8 by modern OSs has practically solved these conflicting encoding issues.
In order for the system to access a particular locale, the locale data must be compiled from the locale database. (The Debian system does not come with all available locales pre-compiled unless you installed the locales-all
package.) The full list of supported locales available for compiling is listed in "/usr/share/i18n/SUPPORTED
". This lists all the proper locale names. The following lists all the available UTF-8 locales already compiled to the binary form.
$ locale -a | grep utf8
The following command execution reconfigures the locales
package.
# dpkg-reconfigure locales
This process involves 3 steps.
/etc/default/locale
" for use by PAM (see Section 4.5, “PAM and NSS”)
The list of available locales should include "en_US.UTF-8
" and all the interesting languages with "UTF-8
".
The recommended default locale is "en_US.UTF-8
" for US English. For other languages, please make sure to chose locale with "UTF-8
". Any one of these settings can handle any international characters.
Although setting the locale to "C
" uses US English messages, it handles only ASCII characters.
The value of the "$LANG
" environment variable is set and changed by many applications.
login
(1) for the local Linux console programs
ssh
(1) for the remote console programs
gdm
(1) for all X programs
~/.xsessionrc
" for all X programs (lenny
feature)
~/.bashrc
", for all console programs
It is a good idea to set the system-wide default locale to "en_US.UTF-8
" for maximum compatibility.
You can choose a specific locale only under X Window, irrespective of your system-wide default locale, using PAM customization (see Section 4.5, “PAM and NSS”) as follows.
This environment should provide you with the best desktop experience with stability. You have access to a functioning character terminal with readable messages even when the X Window System is not working. This becomes essential for languages which use non-roman characters such as Chinese, Japanese, and Korean.
There may be other ways available as X session manager packages improve, but please read the following as the generic and basic method of setting the locale. For gdm
(1), I know you can select the locale of the X session via its menu.
The following line in the PAM configuration file, such as "/etc/pam.d/gdm
", defines the file location of the language environment.
auth required pam_env.so read_env=1 envfile=/etc/default/locale
Change this to the following.
auth required pam_env.so read_env=1 envfile=/etc/default/locale-x
For Japanese, create a "/etc/default/locale-x
" file with "-rw-r--r-- 1 root root
" permission containing the following.
LANG="ja_JP.UTF-8"
Keep the default "/etc/defaults/locale
" file for other programs as the the following.
LANG="en_US.UTF-8"
This is the most generic technique to customize the locale and makes the menu selection dialog of gdm
(1) itself localized.
Alternatively for this case, you may simply change locale using the "~/.xsessionrc
" file.
For cross-platform data exchanges (see Section 10.1.10, “Removable storage device”), you may need to mount some filesystems with particular encodings. For example, mount
(8) for the vfat filesystem assumes CP437 if used without options. You need to provide an
explicit mount option to use UTF-8 or CP932 for filenames.
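For example, an "/etc/fstab
" entry enabling UTF-8 filenames for a vfat USB stick might look like the following; the device name and mount point are illustrative.

```
# vfat with UTF-8 filenames, mountable by an ordinary user
/dev/sdb1 /media/usb vfat user,noauto,utf8 0 0
```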
When auto-mounting a hot-pluggable USB memory stick under a modern desktop environment such as GNOME, you may provide such a mount option by right-clicking the icon on the desktop, clicking the "Drive" tab, clicking to expand "Setting", and entering "utf8" in "Mount options:". The next time this memory stick is mounted, mounting with UTF-8 is enabled.
If you are upgrading the system or moving disk drives from an older non-UTF-8 system, file names with non-ASCII characters may be encoded in historic and deprecated encodings such as ISO-8859-1 or eucJP. Please use text conversion tools to convert them to UTF-8. See Section 11.1, “Text data conversion tools”.
Samba uses Unicode for newer clients (Windows NT, 200x, XP) but uses CP850 for older clients (DOS and Windows 9x/Me) by default. This default for older clients can be changed using "dos charset
" in the "/etc/samba/smb.conf
" file, e.g., to CP932 for Japanese.
Translations exist for many of the text messages and documents displayed in the Debian system, such as error messages, standard program output, menus, and manual pages. The GNU gettext(1) tool chain is used as the backend for most translation activities.
The lists under "Tasks" → "Localization" in aptitude
(8) provide an extensive list of useful binary packages which add localized messages to applications and provide translated documentation.
For example, you can obtain localized manpages by installing the manpages-<LANG>
package. To read the Italian-language manpage for <programname> from "/usr/share/man/it/
", execute as the following.
LANG=it_IT.UTF-8 man <programname>
The sort order of characters with sort
(1) is affected by the language choice of the locale. Spanish and English locales sort differently.
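A quick way to see this: in the C locale, sort(1) compares raw byte values, so all capital letters precede all lowercase ones. This is a sketch; the en_US.UTF-8 behavior mentioned in the comment assumes that locale is compiled.

```shell
# C locale: byte-wise comparison, 'Z' (0x5A) sorts before 'a' (0x61)
printf 'Zebra\napple\n' | LC_ALL=C sort
# -> Zebra
#    apple
# Under LC_ALL=en_US.UTF-8, "apple" would sort before "Zebra" instead.
```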
The date format of ls
(1) is affected by the locale. The date formats of "LANG=C ls -l
" and "LANG=en_US.UTF-8 ls -l
" differ (see Section 9.2.5, “Customized display of time and date”).
Number punctuation differs between locales. For example, in the English locale, one thousand one point one is displayed as "1,000.1
", while in the German locale it is displayed as "1.000,1
". You may see this difference in spreadsheet programs.
Here, I describe basic tips to configure and manage systems, mostly from the console.
screen
(1) is a very useful tool for people accessing remote sites via unreliable or intermittent connections since it supports interrupted network connections.
Table 9.1. List of programs to support interrupted network connections
package | popcon | size | description |
---|---|---|---|
screen
*
|
V:11, I:34 | 952 | terminal multiplexer with VT100/ANSI terminal emulation |
screen
(1) not only allows one terminal window to work with multiple processes, but also allows a remote shell process to survive interrupted connections. Here is a typical use scenario of screen
(1).
screen
on a single console.
screen
windows created with ^A c
("Control-A" followed by "c").
screen
windows by ^A n
("Control-A" followed by "n").
You may detach the screen
session by any methods.
^A d
("Control-A" followed by "d") and manually logging out from the remote connection
^A DD
("Control-A" followed by "DD") to have screen
detach and log you out
screen
as "screen -r
".
screen
magically reattaches all previous screen
windows with all actively running programs.
You can save connection fees with screen
for metered network connections such as dial-up and packet ones, because you can leave a process active while disconnected, and then re-attach it later when you connect again.
In a screen
session, all keyboard input is sent to your current window except for the command keystroke. All screen
command keystrokes are entered by typing ^A
("Control-A") plus a single key [plus any parameters]. Here are important ones to remember.
Table 9.2. List of key bindings for screen
key binding | meaning |
---|---|
^A ?
|
show a help screen (display key bindings) |
^A c
|
create a new window and switch to it |
^A n
|
go to next window |
^A p
|
go to previous window |
^A 0
|
go to window number 0 |
^A 1
|
go to window number 1 |
^A w
|
show a list of windows |
^A a
|
send a Ctrl-A to current window as keyboard input |
^A h
|
write a hardcopy of current window to file |
^A H
|
begin/end logging current window to file |
^A ^X
|
lock the terminal (password protected) |
^A d
|
detach screen session from the terminal |
^A DD
|
detach screen session and log out |
See screen
(1) for details.
Many programs record their activities under the "/var/log/
" directory.
klogd
(8)
syslogd
(8)
See Section 3.5.9, “The system message” and Section 3.5.10, “The kernel message”.
Here are notable log analyzers ("~Gsecurity::log-analyzer
" in aptitude
(8)).
Table 9.3. List of system log analyzers
package | popcon | size | description |
---|---|---|---|
logwatch
*
|
V:3, I:3 | 2592 | log analyzer with nice output written in Perl |
fail2ban
*
|
V:4, I:5 | 660 | ban IPs that cause multiple authentication errors |
analog
*
|
V:1.0, I:16 | 4520 | web server log analyzer |
awstats
*
|
V:1.8, I:3 | 5200 | powerful and featureful web server log analyzer |
sarg
*
|
V:1.9, I:2 | 644 | squid analysis report generator |
pflogsumm
*
|
V:0.3, I:0.7 | 160 | Postfix log entry summarizer |
syslog-summary
*
|
V:0.2, I:0.9 | 84 | summarize the contents of a syslog log file |
lire
*
|
V:0.15, I:0.17 | 5304 | full-featured log analyzer and report generator |
fwlogwatch
*
|
V:0.10, I:0.2 | 440 | firewall log analyzer |
squidview
*
|
V:0.11, I:0.6 | 244 | monitor and analyze squid access.log files |
visitors
*
|
V:0.09, I:0.3 | 228 | fast web server log analyzer |
swatch
*
|
V:0.06, I:0.2 | 112 | log file viewer with regexp matching, highlighting, and hooks |
crm114
*
|
V:0.06, I:0.18 | 1300 | Controllable Regex Mutilator and Spam Filter (CRM114) |
icmpinfo
*
|
V:0.04, I:0.2 | 84 | interpret ICMP messages |
CRM114 provides a language infrastructure to write fuzzy filters with the TRE regex library. Its popular use is as a spam mail filter, but it can also be used as a log analyzer.
The simple use of script
(1) (see Section 1.4.9, “Recording the shell activities”) to record shell activity produces a file with control characters. This can be avoided by using col
(1) as the following.
$ script
Script started, file is typescript
Do whatever … and press Ctrl-D to exit script.
$ col -bx < typescript > cleanedfile
$ vim cleanedfile
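The effect of col(1) can be seen directly on a tiny sample; this sketch feeds it a string containing a literal backspace character.

```shell
# "ab<BS>c": the backspace moves back over "b", then "c" overwrites it;
# -b keeps only the last character written to each column.
printf 'ab\bc\n' | col -bx
# -> ac
```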
If you don't have script
(for example, during the boot process in the initramfs), you can use the following instead.
$ sh -i 2>&1 | tee typescript
Some x-terminal-emulator
such as gnome-terminal
can record console activity. You may wish to extend the line buffer for scrollback.
You may use screen
(1) with "^A H
" (see Section 9.1.2, “Key bindings for the screen command”) to perform recording of console.
You may use emacs
(1) with "M-x shell
", "M-x eshell
", or "M-x term
" to perform recording of console. You may later use "C-x C-w
" to write the buffer to a file.
Although pager tools such as more
(1) and less
(1) (see Section 1.4.5, “The pager”) and custom tools for highlighting and formatting (see Section 11.1.8, “Highlighting and formatting plain text data”) can display text data nicely, general purpose editors (see Section 1.4.6, “The text editor”) are most versatile and customizable.
For vim
(1) and its pager mode alias view
(1), ":set hls
" enables highlighted search.
The default display format of time and date by the "ls -l
" command depends on the locale (see Section 1.2.6, “Timestamps” for value). The "$LANG
" variable is referred first and it can be overridden by the "$LC_TIME
" variable.
The actual default display format for each locale depends on the version of the standard C library (the libc6
package) used. That is, different releases of Debian had different defaults.
If you really wish to customize this display format of time and date beyond the locale, you should set the time style value by the "--time-style
" argument or by the "$TIME_STYLE
" value (see ls
(1), date
(1), "info coreutils 'ls invocation'
").
Table 9.4. Display examples of time and date for the "ls -l
" command for lenny
time style value | locale | display of time and date |
---|---|---|
iso
|
any |
01-19 00:15
|
long-iso
|
any |
2009-01-19 00:15
|
full-iso
|
any |
2009-01-19 00:15:16.000000000 +0900
|
locale
|
C
|
Jan 19 00:15
|
locale
|
en_US.UTF-8
|
2009-01-19 00:15
|
locale
|
es_ES.UTF-8
|
ene 19 00:15
|
+%d.%m.%y %H:%M
|
any |
19.01.09 00:15
|
+%d.%b.%y %H:%M
|
C or en_US.UTF-8
|
19.Jan.09 00:15
|
+%d.%b.%y %H:%M
|
es_ES.UTF-8
|
19.ene.09 00:15
|
You can avoid typing the long option on the command line by using a command alias, e.g. "alias ls='ls --time-style=+%d.%m.%y\ %H:%M'
" (see Section 1.5.9, “Command alias”).
ISO 8601 is followed for these iso-formats.
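These time style values can be verified with a throwaway file; a sketch assuming GNU ls(1) and touch(1) (the file name comes from mktemp and is illustrative).

```shell
tmp=$(mktemp)
touch -d '2009-01-19 00:15' "$tmp"     # set a known modification time
LC_ALL=C ls -l --time-style=long-iso "$tmp" | grep -o '2009-01-19 00:15'
rm -f "$tmp"
```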
Shell echo to most modern terminals can be colorized using ANSI escape code (see "/usr/share/doc/xterm/ctlseqs.txt.gz
").
For example, try the following.
$ RED=$(printf "\x1b[31m")
$ NORMAL=$(printf "\x1b[0m")
$ REVERSE=$(printf "\x1b[7m")
$ echo "${RED}RED-TEXT${NORMAL} ${REVERSE}REVERSE-TEXT${NORMAL}"
Colorized commands are handy for inspecting their output in the interactive environment. I include the following in my "~/.bashrc
".
if [ "$TERM" != "dumb" ]; then eval "`dircolors -b`" alias ls='ls --color=always' alias ll='ls --color=always -l' alias la='ls --color=always -A' alias less='less -R' alias ls='ls --color=always' alias grep='grep --color=always' alias egrep='egrep --color=always' alias fgrep='fgrep --color=always' alias zgrep='zgrep --color=always' else alias ll='ls -l' alias la='ls -A' fi
The use of aliases limits color effects to interactive command usage. It has an advantage over exporting the environment variable "export GREP_OPTIONS='--color=auto'
" since color can be seen under pager programs such as less
(1). If you wish to suppress color when piping to other programs, use "--color=auto
" instead in the above example for "~/.bashrc
".
You can turn off these colorizing aliases in the interactive environment by invoking shell with "TERM=dumb bash
".
You can record the editor activities for complex repeats.
For Vim, as follows.
qa
": start recording typed characters into named register "a
".
q
": end recording typed characters.
@a
": execute the contents of register "a
".
For Emacs, as follows.
C-x (
": start defining a keyboard macro.
C-x )
": end defining a keyboard macro.
C-x e
": execute a keyboard macro.
There are a few ways to record the graphic image of an X application, including an xterm
display.
Table 9.5. List of graphic image manipulation tools
package | popcon | size | command |
---|---|---|---|
xbase-clients
*
|
V:3, I:47 | 132 |
xwd (1)
|
gimp
*
|
V:12, I:44 | 13560 | GUI menu |
imagemagick
*
|
V:13, I:35 | 268 |
import (1)
|
scrot
*
|
V:0.3, I:1.4 | 80 |
scrot (1)
|
There are specialized tools to record changes in configuration files with the help of a DVCS.
Table 9.6. List of packages to record configuration history in VCS
package | popcon | size | description |
---|---|---|---|
etckeeper
*
|
V:1.0, I:1.5 | 376 | store configuration files and their metadata with Git (default), Mercurial, or Bazaar (new) |
changetrack
*
|
V:0.07, I:0.09 | 152 | store configuration files with RCS (old) |
I recommend using the etckeeper
package with git
(1), which puts the entire "/etc
" under VCS control. Its installation guide and tutorial are found in "/usr/share/doc/etckeeper/README.gz
".
Essentially, running "sudo etckeeper init
" initializes the git repository for "/etc
" just like the process explained in Section 10.9.5, “Git for recording configuration history” but with special hook scripts for more thorough setups.
As you change your configuration, you can use git
(1) normally to record the changes. It also automatically records changes nicely every time you run package management commands.
You can browse the change history of "/etc
" by executing "sudo GIT_DIR=/etc/.git gitk
" with clear view for new installed packages, removed packages, and version changes of packages.
Booting your system with a Linux live CD or debian-installer CD in rescue mode makes it easy for you to reconfigure data storage on your boot device. See also Section 10.3, “The binary data”.
For disk partition configuration, although fdisk
(8) has been considered standard, parted
(8) deserves some attention. "Disk partitioning data", "partition table", "partition map", and "disk label" are all synonyms.
Most PCs use the classic Master Boot Record (MBR) scheme to hold disk partitioning data in the first sector, i.e., LBA sector 0 (512 bytes).
Some newer PCs with Extensible Firmware Interface (EFI), including Intel-based Macs, use the GUID Partition Table (GPT) scheme, which does not hold its disk partitioning data in the first sector.
Although fdisk
(8) has been the standard disk partitioning tool, parted
(8) is replacing it.
Table 9.7. List of disk partition management packages
package | popcon | size | GPT | description |
---|---|---|---|---|
util-linux
*
|
V:91, I:99 | 2216 | Not supported |
miscellaneous system utilities including fdisk (8) and cfdisk (8)
|
parted
*
|
V:1.0, I:9 | 236 | Supported | GNU Parted disk partition resizing program |
gparted
*
|
V:3, I:31 | 4548 | Supported |
GNOME partition editor based on libparted
|
qtparted
*
|
V:0.10, I:0.9 | NOT_FOUND | Supported |
KDE partition editor based on libparted
|
gptsync
*
|
V:0.01, I:0.18 | 72 | Supported | synchronize classic MBR partition table with the GPT one |
kpartx
*
|
V:1.0, I:1.8 | 132 | Supported | program to create device mappings for partitions |
Although parted
(8) claims to create and resize filesystems too, it is safer to do such things using the best-maintained specialized tools such as mkfs
(8) (mkfs.msdos
(8), mkfs.ext2
(8), mkfs.ext3
(8), …) and resize2fs
(8).
In order to switch between GPT and MBR, you need to erase the first few blocks of the disk contents directly (see Section 10.3.6, “Clearing file contents”) and use "parted /dev/sdx mklabel gpt
" or "parted /dev/sdx mklabel msdos
" to set it. Please note "msdos
" is use here for MBR.
Although reconfiguring your partitions or changing the activation order of removable storage media may yield different names for partitions, you can access them consistently. This is also helpful if you have multiple disks and your BIOS doesn't give them consistent device names.
mount
(8) with "-U
" option can mount a block device using UUID, instead of using its file name such as "/dev/sda3
".
/etc/fstab
" (see fstab
(5)) can use UUID.
You can probe UUID of a block special device with blkid
(8).
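For example, an "/etc/fstab
" entry using a UUID instead of a device name may look like the following; the UUID shown is purely illustrative, obtain the real one with blkid(8).

```
# <file system>                            <mount point> <type> <options> <dump> <pass>
UUID=709cbe4c-80c1-56db-8ab1-dbce3146d2f7  /home         ext3   defaults  0      2
```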
Device nodes of devices such as removable storage media can be made static by using udev rules, if needed. See Section 3.5.11, “The udev system”.
For ext3 filesystem, the e2fsprogs
package provides the following.
The mkfs
(8) and fsck
(8) commands are provided by the e2fsprogs
package as front-ends to various filesystem dependent programs (mkfs.fstype
and fsck.fstype
). For ext3 filesystem, they are mkfs.ext3
(8) and fsck.ext3
(8) (they are hardlinked to mke2fs
(8) and e2fsck
(8)).
Similar commands are available for each filesystem supported by Linux.
Table 9.8. List of filesystem management packages
package | popcon | size | description |
---|---|---|---|
e2fsprogs
*
|
V:60, I:99 | 1924 | utilities for the ext2/ext3/ext4 filesystems |
reiserfsprogs
*
|
V:2, I:8 | 1200 | utilities for the Reiserfs filesystem |
dosfstools
*
|
V:3, I:31 | 192 | utilities for the FAT filesystem. (Microsoft: MS-DOS, Windows) |
xfsprogs
*
|
V:2, I:10 | 3272 | utilities for the XFS filesystem. (SGI: IRIX) |
ntfsprogs
*
|
V:3, I:20 | 676 | utilities for the NTFS filesystem. (Microsoft: Windows NT, …) |
jfsutils
*
|
V:0.5, I:2 | 1112 | utilities for the JFS filesystem. (IBM: AIX, OS/2) |
reiser4progs
*
|
V:0.09, I:0.7 | 1264 | utilities for the Reiser4 filesystem |
hfsprogs
*
|
V:0.06, I:0.8 | 316 | utilities for HFS and HFS Plus filesystem. (Apple: Mac OS) |
btrfs-tools
*
|
V:0.3, I:0.6 | 1288 | utilities for the btrfs filesystem |
zerofree
*
|
V:0.10, I:0.7 | 56 | program to zero free blocks from ext2/3 filesystems |
Ext3 is the default filesystem for the Linux system and using it is strongly recommended unless you have specific reasons not to. After Linux kernel 2.6.30 (Debian squeeze
), the ext4 filesystem is available and expected to become the default filesystem for the Linux system. The btrfs filesystem is expected to be the next default filesystem after ext4.
You might face some limitations with ext4 since it is new. For example, you must have Linux kernel 2.6.30 or later if you wish to resize an ext4 partition.
Some tools allow access to filesystem without Linux kernel support (see Section 10.3.2, “Manipulating files without mounting disk”).
The mkfs
(8) command creates the filesystem on a Linux system. The fsck
(8) command provides the filesystem integrity check and repair on a Linux system.
It is generally not safe to run fsck
on mounted filesystems.
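Both commands can be tried safely on a plain file instead of a real block device; a sketch assuming the e2fsprogs package is installed ("-F" forces mkfs to accept a non-block-device target).

```shell
PATH="/sbin:/usr/sbin:$PATH"          # mkfs.* / fsck.* often live in sbin
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=1024 2>/dev/null
mkfs.ext2 -q -F "$img"                # create an ext2 filesystem in the file
fsck.ext2 -n -f "$img" >/dev/null 2>&1 && echo clean
rm -f "$img"
```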
Check files in "/var/log/fsck/
" for the result of the fsck
(8) command run from the boot script.
Use "shutdown -F -r now
" to force to run the fsck
(8) command safely on all filesystems including root filesystem on reboot. See the shutdown
(8) manpage for more.
Performance and characteristics of a filesystem can be optimized by mount options used on it (see fstab
(5) and mount
(8)). Notable ones are the following.
defaults
" option implies default options: "rw,suid,dev,exec,auto,nouser,async
". (general)
noatime
" or "relatime
" option is very effective for speeding up the read access. (general)
user
" option allows an ordinary user to mount the filesystem. This option implies "noexec,nosuid,nodev
" option combination. (general, used for CD and floppy)
noexec,nodev,nosuid
" option combination is used to enhance security. (general)
noauto
" option limits mounting by explicit operation only. (general)
data=journal
" option for ext3fs can enhance data integrity against power failure with some loss of write speed.
You need to provide kernel boot parameter (see Section 3.3, “Stage 2: the boot loader”), e.g. "rootflags=data=journal
" to deploy a non-default journaling mode for the root filesystem. For lenny
, the default jounaling mode is "rootflags=data=ordered
". For squeeze
, it is "rootflags=data=writeback
".
Characteristics of a filesystem can be optimized via its superblock using the tune2fs
(8) command.
sudo tune2fs -l /dev/hda1
" displays the contents of the filesystem superblock on "/dev/hda1
".
sudo tune2fs -c 50 /dev/hda1
" changes frequency of filesystem checks (fsck
execution during boot-up) to every 50 boots on "/dev/hda1
".
sudo tune2fs -j /dev/hda1
" adds journaling capability to the filesystem, i.e. filesystem conversion from ext2 to ext3 on "/dev/hda1
". (Do this on the unmounted filesystem.)
sudo tune2fs -O extents,uninit_bg,dir_index /dev/hda1 && fsck -pf /dev/hda1
" converts it from ext3 to ext4 on "/dev/hda1
". (Do this on the unmounted filesystem.)
Filesystem conversion for the boot device to the ext4 filesystem should be avoided until GRUB boot loader supports the ext4 filesystem well and installed Linux Kernel version is newer than 2.6.30.
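The ext2-to-ext3 conversion with "tune2fs -j" can also be rehearsed on an image file instead of a real partition; a sketch assuming e2fsprogs (the 8 MiB size leaves room for the journal).

```shell
PATH="/sbin:/usr/sbin:$PATH"
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=8192 2>/dev/null   # 8 MiB image
mkfs.ext2 -q -F "$img"
tune2fs -j "$img" >/dev/null          # add a journal: ext2 becomes ext3
tune2fs -l "$img" | grep -o has_journal
rm -f "$img"
```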
Please check your hardware and read the manpage of hdparm
(8) before playing with hard disk configuration because this may be quite dangerous for data integrity.
You can test disk access speed of a hard disk, e.g. "/dev/hda
", by "hdparm -tT /dev/hda
". For some hard disk connected with (E)IDE, you can speed it up with "hdparm -q -c3 -d1 -u1 -m16 /dev/hda
" by enabling the "(E)IDE 32-bit I/O support", enabling the "using_dma flag", setting "interrupt-unmask flag", and setting the "multiple 16 sector I/O" (dangerous!).
You can test write cache feature of a hard disk, e.g. "/dev/sda
", by "hdparm -W /dev/sda
". You can disable its write cache feature with "hdparm -W 0 /dev/sda
".
You may be able to read badly pressed CD-ROMs on a modern high-speed CD-ROM drive by slowing it down with "setcd -x 2
".
You can monitor and log a hard disk which is compliant with SMART using the smartd
(8) daemon.
smartmontools
package.
Identify your hard disk drives by listing them with df
(1).
/dev/hda
".
Check the output of "smartctl -a /dev/hda
" to see if SMART feature is actually enabled.
smartctl -s on -a /dev/hda
".
Enable smartd
(8) daemon to run by the following.
start_smartd=yes
" in the "/etc/default/smartmontools
" file.
smartd
(8) daemon by "sudo /etc/init.d/smartmontools restart
".
The smartd
(8) daemon can be customized with the "/etc/smartd.conf
" file, including how to be notified of warnings.
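For example, an "/etc/smartd.conf
" line to monitor one drive and mail warnings to root might look like the following; the device name is illustrative.

```
# monitor /dev/hda with all checks (-a) and mail warnings to root (-m)
/dev/hda -a -m root
```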
Partitions created with the Logical Volume Manager (LVM) (a Linux feature) at install time can be resized easily by concatenating extents onto them or truncating extents from them over multiple storage devices without major system reconfiguration.
Deployment of the current LVM system may weaken the guarantees against filesystem corruption offered by journaled filesystems such as ext3fs unless system performance is sacrificed by disabling the write cache of the hard disk.
If you have an empty partition (e.g., "/dev/sdx
"), you can format it with mkfs.ext3
(1) and mount
(8) it to a directory where you need more space. (You need to copy original data contents.)
$ sudo mv work-dir old-dir
$ sudo mkfs.ext3 /dev/sdx
$ sudo mount -t ext3 /dev/sdx work-dir
$ sudo cp -a old-dir/* work-dir
$ sudo rm -rf old-dir
You may alternatively mount an empty disk image file (see Section 10.2.5, “Making the empty disk image file”) as a loop device (see Section 10.2.3, “Mounting the disk image file”). The actual disk usage grows with the actual data stored.
If you have an empty directory (e.g., "/path/to/emp-dir
") in another partition with usable space, you can create a symlink to the directory with ln
(8).
$ sudo mv work-dir old-dir
$ sudo mkdir -p /path/to/emp-dir
$ sudo ln -sf /path/to/emp-dir work-dir
$ sudo cp -a old-dir/* work-dir
$ sudo rm -rf old-dir
Some software may not function well with "symlink to a directory".
If you have usable space in another partition (e.g., "/path/to/
"), you can create a directory in it and stack that on to a directory where you need space with aufs.
$ sudo mv work-dir old-dir
$ sudo mkdir work-dir
$ sudo mkdir -p /path/to/emp-dir
$ sudo mount -t aufs -o br:/path/to/emp-dir:old-dir none work-dir
Use of aufs for long term data storage is not a good idea since it is still under development and its design changes may introduce issues.
With physical access to your PC, anyone can easily gain root privilege and access all the files on your PC (see Section 4.7.4, “Securing the root password”). This means that the login password system cannot secure your private and sensitive data against possible theft of your PC. You must deploy data encryption technology to do it. Although GNU Privacy Guard (see Section 10.4, “Data security infrastructure”) can encrypt files, it takes some user effort.
dm-crypt and eCryptfs facilitate automatic data encryption natively via Linux kernel modules with minimal user effort.
Table 9.9. List of data encryption utilities
package | popcon | size | description
---|---|---|---
cryptsetup | V:3, I:5 | 1172 | utilities for encrypted block device (dm-crypt / LUKS)
cryptmount | V:0.2, I:0.5 | 360 | utilities for encrypted block device (dm-crypt / LUKS) with focus on mount/unmount by normal users
ecryptfs-utils | V:0.2, I:0.3 | 416 | utilities for encrypted stacked filesystem (eCryptfs)
Dm-crypt is a cryptographic filesystem using device-mapper. Device-mapper maps one block device to another.
eCryptfs is another cryptographic filesystem using a stacked filesystem. A stacked filesystem stacks itself on top of an existing directory of a mounted filesystem.
Data encryption costs CPU time etc. Please weigh its benefits and costs.
The entire Debian system can be installed on an encrypted disk by the debian-installer (lenny or newer) using dm-crypt/LUKS and initramfs.
See Section 10.4, “Data security infrastructure” for the user space encryption utility: GNU Privacy Guard.
You can encrypt contents of removable mass storage devices, e.g. a USB memory stick on "/dev/sdx
", using dm-crypt/LUKS. Simply format it as follows.
# badblocks -c 10240 -s -w -t random -v /dev/sdx
# shred -v -n 1 /dev/sdx
# fdisk /dev/sdx
... "n" "p" "1" "return" "return" "w"
# cryptsetup luksFormat /dev/sdx1
...
# cryptsetup luksOpen /dev/sdx1 sdx1
...
# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 60 2008-10-04 18:44 control
brw-rw---- 1 root disk 254, 0 2008-10-04 23:55 sdx1
# mkfs.vfat /dev/mapper/sdx1
...
# cryptsetup luksClose sdx1
Then, it can be mounted just like a normal one onto "/media/<disk_label>
", except for asking the password (see Section 10.1.10, “Removable storage device”) under a modern desktop environment, such as GNOME using gnome-mount
(1). The difference is that every piece of data written to it is encrypted. You may alternatively format the media in a different file format, e.g., ext3 with "mkfs.ext3 /dev/sdx1
".
If you are really paranoid about the security of your data, you may need to overwrite the disk multiple times in the above example. This operation is very time consuming though.
Let's assume that your original "/etc/fstab
" contains the following.
/dev/sda7 swap sw 0 0
You can enable an encrypted swap partition using dm-crypt as follows.
# aptitude install cryptsetup
# swapoff -a
# echo "cswap /dev/sda7 /dev/urandom swap" >> /etc/crypttab
# perl -i -p -e "s/\/dev\/sda7/\/dev\/mapper\/cswap/" /etc/fstab
# /etc/init.d/cryptdisks restart
...
# swapon -a
You can encrypt files written under "~/Private/" automatically using eCryptfs and the ecryptfs-utils
package.
- Run ecryptfs-setup-private
(1) and set up "~/Private/" by following prompts.
- Mount "~/Private/" by running ecryptfs-mount-private
(1).
- Move sensitive data files to "~/Private/" and make symlinks as needed.
  - e.g., "~/.fetchmailrc", "~/.ssh/identity", "~/.ssh/id_rsa", "~/.ssh/id_dsa" and other files with "go-rwx"
- Move sensitive data directories to a subdirectory in "~/Private/" and make symlinks as needed.
  - e.g., "~/.gnupg" and other directories with "go-rwx"
- Make a symlink from "~/Desktop/Private/" to "~/Private/" for easier desktop operations.
- Umount "~/Private/" by running ecryptfs-umount-private
(1).
- Mount "~/Private/" by issuing "ecryptfs-mount-private
" as you need encrypted data.
If you use your login password for wrapping encryption keys, you can automate mounting eCryptfs via PAM (Pluggable Authentication Modules).
Insert the following line just before "pam_permit.so
" in "/etc/pam.d/common-auth
".
auth required pam_ecryptfs.so unwrap
Insert the following line as the last line in "/etc/pam.d/common-session
".
session optional pam_ecryptfs.so unwrap
Insert the following line as the first active line in "/etc/pam.d/common-password
".
password required pam_ecryptfs.so
This is quite convenient.
Configuration errors of PAM may lock you out of your own system. See Chapter 4, Authentication.
If you use your login password for wrapping encryption keys, your encrypted data are as secure as your user login password (see Section 4.3, “Good password”). Unless you are careful to set up a strong password, your data is at risk when someone runs password cracking software after stealing your laptop (see Section 4.7.4, “Securing the root password”).
Program activities can be monitored and controlled using specialized tools.
Table 9.10. List of tools for monitoring and controlling program activities
package | popcon | size | description
---|---|---|---
coreutils | V:92, I:99 | 13828 | nice(1): run a program with modified scheduling priority
bsdutils | V:77, I:99 | 196 | renice(1): modify the scheduling priority of a running process
procps | V:86, I:99 | 772 | "/proc" filesystem utilities: ps(1), top(1), kill(1), watch(1), …
psmisc | V:47, I:88 | 716 | "/proc" filesystem utilities: killall(1), fuser(1), peekfd(1), pstree(1)
time | V:6, I:84 | 152 | time(1): run a program to report system resource usages with respect to time
sysstat | V:4, I:9 | 872 | sar(1), iostat(1), mpstat(1), …: system performance tools for Linux
isag | V:0.07, I:0.4 | 152 | Interactive System Activity Grapher for sysstat
lsof | V:16, I:90 | 444 | lsof(8): list open files by a running process using the "-p" option
strace | V:5, I:39 | 396 | strace(1): trace system calls and signals
ltrace | V:0.3, I:2 | 188 | ltrace(1): trace library calls
xtrace | V:0.02, I:0.18 | 372 | xtrace(1): trace communication between X11 client and server
powertop | V:0.7, I:12 | 524 | powertop(1): information about system power use on Intel-based laptops
cron | V:91, I:99 | 240 | run processes according to a schedule in background from cron(8) daemon
anacron | V:41, I:44 | 120 | cron-like command scheduler for systems that don't run 24 hours a day
at | V:50, I:83 | 220 | at(1) or batch(1): run a job at a specified time or below certain load level
The procps
packages provide the very basics of monitoring, controlling, and starting program activities. You should learn all of them.
Display time used by the process invoked by the command.
# time some_command >/dev/null
real 0m0.035s # time on wall clock (elapsed real time)
user 0m0.000s # time in user mode
sys 0m0.020s # time in kernel mode
A nice value is used to control the scheduling priority for the process.
Table 9.11. List of nice values for the scheduling priority
nice value | scheduling priority
---|---
19 | lowest priority process (nice)
0 | very high priority process for user
-20 | very high priority process for root (not-nice)
# nice -19 top # very nice
# nice --20 wodim -v -eject speed=2 dev=0,0 disk.img # very fast
Sometimes an extreme nice value does more harm than good to the system. Use this command carefully.
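You can confirm that the adjustment takes effect: nice(1) run without a command prints the current niceness, so nesting it shows the shift relative to the shell's own value (a minimal sketch).

```shell
# nice(1) without a command prints the current niceness of the shell;
# run under "nice -n 10" it prints a value raised by 10.
nice          # current niceness (usually 0)
nice -n 10 nice
```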
The ps
(1) command on Debian supports both BSD and System V features and helps to identify process activity statically.
Table 9.12. List of ps command styles
style | typical command | feature
---|---|---
BSD | ps aux | display %CPU %MEM
System V | ps -efH | display PPID
You can remove zombie (defunct) child processes by killing the parent process identified in the "PPID
" field.
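You can list candidate zombies together with their parent PIDs using ps(1); a minimal sketch (the process state field shows "Z" for zombies):

```shell
# Print PID, PPID, state and command for all processes, keeping only
# entries whose state starts with "Z" (zombie); the PPID column names
# the parent to kill.
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/ { print }'
```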
The pstree
(1) command displays a tree of processes.
top
(1) on Debian has rich features and helps to identify which process is acting funny dynamically.
Table 9.13. List of commands for top
command key | description of response
---|---
h or ? | show help
f | set/reset display field
o | reorder display field
F | set sort key field
k | kill a process
r | renice a process
q | quit the top command
You can list all files opened by a process with a process ID (PID), e.g. 1, by the following.
$ sudo lsof -p 1
PID=1 is usually the init
program.
You can trace program activity with strace
(1), ltrace
(1), or xtrace
(1) for system calls and signals, library calls, or communication between X11 client and server.
You can trace system calls of the ls
command as follows.
$ sudo strace ls
You can also identify processes using files by fuser
(1), e.g. for "/var/log/mail.log
" by the following.
$ sudo fuser -v /var/log/mail.log USER PID ACCESS COMMAND /var/log/mail.log: root 2946 F.... syslogd
You see that file "/var/log/mail.log
" is open for writing by the syslogd
(8) command.
You can also identify processes using sockets by fuser
(1), e.g. for "smtp/tcp
" by the following.
$ sudo fuser -v smtp/tcp USER PID ACCESS COMMAND smtp/tcp: Debian-exim 3379 F.... exim4
Now you know your system runs exim4
(8) to handle TCP connections to SMTP port (25).
watch
(1) executes a program repeatedly with a constant interval while showing its output in fullscreen.
$ watch w
This displays who is logged on to the system updated every 2 seconds.
There are several ways to repeat a command looping over files matching some condition, e.g. matching glob pattern "*.ext
".
for x in *.ext; do if [ -f "$x" ]; then command "$x" ; fi; done
find
(1) and xargs
(1) combination:
find . -type f -maxdepth 1 -name '*.ext' -print0 | xargs -0 -n 1 command
find
(1) with "-exec
" option with a command:
find . -type f -maxdepth 1 -name '*.ext' -exec command '{}' \;
find
(1) with "-exec
" option with a short shell script:
find . -type f -maxdepth 1 -name '*.ext' -exec sh -c "command '{}' && echo 'successful'" \;
The above examples are written to ensure proper handling of funny file names such as ones containing spaces. See Section 10.1.5, “Idioms for the selection of files” for more advanced uses of find
(1).
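A small self-contained demonstration of the glob-loop idiom above, run in a scratch directory (the file names are made up for the demonstration):

```shell
# Create files whose names contain spaces and loop over "*.ext" safely;
# only the two .ext files are processed, notes.txt is skipped.
dir=$(mktemp -d)
cd "$dir"
touch "a file.ext" "another file.ext" notes.txt
for x in *.ext; do if [ -f "$x" ]; then echo "found: $x"; fi; done
cd / && rm -rf "$dir"
```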
You can set up to start a process from the graphical user interface (GUI).
Under GNOME desktop environment, a program can be started with proper argument by double-clicking the launcher icon, by drag-and-drop of a file icon to the launcher icon, or by "Open with …" menu via right clicking a file icon. KDE can do the equivalent, too.
Here is an example under GNOME to create a launcher icon for mc
(1) started in gnome-terminal
(1).
Create an executable program "mc-term
" by the following.
# cat >/usr/local/bin/mc-term <<EOF
#!/bin/sh
gnome-terminal -e "mc \$1"
EOF
# chmod 755 /usr/local/bin/mc-term
Create a desktop launcher as follows.
- Right-click on the desktop and select "Create Launcher …
".
- Set the type as "Application
".
- Set the name as "mc
".
- Set the command as "mc-term %f
".
Create an open-with association as follows.
- Right-click a file icon and select "Open with Other Application …
".
- Set the command as "mc-term %f
".
A launcher is a file in "~/Desktop
" with ".desktop
" as its extension.
Some programs start another program automatically. Here are check points for customizing this process.
Application configuration menu:
mc
(1): "/etc/mc/mc.ext
"
Environment variables such as "$BROWSER
", "$EDITOR
", "$VISUAL
", and "$PAGER
" (see environ
(7))
update-alternatives
(8) system for programs such as "editor
", "view
", "x-www-browser
", "gnome-www-browser
", and "www-browser
" (see Section 1.4.7, “Setting a default text editor”)
~/.mailcap
" and "/etc/mailcap
" file contents which associate MIME type with program (see mailcap
(5))
~/.mime.types
" and "/etc/mime.types
" file contents which associate file name extension with MIME type (see run-mailcap
(1))
update-mime
(8) updates the "/etc/mailcap
" file using "/etc/mailcap.order
" file (see mailcap.order
(5)).
The debianutils
package provides sensible-browser
(1), sensible-editor
(1), and sensible-pager
(1), which make sensible decisions on which web browser, editor, and pager to call, respectively. I recommend reading these shell scripts.
In order to run a console application such as mutt
under X as your preferred application, you should create an X application wrapper as follows and set "/usr/local/bin/mutt-term
" as your preferred application to be started as described.
# cat >/usr/local/bin/mutt-term <<EOF
#!/bin/sh
gnome-terminal -e "mutt \$@"
EOF
# chmod 755 /usr/local/bin/mutt-term
Use kill
(1) to kill (or send a signal to) a process by the process ID.
Use killall
(1) or pkill
(1) to do the same by the process command name and other attributes.
Table 9.14. List of frequently used signals for kill command
signal value | signal name | function
---|---|---
1 | HUP | restart daemon
15 | TERM | normal kill
9 | KILL | kill hard
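The effect of a signal can be observed from the shell: a process killed by signal N exits with status 128+N, so SIGHUP (1) yields 129. A sketch using sleep(1) as a stand-in target process:

```shell
# Start a background process, send it SIGHUP by PID, and inspect the
# exit status reported by wait (128 + signal number).
sleep 100 &
pid=$!
kill -HUP "$pid"
wait "$pid" || status=$?
echo "exit status: $status"   # 129 = 128 + 1 (SIGHUP)
```

pkill(1) and killall(1) accept the same signal names but select the target by command name instead of PID.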
Run the at
(1) command to schedule a one-time job by the following.
$ echo 'command -args'| at 3:40 monday
Use cron
(8) to schedule tasks regularly. See crontab
(1) and crontab
(5).
If you are a member of the crontab
group, you can schedule processes to run as a normal user, e.g. foo
, by creating a crontab
(5) file as "/var/spool/cron/crontabs/foo
" with the "crontab -e
" command.
Here is an example of a crontab
(5) file.
# use /bin/sh to run commands, no matter what /etc/passwd says
SHELL=/bin/sh
# mail any output to paul, no matter whose crontab this is
MAILTO=paul
# Min Hour DayOfMonth Month DayOfWeek command (Day... are OR'ed)
# run at 00:05, every day
5 0 * * * $HOME/bin/daily.job >> $HOME/tmp/out 2>&1
# run at 14:15 on the first of every month -- output mailed to paul
15 14 1 * * $HOME/bin/monthly
# run at 22:00 on weekdays(1-5), annoy Joe. % for newline, last % for cc:
0 22 * * 1-5 mail -s "It's 10pm" joe%Joe,%%Where are your kids?%.%%
23 */2 1 2 * echo "run 23 minutes after 0am, 2am, 4am ..., on Feb 1"
5 4 * * sun echo "run at 04:05 every Sunday"
# run at 03:40 on the first Monday of each month
40 3 1-7 * * [ "$(date +%a)" = "Mon" ] && command -args
For a system not running continuously, install the anacron
package to schedule periodic commands at the specified intervals as closely as machine uptime permits. See anacron
(8) and anacrontab
(5).
For scheduled system maintenance scripts, you can run them periodically from root account by placing such scripts in "/etc/cron.hourly/
", "/etc/cron.daily/
", "/etc/cron.weekly/
", or "/etc/cron.monthly/
". Execution timings of these scripts can be customized by "/etc/crontab
" and "/etc/anacrontab
".
Insurance against system malfunction is provided by the kernel compile option "Magic SysRq key" (SAK key) which is now the default for the Debian kernel. Pressing Alt-SysRq followed by one of the following keys does the magic of rescuing control of the system.
Table 9.15. List of SAK command keys
key following Alt-SysRq | description of action
---|---
r | restore the keyboard from raw mode after X crashes
0 | change the console loglevel to 0 to reduce error messages
k | kill all processes on the current virtual console
e | send a SIGTERM to all processes, except for init(8)
i | send a SIGKILL to all processes, except for init(8)
s | sync all mounted filesystems
u | remount all mounted filesystems read-only (umount)
b | reboot the system without syncing or unmounting
The combination of "Alt-SysRq s", "Alt-SysRq u", and "Alt-SysRq r" is good for getting out of really bad situations.
See "/usr/share/doc/linux-doc-2.6.*/Documentation/sysrq.txt.gz
".
The Alt-SysRq feature may be considered a security risk by allowing users access to root-privileged functions. Placing "echo 0 >/proc/sys/kernel/sysrq
" in "/etc/rc.local
" or "kernel.sysrq = 0
" in "/etc/sysctl.conf
" disables the Alt-SysRq feature.
From SSH terminal etc., you can use the Alt-SysRq feature by writing to the "/proc/sysrq-trigger
". For example, "echo s > /proc/sysrq-trigger; echo u > /proc/sysrq-trigger
" from the root shell prompt syncs and umounts all mounted filesystems.
You can check who is on the system by the following.
who
(1) shows who is logged on.
w
(1) shows who is logged on and what they are doing.
last
(1) shows a listing of last logged in users.
lastb
(1) shows a listing of last bad login attempts.
"/var/run/utmp
" and "/var/log/wtmp
" hold such user information. See login
(1) and utmp
(5).
You can send message to everyone who is logged on to the system with wall
(1) by the following.
$ echo "We are shutting down in 1 hour" | wall
For PCI-like devices (AGP, PCI-Express, CardBus, ExpressCard, etc.), lspci
(8) (probably with the "-nn
" option) is a good start for hardware identification.
Alternatively, you can identify the hardware by reading contents of "/proc/bus/pci/devices
" or browsing directory tree under "/sys/bus/pci
" (see Section 1.2.12, “procfs and sysfs”).
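For example, the vendor and device IDs that "lspci -nn" reports can be read straight out of sysfs; a hedged sketch (some containers and virtual machines expose no PCI bus, hence the existence guard):

```shell
# Walk /sys/bus/pci/devices and print each PCI address together with
# its vendor:device ID pair, read from the sysfs attribute files.
for d in /sys/bus/pci/devices/*; do
  [ -e "$d" ] || continue   # skip entirely if no PCI devices are exposed
  printf '%s %s:%s\n' "${d##*/}" "$(cat "$d/vendor")" "$(cat "$d/device")"
done
```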
Table 9.16. List of hardware identification tools
package | popcon | size | description
---|---|---|---
pciutils | V:15, I:92 | 908 | Linux PCI Utilities: lspci(8)
usbutils | V:38, I:97 | 604 | Linux USB utilities: lsusb(8)
pcmciautils | V:0.8, I:13 | 100 | PCMCIA utilities for Linux 2.6: pccardctl(8)
scsitools | V:0.18, I:1.1 | 484 | collection of tools for SCSI hardware management: lsscsi(8)
pnputils | V:0.01, I:0.16 | 108 | Plug and Play BIOS utilities: lspnp(8)
procinfo | V:0.3, I:3 | 164 | system information obtained from "/proc": lsdev(8)
lshw | V:1.2, I:7 | 604 | information about hardware configuration: lshw(1)
discover | V:2, I:15 | 120 | hardware identification system: discover(8)
Although most of the hardware configuration on modern GUI desktop systems such as GNOME and KDE can be managed through accompanying GUI configuration tools, it is a good idea to know some basic methods to configure them.
Table 9.17. List of hardware configuration tools
package | popcon | size | description
---|---|---|---
hal | V:37, I:49 | 1668 | Hardware Abstraction Layer: lshal(1)
console-tools | V:47, I:84 | 956 | Linux console font and keytable utilities
x11-xserver-utils | V:34, I:51 | 544 | X server utilities: xset(1), xmodmap(1)
acpid | V:51, I:91 | 208 | daemon to manage events delivered by the Advanced Configuration and Power Interface (ACPI)
acpi | V:4, I:35 | 92 | utility to display information on ACPI devices
apmd | V:1.2, I:11 | 252 | daemon to manage events delivered by the Advanced Power Management (APM)
noflushd | V:0.04, I:0.09 | 248 | daemon to allow idle hard disks to spin down
sleepd | V:0.07, I:0.09 | 148 | daemon to put a laptop to sleep during inactivity
hdparm | V:11, I:38 | 304 | hard disk access optimization (see Section 9.3.7, “Optimization of hard disk”)
smartmontools | V:7, I:23 | 1076 | control and monitor storage systems using S.M.A.R.T.
setserial | V:1.5, I:3 | 180 | collection of tools for serial port management
memtest86+ | V:0.5, I:5 | 652 | collection of tools for memory hardware management
scsitools | V:0.18, I:1.1 | 484 | collection of tools for SCSI hardware management
tpconfig | V:0.3, I:0.5 | 220 | utility to configure touchpad devices
setcd | V:0.06, I:0.3 | 28 | compact disc drive access optimization
big-cursor | I:0.16 | 68 | larger mouse cursors for X
Here, ACPI is a newer framework for the power management system than APM.
CPU frequency scaling on modern systems is governed by kernel modules such as acpi_cpufreq.
The following sets system and hardware time to MM/DD hh:mm, CCYY.
# date MMDDhhmmCCYY
# hwclock --utc --systohc
# hwclock --show
Times are normally displayed in local time on the Debian system but the hardware and system time usually use UTC (GMT).
If the hardware (BIOS) time is set to UT, change the setting to "UTC=yes
" in the "/etc/default/rcS
".
If you wish to update the system time via network, consider using the NTP service with packages such as ntp
, ntpdate
, and chrony
.
See the following.
ntp-doc
package
ntptrace
(8) in the ntp
package can trace a chain of NTP servers back to the primary source.
There are several components to configure character console and ncurses
(3) system features.
/etc/terminfo/*/*
" file (terminfo
(5))
$TERM
" environment variable (term
(7))
setterm
(1), stty
(1), tic
(1), and toe
(1)
If the terminfo
entry for xterm
doesn't work with a non-Debian xterm
, change your terminal type, "$TERM
", from "xterm
" to one of the feature-limited versions such as "xterm-r6
" when you log in to a Debian system remotely. See "/usr/share/doc/libncurses5/FAQ
" for more. "dumb
" is the lowest common denominator for "$TERM
".
Device drivers for sound cards for current Linux 2.6 are provided by Advanced Linux Sound Architecture (ALSA). ALSA provides emulation mode for previous Open Sound System (OSS) for compatibility.
Run "dpkg-reconfigure linux-sound-base
" to select the sound system to use ALSA via blacklisting of kernel modules. Unless you have very new sound hardware, udev infrastructure should configure your sound system.
Use "cat /dev/urandom > /dev/audio
" or speaker-test
(1) to test the speaker (^C to stop).
If you cannot get sound, your speaker may be connected to a muted output. Modern sound systems have many outputs. alsamixer
(1) in the alsa-utils
package is useful to configure volume and mute settings.
Application software may be configured not only to access sound devices directly but also to access them via some standardized sound server system.
Table 9.18. List of sound packages
There is usually a common sound engine for each popular desktop environment. Each sound engine used by the application can choose to connect to different sound servers.
For disabling the screen saver, use the following commands.
Table 9.19. List of commands for disabling the screen saver
environment | command
---|---
The Linux console | setterm -powersave off
The X Window (turning off screensaver) | xset s off
The X Window (disabling dpms) | xset -dpms
The X Window (GUI configuration of screen saver) | xscreensaver-command -prefs
One can always unplug the PC speaker to disable beep sounds. Removing the pcspkr
kernel module does this for you.
The following prevents the readline
(3) program used by bash
(1) from beeping when encountering an "\a
" (ASCII=7) character.
$ echo "set bell-style none">> ~/.inputrc
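Since ">>" appends unconditionally each time it is run, an idempotent variant first checks whether the line is already present; a sketch (a temporary file stands in for "~/.inputrc" here):

```shell
# Append "set bell-style none" to an inputrc-style file only if it is
# not already there, so repeated runs do not duplicate the line.
f=$(mktemp)   # stand-in for ~/.inputrc in this demonstration
grep -qx 'set bell-style none' "$f" || echo 'set bell-style none' >> "$f"
grep -qx 'set bell-style none' "$f" || echo 'set bell-style none' >> "$f"
grep -cx 'set bell-style none' "$f"   # → 1 (added only once)
rm -f "$f"
```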
The kernel boot message in the "/var/log/dmesg
" contains the total exact size of available memory.
free
(1) and top
(1) display information on memory resources on the running system.
$ grep '\] Memory' /var/log/dmesg
[ 0.004000] Memory: 990528k/1016784k available (1975k kernel code, 25868k reserved, 931k data, 296k init)
$ free -k
             total       used       free     shared    buffers     cached
Mem:        997184     976928      20256          0     129592     171932
-/+ buffers/cache:     675404     321780
Swap:      4545576          4    4545572
Do not worry about the large size of "used
" and the small size of "free
" in the "Mem:
" line, but read the ones in the "-/+ buffers/cache:
" line (675404 and 321780 in the example above) and relax.
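The "effectively free" figure is just arithmetic over the "Mem:" line: free plus buffers plus cached. With the numbers from the example output:

```shell
# free + buffers + cached from the free(1) example: 20256 + 129592 + 171932
echo $((20256 + 129592 + 171932))   # → 321780, the value free(1) shows
                                    #   in its "-/+ buffers/cache:" line
```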
For my MacBook with 1GB=1048576k DRAM (video system steals some of this), I see the following.
Table 9.20. List of memory sizes reported
report | size
---|---
Total size in dmesg | 1016784k = 1GB - 31792k
Free in dmesg | 990528k
Total under shell | 997184k
Free under shell | 20256k (but effectively 321780k)
Poor system maintenance may expose your system to external exploitation.
For system security and integrity check, you should start with the following.
Install the debsums
package; see debsums
(1) and Section 2.5.2, “Top level "Release" file and authenticity”.
Install the chkrootkit
package; see chkrootkit
(1).
Install the clamav
package family; see clamscan
(1) and freshclam
(1).
Table 9.21. List of tools for system security and integrity check
package | popcon | size | description
---|---|---|---
logcheck | V:3, I:3 | 152 | daemon to mail anomalies in the system logfiles to the administrator
debsums | V:2, I:3 | 320 | utility to verify installed package files against MD5 checksums
chkrootkit | V:2, I:6 | 808 | rootkit detector
clamav | V:2, I:11 | 616 | anti-virus utility for Unix - command-line interface
tiger | V:0.8, I:1.0 | 3148 | report system security vulnerabilities
tripwire | V:0.6, I:0.7 | 9456 | file and directory integrity checker
john | V:0.7, I:2 | 532 | active password cracking tool
aide | V:0.2, I:0.4 | 1213 | Advanced Intrusion Detection Environment - static binary
bastille | V:0.12, I:0.4 | 1960 | security hardening tool
integrit | V:0.08, I:0.16 | 440 | file integrity verification program
crack | V:0.03, I:0.2 | 204 | password guessing program
Here is a simple script to check for typical world writable incorrect file permissions.
# find / -perm 777 -a \! -type s -a \! -type l -a \! \( -type d -a -perm 1777 \)
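You can see the expression in action on a scratch tree before running it from "/" (the paths below are made up for the demonstration):

```shell
# Build a scratch directory with one world-writable file and confirm
# that the find expression above flags only that file.
dir=$(mktemp -d)
touch "$dir/bad" "$dir/good"
chmod 777 "$dir/bad"
chmod 644 "$dir/good"
find "$dir" -perm 777 -a \! -type s -a \! -type l -a \! \( -type d -a -perm 1777 \)
rm -rf "$dir"
```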
Since the debsums
package uses MD5 checksums stored locally, it cannot be fully trusted as a system security audit tool against malicious attacks.
Debian distributes modularized Linux kernel as packages for supported architectures.
There are a few notable features in Linux kernel 2.6 compared to 2.4.
ide-scsi
module.
iptable
kernel modules.
Many Linux features are configurable via kernel parameters as follows.
sysctl
(8) at runtime for ones accessible via sysfs (see Section 1.2.12, “procfs and sysfs”)
modprobe
(8) when a module is activated (see Section 10.2.3, “Mounting the disk image file”)
See "kernel-parameters.txt(.gz)
" and other related documents in the Linux kernel documentation ("/usr/share/doc/linux-doc-2.6.*/Documentation/filesystems/*
") provided by the linux-doc-2.6.*
package.
Most normal programs don't need kernel headers and in fact may break if you use them directly for compiling. They should be compiled against the headers in "/usr/include/linux
" and "/usr/include/asm
" provided by the libc6-dev
package (created from the glibc
source package) on the Debian system.
For compiling some kernel-specific programs such as kernel modules from external source and the automounter daemon (amd
), you must include the path to the corresponding kernel headers, e.g. "-I/usr/src/linux-particular-version/include/
", in your command line. module-assistant
(8) (or its short form m-a
) helps users to build and install module package(s) easily for one or more custom kernels.
Debian has its own method of compiling the kernel and related modules.
Table 9.22. List of key packages to be installed for the kernel recompilation on the Debian system
package | popcon | size | description
---|---|---|---
build-essential | I:47 | 48 | essential packages for building Debian packages: make, gcc, …
bzip2 | V:51, I:79 | 132 | compress and decompress utilities for bz2 files
libncurses5-dev | V:4, I:25 | 6900 | developer's libraries and docs for ncurses
git | V:5, I:17 | 10632 | git: distributed revision control system used by the Linux kernel
fakeroot | V:4, I:32 | 444 | provide fakeroot environment for building package as non-root
initramfs-tools | V:49, I:98 | 468 | tool to build an initramfs (Debian specific)
kernel-package | V:1.5, I:14 | 2316 | tool to build Linux kernel packages (Debian specific)
module-assistant | V:2, I:18 | 568 | tool to help build module packages (Debian specific)
dkms | V:6, I:9 | 468 | dynamic kernel module support (DKMS) (generic)
devscripts | V:2, I:11 | 1696 | helper scripts for a Debian Package maintainer (Debian specific)
linux-tree-2.6.* | N/A | N/A | Linux kernel source tree meta package (Debian specific)
If you use initrd
in Section 3.3, “Stage 2: the boot loader”, make sure to read the related information in initramfs-tools
(8), update-initramfs
(8), mkinitramfs
(8) and initramfs.conf
(5).
Do not put symlinks to the directories in the source tree (e.g. "/usr/src/linux*
") from "/usr/include/linux
" and "/usr/include/asm
" when compiling the Linux kernel source. (Some outdated documents suggest this.)
When compiling the latest Linux kernel on the Debian stable
system, the use of the latest backported tools from Debian unstable
may be needed.
The dynamic kernel module support (DKMS) is a new distribution independent framework designed to allow individual kernel modules to be upgraded without changing the whole kernel. This will be endorsed for the maintenance of out-of-tree modules for squeeze
. This also makes it very easy to rebuild modules as you upgrade kernels.
The Debian standard method for compiling kernel source to create a custom kernel package uses make-kpkg
(1). The official documentation is in (the bottom of) "/usr/share/doc/kernel-package/README.gz
". See kernel-pkg.conf
(5) and kernel-img.conf
(5) for customization.
Here is an example for amd64 system.
# aptitude install linux-tree-<version>
$ cd /usr/src
$ tar -xjvf linux-source-<version>.tar.bz2
$ cd linux-source-<version>
$ cp /boot/config-<oldversion> .config
$ make menuconfig
...
$ make-kpkg clean
$ fakeroot make-kpkg --append_to_version -amd64 --initrd --revision=rev.01 kernel_image modules_image
$ cd ..
# dpkg -i linux-image*.deb
Reboot to new kernel with "shutdown -r now
".
When you intend to create a non-modularized kernel compiled only for one machine, invoke make-kpkg
without the "--initrd
" option since initrd is not used. Invocation of "make oldconfig
" and "make dep
" is not required since "make-kpkg kernel_image
" invokes them.
The Debian standard method for creating and installing a custom module package for a custom kernel package uses module-assistant
(8) and module-source packages. For example, the following builds the unionfs
kernel module package and installs it.
$ sudo aptitude install module-assistant
...
$ sudo aptitude install unionfs-source unionfs-tools unionfs-utils
$ sudo m-a update
$ sudo m-a prepare
$ sudo m-a auto-install unionfs
...
$ sudo apt-get autoremove
You can still build the Linux kernel from the pristine sources with the classic method. You must take care of the details of the system configuration manually.
$ cd /usr/src
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-<version>.tar.bz2
$ tar -xjvf linux-<version>.tar.bz2
$ cd linux-<version>
$ cp /boot/config-<version> .config
$ make menuconfig
...
$ make dep; make bzImage
$ make modules
# cp ./arch/x86_64/boot/bzImage /boot/vmlinuz-<version>
# make modules_install
# depmod -a
# update-initramfs -c -k <version>
Set up the bootloader by the following.
- Edit "/etc/lilo.conf
" and run "/sbin/lilo
", if you use lilo
.
- Edit "/boot/grub/menu.lst
", if you use grub
.
Reboot to new kernel with "shutdown -r now
".
Although most hardware drivers are available as free software and as a part of the Debian system, you may need to load some non-free external drivers to support some hardware, such as Winmodems, on your system.
Check pertinent resources.
Use of a virtualized system enables us to run multiple instances of a system simultaneously on a single piece of hardware.
There are several system virtualization and emulation related packages in Debian beyond simple chroot. Some packages also help you to set up such systems.
Table 9.23. List of virtualization tools
package | popcon | size | description
---|---|---|---
schroot | V:1.0, I:1.6 | 2460 | specialized tool for executing Debian binary packages in chroot
sbuild | V:0.11, I:0.3 | 428 | tool for building Debian binary packages from Debian sources
pbuilder | V:0.5, I:2 | 1192 | personal package builder for Debian packages
debootstrap | V:1.6, I:12 | 268 | bootstrap a basic Debian system (written in sh)
cdebootstrap | V:0.3, I:2 | 116 | bootstrap a Debian system (written in C)
rootstrap | V:0.02, I:0.17 | 156 | tool for building complete Linux filesystem images
virt-manager | V:0.5, I:1.6 | 5908 | Virtual Machine Manager: desktop application for managing virtual machines
libvirt-bin | V:1.4, I:2 | 2240 | programs for the libvirt library
user-mode-linux | V:0.07, I:0.3 | 20540 | User-mode Linux (kernel)
bochs | V:0.05, I:0.3 | 3280 | Bochs: IA-32 PC emulator
qemu | V:0.6, I:6 | 460 | QEMU: fast generic processor emulator
qemu-system | V:2, I:3 | 38196 | QEMU: full system emulation binaries
qemu-user | V:0.3, I:3 | 16716 | QEMU: user mode emulation binaries
qemu-utils | V:0.4, I:3 | 756 | QEMU: utilities
qemu-kvm | V:1.3, I:2 | 4308 | KVM: full virtualization on x86 hardware with the hardware-assisted virtualization
virtualbox-ose | V:2, I:4 | 31728 | VirtualBox: x86 virtualization solution on i386 and amd64
xen-tools | V:0.2, I:1.9 | 1236 | tools to manage debian XEN virtual server
wine | V:1.7, I:13 | 96 | Wine: Windows API Implementation (standard suite)
dosbox | V:0.5, I:2 | 2460 | DOSBox: x86 emulator with Tandy/Herc/CGA/EGA/VGA/SVGA graphics, sound and DOS
dosemu | V:0.2, I:1.2 | 5940 | DOSEMU: The Linux DOS Emulator
vzctl | V:0.7, I:1.1 | 1056 | OpenVZ server virtualization solution - control tools
vzquota | V:0.7, I:1.2 | 204 | OpenVZ server virtualization solution - quota tools
lxc | V:0.05, I:0.2 | 744 | Linux containers user space tools
See the Wikipedia article Comparison of platform virtual machines for a detailed comparison of different platform virtualization solutions.
Some functionalities described here are only available in squeeze
.
Default Debian kernels support KVM since lenny
.
The typical workflow for virtualization involves several steps.
Create an empty filesystem (a file tree or a disk image).
mkdir -p /path/to/chroot
".
dd
(1) (see Section 10.2.1, “Making the disk image file” and Section 10.2.5, “Making the empty disk image file”).
qemu-img
(1) can be used to create and convert disk image files supported by QEMU.
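For example, a QCOW2 image can be created, a raw image converted, and an image inspected as follows (filenames and sizes are illustrative).

```shell
$ qemu-img create -f qcow2 disk.qcow2 10G
$ qemu-img convert -O qcow2 disk.img disk.qcow2
$ qemu-img info disk.qcow2
```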
Mount the disk image with mount
(8) to the filesystem (optional).
Populate the target filesystem with required system data.
debootstrap
and cdebootstrap
help with this process (see Section 9.8.4, “Chroot system”).
Run a program under a virtualized environment.
For the raw disk image file, see Section 10.2, “The disk image”.
For other virtual disk image files, you can use qemu-nbd
(8) to export them using the network block device protocol and mount them using the nbd
kernel module.
qemu-nbd
(8) supports the disk formats supported by QEMU: raw, qcow2, qcow, vmdk, vdi, bochs, cow (user-mode Linux copy-on-write), parallels, dmg, cloop, vpc, vvfat (virtual VFAT), and host_device.
The network block device can support partitions in the same way as the loop device (see Section 10.2.3, “Mounting the disk image file”). You can mount the first partition of "disk.img
" as follows.
# modprobe nbd max_part=16 # qemu-nbd -v -c /dev/nbd0 disk.img ... # mkdir /mnt/part1 # mount /dev/nbd0p1 /mnt/part1
You may export only the first partition of "disk.img
" using "-P 1
" option to qemu-nbd
(8).
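For example (assuming the nbd kernel module is available), such a single-partition export can be mounted and later detached as follows.

```shell
# modprobe nbd
# qemu-nbd -v -c /dev/nbd0 -P 1 disk.img
# mkdir -p /mnt/part1
# mount /dev/nbd0 /mnt/part1
...hack...hack...hack
# umount /mnt/part1
# qemu-nbd -d /dev/nbd0
```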
chroot
(8) offers the most basic way to run different instances of the GNU/Linux environment on a single system simultaneously without rebooting.
The examples below assume both the parent system and the chroot system share the same CPU architecture.
You can learn how to setup and use chroot
(8) by running pbuilder
(8) program under script
(1) as follows.
$ sudo mkdir /sid-root $ sudo pbuilder --create --no-targz --debug --buildplace /sid-root
You see how debootstrap
(8) or cdebootstrap
(1) populates system data for the sid
environment under "/sid-root
".
These debootstrap
(8) and cdebootstrap
(1) programs are used by the Debian Installer to install Debian. They can also be used to install Debian on a system from another GNU/Linux distribution, without using a Debian install disk.
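For example, debootstrap(8) can also be invoked directly to create a sid chroot (the mirror URL here is illustrative; pick a mirror near you).

```shell
# debootstrap sid /sid-root http://ftp.us.debian.org/debian
# chroot /sid-root /bin/bash
```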
$ sudo pbuilder --login --no-targz --debug --buildplace /sid-root
You see how a system shell running under sid
environment is created as follows.
"/etc/hosts
", "/etc/hostname
", "/etc/resolv.conf
")
/proc
" filesystem
/dev/pts
" filesystem
/usr/sbin/policy-rc.d
" which always exits with 101
chroot /sid-root bin/bash -c 'exec -a -bash bin/bash'
"
Some programs under chroot may require access to more files from the parent system to function than pbuilder
provides. For example, "/sys
", "/etc/passwd
", "/etc/group
", "/var/run/utmp
", "/var/log/wtmp
", etc. may need to be bind-mounted or copied.
The "/usr/sbin/policy-rc.d
" file prevents daemon programs to be started automatically on Debian system. See "/usr/share/doc/sysv-rc/README.policy-rc.d.gz
".
The original purpose of the specialized chroot package, pbuilder
is to construct a chroot system and build packages inside the chroot. It is an ideal system for checking that a package's build-dependencies are correct, and for making sure that unnecessary or wrong build dependencies do not exist in the resulting package.
The similar schroot
package may be used, for example, to run an i386
chroot system under an amd64
parent system.
I recommend using QEMU or VirtualBox on a Debian stable
system to run multiple desktop systems safely using virtualization. These enable you to run desktop applications of Debian unstable
and testing
without the usual risks associated with them.
Since pure QEMU is very slow, it is recommended to accelerate it with KVM when the host system supports it.
The virtual disk image "virtdisk.qcow2
" containing Debian system for QEMU can be created using debian-installer: Small CDs as follows.
$ wget http://cdimage.debian.org/debian-cd/5.0.3/amd64/iso-cd/debian-503-amd64-netinst.iso $ qemu-img create -f qcow2 virtdisk.qcow2 5G $ qemu -hda virtdisk.qcow2 -cdrom debian-503-amd64-netinst.iso -boot d -m 256 ...
See more tips at Debian wiki: QEMU.
VirtualBox comes with Qt GUI tools and is quite intuitive. Its GUI and command line tools are explained in the VirtualBox User Manual and VirtualBox User Manual (PDF).
Running other GNU/Linux distributions such as Ubuntu and Fedora under virtualization is a great way to learn configuration tips. Other proprietary OSs may also run nicely under this GNU/Linux virtualization.
Tools and tips for managing binary and text data on the Debian system are described.
Uncoordinated write access from multiple processes to actively accessed devices and files must be avoided to prevent race conditions. File locking mechanisms using flock
(1) may be used to avoid them.
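For example, here is a minimal sketch using flock(1) (the lock and log file paths are illustrative): concurrent invocations of this script serialize their appends to the shared file.

```shell
#!/bin/sh
# Acquire an exclusive lock on file descriptor 9 before touching the
# shared file; other invocations block here until the lock is released.
(
  flock -x 9
  echo "entry from PID $$" >> /tmp/shared.log
) 9>/tmp/shared.lock
```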
The security of the data and its controlled sharing have several aspects.
These can be realized by using some combination of tools.
Here is a summary of archive and compression tools available on the Debian system.
Table 10.1. List of archive and compression tools
package | popcon | size | command | extension | comment |
---|---|---|---|---|---|
tar | V:61, I:99 | 2660 | tar(1) | .tar | the standard archiver (de facto standard) |
cpio | V:41, I:99 | 920 | cpio(1) | .cpio | Unix System V style archiver, use with find(1) |
binutils | V:58, I:74 | 11996 | ar(1) | .ar | archiver for the creation of static libraries |
fastjar | V:7, I:31 | 216 | fastjar(1) | .jar | archiver for Java (zip like) |
pax | V:1.5, I:6 | 172 | pax(1) | .pax | new POSIX standard archiver, compromise between tar and cpio |
afio | V:0.3, I:1.7 | 240 | afio(1) | .afio | extended cpio with per-file compression etc. |
gzip | V:91, I:99 | 284 | gzip(1), zcat(1), … | .gz | GNU LZ77 compression utility (de facto standard) |
bzip2 | V:51, I:79 | 132 | bzip2(1), bzcat(1), … | .bz2 | Burrows-Wheeler block-sorting compression utility with higher compression ratio than gzip(1) (slower than gzip with similar syntax) |
lzma | V:8, I:80 | 172 | lzma(1) | .lzma | LZMA compression utility with higher compression ratio than gzip(1) (deprecated) |
xz-utils | V:5, I:26 | 460 | xz(1), xzdec(1), … | .xz | XZ compression utility with higher compression ratio than bzip2(1) (slower than gzip but faster than bzip2; replacement for the LZMA compression utility) |
p7zip | V:2, I:23 | 1052 | 7zr(1), p7zip(1) | .7z | 7-Zip file archiver with high compression ratio (LZMA compression) |
p7zip-full | V:14, I:26 | 3612 | 7z(1), 7za(1) | .7z | 7-Zip file archiver with high compression ratio (LZMA compression and others) |
lzop | V:0.7, I:6 | 144 | lzop(1) | .lzo | LZO compression utility with higher compression and decompression speed than gzip(1) (lower compression ratio than gzip with similar syntax) |
zip | V:8, I:52 | 632 | zip(1) | .zip | InfoZIP: DOS archive and compression tool |
unzip | V:24, I:69 | 408 | unzip(1) | .zip | InfoZIP: DOS unarchive and decompression tool |
Do not set the "$TAPE
" variable unless you know what to expect. It changes tar
(1) behavior.
The gzipped tar
(1) archive uses the file extension ".tgz
" or ".tar.gz
".
The xz-compressed tar
(1) archive uses the file extension ".txz
" or ".tar.xz
".
The popular compression method in FOSS tools such as tar
(1) has been shifting as follows: gzip
→ bzip2
→ xz
cp
(1), scp
(1) and tar
(1) may have some limitations for special files. cpio
(1) and afio
(1) are the most versatile.
cpio
(1) and afio
(1) are designed to be used with find
(1) and other commands, and are suitable for creating backup scripts, since the file selection part of the script can be tested independently.
afio
(1) compresses each file in the archive individually. This makes afio
much safer against file corruption than the globally compressed tar
or cpio
archives, and the best archive engine for backup scripts.
The internal structure of OpenOffice data files is that of a ".jar
" file.
Here is a summary of simple copy and backup tools available on the Debian system.
Table 10.2. List of copy and synchronization tools
package | popcon | size | tool | function |
---|---|---|---|---|
coreutils | V:92, I:99 | 13828 | GNU cp | locally copy files and directories ("-a" for recursive) |
openssh-client | V:52, I:99 | 2104 | scp | remotely copy files and directories (client, "-r" for recursive) |
openssh-server | V:70, I:83 | 700 | sshd | remotely copy files and directories (remote server) |
rsync | V:19, I:52 | 704 | - | 1-way remote synchronization and backup |
unison | V:0.9, I:3 | 1816 | - | 2-way remote synchronization and backup |
Copying files with rsync
(8) offers richer features than the others.
--exclude
" and "--exclude-from
" options similar to tar
(1)
Execution of the bkup
script mentioned in Section 10.1.9, “A copy script for the data backup” with the "-gl
" option under cron
(8) should provide functionality very similar to that of Plan9's dumpfs
for static data archiving.
Version control system (VCS) tools in Table 10.16, “List of version control system tools” can function as the multi-way copy and synchronization tools.
Here are several ways to archive and unarchive the entire content of the directory "./source
" using different tools.
GNU tar
(1):
$ tar cvzf archive.tar.gz ./source $ tar xvzf archive.tar.gz
cpio
(1):
$ find ./source -xdev -print0 | cpio -ov --null > archive.cpio; gzip archive.cpio $ zcat archive.cpio.gz | cpio -i
afio
(1):
$ find ./source -xdev -print0 | afio -ovZ0 archive.afio $ afio -ivZ archive.afio
Here are several ways to copy the entire content of the directory "./source
" using different tools.
./source
" directory → "/dest
" directory
./source
" directory at local host → "/dest
" directory at "user@host.dom
" host
rsync
(8):
# cd ./source; rsync -av . /dest # cd ./source; rsync -av . user@host.dom:/dest
You can alternatively use "a trailing slash on the source directory" syntax.
# rsync -av ./source/ /dest # rsync -av ./source/ user@host.dom:/dest
GNU cp
(1) and openSSH scp
(1):
# cd ./source; cp -a . /dest # cd ./source; scp -pr . user@host.dom:/dest
GNU tar
(1):
# (cd ./source && tar cf - . ) | (cd /dest && tar xvfp - ) # (cd ./source && tar cf - . ) | ssh user@host.dom '(cd /dest && tar xvfp - )'
cpio
(1):
# cd ./source; find . -print0 | cpio -pvdm --null --sparse /dest
afio
(1):
# cd ./source; find . -print0 | afio -pv0a /dest
You can substitute ".
" with "foo
" for all examples containing ".
" to copy files from "./source/foo
" directory to "/dest/foo
" directory.
You can substitute ".
" with the absolute path "/path/to/source/foo
" for all examples containing ".
" to drop "cd ./source;
". These copy files to different locations depending on tools used as follows.
/dest/foo
": rsync
(8), GNU cp
(1), and scp
(1)
/dest/path/to/source/foo
": GNU tar
(1), cpio
(1), and afio
(1)
rsync
(8) and GNU cp
(1) have option "-u
" to skip files that are newer on the receiver.
find
(1) is used to select files for archive and copy commands (see Section 10.1.3, “Idioms for the archive” and Section 10.1.4, “Idioms for the copy”) or for xargs
(1) (see Section 9.5.9, “Repeating a command looping over files”). This can be enhanced by using its command arguments.
Basic syntax of find
(1) can be summarized as the following.
-o
" between conditionals) has lower precedence than "logical AND" (specified by "-a
" or nothing between conditionals).
!
" before a conditional) has higher precedence than "logical AND".
-prune
" always returns logical TRUE and, if it is a directory, searching of file is stopped beyond this point.
-name
" matches the base of the filename with shell glob (see Section 1.5.6, “Shell glob”) but it also matches its initial ".
" with metacharacters such as "*
" and "?
". (New POSIX feature)
-regex
" matches the full path with emacs style BRE (see Section 1.6.2, “Regular expressions”) as default.
-size
" matches the file based on the file size (value precedented with "+
" for larger, precedented with "-
" for smaller)
-newer
" matches the file newer than the one specified in its argument.
-print0
" always returns logical TRUE and print the full filename (null terminated) on the standard output.
find
(1) is often used with an idiomatic style as the following.
# find /path/to \ -xdev -regextype posix-extended \ -type f -regex ".*\.afio|.*~" -prune -o \ -type d -regex ".*/\.git" -prune -o \ -type f -size +99M -prune -o \ -type f -newer /path/to/timestamp -print0
This means performing the following actions.
/path/to
"
.*\.afio
" or ".*~
" from search by stop processing
.*/\.git
" from search by stop processing
/path/to/timestamp
"
Please note the idiomatic use of "-prune -o
" to exclude files in the above example.
For non-Debian Unix-like systems, some options may not be supported by find
(1). In such a case, please consider adjusting the matching methods and replacing "-print0
" with "-print
". You may need to adjust related commands too.
We all know that computers sometimes fail and that human errors cause system and data damage. Backup and recovery operations are an essential part of successful system administration. All possible failure modes will hit you some day.
Keep your backup system simple and back up your system often. Having backup data is more important than how technically good your backup method is.
There are 3 key factors which determine the actual backup and recovery policy.
Knowing what to back up and recover.
~/
"
/var/
" (except "/var/cache/
", "/var/run/
", and "/var/tmp/
")
/etc/
"
/usr/local/
" or "/opt/
"
Knowing how to back up and recover.
Assessing risks and costs involved.
As for secure storage of data, it should be kept at least on different disk partitions, preferably on different disks and machines, to withstand filesystem corruption. Important data are best stored on write-once media such as CD/DVD-R to prevent overwrite accidents. (See Section 10.3, “The binary data” for how to write to the storage media from the shell commandline. The GNOME desktop GUI environment gives you easy access via menu: "Places→CD/DVD Creator".)
You may wish to stop some application daemons such as MTA (see Section 6.3, “Mail transport agent (MTA)”) while backing up data.
You should pay extra care to the backup and restoration of identity related data files such as "/etc/ssh/ssh_host_dsa_key
", "/etc/ssh/ssh_host_rsa_key
", "~/.gnupg/*
", "~/.ssh/*
", "/etc/passwd
", "/etc/shadow
", "/etc/fetchmailrc
", "popularity-contest.conf
", "/etc/ppp/pap-secrets
", and "/etc/exim4/passwd.client
". Some of these data can not be regenerated by entering the same input string to the system.
If you run a cron job as a user process, you must restore files in "/var/spool/cron/crontabs
" directory and restart cron
(8). See Section 9.5.14, “Scheduling tasks regularly” for cron
(8) and crontab
(1).
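For example, such a restoration might look like the following (the backup path is illustrative).

```shell
# cp -a /path/to/backup/var/spool/cron/crontabs/* /var/spool/cron/crontabs/
# /etc/init.d/cron restart
```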
Here is a select list of notable backup utility suites available on the Debian system.
Table 10.3. List of backup suite utilities
package | popcon | size | description |
---|---|---|---|
rdiff-backup | V:1.4, I:3 | 804 | (remote) incremental backup |
dump | V:0.4, I:1.5 | 716 | 4.4 BSD dump(8) and restore(8) for ext2/ext3 filesystems |
xfsdump | V:0.3, I:1.9 | 628 | dump and restore with xfsdump(8) and xfsrestore(8) for XFS filesystem on GNU/Linux and IRIX |
backupninja | V:0.5, I:0.6 | 452 | lightweight, extensible meta-backup system |
mondo | V:0.11, I:0.5 | 1168 | Mondo Rescue: disaster recovery backup suite |
sbackup | V:0.05, I:0.16 | 488 | simple backup suite for GNOME desktop |
keep | V:0.13, I:0.3 | 1232 | backup system for KDE |
bacula-common | V:1.3, I:2 | 1404 | Bacula: network backup, recovery and verification - common support files |
bacula-client | I:0.9 | 84 | Bacula: network backup, recovery and verification - client meta-package |
bacula-console | V:0.3, I:1.2 | 184 | Bacula: network backup, recovery and verification - text console |
bacula-server | I:0.5 | 84 | Bacula: network backup, recovery and verification - server meta-package |
amanda-common | V:0.4, I:0.8 | 6924 | Amanda: Advanced Maryland Automatic Network Disk Archiver (Libs) |
amanda-client | V:0.4, I:0.8 | 748 | Amanda: Advanced Maryland Automatic Network Disk Archiver (Client) |
amanda-server | V:0.11, I:0.3 | 916 | Amanda: Advanced Maryland Automatic Network Disk Archiver (Server) |
backuppc | V:0.8, I:1.0 | 2460 | BackupPC is a high-performance, enterprise-grade system for backing up PCs (disk based) |
backup-manager | V:0.4, I:0.6 | 672 | command-line backup tool |
backup2l | V:0.2, I:0.3 | 152 | low-maintenance backup/restore tool for mountable media (disk based) |
Backup tools have their specialized focuses.
sbackup
and keep
packages provide an easy GUI frontend for desktop users to make regular backups of user data. An equivalent function can be realized by a simple script (Section 10.1.8, “An example script for the system backup”) and cron
(8).
Basic tools described in Section 10.1.1, “Archive and compression tools” and Section 10.1.2, “Copy and synchronization tools” can be used to facilitate system backup via custom scripts. Such scripts can be enhanced by the following.
rdiff-backup
package enables incremental (remote) backups.
dump
package helps to archive and restore the whole filesystem incrementally and efficiently.
See files in "/usr/share/doc/dump/
" and "Is dump really deprecated?" to lean about the dump
package.
For a personal Debian desktop system running unstable
suite, I only need to protect personal and critical data. I reinstall the system once a year anyway. Thus I see no reason to back up the whole system or to install a full-featured backup utility.
I use a simple script to make a backup archive and burn it to CD/DVD using the GUI. Here is an example script for this.
#!/bin/sh -e # Copyright (C) 2007-2008 Osamu Aoki <osamu@debian.org>, Public Domain BUUID=1000; USER=osamu # UID and name of a user who accesses backup files BUDIR="/var/backups" XDIR0=".+/Mail|.+/Desktop" XDIR1=".+/\.thumbnails|.+/\.?Trash|.+/\.?[cC]ache|.+/\.gvfs|.+/sessions" XDIR2=".+/CVS|.+/\.git|.+/\.svn|.+/Downloads|.+/Archive|.+/Checkout|.+/tmp" XSFX=".+\.iso|.+\.tgz|.+\.tar\.gz|.+\.tar\.bz2|.+\.afio|.+\.tmp|.+\.swp|.+~" SIZE="+99M" DATE=$(date --utc +"%Y%m%d-%H%M") [ -d "$BUDIR" ] || mkdir -p "$BUDIR" umask 077 dpkg --get-selections \* > /var/lib/dpkg/dpkg-selections.list debconf-get-selections > /var/cache/debconf/debconf-selections { find /etc /usr/local /opt /var/lib/dpkg/dpkg-selections.list \ /var/cache/debconf/debconf-selections -xdev -print0 find /home/$USER /root -xdev -regextype posix-extended \ -type d -regex "$XDIR0|$XDIR1" -prune -o -type f -regex "$XSFX" -prune -o \ -type f -size "$SIZE" -prune -o -print0 find /home/$USER/Mail/Inbox /home/$USER/Mail/Outbox -print0 find /home/$USER/Desktop -xdev -regextype posix-extended \ -type d -regex "$XDIR2" -prune -o -type f -regex "$XSFX" -prune -o \ -type f -size "$SIZE" -prune -o -print0 } | cpio -ov --null -O $BUDIR/BU$DATE.cpio chown $BUUID $BUDIR/BU$DATE.cpio touch $BUDIR/backup.stamp
This is meant to be an example script to be executed as root.
I expect you to change and execute this as follows.
find … -print0
" with "find … -newer $BUDIR/backup.stamp -print0
" to make a incremental backup.
scp
(1) or rsync
(1), or burn them to CD/DVD for extra data security. (I use the GNOME desktop GUI for burning CD/DVD. See Section 12.1.8, “Shell script example with zenity” for extra redundancy.)
Keep it simple!
You can recover debconf configuration data with "debconf-set-selections debconf-selections
" and dpkg selection data with "dpkg --set-selection <dpkg-selections.list
".
For the set of data under a directory tree, the copy with "cp -a
" provides the normal backup.
For the set of large non-overwritten static data under a directory tree such as the one under the "/var/cache/apt/packages/
" directory, hardlinks with "cp -al
" provide an alternative to the normal backup with efficient use of the disk space.
Here is a copy script, which I named bkup
, for data backup. This script copies all (non-VCS) files under the current directory to a dated directory in the parent directory or on a remote host.
#!/bin/sh -e # Copyright (C) 2007-2008 Osamu Aoki <osamu@debian.org>, Public Domain fdot(){ find . -type d \( -iname ".?*" -o -iname "CVS" \) -prune -o -print0;} fall(){ find . -print0;} mkdircd(){ mkdir -p "$1";chmod 700 "$1";cd "$1">/dev/null;} FIND="fdot";OPT="-a";MODE="CPIOP";HOST="localhost";EXTP="$(hostname -f)" BKUP="$(basename $(pwd)).bkup";TIME="$(date +%Y%m%d-%H%M%S)";BU="$BKUP/$TIME" while getopts gcCsStrlLaAxe:h:T f; do case $f in g) MODE="GNUCP";; # cp (GNU) c) MODE="CPIOP";; # cpio -p C) MODE="CPIOI";; # cpio -i s) MODE="CPIOSSH";; # cpio/ssh S) MODE="AFIOSSH";; # afio/ssh t) MODE="TARSSH";; # tar/ssh r) MODE="RSYNCSSH";; # rsync/ssh l) OPT="-alv";; # hardlink (GNU cp) L) OPT="-av";; # copy (GNU cp) a) FIND="fall";; # find all A) FIND="fdot";; # find non CVS/ .???/ x) set -x;; # trace e) EXTP="${OPTARG}";; # hostname -f h) HOST="${OPTARG}";; # user@remotehost.example.com T) MODE="TEST";; # test find mode \?) echo "use -x for trace." esac; done shift $(expr $OPTIND - 1) if [ $# -gt 0 ]; then for x in $@; do cp $OPT $x $x.$TIME; done elif [ $MODE = GNUCP ]; then mkdir -p "../$BU";chmod 700 "../$BU";cp $OPT . "../$BU/" elif [ $MODE = CPIOP ]; then mkdir -p "../$BU";chmod 700 "../$BU" $FIND|cpio --null --sparse -pvd ../$BU elif [ $MODE = CPIOI ]; then $FIND|cpio -ov --null | ( mkdircd "../$BU"&&cpio -i ) elif [ $MODE = CPIOSSH ]; then $FIND|cpio -ov --null|ssh -C $HOST "( mkdircd \"$EXTP/$BU\"&&cpio -i )" elif [ $MODE = AFIOSSH ]; then $FIND|afio -ov -0 -|ssh -C $HOST "( mkdircd \"$EXTP/$BU\"&&afio -i - )" elif [ $MODE = TARSSH ]; then (tar cvf - . )|ssh -C $HOST "( mkdircd \"$EXTP/$BU\"&& tar xvfp - )" elif [ $MODE = RSYNCSSH ]; then rsync -rlpt ./ "${HOST}:${EXTP}-${BKUP}-${TIME}" else echo "Any other idea to backup?" $FIND |xargs -0 -n 1 echo fi
These are meant to be command examples. Please read the script and edit it yourself before using it.
I keep this bkup
in my "/usr/local/bin/
" directory. I issue this bkup
command without any option in the working directory whenever I need a temporary snapshot backup.
For making a snapshot history of a source file tree or a configuration file tree, it is easier and more space efficient to use git
(7) (see Section 10.9.5, “Git for recording configuration history”).
Removable storage devices may be any one of the following.
These removable storage devices can be automatically mounted as a user under modern desktop environments, such as GNOME using gnome-mount
(1).
The mount point under GNOME is chosen as "/media/<disk_label>
", which can be customized by the following.
mlabel
(1) for FAT filesystem
genisoimage
(1) with "-V
" option for ISO9660 filesystem
tune2fs
(1) with "-L
" option for ext2/ext3 filesystem
Automounting under modern desktop environment happens only when those removable media devices are not listed in "/etc/fstab
".
When a wrong mount option causes problems, erase its corresponding setting under "/system/storage/
" via gconf-editor
(1).
Table 10.4. List of packages which permit normal users to mount removable devices without a matching "/etc/fstab
" entry
package | popcon | size | description |
---|---|---|---|
gnome-mount | V:15, I:28 | NOT_FOUND | wrapper for (un)mounting and ejecting storage devices (used by GNOME) |
pmount | V:4, I:19 | 548 | mount removable devices as normal user (used by KDE) |
cryptmount | V:0.2, I:0.5 | 360 | management and user-mode mounting of encrypted filesystems |
usbmount | V:0.4, I:1.4 | 112 | automatically mount and unmount USB storage devices |
When sharing data with another system via a removable storage device, you should format it with a common filesystem supported by both systems. Here is a list of filesystem choices.
Table 10.5. List of filesystem choices for removable storage devices with typical usage scenarios
filesystem | description of typical usage scenario |
---|---|
FAT12 | cross platform sharing of data on the floppy disk (<32MiB) |
FAT16 | cross platform sharing of data on the small hard disk like device (<2GiB) |
FAT32 | cross platform sharing of data on the large hard disk like device (<8TiB, supported by newer than MS Windows95 OSR2) |
NTFS | cross platform sharing of data on the large hard disk like device (supported natively on MS Windows NT and later version, and supported by NTFS-3G via FUSE on Linux) |
ISO9660 | cross platform sharing of static data on CD-R and DVD+/-R |
UDF | incremental data writing on CD-R and DVD+/-R (new) |
MINIX filesystem | space efficient unix file data storage on the floppy disk |
ext2 filesystem | sharing of data on the hard disk like device with older Linux systems |
ext3 filesystem | sharing of data on the hard disk like device with current Linux systems (journaling filesystem) |
See Section 9.4.1, “Removable disk encryption with dm-crypt/LUKS” for cross platform sharing of data using device level encryption.
The FAT filesystem is supported by almost all modern operating systems and is quite useful for exchanging data via removable hard-disk-like media.
When formatting removable hard disk like devices for cross platform sharing of data with the FAT filesystem, the following should be safe choices.
Partitioning them with fdisk
(8), cfdisk
(8) or parted
(8) (see Section 9.3.1, “Disk partition configuration”) into a single primary partition and marking it as follows.
Formatting the primary partition with mkfs.vfat
(8) with the following.
/dev/sda1
" for FAT16
-F 32 /dev/sda1
" for FAT32
When using the FAT or ISO9660 filesystems for sharing data, the following should be safe practices.
tar
(1), cpio
(1), or afio
(1) to retain the long filenames, the symbolic links, the original Unix file permissions and the owner information.
split
(1) command to protect it from the file size limitation.
For FAT filesystems, by design, the maximum file size is (2^32 - 1) bytes = (4GiB - 1 byte)
. For some applications on older 32-bit OSs, the maximum file size was even smaller, (2^31 - 1) bytes = (2GiB - 1 byte)
. Debian does not suffer the latter problem.
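For example, an archive larger than the FAT limit can be split into safely sized pieces and later reassembled (filenames are illustrative).

```shell
$ split -b 2000m large.tar.gz large.tar.gz.part-
$ cat large.tar.gz.part-* > large-restored.tar.gz
$ cmp large.tar.gz large-restored.tar.gz
```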
Microsoft itself does not recommend using FAT for drives or partitions over 200 MB. Microsoft highlights its shortcomings, such as inefficient disk space usage, in their "Overview of FAT, HPFS, and NTFS File Systems". Of course, we should normally use the ext3 filesystem for Linux.
For more on filesystems and accessing filesystems, please read "Filesystems HOWTO".
When sharing data with other systems via a network, you should use common services. Here are some hints.
Table 10.6. List of network services to choose with the typical usage scenario
network service | description of typical usage scenario |
---|---|
SMB/CIFS network mounted filesystem with Samba |
sharing files via "Microsoft Windows Network", see smb.conf (5) and The Official Samba 3.2.x HOWTO and Reference Guide or the samba-doc package
|
NFS network mounted filesystem with the Linux kernel |
sharing files via "Unix/Linux Network", see exports (5) and Linux NFS-HOWTO
|
HTTP service | sharing file between the web server/client |
HTTPS service | sharing file between the web server/client with encrypted Secure Sockets Layer (SSL) or Transport Layer Security (TLS) |
FTP service | sharing file between the FTP server/client |
Although these filesystems mounted over the network and these file transfer methods over the network are quite convenient for sharing data, they may be insecure. Their network connection must be secured by the following.
See also Section 6.10, “Other network application servers” and Section 6.11, “Other network application clients”.
When choosing computer data storage media for important data archives, you should be careful about their limitations. For small personal data backups, I use CD-R and DVD-R from brand-name companies and store them in a cool, shaded, dry, clean environment. (Tape archive media seem to be popular for professional use.)
A fire-resistant safe is meant for paper documents. Most computer data storage media have less temperature tolerance than paper. I usually rely on multiple secure encrypted copies stored in multiple secure locations.
Optimistic storage life of archive media as seen on the net (mostly from vendor information).
These do not count mechanical failures due to handling etc.
Optimistic write cycles of archive media as seen on the net (mostly from vendor information).
Figures for storage life and write cycles here should not be used for decisions on any critical data storage. Please consult the specific product information provided by the manufacturer.
Since CD/DVD-R and paper have only 1 write cycle, they inherently prevent accidental data loss by overwriting. This is an advantage!
If you need fast and frequent backups of a large amount of data, a hard disk on a remote host linked by a fast network connection may be the only realistic option.
Here, we discuss manipulations of the disk image. See Section 9.3, “Data storage tips”, too.
The disk image file, "disk.img
", of an unmounted device, e.g., the second SCSI drive "/dev/sdb
", can be made using cp
(1) or dd
(1) by the following.
# cp /dev/sdb disk.img # dd if=/dev/sdb of=disk.img
The disk image of the traditional PC's master boot record (MBR) (see Section 9.3.1, “Disk partition configuration”), which resides in the first sector of the primary IDE disk, can be made by using dd
(1) by the following.
# dd if=/dev/hda of=mbr.img bs=512 count=1 # dd if=/dev/hda of=mbr-nopart.img bs=446 count=1 # dd if=/dev/hda of=mbr-part.img skip=446 bs=1 count=66
mbr.img
": The MBR with the partition table
mbr-nopart.img
": The MBR without the partition table
mbr-part.img
": The partition table of the MBR only
If you have a SCSI device (including the new serial ATA drive) as the boot disk, substitute "/dev/hda
" with "/dev/sda
".
If you are making an image of a disk partition of the original disk, substitute "/dev/hda
" with "/dev/hda1
" etc.
The disk image file, "disk.img
" can be written to an unmounted device, e.g., the second SCSI drive "/dev/sdb
" with matching size, by the following.
# dd if=disk.img of=/dev/sdb
Similarly, the disk partition image file, "partition.img
" can be written to an unmounted partition, e.g., the first partition of the second SCSI drive "/dev/sdb1
" with matching size, by the following.
# dd if=partition.img of=/dev/sdb1
The disk image "partition.img
" containing a single partition image can be mounted and unmounted by using the loop device as follows.
# losetup -v -f partition.img Loop device is /dev/loop0 # mkdir -p /mnt/loop0 # mount -t auto /dev/loop0 /mnt/loop0 ...hack...hack...hack # umount /dev/loop0 # losetup -d /dev/loop0
This can be simplified as follows.
# mkdir -p /mnt/loop0 # mount -t auto -o loop partition.img /mnt/loop0 ...hack...hack...hack # umount partition.img
Each partition of the disk image "disk.img
" containing multiple partitions can be mounted by using the loop device. Since the loop device does not manage partitions by default, we need to reset it as follows.
# modinfo -p loop # verify kernel capability max_part:Maximum number of partitions per loop device max_loop:Maximum number of loop devices # losetup -a # verify nothing using the loop device # rmmod loop # modprobe loop max_part=16
Now, the loop device can manage up to 16 partitions.
# losetup -v -f disk.img Loop device is /dev/loop0 # fdisk -l /dev/loop0 Disk /dev/loop0: 5368 MB, 5368709120 bytes 255 heads, 63 sectors/track, 652 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x452b6464 Device Boot Start End Blocks Id System /dev/loop0p1 1 600 4819468+ 83 Linux /dev/loop0p2 601 652 417690 83 Linux # mkdir -p /mnt/loop0p1 # mount -t ext3 /dev/loop0p1 /mnt/loop0p1 # mkdir -p /mnt/loop0p2 # mount -t ext3 /dev/loop0p2 /mnt/loop0p2 ...hack...hack...hack # umount /dev/loop0p1 # umount /dev/loop0p2 # losetup -d /dev/loop0
Alternatively, similar effects can be achieved by using the device mapper devices created by kpartx
(8) from the kpartx
package as follows.
# kpartx -a -v disk.img ... # mkdir -p /mnt/loop0p2 # mount -t ext3 /dev/mapper/loop0p2 /mnt/loop0p2 ... ...hack...hack...hack # umount /dev/mapper/loop0p2 ... # kpartx -d disk.img
You can also mount a single partition of such a disk image with the loop device, using an offset to skip the MBR, etc. But this is more error prone.
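As a sketch of the offset approach (assuming a DOS-style partition table whose first partition starts at sector 63, as reported by "fdisk -l"; the image name and mount point are illustrative):

```shell
# The offset is the partition's start sector times the sector size.
# 63 is a common start sector for old DOS-style partition tables;
# read the real value from "fdisk -l disk.img" output.
START_SECTOR=63
SECTOR_SIZE=512
OFFSET=$((START_SECTOR * SECTOR_SIZE))
echo "$OFFSET"
# Then, as root, the single partition could be mounted directly:
# mount -t auto -o loop,offset=$OFFSET disk.img /mnt/part1
```

The arithmetic is the error-prone part: a wrong start sector silently mounts garbage, which is why the kpartx approach above is usually safer.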
A disk image file, "disk.img
" can be cleaned of all removed files into clean sparse image "new.img
" by the following.
# mkdir old; mkdir new # mount -t auto -o loop disk.img old # dd bs=1 count=0 if=/dev/zero of=new.img seek=5G # mount -t auto -o loop new.img new # cd old # cp -a --sparse=always ./ ../new/ # cd .. # umount new.img # umount disk.img
If "disk.img
" is in ext2 or ext3, you can also use zerofree
(8) from the zerofree
package as follows.
# losetup -f -v disk.img Loop device is /dev/loop3 # zerofree /dev/loop3 # cp --sparse=always disk.img new.img
The empty disk image "disk.img
" which can grow up to 5GiB can be made using dd
(1) as follows.
$ dd bs=1 count=0 if=/dev/zero of=disk.img seek=5G
You can create an ext3 filesystem on this disk image "disk.img
" using the loop device as follows.
# losetup -f -v disk.img Loop device is /dev/loop1 # mkfs.ext3 /dev/loop1 ...hack...hack...hack # losetup -d /dev/loop1 $ du --apparent-size -h disk.img 5.0G disk.img $ du -h disk.img 83M disk.img
For "disk.img
", its file size is 5.0 GiB and its actual disk usage is mere 83MiB. This discrepancy is possible since ext2fs can hold sparse file.
The actual disk usage of sparse file grows with data which are written to it.
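This growth can be demonstrated with a small sparse file (a sketch; the file name and sizes are arbitrary):

```shell
# Create a sparse file with a 100 MiB apparent size and no allocated blocks.
dd bs=1 count=0 if=/dev/zero of=sparse.img seek=100M 2>/dev/null
du --apparent-size -k sparse.img   # apparent size: 102400 KiB
du -k sparse.img                   # actual usage: typically 0 KiB
# Writing real data allocates blocks only where the data lands.
dd if=/dev/urandom of=sparse.img bs=1k count=8 conv=notrunc 2>/dev/null
du -k sparse.img                   # actual usage grows to a few KiB
```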
Using similar operations on the devices created by the loop device or the device mapper as in Section 10.2.3, “Mounting the disk image file”, you can partition this disk image "disk.img
" using parted
(8) or fdisk
(8), and create filesystems on it using mkfs.ext3
(8), mkswap
(8), etc.
The ISO9660 image file, "cd.iso
", from the source directory tree at "source_directory
" can be made using genisoimage
(1) provided by cdrkit by the following.
# genisoimage -r -J -T -V volume_id -o cd.iso source_directory
Similarly, the bootable ISO9660 image file, "cdboot.iso
", can be made from debian-installer
like directory tree at "source_directory
" by the following.
# genisoimage -r -o cdboot.iso -V volume_id \ -b isolinux/isolinux.bin -c isolinux/boot.cat \ -no-emul-boot -boot-load-size 4 -boot-info-table source_directory
Here the Isolinux boot loader (see Section 3.3, “Stage 2: the boot loader”) is used for booting.
You can calculate the md5sum value and make the ISO9660 image directly from the CD-ROM device as follows.
$ isoinfo -d -i /dev/cdrom CD-ROM is in ISO 9660 format ... Logical block size is: 2048 Volume size is: 23150592 ... # dd if=/dev/cdrom bs=2048 count=23150592 conv=notrunc,noerror | md5sum # dd if=/dev/cdrom bs=2048 count=23150592 conv=notrunc,noerror > cd.iso
You must carefully avoid the ISO9660 filesystem read-ahead bug of Linux by specifying the exact block count as above to get the right result.
A DVD is only a large CD to wodim
(1) provided by cdrkit.
You can find a usable device by the following.
# wodim --devices
Then the blank CD-R is inserted into the CD drive, and the ISO9660 image file, "cd.iso
" is written to this device, e.g., "/dev/hda
", using wodim
(1) by the following.
# wodim -v -eject dev=/dev/hda cd.iso
If CD-RW is used instead of CD-R, do this instead by the following.
# wodim -v -eject blank=fast dev=/dev/hda cd.iso
If your desktop system mounts CDs automatically, unmount it with "sudo umount /dev/hda
" before using wodim
(1).
If "cd.iso
" contains an ISO9660 image, then the following manually mounts it to "/cdrom
".
# mount -t iso9660 -o ro,loop cd.iso /cdrom
Modern desktop systems mount removable media automatically (see Section 10.1.10, “Removable storage device”).
Here, we discuss direct manipulations of the binary data on storage media. See Section 9.3, “Data storage tips”, too.
The most basic way to view binary data is to use the "od -t x1
" command.
Table 10.7. List of packages which view and edit binary data
package | popcon | size | description |
---|---|---|---|
coreutils | V:92, I:99 | 13828 | basic package which has od (1) to dump files (HEX, ASCII, OCTAL, …) |
bsdmainutils | V:81, I:99 | 768 | utility package which has hd (1) to dump files (HEX, ASCII, OCTAL, …) |
hexedit | V:0.3, I:1.9 | 108 | binary editor and viewer (HEX, ASCII) |
bless | V:0.08, I:0.3 | 1232 | full featured hexadecimal editor (GNOME) |
okteta | V:0.4, I:3 | 2528 | full featured hexadecimal editor (KDE4) |
ncurses-hexedit | V:0.07, I:0.5 | 192 | binary editor and viewer (HEX, ASCII, EBCDIC) |
lde | V:0.04, I:0.3 | 992 | Linux Disk Editor |
beav | V:0.03, I:0.3 | 164 | binary editor and viewer (HEX, ASCII, EBCDIC, OCTAL, …) |
hex | V:0.01, I:0.09 | 84 | hexadecimal dumping tool (supports Japanese 2-byte codes) |
HEX is used as an acronym for hexadecimal format with radix 16. OCTAL is for octal format with radix 8. ASCII is for American Standard Code for Information Interchange, i.e., normal English text code. EBCDIC is for Extended Binary Coded Decimal Interchange Code used on IBM mainframe operating systems.
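As a quick illustration of the "od -t x1" method, the bytes of a short text file show up as their HEX ASCII codes (the file name is arbitrary):

```shell
printf 'ABC\n' > sample.bin
od -t x1 sample.bin
# 0000000 41 42 43 0a
# 0000004
```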
There are tools to read and write files without mounting the disk.
Software RAID systems offered by the Linux kernel provide data redundancy at the kernel filesystem level to achieve high levels of storage reliability.
There are also tools to add data redundancy to files at the application program level to achieve high levels of storage reliability.
Table 10.9. List of tools to add data redundancy to files
package | popcon | size | description |
---|---|---|---|
par2 | V:0.5, I:1.7 | 272 | Parity Archive Volume Set, for checking and repair of files |
dvdisaster | V:0.14, I:0.7 | 1388 | data loss/scratch/aging protection for CD/DVD media |
dvbackup | V:0.01, I:0.09 | 544 | backup tool using MiniDV camcorders (providing rsbep (1)) |
vdmfec | V:0.00, I:0.02 | 88 | recover lost blocks using Forward Error Correction |
There are tools for data file recovery and forensic analysis.
Table 10.10. List of packages for data file recovery and forensic analysis
package | popcon | size | description |
---|---|---|---|
testdisk | V:0.3, I:3 | 4620 | utilities for partition scan and disk recovery |
magicrescue | V:0.07, I:0.5 | 344 | utility to recover files by looking for magic bytes |
scalpel | V:0.03, I:0.2 | 124 | frugal, high performance file carver |
myrescue | V:0.02, I:0.18 | 84 | rescue data from damaged harddisks |
recover | V:0.07, I:0.6 | 104 | utility to undelete files on the ext2 filesystem |
e2undel | V:0.07, I:0.5 | 244 | utility to undelete files on the ext2 filesystem |
ext3grep | V:0.08, I:0.6 | 300 | tool to help recover deleted files on the ext3 filesystem |
scrounge-ntfs | V:0.03, I:0.4 | 80 | data recovery program for NTFS filesystems |
gzrt | V:0.01, I:0.12 | 68 | gzip recovery toolkit |
sleuthkit | V:0.13, I:0.7 | 540 | tools for forensics analysis (Sleuthkit) |
autopsy | V:0.07, I:0.4 | 1372 | graphical interface to SleuthKit |
foremost | V:0.11, I:0.8 | 140 | forensics application to recover data |
guymager | V:0.00, I:0.02 | 688 | forensic imaging tool based on Qt |
tct | V:0.03, I:0.2 | 604 | forensics related utilities |
dcfldd | V:0.03, I:0.2 | 124 | enhanced version of dd for forensics and security |
rdd | V:0.01, I:0.11 | 200 | forensic copy program |
When data is too big to back up as a single file, you can back up its content after splitting it into, e.g., 2000 MiB chunks and merge those chunks back into the original file later.
$ split -b 2000m large_file $ cat x* >large_file
Please make sure you do not have any files starting with "x
" to avoid name crashes.
In order to clear the contents of a file such as a log file, do not use rm
(1) to delete the file and then create a new empty file, because the file may still be accessed in the interval between commands. The following is the safe way to clear the contents of the file.
$ :>file_to_be_cleared
The following commands create dummy or empty files.
$ dd if=/dev/zero of=5kb.file bs=1k count=5 $ dd if=/dev/urandom of=7mb.file bs=1M count=7 $ touch zero.file $ : > alwayszero.file
You should find the following files.
5kb.file
" is 5KB of zeros.
7mb.file
" is 7MB of random data.
zero.file
" may be a 0 byte file. If it existed, its mtime
is updated while its content and its length are kept.
alwayszero.file
" is always a 0 byte file. If it existed, its mtime
is updated and its content is reset.
There are several ways to completely erase the data from an entire hard-disk-like device, e.g., a USB memory stick at "/dev/sda
".
Check your USB memory stick location with mount
(8) before executing the commands here. The device pointed to by "/dev/sda
" may be a SCSI or serial ATA hard disk where your entire system resides.
Erase all the disk content by resetting data to 0 with the following.
# dd if=/dev/zero of=/dev/sda
Erase all by overwriting random data with the following.
# dd if=/dev/urandom of=/dev/sda
Erase all by overwriting random data very efficiently with the following.
# shred -v -n 1 /dev/sda
Since dd
(1) is available from the shell of many bootable Linux CDs such as Debian installer CD, you can erase your installed system completely by running an erase command from such media on the system hard disk, e.g., "/dev/hda
", "/dev/sda
", etc.
Unused areas on a hard disk (or USB memory stick), e.g. "/dev/sdb1
", may still contain the erased data itself, since it is only unlinked from the filesystem. It can be cleaned by overwriting it.
# mount -t auto /dev/sdb1 /mnt/foo # cd /mnt/foo # dd if=/dev/zero of=junk dd: writing to `junk': No space left on device ... # sync # umount /dev/sdb1
This is usually good enough for your USB memory stick. But it is not perfect. Parts of erased filenames and their attributes may remain hidden in the filesystem.
Even if you have accidentally deleted a file, as long as that file is still being used by some application (read or write mode), it is possible to recover such a file.
For example, try the following.
$ echo foo > bar $ less bar $ ps aux | grep ' less[ ]' bozo 4775 0.0 0.0 92200 884 pts/8 S+ 00:18 0:00 less bar $ rm bar $ ls -l /proc/4775/fd | grep bar lr-x------ 1 bozo bozo 64 2008-05-09 00:19 4 -> /home/bozo/bar (deleted) $ cat /proc/4775/fd/4 >bar $ ls -l -rw-r--r-- 1 bozo bozo 4 2008-05-09 00:25 bar $ cat bar foo
Execute on another terminal (when you have the lsof
package installed) as follows.
$ ls -li bar 2228329 -rw-r--r-- 1 bozo bozo 4 2008-05-11 11:02 bar $ lsof |grep bar|grep less less 4775 bozo 4r REG 8,3 4 2228329 /home/bozo/bar $ rm bar $ lsof |grep bar|grep less less 4775 bozo 4r REG 8,3 4 2228329 /home/bozo/bar (deleted) $ cat /proc/4775/fd/4 >bar $ ls -li bar 2228302 -rw-r--r-- 1 bozo bozo 4 2008-05-11 11:05 bar $ cat bar foo
Files with hardlinks can be identified by "ls -li
".
$ ls -li total 0 2738405 -rw-r--r-- 1 root root 0 2008-09-15 20:21 bar 2738404 -rw-r--r-- 2 root root 0 2008-09-15 20:21 baz 2738404 -rw-r--r-- 2 root root 0 2008-09-15 20:21 foo
Both "baz
" and "foo
" have link counts of "2" (>1) showing them to have hardlinks. Their inode numbers are common "2738404". This means they are the same hardlinked file. If you do not happen to find all hardlinked files by chance, you can search it by the inode, e.g., "2738404" as the following.
# find /path/to/mount/point -xdev -inum 2738404
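The situation above can be reproduced, and hardlinks identified programmatically, with test (1)'s "-ef" operator, which compares device and inode numbers (the file names are illustrative):

```shell
echo data > foo
ln foo baz          # hardlink: same inode, link count becomes 2
echo data > bar     # a separate file with identical content
[ foo -ef baz ] && echo "foo and baz are the same file"
[ foo -ef bar ] || echo "foo and bar are different files"
```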
The data security infrastructure is provided by the combination of data encryption tools, message digest tools, and signature tools.
Table 10.11. List of data security infrastructure tools
command | package | popcon | size | description |
---|---|---|---|---|
gpg (1) | gnupg | V:43, I:99 | 5288 | GNU Privacy Guard - OpenPGP encryption and signing tool |
N/A | gnupg-doc | I:1.1 | 4124 | GNU Privacy Guard documentation |
gpgv (1) | gpgv | V:59, I:99 | 436 | GNU Privacy Guard - signature verification tool |
paperkey (1) | paperkey | V:0.01, I:0.10 | 88 | extract just the secret information out of OpenPGP secret keys |
cryptsetup (8), … | cryptsetup | V:3, I:5 | 1172 | utilities for dm-crypt block device encryption supporting LUKS |
ecryptfs (7), … | ecryptfs-utils | V:0.2, I:0.3 | 416 | utilities for eCryptfs stacked filesystem encryption |
md5sum (1) | coreutils | V:92, I:99 | 13828 | compute and check MD5 message digest |
sha1sum (1) | coreutils | V:92, I:99 | 13828 | compute and check SHA1 message digest |
openssl (1ssl) | openssl | V:56, I:91 | 2380 | compute message digest with "openssl dgst" (OpenSSL) |
See Section 9.4, “Data encryption tips” on dm-crypt and eCryptfs which implement automatic data encryption infrastructure via Linux kernel modules.
Here are GNU Privacy Guard commands for the basic key management.
Table 10.12. List of GNU Privacy Guard commands for the key management
command | description |
---|---|
gpg --gen-key | generate a new key |
gpg --gen-revoke my_user_ID | generate a revocation certificate for my_user_ID |
gpg --edit-key user_ID | edit key interactively, "help" for help |
gpg -o file --export | export all keys to file |
gpg --import file | import all keys from file |
gpg --send-keys user_ID | send key of user_ID to keyserver |
gpg --recv-keys user_ID | receive key of user_ID from keyserver |
gpg --list-keys user_ID | list keys of user_ID |
gpg --list-sigs user_ID | list signatures of user_ID |
gpg --check-sigs user_ID | check signatures of user_ID |
gpg --fingerprint user_ID | check fingerprint of user_ID |
gpg --refresh-keys | update local keyring |
Here is the meaning of the trust code.
Table 10.13. List of the meaning of the trust code
code | description of trust |
---|---|
- | no owner trust assigned / not yet calculated |
e | trust calculation failed |
q | not enough information for calculation |
n | never trust this key |
m | marginally trusted |
f | fully trusted |
u | ultimately trusted |
The following uploads my key "1DD8D791
" to the popular keyserver "hkp://keys.gnupg.net
".
$ gpg --keyserver hkp://keys.gnupg.net --send-keys 1DD8D791
A good default keyserver set up in "~/.gnupg/gpg.conf
" (or old location "~/.gnupg/options
") contains the following.
keyserver hkp://keys.gnupg.net
The following obtains unknown keys from the keyserver.
$ gpg --list-sigs --with-colons | grep '^sig.*\[User ID not found\]' |\ cut -d ':' -f 5| sort | uniq | xargs gpg --recv-keys
There was a bug in the OpenPGP Public Key Server (pre version 0.9.6) which corrupted keys with more than 2 subkeys. The newer gnupg (>1.2.1-2) package can handle these corrupted subkeys. See gpg (1) under the "--repair-pks-subkey-bug" option.
Here are examples for using GNU Privacy Guard commands on files.
Table 10.14. List of GNU Privacy Guard commands on files
command | description |
---|---|
gpg -a -s file | sign file into ASCII armored file.asc |
gpg --armor --sign file | , , |
gpg --clearsign file | clear-sign message |
gpg --clearsign file \| mail foo@example.org | mail a clear-signed message to foo@example.org |
gpg --clearsign --not-dash-escaped patchfile | clear-sign patchfile |
gpg --verify file | verify clear-signed file |
gpg -o file.sig -b file | create detached signature |
gpg -o file.sig --detach-sig file | , , |
gpg --verify file.sig file | verify file with file.sig |
gpg -o crypt_file.gpg -r name -e file | public-key encryption intended for name from file to binary crypt_file.gpg |
gpg -o crypt_file.gpg --recipient name --encrypt file | , , |
gpg -o crypt_file.asc -a -r name -e file | public-key encryption intended for name from file to ASCII armored crypt_file.asc |
gpg -o crypt_file.gpg -c file | symmetric encryption from file to crypt_file.gpg |
gpg -o crypt_file.gpg --symmetric file | , , |
gpg -o crypt_file.asc -a -c file | symmetric encryption from file to ASCII armored crypt_file.asc |
gpg -o file -d crypt_file.gpg -r name | decryption |
gpg -o file --decrypt crypt_file.gpg | , , |
Add the following to "~/.muttrc
" to keep a slow GnuPG from automatically
starting, while allowing it to be used by typing "S
" at the index menu.
macro index S ":toggle pgp_verify_sig\n" set pgp_verify_sig=no
The gnupg plugin lets you run GnuPG transparently for files with the extensions ".gpg", ".asc", and ".pgp".
# aptitude install vim-scripts vim-addon-manager $ vim-addons install gnupg
md5sum
(1) provides a utility to make a digest file using the method in RFC 1321 and to verify each file with it.
$ md5sum foo bar >baz.md5 $ cat baz.md5 d3b07384d113edec49eaa6238ad5ff00 foo c157a79031e1c40f85931829bc5fc552 bar $ md5sum -c baz.md5 foo: OK bar: OK
The computation for the MD5 sum is less CPU intensive than the one for the cryptographic signature by GNU Privacy Guard (GnuPG). Usually, only the top level digest file is cryptographically signed to ensure data integrity.
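A minimal sketch of such a two-level digest tree (file names are hypothetical; in practice the top-level file would be signed with "gpg --clearsign" rather than merely digested, as is done here for illustration):

```shell
echo foo > file1
echo bar > file2
# The per-file digests go into a single digest file.
md5sum file1 file2 > DIGESTS.md5
# Only this one small file needs the CPU intensive cryptographic treatment,
# e.g. "gpg --clearsign DIGESTS.md5"; a plain digest stands in for it here.
md5sum DIGESTS.md5 > TOP.md5
# Verification walks down the tree.
md5sum -c TOP.md5
md5sum -c DIGESTS.md5
```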
There are many merge tools for source code. The following commands caught my eye.
Table 10.15. List of source code merge tools
command | package | popcon | size | description |
---|---|---|---|---|
diff (1) | diff | V:68, I:85 | 36 | compare files line by line |
diff3 (1) | diff | V:68, I:85 | 36 | compare and merge three files line by line |
vimdiff (1) | vim | V:15, I:33 | 1792 | compare 2 files side by side in vim |
patch (1) | patch | V:10, I:92 | 244 | apply a diff file to an original |
dpatch (1) | dpatch | V:1.4, I:11 | 344 | manage series of patches for Debian package |
diffstat (1) | diffstat | V:2, I:15 | 92 | produce a histogram of changes by the diff |
combinediff (1) | patchutils | V:1.8, I:14 | 292 | create a cumulative patch from two incremental patches |
dehtmldiff (1) | patchutils | V:1.8, I:14 | 292 | extract a diff from an HTML page |
filterdiff (1) | patchutils | V:1.8, I:14 | 292 | extract or exclude diffs from a diff file |
fixcvsdiff (1) | patchutils | V:1.8, I:14 | 292 | fix diff files created by CVS that patch (1) mis-interprets |
flipdiff (1) | patchutils | V:1.8, I:14 | 292 | exchange the order of two patches |
grepdiff (1) | patchutils | V:1.8, I:14 | 292 | show which files are modified by a patch matching a regex |
interdiff (1) | patchutils | V:1.8, I:14 | 292 | show differences between two unified diff files |
lsdiff (1) | patchutils | V:1.8, I:14 | 292 | show which files are modified by a patch |
recountdiff (1) | patchutils | V:1.8, I:14 | 292 | recompute counts and offsets in unified context diffs |
rediff (1) | patchutils | V:1.8, I:14 | 292 | fix offsets and counts of a hand-edited diff |
splitdiff (1) | patchutils | V:1.8, I:14 | 292 | separate out incremental patches |
unwrapdiff (1) | patchutils | V:1.8, I:14 | 292 | demangle patches that have been word-wrapped |
wiggle (1) | wiggle | V:0.01, I:0.11 | 232 | apply rejected patches |
quilt (1) | quilt | V:1.5, I:9 | 872 | manage series of patches |
meld (1) | meld | V:0.7, I:2 | 2576 | compare and merge files (GTK) |
xxdiff (1) | xxdiff | V:0.2, I:1.3 | 1352 | compare and merge files (plain X) |
dirdiff (1) | dirdiff | V:0.08, I:0.6 | 224 | display differences and merge changes between directory trees |
docdiff (1) | docdiff | V:0.01, I:0.14 | 688 | compare two files word by word / char by char |
imediff2 (1) | imediff2 | V:0.02, I:0.10 | 76 | interactive full screen 2-way merge tool |
makepatch (1) | makepatch | V:0.01, I:0.17 | 148 | generate extended patch files |
applypatch (1) | makepatch | V:0.01, I:0.17 | 148 | apply extended patch files |
wdiff (1) | wdiff | V:1.6, I:14 | 1024 | display word differences between text files |
One of the following procedures extracts the differences between two source files and creates a unified diff file "file.patch0
" or "file.patch1
" depending on the file location.
$ diff -u file.old file.new > file.patch0 $ diff -u old/file new/file > file.patch1
The diff file (alternatively called patch file) is used to send a program update. The receiving party applies this update to another file by the following.
$ patch -p0 file < file.patch0 $ patch -p1 file < file.patch1
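The round trip can be sketched with tiny stand-in files (names are illustrative); note that "-p0" keeps the path in the diff header as-is:

```shell
# Two versions of a file in the same directory.
printf 'a\nb\n' > file.old
printf 'a\nc\n' > file.new
diff -u file.old file.new > file.patch0 || true  # diff exits 1 when files differ
# The receiving party starts from the old version...
cp file.old file
# ...and applies the update to it.
patch -p0 file < file.patch0
cmp file file.new && echo "update applied"
```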
Here is a summary of the version control systems (VCS) on the Debian system.
If you are new to VCS systems, you should start learning with Git, which is growing fast in popularity.
Table 10.16. List of version control system tools
package | popcon | size | tool | VCS type | comment |
---|---|---|---|---|---|
cssc | V:0.00, I:0.04 | 2240 | CSSC | local | clone of the Unix SCCS (deprecated) |
rcs | V:1.3, I:7 | 772 | RCS | local | "Unix SCCS done right" |
cvs | V:3, I:21 | 3660 | CVS | remote | previous standard remote VCS |
subversion | V:10, I:31 | 4288 | Subversion | remote | "CVS done right", the new de facto standard remote VCS |
git | V:5, I:17 | 10632 | Git | distributed | fast DVCS in C (used by the Linux kernel and others) |
mercurial | V:1.8, I:6 | 368 | Mercurial | distributed | DVCS in Python and some C |
bzr | V:1.1, I:3 | 16220 | Bazaar | distributed | DVCS influenced by tla written in Python (used by Ubuntu) |
darcs | V:0.19, I:1.4 | 9504 | Darcs | distributed | DVCS with smart algebra of patches (slow) |
tla | V:0.17, I:1.4 | 932 | GNU arch | distributed | DVCS mainly by Tom Lord (historic) |
monotone | V:0.04, I:0.3 | 5272 | Monotone | distributed | DVCS in C++ |
tkcvs | V:0.08, I:0.4 | 2476 | CVS, … | remote | GUI display of VCS (CVS, Subversion, RCS) repository tree |
gitk | V:0.8, I:4 | 900 | Git | distributed | GUI display of VCS (Git) repository tree |
VCS is sometimes known as revision control system (RCS), or software configuration management (SCM).
Distributed VCS such as Git is the tool of choice these days. CVS and Subversion may still be useful to join some existing open source program activities.
Debian provides free VCS services via the Debian Alioth service. It supports practically all VCSs. Its documentation can be found at http://wiki.debian.org/Alioth.
In lenny, the git
package was "GNU Interactive Tools", and the Git DVCS was provided by the git-core
package.
There are a few basics for creating a shared access VCS archive.
umask 002
" (see Section 1.2.4, “Control of permissions for newly created files: umask”)
Here is an oversimplified comparison of native VCS commands to provide the big picture. The typical command sequence may require options and arguments.
Table 10.17. Comparison of native VCS commands
CVS | Subversion | Git | function |
---|---|---|---|
cvs init | svnadmin create | git init | create the (local) repository |
cvs login | - | - | login to the remote repository |
cvs co | svn co | git clone | check out the remote repository as the working tree |
cvs up | svn up | git pull | update the working tree by merging the remote repository |
cvs add | svn add | git add . | add file(s) in the working tree to the VCS |
cvs rm | svn rm | git rm | remove file(s) in the working tree from the VCS |
cvs ci | svn ci | - | commit changes to the remote repository |
- | - | git commit -a | commit changes to the local repository |
- | - | git push | update the remote repository by the local repository |
cvs status | svn status | git status | display the working tree status from the VCS |
cvs diff | svn diff | git diff | diff <reference_repository> <working_tree> |
- | - | git repack -a -d; git prune | repack the local repository into single pack |
tkcvs | tkcvs | gitk | GUI display of VCS repository tree |
Invoking a git
subcommand directly as "git-xyz
" from the command line has been deprecated since early 2006.
GUI tools such as tkcvs
(1) and gitk
(1) really help you with tracking the revision history of files. The web interface provided by many public archives for browsing their repositories is also quite useful.
Git can work directly with different VCS repositories such as those provided by CVS and Subversion, and provides a local repository for local changes with the git-cvs
and git-svn
packages. See git for CVS users, and Section 10.9.4, “Git for the Subversion repository”.
Git has commands which have no equivalents in CVS and Subversion: "fetch", "rebase", "cherry-pick", …
See the following.
cvs
(1)
/usr/share/doc/cvs/html-cvsclient
"
/usr/share/doc/cvs/html-info
"
/usr/share/doc/cvsbook
"
info cvs
"
The following configuration allows commits to the CVS repository only by a member of the "src
" group, and administration of CVS only by a member of the "staff
" group, thus reducing the chance of shooting oneself.
# umask 002; mkdir -p /srv/cvs/project # export CVSROOT=/srv/cvs/project # cd $CVSROOT # chown root:src . # chmod 2775 . # cvs -d $CVSROOT init # cd CVSROOT # chown -R root:staff . # chmod 2775 . # touch val-tags # chmod 664 history val-tags # chown root:src history val-tags
You may restrict the creation of new projects by changing the owner of the "$CVSROOT
" directory to "root:staff
" and its permission to "3775
".
The default CVS repository is pointed to by "$CVSROOT
". The following sets up "$CVSROOT
" for the local access.
$ export CVSROOT=/srv/cvs/project
Many public CVS servers provide read-only remote access with the account name "anonymous
" via the pserver service. For example, the Debian web site contents are maintained by the webwml project via CVS at the Debian Alioth service. The following sets up "$CVSROOT
" for the remote access to this CVS repository.
$ export CVSROOT=:pserver:anonymous@cvs.alioth.debian.org:/cvsroot/webwml $ cvs login
Since pserver is prone to eavesdropping attacks and insecure, write access is usually disabled by server administrators.
The following sets up "$CVS_RSH
" and "$CVSROOT
" for the remote access to the CVS repository by webwml project with SSH.
$ export CVS_RSH=ssh $ export CVSROOT=:ext:account@cvs.alioth.debian.org:/cvs/webwml
You can also use public key authentication for SSH which eliminates the remote password prompt.
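A sketch of setting this up (the key file name is a hypothetical example; an empty passphrase gives fully prompt-free operation at the cost of an unprotected private key):

```shell
# Generate an RSA key pair non-interactively.
ssh-keygen -q -t rsa -N "" -f ./cvs_key
ls cvs_key cvs_key.pub
# Append cvs_key.pub to the account's ~/.ssh/authorized_keys on the server,
# then tell ssh to use the key for that host in ~/.ssh/config:
#   Host cvs.alioth.debian.org
#       IdentityFile ~/path/to/cvs_key
```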
Create a new local source tree location at "~/path/to/module1
" by the following.
$ mkdir -p ~/path/to/module1; cd ~/path/to/module1
Populate a new local source tree under "~/path/to/module1
" with files.
Import it to CVS with the following parameters.
module1
"
Main-branch
" (tag for the entire branch)
Release-initial
" (tag for a specific release)
$ cd ~/path/to/module1 $ cvs import -m "Start module1" module1 Main-branch Release-initial $ rm -Rf . # optional
CVS does not overwrite the current repository file but replaces it with another one. Thus, write permission to the repository directory is critical. For every new module, such as "module1
" in repository at "/srv/cvs/project
", run the following to ensure this condition if needed.
# cd /srv/cvs/project # chown -R root:src module1 # chmod -R ug+rwX module1 # chmod 2775 module1
Here is an example of typical work flow using CVS.
Check all available modules from CVS project pointed by "$CVSROOT
" by the following.
$ cvs rls CVSROOT module1 module2 ...
Checkout "module1
" to its default directory "./module1
" by the following.
$ cd ~/path/to $ cvs co module1 $ cd module1
Make changes to the content as needed.
Check for changes by running the equivalent of "diff -u [repository] [local]
" by the following.
$ cvs diff -u
You find that you broke some file "file_to_undo
" severely but other files are fine.
Overwrite "file_to_undo
" file with the clean copy from CVS by the following.
$ cvs up -C file_to_undo
Save the updated local source tree to CVS by the following.
$ cvs ci -m "Describe change"
Create and add "file_to_add
" file to CVS by the following.
$ vi file_to_add $ cvs add file_to_add $ cvs ci -m "Added file_to_add"
Merge the latest version from CVS by the following.
$ cvs up -d
Watch out for lines starting with "C filename
" which indicates conflicting changes.
Look for unmodified code in ".#filename.version
".
Search for "<<<<<<<
" and ">>>>>>>
" in files for conflicting changes.
Edit files to fix conflicts as needed.
Add a release tag "Release-1
" by the following.
$ cvs ci -m "last commit for Release-1" $ cvs tag Release-1
Edit further.
Remove the release tag "Release-1
" by the following.
$ cvs tag -d Release-1
Check in changes to CVS by the following.
$ cvs ci -m "real last commit for Release-1"
Re-add the release tag "Release-1
" to updated CVS HEAD of main by the following.
$ cvs tag Release-1
Create a branch with a sticky branch tag "Release-initial-bugfixes
" from the original version pointed by the tag "Release-initial
" and check it out to "~/path/to/old
" directory by the following.
$ cvs rtag -b -r Release-initial Release-initial-bugfixes module1 $ cd ~/path/to $ cvs co -r Release-initial-bugfixes -d old module1 $ cd old
Use "-D 2005-12-20
" (ISO 8601 date format) instead of "-r Release-initial
" to specify particular date as the branch point.
Work on this local source tree having the sticky tag "Release-initial-bugfixes
" which is based on the original version.
Work on this branch by yourself … until someone else joins this "Release-initial-bugfixes
" branch.
Sync with files modified by others on this branch while creating new directories as needed by the following.
$ cvs up -d
Edit files to fix conflicts as needed.
Check in changes to CVS by the following.
$ cvs ci -m "checked into this branch"
Update the local tree by HEAD of main while removing sticky tag ("-A
") and without keyword expansion ("-kk
") by the following.
$ cvs up -d -kk -A
Update the local tree (content = HEAD of main) by merging from the "Release-initial-bugfixes
" branch and without keyword expansion by the following.
$ cvs up -d -kk -j Release-initial-bugfixes
Fix conflicts with editor.
Check in changes to CVS by the following.
$ cvs ci -m "merged Release-initial-bugfixes"
Make archive by the following.
$ cd .. $ mv old old-module1-bugfixes $ tar -cvzf old-module1-bugfixes.tar.gz old-module1-bugfixes $ rm -rf old-module1-bugfixes
"cvs up
" command can take "-d
" option to create new directories and "-P
" option to prune empty directories.
You can check out only a subdirectory of "module1
" by providing its name as "cvs co module1/subdir
".
Table 10.18. Notable options for CVS commands (use as first argument(s) to cvs
(1))
option | meaning |
---|---|
-n | dry run, no effect |
-t | display messages showing steps of cvs activity |
To get the latest files from CVS, use the date "tomorrow
" by the following.
$ cvs ex -D tomorrow module_name
Add module alias "mx
" to a CVS project (local server) by the following.
$ export CVSROOT=/srv/cvs/project $ cvs co CVSROOT/modules $ cd CVSROOT $ echo "mx -a module1" >>modules $ cvs ci -m "Now mx is an alias for module1" $ cvs release -d .
Now, you can check out "module1
" (alias: "mx
") from CVS to "new
" directory by the following.
$ cvs co -d new mx $ cd new
In order to perform the above procedure, you should have appropriate file permissions.
When you checkout files from CVS, their execution permission bit is retained.
Whenever you see execution permission problems in a checked out file, e.g. "filename
", change its permission in the corresponding CVS repository by the following to fix it.
# chmod ugo-x filename
Subversion is a recent-generation version control system replacing older CVS. It has most of CVS's features except tags and branches.
You need to install subversion
, libapache2-svn
and subversion-tools
packages to set up a Subversion server.
Currently, the subversion
package does not set up a repository, so one must set it up manually. One possible location for a repository is in "/srv/svn/project
".
Create a directory by the following.
# mkdir -p /srv/svn/project
Create the repository database by the following.
# svnadmin create /srv/svn/project
If you only access the Subversion repository via the Apache2 server, you just need to make the repository writable only by the WWW server by the following.
# chown -R www-data:www-data /srv/svn/project
Add (or uncomment) the following in "/etc/apache2/mods-available/dav_svn.conf
" to allow access to the repository via user authentication.
<Location /project> DAV svn SVNPath /srv/svn/project AuthType Basic AuthName "Subversion repository" AuthUserFile /etc/subversion/passwd <LimitExcept GET PROPFIND OPTIONS REPORT> Require valid-user </LimitExcept> </Location>
Create a user authentication file by the following.
# htpasswd -c /etc/subversion/passwd some-username
Restart Apache2.
Your new Subversion repository is accessible at URL "http://localhost/project
" and "http://example.com/project
" from svn
(1) (assuming your URL of web server is "http://example.com/
").
The following sets up a Subversion repository for local access by a group, e.g. project.
# chmod 2775 /srv/svn/project
# chown -R root:project /srv/svn/project
# chmod -R ug+rwX /srv/svn/project
Your new Subversion repository is group accessible at URL "file:///localhost/srv/svn/project
" or "file:///srv/svn/project
" from svn
(1) for local users belonging to project
group. You must run commands, such as svn
, svnserve
, svnlook
, and svnadmin
under "umask 002
" to ensure group access.
For SSH, a group accessible Subversion repository is at URL "example.com:/srv/svn/project"; you can access it from svn(1) at URL "svn+ssh://example.com:/srv/svn/project".
Many projects use a directory tree similar to the following for Subversion to compensate for its lack of branches and tags.
----- module1
      |-- branches
      |-- tags
      |   |-- release-1.0
      |   `-- release-2.0
      |
      `-- trunk
          |-- file1
          |-- file2
          `-- file3
----- module2
You must use the "svn copy …" command to mark branches and tags. This ensures that Subversion records the modification history of files properly and saves storage space.
Create a new local source tree location at "~/path/to/module1
" by the following.
$ mkdir -p ~/path/to/module1; cd ~/path/to/module1
Populate a new local source tree under "~/path/to/module1
" with files.
Import it to Subversion with the following parameters.
module name: "module1"
repository URL: "file:///srv/svn/project"
trunk directory: "module1/trunk"
tag directory: "module1/tags/Release-initial"
$ cd ~/path/to/module1
$ svn import file:///srv/svn/project/module1/trunk -m "Start module1"
$ svn cp file:///srv/svn/project/module1/trunk file:///srv/svn/project/module1/tags/Release-initial
Alternatively, by the following.
$ svn import ~/path/to/module1 file:///srv/svn/project/module1/trunk -m "Start module1"
$ svn cp file:///srv/svn/project/module1/trunk file:///srv/svn/project/module1/tags/Release-initial
You can replace URLs such as "file:///…
" by any other URL formats such as "http://…
" and "svn+ssh://…
".
Here is an example of typical work flow using Subversion with its native client.
Client commands offered by the git-svn package may offer an alternative work flow for Subversion using the git command. See Section 10.9.4, “Git for the Subversion repository”.
Check all available modules from the Subversion project pointed to by the URL "file:///srv/svn/project" by the following.
$ svn list file:///srv/svn/project
module1
module2
...
Checkout "module1/trunk
" to a directory "module1
" by the following.
$ cd ~/path/to
$ svn co file:///srv/svn/project/module1/trunk module1
$ cd module1
Make changes to the content as needed.
Check changes by making "diff -u [repository] [local]
" equivalent by the following.
$ svn diff
You find that you broke some file "file_to_undo
" severely but other files are fine.
Overwrite "file_to_undo
" file with the clean copy from Subversion by the following.
$ svn revert file_to_undo
Save the updated local source tree to Subversion by the following.
$ svn ci -m "Describe change"
Create and add "file_to_add
" file to Subversion by the following.
$ vi file_to_add
$ svn add file_to_add
$ svn ci -m "Added file_to_add"
Merge the latest version from Subversion by the following.
$ svn up
Watch out for lines starting with "C filename", which indicate conflicting changes.
Look for unmodified code in, e.g., "filename.r6
", "filename.r9
", and "filename.mine
".
Search for "<<<<<<<
" and ">>>>>>>
" in files for conflicting changes.
Edit files to fix conflicts as needed.
Add a release tag "Release-1
" by the following.
$ svn ci -m "last commit for Release-1"
$ svn cp file:///srv/svn/project/module1/trunk file:///srv/svn/project/module1/tags/Release-1
Edit further.
Remove the release tag "Release-1
" by the following.
$ svn rm file:///srv/svn/project/module1/tags/Release-1
Check in changes to Subversion by the following.
$ svn ci -m "real last commit for Release-1"
Re-add the release tag "Release-1
" from updated Subversion HEAD of trunk by the following.
$ svn cp file:///srv/svn/project/module1/trunk file:///srv/svn/project/module1/tags/Release-1
Create a branch with a path "module1/branches/Release-initial-bugfixes
" from the original version pointed by the path "module1/tags/Release-initial
" and check it out to "~/path/to/old
" directory by the following.
$ svn cp file:///srv/svn/project/module1/tags/Release-initial file:///srv/svn/project/module1/branches/Release-initial-bugfixes
$ cd ~/path/to
$ svn co file:///srv/svn/project/module1/branches/Release-initial-bugfixes old
$ cd old
Use "module1/trunk@{2005-12-20}
" (ISO 8601 date format) instead of "module1/tags/Release-initial
" to specify particular date as the branch point.
Work on this local source tree pointing to branch "Release-initial-bugfixes
" which is based on the original version.
Work on this branch by yourself … until someone else joins this "Release-initial-bugfixes" branch.
Sync with files modified by others on this branch by the following.
$ svn up
Edit files to fix conflicts as needed.
Check in changes to Subversion by the following.
$ svn ci -m "checked into this branch"
Update the local tree with HEAD of trunk by the following.
$ svn switch file:///srv/svn/project/module1/trunk
Update the local tree (content = HEAD of trunk) by merging from the "Release-initial-bugfixes
" branch by the following.
$ svn merge file:///srv/svn/project/module1/branches/Release-initial-bugfixes
Fix conflicts with editor.
Check in changes to Subversion by the following.
$ svn ci -m "merged Release-initial-bugfixes"
Make archive by the following.
$ cd ..
$ mv old old-module1-bugfixes
$ tar -cvzf old-module1-bugfixes.tar.gz old-module1-bugfixes
$ rm -rf old-module1-bugfixes
You can replace URLs such as "file:///…
" by any other URL formats such as "http://…
" and "svn+ssh://…
".
You can checkout only a sub directory of "module1
" by providing its name as "svn co file:///srv/svn/project/module1/trunk/subdir module1/subdir
", etc.
Table 10.19. Notable options for Subversion commands (use as first argument(s) to svn(1))

option | meaning |
---|---|
--dry-run | dry run, no effect |
-v | display detail messages of svn activity |
Git can do everything for both local and remote source code management. This means that you can record the source code changes without needing network connectivity to the remote repository.
You may wish to set several global configuration items in "~/.gitconfig
" such as your name and email address used by Git by the following.
$ git config --global user.name "Name Surname"
$ git config --global user.email yourname@example.com
If you are accustomed to CVS or Subversion commands, you may wish to set several command aliases by the following.
$ git config --global alias.ci "commit -a"
$ git config --global alias.co checkout
You can check your global configuration by the following.
$ git config --global --list
See the following.
* /usr/share/doc/git-doc/git.html
* /usr/share/doc/git-doc/user-manual.html
* /usr/share/doc/git-doc/gittutorial.html
* /usr/share/doc/git-doc/gittutorial-2.html
* /usr/share/doc/git-doc/everyday.html
* git for CVS users (/usr/share/doc/git-doc/gitcvs-migration.html)
* Other git resources available on the web
* Git Magic (/usr/share/doc/gitmagic/html/index.html)
git-gui
(1) and gitk
(1) commands make using Git very easy.
Do not use a tag string containing spaces even if some tools such as gitk(1) allow you to use it. It may choke some other git commands.
Even if your upstream uses a different VCS, it may be a good idea to use git(1) for local activity, since you can manage your local copy of the source tree without a network connection to the upstream. Here are some packages and commands used with git(1).
Table 10.20. List of git related packages and commands
command | package | popcon | size | description |
---|---|---|---|---|
N/A | git-doc | I:3 | 7436 | official documentation for Git |
N/A | gitmagic | I:0.3 | 920 | "Git Magic", easier to understand guide for Git |
git(7) | git | V:5, I:17 | 10632 | Git, the fast, scalable, distributed revision control system |
gitk(1) | gitk | V:0.8, I:4 | 900 | GUI Git repository browser with history |
git-gui(1) | git-gui | V:0.3, I:2 | 1612 | GUI for Git (No history) |
git-svnimport(1) | git-svn | V:0.5, I:3 | 552 | import the data out of Subversion into Git |
git-svn(1) | git-svn | V:0.5, I:3 | 552 | provide bidirectional operation between the Subversion and Git |
git-cvsimport(1) | git-cvs | V:0.17, I:1.6 | 676 | import the data out of CVS into Git |
git-cvsexportcommit(1) | git-cvs | V:0.17, I:1.6 | 676 | export a commit to a CVS checkout from Git |
git-cvsserver(1) | git-cvs | V:0.17, I:1.6 | 676 | CVS server emulator for Git |
git-send-email(1) | git-email | V:0.12, I:1.7 | 404 | send a collection of patches as email from the Git |
stg(1) | stgit | V:0.07, I:0.7 | 1864 | quilt on top of git (Python) |
git-buildpackage(1) | git-buildpackage | V:0.2, I:1.1 | 596 | automate the Debian packaging with the Git |
guilt(7) | guilt | V:0.01, I:0.11 | 336 | quilt on top of git (SH/AWK/SED/…) |
With git
(1), you work on a local branch with many commits and use something like "git rebase -i master
" to reorganize change history later. This enables you to make clean change history. See git-rebase
(1) and git-cherry-pick
(1).
When you want to go back to a clean working directory without losing the current state of the working directory, you can use "git stash
". See git-stash
(1).
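For example, a minimal stash round trip looks like the following (a sketch; "git stash" without arguments saves both staged and unstaged changes to tracked files).

```shell
$ git stash        # save uncommitted changes; the working tree becomes clean
 ... do other work on the clean tree ...
$ git stash pop    # re-apply the saved changes to the working tree
```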
You can check out a Subversion repository at "svn+ssh://svn.example.org/project/module/trunk
" to a local Git repository at "./dest
" and commit back to the Subversion repository. E.g.:
$ git svn clone -s -rHEAD svn+ssh://svn.example.org/project dest
$ cd dest
 ... make changes
$ git commit -a
 ... keep working locally with git
$ git svn dcommit
The use of "-rHEAD
" enables us to avoid cloning entire historical contents from the Subversion repository.
You can manually record chronological history of configuration using Git tools. Here is a simple example for your practice to record "/etc/apt/
" contents.
$ cd /etc/apt/
$ sudo git init
$ sudo chmod 700 .git
$ sudo git add .
$ sudo git commit -a
Commit configuration with description.
Make modifications to the configuration files.
$ cd /etc/apt/
$ sudo git commit -a
Commit configuration with description and continue your life.
$ cd /etc/apt/
$ sudo gitk --all
You have full configuration history with you.
sudo
(8) is needed to work with any file permissions of configuration data. For user configuration data, you may skip sudo
.
The "chmod 700 .git
" command in the above example is needed to protect archive data from unauthorized read access.
For more complete setup for recording configuration history, please look for the etckeeper
package: Section 9.2.10, “Recording changes in configuration files”.
Tools and tips for converting data formats on the Debian system are described.
Standard based tools are in very good shape but support for proprietary data formats is limited.
The following packages for text data conversion caught my eyes.
Table 11.1. List of text data conversion tools
package | popcon | size | keyword | description |
---|---|---|---|---|
libc6 | V:97, I:99 | 10012 | charset | text encoding converter between locales by iconv(1) (fundamental) |
recode | V:1.5, I:7 | 772 | charset+eol | text encoding converter between locales (versatile, more aliases and features) |
konwert | V:0.4, I:4 | 192 | charset | text encoding converter between locales (fancy) |
nkf | V:0.2, I:2 | 300 | charset | character set translator for Japanese |
tcs | V:0.02, I:0.14 | 544 | charset | character set translator |
unaccent | V:0.01, I:0.09 | 76 | charset | replace accented letters by their unaccented equivalent |
tofrodos | V:1.1, I:7 | 48 | eol | text format converter between DOS and Unix: fromdos(1) and todos(1) |
macutils | V:0.05, I:0.5 | 320 | eol | text format converter between Macintosh and Unix: frommac(1) and tomac(1) |
iconv
(1) is provided as a part of the libc6
package and it is always available on practically all systems to convert the encoding of characters.
You can convert encodings of a text file with iconv
(1) by the following.
$ iconv -f encoding1 -t encoding2 input.txt >output.txt
Encoding values are case insensitive and ignore "-
" and "_
" for matching. Supported encodings can be checked by the "iconv -l
" command.
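For example, the following round trip (the text "café" is only sample data) converts UTF-8 text to the latin1 encoding and back; the lowercase "latin1" alias demonstrates the forgiving encoding name matching.

```shell
$ printf 'café\n' | iconv -f UTF-8 -t latin1 | iconv -f latin1 -t UTF-8
café
```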
Table 11.2. List of encoding values and their usage
encoding value | usage |
---|---|
ASCII | American Standard Code for Information Interchange, 7 bit code w/o accented characters |
UTF-8 | current multilingual standard for all modern OSs |
ISO-8859-1 | old standard for western European languages, ASCII + accented characters |
ISO-8859-2 | old standard for eastern European languages, ASCII + accented characters |
ISO-8859-15 | old standard for western European languages, ISO-8859-1 with euro sign |
CP850 | code page 850, Microsoft DOS characters with graphics for western European languages, ISO-8859-1 variant |
CP932 | code page 932, Microsoft Windows style Shift-JIS variant for Japanese |
CP936 | code page 936, Microsoft Windows style GB2312, GBK or GB18030 variant for Simplified Chinese |
CP949 | code page 949, Microsoft Windows style EUC-KR or Unified Hangul Code variant for Korean |
CP950 | code page 950, Microsoft Windows style Big5 variant for Traditional Chinese |
CP1251 | code page 1251, Microsoft Windows style encoding for the Cyrillic alphabet |
CP1252 | code page 1252, Microsoft Windows style ISO-8859-15 variant for western European languages |
KOI8-R | old Russian UNIX standard for the Cyrillic alphabet |
ISO-2022-JP | standard encoding for Japanese email which uses only 7 bit codes |
eucJP | old Japanese UNIX standard 8 bit code and completely different from Shift-JIS |
Shift-JIS | JIS X 0208 Appendix 1 standard for Japanese (see CP932) |
Some encodings are only supported for the data conversion and are not used as locale values (Section 8.3.1, “Basics of encoding”).
For character sets which fit in single byte such as ASCII and ISO-8859 character sets, the character encoding means almost the same thing as the character set.
For character sets with many characters such as JIS X 0213 for Japanese or Universal Character Set (UCS, Unicode, ISO-10646-1) for practically all languages, there are many encoding schemes to fit them into the sequence of the byte data.
For these, there are clear differentiations between the character set and the character encoding.
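To see this difference concretely, you can inspect the bytes that a single character (the euro sign here) becomes under two encodings; od(1) displays the raw bytes.

```shell
$ printf '€' | iconv -f UTF-8 -t ISO-8859-15 | od -An -tx1
 a4
$ printf '€' | od -An -tx1
 e2 82 ac
```

The same abstract character occupies one byte (0xA4) in ISO-8859-15 but three bytes in UTF-8.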
The code page is used as a synonym for the character encoding tables for some vendor specific ones.
Please note most encoding systems share the same code with ASCII for the 7 bit characters. But there are some exceptions. If you are converting old Japanese C programs and URL data from the casually-called shift-JIS encoding format to UTF-8 format, use "CP932
" as the encoding name instead of "shift-JIS
" to get the expected results: 0x5C
→ "\
" and 0x7E
→ "~
" . Otherwise, these are converted to wrong characters.
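This difference can be observed directly with glibc's iconv(1); "\134" is the octal escape for the byte 0x5C, and the "shift-jis" mapping shown here is glibc's behavior.

```shell
$ printf '\134' | iconv -f CP932 -t UTF-8 | od -An -tx1
 5c
$ printf '\134' | iconv -f SHIFT-JIS -t UTF-8 | od -An -tx1
 c2 a5
```

Under CP932, 0x5C stays the backslash (0x5C); under shift-JIS it becomes the yen sign (U+00A5, bytes c2 a5 in UTF-8).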
recode
(1) may be used too and offers more than the combined functionality of iconv
(1), fromdos
(1), todos
(1), frommac
(1), and tomac
(1). For more, see "info recode
".
You can check if a text file is encoded in UTF-8 with iconv
(1) by the following.
$ iconv -f utf8 -t utf8 input.txt >/dev/null || echo "non-UTF-8 found"
Use "--verbose
" option in the above example to find the first non-UTF-8 character.
Here is an example script to convert encoding of file names from ones created under older OS to modern UTF-8 ones in a single directory.
#!/bin/sh
ENCDN=iso-8859-1
for x in *; do
 mv "$x" "$(echo "$x" | iconv -f $ENCDN -t utf-8)"
done
The "$ENCDN
" variable should be set by the encoding value in Table 11.2, “List of encoding values and their usage”.
For a more complicated case, please mount a filesystem (e.g. a partition on a disk drive) containing such file names with the proper encoding as the mount
(8) option (see Section 8.3.6, “Filename encoding”) and copy its entire contents to another filesystem mounted as UTF-8 with "cp -a
" command.
The text file format, specifically the end-of-line (EOL) code, is dependent on the platform.
Table 11.3. List of EOL styles for different platforms
platform | EOL code | control | decimal | hexadecimal |
---|---|---|---|---|
Debian (unix) | LF | ^J | 10 | 0A |
MSDOS and Windows | CR-LF | ^M^J | 13 10 | 0D 0A |
Apple's Macintosh | CR | ^M | 13 | 0D |
The EOL format conversion programs, fromdos
(1), todos
(1), frommac
(1), and tomac
(1), are quite handy. recode
(1) is also useful.
Some data on the Debian system, such as the wiki page data for the python-moinmoin
package, use MSDOS style CR-LF as the EOL code. So the above rule is just a general rule.
Most editors (e.g. vim
, emacs
, gedit
, …) can handle files in MSDOS style EOL transparently.
The use of "sed -e '/\r$/!s/$/\r/'
" instead of todos
(1) is better when you want to unify the EOL style to the MSDOS style from the mixed MSDOS and Unix style. (e.g., after merging 2 MSDOS style files with diff3
(1).) This is because todos
adds CR to all lines.
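The effect can be checked on a small mixed-EOL sample; od(1) shows that both lines end with CR-LF (0d 0a) afterwards.

```shell
$ printf 'a\r\nb\n' | sed -e '/\r$/!s/$/\r/' | od -An -tx1
 61 0d 0a 62 0d 0a
```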
There are a few popular specialized programs to convert the tab codes.
Table 11.4. List of TAB conversion commands from bsdmainutils and coreutils packages

function | bsdmainutils | coreutils |
---|---|---|
expand tab to spaces | "col -x" | expand |
unexpand tab from spaces | "col -h" | unexpand |
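For example, with the default 8-column tab stops, expand(1) turns a leading TAB into 8 spaces and unexpand(1) converts them back; byte counts from wc(1) confirm this.

```shell
$ printf '\tx\n' | expand | wc -c      # TAB expanded to 8 spaces: 8+1+1 bytes
10
$ printf '\tx\n' | expand | unexpand | wc -c
3
```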
indent(1) from the indent package completely reformats whitespace in the C program.
Editor programs such as vim
and emacs
can be used for TAB conversion, too. For example with vim
, you can expand TAB with ":set expandtab
" and ":%retab
" command sequence. You can revert this with ":set noexpandtab
" and ":%retab!
" command sequence.
Intelligent modern editors such as the vim program are quite smart and cope well with any encoding systems and any file formats. You should use these editors under the UTF-8 locale in a UTF-8 capable console for the best compatibility.
An old western European Unix text file, "u-file.txt
", stored in the latin1 (iso-8859-1) encoding can be edited simply with vim
by the following.
$ vim u-file.txt
This is possible since the auto detection mechanism of the file encoding in vim
assumes the UTF-8 encoding first and, if it fails, assumes it to be latin1.
An old Polish Unix text file, "pu-file.txt
", stored in the latin2 (iso-8859-2) encoding can be edited with vim
by the following.
$ vim '+e ++enc=latin2 pu-file.txt'
An old Japanese unix text file, "ju-file.txt
", stored in the eucJP encoding can be edited with vim
by the following.
$ vim '+e ++enc=eucJP ju-file.txt'
An old Japanese MS-Windows text file, "jw-file.txt
", stored in the so called shift-JIS encoding (more precisely: CP932) can be edited with vim
by the following.
$ vim '+e ++enc=CP932 ++ff=dos jw-file.txt'
When a file is opened with "++enc
" and "++ff
" options, ":w
" in the Vim command line stores it in the original format and overwrites the original file. You can also specify the saving format and the file name in the Vim command line, e.g., ":w ++enc=utf8 new.txt
".
Please refer to the mbyte.txt "multi-byte text support" in vim
on-line help and Table 11.2, “List of encoding values and their usage” for locale values used with "++enc
".
The emacs
family of programs can perform the equivalent functions.
The following reads a web page into a text file. This is very useful when copying configurations off the Web or applying basic Unix text tools such as grep
(1) on the web page.
$ w3m -dump http://www.remote-site.com/help-info.html >textfile
Similarly, you can extract plain text data from other formats using the following.
Table 11.5. List of tools to extract plain text data
package | popcon | size | keyword | function |
---|---|---|---|---|
w3m | V:24, I:84 | 1992 | html→text | HTML to text converter with the "w3m -dump" command |
html2text | V:15, I:37 | 248 | html→text | advanced HTML to text converter (ISO 8859-1) |
lynx | I:22 | 252 | html→text | HTML to text converter with the "lynx -dump" command |
elinks | V:2, I:5 | 1448 | html→text | HTML to text converter with the "elinks -dump" command |
links | V:3, I:9 | 1380 | html→text | HTML to text converter with the "links -dump" command |
links2 | V:0.7, I:3 | 3288 | html→text | HTML to text converter with the "links2 -dump" command |
antiword | V:1.3, I:2 | 796 | MSWord→text,ps | convert MSWord files to plain text or ps |
catdoc | V:1.0, I:2 | 2580 | MSWord→text,TeX | convert MSWord files to plain text or TeX |
pstotext | V:0.8, I:1.4 | 148 | ps/pdf→text | extract text from PostScript and PDF files |
unhtml | V:0.02, I:0.14 | 76 | html→text | remove the markup tags from an HTML file |
odt2txt | V:0.8, I:1.4 | 100 | odt→text | converter from OpenDocument Text to text |
wpd2sxw | V:0.02, I:0.13 | 156 | WordPerfect→sxw | WordPerfect to OpenOffice.org/StarOffice writer document converter |
You can highlight and format plain text data by the following.
Table 11.6. List of tools to highlight plain text data
package | popcon | size | keyword | description |
---|---|---|---|---|
vim-runtime | V:3, I:38 | 25864 | highlight | Vim MACRO to convert source code to HTML with ":source $VIMRUNTIME/syntax/html.vim" |
cxref | V:0.05, I:0.4 | 1252 | c→html | converter for the C program to latex and HTML (C language) |
src2tex | V:0.03, I:0.2 | 1968 | highlight | convert many source codes to TeX (C language) |
source-highlight | V:0.14, I:1.1 | 2164 | highlight | convert many source codes to HTML, XHTML, LaTeX, Texinfo, ANSI color escape sequences and DocBook files with highlight (C++) |
highlight | V:0.2, I:1.3 | 756 | highlight | convert many source codes to HTML, XHTML, RTF, LaTeX, TeX or XSL-FO files with highlight (C++) |
grc | V:0.05, I:0.12 | 164 | text→color | generic colouriser for everything (Python) |
txt2html | V:0.08, I:0.5 | 296 | text→html | text to HTML converter (Perl) |
markdown | V:0.07, I:0.4 | 96 | text→html | markdown text document formatter to (X)HTML (Perl) |
asciidoc | V:0.15, I:1.1 | 3028 | text→any | AsciiDoc text document formatter to XML/HTML (Python) |
python-docutils | V:0.4, I:3 | 5740 | text→any | ReStructured Text document formatter to XML (Python) |
txt2tags | V:0.06, I:0.3 | 1028 | text→any | document conversion from text to HTML, SGML, LaTeX, man page, MoinMoin, Magic Point and PageMaker (Python) |
udo | V:0.01, I:0.07 | 556 | text→any | universal document - text processing utility (C language) |
stx2any | V:0.00, I:0.04 | 484 | text→any | document converter from structured plain text to other formats (m4) |
rest2web | V:0.01, I:0.08 | 576 | text→html | document converter from ReStructured Text to html (Python) |
aft | V:0.01, I:0.06 | 340 | text→any | "free form" document preparation system (Perl) |
yodl | V:0.01, I:0.06 | 564 | text→any | pre-document language and tools to process it (C language) |
sdf | V:0.01, I:0.08 | 1940 | text→any | simple document parser (Perl) |
sisu | V:0.01, I:0.07 | 14384 | text→any | document structuring, publishing and search framework (Ruby) |
The Extensible Markup Language (XML) is a markup language for documents containing structured information.
See introductory information at XML.COM.
XML text looks somewhat like HTML. It enables us to manage multiple formats of output for a document. One easy XML system is the docbook-xsl
package, which is used here.
Each XML file starts with the standard XML declaration as the following.
<?xml version="1.0" encoding="UTF-8"?>
The basic syntax for one XML element is marked up as the following.
<name attribute="value">content</name>
An XML element with empty content is marked up in the following short form.
<name attribute="value"/>
The "attribute="value"" part in the above examples is optional.
The comment section in XML is marked up as the following.
<!-- comment -->
Other than adding markups, XML requires minor conversion to the content using predefined entities for the following characters.
Table 11.7. List of predefined entities for XML
predefined entity | character to be converted from |
---|---|
&quot; | " : quote |
&apos; | ' : apostrophe |
&lt; | < : less-than |
&gt; | > : greater-than |
&amp; | & : ampersand |
"<" or "&" cannot be used in attributes or elements.
When SGML style user defined entities, e.g. "&some-tag;", are used, the first definition wins over others. The entity definition is expressed in "<!ENTITY some-tag "entity value">".
As long as the XML markup is done consistently with a certain set of tag names (either with some data as content or as attribute values), conversion to another XML is a trivial task using Extensible Stylesheet Language Transformations (XSLT).
There are many tools available to process XML files such as the Extensible Stylesheet Language (XSL).
Basically, once you create well formed XML file, you can convert it to any format using Extensible Stylesheet Language Transformations (XSLT).
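As a tiny illustration (assuming the xsltproc package is installed; the file name and element name are hypothetical), a stylesheet that extracts the text content of a <greeting> element:

```shell
$ cat >hello.xsl <<'EOF'
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/greeting"><xsl:value-of select="."/></xsl:template>
</xsl:stylesheet>
EOF
$ echo '<greeting>Hello, XML</greeting>' | xsltproc hello.xsl -
Hello, XML
```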
The Extensible Stylesheet Language for Formatting Objects (XSL-FO) is supposed to be a solution for formatting. The fop package is still in the Debian contrib (not main) archive. So LaTeX code is usually generated from XML using XSLT and the LaTeX system is used to create printable files such as DVI, PostScript, and PDF.
Table 11.8. List of XML tools
package | popcon | size | keyword | description |
---|---|---|---|---|
docbook-xml | I:47 | 2488 | xml | XML document type definition (DTD) for DocBook |
xsltproc | V:4, I:46 | 152 | xslt | XSLT command line processor (XML→ XML, HTML, plain text, etc.) |
docbook-xsl | V:0.5, I:7 | 12792 | xml/xslt | XSL stylesheets for processing DocBook XML to various output formats with XSLT |
xmlto | V:0.3, I:2 | 268 | xml/xslt | XML-to-any converter with XSLT |
dblatex | V:0.2, I:2 | 7340 | xml/xslt | convert Docbook files to DVI, PostScript, PDF documents with XSLT |
fop | V:0.3, I:2 | 2280 | xml/xsl-fo | convert Docbook XML files to PDF |
Since XML is subset of Standard Generalized Markup Language (SGML), it can be processed by the extensive tools available for SGML, such as Document Style Semantics and Specification Language (DSSSL).
Table 11.9. List of DSSSL tools

package | popcon | size | keyword | description |
---|---|---|---|---|
openjade | V:0.4, I:3 | 1212 | dsssl | ISO/IEC 10179:1996 standard DSSSL processor (latest) |
openjade1.3 | V:0.02, I:0.14 | 2336 | dsssl | ISO/IEC 10179:1996 standard DSSSL processor (1.3.x series) |
jade | V:0.3, I:2 | 1056 | dsssl | James Clark's original DSSSL processor (1.2.x series) |
docbook-dsssl | V:0.5, I:4 | 3100 | xml/dsssl | DSSSL stylesheets for processing DocBook XML to various output formats with DSSSL |
docbook-utils | V:0.2, I:2 | 440 | xml/dsssl | utilities for DocBook files including conversion to other formats (HTML, RTF, PS, man, PDF) with docbook2* commands with DSSSL |
sgml2x | V:0.00, I:0.06 | 216 | SGML/dsssl | converter from SGML and XML using DSSSL stylesheets |
You can extract HTML or XML data from other formats using the following tools.
Table 11.10. List of XML data extraction tools
package | popcon | size | keyword | description |
---|---|---|---|---|
wv | V:1.3, I:2 | 2116 | MSWord→any | document converter from Microsoft Word to HTML, LaTeX, etc. |
texi2html | V:0.3, I:2 | 2076 | texi→html | converter from Texinfo to HTML |
man2html | V:0.2, I:1.2 | 372 | manpage→html | converter from manpage to HTML (CGI support) |
tex4ht | V:0.3, I:2 | 924 | tex↔html | converter between (La)TeX and HTML |
xlhtml | V:0.5, I:1.1 | 184 | MSExcel→html | converter from MSExcel .xls to HTML |
ppthtml | V:0.5, I:1.1 | 120 | MSPowerPoint→html | converter from MSPowerPoint to HTML |
unrtf | V:0.4, I:0.9 | 224 | rtf→html | document converter from RTF to HTML, etc |
info2www | V:0.6, I:1.2 | 156 | info→html | converter from GNU info to HTML (CGI support) |
ooo2dbk | V:0.03, I:0.16 | 941 | sxw→xml | converter from OpenOffice.org SXW documents to DocBook XML |
wp2x | V:0.01, I:0.07 | 240 | WordPerfect→any | WordPerfect 5.0 and 5.1 files to TeX, LaTeX, troff, GML and HTML |
doclifter | V:0.00, I:0.03 | 424 | troff→xml | converter from troff to DocBook XML |
For non-XML HTML files, you can convert them to XHTML which is an instance of well formed XML. XHTML can be processed by XML tools.
Table 11.11. List of XML pretty print tools
package | popcon | size | keyword | description |
---|---|---|---|---|
libxml2-utils | V:3, I:49 | 160 | xml↔html↔xhtml | command line XML tool with xmllint(1) (syntax check, reformat, lint, …) |
tidy | V:1.0, I:9 | 108 | xml↔html↔xhtml | HTML syntax checker and reformatter |
Once proper XML is generated, you can use XSLT technology to extract data based on the mark-up context etc.
Printable data is expressed in the PostScript format on the Debian system. Common Unix Printing System (CUPS) uses Ghostscript as its rasterizer backend program for non-PostScript printers.
The core of printable data manipulation is the Ghostscript PostScript (PS) interpreter which generates raster image.
The latest upstream Ghostscript from Artifex was re-licensed from AFPL to GPL and merged all the latest ESP version changes, such as CUPS related ones, at the 8.60 release as a unified release.
Table 11.12. List of Ghostscript PostScript interpreters
package | popcon | size | description |
---|---|---|---|
ghostscript | V:18, I:56 | 6716 | The GPL Ghostscript PostScript/PDF interpreter |
ghostscript-x | V:13, I:28 | 220 | GPL Ghostscript PostScript/PDF interpreter - X display support |
gs-cjk-resource | V:0.04, I:0.4 | 4528 | resource files for gs-cjk, Ghostscript CJK-TrueType extension |
cmap-adobe-cns1 | V:0.03, I:0.3 | 1572 | CMaps for Adobe-CNS1 (for traditional Chinese support) |
cmap-adobe-gb1 | V:0.03, I:0.3 | 1552 | CMaps for Adobe-GB1 (for simplified Chinese support) |
cmap-adobe-japan1 | V:0.08, I:0.7 | 2428 | CMaps for Adobe-Japan1 (for Japanese standard support) |
cmap-adobe-japan2 | I:0.4 | 416 | CMaps for Adobe-Japan2 (for Japanese extra support) |
cmap-adobe-korea1 | V:0.01, I:0.19 | 872 | CMaps for Adobe-Korea1 (for Korean support) |
libpoppler5 | V:4, I:21 | 2368 | PDF rendering library based on xpdf PDF viewer |
libpoppler-glib4 | V:7, I:19 | 504 | PDF rendering library (GLib-based shared library) |
poppler-data | I:3 | 12232 | CMaps for PDF rendering library (for CJK support: Adobe-*) |
"gs -h" can display the configuration of Ghostscript.
You can merge two PostScript (PS) or Portable Document Format (PDF) files using gs(1) of Ghostscript.
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pswrite -sOutputFile=bla.ps -f foo1.ps foo2.ps
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=bla.pdf -f foo1.pdf foo2.pdf
PDF, which is a widely used cross-platform printable data format, is essentially a compressed PS format with a few additional features and extensions.
For the command line, psmerge(1) and other commands from the psutils package are useful for manipulating PostScript documents. Commands in the pdfjam package work similarly for manipulating PDF documents. pdftk(1) from the pdftk package is useful for manipulating PDF documents, too.
The following packages for the printable data utilities caught my eyes.
Table 11.13. List of printable data utilities
package | popcon | size | keyword | description |
---|---|---|---|---|
poppler-utils * | V:8, I:49 | 536 | pdf→ps,text,… | PDF utilities: pdftops, pdfinfo, pdfimages, pdftotext, pdffonts |
psutils * | V:3, I:21 | 380 | ps→ps | PostScript document conversion tools |
poster * | V:1.2, I:9 | 80 | ps→ps | create large posters out of PostScript pages |
xpdf-utils * | V:0.9, I:4 | 76 | pdf→ps,text,… | PDF utilities: pdftops, pdfinfo, pdfimages, pdftotext, pdffonts |
enscript * | V:1.6, I:14 | 2464 | text→ps, html, rtf | convert ASCII text to PostScript, HTML, RTF or Pretty-Print |
a2ps * | V:1.7, I:8 | 4292 | text→ps | 'Anything to PostScript' converter and pretty-printer |
pdftk * | V:1.0, I:5 | 200 | pdf→pdf | PDF document conversion tool: pdftk |
mpage * | V:0.18, I:1.5 | 224 | text,ps→ps | print multiple pages per sheet |
html2ps * | V:0.2, I:1.7 | 260 | html→ps | converter from HTML to PostScript |
pdfjam * | V:0.2, I:1.8 | 228 | pdf→pdf | PDF document conversion tools: pdf90, pdfjoin, and pdfnup |
gnuhtml2latex * | V:0.07, I:0.6 | 60 | html→latex | converter from HTML to LaTeX |
latex2rtf * | V:0.14, I:1.0 | 508 | latex→rtf | convert documents from LaTeX to RTF which can be read by MS Word |
ps2eps * | V:1.3, I:12 | 116 | ps→eps | converter from PostScript to EPS (Encapsulated PostScript) |
e2ps * | V:0.01, I:0.10 | 188 | text→ps | Text to PostScript converter with Japanese encoding support |
impose+ * | V:0.03, I:0.2 | 180 | ps→ps | PostScript utilities |
trueprint * | V:0.02, I:0.13 | 188 | text→ps | pretty print many source codes (C, C++, Java, Pascal, Perl, Pike, Sh, and Verilog) to PostScript (C language) |
pdf2svg * | V:0.10, I:0.5 | 60 | ps→svg | converter from PDF to Scalable Vector Graphics format |
pdftoipe * | V:0.02, I:0.16 | 88 | ps→ipe | converter from PDF to IPE's XML format |
Both the lp(1) and lpr(1) commands offered by the Common Unix Printing System (CUPS) provide options for customized printing of the printable data.
You can print 3 copies of a file collated using one of the following commands.
$ lp -n 3 -o Collate=True filename
$ lpr -#3 -o Collate=True filename
You can further customize printer operation by using printer options such as "-o number-up=2", "-o page-set=even", "-o page-set=odd", "-o scaling=200", "-o natural-scaling=200", etc., documented at Command-Line Printing and Options.
The Unix troff program, originally developed by AT&T, can be used for simple typesetting. It is usually used to create manpages.
TeX, created by Donald Knuth, is a very powerful typesetting tool and is the de facto standard. LaTeX, originally written by Leslie Lamport, enables high-level access to the power of TeX.
Table 11.14. List of typesetting tools
package | popcon | size | keyword | description |
---|---|---|---|---|
texlive * | V:0.5, I:9 | 124 | (La)TeX | TeX system for typesetting, previewing and printing |
groff * | V:0.9, I:7 | 9116 | troff | GNU troff text-formatting system |
Traditionally, roff is the main Unix text processing system. See roff(7), groff(7), groff(1), grotty(1), troff(1), groff_mdoc(7), groff_man(7), groff_ms(7), groff_me(7), groff_mm(7), and "info groff".
You can read or print a good tutorial and reference on the "-me" macro in "/usr/share/doc/groff/" by installing the groff package.
"groff -Tascii -me -" produces plain text output with ANSI escape codes. If you wish to get manpage-like output with many "^H" and "_", use "GROFF_NO_SGR=1 groff -Tascii -me -" instead.
To remove "^H" and "_" from a text file generated by groff, filter it through "col -b -x".
The TeX Live software distribution offers a complete TeX system. The texlive metapackage provides a decent selection of the TeX Live packages which should suffice for the most common tasks.
There are many references available for TeX and LaTeX.
tex(1)
latex(1)
This is the most powerful typesetting environment. Many SGML processors use this as their back end text processor. LyX, provided by the lyx package, and GNU TeXmacs, provided by the texmacs package, offer a nice WYSIWYG editing environment for LaTeX, while many use Emacs and Vim as their choice of source editor.
There are many online resources available, e.g. the TeX Live documentation ("/usr/share/doc/texlive-doc-base/english/texlive-en/live.html") from the texlive-doc-base package.
When documents become bigger, sometimes TeX may cause errors. You must increase the pool size in "/etc/texmf/texmf.cnf" (or more appropriately edit "/etc/texmf/texmf.d/95NonPath" and run update-texmf(8)) to fix this.
The TeX source of "The TeXbook" is available at http://tug.ctan.org/tex-archive/systems/knuth/dist/tex/texbook.tex.
This file contains most of the required macros. I heard that you can process this document with tex
(1) after commenting lines 7 to 10 and adding "\input manmac \proofmodefalse
". It's strongly recommended to buy this book (and all other books from Donald E. Knuth) instead of using the online version but the source is a great example of TeX input!
You can print a manual page in PostScript nicely by one of the following commands.
$ man -Tps some_manpage | lpr
$ man -Tps some_manpage | mpage -2 | lpr
The second example prints 2 pages on one sheet.
Although writing a manual page (manpage) in the plain troff format is possible, there are a few helper packages for creating one.
Table 11.15. List of packages to help creating the manpage
package | popcon | size | keyword | description |
---|---|---|---|---|
docbook-to-man * | V:0.3, I:2 | 240 | SGML→manpage | converter from DocBook SGML into roff man macros |
help2man * | V:0.13, I:1.1 | 376 | text→manpage | automatic manpage generator from --help |
info2man * | V:0.02, I:0.15 | 204 | info→manpage | converter from GNU info to POD or man pages |
txt2man * | V:0.02, I:0.2 | 88 | text→manpage | convert flat ASCII text to man page format |
The following packages for the mail data conversion caught my eyes.
Table 11.16. List of packages to help mail data conversion
package | popcon | size | keyword | description |
---|---|---|---|---|
sharutils * | V:2, I:32 | 904 | shar(1), unshar(1), uuencode(1), uudecode(1) | |
mpack * | V:1.5, I:23 | 84 | MIME | encoder and decoder of MIME messages: mpack(1) and munpack(1) |
tnef * | V:0.8, I:1.5 | 164 | ms-tnef | unpacking MIME attachments of type "application/ms-tnef" which is a Microsoft only format |
uudeview * | V:0.17, I:1.6 | 132 | encoder and decoder for the following formats: uuencode, xxencode, BASE64, quoted printable, and BinHex | |
readpst * | V:0.04, I:0.3 | 228 | PST | convert Microsoft Outlook PST files to mbox format |
The Internet Message Access Protocol version 4 (IMAP4) server (see Section 6.7, “POP3/IMAP4 server”) may be used to move mails out of proprietary mail systems if the mail client software can be configured to use an IMAP4 server too.
Mail (SMTP) data should be limited to 7 bit. So binary data and 8 bit text data are encoded into 7 bit format with the Multipurpose Internet Mail Extensions (MIME) and the selection of the charset (see Section 8.3.1, “Basics of encoding”).
The standard mail storage format is mbox, formatted according to RFC2822 (updated RFC822). See mbox(5) (provided by the mutt package).
For European languages, "Content-Transfer-Encoding: quoted-printable" with the ISO-8859-1 charset is usually used for mail since there are not many 8 bit characters. If European text is encoded in UTF-8, "Content-Transfer-Encoding: quoted-printable" is likely to be used since it is mostly 7 bit data.
For Japanese, traditionally "Content-Type: text/plain; charset=ISO-2022-JP" is usually used for mail to keep text in 7 bits. But older Microsoft systems may send mail data in Shift-JIS without proper declaration. If Japanese text is encoded in UTF-8, Base64 is likely to be used since it contains much 8 bit data. The situation of other Asian languages is similar.
If your non-Unix mail data is accessible by a non-Debian client software which can talk to the IMAP4 server, you may be able to move them out by running your own IMAP4 server (see Section 6.7, “POP3/IMAP4 server”).
If you use other mail storage formats, moving them to the mbox format is a good first step. A versatile client program such as mutt(1) may be handy for this.
You can split mailbox contents into individual messages using procmail(1) and formail(1).
Each mail message can be unpacked using munpack(1) from the mpack package (or other specialized tools) to obtain the MIME encoded contents.
The following packages for the graphic data conversion, editing, and organization tools caught my eyes.
Table 11.17. List of graphic data tools
package | popcon | size | keyword | description |
---|---|---|---|---|
gimp * | V:12, I:44 | 13560 | image(bitmap) | GNU Image Manipulation Program |
imagemagick * | V:13, I:35 | 268 | image(bitmap) | image manipulation programs |
graphicsmagick * | V:1.6, I:3 | 4532 | image(bitmap) | image manipulation programs (fork of imagemagick) |
xsane * | V:5, I:36 | 748 | image(bitmap) | GTK+-based X11 frontend for SANE (Scanner Access Now Easy) |
netpbm * | V:4, I:29 | 4612 | image(bitmap) | graphics conversion tools |
icoutils * | V:0.3, I:1.3 | 200 | png↔ico(bitmap) | convert MS Windows icons and cursors to and from PNG formats (favicon.ico) |
scribus * | V:0.5, I:3 | 26888 | ps/pdf/SVG/… | Scribus DTP editor |
openoffice.org-draw * | V:18, I:40 | 10720 | image(vector) | OpenOffice.org office suite - drawing |
inkscape * | V:15, I:32 | 87436 | image(vector) | SVG (Scalable Vector Graphics) editor |
dia-gnome * | V:1.4, I:2 | 576 | image(vector) | diagram editor (GNOME) |
dia * | V:3, I:5 | 572 | image(vector) | diagram editor (Gtk) |
xfig * | V:2, I:4 | 1676 | image(vector) | facility for Interactive Generation of figures under X11 |
pstoedit * | V:1.9, I:16 | 708 | ps/pdf→image(vector) | PostScript and PDF files to editable vector graphics converter (SVG) |
libwmf-bin * | V:1.4, I:13 | 68 | Windows/image(vector) | Windows metafile (vector graphic data) conversion tools |
fig2sxd * | V:0.03, I:0.2 | 200 | fig→sxd(vector) | convert XFig files to OpenOffice.org Draw format |
unpaper * | V:0.2, I:1.7 | 736 | image→image | post-processing tool for scanned pages for OCR |
tesseract-ocr * | V:0.7, I:3 | 3196 | image→text | free OCR software based on the HP's commercial OCR engine |
tesseract-ocr-eng * | V:0.2, I:2 | 1752 | image→text | OCR engine data: tesseract-ocr language files for English text |
gocr * | V:0.8, I:5 | 492 | image→text | free OCR software |
ocrad * | V:0.4, I:4 | 364 | image→text | free OCR software |
gtkam * | V:0.3, I:1.7 | 1100 | image(Exif) | manipulate digital camera photo files (GNOME) - GUI |
gphoto2 * | V:0.3, I:2 | 1008 | image(Exif) | manipulate digital camera photo files (GNOME) - command line |
kamera * | V:0.7, I:13 | 312 | image(Exif) | manipulate digital camera photo files (KDE) |
jhead * | V:0.5, I:3 | 132 | image(Exif) | manipulate the non-image part of Exif compliant JPEG (digital camera photo) files |
exif * | V:0.2, I:1.7 | 184 | image(Exif) | command-line utility to show EXIF information in JPEG files |
exiftags * | V:0.14, I:0.9 | 248 | image(Exif) | utility to read Exif tags from a digital camera JPEG file |
exiftran * | V:0.4, I:3 | 56 | image(Exif) | transform digital camera jpeg images |
exifprobe * | V:0.08, I:0.5 | 484 | image(Exif) | read metadata from digital pictures |
dcraw * | V:0.9, I:5 | 444 | image(Raw)→ppm | decode raw digital camera images |
findimagedupes * | V:0.06, I:0.4 | 140 | image→fingerprint | find visually similar or duplicate images |
ale * | V:0.02, I:0.17 | 768 | image→image | merge images to increase fidelity or create mosaics |
imageindex * | V:0.03, I:0.2 | 192 | image(Exif)→html | generate static HTML galleries from images |
f-spot * | V:0.5, I:1.8 | 9488 | image(Exif) | personal photo management application (GNOME) |
bins * | V:0.02, I:0.15 | 2008 | image(Exif)→html | generate static HTML photo albums using XML and EXIF tags |
gallery2 * | V:0.2, I:0.4 | 62548 | image(Exif)→html | generate browsable HTML photo albums with thumbnails |
outguess * | V:0.02, I:0.14 | 252 | jpeg,png | universal Steganographic tool |
qcad * | V:1.5, I:2 | 3944 | DXF | CAD data editor (KDE) |
blender * | V:0.5, I:3 | 28336 | blend, TIFF, VRML, … | 3D content editor for animation etc |
mm3d * | V:0.04, I:0.3 | 4536 | ms3d, obj, dxf, … | OpenGL based 3D model editor |
open-font-design-toolkit * | I:0.03 | 36 | ttf, ps, … | metapackage for open font design |
fontforge * | V:0.2, I:1.7 | 6612 | ttf, ps, … | font editor for PS, TrueType and OpenType fonts |
xgridfit * | V:0.01, I:0.07 | 1060 | ttf | program for gridfitting and hinting TrueType fonts |
gbdfed * | V:0.01, I:0.11 | 496 | bdf | editor for BDF fonts |
Search for more image tools using the regex "~Gworks-with::image" in aptitude(8) (see Section 2.2.6, “Search method options with aptitude”).
Although GUI programs such as gimp(1) are very powerful, command line tools such as those of imagemagick(1) are quite useful for automating image manipulation via scripts.
The de facto image file format of the digital camera is the Exchangeable Image File Format (EXIF) which is the JPEG image file format with additional metadata tags. It can hold information such as date, time, and camera settings.
The Lempel-Ziv-Welch (LZW) lossless data compression patent has expired. Graphics Interchange Format (GIF) utilities which use the LZW compression method are now freely available on the Debian system.
Any digital camera or scanner with removable recording media works with Linux through USB storage readers since it follows the Design rule for Camera Filesystem and uses FAT filesystem. See Section 10.1.10, “Removable storage device”.
There are many other programs for converting data. The following packages caught my eyes using the regex "~Guse::converting" in aptitude(8) (see Section 2.2.6, “Search method options with aptitude”).
Table 11.18. List of miscellaneous data conversion tools
package | popcon | size | keyword | description |
---|---|---|---|---|
alien * | V:1.2, I:11 | 244 | rpm/tgz→deb | converter for the foreign package into the Debian package |
freepwing * | V:0.00, I:0.03 | 568 | EB→EPWING | converter from "Electric Book" (popular in Japan) to a single JIS X 4081 format (a subset of the EPWING V1) |
You can also extract data from RPM format with the following.
$ rpm2cpio file.src.rpm | cpio --extract
I provide some pointers for people to learn enough programming on the Debian system to trace the packaged source code. Here are notable packages and corresponding documentation packages for programming.
Table 12.1. List of packages to help programming
package | popcon | size | documentation |
---|---|---|---|
autoconf * | V:4, I:25 | 2256 | "info autoconf" provided by autoconf-doc |
automake * | V:3, I:21 | 1812 | "info automake" provided by automake1.10-doc |
bash * | V:91, I:99 | 3536 | "info bash" provided by bash-doc |
bison * | V:2, I:15 | 1504 | "info bison" provided by bison-doc |
cpp * | V:38, I:82 | 32 | "info cpp" provided by cpp-doc |
ddd * | V:0.3, I:2 | 3852 | "info ddd" provided by ddd-doc |
exuberant-ctags * | V:1.2, I:5 | 284 | exuberant-ctags(1) |
flex * | V:2, I:15 | 1352 | "info flex" provided by flex-doc |
gawk * | V:28, I:32 | 2172 | "info gawk" provided by gawk-doc |
gcc * | V:17, I:67 | 28 | "info gcc" provided by gcc-doc |
gdb * | V:4, I:22 | 4812 | "info gdb" provided by gdb-doc |
gettext * | V:8, I:46 | 7272 | "info gettext" provided by gettext-doc |
gfortran * | V:0.9, I:6 | 8 | "info gfortran" provided by gfortran-doc (Fortran 95) |
gpc * | V:0.07, I:0.5 | 8 | "info gpc" provided by gpc-doc (Pascal) |
fpc * | I:0.4 | 40 | fpc(1) and HTML by fp-docs (Pascal) |
glade * | V:0.3, I:2 | 1652 | help provided via menu (UI Builder) |
glade-gnome * | V:0.09, I:1.2 | 508 | help provided via menu (UI Builder) |
libc6 * | V:97, I:99 | 10012 | "info libc" provided by glibc-doc and glibc-doc-reference |
make * | V:21, I:72 | 1220 | "info make" provided by make-doc |
xutils-dev * | V:1.7, I:15 | 1728 | imake(1), xmkmf(1), etc. |
mawk * | V:66, I:99 | 244 | mawk(1) |
perl * | V:88, I:99 | 18528 | perl(1) and HTML pages provided by perl-doc and perl-doc-html |
python * | V:62, I:97 | 736 | python(1) and HTML pages provided by python-doc |
tcl8.4 * | V:8, I:46 | 3332 | tcl(3) and detailed manual pages provided by tcl8.4-doc |
tk8.4 * | V:5, I:34 | 2712 | tk(3) and detailed manual pages provided by tk8.4-doc |
ruby * | V:9, I:24 | 120 | ruby(1) and interactive reference provided by ri |
vim * | V:15, I:33 | 1792 | help(F1) menu provided by vim-doc |
susv2 * | I:0.03 | 48 | fetch "The Single Unix Specifications v2" |
susv3 * | I:0.07 | 48 | fetch "The Single Unix Specifications v3" |
Online references are available by typing "man name" after installing the manpages and manpages-dev packages. Online references for the GNU tools are available by typing "info program_name" after installing the pertinent documentation packages. You may need to include the contrib and non-free archives in addition to the main archive since some GFDL documentation is not considered to be DFSG compliant.
Do not use "test" as the name of an executable test file. "test" is a shell builtin.
You should install software programs directly compiled from source into "/usr/local" or "/opt" to avoid collision with system programs.
Code examples for generating the song "99 Bottles of Beer" should give you a good idea of practically all the programming languages.
A shell script is a text file with the execution bit set which contains commands in the following format.
#!/bin/sh
 ... command lines
The first line specifies the shell interpreter which reads and executes this file's contents.
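The idea above can be sketched as a minimal script; the filename "hello.sh" and its greeting text are illustrative only.

```shell
# Create a minimal shell script (the name "hello.sh" is just an example)
cat > hello.sh << 'EOF'
#!/bin/sh
# Greet the name given as the first argument, defaulting to "world"
echo "Hello, ${1:-world}!"
EOF
# Set the execution bit so the kernel honors the "#!/bin/sh" line
chmod +x hello.sh
./hello.sh          # -> Hello, world!
./hello.sh Debian   # -> Hello, Debian!
```

Without the execution bit, the script can still be run explicitly as "sh hello.sh".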
Reading shell scripts is the best way to understand how a Unix-like system works. Here, I give some pointers and reminders for shell programming. See "Shell Mistakes" (http://www.greenend.org.uk/rjk/2001/04/shell.html) to learn from mistakes.
Unlike shell interactive mode (see Section 1.5, “The simple shell command” and Section 1.6, “Unix-like text processing”), shell scripts frequently use parameters, conditionals, and loops.
Many system scripts may be interpreted by any one of the POSIX shells (see Table 1.13, “List of shell programs”). The default shell for the system is "/bin/sh", which is a symlink pointing to the actual program.
bash(1) for lenny or older
dash(1) for squeeze or newer
Avoid writing a shell script with bashisms or zshisms to make it portable among all POSIX shells. You can check it using checkbashisms(1).
Table 12.2. List of typical bashisms
Good: POSIX | Avoid: bashism |
---|---|
if [ "$foo" = "$bar" ] ; then … | if [ "$foo" == "$bar" ] ; then … |
diff -u file.c.orig file.c | diff -u file.c{.orig,} |
mkdir /foobar /foobaz | mkdir /foo{bar,baz} |
funcname() { … } | function funcname() { … } |
octal format: "\377" | hexadecimal format: "\xff" |
The "echo" command must be used with the following care since its implementation differs among shell builtins and external commands.
Avoid using options such as "-e" and "-E".
Avoid using any option except "-n".
Although the "-n" option is not really POSIX syntax, it is generally accepted.
Use the "printf" command instead of the "echo" command if you need to embed escape sequences in the output string.
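A short sketch of why "printf" is the portable choice; the sample strings are illustrative only.

```shell
# printf(1) handles escape sequences consistently across shells,
# unlike echo whose "-e"/"-n" handling varies by implementation
printf 'col1\tcol2\n'        # \t and \n are interpreted reliably
printf '%s\n' '-n'           # prints the literal string "-n" safely
printf '%05d\n' 42           # zero-padded number: 00042
```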
Special shell parameters are frequently used in the shell script.
Table 12.3. List of shell parameters
shell parameter | value |
---|---|
$0 | name of the shell or shell script |
$1 | first(1) shell argument |
$9 | ninth(9) shell argument |
$# | number of positional parameters |
"$*" | "$1 $2 $3 $4 …" |
"$@" | "$1" "$2" "$3" "$4" … |
$? | exit status of the most recent command |
$$ | PID of this shell script |
$! | PID of most recently started background job |
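The table above can be exercised with a throwaway script; the filename "params.sh" is illustrative only.

```shell
# Demonstrate special shell parameters inside a small script
cat > params.sh << 'EOF'
#!/bin/sh
echo "script: $0"     # name of the script
echo "first:  $1"     # first argument
echo "count:  $#"     # number of positional parameters
echo "all:    $*"     # all arguments joined as one string
EOF
sh params.sh foo bar
# script: params.sh
# first:  foo
# count:  2
# all:    foo bar
false || true
echo "status: $?"     # exit status of the most recent command -> 0
```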
Basic parameter expansions to remember are the following.
Table 12.4. List of shell parameter expansions
parameter expression form | value if var is set | value if var is not set |
---|---|---|
${var:-string} | "$var" | "string" |
${var:+string} | "string" | "null" |
${var:=string} | "$var" | "string" (and run "var=string") |
${var:?string} | "$var" | echo "string" to stderr (and exit with error) |
Here, the colon ":" in all of these operators is actually optional.
with ":" — the operator tests for the variable existing and not null
without ":" — the operator tests for the variable existing only
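A minimal sketch of the difference the colon makes; the variable names are illustrative only.

```shell
var=""
unset novar
# ":-" tests for "set and not null": an empty var falls back to the default
echo "${var:-default}"      # -> default
echo "${novar:-default}"    # -> default
# "-" (no colon) tests for "set" only: the empty-but-set var is kept
echo "${var-default}"       # -> (empty line)
# ":=" also assigns the fallback value to the variable
echo "${novar:=assigned}"   # -> assigned
echo "$novar"               # -> assigned
```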
Table 12.5. List of key shell parameter substitutions
parameter substitution form | result |
---|---|
${var%suffix} | remove smallest suffix pattern |
${var%%suffix} | remove largest suffix pattern |
${var#prefix} | remove smallest prefix pattern |
${var##prefix} | remove largest prefix pattern |
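These substitutions are handy for path handling; the sample path below is illustrative only.

```shell
path="/var/log/syslog.1.gz"
echo "${path%.gz}"    # remove smallest suffix  -> /var/log/syslog.1
echo "${path%%.*}"    # remove largest suffix   -> /var/log/syslog
echo "${path#*/}"     # remove smallest prefix  -> var/log/syslog.1.gz
echo "${path##*/}"    # remove largest prefix   -> syslog.1.gz (like basename)
```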
Each command returns an exit status which can be used for conditional expressions.
"0" in the shell conditional context means "True", while "0" in the C conditional context means "False".
"[" is the equivalent of the test command, which evaluates its arguments up to "]" as a conditional expression.
Basic conditional idioms to remember are the following.
"<command> && <if_success_run_this_command_too> || true"
"<command> || <if_not_success_run_this_command_too> || true"
if [ <conditional_expression> ]; then
 <if_success_run_this_command>
else
 <if_not_success_run_this_command>
fi
Here, the trailing "|| true" is needed to ensure that this shell script does not exit at this line accidentally when the shell is invoked with the "-e" flag.
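A minimal sketch combining the idioms above; the "/nonexistent" path is illustrative only.

```shell
#!/bin/sh -e
# With "-e", a failing command would abort the script; the trailing
# "|| true" keeps the failed test from being fatal.
[ -d /nonexistent ] && echo "directory exists" || true
echo "still running"
# The full if/else form of the same test:
if [ -d /nonexistent ]; then
    echo "directory exists"
else
    echo "no such directory"
fi
```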
Table 12.6. List of file comparison operators in the conditional expression
equation | condition to return logical true |
---|---|
-e <file> | <file> exists |
-d <file> | <file> exists and is a directory |
-f <file> | <file> exists and is a regular file |
-w <file> | <file> exists and is writable |
-x <file> | <file> exists and is executable |
<file1> -nt <file2> | <file1> is newer than <file2> (modification) |
<file1> -ot <file2> | <file1> is older than <file2> (modification) |
<file1> -ef <file2> | <file1> and <file2> are on the same device and the same inode number |
Table 12.7. List of string comparison operators in the conditional expression
equation | condition to return logical true |
---|---|
-z <str> | the length of <str> is zero |
-n <str> | the length of <str> is non-zero |
<str1> = <str2> | <str1> and <str2> are equal |
<str1> != <str2> | <str1> and <str2> are not equal |
<str1> < <str2> | <str1> sorts before <str2> (locale dependent) |
<str1> > <str2> | <str1> sorts after <str2> (locale dependent) |
Arithmetic integer comparison operators in the conditional expression are "-eq", "-ne", "-lt", "-le", "-gt", and "-ge".
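A small sketch contrasting string and integer comparison; the values are illustrative only.

```shell
count=5
if [ "$count" -gt 3 ] && [ "$count" -le 10 ]; then
    echo "count is between 4 and 10"
fi
# "=" compares text while "-eq" compares integer values
[ "010" = "10" ]   && echo "equal strings" || true  # not printed: texts differ
[ "010" -eq "10" ] && echo "equal numbers"          # printed: both are ten
```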
There are several loop idioms to use in POSIX shell.
"for x in foo1 foo2 … ; do command ; done" loops by assigning items from the list "foo1 foo2 …" to variable "x" and executing "command".
"while condition ; do command ; done" repeats "command" while "condition" is true.
"until condition ; do command ; done" repeats "command" while "condition" is not true.
"break" enables exiting from the loop.
"continue" enables resuming the next iteration of the loop.
The C-language like numeric iteration can be realized by using seq(1) as the "foo1 foo2 …" generator.
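The loop idioms above can be sketched as follows; the list items and counts are illustrative only.

```shell
# for loop over a fixed list
for x in foo1 foo2 foo3; do
    echo "item: $x"
done
# C-like numeric iteration via seq(1)
for i in $(seq 1 3); do
    echo "i=$i"
done
# while loop counting down with POSIX arithmetic expansion
n=3
while [ "$n" -gt 0 ]; do
    echo "n=$n"
    n=$((n - 1))
done
```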
The shell processes a script roughly in the following sequence.
The shell reads a line.
The shell groups part of the line as one token if it is within "…" or '…'.
The shell splits the other parts of the line into tokens by the following.
whitespace: <space> <tab> <newline>
metacharacters: < > | ; & ( )
The shell checks the reserved word for each token to adjust its behavior if not within "…" or '…'.
reserved words: if then elif else fi for in while unless do done case esac
The shell expands aliases if not within "…" or '…'.
The shell expands the tilde if not within "…" or '…'.
"~" → current user's home directory
"~<user>" → <user>'s home directory
The shell expands a parameter to its value if not within '…'.
"$PARAMETER" or "${PARAMETER}"
The shell expands command substitution if not within '…'.
"$( command )" → the output of "command"
"` command `" → the output of "command"
The shell expands a pathname glob to matching file names if not within "…" or '…'.
"*" → any characters
"?" → one character
"[…]" → any one of the characters in "…"
The shell looks up the command from the following and executes it.
"$PATH"
Single quotes within double quotes have no effect.
Executing "set -x" in the shell or invoking the shell with the "-x" option makes the shell print all of the commands executed. This is quite handy for debugging.
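A minimal sketch of tracing with "-x"; the command string and log filename are illustrative only.

```shell
# Trace a one-liner; the trace goes to stderr, each line prefixed with "+ "
sh -x -c 'msg=hello; echo "$msg"' 2>trace.log
cat trace.log
# + msg=hello
# + echo hello
```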
In order to make your shell program as portable as possible across Debian systems, it is a good idea to limit utility programs to ones provided by essential packages.
"aptitude search ~E" lists essential packages.
"dpkg -L <package_name> | grep '/man/man.*/'" lists manpages for commands offered by the <package_name> package.
Table 12.8. List of packages containing small utility programs for shell scripts
package | popcon | size | description |
---|---|---|---|
coreutils * | V:92, I:99 | 13828 | GNU core utilities |
debianutils * | V:93, I:99 | 260 | miscellaneous utilities specific to Debian |
bsdmainutils * | V:81, I:99 | 768 | collection of more utilities from FreeBSD |
bsdutils * | V:77, I:99 | 196 | basic utilities from 4.4BSD-Lite |
moreutils * | V:0.3, I:1.5 | 220 | additional Unix utilities |
Although moreutils may not exist outside of Debian, it offers interesting small programs. The most notable one is sponge(8). See Section 1.6.4, “Global substitution with regular expressions”.
The user interface of a simple shell program can be improved from dull interaction with the echo and read commands to a more interactive one by using one of the so-called dialog programs.
Table 12.9. List of user interface programs
package | popcon | size | description |
---|---|---|---|
x11-utils * | V:26, I:53 | 652 | xmessage(1): display a message or query in a window (X) |
whiptail * | V:42, I:99 | 104 | displays user-friendly dialog boxes from shell scripts (newt) |
dialog * | V:4, I:25 | 1592 | displays user-friendly dialog boxes from shell scripts (ncurses) |
zenity * | V:8, I:41 | 4992 | display graphical dialog boxes from shell scripts (gtk2.0) |
ssft * | V:0.01, I:0.11 | 152 | Shell Scripts Frontend Tool (wrapper for zenity, kdialog, and dialog with gettext) |
gettext * | V:8, I:46 | 7272 | "/usr/bin/gettext.sh": translate message |
Here is a simple script which creates an ISO image with RS02 data supplemented by dvdisaster(1).
#!/bin/sh -e
# gmkrs02 : Copyright (C) 2007 Osamu Aoki <osamu@debian.org>, Public Domain
#set -x
error_exit()
{
  echo "$1" >&2
  exit 1
}
# Initialize variables
DATA_ISO="$HOME/Desktop/iso-$$.img"
LABEL=$(date +%Y%m%d-%H%M%S-%Z)
if [ $# != 0 ] && [ -d "$1" ]; then
  DATA_SRC="$1"
else
  # Select directory for creating ISO image from folder on desktop
  DATA_SRC=$(zenity --file-selection --directory \
    --title="Select the directory tree root to create ISO image") \
    || error_exit "Exit on directory selection"
fi
# Check size of archive
xterm -T "Check size $DATA_SRC" -e du -s $DATA_SRC/*
SIZE=$(($(du -s $DATA_SRC | awk '{print $1}')/1024))
if [ $SIZE -le 520 ] ; then
  zenity --info --title="Dvdisaster RS02" --width 640 --height 400 \
    --text="The data size is good for CD backup:\\n $SIZE MB"
elif [ $SIZE -le 3500 ]; then
  zenity --info --title="Dvdisaster RS02" --width 640 --height 400 \
    --text="The data size is good for DVD backup :\\n $SIZE MB"
else
  zenity --info --title="Dvdisaster RS02" --width 640 --height 400 \
    --text="The data size is too big to backup : $SIZE MB"
  error_exit "The data size is too big to backup :\\n $SIZE MB"
fi
# only xterm is sure to have working -e option
# Create raw ISO image
rm -f "$DATA_ISO" || true
xterm -T "genisoimage $DATA_ISO" \
  -e genisoimage -r -J -V "$LABEL" -o "$DATA_ISO" "$DATA_SRC"
# Create RS02 supplemental redundancy
xterm -T "dvdisaster $DATA_ISO" -e dvdisaster -i "$DATA_ISO" -mRS02 -c
zenity --info --title="Dvdisaster RS02" --width 640 --height 400 \
  --text="ISO/RS02 data ($SIZE MB) \\n created at: $DATA_ISO"
# EOF
You may wish to create a launcher on the desktop with its command set to something like "/usr/local/bin/gmkrs02 %d".
Make is a utility to maintain groups of programs. Upon execution of make(1), make reads the rule file, "Makefile", and updates a target if it depends on prerequisite files that have been modified since the target was last modified, or if the target does not exist. The execution of these updates may occur concurrently.
The rule file syntax is the following.
target: [ prerequisites ... ]
 [TAB] command1
 [TAB] -command2 # ignore errors
 [TAB] @command3 # suppress echoing
Here, "[TAB]" is a TAB code. Each line is interpreted by the shell after make variable substitution. Use "\" at the end of a line to continue the script. Use "$$" to enter "$" for environment values of a shell script.
Implicit rules for the target and prerequisites can be written, for example, by the following.
%.o: %.c header.h
Here, the target contains the character "%" (exactly one of them). The "%" can match any nonempty substring in the actual target filenames. The prerequisites likewise use "%" to show how their names relate to the actual target name.
Table 12.10. List of make automatic variables
automatic variable | value |
---|---|
$@ | target |
$< | first prerequisite |
$? | all newer prerequisites |
$^ | all prerequisites |
$* | "%" matched stem in the target pattern |
Table 12.11. List of make variable expansions
variable expansion | description |
---|---|
foo1 := bar | one-time expansion |
foo2 = bar | recursive expansion |
foo3 += bar | append |
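The two tables above can be combined into a sketch of a hypothetical "Makefile" fragment; the names CC, CFLAGS, program, and the source files are illustrative only, and each command line must begin with a real TAB character.

```makefile
CC := gcc                 # one-time expansion
CFLAGS = -g -Wall         # recursive expansion
CFLAGS += -O2             # append

# implicit pattern rule: "%" matches the stem shared by target and prerequisite
%.o: %.c header.h
	$(CC) $(CFLAGS) -c -o $@ $<   # $@ = target, $< = first prerequisite

program: main.o util.o
	$(CC) -o $@ $^                # $^ = all prerequisites
```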
Run "make -p -f/dev/null" to see the automatic internal rules.
You can set up proper environment to compile programs written in the C programming language by the following.
# apt-get install glibc-doc manpages-dev libc6-dev gcc build-essential
The libc6-dev package, i.e., the GNU C Library, provides the C standard library, which is a collection of header files and library routines used by the C programming language.
See the following references for C.
"info libc" (C library function reference)
gcc(1) and "info gcc"
each_C_library_function_name(3)
A simple example "example.c" can be compiled with the library "libm" into an executable "run_example" by the following.
$ cat > example.c << EOF
#include <stdio.h>
#include <math.h>
#include <string.h>
int main(int argc, char **argv, char **envp){
        double x;
        char y[11];
        x=sqrt(argc+7.5);
        strncpy(y, argv[0], 10); /* prevent buffer overflow */
        y[10] = '\0'; /* fill to make sure string ends with '\0' */
        printf("%5i, %5.3f, %10s, %10s\n", argc, x, y, argv[1]);
        return 0;
}
EOF
$ gcc -Wall -g -o run_example example.c -lm
$ ./run_example
    1, 2.915, ./run_exam,     (null)
$ ./run_example 1234567890qwerty
    2, 3.082, ./run_exam, 1234567890qwerty
Here, "-lm" is needed to link the library "/usr/lib/libm.so" from the libc6 package for sqrt(3). The actual library is in "/lib/" with the filename "libm.so.6", which is a symlink to "libm-2.7.so".
Look at the last parameter in the output text. There are more than 10 characters even though "%10s" is specified.
The use of pointer memory operation functions without boundary checks, such as sprintf(3) and strcpy(3), is deprecated to prevent buffer overflow exploits that leverage the above overrun effects. Instead, use snprintf(3) and strncpy(3).
Debugging is an important part of programming activities. Knowing how to debug programs makes you a good Debian user who can produce meaningful bug reports.
The primary debugger on Debian is gdb(1), which enables you to inspect a program while it executes.
Let's install gdb
and related programs by the following.
# apt-get install gdb gdb-doc build-essential devscripts
A good tutorial of gdb
is provided by "info gdb
" or found elsewhere on the web.
Here is a simple example of using gdb
(1) on a "program
" compiled with the "-g
" option to produce debugging information.
$ gdb program
(gdb) b 1                # set break point at line 1
(gdb) run args           # run program with args
(gdb) next               # next line
...
(gdb) step               # step forward
...
(gdb) p parm             # print parm
...
(gdb) p parm=12          # set value to 12
...
(gdb) quit
Many gdb
(1) commands can be abbreviated. Tab expansion works as in the shell.
Since all installed binaries should be stripped on the Debian system by default, most debugging symbols are removed in the normal package. In order to debug Debian packages with gdb
(1), corresponding *-dbg
packages need to be installed (e.g. libc6-dbg
in the case of libc6
).
If a package to be debugged does not provide its *-dbg
package, you need to install it after rebuilding it by the following.
$ mkdir /path/new ; cd /path/new
$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo apt-get install fakeroot devscripts build-essential
$ sudo apt-get build-dep source_package_name
$ apt-get source package_name
$ cd package_name*
Fix bugs if needed.
Bump package version to one which does not collide with official Debian versions, e.g. one appended with "+debug1
" when recompiling existing package version, or one appended with "~pre1
" when compiling unreleased package version by the following.
$ dch -i
Compile and install packages with debug symbols by the following.
$ export DEB_BUILD_OPTIONS=nostrip,noopt
$ debuild
$ cd ..
$ sudo debi package_name*.changes
You need to check the build scripts of the package and ensure that "CFLAGS=-g -Wall
" is used for compiling binaries.
When you encounter a program crash, filing a bug report with cut-and-pasted backtrace information is a good idea.
The backtrace can be obtained by the following steps.
Run the program under gdb
(1).
Reproduce the crash. It drops you back to the gdb
prompt.
Type "bt
" at the gdb
prompt.
In case of a program freeze, you can crash the program by pressing Ctrl-C
in the terminal running gdb
to obtain the gdb
prompt.
Often, you see a backtrace where one or more of the top lines are in "malloc()
" or "g_malloc()
". When this happens, chances are your backtrace isn't very useful. The easiest way to find some useful information is to set the environment variable "$MALLOC_CHECK_
" to a value of 2 (malloc
(3)). You can do this while running gdb
by doing the following.
$ MALLOC_CHECK_=2 gdb hello
Table 12.12. List of advanced gdb commands
command | description for command objectives |
---|---|
(gdb) thread apply all bt | get a backtrace for all threads of a multi-threaded program |
(gdb) bt full | get parameters that came on the stack of function calls |
(gdb) thread apply all bt full | get a backtrace and parameters as the combination of the preceding options |
(gdb) thread apply all bt full 10 | get a backtrace and parameters for the top 10 calls to cut off irrelevant output |
(gdb) set logging on | write log of gdb output to a file (the default is "gdb.txt") |
If a GNOME program preview1
has received an X error, you should see a message as follows.
The program 'preview1' received an X Window System error.
If this is the case, you can try running the program with "--sync
", and break on the "gdk_x_error
" function in order to obtain a backtrace.
Use ldd
(1) to find out a program's dependency on libraries by the following.
$ ldd /bin/ls
        librt.so.1 => /lib/librt.so.1 (0x4001e000)
        libc.so.6 => /lib/libc.so.6 (0x40030000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x40153000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
For ls
(1) to work in a `chroot`ed environment, the above libraries must be available in your `chroot`ed environment.
There are several memory leak detection tools available in Debian.
Table 12.13. List of memory leak detection tools
package | popcon | size | description |
---|---|---|---|
libc6-dev | V:46, I:68 | 11292 | mtrace(1): malloc debugging functionality in glibc |
valgrind | V:1.3, I:6 | 136416 | memory debugger and profiler |
kmtrace | V:0.3, I:2 | 324 | KDE memory leak tracer using glibc's mtrace(1) |
alleyoop | V:0.05, I:0.3 | 596 | GNOME front-end to the Valgrind memory checker |
electric-fence | V:0.05, I:0.8 | 120 | malloc(3) debugger |
leaktracer | V:0.01, I:0.11 | 116 | memory-leak tracer for C++ programs |
libdmalloc5 | V:0.01, I:0.2 | 356 | debug memory allocation library |
mpatrolc2 | V:0.00, I:0.01 | 3592 | library for debugging memory allocations |
There are lint-like tools for static code analysis.
Table 12.14. List of tools for static code analysis
package | popcon | size | description |
---|---|---|---|
splint | V:0.06, I:0.5 | 1836 | tool for statically checking C programs for bugs |
rats | V:0.06, I:0.2 | 876 | Rough Auditing Tool for Security (C, C++, PHP, Perl, and Python code) |
flawfinder | V:0.01, I:0.15 | 192 | tool to examine C/C++ source code for security weaknesses |
perl | V:88, I:99 | 18528 | interpreter with internal static code checker: B::Lint(3perl) |
pylint | V:0.2, I:0.7 | 576 | Python code static checker |
jlint | V:0.01, I:0.09 | 156 | Java program checker |
weblint-perl | V:0.10, I:0.7 | 28 | syntax and minimal style checker for HTML |
linklint | V:0.05, I:0.3 | 432 | fast link checker and web site maintenance tool |
libxml2-utils | V:3, I:49 | 160 | utilities with xmllint(1) to validate XML files |
Flex is a Lex-compatible fast lexical analyzer generator.
Tutorial for flex
(1) can be found in "info flex
".
You need to provide your own "main()
" and "yywrap()
". Otherwise, your flex program should look like this to compile without a library. This is because that "yywrap
" is a macro and "%option main
" turns on "%option noyywrap
" implicitly.
%option main
%%
.|\n    ECHO ;
%%
Alternatively, you may compile with the "-lfl
" linker option at the end of your cc
(1) command line (like AT&T-Lex with "-ll
"). No "%option
" is needed in this case.
Several packages provide a Yacc-compatible lookahead LR parser or LALR parser generator in Debian.
Table 12.15. List of Yacc-compatible LALR parser generators
package | popcon | size | description |
---|---|---|---|
bison | V:2, I:15 | 1504 | GNU LALR parser generator |
byacc | V:0.09, I:1.2 | 168 | Berkeley LALR parser generator |
btyacc | V:0.00, I:0.07 | 248 | backtracking parser generator based on byacc |
Tutorial for bison
(1) can be found in "info bison
".
You need to provide your own "main()
" and "yyerror()
". "main()
" calls "yyparse()
" which calls "yylex()
", usually created with Flex.
%%
%%
Autoconf is a tool for producing shell scripts that automatically configure software source code packages to adapt to many kinds of Unix-like systems using the entire GNU build system.
autoconf
(1) produces the configuration script "configure
". "configure
" automatically creates a customized "Makefile
" using the "Makefile.in
" template.
Do not overwrite system files with your compiled programs when installing them.
Debian does not touch files in "/usr/local/
" or "/opt
". So if you compile a program from source, install it into "/usr/local/
" so it does not interfere with Debian.
$ cd src
$ ./configure --prefix=/usr/local
$ make
$ make install # this puts the files in the system
If you have the original source and if it uses autoconf
(1)/automake
(1) and if you can remember how you configured it, execute as follows to uninstall the program.
$ ./configure "all-of-the-options-you-gave-it" # make uninstall
Alternatively, if you are absolutely sure that the install process puts files only under "/usr/local/
" and there is nothing important there, you can erase all its contents by the following.
# find /usr/local -type f -print0 | xargs -0 rm -f
If you are not sure where files are installed, you should consider using checkinstall
(8) from the checkinstall
package, which provides a clean path for the uninstall. It now supports creating a Debian package with the "-D
" option.
Although any AWK script can be automatically rewritten in Perl using a2p
(1), one-liner AWK scripts are best converted to one-liner Perl scripts manually.
Consider the following AWK script snippet.
awk '($2=="1957") { print $3 }'
This is equivalent to any one of the following lines.
perl -ne '@f=split; if ($f[1] eq "1957") { print "$f[2]\n"}'
perl -ne 'if ((@f=split)[1] eq "1957") { print "$f[2]\n"}'
perl -ne '@f=split; print $f[2] if ( $f[1]==1957 )'
perl -lane 'print $F[2] if $F[1] eq "1957"'
perl -lane 'print$F[2]if$F[1]eq+1957'
The last one is a riddle. It took advantage of the following Perl features.
See perlrun
(1) for the command-line options. For more crazy Perl scripts, Perl Golf may be interesting.
Basic interactive dynamic web pages can be made as follows.
Filling in and clicking on the form entries sends one of the following URL strings with encoded parameters from the browser to the web server.
"http://www.foo.dom/cgi-bin/program.pl?VAR1=VAL1&VAR2=VAL2&VAR3=VAL3"
"http://www.foo.dom/cgi-bin/program.py?VAR1=VAL1&VAR2=VAL2&VAR3=VAL3"
"http://www.foo.dom/program.php?VAR1=VAL1&VAR2=VAL2&VAR3=VAL3"
"%nn" in the URL is replaced with the character with hexadecimal value nn.
The environment variable is set as "QUERY_STRING="VAR1=VAL1 VAR2=VAL2 VAR3=VAL3"".
The CGI program (any one of "program.*") on the web server executes itself with the environment variable "$QUERY_STRING".
The stdout of the CGI program is sent to the web browser and is presented as an interactive dynamic web page.
For security reasons it is better not to hand-craft new hacks to parse CGI parameters. There are established modules for this in Perl and Python. PHP comes with these functionalities. When client data storage is needed, HTTP cookies are used. When client-side data processing is needed, JavaScript is frequently used.
For more, see the Common Gateway Interface, The Apache Software Foundation, and JavaScript.
Searching "CGI tutorial" on Google by typing encoded URL http://www.google.com/search?hl=en&ie=UTF-8&q=CGI+tutorial directly to the browser address is a good way to see the CGI script in action on the Google server.
There are programs to convert source code from one language to another.
Table 12.16. List of source code translation tools
package | popcon | size | keyword | description |
---|---|---|---|---|
perl | V:88, I:99 | 18528 | AWK→PERL | convert source code from AWK to PERL: a2p(1) |
f2c | V:0.12, I:1.2 | 448 | FORTRAN→C | convert source code from FORTRAN 77 to C/C++: f2c(1) |
protoize | V:0.00, I:0.09 | 100 | ANSI C | create/remove ANSI prototypes from C code |
intel2gas | V:0.01, I:0.07 | 344 | intel→gas | converter from NASM (Intel format) to the GNU Assembler (GAS) |
If you want to make a Debian package, read the following.
debuild(1), pbuilder(1) and pdebuild(1)
Debian New Maintainers' Guide (in the maint-guide package)
Debian Developer's Reference (in the developers-reference package)
Debian Policy Manual (in the debian-policy package)
There are packages such as dh-make
, dh-make-perl
, etc., which help packaging.
Here is the background of this document.
The Linux system is a very powerful computing platform for a networked computer. However, learning how to use all its capabilities is not easy. Setting up the LPR printer with a non-PostScript printer was a good example of a stumbling point. (There are no issues anymore since newer installations use the new CUPS system.)
There is a complete, detailed map called the "SOURCE CODE". This is very accurate but very hard to understand. There are also references called HOWTO and mini-HOWTO. They are easier to understand but tend to give too much detail and lose the big picture. I sometimes have a problem finding the right section in a long HOWTO when I need a few commands to invoke.
I hope this "Debian Reference (version 2)" provides a good starting direction for people in the Debian maze.
Debian Reference was initiated by Osamu Aoki <osamu at debian dot org> as a personal system administration memo. Many contents came from the knowledge I gained from the debian-user mailing list and other Debian resources.
Following a suggestion from Josip Rodin, who was very active with the Debian Documentation Project (DDP), "Debian Reference (version 1, 2001-2007)" was created as a part of DDP documents.
After 6 years, Osamu realized that the original "Debian Reference (version 1)" was outdated and started to rewrite many contents. The new "Debian Reference (version 2)" was released in 2008.
The tutorial contents can trace their origin and inspiration to the following.
"Linux User's Guide" by Larry Greenfield (December 1996)
"Debian Tutorial" by Havoc Pennington. (11 December, 1998)
"Debian GNU/Linux: Guide to Installation and Usage" by John Goerzen and Ossama Othman (1999)
The package and archive description can trace some of its origin and inspiration to the following.
The other contents can trace some of their origin and inspiration to the following.
"Debian Reference (version 1)" by Osamu Aoki (2001–2007)
The previous "Debian Reference (version 1)" was created with many contributors.
Many manual pages and info pages on the Debian system were used as the primary references to write this document. To the extent that Osamu Aoki considered it within fair use, many parts of them, especially command definitions, were used as phrase pieces after careful editorial efforts to fit them into the style and the objective of this document.
The gdb debugger description was expanded using Debian wiki contents on backtrace with consent by Ari Pollak, Loïc Minier, and Dafydd Harries.
Contents of "Debian Reference (version 2)" are mostly my own work except as mentioned above. These has been updated by the contributors too.
The author, Osamu Aoki, thanks all those who helped make this document possible.
The source of the English original document is currently written in AsciiDoc text files. AsciiDoc is used as a convenience only, since it requires less typing than straight XML and supports tables in a very intuitive format. You should think of the XML and PO files as the real source files. Via the build script, it is converted to DocBook XML format, and automatically generated data are inserted to form the final DocBook XML source. This final DocBook XML source can be converted to HTML, plain text, PostScript, and PDF. Currently, only HTML and plain text conversions are enabled.