shell.org

Shell

Files

View

cat file1 file2

Concatenate file(s) and print on the standard output. [ -A display non-printing characters -n number all output lines -s suppress repeated empty output lines ] ( cat file displays the contents of a file )

less file

Paginates the file. less supports extended regular expressions. [ /word search forward ?word search backward n repeat previous search N repeat previous search, but in the reverse direction S-< go to beginning of document S-> go to end of document h help ] ( zless file display the contents of gzip-compressed text files ) ( more file similar to less but with fewer features (only pages forward) ) ( dmesg | less print or control the kernel ring buffer )

tail file

Shows the last 10 lines of the file. [ -n output the last NUM lines instead of 10 -f output appended data as the file grows ] ( head file shows the first 10 lines ) ( tail +15 file print lines starting at line 15 ) ( tail -15 file output the last 15 lines ) ( tail -f file dynamically displays the last lines of file as it grows ) ( tail -f /var/log/messages watch what the system is doing in near real-time )

Create

touch file

Set or update the access, change, and modify times of file. If a filename argument is that of a nonexistent file, an empty file is created. ( touch dir/dir-{01..10}/file-{A..Z} create a file from A to Z inside each dir from 01 to 10 )

echo 'content here' > name

Writes content to a file. If the file does not already exist, it creates it. ( echo 'content here' >> name append the output to the file instead of overwriting it )
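A minimal sketch of the two redirection operators (the filename notes.txt is illustrative):

```shell
# > truncates (or creates) the file; >> appends to it.
echo 'first line'  > notes.txt
echo 'second line' >> notes.txt
cat notes.txt
```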

cat <<EOF > file

Create a file with a large section of text by typing it into the standard input. Ctrl-d tells cat that it has reached end-of-file (EOF) on standard input. The EOF markers delimit the here document. [ -A display non-printing characters -n number all output lines -s suppress repeated empty output lines ] ( cat > file equivalent command, terminated with Ctrl-d )
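The here-document form can be sketched like this (greeting.txt is an illustrative name); in a script, the terminator line plays the role of the interactive Ctrl-d:

```shell
# Everything between <<EOF and the line containing only EOF
# is fed to cat's standard input and redirected into the file.
cat <<EOF > greeting.txt
Hello
World
EOF
```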

diff -Naur old_file new_file > patch_file

Prepare a patch_file for use with the patch command. old_file and new_file are either single files or directories containing files. [ -N treat absent files as empty -a treat all files as text -u output NUM (default 3) lines of unified context -r recursively compare any subdirectories found ] ( patch < patch_file apply it to patch the old file into the new file (it's not necessary to specify a target file for the patch command; the patch_file already contains the filenames in its header) )
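A round-trip sketch, assuming GNU diff and patch are available (the filenames old.txt, new.txt, and changes.patch are illustrative):

```shell
# Make two versions of a file, produce a unified patch, then apply it.
printf 'alpha\nbeta\n'  > old.txt
printf 'alpha\ngamma\n' > new.txt
diff -Naur old.txt new.txt > changes.patch || true  # diff exits 1 when files differ
patch old.txt < changes.patch                       # old.txt now matches new.txt
```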

mkfifo file

Create named pipes (FIFOs) with the given file.

mktemp /tmp/im1.XXXXXX

Create temporary filenames. Converts the XXXXXX to a unique set of characters and creates an empty file with that name. [ -d create a directory, not a file -u dry run, do not create anything; merely print a name (unsafe) ]

Copy

rsync -b file dir/

Transfer/copy file from the current directory to the directory dir. [ -t preserve modification times -b make backups (useful to avoid overwriting) ] ( rsync -t *.c dir/ transfer all files matching the pattern to the directory ) ( rsync -t *.c foo:dir/ transfer all files matching the pattern to the directory on the machine foo )

cp -i file1 file2

Copy file1 to file2. If copied to an existing file, the file keeps its inode number. [ -i prompt before overwrite -a same as -dR --preserve=all ] ( cp -i file1 file2 dir copy files to directory )

mv -i file1 file2

Move/rename file1 to file2. If file2 exists, it is overwritten with the contents of file1. If file2 does not exist, it is created. In either case, file1 ceases to exist.

[ -i prompt before overwrite -u update, only move files that either don’t exist, or are newer than the existing corresponding files in the destination directory ]

scp files user@host:path

Copy files to another machine (and vice versa).

Delete

rm -i file

Removes file. [ -i prompt before every removal -f force; remove write-protected files without prompting ]

gio trash file

Move file to Trash folder.

Utilities

ln -s source link

Create a symbolic link from source to link. The new link is the name of the symbolic link; the source is the path of the file or directory that the link points to (a relative source is resolved relative to the link's location, so absolute paths are safer when the link may move). [ -s make symbolic links instead of hard links ] ( ln source link create a hard link from source to link )

chmod +rx file

Add read r and execute x permissions to file. ( chmod 755 file rwxr-xr-x: full access for the owner, read and execute for group and others ) ( chmod 700 file rwx------: full access for the owner only, in absolute (octal) syntax ) ( chmod u+x file add execute for the owner ) ( chmod +x file add execute for all, equivalent to a+x ) ( chmod go-r file remove read permission from group and others ) ( chmod u+x,go=rx file add execute for the owner and set the permissions for the group and others to read and execute )
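A small sketch contrasting octal and symbolic modes (script.sh is an illustrative scratch file):

```shell
# Octal mode sets all bits at once; symbolic mode adjusts them incrementally.
: > script.sh
chmod 644 script.sh   # rw-r--r--
chmod u+x script.sh   # add execute for the owner -> rwxr--r--
```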

# chown new_user file

Change the owner of file to new_user. [ -R operate on files and directories recursively ] ( # chown :admins file change the group owner to the group admins ) ( # chown new_user:new_group file change the owner and group )

lsof -p 123

List the open files for a particular process ID, here 123. lsof lists open files and the processes using them; it can list network resources, dynamic libraries, pipes, and more. [ +D search for all open instances of a directory -p list files for the given processes ] ( COMMAND the command name for the process that holds the file descriptor FD shows the purpose of the file; it can also list the file descriptor of the open file DEVICE the major and minor number of the device that holds the file NAME the filename ) ( lsof +D /usr displays entries for open files in /usr and all of its subdirectories )

Search

find path options

Search for files matching options in path. [ -name base of the file name, the path with the leading directories removed -print print the full name, followed by a newline -print0 print the full name, followed by a null character -prune if the file is a dir, don't descend into it -xdev don't descend into dirs on other filesystems -perm 644 search by permission -exec run a command on each found file -type d search for dirs -type f search for files -inum 123 search by inode -regex pattern match regular expression pattern ] ( find -name pattern search for files matching a pattern ) ( find -type f/d/l search by filetype: f=file, d=directory, l=link ) ( find -exec cmd execute cmd on the found files ) ( find . -name "*.png" -exec cp {} $HOME/tmp/ \; search for all png files, then copy them to the tmp directory ('{}' stands for the found file) )

find dir -name file -print

Search for file in dir. Be careful of shell expansion; quote the file names. [ -name base of the file name, the path with the leading directories removed -iname like -name, but the match is case insensitive -print print the full name, followed by a newline -print0 print the full name, followed by a null character -size n file uses exactly n units of space, rounding up; prefix n with - for less than or + for more than ] ( find . -size -9k search for files smaller than 9 KiB )

find . -name '*.gif' -print0 | xargs -0 file

Verify that every file in the current directory ( . ) tree that ends with .gif is actually a GIF image. This form changes the find output separator and the xargs argument delimiter from a newline to a NULL character; it's useful to avoid errors arising from filenames that can include spaces and newlines. xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command one or more times with any initial-arguments followed by items read from standard input. You may need to add two dashes (--) to the end of your xargs command if there's a chance that any of the target files start with a single dash (-). The double dash (--) tells a program that any arguments that follow are filenames, not options (not all programs support the use of a double dash). [ -name base of the file name, the path with the leading directories removed -print print the full name, followed by a newline -print0 print the full name, followed by a null character ] ( find . -name '*.gif' -exec file {} \; equivalent command, but the syntax is somewhat tricky because you need to supply the braces )
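A self-contained sketch showing why the NUL-delimited form matters (the gifs directory and filenames are illustrative — note the filename containing a space):

```shell
# Create two empty .gif files, one with a space in its name.
mkdir -p gifs
: > 'gifs/a b.gif'
: > gifs/c.gif
# NUL-delimited handoff: the space in "a b.gif" does not split the argument.
find gifs -name '*.gif' -print0 | xargs -0 -- ls > found.txt
cat found.txt
```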

find dir

Search all files with relative paths in the current directory and its subdirectories. ( find $PWD search all files with absolute paths in the current dir and its subdirs ) ( find $PWD -type f search only files with absolute paths in the current dir and its subdirs ) ( find $PWD -type d search only dirs with absolute paths in the current dir and its subdirs ) ( find $PWD -maxdepth 2 search all files with absolute paths down to depth 2 ) ( find . -maxdepth 1 -type d search only directories in the current path ) ( find . -maxdepth 1 -type d -name 'emacs*' search only directories whose names start with emacs in the current directory ) ( find . -maxdepth 1 -type d -name 'emacs*' -printf '%f\n' equivalent to above, but listed without the ./ at the beginning )

xargs cmd arg

Reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the cmd command (default is echo) one or more times with any arg initial-arguments followed by items read from standard input. Blank lines on the standard input are ignored. ( | xargs joins all the lines from a pipe ) ( | xargs -n 1 splits pipe elements separated by whitespace into lines )

Analysis

du -h file

Display size of the file file. [ -h human readable ]

file file

Determine file type. [ -i mime ] ( stat file display file status: birth, inode, block, device, type, group, permissions, size,… )

echo

Prints its arguments to the standard output. [ -n don't output the trailing newline -e enable interpretation of backslash escapes ] ( echo .[^.]* match all dot files except current and parent dir ) ( echo n?me match name, nome, ntme, … ) ( echo -e "Hello.\nHow are you?" print using newline ) ( echo Hello.\\nHow are you print using newline ) ( echo Front-{A,B,C}-Back brace expansion ) ( echo Number_{1..5} brace expansion with a leading portion (preamble) and a trailing portion (postscript) )

grep pattern file

Prints the lines from a file or input stream that match a pattern. When passing a regexp containing metacharacters (^ $ . [ ] { } - ? * + ( ) | \) on the command line, it's vital to enclose it in quotes to prevent the shell from attempting to expand it. [ -i ignore case -v invert, print only those lines that don't match -G basic regexps (BREs) -E extended regexps (EREs) -q quiet -c count matching lines -l name of each matching file -L name of each file that doesn't match -n number lines -h suppress the output of filenames for multi-file searches -F fixed strings, not regular expressions ] ( grep root /etc/* check every file in /etc that contains root ) ( grep -h '.zip' files*.txt matches things like bunzip2, gunzip, gzip, … ) ( zgrep -El 'regex|regular expression' *.gz zgrep provides a front end for grep, allowing it to read compressed files. This command lists files containing either the string regex or the string regular expression ) ( cp `ls dir | grep -Ev 'exclude1|exclude2'` newDir copy all files in dir except some files to newDir )

diff file1 file2

See the differences between file1 and file2. [ -i ignore case -w ignore all white space -u patch syntax ] ( cmp file1 file2 compare byte by byte ) ( comm file1 file2 compare two sorted files line by line )

wc file

Print newline, word, and byte counts for each file and a total line if more than one file is specified. [ -l print the newline counts -w print the word counts - read standard input ]

Text Processing

nl

Number lines of files or standard input. ( nl -i 2 -v 5 file set the numbering increment to 2 and the first line number to 5 )

sort file

Sort lines of text file(s). [ -n numeric sort (sorting on numeric rather than alphabetic values) -r reverse result -u remove duplicates from the sorted output -k sort based on a key field rather than the entire line -b ignore leading blanks, sorting on the first non-whitespace character of the line -f --ignore-case --output=file send output to file rather than stdout -t --field-separator=char ] ( sort file | uniq omit repeated lines in the sorted file ) ( sort file | uniq -d report repeated lines in the sorted file ) ( ls -l | sort -nk 5 sort the listing by the 5th field (size) ) ( sort -k 1,1 -k 2n file_dates sort on field 1 (start and end at field 1), then on field 2 numerically )
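A quick sketch of the sort | uniq pairing: uniq only collapses adjacent duplicates, so the input must be sorted first (the input values are illustrative):

```shell
# Duplicate "2" lines become adjacent after sorting, so uniq drops one.
printf '3\n1\n2\n2\n' | sort -n | uniq > sorted.txt
cat sorted.txt
```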

awk '{print $5}' file

Prints the 5th field (column) of the file (columns are separated by whitespace by default). ( awk -F'\t' '{print $12, $7}' print the 12th and 7th columns, with Tab as the separator ) ( | awk '{print $5}' print the 5th field of the previous output ) ( awk '($2=="Name") { print }' < file search for Name in the 2nd column and print that line ) ( awk '($2=="Name") { print $3,$4 }' < file search for Name in the 2nd column and print just the 3rd and 4th fields )

cut

Extract a section of text from a line and output the extracted section to standard output. By default, fields must be separated by a single tab character. [ -c list extract the portion of the line defined by list -f list extract one or more fields from the line as defined by list -d delim when -f is specified, use delim as the field delimiting character --complement extract the entire line of text, except for those portions specified by -c and/or -f ] ( cut -c 7-10 file extract character positions 7 through 10 ) ( cut -f 3 file extract the third field, separated by tabs ) ( cut -d ':' -f 1 /etc/passwd first field, separated by colons : )
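A minimal sketch of the -d/-f combination on passwd-style input (the two sample lines stand in for /etc/passwd):

```shell
# Extract the first colon-separated field from each line.
printf 'root:x:0:0\ndaemon:x:1:1\n' | cut -d ':' -f 1 > users.txt
cat users.txt
```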

fold -w 20 file

Break lines of text at a specified width. ( fmt -cw 50 file reformat this text to fit a 50-character-wide column ) ( fmt -cw 50 -p '#' file format only the comments and leave the code untouched )

pr -l 50 -w 80 file

Paginate text; define a page 80 columns wide and 50 lines long. ( ls /usr/bin | pr -3 -t print bin programs in 3 columns, omitting headers and footers )

tr

Is used to transliterate characters. Transliteration is the process of changing characters from one alphabet to another. [ -d delete -s replace each sequence of a repeated character with a single occurrence of that character ] ( | tr 'A-Z' 'a-z' translate uppercase to lowercase ) ( | tr '[:lower:]' '[:upper:]' translate lowercase to uppercase ) ( | tr 'A-Z' 'a' convert multiple characters to a single character ) ( | tr -d '\n' concatenate all lines (remove all newline characters) ) ( tr -d '\r' < dos_file > unix_file convert MS-DOS text to Unix-style text (remove all carriage return characters) ) ( echo "SECRET frperg" | tr a-zA-Z n-za-mN-ZA-M perform ROT13 encoding or decoding (a method that moves each character 13 places up the alphabet) ) ( echo "aaabbbccc" | tr -s ab squeeze repeated instances of a and b into single occurrences )
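The ROT13 example above can be verified directly: the cipher is its own inverse, so applying it twice restores the input (rot.txt is an illustrative scratch file):

```shell
# ROT13: H->U, e->r, l->y, l->y, o->b.
echo 'Hello' | tr 'a-zA-Z' 'n-za-mN-ZA-M' > rot.txt   # Uryyb
tr 'a-zA-Z' 'n-za-mN-ZA-M' < rot.txt                  # back to Hello
```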

sed 's/exp/text/' file

Substitute the first match of exp with text in each line of file and send to standard output. In general, sed takes an address and an operation as one argument. The address is a set of lines (every line by default), and the command determines what to do with those lines. With no file arguments, sed reads from the standard input. [ s substitution p print g global substitution d delete -n suppress automatic printing of pattern space (don't print every line by default) -f add the contents of script-file ] ( |sed '1s/hey/HEY/' replace hey in the first line of stdin with HEY ) ( sed 's/:/%/' passwd replace the first colon in each line of the passwd file with a % ) ( sed 's/:/%/g' passwd replace all colons in each line of the passwd file with a % ) ( sed 3,6d file delete lines 3 to 6 ) ( sed '/exp/d' file delete any line that matches the regular expression exp ) ( sed -n '1,5p' file print a range of lines, starting with line 1 and continuing to line 5 ) ( sed -n '/regexp/p' file print lines that match regexp ) ( sed -i 's/laxy/lazy/; s/jimped/jumped/' file replace two misspellings in file ) ( sed -f script file apply more complex changes using a script )
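The misspelling fix can be run end to end as a sketch, assuming GNU sed for the -i (in-place) flag; story.txt is an illustrative file:

```shell
# Two substitutions in one invocation, separated by a semicolon.
printf 'the laxy dog jimped\n' > story.txt
sed -i 's/laxy/lazy/; s/jimped/jumped/' story.txt
cat story.txt
```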

paste file1 file2

Write lines (adds columns to a file) consisting of the sequentially corresponding lines from each FILE, separated by TABs. ( join file1 file2 for each pair of input lines with identical join fields, write a line to standard output )

expand file

Convert tabs to spaces. ( unexpand file convert spaces to tabs )

aspell check file

Spelling checker, check various types of text files, including HTML documents, C/C++ programs, email messages. ( aspell -H check file.html check spelling of an HTML file )

Archives

tar cvf archive.tar file

Create an archive.tar of file file. [ c create a new archive v verbose f name of the archive file for tar to create f - use standard input or output instead of a file t list the contents z automatically invoke gzip (extracting with x, creating with c) j automatically invoke bzip2 x extract an archive r append specified pathnames to the end of an archive ] ( tar cvf archive.tar file1 file2 create a tar of files file1 and file2 ) ( tar cvf archive.tar dir create a tar of the dir directory ) ( tar cvf archive.tar dir/ create a tar of all files inside the dir folder ) ( tar cvf archive.tar /dir create a tar with the full path of the dir directory ) ( tar tvf archive.tar check the contents of the tar ) ( tar cf - dir_orig |(cd dir_target; tar xvf -) archive the entire directory tree within dir_orig and then unpack the archive into the new directory dir_target (this is useful because it preserves ownership and permissions, and it's generally faster than other commands) )
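A minimal create-then-verify sketch (the proj directory and its contents are illustrative):

```shell
# Archive a small tree, then list the archive's contents with t.
mkdir -p proj
echo 'hi' > proj/a.txt
tar cf proj.tar proj
tar tf proj.tar
```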

tar xvf archive.tar

Unpack archive.tar. [ z automatically invoke gzip (extracting with x, creating with c) j automatically invoke bzip2 x extract p preserve permissions --wildcards use wildcards ] ( tar xvf archive.tar file1 unpack just file1 ) ( tar xf dir.tar --wildcards 'home/path/dir*' extract only files matching the specified pathname, including the wildcard dir* )

tar czvf archive.tar.gz file

Compress and create tar using gzip. [ c create a new archive v verbose f name of the archive file for tar to create f - use standard input or output instead of a file t list the contents z automatically invoke gzip (extracting with x, creating with c) j automatically invoke bzip2 x extract an archive r append specified pathnames to the end of an archive ] ( tar czvf archive.tgz file equivalent to the command above (different extension) ) ( tar cjvf archive.tbz file compress and create tar using bzip2 )

tar xzvf archive.tar.gz

Unpack and decompress using gzip. [ z automatically invoke gzip (extracting with x, creating with c) j automatically invoke bzip2 x extract p preserve permissions --wildcards use wildcards ] ( tar xzvf archive.tgz equivalent to the command above (different extension) ) ( tar xjvf archive.tbz unpack and decompress using bzip2 )

gzip file

Compress one or more files. The permissions and timestamp are preserved. gzip is the predominant compression program, with bzip2 being a close second. [ -d decompress (this makes gzip act like gunzip) -c send the result to standard output and keep the original files -f force compression even if a compressed version of the original file already exists -l list compression statistics for each file compressed -r recursive -t test the integrity of the compressed version -number set amount of compression, number is an integer in the range of 1 (fastest, least compression) to 9 (slowest, most compression) ] ( ls dir | gzip > file.gz create a compressed version of a dir listing ) ( find . -name 'file-A' | tar cf - --files-from=- | gzip > dir.tgz find produces a list of matching files and pipes them into tar; the .tgz extension is conventional for gzip-compressed tar files (.tar.gz is also used) )
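A sketch of the -c behavior described above (big.txt is an illustrative name): writing to stdout leaves the original file in place, unlike plain gzip file, which replaces it with file.gz.

```shell
# -c sends compressed data to stdout, so big.txt survives alongside big.txt.gz.
echo 'some text' > big.txt
gzip -c big.txt > big.txt.gz
gzip -dc big.txt.gz          # decompress to stdout without touching the .gz
```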

zcat file.gz

Decompress gz file. Compressed files can be restored to their original form using gzip -d or gunzip. ( gunzip file.gz decompress gz file ) ( gzip -d file.gz decompress gz file ) ( gunzip -c file.gz | less only to view the contents ) ( zcat file.tar.gz | tar xpvf - decompress and unpack tar file ) ( zcat /usr/share/man/man1/ls.1.gz |groff -mandoc -T ascii |less simulate the man cmd ) ( zcat /usr/share/man/man1/ls.1.gz | groff -mandoc > ls.ps create PostScript file from ls man page ) ( ps2pdf file.ps file.pdf convert the PostScript file into a Portable Document Format (PDF) )

bzip2 file

Similar to gzip but uses a different compression algorithm that achieves higher levels of compression at the cost of compression speed. ( bunzip2 file.bz2 decompress file )

zip -rX file dir

Creates the archive file.zip, containing all the files and directories in the directory dir. The zip program is both a compression tool and an archiver, and its format is familiar to Windows users. If an existing archive is specified, it is updated rather than replaced (the existing archive is preserved, but new files are added and matching files are replaced). [ -r recursive -X do not save extra file attributes (more compatibility between different OSes) ] [ unzip: -l list archive files -v list archive files (verbose format) or show diagnostic version info ] ( unzip file.zip decompress file ) ( unzip file.zip file1 file2 extract selected files ) ( xz file compress file (compresses somewhat more than gzip) ) ( unxz file.xz decompress file ) ( unrar x file.rar extract .rar file )

Directories

Create

mkdir dir

Creates a new directory dir. [ -p no error if existing, make parent directories as needed ] ( mkdir -p dirA/dirB creation of parent directory if needed ) ( mkdir -p dir_parent/{dir1,dir2} create a main directory and 2 subdirectories ) ( mkdir -p dir_parent/dir-{01..10} create a main dir and 10 subdirs )
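The -p plus brace-expansion idiom can be sketched as follows, assuming a brace-expanding shell such as bash (the project layout is illustrative):

```shell
# One call creates the parent and three subdirectories via brace expansion.
mkdir -p project/{src,doc,test}
ls project
```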

mv dir1 dir2

If directory dir2 doesn’t exist, create directory dir2 and move the contents of directory dir1 into dir2 and delete directory dir1. If directory dir2 does exist, move directory dir1 (and its contents) into directory dir2. [ -i interactive -u update, only move files that either don’t exist, or are newer than the existing corresponding files in the destination directory ]

mktemp -d /tmp/im1.XXXXXX

Create a temporary directory. Converts the XXXXXX to a unique set of characters and creates an empty directory with that name. [ -d create a directory, not a file -u dry run, do not create anything; merely print a name (unsafe) ]

Copy

cp -ai dir_source dir_dest

Copy dir_source to dir_dest, preserving all attributes. [ -a preserve-all, copy the files and directories and all of their attributes, including ownerships and permissions -r recursively copy directories and their contents -u update, only copy files that either don't exist or are newer than the existing corresponding files in the destination directory -i interactive -v verbose, explain what is being done ] ( cp -a dir_source/* dir_dest copy just the files and dirs inside dir_source )

rsync -a dir/ dest_dir

Transfer everything inside dir to dest_dir. With the -a option, transfer hierarchies with symbolic links, permissions, modes, and devices. This is not an exact replica; the destination may keep extra files. A trailing slash on the source changes this behavior to avoid creating an additional directory level at the destination. Use a relative or absolute path; don't use dot notation. [ -a archive mode, equivalent to -rlptgoD (no -A,-X,-U,-N,-H) -n dry run mode, perform a trial run with no changes made -v increase verbosity -vv more details --delete delete files in the destination directory that do not exist in the source directory -c compute checksums of the files to see if they're the same --stats summary after the transfer --progress show progress during transfer ] ( rsync -a dir dest_dir transfer everything (the dir folder will be inside dest_dir) ) ( rsync -nva dir/ dest_dir run a trial without actually copying any files ) ( rsync -a --delete dir/ dest_dir make an exact replica of the source directory, deleting files in the destination directory that do not exist in the source directory (careful with the trailing slash, because you can easily remove unrelated files this way) )

Delete

rm -r dir

Removes the directory dir and its contents. Don’t use the -r flag with globs such as a star *. [ -r remove directories and their contents recursively, this means that if a directory being deleted has subdirectories, delete them too. To delete a directory, this option must be specified ] ( rmdir dirA remove the empty directory )

gio trash dir

Move dir to Trash folder.

gio trash --empty

Empty the trash.

rm -Rf ~/.local/share/Trash/*

Emptying the user trash-bins (or wastebaskets). ( rm -Rf /root/dot.local/share/Trash/* emptying the administrator’s trash-bins (note that /dot.local is actually /.local) ) ( rm -Rf /media/your_id/your_disk/dot.Trash_1000/* emptying the external trash-bins, locates on your external disks (note that /dot.Trash is actually /.Trash))

Utilities

cd dir

Change directory. ( cd change to home directory ) ( cd - return to the previous directory ) ( cd -2 return to the directory two entries back in the directory stack (zsh) )

ls -lh dir

Lists the contents of a directory dir. [ -l long listing format -h human readable -a all -i inode numbers -d list directories themselves, not their contents -S sort by file size, largest first -t sort by time, newest first --si powers of 1000 not 1024 -r reverse order -A almost-all, don't list implied . and .. -1 single column -o like -l, but do not list group information -g like -l, but do not list owner ] ( permissions ( - regular file d directory l symbolic link b block c character p pipe s socket ) | hard links | owner | group | size | modification time | filename ) ( r file-read/dir-list w file-write/dir-create x file-execute/dir-enter ) ( ls /usr/bin | pr -3 -t print bin programs in 3 columns, omitting headers and footers ) ( ls -d */ only show the directories in the current path )

find dir

Search all files with relative paths in the current directory and its subdirectories. [ -name base of the file name, the path with the leading directories removed -print print the full name, followed by a newline -print0 print the full name, followed by a null character -prune if the file is a dir, don't descend into it -xdev don't descend into dirs on other filesystems -perm 644 search by permission -exec run a command on each found file -type d search for dirs -type f search for files -inum 123 search by inode -regex pattern match regular expression pattern ] ( find $PWD search all files with absolute paths in the current dir and its subdirs ) ( find $PWD -type f search only files with absolute paths in the current dir and its subdirs ) ( find $PWD -type d search only dirs with absolute paths in the current dir and its subdirs ) ( find $PWD -maxdepth 2 search all files with absolute paths down to depth 2 ) ( find . -maxdepth 1 -type d search only directories in the current path ) ( find . -maxdepth 1 -type d -name 'emacs*' search only directories whose names start with emacs in the current directory ) ( find . -maxdepth 1 -type d -name 'emacs*' -printf '%f\n' equivalent to above, but listed without the ./ at the beginning )

du -h dir

Display the estimated file space usage of the dir directory. du output in most Linux distributions is in 1,024-byte blocks. [ -s summarize -m block-size 1M -h human readable -c produce a grand total -d --max-depth -a write counts for all files, not just directories ] ( ncdu equivalent command ) ( du -d 1 |sort -nr sort folders according to size ) ( du -bs * |sort -nr list all files in the current dir sorted by decreasing size ) ( du --max-depth=1 $(echo /home/*) 2> /dev/null |sort -nr sort home folders, ignoring errors )

df

View the size and utilization of your currently mounted filesystems. df output in most Linux distributions is in 1,024-byte blocks. [ -m block-size 1M -h human readable --total produce a grand total -T print filesystem type ] ( df dir view the info for the filesystem containing the specific directory )

stat dir

Display dir status (birth, inode, block, device, type, group, permissions, size,…) ( pwd print working directory )

tree -d

List contents of directories in a tree-like format. Show only directories and subdirectories. [ -d list directories only -L 3 max display depth of the directory tree -h print the size of each file --du for each directory, report its size as the accumulation of the sizes of all its files and sub-directories ] ( tree -d -L 1 only show the directories in the current path )

gio list trash://

List the Trash folder.

System

Processes

top

List processes running on the system in a semi-graphical table. [ -p only processes with specified process IDs -b batch mode, it doesn’t accept any interactive inputs -n 3 run top with 3 updates, then exit ] ( PR The process’s priority. The lower the number, the higher the priority | NI Nice value, negative nice value implies higher priority | VIRT Virtual memory used by the task | RES Resident memory used by the process | SHR Shared Memory size used by a task | S Status of the process (D uninterruptible sleep R running S sleeping T traced or stopped Z zombie) | %CPU The share of CPU time used by the process since the last update | %MEM The share of physical memory used ) [ y Highlight running tasks x Highlight the sort column b Bold ? help ] [ < , > choose how to sort the information f different statistics R reverse sort order SPC Update ] [ M Sort by %MEM P Sort by %CPU T Sort by TIME+ N Sort by PID ] [ u only user’s processes ] [ 1 Individual CPU Core Statistics t CPU Usage Graph m Memory Usage Graph H Threads ] [ c Full Command Line V Process Hierarchy ] [ k kill process ] ( top -p pid1 [-p pid2 ...] monitor one or more specific processes over time ) ( pidstat -p 123 1 monitor process 123, updating every second ) ( pidstat -p 123 1 -r monitor process 123, report page faults and memory utilization ) ( pidstat -p 123 1 -d monitor process 123, report I/O statistics ) ( top -b -n 1 > file.txt dump top output as plain text to a file )

ps aux

Report a snapshot of the current processes belonging to every user. [ ax all processes a all processes with a terminal (tty) x all processes owned by you u more detailed user-oriented format information on processes c simple name of executable -H show process hierarchy (forest) ] ( PID process ID | VSZ virtual memory size | RSS resident set size. This is the amount of physical memory (RAM) the process is using in kilobytes | TTY terminal device where the process is running | STAT process status (S sleeping R running D uninterruptible sleep T stopped Z zombie < high-priority N low-priority ) | START time when the process started | TIME amount of CPU time that the process has used so far | COMMAND be careful: a process can change this field from its original value, and the shell can perform glob expansion, so this field may reflect the expanded command instead of what you entered at the prompt ) ( ps u 123 inspect the 123 process ) ( ps m display the thread information ) ( ps m -o pid,tid,command show only the PIDs, TIDs, and command ) ( ps -ax -o %mem,pid,comm |sort list all processes sorted by memory usage ) ( pidstat -p 123 report statistics for task 123 )

pstree

Print all process in a tree. [ -h highlight the current process and its ancestors -H like -h, but highlight the specified process instead -s show parent processes of the specified process ] ( pstree -s 123 display a tree of parent processes of 123 process )

pidof name

Find the process ID of a running program. ( pgrep name equivalent tool )

kill signal pid

Kill a process using its pid. [ -STOP freeze a process -CONT continue running the process again -KILL brutal way to terminate process -9 another notation for -KILL -15 another notation for -SIGTERM -TSTP terminal stop -INT interrupt -HUP hangup ] ( kill 123 send the default signal, SIGTERM or TERM (terminate the process) ) ( kill 123 3453 send the default signal, SIGTERM, to all those processes ) ( kill -STOP 123 freeze the 123 process ) ( killall xlogo send signals to multiple processes matching a specified program or username ) ( pkill signal name kill a process using its name )
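A safe, self-contained sketch of the signal flow: start a throwaway background process, terminate it with the default SIGTERM, and confirm it is gone with the null signal -0 (kill_result.txt is an illustrative marker file):

```shell
# sleep 60 stands in for any long-running process.
sleep 60 &
pid=$!
kill "$pid"                     # default signal: SIGTERM
wait "$pid" 2>/dev/null || true # reap; wait returns non-zero for a killed child
# kill -0 sends no signal, only checks existence; failure means the process is gone.
kill -0 "$pid" 2>/dev/null || echo 'terminated' > kill_result.txt
```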

command &

Detach a process command from the shell and put it in the “background”. ( gunzip file.gz & decompress file in the background ) ( bg %123 move to background ) ( fg %123 bring to foreground ) ( jobs show suspended processes on the current terminal ) ( disown %2 remove jobs from the job table, or mark jobs so that a SIGHUP signal is not sent to them if the parent shell receives it (useful for closing the shell while keeping background jobs running) )

renice 20 pid

Change the nice value to 20. Niceness values range from -20 (most favorable to the process) to 19 (least favorable to the process). ( nice -n 19 tar cvzf name.tgz name run a program with modified scheduling priority )

lsof -p 123

List the open files for a particular 123 process ID. Besides open files and the processes using them, it can list network resources, dynamic libraries, pipes, and more. [ +D search for all open instances of directory -p listing of files for the processes ] ( COMMAND the command name for the process that holds the file descriptor FD shows the purpose of the file, it can also list the file descriptor of the open file DEVICE the major and minor number of the device that holds the file NAME the filename ) ( lsof +D /usr displays entries for open files in /usr and all of its subdirectories )

strace

System call trace. Prints all the system calls that a process makes. [ -o save_file save the output in a file ] ( strace cat /dev/null first lines of the output should show execve() in action, followed by a memory initialization call, brk() ) ( ltrace command tracks shared library calls )

Performance

free

Display amount of free and used memory in the system. [ -h human readable ]

vmstat

Report virtual memory statistics, swap and disk I/O. You’ll find it handy for getting a high-level view of how often the kernel is swapping pages in and out, how busy the CPU is, and how I/O resources are being utilized. [ 2 statistics every two seconds -d report disk statistics -s statistics -w wide output ] ( swap for the pages pulled in and out of swap io for disk usage system for the number of times the kernel switches into kernel code cpu for the time used by different parts of the system ) ( us percentage of time the CPU is spending on user tasks sy system (kernel) tasks id idle time wa waiting for I/O ) ( b processes blocked (prevented from running) while waiting for memory si pages swapped in from disk so pages swapped out to disk bi blocks in bo blocks out ) ( vmstat 3 snapshot of system resource usage every 3 seconds )

uptime

How long the system has been running. The load average is the average number of processes currently ready to run (see if the process waiting list is not too long). That is, it is an estimate of the number of processes that are capable of using the CPU at any given time—this includes processes that are running and those that are waiting for a chance to use the CPU. When thinking about a load average, keep in mind that most processes on your system are usually waiting for input (from the keyboard, mouse, or network, for example), meaning they’re not ready to run and shouldn’t contribute anything to the load average. Only processes that are actually doing something affect the load average. A load average of 0 is usually a good sign, because it means that your processor isn’t being challenged and you’re saving power. If a load average goes up to around 1, a single process is probably using the CPU nearly all of the time (one process is running while another one is waiting) (single CPU system). ( load average past 1 minute, past 5, past 15 ) ( w show who is logged on and what they are doing )
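On Linux the same three numbers can be read directly from /proc/loadavg, which is handy in scripts; a sketch assuming a Linux /proc:

```shell
# The first three fields are the 1-, 5-, and 15-minute load averages.
read one five fifteen rest < /proc/loadavg
echo "1min=$one 5min=$five 15min=$fifteen"
```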

iostat -p ALL

Central Processing Unit (CPU) statistics and input/output statistics for ALL devices and partitions. The sum of the partition columns won’t necessarily add up to the disk column. Although a read from sda1 also counts as a read from sda, keep in mind that you can read from sda directly, such as when reading the partition table. [ 2 statistics every two seconds -d report disk statistics -d 2 only disk statistics every two seconds -p ALL all of the partition information ] ( tps average number of data transfers per second kB_read/s average number of kilobytes read per second kB_wrtn/s average number of kilobytes written per second kB_read total number of kilobytes read kB_wrtn total number of kilobytes written ) ( iotop simple top-like I/O monitor )

time command

Find out how much CPU time a command uses during its lifetime. User time (user) is the number of seconds that the CPU has spent running the program’s own code. The system time (sys) is how much time the kernel spends doing the process’s work (reading files and directories,…). The real time (real) is the total time it took to run the process from start to finish, including the time that the CPU spent doing other tasks. Subtracting the user and system time from real time can give you a general idea of how long a process spends waiting for system and external resources. For example, the time spent waiting for a network server to respond to a request would show up in the elapsed time, but not in the user or system time.

Systemd

journalctl -f

Show only the most recent journal entries, starting with the oldest, and continuously (real-time) print new entries as they are appended to the journal. [ -S (since) entries on or newer than the specified date -f only the most recent entries -u specified systemd unit -U until that time -g grep -r reverse output -k only kernel messages -n lines -b start of the current boot -F all possible data values the specified field ] ( journalctl -S -4h messages from the past 4 hours in current time zone ) ( journalctl -S 06:00:00 specific hour ) ( journalctl -S 13:30:00 -U 14:30:00 specific time span ) ( journalctl -S 2020-01-14 from that day ) ( journalctl -S 2020-01-13 -U 2020-01-14 specific day ) ( journalctl -S '2020-01-14 14:30:00' specific hour and day ) ( journalctl --unit=sshd.service view all of a unit’s messages ) ( journalctl -F _SYSTEMD_UNIT list all units in the journal ) ( journalctl -u cron.service filter by systemd unit ) ( journalctl _PID=123 search for messages from process ID 123 ) ( journalctl -g 'kernel.*memory' contain kernel followed somewhere by memory ) ( journalctl -r -b -1 check whether the machine shut down cleanly on the last cycle ) ( journalctl -N list all available fields ) ( journalctl SYSLOG_IDENTIFIER=sudo find the sudo logs ) ( tail -f /var/log/messages watch what the system is doing in near real-time )

systemctl list-units

List of active units. [ list-units list of active units –all all units –full full names of the units list-timers list timer units currently in memory –state=inactive limit display to inactive units ] ( systemctl --user list-units talk to the service manager of the calling user, rather than the service manager of the system (some units are shown here and not if you use list-units only) ) ( systemctl list-unit-files list all installed unit files ) ( systemctl --type=service show all service units ) ( journalctl --unit=sshd.service view all of a unit’s messages ) ( service --status-all list of all running and not running services using System V )

systemctl status sshd.service

Getting the status of a sshd.service unit. [ status status information ]

systemctl start unit

Activate unit. [ start activate one or more units stop deactivate one or more units restart stop and then start reload asks all units listed on the command line to reload their configuration ] ( systemctl stop unit deactivate one unit specified ) ( systemctl restart unit stop and then start one unit specified ) ( systemctl reload unit reloads just the configuration for unit ) ( systemctl daemon-reload reloads all unit configurations )

systemctl enable unit

Enable one or more units or unit instances. This will create a set of symlinks, as encoded in the [Install] sections of the indicated unit files. After the symlinks have been created, the system manager configuration is reloaded (in a way equivalent to daemon-reload), in order to ensure the changes are taken into account immediately. ( systemctl disable unit disable one unit or unit instances )

systemctl cat unit

Show backing files of one or more units. Prints the “fragment” and “drop-ins” (source files) of units.

systemctl list-jobs

Check the current jobs.

systemctl -p UnitPath show

Check the current systemd configuration search path. ( pkg-config systemd --variable=systemdsystemunitdir see the system unit ) ( pkg-config systemd --variable=systemdsystemconfdir see the system configuration directories )

systemd-analyze

Used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager, and to verify the correctness of unit files. ( systemd-analyze time prints the time spent in the kernel before userspace has been reached, the time spent in the initial RAM disk (initrd) before normal system userspace has been reached, and the time normal system userspace took to initialize ) ( systemd-analyze plot >bootup.svg plot a bootchart )

systemctl poweroff

Shuts down system. ( systemctl halt halts system ) ( systemctl reboot reboots system ) ( systemctl suspend suspends system ) ( systemctl hibernate hibernates system )

I/O

command1 | command2

Send the standard output of command1 to the standard input of command2.

command > file

Send the output of command to a file instead of the terminal. The shell creates file if it does not already exist. If file exists, the shell erases (clobbers) the original file first. ( command >> file append the output to the file instead of overwriting it ) ( command 2> error send the standard error to error ) ( command 2>> error append the standard error to error ) ( echo "Hello" > #<buffer test-buffer> send the stdout to Emacs buffer ) ( ls >> #<buffer *scratch*> append the stdout to Emacs buffer )
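The difference between > (truncate) and >> (append) in one runnable snippet (the /tmp file name is made up for the demo):

```shell
echo "first"  > /tmp/redir_demo.txt    # > creates the file (or clobbers it)
echo "second" >> /tmp/redir_demo.txt   # >> appends instead of overwriting
cat /tmp/redir_demo.txt                # prints: first, then second
```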

command < file

To channel a file to a program’s standard input. ( <file command another syntax ) ( cat file | command equivalent command ) ( head < /proc/cpuinfo see the file header )
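For example, sort reads its standard input when given no file argument, so the redirection supplies the data (file name made up for the demo):

```shell
printf 'banana\napple\ncherry\n' > /tmp/fruits.txt
sort < /tmp/fruits.txt    # reads the file on stdin; prints apple, banana, cherry
```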

command > file 2> error

Send standard output to file and standard error to error.

command > file 2>&1

Send the standard error to the same place as stdout, in this case both to file. ( command 2>&1 > file there’s no effect because both (stderr) and (stdout) are already going to the terminal. Then > file redirects (stdout) to file. But (stderr) is still going to the terminal ) ( command &> file redirect both standard output and standard error to the file, the same as (command > file 2>&1) )
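The merge can be verified by running a command that writes to both streams and checking that both lines land in the file (paths are made up for the demo):

```shell
# The braces group two commands, one writing to each stream.
{ echo "to stdout"; echo "to stderr" >&2; } > /tmp/both.log 2>&1
cat /tmp/both.log    # both lines are in the file
```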

command < file

Change the source of standard input from the keyboard to the file.

echo “Debugging” >&2

Send the messages to standard error to separate them from normal output. Useful for debugging in scripts.
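Because command substitution captures only stdout, debug messages sent to stderr stay out of the captured value; a sketch:

```shell
# $( ) captures stdout only; the debug line goes to stderr
# (silenced here with 2>/dev/null just to keep the demo quiet).
value=$( { echo "debug: computing" >&2; echo 42; } 2>/dev/null )
echo "$value"    # prints: 42
```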

tee

Read from standard input and write to standard output and files. ( ls /usr/bin |tee file |grep zip capture the entire directory listing to the file before grep filters the pipeline’s contents )
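A minimal pipeline showing that tee copies the stream to a file while still passing it downstream (file name made up for the demo):

```shell
printf 'alpha\nbeta\n' | tee /tmp/tee_copy.txt | grep beta   # prints: beta
cat /tmp/tee_copy.txt    # the file holds both lines
```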

grep pattern file_match file_doesnt_exist

Command to test redirections. The first line sends the match to standard output (terminal screen) and the second line sends the error to standard error (terminal screen).

Utilities

whoami

Print the username of the current user. ( who print all usernames currently logged in ) ( groups see what group you’re in )

id

Print user and group id’s (uid & gid). ( id user print user and group id’s (root only) ) ( groups see what group you’re in )

adduser

Add a user or group to the system. [ –ingroup set the group –group add a user group –uid choose UID –home override the home directory –shell override the shell –system add the system user ] ( adduser name add a normal user ) ( adduser --group name add a user group (addgroup) ) ( addgroup --system name add a system group )

deluser

Remove a user or group from the system. [ –force name remove the root account –remove-home name delete the user and his home directory –remove-all-files name delete the user and all his files –backup name backup all files before deleting ] ( deluser name remove a user from the system ) ( deluser --group name remove group from the system ) ( deluser group name remove a user from specific group )

passwd

Change the password of the current user. ( passwd user change password of the user )

sudo

Execute a command as the superuser (administrator mode). ( su start a shell for the superuser ) ( su - start a shell for the superuser with their environment loaded ) ( su user allows commands to be run as a substitute user )

chsh

Change login shell. ( chfn change real user name and information ) ( tty print the current terminal’s name )

shutdown -h now

Power off the machine immediately. now is an alias for “+0”, i.e. for triggering an immediate shutdown. If no time argument is specified, “+1” is implied. [ -h equivalent to –poweroff -r reboot -f force ] ( poweroff equivalent to above command ) ( systemctl poweroff equivalent to above command ) ( shutdown -h +5 halt in 5 minutes ) ( shutdown -r reboot the machine ) ( reboot equivalent to above command ) ( shutdown -r now reboot the machine now )

at

Queue, examine, or delete jobs for later execution. Reads the commands from the standard input at a specified time. End the input with CTRL-D. This command is used to run a job once in the future without using cron. ( atq check that the job has been scheduled ) ( atrm remove job ) ( at 22:30 30.09.15 schedule jobs days into the future ) ( # systemd-run --on-calendar='2022-08-14 18:00' /bin/echo this is a test creates a transient timer unit; these systemd timer units are a substitute for the at command and can be viewed with systemctl list-timers )

Info

man command

See the manual page for the command and configurations files. [ -k search by keyword (same as apropos command) ] ( man -k sort looking for a command to sort something ) ( man 5 passwd read the /etc/passwd file description ) ( info command access an info manual ) ( apropos command searches the descriptions for instances of keyword )

whereis command

Locate the binary, source, and manual page files for a command. ( which command display an executable’s location ) ( type command indicate how a command name is interpreted ) ( whatis display one-line manual page descriptions ) ( help command get help for shell builtins )

date

Print or set the system date and time. ( date +%s current time as the number of seconds since 12:00 midnight on January 1, 1970, UTC ) ( tzselect help you identify a time zone file ) ( export TZ=US/Central use a time zone other than the system default for just one shell session )
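Epoch seconds and the TZ trick can be checked in two lines; a sketch assuming GNU date (-d is a GNU extension):

```shell
date -u -d @0 +%Y-%m-%d   # the epoch itself: prints 1970-01-01
TZ=UTC date +%Z           # TZ overrides the zone for one command: prints UTC
```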

uname -a

Print system information. [ -a all information -r kernel release -n network node hostname -p processor type -o operating system ] ( lsb_release -a system information like distribution name ) ( cat /proc/version version of the Linux kernel used, its name, the version of the compiler used ) ( dpkg -l | egrep "linux-(header|image)" list all kernels installed )

history

History of commands that have been entered. ( history | grep echo list the command used with echo ) ( !88 expand into the contents of the 88th line in the history list ) ( script file record an entire shell session and store it in a file ) ( cat typescript read the scriptfile produced when executing the script command )

fc-list

List available fonts. ( fc-list :spacing=mono view a list of monospaced client-side fonts ) ( fc-match -s helvetica show an ordered list of fonts matching a pattern ) ( fc-cache -fv rebuilds cached list of fonts in ~/.cache/fontconfig )

set

Show a list of environment variables, shell variables, and shell functions. [ -e exit immediately if a command exits with a non-zero status -C disallow existing regular files to be overwritten by redirection of output -x print commands and their arguments as they are executed ] ( printenv only display the environment variables ) ( printenv variable list the value of a specific variable ) ( set -x print all command (useful to debug) ) ( set -C avoid clobbering in bash. In some commands like (command > file) if file exists, the shell erases (clobbers) the original file first. Some shells have parameters that prevent clobbering )
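The noclobber behaviour of set -C can be seen in a few lines; >| explicitly overrides it (the /tmp file name is made up for the demo):

```shell
echo original > /tmp/clobber_demo
set -C                            # refuse to overwrite existing files with >
echo new > /tmp/clobber_demo 2>/dev/null || echo "blocked by noclobber"
echo new >| /tmp/clobber_demo     # >| forces the overwrite despite -C
set +C
cat /tmp/clobber_demo             # prints: new
```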

Miscellaneous

(cd dir; ls)

Executes the command ls while in dir and leaves the original shell intact. ( (PATH=/usr/confusing:$PATH; uglyprogram) temporarily add a component to the path that could cause problems if made a permanent change ) ( PATH=/usr/confusing:$PATH uglyprogram equivalent command that avoids the subshell )
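A runnable check that the subshell's cd does not affect the parent shell (directory name made up for the demo):

```shell
mkdir -p /tmp/subshell_demo
before=$(pwd)
(cd /tmp/subshell_demo && pwd)    # the subshell changes directory and exits
[ "$(pwd)" = "$before" ] && echo "parent directory unchanged"
```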

pushd path

Save the path. ( popd return to the save path )

exit

Cause normal process termination. ( exec ls hack to close the shell )

source file

Execute commands from a file in the current shell. ( source ~/.bashrc reread the modified .bashrc file )

bc

The bc program reads a file written in its own C-like language and executes it. A bc script may be a separate file, or it may be read from standard input. The bc language supports quite a few features including variables, loops, and programmer-defined functions. [ ibase=N treat all numbers as base N obase=N output all numbers in base N ] ( echo "obase=2;240" | bc -l converts 240 to binary ) ( echo "obase=10; ibase=16; FF" | bc convert FF hex to decimal ) ( echo "obase=2; ibase=16; FF" | bc convert FF hex to binary ) ( echo Five divided by two equals $((5/2)) arithmetic expansion ) ( bc <<< "2+2" the ability to take standard input means that we can use here documents, here strings, and pipes to pass scripts )
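For integer work, the shell's own arithmetic expansion is often enough without reaching for bc:

```shell
echo $((5 / 2))    # integer division: prints 2
echo $((5 % 2))    # remainder: prints 1
echo $((0xFF))     # hex constant converted to decimal: prints 255
```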

alias name=’string’

After we define our alias, we can use it anywhere the shell would expect a command. ( alias see all the aliases defined in the environment ) ( type name test if an alias name is already being used ) ( unalias name remove an alias )

python3 -m http.server 8000

This starts a basic web server on port 8000 that makes the current directory available to any browser on the network. If the machine you run this on is at address 10.1.2.4, point the browser on the destination system to http://10.1.2.4:8000.

Miscellaneous

Encryption

gpg –list-key

Show all public keys currently stored in your local GPG keyring. ( gpg --list-key keyID show the public key currently stored )

gpg –keyserver keyring.debian.org –recv-keys E145360

Fetch a key from the server keyring.debian.org and put into local GPG keyring. [ –keyserver specifies the keyserver to use for operations like fetching keys –recv-keys imports public keys from a keyserver into your local GPG keyring ]

gpg –verify SHA512SUMS.sign SHA512SUMS

Verify the authenticity and integrity of a file using a digital signature.

sha512sum file

Compute SHA512 message digest.

sha512sum -c –ignore-missing file

Verify file integrity against a checksum file. It compares the calculated SHA-512 hash of each file with the values listed in the checksum file. [ -c read SHA512 sums from the files and check them –ignore-missing skip any files listed in the checksum file that are not found on your system ] ( echo "sha256sum-of-the-file file" |sha256sum -c check a sha256 checksum given as a string and filename ) ( md5sum file compute and check MD5 message digest ) ( md5sum file1 file2 compute message digest of the files )
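A self-contained round trip with sha512sum -c (file names made up for the demo):

```shell
echo "hello" > /tmp/sum_demo.txt
sha512sum /tmp/sum_demo.txt > /tmp/sum_demo.sha512   # record the checksum
sha512sum -c /tmp/sum_demo.sha512                    # prints: /tmp/sum_demo.txt: OK
```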

Compiling

cc -o file file.c

Compile the file.c and give the name file. [ -o place the output into <file> -c compile and assemble, but do not link ] ( cc file.c compile the file.c )

cc -c file.c

Create the object files.

cc -o myprog main.o aux.o

Compile and create an executable called myprog from these two main.o, aux.o object files.

cc -o myprog object.o -lcurses

Compile and create the executable myprog with object.o object file and link against curses library. ( cc -o myprog object.o -lcurses -L/usr/junk/lib -lcrud create myprog with libcrud.a library in /usr/lib )

cc -c -I/usr/junk/include badinclude.c

If badinclude.c includes a header notfound.h that lives in /usr/junk/include, tell the compiler to add this directory to its search path.

ldd prog

Show what shared libraries a executable prog uses. ( ldd /bin/bash show shared libraries )

ldd /bin/bash
# linux-vdso.so.1 (0x00007ffe9c9ec000)
# libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f5d79c66000)
# libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5d79c60000)
# libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5d79a8c000)
# /lib64/ld-linux-x86-64.so.2 (0x00007f5d79dec000)

## what the executable knows => where ld.so finds the library

Install package

./configure –prefix=new_prefix

Configure the package changing the prefix to new_prefix. By default, the install target from an autoconf-generated Makefile uses a prefix of /usr/local. [ –bindir=dir installs executables in dir –sbindir=dir installs system executables in dir –libdir=dir installs libraries in dir –disable-shared prevents the package from building shared libraries –with-package=dir tells configure that package is in dir (this is handy when a necessary library is in a nonstandard location) ]

make

GNU make utility to maintain groups of programs. [ -n dry-run print the commands that would be executed, but do not execute them -f file read file as the makefile instead of Makefile or makefile ]

# checkinstall make install

Shows the settings pertaining to the package that you’re about to build, and gives you the opportunity to change them. When you proceed with the installation, checkinstall keeps track of all of the files to be installed on the system and puts them into a .deb file. You can then use dpkg to install (and remove) the new package.

pkg-config –list-all

Show all libraries that pkg-config knows about, including a brief description of each. ( pkg-config --libs zlib find the libraries required for a popular compression library )

patch -p0 < patch_file

Apply the patch (be careful with the current directory).

Debug

dmesg

View the messages in the kernel ring buffer. ( who -r print current runlevel )

# dbus-monitor –system

Debug probe to print message bus messages. Is used to monitor messages going through a D-Bus message bus. ( dbus-monitor --session monitor the session bus instead of the system bus )

xwininfo

Window information utility for X. Is a utility for displaying information about windows. ( xlsclients -l list of all window IDs and clients )

xev

Creates a window and then asks the X server to send it events whenever anything happens to the window (such as it being moved, resized, typed in, clicked in, etc.). One common use is to extract keycodes and key symbols for different keyboards when remapping the keyboard.

xinput –list

Show input device configuration on the machine. ( xinput --list-props 8 view properties of the device number 8 ) ( xinput --set-button-map device 3 2 1 reverse the order of mouse buttons (three-button mouse at device) )

xlogo

Program displays the X Window System logo. Useful for testing windows. ( xeyes a follow the mouse X demo )

Network

App layer

wget URL file

Download only the raw page with the URL. Allows recursive downloads, supports several protocols and is licensed under GNU GPL while curl is licensed under MIT. [ -E save HTML/CSS files with .html/.css extensions -H enable spanning across hosts when doing recursive retrieving -k make links in downloaded HTML point to local files -K when converting a file, back up the original version with a .orig suffix -p download all the files that are necessary to display a given HTML page (inlined images, sounds, and referenced stylesheets) -np don't ascend to the parent directory when retrieving recursively -c continue getting a partially-downloaded file -r recursive -l depth maximum recursion depth ] ( wget -E -H -k -K -p URL download whole page from website, this includes such things as inlined images, sounds, and referenced stylesheets ) ( wget -r -l 2 -p URL recursive download up to depth 2 plus page requisites, will cause e.g. 1.html, 1.gif, 2.html, 2.gif, 3.html, and 3.gif to be downloaded )

curl -# -o name URL

Transfer data from server with the URL and save to name file. Does not allow recursive downloading unlike wget. [ -I fetch the headers only -# progress bar -o write output to <file> instead of stdout -v verbose during the operation. Useful for debugging and seeing what’s going on “under the hood” ] ( curl --trace-ascii fileName https://eloquentjavascript.net/author record details about its communication ) ( curl -v URL | head -50 show more info like handshake, header,… )

netcat

netcat (or nc) can connect to remote TCP/UDP ports, specify a local port, listen on ports, scan ports, redirect standard I/O to and from network connections, and more. End the connection at any time by pressing CTRL-C. [ -u specifies UDP -4 for IPv4 -6 for IPv6 ] ( netcat host port open a TCP connection to a port ) ( netcat -l port_number listen on a particular port )

telnet example.org 80

User interface to the TELNET protocol to connect to example.org on port 80. To get back to the shell, press CTRL-] on a line by itself and then press CTRL-D. ( telnet localhost 22222 connect to localhost on port 22222 )

# Connect to the IANA documentation example web server.
telnet example.org 80
# Enter these two lines.
GET / HTTP/1.1
Host: example.org
# Press ENTER twice.
# To terminate the connection, press CTRL-D.

# This exercise demonstrates that:
# The remote host has a web server process listening on TCP port 80.
# telnet was the client that initiated the connection.

mail -s “Subject” [email protected] < file

Process mail messages. [ -s subject -A attach file ] ( echo | mail -s "Subject" -A file [email protected] equivalent command ) ( sendmail [email protected] reads a message from standard input until EOF or until it reads a line with only a . character, and arranges for delivery )

Transport layer

netstat

Displays the use of the network by the processes. [ -n disable hostname resolution DNS -t TCP port info -u UDP port info -l listening ports -a every active port -r kernel’s network routing table, this shows how the network is configured to send packets from network to network -6 show only IPv6 -4 show only IPv4 -i a table of all network interfaces -e extend additional information, use this option twice for maximum detail -ie network interfaces in more detail ] ( netstat -nt show the TCP connections currently open on the machine ) ( netstat -t show TCP connections with host and port names ) ( netstat -ntl list all TCP ports that your machine is listening on )

netstat   -r
Kernel IP routing table

Destination  Gateway      Genmask       Flags MSS Window irtt Iface
192.168.1.0  *            255.255.255.0     U   0      0    0  eth0
default      192.168.1.1  0.0.0.0          UG   0      0    0  eth0

# Gateway, is the name or IP address of the gateway (router) used to go
# from the current host to the destination network. An asterisk in this
# field indicates that no gateway is needed.

# lsof

Detailed list of file and network usage. [ -i list all Internet network files -n inhibits the conversion of network numbers to host names -W don’t truncate IP addresses -p PID and name program -U listing of UNIX domain socket files -P disable /etc/services port name lookups ] ( # lsof -n -P -i shows users and process IDs for server and client programs currently using or listening to ports ) ( # lsof -i show with host names and port names (slows down the output) ) ( # lsof -iTCP -sTCP:LISTEN show only the processes listening on TCP ports ) ( # lsof -iprotocol@host:port looking for a particular port (full syntax) ) ( # lsof -iTCP:ssh connections only on TCP with ssh service ) ( # lsof -iTCP:443 connections only on TCP port 443 ) ( # lsof -i6TCP:443 IPv6 connections only on TCP port 443 ) ( # lsof -U list of Unix domain sockets currently in use )

nmap host

Generic scan on a host. Powerful network exploration tool and security/port scanner.

# tcpdump

Puts the network interface card into promiscuous mode and reports on every packet that comes across (GUI alternative is Wireshark). [ -i interface listen on interface -e print the link-level header on each dump line -n don’t convert addresses -N don’t print domain name qualification of host names -c 10 print only the first 10 packets -X also print the data of each packet tcp TCP packets udp UDP packets ip IPv4 packets ip6 IPv6 packets port 80 TCP and-or UDP packets to-from port 80 host host packets to or from host net network packets to or from network or passes the filter if either the left or right condition is true and passes only if both conditions are true ] ( # tcpdump tcp only TCP packets ) ( # tcpdump udp or port 80 or port 443 web packets and UDP packets )

ss

Utility to investigate sockets, is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state information than other tools.

# iptables -L

Show the current configuration of iptables. [ -L list all rules in the selected chain -P set the policy on a chain -A INPUT appends a rule to the INPUT chain -s specifies the source IP address -j DROP tells the kernel to discard any packet matching the rule -p tcp specify TCP packets only –destination-port 25 apply only to traffic to port 25 ] ( # iptables -A INPUT -s 192.168.34.63 -j DROP drop packets from 192.168.34.63 host ) ( # iptables -P FORWARD DROP set the FORWARD chain policy to DROP ) ( # iptables -D INPUT 3 delete the third rule of the INPUT chain )

IP layer

ip

Show and manipulate network interfaces, routing, network devices and tunnels.

ip address

Show the addresses that are active on the machine. [ -6 show ipv6 configuration ] ( ifconfig equivalent command ) ( nmcli equivalent command but shows more info specially wireless connections ) ( nmcli connection show show all connections, type device uuid ) ( nmcli device status show state type connection ) ( # ip address add 192.168.1.2/24 dev eth0 add an IP address and subnet for a kernel network interface ) ( ip address show enp0s3 | grep "inet " | tr -s " " ":" | cut -d: -f3 displays its local IP on enp0s3 )

# The flag UP tells you that the interface is working.
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
fq_codel state UP group default qlen 1000
# link/ether means MAC address on the physical layer.
    link/ether 40:8d:5c:fc:24:1f brd ff:ff:ff:ff:ff:ff
    inet 10.23.2.4/24 brd 10.23.2.255 scope global noprefixroute
enp0s31f6
       valid_lft forever preferred_lft forever
    inet6 2001:db8:8500:e:52b6:59cc:74e9:8b6e/64 scope global
dynamic noprefixroute
       valid_lft 86054sec preferred_lft 86054sec
    inet6 fe80::d05c:97f9:7be8:bca/64 scope link
       valid_lft forever preferred_lft forever
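The tr/cut pipeline for extracting the local IP can be tested on a canned inet line like the one above (the address is a made-up example):

```shell
line="    inet 10.23.2.4/24 brd 10.23.2.255 scope global"
# Translate runs of spaces into single colons, then take the third field
# (field 1 is empty because the line starts with whitespace).
echo "$line" | tr -s ' ' ':' | cut -d: -f3   # prints: 10.23.2.4/24
```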

ip route

Show routing table. [ -4 restrict the output to IPv4 -6 show ipv6 configuration ] ( route -n show IP instead of attempting to show hosts and networks by name ) ( ip -4 neigh current neighbour table in kernel (ARP cache) ) ( ip neigh del host dev interface delete an ARP cache entry ) ( # ip route add default via 192.168.1.1 dev eth0 add routes, which is typically just a matter of setting the default gateway ) ( # ip route del default remove the default gateway )

ip route show
# Each line is a routing rule.
default via 10.3.2.1 dev enp37s0 proto static metric 100
# default (0.0.0.0/0) matches any address on the internet.
# This is the default route, and the address configured as the
# intermediary in the default route is the default gateway.
# The mechanism is via 10.3.2.1, indicating that traffic using the
# default route is to be sent to 10.3.2.1 (this is a router).
# dev enp37s0 indicates that the physical transmission will happen on
# that network interface.

10.3.2.0/24 dev enp37s0 proto kernel scope link src 10.3.2.4 metric 100
# 10.3.2.0/24 is a destination network, this is the host’s local subnet.
# This rule says that the host can reach the local subnet directly
# through its network interface, indicated by the dev enp37s0
# mechanism label after the destination.
netstat -rn   # same command to route -n.
Destination  Gateway      Genmask       Flags MSS Window irtt Iface
192.168.1.0  *            255.255.255.0     U   0      0    0  eth0
default      192.168.1.1  0.0.0.0          UG   0      0    0  eth0
# Gateway is the name or IP address of the gateway (router) used to go
# from the current host to the destination network. An asterisk in this
# field indicates that no gateway is needed.
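The fields in these routing lines are easy to pull apart with awk; a minimal sketch run against a captured copy of the default-route line from above (in real use, feed it the output of `ip route show default` instead):

```shell
# Sample line as printed by `ip route show` (captured above).
route_line='default via 10.3.2.1 dev enp37s0 proto static metric 100'

# Field 3 is the gateway (after "via"), field 5 the interface (after "dev").
gateway=$(echo "$route_line" | awk '/^default/ {print $3}')
iface=$(echo "$route_line" | awk '/^default/ {print $5}')
echo "gateway=$gateway iface=$iface"   # gateway=10.3.2.1 iface=enp37s0
```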

ping machine

Sends ICMP echo request packets to a host machine, asking the recipient to return each packet to the sender. A gap in the sequence numbers (icmp_req) usually means there’s some kind of connectivity problem. Packets shouldn’t be arriving out of order, because ping sends only one packet a second. If a response takes more than a second (1,000 ms) to arrive, the connection is extremely slow. The round-trip time is the total elapsed time between the moment that the request packet leaves and the moment that the response packet arrives. If there’s no way to reach the destination, the final router to see the packet returns an ICMP “host unreachable” packet to ping. [ -4 IPv4 only -6 IPv6 only -A adaptive ping -O report outstanding ICMP ECHO reply before sending next packet -a audible ping ] ( ping -OA 8.8.8.8 send ICMP reporting outstanding and adaptive ping )

traceroute machine

Print the route packets trace to network host. [ -T use TCP SYN for probes (useful for routers that don’t provide identifying info) -I use ICMP ECHO for probes ]

dig host

Dig output begins with information about the command issued and the name server(s) used, then prints the resolver flags in use, then decodes the DNS message received back as an answer. After printing the header fields and flags, the question is printed, followed by the answer, authority records, and additional records sections. Each of these sections contains zero or more resource records, which are printed in a human-readable format, beginning with the domain name, then the Time To Live, then the type code, and finally the data field. Finally, summary information is printed about how long the exchange required. [ -x dot-notation, shortcut for reverse lookups -p port sends the query to a non-standard port on the server, instead of the default port 53 ] ( host www.example.com DNS lookup utility. Find the IP address behind a domain name ) ( host 8.8.8.8 in reverse to try to discover the hostname behind the IP address ) ( whois host client for the whois directory service )

hostname

Show or set the system’s host name. [ -i the ip network address(es) of the host name -I all ip network addresses of the host ] ( hostnamectl query and change the system hostname and related settings )

nm-online

Show whether the network is up or down.

iw

Show and change kernel space device and network configuration.

curl ifconfig.me

Displays public IP.

Device

Hardware

lsscsi

List the SCSI devices on the system. The first column identifies the address of the device on the system (SCSI host adapter number : SCSI bus number : device SCSI ID : LUN logical unit number), the following columns describe what kind of device it is and its vendor and model, and the last column shows where to find the device file.

lspci

Lists connected PCI devices. ( lspci | egrep "3D|Display|VGA" display the graphics card model ) ( lspci | grep -i "net" | cut -d: -f3 show the Wifi card model ) ( lspci | grep -i audio | cut -d: -f3 show the soundcard model )

lsusb

Lists connected USB devices.

lsblk

Prints all block devices (except RAM disks) in a tree-like format by default. ( lsblk -S info about SCSI devices only (model, serial, vendor,…) ) ( lsblk -o NAME,MODEL,SERIAL,SIZE,STATE -d show all block devices (name, model, size,…) ) ( # blkid view a list of devices and the corresponding filesystems and UUIDs on the system )

lscpu

Display information about the CPU architecture.

dmidecode

Display table that contains a description of the system’s hardware components, as well as other useful pieces of information such as serial numbers and BIOS revision. ( # dmidecode --type memory display the memory description ) ( # dmidecode --type system display the system description ) ( lshw -short display the system info ) ( inxi -F display a full system information )

cat /proc/cpuinfo

Displays processor information. ( cat /proc/partitions displays mounted partitions )

Utilities

dd if=/dev/zero of=new_file bs=1024 count=1

Copies a single 1024-byte block from /dev/zero (a continuous stream of zero bytes) to new_file. Warning: always double-check input and output before pressing ENTER! dd (“data definition”, jokingly “disk destroyer”) copies data in blocks of a fixed size. This is extremely useful when you are working with block and character devices. Its sole function is to read from an input file or stream and write to an output file or stream, possibly doing some encoding conversion on the way. One particularly useful dd feature with respect to block devices is that you can process a chunk of data in the middle of a file, ignoring what comes before or after. [ if=file input file, default is the standard input of=file output file, default is the standard output bs=size block size ibs=size, obs=size input and output block sizes bs same block size for both input and output count=num total number of blocks to copy skip=num skip past the first num blocks in the input file or stream, and do not copy them to the output ] ( dd if=input_file of=output_file [bs=block_size [count=blocks]] syntax ) ( dd if=/dev/sdb of=/dev/sdc if we attached two drives of the same size to the computer, we can copy (clone) everything on the first drive to the second drive ) ( dd if=/dev/sdb of=flash_drive.img if only the first device were attached to the computer, we can copy (clone) its contents to an ordinary file for later restoration or copying )
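The block arithmetic (bs, count, skip) can be checked safely on ordinary files; a small sketch using throwaway files under /tmp:

```shell
# 4 blocks x 512 bytes of zeros = 2048 bytes.
dd if=/dev/zero of=/tmp/dd_demo.img bs=512 count=4 2>/dev/null
full=$(wc -c < /tmp/dd_demo.img)

# skip=2 ignores the first 2 input blocks, so only 1024 bytes are copied.
dd if=/tmp/dd_demo.img of=/tmp/dd_tail.img bs=512 skip=2 2>/dev/null
tail_size=$(wc -c < /tmp/dd_tail.img)

echo "full=$full tail=$tail_size"
rm -f /tmp/dd_demo.img /tmp/dd_tail.img
```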

udevadm monitor

To monitor uevents. It will print the received events for: UDEV, the event which udev sends out after rule processing, and KERNEL, the kernel uevent. [ --kernel see only kernel events --udev see only udevd processing events ] ( udevadm monitor --kernel watch the kernel event changes about partitions ) ( udevadm info --query=all --name=/dev/sda show the path and several other interesting attributes of the device )

sync

Synchronize cached writes to persistent storage. If for some reason you can’t unmount a filesystem before you turn off the system, be sure to run sync first.

iostat -p ALL

Central Processing Unit (CPU) statistics and input/output statistics for ALL devices and partitions. The sum of the partition columns won’t necessarily add up to the disk column. Although a read from sda1 also counts as a read from sda, keep in mind that you can read from sda directly, such as when reading the partition table. [ 2 statistics every two seconds -d report disk statistics -d 2 disk statistics only, every two seconds -p ALL all of the partition information ] ( tps average number of data transfers per second kB_read/s average number of kilobytes read per second kB_wrtn/s average number of kilobytes written per second kB_read total number of kilobytes read kB_wrtn total number of kilobytes written )

udevadm

Controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms.

mkswap

Sets up a Linux swap area on a device or in a file. ( swapon enable/disable devices and files for paging and swapping ) ( swapon --show display a definable table of swap areas )

Filesystem

# mkfs -t ext4 /dev/sdf2

Create an ext4 filesystem on the partition /dev/sdf2. [ -t type -n check without modifying anything ]

mount

Show the current filesystem status of the system.

# mount -t type device mountpoint

Mount a filesystem manually. [ -t filesystem type -r mounts the filesystem in read-only mode UUID mount a filesystem by its UUID rw mounts the filesystem in read-write mode exec enables execution of programs on the filesystem nosuid disables setuid programs ] ( # mount -t ext4 /dev/sdf2 /home/extra mount the Fourth Extended filesystem ) ( # mount UUID=b600fe63-d2e9-461c-a5cd-d3b373a5e1d2 /home/extra mount a filesystem by its UUID ) ( # mount -n -o remount / remounts the root directory in read-write mode )
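Mounts made this way do not survive a reboot; persistent mounts belong in /etc/fstab. A sketch entry reusing the example UUID from above (take the real UUID of your filesystem from blkid):

```shell
# /etc/fstab: one filesystem per line.
# <device>                                  <mountpoint> <type> <options> <dump> <pass>
UUID=b600fe63-d2e9-461c-a5cd-d3b373a5e1d2   /home/extra  ext4   defaults  0      2
```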

# umount mountpoint

Unmount a filesystem.

# fsck -n /dev/sdb1

Check and repair a Linux filesystem. Never use fsck on a mounted filesystem. [ -n check the filesystem without modifying anything ] ( e2fsck check a Linux ext2/ext3/ext4 file system ) ( debugfs interactive file system debugger ) ( debugfs undelete the specified inode number (recover deleted files) )

Partition

# parted -l

Show system’s partition table. ( # fdisk -l equivalent command )

# fdisk /dev/sdd

Create partitions on the device interactively.

Tools

apt

apt list

Display a list of packages satisfying certain criteria. It supports glob(7) patterns for matching package names. [ --installed list installed --upgradeable list upgradeable --all-versions list all available versions -a versions and archive areas of available packages ] ( apt list '?obsolete' list obsolete packages, useful after a system upgrade (it’s possible that some packages are no longer available in the new repositories) )

apt-mark

Show, set and unset various settings for a package. ( apt-mark showauto print a list of automatically installed packages ) ( apt-mark showmanual print a list of manually installed packages )

apt update

Retrieve a fresh copy of the package lists from the configured sources. Users are recommended to use the apt(8) command for interactive usage and the apt-get(8) and apt-cache(8) commands in shell scripts. ( apt update && apt full-upgrade && apt autoclean update the repositories information + update your system + clean the packages in cache )

apt upgrade

Install the newest versions of all packages currently installed from a repository without removing any (safe upgrade). It installs available upgrades of all packages currently installed on the system from the sources configured via sources.list. New packages will be installed if required to satisfy dependencies, but existing packages will never be removed. If an upgrade for a package requires the removal of an installed package, the upgrade for this package isn’t performed. ( apt-get dist-upgrade update packages (has a “smart” conflict resolution system) ) ( apt full-upgrade performs the function of upgrade but will remove currently installed packages if this is needed to upgrade the system as a whole (usually issued while doing system upgrades) ) ( apt upgrade name upgrade the package name ) ( apt --simulate upgrade see which packages would be installed )

apt search pattern

Can be used to search for the given regex term(s) in the list of available packages and display matches. ( apt-cache search pattern equivalent command ) ( dpkg-query -l '*<name>*' search packages only by name (note the * in the command show all possible name packages) ) ( dpkg -S /path/to/file find out what debian package a particular file belongs to ) ( apt-cache rdepends name determine which packages depend on a specific package )

apt show name

Show information about the given package(s) including its dependencies, installation and download size, sources the package is available from, the description of the packages content and much more. ( apt-cache show name equivalent command ) ( apt-get showpkg name show a package’s dependencies ) ( apt show name -a view all available versions of a package ) ( apt info name detailed information about a package )

apt-cache policy name

Displays the version of a package and whether it is installed on the system. ( dpkg -s name determine whether a package is installed )

apt install name

Download the package from a repository and install it with full dependency resolution. ( apt-cache depends name list the dependencies of a package ) ( apt reinstall name reinstall a package ) ( apt reinstall <package> $(apt-cache depends --recurse --installed <package> | grep '[ ]') reinstall a package and all dependencies ) ( apt install name/bookworm-backports install name from bookworm-backports, preferring dependencies from stable (bookworm is the stable version) ) ( apt install name/bookworm-backports dependency/bookworm-backports install a newer version of name and all its reverse dependencies from bookworm-backports instead of the older one from the Debian stable release )

apt remove name

Uninstall the package. Note that removing a package leaves its configuration files on the system. You can get rid of these leftovers by calling purge, even on already removed packages. Note that this does not affect any data or configuration stored in your home directory. ( apt purge name remove the package and all its configuration and data files ) ( apt autoremove name automatically remove unnecessary packages that no other packages depend on, i.e. packages that were automatically installed to satisfy dependencies (packages you have installed explicitly/directly via install are never proposed for automatic removal) ) ( apt-get --simulate remove name no action; perform a simulation of events that would occur based on the current system state but do not actually change the system ) ( apt remove '?obsolete' remove obsolete packages, useful after a system upgrade (it’s possible that some packages are no longer available in the new repositories) )

apt clean

Remove local cached package files already installed. The commands below are listed to continue with the total system cleanup. ( apt autoremove --purge delete useless packages, unnecessary dependencies, and old configuration files ) ( apt autoclean remove the local cache of obsolete packages, retaining a local cache of only the most recent versions ) ( dpkg --list | awk '/^rc/ {print $2}' and apt purge $(dpkg --list | awk '/^rc/ {print $2}') list and purge configuration files that have remained in place despite the removal of applications (note the /^rc/ pattern in the commands) ) ( rm -Rf ~/.local/share/Trash/* empty the user trash bins ) ( rm -Rf ~/.cache/* if (using the tools described above) you detect that a folder grows too large, remove this cache folder ) ( rm -Rf ~/.thumbnails if necessary, remove the thumbnails folder )
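The /^rc/ pattern above matches dpkg’s status column: “ii” means installed, “rc” means removed but with configuration files remaining. A self-contained sketch against canned `dpkg --list`-style output (the package lines are made up):

```shell
# Fake `dpkg --list` body; column 1 is the status, column 2 the package name.
dpkg_sample='ii  bash       5.2-6   amd64  GNU Bourne Again SHell
rc  oldpkg     1.0-1   amd64  removed package with leftover config
ii  coreutils  9.1-1   amd64  GNU core utilities'

# Same filter as in the cleanup command above.
leftovers=$(echo "$dpkg_sample" | awk '/^rc/ {print $2}')
echo "$leftovers"   # oldpkg
```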

apt depends name

List all packages needed by the one given. ( apt rdepends name reverse dependencies, list all packages that need the one given ) ( apt-cache rdepends --installed name like above but limit it to packages that are installed on your system )

dpkg

dpkg -l

List packages matching a given pattern. [ -S search for a filename among installed packages -l list packages by pattern -L list files installed to your system from package-name ] ( dpkg -s name determine whether a package is installed ) ( dpkg -S name search installed packages for a filename (shows the path) ) ( dpkg -L name list files installed by a package ) ( dpkg --get-selections > packages.dpkg save a list of installed packages (useful if you find you need to reinstall Debian); to install all packages on a new system: apt install $(cat packages.dpkg | awk '{print $1}') )

dpkg-query

Show information about packages listed in the dpkg database. ( dpkg-query --show -f '${Package} ${Version} ${Section}\n' list packages installed from a particular sections/component (main, contrib, non-free, …) ) ( dpkg-query -l '*' check status of all packages) ( dpkg-query -l '*<name>*' search packages only by name (note the * in the command show all possible name packages) )

dpkg -i name.deb

Install a package from a package file. Dpkg is a software utility handling packages, as apt does, but without managing dependencies. This means that if you use dpkg to install external packages, you need to install the dependent packages one by one from your terminal. An error message will let you know if some dependencies are missing. Simply install them the classic way with apt (apt install dependency_1 dependency_2 …), then relaunch the installation of your external package (dpkg -i name). [ -I package information -c list contents of a package file -S which package a file belongs to -V audit check sums of installed packages ] ( dpkg -i name.deb update an existing installation of a package to the new version ) ( dpkg --purge package_name remove an external package )

npm

npm ls

List locally installed packages present in the node_modules folder of a project. [ -g global --depth n show dependencies down to depth n ] ( npm ls -g list globally installed packages ) ( npm list --depth 0 list installed packages and direct dependencies, without dependencies of dependencies ) ( npm config get prefix show the folder where the global packages are installed ) ( npm list name search dependencies of this package ) ( npm root -g check where the packages are installed ) ( npm ls --all -g show all global dependencies ) ( npm ls --package-lock-only show the dependency tree of a project that doesn’t contain a node_modules folder, based on its package-lock.json file )

npm search name

Search the name package. ( npm s name equivalent command ) ( npm show name show info about package ) ( npm show @vue/cli versions shows the available versions of the @vue/cli package )

npm install name

Add the name dependency to your project. Installs the package (using ^, which only allows minor version upgrades and patches) and all its dependencies in the node_modules/ folder, while updating the package.json file by adding the package and its version as a dependency of the project. [ -g global, without associating it with the project -D or --save-dev as a development dependency -P or --save-prod as a production dependency (default) install or i or add install ] ( npm i name equivalent command ) ( npm install install all the dependencies in your project ) ( npm install lit-element@2.3.1 install lit-element version 2.3.1 (won’t allow upgrades to other versions) ) ( npm install --save-prod name switch an installed package from development to production ) ( npm outdated check for outdated packages ) ( npm dedupe check packages and unify dependencies )
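The difference between a caret range and an exact pin is visible in package.json itself; a sketch that greps the version specifiers out of a canned dependencies block (the package names are just examples):

```shell
# Minimal fake package.json dependencies block.
pkg_json='{
  "dependencies": {
    "left-pad": "^1.3.0",
    "lit-element": "2.3.1"
  }
}'
# "^1.3.0" permits any compatible 1.x >= 1.3.0; a bare "2.3.1" pins that
# exact version. Pull out just the quoted version specifiers:
echo "$pkg_json" | grep -o '"[~^]*[0-9][0-9.]*"'
```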

npm uninstall name

Uninstall package. [ uninstall or r or remove or unlink or rm uninstall ] ( npm r name equivalent command ) ( npm uninstall -g name uninstall packages globally )

npm init -y

Initialize a project in npm non-interactively. [ -y non-interactively ]

npm test

Run this project’s tests.

npm run name

Run the script named name. ( npm run shows available tasks ) ( cat package.json | jq .scripts shows the scripts of package.json ) ( npm run start usually used for project startup tasks ) ( npm run dev usually used to start local development servers ) ( npm run serve usually used to start local development servers ) ( npm run build builds the final files to upload to the production web ) ( npm run test usually starts a battery of tests ) ( npm run deploy usually deploys the webapp built with build to the production web )

npm command -h

Quick help on command. ( npm -l display usage info for all commands ) ( npm help term search for help on term )

npm doctor

Checks the status of the npm installation. ( npm config set prefix '~/.npm-global' changes the default path where packages are installed from usr to .npm-global; then set export PATH=~/.npm-global/bin:$PATH )

npx

Run CLI packages installed at the project level (or not installed at all). npx first checks whether the package is installed in node_modules/.bin and, if so, runs it. Otherwise it looks for a globally installed copy and runs that. If the package is not found at all, npx installs it temporarily and runs it. ( npx ./node_modules/.bin/name equivalent command ) ( cat package.json | jq .devDependencies,.dependencies show development and production dependencies ) ( cat package.json | jq .dependencies | jq -r 'keys[]' | xargs npx npm-size size of packages )

npm docs name

Access the documentation homepage of the name package. ( npm home name equivalent command ) ( npm repo name access the repository of the name package ) ( npm issues name access the name package issues page) ( npm bugs name equivalent to the above command ) ( npm audit shows info about the security problem )

rsync

rsync -b file dir/

Transfer file from the current directory to the directory dir. [ -t preserve modification times -b make backups (useful to avoid overwriting) ] ( rsync -t *.c dir/ transfer all files matching the pattern to the directory ) ( rsync -t *.c foo:dir/ transfer all files matching the pattern to the directory dir on the machine foo )

rsync file1 file2 user@host:

Copy a group of files file1 and file2 to the home directory, where user is the username on host. [ -t preserve modification times -b make backups (useful to avoid overwriting) ] ( rsync file1 file2 host: if the username is the same on the two hosts ) ( rsync -t *.c host:dir_dest/ transfer all files matching the pattern from the current directory to the directory dir_dest )

rsync -a dir/ dest_dir

Transfer everything inside dir to dest_dir. This is not an exact replica (destination may keep some files). A trailing slash on the source changes this behavior to avoid creating an additional directory level at the destination. Use a relative or absolute path, not dot notation. [ -a archive mode (causes hierarchy recursion and preservation of file attributes) -n dry run mode, perform a trial run with no changes made -v increase verbosity -vv more details --delete delete files in the destination directory that do not exist in the source directory -c computes checksums of the files to see if they’re the same --stats summary after the transfer ] ( rsync -nva dir/ dest_dir run a trial without actually copying any files ) ( rsync -a dir dest_dir transfer everything (dir folder will be inside dest_dir) ) ( rsync -a dir1 dir2 dest_dir transfer dir1 and dir2 to dest_dir ) ( rsync -a --delete dir/ dest_dir make an exact replica of the source directory, deleting files in the destination directory that do not exist in the source directory (careful with the trailing slash because you can easily remove unrelated files this way) )
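The trailing-slash rule is easy to verify on throwaway directories; a sketch assuming rsync is installed (it exits quietly if not):

```shell
command -v rsync >/dev/null || exit 0   # bail out quietly if rsync is missing

tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/a" "$tmp/b"
touch "$tmp/src/file"

rsync -a "$tmp/src"  "$tmp/a"    # no trailing slash: the dir itself is copied
rsync -a "$tmp/src/" "$tmp/b"    # trailing slash: only the contents are copied

a_top=$(ls "$tmp/a")             # src
b_top=$(ls "$tmp/b")             # file
echo "a=$a_top b=$b_top"
rm -rf "$tmp"
```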

rsync -az dir/ host:dest_dir

Copies everything inside dir to dest_dir on host without actually creating dir on the destination host (trailing-slash version). This is not an exact replica, destination may keep some files. [ -a archive mode (causes hierarchy recursion and preservation of file attributes) -n dry run mode, perform a trial run with no changes made -v increase verbosity -vv more details --delete delete files in the destination directory that do not exist in the source directory -z compress file data during the transfer -c computes checksums of the files to see if they’re the same --bwlimit limit the bandwidth --stats summary after the transfer ] ( rsync -nva dir/ host:dest_dir run a trial without actually copying any files ) ( rsync -az dir host:dest_dir transfer everything without trailing slash (dir folder will be inside dest_dir) ) ( rsync -az --delete dir/ host:dest_dir make an exact replica of the source directory, deleting files in the destination directory that do not exist in the source directory (careful with the trailing slash because you can easily remove unrelated files this way) ) ( rsync -az --exclude=.git src host: exclude anything named .git ) ( rsync -az --exclude=/src/.git src host: exclude one specific item; an absolute path that starts with a forward slash is relative to the base directory of the transfer, not the root directory of the system ) ( rsync --bwlimit=100 -a dir host:dest_dir limit the bandwidth to 100Kbps ) ( rsync -az --delete --rsh=ssh dir host:dest_dir copy using ssh as the remote shell (an ssh-encrypted tunnel securely transfers the data from the local system to the remote host) )

rsync -az host:src_dir dest_dir

Transfer src_dir on the remote system to dest_dir on the local host. [ --exclude exclude files matching PATTERN -c computes checksums of the files to see if they’re the same --ignore-existing doesn’t clobber files already on the target side --backup doesn’t clobber files already on the target but rather renames these existing files by adding a ~ suffix to their names before transferring the new files --suffix=s changes the suffix used with --backup from ~ to s --update doesn’t clobber any file on the target that has a later date than the corresponding file on the source ]

rsync somehost.mydomain.com::

List all the (listable) modules available from a particular rsync daemon by leaving off the module name.

rclone

rclone config

Enter an interactive configuration session. (https://rclone.org/docs/) ( rclone config show print (decrypted) config file, or the config for a single remote ) ( rclone config file show path of configuration file in use )

rclone lsd remote:path/to/dir

List all directory objects (containers, buckets) in the path. [ -R to make them recurse ] ( rclone ls remote: list size and path of file objects only ) ( rclone lsl remote: list all the file objects in the path with size, modification time and path ) ( rclone lsf remote: list all files and dirs in an easy-to-parse format ) ( rclone lsjson remote: list objects and directories in JSON format ) ( rclone lsd remotedrive: list all directory objects )

rclone tree

List the contents of the remote in a tree-like fashion. [ -d list directories only --level int descend only level directories deep -s size -r reverse ] ( rclone tree remotemega: --level 2 descend 2 levels (folder and subfolders) )

rclone size remote:path/dir

Return the total size and number of objects in remote:path. ( rclone about remote: return free and used size )

rclone check /local/path remote:path --size-only

Check if the files in the source and destination match. [ --size-only only compare the sizes, not the hashes as well ]

rclone mkdir remote:path

Make the path if it doesn’t already exist. ( rclone rmdir remote:path remove the path ) ( rclone delete remote:path remove the contents of path ) ( rclone purge remote:path remove the path and all of its contents )

rclone copy /local/dir remote:path/dir

Copy files from source to dest, skipping files already copied. ( rclone copy remote:file.ext /tmp/download copy the file (file.ext) on the remote into /tmp/download on the local machine ) ( rclone move source:path dest:path [flags] move files from source to dest )

rclone sync -iP /local/dir remote:path/dir

Make source (/local/dir) and dest (path/dir) identical, modifying the destination only. The destination path is used without the initial forward slash. It is always the contents of the directory that is synced, not the directory itself. (Doesn’t transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary. If you don’t want to delete files from the destination, use the rclone copy command instead.) (Source and destination paths are specified by the name you gave the storage system in the config file, then the sub path, e.g. “remote:myfolder” to look at “myfolder” in Google Drive.) [ -i interactive --dry-run test first -P view real-time transfer statistics --bwlimit 10M limit the upload and download bandwidth to 10 MiB/s ] ( rclone dedupe drive:dupes to deal with “Duplicate object/directory found in source/destination - ignoring” errors ) ( rclone bisync bidirectional synchronization between two paths ) ( rclone sync --dry-run /local/dir remote:path/dir test first ) ( rclone sync -i --bwlimit 75k:125k /local/dir remote:path/dir sync local dir to remote dir limiting the upload bandwidth to 75 KiB/s and the download bandwidth to 125 KiB/s ) ( rclone sync -i --bwlimit 10M:off /local/dir remote:path/dir sync limiting the upload bandwidth to 10 MiB/s with unlimited download bandwidth ) ( rclone sync -P ~/Frangie remotedrive:Frangie sync my Frangie folder with my Google Drive )

rclone command --help

For more information about a command.

OpenSSH

ssh remote_username@remote_host

Log in to a remote host. You may omit remote_username@ if your local username is the same as on remote_host. [ -p 123 port to connect to on the remote host ] ( ssh remote_host if the local username is the same as on remote_host ) ( ssh remote_host 'ls *' > file perform an ls on the remote system and redirect the output to a file on the local system ) ( ssh remote_host 'ls * > file' perform an ls on the remote system and redirect the output to a file on the remote machine ) ( ssh -X remote_host launch and run an X client program (a graphical application) on a remote system ) ( exit exit the ssh connection )

ssh remote_host ‘tar cf - dir’ | tar xf -

Copy a directory dir from a remote system to the local system. ( tar zcvf - dir | ssh remote_host tar zxvf - copies a directory dir to another host )
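The same tar pipe can be exercised locally by dropping the ssh in the middle, which makes it clear what each side does; a sketch copying a small tree through a pipe:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src/sub" "$tmp/dest"
echo hello > "$tmp/src/sub/file"

# Left tar writes the archive to stdout; right tar unpacks it from stdin.
# With ssh, the pipe simply crosses the network:
#   ssh remote_host 'tar cf - dir' | tar xf -
(cd "$tmp" && tar cf - src) | (cd "$tmp/dest" && tar xf -)

cat "$tmp/dest/src/sub/file"   # hello
```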

scp user@host:file .

Copy a file from a remote host to the current directory. SCP can only be used for transferring files, and it is non-interactive. SFTP is more elaborate, and allows interactive commands to do things like creating directories, deleting directories and files. ( scp host:file . if the local username is the same as on the remote host ) ( scp file user@host:dir copy a file from the local machine to a remote host ) ( scp user1@host1:file user2@host2:dir copy a file from one remote host to a second remote host ) ( sftp host connect to an FTP-like server )

# ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key

Create SSH protocol version 2 key. ( # ssh-keygen -t dsa -N '' -f /etc/ssh/ssh_host_dsa_key )

Image

mogrify -format png *.jpg

Convert and compress all jpg images in the current directory to png. ( convert input.png output.jpg convert the image format [ -resize 30% resize -quality 20% quality ] )

jpegoptim -d dir -m 75 image.jpg

Reduce a JPG image to a quality of 75 (suggested scale 60-80) and send it to another directory dir. Overwrites the original image if the -d dir option is not set. An interesting feature is that it accepts the exact size of the target file. [ -n simulate compression and see what the size would be -d save in another directory -S 100k try to optimize the file to a given size (a percentage can also be used, -S 30%) -m 60 sets the maximum image quality factor (60 high quality-low size, 10 low quality-high size) (disables lossless optimization mode, which is enabled by default) ] ( jpegoptim -S 20k -d dir image.jpg reduce to a specific size ) ( jpegoptim -n image.jpg only shows possible results )

pngquant -Q 70-95 image.png

Convert and lossy-compress a PNG image. Overwrites the original image if the -f --ext .png options are not set. [ -f overwrite existing output files --ext .png file extension to use for output files instead of the default ‘fs8.png’ -o file output to file name -Q min-max will use the least amount of colors required to meet or exceed the max quality (0 worst to 100 perfect) ] ( pngquant -f --ext .png -Q 70-95 image.png overwrites the original images ) ( pngquant 64 image.png resulting image will have 64 colors )

gifsicle -b --colors 256 file.gif

To optimize (compress) file.gif. Reduce the number of distinct colors in each output GIF to 256. ( gifsicle -e file.gif to explode file.gif into its component frames ) ( gifsicle -I file.gif to print information )

optipng -out output.png input.png

Reduce the PNG image to what optipng thinks is probably the most effective optimization and write it to a new output file. Overwrites the original image if the -out option is not set. [ -out write output to file -dir write the output files to directory -o select the optimization level (0 minimal effort, 1 probably the most effective, 2 and higher enable more trials) ] ( optipng -out output -o1 input equivalent command ) ( pngquant -f --ext .png image.png; optipng image.png a little more compression )

yt-dlp

sudo yt-dlp -U

Update yt-dlp program to the latest stable version. Useful when you get errors.

yt-dlp -F URL

List available formats of each video. [ -x extract audio -f 00 download format 00 from the list -f mp4 download the mp4 format -r 4M limit-rate, maximum download rate in bytes per second (here 4 MiB/s) -c continue, resume partially downloaded files/fragments ] ( yt-dlp -f 'bv*+ba' URL the best available audio and the best available video ) ( yt-dlp -f best URL the best quality of video )

yt-dlp --write-auto-subs --sub-lang en-orig --skip-download URL

Download only the automatically generated subtitle file in the original English, skipping the video itself. [ --convert-subs srt convert the subtitle file to srt -k keep original file ] ( yt-dlp --write-subs --sub-lang en-orig --skip-download URL only the manually written subtitles )

yt-dlp URL --downloader ffmpeg --downloader-args "ffmpeg_i:-ss 12 -to 123"

Download only the portion from 12 seconds to 123 seconds.

ffmpeg

ffmpeg -i name

Video info.

ffmpeg -i input.avi output.mp4

Convert an input media file to a different format, by re-encoding media streams.

ffmpeg -i input.mp4 -ss 00:00 -to 00:10 -c copy output.mp4

Cut the video. [ -ss specifies the start time, e.g. 00:01:23.000 or 83 (in seconds) -t specifies the duration of the clip (same format) -to supply the end -c copy copies the first video, audio, and subtitle bitstream from the input to the output file without re-encoding them. This won’t harm the quality and make the command run within seconds ]

ffmpeg -i input.mp4 -vn output.mp3

Convert video files to audio files. ( ffmpeg -i input.mp4 -an output.mp4 removing audio stream from a video file )

ffmpeg -i input -b:v 2500k -b:a 192k output

Change the bitrate of the video file input to 2500k and audio to 192k. [ -b:v 2500k only the video bitrate changes -b:a 192k only the audio bitrate changes -vcodec libx264 change the video codec -vcodec libx265 takes longer than libx264 but weighs less -vcodec copy -acodec mp3 changes the audio codec but keeps the video codec, useful to save time ]

ffmpeg -i input -vf scale=iw/2:ih/2 output

Resize by dividing by 2 (some codecs only allow reducing or enlarging in multiples of 4).

ffmpeg -i input.mp4 -qscale:v 2 output.jpg

Compress a video or image with quality 2. Normal range for JPEG is 2-31 with 31 being the worst quality.

ffmpeg -i input.wav -filter:a "volume=1.5" output.wav

Change volume 150%.

ffmpeg -i input.flv -vcodec libx264 output.mp4

Convert an input media file to a different video encoder libx264, which is a H.264 encoder. ( ffmpeg -i input.flv -vcodec libx264 -acodec aac output.mp4 convert an input media file to a different video codec libx264 and audio encoder aac, which is the built-in FFmpeg AAC encoder )

ffmpeg -txt_format text -i input.mp4 out.srt

Read and/or extract subtitles from embedded subtitles tracks.

docker

docker build -t hlw_test .

Build the image (reads the Dockerfile in the current directory and applies the identifier hlw_test to the image).

docker images

Verify image.

docker run -it hlw_test

Start container with the hlw_test image.

docker ps

Show the currently running containers. [ -a see all ]

docker rm <container>

Remove a terminated container. ( docker rmi remove an image )

Git

Basics

git init

Create a new subdirectory named .git that contains all necessary repository files — a Git repository skeleton. [ --bare initializes the repository without a working directory ]

git clone <url>

Get a copy of an existing Git repository. It creates a new directory, goes into it and runs git init to make it an empty Git repository, adds a remote (git remote add) to the URL that you pass it (by default named origin), runs a git fetch from that remote repository and then checks out the latest commit into your working directory with git checkout. [ -o <name> name the default remote <name> instead of origin ] ( git clone <url> <dir> clone the repository into another directory name )

git status

Determine which files are in which state. ( A added | M modified | ? not tracked ) (the left-hand column indicates the status of the staging area and the right-hand column indicates the status of the working tree). [ -s more simplified output ] ( git ls-files to take a more raw look at what your staging area looks like )

git add <file>

Specify the file file to be tracked or staged or merge-conflicted. ( git add -A stage all (new, modified, deleted) files ) ( git add . stage all (new, modified, deleted) files in current folder (not higher directories) ) ( git add --ignore-removal . stage new and modified files only (not delete files) ) ( git add -u stage modified and deleted files only (not new files) )

git commit -m "message"

Commit the changes. Just changes to anything that was staged. Records a new permanent snapshot in the database and then moves the branch pointer on the current branch up to it. [ -v puts the diff of the changes -m type commit message inline -a automatically stage every file that is already tracked before doing the commit (includes all changed files. Skip the git add) ] ( git commit -a -m 'whatever' commit, automatically stage and message inline ) ( git commit -a -S -m 'Signed commit' signing commits directly with GPG key )
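The add/commit cycle above can be sketched end to end. This is a minimal demo in a throwaway repository; the file name README and the identity demo@example.com are invented for the example.

```shell
# Sketch: the basic add -> commit cycle in a throwaway repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com

echo 'hello' > README
git status -s                 # "?? README"  -> untracked
git add README
git status -s                 # "A  README"  -> staged
git commit -qm 'first commit'
git status -s                 # no output   -> working tree clean
```

After the commit, git status -s prints nothing because the staged snapshot matches the working tree.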

git show

Show various types of objects in a simple and human readable way. Normally you would use this to show the information about a tag or a commit.

git log

Lists the commits made in that repository in reverse chronological order. ( git log --oneline --decorate --graph --all better view ) ( git shortlog is used to summarize the output of git log )

git pull

Fetches data from the server you originally cloned from and automatically tries to merge it into the code you’re currently working on. (automatically sets up the local master branch to track the remote master branch on the server you cloned from) [ --verify-signatures inspect and reject when merging a commit that does not carry a trusted GPG signature ] ( git fetch ; git merge FETCH_HEAD equivalent command )

git push <remote> <branch>

Push branch to the server remote . Push any commits you’ve done back up to the server. Calculates what your local database has that the remote one does not, and then pushes the difference into the other repository. [ -u configures the branches for easier pushing and pulling later -f force; needed when you rebased the branch and want to replace the branch on the server with a commit that isn’t a descendant of it ] ( git push origin name:othername this format pushes a local branch into a remote branch that is named differently (rename) ) ( git push origin --delete serverfix delete the serverfix branch from the server )

Remotes

git remote -v

Lists the shortnames of each remote specified. [ -v show URLs that Git has stored for the shortname ]

git ls-remote

Get a list of all the branches and tags and other references in the repository. (If the repository is on GitHub and you have any Pull Requests that have been opened, you’ll get these references that are prefixed with refs/pull/ . These are basically branches, but since they’re not under refs/heads/ you don’t get them normally when you clone or fetch from the server — the process of fetching ignores them normally)

git remote show <remote>

Show more information about particular remote.

git remote add <shortname> <url>

Add a new remote Git repository as a shortname (link to a repository). Is a management tool for your record of remote repositories. It allows you to save long URLs as short handles.

git remote rename <name1> <name2>

Change a remote’s shortname, renaming name1 to name2 . (note that this also changes all remote-tracking branch names: what was referenced at name1/master is now at name2/master ) ( git remote set-url origin NEW_URL update an existing local clone to point to the new repository URL (e.g. after renaming the host repository in the GitHub settings) )

git remote rm name

Remove a remote name . All remote-tracking branches and configuration settings associated with that remote are also deleted. ( git remote remove name equivalent command )

git fetch <remote>

Fetches any new work that has been pushed to that server since you cloned (or last fetched from) it. (only downloads the data to your local repository — it doesn’t automatically merge it with any of your work or modify what you’re currently working on) [ –all totally up to date ahead and behind numbers ] ( git fetch origin refs/pull/958/head fetching the reference directly (connect to the origin remote, and download the ref named refs/pull/958/head) )

git pull

Fetches data from the server you originally cloned from and automatically tries to merge it into the code you’re currently working on. (automatically sets up the local master branch to track the remote master branch on the server you cloned from) [ --verify-signatures inspect and reject when merging a commit that does not carry a trusted GPG signature ] ( git fetch ; git merge FETCH_HEAD equivalent command )

git push <remote> <branch>

Push branch to the server remote . Push any commits you’ve done back up to the server. Calculates what your local database has that the remote one does not, and then pushes the difference into the other repository. [ -u configures the branches for easier pushing and pulling later -f force; needed when you rebased the branch and want to replace the branch on the server with a commit that isn’t a descendant of it ] ( git push origin name:othername this format pushes a local branch into a remote branch that is named differently (rename) ) ( git push origin --delete serverfix delete the serverfix branch from the server )

git request-pull origin/master myfork

Take the base branch into which you want your topic branch pulled and the Git repository URL you want them to pull from, and produces a summary of all the changes you’re asking to be pulled.

Difference

git diff

Show difference between working environment and staging area. Show exactly what was changed. (doesn’t show all changes made since last commit — only changes that are still unstaged) ( git diff --staged between staging area and last commit ) ( git diff master branchB between two commits ) ( git diff A...B between branches )

git diff master...contrib

Shows you only the work your current topic branch has introduced since its common ancestor with master . (to do a diff between the last commit of the branch you’re on and its common ancestor with another branch)

git diff --ours

Show what the merge introduced. To compare your result to what you had in your branch before the merge. ( git diff --theirs how the result of the merge differs from what was on their side ) ( git diff --base how the file has changed from both sides )

git diff -b

Filter out whitespace differences. ( git diff --check look for possible whitespace issues before committing )

git difftool

Launches an external tool to show difference between two trees.

Branching

git branch

List current branches. [ -v last commit on each branch --merged show which branches are already merged into the current branch --no-merged filter branches not yet merged --all all branches -vv see what tracking branches you have set up. List out your local branches with more information including what each branch is tracking and if your local branch is ahead, behind or both ]

git branch name

Create a new branch called name . [ -D force remove -f Reset <branchname> to <start-point>, even if <branchname> exists already. Without -f, git branch refuses to change an existing branch -m Move/rename a branch, together with its config and reflog -M shortcut for -m -f allow renaming the branch even if the new branch name already exists ] ( git branch -d namebranch delete the branch namebranch ) ( git branch --move bad-name corrected-name replaces bad-name branch with corrected-name branch, but this change is only local for now ) ( git push --set-upstream origin corrected-name corrected branch name on the remote ) ( git push origin --delete bad-branch-name delete bad name from remote ) ( git checkout -b <branch> <remote>/<branch> to set up a local branch with a different name than the remote branch. Then, local branch <branch> will automatically pull from <remote>/<branch> )

git switch -c <newbranch>

Create a new branch and switch to it. [ -c create ] ( git branch newbranch ; git switch newbranch equivalent command ) ( git switch <name> to switch to an existing branch ) ( git switch - return to previously checked out branch )

git checkout

Switch branches and check content out into your working directory.

git checkout --track origin/serverfix

Start tracking the remote branch serverfix .

git branch -u origin/serverfix

If you already have a local branch and want to set it to a remote branch you just pulled down, or want to change the upstream branch you’re tracking. [ -u short for --set-upstream-to ]

Merging & Rebasing

git merge namebranch

Merge the namebranch with the current branch. [ --squash takes all the work on the merged branch and squashes it into one changeset producing the repository state as if a real merge happened, without actually making a merge commit. This means your future commit will have one parent only and allows you to introduce all the changes from another branch and then make more changes before recording the new commit --verify-signatures inspect and reject when merging a commit that does not carry a trusted GPG signature -S sign the resulting merge commit itself ] ( git merge origin/serverfix merge work into current working branch ) ( git merge --verify-signatures -S signed-branch verifies that every commit in the branch to be merged is signed and furthermore signs the resulting merge commit )

git cherry-pick e43a6

It takes the change (patch) that was introduced in a commit and tries to reapply (re-introduce) it on the branch you’re currently on. This is useful to only take one or two commits from a branch individually rather than merging in the branch which takes all the changes or if you only have one commit on a topic branch and you’d prefer to cherry-pick it rather than run rebase.

git rebase main

Take all the changes that were committed on one branch and replay them on a different branch. (is basically an automated cherry-pick. It determines a series of commits and then cherry-picks them one by one in the same order somewhere else) [ -i interactive ] ( git rebase <basebranch> <topicbranch> equivalent command without switching branches first )

git rebase --onto main server client

Take the client branch, figure out the patches since it diverged from the server branch, and replay these patches in the client branch as if it was based directly off the main branch instead.

git mergetool

A graphical tool to resolve merge conflicts.

Undoing Things

git restore --staged <file>

To unstage the file file. ( git reset HEAD <file> to unstage the file file. It moves around the HEAD pointer and optionally changes the index or staging area )

git restore <file>

Discard changes in working directory of the file file. Careful any local changes made to that file are gone. Git just replaced that file with the last staged or committed version. (modify -> unmodify)

git commit --amend

Redo last commit (if you commit and then realize you forgot to stage the changes in a file you wanted to add to this commit). This command takes the staging area and uses it for the commit. (Is used to modify the most recent commit. It combines changes in the staging environment with the latest commit, and creates a new commit. This new commit replaces the latest commit entirely) [ --no-edit will allow you to make the amendment to your commit without changing its commit message ] ( git commit --amend -m "an updated commit message" adding the -m option allows you to pass in a new message from the command line without being prompted to open an editor )

git reset

We use this when we want to move the repository back to a previous commit, discarding any changes made after that commit. [ --hard change the working directory; this option makes it possible for this command to lose your work ] ( git reset HEAD~1 reset to the last commit and preserve changes done (move the HEAD pointer back one commit) ) ( git reset --soft HEAD~1 reset to the last commit and preserve changes done and index (staging area) ) ( git reset --hard HEAD~1 reset to the last commit and also remove all unstaged changes (files are reset to their state at last commit) ) ( git reflog reset (if you destroy a commit, but then discover you needed it after all) )

git revert

Is essentially a reverse git cherry-pick. It creates a new commit that applies the exact opposite of the change introduced in the commit you’re targeting, essentially undoing or reverting it. (we use when we want to take a previous commit and add it as a new commit, keeping the log intact.) (reset if the commit being reset only exists locally. revert creates a new commit that undoes the changes, so if the commit to revert has already been pushed to a shared repository, it is best to use revert as it doesn’t overwrite commit history) ( git revert -m 1 HEAD the -m 1 flag indicates which parent is the “mainline” and should be kept )

git merge --abort

Back out of the merge. (tries to revert back to your state before you ran the merge. The only cases where it may not be able to do this perfectly would be if you had unstashed, uncommitted changes in your working directory when you ran it) ( git merge -Xignore-all-space if you see that you have a lot of whitespace issues in a merge, you can simply abort it and do it again, ignoring whitespace completely when comparing lines )

git rm <file>

Removes a file from Git (remove it from tracked files (remove it from staging area and also removes the file from working directory)). [ -f force the removal (if modified the file or had already added it to the staging area) –cached keep the file in working tree but remove it from staging area ] ( git rm log/\*.log removes all files that have the .log extension in the log/ directory (backslash (\) in front of the (*) is necessary because Git does its own filename expansion in addition to shell’s filename expansion) ) ( git rm \*~ removes all files whose names end with a ~ ) ( git rm --cached <file> only remove file from the staging area but leaving it in the working directory )

git mv <from> <to>

Rename a file. (Git doesn’t explicitly track file movement. If rename a file in Git, no metadata is stored in Git that tells it renamed the file) ( mv file_from file_to ;git rm file_from ;git add file_to equivalent command )

Commit History

git log

Lists the commits made in that repository in reverse chronological order. [ -p shows the difference (the patch output) introduced in each commit -2 limit the number of log entries displayed --stat show statistics for files modified in each commit --pretty=oneline show commits in an alternate format. Option values include oneline, short, full, fuller --pretty=format specify own log output format --graph display an ASCII graph of the branch and merge history beside the log output --abbrev-commit show only the first few characters of the SHA-1 --shortstat display only the changed/insertions/deletions line from the --stat command --relative-date display the date in a relative format --no-merges prevent the display of merge commits ] ( git log --pretty=format:"%h - %an, %ar : %s" abbreviated commit hash - author name , author date relative : subject) ( git log --since=2.weeks list of commits made in the last two weeks ) ( git log -S function_name find the last commit that added or removed a reference to a specific function ) ( git log -- path/to/file specify a directory or file name, you can limit the log output to commits that introduced a change to those files ) ( git log --pretty="%h - %s" --author='Junio C Hamano' --since="2008-10-01" --before="2008-11-01" --no-merges -- t/ commits modifying test files in the Git source code history were committed by Junio Hamano in the month of October 2008 and are not merge commits ) ( git log --oneline --decorate --graph --all print out the history of your commits, showing where your branch pointers are and how your history has diverged )

git log featureA..featureB

Show what commits are unique to a branch relative to another branch.

git log contrib --not master

See what changes each commit introduces. [ --not exclude commits in the master branch -p append the diff introduced to each commit ]

git log --show-signature -1

To see and verify GPG signatures. ( git log --pretty="format:%h %G? %aN %s" check any signatures it finds and list them in its output )

git log -S ZLIB_BUF_MAX --oneline

Find out when the ZLIB_BUF_MAX constant was originally introduced. Shows only those commits that changed the number of occurrences of that string. ( git log -L :git_deflate_bound:zlib.c see every change made to the function git_deflate_bound. This will try to figure out what the bounds of that function are and then look through the history and show every change that was made to the function as a series of patches back to when the function was first created. If Git can’t figure out how to match a function or method in your programming language, you can also provide it with a regular expression )

git shortlog

Is used to summarize the output of git log. (instead of listing out all of the commits it will present a summary of the commits grouped by author) ( git shortlog --no-merges master --not v1.0.1 gives you a summary of all the commits since your last release, if your last release was named v1.0.1 )

git describe master

Git generates a string consisting of the name of the most recent tag earlier than that commit, followed by the number of commits since that tag, followed finally by a partial SHA-1 value of the commit being described. (Because Git doesn’t have monotonically increasing numbers like ‘v123’ or the equivalent to go with each commit. This way, you can export a snapshot or build and name it something understandable to people)

git archive master --prefix='project/' | gzip > `git describe master`.tar.gz

Create an archive of the latest snapshot. If someone opens that tarball, they get the latest snapshot of your project under a project directory. ( git archive master --prefix='project/' --format=zip > `git describe master`.zip create a zip archive in much the same way )

gitk --all

Graphical history viewer. (each dot represents a commit, the lines represent parent relationships, and refs are shown as colored boxes, the yellow dot represents HEAD, and the red dot represents changes that are yet to become a commit) [ --all show commits reachable from any ref, not just HEAD ] ( git gui tool for crafting commits )

Tagging

git tag

Listing the existing tags in alphabetical order. Is used to give a permanent bookmark to a specific point in the code history. Generally this is used for things like releases. [ -l mandatory if using a wildcard ] ( git tag -l "v1.8.5*" only in looking at the 1.8.5 series )

git tag v1.4-lw

Create a lightweight tag. Just the commit checksum stored in a file — no other information is kept (don’t supply any of the -a, -s, or -m options, just provide a tag name). [ -d delete tag (does not remove the tag from any remote servers) ]

git tag -a v1.4 -m "my version 1.4"

Create an annotated tag. [ -a annotated tag -m specifies a tagging message -d delete tag (does not remove the tag from any remote servers) ] ( git tag -s v1.5 -m 'my signed 1.5 tag' sign tags with a GPG private key ) ( git tag -v v1.4.2.1 use GPG to verify the signature ) ( git tag -a v1.2 9fceb02 tag the commit whose checksum starts with 9fceb02 ) ( git tag -a maintainer-pgp-pub <key> create a tag that points directly to it by specifying the new SHA-1 value that the hash-object command gave you )

git show v1.4

Shows the tagger information of v1.4 , the date the commit was tagged, and the annotation message before showing the commit information.

git push origin <tagname>

Push tags to a shared server after they’ve been created. ( git push origin --tags will transfer all tags to the remote server that aren’t there yet ) ( git push <remote> --tags will push both lightweight and annotated tags ) ( git push <remote> --follow-tags only annotated tags will be pushed to the remote (There’s currently no option to push only lightweight tags) )

git push origin --delete <tagname>

Deleting a tag tagname from a remote server. ( git push <remote> :refs/tags/<tagname> equivalent command )

git checkout v2.0.0

Show the versions of files a tag is pointing to, although this puts your repository in “detached HEAD” state, which has some ill side effects. In “detached HEAD” state, if you make changes and then create a commit, the tag will stay the same, but your new commit won’t belong to any branch and will be unreachable, except by the exact commit hash.

Emails & Patch

git format-patch

Is used to generate a series of patches in mbox format that you can use to send to a mailing list properly formatted.

git format-patch -M origin/master

Prepare patches for e-mail submission. Prints out the names of the patch files it creates. [ -M switch tells Git to look for renames ]

git apply

Applies a patch created with the git diff or even GNU diff command. It is similar to what the patch command might do with a few small differences. This command is an “apply all or abort all” model where either everything is applied or nothing is, whereas patch can partially apply patchfiles, leaving your working directory in a weird state. ( git apply /tmp/patch-ruby-client.patch apply a patch saved in /tmp/patch-ruby-client.patch . This modifies the files in your working directory ( patch -p1 equivalent command ) )

git apply –check 01-see-if-this-helps.patch

To see if a patch applies cleanly before you actually applying it. If there is no output, then the patch should apply cleanly. This command also exits with a non-zero status if the check fails.

git am

Is used to apply patches from an email inbox, specifically one that is mbox formatted. This is useful for receiving patches over email and applying them to your project easily.

git am 0001-limit-log-function.patch

To apply a patch generated by format-patch . [ -i interactive mode ] ( git imap-send uploads a mailbox generated with git format-patch into an IMAP drafts folder ) ( git send-email is used to send patches that are generated with git format-patch over email )

git request-pull

Is simply used to generate an example message body to email to someone. If you have a branch on a public server and want to let someone know how to integrate those changes without sending the patches over email, you can run this command and send the output to the person you want to pull the changes in.

Config & Help

git config

Get and set configuration variables. [ --system reads and writes from the file /etc/gitconfig --global reads and writes from the file ~/.gitconfig (or ~/.config/git/config) and affects all of the repositories you work with on your system --local reads and writes from the file .git/config this is the default option (each level overrides values in the previous level) ]

git config --list --show-origin

Show all settings and where they are coming from. ( git config --list show all settings )

git config --global alias.ci commit

Set up an alias for a command commit . Instead of typing git commit, just need to type git ci . ( git config --global alias.unstage 'reset HEAD --' correct the usability problem you encountered with unstaging a file ) ( git config --global alias.last 'log -1 HEAD' to add a last command )

git config --global alias.visual '!gitk'

With the ! character it runs an external command, rather than a Git subcommand. Then git visual runs gitk .

git config --global user.name "John Doe"

Set the name on the system. (if you want to override this with a different name for specific projects, you can run the command without the --global option) ( git config --global user.email johndoe@example.com set the email on the system ) ( git config --global core.editor emacs set the editor to emacs ) ( git config --global init.defaultBranch main to set main as the default branch name )

git config --global user.signingkey 0A46826A!

Configure Git to use signing things. Then Git will use your key by default to sign tags and commits if you want.

git config --global rerere.enabled true

When rerere is enabled, Git will keep a set of pre- and post-images from successful merges, and if it notices that there’s a conflict that looks exactly like one you’ve already fixed, it’ll just use the fix from last time, without bothering you with it. Whenever you do a merge that resolves conflicts, the resolution will be recorded in the cache in case you need it in the future.

git config --global credential.helper cache

Set up a “credential cache”. If you don’t want to type password every single time you push.

git help command

The manpage help for the command . ( git add -h quick refresher on the available options for a Git command ) ( man gitignore show more details about the .gitignore file )

Miscellaneous

gpg -a --export F721C45A | git hash-object -w --stdin

Directly import the key into the Git database by exporting it and piping the export through git hash-object, which writes a new blob with those contents into Git and gives you back the SHA-1 of the blob.

git show maintainer-pgp-pub | gpg --import

Can directly import your PGP key by pulling the blob directly out of the database and importing it into GPG.

git instaweb --httpd=webrick

Starts up an HTTPD server on port 1234 and then automatically starts a web browser that opens a page showing how the project looks. [ --httpd start instaweb with a non-lighttpd handler ] ( git instaweb --httpd=webrick --stop shut down the server )

git rev-parse

To take just about any string and turn it into an object SHA-1.

git clean

To remove unwanted files from working directory. This could include removing temporary build artifacts or merge conflict files.

git stash

Is used to temporarily store uncommitted work in order to clean out your working directory without having to commit unfinished work on a branch.

git bisect

Incredibly helpful debugging tool used to find which specific commit was the first one to introduce a bug or problem by doing an automatic binary search.

git blame

Command annotates the lines of any file with which commit was the last one to introduce a change to each line of the file and what person authored that commit. This is helpful in order to find the person to ask for more information about a specific section of your code.

git grep

Find any string or regular expression in any of the files in your source code, even older versions of your project. (this command is really fast and you can search through any tree in Git, not just the working directory) [ -n print out the line numbers where Git has found matches -c summarize the output by showing you only which files contained the search string and how many matches there were in each file -p display the enclosing method or function for each matching string --and ensures that multiple matches must occur in the same line of text --break split up the output into a more readable format --heading split up the output into a more readable format ] ( git grep -p gmtime_r *.c )

Shell

Assignment

A single = performs assignment, but be careful: it is different from ==.

foo = 5 says “make foo equal to 5” (performs assignment) foo == 5 says “does foo equal 5?” (evaluates equivalence)

In the shell, this can be a little confusing because the [ (test) command accepts a single = for string equivalence.
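The distinction can be seen in a short sketch (the variable name foo is arbitrary):

```shell
# Sketch: `=` assigns, `[ x = y ]` compares.
foo=5                          # assignment: no spaces around =
if [ "$foo" = "5" ]; then      # POSIX test: single = is string comparison
    result="equal"
else
    result="not equal"
fi
echo "$result"                 # equal
```

Note that assignment allows no spaces around =, while the test command requires them around its operator.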

The syntax VARIABLE=value means, sets the value of the variable named VARIABLE to value. To access this variable, use $VARIABLE.

The syntax export VARIABLE means, make the VARIABLE shell variable into an environment variable (tells the shell to make the contents of VARIABLE available to child processes of this shell).

foo="This is some "
echo $foo
# This is some

foo=$foo"text."
echo $foo
# This is some text.


# Append a directory name dir to the end of the PATH variable.
PATH=$PATH:dir

# Append to the beginning of the path.
PATH=dir:$PATH
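The effect of export described above can be demonstrated by asking a child shell to print the variable before and after exporting it (the name MYVAR is arbitrary):

```shell
# Sketch: a plain shell variable is invisible to child processes
# until it is exported.
MYVAR="hello"
before=$(sh -c 'echo "$MYVAR"')    # empty: the child does not see MYVAR
export MYVAR
after=$(sh -c 'echo "$MYVAR"')     # hello: the child now inherits it
echo "before='$before' after='$after'"
```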

Expansion

Parameter expansion

Basic Parameters

The simplest form of parameter expansion is reflected in the ordinary use of variables.

When $a is expanded, it becomes whatever the variable a contains. Surrounding a simple parameter with braces, as in ${a}, has no effect on the expansion, but is required if the variable is adjacent to other text, which may confuse the shell.

Note: It’s always good practice to enclose parameter expansions in double quotes to prevent unwanted word splitting, unless there is a specific reason not to. This is especially true when dealing with filenames since they can often include embedded spaces and other assorted nastiness.

a="foo"
echo "$a"
# foo

a="foo"
echo "$a_file"
#

a="foo"
echo "${a}_file"
# foo_file

Expansions to manage empty variables

${parameter:-word}

If parameter is unset (i.e., does not exist) or is empty, this expansion results in the value of word. If parameter is not empty, the expansion results in the value of parameter.

foo=
echo ${foo:-"substitute value if unset"}
# substitute value if unset

echo $foo
#

foo=bar
echo ${foo:-"substitute value if unset"}
# bar

echo $foo
# bar
${parameter:=word}

If parameter is unset or empty, this expansion results in the value of word. In addition, the value of word is assigned to parameter. If parameter is not empty, the expansion results in the value of parameter.

Note: Positional and other special parameters cannot be assigned this way.

foo=
echo ${foo:="default value if unset"}
# default value if unset

echo $foo
# default value if unset

foo=bar
echo ${foo:="default value if unset"}
# bar

echo $foo
# bar
${parameter:?word}

If parameter is unset or empty, this expansion causes the script to exit with an error, and the contents of word are sent to standard error. If parameter is not empty, the expansion results in the value of parameter.

foo=
echo ${foo:?"parameter is empty"}
# bash: foo: parameter is empty

echo $?
# 1

foo=bar
echo ${foo:?"parameter is empty"}
# bar

echo $?
# 0
${parameter:+word}

If parameter is unset or empty, the expansion results in nothing. If parameter is not empty, the value of word is substituted for parameter; however, the value of parameter is not changed.

foo=
echo ${foo:+"substitute value if set"}

foo=bar
echo ${foo:+"substitute value if set"}
# substitute value if set

Expansions that return variable names

The shell has the ability to return the names of variables. This is used in some rather exotic situations.

${!prefix*} ${!prefix@}

This expansion returns the names of existing variables with names beginning with prefix. According to the bash documentation, both forms of the expansion perform identically.

# List all the variables in the environment with names that begin BASH:
echo ${!BASH*}
# BASH BASH_ARGC BASH_ARGV BASH_COMMAND BASH_COMPLETION
# BASH_COMPLETION_DIR BASH_LINENO BASH_SOURCE BASH_SUBSHELL
# BASH_VERSINFO BASH_VERSION

String operations

${#parameter}

Expands into the length of the string contained by parameter. Normally, parameter is a string; however, if parameter is either @ or *, then the expansion results in the number of positional parameters.

foo="This string is long."
echo "'$foo' is ${#foo} characters long."
# 'This string is long.' is 20 characters long.
${parameter:offset}

These expansions are used to extract a portion of the string contained in parameter. The extraction begins at offset characters from the beginning of the string and continues until the end of the string, unless length is specified ${parameter:offset:length}.

If parameter is @, the result of the expansion is length positional parameters, starting at offset.

foo="This string is long."
echo ${foo:5}
# string is long.

echo ${foo:5:6}
# string

If the value of offset is negative, it is taken to mean it starts from the end of the string rather than the beginning. Note that negative values must be preceded by a space to prevent confusion with the ${parameter:-word} expansion. length, if present, must not be less than zero.

foo="This string is long."
echo ${foo: -5}
# long.

echo ${foo: -5:2}
# lo
${parameter#pattern}

These expansions remove a leading portion of the string contained in parameter, as defined by pattern. pattern is a wildcard pattern like those used in pathname expansion. The difference between this form and ${parameter##pattern} is that the # form removes the shortest match, while the ## form removes the longest match.

foo=file.txt.zip
echo ${foo#*.}
# txt.zip

echo ${foo##*.}
# zip
${parameter%pattern}

These expansions are the same as ${parameter#pattern} expansion, except they remove text from the end of the string contained in parameter rather than from the beginning.

foo=file.txt.zip
echo ${foo%.*}
# file.txt

echo ${foo%%.*}
# file
${parameter/pattern/string}

This expansion performs a search-and-replace operation upon the contents of parameter. If text is found matching wildcard pattern, it is replaced with the contents of string.

  • In the ${parameter/pattern/string} form, only the first occurrence of pattern is replaced.
  • In the ${parameter//pattern/string} form, all occurrences are replaced.
  • The ${parameter/#pattern/string} form requires that the match occur at the beginning of the string.
  • The ${parameter/%pattern/string} form requires the match to occur at the end of the string. In every form, /string may be omitted, causing the text matched by pattern to be deleted.
foo=JPG.JPG
echo ${foo/JPG/jpg}
# jpg.JPG

echo ${foo//JPG/jpg}
# jpg.jpg

echo ${foo/#JPG/jpg}
# jpg.JPG

echo ${foo/%JPG/jpg}
# JPG.jpg

Case conversion

bash has four parameter expansions and two declare command options to support the uppercase/lowercase conversion of strings.

The declare command is good for normalizing the user’s input; that is, converting it into a standardized form before we attempt the database lookup. We can do this by converting all the characters in the user’s input to either lower- or uppercase and ensuring that the database entries are normalized the same way.

Using declare, we can force a variable to always contain the desired format no matter what is assigned to it.

#!/bin/bash
# ul-declare: demonstrate case conversion via declare

declare -u upper
declare -l lower

if [[ $1 ]]; then
    upper="$1"
    lower="$1"
    echo "$upper"
    echo "$lower"
fi

ul-declare aBc
# ABC
# abc
| Format                | Expand the value of parameter…             |
|-----------------------+--------------------------------------------|
| ${parameter,,pattern} | …into all lowercase                        |
| ${parameter,pattern}  | …changing only the first char to lowercase |
| ${parameter^^pattern} | …into all uppercase letters                |
| ${parameter^pattern}  | …changing only the first char to uppercase |

Here is a script that demonstrates these expansions. While this script uses the first positional parameter, parameter may be any string, variable, or string expression.

#!/bin/bash
# ul-param: demonstrate case conversion via parameter expansion

if [[ "$1" ]]; then
    echo "${1,,}"
    echo "${1,}"
    echo "${1^^}"
    echo "${1^}"
fi

ul-param aBc
# abc
# aBc
# ABC
# ABc

Examples

Expansions can improve the efficiency of scripts by eliminating the use of external programs.

Example of the program longest-word that finds longest string in a file using parameter expansion instead of other common commands.

We’ll use the parameter expansion ${#j} in place of the command substitution $(echo -n $j | wc -c) and its resulting subshell, and then compare the efficiency of the two versions by using the time command.

# Brief:
foo=hello
echo $(echo -n $foo  | wc -c)
# 5
echo ${#foo}
# 5

# Script:
#!/bin/bash
# longest-word3: find longest string in a file
for i; do
    if [[ -r "$i" ]]; then
        max_word=
        max_len=0
        for j in $(strings $i); do
            # parameter expansion
            len="${#j}"
            if (( len > max_len )); then
                max_len="$len"
                max_word="$j"
            fi
        done
        echo "$i: '$max_word' ($max_len characters)"
    fi
done

#!/bin/bash
# longest-word2: find longest string in a file
for i; do
    if [[ -r "$i" ]]; then
        max_word=
        max_len=0
        for j in $(strings "$i"); do
            # command substitution
            len="$(echo -n "$j" | wc -c)"
            if (( len > max_len )); then
                max_len="$len"
                max_word="$j"
            fi
        done
        echo "$i: '$max_word' ($max_len characters)"
    fi
done

# Comparison of the efficiency of the two versions of the script.
time longest-word2 file.txt
# file.txt: 'scrollkeeper-get-extended' (25 characters)
#
# real 0m3.618s
# user 0m1.544s
# sys  0m1.768s

time longest-word3 file.txt
# file.txt: 'scrollkeeper-get-extended' (25 characters)
#
# real 0m0.060s
# user 0m0.056s
# sys  0m0.008s

Arithmetic expansion

Basic form

The arithmetic expansion is used to perform various arithmetic operations on integers. Its basic form is $((expression)), where expression is a valid arithmetic expression. This is related to the compound command (( )) used for arithmetic evaluation (truth tests).

echo $((2 + 2))
# 4

echo $(((5**2) * 3))
# 75

Number Bases

| Notation    | Description                                      |
|-------------+--------------------------------------------------|
| number      | decimal (base 10) integers by default            |
| 0number     | numbers with a leading zero are considered octal |
| 0xnumber    | hexadecimal notation                             |
| base#number | number is in base                                |
# print the value of the hexadecimal ff (the largest two-digit number)
echo $((0xff))
# 255

# print the value of the binary (base 2) number (the largest eight-digit)
echo $((2#11111111))
# 255

Arithmetic Operators

Most of the arithmetic operators ( + , - , * , / , ** , % ) are self-explanatory, but integer division and modulo require further discussion.

Since the shell’s arithmetic operates only on integers, the results of division are always whole numbers. This makes the determination of a remainder in a division operation more important.

echo $(( 5 / 2 ))
# 2

echo $(( 5 % 2 ))
# 1


#Display a line of numbers, highlighting each multiple of 5:
#!/bin/bash
# modulo: demonstrate the modulo operator
for ((i = 0; i <= 20; i = i + 1)); do
    remainder=$((i % 5))
    if (( remainder == 0 )); then
        printf "<%d> " "$i"
    else
        printf "%d " "$i"
    fi
done
printf "\n"

# Evaluate script.
modulo
# <0> 1 2 3 4 <5> 6 7 8 9 <10> 11 12 13 14 <15> 16 17 18 19 <20>

Assignments

Arithmetic expressions may perform assignment. Each time we give a variable a value, we are performing assignment. We can also do it within arithmetic expressions.

| Notation          | Description             | Equivalent |
|-------------------+-------------------------+------------|
| parameter = value | Assigns value to parameter | p = v   |
| parameter += value | Addition               | p = p + v  |
| parameter -= value | Subtraction            | p = p - v  |
| parameter *= value | Multiplication         | p = p * v  |
| parameter /= value | Integer division       | p = p / v  |
| parameter %= value | Modulo                 | p = p % v  |
| parameter++       | Variable post-increment | p = p + 1  |
| parameter--       | Variable post-decrement | p = p - 1  |
| ++parameter       | Variable pre-increment  | p = p + 1  |
| --parameter       | Variable pre-decrement  | p = p - 1  |

With the post-increment/decrement, the parameter is returned first and then incremented/decremented, while with the pre-increment/decrement, the operation is performed before the parameter is returned.

foo=
echo $foo
#

# The next process will assign the value of 5 to the variable foo, and
# it evaluates to true because foo was assigned a non-zero value.
if (( foo = 5 )); then echo "It is true."; fi
# It is true.

echo $foo
# 5


# This is the more expected behavior
# Pre-increment.
foo=1
echo $((++foo))
# 2
echo $foo
# 2


# This behavior is not used very often.
# Post-increment.
foo=1
echo $((foo++))
# 1
echo $foo
# 2


#!/bin/bash
# modulo2: demonstrate the modulo operator
for ((i = 0; i <= 20; ++i )); do
    if (((i % 5) == 0 )); then
        printf "<%d> " "$i"
    else
        printf "%d " "$i"
    fi
done
printf "\n"

# Evaluate script.
modulo
# <0> 1 2 3 4 <5> 6 7 8 9 <10> 11 12 13 14 <15> 16 17 18 19 <20>

Wildcard expansion

| Wildcard   | Meaning                                                |
|------------+--------------------------------------------------------|
| *          | Matches any characters                                 |
| ?          | Matches any single character                           |
| [chars]    | Matches any char that’s a member of the set chars      |
| [!chars]   | Matches any char that’s not a member of the set chars  |
| [[:class:]] | Matches any char that’s a member of the specified class |

Note: Be careful with notation like [A-Z] and [a-z], because it will not produce the expected results on some systems unless they are properly configured; instead use [[:upper:]] and [[:lower:]]. *

Data???
# Any file beginning with “Data” followed by exactly three characters.

[abc]*
# Any file beginning with either an “a”, a “b”, or a “c”.

BACKUP.[0-9][0-9][0-9]
# Any file beginning with “BACKUP.” followed by exactly three numerals.

[[:upper:]]*
# Any file beginning with an uppercase letter.

[![:digit:]]*
# Any file not beginning with a numeral.

*[[:lower:]123]
# Any file ending with a lowercase letter or the numerals “1”, “2”, or “3”

echo .[!.]*
# Expands into every filename that begins with only one period fol-
# lowed by any other characters. This will work correctly with most
# hidden files (though it still won't include filenames with multiple
# leading periods).


# Depending on the Linux distribution, we will get a different list of
# files with [] notation. *
ls /usr/sbin/[ABCDEFGHIJKLMNOPQRSTUVWXYZ]*
# /usr/sbin/MAKEFLOPPIES
# /usr/sbin/NetworkManagerDispatcher

ls /usr/sbin/[A-Z]*
# /usr/sbin/biosdecode
# /usr/sbin/chat

Brace expansion

echo Front-{A,B,C}-Back
# Front-A-Back Front-B-Back Front-C-Back

echo Number_{1..5}
# Number_1 Number_2 Number_3 Number_4 Number_5

echo {01..15}
# 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15

echo {001..15}
# 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015

echo {Z..A}
# Z Y X W V U T S R Q P O N M L K J I H G F E D C B A

echo a{A{1,2},B{3,4}}b
#aA1b aA2b aB3b aB4b

mkdir {2007..2009}-{01..12}
# 2007-01 2007-07 2008-01 2008-07 2009-01 2009-07
# 2007-02 2007-08 2008-02 2008-08 2009-02 2009-08
# 2007-03 2007-09 2008-03 2008-09 2009-03 2009-09
# 2007-04 2007-10 2008-04 2008-10 2009-04 2009-10
# 2007-05 2007-11 2008-05 2008-11 2009-05 2009-11
# 2007-06 2007-12 2008-06 2008-12 2009-06 2009-12

Tilde expansion

echo ~
# /home/carlos

History expansion

!!
#Repeat the last command

!number
# Repeat history list item number.

!string
# Repeat last history list item starting with string.

!?string
# Repeat last history list item containing string.

POSIX Character Classes

| Character Class | Description                                          |
|-----------------+------------------------------------------------------|
| [:alnum:]  | alphanumeric characters. In ASCII: [A-Za-z0-9]            |
| [:word:]   | same as [:alnum:], with the addition of underscore (_)    |
| [:alpha:]  | alphabetic characters. In ASCII: [A-Za-z]                 |
| [:lower:]  | lowercase letters                                         |
| [:upper:]  | uppercase letters                                         |
| [:digit:]  | numerals 0 through 9                                      |
| [:xdigit:] | chars used to express hexadecimal. In ASCII: [0-9A-Fa-f]  |
| [:punct:]  | punctuation. In ASCII: [-!”#$%&’()*+,./:;<=>?@[\\]_`{}~]  |
| [:graph:]  | visible characters. In ASCII, includes 33 - 126           |
| [:print:]  | printable chars. All in [:graph:] plus the space char     |
| [:cntrl:]  | ASCII control codes. Includes ASCII 0 - 31 and 127        |
| [:blank:]  | includes the space and tab characters                     |
| [:space:]  | whitespace chars (space, tab, carriage return, newline, vertical tab, form feed). In ASCII: [ \t\r\n\v\f] |

Regex

POSIX BRE vs ERE

POSIX splits regular expression implementations into two kinds: basic regular expressions (BRE) and extended regular expressions (ERE). The difference between BRE and ERE is a matter of metacharacters.

With BRE, the following metacharacters are recognized, all other characters are considered literals: ^ $ . [ ] *

With ERE, the following metacharacters (and their associated functions) are added: ( ) { } ? + |

However, the ( , ) , { , and } characters are treated as metacharacters in BRE if they are escaped with a backslash, whereas with ERE, preceding any metacharacter with a backslash causes it to be treated as a literal.

GNU version of grep also supports extended regular expressions when the -E option is used.
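A throwaway sketch of the difference, using + as the example metacharacter:

```shell
# BRE: + is a literal character, so this matches the literal string "ab+"
echo 'ab+' | grep 'ab+'        # ab+

# BRE: with a backslash, \+ acts as the "one or more" quantifier
echo 'abbb' | grep 'ab\+'      # abbb

# ERE (-E): + is a quantifier without any escaping
echo 'abbb' | grep -E 'ab+'    # abbb
```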

Note: The emacs regular expression syntax is basically BRE, but it has been extended to treat + and ? as metacharacters, as in ERE. Therefore, there is no need to escape them with \ in emacs regular expressions.

| BRE (emacs)        | ERE         | Description regex                           |
|--------------------+-------------+---------------------------------------------|
| .[]^$*             | .[]^$*      | common metacharacters                       |
| \+ \? \( \) \{ \}  |             | BRE-only backslash-escaped metacharacters   |
|                    | + ? ( ) { } | ERE-only non-escaped metacharacters         |
| c                  | c           | match non-metacharacter “c”                 |
| \c                 | \c          | match a literal character “c” even if “c” is a metacharacter by itself |
| .                  | .           | match any character including newline       |
| ^                  | ^           | position at the beginning of a string       |
| $                  | $           | position at the end of a string             |
| \<                 | \<          | position at the beginning of a word         |
| \>                 | \>          | position at the end of a word               |
| [abc…]             | [abc…]      | match any character in “abc…”               |
| [^abc…]            | [^abc…]     | match any character except those in “abc…”  |
| r*                 | r*          | match 0 or more of the regex identified by “r”     |
| r\+                | r+          | match one or more of the regex identified by “r”   |
| r\?                | r?          | match 0 or one of the regex identified by “r”      |
| r1\¦r2             | r1¦r2       | match one of the regexes identified by r1 or r2 (¦ = pipe) |
| \(r1\¦r2\)         | (r1¦r2)     | same, treated as a bracketed (grouped) regex |

Any Character .

It will match any character in that character position.

grep '.zip' dirlist*.txt
# bunzip2
# bzip2
# bzip2recover
# zip doesn't match because the inclusion of the dot metacharacter
# increases the length of the required match to four characters.

Anchors ^ , $

The caret ^ and dollar sign $ characters are treated as anchors in regular expressions. This means they cause the match to occur only if the regular expression is found at the beginning of the line ^ or at the end of the line $.

Note that the regular expression ^$ (a beginning and an end with nothing in between) will match blank lines.

grep '^zip' dirlist*.txt
# zip
# zipcloak

grep 'zip$' dirlist*.txt
# gunzip
# zip

grep '^zip$' dirlist*.txt
# zip
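Since ^$ matches blank lines, it can be used to count them in a stream:

```shell
printf 'one\n\ntwo\n\n\nthree\n' | grep -c '^$'
# 3
```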

Bracket Expressions [ ]

We can match a single character from a specified set of characters (including characters that would otherwise be interpreted as metacharacters) by using bracket expressions.

A set may contain any number of characters, and metacharacters lose their special meaning when placed within brackets. However, there are two cases in which metacharacters are used within bracket expressions and have different meanings.

negation
if the first character in a bracket expression is a caret ^, the remaining characters are taken to be a set of characters that must not be present at the given character position. Note that a negated character set still requires a character at the given position. The caret character only invokes negation if it is the first character within a bracket expression; otherwise, it loses its special meaning and becomes an ordinary character in the set.
ranges
the dash -, which is used to indicate a character range.
grep '[bg]zip' dirlist*.txt
# bzip2
# gzip


grep '[^bg]zip' dirlist*.txt
# bunzip2
# gpg-zip
# Notice that the file zip was not found. A negated character set still
# requires a character at the given position, but the character must not
# be a member of the negated set.

grep '[^]zip' dirlist*.txt
# grep: Unmatched [, [^, [:, [., or [=
grep '[\^]zip' dirlist*.txt
# ^zip


grep '^[A-Z]' dirlist*.txt
# MAKEDEV
# ControlPanel
# GET

# Matches all filenames starting with letters and numbers.
grep '^[A-Za-z0-9]' dirlist*.txt

Alternation |

Allows a single character to match from a set of specified characters, alternation allows matches from a set of strings or other regular expressions.

Notice that since this is an extended feature, we added the -E option to grep (though we could have just used the egrep program instead), and we enclosed the regular expression in quotes to prevent the shell from interpreting the vertical-bar metacharacter as a pipe operator.

echo "AAA" | grep -E 'AAA|BBB'
# AAA
echo "BBB" | grep -E 'AAA|BBB'
# BBB
echo "CCC" | grep -E 'AAA|BBB'
#

echo "AAA" | grep -E 'AAA|BBB|CCC'
# AAA

# To combine alternation with other regexp elements, use ().
grep -E '^(bz|gz|zip)' dirlist*.txt
# Match the file in lists that start with either bz,gz, or zip.

Quantifiers ? , * , + , { }

?

Match an element zero or one time. This means, in effect, “Make the preceding element optional.”

# To check valid number of two form: (nn) nn-n or nn nn-n.
echo "(55) 12-4" |grep -E '^\(?[0-9][0-9]\)? [0-9][0-9]-[0-9]$'
# (55) 12-4
echo "55 12-4" |grep -E '^\(?[0-9][0-9]\)? [0-9][0-9]-[0-9]$'
# 55 12-4
echo "AA 12-4" |grep -E '^\(?[0-9][0-9]\)? [0-9][0-9]-[0-9]$'
#

*

Match an element zero or more times. Like the ? metacharacter, the * is used to denote an optional item; however, unlike the ?, the item may occur any number of times, not just once.

# To check a string was a sentence; that is, it starts with an
# uppercase letter, then contains any number of uppercase and
# lowercase letters and spaces, and ends with a period.
echo "This works." | grep -E '[[:upper:]][[:upper:][:lower:] ]*\.'
# This works.
echo "This Works." | grep -E '[[:upper:]][[:upper:][:lower:] ]*\.'
# This Works.
echo "this doesnt" | grep -E '[[:upper:]][[:upper:][:lower:] ]*\.'
#

+

Match an element one or more times. The + metacharacter works much like the * , except it requires at least one instance of the preceding element to cause a match.

# Lines consisting of groups of one or more alphabetic characters
# separated by single spaces.
echo "This that" | grep -E '^([[:alpha:]]+ ?)+$'
# This that
echo "a b c" | grep -E '^([[:alpha:]]+ ?)+$'
# a b c
echo "a b 9" | grep -E '^([[:alpha:]]+ ?)+$'
#
echo "abc d" | grep -E '^([[:alpha:]]+ ?)+$'
#

{ }

Match an element a specific number of times. The { and } metacharacters are used to express minimum and maximum numbers of required matches.

| Specifier | Match                                                        |
|-----------+--------------------------------------------------------------|
| {n}       | preceding element if it occurs exactly n times               |
| {n,}      | preceding element if it occurs n or more times               |
| {,m}      | preceding element if it occurs no more than m times          |
| {n,m}     | preceding element if it occurs at least n but no more than m times |
# To check valid number of two form: (nnnn) nnn-nnnn or nnn nnn-nnnn.
echo "(555) 123-4567" | grep -E '^\(?[0-9]{3}\)? [0-9]{3}-[0-9]{4}$'
# (555) 123-4567
echo "555 123-4567" | grep -E '^\(?[0-9]{3}\)? [0-9]{3}-[0-9]{4}$'
# 555 123-4567
echo "5555 123-4567" | grep -E '^\(?[0-9]{3}\)? [0-9]{3}-[0-9]{4}$'
#

CTRL

CTRL-C

Terminates a process running in the current terminal. It is the same as using kill to end the process with the INT (interrupt) signal.

CTRL-D

On an empty line stops the current standard input entry from the terminal with an EOF (end-of-file) message (and often terminates a program). Don’t confuse this with CTRL-C, which usually terminates a program regardless of its input or output.

CTRL-Z

To send TSTP (similar to STOP) signals to programs. This allows you to suspend and switch between programs you’re using. For example, you can send a TSTP signal with CTRL-Z and then start the process again by entering fg (bring to foreground) or bg (move to background).

The background process may have problems if it needs to work with the standard input (or worse, read directly from the terminal). The best way to make sure that a background process doesn’t bother you is to redirect its output (and possibly input).
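A minimal sketch of keeping a background job quiet by capturing both of its output streams (long_task and task.log are illustrative names):

```shell
# an illustrative job that writes to both stdout and stderr
long_task() { echo "working"; echo "oops" >&2; }

long_task > task.log 2>&1 &   # both streams go to the file, not the terminal
wait $!                       # wait for the background job to finish
cat task.log
# working
# oops
```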

CTRL-R

Puts the prompt into reverse incremental search (isearch) mode.

CTRL-L

Redraws the entire screen.

CTRL-ALT-F1

The first (main) virtual console, /dev/tty1.

CTRL-ALT-F2

Second virtual console.

CTRL-ALT-DEL

On a virtual console, on most systems, this triggers some sort of reboot via the shutdown command.

Alt-SysRq (Magic SysRq)

This is a “magic” key combination that the kernel responds to regardless of what it is doing, unless it is completely locked up. Pressing Alt-SysRq (PrtScr) followed by a command key performs the magic of rescuing control of the system.

| kfa | description of action ( kfa = key following Alt-SysRq )     |
|-----+-------------------------------------------------------------|
| r   | restore the keyboard from raw mode after X crashes          |
| k   | kill all processes on the current virtual console (SAK)     |
| s   | sync all mounted filesystems to avoid data corruption       |
| u   | remount all mounted filesystems read-only (umount)          |
| e   | send a SIGTERM to all processes, except for init            |
| i   | send a SIGKILL to all processes, except for init            |
| b   | reboot the system without syncing or unmounting disks       |
| h   | display help                                                |
| l   | show a stack backtrace for all active CPUs                  |
| m   | dump current memory info to your console                    |
| t   | list current tasks and their info on your console           |
| v   | forcefully restore the framebuffer console                  |
| w   | dump tasks that are in uninterruptible (blocked) state      |

Notes:

  • Some keyboards may not have a key labeled ‘SysRq’. The ‘SysRq’ key is also known as the ‘Print Screen’ key. Also, some keyboards cannot handle that many keys pressed at the same time, so you may have better luck pressing Alt, pressing SysRq, releasing SysRq, pressing the <command key>, and then releasing everything.
  • From an SSH terminal, etc., you can use the Alt-SysRq feature by writing to “/proc/sysrq-trigger”. For example, “echo s > /proc/sysrq-trigger; echo u > /proc/sysrq-trigger” from the root shell prompt syncs and remounts all mounted filesystems read-only.

CTRL-SHIFT-x-*

Insert (paste) the names of all files in the current directory onto the command line.

Note: first type * in the terminal and then press CTRL-SHIFT-x-*.

Keystrokes

History commands

| Key | Action                                                    |
|-----+-----------------------------------------------------------|
| C-p | Move to the previous history entry (or up arrow)          |
| C-n | Move to the next history entry (or down arrow)            |
| M-< | Move to the beginning (top) of the history list           |
| M-> | Move to the end (bottom) of the history list              |
| M-p | Reverse search, nonincremental                            |
| M-n | Forward search, nonincremental                            |
| C-o | Execute the current item and advance to next (sequence)   |
| C-r | Reverse incremental search                                |
| C-r | Again: find the next occurrence (moving “up”)             |
| C-g | Quit searching                                            |

Move cursor

| Key | Action                                       |
|-----+----------------------------------------------|
| C-b | Move the cursor left                         |
| C-f | Move the cursor right                        |
| C-a | Move the cursor to the beginning of the line |
| C-e | Move the cursor to the end of the line       |

Kill text

| Key | Action                                              |
|-----+-----------------------------------------------------|
| C-h | Kill the preceding character                        |
| C-d | Kill the next character                             |
| C-w | Kill the preceding word                             |
| M-d | Kill the next word                                  |
| C-u | Kill text from the cursor to the beginning of line  |
| C-k | Kill text from the cursor to the end of line        |
| C-y | Yank text from the kill-ring (e.g., from CTRL-U)    |

Transpose, convert text

| Key | Action                                                    |
|-----+-----------------------------------------------------------|
| C-t | Transpose the char at the cursor with the preceding one   |
| M-t | Transpose the word at the cursor with the preceding one   |
| M-l | Convert chars from cursor to end of word to lowercase     |
| M-u | Convert chars from cursor to end of word to uppercase     |

Special Characters

| Char       | Name(s)           | Uses                                  |
|------------+-------------------+---------------------------------------|
| *          | star, asterisk    | Regular expression, glob character    |
| .          | dot               | Current dir, file/hostname delimiter  |
| !          | bang              | Negation, command history             |
| ¦          | pipe              | Command pipes                         |
| /          | (forward) slash   | Directory delimiter, search command   |
| \          | backslash         | Literals, macros (never dirs)         |
| $          | dollar            | Variables, end of line                |
| '          | tick, quote       | Literal strings                       |
| `          | backtick          | Command substitution (old style)      |
| "          | double quote      | Semi-literal strings                  |
| ^          | caret             | Negation, beginning of line           |
| ~          | tilde, squiggle   | Negation, directory shortcut          |
| #          | hash, sharp       | Comments, preprocessor, substitutions |
| [ ]        | brackets          | Ranges, test                          |
| { }        | braces            | Statement blocks, ranges              |
| $( )       | parentheses       | Command substitution                  |
| (cmd;cmd)  |                   | Run cmd;cmd in a subshell             |
| {cmd;cmd;} |                   | Like (cmd;cmd) without a subshell     |
| _          | underscore, under | Cheap substitute for a space          |
| &          | ampersand         | Background job                        |
| %job       | percentage        | Identify job                          |
| ((..))     |                   | Arithmetic evaluation                 |
| -e         |                   | Stop shell on error (set -e)          |
| Char     | Redirections                                          |
|----------+-------------------------------------------------------|
| >& file  | Redirect stdout and stderr to file                    |
| m> file  | Redirect output file descriptor m to file             |
| m< file  | Redirect input file descriptor m from file            |
| m>> file | Append output file descriptor m to file               |
| <&m      | Take standard input from file descriptor m            |
| >&m      | Use file descriptor m as standard output              |
| <&-      | Close standard input                                  |
| >&-      | Close standard output                                 |
| m<&n     | Connect input file descriptor m to file descriptor n  |
| n>&m     | Connect output file descriptor n to file descriptor m |
| m<&-     | Close input file descriptor m                         |
| m>&-     | Close output file descriptor m                        |
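A few of these redirections in action (out.txt, err.txt, and both.txt are illustrative names):

```shell
# separate stdout and stderr into two files
{ echo "to stdout"; echo "to stderr" >&2; } > out.txt 2> err.txt
cat out.txt    # to stdout
cat err.txt    # to stderr

# 2>&1 duplicates stderr onto wherever stdout currently points
{ echo "to stderr" >&2; } > both.txt 2>&1
cat both.txt   # to stderr
```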

Absolute permission

| Octal | Binary | Permission Type         | Symbol |
|-------+--------+-------------------------+--------|
| 0     | 000    | no permission           | ---    |
| 1     | 001    | execute                 | --x    |
| 2     | 010    | write                   | -w-    |
| 3     | 011    | execute + write         | -wx    |
| 4     | 100    | read                    | r--    |
| 5     | 101    | read + execute          | r-x    |
| 6     | 110    | read + write            | rw-    |
| 7     | 111    | read + write + execute  | rwx    |

Everywhere a 1 appears in the binary value of the mask (umask command), an attribute is unset.

| Original file | --- rw- rw- rw- | 0666 |
| Mask          | 000 000 010 010 | 0022 |
| Result        | --- rw- r-- r-- | 0644 |
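The mask arithmetic can be checked directly; running umask in a subshell keeps the change from leaking into the current session (the file names are illustrative):

```shell
# 0666 & ~0022 = 0644
( umask 0022; touch demo-644 )
# 0666 & ~0077 = 0600
( umask 0077; touch demo-600 )

ls -l demo-644 demo-600
# -rw------- ... demo-600
# -rw-r--r-- ... demo-644
```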

Signals used by kill/pkill

| Signal | Name | Action                                        |
|--------+------+-----------------------------------------------|
| -1     | HUP  | Reload the process configuration file         |
| -2     | INT  | Interrupt the process                         |
| -3     | QUIT | Quit the process                              |
| -9     | KILL | Kill the process (to avoid; try ‘-15’ first)  |
| -15    | TERM | Terminate the process properly                |
| -19    | STOP | Freeze the process                            |
| -18    | CONT | Resume execution of a frozen process          |

Note: the STOP/CONT numbers above are for x86 Linux; signal numbers vary by architecture, so the names (kill -STOP, kill -CONT) are more portable.
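A small sketch of sending and catching the TERM signal with trap (worker and cleanup.log are illustrative names):

```shell
# the worker traps TERM and records a clean shutdown instead of dying abruptly
worker() {
  trap 'echo "got TERM, cleaning up" > cleanup.log; exit 0' TERM
  while true; do sleep 0.1; done
}

worker &
pid=$!
sleep 0.3            # give the worker time to install its trap
kill -15 "$pid"      # same as: kill -TERM "$pid"
wait "$pid"
cat cleanup.log      # got TERM, cleaning up
```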

Script

Shell script

A shell script (also known as a Bourne shell script) is a series of commands written in a file; the shell reads the commands from the file just as it would if you typed them into a terminal.

Bourne shell scripts generally start with the line #!/bin/sh, which indicates that the /bin/sh program should execute the commands in the script file. (Make sure that there’s no whitespace at the beginning of the script file.) The #! part is called a shebang.

#!/bin/sh
# Print something, then run ls
echo About to run the ls command.
ls

Running a script with a shebang is almost (but not quite) the same as running a command with your shell; for example, running a script called myscript causes the kernel to run /bin/sh myscript.

When writing scripts and working on the command line, remember what happens when the shell runs a command:

  1. Before running the command, the shell looks for variables, globs, and other substitutions and performs the substitutions if they appear.
  2. The shell passes the results of the substitutions to the command.
# Let’s say you’re looking for all entries in /etc/passwd that match the
# regular expression r.*t. You can run this command:
grep r.*t /etc/passwd

# It works most of the time, but sometimes it mysteriously fails. Why?
# The answer is probably in your current directory. If that directory
# contains files with names such as r.input and r.output, then the shell
# expands r.*t to r.input r.output and creates this command:
grep r.input r.output /etc/passwd

Notes:

  • The shebang doesn’t have to be #!/bin/sh; it can be built to run anything on your system that accepts scripting input, such as #!/usr/bin/python to run Python programs. In addition, you might come across scripts with a different pattern that includes /usr/bin/env. For example, you might see something like #!/usr/bin/env python as the first line. This instructs the env utility to run python. The reason for this is fairly simple; env looks for the command to run in the current command path, so you don’t need a standardized location for the executable. The disadvantage is that the first matching executable in the command path might not be what you want.
  • Be aware of your shell script sizes. Keep your shell scripts short. Bourne shell scripts aren’t meant to be big.
  • You can’t change an environment variable with a shell script, because scripts run as subshells.
  • If a line in your shell script gets too long, making it difficult to read and manipulate in your text editor, you can split it up with a backslash ( \ ).
#!/bin/sh
gs -q -dBATCH -dNOPAUSE -dSAFER \
-sOutputFile=- -sDEVICE=pnmraw $@

Exit codes

When a Unix program finishes, it leaves an exit code, a numeric value also known as an error code or exit value, for the parent process that started the program. When the exit code is zero 0, it typically means that the program ran without a problem. However, if the program has an error, it usually exits with a number other than 0 (but not always).

The shell holds the exit code of the last command in the $? special variable.

If you intend to use a command’s exit code, you must use or store that code immediately after running the command (because the next command you run overwrites the previous code).

When writing shell code, you may come across situations where your script needs to halt due to an error (such as a bad filename). Use exit 1 in your script to terminate and pass an exit code of 1 back to whatever parent process ran the script. (You can use different nonzero numbers if your script has various abnormal exit conditions.)

Note that some programs, like diff and grep, use nonzero exit codes to indicate normal conditions. For example, grep returns 0 if it finds something matching a pattern, 1 if it doesn’t, and 2 if it encounters an actual problem.

Note: The shell provides two extremely simple builtin commands that do nothing except terminate with either a 0 or 1 exit status. The true command always executes successfully and the false command always executes unsuccessfully.
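The behavior of $? and grep’s three-way exit status can be seen directly (the || guards only exist to print the code without stopping a set -e shell):

```shell
true;  echo $?                                       # 0
false || echo $?                                     # 1

echo "a needle" | grep -q needle; echo $?            # 0 (match found)
echo "haystack" | grep -q needle || echo $?          # 1 (no match)
grep -q needle /no/such/file 2>/dev/null || echo $?  # 2 (an actual error)
```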

Quotes

Single quotes

The easiest way to create a literal and make the shell leave a string alone is to enclose the entire string in single quotes ( ' ). As far as the shell is concerned, all characters between two single quotes, including spaces, make up a single parameter.

Double quotes

Double quotes ( " ) work just like single quotes, except that the shell expands any variables that appear within double quotes.

If we place text inside double quotes, all the special characters used by the shell lose their special meaning and are treated as ordinary characters. The exceptions are $, \ (backslash), and ` (back-quote).

Note: By default, word-splitting looks for the presence of spaces, tabs, and newlines (linefeed characters) and treats them as delimiters between words. This means unquoted spaces, tabs, and newlines are not considered to be part of the text. They serve only as separators.

echo "There is no * in my path: $PATH"   # double quotes
# There is no * in my path: /sbin:/sbin:/usr/local/bin:/usr/bin:/bin

echo 'There is no * in my path: $PATH'   # single quotes
# There is no * in my path: $PATH


echo 'The first argument was "'$?'"'     # quotes inside quotes
# The first argument was "0"


echo "$(lsblk | grep sda)"   # exact output keeping the format
# sda      8:0    0 119.2G  0 disk
# |-sda1   8:1    0   128M  0 part /boot/efi
# |-sda2   8:2    0     8G  0 part [SWAP]
# |-sda3   8:3    0 111.1G  0 part /

echo '$(lsblk | grep sda)'   # single quotes
# $(lsblk | grep sda)

echo $(lsblk | grep sda)     # string on a line separated by whitespace
# sda 8:0 0 119.2G 0 disk |-sda1 8:1 0 128M 0 part /boot/efi |-sda2 8:2
# 0 8G 0 part [SWAP] |-sda3 8:3 0 111.1G 0 part /

echo here are 4    spaces
# here are 4 spaces
echo "here are 4    spaces"
# here are 4    spaces
# Word-splitting is suppressed and the embedded spaces are not treated
# as delimiters; rather they become part of the argument.
# Once the double quotes are added, our command line
# contains a command followed by a single argument.

Literal single quotes

One way to pass a literal single quote to a command is to place a backslash before the single quote character. The backslash and quote must appear outside any pair of single quotes. A string such as 'don\'t' results in a syntax error.

echo I don\'t like contractions inside shell scripts.
# I don't like contractions inside shell scripts.

General rule: to quote an entire string with no substitutions, follow this procedure:

  1. Change all instances of ' (single quote) to '\'' (single quote, backslash, single quote, single quote).
  2. Enclose the entire string in single quotes.
echo 'this isn'\''t a forward slash: \'
# this isn't a forward slash: \

Literal double quotes

Sometimes we want to quote only a single character. To do this, we can precede a character with a backslash, which in this context is called the escape character. Often this is done inside double quotes to selectively prevent an expansion.

To allow a backslash character to appear, escape it by typing \\. Note that within single quotes, the backslash loses its special meaning and is treated as an ordinary character.

echo "The balance for user $USER is: \$5.00"
# The balance for user me is: $5.00

Single and double quotes

As we will see, with each succeeding level of quoting, more and more of the expansions are suppressed.

echo text ~/*.txt {a,b} $(echo foo) $((2+2)) $USER
# text /home/me/ls-output.txt a b foo 4 me

echo "text ~/*.txt {a,b} $(echo foo) $((2+2)) $USER"
# text ~/*.txt {a,b} foo 4 me

echo 'text ~/*.txt {a,b} $(echo foo) $((2+2)) $USER'
# text ~/*.txt {a,b} $(echo foo) $((2+2)) $USER

Variables

Global

How do we create a variable? Simple, we just use it. When the shell encounters a variable, it automatically creates it. This differs from many programming languages in which variables must be explicitly declared or defined before use.

There are some rules about variable names:

  1. Variable names may consist of alphanumeric characters and underscore characters.
  2. The first character of a variable name must be either a letter or an underscore.
  3. Spaces and punctuation symbols are not allowed.

Note that in an assignment, there must be no spaces between the variable name, the equal sign, and the value.

A common convention is to use uppercase letters to designate constants and lowercase letters for true variables.

# Assign the string "z" to variable a.
a=z
# Embedded spaces must be within quotes.
b="a string"
# Expansions: other variables can be expanded into the assignment.
c="a string and $b"
# Results of a command.
d="$(ls -l foo.txt)"
# Arithmetic expansion.
e=$((5 * 7))
# Escape sequences such as tabs and newlines.
f="\t\ta string\n"
# Multiple variable assignments may be done on a single line.
a=5 b="a string"

Quoting a variable expansion is optional, but omitting the quotes can cause problems, such as with empty strings and cases where a value could expand into multiword strings, as with filenames containing embedded spaces.

#!/bin/bash
# trouble: script to demonstrate common errors
number=
if [ $number = 1 ]; then
    echo "Number is equal to 1."
else
    echo "Number is not equal to 1."
fi

trouble
# The command [ $number = 1 ] yields [   = 1 ]
# The = operator is a binary operator (it requires a value on each
# side), but the first value is missing, so the test command expects a
# unary operator (such as -z) instead.
/home/me/bin/trouble: line 7: [: =: unary operator expected
# since the test failed (because of the error), the if command receives
# a non-zero exit code and acts accordingly, and the second echo command
# is executed.
Number is not equal to 1.
# If we quote [ "$number" = 1 ] yields [ "" = 1 ]
# therefore the program works perfectly.

Common mistake:

foo="yes"
echo $foo
# yes
echo $fool
#

# This happens because the shell happily created the variable fool when
# it encountered it and gave it the default value of nothing, or
# empty, so a command expecting a value from the variable will
# receive nothing and may fail. From this, we learn that we must pay
# close attention to our spelling!

During expansion, variable names may be surrounded by optional curly braces, {}. This is useful in cases where a variable name becomes ambiguous because of its surrounding context.

# We try to change the name of a file from myfile to myfile1,
# using a variable
filename="myfile"
touch "$filename"
mv "$filename" "$filename1"
# mv: missing destination file operand after `myfile'

mv "$filename" "${filename}1"
# By adding the surrounding braces, the shell no longer interprets the
# trailing 1 as part of the variable name.

Note: The shell actually does provide a way to enforce the immutability of constants, through the use of the declare builtin command with the -r (read-only) option. Had we assigned TITLE as shown below, the shell would prevent any subsequent assignment to TITLE. This feature is rarely used, but it exists for very formal scripts.

declare -r TITLE="Page Title"

# You can force the shell to restrict an assignment to integers by
# using the declare command with the -i option, but, like setting
# variables as read-only, this is rarely done.
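A quick bash-only sketch of the -i behavior (a nonexistent variable name used in arithmetic evaluates to zero):

```shell
declare -i num        # bash builtin: restrict num to integer values
num="5 + 2"           # the right-hand side is evaluated arithmetically
echo "$num"           # 7
num=hello             # an unset name in arithmetic context yields 0
echo "$num"           # 0
```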

Local

Local variables are only accessible within the shell function in which they are defined and cease to exist once the shell function terminates.

They are defined by preceding the variable name with the word local. This creates a variable that is local to the shell function in which it is defined. Once outside the shell function, the variable no longer exists.

#!/bin/bash
# local-vars: script to demonstrate local variables
foo=0  # global variable foo

funct_1 () {
    local foo # variable foo local to funct_1
    foo=1
    echo "funct_1: foo = $foo"
}

funct_2 () {
    local foo # variable foo local to funct_2
    foo=2
    echo "funct_2: foo = $foo"
}

echo "global:  foo = $foo"
funct_1
echo "global:  foo = $foo"
funct_2
echo "global:  foo = $foo"

local-vars
# global:  foo = 0
# funct_1: foo = 1
# global:  foo = 0
# funct_2: foo = 2
# global:  foo = 0

$1, $2, …

$1, $2, and all variables named as positive nonzero integers contain the values of the script parameters, or arguments.

Note: You can actually access more than the first nine parameters using parameter expansion. To specify a number greater than nine, surround the number in braces as in ${10}, ${55}, ${211}, and so on.
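For instance:

```shell
set -- a b c d e f g h i j k    # set eleven positional parameters
echo "$1 ${10} ${11}"
# a j k
echo "$1 $10"                   # without braces: $1 followed by a literal 0
# a a0
```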

#!/bin/sh
echo First argument: $1
echo Third argument: $3

# Running the script.
$ ./name_file one two three
# First argument: one
# Third argument: three

$#

This variable holds the number of arguments passed to a script.

It’s especially important when you’re running shift in a loop to pick through arguments. When $# is 0, no arguments remain, so $1 is empty.

#!/bin/sh
echo How many arguments are there? $#

# Running the script.
$ ./name_file one two three
# How many arguments are there? 3

$@, $*

$@
expands into the list of all positional parameters, starting with 1. When surrounded by double quotes, it expands each positional parameter into a separate word, as if each were individually surrounded by double quotes.
$*
expands into the list of all positional parameters, starting with 1. When surrounded by double quotes, it expands into a double-quoted string containing all of the positional parameters, each separated by the first character of the IFS shell variable (by default a space character).

"$@" is by far the most useful for most situations because it preserves the integrity of each positional parameter.
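A small sketch of the IFS effect on "$*" (run in a subshell so the IFS change does not leak):

```shell
(
    set -- a b c
    IFS=,
    echo "$*"    # joined into one word by the first character of IFS
    echo "$@"    # still three separate words; echo joins them with spaces
)
# a,b,c
# a b c
```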

#!/bin/sh
echo What are the arguments? "$@"

# Running the script.
$ ./name_file one two three
# What are the arguments? one two three


#!/bin/bash
# posit-params3: script to demonstrate $* and $@
print_params () {
    echo "\$1 = $1"
    echo "\$2 = $2"
    echo "\$3 = $3"
    echo "\$4 = $4"
}
pass_params () {
    echo -e "\n" '$* :'; print_params $*
    echo -e "\n" '"$*" :'; print_params "$*"
    echo -e "\n" '$@ :'; print_params $@
    echo -e "\n" '"$@" :'; print_params "$@"
}
pass_params "word" "words with spaces"

posit-param3
#   $* :
# $1 = word
# $2 = words
# $3 = with
# $4 = spaces

#   "$*" :
# $1 = word words with spaces
# $2 =
# $3 =
# $4 =

#   $@ :
# $1 = word
# $2 = words
# $3 = with
# $4 = spaces

#   "$@" :
# $1 = word
# $2 = words with spaces
# $3 =
# $4 =

$?

This variable holds the exit code of the last command that the shell executed.

#!/bin/sh
echo What\'s the last exit code? $?

# Running the script.
./name_file
What's the last exit code? 0

$$, $!

This variable holds the process ID of the shell.

The $! shell parameter always contains the process ID of the last job put into the background.

#!/bin/sh
echo What\'s the process ID of the shell? $$

# Running the script.
./name_file
# What's the process ID of the shell? 167893
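A brief sketch of $! with a background job:

```shell
#!/bin/sh
sleep 1 &             # put a job into the background
bgpid=$!              # process ID of the most recent background job
echo "background job PID: $bgpid"
wait "$bgpid"         # block until it finishes; $? is its exit code
echo "job finished with code $?"
```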

$0

This variable holds the name of the script and is useful for generating diagnostic messages.

For example, say your script needs to report an invalid argument that is stored in the $BADPARM variable. You can print the diagnostic message with the following line so that the script name appears in the error message. All diagnostic error messages should go to the standard error. For writing to the standard error, you can use 1>&2.

#!/bin/sh
BADPARM="Some bad parameter."
echo $0: bad option $BADPARM
echo $0 bad argument $1
echo $0: error to output $BADPARM 2>&1
echo $0: output to error $BADPARM 1>&2

# Running the script.
./name_file one two three
# ./name_file: bad option Some bad parameter.
# ./name_file bad argument one
# ./name_file: error to output Some bad parameter.
# ./name_file: output to error Some bad parameter.

# Running the script in this other way.
./name_file one two three > out_file 2> error_file
cat out_file
# ./name_file: bad option Some bad parameter.
# ./name_file bad argument one
# ./name_file: error to output Some bad parameter.
cat error_file
# ./name_file: output to error Some bad parameter.

shift

This built-in shell command can be used with argument variables to remove the first argument $1 and advance the rest of the arguments so that $2 becomes $1, $3 becomes $2, and so on.

#!/bin/sh
echo Argument: $1
shift
echo Argument: $1
shift
echo Argument: $1

# Running the script.
./name_file one two three
# Argument: one
# Argument: two
# Argument: three


#!/bin/bash
# posit-param2: script to display all arguments
count=1
while [[ $# -gt 0 ]]; do
    echo "Argument $count = $1"
    count=$((count + 1))
    shift
done

posit-param2 a b c d
# Argument 1 = a
# Argument 2 = b
# Argument 3 = c
# Argument 4 = d

Example

Command line options: Suppose we have a program to which several options will be added as follows:

output file
we will add an option to specify a name for a file to contain the program’s output. It will be specified as either -f file or --file file.
Interactive mode
this option will prompt the user for an output filename and will determine whether the specified file already exists. If it does, the user will be prompted before the existing file is overwritten. This option will be specified by either -i or --interactive.
help
either -h or --help may be specified to cause the program to output an informative usage message.
# Display a message when the help option is invoked
# or an unknown option is attempted.
usage () {
    echo "$PROGNAME: usage: $PROGNAME [-f file | -i]"
    return
}

# process command line options
interactive=
filename=

while [[ -n "$1" ]]; do
    case "$1" in
        -f | --file)
            shift
            filename="$1"
            ;;
        -i | --interactive) interactive=1
                            ;;
        -h | --help)
            usage
            exit
            ;;
        *)
            usage >&2
            exit 1
            ;;
    esac
    shift
done

# interactive mode
# Notice how the case statement below detects only whether the user
# chooses to overwrite or quit. Any other choice causes the loop to
# continue and prompts the user again.
if [[ -n "$interactive" ]]; then
    while true; do
        read -p "Enter name of output file: " filename
        if [[ -e "$filename" ]]; then
            read -p "'$filename' exists. Overwrite? [y/n/q] > "
            case "$REPLY" in
                Y|y) break
                     ;;
                Q|q) echo "Program terminated."
                     exit
                     ;;
                *)
                    continue
                    ;;
            esac
        elif [[ -z "$filename" ]]; then
            continue
        else
            break
        fi
    done
fi

Test

[, test

The [ character is an actual program on a Unix system. All Unix systems have a command called [ that performs tests for shell script conditionals.

Here is how [ works: the exit code is 0 if the test is true and nonzero when the test fails. This program is also known as test; the manual pages for test and [ are the same (the shell doesn't always run the external program; [ is usually a shell builtin).

Since test and [[]] do roughly the same thing, which is preferable? test is traditional (and part of the POSIX specification for standard shells, which are often used to run system startup scripts), whereas the double bracket is specific to bash (and a few other modern shells). It’s important to know how to use test since it is widely used, but double bracket is clearly more useful and is easier to code, so it is preferred for modern scripts.

In the shell, a single = performs assignment while == evaluates equivalence. This can be a little confusing, because the test command accepts a single = for string equivalence. This is yet another reason to use the more modern [[]] and (( )) compound commands in place of test.

Note: Since all expressions and operators used by test are treated as command arguments by the shell (unlike [[]] and (( )) ), characters that have special meaning to bash, such as < , > , ( , and ) , must be quoted or escaped.

[ hello = hello ]; echo $?
# 0

test hello = hello; echo $?
# 0

[ hello = bye ]
# returns 1


#!/bin/bash
# test-integer: evaluate the value of an integer.
INT=-5
if [ -z "$INT" ]; then
    echo "INT is empty." >&2
    exit 1
fi
if [ "$INT" -eq 0 ]; then
    echo "INT is zero."
else
    if [ "$INT" -lt 0 ]; then
        echo "INT is negative."
    else
        echo "INT is positive."
    fi
    if [ $((INT % 2)) -eq 0 ]; then
        echo "INT is even."
    else
        echo "INT is odd."
    fi
fi

There are many possibilities for using commands other than [ for tests. Here’s an example that uses grep:

#!/bin/sh
if grep -q daemon /etc/passwd; then
    echo The daemon user is in the passwd file.
else
    echo There is a problem. daemon is not in the passwd file.
fi

[[]]

A More Modern Version of test.

The [[]] command is similar to test (it supports all of its expressions), but adds an important new string expression.

string1 =~ regex

This returns true if string1 is matched by the extended regular expression regex.

Another added feature of [[]] is that the == operator supports pattern matching the same way pathname expansion does.

This makes [[]] useful for evaluating file and pathnames.

#!/bin/bash
# test-integer2: evaluate the value of an integer.
INT=-5
if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if [ "$INT" -eq 0 ]; then
        echo "INT is zero."
    else
        if [ "$INT" -lt 0 ]; then
            echo "INT is negative."
        else
            echo "INT is positive."
        fi
        if [ $((INT % 2)) -eq 0 ]; then
            echo "INT is even."
        else
            echo "INT is odd."
        fi
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi
# By applying the regular expression, we are able to limit the value of
# INT to only strings that begin with an optional minus sign, followed
# by one or more numerals. This expression also eliminates the
# possibility of empty values.


FILE=foo.bar
if [[ $FILE == foo.* ]]; then
    echo "$FILE matches pattern 'foo.*'"
fi
# foo.bar matches pattern 'foo.*'

(( ))

This is used to perform arithmetic truth tests. An arithmetic truth test results in true if the result of the arithmetic evaluation is non-zero. This is useful for operating on integers.

When used for logical operations, expressions follow the rules of arithmetic logic; that is, expressions that evaluate as zero are considered false, while non-zero expressions are considered true. The (( )) compound command maps the results into the shell’s normal exit codes.

Notice, that because the compound command (( )) is part of the shell syntax rather than an ordinary command, and it deals only with integers, it is able to recognize variables by name and does not require expansion to be performed.

Operator            Description
<=                  Less than or equal to
>=                  Greater than or equal to
<                   Less than
>                   Greater than
==                  Equal to
!=                  Not equal to
&&                  Logical AND
||                  Logical OR
expr1?expr2:expr3   Comparison (ternary) operator

Ternary operator: If expression expr1 evaluates to be non-zero (arithmetic true), then expr2; else expr3.

The ternary operator performs a stand-alone logical test. It can be used as a kind of if/then/else statement. It acts on three arithmetic expressions (strings won’t work), and if the first expression is true (or non-zero), the second expression is performed. Otherwise, the third expression is performed.

if ((1)); then echo "true"; else echo "false"; fi
# true
if ((0)); then echo "true"; else echo "false"; fi
# false


# This example implements a toggle. Each time the operator is performed,
# the value of the variable a switches from zero to one or vice versa.
a=0
((a<1?++a:--a))
echo $a
# 1
((a<1?++a:--a))
echo $a
# 0

# Performing assignment within the expressions is not straightforward.
a=0
# This throws an error.
((a<1?a+=1:a-=1))
# error token is "-=1"
# Avoid it by surrounding the assignment expression with parentheses.
((a<1?(a+=1):(a-=1)))
# 1


#!/bin/bash
# test-integer2a: evaluate the value of an integer.
INT=-5
if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if ((INT == 0)); then
        echo "INT is zero."
    else
        if ((INT < 0)); then
            echo "INT is negative."
        else
            echo "INT is positive."
        fi
        if (( ((INT % 2)) == 0)); then
            echo "INT is even."
        else
            echo "INT is odd."
        fi
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi


#!/bin/bash
# arith-loop: script to demonstrate arithmetic operators
finished=0
a=0
printf "a\ta**2\ta**3\n"
printf "=\t====\t====\n"

until ((finished)); do
    b=$((a**2))
    c=$((a**3))
    printf "%d\t%d\t%d\n" "$a" "$b" "$c"
    ((a<10?++a:(finished=1)))
done

arith-loop
# a a**2 a**3
# = ==== ====
# 0    0    0
# 1    1    1
# 2    4    8
# 3    9   27
# 4   16   64
# 5   25  125
# 6   36  216
# 7   49  343
# 8   64  512
# 9   81  729
# 10 100 1000

Testing

File Tests

Most file tests are called unary operations because they require only one argument: the file to test.

Three binary operators (tests that need two files as arguments) are used in file tests.

Ope.  Tests for           Ope.  Permission
-f    Regular file        -r    Readable
-e    File exists         -w    Writable
-s    File isn't empty    -x    Executable
-d    Directory           -u    Setuid
-h    Symbolic link       -g    Setgid
-b    Block device        -k    "Sticky"
-c    Character device
-p    Named pipe
-S    Socket
-nt   Newer than
-ot   Older than
-ef   Inode and device
[ -f file ]
# returns true if file is a regular file.

[ file1 -nt file2 ]; echo $?
# returns true if file1 has a newer modification date than file2.

[ file1 -ef file2 ]; echo $?
# returns true if they share inode numbers and devices.

Note: If the test command is used on a symbolic link, it tests the actual object being linked to, not the link itself (except for the -h test). That is, if link is a symbolic link to a regular file, [ -f link ] returns an exit code of 0 (true).
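To illustrate (using scratch files under /tmp; the names are arbitrary):

```shell
touch /tmp/target_file                  # scratch regular file
ln -s /tmp/target_file /tmp/a_link      # symbolic link pointing to it
[ -f /tmp/a_link ]; echo $?             # 0: tests the target, a regular file
[ -h /tmp/a_link ]; echo $?             # 0: tests the link itself
[ -h /tmp/target_file ]; echo $?        # 1: the target is not a link
rm /tmp/a_link /tmp/target_file
```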

String Tests

  • The binary string operator = returns true if its operands are equal.
  • The != operator that returns true if its operands are not equal.
  • -z Returns true if its argument is empty (length of string is zero).
  • -n Returns true if its argument is not empty (length of string is greater than zero).
  • == Returns true if its operands are equal. Single or double equal signs may be used. The use of double equal signs is supported by bash and is generally preferred, but it is not POSIX compliant.
[ hello = hello ];echo $?
# 0

[ hello == hello ];echo $?
# 0

[ hello != bye ]
# returns 0

[ -z "" ]
# returns 0

[ -n "something" ]
# returns 0

Arithmetic Tests

When working with numbers, use -eq instead of the equal sign.

Ope.  Returns true if
-eq   equal to
-ne   not equal to
-lt   less than
-gt   greater than
-le   less than or equal to
-ge   greater than or equal to
[ 1 -eq 1 ]
# returns true.

[ 01 -eq 1 ]
# returns true.


#!/bin/bash
# test-integer: evaluate the value of an integer.
INT=-5
if [ -z "$INT" ]; then
    echo "INT is empty." >&2
    exit 1
fi
if [ "$INT" -eq 0 ]; then
    echo "INT is zero."
else
    if [ "$INT" -lt 0 ]; then
        echo "INT is negative."
    else
        echo "INT is positive."
    fi
    if [ $((INT % 2)) -eq 0 ]; then
        echo "INT is even."
    else
        echo "INT is odd."
    fi
fi

Conditionals

if, then, else

The words if, then, else and fi in the shell script are shell keywords. What the if statement really does is evaluate the success or failure of commands.

  1. The shell runs the command after the if keyword and collects the exit code of that command.
  2. If the exit code is 0, the shell executes the commands that follow the then keyword, stopping when it reaches an else or fi keyword.
  3. If the exit code is not 0 and there’s an else clause, the shell runs the commands after the else keyword.
  4. The conditional ends at fi.
# Syntax:
if commands; then
    commands
else
    commands
fi

# if in the command line.
if commands; then commands; fi


# Check that argument is a directory.
if [[ ! -d "$1" ]]; then
    usage
    exit 1
fi


# Note: If a list of commands follows if, the last command in
# the list is evaluated:
if false; true; then echo "It's true."; fi
# It's true.
if true; false; then echo "It's true."; fi
#


#!/bin/sh
if [ "$1" = "hi" ]; then
    echo 'The first argument was "hi"'
else
    echo -n 'The first argument was not "hi" -- '
    echo It was '"'$1'"'
fi

# Running the script.
./name_file bye
# The first argument was not "hi" -- It was "bye"


# Other way for using commands other than [ for tests.
#!/bin/sh
if grep -q daemon /etc/passwd; then
    echo The daemon user is in the passwd file.
else
    echo There is a problem. daemon is not in the passwd file.
fi

Note: Without quotes " in if [ "$1" = hi ]; then in the previous example, that is if [ $1 = hi ]; then it would throw an error if a user might run the script with no parameters. If $1 is empty, the test reads [ = hi ], and the [ command will abort with an error.

#!/bin/bash
# test-file: Evaluate the status of a file
FILE=~/.bashrc
if [ -e "$FILE" ]; then
    if [ -f "$FILE" ]; then
        echo "$FILE is a regular file."
    fi
    if [ -d "$FILE" ]; then
        echo "$FILE is a directory."
    fi
    if [ -r "$FILE" ]; then
        echo "$FILE is readable."
    fi
    if [ -w "$FILE" ]; then
        echo "$FILE is writable."
    fi
    if [ -x "$FILE" ]; then
        echo "$FILE is executable/searchable."
    fi
else
    echo "$FILE does not exist"
    exit 1
fi
exit

elif

The elif keyword lets you string if conditionals together.

# Syntax:
if commands; then
    commands
elif commands; then
    commands
else
    commands
fi


#!/bin/sh
if [ "$1" = "hi" ]; then
    echo 'The first argument was "hi"'
elif [ "$2" = "bye" ]; then
    echo 'The second argument was "bye"'
else
    echo 'The first argument was not "hi" and the second was not "bye"'
fi

# Running the script.
./name_file hi hello
# The first argument was "hi"
./name_file hello bye
# The second argument was "bye"
./name_file hello seeya
# The first argument was not "hi" and the second was not "bye"
./name_file hi bye
# The first argument was "hi"

case

The case keyword forms another conditional construct that is exceptionally useful for matching strings. It does not execute any test commands and therefore does not evaluate exit codes. However, it can do pattern matching.

The patterns used by case are the same as those used by pathname expansion. Patterns are terminated with a ) character.

Pattern       Description
a)            Matches if word equals "a"
[[:alpha:]])  Matches if word is a single alphabetic char
???)          Matches if word is exactly three chars long
*.txt)        Matches if word ends with the chars ".txt"
*)            Matches any value of word

It is good practice to include * as the last pattern in a case command, to catch any values of word that did not match a previous pattern, that is, to catch any possible invalid values.

Note: Modern versions of bash add the ;;& notation to match more than one test (allows case to continue to the next test rather than simply terminating).

  1. The script matches $1 against each case value demarcated with the ) character.
  2. If a case value matches $1, the shell executes the commands below the case until it encounters ;;, at which point it skips to the esac keyword.
  3. The conditional ends with esac.
# Syntax:
# case word in
#    [pattern [| pattern]...) commands ;;]...
# esac

#!/bin/sh
case $1 in
    Bye)
        echo Fine, Bye.
        ;;
    Hi|Hello)
        echo Nice to see you.
        ;;
    What*)
        echo Whatever.
        ;;
    *)
        echo 'Huh?'
        ;;
esac


#!/bin/bash
read -p "enter word > "
case "$REPLY" in
    [[:alpha:]]) echo "is a single alphabetic character." ;;
    [ABC][0-9])  echo "is A, B, or C followed by a digit." ;;
    ???)         echo "is three characters long." ;;
    *.txt)       echo "is a word ending in '.txt'" ;;
    *)           echo "is something else." ;;
esac


#!/bin/bash
# case4-2: test a character
read -n 1 -p "Type a character > "
echo

case "$REPLY" in
    [[:upper:]])  echo "'$REPLY' is upper case." ;;&
    [[:lower:]])  echo "'$REPLY' is lower case." ;;&
    [[:alpha:]])  echo "'$REPLY' is alphabetic." ;;&
    [[:digit:]])  echo "'$REPLY' is a digit." ;;&
    [[:graph:]])  echo "'$REPLY' is a visible character." ;;&
    [[:punct:]])  echo "'$REPLY' is a punctuation symbol." ;;&
    [[:space:]])  echo "'$REPLY' is a whitespace character." ;;&
    [[:xdigit:]]) echo "'$REPLY' is a hexadecimal digit." ;;&
esac

It is also possible to combine multiple patterns using the vertical bar character as a separator. This creates an “or” conditional pattern. This is useful for such things as handling both uppercase and lowercase characters.

#!/bin/bash
# case-menu: a menu driven system information program
clear
echo "
Please Select:

A. Display System Information
B. Display Disk Space
C. Display Home Space Utilization
Q. Quit
"
read -p "Enter selection [A, B, C or Q] > "

case "$REPLY" in
    q|Q) echo "Program terminated."
         exit
         ;;
    a|A) echo "Hostname: $HOSTNAME"
         uptime
         ;;
    b|B) df -h
         ;;
    c|C) if [[ "$(id -u)" -eq 0 ]]; then
             echo "Home Space Utilization (All Users)"
             du -sh /home/*
         else
             echo "Home Space Utilization ($USER)"
             du -sh "$HOME"
         fi
         ;;
    *)
        echo "Invalid entry" >&2
        exit 1
        ;;
esac

;

It’s just the regular shell marker for the end of a command.

It usually appears before the then keyword because we want to put then on the same line as the test. Without the semicolon, the shell passes then as a parameter to the [ command, which often results in an error that isn't easy to track. You can avoid the semicolon by placing then on a separate line.

if [ "$1" = "hi" ]
then
    echo 'The first argument was "hi"'
fi

# It's the same as:
if [ "$1" = "hi" ]; then
    echo 'The first argument was "hi"'
fi

!

You can invert a test (that is, a logical not) by placing this operator before a test.

#!/bin/sh
if [ ! "$1" = hi ]; then
    echo 'The first argument was not hi'
fi
# In this specific case of comparisons, you might see != used as an
# alternative, but ! can be used with any of the condition tests.


#!/bin/bash
# test-integer4: determine if an integer is outside a
# specified range of values.
MIN_VAL=1
MAX_VAL=100
INT=50
if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if [[ ! ("$INT" -ge "$MIN_VAL" && "$INT" -le "$MAX_VAL") ]]; then
        echo "$INT is outside $MIN_VAL to $MAX_VAL."
    else
        echo "$INT is in range."
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi

# We also include parentheses around the expression, for grouping. If
# these were not included, the negation would only apply to the first
# expression and not the combination of the two. Coding with test:
if [ ! \( "$INT" -ge "$MIN_VAL" -a "$INT" -le "$MAX_VAL" \) ];
then
    echo "$INT is outside $MIN_VAL to $MAX_VAL."
else
    echo "$INT is in range."
fi

&&, ||, -a, -o

The command list command1 && command2 runs command1, and if its exit code is 0, the shell also runs command2.

The command list command1 || command2 runs command1, and if it returns a nonzero exit code, the shell runs command2.

The constructs && and || are often used in if tests, and in both cases, the exit code of the last command run determines how the shell processes the conditional. In the case of the && construct, if the first command fails, the shell uses its exit code for the if statement, but if the first command succeeds, the shell uses the exit code of the second command for the conditional. In the case of the || construct, the shell uses the exit code of the first command if successful, or the exit code of the second if the first is unsuccessful.
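On the command line, the same short-circuit behavior looks like this:

```shell
true && echo "second command runs"     # first succeeded, so && continues
# second command runs
false && echo "never printed"          # first failed, so && stops here
false || echo "fallback runs"          # first failed, so || continues
# fallback runs
```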

If the conditionals include the test command [, you can use -a and -o instead of && and ||.

Operation   [, test   [[ ]], (( ))
AND         -a        &&
OR          -o        ||
#!/bin/sh
if [ "$1" = hi ] || [ "$1" = bye ]; then
    echo 'The first argument was "'$1'"'
fi

# The same that above but using -o instead of ||.
#!/bin/sh
if [ "$1" = hi -o "$1" = bye ]; then
    echo 'The first argument was "'$1'"'
fi


#!/bin/bash
# test-integer3: determine if an integer is within a
# specified range of values.
MIN_VAL=1
MAX_VAL=100
INT=50
if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if [[ "$INT" -ge "$MIN_VAL" && "$INT" -le "$MAX_VAL" ]]; then
        echo "$INT is within $MIN_VAL to $MAX_VAL."
    else
        echo "$INT is out of range."
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi

Loops

for

Traditional Shell Form

The for loop (which is a “for each” loop) is the most common. for, in, do, and done are all shell keywords.

variable is the name of a variable that will increment during the execution of the loop, words is an optional list of items that will be sequentially assigned to variable, and commands are the commands that are to be executed on each iteration of the loop.

The really powerful feature of for is the number of interesting ways we can create the list of words.

# Syntax:
for variable [in words]; do
    commands
done


for i in A B C; do echo $i; done
# A
# B
# C


for i in {A..C}; do
    echo $i
done
# A
# B
# C


for i in distros*.txt; do echo "$i"; done
# distros-by-date.txt
# distros-dates.txt
# distros-key-names.txt

# The one precaution needed is to check that the expansion actually
# matched something; if no files match, the unexpanded pattern itself
# is passed to the loop. To guard against this, we would code as follows

for i in distros*.txt; do
    if [[ -e "$i" ]]; then
        echo "$i"
    fi
done


# longest-word: find longest string in a file
while [[ -n "$1" ]]; do
    if [[ -r "$1" ]]; then
        max_word=
        max_len=0
        for i in $(strings "$1"); do
            len="$(echo -n "$i" | wc -c)"
            if (( len > max_len )); then
                max_len="$len"
                max_word="$i"
            fi
        done
        echo "$1: '$max_word' ($max_len characters)"
    fi
    shift
done
# Note that, we do not surround the command substitution $(strings "$1")
# with double quotes. This is because we actually want word splitting to
# occur to give us our list. If we had surrounded the command
# substitution with quotes, it would produce only a single word
# containing every string in the file.

# By omitting the list of words in the for command, the positional
# parameters are used instead.
#!/bin/bash
# longest-word2: find longest string in a file
for i; do
    if [[ -r "$i" ]]; then
        max_word=
        max_len=0
        for j in $(strings "$i"); do
            len="$(echo -n "$j" | wc -c)"
            if (( len > max_len )); then
                max_len="$len"
                max_word="$j"
            fi
        done
        echo "$i: '$max_word' ($max_len characters)"
    fi
done

C Language Form

Here expression1, expression2, and expression3 are arithmetic expressions and commands are the commands to be performed during each iteration of the loop.

  • expression1 is used to initialize conditions for the loop
  • expression2 is used to determine when the loop is finished
  • expression3 is carried out at the end of each iteration of the loop
for (( expression1; expression2; expression3 )); do
    commands
done


for (( i=0; i<4; i=i+1 )); do echo $i; done
# 0
# 1
# 2
# 3

# In terms of behavior, this form is equivalent to the
# following construct:
(( expression1 ))
while (( expression2 )); do
    commands
    (( expression3 ))
done


#!/bin/bash
# simple_counter: demo of C style for command
for (( i=0; i<5; i=i+1 )); do
    echo $i
done

simple_counter
# 0
# 1
# 2
# 3
# 4

while

The Bourne shell’s while loop uses exit codes, like the if conditional.

bash provides two builtin commands that can be used to control program flow inside loops. The break command immediately terminates a loop, and program control resumes with the next statement following the loop. The continue command causes the remainder of the loop to be skipped, and program control resumes with the next iteration of the loop.
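A minimal sketch of break and continue in a while loop:

```shell
count=0
while true; do
    count=$((count + 1))
    if (( count % 2 == 0 )); then
        continue            # skip even numbers; jump to next iteration
    fi
    if (( count > 5 )); then
        break               # leave the loop entirely
    fi
    echo "$count"
done
# 1
# 3
# 5
```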

while commands; do commands; done

#!/bin/bash
# while-count: display a series of numbers
count=1
while [[ "$count" -le 5 ]]; do
    echo "$count"
    count=$((count + 1))
done
echo "Finished."

while-count
# 1
# 2
# 3
# 4
# 5
# Finished.


#!/bin/sh
FILE=/tmp/whiletest.$$;
echo firstline > $FILE

while tail -10 $FILE | grep -q firstline; do
# add lines to $FILE until tail -10 $FILE no longer prints "firstline"
    echo -n Number of lines in $FILE:' '
    wc -l $FILE | awk '{print $1}'
    echo newline >> $FILE
done

rm -f $FILE


#!/bin/bash
# while-menu2: a menu driven system information program
DELAY=3 # Number of seconds to display results

while true; do
    clear
    cat <<- _EOF_
Please Select:

1. Display System Information
2. Display Disk Space
3. Display Home Space Utilization
0. Quit

_EOF_
    read -p "Enter selection [0-3] > "

    if [[ "$REPLY" =~ ^[0-3]$ ]]; then
        if [[ "$REPLY" == 1 ]]; then
            echo "Hostname: $HOSTNAME"
            uptime
            sleep "$DELAY"
            continue
        fi
        if [[ "$REPLY" == 2 ]]; then
            df -h
            sleep "$DELAY"
            continue
        fi
        if [[ "$REPLY" == 3 ]]; then
            if [[ "$(id -u)" -eq 0 ]]; then
                echo "Home Space Utilization (All Users)"
                du -sh /home/*
            else
                echo "Home Space Utilization ($USER)"
                du -sh "$HOME"
            fi
            sleep "$DELAY"
            continue
        fi
        if [[ "$REPLY" == 0 ]]; then
            break
        fi
    else
        echo "Invalid entry."
        sleep "$DELAY"
    fi
done
echo "Program terminated."

until

The Bourne shell also has an until loop that works just like while, except that it breaks the loop when it encounters a zero exit code rather than a nonzero exit code.

#!/bin/bash
# until-count: display a series of numbers
count=1
until [[ "$count" -gt 5 ]]; do
    echo "$count"
    count=$((count + 1))
done
echo "Finished."

Reading files

while and until can process standard input. This allows files to be processed with while and until loops.

To redirect a file to the loop, we place the redirection operator after the done statement. The loop will use read to input the fields from the redirected file. The read command will exit after each line is read, with a zero exit status until the end-of-file is reached. At that point, it will exit with a non-zero exit status, thereby terminating the loop. It is also possible to pipe standard input into a loop.

# Display the contents of the distros.txt file.
#!/bin/bash
# while-read: read lines from a file
while read distro version release; do
    printf "Distro: %s\tVersion: %s\tReleased: %s\n" \
           "$distro" \
           "$version" \
           "$release"
done < distros.txt

#!/bin/bash
# while-read2: read lines from a file
sort -k 1,1 -k 2n distros.txt | while read distro version release; do
    printf "Distro: %s\tVersion: %s\tReleased: %s\n" \
           "$distro" \
           "$version" \
           "$release"
done

Command substitution $( )

The Bourne shell can redirect a command’s standard output back to the shell’s own command line. That is, you can use a command’s output as an argument to another command, or you can store the command output in a shell variable by enclosing a command in $().

Shells typically implement command substitution by creating a child process to run the first command with its standard output piped back to the shell, which reads that output, parsing it into words separated by whitespace. Because the shell can’t know it has all the output from the child until the pipe closes or the child dies, it waits until then before it starts another child process to run the second command.

Note: The traditional syntax for command substitution is to enclose the command in backticks `` . The $() syntax is a newer form, but it is a POSIX standard.
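One practical difference: $() nests cleanly, while nested backticks must be escaped with backslashes (a small sketch):

```shell
echo "Today is $(date +%A)"        # new form
echo "Today is `date +%A`"         # traditional form, same result
echo "$(basename "$(pwd)")"        # nested substitution, readable
echo "`basename \`pwd\``"          # nested backticks need escaping
```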

#!/bin/sh
FLAGS=$(grep ^flags /proc/cpuinfo | sed 's/.*://' | head -1)
echo Your processor supports:
for f in $FLAGS; do
    case $f in
        fpu) MSG="floating point unit"
             ;;
        3dnow) MSG="3DNOW graphics extensions"
               ;;
        mtrr) MSG="memory type range register"
              ;;
        *) MSG="unknown"
           ;;
    esac
    echo $f: $MSG
done
# This example is somewhat complicated because it demonstrates that you
# can use both single quotes and pipelines inside the command
# substitution.


echo "$(lsblk | grep sda)"   # exact output keeping the format
# sda      8:0    0 119.2G  0 disk
# |-sda1   8:1    0   128M  0 part /boot/efi
# |-sda2   8:2    0     8G  0 part [SWAP]
# |-sda3   8:3    0 111.1G  0 part /

echo '$(lsblk | grep sda)'   # single quotes
# $(lsblk | grep sda)

echo $(lsblk | grep sda)     # string on a line separated by whitespace
# sda 8:0 0 119.2G 0 disk |-sda1 8:1 0 128M 0 part /boot/efi |-sda2 8:2
# 0 8G 0 part [SWAP] |-sda3 8:3 0 111.1G 0 part /


# longest-word: find longest string in a file
while [[ -n "$1" ]]; do
    if [[ -r "$1" ]]; then
        max_word=
        max_len=0
        for i in $(strings "$1"); do
            len="$(echo -n "$i" | wc -c)"
            if (( len > max_len )); then
                max_len="$len"
                max_word="$i"
            fi
        done
        echo "$1: '$max_word' ($max_len characters)"
    fi
    shift
done
# Note that, we do not surround the command substitution $(strings "$1")
# with double quotes. This is because we actually want word splitting to
# occur to give us our list. If we had surrounded the command
# substitution with quotes, it would produce only a single word
# containing every string in the file.

Functions

Shell functions are “mini-scripts” that are located inside other scripts and can act as autonomous programs. Shell functions have two syntactic forms, shown below, where name is the name of the function and commands is a series of commands contained within the function.

Shell functions can return an exit status by including an integer argument to the return command.

# Simpler (and generally preferred) form:
name () {
     commands
     return
}

# Formal form:
function name {
     commands
     return
}
# The return command (which is optional) satisfies the requirement
# that a function body contain at least one command.

# The following is a script that demonstrates the use of a shell
# function.
#!/bin/bash
# Shell function demo
function step2 {
     echo "Step 2"
     return
}
# Main program starts here
echo "Step 1"
step2
echo "Step 3"
# Note that for function calls to be recognized as shell functions and
# not interpreted as the names of external programs, shell function
# definitions must appear in the script before they are called.


# Shell functions with exit status including in return command.
function step2 {
     echo "Step 2"
     return 1
}


# Function that detects whether the user has permission to read
# all home directories.
report_home_space () {
    if [[ "$(id -u)" -eq 0 ]]; then
        cat <<- _EOF_
             <h2>Home Space Utilization (All Users)</h2>
             <pre>$(du -sh /home/*)</pre>
             _EOF_
    else
        cat <<- _EOF_
             <h2>Home Space Utilization ($USER)</h2>
             <pre>$(du -sh $HOME)</pre>
             _EOF_
    fi
    return
}

Shell Functions In .bashrc File: Shell functions make excellent replacements for aliases, and are actually the preferred method of creating small commands for personal use. Aliases are limited in the kind of commands and shell features they support, whereas shell functions allow anything that can be scripted.

# Function named ds for our .bashrc file:
ds () {
    echo "Disk Space Utilization"
    df -h
}

Positional parameters ( $1, $2,…) can be used to pass arguments to shell functions. Note that inside a function, $0 still expands to the name of the script, not the function.

file_info () {
    # file_info: function to display file information
    if [[ -e "$1" ]]; then
        echo -e "\nFile Type:"
        file "$1"
        echo -e "\nFile Status:"
        stat "$1"
    else
        echo "$FUNCNAME: usage: $FUNCNAME file" >&2
        return 1
    fi
}

Utilities

Debug

-x

bash also provides a method of tracing, implemented by the -x option.

With tracing enabled, we see the commands performed with expansions applied. The leading plus signs indicate the display of the trace to distinguish them from lines of regular output.

#!/bin/bash -x
# trouble: script to demonstrate common errors
number=1
echo "number=$number" # DEBUG
if [ $number = 1 ]; then
    echo "Number is equal to 1."
else
    echo "Number is not equal to 1."
fi

trouble
# + number=1
# + '[' 1 = 1 ']'
# + echo 'Number is equal to 1.'
# Number is equal to 1.

set -x

To perform a trace on a selected portion of a script, rather than the entire script, we can use the set command with the -x option.

We use the set command with the -x option to activate tracing and the +x option to deactivate tracing. This technique can be used to examine multiple portions of a trouble-some script.

#!/bin/bash
# trouble: script to demonstrate common errors
number=1
echo "number=$number" # DEBUG
set -x # Turn on tracing
if [ $number = 1 ]; then
    echo "Number is equal to 1."
else
    echo "Number is not equal to 1."
fi
set +x # Turn off tracing

Defensive Programming

Be careful with some commands like rm. This command may produce errors.

cd $dir_name
rm *
# If the dir_name doesn't exist, the cd command fails, and the script
# continues to the next line and deletes the files in the current
# working directory.

cd "$dir_name" && rm *
# This way, if the cd command fails, the rm command is not carried
# out. This is better but still leaves open the possibility that the
# variable, dir_name, is unset or empty, which would result in the files
# in the user’s home directory being deleted.

[[ -d "$dir_name" ]] && cd "$dir_name" && rm *
# It works perfectly.

# Delete files in directory $dir_name
if [[ ! -d "$dir_name" ]]; then
    echo "No such directory: '$dir_name'" >&2
    exit 1
fi
if ! cd "$dir_name"; then
    echo "Cannot cd to '$dir_name'" >&2
    exit 1
fi
if ! rm *; then
    echo "File deletion failed. Check results" >&2
    exit 1
fi
# Often, it is best to include logic to terminate the script and report
# an error when a situation such as the one shown previously occurs.

Watch Out for Filenames

Unix is extremely permissive about filenames. In fact, there are only two characters that cannot be included in a filename. The first is the / character since it is used to separate elements of a pathname, and the second is the null character (a zero byte), which is used internally to mark the ends of strings.

Of particular concern are leading hyphens. For example, it’s perfectly legal to have a file named -rf ~.

In certain cases it is better to use ./* than just *.

As a general rule, always precede wildcards (such as * and ?) with ./ to prevent misinterpretation by commands. This includes things like *.pdf and ???.mp3, for example.

rm *
# to the following:
rm ./*
# This will prevent a filename starting with a hyphen from being
# interpreted as a command option.

exec

It’s a built-in shell feature that replaces the current shell process with the program you name after exec. It carries out the exec() system call. This feature is designed for saving system resources, but remember that there’s no return; when you run exec in a shell script, the script and shell running the script are gone, replaced by the new command.

To test this in a shell window, try running exec cat. After you press CTRL-D or CTRL-C to terminate the cat program, your window should disappear, because the shell process that was running in it was replaced by cat and no longer exists.
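To watch the replacement without losing your shell, run exec inside a subshell so only the subshell is replaced (a sketch; $BASHPID is bash-specific):

```shell
# The PID stays the same across exec, because exec replaces the
# process image rather than forking a child.
( echo "subshell pid: $BASHPID"
  exec sh -c 'echo "replaced, same pid: $$"' )
echo "the parent shell is still here"
```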

subshells

A subshell is an entirely new shell process that you can create just to run a command or two. The new shell has a copy of the original shell’s environment, and when the new shell exits, any changes you made to its shell environment disappear (including variable assignments), leaving the initial shell to run as normal.

A subshell allows commands to be grouped together: (cmd1; cmd2; [cmd3;...]). This can also be done with a group command { cmd1; cmd2; [cmd3; ...] } (note that the braces must be separated from the commands by a space), but the difference is that a group command executes all of its commands in the current shell, while a subshell executes its commands in a child copy of the current shell. Therefore, in most cases, unless a script requires a subshell, group commands are preferable: they are both faster and require less memory.

Where subshells really shine is with pipelines. When constructing a pipeline of commands, it is often useful to combine the results of several commands into a single stream.

Note: The “Compound Commands” section of the bash man page contains a full description of group command and subshell notations.

# The following line executes the command uglyprogram while in uglydir
# and leaves the original shell intact:
(cd uglydir; uglyprogram)

# The following line shows how to add a component to the path that might
# cause problems as a permanent change:
(PATH=/usr/confusing:$PATH; uglyprogram)
# Same command but in built-in syntax that avoids the subshell.
PATH=/usr/confusing:$PATH uglyprogram


# This segment performs redirections on multiple commands.
ls -l > output.txt
echo "Listing of foo.txt" >> output.txt
cat foo.txt >> output.txt

# Using a subshell.
(ls -l; echo "Listing of foo.txt"; cat foo.txt) > output.txt

# Using group command.
{ ls -l; echo "Listing of foo.txt"; cat foo.txt; } > output.txt


# Archive the entire directory tree within orig and then unpacks the
# archive into the new directory target, which effectively duplicates
# the files and folders in orig
tar cf - orig | (cd target; tar xvf -)
# Double-check this sort of command before you run it to make sure that
# the target directory exists and is completely separate from the orig directory
# You can check using this:
[ -d orig -a ! orig -ef target ]


echo "foo" | read
echo $REPLY
#
# The content of the REPLY variable is always empty because the read
# command is executed in a subshell, and its copy of REPLY is destroyed
# when the subshell terminates.

echo "foo" |(read; echo $REPLY)
# foo

Here documents EOF

Say you want to print a large section of text or feed a lot of text to another command. Rather than using several echo commands, you can use the shell’s here document feature.

The items EOF control the here document. <<EOF tells the shell to redirect all subsequent lines to the standard input of the command that precedes <<EOF, which in the next example is cat. The redirection stops as soon as the EOF marker occurs on a line by itself. Convention dictates that the marker be in all uppercase letters.

Here documents can be used with any command that accepts standard input.

NOTE: by default, single and double quotes within here documents lose their special meaning to the shell. The shell treats them as ordinary characters. This allows us to embed quotes freely within a here document.
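A related point: quoting the delimiter itself (<<'EOF') disables parameter expansion inside the here document. A small sketch:

```shell
user="world"

cat <<EOF
Hello, $user
EOF
# Hello, world

cat <<'EOF'
Hello, $user
EOF
# Hello, $user
```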

#!/bin/sh
DATE=$(date)
cat <<EOF
Date: $DATE

The output above is from the Unix date command.
It's not a very interesting command.
EOF


# Here is a way to transfer a file using anonymous ftp.
#!/bin/bash
# Script to retrieve a file via FTP
FTP_SERVER=ftp.nl.debian.org
FTP_PATH=/debian/dists/stretch/main/installer-amd64/current/images/cdrom
REMOTE_FILE=debian-cd_info.tar.gz
ftp -n << _EOF_
open $FTP_SERVER
user anonymous me@linuxbox
cd $FTP_PATH
hash
get $REMOTE_FILE
bye
_EOF_
ls -l "$REMOTE_FILE"

# If we change the redirection operator from << to <<-, the shell will
# ignore leading tab characters (but not spaces) in the here document.


# Here is a way to transfer a file using anonymous ftp.
#!/bin/sh
# Usage:
#     ftpfile machine file
# set -x
SOURCE=$1
FILE=$2
GETHOST="uname -n"
BFILE=`basename $FILE`
ftp -n $SOURCE <<EndFTP
ascii
user anonymous $USER@`$GETHOST`
get $FILE /tmp/$BFILE
EndFTP

Read values from stdin

read

The read builtin command is used to read a single line of standard input. This command can be used to read keyboard input or, when redirection is employed, a line of data from a file.

The command has the syntax read [-options] [variable...] where options is one or more of the available options listed later in the table and variable is the name of one or more variables used to hold the input value. If no variable name is supplied, the shell variable REPLY contains the line of data.

Option        Description
-a array      Assign the input to array, starting with index zero
-d delim      The first character of delim is used to indicate the end
              of input, rather than a newline character
-e            Use Readline to handle input. This permits input editing
              in the same manner as the command line
-i string     Use string as a default reply if the user simply presses
              Enter. Requires the -e option
-n num        Read num characters of input, rather than an entire line
-p prompt     Display the string prompt before reading input
-r            Raw mode. Do not interpret backslash characters as escapes
-s            Silent mode. Do not echo characters to the display as they
              are typed (useful for inputting passwords)
-t seconds    Timeout. Terminate input after seconds. read returns a
              non-zero exit status if the input times out
-u fd         Use input from file descriptor fd, rather than stdin

#!/bin/bash
# read-integer: evaluate the value of an integer.
# This line is the usual method.
# read -p "Please enter an integer > " int
# This line is another method.
echo -n "Please enter an integer -> "
read int
if [[ "$int" =~ ^-?[0-9]+$ ]]; then
    if [ "$int" -eq 0 ]; then
        echo "$int is zero."
    else
        if [ "$int" -lt 0 ]; then
            echo "$int is negative."
        else
            echo "$int is positive."
        fi
        if [ $((int % 2)) -eq 0 ]; then
            echo "$int is even."
        else
            echo "$int is odd."
        fi
    fi
else
    echo "Input value is not an integer." >&2
    exit 1
fi

read-integer
# Please enter an integer ->
5
# 5 is positive.
# 5 is odd.


# read can assign input to multiple variables.
#!/bin/bash
# read-multiple: read multiple values from keyboard
echo -n "Enter one or more values > "
read var1 var2 var3

echo "var1 = '$var1'"
echo "var2 = '$var2'"
echo "var3 = '$var3'"

read-multiple
# Enter one or more values >
a b c
# var1 = 'a'
# var2 = 'b'
# var3 = 'c'

read-multiple
# Enter one or more values >
a
# var1 = 'a'
# var2 = ''
# var3 = ''
# When read receives fewer values than expected, the extra
# variables are empty.

read-multiple
# Enter one or more values >
a b c d e
# var1 = 'a'
# var2 = 'b'
# var3 = 'c d e'
# When read receives more input than expected, the final variable
# contains all of the extra input.


#!/bin/bash
# read-single: read multiple values into default variable
echo -n "Enter one or more values > "
read
echo "REPLY = '$REPLY'"

read-single
# Enter one or more values >
a b c d
# REPLY = 'a b c d'
# If no variables are listed after the read command, a shell variable,
# REPLY, will be assigned all the input.


#!/bin/bash
# read-secret: input a secret passphrase
if read -t 10 -sp "Enter secret passphrase > " secret_pass; then
    echo -e "\nSecret passphrase = '$secret_pass'"
else
    echo -e "\nInput timed out" >&2
    exit 1
fi
# The script prompts the user for a secret passphrase and waits ten
# seconds for input. If the entry is not completed within the specified
# time, the script exits with an error. Since the -s option is included,
# the characters of the passphrase are not echoed to the display as they
# are typed.

# To supply the user with a default response use -e and -i.
#!/bin/bash
# read-default: supply a default value if user presses Enter key.
read -e -p "What is your user name? " -i "$USER"
echo "You answered: '$REPLY'"

read-default
# What is your user name? Carlos
# You answered: 'Carlos'


echo "foo" |(read; echo $REPLY)
# foo

IFS

Normally, the shell performs word splitting on the input provided to read. This means that multiple words separated by one or more spaces become separate items on the input line and are assigned to separate variables by read. This behavior is configured by a shell variable named IFS (for Internal Field Separator). The default value of IFS contains a space, a tab, and a newline character, each of which will separate items from one another.

We can adjust the value of IFS to control the separation of fields input to read. For example, the /etc/passwd file contains lines of data that use the colon character as a field separator. By changing the value of IFS to a single colon, we can use read to input the contents of /etc/passwd and successfully separate fields into different variables.
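A minimal sketch of the idea before the full script:

```shell
# Split one /etc/passwd-style line on colons; the IFS assignment
# applies only to the read command itself.
line="root:x:0:0:root:/root:/bin/bash"
IFS=":" read -r user pw uid gid name home shell <<< "$line"
echo "$user $uid $shell"
# root 0 /bin/bash
```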

#!/bin/bash
# read-ifs: read fields from a file
FILE=/etc/passwd
read -p "Enter a username > " user_name
# regular expression used by grep assures that the username will match
# only a single line in the /etc/passwd file.
file_info="$(grep "^$user_name:" $FILE)"
if [ -n "$file_info" ]; then
# The effect of the assignment is temporary, changing the environment
# only for the duration of the command. Here, the value of IFS is
# changed to a colon character.
    IFS=":" read user pw uid gid name home shell <<< "$file_info"
# The <<< operator indicates a here string. A here string is like a here
# document, only shorter, consisting of a single string.
    echo "User =      '$user'"
    echo "UID =       '$uid'"
    echo "GID =       '$gid'"
    echo "Full Name = '$name'"
    echo "Home Dir. = '$home'"
    echo "Shell =     '$shell'"
else
    echo "No such user '$user_name'" >&2
    exit 1
fi

Note: You Can’t Pipe read. While the read command normally takes input from standard input, you cannot do this:

echo "foo" | read

We would expect this to work, but it does not. The command will appear to succeed, but the REPLY variable will always be empty.

The explanation has to do with the way the shell handles pipelines. In bash (and other shells such as sh), pipelines create subshells. These are copies of the shell and its environment that are used to execute the command in the pipeline. In our previous example, read is executed in a subshell.

Subshells in Unix-like systems create copies of the environment for the processes to use while they execute. When a process finishes, its copy of the environment is destroyed. This means that a subshell can never alter the environment of its parent process. read assigns variables, which then become part of the environment. In the previous example, read assigns the value foo to the variable REPLY in its subshell’s environment, but when the command exits, the subshell and its environment are destroyed, and the effect of the assignment is lost.
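As a workaround, bash 4.2 and later offer the lastpipe shell option, which runs the last command of a pipeline in the current shell (only when job control is off, as it is in scripts), so the assignment survives. A sketch:

```shell
#!/bin/bash
# With lastpipe set, read runs in the current shell, not a subshell.
shopt -s lastpipe
echo "foo" | read
echo "$REPLY"
# foo
```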

Validating Input

Often the difference between a well-written program and a poorly written one lies in the program’s ability to deal with the unexpected. It is important to perform these kinds of programming checks every time a program receives input to guard against invalid data. Omitting these safeguards in the interests of economy might be excused if a program is to be used once and only by the author to perform some special task. Even then, if the program performs dangerous tasks such as deleting files, it would be wise to include data validation, just in case.

#!/bin/bash
# read-validate: validate input
invalid_input () {
    echo "Invalid input '$REPLY'" >&2
    exit 1
}

read -p "Enter a single item > "

# input is empty (invalid)
[[ -z "$REPLY" ]] && invalid_input

# input is multiple items (invalid)
(( "$(echo "$REPLY" | wc -w)" > 1 )) && invalid_input

# is input a valid filename?
if [[ "$REPLY" =~ ^[-[:alnum:]\._]+$ ]]; then
    echo "'$REPLY' is a valid filename."
    if [[ -e "$REPLY" ]]; then
        echo "And file '$REPLY' exists."
    else
        echo "However, file '$REPLY' does not exist."
    fi
    # is input a floating point number?
    if [[ "$REPLY" =~ ^-?[[:digit:]]*\.[[:digit:]]+$ ]]; then
        echo "'$REPLY' is a floating point number."
    else
        echo "'$REPLY' is not a floating point number."
    fi
    # is input an integer?
    if [[ "$REPLY" =~ ^-?[[:digit:]]+$ ]]; then
        echo "'$REPLY' is an integer."
    else
        echo "'$REPLY' is not an integer."
    fi
else
    echo "The string '$REPLY' is not a valid filename."
fi

Menus

A common type of interactivity is called menu-driven. In menu-driven programs, the user is presented with a list of choices and is asked to choose one.

Please Select:

1. Display System Information
2. Display Disk Space
3. Display Home Space Utilization
0. Quit

Enter selection [0-3] >
#!/bin/bash
# read-menu: a menu driven system information program
clear

echo "
Please Select:

1. Display System Information
2. Display Disk Space
3. Display Home Space Utilization
0. Quit
"
read -p "Enter selection [0-3] > "

if [[ "$REPLY" =~ ^[0-3]$ ]]; then
    if [[ "$REPLY" == 0 ]]; then
        echo "Program terminated."
        exit
    fi
    if [[ "$REPLY" == 1 ]]; then
        echo "Hostname: $HOSTNAME"
        uptime
        exit
    fi
    if [[ "$REPLY" == 2 ]]; then
        df -h
        exit
    fi
    if [[ "$REPLY" == 3 ]]; then
        # Detect whether the user had permission to
        # read all home directories.
        if [[ "$(id -u)" -eq 0 ]]; then
            echo "Home Space Utilization (All Users)"
            du -sh /home/*
        else
            echo "Home Space Utilization ($USER)"
            du -sh "$HOME"
        fi
        exit
    fi
else
    echo "Invalid entry." >&2
    exit 1
fi

exit

The exit command accepts a single, optional argument, which becomes the script’s exit status. When no argument is passed, the exit status defaults to the exit status of the last command executed. Usually, an exit command appearing on the last line of a script is there as a formality; when a script “runs off the end” (reaches end of file), it terminates with the exit status of the last command executed.
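A quick sketch of both behaviors, probing child shells so the current shell is unaffected:

```shell
# exit status comes from exit's argument...
rc=0; bash -c 'exit 3' || rc=$?
echo "explicit: $rc"     # explicit: 3

# ...or, with no exit command, from the last command executed
rc=0; bash -c 'false' || rc=$?
echo "implicit: $rc"     # implicit: 1
```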

Named Pipes

Named pipes are used to create a connection between two processes and can be used just like other types of files.

Named pipes behave like files but actually form first-in first-out (FIFO) buffers. As with ordinary (unnamed) pipes, data goes in one end and emerges out the other.

With named pipes, it is possible to set up something like this: process1 > named_pipe and this: process2 < named_pipe and it will behave like this: process1 | process2.

# Create a named pipe.
mkfifo pipe1

# To see how the named pipe works, we'll need two terminal windows. In
# the first terminal, we enter a simple command and redirect its output
# to the named pipe.
ls -l > pipe1
# After we press the Enter key, the command will appear to hang. This is
# because there is nothing receiving data from the other end of the pipe
# yet. When this occurs, it is said that the pipe is blocked. This
# condition will clear once we attach a process to the other end and it
# begins to read input from the pipe. Using the second terminal window,
# we enter this command:
cat < pipe1
# -rw-r--r--  1 richard richard      106K Oct  4 13:34 bootup.svg
# drwxr-xr-x  3 richard richard      4.0K Oct  6 00:35 compiling

Miscellaneous

Process substitution

Process substitution allows us to treat the output of a subshell as an ordinary file for purposes of redirection.

Process substitution is expressed in two ways. For processes that produce standard output, it looks like this <(list) or, for processes that intake standard input, it looks like this >(list) where list is a list of commands.

Note: The EXPANSION section of the bash man page contains a subsection covering process substitution.
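The >(list) form pairs naturally with tee, which writes its input to each file argument. A sketch (the output filenames are illustrative):

```shell
# tee copies its input both to standard output (redirected to
# full-listing.out) and to the grep process, which keeps only
# the .txt entries.
ls -l | tee >(grep '\.txt$' > txt-only.out) > full-listing.out
```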

# Process substitution allows us to treat the output of a subshell as an
# ordinary file for purposes of redirection. In fact, since it is a form
# of expansion, we can examine its real value.
echo <(echo "foo")
# /dev/fd/63
# The expansion passes a file, i.e., the name of the file descriptor
# that holds the subshell's output.
# By using echo to view the result of the expansion, we see that the
# output of the subshell is being provided by a file named /dev/fd/63.


echo "foo" | read
echo $REPLY
#
# The content of the REPLY variable is always empty because the read
# command is executed in a subshell, and its copy of REPLY is destroyed
# when the subshell terminates. To solve our problem with read, we can
# employ process substitution like this:
read < <(echo "foo")
echo $REPLY
# foo


# read loop that processes the contents of a directory listing created
# by a subshell:
#!/bin/bash
# pro-sub: demo of process substitution
while read attr links owner group size date time filename; do
    # With <<- the heredoc body and delimiter must be indented with
    # actual tabs; a plain << heredoc is used here instead.
    cat << EOF
Filename:   $filename
Size:       $size
Owner:      $owner
Group:      $group
Modified:   $date $time
Links:      $links
Attributes: $attr

EOF
done < <(ls -l | tail -n +2)

pro-sub | head -n 20
# Filename:   addresses.ldif
# Size:       14540
# Owner:      me
# Group:      me
# Modified:   2009-04-02 11:12
# Links:      1
# Attributes: -rw-r--r--
#
# Filename:   bin
# Size:       4096
# Owner:      me
# Group:      me
# Modified:   2009-07-10 07:31
# Links:      2
# Attributes: drwxr-xr-x
#
# Filename:   bookmarks.html
# Size:       394213
# Owner:      me
# Group:      me

Files in scripts

To include code from another file in shell script, use the dot ( . ) operator.

This method of inclusion is also called sourcing a file and is useful for reading variables (for example, in a shared configuration file) and other kinds of definitions. This is not the same as executing another script; when you run a script (as a command), it starts in a new shell, and you can’t get anything back other than the output and the exit code.

# For example, this runs the commands in the file config.sh:
. config.sh
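A fuller sketch of sourcing a shared configuration file; the path /tmp/config.sh and the variable names are illustrative:

```shell
# Create an illustrative configuration file.
cat > /tmp/config.sh <<EOF
BACKUP_DIR=/var/backups
MAX_COPIES=5
EOF

# Sourcing runs the file's commands in the *current* shell, so the
# variables it defines remain available afterwards.
. /tmp/config.sh
echo "$BACKUP_DIR"
# /var/backups
```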

basename

If you need to strip the extension from a filename or get rid of the directories in a full pathname, use the basename command.

basename example.html .html
# example
basename /usr/local/bin/example
# example

# This script converts GIF image files to the PNG format.
#!/bin/sh
for file in *.gif; do
    # exit if there are no files
    if [ ! -f "$file" ]; then
        exit
    fi
    b=$(basename "$file" .gif)
    echo "Converting $b.gif to $b.png..."
    giftopnm "$b.gif" | pnmtopng > "$b.png"
done
done
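As an aside, bash's parameter expansion can do the same stripping as basename without spawning an external command; a sketch, not part of the script above:

```shell
file=/usr/local/share/example.gif
b=${file##*/}     # strip leading directories -> example.gif
b=${b%.gif}       # strip the .gif suffix     -> example
echo "$b"
# example
```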

trap

trap uses the syntax trap argument signal [signal...] where argument is a string that will be read and treated as a command and signal is the specification of a signal that will trigger the execution of the interpreted command.

A common problem with scripts that employ temporary files is that if the script is aborted, the temporary files could be left behind. In the next example (if we didn’t use the trap command), pressing CTRL-C before the second cat command leaves a temporary file in /tmp. Avoid this if possible. Instead, use the trap command to create a signal handler to catch the signal that CTRL-C generates and remove the temporary files.

#!/bin/bash
# trap-demo: simple signal handling demo
trap "echo 'I am ignoring you.'" SIGINT SIGTERM

for i in {1..5}; do
    echo "Iteration $i of 5"
    sleep 5
done
#This script defines a trap that will execute an echo command each time
#either the SIGINT or SIGTERM signal is received while the script is
#running. Execution of the program looks like this when the user
#attempts to stop the script by pressing Ctrl-c:
trap-demo
#Iteration 1 of 5
#Iteration 2 of 5
#^CI am ignoring you.
#Iteration 3 of 5
#^CI am ignoring you.
#Iteration 4 of 5
#Iteration 5 of 5


# This script shows the device interrupts that have occurred in the
# last two seconds:
#!/bin/sh
TMPFILE1=$(mktemp /tmp/im1.XXXXXX)
TMPFILE2=$(mktemp /tmp/im2.XXXXXX)
trap "rm -f $TMPFILE1 $TMPFILE2; exit 1" INT
# The trap command to create a signal handler to catch the signal that
# CTRL-C generates and remove the temporary files.
# You must use exit in the handler to explicitly end script execution.

cat /proc/interrupts > $TMPFILE1
sleep 2
cat /proc/interrupts > $TMPFILE2
diff $TMPFILE1 $TMPFILE2
rm -f $TMPFILE1 $TMPFILE2
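A common alternative (not used in the script above) is to trap the EXIT pseudo-signal, which fires on any termination, normal or signaled, so a single handler covers cleanup in every case. A minimal sketch:

```shell
#!/bin/bash
# exit-trap demo: the EXIT trap runs when the script ends for any
# reason, so the temporary file is removed even after Ctrl-C.
TMPFILE=$(mktemp /tmp/demo.XXXXXX)
trap 'rm -f "$TMPFILE"' EXIT

date > "$TMPFILE"
cat "$TMPFILE"
```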


# A separate shell function is specified for each signal to be handled:
#!/bin/bash
# trap-demo2: simple signal handling demo
exit_on_signal_SIGINT () {
    echo "Script interrupted." >&2
    exit 0
}
exit_on_signal_SIGTERM () {
    echo "Script terminated." >&2
    exit 0
}
trap exit_on_signal_SIGINT SIGINT
trap exit_on_signal_SIGTERM SIGTERM

for i in {1..5}; do
    echo "Iteration $i of 5"
    sleep 5
done
# Note the inclusion of an exit command in each of the signal-handling
# functions. Without an exit, the script would continue after completing
# the function.

wait

bash has a builtin command to help manage asynchronous execution. The wait command causes a parent script to pause until a specified process (i.e., the child script) finishes.

#!/bin/bash
# async-parent: Asynchronous execution demo (parent)
echo "Parent: starting..."

echo "Parent: launching child script..."
async-child &
# $! shell parameter, will always contain the process ID of
# the last job put into the background
pid=$!
echo "Parent: child (PID= $pid) launched."

echo "Parent: continuing..."
sleep 2

echo "Parent: pausing to wait for child to finish..."
wait "$pid"

echo "Parent: child is finished. Continuing..."
echo "Parent: parent is done. Exiting."

#!/bin/bash
# async-child: Asynchronous execution demo (child)
echo "Child: child is running..."
sleep 5
echo "Child: child is done. Exiting."

async-parent
# Parent: starting...
# Parent: launching child script...
# Parent: child (PID= 6741) launched.
# Parent: continuing...
# Child: child is running...
# Parent: pausing to wait for child to finish...
# Child: child is done. Exiting.
# Parent: child is finished. Continuing...
# Parent: parent is done. Exiting.

-e

Stops script execution as soon as a command exits with a nonzero (error) status.

expr

If you need to use arithmetic operations in your shell scripts, the expr command can help (and even do some string operations).

The expr command is a clumsy, slow way of doing math. If you find yourself using it frequently, you should probably be using a language like Python instead of a shell script.

expr 1 + 2
# 3
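For integer math in bash, the built-in arithmetic expansion $(( )) avoids spawning an external process and is much faster than expr; a sketch:

```shell
echo $(( 1 + 2 ))
# 3

# Variables may be used without the $ prefix inside (( )).
x=6
echo $(( x * 7 ))
# 42
```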

Arrays

Creating an array

Note: Most of the following only works in bash.

Arrays are variables that hold more than one value at a time. An array has cells, which are called elements, and each element contains data. An individual array element is accessed using an address called an index or subscript.

Array variables are named just like other bash variables, and are created automatically when they are accessed.

# Creating an array.
a[1]=foo
# Element 1 of array a is assigned the value “foo”.

# Displays the stored value of element 1.
echo ${a[1]}
# foo
# The use of braces is required to prevent the shell from attempting
# pathname expansion on the name of the array element.


# An array can also be created with the declare command.
declare -a a
# Using the -a option, this example of declare creates the array a.

Assigning values to an array

Single values may be assigned using the syntax name[subscript]=value where name is the name of the array and subscript is an integer (or arithmetic expression) greater than or equal to zero. value is a string or integer assigned to the array element.

Multiple values may be assigned using the syntax name=(value1 value2 ...) where name is the name of the array and value placeholders are values assigned sequentially to elements of the array, starting with element zero.

# Assign abbreviated days of the week to the array days.
days=(Sun Mon Tue Wed Thu Fri Sat)

# It's also possible to assign values to a specific element.
# Doesn't work in zsh.
days=([0]=Sun [1]=Mon [2]=Tue [3]=Wed [4]=Thu [5]=Fri [6]=Sat)

Array operations

The subscripts * and @ can be used to access every element in an array. As with positional parameters, the @ notation is the more useful of the two.

animals=("a dog" "a cat" "a fish")
for i in ${animals[*]}; do echo $i; done
# a
# dog
# a
# cat
# a
# fish

for i in ${animals[@]}; do echo $i; done
# a
# dog
# a
# cat
# a
# fish

for i in "${animals[*]}"; do echo $i; done
# a dog a cat a fish

for i in "${animals[@]}"; do echo $i; done
# a dog
# a cat
# a fish
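Beyond * and @, parameter expansion can also report an array's size and the subscripts in use; a brief sketch (the animals array is redefined here so the block is self-contained):

```shell
animals=("a dog" "a cat" "a fish")

# Number of elements in the array.
echo ${#animals[@]}
# 3

# The subscripts that actually hold values.
echo ${!animals[@]}
# 0 1 2
```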

References

  • Manual pages (man pages).
  • How Linux Works by Brian Ward.
  • The Linux Command Line by William Shotts.
  • Debian Reference.