UNIX and Linux System Administration Handbook

by Evi Nemeth · 2010 · 1,327 pages

Key Takeaways

1. Master the Command Line and Scripting Fundamentals

In our experience, professional administrators spend much of their time writing scripts.

Core administrative tasks. Effective system administration relies heavily on command-line proficiency and scripting. While graphical tools exist, the command line offers speed, flexibility, and the ability to automate repetitive tasks, which is crucial for managing multiple systems efficiently. Scripting transforms manual processes into reliable, repeatable operations.

Essential shell skills. Familiarity with shell basics like command editing, pipes, redirection, and variables is non-negotiable. Understanding common filter commands such as grep, sort, cut, head, and tail allows for powerful text processing directly from the command line or within scripts. Regular expressions are the universal language for pattern matching and manipulation.
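
As a rough illustration of this filter style, the pipeline sketch below strings a few of these commands together; the log and password-file paths are common defaults and may differ on your system.

    # ten largest home directories, via a classic filter pipeline
    du -sk /home/* | sort -rn | head -10

    # extract login names from /etc/passwd and flag any duplicates
    cut -d: -f1 /etc/passwd | sort | uniq -d

    # pull recent error or warning lines out of a log with a regular expression
    grep -E 'error|warn(ing)?' /var/log/messages | tail -20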

Beyond the shell. For more complex automation, scripting languages like Perl and Python are invaluable. They offer richer data structures, better control flow, and extensive libraries for tasks like parsing logs, managing users, or interacting with network services. While shell scripts are great for simple tasks, Perl or Python are often better choices for larger, more maintainable projects.

2. Understand the Core OS Architecture: Booting, Processes, Filesystem, Kernel

An awful lot of UNIX and Linux information is available these days, so we’ve designed this book to occupy a specific niche in the ecosystem of man pages, blogs, magazines, books, and other reference materials that address the needs of system administrators.

System lifecycle. A fundamental understanding of how the operating system starts up (bootstrapping) and shuts down is essential for troubleshooting and recovery. Knowing the steps from the initial ROM code to the init process and startup scripts allows administrators to diagnose boot failures and enter recovery modes.
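
For example, on a SysV-style Linux system you might check the current runlevel, list the startup scripts it runs, and review the kernel's boot messages; the rc2.d directory reflects a Debian-style layout and varies by platform.

    # current runlevel (SysV-style init)
    who -r

    # startup scripts executed when entering runlevel 2 (Debian-style layout)
    ls /etc/rc2.d/

    # kernel messages from the most recent boot
    dmesg | less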

Processes and control. The OS manages running programs as processes, each with unique identifiers (PID, PPID), ownership (UID, GID), and resource allocation (niceness). Administrators must know how to monitor processes (ps, top), send signals (kill), and manage their lifecycle to maintain system stability and performance.
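
A minimal sketch of the day-to-day process commands; the PID shown is just a placeholder.

    # list every process with owner, PID, and parent PID
    ps -ef

    # watch resource usage interactively
    top

    # ask a process to exit cleanly, and force it only if it ignores the request
    kill -TERM 12345        # 12345 is an example PID
    kill -KILL 12345

    # lower a running process's priority by raising its niceness
    renice 10 -p 12345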

The unified filesystem. UNIX and Linux treat almost everything as a file, organized in a single hierarchical tree. Understanding pathnames, file types (regular, directory, device, socket, pipe, link), and permissions (mode bits, ACLs) is critical for managing access control and locating resources. Filesystems must be mounted and unmounted to become accessible.
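
A few representative commands, assuming a Linux host; the filenames, device, and user name are illustrative.

    # file type, ownership, and permission bits
    ls -l /etc/passwd

    # owner may read/write/execute, group may read/execute, others get nothing
    chmod 750 deploy.sh

    # attach and detach a filesystem (device and mount point are examples)
    mount /dev/sdb1 /mnt/data
    umount /mnt/data

    # grant one extra user read access via an ACL, where ACLs are enabled
    setfacl -m u:alice:r report.txt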

Kernel and drivers. The kernel is the core of the OS, managing hardware and providing system services. Device drivers interface the kernel with specific hardware. While administrators rarely write kernel code, they must understand how to configure kernel modules, tune parameters, and manage device files to support hardware and optimize performance.
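
On Linux, module and parameter management typically looks like this sketch:

    # modules currently loaded into the kernel
    lsmod

    # load a module (and its dependencies) by name
    modprobe ext4

    # read a tunable kernel parameter, then change it at run time
    sysctl net.ipv4.ip_forward
    sysctl -w net.ipv4.ip_forward=1

    # device files live under /dev
    ls -l /dev/sda /dev/null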

3. Storage Management is Layered and Critical for Reliability

ZFS: all your storage problems solved.

Hardware and interfaces. Storage relies on various hardware technologies like HDDs and SSDs, connected via interfaces such as SATA, SAS, and Fibre Channel. Understanding their characteristics (speed, capacity, reliability, cost) and interfaces is the first step in designing a storage solution.

Software abstraction layers. Between the raw hardware and the user's view of files lie several software layers (a command-level sketch follows the list):

  • Partitioning: Dividing a disk into sections (MBR, GPT).
  • RAID: Combining disks for performance or redundancy (RAID 0, 1, 5, 6).
  • Logical Volume Management (LVM): Pooling storage devices into flexible volumes (LVM2, SVM, AIX LVM).
  • Filesystems: Organizing data into files and directories (ext4, ZFS, VxFS, JFS2).
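
Walking down that stack on a Linux host might look like the following; the device names, volume group, and sizes are purely illustrative.

    # 1. partition the disk
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary 0% 100%

    # 2. mirror two partitions with software RAID 1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    # 3. pool the RAID device with LVM and carve out a logical volume
    pvcreate /dev/md0
    vgcreate datavg /dev/md0
    lvcreate -L 100G -n srv datavg

    # 4. create a filesystem on the volume and mount it
    mkfs -t ext4 /dev/datavg/srv
    mount /dev/datavg/srv /srv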

Designing for reliability. Disk failures are inevitable. RAID provides redundancy, but backups are essential for data recovery from other failures. ZFS integrates many of these layers (filesystem, LVM, RAID) into a single system, offering features like snapshots and checksumming for enhanced data integrity and simplified management.
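
By contrast, the equivalent ZFS workflow is only a few commands; the pool name and disk identifiers are examples.

    # one mirrored pool replaces partitioning, RAID, and volume management
    zpool create tank mirror c1t0d0 c1t1d0

    # filesystems and snapshots are cheap, first-class objects
    zfs create tank/home
    zfs snapshot tank/home@before-upgrade
    zfs list -t snapshot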

4. TCP/IP Networking is the Foundation; Understand Addresses and Routing

TCP/IP and the Internet share a history that goes back several decades.

The protocol stack. TCP/IP is a suite of protocols organized in layers (IP, ICMP, TCP, UDP) that enable communication across networks. Understanding how packets are addressed (MAC, IP, ports) and encapsulated is fundamental to network configuration and debugging. IPv4 still carries most traffic, but IPv6 adoption continues to grow.

Addressing and subnets. IP addresses identify network interfaces. Subnetting divides networks into smaller, manageable segments using netmasks. CIDR notation simplifies addressing and routing. DHCP automates IP address assignment and network configuration for clients.
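
A small worked example: a /26 netmask (255.255.255.192) splits a /24 into four subnets of 64 addresses each, 62 of them usable once the network and broadcast addresses are set aside. Assigning such an address on Linux with iproute2 might look like this (interface and address are illustrative):

    ip addr add 192.168.10.5/26 dev eth0
    ip addr show eth0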

Routing packets. Routing is the process of directing packets towards their destination using routing tables. Static routes are manually configured, while dynamic routing protocols (RIP, OSPF, BGP) allow routers to automatically discover and adapt to network topology changes. Understanding how packets traverse gateways is key to diagnosing connectivity issues.
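
Inspecting and adding routes, in both the traditional and the Linux iproute2 syntax; the networks and gateway are examples.

    # show the kernel routing table
    netstat -rn
    ip route show

    # add a static route to a remote network through a local gateway
    route add -net 10.1.2.0 netmask 255.255.255.0 gw 192.168.10.1
    ip route add 10.1.2.0/24 via 192.168.10.1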

5. DNS is the Internet's Essential Directory Service

Who needs DNS?

Mapping names to numbers. The Domain Name System (DNS) is a distributed database that translates human-readable hostnames (like www.example.com) into machine-readable IP addresses, and vice versa (reverse DNS). It's critical for accessing resources on the internet and within private networks.
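
The dig and host utilities make both directions easy to test; example.com and 192.0.2.10 are documentation placeholders.

    # forward lookup: name to address
    dig +short www.example.com A
    host www.example.com

    # reverse lookup: address to name
    dig +short -x 192.0.2.10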

Hierarchical and distributed. DNS is organized into a hierarchy of domains and zones, managed by different organizations worldwide. Name servers (master, slave, caching) store and serve this data. Delegation allows domains to hand off authority for subdomains to other servers.

Caching and security. Name servers cache query results to improve performance. Understanding TTLs (Time To Live) is important for managing cache freshness. DNSSEC adds cryptographic signatures to DNS data to prevent forgery and tampering, enhancing security, although deployment is still ongoing.
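
A quick way to see TTLs and DNSSEC material with dig (the domain is a placeholder):

    # the second column of each answer record is its TTL, in seconds
    dig +noall +answer www.example.com A

    # request DNSSEC signatures along with the answer
    dig +dnssec example.com SOA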

6. Email Systems are Complex; Design and Filter Carefully

Mail systems.

Components and flow. Email delivery involves multiple software components: user agents (MUA), submission agents (MSA), transport agents (MTA), delivery agents (DA), and access agents (AA). Messages flow through these components, often across multiple servers, using protocols like SMTP, IMAP, and POP.
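
One way to see the SMTP portion of this flow is to speak the protocol by hand against a test server; the hostnames and addresses below are illustrative.

    telnet mail.example.com 25
    HELO client.example.com
    MAIL FROM:<tester@example.com>
    RCPT TO:<user@example.com>
    DATA
    Subject: manual SMTP test

    Just checking that the MTA accepts and routes mail.
    .
    QUIT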

MTA is the core. The Mail Transport Agent (MTA) is responsible for routing messages between servers and delivering them locally. Popular MTAs include sendmail, Exim, and Postfix. Configuring the MTA is a major administrative task, involving defining mail domains, routing rules, and security policies.

Fighting spam and malware. Content scanning is essential for protecting users from unsolicited email and malicious attachments. Techniques include blacklists, whitelists, greylisting, heuristic analysis (SpamAssassin), and cryptographic verification (SPF, DKIM). Scanning can occur in-line during the SMTP session or after messages are queued.
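
Two quick checks along these lines, assuming the dig and spamassassin tools are installed; the domain and filename are examples.

    # SPF policies are published as TXT records in DNS
    dig +short TXT example.com

    # run SpamAssassin in test mode against a saved message to see its score
    spamassassin -t < saved-message.txt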

7. Security is an Ongoing Process, Not a Product

Is UNIX secure?

No system is perfectly secure. Security is not a state you achieve by installing a single product; it's a continuous process of vigilance, patching, configuration, and monitoring. UNIX systems, while robust, have vulnerabilities that require constant attention.

Threat vectors. Security can be compromised through social engineering (exploiting human trust), software vulnerabilities (bugs in code), and configuration errors (misconfigured services or permissions). Attackers use tools like rootkits and malware to gain and maintain unauthorized access.

Layered defense. Effective security requires a multi-layered approach (representative commands follow the list):

  • Access Control: Managing user privileges (root, sudo, PAM, ACLs).
  • Patching: Keeping software up-to-date to fix known vulnerabilities.
  • Firewalls: Filtering network traffic to block unwanted connections.
  • Monitoring: Using tools (Nmap, Nessus, OSSEC, logs) to detect suspicious activity.
  • Cryptography: Using tools (SSH, PGP, Kerberos, TLS) for secure communication and authentication.
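
A handful of representative commands for these layers; the address, port, and rules are illustrative, and port scans should only be run against hosts you are authorized to test.

    # which commands can this account run with elevated privileges?
    sudo -l

    # scan a host for open TCP ports
    nmap -sT 192.0.2.10

    # what is listening locally right now?
    netstat -tlnp

    # minimal iptables policy: allow inbound SSH, drop other inbound TCP
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp -j DROP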

8. Automate Tasks and Manage Configuration Systematically

Good system administrators write scripts.

Efficiency and consistency. Manual configuration of multiple systems is time-consuming and error-prone. Automation through scripting (shell, Perl, Python) ensures tasks are performed consistently and frees administrators for more complex work.
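
Even a tiny loop over SSH captures the idea; the hostnames are placeholders and key-based logins are assumed to be set up already.

    # run the same health check on several hosts
    for host in web1 web2 db1; do
        echo "== $host =="
        ssh "$host" 'uptime; df -h /'
    done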

Configuration management tools. For larger environments, dedicated configuration management systems (like cfengine, LCFG) or package management systems (APT, yum, Zypper) provide structured ways to define and enforce system configurations across a network. These tools manage software installation, configuration files, and dependencies.

Version control. Tracking changes to configuration files and scripts using revision control systems (Subversion, Git) is crucial for debugging problems and reverting to known-good states. This provides an audit trail and facilitates collaboration among administrators.
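
One common, low-ceremony approach is to put a configuration directory directly under Git; this is a sketch rather than a full change-management workflow.

    cd /etc
    git init
    git add .
    git commit -m "Initial snapshot of /etc"

    # later, see exactly what changed before you start troubleshooting
    git diff
    git log --stat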

9. Leverage Tools for Monitoring, Debugging, and Analysis

Network management is the art and science of keeping a network healthy.

Visibility is key. You cannot manage what you cannot see. A wide array of tools provides insight into system and network activity, helping administrators diagnose problems and understand performance characteristics.

Essential debugging tools (example invocations follow the list):

  • ping: Check host reachability.
  • traceroute: Trace network paths.
  • netstat: View network connections and statistics.
  • tcpdump/Wireshark: Capture and analyze network packets.
  • strace/truss/tusc: Trace system calls and signals.
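
Typical invocations, working outward from basic reachability to packet capture; the address, interface, and PID are examples.

    # is the host up and reachable at all?
    ping -c 3 192.0.2.10

    # what path do packets take to reach it?
    traceroute 192.0.2.10

    # what sockets are open or listening locally?
    netstat -an

    # watch DNS traffic on the wire
    tcpdump -n -i eth0 port 53

    # trace the system calls of a misbehaving process (Linux strace shown)
    strace -p 12345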

Performance analysis. Tools like vmstat, iostat, sar, and top monitor resource utilization (CPU, memory, disk I/O) to identify bottlenecks. Understanding these metrics is vital for tuning system performance.
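
Typical sampling invocations; sar assumes the sysstat collection package is installed.

    # CPU, memory, and swap activity every five seconds
    vmstat 5

    # extended per-disk utilization and service times
    iostat -x 5

    # historical CPU utilization from the sar archives
    sar -u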

Network management systems. For larger networks, tools like SNMP managers, Cacti (graphing), and Nagios (event monitoring) provide centralized visibility and alerting for network devices and services. NetFlow provides detailed traffic analysis.
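
Most of these tools ultimately poll SNMP agents; a raw query with the Net-SNMP utilities looks like this (community string and address are illustrative).

    # walk the standard "system" subtree of a device's SNMP agent
    snmpwalk -v2c -c public 192.0.2.1 system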

10. Design for Scale, Reliability, and Disaster Recovery

A service is only as reliable as the data center that houses it.

Beyond single systems. Managing a few stand-alone systems is different from managing a large, interconnected infrastructure. Design decisions must consider how systems will scale to handle increased load and how they will remain available in the face of failures.

Data center fundamentals. Physical infrastructure (power, cooling, racks) is the foundation of reliability. Data center tiers classify reliability levels based on redundancy. Environmental monitoring is crucial.

High availability and recovery. Redundancy (RAID, clustering, load balancing) minimizes downtime from component failures. Disaster recovery planning ensures business continuity after major events. Backups are the cornerstone of any recovery plan, requiring careful planning, execution, and testing.
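
As a simple sketch of the backup-and-verify habit, rsync can mirror a tree to another host and a test restore can be compared against the original; the host and paths are examples.

    # mirror home directories to a backup host
    rsync -a --delete /home/ backuphost:/backups/home/

    # restore one file to a scratch directory and confirm it matches
    rsync -a backuphost:/backups/home/alice/.profile /tmp/restore-test/
    diff /home/alice/.profile /tmp/restore-test/.profile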

Virtualization benefits. Technologies like server virtualization and cloud computing offer flexibility, scalability, and improved resource utilization, contributing to both cost savings and enhanced availability through features like live migration and rapid provisioning.

11. Integrate with Other Platforms, Especially Windows

Chances are high that your environment includes both Microsoft Windows and UNIX systems.

Interoperability is necessary. In heterogeneous environments, UNIX/Linux systems must coexist and interact with other platforms, most commonly Microsoft Windows. This requires understanding and configuring cross-platform services.

Common integration points (example commands follow the list):

  • Remote Access: Using SSH clients (PuTTY, SecureCRT) on Windows to access UNIX command lines, or RDP clients (rdesktop) on UNIX to access Windows desktops.
  • File Sharing: Using Samba on UNIX/Linux to act as a CIFS/SMB server for Windows clients, or mounting CIFS shares on Linux clients.
  • Printer Sharing: Configuring Samba to share UNIX printers with Windows clients.
  • Authentication: Integrating UNIX/Linux systems into an Active Directory domain using Samba's winbind or alternative solutions.
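
A few example commands on the Linux side; the server, share, and account names are illustrative, and the winbind check assumes the host has already joined the domain.

    # mount a Windows (CIFS/SMB) share on a Linux client
    mount -t cifs //fileserver/projects /mnt/projects -o username=alice

    # list and browse shares with the Samba client tools
    smbclient -L //fileserver -U alice
    smbclient //fileserver/projects -U alice

    # confirm that winbind can enumerate Active Directory users
    wbinfo -u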

Leveraging cross-platform tools. Many applications and command-line tools are available for both Windows and UNIX/Linux (e.g., Cygwin, OpenOffice, Wine, native ports), facilitating cross-platform work and simplifying administration.

12. Documentation, Policy, and Communication are Vital Soft Skills

The more experienced you become at system management, the more the user community comes to depend on you.

Beyond technical skills. Effective system administration requires strong soft skills in addition to technical expertise. Managing users, communicating with management, and collaborating with colleagues are crucial for success.

Process and policy. Standardizing procedures and documenting configurations are essential for consistency, maintainability, and disaster recovery. Policies define acceptable use, security requirements, and service level agreements, providing a framework for operations and conflict resolution.

Communication is key. Clear and timely communication with users about system changes, outages, and security issues builds trust and reduces support requests. Effective communication with management ensures resources are allocated appropriately and expectations are managed.

Continuous learning. The IT landscape is constantly evolving. Staying current with new technologies, security threats, and best practices through documentation, mailing lists, conferences, and certifications is vital for long-term effectiveness.

Review Summary

4.45 out of 5
Average of 500+ ratings from Goodreads and Amazon.

UNIX and Linux System Administration Handbook is highly regarded as an essential resource for system administrators. Readers praise its comprehensive coverage, accessibility, and humor. Many consider it the "Bible" of Linux administration, offering a solid foundation and overview of various topics. While some find it lengthy and occasionally outdated, most appreciate its depth and practical insights. The book is valued by beginners and experienced professionals alike, serving as both an introduction and a reference guide. Its clear explanations and real-world examples make it a valuable tool for understanding complex systems and advancing careers in IT.

About the Author

Evi Nemeth was a renowned computer scientist and author, best known for her contributions to system administration and networking. She co-authored several influential books, including the UNIX and Linux System Administration Handbook. She was a professor at the University of Colorado Boulder and played a significant role in developing early UNIX systems. Nemeth was respected for her practical approach to teaching and her ability to explain complex concepts clearly. Her work greatly influenced the field of system administration, and she was known for mentoring many students and professionals throughout her career. Tragically, Nemeth disappeared at sea in 2013 during a sailing expedition, but her legacy continues through her writings and the countless professionals she inspired.
