Structured Computer Organization

by Andrew S. Tanenbaum (1976, 813 pages)

Key Takeaways

1. Computers are structured as a hierarchy of levels, each with a specific function.

The first three editions of this book were based on the idea that a computer can be regarded as a hierarchy of levels, each one performing some well-defined function.

Layered Abstraction. A computer system isn't a monolithic entity but rather a series of layers, each building upon the one below it. This layered approach simplifies design and understanding, allowing engineers to focus on specific levels without needing to grasp the entire system at once. Each level provides a virtual machine, abstracting away the complexities of the lower levels.

Levels of the Hierarchy. The book emphasizes key levels:

  • Digital Logic Level: The foundation, dealing with gates and circuits.
  • Microarchitecture Level: Implements the ISA using datapaths and control signals.
  • Instruction Set Architecture (ISA) Level: Defines the machine language.
  • Operating System Level: Manages resources and provides services to applications.
  • Assembly Language Level: A human-readable representation of machine code.
  • Problem-Oriented Language Level: High-level languages like Java or C++.

Benefits of the Hierarchical Model. This structure allows for modularity, making it easier to update or replace components at one level without affecting others. It also enables a division of labor, with different teams focusing on different aspects of the system. This abstraction is crucial for managing the complexity of modern computers.

2. Computer architecture has evolved through distinct generations, each marked by technological advancements.

Modern computer history starts here.

Mechanical Beginnings. The earliest attempts at computation involved mechanical devices, such as Babbage's Analytical Engine (1834), which, though never fully realized, laid the groundwork for digital computers. These machines used gears and levers to perform calculations, representing a significant conceptual leap.

The Vacuum Tube Era. The first generation of electronic computers (1945-1955) relied on vacuum tubes. The COLOSSUS (1943) and ENIAC (1946) were pioneering examples, but they were large, power-hungry, and prone to failure. The EDSAC (1949) marked a crucial step by implementing the stored-program concept.

Transistors and Integrated Circuits. The second (1955-1965) and third (1965-1980) generations saw the introduction of transistors and integrated circuits, respectively. These innovations led to smaller, more reliable, and more energy-efficient computers. The IBM System/360 (1964) was a landmark, establishing the concept of a product line designed as a family.

VLSI and Beyond. The fourth generation (1980-present) is characterized by Very Large Scale Integration (VLSI), enabling the creation of microprocessors and personal computers. Moore's Law predicted the exponential growth in the number of transistors on a chip, driving continuous advancements in computing power.

3. The computer "zoo" encompasses a wide spectrum of devices, each tailored to specific needs and economic constraints.

The current spectrum of computers available. The prices should be taken with a grain (or better yet, a metric ton) of salt.

Diverse Landscape. The computer market isn't a monolith; it's a diverse ecosystem with devices ranging from disposable computers in greeting cards to multi-million dollar supercomputers. Each type caters to specific needs, balancing performance, cost, and size.

Examples of Computer Types:

  • Embedded computers: Found in watches, cars, and appliances.
  • Game computers: Home video game consoles.
  • Personal computers: Desktop and portable computers.
  • Servers: Network servers for businesses.
  • Mainframes: Used for batch data processing in large organizations.
  • Supercomputers: Tackle complex scientific and engineering problems.

Technological and Economic Forces. The evolution of the computer "zoo" is driven by technological advancements and economic considerations. Moore's Law, predicting the exponential increase in transistor density, has enabled the creation of increasingly powerful and affordable devices. Market demands and competitive pressures further shape the landscape.

4. Processors execute instructions through pipelines, enhancing performance via parallelism.

The data path of a typical von Neumann machine.

Instruction Execution Cycle. Processors execute instructions in a cycle involving fetching, decoding, operand fetching, execution, and write-back. Modern processors employ techniques like pipelining and parallelism to improve performance.

Pipelining. Pipelining divides the instruction execution cycle into stages, allowing multiple instructions to be processed concurrently. This increases throughput, but the pipeline can be stalled by dependencies or branches.
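
To make the throughput argument concrete, here is a minimal back-of-the-envelope timing model in Python; the stage count, instruction count, and stall figures are illustrative assumptions, not numbers from the book:

```python
# Minimal sketch: ideal pipeline timing. With k one-cycle stages,
# n instructions finish in k + n - 1 cycles instead of the k * n
# cycles a non-pipelined machine would need.

def unpipelined_cycles(n_instructions: int, n_stages: int) -> int:
    return n_instructions * n_stages

def pipelined_cycles(n_instructions: int, n_stages: int, stalls: int = 0) -> int:
    # Each stall (data hazard, branch misprediction) inserts a bubble.
    return n_stages + n_instructions - 1 + stalls

n, k = 100, 5
print(unpipelined_cycles(n, k))           # 500 cycles
print(pipelined_cycles(n, k))             # 104 cycles -> roughly 4.8x speedup
print(pipelined_cycles(n, k, stalls=20))  # 124 cycles; hazards erode the gain
```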

Parallelism. Modern processors achieve parallelism through:

  • Instruction-Level Parallelism (ILP): Executing multiple instructions simultaneously.
  • Processor-Level Parallelism: Using multiple processors to work on different parts of a task.

RISC vs. CISC. Reduced Instruction Set Computing (RISC) architectures favor simple instructions that can be executed quickly, while Complex Instruction Set Computing (CISC) architectures use more complex instructions that can perform more work per instruction. Modern processors often blend aspects of both.

5. Memory is organized hierarchically, balancing speed, cost, and capacity.

A five-level memory hierarchy.

The Memory Hierarchy. Computer memory is structured as a hierarchy, with each level offering a different balance of speed, cost, and capacity. This hierarchy typically includes:

  • Registers: Fastest, most expensive, and smallest.
  • Cache Memory: Fast, relatively expensive, and small.
  • Primary Memory (RAM): Moderately fast, moderately expensive, and medium-sized.
  • Secondary Memory (Disk): Slow, inexpensive, and large.
  • Tertiary Memory (Tape, Optical Disk): Very slow, very inexpensive, and very large.

Cache Memory. Cache memory is a small, fast memory that stores frequently accessed data, reducing the need to access slower main memory. Cache performance depends on factors like size, associativity, and replacement policy.
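
As a concrete illustration, here is a hypothetical direct-mapped cache simulator in Python; the line size, line count, and access pattern are made-up illustration values, not parameters taken from the book:

```python
# Sketch of a direct-mapped cache. Each memory block maps to exactly
# one cache line: index = block number modulo the number of lines.

LINE_SIZE = 16   # bytes per cache line (illustrative)
NUM_LINES = 8    # number of lines (illustrative)

tags = [None] * NUM_LINES  # tag stored per line; None = invalid

def access(addr: int) -> bool:
    """Return True on a cache hit, False on a miss (and fill the line)."""
    block = addr // LINE_SIZE
    index = block % NUM_LINES
    tag = block // NUM_LINES
    if tags[index] == tag:
        return True
    tags[index] = tag   # miss: fetch the block from main memory
    return False

# Sequential access shows spatial locality: one miss per 16-byte line.
hits = sum(access(a) for a in range(256))
print(f"hit rate: {hits / 256:.2%}")   # 93.75% (15 hits per 16 accesses)
```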

Memory Packaging and Types. Memory is packaged in various forms, such as SIMMs and DIMMs. Different types of RAM, including SRAM and DRAM, offer different performance characteristics.

6. Input/Output (I/O) systems facilitate communication between the computer and the external world.

Physical structure of a personal computer.

Buses. I/O devices connect to the computer via buses, which are shared communication pathways. Modern systems often have multiple buses, such as the high-bandwidth PCI bus, USB for attaching peripherals, and slower legacy buses like ISA.

Controllers. Each I/O device has a controller that manages communication with the bus and the device itself. Controllers may use Direct Memory Access (DMA) to transfer data directly to or from memory without CPU intervention.

Common I/O Devices:

  • Terminals: Keyboards and monitors for user interaction.
  • Mice: Pointing devices for graphical user interfaces.
  • Printers: Output devices for producing hard copies.
  • Modems: Devices for transmitting data over telephone lines.

Character Codes. Character codes like ASCII and Unicode are used to represent text characters in a standardized format.
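
A quick demonstration using Python's built-in encoding support shows the difference in practice:

```python
# ASCII vs. Unicode: ASCII covers only code points 0-127, while
# Unicode assigns a code point to every character and UTF-8 encodes
# each code point as one to four bytes.

text = "Aå€"
print([ord(c) for c in text])   # code points: [65, 229, 8364]
print(text.encode("utf-8"))     # b'A\xc3\xa5\xe2\x82\xac' (1-3 bytes each here)
print("A".encode("ascii"))      # b'A' -- plain ASCII
# "å".encode("ascii") would raise UnicodeEncodeError: outside ASCII's range.
```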

7. Digital logic gates and Boolean algebra form the foundation of computer hardware.

A transistor inverter.

Gates. Digital circuits are built from logic gates, such as AND, OR, NOT, NAND, and NOR gates. These gates perform basic Boolean operations on binary inputs.

Boolean Algebra. Boolean algebra provides a mathematical framework for analyzing and designing digital circuits. It uses operators like AND, OR, and NOT to manipulate binary values.
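
As a small worked example of these ideas, the following Python sketch demonstrates NAND's universality, building NOT, AND, and OR from NAND alone; this is a standard Boolean-algebra result, coded here purely for illustration:

```python
# NAND is a universal gate: every Boolean function can be built from it.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a): return nand(a, a)                 # NOT x   = x NAND x
def and_(a, b): return not_(nand(a, b))        # x AND y = NOT (x NAND y)
def or_(a, b): return nand(not_(a), not_(b))   # De Morgan's law gives OR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_(a, b), or_(a, b))     # truth table matches AND / OR
```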

Basic Digital Logic Circuits. Combinational circuits, such as adders and multiplexers, are built from interconnected logic gates. Sequential circuits, such as flip-flops and registers, use feedback to store state.

CPU Chips and Buses. CPU chips contain complex digital logic circuits, including ALUs, control units, and registers. Computer buses provide communication pathways between the CPU, memory, and I/O devices.

8. Microarchitecture implements the instruction set architecture (ISA) using datapaths and control signals.

The data path of the example microarchitecture used in this chapter.

Datapaths. The microarchitecture level implements the ISA using datapaths, which are the physical pathways through which data flows. A typical datapath includes registers, an ALU, and a shifter.

Microinstructions. Microinstructions control the operation of the datapath. Each microinstruction specifies which registers to read, which ALU operation to perform, and which register to write the result to.

Microinstruction Control. Microinstruction control can be implemented using microprogramming, where a microprogram stored in ROM controls the execution of microinstructions.
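
The following Python sketch suggests the flavor of microprogrammed control. It is a hypothetical toy, not the book's Mic-1: the "control store" is just a table of micro-operations, each naming two datapath sources, an ALU function, and a destination register:

```python
# Toy microprogrammed datapath: registers, an ALU, and a control store.

regs = {"A": 5, "B": 7, "H": 0, "OUT": 0}

alu = {
    "ADD": lambda x, y: x + y,
    "AND": lambda x, y: x & y,
    "PASS": lambda x, y: x,
}

# Control store (ROM): each microinstruction = (src1, src2, alu_op, dest).
microprogram = [
    ("A", "B", "ADD", "H"),    # H   <- A + B
    ("H", "B", "AND", "OUT"),  # OUT <- H AND B
]

for src1, src2, op, dest in microprogram:
    regs[dest] = alu[op](regs[src1], regs[src2])

print(regs["OUT"])  # (5 + 7) AND 7 = 12 & 7 = 4
```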

Design Trade-offs. Microarchitecture design involves trade-offs between speed and cost. Techniques like pipelining and caching can improve performance, but they also increase complexity and cost.

9. The Instruction Set Architecture (ISA) defines the machine language and its properties.

Properties of the ISA Level.

ISA Properties. The ISA level defines the machine language, including the instruction set, memory model, registers, and data types. It provides an abstraction of the underlying hardware, allowing programmers to write code without needing to know the details of the microarchitecture.

Memory Models. The ISA defines the memory model, including how memory is addressed and organized. Common memory models include linear addressing and segmented addressing.

Registers. The ISA specifies the number and types of registers available to programmers. Registers are used to store data and addresses during program execution.

Instruction Formats. The ISA defines the format of machine instructions, including the opcode and operands. Instruction formats can be fixed-length or variable-length.
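
As an illustration of a fixed-length format, here is a Python sketch that packs and decodes a hypothetical 16-bit instruction; the field widths and opcode values are invented for this example, not taken from any ISA in the book:

```python
# Hypothetical 16-bit format: 4-bit opcode | 4-bit register | 8-bit immediate.

def decode(word: int) -> tuple[int, int, int]:
    opcode = (word >> 12) & 0xF
    reg    = (word >> 8) & 0xF
    imm    = word & 0xFF
    return opcode, reg, imm

# Encode "opcode 3, register 2, immediate 0x2A", then decode it back.
word = (3 << 12) | (2 << 8) | 0x2A
print(decode(word))  # (3, 2, 42)
```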

10. Operating systems manage virtual memory, I/O, and parallel processing.

Positioning of the operating system machine level.

Virtual Memory. Operating systems implement virtual memory, which allows processes to access more memory than is physically available. Virtual memory uses techniques like paging and segmentation to map virtual addresses to physical addresses.
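
Here is a minimal sketch of page-based address translation, assuming 4 KB pages and a toy page table; the values are illustrative and not modeled on any particular operating system:

```python
# Virtual-to-physical translation: split the virtual address into a
# page number and an offset, then look the page up in the page table.

PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}   # virtual page -> physical frame

def translate(vaddr: int) -> int:
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    if vpage not in page_table:
        raise RuntimeError("page fault: OS must load the page from disk")
    return page_table[vpage] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1, offset 0x234 -> 0x3234
```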

Virtual I/O Instructions. Operating systems provide virtual I/O instructions that allow processes to access I/O devices in a device-independent way. This involves managing files, directories, and device drivers.

Virtual Instructions for Parallel Processing. Operating systems provide mechanisms for creating and synchronizing processes, enabling parallel processing. This includes process creation, race condition avoidance, and process synchronization using semaphores.
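
A runnable example using Python's standard threading.Semaphore to guard a shared counter follows; the counter workload is invented for illustration, but the down/up (P/V) discipline is the classic semaphore pattern described here:

```python
# Semaphore-based mutual exclusion: four threads increment a shared
# counter, and a binary semaphore serializes access to the critical region.

import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore guarding the counter

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()          # "down" / P operation
        counter += 1             # critical region
        mutex.release()          # "up" / V operation

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # always 400000; without the semaphore, updates could be lost
```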

Example Operating Systems. UNIX and Windows NT are two widely used operating systems that provide these services.

11. Assembly language provides a human-readable interface to machine code.

A selection of the Pentium II integer instructions.

Assembly Language Basics. Assembly language is a low-level programming language that provides a symbolic representation of machine code. It uses mnemonics to represent instructions and labels to represent memory addresses.

Assembly Language Statements. Assembly language statements typically consist of a label, an opcode, and operands. Pseudoinstructions are used to control the assembly process.

Macros. Macros are a way to define reusable code sequences in assembly language. They can be used to simplify programming and improve code readability.

The Assembly Process. The assembly process involves two passes: pass one, which builds the symbol table, and pass two, which generates the machine code.
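
The two-pass structure can be sketched in a few lines of Python. The three-instruction machine, its opcodes, and the source program below are hypothetical, invented only to show the pass-one/pass-two division of labor:

```python
# Toy two-pass assembler: pass one collects labels, pass two translates.

OPCODES = {"LOAD": 1, "ADD": 2, "JMP": 3}

source = [
    "start: LOAD 10",
    "       ADD 1",
    "       JMP start",
]

# Pass one: build the symbol table mapping labels to addresses.
symbols, addr = {}, 0
for line in source:
    if ":" in line:
        label, line = line.split(":", 1)
        symbols[label.strip()] = addr
    addr += 1

# Pass two: translate mnemonics and resolve label references.
code = []
for line in source:
    mnemonic, operand = line.split(":", 1)[-1].split()
    value = symbols.get(operand)
    value = int(operand) if value is None else value
    code.append((OPCODES[mnemonic], value))

print(symbols)  # {'start': 0}
print(code)     # [(1, 10), (2, 1), (3, 0)]
```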

12. Parallel computer architectures address performance through various communication models and interconnection networks.

Flynn’s taxonomy of parallel computers.

Design Issues. Parallel computer design involves considering communication models, interconnection networks, performance, and software. Communication models can be shared memory or message passing.

Interconnection Networks. Interconnection networks connect the processing elements in a parallel computer. Common topologies include stars, rings, grids, and hypercubes.

SIMD Computers. Single Instruction, Multiple Data (SIMD) computers, such as array processors and vector processors, execute the same instruction on multiple data elements simultaneously.

Shared-Memory Multiprocessors. Shared-memory multiprocessors have multiple CPUs that share a common memory. Memory semantics, cache coherence, and bus-based architectures are important considerations.

Message-Passing Multicomputers. Message-passing multicomputers have multiple CPUs, each with its own private memory. Communication between CPUs is done by passing messages over an interconnection network.
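
A minimal message-passing sketch using Python's standard multiprocessing module follows: each process has private memory, and all communication happens through explicit messages on queues (the summing workload is an invented example):

```python
# Message passing: worker processes receive work and send results back
# over queues -- no shared memory between the cooperating processes.

from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue):
    chunk = inbox.get()       # receive a message (a list of numbers)
    outbox.put(sum(chunk))    # send back a partial result

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(2)]
    for p in procs: p.start()
    inbox.put(list(range(0, 50)))     # distribute work as messages
    inbox.put(list(range(50, 100)))
    total = outbox.get() + outbox.get()
    for p in procs: p.join()
    print(total)              # 4950, computed by two cooperating processes
```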

FAQ

What is "Structured Computer Organization" by Andrew S. Tanenbaum about?

  • Hierarchical computer organization: The book presents computers as a hierarchy of levels, from digital logic up to parallel computer architectures, each with distinct roles and functions.
  • Comprehensive coverage: It covers processors, memory, I/O devices, instruction set architectures, operating systems, and assembly language, providing a layered understanding of computer systems.
  • Modern examples and technologies: The text uses real-world examples like Pentium II, UltraSPARC II, and Java Virtual Machine, ensuring relevance to current industry standards.
  • Educational focus: Designed for students and professionals, it includes practical exercises, code samples, and simulators to reinforce learning.

Why should I read "Structured Computer Organization" by Andrew S. Tanenbaum?

  • Solid foundational knowledge: The book offers a clear, structured approach to understanding computer architecture, essential for anyone studying or working in computing.
  • Real-world relevance: It connects theory to practice with examples from actual processors and operating systems, such as UNIX and Windows NT.
  • Hands-on learning: Nearly 300 exercises, Java code examples, and a Mic-1 simulator provide opportunities for practical application.
  • Up-to-date content: Coverage of modern technologies like RAID, PCI, USB, and parallel architectures ensures readers stay current with industry trends.

What are the key takeaways from "Structured Computer Organization" by Andrew S. Tanenbaum?

  • Layered computer model: Understanding computers as a hierarchy of levels clarifies how hardware and software interact.
  • Diverse architectures: Exposure to CISC, RISC, and JVM architectures broadens knowledge of different design philosophies.
  • Parallel and memory systems: In-depth discussion of parallel architectures and memory hierarchies highlights performance considerations in modern systems.
  • Practical assembly and microprogramming: The book demystifies low-level programming and microarchitecture, bridging the gap between high-level code and hardware execution.

What are the main levels of computer organization described in "Structured Computer Organization" by Andrew S. Tanenbaum?

  • Six-level model: The book details digital logic, microarchitecture, instruction set architecture, operating system machine, assembly language, and parallel computer architecture levels.
  • Distinct functions: Each level has a specific role, from basic electronic circuits to high-level software and hardware interactions.
  • Support mechanisms: The text explains how each level is implemented, such as hardware for logic and microarchitecture, and software for operating system and assembly levels.
  • Inter-level relationships: Understanding how levels interact helps clarify the flow of data and control in a computer system.

How does "Structured Computer Organization" by Andrew S. Tanenbaum explain instruction set architecture (ISA)?

  • ISA as interface: The ISA is presented as the boundary between hardware and software, defining available instructions, registers, and data types.
  • Real-world examples: The book uses Pentium II, UltraSPARC II, and JVM ISAs to illustrate different instruction formats and architectural choices.
  • Instruction types: It covers data movement, arithmetic, control flow, procedure calls, and exception handling in detail.
  • Design implications: The text discusses how ISA design affects compiler construction, performance, and hardware complexity.

What are the key concepts related to memory and caching in "Structured Computer Organization" by Andrew S. Tanenbaum?

  • Memory hierarchy: The book explains the structure and importance of primary, cache, and secondary memory, emphasizing the role of caches in reducing latency.
  • Cache design: It covers direct-mapped and set-associative caches, cache lines, replacement policies like LRU, and write strategies such as write-through and write-back.
  • Performance impact: Multi-level caches (L1, L2, L3) and their effect on bandwidth and latency are discussed, highlighting their significance in modern CPUs.
  • Virtual memory: The text also explores virtual memory models in UNIX and Windows NT, including paging, memory-mapped files, and shared memory.

How does "Structured Computer Organization" by Andrew S. Tanenbaum address parallel computer architectures?

  • Comprehensive coverage: The book provides an expanded chapter on parallel architectures, including multiprocessors (UMA, NUMA, COMA) and multicomputers (MPP, COW).
  • Design issues: It discusses interconnection networks, communication models, performance metrics, and software considerations for parallel systems.
  • Types of parallelism: SIMD, shared-memory multiprocessors, and message-passing multicomputers are explained, with examples and taxonomy.
  • Flynn’s taxonomy: The book uses Flynn’s classification (SISD, SIMD, MISD, MIMD) and its extensions to categorize parallel computers.

What are the main differences between UNIX and Windows NT operating systems as described in "Structured Computer Organization" by Andrew S. Tanenbaum?

  • System structure: UNIX features a small kernel with user-level shells, while NT has a modular kernel and multiple environmental subsystems.
  • Security models: NT offers comprehensive security (Orange Book C2 compliance), whereas UNIX uses simpler user/group/other permissions.
  • System calls and APIs: UNIX has minimal, stable system calls; NT uses the extensive Win32 API, including many functions beyond basic system calls.
  • Process and memory management: NT supports threads, fibers, and advanced memory management; UNIX supports processes, POSIX threads, and shared memory primitives.

How does "Structured Computer Organization" by Andrew S. Tanenbaum explain microarchitecture and its significance?

  • Microarchitecture level focus: The book uses the Java Virtual Machine as an example to illustrate microprogrammed machine design and data path control.
  • Performance trade-offs: It discusses the balance between design cost and performance, showing evolution from simple to pipelined designs like the Mic-4.
  • Advanced techniques: Topics include caching, branch prediction, out-of-order execution, speculative execution, and predication for CPU performance.
  • Practical microprogramming: Detailed microprograms for JVM instructions demonstrate how high-level operations are implemented at the microarchitecture level.

What are the key features of the Intel IA-64 architecture discussed in "Structured Computer Organization" by Andrew S. Tanenbaum?

  • EPIC paradigm: IA-64 uses Explicitly Parallel Instruction Computing, with instruction bundles and templates to indicate parallelism, shifting scheduling to the compiler.
  • Predication: Many conditional branches are replaced with predicated instructions, reducing pipeline stalls and improving parallelism.
  • Speculative loads: The architecture supports speculative LOAD instructions, allowing early execution and later validation for increased instruction-level parallelism.
  • Adoption challenges: The book notes the complexity for compilers, operating system support, and software ecosystem readiness as hurdles for IA-64.

How does "Structured Computer Organization" by Andrew S. Tanenbaum approach assembly language and its relationship to high-level languages?

  • Assembly vs. high-level: Assembly provides fine control but is more time-consuming; high-level languages offer productivity but may need tuning for performance.
  • Pseudoinstructions and macros: Assemblers simplify coding with pseudoinstructions and macros, improving readability and maintainability.
  • Symbol tables and linking: The book explains the assembly process, symbol management, and how modules are linked and relocated.
  • Dynamic linking: Concepts like DLLs and shared libraries are introduced, showing how modern systems manage code reuse and memory.

What foundational knowledge about binary and floating-point numbers does "Structured Computer Organization" by Andrew S. Tanenbaum provide?

  • Number systems: The book covers decimal, binary, octal, and hexadecimal representations, including conversions and their use in computing.
  • Integer representations: It explains signed magnitude, one’s complement, two’s complement, and excess notation for signed integers (see the sketch after this list).
  • Floating-point formats: IEEE single and double precision formats are detailed, including normalization, exponent bias, and special values.
  • Arithmetic operations: Examples of floating-point addition, subtraction, and normalization illustrate hardware implementation and challenges.
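
A short worked sketch of these representations, using only the Python standard library (the example values are arbitrary):

```python
# Two's complement and IEEE 754 bit patterns, illustrated with struct.

import struct

# Two's complement: the same 8-bit pattern, read as unsigned vs. signed.
bits = 0b11111011                        # 251 when read as unsigned
signed = bits - 256 if bits & 0x80 else bits
print(signed)                            # -5: two's complement of 5 in 8 bits

# IEEE 754 single precision: sign | 8-bit biased exponent | 23-bit fraction.
(pattern,) = struct.unpack(">I", struct.pack(">f", -0.75))
print(f"{pattern:032b}")
# Fields: sign=1, exponent=01111110 (126, i.e. 2**-1 with bias 127),
# fraction=100...0 (significand 1.5), so the value is -1.5 * 2**-1 = -0.75.
```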

Review Summary

4.03 out of 5
Average of 500+ ratings from Goodreads and Amazon.

Structured Computer Organization receives mostly positive reviews, with an average rating of 4.03/5. Readers appreciate its humor, clear explanations, and comprehensive coverage of computer architecture. Many find it an excellent introduction for beginners, praising its organization and readability. Some criticize it for being dated or occasionally confusing on deeper topics. The book is noted for its approach of building understanding from transistors up to software. While some find the material dry, others appreciate Tanenbaum's ability to keep it engaging.

About the Author

Andrew S. Tanenbaum is a renowned computer scientist and author known for his contributions to computer science education and operating systems. He has written several influential textbooks, including "Structured Computer Organization," which has been widely used in undergraduate computer science courses. Tanenbaum's writing style is praised for its clarity, humor, and ability to explain complex concepts in an accessible manner. He has a talent for presenting technical material in a way that engages readers and builds understanding from fundamental principles. Tanenbaum's work has had a significant impact on computer science education, helping countless students and professionals grasp the intricacies of computer architecture and operating systems.
