How Computers Really Work

by Matthew Justice (2020), 392 pages

Key Takeaways

1. Computers are Digital Machines Built on Binary

It’s a point worth repeating: everything on your computer is stored as 0s and 1s.

Digital vs. Analog. Modern computers are fundamentally digital devices, contrasting sharply with analog systems. While analog data, like a needle on a scale or mercury in a thermometer, represents information through continuous, infinitely varying values, digital systems simplify this by representing all data as sequences of discrete symbols. In nearly all contemporary computers, these symbols are limited to just two: 0 and 1. This binary approach, though seemingly restrictive, allows for highly reliable processing, storage, and copying of data, as it avoids the precision and decay issues inherent in analog representations.

Bits and Bytes. These 0s and 1s are known as bits, short for "binary digits." A single bit can only convey two states—on or off, true or false, high or low voltage. To represent more complex information, bits are grouped together. The most common grouping is eight bits, which forms a byte. A byte can represent 256 unique combinations (2^8), enough to encode a single character of text (like in ASCII) or a shade of gray. Larger data types, such as images, audio, or video, require millions or even billions of bits, often expressed using standard prefixes like kilobyte (KB), megabyte (MB), or gigabyte (GB).
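
To make the counting concrete, here is a small Python sketch of the arithmetic above; the CD-audio figures in the last line are illustrative assumptions, not numbers from the book.

```python
# n bits can represent 2**n distinct values.
for bits in (1, 8, 16, 32):
    print(f"{bits:2} bits -> {2**bits:,} values")   # 8 bits -> 256 values

# Standard decimal prefixes for large quantities of bytes.
KB, MB, GB = 10**3, 10**6, 10**9

# Rough size of 4 minutes of uncompressed CD-quality audio
# (44,100 samples/s, 2 bytes per sample, 2 channels): about 42 MB.
print(f"{4 * 60 * 44100 * 2 * 2 / MB:.0f} MB")
```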

Context is Key. The meaning of a sequence of 0s and 1s is entirely dependent on how a computer program interprets it. The same binary sequence could represent a number, a text character, a color, or even a machine instruction. For instance, 01100001 could be the number 97, the lowercase letter 'a' in ASCII, or a component of a color value. This flexibility means that once a device is built to work with binary data, it can be adapted through software to handle virtually any kind of information, making binary the universal language of computing.
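
A short Python sketch of this idea: the same eight bits yield different meanings depending on how a program chooses to interpret them (the color reading is an assumed RGB interpretation, added for illustration).

```python
bits = "01100001"
value = int(bits, 2)          # interpreted as an unsigned integer
print(value)                  # 97
print(chr(value))             # 'a' under ASCII
print(f"#{value:02x}0000")    # '#610000': as the red byte of an RGB color
```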

2. Electricity is the Physical Foundation of Computing

In particular, modern computers are electronic devices, so the laws of electricity are the natural foundation upon which these devices are built.

Fundamental Concepts. At its core, a computer is an electronic device, meaning its operation is governed by the principles of electricity. Key electrical terms include:

  • Electric Charge: The fundamental property of matter that causes it to experience a force in an electromagnetic field.
  • Electric Current (Amps): The flow of electric charge, analogous to water flowing through a pipe.
  • Voltage (Volts): The difference in electric potential between two points, akin to water pressure, driving current flow.
  • Resistance (Ohms): A material's opposition to the flow of electric current, like a narrow pipe impeding water flow.

Ohm's and Kirchhoff's Laws. These concepts are quantified by fundamental laws. Ohm's Law (I = V/R) states that the current (I) flowing through a conductor between two points is directly proportional to the voltage (V) across the two points and inversely proportional to the resistance (R) between them. Kirchhoff's Voltage Law dictates that the sum of all voltages around any closed loop in a circuit must equal zero, meaning the voltage supplied by a source is "used up" by the components in the circuit.

Building Blocks. Simple electrical components like resistors and light-emitting diodes (LEDs) demonstrate these principles. Resistors are used to control current flow, while LEDs light up when current passes through them in a specific direction. These basic components, when arranged on a breadboard, form simple circuits that illustrate how electrical energy is managed and transformed, laying the groundwork for understanding more complex digital circuits.
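
A minimal sketch of these laws in Python, applied the way a breadboard experiment would apply them: sizing a current-limiting resistor for an LED. The 5 V supply, 2 V forward voltage, and 15 mA target current are assumed example values.

```python
def current(voltage, resistance):
    """Ohm's law: I = V / R (amps, from volts and ohms)."""
    return voltage / resistance

# Kirchhoff's voltage law: the supply voltage is dropped across the
# LED and the resistor, so the resistor sees V_supply - V_led.
v_supply, v_led, i_target = 5.0, 2.0, 0.015
r = (v_supply - v_led) / i_target
print(f"Use a resistor of about {r:.0f} ohms")                     # 200 ohms
print(f"Check: I = {current(v_supply - v_led, r) * 1000:.0f} mA")  # 15 mA
```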

3. Transistors and Logic Gates Form Digital Circuits

The transistor is the basis of modern electronics, including computing devices.

Digital Circuits Defined. Digital circuits operate with signals representing a limited number of states, typically two: 0 (low voltage) and 1 (high voltage). Unlike analog circuits where voltages can vary continuously, digital circuits interpret voltages within specific ranges as either a definite 0 or a definite 1. This clear distinction is crucial for reliable data processing. Early conceptual digital circuits could be built with mechanical switches, where an open switch represents 0 and a closed switch represents 1, demonstrating basic logical operations like AND and OR.

The Amazing Transistor. The mechanical switch, however, is impractical for complex computers. The solution lies in the transistor, an electronic component that acts as an electrically controlled switch. By applying a small current to one terminal (the base), a larger current can be switched on or off between two other terminals (collector and emitter). This ability to switch current electronically, without moving parts, makes transistors the fundamental building blocks of all modern digital electronics.

Logic Gates and Encapsulation. Transistors are combined with resistors to form logic gates, which are circuits that implement binary logical functions (AND, OR, NOT, NAND, NOR, XOR). These gates take high/low voltage inputs and produce high/low voltage outputs according to their truth tables. Logic gates are often manufactured as integrated circuits (ICs), encapsulating complex transistor arrangements into single, easy-to-use components. This encapsulation hides the intricate internal details, allowing engineers to design complex systems by combining gates without needing to understand every transistor, a principle vital for building multi-bit adders and memory devices like latches and flip-flops.
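
A Python sketch of this encapsulation idea, treating gates as Boolean functions: once NAND exists, NOT, AND, and OR can be built from it alone, without revisiting the transistor level.

```python
def nand(a, b):
    return not (a and b)

def inv(a):           # NOT from a single NAND
    return nand(a, a)

def and_gate(a, b):   # AND = NOT(NAND)
    return inv(nand(a, b))

def or_gate(a, b):    # OR via De Morgan: a or b == nand(not a, not b)
    return nand(inv(a), inv(b))

# Print the truth tables to confirm the gates behave as expected.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "| AND:", int(and_gate(a, b)),
              " OR:", int(or_gate(a, b)),
              " NAND:", int(nand(a, b)))
```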

4. The CPU: The Programmable Brain of the Computer

It’s the processor that allows a computer to have the flexibility to run programs that weren’t even conceived of at the time the processor was designed.

Programmability is Key. What distinguishes a computer from other electronic devices is its programmability—its ability to perform new tasks without hardware changes. This is achieved through the Central Processing Unit (CPU), or processor, which executes a set of simple instructions defined by its Instruction Set Architecture (ISA). These instructions cover fundamental operations like:

  • Memory access (read/write)
  • Arithmetic (add, subtract, multiply)
  • Logic (AND, OR, NOT)
  • Program flow (jump, call)

CPU Internals. A CPU comprises several key components:

  • Processor Registers: Small, high-speed storage locations within the CPU for temporary data during processing.
  • Arithmetic Logic Unit (ALU): Performs all mathematical and logical operations.
  • Control Unit: The coordinator, fetching instructions from memory, decoding them, and directing the ALU and registers to execute them. It uses a program counter to track the next instruction's address; a toy version of this fetch-decode-execute loop is sketched after this list.
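
A toy fetch-decode-execute loop in Python; the four-instruction set, register names, and program below are invented for illustration and belong to no real ISA.

```python
def run(program):
    regs = {"A": 0, "B": 0}          # processor registers
    pc = 0                           # program counter
    while pc < len(program):
        op, *args = program[pc]      # fetch and decode
        pc += 1
        if op == "LOAD":             # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":            # ADD dst, src (an ALU operation)
            regs[args[0]] += regs[args[1]]
        elif op == "DEC":            # DEC reg: subtract 1
            regs[args[0]] -= 1
        elif op == "JNZ":            # jump to address if A is nonzero
            if regs["A"] != 0:
                pc = args[0]
    return regs

# Sum 5 + 4 + 3 + 2 + 1 with a loop: B accumulates while A counts down.
print(run([("LOAD", "A", 5),
           ("LOAD", "B", 0),
           ("ADD", "B", "A"),    # address 2: loop body
           ("DEC", "A"),
           ("JNZ", 2)])["B"])    # prints 15
```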

Clock, Cores, and Cache. CPUs operate in sync with a clock signal, with each pulse signaling a state transition. While clock speeds (measured in GHz) have plateaued due to physical limits, performance gains now come from multi-core CPUs, where multiple independent processing units (cores) execute instructions in parallel. To bridge the speed gap between fast CPUs and slower main memory, CPUs use cache memory—small, fast internal storage that holds frequently accessed data, organized in hierarchical levels (L1, L2, L3) for quicker access.

5. Software: Instructions for the Machine

No matter how a program was originally written, no matter what programming language was used, no matter what technologies were involved, in the end, that program becomes a series of 0s and 1s, representing instructions that a CPU can execute.

Machine Code and Assembly Language. At its lowest level, software is machine code: binary instructions directly understood by a CPU. Each CPU architecture (like x86 or ARM) has its own machine language. Programmers rarely write raw machine code due to its complexity. Instead, they use assembly language, a human-readable representation in which each machine instruction has a mnemonic (e.g., mov for "move data"). An assembler translates assembly language into machine code, and a linker combines the resulting object files into an executable program.
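
A small Python sketch of that correspondence. The five bytes below are the genuine x86 encoding of the assembly instruction mov eax, 1 (opcode 0xB8 followed by a 32-bit little-endian immediate); the point is simply that a mnemonic line and a run of bytes are two views of the same instruction.

```python
import struct

opcode = bytes([0xB8])             # x86: mov eax, imm32
immediate = struct.pack("<I", 1)   # the constant 1, little-endian
machine_code = opcode + immediate
print(machine_code.hex(" "))       # b8 01 00 00 00
```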

High-Level Programming Languages. To overcome the tedious, error-prone, and architecture-specific nature of assembly language, high-level programming languages (like C or Python) were developed. These languages are closer to human language, abstracting away CPU-specific details. A compiler translates high-level source code into machine code for a target processor, allowing a single program to be compiled and run on different CPU architectures. Interpreted languages, like Python, use an interpreter to execute source code directly at runtime, offering platform independence without a compilation step.

Common Programming Constructs. High-level languages provide intuitive ways to express fundamental computing operations, each of which appears in the sketch after this list:

  • Variables: Named memory locations to store data, often with a specific type (e.g., integer, text string).
  • Math & Logic: Operators like +, -, *, and / for arithmetic; &, |, and ^ for bitwise logic; and keywords such as and, or, and not for Boolean logic.
  • Program Flow: if/else statements for conditional execution and while/for loops for repetitive tasks.
  • Functions: Reusable blocks of code that perform specific tasks, accepting inputs and returning outputs, promoting code organization and reusability (encapsulation).
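
A small Python sketch exercising each construct in the list above (the averaging task itself is an arbitrary example).

```python
def average(numbers):                  # a function: input in, output back
    total = 0
    for n in numbers:                  # a loop over the data
        total += n
    return total / len(numbers)        # arithmetic operators

scores = [88, 92, 79]                  # a variable holding integers
avg = average(scores)

if avg >= 90 and min(scores) >= 70:    # Boolean logic in a conditional
    print("Honor roll")
else:
    print(f"Average: {avg:.1f}")       # prints "Average: 86.3"
```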

6. Operating Systems Manage the Computer's Resources

An operating system (OS) is software that communicates with computer hardware and provides an environment for the execution of programs.

The OS as an Intermediary. Unlike early game consoles where software directly controlled hardware, modern computers rely on an operating system (OS) as a crucial layer between applications and hardware. The OS manages system resources, handles hardware initialization, and provides a controlled environment for programs to run. This abstraction allows software developers to focus on application logic rather than hardware specifics, making programs more portable across diverse devices.

Kernel Mode vs. User Mode. To ensure stability and security, operating systems leverage CPU privilege levels. The OS kernel and device drivers run in highly privileged "kernel mode," granting them full access to all memory, I/O devices, and special CPU instructions. Applications and other non-OS software run in restricted "user mode," preventing them from directly interfering with other programs or the kernel. This "user mode bubble" ensures that untrusted code cannot directly perform I/O or access sensitive system resources.

Processes and Threads. When a program starts, the OS creates a "process," a running instance of that program with its own private virtual memory space. Within a process, "threads" are schedulable units of execution that allow a program to perform multiple tasks in parallel. The OS scheduler manages these threads, giving each a turn on the CPU cores, creating the illusion of simultaneous execution. User mode applications rely on "system calls" to request privileged operations from the kernel, such as reading files or communicating over a network, acting as a controlled gateway to hardware interaction.
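
A short sketch of this vocabulary using the Python standard library: os.getpid() reports the current process's ID, two threads run inside that process, and the file write at the end reaches the hardware only through a system call into the kernel.

```python
import os
import threading

def worker(name):
    print(f"thread {name} in process {os.getpid()}")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                           # wait for both threads to finish

with open("note.txt", "w") as f:       # open/write/close become system calls
    f.write("written via the kernel\n")
```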

7. The Internet Connects Computers Globally

The internet is a network of networks, connecting networks from various organizations all around the world.

A Network of Networks. The internet is a vast, interconnected global system of computer networks, enabling communication between billions of devices. It operates on a standardized set of rules called the Internet Protocol Suite, commonly known as TCP/IP (Transmission Control Protocol/Internet Protocol). The suite is organized into a four-layer model, each layer with specific responsibilities, allowing for modular design (a transport-layer sketch in code follows this list):

  • Link Layer: Handles communication on a local network (e.g., Wi-Fi, Ethernet) using MAC addresses.
  • Internet Layer: Enables data routing across different networks using IP addresses (IPv4, IPv6).
  • Transport Layer: Provides communication channels for applications (e.g., TCP for reliable connections, UDP for speed).
  • Application Layer: Defines protocols for specific tasks (e.g., HTTP for web, SMTP for email).
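
As a sketch of what the transport layer looks like from a program's point of view, the snippet below opens a TCP connection to a web server and reads the first bytes of its reply; example.com and port 80 are conventional illustration values.

```python
import socket

with socket.create_connection(("example.com", 80), timeout=5) as s:
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(s.recv(200).decode(errors="replace"))   # e.g. "HTTP/1.1 200 OK..."
```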

IP Addressing and Subnets. Every device on an IP network has an address (e.g., 192.168.1.23). These addresses are divided into a network prefix and a host identifier, defining subnets. Devices on the same subnet can communicate directly, while those on different subnets must pass traffic through a router. IP addresses can be assigned dynamically via DHCP (Dynamic Host Configuration Protocol) or be static. Private IP addresses (e.g., 192.168.x.x) are used on local networks and are non-routable on the public internet; Network Address Translation (NAT) routers let multiple such devices share a single public IP.
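
The prefix/host split is easy to explore with Python's standard ipaddress module; the /24 network below is an assumed example.

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")   # the prefix defines the subnet
host = ipaddress.ip_address("192.168.1.23")

print(host in net)         # True: same subnet, direct delivery
print(net.num_addresses)   # 256: a /24 leaves 8 bits for the host part
print(net.is_private)      # True: one of the non-routable ranges
```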

Domain Name System (DNS). Since IP addresses are hard for humans to remember, the Domain Name System (DNS) provides a crucial service: mapping human-friendly names (like www.example.com) to their corresponding IP addresses. DNS is a hierarchical, distributed system of servers that resolve these names, acting as the "phone book of the internet." When a client wants to connect to a server by name, it queries a DNS server, which then provides the necessary IP address for communication.
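
A one-line sketch of that lookup with the standard library, which asks the system's configured resolver for an address:

```python
import socket

print(socket.gethostbyname("www.example.com"))   # e.g. "93.184.216.34"
```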

8. The World Wide Web: A Layer of Information on the Internet

The World Wide Web, often just called the web, is a set of resources, delivered using HyperText Transfer Protocol (HTTP) over the internet.

Distributed, Addressable, Linked. The World Wide Web is a system built atop the internet, characterized by three core attributes:

  • Distributed: No central authority controls content; anyone can publish.
  • Addressable: Every resource has a unique Uniform Resource Locator (URL), specifying its location and access method (scheme, host, path, query).
  • Linked: Hyperlinks connect resources, forming a vast "web" of information.

HTTP and HTTPS. The web's primary communication protocol is HyperText Transfer Protocol (HTTP), a request-response model where clients (browsers) send requests (e.g., GET, POST) and servers respond with status codes (e.g., 200 OK, 404 Not Found). For secure communication, HTTPS (HTTP Secure) encrypts data using Transport Layer Security (TLS), protecting privacy and integrity. This involves cryptographic keys (public and private) to establish a secure, encrypted channel between client and server.
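
A sketch of one request-response exchange over TLS using Python's standard library; example.com is the usual placeholder host.

```python
import http.client

conn = http.client.HTTPSConnection("example.com", timeout=5)
conn.request("GET", "/")                 # the request: method plus path
resp = conn.getresponse()                # the response
print(resp.status, resp.reason)          # e.g. 200 OK
print(resp.getheader("Content-Type"))    # e.g. text/html; charset=UTF-8
conn.close()
```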

The Languages of the Web. Web pages are constructed using a trio of languages interpreted by web browsers:

  • HTML (HyperText Markup Language): A markup language defining the structure and content of a page (headings, paragraphs, images).
  • CSS (Cascading Style Sheets): A styling language dictating the appearance of HTML elements (fonts, colors, layout).
  • JavaScript: A programming language enabling interactivity and dynamic behavior on web pages, often manipulating the Document Object Model (DOM) to respond to user actions.

Web Browsers and Servers. Web browsers are client applications that retrieve web resources, render HTML and CSS, and execute JavaScript. They contain rendering engines (like WebKit, Blink, Gecko) and JavaScript engines. Web servers host these resources, serving static files or dynamically generating content using any programming language, responding to HTTP requests. The evolution of the web has seen a shift from purely static pages to complex dynamic applications, and more recently, a trend back towards static sites enhanced with client-side JavaScript.
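
Serving static files takes only a few lines with Python's built-in http.server module, a minimal stand-in for the web servers described above (suitable for experiments, not production):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory at http://localhost:8000 until Ctrl+C.
server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
server.serve_forever()
```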

9. Modern Computing: Expanding Horizons

This concept of connecting all kinds of devices to the internet is known as the Internet of Things (IoT).

Apps: Native vs. Web. The term "app" has evolved to denote user-centric software, often mobile-focused and distributed via digital storefronts. Native apps are built specifically for an operating system's API (e.g., iOS, Android), offering deep device integration and performance. Web apps, conversely, are built with web technologies (HTML, CSS, JavaScript) and run in a browser, offering cross-platform compatibility. Progressive Web Apps (PWAs) bridge this gap, providing web apps with native-like features such as offline capability and home screen installation.

Virtualization and Emulation. Modern computing heavily relies on creating virtual environments:

  • Virtualization: Software creates virtual machines (VMs) that run entire operating systems on a single physical machine, abstracting hardware details. Hypervisors manage these VMs, either directly on hardware (Type 1) or as applications (Type 2). Containers offer lighter-weight isolation, sharing the host OS kernel.
  • Emulation: Software makes one type of device behave like another, allowing programs compiled for an obsolete or different hardware architecture to run on a modern system (e.g., a Sega Genesis emulator on a PC).

Cloud Computing. The delivery of computing services over the internet, "the cloud," has transformed how organizations manage IT. Cloud providers handle underlying hardware and infrastructure, offering services on demand:

  • IaaS (Infrastructure as a Service): Provides virtualized hardware (VMs, containers); consumer manages OS and applications.
  • PaaS (Platform as a Service): Provider manages hardware, OS, and runtime; consumer deploys applications.
  • FaaS (Function as a Service): Serverless model where consumer's code (functions) runs in response to events.
  • SaaS (Software as a Service): Fully managed applications delivered to end-users (e.g., Microsoft 365).

Emerging Technologies. Beyond these, modern computing encompasses:

  • Deep/Dark Web: The deep web is content not indexed by search engines, typically behind a login (e.g., online banking); the dark web additionally requires specialized software (e.g., Tor) to access, offering anonymity.
  • Bitcoin: The first decentralized cryptocurrency, based on blockchain technology, a public, immutable ledger of transactions maintained by a network of "miners" who solve computational problems; a toy hash chain is sketched after this list.
  • Virtual/Augmented Reality (VR/AR): VR immerses users in virtual worlds; AR overlays virtual elements onto the real world, offering new human-computer interaction paradigms.
  • Internet of Things (IoT): The proliferation of everyday devices connected to the internet, from smart home appliances to industrial sensors, expanding the reach of computing into countless physical objects.
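
A toy Python version of the ledger idea behind Bitcoin: each block's hash covers the previous block's hash, so altering history invalidates everything after it. The two-leading-zeros "difficulty" and the transactions are stand-ins for the real proof-of-work and data.

```python
import hashlib

def mine(prev_hash, data, difficulty=2):
    """Find a nonce whose block hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if h.startswith("0" * difficulty):   # the "computational problem"
            return h, nonce
        nonce += 1

chain = ["genesis"]
for tx in ("alice pays bob 1", "bob pays carol 2"):
    block_hash, nonce = mine(chain[-1], tx)
    chain.append(block_hash)
    print(f"{tx!r} sealed with nonce {nonce}: {block_hash[:16]}...")
```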

Review Summary

4.35 out of 5
Average of 143 ratings from Goodreads and Amazon.

How Computers Really Work is highly praised for its clear explanations of computer fundamentals, from basic electronics to programming. Readers appreciate the hands-on activities and practical exercises, making it accessible for beginners and young audiences. The book's comprehensive coverage, from transistors to high-level programming, is commended. Some reviewers note that certain sections, particularly on modern computing, feel less substantial. Overall, it's considered an excellent resource for understanding computer operations, with a few critiques on writing style and depth in specific areas.

About the Author

Matthew Justice is the author of How Computers Really Work. While little biographical detail is provided here, the book's reception suggests an author with both academic grounding and hands-on experience: readers frequently praise his ability to break complex concepts down into understandable explanations, and his approach pairs theoretical knowledge with practical exercises. The book's coverage, from basic electronics to modern computing, reflects broad expertise across computer technology and its development.
