This is a reference page for networking concepts that are essential for building web applications. It covers network architectures, the TCP/IP protocol stack, HTTP, and the key trade-offs you need to understand when designing networked systems.
How to use this page: Keep it open as a reference while working on your projects. The concepts here underpin everything you build with Node.js and React — every time your browser talks to a server, it relies on these protocols.
Network Architectures
When designing a networked application, the first decision is how your devices will communicate. There are two fundamental models, plus a practical combination of both.
Client-Server Architecture
The client-server model is the most common architecture for web-based systems. It defines two distinct roles:
| Role | Responsibility |
|------|----------------|
| Client | Initiates requests; consumes resources (e.g., your web browser) |
| Server | Listens for requests; provides resources (e.g., your Node.js backend) |
Key characteristics:
Multiple clients can connect to the same server simultaneously
Connections are always initiated by the client, never the server
It is a centralized architecture — all communication flows through the server
When you build a web app, you are building both sides: a server (Node.js/Express) that provides data and a client (React) that runs in the user’s browser.
Peer-to-Peer (P2P) Architecture
In a peer-to-peer architecture, there is no dedicated server. Every node in the network is both a supplier and a consumer of resources.
Key characteristics:
Decentralized — no single point of control
Peers are equally privileged participants
Each peer is both a supplier and consumer of resources
P2P is rare in pure form. BitTorrent is a well-known example: when you download a file via BitTorrent, your client receives chunks directly from other peers who already have parts of the file — no central file server is involved.
Hybrid Architectures
In practice, most systems that need P2P benefits use a hybrid approach: some communication goes through a central server, while some happens directly between peers.
Example — Apple FaceTime: For 1-on-1 calls, FaceTime attempts a direct peer-to-peer connection between devices for the lowest possible latency. If that fails (e.g., due to NAT or firewall restrictions), it routes communication through Apple’s relay servers. For Group FaceTime calls, all participants connect to Apple’s servers, since each device sending a separate video stream to every other participant would overwhelm its upload bandwidth.
Comparing Architectures
| Aspect | Client-Server | Peer-to-Peer | Hybrid |
|--------|---------------|--------------|--------|
| Structure | Centralized | Decentralized | Mixed |
| Single point of failure | Yes (the server) | No | Partial |
| Scalability | Add more servers | Scales with peers | Flexible |
| Use case | Web apps, APIs, databases | File sharing, distributed backup | Video calls, gaming |
Throughput and Latency
Two critical quality attributes for any networked system:
Throughput measures the volume of work processed per unit of time.
Example: “The API server handles 500 requests per second during peak load.”
Latency (response time) measures how long a single request takes to receive a reply.
Example: “Each database query returns results in 40ms.”
These are related but not the same:
Duplicating servers increases throughput (more requests handled in parallel) without necessarily reducing latency.
Implementing caching reduces latency (individual requests are faster) and may also increase throughput.
Analogy: Think of a highway between two cities. Latency is the speed limit — it determines how fast a single truck makes the journey. Throughput is the number of lanes — adding lanes lets you move more total cargo per hour, but it doesn’t make any individual truck arrive faster. Scaling horizontally (more servers) adds lanes; optimizing code or adding caches raises the speed limit.
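The highway analogy can be made concrete with a back-of-the-envelope calculation. This sketch assumes each server finishes one request at a time, which real servers do not; it only illustrates the arithmetic:

```javascript
// Back-of-the-envelope model of the throughput/latency relationship.
// Assumes each server handles requests strictly one at a time
// (a simplification; real servers serve many requests concurrently).
function maxThroughput(latencyMs, serverCount) {
  const perServer = 1000 / latencyMs; // requests/second one server can finish
  return perServer * serverCount;
}

// One server with 40ms latency: 25 req/s.
console.log(maxThroughput(40, 1)); // 25
// Four servers: 4x the throughput, but each request still takes 40ms.
console.log(maxThroughput(40, 4)); // 100
// Caching that cuts latency to 10ms also raises per-server throughput.
console.log(maxThroughput(10, 1)); // 100
```

Note how duplicating servers multiplies the first factor while caching improves the second, matching the lanes-versus-speed-limit distinction above.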
The TCP/IP Protocol Stack
The internet uses a layered architecture called the TCP/IP stack. Each layer solves a specific problem and relies only on the layer directly below it. This design provides reusability (lower layers can be shared) and flexibility (you can swap one layer’s implementation without affecting the others).
The Four Layers
| Layer | Responsibility | Example Protocols |
|-------|----------------|-------------------|
| Application Layer | Provides an interface for applications to access network services | HTTP, HTTPS, SSH, DNS, FTP, SMTP, POP, IMAP |
| Transport Layer | Provides end-to-end communication between applications on different hosts | TCP, UDP |
| Internet Layer | Enables communication between networks through addressing and routing | IPv4, IPv6, ICMP |
| Link Layer | Handles the physical transmission of data over local network hardware | Ethernet, Wi-Fi, ARP |
Where does TLS fit? TLS (and its predecessor SSL, now deprecated) sits between the transport and application layers — it wraps a TCP connection and exposes an encrypted channel that an application protocol like HTTP runs on top of. HTTPS is “HTTP over TLS over TCP.”
Encapsulation (Package Wrapping)
Higher-layer protocols use the protocols directly below them to send messages. Each layer wraps the higher-layer message as its payload and adds its own header — like sealing a letter inside successively larger envelopes, each addressed for a different step of the journey:
| Ethernet Header | IP Header | TCP Header | HTTP Header + Payload (data) |
|-----------------|-----------|------------|------------------------------|
| Link Layer | Internet | Transport | Application |
Each message consists of a header (meta information like destination, origin, content type, checksums) and a payload (the actual content of the message).
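The envelope analogy can be sketched as nested plain objects, with made-up header fields standing in for the real wire formats:

```javascript
// Sketch of encapsulation: each layer wraps the layer above as its payload.
// Header fields here are illustrative, not real wire formats.
const httpMessage = { header: { method: 'GET', path: '/' }, payload: 'hello' };
const tcpSegment  = { header: { srcPort: 52344, dstPort: 80 }, payload: httpMessage };
const ipPacket    = { header: { srcIP: '192.168.1.42', dstIP: '142.250.80.46' }, payload: tcpSegment };
const ethFrame    = { header: { srcMAC: 'aa:bb:cc:dd:ee:ff', dstMAC: '11:22:33:44:55:66' }, payload: ipPacket };

// Decapsulation at the receiver: peel off one header per layer.
const received = ethFrame.payload.payload.payload; // back to the HTTP message
console.log(received.payload); // 'hello'
```

Each layer reads only its own header and treats everything inside as an opaque payload, which is why the layers can evolve independently.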
IP Addresses
Every device on the internet needs a unique address. IP addresses solve this by having two parts: a network portion (like a city) and a host portion (like a street address within that city). Routers use the network portion to forward packets toward the right destination network; once there, the host portion identifies the specific device.
IPv4 addresses are 32-bit numbers written as four decimal octets: 0.0.0.0 to 255.255.255.255 (about 4 billion possible addresses)
IPv6 was created because the world ran out of IPv4 addresses — it uses 128-bit addresses, providing vastly more unique values
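The network/host split can be illustrated with bitwise arithmetic; the /24 netmask (255.255.255.0) here is an assumption chosen for the example:

```javascript
// Sketch: splitting an IPv4 address into network and host portions using a
// /24 netmask. Bitwise ops work because an IPv4 address fits in 32 bits.
function toInt(ip) {
  return ip.split('.').reduce((n, octet) => (n << 8) + Number(octet), 0) >>> 0;
}
function networkOf(ip, mask) {
  return (toInt(ip) & toInt(mask)) >>> 0; // zero out the host portion
}

const a = networkOf('192.168.1.42', '255.255.255.0');
const b = networkOf('192.168.1.99', '255.255.255.0');
const c = networkOf('192.168.2.7',  '255.255.255.0');
console.log(a === b); // true: same network, delivered locally
console.log(a === c); // false: different network, a router must forward
```

This is exactly the comparison a router performs to decide whether a packet stays on the local network or gets forwarded toward another one.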
Localhost and the Loopback Interface
127.0.0.1 (or its alias localhost) is a special address called the loopback address. Unlike a normal IP address that routes packets out through your network hardware, loopback traffic never leaves your machine — the operating system short-circuits it internally.
This is why it is indispensable for local development:
When you run node server.js, your server listens on localhost:3000 (or whichever port you choose)
Your browser — also running on the same machine — sends an HTTP request to localhost:3000
The OS intercepts the request before it ever touches Wi-Fi or Ethernet and routes it directly to your server process
No internet connection is required; the traffic is entirely internal to your computer
Practical consequence: A server listening on localhost is only reachable from the same machine. If a classmate tries to connect to your laptop’s localhost:3000 from their machine, it will fail — localhost on their machine refers to their machine, not yours.
Public vs. Private IP Addresses
Not all IP addresses are reachable from the internet:
| Range | Type | Example |
|-------|------|---------|
| 127.0.0.0/8 | Loopback (your own machine) | 127.0.0.1 |
| 192.168.x.x, 10.x.x.x, 172.16–31.x.x | Private (local network only) | 192.168.1.42 |
| Everything else | Public (internet-reachable) | 142.250.80.46 |
Your laptop typically has a private IP address assigned by your router (e.g. 192.168.1.42). Your router holds the single public IP address that the internet sees. When you deploy a server to the cloud, it gets a public IP — that is what makes it reachable by anyone.
Ports
An IP address identifies a machine, but a single machine can run many networked applications simultaneously (a web server, a database, an SSH daemon…). Ports identify which application on that machine should receive a given message.
The combination of an IP address and a port — written IP:port — is called a socket address and uniquely identifies a communication endpoint:
192.168.1.42:3000 → your Node.js server
192.168.1.42:5432 → your PostgreSQL database
Port numbers range from 0 to 65535
Well-known ports (0–1023) are reserved for standard services: 80 (HTTP), 443 (HTTPS), 22 (SSH), 5432 (PostgreSQL)
Ephemeral ports (typically 49152–65535) are assigned automatically by the OS for the client side of a connection — you never type these in, but every outgoing TCP connection uses one
When developing locally, you pick an unprivileged port like 3000 or 5000 to avoid needing administrator privileges (ports below 1024 require root/admin on most systems)
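A small helper illustrating socket addresses and the privileged-port boundary; `parseSocketAddress` is a hypothetical name invented for this sketch, not a standard API:

```javascript
// Split an "IP:port" socket address (IPv4 only, a simplified sketch).
function parseSocketAddress(addr) {
  const [ip, portStr] = addr.split(':');
  const port = Number(portStr);
  if (!Number.isInteger(port) || port < 0 || port > 65535) {
    throw new Error(`invalid port: ${portStr}`);
  }
  // Ports below 1024 are "well-known" and need admin rights to bind.
  return { ip, port, privileged: port < 1024 };
}

console.log(parseSocketAddress('192.168.1.42:3000'));
// { ip: '192.168.1.42', port: 3000, privileged: false }
console.log(parseSocketAddress('142.250.80.46:443'));
// { ip: '142.250.80.46', port: 443, privileged: true }
```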
DNS (Domain Name System)
Humans use names like github.com; computers use IP addresses like 140.82.121.4. DNS is the distributed directory that translates one into the other — effectively the phone book of the internet.
When you type github.com into your browser:
Your OS checks its local DNS cache — if it recently resolved this name, it reuses the answer
If not cached, it sends a DNS query (over UDP, port 53) to a DNS resolver — typically provided by your ISP or configured manually (e.g. Google’s 8.8.8.8)
The resolver works through a hierarchy of DNS servers to find the authoritative answer
Your OS receives the IP address, caches it for a configurable time (the TTL), and the browser proceeds with the HTTP request
This is why DNS uses UDP: each lookup is a single independent question-and-answer pair. If the response is lost, the client simply retries — no persistent connection is needed.
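The caching step above can be sketched as a tiny TTL cache. The resolver, IP, and TTL below are made up for illustration; a real lookup would perform the UDP query to port 53 described above:

```javascript
// Toy model of the OS-level DNS cache: answers are reused until the TTL expires.
const cache = new Map();

function cachedResolve(name, resolveFn, nowMs) {
  const hit = cache.get(name);
  if (hit && nowMs < hit.expiresAt) return hit.ip;  // reuse cached answer
  const { ip, ttlSeconds } = resolveFn(name);       // the "network" lookup
  cache.set(name, { ip, expiresAt: nowMs + ttlSeconds * 1000 });
  return ip;
}

// Fake resolver standing in for the real UDP query.
let lookups = 0;
const fakeResolver = () => { lookups++; return { ip: '140.82.121.4', ttlSeconds: 60 }; };

cachedResolve('github.com', fakeResolver, 0);     // cache miss: 1 lookup
cachedResolve('github.com', fakeResolver, 30000); // hit within TTL: still 1
cachedResolve('github.com', fakeResolver, 61000); // TTL expired: 2 lookups
console.log(lookups); // 2
```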
Transport Layer Protocols: TCP vs. UDP
The transport layer offers two protocols with fundamentally different trade-offs. Choosing between them is one of the most important networking decisions you will make.
UDP (User Datagram Protocol)
UDP simply “throws” messages at the receiver without establishing a connection first.
Fast and lightweight — no connection setup overhead
Connectionless — just sends the data
Does not guarantee delivery or order
Includes a checksum for error detection (mandatory in IPv6), but does not recover from errors — corrupted packets are silently discarded
If a message is lost, it is simply gone
UDP is ideal when speed matters more than reliability: DNS name resolution (a fast, independent lookup where a retry is cheap — though DNS falls back to TCP when a response is too large for a single UDP packet), live GPS position broadcasts in navigation apps, and live financial-market tick streams pushed to traders’ dashboards (where a stale price is no longer worth waiting for).
@startuml
participant sender: Sender
participant receiver: Receiver
sender ->> receiver: Datagram [1]
sender ->> receiver: Datagram [2]
note right of receiver: checksum failed — discard silently
sender ->> receiver: Datagram [3]
sender ->> receiver: Datagram [4]
note right of receiver: packet lost — never arrives
sender ->> receiver: Datagram [5]
note over sender: sender never knows about the lost or corrupted packets
@enduml
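The diagram above can be simulated in a few lines; `sendDatagrams` is a toy model of a lossy channel, not real UDP:

```javascript
// Simulation of the diagram: a lossy channel drops or corrupts datagrams,
// and the receiver keeps whatever survives. No retries, no ordering.
function sendDatagrams(datagrams, dropped, corrupted) {
  const received = [];
  for (const d of datagrams) {
    if (dropped.has(d)) continue;   // lost in transit; sender never knows
    if (corrupted.has(d)) continue; // checksum fails; discarded silently
    received.push(d);
  }
  return received;
}

// Datagram 2 arrives corrupted, datagram 4 is lost entirely.
const out = sendDatagrams([1, 2, 3, 4, 5], new Set([4]), new Set([2]));
console.log(out); // [1, 3, 5]
```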
TCP (Transmission Control Protocol)
TCP is more complex but provides reliable, ordered delivery. It uses a three-way handshake to establish a connection:
Connection Setup (3-Way Handshake):
@startuml
participant client: Client
participant server: Server
client ->> server: SYN
server ->> client: SYN-ACK
client ->> server: ACK
note over client, server: Connection established
@enduml
Data Transfer: Messages are sent in order, each with a checksum for error detection (like UDP, but TCP goes further). The receiver sends ACKs to confirm receipt. If the sender doesn’t receive an ACK within a timeout, it retransmits the message — this error recovery is what distinguishes TCP from UDP.
@startuml
participant client: Client
participant server: Server
client ->> server: Data [seq=1]
server ->> client: ACK [seq=1]
client ->> server: Data [seq=2]
note right of server: packet lost — no ACK sent
note over client: timeout — retransmit
client ->> server: Data [seq=2]
server ->> client: ACK [seq=2]
@enduml
Connection Teardown:
@startuml
participant client: Client
participant server: Server
client ->> server: FIN
server ->> client: ACK
server ->> client: FIN
client ->> server: ACK
note over client, server: Connection closed
@enduml
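The ACK-and-timeout loop in the data-transfer diagram can be simulated synchronously; this toy model counts segments rather than bytes and is in no way real TCP:

```javascript
// Each segment is resent until acknowledged. `channel(seg)` returns true
// if the segment got through (i.e., an ACK came back before the timeout).
function reliableSend(segments, channel) {
  const delivered = [];
  let sends = 0;
  for (const seg of segments) {
    while (true) {
      sends++;
      if (channel(seg)) { delivered.push(seg); break; } // ACK received
      // else: no ACK before the timeout, so retransmit
    }
  }
  return { delivered, sends };
}

// A channel that loses segment 2 exactly once.
let droppedOnce = false;
const flaky = (seg) => {
  if (seg === 2 && !droppedOnce) { droppedOnce = true; return false; }
  return true;
};

const result = reliableSend([1, 2, 3], flaky);
console.log(result.delivered); // [1, 2, 3]: all delivered, in order
console.log(result.sends);     // 4: one retransmission was needed
```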
The cost of reliability: for N data messages, TCP in the simplest case (one ACK per segment) sends roughly 2N + 7 messages: 3 for the handshake, N data segments, N ACKs, and 4 for teardown. UDP would send just N messages.
TCP vs. UDP — Trade-Offs at a Glance
| Aspect | TCP | UDP |
|--------|-----|-----|
| Message order | Preserved | Any order |
| Error detection | Included (checksums) | Included (checksums), but no error recovery |
| Lost messages | Retransmitted | Lost forever |
| Speed | Slower (overhead) | Fast (no overhead) |
When to Use Each
| Protocol | Best For | Examples |
|----------|----------|----------|
| TCP | Data that must arrive completely and in order | Pushing code to a Git repository, submitting an online tax return, transferring files via SFTP, web browsing |
| UDP | Real-time data where speed beats reliability | DNS queries (primarily), live GPS updates, live screen sharing during remote presentations, live IoT sensor telemetry |
Live online stock-trading platforms use a hybrid: UDP for high-frequency price-tick broadcasts (often hundreds of updates per second per symbol), since a missed tick is harmless — the next one carries the current price milliseconds later. TCP handles trade orders, account balance updates, and trade confirmations, where a lost or reordered message would corrupt the user’s account state. UDP ticks include the absolute current price of each symbol, so a single dropped packet never causes lasting inconsistency.
HTTP (Hypertext Transfer Protocol)
HTTP is the foundation of data communication on the World Wide Web. It is an application-layer protocol that runs on top of TCP.
Key Property: Stateless
HTTP is a stateless protocol — each request is independent, and the server does not remember anything about previous requests from the same client. Every request must contain all the information the server needs to respond. (Real applications layer state on top of HTTP using mechanisms like cookies, sessions, or bearer tokens such as JWTs.)
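A sketch of layering state on top of stateless HTTP: every request carries a bearer token, so the server needs no memory of earlier requests. The token value and user table are invented for the example:

```javascript
// Because HTTP is stateless, each request must carry everything the
// server needs; here that is a bearer token mapped to a user.
const users = { 'token-abc': 'alice' };

function handleRequest(req) {
  // The server consults nothing from "previous requests", only this one.
  const token = (req.headers['authorization'] || '').replace('Bearer ', '');
  const user = users[token];
  return user
    ? { status: 200, body: `hello ${user}` }
    : { status: 401, body: 'missing or invalid token' };
}

console.log(handleRequest({ headers: { authorization: 'Bearer token-abc' } }));
// { status: 200, body: 'hello alice' }
console.log(handleRequest({ headers: {} }));
// { status: 401, body: 'missing or invalid token' }
```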
HTTP versions. HTTP/1.1 (1997) introduced persistent connections and pipelining. HTTP/2 (2015) added binary framing and multiplexing over a single TCP connection. HTTP/3 (standardized 2022) replaces TCP with QUIC, which runs over UDP and integrates TLS — so an HTTP/3 connection avoids head-of-line blocking and can establish in fewer round trips.
HTTPS is HTTP wrapped in TLS (the successor to the now-deprecated SSL). It provides confidentiality (no eavesdropping), integrity (no tampering), and server authentication (you really are talking to ucla.edu).
HTTP Verbs (Methods)
| Verb | Purpose | Response Contains |
|------|---------|-------------------|
| GET | Retrieve a resource (web page, data, image, file). Safe and idempotent. | The resource content + status code |
| POST | Send data for processing — typically to create a new resource (form submission, file upload). Not idempotent. | Status code (and often the new resource or its location) |
| PUT | Create or replace the resource at a specific URI. Idempotent. | Status code |
| PATCH | Apply a partial update to an existing resource. | Status code |
| DELETE | Delete a resource on the server. Idempotent. | Status code |
| HEAD | Retrieve only the headers of a resource, not the body. | Status code + headers only (no body) |
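The idempotency notes above can be demonstrated with a toy in-memory store (the `db` shape is invented for the example):

```javascript
// PUT applied twice leaves the same state; POST applied twice creates
// two resources. A real server would back this with a database.
const db = { nextId: 1, courses: {} };

function put(id, data) { db.courses[id] = data; }          // replace at a URI
function post(data)    { db.courses[db.nextId++] = data; } // create new

put('cs101', { title: 'Intro' });
put('cs101', { title: 'Intro' }); // same state as after one PUT
post({ title: 'Networks' });
post({ title: 'Networks' });      // two distinct resources created

console.log(Object.keys(db.courses).length); // 3
```

This is why browsers warn before re-submitting a POSTed form but happily retry a GET: repeating a non-idempotent request can change server state again.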
HTTP Status Codes
Every HTTP response includes a status code that tells the client what happened:
| Category | Meaning | Common Codes |
|----------|---------|--------------|
| 2xx | Success | 200 OK — request succeeded; 201 Created — new resource created |
| 4xx | Client error | 400 Bad Request — malformed syntax; 401 Unauthorized; 403 Forbidden; 404 Not Found — resource doesn’t exist |
| 5xx | Server error | 500 Internal Server Error — generic server failure; 502 Bad Gateway; 503 Service Unavailable |
Rule of thumb: 2xx = you did it right, 4xx = you messed up, 5xx = the server messed up.
HTTP Headers
Each HTTP message includes headers with metadata about the request or response. A critical header:
Content-Type — tells the receiver what kind of data is in the body:
| Content-Type | Used For |
|--------------|----------|
| text/html; charset=utf-8 | HTML web pages |
| text/plain | Plain text |
| application/json | JSON data (the standard for API communication) |
HTTPS (HTTP Secure)
HTTPS uses TLS encryption (historically SSL, now deprecated) to secure communication. It is essential whenever sensitive data is transferred (passwords, personal information, private messages) and has become the default for all public web pages, even for non-sensitive content.
Building a Server with Node.js
Node.js ships with a built-in http module that lets you create an HTTP server from scratch:
const http = require('http');
const PORT = 3000;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello, World!\n');
});

server.listen(PORT, 'localhost', () => {
  console.log(`Server running at http://localhost:${PORT}/`);
});
For real applications, the Express framework provides much cleaner routing:
const express = require('express');
const app = express();
const port = 5000;

// GET /courses/:courseId — route parameter
app.get('/courses/:courseId', (req, res) => {
  res.send(`GET request for course ${req.params.courseId}`);
});

// POST /enrollments — create a new enrollment
app.post('/enrollments', (req, res) => {
  res.send('POST request to enroll in a course');
});

// Catch-all 404 handler — must be last
app.all('*', (req, res) => {
  res.status(404).send('404 - Page not found');
});

app.listen(port, () => {
  console.log(`Express server listening on port ${port}`);
});
Review key networking concepts: architectures, protocols, HTTP, and the TCP/IP stack.
Difficulty:Basic
What are the two roles in a client-server architecture, and who initiates the connection?
The client consumes resources and always initiates the connection. The server provides resources and passively listens for incoming requests. Multiple clients can connect to the same server simultaneously.
Difficulty:Basic
How does a peer-to-peer (P2P) architecture differ from client-server?
In P2P, there is no central server. Every node is equally privileged and acts as both a supplier and consumer of resources. It is decentralized, so there is no single point of failure — but if a peer goes offline, its unique resources become unavailable.
Difficulty:Intermediate
What is a hybrid architecture? Give a real-world example.
A hybrid combines client-server and P2P. Apple FaceTime uses hybrid: for 1-on-1 calls it attempts a direct P2P connection for lower latency, falling back to Apple’s relay servers if NAT or firewalls block the direct path. Group FaceTime routes all participants through Apple’s servers to prevent each device from uploading a separate video stream to every other participant.
Difficulty:Basic
Explain the difference between throughput and latency.
Throughput = volume of requests processed per unit time (e.g., an API handling 500 req/sec during peak load). Latency = time for a single request to complete (e.g., a database query returning in 40ms). They are not always correlated: adding more servers increases throughput but doesn’t reduce per-request latency. Caching reduces latency and may also increase throughput.
Difficulty:Advanced
You type a URL into your browser and press Enter. Trace the journey of that HTTP request down the four layers of the TCP/IP stack — name each layer and describe what it contributes.
1. Application Layer — your browser constructs the HTTP request (verb, URL, headers). 2. Transport Layer — TCP wraps it in a segment with port numbers and sequence info for reliable delivery. 3. Internet Layer — IP wraps it in a packet with source and destination IP addresses for routing between networks. 4. Link Layer — Ethernet/Wi-Fi wraps it in a frame with MAC addresses and physically transmits it to the next hop.
Difficulty:Intermediate
What is encapsulation (package wrapping) in the TCP/IP stack?
Each layer wraps the higher-layer message as its payload and adds its own header — like sealing a letter inside successively larger envelopes. An HTTP message (the letter) is placed inside a TCP envelope (labeled with port numbers), which is placed inside an IP envelope (labeled with IP addresses), which is placed inside an Ethernet envelope (labeled with MAC addresses). Each envelope carries only the addressing information needed for that one delivery step.
Difficulty:Advanced
What is the TCP three-way handshake and why is it needed?
SYN → SYN-ACK → ACK. The client sends SYN (‘I want to connect’), the server replies SYN-ACK (‘OK, I’m ready’), the client confirms with ACK (‘let’s go’). It ensures both parties are ready to send and receive data before any data is transmitted.
Difficulty:Advanced
How does TCP guarantee reliable delivery during data transfer?
TCP sends data in ordered segments with checksums for error detection. The receiver sends ACKs to confirm receipt (a single ACK can cover multiple segments). If the sender doesn’t receive an ACK within a timeout, it retransmits the missing data. This guarantees delivery, order, and integrity — at the cost of additional overhead.
Difficulty:Basic
What does it mean that HTTP is stateless?
Each HTTP request is independent — the server does not remember any information about previous requests from the same client. Every request must contain all the information the server needs to respond. Web apps use cookies/sessions to maintain state across requests.
Difficulty:Basic
Name at least three main HTTP verbs and what each does.
GET — retrieve a resource. POST — send data for processing, typically to create a new resource. PUT — create or replace the resource at a specific URI (idempotent). DELETE — delete a resource. HEAD — retrieve only the headers of a resource (not the body).
Difficulty:Basic
What is 127.0.0.1 and what is it commonly called?
Localhost — a special reserved IP address that always refers to your own machine. During development you might run your Express backend on localhost:5000 and your React frontend on localhost:3000; both processes are on the same machine and communicate without ever touching the public internet.
Difficulty:Intermediate
What is a URL and what are its components?
{protocol}://{domain}(:{port})(/{resource}). Example: http://localhost:5000/courses/cs101. Protocol (http/https), domain (the server’s address), port (which application on the server — optional, defaults to 80/443), resource path (which resource to access — optional, defaults to /).
Difficulty:Basic
What does HTTPS add on top of HTTP, and why is it important?
HTTPS adds SSL/TLS encryption to HTTP. It protects sensitive data (passwords, personal info) from being intercepted in transit. It has become the default for all public web pages, even those without sensitive data, because it also prevents tampering and ensures you’re talking to the real server.
Networking Fundamentals Quiz
Test your understanding of network architectures, the TCP/IP protocol stack, HTTP, and how the internet works.
Difficulty:Basic
In a client-server architecture, which statement is TRUE?
A server can send data after a connection or session exists, but in this simple client-server model the client initiates contact.
Server push requires an established mechanism such as WebSockets or server-sent events. It is not the default meaning of client-server architecture.
Many clients can connect to one server. The architecture centralizes service, not exclusivity.
Explanation
In client-server architecture, the client always initiates the connection — the server passively listens for incoming requests and responds to them. Multiple clients can connect to the same server simultaneously, and the server never reaches out to a client unprompted (unless a different pattern like WebSockets is layered on top).
Difficulty:Basic
What is the key advantage of peer-to-peer (P2P) architecture over client-server?
P2P can improve resilience, but it does not guarantee better speed. Peer availability, upload capacity, and routing all affect performance.
P2P is often harder to implement because discovery, trust, NAT traversal, and consistency move into the application design.
P2P can produce more coordination messages than client-server. Its main advantage here is avoiding one central failure point.
Explanation
In a P2P architecture there is no central server whose failure would bring down the whole system — every peer continues communicating with other available peers. However, if a peer goes offline, any resources unique to that peer become temporarily unavailable: P2P eliminates a single point of failure at the infrastructure level, but individual peers are still fallible.
Difficulty:Basic
What is the difference between throughput and latency?
A system can have high throughput and still make one user wait a long time. Volume per second and delay per request are different measurements.
Server count and client count may influence performance, but they are not the definitions of latency and throughput.
Both latency and throughput matter for TCP, UDP, and higher-level protocols. They are general performance concepts, not protocol-exclusive metrics.
Explanation
Throughput is about volume per time period (e.g., 500 requests/second), while latency is about individual speed (e.g., a single database query takes 40ms). They are related but independently optimized — adding more servers increases throughput without necessarily reducing latency, while caching reduces latency and may also improve throughput.
Difficulty:Intermediate
In the TCP/IP stack, what is the purpose of the Transport Layer?
Physical transmission over Wi-Fi or Ethernet belongs below the transport layer. TCP and UDP operate above that link-level delivery.
Routing packets between networks is the Internet layer’s job. The transport layer adds application-to-application communication through ports and transport behavior.
HTTP is an application-layer protocol. It uses transport services rather than being provided by the transport layer itself.
Explanation
The Transport Layer (TCP, UDP) sits between the Internet Layer and the Application Layer and provides end-to-end communication between specific applications (identified by ports) on different hosts — not just between machines. The Link Layer handles physical transmission, the Internet Layer handles routing between networks, and the Application Layer gives apps protocols like HTTP to talk to each other.
Difficulty:Intermediate
When data travels down through the TCP/IP stack before being sent, what happens at each layer?
Headers are removed when data moves upward at the receiver. Moving downward adds each layer’s header around the higher-layer payload.
Encryption may happen in some protocols, but encapsulation is the normal layer-by-layer operation being tested here.
Fragmentation or segmentation can happen, but the general per-layer operation is wrapping data with layer-specific metadata.
Explanation
Each layer wraps the higher-layer message as its payload and adds its own header — a process called encapsulation, like sealing a letter inside successively larger envelopes. An HTTP message is placed inside a TCP envelope (port numbers added), that inside an IP envelope (IP addresses added), and that inside an Ethernet envelope (MAC addresses added). Each envelope carries only the metadata needed for its own delivery step, and the process reverses (decapsulation) at the receiving end.
Difficulty:Basic
A student runs node server.js and their terminal shows: Server listening on http://localhost:5000. They open a browser on the same machine. Which URL should they visit?
0.0.0.0 is a bind address meaning all local interfaces; it is not the usual destination URL typed into a browser.
A browser on the same machine can reach a loopback server without public IPs or port forwarding.
A local hostname may work if name resolution is configured, but the reliable loopback URL shown by localhost is 127.0.0.1 with the port.
Explanation
127.0.0.1 is the loopback (localhost) address that always refers to the local machine itself. A server listening on 127.0.0.1:5000 is reachable from any browser on the same machine at that exact address — no internet connection, public IP, or port forwarding required. This is the standard local development workflow.
Difficulty:Basic
HTTP is described as a ‘stateless’ protocol. What does this mean?
Stateless does not mean a server literally clears all memory after every request. It means HTTP itself does not remember a client’s previous request context.
Encryption is a separate HTTP-versus-HTTPS issue. Statelessness is about request independence.
HTTP can transfer many media types, including images and other binary content. Statelessness is not about payload format.
Explanation
Stateless means every HTTP request is treated as completely independent — the server does not automatically track which requests came from the same user or session. This is why web applications use mechanisms like cookies and session tokens to maintain state across multiple requests.
Difficulty:Basic
Your Express route handler queries the database for a course by ID, but no matching course exists. Which HTTP status code should the handler return?
The server did not successfully return the requested resource. A handled request with missing data should not pretend success with 200.
201 Created is for successful creation of a new resource. A missing course lookup is not a creation event.
A missing course is normally a client-visible resource absence, not an unexpected server failure. Use 500 for server-side faults such as crashes or unhandled exceptions.
Explanation
404 Not Found is the correct response when the server handled the request successfully but the requested resource simply doesn’t exist at that URL. Use 200 when data is returned successfully, 201 when a new resource was created, and 500 only for unexpected server-side failures (like an unhandled exception or a crashed database connection).
Difficulty:Basic
Why was HTTPS created, and what does it add on top of HTTP?
HTTPS may have performance optimizations in practice, but its defining addition is TLS security, not compression.
HTTPS still commonly runs HTTP over TLS over TCP. It does not replace TCP with UDP just by adding security.
Caching is an HTTP/application concern. HTTPS protects traffic in transit rather than adding server-side caching.
Explanation
HTTPS (HTTP Secure) wraps HTTP inside a TLS encryption layer, preventing anyone intercepting traffic between client and server from reading or modifying it — critical for passwords, personal information, and financial transactions. It does not change the underlying TCP transport or add server-side caching. HTTPS has become the default for all public web pages, even static ones without sensitive data, because it also verifies the server’s identity.
Difficulty:Intermediate
Arrange the TCP/IP layers in order from bottom (closest to hardware) to top (closest to the application).
Correct order: Link Layer → Internet Layer → Transport Layer → Application Layer
Explanation
Bottom to top: Link (physical hardware: Ethernet, Wi-Fi), Internet (IP addressing and routing between networks), Transport (TCP/UDP end-to-end communication between applications), Application (HTTP, HTTPS, DNS, SSH: the protocols your code uses directly). Each layer uses only the layer immediately below it, enabling clean separation of concerns.
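One way to see the layering concretely: an HTTP request is just text that the application layer hands to the transport layer below it. The sketch below builds those bytes by hand; `buildHttpRequest` is an illustrative helper, and in Node you could pass the resulting string to a raw TCP socket from the `net` module.

```javascript
// Application layer: HTTP is plain text with CRLF line endings.
// Node's `net` module (TCP, the transport layer) could send exactly
// these bytes to port 80 of the host.
function buildHttpRequest(host, path) {
  return [
    `GET ${path} HTTP/1.1`, // request line
    `Host: ${host}`,        // required header in HTTP/1.1
    'Connection: close',
    '', '',                 // blank line terminates the header block
  ].join('\r\n');
}

const req = buildHttpRequest('example.com', '/');
// TCP (transport) carries these bytes end to end; IP (internet) routes
// the packets between networks; the link layer moves frames over
// Ethernet or Wi-Fi. Each layer only uses the one directly below it.
```

Your code normally stays at the top of the stack (e.g. `fetch` in the browser, `http` in Node) and never touches the lower layers directly.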
Difficulty:Intermediate
Which of the following are guarantees provided by TCP but NOT by UDP? (Select all that apply)
Ordering is one of the central TCP guarantees. UDP datagrams can arrive out of order unless the
application adds its own sequencing.
TCP detects missing data and retransmits. UDP leaves loss handling to the application.
TCP checksum failures cause bad segments to be discarded, and reliability mechanisms lead to
retransmission. UDP has a checksum too, but not the same delivery recovery guarantee.
TCP’s guarantees require acknowledgments, sequencing, and retransmission machinery. Those
mechanisms add overhead rather than eliminating latency.
Correct Answers: guaranteed ordering, retransmission of lost messages, and error detection
Explanation
TCP guarantees ordering (sequence numbers), retransmission of lost messages (ACK + timeout), and error detection (checksums), but all three come at a cost: the 3-way handshake, per-message ACKs, and connection teardown add overhead. 'Zero additional latency' is a property of UDP, which fires data at the receiver with no setup and no acknowledgment.
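A toy model makes the sequence-number mechanism tangible. This is not real TCP, just an illustration of what the receiver side of "ordering + retransmission" has to do: reorder segments by sequence number and identify gaps to re-request.

```javascript
// Toy receiver: segments may arrive out of order or go missing.
// Sequence numbers let us reorder them and detect the holes that
// real TCP would trigger a retransmission for.
function reassemble(segments) {
  const bySeq = new Map(segments.map((s) => [s.seq, s.data]));
  const maxSeq = Math.max(...bySeq.keys());
  const missing = [];
  let data = '';
  for (let seq = 0; seq <= maxSeq; seq++) {
    if (bySeq.has(seq)) data += bySeq.get(seq);
    else missing.push(seq); // TCP would ask the sender to re-send these
  }
  return { data, missing };
}

// Segments arrived out of order, and segment 2 was lost in transit:
const result = reassemble([
  { seq: 1, data: 'lo ' },
  { seq: 0, data: 'hel' },
  { seq: 3, data: 'world' },
]);
```

UDP provides none of this machinery: a datagram application sees whatever arrives, in whatever order it arrives.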
Networking: Making Decisions
Given real-world application scenarios, choose the right network architecture, transport protocol, and application protocol. These questions test your ability to analyze trade-offs and justify design decisions.
Difficulty:Intermediate
You are building a collaborative coding interview platform where the candidate and the interviewer edit the same file at the same time, character by character. The candidate types def foo():, then immediately replaces it with def bar():. If those two edits arrive at the interviewer in the wrong order, the interviewer’s screen ends up showing def foo(): even though the candidate’s screen shows def bar():. Which transport protocol should the editing channel use?
Latency does matter, but the platform also depends on the order of every operation. A faster
channel that delivers a delete before its earlier insert leaves the shared file inconsistent.
Each keystroke is a separate operation (insert this character, delete that one), so a missing edit
cannot be reconstructed by the next one. Replacement semantics only work when every message carries
the full state, not a delta.
Timestamps can sort edits at the receiver, but a missing edit never arrives at all. Sorting cannot
fix a hole — the receiver still ends up with a different file than the sender.
Correct Answer: TCP
Explanation
TCP is required because every edit must arrive in the order it was typed. Collaborative editors send each keystroke as a small insert or delete operation; if a delete arrives before its preceding insert, or never arrives at all, the two screens drift apart. TCP’s ordering and retransmission guarantees rule this out, and the handshake/ACK overhead is negligible for tiny keystroke payloads — the same reason web apps and SSH (both TCP) handle their interactive workloads cleanly.
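The divergence described in the question can be reproduced in a few lines. This sketch (the `applyOps` helper and the op shapes are illustrative, not a real collaborative-editing library) applies keystroke deltas to a document, showing that the same ops in a different order yield a different file:

```javascript
// Each edit is a delta (insert/delete at a position), not a snapshot,
// so applying the same ops in a different order diverges.
function applyOps(doc, ops) {
  for (const op of ops) {
    if (op.type === 'insert') {
      doc = doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
    } else {
      // 'delete': remove op.len characters starting at op.pos
      doc = doc.slice(0, op.pos) + doc.slice(op.pos + op.len);
    }
  }
  return doc;
}

const ops = [
  { type: 'insert', pos: 0, text: 'def foo():' },
  { type: 'delete', pos: 4, len: 3 },      // remove 'foo'
  { type: 'insert', pos: 4, text: 'bar' }, // type 'bar'
];
const inOrder = applyOps('', ops);                        // what the typist sees
const reordered = applyOps('', [ops[0], ops[2], ops[1]]); // last two ops swapped
```

In order, the document ends as `def bar():`; with the delete and insert swapped, it ends as `def foo():`, exactly the inconsistency the question describes. TCP's ordering guarantee prevents the swap from ever happening on the wire.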
Difficulty:Intermediate
You’re building a smart doorbell with a live camera feed. When a visitor presses the button, the homeowner’s phone displays the camera in real time so the homeowner can see who’s there before deciding to answer. Which transport protocol should carry the camera video stream?
A single missing frame is replaced within milliseconds by the next one — the visitor’s face stays
visible. Waiting to retransmit it instead causes a visible stall right when the homeowner is trying to act.
Frames are displayed in order, but a re-sent frame from a moment ago shows a moment that has already
passed. Real-time video benefits more from skipping than from waiting.
Live video still travels over the transport layer. Real-time media commonly uses UDP-based protocols
(RTP, WebRTC); it does not skip the transport layer.
Correct Answer: UDP
Explanation
UDP is correct because a re-sent video frame arrives too late to be useful. A dropped UDP packet causes a tiny visual glitch (often imperceptible — the next frame arrives within milliseconds), but TCP’s retransmission would pause the entire stream to wait for a packet describing what the visitor was doing a moment ago — exactly when the homeowner needs the present view. Every live-video product (FaceTime, Zoom, real-time camera feeds) uses UDP-based protocols for this reason.
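The "skip rather than wait" policy of a live viewer can be sketched as a small frame-selection function. The `latestFrame` helper and the timestamp-based frame objects are illustrative assumptions, not part of any real video API:

```javascript
// A live viewer only ever wants the newest frame. Anything older than
// what's already on screen (e.g. a lost frame re-sent late) is dropped,
// which is the behavior UDP's fire-and-forget model naturally allows.
function latestFrame(frames, shownUpTo) {
  const fresh = frames.filter((f) => f.ts > shownUpTo);
  if (fresh.length === 0) return null; // nothing newer: keep current frame
  return fresh.reduce((a, b) => (a.ts > b.ts ? a : b)); // newest wins
}

// Frame ts=2 was lost and re-sent; it arrives after ts=3 was shown:
const lateFrame = latestFrame([{ ts: 2 }], 3); // dropped (null)
const liveFrame = latestFrame([{ ts: 5 }, { ts: 4 }], 3); // show ts=5
```

With TCP, the stack itself would refuse to deliver frame 5 until the re-sent frame 2 arrived, stalling the whole feed.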
Difficulty:Advanced
An indie team is building an online multiplayer racing game. Each player’s car position and speed update 60 times per second so all players see each other accurately on the track. The game also records lap completion events, awards podium finishes, and lets players spend earned currency on car cosmetic upgrades that persist between matches. What transport-protocol strategy fits best?
Re-sending 60 stale position updates per second would freeze the screen waiting for snapshots that
no longer matter. Position data is a continuous stream where each new update replaces the previous one.
Some game data must never be lost. A missed podium finish or a vanished cosmetic purchase would
corrupt the player’s persistent progress, and there is no later message that reconstructs it.
HTTP is request-response and runs over TCP — it is poorly suited to 60-Hz position broadcasts. The
transport choice should match each data type’s tolerance for loss, not default to one application protocol.
Correct Answer: A hybrid — UDP for position updates, TCP for progression events
Explanation
A hybrid is correct because position updates and progression events have opposite requirements. Car positions arrive at 60 Hz with absolute coordinates — a missed snapshot is replaced within ~17 ms, so UDP is ideal. But a lost lap completion or purchase would corrupt persistent state — those need TCP’s reliable, ordered delivery. This is the same hybrid pattern the SEBook describes for live online stock-trading platforms: UDP for the high-frequency snapshot stream of values that supersede each other, TCP for the events that must be recorded once and exactly once.
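The hybrid pattern often shows up in code as a routing table from message kind to channel. The kinds and channel names below are illustrative, matching the scenario rather than any real game-networking library:

```javascript
// Route each message to the channel matching its tolerance for loss.
const CHANNELS = {
  position: 'udp',    // 60 Hz snapshots; a lost one is superseded in ~17 ms
  lapComplete: 'tcp', // must be recorded exactly once
  purchase: 'tcp',    // persistent currency change: never lose it
};

function channelFor(kind) {
  const channel = CHANNELS[kind];
  if (channel === undefined) {
    // Failing loudly beats silently defaulting a critical event to UDP.
    throw new Error(`unknown message kind: ${kind}`);
  }
  return channel;
}
```

Making the transport choice explicit per message type keeps the trade-off visible: adding a new message kind forces a deliberate decision about whether it can tolerate loss.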
Difficulty:Basic
You are building a cloud file storage service similar to Dropbox or Google Drive. A user clicks ‘Upload’ on a 200 MB folder of design files. The folder must arrive at the server bit-for-bit identical so that other devices syncing the same folder see the exact same files. Which transport protocol should carry the upload?
Faster transfer is irrelevant if the file arrives corrupted. A storage service’s core promise is that
what was uploaded is exactly what comes down again on every other device.
Detecting and re-requesting missing chunks on top of UDP is essentially rebuilding TCP — and getting
the details (timeouts, sequence numbers, congestion handling) wrong, when the OS already provides
them for free.
File size doesn’t change the requirement. A 5 MB photo and a 500 MB video both need byte-perfect
delivery for the sync invariant to hold.
Correct Answer: TCP
Explanation
TCP is required because the storage service’s core invariant is byte-for-byte fidelity. A single flipped bit in a .psd or .docx file can corrupt it silently — the user only discovers the damage when they reopen the file later. TCP’s retransmission, ordering, and checksum guarantees prevent this. The handshake/ACK overhead is negligible compared to the size of the payload, which is why every major cloud-sync product uses TCP — typically over HTTPS.
Difficulty:Intermediate
A startup is launching an online concert ticketing platform. Fans browse upcoming shows, pay with a credit card, and receive a unique QR-code ticket. The platform must prevent two fans buying the same seat, and it must keep an immutable record of every sale for tax and refunds. Should the backend be client-server or peer-to-peer?
Direct peer-to-peer negotiation cannot enforce the no-double-booking rule across the whole platform.
Without a single coordinator, two peers can each independently decide to sell the same seat.
Saving on infrastructure cost still leaves the platform with no way to record sales authoritatively
or process refunds. The product depends on a central authority that pure P2P does not provide.
The architecture matters because the requirements include central inventory, central payment
processing, and a tamper-resistant audit trail — features client-server provides and pure P2P does not.
Correct Answer: Client-server
Explanation
Client-server is required because the platform must serialize seat reservations, process payments, and own the audit trail. A central server is the only place where seat inventory can be locked, transactions can be recorded once, and disputes can be resolved against an authoritative record. P2P would let two buyers commit to the same seat independently and would have nowhere to enforce payment finality. This is the same pattern any multi-sided marketplace (Uber, Airbnb, eBay) uses for the same reasons.
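The "serialize seat reservations" point can be sketched as server-side code. The `soldSeats` map and `reserveSeat` helper are illustrative; a production system would back this with a database transaction, but the shape of the check is the same:

```javascript
// Server-side seat inventory: the one authoritative copy.
// Node's single-threaded event loop runs these calls one at a time,
// so two requests for the same seat cannot both succeed.
const soldSeats = new Map(); // seatId -> buyerId

function reserveSeat(seatId, buyerId) {
  if (soldSeats.has(seatId)) {
    return { ok: false, reason: 'seat already sold' };
  }
  soldSeats.set(seatId, buyerId); // claim is recorded exactly once
  return { ok: true };
}

const first = reserveSeat('A12', 'fan-1');  // succeeds
const second = reserveSeat('A12', 'fan-2'); // rejected: no double booking
```

In a pure P2P design there is no single `soldSeats` map; each peer has its own copy, and nothing stops two peers from both "selling" seat A12 before they hear about each other.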
Difficulty:Intermediate
A research consortium is designing a distributed scientific data archive: each participating university hosts a copy of selected genome datasets and serves them directly to other universities that request a copy. There must be no single institution that controls or can take down the archive, and the system should keep functioning even if several universities go offline at once. Which architecture fits these requirements best?
Operational simplicity is real, but it conflicts with the explicit requirement that no single
institution control the archive or be a single point of failure.
A central index reintroduces the single point of failure the requirements rule out. If the indexing
institution goes offline or revokes access, the whole archive becomes unreachable.
Even if raw data transfer is peer-to-peer, a single central index is still a single point of control
and failure. The requirement here is about control, not just bandwidth.
Correct Answer: Peer-to-peer
Explanation
Peer-to-peer is correct because decentralization is an explicit requirement, not just a preference. Any central server (including just a central index) creates a single point of control and failure that the requirements forbid. True P2P systems use a Distributed Hash Table or similar peer-discovery mechanism so participants find each other without any central coordinator — the same idea behind BitTorrent’s trackerless mode and IPFS. Each peer is both supplier and consumer, exactly the property the SEBook describes for P2P architectures.
Difficulty:Basic
You are building a walkie-talkie style voice app for outdoor crews — a hiker holds the talk button, speaks for a few seconds, and any teammate within range hears the audio in real time. The audio must feel immediate, and a brief audio gap is far less disruptive than a hesitation in the middle of a sentence. Which transport protocol should carry the voice audio?
Losing a tiny slice of live audio sounds like a brief crackle. Waiting to retransmit it produces a
noticeable hesitation right when the speaker is mid-sentence — far more disruptive than the original gap.
Ordering an old audio packet correctly is not useful after its playback moment has passed. Real-time
voice prefers timely packets over late perfect ones.
Voice audio still uses transport-layer protocols. Real-time voice typically rides on UDP (often via
RTP or WebRTC); it does not skip the transport layer.
Correct Answer: UDP
Explanation
UDP is correct because in real-time voice, a re-sent packet arrives too late to be useful. TCP’s retransmission would stall the audio waiting for a packet describing a moment the listener has already passed — producing audible jitter exactly when the speaker is mid-word. A dropped UDP packet creates a sub-millisecond gap that is largely imperceptible. This is why every real-time voice product (FaceTime audio, Discord voice, push-to-talk apps) uses UDP, often via RTP or WebRTC.
Difficulty:Basic
A smart-home product ships a phone app that refreshes every 5 seconds to show the current state of the user’s connected devices — lights on/off, thermostat temperature, door-lock status. The phone app sends a request to the company’s central hub server, which responds with the latest readings collected from devices in the home. Which architecture pattern is this?
Sending and receiving data is not what defines an architecture — every networked node does both. The
relevant question is whether peers coordinate directly or always through a central service.
Polling is just a way of using a client-server connection, not a separate architecture. Repeating
a request every 5 seconds doesn’t make the design hybrid by itself.
In the scenario, the smart devices report to the company’s central hub, not directly to the phone.
The phone always reaches the server, so direct device-to-phone P2P isn’t happening here.
Correct Answer: Client-server
Explanation
Client-server is correct — polling is just a frequency choice within client-server. The phone (client) initiates each request; the central hub (server) responds — the textbook definition of client-server. The 5-second interval changes only how often requests happen, not who initiates or where the authoritative state lives. The same shape applies to most dashboard-style apps that poll a backend for the latest data.
Difficulty:Intermediate
For which of the following would TCP be the better choice over UDP? (Select all that apply)
A payment submission must be recorded once and exactly once. Transport-level loss or reordering
would either drop the charge or duplicate it — neither acceptable at a payment boundary.
Live broadcast frames are only useful while they’re current. A re-sent frame describing a moment
of the match that has already passed is no longer worth waiting for, and the wait itself would
freeze the stream right when the action is happening.
Email content must arrive complete and uncorrupted. A missing or reordered byte in the message
body or the PDF attachment would render the text as gibberish or break the attached file when the
recipient tries to open it.
A software update must arrive byte-for-byte intact — a single corrupted byte can break installation
or fail the package’s signature verification.
Correct Answers: the payment submission, the email with its PDF attachment, and the software update download
Explanation
Payments, emails, and software downloads all require byte-perfect, ordered delivery, which is exactly what TCP provides: a missing or duplicated payment would lose money or charge a customer twice; a corrupted email body or attachment would render text as gibberish or break the file the recipient tries to open; and a single corrupted byte can break a software install or fail signature verification. The live sports broadcast is the UDP case: a re-sent frame from a few seconds ago is useless to a viewer watching the action live, the same trade-off the SEBook describes for any live media stream.