Code Smells & Refactoring — A Music Streaming App
Identify smells in real Python code and apply named refactorings safely. You do the first refactoring by hand to anchor the safety dance; the rest you drive with Monaco's tool support so you can spend your time thinking instead of typing. Tests stay green throughout. Live UML class diagrams make the structural change visible. Built around a music streaming codebase you incrementally clean up over ten steps.
The Cost of Duplication You Didn't Notice
Welcome — what this tutorial trains
You’re a CS2 student who already knows Python and pytest. You haven’t yet learned the discipline of changing existing code without breaking it. That discipline has a name — refactoring — and a lot of structure to it. Over the next ten steps you’ll learn:
- How to recognize a handful of high-impact code smells in real Python.
- How to apply named refactorings that fix each smell safely.
- How tests give you the safety net to change structure without changing behavior.
- How to judge when a refactoring is worth doing, and when it isn’t.
The codebase grows over the tutorial: you start with three small functions and end with a small music streaming app. Most of the typing is done by Monaco’s refactoring tools — you’ll select code, pick a refactoring, and judge the diff. The thinking is yours.
Prerequisite: Testing Foundations — pytest discovery, assert, and @pytest.mark.parametrize. If those feel new, do that one first.
Step 1 — A Bug, in Triplicate
Learning objective. After this step you will be able to (a) define refactoring as behavior-preserving structural change, and (b) explain operationally why duplicated code multiplies the cost of every future bug.
Open royalty.py. The RoyaltyCalculator class has three methods that calculate the creator’s share of streaming royalties for three different track types — songs, podcasts, audiobooks. They all use the same formula: plays × rate × 0.7 (the platform takes 30%, creators get 70%).
class RoyaltyCalculator:
    def royalty_song(self, plays: int, rate: float) -> float:
        return plays * rate * 0.7

    def royalty_podcast(self, plays: int, rate: float) -> float:
        return plays + rate * 0.7  # ← bug

    def royalty_audiobook(self, plays: int, rate: float) -> float:
        return plays + rate * 0.7  # ← bug
Two of the three have the same bug — + where there should be *. The bug ships unnoticed because the test suite only covers royalty_song. (You can already see the class in the UML diagram in the bottom-left — three methods, one box.)
Your task
- Run the existing test (test_royalty.py). It passes — royalty_song is correct.
- Extend the parametrize table in test_royalty.py to cover royalty_podcast and royalty_audiobook with at least two (plays, rate) cases each. Use the same formula plays * rate * 0.7 to compute the expected values.
- Run again. Two tests will turn red.
- Fix the bug in royalty.py. Notice that you have to fix it in two separate places because the logic is duplicated.
You will not yet refactor the duplication away. That waits until Step 4. The point of this step is to feel the cost: a single bug, fixed in N places, where N is the number of duplicates.
Refactoring, defined
Throughout this tutorial we use Martin Fowler’s definition (Refactoring, 2018):
Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.
The two halves are equally important. Structural change without behavior preservation is a rewrite; behavior preservation without structural change is unnecessary churn. A refactoring is both at once.
Not every triplet is a smell
Visual similarity isn’t the diagnostic — bug-coupling is. Look at this counter-example:
def round_currency_usd(amount: float) -> float:
    return round(amount, 2)  # cents: 2 decimal places

def round_currency_jpy(amount: float) -> float:
    return round(amount, 0)  # yen: no fractional unit

def round_currency_kwd(amount: float) -> float:
    return round(amount, 3)  # Kuwaiti dinar: 3 decimal places
Three near-identical functions, but each captures a real domain rule (ISO 4217 minor-unit precision per currency). Consolidating these into one function with a precision parameter would not remove a bug coupling — there are no shared bugs to fix in three places, because each precision is independent. The visual similarity is a coincidence of the API shape, not duplicated knowledge.
The rule of thumb: if changing one of the functions would also force you to change the others (in lockstep), that’s duplication. If they evolve independently, leave them alone. Step 4 returns to this trade-off — for now, hold the distinction in mind.
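To make the contrast concrete, here is what consolidating the three rounding functions would look like — a sketch for illustration, not a recommendation. Note that nothing improves: the three precisions still live in three places (now table rows instead of function bodies), and no shared bug gets fixed once instead of three times.

```python
# Sketch only — consolidating the currency rounders buys nothing, because
# the per-currency precisions are independent domain facts, not duplication.
MINOR_UNITS = {"USD": 2, "JPY": 0, "KWD": 3}  # ISO 4217 minor units

def round_currency(amount: float, currency: str) -> float:
    # The three rules still live in three places — they just moved into a dict.
    return round(amount, MINOR_UNITS[currency])
```

Compare: a change to JPY rounding still touches exactly one row, just as it touched exactly one function before. No lockstep editing was ever required, so no duplication was ever present.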
"""Streaming royalty calculations.
The platform takes 30% commission; creators get 70% of plays * rate.
"""
class RoyaltyCalculator:
"""Computes creator royalty payouts per track type."""
def royalty_song(self, plays: int, rate: float) -> float:
return plays * rate * 0.7
def royalty_podcast(self, plays: int, rate: float) -> float:
return plays + rate * 0.7
def royalty_audiobook(self, plays: int, rate: float) -> float:
return plays + rate * 0.7
"""Monthly creator-payout aggregator — uses RoyaltyCalculator across all track types.
This is the *production caller* that exercises the three royalty methods.
The bug in royalty_podcast / royalty_audiobook propagates through here:
MonthlyPayouts.total_creator_earnings will return wildly wrong totals
until the underlying calculator is fixed.
"""
from typing import List, Tuple
from royalty import RoyaltyCalculator
class MonthlyPayouts:
"""Aggregates monthly creator earnings across songs, podcasts, audiobooks."""
def __init__(self, calculator: RoyaltyCalculator) -> None:
self.calculator: RoyaltyCalculator = calculator
def total_creator_earnings(
self,
song_plays: List[Tuple[int, float]],
podcast_plays: List[Tuple[int, float]],
audiobook_plays: List[Tuple[int, float]],
) -> float:
"""Sum royalties across all three track types for one month."""
total: float = 0.0
for plays, rate in song_plays:
total += self.calculator.royalty_song(plays, rate)
for plays, rate in podcast_plays:
total += self.calculator.royalty_podcast(plays, rate)
for plays, rate in audiobook_plays:
total += self.calculator.royalty_audiobook(plays, rate)
return total
"""Tests for streaming royalty calculations and the MonthlyPayouts caller."""
import pytest
from royalty import RoyaltyCalculator
from payouts import MonthlyPayouts
@pytest.fixture
def calc() -> RoyaltyCalculator:
return RoyaltyCalculator()
@pytest.mark.parametrize("plays,rate,expected", [
(100, 0.01, 0.7),
(1000, 0.005, 3.5),
])
def test_royalty_song(calc: RoyaltyCalculator, plays: int, rate: float, expected: float) -> None:
assert calc.royalty_song(plays, rate) == pytest.approx(expected)
# TODO: extend the parametrize table to cover royalty_podcast and
# royalty_audiobook with at least two cases each. Use the formula
# plays * rate * 0.7 to compute the expected values.
def test_monthly_payouts_sums_across_track_types(calc: RoyaltyCalculator) -> None:
"""Integration test: the bug in any royalty_* method also breaks this aggregate."""
payouts: MonthlyPayouts = MonthlyPayouts(calc)
total: float = payouts.total_creator_earnings(
song_plays=[(100, 0.01), (1000, 0.005)], # 0.7 + 3.5 = 4.2
podcast_plays=[(100, 0.02)], # 1.4 = 1.4
audiobook_plays=[(200, 0.02)], # 2.8 = 2.8
)
assert total == pytest.approx(8.4)
if __name__ == "__main__":
import sys
pytest.main([sys.argv[0], "-v"])
Step 1 — The Cost of Duplication
Min. score: 70%
1. Retrieval. Which operator was wrong in two of the three royalty functions, and what should it have been?
The docstring stated the formula as plays × rate × 0.7. Two of the three functions had plays + rate * 0.7, which silently returns garbage values for any non-trivial input. Naming the bug operator from memory anchors the smell — when you next see “three near-identical functions,” the very next thought should be “what bug might be hiding in just two of them?”
2. Which of the following best matches Fowler’s definition of refactoring?
Refactoring has two halves: structural change AND behavior preservation. Without the second half, it’s a rewrite. Without the first, it’s churn. Both halves are non-negotiable.
3. You discover the same bug in three near-identical copies of a function. Which order do you apply?
Fix-then-refactor. Fix the bug in each duplicate first, with tests passing for each location. Only then refactor to consolidate — the consolidated helper now contains the corrected formula. This is one of the most useful sequencing rules in the refactoring toolkit.
4. Three near-identical functions, one bug introduced in two of them. How many places did you change to fix it?
The cost of duplication is empirical: it equals the number of duplicates that contain the same bug. Step 4 will introduce Extract Function to consolidate three places into one, so the next bug is found and fixed exactly once.
Long Method → Extract Function (by hand)
Long Method, the smell
Learning objective. After this step you will be able to (a) recognize a Long Method by its sub-goal structure, (b) apply Extract Function while keeping tests green, and (c) explain the safety dance — the test discipline that makes refactoring safe.
Open player.py. The class Streaming has a method process_play_event(user, track) that is 35 lines long and does five different things:
- Verify the user’s subscription
- Check that the track is available in the user’s region
- Compute royalty for the play
- Update the play count
- Append to the user’s history
The comments label these sub-goals — that’s a hint that the method is doing too much. A coherent method does one thing and is named for it. A method that does five things needs five names, which means five separate methods.
The safety dance
Refactoring without tests is a coin flip. Refactoring with tests is reliable. The discipline:
| Step | Action |
|---|---|
| 1 | Run the tests. They should pass. Now you have a baseline. |
| 2 | Make one structural change. Do not change behavior. |
| 3 | Run the tests again. They must still pass. If they fail, the change broke something — undo and try again. |
| 4 | Repeat from (2) until the structure is clean. |
The rhythm is green → change → green → change → green. You are never more than one undo away from a known-good state.
Predict before reading
The first sub-goal of process_play_event is the subscription check (lines tagged # 1.). Before you read the worked example below, write down the function name you would choose for that block. One word or short phrase. Hold your choice in mind. After you see the worked example, compare to the name we chose — and ask yourself whether yours is clearer, equivalent, or worse.
Naming is the load-bearing micro-decision in every Extract Function. Generating a name yourself first — even if you change it after — anchors the schema better than reading our name first.
Worked example — extracting the first sub-goal
Look at the first comment block in process_play_event:
# 1. Verify the user's subscription
if user.subscription_tier == "free":
    if user.daily_plays >= 5:
        raise PermissionError("Free tier daily limit reached")
elif user.subscription_tier == "premium":
    pass  # unlimited
else:
    raise ValueError(f"Unknown tier: {user.subscription_tier}")
Eight lines doing one thing. That’s a perfect Extract Function candidate.
The mechanics, narrated.
- Cut the block of lines.
- Write a new helper method on the Streaming class:

      def _check_subscription(self, user: User) -> None:
          """Raise an exception if the user is not allowed to play."""
          if user.subscription_tier == "free":
              if user.daily_plays >= 5:
                  raise PermissionError("Free tier daily limit reached")
          elif user.subscription_tier == "premium":
              pass  # unlimited
          else:
              raise ValueError(f"Unknown tier: {user.subscription_tier}")

- Replace the cut block in process_play_event with a call: self._check_subscription(user)
- Run the tests. Green.
Three things to notice:
- The helper has a type-annotated signature. user: User and the -> None return tell future readers what it consumes and what it produces. The annotation is part of the refactoring, not optional polish.
- The helper has a leading underscore in its name. Convention: _name means “internal — not part of the public API.” process_play_event is public; helpers it depends on are internal.
- The helper’s name describes what it does, not how. _check_subscription is a coherent sub-goal; _block_one would not be.
Your task
Two more sub-goals are still inline in process_play_event. Extract them as methods on Streaming. The file already has the worked-example extraction applied; you continue from there.
- Sub-goal 2: geo-restriction. Extract _check_geo_restriction(self, user: User, track: Track) -> None (raises on failure).
- Sub-goal 3: royalty calculation. Extract _compute_royalty(self, track: Track) -> float returning the royalty amount.
Run the tests after each extraction. They should stay green throughout. Watch the UML diagram in the bottom-left grow with each extraction — Streaming gains methods, process_play_event’s body shrinks.
Why we do this manually one time. Steps 4–8 will use Monaco’s tool to do extractions for you with two clicks. Doing one slow extraction by hand right now anchors the mechanics in your fingers. When the tool acts later, you’ll recognize what it did because you’ll have done it yourself.
"""The streaming player — orchestrates a single play event."""
from dataclasses import dataclass, field
from typing import List
@dataclass
class User:
user_id: str
subscription_tier: str # "free" or "premium"
region: str
daily_plays: int = 0
history: List[str] = field(default_factory=list)
@dataclass
class Track:
track_id: str
title: str
duration_sec: int
available_regions: List[str]
rate: float
play_count: int = 0
class Streaming:
"""Orchestrates streaming play events for users and tracks."""
def _check_subscription(self, user: User) -> None:
"""Raise if the user is not allowed to play right now."""
if user.subscription_tier == "free":
if user.daily_plays >= 5:
raise PermissionError("Free tier daily limit reached")
elif user.subscription_tier == "premium":
pass # unlimited
else:
raise ValueError(f"Unknown tier: {user.subscription_tier}")
def process_play_event(self, user: User, track: Track) -> float:
"""Run one play: returns the royalty paid to the creator."""
# 1. Verify the user's subscription
self._check_subscription(user)
# 2. Geo-restriction check
if track.available_regions and user.region not in track.available_regions:
raise PermissionError(f"Track not available in {user.region}")
# 3. Compute royalty (creator gets 70%)
royalty: float = 1 * track.rate * 0.7
# 4. Update play count
track.play_count += 1
user.daily_plays += 1
# 5. Append to user's history
user.history.append(track.track_id)
return royalty
"""A playback session runs multiple plays for one user.
Production caller for Streaming.process_play_event. Behavior must be
preserved across the Long-Method extraction in Step 2 — the body of
process_play_event changes shape internally, but its contract (royalty
per play, raises on subscription/geo violation) stays identical.
"""
from typing import List
from player import Streaming, User, Track
class PlaybackSession:
"""Plays a sequence of tracks and accumulates the total royalty paid."""
def __init__(self, streaming: Streaming, user: User) -> None:
self.streaming: Streaming = streaming
self.user: User = user
self.total_royalty: float = 0.0
def run_session(self, tracks: List[Track]) -> float:
"""Play each track in order; return cumulative royalty."""
for track in tracks:
self.total_royalty += self.streaming.process_play_event(self.user, track)
return self.total_royalty
"""Behavior tests — lock the contract before refactoring."""
import pytest
from typing import List
from player import User, Track, Streaming
from playback_session import PlaybackSession
@pytest.fixture
def streaming() -> Streaming:
return Streaming()
@pytest.fixture
def free_user() -> User:
return User(user_id="u1", subscription_tier="free", region="US", daily_plays=2)
@pytest.fixture
def premium_user() -> User:
return User(user_id="u2", subscription_tier="premium", region="EU")
@pytest.fixture
def track_us_only() -> Track:
return Track(track_id="t1", title="Song A", duration_sec=180,
available_regions=["US"], rate=0.01)
@pytest.fixture
def track_global() -> Track:
return Track(track_id="t2", title="Song B", duration_sec=240,
available_regions=[], rate=0.02)
def test_premium_user_global_track_pays_royalty(streaming: Streaming,
premium_user: User,
track_global: Track) -> None:
# Premium has no daily limit; global track has no geo restriction.
royalty: float = streaming.process_play_event(premium_user, track_global)
assert royalty == pytest.approx(0.014)
assert track_global.play_count == 1
assert premium_user.daily_plays == 1
assert premium_user.history == ["t2"]
def test_free_user_under_limit_pays_royalty(streaming: Streaming,
free_user: User,
track_us_only: Track) -> None:
royalty: float = streaming.process_play_event(free_user, track_us_only)
assert royalty == pytest.approx(0.007)
assert track_us_only.play_count == 1
assert free_user.daily_plays == 3
def test_free_user_at_daily_limit_blocked(streaming: Streaming,
free_user: User,
track_us_only: Track) -> None:
free_user.daily_plays = 5
with pytest.raises(PermissionError, match="daily limit"):
streaming.process_play_event(free_user, track_us_only)
def test_geo_restriction_blocks_play(streaming: Streaming,
premium_user: User,
track_us_only: Track) -> None:
with pytest.raises(PermissionError, match="not available"):
streaming.process_play_event(premium_user, track_us_only)
# ---- Caller test: PlaybackSession exercises process_play_event ----
def test_playback_session_accumulates_royalty(
streaming: Streaming, premium_user: User, track_global: Track
) -> None:
session: PlaybackSession = PlaybackSession(streaming, premium_user)
tracks: List[Track] = [track_global, track_global, track_global]
total: float = session.run_session(tracks)
# 3 plays × 0.014 royalty per play
assert total == pytest.approx(0.014 * 3)
assert track_global.play_count == 3
assert premium_user.daily_plays == 3
if __name__ == "__main__":
import sys
pytest.main([sys.argv[0], "-v"])
Step 2 — Long Method
Min. score: 70%
1. Retrieval. What’s the minimum test discipline a refactoring requires?
Green → change → green is the rhythm. Each structural change happens between two green test runs; if the second run fails, you know exactly which change to revert.
2. Which of the following is the best sign that a function is too long?
Comments-as-section-headers are the diagnostic. When a reader needs # 1. Validate, # 2. Compute, # 3. Format to navigate a function, the function is doing those three things and should be three functions. The comment names become the new function names.
3. You add a comment to a long method explaining what each section does. Have you fixed the smell?
Comments-as-deodorant. Section-header comments are a sign the code wants to be split. The fix isn’t tighter comments; it’s named functions whose call sites read like an outline. After Extract Function, the comment in the call site is the helper’s name.
4. After extracting one sub-goal into a helper, you run the tests. They pass. What do you do next?
Trust the tests, then move on. The whole point of behavior preservation tests is that they make manual inspection unnecessary. Each green run is a small commit point. Adding new tests for the helper is optional — the existing tests cover the public behavior, which is what matters.
Boolean Anti-Patterns — and the trap that costs you everything
Three boolean smells
Learning objective. After this step you will be able to (a) simplify three common boolean anti-patterns, (b) recognize the IfsMerged trap — a dangerous “simplification” that silently drops a branch — and (c) use a @pytest.mark.parametrize truth table to detect behavior change in boolean code.
Open gates.py. The PlaybackGates class has three small methods that guard playback. All three have boolean smells. Two are safe to simplify; one looks safe but isn’t.
Sub-task A — boolean return
def is_trial_expired(self, free_trial_days_left: int) -> bool:
    if free_trial_days_left <= 0:
        return True
    else:
        return False
The body is if condition: return True else: return False. The condition is already a boolean — wrapping it in an if/else does nothing. Simplify to:
def is_trial_expired(self, free_trial_days_left: int) -> bool:
    return free_trial_days_left <= 0
This is the most common conditional anti-pattern in novice Python — about 30% of conditional anti-patterns in one large empirical study (Naude et al., 2024).
Sub-task B — confusing else
def stream_quality(self, is_premium: bool) -> str:
    if is_premium:
        return "high"
    if not is_premium:
        return "low"
    return "low"  # unreachable, but the type checker insists
Two ifs with mutually-exclusive conditions and a “just in case” return at the end. The intent is if/else. Simplify to:
def stream_quality(self, is_premium: bool) -> str:
    if is_premium:
        return "high"
    else:
        return "low"
Sub-task C — the IfsMerged trap
Here’s the method:
def play_decision(self, is_logged_in: bool, is_offline: bool) -> str:
    if is_logged_in:
        if not is_offline:
            return "stream"
        else:
            return "play_cached"
    return "error_login_required"
A reasonable-looking simplification:
def play_decision(self, is_logged_in: bool, is_offline: bool) -> str:
    if is_logged_in and not is_offline:
        return "stream"
    return "error_login_required"
It compiles. The happy-path test passes. Looks clean. It’s wrong.
The original returned "play_cached" for (is_logged_in=True, is_offline=True). The simplified version returns "error_login_required" for the same input. We’ve silently dropped a branch.
Why a happy-path test misses this
A single test like assert gates.play_decision(True, False) == "stream" exercises one of four possible truth-value combinations of (is_logged_in, is_offline). Three remain unchecked, and the bug lives in one of those three.
The fix is a truth table — a parametrize over all four combinations:
@pytest.mark.parametrize("is_logged_in,is_offline,expected", [
    (False, False, "error_login_required"),
    (False, True, "error_login_required"),
    (True, False, "stream"),
    (True, True, "play_cached"),
])
def test_play_decision(gates: PlaybackGates, is_logged_in: bool, is_offline: bool, expected: str) -> None:
    assert gates.play_decision(is_logged_in, is_offline) == expected
Four rows. Two columns of input. One expected output per row. Any behavior change in the function shows up somewhere in the table.
Mini warmup — parametrize in 5 lines. If @pytest.mark.parametrize is unfamiliar, the syntax is ("name1,name2", [(value1a, value2a), (value1b, value2b), ...]). Each tuple becomes one test case; the test function receives the tuple’s values as named parameters. That’s it.
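A complete, runnable instance of that syntax. The function under test here is a toy stand-in (not part of the tutorial’s codebase), reusing the familiar royalty formula:

```python
import pytest

def creator_share(plays: int, rate: float) -> float:
    """Toy stand-in: 70% of plays * rate, as in the royalty formula."""
    return plays * rate * 0.7

# Two tuples -> two generated test cases, each receiving named parameters.
@pytest.mark.parametrize("plays,rate,expected", [
    (100, 0.01, 0.7),
    (1000, 0.005, 3.5),
])
def test_creator_share(plays: int, rate: float, expected: float) -> None:
    assert creator_share(plays, rate) == pytest.approx(expected)
```

Running pytest on this file reports two passing tests, one per tuple.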
Your task
- Sub-task A: simplify is_trial_expired.
- Sub-task B: rewrite stream_quality as a clean if/else.
- Sub-task C — fill the truth table FIRST, then refactor. Open truth_table.md. Two of the four rows are pre-filled. Fill the other two by reading the original nested play_decision (in gates.py). Then circle (with a ← BREAKS comment) the row where the naive if is_logged_in and not is_offline: return "stream" simplification would diverge from the original. Only after the table is complete, write the correct simplified version of play_decision and extend the parametrize table in test_play_decision to cover all four rows.
The lesson is not “the original was unsimplifiable” — it can be simplified, but only if the simplification preserves the four-row truth table. The elegant version that does:
def play_decision(self, is_logged_in: bool, is_offline: bool) -> str:
    if not is_logged_in:
        return "error_login_required"
    return "play_cached" if is_offline else "stream"
Externalizing the truth table on paper before touching the function is the load-bearing move: it converts a 6-element working-memory task (4 truth combinations × 2 versions to compare) into a written artifact you can stare at.
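You can also mechanize the comparison. This sketch (standalone functions mirroring the methods in gates.py) exhaustively checks every input combination and reports where the naive collapse diverges:

```python
from itertools import product

def original(is_logged_in: bool, is_offline: bool) -> str:
    # The nested version from gates.py
    if is_logged_in:
        if not is_offline:
            return "stream"
        else:
            return "play_cached"
    return "error_login_required"

def naive(is_logged_in: bool, is_offline: bool) -> str:
    # The trap: the (True, True) branch is silently dropped
    if is_logged_in and not is_offline:
        return "stream"
    return "error_login_required"

# Exhaustive check over all 2**2 input combinations
diverging = [combo for combo in product([False, True], repeat=2)
             if original(*combo) != naive(*combo)]
print(diverging)  # → [(True, True)]
```

This is the programmatic twin of the written truth table: same four rows, same single divergent entry.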
"""Boolean playback gates — one class, three method-level smells."""
class PlaybackGates:
"""Encapsulates the boolean decisions a player makes per playback event."""
def is_trial_expired(self, free_trial_days_left: int) -> bool:
# Sub-task A: simplify
if free_trial_days_left <= 0:
return True
else:
return False
def stream_quality(self, is_premium: bool) -> str:
# Sub-task B: rewrite as if/else
if is_premium:
return "high"
if not is_premium:
return "low"
return "low"
def play_decision(self, is_logged_in: bool, is_offline: bool) -> str:
# Sub-task C: don't simplify naively — fill the truth table first.
if is_logged_in:
if not is_offline:
return "stream"
else:
return "play_cached"
return "error_login_required"
"""Routes play decisions through all three boolean gates.
Production caller for PlaybackGates. The gate methods get simplified
internally during this step (boolean-return collapse, if/else cleanup,
IfsMerged-safe nested-if rewrite) but their *behavior* must stay
identical — this caller's tests prove it.
"""
from dataclasses import dataclass
from gates import PlaybackGates
@dataclass
class UserState:
free_trial_days_left: int
is_premium: bool
is_logged_in: bool
is_offline: bool
class PlayController:
"""Decides what action to take for a user about to play."""
def __init__(self, gates: PlaybackGates) -> None:
self.gates: PlaybackGates = gates
def decide(self, state: UserState) -> str:
"""Return one of: trial_expired, error_login_required, play_cached, stream:high, stream:low."""
if self.gates.is_trial_expired(state.free_trial_days_left):
return "trial_expired"
decision: str = self.gates.play_decision(state.is_logged_in, state.is_offline)
if decision == "stream":
quality: str = self.gates.stream_quality(state.is_premium)
return f"stream:{quality}"
return decision
"""Truth-table-driven tests for boolean gates and the PlayController caller."""
import pytest
from gates import PlaybackGates
from play_controller import PlayController, UserState
@pytest.fixture
def gates() -> PlaybackGates:
return PlaybackGates()
@pytest.mark.parametrize("days,expected", [
(5, False),
(1, False),
(0, True),
(-1, True),
])
def test_is_trial_expired(gates: PlaybackGates, days: int, expected: bool) -> None:
assert gates.is_trial_expired(days) == expected
@pytest.mark.parametrize("is_premium,expected", [
(True, "high"),
(False, "low"),
])
def test_stream_quality(gates: PlaybackGates, is_premium: bool, expected: str) -> None:
assert gates.stream_quality(is_premium) == expected
# TODO: extend this table to cover ALL FOUR truth combinations of
# (is_logged_in, is_offline). The current table is happy-path only and
# will not catch the IfsMerged trap.
@pytest.mark.parametrize("is_logged_in,is_offline,expected", [
(True, False, "stream"),
])
def test_play_decision(gates: PlaybackGates, is_logged_in: bool, is_offline: bool, expected: str) -> None:
assert gates.play_decision(is_logged_in, is_offline) == expected
# ---- Caller tests: PlayController routes through all three gates ----
@pytest.mark.parametrize("state,expected", [
(UserState(free_trial_days_left=0, is_premium=True, is_logged_in=True, is_offline=False), "trial_expired"),
(UserState(free_trial_days_left=5, is_premium=False, is_logged_in=False, is_offline=False), "error_login_required"),
(UserState(free_trial_days_left=5, is_premium=True, is_logged_in=True, is_offline=True), "play_cached"),
(UserState(free_trial_days_left=5, is_premium=True, is_logged_in=True, is_offline=False), "stream:high"),
(UserState(free_trial_days_left=5, is_premium=False, is_logged_in=True, is_offline=False), "stream:low"),
])
def test_play_controller_decide(gates: PlaybackGates, state: UserState, expected: str) -> None:
controller: PlayController = PlayController(gates)
assert controller.decide(state) == expected
if __name__ == "__main__":
import sys
pytest.main([sys.argv[0], "-v"])
# Truth table for `play_decision`
Read the **original nested** `play_decision` in `gates.py`. Fill the two
remaining rows by simulating what the original would return for each
combination. Then put a `← BREAKS` comment on the row where the naive
`if is_logged_in and not is_offline: return "stream"` collapse diverges.
| is_logged_in | is_offline | original returns | naive `A and not B` returns |
|--------------|------------|-----------------------|------------------------------|
| False | False | error_login_required | error_login_required |
| False | True | error_login_required | error_login_required |
| True | False | TODO | TODO |
| True | True | TODO | TODO |
Step 3 — Boolean Anti-Patterns
Min. score: 70%
1. Retrieval / prediction. In a truth table over (is_logged_in, is_offline), which row does the naive if is_logged_in and not is_offline collapse get wrong?
(True, True) is the IfsMerged row. Original code returns "play_cached"; the naive if A and not B collapse returns "error_login_required". The bug is invisible to any test that doesn’t include this exact input. Truth-table tests are not optional for boolean logic — they’re load-bearing.
2. Which of the following is the simplest correct simplification of if cond: return True else: return False?
The condition IS the answer. Comparisons (<=, ==, in, etc.) already return booleans. Wrapping them in if/return True/return False adds no information.
3. You see this code:
if is_active:
    process()
if not is_active:
    skip()
Confusing else is the official name (cf. anti-patterns catalogues). Two ifs with mutually-exclusive conditions are an if/else in disguise. The fix removes the duplicate condition evaluation and makes the structure more change-resistant.
4. Which of these is the safest way to verify a boolean refactoring?
For boolean code, a truth-table parametrize is the only sound oracle. Two-input booleans need 4 rows; three-input need 8; n-input need 2^n. If the table has fewer rows than the input space, the verification is incomplete.
Duplicated Code → Extract Function (with the tool)
The tool does the typing — you do the thinking
The tool will do the typing. You are doing three things: choosing the boundary, choosing the name, and judging whether the result is better. Those decisions are yours.
Learning objective. After this step you will be able to (a) recognize duplicated code where the variation between copies is captured by a single parameter, (b) use Monaco’s Refactor: Extract Function/Method action, and (c) decide when to extract using the Rule of Three.
Open filters.py. The MusicLibrary class has two filter methods:
class MusicLibrary:
    def __init__(self, tracks: List[Dict]) -> None:
        self.tracks: List[Dict] = tracks

    def apply_genre_filter(self, genre: str) -> List[Dict]:
        result: List[Dict] = [t for t in self.tracks if t["genre"] == genre]
        result.sort(key=lambda t: t["title"])
        return result[:50]

    def apply_artist_filter(self, artist: str) -> List[Dict]:
        result: List[Dict] = [t for t in self.tracks if t["artist"] == artist]
        result.sort(key=lambda t: t["title"])
        return result[:50]
The structure is identical — filter, sort, paginate. The variation is exactly one thing: the predicate that decides which tracks to keep. That variation is a candidate parameter.
Initial state
(One class with two near-duplicate methods. After the refactor a private _apply_filter helper joins them, and each public method becomes a one-line delegation.)
Draft the Issue line FIRST
Before you click anything, open memo.md and write a one-sentence Issue line naming the smell in your own words. Doing this before tool invocation calibrates whether the refactoring you’re about to apply matches the smell you diagnosed. If you can’t articulate the smell in a sentence, the tool will gladly produce a “clean” diff that doesn’t actually fix anything.
💡 Expert’s note. When you see two near-duplicates, the question is what varies between them. If the variation is a constant, it’s a parameter. If the variation is a piece of behavior — like a predicate — it’s a callable parameter. Naming the variation is the load-bearing decision.
How the tool works
- Select the lines you want to extract. (Highlight them in the editor.)
- Right-click to open the context menu. You’ll see a group of Refactor: actions:
  - Refactor: Rename Symbol…
  - Refactor: Extract Function/Method…
  - Refactor: Introduce Parameter Object…
  - Refactor: Move Method…
  - Refactor: Move Field…
- Click the action you want. A dialog appears asking for the new function’s name.
- Type the name. The tool shows you a diff preview of the change.
- Accept the diff if it looks right. Reject it if it doesn’t.
The tool handles the mechanics — cutting the lines, generating the function definition, writing the call site, updating any references it can find. It does not handle judgment. You decide which lines to extract, what to name them, and whether the post-state is better. Those are the load-bearing decisions; the typing is not.
Your task
- In `apply_genre_filter`, select the body (list comprehension, sort, slice).
- Invoke Refactor: Extract Function/Method. Name the new method `_apply_filter`. Accept the diff.
- The tool may have made the predicate a literal string. Edit the extracted method so that the predicate is a callable parameter with this signature:

  ```python
  from typing import Callable, Dict, List

  def _apply_filter(self, predicate: Callable[[Dict], bool]) -> List[Dict]:
      result: List[Dict] = [t for t in self.tracks if predicate(t)]
      result.sort(key=lambda t: t["title"])
      return result[:50]
  ```

  And rewrite `apply_genre_filter` to call it:

  ```python
  def apply_genre_filter(self, genre: str) -> List[Dict]:
      return self._apply_filter(lambda t: t["genre"] == genre)
  ```

- Replace the body of `apply_artist_filter` with a similar call:

  ```python
  def apply_artist_filter(self, artist: str) -> List[Dict]:
      return self._apply_filter(lambda t: t["artist"] == artist)
  ```

- Run the tests. They should still pass. The UML diagram now shows a third method `_apply_filter` on `MusicLibrary` — visible proof of the extraction.
The Memo template
From this step on, every refactoring gets a memo — a structured note that captures the design decision. The four fields:
| Field | What it captures |
|---|---|
| Issue | What smell is present in the original code? |
| Rationale | Why is this refactoring the right fix? |
| Invariant | What property of behavior is preserved? |
| Tests | Which tests confirm the invariant? |
Open memo.md. Two fields are pre-filled (Rationale, Invariant). You write the Issue and Tests fields. Don’t skip this — the memo is part of the deliverable for this step.
When NOT to extract — the Rule of Three
Two duplicates is the minimum for extraction. Some teams use the Rule of Three: don’t extract until you see the same pattern three times. The reason: the third occurrence reveals what’s truly common vs. what’s accidental similarity. Extracting on the second occurrence sometimes produces an abstraction that breaks when you find the third.
For this step, two is enough — the variation (predicate) is clearly the only difference. If you’re unsure, wait for the third instance.
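A hypothetical third occurrence shows why waiting can pay off. Suppose a later feature needs a "recent tracks" filter that sorts by year instead of title: the helper extracted from the first two copies would have baked in the wrong sort. The sketch below (standalone functions with illustrative names, not the tutorial's files) shows the parameter the first two occurrences never hinted at:

```python
from typing import Callable, Dict, List

def apply_filter(
    tracks: List[Dict],
    predicate: Callable[[Dict], bool],
    sort_key: Callable[[Dict], object] = lambda t: t["title"],
    reverse: bool = False,
) -> List[Dict]:
    """The helper after a third occurrence forced a sort_key parameter."""
    result = [t for t in tracks if predicate(t)]
    result.sort(key=sort_key, reverse=reverse)
    return result[:50]

tracks = [
    {"title": "B", "genre": "rock", "year": 2024},
    {"title": "A", "genre": "rock", "year": 2021},
]
# Occurrences 1 and 2 fit the original abstraction (sort by title):
rock = apply_filter(tracks, lambda t: t["genre"] == "rock")
# Occurrence 3 needs a different sort (newest first):
recent = apply_filter(tracks, lambda t: t["year"] >= 2021,
                      sort_key=lambda t: t["year"], reverse=True)
print([t["title"] for t in rock])    # ['A', 'B']
print([t["title"] for t in recent])  # ['B', 'A']
```

Had you extracted after two copies, the third would have forced you to reopen the helper anyway; after three, the abstraction's true parameters are visible.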
```python
"""Filters for the music library."""
from typing import Dict, List


class MusicLibrary:
    """An in-memory music library with genre and artist filters."""

    def __init__(self, tracks: List[Dict]) -> None:
        self.tracks: List[Dict] = tracks

    def apply_genre_filter(self, genre: str) -> List[Dict]:
        result: List[Dict] = [t for t in self.tracks if t["genre"] == genre]
        result.sort(key=lambda t: t["title"])
        return result[:50]

    def apply_artist_filter(self, artist: str) -> List[Dict]:
        result: List[Dict] = [t for t in self.tracks if t["artist"] == artist]
        result.sort(key=lambda t: t["title"])
        return result[:50]
```
```python
"""Dispatches user search queries to the right MusicLibrary filter.

Production caller for MusicLibrary's filter methods. The student
extracts a private `_apply_filter` helper inside MusicLibrary; the
PUBLIC methods (`apply_genre_filter`, `apply_artist_filter`) keep
their signatures, so this caller doesn't need to change. That's the
point of refactoring — internal structure changes, external contract
stays identical.
"""
from typing import Dict, List

from filters import MusicLibrary


class SearchHandler:
    """Routes an (attribute, value) query to the right filter method."""

    def __init__(self, library: MusicLibrary) -> None:
        self.library: MusicLibrary = library

    def search(self, attribute: str, value: str) -> List[Dict]:
        """Dispatch to the genre or artist filter based on attribute name."""
        if attribute == "genre":
            return self.library.apply_genre_filter(value)
        elif attribute == "artist":
            return self.library.apply_artist_filter(value)
        return []
```
```python
"""Behavior tests for the filters and the SearchHandler caller."""
import pytest
from typing import Dict, List

from filters import MusicLibrary
from search_handler import SearchHandler


@pytest.fixture
def library() -> MusicLibrary:
    tracks: List[Dict] = [
        {"title": "B", "artist": "Alice", "genre": "rock"},
        {"title": "A", "artist": "Alice", "genre": "jazz"},
        {"title": "C", "artist": "Bob", "genre": "rock"},
        {"title": "D", "artist": "Carol", "genre": "rock"},
    ]
    return MusicLibrary(tracks)


def test_genre_filter_returns_sorted_matches(library: MusicLibrary) -> None:
    result: List[Dict] = library.apply_genre_filter("rock")
    assert [t["title"] for t in result] == ["B", "C", "D"]


def test_artist_filter_returns_sorted_matches(library: MusicLibrary) -> None:
    result: List[Dict] = library.apply_artist_filter("Alice")
    assert [t["title"] for t in result] == ["A", "B"]


def test_genre_filter_paginates_at_50() -> None:
    tracks: List[Dict] = [
        {"title": f"T{i:03d}", "artist": "X", "genre": "rock"} for i in range(100)
    ]
    big_library = MusicLibrary(tracks)
    result: List[Dict] = big_library.apply_genre_filter("rock")
    assert len(result) == 50


# ---- Caller tests: SearchHandler dispatches both filter kinds ----

def test_search_handler_dispatches_to_genre(library: MusicLibrary) -> None:
    handler: SearchHandler = SearchHandler(library)
    result: List[Dict] = handler.search("genre", "rock")
    assert [t["title"] for t in result] == ["B", "C", "D"]


def test_search_handler_dispatches_to_artist(library: MusicLibrary) -> None:
    handler: SearchHandler = SearchHandler(library)
    result: List[Dict] = handler.search("artist", "Alice")
    assert [t["title"] for t in result] == ["A", "B"]


def test_search_handler_unknown_attribute_returns_empty(library: MusicLibrary) -> None:
    handler: SearchHandler = SearchHandler(library)
    assert handler.search("year", "2020") == []


if __name__ == "__main__":
    import sys

    pytest.main([sys.argv[0], "-v"])
```
# Refactoring memo — Step 4
## Issue
<!-- TODO: name the smell in one sentence. What's wrong with the original code? -->
## Rationale
The variation between `apply_genre_filter` and `apply_artist_filter` is the predicate
(what makes a track a "match"). Everything else — the list comprehension, the sort,
the pagination — is identical. Extracting the common structure into a helper that
accepts a callable predicate captures the duplication exactly, with no over-generalization.
## Invariant
For any library and any field/value pair, calling the original duplicated function
and the new factored-through-`_apply_filter` version produces identical results
(same list, same order, same length). External behavior is unchanged.
## Tests
<!-- TODO: which tests in test_filters.py confirm the invariant? List them by name. -->
Step 4 — Duplicated Code
Min. score: 70%
1. Retrieval / prediction. After Extract Function, which three regions of filters.py will have changed?
New helper, first delegator, second delegator. Each duplicate becomes a single-line delegating call. The test file should not change — if it did, you changed behavior, not just structure.
2. Spacing back to Step 1. Suppose Step 1’s royalty_song, royalty_podcast, royalty_audiobook (with the `+` bug present in two of them) had been refactored before the bug was discovered — extracted into a single _royalty(plays, rate) helper. What would have happened?
Fix-then-refactor. This is exactly why Step 1’s lesson is “fix first.” Extracting buggy duplicates into a helper consolidates the bug, which feels like progress but isn’t. The bug is still there — it’s just in a single, harder-to-spot location.
3. You see a piece of code that’s been duplicated once (two copies). Should you extract it?
Judgment beats rules. Sandi Metz: “Duplication is far cheaper than the wrong abstraction.” When unsure, wait for the third occurrence. When the variation is one obvious dimension, extract early.
4. What name does the tool assign to the extracted function if you don’t type one?
Naming is the load-bearing decision. The tool refuses to proceed without a name because choosing a name forces you to articulate what the helper is. That articulation is the heart of the refactoring.
Long Parameter List → Introduce Parameter Object (with the tool)
Eight parameters and a clump
Learning objective. After this step you will be able to (a) recognize a Long Parameter List and the data clump hiding inside it, (b) use Monaco’s Refactor: Introduce Parameter Object to consolidate the clump into a `@dataclass`, and (c) explain why type-annotated parameter objects beat dicts for this purpose.
Open track.py. The class TrackCatalog has an add_track method that takes eight parameters:
```python
class TrackCatalog:
    def add_track(
        self,
        title: str,
        artist: str,
        album: str,
        duration_sec: int,
        genre: str,
        release_year: int,
        bpm: int,
        isrc: str,
    ) -> Dict:
        ...
```
Eight is a lot. But the smell isn’t just the count — it’s that four of the eight always travel together: artist, album, genre, release_year describe the album a track belongs to. Every call site that passes one of them passes all four; every call site that mutates one mutates them as a unit.
That’s a data clump. The fix is Introduce Parameter Object — a small class (here a @dataclass) that names the clump and makes the relationship explicit.
Initial state
(One class, one method with eight flat parameters. After the refactor, four of these will be replaced by an AlbumInfo parameter object — and you’ll see a new AlbumInfo box appear in the live UML next to TrackCatalog.)
A 30-second @dataclass primer
Python’s @dataclass decorator generates __init__, __repr__, and __eq__ for you from a list of typed field declarations. It looks like this:
```python
from dataclasses import dataclass


@dataclass
class Point:
    x: int
    y: int
```
That’s the entire class. Point(x=3, y=4) works. Point(3, 4) == Point(3, 4) returns True. No constructor body, no boilerplate. The tool will generate one for you in a moment — recognize it; don’t try to derive it.
Sketch AlbumInfo BEFORE invoking the tool
Take 60 seconds. In a comment at the top of track.py, write what you’d put in an AlbumInfo dataclass if you were writing it by hand: field names + Python types. Don’t peek at the tool’s output yet. Then invoke the tool and compare. The compare is the lesson — your sketch usually matches the tool’s output, which builds your trust that the tool is doing what you’d do, just faster.
How to drive the tool
- Place your cursor in `TrackCatalog.add_track`’s parameter list. Right-click → Refactor: Introduce Parameter Object…
- The tool asks for the class name — type `AlbumInfo`. Select `artist`, `album`, `genre`, `release_year`.
- Inspect the diff before accepting. The new `add_track` should accept `(self, title, album_info, duration_sec, bpm, isrc)` — five parameters (plus self), one of which is structured.
- After accepting, look at the live UML in the bottom-left. A new `AlbumInfo` box has appeared with the four clumped fields, and `TrackCatalog.add_track`’s signature is shorter.
Run the tests
After you accept the refactoring, run the tests. If they fail, the tool may have missed a call site somewhere — tools rewrite syntax they can see, and dynamic-feeling Python call patterns can slip through static analysis.
Don’t read ahead about which call. Run the tests, watch the failure, find the offending call site yourself, fix it. The closing reflection — “what did the tool do well? what did it not catch?” — is the load-bearing lesson of this step. Behavior preservation remains your responsibility even with a smart tool.
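For a sense of what a missed call site looks like when the tests run, here is an illustrative reconstruction. The `AlbumInfo` shape and the standalone `add_track` stand-in are assumptions for the sketch, not the tool's literal output:

```python
from dataclasses import dataclass

@dataclass
class AlbumInfo:  # hypothetical result of Introduce Parameter Object
    artist: str
    album: str
    genre: str
    release_year: int

def add_track(title: str, album_info: AlbumInfo, duration_sec: int,
              bpm: int, isrc: str) -> dict:
    # Standalone stand-in for TrackCatalog.add_track after the refactor.
    return {"title": title, "artist": album_info.artist, "isrc": isrc}

# A call site the tool failed to rewrite still passes eight flat arguments:
try:
    add_track("Lullaby", "Alice", "Bedtime", 180, "ambient", 2020, 60, "ISRC001")
except TypeError as exc:
    print(type(exc).__name__)  # the tests go red right here
```

The fix is mechanical once you see it: bundle the four clumped arguments into an `AlbumInfo` at the stale call site.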
Why a @dataclass, not a dict?
You could also pass {"artist": ..., "album": ..., "genre": ..., "release_year": ...} as a dict. That’s worse for three reasons:
| Property | `@dataclass` AlbumInfo | dict |
|---|---|---|
| Type-checked at definition | yes (mypy / IDE) | no |
| Auto-completion in editor | yes | partial |
| Typos caught early | yes (`album_info.titel` is a static error) | no (`d["titel"]` raises `KeyError` at runtime) |
The dict version “works” but defers every type error to runtime. The dataclass moves the same errors to definition time, which is where bugs are cheap to fix.
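The timing difference is easy to demonstrate. In this sketch (a toy `AlbumInfo`, not the one the tool generates), the attribute typo is flagged by mypy or the IDE as soon as you type it, while the dict typo only surfaces when the line executes:

```python
from dataclasses import dataclass

@dataclass
class AlbumInfo:
    artist: str
    album: str

info = AlbumInfo(artist="Alice", album="Bedtime")
# info.artst would fail, but because the field list is declared,
# mypy and the IDE flag the typo before the program ever runs.

d = {"artist": "Alice", "album": "Bedtime"}
try:
    d["artst"]  # nothing flags this until the line actually runs
except KeyError as exc:
    print("KeyError:", exc)
```

The dataclass also gives you `==` for free, so two `AlbumInfo` values with the same fields compare equal, which simplifies tests.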
```python
"""The track catalog: add a new track to the library."""
from typing import Dict, List


class TrackCatalog:
    """Stores and inserts tracks. Pretend the library list is a database."""

    def __init__(self) -> None:
        self.library: List[Dict] = []

    def add_track(
        self,
        title: str,
        artist: str,
        album: str,
        duration_sec: int,
        genre: str,
        release_year: int,
        bpm: int,
        isrc: str,
    ) -> Dict:
        """Insert a new track into the library."""
        record: Dict = {
            "title": title,
            "artist": artist,
            "album": album,
            "duration_sec": duration_sec,
            "genre": genre,
            "release_year": release_year,
            "bpm": bpm,
            "isrc": isrc,
        }
        self.library.append(record)
        return record
```
```python
"""Helpers for seeding default tracks."""
from track import TrackCatalog


def seed_library(catalog: TrackCatalog) -> None:
    """Populate a catalog with some default tracks."""
    catalog.add_track("Lullaby", "Alice", "Bedtime", 180, "ambient", 2020, 60, "ISRC001")
    catalog.add_track("Sprint", "Bob", "Workout", 120, "electro", 2021, 140, "ISRC002")
```
# Refactoring memo — Step 5
## Issue
`add_track` takes eight parameters, four of which (`artist`, `album`, `genre`,
`release_year`) always travel together as album-level metadata. This is a
**Long Parameter List with a data clump** — the four fields describe the album
a track belongs to, and they always co-vary in call sites.
## Rationale
<!-- TODO: explain WHY introducing AlbumInfo is the right fix.
Why a @dataclass, not a dict? Why these four parameters and not others? -->
## Invariant
<!-- TODO: which property of behavior must be preserved across the refactor?
(Hint: same input values → same record dict in LIBRARY.) -->
## Tests
`test_add_track_inserts_record` confirms the record values are unchanged after the
refactor. `test_seed_library_populates_two_tracks` confirms the helper module's
calls to `add_track` continue to work. Both tests must stay green throughout.
```python
"""Tests — verify behavior is preserved across the refactor."""
import pytest
from typing import Dict

from track import TrackCatalog
from helpers import seed_library


@pytest.fixture
def catalog() -> TrackCatalog:
    return TrackCatalog()


def test_add_track_inserts_record(catalog: TrackCatalog) -> None:
    # After the refactor, the call will use AlbumInfo for the album fields.
    record: Dict = catalog.add_track(
        title="Echo",
        artist="Carol",
        album="Reflections",
        duration_sec=200,
        genre="indie",
        release_year=2022,
        bpm=110,
        isrc="ISRC003",
    )
    assert record["title"] == "Echo"
    assert record["artist"] == "Carol"
    assert record["album"] == "Reflections"
    assert record["genre"] == "indie"
    assert record["release_year"] == 2022
    assert len(catalog.library) == 1


def test_seed_library_populates_two_tracks(catalog: TrackCatalog) -> None:
    seed_library(catalog)
    assert len(catalog.library) == 2
    assert catalog.library[0]["title"] == "Lullaby"
    assert catalog.library[1]["bpm"] == 140


if __name__ == "__main__":
    import sys

    pytest.main([sys.argv[0], "-v"])
```
Step 5 — Long Parameter List
Min. score: 70%
1. Retrieval. Which of add_track’s eight parameters travel together as a clump?
(select all that apply)
The clump is artist, album, genre, release_year — these describe the album, not the individual track. Two tracks on the same album always pass these four together with the same values. That co-variation is the diagnostic for a parameter object.
2. Why prefer @dataclass over a plain dict for the parameter object?
Type-checked field access is the load-bearing benefit. album_info.titel (typo) is a static error; d["titel"] is a runtime KeyError. Moving errors earlier — from runtime to edit time — is the same theme as test-driven development.
3. The tool rewrites call sites for you. After applying Introduce Parameter Object, your tests fail. What’s the most likely cause?
The tool sees what it can statically analyze. Positional calls in helper modules sometimes get missed — particularly when the helper imports add_track indirectly. The tests catch the miss; you fix it manually. Behavior preservation remains your responsibility, even with a tool.
4. What’s the difference between using @dataclass and writing the class by hand?
Boilerplate generation. @dataclass writes the constructor, equality check, and repr from your field annotations. You get the same behavior as a hand-written class with one-third the code.
Feature Envy → Move Method (with the tool)
Methods belong with the data they use
Learning objective. After this step you will be able to (a) recognize Feature Envy by the rule “uses zero state of its host class,” (b) use Monaco’s Refactor: Move Method to relocate the method to the class whose data it actually uses, and (c) explain why type-annotated signatures make moves safer.
Open media.py. There are two classes, Player and Track, and one suspicious method:
```python
class Player:
    def __init__(self, volume: int) -> None:
        self.volume: int = volume

    def compute_remaining_seconds(self, track: "Track") -> int:
        return track.duration_sec - track.current_position
```
compute_remaining_seconds lives on Player but uses zero state of Player. It only touches fields of Track. That’s Feature Envy — the method “envies” the data of another class.
Initial state
(compute_remaining_seconds lives on Player, but its body only reads Track fields. The arrow points the wrong way.)
💡 Diagnostic. Read the body of any method. Does it touch `self` at all? If not — or if it touches `self` only as a delegator — the method probably belongs on the other object whose data it does touch.
Run the move
Place your cursor inside compute_remaining_seconds, then Refactor: Move Method… with target class Track. Same flow as Step 4’s Extract Function — preview, accept. Watch the live UML in the bottom-left: the method migrates from Player to Track, and the Player → Track “uses” arrow disappears because Player no longer reaches into Track for this calculation.
Spacing callback — apply Step 3’s lesson here
Track has a sibling method is_finished that suffers from the boolean-return anti-pattern from Step 3:
```python
def is_finished(self) -> bool:
    if self.current_position >= self.duration_sec:
        return True
    else:
        return False
```
After moving compute_remaining_seconds onto Track, simplify is_finished while you’re there. Same pattern, new context — the smell doesn’t care which class it lives on.
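One plausible end state after both changes (the moved method plus the Step 3 simplification). Treat it as a sketch to compare against the tool's output rather than something to type in:

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    duration_sec: int
    current_position: int = 0

    def compute_remaining_seconds(self) -> int:
        # After Move Method: the computation lives with the data it reads.
        return self.duration_sec - self.current_position

    def is_finished(self) -> bool:
        # Step 3 callback: the condition is already a bool, so return it.
        return self.current_position >= self.duration_sec

t = Track(track_id="t1", duration_sec=300, current_position=120)
print(t.compute_remaining_seconds())  # 180
print(t.is_finished())                # False
```

Notice the moved method no longer takes a `track` argument: the receiver is the track.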
The Memo
Open memo.md. Three of four fields are blank — the Tests field is pre-filled. You write Issue, Rationale, and Invariant based on what you observe in the move.
```python
"""The streaming media classes."""
from dataclasses import dataclass


@dataclass
class Track:
    track_id: str
    duration_sec: int
    current_position: int = 0

    def is_finished(self) -> bool:
        # Sub-task: simplify this boolean (Step 3 callback)
        if self.current_position >= self.duration_sec:
            return True
        else:
            return False


@dataclass
class Player:
    volume: int = 50

    def compute_remaining_seconds(self, track: Track) -> int:
        # Feature Envy: uses no Player state, only Track state.
        return track.duration_sec - track.current_position
```
```python
"""UI helper that formats the player's current state for display.

This caller currently goes through Player.compute_remaining_seconds(track),
which is Feature Envy on Track. After the Move Method refactor, the call
site here MUST be updated from `self.player.compute_remaining_seconds(track)`
to `track.compute_remaining_seconds()`. The tool may rewrite this for you;
if it doesn't, the tests will catch the missed call site.
"""
from media import Player, Track


class PlaybackUI:
    """Renders the player display."""

    def __init__(self, player: Player) -> None:
        self.player: Player = player

    def format_remaining(self, track: Track) -> str:
        """Return a `M:SS remaining` string for the given track."""
        seconds: int = self.player.compute_remaining_seconds(track)
        minutes: int = seconds // 60
        secs: int = seconds % 60
        return f"{minutes}:{secs:02d} remaining"
```
```python
"""Behavior tests for Player, Track, and the PlaybackUI caller."""
import pytest

from media import Player, Track
from playback_ui import PlaybackUI


def test_remaining_seconds_at_start() -> None:
    t: Track = Track(track_id="t1", duration_sec=300, current_position=0)
    # The student must update this call site after the move:
    #   before: Player().compute_remaining_seconds(t)
    #   after:  t.compute_remaining_seconds()
    p: Player = Player()
    assert p.compute_remaining_seconds(t) == 300


def test_remaining_seconds_partway() -> None:
    t: Track = Track(track_id="t1", duration_sec=300, current_position=120)
    p: Player = Player()
    assert p.compute_remaining_seconds(t) == 180


def test_is_finished_true_at_end() -> None:
    t: Track = Track(track_id="t1", duration_sec=100, current_position=100)
    assert t.is_finished() is True


def test_is_finished_false_partway() -> None:
    t: Track = Track(track_id="t1", duration_sec=100, current_position=50)
    assert t.is_finished() is False


# ---- Caller tests: PlaybackUI must keep working across the Move Method refactor ----

def test_playback_ui_formats_remaining_at_start() -> None:
    t: Track = Track(track_id="t1", duration_sec=125, current_position=0)
    ui: PlaybackUI = PlaybackUI(Player())
    # 125s = 2:05 remaining
    assert ui.format_remaining(t) == "2:05 remaining"


def test_playback_ui_formats_remaining_partway() -> None:
    t: Track = Track(track_id="t1", duration_sec=125, current_position=60)
    ui: PlaybackUI = PlaybackUI(Player())
    # 125 - 60 = 65s = 1:05 remaining
    assert ui.format_remaining(t) == "1:05 remaining"


if __name__ == "__main__":
    import sys

    pytest.main([sys.argv[0], "-v"])
```
# Refactoring memo — Step 6
## Issue
<!-- TODO: name the smell. Why does compute_remaining_seconds belong on Track instead of Player? -->
## Rationale
<!-- TODO: explain WHY moving the method is the right fix. What's the heuristic? -->
## Invariant
<!-- TODO: which property of behavior is preserved? Be specific about the contract. -->
## Tests
`test_remaining_seconds_at_start`, `test_remaining_seconds_partway`, `test_is_finished_true_at_end`,
and `test_is_finished_false_partway` confirm the invariant — same inputs produce the same
outputs both before and after the move (after updating call sites). The test file should not
need changes other than rewriting `p.compute_remaining_seconds(t)` to `t.compute_remaining_seconds()`.
Step 6 — Feature Envy
Min. score: 70%
1. Prediction (before reading the body). Player.compute_remaining_seconds(track) returns a number of seconds. Without reading the body, which class is most likely to be the right home for this method?
The noun is the receiver. A method named “compute X of Y” is usually a method on Y. Reading the body confirms — uses zero Player state, all Track state. Move Method.
2. What’s the rule that diagnoses Feature Envy?
“Uses zero state of host class” is the precise diagnostic. A method that doesn’t touch self (other than to read it) is suspicious; a method that touches self only to delegate to another object is suspicious; a method that uses only foreign state is unambiguously Feature Envy.
3. Step 3 callback. Why simplify is_finished from if cond: return True else: return False to return cond?
The condition (self.current_position >= self.duration_sec) is already a bool. Wrapping it in if/else with literal True and False returns is pure ceremony — same anti-pattern from Step 3, now in a method context.
4. Could you instead move the data (Track’s fields) into Player, leaving the method where it is?
Methods are easier to move than data. A method has one home and a few callers; a field tends to have many readers and writers. Moving fields shotgun-surgeries the codebase. Moving methods tightens it.
5. Variation Theory — non-example. A Catalog.build_default_track() method on a Catalog class uses no self state and only returns a new Track from a few constants. Is this Feature Envy?
The diagnostic catches Feature Envy candidates, but judgment decides. “Uses zero host state” makes a method a candidate for Move Method, not a guaranteed instance of the smell. Factories, default constructors, and pure-policy synthesizers live where the API surface is, not where the data is.
6. Mid-tutorial mix — Step 1 callback. You’ve just discovered that two functions have the same arithmetic bug. According to what you practiced in Step 1, what’s the correct order of operations?
Fix-then-refactor. The Step 1 lesson generalizes: never refactor over a known bug. Fix in each location with green tests, then consolidate. This is the highest-leverage sequencing rule in the toolkit.
7. Mid-tutorial mix — Step 4 callback. A teammate spots two near-duplicate functions and proposes Extract Function. The duplicates were just written this morning and the third occurrence isn’t on the roadmap. What does the Rule of Three suggest?
Two duplicates is the minimum, three is the safe bet. The Rule of Three exists because two near-duplicates can mislead — what looks like the same pattern may diverge in the third occurrence in a way that breaks the abstraction.
God Class → Extract Class (with the tool)
One class, two responsibilities
Learning objective. After this step you will be able to (a) recognize a God Class by its multiple responsibility clusters, (b) use Monaco’s Refactor: Extract Class to move a cohesive field/method cluster into a new class, and (c) explain why a clean extraction reduces change locality — fewer files change for any given feature.
Open streaming_app.py. The StreamingApp class has two distinct responsibilities mixed into one body:
- Catalog — track index, search history, recommendation cache, search & recommend methods.
- Billing — subscription tier, payment method, invoice list, charging methods.
The comments in the source file label the two clusters explicitly. That’s a hint: when responsibility clusters need labels to stay legible, the class is doing too many things at once.
Initial state
Why split? — predict the change-locality, then verify
Before you do any moves, predict: if a new payment method like PayPal needed to be added today, how many places in streaming_app.py would have to change? Skim the file, count, and write your number in a comment at the top of streaming_app.py like # BEFORE: N places change for a new payment method.
After the refactor, count again — how many files / classes change for the same feature. Write # AFTER: M places change.
The point of writing both numbers down is that change locality is the only metric that distinguishes “decomposed” from “renamed.” A renamed class still has the same change footprint. A decomposed class shrinks it.
💡 Expert’s note — recognizing seams. The seam between two responsibility clusters isn’t visible from how methods are named — it’s visible from which `self.X` fields each method touches. Methods that touch only the catalog field-set and methods that touch only the billing field-set form two clusters. A method that touches both is the boundary case and goes last. Read the methods one at a time, mark the field-sets they touch, and the seam will be obvious.
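If marking field-sets by hand feels error-prone, you can let Python do the bookkeeping. This throwaway script (an illustration, not one of the tutorial's files) uses the standard `ast` module to list which `self.X` fields each method touches:

```python
import ast

SOURCE = '''
class StreamingApp:
    def search(self, query):
        self.search_history.append(query)
        return list(self.track_index)

    def charge_monthly(self):
        self.invoice_list.append({"method": self.payment_method})
'''

cls = ast.parse(SOURCE).body[0]
for method in cls.body:
    if isinstance(method, ast.FunctionDef):
        # Collect every attribute access whose base is the name `self`.
        fields = sorted({
            node.attr
            for node in ast.walk(method)
            if isinstance(node, ast.Attribute)
            and isinstance(node.value, ast.Name)
            and node.value.id == "self"
        })
        print(method.name, "->", fields)
```

Run it over the real class body and the two disjoint field-sets (and any boundary method that touches both) fall straight out of the output.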
Use Extract Class
Place your cursor anywhere inside StreamingApp, then choose Refactor: Extract Class….
In the dialog:
- Name the new class `BillingManager`.
- Name the delegate field `billing`.
- Select the billing fields: `subscription_tier`, `payment_method`, `invoice_list`.
- Select the billing methods: `charge_monthly`, `charge_annual`, `send_invoice`, and the unlabeled `notify_payment_due`.
- Preview the diff, then apply it.
The tool creates BillingManager, moves the selected field/method cluster, replaces the old fields with self.billing = BillingManager(...), and rewrites straightforward typed call sites from streaming_app.charge_monthly() to streaming_app.billing.charge_monthly().
💡 Expert’s note. Extract Class still follows the field-then-method safety rule internally. Fields become the new class’s state first; then methods that use only that field-set can move without leaving dangling `self.X` references behind.
Spacing callback — Rule of Three from Step 4
Look at charge_monthly and charge_annual carefully. They share most of their logic. Should you Extract Function on the shared body before moving them?
The Step 4 lesson said: extract when the variation is one obvious dimension. Here the variation is the multiplier (12 vs. 1). The bodies are otherwise identical.
Recommended order: Move first, then Extract second. Keeps the extracted helper inside BillingManager, exactly where billing logic should live. (If you Extract first, the helper is on StreamingApp and then has to be moved too — twice the work for the same result.)
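Here is a sketch of where that order lands you, assuming a `BillingManager` shaped like the dialog above (the `_charge` helper name is a suggestion, not the tool's output):

```python
from typing import Dict, List

class BillingManager:
    """Possible end state: billing cluster moved out, shared body extracted."""

    def __init__(self, payment_method: str = "") -> None:
        self.subscription_tier: str = "free"
        self.payment_method: str = payment_method
        self.invoice_list: List[Dict] = []

    def _charge(self, period: str, months: int) -> Dict:
        # Extracted shared body: the only variation was the multiplier.
        invoice: Dict = {"period": period, "amount": 9.99 * months,
                         "method": self.payment_method}
        self.invoice_list.append(invoice)
        return invoice

    def charge_monthly(self) -> Dict:
        return self._charge("monthly", 1)

    def charge_annual(self) -> Dict:
        return self._charge("annual", 12)

billing = BillingManager(payment_method="visa-1234")
print(billing.charge_monthly()["amount"])  # 9.99
print(len(billing.invoice_list))           # 1
```

Both public methods collapse to one-line delegations, and the helper lives in the class that owns the billing data.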
The unlabeled stray method
One method, notify_payment_due, isn’t labeled with a comment. Read its body. Which cluster does it belong in? The answer is in the body — this is a small recovery exercise in seam recognition.
```python
"""The streaming app — currently a God Class.

Two responsibility clusters share one class body. Your task is to
extract the billing cluster into a new BillingManager class.
"""
from typing import Dict, List


class StreamingApp:
    def __init__(self, user_id: str) -> None:
        self.user_id: str = user_id
        # ----- Catalog cluster -----
        self.track_index: Dict[str, dict] = {}
        self.search_history: List[str] = []
        self.recommendation_cache: Dict[str, List[str]] = {}
        # ----- Billing cluster -----
        self.subscription_tier: str = "free"
        self.payment_method: str = ""
        self.invoice_list: List[Dict] = []

    # ----- Catalog methods -----
    def search(self, query: str) -> List[str]:
        self.search_history.append(query)
        return [tid for tid, info in self.track_index.items()
                if query.lower() in info.get("title", "").lower()]

    def record_recommendation(self, seed: str, results: List[str]) -> None:
        self.recommendation_cache[seed] = results

    # ----- Billing methods -----
    def charge_monthly(self) -> Dict:
        invoice: Dict = {"period": "monthly", "amount": 9.99 * 1, "method": self.payment_method}
        self.invoice_list.append(invoice)
        return invoice

    def charge_annual(self) -> Dict:
        invoice: Dict = {"period": "annual", "amount": 9.99 * 12, "method": self.payment_method}
        self.invoice_list.append(invoice)
        return invoice

    def send_invoice(self, invoice_index: int) -> bool:
        if invoice_index >= len(self.invoice_list):
            return False
        # Pretend to email the invoice.
        return True

    # ----- Which cluster does this belong in? -----
    def notify_payment_due(self) -> str:
        return f"Payment of $9.99 due on {self.payment_method}"
```
# Refactoring memo — Step 7
## Issue
<!-- TODO: name the smell. What makes StreamingApp a God Class? -->
## Rationale
<!-- TODO: why Extract Class? Why this seam? Why does the field-set define the boundary? -->
## Invariant
<!-- TODO: what behavior must be preserved? What does the public API
of StreamingApp guarantee that the refactor must not break? -->
## Tests
The behavior tests in `test_streaming_app.py` cover both the catalog API
(`test_search_returns_matching_titles`, `test_search_records_history`) and the
billing API (`test_charge_monthly_creates_invoice`, `test_charge_annual_creates_invoice`,
`test_invoice_list_grows_per_charge`). The catalog tests should require no edits
across the refactor. The billing tests are written defensively to support both the
old (`app.charge_monthly()`) and new (`app.billing.charge_monthly()`) APIs, so they
stay green throughout the migration.
```python
"""Daily app routine that exercises both the catalog and billing clusters.

Every call site here that touches billing (charge_monthly, invoice_list)
will need to be updated after the Extract Class refactor — from
`self.app.charge_monthly()` to `self.app.billing.charge_monthly()`,
and from `self.app.invoice_list` to `self.app.billing.invoice_list`.
The tool may not catch all of these; the tests will.
"""
from typing import List

from streaming_app import StreamingApp


class AppRunner:
    """Drives a daily flow over StreamingApp's public surface."""

    def __init__(self, app: StreamingApp) -> None:
        self.app: StreamingApp = app

    def daily_routine(self, queries: List[str]) -> int:
        """Run searches, then process the monthly charge.

        Returns the number of invoices accumulated so far.
        """
        for q in queries:
            self.app.search(q)
        self.app.charge_monthly()
        return len(self.app.invoice_list)
```
"""Public-API behavior tests for StreamingApp and AppRunner."""
import pytest
from typing import Dict
from streaming_app import StreamingApp
from app_runner import AppRunner
@pytest.fixture
def app() -> StreamingApp:
a: StreamingApp = StreamingApp(user_id="u42")
a.track_index = {
"t1": {"title": "Echoes"},
"t2": {"title": "Echo Chamber"},
"t3": {"title": "Bright Lights"},
}
# The billing fields will move; the test sets them via whatever
# path remains correct (direct attribute today, .billing.X tomorrow).
return a
# ---- Catalog public API ----
def test_search_returns_matching_titles(app: StreamingApp) -> None:
assert sorted(app.search("echo")) == ["t1", "t2"]
def test_search_records_history(app: StreamingApp) -> None:
app.search("echo")
app.search("bright")
assert app.search_history == ["echo", "bright"]
# ---- Billing public API — written to survive the extraction ----
def test_charge_monthly_creates_invoice(app: StreamingApp) -> None:
# After refactor, billing.subscription_tier, billing.payment_method, etc.
if hasattr(app, 'billing'):
app.billing.payment_method = "visa-1234"
invoice: Dict = app.billing.charge_monthly()
else:
app.payment_method = "visa-1234"
invoice = app.charge_monthly()
assert invoice["amount"] == pytest.approx(9.99)
assert invoice["method"] == "visa-1234"
def test_charge_annual_creates_invoice(app: StreamingApp) -> None:
if hasattr(app, 'billing'):
app.billing.payment_method = "visa-1234"
invoice: Dict = app.billing.charge_annual()
else:
app.payment_method = "visa-1234"
invoice = app.charge_annual()
assert invoice["amount"] == pytest.approx(9.99 * 12)
def test_invoice_list_grows_per_charge(app: StreamingApp) -> None:
if hasattr(app, 'billing'):
app.billing.charge_monthly()
app.billing.charge_monthly()
assert len(app.billing.invoice_list) == 2
else:
app.charge_monthly()
app.charge_monthly()
assert len(app.invoice_list) == 2
# ---- Caller tests: AppRunner must keep working across the Extract Class refactor ----
def test_app_runner_records_searches_and_charges(app: StreamingApp) -> None:
runner: AppRunner = AppRunner(app)
invoice_count: int = runner.daily_routine(["echo", "bright"])
assert app.search_history == ["echo", "bright"]
assert invoice_count == 1
if __name__ == "__main__":
import sys
pytest.main([sys.argv[0], "-v"])
Step 7 — God Class
Min. score: 70%
1. Up-front classification. Which of the following are billing-only fields of the original StreamingApp?
(select all that apply)
Three billing fields: subscription_tier, payment_method, invoice_list. Identifying these before using the tool is the responsibility-recognition gate. Without it, you risk moving fields that don’t actually belong to the billing cluster.
2. What’s the safe order for extracting a class via Move Method/Field?
Fields first, methods second. Methods depend on fields; reverse order breaks references during the intermediate state. The tool’s per-move soundness depends on you sequencing correctly.
3. Spacing callback to Step 4. charge_monthly and charge_annual share most of their body. Should you Extract Function on them before moving them to BillingManager?
Move first, extract second. Same code shape, but the extracted helper ends up where billing logic actually belongs. This is a sequencing optimization — both orderings produce correct code, but one of them avoids redundant work.
4. What does NOT count as fixing the God Class smell, even if it makes the file look cleaner?
Renaming is local cleanup, not decomposition. A class with 14 methods named with prefixes is still one class with 14 methods. The fields and methods remain coupled to the same self. Decomposition requires a new class to receive the migrated members. Watch for this confusion — “I cleaned it up by renaming” is a frequent self-deception about responsibility decomposition.
Replace Conditional with Polymorphism (with the tool)
When if track_kind == is the smell
Learning objective. After this step you will be able to (a) recognize a type-conditional that begs for polymorphism, (b) use Monaco’s Refactor: Move Method to migrate each branch into a subclass’s overridden method, and (c) explain when not to use polymorphism (a small `match` statement is sometimes fine).
Open tracks.py. The play function has an if/elif chain over track_kind:
def play(track_kind: str, track: Track, user: User) -> str:
if track_kind == "song":
# Songs respect repeat mode
...
elif track_kind == "podcast":
# Podcasts auto-resume from last position
...
elif track_kind == "audiobook":
# Audiobooks respect playback speed
...
else:
raise ValueError(f"Unknown track kind: {track_kind}")
Three branches, three different rules, one flat Track data class. This is type-conditional dispatch — the function asks “what kind of thing am I?” and chooses behavior. Every time you add a new track kind, this function grows by one elif.
Initial state
(One flat Track class with all the fields any kind of track might need; one play function with three branches dispatching on a string. After the refactor, Track becomes abstract and three subclasses each own their play behavior.)
A 30-second abstractmethod primer
Python’s abc module lets you declare a class as abstract — it can’t be instantiated directly. Subclasses must override the abstract methods to be instantiable.
from abc import ABC, abstractmethod
class Animal(ABC):
@abstractmethod
def speak(self) -> str:
...
class Dog(Animal):
def speak(self) -> str:
return "woof"
Animal() # TypeError: Can't instantiate abstract class
Dog() # OK
ABC (Abstract Base Class) is the marker. @abstractmethod says “every subclass MUST override this.” Together they enforce a contract at instantiation time, before any logic runs.
💡 We’re giving you the hierarchy shape on purpose. Designing class hierarchies from scratch is a separate skill (single-responsibility, Liskov substitution, type variance). For this step, we provide the abstract `Track` base + three subclass skeletons. Your task is the dispatch inversion — moving each `play` branch into the right subclass — not the hierarchy design.
How to apply this
The starter tracks.py already declares Track, Song, Podcast, Audiobook as a hierarchy with Track.play(user) as @abstractmethod. Run the dispatch micro-tests first to confirm polymorphism is wired correctly. Then, for each branch of the original play function, move the body into the corresponding subclass’s play(self, user) method (replace track.X with self.X). Finally, replace the original play function body with return track.play(user).
Tool-wise, this is just Refactor: Move Method done three times — same flow you used in Steps 6 and 7. Some branches may be easier to copy/paste because the conditional pattern doesn’t always select cleanly for tool-driven moves; that’s fine.
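To make the end state concrete, here is what the song branch looks like after its move, plus the collapsed dispatcher. The strings match the starter code's expected outputs; only the minimal fields needed for `Song` are shown.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class User:
    user_id: str


@dataclass
class Track(ABC):
    track_id: str
    duration_sec: int
    repeat_mode: bool = False

    @abstractmethod
    def play(self, user: User) -> str:
        ...


@dataclass
class Song(Track):
    # Body of the old `if track_kind == "song"` branch, with track.X -> self.X.
    def play(self, user: User) -> str:
        if self.repeat_mode:
            return f"playing song {self.track_id} on repeat"
        return f"playing song {self.track_id}"


# After all three moves, the procedural dispatcher collapses to one line.
def play(track_kind: str, track: Track, user: User) -> str:
    return track.play(user)
```

Note the `track_kind` string parameter survives only for call-site compatibility — the one-line body ignores it, which is the signal that callers can migrate to `track.play(user)` directly.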
Spacing callback — Step 6’s Feature Envy in disguise
Each if track_kind == "X" branch in the original play function is a bit of code that uses only track-kind-specific data. By Step 6’s diagnostic (“uses zero state of host class”), each branch is Feature Envy on the Track-of-that-kind. The polymorphism refactoring is a special case of Move Method: instead of moving one method to one class, we move N methods to N subclasses dispatched by type.
Comparison — change locality (round two)
How many files change to add a new Live track type?
- Before: edit `tracks.py` (add an `elif`), edit any caller, possibly edit tests for completeness.
- After: add a new `Live(Track)` subclass with its own `play`. The `play` function and its callers don’t change.
That’s the Open/Closed Principle — open for extension (new subclasses), closed for modification (existing dispatch unchanged).
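Here is that extension sketched end to end. The `Live` kind and its `venue` field and message are illustrative — the point is that `play_all` predates `Live` and never gets edited.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class User:
    user_id: str


@dataclass
class Track(ABC):
    track_id: str
    duration_sec: int

    @abstractmethod
    def play(self, user: User) -> str:
        ...


# The entire change for a new track kind: one new subclass, nothing else.
@dataclass
class Live(Track):
    venue: str = "unknown"

    def play(self, user: User) -> str:
        return f"streaming live set {self.track_id} from {self.venue}"


def play_all(tracks: List[Track], user: User) -> List[str]:
    # Pre-existing dispatch code — it handles Live without modification.
    return [t.play(user) for t in tracks]
```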
When polymorphism is not the right answer
Three branches dispatching on a string-valued type can be a small `match` statement instead. Decide based on:
- Does the variation extend? If the type set is closed (e.g., HTTP status code classes — there are exactly five), polymorphism is over-engineered. A `match` works.
- Does each branch carry its own state? If yes, polymorphism wins (state goes into the subclass). If branches are stateless calculations, `match` is simpler.
- Will subclasses share more behavior over time? If the kinds will accumulate shared methods, the hierarchy pays back. If they’ll stay independent, polymorphism is bookkeeping.
"""The track hierarchy — to be built up via Replace Conditional with Polymorphism."""
from abc import ABC, abstractmethod
from dataclasses import dataclass
@dataclass
class User:
user_id: str
@dataclass
class Track(ABC):
track_id: str
duration_sec: int
# All possible per-kind state (used by various subclasses)
repeat_mode: bool = False
last_position: int = 0
playback_speed: float = 1.0
@abstractmethod
def play(self, user: User) -> str:
"""Each subclass owns its play behavior."""
...
@dataclass
class Song(Track):
# TODO: implement play() — songs respect repeat_mode.
# If repeat_mode is True, return f"playing song {self.track_id} on repeat"
# otherwise return f"playing song {self.track_id}"
def play(self, user: User) -> str:
pass
@dataclass
class Podcast(Track):
# TODO: implement play() — podcasts auto-resume from last_position.
# Return f"resuming podcast {self.track_id} at {self.last_position}s"
def play(self, user: User) -> str:
pass
@dataclass
class Audiobook(Track):
    # TODO: implement play() — audiobooks adjust playback speed.
    # Return f"playing audiobook {self.track_id} at {self.playback_speed}x speed"
    def play(self, user: User) -> str:
        pass
# The original procedural dispatcher — to be collapsed.
def play(track_kind: str, track: Track, user: User) -> str:
"""Procedural dispatch. After polymorphism, this collapses to one line."""
if track_kind == "song":
if track.repeat_mode:
return f"playing song {track.track_id} on repeat"
else:
return f"playing song {track.track_id}"
elif track_kind == "podcast":
return f"resuming podcast {track.track_id} at {track.last_position}s"
elif track_kind == "audiobook":
return f"playing audiobook {track.track_id} at {track.playback_speed}x speed"
else:
raise ValueError(f"Unknown track kind: {track_kind}")
# Refactoring memo — Step 8
## Issue
<!-- TODO: name the smell. Why is the if/elif chain over track_kind a smell? -->
## Rationale
<!-- TODO: why does Replace Conditional with Polymorphism fix it?
What does the post-state achieve that the pre-state can't? -->
## Invariant
<!-- TODO: behavior preservation — what does play(...) still guarantee
for each track kind after the refactor? -->
## Tests
<!-- TODO: which tests confirm the invariant?
Hint: dispatch micro-tests + per-subclass behavior tests. -->
"""Plays a queue of tracks one after another.
Currently each call site has to pass the kind string explicitly:
`play("song", song_instance, user)`. After Replace Conditional with
Polymorphism, every call here can collapse to `track.play(user)` —
one polymorphic call regardless of subclass. Watch the lines shrink.
"""
from typing import List
from tracks import Track, Song, Podcast, Audiobook, User, play
class PlaybackLoop:
"""Drives a sequence of plays for a given user."""
def __init__(self, user: User) -> None:
self.user: User = user
def play_one_of_each(
self, song: Song, podcast: Podcast, audiobook: Audiobook
) -> List[str]:
"""Play one of each kind, returning the result strings.
After polymorphism, this method can shrink to a single list
comprehension over `[song, podcast, audiobook]` calling
`t.play(self.user)` directly.
"""
return [
play("song", song, self.user),
play("podcast", podcast, self.user),
play("audiobook", audiobook, self.user),
]
"""Behavior tests for the polymorphic Track hierarchy and PlaybackLoop caller."""
import pytest
from tracks import Track, Song, Podcast, Audiobook, User, play
from playback_loop import PlaybackLoop
# ----- Dispatch micro-tests (run first) -----
def test_track_cannot_be_instantiated() -> None:
"""Track is abstract; instantiation should fail."""
with pytest.raises(TypeError):
Track(track_id="t0", duration_sec=10)
def test_subclass_dispatch_calls_subclass_play() -> None:
"""Calling play on a subclass instance dispatches to that subclass's method."""
s: Song = Song(track_id="s1", duration_sec=180)
p: Podcast = Podcast(track_id="p1", duration_sec=2400, last_position=300)
# If the subclass overrides correctly, these don't error and don't return None.
assert s.play(User("u1")) is not None
assert p.play(User("u1")) is not None
# ----- Per-subclass behavior tests -----
def test_song_default_returns_plain_play_message() -> None:
s: Song = Song(track_id="s1", duration_sec=180)
assert s.play(User("u1")) == "playing song s1"
def test_song_with_repeat_mode_returns_repeat_message() -> None:
s: Song = Song(track_id="s2", duration_sec=200, repeat_mode=True)
assert s.play(User("u1")) == "playing song s2 on repeat"
def test_podcast_resumes_from_last_position() -> None:
p: Podcast = Podcast(track_id="p1", duration_sec=2400, last_position=300)
assert p.play(User("u1")) == "resuming podcast p1 at 300s"
def test_audiobook_uses_playback_speed() -> None:
a: Audiobook = Audiobook(track_id="a1", duration_sec=3600, playback_speed=1.5)
assert a.play(User("u1")) == "playing audiobook a1 at 1.5x speed"
# ----- The dispatcher collapses to a one-liner -----
def test_play_function_delegates_to_subclass() -> None:
s: Song = Song(track_id="s1", duration_sec=180)
# The procedural play() should now just call track.play(user).
# Both call styles must agree.
assert play("song", s, User("u1")) == s.play(User("u1"))
# ---- Caller test: PlaybackLoop must keep working across the polymorphism refactor ----
def test_playback_loop_runs_each_track_kind() -> None:
loop: PlaybackLoop = PlaybackLoop(User("u1"))
s: Song = Song(track_id="s1", duration_sec=180)
p: Podcast = Podcast(track_id="p1", duration_sec=2400, last_position=300)
a: Audiobook = Audiobook(track_id="a1", duration_sec=3600, playback_speed=1.5)
results: list = loop.play_one_of_each(s, p, a)
assert results == [
"playing song s1",
"resuming podcast p1 at 300s",
"playing audiobook a1 at 1.5x speed",
]
if __name__ == "__main__":
import sys
pytest.main([sys.argv[0], "-v"])
Step 8 — Replace Conditional with Polymorphism
Min. score: 70%
1. Retrieval / dispatch. When play(user) is called on a Podcast instance, which method runs?
The override runs. Podcast.play overrides Track.play, so the dynamic dispatch from track.play(user) resolves to Podcast.play. This is the same mechanism C++ calls “virtual”; Python’s methods are virtual by default.
2. Is polymorphism always the right fix for if/elif chains over a type field?
It depends on extension and state. Polymorphism wins when the type set will grow and each type carries its own state. match wins when the type set is closed and the branches are pure dispatch. Both can be valid; choose based on the change vector.
3. What does @abstractmethod actually enforce?
Enforcement is at instantiation. @abstractmethod makes the abstract base un-instantiable; subclasses without all required overrides are also un-instantiable. The error message names the missing methods, which is far better than discovering at call time that the method does nothing.
4. Spacing back to Step 6. Why is moving each conditional branch into its corresponding subclass a special case of Move Method?
Same diagnostic, different host. Step 6’s Feature Envy (“uses zero state of host class”) generalized: each branch of the conditional uses only state of its respective subclass. Polymorphism is Move Method dispatched by type.
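The enforcement point from question 3 is easy to see live. `Broken` below is a hypothetical subclass that forgets its override; the `TypeError` at instantiation names the missing method.

```python
from abc import ABC, abstractmethod


class Track(ABC):
    @abstractmethod
    def play(self) -> str:
        ...


class Broken(Track):
    # Forgets to override play() — so Broken is still abstract.
    pass


try:
    Broken()
    message = "no error"
except TypeError as err:
    message = str(err)

print(message)  # e.g. "Can't instantiate abstract class Broken ... play"
```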
Hotspots & The Boy Scout Rule
When refactoring earns its keep — and when it doesn’t
Learning objective. After this step you will be able to (a) explain why “always refactor” and “never refactor” are both wrong, (b) name two anti-rules — speculative generality and premature abstraction — that punish over-eager refactoring, and (c) apply the hotspot rule: refactor where the code is already changing.
Eight steps in, you’ve learned six refactorings (Extract Function, Boolean simplification, parameterized Extract, Introduce Parameter Object, Move Method, Extract Class) and one big idea (Replace Conditional with Polymorphism). With this much hammering, every line of code starts to look like a nail.
It isn’t.
Two anti-rules
Speculative generality is the smell of refactoring for an extension that never comes. A class StripePaymentPlugin(PaymentPlugin) with no second implementation, no plan for one, and no tests demonstrating multi-plugin behavior is just complexity. The abstraction might be useful if you ever need a second plugin. Most of the time, you don’t — and the code is harder to read while you wait for that future use.
Premature abstraction (a sibling smell) is over-eagerly extracting on the second occurrence of a pattern instead of the third. The Step-4 quiz hit this: with two duplicates the variation might be obvious or might be misleading. The Rule of Three exists because the third occurrence reveals what’s truly common vs. accidentally similar.
Both anti-rules are reactions to the same human pattern: developers like to feel “future-proof,” and refactoring tools make abstraction cheap. The cost of unused abstraction is paid daily by every reader who has to mentally instantiate the abstract types in their head before understanding the concrete code.
The hotspot rule
A hotspot is code that is already changing — high churn, frequent bug fixes, recent feature work. Refactoring buys you the most when applied to hotspots:
- You’re going to read the code anyway (because you’re changing it).
- The cleanup is amortized over many future changes (because the code keeps changing).
- The behavior preservation tests are most likely to exist on hotspot code (because someone wrote them recently).
Refactoring cold code — files nobody has touched for two years — is rarely worth the time. The code “looks ugly,” but nobody’s reading it; nobody’s changing it; nobody benefits from the cleanup. The cost is paid; the benefit isn’t realized.
The opposite extreme is also wrong. “Never refactor” means every hotspot accrues debt that compounds with every change — bug fixes take longer, new features ship slower, the team’s tolerance for the code degrades. Eventually the codebase becomes the kind of thing you can only rewrite, not edit. That’s not a virtue; it’s a failure mode.
The Boy Scout Rule
Robert C. Martin’s formulation: “Always leave the campground cleaner than you found it.” Applied to code: small, incremental, concurrent improvements bundled with whatever change is already happening — one Extract Method, one renamed variable, one collapsed boolean. The Boy Scout rule (clean while changing) and the hotspot rule (clean where changing) are the same principle from two angles, and together they reject “always” and “never” in favor of opportunistic improvement.
A misconception to avoid: “any cleaner loop is interchangeable”
One refactoring that looks innocuous but routinely breaks behavior: replacing an indexed for loop with a for x in iterable loop without checking what the index was doing. The classic example:
# Original: sums every other element starting at index 0
total = 0
for i in range(0, len(values), 2):
total += values[i]
# "Cleaner" but broken: now sums every element
total = 0
for v in values:
total += v
The two loops look similar, the linter might even prefer the second, and the tests on small inputs may pass. But the index-stride semantics is gone. Documented in Oliveira/Keuning/Jeuring (2023) as one of the most common refactoring misconceptions among CS students: students replace for i in range(...) with for v in iterable because the latter is “more Pythonic,” without checking whether the original index pattern was load-bearing.
The rule: before you simplify a loop, ask what the original was doing with the index. If the answer is “nothing” — the index was unused or only used to access iterable[i] sequentially — the simplification is safe. If the index was strided, reversed, paired with another sequence, or used for enumerate-like positional logic, the simplification is a behavior change.
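When the stride is load-bearing, there is a middle path: keep the stride explicit but drop the index via a slice. A small sketch of all three variants side by side:

```python
values = [10, 20, 30, 40, 50]

# Stride-preserving AND index-free: slice with step 2.
total = sum(values[::2])    # 10 + 30 + 50 = 90

# The naive "cleaner" rewrite changes behavior:
wrong = sum(values)         # 150 — sums every element

# Safe simplification: when the index only walked the list
# sequentially, dropping it preserves behavior.
total_all = 0
for v in values:
    total_all += v          # same result as range(len(values))
```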
Putting numbers on it
One large empirical study (Palomba et al., 2018) tracked 395 releases of 30 projects. Smelly classes had a median change-proneness of 32 vs. 12 for non-smelly ones; fault-proneness of 9 vs. 3. When three smells co-occurred in a class, change-proneness rose to a median of 54 and faults to 12. The data say: smells are real, smells in hotspots are expensive, and the strongest ROI is fixing co-occurring smells in churning code.
No code task this step — read, internalize, then quiz. Step 10 (the next one) puts the judgment to work on a snippet with multiple smells coexisting.
Step 9 — Hotspots & The Boy Scout Rule
Min. score: 70%
1. Retrieval. Which of these scenarios would you defer refactoring for? (select all that apply)
Cold code is the only one to defer. Hotspots, soon-to-be-touched code, and co-occurring-smell hotspots all justify refactoring effort. Lack of tests is a side-quest (write tests first), not a deferral.
2. What’s wrong with the rule “always refactor when you see a smell”?
Speculative generality and premature abstraction. “Always refactor” produces abstractions that anticipate extension that never arrives. The reader pays the cost of the abstraction every day; the imagined benefit may never come.
3. What’s wrong with the rule “never refactor working code”?
All three are facets of the same failure. Pure “never refactor” produces compounding debt in exactly the places where reducing it pays back the most.
4. The Boy Scout Rule, applied to code, says:
Small, incremental, concurrent. The Boy Scout rule trades coordinated big-bang refactors for many small per-task improvements. Combined with the hotspot rule, it covers most of the value of refactoring at almost none of the coordination cost.
5. Loop-replacement misconception. Consider:
# Original
total = 0
for i in range(0, len(values), 2):
total += values[i]
# Suggested
total = 0
for v in values:
total += v
Strided loops aren’t interchangeable with foreach. Before simplifying a loop, ask what the index was doing. Strided, reversed, paired-with-another-sequence, or used-for-position — all four patterns break under naive for x in iterable replacement. This is a documented misconception (Oliveira/Keuning/Jeuring 2023) that survives all the way into professional code.
The Refactoring Memo (Synthesis)
Putting it all together
Learning objective. After this step you will be able to (a) diagnose multiple smells coexisting in a piece of unfamiliar code, (b) choose which refactoring to apply first based on the dependencies between smells, and (c) defend the choice in a four-field refactoring memo before touching any code.
Open playlist.py. The class PlaylistManager has multiple smells stacked together. Your job is to (a) diagnose them, (b) choose one refactoring to apply, (c) write the memo, and only then (d) execute the refactoring with the tool.
Warm-up — interleaved diagnosis
Before you look at playlist.py, name the smell and refactoring that fits each of the four snippets below. No code changes — just diagnose. Each snippet has exactly one dominant smell from earlier steps.
Snippet A
def shuffle_score(
plays: int,
age: int,
last_played: int,
mood: str,
weather: str,
time_of_day: str,
) -> float:
base: float = 0.0
if mood == "energetic" and weather == "sunny":
base = plays / age
elif mood == "calm":
base = plays / (age + last_played)
# ... 8 more elifs ...
return base
Snippet B
def update_metadata(
track: Track,
title: str,
artist: str,
album: str,
year: int,
genre: str,
label: str,
) -> None:
track.title = title
track.artist = artist
...
Snippet C
class Statistics:
def avg_track_length(self, tracks: List[Track]) -> float:
total: int = sum(t.duration_sec for t in tracks)
return total / len(tracks)
Snippet D
def is_hi_res(track: Track) -> bool:
if track.bitrate >= 1411:
return True
else:
return False
Snippet E (the foil)
class StripeProcessor:
def charge(self, amount_cents: int, currency: str) -> bool:
return self._gateway.charge(amount_cents, currency)
class PayPalProcessor:
def charge(self, amount_cents: int, currency: str) -> bool:
return self._gateway.charge(amount_cents, currency)
Two near-identical classes — same method shape, same body. Tempting to extract a base class or polymorphism, but this might be fine. Each processor has its own SDK, error handling, and configuration that isn’t shown here. The visual similarity is shallow.
Hold all five diagnoses in your head as you read on. The quiz will check whether you can discriminate real smells from looks-similar-but-fine.
The synthesis snippet — playlist.py
PlaylistManager.add_to_playlist(...) is the function we’ll refactor. Smells stacked in it (in plain English):
- Long Parameter List — eight parameters, several of which travel as a clump.
- Long Method — three sub-goals (validation, deduplication, persistence) jammed together.
- Duplicated Code — the validation block is duplicated in
add_to_queue.
Two plausible refactorings — pick ONE
| Option | What it tackles first | Trade-off |
|---|---|---|
| A: Introduce Parameter Object on the eight parameters first | Long Parameter List | Cleaner signatures, but the duplicated validation block is still duplicated |
| B: Extract Function on the validation block first | Long Method + Duplication | Smaller methods, both add_to_playlist and add_to_queue deduped, but the eight-parameter signature is still long |
Either choice is defensible. The tutorial doesn’t grade which choice you make — it grades the memo you write to defend it.
The memo — fully blank this time
Open memo.md. All four sections are empty. You write all four:
- Issue — name the smell(s) in plain English.
- Rationale — explain why your chosen refactoring is the right first step. (You can mention follow-up refactorings, but commit to one first move.)
- Invariant — what behavior must be preserved across the refactor?
- Tests — which tests in `test_playlist.py` confirm the invariant?
A reference card at the bottom of memo.md defines each field in one sentence in case you need to refresh.
Then execute
After writing the memo, invoke the appropriate tool action:
- For Option A: place cursor in the parameter list, Refactor: Introduce Parameter Object, name it `TrackInfo`, bundle the four album-related parameters.
- For Option B: select the validation block in `add_to_playlist`, Refactor: Extract Function/Method, name it `_validate_track_data`. Then replace the duplicated block in `add_to_queue` with a call to the new helper.
Run the tests after the refactor. They must still pass.
What’s next after this tutorial
You’ve covered method-level smells (Long Method, Duplication, Boolean anti-patterns), data-level smells (Long Parameter List, Feature Envy), and class-level smells (God Class, type-conditional dispatch). The next layer is design principles — the rules of thumb that make smells less likely to appear in the first place. Two suggested follow-ups:
- SOLID design principles tutorial — single-responsibility, open/closed, Liskov substitution, interface segregation, dependency inversion. The vocabulary the smells in this tutorial implicitly invoked.
- Observer pattern tutorial — one of many design patterns; an example of how design principles solidify into reusable structural recipes.
"""The playlist manager — multiple smells stacked together."""
from typing import Dict, List
class PlaylistManager:
def __init__(self) -> None:
self.playlist: List[Dict] = []
self.queue: List[Dict] = []
def add_to_playlist(
self,
title: str,
artist: str,
album: str,
year: int,
genre: str,
duration_sec: int,
bpm: int,
isrc: str,
) -> Dict:
# Validation
if not title or not isinstance(title, str):
raise ValueError("title is required")
if duration_sec <= 0:
raise ValueError("duration_sec must be positive")
if bpm <= 0:
raise ValueError("bpm must be positive")
# Deduplication
for existing in self.playlist:
if existing["isrc"] == isrc:
return existing
# Persistence
record: Dict = {
"title": title, "artist": artist, "album": album,
"year": year, "genre": genre, "duration_sec": duration_sec,
"bpm": bpm, "isrc": isrc,
}
self.playlist.append(record)
return record
def add_to_queue(
self,
title: str,
artist: str,
album: str,
year: int,
genre: str,
duration_sec: int,
bpm: int,
isrc: str,
) -> Dict:
# Validation (DUPLICATED from add_to_playlist)
if not title or not isinstance(title, str):
raise ValueError("title is required")
if duration_sec <= 0:
raise ValueError("duration_sec must be positive")
if bpm <= 0:
raise ValueError("bpm must be positive")
record: Dict = {
"title": title, "artist": artist, "album": album,
"year": year, "genre": genre, "duration_sec": duration_sec,
"bpm": bpm, "isrc": isrc,
}
self.queue.append(record)
return record
"""Bulk-imports tracks into a PlaylistManager.
If you pick Option A (Introduce Parameter Object), every call site here
will need to pass a `TrackInfo` instead of the four album-related fields
flat. If you pick Option B (Extract Function on validation), the call
shape stays the same — but the helper inside PlaylistManager is shared.
Either way: the tests below exercise the caller across the refactor.
"""
from typing import Any, Dict, List
from playlist import PlaylistManager
class TrackImporter:
"""Bulk-imports a list of track records into a playlist or queue."""
def __init__(self, manager: PlaylistManager) -> None:
self.manager: PlaylistManager = manager
def import_to_playlist(self, tracks: List[Dict[str, Any]]) -> int:
"""Add each track dict to the underlying playlist. Returns count added."""
count: int = 0
for t in tracks:
self.manager.add_to_playlist(
title=t["title"],
artist=t["artist"],
album=t["album"],
year=t["year"],
genre=t["genre"],
duration_sec=t["duration_sec"],
bpm=t["bpm"],
isrc=t["isrc"],
)
count += 1
return count
"""Behavior tests for PlaylistManager and the TrackImporter caller."""
import pytest
from typing import Any, Dict, List
from playlist import PlaylistManager
from importer import TrackImporter
@pytest.fixture
def pm() -> PlaylistManager:
return PlaylistManager()
def _track_args(**overrides: Any) -> Dict[str, Any]:
base: Dict[str, Any] = dict(
title="Echoes", artist="Alice", album="Reflections", year=2022,
genre="indie", duration_sec=200, bpm=110, isrc="ISRC001",
)
base.update(overrides)
return base
def test_add_to_playlist_returns_record(pm: PlaylistManager) -> None:
# The student may need to switch to a TrackInfo object after the refactor.
args: Dict[str, Any] = _track_args()
    # Try keyword args first; if the refactor changed the signature, fall back.
try:
rec: Dict = pm.add_to_playlist(**args)
except TypeError:
# Refactor introduced a parameter object — try the object form.
from playlist import TrackInfo # may not exist before refactor
rec = pm.add_to_playlist(
title=args["title"],
track_info=TrackInfo(artist=args["artist"], album=args["album"],
year=args["year"], genre=args["genre"]),
duration_sec=args["duration_sec"],
bpm=args["bpm"],
isrc=args["isrc"],
)
assert rec["title"] == "Echoes"
assert rec["isrc"] == "ISRC001"
def test_add_to_playlist_validates_title(pm: PlaylistManager) -> None:
with pytest.raises(ValueError, match="title"):
args: Dict[str, Any] = _track_args(title="")
try:
pm.add_to_playlist(**args)
except TypeError:
from playlist import TrackInfo
pm.add_to_playlist(
title="",
track_info=TrackInfo(artist=args["artist"], album=args["album"],
year=args["year"], genre=args["genre"]),
duration_sec=args["duration_sec"], bpm=args["bpm"], isrc=args["isrc"],
)
def test_add_to_playlist_validates_duration(pm: PlaylistManager) -> None:
with pytest.raises(ValueError, match="duration"):
args: Dict[str, Any] = _track_args(duration_sec=0)
try:
pm.add_to_playlist(**args)
except TypeError:
from playlist import TrackInfo
pm.add_to_playlist(
title=args["title"],
track_info=TrackInfo(artist=args["artist"], album=args["album"],
year=args["year"], genre=args["genre"]),
duration_sec=0, bpm=args["bpm"], isrc=args["isrc"],
)
def test_dedup_returns_existing_record(pm: PlaylistManager) -> None:
args: Dict[str, Any] = _track_args()
try:
rec1: Dict = pm.add_to_playlist(**args)
rec2: Dict = pm.add_to_playlist(**args)
except TypeError:
from playlist import TrackInfo
ti = TrackInfo(artist=args["artist"], album=args["album"],
year=args["year"], genre=args["genre"])
rec1 = pm.add_to_playlist(title=args["title"], track_info=ti,
duration_sec=args["duration_sec"], bpm=args["bpm"],
isrc=args["isrc"])
rec2 = pm.add_to_playlist(title=args["title"], track_info=ti,
duration_sec=args["duration_sec"], bpm=args["bpm"],
isrc=args["isrc"])
assert rec1 is rec2 # same dict returned
def test_add_to_queue_validates_title(pm: PlaylistManager) -> None:
with pytest.raises(ValueError, match="title"):
args: Dict[str, Any] = _track_args(title="")
try:
pm.add_to_queue(**args)
except TypeError:
from playlist import TrackInfo
pm.add_to_queue(
title="",
track_info=TrackInfo(artist=args["artist"], album=args["album"],
year=args["year"], genre=args["genre"]),
duration_sec=args["duration_sec"], bpm=args["bpm"], isrc=args["isrc"],
)
# ---- Caller test: TrackImporter must keep working across the refactor ----
def test_track_importer_bulk_imports(pm: PlaylistManager) -> None:
importer: TrackImporter = TrackImporter(pm)
records: List[Dict[str, Any]] = [_track_args(isrc="A"), _track_args(isrc="B")]
# If the student picked Option A (Parameter Object), the importer's
# call site needs to be updated to pass TrackInfo. We support both.
try:
added: int = importer.import_to_playlist(records)
except TypeError:
        pytest.skip("Importer needs updating for Option A — see importer.py")
assert added == 2
assert len(pm.playlist) == 2
if __name__ == "__main__":
import sys
pytest.main([sys.argv[0], "-v"])
# Refactoring memo — Step 10
## Issue
<!-- Name the smell(s) you see in playlist.py. There are at least three. -->
## Rationale
<!-- Which refactoring will you apply FIRST? Why is that the right first step
given the smells you named? You may mention follow-up refactorings. -->
## Invariant
<!-- What property of behavior must be preserved across the refactor?
(Hint: external API contract — same inputs produce same outputs.) -->
## Tests
<!-- List which tests in test_playlist.py confirm the invariant. -->
---
### Reference card
| Field | One-sentence definition |
|---|---|
| **Issue** | The smell present in the original code. |
| **Rationale** | Why this refactoring is the right fix, given the smell. |
| **Invariant** | The behavior property preserved across the refactor. |
| **Tests** | The tests that confirm the invariant. |
Step 10 — Synthesis (Cumulative)
Min. score: 70%
1. Cumulative classification. Which smells appear in the original PlaylistManager.add_to_playlist?
(select all that apply)
Three smells stacked: Long Parameter List + Long Method + Duplicated Code. Recognizing them simultaneously is the synthesis skill. Each previous step taught one in isolation; here they coexist.
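To see how the three smells stack, here is a condensed, hypothetical sketch in the shape of the original method — not a copy of the tutorial's playlist.py, just an illustration of how one method can exhibit all three at once:

```python
class PlaylistManager:
    """Condensed illustration of three stacked smells (hypothetical code)."""

    def __init__(self):
        self.playlist = []
        self.queue = []

    # Smell 1: Long Parameter List -- eight parameters, most of which
    # always travel together (a data clump).
    def add_to_playlist(self, title, artist, album, year,
                        genre, duration_sec, bpm, isrc):
        # Smell 2: Long Method -- validation, dedup, and record
        # construction all crammed into one body.
        if not title:
            raise ValueError("title is required")
        if duration_sec <= 0:
            raise ValueError("duration must be positive")
        for rec in self.playlist:
            if rec["isrc"] == isrc:
                return rec  # dedup: hand back the existing record
        rec = dict(title=title, artist=artist, album=album, year=year,
                   genre=genre, duration_sec=duration_sec, bpm=bpm, isrc=isrc)
        self.playlist.append(rec)
        return rec

    # Smell 3: Duplicated Code -- the same validation block, copied verbatim.
    def add_to_queue(self, title, artist, album, year,
                     genre, duration_sec, bpm, isrc):
        if not title:
            raise ValueError("title is required")
        if duration_sec <= 0:
            raise ValueError("duration must be positive")
        self.queue.append(isrc)
```

Each smell points at a different named refactoring (Introduce Parameter Object, Extract Method, and extracting the shared validation), which is why naming all three first matters: it determines the order of the fixes.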
2. Interleaved warmup retrieval. Snippet A (the shuffle_score function with if mood == "energetic" and weather == "sunny" branches) most resembles which smell from this tutorial?
Type-conditional dispatch. When if mood == "X" chooses behavior, you have a polymorphism candidate. Each mood/weather pair is dispatching to different scoring logic — same shape as the track_kind example in Step 8.
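One possible shape of the fix, Replace Conditional with Polymorphism, is sketched below. The Mood classes and their scoring values are invented for illustration; the point is the structure, where each branch of the old if/elif chain becomes a subclass:

```python
# Hypothetical sketch: each mood owns its scoring rule instead of
# shuffle_score branching on string comparisons.
class Mood:
    def score(self, weather: str) -> float:
        raise NotImplementedError

class Energetic(Mood):
    def score(self, weather: str) -> float:
        # Illustrative values only.
        return 1.5 if weather == "sunny" else 1.2

class Chill(Mood):
    def score(self, weather: str) -> float:
        return 0.8

MOODS = {"energetic": Energetic(), "chill": Chill()}

def shuffle_score(mood: str, weather: str) -> float:
    # Dispatch once at the boundary; no per-mood conditionals remain here.
    return MOODS[mood].score(weather)
```

Adding a new mood now means adding a class and a dictionary entry, not editing every branch of a conditional.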
3. Interleaved warmup retrieval. Snippet B (update_metadata with seven parameters that travel together) most resembles which smell?
Long Parameter List with a data clump. Six of the seven parameters describe album metadata — they always travel together. The fix is Introduce Parameter Object.
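A minimal sketch of Introduce Parameter Object for this shape follows. Snippet B's actual seven parameters aren't reproduced here, so the field names (label, catalog_no, etc.) are invented stand-ins:

```python
from dataclasses import dataclass, asdict

# Hypothetical parameter object: the clump of album metadata that
# always traveled together becomes one named, typed value.
@dataclass(frozen=True)
class AlbumMeta:
    artist: str
    album: str
    year: int
    genre: str
    label: str
    catalog_no: str

def update_metadata(track_id: int, meta: AlbumMeta) -> dict:
    # One cohesive argument replaces six loose ones.
    return {"track_id": track_id, **asdict(meta)}
```

The signature now says what travels together, and future metadata fields change one dataclass instead of every call site.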
4. Interleaved warmup retrieval. Snippet D (is_hi_res with if track.bitrate >= 1411: return True else: return False) most resembles which smell?
Classic boolean-return anti-pattern. The condition is already a boolean; wrapping it in if/else with return True / return False adds nothing. Simplify to return track.bitrate >= 1411. (And maybe also Move Method onto Track, but that’s secondary — Snippet D’s primary smell is the redundant boolean return.)
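The before/after is small enough to show in full. The Track class here is a stand-in with just the one field the check needs:

```python
class Track:
    def __init__(self, bitrate: int):
        self.bitrate = bitrate

# Before (the smell): the condition already *is* the answer,
# but it gets wrapped in a redundant if/else.
def is_hi_res_verbose(track: Track) -> bool:
    if track.bitrate >= 1411:
        return True
    else:
        return False

# After: return the boolean expression directly.
def is_hi_res(track: Track) -> bool:
    return track.bitrate >= 1411
```

Both versions behave identically, which is exactly what makes this a safe, mechanical simplification.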
5. Snippet C (Statistics class) retrieval. Statistics.avg_track_length(tracks) uses zero Statistics state — it iterates over the parameter tracks and computes a sum. Which earlier-step smell does this most resemble?
Feature Envy. The method uses zero state of its host (Statistics) and only data from the parameter tracks. Step 6’s diagnostic — “uses zero host state” — fires cleanly here. The likely fix is Move Method onto whichever class owns the track collection.
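A sketch of the Move Method fix, under the assumption that some class (called Library here, a hypothetical name) owns the track collection:

```python
# Hypothetical sketch: avg_track_length leaves Statistics (whose state it
# never touched) and lands on the class that owns the data it envied.
class Library:
    def __init__(self, durations_sec):
        self.durations_sec = list(durations_sec)

    def avg_track_length(self) -> float:
        # Now a method on the data's owner -- no envious parameter needed.
        if not self.durations_sec:
            return 0.0
        return sum(self.durations_sec) / len(self.durations_sec)
```

After the move, Step 6's diagnostic no longer fires: the method uses only its host's state.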
6. Snippet E (the foil) — discrimination check. Two near-identical processor classes (StripeProcessor, PayPalProcessor) with the same charge signature and similar body. Should you extract a common base class or apply Replace Conditional with Polymorphism?
Variation Theory in action. “Looks duplicated” can mean three different things: (1) actual duplication that should be extracted; (2) visual similarity over independent domain rules (currency rounding from Step 1); or (3) parallel infrastructure with hidden differences (this snippet). Discriminating between them is the judgment Step 9’s hotspot/Boy-Scout framing was for. When in doubt, wait for the third occurrence and a real bug-coupling signal before extracting.
7. Step 1 callback — fix-then-refactor. During the synthesis you spot a bug in one of the duplicated validation blocks of playlist.py. What’s the right discipline?
Fix-then-refactor. Step 1’s load-bearing rule generalizes across the entire toolkit: never refactor over a known bug. Fix in each location with green tests, then consolidate. This sequencing is one of the strongest rules in the discipline.
8. Memo synthesis. Why is the memo important — couldn’t you just refactor and let the diff speak for itself?
The memo answers questions a diff can’t. What smell? Why this refactoring? What invariant? Which tests confirm? A reviewer reading the diff sees what changed; the memo tells them why the change is correct. That’s the difference between a reviewable refactor and a trust-me one.