1

Hello, Shell!

Welcome to the Shell Scripting Tutorial! On the top is a code editor; on the bottom is a real Linux terminal.

Shell scripting has a reputation for tricky syntax — even experienced developers regularly look up Bash quoting rules. If something feels confusing, that’s a sign you’re engaging with genuinely hard material, not a sign you’re doing it wrong. Every error message is a clue; every mistake is a step forward.

Why learn shell scripting?

Every time you repeat a task in the terminal — processing files, checking log files, running complex builds — you are a candidate for automation. A shell script captures those commands in a file so you can re-run, share, and schedule them without retyping anything. So learning shell scripting can supercharge your productivity as a developer.

Shell scripts are the foundation of Continuous Integration / Continuous Delivery (CI/CD) pipelines, Docker entrypoints, deployment scripts, and system administration. The skills you learn here transfer directly to real production workflows.

Two lines every script needs

Open morning.sh in the editor. It already has:

#!/bin/bash
set -e

Line 1 — the shebang (#!): When you run a file, the Linux kernel reads its first two bytes to decide how to execute it. If they are #!, the rest of that line names the interpreter to use. Without a shebang, the result depends on whatever program launched the file; shells typically fall back to interpreting it as plain sh, which may not understand Bash features. #!/bin/bash is the standard choice when Bash is at /bin/bash (true on most Linux systems). For maximum portability across systems where Bash may live elsewhere, you can also use #!/usr/bin/env bash, which finds the first bash in your $PATH.

Line 2 — the safety net (set -e): By default, Bash happily continues running after a failed command. set -e exits the script when a command fails, preventing a cascade of confusing failures. Always include it. (We’ll cover its edge cases in later steps — for now, just know it makes scripts safer.)
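To see the difference set -e makes, compare these two one-liners (a sketch; false is a command that always fails):

```shell
# Without set -e, Bash keeps going after the failure:
bash -c 'false; echo "still running"'            # → still running

# With set -e, the first failure stops the script:
bash -c 'set -e; false; echo "still running"'    # prints nothing
echo "back in the outer shell"
```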

New Concept: Command Substitution

You can capture the output of a command and use it as a string by wrapping it in $(...). Try running this in your terminal right now: echo "I am $(whoami)"
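Command substitution is most useful for storing a command's output in a variable for later use, a sketch:

```shell
# Store command output in variables, then combine them
user=$(whoami)
today=$(date +%A)
echo "Hello, $user. Today is $today."
```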

Exploring Man Pages

Whenever you encounter an unfamiliar command or want to see all available options, the built-in manual is your first stop:

man date
man echo
man chmod

Each manual page is divided into sections: NAME, SYNOPSIS, DESCRIPTION, and OPTIONS. Navigate with the arrow keys, search with /keyword (then n for next match), and quit with q.

Try man date now to browse all available format specifiers — that’s how you’d discover that +%A prints the full weekday name, +%H:%M gives the time, and dozens of other options exist.

Your task

Add three commands to morning.sh:

  1. Print the literal string “Good morning!” using echo.
  2. Print “Today is ” followed by the current day. (Hint: the command date +%A outputs the day of the week. Use command substitution!)
  3. Print “You are logged in as: ” followed by your username. (Hint: use the whoami command.)

Then save (Ctrl+S / Cmd+S) and run in the terminal:

chmod +x morning.sh
./morning.sh

Starter files
morning.sh
#!/bin/bash
set -e
2

Navigating the Filesystem

Before you can automate tasks with scripts, you need to move around the filesystem confidently. In a GUI you click folders; in the shell you type commands. Let’s build muscle memory for the essential ones.

Where am I? What’s here?

pwd          # Print Working Directory — your current location
ls           # List what's in the current directory
ls -l        # Long format — shows permissions, size, dates

Predict: Run ls now. You should see morning.sh from the previous step. Now run ls -a. What extra entries appear?

Commit to your prediction, then run it. The . and .. entries are special: . is the current directory, .. is the parent. Files starting with . are “hidden” — ls skips them by default, but ls -a shows everything.

Moving around with cd

cd /tmp          # go to an absolute path
pwd              # confirm you moved
cd ..            # go up one level (to /)
pwd
cd ~             # go to your home directory (shortcut for $HOME)
pwd

Try each command above. Notice that cd with no output is normal — it silently changes your location. Use pwd to confirm.

Important: Now return to the tutorial working directory:

cd /tutorial

Creating structure with mkdir

mkdir testdir                          # create one directory

Predict: Now try mkdir testdir/a/b — what happens? The parent testdir/a/ doesn’t exist yet.

Try it and see — then use the fix:

mkdir -p testdir/a/b                   # -p creates parents too

The -p flag creates all missing parent directories at once. Without it, mkdir requires every parent to already exist. Clean up the test directory before moving on: rm -r testdir

Copying with cp

cp duplicates files. The original stays in place.

cp notes.txt notes_backup.txt          # copy a file (try it!)

Predict: What happens if you try to copy a directory without any flags? Run:

mkdir temp_demo
cp temp_demo /tmp/backup

Will it (a) copy the whole directory, (b) copy just the name, or (c) fail with an error?

Try it — then read on. You need cp -r (recursive) to copy a directory and everything inside it. Clean up: rm -r temp_demo

Moving and renaming with mv

mv does double duty — it moves and renames:

mv notes_backup.txt notes_copy.txt    # rename (try it!)
ls                                     # notes_backup.txt is gone,
                                       # notes_copy.txt appeared

Unlike cp, mv works on directories without needing -r — it just updates the path, it doesn’t copy data.
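For example (a sketch using throwaway directory names):

```shell
mkdir -p old_name          # a throwaway directory (hypothetical name)
mv old_name new_name       # rename: no -r flag required
ls -d new_name             # confirm the new name exists
rmdir new_name             # clean up
```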

Removing with rm

rm notes_copy.txt        # remove the copy we just made (no undo!)
rm -r directory/         # remove a directory and ALL its contents
rmdir empty_dir/         # remove ONLY if the directory is empty

Try the first command — notes_copy.txt from the mv example is now gone. The other two are syntax references for the task below.

Predict: After building the project below, try running rm myproject/ — without the -r flag — on a directory that contains files. Will it (a) delete everything, (b) delete just the directory, or (c) refuse with an error?

Try it and see. The shell protects you: without -r, rm refuses to touch directories. This is intentional.

Your task — Build a project skeleton

Use the commands you just learned to create this directory structure and manipulate files within it. We’ve provided notes.txt and data.csv as starting materials.

  1. Create the directory tree: myproject/src/, myproject/docs/, myproject/tests/ (Hint: mkdir -p can do this in one command)
  2. Copy notes.txt into myproject/docs/
  3. Move data.csv into myproject/src/ and rename it to input.csv
  4. Copy morning.sh into myproject/src/ as a backup
  5. Create an empty file myproject/tests/test_placeholder.txt (Hint: touch creates empty files)
  6. Remove the now-empty myproject/tests/test_placeholder.txt
  7. Verify your work: ls -R myproject (the -R flag lists recursively)
Starter files
notes.txt
Project Notes
=============
- Set up directory structure
- Process log files
- Write monitoring script
data.csv
timestamp,level,message
08:12:01,INFO,server started
08:15:45,ERROR,request failed
08:18:33,ERROR,timeout
3

Pipes — Connecting Commands

The pipe operator | is one of the most powerful ideas in Unix. It connects programs so that the output of one becomes the input of the next, letting you build data-processing pipelines from small, single-purpose tools. Data flows through memory from one process to the next — no intermediate files needed.

But before you connect tools, you need to know what each one does on its own. First, explore each tool individually — then we’ll combine them with pipes.

Part 1: Meet your tools (one at a time)

wc -l — count lines of input

wc -l < /etc/hosts   # how many lines are in /etc/hosts?

grep PATTERN file — print only lines that match a pattern

grep "WARN" server_log.txt   # show only warning lines

sort — sort lines alphabetically; add -n for numeric order, -r to reverse

echo -e "banana\napple\ncherry" | sort   # → apple, banana, cherry

uniq -c — collapse consecutive duplicate lines and prefix each with its count (always sort first so duplicates are adjacent)

echo -e "cat\ncat\ndog" | uniq -c   # →  2 cat   1 dog

cut -d' ' -f<n> — extract the n-th space-separated field

cut -d' ' -f2 server_log.txt   # extract the message type on each line

head -n — show only the first n lines

head -5 server_log.txt   # the first 5 log entries

Explore the data

A file called server_log.txt is provided. Browse it first:

cat server_log.txt

Now try each tool individually on the log file. Run each command in the terminal and observe what it does:

grep "ERROR" server_log.txt       # only ERROR lines
wc -l < server_log.txt             # total line count
cut -d' ' -f2 server_log.txt       # just the message types
head -3 server_log.txt             # first 3 lines only

Tool isolation exercises

Save the result of each single tool to a file:

  1. grep practice: Use grep to find all lines containing "WARN". Save to grep_result.txt.
  2. cut practice: Use cut to extract the second field (the message types: INFO, WARN, ERROR). Save to cut_result.txt.
  3. head practice: Use head to show only the first 3 lines of the log. Save to head_result.txt.

Part 2: Building pipelines

Now that you know what each tool does alone, let’s connect them.

The pipe | takes the stdout of the left command and feeds it directly into the stdin of the right command:

grep "ERROR" server_log.txt | wc -l   # count ERROR lines

No intermediate files — data flows through memory. You can chain as many commands as you need.
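For instance, a three-stage pipeline that generates lines, filters them, and counts the survivors (a sketch with inline data):

```shell
# generate → filter → count, all in one pipeline
printf 'INFO ok\nERROR bad\nERROR worse\n' | grep "ERROR" | wc -l    # → 2
```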

Redirection connects commands to files:

grep "INFO" server_log.txt > info_only.txt   # create/overwrite
echo "extra line" >> info_only.txt             # append (safe)
wc -l < info_only.txt                          # read from file

Where do errors go? (stderr)

Every program has two output streams: stdout (normal output, file descriptor 1) and stderr (error messages, file descriptor 2). By default both appear on your terminal, which makes them look the same — but they are separate streams that can be redirected independently.

Try this sequence — but predict before you run each step:

Step A: Run a command that produces both normal output AND an error:

ls server_log.txt no_such_file.txt

You should see both a successful listing and an error message on your terminal.

Step B — Predict first! If you redirect stdout to a file with >, what happens to the error message? Will it (a) go into the file, (b) still appear on your terminal, or (c) disappear entirely?

Commit to your answer, then run:

ls server_log.txt no_such_file.txt > ls_out.txt

Were you right? If the error still appeared on screen, that’s the key insight: > only captures stdout. The error traveled on a completely separate stream.

Step C: Now redirect stderr separately:

ls server_log.txt no_such_file.txt > ls_out.txt 2> ls_err.txt
cat ls_out.txt    # the successful listing
cat ls_err.txt    # just the error message

Key insight: > only captures stdout. Errors travel on stderr (2>), which is why they “leak through” regular redirection.
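One more form worth knowing (a sketch, not needed for the exercises below): 2>&1 redirects stderr to wherever stdout currently points, so both streams land in one file.

```shell
# Order matters: redirect stdout to the file first, then point stderr at it
ls . no_such_file.txt > all_output.txt 2>&1
cat all_output.txt    # the directory listing AND the error message, together
```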

Note: The tests below check that ls_out.txt and ls_err.txt exist with the expected content. Make sure you actually ran the commands from Steps B and C above!

Pipeline exercises

For each question, build a pipeline and save the result to the named file using >. The tests below will check every file.

Tip: wc -l server_log.txt prints 15 server_log.txt (count + filename). To get just the number, redirect: wc -l < server_log.txt prints only 15. Use the redirect form when saving counts to files.

  1. Count total lines: Feed server_log.txt into wc -l. Save to line_count.txt.
  2. Filter errors: Print only lines containing “ERROR”. Save to errors_only.txt.
  3. Count errors: Pipe grep "ERROR" server_log.txt into wc -l. Save to error_count.txt.
  4. Extract timestamps: Extract just the first field (the timestamps). Save to timestamps.txt.
  5. Top message types: Find the 2 most frequent message types. (Build step by step: extract field 2 → sort → count duplicates → sort by count descending → top 2) Save to top_message_types.txt.
Starter files
server_log.txt
08:12:01 INFO server started on port 8080
08:12:03 INFO database connection established
08:14:22 WARN high memory usage detected (82%)
08:15:45 ERROR failed to process request /api/users
08:16:01 INFO request completed in 230ms
08:18:33 ERROR database timeout after 30s
08:19:02 WARN disk usage above threshold (91%)
08:20:15 INFO cache refreshed successfully
08:22:47 ERROR connection refused by upstream service
08:23:01 INFO retry succeeded for /api/users
08:25:00 INFO scheduled backup completed
08:27:12 WARN deprecated API endpoint called: /v1/legacy
08:30:00 INFO health check passed
08:31:44 ERROR out of memory on worker-3
08:32:01 INFO worker-3 restarted
4

Variables & The Quoting Trap

Variables store values for reuse. In Bash, you assign with = and read with $.

The spaces rule — easy to break, hard to debug

color="blue"      # correct
color = "blue"    # WRONG — shell sees three words: "color", "=", "blue"

There must be no spaces around =. The shell interprets color = "blue" as running a command named color with arguments = and blue.

The quoting problem

When you write $variable, the shell replaces it with the value — then word-splits the result on any characters in $IFS (the Internal Field Separator, which defaults to space, tab, and newline). This causes chaos when values contain spaces:

file="my report.txt"
wc -l $file      # shell splits into: wc -l my report.txt  (TWO args!)
wc -l "$file"    # correct: one argument, treated as a unit

Rule: always double-quote your variables unless you have a specific reason not to.

See the bug (Predict → Debug)

buggy.sh has a deliberate bug related to what you just learned.

Before running it, open buggy.sh in the editor and read it carefully. The variable filename is set to "my report.txt" — a value with a space. Look at every line that uses $filename. Can you spot which line will break? Predict the exact error message you’ll see, then run:

bash buggy.sh

Was your prediction correct? The error message tells you exactly what Bash tried to do — and why it failed.

Fix it:

  1. Diagnose why wc -l is throwing an error based on what you just learned.
  2. Fix the syntax and run the script again.

Build your own

Open inventory.sh and write a script from scratch that:

  1. Declares a variable for a project name and another for a version number.
  2. Uses command substitution $(...) to dynamically count the number of .sh files in the current directory and save it to a variable. (Hint: try ls *.sh | wc -l. This works for simple filenames; production scripts use find instead.)
  3. Uses echo to print a single string combining all three variables, e.g., Project: mytools v1.0 — 5 scripts found
Starter files
buggy.sh
#!/bin/bash
set -e
# This script has a bug — can you find it?

filename="my report.txt"
echo "creating a test file..."
echo "important data" > "$filename"

# Something below is broken — can you find it?
line_count=$(wc -l $filename)
echo "Line count: $line_count"

rm "$filename"
inventory.sh
#!/bin/bash
set -e
# Create variables for a project name and version, then count .sh files
5

Conditionals — Making Decisions

Scripts need to react to different situations. Bash’s if statement runs commands conditionally based on whether a test succeeds.

Syntax

if [ condition ]; then
    # runs when condition is true
elif [ other_condition ]; then
    # runs when first is false but this is true
else
    # runs when all conditions are false
fi

Why the spaces inside [ ] are mandatory

[ is a shell builtin command (a synonym for test) — not special syntax. Like any command, its arguments must be separated by spaces:

[ -f "$file" ]    # correct: "[" receives "-f" and "$file" as args
[-f "$file"]      # WRONG: shell tries to run a command named "[-f"

You can confirm this with type -a [, which shows both the builtin and the external /usr/bin/[ binary. Bash always uses the builtin.
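Because [ is an ordinary command judged by its exit code, any command can sit in the if position, a sketch:

```shell
# grep -q prints nothing; its exit code alone decides the branch
if echo "hello world" | grep -q "world"; then
    echo "match found"
fi
```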

Common tests (Your Toolbox)

Test              Meaning
-f path           Path exists and is a regular file
-z "$var"         String is empty (zero length)
"$a" = "$b"       Strings are equal
$x -eq $y         Integers are equal
$x -gt $y         Integer greater than
! condition       Logical NOT

Important: use -eq, -lt, -gt for numbers; use = and != for strings. Mixing them gives wrong results silently!
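You can see the difference with values that are numerically equal but textually different (a sketch):

```shell
x=010
y=10
[ "$x" -eq "$y" ] && echo "equal as integers"      # the leading zero is ignored
[ "$x" = "$y" ]   || echo "different as strings"   # "010" and "10" don't match
```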

Pro Tip: [[ ]] vs [ ]

While [ ] is the standard POSIX way, Bash also provides [[ ]]. It is more powerful because it skips word splitting on unquoted variables, supports glob pattern matching with == and regex matching with =~, and allows && and || inside the brackets. The trade-off: it is Bash-only, so scripts that must run under plain sh should stick with [ ].
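A sketch of [[ ]] in action (Bash-only):

```shell
name="my report.txt"

# No word splitting inside [[ ]], even without quotes
if [[ -n $name ]]; then echo "non-empty"; fi

# Glob pattern matching with ==
if [[ $name == *.txt ]]; then echo "looks like a .txt file"; fi
```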

Discover a trap first

Before we start, try this experiment. Predict what happens, then run:

grep -c "NONEXISTENT" server_log.txt
echo "Did this print?"

Both lines should run fine. Now try it with set -e active:

bash -c 'set -e; grep -c "NONEXISTENT" server_log.txt; echo "Did this print?"'

What happened? grep -c found zero matches and returned exit code 1. With set -e, that non-zero exit code killed the entire script — echo never ran. But this isn’t really an error; it’s just “no matches found.” This is a common trap: grep treats “no matches” as failure.

The fix is || true — it means “if the command fails, succeed anyway.” The skeleton below uses this idiom. We’ll cover || fully in a later step.
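The idiom in isolation (a sketch with inline data):

```shell
# grep -c prints 0 and exits 1 when nothing matches;
# "|| true" turns that exit code 1 back into success
count=$(printf 'aaa\nbbb\n' | grep -c "NOPE" || true)
echo "matches: $count"    # → matches: 0
```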

Your task

We are providing a skeleton file health_check.sh. To help you structure your thinking, we’ve left blanks (_____) where the tests should go. Look at the “Common tests” toolbox above to fill them in logically:

  1. First blank: We want to exit if the file does not exist. How do you negate a file existence check?
  2. Second blank: We want to mark CRITICAL if error_count is greater than 3.
  3. Third blank: We want to mark WARNING if error_count is greater than 0.
Then run:

chmod +x health_check.sh
./health_check.sh server_log.txt    # should report CRITICAL (4 errors)
./health_check.sh nonexistent.txt   # should print an error and exit 1
Starter files
health_check.sh
#!/bin/bash
set -e

file="${1:-server_log.txt}"

# Step 1: Check if the file exists
if [ _____ ]; then
    echo "Error: $file not found" >&2
    exit 1
fi

# Step 2: Count ERROR lines
# Note: grep -c exits with code 1 when no matches are found.
# The "|| true" prevents set -e from killing the script in that case.
error_count=$(grep -c "ERROR" "$file" || true)

# Step 3: Decide severity
if [ _____ ]; then
    echo "CRITICAL: $error_count errors found"
elif [ _____ ]; then
    echo "WARNING: $error_count errors found"
else
    echo "OK: no errors found"
fi
6

Loops — Repeating Work

Loops eliminate repetition. Let’s look at iterating over globs (file expansions).

for f in *.sh; do # expands to all matching filenames
    echo "Found: $f"
done

Accumulating totals

A common pattern is keeping running counts across loop iterations using arithmetic expansion $(( ... )):

passed=0
# ... inside loop:
passed=$((passed + 1))
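Putting the glob loop and the accumulator together (a sketch; note that if no .sh files exist, the unexpanded pattern *.sh itself counts as one entry, which shopt -s nullglob avoids):

```shell
count=0
for f in *.sh; do
    count=$((count + 1))
done
echo "Found $count shell script(s)"
```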

Your task

Open batch_check.sh. We’ve provided the skeleton — the loop structure, counters, and summary line are already in place. Your job is to fill in the body of the loop (the three blanks):

  1. First blank: Capture the first line of the current file into the variable first. (Hint: head -1 "$f" prints the first line. Wrap it in $(...) to capture the output.)
  2. Second blank: Test whether first equals exactly #!/bin/bash. (Hint: use = for string comparison inside [ ]. Remember to quote both sides!)
  3. Third blank: The else branch — print a fail message and increment the failed counter. (Mirror the structure of the pass branch above it.)

Before running, predict: How many .sh files are in the directory right now? Which ones have a proper #!/bin/bash shebang and which don’t? (Hint: look at the files created in earlier steps — including no_shebang.sh that we’ve provided.) Write down your expected pass/fail counts, then run:

chmod +x batch_check.sh
./batch_check.sh

Does the output match your prediction? If not, check which files surprised you — that’s where the learning happens.

Starter files
batch_check.sh
#!/bin/bash
set -e

passed=0
failed=0

for f in *.sh; do
    # Blank 1: Capture the first line of "$f" into variable "first"
    first=_____

    # Blank 2: Check if "first" equals exactly "#!/bin/bash"
    if [ _____ ]; then
        echo "pass $f"
        passed=$((passed + 1))
    else
        # Blank 3: Print a fail message and increment "failed"
        _____
        _____
    fi
done

total=$((passed + failed))
echo "Checked $total files: $passed passed, $failed failed"
no_shebang.sh
set -e
7

Arguments & Special Variables

When you run ./script.sh one two three, the shell sets special variables automatically:

Variable          Contains
$0                The script’s own name (great for usage messages)
$1, $2, …         Positional arguments
$#                Total number of arguments passed
$@                All positional arguments (properly word-safe only when quoted as "$@")

Looping over arguments

"$@" expands to all arguments as separate, properly-quoted words. You can loop over them like this:

for f in "$@"; do
    echo "Processing: $f"
done
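The quotes around "$@" matter. A sketch comparing the quoted and unquoted forms when an argument contains a space (count_args is a hypothetical helper):

```shell
count_args() { echo $#; }      # prints how many arguments it received

set -- "one two" three         # simulate two script arguments

count_args "$@"    # → 2  (arguments preserved exactly)
count_args $@      # → 3  ("one two" was split on the space)
```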

Your task

Now we remove the training wheels. Write file_info.sh completely from scratch.

Requirements:

  1. Input Validation: Check if the number of arguments ($#) is equal to 0. If it is, print a usage message (e.g., echo "Usage: $0 <file1>...") and exit 1.
  2. Iteration: Loop over all arguments passed to the script using a for loop and "$@".
  3. Conditionals: Inside the loop, for each file:
    • Check if it is a directory (-d). If so, print <name>: directory.
    • Otherwise, check if the file does NOT exist (! -f). If so, print <name>: not found.
    • Else (it’s a real file), use wc -l < "$f" to count the lines and print <name>: <N> lines.

Tip: Think about the flow of data. Combine what you learned in the Conditionals step with the for loop shown above.

Test your script with:

chmod +x file_info.sh
./file_info.sh server_log.txt morning.sh /tmp nope.txt
Starter files
file_info.sh
#!/bin/bash
set -e
# Write your code below!
8

Functions — Reusable Building Blocks

Functions let you name a block of code and call it anywhere, just like external commands.

greet() {
    local name="$1"
    echo "Hello, ${name}!"
}

greet "engineer"   # → Hello, engineer!

Rule of Thumb: Always use local for variables declared inside a function so they don’t leak out and overwrite global variables. Functions receive $1, $2, etc. independently of the script’s own arguments.

Return Values

Functions exit with a numeric status code (0–255) set by return. By convention, return 0 means success and any non-zero value means failure — which lets you use functions directly in if statements. You can return specific non-zero codes (e.g., return 2 for bad arguments) to give callers richer information. To return data (strings, numbers), use echo inside the function and capture it outside with $(...); return only carries an exit code, not data.
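A sketch contrasting the two kinds of returning (is_even and double are hypothetical helpers):

```shell
# Exit-code style: the test's status becomes the function's status
is_even() {
    [ $(( $1 % 2 )) -eq 0 ]
}

if is_even 4; then
    echo "4 is even"
fi

# Data style: echo the value, capture it with $(...)
double() {
    echo $(( $1 * 2 ))
}

result=$(double 21)
echo "result: $result"    # → result: 42
```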

Your task

Write toolkit.sh and create these three functions:

  1. to_upper: Echoes its argument converted to uppercase. (Tool hint: echo "$1" | tr '[:lower:]' '[:upper:]')
  2. file_ext: Echoes the file extension of its argument. (Tool hint: echo "${1##*.}" strips everything up to the last dot)
  3. is_number: Checks if its argument is a valid integer using the Regex test [[ "$1" =~ ^-?[0-9]+$ ]]. If true, return 0. Else, return 1.

Write a small script below the functions to test them, ensuring they work!

Watch out for set -e: is_number returns 1 (failure) for non-numbers. If you call is_number abc as a bare command, set -e will kill your script. Always test it inside an if or with &&/|| — e.g., if is_number "$val"; then ....

Starter files
toolkit.sh
#!/bin/bash
set -e
9

Case Statements & Exit Codes

case — readable multi-way branching

When you need to check one variable against many possible values, case is cleaner than if/elif:

case "$input" in
    start)   echo "Starting..."  ;;
    stop)    echo "Stopping..."  ;;
    *)       echo "Unknown: $input" ;;
esac

Exit codes: the language of success and failure

Every command exits with a number. 0 always means success; any other value means failure.

exit 0    # success
exit 1    # general error
exit 2    # misuse / wrong arguments
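The special variable $? holds the exit code of the most recent command, a sketch:

```shell
ls /tmp > /dev/null
echo "ls exit code: $?"      # → ls exit code: 0

grep "xyzzy" /dev/null
echo "grep exit code: $?"    # → grep exit code: 1 (no match found)
```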

Conditional chaining: && and ||

Because every command returns an exit code, you can chain commands without a full if/then/fi block:

mkdir output && echo "Directory created"   # runs echo only if mkdir succeeds
cd /target || exit 1                        # exits script if cd fails

This is widely used in professional scripts for concise error handling. Note: set -e does not trigger for commands that are not the last in a &&/|| chain — those are treated as intentional control flow.

Your task

Write service.sh — a simulated service controller. Use a case statement to check the first argument $1.

Requirements:

Starter files
service.sh
#!/bin/bash
set -e
10

Build a Log Monitor

Time to combine everything into a real tool. This is a retrieval practice exercise: you have all the knowledge, now you must retrieve it from memory and synthesize it.

Before you write any code, look at server_log.txt one more time and predict: How many ERROR, WARN, and INFO lines are there? What severity status should your script report? What exit code should it return? Write your predictions down — you’ll check them against your script’s actual output.

Challenge

Write monitor.sh — a log-monitoring tool that analyzes server_log.txt and produces a complete status report.

Requirements:

  1. Accept an optional filename argument. If not provided, default to server_log.txt.
  2. Validate that the file exists; if not, print to stderr and exit.
  3. Print a header: === Log Monitor Report ===
  4. Summary section — write a function called count_by_level that takes a log level (e.g., “ERROR”) and the filename, and echoes the count. Use it to report:
    • Total entries
    • Count of ERROR, WARN, and INFO entries
  5. Error details: Loop over ERROR lines and print each one. (Remember: grep -c exits with code 1 when there are zero matches. Use || true to prevent set -e from killing your script — just like in the health_check step.)
  6. Severity assessment: Use a case statement on the error count: 0 → print Status: HEALTHY, 1|2|3 → Status: WARNING, * (anything else) → Status: CRITICAL. (Note: case uses glob patterns, not numeric ranges. Use | to match multiple values: 1|2|3 matches 1, 2, or 3.)
  7. Exit with code 0 if no errors are found, and code 1 if errors are present.

Design Approach

Don’t just write code immediately. In learning science, planning reduces cognitive load. Sketch your script out in comments first:

# 1. Handle arguments and default file
# 2. Check if file exists
# 3. Print Header
# 4. Calculate counts using grep/wc
# ...

Once your structure is clear, write the bash code.

When NOT to use Shell Scripting

Shell scripting is powerful for text processing and automation, but it has real limits. Knowing when not to use a tool is as important as knowing how to use it. Switch to Python (or another general-purpose language) when:

  • You need real data structures (nested lists, dictionaries, objects) rather than flat text streams.
  • You need floating-point math; Bash arithmetic is integer-only.
  • The logic grows beyond a hundred lines or so, or needs proper error handling and unit tests.
  • You are parsing structured formats such as JSON, XML, or CSV, where quoting edge cases multiply.

Bash is a glue language: brilliant for orchestrating other programs and processing text streams. Use it for that, and reach for a real programming language when the task outgrows it.

Starter files
monitor.sh
#!/bin/bash
set -e