Mock 53
Every exercise mixes at least two of: command/args shell scripting, ConfigMaps, Secrets, Downward API, and shared volumes. No solutions are provided.
Exercise 1 — Deployment with a Logging Sidecar and Log Rotation
Scenario: You have a web-app Deployment whose main container writes structured logs to a shared volume. A sidecar is responsible for shipping those logs (simulated here by printing them). A third init container pre-creates the log directory with the correct permissions.
Requirements:
Create a namespace weblog.
Create a ConfigMap log-config in weblog with:
LOG_PATH=/var/log/app
LOG_FILE=app.log
MAX_ENTRIES=100
Create a Deployment web-logger in weblog with 2 replicas and 3 containers:
Init container — dir-setup (busybox):
- Reads LOG_PATH from the ConfigMap as an env var.
- Creates the directory at $LOG_PATH and writes a file init.txt containing the text directory created by init at <timestamp>, using command substitution.
Main container — app (busybox):
- Reads LOG_PATH, LOG_FILE, and MAX_ENTRIES as env vars from the ConfigMap.
- Also exposes MY_POD_NAME and MY_POD_IP via the Downward API as env vars.
- Runs an infinite loop that every 2 seconds appends a JSON-like line to $LOG_PATH/$LOG_FILE: {"pod":"<MY_POD_NAME>","ip":"<MY_POD_IP>","entry":<counter>,"ts":"<date>"}
- When counter exceeds MAX_ENTRIES, resets it to 0 and truncates the log file (overwrite instead of append on that iteration).
Sidecar container — log-shipper (busybox):
- Reads LOG_PATH and LOG_FILE from the ConfigMap as env vars.
- Waits until $LOG_PATH/$LOG_FILE exists (polls in a loop with sleep 1), then runs tail -f on it.
All three containers share an emptyDir volume at /var/log/app.
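As a sketch, the app container's loop could look like the following. The stand-in values replace what the ConfigMap and Downward API would inject, the iteration count is bounded, and the sleep 2 is dropped so it runs to completion outside a pod; in the pod this would be an infinite while true loop.

```shell
# Stand-in values; in the pod these env vars come from the ConfigMap and the Downward API
LOG_PATH=/tmp/ex1-demo
LOG_FILE=app.log
MAX_ENTRIES=3
MY_POD_NAME=web-logger-demo
MY_POD_IP=10.0.0.1

mkdir -p "$LOG_PATH"       # in the exercise, the init container creates this
counter=0
i=0
while [ "$i" -lt 8 ]; do   # bounded for this sketch; the pod uses `while true`
  counter=$((counter + 1))
  line="{\"pod\":\"$MY_POD_NAME\",\"ip\":\"$MY_POD_IP\",\"entry\":$counter,\"ts\":\"$(date)\"}"
  if [ "$counter" -gt "$MAX_ENTRIES" ]; then
    counter=0
    echo "$line" > "$LOG_PATH/$LOG_FILE"    # overwrite: truncates on this iteration
  else
    echo "$line" >> "$LOG_PATH/$LOG_FILE"
  fi
  i=$((i + 1))                              # the pod would `sleep 2` here
done
```

With MAX_ENTRIES=3 and 8 iterations, the file is truncated twice and ends holding a single line.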
Verify:
- kubectl logs <pod> -c log-shipper shows JSON lines with the correct pod name and IP.
- Both replicas use their own pod name/IP in the log entries.
Exercise 2 — StatefulSet with Per-Pod Identity via Downward API and Init Container
Scenario: A StatefulSet where each pod must generate a unique config file on startup based on its own identity — pod name, ordinal index (parsed from the pod name), and namespace.
Requirements:
Create a namespace stateful-app.
Create a Secret cluster-secret in stateful-app with:
CLUSTER_TOKEN=tok3n-xyz-9999
JOIN_KEY=s3cur3-join-k3y
Create a StatefulSet identity-sts in stateful-app with 3 replicas, image busybox, and no headless service needed for this exercise.
Init container — config-gen (busybox):
- Exposes MY_POD_NAME and MY_NAMESPACE via the Downward API as env vars.
- Mounts the Secret cluster-secret as a volume at /etc/cluster-secret (not as env vars).
- Parses the ordinal from MY_POD_NAME (the number after the last -) using shell string manipulation — no external tools.
- Writes /shared/node.conf with content like:
  NODE_ID=<ordinal>
  NODE_NAME=<MY_POD_NAME>
  NAMESPACE=<MY_NAMESPACE>
  TOKEN=<contents of the /etc/cluster-secret/CLUSTER_TOKEN file>
  JOIN_KEY=<contents of the /etc/cluster-secret/JOIN_KEY file>
Main container — node (busybox):
- Waits for /shared/node.conf, then prints its full contents once and sleeps indefinitely.
Share an emptyDir at /shared between init and main containers.
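The init container's logic could be sketched like this; the secret files and directories are faked locally, whereas in the pod they come from the Secret volume and the emptyDir mount.

```shell
# Stand-ins for the Downward API env vars and the mounted volumes (local assumptions)
MY_POD_NAME=identity-sts-2
MY_NAMESPACE=stateful-app
SECRET_DIR=/tmp/ex2-secret   # stands in for /etc/cluster-secret
SHARED_DIR=/tmp/ex2-shared   # stands in for /shared
mkdir -p "$SECRET_DIR" "$SHARED_DIR"
printf 'tok3n-xyz-9999' > "$SECRET_DIR/CLUSTER_TOKEN"
printf 's3cur3-join-k3y' > "$SECRET_DIR/JOIN_KEY"

# Ordinal = everything after the last '-' in the pod name (pure shell, no external tools)
ordinal=${MY_POD_NAME##*-}

cat > "$SHARED_DIR/node.conf" <<EOF
NODE_ID=$ordinal
NODE_NAME=$MY_POD_NAME
NAMESPACE=$MY_NAMESPACE
TOKEN=$(cat "$SECRET_DIR/CLUSTER_TOKEN")
JOIN_KEY=$(cat "$SECRET_DIR/JOIN_KEY")
EOF
```

The ${MY_POD_NAME##*-} expansion is the same trick called out in the Notes at the end of this set.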
Verify:
- Each pod prints a node.conf with the correct NODE_ID (0, 1, 2).
- Token and join key values are correctly read from the secret volume.
Exercise 3 — Job with Retry Logic and Exit Code Control via Args
Scenario: A data-processing Job that simulates flaky work. It should retry on failure (up to 4 times), and the container itself decides success or failure based on a config value.
Requirements:
Create a namespace batch.
Create a ConfigMap job-config in batch with:
FAILURE_THRESHOLD=3
WORK_UNITS=10
Create a Job flaky-processor in batch:
- backoffLimit: 4
- completions: 1
- Image: busybox
- Injects FAILURE_THRESHOLD and WORK_UNITS as env vars from the ConfigMap.
- Also exposes MY_POD_NAME via the Downward API.
- The container's shell script must:
  - Print Starting job on pod $MY_POD_NAME.
  - Loop from 1 to $WORK_UNITS, printing Processing unit <n> with a sleep 0.5 between each.
  - After the loop, generate a random number between 1 and 10 using $RANDOM and shell arithmetic.
  - If the random number is less than or equal to $FAILURE_THRESHOLD, print FAILED: random=<n>, threshold=$FAILURE_THRESHOLD and exit with code 1.
  - Otherwise print SUCCESS: random=<n> and exit 0.
Write the entire logic in the args field (using sh -c in command). Do not use a ConfigMap-mounted script file.
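A possible shape for the args script is sketched below. Stand-in values replace the injected env vars; the logic is wrapped in a function here so a local run doesn't exit the shell (the real Job would use exit 1 / exit 0 directly), sleep 0.5 is omitted, and ${RANDOM:-7} keeps the sketch runnable even in shells without $RANDOM support.

```shell
# Stand-in values; the Job injects these from the ConfigMap and the Downward API
FAILURE_THRESHOLD=3
WORK_UNITS=10
MY_POD_NAME=flaky-processor-demo

run_job() {
  echo "Starting job on pod $MY_POD_NAME"
  n=1
  while [ "$n" -le "$WORK_UNITS" ]; do
    echo "Processing unit $n"      # the pod would also `sleep 0.5` here
    n=$((n + 1))
  done
  # Random number 1..10; ${RANDOM:-7} falls back to 7 where $RANDOM is unset
  rnd=$(( ${RANDOM:-7} % 10 + 1 ))
  if [ "$rnd" -le "$FAILURE_THRESHOLD" ]; then
    echo "FAILED: random=$rnd, threshold=$FAILURE_THRESHOLD"
    return 1                       # `exit 1` in the real Job
  fi
  echo "SUCCESS: random=$rnd"
  return 0                         # `exit 0` in the real Job
}

out=$(run_job)
status=$?
echo "$out"
```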
Verify:
- kubectl get job flaky-processor eventually shows 1/1 completions after some retries.
- kubectl describe job flaky-processor shows retry history.
Exercise 4 — CronJob that Archives Logs from a Shared PVC
Scenario: An always-running Deployment writes logs to a file on a shared emptyDir. A CronJob periodically reads those logs and writes an archive summary. Both share a volume — but since CronJobs spin up new pods, the volume must be a hostPath (use /tmp/app-logs on the node) to simulate persistence.
Note: In a real cluster this would use a PVC. For this exercise, use hostPath with path /tmp/app-logs and type DirectoryOrCreate.
Requirements:
Create a namespace archive.
Create a ConfigMap archive-config in archive with:
LOG_FILE=/logs/app.log
ARCHIVE_DIR=/logs/archives
LINES_PER_SUMMARY=20
Deployment log-producer in archive (1 replica, busybox):
- Reads LOG_FILE from the ConfigMap.
- Infinite loop: every 1 second, append <timestamp> - event <counter> to $LOG_FILE.
- Mounts the hostPath volume at /logs.
CronJob log-archiver in archive (schedule: every 2 minutes — use */2 * * * *):
- Reads LOG_FILE, ARCHIVE_DIR, and LINES_PER_SUMMARY from the ConfigMap.
- Also exposes MY_POD_NAME via the Downward API.
- The job's shell script must:
  - Create $ARCHIVE_DIR if it doesn't exist.
  - Read the last $LINES_PER_SUMMARY lines from $LOG_FILE using tail.
  - Write them to $ARCHIVE_DIR/summary-<timestamp>.txt.
  - Print Archived by $MY_POD_NAME at <timestamp>: <line-count> lines written.
- Mounts the same hostPath volume at /logs.
- Set successfulJobsHistoryLimit: 3 and failedJobsHistoryLimit: 1.
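The archiver's script could be sketched like this; a fake producer log is seeded first so the sketch runs stand-alone, and the local paths stand in for the /logs hostPath mount.

```shell
# Stand-in values; the CronJob injects these from the ConfigMap and the Downward API
LOG_FILE=/tmp/ex4-logs/app.log
ARCHIVE_DIR=/tmp/ex4-logs/archives
LINES_PER_SUMMARY=20
MY_POD_NAME=log-archiver-demo

# Seed a fake producer log (30 events) so this sketch is self-contained
mkdir -p "$(dirname "$LOG_FILE")"
i=1; while [ "$i" -le 30 ]; do echo "$(date) - event $i"; i=$((i + 1)); done > "$LOG_FILE"

mkdir -p "$ARCHIVE_DIR"                       # create the archive dir if missing
ts=$(date +%Y%m%d-%H%M%S)
tail -n "$LINES_PER_SUMMARY" "$LOG_FILE" > "$ARCHIVE_DIR/summary-$ts.txt"
count=$(wc -l < "$ARCHIVE_DIR/summary-$ts.txt")
echo "Archived by $MY_POD_NAME at $ts: $count lines written"
```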
Verify:
- After 2+ minutes, kubectl logs <archiver-pod> shows the archive message.
- Files appear under /tmp/app-logs/archives/ on the node.
Exercise 5 — DaemonSet Node Agent with Downward API and Secret Token Auth
Scenario: A DaemonSet that runs a monitoring agent on every node. Each agent identifies itself using its node name (Downward API), authenticates using a shared secret token, and writes a per-node heartbeat file to a hostPath directory.
Requirements:
Create a namespace monitoring.
Create a Secret agent-auth in monitoring with:
AUTH_TOKEN=node-agent-s3cr3t
ENDPOINT=https://monitor.internal/ingest
Create a ConfigMap agent-config in monitoring with:
HEARTBEAT_INTERVAL=5
AGENT_VERSION=1.2.0
Create a DaemonSet node-agent in monitoring (busybox image) with:
- MY_NODE_NAME exposed via the Downward API (use a spec.nodeName field ref).
- MY_POD_NAME and MY_NAMESPACE exposed via the Downward API.
- AUTH_TOKEN and ENDPOINT injected from the Secret using secretKeyRef (env vars, not a volume).
- HEARTBEAT_INTERVAL and AGENT_VERSION from the ConfigMap as env vars.
- Mounts a hostPath volume (/tmp/agent-heartbeats, type DirectoryOrCreate) at /heartbeats.
- The container runs an infinite loop that every $HEARTBEAT_INTERVAL seconds writes a file /heartbeats/$MY_NODE_NAME.json with content:
  {"node":"<MY_NODE_NAME>","pod":"<MY_POD_NAME>","ns":"<MY_NAMESPACE>","version":"<AGENT_VERSION>","token":"<first 6 chars of AUTH_TOKEN>...","ts":"<date>"}
  Truncate the token to its first 6 characters using shell string slicing: ${AUTH_TOKEN:0:6}.
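A single heartbeat iteration might look like the following. Stand-in values replace the Secret, ConfigMap, and Downward API injection; the DaemonSet would wrap this in while true with sleep $HEARTBEAT_INTERVAL. cut -c1-6 is used as the portable fallback for ${AUTH_TOKEN:0:6} (see the Notes at the end).

```shell
# Stand-in values; in the DaemonSet these come from the Secret, ConfigMap, and Downward API
MY_NODE_NAME=node-1
MY_POD_NAME=node-agent-abcde
MY_NAMESPACE=monitoring
AUTH_TOKEN=node-agent-s3cr3t
AGENT_VERSION=1.2.0
HEARTBEAT_DIR=/tmp/agent-heartbeats

mkdir -p "$HEARTBEAT_DIR"
# First 6 characters of the token; portable equivalent of ${AUTH_TOKEN:0:6}
short=$(echo "$AUTH_TOKEN" | cut -c1-6)
echo "{\"node\":\"$MY_NODE_NAME\",\"pod\":\"$MY_POD_NAME\",\"ns\":\"$MY_NAMESPACE\",\"version\":\"$AGENT_VERSION\",\"token\":\"$short...\",\"ts\":\"$(date)\"}" \
  > "$HEARTBEAT_DIR/$MY_NODE_NAME.json"
```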
Verify:
- A .json file exists under /tmp/agent-heartbeats/ for each node.
- The token field shows only the first 6 characters followed by ....
Exercise 6 — Multi-Stage Job with Init + Sidecar Coordination via Shared Volume and Exit File
Scenario: A Job where the main container performs work and a sidecar monitors progress. The sidecar must exit cleanly when the main container is done — without using shareProcessNamespace. Use an exit-file convention on a shared volume.
Requirements:
Create a namespace pipeline.
Create a ConfigMap pipeline-cfg in pipeline with:
STAGES=5
STAGE_DURATION=3
OUTPUT_DIR=/work/output
Create a Secret pipeline-creds in pipeline with:
PIPELINE_ID=pipe-2024-alpha
SIGN_KEY=abc123def456
Create a Job staged-pipeline in pipeline:
- backoffLimit: 0
- Two containers (not init — both must run concurrently):
Container 1 — worker (busybox):
- Reads STAGES, STAGE_DURATION, and OUTPUT_DIR from the ConfigMap as env vars.
- Reads PIPELINE_ID and SIGN_KEY from the Secret as env vars.
- Exposes MY_POD_NAME via the Downward API.
- The shell script:
  - Creates $OUTPUT_DIR.
  - Loops through stages 1 to $STAGES. For each stage:
    - Prints [worker] Stage <n>/$STAGES starting.
    - Writes $OUTPUT_DIR/stage-<n>.txt with content: stage=<n> pipeline=$PIPELINE_ID pod=$MY_POD_NAME signed=<first-8-chars-of-SIGN_KEY>.
    - Sleeps $STAGE_DURATION seconds.
    - Prints [worker] Stage <n> complete.
  - After all stages, writes /work/done (an empty file) to signal completion.
  - Prints [worker] All stages complete. Exiting. and exits 0.
Container 2 — monitor (busybox):
- Reads STAGES and OUTPUT_DIR from the ConfigMap as env vars.
- The shell script:
  - Polls every 2 seconds. On each poll:
    - Counts how many stage-*.txt files exist in $OUTPUT_DIR using ls | wc -l (handle a missing dir gracefully).
    - Prints [monitor] <n>/$STAGES stages completed.
  - When /work/done exists, prints [monitor] Worker finished. Shutting down. and exits 0.
Share an emptyDir at /work between both containers.
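The exit-file handshake can be simulated locally by running the worker as a background subshell and the monitor in the foreground. Stand-in values are assumed, STAGE_DURATION is set to 0 for a fast run, and the monitor polls every second instead of every two.

```shell
# Stand-in values; the Job injects STAGES / STAGE_DURATION from the ConfigMap
WORK=/tmp/ex6-work
OUTPUT_DIR=$WORK/output
STAGES=3
STAGE_DURATION=0               # 0 for a fast local run; the pod uses the ConfigMap value
rm -rf "$WORK"; mkdir -p "$OUTPUT_DIR"

# --- worker (container 1), backgrounded here to simulate a second container ---
(
  n=1
  while [ "$n" -le "$STAGES" ]; do
    echo "[worker] Stage $n/$STAGES starting"
    echo "stage=$n" > "$OUTPUT_DIR/stage-$n.txt"
    sleep "$STAGE_DURATION"
    echo "[worker] Stage $n complete"
    n=$((n + 1))
  done
  : > "$WORK/done"             # the exit-file signal
  echo "[worker] All stages complete. Exiting."
) &

# --- monitor (container 2) ---
while [ ! -f "$WORK/done" ]; do
  count=$(ls "$OUTPUT_DIR" 2>/dev/null | wc -l)   # 2>/dev/null handles a missing dir
  echo "[monitor] $count/$STAGES stages completed"
  sleep 1
done
echo "[monitor] Worker finished. Shutting down."
wait
```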
Verify:
- kubectl logs <pod> -c worker shows all stage completions.
- kubectl logs <pod> -c monitor shows progress updates and a clean shutdown message.
- kubectl get job staged-pipeline shows 1/1 completions.
Exercise 7 — Deployment Rolling Update with ConfigMap-Driven Behavior Change
Scenario: You have a running Deployment whose container behavior is entirely driven by a ConfigMap. You must update the ConfigMap and trigger a rolling restart, then verify the new behavior — all without deleting the Deployment.
Requirements:
Create a namespace rollout.
Create a ConfigMap app-behavior in rollout with:
MODE=verbose
REPEAT=3
MESSAGE=hello from kubernetes
Create a Deployment configurable-app in rollout with 3 replicas (busybox):
- Injects MODE, REPEAT, and MESSAGE as env vars from the ConfigMap.
- Also exposes MY_POD_NAME via the Downward API.
- The shell script runs an infinite loop:
  - If MODE=verbose: every 4 seconds, print [verbose][<MY_POD_NAME>] <MESSAGE> repeated $REPEAT times (each on its own line, using a nested loop).
  - If MODE=quiet: every 4 seconds, print only . (a dot).
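One iteration of the mode switch could be sketched as follows. Stand-in values are assumed; the function wrapper exists only so the iteration can be re-run locally, and the pod would loop it with sleep 4.

```shell
# Stand-in values; the Deployment injects these from the ConfigMap and the Downward API
MODE=verbose
REPEAT=3
MESSAGE="hello from kubernetes"
MY_POD_NAME=configurable-app-demo

# One loop-body iteration; the pod wraps this in `while true; do ...; sleep 4; done`
emit() {
  if [ "$MODE" = "verbose" ]; then
    i=1
    while [ "$i" -le "$REPEAT" ]; do   # nested loop: one message per line
      echo "[verbose][$MY_POD_NAME] $MESSAGE"
      i=$((i + 1))
    done
  else
    echo "."
  fi
}
emit
```

Changing MODE to quiet makes the same function print a single dot, which is exactly what the ConfigMap update in Part B flips.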
Part A: Deploy and verify verbose mode is active in pod logs.
Part B: Update the ConfigMap:
MODE=quiet
REPEAT=3
MESSAGE=hello from kubernetes
Then trigger a rolling restart using the appropriate kubectl command (not kubectl delete).
Verify:
- Old pods show verbose output before the restart.
- New pods show only . after the restart.
- During the rollout, at least one old and one new pod are running simultaneously — confirm this by watching kubectl get pods -n rollout during the restart.
Exercise 8 — CronJob + Init Container + Downward API Volume (Not Env Vars)
Scenario: A CronJob whose init container uses a Downward API volume (not env vars) to read pod labels, and uses those label values to name its output file.
Requirements:
Create a namespace scheduled.
Create a ConfigMap schedule-cfg in scheduled with:
REPORT_DIR=/data/reports
ITEM_COUNT=5
Create a Secret schedule-secret in scheduled with:
REPORT_TOKEN=rpt-t0k3n-x99
Create a CronJob labeled-reporter in scheduled (schedule: */3 * * * *) with the following pod template:
Pod labels (set on the pod template):
app: reporter
env: staging
team: infra
Init container — label-reader (busybox):
- Mounts a Downward API volume at /etc/podinfo exposing the pod labels as a file: labels.
- Mounts a shared emptyDir at /data.
- The shell script:
  - Reads /etc/podinfo/labels (the format is key="value" per line).
  - Parses the env and team label values using grep and cut or sed.
  - Writes /data/report-name.txt containing <env>-<team> (e.g. staging-infra).
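The label parsing can be sketched like this; a hand-written labels file stands in for the Downward API volume, using the key="value" format that a downwardAPI labels file uses.

```shell
# Fake mounts; in the pod, /etc/podinfo is the Downward API volume and /data the emptyDir
PODINFO=/tmp/ex8-podinfo
DATA=/tmp/ex8-data
mkdir -p "$PODINFO" "$DATA"
cat > "$PODINFO/labels" <<'EOF'
app="reporter"
env="staging"
team="infra"
EOF

# key="value" per line: grep the key, then cut the quoted value out
env_val=$(grep '^env=' "$PODINFO/labels" | cut -d'"' -f2)
team_val=$(grep '^team=' "$PODINFO/labels" | cut -d'"' -f2)
echo "$env_val-$team_val" > "$DATA/report-name.txt"
```

The main container then only has to read /data/report-name.txt to build its output filename.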
Main container — reporter (busybox):
- Reads REPORT_DIR and ITEM_COUNT from the ConfigMap as env vars.
- Reads REPORT_TOKEN from the Secret as an env var.
- Mounts the same emptyDir at /data.
- The shell script:
  - Waits for /data/report-name.txt to exist.
  - Reads the report name from that file.
  - Creates $REPORT_DIR if needed.
  - Writes a report file $REPORT_DIR/<report-name>-<timestamp>.txt containing:
    - token=<first-8-chars-of-REPORT_TOKEN>
    - items=<ITEM_COUNT>
    - One line per item: item-<n>: generated at <date> (loop from 1 to $ITEM_COUNT).
  - Prints Report written: <filename> and exits.
Set successfulJobsHistoryLimit: 2.
Verify:
- The report file is named staging-infra-<timestamp>.txt.
- Token is truncated to 8 characters.
- File contains the correct number of item lines.
Exercise 9 — StatefulSet with Per-Pod ConfigMap Projection + Secret Volume + Downward API Volume (All Three as Volumes)
Scenario: A StatefulSet where every container reads its full configuration exclusively from mounted volumes — no env vars at all. ConfigMap, Secret, and Downward API are all mounted as separate volume paths.
Requirements:
Create a namespace vault.
Create a ConfigMap vault-config in vault with:
MAX_CONNECTIONS=50
TIMEOUT_SECONDS=30
REGION=us-east-1
Create a Secret vault-secret in vault with:
ROOT_TOKEN=vault-r00t-t0k3n
UNSEAL_KEY=un53al-k3y-42
Create a StatefulSet vault-node in vault with 2 replicas, image busybox.
Each pod must:
- Mount the vault-config ConfigMap as a volume at /config/app (all keys become files).
- Mount the vault-secret Secret as a volume at /config/secrets (all keys become files).
- Mount a Downward API volume at /config/meta exposing:
  pod-name → metadata.name
  pod-namespace → metadata.namespace
  pod-ip → status.podIP
  node-name → spec.nodeName
The container shell script must — using only file reads, no env vars — produce this output on startup:
=== Vault Node Config ===
Identity:
pod: <contents of /config/meta/pod-name>
ns: <contents of /config/meta/pod-namespace>
ip: <contents of /config/meta/pod-ip>
node: <contents of /config/meta/node-name>
App Config:
max_connections: <contents of /config/app/MAX_CONNECTIONS>
timeout: <contents of /config/app/TIMEOUT_SECONDS>
region: <contents of /config/app/REGION>
Secrets:
root_token: <first-10-chars>...
unseal_key: <first-10-chars>...
After printing, the container sleeps indefinitely.
Implement the 10-char truncation using shell cut or parameter expansion — not hardcoded.
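A sketch of the volumes-only startup output, with fake mount directories standing in for the three projected volumes (in the pod, Kubernetes writes these files for you):

```shell
# Fake mounts; in the pod these are the ConfigMap, Secret, and Downward API volumes
CFG=/tmp/ex9-config
mkdir -p "$CFG/app" "$CFG/secrets" "$CFG/meta"
printf '50' > "$CFG/app/MAX_CONNECTIONS"; printf '30' > "$CFG/app/TIMEOUT_SECONDS"
printf 'us-east-1' > "$CFG/app/REGION"
printf 'vault-r00t-t0k3n' > "$CFG/secrets/ROOT_TOKEN"
printf 'un53al-k3y-42' > "$CFG/secrets/UNSEAL_KEY"
printf 'vault-node-0' > "$CFG/meta/pod-name"; printf 'vault' > "$CFG/meta/pod-namespace"
printf '10.0.0.5' > "$CFG/meta/pod-ip"; printf 'node-1' > "$CFG/meta/node-name"

# File reads only, no env vars; cut -c1-10 does the non-hardcoded truncation
echo "=== Vault Node Config ==="
echo "Identity:"
echo "  pod: $(cat "$CFG/meta/pod-name")"
echo "  ns: $(cat "$CFG/meta/pod-namespace")"
echo "  ip: $(cat "$CFG/meta/pod-ip")"
echo "  node: $(cat "$CFG/meta/node-name")"
echo "App Config:"
echo "  max_connections: $(cat "$CFG/app/MAX_CONNECTIONS")"
echo "  timeout: $(cat "$CFG/app/TIMEOUT_SECONDS")"
echo "  region: $(cat "$CFG/app/REGION")"
echo "Secrets:"
echo "  root_token: $(cut -c1-10 "$CFG/secrets/ROOT_TOKEN")..."
echo "  unseal_key: $(cut -c1-10 "$CFG/secrets/UNSEAL_KEY")..."
```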
Verify:
- Both pods print their own pod name and IP (different for each).
- Secret values are truncated correctly.
- No env: or envFrom: fields exist anywhere in the StatefulSet spec.
Notes
- For exercises involving hostPath, you may need to exec into the node or use a debug pod to verify files.
- Shell arithmetic for random numbers: $(( RANDOM % 10 + 1 )).
- String slicing in busybox sh: ${VAR:0:N} works in ash; if not, use echo $VAR | cut -c1-N.
- Parsing ordinals from pod names: ${POD_NAME##*-} extracts everything after the last -.
- Downward API status.podIP may not be populated instantly — a short sleep 2 in init containers can help.