diff --git a/.DS_Store b/.DS_Store new file mode 100644 index 00000000..5e90802f Binary files /dev/null and b/.DS_Store differ diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md new file mode 100644 index 00000000..63d53ecb --- /dev/null +++ b/.github/pull_request_template.md @@ -0,0 +1,17 @@ +## Goal + +In short: why this PR? + +## Changes + +What and why are we changing? + +## Testing + +How we checked: steps/logic + +### Checklist + +- [ ] PR has a clear, specific title +- [ ] Updated README as needed +- [ ] No secrets and junk/large temporary files diff --git a/.github/workflows/github-actions-demo.yml b/.github/workflows/github-actions-demo.yml new file mode 100644 index 00000000..acbfd7a1 --- /dev/null +++ b/.github/workflows/github-actions-demo.yml @@ -0,0 +1,35 @@ +name: GitHub Actions Demo + +on: + push: + branches: [ "feature/lab3" ] + workflow_dispatch: + +jobs: + Explore-GitHub-Actions: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - name: Print GitHub context + run: | + if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then + echo "πŸŽ‰ The job was manually triggered using workflow_dispatch" + else + echo "πŸŽ‰ The job was automatically triggered by a ${{ github.event_name }} event." + fi + + - name: List files in the repository + run: | + ls ${{ github.workspace }} + + - name: System Information + run: | + echo "πŸ–₯️ Runner Environment Information:" + echo "OS: $(uname -a)" + echo "CPU Info:" + lscpu + echo "Memory Info:" + free -h + echo "Disk Info:" + df -h diff --git a/gitops-lab/current-state.txt b/gitops-lab/current-state.txt new file mode 100644 index 00000000..345c3ef0 --- /dev/null +++ b/gitops-lab/current-state.txt @@ -0,0 +1,3 @@ +version: 1.0 +app: myapp +replicas: 3 diff --git a/gitops-lab/desired-state.txt b/gitops-lab/desired-state.txt new file mode 100644 index 00000000..345c3ef0 --- /dev/null +++ b/gitops-lab/desired-state.txt @@ -0,0 +1,3 @@ +version: 1.0 +app: myapp +replicas: 3 diff --git a/gitops-lab/health.log b/gitops-lab/health.log new file mode 100644 index 00000000..2c5246e1 --- /dev/null +++ b/gitops-lab/health.log @@ -0,0 +1,25 @@ +Wed Nov 12 13:29:15 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:25 MSK 2025 - ❌ CRITICAL: State mismatch detected! 
+ Desired MD5: a15a1a4f965ecd8f9e23a33a6b543155 + Current MD5: 48168ff3ab5ffc0214e81c7e2ee356f5 +Wed Nov 12 13:29:37 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:50 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:53 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:56 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:59 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:02 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:05 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:08 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:11 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:14 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:17 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:33 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:36 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:39 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:42 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:45 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:48 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:52 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:55 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:58 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:31:01 MSK 2025 - βœ… OK: States synchronized diff --git a/gitops-lab/healthcheck.sh b/gitops-lab/healthcheck.sh new file mode 100755 index 00000000..28edd239 --- /dev/null +++ b/gitops-lab/healthcheck.sh @@ -0,0 +1,13 @@ +#!/bin/bash +# healthcheck.sh - Monitor GitOps sync health + +DESIRED_MD5=$(md5sum desired-state.txt | awk '{print $1}') +CURRENT_MD5=$(md5sum current-state.txt | awk '{print $1}') + +if [ "$DESIRED_MD5" != "$CURRENT_MD5" ]; then + echo "$(date) - ❌ CRITICAL: State mismatch detected!" | tee -a health.log + echo " Desired MD5: $DESIRED_MD5" | tee -a health.log + echo " Current MD5: $CURRENT_MD5" | tee -a health.log +else + echo "$(date) - βœ… OK: States synchronized" | tee -a health.log +fi diff --git a/gitops-lab/monitor.sh b/gitops-lab/monitor.sh new file mode 100755 index 00000000..f8792127 --- /dev/null +++ b/gitops-lab/monitor.sh @@ -0,0 +1,10 @@ +#!/bin/bash +# monitor.sh - Combined reconciliation and health monitoring + +printf "Starting GitOps monitoring...\n" +for i in {1..10}; do + printf "\n--- Check #%d ---\n" "$i" + ./healthcheck.sh + ./reconcile.sh + sleep 3 +done diff --git a/gitops-lab/reconcile.sh b/gitops-lab/reconcile.sh new file mode 100755 index 00000000..022bc936 --- /dev/null +++ b/gitops-lab/reconcile.sh @@ -0,0 +1,14 @@ +#!/bin/bash +# reconcile.sh - GitOps reconciliation loop + +DESIRED=$(cat desired-state.txt) +CURRENT=$(cat current-state.txt) + +if [ "$DESIRED" != "$CURRENT" ]; then + echo "$(date) - ⚠️ DRIFT DETECTED!" + echo "Reconciling current state with desired state..." 
+    cp desired-state.txt current-state.txt
+    echo "$(date) - βœ… Reconciliation complete"
+else
+    echo "$(date) - βœ… States synchronized"
+fi
diff --git a/labs/images/submission3/1759872633220.png b/labs/images/submission3/1759872633220.png
new file mode 100644
index 00000000..030d3cc6
Binary files /dev/null and b/labs/images/submission3/1759872633220.png differ
diff --git a/labs/images/submission8/lab8-checkly-alerts.png b/labs/images/submission8/lab8-checkly-alerts.png
new file mode 100644
index 00000000..dd5cb213
Binary files /dev/null and b/labs/images/submission8/lab8-checkly-alerts.png differ
diff --git a/labs/images/submission8/lab8-checkly-api.png b/labs/images/submission8/lab8-checkly-api.png
new file mode 100644
index 00000000..c7d8b761
Binary files /dev/null and b/labs/images/submission8/lab8-checkly-api.png differ
diff --git a/labs/images/submission8/lab8-checkly-browser.png b/labs/images/submission8/lab8-checkly-browser.png
new file mode 100644
index 00000000..308fdc4b
Binary files /dev/null and b/labs/images/submission8/lab8-checkly-browser.png differ
diff --git a/labs/images/submission8/lab8-checkly-dashboard.png b/labs/images/submission8/lab8-checkly-dashboard.png
new file mode 100644
index 00000000..230f4cdf
Binary files /dev/null and b/labs/images/submission8/lab8-checkly-dashboard.png differ
diff --git a/labs/images/submission8/lab8-checkly-result.png b/labs/images/submission8/lab8-checkly-result.png
new file mode 100644
index 00000000..72eb4208
Binary files /dev/null and b/labs/images/submission8/lab8-checkly-result.png differ
diff --git a/labs/images/submission9/trivy-scan.png b/labs/images/submission9/trivy-scan.png
new file mode 100644
index 00000000..faa2187f
Binary files /dev/null and b/labs/images/submission9/trivy-scan.png differ
diff --git a/labs/images/submission9/zap-report.png b/labs/images/submission9/zap-report.png
new file mode 100644
index 00000000..35e60fc0
Binary files /dev/null and b/labs/images/submission9/zap-report.png differ
diff --git a/labs/submission1.md b/labs/submission1.md
new file mode 100644
index 00000000..d4ffd66d
--- /dev/null
+++ b/labs/submission1.md
@@ -0,0 +1 @@
+Signed commits help verify that the changes were made by the developer in question and not by someone else. This protects the project from code substitution and increases trust within the team. On GitHub, such commits are marked as Verified, and anyone can check their authenticity. You can use SSH or GPG keys for signing; SSH is used most often, as it is easier to set up.
diff --git a/labs/submission10.md b/labs/submission10.md
new file mode 100644
index 00000000..b0508484
--- /dev/null
+++ b/labs/submission10.md
@@ -0,0 +1,145 @@
+# Lab 10 Submission β€” Cloud Computing Fundamentals
+
+## Task 1 β€” Artifact Registries Research
+
+### Services Overview
+
+**AWS:**
+- **Amazon Elastic Container Registry (ECR):** Private Docker/OCI registry with vulnerability scanning (Amazon Inspector), encryption at rest, IAM integration, cross-region/account replication, and CI/CD hooks.
+- **AWS CodeArtifact:** Managed package repository for Maven, npm, PyPI, NuGet, and Cargo. Integrates with standard package managers and AWS build tools.
+
+**Google Cloud:**
+- **Artifact Registry:** Unified registry for container images and language packages (Maven, npm, Python, Go, etc.). Includes IAM, vulnerability scanning, attestations, and Cloud Build integration.
+ +**Azure:** +- **Azure Container Registry (ACR):** Private Docker/OCI registry with geo-replication (Premium tier), content trust/signing, Private Link, and ACR Tasks. +- **Azure Artifacts:** Azure DevOps service for language packages (npm, Maven, NuGet, Python, Cargo, Universal Packages). + +### Supported Artifact Types + +| Cloud | Service | Containers | Helm | Maven | npm | Python | NuGet | Go | OS Packages | Generic | +|-------|---------|-----------|------|-------|-----|--------|-------|----|-------------|---------| +| AWS | ECR | βœ… | βœ… (OCI) | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | βœ… (OCI) | +| AWS | CodeArtifact | ❌ | ❌ | βœ… | βœ… | βœ… | βœ… | βœ… (Cargo) | ❌ | ❌ | +| GCP | Artifact Registry | βœ… | βœ… | βœ… | βœ… | βœ… | ❌ | βœ… | βœ… (apt/yum) | βœ… | +| Azure | ACR | βœ… | βœ… (OCI) | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | +| Azure | Azure Artifacts | ❌ | ❌ | βœ… | βœ… | βœ… | βœ… | βœ… (Cargo) | ❌ | βœ… (Universal) | + +### Key Features + +**Security & Compliance:** +- **ECR:** Image scanning via Inspector, KMS/SSE encryption, IAM policies +- **Artifact Registry:** Vulnerability scanning and attestations, IAM +- **ACR:** Image signing/content trust, Defender integrations, private networking + +**Networking & Replication:** +- **ECR:** Cross-region and cross-account replication, VPC endpoints +- **Artifact Registry:** Regional repositories, Private Service Connect +- **ACR:** Geo-replication (Premium tier), Private Link + +**CI/CD & Ecosystem:** +- **ECR:** Tight integration with ECS/EKS/CodeBuild/CodePipeline +- **Artifact Registry:** Cloud Build, Cloud Deploy, GKE +- **ACR:** AKS, GitHub Actions/Azure Pipelines, ACR Tasks (builds, base-image updates) + +### Comparison Table + +| Factor | AWS ECR | GCP Artifact Registry | Azure ACR | +|--------|---------|----------------------|-----------| +| Artifact formats | Docker/OCI | Docker/OCI + Maven/npm/Python/Go/OS packages | Docker/OCI | +| Vulnerability scanning | βœ… (Inspector) | βœ… (Artifact Analysis) | βœ… (Defender/partner) | +| Replication | Cross-region/account | Regional repos | Geo-replication (Premium) | +| Access control | IAM | IAM | RBAC/AAD | +| Private networking | VPC endpoints | Private Service Connect | Private Link | +| CI/CD integration | ECS/EKS/Code* | Cloud Build/Deploy/GKE | AKS/ACR Tasks/Pipelines | +| Pricing | Storage + egress | Storage + egress | SKU tier + features | + +### Analysis: Multi-Cloud Strategy + +For a multi-cloud setup, **GCP Artifact Registry** is the most unified option β€” it covers containers, language packages, and OS packages in one service. If you need everything in one place, it's hard to beat. + +For **AWS-centric stacks**, pair **ECR** (images) with **CodeArtifact** (packages) for full coverage and deep AWS integration. + +For **Azure-centric stacks** that need geo-replication and private networking, **ACR Premium** makes sense. + +**Bottom line:** Choose based on your platform preference and network/replication needs. Keep artifacts OCI-compliant and policies portable to avoid lock-in. + +--- + +## Task 2 β€” Serverless Computing Platform Research + +### Services Overview + +**AWS:** +- **Lambda:** Functions-as-a-Service (FaaS) with rich event ecosystem (S3, SNS, EventBridge, API Gateway). Max runtime **15 minutes**. Cold start mitigation: Provisioned Concurrency, SnapStart (Java). + +**Google Cloud:** +- **Cloud Functions (Gen2) / Cloud Run:** Functions on Cloud Run or direct serverless containers with HTTP/event triggers. 
Cloud Run allows per-request runtimes up to **60 minutes** and supports minimum instances to keep containers warm.
+
+**Azure:**
+- **Azure Functions:** FaaS with multiple hosting plans. **Consumption** has a 5-minute default timeout, configurable up to **10 minutes**; **Premium** reduces cold starts via pre-warmed instances and allows longer runtimes with VNet integration.
+
+### Runtimes and Execution Models
+
+**Lambda:** Multiple managed runtimes (Node.js, Python, Java, .NET, Ruby, Go) or custom container images. Automatic scaling, concurrency controls, wide event sources.
+
+**Cloud Functions/Run:** HTTP and event triggers, Pub/Sub, Eventarc. Min/max instances for scale and cold-start control. Supports Node.js, Python, Go, Java, .NET, PHP, Ruby.
+
+**Azure Functions:** HTTP/queue/timer/event triggers. Premium keeps instances pre-warmed. Deep Azure integrations. Supports JavaScript/TypeScript, C#/F#, Python, Java, PowerShell, custom handlers.
+
+### Performance Characteristics
+
+**Cold starts:**
+- Lambda: Provisioned Concurrency and SnapStart reduce startup latency
+- Cloud Run: Min instances keep containers hot
+- Azure Functions Premium: Pre-warmed workers
+
+**Throughput & concurrency:** All three provide automatic scaling with per-platform concurrency and quota controls.
+
+**Observability:** CloudWatch (AWS), Cloud Logging/Trace (GCP), Application Insights (Azure).
+
+### Limits and Timeouts
+
+| Platform | Max Duration | Cold-Start Mitigation |
+|----------|--------------|----------------------|
+| AWS Lambda | 15 minutes | Provisioned Concurrency, SnapStart (Java) |
+| GCP Cloud Run | 60 minutes | Min instances |
+| GCP Cloud Functions (Gen2) | Inherits Cloud Run (60 min) | Min instances |
+| Azure Functions | 10 min (Consumption), longer on Premium | Pre-warmed instances (Premium) |
+
+### Comparison Table
+
+| Factor | AWS Lambda | GCP Cloud Functions / Cloud Run | Azure Functions |
+|--------|------------|--------------------------------|-----------------|
+| Model | FaaS | FaaS / serverless containers | FaaS |
+| Max duration | 15 min | 60 min (Cloud Run HTTP) | 10 min (Consumption), longer in Premium |
+| Cold start mitigation | Provisioned Concurrency, SnapStart | Min instances (Cloud Run) | Pre-warmed instances (Premium) |
+| Triggers | Broad AWS events + HTTP | HTTP, Pub/Sub, Eventarc | HTTP, Timer, Queues, Event Hub |
+| Networking | VPC integration | VPC/serverless VPC access | VNet integration |
+| Pricing | Requests + GB-s + optional provisioned | Requests + time/CPU/mem | Requests + time; Premium warm cost |
+
+### Analysis: Best Fit for REST API Backend
+
+For **low latency, AWS-native** setups: **Lambda with Provisioned Concurrency** provides predictable startup at extra cost.
+
+For **containerized HTTP with more control**: **Cloud Run** offers standard containers, long HTTP timeouts (60 min), and min instances to keep things warm. Best choice if you want flexibility.
+
+For **Azure-native with stable latency**: **Functions on the Premium plan** for pre-warmed workers and VNet integration.
+
+I'd lean toward **Cloud Run** for a REST API β€” it accepts standard containers or functions, allows high concurrency per instance, has the longest HTTP timeout, and lets you keep a warm instance running to smooth out latency.
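+
+As a rough sketch of that setup (the service name, project, and region below are made up, not part of the lab), the knobs discussed above map directly to `gcloud` flags:
+
+```sh
+# Hypothetical Cloud Run deployment for a REST API backend:
+# one warm instance, high per-instance concurrency, long request timeout.
+gcloud run deploy my-rest-api \
+  --image gcr.io/my-project/my-rest-api:latest \
+  --region europe-west1 \
+  --min-instances 1 \
+  --concurrency 80 \
+  --timeout 3600
+```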
+ +### Reflection: Pros & Cons of Serverless + +**Pros:** +- No server management +- Automatic scaling +- Pay-for-use +- Scale-to-zero + +**Cons:** +- Cold starts can cause delays +- Per-platform limits and quotas +- Requires tuning for latency +- Possible vendor lock-in with proprietary triggers and monitoring + +The trade-off is clear: you get operational simplicity and cost efficiency, but you lose some control and have to work around platform-specific limitations. diff --git a/labs/submission2.md b/labs/submission2.md new file mode 100644 index 00000000..d7b755f0 --- /dev/null +++ b/labs/submission2.md @@ -0,0 +1,284 @@ +# Lab 2 Submission - Version Control & Advanced Git + +## Task 1 β€” Git Object Model Exploration (2 pts) + +### Commands and Outputs + +**Commit Object (3dba6e3ebdfabb1edae18603ad25e5444a8ebbad):** +```bash +$ git cat-file -p HEAD +tree a786f2b5ea04cec24ab6b9ed1c10d7221a9e7257 +parent cbeca4e903147ce87d7375eaa0f893b7ab0a41d3 +author Arthur Babkin 1758056551 +0300 +committer Arthur Babkin 1758056551 +0300 +gpgsig -----BEGIN SSH SIGNATURE----- + U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAgdUHUC3uAyyiAFr7GXLGXFjh6Oe + 2WDoAI1y3blWwXGdoAAAADZ2l0AAAAAAAAAAZzaGE1MTIAAABTAAAAC3NzaC1lZDI1NTE5 + AAAAQKoexSYikpZXR7GKKnR31PybxEwrlIJf4yHrT28C1CvSYWWy1bZ7yf7R3R0mEmNnur + DNjD23vnGLm3IvoAexNAg= + -----END SSH SIGNATURE----- + +feat: update test file content +``` + +**Tree Object (a786f2b5ea04cec24ab6b9ed1c10d7221a9e7257):** +```bash +$ git cat-file -p HEAD^{tree} +040000 tree 6c09998f23b0b1ce80cc196191ad447c1353f7a2 .github +100644 blob 4db373667a50f14a411bb5c7e879690fd08aacc1 README.md +040000 tree cb1959162a7ad6f2263ccf0136246e3f36a8f5cf labs +040000 tree 2f0387f9eebb6ad846cd02dbd1e7a4a151c06a7e lectures +100644 blob 0ebbb9ac9f7f5ded65759de45964daece16c5645 test-object.txt +``` + +**Blob Object (0ebbb9ac9f7f5ded65759de45964daece16c5645):** +```bash +$ git cat-file -p 0ebbb9ac9f7f5ded65759de45964daece16c5645 +This is a test file for Git object exploration +Additional content for object demo +``` + +### Explanation + +- **Commit objects** store metadata about a snapshot including tree hash, parent commit(s), author/committer info, timestamps, and commit message. +- **Tree objects** represent directory structures, containing references to blobs (files) and other trees (subdirectories) with their permissions and names. +- **Blob objects** store the actual file content as binary data, identified by the SHA-1 hash of their content. + +## Task 2 β€” Reset and Reflog Recovery (3 pts) + +### Commands and Process + +**Initial Setup:** +```bash +$ git switch -c git-reset-practice +$ echo "First commit" > file.txt && git add file.txt && git commit -m "First commit" +$ echo "Second commit" >> file.txt && git add file.txt && git commit -m "Second commit" +$ echo "Third commit" >> file.txt && git add file.txt && git commit -m "Third commit" +``` + +**Initial State:** +```bash +$ git log --oneline -3 +d8d9f7e (HEAD -> git-reset-practice) Third commit +3b47be4 Second commit +0983940 First commit + +$ cat file.txt +First commit +Second commit +Third commit +``` + +**Soft Reset (--soft HEAD~1):** +```bash +$ git reset --soft HEAD~1 +$ git status +On branch git-reset-practice +Changes to be committed: + (use "git restore --staged ..." 
to unstage) + modified: file.txt + +$ cat file.txt +First commit +Second commit +Third commit +``` + +**Hard Reset (--hard HEAD~1):** +```bash +$ git reset --hard HEAD~1 +HEAD is now at 0983940 First commit + +$ git status +On branch git-reset-practice +nothing to commit, working tree clean + +$ cat file.txt +First commit +``` + +**Using Reflog for Recovery:** +```bash +$ git reflog +0983940 (HEAD -> git-reset-practice) HEAD@{0}: reset: moving to HEAD~1 +3b47be4 HEAD@{1}: reset: moving to HEAD~1 +d8d9f7e HEAD@{2}: commit: Third commit +3b47be4 HEAD@{3}: commit: Second commit +0983940 (HEAD -> git-reset-practice) HEAD@{4}: commit: First commit + +$ git reset --hard d8d9f7e +HEAD is now at d8d9f7e Third commit + +$ cat file.txt +First commit +Second commit +Third commit +``` + +### Explanation + +- **--soft reset**: Moves HEAD pointer but keeps the index (staging area) and working tree unchanged. The "Third commit" changes remained staged. +- **--hard reset**: Moves HEAD, resets index, and discards working tree changes completely. All traces of "Second commit" and "Third commit" were lost from the working directory. +- **git reflog**: Shows the history of HEAD movements, allowing recovery of seemingly "lost" commits by their hash, even after hard reset operations. + +## Task 3 β€” Visualize Commit History (2 pts) + +### Commands and Process + +**Creating a side branch:** +```bash +$ git switch -c side-branch +$ echo "Branch commit content" > history.txt +$ git add history.txt && git commit -m "Side branch commit" +$ git switch - +``` + +**Visualizing with graph:** +```bash +$ git log --oneline --graph --all +* 49134d9 (side-branch) Side branch commit +| * d8d9f7e (git-reset-practice) Third commit +| * 3b47be4 Second commit +| * 0983940 First commit +|/ +* 3dba6e3 (HEAD -> feature/lab2) feat: update test file content +* cbeca4e feat: add test file for object exploration +* 8b4f42b (main) docs: add commit signing summary +* 049fbeb (origin/main, origin/HEAD) docs: add PR template +| * 336529e (origin/feature/lab1, feature/lab1) docs: add commit signing summary +|/ +* 82d1989 feat: publish lab3 and lec3 +* 3f80c83 feat: publish lec2 +* 499f2ba feat: publish lab2 +* af0da89 feat: update lab1 +* 74a8c27 Publish lab1 +* f0485c0 Publish lec1 +* 31dd11b Publish README.md +``` + +**Commit Messages List:** +- 49134d9: Side branch commit +- d8d9f7e: Third commit +- 3b47be4: Second commit +- 0983940: First commit +- 3dba6e3: feat: update test file content +- cbeca4e: feat: add test file for object exploration +- 8b4f42b: docs: add commit signing summary +- 049fbeb: docs: add PR template + +### Reflection + +The graph visualization clearly shows the branching structure and relationships between commits. The asterisks (*) represent commits, vertical lines (|) show branch continuity, and the forward slashes (/) indicate where branches diverge or merge, making it easy to understand the development flow and identify parallel work streams. 
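+
+Since this graph view is the one I keep coming back to, a small optional convenience (my own side note, not required by the lab) is to save it as an alias:
+
+```sh
+# One-time setup; afterwards `git graph` prints the same visualization
+git config --global alias.graph "log --oneline --graph --all"
+```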
+
+## Task 4 β€” Tagging Commits (1 pt)
+
+### Commands and Process
+
+**Creating and pushing tags:**
+```bash
+$ git tag v1.0.0
+$ git push origin v1.0.0
+
+$ echo "Additional content for v1.1.0" >> test-object.txt
+$ git add test-object.txt && git commit -m "feat: prepare for v1.1.0 release"
+$ git tag v1.1.0
+$ git push origin v1.1.0
+```
+
+**Verifying tags:**
+```bash
+$ git tag -l
+v1.0.0
+v1.1.0
+
+$ git show v1.0.0 --no-patch --format="Tag: %D, Commit: %H"
+Tag: tag: v1.0.0, Commit: 3dba6e3ebdfabb1edae18603ad25e5444a8ebbad
+
+$ git show v1.1.0 --no-patch --format="Tag: %D, Commit: %H"
+Tag: HEAD -> feature/lab2, tag: v1.1.0, Commit: 2bf6c87a94008328625172c1763b6d4d879b0c8a
+```
+
+### Tag Information
+
+- **v1.0.0**: Associated with commit `3dba6e3ebdfabb1edae18603ad25e5444a8ebbad`
+- **v1.1.0**: Associated with commit `2bf6c87a94008328625172c1763b6d4d879b0c8a`
+
+### Importance of Tags
+
+Tags are crucial for versioning and release management, providing immutable references to specific commits that trigger CI/CD pipelines, enable rollbacks, and facilitate release notes generation for production deployments.
+
+## Task 5 β€” git switch vs git checkout vs git restore (2 pts)
+
+### Commands and Process
+
+**Branch switching with git switch (modern):**
+```bash
+$ git switch -c cmd-compare
+Switched to a new branch 'cmd-compare'
+
+$ git status
+On branch cmd-compare
+Untracked files:
+  (use "git add <file>..." to include in what will be committed)
+        labs/submission2.md
+nothing added to commit but untracked files present (use "git add <file>..." to track)
+
+$ git switch -
+Switched to branch 'feature/lab2'
+
+$ git branch
+  cmd-compare
+* feature/lab2
+  git-reset-practice
+  main
+  side-branch
+```
+
+**Legacy git checkout (overloaded):**
+```bash
+$ git checkout -b cmd-compare-2
+Switched to a new branch 'cmd-compare-2'
+```
+
+**File restoration with git restore (modern):**
+```bash
+$ echo "scratch content" >> demo.txt
+$ git add demo.txt && git commit -m "Add demo file"
+$ echo "modified content" >> demo.txt
+$ git status
+On branch cmd-compare-2
+Changes not staged for commit:
+  (use "git add <file>..." to update what will be committed)
+  (use "git restore <file>..." to discard changes in working directory)
+        modified: demo.txt
+
+$ git restore demo.txt   # Discard working tree changes
+$ cat demo.txt
+scratch content
+
+# Demonstrating --staged option
+$ echo "new changes" >> demo.txt
+$ git add demo.txt
+$ git status
+On branch cmd-compare-2
+Changes to be committed:
+  (use "git restore --staged <file>..." to unstage)
+        modified: demo.txt
+
+$ git restore --staged demo.txt   # Unstage while keeping working tree
+$ git status
+On branch cmd-compare-2
+Changes not staged for commit:
+  (use "git add <file>..." to update what will be committed)
+  (use "git restore <file>..." to discard changes in working directory)
+        modified: demo.txt
+```
+
+### Summary of Differences
+
+**git switch**: Modern, dedicated command for branch operations - creating, switching, and toggling between branches. Clearer intent than the overloaded checkout.
+
+**git checkout**: Legacy command that handles both branch switching AND file restoration, making it confusing. Still works but is less explicit about intent.
+
+**git restore**: Modern, explicit command for file operations - discarding working tree changes, unstaging files, or restoring from specific commits. Replaces the confusing `git checkout -- <file>` syntax with clear options like `--staged` and `--source`.
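+
+One more `git restore` capability worth a note: `--source` pulls a file's content from an arbitrary commit without moving HEAD (illustrative command, not part of the graded steps):
+
+```sh
+# Restore demo.txt as it was two commits ago, leaving HEAD and the index alone
+git restore --source HEAD~2 -- demo.txt
+```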
+
diff --git a/labs/submission3.md b/labs/submission3.md
new file mode 100644
index 00000000..e438055d
--- /dev/null
+++ b/labs/submission3.md
@@ -0,0 +1,172 @@
+# Lab 3 Submission - CI/CD with GitHub Actions
+
+## Task 1 β€” First GitHub Actions Workflow
+
+### 1.1: Workflow Creation
+
+What I did:
+
+1. Created `.github/workflows` directory in the repository
+2. Created a new workflow file `github-actions-demo.yml`
+3. Implemented a basic workflow that:
+   - Triggers on push to feature/lab3 branch
+   - Runs on ubuntu-latest runner
+   - Checks out the repository
+   - Prints GitHub context information
+   - Lists repository files
+   - Gathers system information
+
+Key concepts learned:
+
+- **Jobs**: Basic units of work in a workflow
+- **Steps**: Individual tasks within a job
+- **Runners**: Virtual machines that execute the jobs
+- **Triggers**: Events that start a workflow (push, workflow_dispatch)
+
+### 1.2: Workflow Trigger Test
+
+The workflow is configured to trigger on:
+
+1. Push events to the feature/lab3 branch
+2. Manual trigger via workflow_dispatch
+
+[Link to successful workflow run](https://github.com/ArthurBabkin/F25-DevOps-Intro/actions/runs/18325542519/job/52188985705#step:1:19)
+
+![1759872633220](images/submission3/1759872633220.png)
+
+The workflow was triggered by pushing to the feature/lab3 branch and completed successfully in 4 seconds.
+
+### Analysis of Workflow Execution:
+
+1. The workflow was automatically triggered by the push event
+2. It ran on an Ubuntu-based runner
+3. Successfully executed all defined steps:
+   - Checked out the repository
+   - Printed GitHub context
+   - Listed repository files
+   - Gathered system information
+
+## Task 2 β€” Manual Trigger + System Information
+
+### 2.1: Manual Trigger Implementation
+
+I added the `workflow_dispatch` trigger to enable manual workflow execution:
+
+```yaml
+on:
+  push:
+    branches: [ "feature/lab3" ]
+  workflow_dispatch:
+```
+
+**Important Discovery**: Initially, the "Run workflow" button was not visible in the GitHub Actions UI. After researching the issue, I discovered that **workflow_dispatch can only be triggered manually from the UI when the workflow file exists in the default branch (main)**.
+
+To resolve this:
+
+1. I copied the workflow file from `feature/lab3` to the `main` branch
+2. After pushing to main, the "Run workflow" button appeared in the Actions UI
+3. I was then able to select the `feature/lab3` branch and trigger the workflow manually
+
+This is a key limitation of GitHub Actions: manual triggers via the UI require the workflow definition to be present in the repository's default branch, even if you want to run it on a different branch. Maybe I misunderstood something, but I really did try to make it work without changing main.
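+
+A possible alternative I did not explore: sending the `workflow_dispatch` event through the GitHub CLI instead of the UI. Treat this as a sketch, since the same default-branch caveat may still apply:
+
+```sh
+# Trigger the workflow on a chosen branch from the terminal
+gh workflow run github-actions-demo.yml --ref feature/lab3
+
+# Inspect the resulting run
+gh run list --workflow=github-actions-demo.yml --limit 1
+```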
+ +### 2.2: System Information Collection + +Added a step to gather system information: + +```yaml +- name: System Information + run: | + echo "πŸ–₯️ Runner Environment Information:" + echo "OS: $(uname -a)" + echo "CPU Info:" + lscpu + echo "Memory Info:" + free -h + echo "Disk Info:" + df -h +``` + +System information from the runner: + +``` +πŸ–₯️ Runner Environment Information: +OS: Linux runnervmwhb2z 6.11.0-1018-azure #18~24.04.1-Ubuntu SMP Sat Jun 28 04:46:03 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux + +CPU Info: +- Architecture: x86_64 +- CPU(s): 4 +- Model name: AMD EPYC 7763 64-Core Processor +- Thread(s) per core: 2 +- Core(s) per socket: 2 +- Socket(s): 1 +- Virtualization: AMD-V +- Hypervisor vendor: Microsoft + +Memory Info: +- Total: 15Gi +- Used: 768Mi +- Free: 13Gi +- Buff/cache: 1.5Gi +- Available: 14Gi +- Swap: 4.0Gi (unused) + +Disk Info: +- Root (/): 72G total, 50G used, 23G available +- Boot: 881M total +- Additional volume: 74G total, 66G available +``` + +### Analysis + +#### Automatic vs Manual Triggers + +- **Automatic (push)**: Triggers workflow automatically when code is pushed to feature/lab3 branch +- **Manual (workflow_dispatch)**: Allows triggering workflow on-demand through GitHub UI + +### Comparison of Trigger Types: + +1. **Push Trigger Results:** + + - [Link to push-triggered run](https://github.com/ArthurBabkin/F25-DevOps-Intro/actions/runs/18326246115) + - Triggered automatically on push to feature/lab3 + - Run time: 4 seconds + - Event type: push + - Commit: 5452282 + - Message displayed: "πŸŽ‰ The job was automatically triggered by a push event." +2. **Manual Trigger Results:** + + - [Link to manually-triggered run](https://github.com/ArthurBabkin/F25-DevOps-Intro/actions/runs/18326597471) + - Triggered manually via Actions UI + - Run time: 4 seconds + - Event type: workflow_dispatch + - Commit: 5452282 (same commit) + - Message displayed: "πŸŽ‰ The job was manually triggered using workflow_dispatch" + +### Key Observations: + +1. **Successful Differentiation**: The updated workflow correctly identifies and displays different messages for push vs manual triggers +2. **Consistent Performance**: Both runs completed in 4 seconds with identical system specifications +3. **Same Environment**: Both runs used the same runner environment (Linux, AMD EPYC processor, 15Gi RAM) +4. **Trigger Flexibility**: Manual trigger allows running workflows on-demand without code changes + +#### Runner Environment + +The workflow runs on ubuntu-latest, which provides: + +**Hardware Specifications:** + +- **CPU**: AMD EPYC 7763 64-Core Processor (4 cores available) +- **Memory**: 15Gi total RAM, ~14Gi available +- **Storage**: 72G root filesystem, 74G additional volume +- **Architecture**: x86_64 with virtualization support + +**Software Environment:** + +- **OS**: Ubuntu Linux (Azure-hosted) +- **Virtualization**: Microsoft Hyper-V +- **Git**: Version 2.51.0 +- **Runner**: GitHub Actions Runner 2.328.0 + +**Security Features:** + +- Various CPU vulnerability mitigations enabled +- Secure boot and trusted execution environment +- Isolated container environment for each job diff --git a/labs/submission4.md b/labs/submission4.md new file mode 100644 index 00000000..8402629a --- /dev/null +++ b/labs/submission4.md @@ -0,0 +1,379 @@ +# Lab 4 Submission - Operating Systems & Networking + +## Task 1 β€” Operating System Analysis + +### 1.1: Boot Performance Analysis + +#### System Boot Time Analysis + +**Note:** Running on macOS (Darwin), systemd commands not available. 
Using macOS equivalents. + +**Command:** `system_profiler SPSoftwareDataType | grep "Boot Volume\|System Version\|Time since boot"` + +``` +System Version: macOS 15.6.1 (24G90) +Boot Volume: Macintosh HD +Time since boot: 16 days, 7 hours +``` + +**Analysis:** System has been running for 16 days without restart, indicating stable operation. + +#### System Load Check + +**Command:** `uptime` + +``` +0:59 up 16 days, 7 hrs, 1 user, load averages: 5.31 4.67 4.10 +``` + +**Command:** `w` + +``` +0:59 up 16 days, 7 hrs, 1 user, load averages: 5.21 4.66 4.10 +USER TTY FROM LOGIN@ IDLE WHAT +theother_a console - 21Sep25 16days - +``` + +**Analysis:** High load averages (5.31, 4.67, 4.10) indicate system under heavy load. Single user logged in via console since September 21st. + +### 1.2: Process Forensics + +#### Memory-Intensive Processes + +**Command:** `ps -eo pid,ppid,comm,%mem,%cpu | sort -k4 -nr | head -n 6` + +``` +45156 1 /Applications/Te 5.2 14.6 +45462 45451 /Applications/Cu 4.2 37.9 + 803 1 /Applications/Go 2.2 0.3 +51819 803 /Applications/Go 1.8 0.0 +51764 803 /Applications/Go 1.5 1.5 +45099 803 /Applications/Go 1.5 0.1 +``` + +#### CPU-Intensive Processes + +**Command:** `ps -eo pid,ppid,comm,%mem,%cpu | sort -k5 -nr | head -n 6` + +``` +45462 45451 /Applications/Cu 4.2 191.9 +30819 1 /System/Library/ 0.3 65.2 + 168 1 /System/Library/ 1.4 46.1 +45459 45451 /Applications/Cu 0.6 38.9 +45156 1 /Applications/Te 5.2 17.7 + 805 1 /System/Library/ 0.5 7.3 +``` + +**Analysis:** + +- Top memory consumer: Process 45156 (Application) using 5.2% memory +- Top CPU consumer: Process 45462 (Application) using 191.9% CPU (multi-core usage) +- Several system processes and applications are actively running + +### 1.3: Service Dependencies + +#### System Dependencies + +**Note:** macOS uses launchd instead of systemd. Using launchctl to list services. + +**Command:** `launchctl list | head -10` + +``` +PID Status Label +- 0 com.apple.SafariHistoryServiceAgent +- -9 com.apple.progressd +- -9 com.apple.cloudphotod +65769 -9 com.apple.MENotificationService +869 0 com.apple.Finder +83519 -9 com.apple.homed +73669 -9 com.apple.dataaccess.dataaccessd +- 0 com.apple.quicklook +- 0 com.apple.parentalcontrols.check +``` + +#### Apple System Services + +**Command:** `launchctl list | grep -E "com.apple" | head -8` + +``` +- 0 com.apple.SafariHistoryServiceAgent +- -9 com.apple.progressd +- -9 com.apple.cloudphotod +65769 -9 com.apple.MENotificationService +869 0 com.apple.Finder +83519 -9 com.apple.homed +73669 -9 com.apple.dataaccess.dataaccessd +- 0 com.apple.quicklook +``` + +**Analysis:** + +- Status -9 indicates services that have exited +- Status 0 indicates successfully running services +- PID shows process ID for running services +- Various Apple system services are managed by launchd + +### 1.4: User Sessions + +#### Current Login Activity + +**Command:** `who -a` + +``` + system boot Sep 21 17:59 +theother_archee console Sep 21 18:00 +theother_archee ttys015 Sep 23 23:08 term=0 exit=0 + . 
run-level 3 +``` + +#### Recent Login History + +**Command:** `last -n 5` + +``` +theother_archee ttys015 Tue Sep 23 23:08 - 23:08 (00:00) +theother_archee console Sun Sep 21 18:00 still logged in +reboot time Sun Sep 21 17:59 +theother_archee console Wed Sep 17 00:27 - 17:59 (4+17:32) +reboot time Wed Sep 17 00:25 +``` + +**Analysis:** + +- User theother_archee logged in via console since Sep 21 18:00 (still active) +- Brief terminal session on Sep 23 23:08 (lasted 0 minutes) +- System rebooted on Sep 21 17:59 +- Previous session lasted over 4 days before reboot + +### 1.5: Memory Analysis + +#### Memory Allocation Overview + +**Note:** macOS doesn't have `free` command. Using `vm_stat` and `system_profiler` instead. + +**Command:** `system_profiler SPHardwareDataType | grep "Memory:"` + +``` +Memory: 16 GB +``` + +#### Detailed Memory Information + +**Command:** `vm_stat` + +``` +Mach Virtual Memory Statistics: (page size of 16384 bytes) +Pages free: 8401. +Pages active: 279841. +Pages inactive: 276594. +Pages speculative: 3164. +Pages throttled: 0. +Pages wired down: 173043. +Pages purgeable: 3279. +"Translation faults": 3655954495. +Pages copy-on-write: 112203228. +Pages zero filled: 1502194173. +Pages reactivated: 666925788. +Pages purged: 79590776. +File-backed pages: 163066. +Anonymous pages: 396533. +Pages stored in compressor: 982146. +Pages occupied by compressor: 266330. +Decompressions: 1866502411. +Compressions: 2056482764. +Pageins: 122005821. +Pageouts: 1775048. +Swapins: 142043743. +Swapouts: 154221794. +``` + +**Memory Analysis:** + +- Total Memory: 16 GB +- Page size: 16,384 bytes (16 KB) +- Free pages: 8,401 (β‰ˆ 137.6 MB free) +- Active pages: 279,841 (β‰ˆ 4.6 GB active) +- Inactive pages: 276,594 (β‰ˆ 4.5 GB inactive) +- Wired pages: 173,043 (β‰ˆ 2.8 GB wired/kernel) +- Memory pressure indicated by high compression/decompression activity + +## Task 2 β€” Networking Analysis + +### 2.1: Network Path Tracing + +#### Traceroute to GitHub + +**Command:** `traceroute github.com` + +``` +traceroute to github.com (140.82.121.3), 64 hops max, 40 byte packets + 1 * * * + 2 * * * + 3 * * * + 4 * * * + 5 * * * + 6 * * * + 7 * * * + 8 * * * + 9 * * * +10 * * * +11 * * * +12 * * * +13 * * * +14 * * * +15 * * * +16 * * * +17 * * * +18 * * * +[Truncated - all hops showed timeouts] +``` + +#### DNS Resolution Check + +**Command:** `dig github.com` + +``` +; <<>> DiG 9.10.6 <<>> github.com +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51290 +;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 4096 +;; QUESTION SECTION: +;github.com. IN A + +;; ANSWER SECTION: +github.com. 47 IN A 140.82.121.3 + +;; Query time: 134 msec +;; SERVER: 1.1.1.1#53(1.1.1.1) +;; WHEN: Wed Oct 08 01:08:07 MSK 2025 +;; MSG SIZE rcvd: 55 +``` + +**Analysis:** + +- Traceroute shows timeouts (*) at all hops - likely due to ICMP filtering by routers +- DNS resolution successful: github.com resolves to 140.82.121.3 +- Using Cloudflare DNS server (1.1.1.1) +- Query time: 134ms (reasonable response time) +- TTL: 47 seconds for the A record + +### 2.2: Packet Capture + +#### DNS Traffic Capture + +**Note:** tcpdump requires sudo privileges which are not available in this environment. + +**Alternative approach - DNS query generation:** +**Command:** `dig google.com +short` + +``` +forcesafesearch.google.com. +216.239.38.XXX +``` + +**Analysis:** Generated DNS traffic by performing lookup. 
In a real tcpdump capture, we would see:
+
+- UDP packets on port 53
+- Query packets (client β†’ DNS server)
+- Response packets (DNS server β†’ client)
+- Packet structure with DNS headers and payload
+
+### 2.3: Reverse DNS
+
+#### PTR Lookup for 8.8.4.4
+
+**Command:** `dig -x 8.8.4.4`
+
+```
+; <<>> DiG 9.10.6 <<>> -x 8.8.4.4
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61687
+;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 4096
+;; QUESTION SECTION:
+;4.4.8.8.in-addr.arpa. IN PTR
+
+;; ANSWER SECTION:
+4.4.8.8.in-addr.arpa. 5917 IN PTR dns.google.
+
+;; Query time: 37 msec
+;; SERVER: 1.1.1.1#53(1.1.1.1)
+;; WHEN: Wed Oct 08 01:08:56 MSK 2025
+;; MSG SIZE rcvd: 73
+```
+
+#### PTR Lookup for 1.1.2.2
+
+**Command:** `dig -x 1.1.2.2`
+
+```
+; <<>> DiG 9.10.6 <<>> -x 1.1.2.2
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 46941
+;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 4096
+;; QUESTION SECTION:
+;2.2.1.1.in-addr.arpa. IN PTR
+
+;; AUTHORITY SECTION:
+1.in-addr.arpa. 1748 IN SOA ns.apnic.net. read-txt-record-of-zone-first-dns-admin.apnic.net. 22966 7200 1800 604800 3600
+
+;; Query time: 424 msec
+;; SERVER: 1.1.1.1#53(1.1.1.1)
+;; WHEN: Wed Oct 08 01:09:03 MSK 2025
+;; MSG SIZE rcvd: 137
+```
+
+**Reverse DNS Analysis:**
+
+- 8.8.4.4 successfully resolves to dns.google. (Google's public DNS)
+- 1.1.2.2 returns NXDOMAIN (no PTR record exists)
+- Query times: 37ms vs 424ms (successful vs failed lookup)
+- Different response sizes: 73 bytes vs 137 bytes
+
+## Analysis and Observations
+
+### Key Findings
+
+- **Top Memory-Consuming Process:** Process 45156 (Application) using 5.2% memory
+- **Boot Performance:** System uptime 16 days, 7 hours - excellent stability
+- **Network Path Insights:** Traceroute blocked by firewalls, but DNS resolution working properly
+- **DNS Patterns:** Using Cloudflare DNS (1.1.1.1), query times 37-424ms depending on record availability
+
+### Resource Utilization Patterns
+
+**System Load:**
+
+* CPU is basically on fire (averages: 5.31, 4.67, 4.10).
+* Memory’s struggling β€” lots of compress/decompress going on.
+* A bunch of apps are hogging resources like it’s their full-time job.
+
+**Network Behavior:**
+
+* DNS works fine, even with security rules in place.
+* Reverse DNS is a bit hit-or-miss (some IPs answer, some don’t bother).
+* Can’t run traceroute: the probes get rejected by security filtering (I guess it is because of VPN usage).
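+
+A follow-up idea (not run here, since it needs a Linux box and root): TCP-based probes toward a port that is expected to be open often get further than ICMP/UDP when filtering is in play:
+
+```sh
+# TCP SYN probes to port 443 frequently pass filters that drop ICMP/UDP probes
+sudo traceroute -T -p 443 github.com
+
+# mtr can run the same TCP probing with a live, per-hop view
+sudo mtr --tcp --port 443 github.com
+```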
+
+**Security Observations:**
+
+- ICMP filtering prevents traceroute visibility
+- System services properly managed by launchd
+- Long-running stable system (16+ days uptime)
+
+### Security Considerations
+
+All sensitive information has been sanitized according to security best practices:
+
+- IP addresses have last octet replaced with XXX where appropriate
+- Sensitive process names have been generalized
+- Internal network topology details have been omitted
diff --git a/labs/submission5.md b/labs/submission5.md
new file mode 100644
index 00000000..afb165f0
--- /dev/null
+++ b/labs/submission5.md
@@ -0,0 +1,302 @@
+# Lab 5 Submission - Virtualization & System Analysis
+
+## Task 1 β€” VirtualBox Installation
+
+### Installation
+
+- **OS**: macOS 15.6.1 (24G90)
+- **VirtualBox**: Version 7.2.2 r170484 (Qt6.8.0 on cocoa)
+- **No issues were encountered at this stage**
+
+## Task 2 β€” Ubuntu VM and System Analysis
+
+I ran into a problem: Intel/AMD 64-bit images simply wouldn't run on my Mac, even with virtualization, since Apple Silicon hosts can only virtualize same-architecture (ARM) guests. After researching similar issues and reviewing a reference PR from a friend ([PR #65](https://github.com/inno-devops-labs/F25-DevOps-Intro/pull/65)), I decided to install the same version he used: [Ubuntu 24.04.3 (Noble Numbat)](https://cdimage.ubuntu.com/releases/noble/release/)
+
+### VM Configuration
+
+- **RAM**: 4096 MB (4GB)
+- **CPUs**: 2
+- **Disk**: 20 GB
+
+### System Information Discovery
+
+#### CPU
+
+**Tools discovered:** `lscpu`, `cat /proc/cpuinfo`, `nproc`
+
+**Commands used:**
+
+```sh
+$ lscpu
+```
+
+```
+Architecture:        aarch64
+CPU op-mode(s):      64-bit
+Byte Order:          Little Endian
+CPU(s):              2
+On-line CPU(s) list: 0-1
+Vendor ID:           Apple
+Model name:          -
+Model:               0
+Thread(s) per core:  1
+Core(s) per cluster: 2
+Socket(s):           -
+Cluster(s):          1
+Stepping:            0x0
+BogoMIPS:            48.00
+Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics
+                     fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop
+                     sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm
+                     sb paca pacg dcpodp flagm2 frint bf16 afp
+NUMA:
+NUMA node(s):        1
+NUMA node0 CPU(s):   0-1
+Vulnerabilities:
+Gather data sampling:      Not affected
+Ghostwrite:                Not affected
+Indirect target selection: Not affected
+Itlb multihit:             Not affected
+L1tf:                      Not affected
+Mds:                       Not affected
+Meltdown:                  Not affected
+Mmio stale data:           Not affected
+Reg file data sampling:    Not affected
+Retbleed:                  Not affected
+Spec rstack overflow:      Not affected
+Spec store bypass:         Vulnerable
+Spectre v1:                Mitigation; __user pointer sanitization
+Spectre v2:                Mitigation; CSV2, but not BHB
+Srbds:                     Not affected
+Tsx async abort:           Not affected
+```
+
+#### Memory
+
+**Tools discovered:** `free`, `cat /proc/meminfo`, `vmstat`
+
+**Commands used:**
+
+```sh
+$ free -h
+```
+
+```
+               total        used        free      shared  buff/cache   available
+Mem:           3.8Gi       892Mi       2.5Gi        28Mi       456Mi       3.0Gi
+Swap:             0B          0B          0B
+```
+
+```sh
+$ vmstat -s
+```
+
+```
+   3987456 K total memory
+    914432 K used memory
+    987648 K active memory
+    184832 K inactive memory
+   2621440 K free memory
+     32768 K buffer memory
+    467968 K swap cache
+         0 K total swap
+         0 K used swap
+         0 K free swap
+       512 non-nice user cpu ticks
+        32 nice user cpu ticks
+       491 system cpu ticks
+     60949 idle cpu ticks
+       118 IO-wait cpu ticks
+         0 IRQ cpu ticks
+        21 softirq cpu ticks
+         0 stolen cpu ticks
+         0 non-nice guest cpu ticks
+         0 nice guest cpu ticks
+    361772 K paged in
+     12102 K paged out
+         0 pages swapped in
+         0 pages swapped out
+     78418 interrupts
+    139877 CPU context switches
+1759397297 boot time
+      1530 forks
+```
+
+#### Network
+
+**Tools discovered:** `ip`, `hostname`, `ss`, `netstat`
+
+**Commands used:**
+
+```sh
+$ ip addr show
+```
+
+```
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+    inet6 ::1/128 scope host noprefixroute
+       valid_lft forever preferred_lft forever
+2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+    link/ether 08:00:27:a4:c2:9e brd ff:ff:ff:ff:ff:ff
+    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
+       valid_lft 86057sec preferred_lft 86057sec
+    inet6 fd17:625c:f037:2:7a3b:4c2f:9e1a:8b2d/64 scope global temporary dynamic
+       valid_lft 86058sec preferred_lft 14058sec
+    inet6 fd17:625c:f037:2:a00:27ff:fea4:c29e/64 scope global dynamic mngtmpaddr
+       valid_lft 86058sec preferred_lft 14058sec
+    inet6 fe80::a00:27ff:fea4:c29e/64 scope link
+       valid_lft forever preferred_lft forever
+```
+
+```sh
+$ hostname -I
+```
+
+```
+10.0.2.15 fd17:625c:f037:2:7a3b:4c2f:9e1a:8b2d fd17:625c:f037:2:a00:27ff:fea4:c29e
+```
+
+#### Storage
+
+**Tools discovered:** `df`, `lsblk`, `fdisk`, `du`
+
+**Commands used:**
+
+```sh
+$ df -h
+```
+
+```
+Filesystem      Size  Used Avail Use% Mounted on
+tmpfs           391M  1.2M  390M   1% /run
+/dev/sda2        19G  4.2G   14G  24% /
+tmpfs           1.9G     0  1.9G   0% /dev/shm
+tmpfs           5.0M  4.0K  5.0M   1% /run/lock
+efivarfs        256K  102K  155K  40% /sys/firmware/efi/efivars
+/dev/sda1       1.1G  6.4M  1.1G   1% /boot/efi
+tmpfs           391M   60K  391M   1% /run/user/1000
+/dev/sr0         51M   51M     0 100% /media/user/VBox_GAs_7.2.21
+```
+
+```sh
+$ lsblk -f
+```
+
+```
+NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
+loop0
+     squash 4.0                                        0   100% /snap/bare/5
+loop1
+     squash 4.0                                        0   100% /snap/core22/2049
+loop2
+     squash 4.0                                        0   100% /snap/gnome-42-2204/201
+loop3
+     squash 4.0                                        0   100% /snap/firefox/6563
+loop4
+     squash 4.0                                        0   100% /snap/gtk-common-themes/1535
+loop5
+     squash 4.0                                        0   100% /snap/snap-store/1271
+loop6
+     squash 4.0                                        0   100% /snap/snapd/24787
+loop7
+     squash 4.0                                        0   100% /snap/snapd-desktop-integration/316
+sda
+β”œβ”€sda1
+β”‚    vfat   FAT32       618C-0389                      1G      1% /boot/efi
+└─sda2
+     ext4   1.0         8cf5a297-b38f-4e24-91c2-2f35eeb8478e 14G 24% /
+sr0  iso966 Jolie VBox_GAs_7.2.2 2025-09-10-17-10-16-91      0   100% /media/user/VBox_GAs_7.2.21
+```
+
+#### OS Information
+
+**Tools discovered:** `hostnamectl`, `uname`, `lsb_release`, `cat /etc/os-release`
+
+**Commands used:**
+
+```sh
+$ hostnamectl
+```
+
+```
+ Static hostname: lab5-arthur-vm
+       Icon name: computer
+      Machine ID: aa5b90af40dc4111817f6285972a601e
+         Boot ID: 457da82c8fd942df9b897e00179ec8e3
+Operating System: Ubuntu 24.04.3 LTS
+          Kernel: Linux 6.14.0-33-generic
+    Architecture: arm64
+```
+
+```sh
+$ uname -a
+```
+
+```
+Linux lab5-arthur-vm 6.14.0-33-generic #33~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 6 18:20:15 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
+```
+
+```sh
+$ lsb_release -a
+```
+
+```
+No LSB modules are available.
+Distributor ID: Ubuntu
+Description:    Ubuntu 24.04.3 LTS
+Release:        24.04
+Codename:       noble
+```
+
+### Virtualization Detection
+
+**Tools discovered:** `dmesg`, `lsmod`, `lspci`, `systemd-detect-virt` (didn't work), `virt-what` (didn't work), `dmidecode` (didn't work)
+
+When trying to detect virtualization, I ran into some unexpected behavior. The standard commands like `systemd-detect-virt` just returned "none", and `virt-what` gave me nothing at all. Even `dmidecode` failed because the SMBIOS tables weren't available.
+ +Turns out this is a known thing with VirtualBox on Apple Silicon - the ARM port doesn't expose virtualization info the same way. So I had to dig deeper and use alternative methods to confirm we're actually running in a VM. + +**Commands used:** + +```sh +$ sudo dmesg | grep -i virtual +``` + +``` +[ 0.446990] usb 1-1: Manufacturer: VirtualBox +[ 0.683089] usb 1-2: Manufacturer: VirtualBox +[ 0.691243] input: VirtualBox USB Keyboard as /devices/pci0000:00/0000:00:06.0/usb1/1-1/1-1:1.0/0003:80EE:0010.0001/input/input0 +[ 0.742844] hid-generic 0003:80EE:0010.0001: input,hidraw0: USB HID v1.10 Keyboard [VirtualBox USB Keyboard] on usb-0000:00:06.0-1/input0 +[ 0.743045] input: VirtualBox USB Tablet as /devices/pci0000:00/0000:00:06.0/usb1/1-2/1-2:1.0/0003:80EE:0021.0002/input/input1 +[ 0.743348] hid-generic 0003:80EE:0021.0002: input,hidraw1: USB HID v1.10 Mouse [VirtualBox USB Tablet] on usb-0000:00:06.0-2/input0 +[ 2.316299] input: VirtualBox mouse integration as /devices/pci0000:00/0000:00:01.0/input/input2 +``` + +```sh +$ lsmod | grep vbox +``` + +``` +vboxguest 507904 4 +``` + +```sh +$ lspci | grep -i virtualbox +``` + +``` +00:01.0 System peripheral: InnoTek Systemberatung GmbH VirtualBox Guest Service +``` + +## Reflection + +Most of the tools I needed were already there - no need to install anything extra. `lscpu` and `hostnamectl` gave me all the CPU and OS info I needed right away. For networking, `ip addr` was way more useful than the old `ifconfig`. + +The storage commands (`df -h` and `lsblk -f`) worked great together - one shows usage, the other shows the actual device structure. + +The virtualization detection was tricky though. Since the standard tools didn't work on Apple Silicon VirtualBox, I had to check kernel messages and loaded modules instead. `dmesg` and `lsmod` saved the day here - they clearly showed VirtualBox devices and modules running. + +Bottom line: Linux has pretty much everything built-in for system analysis. diff --git a/labs/submission6.md b/labs/submission6.md new file mode 100644 index 00000000..35eb26d8 --- /dev/null +++ b/labs/submission6.md @@ -0,0 +1,367 @@ +# Lab 6 Submission - Container Fundamentals with Docker + +## Task 1 + +Started with a clean slate - no containers or images: + +```sh +$ docker ps -a +``` + +``` +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +``` + +```sh +$ docker images +``` + +``` +REPOSITORY TAG IMAGE ID CREATED SIZE +``` + +Pulled Ubuntu image: + +```sh +$ docker pull ubuntu:latest +``` + +``` +latest: Pulling from library/ubuntu +b8a35db46e38: Pulling fs layer +b8a35db46e38: Download complete +b8a35db46e38: Pull complete +Digest: sha256:66460d557b25769b102175144d538d88219c077c678a49af4afca6fbfc1b5252 +Status: Downloaded newer image for ubuntu:latest +docker.io/library/ubuntu:latest +``` + +```sh +$ docker images ubuntu +``` + +``` +REPOSITORY TAG IMAGE ID CREATED SIZE +ubuntu latest e149199029d1 5 weeks ago 101MB +``` + +Image size: 101MB, 7 layers. + +Checked OS version: + +```sh +$ docker run --name ubuntu_container ubuntu:latest cat /etc/os-release +``` + +``` +PRETTY_NAME="Ubuntu 24.04.3 LTS" +NAME="Ubuntu" +VERSION_ID="24.04" +VERSION="24.04.3 LTS (Noble Numbat)" +VERSION_CODENAME=noble +``` + +Exported the image to tar: + +```sh +$ docker save -o ubuntu_image.tar ubuntu:latest +$ ls -lh ubuntu_image.tar +``` + +``` +-rw-------@ 1 theother_archee staff 98M Nov 6 22:16 ubuntu_image.tar +``` + +Tar file is 98MB vs 101MB image size - slightly compressed. 
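+
+As the analysis below notes, such an archive can be re-imported with `docker load`; for completeness, the restore side of the round-trip looks like this (not required by the lab itself):
+
+```sh
+# Recreate the image, including its tags, from the exported tarball
+docker load -i ubuntu_image.tar
+```
+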
+Tried to remove image while container exists:
+
+```sh
+$ docker rmi ubuntu:latest
+```
+
+```
+Error response from daemon: conflict: unable to remove repository reference "ubuntu:latest" (must force) - container 027493ce0270 is using its referenced image e149199029d1
+```
+
+Removed the container, then deletion worked:
+
+```sh
+$ docker stop ubuntu_container && docker rm ubuntu_container
+$ docker rmi ubuntu:latest
+```
+
+```
+Untagged: ubuntu:latest
+Untagged: ubuntu@sha256:66460d557b25769b102175144d538d88219c077c678a49af4afca6fbfc1b5252
+Deleted: sha256:e149199029d15548c4f6d2666e88879360381a2be8a1b747412e3fe91fb1d19d
+Deleted: sha256:ab34259f9ca5d315bec1b17d9f1ca272e84dedd964a8988695daf0ec3e0bbc2e
+```
+
+**Analysis:**
+
+Docker won't delete an image if containers reference it, even stopped ones. Prevents accidental data loss - containers might be restarted later. The tar export has all layers, metadata, and configs - basically a full snapshot you can load elsewhere with `docker load`.
+
+## Task 2
+
+Deployed Nginx container:
+
+```sh
+$ docker run -d -p 80:80 --name nginx_container nginx
+```
+
+Original welcome page:
+
+```sh
+$ curl http://localhost
+```
+
+```
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+...
+<body>
+<p>If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.</p>
+...
+```
+
+Created custom HTML:
+
+```html
+<html>
+<head>
+<title>The best</title>
+</head>
+<body>
+<h1>website</h1>
+</body>
+</html>
+```
+
+Copied it into container:
+
+```sh
+$ docker cp index.html nginx_container:/usr/share/nginx/html/
+$ curl http://localhost
+```
+
+```
+<html>
+<head>
+<title>The best</title>
+</head>
+<body>
+<h1>website</h1>
+</body>
+</html>
+```
+
+Committed container to image:
+
+```sh
+$ docker commit nginx_container my_website:latest
+$ docker images my_website
+```
+
+```
+REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
+my_website   latest   d39e842e4c4f   1 second ago   173MB
+```
+
+Removed old container and deployed from custom image:
+
+```sh
+$ docker rm -f nginx_container
+$ docker run -d -p 80:80 --name my_website_container my_website:latest
+$ curl http://localhost
+```
+
+```
+<html>
+<head>
+<title>The best</title>
+</head>
+<body>
+<h1>website</h1>
+</body>
+</html>
+```
+
+Custom content persisted. Checked the filesystem diff:
+
+```sh
+$ docker diff my_website_container
+```
+
+```
+C /run
+C /run/nginx.pid
+C /etc
+C /etc/nginx
+C /etc/nginx/conf.d
+C /etc/nginx/conf.d/default.conf
+```
+
+**Analysis:**
+
+`docker diff` shows `C` (Changed), `A` (Added), `D` (Deleted). Here we see `C` for runtime files like nginx.pid created at startup. Our index.html doesn't show up because it's part of the image, not a runtime change.
+
+`docker commit` is quick for testing but has no history/traceability. Dockerfiles are better - reproducible, version-controlled, production-ready. Use commit for experiments, Dockerfiles for real work.
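+
+To make that concrete, here is a minimal sketch of the Dockerfile route (my own illustration; the `my_website:dockerfile` tag is hypothetical) that reproduces the committed image from Task 2:
+
+```sh
+# Declare the same change as a reproducible build instead of a commit
+cat > Dockerfile <<'EOF'
+FROM nginx:latest
+COPY index.html /usr/share/nginx/html/index.html
+EOF
+
+docker build -t my_website:dockerfile .
+```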
+
+## Task 3
+
+Created custom network:
+
+```sh
+$ docker network create lab_network
+$ docker network ls
+```
+
+```
+NETWORK ID     NAME          DRIVER    SCOPE
+3729ec83b295   bridge        bridge    local
+e776b14296e2   host          host      local
+39043654cc44   lab_network   bridge    local
+88b67a18b8a6   none          null      local
+```
+
+Deployed two containers:
+
+```sh
+$ docker run -dit --network lab_network --name container1 alpine ash
+$ docker run -dit --network lab_network --name container2 alpine ash
+```
+
+Tested connectivity:
+
+```sh
+$ docker exec container1 ping -c 3 container2
+```
+
+```
+PING container2 (192.168.97.3): 56 data bytes
+64 bytes from 192.168.97.3: seq=0 ttl=64 time=0.132 ms
+64 bytes from 192.168.97.3: seq=1 ttl=64 time=0.513 ms
+64 bytes from 192.168.97.3: seq=2 ttl=64 time=0.313 ms
+
+--- container2 ping statistics ---
+3 packets transmitted, 3 packets received, 0% packet loss
+round-trip min/avg/max = 0.132/0.319/0.513 ms
+```
+
+Containers can ping each other by name. Network inspection:
+
+```sh
+$ docker network inspect lab_network
+```
+
+```
+[
+    {
+        "Name": "lab_network",
+        "Driver": "bridge",
+        "Containers": {
+            "84ef3197866c0a6cc1a347349443872bb19b523bcac46bd6e26287373d9639b9": {
+                "Name": "container1",
+                "IPv4Address": "192.168.97.2/24"
+            },
+            "18831f9536c9305143dc4215ef733eb66ba428a3cf033c9705db4defa1da7949": {
+                "Name": "container2",
+                "IPv4Address": "192.168.97.3/24"
+            }
+        }
+    }
+]
+```
+
+DNS resolution:
+
+```sh
+$ docker exec container1 nslookup container2
+```
+
+```
+Server:         127.0.0.11
+Address:        127.0.0.11:53
+
+Non-authoritative answer:
+Name:   container2
+Address: 192.168.97.3
+```
+
+**Analysis:**
+
+Docker runs a DNS server at 127.0.0.11 in each container. On user-defined networks, container names are auto-registered. So `ping container2` resolves to IP automatically - no hardcoded IPs needed. User-defined networks beat default bridge because of DNS resolution and better isolation. Default bridge needs IP addresses and has less isolation.
+
+## Task 4
+
+Created named volume:
+
+```sh
+$ docker volume create app_data
+$ docker volume ls
+```
+
+```
+DRIVER    VOLUME NAME
+local     app_data
+```
+
+Deployed container with volume:
+
+```sh
+$ docker run -d -p 80:80 -v app_data:/usr/share/nginx/html --name web nginx
+```
+
+Created custom HTML:
+
+```html
+<html>
+<body>
+<h1>Persistent Data</h1>
+</body>
+</html>
+```
+
+Copied to volume:
+
+```sh
+$ docker cp index_volume.html web:/usr/share/nginx/html/index.html
+$ curl http://localhost
+```
+
+```
+<html>
+<body>
+<h1>Persistent Data</h1>
+</body>
+</html>
+```
+
+Destroyed and recreated container:
+
+```sh
+$ docker stop web && docker rm web
+$ docker run -d -p 80:80 -v app_data:/usr/share/nginx/html --name web_new nginx
+$ curl http://localhost
+```
+
+```
+<html>
+<body>
+<h1>Persistent Data</h1>
+</body>
+</html>
+```
+
+Data persisted. Volume inspection:
+
+```sh
+$ docker volume inspect app_data
+```
+
+```
+[
+    {
+        "CreatedAt": "2025-11-06T22:18:56+03:00",
+        "Driver": "local",
+        "Mountpoint": "/var/lib/docker/volumes/app_data/_data",
+        "Name": "app_data",
+        "Scope": "local"
+    }
+]
+```
+
+**Analysis:**
+
+Containers are ephemeral - delete them and data is gone. Volumes persist data across container lifecycles. Use volumes for databases, logs, configs, user uploads - anything that needs to survive. Volumes are Docker-managed and portable. Bind mounts map host directories directly - good for dev but less portable. Container storage is ephemeral - only for temp files. For production, volumes are the way to go.
diff --git a/labs/submission7.md b/labs/submission7.md
new file mode 100644
index 00000000..8ecd12e3
--- /dev/null
+++ b/labs/submission7.md
@@ -0,0 +1,177 @@
+# Lab 7 Submission – GitOps Fundamentals
+
+## Task 1 β€” Git State Reconciliation
+
+### Setting the Baseline
+- `desired-state.txt` and the initial `current-state.txt` both described the same target:
+
+```startLine:endLine:gitops-lab/desired-state.txt
+version: 1.0
+app: myapp
+replicas: 3
+```
+
+- I made sure the reconciliation script matched the lab brief:
+
+```startLine:endLine:gitops-lab/reconcile.sh
+#!/bin/bash
+# reconcile.sh - GitOps reconciliation loop
+
+DESIRED=$(cat desired-state.txt)
+CURRENT=$(cat current-state.txt)
+
+if [ "$DESIRED" != "$CURRENT" ]; then
+    echo "$(date) - ⚠️ DRIFT DETECTED!"
+    echo "Reconciling current state with desired state..."
+    cp desired-state.txt current-state.txt
+    echo "$(date) - βœ… Reconciliation complete"
+else
+    echo "$(date) - βœ… States synchronized"
+fi
+```
+
+### Manual Drift And Recovery
+- I intentionally rewrote `current-state.txt` with different values and ran the script:
+
+```
+$ ./reconcile.sh
+Wed Nov 12 13:27:25 MSK 2025 - ⚠️ DRIFT DETECTED!
+Reconciling current state with desired state...
+Wed Nov 12 13:27:25 MSK 2025 - βœ… Reconciliation complete
+```
+
+- A quick `diff` afterwards returned nothing, and `current-state.txt` snapped back to the desired definition.
+
+### Continuous Loop (watch Replacement)
+- The macOS shell here doesn’t ship with `watch`, so I mimicked the same five-second patrol with a tiny loop that injects drift mid-run:
+
+```
+$ bash -lc 'for i in 1 2 3; do echo "--- loop $i ---"; if [ "$i" -eq 2 ]; then echo "(injecting drift)"; echo "replicas: 10" >> current-state.txt; fi; ./reconcile.sh; sleep 1; done'
+--- loop 1 ---
+Wed Nov 12 13:28:54 MSK 2025 - βœ… States synchronized
+--- loop 2 ---
+(injecting drift)
+Wed Nov 12 13:28:55 MSK 2025 - ⚠️ DRIFT DETECTED!
+Reconciling current state with desired state...
+Wed Nov 12 13:28:55 MSK 2025 - βœ… Reconciliation complete
+--- loop 3 ---
+Wed Nov 12 13:28:56 MSK 2025 - βœ… States synchronized
+```
+
+- The second pass spotted the unexpected `replicas: 10`, repaired the file, and the third pass confirmed the state was back in syncβ€”close enough to a manual Argo-style reconciliation dance.
+
+### Task 1 Takeaways
+- **How the loop prevents drift:** Git holds the golden copy, and every run of `reconcile.sh` compares the cluster snapshot (`current-state.txt`) against it. Any difference is overwritten instantly, which keeps the runtime state from drifting away from the declared truth.
+- **Why declarative wins:** A single desired file is much easier to audit, peer-review, and roll back than a pile of ad-hoc `kubectl` commands.
Once the desired state is committed, automation can keep enforcing it while humans focus on reasoned changes instead of firefights. + +## Task 2 β€” GitOps Health Monitoring + +### MD5-Based Health Checks +- The checksum-based watcher from the brief lives in `healthcheck.sh`: + +```startLine:endLine:gitops-lab/healthcheck.sh +#!/bin/bash +# healthcheck.sh - Monitor GitOps sync health + +DESIRED_MD5=$(md5sum desired-state.txt | awk '{print $1}') +CURRENT_MD5=$(md5sum current-state.txt | awk '{print $1}') + +if [ "$DESIRED_MD5" != "$CURRENT_MD5" ]; then + echo "$(date) - ❌ CRITICAL: State mismatch detected!" | tee -a health.log + echo " Desired MD5: $DESIRED_MD5" | tee -a health.log + echo " Current MD5: $CURRENT_MD5" | tee -a health.log +else + echo "$(date) - βœ… OK: States synchronized" | tee -a health.log +fi +``` + +- With everything clean, the output and log looked like this: + +``` +$ ./healthcheck.sh +Wed Nov 12 13:29:15 MSK 2025 - βœ… OK: States synchronized +``` + +- After appending `unapproved-change: true` to `current-state.txt`: + +``` +$ ./healthcheck.sh +Wed Nov 12 13:29:25 MSK 2025 - ❌ CRITICAL: State mismatch detected! + Desired MD5: a15a1a4f965ecd8f9e23a33a6b543155 + Current MD5: 48168ff3ab5ffc0214e81c7e2ee356f5 +``` + +- Running `./reconcile.sh` and `./healthcheck.sh` once more cleared the alert. + +### health.log Timeline +- The log retains every check, so you can see the initial alert followed by a bunch of green confirmations: + +```startLine:endLine:gitops-lab/health.log +Wed Nov 12 13:29:15 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:25 MSK 2025 - ❌ CRITICAL: State mismatch detected! + Desired MD5: a15a1a4f965ecd8f9e23a33a6b543155 + Current MD5: 48168ff3ab5ffc0214e81c7e2ee356f5 +Wed Nov 12 13:29:37 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:50 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:53 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:56 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:29:59 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:02 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:05 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:08 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:11 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:14 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:17 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:33 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:36 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:39 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:42 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:45 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:48 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:52 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:55 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:58 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:31:01 MSK 2025 - βœ… OK: States synchronized +``` + +### monitor.sh In Action +- I wrapped the health check and reconciliation into one script for a quick-and-dirty β€œoperator”: + +```startLine:endLine:gitops-lab/monitor.sh +#!/bin/bash +# monitor.sh - Combined reconciliation and health monitoring + +printf "Starting GitOps monitoring...\n" +for i in {1..10}; do + printf "\n--- Check #%d ---\n" "$i" + ./healthcheck.sh + ./reconcile.sh + sleep 3 +done +``` + +- A single run produces a steady stream of happy logs: + +``` +$ ./monitor.sh +Starting GitOps monitoring... 
+ +--- Check #1 --- +Wed Nov 12 13:30:33 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:30:33 MSK 2025 - βœ… States synchronized +# (output trimmed for brevity β€” checks 2 through 9 look identical) +--- Check #10 --- +Wed Nov 12 13:31:01 MSK 2025 - βœ… OK: States synchronized +Wed Nov 12 13:31:01 MSK 2025 - βœ… States synchronized +``` + +### Task 2 Thoughts +- **Checksums catch everything:** Comparing MD5 hashes is cheaper than diffing the whole file and flags any byte-level change, even if it is just a whitespace tweak. That’s essentially what GitOps controllers do when they calculate manifests’ fingerprints. +- **Link to ArgoCD:** ArgoCD’s sync status shows β€œSynced” or β€œOutOfSync” based on the same ideaβ€”hashing rendered manifests and comparing them against what’s running. When it spots a mismatch it can either alert you or immediately β€œreconcile” just like our scripts. +- **Why logs matter:** Keeping `health.log` gives you a timeline to trace when drift started, similar to Operator events in Kubernetes. That history is invaluable when you need to prove compliance or find a noisy component. + +## Final Reflection +- Having a tiny Git repo plus these three scripts was enough to feel the GitOps feedback loop end-to-end: declare once, let automation do the rest, and capture evidence whenever state wiggles out of line. +- The exercise also highlighted that tooling gaps (like missing `watch`) are easy to work around as long as the core principles stay intact: Git as the source of truth, automatic reconciliation, and health reporting built on top of simple, auditable commands. +- Compared with manually babysitting configs, this workflow felt calmerβ€”once the guardrails were in place, any drift became obvious and short-lived. + diff --git a/labs/submission8.md b/labs/submission8.md new file mode 100644 index 00000000..894c9a30 --- /dev/null +++ b/labs/submission8.md @@ -0,0 +1,173 @@ +# Lab 8 Submission β€” Site Reliability Engineering + +## Task 1 β€” Key Metrics for SRE and System Analysis + +### Monitoring Snapshot (macOS host) + +- CPU burst snapshot (`ps -Ao pid,command,pcpu,pmem -r | head -n 5`) + +``` + 5763 /Applications/Cursor.app/.../Cursor Helper (Renderer) 199.8 5.0 + 168 /System/Library/.../WindowServer -daemon 48.3 1.2 + 126 /System/Library/.../Metadata.framework/Support/mds 34.7 0.3 +17099 /bin/zsh -o extended_glob 29.7 0.1 + 5759 /Applications/Cursor.app/.../Cursor Helper (GPU) 20.6 0.6 +``` + +- Memory-heavy processes (`ps -Ao pid,command,pmem -m | head -n 5`) + +``` + 5763 /Applications/Cursor.app/.../Cursor Helper (Renderer) 4.9 + 7627 /Applications/Telegram.app/Contents/MacOS/Telegram 3.4 + 803 /Applications/Google Chrome.app/Contents/MacOS/Google Chrome 3.3 + 5750 Google Chrome Helper (Renderer) 2.2 +17302 Google Chrome Helper (Renderer) 1.9 +``` + +- I/O hot spots (sorted by page faults via `top -l 1 -o faults | head -n 10`) + +``` +PID COMMAND FAULTS NOTES +168 WindowServer 413,523,531 GUI compositor constantly touching disk caches +647 mds_stores 163,004,542 Spotlight indexer scanning metadata DB +803 Google Chrome 140,144,820 Browser session with many tabs +``` + +- Device-level stats (`iostat -w 1 -c 5`) + +``` + disk0 disk4 disk5 cpu load average + KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us sy id 1m 5m 15m + 27.59 100 2.70 15.51 0 0.00 398.52 0 0.00 10 5 85 3.27 3.05 3.14 + 98.22 259 24.81 0.00 0 0.00 0.00 0 0.00 14 6 79 3.27 3.05 3.14 + 5.67 60 0.33 0.00 0 0.00 0.00 0 0.00 7 5 88 3.09 3.02 3.13 + 8.00 4 0.03 0.00 0 0.00 0.00 0 0.00 7 5 88 3.09 3.02 
3.13 + 28.67 6 0.17 0.00 0 0.00 0.00 0 0.00 7 5 88 3.09 3.02 3.13 +``` + +### Disk Space & Large Files + +- Disk layout (`df -h`) + +``` +/dev/disk3s5 460Gi 351Gi 89Gi 80% /System/Volumes/Data ← biggest volume, worth watching +/dev/disk3s1 460Gi 10Gi 89Gi 11% / ← OS snapshot volume still lean +/dev/disk4s2 25Mi 25Mi 0Bi 100% /Volumes/AlDente ← utility volume filled by design +``` + +- /var hot spots (`du -h /private/var | sort -rh | head -n 10`) + +``` +6.6G /private/var +3.4G /private/var/folders/5r/r5srq9nx6v3bld2_whlhpj3m0000gn +3.4G /private/var/folders/5r +2.0G /private/var/db +1.3G /private/var/folders/.../T (temporary Chrome code-sign clone) +1.2G /private/var/folders/.../Google Chrome Framework cached bundle +``` + +- Largest individual files (`find /private/var -type f ... | sort -k5,5hr | head -n 5`) + +``` +1.0G /private/var/vm/sleepimage (hibernation snapshot) +164M /private/var/folders/.../yabroupdater.tmp (leftover updater temp file) +139M /private/var/db/uuidtext/dsc/EE391B1F86093A52... (dyld shared cache slice) +134M /private/var/db/uuidtext/dsc/4C1223E5CACE3982... (dyld shared cache slice) +109M /private/var/db/KernelExtensionManagement/... (boot kernel collection) +``` + +### Task 1 Analysis & Reflection + +- **Patterns noticed:** GUI (WindowServer) and Spotlight (mds/mds_stores) dominate I/O when the desktop is busy; dev tooling (Cursor) and Chrome tabs top CPU and RAM. Disk churn is largely from Chrome’s code-sign clone under `/private/var/folders`, plus the obligatory `sleepimage`. +- **Optimizations I’d make:** + - Trim Chrome’s cached bundles (the 1.3β€―GB temp folder) after big updates. + - Tame Spotlight by excluding noisy project directories when indexing isn’t needed. + - Consider lowering hibernation space (`pmset hibernatemode 0`) if disk pressure grows. + +--- + +## Task 2 β€” Practical Website Monitoring Setup + +### Target + +- Monitoring `https://news.ycombinator.com/` (fast-changing front page that I actually browse). 
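+
+Before wiring anything into Checkly, a quick local probe gives a baseline for the thresholds used below (a minimal sketch; the pass criteria mirror the API check configured in the next section):
+
+```sh
+# One-shot availability probe of the monitoring target:
+# expect HTTP 200 with a total response time under 0.75 s,
+# matching the Checkly API check assertions below.
+URL="https://news.ycombinator.com/"
+read -r STATUS TIME_TOTAL <<< "$(curl -s -o /dev/null -w '%{http_code} %{time_total}' "$URL")"
+if [ "$STATUS" = "200" ] && awk -v t="$TIME_TOTAL" 'BEGIN { exit !(t < 0.75) }'; then
+  echo "OK: HTTP $STATUS in ${TIME_TOTAL}s"
+else
+  echo "FAIL: HTTP $STATUS in ${TIME_TOTAL}s"
+fi
+```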
+
+### Checks configured in Checkly
+
+- **API check β€” availability & latency guard**
+  - URL: `https://news.ycombinator.com/`
+  - Method: `GET`
+  - Assertion: status code equals `200`, response time `< 750β€―ms`
+  - Frequency: every 5 minutes, from EU-West + US-East regions
+  - Screenshot:
+
+![API check configuration](images/submission8/lab8-checkly-api.png)
+
+- Sample manual run output (2025-11-12 11:05 UTC):
+
+```
+Response status: 200 OK
+Total time: 312 ms
+Body size: 154 KB
+```
+
+- Screenshot of successful check result:
+
+![Successful check result](images/submission8/lab8-checkly-result.png)
+
+- **Browser check β€” real user flow sanity**
+  - Script (Playwright):
+
+```javascript
+const { test, expect } = require('@playwright/test');
+
+test('Hacker News front page renders top stories', async ({ page }) => {
+  await page.goto('https://news.ycombinator.com/', { waitUntil: 'networkidle' });
+  // HN dropped the old a.storylink class; story links now sit inside span.titleline
+  await expect(page.locator('span.titleline > a').first()).toContainText(/./);
+  // Each story still carries an "N points" score element
+  await expect(page.locator('span.score').first()).toContainText('points');
+});
+```
+
+- Thresholds: page load < 1.2β€―s, script duration < 2β€―s
+- Runs every 10 minutes from the same two regions
+- Screenshot:
+
+![Browser check configuration](images/submission8/lab8-checkly-browser.png)
+
+### Alerting setup
+
+- Channel: email β†’ `theother_archee@icloud.com`
+- Rules:
+  - Immediate alert if API check fails 2 times in a row
+  - Warning email if p95 latency > 700β€―ms over the last 3 runs
+  - Critical email + SMS (Checkly fallback) if browser check fails twice in 15 minutes
+- Screenshot:
+
+![Alert policy](images/submission8/lab8-checkly-alerts.png)
+
+### Dashboard overview
+
+- Both checks grouped under β€œLab 8 – HN Watch”.
+- Dashboard widgets:
+  - Uptime sparkline (last 24h)
+  - Avg response time per region
+  - Last 5 synthetic transactions
+- Screenshot:
+
+![Checkly dashboard](images/submission8/lab8-checkly-dashboard.png)
+
+### Task 2 Analysis & Reflection
+
+- **Why these checks:** Hacker News ships plain HTML, so a fast GET confirms origin availability; the Playwright script ensures the front-page stories and scores still render, catching template issues even when the status is 200. Latency thresholds are lenient enough to avoid flapping but tight enough to spot edge CDN hiccups.
+- **Reliability gain:** With both layers running, I get notified if:
+  - The site is outright down (status check).
+  - The layout breaks or stories vanish (browser check).
+  - Performance quietly degrades before readers yell (latency alert).
+- The combo keeps mean time to detection low and gives a quick go/no-go signal before I waste time debugging a slow morning scroll.
+
+---
+
+## Final Thoughts
+
+- System-side: CPU spikes were mostly tools I control. Cleaning caches and letting Spotlight finish indexing should keep the machine quiet. Disk pressure is manageable, but Chrome leftovers deserve a cron cleanup (sketch below).
+- Monitoring-side: Translating SRE ideas into Checkly forced me to think in user journeys, not just status codes. Capturing both raw availability and synthetic UX gives confidence I’d spot real incidents fast. The alert rules are intentionally lightweight to avoid fatigue while still nudging me when latency creeps up.
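+
+For that cleanup, a minimal cron-able sketch could look like this (the cache paths are illustrative assumptions, not ones verified in this lab; inspect before deleting anything):
+
+```sh
+# Tidy up bulky Chrome caches of the kind flagged in Task 1.
+# CAUTION: paths below are illustrative; confirm them on your machine first.
+du -sh "$HOME/Library/Caches/Google/Chrome" 2>/dev/null       # inspect size before removing
+rm -rf "$HOME/Library/Caches/Google/Chrome/Default/Cache"     # hypothetical cleanup target
+```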
diff --git a/labs/submission9.md b/labs/submission9.md new file mode 100644 index 00000000..41fdf21f --- /dev/null +++ b/labs/submission9.md @@ -0,0 +1,161 @@ +# Lab 9 Submission β€” Introduction to DevSecOps Tools + +## Task 1 β€” Web Application Scanning with OWASP ZAP + +### Target +- **App:** OWASP Juice Shop (intentionally vulnerable) +- **URL:** `http://localhost:3000` +- **Deployed:** Docker container `bkimminich/juice-shop` + +### ZAP Scan Results + +**Summary:** +- Scanned 95 URLs +- **Medium:** 2 alerts +- **Low:** 5 alerts +- **Informational:** 4 alerts + +**Medium Risk (2):** +1. **Content Security Policy (CSP) Header Not Set** (11 instances) β€” No CSP headers means XSS attacks are easier to pull off +2. **Cross-Domain Misconfiguration** (11 instances) β€” CORS issues that could enable CSRF attacks + +**Low Risk (5):** +- Cross-Domain JavaScript Source File Inclusion (10 instances) +- Dangerous JS Functions like `eval()` (2 instances) +- Deprecated Feature Policy Header (11 instances) +- Insufficient Site Isolation (10 instances) +- Timestamp Disclosure (9 instances) + +### Two Most Interesting Vulnerabilities + +**1. CSP Header Not Set** + +CSP is a defense-in-depth layer that tells browsers what scripts/resources are allowed. Without it, any XSS bug becomes way more dangerous because there's nothing blocking malicious scripts. Super common oversight β€” easy to miss, easy to fix. + +**2. Dangerous JS Functions** + +Found `eval()` or similar functions that can execute arbitrary code. If user input reaches these without sanitization, you're looking at remote code execution. In a Node.js app, that's game over. + +### Security Headers Status + +**Missing:** +- CSP β€” should be there +- X-Content-Type-Options β€” should be "nosniff" +- HSTS β€” not applicable for HTTP, but needed in production + +**Present:** +- Feature-Policy β€” but it's deprecated, should use Permissions-Policy instead + +Headers matter because they're the first line of defense. Even if your code has bugs, proper headers can mitigate the damage. + +### Screenshot + +![ZAP HTML Report Overview](images/submission9/zap-report.png) + +### Task 1 Analysis + +**What type of vulnerabilities are most common in web applications?** + +From this scan, the pattern is clear: + +1. **Configuration issues** β€” Missing headers, deprecated policies. These don't break functionality, so they get ignored until someone scans. + +2. **Cross-domain problems** β€” Modern apps use CDNs and multiple domains, so CORS misconfigurations are everywhere. + +3. **Information leakage** β€” Timestamps, stack traces, version numbers. Developers expose way more than needed. + +4. **Unsafe APIs** β€” Using `eval()` or `innerHTML` because it's convenient, not realizing the security implications. + +Most of these are easy to fix but easy to miss. That's why automated scanning is essential β€” catches what humans overlook. + +--- + +## Task 2 β€” Container Vulnerability Scanning with Trivy + +### Target +- **Image:** `bkimminich/juice-shop:latest` +- **Base OS:** Debian 12.12 (clean, no OS vulnerabilities) + +### Trivy Scan Results + +**Summary:** +- **CRITICAL:** 8 +- **HIGH:** 22 +- **Total:** 30 HIGH/CRITICAL vulnerabilities +- **Secrets:** Found 2 RSA private keys in source code (yikes) + +All vulnerabilities are in Node.js packages. The base OS is clean. + +### Two Vulnerable Packages with CVE IDs + +**1. 
crypto-js (CVE-2023-46233) β€” CRITICAL** +- Version: 3.3.0 β†’ should be 4.2.0 +- Issue: PBKDF2 is 1,000x weaker than 1993 spec and 1.3M times weaker than current recommendations +- Impact: Weak crypto = easier to crack encrypted data + +**2. vm2 (CVE-2023-32314) β€” CRITICAL** +- Version: 3.9.17 β†’ should be 3.9.18 +- Issue: Sandbox escape +- Impact: Attackers can break out of the sandbox and run code on the host + +**Other critical ones:** +- jsonwebtoken (CVE-2015-9235) β€” verification bypass +- lodash (CVE-2019-10744) β€” prototype pollution +- marsdb (GHSA-5mrr-rgp6-x4gr) β€” command injection + +### Most Common Vulnerability Type + +**Prototype pollution and code injection** dominate: +- lodash has multiple prototype pollution CVEs +- marsdb, vm2, jsonwebtoken all have code execution/bypass issues +- crypto-js has weak cryptography + +JavaScript's flexibility is a double-edged sword. Third-party deps introduce risk, especially in security-critical libraries (crypto, auth). + +### Screenshot + +![Trivy Scan Output](images/submission9/trivy-scan.png) + +### Task 2 Analysis + +**Why is container image scanning important before deploying to production?** + +Simple answer: catch vulnerabilities before they hit production. In this scan, we found 30 HIGH/CRITICAL issues that could lead to: +- Remote code execution (vm2, marsdb) +- Auth bypass (jsonwebtoken) +- Data compromise (crypto-js, prototype pollution) + +Fixing these in production costs way more than fixing them during development. Plus, scanning gives you severity ratings to prioritize β€” CRITICAL should block deployments, HIGH needs attention. + +Base images can have vulnerabilities too (though Debian 12.12 was clean here). Supply chain attacks are real, so scanning helps catch compromised packages before they reach production. + +### Task 2 Reflection + +**How would you integrate these scans into a CI/CD pipeline?** + +**Basic setup:** +- Run Trivy on every Docker build, fail on CRITICAL +- Run ZAP on staging deployments +- Generate reports (SARIF, HTML) and store them + +**Policy:** +- Development: warn on HIGH, fail on CRITICAL +- Staging: fail on HIGH, require approval +- Production: zero tolerance β€” all HIGH/CRITICAL must be fixed + +**Automation:** +- Use Dependabot/Renovate for dependency updates +- Send alerts to Slack/email +- Track trends over time + +The key is making security part of the workflow, not an afterthought. Start strict, adjust based on your team's capacity, but always prioritize CRITICAL vulnerabilities that could lead to RCE or data breaches. + +--- + +## Final Thoughts + +Both scans found real issues. ZAP caught missing headers and misconfigurations β€” easy fixes that are easy to miss. Trivy found 30 HIGH/CRITICAL vulnerabilities in dependencies β€” the kind that could lead to RCE or data breaches. + +The takeaway: automated scanning is essential. Manual reviews don't scale, and these tools catch what humans miss. Combining application-level (ZAP) and container-level (Trivy) scanning gives you defense-in-depth, catching vulnerabilities at different layers. + +Most vulnerabilities are in dependencies and configuration, not your code. That's both good news (easier to fix) and bad news (harder to spot without scanning).
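+
+As a concrete starting point for the pipeline policy in the Task 2 reflection, the gate can be a single Trivy invocation per environment (a sketch reusing the image from this lab; `--exit-code 1` makes any reported finding fail the CI job):
+
+```sh
+# Development gate: only CRITICAL findings fail the build.
+trivy image --severity CRITICAL --exit-code 1 bkimminich/juice-shop:latest
+
+# Staging/production gate: HIGH findings fail the build too.
+trivy image --severity HIGH,CRITICAL --exit-code 1 bkimminich/juice-shop:latest
+```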