diff --git a/MohamadMahdiReisi/Problem1_PostgreSQL/README.md b/.gitignore
similarity index 100%
rename from MohamadMahdiReisi/Problem1_PostgreSQL/README.md
rename to .gitignore
diff --git a/AlirezaMirzaei/Problem1_PostgreSql/README.md b/AlirezaMirzaei/Problem1_PostgreSql/README.md
new file mode 100644
index 00000000..2a5c69d1
--- /dev/null
+++ b/AlirezaMirzaei/Problem1_PostgreSql/README.md
@@ -0,0 +1,46 @@
+# My PostgreSQL Container Setup
+
+This is a quick way to spin up a PostgreSQL database with my schema and sample data already loaded. Here’s exactly how I did it for this assignment:
+
+1. **Write the initialization script**
+
+   In my project root I created an `initdb.sql` file containing everything I wanted PostgreSQL to run on first boot:
+
+   - Create the database and connect to it.
+   - Create the necessary tables (`teams`).
+   - Insert sample rows to seed the tables with the necessary data.
+   - Make a few changes to the data, with verification queries whose output shows up in the container log after the container starts.
+
+2. **Create my startup script**
+
+   I wrote `setup-postgres.sh` so I could reproduce this on any machine:
+
+   - The script runs the Docker container with the username and password given in plain text. These could be parameterized or passed as a Docker secret, but since this is a local installation with no need to access the internet, I kept it simple.
+   - The script then initializes, fills, and updates the database using the file `initdb.sql`.
+   - Finally, the container status is checked in a loop via the `pg_isready` command.
+
+3. **Verify the result**
+
+   After running `chmod +x setup-postgres.sh` and `./setup-postgres.sh`, I double-checked that the container was running:
+
+   ```
+   docker exec -it postgres_ctf \
+     psql -U postgres -d ctf_db \
+     -c "SELECT * FROM teams;"
+   ```
+
+   This should output the final state of the `teams` table after all the changes we make to it.
+
+4. **Tearing down for a fresh start**
+
+   Whenever I want to start over, I simply do:
+
+   ```
+   docker stop postgres_ctf && docker rm postgres_ctf
+   docker volume rm postgres_data
+   ./setup-postgres.sh
+   ```
+
+### End. The video links:
+
+https://iutbox.iut.ac.ir/index.php/s/oLbwink98GPyA36
diff --git a/Samples and Hints/Problem 1/README.md b/AlirezaMirzaei/Problem1_PostgreSql/initdb.sql
similarity index 59%
rename from Samples and Hints/Problem 1/README.md
rename to AlirezaMirzaei/Problem1_PostgreSql/initdb.sql
index 386d0523..f0c4bfc3 100644
--- a/Samples and Hints/Problem 1/README.md
+++ b/AlirezaMirzaei/Problem1_PostgreSql/initdb.sql
@@ -1,9 +1,8 @@
-# Hint: SQL Commands for Question 1
-
-```sql
 -- Create a new database
 CREATE DATABASE ctf_db;
+-- Connect to the newly created database
+\connect ctf_db;
 
 -- Create a table to store team information
 CREATE TABLE teams (
@@ -14,11 +13,13 @@ CREATE TABLE teams (
 
 -- Insert sample data into the table
 INSERT INTO teams (team_name, challenge_assigned)
-VALUES
+VALUES
     ('Red Team', true),
-    ('Blue Team', false);
+    ('Blue Team', false),
+    ('Green Team', false);
 
--- Retrieve all records from the table
+-- This is just to verify the data was inserted
+-- The output will show in the container logs
 SELECT * FROM teams;
 
 -- Update a team's challenge assignment status
@@ -29,4 +30,10 @@ WHERE team_name = 'Blue Team';
 
 -- Delete a team from the table
 DELETE FROM teams
 WHERE team_name = 'Red Team';
-```
+
+-- Insert an additional row as requested
+INSERT INTO teams (team_name, challenge_assigned)
+VALUES ('Yellow Team', true);
+
+-- Final state of the table
+SELECT * FROM teams;
\ No newline at end of file
diff --git a/AlirezaMirzaei/Problem1_PostgreSql/setup_postgres.sh b/AlirezaMirzaei/Problem1_PostgreSql/setup_postgres.sh
new file mode 100755
index 00000000..1fc1cc51
--- /dev/null
+++ b/AlirezaMirzaei/Problem1_PostgreSql/setup_postgres.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+# Create volume for PostgreSQL data persistence
+docker volume create postgres_data
+
+# Run PostgreSQL container with initialization
+# Note: the official postgres image reads POSTGRES_USER (POSTGRES_USERNAME is
+# ignored), so we keep the default "postgres" user that the checks below expect.
+docker run --name postgres_ctf \
+  -e POSTGRES_USER=postgres \
+  -e POSTGRES_PASSWORD=s#cr#tpasssswithasalt \
+  -d \
+  -p 5432:5432 \
+  -v postgres_data:/var/lib/postgresql/data \
+  -v "$(pwd)/initdb.sql":/docker-entrypoint-initdb.d/init-db.sql \
+  postgres:14-alpine
+
+# Wait until Postgres is actually ready
+echo "Waiting for PostgreSQL to initialize…"
+until docker exec postgres_ctf pg_isready -U postgres >/dev/null 2>&1; do
+  sleep 1
+done
+
+echo "PostgreSQL container initialized with our database schema and data!"
diff --git a/AlirezaMirzaei/Problem2_Redis/README.md b/AlirezaMirzaei/Problem2_Redis/README.md
new file mode 100644
index 00000000..5b22c1f3
--- /dev/null
+++ b/AlirezaMirzaei/Problem2_Redis/README.md
@@ -0,0 +1,61 @@
+# My Redis Server & IPC Demo
+
+We also need a lightweight pub/sub demo alongside Redis key/value storage, so here’s how I put that together for this assignment:
+
+1. **Spinning up Redis**
+   In `setup-redis.sh` I:
+   - Made a folder for Redis data to be mounted into the container as a volume.
+   - Configured the listening address and data-access settings in `redis.conf`.
+   - Started the container from the `redis:6-alpine` image with the given settings and options.
+   - Checked that Redis is running successfully by sending `PING` and expecting `PONG` in the output.
+2. **Preparing my Python environment**
+   I set up a virtualenv and installed `redis`:
+
+   ```bash
+   python -m venv .venv
+   source .venv/bin/activate
+   pip install redis
+   ```
+
+3. **Writing the producer** (`redis-producer.py`)
+
+   - Connect to Redis
+   - Set a few string keys and a hash
+   - Publish a “notification” every couple of seconds, incrementing a counter
+
+4. **Writing the consumer** (`redis-consumer.py`)
+
+   - Connect to Redis
+   - Dump all existing keys/hashes on start
+   - Subscribe to the `notifications` channel and log each new message
+   - The output is written to `consumer.log` and can easily be inspected.
+
+5. **Wrapping it all together**
+   In `run-demo.sh` I made sure Redis is running, then:
+
+   - I run the consumer in the background; its output goes to `consumer.log`.
+   - I run the producer in the foreground and stop the consumer when the producer is stopped.
+   - The logs can be viewed in `consumer.log`.
+
+6. **Monitoring with RedisInsight**
+   I used `setup-redisinsight.sh` to spin up RedisInsight so I could watch keys, hashes, and pub/sub traffic through a GUI.
+
+7. **Wrap Up**
+   The commands for using this module are run in this order:
+
+   - `./setup-redis.sh` to set up and run Redis.
+   - `./setup-redisinsight.sh` to set up and run RedisInsight.
+   - Install requirements as mentioned in step 2 (don't forget).
+   - Run the demo file: `./run-demo.sh`.
+   - Inspect the outputs and the RedisInsight logs.
+
+8. **To Restart**:
+   - Run `docker stop redis-server` and `docker rm redis-server`.
+   - Run the commands above again.
+
+### End. The video links:
+https://iutbox.iut.ac.ir/index.php/s/oLbwink98GPyA36
+
+---
+
+That’s exactly how I went step-by-step through each setup for this assignment; hope it’s clear and easy to follow from my perspective!
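The background-consumer/foreground-producer flow described in step 5 above can be sketched as a standalone script. This is a minimal sketch, not the assignment's actual `run-demo.sh`: it uses `sleep` as a stand-in for the real `redis-consumer.py` and `redis-producer.py` so it runs anywhere, and adds a `trap` so the consumer is also cleaned up if the script is interrupted:

```shell
#!/bin/sh
# Stand-ins for ./redis-consumer.py and ./redis-producer.py, so this
# sketch runs without Redis; substitute the real commands as needed.
run_consumer() { sleep 30; }
run_producer() { sleep 1; }

run_consumer &            # consumer runs in the background
CONSUMER_PID=$!

# If the script is interrupted (Ctrl+C, kill), take the consumer down too.
trap 'kill "$CONSUMER_PID" 2>/dev/null' INT TERM EXIT

run_producer              # producer runs in the foreground

# Normal path: stop the consumer explicitly and reap it.
kill "$CONSUMER_PID" 2>/dev/null
wait "$CONSUMER_PID" 2>/dev/null || true
echo "demo finished; consumer stopped"
```

The plain `kill $CONSUMER_PID` used later in `run-demo.sh` covers the normal path; the `trap` here just also covers the interrupted path, so a stray consumer never outlives the script.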
diff --git a/AlirezaMirzaei/Problem2_Redis/redis-consumer.py b/AlirezaMirzaei/Problem2_Redis/redis-consumer.py new file mode 100755 index 00000000..fad78726 --- /dev/null +++ b/AlirezaMirzaei/Problem2_Redis/redis-consumer.py @@ -0,0 +1,76 @@ +#!/usr/bin/env python3 +import redis +import sys +import threading +import time +import logging + +# Configure logging to write everything to consumer.log +logging.basicConfig( + filename="consumer.log", + filemode="a", + format="%(asctime)s %(levelname)s %(message)s", + level=logging.INFO, +) + + +def subscribe_to_channel(r, channel): + """Subscribe to the specified Redis channel and log messages""" + logging.info(f"Subscribing to channel '{channel}'...") + pubsub = r.pubsub() + pubsub.subscribe(channel) + + for message in pubsub.listen(): + if message["type"] == "message": + data = message["data"] + logging.info(f"Received message on '{channel}': {data}") + + +def fetch_data(r): + """Periodically fetch and log data from Redis""" + while True: + try: + service_status = r.get("service_status") + last_update = r.get("last_update") + message_count = r.get("message_count") + system_info = r.hgetall("system_info") + + logging.info("Current Redis Data:") + logging.info(f" service_status: {service_status}") + logging.info(f" last_update: {last_update}") + logging.info(f" message_count: {message_count}") + logging.info(" system_info:") + for key, value in system_info.items(): + logging.info(f" {key}: {value}") + + time.sleep(5) + except Exception as e: + logging.error(f"Error fetching data: {e}", exc_info=True) + time.sleep(5) + + +def main(): + logging.info("Connecting to Redis server...") + try: + r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True) + r.ping() + logging.info("Successfully connected to Redis server") + except redis.ConnectionError as e: + logging.error(f"Failed to connect to Redis: {e}") + sys.exit(1) + + # Start data-fetch thread + data_thread = threading.Thread(target=fetch_data, args=(r,), 
daemon=True) + data_thread.start() + + # Run subscriber in main thread + try: + subscribe_to_channel(r, "notifications") + except KeyboardInterrupt: + logging.info("Shutting down consumer due to keyboard interrupt") + + logging.info("Consumer exiting") + + +if __name__ == "__main__": + main() diff --git a/AlirezaMirzaei/Problem2_Redis/redis-producer.py b/AlirezaMirzaei/Problem2_Redis/redis-producer.py new file mode 100755 index 00000000..8e0bb174 --- /dev/null +++ b/AlirezaMirzaei/Problem2_Redis/redis-producer.py @@ -0,0 +1,60 @@ +#!/usr/bin/env python3 +import redis +import time +import sys + + +def main(): + print("Producer: Connecting to Redis server...") + try: + # Connect to Redis server + r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True) + r.ping() # Test connection + print("Producer: Successfully connected to Redis server") + except redis.ConnectionError as e: + print(f"Producer: Failed to connect to Redis: {e}", file=sys.stderr) + sys.exit(1) + + # Set some key-value pairs + print("Producer: Setting key-value pairs...") + r.set("service_status", "running") + r.set("last_update", time.strftime("%Y-%m-%d %H:%M:%S")) + r.set("message_count", "0") + + # Create a hash (dictionary) + r.hset( + "system_info", + mapping={ + "hostname": "redis-test", + "version": "1.0.0", + "environment": "development", + }, + ) + + print("Producer: Key-value pairs set successfully") + + # Publish messages to a channel + channel = "notifications" + message_count = 0 + + print(f"Producer: Starting to publish messages to channel '{channel}'") + print("Producer: Press Ctrl+C to stop") + + try: + while True: + message_count += 1 + message = f"Message #{message_count} at {time.strftime('%H:%M:%S')}" + r.publish(channel, message) + r.set("message_count", str(message_count)) + print(f"Producer: Published: {message}") + + # Sleep for 2 seconds between messages + time.sleep(2) + except KeyboardInterrupt: + print("\nProducer: Stopping message publication") + + 
print("Producer: Done") + + +if __name__ == "__main__": + main() diff --git a/AlirezaMirzaei/Problem2_Redis/run-demo.sh b/AlirezaMirzaei/Problem2_Redis/run-demo.sh new file mode 100755 index 00000000..2cc5a88d --- /dev/null +++ b/AlirezaMirzaei/Problem2_Redis/run-demo.sh @@ -0,0 +1,30 @@ +#!/bin/bash + +echo "Ensuring Redis is running..." +docker ps | grep redis-server > /dev/null +if [ $? -ne 0 ]; then + echo "Redis server not running. Starting it now..." + ./setup-redis.sh +fi + +# Run the consumer in background +echo "Starting Redis consumer in background..." +./redis-consumer.py & +CONSUMER_PID=$! + +echo "Consumer started with PID $CONSUMER_PID" +echo "Consumer logs are being written to consumer.log" + + +# Wait a bit to ensure consumer is ready +sleep 2 + +echo "Starting Redis producer in foreground..." +echo "Press Ctrl+C to stop the producer when done" +./redis-producer.py + +# When producer is stopped, also stop the consumer +echo "Stopping consumer process..." +kill $CONSUMER_PID + +echo "Demo completed. You can view the consumer logs in consumer.log" diff --git a/AlirezaMirzaei/Problem2_Redis/setup-redis.sh b/AlirezaMirzaei/Problem2_Redis/setup-redis.sh new file mode 100755 index 00000000..e2aa1a6b --- /dev/null +++ b/AlirezaMirzaei/Problem2_Redis/setup-redis.sh @@ -0,0 +1,37 @@ +#!/bin/bash + +# Create a directory for Redis data +mkdir -p redis-data + +# Create a simple Redis configuration file for our needs +cat >redis.conf < ") + print(" challenge_name: 'todo-app' or 'juice-shop'") + print(" team_id: Team identifier (e.g., 'team1')") + sys.exit(1) + + challenge_name = sys.argv[1] + team_id = sys.argv[2] + + print( + f"Testing Celery tasks with challenge '{challenge_name}' for team '{team_id}'..." 
+ ) + + # First, check container status before starting anything + print("\nChecking initial container status...") + status = get_container_status.delay( + team_id=team_id, challenge_name=challenge_name + ).get() + print_response("Initial Status", status) + + # Start container + print("\nStarting container...") + start_result = start_container.delay(challenge_name, team_id).get() + print_response("Start Container Result", start_result) + + if start_result.get("status") != "success": + print("Failed to start container. Exiting.") + sys.exit(1) + + container_id = start_result["container"]["id"] + + # Wait a moment and check container status + print("\nWaiting for container to fully start...") + time.sleep(5) + + status = get_container_status.delay(container_id=container_id).get() + print_response("Container Status After Start", status) + + # Let the user see the running container + input("\nPress Enter to stop the container...") + + # Stop container + print("\nStopping container...") + stop_result = stop_container.delay(container_id).get() + print_response("Stop Container Result", stop_result) + + # Check final status + time.sleep(2) + status = get_container_status.delay(container_id=container_id).get() + print_response("Final Container Status", status) + + print("\nTest completed!") + + +if __name__ == "__main__": + main() diff --git a/AlirezaMirzaei/Problem4_WebAPI/.env.example b/AlirezaMirzaei/Problem4_WebAPI/.env.example new file mode 100644 index 00000000..3d8d0d6a --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/.env.example @@ -0,0 +1,3 @@ +DATABASE_URL=postgresql://postgres:password@postgres_ctf:5432/ctf_db +CELERY_BROKER_URL=redis://localhost:6379/0 +CELERY_RESULT_BACKEND=redis://localhost:6379/1 diff --git a/AlirezaMirzaei/Problem4_WebAPI/Dockerfile b/AlirezaMirzaei/Problem4_WebAPI/Dockerfile new file mode 100644 index 00000000..17e759c1 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/Dockerfile @@ -0,0 +1,19 @@ +# For dockerizing this part, for the 
docker-compose based deployment
+# Use official Python image
+FROM python:3.11-slim
+
+# Set working directory
+WORKDIR /usr/src/app
+
+# Copy only requirements first for caching
+COPY requirements.txt ./
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy the rest of the application
+COPY . .
+
+# Expose FastAPI port
+EXPOSE 8000
+
+# Default command (overridden by docker-compose for celery or web)
+CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
diff --git a/AlirezaMirzaei/Problem4_WebAPI/README.md b/AlirezaMirzaei/Problem4_WebAPI/README.md
new file mode 100644
index 00000000..9f218585
--- /dev/null
+++ b/AlirezaMirzaei/Problem4_WebAPI/README.md
@@ -0,0 +1,124 @@
+## Problem 4: Web API for Team Challenge Management
+
+### Overview
+
+This service provides HTTP endpoints to assign and remove CTF challenge containers to/from teams. It uses:
+
+- **FastAPI** for the HTTP API
+- **PostgreSQL** for persistent storage
+- **Celery** with **Redis** for background task processing
+- **Docker SDK** for container management
+
+### Endpoints
+
+#### `POST /assign`
+
+Request:
+
+```json
+{
+  "team_id": 1,
+  "challenge_id": 2
+}
+```
+
+Response:
+
+```json
+{
+  "team_id": 1,
+  "challenge_id": 2,
+  "container_id": "...",
+  "address": "http://<host>:<port>",
+  "status": "running"
+}
+```
+
+#### `POST /remove`
+
+Request:
+
+```json
+{
+  "team_id": 1,
+  "challenge_id": 2
+}
+```
+
+Response:
+
+```json
+{
+  "team_id": 1,
+  "challenge_id": 2,
+  "container_id": "...",
+  "status": "stopped"
+}
+```
+
+### Database Schema
+
+Table: `team_challenges`
+
+| Column         | Type    | Description                          |
+| -------------- | ------- | ------------------------------------ |
+| `id`           | Integer | Primary key                          |
+| `team_id`      | Integer | ID of the team                       |
+| `challenge_id` | Integer | ID of the challenge                  |
+| `container_id` | String  | Docker container ID                  |
+| `address`      | String  | URL address of the running container |
+| `status`       | String  | `running` or `stopped`               |
+
+### Setup and Run
+
+1. Configure environment variables (copy from `.env.example` in the project root) into a `.env` file:
+
+```ini
+DATABASE_URL=postgresql://postgres:password@postgres_ctf:5432/ctf_db
+CELERY_BROKER_URL=redis://redis-server-ip:6379/0
+...
+```
+
+#### Shortcut: after starting the Redis and PostgreSQL containers with the `.sh` files from the previous sections, you can run `./run_app.sh` instead of steps 2 through 4 below (everything up to the testing section).
+
+2. Install the dependencies:
+
+```bash
+pip install -r requirements.txt
+```
+
+3. Start the Celery worker:
+
+```bash
+celery -A celery_app.celery_app worker --loglevel=info
+```
+
+4. Run the API:
+
+```bash
+uvicorn app.main:app --host 0.0.0.0 --port 8000
+```
+
+### Testing with Postman or Python
+
+Send a `POST` to `http://localhost:8000/assign` with JSON:
+
+```json
+{ "team_id": 1, "challenge_id": 2 }
+```
+
+Observe that a container starts and a database record is created.
+
+Send a `POST` to `http://localhost:8000/remove` with JSON:
+
+```json
+{ "team_id": 1, "challenge_id": 2 }
+```
+
+Observe that the container stops and the record status updates.
+
+You can also use `test_api.py` to quickly verify the endpoints using Python.
+
+### End.
The video links: + +https://iutbox.iut.ac.ir/index.php/s/oLbwink98GPyA36 diff --git a/AlirezaMirzaei/Problem4_WebAPI/app/__init__.py b/AlirezaMirzaei/Problem4_WebAPI/app/__init__.py new file mode 100644 index 00000000..8d7c9bc2 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/app/__init__.py @@ -0,0 +1 @@ +# to mark this directory as a package diff --git a/AlirezaMirzaei/Problem4_WebAPI/app/database.py b/AlirezaMirzaei/Problem4_WebAPI/app/database.py new file mode 100644 index 00000000..d1f70bb7 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/app/database.py @@ -0,0 +1,12 @@ +from sqlalchemy import create_engine +from sqlalchemy.ext.declarative import declarative_base +from sqlalchemy.orm import sessionmaker +import os + +DATABASE_URL = os.getenv( + "DATABASE_URL", "postgresql://postgres:password@postgres_ctf:5432/ctf_db" +) + +engine = create_engine(DATABASE_URL) +SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) +Base = declarative_base() diff --git a/AlirezaMirzaei/Problem4_WebAPI/app/main.py b/AlirezaMirzaei/Problem4_WebAPI/app/main.py new file mode 100644 index 00000000..04bd44cc --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/app/main.py @@ -0,0 +1,29 @@ +from fastapi import FastAPI, HTTPException +from app.schemas import AssignRequest, AssignResponse, RemoveRequest, RemoveResponse +from app.database import engine, Base, SessionLocal +from app.models import TeamChallenge +from app.tasks import start_challenge, stop_challenge +import uvicorn + +# create tables +Base.metadata.create_all(bind=engine) + +app = FastAPI(title="CTF Challenge Manager") + + +@app.post("/assign", response_model=AssignResponse) +def assign_challenge(req: AssignRequest): + result = start_challenge.delay(req.team_id, req.challenge_id) + res = result.get(timeout=30) + return res + + +@app.post("/remove", response_model=RemoveResponse) +def remove_challenge(req: RemoveRequest): + result = stop_challenge.delay(req.team_id, req.challenge_id) + res = 
result.get(timeout=30) + return res + + +if __name__ == "__main__": + uvicorn.run("app.main:app", host="0.0.0.0", port=8000, reload=True) diff --git a/AlirezaMirzaei/Problem4_WebAPI/app/models.py b/AlirezaMirzaei/Problem4_WebAPI/app/models.py new file mode 100644 index 00000000..14654912 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/app/models.py @@ -0,0 +1,13 @@ +from sqlalchemy import Column, Integer, String +from app.database import Base + + +class TeamChallenge(Base): + __tablename__ = "team_challenges" + + id = Column(Integer, primary_key=True, index=True) + team_id = Column(Integer, index=True, nullable=False) + challenge_id = Column(Integer, index=True, nullable=False) + container_id = Column(String, unique=True, nullable=False) + address = Column(String, nullable=False) + status = Column(String, nullable=False, default="running") diff --git a/AlirezaMirzaei/Problem4_WebAPI/app/schemas.py b/AlirezaMirzaei/Problem4_WebAPI/app/schemas.py new file mode 100644 index 00000000..422828d7 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/app/schemas.py @@ -0,0 +1,27 @@ +# app/schemas.py +from pydantic import BaseModel + + +class AssignRequest(BaseModel): + team_id: int + challenge_id: int + + +class AssignResponse(BaseModel): + team_id: int + challenge_id: int + container_id: str + address: str + status: str + + +class RemoveRequest(BaseModel): + team_id: int + challenge_id: int + + +class RemoveResponse(BaseModel): + team_id: int + challenge_id: int + container_id: str + status: str diff --git a/AlirezaMirzaei/Problem4_WebAPI/app/tasks.py b/AlirezaMirzaei/Problem4_WebAPI/app/tasks.py new file mode 100644 index 00000000..d7ed22f6 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/app/tasks.py @@ -0,0 +1,85 @@ +from celery_app import celery_app +import docker +import os +from app.database import SessionLocal +from app.models import TeamChallenge + +docker_client = docker.from_env() + + +@celery_app.task(bind=True) +def start_challenge(self, team_id: int, 
challenge_id: int) -> dict: + # select image based on challenge_id + images = {1: "pasapples/apjctf-todo-java-app:latest", 2: "bkimminich/juice-shop"} + image = images.get(challenge_id) + if not image: + raise ValueError(f"Unknown challenge {challenge_id}") + + container = docker_client.containers.run(image, detach=True, ports={"80/tcp": None}) + container.reload() + + network_settings = container.attrs["NetworkSettings"] + ports = network_settings.get("Ports", {}) + + host_port = None + for pinfo in ports.values(): + if pinfo and isinstance(pinfo, list): + host_port = pinfo[0].get("HostPort") + break + + if not host_port: + raise RuntimeError("Could not determine mapped host port for container.") + + ip = os.getenv( + "DOCKER_HOST_IP", "localhost" + ) # Use .env override or default to localhost + addr = f"http://{ip}:{host_port}" + db = SessionLocal() + tc = TeamChallenge( + team_id=team_id, + challenge_id=challenge_id, + container_id=container.id, + address=addr, + status="running", + ) + db.add(tc) + db.commit() + db.refresh(tc) + db.close() + + return { + "team_id": team_id, + "challenge_id": challenge_id, + "container_id": container.id, + "address": addr, + "status": "running", + } + + +@celery_app.task(bind=True) +def stop_challenge(self, team_id: int, challenge_id: int) -> dict: + db = SessionLocal() + tc = ( + db.query(TeamChallenge) + .filter_by(team_id=team_id, challenge_id=challenge_id, status="running") + .first() + ) + if not tc: + raise ValueError( + f"No active container for team {team_id} challenge {challenge_id}" + ) + + container = docker_client.containers.get(tc.container_id) + container.stop() + container.remove() + tc.status = "stopped" + db.commit() + db.refresh(tc) + db.close() + + return { + "team_id": team_id, + "challenge_id": challenge_id, + "container_id": tc.container_id, + "status": "stopped", + } diff --git a/AlirezaMirzaei/Problem4_WebAPI/celery_app.py b/AlirezaMirzaei/Problem4_WebAPI/celery_app.py new file mode 100644 index 
00000000..7d4356f4 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/celery_app.py @@ -0,0 +1,27 @@ +from dotenv import load_dotenv +import os +from celery import Celery + +# Load .env from project root +load_dotenv() + +# Broker and backend URLs from environment +CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "redis://redis-server:6379/0") +CELERY_RESULT_BACKEND = os.getenv( + "CELERY_RESULT_BACKEND", "redis://redis-server:6379/1" +) + +# Include the task modules so Celery registers them +celery_app = Celery( + "ctf_manager", + broker=CELERY_BROKER_URL, + backend=CELERY_RESULT_BACKEND, + include=["app.tasks"], +) +celery_app.conf.update( + task_serializer="json", + accept_content=["json"], + result_serializer="json", + timezone="UTC", + enable_utc=True, +) diff --git a/AlirezaMirzaei/Problem4_WebAPI/requirements.txt b/AlirezaMirzaei/Problem4_WebAPI/requirements.txt new file mode 100644 index 00000000..d707790a --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/requirements.txt @@ -0,0 +1,9 @@ +fastapi +uvicorn +celery +redis +docker +sqlalchemy +pydantic +psycopg2-binary +python-dotenv \ No newline at end of file diff --git a/AlirezaMirzaei/Problem4_WebAPI/run_app.sh b/AlirezaMirzaei/Problem4_WebAPI/run_app.sh new file mode 100755 index 00000000..b48ae31d --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/run_app.sh @@ -0,0 +1,18 @@ +#!/bin/bash + +# Load env variables from .env if it exists +if [ -f .env ]; then + export $(cat .env | grep -v '^#' | xargs) +fi + +# Install dependencies +pip install -r requirements.txt + +# Run database migrations (create tables) +python -c "from app.database import Base, engine; import app.models; Base.metadata.create_all(bind=engine)" + +# Start celery worker in background +celery -A celery_app.celery_app worker --loglevel=info & + +# Start FastAPI server +uvicorn app.main:app --host 0.0.0.0 --port 8000 diff --git a/AlirezaMirzaei/Problem4_WebAPI/test_api.py b/AlirezaMirzaei/Problem4_WebAPI/test_api.py new file mode 100755 
index 00000000..c5c4b727 --- /dev/null +++ b/AlirezaMirzaei/Problem4_WebAPI/test_api.py @@ -0,0 +1,67 @@ +#!/usr/bin/env python3 +""" +A more complete test script for the CTF Challenge Manager API, +including Celery connectivity check and assign/remove flows. +""" + +import os +import time +import argparse +import requests + +# Load .env if present +from dotenv import load_dotenv + +load_dotenv() + +BASE_URL = os.getenv("API_URL", "http://localhost:8000") + +# Check Celery broker connection +from celery_app import celery_app + +ping = celery_app.control.ping(timeout=1) +if not ping: + print("[!] Celery broker not reachable. Ping returned:", ping) + exit(1) +print("[+] Celery broker reachable:", ping) + + +def assign(team_id: int, challenge_id: int) -> dict: + url = f"{BASE_URL}/assign" + resp = requests.post(url, json={"team_id": team_id, "challenge_id": challenge_id}) + if resp.ok: + data = resp.json() + print(f"[+] Assigned: {data}") + return data + else: + print(f"[!] Assign failed {resp.status_code}: {resp.text}") + exit(1) + + +def remove(team_id: int, challenge_id: int) -> dict: + url = f"{BASE_URL}/remove" + resp = requests.post(url, json={"team_id": team_id, "challenge_id": challenge_id}) + if resp.ok: + data = resp.json() + print(f"[+] Removed: {data}") + return data + else: + print(f"[!] Remove failed {resp.status_code}: {resp.text}") + exit(1) + + +if __name__ == "__main__": + parser = argparse.ArgumentParser(description="CTF API Tester") + parser.add_argument("--team", type=int, default=1, help="Team ID to use") + parser.add_argument( + "--challenge", type=int, default=2, help="Challenge ID to assign/remove" + ) + parser.add_argument( + "--wait", type=int, default=10, help="Seconds to wait before removal" + ) + args = parser.parse_args() + + result = assign(args.team, args.challenge) + print(f"[.] 
Waiting {args.wait} seconds before removal...") + time.sleep(args.wait) + remove(args.team, args.challenge) diff --git a/AlirezaMirzaei/Problem5_DockerCompose/.env.example b/AlirezaMirzaei/Problem5_DockerCompose/.env.example new file mode 100644 index 00000000..c500f6d0 --- /dev/null +++ b/AlirezaMirzaei/Problem5_DockerCompose/.env.example @@ -0,0 +1,7 @@ +POSTGRES_USER=postgres +POSTGRES_PASSWORD=password +POSTGRES_DB=ctf_db +DATABASE_URL=postgresql://postgres:password@postgres_ctf:5432/ctf_db +CELERY_BROKER_URL=redis://redis-server:6379/0 +CELERY_RESULT_BACKEND=redis://redis-server:6379/1 +DOCKER_HOST_IP=localhost # for host-access fallback diff --git a/AlirezaMirzaei/Problem5_DockerCompose/README.md b/AlirezaMirzaei/Problem5_DockerCompose/README.md new file mode 100644 index 00000000..4aa1763c --- /dev/null +++ b/AlirezaMirzaei/Problem5_DockerCompose/README.md @@ -0,0 +1,30 @@ +# Problem 5: Docker Compose Integration – Full System README + +## Overview + +This project combines all components of a CTF (Capture The Flag) management system using Docker Compose. It consists of microservices for database management, Redis queueing, Celery-based task execution, a web API, and dynamic challenge containers. + +## How Services Are Connected + +The system is composed of four main services: + +- **PostgreSQL** stores all persistent information, including which team is assigned to which challenge. +- **Redis** acts as the broker and result backend for Celery. +- **Celery** executes background tasks such as starting or stopping challenge containers. +- **FastAPI Web API** allows clients to assign and remove challenges via HTTP endpoints. + +These services communicate internally through a shared Docker network, ensuring isolated and reliable inter-service communication. The Celery worker uses Docker SDK to manage containers based on requests received from the FastAPI service. + +## How to Start and Use the System + +1. 
Navigate to the `AlirezaMirzaei` root directory in your terminal.
+2. Move into the `Problem5_DockerCompose` folder.
+3. Ensure that the `.env` file exists inside `Problem4_WebAPI` and contains the required configuration.
+4. Start the system with `docker-compose up --build`.
+5. Use the Python script provided in `Problem4_WebAPI/test_api.py` or any HTTP client (like curl or Postman) to assign and remove challenges for different teams.
+
+The API provides two main endpoints:
+
+- `/assign` — Starts a challenge container and registers it.
+- `/remove` — Stops the container and updates the database.
+
diff --git a/AlirezaMirzaei/Problem5_DockerCompose/docker-compose.yml b/AlirezaMirzaei/Problem5_DockerCompose/docker-compose.yml
new file mode 100644
index 00000000..9f42f000
--- /dev/null
+++ b/AlirezaMirzaei/Problem5_DockerCompose/docker-compose.yml
@@ -0,0 +1,70 @@
+version: "3.8"
+
+services:
+  postgres_ctf:
+    image: postgres:14-alpine
+    restart: unless-stopped
+    environment:
+      POSTGRES_USER: ${POSTGRES_USER}
+      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
+      POSTGRES_DB: ${POSTGRES_DB}
+    volumes:
+      - pgdata:/var/lib/postgresql/data
+    networks:
+      - ctf_net
+
+  redis-server:
+    image: redis:6-alpine
+    restart: unless-stopped
+    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
+    volumes:
+      - ../Problem2_Redis/redis.conf:/usr/local/etc/redis/redis.conf:ro
+      - ../Problem2_Redis/redis_data:/data
+    ports:
+      - "6379:6379"
+    networks:
+      - ctf_net
+
+  celery_worker:
+    build: ../Problem4_WebAPI
+    working_dir: /usr/src/app
+    # Celery 5 expects worker options after the "worker" subcommand
+    command: >
+      celery -A celery_app.celery_app
+      worker
+      --loglevel=info
+    env_file:
+      - .env
+    depends_on:
+      - redis-server
+      - postgres_ctf
+    volumes:
+      - ../Problem4_WebAPI:/usr/src/app
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+    networks:
+      - ctf_net
+
+  web:
+    build: ../Problem4_WebAPI
+    working_dir: /usr/src/app
+    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
+    env_file:
+      - .env
+    ports:
+      - "8000:8000"
+    depends_on:
+      -
postgres_ctf + - redis-server + - celery_worker + volumes: + - ../Problem4_WebAPI:/usr/src/app + - /var/run/docker.sock:/var/run/docker.sock:ro + networks: + - ctf_net + +volumes: + pgdata: + redis_data: + +networks: + ctf_net: + driver: bridge diff --git a/DockerAssignemnt.pdf b/DockerAssignemnt.pdf deleted file mode 100644 index 8415eebe..00000000 Binary files a/DockerAssignemnt.pdf and /dev/null differ diff --git a/MohamadMahdiReisi/Problem2_Redis/README.md b/MohamadMahdiReisi/Problem2_Redis/README.md deleted file mode 100644 index 8b137891..00000000 --- a/MohamadMahdiReisi/Problem2_Redis/README.md +++ /dev/null @@ -1 +0,0 @@ - diff --git a/MohamadMahdiReisi/README.md b/MohamadMahdiReisi/README.md deleted file mode 100644 index 5fd544e5..00000000 --- a/MohamadMahdiReisi/README.md +++ /dev/null @@ -1 +0,0 @@ -Sample diff --git a/Samples and Hints/Problem 2/README.md b/Samples and Hints/Problem 2/README.md deleted file mode 100644 index 7c683334..00000000 --- a/Samples and Hints/Problem 2/README.md +++ /dev/null @@ -1,18 +0,0 @@ -# Hint: Redis Key-Value Example in Python - -This simple Python script shows how to connect to Redis, set a key-value pair, and retrieve it. - -```python -import redis - -# Connect to the Redis server -r = redis.Redis(host='localhost', port=6379, decode_responses=True) - -# Set a key-value pair -r.set("team:red", "assigned") - -# Get the value for the key -value = r.get("team:red") - -print(f"The value of 'team:red' is: {value}") -``` \ No newline at end of file diff --git a/Samples and Hints/Problem 3 /README.md b/Samples and Hints/Problem 3 /README.md deleted file mode 100644 index 9f0462e4..00000000 --- a/Samples and Hints/Problem 3 /README.md +++ /dev/null @@ -1,24 +0,0 @@ -# Hint: Basic Celery Task (Without Redis) - -This example shows how to define and run a basic Celery task that prints `'doing task'`. 
- -### 📄 `tasks.py` - -```python -from celery import Celery - -app = Celery('simple_task', broker='memory://') - -@app.task -def do_something(): - print("doing task") -``` - -### 📄 `main.py` - -```python -from tasks import do_something - -do_something.delay() -``` - diff --git a/Samples and Hints/Problem 4/README.md b/Samples and Hints/Problem 4/README.md deleted file mode 100644 index 627c908f..00000000 --- a/Samples and Hints/Problem 4/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# Hint: Simple Flask Echo API for Question 4 - -This example Flask app echoes back any JSON data sent to it via POST requests. - -### 📄 `app.py` - -```python -from flask import Flask, request, jsonify - -app = Flask(__name__) - -@app.route('/echo', methods=['POST']) -def echo(): - data = request.get_json() - return jsonify({ - "you_sent": data - }) - -if __name__ == '__main__': - app.run(debug=True, host='0.0.0.0', port=5000) -``` \ No newline at end of file diff --git a/Samples and Hints/Problem 5/README.md b/Samples and Hints/Problem 5/README.md deleted file mode 100644 index e69de29b..00000000