# OpenAI Strands Agent for AWS Bedrock AgentCore

Production-ready OpenAI Strands Agent optimized for deployment to AWS Bedrock AgentCore. This project focuses exclusively on the working OpenAI integration path, providing a clean, maintainable solution for AI agent deployment.
## Quick Start

```bash
# 1. Configure OpenAI API key
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY

# 2. Install dependencies
uv sync

# 3a. Run locally (no Docker, recommended for fast dev)
python src/agents/openai_agent.py

# 3b. Or run via Docker (from project root)
python deployment/deploy_local.py

# 4. Test locally in another terminal (works for either method)
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hello"}'

# 5. Deploy image to ECR for AgentCore (when ready)
python deployment/deploy_ecr.py

# 6. After creating/updating your AgentCore runtime in the AWS Console,
# invoke your deployed agent (replace with your Agent Runtime ARN)
python deployment/invoke_agent.py \
  --agent-arn arn:aws:bedrock-agentcore:us-east-1:ACCOUNT_ID:runtime/YOUR_AGENT_ID \
  --prompt "Hello from README"
```

## Prerequisites

- Python 3.12+
- OpenAI API key (Get one here)
- AWS Account with AgentCore access
- Docker (for deployment)
- uv package manager (recommended) or pip
## Installation

```bash
# Install all dependencies (already configured in pyproject.toml)
uv sync

# Alternative with pip (if not using uv)
pip install -r requirements.txt
```

## Project Structure

```
.
├── src/
│   ├── agents/
│   │   └── openai_agent.py     # Production OpenAI agent
│   ├── config/
│   │   └── settings.py         # Configuration management
│   └── utils/
│       └── helpers.py          # Common utilities
├── deployment/
│   ├── Dockerfile              # AgentCore deployment image
│   ├── requirements.txt        # Minimal production deps
│   ├── deploy_local.py         # Local Docker run
│   ├── deploy_ecr.py           # Build & push to ECR
│   ├── invoke_agent.py         # Invoke deployed AgentCore runtime
│   └── README.md               # Deployment docs
├── tests/
│   └── test_agent_basic.py     # Basic health checks
├── .env.example                # Environment template
├── pyproject.toml              # Project configuration
├── requirements.txt            # Dev deps (alt to uv)
└── README.md                   # This file
```
## Environment Setup

Create a `.env` file with your credentials:
```bash
# AWS Credentials (for Bedrock agents)
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_DEFAULT_REGION=us-east-1

# OpenAI Configuration (for OpenAI agents)
OPENAI_API_KEY=your_openai_api_key
```

- OpenAI API key:
  - Go to the OpenAI Platform
  - Create a new API key
  - Add it to your `.env` file
- AWS credentials:
  - Follow the AWS IAM Setup Guide for detailed instructions on creating IAM users and configuring the AWS CLI
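With the `.env` file in place, configuration can be read from the environment. A minimal sketch of what `src/config/settings.py` might look like (hypothetical; the variable names come from the `.env` template above, the defaults are assumptions):

```python
import os

# Defaults here are assumptions; adjust to match the real settings module.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")  # cost-efficient default
AWS_DEFAULT_REGION = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")


def validate() -> None:
    """Fail fast if the required OpenAI key is missing."""
    if not OPENAI_API_KEY:
        raise RuntimeError("OPENAI_API_KEY is not set; check your .env file")
```

Failing fast at startup makes a missing key obvious immediately, rather than surfacing as an opaque API error on the first request.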
## Testing

### Without Docker

```bash
# 1. Start the agent locally (no Docker)
python src/agents/openai_agent.py

# 2. In another terminal, test it
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hello"}'

# 3. Run comprehensive pytest tests
pytest tests/test_agent_basic.py -v

# Or run directly
python tests/test_agent_basic.py
```

### With Docker

```bash
# 1. Start the agent in Docker (from project root)
python deployment/deploy_local.py

# 2. In another terminal, test it
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hello"}'

# 3. Run comprehensive pytest tests
pytest tests/test_agent_basic.py -v

# Or run directly
python tests/test_agent_basic.py
```

Expected response:

```json
{
  "result": {
    "content": [{"text": "Wake up neo"}],
    "role": "assistant"
  }
}
```

### Running the test suite

```bash
# Run pytest test suite
pytest tests/test_agent_basic.py -v

# Quick run with direct execution
python tests/test_agent_basic.py

# Run all tests in tests directory
pytest tests/ -v
```

## Features

- Calculator Tool: Performs mathematical calculations
- Production Ready: Single, focused agent for AgentCore deployment
- Cost Efficient: Uses GPT-4o-mini by default for lower costs
- Health Monitoring: Built-in /ping endpoint for monitoring
- Error Handling: Robust error handling and validation
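The Calculator Tool could be backed by a safe arithmetic evaluator rather than `eval`. A sketch of one possible implementation (the actual tool lives in `src/agents/openai_agent.py` and may differ):

```python
import ast
import operator

# Binary operators supported by this sketch
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}


def calculator(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression via the AST."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError(f"Unsupported expression: {expression!r}")

    return _eval(ast.parse(expression, mode="eval"))
```

Walking the AST whitelists exactly the node types we trust, so arbitrary code in the expression is rejected instead of executed.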
## API

The agent runs on `http://localhost:8080/invocations` and accepts:

```json
{
  "prompt": "hello"
}
```

Health check endpoint: `http://localhost:8080/ping`
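The same request/response shapes can be handled programmatically. A minimal client sketch using only the standard library (endpoint URL and field names taken from the examples above):

```python
import json
from urllib import request


def invoke_local(prompt: str,
                 url: str = "http://localhost:8080/invocations") -> dict:
    """POST a prompt to the locally running agent and parse the JSON reply."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


def extract_text(response: dict) -> str:
    """Pull the assistant's text out of the nested result structure."""
    return response["result"]["content"][0]["text"]
```

`extract_text` mirrors the expected-response shape shown earlier; it will raise a `KeyError` if the agent returns an error payload instead.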
## Troubleshooting

- Port already in use:

  ```bash
  lsof -i :8080 | grep LISTEN | awk '{print $2}' | xargs kill -9
  ```

- Environment variables not loading:

  ```bash
  source load_env.sh
  echo $OPENAI_API_KEY  # Should show your key
  ```
## Next Steps

- Deploy to AWS: Follow the Strands documentation for AgentCore deployment
- Add Custom Tools: Extend agents with your own tools and functions
- Multi-Agent Systems: Explore Strands' multi-agent patterns
- Production Setup: Add monitoring, logging, and error handling
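For custom tools, the usual pattern is a typed Python function registered with the agent. A hedged sketch, assuming the Strands SDK's `@tool` decorator (with a no-op fallback so the snippet stands alone if `strands` is not installed):

```python
# The @tool import is an assumption about the Strands SDK; the fallback
# identity decorator keeps this sketch runnable without the SDK.
try:
    from strands import tool
except ImportError:
    def tool(fn):
        # Stand-in decorator for illustration only
        return fn


@tool
def word_count(text: str) -> int:
    """Example custom tool: count the words in a piece of text."""
    return len(text.split())
```

The docstring and type hints matter: agent frameworks typically use them to describe the tool to the model.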
## Deployment

For complete step-by-step deployment instructions, see: AGENT_CORE_DEPLOYMENT_GUIDE.md

The comprehensive deployment guide covers:

- ✅ Complete IAM setup with correct permissions
- ✅ ECR repository creation and Docker image management
- ✅ Manual AWS console deployment (recommended)
- ✅ Environment configuration and troubleshooting
- ✅ Real-world deployment experience and solutions
For experienced users, here's the condensed version:
- AWS CLI configured with appropriate permissions
- IAM role for AgentCore with proper permissions
- Docker installed and running
- See AGENT_CORE_DEPLOYMENT_GUIDE.md for detailed setup
- Build and push the image to ECR:

  ```bash
  python deployment/deploy_ecr.py
  ```

- In the AWS Console, create/update an AgentCore Runtime using the pushed image.
- Set environment variables in the runtime:
  - `OPENAI_API_KEY`: your-api-key
  - `OPENAI_MODEL`: `gpt-4o-mini` (default)
- Test the deployed runtime from your machine:

  ```bash
  python deployment/invoke_agent.py \
    --agent-arn arn:aws:bedrock-agentcore:us-east-1:ACCOUNT_ID:runtime/YOUR_AGENT_ID \
    --prompt "Hello"
  ```
```bash
# Test the deployed agent
python deployment/invoke_agent.py \
  --agent-arn arn:aws:bedrock-agentcore:us-east-1:ACCOUNT:runtime/AGENT-ID \
  --prompt "Hello"
```

```bash
# Option A: Run locally without Docker (fast dev loop)
python src/agents/openai_agent.py

# In another terminal, hit the local endpoint
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello"}'
```

```bash
# Option B: Run via Docker
# Ensure Docker Desktop is running

# Start the container locally
python deployment/deploy_local.py

# In another terminal, hit the local endpoint
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello"}'
```

```bash
# Test deployed agent (replace with your ARN)
python deployment/invoke_agent.py \
  --agent-arn arn:aws:bedrock-agentcore:us-east-1:XXXXXXXXX:runtime/YOUR_AGENT_ID \
  --prompt "hello" \
  --session-id "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --qualifier "DEFAULT"
```
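For scripting invocations without the CLI wrapper, `deployment/invoke_agent.py` presumably assembles these same parameters for the AgentCore data-plane API. A sketch that only builds the request dict (the boto3 parameter names and the session-ID length rule here are assumptions, not verified against the SDK):

```python
import json


def build_invoke_params(agent_arn: str, prompt: str,
                        session_id: str, qualifier: str = "DEFAULT") -> dict:
    """Assemble parameters for an AgentCore runtime invocation.

    The resulting dict would be passed to something like
    boto3.client("bedrock-agentcore").invoke_agent_runtime(**params)
    -- treat the client/method/parameter names as assumptions to verify.
    """
    if len(session_id) < 33:
        # AgentCore session IDs are long opaque strings (note the 38-char
        # placeholder in the example above); enforce a floor here.
        raise ValueError("session_id should be at least 33 characters")
    return {
        "agentRuntimeArn": agent_arn,
        "runtimeSessionId": session_id,
        "qualifier": qualifier,
        "payload": json.dumps({"prompt": prompt}),
    }
```

Separating parameter assembly from the network call keeps the request shape unit-testable without AWS credentials.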