151 changes: 147 additions & 4 deletions ai_news_generator/README.md
@@ -1,21 +1,164 @@

# AI News generator
# AI News Generator

This project leverages CrewAI and Cohere's Command-R:7B model to build an AI news generator!
This project leverages **CrewAI Flows** and Cohere's Command-R:7B model to build a modular, agentic AI news generator!

The application has been refactored to use CrewAI's new Flow-based architecture, providing better modularity, state management, and workflow orchestration.

## ✨ Features

- **Flow-Based Architecture**: Built using CrewAI Flows with `@start` and `@listen` decorators
- **Two-Phase Workflow**: Research phase followed by content writing phase
- **Modular Design**: Separate agents for research and content writing
- **State Management**: Proper state handling between workflow phases
- **Multiple Interfaces**: Both CLI and Streamlit web interface
- **Structured Output**: Well-formatted markdown blog posts with citations

## 🏗️ Architecture

The application uses a **Flow-based architecture** with two main phases:

```
┌─────────────────┐ @start ┌─────────────────┐
│ Research Phase │──────────────▶│ Writing Phase │
│ │ @listen │ │
├─────────────────┤ ├─────────────────┤
│ • Web search │ │ • Content │
│ • Fact checking │ │ generation │
│ • Source │ │ • Formatting │
│ validation │ │ • Citations │
└─────────────────┘ └─────────────────┘
```

### Core Components

- **`NewsGeneratorFlow`**: Main Flow class orchestrating the workflow
- **Research Agent**: Senior Research Analyst for comprehensive research
- **Writing Agent**: Content Writer for transforming research into engaging content
- **State Management**: Pydantic models for structured data flow

## ⚙️ Installation and Setup

**Get API Keys**:
- [Serper API Key](https://serper.dev/)
- [Cohere API Key](https://dashboard.cohere.com/api-keys)


**Install Dependencies**:
Ensure you have Python 3.11 or later installed.
```bash
pip install crewai crewai-tools
pip install crewai crewai-tools streamlit python-dotenv
```

**Environment Setup**:
Create a `.env` file in the project directory:
```env
SERPER_API_KEY=your_serper_api_key_here
COHERE_API_KEY=your_cohere_api_key_here
```
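At runtime these keys are typically read back with `python-dotenv` and `os.getenv`. A minimal sketch of that pattern — the `require_keys` helper is illustrative, not part of the project:

```python
import os

# Load a local .env file if python-dotenv is available; the project itself
# only needs load_dotenv() plus os.getenv().
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

def require_keys(*names: str) -> dict:
    """Return the requested environment variables, raising if any is unset."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}

# keys = require_keys("SERPER_API_KEY", "COHERE_API_KEY")
```

Failing fast like this surfaces a missing key before any agent run starts, rather than mid-flow.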

## 🚀 Usage

### Command Line Interface

Run the flow directly from the command line:

```bash
# Basic usage
python main.py --topic "Latest developments in artificial intelligence"

# Save to file
python main.py --topic "Climate change solutions" --output article.md

# Verbose mode
python main.py --topic "Blockchain innovations" --verbose
```

### Streamlit Web Interface

Launch the interactive web interface:

```bash
streamlit run app.py
```

Then open your browser to the displayed URL (typically `http://localhost:8501`).

### Programmatic Usage

Use the flow in your own Python code:

```python
from news_flow import NewsGeneratorFlow

# Create and run the flow
flow = NewsGeneratorFlow()
result = flow.kickoff(inputs={"topic": "Your topic here"})
print(result)

# Or use the convenience function
from news_flow import generate_content_with_flow

content = generate_content_with_flow("Your topic here")
print(content)
```

## 🔧 Flow Implementation Details

### Flow Structure

The `NewsGeneratorFlow` class implements the CrewAI Flow pattern:

```python
class NewsGeneratorFlow(Flow[ResearchState]):
    @start()
    def research_phase(self) -> str:
        # Initial research using Senior Research Analyst
        ...

    @listen(research_phase)
    def writing_phase(self, research_results: str) -> str:
        # Content writing using research results
        ...
```
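For readers without `crewai` installed, the wiring behind `@start`/`@listen` can be approximated in plain Python. This is a dependency-free sketch of the pattern, not CrewAI's actual implementation; the phase bodies are placeholders:

```python
# Minimal stand-in for the @start/@listen orchestration pattern.
def start():
    def deco(fn):
        fn._is_start = True  # mark the entry-point method
        return fn
    return deco

def listen(upstream):
    def deco(fn):
        fn._listens_to = upstream.__name__  # record which phase feeds this one
        return fn
    return deco

class MiniFlow:
    def kickoff(self, inputs):
        self.state = dict(inputs)
        methods = [getattr(self, n) for n in dir(self)
                   if not n.startswith("_") and n != "kickoff"
                   and callable(getattr(self, n))]
        current = next(m for m in methods if getattr(m, "_is_start", False))
        result = current()
        # Follow the @listen chain, passing each result downstream.
        while True:
            nxt = next((m for m in methods
                        if getattr(m, "_listens_to", None) == current.__name__), None)
            if nxt is None:
                return result
            result = nxt(result)
            current = nxt

class NewsGeneratorFlow(MiniFlow):
    @start()
    def research_phase(self) -> str:
        return f"research brief on {self.state['topic']}"

    @listen(research_phase)
    def writing_phase(self, research_results: str) -> str:
        return f"article based on: {research_results}"

flow = NewsGeneratorFlow()
print(flow.kickoff(inputs={"topic": "AI"}))
# → article based on: research brief on AI
```

The real Flow class adds typed state, persistence, and error handling on top of this basic "start, then chain listeners" control flow.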

### State Management

The flow uses structured state management with Pydantic models:

```python
class ResearchState(BaseModel):
    topic: str
    research_brief: str
    sources: list[str] = []
    key_findings: list[str] = []
```
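One reason Pydantic fits here: mutable defaults like `sources: list[str] = []` are copied per instance by Pydantic, so two flow runs never share state. A small sketch, with the required fields given defaults purely for illustration (assumes `pydantic` is installed):

```python
from pydantic import BaseModel

class ResearchState(BaseModel):
    topic: str = ""
    research_brief: str = ""
    sources: list[str] = []       # Pydantic copies this default per instance
    key_findings: list[str] = []

a = ResearchState(topic="AI")
b = ResearchState(topic="Climate")
a.sources.append("https://example.com")
# a.sources now has one entry; b.sources is still [] — no shared mutable state
```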

### Agent Specialization

- **Senior Research Analyst**: Handles web research, fact-checking, and source validation
- **Content Writer**: Transforms research into engaging, well-structured blog content

## 🧪 Testing

Test the basic functionality:

```bash
# Test the flow with a simple topic
python main.py --topic "Python programming" --verbose

# Test the web interface
streamlit run app.py
```
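The CLI surface itself can also be checked offline, without API keys. The sketch below rebuilds the same argparse interface described for `main.py` rather than importing it (which would pull in the flow's dependencies); `build_parser` is an illustrative helper, not a project function:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the arguments main.py defines: --topic (required), --output, --verbose
    parser = argparse.ArgumentParser(
        description="Generate AI news content using CrewAI Flows"
    )
    parser.add_argument("--topic", type=str, required=True,
                        help="Topic to research and write about")
    parser.add_argument("--output", type=str,
                        help="Output file to save the generated content (optional)")
    parser.add_argument("--verbose", action="store_true",
                        help="Enable verbose logging")
    return parser

args = build_parser().parse_args(["--topic", "Python programming", "--verbose"])
print(args.topic, args.verbose, args.output)
# → Python programming True None
```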

## 📁 Project Structure

```
ai_news_generator/
├── app.py # Streamlit web interface
├── main.py # CLI entry point
├── news_flow.py # Core Flow implementation
├── README.md # This file
└── .env # Environment variables (create this)
```

---

## 📬 Stay Updated with Our Newsletter!
Binary file added ai_news_generator/__pycache__/app.cpython-311.pyc
Binary file not shown.
102 changes: 5 additions & 97 deletions ai_news_generator/app.py
@@ -1,7 +1,6 @@
import os
import streamlit as st
from crewai import Agent, Task, Crew, LLM
from crewai_tools import SerperDevTool
from news_flow import generate_content_with_flow
from dotenv import load_dotenv

# Load environment variables
@@ -46,45 +45,10 @@
""")

def generate_content(topic):
    llm = LLM(
        model="command-r",
        temperature=0.7
    )

    search_tool = SerperDevTool(n_results=10)

    # First Agent: Senior Research Analyst
    senior_research_analyst = Agent(
        role="Senior Research Analyst",
        goal=f"Research, analyze, and synthesize comprehensive information on {topic} from reliable web sources",
        backstory="You're an expert research analyst with advanced web research skills. "
                  "You excel at finding, analyzing, and synthesizing information from "
                  "across the internet using search tools. You're skilled at "
                  "distinguishing reliable sources from unreliable ones, "
                  "fact-checking, cross-referencing information, and "
                  "identifying key patterns and insights. You provide "
                  "well-organized research briefs with proper citations "
                  "and source verification. Your analysis includes both "
                  "raw data and interpreted insights, making complex "
                  "information accessible and actionable.",
        allow_delegation=False,
        verbose=True,
        tools=[search_tool],
        llm=llm
    )

    # Second Agent: Content Writer
    content_writer = Agent(
        role="Content Writer",
        goal="Transform research findings into engaging blog posts while maintaining accuracy",
        backstory="You're a skilled content writer specialized in creating "
                  "engaging, accessible content from technical research. "
                  "You work closely with the Senior Research Analyst and excel at maintaining the perfect "
                  "balance between informative and entertaining writing, "
                  "while ensuring all facts and citations from the research "
                  "are properly incorporated. You have a talent for making "
                  "complex topics approachable without oversimplifying them.",
        allow_delegation=False,
        verbose=True,
        llm=llm
    )

    # Research Task
    research_task = Task(
        description=("""
            1. Conduct comprehensive research on {topic} including:
               - Recent developments and news
               - Key industry trends and innovations
               - Expert opinions and analyses
               - Statistical data and market insights
            2. Evaluate source credibility and fact-check all information
            3. Organize findings into a structured research brief
            4. Include all relevant citations and sources
        """),
        expected_output="""A detailed research report containing:
            - Executive summary of key findings
            - Comprehensive analysis of current trends and developments
            - List of verified facts and statistics
            - All citations and links to original sources
            - Clear categorization of main themes and patterns
            Please format with clear sections and bullet points for easy reference.""",
        agent=senior_research_analyst
    )

    # Writing Task
    writing_task = Task(
        description=("""
            Using the research brief provided, create an engaging blog post that:
            1. Transforms technical information into accessible content
            2. Maintains all factual accuracy and citations from the research
            3. Includes:
               - Attention-grabbing introduction
               - Well-structured body sections with clear headings
               - Compelling conclusion
            4. Preserves all source citations in [Source: URL] format
            5. Includes a References section at the end
        """),
        expected_output="""A polished blog post in markdown format that:
            - Engages readers while maintaining accuracy
            - Contains properly structured sections
            - Includes Inline citations hyperlinked to the original source url
            - Presents information in an accessible yet informative way
            - Follows proper markdown formatting, use H1 for the title and H3 for the sub-sections""",
        agent=content_writer
    )

    # Create Crew
    crew = Crew(
        agents=[senior_research_analyst, content_writer],
        tasks=[research_task, writing_task],
        verbose=True
    )

    return crew.kickoff(inputs={"topic": topic})
    """
    Generate content using the new CrewAI Flow-based approach
    """
    return generate_content_with_flow(topic)

# Main content area
if generate_button:
98 changes: 98 additions & 0 deletions ai_news_generator/main.py
@@ -0,0 +1,98 @@
#!/usr/bin/env python3
"""
Main entry point for the AI News Generator using CrewAI Flows.

This script provides a command-line interface to generate news content
using the NewsGeneratorFlow implementation.
"""

import argparse
import sys
from typing import Optional
Contributor review comment — ⚠️ Potential issue: remove unused import.

The `Optional` type from `typing` is imported but never used in the code.

Suggested change:

-from typing import Optional

🪛 Ruff (0.12.2) — 11-11: F401, `typing.Optional` imported but unused.

from news_flow import NewsGeneratorFlow


def main():
    """
    Main function to run the AI News Generator Flow from command line.
    """
    parser = argparse.ArgumentParser(
        description="Generate AI news content using CrewAI Flows",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  python main.py --topic "Latest developments in artificial intelligence"
  python main.py --topic "Climate change solutions" --output article.md
        """
    )

    parser.add_argument(
        "--topic",
        type=str,
        required=True,
        help="Topic to research and write about"
    )

    parser.add_argument(
        "--output",
        type=str,
        help="Output file to save the generated content (optional)"
    )

    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Enable verbose logging"
    )

    args = parser.parse_args()

    if not args.topic.strip():
        print("Error: Topic cannot be empty", file=sys.stderr)
        sys.exit(1)

    try:
        print(f"🚀 Starting AI News Generator Flow for topic: '{args.topic}'")
        print("📊 Initializing research phase...")

        # Create and run the flow
        flow = NewsGeneratorFlow()
        result = flow.kickoff(inputs={"topic": args.topic})

        # Convert result to string if it's not already
        content = str(result)

        print("✅ Content generation completed!")

        # Save to file if specified
        if args.output:
            try:
                with open(args.output, 'w', encoding='utf-8') as f:
                    f.write(content)
                print(f"💾 Content saved to: {args.output}")
            except IOError as e:
                print(f"Error saving file: {e}", file=sys.stderr)
                sys.exit(1)
        else:
            # Print to stdout
            print("\n" + "="*80)
            print("GENERATED CONTENT:")
            print("="*80)
            print(content)
            print("="*80)

        return 0

    except KeyboardInterrupt:
        print("\n⚠️ Operation cancelled by user", file=sys.stderr)
        return 1
    except Exception as e:
        print(f"❌ Error: {e}", file=sys.stderr)
        if args.verbose:
            import traceback
            traceback.print_exc()
        return 1


if __name__ == "__main__":
    sys.exit(main())