feat: add react app with agents on query decorator (#15)
* feat: add react app with agents on query decorator

* feat: add readme file
gautamgambhir97 authored Nov 19, 2024
1 parent 2cd9862 commit bd6c7d7
Showing 19 changed files with 794 additions and 0 deletions.
116 changes: 116 additions & 0 deletions 1-uagents/react-web/README.md
@@ -0,0 +1,116 @@
# React Web News Sentiment Analyzer

![tech:react](https://img.shields.io/badge/react-61DAFB?style=flat&logo=react&logoColor=black)
![tech:python](https://img.shields.io/badge/python-3776AB?style=flat&logo=python&logoColor=white)
![tech:flask](https://img.shields.io/badge/flask-000000?style=flat&logo=flask&logoColor=white)
![tech:llm](https://img.shields.io/badge/llm-E85D2E?style=flat&logo=data%3Aimage%2Fsvg%2Bxml%3Bbase64%2CPHN2ZyB3aWR0aD0iMTAiIGhlaWdodD0iOCIgdmlld0JveD0iMCAwIDEwIDgiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI%2BCjxwYXRoIGQ9Ik00LjUgMUM0LjUgMS4yMTg3NSA0LjQyMTg4IDEuNDIxODggNC4zMTI1IDEuNTc4MTJMNC43NjU2MiAyLjU2MjVDNC45MjE4OCAyLjUzMTI1IDUuMDc4MTIgMi41IDUuMjUgMi41QzUuODEyNSAyLjUgNi4zMjgxMiAyLjcxODc1IDYuNzE4NzUgMy4wNjI1TDggMi4xMDkzOEM4IDIuMDc4MTIgOCAyLjA0Njg4IDggMkM4IDEuNDUzMTIgOC40Mzc1IDEgOSAxQzkuNTQ2ODggMSAxMCAxLjQ1MzEyIDEwIDJDMTAgMi41NjI1IDkuNTQ2ODggMyA5IDNDOC44NDM3NSAzIDguNzE4NzUgMi45ODQzOCA4LjU5Mzc1IDIuOTIxODhMNy4zMTI1IDMuODU5MzhDNy40MjE4OCA0LjE0MDYyIDcuNSA0LjQzNzUgNy41IDQuNzVDNy41IDUgNy40NTMxMiA1LjIzNDM4IDcuMzc1IDUuNDUzMTJMOC41IDYuMTI1QzguNjU2MjUgNi4wNDY4OCA4LjgxMjUgNiA5IDZDOS41NDY4OCA2IDEwIDYuNDUzMTIgMTAgN0MxMCA3LjU2MjUgOS41NDY4OCA4IDkgOEM4LjQzNzUgOCA4IDcuNTYyNSA4IDdWNi45ODQzOEw2Ljg1OTM4IDYuMzEyNUM2LjQ1MzEyIDYuNzM0MzggNS44NzUgNyA1LjI1IDdDNC4xNzE4OCA3IDMuMjgxMjUgNi4yNjU2MiAzLjA0Njg4IDUuMjVIMS44NTkzOEMxLjY4NzUgNS41NjI1IDEuMzU5MzggNS43NSAxIDUuNzVDMC40Mzc1IDUuNzUgMCA1LjMxMjUgMCA0Ljc1QzAgNC4yMDMxMiAwLjQzNzUgMy43NSAxIDMuNzVDMS4zNTkzOCAzLjc1IDEuNjg3NSAzLjk1MzEyIDEuODU5MzggNC4yNUgzLjA0Njg4QzMuMTcxODggMy43MzQzOCAzLjQ1MzEyIDMuMjk2ODggMy44NTkzOCAyLjk4NDM4TDMuNDA2MjUgMkMyLjg5MDYyIDEuOTUzMTIgMi41IDEuNTMxMjUgMi41IDFDMi41IDAuNDUzMTI1IDIuOTM3NSAwIDMuNSAwQzQuMDQ2ODggMCA0LjUgMC40NTMxMjUgNC41IDFaTTUuMjUgNS41QzUuNTE1NjIgNS41IDUuNzUgNS4zNTkzOCA1Ljg5MDYyIDUuMTI1QzYuMDMxMjUgNC45MDYyNSA2LjAzMTI1IDQuNjA5MzggNS44OTA2MiA0LjM3NUM1Ljc1IDQuMTU2MjUgNS41MTU2MiA0IDUuMjUgNEM0Ljk2ODc1IDQgNC43MzQzOCA0LjE1NjI1IDQuNTkzNzUgNC4zNzVDNC40NTMxMiA0LjYwOTM4IDQuNDUzMTIgNC45MDYyNSA0LjU5Mzc1IDUuMTI1QzQuNzM0MzggNS4zNTkzOCA0Ljk2ODc1IDUuNSA1LjI1IDUuNVoiIGZpbGw9IndoaXRlIi8%2BCjwvc3ZnPgo%3D)

## Introduction

This example demonstrates how to build a React application integrated with a Flask backend, where several uAgents perform tasks such as fetching news, scraping webpage data, and analyzing news sentiment with the Hugging Face FinBERT model.

## Project Structure

```
react-web/
├── frontend/
│   ├── public/
│   │   └── index.html
│   ├── src/
│   │   ├── components/
│   │   │   ├── NewsFeed.jsx
│   │   │   ├── SearchComponent.jsx
│   │   │   └── SearchComponent.css
│   │   ├── App.css
│   │   └── App.js
│   └── package.json
├── backend/
│   ├── app.py
│   ├── requirements.txt
│   └── agents/
│       ├── news_agent.py
│       ├── webscraper_agent.py
│       └── sentiment_agent.py
```

## Prerequisites

1. Node.js: Download from [Node.js official website](https://nodejs.org/)
2. Python 3.10+: Download from [Python official website](https://python.org/)
3. Flask: Install via pip:
```bash
pip install Flask flask-cors
```


## Backend Setup

1. Create and activate a virtual environment:
```bash
python -m venv venv
source venv/bin/activate   # macOS/Linux
# or, on Windows:
venv\Scripts\activate
```

2. Install dependencies:
```bash
cd backend
pip install -r requirements.txt
```

3. Set up the following environment variables (example export commands are shown below):
   - ALPHA_VANTAGE_API_KEY (get one from [Alpha Vantage](https://www.alphavantage.co/))
   - GNEWS_API_KEY (get one from [GNews](https://gnews.io/))
   - HUGGING_FACE_API_KEY (get one from [Hugging Face](https://huggingface.co/))
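
For example, on macOS/Linux you can export them in the shell session that will run the backend (replace the placeholder values with your own keys):

```bash
export ALPHA_VANTAGE_API_KEY="your-alpha-vantage-key"
export GNEWS_API_KEY="your-gnews-key"
export HUGGING_FACE_API_KEY="your-hugging-face-key"
```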

4. Start the three agents and then the Flask server, each in its own terminal:
```bash
python agents/news_agent.py
python agents/webscraper_agent.py
python agents/sentiment_agent.py
python app.py
```

## Frontend Setup

1. Install dependencies:
```bash
cd frontend
npm install
```

2. Start development server:
```bash
npm start
```

## Usage

1. Open http://localhost:3000 in your browser
2. View the fetched news articles and their sentiment analysis


## Architecture

The backend consists of three main agents, each answering requests through an `on_query` handler (a query sketch follows the list):

1. **News Agent**: Fetches news articles from various sources using the Alpha Vantage and GNews APIs
2. **Web Scraper Agent**: Extracts content from news articles using BeautifulSoup
3. **Sentiment Analysis Agent**: Analyzes the sentiment of news content using the FinBERT model
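
`app.py` itself is not shown in the diff below, but because each agent serves requests through the `on_query` decorator, a Flask route can reach an agent with the `uagents.query` helper. The following is a minimal, hypothetical sketch of that pattern; the route path, Flask port, and the `NEWS_AGENT_ADDRESS` placeholder are illustrative assumptions, not values taken from this commit.

```python
# Hypothetical sketch: a Flask route that queries the news agent.
import asyncio
import json

from flask import Flask, jsonify, request
from flask_cors import CORS
from uagents import Model
from uagents.query import query

# Must match the model defined in news_agent.py
class NewsRequest(Model):
    company_name: str

# Placeholder: paste the address that news_agent.py logs on startup
NEWS_AGENT_ADDRESS = "agent1q..."

app = Flask(__name__)
CORS(app)

@app.route("/api/news")
def get_news():
    company = request.args.get("company", "")
    # Send a NewsRequest to the agent and decode the reply envelope's JSON payload
    env = asyncio.run(query(destination=NEWS_AGENT_ADDRESS,
                            message=NewsRequest(company_name=company),
                            timeout=15.0))
    return jsonify(json.loads(env.decode_payload()))

if __name__ == "__main__":
    app.run(port=5000)
```

The decoded payload is the JSON-serialized `NewsResponse` (or `ErrorResponse`) produced by the agent's `on_query` handler.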

## Dependencies

### Backend
- Python 3.10+
- Flask
- uAgents
- aiohttp
- beautifulsoup4
- requests

### Frontend
- React
- Node.js
- npm
74 changes: 74 additions & 0 deletions 1-uagents/react-web/backend/agents/sentiment_agent.py
@@ -0,0 +1,74 @@
# Import required libraries
import os
import asyncio

import aiohttp
from uagents import Agent, Context, Model
from uagents.setup import fund_agent_if_low

# Define request and response data models
class SentimentRequest(Model):
    news: str

class SentimentResponse(Model):
    sentiment: str

class ErrorResponse(Model):
    error: str

# Define the sentiment analysis agent
SentimentAgent = Agent(
    name="Sentiment Agent",
    port=8002,
    seed="Sentiment Agent secret phrase",
    endpoint=["http://127.0.0.1:8002/submit"],
)

# Register the agent on the Almanac and fund it if needed.
fund_agent_if_low(SentimentAgent.wallet.address())

# Define function to get the sentiment for the given content
async def sentiment_analysis(news):
    API_URL = "https://api-inference.huggingface.co/models/ProsusAI/finbert"
    headers = {"Authorization": f"Bearer {os.getenv('HUGGING_FACE_API_KEY')}"}

    payload = {"inputs": news}

    async with aiohttp.ClientSession() as session:
        async with session.post(API_URL, headers=headers, json=payload) as response:
            if response.status == 200:
                # FinBERT returns a nested list of label/score dicts, e.g.
                # [[{"label": "positive", "score": 0.9}, {"label": "negative", ...}, ...]]
                sentiments = await response.json()
                await asyncio.sleep(5)  # Brief async pause to avoid hammering the API
                # Flatten the nested list into a single list of dicts
                flattened_sentiments = [item for sublist in sentiments for item in sublist]
                max_sentiment = max(flattened_sentiments, key=lambda s: s['score'])
                max_label = str(max_sentiment['label'])
                max_score = str(round(max_sentiment['score'], 3))
                return f"{max_label},{max_score}"
            else:
                return "Error: Failed to fetch data from API"

# On agent startup, print the agent's address
@SentimentAgent.on_event('startup')
async def agent_details(ctx: Context):
    ctx.logger.info(f'Sentiment Agent Address is {SentimentAgent.address}')

# on_query handler for processing sentiment requests
@SentimentAgent.on_query(model=SentimentRequest, replies={SentimentResponse})
async def query_handler(ctx: Context, sender: str, msg: SentimentRequest):
    try:
        sentiment = await sentiment_analysis(msg.news)
        if sentiment == "Error: Failed to fetch data from API":
            # If the model cannot handle the full text, retry with only the first 500 characters
            sentiment = await sentiment_analysis(msg.news[:500])
        ctx.logger.info(msg.news[:300])
        ctx.logger.info(sentiment)
        await ctx.send(sender, SentimentResponse(sentiment=sentiment))
    except Exception as e:
        error_message = f"Error analyzing sentiment: {str(e)}"
        ctx.logger.error(error_message)
        # Ensure the error message is sent as a string
        await ctx.send(sender, ErrorResponse(error=str(error_message)))

if __name__ == "__main__":
    SentimentAgent.run()
124 changes: 124 additions & 0 deletions 1-uagents/react-web/backend/agents/news_agent.py
@@ -0,0 +1,124 @@
# Import required libraries
import requests
import os
from uagents import Agent, Context, Model
from uagents.setup import fund_agent_if_low

# Define request and response models
class NewsRequest(Model):
    company_name: str

class UrlRequest(Model):
    company_name: str

class NewsResponse(Model):
    news_list: list

class UrlResponse(Model):
    url_list: list

class ErrorResponse(Model):
    error: str

ALPHA_VANTAGE_API_KEY = os.getenv('ALPHA_VANTAGE_API_KEY')
GNEWS_API_KEY = os.getenv('GNEWS_API_KEY')

# Define function to get the ticker symbol for a given company name
async def fetch_symbol(company_name):
    url = f"https://www.alphavantage.co/query?function=SYMBOL_SEARCH&keywords={company_name}&apikey={ALPHA_VANTAGE_API_KEY}"
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()
        # Typically, the best match is the first item in the bestMatches list
        if data.get('bestMatches') and len(data['bestMatches']) > 0:
            return data['bestMatches'][0]['1. symbol']  # Return the symbol of the best match
        else:
            return None  # No symbol found; callers fall back to the company name
    else:
        return None

# Get news descriptions and URLs for the given company name or ticker symbol
async def fetch_news(company_name):
    url = f"https://gnews.io/api/v4/search?q={company_name}&token={GNEWS_API_KEY}&lang=en"
    response = requests.get(url)
    articles = response.json().get('articles', [])
    # Return a list of url/title pairs, one per article
    news_list = []
    for article in articles:
        article_url = article.get('url', 'No url')
        description = article.get("description", "No Description")
        news_list.append({"url": article_url, "title": description})
    return news_list

# Get the news URLs for the given company name or ticker symbol
async def fetch_url(company_name):
    url = f"https://gnews.io/api/v4/search?q={company_name}&token={GNEWS_API_KEY}&lang=en"
    response = requests.get(url)
    articles = response.json().get('articles', [])
    # Return a list of article URLs
    url_list = []
    for article in articles:
        url_list.append(article.get('url', 'No url'))
    return url_list

# Define the news agent
NewsAgent = Agent(
    name="NewsAgent",
    port=8000,
    seed="News Agent secret phrase",
    endpoint=["http://127.0.0.1:8000/submit"],
)

# Register the agent on the Almanac and fund it if needed.
fund_agent_if_low(NewsAgent.wallet.address())

# On agent startup, print the agent's address
@NewsAgent.on_event('startup')
async def agent_details(ctx: Context):
    ctx.logger.info(f'News Agent Address is {NewsAgent.address}')

# on_query handler for news requests
@NewsAgent.on_query(model=NewsRequest, replies={NewsResponse})
async def news_query_handler(ctx: Context, sender: str, msg: NewsRequest):
    try:
        ctx.logger.info(f'Fetching news details for company_name: {msg.company_name}')
        symbol = await fetch_symbol(msg.company_name)
        ctx.logger.info(f'Symbol for company provided is {symbol}')
        # If a ticker symbol was found, fetch news by symbol; otherwise use the company name itself.
        if symbol is not None:
            news_list = await fetch_news(symbol)
        else:
            news_list = await fetch_news(msg.company_name)
        ctx.logger.info(str(news_list))
        await ctx.send(sender, NewsResponse(news_list=news_list))

    except Exception as e:
        error_message = f"Error fetching news details: {str(e)}"
        ctx.logger.error(error_message)
        # Ensure the error message is sent as a string
        await ctx.send(sender, ErrorResponse(error=str(error_message)))

# on_query handler for news URL requests
@NewsAgent.on_query(model=UrlRequest, replies={UrlResponse})
async def url_query_handler(ctx: Context, sender: str, msg: UrlRequest):
    try:
        ctx.logger.info(f'Fetching news URL details for company_name: {msg.company_name}')
        symbol = await fetch_symbol(msg.company_name)
        ctx.logger.info(f'Symbol for company provided is {symbol}')
        if symbol is not None:
            url_list = await fetch_url(symbol)
        else:
            url_list = await fetch_url(msg.company_name)
        ctx.logger.info(str(url_list))
        await ctx.send(sender, UrlResponse(url_list=url_list))
    except Exception as e:
        error_message = f"Error fetching news URL details: {str(e)}"
        ctx.logger.error(error_message)
        # Ensure the error message is sent as a string
        await ctx.send(sender, ErrorResponse(error=str(error_message)))


if __name__ == "__main__":
    NewsAgent.run()
77 changes: 77 additions & 0 deletions 1-uagents/react-web/backend/agents/webscraper_agent.py
@@ -0,0 +1,77 @@
# Import required libraries
import aiohttp
from bs4 import BeautifulSoup
from uagents import Agent, Context, Model
from uagents.setup import fund_agent_if_low

# Define data models to handle requests
class wrapRequest(Model):
    url: str

class Message(Model):
    message: str

class wrapResponse(Model):
    summary: str

class ErrorResponse(Model):
    error: str

# Define the web scraper agent
webScraperAgent = Agent(
    name="Web Scraper Agent",
    port=8001,
    seed="Web Scraper Agent secret phrase",
    endpoint=["http://127.0.0.1:8001/submit"],
)

# Register the agent on the Almanac and fund it if needed.
fund_agent_if_low(webScraperAgent.wallet.address())

# Define function to scrape a webpage and extract its paragraph content.
async def get_webpage_content(url):
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    response_text = await response.text()
                    soup = BeautifulSoup(response_text, 'html.parser')

                    # Remove non-content elements before extracting text
                    for script_or_style in soup(["script", "style", "header", "footer", "nav", "aside"]):
                        script_or_style.decompose()

                    text_blocks = soup.find_all('p')
                    text_content = ' '.join(block.get_text(strip=True) for block in text_blocks if block.get_text(strip=True))

                    words = text_content.split()
                    limited_text = ' '.join(words[:500])  # Limit to the first 500 words for a faster sentiment-agent response
                    return limited_text
                else:
                    return "Error: Unable to fetch content."
    except Exception as e:
        return f"Error encountered: {str(e)}"


# On agent startup, print the agent's address
@webScraperAgent.on_event('startup')
async def agent_details(ctx: Context):
    ctx.logger.info(f'Web Scraper Agent Address is {webScraperAgent.address}')

# on_query handler for webpage scraping requests
@webScraperAgent.on_query(model=wrapRequest, replies={wrapResponse})
async def query_handler(ctx: Context, sender: str, msg: wrapRequest):
    try:
        ctx.logger.info(f'Scraping request for URL: {msg.url}')
        news_content = await get_webpage_content(msg.url)
        ctx.logger.info(news_content)
        if "Error" not in news_content:
            await ctx.send(sender, wrapResponse(summary=news_content))
        else:
            await ctx.send(sender, ErrorResponse(error="ERROR: " + news_content))
    except Exception as e:
        error_message = f"Error scraping webpage: {str(e)}"
        ctx.logger.error(error_message)
        # Ensure the error message is sent as a string
        await ctx.send(sender, ErrorResponse(error=str(error_message)))

if __name__ == "__main__":
    webScraperAgent.run()