Teller AI

Module 4 | Team: Autumn Martin, Daniel Mulitauopele, Dina Caraballo, Nick Dambrosio

Cross-Pollination Capstone: a project where backend and frontend students collaborate to build an app together. This project encompasses separate backend and frontend apps that communicate with each other via API requests.

[Screenshot of the Teller AI app]

About

Intro

Would you like to find out the most recent attitudes towards a particular cryptocurrency? Teller AI retrieves relevant tweets and runs Watson sentiment analysis to determine the overall sentiment towards a coin.

Teller AI is deployed here.

Background

Teller AI is a Python Django API. It consumes tweet data from Twitter and accesses Watson Sentiment Analysis to analyze tweets.

Teller AI is a microservice for our Teller project, which also includes a Ruby on Rails backend, Teller API, and a React frontend, Teller.
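
As a rough illustration, the core flow looks something like the sketch below. This is a minimal, hypothetical sketch, not the actual implementation: the fetch_tweets helper, the environment variable names, and the Watson Tone Analyzer URL are assumptions, and the real app wires this logic into a Django view.

import os
import requests

# Assumed environment variables pointing at a Watson Tone Analyzer instance.
WATSON_URL = os.environ["WATSON_TONE_URL"]
WATSON_API_KEY = os.environ["WATSON_API_KEY"]

def fetch_tweets(coin):
    """Hypothetical helper: return recent tweet texts mentioning the coin."""
    return []  # placeholder; the real service queries the Twitter search API

def document_tones(coin):
    # Combine recent tweets into one document and ask Watson for its tones.
    text = " ".join(fetch_tweets(coin))
    response = requests.post(
        f"{WATSON_URL}/v3/tone",
        params={"version": "2017-09-21"},
        auth=("apikey", WATSON_API_KEY),
        json={"text": text},
    )
    response.raise_for_status()
    tones = response.json().get("document_tone", {}).get("tones", [])
    return {"document_tones": [tone["tone_id"] for tone in tones]}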

Our Teller Project Demo

Tech Stack

Python 3.7.1, Django 2.1.4, Heroku, Django-Nose testing

Relevant Links

Teller (Front-End) GitHub | Heroku
Teller API (Back-End) GitHub | Heroku

Endpoints

GET /teller/watson_analysis?coin=#{search_params}

Example Request:

GET /teller/watson_analysis?coin=dogecoin

Example Response:

{ "document_tones": ["joy", "tentative"] }

Getting Started

First, clone this project by running git clone git@github.com:DanielMulitauopele/teller-ai.git in the CLI.

If you are new to Python and Django, this may be helpful for setting up.

This app uses Python 3.7.1 and Django 2.1.4. Other requirements for this app are located here.
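
Assuming a standard pip-based setup, the dependencies can typically be installed by running pip install -r requirements.txt from the project root, ideally inside a virtual environment.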

Development

For development, you may either start the Django development server or start the Gunicorn server.

To start the Django server, run python manage.py runserver 8080. The default port is 8000, but this last command-line argument will change it to port 8080.

To start the Gunicorn server locally (that's right, it's a Green Unicorn!), run gunicorn tellerai.wsgi.

Staging & Production

This app is deployed to Heroku. We use beta-teller-ai for staging prior to production, and teller-ai for production.

Only accepted collaborators may push the master branch to beta-teller-ai. From there, the build may be manually promoted to production on teller-ai.

Testing

To run the test suite and see test coverage, run python manage.py test.

Because of our settings, this command already includes the command-line arguments --with-coverage --cover-package=teller --verbosity=1, which run coverage for the files within the teller directory. The output will be similar to this:

[Example test coverage output]
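
These arguments come from the project's Django settings. A typical django-nose configuration along these lines (a sketch; the exact values in settings.py may differ) is:

# tellerai/settings.py (sketch; exact module path and values may differ)
TEST_RUNNER = "django_nose.NoseTestSuiteRunner"

NOSE_ARGS = [
    "--with-coverage",
    "--cover-package=teller",
    "--verbosity=1",
]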

If you would like more detailed coverage information, run coverage run manage.py test. To view the results in the terminal, run coverage report.

For a more readable presentation, run coverage html. This will create or update a folder called htmlcov. Open htmlcov/index.html in your browser; for example, right-click on index.html within htmlcov, choose Open With, and select your browser of choice (note: you may need to locate it in Finder first).
