Grammatical Errors in README #6

# Sentiment Analysis with LSTM in Persian

## First Phase
#### Data Acquisition
In this repository, I have used an LSTM to predict whether people would like or dislike a product based on previous comments on the Digikala site. I have scraped the data from Digikala and labeled it based on the stars that buyers gave to the products. I have also used another label from the same website, which indicates whether people suggest that others buy the product or not. Because many of the comments are noisy, do not provide clean data, and are not such a reliable source on their own, adding the second label ensures a higher accuracy of our training data.

For label clarification:\
(1) -> Indicates customers suggesting that others buy the product\
(2) -> Indicates otherwise\
(3) -> Illustrates a neutral opinion about the product\
(4) -> Customer has rated the product but not suggested whether to buy it or not\
The two- or three-digit number indicates the satisfaction percentage of the consumer who wrote the preceding comment.

You can find this data in the "totalReviewWithSuggestion.csv" file.
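As a rough illustration of how the star ratings and suggestion labels could be combined to filter noisy comments, here is a minimal sketch. The column names (`comment`, `stars`, `suggestion`) and the agreement rule are assumptions for illustration; the actual schema of "totalReviewWithSuggestion.csv" may differ.

```python
import csv
import io

# Hypothetical rows mimicking the scraped Digikala data; the real
# "totalReviewWithSuggestion.csv" schema may differ.
sample = io.StringIO(
    "comment,stars,suggestion\n"
    "kala kheili khoob bood,5,1\n"
    "aslan khoob nabood,1,2\n"
    "nazari nadaram,3,3\n"
)

SUGGESTION_MEANING = {
    "1": "recommended",      # suggests others buy the product
    "2": "not_recommended",  # suggests otherwise
    "3": "neutral",          # neutral opinion
    "4": "rated_only",       # rated, but no buy/no-buy suggestion
}

def label_row(row):
    """Keep a row only when stars and suggestion agree, to reduce noise."""
    stars = int(row["stars"])
    suggestion = SUGGESTION_MEANING[row["suggestion"]]
    if stars >= 4 and suggestion == "recommended":
        return 1  # positive
    if stars <= 2 and suggestion == "not_recommended":
        return 0  # negative
    return None  # ambiguous or neutral: drop from training data

labels = [label_row(r) for r in csv.DictReader(sample)]
print(labels)  # [1, 0, None]
```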

## Second Phase
#### Data Preparation
In this phase, I have cleaned my data with the Hazm library and applied other modifications, which are commented in my source code. Then, I have split my dataset into training and testing parts.
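This phase can be sketched as follows. In the actual code, Hazm's `Normalizer` and `word_tokenize` perform the cleaning; here a toy punctuation-stripping tokenizer stands in so the example is self-contained, and the sample comments and split ratio are made up.

```python
import random

def clean(text):
    # Stand-in for Hazm's Normalizer().normalize() and word_tokenize();
    # here we only strip punctuation and lowercase.
    for ch in ".,!?؟،":
        text = text.replace(ch, " ")
    return text.lower().split()

def train_test_split(pairs, test_ratio=0.2, seed=42):
    """Shuffle (tokens, label) pairs and split into train/test parts."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * (1 - test_ratio))
    return pairs[:cut], pairs[cut:]

data = [(clean("Great product, totally worth it!"), 1),
        (clean("Not good. Broke after a week."), 0),
        (clean("Average quality, nothing special."), None),
        (clean("Loved it!"), 1),
        (clean("Terrible."), 0)]

train, test = train_test_split(data)
print(len(train), len(test))  # 4 1
```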

## Third Phase
#### Build Your Own Neural Network in TensorFlow for LSTM
I have built my own graph for calculating the sentiment of each sentence based on the scores mentioned above.
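The README does not show the graph itself, but the core recurrence an LSTM computes can be sketched in NumPy. This is a generic single LSTM step, not the repository's actual TensorFlow graph, and the dimensions are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: gates stacked as [input, forget, cell, output]."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # pre-activations, shape (4n,)
    i = sigmoid(z[0:n])                 # input gate
    f = sigmoid(z[n:2*n])               # forget gate
    g = np.tanh(z[2*n:3*n])             # candidate cell state
    o = sigmoid(z[3*n:4*n])             # output gate
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

rng = np.random.default_rng(0)
emb, hid = 8, 4                          # toy embedding / hidden sizes
W = rng.normal(scale=0.1, size=(4 * hid, emb))
U = rng.normal(scale=0.1, size=(4 * hid, hid))
b = np.zeros(4 * hid)

h = c = np.zeros(hid)
for x in rng.normal(size=(5, emb)):      # run over 5 fake token embeddings
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)  # (4,)
```

The final hidden state `h` is what a sentiment head (e.g. a dense layer with a sigmoid) would consume to produce the like/dislike score.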

## Fourth Phase
#### Compute the Word Embeddings
In this phase, I have used a valuable guide from another repository, which I have included in the "ipynb_checkpoints" folder of my repository for anyone who wants to become more familiar with what I have done. As mentioned there, since the one-hot method is too cumbersome and inefficient, I have prepared a dictionary of my vocabulary and converted it to a feature vector.
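A minimal sketch of building such a vocabulary dictionary and turning comments into fixed-length integer feature vectors instead of one-hot vectors. Reserving index 0 for padding and 1 for unknown words is a common convention, not necessarily what this repository does.

```python
from collections import Counter

def build_vocab(tokenized_comments, max_size=10000):
    """Map each word to an integer id; 0 = padding, 1 = unknown."""
    counts = Counter(w for comment in tokenized_comments for w in comment)
    vocab = {"<pad>": 0, "<unk>": 1}
    for word, _ in counts.most_common(max_size - len(vocab)):
        vocab[word] = len(vocab)
    return vocab

def encode(comment, vocab, max_len=6):
    """Convert tokens to ids, then pad/truncate to a fixed length."""
    ids = [vocab.get(w, vocab["<unk>"]) for w in comment][:max_len]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))

comments = [["great", "phone", "great", "battery"],
            ["bad", "battery"]]
vocab = build_vocab(comments)
print(encode(["great", "screen"], vocab))  # [2, 1, 0, 0, 0, 0]
```

An embedding layer then maps each integer id to a dense vector, which is far more compact than a one-hot representation over the full vocabulary.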

## Fifth Phase
#### Training and Testing
I have trained and tested the code on my own dataset, reaching an accuracy of nearly 93 percent.
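The accuracy metric quoted above is simply the fraction of held-out comments classified correctly, which can be computed as below. The predictions and labels here are made up for illustration, not the repository's actual results.

```python
def accuracy(predictions, labels):
    """Fraction of test comments whose predicted sentiment matches the label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical model outputs on a tiny held-out set (1 = like, 0 = dislike).
preds  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(preds, labels))  # 0.9
```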

#### Thanks
Thanks to Mr. [AminMozhgani](https://github.com/AminMozhgani) for his devoted assistance throughout the project.