
Commit 5be45fd

Updating figures & descriptions for Ch 8 #23
1 parent 9912d41 commit 5be45fd

1 file changed: +39 −11 lines changed

Ch8/README.md (+39 −11)
# Social Media

## 🔖 Outline

To be added

## 🗒️ Notebooks

Set of notebooks associated with the chapter.

1. **[Create a wordcloud](https://github.com/practical-nlp/practical-nlp/blob/master/Ch8/01_WordCloud.ipynb)**: How to create a word cloud, which is often used to get a quick sense of the text corpus at hand (see the first sketch after this list).

2. **[Effect of different tokenizers on Social Media Text Data](https://github.com/practical-nlp/practical-nlp/blob/master/Ch8/02_DifferentTokenizers.ipynb)**: Here we show how different tokenizers can produce different outputs for the same input text. When dealing with text data from social platforms, this can have a huge bearing on the performance of the task (a comparison sketch follows the list). Here, we work with five different tokenizers, namely:

    * [word_tokenize from NLTK](https://www.nltk.org/api/nltk.tokenize.html)
    * [TweetTokenizer from NLTK](https://www.nltk.org/api/nltk.tokenize.html)
    * [Twikenizer](https://pypi.org/project/twikenizer/)
    * [Twokenizer by ARK@CMU](http://www.cs.cmu.edu/~ark/TweetNLP/)
    * [twokenize](https://github.com/leondz/twokenize)

3. **[Trending topics](https://github.com/practical-nlp/practical-nlp/blob/master/Ch8/03_TrendingTopics.ipynb)**: Find trending topics on Twitter using tweepy.

4. **[Sentiment Analysis](https://github.com/practical-nlp/practical-nlp/blob/master/Ch8/04_Sentiment_Analysis_Textblob.ipynb)**: Basic sentiment analysis using TextBlob (see the TextBlob sketch after this list).

5. **[Preprocessing Social Media Text Data](https://github.com/practical-nlp/practical-nlp/blob/master/Ch8/O5_smtd_preprocessing.py)**: Common functions involved in the pre-processing pipeline for Social Media Text Data (a small cleaning sketch follows the list).

6. **[Text representation of Social Media Text Data](https://github.com/practical-nlp/practical-nlp/blob/master/Ch8/06_SMTD_embeddings.ipynb)**: How to use embeddings to represent Social Media Text Data (see the embedding sketch after the list).

7. **Sentiment Analysis**: Here we use the preprocessing and representation steps learned earlier to build a better classifier.
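
Below is a minimal word-cloud sketch (assuming the third-party `wordcloud` and `matplotlib` packages); the `tweets` list is an invented stand-in for the corpus the notebook actually loads, so treat this as an illustration rather than the notebook's exact code.

```python
# Minimal word-cloud sketch; not the exact code from 01_WordCloud.ipynb.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Hypothetical stand-in corpus; the notebook loads its own text data.
tweets = [
    "Loving the new phone, battery life is great!",
    "Traffic in the city is terrible today #commute",
    "Great match last night, what a comeback!",
]

# WordCloud computes word frequencies from the joined text and draws the cloud.
wc = WordCloud(width=800, height=400, background_color="white").generate(" ".join(tweets))

plt.figure(figsize=(10, 5))
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```
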
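The tokenizer comparison can be reproduced in miniature with the two NLTK tokenizers from the list above; the sample tweet is made up, and the notebook compares more tokenizers than shown here.

```python
# Compare a generic tokenizer with a tweet-aware one on the same input.
# word_tokenize needs the NLTK 'punkt' resource downloaded first.
import nltk
from nltk.tokenize import word_tokenize, TweetTokenizer

nltk.download("punkt", quiet=True)

tweet = "@user OMG this is sooo cooool :-) #nlproc http://example.com"

print(word_tokenize(tweet))              # splits '@user', ':-)' and '#nlproc' into pieces
print(TweetTokenizer().tokenize(tweet))  # keeps handles, emoticons and hashtags intact
```

On a tweet like this, the generic tokenizer breaks the handle, emoticon, and hashtag into separate symbols, while the tweet-aware tokenizer keeps them as single tokens, which is why tokenizer choice matters so much for social media text.
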
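TextBlob's sentiment API is small enough to sketch in a few lines (example sentences invented; the notebook may wrap this differently):

```python
# Basic sentiment scoring with TextBlob: polarity in [-1, 1], subjectivity in [0, 1].
from textblob import TextBlob

for text in ["I love this phone!", "This update is awful.", "The package arrived today."]:
    sentiment = TextBlob(text).sentiment
    print(f"{text} -> polarity={sentiment.polarity:.2f}, subjectivity={sentiment.subjectivity:.2f}")
```
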
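The preprocessing script itself is not reproduced here; the sketch below only illustrates the kind of regex-based cleaning such a pipeline usually contains, and the helper names are hypothetical (they are not taken from the script above).

```python
# Illustrative social-media cleaning helpers (hypothetical names).
import re

def remove_urls(text: str) -> str:
    return re.sub(r"https?://\S+|www\.\S+", "", text)

def remove_mentions_and_hashtag_signs(text: str) -> str:
    text = re.sub(r"@\w+", "", text)       # drop @handles entirely
    return re.sub(r"#(\w+)", r"\1", text)  # keep the hashtag word, drop the '#'

def normalize_whitespace(text: str) -> str:
    return re.sub(r"\s+", " ", text).strip()

raw = "Check this out @user!! http://t.co/abc #NLProc #awesome"
clean = normalize_whitespace(remove_mentions_and_hashtag_signs(remove_urls(raw)))
print(clean)  # -> "Check this out !! NLProc awesome"
```
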
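As a rough illustration of representing social media text with embeddings, the sketch below trains a tiny Word2Vec model with gensim (4.x constructor arguments assumed) on an invented toy corpus; the notebook may instead rely on pretrained embeddings.

```python
# Train a toy Word2Vec model on tweet-tokenized text and average word vectors.
import numpy as np
from gensim.models import Word2Vec
from nltk.tokenize import TweetTokenizer

tok = TweetTokenizer(preserve_case=False)
corpus = [
    "loving the new phone :-) #happy",
    "the new phone battery is great",
    "terrible traffic again today #commute",
]
sentences = [tok.tokenize(t) for t in corpus]

# vector_size/min_count/epochs are illustrative; a real corpus needs far more data.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

# One simple tweet-level representation: the mean of its word vectors.
tweet_vector = np.mean([model.wv[word] for word in sentences[0]], axis=0)
print(tweet_vector.shape)  # (50,)
```
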
## 🖼️ Figures
Color figures as requested by the readers.

![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-1.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-2.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-3.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-4.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-5.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-6.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-7.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-8.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-9.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-10.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-11.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-12.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-13.png)
![figure](https://github.com/practical-nlp/practical-nlp-figures/raw/master/figures/8-14.png)
