diff --git a/Labs/Lab16/DAT10-lab16.ipynb b/Labs/Lab16/DAT10-lab16.ipynb
new file mode 100644
index 0000000..eabc04f
--- /dev/null
+++ b/Labs/Lab16/DAT10-lab16.ipynb
@@ -0,0 +1,1340 @@
+{
+ "metadata": {
+ "name": "",
+ "signature": "sha256:863a0270eb0f83577956f5f255bd851bc1ad6164f4882b755cc53fa4aceebae7"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# GA Data Science 10 (DAT10) - Lab 16\n",
+ "\n",
+ "### Text Mining: Vectorization, TF-IDF, & Cosine Similarity\n",
+ "\n",
+ "1. The Vector Space Model\n",
+ "2. Vector Normalization\n",
+ "3. Term Frequency - Inverse Document Frequency (TF-IDF)\n",
+ "4. Cosine Similarity\n"
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 1,
+ "metadata": {},
+ "source": [
+ "1. Introduction to the Vector Space Model (VSM)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In information retrieval or text mining, the term frequency \u2013 inverse document frequency also called tf-idf, is a well know method to evaluate how important is a word in a document. Tf-idf are also a very interesting way to convert the textual representation of information into a Vector Space Model (VSM), or into sparse features. We\u2019ll discuss more about it later. First, let\u2019s try to understand what tf-idf and the VSM are."
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "1.1 Going to the Vector Space"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The first step in modeling the document into a vector space is to create a dictionary of terms present in documents. To do that, you can simple select all terms from the document and convert it to a dimension in the vector space, but we know that there are some kind of words (stop words) that are present in almost all documents, and what we\u2019re doing is extracting important features from documents, features do identify them among other similar documents, so using terms like \u201cthe, is, at, on\u201d, etc.. isn\u2019t going to help us, so in the information extraction, we\u2019ll just ignore them.\n",
+ "\n",
+ "Let\u2019s take the documents below to define our (extremely basic) document space:"
+ ]
+ },
+ {
+ "cell_type": "raw",
+ "metadata": {},
+ "source": [
+ "Train Document Set:\n",
+ "\n",
+ "d1: The sky is blue.\n",
+ "d2: The sun is bright.\n",
+ "\n",
+ "Test Document Set:\n",
+ "\n",
+ "d3: The sun in the sky is bright.\n",
+ "d4: We can see the shining sun, the bright sun."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, what we have to do is to create a index vocabulary (dictionary) of the words of the train document set, using the documents d1 and d2 from the document set, we\u2019ll have the following index vocabulary denoted as {E}(t) where the t is the term:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that the terms like \u201cis\u201d and \u201cthe\u201d were ignored as cited before. Now that we have an index vocabulary, we can convert the test document set into a vector space where each term of the vector is indexed as our index vocabulary, so the first term of the vector represents the \u201cblue\u201d term of our vocabulary, the second represents \u201csun\u201d and so on. Now, we\u2019re going to use the term-frequency to represent each term in our vector space; the term-frequency is nothing more than a measure of how many times the terms present in our vocabulary {E}(t) are present in the documents d3 or d4, we define the term-frequency as a counting function:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "where the {fr}(x, t) is a simple function defined as:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "So, what the tf(t,d) returns is how many times is the term t is present in the document d. An example of this, could be tf(``sun'', d4) = 2 since we have only two occurrences of the term \u201csun\u201d in the document d4. Now you understood how the term-frequency works, we can go on into the creation of the document vector, which is represented by:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Each dimension of the document vector is represented by the term of the vocabulary, for example, the {tf}(t_1,d_2) represents the frequency-term of the term 1 or t_1 (which is our \u201cblue\u201d term of the vocabulary) in the document d_2.\n",
+ "\n",
+ "Let\u2019s now show a concrete example of how the documents d_3 and d_4 are represented as vectors:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Which evaluates to:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As you can see, since the documents d_3 and d_4 are:"
+ ]
+ },
+ {
+ "cell_type": "raw",
+ "metadata": {},
+ "source": [
+ "d3: The sun in the sky is bright.\n",
+ "d4: We can see the shining sun, the bright sun."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The resulting vector v_{d_3} shows that we have, in order, 0 occurrences of the term \u201cblue\u201d, 1 occurrence of the term \u201csun\u201d, and so on. In the v_{d_3}, we have 0 occurences of the term \u201cblue\u201d, 2 occurrences of the term \u201csun\u201d, etc.\n",
+ "\n",
+ "But wait, since we have a collection of documents, now represented by vectors, we can represent them as a matrix with |D| \\times F shape, where |D| is the cardinality of the document space, or how many documents we have and the F is the number of features, in our case represented by the vocabulary size. An example of the matrix representation of the vectors described above is:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As you may have noted, these matrices representing the term frequencies tend to be very sparse (with majority of terms zeroed), and that\u2019s why you\u2019ll see a common representation of these matrix as sparse matrices."
+ ]
+ },
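+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Before turning to scikit-learn, here is a minimal hand-rolled sketch of the counting function tf(t, d) described above. The tokenization (lowercasing and splitting on word characters) is a simplification for illustration; it is not exactly what CountVectorizer does."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import re\n",
+ "\n",
+ "vocabulary = ['blue', 'sun', 'bright', 'sky']  # ordering used in the math above\n",
+ "d3 = 'The sun in the sky is bright.'\n",
+ "d4 = 'We can see the shining sun, the bright sun.'\n",
+ "\n",
+ "def tf(term, document):\n",
+ "    # count how many times `term` occurs in `document` (simplified tokenization)\n",
+ "    tokens = re.findall(r'\\w+', document.lower())\n",
+ "    return tokens.count(term)\n",
+ "\n",
+ "for doc in (d3, d4):\n",
+ "    print [tf(t, doc) for t in vocabulary]"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },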
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "1.2 Now, in Python"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "from sklearn.feature_extraction.text import CountVectorizer"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 15
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "vectorizer = CountVectorizer()"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 16
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "train_set = (\"The sky is blue.\",\n",
+ " \"The sun is bright.\")"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 17
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "test_set = (\"The sun in the sky is bright.\",\n",
+ " \"We can see the shining sun, the bright sun.\")"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 18
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "print vectorizer"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "CountVectorizer(analyzer=u'word', binary=False, charset=None,\n",
+ " charset_error=None, decode_error=u'strict',\n",
+ " dtype=, encoding=u'utf-8', input=u'content',\n",
+ " lowercase=True, max_df=1.0, max_features=None, min_df=1,\n",
+ " ngram_range=(1, 1), preprocessor=None, stop_words=None,\n",
+ " strip_accents=None, token_pattern=u'(?u)\\\\b\\\\w\\\\w+\\\\b',\n",
+ " tokenizer=None, vocabulary=None)\n"
+ ]
+ }
+ ],
+ "prompt_number": 19
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "print train_set"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "('The sky is blue.', 'The sun is bright.')\n"
+ ]
+ }
+ ],
+ "prompt_number": 20
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "vectorizer.fit_transform(train_set)\n",
+ "print vectorizer.vocabulary_"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "{u'blue': 0, u'bright': 1, u'sun': 4, u'is': 2, u'sky': 3, u'the': 5}\n"
+ ]
+ }
+ ],
+ "prompt_number": 21
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "What are some things we notice about the vocabulary we have fit above?"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "That's right -- words like \"is\" and \"the\" are so common that they are not really helpful features. Let's remove them in the next step:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "vectorizer = CountVectorizer(stop_words=\"english\")"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 22
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "count_vectors = vectorizer.fit_transform(train_set)\n",
+ "print vectorizer.vocabulary_"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "{u'blue': 0, u'sun': 3, u'bright': 1, u'sky': 2}\n"
+ ]
+ }
+ ],
+ "prompt_number": 23
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "print count_vectors"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ " (0, 2)\t1\n",
+ " (0, 0)\t1\n",
+ " (1, 3)\t1\n",
+ " (1, 1)\t1\n"
+ ]
+ }
+ ],
+ "prompt_number": 24
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now that we have constructed our vocabulary with the fit function, let's pass our test set through that vocabulary using the transform function:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "sparse_matrix = vectorizer.transform(test_set)\n",
+ "\n",
+ "print vectorizer.vocabulary_ #Note that the vocabulary has not changed; we defined it with the training set."
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "{u'blue': 0, u'sun': 3, u'bright': 1, u'sky': 2}\n"
+ ]
+ }
+ ],
+ "prompt_number": 25
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "print test_set"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "('The sun in the sky is bright.', 'We can see the shining sun, the bright sun.')\n"
+ ]
+ }
+ ],
+ "prompt_number": 26
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "print sparse_matrix"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ " (0, 1)\t1\n",
+ " (0, 2)\t1\n",
+ " (0, 3)\t1\n",
+ " (1, 1)\t1\n",
+ " (1, 3)\t2\n"
+ ]
+ }
+ ],
+ "prompt_number": 27
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that the sparse matrix created called smatrix is a Scipy sparse matrix with elements stored in a Coordinate format. But we can convert it into a dense format:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "test_set_sp = sparse_matrix.todense()\n",
+ "sparse_matrix.todense()"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "metadata": {},
+ "output_type": "pyout",
+ "prompt_number": 28,
+ "text": [
+ "matrix([[0, 1, 1, 1],\n",
+ " [0, 1, 0, 2]])"
+ ]
+ }
+ ],
+ "prompt_number": 28
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Each row in the above represents a document in our test set (d3 and d4). So, that gives us a (very basic) matrix of term frequencies in our documents. We have mapped our text documents into a vector space!"
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 1,
+ "metadata": {},
+ "source": [
+ "2. Vector Normalization"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In Section 1 above, we learned how to use the term-frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator. The important question here is: why would you, in a classification problem for instance, emphasize a term which is almost present in the entire corpus of all your documents?\n",
+ "\n",
+ "The tf-idf weight solves this problem. What tf-idf gives is how important is a word to a document in a collection. That\u2019s why tf-idf incorporates local and global parameters; it takes in consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem, is to scale down the frequent terms while scaling up the rare terms. A term that occurs 10 times more than another isn\u2019t 10 times more important than it. That\u2019s why tf-idf uses the logarithmic scale to do that.\n",
+ "\n",
+ "Let\u2019s go back to our definition of the {tf}(t,d) which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.\n",
+ "\n",
+ "To overcome this problem, the term frequency {tf}(t,d) of a document on a vector space is usually also normalized. In this Section 2, we will learn how to normalize this vector."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Suppose we are going to normalize the term-frequency vector {v_{d_4}} that we have calculated in the first part of this tutorial. The document d4 from the first part of this tutorial had this textual representation:"
+ ]
+ },
+ {
+ "cell_type": "raw",
+ "metadata": {},
+ "source": [
+ "d4: We can see the shining sun, the bright sun."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And the vector space representation using the non-normalized term-frequency of that document was:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To normalize the vector, is the same as calculating the Unit Vector of the vector, and they are denoted using the \u201chat\u201d notation: hat{v}. The definition of the unit vector hat{v} of a vector {v} is:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Where the hat{v} is the unit vector, or the normalized vector, the {v} is the vector going to be normalized and the \\|{v}\\|_p is the norm (magnitude, length) of the vector {v} in the L^p space."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The unit vector is actually nothing more than a normalized version of the vector, is a vector which the length is 1."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "But the important question here is how the length of the vector is calculated and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces."
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "2.1 Lebesgue Spaces"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Usually, the length of a vector {u} = (u_1, u_2, u_3, ..., u_n) is calculated using the Euclidean norm \u2013 a norm is a function that assigns a strictly positive length or size to all vectors in a vector space -, which is defined by:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "But this isn\u2019t the only way to define length, and that\u2019s why you see (sometimes) a number p together with the norm notation, like in \\|{u}\\|_p. That\u2019s because it could be generalized as:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "and simplified as:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "So when you read about a L2-norm, you\u2019re reading about the Euclidean norm, a norm with p=2, the most common norm used to measure the length of a vector, typically called \u201cmagnitude\u201d; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).\n",
+ "\n",
+ "When you read about a L1-norm, you\u2019re reading about the norm with p=1, defined as:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Which is nothing more than a simple sum of the components of the vector, also known as Taxicab distance, also called Manhattan distance."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 x sqrt{2} = 8.48, and is the unique shortest path."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that you can also use any norm to normalize the vector, but we\u2019re going to use the most common norm, the L2-Norm, which is also the default in the 0.9 release of the scikit-learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector, actually you can use any other method, but you have to be concise, once you\u2019ve used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used a L1-norm isn\u2019t going to have the length 1 if you\u2019re going to take its L2-norm later)."
+ ]
+ },
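+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A quick numerical check of the claims above (a minimal sketch using NumPy, which we assume is installed): the taxicab (L1) length of the vector (6, 6) is 12 while its Euclidean (L2) length is about 8.49, and a vector normalized with the L1-norm does not end up with an L2-norm of 1."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import numpy as np\n",
+ "\n",
+ "u = np.array([6.0, 6.0])\n",
+ "print 'L1 norm:', np.linalg.norm(u, ord=1)  # taxicab length: 12.0\n",
+ "print 'L2 norm:', np.linalg.norm(u, ord=2)  # Euclidean length: ~8.49\n",
+ "\n",
+ "u_l1 = u / np.linalg.norm(u, ord=1)  # normalize using the L1-norm\n",
+ "print 'L2 norm of the L1-normalized vector:', np.linalg.norm(u_l1)  # ~0.707, not 1"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },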
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "2.2 Back to Vector Normalization"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now that we know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we\u2019ll use the right terms now) to normalize our vector v_{d4} = (0,2,1,0) in order to get its unit vector hat{v_{d4}}. To do that, we\u2019ll simple plug it into the definition of the unit vector to evaluate it:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And that is it ! Our normalized vector hat{v_{d_4}} has now a L2-norm \\|hat{v_{d_4}}\\|_2 = 1.0.\n",
+ "\n",
+ "Note that here we have normalized our term frequency document vector, but later we\u2019re going to do that after the calculation of the tf-idf."
+ ]
+ },
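+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here is the same calculation with NumPy (a minimal sketch; the component ordering follows the math above, i.e. blue, sun, bright, sky):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import numpy as np\n",
+ "\n",
+ "v_d4 = np.array([0.0, 2.0, 1.0, 0.0])\n",
+ "v_d4_hat = v_d4 / np.linalg.norm(v_d4)  # np.linalg.norm uses the L2-norm by default\n",
+ "print 'unit vector:', v_d4_hat\n",
+ "print 'its L2-norm:', np.linalg.norm(v_d4_hat)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },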
+ {
+ "cell_type": "heading",
+ "level": 1,
+ "metadata": {},
+ "source": [
+ "3 The Term Frequency - Inverse Document Frequency (TF-IDF) Weight"
+ ]
+ },
+ {
+ "cell_type": "raw",
+ "metadata": {},
+ "source": [
+ "Train Document Set:\n",
+ "\n",
+ "d1: The sky is blue.\n",
+ "d2: The sun is bright.\n",
+ "\n",
+ "Test Document Set:\n",
+ "\n",
+ "d3: The sun in the sky is bright.\n",
+ "d4: We can see the shining sun, the bright sun."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Our document space can be defined then as D = { d1, d2, ..., dn} where n is the number of documents in our corpus, and in our case as D_{train} = {d1, d2} and D_{test} = {d3, d4}. The cardinality of our document space is defined by |{D_{train}}| = 2 and |{D_{test}}| = 2, since we have only 2 two documents for training and testing, but they obviously don\u2019t need to have the same cardinality.\n",
+ "\n",
+ "Let\u2019s see now, how idf (inverse document frequency) is then defined:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "where |{d : t in d}| is the number of documents where the term t appears, when the term-frequency function satisfies tf(t,d) <> 0, we\u2019re only adding 1 into the formula to avoid zero-division.\n",
+ "\n",
+ "The formula for the tf-idf is then:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*This formula has an important consequence: a high weight of the tf-idf calculation is reached when we have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).*\n",
+ "\n",
+ "Now, let\u2019s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first section above: (NOTE that the ordering of our vectors is slightly different due to the fact that sklearn now puts the dictionary in alphabetical order. So, our IDF vector will be slightly different. Do not panic ;-)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "print test_set_sp"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "[[0 1 1 1]\n",
+ " [0 1 0 2]]\n"
+ ]
+ }
+ ],
+ "prompt_number": 45
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Since we have 4 features, we have to calculate idf(t1), idf(t2), idf(t3), idf(t4):"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "These idf weights can be represented by a vector as:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
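+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here is a short sketch that computes these idf values in Python, using the natural log and the 1 added to the denominator exactly as in the formula above. Note that this is the formula from this write-up, not the exact formula scikit-learn implements, which is part of the surprise coming below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import numpy as np\n",
+ "\n",
+ "# term-frequency matrix of the test set; columns in alphabetical order: blue, bright, sky, sun\n",
+ "M = np.array([[0, 1, 1, 1],\n",
+ "              [0, 1, 0, 2]])\n",
+ "\n",
+ "n_docs = M.shape[0]\n",
+ "doc_freq = (M > 0).sum(axis=0)  # number of documents containing each term\n",
+ "idf = np.log(float(n_docs) / (1 + doc_freq))  # idf(t) = log(|D| / (1 + |{d : t in d}|))\n",
+ "print 'document frequencies:', doc_freq\n",
+ "print 'idf vector:', idf"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },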
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now that we have our matrix with the term frequency (M_{train}) and the vector representing the idf for each feature of our matrix ({idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective {idf_{train}} vector dimension. To do that, we can create a square diagonal matrix called M_{idf} with both the vertical and horizontal dimensions equal to the vector {idf_{train}} dimension:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "and then multiply it to the term frequency matrix, so the final result can be defined then as:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Please note that the matrix multiplication isn\u2019t commutative, the result of A x B will be different than the result of the B x A, and this is why the M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value to its corresponding feature:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let\u2019s see now a concrete example of this multiplication:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And finally, we can apply our L2 normalization process to the M_{tf-idf} matrix. Please note that this normalization is \u201crow-wise\u201d because we\u2019re going to handle each row of the matrix as a separated vector to be normalized, and not the matrix as a whole:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And that is our pretty normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If we take the L2-norm of each row of the matrix, we\u2019ll see that they all have a L2-norm of 1."
+ ]
+ },
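+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The whole by-hand calculation above can be reproduced in a few lines of NumPy (a sketch under the same assumptions: natural-log idf with 1 added to the denominator, followed by row-wise L2 normalization):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import numpy as np\n",
+ "\n",
+ "M = np.array([[0, 1, 1, 1],\n",
+ "              [0, 1, 0, 2]], dtype=float)  # term-frequency matrix (blue, bright, sky, sun)\n",
+ "\n",
+ "idf = np.log(M.shape[0] / (1.0 + (M > 0).sum(axis=0)))\n",
+ "M_tfidf = M.dot(np.diag(idf))  # multiply each column by its idf\n",
+ "row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1))\n",
+ "M_tfidf_normalized = M_tfidf / row_norms[:, np.newaxis]  # row-wise L2 normalization\n",
+ "print M_tfidf_normalized"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },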
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "3.1 Now in Python"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The first step is to create our training and testing document set and computing the term frequency matrix:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "#from sklearn.feature_extraction.text import CountVectorizer\n",
+ "\n",
+ "#train_set = (\"The sky is blue.\", \"The sun is bright.\")\n",
+ "#test_set = (\"The sun in the sky is bright.\", \"We can see the shining sun, the bright sun.\")\n",
+ "\n",
+ "count_vectorizer = CountVectorizer(stop_words=\"english\")\n",
+ "vocab_train = count_vectorizer.fit_transform(train_set)\n",
+ "print \"Vocabulary:\", count_vectorizer.vocabulary_"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "Vocabulary: {u'blue': 0, u'sun': 3, u'bright': 1, u'sky': 2}\n"
+ ]
+ }
+ ],
+ "prompt_number": 29
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "freq_term_matrix = count_vectorizer.transform(test_set)\n",
+ "print freq_term_matrix.todense()"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "[[0 1 1 1]\n",
+ " [0 1 0 2]]\n"
+ ]
+ }
+ ],
+ "prompt_number": 30
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible to calculate the tf-idf weights for our term frequency matrix:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "from sklearn.feature_extraction.text import TfidfTransformer\n",
+ "\n",
+ "tfidf = TfidfTransformer(norm=u'l2')\n",
+ "#tfidf = TfidfTransformer()\n",
+ "tfidf.fit(freq_term_matrix)\n",
+ "\n",
+ "print \"IDF:\", tfidf.idf_\n",
+ "#print idf_vec"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "IDF: [ 2.09861229 1. 1.40546511 1. ]\n"
+ ]
+ }
+ ],
+ "prompt_number": 31
+ },
+ {
+ "cell_type": "heading",
+ "level": 3,
+ "metadata": {},
+ "source": [
+ "WTF...?!? What happened? Any thoughts....?!? How can we debug this?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "print \"IDF:\", tfidf.idf_"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "IDF: [ 2.09861229 1. 1.40546511 1. ]\n"
+ ]
+ }
+ ],
+ "prompt_number": 32
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that we\u2019ve specified the norm as L2. This is optional (actually the default is L2-norm), but we\u2019ve added the parameter to make it explicit that it\u2019s going to use the L2-norm. Also note that we can see the calculated idf weight by accessing the internal attribute called idf_. Now that fit() method has calculated the idf for the matrix, let\u2019s transform the freq_term_matrix to the tf-idf weight matrix:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "tf_idf_matrix = tfidf.transform(freq_term_matrix)\n",
+ "print tf_idf_matrix.todense()"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "[[ 0. 0.50154891 0.70490949 0.50154891]\n",
+ " [ 0. 0.4472136 0. 0.89442719]]\n"
+ ]
+ }
+ ],
+ "prompt_number": 33
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that scikit-learn provides a function to do the vectorizing and if-idf matrix calculation all in one, you can use TfidfVectorizer. However, we must pass in a dictionary unless we want the dictionary to be learned from the documents we are analyzing."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "from sklearn.feature_extraction.text import TfidfVectorizer\n",
+ "\n",
+ "tfidf_compare = TfidfVectorizer(vocabulary=count_vectorizer.vocabulary_).fit_transform(test_set)\n",
+ "print tfidf_compare.todense()"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "[[ 0. 0.50154891 0.70490949 0.50154891]\n",
+ " [ 0. 0.4472136 0. 0.89442719]]\n"
+ ]
+ }
+ ],
+ "prompt_number": 34
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Same!"
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 1,
+ "metadata": {},
+ "source": [
+ "4. Cosine Similarity"
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "4.1 Dot Products"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We've talked about dot products before, mainly in terms of vector multiplication in linear algebra.\n",
+ "*Who can review / summarize dot products for us?*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Okay, let's now consider the dot product from a geometric definition:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "So, what happens when the angle between the two vectors is 90 degrees?"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*Q: What is the cosine of ninety degrees?*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "When the angle between the two vectors is 90 degrees, then the term vec{a} cos{theta} will be zero and the resulting multiplication with the magnitude of the vector vec{b} will also be zero. Now you know that, when the dot product between two different vectors is zero, they are orthogonal to each other (they have an angle of 90 degrees), this is a very neat way to check the orthogonality of different vectors. It is also important to note that we are using 2D examples, but the most amazing fact about it is that we can also calculate angles and similarity between vectors in higher dimensional spaces, and that is why math let us see far than the obvious even when we can\u2019t visualize or imagine what is the angle between two vectors with twelve dimensions for instance."
+ ]
+ },
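+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A quick numerical illustration of this (a sketch with NumPy): two perpendicular vectors have a dot product of zero, and solving the geometric formula for the angle recovers 90 degrees."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import numpy as np\n",
+ "\n",
+ "a = np.array([2.0, 0.0])\n",
+ "b = np.array([0.0, 3.0])\n",
+ "\n",
+ "print 'dot product:', np.dot(a, b)  # 0.0, so the vectors are orthogonal\n",
+ "\n",
+ "cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n",
+ "print 'angle in degrees:', np.degrees(np.arccos(cos_theta))  # 90.0"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },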
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "4.2 Cosine Similarity (of documents in our vector space)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "he cosine similarity between two vectors (or two documents on the Vector Space) is a measure that calculates the cosine of the angle between them. This metric is a measurement of orientation and not magnitude, it can be seen as a comparison between documents on a normalized space because we\u2019re not taking into the consideration only the magnitude of each word count (tf-idf) of each document, but the angle between the documents. What we have to do to build the cosine similarity equation is to solve the equation of the dot product for the cos{theta}:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Cosine Similarity will generate a metric that says how related are two documents by looking at the angle between them in vector space:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that even if we had a vector pointing to a point far from another vector, they still could have an small angle and that is the central point on the use of Cosine Similarity, the measurement tends to ignore the higher term count on documents. Suppose we have a document with the word \u201csky\u201d appearing 200 times and another document with the word \u201csky\u201d appearing 50, the Euclidean distance between them will be higher but the angle will still be small because they are pointing to the same direction, which is what matters when we are comparing documents.\n",
+ "\n",
+ "Now that we have a Vector Space Model of documents (like on the image below) modeled as vectors (with TF-IDF counts) and also have a formula to calculate the similarity between different documents in this space, let\u2019s see now how we do it in practice using sklearn."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "4.3 Now, in Python:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "documents = (\n",
+ "\"The sky is blue\",\n",
+ "\"The sun is bright\",\n",
+ "\"The sun in the sky is bright\",\n",
+ "\"We can see the shining sun, the bright sun\"\n",
+ ")"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 35
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "from sklearn.feature_extraction.text import TfidfVectorizer\n",
+ "tfidf_vectorizer = TfidfVectorizer()\n",
+ "tfidf_matrix = tfidf_vectorizer.fit_transform(documents)\n",
+ "print tfidf_matrix.shape"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "(4, 11)\n"
+ ]
+ }
+ ],
+ "prompt_number": 36
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we have the TF-IDF matrix (tfidf_matrix) for each document (the number of rows of the matrix) with 11 tf-idf terms (the number of columns from the matrix), we can calculate the Cosine Similarity between the first document (\u201cThe sky is blue\u201d) with each of the other documents of the set:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "from sklearn.metrics.pairwise import cosine_similarity\n",
+ "cosine_similarity(tfidf_matrix[0:1], tfidf_matrix)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "metadata": {},
+ "output_type": "pyout",
+ "prompt_number": 37,
+ "text": [
+ "array([[ 1. , 0.36651513, 0.52305744, 0.13448867]])"
+ ]
+ }
+ ],
+ "prompt_number": 37
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The tfidf_matrix[0:1] is the Scipy operation to get the first row of the sparse matrix and the resulting array is the Cosine Similarity between the first document with all documents in the set. Note that the first value of the array is 1.0 because it is the Cosine Similarity between the first document with itself. Also note that due to the presence of similar words on the third document (\u201cThe sun in the sky is bright\u201d), it achieved a better score."
+ ]
+ },
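+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a sanity check, we can reproduce the 0.523 score by hand (a small sketch using NumPy): because TfidfVectorizer L2-normalizes each row, the cosine similarity between two documents is simply the dot product of their tf-idf vectors."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import numpy as np\n",
+ "\n",
+ "dense = np.asarray(tfidf_matrix.todense())\n",
+ "print np.dot(dense[0], dense[2])  # cosine similarity between the first and third documents"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },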
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import math\n",
+ "# This was already calculated on the previous step, so we just use the value\n",
+ "cos_sim = 0.52305744\n",
+ "angle_in_radians = math.acos(cos_sim)\n",
+ "print math.degrees(angle_in_radians)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "58.4624371074\n"
+ ]
+ }
+ ],
+ "prompt_number": 38
+ }
+ ],
+ "metadata": {}
+ }
+ ]
+}
\ No newline at end of file
diff --git a/Labs/Lab16/assets/1-Eoft.png b/Labs/Lab16/assets/1-Eoft.png
new file mode 100644
index 0000000..87834d9
Binary files /dev/null and b/Labs/Lab16/assets/1-Eoft.png differ
diff --git a/Labs/Lab16/assets/10.png b/Labs/Lab16/assets/10.png
new file mode 100644
index 0000000..04c2ef3
Binary files /dev/null and b/Labs/Lab16/assets/10.png differ
diff --git a/Labs/Lab16/assets/11.png b/Labs/Lab16/assets/11.png
new file mode 100644
index 0000000..13c3a32
Binary files /dev/null and b/Labs/Lab16/assets/11.png differ
diff --git a/Labs/Lab16/assets/12.png b/Labs/Lab16/assets/12.png
new file mode 100644
index 0000000..2b6abd9
Binary files /dev/null and b/Labs/Lab16/assets/12.png differ
diff --git a/Labs/Lab16/assets/13.png b/Labs/Lab16/assets/13.png
new file mode 100644
index 0000000..6eac8d0
Binary files /dev/null and b/Labs/Lab16/assets/13.png differ
diff --git a/Labs/Lab16/assets/14.png b/Labs/Lab16/assets/14.png
new file mode 100644
index 0000000..936bd94
Binary files /dev/null and b/Labs/Lab16/assets/14.png differ
diff --git a/Labs/Lab16/assets/15.png b/Labs/Lab16/assets/15.png
new file mode 100644
index 0000000..2b6cafb
Binary files /dev/null and b/Labs/Lab16/assets/15.png differ
diff --git a/Labs/Lab16/assets/16.png b/Labs/Lab16/assets/16.png
new file mode 100644
index 0000000..302159e
Binary files /dev/null and b/Labs/Lab16/assets/16.png differ
diff --git a/Labs/Lab16/assets/17.png b/Labs/Lab16/assets/17.png
new file mode 100644
index 0000000..c2c3cd8
Binary files /dev/null and b/Labs/Lab16/assets/17.png differ
diff --git a/Labs/Lab16/assets/18.png b/Labs/Lab16/assets/18.png
new file mode 100644
index 0000000..7398e24
Binary files /dev/null and b/Labs/Lab16/assets/18.png differ
diff --git a/Labs/Lab16/assets/19.png b/Labs/Lab16/assets/19.png
new file mode 100644
index 0000000..708c75b
Binary files /dev/null and b/Labs/Lab16/assets/19.png differ
diff --git a/Labs/Lab16/assets/2.png b/Labs/Lab16/assets/2.png
new file mode 100644
index 0000000..74c2be8
Binary files /dev/null and b/Labs/Lab16/assets/2.png differ
diff --git a/Labs/Lab16/assets/20.png b/Labs/Lab16/assets/20.png
new file mode 100644
index 0000000..f019c79
Binary files /dev/null and b/Labs/Lab16/assets/20.png differ
diff --git a/Labs/Lab16/assets/21.png b/Labs/Lab16/assets/21.png
new file mode 100644
index 0000000..137ce3d
Binary files /dev/null and b/Labs/Lab16/assets/21.png differ
diff --git a/Labs/Lab16/assets/21a.png b/Labs/Lab16/assets/21a.png
new file mode 100644
index 0000000..137ce3d
Binary files /dev/null and b/Labs/Lab16/assets/21a.png differ
diff --git a/Labs/Lab16/assets/22.png b/Labs/Lab16/assets/22.png
new file mode 100644
index 0000000..2cdd733
Binary files /dev/null and b/Labs/Lab16/assets/22.png differ
diff --git a/Labs/Lab16/assets/23.png b/Labs/Lab16/assets/23.png
new file mode 100644
index 0000000..8cc7324
Binary files /dev/null and b/Labs/Lab16/assets/23.png differ
diff --git a/Labs/Lab16/assets/24.png b/Labs/Lab16/assets/24.png
new file mode 100644
index 0000000..54d23da
Binary files /dev/null and b/Labs/Lab16/assets/24.png differ
diff --git a/Labs/Lab16/assets/25.png b/Labs/Lab16/assets/25.png
new file mode 100644
index 0000000..994d338
Binary files /dev/null and b/Labs/Lab16/assets/25.png differ
diff --git a/Labs/Lab16/assets/26.png b/Labs/Lab16/assets/26.png
new file mode 100644
index 0000000..329492a
Binary files /dev/null and b/Labs/Lab16/assets/26.png differ
diff --git a/Labs/Lab16/assets/27.png b/Labs/Lab16/assets/27.png
new file mode 100644
index 0000000..e1dc264
Binary files /dev/null and b/Labs/Lab16/assets/27.png differ
diff --git a/Labs/Lab16/assets/28.png b/Labs/Lab16/assets/28.png
new file mode 100644
index 0000000..1ac3376
Binary files /dev/null and b/Labs/Lab16/assets/28.png differ
diff --git a/Labs/Lab16/assets/29.png b/Labs/Lab16/assets/29.png
new file mode 100644
index 0000000..2bc2abb
Binary files /dev/null and b/Labs/Lab16/assets/29.png differ
diff --git a/Labs/Lab16/assets/3.png b/Labs/Lab16/assets/3.png
new file mode 100644
index 0000000..d6b246d
Binary files /dev/null and b/Labs/Lab16/assets/3.png differ
diff --git a/Labs/Lab16/assets/30.png b/Labs/Lab16/assets/30.png
new file mode 100644
index 0000000..99aea82
Binary files /dev/null and b/Labs/Lab16/assets/30.png differ
diff --git a/Labs/Lab16/assets/31.png b/Labs/Lab16/assets/31.png
new file mode 100644
index 0000000..6543a3c
Binary files /dev/null and b/Labs/Lab16/assets/31.png differ
diff --git a/Labs/Lab16/assets/4.png b/Labs/Lab16/assets/4.png
new file mode 100644
index 0000000..b45b6f4
Binary files /dev/null and b/Labs/Lab16/assets/4.png differ
diff --git a/Labs/Lab16/assets/5.png b/Labs/Lab16/assets/5.png
new file mode 100644
index 0000000..b50ff8d
Binary files /dev/null and b/Labs/Lab16/assets/5.png differ
diff --git a/Labs/Lab16/assets/6.png b/Labs/Lab16/assets/6.png
new file mode 100644
index 0000000..f53295d
Binary files /dev/null and b/Labs/Lab16/assets/6.png differ
diff --git a/Labs/Lab16/assets/7.png b/Labs/Lab16/assets/7.png
new file mode 100644
index 0000000..94e8f4c
Binary files /dev/null and b/Labs/Lab16/assets/7.png differ
diff --git a/Labs/Lab16/assets/8.png b/Labs/Lab16/assets/8.png
new file mode 100644
index 0000000..2453314
Binary files /dev/null and b/Labs/Lab16/assets/8.png differ
diff --git a/Labs/Lab16/assets/9.png b/Labs/Lab16/assets/9.png
new file mode 100644
index 0000000..b41995c
Binary files /dev/null and b/Labs/Lab16/assets/9.png differ
diff --git a/Labs/Lab16/assets/Dot_Product.png b/Labs/Lab16/assets/Dot_Product.png
new file mode 100644
index 0000000..94e6d7f
Binary files /dev/null and b/Labs/Lab16/assets/Dot_Product.png differ
diff --git a/Labs/Lab16/assets/Manhattan_distance.svg b/Labs/Lab16/assets/Manhattan_distance.svg
new file mode 100644
index 0000000..48e86af
--- /dev/null
+++ b/Labs/Lab16/assets/Manhattan_distance.svg
@@ -0,0 +1,648 @@
+
+
+
diff --git a/Labs/Lab16/assets/cosine_similarity.png b/Labs/Lab16/assets/cosine_similarity.png
new file mode 100644
index 0000000..b22b71f
Binary files /dev/null and b/Labs/Lab16/assets/cosine_similarity.png differ
diff --git a/Labs/Lab16/assets/cosine_similarity_formula.tiff b/Labs/Lab16/assets/cosine_similarity_formula.tiff
new file mode 100644
index 0000000..268167c
Binary files /dev/null and b/Labs/Lab16/assets/cosine_similarity_formula.tiff differ
diff --git a/Labs/Lab16/assets/dot_product_formula.tiff b/Labs/Lab16/assets/dot_product_formula.tiff
new file mode 100644
index 0000000..26705ef
Binary files /dev/null and b/Labs/Lab16/assets/dot_product_formula.tiff differ
diff --git a/Labs/Lab16/assets/orthogonal.gif b/Labs/Lab16/assets/orthogonal.gif
new file mode 100644
index 0000000..9ff9ff5
Binary files /dev/null and b/Labs/Lab16/assets/orthogonal.gif differ
diff --git a/Labs/Lab16/assets/twitter.png b/Labs/Lab16/assets/twitter.png
new file mode 100644
index 0000000..fe04158
Binary files /dev/null and b/Labs/Lab16/assets/twitter.png differ
diff --git a/Labs/Lab16/assets/vector_space.png b/Labs/Lab16/assets/vector_space.png
new file mode 100644
index 0000000..de38a66
Binary files /dev/null and b/Labs/Lab16/assets/vector_space.png differ