
Commit 426b302

Authored Jan 31, 2018

lecture readme

1 parent 729656f · commit 426b302

File tree: 1 file changed (+6, -2 lines)

lecture_2/README.md (+6, -2)
@@ -3,5 +3,9 @@
 This lecture starts with a few primers and builds up to a full neural network that can solve a non-trivial problem.
 
 * [Linear Primer](linear_primer.pdf) - We look at the nature of linearity and linear transformations
-* [Numpy](1. numpy.ipynb) - We introduce the `jupyter notebook` and we look at numpy - the language spoken by tensorflow and pytorch (sort of).
-* [Non-Linearity](3. Non Linearity.ipynb) - We identify
+* [Numpy](1.%20numpy.ipynb) - We introduce the `jupyter notebook` and we look at numpy - the language spoken by tensorflow and pytorch (sort of).
+* [Non-Linearity](3.%20Non%20Linearity.ipynb) - We identify the limitations of linear transforms and exploit non-linearity with an illuminating example.
+* [Code - Linear-Separator](4.%20Code%20%20Linear%20Separator.ipynb) - We implement a small linear model in Keras.
+* [Code - Non Linearity.ipynb](5.%20Code%20-%20Non%20Linearity.ipynb) - We express the non-linear thought experiment in code and look at the standard deep learning building blocks.
+* [Code - Universality](6.%20Universality%20-%20XOR.ipynb) - We illustrate the universal nature of these building blocks by taking our existing code and training it on the XOR function.
+* [Code - MNIST MLP.ipynb](7.%20Code%20-%20MNIST%20MLP.ipynb) - We take building blocks covered over a few toy problems and solve a real computer vision problem.
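The substance of the fix in this commit is percent-encoding: raw spaces in a Markdown link target generally break the link when GitHub renders the README, so each space in the notebook filenames is replaced with `%20`. A small sketch of producing the encoded targets with Python's standard `urllib.parse.quote` (the filenames below are taken from the diff):

```python
from urllib.parse import quote

# Notebook filenames as they appear in the repository; the README's
# relative links must percent-encode the spaces to resolve correctly.
filenames = [
    "1. numpy.ipynb",
    "3. Non Linearity.ipynb",
    "7. Code - MNIST MLP.ipynb",
]

for name in filenames:
    # quote() encodes ' ' as %20 and leaves letters, digits, '.' and '-' intact.
    print(f"[{name}]({quote(name)})")
# [1. numpy.ipynb](1.%20numpy.ipynb)
# [3. Non Linearity.ipynb](3.%20Non%20Linearity.ipynb)
# [7. Code - MNIST MLP.ipynb](7.%20Code%20-%20MNIST%20MLP.ipynb)
```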
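The new bullets describe a progression from linear models to XOR, the classic example of a problem a linear model cannot solve. As a rough illustration of the idea behind the Universality - XOR entry (a hypothetical sketch, not the notebook's actual code; it assumes `tensorflow.keras` and an arbitrary small tanh hidden layer):

```python
import numpy as np
from tensorflow import keras

# XOR truth table: not linearly separable, so a purely linear
# model cannot fit all four points.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

# One small non-linear hidden layer is enough to separate XOR.
model = keras.Sequential([
    keras.layers.Dense(4, activation="tanh"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2000, verbose=0)

print(model.predict(X).round())  # expected: [[0.], [1.], [1.], [0.]]
```

Dropping the hidden layer (leaving a single `Dense(1, activation="sigmoid")`) restricts the model to a linear decision boundary, which can classify at most three of the four XOR points correctly.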
