
Commit 48a3dae

Update README and upload neuralnetwork.py

1 parent 8d1d515 · commit 48a3dae

File tree: 2 files changed (+61, −7 lines)

hw4/README.md (+8, −7)
@@ -1,18 +1,19 @@
 # Introduction to Tensorflow
-Please submit each notebook by the below deadline to gradescope.
+Please submit each notebook by the deadline below to gradescope. This is just to keep you on pace to complete the homeworks in time.
 
-**DEADLINE:** Saturday (10/19)
+NOTE: Deadlines have been shifted around.
+
+**DEADLINE:** Sunday (10/20)
+1. Start with `neural_network.py`. In this file you will be implementing a neural network from scratch using numpy! This is a tough one! Feel free to consult Google or anyone in the club if you need help!
+
+**DEADLINE:** Tuesday (10/22)
 
 2. Do the Tensorflow Tutorial notebook. You should finish this notebook by Saturday.
 
-**DEADLINE:** Monday (10/21) --> For both of these
+**DEADLINE:** Thursday (10/24) --> For both of these
 
 3. Next, you will be implementing a Denoising Autoencoder in tensorflow! In this homework you will gain more tf experience and train a model on AWS!
 
 4. Finish with the Keras is Cool notebook. This will introduce you to Keras. It should not take too long and is not nearly as important as the other homeworks.
 
-**DEADLINE:** Thursday (10/24)
-NOTE: This has not been pushed to github yet... it will be released in the near future.
-1. Start with `neural_network.py`. In this file you will be implementing a neural network from scratch using numpy! Don't worry though... there is plenty of guidance, and if you have any questions feel free to reach out.
-
 

hw4/neuralnetwork.py (+53, new file)

@@ -0,0 +1,53 @@
import numpy as np


class NeuralNet:

    # Constructor; we will hardcode this to a 1-hidden-layer network, for simplicity.
    # The problem we will grade on is differentiating 0s and 1s.
    # Some things/structure may need to be changed. What needs to stay consistent is us being able to call
    # forward with 2 arguments: a data point and a label. Strange architecture, but should be good for learning.
    def __init__(self, input_size=784, hidden_size=100, output_size=1):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # YOUR CODE HERE: initialize appropriately sized weights/biases with random parameters
        self.weight1 = None
        self.bias1 = None
        self.weight2 = None
        self.bias2 = None

        # Potentially helpful: np.dot(a, b); also @ is the matrix product in numpy (a @ b)

    # Loss function: implement L1 loss
    # YOUR CODE HERE
    def loss(self, y0, y1):
        pass

    # relu and sigmoid, nonlinear activations
    # YOUR CODE HERE
    def relu(self, x):
        pass

    # You may also want the derivatives of relu and sigmoid

    def sigmoid(self, x):
        pass

    # Forward function; you may assume x is the correct input size.
    # Have the activation from the input to hidden layer be relu, and from hidden to output be sigmoid.
    # Have your forward function call backprop: we won't be doing batch training, so for EVERY SINGLE input
    # we will update our weights. This is not always (maybe not even here) possible or practical. Why?
    # Also, normally forward doesn't take in labels. Since we'll have forward call backprop, it'll take in labels.
    # YOUR CODE HERE
    def forward(self, x, label):
        pass

    # Implement backprop; it might help to have a helper function that updates the weights.
    # Recommend you check out the YouTube channel 3Blue1Brown and their video on backprop.
    # YOUR CODE HERE
    def backprop(self, x, label):  # What else might we need to take in as arguments? Modify as necessary.

        # Compute the gradients first.
        # The first will have to do with combining the derivative of sigmoid, the output layer, and what else?
        # np.sum(x, axis, keepdims) may be useful

        # Update your weights and biases. Use a learning rate of 0.1, and update on every call to backprop.
        lr = 0.1
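If you get stuck, here is a minimal sketch of one possible way to fill in the skeleton, assuming a single input x of shape (784,), a scalar 0/1 label, relu in the hidden layer, a sigmoid output, L1 loss, and a per-example update with a learning rate of 0.1, as the comments describe. The class name NeuralNetSketch, the seeded random initialization, the relu_deriv helper, and the extra cached activations passed into backprop are illustrative choices, not requirements of the assignment; treat this as a reference, not the graded solution.

import numpy as np

class NeuralNetSketch:
    def __init__(self, input_size=784, hidden_size=100, output_size=1):
        # Small random weights, zero biases (one of many reasonable initializations).
        rng = np.random.default_rng(0)
        self.weight1 = rng.normal(0, 0.01, (hidden_size, input_size))
        self.bias1 = np.zeros(hidden_size)
        self.weight2 = rng.normal(0, 0.01, (output_size, hidden_size))
        self.bias2 = np.zeros(output_size)

    def relu(self, x):
        return np.maximum(0, x)

    def relu_deriv(self, x):
        # Derivative of relu with respect to its input (subgradient 0 at x = 0).
        return (x > 0).astype(x.dtype)

    def sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def loss(self, y0, y1):
        # L1 loss between prediction and label.
        return np.abs(y0 - y1)

    def forward(self, x, label):
        z1 = self.weight1 @ x + self.bias1    # (100,) pre-activation, hidden layer
        a1 = self.relu(z1)
        z2 = self.weight2 @ a1 + self.bias2   # (1,) pre-activation, output layer
        out = self.sigmoid(z2)
        # Update the weights on every single example, as the skeleton asks.
        self.backprop(x, label, z1, a1, out)
        return out

    def backprop(self, x, label, z1, a1, out, lr=0.1):
        # dL/dout for L1 loss is the sign of the error.
        dout = np.sign(out - label)
        # Chain through the sigmoid: d sigmoid(z2)/dz2 = out * (1 - out).
        dz2 = dout * out * (1.0 - out)         # (1,)
        dW2 = np.outer(dz2, a1)                # (1, 100)
        db2 = dz2
        da1 = self.weight2.T @ dz2             # (100,)
        dz1 = da1 * self.relu_deriv(z1)
        dW1 = np.outer(dz1, x)                 # (100, 784)
        db1 = dz1
        # Gradient-descent step with learning rate 0.1.
        self.weight2 -= lr * dW2
        self.bias2 -= lr * db2
        self.weight1 -= lr * dW1
        self.bias1 -= lr * db1

# Example usage (illustrative data, not the graded 0-vs-1 set):
# net = NeuralNetSketch()
# x = np.random.rand(784)
# pred = net.forward(x, 1)   # one forward pass plus one weight update

Per-example updates keep the code simple, but they forgo the variance reduction and vectorization speedups of mini-batch training, which is one answer to the "why is this not always practical?" question in the skeleton.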
