Neural Network from Scratch in C

This project implements a simple feedforward neural network entirely from scratch using the C programming language. It supports multiple hidden layers, various activation functions, and outputs performance metrics and loss plots using Gnuplot.
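
The loss plots are produced with Gnuplot. As a minimal sketch of how that can be done from C (the file name loss.dat, the output loss.png, and the plot command here are illustrative assumptions, not necessarily what src/neuralnet.c does):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: dump per-epoch loss to a data file and plot it with Gnuplot.
   File names and plot styling are assumptions, not the repository's actual format. */
static void plot_loss(const double *loss_history, int n_epochs)
{
    FILE *f = fopen("loss.dat", "w");
    if (!f) return;
    for (int e = 0; e < n_epochs; ++e)
        fprintf(f, "%d %f\n", e, loss_history[e]);
    fclose(f);

    /* Requires gnuplot on PATH; writes loss.png to the working directory. */
    system("gnuplot -e \"set terminal png; set output 'loss.png'; "
           "set xlabel 'epoch'; set ylabel 'loss'; plot 'loss.dat' with lines\"");
}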


🔧 Build & Run Instructions

make              # Builds the binary
make run arg="<args>"   # Compiles src/neuralnet.c and runs the binary with provided arguments
make clean        # Removes all object files from the lib directory

🧠 Neural Network Parameters

To run the program, provide the following arguments in the order shown:

<1> Number of layers (including output layer)
<2> Number of neurons in each layer (comma-separated)
<3> Activation function (1=sigmoid, 2=tanh, 3=relu)
<4> Number of epochs
<5> Learning rate
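
A rough sketch of how these arguments could be parsed (a sketch only; variable names are illustrative and not necessarily those used in src/neuralnet.c):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 6) {
        fprintf(stderr, "usage: %s <layers> <neurons,...> <activation> <epochs> <lr>\n", argv[0]);
        return 1;
    }

    int    n_layers   = atoi(argv[1]);   /* e.g. 3 */
    int    activation = atoi(argv[3]);   /* 1=sigmoid, 2=tanh, 3=relu */
    int    epochs     = atoi(argv[4]);   /* e.g. 1000 */
    double lr         = atof(argv[5]);   /* e.g. 0.1 */

    /* Split the comma-separated neuron counts, e.g. "10,10,2". */
    int *sizes = malloc(n_layers * sizeof *sizes);
    int  i = 0;
    for (char *tok = strtok(argv[2], ","); tok && i < n_layers; tok = strtok(NULL, ","))
        sizes[i++] = atoi(tok);

    printf("layers=%d activation=%d epochs=%d lr=%g\n", n_layers, activation, epochs, lr);
    free(sizes);
    return 0;
}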

📌 Example

make run arg="3 10,10,2 1 1000 0.1"

This example creates a 3-layer network: two hidden layers of 10 neurons each and an output layer of 2 neurons, trained with sigmoid activation for 1000 epochs at a learning rate of 0.1.
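
To illustrate what that configuration implies for storage, each layer gets one weight matrix and one bias vector sized by consecutive layer widths. A sketch assuming a row-major layout (n_inputs is a placeholder; in practice the input width would come from the dataset):

#include <stdlib.h>

/* Sketch only: allocate weights and biases for the "10,10,2" example.
   n_inputs is hypothetical; the real input width depends on the dataset. */
void allocate_example(void)
{
    int n_inputs = 4;                  /* assumed dataset feature count */
    int sizes[]  = { 10, 10, 2 };      /* hidden, hidden, output */
    int n_layers = 3;

    double **W = malloc(n_layers * sizeof *W);
    double **b = malloc(n_layers * sizeof *b);
    for (int l = 0; l < n_layers; ++l) {
        int fan_in = (l == 0) ? n_inputs : sizes[l - 1];
        W[l] = malloc((size_t)sizes[l] * fan_in * sizeof **W);  /* sizes[l] x fan_in, row-major */
        b[l] = calloc(sizes[l], sizeof **b);
    }
    /* ... train, then free b[l], W[l], b, W ... */
}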


📊 Sample Results

1. Sigmoid Activation

Command: arg="3 10,10,2 1 1000 0.1"
Final Test Accuracy: ~90.83%

(Plot: sigmoid training output)


2. Tanh Activation

Command: arg="3 10,10,2 2 1500 0.1"
Final Test Accuracy: ~95.21%

(Plots: tanh loss and tanh accuracy)


3. ReLU Activation (with issues)

Command: arg="3 10,10,2 3 1500 0.1"
Final Test Accuracy: ~60.47%

(Plot: ReLU training issue)

🔍 Note: ReLU learning appears slower and final accuracy is significantly lower. This may indicate a bug or missing implementation detail for ReLU-based backpropagation.
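
A frequent cause of this kind of bug is taking the derivative with respect to the wrong quantity (the post-activation output instead of the pre-activation z). For reference, a hedged sketch of ReLU and sigmoid with their derivatives as they would appear in backprop; this is not the repository's actual code:

#include <math.h>

/* ReLU and its derivative with respect to the pre-activation z.
   In backprop, the upstream gradient is multiplied elementwise by relu_prime(z). */
static double relu(double z)       { return z > 0.0 ? z : 0.0; }
static double relu_prime(double z) { return z > 0.0 ? 1.0 : 0.0; }

/* For comparison, sigmoid's derivative is a * (1 - a) with a = sigmoid(z),
   which peaks at 0.25 (the vanishing-gradient issue noted in the observations below). */
static double sigmoid(double z)       { return 1.0 / (1.0 + exp(-z)); }
static double sigmoid_prime(double z) { double a = sigmoid(z); return a * (1.0 - a); }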


📚 Comparison with Scikit-learn (SGD Optimizer)

Activation   Epochs   Accuracy (%)
Sigmoid      1500     90.84
Tanh         1500     95.80
ReLU         1500     95.10

🧪 Observations

  • Sigmoid & Tanh: Suffer from vanishing gradient issues due to limited derivative ranges (e.g., sigmoid’s max derivative is 0.25).
  • ReLU: Works well in scikit-learn, but convergence in this C implementation is poor, likely due to incorrect gradient calculations or weight initialization (a possible initialization fix is sketched after this list).
  • Accuracy also drops when deeper layers are used with ReLU in this implementation.
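
One commonly suggested remedy for poor ReLU convergence is He initialization, which draws weights with variance 2 / fan_in. A minimal sketch under that assumption (not taken from the repository's code):

#include <math.h>
#include <stdlib.h>

/* He initialization approximated with a uniform distribution:
   uniform(-limit, limit) with limit = sqrt(6 / fan_in) has variance 2 / fan_in. */
static double he_init(int fan_in)
{
    double limit = sqrt(6.0 / fan_in);
    return ((double)rand() / RAND_MAX) * 2.0 * limit - limit;
}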

🧠 Concepts Covered

  • Feedforward neural network
  • Manual matrix operations (see the forward-pass sketch after this list)
  • Backpropagation
  • Loss function implementation
  • Activation functions (Sigmoid, Tanh, ReLU)
  • Gradient descent
  • CLI-based configuration and training
  • Gnuplot-based visualization
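
To make the feedforward and manual matrix-operation items above concrete, here is a minimal sketch of one dense-layer forward pass (function and variable names are illustrative, not the repository's):

/* One dense layer: out[j] = act( b[j] + sum_i W[j*fan_in + i] * in[i] ).
   W is stored row-major with fan_out rows of fan_in columns. */
static void layer_forward(const double *W, const double *b,
                          const double *in, double *out,
                          int fan_in, int fan_out,
                          double (*act)(double))
{
    for (int j = 0; j < fan_out; ++j) {
        double z = b[j];
        for (int i = 0; i < fan_in; ++i)
            z += W[j * fan_in + i] * in[i];
        out[j] = act(z);
    }
}

The full network's forward pass is then a loop over layers, feeding each layer's out buffer as the next layer's in.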

📌 Future Improvements

  • Fix ReLU backpropagation implementation
  • Add softmax + cross-entropy support for classification
  • Implement momentum/Adam optimizers
  • Add support for batch processing and regularization
  • Extend to support custom datasets
