Commit a291244
update makefile
polarker committed Dec 18, 2016
1 parent 78b0dfc commit a291244
Showing 3 changed files with 14 additions and 10 deletions.
2 changes: 2 additions & 0 deletions Makefile
@@ -36,13 +36,15 @@ endif
all: $(EXAMPLE) $(TEST)

$(EXAMPLE): $(BINDIR)/%: $(BUILDDIR)/example/%.o $(OBJ)
	@mkdir -p $(@D)
	$(CXX) $(CFLAGS) $(LIB) $< $(OBJ) -o $@

$(EXAMPLEOBJ): $(BUILDDIR)/example/%.o: $(EXAMPLEDIR)/%.cc
	@mkdir -p $(@D)
	$(CXX) $(CFLAGS) $(INC) -c $< -o $@

$(TEST): $(BINDIR)/%: $(BUILDDIR)/test/%.o $(OBJ)
	@mkdir -p $(@D)
	$(CXX) $(CFLAGS) $(LIB) $< $(OBJ) -o $@

$(TESTOBJ): $(BUILDDIR)/test/%.o: $(TESTDIR)/%.cc
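A note on the idiom being introduced: `@mkdir -p $(@D)` creates the directory of the current target before the compile or link step runs (`$(@D)` expands to the directory part of `$@`, and the leading `@` suppresses echoing), so the build no longer fails when the output directories do not exist yet. A minimal self-contained sketch of the same pattern, with placeholder directory names rather than this repository's actual values:

```make
# Illustrative fragment; bin/, build/, and src/ are placeholder names.
BINDIR   := bin
BUILDDIR := build
SRCDIR   := src
CXX      ?= g++

$(BINDIR)/app: $(BUILDDIR)/app.o
	@mkdir -p $(@D)     # create bin/ on demand
	$(CXX) $< -o $@

$(BUILDDIR)/%.o: $(SRCDIR)/%.cc
	@mkdir -p $(@D)     # create build/ on demand
	$(CXX) -c $< -o $@
```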
10 changes: 0 additions & 10 deletions README.git

This file was deleted.

12 changes: 12 additions & 0 deletions README.md
@@ -0,0 +1,12 @@
# Neural Network Library

This library is a research project exploring how to implement a fully generic neural network library without performance overhead.

## Interesting Features

* Network inside network: every network is treated as a filter and can be used to construct more complicated networks (see the sketch after this list);
* Network sharing and cloning: sub-networks can share parameters and are clonable;
* In-place memory optimization by default: a neuron can accept signals from several other neurons while keeping only one copy of the n-dimensional array in memory;
* Dynamic training: it is possible to train only part of the network (e.g. an RNN with varying input length) and to freeze part of the network;
* Dynamic networks [WIP]: fast dynamic network construction and cache-based optimization.
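
To make the first two bullets concrete, here is a minimal C++11 sketch of the "network as a filter" idea; the names (`Filter`, `Chain`, `forward`) are illustrative assumptions, not the actual Galois API:

```cpp
// Hypothetical sketch, not Galois code: every network implements the same
// filter interface, so a composite network is itself a filter and can be
// nested inside larger networks.
#include <memory>
#include <vector>

struct Filter {
    virtual std::vector<double> forward(const std::vector<double>& in) = 0;
    virtual ~Filter() {}
};

// A chain of filters is itself a Filter, enabling network-inside-network.
struct Chain : Filter {
    std::vector<std::shared_ptr<Filter>> parts;
    std::vector<double> forward(const std::vector<double>& in) override {
        std::vector<double> x = in;
        for (auto& f : parts) x = f->forward(x);
        return x;
    }
};
```

Because sub-networks are held through `std::shared_ptr` in this sketch, one sub-network instance can appear inside several parent networks, which is one plausible reading of the sharing bullet above.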

The design is guided by both efficiency and convenience. For a user guide, please have a look at the example folder. The library makes extensive use of new C++11 features to keep the code simple and clear. Using Galois is just as simple as drawing dataflow graphs, and Galois is also efficient: for the same mnist_mlp model (from the Torch demos) on a 2013 Mac Pro, the time per epoch is roughly Torch ~40s, Keras ~60s, and Galois ~30s. Only a CPU implementation is available for the moment.
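
As a generic taste of the C++11 style mentioned above (an illustration of the language features only, not Galois code), lambdas and `std::function` let a pipeline read much like the dataflow graph it implements:

```cpp
// Generic C++11 illustration, not Galois code: a toy "dataflow" pipeline.
#include <functional>
#include <iostream>
#include <vector>

using Signal = std::vector<double>;
using Node   = std::function<Signal(const Signal&)>;

// Feed the input through each node of the graph in order.
Signal run(const std::vector<Node>& graph, Signal x) {
    for (const auto& node : graph) x = node(x);
    return x;
}

int main() {
    std::vector<Node> graph = {
        [](const Signal& s) -> Signal {            // scale by 2
            Signal r; for (double v : s) r.push_back(2 * v); return r;
        },
        [](const Signal& s) -> Signal {            // ReLU
            Signal r; for (double v : s) r.push_back(v > 0 ? v : 0); return r;
        },
    };
    for (double v : run(graph, {1.0, -2.0, 3.0})) std::cout << v << ' ';  // 2 0 6
    std::cout << '\n';
}
```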
