README.md (+18 -12)
@@ -7,7 +7,7 @@ Created by Kevin Lin, Huei-Fang Yang, and Chu-Song Chen at Academia Sinica, Taip
## Introduction
- We present a simple yet effective supervised deep hash approach that constructs binary hash codes from labeled data for large-scale image search. SSDH constructs hash functions as a latent layer in a deep network and the binary codes are learned by minimizing an objective function defined over classification error and other desirable hash codes properties. Compared to state-of-the-art results, SSDH achieves 26.30% (89.68% vs. 63.38%), 17.11% (89.00% vs. 71.89%) and 19.56% (31.28% vs. 11.72%) higher precisions averaged over a different number of top returned images for the CIFAR-10, NUS-WIDE, and SUN397 datasets, respectively.
+ This paper presents a simple yet effective supervised deep hash approach that constructs binary hash codes from labeled data for large-scale image search. We assume that the semantic labels are governed by several latent attributes, each of which is on or off, and that classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network, and the binary codes are learned by minimizing an objective function defined over classification error and other desirable hash code properties. With this design, SSDH has the nice characteristic that classification and retrieval are unified in a single learning model. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a point-wise manner, and is thus scalable to large-scale datasets. SSDH is simple and can be realized by a slight enhancement of an existing deep architecture for classification; yet it is effective and outperforms other hashing approaches on several benchmarks and large datasets. Compared with state-of-the-art approaches, SSDH achieves higher retrieval accuracy without sacrificing classification performance.
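As a rough illustration of how such a latent hash layer yields binary codes at retrieval time, the sketch below (not code from this repository; variable names are assumed) thresholds the sigmoid activations of the latent layer at 0.5 to obtain the hash bits:

```matlab
% Minimal sketch: turn latent-layer activations into binary hash codes.
% `latent_activations` stands for the 48-d sigmoid outputs of the latent layer
% for a batch of images (n x 48); it is a placeholder, not a repo variable.
latent_activations = rand(5, 48);           % dummy activations for 5 images
binary_codes = latent_activations > 0.5;    % one 48-bit binary code per image (logical)
```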
@@ -19,6 +19,10 @@ Presentation slide can be found [here](http://www.csie.ntu.edu.tw/~r01944012/dee
If you find our work useful in your research, please consider citing:
+ Supervised Learning of Semantics-Preserving Hash via Deep Convolutional Neural Networks
+ Huei-Fang Yang, Kevin Lin, Chu-Song Chen
+ IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017
+
Supervised Learning of Semantics-Preserving Hashing via Deep Neural Networks for Large-Scale Image Search
Huei-Fang Yang, Kevin Lin, Chu-Song Chen
arXiv preprint arXiv:1507.00101
@@ -54,14 +58,14 @@ Launch matlab and run `demo.m`. This demo will generate 48-bits binary codes for
## Retrieval evaluation on CIFAR10
- Launch matalb and run `run_cifar10.m` to perform the evaluation of `precision at k` and `mean average precision at k`. We set `k=1000` in the experiments. The bit length of binary codes is `48`. This process takes around 12 minutes.
+ Launch matlab and run `run_cifar10.m` to perform the evaluation of `precision at k` and `mean average precision (mAP) at k`. In this CIFAR10 experiment, we use all `10,000` test images as the query set and all `50,000` training images as the database. We compute mAP over the entire retrieval list, so we set `k = 50,000`. The bit length of the binary codes is `48`. This process takes around 12 minutes.
>> run_cifar10
Then, you will get the `mAP` result as follows.
- >> MAP = 0.897165
+ >> MAP = 0.913361
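For reference, here is a hedged sketch (with assumed variable names, not the actual code in `run_cifar10.m`) of how mAP over the full Hamming ranking can be computed from the binary codes:

```matlab
% query_codes: 10000 x 48 logical, db_codes: 50000 x 48 logical,
% query_labels / db_labels: class labels of the query and database images.
% These names are placeholders for whatever run_cifar10.m loads internally.
num_query = size(query_codes, 1);
ap = zeros(num_query, 1);
for q = 1:num_query
    dist = sum(bsxfun(@xor, db_codes, query_codes(q, :)), 2);   % Hamming distance to every database code
    [~, order] = sort(dist, 'ascend');                          % rank the whole database (k = 50,000)
    relevant = db_labels(order) == query_labels(q);              % same-class items count as relevant
    prec_at_i = cumsum(relevant) ./ (1:numel(relevant))';        % precision at each rank
    ap(q) = sum(prec_at_i .* relevant) / max(sum(relevant), 1);  % average precision for this query
end
fprintf('MAP = %f\n', mean(ap));
```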
Moreover, simply run the following commands to generate the `precision at k` curves:
@@ -79,18 +83,20 @@ Simply run the following command to train SSDH:
$ ./train.sh
- After 50,000 iterations, the top-1 error is around 10% on the test set of CIFAR10 dataset:
+ After 50,000 iterations, the top-1 error rate is around 10% on the test set of the CIFAR10 dataset:
```
- I1109 20:36:30.962478 25398 solver.cpp:326] Iteration 50000, loss = -0.114461
- I1109 20:36:30.962507 25398 solver.cpp:346] Iteration 50000, Testing net (#0)
- I1109 20:36:45.218626 25398 solver.cpp:414] Test net output #0: accuracy = 0.8979
- I1109 20:36:45.218660 25398 solver.cpp:414] Test net output #1: loss: 50%-fire-rate = 0.0005225 (* 1 = 0.0005225 loss)
- I1109 20:36:45.218668 25398 solver.cpp:414] Test net output #2: loss: classfication-error = 0.368178 (* 1 = 0.368178 loss)
- I1109 20:36:45.218675 25398 solver.cpp:414] Test net output #3: loss: forcing-binary = -0.114508 (* 1 = -0.114508 loss)
```
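The three test-net outputs above correspond to the classification term of the SSDH objective plus two penalties on the latent-layer activations: one encouraging the hash bits to fire about 50% of the time and one pushing activations away from 0.5 toward 0 or 1 (hence the negative forcing-binary value). A hedged sketch of plausible forms of these two penalties, illustrative only and not the exact Caffe layer implementations:

```matlab
% A: n x 48 matrix of latent-layer sigmoid activations (placeholder data).
A = rand(100, 48);
fire_rate_loss      = mean((mean(A, 2) - 0.5).^2);    % encourage roughly half of each 48-bit code to be active
forcing_binary_loss = -mean(mean((A - 0.5).^2));      % push activations toward 0/1; more negative is better
```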
The training process takes roughly 2~3 hours on a desktop with a Titan X GPU. You will finally get your model named `SSDH48_iter_xxxxxx.caffemodel` under the folder `/examples/SSDH/`.
To use the model, modify the `model_file` in `demo.m` to link to your model:
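For example (a hypothetical path; substitute the iteration count of your own snapshot), the edit in `demo.m` could look like:

```matlab
% Point demo.m at your trained snapshot (assumed filename; adjust the iteration number).
model_file = './examples/SSDH/SSDH48_iter_50000.caffemodel';
```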