Commit fac0fe4

Minor rewording and reorganization of NTXentLoss docs
1 parent e26b0e2 commit fac0fe4

1 file changed: +19, -15 lines


docs/losses.md

Lines changed: 19 additions & 15 deletions
@@ -788,12 +788,26 @@ This is also known as InfoNCE, and is a generalization of the [NPairsLoss](losse
 - [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/pdf/1911.05722.pdf){target=_blank}
 - [A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/pdf/2002.05709.pdf){target=_blank}
 
-In the equation below, loss is computed for each positive pair, `k_+`, in a batch, normalized by all pairs in the batch, `k_i in K`.
-For each `embeddings` with `labels` and `ref_emb` with `ref_labels`, positive pair `(embeddings[i], ref_emb[j])` are defined when `labels[i] == ref_labels[j]`.
-When `embeddings` and `ref_emb` are augmented versions of each other (e.g. SimCLR), `labels[i] == ref_labels[i]` (see [SelfSupervisedLoss](losses.md#selfsupervisedloss)).
-Note that multiple positive pairs can exist if the same label is present multiple times in `labels` and/or `ref_labels`.
+??? "How exactly is the NTXentLoss computed?"
+
+    In the equation below, a loss is computed for each positive pair (`k_+`) in a batch, normalized by all positive and negative pairs in the batch that have the same "anchor" embedding (`k_i in K`).
+
+    - What does "anchor" mean? Let's say we have 3 pairs specified by batch indices: (0, 1), (0, 2), (1, 0). The first two pairs start with 0, so they have the same anchor. The third pair has the same indices as the first pair, but the order is different, so it does not have the same anchor.
+
+    Given `embeddings` with corresponding `labels`, positive pairs `(embeddings[i], embeddings[j])` are defined when `labels[i] == labels[j]`. Now let's look at an example loss calculation:
+
+    Consider `labels = [0, 0, 1, 2]`. Two losses will be computed:
+
+    * A positive pair of indices `[0, 1]`, with negative pairs of indices `[0, 2], [0, 3]`.
+
+    * A positive pair of indices `[1, 0]`, with negative pairs of indices `[1, 2], [1, 3]`.
+
+    Labels `1`, and `2` do not have positive pairs, and therefore the negative pair of indices `[2, 3]` will not be used.
+
+    Note that an anchor can belong to multiple positive pairs if its label is present multiple times in `labels`.
+
+    Are you trying to use `NTXentLoss` for self-supervised learning? Specifically, do you have two sets of embeddings which are derived from data that are augmented versions of each other? If so, you can skip the step of creating the `labels` array, by wrapping `NTXentLoss` with [`SelfSupervisedLoss`](losses.md#selfsupervisedloss).
 
-Instead of passing labels (`NTXentLoss(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)`), `indices_tuple` could be passed (see [`pytorch_metric_learning.utils.loss_and_miner_utils.get_all_pairs_indices](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/src/pytorch_metric_learning/utils/loss_and_miner_utils.py)).
 ```python
 losses.NTXentLoss(temperature=0.07, **kwargs)
 ```
@@ -806,16 +820,6 @@ losses.NTXentLoss(temperature=0.07, **kwargs)
 
 * **temperature**: This is tau in the above equation. The MoCo paper uses 0.07, while SimCLR uses 0.5.
 
-**Other info:**
-
-For example, consider `labels = ref_labels = [0, 0, 1, 2]`. Two losses will be computed:
-
-* Positive pair of indices `[0, 1]`, with negative pairs of indices `[0, 2], [0, 3]`.
-
-* Positive pair of indices `[1, 0]`, with negative pairs of indices `[1, 2], [1, 3]`.
-
-Labels `1`, and `2` do not have positive pairs, and therefore the negative pair of indices `[2, 3]` will not be used.
-
 **Default distance**:
 
 - [```CosineSimilarity()```](distances.md#cosinesimilarity)
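
The equation that the new text refers to is not part of this hunk. For context only, the standard NT-Xent / InfoNCE form it describes (per positive pair `k_+` with anchor `a`, similarity `s`, temperature `tau`, and `K` the set of positive and negative pairs sharing that anchor) is usually written as:

```latex
% Standard NT-Xent / InfoNCE per-pair loss, shown for context;
% not copied verbatim from docs/losses.md.
L_{k_+} = -\log \frac{\exp\!\left(s_{a,\,k_+} / \tau\right)}{\sum_{k_i \in K} \exp\!\left(s_{a,\,k_i} / \tau\right)}
```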
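
For readers following the new walkthrough, here is a minimal sketch of the `labels = [0, 0, 1, 2]` example as it could be run with `pytorch_metric_learning`; the embedding values are random placeholders, not taken from the docs.

```python
# Minimal sketch of the labels = [0, 0, 1, 2] example from the updated docs.
# Assumes torch and pytorch-metric-learning are installed; embeddings are random placeholders.
import torch
from pytorch_metric_learning import losses

loss_func = losses.NTXentLoss(temperature=0.07)  # MoCo-style temperature; SimCLR uses 0.5

embeddings = torch.randn(4, 128)      # 4 samples, 128-dim embeddings
labels = torch.tensor([0, 0, 1, 2])   # only indices 0 and 1 share a label

# Two per-anchor losses are computed: one for pair (0, 1) and one for pair (1, 0).
# Samples 2 and 3 have no positive pairs, so they contribute only as negatives.
loss = loss_func(embeddings, labels)
print(loss.item())
```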
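
And a sketch of the self-supervised route mentioned at the end of the new admonition, where two augmented views stand in for labels. The encoder and augmentations are hypothetical and are replaced here with random tensors.

```python
# Sketch of wrapping NTXentLoss with SelfSupervisedLoss, as suggested in the updated docs.
# In real use, `embeddings` and `ref_emb` would come from an encoder applied to two
# augmented views of the same batch; random tensors stand in for them here.
import torch
from pytorch_metric_learning import losses

loss_func = losses.SelfSupervisedLoss(losses.NTXentLoss(temperature=0.5))

embeddings = torch.randn(32, 128)  # view 1 of the batch
ref_emb = torch.randn(32, 128)     # view 2 of the batch

# embeddings[i] and ref_emb[i] are treated as a positive pair; no labels array is needed.
loss = loss_func(embeddings, ref_emb)
```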
