diff --git a/README.md b/README.md
index 561914d9..76985c8d 100644
--- a/README.md
+++ b/README.md
@@ -200,6 +200,11 @@ More information can be found in [the full documentation](https://torchbiggraph.
 We trained a PBG model on the full [Wikidata](https://www.wikidata.org/) graph, using a [translation operator](https://torchbiggraph.readthedocs.io/en/latest/scoring.html#operators) to represent relations. It can be downloaded [here](https://dl.fbaipublicfiles.com/torchbiggraph/wikidata_translation_v1.tsv.gz) (36GiB, gzip-compressed). We used the truthy version of data from [here](https://dumps.wikimedia.org/wikidatawiki/entities/) to train our model. The model file is in TSV format as described in the above section. Note that the first line of the file contains the number of entities, the number of relations and the dimension of the embeddings, separated by tabs. The model contains 78 million entities, 4,131 relations and the dimension of the embeddings is 200.
+### Working with the pre-trained embeddings
+
+You can use the pre-trained embeddings directly with [Weaviate](https://github.com/semi-technologies/weaviate): [import them yourself](https://github.com/semi-technologies/biggraph-wikidata-search-with-weaviate), restore them from a [Weaviate backup](https://github.com/semi-technologies/biggraph-wikidata-search-with-weaviate#restore-as-weaviate-backup), or [use the live service](http://biggraph-wikidata-in-weaviate.vectors.network/).
+
+
 ## Citation
 
 To cite this work please use:
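The context paragraph above describes the layout of the released TSV file: a tab-separated header line with the entity count, relation count, and embedding dimension, followed by one line per entity or relation. As a minimal sketch (the helper names `parse_header` and `parse_row` are my own, not part of PBG), the two line formats could be parsed like this:

```python
def parse_header(line: str) -> tuple[int, int, int]:
    """Parse the first line of the TSV model file:
    number of entities, number of relations, embedding dimension,
    separated by tabs."""
    n_entities, n_relations, dim = (int(x) for x in line.rstrip("\n").split("\t"))
    return n_entities, n_relations, dim


def parse_row(line: str) -> tuple[str, list[float]]:
    """Parse one embedding line: a label followed by
    tab-separated float components."""
    parts = line.rstrip("\n").split("\t")
    return parts[0], [float(x) for x in parts[1:]]
```

Since the download is 36GiB gzip-compressed, it is best streamed rather than decompressed on disk, e.g. with `gzip.open(path, "rt")` and the two helpers above applied line by line.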