Building on OmiCLIP, a visual–omics foundation model designed to bridge omics data and hematoxylin and eosin (H&E) images, we developed the Loki platform, which has five key functions: tissue alignment using ST or H&E images, cell type decomposition of ST or H&E images using scRNA-seq as a reference, tissue annotation of ST or H&E images based on bulk RNA-seq or marker genes, ST gene expression prediction from H&E images, and histology image–transcriptomics retrieval.
Please find our preprint here.
You can view the Loki website and notebooks here. This README provides a quick overview of how to set up and use Loki.
All source code for Loki is contained in the ./src/loki directory.
- Create a Conda environment and activate it:

```bash
conda create -n loki_env python=3.9
conda activate loki_env
```
- Navigate to the Loki source directory and install Loki:

```bash
cd ./src
pip install .
```
Once Loki is installed, you can import it in your Python scripts or notebooks:
```python
import loki.preprocess
import loki.utils
import loki.plot
import loki.align
import loki.annotate
import loki.decompose
import loki.retrieve
import loki.predex
```
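As a quick sanity check after installation, the sketch below reports which of the submodules listed above cannot be found. It is an illustrative helper of our own (`missing_modules` is not part of the Loki API); it only probes the import system and does not execute any Loki code:

```python
import importlib.util

# The Loki submodules listed above.
LOKI_MODULES = [
    "loki.preprocess", "loki.utils", "loki.plot", "loki.align",
    "loki.annotate", "loki.decompose", "loki.retrieve", "loki.predex",
]

def missing_modules(names):
    """Return the module names that cannot be located in this environment."""
    missing = []
    for name in names:
        try:
            found = importlib.util.find_spec(name) is not None
        except ModuleNotFoundError:  # parent package itself is absent
            found = False
        if not found:
            missing.append(name)
    return missing

# An empty list means all Loki submodules resolved cleanly:
# print(missing_modules(LOKI_MODULES))
```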
The ST-bank database is available via a Google Drive link.
The links_to_raw_data.xlsx file lists the source paper names, DOI links, and download links for the raw data. The text.csv file contains the gene sentences paired with image patches. The image.tar.gz archive contains the image patches.
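To show how the downloaded files fit together, here is a minimal loading sketch using only the Python standard library. The file names come from the description above, but the exact column layout of text.csv and the internal layout of image.tar.gz are assumptions, and `load_st_bank` is a hypothetical helper, not part of Loki:

```python
import csv
import tarfile
from pathlib import Path

def load_st_bank(text_csv, image_archive, out_dir):
    """Read gene-sentence rows from text.csv and unpack image.tar.gz.

    Illustrative sketch: assumes text.csv is a plain CSV file and
    image.tar.gz is a standard gzipped tar of image patches.
    """
    with open(text_csv, newline="") as f:
        rows = list(csv.reader(f))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with tarfile.open(image_archive, "r:gz") as tar:
        tar.extractall(out)
    patches = sorted(p.name for p in out.rglob("*") if p.is_file())
    return rows, patches
```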
The pretrained weights are available on Hugging Face.
If you find our database, pretrained weights, or code useful, please consider citing our paper:
Chen, W., Zhang, P., Tran, T., Xiao, Y., Li, S., ... & Wang, G. A visual–omics foundation model to bridge histopathology image with transcriptomics. Nature Methods (In Press).
```bibtex
@article{wang2025visual,
  title={A visual--omics foundation model to bridge histopathology image with transcriptomics},
  author={Wang, Guangyu and Chen, Weiqing and Zhang, Pengzhi and Tran, Tu and Xiao, Yiwei and Li, Shengyu and Shah, Vrutant and Brannan, Kristopher and Youker, Keith and Lai, Li and others},
  journal={Nature Methods},
  year={2025}
}
```
The project was built on top of the excellent OpenCLIP repository for model training. We thank its authors and developers for their contribution.
© Guangyu Wang Lab. This model and associated code are released under the BSD 3-Clause license and may only be used for non-commercial, academic research purposes with proper attribution.