I've recently released vcfdist as an alternative comparison engine to vcfeval or xcmp. In summary, it benefits from standardizing variant representations, requiring local (but not global) phasing of variants, and giving partial credit for variant calls that are mostly (but not exactly) correct. For more information, you can find the code here and the pre-print here.
I just added support for intermediate GA4GH VCF benchmarking summary files, as described in Supplementary Note 3 of "Best practices for benchmarking germline small-variant calls in human genomes". I have verified that hap.py's qfy.py is able to correctly process vcfdist's intermediate VCF file for stratified variant comparison, and have included an example here.
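For anyone curious what the intermediate file looks like, below is a minimal sketch (not vcfdist's or hap.py's own code) that tallies benchmarking decisions from such a VCF using pysam. It assumes the file follows the GA4GH intermediate format from the best-practices paper, i.e. `TRUTH` and `QUERY` sample columns with a `BD` FORMAT field holding TP/FP/FN decisions; the file name is a placeholder.

```python
# Rough sketch: summarize TP/FP/FN counts from a GA4GH-style intermediate VCF.
# Assumes TRUTH/QUERY samples and a per-sample BD (benchmark decision) FORMAT
# field, as described in the GA4GH small-variant benchmarking best practices.
from collections import Counter

import pysam

counts = {"TRUTH": Counter(), "QUERY": Counter()}

with pysam.VariantFile("vcfdist_intermediate.vcf.gz") as vcf:  # placeholder path
    for record in vcf:
        for sample in ("TRUTH", "QUERY"):
            decision = record.samples[sample].get("BD")  # e.g. "TP", "FP", "FN"
            if decision is not None:
                counts[sample][decision] += 1

for sample, tally in counts.items():
    print(sample, dict(tally))
```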
Is there any process for getting a new benchmarking comparison engine integrated into hap.py? I would be happy (no pun intended) to put in the effort and submit a PR, but I would like to first verify that this repository is still actively maintained and the developers would welcome adding a new comparison engine.