
My notes for Komodo DeFi API


Building Trezor emulator hints

Clone repo and init submodules:

git clone --recurse-submodules https://github.com/trezor/trezor-firmware.git

Install Protobuf:

Install exactly the same protobuf version as pinned in poetry.lock. (The legacy emulator did not build without the matching protoc version.) In my case it was this version: https://github.com/protocolbuffers/protobuf/releases/download/v3.19.4/protoc-3.19.4-osx-x86_64.zip.
I installed it into ~/.local and added $HOME/.local/bin to the PATH.
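A minimal install sketch for the steps above, assuming the macOS zip was downloaded to ~/Downloads (paths are illustrative):

# unpack protoc into ~/.local (bin/ and include/ land in the right subdirs)
unzip ~/Downloads/protoc-3.19.4-osx-x86_64.zip -d ~/.local
# make sure the bin dir is on PATH (add this to your shell profile to persist)
export PATH="$HOME/.local/bin:$PATH"
protoc --version  # should report libprotoc 3.19.4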

Poetry installation:

# Go to the trezor-firmware dir
sudo pip3 install poetry
poetry install # installs deps
poetry shell  # start poetry shell. Run make build_unix from this shell only

Install lib dependencies:
See https://github.com/trezor/trezor-firmware/blob/main/docs/core/build/emulator.md for the instructions.
Note: the docs say it should be done inside the poetry shell, but this is probably superfluous.
Note: the second required lib could not be installed on my old Monterey macOS because its dependency openssl@3 (v3.5.x) failed to install (the openssl test_cmp_http installation test failed and I could not fix it despite instructions on how to edit the formula), so I had to upgrade macOS.

Build and use Trezor legacy emulator "Model One" version:

  • Install Poetry and always work from the Poetry shell when building the emulator (start it with poetry shell in the trezor-firmware dir; it creates a .venv dir)
  • cmd to build the legacy emulator on macOS: make build_unix in the trezor-firmware/legacy dir
  • the executable is trezor-firmware/legacy/firmware/trezor.elf (no params)
  • Note that the emulator requires a window GUI (it can't run on a VPS). Update: you can set the env var SDL_VIDEODRIVER=dummy and run it non-interactively
  • To configure the emulator (i.e. create a wallet) you need Trezor Suite (web or desktop). Trezor Suite needs the bridge, which can be downloaded from trezor.io (as a dmg for macOS; I installed it and the bridge started automatically, but I could not connect to the emulator while the bridge was autostarted with no params, so I copied its executable from the install directory and removed the app on macOS). I ran the bridge as ./trezord -e 21324 (see the combined command sketch after this list). Note that you need to run trezord before Trezor Suite (if it is the desktop version), because the Suite loads its own bridge which cannot connect to the legacy trezor for some reason (possibly a wrong -e default param)
  • After starting Trezor Suite, wait for some time to let it find the trezor emulator.
  • Note that rebuilding the emulator overwrites the emulator flash file emulator.img (which holds the wallet settings, including your wallet seed), so copy it before rebuilding
  • When configuring the wallet, do not set a PIN (not supported in adex yet)
  • Perform a wallet backup so you can recover the wallet after emulator.img is lost: the emulator will show the wallet seed words on its screen - write them down
  • Before running a komodo-defi test against the emulator, stop Trezor Suite. Also be aware the emulator may hang sometimes and need to be restarted.
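A condensed command sketch of the build-and-run workflow above (ports and paths as described in the notes; treat it as illustrative, not canonical):

cd trezor-firmware
poetry shell                                  # all build commands run inside this shell
cd legacy && make build_unix                  # build the legacy "Model One" emulator
SDL_VIDEODRIVER=dummy firmware/trezor.elf &   # headless run; omit the env var for a GUI window
./trezord -e 21324                            # start the bridge pointed at the emulator port
# now start Trezor Suite and wait for it to detect the emulator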

BTW there is the trezor-user-env project, which provides a web interface to start all emulator versions. It requires XQuartz installed. But trezor-user-env is basically not needed; you can run the emulator without it, just from the cmd line.

Instructions to build and use the Trezor emulator for the Safe and T models:
https://github.com/MetaMask/metamask-extension/blob/develop/docs/trezor-emulator.md

Build trezord bridge
I lost the old trezord bridge binary and it's not supported anymore, so build the new version from this repo: https://github.com/trezor/trezord-go

How to dump privkey from bitcoin for tests

Note dumpprivkey is supported only for legacy wallets:

./bitcoin-cli -testnet help dumpprivkey 
dumpprivkey "address"

Reveals the private key corresponding to 'address'.
Then the importprivkey can be used with this output
Note: This command is only compatible with legacy wallets.
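A hedged usage example of the pair of commands (the testnet address and the label are placeholders; assumes a legacy wallet is loaded):

# reveal the WIF key for an address you own, then import it into another wallet
WIF=$(./bitcoin-cli -testnet dumpprivkey "mzPlaceholderLegacyAddress")
./bitcoin-cli -testnet importprivkey "$WIF" "imported-test-key"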

To run a bitcoin testnet node isolated

Stop bitcoind and delete the peer files:

~/.bitcoin/testnet3/peers.dat
~/.bitcoin/testnet3/anchors.dat

Then run: ./bitcoind -testnet -port=7777 -dnsseed=0
To clear the mempool, stop bitcoind and delete ~/.bitcoin/testnet3/mempool.dat

Which coin config fields are responsible for Segwit

We have segwit and non-segwit utxo coin configurations in the 'coins' configuration file. E.g. we have BTC and BTC-segwit coin configs for the same BTC blockchain. For users, BTC vs BTC-segwit means that their address in the wallet is either legacy ('1...') or native segwit ('bc1...'). The coin param responsible for this is "address_format": {"format": "segwit"}: when a coin is activated, the address format in its activation params is checked against the address_format in the coin config, so it is not possible to activate a segwit coin with a non-segwit address format.

There is another param in the coin config, "segwit", which tells the code that the segwit feature is enabled on this network. So if "segwit" is set to false for some coin, the API code won't allow sending value to segwit addresses for that coin.

If the HD wallet API feature is used, derivation_path should be set in the coins config; it is used to generate user addresses per the BIP32 schema. For non-segwit utxo coins BIP44 is used, so derivation_path should have "m/44'" as its purpose field. For segwit coins BIP84 is used and derivation_path starts with "m/84'". That means that for segwit coins we must ensure the correct purpose field 84' in the config, otherwise accounts will be created differently from other apps, including the trezor firmware, which is not good.
I created an issue to prevent mistakes where derivation_path in the coins file is set incorrectly to m/44' for segwit coins: https://github.com/KomodoPlatform/komodo-defi-framework/issues/1995
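To eyeball these three fields for a coin entry, a quick jq query over the coins file works (assumes jq is installed and the coins file is in the current dir; the values in the comment are the shape expected per the notes above, not an authoritative dump):

jq '.[] | select(.coin == "BTC-segwit") | {segwit, address_format, derivation_path}' coins
# expected shape per the notes above:
# { "segwit": true, "address_format": {"format": "segwit"}, "derivation_path": "m/84'" }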

How key policy is created on coin init:

There are 2 options for coin initialisation in Komodo DeFi API: the legacy one and a new one. We will use a standalone coin as the example for both paths:

Before any coin is added, the crypto_ctx in MmCtx is initialised at mm2 startup in the lp_init fn. The "passphrase" param must be set in ctx.conf for the crypto_ctx init. The crypto_ctx is initialised depending on the "enable_hd" ctx.conf param, either as KeyPairPolicy::Iguana or as GlobalHDAccount (see the CryptoCtx::init_with_global_hd_account and CryptoCtx::init_with_iguana_passphrase fns). So currently MmCtx::crypto_ctx is always initialised with a keypair. (Note that the Trezor context cannot be initialised at mm2 startup.)

In the legacy init option each coin can be initialised and added by the "enable" or "electrum" calls. In this case we still cannot add the Trezor policy, as the legacy path does not allow the user interaction needed for a hardware wallet. In the legacy case the lp_coininit fn is called, which gets the PrivKeyBuildPolicy enum from MmCtx and calls the utxo_standard_coin_with_policy fn to build the coin with a UtxoArcBuilder::new().build() object. Note that the priv_key_policy field (of PrivKeyActivationPolicy type) in UtxoActivationParams is ignored here.

The new init option is spawned in an rpc task manager thread: the InitStandaloneCoinTask run fn begins with the init_standalone_coin fn, then the result is obtained with the get_activation_result fn. The init_standalone_coin fn from init_utxo_standard_activation.rs gets the PrivKeyBuildPolicy enum from the priv_key_policy field (of PrivKeyActivationPolicy type) in UtxoActivationParams, and this can now be the Trezor policy. Then the same UtxoArcBuilder::new().build() object is used here too to create and add the coin object.

BTW maybe there is a bug in the new init option:
For the new init option the initialisation result is obtained with the get_activation_result fn, which calls the enable_coin_balance fn, which in turn calls the enable_hd_wallet fn for the DerivationMethod::HDWallet path. This enable_hd_wallet fn, if the wallet accounts list is empty, creates a new account with the coin.create_new_account fn. That fn creates a new account with INIT_ACCOUNT_ID = 0 and calls the coin.extract_extended_pubkey fn, which either extracts a pubkey for a trezor coin or throws a CoinDoesntSupportTrezor error (impl in the utxo_common::extract_extended_pubkey fn). So it looks like the new init option will work only with a trezor wallet (and would fail if we call it with PrivKeyActivationPolicy::ContextPrivKey, meaning Iguana or HD wallet).
Update: fixed in the evm-hd-wallet branch; check the new version of the extract_extended_pubkey fn in lp_coins.rs there.

MM2 internal private key

Note that there is an internal privkey used in p2p (swap) exchanges, generated for the Komodo coin. It is always generated, even if a hardware wallet is initialized. (TODO: use the hardware wallet key instead.) See the hw_ctx.rmd160() call.

Do proper merge commit into feature branch

Recently we agreed not to rebase my development feature branch on the dev branch but to merge 'dev' into it instead. This makes it easier to see the changes I made when pulling in recent 'dev' changes. The first try was not successful: when I merged dev into my branch (feature-add-script-type-to-address) in SourceTree and fixed the conflicts, I reloaded the SourceTree app, and when I was about to create the merge commit SourceTree did not offer a commit message, so I had to write my own. I think this led to an improper (unrecognisable?) merge commit. As a result, when I opened a pull request of my branch into the dev branch, I saw the red label 'could not automatically merge'.
When I tried to merge again and got a merge commit message auto-created by SourceTree (git), and again made a pull request, the PR label was already green. Also, after creating the second PR, I clicked the merge commit and GitHub showed me a compact 'condensed version' of this commit, with only my added changes on top of the huge amount of recent code pulled from the 'dev' branch. I also had an option to see the 'full version' of the merge commit.
In the first case (with the merge commit message written by myself) I did not see the condensed version, only the full one.
I guessed it was important to create a proper merge commit with a git auto-created commit message?

But it looks like the cause was not the commit message but that GitHub could not identify the commit as a 'merge'. A commit is a 'merge' when it has more than one parent (https://stackoverflow.com/questions/74478550/is-there-a-way-to-tell-if-a-git-commit-is-a-merge). Indeed, the first, manually created 'merge commit' had only one parent (note: %ph in my original command was a typo for %p, which made git print a literal 'h' after the hash):

git show --pretty=%p --quiet a705919
e5283c317

But the second one had two parents (a merge commit indeed):

git show --pretty=%p --quiet 123e3f8
e5283c317 fc95ef3ed

How could I have lost the parent in the first try? Need to pay attention to the merge commit next time.
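A minimal command-line version of the intended workflow (branch name taken from above; a plain git merge produces the two-parent commit GitHub recognizes):

git checkout feature-add-script-type-to-address
git merge dev                       # keep the auto-generated merge commit message
git show --pretty=%p --quiet HEAD   # a proper merge prints two parent hashes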

How Trezor invokes pin

All eth and btc calls to the trezor device are protected with the pin, including the get-address and get-pubkey calls (see CHECK_PIN in the firmware code). The device auto-locks its storage with a default timeout of 10 min and unlocks it by requesting the pin. A few calls like fsm_msgTxAck and fsm_msgEthereumTxAck do not ask for the pin but check that storage is unlocked (see CHECK_UNLOCKED).

Hardened and non-hardened derivation

Just to memorise how it works.
Here is a sample derivation path: "m/44'/0'/0'/0/0".
The hardened part is path_to_coin: "m/44'/0'/0'", i.e. derivation from the privkey is used up to this level.
But the address_to_account part (".../0/0") is non-hardened, so derivation from the pubkey is used to generate addresses.
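A breakdown of the sample path (BIP44 level names; the apostrophe marks hardened indices):

m / 44'      / 0'        / 0'       / 0      / 0
    purpose'   coin_type'  account'   change   address_index
    ^------ hardened (path_to_coin) ^ ^-- non-hardened tail --^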

fix cargo error failed to select a version for wasmparser.

I got an error:

failed to select a version for `wasmparser`.
    ... required by package `wasm-bindgen-cli v0.2.95 (/Users/dimxy/repo/wasm-bindgen/crates/cli)`
versions that meet the requirements `^0.211` are: 0.211.1, 0.211.0

the package `wasm-bindgen-cli` depends on `wasmparser`, with features: `indexmap` but `wasmparser` does not have these features.
 It has an optional dependency with that name, but that dependency uses the "dep:" syntax in the features table, so it does not have an implicit feature with that name.

Fixed by enabling the feature 'validate' (which turns on the optional dependency 'indexmap') wherever wasmparser is added in Cargo.toml:

wasmparser = { version = "0.212", features = ['validate'] }

How to enable logging in MM2 cargo tests

Add the env_logger init with common::log::UnifiedLoggerBuilder::default().init();
With the env_logger created, this will print to console the messages from the log::info! and log::debug! macros of the well-known rust log crate. To enable the messages you also need RUST_LOG set; this ignores the --nocapture test param.
There is also MM2's own logger, the common::log::log!() macro, which prints messages to console if --nocapture is set (and ignores the RUST_LOG env var).
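E.g., a typical invocation covering both loggers (the test name is illustrative):

RUST_LOG=debug cargo test -p mm2_main -- --nocapture my_test_name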

Enable logging in docker tests

To enable logs for the info!() / debug!() calls in docker test functions, add common::log::UnifiedLoggerBuilder::default().init(); in the docker_tests_runner.

Info on how to run zcoin native tests

Added notes in this file

Info about private coins and transactions (relevant for Sapling update)

Some info about private coins ('zcoin' in the kdf):

  • z_key, z_spending_key - the secret key to sign a private transaction's shielded spends and to sign binding signatures
  • Note - like a utxo
  • SpendDescription - like a utxo tx input
  • Nullifier - part of SpendDescription; nullifies a previous tx note
  • fvk - full viewing key ...
  • viewing_key - key to encrypt/decrypt the part of a private transaction's outputs where the tx amount and address are encrypted (to inform the receiver)
  • incoming
  • binding signature - signs the private tx balance (visible) TODO...

How to use komodefi-wasm-rpc

It needs the path to the Chrome browser - it can't be configured, so I fixed it in the code:

-    executablePath: '/usr/bin/google-chrome',
+    executablePath: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',

Build the wasm target in the atomicDEX (KDF) repo:

CC=/usr/local/opt/llvm/bin/clang AR=/usr/local/opt/llvm/bin/llvm-ar wasm-pack build mm2src/mm2_bin_lib --target web --out-dir wasm_build/deps/pkg/

Create wasm zip (recursively):

cd ~/repo/atomicDEX-API/mm2src/mm2_bin_lib/wasm_build/deps/pkg
zip -r kdf_wasm.zip .  # better to exclude .gitignore and package.json though, e.g. add -x '.gitignore' -x 'package.json'

copy kdf_wasm.zip to komodefi-wasm-rpc installation:

cp ~/repo/atomicDEX-API/mm2src/mm2_bin_lib/wasm_build/deps/pkg/kdf_wasm.zip .  # Call this from /Users/dimxy/repo/komodefi-wasm-rpc

update kdf files in komodefi-wasm-rpc:

./update_wasm_local.sh  /Users/dimxy/repo/komodefi-wasm-rpc/kdf_wasm.zip # use full path to kdf_wasm.zip (!)

It should finish without copy or other errors.

Then start two modules:

yarn preview
node server.cjs

To call the KDF rpc, fix the 'rpc' and 'userpass' source files and the test bash scripts using them:

  • add an 'endpoint=rpc' var to the rpc files
  • add the 'endpoint' to the KDF url in the bash scripts
  • fix the password in the userpass file

run tests in zcash librustzcash lib

Tried to run this test at the zcash_primitives 0.13.0-rc.1 tag:

cargo test -p zcash_client_sqlite -- --nocapture send_proposed_transfer

First I got a compile error that rustc must be 1.81 (fixed this in rust-toolchain.toml).
Then I got an error with the time lib (not built due to the newer rustc ver). Solution: update to the newer (proposed) ver:

cargo update -p time@0.3.23 --precise 0.3.35

However a direct update did not work out:

error: failed to select a version for the requirement 'time' = ">=0.3.22, <0.3.24"

So I needed to edit Cargo.toml and remove the upper bound:

time = ">=0.3.22" # removed upper bound

Also I needed to fetch and move ZcashParams:

mv ~/Library/Application\ Support/ZcashParams /Users/dimxy/.local/share

(Actually I later needed to copy ZcashParams back into ~/Library/Application\ Support for cargo test -p mm2_main -- z_coin to work.)

The testing findings:
I wanted to test whether it is possible to store unconfirmed notes in the librustzcash db (in the received_notes table), to store the change output and show it in the unconfirmed sapling balance. In the komodoplatform librustzcash repo this apparently was not possible because in the received_notes table the field 'nf' (which contains the nullifier) has the NOT NULL constraint.
But in the newer librustzcash tag zcash_primitives-0.13.0-rc.1 (67b84c25e06c0009b73d8c0afd91f28393cba4de) the sapling_received_notes table already allows storing unconfirmed notes, because the field 'nf' has the UNIQUE constraint, which can be NULL. In fact this new version specifically tracks the unconfirmed change: see the change_pending_confirmation var.

KDF ordermatch workflow:

Order matching messages and logic (see the message sequence sketch after this list):

  • Maker creates an order, sending OrdermatchMessage::MakerOrderCreated.
  • Taker sends OrdermatchMessage::TakerRequest to find the best orders.
  • Maker processes OrdermatchMessage::TakerRequest to find whether any orders match by best price (in the match_with_request fn). Note that Taker may filter orders by uuids before match_with_request is called. If a match is found, Maker broadcasts OrdermatchMessage::MakerReserved (see let reserved = MakerReserved {..} and then broadcast_ordermatch_message in the code). Maker also inserts a maker_match with the reserved order into orders.matches.
  • Taker receives OrdermatchMessage::MakerReserved and calls the process_maker_reserved fn. In this fn Taker inserts the new reserved_msg into pending_map and then, after a delay, processes all pending reserved_msgs in the match_reserved fn. Note that the match_reserved fn first matches by MatchBy::Orders (i.e. that the reserved order uuid is in the Taker request list) and MatchBy::Pubkey, and then matches by price. Here Taker actually selects the best-priced order from the ones that several Makers found and sent, because the pending reserved orders are sorted by price.
  • Taker, if a match is found for MakerReserved, broadcasts a TakerConnect message and inserts the match into the Taker order's matches map.
  • Maker, when it receives the TakerConnect, starts the swap by calling the lp_connect_start_bob fn and broadcasts the MakerConnected message.
  • Taker, when it receives the MakerConnected message, starts the swap by calling the lp_connected_alice fn.
  • The swap is now running.
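A condensed message sequence of the happy path above:

Maker                                      Taker
  |-- MakerOrderCreated (broadcast) ------>|
  |<------------- TakerRequest ------------|
  |-- MakerReserved ----------------------->|  (match by uuids/pubkey, then best price)
  |<------------- TakerConnect ------------|
  |-- MakerConnected ---------------------->|
  lp_connect_start_bob               lp_connected_alice
  (swap starts on both sides)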

Maker order cancel and update:

  • Maker may cancel its orders via the 'cancel_order' rpc or automatically in the lp_order_match_loop due to low balance. If an order is cancelled, Maker broadcasts the OrdermatchMessage::MakerOrderCancelled msg.
  • Maker may update its orders; in this case Maker broadcasts the OrdermatchMessage::MakerOrderUpdated msg.

(Note that a Taker order may convert to a Maker order after timeout)

KDF orderbook flow sync

We have seed (relay) and non-seed nodes. The orderbook sync algo differs between them.
The flow is as follows (a condensed sketch follows the list):

  • When a node (seed or non-seed) starts, User adds maker orders. (Note: when a maker order is created, the creating node also becomes subscribed to the base/rel pair of this order - see the subscribe_to_orderbook_topic fn)
  • The node broadcasts its created orders with an 'orbk' MakerOrderCreated message. On receiving this message, all nodes subscribed to this coin pair add the order to their orderbooks.
  • The node runs a loop and periodically broadcasts a PubkeyKeepAlive message with trie roots (a kind of fingerprint of the hashmap of orders) for each base/rel pair map, for the node's pubkey. Nodes receiving this message call the process_keep_alive fn: a seed node proceeds for any base/rel pair; a non-seed node proceeds only for the subscribed base/rel pairs. The node then checks whether the trie roots changed: if yes, it sends a SyncPubkeyOrderbookState request to peers (it stops sending after the first response). On receiving a response with orders, the node updates its orderbook. So each node is basically triggered by a PubkeyKeepAlive message to issue an orderbook sync request and update its orderbook.

NOTE: a question could arise: why do seed nodes process keep-alive messages for all pairs while non-seed nodes process them only for subscribed pairs? Both seed and non-seed nodes subscribe to orders by coin pairs (see the subscribe_to_orderbook_topic fn: we always need to pass a pair when an orbk topic is constructed). Will a seed or non-seed node ever receive a keep-alive message for a pair it is not subscribed to? Apparently seed nodes receive all messages because they need to relay them. Non-seed nodes may probably also receive messages for pairs they have not subscribed to. So seed nodes will be receiving all newly created orders and will receive and process all PubkeyKeepAlive messages, while non-seed nodes, if they receive keep-alive messages for pairs they have not subscribed to, should ignore such messages.

  • User can also call the orderbook rpc to update the local orderbook. This rpc calls the subscribe_to_orderbook_topic fn with the base/rel param and, if not sent already, also sends a GetOrderbook request to one peer node - on receiving the response, the node's orderbook is updated (see the process_pubkey_full_trie fn). The OrderbookRequestingState::Requested status is also set, so on subsequent orderbook rpc calls the GetOrderbook request won't be executed.
  • Also note: if on the first call to the orderbook rpc the orderbook was not received (the GetOrderbook request failed due to some error), User may call the orderbook rpc again (for this base/rel pair), but new GetOrderbook requests can be sent only within the 10 sec timeout after the first call to the rpc (when the subscribed_at var is set); after that, the orderbook will be updated by a SyncPubkeyOrderbookState request (triggered by a keepalive message).
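The same flow, condensed:

MakerOrderCreated ('orbk' broadcast)   ->  subscribed nodes add the order
PubkeyKeepAlive (trie roots, periodic) ->  process_keep_alive -> roots changed?
                                           -> SyncPubkeyOrderbookState -> update orderbook
orderbook rpc                          ->  subscribe_to_orderbook_topic (+ one-time GetOrderbook)
                                           -> process_pubkey_full_trie -> update orderbook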

How to change colors in the kdf dump log

Look for 'vvv', 'impl Drop for RaiiDump' or 'ANSI_CODE'. Use more visible colors:

const BLUE_ANSI_CODE: &str = "\x1b[34m"; 
const GREEN_ANSI_CODE: &str = "\x1b[32m";
const PURPLE_ANSI_CODE: &str = "\x1b[35m"; // even more visible than green
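To preview these ANSI colors in a terminal before touching the code (\x1b[0m resets the color):

printf '\x1b[34mblue\x1b[0m \x1b[32mgreen\x1b[0m \x1b[35mpurple\x1b[0m\n'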

install protoc protocol buffers compiler to build kdf

If you see build errors like this:

error: failed to run custom build command for `mm2_main v0.1.0 (/home/ubuntu/komodo-defi-framework/mm2src/mm2_main)`

Caused by:
  process didn't exit successfully: `/home/ubuntu/komodo-defi-framework/target/debug/build/mm2_main-4b89662e1a88534f/build-script-build` (exit status: 101)
  --- stderr
  thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: "protoc failed: swap_v2.proto:17:12: Explicit 'optional' labels are disallowed in the Proto3 syntax.

you may need to install the correct protocol buffers compiler. On linux ubuntu 18 don't install it via 'apt-get install protobuf-compiler': that would install a version like 3.0.0 which won't compile kdf. Instead use the releases from this page: https://github.com/protocolbuffers/protobuf/releases/tag/v30.2. Download and unzip it into ~/.local/bin and check that this dir is in the PATH.
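A sketch for linux x86_64, assuming the usual v30.2 release asset naming (adjust the asset name for your platform):

curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v30.2/protoc-30.2-linux-x86_64.zip
unzip protoc-30.2-linux-x86_64.zip -d ~/.local
export PATH="$HOME/.local/bin:$PATH"
protoc --version   # should report a modern version, not 3.0.0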

How KDF zcoin sync loop works and is respawned

These are extra comments to the explanation in the source code of light_wallet_db_sync_loop: https://github.com/KomodoPlatform/komodo-defi-framework/blob/c800ea03f12dab33d2dc04a1d858ee6da111203f/mm2src/coins/z_coin/z_rpc.rs#L871.
About block scanning:
In fact, the loop in light_wallet_db_sync_loop() does not stop scanning blocks. That is, when let sync_guard = self.wait_for_gen_tx_blockchain_sync().await?; is called in gen_tx(), the light_wallet_db_sync_loop continues to work, including block scanning.
(BTW, there is a SaplingSyncLoopHandle::main_sync_state_finished var which is set to true when the max height is reached. But it only affects whether notification is done. Interestingly, it is never reset to false. Will notification work?)
How light_wallet_db_sync_loop is aborted and respawned:
When sync_guard is dropped, SaplingSyncRespawnGuard::drop is executed and a new spawn_abortable with light_wallet_db_sync_loop is called. This returns an abort_handle var, which replaces the value in the SaplingSyncRespawnGuard::abort_handle member var (of the Arc<Mutex<AbortOnDropHandle>> type), so the previous value is dropped and AbortOnDropHandle::drop is called, aborting the running light_wallet_db_sync_loop future.
So technically light_wallet_db_sync_loop is aborted and respawned at the same moment, in SaplingSyncRespawnGuard::drop. Also, block scanning continues to work until the fn is aborted.

How zcoin light_wallet_db_sync_loop scans blocks and fills wallet db (wasm case)

The block processing in light_wallet_db_sync_loop may run a long time when zcoin is activated (a sync covering two days of blocks may take 30 min). The problem is that after zcoin is activated and the wallet db is filled with z-transactions, on the second login to the wallet zcoin activation again takes too long.
Here is how this works:

  • On zcoin activation the GUI passes the timestamp since which scanning should be done. If no timestamp is passed, init_light_client sets 1 day as the default.
  • The init_light_client fn calculates sync_height from the passed sync timestamp (using the "average_block_time" coins param).
  • init_light_client determines whether continue_from_prev_sync is true.
  • If continue_from_prev_sync is false, the wallet db is rewound to sync_height (actually the TICKER_HEIGHT_INDEX table is cleared and set to sync_height). Not a long process though, a few secs only.

Then light_wallet_db_sync_loop starts and runs several phases. First, the update_blocks_cache fn updates the block cache from the 'max_in_wallet' var (which is the max value from the TICKER_HEIGHT_INDEX table). This does not take too long (a few seconds for a 2-day sync depth).
Then the scan_validate_and_update_blocks fn starts:

  • it calls process_blocks_with_mode with BlockProcessingMode::Validate, again from the max height (not a long run for a 2-day depth)
  • it starts a loop with process_blocks_with_mode with BlockProcessingMode::Scan (in iterations of 1000 blocks) until max_in_wallet >= current_block. This phase is apparently the longest because it scans each block and tries to decrypt z-transactions (this alone may take 30 min and more. However on Saturday I saw it manage to finish in 5-7 min. Why does it sometimes get stuck for 30 secs?)

How MarketMakerIt test executable logging works

When MarketMakerIt is created, its stdin and stdout are redirected to MarketMakerIt::log_path, initialised as the path to mm2.log in a tmp directory (see the start_with_envs fn).
See also the into_mm_arc fn: it creates a ctx of MmCtx with the MmCtx::log var, with which we can do logging via the log_tag!() macro - apparently it is also sent to MarketMakerIt::log_path.

KDF abortable system destroyed if ctx dropped

This may happen in tests: ctx may be dropped while the test is still running. Example code:

let res = coin.withdraw_for_tests(ctx, ..).await; // ctx is moved and dropped here
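// A possible fix (assuming ctx here is the cloneable MmArc): pass a clone so the
// original ctx - and its abortable_system - lives until the end of the test:
// let res = coin.withdraw_for_tests(ctx.clone(), ..).await;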

In such a case the abortable_system is dropped, and after that abortable futures cannot be executed. This may lead to errors like: GenTxError(Internal(\"wait_for_spendable_balance task was cancelled\")). To prevent this, ensure that ctx lives until the end of the test.

How to run ZOMBIE chain for zcoin tests

The ZOMBIE chain must be running for zcoin tests:

komodod -ac_name=ZOMBIE -ac_supply=0 -ac_reward=25600000000 -ac_halving=388885 -ac_private=1 -ac_sapling=1 -testnode=1 -addnode=65.21.51.116 -addnode=116.203.120.163 -addnode=168.119.236.239 -addnode=65.109.1.121 -addnode=159.69.125.84 -addnode=159.69.10.44

Also check that the test z_key (spending key) has balance:

komodo-cli -ac_name=ZOMBIE z_getbalance zs10hvyxf3ajm82e4gvxem3zjlf9xf3yxhjww9fvz3mfqza9zwumvluzy735e29c3x5aj2nu0ua6n0

If there is no balance, you may mine some transparent coins and send them to the test z_key. When tests are run for the first time (or have not been run for a long time), syncing to fill ZOMBIE_wallet.db is started, which may take hours. So it is recommended to run prepare_zombie_sapling_cache to sync ZOMBIE_wallet.db before running zcoin tests:

cargo test -p coins --features zhtlc-native-tests -- --nocapture prepare_zombie_sapling_cache

If you did not run prepare_zombie_sapling_cache, the wait for the ZOMBIE_wallet.db sync will happen in the first call to ZCoin::gen_tx. In tests, for ZOMBIE_wallet.db to be filled, another database ZOMBIE_cache.db is created in memory, so if the db sync in tests is cancelled and restarted, this causes rebuilding ZOMBIE_cache.db in memory from scratch.

Note that during the ZOMBIE_wallet.db sync an error may be reported: 'error trying to connect: tcp connect error: Can't assign requested address (os error 49)'. Also during the sync, other apps like ssh or komodo-cli may return the same error or even crash. TODO: fix this problem; maybe it is due to too much load on the TCP stack. Errors like 'No one seems interested in SyncStatus: send failed because channel is full' in the debug log may be ignored (they mean that the update status is temporarily not watched).

To monitor the sync status in logs you may add logging support at the beginning of the prepare_zombie_sapling_cache test (or other tests): common::log::UnifiedLoggerBuilder::default().init(); and run cargo test with the var RUST_LOG=debug.
