Release/1.5.2 #180

Open: wants to merge 33 commits into `main`

Commits (33)
e08282b
move tests
basfroman Aug 7, 2025
c00f5e3
add unit and integration tests wf
basfroman Aug 7, 2025
54044a7
update e2e tests wf
basfroman Aug 7, 2025
138f00d
add substrate-interface to test
basfroman Aug 7, 2025
32e26e1
add substrate-interface to test
basfroman Aug 7, 2025
b76b2ab
update dev requirements
basfroman Aug 7, 2025
3a0dc46
add pytest.ini
basfroman Aug 7, 2025
2c9598e
comment weird tests :D
basfroman Aug 7, 2025
94316ff
comment weird tests :D 2
basfroman Aug 7, 2025
5ec1cba
Fix tests
thewhaleking Aug 8, 2025
06510d3
Skip test that doesn't run in GHA
thewhaleking Aug 8, 2025
e34588b
Context.
thewhaleking Aug 8, 2025
456618d
Merge pull request #173 from opentensor/tests/roman/improve-test-work…
thewhaleking Aug 8, 2025
0acdc1a
Adds env var support for setting cache size
thewhaleking Aug 8, 2025
10bd660
Applies the same caching sizes to both async and sync
thewhaleking Aug 8, 2025
f5cea8e
Adds unit tests
thewhaleking Aug 8, 2025
f11028a
Move to helper
thewhaleking Aug 8, 2025
5abd398
README on cache usage and design.
thewhaleking Aug 8, 2025
028a1a1
Merge pull request #174 from opentensor/feat/thewhaleking/add-cache-s…
basfroman Aug 8, 2025
136ca77
WIP check-in
thewhaleking Aug 12, 2025
ec82546
Ensure closing of sqlite DB connection when closing DiskCachedAsyncSu…
thewhaleking Aug 12, 2025
b5cc45e
While pytest implicitly converts monkeypatched env vars to strings, I…
thewhaleking Aug 12, 2025
c660e19
Merge pull request #177 from opentensor/fix/thewhaleking/set-env-vars…
thewhaleking Aug 13, 2025
e33a0eb
PR comments
thewhaleking Aug 13, 2025
4da8bde
no op
thewhaleking Aug 13, 2025
958dd2e
List out the env vars used by this lib
thewhaleking Aug 13, 2025
18f55d3
Merge pull request #176 from opentensor/feat/thewhaleking/use-aiosqlite
thewhaleking Aug 13, 2025
e63ee25
Async
thewhaleking Aug 14, 2025
19ddcc7
sync
thewhaleking Aug 14, 2025
9f1a549
Ruff
thewhaleking Aug 14, 2025
99533b7
Merge pull request #178 from opentensor/feat/thewhaleking/additional-…
thewhaleking Aug 14, 2025
8c58ca5
Version + changelog
thewhaleking Aug 14, 2025
b652c0f
Merge pull request #179 from opentensor/changelog/1.5.2
thewhaleking Aug 14, 2025
173 changes: 173 additions & 0 deletions .github/workflows/e2e-tests.yml
@@ -0,0 +1,173 @@
name: E2E Tests

concurrency:
  group: e2e-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

on:
  pull_request:
    branches:
      - '**'
    types: [ opened, synchronize, reopened, ready_for_review ]

  workflow_dispatch:
    inputs:
      verbose:
        description: "Output more information when triggered manually"
        required: false
        default: ""

env:
  CARGO_TERM_COLOR: always
  VERBOSE: ${{ github.event.inputs.verbose }}

# job to run tests in parallel
jobs:
  # Looking for e2e tests
  find-tests:
    runs-on: ubuntu-latest
    if: ${{ github.event.pull_request.draft == false }}
    outputs:
      test-files: ${{ steps.get-tests.outputs.test-files }}
    steps:
      - name: Check-out repository under $GITHUB_WORKSPACE
        uses: actions/checkout@v4

      - name: Find test files
        id: get-tests
        run: |
          test_files=$(find tests/e2e_tests -name "test*.py" | jq -R -s -c 'split("\n") | map(select(. != ""))')
          # keep it here for future debug
          # test_files=$(find tests/e2e_tests -type f -name "test*.py" | grep -E 'test_(hotkeys|staking)\.py$' | jq -R -s -c 'split("\n") | map(select(. != ""))')
          echo "Found test files: $test_files"
          echo "test-files=$test_files" >> "$GITHUB_OUTPUT"
        shell: bash

  # Pull docker image
  pull-docker-image:
    runs-on: ubuntu-latest
    outputs:
      image-name: ${{ steps.set-image.outputs.image }}
    steps:
      - name: Set Docker image tag based on label or branch
        id: set-image
        run: |
          echo "Event: $GITHUB_EVENT_NAME"
          echo "Branch: $GITHUB_REF_NAME"

          echo "Reading labels ..."
          if [[ "${GITHUB_EVENT_NAME}" == "pull_request" ]]; then
            labels=$(jq -r '.pull_request.labels[].name' "$GITHUB_EVENT_PATH")
          else
            labels=""
          fi

          image=""

          for label in $labels; do
            echo "Found label: $label"
            case "$label" in
              "subtensor-localnet:main")
                image="ghcr.io/opentensor/subtensor-localnet:main"
                break
                ;;
              "subtensor-localnet:testnet")
                image="ghcr.io/opentensor/subtensor-localnet:testnet"
                break
                ;;
              "subtensor-localnet:devnet")
                image="ghcr.io/opentensor/subtensor-localnet:devnet"
                break
                ;;
            esac
          done

          if [[ -z "$image" ]]; then
            # fallback to default based on branch
            if [[ "${GITHUB_REF_NAME}" == "master" ]]; then
              image="ghcr.io/opentensor/subtensor-localnet:main"
            else
              image="ghcr.io/opentensor/subtensor-localnet:devnet-ready"
            fi
          fi

          echo "✅ Final selected image: $image"
          echo "image=$image" >> "$GITHUB_OUTPUT"

      - name: Log in to GitHub Container Registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u $GITHUB_ACTOR --password-stdin

      - name: Pull Docker Image
        run: docker pull ${{ steps.set-image.outputs.image }}

      - name: Save Docker Image to Cache
        run: docker save -o subtensor-localnet.tar ${{ steps.set-image.outputs.image }}

      - name: Upload Docker Image as Artifact
        uses: actions/upload-artifact@v4
        with:
          name: subtensor-localnet
          path: subtensor-localnet.tar

  # Job to run tests in parallel
  run-fast-blocks-e2e-test:
    name: "FB: ${{ matrix.test-file }} / Python ${{ matrix.python-version }}"
    needs:
      - find-tests
      - pull-docker-image
    runs-on: ubuntu-latest
    timeout-minutes: 45
    strategy:
      fail-fast: false  # Allow other matrix jobs to run even if this job fails
      max-parallel: 32  # Set the maximum number of parallel jobs (same as we have cores in ubuntu-latest runner)
      matrix:
        os:
          - ubuntu-latest
        test-file: ${{ fromJson(needs.find-tests.outputs.test-files) }}
        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
    steps:
      - name: Check-out repository
        uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install uv
        uses: astral-sh/setup-uv@v4

      - name: Install dependencies
        run: |
          uv sync --extra dev --dev

      - name: Download Cached Docker Image
        uses: actions/download-artifact@v4
        with:
          name: subtensor-localnet

      - name: Load Docker Image
        run: docker load -i subtensor-localnet.tar

      - name: Run tests with retry
        env:
          LOCALNET_IMAGE_NAME: ${{ needs.pull-docker-image.outputs.image-name }}
        run: |
          for i in 1 2 3; do
            echo "::group::🔁 Test attempt $i"
            if uv run pytest ${{ matrix.test-file }} -s; then
              echo "✅ Tests passed on attempt $i"
              echo "::endgroup::"
              exit 0
            else
              echo "❌ Tests failed on attempt $i"
              echo "::endgroup::"
              if [ "$i" -lt 3 ]; then
                echo "Retrying..."
                sleep 5
              fi
            fi
          done

          echo "Tests failed after 3 attempts"
          exit 1
81 changes: 0 additions & 81 deletions .github/workflows/run-async-substrate-interface-tests.yml

This file was deleted.

59 changes: 59 additions & 0 deletions .github/workflows/unit-and-integration-test.yml
@@ -0,0 +1,59 @@
name: Unit and integration tests checker
permissions:
  contents: read

on:
  pull_request:
    types: [opened, synchronize, reopened, edited]

jobs:
  unit-and-integration-tests:
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      max-parallel: 5
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Cache venv
        id: cache
        uses: actions/cache@v4
        with:
          path: venv
          key: v2-${{ runner.os }}-${{ hashFiles('pyproject.toml') }}

      - name: Install deps
        if: ${{ steps.cache.outputs.cache-hit != 'true' }}
        run: |
          python -m venv venv
          source venv/bin/activate
          python -m pip install --upgrade pip
          python -m pip install uv
          python -m uv sync --extra dev --active

      - name: Unit tests
        timeout-minutes: 20
        env:
          PYTHONUNBUFFERED: "1"
        run: |
          source venv/bin/activate
          python -m uv run pytest -n 2 tests/unit_tests/ --reruns 3

      - name: Integration tests
        timeout-minutes: 20
        env:
          PYTHONUNBUFFERED: "1"
        run: |
          source venv/bin/activate
          python -m uv run pytest -n 2 tests/integration_tests/ --reruns 3
9 changes: 9 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,13 @@
# Changelog
## 1.5.2 /2025-08-15
* Improve test workflows by @basfroman in https://github.com/opentensor/async-substrate-interface/pull/173
* Adds env var support for setting cache size by @thewhaleking in https://github.com/opentensor/async-substrate-interface/pull/174
* Set env vars as str in unit test by @thewhaleking in https://github.com/opentensor/async-substrate-interface/pull/177
* DiskCachedAsyncSubstrateInterface: use aiosqlite by @thewhaleking in https://github.com/opentensor/async-substrate-interface/pull/176
* Additional Debug Logging by @thewhaleking in https://github.com/opentensor/async-substrate-interface/pull/178


**Full Changelog**: https://github.com/opentensor/async-substrate-interface/compare/v1.5.1...v1.5.2

## 1.5.1 /2025-08-05
* query multiple/decoding fix by @thewhaleking in https://github.com/opentensor/async-substrate-interface/pull/168
43 changes: 43 additions & 0 deletions README.md
@@ -54,6 +54,49 @@ async def main():
asyncio.run(main())
```

### Caching
This library uses a few different cache types to improve overall performance. The most familiar is the standard
`functools.lru_cache` used in `sync_substrate.SubstrateInterface`.

By default, it uses a max cache size of 512 for smaller returns, and 16 for larger ones. These cache sizes are
user-configurable using the respective env vars, `SUBSTRATE_CACHE_METHOD_SIZE` and `SUBSTRATE_RUNTIME_CACHE_SIZE`.

They are applied only on methods whose results cannot change — such as the block hash for a given block number
(small, 512 default), or the runtime for a given runtime version (large, 16 default).
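As a sketch of how these env-var-driven sizes could plug into `functools.lru_cache` (the function body and cached method shown here are illustrative, not the library's actual code):

```python
import os
from functools import lru_cache

# Illustrative: read the documented env vars, falling back to the documented defaults.
METHOD_CACHE_SIZE = int(os.environ.get("SUBSTRATE_CACHE_METHOD_SIZE", 512))
RUNTIME_CACHE_SIZE = int(os.environ.get("SUBSTRATE_RUNTIME_CACHE_SIZE", 16))


@lru_cache(maxsize=METHOD_CACHE_SIZE)
def get_block_hash(block_number: int) -> str:
    # A block's hash never changes for a given block number, so it is safe
    # to cache. (Stand-in body; the real method performs an RPC call.)
    return f"0xhash-for-{block_number}"
```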

Additionally, because of its asynchronous nature, `AsyncSubstrateInterface` uses our own asyncio-friendly
LRU caches. The primary one is `CachedFetcher`, which wraps the same methods that `functools.lru_cache` wraps in
`SubstrateInterface`; the key difference is that each request is assigned a future that resolves when the
initial request completes. So, if you were to do:

```python
bn = 5000
bh1, bh2 = await asyncio.gather(
asi.get_block_hash(bn),
asi.get_block_hash(bn)
)
```
it would make only a single network call and return the result to both requests. Like `SubstrateInterface`,
it also honors the `SUBSTRATE_CACHE_METHOD_SIZE` and `SUBSTRATE_RUNTIME_CACHE_SIZE` env vars to set cache sizes.
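A minimal sketch of this future-based coalescing (illustrative only; the class name below is ours, and the library's real `CachedFetcher` additionally evicts entries LRU-style):

```python
import asyncio


class CoalescingCache:
    """Illustrative stand-in for the library's CachedFetcher: concurrent
    callers asking for the same key share one in-flight request."""

    def __init__(self, fetch):
        self._fetch = fetch       # the underlying async fetch function
        self._futures: dict = {}  # key -> asyncio.Future holding the result

    async def get(self, key):
        if key in self._futures:
            # A request for this key is in flight (or done): await its future.
            return await self._futures[key]
        fut = asyncio.get_running_loop().create_future()
        self._futures[key] = fut
        try:
            result = await self._fetch(key)
        except Exception as exc:
            fut.set_exception(exc)   # propagate failures to all waiters
            del self._futures[key]
            raise
        fut.set_result(result)
        return result
```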

The third and final caching mechanism is `async_substrate_interface.async_substrate.DiskCachedAsyncSubstrateInterface`,
which functions the same as the normal `AsyncSubstrateInterface` but also saves its cache to disk, so the cache
is preserved between runs. This is useful for repeatedly invoked tools such as `btcli`. Because different networks
can return entirely different results, this cache is keyed by the URI supplied at instantiation of the `DiskCachedAsyncSubstrateInterface`
object, so `DiskCachedAsyncSubstrateInterface(network_1)` and `DiskCachedAsyncSubstrateInterface(network_2)` will not share
the same on-disk cache.

As with the other two caches, this also takes `SUBSTRATE_CACHE_METHOD_SIZE` and `SUBSTRATE_RUNTIME_CACHE_SIZE` env vars.
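Per-URI keying might look like the following sketch (the function name and key layout are hypothetical, not the library's actual scheme):

```python
import hashlib


def disk_cache_key(uri: str, method: str, args: tuple) -> str:
    # Hypothetical: fold the chain URI into the key so caches for
    # different networks never collide in the on-disk store.
    raw = f"{uri}|{method}|{args!r}"
    return hashlib.sha256(raw.encode()).hexdigest()
```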


### ENV VARS
The following environment variables are used within async-substrate-interface:
- `NO_CACHE` (default `0`): if set to `1`, `DiskCachedAsyncSubstrateInterface` stores no persistent on-disk cache and uses only an in-memory cache.
- `CACHE_LOCATION` (default `~/.cache/async-substrate-interface`): the location of the cache file when using `DiskCachedAsyncSubstrateInterface`.
- `SUBSTRATE_CACHE_METHOD_SIZE` (default `512`): the cache size (in-memory or on-disk) for the smaller return-size methods (see the Caching section for more info).
- `SUBSTRATE_RUNTIME_CACHE_SIZE` (default `16`): the cache size (in-memory or on-disk) for the larger return-size methods (see the Caching section for more info).
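These variables could be read as in the following sketch (the helper name and dict layout are ours, not the library's API; only the env var names and defaults come from the list above):

```python
import os


def read_cache_settings(env=None):
    """Sketch: parse the documented env vars with their documented defaults."""
    env = os.environ if env is None else env
    return {
        "no_cache": env.get("NO_CACHE", "0") == "1",
        "cache_location": env.get(
            "CACHE_LOCATION", os.path.expanduser("~/.cache/async-substrate-interface")
        ),
        "method_cache_size": int(env.get("SUBSTRATE_CACHE_METHOD_SIZE", 512)),
        "runtime_cache_size": int(env.get("SUBSTRATE_RUNTIME_CACHE_SIZE", 16)),
    }
```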


## Contributing

Contributions are welcome! Please open an issue or submit a pull request to the `staging` branch.