
Commit 62bb23c

fix ORM circular dependency (#161)
* fix ORM circular dependency
* PR comments
1 parent 9d6ef66 commit 62bb23c

59 files changed: +849 −599 lines


Cargo.lock

+2-1
Some generated files are not rendered by default.

README.md

+75-42
@@ -1,37 +1,42 @@
 ## IMPORTANT: See Prerequisites below
 
 ## Digital Asset RPC API Infrastructure
-This repo houses the API Ingester and Database Types components of the Metaplex Digital Asset RPC API. Together these
-components are responsible for the aggregation of Solana Validator Data into an extremely fast and well typed api. This
-api provides a nice interface on top of the Metaplex programs. It abstracts the byte layout on chain, allows for
-super-fast querying and searching, as well as serves the merkle proofs needed to operate over compressed nfts.
+
+This repo houses the API Ingester and Database Types components of the Metaplex Digital Asset RPC API. Together these
+components are responsible for the aggregation of Solana validator data into an extremely fast and well-typed API. This
+API provides a nice interface on top of the Metaplex programs. It abstracts the byte layout on chain, allows for
+super-fast querying and searching, and serves the Merkle proofs needed to operate over compressed NFTs.
 
 ### Components
+
 1. Ingester -> A background processing system that gets messages from a [Messenger](https://github.com/metaplex-foundation/digital-asset-validator-plugin), and uses [BlockBuster](https://github.com/metaplex-foundation/blockbuster) parsers to store the canonical representation of Metaplex types in a storage system. This system also holds the re-articulated Merkle tree that supports the compressed NFT system.
 2. Api -> A JSON RPC API that serves Metaplex objects. This API allows filtering, pagination and searching over Metaplex data, including serving the Merkle proofs for the compressed NFT system. It is intended to be run right alongside the Solana RPC and works in much the same way: just as the Solana RPC takes data from the validator and serves it in a new format, this API takes data off the validator and serves it.
 
 The API specification is located at https://github.com/metaplex-foundation/api-specifications
 This spec is what providers of this API must implement against.
 
 ### Infrastructure and Deployment Examples
-Along with the above rust binaries, this repo also maintains examples and best practice settings for running the entire infrastructure.
-The example infrastructure is as follows.
 
-* A Solana No-Vote Validator - This validator is configured to only have secure access to the validator ledger and account data under consensus.
-* A Geyser Plugin (Plerkle) - The above validator is further configured to load this geyser plugin that sends Plerkle Serialized Messages over a messaging system.
-* A Redis Cluster (Stream Optimized) - The example messaging system is a light weight redis deployment that supports the streaming configuration.
-* A Kubernetes Cluster - The orchestration system for the API and Ingester processes. Probably overkill for a small installation, but it's a rock solid platform for critical software.
+Along with the above Rust binaries, this repo also maintains examples and best-practice settings for running the entire infrastructure.
+The example infrastructure is as follows.
+
+- A Solana No-Vote Validator - This validator is configured to only have secure access to the validator ledger and account data under consensus.
+- A Geyser Plugin (Plerkle) - The above validator is further configured to load this Geyser plugin, which sends Plerkle-serialized messages over a messaging system.
+- A Redis Cluster (Stream Optimized) - The example messaging system is a lightweight Redis deployment that supports the streaming configuration.
+- A Kubernetes Cluster - The orchestration system for the API and Ingester processes. Probably overkill for a small installation, but it's a rock-solid platform for critical software.
 
 This repo houses Helm Charts, Docker files and Terraform files to assist in the deployment of the example infrastructure.
 
 ### Developing
 
 #### Prerequisites:
+
 You must clone the https://github.com/metaplex-foundation/blockbuster repo; it's unpublishable for now due to active development in like 1000 branches and serious mathematics avoiding dependency hell.
 
 Because this is a multi-component system, the easiest way to develop or locally test it is with Docker, but developing locally without Docker is possible.
 
 #### Regenerating DB Types
+
 Edit init.sql, then run `docker compose up db`.
 Then, with a local `DATABASE_URL` var exported like this `export DATABASE_URL=postgres://solana:solana@localhost/solana`, you can run
 `sea-orm-cli generate entity -o ./digital_asset_types/src/dao/generated/ --database-url $DATABASE_URL --with-serde both --expanded-format`
@@ -40,18 +45,19 @@ If you need to install `sea-orm-cli` run `cargo install sea-orm-cli`.
 
 Note: The current SeaORM types were generated using version 0.9.3, so unless you want to upgrade you can install using `cargo install sea-orm-cli --version 0.9.3`.
 
-Also note: The migration `m20230224_093722_performance_improvements` needs to be commented out of the migration lib.rs in order for the Sea ORM `Relations` to generate correctly.
-
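Taken end to end, the regeneration steps described above amount to the following sequence; a minimal sketch, assuming the Postgres credentials used elsewhere in this README:

```bash
# Sketch of the DB-type regeneration flow described above.
docker compose up -d db                    # start Postgres after editing init.sql (-d detaches; the README runs it in the foreground)
export DATABASE_URL=postgres://solana:solana@localhost/solana
cargo install sea-orm-cli --version 0.9.3  # version pinned per the note above
sea-orm-cli generate entity \
  -o ./digital_asset_types/src/dao/generated/ \
  --database-url $DATABASE_URL \
  --with-serde both \
  --expanded-format
```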
 #### Developing Locally
-*Prerequisites*
-* A Postgres Server running with the database setup according to ./init.sql
-* A Redis instance that has streams enabled or a version that supports streams
-* A local solana validator with the Plerkle plugin running.
-* Environment Variables set to allow your validator, ingester and api to access those prerequisites.
+
+_Prerequisites_
+
+- A Postgres server running with the database set up according to ./init.sql
+- A Redis instance that has streams enabled, or a version that supports streams
+- A local Solana validator with the Plerkle plugin running.
+- Environment variables set to allow your validator, ingester and API to access those prerequisites.
 
 See [Plugin Configuration](https://github.com/metaplex-foundation/digital-asset-validator-plugin#building-locally) for how to locally configure the test validator plugin to work.
 
 For the API you need the following environment variables:
+
 ```bash
 APP_DATABASE_URL=postgres://solana:solana@db/solana # change to your db host
 APP_SERVER_PORT=9090
@@ -62,6 +68,7 @@ cargo run -p das_api
 ```
 
 For the Ingester you need the following environment variables:
+
 ```bash
 INGESTER_DATABASE_CONFIG: '{listener_channel="backfill_item_added", url="postgres://solana:solana@db/solana"}' # your database host
 INGESTER_MESSENGER_CONFIG: '{messenger_type="Redis", connection_config={ redis_connection_str="redis://redis" } }' # your redis
@@ -72,13 +79,14 @@ INGESTER_RPC_CONFIG: '{url="http://validator:8899", commitment="finalized"}' # y
 cargo run -p nft_ingester
 ```
 
-
 When making changes you will need to stop the cargo process and re-run. Someday we will have auto-rebuild for local cargo stuff, but for now you are on your own.
 
 #### NOTE
+
 ```
-INGESTER_ROLE 
+INGESTER_ROLE
 ```
+
 This environment variable can be used to split the workload.
 
 All for a combined setup
@@ -89,44 +97,49 @@ Background for just the background tasks.
 For production you should split the components up.
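As a sketch, the role split described above might look like this when launching the ingester; `All` and `Background` are the only role values visible in this diff (others are hidden by the truncated hunk), and passing the variable on the command line is an assumption:

```bash
# Combined setup: one process handles everything.
INGESTER_ROLE=All cargo run -p nft_ingester

# Dedicated background-task worker (run alongside other roles in production).
INGESTER_ROLE=Background cargo run -p nft_ingester
```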

 ### Developing With Docker
+
 Developing with Docker is much easier, but it has some nuances. This test docker compose system relies on a programs folder being accessible; this folder needs to have the shared object files for the following programs:
-* Token Metadata
-* Bubblegum
-* Gummyroll
-* Token 2022
-* Latest version of the Associated token program
+
+- Token Metadata
+- Bubblegum
+- Gummyroll
+- Token 2022
+- Latest version of the Associated Token program
 
 You need to run the following script in order to get the .so files.
 
 ```bash
 ./prepare-local-docker-env.sh
 ```
-This script downloads these programs from mainnet and puts them in the `programs/` folder.
+
+This script grabs all the code for these programs, compiles it, and puts it into your programs folder. Go grab some coffee, because this will take a while.
+If you get permissions errors, just delete the programs directory (e.g. `sudo rm -rf programs`) and start again.
 
 #### Authentication with Docker and AWS
 
-_This step is not normally needed for basic local docker usage._
-```aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin {your aws container registry}```
+`aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin {your aws container registry}`
 
 #### Running the application
 
-We use ``docker-compose`` to build the multi-container Docker application. On some systems its ``docker compose``.
+We use `docker-compose` to build the multi-container Docker application. On some systems it's `docker compose`.
+
 ```bash
-docker-compose build 
+docker-compose build
 ```
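If one invocation form is missing on your system, the other may be present; a quick way to check which you have:

```bash
# Prefer the Compose plugin; fall back to the standalone binary.
docker compose version || docker-compose version
```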
+
 This builds the docker container for the API and the Ingester components, and will download the appropriate Redis, Postgres and Solana+Plerkle docker images.
 Keep in mind that the version `latest` on the Solana Validator image will match the latest version available on the docs; for other versions please change that version in your docker compose file.
 
 ```bash
-docker-compose up 
+docker-compose up
 ```
 
 #### Developing
 
-When making changes you will need to ``docker compose up --build --force-recreate`` again to get the latest changes.
+When making changes you will need to `docker compose up --build --force-recreate` again to get the latest changes.
 Also, when mucking about with the docker file, if your gut tells you that something is wrong and you are getting build errors, run `docker compose build --no-cache`.
 
-Sometimes you will want to delete the db do so with `sudo rm -rf db-data`. You can also delete the ledger with `sudo rm -rf ledger`.
+Sometimes you will want to delete the db; do so with `sudo rm -rf db-data`. You can also delete the ledger with `sudo rm -rf ledger`.
 
 #### Running Bubblegum Test Sequences
 
@@ -137,13 +150,16 @@ sudo rm -rf db-data/
 sudo rm -rf ledger/
 docker compose up --force-recreate --build
 ```
+
 _In another terminal:_
+
 ```bash
 cd tools/txn_forwarder/bubblegum_tests/
 ./run-bubblegum-sequences.sh
 ```
 
 You should see it log something like:
+
 ```
 Running 10 scenarios forwards
 mint_transfer_burn.scenario initial asset table state passed
@@ -160,26 +176,33 @@ ALL TESTS PASSED FORWARDS!
 ```
 
 You can also run the sequences in reverse:
+
 ```bash
 ./run-bubblegum-sequences.sh reverse
 ```
+
 And after it runs you should see `ALL TESTS PASSED IN REVERSE!`
 
 A few detailed notes about this test script:
-* This script is not all-encompassing. It is only meant to automate some normal basic tests that were previously done manually. The reason this test is not added to CI is because requires a more powerful system to run the Docker application, which contains the no-vote Solana validator.
-* The test sequences are in `.scenario` files, but instead of sending those files to the `txn_forwarder` directly (which supports the file format), we parse them out and send them individually using the `single` parameter. This is because using the `.scenario` file directly results in random ordering of the transactions and we are explicity trying to test them going forwards and in reverse.
-* In general the expected database results are the same when running the transactions forwards and backwards. However, for assets that are decompressed, this is not true because we don't index some of the asset information from Bubblegum mint indexing if we already know the asset has been decompressed. We instead let Token Metadata account based indexing fill in that information. This is not reflected by this test script so the results differ when running these sequences in reverse. The differing results are reflected in test files with the `_reverse` suffix.
+
+- This script is not all-encompassing. It is only meant to automate some normal basic tests that were previously done manually. The reason this test is not added to CI is that it requires a more powerful system to run the Docker application, which contains the no-vote Solana validator.
+- The test sequences are in `.scenario` files, but instead of sending those files to the `txn_forwarder` directly (which supports the file format), we parse them out and send them individually using the `single` parameter. This is because using the `.scenario` file directly results in random ordering of the transactions, and we are explicitly trying to test them going forwards and in reverse.
+- In general the expected database results are the same when running the transactions forwards and backwards. However, for assets that are decompressed this is not true, because we don't index some of the asset information from Bubblegum mint indexing if we already know the asset has been decompressed. We instead let Token Metadata account-based indexing fill in that information. This is not reflected by this test script, so the results differ when running these sequences in reverse. The differing results are reflected in test files with the `_reverse` suffix.
 
 #### Logs
+
 To get a reasonable amount of logs while running Docker, direct grafana logs to a file:
+
 ```
 grafana:
   ...
   environment:
     ...
     - GF_LOG_MODE=file
 ```
-and set Solana Rust logs to error level (it is already set to error level now in the current docker compose file):
+
+and set Solana Rust logs to error level:
+
 ```
 solana:
   ...
@@ -190,10 +213,13 @@ and set Solana Rust logs to error level (it is already set to error level now in
 #### Interacting with API
 
 Once everything is working you can see that there is an API being served on
+
 ```
 http://localhost:9090
 ```
+
 And a Metrics System on
+
 ```
 http://localhost:3000
 ```
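The full request example appears further down but is mostly hidden in this diff view. As a rough sketch of the call shape, with a placeholder method name (consult the API specification linked above for the real method set):

```bash
# Illustrative JSON-RPC call; "getAsset" and its params are placeholders,
# not confirmed by this README -- see the API specification for real methods.
curl --request POST --url http://localhost:9090 \
  --header 'Content-Type: application/json' \
  --data '{"jsonrpc": "2.0", "id": 1, "method": "getAsset", "params": {"id": "<asset-id>"}}' | json_pp
```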
@@ -234,10 +260,12 @@ curl --request POST --url http://localhost:9090 --header 'Content-Type: applicat
 }' | json_pp
 ```
 
-# Deploying to Kubernetes 
+# Deploying to Kubernetes
+
 Using skaffold you can deploy to k8s; make sure you authenticate with your docker registry.
 
 Make sure you have the env vars you need to satisfy this part of the skaffold.yaml:
+
 ```yaml
 ...
 setValueTemplates:
@@ -259,28 +287,34 @@ Make sure you have the env vars you need to satisfy this part of the skaffold.ya
     metrics.data_dog_api_key: "{{.DATA_DOG_API}}"
 ...
 ```
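The `{{.DATA_DOG_API}}` placeholder above implies an environment variable of that name must be exported before running skaffold; for example:

```bash
# Satisfies the setValueTemplates entry shown above.
export DATA_DOG_API=<your-datadog-api-key>
```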
+
 ```bash
 skaffold build --file-output skaffold-state.json --cache-artifacts=false
 ## Your namespace may differ.
 skaffold deploy -p devnet --build-artifacts skaffold-state.json --namespace devnet-read-api --tail=true
 ```
 
 # METRICS
+
 Here are the metrics that various parts of this system expose:
 
 ## NFT INGESTER
+
 ### ACKING
+
 count ingester.ack - number of messages acked, tagged by stream
 
 count ingester.stream.ack_error - error acking a message
 count ingester.stream.receive_error - error getting stream data
 
 ### Stream Metrics
+
 ingester.stream_redelivery - count of messages re-delivered, tagged by stream
 ingester.stream_size - Size of stream, tagged by stream
 ingester.stream_size_error - Error getting the stream size
 
 ### Stream Specific Metrics
+
 All these metrics are tagged by stream:
 count ingester.seen
 time ingester.proc_time
@@ -290,6 +324,7 @@ count ingester.not_implemented
 count ingester.ingest_error
 
 ### BG Tasks
+
 time ingester.bgtask.proc_time
 count ingester.bgtask.success
 count ingester.bgtask.error
@@ -298,17 +333,15 @@ time ingester.bgtask.bus_time
 count ingester.bgtask.identical
 
 ### BACKFILLER
+
 count ingester.backfiller.task_panic
 count ingester.backfiller.task_error
 gauge ingester.backfiller.missing_trees
 
 ### Startup
+
 ingester.startup
 
 ## API
-api_call
-
-
-
-
 
+api_call

digital_asset_types/Cargo.toml

+2
@@ -8,6 +8,8 @@ publish = { workspace = true }
 [dependencies]
 async-trait = { workspace = true }
 blockbuster = { workspace = true }
+borsh = { version = "0.9.3", optional = true }
+borsh-derive = { version = "0.9.3", optional = true }
 bs58 = { workspace = true }
 futures = { workspace = true }
 indexmap = { workspace = true }

0 commit comments