feat(pyth-lazer-publisher-sdk): Add SDK for Publisher Transactions #2557
base: main
Conversation
package pyth_lazer_transaction;
message PublisherUpdate {
We need a publisher_id field here.
I was thinking that the publisher ID would be something only the Relayer knows about. The ID would map to an access token and a public key. The access token used to connect to the Relayer determines which public key is accepted, and that key is used to verify the signature. In that scenario, publishers would not know or care what their ID is, and would not include it in any messages; only the signature.
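As a rough sketch of that flow (all names here are illustrative, not taken from the actual codebase), the Relayer alone would hold the token-to-key mapping:

```typescript
// Hypothetical Relayer-side lookup: the access token presented on connect
// resolves to the publisher ID and the public key accepted for that publisher.
// Publishers themselves never see or send the ID.
interface PublisherRecord {
  publisherId: number;
  publicKey: Uint8Array; // key used to verify the publisher's signatures
}

const publishersByToken = new Map<string, PublisherRecord>();

function resolvePublisher(accessToken: string): PublisherRecord {
  const record = publishersByToken.get(accessToken);
  if (record === undefined) {
    throw new Error("unknown access token");
  }
  return record;
}
```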
I'd like to use this format in the Kafka/NATS messages and read them in the aggregator (and add governance instructions as another type of transaction alongside PublisherUpdate). The aggregator won't know access tokens, so it needs another way to know the publisher ID.
Well, we do have to attach additional information to the payloads anyway before they go to NATS. For example, if we reject the update due to price deviation, an invalid timestamp, etc., we also attach information like the publisher ID. This is done in the Relayer. Is the idea that the Aggregator will fully handle all of this, and the Relayer simplifies to taking data in and verifying it before sending it to NATS? I'm good with that approach.
With publisher signatures, it will no longer be possible to modify the payload at the relayer. It will only be able to reject the whole message. So for every check we have in the relayer today, we'll have to decide whether to move it to the aggregator or reject at the relayer.
The price deviation check is the trickiest to handle. Option one is to remove it (it was always meant as a temporary measure). Option two is to implement the pythnet-based publisher and move the whole logic to the aggregator.
Note that your definition of SignedLazerTransaction doesn't include the public key, which is necessary for signature verification. So to verify the signature, we should be able to derive the public key from the payload and state. The state will contain the public keys of publishers, so having publisher_id in the payload should be enough. However, now that I think about it, it's nice to have both. Having the public key in SignedLazerTransaction will allow anyone to verify a transaction on its own, without state.
Anyone can put their own public key there; you'd need a source of truth (the state) to check it.
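A minimal sketch of that check, with hypothetical type names and a caller-supplied ed25519 verify routine (none of this is the actual SDK or Aggregator API): the embedded public key is only trusted after it matches the key the state has on record for the claimed publisher_id.

```typescript
// Hypothetical shapes; the real generated protobuf types may differ.
interface SignedLazerTransaction {
  payload: Uint8Array;   // encoded transaction payload
  signature: Uint8Array; // ed25519 signature over the payload
  publicKey: Uint8Array; // key the sender claims to have signed with
}

interface State {
  // Source of truth: publisher_id -> registered public key.
  publisherKeys: Map<number, Uint8Array>;
}

type Ed25519Verify = (
  signature: Uint8Array,
  message: Uint8Array,
  publicKey: Uint8Array,
) => boolean;

function verifyTransaction(
  tx: SignedLazerTransaction,
  publisherId: number, // taken from the decoded payload
  state: State,
  verify: Ed25519Verify, // e.g. a wrapper around a crypto library
): boolean {
  const registeredKey = state.publisherKeys.get(publisherId);
  if (registeredKey === undefined) return false;

  // The embedded key is convenient for standalone checks, but it must match
  // the key on record; otherwise anyone could attach their own key.
  const sameKey =
    registeredKey.length === tx.publicKey.length &&
    registeredKey.every((byte, i) => byte === tx.publicKey[i]);
  if (!sameKey) return false;

  return verify(tx.signature, tx.payload, tx.publicKey);
}
```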
Nice! Overall the JS packaging looks great; just a few minor points to improve, but overall LGTM.
```
cd to crosschain root
$ pnpm install
$ pnpm turbo --filter @pythnetwork/pyth-lazer-publisher-sdk build
```
Usually build in and of itself isn't a goal -- usually you're building to do something else. My general advice is to document the actual things you might want to do (e.g. "run tests" or "run this example", etc.) and let the build happen automatically with the task orchestrator.
Yeah, that makes sense. I'll add an example in a follow-up PR showing how to use the SDK to create some of the types, sign them, and then encode them.
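For a sense of what that example might look like (the type names, the protobufjs-style create/encode helpers, and the signing routine are all assumptions here, not the SDK's confirmed API):

```typescript
// Illustrative only: exact exports and message shapes may differ.
import { PublisherUpdate, SignedLazerTransaction } from "@pythnetwork/pyth-lazer-publisher-sdk";

// Stand-in for the publisher's real signing routine (e.g. ed25519 over the payload).
function signPayload(payload: Uint8Array): { signature: Uint8Array; publicKey: Uint8Array } {
  return { signature: new Uint8Array(64), publicKey: new Uint8Array(32) };
}

// Build an update, encode it, sign the encoded payload, then wrap and encode
// the signed transaction for transport.
const update = PublisherUpdate.create({ /* feed updates go here */ });
const payload = PublisherUpdate.encode(update).finish();

const { signature, publicKey } = signPayload(payload);

const signed = SignedLazerTransaction.create({ payload, signature, publicKey });
const bytes = SignedLazerTransaction.encode(signed).finish();
console.log(bytes.byteLength);
```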
lazer/publisher_sdk/js/src/index.ts:
export * from "./generated/proto.js";
I generally recommend against barrel exports. Instead, just point the main export at generated/proto.js directly.
By this, do you mean changing main and types in package.json to point to the proto.js instead of index.ts?
Summary
Created a new publisher_sdk which will hold all SDK-related code for publishers. Publisher is now a more general term for anyone who publishes transactions to Lazer. Right now that's only feed publishers, but in the future there will be governance, and maybe others too. The new publisher_sdk contains the proto files from which types are generated. Pyth Lazer will use these for encoding/decoding incoming transactions to the system in the future.
Rationale
Lazer needs to support more types of input and be able to change its input schema as updates happen. A new input format provides this flexibility. Using protobuf for message passing makes schema evolution easier and also reduces the encoded message size.
How has this been tested?
Was able to generate types successfully.