
Implement v2 client GET functionality #972

Open · wants to merge 41 commits into base: master

Changes from 38 commits (41 commits total)

Commits
4625e96
Write GET tests
litt3 Dec 9, 2024
f07e820
Merge branch 'master' into client-v2-get
litt3 Dec 10, 2024
885c131
Respond to PR comments
litt3 Dec 10, 2024
6848663
Create new V2 client config
litt3 Dec 10, 2024
a48afb1
Respond to more PR comments
litt3 Dec 11, 2024
225f2a3
Fix failing unit test
litt3 Dec 12, 2024
d265f6a
Merge branch 'master' into client-v2-get
litt3 Dec 12, 2024
e9d91c5
Adopt new package structure
litt3 Dec 12, 2024
dd3c262
Use new test random util
litt3 Dec 12, 2024
88df865
Implement relay call timeout
litt3 Dec 12, 2024
505a1f0
Use correct error join method
litt3 Dec 12, 2024
2b87633
Merge branch 'master' into client-v2-get
litt3 Jan 8, 2025
cf1cd80
Make updates required by upstream changes
litt3 Jan 8, 2025
53893d8
Update how FFT and IFFT are referred to
litt3 Jan 13, 2025
0373dd7
Implement GetPayload
litt3 Jan 13, 2025
826a026
Remove GetBlob, leaving only GetPayload
litt3 Jan 13, 2025
975b6e5
Remove unnecessary codec mock
litt3 Jan 13, 2025
0666d24
Use more reasonable line breaks for logs
litt3 Jan 13, 2025
0a49aa5
Test malicious cert
litt3 Jan 13, 2025
1193ce7
Merge branch 'master' into client-v2-get
litt3 Jan 13, 2025
496e277
Merge branch 'master' into client-v2-get
litt3 Jan 14, 2025
2d392ff
Finish test coverage
litt3 Jan 14, 2025
db51291
Fix commitment length check
litt3 Jan 14, 2025
4f3280c
Merge branch 'master' into client-v2-get
litt3 Jan 16, 2025
aaa1342
Call VerifyBlobV2
litt3 Jan 17, 2025
9be51e6
Simply verify blob
litt3 Jan 17, 2025
cc6b9a1
Merge branch 'master' into client-v2-get
litt3 Jan 17, 2025
ae926c7
Clean up
litt3 Jan 17, 2025
f82d128
Merge branch 'master' into client-v2-get
litt3 Jan 17, 2025
017a48c
Return error from verification method
litt3 Jan 21, 2025
b645370
Merge branch 'master' into client-v2-get
litt3 Jan 21, 2025
03f8018
Address some PR comments
litt3 Jan 21, 2025
ef3944d
Rename methods, and clean up
litt3 Jan 21, 2025
78cab0d
Actually apply fix for poor doc
litt3 Jan 22, 2025
e27d3ea
Fix goroutine safety comment
litt3 Jan 22, 2025
f6126af
Merge branch 'master' into client-v2-get
litt3 Jan 22, 2025
28c3d02
Fix test
litt3 Jan 22, 2025
036a222
Rework polynomial encoding enum, and descriptions
litt3 Jan 22, 2025
7b66df6
Make PR fixes
litt3 Jan 23, 2025
ad3dc97
Move conversion utils
litt3 Jan 23, 2025
6930a47
Remove GetCodec
litt3 Jan 23, 2025
13 changes: 13 additions & 0 deletions api/clients/codecs/polynomial_form.go
@@ -0,0 +1,13 @@
package codecs

// PolynomialForm is an enum that describes the different ways a polynomial may be represented.
type PolynomialForm uint

const (
// PolynomialFormEval is short for polynomial "evaluation form".
// The field elements represent the evaluation of the polynomial at roots of unity.
PolynomialFormEval PolynomialForm = iota
// PolynomialFormCoeff is short for polynomial "coefficient form".
// The field elements represent the coefficients of the polynomial.
PolynomialFormCoeff
)
40 changes: 40 additions & 0 deletions api/clients/v2/config.go
@@ -0,0 +1,40 @@
package clients

import (
"time"

"github.com/Layr-Labs/eigenda/api/clients/codecs"
)

// EigenDAClientConfig contains configuration values for EigenDAClient
type EigenDAClientConfig struct {
// The blob encoding version to use when writing and reading blobs
BlobEncodingVersion codecs.BlobEncodingVersion

// The Ethereum RPC URL to use for querying the Ethereum blockchain.
EthRpcUrl string

// The address of the EigenDABlobVerifier contract
EigenDABlobVerifierAddr string

// PayloadPolynomialForm is the initial form of a Payload after being encoded. The configured form does not imply
// any restrictions on the contents of a payload: it merely dictates how payload data is treated after being
// encoded.
//
// Since blobs sent to the disperser must be in coefficient form, the initial form of the encoded payload dictates
// what data processing must be performed during blob construction.
//
// The chosen form also dictates how the KZG commitment made to the blob can be used. If the encoded payload starts
// in PolynomialFormEval (meaning the data WILL be IFFTed before computing the commitment) then it will be possible
// to open points on the KZG commitment to prove that the field elements correspond to the commitment. If the
// encoded payload starts in PolynomialFormCoeff (meaning the data will NOT be IFFTed before computing the
// commitment) then it will not be possible to create a commitment opening: the blob will need to be supplied in its
// entirety to perform a verification that any part of the data matches the KZG commitment.
PayloadPolynomialForm codecs.PolynomialForm

// The timeout duration for relay calls to retrieve blobs.
RelayTimeout time.Duration

// The timeout duration for contract calls
ContractCallTimeout time.Duration
}
279 changes: 279 additions & 0 deletions api/clients/v2/eigenda_client.go
@@ -0,0 +1,279 @@
package clients

import (
"context"
"errors"
"fmt"
"math/rand"

"github.com/Layr-Labs/eigenda/api/clients/codecs"
"github.com/Layr-Labs/eigenda/api/clients/v2/verification"
"github.com/Layr-Labs/eigenda/common/geth"
contractEigenDABlobVerifier "github.com/Layr-Labs/eigenda/contracts/bindings/EigenDABlobVerifier"
core "github.com/Layr-Labs/eigenda/core/v2"
"github.com/Layr-Labs/eigenda/encoding"
"github.com/Layr-Labs/eigensdk-go/logging"
"github.com/consensys/gnark-crypto/ecc/bn254"
gethcommon "github.com/ethereum/go-ethereum/common"
)

// EigenDAClient provides the ability to get payloads from the relay subsystem, and to send new payloads to the disperser.
//
// This struct is goroutine safe.
type EigenDAClient struct {
log logging.Logger
// doesn't need to be cryptographically secure, as it's only used to distribute load across relays
random *rand.Rand
Comment on lines +25 to +26


small dumb question - does the use of a non-deterministic key potentially impact retrieval latencies across a subnetwork of verifier nodes?

Contributor Author

I don't think it's possible to provide any guarantee of highly similar latencies across a network of verifier nodes. Even if all nodes were to talk to a single relay, that relay could have high latency variability in responding to the different verifier nodes.

I think the best solution would be to implement a tool which prioritizes the best relay partner on a node-by-node basis, as mentioned in the TODO comment, so that every verifier node gets a response as quickly as possible

Contributor

we need a mutex around rand

Contributor Author

I don't think we do.

The only random method we're using is Perm, which calls Intn under the hood.

There exists a static Intn function in math/rand, which calls Intn on the global rand singleton, without any synchronization at the call site.

func Intn(n int) int { return globalRand().Intn(n) }

This indicates to me that it must be safe to call without synchronization.

Contributor

Documentation says

Random numbers are generated by a Source, usually wrapped in a Rand. Both types should be used by a single goroutine at a time: sharing among multiple goroutines requires some kind of synchronization.

Think the whole package is not goroutine safe. I think (but can't find a link atm) that everything in golang is assumed to NOT be goroutine safe, unless explicitly stated in the documentation comment. For eg golang maps are not goroutine safe: there's https://pkg.go.dev/golang.org/x/sync/syncmap for that.

Contributor Author (@litt3, Jan 23, 2025)

The package itself is comfortable using a global instance of Rand for some functionality, without explicit synchronization. I think it would be reasonable to copy the same usage patterns, without explicit synchronization

If you're not comfortable with this, though, I think I would lean toward sacrificing test determinism, and just use the static methods from rand instead of maintaining a local source of randomness. Having mutexes around random is very ugly.

Thoughts on this?

clientConfig *EigenDAClientConfig
codec codecs.BlobCodec
relayClient RelayClient
g1Srs []bn254.G1Affine
blobVerifier verification.IBlobVerifier
}

// BuildEigenDAClient builds an EigenDAClient from config structs.
func BuildEigenDAClient(
log logging.Logger,
clientConfig *EigenDAClientConfig,
ethConfig geth.EthClientConfig,
relayClientConfig *RelayClientConfig,
g1Srs []bn254.G1Affine) (*EigenDAClient, error) {

relayClient, err := NewRelayClient(relayClientConfig, log)
if err != nil {
return nil, fmt.Errorf("new relay client: %w", err)
}

ethClient, err := geth.NewClient(ethConfig, gethcommon.Address{}, 0, log)


probably out of scope but does the use of geth for the eth package imply that the node being used has to be a go-ethereum one or do other execution client nodes (e.g, reth, besu) also work?

Contributor Author (@litt3, Jan 21, 2025)

does the use of geth for the eth package imply that the node being used has to be a go-ethereum

I don't think that's the implication. The bindings method I'm using requires an EthClient input parameter, and the implementation happens to be in a package called geth. But I don't see why the target node would be required to be geth

@0x0aa0 can you weigh in here?

Contributor

yeah no it doesn't, it's just a geth provided library. its just an implementation of the eth rpc methods which all clients implement.

if err != nil {
return nil, fmt.Errorf("new eth client: %w", err)
}

blobVerifier, err := verification.NewBlobVerifier(ethClient, clientConfig.EigenDABlobVerifierAddr)
if err != nil {
return nil, fmt.Errorf("new blob verifier: %w", err)
}

codec, err := createCodec(clientConfig)
if err != nil {
return nil, err
}

return NewEigenDAClient(
log,
rand.New(rand.NewSource(rand.Int63())),
clientConfig,
relayClient,
blobVerifier,
codec,
g1Srs)
}

// NewEigenDAClient assembles an EigenDAClient from subcomponents that have already been constructed and initialized.
func NewEigenDAClient(
log logging.Logger,
random *rand.Rand,
clientConfig *EigenDAClientConfig,
relayClient RelayClient,
blobVerifier verification.IBlobVerifier,
codec codecs.BlobCodec,
g1Srs []bn254.G1Affine) (*EigenDAClient, error) {

return &EigenDAClient{
log: log,
random: random,
clientConfig: clientConfig,
codec: codec,
relayClient: relayClient,
blobVerifier: blobVerifier,
g1Srs: g1Srs,
}, nil
}

// GetPayload iteratively attempts to fetch a given blob with key blobKey from relays that have it, as claimed by the
// blob certificate. The relays are attempted in random order.
//
// If the blob is successfully retrieved, then the blob is verified. If the verification succeeds, the blob is decoded
// to yield the payload (the original user data), and the payload is returned.
func (c *EigenDAClient) GetPayload(
ctx context.Context,
blobKey core.BlobKey,
eigenDACert *verification.EigenDACert) ([]byte, error) {

err := c.verifyCertWithTimeout(ctx, eigenDACert)
if err != nil {
return nil, fmt.Errorf("verify cert with timeout for blobKey %v: %w", blobKey, err)
}

relayKeys := eigenDACert.BlobVerificationProof.BlobCertificate.RelayKeys
relayKeyCount := len(relayKeys)
if relayKeyCount == 0 {
return nil, errors.New("relay key count is zero")
}

blobCommitmentProto := contractEigenDABlobVerifier.BlobCommitmentBindingToProto(
&eigenDACert.BlobVerificationProof.BlobCertificate.BlobHeader.Commitment)
blobCommitment, err := encoding.BlobCommitmentsFromProtobuf(blobCommitmentProto)

if err != nil {
return nil, fmt.Errorf("blob commitments from protobuf: %w", err)
}
Contributor

feel like this should be a single util function that does both steps.
Also can we rename blobCommitmentProto to something like blobCommitmentInCert

Contributor Author

I added a utility function, and sidestepped the name blobCommitmentProto 7b66df6a

LMK what you think about the chosen name for the util function, it's awkward but nothing better came to mind

Contributor

LGTM, thanks! I don't like though that we use commitmentS (with an s) in one place and not the other.
[screenshot omitted]

Can we make those the same (prob remove the s?)

Contributor Author

This is a larger renaming discussion.

Considering the struct itself contains 2 commitments, in addition to other elements, if anything BlobCommitments, plural, is the better name.


// create a randomized array of indices, so that it isn't always the first relay in the list which gets hit
indices := c.random.Perm(relayKeyCount)

// TODO (litt3): consider creating a utility which can deprioritize relays that fail to respond (or respond maliciously)
Contributor

Nit, utility should also prioritize relays with lower latencies (although perhaps it should still reach out to lower priority relays with small but non-zero probability).

Contributor Author

Expanded TODO to mention prioritizing low latency relays


// iterate over relays in random order, until we are able to get the blob from someone
for _, val := range indices {
relayKey := relayKeys[val]

blob, err := c.getBlobWithTimeout(ctx, relayKey, blobKey)
// if GetBlob returned an error, try calling a different relay
if err != nil {
c.log.Warn("blob couldn't be retrieved from relay", "blobKey", blobKey, "relayKey", relayKey, "error", err)
continue
}

err = c.verifyBlobAgainstCert(blobKey, relayKey, blob, blobCommitment.Commitment, blobCommitment.Length)

// An honest relay should never send a blob which doesn't verify
if err != nil {
c.log.Warn("verify blob from relay: %w", err)
continue
}

payload, err := c.codec.DecodeBlob(blob)
if err != nil {


why is this error a terminal one but others warrant retrying? Is the idea that if a blob passes verification then the contents would always be the same and therefore the codec decoding would yield the same result irrespective of relay?


e.g couldn't only one relay lie about the length of the blob, causing the initial varuint decoding and length invariant to fail?

Contributor Author (@litt3, Jan 21, 2025)

if a blob passes verification then the contents would always be the same and therefore the codec decoding would yield the same result irrespective of relay

Correct. If the blob contents verified against the cert we have, that means the relay delivered the blob as it was dispersed. If we asked another relay, either it would:

  1. return the same blob bytes, and we end up at the same place
  2. return different blob bytes. if these different bytes are decodable, that means they are necessarily different from the bytes we currently have, so the cert can't possibly verify

If a non-parseable blob verifies against the commitments, time to panic. Either it's a bug, or worse

c.log.Error(
`Blob verification was successful, but decode blob failed!
This is likely a problem with the local blob codec configuration,
but could potentially indicate a maliciously generated blob certificate.
It should not be possible for an honestly generated certificate to verify
for an invalid blob!`,
Comment on lines +152 to +153
Contributor

Think this is not 100% accurate. the cert only talks about the blob. Here its the decoding of the blob to the payload that is failing. Our disperser client would only construct correctly encoded blobs, but it might have been constructed by some other library (lets say our rust library), and there might be some incompatibility between the two for eg.

Contributor Author

To me, "honestly generated certificate" includes the dispersing client verifying that the input bytes have a valid encoding for the intended use. By "should not be possible", I'm trying to convey that this should never happen in the normal course of business. If it does happen, it indicates one of the following:

  • a bug somewhere in the system
  • a malicious dispersing client
  • broken cryptography

Do you have any suggestions for rewording here?

Contributor

Not sure because I haven't wrapped my head around this whole length proof verification stuff, and the kind of attacks possible. But your "a malicious dispersing client" is PRECISELY the kind of thing EigenDA is meant to protect against (otherwise... just use S3). So we absolutely need good safeguards and guarantees around these attacks.

Contributor

Let's chat during our Integrations Food for Thought this afternoon. I'm fine in the meantime with merging this PR as is.

Contributor Author

But your "a malicious dispersing client" is PRECISELY the kind of thing EigenDA is meant to protect against

I don't think it is. EigenDA is meant to guarantee that data is immutable and available. If the dispersing client wants to fill a blob with total gobbledygook that means nothing to anyone, DA isn't in a position to protect against that. All DA can do is guarantee that the gobbledygook is faithfully preserved.

"blobKey", blobKey, "relayKey", relayKey, "eigenDACert", eigenDACert, "error", err)
return nil, fmt.Errorf("decode blob: %w", err)
}

return payload, nil
}

return nil, fmt.Errorf("unable to retrieve blob %v from any relay. relay count: %d", blobKey, relayKeyCount)
}

// verifyBlobAgainstCert verifies the blob received from a relay against the certificate.
// This method does NOT verify the blob with an eth_call to verifyBlobV2, that must be done separately.
//
// The following verifications are performed in this method:
// 1. Verify that blob isn't empty
// 2. Verify the blob kzg commitment
// 3. Verify that the blob length is less than or equal to the claimed blob length
//
// If all verifications succeed, the method returns nil. Otherwise, it returns an error.
func (c *EigenDAClient) verifyBlobAgainstCert(
blobKey core.BlobKey,
relayKey core.RelayKey,
blob []byte,
kzgCommitment *encoding.G1Commitment,
blobLength uint) error {
Comment on lines +172 to +177
Contributor

shouldn't this method take a blob and a cert? That's what I would have intuited from the name. Is it really worth extracting all of these fields from the cert? Would we ever want to call it without actually having a cert in hand?

Contributor Author (@litt3, Jan 23, 2025)

The problem with passing in the entire EigenDACert is that verifyBlobAgainstCert is called in a loop, and the data contained in the cert must be processed before it can be used (the binding type must be converted to the internal type). And we wouldn't want to do this conversion multiple times.

One option would be to pass in an encoding.BlobCommitments type, in place of the separate G1Commitment and uint length. But since these 2 fields are only a subset of the data contained in the encoding.BlobCommitments struct, my personal preference is to only pass in the things we need.

Contributor

Gotcha that makes sense at least. Might be nice to have an internal type for Cert (contains all the datatypes that a cert has, but in the internal types instead of contract types). But I'm fine with merging without this. This PR has been alive for too long already haha.

Contributor Author

I think we need a larger team discussion about when to create and use internal datatypes.

Should we always have internal types? Only when we need to define methods for autogenerated types? Only when necessary (i.e., use proto types wherever possible)?


// An honest relay should never send an empty blob
if len(blob) == 0 {
return fmt.Errorf("blob %v received from relay %v had length 0", blobKey, relayKey)
}

// TODO: in the future, this will be optimized to use fiat shamir transformation for verification, rather than
// regenerating the commitment: https://github.com/Layr-Labs/eigenda/issues/1037
valid, err := verification.GenerateAndCompareBlobCommitment(c.g1Srs, blob, kzgCommitment)
if err != nil {
return fmt.Errorf(
"generate and compare commitment for blob %v received from relay %v: %w",
blobKey,
relayKey,
err)
}

if !valid {
return fmt.Errorf("commitment for blob %v is invalid for bytes received from relay %v", blobKey, relayKey)
}

// Checking that the length returned by the relay is <= the length claimed in the BlobCommitments is sufficient
// here: it isn't necessary to verify the length proof itself, since this will have been done by DA nodes prior to
// signing for availability.
Comment on lines +200 to +201
Contributor

Is this really true? I feel like we should still check the proof?
Otherwise wouldn't the same argument also apply to the actual commitment above? and possibly some other checks we check in the CertVerification contract call.

Contributor

Also still having trouble understanding why we check <= :( :(

Contributor Author

Summoning the experts @bxue-l2 @anupsv

But here's my attempt at an explanation:
The reason why we need to verify the kzg commitment is because without doing that, we don't know for sure that the relay sent us the same bytes that the DA nodes received. Once we have verified the commitment, this is guaranteed, so we can begin to rely on our trust of the DA nodes. We assume that a given threshold of DA nodes must be honest, and the only way the length proof could fail verification is if greater than the assumed threshold is malicious.

Now, as to whether the length check is needed at all, let me repeat here my comment I recently made on the notion doc:

I’m also not 100% convinced it is necessary in the retrieval path. The only thing I can think that this is protecting against is a relay sending tons of extra padded 0s? But we could also protect against that by simply forbidding relays from sending trailing zeros, and check that

//
// Note that the length in the commitment is the length of the blob in symbols
if uint(len(blob)) > blobLength*encoding.BYTES_PER_SYMBOL {
return fmt.Errorf(
"length for blob %v (%d bytes) received from relay %v is greater than claimed blob length (%d bytes)",
blobKey,
len(blob),
relayKey,
blobLength*encoding.BYTES_PER_SYMBOL)
}

return nil
}
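The checks debated above can be reduced to a standalone sketch: the <= length check actually performed by verifyBlobAgainstCert, plus the stricter trailing-zero prohibition floated as an alternative in the thread. Both functions here are illustrative only, assuming 32-byte symbols (the BN254 field element size used by encoding.BYTES_PER_SYMBOL):

```go
package main

import "fmt"

const bytesPerSymbol = 32 // BN254 field elements are padded to 32 bytes

// checkClaimedLength mirrors the check in verifyBlobAgainstCert: the blob a
// relay returns may be shorter than the length claimed in the commitment,
// but never longer.
func checkClaimedLength(blob []byte, claimedSymbols uint) error {
	if uint(len(blob)) > claimedSymbols*bytesPerSymbol {
		return fmt.Errorf("blob is %d bytes, claimed length is %d bytes",
			len(blob), claimedSymbols*bytesPerSymbol)
	}
	return nil
}

// hasTrailingZeroByte sketches the stricter alternative from the review
// thread: forbid relays from padding responses with trailing zeros.
func hasTrailingZeroByte(blob []byte) bool {
	return len(blob) > 0 && blob[len(blob)-1] == 0
}

func main() {
	blob := make([]byte, 64) // 2 symbols worth of bytes
	blob[63] = 1
	fmt.Println(checkClaimedLength(blob, 2) == nil) // true: 64 <= 2*32
	fmt.Println(checkClaimedLength(blob, 1) == nil) // false: 64 > 1*32
	fmt.Println(hasTrailingZeroByte(blob))          // false: last byte is 1
}
```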

// getBlobWithTimeout attempts to get a blob from a given relay, and times out based on config.RelayTimeout
func (c *EigenDAClient) getBlobWithTimeout(
ctx context.Context,
relayKey core.RelayKey,
blobKey core.BlobKey) ([]byte, error) {

timeoutCtx, cancel := context.WithTimeout(ctx, c.clientConfig.RelayTimeout)
defer cancel()

return c.relayClient.GetBlob(timeoutCtx, relayKey, blobKey)
}

// verifyCertWithTimeout verifies an EigenDACert by making a call to VerifyBlobV2.
//
// This method times out after the duration configured in clientConfig.ContractCallTimeout
func (c *EigenDAClient) verifyCertWithTimeout(
ctx context.Context,
eigenDACert *verification.EigenDACert,
) error {
timeoutCtx, cancel := context.WithTimeout(ctx, c.clientConfig.ContractCallTimeout)
defer cancel()

return c.blobVerifier.VerifyBlobV2(timeoutCtx, eigenDACert)
}

// GetCodec returns the codec the client uses for encoding and decoding blobs
func (c *EigenDAClient) GetCodec() codecs.BlobCodec {
return c.codec
}

// Close is responsible for calling close on all internal clients. This method will do its best to close all internal
// clients, even if some closes fail.
//
// Any and all errors returned from closing internal clients will be joined and returned.
//
// This method should only be called once.
func (c *EigenDAClient) Close() error {
relayClientErr := c.relayClient.Close()

// TODO: this is using join, since there will be more subcomponents requiring closing after adding PUT functionality
return errors.Join(relayClientErr)
}

// createCodec creates the codec based on client config values
func createCodec(config *EigenDAClientConfig) (codecs.BlobCodec, error) {
lowLevelCodec, err := codecs.BlobEncodingVersionToCodec(config.BlobEncodingVersion)
if err != nil {
return nil, fmt.Errorf("create low level codec: %w", err)
}

switch config.PayloadPolynomialForm {
case codecs.PolynomialFormCoeff:
// Data must NOT be IFFTed during blob construction, since the payload is already in PolynomialFormCoeff after
// being encoded.
return codecs.NewNoIFFTCodec(lowLevelCodec), nil
case codecs.PolynomialFormEval:
// Data MUST be IFFTed during blob construction, since the payload is in PolynomialFormEval after being encoded,
// but must be in PolynomialFormCoeff to produce a valid blob.
return codecs.NewIFFTCodec(lowLevelCodec), nil
default:
return nil, fmt.Errorf("unsupported polynomial form: %d", config.PayloadPolynomialForm)
}
}
47 changes: 47 additions & 0 deletions api/clients/v2/mock/blob_verifier.go

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion api/clients/v2/mock/relay_client.go
Expand Up @@ -19,7 +19,7 @@ func NewRelayClient() *MockRelayClient {
}

func (c *MockRelayClient) GetBlob(ctx context.Context, relayKey corev2.RelayKey, blobKey corev2.BlobKey) ([]byte, error) {
args := c.Called(blobKey)
args := c.Called(ctx, relayKey, blobKey)
if args.Get(0) == nil {
return nil, args.Error(1)
}
Expand Down