fix(cast): consistent serialization of Uint/Ints depending on actual type #12059
base: master
Conversation
Force-pushed from d53b17f to 0543c04
Force-pushed from c14e6a8 to 05f0bc9
Okay, this set out to be a lot more complicated than I initially thought since the […]. Let me know if the proptest is not enough for this and if you'd like to see any other/more kinds of testing.
bumping here
Force-pushed from 05f0bc9 to 1407d86
Hey @0xferrous, thanks for the PR! You're right that this is an oversight in the current impl. To give some context: the original approach was designed to work around serde's number conversion limitations. It tries to preserve values as JSON numbers when they fit within 64 bits, since most […]. However, as you've pointed out, this creates inconsistency for arrays where elements serialize differently. Since this primarily impacts arrays, I'd prefer a more targeted fix focused solely on array serialization: […]

IMO this is superior as it ensures type uniformity within each array while preserving ergonomics for simple values. Also, I don't think we need the new flag you introduced... wdyt?
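The snippet the comment refers to is not preserved on this page, but a minimal sketch of what such an array-targeted approach could look like is below (hypothetical helper names; plain `u128` stands in for the real 256-bit value type):

```rust
// Hypothetical sketch of the array-focused alternative (not the actual foundry
// code): pick one representation for the whole array based on whether any
// element exceeds 64 bits at runtime, so the array stays type-uniform.

#[derive(Debug, PartialEq)]
enum JsonElem {
    Number(u64),
    String(String),
}

fn serialize_uint_array(values: &[u128]) -> Vec<JsonElem> {
    // If every element fits in a u64, keep JSON numbers; otherwise stringify
    // all of them so consumers never see a mixed number/string array.
    let all_fit = values.iter().all(|v| u64::try_from(*v).is_ok());
    values
        .iter()
        .map(|v| {
            if all_fit {
                JsonElem::Number(*v as u64)
            } else {
                JsonElem::String(v.to_string())
            }
        })
        .collect()
}

fn main() {
    // One element overflows u64, so every element becomes a string.
    let mixed = serialize_uint_array(&[1, u64::MAX as u128 + 1]);
    assert_eq!(mixed[0], JsonElem::String("1".into()));
    // All elements fit, so the array stays numeric.
    let small = serialize_uint_array(&[1, 2]);
    assert_eq!(small[0], JsonElem::Number(1));
}
```

This keeps small-valued arrays ergonomic, but as the reply below notes, the output shape still depends on the runtime values rather than on the declared type.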
Just for the `cast` decoded output when JSON formatted, I don't think ergonomics matter much. This still means the serialization will differ with runtime values; the uniformity of serialization is better, yes, but since the decoding type is already provided by the user, they can know exactly what to expect depending on the type they provide. Also, 2 ** 64 is a little over 18 ETH, and I don't think it's that uncommon for values to go beyond that, especially when working with DeFi contracts.
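For reference, the arithmetic behind that figure (assuming the value is denominated in wei, where 1 ETH = 10^18 wei):

```rust
// Quick check of the "2 ** 64 is a little over 18 eth" claim, assuming wei.
fn main() {
    let wei: u128 = 1u128 << 64;   // 18_446_744_073_709_551_616 wei
    let eth = wei as f64 / 1e18;   // 1 ETH = 10^18 wei
    println!("2^64 wei = {wei} wei ≈ {eth:.2} ETH"); // ≈ 18.45 ETH
}
```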
Also, for some personal context: I stumbled upon this because I was using the […]
@klkvr @DaniPopes thoughts?
Motivation
The current implementation dynamically tries to determine whether the runtime value can fit in 64 bits, but this leads to inconsistent serialization. For instance, if you are decoding a `uint[]`, some of the values that fit in 64 bits will serialize as numbers while others serialize as strings, requiring special handling by the user consuming the JSON.
Solution
Use the type information to determine the serialization, so the user always knows that a given type will serialize to either a number or a string depending on the number of bits that type uses.
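A minimal sketch of the type-driven rule described above (simplified stand-ins, not the actual `cast` implementation; `bits` is the declared width from the ABI type, e.g. 64 for `uint64`, 256 for `uint256`):

```rust
// Illustrative only: the declared bit width, not the runtime value, decides
// whether a decoded unsigned integer becomes a JSON number or a JSON string.

#[derive(Debug, PartialEq)]
enum JsonNum {
    Number(u64),    // types of 64 bits or fewer can never overflow a u64
    String(String), // wider types are always stringified, even for small values
}

fn serialize_uint(value: u128, bits: usize) -> JsonNum {
    if bits <= 64 {
        JsonNum::Number(value as u64)
    } else {
        JsonNum::String(value.to_string())
    }
}

fn main() {
    // Two uint256 values: both serialize as strings, regardless of magnitude,
    // so a decoded uint256[] is always uniform.
    assert_eq!(serialize_uint(42, 256), JsonNum::String("42".into()));
    assert_eq!(
        serialize_uint(u64::MAX as u128 + 1, 256),
        JsonNum::String("18446744073709551616".into())
    );
    // A uint64 value stays a plain JSON number.
    assert_eq!(serialize_uint(42, 64), JsonNum::Number(42));
}
```

With this rule, a consumer decoding `uint256[]` can always parse strings, and one decoding `uint64[]` can always expect numbers.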
PR Checklist