
refactor: replace byteorder with the equivalent built-in methods #2081

Conversation

0x61nas
Contributor

@0x61nas 0x61nas commented Dec 24, 2023

Follow-up to #2071: this PR replaces the use of byteorder utilities for converting bytes between different representations with the equivalent built-in methods.

This PR isn't completely done yet, since the final aim is to remove byteorder from our deps.

But after refactoring the ico and hdr decoder/encoder, I noticed that there is an annoying, repeated pattern: read the needed bytes from the reader, then convert them to the representation we want. There's also a potential error if we forget to read before converting the buffer, so I introduced a new utility macro in 8478822 to take care of those steps. But I have some concerns about how this should be done, like:

  • Should we use ? inside this macro, or just return a Result and leave the error handling to the caller (who will use the ? operator anyway)?
  • The syntax
  • Should we take the buffer from the caller, or create an internal one on each call (from a performance perspective)?

@0x61nas 0x61nas marked this pull request as draft December 24, 2023 18:55
@fintelia
Contributor

I'd like to step back and consider what problem we're trying to solve here. The read_u16/read_u32/etc. methods from the byteorder crate are really convenient. The implementation of those methods is pretty straightforward so it is certainly feasible to duplicate the functionality in this crate (either with macros or functions). But I'd like to talk through the goals/tradeoffs before doing so.

@0x61nas
Contributor Author

0x61nas commented Dec 24, 2023

The read_u16/read_u32/etc. methods from the byteorder crate are really convenient.
The implementation of those methods is pretty straightforward so it is certainly feasible to duplicate the functionality in this crate (either with macros or functions).

I'm with you on this point; they just use the built-in methods in most cases. The reasons I think we need this are: 1) they create a new buffer on every call, so more allocations (something we can reduce if we use the built-in methods ourselves); 2) less error/attack surface if we reduce our dependencies.

@0x61nas
Contributor Author

0x61nas commented Dec 24, 2023

  1. They create a new buffer on every call, so more allocations (something we can reduce if we use the built-in methods ourselves).

I'm not sure whether LLVM/rustc has an optimization for this situation, or whether it has any effect on overall performance, but I'd prefer to play it safe.

@fintelia
Contributor

In general, it is best to make performance decisions based on benchmarking. If you have any measurements showing a difference, it would certainly be worth investigating. However, my guess based on looking at the code is that after LLVM inlines everything there won't be a performance difference because the buffers are all very small and allocated on the stack.

As far as reducing attack surface by avoiding dependencies, it would be nice to have a dependency that didn't use as much unsafe code, but overall I'm not especially worried given that byteorder is an extremely popular crate.

@0x61nas
Contributor Author

0x61nas commented Dec 25, 2023

If you have any measurements showing a difference, it would certainly be worth investigating.

I don't have such measurements right now, but I do think valgrind would show some difference in the number of allocations. However, it wouldn't be that large anyway, so I don't think we should spend more time on this.

@0x61nas 0x61nas closed this Dec 25, 2023