# Optimization principles
This page is intended for `bitcode` users (version 0.6 and above); it does not cover how `bitcode` itself is internally optimized.
If you have multiple values of the same type and plan to compress the encoded data, consider how semantically similar the values are. If they are similar, use an array: for example, to store two prices, use `prices: [u32; 2]`. This reduces overhead and improves compression when values are duplicated. If they are different, use a tuple or struct so they can be compressed separately: for example, to store a price and a quantity, use `(u32, u32)` or `struct Item { price: u32, quantity: u32 }`.
Be aware that `u128` and `i128` generally result in poor codegen, so avoid converting values that merely happen to be 16 bytes into them for encoding purposes. Use `[u8; 16]` instead.
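For example, a 16-byte value can be carried as a byte array rather than a `u128` (a minimal stdlib-only sketch; the variable names are illustrative):

```rust
fn main() {
    // A value that happens to be 16 bytes, e.g. a UUID-like id.
    let id: u128 = 0x0123_4567_89ab_cdef_0123_4567_89ab_cdef;

    // Store [u8; 16] in the encoded type instead of u128.
    let bytes: [u8; 16] = id.to_le_bytes();

    // Recover the original value after decoding.
    let roundtrip = u128::from_le_bytes(bytes);
    assert_eq!(roundtrip, id);
}
```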
`bitcode` output is relatively compressible, in terms of both compression ratio and compression speed. Recommended algorithms are Deflate, LZ4, and Zstd. If you think compression would be too expensive, try using a lower compression level.
`Encode` and `Decode` are faster than `Serialize` and `Deserialize`, respectively, and `Encode` produces a smaller output than `Serialize`. Learn more.
You can reduce allocations by using a `bitcode::Buffer` for `encode` and `decode`. If you don't have a convenient place to store one, consider a thread-local:
```rust
pub mod bitcode {
    pub use bitcode::*;

    thread_local! {
        // Scratch buffer reused across calls on this thread.
        static BUFFER: std::cell::RefCell<bitcode::Buffer> = Default::default();
    }

    pub fn encode<T: Encode + ?Sized>(t: &T) -> Vec<u8> {
        BUFFER.with(|b| b.borrow_mut().encode(t).to_owned())
    }

    pub fn decode<'a, T: Decode<'a>>(bytes: &'a [u8]) -> Result<T, Error> {
        BUFFER.with(|b| b.borrow_mut().decode(bytes))
    }
}
```
`bitcode` has high CPU overhead relative to other formats. If you only encode a single `struct`, you may be disappointed by the performance. However, as the number and complexity of the things being encoded grows, `bitcode` may become the faster option.