
Add convenience Functions to save/load quantized Model to/from Disk #411

Open
marcnnn opened this issue Dec 7, 2024 · 3 comments

@marcnnn

marcnnn commented Dec 7, 2024

Since loading quantized models from HF is not possible yet, I was looking for a way to save a model to disk after quantization that is as easy as loading it from HF.

And then a function to load that file again.

@jonatanklosko
Member

This probably belongs more to Axon than Bumblebee, since we need a way to store %Axon.ModelState{}. For the model itself, maybe there should be a way to quantize the model only, so it can be replicated without altering the params (so we can still build the model, instead of storing it). cc @seanmor5

@marcnnn
Author

marcnnn commented Jan 13, 2025

There is a function for the model:
https://github.com/elixir-nx/axon/blob/main/lib/axon/quantization.ex#L43

And one for the model state:
https://github.com/elixir-nx/axon/blob/main/lib/axon/quantization.ex#L69

And quantized tensors can be serialized with Nx and Safetensors:
%{params: model_state, model: model} = quantized_model

%{"decoder.blocks.19.ffn.output" => %{"kernel" => p}} = model_state.data
%Axon.Quantization.QTensor{value: tensor} = p

tensor |> Nx.serialize()
Safetensors.write!("/srv/public/marcni/phi.axon", %{tensor: tensor})

Is it just model_state.data that needs to be serialized, or is there more?

Just loading and quantization took around 3 minutes, and starting/loading it on the GPU again took over 3 minutes.

I would like to speed that up, since Phi-4 is a small model in comparison.

@jonatanklosko
Member

Oh, I missed quantize_model!

For the model state you can actually do Nx.serialize(model_state). So it would be this:

# Serialize
File.write!("state.nx", Nx.serialize(model_info.params))

# Load
{:ok, spec} = Bumblebee.load_spec({:hf, "..."})
model = spec |> Bumblebee.build_model() |> Axon.Quantization.quantize_model()
params = File.read!("state.nx") |> Nx.deserialize()
model_info = %{spec: spec, model: model, params: params}

This may work for your use case if you have enough RAM to serialize and deserialize.

There are two issues with Nx.serialize:

  1. It is going to build one huge binary, which you then dump into a single file. Similarly, loading requires reading the whole file back. For large models it may be necessary to do sharded serialization, where we save and load across multiple files (see the first sketch after this list).

  2. In principle it's better to use a more portable format, such as safetensors, though for practical Elixir use cases that's not necessarily a requirement. The challenges with safetensors are that: (a) it has a flat structure, while Axon params are nested under layer names; (b) Axon params can be an arbitrary Nx.Container; in fact, %Axon.Quantization.QTensor{} is also one, with three tensors. We could work around (a) by flattening the structure to a map with keys like layer_name-->param_name. I'm not sure about (b); maybe we could nest container tensors further, as in layer_name-->param_name-->t1, and other container attributes could be put as term_to_binary in the safetensors metadata (see the second sketch after this list). But at that point the question is whether it makes sense to use safetensors, given that we need to "reinvent" a custom format on top.
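
For (1), a rough sketch of what sharded serialization could look like. It assumes model_state.data is a plain map of layer name to params and that each shard can be serialized on its own with Nx.serialize; the ShardedState module name, the shard size and the file naming are made up for illustration, not an existing Axon API.

# Sketch only: split model_state.data into shards by layer name and
# serialize each shard to its own file (names and sizes are illustrative).
defmodule ShardedState do
  @layers_per_shard 50

  def save(model_state, dir) do
    File.mkdir_p!(dir)

    model_state.data
    |> Enum.sort_by(fn {layer_name, _params} -> layer_name end)
    |> Enum.chunk_every(@layers_per_shard)
    |> Enum.with_index()
    |> Enum.each(fn {entries, index} ->
      binary = entries |> Map.new() |> Nx.serialize()
      File.write!(Path.join(dir, "shard-#{index}.nx"), binary)
    end)
  end

  def load(model_state, dir) do
    data =
      Path.join(dir, "shard-*.nx")
      |> Path.wildcard()
      |> Enum.map(fn path -> path |> File.read!() |> Nx.deserialize() end)
      |> Enum.reduce(%{}, &Map.merge(&2, &1))

    %{model_state | data: data}
  end
end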

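For (2), a rough sketch of the flattening idea. It assumes every leaf is either an Nx.Tensor or a QTensor whose struct fields are all tensors; the FlatParams module name and the "/" key separator are arbitrary choices for illustration.

# Sketch only: flatten nested params into a flat map of tensors, splitting
# QTensor containers into one entry per struct field (separator is illustrative).
defmodule FlatParams do
  def flatten(model_state) do
    for {layer_name, params} <- model_state.data,
        {param_name, value} <- params,
        {suffix, tensor} <- leaf_tensors(value),
        into: %{} do
      {Enum.join([layer_name, param_name | suffix], "/"), tensor}
    end
  end

  defp leaf_tensors(%Nx.Tensor{} = tensor), do: [{[], tensor}]

  defp leaf_tensors(%Axon.Quantization.QTensor{} = qtensor) do
    qtensor
    |> Map.from_struct()
    |> Enum.map(fn {field, tensor} -> {[Atom.to_string(field)], tensor} end)
  end
end

# For example:
# Safetensors.write!("model.safetensors", FlatParams.flatten(model_state))

Reassembling the nested structure (and the QTensor structs) on load is the part where this stops matching what safetensors natively models, which is the custom-format concern above.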