gemma : use more bits for the token_embd.weight tensor (ggerganov#5650)
* gemma : use Q8_0 for the token_embd.weight tensor

* llama : quantize token_embd.weight using output type

(cherry picked from commit 96633ee)

Signed-off-by: Jared Van Bortel <[email protected]>
ggerganov authored and cebtenzzre committed Feb 22, 2024
1 parent 11ed1fb commit 95dcf04
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion llama.cpp
@@ -10515,7 +10515,10 @@ static ggml_type get_k_quant_type(quantize_state_internal & qs, ggml_type new_ty
         return std::make_pair(i_layer, n_layer);
     };
 
-    if (name == tn(LLM_TENSOR_OUTPUT, "weight")) {
+    // for arches that share the same tensor between the token embeddings and the output, we quantize the token embeddings
+    // with the quantization of the output tensor
+    if (name == tn(LLM_TENSOR_OUTPUT, "weight") ||
+        (LLM_TENSOR_NAMES.at(arch).find(LLM_TENSOR_OUTPUT) == LLM_TENSOR_NAMES.at(arch).end() && name == "token_embd.weight")) {
         int nx = tensor->ne[0];
         if (arch == LLM_ARCH_FALCON || nx % QK_K != 0) {
             new_type = GGML_TYPE_Q8_0;
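The condition added by this commit can be read as: quantize `token_embd.weight` with the output tensor's type whenever the architecture declares no separate output tensor (i.e. the embeddings are tied to the output projection, as in Gemma). Below is a minimal standalone sketch of that decision logic; the enum values and the `tensor_names` table are hypothetical stand-ins for llama.cpp's `LLM_ARCH_*`, `LLM_TENSOR_*`, and `LLM_TENSOR_NAMES` internals, not the actual definitions.

```cpp
#include <map>
#include <set>
#include <string>

// Hypothetical stand-ins for llama.cpp's arch/tensor enums.
enum llm_arch   { ARCH_LLAMA, ARCH_GEMMA };
enum llm_tensor { TENSOR_TOKEN_EMBD, TENSOR_OUTPUT };

// Per-arch set of declared tensors. Gemma declares no separate output
// tensor because it reuses token_embd.weight as the output projection.
static const std::map<llm_arch, std::set<llm_tensor>> tensor_names = {
    { ARCH_LLAMA, { TENSOR_TOKEN_EMBD, TENSOR_OUTPUT } },
    { ARCH_GEMMA, { TENSOR_TOKEN_EMBD } },
};

// Mirrors the patched condition: treat token_embd.weight like the output
// tensor when the arch defines no separate output tensor (tied embeddings).
bool quantize_as_output(llm_arch arch, const std::string & name) {
    const std::set<llm_tensor> & names = tensor_names.at(arch);
    return name == "output.weight" ||
           (names.find(TENSOR_OUTPUT) == names.end() &&
            name == "token_embd.weight");
}
```

With this check, Gemma's `token_embd.weight` falls through to the higher-precision path (e.g. Q8_0) that was previously reserved for `output.weight`, while arches with a distinct output tensor keep quantizing their embeddings as before.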
