Commit

Fix transformers version (#156)
When running AWQ search for Llama2 with transformers>=4.38.0, I hit the error below:

File "/×××/llm-awq/awq/quantize/auto_scale.py", line 134, in _search_module_scale
RuntimeError: The expanded size of the tensor (4608) must match the existing size (4096) at non-singleton dimension 3. Target sizes: [65, 32, 512, 4608]. Tensor sizes: [65, 1, 512, 4096]

This error does not occur with earlier versions of transformers, so I pinned transformers==4.36.2.
Louym authored Mar 11, 2024
1 parent 178e522 commit 7901983
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion pyproject.toml

@@ -15,7 +15,7 @@ classifiers = [
 dependencies = [
     "accelerate", "sentencepiece", "tokenizers>=0.12.1",
     "torch>=2.0.0", "torchvision",
-    "transformers>=4.32.0",
+    "transformers==4.36.2",
     "lm_eval==0.3.0", "texttable",
     "toml", "attributedict",
     "protobuf",
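
For an environment that already has a newer transformers installed, a quick sanity check before running AWQ search can catch the mismatch early. The snippet below is a minimal sketch, not part of this commit or of llm-awq; the helper name check_transformers_pin is hypothetical, and it simply compares the installed version against the pin introduced here (importlib.metadata requires Python 3.8+).

# Hypothetical helper (not from llm-awq): warn when the installed
# transformers version differs from the pin in pyproject.toml.
from importlib.metadata import version


def check_transformers_pin(expected: str = "4.36.2") -> None:
    installed = version("transformers")
    if installed != expected:
        print(
            f"transformers {installed} is installed; AWQ search for Llama2 is "
            f"reported to fail with >=4.38.0. Consider: pip install transformers=={expected}"
        )


if __name__ == "__main__":
    check_transformers_pin()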
