
FSDP+LoRA on multiple GPUs (A100 80GB ×4): ValueError: Cannot flatten integer dtype tensors #2250

Paxwell-Paxwell opened this issue Jan 10, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@Paxwell-Paxwell

Please check that this issue hasn't been reported before.

  • I searched previous bug reports and didn't find any similar reports.

Expected Behavior

The LoRA configuration should work with FSDP.

Current behaviour

```
[rank0]: raise ValueError("Cannot flatten integer dtype tensors")
[rank0]: ValueError: Cannot flatten integer dtype tensors
[rank1]: Traceback (most recent call last):
```
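
For context (an observation, not part of the original report): FSDP flattens module parameters into flat buffers and raises this exact error when a parameter has an integer dtype. With `load_in_8bit: true`, bitsandbytes stores weights as int8, which FSDP cannot flatten. A hedged sketch of the 4-bit (QLoRA) settings commonly used with FSDP instead — the `bnb_4bit_quant_storage` key is an assumption about this axolotl version:

```yaml
# Sketch only: swap 8-bit for 4-bit quantization when sharding with FSDP.
load_in_8bit: false
load_in_4bit: true
# Store the 4-bit weights in a float dtype so FSDP can flatten them.
bnb_4bit_quant_storage: bfloat16
```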

Steps to reproduce

Run Axolotl with LoRA + FSDP on 4 NVIDIA A100 80GB GPUs.

- torch version: 2.5.1
- axolotl version: 0.6.0
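
A minimal launch sketch, assuming a single node and a config file named `lora-fsdp.yml` (the file name is an assumption; axolotl 0.6.0 exposes the `axolotl.cli.train` module):

```shell
# Hypothetical config path; set num_processes to the number of GPUs (4 here).
accelerate launch --num_processes 4 -m axolotl.cli.train lora-fsdp.yml
```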

Config yaml

```yaml
base_model: meta-llama/Llama-3.3-70B-Instruct
model_type: LlamaForCausalLM
processing_class: AutoTokenizer

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

load_in_8bit: true
load_in_4bit: false
strict: false

chat_template: llama3
datasets:

dataset_prepared_path: ./workspace/aiLawData/last_run_prepared
val_set_size: 0
output_dir: ./workspace/aiLawData/outputs/Llama-3.3-70B-memo-law-Instruct-lora-r256-v3
hub_model_id: PaxwellPaxwell/Llama-3.3-70B-Memo-law-Instruct-adapter-lora-r256-v3
sequence_len: 12000
sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 256
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: Ai-Law
wandb_entity:
wandb_watch:
wandb_name: Llama-3.3-70B-memo-law-Instruct-adapter-lora-r256-v3
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 10
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0002
auto_resume_from_checkpoints: true

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 2
debug:
deepspeed:
weight_decay: 0.0

fsdp:
  - full_shard
  - auto_wrap

fsdp_config:
  activation_checkpointing: true
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: true
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sharding_strategy: FULL_SHARD

special_tokens:
  pad_token: <|finetune_right_pad_id|>
  eos_token: <|eot_id|>
```

Possible solution

No response

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

3.10.13

axolotl branch-commit

main/latest

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of axolotl.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.
@Paxwell-Paxwell Paxwell-Paxwell added the bug Something isn't working label Jan 10, 2025
@NJordan72
Contributor

Can you post the stack trace so we can see what is throwing the error? I had a similar problem earlier this week; depending on where it was coming from, I either had to set gradient_accumulation_steps to 1 or turn off the Liger cross-entropy kernel.
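
The two workarounds mentioned in that comment would look like this in the config (a sketch; try one at a time and see which resolves the error):

```yaml
# Workaround 1 (sketch): drop gradient accumulation.
gradient_accumulation_steps: 1

# Workaround 2 (sketch): disable the Liger fused cross-entropy kernel.
liger_fused_linear_cross_entropy: false
```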
