(WorkerDict pid=86867) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
(WorkerDict pid=86867) /home/aolong/.conda/envs/vllm/lib/python3.11/site-packages/torch/distributed/fsdp/_init_utils.py:440: UserWarning: FSDP is switching to use `NO_SHARD` instead of ShardingStrategy.FULL_SHARD since the world size is 1.
(WorkerDict pid=86867) warnings.warn(
(WorkerDict pid=86867) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)
BASE_CONFIG="\
algorithm.adv_estimator=reinforce_plus_plus \
data.train_batch_size=4 \
data.val_batch_size=4 \
data.max_prompt_length=512 \
data.max_response_length=4096 \
actor_rollout_ref.actor.optim.lr=5e-7 \
actor_rollout_ref.model.use_remove_padding=True \
actor_rollout_ref.actor.ppo_mini_batch_size=256 \
actor_rollout_ref.actor.ppo_micro_batch_size=64 \
actor_rollout_ref.actor.use_kl_loss=True \
actor_rollout_ref.actor.kl_loss_coef=0.001 \
actor_rollout_ref.actor.kl_loss_type=low_var_kl \
actor_rollout_ref.model.enable_gradient_checkpointing=True \
actor_rollout_ref.actor.fsdp_config.param_offload=False \
actor_rollout_ref.actor.fsdp_config.grad_offload=False \
actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
actor_rollout_ref.rollout.log_prob_micro_batch_size=160 \
actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
actor_rollout_ref.rollout.name=vllm \
actor_rollout_ref.rollout.gpu_memory_utilization=0.5 \
actor_rollout_ref.rollout.n=4 \
actor_rollout_ref.rollout.temperature=0.7 \
actor_rollout_ref.ref.log_prob_micro_batch_size=160 \
actor_rollout_ref.ref.fsdp_config.param_offload=False \
algorithm.kl_ctrl.kl_coef=0.001 \
trainer.critic_warmup=0 \
trainer.logger=['wandb'] \
trainer.project_name='GRPO_logic_KK' \
trainer.n_gpus_per_node=1 \
trainer.nnodes=1 \
trainer.save_freq=60 \
trainer.test_freq=10 \
trainer.total_epochs=5"
Why does it say "with a model not initialized on GPU" and "the current dype in Qwen2ForCausalLM is torch.float32"? All the offload options in the script should be disabled, and the model is supposedly loaded in bf16 by default, so why does it warn about float32?
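A minimal sketch of why this warning appears (an illustration of the default-dtype behavior, not verl's actual loading code): any PyTorch module, and any `from_pretrained` call without an explicit `torch_dtype`, initializes weights in float32, and Flash Attention 2's dtype check runs against that initial dtype even if the trainer casts to bf16 later.

```python
import torch
import torch.nn as nn

# A plain module initializes in float32 by default -- the same reason the
# Qwen2 checkpoint reports torch.float32 unless torch_dtype is passed.
layer = nn.Linear(4, 4)
print(layer.weight.dtype)        # torch.float32

# Casting to bfloat16 (what torch_dtype=torch.bfloat16 does at load time)
layer_bf16 = nn.Linear(4, 4).to(torch.bfloat16)
print(layer_bf16.weight.dtype)   # torch.bfloat16

# With transformers, the equivalent load (checkpoint name is illustrative) is:
# model = AutoModelForCausalLM.from_pretrained(
#     "Qwen/Qwen2-7B",
#     attn_implementation="flash_attention_2",
#     torch_dtype=torch.bfloat16,
# ).to("cuda")   # moving to GPU after init also silences the first warning
```

Note that the fsdp_config offload flags in the script only control where FSDP keeps params/grads/optimizer state between steps; they have no effect on the dtype the model is initialized with, which is why disabling them does not remove the warning.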