
Getting invalid argument types #24

@ivanvoid

I put the data as pointed to here, then ran:
python run_nerf.py --config configs/surreal/surreal.txt --basedir logs --expname surreal_model
and got this error:

$ python run_nerf.py --config configs/surreal/surreal.txt --basedir logs  --expname surreal_model
init meta
/home/ivan/miniconda3/envs/anerf/lib/python3.8/site-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 8, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
parent-centered
Loader initialized.
KPE: RelDist, BPE: VecNorm, VPE: VecNorm
Embedder class: <class 'core.cutoff_embedder.CutoffEmbedder'>
Normalization: False opt_cut :False
Embedder class: <class 'core.cutoff_embedder.Embedder'>
Embedder class: <class 'core.cutoff_embedder.CutoffEmbedder'>
Normalization: False opt_cut :False
RayCaster(
  (network): NeRF(
    (pts_linears): ModuleList(
      (0): Linear(in_features=432, out_features=256, bias=True)
      (1-4): 4 x Linear(in_features=256, out_features=256, bias=True)
      (5): Linear(in_features=688, out_features=256, bias=True)
      (6-7): 2 x Linear(in_features=256, out_features=256, bias=True)
    )
    (alpha_linear): Linear(in_features=256, out_features=1, bias=True)
    (views_linears): ModuleList(
      (0): Linear(in_features=904, out_features=128, bias=True)
    )
    (feature_linear): Linear(in_features=256, out_features=256, bias=True)
    (rgb_linear): Linear(in_features=128, out_features=3, bias=True)
  )
  (network_fine): NeRF(
    (pts_linears): ModuleList(
      (0): Linear(in_features=432, out_features=256, bias=True)
      (1-4): 4 x Linear(in_features=256, out_features=256, bias=True)
      (5): Linear(in_features=688, out_features=256, bias=True)
      (6-7): 2 x Linear(in_features=256, out_features=256, bias=True)
    )
    (alpha_linear): Linear(in_features=256, out_features=1, bias=True)
    (views_linears): ModuleList(
      (0): Linear(in_features=904, out_features=128, bias=True)
    )
    (feature_linear): Linear(in_features=256, out_features=256, bias=True)
    (rgb_linear): Linear(in_features=128, out_features=3, bias=True)
  )
  (embed_fn): CutoffEmbedder()
  (embedbones_fn): Embedder()
  (embeddirs_fn): CutoffEmbedder()
)
Found ckpts []
#parameters: 864260
done creating popt
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
init dataset
  0%|                                    | 1/150000 [00:02<103:00:51,  2.47s/it]
Traceback (most recent call last):
  File "run_nerf.py", line 625, in <module>
    train()
  File "run_nerf.py", line 541, in train
    loss_dict, stats = trainer.train_batch(batch, i, global_step)
  File "/home/ivan/Projects/human_nerf2/A-NeRF/core/trainer.py", line 254, in train_batch
    optim_stats = self.optimize(loss_dict['total_loss'], i, popt_detach or not args.opt_pose)
  File "/home/ivan/Projects/human_nerf2/A-NeRF/core/trainer.py", line 463, in optimize
    self._optim_step()
  File "/home/ivan/Projects/human_nerf2/A-NeRF/core/trainer.py", line 448, in _optim_step
    self.optimizer.step()
  File "/home/ivan/miniconda3/envs/anerf/lib/python3.8/site-packages/torch/optim/optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "/home/ivan/miniconda3/envs/anerf/lib/python3.8/site-packages/torch/optim/optimizer.py", line 33, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/home/ivan/miniconda3/envs/anerf/lib/python3.8/site-packages/torch/optim/adam.py", line 141, in step
    adam(
  File "/home/ivan/miniconda3/envs/anerf/lib/python3.8/site-packages/torch/optim/adam.py", line 281, in adam
    func(params,
  File "/home/ivan/miniconda3/envs/anerf/lib/python3.8/site-packages/torch/optim/adam.py", line 509, in _multi_tensor_adam
    torch._foreach_addcdiv_(params_, device_exp_avgs, denom, step_size)
TypeError: _foreach_addcdiv_() received an invalid combination of arguments - got (list, list, tuple, list), but expected one of:
 * (tuple of Tensors self, tuple of Tensors tensor1, tuple of Tensors tensor2, tuple of Scalars scalars)
      didn't match because some of the arguments have invalid types: (list of [Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter, Parameter], list of [Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tens
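
In case it helps with triage: the failure happens inside the multi-tensor ("foreach") Adam kernel, so one workaround I could try on my side is forcing the single-tensor code path with `foreach=False` when the optimizer is constructed (supported in recent PyTorch releases); the per-parameter loop calls `addcdiv_` on each tensor and never reaches `torch._foreach_addcdiv_`. Below is a minimal sketch, assuming the optimizer in core/trainer.py is a plain `torch.optim.Adam` (I have not verified that, and the learning rate here is just a placeholder):

```python
import torch

# Hypothetical sketch, not the repo's actual optimizer construction: force the
# single-tensor Adam implementation so optimizer.step() never calls
# torch._foreach_addcdiv_ (the call that raises the TypeError above).
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, foreach=False)

loss = model(torch.randn(2, 4)).sum()
loss.backward()
optimizer.step()  # takes the per-parameter code path instead of the foreach kernels
```

Alternatively, pinning torch to an older 1.x release might avoid the mismatch entirely, though I haven't confirmed which version this repo targets.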
