CMAKE_PREFIX_PATH=$CONDA_PREFIX/lib/python3.9/site-packages/pybind11/share/cmake/pybind11 bash build_all_conda.sh
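The pybind11 path above is hard-coded for a Python 3.9 conda environment. If your environment differs, a (hedged) alternative is to let pybind11 report its own CMake directory instead of spelling out the site-packages path; this assumes pybind11 >= 2.6 is installed in the active environment:
CMAKE_PREFIX_PATH=$(python -c "import pybind11; print(pybind11.get_cmake_dir())") bash build_all_conda.sh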
After this procedure, a .so file will be created under mycpp/build. If it is not named mycpp.so, the call to mycpp.cluster_poses in estimater.py will fail.
The fix is simple: rename the .so file to mycpp.so. In my case the build produced the following file, and renaming it to mycpp.so resolved the issue:
mycpp.cpython-39-x86_64-linux-gnu.so
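For reference, the rename itself is a single command (assuming you run it from the repository root, where the build output sits under mycpp/build as described above):
cd mycpp/build
mv mycpp.cpython-39-x86_64-linux-gnu.so mycpp.so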
ali@ali:~$ docker run --gpus all -it -v /home/ali/Downloads/FoundationPose:/workspace foundationpose /bin/bash
(my) root@89b15d8c96c3:/# cd /workspace
(my) root@89b15d8c96c3:/workspace# python run_demo.py
Warp 1.0.2 initialized:
   CUDA Toolkit 11.5, Driver 11.4
   Devices:
     "cpu"    : "x86_64"
     "cuda:0" : "NVIDIA GeForce RTX 3070 Ti Laptop GPU" (8 GiB, sm_86, mempool enabled)
   Kernel cache:
     /root/.cache/warp/1.0.2
[__init__()] self.cfg:
lr: 0.0001
c_in: 6
zfar: 'Infinity'
debug: null
n_view: 1
run_id: 3wy8qqex
use_BN: true
exp_name: 2024-01-11-20-02-45
n_epochs: 62
save_dir: /home/bowenw/debug/2024-01-11-20-02-45/
use_mask: false
loss_type: pairwise_valid
optimizer: adam
batch_size: 64
crop_ratio: 1.1
enable_amp: true
use_normal: false
max_num_key: null
warmup_step: -1
input_resize:
max_step_val: 1000
vis_interval: 1000
weight_decay: 0
normalize_xyz: true
resume_run_id: null
clip_grad_norm: 'Infinity'
lr_epoch_decay: 500
render_backend: nvdiffrast
train_num_pair: 5
lr_decay_epochs:
n_epochs_warmup: 1
make_pair_online: false
gradient_max_norm: 'Infinity'
max_step_per_epoch: 10000
n_rendering_workers: 1
save_epoch_interval: 100
n_dataloader_workers: 100
split_objects_across_gpus: true
ckpt_dir: /workspace/learning/training/../../weights/2024-01-11-20-02-45/model_best.pth
[__init__()] self.h5_file:None
[__init__()] Using pretrained model from /workspace/learning/training/../../weights/2024-01-11-20-02-45/model_best.pth
[__init__()] init done
[__init__()] welcome
[__init__()] self.cfg:
lr: 0.0001
c_in: 6
zfar: .inf
debug: null
w_rot: 0.1
n_view: 1
run_id: null
use_BN: true
rot_rep: axis_angle
ckpt_dir: /workspace/learning/training/../../weights/2023-10-28-18-33-37/model_best.pth
exp_name: 2023-10-28-18-33-37
save_dir: /tmp/2023-10-28-18-33-37/
loss_type: l2
optimizer: adam
trans_rep: tracknet
batch_size: 64
crop_ratio: 1.2
use_normal: false
BN_momentum: 0.1
max_num_key: null
warmup_step: -1
input_resize:
max_step_val: 1000
normal_uint8: false
vis_interval: 1000
weight_decay: 0
n_max_objects: null
normalize_xyz: true
clip_grad_norm: 'Infinity'
rot_normalizer: 0.3490658503988659
trans_normalizer:
max_step_per_epoch: 25000
val_epoch_interval: 10
n_dataloader_workers: 60
enable_amp: true
use_mask: false
[__init__()] self.h5_file:
[__init__()] Using pretrained model from /workspace/learning/training/../../weights/2023-10-28-18-33-37/model_best.pth
[__init__()] init done
[reset_object()] self.diameter:0.19646325799497472, vox_size:0.009823162899748735
[reset_object()] self.pts:torch.Size([607, 3])
[reset_object()] reset done
[make_rotation_grid()] cam_in_obs:(42, 4, 4)
[make_rotation_grid()] rot_grid:(252, 4, 4)
Traceback (most recent call last):
  File "run_demo.py", line 41, in <module>
    est = FoundationPose(model_pts=mesh.vertices, model_normals=mesh.vertex_normals, mesh=mesh, scorer=scorer, refiner=refiner, debug_dir=debug_dir, debug=debug, glctx=glctx)
  File "/workspace/estimater.py", line 27, in __init__
    self.make_rotation_grid(min_n_views=40, inplane_step=60)
  File "/workspace/estimater.py", line 120, in make_rotation_grid
    rot_grid = mycpp.cluster_poses(30, 99999, rot_grid, self.symmetry_tfs.data.cpu().numpy())
AttributeError: 'NoneType' object has no attribute 'cluster_poses'
(my) root@89b15d8c96c3:/workspace#
I need to know where the problem is. Thank you.
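For anyone hitting the same trace: the AttributeError shows that mycpp is None at the point where cluster_poses is called, i.e. the compiled extension was never loaded. A quick sanity check from inside the container (assuming the extension is expected under /workspace/mycpp/build, matching the build notes at the top; the exact path may differ in your setup) is:
ls /workspace/mycpp/build
python -c "import sys; sys.path.append('/workspace/mycpp/build'); import mycpp; print(mycpp.__file__)"
If the import fails or the directory only contains mycpp.cpython-39-x86_64-linux-gnu.so, renaming that file to mycpp.so as described above should resolve the error.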