This repo contains the Touch-LLM code for UniTouch. Our code is built on top of the ImageBind and LLaMA-Adapter codebases.
- Download the pretrained touch encoder (`last_new.ckpt`) from the HuggingFace model hub and put it in the `./UniTouch` folder, at the same level as `touch_qa.py`.
- Download the `ckpts` folder from the HuggingFace model hub and put it in the `./UniTouch` folder, at the same level as `touch_qa.py`.
- Download the `llama_ori` folder from the HuggingFace model hub and put it in the `./UniTouch` folder, at the same level as `touch_qa.py`.
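If you prefer to script these downloads, here is a minimal sketch using the `huggingface_hub` client. The repo ID is a placeholder (this README does not specify it), so replace it with the actual HuggingFace repository before running.

```python
# Sketch only: fetch the UniTouch assets with huggingface_hub.
# UNITOUCH_REPO_ID is a placeholder -- replace it with the real repo ID.
from huggingface_hub import hf_hub_download, snapshot_download

UNITOUCH_REPO_ID = "<org>/<unitouch-weights>"  # placeholder

# Pretrained touch encoder checkpoint, saved next to touch_qa.py.
hf_hub_download(
    repo_id=UNITOUCH_REPO_ID,
    filename="last_new.ckpt",
    local_dir="./UniTouch",
)

# The ckpts/ and llama_ori/ folders, mirrored under ./UniTouch.
snapshot_download(
    repo_id=UNITOUCH_REPO_ID,
    allow_patterns=["ckpts/*", "llama_ori/*"],
    local_dir="./UniTouch",
)
```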
For Touch-LLM:

```bash
CUDA_VISIBLE_DEVICES=0 python touch_qa.py
```
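Optionally, you can sanity-check that the downloads landed in the expected layout before launching. This is an illustrative sketch, not part of the repository:

```python
# Verify that touch_qa.py and the downloaded assets sit under ./UniTouch.
from pathlib import Path

root = Path("./UniTouch")
expected = ["touch_qa.py", "last_new.ckpt", "ckpts", "llama_ori"]
missing = [name for name in expected if not (root / name).exists()]
if missing:
    raise FileNotFoundError(f"Missing under {root}: {missing}")
print("All required files and folders are in place.")
```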
To cite this work:

```bibtex
@inproceedings{yang2024binding,
  title={Binding touch to everything: Learning unified multimodal tactile representations},
  author={Yang, Fengyu and Feng, Chao and Chen, Ziyang and Park, Hyoungseob and Wang, Daniel and Dou, Yiming and Zeng, Ziyao and Chen, Xien and Gangopadhyay, Rit and Owens, Andrew and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={26340--26353},
  year={2024}
}
```