I used the latest TVM to compile my model for the x86_64 and cuda_x86_64 targets (CUDA 10.2, sm_70).
After that I tried to use model_peeker to check the model folders. model_peeker works fine for the x86_64 folder but fails for cuda_x86_64.
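For reference, a rough sketch of how such a CUDA build is typically produced with TVM's Python API. This is an assumption, not the exact script from the report: `mod` and `params` stand for the already-imported Relay module and weights, and the output file names are illustrative of DLR's model-folder layout.

```python
# Sketch only: assumes the TVM Python API from around the time of this
# report, where relay.build returns (graph_json, lib, params).
import tvm
from tvm import relay

# `mod` (Relay IRModule) and `params` come from a frontend importer,
# e.g. relay.frontend.from_tensorflow(...); omitted here.

target = "cuda -arch=sm_70"   # device code for the GPU (CUDA 10.2, sm_70)
target_host = "llvm"          # host code for x86_64

with relay.build_config(opt_level=3):
    graph_json, lib, built_params = relay.build(
        mod, target=target, target_host=target_host, params=params)

# The three artifacts DLR loads from a model folder (names are illustrative).
lib.export_library("model.so")
with open("model.json", "w") as f:
    f.write(graph_json)
with open("model.params", "wb") as f:
    f.write(relay.save_param_dict(built_params))
```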
Error: Loader of _lib(module.loadbinary__lib) is not presented
```
[04:38:07] /home/ubuntu/neo-ai-dlr/demo/cpp/model_peeker.cc:92: TVMError: Check failed: f != nullptr: Loader of _lib(module.loadbinary__lib) is not presented.
Stack trace:
  File "/home/ubuntu/neo-ai-dlr/3rdparty/tvm/src/runtime/library_module.cc", line 131
  [bt] (0) ./model_peeker(+0xaf5e4) [0x56423d40e5e4]
  [bt] (1) ./model_peeker(+0xb0d48) [0x56423d40fd48]
  [bt] (2) ./model_peeker(+0x3442e) [0x56423d39342e]
  [bt] (3) ./model_peeker(+0xb5b4a) [0x56423d414b4a]
  [bt] (4) ./model_peeker(+0x2fefd) [0x56423d38eefd]
  [bt] (5) ./model_peeker(+0x16a8e) [0x56423d375a8e]
  [bt] (6) ./model_peeker(+0xbebd) [0x56423d36aebd]
  [bt] (7) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7ff9b58fcb97]
  [bt] (8) ./model_peeker(+0xd97a) [0x56423d36c97a]

terminate called after throwing an instance of 'std::runtime_error'
  what():  Could not load DLR Model
Aborted (core dumped)
```
I got the same error when I called CreateDLRModel directly:
```cpp
int device_type = 2;  // 2 = GPU (kDLGPU), 1 = CPU
std::string input_name = "input";
std::string model_dir = "./mobilenet_v1_1.0_224/cuda_102_sm_70_x86_64";

std::cout << "Loading model... " << std::endl;
DLRModelHandle model;
if (CreateDLRModel(&model, model_dir.c_str(), device_type, 0) != 0) {
  std::clog << DLRGetLastError() << std::endl;
  throw std::runtime_error("Could not load DLR Model");
}
```
Error:
```
Loading model... 
TVMError: Check failed: f != nullptr: Loader of _lib(module.loadbinary__lib) is not presented.
Stack trace:
  File "/home/ubuntu/neo-ai-dlr/3rdparty/tvm/src/runtime/library_module.cc", line 131
  [bt] (0) ./libdlr.so(tvm::runtime::ImportModuleBlob(char const*, std::vector<tvm::runtime::Module, std::allocator<tvm::runtime::Module> >*)+0x21f4) [0x7f331285fb34]
  [bt] (1) ./libdlr.so(tvm::runtime::CreateModuleFromLibrary(tvm::runtime::ObjectPtr<tvm::runtime::Library>)+0x198) [0x7f3312861298]
  [bt] (2) ./libdlr.so(+0x5b6ae) [0x7f33127e46ae]
  [bt] (3) ./libdlr.so(tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x52a) [0x7f331286609a]
  [bt] (4) ./libdlr.so(dlr::TVMModel::SetupTVMModule(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)+0x1e3d) [0x7f33127e017d]
  [bt] (5) ./libdlr.so(CreateDLRModel+0x1dee) [0x7f33127c527e]
  [bt] (6) ./run-style-trans-dlr(+0xef6) [0x560aa80e9ef6]
  [bt] (7) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f3311e18b97]
  [bt] (8) ./run-style-trans-dlr(+0x104a) [0x560aa80ea04a]

terminate called after throwing an instance of 'std::runtime_error'
  what():  Could not load DLR Model
Aborted (core dumped)
```
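To narrow down whether the missing loader (module.loadbinary__lib) is a DLR problem or a mismatch with the TVM runtime commit bundled in DLR, the same shared library can be opened with the TVM Python runtime directly. A minimal sketch, assuming the exported library in the cuda_x86_64 folder is named model.so:

```python
# Sketch only: loads the compiled library with the TVM runtime alone,
# bypassing DLR, to see whether the loader for the embedded device blob
# is registered in this runtime build.
import tvm

lib = tvm.runtime.load_module("cuda_102_sm_70_x86_64/model.so")  # assumed file name
print(lib.type_key)          # module kind reported by the runtime
print(lib.imported_modules)  # embedded device (e.g. CUDA) modules, if any
```

If the same "Loader ... is not presented" check fails here as well, the runtime is likely just older than the compiler that produced the module blob.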
Compiled models for x86_64 and cuda can be downloaded from here.
Because of this change, we need to update DLR's TVM commit id again.
I asked whether this change is backward compatible in apache/tvm#4532.
Yes, the change is backward compatible: the new runtime can be used with older models.
Thanks for confirming. Since this is a forward-compatibility issue, we'll keep track of the next neo-ai/tvm update until it's merged into DLR.