Cannot allocate memory symbolic tensor shape [T.Any(), 3, 224, 224] [Bug] #15123

Closed
qq1243196045 opened this issue Jun 20, 2023 · 2 comments
Labels: needs-triage, type: bug
qq1243196045 commented Jun 20, 2023

When I use tvmc, an error occurs:

wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v2-7.onnx
tvmc compile --target "llvm"   --output resnet50-v2-7-tvm.tar  resnet50-v2-7.onnx
WARNING:autotvm:One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
Traceback (most recent call last):
  File "/root/anaconda3/envs/tvm-build/bin/tvmc", line 33, in <module>
    sys.exit(load_entry_point('tvm==0.13.dev217+g2d2b72733', 'console_scripts', 'tvmc')())
  File "/projects/tvm/python/tvm/driver/tvmc/main.py", line 118, in main
    sys.exit(_main(sys.argv[1:]))
  File "/projects/tvm/python/tvm/driver/tvmc/main.py", line 106, in _main
    return args.func(args)
  File "/projects/tvm/python/tvm/driver/tvmc/compiler.py", line 217, in drive_compile
    **transform_args,
  File "/projects/tvm/python/tvm/driver/tvmc/compiler.py", line 421, in compile_model
    workspace_pools=workspace_pools,
  File "/projects/tvm/python/tvm/driver/tvmc/compiler.py", line 491, in build
    workspace_memory_pools=workspace_pools,
  File "/projects/tvm/python/tvm/relay/build_module.py", line 372, in build
    mod_name=mod_name,
  File "/projects/tvm/python/tvm/relay/build_module.py", line 169, in build
    mod_name,
  File "/projects/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  15: TVMFuncCall
  14: tvm::relay::backend::RelayBuildModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
  13: tvm::relay::backend::RelayBuildModule::Build(tvm::IRModule, tvm::runtime::Array<tvm::Target, void> const&, tvm::Target const&, tvm::relay::Executor const&, tvm::relay::Runtime const&, tvm::WorkspaceMemoryPools const&, tvm::ConstantMemoryPools const&, tvm::runtime::String)
  12: tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, tvm::runtime::String const&)
  11: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::relay::backend::GraphExecutorCodegenModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  10: tvm::relay::backend::GraphExecutorCodegen::Codegen(tvm::IRModule, tvm::relay::Function, tvm::runtime::String)
  9: tvm::relay::GraphPlanMemory(tvm::relay::Function const&)
  8: tvm::relay::StorageAllocator::Plan(tvm::relay::Function const&)
  7: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  6: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  5: tvm::relay::transform::DeviceAwareExprVisitor::VisitExpr_(tvm::relay::FunctionNode const*)
  4: tvm::relay::StorageAllocaBaseVisitor::DeviceAwareVisitExpr_(tvm::relay::FunctionNode const*)
  3: tvm::relay::StorageAllocaBaseVisitor::CreateToken(tvm::RelayExprNode const*, bool)
  2: tvm::relay::StorageAllocator::CreateTokenOnDevice(tvm::RelayExprNode const*, tvm::VirtualDevice const&, bool)
  1: tvm::relay::TokenAllocator1D::Alloc(tvm::relay::StorageToken*, long)
  0: tvm::relay::TokenAllocator1D::GetMemorySize(tvm::relay::StorageToken*)
  File "/projects/tvm/src/relay/backend/token_allocator.cc", line 41
TVMError:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (pval != nullptr) is false: Cannot allocate memory symbolic tensor shape [T.Any(), 3, 224, 224]

qq1243196045 added the needs-triage and type: bug labels on Jun 20, 2023
@sheepHavingPurpleLeaf

Relay doesn't support dynamic input shapes, and here the dynamic dimension is the batch size. I'm also new to TVM; after some searching I found that Relax can now handle dynamic shape inputs, but I haven't found a working example yet.
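
For anyone landing here, a minimal sketch of what the Relax route might look like, assuming the Relax ONNX frontend from TVM's unity branch. The import path tvm.relax.frontend.onnx.from_onnx and its shape_dict argument are assumptions here, not something confirmed in this thread; "data" is this ONNX model's input name:

# Hedged sketch: import the ONNX model with a symbolic batch dimension
# using the Relax ONNX frontend (assumed API from TVM's unity branch).
import onnx
from tvm.relax.frontend.onnx import from_onnx  # assumed import path

model = onnx.load("resnet50-v2-7.onnx")
# The string "N" marks the batch dimension as symbolic, so Relax keeps it
# dynamic instead of requiring a fixed size at compile time.
mod = from_onnx(model, shape_dict={"data": ["N", 3, 224, 224]})
print(mod)  # Relax IRModule whose input has shape (N, 3, 224, 224)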

masahi (Member) commented Jun 20, 2023

You don't need Relax if it is a simple dynamic shape like a dynamic batch dimension. TVMC apparently supports compiling and running with the Relay VM; that's what you need here.

#10722
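
For illustration, a minimal sketch of that approach through the Python API rather than the tvmc CLI (the exact tvmc flags from #10722 are not shown here). relay.Any() leaves the batch dimension symbolic, and the VM compiler, unlike the default graph executor whose memory planner produced the error above, handles dynamic shapes at runtime:

# Hedged sketch: compile the model with the Relay VM instead of the
# graph executor, keeping the batch dimension dynamic via relay.Any().
import onnx
import numpy as np
import tvm
from tvm import relay

model = onnx.load("resnet50-v2-7.onnx")
# "data" is this model's input name; relay.Any() marks the batch as dynamic.
shape_dict = {"data": (relay.Any(), 3, 224, 224)}
mod, params = relay.frontend.from_onnx(model, shape_dict)

with tvm.transform.PassContext(opt_level=3):
    vm_exec = relay.vm.compile(mod, target="llvm", params=params)

dev = tvm.cpu()
vm = tvm.runtime.vm.VirtualMachine(vm_exec, dev)
# Any batch size works now, e.g. 2:
out = vm.run(np.random.rand(2, 3, 224, 224).astype("float32"))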

@masahi masahi closed this as completed Jun 20, 2023