Did you forget to bind? #274

Open
njuhang opened this issue Jul 26, 2023 · 2 comments

Comments


njuhang commented Jul 26, 2023

Expected behavior

Generate a .so model file.

Actual behavior

The build reports the following error:
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/mlc-relax/relax/build/libtvm.so(tvm::ApplyPasses(tvm::IRModule, tvm::transform::Sequential)+0x42) [0x7f5212c4d512]
[bt] (7) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule) const+0x56) [0x7f5212d21dd6]
[bt] (6) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x347) [0x7f5212d21aa7]
[bt] (5) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x44a) [0x7f5212d2429a]
[bt] (4) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x347) [0x7f5212d21aa7]
[bt] (3) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1f4) [0x7f5212d22b84]
[bt] (2) /home/mlc-relax/relax/build/libtvm.so(+0x256a123) [0x7f5213820123]
[bt] (1) /home/mlc-relax/relax/build/libtvm.so(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x3d) [0x7f521298bfcd]
[bt] (0) /home/mlc-relax/relax/build/libtvm.so(tvm::runtime::Backtrace[abi:cxx11]+0x2c) [0x7f5214c849dc]
Did you forget to bind?
Variable A is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
Variable T_relu is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
File "/home/mlc-relax/relax/src/tir/analysis/verify_memory.cc", line 205
RuntimeError: Memory verification failed with the following errors:
# from tvm.script import tir as T

Environment

Ubuntu 20.04, mlc-relax (up to date), Python 3.8

Steps to reproduce

import tvm
from tvm import relax
from tvm.relax.testing import relay_translator

# Cross-compiler from the Android NDK, used when exporting the library.
cross_compiler = "/home/AndroidSdk/ndk/23.2.8568313/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android28-clang++"
# OpenCL device code for a Mali GPU; AArch64 LLVM host code.
target = tvm.target.Target("opencl --device=mali", host="llvm --mtriple=aarch64-linux-gnu")
# target = tvm.target.Target("llvm --num-cores=4 --mtriple=aarch64-linux-android --mattr=+neon")
model_out = "./libs/mobilenet.so"
relay_mod, relay_param, _, _ = get_network("mobilenet", 1)
relax_mod = relay_translator.from_relay(relay_mod["main"], target, relay_param)
ex = relax.build(relax_mod, target)
ex.export_library(model_out, cc=cross_compiler)

Inside get_network, testing.mobilenet.get_workload is used to load the model.
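For context: this error comes from the VerifyMemory pass, which fires when a TIR PrimFunc compiled for a device target still accesses its buffers from host code, i.e. its loops were never bound to GPU threads. Since relay_translator.from_relay emits unscheduled PrimFuncs, one possible workaround is to apply a default GPU schedule before building. A minimal sketch, assuming the tvm.tir.transform.DefaultGPUSchedule pass is available in your checkout (recent unity/relax branches have it; treat it as an assumption otherwise):

# Bind the loops of every unscheduled PrimFunc to GPU threads, then build.
with target:
    relax_mod = tvm.tir.transform.DefaultGPUSchedule()(relax_mod)
ex = relax.build(relax_mod, target)
ex.export_library(model_out, cc=cross_compiler)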

Triage

* backend:opencl

@raj-khare

Getting the same error.


luismiaresse commented Mar 17, 2024

Getting the same error with CUDA 12.1 and ROCm 5.7 in the Web Stable Diffusion notebook demo. Using the LLVM target and CPU device, I get a different one:

InternalError: Traceback (most recent call last):
  61: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)>::AssignTypedLambda<tvm::__mk_TVM23::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)#1}>(tvm::__mk_TVM23::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  60: tvm::TIRToRuntime(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target const&)
  59: tvm::SplitMixedModule(tvm::IRModule, tvm::Target const&, tvm::Target const&)
  58: tvm::ApplyPasses(tvm::IRModule, tvm::transform::Sequential)
  57: tvm::transform::Pass::operator()(tvm::IRModule) const
  56: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  55: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  54: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  53: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  52: _ZN3tvm7runtime13PackedFun
  51: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::tir::transform::LowerTVMBuiltin()::{lambda(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)#1}>(tvm::tir::transform::LowerTVMBuiltin()::{lambda(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const
  50: tvm::tir::BuiltinLower::VisitBodyAndRealizeAlloca(tvm::tir::Stmt)
  49: tvm::tir::BuiltinLower::GetMaxStack(tvm::tir::Stmt)
  48: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  47: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  46: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  45: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  44: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  43: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  42: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  41: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  40: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  39: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  38: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  37: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  36: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  35: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  34: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  33: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  32: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  31: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  30: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  29: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  28: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  27: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  26: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  25: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_7ru
  24: tvm::runtime::ObjectPtr<tvm::runtime::Object> tvm::runtime::Array<tvm::tir::Stmt, void>::MapHelper<tvm::tir::StmtMutator::Internal::Mutate(tvm::tir::StmtMutator*, tvm::runtime::Array<tvm::tir::Stmt, void> const&)::{lambda(tvm::tir::Stmt const&)#1}, tvm::tir::Stmt>(tvm::runtime::ObjectPtr<tvm::runtime::Object>, tvm::tir::StmtMutator::Internal::Mutate(tvm::tir::StmtMutator*, tvm::runtime::Array<tvm::tir::Stmt, void> const&)::{lambda(tvm::tir::Stmt const&)#1})
  23: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  22: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  21: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  20: tvm::tir::BuiltinLower::VisitStmt_(tvm::tir::AttrStmtNode const*)
  19: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  18: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  17: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  16: tvm::tir::BuiltinLower::VisitStmt_(tvm::tir::AllocateNode const*)
  15: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  14: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  13: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  12: tvm::tir::BuiltinLower::VisitStmt_(tvm::tir::AllocateNode const*)
  11: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  10: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  9: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  8: tvm::tir::BuiltinLower::VisitStmt_(tvm::tir::AllocateNode const*)
  7: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  6: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  5: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  4: tvm::tir::BuiltinLower::VisitStmt_(tvm::tir::AttrStmtNode const*)
  3: tvm::tir::BuiltinLower::VisitStmt(tvm::tir::Stmt const&)
  2: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  1: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_
  0: tvm::tir::BuiltinLower::VisitStmt_(tvm::tir::AllocateNode const*)
  File "/home/luismi/Documents/relax/src/tir/transforms/lower_tvm_builtin.cc", line 248
InternalError: Check failed: (device_type_) is false: Unknown device type in current IR 

Using the mlc-ai-nightly-cu121 and mlc-ai-nightly-rocm57 TVM packages, relax.build fails.

EDIT: The same behaviour occurs on Google Colab (Tesla T4 GPU with CUDA 12.1, mlc-ai-nightly-cu121, PyTorch nightly 2.4 CPU). This is probably a regression, or a failure to correctly bind some variables.
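For anyone hitting this check for the first time: "Did you forget to bind?" is raised whenever device code accesses a buffer outside a thread environment. A self-contained toy illustration (a hypothetical kernel, unrelated to this repro), using the classic TE schedule API and assuming it is still available in your TVM build:

import tvm
from tvm import te

# A trivial element-wise kernel.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# Without the two bind() calls below, building for a GPU target fails with
# "Did you forget to bind? Variable A is directly accessed by host memory ...".
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

f = tvm.build(s, [A, B], target="opencl")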
