
[Bug] LLVM assertion failure: `Cannot create binary operator with two operands of differing type!' #253

Open
zhangxiao-stack opened this issue Jun 14, 2023 · 1 comment

@zhangxiao-stack
Contributor

zhangxiao-stack commented Jun 14, 2023

Environment

- Hardware: AMD GPU gfx906
- TVM version: mlc 6fd55bc [Unity][FIX] add init file to relax.backend.contrib (#15023) (#244)
- Operating system: CentOS
- LLVM version: 15.0.7

Steps to reproduce

Install relax:

1. `git clone https://github.com/mlc-ai/relax.git --recursive`
2. `cp config.cmake build/`, then enable `-DUSE_ROCM=ON -DUSE_ROCBLAS=ON -DUSE_LLVM=ON`
3. `cmake ..`
4. `make`
5. `python mlp.py`
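For reference, the build steps above can be sketched as a single shell session (the location of `config.cmake` inside the repo and the in-source `build/` directory are assumptions based on the usual TVM layout):

```shell
git clone https://github.com/mlc-ai/relax.git --recursive
cd relax && mkdir -p build
cp cmake/config.cmake build/
# enable ROCm, rocBLAS, and LLVM support in the build config
echo 'set(USE_ROCM ON)'    >> build/config.cmake
echo 'set(USE_ROCBLAS ON)' >> build/config.cmake
echo 'set(USE_LLVM ON)'    >> build/config.cmake
cd build && cmake .. && make -j"$(nproc)"
```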

```python
import numpy as np

import tvm
from tvm import relax, tir, topi
from tvm.contrib import rocblas


def build_mlp(data, weight):
    bb = relax.BlockBuilder()

    with bb.function("mlp", [data, weight]):
        gv0 = bb.emit_te(rocblas.matmul, data, weight, transa=False, transb=False)
        gv1 = bb.emit_te(topi.nn.relu, gv0)
        bb.emit_func_output(gv1)

    return bb.get()


if __name__ == "__main__":
    # symbolic dimensions
    n, m = tir.Var("n", "int64"), tir.Var("m", "int64")
    # create data and weight variables
    data = relax.Var("data", relax.TensorStructInfo([n, m], "float32"))
    weight = relax.Var("weight", relax.TensorStructInfo([m, n], "float32"))

    # construct an MLP model
    mod = build_mlp(data, weight)

    # build and create a VM executor
    target = tvm.target.Target("rocm", host="llvm")
    with target:
        mod = tvm.tir.transform.DefaultGPUSchedule()(mod)
    ex = relax.build(mod, target)
    vm = relax.VirtualMachine(ex, tvm.rocm())

    # run the MLP model on the Relax VM
    data = tvm.nd.array(np.random.rand(16, 32).astype(np.float32), tvm.rocm())
    weight = tvm.nd.array(np.random.rand(32, 16).astype(np.float32), tvm.rocm())
    res = vm["mlp"](data, weight)
    print(res)
```

Error message

```
llvm-project-llvmorg-15.0.7/llvm/lib/IR/Instructions.cpp:2785: static llvm::BinaryOperator* llvm::BinaryOperator::Create(llvm::Instruction::BinaryOps, llvm::Value*, llvm::Value*, const llvm::Twine&, llvm::Instruction*): Assertion `S1->getType() == S2->getType() && "Cannot create binary operator with two operands of differing type!"' failed.
Aborted (core dumped)
```

Backtrace from `gdb python --core=core_python_150871`:

```
#1  0x00007f599a690a78 in abort () from /lib64/libc.so.6
#2  0x00007f599a6881a6 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007f599a688252 in __assert_fail () from /lib64/libc.so.6
#4  0x00007f58edf5db33 in llvm::BinaryOperator::Create(llvm::Instruction::BinaryOps, llvm::Value*, llvm::Value*, llvm::Twine const&, llvm::Instruction*) [clone .part.680] () from /usr/local/bin/libtvm.so
#5  0x00007f58f31eef72 in llvm::BinaryOperator::Create(llvm::Instruction::BinaryOps, llvm::Value*, llvm::Value*, llvm::Twine const&, llvm::Instruction*) [clone .localalias.60] () from /usr/local/bin/libtvm.so
#6  0x00007f58f00679a2 in llvm::IRBuilderBase::CreateMul(llvm::Value*, llvm::Value*, llvm::Twine const&, bool, bool) () from /usr/local/bin/libtvm.so
#7  0x00007f58f0053e3c in tvm::codegen::CodeGenLLVM::CreateMul(tvm::runtime::DataType, llvm::Value*, llvm::Value*) () from /usr/local/bin/libtvm.so
#8  0x00007f58f0026d4d in tvm::tir::ExprFunctor<llvm::Value* (tvm::PrimExpr const&)>::VisitExpr(tvm::PrimExpr const&) () from /usr/local/bin/libtvm.so
#9  0x00007f58f0054814 in tvm::codegen::CodeGenLLVM::VisitExpr_(tvm::tir::AddNode const*) () from /usr/local/bin/libtvm.so
#10 0x00007f58f0026d4d in tvm::tir::ExprFunctor<llvm::Value* (tvm::PrimExpr const&)>::VisitExpr(tvm::PrimExpr const&) () from /usr/local/bin/libtvm.so
#11 0x00007f58f0054824 in tvm::codegen::CodeGenLLVM::VisitExpr_(tvm::tir::AddNode const*) () from /usr/local/bin/libtvm.so
#12 0x00007f58f0026d4d in tvm::tir::ExprFunctor<llvm::Value* (tvm::PrimExpr const&)>::VisitExpr(tvm::PrimExpr const&) () from /usr/local/bin/libtvm.so
#13 0x00007f58f0053c55 in tvm::codegen::CodeGenLLVM::VisitExpr_(tvm::tir::LTNode const*) () from /usr/local/bin/libtvm.so
#14 0x00007f58f0026d4d in tvm::tir::ExprFunctor<llvm::Value* (tvm::PrimExpr const&)>::VisitExpr(tvm::PrimExpr const&) () from /usr/local/bin/libtvm.so
#15 0x00007f58f005279b in tvm::codegen::CodeGenLLVM::VisitStmt_(tvm::tir::IfThenElseNode const*) () from /usr/local/bin/libtvm.so
#16 0x00007f58ee2278ac in tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&) () from /usr/local/bin/libtvm.so
#17 0x00007f58f0062ebe in tvm::codegen::CodeGenLLVM::CreateSerialFor(llvm::Value*, llvm::Value*, llvm::Value*, tvm::tir::Var const&, tvm::tir::Stmt const&) () from /usr/local/bin/libtvm.so
#18 0x00007f58f006366b in tvm::codegen::CodeGenLLVM::VisitStmt_(tvm::tir::ForNode const*) () from /usr/local/bin/libtvm.so
#19 0x00007f58ee2278ac in tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&) () from /usr/local/bin/libtvm.so
#20 0x00007f58f0064ab7 in tvm::codegen::CodeGenLLVM::VisitStmt_(tvm::tir::AttrStmtNode const*) () from /usr/local/bin/libtvm.so
#21 0x00007f58ee2278ac in tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&) () from /usr/local/bin/libtvm.so
#22 0x00007f58f0064ab7 in tvm::codegen::CodeGenLLVM::VisitStmt_(tvm::tir::AttrStmtNode const*) () from /usr/local/bin/libtvm.so
#23 0x00007f58ee2278ac in tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&) () from /usr/local/bin/libtvm.so
#24 0x00007f58f006626b in tvm::codegen::CodeGenLLVM::AddFunctionInternal(tvm::GlobalVar const&, tvm::tir::PrimFunc const&, bool) () from /usr/local/bin/libtvm.so
#25 0x00007f58f002003c in tvm::codegen::CodeGenAMDGPU::AddFunction(tvm::GlobalVar const&, tvm::tir::PrimFunc const&) () from /usr/local/bin/libtvm.so
#26 0x00007f58f0028f83 in _ZN3tvm7codegen11CodeGenLLVM19AddFunctionsOrderedINS_7runtime3MapINS_9GlobalVarENS_8BaseFuncEvvE8iteratorEZNS1_19AddFunctionsOrderedIS8_EEvT_SA_EUlSA_E_EEvSA_SA_T0_ () from /usr/local/bin/libtvm.so
#27 0x00007f58f001d798 in tvm::codegen::BuildAMDGPU(tvm::IRModule, tvm::Target) () from /usr/local/bin/libtvm.so
#28 0x00007f58ef5fe9b0 in tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<void tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::IRModule, tvm::Target)>::AssignTypedLambda<tvm::runtime::Module (*)(tvm::IRModule, tvm::Target)>(tvm::runtime::Module (*)(tvm::IRModule, tvm::Target), std::string)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) () from /usr/local/bin/libtvm.so
#29 0x00007f58ef5f6f6e in tvm::codegen::Build(tvm::IRModule, tvm::Target) () from /usr/local/bin/libtvm.so
#30 0x00007f58ee44b3e2 in tvm::TIRToRuntime(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target const&) () from /usr/local/bin/libtvm.so
#31 0x00007f58ee452ce4 in tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<void tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)>::AssignTypedLambda<tvm::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)#6}>(tvm::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule,
```
@yzh119
Member

yzh119 commented Jun 16, 2023

The ROCm backend hasn't been tested for MLC-LLM. We encourage using Vulkan on AMD GPUs for now.
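Following that suggestion, switching the repro script to the Vulkan backend would look roughly like the diff below. This is only a sketch: it assumes a Vulkan-capable driver is installed and that `tvm.vulkan()` device objects are usable in this build; whether the rocBLAS `emit_te` call also needs replacing (rocBLAS is a ROCm library) is untested here.

```diff
-    target = tvm.target.Target("rocm", host="llvm")
+    target = tvm.target.Target("vulkan", host="llvm")
...
-    vm = relax.VirtualMachine(ex, tvm.rocm())
+    vm = relax.VirtualMachine(ex, tvm.vulkan())
...
-    data = tvm.nd.array((np.random.rand(16, 32).astype(np.float32)),tvm.rocm())
-    weight = tvm.nd.array((np.random.rand(32, 16).astype(np.float32)),tvm.rocm())
+    data = tvm.nd.array(np.random.rand(16, 32).astype(np.float32), tvm.vulkan())
+    weight = tvm.nd.array(np.random.rand(32, 16).astype(np.float32), tvm.vulkan())
```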
