Issues: NVIDIA/TensorRT-Incubator
Incorrect quantize output with fp16 scales [mlir-tensorrt]
#392 opened Nov 19, 2024 by parthchadha
Segfault in prompt_encoder [mlir-tensorrt]
#378 opened Nov 15, 2024 by yizhuoz004
[mlir-tensorrt] TensorRT Dialect ReifyRankedShapedTypeOpInterface implementation [enhancement, mlir-tensorrt]
#369 opened Nov 13, 2024 by christopherbate
nanoGPT FP8 compilation failed [mlir-tensorrt]
#353 opened Nov 8, 2024 by yizhuoz004
Support collections of tensors as inputs to compiled functions
#343 opened Nov 6, 2024 by pranavm-nvidia
Function Registry should do type checking on variadic arguments
#341 opened Nov 6, 2024 by pranavm-nvidia
Add support for multiple optimization profiles [tripy]
#315 opened Oct 28, 2024 by parthchadha
Fix or remove skip_num_stack_entries in convert_to_tensors
#310 opened Oct 28, 2024 by pranavm-nvidia
dtype constraints do not work for overloaded functions [good first issue]
#300 opened Oct 24, 2024 by pranavm-nvidia
tp.mean failure when dim is multi-dimensional with skipped dimensions
#297 opened Oct 22, 2024 by farazkh80
stablehlo-to-tensorrt conversion pass doesn't support stablehlo.reduce with multiple reduction dims [mlir-tensorrt]
#279 opened Oct 16, 2024 by farazkh80
stablehlo-to-tensorrt does not support converting stablehlo.dynamic_gather [mlir-tensorrt]
#264 opened Oct 10, 2024 by parthchadha
Figure out how to convey the behavior of ops when receiving different Tensor subclasses
#226 opened Sep 26, 2024 by pranavm-nvidia
Concatenating a shape tensor using the * op does not work with fill
#207 opened Sep 16, 2024 by pranavm-nvidia