Issues: microsoft/onnxruntime
Pinned: #22747 · [DO NOT UNPIN] onnxruntime-gpu v1.10.0 PyPI Removal Notice · opened Nov 6, 2024 by sophies927
#23221 · No ONNX function found for <OpOverload(op='quantized_decomposed.dequantize_per_channel', overload='default')> · labels: feature request, .NET, quantization · opened Dec 29, 2024 by ruixupu
#23219 · Inconsistent outputs when running onnx and pytorch (stft and istft) · opened Dec 28, 2024 by etemesi254
#23216 · [Feature Request] Inquiry About ONNX Runtime Support for Dynamic Decoding Correction in Machine Translation on Android · labels: feature request, platform:mobile · opened Dec 28, 2024 by tigflanker
#23214 · onnxruntime-web dependency on document breaks chrome serviceworker · labels: platform:web · opened Dec 27, 2024 by abbasvalliani
#23213 · RUNTIME_EXCEPTION : Non-zero status code returned while running If node. · labels: model:transformer · opened Dec 27, 2024 by matchaaShaw
#23212 · ONNXRuntime produces inconsistent results for specific output v10_0 (flaky test behavior) · labels: model:transformer · opened Dec 27, 2024 by Thrsu
#23211 · Inconsistent results with different optimization settings · labels: model:transformer · opened Dec 27, 2024 by Thrsu
#23210 · ONNXRuntime Optimization Causes Output Discrepancy with Certain opt_level Settings · labels: model:transformer · opened Dec 27, 2024 by Thrsu
#23209 · ONNXRuntime Optimization Causes Output Discrepancy in Specific Model Structure (Output Y) · labels: model:transformer · opened Dec 27, 2024 by Thrsu
#23208 · Java library throws error when using CUDA: LoadLibrary failed with error 126 "" when trying to load "C:\Users\xx\AppData\Local\Temp\onnxruntime-java5278075328315693241\onnxruntime_providers_cuda.dll" · labels: api:Java, ep:CUDA · opened Dec 27, 2024 by sduqlsc
#23207 · ONNXRuntime Optimization Causes Output Discrepancy in BiasDropout Operator · labels: model:transformer · opened Dec 27, 2024 by Thrsu
#23206 · [Mobile] google say not support nnapi anymore · labels: platform:mobile · opened Dec 27, 2024 by WangHHY19931001
#23205 · custom op's SUPPORTED_TENSOR_TYPES does not include int4 and uint4 · labels: ep:VitisAI · opened Dec 27, 2024 by BoarQing
#23202 · [Inference Error] The onnx inference result is inconsistent with the numpy inference result · opened Dec 26, 2024 by songqiuyu
#23201 · Different results between GPU and CPU · labels: model:transformer · opened Dec 26, 2024 by matchaaShaw
#23200 · Inconsistent Results After ONNX Runtime Optimization · labels: model:transformer · opened Dec 26, 2024 by matchaaShaw
#23199 · Inconsistent Results After ONNX Runtime Optimization · labels: model:transformer · opened Dec 26, 2024 by matchaaShaw
#23196 · PyExc_Exception while import onnxruntime · labels: build · opened Dec 26, 2024 by ranjitsingha
#23194 · [Mobile] How to use GPU acceleration on Android · labels: api:Java, platform:mobile · opened Dec 25, 2024 by lizhiwen19900709
#23191 · [Build] TypeInferenceError when quantize an onnx model with custom operator · labels: build, quantization · opened Dec 25, 2024 by Liuhehe2019
#23189 · [Feature Request] Shape inference for GroupQueryAttention Op · labels: ep:WebNN, feature request · opened Dec 24, 2024 by peishenyan
#23183 · [Web] Upgrading from 1.20.1 to 1.21.* breaks Segment Anything models on WebGPU · labels: ep:WebGPU, model:transformer, .NET, platform:web · opened Dec 23, 2024 by xenova
#23181 · [Build] Fails on arm64: error: no member named 'linux_id' in 'cpuinfo_processor' · labels: build · opened Dec 23, 2024 by yurivict
#23180 · [Build] error: array index 7 is past the end of the array (that has type '__m256[4]') · labels: build · opened Dec 23, 2024 by yurivict

Label key:
- feature request: request for unsupported feature or enhancement
- .NET: Pull requests that update .net code
- quantization: issues related to quantization
- platform:mobile: issues related to ONNX Runtime mobile; typically submitted using template
- platform:web: issues related to ONNX Runtime web; typically submitted using template
- model:transformer: issues related to a transformer model: BERT, GPT2, Hugging Face, Longformer, T5, etc.
- api:Java: issues related to the Java API
- ep:CUDA: issues related to the CUDA execution provider
- ep:VitisAI: issues related to Vitis AI execution provider
- ep:WebNN: WebNN execution provider
- ep:WebGPU: ort-web webgpu provider
- build: build issues; typically submitted using template
ProTip! Exclude everything labeled bug with -label:bug.
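For reference, a possible full query in the issues search box (assuming GitHub's usual default filter of is:issue is:open) would be:

is:issue is:open -label:bug

This keeps every open issue in the list above while hiding anything carrying the bug label.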