How to interpret results coming from onnxruntime InferenceSession?
So, I was trying to slim down my ML app so that the PyTorch package would no longer be needed in the Docker image. I saw that torch.hub.load uses onnxruntime under the hood, so I figured I could use onnxruntime directly instead.
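
Here's a minimal sketch of what I have in mind, just to make the question concrete (the model path "model.onnx" and the input shape are placeholders for whatever the hub model actually exports):

```python
import numpy as np
import onnxruntime as ort

# Placeholder path -- replace with the exported .onnx file.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's declared inputs/outputs to know what to feed
# and what comes back.
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)

# Dummy input matching an assumed NCHW float32 input.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

# run() returns a list of numpy arrays, one per output, in order.
results = session.run(None, {session.get_inputs()[0].name: dummy})
print(type(results[0]), results[0].shape)
```

The run() call gives me back a plain list of numpy arrays, and I'm not sure how those map onto what the PyTorch model used to return.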