We are trying to deploy vision transformer models (EfficientViT_B0, MobileViT_V2_175, and RepViT_M11) in our Flutter application using the tflite_flutter_plus and tflite_flutter_helper_plus dependencies. All three models were trained and quantized with TensorFlow 2.10. When creating the interpreter, we get the following error:
ERROR: Unable to create interpreter: Didn't find op for builtin opcode 'CONV_2D' version '6'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
Below is the block of code where this error is caught:
Future<void> _loadModel() async {
  try {
    _interpreter = await Interpreter.fromAsset(_modelFile, options: InterpreterOptions());
    print('Interpreter Created Successfully...');
    _inputShape = _interpreter.getInputTensor(0).shape;
    _outputShape = _interpreter.getOutputTensor(0).shape;
    _inputType = _interpreter.getInputTensor(0).type;
    _outputType = _interpreter.getOutputTensor(0).type;
    _outputBuffer = TensorBuffer.createFixedSize(_outputShape, _outputType);
    // _probabilityProcessor = TensorProcessorBuilder().add(NormalizeOp(0, 1)).build();
    _probabilityProcessor = TensorProcessorBuilder().build();
  } catch (e) {
    print('ERROR: Unable to create interpreter: ${e.toString()}');
  }
}
My pubspec.yaml file is as follows:
description: "A new Flutter project."
publish_to: 'none'
version: 1.0.0+1

environment:
  sdk: '>=3.3.4 <4.0.0'

dependencies:
  flutter:
    sdk: flutter
  cupertino_icons: ^1.0.2
  dynamic_color: ^1.7.0
  flutter_launcher_icons: ^0.11.0
  path_provider: ^2.0.15
  path: ^1.9.0
  http: ^1.2.1
  flutter_rating_bar: ^4.0.1
  transparent_image: ^2.0.1
  provider: ^6.1.2
  shared_preferences: ^2.2.3
  persistent_bottom_nav_bar: ^6.2.1
  percent_indicator: ^4.2.3
  google_fonts: ^6.2.1
  image: ^3.3.0
  image_picker: ^1.1.2
  tflite_flutter_plus: ^0.0.1
  tflite_flutter_helper_plus: ^0.0.2
  camera: ^0.10.6

dev_dependencies:
  flutter_test:
    sdk: flutter
  flutter_lints: ^4.0.0

flutter:
  uses-material-design: true
  assets:
    - assets/
    - assets/images/
    - assets/classes/
I ran the install.bat file made available here:
It is worth noting that this error only shows up with the above-mentioned ViT models. I’ve tried DenseNet121 (also trained on TensorFlow 2.10) and the application operates normally with no errors.
I’ve also attempted changing the TF_VERSION in the install.bat file itself, but that generates this error for all models (ViT-based or otherwise):
Invalid argument(s): Failed to load dynamic library 'libtensorflowlite_c.so': dlopen failed: "/data/app/~~L-_uTAUsnBW2lkc8a5p0KQ==/com.example.project-dpeoZ1PR50UeKS0PoXAgog==/base.apk!/lib/arm64-v8a/libtensorflowlite_c.so" has bad ELF magic: 4e6f7420
(The bad ELF magic bytes 4e 6f 74 20 are the ASCII characters "Not ", so the packaged libtensorflowlite_c.so appears to contain text rather than an actual binary.)
We’re not sure whether the issue with the interpreter comes from the models being used, the mobile application, or the development environment itself. We have also tried regenerating the models with TensorFlow 2.5.0 and faced similar, if not identical, problems.
The same question has been asked by my colleague on GitHub (https://github.com/odejinmi/tflite_flutter_plus/issues/4).
You can try tflite_flutter, which is much more up to date than the packages you are using, and a similar problem appears to have been fixed there. Just try to load the model first; if it works, you will have to write your image preprocessing steps yourself rather than relying on the helper package.
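Here is a minimal sketch of what that could look like, assuming a float model with a 224x224x3 input, the image ^3.x pixel API, and an asset at assets/model.tflite (the asset name, input size, 0..1 normalisation, and the 1000-class output are all assumptions; adjust them to your model):

import 'dart:io';

import 'package:image/image.dart' as img;
import 'package:tflite_flutter/tflite_flutter.dart';

Future<List<double>> classify(File photo) async {
  // Recent tflite_flutter versions take the full asset path.
  final interpreter = await Interpreter.fromAsset('assets/model.tflite');

  // Decode the photo and resize it to the model's input resolution.
  final raw = img.decodeImage(await photo.readAsBytes())!;
  final resized = img.copyResize(raw, width: 224, height: 224);

  // Build a [1, 224, 224, 3] float input scaled to 0..1 (image ^3.x API).
  final input = [
    List.generate(
      224,
      (y) => List.generate(224, (x) {
        final p = resized.getPixel(x, y);
        return [
          img.getRed(p) / 255.0,
          img.getGreen(p) / 255.0,
          img.getBlue(p) / 255.0,
        ];
      }),
    ),
  ];

  // Output buffer shaped like the model's output tensor (assumed [1, 1000]).
  final output = [List.filled(1000, 0.0)];
  interpreter.run(input, output);
  interpreter.close();
  return output[0];
}

If the interpreter is created without the CONV_2D version error, the problem was the outdated TFLite binary shipped with tflite_flutter_plus rather than your models.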
If that still does not work, I suggest trying onnxruntime, which lets you run .onnx models instead.