I have been trying to integrate a TensorFlow Lite model into an Android app I am building in Java with Android Studio. The model loads fine, but whenever the app takes a picture and runs it through the model, I get an error saying the buffer is not large enough for the pixels.
Here is the Java:
```java
@androidx.camera.core.ExperimentalGetImage
private void takeAndAnalyzeImage() {
    if (imageCapture != null) {
        imageCapture.takePicture(cameraExecutor, new ImageCapture.OnImageCapturedCallback() {
            @Override
            public void onCaptureSuccess(ImageProxy image) {
                super.onCaptureSuccess(image);
                Log.d("CameraXApp", "Image capture success. Processing image...");
                Image mediaImage = image.getImage();
                if (mediaImage != null) {
                    Log.d("CameraXApp", "Captured image received. Dimensions: "
                            + mediaImage.getWidth() + "x" + mediaImage.getHeight());
                    try {
                        Log.d("CameraXApp", "Converting to Bitmap...");
                        Bitmap bitmap = toBitmap(mediaImage);
                        if (bitmap != null) { // Check if bitmap is not null
                            Log.d("CameraXApp", "Bitmap conversion successful. Resizing...");
                            // Resize the bitmap to fit the model input size (224x224)
                            Bitmap resizedBitmap = Bitmap.createScaledBitmap(bitmap, 224, 224, true);
                            TensorImage tensorImage = TensorImage.fromBitmap(resizedBitmap);
                            runInference(tensorImage);
                        } else {
                            Log.e("CameraXApp", "Bitmap is null. Cannot resize image.");
                        }
                    } catch (Exception e) {
                        Log.e("CameraXApp", "Error converting image to Bitmap: " + e.getMessage());
                    }
                }
                image.close();
            }

            @Override
            public void onError(ImageCaptureException exception) {
                Log.e("CameraXApp", "Photo capture failed: " + exception.getMessage());
            }
        });
    } else {
        Log.e("CameraXApp", "imageCapture is null. Cannot capture image.");
    }
}

private Bitmap toBitmap(Image image) {
    try {
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        int width = image.getWidth();
        int height = image.getHeight();
        int pixelStride = image.getPlanes()[0].getPixelStride();
        int rowStride = image.getPlanes()[0].getRowStride();
        int pixelFormat = image.getFormat();

        // Calculate the expected size based on image dimensions and pixel format
        int expectedSize;
        switch (pixelFormat) {
            case ImageFormat.YUV_420_888:
                expectedSize = width * height * 3 / 2; // YUV_420_888 format
                break;
            case ImageFormat.JPEG:
                expectedSize = buffer.capacity(); // JPEG format
                break;
            default:
                expectedSize = width * height * 4; // Default to ARGB_8888 format
                break;
        }
        Log.d("CameraXApp", "Buffer size: " + buffer.capacity() + ", Expected size: " + expectedSize);

        // Ensure the buffer size is large enough for the pixels
        if (buffer.capacity() < expectedSize) {
            Log.e("CameraXApp", "Buffer not large enough for pixels");
            return null;
        }

        // Check if pixelStride is zero to avoid divide by zero error
        int adjustedWidth = pixelStride != 0 ? width + (rowStride / pixelStride - 1) : width;
        Bitmap bitmap = Bitmap.createBitmap(adjustedWidth, height, Bitmap.Config.ARGB_8888);
        bitmap.copyPixelsFromBuffer(buffer);
        return bitmap;
    } catch (Exception e) {
        Log.e("CameraXApp", "Error converting image to Bitmap: " + e.getMessage());
        return null;
    }
}

private void runInference(TensorImage tensorImage) {
    try {
        FinalModel.Outputs outputs = model.process(tensorImage.getTensorBuffer());
        TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
        // Process the output as needed
        // Add a log message to indicate successful analysis
        Log.d("ImageAnalysis", "Image analysis completed successfully");
    } catch (Exception e) {
        Log.e("Error", "Error running inference: " + e.getMessage());
    }
}
```
I have tried converting the photos to JPEG after capture, changing how the buffer size is calculated, changing how pixelStride and rowStride are used, rewinding the buffer and allocating a larger one, and changing how the row padding is calculated and applied. Every attempt leads to the same "buffer not large enough for pixels" error, and processing stops as soon as it tries to convert the image to a bitmap.
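For reference, the size check in `toBitmap` above boils down to plain arithmetic, and a compressed JPEG buffer is normally far smaller than the raw ARGB_8888 size that `copyPixelsFromBuffer` expects. The sketch below just reproduces that arithmetic outside Android; the 4000x3000 dimensions and 2 MB JPEG capacity are made-up illustrative numbers, not values from my app:

```java
public class BufferSizeCheck {
    public static void main(String[] args) {
        int width = 4000, height = 3000;           // illustrative capture dimensions
        int argb8888Size = width * height * 4;     // bytes copyPixelsFromBuffer needs for ARGB_8888
        int yuv420Size = width * height * 3 / 2;   // bytes for a tightly packed YUV_420_888 frame
        int jpegCapacity = 2 * 1024 * 1024;        // a typical compressed JPEG buffer (made-up)

        System.out.println("ARGB_8888 expects:  " + argb8888Size + " bytes");
        System.out.println("YUV_420_888 expects: " + yuv420Size + " bytes");
        System.out.println("JPEG buffer holds:   " + jpegCapacity + " bytes");

        // The guard in toBitmap() trips whenever capacity < expected size
        System.out.println("Check fails: " + (jpegCapacity < argb8888Size));
    }
}
```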