
Converting Bitmap To Bytebuffer (float) In Tensorflow-lite Android

In the tensorflow-lite Android demo code for image classification, the images are first converted to ByteBuffer format for better performance. This conversion from bitmap to floating point …

Solution 1:

  1. I believe that 255/0 in your code is a copy/paste mistake, not real code.
  2. I wonder what the time cost of the pure Java solution is, especially when you weigh it against the time cost of inference. For me, with a slightly larger bitmap for Google's mobilenet_v1_1.0_224, the naïve float buffer preparation took less than 5% of inference time.
  3. I could quantize the tflite model (with the same tflite_convert utility that generated the .tflite file from .h5). There could actually be three quantization operations, but I only used two: --inference_input_type=QUANTIZED_UINT8 and --post_training_quantize.
    • The resulting model is about 25% size of the float32 one, which is an achievement on its own.
    • The resulting model runs about twice faster (at least on some devices).
    • And, the resulting model consumes uint8 inputs. This means that instead of imgData.putFloat(((val >> 16) & 0xFF) / 255.f) we write imgData.put((val >> 16) & 0xFF), and so on (see the sketch after this list).
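
For illustration, here is a minimal Kotlin sketch of the uint8 input path. The function name is mine, and it assumes the same intValues/inputSize setup as the Solution 2 code below, with a quantized model that takes raw 0-255 channel values:

import java.nio.ByteBuffer
import java.nio.ByteOrder

// Sketch only: fill a uint8 input buffer for a quantized model.
// intValues holds ARGB pixels obtained from Bitmap.getPixels().
fun convertToUint8Buffer(intValues: IntArray, inputSize: Int): ByteBuffer {
    val imgData = ByteBuffer.allocateDirect(1 * inputSize * inputSize * 3) // 1 byte per channel
    imgData.order(ByteOrder.nativeOrder())
    var pixel = 0
    for (i in 0 until inputSize) {
        for (j in 0 until inputSize) {
            val v = intValues[pixel++]
            imgData.put(((v shr 16) and 0xFF).toByte()) // R
            imgData.put(((v shr 8) and 0xFF).toByte())  // G
            imgData.put((v and 0xFF).toByte())          // B
        }
    }
    imgData.rewind()
    return imgData
}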

By the way, I don't think that your formulae are correct. To achieve best accuracy when float32 buffers are involved, we use

putFloat(byteval / 256f)

where byteval is int in range [0:255].
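
As a quick illustration of the difference between the two scalings (this snippet is mine, not part of the answer):

// The question's scaling vs. the scaling suggested above, for a channel value in [0, 255].
fun scaleBy255(byteval: Int): Float = byteval / 255f // maps 255 to exactly 1.0
fun scaleBy256(byteval: Int): Float = byteval / 256f // maps 255 to 0.99609375, so values stay below 1.0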

Solution 2:

As mentioned here, use the code below (from here) to convert a Bitmap to a ByteBuffer (float32):

private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer? {
    // 4 bytes per float * batch * height * width * channels
    val byteBuffer =
        ByteBuffer.allocateDirect(4 * BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE)
    byteBuffer.order(ByteOrder.nativeOrder())
    val intValues = IntArray(inputSize * inputSize)
    bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    var pixel = 0
    for (i in 0 until inputSize) {
        for (j in 0 until inputSize) {
            val `val` = intValues[pixel++]
            // Extract R, G, B from the ARGB int and normalize with IMAGE_MEAN / IMAGE_STD.
            byteBuffer.putFloat(((`val` shr 16 and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
            byteBuffer.putFloat(((`val` shr 8 and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
            byteBuffer.putFloat(((`val` and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
        }
    }
    return byteBuffer
}
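
A hedged usage sketch (the interpreter setup, numClasses and the classify name are my assumptions, not part of the answer): the returned buffer is typically passed straight to Interpreter.run with a float output array shaped to the model:

import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter

// Assumes `interpreter` was already created from a float32 .tflite model whose
// output is [1, numClasses] float scores, and that convertBitmapToByteBuffer()
// above is accessible from this scope.
fun classify(interpreter: Interpreter, bitmap: Bitmap, numClasses: Int): FloatArray {
    val input = convertBitmapToByteBuffer(bitmap) ?: error("buffer conversion failed")
    val output = Array(1) { FloatArray(numClasses) }
    interpreter.run(input, output)
    return output[0]
}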

Solution 3:

For floats with mean = 1 and std = 255.0, the function would be:

fun bitmapToBytebufferWithOpenCV(bitmap: Bitmap): ByteBuffer {
    val startTime = SystemClock.uptimeMillis()
    // 1 batch * 257 * 257 pixels * 3 channels * 4 bytes per float
    val imgData = ByteBuffer.allocateDirect(1 * 257 * 257 * 3 * 4)
    imgData.order(ByteOrder.nativeOrder())

    val bufmat = Mat()
    val newmat = Mat()
    Utils.bitmapToMat(bitmap, bufmat)
    Imgproc.cvtColor(bufmat, bufmat, Imgproc.COLOR_RGBA2RGB)
    val splitImage: List<Mat> = ArrayList(3)

    // Scale each channel to [0, 1] as 32-bit floats (CV_32F is CvType.CV_32F).
    Core.split(bufmat, splitImage)
    splitImage[0].convertTo(splitImage[0], CV_32F, 1.0 / 255.0)
    splitImage[1].convertTo(splitImage[1], CV_32F, 1.0 / 255.0)
    splitImage[2].convertTo(splitImage[2], CV_32F, 1.0 / 255.0)
    Core.merge(splitImage, newmat)

    // Copy the float pixels out of the Mat and into the direct ByteBuffer.
    val buf = FloatArray(257 * 257 * 3)
    newmat.get(0, 0, buf)

    for (i in buf.indices) {
        imgData.putFloat(buf[i])
    }
    imgData.rewind()
    val endTime = SystemClock.uptimeMillis()
    Log.v("Bitwise", (endTime - startTime).toString())
    return imgData
}

Unfortunately, this one is slightly slower (10 ms) than what Sunit wrote above with for loops and bitwise operations (8 ms).
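
As a possible simplification (an untested sketch of mine, not part of the original answer): OpenCV's Mat.convertTo applies the scale factor to every channel, so the split/merge step can likely be collapsed into a single call before copying the floats out:

import android.graphics.Bitmap
import org.opencv.android.Utils
import org.opencv.core.CvType
import org.opencv.core.Mat
import org.opencv.imgproc.Imgproc

// RGBA bitmap -> 3-channel float Mat scaled to [0, 1] with one convertTo call.
fun bitmapToFloatMat(bitmap: Bitmap): Mat {
    val rgba = Mat()
    Utils.bitmapToMat(bitmap, rgba)
    Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2RGB)
    val floatMat = Mat()
    rgba.convertTo(floatMat, CvType.CV_32FC3, 1.0 / 255.0) // scales all channels at once
    return floatMat
}

Whether this is actually faster than the split/merge version would need to be measured.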
