I am creating an OCR system for Urdu. I bucket my data into n buckets based on image width: for example, all images whose width is less than 400px are resized to a width of 400px and stored in one bucket.
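For context, my bucketing step looks roughly like this (the bucket boundaries here are hypothetical examples; the real values depend on the width distribution of the dataset):

```python
import numpy as np

# Hypothetical bucket boundaries; real values depend on the dataset.
BUCKET_WIDTHS = [400, 600, 800, 1000]

def bucket_for(width):
    """Return the smallest bucket width that fits the image."""
    for b in BUCKET_WIDTHS:
        if width <= b:
            return b
    return BUCKET_WIDTHS[-1]  # clip anything wider to the largest bucket

def to_bucket(image):
    """Resize a (height, width) image to its bucket width (nearest-neighbour)."""
    h, w = image.shape
    target = bucket_for(w)
    # Map each target column back to a source column index.
    cols = np.clip((np.arange(target) * w / target).astype(int), 0, w - 1)
    return image[:, cols]
```

So every image in a bucket ends up with the same width and can be batched together.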
The previous version of the code is on GitHub, in the repository urdu-handwriting-recognition-using-deep-learning. I want to use the same architecture, but coded in Keras with TensorFlow 2, and I cannot figure out how that code handles the dynamic width. I want the code in TensorFlow 2 Keras, as the previous version is written in TF 1.x.
I do not want to pad all images to the maximum width: if the maximum width is 1000px and I pad an image with 600 columns of white pixels, it may hurt both the results and the performance. My label sequences are padded with 999 for empty positions.
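The label padding itself is straightforward (999 is just my chosen filler value, not a real character index):

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

labels = [[3, 17, 42], [5, 9]]  # example character-index sequences
padded = pad_sequences(labels, padding='post', value=999)
print(padded)
# [[  3  17  42]
#  [  5   9 999]]
```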
Below is how I am building the model. The issue is that if the image width is small, the number of time steps becomes so small that it is much shorter than the maximum ground-truth label length.
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Reshape,
                                     Dense, Bidirectional, LSTM, Dropout)
from tensorflow.keras.models import Model
def build_cnn(input_shape, num_of_characters):
    # Input layer
    inputs = Input(shape=input_shape)

    # Convolutional layers; strides are (height, width)
    x = Conv2D(filters=32, kernel_size=(5, 5), strides=(1, 1), padding='same', activation='relu')(inputs)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(x)
    x = Conv2D(filters=64, kernel_size=(5, 5), strides=(1, 2), padding='same', activation='relu')(x)
    x = MaxPooling2D(pool_size=(1, 2), strides=(1, 2), padding='same')(x)
    x = Conv2D(filters=128, kernel_size=(5, 5), strides=(1, 2), padding='same', activation='relu')(x)
    x = MaxPooling2D(pool_size=(1, 2), strides=(1, 2), padding='same')(x)
    x = Conv2D(filters=128, kernel_size=(5, 5), strides=(1, 2), padding='same', activation='relu')(x)
    x = MaxPooling2D(pool_size=(1, 2), strides=(1, 2), padding='same')(x)
    x = Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 2), padding='same', activation='relu')(x)
    x = MaxPooling2D(pool_size=(1, 2), strides=(1, 2), padding='same')(x)
    x = Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 2), padding='same', activation='relu')(x)
    x = MaxPooling2D(pool_size=(1, 2), strides=(1, 2), padding='same')(x)
    x = Conv2D(filters=512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu')(x)
    x = MaxPooling2D(pool_size=(1, 1), strides=(1, 1), padding='same')(x)
    print(f"Shape of x : {x.shape}")

    # Move the width axis in front of the height axis so that each time step
    # corresponds to a column of the image, then flatten (height, channels)
    # into the feature axis. Without this transpose the reshape mixes columns.
    x = tf.keras.layers.Permute((2, 1, 3))(x)  # (batch, width, height, channels)
    time_steps = x.shape[1]                    # new width
    features = x.shape[2] * x.shape[3]         # new height * channels (depth)

    # Reshape for the LSTM: (batch, time_steps, features)
    x = Reshape((time_steps, features))(x)
    return inputs, x
def build_rnn(inputs, x, num_of_characters):
    # Bidirectional LSTM layers
    x = Bidirectional(LSTM(units=512, return_sequences=True))(x)
    x = Bidirectional(LSTM(units=512, return_sequences=True))(x)
    # Dropout layer
    x = Dropout(0.2)(x)
    # Output layer
    outputs = Dense(num_of_characters, activation='softmax', name="Output_Layer")(x)
    # Define model
    model = Model(inputs=inputs, outputs=outputs)
    return model

def build_model(input_shape, num_of_characters):
    cnn_inputs, cnn_out = build_cnn(input_shape, num_of_characters)
    model = build_rnn(cnn_inputs, cnn_out, num_of_characters)
    return model
# Example usage
input_shape = (64, 5000, 1)  # (height, width, channels); batch size is implicit
num_of_characters = 200      # Number of output classes
model = build_model(input_shape=input_shape, num_of_characters=num_of_characters)
model.summary()
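To illustrate why the time steps collapse: with 'same' padding, every stride-2 conv or pool performs a ceiling division of the width, and the stack above contains eleven of them, so the width shrinks by roughly a factor of 2^11 = 2048:

```python
import math

# Stride along the width axis for each conv/pool in the stack above, in order.
WIDTH_STRIDES = [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1]

def time_steps(width):
    # 'same' padding: output size = ceil(input size / stride)
    for s in WIDTH_STRIDES:
        width = math.ceil(width / s)
    return width

print(time_steps(5000))  # -> 3
print(time_steps(400))   # -> 1
```

Even a 5000px-wide image yields only 3 time steps, far fewer than my maximum label length.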
What I want is that whatever the width of the image, my code adjusts to it, processes it, and then computes the CER and Levenshtein accuracy as well. In short, I want to train an OCR model on dynamic-width images; the height can stay the same. If there is an approach where both the height and the width can vary, kindly share that too.
I tried to reshape the output of the CNN to (batch size, time steps (image width after passing through the CNN), height * depth), where height and depth are measured after the CNN; the depth is always 512.
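For reference, here is a minimal sketch of what I think the dynamic-width version should look like (a much smaller CNN than mine, just to show the None width and the Reshape with -1; whether this is the right approach is exactly my question):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Permute, Reshape, Dense
from tensorflow.keras.models import Model

HEIGHT = 64  # fixed height; the width is left dynamic

def build_dynamic_width_model(num_of_characters):
    # Width is None, so any bucket width is accepted at run time.
    inputs = Input(shape=(HEIGHT, None, 1))
    x = Conv2D(32, (3, 3), padding='same', activation='relu')(inputs)
    x = MaxPooling2D((2, 2))(x)  # halves height and width
    x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)
    x = MaxPooling2D((2, 1))(x)  # halves height only, keeps width resolution
    # After the CNN: (batch, H', W', C). Move width to the time axis, then
    # flatten (H', C) into the feature axis; -1 lets Keras infer the
    # dynamic number of time steps.
    h, c = HEIGHT // 4, 64
    x = Permute((2, 1, 3))(x)        # (batch, W', H', C)
    x = Reshape((-1, h * c))(x)      # (batch, W', H'*C)
    outputs = Dense(num_of_characters + 1, activation='softmax')(x)  # +1 for a CTC blank
    return Model(inputs, outputs)

model = build_dynamic_width_model(200)
# Batches of different widths both pass through the same model:
a = model.predict(np.zeros((1, 64, 400, 1)), verbose=0)
b = model.predict(np.zeros((1, 64, 800, 1)), verbose=0)
print(a.shape, b.shape)  # the number of time steps scales with the input width
```

Is this the correct way to carry the dynamic width through the Reshape, and how does the linked repository achieve the same thing in TF1?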