How to optimise a CNN model for traffic congestion prediction?
I am trying to build a CNN model using Keras for traffic congestion prediction. Currently I have around 1600 images - 700 for the positive class and 900 for the negative class. They are high-resolution images (1920x1080). I have tried quite a number of approaches, including:
- Increasing the number of Conv2D layers - started with just 2 Conv2D layers with 32 filters and eventually went up to 9 layers (with varying filter counts) with MaxPooling in between.
- Adding MaxPooling to reduce the feature map size
- Adding data augmentation to the training data
- Trying different kernel sizes, assuming that larger kernels would detect large objects like cars in the images
After all of this, I am unable to push accuracy beyond 55%. Yet if I build a 2-layer neural network on the same data - a first layer with 128 fully connected neurons plus an output layer - it reaches accuracy on the order of 99%.
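For reference, the 2-layer baseline I'm comparing against looks roughly like this (the 150x150 input size here is just a placeholder, not my actual IMG_WIDTH/IMG_HEIGHT):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Rough sketch of the 2-layer baseline; 150x150 is a placeholder size
baseline = Sequential([
    Input(shape=(150, 150, 3)),
    Flatten(),
    Dense(128, activation='relu'),   # 1st layer: 128 fully connected neurons
    Dense(1, activation='sigmoid'),  # output layer
])
baseline.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
```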
Any suggestions on improving the accuracy of the CNN model?
Edit: below is the code I am using:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255,
                                   horizontal_flip=True,
                                   rotation_range=40,
                                   width_shift_range=0.3,
                                   height_shift_range=0.3,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 120 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
    train_folder,  # This is the source directory for training images
    classes=['no_traffic', 'traffic'],
    target_size=(IMG_WIDTH, IMG_HEIGHT),  # All images will be resized to IMG_WIDTHxIMG_HEIGHT
    batch_size=train_batch_size,
    class_mode='binary')  # Use binary labels
# Flow validation images in batches of 19 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
    validate_folder,  # This is the source directory for validation images
    classes=['no_traffic', 'traffic'],
    target_size=(IMG_WIDTH, IMG_HEIGHT),  # All images will be resized to IMG_WIDTHxIMG_HEIGHT
    batch_size=validate_batch_size,
    class_mode='binary',  # Use binary labels
    shuffle=False)
## Model
model = Sequential()
model.add(Conv2D(filters=64, strides=1, kernel_size=3, padding="same", input_shape=(IMG_HEIGHT,IMG_WIDTH, 3)))
model.add(Activation('relu'))
model.add(Conv2D(filters=64, strides=1, kernel_size=3, padding="same"))
model.add(Activation('relu'))
model.add(Conv2D(filters=64, strides=1, kernel_size=3, padding="same"))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2), padding="valid"))
model.add(Conv2D(filters=128, strides=1, kernel_size=3, padding="same"))
model.add(Activation('relu'))
model.add(Conv2D(filters=128, strides=1, kernel_size=3, padding="same"))
model.add(Activation('relu'))
model.add(Conv2D(filters=128, strides=1, kernel_size=3, padding="same"))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2), padding="valid"))
model.add(Conv2D(filters=128, strides=(2, 2), kernel_size=3, padding="valid"))
model.add(Activation('relu'))
model.add(Conv2D(filters=128, strides=(2, 2), kernel_size=3, padding="valid"))
model.add(Activation('relu'))
model.add(Conv2D(filters=128, strides=(2, 2), kernel_size=3, padding="valid"))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2), padding="valid"))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
## Model compiler
model.compile(optimizer=Adam(),
              loss='binary_crossentropy',
              metrics=['accuracy'])
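For completeness, the training step itself is just the standard fit call on the two generators. A minimal self-contained sketch of the same pattern on synthetic data (the tiny 64x64 inputs and stripped-down model are placeholders, not my real setup):

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Stripped-down stand-in for the real model; all sizes are placeholders
toy = Sequential([
    Input(shape=(64, 64, 3)),
    Conv2D(8, kernel_size=3, padding='same', activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(1, activation='sigmoid'),
])
toy.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Synthetic stand-ins for the image generators
x = np.random.rand(16, 64, 64, 3).astype('float32')
y = np.random.randint(0, 2, size=(16,)).astype('float32')

# In the real script this is:
# model.fit(train_generator, validation_data=validation_generator, epochs=...)
history = toy.fit(x, y, epochs=1, batch_size=8, verbose=0)
```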
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow