Are modern CNNs (convolutional neural networks) such as DetectNet rotation invariant?

As is known, NVIDIA DetectNet, a CNN (convolutional neural network) for object detection, is based on the approach of YOLO/DenseBox: https://devblogs.nvidia.com/parallelforall/deep-learning-object-detection-digits/

DetectNet is an extension of the popular GoogLeNet network. The extensions are similar to approaches taken in the Yolo and DenseBox papers.

And as shown here, DetectNet can detect objects (cars) at any rotation: https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/


Are modern CNNs (convolutional neural networks) such as DetectNet rotation invariant?

Can I train DetectNet on thousands of different images in which the objects all have one and the same rotation angle, and then detect objects at any rotation angle?


And what about the rotation invariance of YOLO, YOLO v2, and DenseBox, on which DetectNet is based?



Solution 1:[1]

No

In classification problems, CNNs are not rotation invariant. You need to include images with every possible rotation in your training set.
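In practice this is usually done with rotation augmentation at training time rather than by collecting rotated photos. A minimal sketch using torchvision (the framework choice and the dataset path are assumptions, not part of the answer):

import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

# Randomly rotate every training image so the network sees all orientations.
train_transform = T.Compose([
    T.RandomRotation(degrees=180),  # sample an angle uniformly from [-180, 180]
    T.Resize((224, 224)),
    T.ToTensor(),
])

# "data/train" is a placeholder path with one sub-folder per class.
train_set = ImageFolder("data/train", transform=train_transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)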

You can train a CNN to classify images into predefined categories (if you want to detect several objects in an image, as in your example, you need to scan every location of the image with your classifier, as in the sketch below).
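A sketch of that brute-force scan, to make the cost concrete (classifier is a placeholder for any trained crop classifier; the window size, stride, and threshold are arbitrary):

import numpy as np

def sliding_window_detect(image, classifier, win=64, stride=16, threshold=0.9):
    """Run a crop classifier at every window position; simple but very slow."""
    detections = []
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            crop = image[y:y + win, x:x + win]
            score = classifier(crop)  # probability that this crop is an object
            if score > threshold:
                detections.append((x, y, win, win, score))
    return detections

Every window position costs a full forward pass, and the number of positions multiplies with every scale you want to test.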

However, this is an object detection problem, not only a classification problem.

In object detection problems, you can use a sliding-window approach, but it is extremely inefficient. Instead of a simple CNN, other architectures are the state of the art, for example region-proposal detectors such as Faster R-CNN and single-shot detectors such as YOLO.

These architectures can detect the object anywhere in the image, but you must also include samples with different rotations in the training set (and the training set must be labelled with bounding boxes, which is very time consuming).
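If you generate those rotated samples synthetically, the box labels have to be transformed together with the pixels. A minimal numpy/PIL sketch (the [x_min, y_min, x_max, y_max] box format and the sign conventions are my assumptions):

import numpy as np
from PIL import Image

def rotate_image_and_box(img, box, angle_deg):
    """Rotate a PIL image about its centre and rotate the box label with it."""
    w, h = img.size
    cx, cy = w / 2.0, h / 2.0
    rotated = img.rotate(angle_deg)  # PIL rotates counter-clockwise about the centre

    # Rotate the four corners of [x_min, y_min, x_max, y_max] the same way.
    x0, y0, x1, y1 = box
    corners = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]], dtype=float)
    theta = np.deg2rad(angle_deg)
    # Image y grows downwards, so a visually counter-clockwise rotation is:
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    corners = (corners - [cx, cy]) @ rot.T + [cx, cy]

    # The new axis-aligned label is the extent of the rotated corners.
    new_box = [corners[:, 0].min(), corners[:, 1].min(),
               corners[:, 0].max(), corners[:, 1].max()]
    return rotated, new_box

# Usage ("car.jpg" and the box values are placeholders):
# img, box = rotate_image_and_box(Image.open("car.jpg"), [100, 80, 220, 160], 30)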

Solution 2:[2]

Adding on to Rob's answer: in general, a CNN itself is translation invariant, but not rotation or scale invariant. However, it is not compulsory to include all possible rotations in your training data. A max pooling layer introduces some rotation invariance.
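A tiny sketch of the kind of invariance pooling buys (PyTorch is an assumption here, the answer names no framework): a one-pixel shift of a feature is absorbed by a 2x2 max pool, and the same mechanism tolerates only very small rotations.

import torch
import torch.nn.functional as F

x = torch.zeros(1, 1, 8, 8)
x[0, 0, 2, 2] = 1.0                        # a single "feature" activation

shifted = torch.roll(x, shifts=1, dims=3)  # move the feature 1 pixel right

pooled = F.max_pool2d(x, kernel_size=2)
pooled_shifted = F.max_pool2d(shifted, kernel_size=2)

print(torch.equal(pooled, pooled_shifted))  # True: the 1-px shift is absorbed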

This image posted by Franck Dernoncourt here might be what you're looking for.

Secondly, regarding Kershaw's comment on Rob's answer, which says:

A CNN is invariant to small horizontal or vertical movements in your training data mainly because of max pooling.

The main reason CNNs are translation invariant is the convolution itself. The filter extracts a feature regardless of where it is in the image, since the filter moves across the entire image. It is when the image is rotated or scaled that the filter fails, because of the difference in the pixel representation of the feature.
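This is easy to check numerically. A small PyTorch sketch (again an assumption, not from the original answer) showing that shifting the input simply shifts the feature map:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
kernel = torch.randn(1, 1, 3, 3)
x = torch.randn(1, 1, 16, 16)

y = F.conv2d(x, kernel, padding=1)
x_shifted = torch.roll(x, shifts=3, dims=3)         # move the image 3 px right
y_shifted = F.conv2d(x_shifted, kernel, padding=1)

# Away from the border and wrap-around columns, the feature map just shifted too.
print(torch.allclose(y_shifted[..., 4:14], torch.roll(y, 3, dims=3)[..., 4:14]))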

Source: Aditya Kumar Praharaj's answer from this link.

Solution 3:[3]

Detectron2 recently added a Rotated Faster R-CNN network. To create such a model, you should annotate the vehicles with rotated bounding boxes, which have the form:

rbbox = [center_x, center_y, width, height, angle]

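For example, a minimal sketch of a single training record in Detectron2's dataset-dict format; BoxMode.XYWHA_ABS is Detectron2's rotated-box mode (angle in degrees, counter-clockwise), while the file name, sizes, and ids below are placeholder values:

from detectron2.structures import BoxMode

record = {
    "file_name": "car.jpg",   # placeholder image path
    "image_id": 0,
    "height": 480,
    "width": 640,
    "annotations": [
        {
            "bbox": [320.0, 240.0, 100.0, 40.0, 30.0],  # cx, cy, w, h, angle
            "bbox_mode": BoxMode.XYWHA_ABS,
            "category_id": 0,
        }
    ],
}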

Visit this link for more information.

Solution 4:[4]

A CNN is rotation invariant provided that all convolution kernels have the property K = T{K} (e.g., symmetrical kernels are used) and the first flatten layer is replaced with a merge convolution layer. I call it a transformation-identical CNN (TI-CNN): https://arxiv.org/abs/1806.03636 and https://arxiv.org/abs/1807.11156
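To illustrate the kernel property K = T{K}, here is a small numpy sketch (my own illustration, not code from the papers): averaging a kernel over its four 90-degree rotations makes it rotation-symmetric, and correlation with such a kernel then commutes with 90-degree rotations of the input.

import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
k = rng.standard_normal((3, 3))
K = sum(np.rot90(k, i) for i in range(4)) / 4.0   # symmetrize over rotations

assert np.allclose(K, np.rot90(K))                # K = T{K} for a 90-deg T

x = rng.standard_normal((8, 8))
out_then_rotate = np.rot90(correlate2d(x, K, mode="same"))
rotate_then_out = correlate2d(np.rot90(x), K, mode="same")
print(np.allclose(out_then_rotate, rotate_then_out))  # True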

If you wish to establish a rotation-identical CNN (for virtually arbitrarily small angles), I would suggest the geared rotation-identical CNN (GRI-CNN): https://arxiv.org/abs/1808.01280

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Rob
Solution 2 (not attributed)
Solution 3 Maryam Bahrami
Solution 4 ShihChung Benedict