Has anyone faced this issue when restoring a checkpoint?

I am trying to build a RetinaNet to perform custom object detection. Due to computational resource limitations, I have decided to employ transfer learning, leveraging the TensorFlow Object Detection API (located here) to restore the weights of a ResNet50 backbone trained on COCO. Is there a better way of doing this? When I try to train my model, I get extremely high losses:

batch 0 of 59, loss=24863.758
batch 10 of 59, loss=24897.668
batch 20 of 59, loss=24746.227
batch 30 of 59, loss=24508.37
batch 40 of 59, loss=24240.383
batch 50 of 59, loss=23961.887

On further inspection of my train log, I found the following warnings:

WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restore for details about the status object returned by the restore function.

WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.kernel

WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.bias
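
As far as I understand, these warnings refer to the status object returned by `restore()`. A minimal toy sketch (hypothetical scratch variables, not my actual detection model) that reproduces the same kind of warning and shows where `status.expect_partial()` would go:

    import tensorflow as tf

    # Save a checkpoint containing two variables, then restore into an object
    # graph that only declares one of them.
    src = tf.train.Checkpoint(a=tf.Variable(1.0), b=tf.Variable(2.0))
    save_path = src.save("/tmp/demo_ckpt")

    dst = tf.train.Checkpoint(a=tf.Variable(0.0))  # `b` is deliberately missing
    status = dst.restore(save_path)

    # Without this call, TensorFlow logs the same kind of "unrestored values"
    # warning as above once the status/checkpoint objects are garbage-collected.
    status.expect_partial()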

So clearly, the checkpoint restoration plays a large part in the high losses (especially since I am relying on transfer learning with pre-trained ResNet50 weights).
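
To see which variable names the checkpoint file actually contains (and compare them with the names in the warnings), the checkpoint can be inspected directly. A minimal sketch, with the path hard-coded to mirror the `ckpt-0` prefix used in my code below:

    import tensorflow as tf

    # Hypothetical hard-coded path; in my setup this is the same ckpt-0 prefix
    # that is restored below (no .index / .data extension).
    ckpt_prefix = "checkpoints/test_data/checkpoint/ckpt-0"

    # Print every variable name (and shape) stored in the checkpoint, so names
    # such as ..._class_predictor_layers.0.kernel can be looked up.
    for name, shape in tf.train.list_variables(ckpt_prefix):
        print(name, shape)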

The code I used to restore the checkpoint:

        # Checkpoint for the base tower layers (the layers preceding both the class
        # & bounding box prediction heads) & the box prediction head (the layer
        # that predicts bounding boxes)
        tmp_box_predictor_checkpoint = tf.train.Checkpoint(
            _base_tower_layers_for_heads=self.detection_model._box_predictor._base_tower_layers_for_heads,
            _box_prediction_head=self.detection_model._box_predictor._box_prediction_head
        )

        # Model checkpoint to point to box prediction layer and feature extraction layer
        tmp_model_checkpoint = tf.train.Checkpoint(_box_predictor=tmp_box_predictor_checkpoint,
                                                   _feature_extractor=self.detection_model._feature_extractor)

        print("model checkpoint var:")
        print(vars(tmp_model_checkpoint))

        # Define a top-level checkpoint that exposes the temporary model checkpoint under `model`
        checkpoint = tf.train.Checkpoint(model=tmp_model_checkpoint)

        checkpoint_path = MODULE_ROOT / 'checkpoints' / "test_data" / "checkpoint" / "ckpt-0"

        print(f"checkpoint_path: {checkpoint_path}")

        # Restore / load the checkpoint from the checkpoint path.
        # Note: the path deliberately excludes the .index extension; including it
        # results in the model loss not improving (i.e. the pre-trained weights
        # are not restored properly).
        checkpoint.restore(save_path=f"{checkpoint_path}")
        print(f"checkpoints restored at {checkpoint_path}")

