TensorFlow object detection AI

I am using the TensorFlow model zoo for object detection. SSD MobileNet V2 FPNLite 320x320 is the model I am training. Everything goes well and my model starts training, but I receive some messages I don't understand. I don't know why they are showing up.

I think half of my model is training on the GPU and then it is switching to the CPU, but I am not sure.

Here are the messages that show up:

2022-01-30 19:30:21.237816: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9971 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0130 19:30:21.241063 140126199379776 mirrored_strategy.py:376] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)

After this, it shows the following messages:

INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0130 19:30:43.470607 140126199379776 cross_device_ops.py:619] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).

Here is my GPU information:

[Image: GPU information]

Could someone please help me with this? I have been struggling for weeks.



Solution 1:[1]

The messages above are informational logs that appear when a distribution strategy is used in the code; they are not errors. You can suppress them by setting the log level at the very start of your program, before importing TensorFlow:

import os
# 0 = all logs, 1 = filter out INFO, 2 = filter out INFO and WARNING, 3 = filter out all
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow as tf

You can also pin operations to a specific GPU using the tf.device() API to avoid placement on the CPU: https://www.tensorflow.org/api_docs/python/tf/device
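As a minimal sketch of device pinning (assuming TensorFlow 2.x is installed; the tensor shapes here are just placeholders for illustration):

```python
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # must be set before importing tensorflow
import tensorflow as tf

# See which accelerators TensorFlow can use; fall back to CPU if no GPU is visible.
gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"

# Ops created inside this context are placed on the chosen device.
with tf.device(device):
    a = tf.random.uniform((2, 3))
    b = tf.random.uniform((3, 2))
    c = tf.matmul(a, b)

print(c.device)  # reports which device actually ran the matmul
```

Printing `c.device` is a quick way to confirm where an op really executed, regardless of what the placement logs say.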

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Tensorflow Support