TensorFlow 2.4.1: loadModel() throws an error

Python 3.6.9 and TensorFlow 2.4.1

I have a simple working inference function as follows:

def inference():

  # FIXME: Calling loadModel() generates a weird error.
  #infer = loadModel(frozen_model)

  # inlined the function body here instead:
  imported = tf.saved_model.load(model_dir)
  infer = imported.signatures["action"]
  print("Frozen model successfully loaded.")

  input = setFeatures()
  print("Features successfully fetched, FT size: ", np.shape(input), input.dtype)
  output = infer(tf.constant(input))
  output = output['output'].numpy()
  print(output)
  return output

When I instead call a custom loadModel(frozen_model) function with the same body:

def loadModel(model_dir): 
  imported = tf.saved_model.load(model_dir)
  infer = imported.signatures["action"]
  return infer

it throws this weird error:

Frozen model successfully loaded.
Features successfully fetched, FT size:  (1, 10) float32
2022-03-30 23:44:19.497505: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2022-03-30 23:44:19.499217: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2194810000 Hz
Traceback (most recent call last):
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1679, in _call_impl
    cancellation_manager)
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1756, in _call_with_structured_signature
    self._structured_signature_check_missing_args(args, kwargs)
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1777, in _structured_signature_check_missing_args
    ", ".join(sorted(missing_arguments))))
TypeError: signature_wrapper(*, FT_features) missing required arguments: FT_features

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/LOCAL//MLInterface.py", line 35, in MLFSM
    infer=inference()
  File "/home/LOCAL/MLInference.py", line 73, in inference
    output = infer(tf.constant(input))
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1669, in __call__
    return self._call_impl(args, kwargs)
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1683, in _call_impl
    cancellation_manager)
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1736, in _call_with_flat_signature
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 116, in _call_flat
    cancellation_manager)
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1919, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 560, in call
    ctx=ctx)
  File "/home/LOCAL/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
  (0) Failed precondition:  Error while reading resource variable _AnonymousVar0 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar0/N10tensorflow3VarE does not exist.
         [[{{node StatefulPartitionedCall/add_1/ReadVariableOp}}]]
  (1) Failed precondition:  Error while reading resource variable _AnonymousVar0 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar0/N10tensorflow3VarE does not exist.
         [[{{node StatefulPartitionedCall/add_1/ReadVariableOp}}]]
         [[FT_features/_5]]
0 successful operations.
0 derived errors ignored. [Op:__inference_signature_wrapper_39]

Function call stack:
signature_wrapper -> signature_wrapper
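My only guess (purely an assumption about object lifetimes, not a claim about TensorFlow's internals) is that when loadModel() returns just the signature, the imported object goes out of scope and the resources it owns can be collected, which would fit the "variable was uninitialized / does not exist" message. A toy pure-Python sketch of that kind of lifetime problem; Resource, Owner, and load_only_signature are made-up names for illustration:

```python
import gc
import weakref

class Resource:
    """Stands in for the model's variables."""

class Owner:
    """Stands in for the object returned by tf.saved_model.load()."""
    def __init__(self):
        self.resource = Resource()
        # Suppose the signature only weakly captures the resource,
        # while the Owner holds the real (strong) reference.
        self.get_resource = weakref.ref(self.resource)

def load_only_signature():
    owner = Owner()
    return owner.get_resource  # owner (and its resource) can die here

sig = load_only_signature()
gc.collect()
print(sig())  # -> None: the captured resource has been collected
```

In the inlined version, `imported` stays alive for the whole of inference(), so nothing it owns can be collected before the signature is called.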

What changes when loadModel() returns the infer object to inference()? Is this a bug, or am I missing something here? Thanks!
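Update: keeping a reference to the imported object itself, not just the signature, seems to avoid the error for me. A minimal self-contained sketch with a dummy model; TinyModel, its variable, the action signature, and the temporary directory are all made up for illustration:

```python
import tempfile
import tensorflow as tf

class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def action(self, x):
        return {"output": x * self.w}

# Build and save a throwaway model so the example is runnable end-to-end.
model_dir = tempfile.mkdtemp()
m = TinyModel()
tf.saved_model.save(m, model_dir, signatures={"action": m.action})

def loadModel(model_dir):
    # Return the parent object as well as the signature: if only the
    # signature escapes this function, the variables it reads may be
    # garbage-collected along with `imported`.
    imported = tf.saved_model.load(model_dir)
    return imported, imported.signatures["action"]

imported, infer = loadModel(model_dir)
out = infer(tf.constant([1.0, 2.0]))["output"].numpy()
print(out)  # -> [2. 4.]
```

Holding `imported` in the caller keeps the restored variables alive for as long as the signature is used.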



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
