
Error using model nucleiDAPI1-5 #19

@rockdeme

Hey! I'm running UnMicst via Docker in WSL2 and get an error when I run the nucleiDAPI1-5 model; the other models seem to work fine.

```
root@9905833d651a:/# python app/UnMicst.py data/file.tif --outputPath data/ --model nucleiDAPI1-5 --channel 42
```

I get the following error:
```

Instructions for updating:
non-resource variables are not supported in the long term
Sat Nov 19 12:07:21 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.01    Driver Version: 512.78       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A   43C    P0    19W /  N/A |      0MiB /  6144MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
automatically choosing GPU
Using GPU 0
loading data
app/UnMicst.py:99: UserWarning: `tf.layers.batch_normalization` is deprecated and will be removed in a future version. Please use `tf.keras.layers.BatchNormalization` instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.BatchNormalization` documentation).
  bn = tf.layers.batch_normalization(tf.nn.relu(c00 + shortcut), training=UNet2D.tfTraining)
/usr/local/lib/python3.8/dist-packages/keras/legacy_tf_layers/normalization.py:455: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  return layer.apply(inputs, training=training)
loading data
loading data
0.34
0.25
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1380, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1363, in _run_fn
    return self._call_tf_sessionrun(options, feed_dict, fetch_list,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1456, in _call_tf_sessionrun
    return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
  (0) NOT_FOUND: Key batch_normalization_1/beta not found in checkpoint
         [[{{node save/RestoreV2}}]]
         [[save/RestoreV2/_45]]
  (1) NOT_FOUND: Key batch_normalization_1/beta not found in checkpoint
         [[{{node save/RestoreV2}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 1404, in restore
    sess.run(self.saver_def.restore_op_name,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 970, in run
    result = self._run(None, fetches, feed_dict, options_ptr,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1193, in _run
    results = self._do_run(handle, final_targets, final_fetches,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1373, in _do_run
    return self._do_call(_run_fn, feeds, fetches, targets, options,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1399, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
  (0) NOT_FOUND: Key batch_normalization_1/beta not found in checkpoint
         [[node save/RestoreV2
 (defined at app/UnMicst.py:510)
]]
         [[save/RestoreV2/_45]]
  (1) NOT_FOUND: Key batch_normalization_1/beta not found in checkpoint
         [[node save/RestoreV2
 (defined at app/UnMicst.py:510)
]]
0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node save/RestoreV2:
In[0] save/Const:
In[1] save/RestoreV2/tensor_names:
In[2] save/RestoreV2/shape_and_slices:

Operation defined at: (most recent call last)
>>>   File "app/UnMicst.py", line 596, in <module>
>>>     UNet2D.singleImageInferenceSetup(modelPath, GPU,args.mean,args.std)
>>>
>>>   File "app/UnMicst.py", line 510, in singleImageInferenceSetup
>>>     saver = tf.train.Saver()
>>>

Input Source operations connected to node save/RestoreV2:
In[0] save/Const:
In[1] save/RestoreV2/tensor_names:
In[2] save/RestoreV2/shape_and_slices:

Operation defined at: (most recent call last)
>>>   File "app/UnMicst.py", line 596, in <module>
>>>     UNet2D.singleImageInferenceSetup(modelPath, GPU,args.mean,args.std)
>>>
>>>   File "app/UnMicst.py", line 510, in singleImageInferenceSetup
>>>     saver = tf.train.Saver()
>>>

Original stack trace for 'save/RestoreV2':
  File "app/UnMicst.py", line 596, in <module>
    UNet2D.singleImageInferenceSetup(modelPath, GPU,args.mean,args.std)
  File "app/UnMicst.py", line 510, in singleImageInferenceSetup
    saver = tf.train.Saver()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 923, in __init__
    self.build()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 935, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 963, in _build
    self.saver_def = self._builder._build_internal(  # pylint: disable=protected-access
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 533, in _build_internal
    restore_op = self._AddRestoreOps(filename_tensor, saveables,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 353, in _AddRestoreOps
    all_tensors = self.bulk_restore(filename_tensor, saveables, preferred_shard,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 601, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 1501, in restore_v2
    _, _, _op, _outputs = _op_def_library._apply_op_helper(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 744, in _apply_op_helper
    op = g._create_op_internal(op_type_name, inputs, dtypes=None,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3697, in _create_op_internal
    ret = Operation(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 2101, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/py_checkpoint_reader.py", line 70, in get_tensor
    return CheckpointReader.CheckpointReader_GetTensor(
RuntimeError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 1415, in restore
    names_to_keys = object_graph_key_mapping(save_path)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 1736, in object_graph_key_mapping
    object_graph_string = reader.get_tensor(trackable.OBJECT_GRAPH_PROTO_KEY)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/py_checkpoint_reader.py", line 75, in get_tensor
    error_translator(e)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/py_checkpoint_reader.py", line 35, in error_translator
    raise errors_impl.NotFoundError(None, None, error_message)
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app/UnMicst.py", line 596, in <module>
    UNet2D.singleImageInferenceSetup(modelPath, GPU,args.mean,args.std)
  File "app/UnMicst.py", line 514, in singleImageInferenceSetup
    saver.restore(UNet2D.Session, variablesPath)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 1420, in restore
    raise _wrap_restore_error_with_msg(
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

2 root error(s) found.
  (0) NOT_FOUND: Key batch_normalization_1/beta not found in checkpoint
         [[node save/RestoreV2
 (defined at app/UnMicst.py:510)
]]
         [[save/RestoreV2/_45]]
  (1) NOT_FOUND: Key batch_normalization_1/beta not found in checkpoint
         [[node save/RestoreV2
 (defined at app/UnMicst.py:510)
]]
0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node save/RestoreV2:
In[0] save/Const:
In[1] save/RestoreV2/tensor_names:
In[2] save/RestoreV2/shape_and_slices:

Operation defined at: (most recent call last)
>>>   File "app/UnMicst.py", line 596, in <module>
>>>     UNet2D.singleImageInferenceSetup(modelPath, GPU,args.mean,args.std)
>>>
>>>   File "app/UnMicst.py", line 510, in singleImageInferenceSetup
>>>     saver = tf.train.Saver()
>>>

Input Source operations connected to node save/RestoreV2:
In[0] save/Const:
In[1] save/RestoreV2/tensor_names:
In[2] save/RestoreV2/shape_and_slices:

Operation defined at: (most recent call last)
>>>   File "app/UnMicst.py", line 596, in <module>
>>>     UNet2D.singleImageInferenceSetup(modelPath, GPU,args.mean,args.std)
>>>
>>>   File "app/UnMicst.py", line 510, in singleImageInferenceSetup
>>>     saver = tf.train.Saver()
>>>

Original stack trace for 'save/RestoreV2':
  File "app/UnMicst.py", line 596, in <module>
    UNet2D.singleImageInferenceSetup(modelPath, GPU,args.mean,args.std)
  File "app/UnMicst.py", line 510, in singleImageInferenceSetup
    saver = tf.train.Saver()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 923, in __init__
    self.build()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 935, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 963, in _build
    self.saver_def = self._builder._build_internal(  # pylint: disable=protected-access
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 533, in _build_internal
    restore_op = self._AddRestoreOps(filename_tensor, saveables,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 353, in _AddRestoreOps
    all_tensors = self.bulk_restore(filename_tensor, saveables, preferred_shard,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py", line 601, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 1501, in restore_v2
    _, _, _op, _outputs = _op_def_library._apply_op_helper(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 744, in _apply_op_helper
    op = g._create_op_internal(op_type_name, inputs, dtypes=None,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3697, in _create_op_internal
    ret = Operation(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 2101, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)
```
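The root error reads like a graph/checkpoint name mismatch: the graph built by `UnMicst.py` requests a variable named `batch_normalization_1/beta`, but the `nucleiDAPI1-5` checkpoint does not store a key under that name (and the subsequent `_CHECKPOINTABLE_OBJECT_GRAPH` error shows the fallback to an object-based checkpoint also failed, so this is a name-based checkpoint with different keys). One way to confirm is to list the keys actually stored in the checkpoint with `tf.train.list_variables(ckpt_path)` and compare them to the names the graph expects. A minimal sketch of that comparison in plain Python, with hypothetical name lists standing in for the real graph variables and checkpoint keys:

```python
# Sketch: find graph variable names that have no matching key in a
# checkpoint. With TensorFlow installed, ckpt_keys would come from
# tf.train.list_variables(ckpt_path); both lists below are hypothetical
# stand-ins for illustration only.

def missing_from_checkpoint(graph_vars, ckpt_keys):
    """Return graph variable names absent from the checkpoint keys."""
    ckpt_set = set(ckpt_keys)
    return sorted(v for v in graph_vars if v not in ckpt_set)

# Names the graph might request (hypothetical).
graph_vars = ["conv2d/kernel",
              "batch_normalization_1/beta",
              "batch_normalization_1/gamma"]
# Keys a mismatched checkpoint might actually contain (hypothetical).
ckpt_keys = ["conv2d/kernel", "bn1/beta", "bn1/gamma"]

print(missing_from_checkpoint(graph_vars, ckpt_keys))
# → ['batch_normalization_1/beta', 'batch_normalization_1/gamma']
```

Any name that shows up as missing points at either a stale/incompatible checkpoint file for that model or a graph-construction change (e.g. the batch-norm layers being numbered differently) relative to when the checkpoint was saved.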
