Thanks for sharing your code. When I train the model on my own dataset, I found a memory leak when running this code:
training_batch = sess.run(tf.map_fn(lambda img: tf.image.per_image_standardization(img), training_batch))
groundtruth_batch = sess.run(tf.map_fn(lambda img: tf.image.per_image_standardization(img), groundtruth_batch))
After some iterations, when saving checkpoints, the GraphDef grows larger than 2 GB and the program crashes. Has anyone met this issue, and how can it be solved?
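A likely cause: calling tf.map_fn inside the training loop adds new nodes to the graph on every iteration, so the graph grows without bound until the serialized GraphDef hits TensorFlow's 2 GB protobuf limit at checkpoint time. Two common fixes are (a) build the standardization op once, outside the loop, and feed each batch through a placeholder, or (b) do the standardization outside the graph entirely. The sketch below is a hedged, pure-Python illustration of option (b): it mimics tf.image.per_image_standardization (subtract the mean, divide by max(stddev, 1/sqrt(N))) on a flat list of pixel values; a real pipeline would use NumPy arrays instead.

```python
import math

def per_image_standardization(img):
    """Pure-Python sketch of tf.image.per_image_standardization:
    subtract the per-image mean and divide by the adjusted stddev,
    max(stddev, 1/sqrt(N)), where N is the number of pixel values.
    `img` is a flat list of floats; this runs outside the TF graph,
    so no new graph nodes are created per batch."""
    n = len(img)
    mean = sum(img) / n
    # Population standard deviation, as TF computes it.
    stddev = math.sqrt(sum((v - mean) ** 2 for v in img) / n)
    adjusted = max(stddev, 1.0 / math.sqrt(n))
    return [(v - mean) / adjusted for v in img]
```

If you prefer to keep the work in the graph, define `standardize_op = tf.map_fn(tf.image.per_image_standardization, input_placeholder)` once before the loop and call `sess.run(standardize_op, feed_dict={input_placeholder: batch})` inside it; the graph then stays a fixed size.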