Description
I have faced an error (full traceback below) while running run_on_test_videos.sh on my sequence of images.
The only clue I got was tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,322,364,32] vs. shape[1] = [1,321,364,32]. Somewhere the shapes do not match, but I do not know why, or what I should change in the code.
However, looking at the images I put in data/vids/'videoname', their size is 727 x 642, and I noticed that the numbers in the error are close to half of these image dimensions.
The problem does not seem to be with 727, as ceil(727/2) = 364 matches in both shape[0] and shape[1] of the error message. So the problem might be with the dimension 642, since 322 does not match 321 (respectively 642/2 + 1 and 642/2).
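The arithmetic above can be checked directly (a quick sketch; which rounding rule the network actually applies internally is an assumption on my part, based only on the numbers in the error):

```python
import math

width, height = 727, 642  # the input image size

# Halving the odd width: both rounding directions give values seen in the error.
print(math.ceil(width / 2), math.floor(width / 2))  # 364 363

# Halving the even height is exact, yet the error reports 322 vs 321,
# i.e. one path seems to produce height // 2 + 1 and the other height // 2.
print(height // 2 + 1, height // 2)  # 322 321
```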
So my question is: is there any known restriction on the image sizes? Must they be odd? Which image sizes could cause the rounding errors that break the program?
Knowing this would help me crop the images to a size that works.
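In the meantime, here is a minimal sketch of the cropping workaround I have in mind: trim each frame so both dimensions are divisible by a chosen factor. The factor 2 is an assumption based on the single halving visible in the error; the network may require a larger power of two if it downsamples more than once.

```python
import numpy as np

def crop_to_multiple(image, multiple=2):
    """Crop an H x W (x C) array so height and width are divisible by `multiple`."""
    h, w = image.shape[:2]
    return image[:h - (h % multiple), :w - (w % multiple)]

# A dummy 642 x 727 frame: cropping to a multiple of 2 trims only the odd width.
frame = np.zeros((642, 727, 3), dtype=np.uint8)
cropped = crop_to_multiple(frame, multiple=2)
print(cropped.shape)  # (642, 726, 3)
```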
Thank you in advance.
The full error:
Traceback (most recent call last):
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,322,364,32] vs. shape[1] = [1,321,364,32]
  [[{{node ynet_3frames/decoder/concat}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 102, in <module>
    main(arguments)
  File "main.py", line 83, in main
    args.velocity_mag)
  File "/home/jonathan/Dropbox/UnB/motionMag/deep_motion_mag-master/magnet.py", line 279, in run
    out_amp = self.inference(prev_frame, frame, amplification_factor)
  File "/home/jonathan/Dropbox/UnB/motionMag/deep_motion_mag-master/magnet.py", line 235, in inference
    [amplification_factor]})
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,322,364,32] vs. shape[1] = [1,321,364,32]
  [[node ynet_3frames/decoder/concat (defined at /home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for 'ynet_3frames/decoder/concat':
  File "main.py", line 102, in <module>
    main(arguments)
  File "main.py", line 83, in main
    args.velocity_mag)
  File "/home/jonathan/Dropbox/UnB/motionMag/deep_motion_mag-master/magnet.py", line 265, in run
    self.setup_for_inference(checkpoint_dir, image_width, image_height)
  File "/home/jonathan/Dropbox/UnB/motionMag/deep_motion_mag-master/magnet.py", line 209, in setup_for_inference
    self._build_feed_model()
  File "/home/jonathan/Dropbox/UnB/motionMag/deep_motion_mag-master/magnet.py", line 196, in _build_feed_model
    False)
  File "/home/jonathan/Dropbox/UnB/motionMag/deep_motion_mag-master/magnet.py", line 145, in image_transformer
    return self._decoder(self.texture_b, self.out_shape_enc)
  File "/home/jonathan/Dropbox/UnB/motionMag/deep_motion_mag-master/magnet.py", line 119, in _decoder
    enc = tf.concat([texture_enc, shape_enc], axis=3)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 1420, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 1257, in concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "/home/jonathan/Dropbox/UnB/motionMag/momagenv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()