from armory.art_experimental.attacks import patch
from armory.art_experimental.attacks.sweep import SweepAttack
from armory.datasets.generator import ArmoryDataGenerator
from armory.data.datasets import EvalGenerator  # TODO: Remove before PR merge
EvalGenerator is seemingly never reached: armory runs fine without this import, but flake8 complains about the line below when the import is removed:
armory/armory/utils/config_loading.py, Line 194 in 0d15a1b
EvalGenerator should be completely removed from this file as suggested by #1836 (comment)
def mot_array_to_coco(batch):
    """
    Map from 3D array (batch_size x detections x 9) to extended coco format
    of dimension (batch_size x frames x detections_per_frame)
    NOTE: 'image_id' is given as the frame of a video, so is not unique
    """
    if len(batch.shape) == 2:
        not_batch = True
        batch = tf.expand_dims(batch, axis=0)
    elif len(batch.shape) == 3:
        not_batch = False
    else:
        raise ValueError(f"batch.ndim {len(batch.shape)} is not in (2, 3)")

    # output = tf.TensorArray(dtype=tf.float32, size=batch.shape[0], dynamic_size=False)
    output = []
    for i in range(batch.shape[0]):
        array = batch[i]
        # if not tf.math.greater(tf.shape(array)[0], 0):
        if array.shape[0] == 0:
            # no object detections
            # output = output.write(i, [])
            output.append([])
            continue

        # frames = tf.TensorArray(dtype=tf.float32, size=tf.shape(array)[0], dynamic_size=False)
        frames = []
        for detection in array:
            frame = tf.lookup.StaticHashTable(
                {
                    # TODO: should image_id include video number as well?
                    "image_id": tf.cast(tf.math.round(detection[0]), tf.int32),
                    "category_id": tf.cast(tf.math.round(detection[7]), tf.int32),
                    "bbox": tf.cast(detection[2:6], float),
                    "score": tf.cast(detection[6], float),
                    # The following are extended fields
                    "object_id": tf.cast(
                        tf.math.round(detection[1]), tf.int32
                    ),  # for a specific object across frames
                    "visibility": tf.cast(detection[8], float),
                }
            )
            frames.append(frame)
            # frames = frames.write(frames.size(), frame)
        # output = output.write(i, frames)
        output.append(frames)

    if not_batch:
        output = output[0]

    raise NotImplementedError("This does not work yet")
    return output
This function is translated from the linked function below; however, TensorFlow does not support storing dictionaries as elements of a tensor (whereas numpy previously had no problem with this). As such, the return of this preprocessing function cannot be a tensor containing lists of dictionaries (which contain tensors as values), since this is not supported even with RaggedTensors. From this I see only a few viable options:
1. Keep the logic of this function as a preprocessing step and encode the key/value pairs of the dictionary using some encoding schema, to be decoded during the next call.
2. Move this function to be applied in CarlaMOT during the next call.
3. Some magical TensorFlow operations to get this to work without touching scenario code.
Thoughts?
armory/armory/data/adversarial_datasets.py, Lines 1029 to 1072 in 48dcc04
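To make option (1) concrete, here is a minimal pure-Python sketch of what "encoding schema" could mean here. Since every dictionary value is computed from a fixed column of a 9-element MOT row, the column layout itself can serve as the encoding: the preprocessing step passes the raw numeric rows through the pipeline unchanged, and a small decoder rebuilds the extended-coco dicts outside the graph. `MOT_COLUMN_FIELDS` and `decode_mot_detection` are hypothetical names for illustration, not existing armory API.

```python
# Hypothetical sketch of option (1). The "encoding schema" is just the fixed
# column order of a MOT row; the tf.data pipeline carries plain float arrays,
# and this decoder rebuilds the extended-coco dicts on the Python side.
# Column layout mirrors mot_array_to_coco above.
MOT_COLUMN_FIELDS = {
    "image_id": (0, int),     # frame number; rounded then cast to int
    "object_id": (1, int),    # stable id for one object across frames
    "category_id": (7, int),
    "score": (6, float),
    "visibility": (8, float),
}


def decode_mot_detection(row):
    """Rebuild one extended-coco dict from a single 9-element MOT row."""
    det = {
        name: cast(round(row[i])) if cast is int else cast(row[i])
        for name, (i, cast) in MOT_COLUMN_FIELDS.items()
    }
    det["bbox"] = [float(x) for x in row[2:6]]
    return det
```

Applying `decode_mot_detection` to each row of a (detections x 9) element would yield the same dicts `mot_array_to_coco` builds, without ever asking TensorFlow to hold a dict inside a tensor.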
Note that the rest of the function works up until the NotImplementedError; past that point, it either throws an error when trying tf.convert_to_tensor(output), or, when the function is mapped onto the dataloader, complains that the output is not a valid return type (since it is a list of dictionaries).
Also, using the StaticHashTable makes no difference compared to a generic dictionary.
Ideally (3) is best, but I think (2) may be preferable to (1).
Agreed; assuming you don't have a recommendation for how to proceed with (3), I will implement (2).
I do not; we've run into a similar issue with other datasets, which has required modifying the scenario code.
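For reference, a minimal sketch of what (2) could look like on the scenario side, assuming the pipeline hands CarlaMOT the raw (batch x detections x 9) numeric arrays and the conversion happens in plain Python. `mot_batch_to_coco` is a hypothetical helper, not existing CarlaMOT code; the field layout mirrors `mot_array_to_coco` above, minus the tf ops.

```python
# Hypothetical sketch of option (2): apply the array -> dict conversion in
# the scenario's batch loop, after the tf.data pipeline has handed back
# numeric arrays, where arbitrary Python objects are allowed.
def mot_batch_to_coco(batch):
    """Map (batch x detections x 9) rows to lists of extended-coco dicts."""
    output = []
    for array in batch:
        # an empty array (no object detections) yields an empty list
        output.append([
            {
                "image_id": int(round(det[0])),
                "category_id": int(round(det[7])),
                "bbox": [float(x) for x in det[2:6]],
                "score": float(det[6]),
                # extended fields
                "object_id": int(round(det[1])),
                "visibility": float(det[8]),
            }
            for det in array
        ])
    return output
```

This keeps the tf.data pipeline entirely numeric and confines the unsupported dict-of-tensors structure to scenario code.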