I currently have a computationally intensive algorithm (a C++ program) whose input is a collection of highly complex, deeply nested protobuf messages. The computation time per frame is approximately 100 ms.
I want to increase branch coverage and uncover possible crashes by importing production data (real values for the various proto fields) and then using libprotobuf-mutator to generate a large number of mutated values near that production data during fuzzing.
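For reference, this is roughly the fuzz target I have in mind. It is only a sketch: `mypkg::FrameInput` and `RunOneFrame()` are placeholders for my real message type and per-frame entry point, and it assumes the libFuzzer integration that ships with libprotobuf-mutator.

```cpp
#include "src/libfuzzer/libfuzzer_macro.h"  // from libprotobuf-mutator; exact include path depends on the build setup

#include "frame_input.pb.h"  // hypothetical generated header for my input message

// Placeholder for the real ~100 ms per-frame computation, defined elsewhere.
void RunOneFrame(const mypkg::FrameInput& input);

// DEFINE_PROTO_FUZZER lets libprotobuf-mutator do structure-aware mutations
// of FrameInput; each mutated message is fed to one frame of the algorithm.
DEFINE_PROTO_FUZZER(const mypkg::FrameInput& input) {
  RunOneFrame(input);
}
```

The idea would be to start the fuzzer with a corpus directory containing serialized production messages so that mutations stay close to realistic inputs.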
Is libprotobuf-mutator feasible for my situation? I'm mainly concerned about two issues:
- The production-data protobufs are very large: for a single frame of computation, some serialized messages can exceed 1 MB. I'm not sure how efficiently the mutator handles messages of that size.
- Does the slow per-frame computation (~100 ms per input) also make fuzzing with the mutator inefficient?
Finally, if libprotobuf-mutator is not suitable for my situation, is there any recommended approach for fuzz testing highly complex, nested protobuf inputs in a computationally intensive (i.e., slow) C++ program?