Hello! Our static bug checker has found a performance issue in
tff_tutorials/custom_federated_algorithms,_part_2_implementing_federated_averaging.py: `batch_train` is called repeatedly in a for loop, but a `tf.function`-decorated function, `_train_on_batch`, is defined and called inside `batch_train`.
Because the decorated function is recreated on every call to `batch_train`, a new graph is traced each time it runs in the loop, which can trigger the tf.function retracing warning.
Here is the TensorFlow documentation that supports this.
Briefly, for better efficiency, it's better to use:
```python
@tf.function
def inner():
    pass

def outer():
    inner()
```

than:

```python
def outer():
    @tf.function
    def inner():
        pass
    inner()
```
Looking forward to your reply.