diff --git a/CMakeLists.txt b/CMakeLists.txt
index 3cd3526..50d66cf 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -32,12 +32,10 @@ include_directories(include lib tests
${Picoquic_INCLUDE_DIRS} ${PTLS_INCLUDE_DIRS} ${OPENSSL_INCLUDE_DIR})
set (C4_LIBRARY_FILES
- src/c4.c
src/register_cc_algo.c
)
set (C4_LIBRARY_HEADERS
- src/c4.h
src/picoquic_register_cc_algo.h
)
diff --git a/c4_lib/c4_lib.vcxproj b/c4_lib/c4_lib.vcxproj
index 77a38cb..c7b64b0 100644
--- a/c4_lib/c4_lib.vcxproj
+++ b/c4_lib/c4_lib.vcxproj
@@ -144,14 +144,12 @@
-
-
diff --git a/c4_lib/c4_lib.vcxproj.filters b/c4_lib/c4_lib.vcxproj.filters
index b26e036..d05ecaa 100644
--- a/c4_lib/c4_lib.vcxproj.filters
+++ b/c4_lib/c4_lib.vcxproj.filters
@@ -21,9 +21,6 @@
Header Files
-
- Source Files
-
Source Files
@@ -32,9 +29,6 @@
-
- Source Files
-
Source Files
diff --git a/doc/c4-design.md b/doc/c4-design.md
index 67ca049..74280c9 100644
--- a/doc/c4-design.md
+++ b/doc/c4-design.md
@@ -85,7 +85,9 @@ informative:
I-D.irtf-iccrg-ledbat-plus-plus:
+ RFC9330:
RFC9331:
+ I-D.briscoe-iccrg-prague-congestion-control:
ICCRG-LEO:
target: https://datatracker.ietf.org/meeting/122/materials/slides-122-iccrg-mind-the-misleading-effects-of-leo-mobility-on-end-to-end-congestion-control-00
title: "Mind the Misleading Effects of LEO Mobility on End-to-End Congestion Control"
@@ -544,7 +546,7 @@ if measured_rate > nominal_rate and not congested:
In our early experiments, we observed a "congestion bounce"
that happened as follows:
-* congestion is detected, the nomnal rate is reduced, and
+* congestion is detected, the nominal rate is reduced, and
C4 enters recovery.
* packets sent at the data rate that caused the congestion
continue to be acknowledged during recovery.
@@ -596,6 +598,43 @@ ack delay and the send delay as a divider is sufficient
for stable operation, and does not cause the response
delays that filtering would.
+## Explicit Congestion Notification
+
+We want C4 to handle Explicit Congestion Notification in a manner
+compatible with the L4S design. For that, we monitor
+the evolving ratio of CE marks that the L4S specification
+designates as `alpha`
+(we use `ecn_alpha` here to avoid confusion),
+and we detect congestion if the ratio grows over a threshold.
+
+We did not find a recommended algorithm for computing `ecn_alpha`
+in either {{RFC9330}} or {{RFC9331}}, but we found some
+concrete suggestions in {{I-D.briscoe-iccrg-prague-congestion-control}}.
+That draft, now obsolete, suggests updating the ratio once per
+RTT, as the exponential weighted average of the fraction of
+CE marks per packet:
+
+~~~
+frac = nb_CE / (nb_CE + nb_ECT1)
+ecn_alpha += (frac - ecn_alpha)/16
+~~~
+
+This kind of averaging introduces a reaction delay. The draft suggests mitigating that
+delay by preempting the averaging if the fraction is large:
+
+~~~
+if frac > 0.5:
+ ecn_alpha = frac
+~~~
+
+We followed that design, but decided to update the coefficient after
+each acknowledgement, instead of after each RTT. This is in line with
+our implementation of "delayed acknowledgements" in QUIC, which
+results in a small number of acknowledgements per RTT.
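
The per-acknowledgement update described above can be sketched in Python as
follows. This is a minimal illustration; the function name and the guard for
acknowledgements that report no new marks are our own.

```python
def update_ecn_alpha(ecn_alpha, nb_ce, nb_ect1, gain=1.0 / 16):
    # Per-acknowledgement update of the CE mark ratio (illustrative sketch).
    # nb_ce and nb_ect1 are the newly reported CE and ECT(1) counts since
    # the previous acknowledgement; the gain 1/16 follows the Prague draft.
    total = nb_ce + nb_ect1
    if total == 0:
        return ecn_alpha  # nothing new reported: keep the running average
    frac = nb_ce / total
    if frac > 0.5:
        return frac  # preempt the averaging when the fraction is large
    return ecn_alpha + (frac - ecn_alpha) * gain
```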
+
+The reaction of C4 to an excess of CE marks is similar to the
+reaction to excess delays or to packet losses, see {{congestion}}.
+
# Competition with other algorithms
We saw in {{vegas-struggle}} that delay based algorithms required
@@ -802,7 +841,7 @@ connection using either Cubic, BBR or C4. We had to design a response,
and we first turned to making the response to excess delay or
packet loss a function of the data rate of the flow.
-## Introducing a sensitivity curve
+## Introducing a sensitivity curve {#sensitivity-curve}
In our second design, we attempted to fix the unfairness and
shutdowns effect by introducing a sensitivity curve,
@@ -831,6 +870,13 @@ For the loss threshold, the rule is:
loss_threshold = 0.02 + 0.50 * (1-sensitivity);
~~~
+For the CE mark threshold, the rule is:
+
+~~~
+ce_mark_threshold = 1/32 + 1/32 * (1-sensitivity);
+~~~
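
For illustration, the two threshold rules above can be written as small
Python helpers (function names are ours):

```python
def loss_threshold(sensitivity):
    # Loss threshold rule above: 2% at sensitivity 1, 52% at sensitivity 0.
    return 0.02 + 0.50 * (1 - sensitivity)

def ce_mark_threshold(sensitivity):
    # CE mark threshold rule above: 1/32 at sensitivity 1, 2/32 at sensitivity 0.
    return 1.0 / 32 + (1.0 / 32) * (1 - sensitivity)
```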
+
+
This very simple change allowed us to stabilize the results. In our
competition tests we see sharing of resource almost equitably between
C4 connections, and reasonably between C4 and Cubic or C4 and BBR.
@@ -858,7 +904,8 @@ This means we would only double the bandwidth after about 68 RTT, or increase
from 10 to 65 Mbps after 185 RTT -- by which time the LEO station might
have connected to a different orbiting satellite. To go faster, we implement
a "cascade": if the previous pushing at 6.25% was successful, the next
-pushing will use 25% (see {{variable-pushing}}). If three successive pushings
+pushing will use 25% (see {{variable-pushing}}), or an intermediate
+value if the observed ratio of ECN marks is greater than zero. If three successive pushing attempts
all result in increases of the
nominal rate, C4 will reenter the "startup" mode, during which each RTT
can result in a 100% increase of rate and CWND.
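
The cascade described above can be sketched as a small step function. This is
a simplification under our own naming; the real implementation tracks more
state.

```python
def cascade_step(success_streak, last_push_successful):
    # One step of the cascade: a successful 6.25% push escalates the next
    # push to 25%; three consecutive successful pushes re-enter startup,
    # where each RTT can double the rate. Returns (state, gain, streak).
    streak = success_streak + 1 if last_push_successful else 0
    if streak >= 3:
        return ("startup", 1.0, 0)        # 100% increase per RTT
    if last_push_successful:
        return ("pushing", 0.25, streak)  # escalated 25% push
    return ("pushing", 0.0625, streak)    # cautious 6.25% push
```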
@@ -947,6 +994,12 @@ We manage that compromise by adopting a variable pushing rate:
the next pushing will happen at 25%, otherwise it will
remain at 6.25%
+If the observed ratio of ECN-CE marks is greater than zero, we will
+use it to modulate the amount of pushing. We leave the pushing rate
+at 6.25% if the previous pushing attempt was not successful, but
+otherwise we pick a value intermediate between 25% (if 0 ECN marks)
+and 6.25% (if the ratio of ECN marks approaches the threshold).
+
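A sketch of that modulation, with illustrative names; the linear
interpolation is our reading of "a value intermediate between 25% and 6.25%":

```python
def pushing_rate(previous_push_successful, ecn_mark_ratio, ecn_threshold):
    # 6.25% after an unsuccessful push; otherwise interpolate between a
    # 25% push (no ECN marks seen) and 6.25% (ratio at the threshold).
    if not previous_push_successful:
        return 0.0625
    ratio = min(max(ecn_mark_ratio / ecn_threshold, 0.0), 1.0)
    return 0.0625 + (0.25 - 0.0625) * (1.0 - ratio)
```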
As explained in {{cascade}}, if three consecutive pushing attempts
result in significant increases, C4 detects that the underlying network
conditions have changed, and will reenter the startup state.
diff --git a/doc/c4-spec.md b/doc/c4-spec.md
index d03c965..df94dab 100644
--- a/doc/c4-spec.md
+++ b/doc/c4-spec.md
@@ -302,7 +302,7 @@ diagram.
~~~
-## Setting pacing rate, congestion window and quantum
+## Setting pacing rate, congestion window and quantum {#set_pace}
If the nominal rate or the nominal max RTT are not yet
assessed, C4 sets pacing rate, congestion window and
@@ -320,7 +320,6 @@ and on a coefficient `alpha_current`:
~~~
pacing_rate = alpha_current_ * nominal_rate
cwnd = max (pacing_rate * nominal_max_rtt, 2*MTU)
-quantum = max ( min (cwnd / 4, 64KB), 2*MTU)
~~~
The coefficient `alpha` for the different states is:
@@ -332,6 +331,18 @@ Recovery | 15/16 |
Cruising | 1 |
Pushing | 5/4 or 17/16 | see {{c4-pushing}} for rules on choosing 5/4 or 17/16
+Setting the pacing quantum is a tradeoff between two requirements.
+Using a large quantum enables applications to send large batches of
+packets in a single transaction, which improves performance. But
+sending large batches of packets creates "instant queues" and
+causes some Active Queue Management mechanisms to mark packets as
+ECN/CE, or drop them. As a compromise, we set the quantum to
+4 milliseconds worth of transmission.
+
+~~~
+quantum = max ( min (pacing_rate*4_milliseconds, 64KB), 2*MTU)
+~~~
+
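The quantum rule can be illustrated as a short Python sketch; the rate is in
bytes per second, and the MTU default is an assumption for the example:

```python
def pacing_quantum(pacing_rate, mtu=1500):
    # Quantum rule above: 4 milliseconds worth of transmission, capped at
    # 64KB and floored at 2*MTU. pacing_rate is in bytes per second.
    four_ms_of_bytes = pacing_rate * 4 / 1000
    return max(min(four_ms_of_bytes, 64 * 1024), 2 * mtu)
```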
## Initial state {#c4-initial}
When the flow is initialized, it enters the Initial state,
@@ -432,10 +443,21 @@ a congestion signal is received.
## Pushing state {#c4-pushing}
-The Pushing state is entered from the Cruising state.
-The coefficient `alpha_current` is set to 5/4 if the previous
+The Pushing state is entered from the Cruising state.
+
+The coefficient `alpha_current` depends on whether the
+previous
pushing attempt was successful (see {{c4-recovery}}),
-or 17/16 if it was not.
+and also on the current value of `ecn_alpha`
+(see {{process-ecn}}):
+
+~~~
+ if not previous_attempt_successful:
+ alpha_current = 17/16
+ else:
+ alpha_current = 17/16 +
+       3/16 * (1 - ecn_alpha / ecn_threshold)
+~~~
C4 exits the pushing state after one era, or if a congestion
signal is received before that. In an exception to
@@ -536,13 +558,53 @@ PTO timeouts. When testing in "high jitter" conditions, we realized that we shou
not change the state of C4 for losses detected solely based on timer, and
only react to those losses that are detected by gaps in acknowledgements.
-## Detecting Excessive CE Marks
+## Detecting Excessive CE Marks {#process-ecn}
+
+When the path supports ECN marking, C4 monitors the arrival of ECN/CE and
+ECN/ECT(1) marks by computing the ratio `ecn_alpha`. Congestion is detected
+when that ratio exceeds `ecn_threshold`, which varies depending on the
+sensitivity coefficient:
+
+~~~
+ecn_threshold = (2-sensitivity)*3/32
+~~~
+
+The ratio `ecn_alpha` is
+updated each time an acknowledgement is received, as follows:
+
+~~~
+delta_ce = increase in the reported CE marks
+delta_ect1 = increase in the reported ECT(1) marks
+frac = delta_ce / (delta_ce + delta_ect1)
+
+if frac >= 0.5:
+ ecn_alpha = frac
+else:
+ ecn_alpha += (frac - ecn_alpha)/16
+
+if ecn_alpha > ecn_threshold:
+ report congestion
+~~~
+
+Congestion detection causes C4 to enter recovery. The
+ratio `ecn_alpha` is set to zero on exit from recovery.
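As an illustration, the detection logic above can be sketched in Python. The
state dictionary and its field names are our own, and we skip the update when
an acknowledgement reports no new marks:

```python
def process_ecn_ack(state, reported_ce, reported_ect1):
    # Per-acknowledgement CE processing sketch. `state` carries the running
    # ecn_alpha, the previously reported totals, and the threshold.
    delta_ce = reported_ce - state["last_ce"]
    delta_ect1 = reported_ect1 - state["last_ect1"]
    state["last_ce"], state["last_ect1"] = reported_ce, reported_ect1
    total = delta_ce + delta_ect1
    if total > 0:
        frac = delta_ce / total
        if frac >= 0.5:
            state["ecn_alpha"] = frac
        else:
            state["ecn_alpha"] += (frac - state["ecn_alpha"]) / 16
    return state["ecn_alpha"] > state["ecn_threshold"]  # True: congestion
```

The threshold itself would follow the rule above, e.g. `(2 - sensitivity) * 3 / 32`.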
+
+## Applying congestion signals
+
+On congestion signal, if C4 was not in recovery state, it
+will enter recovery.
-TBD. The plan is to mimic the L4S specification.
+As stated in {{c4-initial}} and {{c4-pushing}}, detecting
+congestion in the Initial or Pushing state does not cause
+a change in the `nominal_rate` or `nominal_max_RTT`, because
+the pacing rate in these states is larger than the
+`nominal_rate`. Rate reduction only happens if recovery
+was entered from the Cruising state.
-## Rate Reduction on Congestion
+### Rate Reduction on Congestion {#rate-reduction}
-On entering recovery, C4 reduces the `nominal_rate` by the factor "beta"
+On entering recovery from the cruising state, C4 reduces the
+`nominal_rate` by the factor "beta"
corresponding to the congestion signal:
~~~
@@ -560,13 +622,16 @@ the acceptable margin, capped to `1/4`:
~~~
beta = min(1/4,
    (rtt_sample - (nominal_max_rtt + delay_threshold)) /
-          delay_threshod))
+          delay_threshold)
+~~~
+
+If the signal is an ECN/CE rate, the coefficient is proportional
+to the difference between `ecn_alpha` and `ecn_threshold`, capped to `1/4`:
+
+~~~
+ beta = min(1/4, (ecn_alpha - ecn_threshold) / ecn_threshold)
~~~
-If the signal is an ECN/CE rate, this is still TBD. We could
-use a proportional reduction coefficient in line with
-{{RFC9331}}, but we should use the sensitivity coefficient to
-modulate that signal.
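
For illustration, the delay and ECN reduction rules can be combined in one
sketch (the function shape and keyword names are ours):

```python
def reduction_beta(signal, rtt_sample=0.0, nominal_max_rtt=0.0,
                   delay_threshold=1.0, ecn_alpha=0.0, ecn_threshold=1.0):
    # beta, capped at 1/4, for the delay and ECN/CE congestion signals.
    if signal == "delay":
        excess = rtt_sample - (nominal_max_rtt + delay_threshold)
        return min(0.25, excess / delay_threshold)
    if signal == "ecn":
        return min(0.25, (ecn_alpha - ecn_threshold) / ecn_threshold)
    raise ValueError("unknown congestion signal")
```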
# Security Considerations
@@ -586,6 +651,32 @@ This document has no IANA actions.
TODO acknowledge.
+# Changes since previous versions
+
+This section should be deleted before publication as an RFC
+
+## Changes since draft-huitema-ccwg-c4-spec-00
+
+Added the specification of reaction to ECN in {{process-ecn}}
+and in {{rate-reduction}}. Updated section {{c4-pushing}} to
+modulate pushing rate based on observed rate of ECN/CE marks.
+
+In {{set_pace}}, the computation of the "quantum" changed
+from:
+
+~~~
+quantum = max ( min (cwnd / 4, 64KB), 2*MTU)
+~~~
+
+to:
+
+~~~
+quantum = max ( min (pacing_rate*4_milliseconds, 64KB), 2*MTU)
+~~~
+
+The old formula caused long bursts of packets that would
+trigger packet drops or ECN/CE marking by active queue management
+algorithms.
diff --git a/papers/c4-ecn-alpha-128-256-qvis.png b/papers/c4-ecn-alpha-128-256-qvis.png
new file mode 100644
index 0000000..c7ee8b0
Binary files /dev/null and b/papers/c4-ecn-alpha-128-256-qvis.png differ
diff --git a/papers/c4-ecn-early-fixed-qvis.png b/papers/c4-ecn-early-fixed-qvis.png
new file mode 100644
index 0000000..0a29a92
Binary files /dev/null and b/papers/c4-ecn-early-fixed-qvis.png differ
diff --git a/papers/c4-ecn-early-fixed-sim-rates.png b/papers/c4-ecn-early-fixed-sim-rates.png
new file mode 100644
index 0000000..ec03e32
Binary files /dev/null and b/papers/c4-ecn-early-fixed-sim-rates.png differ
diff --git a/papers/c4-ecn-early-fixed-sim.png b/papers/c4-ecn-early-fixed-sim.png
new file mode 100644
index 0000000..0a29a92
Binary files /dev/null and b/papers/c4-ecn-early-fixed-sim.png differ
diff --git a/papers/c4-ecn-early-trial-qvis.png b/papers/c4-ecn-early-trial-qvis.png
new file mode 100644
index 0000000..f02fb64
Binary files /dev/null and b/papers/c4-ecn-early-trial-qvis.png differ
diff --git a/papers/c4-ecn-quantum-4-fixed-qvis.png b/papers/c4-ecn-quantum-4-fixed-qvis.png
new file mode 100644
index 0000000..c48bbcf
Binary files /dev/null and b/papers/c4-ecn-quantum-4-fixed-qvis.png differ
diff --git a/papers/c4-ecn-quantum-4-qvis.png b/papers/c4-ecn-quantum-4-qvis.png
new file mode 100644
index 0000000..d4cae0a
Binary files /dev/null and b/papers/c4-ecn-quantum-4-qvis.png differ
diff --git a/papers/ecn-support.md b/papers/ecn-support.md
new file mode 100644
index 0000000..88b248b
--- /dev/null
+++ b/papers/ecn-support.md
@@ -0,0 +1,190 @@
+# Adding ECN Support
+
+The first attempt at ECN support was not very convincing. As shown in the qvis congestion graph below,
+the connection quickly reaches an excessive rate, endures lots of packet losses, and then stabilizes
+at a very low data rate.
+
+
+
+There are lots of weird data in this trial:
+
+* During the initial phase, the "nominal RTT" is set to 58,095,238 based on a very first measurement,
+ in which the RTT is measured at 21 microseconds, the send delay to 0, with 1220 bytes acknowledged.
+ The simulated data rate is 20 Mbps, which means no measurement should have exceeded 2,500,000.
+ There is clearly a bug, probably due to some confusion between initial, handshake and 1 RTT packets.
+* It takes a long series of losses over 1.3 seconds to drive the data rate back to 2,500,000 Bps.
+* The nominal max RTT remains at 21 ms for 3 initial RTTs.
+* The RTT measurements are very often below the actual path latency, which probably indicates
+ some kind of jitter and ack compression.
+* After the initial phase, the data rate oscillates between 800,000 and 1,600,000 Bps, well below
+  2,500,000 Bps, probably due to the initial naive implementation of ECN as "similar to packet loss."
+
+This suggests a bunch of fixes before the next trial, starting with fixing the aberrant
+initial value of the nominal data rate.
+
+
+
+The issue with the initial data comes from mixing data and acknowledgements from different QUIC epochs
+(initial, handshake, 1RTT). The simplest fix is for C4 to ignore the packets exchanged in the initial
+and handshake epochs. That fixes the excess initial rate observed in the previous trial, but there
+are still issues:
+
+* ignoring the initial epoch causes the "media over bad wifi" test to fail, with a maximum
+ frame delay jumping from 410ms to 509ms.
+* C4 exits the initial phase early, after receiving an ECN notification.
+* The RTT measurements are still very often below the actual path latency.
+* After the initial phase, the data rate oscillates between 800,000 and 1,600,000 Bps, well below
+  2,500,000 Bps, probably due to the initial naive implementation of ECN as "similar to packet loss."
+
+The tests were made harder by our initial decision to leave the C4 code (c4.c) in the C4 project, and
+load it as a submodule in the picoquic project. This turns out not to work well, because the
+linker building the simulator gets confused between two copies of "c4.o", one in the picoquic core
+library and one in the C4 project. It is more reasonable to move the code entirely to the picoquic
+project, and only use the C4 project for documentation and simulations.
+
+The draft
+[TCP Prague](https://datatracker.ietf.org/doc/draft-briscoe-iccrg-prague-congestion-control/)
+requests computation of a moving average once per RTT, using the ratio `frac` as the number of
+ECN/CE marks over the total number of packets received in that RTT:
+
+~~~
+ alpha += g * (frac - alpha);
+~~~
+
+In that formula, the gain `g` is by default set to `1/16`. The coefficient `alpha` provides
+an estimate of the marking rate at the bottleneck. When congestion is detected, the `ssthresh`
+is reduced to:
+
+~~~
+ ssthresh = (1 - alpha/2) * cwnd;
+~~~
+
+There is a further stipulation that the alpha is initialized to 1 at the first ECN mark, which
+ensures that the first ECN mark causes Prague to exit slow start.
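+
Those two rules from the Prague draft can be restated as a short Python
sketch (function names are ours):

```python
def prague_alpha_update(alpha, frac, g=1.0 / 16):
    # Once-per-RTT EWMA of the CE mark fraction, default gain g = 1/16.
    return alpha + g * (frac - alpha)

def prague_ssthresh(cwnd, alpha):
    # On congestion, ssthresh is reduced in proportion to alpha / 2.
    return (1 - alpha / 2) * cwnd
```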
+
+We can derive from these specifications that C4 should exit the initial phase at the
+first ECN mark, but we have a little ambiguity on the reduction. The "cwnd" coefficient
+in Prague is updated in real time, but in C4 it is set to a multiple of the nominal
+rate and the nominal max RTT, times a coefficient 2 in the initial phase.
+It would make sense to exit slow start and leave the nominal data rate "as is".
+The draft however has a caveat, explaining that if multiple connections are
+established, they may trigger frequent marking. Rather than exit slow start on
+the first mark, it might be better to only exit when the rate passes some threshold.
+It would also make sense to have that threshold be a function of the "sensitivity"
+curve.
+
+Thinking further, we have to look at ECN as part of the probing cycle. In
+the initial phase, we see a constant increase of the data rate, and thus
+averaging the ECN rate or computing era averages over time has little value.
+It is better to just run a short-term average to reduce the effect of noise,
+and simply exit the initial phase if the average passes the threshold
+corresponding to the sensitivity. There is not much point in dropping the
+nominal rate on exit, since entering recovery will take care of that.
+
+After the initial phase, we have a succession of probing cycles, starting
+with recovery, continuing with cruising and possibly probing, and then
+returning to recovery either after congestion is detected or probing is
+complete. We want to:
+
+* exit cruising or probing if the short-term average of ECN marks is too
+ high, and treat that as a congestion signal modulated by the observed
+ CE marking rate and the sensitivity.
+* in recovery, mark the probing as "congested" if the ECN rate is too
+ high, so the next probe happens at a low rate.
+* reset averaging at the end of recovery, so averages in the next cycle
+ are not skewed by congestion in the previous cycle.
+
+This drives a different implementation than Prague. The ECN rate is
+a running exponential average, computed either from the beginning
+of the Initial phase or from the end of the previous Recovery phase.
+It should probably be computed after each ACK, by checking for
+arrivals of new marks.
+
+The first attempt is to set the "ecn threshold" as a function of sensitivity:
+25% for sensitivity 0, 12.5% for sensitivity 1, linear interpolation between
+the two. Then, we set the "beta" value to the ratio between
+the "excess ECN alpha" and the threshold, capped at 25%. This gets us the following graph:
+
+
+
+
+It is encouraging. The passing mark for the "C4 alone" test was a completion in
+less than 5 seconds; this test completes in 5.01 seconds while maintaining the
+RTT variations in a narrow range. There are however a few issues:
+
+- We observe packet losses after the exit from the Initial phase,
+- We observe something like a re-entry into a second Initial phase
+  shortly after the first one, and significant packet losses after
+ exiting that phase,
+- The data rate seems to stabilize at 2MB/s, which would be
+ only 80% of capacity.
+
+But the graph above shows a concerning pattern: delay measurements fall below the
+simulated link latency, which indicates a bug in the implementation of
+the "dual queue AQM" in the picoquic simulator. After fixing these bugs,
+the graph is a bit different.
+
+
+
+Apart from fixing the simulator, this graph includes two fixes:
+
+- The ECN threshold varies between 9.375% and 18.75% instead of between 12.5% and 25%,
+- The alpha coefficient for the pushing phase is set to 6.25% if the previous push
+  was not successful; if it was successful, it varies between 25% (if the
+  ECN alpha coefficient was null) and 6.25% (if that coefficient was large).
+
+
+
+The graph looks better, but there are still some concerns, which become obvious when we look
+at the evolution of the nominal data rate:
+
+- C4 exited the initial phase a bit early, at a rate of 1.1 MB/s instead of the nominal 2.5MB/s
+- C4 also exited the "push" phases early, or discovered congestion well below the expected 2.5MB/s
+
+Examining the traces showed that the computation of the coefficient alpha was reasonably accurate.
+There were just too many CE marks, meaning probably too many packets in the queue.
+
+Could this be due to the way C4 configures pacing? If the "quantum" coefficient is too large,
+C4 will allow sending a big "train" of packets whenever the congestion window is relaxed. This
+will instantly fill the queue, causing the DualQ controller to start marking a fraction of the
+packets with `ecn=CE`. The DualQ controller is programmed to start marking packets if the
+queue is more than 5ms deep, increase the marking rate as the queue grows, and then mark every
+packet if the queue is more than 15ms deep. To avoid CE marks, C4 would have to keep the
+queue smaller than 5ms, which implies that the quantum should be less than 5ms worth of packets.
+The initial code was setting the quantum to 1/4th of CWND, which is only less than 5ms
+if the RTT is less than 20ms.
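+
The arithmetic above can be checked with a short sketch: with the quantum at
1/4 of CWND (and CWND roughly rate times RTT), a full burst takes RTT/4 to
drain at the bottleneck, so it stays under 5ms only when the RTT is under
20ms (illustrative names; rate in bytes per second, RTT in seconds):

```python
def quantum_cwnd_rule(rate, rtt):
    # Old rule: quantum = CWND / 4, with CWND approximated as rate * rtt.
    return rate * rtt / 4

def burst_drain_ms(quantum, rate):
    # Time, in ms, for the bottleneck to drain a burst of `quantum` bytes.
    return 1000 * quantum / rate
```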
+
+The initial code was also setting the pacing rate in the initial phase to more than
+double the nominal rate (250%). This was meant as a way to find the nominal rate
+sooner by allowing "chirping", but it has the effect of building queues as soon as the
+nominal rate is 80% of the target value. The DualQ controller sees these queues and
+increases the marking rate, causing the early exit of the initial phase.
+
+
+
+Setting the quantum to 4ms of traffic does indeed fix the issue. The transmission rate reaches
+the expected value of 2.5MB/s, and the RTT remains pegged near the min RTT, only bumping up
+by at most 10ms during the Initial state and the pushing state. The only remaining problem is
+that we do see occasional packet losses after the Initial phase, and also after a
+pushing phase trying a 25% rate increase. The log file shows that the packet losses
+happen at almost the same time as the increase in the marking rate, which reaches 80%
+or higher.
+
+This is probably caused by the dynamics of the DualQ AQM. Endpoints use ECN signals as
+an indication of congestion. When they receive CE marks, they react by reducing their
+sending rate, but the AQM will only see the effect of that rate reduction 1 RTT after
+the CE mark. In between, the queue will build up rapidly, and may reach the threshold
+at which the AQM starts dropping packets. Examining the code, it turns out that this
+was a bug in the picoquic implementation: the limit is supposed to be set to the total
+amount of buffer available, but was instead set to a constant.
+
+
+
+After fixing that bug, the packet losses are gone. A direct consequence is that the
+delays increase during the initial and pushing phase. When the limit was set to a
+low value, packets that would cause a delay above the limit were immediately
+dropped, which would place a cap on the maximum RTT. When the buffering is
+not arbitrarily limited, we allow some packet queues to build. However, we can
+see that after a short period the pushing phases are limited to a 6.25%
+increase, which only causes minimal delays. Outside of pushing phases, the
+delays are nearly equal to the min RTT.
\ No newline at end of file
diff --git a/sim_specs/c4_ecn.txt b/sim_specs/c4_ecn.txt
new file mode 100644
index 0000000..ee0c251
--- /dev/null
+++ b/sim_specs/c4_ecn.txt
@@ -0,0 +1,11 @@
+main_cc_algo: c4
+main_start_time: 0
+main_scenario_text: =b1:*1:397:10000000;
+nb_connections: 1
+main_target_time: 5000000
+data_rate_in_gbps: 0.02
+latency: 40000
+l4s_max: 15000
+queue_delay_max: 100000
+icid: ccecc400
+qlog_dir: cclog
\ No newline at end of file
diff --git a/sim_specs/c4_low_and_up.txt b/sim_specs/c4_low_and_up.txt
index fdc34a6..696bed4 100644
--- a/sim_specs/c4_low_and_up.txt
+++ b/sim_specs/c4_low_and_up.txt
@@ -2,7 +2,7 @@ main_cc_algo: c4
main_start_time: 0
main_scenario_text: =b1:*1:397:7000000;
nb_connections: 1
-main_target_time: 7900000
+main_target_time: 7950000
data_rate_in_gbps: 0.01
latency: 50000
queue_delay_max: 80000
diff --git a/sim_specs/c4_media_short_long.txt b/sim_specs/c4_media_short_long.txt
index 14974d9..c7b289b 100644
--- a/sim_specs/c4_media_short_long.txt
+++ b/sim_specs/c4_media_short_long.txt
@@ -11,6 +11,6 @@ qlog_dir: cclog
qperf_log: c4_media_sl_qperflog.csv
media_stats_start: 5000000
media_latency_average: 110000
-media_latency_max: 120000
+media_latency_max: 125000
media_excluded: vhigh, vmid, vlast
link_scenario: 1000000:U0.01:D0.1:L15000:Q100000;60000000:U0.01:D0.1:L50000:Q200000
\ No newline at end of file