<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.20 (Ruby 3.3.5) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

<!ENTITY RFC9000 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9000.xml">
<!ENTITY I-D.ietf-moq-transport SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-moq-transport.xml">
<!ENTITY RFC9438 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9438.xml">
<!ENTITY I-D.ietf-ccwg-bbr SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-ccwg-bbr.xml">
<!ENTITY RFC6817 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6817.xml">
<!ENTITY RFC6582 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6582.xml">
<!ENTITY RFC3649 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3649.xml">
<!ENTITY I-D.irtf-iccrg-ledbat-plus-plus SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.irtf-iccrg-ledbat-plus-plus.xml">
<!ENTITY RFC9330 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9330.xml">
<!ENTITY RFC9331 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9331.xml">
<!ENTITY I-D.briscoe-iccrg-prague-congestion-control SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.briscoe-iccrg-prague-congestion-control.xml">
]>


<rfc ipr="trust200902" docName="draft-huitema-ccwg-c4-design-03" category="info" consensus="true" submissionType="IETF">
  <front>
    <title abbrev="C4 Design">Design of Christian's Congestion Control Code (C4)</title>

    <author initials="C." surname="Huitema" fullname="Christian Huitema">
      <organization>Private Octopus Inc.</organization>
      <address>
        <email>huitema@huitema.net</email>
      </address>
    </author>
    <author initials="S." surname="Nandakumar" fullname="Suhas Nandakumar">
      <organization>Cisco</organization>
      <address>
        <email>snandaku@cisco.com</email>
      </address>
    </author>
    <author initials="C." surname="Jennings" fullname="Cullen Jennings">
      <organization>Cisco</organization>
      <address>
        <email>fluffy@iii.ca</email>
      </address>
    </author>

    <date year="2026" month="February" day="26"/>

    <area>Web and Internet Transport</area>
    
    <keyword>C4</keyword> <keyword>Congestion Control</keyword> <keyword>Realtime Communication</keyword> <keyword>Media over QUIC</keyword>

    <abstract>



<t>Christian's Congestion Control Code is a new congestion control
algorithm designed to support Real-Time applications such as
Media over QUIC. It is designed to drive towards low delays,
with good support for the "application limited" behavior
frequently found when using variable rate encoding, and
with fast reaction to congestion to avoid the "priority
inversion" that happens when congestion control overestimates
the available capacity. It pays special attention to the
high jitter conditions encountered in Wi-Fi networks.
The design emphasizes simplicity and
avoids making too many assumptions about the "model" of
the network. The main control variables are the estimates
of the data rate and of the maximum path delay in the
absence of queues.</t>



    </abstract>



  </front>

  <middle>



<section anchor="introduction"><name>Introduction</name>

<t>Christian's Congestion Control Code (C4) is a new congestion control
algorithm designed to support Real-Time multimedia applications, specifically
multimedia applications using QUIC <xref target="RFC9000"/> and the Media
over QUIC transport <xref target="I-D.ietf-moq-transport"/>. These applications
require low delays, and often exhibit a variable data rate as they
alternate between high bandwidth requirements when sending reference frames
and lower bandwidth requirements when sending differential frames.
We translate that into three main goals:</t>

<t><list style="symbols">
  <t>Drive towards low delays (see <xref target="react-to-delays"/>),</t>
  <t>Support "application limited" behavior (see <xref target="limited"/>),</t>
  <t>React quickly to changing network conditions (see <xref target="congestion"/>).</t>
</list></t>

<t>The design of C4 is inspired by our experience using different
congestion control algorithms for QUIC,
notably Cubic <xref target="RFC9438"/>, Hystart <xref target="HyStart"/>, and BBR <xref target="I-D.ietf-ccwg-bbr"/>,
as well as the study
of delay-oriented algorithms such as TCP Vegas <xref target="TCP-Vegas"/>
and LEDBAT <xref target="RFC6817"/>. In addition, we wanted to keep the algorithm
simple and easy to implement.</t>

<t>C4 assumes that the transport stack is
capable of signaling to the congestion algorithms events such
as acknowledgements, RTT measurements, ECN signals or the detection
of packet losses. It also assumes that the congestion algorithm
controls the transport stack by setting the congestion window
(CWND) and the pacing rate.</t>

<t>C4 tracks the state of the network by keeping a small set of
variables, the main ones being 
the "nominal rate", the "nominal max RTT",
and the current state of the algorithm. The details on using and
tracking the min RTT are discussed in <xref target="react-to-delays"/>.</t>

<t>The nominal rate is the pacing rate corresponding to the most recent
estimate of the bandwidth available to the connection.
The nominal max RTT is the best estimate of the maximum RTT
that can occur on the network in the absence of queues. When we
do not observe delay jitter, this coincides with the min RTT.
In the presence of jitter, it should be the sum of the
min RTT and the maximum jitter. C4 computes a pacing
rate as the nominal rate multiplied by a coefficient that
depends on the state of the protocol, and sets the CWND for
the path to the product of that pacing rate and the nominal max RTT.
The design of these mechanisms is
discussed in <xref target="congestion"/>.</t>
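
<t>As a sketch, the two derived control values can be written as (here
<spanx style="verb">alpha</spanx> stands for the state-dependent coefficient):</t>

<figure><artwork><![CDATA[
pacing_rate = alpha * nominal_rate
cwnd = pacing_rate * nominal_max_rtt
]]></artwork></figure>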

</section>
<section anchor="react-to-delays"><name>Studying the reaction to delays</name>

<t>The current design of C4 is the result of a series of experiments.
Our initial design was to monitor delays and react to
delay increases in much the same way as
congestion control algorithms like TCP Vegas or LEDBAT:</t>

<t><list style="symbols">
  <t>monitor the current RTT and the min RTT</t>
  <t>if the current RTT sample exceeds the min RTT by more than a preset
margin, treat that as a congestion signal.</t>
</list></t>

<t>The "preset margin" is set by default to 10 ms in TCP Vegas and LEDBAT.
That was adequate when these algorithms were designed, but it can be
considered excessive in high speed low latency networks.
For the initial C4 design, we set it to the lower of 1/8th of the min RTT and 25 ms.</t>
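
<t>In pseudocode, this delay test can be sketched as (the names
<spanx style="verb">rtt_sample</spanx> and <spanx style="verb">min_rtt</spanx>
are assumed for illustration):</t>

<figure><artwork><![CDATA[
margin = min(min_rtt / 8, 25ms)
if rtt_sample > min_rtt + margin:
    # treat as a delay-based congestion signal
]]></artwork></figure>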

<t>The min RTT itself is measured over time. The detection of congestion by comparing
delays to min RTT plus margin works well, except in two conditions:</t>

<t><list style="symbols">
  <t>if the C4 connection is competing with another connection that
does not react to delay variations, such as a connection using Cubic,</t>
  <t>if the network exhibits a lot of latency jitter, as happens on
some Wi-Fi networks.</t>
</list></t>

<t>We also know that if several connections using delay-based algorithms
compete, the competition is only fair if they all have the same
estimate of the min RTT. We handle that by using a "periodic slow down"
mechanism.</t>

<section anchor="vegas-struggle"><name>Managing Competition with Loss Based Algorithms</name>

<t>Competition between Cubic and a delay based algorithm leads to Cubic
consuming all the bandwidth and the delay based connection starving.
This phenomenon forces TCP Vegas to be deployed only in controlled
environments, in which it does not have to compete with
TCP Reno <xref target="RFC6582"/> or Cubic.</t>

<t>We handled this competition issue by using a simple detection algorithm.
If C4 detected competition with a loss based algorithm, it switched
to a "pig war" mode and stopped reacting to changes in delays -- it would
instead only react to packet losses and ECN signals. In that mode,
we used another algorithm to detect when the competition had ceased,
and switched back to the delay responsive mode.</t>

<t>In our initial deployments, we detected competition when delay-based
congestion notifications led to CWND and rate
reductions for more than 3
consecutive RTTs. The assumption is that if the competing flow reacted to delay
variations, it would have reacted to the delay increases within
3 RTTs. However, that simple test caused many "false positive"
detections.</t>

<t>We refined this test to start the pig war
if we observed 4 consecutive delay-based rate reductions
and the nominal CWND was less than half the max nominal CWND
observed since the last "initial" phase, or if we observed
at least 5 reductions and the nominal CWND was less than 4/5th of
the max nominal CWND.</t>
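
<t>This refined test can be sketched in pseudocode (the names
<spanx style="verb">reduction_count</spanx>, <spanx style="verb">nominal_cwnd</spanx>
and <spanx style="verb">max_nominal_cwnd</spanx> are assumed, matching the
quantities described above):</t>

<figure><artwork><![CDATA[
pig_war = (reduction_count >= 4 and
           nominal_cwnd < max_nominal_cwnd / 2) or
          (reduction_count >= 5 and
           nominal_cwnd < 4 * max_nominal_cwnd / 5)
]]></artwork></figure>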

<t>We validated this test by comparing the
ratio <spanx style="verb">CWND/MAX_CWND</spanx> for "valid" decisions, when we are simulating
a competition scenario, and "spurious" decisions, when the
"more than 3 consecutive reductions" test fires but we are
not simulating any competition:</t>

<texttable>
      <ttcol align='left'>Ratio CWND/Max</ttcol>
      <ttcol align='left'>valid</ttcol>
      <ttcol align='left'>spurious</ttcol>
      <c>Average</c>
      <c>30%</c>
      <c>75%</c>
      <c>Max</c>
      <c>49%</c>
      <c>100%</c>
      <c>Top 25%</c>
      <c>37%</c>
      <c>91%</c>
      <c>Median</c>
      <c>35%</c>
      <c>83%</c>
      <c>Bottom 25%</c>
      <c>20%</c>
      <c>52%</c>
      <c>Min</c>
      <c>12%</c>
      <c>25%</c>
      <c>&lt;50%</c>
      <c>100%</c>
      <c>20%</c>
</texttable>

<t>Note that this validation was based on simulations, and that we cannot
claim that our simulations perfectly reflect the real world. We
discuss in <xref target="simplify"/> how these imperfections led us to change
our overall design.</t>

<t>Our initial algorithm for exiting competition mode was simple: C4 would exit the
"pig war" mode if the available bandwidth increased.</t>

</section>
<section anchor="handling-chaotic-delays"><name>Handling Chaotic Delays</name>

<t>Some Wi-Fi networks exhibit spikes in latency. These spikes are
probably what caused the delay jitter discussed in
<xref target="Cubic-QUIC-Blog"/>. We discussed them in more detail in
<xref target="Wi-Fi-Suspension-Blog"/>. We are not sure about the
mechanism behind these spikes, but we have noticed that they
mostly happen when several adjacent Wi-Fi networks are configured
to use the same frequencies and channels. In these configurations,
we expect the hidden node problem to result in some collisions.
The Wi-Fi layer 2 retransmission algorithm takes care of these
losses, but apparently uses an exponential backoff algorithm
to space retransmission delays in case of repeated collisions.
When repeated collisions occur, the exponential backoff mechanism
can cause large delays. The Wi-Fi layer 2 algorithm will also
try to maintain delivery order, and subsequent packets will
be queued behind the packet that caused the collisions.</t>

<t>In our initial design, we detected the advent of such "chaotic delay jitter" by computing
a running estimate of the max RTT. We measured the max RTT observed
in each round trip, to obtain the "era max RTT". We then computed
an exponentially averaged "nominal max RTT":</t>

<figure><artwork><![CDATA[
nominal_max_rtt = (7 * nominal_max_rtt + era_max_rtt) / 8;
]]></artwork></figure>

<t>If the nominal max RTT was more than twice the min RTT, we set the
"chaotic jitter" condition. When that condition was set, we stopped
considering excess delay as an indication of congestion,
and we changed
the way we computed the "current CWND" used for the controlled
path. Instead of simply setting it to "nominal CWND", we set it
to a larger value:</t>

<figure><artwork><![CDATA[
target_cwnd = alpha*nominal_cwnd +
              (max_bytes_acked - nominal_cwnd) / 2;
]]></artwork></figure>
<t>In this formula, <spanx style="verb">alpha</spanx> is the amplification coefficient corresponding
to the current state, such as for example 1 if "cruising" or 1.25
if "pushing" (see <xref target="congestion"/>), and <spanx style="verb">max_bytes_acked</spanx> is the largest
amount of bytes in flight that was successfully acknowledged since
the last initial phase.</t>

<t>The increased <spanx style="verb">target_cwnd</spanx> enabled C4 to keep sending data through
most jitter events. There is of course a risk that this increased
value will cause congestion. We limit that risk by only using half
the value of <spanx style="verb">max_bytes_acked</spanx>, and by setting a
conservative pacing rate:</t>

<figure><artwork><![CDATA[
target_rate = alpha*nominal_rate;
]]></artwork></figure>
<t>Using the pacing rate that way prevents the larger window from
causing big spikes in traffic.</t>

<t>The network conditions can evolve over time. C4 will keep monitoring
the nominal max RTT, and will reset the "chaotic jitter" condition
if nominal max RTT decreases below a threshold of 1.5 times the
min RTT.</t>
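
<t>The detection and reset logic of this initial design can be
summarized in pseudocode, using the thresholds described above:</t>

<figure><artwork><![CDATA[
# once per round trip:
nominal_max_rtt = (7 * nominal_max_rtt + era_max_rtt) / 8
if nominal_max_rtt > 2 * min_rtt:
    chaotic_jitter = True
elif nominal_max_rtt < 1.5 * min_rtt:
    chaotic_jitter = False
]]></artwork></figure>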

</section>
<section anchor="slowdown"><name>Monitor min RTT</name>

<t>Delay based algorithms rely on a correct estimate of the
min RTT. They will naturally discover a reduction in the min
RTT, but detecting an increase in the min RTT is difficult.
There are known failure modes when multiple delay based
algorithms compete, in particular the "late comer advantage".</t>

<t>In our initial design, the connections ensured that their min RTT was valid by
occasionally entering a "slowdown" period, during which they set
CWND to half the nominal value. This is similar to
the "Probe RTT" mechanism implemented in BBR, or the
"initial and periodic slowdown" proposed as extension
to LEDBAT in <xref target="I-D.irtf-iccrg-ledbat-plus-plus"/>. In our
implementation, the slowdown occurs if more than 5
seconds have elapsed since the previous slowdown, or
since the last time the min RTT was set.</t>

<t>The measurement of min RTT in the period
that follows the slowdown is considered a "clean"
measurement. If two consecutive slowdown periods were
followed by clean measurements larger than the current
min RTT, we detected an RTT change and reset the
connection. If the measurement resulted in the same
value as the previous min RTT, C4 continued normal
operation.</t>

<t>Some applications exhibit periods of natural slow down. This
is the case for example of multimedia applications, when
they only send differentially encoded frames. Natural
slowdown was detected if an application sent less than
half the nominal CWND during a period, and more than 4 seconds
had elapsed since the previous slowdown or the previous
min RTT update. The measurement that follows a natural
slowdown was also considered a clean measurement.</t>
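
<t>The slowdown triggers of this initial design can be sketched as
(variable names are assumed for illustration):</t>

<figure><artwork><![CDATA[
elapsed = now - max(last_slowdown_time, last_min_rtt_update)
forced_slowdown = elapsed > 5s
natural_slowdown = (bytes_sent_in_period < nominal_cwnd / 2
                    and elapsed > 4s)
if forced_slowdown or natural_slowdown:
    # the next min RTT measurement is "clean"
]]></artwork></figure>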

<t>A slowdown period corresponds to a reduction in offered
traffic. If multiple connections are competing for the same
bottleneck, each of these connections may experience cleaner
RTT measurements, leading to equalization of the min RTT
observed by these connections.</t>

</section>
</section>
<section anchor="simplify"><name>Simplifying the initial design</name>

<t>After extensive testing of our initial design, we felt we had
drifted away from our initial "simplicity" tenet. The algorithms
used to detect "pig war" and "chaotic jitter" were difficult
to tune, and despite our efforts they resulted in many
false positives or false negatives. The "slowdown" algorithm
made C4 less friendly to "real time" applications that
prefer using stable estimated rates. These algorithms
interacted with each other in ways that were sometimes
hard to predict.</t>

<section anchor="chaotic-jitter-and-rate-control"><name>Chaotic jitter and rate control</name>

<t>As we observed the chaotic jitter behavior, we came to the
conclusion that only controlling the CWND did not work well.
We had a dilemma: either use a small CWND to guarantee that
RTTs remain small, or use a large CWND so that transmission
would not stall during peaks in jitter. But if we use a large
CWND, we need some form of pacing to prevent senders from
sending a large number of packets too quickly. And then we
realized that if we do have to set a pacing rate, we can simplify
the algorithm.</t>

<t>Suppose that we compute a pacing rate that matches the network
capacity, just like BBR does. Then, to a first approximation,
setting the CWND too high does not matter much.
The number of bytes in flight will be limited by the product
of the pacing rate and the actual RTT. We are thus free to
set the CWND to a large value.</t>
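
<t>In pseudocode, the resulting steady state can be summarized as
(the names are illustrative):</t>

<figure><artwork><![CDATA[
# pacing limits the send rate; the CWND is a loose safety cap
bytes_in_flight <= pacing_rate * actual_rtt
cwnd = nominal_rate * nominal_max_rtt  # rarely the binding limit
]]></artwork></figure>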

</section>
<section anchor="monitoring-the-nominal-max-rtt"><name>Monitoring the nominal max RTT</name>

<t>The observation on chaotic jitter leads to the idea of monitoring
the maximum RTT. There is some difficulty here, because the
observed RTT has three components:</t>

<t><list style="symbols">
  <t>The minimum RTT in the absence of jitter</t>
  <t>The jitter caused by access networks such as Wi-Fi</t>
  <t>The delays caused by queues in the network</t>
</list></t>

<t>We cannot merely use the maximum value of the observed RTT,
because of the queuing delay component. In pushing periods, we
are going to use a data rate slightly higher than the measured
value. This will create a bit of queuing, pushing the queuing
delay component ever higher -- and eventually resulting in
"buffer bloat".</t>

<t>To avoid that, we can have recurring periods in which the
endpoint sends data deliberately slower than the
rate estimate. This enables a "clean" measurement
of the Max RTT.</t>

<t>However, tests showed that only measuring
the Max RTT during recovery periods is not reactive enough.
For example, if the underlying RTT changes, we would need to wait
up to 6 RTTs before registering the change. In practice, we can
measure the Max RTT in both the "recovery" and "cruising"
periods, i.e., all the periods in which data is sent at most
at the "nominal data rate".</t>

<t>If we are dealing with jitter, the clean Max RTT measurements
will include whatever jitter was happening at the time of the
measurement. It is not sufficient to measure the Max RTT once;
we must keep the maximum value of a long enough series of measurements
to capture the maximum jitter that the network can cause. But
we are also aware that jitter conditions change over time, so
we have to make sure that if the jitter diminishes, the
Max RTT also diminishes.</t>

<t>We solved that by measuring the Max RTT during the "recovery"
periods that follow every "push". These periods occur about every 6 RTTs,
giving us reasonably frequent measurements. During these periods, we
try to ensure clean measurements by
setting the pacing rate a bit lower than the nominal rate -- 6.25%
slower in our initial trials. We apply the following algorithm:</t>

<t><list style="symbols">
  <t>compute the <spanx style="verb">max_rtt_sample</spanx> as the maximum RTT observed for
packets sent during the recovery period.</t>
  <t>if the <spanx style="verb">max_rtt_sample</spanx> is more than <spanx style="verb">max_jitter</spanx> above
<spanx style="verb">running_min_rtt</spanx>, reset it to <spanx style="verb">running_min_rtt + max_jitter</spanx>
(by default, <spanx style="verb">max_jitter</spanx> is set to 250ms).</t>
  <t>if <spanx style="verb">max_rtt_sample</spanx> is larger than <spanx style="verb">nominal_max_rtt</spanx>, set
<spanx style="verb">nominal_max_rtt</spanx> to that value.</t>
  <t>else, set <spanx style="verb">nominal_max_rtt</spanx> to:</t>
</list></t>

<figure><artwork><![CDATA[
   nominal_max_rtt = gamma*max_rtt_sample + 
                     (1-gamma)*nominal_max_rtt
]]></artwork></figure>

<t>The <spanx style="verb">gamma</spanx> coefficient is set to <spanx style="verb">1/8</spanx> in our initial trials.</t>
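
<t>Putting the steps above together, the update performed after each
recovery period can be sketched as:</t>

<figure><artwork><![CDATA[
# max_rtt_sample: max RTT of packets sent during recovery
max_rtt_sample = min(max_rtt_sample,
                     running_min_rtt + max_jitter)
if max_rtt_sample > nominal_max_rtt:
    nominal_max_rtt = max_rtt_sample
else:
    nominal_max_rtt = gamma * max_rtt_sample +
                      (1 - gamma) * nominal_max_rtt
]]></artwork></figure>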

<section anchor="preventing-runaway-max-rtt"><name>Preventing Runaway Max RTT</name>

<t>Computing the Max RTT the way we do bears the risk of a "runaway increase"
of the Max RTT:</t>

<t><list style="symbols">
  <t>C4 notices high jitter, increases the Nominal Max RTT accordingly, and sets CWND to the
product of the increased Nominal Max RTT and the Nominal Rate,</t>
  <t>If the Nominal Rate is above the actual link rate, C4 will fill the pipe and create a queue,</t>
  <t>On the next measurement, C4 finds that the max RTT has increased because of the queue,
interprets that as "more jitter", increases the Max RTT, and fills the queue some more,</t>
  <t>Repeat until the queue becomes so large that packets are dropped and cause a
congestion event.</t>
</list></t>

<t>Our proposed algorithm limits the Max RTT to at most <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but that is still risky. If congestion causes queues, the running measurements of <spanx style="verb">min RTT</spanx>
will increase, causing the algorithm to allow for corresponding increases in <spanx style="verb">max RTT</spanx>.
This would not happen as fast as without the capping to <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but it would still increase.</t>

</section>
<section anchor="initial-phase-and-max-rtt"><name>Initial Phase and Max RTT</name>

<t>During the initial phase, the nominal max RTT and the running min RTT are
set to the first RTT value that is measured. This is not great in the presence
of high jitter, which causes C4 to exit the Initial phase early, leaving
the nominal rate way too low. If C4 is competing on the Wi-Fi link
against another connection, it might remain stalled at this low data rate.</t>

<t>We considered updating the Max RTT during the Initial phase, but that
prevents any detection of delay based congestion. The Initial phase
would continue until path buffers are full, a classic case of buffer
bloat. Instead, we adopted a simple workaround:</t>

<t><list style="symbols">
  <t>Maintain a flag "initial_after_jitter", initialized to 0.</t>
  <t>Get a measure of the max RTT after exit from initial.</t>
  <t>If C4 detects a "high jitter" condition and the
"initial_after_jitter" flag is still 0, set the
flag to 1 and re-enter the "initial" state.</t>
</list></t>

<t>Empirically, we detect high jitter in that case if the "running min RTT"
is less than 2/5th of the "nominal max RTT".</t>
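
<t>The workaround can be sketched in pseudocode:</t>

<figure><artwork><![CDATA[
# after exit from the initial phase, once a max RTT
# measurement is available:
high_jitter = running_min_rtt < 2 * nominal_max_rtt / 5
if high_jitter and initial_after_jitter == 0:
    initial_after_jitter = 1
    state = INITIAL  # re-enter the initial state
]]></artwork></figure>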

</section>
</section>
<section anchor="monitor-rate"><name>Monitoring the nominal rate</name>

<t>The nominal rate is measured on each acknowledgement by dividing
the number of bytes acknowledged since the packet was sent
by the RTT measured with the acknowledgement of the packet,
protecting against delay jitter as explained in
<xref target="rate-measurement"/>, without additional filtering
as discussed in <xref target="not-filtering"/>.</t>

<t>We only use the measurements to increase the nominal rate,
replacing the current value if we observe a greater filtered measurement.
This is a deliberate choice, as decreases in the measurements are ambiguous.
They can result from the application being rate limited, or from
measurement noise. Following those decreases causes the rate to drift down
randomly over time, which can be detrimental for rate limited applications.
If the network conditions have changed, the rate will
be reduced if congestion signals are received, as explained
in <xref target="congestion"/>.</t>

<section anchor="rate-measurement"><name>Rate measurement</name>

<t>This simple algorithm protects against underestimating the
delay by observing that
delivery rates cannot be larger than the rate at which the
packets were sent, thus keeping the lower of the estimated
receive rate and the send rate.</t>

<t>The algorithm uses four input variables:</t>

<t><list style="symbols">
  <t><spanx style="verb">current_time</spanx>: the time when the acknowledgment is received.</t>
  <t><spanx style="verb">send_time</spanx>: the time at which the highest acknowledged
packet was sent.</t>
  <t><spanx style="verb">bytes_acknowledged</spanx>: the number of bytes acknowledged
 by the receiver between <spanx style="verb">send_time</spanx> and <spanx style="verb">current_time</spanx></t>
  <t><spanx style="verb">first_sent</spanx>: the time at which the packet containing
the first acknowledged bytes was sent.</t>
</list></t>

<t>The computation goes as follows:</t>

<figure><artwork><![CDATA[
ack_delay = current_time - send_time
send_delay = send_time - first_sent
measured_rate = bytes_acknowledged /
                max(ack_delay, send_delay)
]]></artwork></figure>

<t>This is in line with the specification of rate measurement
in <xref target="I-D.ietf-ccwg-bbr"/>.</t>

<t>We use the data rate measurement to update the
nominal rate, but only if not congested (see <xref target="congestion-bounce"/>):</t>

<figure><artwork><![CDATA[
if measured_rate > nominal_rate and not congested:
    nominal_rate = measured_rate
]]></artwork></figure>

</section>
<section anchor="congestion-bounce"><name>Avoiding Congestion Bounce</name>

<t>In our early experiments, we observed a "congestion bounce"
that happened as follows:</t>

<t><list style="symbols">
  <t>congestion is detected, the nominal rate is reduced, and
C4 enters recovery.</t>
  <t>packets sent at the data rate that caused the congestion
continue to be acknowledged during recovery.</t>
  <t>if enough packets are acknowledged, they will cause
a rate measurement close to the previous nominal rate.</t>
  <t>if C4 accepts this new nominal rate, the flow will
bounce back to the previous transmission rate, erasing
the effects of the congestion signal.</t>
</list></t>

<t>Since we do not want that to happen, we specify that the
nominal rate cannot be updated during congested periods,
defined as:</t>

<t><list style="symbols">
  <t>C4 is in "recovery" state,</t>
  <t>The recovery state was entered following a congestion signal,
or a congestion signal was received since the beginning
of the recovery era.</t>
</list></t>
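
<t>In pseudocode, the resulting guard on rate updates can be expressed
as (the flag names are assumed):</t>

<figure><artwork><![CDATA[
congested = (state == RECOVERY and
             congestion_signal_seen_this_recovery)
if measured_rate > nominal_rate and not congested:
    nominal_rate = measured_rate
]]></artwork></figure>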

</section>
<section anchor="not-filtering"><name>Not filtering the measurements</name>

<t>There is some noise in the measurements of the data rate, and we
protect against that noise by retaining the maximum of the
<spanx style="verb">ack_delay</spanx> and the <spanx style="verb">send_delay</spanx>. During early experiments,
we considered smoothing the measurements to eliminate that
noise.</t>

<t>The best filter that we could define operated by
smoothing the inverse of the data rate, the "time per byte sent".
This works better because the data rate measurements are the
quotient of the number of bytes received by the delay.
The number of bytes received is
easy to ascertain, but the measurements of the delays are very noisy.
Instead of trying to average the data rates, we can average
their inverses, i.e., the quotients of the delay by the
bytes received, the times per byte. Then we can obtain
smoothed data rates as the inverse of these times per byte,
effectively computing a harmonic average of measurements
over time. We could for example
compute an exponentially weighted moving average
of the time per byte, and use the inverse of that
as a filtered measurement of the data rate.</t>
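
<t>Such a filter, which we considered but did not adopt, could look
as follows (the smoothing weight 1/8 is an illustrative value):</t>

<figure><artwork><![CDATA[
time_per_byte = max(ack_delay, send_delay) / bytes_acknowledged
smoothed_tpb = (7 * smoothed_tpb + time_per_byte) / 8
filtered_rate = 1 / smoothed_tpb
]]></artwork></figure>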

<t>We do not specify any such filter in C4 because, while
filtering would reduce the noise, it would also delay
any observation, resulting in a somewhat sluggish
response to changes in network conditions. Experience
shows that the precaution of using the max of the
ack delay and the send delay as a divider is sufficient
for stable operation, and does not cause the response
delays that filtering would.</t>

</section>
</section>
<section anchor="early-congestion-modification"><name>Explicit Congestion Notification</name>

<t>We want C4 to handle Explicit Congestion Notification (ECN) in a manner
compatible with the L4S design. For that, we monitor
the evolving ratio of CE marks that the L4S specification
designates as <spanx style="verb">alpha</spanx>
(we use <spanx style="verb">ecn_alpha</spanx> here to avoid confusion),
and we detect congestion if the ratio grows over a threshold.</t>

<t>We did not find a recommended algorithm for computing <spanx style="verb">ecn_alpha</spanx>
in either <xref target="RFC9330"/> or <xref target="RFC9331"/>, but we could get some
concrete suggestions in <xref target="I-D.briscoe-iccrg-prague-congestion-control"/>.
That draft, now obsolete, suggests updating the ratio once per
RTT, as the exponential weighted average of the fraction of
CE marks per packet:</t>

<figure><artwork><![CDATA[
frac = nb_CE / (nb_CE + nb_ECT1)
ecn_alpha += (frac - ecn_alpha)/16
]]></artwork></figure>

<t>This kind of averaging introduces a reaction delay. The draft suggests mitigating that
delay by preempting the averaging if the fraction is large:</t>

<figure><artwork><![CDATA[
if frac > 0.5:
    ecn_alpha = frac
]]></artwork></figure>

<t>We followed that design, but decided to update the coefficient after
each acknowledgement, instead of after each RTT. This is in line with
our implementation of "delayed acknowledgements" in QUIC, which
results in a small number of acknowledgements per RTT.</t>
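
<t>Combining the two rules above with the per-acknowledgement update,
our variant can be sketched as:</t>

<figure><artwork><![CDATA[
# on each acknowledgement reporting ECN counts:
frac = nb_CE / (nb_CE + nb_ECT1)
if frac > 0.5:
    ecn_alpha = frac
else:
    ecn_alpha += (frac - ecn_alpha) / 16
]]></artwork></figure>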

<t>The reaction of C4 to an excess of CE marks is similar to the
reaction to excess delays or to packet losses, see <xref target="congestion"/>.</t>

</section>
</section>
<section anchor="competition-with-other-algorithms"><name>Competition with other algorithms</name>

<t>We saw in <xref target="vegas-struggle"/> that delay based algorithms required
a special "escape mode" when facing competition from algorithms
like Cubic. Relying on pacing rate and max RTT instead of CWND
and min RTT makes this problem much simpler. The measured max RTT
will naturally increase as algorithms like Cubic cause buffer
bloat and increased queues. Instead of being shut down,
C4 will just keep increasing its max RTT and thus its running
CWND, automatically matching the other algorithm's values.</t>

<t>We verified that behavior in a number of simulations. We also
verified that when the competition ceases, C4 will progressively
drop its nominal max RTT, returning to situations with very low
queuing delays.</t>

<section anchor="no-need-for-slowdowns"><name>No need for slowdowns</name>

<t>The fairness of delay based algorithms depends on all competing
flows having similar estimates of the min RTT. As discussed
in <xref target="slowdown"/>, this ends up creating variants of the
<spanx style="verb">latecomer advantage</spanx> issue, requiring a periodic slowdown
mechanism to ensure that all competing flows have a chance to
update the RTT value.</t>

<t>This problem is caused by the default algorithm of setting the
min RTT to the minimum of all RTT sample values since the beginning
of the connection. Flows that started more recently compute
that minimum over a shorter period, and thus discover a larger
min RTT than older flows. This problem does not exist with the
max RTT, because all competing flows see the same max RTT
value. The slowdown mechanism is thus not necessary.</t>

<t>Removing the need for a slowdown mechanism allows for a
simpler protocol, better suited to real time communications.</t>

</section>
</section>
<section anchor="congestion"><name>React quickly to changing network conditions</name>

<t>Our focus is on maintaining low delays, and thus reacting
quickly to changes in network conditions. We can detect some of these
changes by monitoring the RTT and the data rate, but
experience with the early version of BBR showed that
completely ignoring packet losses can lead to very unfair
competition with Cubic. The L4S effort is promoting the use
of ECN feedback by network elements (see <xref target="RFC9331"/>),
which could well end up detecting congestion and queues
more precisely than the monitoring of end-to-end delays.
C4 will thus detect changing network conditions by monitoring
3 congestion control signals:</t>

<t><list style="numbers" type="1">
  <t>Excessive increase of measured RTT (above the nominal Max RTT),</t>
  <t>Excessive rate of packet losses (but not mere Probe Time Out, see <xref target="no-pto"/>),</t>
  <t>Excessive rate of ECN/CE marks.</t>
</list></t>
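<t>A minimal sketch of this three-signal check follows; the threshold parameters are illustrative, not taken from the C4 implementation.</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch of the three congestion signals listed above.
def congestion_signal(rtt_s, nominal_max_rtt_s, delay_margin_s,
                      loss_ratio, loss_threshold,
                      ce_ratio, ce_threshold):
    """Return the first congestion signal detected, or None."""
    if rtt_s > nominal_max_rtt_s + delay_margin_s:
        return "delay"
    if loss_ratio > loss_threshold:
        return "loss"
    if ce_ratio > ce_threshold:
        return "ce"
    return None

sig = congestion_signal(0.055, 0.040, 0.010, 0.0, 0.02, 0.0, 1/32)
```
]]></artwork></figure>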

<t>If any of these signals is detected, C4 enters a "recovery"
state. On entering recovery, C4 reduces the <spanx style="verb">nominal_rate</spanx>
by a factor "beta":</t>

<figure><artwork><![CDATA[
    # on congestion detected:
    nominal_rate = (1-beta)*nominal_rate
]]></artwork></figure>
<t>The coefficient <spanx style="verb">beta</spanx> differs depending on the nature of the congestion
signal. For packet losses, it is set to <spanx style="verb">1/4</spanx>, similar to the
value used in Cubic. For delay signals, it is proportional to the
difference between the measured RTT and the target RTT divided by
the acceptable margin, capped at <spanx style="verb">1/4</spanx>. If the signal
is an excessive ECN/CE mark rate, we may
use a proportional reduction coefficient in line with
<xref target="RFC9331"/>, again capped at <spanx style="verb">1/4</spanx>.</t>
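<t>The beta selection might be sketched as follows; the exact proportional forms for the delay and ECN cases are our interpretation, not the C4 formulas.</t>

<figure><artwork><![CDATA[
```python
# Sketch of the per-signal beta, capped at 1/4 in all cases.
def beta_for_signal(signal, measured_rtt=0.0, target_rtt=0.0,
                    margin=0.025, ce_ratio=0.0):
    cap = 0.25
    if signal == "loss":
        return cap                      # same reduction as Cubic
    if signal == "delay":
        excess = max(0.0, measured_rtt - target_rtt)
        return min(cap, cap * excess / margin)
    if signal == "ce":
        return min(cap, ce_ratio / 2)   # proportional, in the spirit of L4S
    return 0.0

def on_congestion(nominal_rate, beta):
    """Apply the reduction shown in the figure above."""
    return (1 - beta) * nominal_rate
```
]]></artwork></figure>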

<t>During the recovery period, target CWND and pacing rate are set
to a fraction of the "nominal rate" multiplied by the
"nominal max RTT".
The recovery period ends when the first packet
sent after entering recovery is acknowledged. Congestion
signals are processed when entering recovery; further signals
are ignored until the end of recovery.</t>

<t>Network conditions may change for the better or for the worse. Worsening
is detected through congestion signals, but increases can only be detected
by trying to send more data and checking whether the network accepts it.
Different algorithms have handled this in two ways: pursuing regular increases of
the CWND until congestion finally occurs, as in the "congestion
avoidance" phase of TCP Reno; or periodically probing the network
by sending at a higher rate, as in the Probe Bandwidth mechanism of
BBR. C4 adopts the periodic probing approach, in particular
because it is a better fit for variable rate multimedia applications
(see details in <xref target="limited"/>).</t>

<section anchor="no-pto"><name>Do not react to Probe Time Out</name>

<t>QUIC normally detects losses by observing gaps in the sequence of acknowledged
packets. That's a robust signal. QUIC will also inject "probe time out"
packets if the PTO timer elapses before the last sent packet has been acknowledged.
This is not a robust congestion signal, because delay jitter may also cause
PTO timeouts. When testing in "high jitter" conditions, we realized that we should
not change the state of C4 for losses detected solely based on timers, and
only react to those losses that are detected by gaps in acknowledgements.</t>
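<t>The rule can be captured in one line; the event representation below is hypothetical.</t>

<figure><artwork><![CDATA[
```python
# Sketch: only losses detected through gaps in the acknowledged sequence
# feed the congestion logic; timer-based (PTO) losses are ignored.
def count_congestion_losses(loss_events):
    """loss_events: iterable of detection causes, "gap" or "timer"."""
    return sum(1 for cause in loss_events if cause == "gap")

n = count_congestion_losses(["gap", "timer", "gap", "timer"])  # 2
```
]]></artwork></figure>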

</section>
<section anchor="rate-update"><name>Update the Nominal Rate after Pushing</name>

<t>C4 configures the transport with a larger rate and CWND
than the nominal values during "pushing" periods.
The peer will acknowledge the data sent during these periods in
the round trip that follows.</t>

<t>When we receive an ACK for a newly acknowledged packet,
we update the nominal rate as explained in <xref target="monitor-rate"/>.</t>

<t>This strategy is effectively a form of "make before break".
The pushing
only increases the rate by a fraction of the nominal values,
and only lasts for one round trip. That limited increase is not
expected to increase the size of queues by more than a small
fraction of the bandwidth*delay product. It might cause a
slight increase of the measured RTT for a short period, or
perhaps cause some ECN signaling, but it should not cause packet
losses -- unless competing connections have caused large queues.
If there was no extra
capacity available, C4 does not increase the nominal CWND and
the connection continues with the previous value.</t>
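<t>A minimal sketch of this asymmetric update, with names of our choosing:</t>

<figure><artwork><![CDATA[
```python
# "Make before break" sketch: the nominal rate only moves up when the
# acknowledged delivery rate measured after a push exceeds it; a push
# that found no extra capacity leaves the previous value untouched.
def update_nominal_rate(nominal_rate, measured_rate):
    return max(nominal_rate, measured_rate)

raised = update_nominal_rate(10_000_000, 11_500_000)  # capacity available
kept = update_nominal_rate(10_000_000, 9_800_000)     # keep previous value
```
]]></artwork></figure>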

</section>
</section>
<section anchor="fairness"><name>Driving for fairness</name>

<t>Many protocols enforce fairness by tuning their behavior so
that large flows become less aggressive than smaller ones, either
by trying less hard to increase their bandwidth or by reacting
more to congestion events. We considered adopting a similar
strategy for C4.</t>

<t>The aggressiveness of C4 is driven by several considerations:</t>

<t><list style="symbols">
  <t>the frequency of the "pushing" periods,</t>
  <t>the coefficient <spanx style="verb">alpha</spanx> used during pushing,</t>
  <t>the coefficient <spanx style="verb">beta</spanx> used during response to congestion events,</t>
  <t>the delay threshold above a nominal value to detect congestion,</t>
  <t>the ratio of packet losses considered excessive,</t>
  <t>the ratio of ECN marks considered excessive.</t>
</list></t>

<t>We clearly want to have some or all of these parameters depend
on how much resource the flow is using.
There are known limits to these strategies. For example,
consider TCP Reno, in which the growth rate of the CWND during the
"congestion avoidance" phase is inversely proportional to its size.
This drives very good long term fairness, but in practice
it prevents TCP Reno from operating well on high speed or
high delay connections, as discussed in the "problem description"
section of <xref target="RFC3649"/>. In that RFC, Sally Floyd proposed
using a growth rate inversely proportional to the
logarithm of the CWND, which would not be so drastic.</t>

<t>In the initial design, we proposed making the frequency of the
pushing periods inversely proportional to the logarithm of the
CWND, but that is in tension with our estimation of
the max RTT, which requires frequent "recovery" periods.
We would not want the max RTT estimate to work less well for
high speed connections! We resolved the tension in favor of
reliable max RTT estimates, and fixed at 4 the number
of cruising periods between recovery and pushing. The whole
cycle takes about 6 RTT.</t>

<t>We also reduced the default rate increase during pushing to
6.25%, which means that the default cycle is more or less on
par with the aggressiveness of Reno when
operating at low bandwidth (lower than 34 Mbps).</t>

<section anchor="absence-of-constraints-is-unfair"><name>Absence of constraints is unfair</name>

<t>Once we fixed the push frequency and the default increase rate, we were
left with responses that were mostly proportional to the amount
of resource used by a connection. Such a design makes the resource sharing
very dependent on initial conditions. We saw simulations where
after some initial period, one of two competing connections on
a 20 Mbps path might settle at a 15 Mbps rate and the other at 5 Mbps.
Both connections would react to a congestion event by dropping
their bandwidth by 25%, to 11.25 or 3.75 Mbps. And then once the condition
eased, both would increase their data rate by the same amount. If
everything went well the two connections would share the bandwidth
without exceeding it, and the situation would be very stable --
but also very much unfair.</t>

<t>We also had some simulations in which a first connection would
grab all the available bandwidth, and a latecomer connection
would struggle to get any bandwidth at all. The analysis
showed that the second connection was
exiting the initial phase early, after encountering either
excess delay or excess packet loss. The first
connection was saturating the path, any additional traffic
caused queuing or losses, and the second connection had
no chance to grow.</t>

<t>This "second comer shut down" effect happened particularly often
on high jitter links. The established connections had tuned their
timers or congestion window to account for the high jitter. The
second connection was basing its timers on its first
measurements, before any of the big jitter events had occurred.
This caused an imbalance between the first connection, which
expected large RTT variations, and the second, which did not
expect them yet.</t>

<t>These shutdown effects happened in simulations with the first
connection using either Cubic, BBR or C4. We had to design a response,
and we first turned to making the response to excess delay or
packet loss a function of the data rate of the flow.</t>

</section>
<section anchor="sensitivity-curve"><name>Introducing a sensitivity curve</name>

<t>In our second design, we attempted to fix the unfairness and
shutdown effects by introducing a sensitivity curve,
computing a "sensitivity" as a function of the flow data
rate. Our first implementation is simple:</t>

<t><list style="symbols">
  <t>set sensitivity to 0 if the data rate is lower than 50,000 B/s,</t>
  <t>linear interpolation between 0 and 0.92 for values
between 50,000 and 1,000,000 B/s,</t>
  <t>linear interpolation between 0.92 and 1 for values
between 1,000,000 and 10,000,000 B/s,</t>
  <t>set sensitivity to 1 if the data rate is higher than
10,000,000 B/s.</t>
</list></t>
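<t>The curve above translates directly into a piecewise-linear function:</t>

<figure><artwork><![CDATA[
```python
# Transcription of the piecewise-linear sensitivity curve described above;
# breakpoints are in bytes per second, as in the text.
def sensitivity(rate_Bps):
    if rate_Bps <= 50_000:
        return 0.0
    if rate_Bps <= 1_000_000:
        return 0.92 * (rate_Bps - 50_000) / (1_000_000 - 50_000)
    if rate_Bps <= 10_000_000:
        return 0.92 + 0.08 * (rate_Bps - 1_000_000) / 9_000_000
    return 1.0
```
]]></artwork></figure>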

<t>The sensitivity index is then used to set the value of delay and
loss thresholds. For the delay threshold, the rule is:</t>

<figure><artwork><![CDATA[
    delay_fraction = 1/16 + (1 - sensitivity)*3/16
    delay_threshold = min(25ms, delay_fraction*nominal_max_rtt)
]]></artwork></figure>

<t>For the loss threshold, the rule is:</t>

<figure><artwork><![CDATA[
loss_threshold = 0.02 + 0.50 * (1-sensitivity);
]]></artwork></figure>

<t>For the CE mark threshold, the rule is:</t>

<figure><artwork><![CDATA[
ce_mark_threshold = 1/32 + 1/32 * (1-sensitivity);
]]></artwork></figure>

<t>This very simple change allowed us to stabilize the results. In our
competition tests we see resources shared almost equitably between
C4 connections, and reasonably between C4 and Cubic or C4 and BBR.
We do not observe the shutdown effects that we saw before.</t>

<t>There is no doubt that the current curve will have to be refined. We have
a couple of tests in our test suite with total capacity higher than
20 Mbps, and for those tests the dependency on initial conditions remains.
We will revisit the definition of the curve, probably to have the sensitivity
follow the logarithm of the data rate.</t>

</section>
<section anchor="cascade"><name>Cascade of Increases</name>

<t>We sometimes encounter networks in which the available bandwidth changes rapidly.
For example, when a competing connection stops, the available capacity may double.
With low Earth orbit satellite constellations (LEO), it appears
that ground stations constantly check availability of nearby satellites, and
switch to a different satellite every 10 or 15 seconds depending on the
constellation (see <xref target="ICCRG-LEO"/>), with the bandwidth jumping from 10Mbps to
65Mbps.</t>

<t>Because we aim for fairness with Reno or Cubic, the cycle of recovery, cruising
and pushing will only result in slow increases, maybe 6.25% after 6 RTT.
This means we would only double the bandwidth after about 68 RTT, or increase
from 10 to 65 Mbps after 185 RTT -- by which time the LEO station might
have connected to a different orbiting satellite. To go faster, we implement
a "cascade": if the previous push at 6.25% was successful, the next
pushing will use 25% (see <xref target="variable-pushing"/>), or an intermediate
value if the observed ratio of ECN marks is greater than 0. If three successive pushes
all result in increases of the
nominal rate, C4 will reenter the "startup" mode, during which each RTT
can result in a 100% increase of rate and CWND.</t>
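<t>The cascade can be sketched as a small update rule; the state representation is ours, and the ECN-modulated intermediate value is omitted for brevity.</t>

<figure><artwork><![CDATA[
```python
# Sketch of the cascade: a successful push escalates the next push rate
# from 6.25% to 25%, and three consecutive successes reenter startup.
def cascade_step(success, consecutive_successes):
    """Return (next_state, next_push_rate, consecutive_successes)."""
    if not success:
        return "pushing", 0.0625, 0
    consecutive_successes += 1
    if consecutive_successes >= 3:
        return "startup", 0.25, 0
    return "pushing", 0.25, consecutive_successes

state, rate, streak = cascade_step(True, 0)   # escalate to 25%
state3, _, _ = cascade_step(True, 2)          # third success: back to startup
```
]]></artwork></figure>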

</section>
</section>
<section anchor="limited"><name>Supporting Application Limited Connections</name>

<t>C4 is specifically designed to support multimedia applications,
which very often operate in application limited mode.
After testing and simulating application limited workloads,
we incorporated a number of features.</t>

<t>The first feature is the design decision to only lower the nominal
rate if congestion is detected. This is in contrast with the BBR design,
in which the estimate of bottleneck bandwidth is also lowered
if the bandwidth measured after a "probe bandwidth" attempt is
lower than the current estimate while the connection was not
"application limited". We found that detection of the application
limited state was somewhat error prone. Occasional errors end up
with a spurious reduction of the estimate of the bottleneck bandwidth.
These errors can accumulate over time, causing the bandwidth
estimate to "drift down", and the multimedia experience to suffer.
Our strategy of only reducing the nominal values in
reaction to congestion notifications much reduces that risk.</t>

<t>The second feature is the "make before break" nature of the rate
updates discussed in <xref target="rate-update"/>. This reduces the risk
of using rates that are too large and would cause queues or losses,
and thus makes C4 a good choice for multimedia applications.</t>

<t>C4 adds two more features to handle multimedia
applications well: coordinated pushing (see <xref target="coordinated-pushing"/>),
and variable pushing rate (see <xref target="variable-pushing"/>).</t>

<section anchor="coordinated-pushing"><name>Coordinated Pushing</name>

<t>As stated in <xref target="fairness"/>, the connection will remain in "cruising"
state for a specified interval, and then move to "pushing". This works well
when the connection is almost saturating the network path, but not so
well for a media application that uses little bandwidth most of the
time, and only needs more bandwidth when it is refreshing the state
of the media encoders and sending new "reference" frames. If that
happens, pushing will only be effective if the pushing interval
coincides with the sending of these reference frames. If pushing
happens during an application limited period, there will be no data to
push with and thus no chance of increasing the nominal rate and CWND.
If the reference frames are sent outside of a pushing interval, the
rate and CWND will be kept at the nominal value.</t>

<t>To break that issue, one could imagine sending "filler" traffic during
the pushing periods. We tried that in simulations, and the drawback became
obvious. The filler traffic would sometimes cause queues and packet
losses, which degrade the quality of the multimedia experience.
We could reduce this risk of packet losses by sending redundant traffic,
for example creating the additional traffic using a forward error
correction (FEC) algorithm, so that individual packet losses are
immediately corrected. However, this is complicated, and FEC does
not always protect against long batches of losses.</t>

<t>C4 uses a simpler solution. When the time has come to enter pushing, it
will check whether the connection is "application limited", which is
simply defined as testing whether the application sent a "nominal CWND"
worth of data during the previous interval. If it is application limited,
C4 will remain
in cruising state until the application finally sends more data, and
will only enter the pushing state when the last period was
not application limited.</t>
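<t>The check can be sketched in a few lines; the names are illustrative.</t>

<figure><artwork><![CDATA[
```python
# Sketch of coordinated pushing: only enter the pushing state if the
# application sent at least a nominal CWND worth of data in the last
# interval, i.e. the connection was not application limited.
def may_enter_pushing(bytes_sent_last_interval, nominal_cwnd):
    app_limited = bytes_sent_last_interval < nominal_cwnd
    return not app_limited

push_now = may_enter_pushing(300_000, 250_000)       # not app limited
keep_cruising = may_enter_pushing(100_000, 250_000)  # wait for more data
```
]]></artwork></figure>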

</section>
<section anchor="variable-pushing"><name>Variable Pushing Rate</name>

<t>C4 tests for available bandwidth at regular pushing intervals
(see <xref target="fairness"/>), during which the rate and CWND are set at 25% more
than the nominal values. This mimics what BBR
does, but may be less than ideal for real time applications.
When in pushing state, the application is allowed to send
more data than the nominal CWND, which causes temporary queues
and degrades the experience somewhat. On the other hand, not pushing
at all would not be a good option, because the connection could
end up stuck using only a fraction of the available
capacity. We thus have to find a compromise between operating at
low capacity and risking building queues.</t>

<t>We manage that compromise by adopting a variable pushing rate:</t>

<t><list style="symbols">
  <t>If pushing at 25% did not result in a significant increase of
the nominal rate, the next pushing will happen at 6.25%.</t>
  <t>If pushing at 6.25% did result in some increase of the nominal CWND,
the next pushing will happen at 25%; otherwise it will
remain at 6.25%.</t>
</list></t>

<t>If the observed ratio of ECN-CE marks is greater than zero, we will
use it to modulate the amount of pushing. We leave the pushing rate
at 6.25% if the previous pushing attempt was not successful, but
otherwise we pick a value intermediate between 25% (if 0 ECN marks)
and 6.25% (if the ratio of ECN marks approaches the threshold).</t>
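<t>Combining the two rules, the push-rate selection might look like this sketch; the linear interpolation toward 6.25% as CE marks approach the threshold is our reading of the text, not the exact C4 formula.</t>

<figure><artwork><![CDATA[
```python
# Sketch of the variable pushing rate with ECN modulation.
def next_push_rate(last_push_succeeded, ce_ratio, ce_threshold):
    if not last_push_succeeded:
        return 0.0625
    if ce_ratio <= 0.0:
        return 0.25
    # scale from 25% down toward 6.25% as CE marks approach the threshold
    frac = min(1.0, ce_ratio / ce_threshold)
    return 0.25 - frac * (0.25 - 0.0625)

full = next_push_rate(True, 0.0, 1/32)    # 0.25
low = next_push_rate(False, 0.0, 1/32)    # 0.0625
mid = next_push_rate(True, 1/64, 1/32)    # halfway: 0.15625
```
]]></artwork></figure>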

<t>As explained in <xref target="cascade"/>, if three consecutive pushing attempts
result in significant increases, C4 detects that the underlying network
conditions have changed, and will reenter the startup state.</t>

<t>The "significant increase" mentioned above is a matter of debate.
Even if capacity is available,
increasing the send rate by 25% does not always result in a 25%
increase of the acknowledged rate. Delay jitter, for example,
may result in lower measurements. We initially computed the threshold
for detecting a "significant" increase as 1/2 of the increase in
the sending rate, but multiple simulations showed that this was too high
and caused lower performance. We now set that threshold to 1/4 of the
increase in the sending rate.</t>

</section>
<section anchor="pushing-rate-and-cascades"><name>Pushing rate and Cascades</name>

<t>The choice of a 25% push rate was motivated by discussions of
the BBR design. Pushing has two parallel functions: discovering the available
capacity, if any; and also pushing back against other connections
in case of competition. Consider for example competition with Cubic.
The Cubic connection will only back off if it observes packet losses,
which typically happen when the bottleneck buffers are full. Pushing
at a high rate increases the chance of building queues,
overfilling the buffers, causing losses, and thus causing Cubic to back off.
Pushing at a lower rate like 6.25% would not have that effect, and C4
would keep using a lower share of the network. This is why we always
push at 25% in the "pig war" mode.</t>

<t>The computation of the interval between pushes is tied to the need to
compete nicely, and follows the general idea that
the average growth rate should mimic that of Reno or Cubic in the
same circumstances. If we pick a lower push rate, such as 6.25% or
maybe 12.5%, we might be able to use shorter intervals. This could be
a nice compromise: in normal operation, push frequently, but at a
low rate. This would not create large queues or disturb competing
connections, but it would let C4 discover capacity more quickly. Then,
we could use the "cascade" algorithm to push at a higher rate,
and then maybe switch to startup mode if a lot of capacity is
available. This is something that we intend to test, but have not
implemented yet.</t>

</section>
<section anchor="adaptation-to-ecnl4s"><name>Adaptation to ECN/L4S</name>

<t>Tests with L4S active queue management showed the tension between the
periodic updates and the L4S goal of minimizing queue sizes. Typical L4S deployments
start marking packets with ECN/CE when the queuing delay is about 1.5 ms, and
increase the mark rate progressively as the queue size increases,
reaching 100% when the queuing delay is about 2 ms. If C4 pushes at 25% every 6 RTT,
and if the bandwidth estimate is accurate,
the queue size will increase by 25% of the RTT during the first roundtrip,
before any correction signal can be applied. The increased marking
rate will affect all connections sharing the bottleneck, which is
not desirable.</t>

<t>L4S is tuned for the "Prague" algorithm, which increases the CWND by one packet every
RTT. In a typical trial with a 20 ms RTT and a 100 Mbps data rate, it takes 0.12 ms
to send a packet, and thus 12.5 RTT before building a queue of 1.5 ms. In the same
conditions, C4 would have increased the rate by 25% after 6 RTT in the
aggressive scenario, thus triggering a high rate of marking.</t>

<t>The cascade process made the problem even worse. If a push at 6.25% increases
the nominal rate, the next push will be at 25%. If that push and the next one
also increase the nominal rate, C4 will reenter the initial phase, even if some
of the pushes did cause ECN/CE marks. The initial phase will then cause a lot
of packet losses, which will degrade performance.</t>

<t>To mitigate this issue, we had to add a "very low" pushing mode, setting the
pushing rate to only 3.125% if the previous push resulted in a high rate of ECN/CE marks.
We also replaced the somewhat ad hoc "count of successive probes" with the management
of a "probe level", defining 4 levels:</t>

<t><list style="symbols">
  <t>level 0: pushing at 3.125%, spend 1 cycle in cruising before pushing.</t>
  <t>level 1: pushing at 6.25%, spend 4 cycles in cruising before pushing.</t>
  <t>level 2: pushing at 25%, spend at most 1 cycle in cruising before pushing.</t>
  <t>level 3: pushing at 25%, spend at most 1 cycle in cruising before pushing.</t>
</list></t>

<t>The "probe level" is updated after the recovery phase as follows:</t>

<t><list style="symbols">
  <t>if the previous probe was successful and did not result in a high rate of ECN/CE marks,
increase the probe level by 1. If the probe level was already at 3, reenter the startup phase.</t>
  <t>if the previous probe was successful but did result in a high rate of ECN/CE marks,
remain at the same probe level.</t>
  <t>if the previous probe was not successful but did not result in a high rate of ECN/CE marks,
stay at probe level 0 if already at that level, otherwise move back to probe level 1.</t>
  <t>if the previous probe was not successful and did result in a high rate of ECN/CE marks,
move to probe level 0.</t>
</list></t>
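<t>These four rules can be transcribed directly; the "startup" sentinel for the level-3 escalation is our convention.</t>

<figure><artwork><![CDATA[
```python
# Transcription of the probe-level update rules listed above.
PUSH_RATE = {0: 0.03125, 1: 0.0625, 2: 0.25, 3: 0.25}

def update_probe_level(level, success, high_ce_rate):
    """Return the next probe level, or "startup" to reenter startup."""
    if success and not high_ce_rate:
        return "startup" if level == 3 else level + 1
    if success and high_ce_rate:
        return level
    if not success and not high_ce_rate:
        return 0 if level == 0 else 1
    return 0  # unsuccessful, with a high rate of CE marks
```
]]></artwork></figure>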

<t>This logic treats the CE marking differently from other congestion signals, because
the CE marks are an intentional indication of congestion by the network, and are thus
less ambiguous than delay increases or packet losses, which can be caused by other
factors such as delay jitter or random transmission issues. Simulations show that
this logic allows C4 to quickly discover the available capacity in L4S networks, without spuriously
reentering the startup phase and causing packet losses. It is equivalent to the
previous logic when the network does not support L4S.</t>

</section>
</section>
<section anchor="revisiting-the-initial-phase"><name>Revisiting the Initial Phase</name>

<t>Our November 2025 design of C4 included a "rate based"
initial phase, during which C4 would send at twice the "nominal rate",
monitor acknowledgments and increase the nominal rate if measurements
increase, and exit if congestion is detected or if the measurements
do not increase for 3 consecutive RTT. That algorithm works
well in most scenarios, but we were observing early exits in
"high delay jitter" scenarios, such as Wi-Fi networks with lots of
packet collisions.</t>

<t>After observing that phenomenon, we realized that the
rate based algorithm was failing in cases of high delay jitter
because it was setting the CWND to the product of the pacing rate
and the "nominal" max RTT. The nominal max RTT was set to a fixed
value, observed either before the initial phase or on the first
roundtrip in that phase. That would work if the initial phase
started during a high jitter event and the initial RTT was large
enough, but in many cases it was not and became a limiting
factor.</t>

<section anchor="why-not-increasing-max-rtt-during-initial-phase"><name>Why not increasing Max RTT during Initial phase?</name>

<t>In the initial phase, the algorithm tries to discover the bandwidth
and does not yet have a good estimate of the delay jitter, which typically
requires a series of measurements. In these conditions, it is
easy to underestimate the max RTT. On the other hand, the flow is
deliberately probing at a high data rate. If the algorithm
allows updates of the max RTT during that phase, the risk of
spiraling into buffer bloat is very high, but if the CWND
remains too low, the risk of exiting startup with a severely
underestimated data rate is also very high.</t>

<t>We tried to develop simple rules to classify delay increases
as caused by jitter or caused by congestion. If we could do that,
we would be able to increase the max RTT safely, when appropriate.
However, we could not find variables that were both easy to monitor
and well correlated with the actual cause of the delay.</t>

</section>
<section anchor="building-a-robust-initial-estimator"><name>Building a robust initial estimator</name>

<t>The "rate based" initial estimator requires estimating both the
data rate and the max RTT simultaneously. In contrast, the "CWND based"
initial estimator used in algorithms like Reno or Cubic
only requires estimating the CWND, plus a possibly
loose estimate of the data rate. The Reno algorithm is remarkably
simple: just increase the CWND by the number of bytes acknowledged,
without any explicit dependency on the measured latency.</t>

<t>The Reno algorithm terminates when packet losses are observed,
leading to bufferbloat. Hystart improves on that by terminating when
the measured delays start increasing, but this can lead to early
exits in case of delay jitter. The rate based algorithm terminates when
the measured bandwidth stops growing, which provides good
results. Our proposal is to combine a Reno-like growth of the
CWND with a rate-control-like exit condition.</t>

<t>Of course, things are not that simple. The "rate" test only stops the
growth of the CWND after the third "non growing" round. If the CWND doubles
after each round it becomes excessive, buffers fill up, and lots
of packets are lost. We dealt with that problem by essentially
freezing the increases after the first "non growing" round.
If a larger measurement happens before 3 RTT, the increases
resume; otherwise, C4 exits the initial phase.</t>

<t>When the initial phase completes, we retain as the estimate of the
data rate the highest value measured so far.
We also want to obtain a reasonable estimate of the "max RTT".
In the Reno logic, the "ssthresh" is set to half the CWND
value before congestion is detected. C4 will not use the
ssthresh variable after exiting the initial phase, but it
can set the max RTT to the quotient of ssthresh by the
final rate estimate.</t>
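<t>The exit computation can be sketched numerically, with hypothetical values:</t>

<figure><artwork><![CDATA[
```python
# Sketch: on exiting the initial phase, the Reno-style ssthresh (half the
# CWND) divided by the final rate estimate yields the initial max RTT.
def initial_phase_exit(cwnd_at_exit_bytes, best_rate_Bps):
    ssthresh = cwnd_at_exit_bytes / 2
    nominal_max_rtt = ssthresh / best_rate_Bps
    return best_rate_Bps, nominal_max_rtt

# 500 kB CWND at exit, 12.5 MB/s (100 Mbps) measured rate:
rate, max_rtt = initial_phase_exit(500_000, 12_500_000)  # max_rtt = 20 ms
```
]]></artwork></figure>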

</section>
</section>
<section anchor="state-machine"><name>State Machine</name>

<t>The state machine for C4 has the following states:</t>

<t><list style="symbols">
  <t>"startup": the initial state, during which the CWND is
set to twice the "nominal_CWND". The connection
exits startup if the "nominal_cwnd" does not
increase for 3 consecutive round trips. When the
connection exits startup, it enters "recovery".</t>
  <t>"recovery": the connection enters that state after
"startup", "pushing", or a congestion detection in
the "cruising" state. It remains in that state for
at least one roundtrip, until the first packet sent
in "recovery" is acknowledged. Once that happens,
the connection goes back
to "startup" if the last 3 pushing attempts have resulted
in increases of the "nominal rate", or enters "cruising"
otherwise.</t>
  <t>"cruising": the connection is sending using the
"nominal_rate" and "nominal_max_rtt" values. If congestion is detected,
the connection exits cruising and enters
"recovery" after lowering the value of
"nominal_cwnd".
Otherwise, the connection will
remain in the "cruising" state until at least 4 RTT have elapsed and
the connection is not "app limited". At that
point, it enters "pushing".</t>
  <t>"pushing": the connection is using a rate and CWND 25%
larger than "nominal_rate" and "nominal_CWND".
It remains in that state
for one round trip, i.e., until the first packet
sent while "pushing" is acknowledged. At that point,
it enters the "recovery" state.</t>
</list></t>

<t>These transitions are summarized in the following state
diagram.</t>

<figure><artwork><![CDATA[
                    Start
                      |
                      v
                      +<-----------------------+
                      |                        |
                      v                        |
                 +----------+                  |
                 | Startup  |                  |
                 +----|-----+                  |
                      |                        |
                      v                        |
                 +------------+                |
  +--+---------->|  Recovery  |                |
  ^  ^           +----|---|---+                |
  |  |                |   |     Rapid Increase |
  |  |                |   +------------------->+
  |  |                |
  |  |                v
  |  |           +----------+
  |  |           | Cruising |
  |  |           +-|--|-----+
  |  | Congestion  |  |
  |  +-------------+  |
  |                   |
  |                   v
  |              +----------+
  |              | Pushing  |
  |              +----|-----+
  |                   |
  +<------------------+

]]></artwork></figure>
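<t>The diagram above can also be summarized as a transition table; the event names are ours.</t>

<figure><artwork><![CDATA[
```python
# Hypothetical transition table matching the state diagram above.
TRANSITIONS = {
    ("startup", "no_growth_3_rtt"): "recovery",
    ("recovery", "first_packet_acked"): "cruising",
    ("recovery", "rapid_increase"): "startup",
    ("cruising", "congestion"): "recovery",
    ("cruising", "cruised_4_rtt_not_app_limited"): "pushing",
    ("pushing", "first_packet_acked"): "recovery",
}

def step(state, event):
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = step("cruising", "cruised_4_rtt_not_app_limited")  # "pushing"
s = step(s, "first_packet_acked")                      # "recovery"
```
]]></artwork></figure>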

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>We do not believe that C4 introduces new security issues. Or maybe there are
some, such as what happens if applications can be fooled into going too fast and
overwhelming the network, or going too slow and underwhelming the application.
Discuss!</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>




    <references title='Informative References' anchor="sec-informative-references">

&RFC9000;
&I-D.ietf-moq-transport;
&RFC9438;
&I-D.ietf-ccwg-bbr;
&RFC6817;
&RFC6582;
&RFC3649;
<reference anchor="TCP-Vegas" target="https://ieeexplore.ieee.org/document/464716">
  <front>
    <title>TCP Vegas: end to end congestion avoidance on a global Internet</title>
    <author initials="L. S." surname="Brakmo">
      <organization></organization>
    </author>
    <author initials="L. L." surname="Peterson">
      <organization></organization>
    </author>
    <date year="1995" month="October"/>
  </front>
  <seriesInfo name="IEEE Journal on Selected Areas in Communications ( Volume: 13, Issue: 8, October 1995)" value=""/>
</reference>
<reference anchor="HyStart" target="https://doi.org/10.1016/j.comnet.2011.01.014">
  <front>
    <title>Taming the elephants: New TCP slow start</title>
    <author initials="S." surname="Ha">
      <organization></organization>
    </author>
    <author initials="I." surname="Rhee">
      <organization></organization>
    </author>
    <date year="2011" month="June"/>
  </front>
  <seriesInfo name="Computer Networks vol. 55, no. 9, pp. 2092-2110" value=""/>
</reference>
<reference anchor="Cubic-QUIC-Blog" target="https://www.privateoctopus.com/2019/11/11/implementing-cubic-congestion-control-in-quic/">
  <front>
    <title>Implementing Cubic congestion control in Quic</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2019" month="November"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
<reference anchor="Wi-Fi-Suspension-Blog" target="https://www.privateoctopus.com/2023/05/18/the-weird-case-of-wifi-latency-spikes.html">
  <front>
    <title>The weird case of the wifi latency spikes</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2023" month="May"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
&I-D.irtf-iccrg-ledbat-plus-plus;
&RFC9330;
&RFC9331;
&I-D.briscoe-iccrg-prague-congestion-control;
<reference anchor="ICCRG-LEO" target="https://datatracker.ietf.org/meeting/122/materials/slides-122-iccrg-mind-the-misleading-effects-of-leo-mobility-on-end-to-end-congestion-control-00">
  <front>
    <title>Mind the Misleading Effects of LEO Mobility on End-to-End Congestion Control</title>
    <author initials="Z." surname="Lai">
      <organization></organization>
    </author>
    <author initials="Z." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Wu">
      <organization></organization>
    </author>
    <author initials="H." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Zhang">
      <organization></organization>
    </author>
    <date year="2025" month="March"/>
  </front>
  <seriesInfo name="Slides presented at ICCRG meeting during IETF 122" value=""/>
</reference>


    </references>




<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>TODO acknowledge.</t>

</section>


  </back>


</rfc>

