This application, like all our reports, was made for **academic purposes** and is the result of fundamental research *independent of the competent health authorities*. As noted below, in addition to methodological limitations, the data are potentially incomplete. For example, the data on the serial interval all originate from Asia. For public health matters and for any related questions, we recommend consulting and following the WHO instructions available at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public.

Every day, public health agencies such as Santé Publique France communicate key figures to monitor the epidemic. Four of them particularly reflect epidemiological dynamics: the cumulative number of confirmed cases, the cumulative number of deaths, the number of people currently hospitalized, and the number of people currently hospitalized in intensive care units.

The statistical analysis of **time variations** in these time series can inform us about epidemiological dynamics.

The purpose of this application is to visualize the *trends* present in the data through the **temporal reproduction number**, noted \(\mathcal{R}(t)\).

**Warning**: like any interpretation, this one depends on the *quality* of the data. Indeed, each type of data reflects a past state of the epidemic because of the delays between the date of infection and the date of symptom onset, hospitalisation, or death. In addition, there are reporting delays (weekend figures are usually lower). Finally, the *screening strategy* may vary over time. These results should therefore be interpreted with caution.

One of the key parameters in an epidemic is the **reproduction number**, \(\mathcal{R}\), which is the average number of people infected by a contagious person over the course of his or her infection. At the beginning of an epidemic, when the whole population is susceptible (i.e. not immune), this number has a particular value, noted \(\mathcal{R}_0\) and called the **basic** reproduction number. During the course of the epidemic, when the proportion of immunized people becomes large enough to slow the transmission of the virus (by an effect comparable to a dilution of the individuals still susceptible), this number is called the **effective**, or **temporal**, reproduction number, and denoted \(\mathcal{R}(t)\).

Intuitively, if \(\mathcal{R}(t)>1\), then one person infects more than one person on average and the epidemic grows. As the COVID-19 epidemic spreads, \(\mathcal{R}(t)\) decreases because an increasing proportion of the population becomes immune. When the herd immunity threshold is exceeded (see our Report 2), \(\mathcal{R}(t)\) falls below the threshold of 1, an epidemic peak is reached, and the epidemic declines. Public health control measures can also decrease \(\mathcal{R}(t)\). Therefore, the epidemic peak can be reached before the herd immunity threshold is.

At a given point in time \(t\), knowing the value of \(\mathcal{R}(t)\) is therefore essential to determine the status of the epidemic.

While the reproduction number quantifies a potential for the spread of the epidemic at a given date, its estimate is based on data that reflect the state of the epidemic several days earlier. Indeed, from the moment a person is infected, it takes about a week before the virus is detected, and then usually another week before a potential hospitalization, with a possible admission in an intensive care unit (ICU) after another delay. Death can occur several days or weeks after ICU admission. In addition, time must be allowed for the case/admission/death to be identified and reported.

As a result, by estimating \(\mathcal{R}(t)\) from the incidence data of newly detected cases, we can at best capture the status of the epidemic during the previous week. Based on new hospitalizations and intensive care admissions, this lag is of the order of two weeks, and it increases to four weeks for deaths.

While it is not necessary for all cases to be recorded for the method to work properly, it is necessary for the screening effort to be as constant as possible over time. Indeed, an intensification of the screening effort mechanically amplifies the number of cases detected, thereby mimicking a growing epidemic. For this reason, data on case incidence in France are particularly difficult to use because screening intensity was initially very low and increased several times over the past few months.

It should be noted that variations in sampling intensity are inevitable. For example, we always find lower incidences during weekends. To limit this, we use a 7-day sliding average.

As explained below, a key input for estimating the reproduction number is the number of days during which a person is contagious. In practice, this is estimated by tracking contacts (i.e. infector-infectee pairs) and counting the number of days between the dates of onset of symptoms in the infecting and infected individuals respectively. This is referred to as the *serial interval*.

To date, **few to no data are available for this serial interval in France and in Europe**. In other words, all estimates presented here are made on the basis of durations based on field surveys from Asia.

Our application allows us to choose between several data sets for the serial interval in order to better understand the effect of this parameter on the results.

We illustrate below the three different choices we propose for the distribution of the serial interval:

- the Nishiura *et alii* option, represented by the histogram;
- the \(\text{Gamma}(6.5, 0.62)\) option, represented by the blue curve;
- the \(\text{Weibull}(5, 6)\) option, represented by the green curve.
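Before use, a continuous serial-interval distribution has to be discretized into daily probabilities. The sketch below illustrates this for the Gamma option, assuming a shape/scale parameterization (the app's exact convention is not stated, so this is an assumption); the helper names are hypothetical.

```python
# Hypothetical sketch: turn the Gamma(6.5, 0.62) serial-interval option into
# daily weights w_k. Assumes shape = 6.5 and scale = 0.62 (an assumption:
# the app's parameterization could instead be shape/rate).
from math import exp, gamma as gamma_fn

def gamma_pdf(x, shape, scale):
    """Density of a Gamma(shape, scale) distribution."""
    if x <= 0:
        return 0.0
    return x ** (shape - 1) * exp(-x / scale) / (gamma_fn(shape) * scale ** shape)

def discretize(pdf, n_days=21, substeps=100):
    """Approximate P(k - 0.5 < X <= k + 0.5) for k = 1..n_days by midpoint sums."""
    weights = []
    h = 1.0 / substeps
    for k in range(1, n_days + 1):
        lo = k - 0.5
        mass = sum(pdf(lo + (i + 0.5) * h) * h for i in range(substeps))
        weights.append(mass)
    total = sum(weights)
    return [w / total for w in weights]  # renormalize so the w_k sum to 1

w = discretize(lambda x: gamma_pdf(x, 6.5, 0.62))
m = sum(k * wk for k, wk in enumerate(w, start=1))
print(round(m, 2))  # close to the continuous mean, 6.5 * 0.62 = 4.03 days
```

The daily weights `w` play the role of the \(w_k\) used in the estimation formulas further down.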

In practice, we need two pieces of information to calculate the reproduction number:

- the growth rate of the epidemic at a given time, \(r(t)\), which can be easily calculated from incidence time series (new cases detected, new hospitalizations, new deaths…);

- the duration during which a person is contagious, \(D\).

A well-known and intuitive relationship between all these terms is as follows:

\[\begin{align} \mathcal{R}(t) & = D~r(t)+1 \end{align}\]
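As a toy numerical check of this relationship (the values of \(D\) and \(r\) below are arbitrary assumptions, not estimates):

```python
# Toy illustration of R(t) = 1 + D * r(t); both inputs are assumed values.
D = 10.0  # assumed mean duration of contagiousness, in days
r = 0.05  # assumed daily growth rate of the epidemic
R = 1 + D * r
print(R)  # 1.5, i.e. above the epidemic threshold of 1
```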

However, this apparent simplicity masks significant problems. First of all, even in a Markovian context (*i.e.* in the absence of memory, see the study by Sofonea *et al.*, *medRxiv*), this equation remains an approximation: the \(\mathcal{R}(t)\) hence calculated is underestimated by a quantity \(\mathcal{R}_0 p\left(t\right)\), where \(p\left(t\right)\) refers to the proportion of cases at time \(t\), a quantity that is not always negligible, especially near the epidemic peak.

Secondly, the duration of contagiousness \(D\) is delicate to measure: it can only be expressed as a single fixed duration if an individual becomes contagious immediately after infection and loses contagiousness at a constant rate (Markovian regime). In practice, these two conditions are rarely satisfied. On the one hand, there is often a latency period during which the individual is not infectious (in the case of COVID-19, this latency is shorter than the incubation period, hence the difficulty of measurement). This imposes a positive corrective term on the estimated growth rate. On the other hand, the loss of contagiousness seems to show a non-Markovian ageing pattern (typically captured by Weibull distributions with shape parameter \(k>1\)). Therefore, setting \(D\) to a “mean” value, for example by assuming that someone is infectious for 9 days, is far too simplistic. Some try to circumvent the problem by assuming a uniform distribution over a given interval, e.g. assuming people are contagious between 4 and 14 days, but this approach does not avoid a bias.

To show that the distribution of contagiousness over time is neither exponential nor uniform, we plot below the COVID-19 serial interval compiled by Nishiura *et alii* (2020) from 28 infector-infectee pairs.

Neither the median (in red) nor even the 95% confidence interval (between the blue bounds) is very representative of the full range of values.

To circumvent the contagiousness duration problem and its modelling in a classical SIR model, while relying on the empirical distribution, an elegant solution is provided by the Euler-Lotka equation. Indeed, *provided that the epidemic growth is still exponential*, we have

\[\begin{align} \mathcal{R}(t) & = \left(\int_{a=0}^\infty e^{- r~a}~g(a)~\text{d}a \right)^{-1}, \end{align}\]

where \(a\) is the “age” of an infection, \(g(a)\) is the distribution of contagiousness durations, and \(r\) is the growth rate, assumed to be constant over the time interval. In practice, \(g(a)\) is approximated by the serial interval, noted here as \(w(a)\), described above and which corresponds to the time between the onset of symptoms in an infection pair.
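In practice the integral is approximated by a sum over the discretized serial interval. A minimal sketch, with hypothetical daily weights (not the app's data):

```python
from math import exp

def euler_lotka_R(r, w):
    """Discrete Euler-Lotka: R = 1 / sum_a exp(-r * a) * w[a-1],
    with w the serial-interval weights for a = 1, 2, ... days (summing to 1)."""
    denom = sum(exp(-r * a) * wa for a, wa in enumerate(w, start=1))
    return 1.0 / denom

# Hypothetical serial-interval weights over days 1..7 (illustration only):
w = [0.05, 0.15, 0.25, 0.25, 0.15, 0.10, 0.05]
print(round(euler_lotka_R(0.05, w), 3))  # ≈ 1.206: R > 1 for a positive growth rate
```

Note the sanity check built into the formula: with \(r = 0\) the sum equals 1 (the weights sum to 1), so \(\mathcal{R} = 1\), as expected at an epidemic peak.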

Outside the exponential regime, and in particular in the vicinity of the epidemic peak, we can use a general formula introduced by Wallinga & Lipsitch (2007, Proc B) that directly involves incidence data (\(\left(y_{1},\ldots,y_{n}\right)\), where \(y_k\) represents the number of new cases detected on day \(k\)):

\[\begin{align} \mathcal{R}\left(t\right)=\frac{y_{t}}{\underset{a\geq0}{\sum}y_{t-a}g(a)}. \end{align}\]
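This Wallinga-Lipsitch ratio can be computed directly from an incidence series. The series and weights below are made-up illustrations, not the app's data:

```python
def R_instantaneous(y, t, w):
    """Ratio of today's incidence to the expected infection pressure:
    R(t) = y[t] / sum_a y[t - a] * w[a - 1],
    with w the serial-interval weights for a = 1, 2, ... days."""
    denom = sum(w[a - 1] * y[t - a] for a in range(1, min(t, len(w)) + 1))
    return y[t] / denom

# Hypothetical incidence series (day 0..7) and serial-interval weights:
y = [10, 12, 15, 18, 22, 27, 33, 40]
w = [0.2, 0.5, 0.3]
print(round(R_instantaneous(y, 7, w), 2))  # ≈ 1.5
```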

The *R0* software was developed by Obadia *et alii* (2012, *BMC Med Inform Decis Mak*). Its maximum likelihood estimation method for the basic reproduction number \(\mathcal{R}_0\) is based on the approach developed by White and Pagano (2008, Statist Med), whereas its method for estimating the temporal reproduction number \(\mathcal{R}(t)\) is based on a procedure introduced by Wallinga & Teunis (2004, *Am J Epidemiol*), whose principle is as follows:

- an epidemic curve is a time series of incidences \(\left(y_{1},\ldots,y_{n}\right)\), where \(y_k\) represents the number of new cases detected on day \(k\); in the original model, detection corresponds to the onset of symptoms;

- this time series results from a set of transmission events between all cases detected since the beginning of the epidemic (NB: if the proportion of undetected cases remains constant over the time window studied, the estimate is not biased; otherwise, pre-processing of the data is necessary, see e.g. White *et alii* (2009, *Influenza and Other Respiratory Viruses*)); the structure of this set of events is seen as a tree (more precisely, a connected directed acyclic graph) whose probability of having generated the data (*i.e.* the **likelihood**) is decomposed into infector/infectee pairs (transmissions are assumed to be independent);

- if \(p_{i,j}\) denotes the probability that the individual \(\mathrm{I}_j\), detected on day \(t_j\), is the infector of the individual \(\mathrm{I}_i\), detected on day \(t_i\), then, by definition of the serial interval, whose distribution \(\left(w_{k}\right)_{k\geq 0}\) is described above,

\[\begin{align} p_{i,j}\propto w_{t_i-t_j} \end{align}\]

(NB: the difference in days \(t_i-t_j\) corresponds to the notation \(a\) above)

- therefore, the relative likelihood of the pair \(\left(i,j\right)\), denoted \(\ell_{i,j}\), is given by the expression

\[\begin{align} \ell_{i,j} & = p_{i,j}\big/\underset{k\neq i}{\sum}p_{i,k} \end{align}\]

- the individual reproduction number of case \(j\) (denoted \(R_j\)) is therefore the sum, over all cases \(i\) it may have infected, of the relative likelihood of each pair:

\[\begin{align} R_{j} & = \sum_i \ell_{i,j} \end{align}\]

- the temporal reproduction number is finally obtained by averaging the individual reproduction numbers over all cases \(j\) detected on the same day \(t\) (i.e. \(t_j=t\)):

\[\begin{align} \mathcal{R}(t) & = \frac{1}{y_{t}} \sum_{j:t_j=t} R_j \end{align}\]
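The pairwise procedure above can be sketched in a few lines. The detection days and serial-interval weights below are toy values, and this is an illustration of the principle rather than the *R0* package's implementation:

```python
def wallinga_teunis(times, w):
    """Sketch of the Wallinga-Teunis individual reproduction numbers for a
    list of detection days `times` (one entry per case) and serial-interval
    probabilities w[k] for a delay of k days (w[0] = 0 by convention)."""
    n = len(times)

    def p(i, j):
        # p_{i,j} proportional to w evaluated at t_i - t_j
        d = times[i] - times[j]
        return w[d] if 0 <= d < len(w) else 0.0

    R = [0.0] * n
    for j in range(n):
        for i in range(n):
            if i == j:
                continue
            norm = sum(p(i, k) for k in range(n) if k != i)
            if norm > 0:
                R[j] += p(i, j) / norm  # relative likelihood that j infected i
    return R

# Toy data: three cases on day 0, one on day 2, one on day 4.
times = [0, 0, 0, 2, 4]
w = [0.0, 0.2, 0.5, 0.2, 0.1]
R = wallinga_teunis(times, w)
print([round(x, 2) for x in R])
```

A useful invariant: the \(R_j\) sum to the number of cases that have a plausible infector in the data (here 2, the cases on days 2 and 4).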

This software also allows the direct use of raw values for the serial interval (which we implement in one of the options).

The *EpiEstim* software, developed by Cori *et alii* (2013, *Am J Epidemiol*) and updated in 2019 (Thompson *et alii*, 2019, *Epidemics*), is based on a different approach from the *R0* software, motivated by the fact that, when the studied epidemic is still ongoing, and more particularly when it comes to evaluating the effectiveness of control measures (a very topical situation), the total number of infections caused by the most recently detected cases is not yet known. This limitation is an opportunity to highlight two approaches to estimating the temporal reproduction number, namely

- that of Wallinga & Teunis (2004) and Obadia *et al.* (2012), from the *R0* software, which leads to the **case (or cohort) reproduction number**; it is retrospective: its calculation is based on the number of secondary cases actually caused by a cohort of detected infectors from the date on which they were detected;

- that of Cori *et al.* (2013) and Thompson *et al.* (2019), from the *EpiEstim* software, which leads to the **instantaneous reproduction number**; it is prospective: its calculation is based on the potential number of secondary infections that a cohort of cases could have caused if the conditions of transmissibility had remained the same as at the time of their detection.

To quote Cori *et al.* (2013), this distinction is equivalent to that between the (retrospective) life expectancy of a cohort of individuals, calculated once they have all died, and the (prospective) life expectancy of the same cohort, estimated under the assumption that mortality will remain the same as that known at birth.

Formally, *EpiEstim* maximizes the likelihood of the incidence data (viewed as Poisson counts) observed over a time window in which the reproduction number is assumed to be constant. Denoting by \(y_k\) and \(y_{k}^{+}\) respectively the numbers of new indigenous and total (i.e. local + imported) cases, and by \(w_s\) the probability corresponding to a serial interval of \(s\) days, the temporal (instantaneous) reproduction number for the interval \(\left[t-\tau;t\right]\) satisfies

\[\begin{align} \mathcal{R}_{\tau}\left(t\right) & = \underset{R}{\mathrm{argmax}}\left\{ \underset{k=t-\tau}{\overset{t}{\prod}}y_{k}!^{-1}e^{-R\underset{s=1}{\overset{k}{\sum}}w_{s}y_{k-s}^{+}}\left(R\underset{s=1}{\overset{k}{\sum}}w_{s}y_{k-s}^{+}\right)^{y_{k}}\right\} \end{align}\]
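For this Poisson likelihood the maximization has a simple closed form, \(\hat{\mathcal{R}} = \sum_k y_k \big/ \sum_k \Lambda_k\) with \(\Lambda_k = \sum_s w_s\,y^{+}_{k-s}\). The sketch below illustrates that formula with made-up incidence; it is not the *EpiEstim* implementation:

```python
def poisson_mle_R(y, y_plus, w, t, tau):
    """Closed-form maximizer of the Poisson likelihood above:
    R = sum_k y_k / sum_k Lambda_k over the window k = t - tau .. t,
    with Lambda_k = sum_s w[s-1] * y_plus[k - s]."""
    num, denom = 0.0, 0.0
    for k in range(t - tau, t + 1):
        lam = sum(w[s - 1] * y_plus[k - s] for s in range(1, min(k, len(w)) + 1))
        num += y[k]
        denom += lam
    return num / denom

# Hypothetical incidence (all cases local here, so y == y_plus):
y = [5, 8, 11, 14, 19, 25, 32, 41]
w = [0.3, 0.5, 0.2]
print(round(poisson_mle_R(y, y, w, t=7, tau=2), 2))  # ≈ 1.62
```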

The number of calculations required by the *EpiEstim* software to make inferences explains its relative slowness.

French incidence data come from Data.Gouv and international incidence data come from Our World in Data.

Serial interval data are from Nishiura *et al.* (2020, *Int J Inf Dis*), Ki *et al.* (2020, *Epidemiol Health*), Rothe *et al.* (2020, *N Engl J Med*), and Li *et al.* (2020, *N Engl J Med*).

The ETE modeling team (Laboratoire MIVEGEC, CNRS, IRD, Université de Montpellier) is composed of Samuel Alizon, Thomas Bénéteau, Marc Choisy, Gonché Danesh, Ramsès Djidjou-Demasse, Baptiste Elie, Yannis Michalakis, Bastien Reyné, Quentin Richard, Christian Selinger, Mircea T. Sofonea.

We are very grateful to the South Green platform of IRD Montpellier for hosting this application (more details at bioinfo.ird.fr).

Since 30 April, we have received support from the Occitanie region in the context of its “Covid-19 Emergency Research Appeal”. This support convinced the National Research Agency to finance half of our PHYEPI project, which was on the waiting list.

The page was partly translated using DeepL.

Contribution to this work

design of work: all the team

model realisation and code: BR, GD and SA

shiny interface: BR

writing: SA, MTS

validation: all the team

contact: mircea.sofonea@umontpellier.fr, bastien.reyne@ird.fr, samuel.alizon@cnrs.fr

This work is made available under the terms of a Creative Commons Attribution-NonCommercial 4.0 International licence.

The first two graphs show the temporal reproduction number (denoted \(\mathcal{R}(t)\)), i.e., at a given time \(t\), the average number of people an infected person infects over the course of his or her infection. It is therefore an estimate of the **potential spread** of the epidemic. The shaded areas indicate the confidence interval and the line is the median.

The third graph represents the incidence data, i.e. here the number of new cases detected each day. Depending on the choice in the menu, this incidence can refer to screenings, hospitalizations, ICU admissions, or deaths.

The fourth graph allows you to zoom in on a particular time period by dragging the small sliders (move the left one to the right to see the recent values better).

Not necessarily: \(\mathcal{R}(t)\) represents the trend of the epidemic but it does not reflect its current state. For instance, an epidemic can be declining while there are still hundreds of thousands of infected people and overwhelmed ICU services.

These two graphs are based on two slightly different methods, which are described in the Model tab.

The *EpiEstim* method is less sensitive to time variations (it calculates a 3-day average) and uses a more detailed model to forecast recent values.

The *R0* method is more sensitive to changes in the incidence curve and the most recent value can have a significant effect on the curve.

Yes, but it depends on the piece of information you are interested in.

The *R0* method is to be preferred if you are moving away from the present — and therefore if you are looking for an estimate for a past date.

Conversely, for a recent estimate of \(\mathcal{R}(t)\), the *EpiEstim* method is preferable.

The serial interval is used to estimate the average number of days between the date of symptom onset in an “infector” and the date of symptom onset in an “infectee”. These data are needed to measure the reproduction number. Unfortunately, **the serial interval is still largely unknown** for epidemics in France and Europe.

We therefore let the user choose between several serial interval distributions, measured on COVID-19 epidemics in Asia or estimated in the light of previous epidemics.

To correct for the weekend reporting dips in new cases, the data were smoothed using a 7-day moving average.

Each point is therefore the mean of the last 7 days.

For purists, we used the function \(\texttt{zoo::rollmean(x, k = 7, align = "right", fill = NA)}\).

Note that this smoothing produces decimal values in the incidence series.
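For readers who do not use R, a right-aligned 7-day moving average analogous to the `zoo::rollmean` call quoted above can be sketched as follows (the toy series is an illustration):

```python
def rollmean_right(x, k=7):
    """Each output point is the mean of the current day and the k - 1 previous
    ones; the first k - 1 positions are None (the R code fills them with NA)."""
    out = [None] * (k - 1)
    for i in range(k - 1, len(x)):
        out.append(sum(x[i - k + 1 : i + 1]) / k)
    return out

daily = [90, 95, 100, 20, 15, 110, 115, 120]  # toy series with a weekend dip
print(rollmean_right(daily))
```

The weekend dip (20 and 15) no longer produces a spurious two-day collapse in the smoothed series, and the averaged values are decimal, as noted above.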

There is always a **lag** between the state of the epidemic and what can be estimated from the data. This gap is due to the fact that the observed events (screening, hospitalisation, admission to intensive care, death) only occur a certain number of days after *individuals have been infected* — the event most likely to characterise the epidemic in real time, but in practice never known.

These time lags vary from one individual to another, but for the sake of simplicity we have set them at times that we consider to be representative based on a model from our team (Sofonea *et al.*, *medRxiv*):

- 10 days lag for newly detected cases;
- 12 days lag for hospitalizations;
- 14 days lag for the ICU admissions;
- 28 days lag for deaths.
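As an illustration, these representative lags can be used to back-date an estimate towards the approximate date of infection (the `infection_date` helper and stream labels below are hypothetical, not the app's code):

```python
import datetime as dt

# Representative lags (in days) between infection and each observed event,
# taken from the list above.
LAGS = {"cases": 10, "hospitalizations": 12, "icu": 14, "deaths": 28}

def infection_date(observation_date, stream):
    """Back-shift an observation date by the representative lag of its stream."""
    return observation_date - dt.timedelta(days=LAGS[stream])

print(infection_date(dt.date(2020, 5, 15), "deaths"))  # prints 2020-04-17
```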

Incidence data on PCR detection are generally the least reliable because they are very sensitive to variations in detection policies. For example, in France sampling was limited at the beginning of the epidemic, whereas many more tests are now being performed. This increase in screening mechanically translates into an increase in the number of cases detected, and therefore in the estimated reproduction number, even though the epidemic could be strongly declining.

The most reliable data for measuring \(\mathcal{R}(t)\) are therefore those for which the screening policy has varied the least over time. In France, this corresponds to data on new ICU admissions. However, these data are not always available.

National data aggregate departmental data. This aggregation makes them less sensitive to stochastic fluctuations. The estimates are therefore more robust, with narrower confidence intervals.

On the other hand, if only a few departments account for the majority of cases, then the national data may poorly reflect the situation of the epidemic in the least affected departments.

In order to be processed by the packages used, the data needs to satisfy several conditions:

- the first data point is non-zero;
- there are no missing values;
- there are more than eight consecutive observations.
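These checks can be sketched as follows; the `usable` helper is hypothetical and not the packages' actual validation code (`None` stands for a missing value):

```python
def usable(series):
    """Check the three conditions listed above on an incidence series."""
    return (
        len(series) > 8                         # more than eight observations
        and series[0] not in (0, None)          # first data point is non-zero
        and all(v is not None for v in series)  # no missing values
    )

print(usable([3, 4, 6, 9, 13, 20, 27, 31, 40]))  # True
print(usable([0, 4, 6, 9, 13, 20, 27, 31, 40]))  # False: starts at zero
```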