<h1 id="how-long-will-it-take-to-adopt-a-zero-covid-strategy">How long will it take to adopt a Zero Covid Strategy?</h1>
<p>In this post, I try to estimate how long it takes for a Zero Covid strategy to complete its first phase, and I provide a tool to play with the parameters on <a href="https://colab.research.google.com/drive/1G7Dn1z8CfkhTtbK8YvlVdakc2yz3qzmm?usp=sharing">Google Colab</a>.</p>
<h3 id="disclaimer">Disclaimer</h3>
<p>The discussion in this post is a simplification of reality. In particular, I do not consider here the introduction of new, more infective <strong>variants</strong> (I will consider them in the future), nor the fact that when the number of cases is <strong>very low</strong>, the SIR model does not make much sense and we should switch to a discrete model.</p>
<h2 id="what-is-a-zero-covid-strategy">What is a Zero Covid strategy?</h2>
<p>As the name suggests, the goal of this strategy is to bring the number of cases as low as possible (to zero). This has already been done in countries such as New Zealand, Australia and Taiwan.
It consists of several phases.</p>
<ul>
<li>A first one, where a <strong>strong lockdown</strong> is imposed until the number of cases falls below a threshold (around 10 per week per 100,000 inhabitants, though experts have suggested values between 2.5 and 50)</li>
<li>A second phase where <strong>many restrictions are lifted</strong>, <strong>contact tracing</strong> is effective and a <strong>zone system</strong> is employed. This means that every time a zone exceeds the tolerated number of cases, it falls back into a local lockdown until the numbers are low again. The key ingredient of this phase is that every case must be traced back to a previous one. In the event of a case of unknown origin, the zone falls back to the previous phase.</li>
<li>A third phase in which each zone with zero cases is set <strong>free</strong>. Again, as soon as new cases are found, it falls back to the previous phases.</li>
</ul>
<p>This strategy has already been employed in several countries outside central Europe. Remarkably, similar techniques (different geographical zones ruled by different danger levels) have also been adopted in Europe. For example, Italy follows a three-tier strategy, but its tiers do not pursue the goal of keeping cases traceable, which is clearly not enough to bring cases to zero. Germany has lately imposed a stronger lockdown and is aiming to decrease the number of cases below a threshold that is still being debated between the government and the federal states (Länder).</p>
<h2 id="how-the-pandemic-would-behave-during-the-strategy">How would the pandemic behave during the strategy?</h2>
<p>The pandemic can be modeled, with many simplifications, by a SIR model (susceptible, infected, recovered). Far from herd immunity, this model behaves exponentially in both directions: the number of cases evolves as $I=Ce^{(R-1)t}$, where $C$ is a constant depending on the infection, $R$ is the reproduction number of the disease and $t$ is time. The measures adopted by a country (ignoring border effects) directly modify the reproduction number $R$. In the picture below we can see the behavior of the cases in time for $R>1$ (on the left) and for $R < 1$ (on the right).</p>
<p><img src="/images/postCovid/exponential.png" alt="Exponential Behavior" /></p>
<p>For human beings it is difficult to grasp exponential behavior, which is why we often use a logarithmic scale to better understand it. In this scale, the logarithm of the infections behaves linearly with time: $\log (I) = C_1+ C_2(R-1)t$. In particular, the number of infections doubles every fixed number of days for $R>1$ and halves every $N$ days when $R <1$. Without getting too complicated with different variants of the virus and so on, we focus on a strategy that brings $R$ to certain values and keeps it there for a certain time.</p>
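<p>The doubling or halving time can be sketched directly from this formula. Below is a minimal Python sketch; note that expressing $t$ in days requires a generation-time scale $\tau$, an assumption not made explicit in the formula above (here $\tau = 5$ days, a common rough value):</p>

```python
import math

def doubling_or_halving_time(R, tau=5.0):
    """Days for infections to double (R > 1) or halve (R < 1),
    assuming I(t) = C * exp((R - 1) * t / tau) with tau in days.
    Both the model and the default tau are simplifying assumptions."""
    if R == 1.0:
        raise ValueError("cases stay constant for R = 1")
    return math.log(2) * tau / abs(R - 1.0)
```

<p>For example, $R = 0.5$ with $\tau = 5$ gives a halving time of about 7 days.</p>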
<p>It is now interesting to understand the halving time of the pandemic. This tells us <strong>when</strong> each Zero Covid phase should switch to the next one.</p>
<h2 id="how-to-forecast-the-time-needed-in-lockdown-before-the-new-phase">How to forecast the time needed in lockdown before the new phase?</h2>
<p>Let us look at the Covid cases in New Zealand. There were few cases at the beginning of April, then the Zero Covid strategy was applied.</p>
<p><img src="/images/postCovid/HalveningNewZealand.png" alt="New Zealand cases" /></p>
<p>There the peak was around 2 detected cases per day per 100,000 inhabitants, which was brought to almost zero in about one month. In particular, the logarithmic scale shows that they were able to halve the detected cases every 5 days.</p>
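<p>This 5-day halving time can be estimated from reported data with a least-squares fit of the logarithm of the daily cases. A minimal sketch (real data would first need smoothing, e.g. a 7-day rolling average, omitted here):</p>

```python
import numpy as np

def estimate_halving_time(days, daily_cases):
    """Fit log(cases) = a + b * t and return the halving time -log(2) / b.
    A positive result means cases are decreasing."""
    slope, _intercept = np.polyfit(days, np.log(daily_cases), 1)
    return -np.log(2) / slope
```
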
<p>If we come back to Europe we observe similar behaviors, but with much higher numbers.
In <strong>Italy</strong>, after the peak of the first wave, we observe a negative exponential behavior as well in May 2020, when testing was more consistent across the country. There the halving time was <strong>14 days</strong>. At that pace, the goal of 1 case per week per 100,000 inhabitants could have been reached by keeping the May measures in place until the beginning of July. Instead, restrictions were lifted and the exponential inverted around mid July.</p>
<p><img src="/images/postCovid/HalveningItalyFirstWave.png" alt="Italy first wave" /></p>
<p>The past is gone and the question is: if Italy applied the same measures as in May 2020 starting from now, <strong>when would the goal of 1 case per week per 100,000 inhabitants be reached?</strong>
Assuming that testing works more or less fine and that the virus has not mutated too much (which is not true, it has worsened), we can forecast the cases under a <strong>new lockdown</strong>.</p>
<p>The green line shows that by the beginning of May 2021 Italy could reach the number of cases of the last summer (2020) and by the end of May 2021 the threshold of 1 case per week per 100,000 inhabitants.</p>
<p>During the second wave the rules were less strict and varied from region to region, leading to a larger $R$ and a less effective strategy: a halving time of about 19 days.</p>
<p><img src="/images/postCovid/HalveningItalySecondWave.png" alt="Italy Second wave" /></p>
<p>Applying that strategy now, it would take until mid June 2021 to get back to the case numbers of summer 2020.</p>
<p><img src="/images/postCovid/HalveningItalyGoal50.png" alt="Italy Goal 50" /></p>
<p>If, instead, Italy aims for 50 cases per week per 100,000 inhabitants, one month with the red-zone measures of the second wave should be enough.</p>
<h1 id="play-with-the-tool"><a href="https://colab.research.google.com/drive/1G7Dn1z8CfkhTtbK8YvlVdakc2yz3qzmm?usp=sharing">Play with the tool</a></h1>
<p>These plots have been produced with a simple Python script, available on <a href="https://colab.research.google.com/drive/1G7Dn1z8CfkhTtbK8YvlVdakc2yz3qzmm?usp=sharing">Google Colab</a>.
There you can select any country in the world and play with the parameters. Simply start the script with the play button at the top left of the code box and then tune the parameters:</p>
<ul>
<li>Data used to compute the halving time: starting date and data interval</li>
<li>Halving time of the projection starting from now</li>
<li>Goal to be reached</li>
</ul>
<p><img src="/images/postCovid/HalveningGermany.png" alt="Germany cases" />
As an example, here we set Germany as the country of interest. We see that during the first wave the halving time was about 14 days, while now it is around 20 days. Keeping this pace, Germany would reach the goal of 2.5 cases per week per 100,000 inhabitants by the beginning of June.</p>
<h2 id="conclusions">Conclusions</h2>
<p>Clearly, reaching 1 case per 100,000 per week would take too long under lockdown, but reaching 50 or 30 cases per 100,000 per week, enough to keep contact tracing effective, is doable in a month or two of not-too-strict lockdown measures.</p>
<p>Now, the objective is a tradeoff between the cost of decreasing the cases and the goal we want to achieve in order to make contact tracing as effective as possible. I think in the next months the European countries will try to find a good compromise on this.</p>
<p>This is a blog post I wrote for the <a href="https://medium.com/sissa-mathlab">SISSA mathLab medium page</a>.</p>
<h1 id="weighted-model-order-reduction-to-quantify-uncertainties">Weighted Model Order Reduction to Quantify Uncertainties</h1>
<p>Model order reduction (MOR) techniques, maybe familiar to many readers of this blog, reduce the computational time of the numerical analysis (and not only) of parametric problems. In the fast <em>online</em> phase of MOR we exploit the expensive computations carried out in an <em>offline</em> phase. This is useful in many situations where fast evaluations of the parameter-to-solution map are necessary. We can think of <em>real-time</em> applications, where data are collected in the field and a fast response is necessary; <em>optimization</em> processes, where a cost function (depending on our parametric problem) must be evaluated several times; or <a href="https://en.wikipedia.org/wiki/Uncertainty_quantification"><em>uncertainty quantification</em></a> (UQ) applications, where the parameters are intrinsically random and the solution of the problem carries this uncertainty.</p>
<p>UQ is a broad subject and in this article we focus only on parameter uncertainty and its possible stochastic behavior, i.e., we suppose that the parameters of our problem are random, but they obey a <strong>known</strong> underlying <em>probability law</em>. The goal of this UQ task is to estimate the <em>probability law</em> of the solution of the problem, making use of the known distribution of the parameters. Let us give an example. Suppose we have a parameter that follows a <a href="https://en.wikipedia.org/wiki/Beta_distribution">Beta(1.5,2) distribution</a>. In the next figure on the left, we simulate the distribution of 10,000 samples drawn from this probability law. The goal of our UQ task is to understand the probability distribution of the output of the problem of interest, in the example below (right figure) a very simple function of the original parameter. What we observe in the figure is just a discrete approximation of the probability law.</p>
<p><img src="/files/images/posts/MORUQ/betaDistributedFunction.png" alt="On the left: 10,000 samples drawn from a Beta(1.5,2) distribution. On the right: a function applied to the parameters." /></p>
<p>If one wants more insight into the law, it is important to compute some statistical <em>moments</em> of the output solution, e.g. its mean and variance, computed with sampling algorithms such as <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo methods</a>. Monte Carlo methods consist of computing the output for many parameters drawn from the probability law, as done in the previous picture, and then averaging the obtained quantities. The main issue of Monte Carlo methods is the <strong>slow decay</strong> of the error, which scales as the inverse square root of the number of samples. In other words, to reduce the error by a factor of ten we need to run 100 times the original number of simulations. Hence, a very large number of samples is needed.</p>
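<p>The slow decay is easy to observe on the Beta(1.5, 2) example above, whose exact mean is $1.5/(1.5+2) \approx 0.43$. A minimal sketch (the seed and sample sizes are arbitrary choices):</p>

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo_mean(n_samples):
    """Plain Monte Carlo estimate of E[X] for X ~ Beta(1.5, 2)."""
    return rng.beta(1.5, 2.0, size=n_samples).mean()

exact_mean = 1.5 / (1.5 + 2.0)  # mean of Beta(a, b) is a / (a + b)
```

<p>Multiplying the number of samples by 100 reduces the expected error only by a factor of ten.</p>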
<p>Here is where MOR helps the process.</p>
<h2 id="weighted-model-order-reduction-techniques">Weighted model order reduction techniques</h2>
<p>Let us summarise the main aspects of MOR algorithms. First of all, we have at our disposal a full order model (FOM), which provides reliable solutions at a high computational cost. MOR algorithms select some of these FOM solutions to build the reduced space where we search for the reduced solution. One way to build a reduced order model (ROM) is to project the operators onto the reduced space and use the reduced operators to find new reduced solutions. ROMs are often much smaller and faster to solve, requiring far <strong>less computational time</strong> (up to 100 or 1,000 times faster).</p>
<p><img src="/files/images/posts/MORUQ/MORprocess.png" alt="How to build a (projection-based) reduced order model." /></p>
<p>The two steps where the probability law of the parameters plays a role in this process are
1) The choice of the <strong>samples</strong> in the parameter domain
2) The definition of the <strong>error that we minimise</strong> to choose the reduced space.</p>
<p><img src="/files/images/posts/MORUQ/Sampling.png" alt="Sampling." /></p>
<p>Classically, sampling is done on a uniform grid, with quadrature rules, or by <strong>randomly choosing the samples uniformly on the domain</strong>. The weighted algorithms adapt the sampling strategy to the probability distribution, obtaining a <strong>distributed sampling</strong>. If we know <em>a priori</em> that a certain situation is much more probable than another, the algorithm will pick many more samples in the probable one. This helps in getting better results on average test cases.</p>
<p>The definition of the error that one tries to minimise while searching for the reduced space can also be modified according to the underlying probability distribution. For example, one can <strong>weight</strong> this error with the probability density function. This means, again, that when we see an error in a very likely simulation we worry a lot and try to improve the approximation for those parameters, while if it happens for rare events, we may ignore this piece of information.</p>
<table>
<thead>
<tr>
<th style="text-align: center">Greedy Algorithm</th>
<th style="text-align: center">POD algorithm</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/files/images/posts/MORUQ/greedy.png" alt="greedy" /></td>
<td style="text-align: center"><img src="/files/images/posts/MORUQ/POD.png" alt="POD" /></td>
</tr>
</tbody>
</table>
<p>Applying these two modifications to the MOR algorithms, one obtains smaller models for the same error. We have tested the <a href="https://en.wikipedia.org/wiki/Greedy_algorithm">Greedy algorithm</a> and the <a href="https://en.wikipedia.org/wiki/Proper_orthogonal_decomposition">proper orthogonal decomposition (POD)</a>, obtaining a good level of improvement. In the previous figures, one can see that the two modifications, or even just one of them, yield smaller errors on test cases with respect to the standard strategy. This is due to the exploitation, inside the reduction algorithm, of the useful stochastic information that governs the problem.</p>
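<p>A minimal sketch of the weighted POD idea: scale each snapshot by the square root of its probability weight before the SVD, so that likely parameters influence the reduced basis more. This is an illustration only; the papers in the references below define the weights through proper quadrature rules:</p>

```python
import numpy as np

def weighted_pod(snapshots, weights, n_modes):
    """snapshots: (n_dof, n_samples) matrix of FOM solutions;
    weights: positive probability/quadrature weights, one per snapshot.
    Returns an orthonormal basis (n_dof, n_modes) and the retained
    singular values of the weighted snapshot matrix."""
    scaled = snapshots * np.sqrt(np.asarray(weights))[None, :]
    basis, singular_values, _ = np.linalg.svd(scaled, full_matrices=False)
    return basis[:, :n_modes], singular_values[:n_modes]
```
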
<p>This allows us to get nice simulations in even shorter computational times with respect to classical MOR algorithms. In the next figure, we see a simulation of a convection-diffusion problem run with a weighted MOR technique.</p>
<p><img src="/files/images/posts/MORUQ/animation.gif" alt="Animation of a convection-diffusion simulation." /></p>
<h2 id="conclusion-use-all-the-data-youhave">Conclusion: use all the data you have!</h2>
<p>It is clear that in uncertain situations, the distribution of the uncertainty can provide very useful information. MOR algorithms can be designed for this and they will improve significantly in time and accuracy if this knowledge is exploited.</p>
<h2 id="references">References</h2>
<p>[1] D. Torlo, F. Ballarin, and G. Rozza. <a href="/publication/2018-10-25-stabilized-weighted">Stabilized Weighted Reduced Basis Methods for Parametrized Advection Dominated Problems with Random Inputs</a> In SIAM/ASA Journal on Uncertainty Quantification 2018.</p>
<p>[2] L. Venturi, F. Ballarin, and G. Rozza. <a href="https://link.springer.com/article/10.1007/s10915-018-0830-7">A Weighted POD Method for Elliptic PDEs with Random Inputs</a> In Journal of Scientific Computing 2019.</p>
<p>[3] P. Chen, A. Quarteroni, and G. Rozza. <a href="https://doi.org/10.1137/130905253">A weighted reduced basis method for elliptic partial differential equations with random input data</a> In SIAM Journal on Numerical Analysis 2013.</p>
<p>[4] L. Venturi, D. Torlo, F. Ballarin, and G. Rozza. <a href="https://link.springer.com/chapter/10.1007/978-3-030-04870-9_2">Weighted Reduced Order Methods for Parametrized Partial Differential Equations with Random Inputs</a> In: Canavero F. (eds) Uncertainty Modeling for Engineering Applications. PoliTO Springer Series. Springer, Cham.</p>
<h1 id="how-is-testing-going-for-different-world-countries">How is testing going for different World countries for the SARS-CoV-2 pandemic</h1>
<p>This post is based on the observation I made some months ago in a <a href="https://twitter.com/accdavlo/status/1335657681156247552">Twitter post</a>.</p>
<h2 id="basic-knowledge-on-the-fatality-rate-of-the-virus">Basic Knowledge on the Fatality rate of the Virus</h2>
<p>It is clear at the moment that SARS-CoV-2 has a case fatality rate (CFR) which depends strongly on age and on the presence of other illnesses. In particular, the CFR depends exponentially on age <code class="language-plaintext highlighter-rouge">[1]</code>, as shown also by <a href="https://ourworldindata.org/mortality-risk-covid#case-fatality-rate-of-covid-19-by-age">ourWorldInData</a>.</p>
<p>For an average European country, the CFR has been estimated to lie in the interval 1%-1.5% (with the original variant; in the future it might change). This piece of information can be used to assess the quality of the testing strategy and capacity of a country (and soon which variant will become dominant there). Indeed, from reported cases and deaths, we can estimate the apparent CFR and compare it with the expected one.</p>
<h2 id="france-germany-uk-and-italy-and-their-testing-systems">France, Germany, UK and Italy and their testing systems</h2>
<p>In particular, I set up an <a href="https://colab.research.google.com/drive/13agn1qMRO8NFMTY0yOfSGW2H37V5C_Rh?usp=sharing">IPython notebook</a> where one can compare the cases and the deaths, given an assumed apparent CFR and the delay between the report of a case and the report of the corresponding death (in Italy estimated at about 12 days).</p>
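<p>The core of the comparison can be sketched as follows; the function and its simple aggregate ratio are illustrative assumptions, not the notebook's exact code:</p>

```python
import numpy as np

def apparent_cfr(daily_cases, daily_deaths, delay_days=12):
    """Apparent CFR: deaths reported `delay_days` after the cases,
    divided by those cases (aggregated over the whole period)."""
    cases = np.asarray(daily_cases, dtype=float)
    deaths = np.asarray(daily_deaths, dtype=float)
    return deaths[delay_days:].sum() / cases[: len(cases) - delay_days].sum()
```
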
<table>
<thead>
<tr>
<th style="text-align: center">Fatality 2%</th>
<th style="text-align: center">Fatality 3%</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsItalyFat2.png" alt="Cases vs Deaths in Italy: Fatality 2%" /></td>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsItalyFat3.png" alt="Cases vs Deaths in Italy: Fatality 3%" /></td>
</tr>
</tbody>
</table>
<p>From the pictures above it is clear that in Italy the apparent CFR moved from 2% at the beginning of the first wave to 3% after the peak of the second wave. This may be caused by different factors. It is evident that the testing strategy and the case tracing may lose quality as the number of cases increases. Another factor could be that the share of older cases increased in time. Unfortunately, in Italy the positivity rate of PCR tests by age group is not available, so we cannot check this hypothesis directly. In France this data is available (<a href="https://twitter.com/gforestier/status/1360689560687108106/photo/2">Age rate France</a>) and we can see how, in every wave, the pandemic starts in the younger population and spreads towards the older one (in France schools were rarely closed, in particular for ages 0-11). This can explain their apparent CFR, which varies along the second wave (a minimum of 0.7% before the second wave and a maximum around 3% before the second plateau of cases).</p>
<table>
<thead>
<tr>
<th style="text-align: center">Fatality 1.3%</th>
<th style="text-align: center">Fatality 2%</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsFranceFat1.3.png" alt="Cases vs Deaths in France: Fatality 1.3%" /></td>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsFranceFat2.png" alt="Cases vs Deaths in France: Fatality 2%" /></td>
</tr>
</tbody>
</table>
<p>In Germany we can observe a clear <a href="https://www.thelocal.de/20201120/explained-what-is-germanys-new-coronavirus-test-strategy-for-winter">change in the testing strategy</a> recommended by the Robert Koch Institute. Before mid November the apparent CFR was 1%, the result of a very good tracing system and testing strategy; after the change, due to capacity limits, the fatality rose steadily to 4% by mid January. We can imagine that the unreported cases could be at least 75% of the total.</p>
<table>
<thead>
<tr>
<th style="text-align: center">Fatality 1%</th>
<th style="text-align: center">Fatality 4%</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsGermanyFat1.png" alt="Cases vs Deaths in Germany: Fatality 1%" /></td>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsGermanyFat4.png" alt="Cases vs Deaths in Germany: Fatality 4%" /></td>
</tr>
</tbody>
</table>
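<p>The 75% estimate for Germany follows from comparing the apparent CFR with the expected one, under the (strong) assumption that all deaths are reported:</p>

```python
def undetected_fraction(expected_cfr, apparent_cfr):
    """Fraction of infections missed by testing, assuming deaths are
    fully reported: the detected share of the true infections is
    expected_cfr / apparent_cfr."""
    return 1.0 - expected_cfr / apparent_cfr

# Germany after mid November: expected CFR ~ 1%, apparent CFR ~ 4%
# -> undetected_fraction(0.01, 0.04) = 0.75
```
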
<p>The UK also seemed to have a stable testing strategy, with an apparent CFR that varies between 1% and 2.5% even after the peak of the second wave; the increase is probably due to the new variant, which seems to be more fatal.</p>
<table>
<thead>
<tr>
<th style="text-align: center">Fatality 1.9%</th>
<th style="text-align: center">Fatality 2.4%</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsUKFat1.9.png" alt="Cases vs Deaths in UK: Fatality 1.9%" /></td>
<td style="text-align: center"><img src="/images/postCovid/casesVsDeathsUKFat2.4.png" alt="Cases vs Deaths in UK: Fatality 2.4%" /></td>
</tr>
</tbody>
</table>
<h2 id="future">Future</h2>
<p>In Germany and the UK, <a href="https://yestonocovid.eu/">Zero Covid</a> strategies are under serious consideration by the governments. They share the goal of bringing the cases below 2.5 per 100,000 people per week. This should allow a better testing and contact tracing strategy and, hence, a lower CFR.
In other countries this policy has not been considered yet, but it may be encouraged by the rest of the EU.</p>
<p><code class="language-plaintext highlighter-rouge">[1]</code> Undurraga, E.A., Chowell, G. & Mizumoto, K. COVID-19 case fatality risk by age and gender in a high testing setting in Latin America: Chile, March–August 2020. Infect Dis Poverty 10, 11 (2021). <a href="https://doi.org/10.1186/s40249-020-00785-1">https://doi.org/10.1186/s40249-020-00785-1</a></p>