### Motivation and Summary

Between 1980 and 2019, there were 14 billion-dollar extreme precipitation-related disasters over periods longer than 14 days. Unfortunately, our current capability to predict these prolonged wet periods is limited, owing in part to a relative lack of prior research on such extremes. As such, our team aims to increase understanding and ultimately improve prediction of these periods. The first step is to develop a __definition__ for an extreme 2-week period.

We analyze periods in “windows”: 14-day blocks beginning on each calendar day. For each spatial point in the CONUS and each window, we developed accumulation **and duration** thresholds, whereby a point is labelled extreme if it meets **both** criteria. Finally, all spatial points labelled extreme for a given window are grouped together to form an extreme period if the shape delineating the period has an area greater than 200,000 km^{2}.

Our 14-day database version 1.0 can be seen here on our site. Each period includes summary statistics, such as area-averaged precipitation, and a map of the total precipitation and extreme period shape can be viewed by clicking the event number column. Next, we will go into more detail, with graphical examples, on one of the most impactful floods in US history.

## January 1937 Flooding

### Louisville, KY: Tabular Example

The table below shows 25 days of precipitation in Louisville, KY (shown as a purple star on the map) in late December 1936 and January 1937. The “Daily Precip” row gives the precipitation recorded on the calendar day given at the top. The “14-Day Total” row gives the total precipitation over the 2 weeks __beginning__ on the respective calendar day. For example, the 12/29 value of 4.67 equals 0.14 + 0.79 + 0.01 + … + 1.69 + 0.73 + 0.0. Similarly, the “Threshold” row gives the 14-day 99^{th} percentile for the 2-week period beginning on the calendar day. Lastly, the final row is shaded green if the 2-week period total precipitation exceeds the period’s threshold value; every 14-day period shown in the table met the duration criterion. The extreme period in our database is the 01/12 – 01/25 period, and its total precipitation is shown on the map below. However, you may wonder how we determined that period to be extreme when, at least over Louisville, there seem to be several periods that could all be called extreme. Let’s now take a step-by-step look at our extreme period identification process.

### Developing the Shape of an Extreme Period and Grouping Similar Shapes

To develop an extreme period shape, we first find all grid points in the US that meet both the accumulation and duration criteria. These grid points are then passed to a method called kernel density estimation (KDE). This method considers the extreme points and develops a map with larger values in areas with many extreme points and smaller values in areas with few. To easily compare across all times of the year, we normalize each map to have a minimum of 0 and a maximum of 1. Let’s take a look at an example from 12-25 January 1937:

In panel a, the green points are those that were labelled as extreme. These points generally stretch in a northeast-southwest direction, spanning from Arkansas into New York. The green points were then passed into the KDE method, and the result is seen in the middle panel. Throughout most of Kentucky and Ohio, where virtually every point is extreme, the KDE field is close to 1. Conversely, values in Arkansas are smaller since fewer nearby points are extreme. Similarly, areas like northern Missouri have density values of 0 since the nearest extreme point is still quite far away. The extreme shape for this specific 14-day period is then drawn using a single contour level: 0.2710 (see details below if you want to know how we chose this number). Our database only contains shapes with an areal extent of at least 200,000 km^{2}, so the small ovals in Louisiana/Mississippi and Maryland are not considered. Now, you might imagine the resulting shapes for nearby periods, such as 10-23 January 1937 or 14-27 January 1937, would be very similar. After all, the total precipitation in these 3 periods is going to be similar since they share many of the same days! Indeed, we can see several very similar extreme shapes:

It may be hard to tell, but there are actually 5 main shapes plotted on the map above, colored red, green, blue, purple, and black. These shapes represent the outcome of our overall process thus far, with the red shape coming from the 14-day period beginning on 10 January 1937 and the black shape coming from the period beginning on 14 January 1937. It would not make sense to call each one an independent extreme period since they all share a large number of common individual days. Therefore, we developed a technique that puts all 5 of these shapes into a group, denoting that they are essentially the same extreme period. We then choose the “most extreme” period and keep that one for the database while discarding the others. Essentially, the most extreme period is the one that is most different from normal.

Our finalized database spans January 1915 through December 2018; we first keep any extreme shape greater than 200,000 km^{2} and then trim the initial set by grouping all similar events and keeping the most extreme from each group. Our version 1.0 database contains 851 periods in this timeframe across the US, ranging from extreme blizzards and heavy monsoon periods to strong low pressure systems, both tropical and extratropical! Below, we go into a bit more technical detail for each step along the way. Check it out if interested!

## Technical Details

In this section, we will go into more depth in describing the process behind creating our database. Any use of the database or methods detailed here should cite this paper, while questions can be directed over email to ty.dickinson@ou.edu.

### Data

We used two observationally-based datasets to create version 1.0 of our 14-day database: Livneh and PRISM. The Livneh dataset spans 1915-2011, and we appended PRISM to extend the database through 2018. We found errors from interpolating PRISM’s 4-km horizontal grid to Livneh’s ~6-km grid to be small.

### Accumulation and Duration Criteria

We begin by finding individual grid points that experienced extreme conditions over a given 14-day window. As mentioned in the introduction, a point is deemed extreme if it passes criteria for both accumulation and duration. We impose a duration criterion on top of an accumulation criterion to ensure the events we find are more persistent in nature, as such events are more likely to be predictable at meaningful lead times.

For the accumulation threshold, we selected the long-term 99th percentile. The 99th percentile was calculated using each 14-day sum over all 104 years for each grid point and each overlapping 14-day window (i.e., 1–14 January, 2–15 January, etc.). Day-to-day variations in the 99th percentile were also smoothed via Fourier smoothing by retaining the leading 3 coefficients.
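A sketch of this threshold computation for a single grid point, using synthetic daily data in place of Livneh/PRISM; “leading 3 coefficients” is interpreted here as the mean plus the first two annual harmonics, which may differ from our exact convention:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily precipitation for one grid point: 104 years x 365 days
precip = rng.gamma(shape=0.3, scale=5.0, size=(104, 365))

# 14-day totals for windows beginning on each calendar day (wrapping the year)
padded = np.concatenate([precip, precip[:, :13]], axis=1)
sums14 = np.lib.stride_tricks.sliding_window_view(padded, 14, axis=1).sum(axis=-1)

# 99th percentile across the 104 years, separately for each window start
q99 = np.percentile(sums14, 99, axis=0)

# Fourier smoothing: zero all but the leading 3 coefficients of the annual cycle
coeffs = np.fft.rfft(q99)
coeffs[3:] = 0.0
q99_smooth = np.fft.irfft(coeffs, n=365)
```

The smoothed threshold retains only the slowly varying seasonal cycle, removing day-to-day sampling noise in the raw percentiles.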

For a grid point to pass the duration criterion, at least 7 of the 14 days (i.e., half the days) in the period must receive greater than or equal to the long-term mean daily precipitation. We defined the long-term mean daily precipitation as the mean daily precipitation over all 1456 days (14 days × 104 years) in each 14-day period of interest.
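The duration check for a single grid point and window amounts to a simple count; a minimal sketch with illustrative (not observed) values:

```python
import numpy as np

def passes_duration(window_precip, long_term_daily_mean):
    """True when at least 7 of the 14 days meet or exceed the long-term daily mean."""
    window_precip = np.asarray(window_precip)
    return bool(np.sum(window_precip >= long_term_daily_mean) >= 7)

# Hypothetical 14-day windows against a hypothetical long-term daily mean of 2.0 mm
wet_window = [5.0] * 8 + [0.0] * 6   # 8 qualifying days -> passes
dry_window = [5.0] * 3 + [0.0] * 11  # 3 qualifying days -> fails
```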

### Extreme Period Shape: Kernel Density Estimation

For a given 14-day period, we first found every grid point in the CONUS that met **both** criteria mentioned above. To develop an extreme period shape from the spatial points labelled as extreme, we employed kernel density estimation (KDE). In short, KDE takes the extreme grid points and fits a 3-D probability density function (PDF) whose vertical coordinate is larger where many extreme points lie in close proximity. We used the Epanechnikov kernel with a bandwidth of 0.02, finding this kernel superior to others in outlining the extreme points without over-smoothing. Note that, because Livneh and PRISM are only defined over land, the extreme shapes we developed tend to rigidly follow land-sea and geopolitical boundaries.
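A minimal sketch of the KDE step, using scikit-learn’s `KernelDensity` (a stand-in; we do not name our software here) on a few hypothetical lon/lat points rather than the full ~6-km grid:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical lon/lat coordinates (degrees) of grid points labelled extreme:
# a tight cluster plus one isolated point
extreme_pts = np.array([
    [-86.00, 37.50], [-85.99, 37.50], [-86.00, 37.51],  # tight cluster
    [-92.50, 34.70],                                    # isolated point
])

# Kernel and bandwidth named in the text
kde = KernelDensity(kernel="epanechnikov", bandwidth=0.02).fit(extreme_pts)

# Evaluate the density at a few locations, then normalize the field to peak at 1
eval_pts = np.array([
    [-86.00, 37.50],   # inside the tight cluster -> highest density
    [-92.50, 34.70],   # at the isolated point -> lower density
    [-95.00, 40.00],   # far from any extreme point -> zero density
])
density = np.exp(kde.score_samples(eval_pts))  # score_samples returns log-density
density = density / density.max()              # normalize so the maximum is 1
```

Because the Epanechnikov kernel has compact support, locations far from every extreme point receive exactly zero density, mirroring the zero values described over northern Missouri above.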

After deriving the 3-D PDF, we normalized the field by dividing by the maximum value, allowing us to directly compare PDFs generated across space and time. Initial extreme period shapes were developed using the 0.2710 contour, chosen by finding the 99^{th} percentile of all KDE fields between 1915 and 2018. Finally, we denote a derived shape as being a potential extreme period if the areal extent is at least 200,000 km^{2}.
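The contour-and-area step can be sketched as follows; as an approximation, the 0.2710 contour is emulated by thresholding a gridded field and labelling connected components with SciPy, and the per-cell area is a rough stand-in for the ~6-km Livneh cells:

```python
import numpy as np
from scipy import ndimage

CONTOUR_LEVEL = 0.2710     # 99th percentile of all 1915-2018 KDE fields
MIN_AREA_KM2 = 200_000
CELL_AREA_KM2 = 36.0       # rough area of a ~6 km x ~6 km grid cell (assumption)

# Hypothetical normalized KDE field on a small grid
field = np.zeros((200, 200))
field[5:15, 5:15] = 0.5        # small blob: will fail the area criterion
field[50:150, 40:160] = 0.8    # large coherent maximum: will pass

# Threshold at the contour level, then label the connected regions
labels, nblobs = ndimage.label(field >= CONTOUR_LEVEL)
areas = ndimage.sum(np.ones_like(field), labels, index=range(1, nblobs + 1))

# Keep only shapes meeting the 200,000 km^2 areal extent
keep = [lab for lab, a in zip(range(1, nblobs + 1), areas)
        if a * CELL_AREA_KM2 >= MIN_AREA_KM2]
```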

### Postprocessing Event Periods

To catalogue a database of extreme periods, we first test all 14-day periods between 1 January 1915 and 31 December 2018 and record any period that produces a shape meeting the 200,000 km^{2} areal extent. However, one consequence of testing each individual period was that our database contained many “repeat” periods, as shown in the step-by-step example above. Our postprocessing algorithm therefore grouped extreme periods together if they intersected in time and had a spatial correlation of at least 0.5. From each group, the period kept for our database was the one considered the “most extreme”. To make this choice, we defined the “total over extreme” (TOE): $$\mathrm{TOE} = \sum_{i=1}^{n}\left(P^{i}_{total} - P^{i}_{q99}\right)$$ where $n$ is the number of extreme grid points inside a period’s shape, and the quantity inside the summation is the period’s total precipitation at grid point $i$ minus that point’s 99^{th} percentile threshold. Note that only those grid points labelled extreme inside the extreme shape are included in the sum. The final period chosen for the database was the one with the largest TOE.
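The TOE comparison can be sketched as follows, with tiny hypothetical grids standing in for two overlapping candidate periods in the same group:

```python
import numpy as np

def total_over_extreme(p_total, p_q99, extreme_mask):
    """Sum of (14-day total minus 99th-percentile threshold) over extreme points."""
    return float(np.sum((p_total - p_q99)[extreme_mask]))

# Hypothetical 4x4 grids; only the top-left 2x2 block is labelled extreme
p_q99 = np.full((4, 4), 100.0)                 # threshold field (mm)
extreme_mask = np.zeros((4, 4), dtype=bool)
extreme_mask[:2, :2] = True

period_a = np.full((4, 4), 120.0)              # 20 mm over threshold per point
period_b = np.full((4, 4), 150.0)              # 50 mm over threshold per point

toes = {"A": total_over_extreme(period_a, p_q99, extreme_mask),
        "B": total_over_extreme(period_b, p_q99, extreme_mask)}
most_extreme = max(toes, key=toes.get)         # keep the period with largest TOE
```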

After running our postprocessing algorithm, version 1.0 of the Livneh-PRISM 14-day extreme database contains 851 unique events.

### Developing Regions: *k*-means Clustering

Another main objective when developing our database was to objectively define typical regions of 14-day extreme precipitation. We performed our clustering in such a way as to allow some events to remain unclustered, enabling investigation into why those events did not fit the “typical” pattern of other extreme periods. To develop our regions, we performed *k*-means clustering (where *k* represents the number of regions) on our database of events in an iterative manner:

1. Cluster *n* periods into *k* regions.
2. Calculate the silhouette score *s*, a measure in [-1, 1] of how well a given extreme period fits its assigned region, for all *n* periods.
3. Find the number of periods with negative silhouettes ($n^*$), indicative of a bad regional assignment.
4. If $n^* \ne 0$, remove the periods with $s < 0$ and repeat from step 1; otherwise, end.
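The loop above can be sketched with scikit-learn (an assumed implementation; synthetic 2-D “event features” stand in for the real period descriptors):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

def iterative_kmeans(X, k, seed=0):
    """Cluster, drop events with negative silhouettes, repeat until none remain."""
    keep = np.arange(len(X))
    while True:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X[keep])
        s = silhouette_samples(X[keep], labels)
        if not (s < 0).any():
            return keep, labels, s       # every retained event fits its region
        keep = keep[s >= 0]              # discard badly assigned events, re-cluster

# Synthetic event features: two well-separated groups plus one stray event
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2)),
               [[2.5, 2.5]]])
kept, labels, s = iterative_kmeans(X, k=2)
```

Events pruned along the way are exactly those left unclustered, matching the behavior described in the text.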

We varied the number of regions from 5 to 30 and created the “Hybrid Index” (HI) to determine the optimal number of clusters. HI is defined as the product of the number of retained events (remember, some events get thrown out during the clustering process) and the average silhouette score across all periods. The optimal number of regions is the one that maximizes HI, which for our 14-day Livneh-PRISM database was 15 regions. Overall, 102 of the 851 periods were not assigned to a region.
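The HI selection reduces to picking the *k* with the largest product; a sketch with made-up retention counts and silhouette scores (the values below are illustrative only, arranged so the optimum mirrors the 15-region result stated above):

```python
def hybrid_index(n_retained, mean_silhouette):
    """HI = (number of retained events) x (average silhouette score)."""
    return n_retained * mean_silhouette

# Hypothetical (n_retained, mean_silhouette) outcomes for candidate k values
candidates = {5: (820, 0.10), 15: (749, 0.30), 30: (600, 0.25)}

hi = {k: hybrid_index(n, s) for k, (n, s) in candidates.items()}
best_k = max(hi, key=hi.get)   # choose the k that maximizes HI
```

HI balances two competing goals: keeping as many events as possible while ensuring the retained events fit their regions well.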

We also performed extensive sensitivity testing by varying the initial region centers and by applying other algorithms and found consistent results throughout. The final clusters are shown below:

Point of contact: Ty Dickinson, ty.dickinson@ou.edu