The context dependence of frontier versus wilderness conservation priorities

Much of conservation planning has focused on how we should prioritize areas for protection based on biodiversity and cost, but less is known about how we should prioritize areas based upon the level of threat they face. We discuss two opposing threat prioritization strategies: frontier conservation (prioritizing high‐threat areas) and wilderness conservation (prioritizing low‐threat areas). Using a temporally explicit model, we demonstrate that the best strategy depends on a variety of factors, including protection costs, heterogeneity in biodiversity, biodiversity–area relationships, the rate of biodiversity recovery, the rate of change in threats through time, and the timeframe within which we measure conservation outcomes. By quantitatively comparing the impact of these strategies, we aim to shift the debate away from a simple dichotomy of frontier versus wilderness, toward an understanding of the context‐specific benefits of each option, and a discussion of how threat combines with other factors to determine spatial conservation priorities.


INTRODUCTION
Systematic conservation planning aims to protect biodiversity features from threats that might compromise their persistence, so that overall biodiversity value is maximized within a planning region (Margules & Pressey, 2000). Despite these aims, many approaches to conservation prioritization consider only biodiversity value and/or conservation costs, without considering threats (e.g., UNEP-WCMC, 2008). When threats are not considered, areas unlikely to lose biodiversity might be protected, leading to "residual" protected areas (PAs; Devillers et al., 2015; Joppa & Pfaff, 2009). To avoid residual outcomes, sites should be prioritized for protection based upon three key factors: their biodiversity value, the costs of protection, and the imminence and/or severity of threats they face (Merenlender, Newburn, Reed, & Rissman, 2009; Newburn, Berck, & Merenlender, 2006; Pressey & Taffs, 2001; Visconti, Pressey, Segan, & Wintle, 2010; Wilson, McBride, Bode, & Possingham, 2006).
It is intuitive and widely accepted that sites with high biodiversity value and low cost should be prioritized. However, a serious and fundamental debate remains about whether conservation investment should seek out or avoid sites facing high levels of threat. Some approaches advocate the protection of sites imminently facing high levels of threat (henceforth referred to as "frontier" areas; Hoekstra, Boucher, Ricketts, & Roberts, 2005;Ricketts et al., 2005;Venter et al., 2014). Others advocate the protection of sites facing lower levels of threat, and sites likely to become threatened in the more distant future (henceforth referred to as "wilderness" areas; Graham & McClanahan, 2013;Klein et al., 2009;Mittermeier et al., 2003;Watson et al., 2018). Each strategy is supported by cogent arguments: frontier conservation avoids immediate biodiversity losses, while a wilderness strategy can secure large intact areas and pre-empt future threats. All conservation prioritization frameworks exist on a continuum between frontier and wilderness (Brooks et al., 2006), either explicitly or implicitly, and millions of dollars of conservation funding are allocated accordingly.
Our goal is to explore the range of factors (e.g., threats, costs, and biodiversity values) that might influence the relative impact of frontier and wilderness conservation strategies. Several recent analyses have used real conservation landscapes to show how conservation impacts depend upon a suite of factors, including the spatial relationship between threats and costs (Visconti et al., 2010), the species-area relationship within a region (Spring, Cacho, Mac Nally, & Sabbadin, 2007), and decision-makers' time preferences (Armsworth, 2018). However, the size and complexity of these landscapes allow for only one or two factors to be explored. Here we present a theoretical planning landscape in which it is possible to systematically vary and control a range of factors. Our aim with this general model is not to provide specific recommendations for particular conservation landscapes, but instead to offer a clearer picture of how multiple factors interact to determine the relative impact of wilderness and frontier strategies. Crucially, we show that both frontier and wilderness strategies can deliver the greatest conservation impact under different conditions, and that in some cases, a combination of both strategies is most effective. In doing so, we hope to progress the debate beyond a simple dichotomy, toward an understanding of the conditions that determine which strategy delivers the greatest impacts.

GENERAL CONSERVATION MODEL
We integrated the suite of factors that determine conservation impact using a deterministic two-patch landscape, where the objective was to maximize biodiversity value across both patches. In this formulation, one patch faces high levels of threat (frontier patch), while the other faces low levels of threat (wilderness patch). Managers allocate a proportion (0%-100%) of their conservation budget to each patch, which is then immediately used to purchase and protect land.
For the two-patch system, we quantitatively defined a set of key factors that affect conservation decisions, each of which is commonly discussed in the context of the frontier/wilderness debate (Visconti et al., 2010). These are: the biodiversity value of each patch, the cost of protection (e.g., acquisition, transaction, and opportunity costs; Naidoo et al., 2006), the biodiversity-area relationship (i.e., the species-area relationship; MacArthur & Wilson, 1967), the proportion of biodiversity unique to each patch, the rate at which biodiversity recovers following protection, the rate of change in threats (static or dynamic), and the timeframe over which conservation benefits are measured. In the discussion below, we use the model to test the effect of each factor, and then draw on examples from the literature to discuss how each factor is likely to influence frontier and wilderness conservation priorities.

Model description
At time t, the total extant biodiversity value of the system, s_t, is given by the equation:

s_t = Σ_{i ∈ {F, W}} s_{i,0} [ p_i^z + (1 − p_i^z)(1 − q_i)^t ],    where p_i = b_i / c_i,

and where the subscripts F and W denote the frontier and wilderness patches, respectively, and s_{i,0} is the amount of biodiversity value present in patch i at t = 0. The model contains three components: p_i specifies the proportion of each patch that can be protected, given the budget allocated to each patch (b_i) and the cost of protecting each patch (c_i); (1 − q_i)^t specifies the proportion of unprotected biodiversity remaining in each patch after t years, given the annual loss rate (q_i); and (1 − p_i^z) specifies the proportion of biodiversity exposed to this loss, given the proportion of biodiversity that is protected (p_i^z). Protected patches experience no loss of biodiversity value (see Supporting Information for alternative scenarios). We denote the total budget as B = b_F + b_W. The parameter z accounts for the non-linear relationship between area and biodiversity (Murdoch et al., 2007). For consistency with other conservation prioritization analyses, we assumed a value of z = 0.25 (see Supporting Information for alternative values). To measure relative impact, all strategies were compared to a counterfactual scenario, where neither patch is protected (i.e., B = 0).
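For illustration, the two-patch model described above can be sketched in a few lines of code (a minimal Python sketch; the published interactive model is a Shiny application, so function names and default parameter values here are our own illustrative assumptions):

```python
# Minimal sketch of the two-patch model described above. Symbols follow the
# text (b = budget, c = cost, q = annual loss rate, z = biodiversity-area
# exponent); all default values are illustrative assumptions.

def patch_value(s0, b, c, q, t, z=0.25):
    """Extant biodiversity value of one patch after t years."""
    p = min(b / c, 1.0)       # proportion of the patch that can be protected
    protected = p ** z        # proportion of biodiversity value protected
    # Protected biodiversity persists; unprotected biodiversity decays at rate q.
    return s0 * (protected + (1 - protected) * (1 - q) ** t)

def total_value(b_f, b_w, t, s0=1000.0, c=100.0, q_f=0.10, q_w=0.01):
    """Total value across the frontier (F) and wilderness (W) patches."""
    return patch_value(s0, b_f, c, q_f, t) + patch_value(s0, b_w, c, q_w, t)

# Impact is measured against a counterfactual with no protection (B = 0).
impact = total_value(50.0, 50.0, t=50) - total_value(0.0, 0.0, t=50)
```

A fully protected patch (b = c) retains its entire initial value, while an unprotected patch decays geometrically; any positive budget therefore yields a positive impact relative to the counterfactual.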

Analyses
We measured impact across the full range of allocation decisions, ranging from total frontier protection (b_F = 100%, b_W = 0%) to total wilderness protection (b_F = 0%, b_W = 100%). We then calculated the relative impact of strategies when our key factors were varied in both isolation and combination, while other parameters were kept at their default values (Table S1). We focused particularly on changes in the ratio of costs and threats, since frontier areas are often characterized as high-threat/high-cost, and wilderness areas as low-threat/low-cost (e.g., Armsworth, 2018). We also considered how strategies performed when the relative biodiversity value of the frontier and wilderness patches changed.
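The sweep across allocation decisions can be illustrated numerically (a self-contained Python sketch of the two-patch model, not the published implementation; the static-threat parameter values mirror those described in the text and are otherwise illustrative):

```python
# Sketch of the allocation sweep: vary the share of the budget allocated to
# the frontier patch from 0 to 1 and pick the most effective split.
# Illustrative static-threat parameters: q_F = 10%/yr, q_W = 1%/yr.

def total_value(b_f, b_w, t, s0=1000.0, c=100.0, q_f=0.10, q_w=0.01, z=0.25):
    value = 0.0
    for b, q in ((b_f, q_f), (b_w, q_w)):
        protected = min(b / c, 1.0) ** z   # proportion of biodiversity protected
        value += s0 * (protected + (1 - protected) * (1 - q) ** t)
    return value

def best_frontier_share(budget=100.0, t=20, steps=100):
    """Frontier budget share (0-1) that maximizes biodiversity value at year t."""
    shares = [i / steps for i in range(steps + 1)]
    return max(shares, key=lambda f: total_value(f * budget, (1 - f) * budget, t))

short_term = best_frontier_share(t=10)   # short timeframes favor the frontier
long_term = best_frontier_share(t=200)   # longer timeframes shift toward wilderness
```

With static threats, the best allocation in this sketch remains majority-frontier at both timeframes, but the optimal frontier share falls as the timeframe lengthens, consistent with the results discussed below.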
We also considered three structural variations of our base model. In the first, protection allowed biodiversity value to recover in degraded sites. We considered a scenario where the frontier patch was significantly degraded (25% of the default value) but could asymptotically regain its biodiversity value, as reported in analyses of post-disturbance recovery (Liebsch, Marques, & Goldenberg, 2008; but see Supporting Information for alternative scenarios). In the second, we considered how threats might change over time. One proposed benefit of a wilderness strategy is that it can secure large, low-threat areas that might become highly threatened in the future (Watson et al., 2018). Thus, we allowed the rate of biodiversity loss in the wilderness patch (q_W) to increase over a period of 100 years until it was equal to that of the frontier patch. This modification assumed a sigmoidal transition from wilderness to frontier, as observed in empirical analyses of forest clearing (Etter, McAlpine, Pullar, & Possingham, 2006). The third structural variant considered the degree of biodiversity complementarity between patches. For this variation, we explicitly defined the proportion of biodiversity value that was endemic to each patch. Full details of all analyses are provided in the Supporting Information.
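The sigmoidal transition in the second structural variant can be sketched with a logistic curve (an illustrative Python form; the midpoint and steepness values are our assumptions, not parameters from the published model):

```python
import math

# Sketch of a sigmoidal (logistic) increase in the wilderness loss rate q_W,
# rising from 1% to 10% per year over a ~100-year transition. The midpoint
# (year 50) and steepness (0.1) are illustrative assumptions.

def wilderness_loss_rate(t, q_start=0.01, q_end=0.10, midpoint=50.0, steepness=0.1):
    """Annual biodiversity loss rate in the wilderness patch at year t."""
    return q_start + (q_end - q_start) / (1.0 + math.exp(-steepness * (t - midpoint)))
```

Early in the simulation the patch behaves as wilderness (loss rate near q_start); by year 100 its loss rate approaches that of the frontier patch (q_end).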
The parameter and model explorations described in this paper are only a small subset of the dynamics that can be produced by our conservation model. To facilitate further exploration, we have published an online interactive version of the model, available at https://edmondsacre.shinyapps.io/Patch/, where all parameters can be manipulated and the impact of alternative prioritization strategies is graphed accordingly.

Costs
Majority protection of the frontier patch generally had the greatest impact on biodiversity value, regardless of costs (Figures 1a, b). However, as the cost of the frontier patch increased, allocating a larger proportion of the budget toward wilderness increased impact, particularly over longer timeframes and when threats were dynamic (Figures 1a, b).
In many real-world contexts, conservation costs are unlikely to be homogeneous between frontier and wilderness areas. Instead, when threats are driven by economically profitable activities, conservation costs and threats might be positively correlated (Boyd, Epanchin-Niell, & Siikamäki, 2015;Merenlender et al., 2009). For land clearing in California, for example, Newburn et al. (2006) found that land with a high probability of being converted had high acquisition costs.
Similarly, Venter et al. (2014) found that threatened terrestrial vertebrate species were more common in areas with high agricultural land value. While these analyses hint that conservation costs and threats might be linked, there is a paucity of empirical analyses examining this relationship across a range of conservation landscapes. Our results suggest that the more positive this relationship, the more resources should be allocated toward wilderness areas.

Aspects of biodiversity value
Frontier prioritization had a greater impact when biodiversity values were equal between patches or higher in the frontier patch (Figures 1c, d). Over shorter timeframes, majority frontier protection was most effective unless the wilderness patch had significantly higher biodiversity value (i.e., more than five times greater; Figures 1c, d). However, over longer timeframes, majority wilderness protection became beneficial if the wilderness patch had moderately higher biodiversity value (∼2 times higher; Figures 1c, d). The amount of biodiversity overlap between patches had no qualitative effect on conservation impact, but reduced the relative difference between strategies overall (Figure S1). When the biodiversity-area relationship was linear (z = 1), the same effects occurred, but it was always optimal to fully protect either the frontier or the wilderness patch, and partial protection of both was always suboptimal (see Supporting Information for further details).
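The corner-solution behavior under a linear biodiversity-area relationship can be checked numerically (an illustrative Python sketch of the two-patch model; parameter values are assumptions, not those of the published analysis):

```python
# Numerical check of the corner-solution behavior when the biodiversity-area
# relationship is linear (z = 1). With z = 1, the benefit of protection is
# linear in the budget, so the best allocation is always a corner: the whole
# budget goes to one patch. Parameter values are illustrative assumptions.

def total_value(b_f, b_w, t=20, s0=1000.0, c=100.0, q_f=0.10, q_w=0.01, z=1.0):
    value = 0.0
    for b, q in ((b_f, q_f), (b_w, q_w)):
        protected = min(b / c, 1.0) ** z
        value += s0 * (protected + (1 - protected) * (1 - q) ** t)
    return value

budget = 100.0
shares = [i / 100 for i in range(101)]
values = [total_value(f * budget, (1 - f) * budget) for f in shares]
best_share = shares[values.index(max(values))]
# For these parameters, the frontier corner (best_share = 1.0) wins; partial
# splits are never optimal when z = 1.
```

Because the payoff is linear in each patch's budget when z = 1, the marginal return of each dollar is constant within a patch, so intermediate splits can never beat the better corner.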
If we consider anthropogenic threats (e.g., land clearing for agriculture, harvesting of natural resources, pollution) over large extents, then we expect frontier landscapes to have high levels of both threat and biodiversity value (Luck, 2007). This is because human populations tend to inhabit productive landscapes that foster high levels of biodiversity (Chown, van Rensburg, Gaston, Rodrigues, & van Jaarsveld, 2003). There is also evidence to suggest that global wilderness areas, because they are often ecologically homogeneous, are relatively species poor (Mittermeier et al., 2003). However, across smaller spatial extents, a negative relationship between threat and biodiversity value is often observed, particularly where anthropogenic threats have been present long enough to cause local declines in biodiversity value (e.g., Turner, Nakamura, & Dinetti, 2004). This scale dependence adds another dimension to the choice between frontier and wilderness: planners working at regional or national scales (e.g., ecoregions) could justifiably prioritize frontier areas, while planners working at smaller local scales might justifiably focus on areas with greater wilderness value.

The rate of biodiversity recovery
When the frontier patch had a low initial biodiversity value, but could recover following protection, frontier prioritization had an increased impact (Figures 2b, d). Interestingly, this effect was more pronounced with lower initial biodiversity values of the frontier patch (Figure S3), because more degraded patches had higher recovery potential. Thus, frontier prioritization can produce large gains in biodiversity value, relative to initial conditions, when the frontier patch is substantially degraded but can recover. Furthermore, when the frontier patch had substantial recovery potential, changes to cost had minimal effect on the relative impact of each strategy (Figure S4).

FIGURE 1 Variation in the most effective budget allocation across different timeframes. Panels a and b show how the most effective strategy varies according to the ratio of costs between the frontier and wilderness patches (c_F/c_W). The maximum and minimum cost ratios represented on the y-axes of panels a and b are where costs were 10 times higher in the frontier patch (c_F = 1,000, c_W = 100) and 10 times higher in the wilderness patch (c_F = 100, c_W = 1,000), respectively. Panels c and d show how the most effective strategy varies according to the ratio of biodiversity values between the frontier and wilderness patches (s_F/s_W). The maximum and minimum biodiversity value ratios represented on the y-axes of panels c and d are where biodiversity values were 10 times higher in the frontier patch (s_F = 1,000, s_W = 100) and 10 times higher in the wilderness patch (s_F = 100, s_W = 1,000), respectively. Panels a and c represent the most effective strategies when threats were static; in these scenarios, the rates of biodiversity loss in the frontier patch (q_F) and the wilderness patch (q_W) were 10% per year and 1% per year, respectively. Panels b and d represent the most effective strategies when threats were dynamic; in these scenarios, the rate of biodiversity loss in the wilderness patch increased sigmoidally from 1% to 10% over 100 years, while threats remained static in the frontier patch. For all scenarios, untested factors were left at their default values (Table S1). Full details of model parameters are available in the Supporting Information.
Wilderness areas, by definition, are closer to their pristine state and, therefore, are likely to have minimal recovery potential. More degraded frontier areas, on the other hand, might have significant recovery potential. However, such areas might have been degraded to a point where trophic cascades and ecosystem shifts could inhibit recolonization and habitat recovery after protection. The recovery potential of degraded areas will depend strongly upon the proximity and connectivity of degraded and intact habitats, and the particular characteristics of habitats within a planning region (Jones & Schmitz, 2009). For example, differences in dispersal capabilities between marine and terrestrial species might mean that frontier marine systems have greater recovery potential than terrestrial ones (Carr et al., 2003).
In regions where recovery is unlikely to contribute toward conservation objectives (e.g., the return of extirpated species), or is likely to occur only over long timeframes (e.g., habitats containing slow-growing species), protecting some wilderness areas might be beneficial. In regions where biodiversity is likely to substantially recover within the required timeframes (e.g., habitats containing fast-growing species), frontier prioritization is likely to have a greater impact.

Temporal change in threats
When threats increased over time in the wilderness patch, the relative impact of wilderness prioritization increased (Figures 1b, d, and 2c). This effect was amplified over time, as the wilderness patch transitioned into a frontier patch. When threats were dynamic, partial protection of both patches also became more effective relative to total frontier or wilderness protection (Figures 1b, d, and 2c). This is because the biodiversity-area relationship dictated that there were diminishing returns on investment in each patch, and a split protection approach cost-effectively mitigated short-term losses in the frontier patch and long-term losses in the wilderness patch. This effect was amplified when there was a greater difference in initial threat levels between the frontier and wilderness patches (Figure S6).

FIGURE 2 Impact of alternative budget allocations under different model scenarios. Panel a shows the base scenario, with all factors at their default values (Table S1). Panel b shows a scenario where biodiversity value in the frontier patch was degraded to 25% of its potential value (s_F = 25), but could recover to potential levels if protected (see Supporting Information for more details). Panel c shows a scenario where the rate of biodiversity loss was dynamic in the wilderness patch. Panel d shows a scenario where biodiversity value could recover in the frontier patch, and the rate of biodiversity loss was dynamic in the wilderness patch. In the dynamic threats scenarios, the rate of biodiversity loss in the wilderness patch increased sigmoidally from 1% to 10% over 100 years, while threats remained static in the frontier patch. The black line represents a counterfactual scenario in which neither patch was protected. Full details of model parameters and default values for all factors are available in the Supporting Information.
Threat dynamics are important to consider, given extensive evidence that threats change over time (e.g., Sabbadin, Spring, & Rabier, 2007;Spring et al., 2010). The threat of land development, for example, often follows a "contagion" process, where forested areas that are close to development are cleared, making more distant sites accessible and threatened. Both terrestrial and marine ecosystems exhibit the sigmoidal degradation trajectories that are characteristic of contagion dynamics (Etter et al., 2006;Worm et al., 2009). Such contagion dynamics are a common motivation for wilderness conservation: by undertaking conservation actions before threats arrive, large amounts of future biodiversity loss can be avoided at a relatively low cost. However, because these benefits will be realized in the future, the timeframe over which biodiversity impacts are measured plays a critical role in this scenario, as we explain in the next section.

Timeframe to reach conservation objectives
Wilderness conservation delivered benefits over longer timeframes, at the cost of immediate losses in the frontier patch. Conversely, frontier prioritization performed better over shorter timeframes (Figures 1 and 2), but became less effective over longer timeframes. When threats were dynamic, losses in the wilderness patch occurred sooner, reducing the time required before it became beneficial to prioritize wilderness (Figures 1 and 2).
The effect of timeframe on frontier and wilderness prioritization is particularly important because different conservation actors often pursue goals over different timeframes. In Australia, for example, government timeframes range from years (e.g., State of New South Wales and Office of Environment and Heritage, 2018) to decades (e.g., Natural Resource Management Ministerial Council, 2010). For nongovernmental conservation actors, in contrast, timeframes can extend to centuries (e.g., Pressey, Watts, & Barrett, 2004). This variation may arise from different political and funding cycles, or from differing objectives. For example, short-term impacts will be most important when conserving endangered species or habitats that face imminent extinction. Similarly, where livelihoods and ecosystem service objectives are concerned, standard economic discount rates, where short-term benefits are favored over long-term benefits, might be most appropriate (Armsworth, 2018). In such cases, frontier prioritization is likely to have a greater impact. In contrast, where practitioners are working toward long-term goals, wilderness prioritization might have a greater impact.
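The effect of discounting on this choice can be shown with a toy calculation (a hypothetical Python sketch; the benefit streams and 5% rate are invented for illustration, and exponential discounting is a standard formulation rather than one specified in our model):

```python
# Toy illustration of discounting: two hypothetical streams of annual avoided
# biodiversity loss. The frontier-style stream delivers benefits early; the
# wilderness-style stream delivers larger benefits later. All numbers are
# invented for illustration.

def npv(annual_benefits, rate):
    """Net present value of a stream of annual benefits at a fixed discount rate."""
    return sum(b / (1.0 + rate) ** t for t, b in enumerate(annual_benefits))

years = 100
frontier_stream = [10.0 if t < 20 else 1.0 for t in range(years)]
wilderness_stream = [1.0 if t < 20 else 10.0 for t in range(years)]

# With no discounting the late-arriving wilderness benefits dominate, but a
# standard 5% discount rate reverses the ranking in favor of the frontier.
undiscounted_gap = npv(wilderness_stream, 0.0) - npv(frontier_stream, 0.0)
discounted_gap = npv(frontier_stream, 0.05) - npv(wilderness_stream, 0.05)
```

Both gaps are positive in this sketch: the same pair of strategies ranks differently depending on how strongly future benefits are discounted.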

CONCLUSION
Our results demonstrate that the impact of different threat prioritization strategies can vary dramatically depending on how threats relate to other factors. Furthermore, we have shown that interactions between key factors can amplify or suppress the effects of others. For example, costs can heavily influence frontier and wilderness impacts, but this influence is suppressed if frontier areas are degraded and have significant recovery potential, and amplified if threats are dynamic (Figures 1a, b, 2b, d). It is essential, therefore, that conservation practitioners consider these relationships when developing conservation prioritizations. Much of the data required to quantify these processes are readily available. Information on biodiversity values and costs is widespread and commonly used across a variety of conservation contexts, although there are concerns about its accuracy for conservation planning purposes (Adams, Pressey, & Naidoo, 2010;Armsworth, 2014). Data on recovery potential have been collated in both terrestrial (Liebsch et al., 2008) and marine environments (McClanahan, Maina, Graham, & Jones, 2016). Even models of threat dynamics are available for some terrestrial habitats (Etter et al., 2006), and could feasibly be constructed for others. To incorporate timeframes, planners need only explicitly state their objectives, or identify relevant discount rates (Armsworth, 2018).
Our general model identifies and isolates factors that are likely to be influential within particular planning regions. However, a two-patch model does not account for the potentially complex spatial distribution of frontier and wilderness areas, or how this distribution might affect important spatial processes, such as species' dispersal. For specific conservation contexts, more extensive analyses that account for the characteristics of habitats within the planning region are required. In addition to the factors discussed above, further analyses should consider rates of biodiversity loss within protected areas (explored partially in Supporting Information), the displacement of threats from protected to unprotected areas (i.e., leakage; Ewers & Rodrigues, 2008), species persistence in relation to fragmentation and connectivity (see Visconti et al., 2010b), and rates of protected area downgrading, downsizing, and degazettement.
Importantly, our results do not support the use of either frontier or wilderness strategies. Instead, they stress the importance of context in deciding which approach will deliver the greatest benefits from limited conservation resources. Our results also clearly show that failure to quantify, or at least consider, all relevant factors might produce prioritizations that have a much lower impact than expected. Specifically, wilderness-focused conservation efforts that neglect to consider heterogeneity in recovery potential, and the specific timeframes to reach objectives, will likely have suboptimal conservation impacts. Likewise, frontier-focused conservation efforts that neglect to consider heterogeneity in costs, threat dynamics, and biodiversity values will likely have suboptimal impacts.