
Foresight Myths, Debunked

Leveraging Futures Studies for German Security Policy

German Minister of Defense, Boris Pistorius, looking ahead.  | Photo: Jane Schmidt (Bundeswehr)/flickr

“Prediction is very difficult, especially when it is about the future,” goes a quip attributed to Danish physicist Niels Bohr. According to Greek myth, the Trojan priestess Cassandra’s predictions were ignored (with disastrous consequences) – as were the warnings of Germany’s Eastern European neighbors in the lead-up to Russia’s invasion of Ukraine. How can we leverage foresight and futures studies to improve German security policy? Six hypotheses, put to the test.

“Futurology is pseudo-intellectual tea leaf reading.”

Not if it is done professionally. Like political science, serious futurology is based on methods from a range of different disciplines, including psychology, history, mathematics, and sociology. Universities offer degree programs in futures studies, and multinational companies like Shell as well as international organizations like the United Nations have their own futurology (or “foresight”) teams.

Futures studies is much more practical than many other academic disciplines, with futurologists embedded in organizations or companies. According to a 2020 survey by the Nuremberg Institute for Market Decisions, 90 percent of major European and US companies engage in strategic foresight. A good third of them have departments especially dedicated to foresight. For NATO, defense ministries and defense contractors – all of whom are developing, commissioning and producing military technology for the next 20 to 50 years – professional foresight is essential. NATO, for example, uses climate futures to determine what temperatures a helicopter on NATO’s southern flank must be able to withstand in 2060 – or whether helicopters need to be replaced by a different technology altogether. 

In fields that cannot afford to ignore relevant developments and trends, foresight gets treated in a serious, professional and structured way to avoid surprises, plan for the future and shape it.

“Society and politics should take a more laid-back approach and accept the uncertainty associated with the future.”

No, that would be disastrous. Few people consciously think about what our world could look like in 2060. And when they do, the human brain produces a range of biases that served us as hunter-gatherers but are not fit for strategic decisions in today’s world. 

For example, “recency bias” leads us to believe that things we have just experienced are very likely to happen again in the future. Conversely, events that happen in far-away places – think of the Ebola epidemic that hit West Africa just a few years before the coronavirus pandemic – feel safely irrelevant to us and do not spark a swift response to prepare for a similar situation at home.

People generally underestimate just how much the world will change in the coming years. The scale of the future(s) also matters: we tend to assess societal developments and the global future with above-average pessimism, while viewing our own immediate future with above-average optimism. This differentiation makes perfect evolutionary sense: our personal optimism helps us get out of bed every morning. But we should not let our tendency to feel despair over the world’s future push us into a mode of fatalistic anguish. Instead, visualizing positive developments in vivid detail is a helpful way to train our perception to see positive opportunities.

When it comes to political decisions, our gut feelings alone, which were programmed to respond to a pre-modern reality, are not a good guide. This is especially true in group contexts, where social and institutional biases add to cognitive biases. For example, we are socially conditioned to avoid conflict with our political allies and closest colleagues. This can produce “groupthink,” where seeking group consensus and harmony leads to bad decisions. A reliable social network, after all, remains just as important for survival in today’s workplace as it was in the Stone Age.

At the political level, these fallacies have grave consequences. Political leadership in Germany, for example, has excelled at ignoring even the best analyses due to unchecked biases. Former German Chancellor Angela Merkel, her party, and its coalition partners failed to question their assumptions and wishful thinking about Vladimir Putin’s intentions for the European security order – despite receiving new information to the contrary. Merkel’s lack of willingness to reconsider and admit mistakes was not a responsible approach to security policy.

“All futurology is based on extrapolating trends.”

Quite the opposite! In futures studies, we rarely speak of “the” future, but rather of multiple futures. Extrapolating data into the future is a banal and often unhelpful way of thinking about what is to come – it can give people a false sense of certainty. One example of a simple extrapolation (and its limitations) is the growing size of the world’s population. Until ten years ago, scientific articles and German schoolbooks included a graph on the development of the global population, which made it look like exponential growth and overpopulation were inevitable. Today, we know that the underlying assumptions (like birth rates and life expectancies) change over time, rendering the extrapolation useless. The models have been adapted accordingly. And indeed: a net shrinking of the world population is currently projected to happen by 2100 at the latest.
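The population example can be made concrete with a toy calculation (the numbers below are purely illustrative, not actual demographic data): a naive extrapolation that locks in today’s growth rate forever climbs without end, while a projection whose growth-rate assumption is allowed to decline – as birth rates in fact have – peaks and then shrinks.

```python
def project(pop, rate_fn, years):
    """Project a population forward one year at a time.

    pop: starting population (in billions)
    rate_fn: annual growth rate as a function of the year index
    """
    trajectory = [pop]
    for year in range(years):
        pop *= 1 + rate_fn(year)
        trajectory.append(pop)
    return trajectory

# Naive extrapolation: lock in a 1 percent growth rate forever.
naive = project(8.0, lambda year: 0.01, 80)

# Assumption-aware projection: the growth rate declines linearly and
# eventually turns negative (illustrative figures, not UN projections).
adaptive = project(8.0, lambda year: 0.01 - 0.0003 * year, 80)

print(f"naive estimate after 80 years:    {naive[-1]:.1f} billion")
print(f"adaptive estimate after 80 years: {adaptive[-1]:.1f} billion")
```

The point is not the specific numbers, but that the entire shape of the future hinges on an assumption the naive extrapolation never surfaces.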

Anyone who thinks this trend reversal was obvious was either blessed with a healthy dose of skepticism regarding deterministic predictions, or has fallen victim to the common psychological tendency of retroactively finding simple explanations for things that seemed murky at the time. Highlighting the assumptions that underlie projections, communicating statistical error terms and different scenarios, joining a forecasting tournament or keeping a diary of your own assumptions about the future can all be helpful exercises to check our biases – and should be mandatory for employees of geopolitical analysis units.
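Such a forecasting diary only disciplines our biases if it is scored. A common metric in forecasting tournaments is the Brier score: the mean squared gap between stated probabilities and actual outcomes, where 0 is perfect and hedging everything at 50 percent earns 0.25. The diary entries below are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0..1)
    and binary outcomes (0 or 1). Lower is better; 0 is perfect."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical diary: the probability assigned to each event, and
# whether the event actually occurred (1) or not (0).
confident_and_right = brier_score([0.9, 0.8, 0.1], [1, 1, 0])
hedging_everything = brier_score([0.5, 0.5, 0.5], [1, 1, 0])

print(f"calibrated forecaster: {confident_and_right:.3f}")
print(f"coin-flip hedger:      {hedging_everything:.3f}")
```

Scoring entries this way rewards forecasters who are both confident and right, and exposes the false comfort of hedging every question at fifty-fifty.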

Simple extrapolation is decidedly not what we do in futures studies. Instead, we analyze the systemic effects of different factors and underlying assumptions, and critically question images of the future. Doing so is crucial, since projections, predictions and plausible courses of events are relevant for political decisions across different fields. The question of population growth, for example, matters for determining the future size of Germany’s skilled working population in the economy, the demand for schoolteachers (education), or even the size of the military personnel base in case of a national emergency (defense).

Even if the future cannot be predicted precisely, it is still important to clearly separate “desirable” futures from “probable” futures and expand the range of plausible futures considered in strategic planning, to be better prepared for eventualities that our biases lead us to quickly discount.

Anyone who engages in foresight purely by way of extrapolating trends easily falls into a mode of fatalism and runs the risk of becoming a passive recipient of someone else’s future. Instead, we must explore futures actively and think about how we can work on those futures collectively; futurologist Florence Gaub has likened this work to “playing on the offense” in soccer.

“So-called ‘megatrends’ are already known and widely available.”

Yes, but… Numerous organizations publish reports on megatrends and other future developments. These are helpful for getting a rough overview but are not detailed enough to make one’s own organization “future-proof.” Public reports such as Global Trends (by the US Director of National Intelligence) must be seen for what they are: public communication tools used for strategic purposes. Unsurprisingly, governments are keeping their best “futures,” including those on quantum technology or cryptocurrencies, to themselves.

Adopting a certain view of the future, just because it gets repeated by many, can be misleading – especially given the human tendency to readily accept anything that’s plausible and corresponds to pre-existing assumptions, while dismissing dissenting opinions. But anyone who has ever taken part in a professional scenario-building workshop can confirm that perceived plausibility can change rapidly when different futures are spelled out in vivid detail. Foresight is no magic, but sometimes it feels like it. 

Different foresight methods serve different purposes and range from prediction to planning for fundamental uncertainty. This wide net of foresight methodologies cannot be replaced by the summary of some megatrend report. Connecting the right method to each purpose is another unique strong suit of applied futures research.

Truly innovative thinking about the future is difficult, but it is worthwhile and can have an immense impact. In 1992, science fiction author Neal Stephenson described the “metaverse” for the first time in his novel Snow Crash. His description is eerily similar to the metaverse that Mark Zuckerberg’s company Meta has actually developed. Did Stephenson predict the metaverse because of his in-depth knowledge of technological developments? Or did he conjure it up himself, with the entire global population soon engaging in the online world he invented? The truth lies somewhere in the middle. That is precisely why organizations such as the French Ministry of Defense employ science fiction authors: so they can help shape the future, before others do so on their terms.

Who gets to imagine the future is ultimately a political question, because visions of the future shape our actions. In critical futures research, experts like Pupul Bisht describe the phenomenon of exclusively letting powerful people imagine the future, while suppressing alternative futures, as “colonizing the future.” Organizations like the Berlin-based group SUPERRR use decolonial approaches to help civil society organizations shape alternative futures on topics like artificial intelligence.

“In the age of ChatGPT, ever-improving algorithms are taking over the task of precise prediction.”

True, but only within certain limits. Prediction models based on large amounts of data and algorithms are indeed able to calculate the probability of certain phenomena. This is how ChatGPT works: using a large language model, it generates the response that is most probable given the input and the patterns in its training data.
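The underlying principle – rank possible continuations by estimated probability and pick the likeliest – can be sketched with a toy bigram model. The mini-corpus below is invented for illustration; real language models learn from billions of documents and do far more than count word pairs, but the idea of choosing by probability is the same in spirit.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in a tiny corpus."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def most_likely_next(model, word):
    """Return the most frequent next word and its estimated probability."""
    counts = model[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# Invented mini-corpus, for illustration only.
model = train_bigrams(
    "the future is plural the future is uncertain the past is fixed"
)
word, prob = most_likely_next(model, "the")
print(word, round(prob, 2))  # "future" follows "the" in 2 of 3 cases
```

Even at this toy scale, the output is a probability estimate, not a fact about the world – which is exactly the limitation the next paragraph turns to.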

A similar principle is applied in political science. Researchers evaluate news from daily newspapers and calculate the probability of war and conflict escalation. Governments and international organizations invest in this research to be as well-informed as possible. It is important for democratic actors to do so, not least because tech companies and authoritarian states use these predictions for their own benefit. The skills needed to understand and use algorithmic predictions, however, remain scarce and should be actively developed to avoid a widening skills gap.

But algorithmic prediction has its limits. Certain methodologies, including those using a “black box” algorithm, do not allow a researcher to figure out which piece of information led to a certain prediction. This makes it difficult to identify the right levers for preventing undesirable developments. And the further into the future a prediction is cast, the less reliable the models become: accuracy decreases the more the predicted future differs from the present.

Elaborate computer-assisted simulations can be an alternative, but they also depend on human assumptions and cannot rely on numbers and algorithms alone. And human expertise is equally necessary to classify results and draw the right conclusions for possible action. Other limitations include the fact that models are dependent on the reliability of the database and can only function optimally for a limited range of phenomena. Finding the right mix of methods is imperative.

“Rare but significant events, such as the fall of the Berlin Wall, 9/11 or the Arab Spring, cannot be predicted by futurology.”

That is only partly true. Of course, the exact circumstances and timing of those events could hardly have been predicted. The example of the fall of the Berlin Wall shows us that major events follow a cascade of developments, all based on many different factors. Nevertheless, three central questions can help us prepare for major political events.

Firstly, how can we observe the right signals that can help us estimate tipping points for trend reversals and major events? There are similarities in the ways certain events play out. One example: the dire economic situation and political repression in the GDR were crucial factors that would eventually lead to a revolution in East Germany – as was the case in Egypt or Syria before the so-called Arab Spring. 

We must always question which information and whose assessments are listened to and who gets ignored. There were signs that both mass demonstrations in Egypt and the Russian invasion of Ukraine might happen – these warnings were not taken seriously (enough). Even when future scenarios get developed, alternative perspectives and relevant expertise are often discounted.

Furthermore, analyzing the probability of events is not enough. Things get really interesting when it comes to correctly assessing reactions and impacts – for example, what were the chances that the Afghan army would stop the Taliban offensive? Or that Ukraine would withstand the Russian invasion?

Secondly, what distortions does the existing security policy structure perpetuate and promote (including current decision-making processes), and how can these be offset? A lack of imagination, a preference for information that matches one’s own opinion and blind spots are well-known analytical problems. Intelligence services therefore apply structured analysis techniques which are similar to those used in strategic foresight.

Thirdly, how can decision-makers be incentivized to act on the basis of warnings and to learn from mistakes? Upon closer inspection, we know that most major events in security policy were preceded by indications that something was afoot. And still, no one took the necessary action at the right time and place. In the case of terrorism, such as 9/11 or the attack on Berlin’s Breitscheidplatz, crucial information did not come together in a useful way, mostly due to fragmentation in the intelligence system. Theoretically, everyone wants prevention, but in practice, the implementation of concrete steps lags and is not rewarded. On top of that, there is insufficient follow-up after events that intelligence services failed to prevent: no inquiry committee is organized, no parliamentary commission is called to action to address the issue.

In the areas of internal security and counterterrorism, the dangers for Germany’s own population are great enough to initiate reforms, albeit slowly. By contrast, when preparing for political events in other parts of the world, Germany takes its time. The state is slow to come to terms with its mistakes and to assume responsibility for them. Germany’s inquiry into the factors that allowed the Taliban to take over in Afghanistan in 2021 is a case in point: this inquiry was too little, too late. 

Applied research into foresight and decision-making processes has significantly contributed to better analyses. However, there is still much to be done. In matters of national security, explicit futures analysis should be the basis of decision-making and must be used to retroactively hold the government accountable for decisions. When drafting decision-making reform, this understanding should be front and center. 

Instead of politically biased opinions from individual ministries, we need a well-documented, integrated overview of the situation at hand – as recommended by Gerhard Conrad of the German discussion group on intelligence services. Strategic foresight and structured futures analysis must become part of political planning; following Finland, Canada, the United States and Singapore, Germany should task integrated analysis units with foresight. And if security policy research is to be helpful at all, it will need access at both the research and the political decision-making level.

Through analyzing warnings about war, Christoph Meyer and his colleagues at King’s College London have found out how to minimize the so-called Cassandra effect of constantly disregarded forewarnings: warnings are more successful when warners have decision makers’ trust and communicate concrete options for preventive action. In turn, decision makers must demand and consistently reward critical dissent across all hierarchical levels, rather than punish dissenting opinions.

Our own research on strategic foresight at GPPi shows that foresight must be geared to the needs of decision makers and should be firmly integrated into existing political processes. At the same time, those engaging in foresight need sufficient independence to be able to fulfill their role as a “critical friend” and corrective force – and futurology as a field must be able to adapt flexibly to methodological developments and needs.

All of this is politically inconvenient – after all, people prefer to make decisions in familiar surroundings with like-minded people. But in a democracy, political power must go hand in hand with the obligation to learn from mistakes and do better.


A version of this article originally appeared in German in Internationale Politik.