“Intelligence does not claim infallibility for its prophecies. Intelligence merely holds that the answer which it gives is the most deeply and objectively based and carefully considered estimate.”

Sherman Kent, 1903–1986, The Father of Intelligence Analysis


Thinking about horizon scanning for signals of change, the processes we use and, mostly . . the absence of robust discussion about how scanners scan. Sure, we’re taught the process: looking for STEEP categories or, as I like to use — PESTLE+.

P — Political
E — Economic
S — Social
T — Technological
L — Legal
E — Environmental
+Values / Mythology / Metaphor / Worldviews

Incorporating Integral Futures dimensions into our scanning posture and perspective [🎓 Joe Voros] encourages a causal layered lens: considering shifts in discourse, how emerging signals of change might be constrained or amplified by cultural myths, and whether they challenge or reinforce certain worldviews . . but is this enough?

There’s a lot of talk in futures circles, amongst those using AI for automated scanning, about the benefits, and many enterprise platforms utilise AI for large-scale web scraping and surfacing signals. There’s no question that AI as a technology can help in some parts of the process, but it surfaces as many problems as it appears to solve when it comes to horizon scanning.

For starters, if everyone is using the same kinds of AI Horizon Scan platforms (or AI scanning models), surely there’s a sampling bias? You can see this in the example below, a quick data pull of the sources I’ve used personally over the past six months.

Many of my sources are USA-based, no surprises there given that many futures resources, think tanks and universities are based in the US.

UK sources represent the second biggest region. I’ve been exploring the future of the built environment, where the UK is leading the charge with labs like Dark Matter Labs, Future Build and The New Civil Engineer. I’ve also been thinking about intergenerational fairness through the lens of the Centre for Postnormalacy Studies, and multi-species perspectives via the Ministry of Multispecies Communications and The Treaty of Finsbury Park 2025; as well as expanding my scanning around more-than-human ecologies to labs like Uroboros in Prague and people like John Thackara in France, who are thinking about design for multi-species cities.

I live in Australia, so it’s also no surprise that Australian sources make up the third biggest region for my scanning sources; but when you look at the source spread below . . it’s not exactly representative, is it? The grey scan sources below come from over 80 other countries, each too small to be represented in its own cluster.
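
For the curious, here’s a minimal sketch of what that quick data pull can look like, assuming a hypothetical CSV export of scan hits with a source_country column; the filename, column name and the 1% cut-off are placeholders of mine, not a description of any particular tool:

```python
import pandas as pd

# Hypothetical CSV export of scan hits; the filename and column name are
# placeholders for whatever your own scanning database exports.
hits = pd.read_csv("scan_hits_last_6_months.csv")

# Count scan hits by source country, as counts and as a share of the total.
by_country = hits["source_country"].value_counts().to_frame(name="hits")
by_country["share"] = (by_country["hits"] / by_country["hits"].sum()).round(3)

# Lump anything under ~1% of hits into an 'Other' bucket, roughly what the
# grey cluster of 80+ small countries represents in the chart.
minor = by_country["share"] < 0.01
other = pd.DataFrame(
    {"hits": [by_country.loc[minor, "hits"].sum()],
     "share": [by_country.loc[minor, "share"].sum()]},
    index=["Other"],
)
print(pd.concat([by_country[~minor], other]))
```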

If horizon scanners are using AI to automatically surface web signals, surely the law of averages suggests that the same kinds of signals will be surfaced amongst scanners (unless they happen to be deeply skilled coders with their own sensitive scanning algorithms).

We all know it, and chances are this article isn’t the only piece of content you’ll read today. Research guesstimates that the average person consumes four articles, 8,200 words and 226 messages daily. Volumes of data are growing rapidly, and a Statista report found the amount of global data is slated to reach more than 394 zettabytes by 2028.

To put it in perspective, a zettabyte equals 1 sextillion bytes (1,000,000,000,000,000,000,000 bytes), or the equivalent of storing 250 billion DVDs.
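
A quick back-of-the-envelope check of that comparison, assuming the standard 4.7 GB single-layer DVD (the rounder 250 billion figure implies roughly 4 GB per disc):

```python
# Back-of-the-envelope check of the zettabyte-to-DVD comparison.
ZETTABYTE = 10**21            # bytes (1 sextillion)
DVD_SINGLE_LAYER = 4.7e9      # bytes, a standard single-layer DVD

print(ZETTABYTE / DVD_SINGLE_LAYER)   # ~2.1e11, i.e. roughly 210 billion DVDs

# The rounder '250 billion DVDs' figure implies about 4 GB per disc:
print(ZETTABYTE / 250e9 / 1e9)        # 4.0 (GB per DVD)
```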

I’m playing around with AI to pull extracts and summaries from scan hits I’ve clipped myself — to help me quickly ascertain the content overview for tagging, clustering or filtering. The actual scanning itself, though, is all human for me.
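
For what it’s worth, here is a minimal sketch of that extract-and-summarise step. It assumes the OpenAI Python client plus a placeholder model name and filename, but any comparable model API would slot in the same way:

```python
from openai import OpenAI  # assumes the OpenAI Python client; any comparable model API works

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarise_scan_hit(text: str) -> str:
    """Return a short overview plus suggested PESTLE+ tags for a clipped scan hit."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you actually run
        messages=[
            {"role": "system",
             "content": ("Summarise this horizon-scanning clip in three sentences, "
                         "then suggest PESTLE+ categories and three keywords for "
                         "tagging, clustering or filtering.")},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# A scan hit I clipped myself; the scanning stays human, only the summary is automated.
with open("scan_hit.txt") as f:
    print(summarise_scan_hit(f.read()))
```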

You can read more about that in AI Workflows

I've written quite a bit on horizon scanning digital workflows and futures research tech stacks, which go some way towards handling large volumes of information and scan hits in the pursuit of rigorous futures research, and towards organising that research. But what about scan hits you’ve collected yourself and then need to review and quantify for credibility, potential implications and contextual rigour? What human methodologies, theoretical frameworks or significant questions should we consider in order to make our futures scanning more robust?

How do we distinguish signal from noise?
What can be trusted? How do we perceive truth OR differentiate a random outlier from substantial signal?

How can we set ourselves up to think more clearly?
We can’t just rely on intuition and experience. Spoiler alert: intuition is often where we go wrong.

How can we better externalise the analytical process to ensure that our signals provide us with a robust chain of custody for new ideas and thinking?
Using the kind of structured analytic techniques the intelligence community uses can help us make our analysis more transparent, more objective and more amenable to criticism. Which means better futures intelligence.

In previous posts I’ve focused on digital workflows and how we might better set up a horizon scanning database to ensure signals are both usable (in format and taxonomy) and useful (in application). There’s clearly also benefit in interrogating how we think about our scanning and the mental models or intellectual postures we take when we approach scanning.
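
As one illustration of what ‘usable and useful’ might mean at the record level, here is a sketch of a signal schema; the field names and values are my own assumptions, not the structure from those previous posts:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Signal:
    """One way to keep scan hits usable (consistent format and taxonomy)
    and useful (fields that support later analysis). Field names are
    illustrative, not a prescribed schema."""
    title: str
    url: str
    captured_on: date
    source_country: str
    pestle_tags: list[str] = field(default_factory=list)  # e.g. ["Technological", "Values"]
    summary: str = ""                                      # short overview for clustering
    credibility: int = 3                                   # 1-5, the scanner's judgement of the source
    assumptions: list[str] = field(default_factory=list)  # assumptions the claim rests on
    implications: str = ""                                 # the so-what for the domain in focus

example = Signal(
    title="City trials a multi-species planning charter",
    url="https://example.org/signal",
    captured_on=date(2024, 11, 2),
    source_country="UK",
    pestle_tags=["Political", "Environmental", "Worldviews"],
)
print(example)
```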

This is what Daniel Kahneman calls Fast Thinking. It’s the kind of thinking that helps us reach a judgement quickly based on incomplete and contradictory information.

This kind of thinking operates automatically and quickly, with little effort and no sense of voluntary control; knowledge is retrieved without intention or effort.

In futures scanning we typically have to make judgements and draw conclusions based on incomplete and contradictory information. System 1 Thinking helps us to notice patterns and similarities (based on past research or experience), but jumping to conclusions is only helpful when those conclusions have a high probability of being true.

System 1 Thinking doesn’t log the alternatives our mind rejects, or even note that there were alternatives. It’s the kind of thinking which is focused on finding and telling a coherent story. It simply works with what is available, not what information is lacking. In fact, the less information available, the easier it is to create a coherent mental picture.

System 1 Thinking largely takes place inside a foresight scanner’s head, a kind of ‘Black Box’ of Futures Scanning. The problem is, this kind of thinking can lead us to unsupportable conclusions and inaccurate judgements. In short, with System 1 Thinking in the driver’s seat, the risk of error is high.

We know from experience that thinking, judging and decision making are difficult to do well. The mere knowledge of potential errors and bias doesn’t insulate us from their impacts on our scanning and research.

It’s here that Structured Analytic Techniques can provide scaffolding for our thinking, offering compensating techniques to mitigate our tendency to grasp the immediate or intuitive answer. Whilst futurists often say ‘Scanning is an Art Form’, and to some extent they’re right, robust and systematic scanning is as important to foresight as the creative and unpredictable.

This is what Kahneman calls Slow Thinking, or System 2. It kicks in when we encounter a complex calculation or a complex analysis problem. It’s where we allocate attention to effortful activity, including complex reasoning and computations. It challenges us to think systematically, working beyond what comes spontaneously or intuitively.

From a futures intelligence perspective, this kind of thinking can help us to systematically challenge our scanning assumptions. We might ask ourselves questions like:

  • What issues surface repeatedly in the context of X Futures?

  • What assumptions are being made in the paper that are potentially subjective or not established, and are deserving of further investigation in the context of X futures?

  • If we were to systematically identify, test, and challenge the assumptions underpinning our futures intelligence research to better understand the challenges and vulnerabilities for X (insert domain or topic here) — what insights might surface that we had not been explicitly looking for?

  • What kind of information might make us change our minds about the issues in this space? What would disprove or render uncertain the key points we’ve taken as certain or objective?

  • What data has been transformed into evidence for the decision making in this research? If we go back to the original data source, how robust is it? And is there another data set which challenges this perspective?

  • We know in foresight, for every trend there’s often a counter trend. Think about precision medicine with an n = 1, in contrast to large language models (LLMs) or the wisdom of crowds.

  • What evidence has been used to identify this trend or cluster of signals as significant? How robust is the evidence and how has it been qualified and defined? What implications does this have for the perspective or signal I’m considering now?

In the context of futures intelligence research . .

  • What kind of information (be specific) would disprove the points being made in this signal or contextual research snip?

  • What information could we seek to disprove the key arguments or claims, or to stress test them?

  • Does the research contain any information that might undermine existing assumptions about the futures possibilities or implications as they relate to X, particularly in areas of possibility or vulnerability?

  • If we had to critique the findings and challenge potential groupthink, to expose any hidden weaknesses or blind spots, what critique might we offer to better understand X futures?

Are there balancing loops within the context system that create stabilising behaviours or new iterative behaviours?

  • What reinforcing loops or forces are amplifying these potential impacts, challenges, vulnerabilities or opportunities?

  • How do natural sciences, social sciences and Indigenous knowledge ‘fit together’ and enhance each other? Is there an understanding of the interconnections between different systems?

  • Can we consider how potential changes in one variable (e.g. new shipping technology) might influence other drivers in the context of particular domain futures?

  • Under what plausible high-impact scenarios (environmental catastrophe, geopolitical crisis, tech breakthroughs) should stakeholders prepare, and what indicators signal these scenarios are unfolding?

  • What critical infrastructure or system vulnerabilities exist, and how can they be mitigated or capitalised upon?

Is a scanner’s methodology ever explicitly discussed in terms of research approach?

  • Are there examples that might help us to open the black box of scanning methodology?

  • Does this piece of research offer any new answers or context for unresolved questions about the futures in focus?

  • How explicitly do we understand the scanner’s decisions about research priorities, and what might be gained from exploring them collectively?

What assumptions are we making in our scanning that can be used to defend any decisions? Are they solid assumptions which provide a basis for our analysis? Assumptions (whilst not necessarily untrue) are scanner-specific and subjectively located within the context of the scanning environment.

How can we uncover assumptions not supported by facts and logic?

Which signals on the edge will prove to be substantive harbingers of future states?

  • What assumptions are unsupported, deserving further investigation and scrutiny by the team?

  • How do we perceive truth or differentiate a random outlier from a substantial signal on the edge?

Good horizon scanning, like any methodology, gets better with practice. Building the muscle of creative, expansive scanning, combined with rigorous data interrogation and contextual scaffolds, is something that seems to be getting easier the more I try. Doing the data dump of my sources for the past 6 months shows me how narrow the lens is/was. Obviously things are project-specific but nevertheless, it’s something I’m now conscious of. The geek in me is furiously considering how I could possibly download my Feedly newsletter & RSS feeds, which is where most of my regular non-project-specific scan feeds flow, to see if that’s a more representative source sample set.
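
In the spirit of that idea, here’s a minimal sketch of what such a pull could look like, assuming a standard OPML export of feeds and using country-code domains as a very rough proxy for where sources sit; the filename and the proxy itself are assumptions of mine:

```python
import xml.etree.ElementTree as ET
from collections import Counter
from urllib.parse import urlparse

# Parse an OPML export of RSS/newsletter subscriptions (most readers,
# Feedly included, can export one). The filename is a placeholder.
tree = ET.parse("feedly_export.opml")

domains = [
    urlparse(o.get("xmlUrl")).netloc.lower()
    for o in tree.iter("outline")
    if o.get("xmlUrl")
]

# Crude proxy only: a country-code TLD (.au, .uk, .cz ...) hints at where a
# source sits; .com/.org feeds stay 'unknown', which is itself telling.
regions = Counter()
for domain in domains:
    tld = domain.rsplit(".", 1)[-1]
    regions[tld if len(tld) == 2 else "unknown"] += 1

print(f"{len(domains)} feeds")
print(regions.most_common(10))
```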

Sometimes I even have some of the prompts I mentioned above stuck up on my computer, so that I’m intentionally switching postures and lenses as I go. Which, if I’m honest, feels kind of awkward, but like most things in life . . I’m tipping it’ll continue to be awkward, until it’s not. That’s probably the time I can start to trust my own intuition a little more.

What small signals of change are tyre-kicking outliers, versus those few on the edge which will prove to be substantive harbingers of future states? It’s hard to know and, to some extent, that’s kind of the point. Horizon scanning for signals of change is both necessarily subjective and in need of systematic rigour.

Knowing which parts of the horizon scanning process should be loose and which should be tight is the challenge.

Last updated on Dec 4, 2024

