Stuart Clarke, CEO of Blackdot Solutions
If there’s been one standout development impacting open source intelligence (OSINT) in 2025, it’s AI. We have certainly started to feel its adverse effects – and nowhere is this clearer than in the world of OSINT.
We’re seeing more sophisticated disinformation campaigns and deepfakes, undermining the veracity of information available online. There have been numerous high-profile instances of AI being used to manipulate or generate fake images and videos. Bad actors now have a whole range of capabilities at their disposal, from creating deepfake identities for fraud to altering media in order to damage the reputation of a person or organisation.
These threats have ushered in a new use for OSINT – the process of collecting, combining and analysing publicly available information to produce actionable intelligence. BBC Verify is one example of a body where OSINT forms a key part of its mission to investigate the legitimacy of claims online and whether images are AI-generated. As its live page outlines, the organisation uses “open-source intelligence, satellite imagery, fact-checking and data analysis to help report complex stories”.
Yet AI doesn’t just impact OSINT investigations negatively – it’s also helping investigators to collect and analyse data. More and more organisations are adopting AI into OSINT processes themselves due to the potential for efficiency gains and ability to unlock insights that might otherwise have been missed. In this way, AI represents both a challenge and an opportunity.

So, with all this said, what can we expect for OSINT over the next year?
Real or fake? The new obstacles facing OSINT
OSINT technology is designed to tackle the pivotal challenge of searching, sorting and deriving insights from the abundance of information on the internet. Now, however, the use of generative AI is not only adding more information into this data pool, but far more disinformation too. Of course, this problem will be amplified in 2026. So, how does OSINT evolve when data sources become less trustworthy?
As disinformation and deepfakes propagate further, the reliability of data drawn from open sources will need to be questioned. It’s even more important for investigators to verify information accurately – a core OSINT skill. Yet spotting AI-generated content accurately requires time. It also brings into focus the importance of having access to broad and high-quality data to avoid relying on a smaller set of untrustworthy information. So, there is greater emphasis on using OSINT technology that can automatically collect data at scale.
Internet users are spreading their activity – and their data – across a wider range of platforms. AI-assisted crime, for example, will create higher volumes of work and more complex challenges for those fighting it. The need to access actionable insights and intelligence from many sources, presented in a digestible format in one place, will therefore be even stronger. Humans can then verify this intelligence and act on it as required.
Relying on models like ChatGPT alone for research is not a strategy, especially when these platforms can ‘hallucinate’ or assert that AI-generated content is authentic. The AI tools responsible for some of the misinformation online cannot also be relied upon to solve the problem. Above all, this challenge re-emphasises the value of human expertise in verification.
When AI is used thoughtfully and integrated into investigative processes, however, we can transform the effectiveness of investigators, giving them more time to verify information. To do this successfully, careful consideration of security, ethics and application is required.
Taking control of OSINT: a desire for proactivity and insights
AI’s not the only reason OSINT has become a must-have in more organisations. There is a growing recognition that the intelligence investigators need can be drawn from information held in public sources. So, organisations have started to mature their compliance, corporate intelligence or investigations functions to reflect this fact.
As such, we’ll see a big shift towards using OSINT proactively. This will entail ‘monitoring investigations’ that continuously collect data to detect emerging threats, rather than investigations that react to specific incidents.
When it comes to OSINT technology, the use of AI has contributed to a cognitive shift from solutions that ‘collect data’ to solutions that ‘deliver insights’. OSINT platforms will need to provide real outputs or products (e.g. reports), not just aggregate data and leave the hard work to the analyst. Large language models have set the tone: people want answers.
OSINT investigators will also need to grapple with the newer challenge of data platforms becoming closed sources, with API access tightening and costs rising. In particular, we’re witnessing shifting data availability across jurisdictions – some sources are becoming more closed as others become more open. Investigators will be forced to think more strategically about their data use, focusing on the sources that deliver the most value. Alongside this, we’re seeing a growing desire to converge internal data with external information, drawing new insights from an organisation’s existing data and plugging potential gaps.
Regulations making OSINT a necessity
In 2026 we expect to see several laws and regulations increasing both the need for and the demands placed on OSINT.
In Europe, the new Supply Chain Directive will require large companies to identify, prevent and mitigate human rights and environmental risks in their global supply chains. This includes mandatory risk analysis and preventive measures, areas where OSINT can provide tangible value.
The UK’s new failure to prevent fraud (FTPF) offence came into force in September 2025. Large organisations are now liable if an ‘associated person’ commits fraud intending to benefit the company – unless the organisation can prove it had reasonable fraud prevention procedures in place. As such, OSINT has become an essential resource for managing this risk, for tasks like screening suspicious customers.
Then, the EU AI Act will have a big impact as it continues to be implemented, driving further need for OSINT investigators to think carefully about security and ethics. By introducing different risk levels, the Act prompts organisations and their investigators to consider where OSINT should be used – and where AI can be applied within this.
One crucial aspect of compliance with this legislation will be maintaining transparent workflows. Explainable AI will be critical: investigative teams will have to demonstrate they have used AI correctly and can validate all findings.
The battle for authenticity
Verifying what’s real will be the overarching challenge in 2026. The proactive use of OSINT for investigations will be a crucial aid in the battle to verify information, access more data sources, and successfully close cases. The best way to mitigate the threat posed by AI-generated deepfakes and disinformation is to pull information from as many sources as possible, combined with review by skilled human investigators.
The smart and responsible use of AI itself will help investigators streamline workflows and meet regulations more efficiently. Pivotally, it will be an important tool in fighting AI-driven crime of all kinds. But ultimately, as the impact of AI-generated content continues to grow, human expertise and judgement will be a defining characteristic of the most successful OSINT processes.