Mission & Principles

Why Asymmetric Intelligence exists, what it will and won't do, and the analytical standards that govern every monitor.

asym-intel.info is a free, public open-source intelligence (OSINT) commons that surfaces structural risk signals before they reach mainstream analytical consensus.

Seven autonomous monitors track seven domains where institutional analysis systematically lags the underlying conditions: democratic backsliding, macrofinancial stress, foreign information manipulation, European strategic autonomy, AI governance, environmental risk, and conflict escalation. Each publishes weekly. All output is open, sourced, and permanent. No paywalls. No paid tiers. Ever.

The platform exists because the gap between signal and consensus is where decisions are made well or badly. A senior analyst reading a WDM brief on a country in early-stage institutional capture — six months before the annual democracy indices register the deterioration — has time to act. A policymaker seeing the GMM’s narrative vs. data divergence before the FT frames it has an analytical edge. That lead time is what this platform manufactures.

The platform is built entirely on open-source intelligence. No classified material. No investigative access. No leaked documents. The constraint is the point: if a risk is genuinely detectable through public sources and structured methodology, the platform will find it. If it isn’t, the platform will say so explicitly rather than speculate.


Core Principles

1. No false signal, no matter the cost to coverage

A false signal published to a senior decision-maker is worse than no signal. Speed never overrides correctness. Coverage breadth never overrides source discipline. If the evidence does not support a finding at the required confidence level, the finding is not published — it is held, flagged as uncertain, or downgraded.

2. Source hierarchy is methodology, not bureaucracy

Every monitor operates a tiered source hierarchy. The hierarchy is not a formality — it determines what can be scored and at what confidence level. Tier 1 sources (primary institutional data, platform disclosures, official registries) determine findings. Tier 2–3 sources corroborate. Tier 4–5 sources flag but do not score. A finding that cannot be supported at Tier 1–2 is held at Suspected or Unattributed, not elevated to Confirmed. Violations of the hierarchy are not analytical boldness — they are analytical failure.
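The hierarchy described above amounts to a simple mapping from supporting-source tiers to confidence labels. As a minimal sketch — the function name, signature, and the exact tier-to-label mapping are illustrative assumptions, not the platform's actual implementation — it might look like this:

```python
def confidence_level(supporting_tiers: list[int]) -> str:
    """Map the tiers of a finding's supporting sources to a confidence label.

    Tier 1-2 support is required before a finding can be Confirmed;
    Tier 3 sources corroborate but cannot confirm on their own;
    Tier 4-5 sources only flag, so the finding stays Unattributed.
    """
    if not supporting_tiers:
        return "Held"              # no usable sources: nothing to publish
    best = min(supporting_tiers)   # the strongest (lowest-numbered) tier
    if best <= 2:
        return "Confirmed"         # primary institutional data or close corroboration
    if best == 3:
        return "Suspected"         # corroboration only, held below Confirmed
    return "Unattributed"          # Tier 4-5: flagged, never scored

# A Tier-3 corroboration plus a Tier-4 flag never elevates to Confirmed:
print(confidence_level([3, 4]))   # -> Suspected
```

The point the sketch makes is structural: elevation to Confirmed is gated on the strongest tier present, so adding more low-tier sources can never substitute for a Tier 1–2 source.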

3. Uniform evidentiary standards across all actors

The 4-actor FIMI framework (RU/CN/US/IL) exists because applying different evidentiary standards to different state actors is not methodology — it is selection bias. A standard that is applied to Russian information operations must be applied to US and Israeli operations in the same information space. The EEAS institutional gap (structurally calibrated toward RU/CN attribution) is documented and compensated for. This principle applies platform-wide: the analytical standard does not shift based on which conclusion it would produce.

4. Structural over episodic; deviation over level

The platform’s analytical value is in identifying structural conditions, not reporting events. A presidential statement is a signal, not a fact. An electoral outcome is not a democracy score. A VIX spike is not a regime change. Every monitor is designed to distinguish structural deterioration from episodic noise — and to say explicitly when a finding is episodic rather than structural. In conflict analysis, the principle is formalised as deviation-over-level: an anomalous spike in a low-intensity theatre matters more than a sustained high level in a familiar one.
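Deviation-over-level can be made concrete with a standard-score comparison: each theatre is scored against its own baseline, not against other theatres' raw levels. The following sketch assumes invented event counts and a plain z-score; the real monitors' statistics are not specified here.

```python
from statistics import mean, stdev

def deviation_score(history: list[float], current: float) -> float:
    """Z-score of the current value against the theatre's own history."""
    mu, sigma = mean(history), stdev(history)
    return (current - mu) / sigma if sigma > 0 else 0.0

# A low-intensity theatre spiking from ~5 events/week to 25...
quiet = deviation_score([4, 6, 5, 5, 6, 4], 25)

# ...outranks a high-intensity theatre holding steady around 200.
busy = deviation_score([190, 210, 205, 195, 200, 198], 205)

assert quiet > busy  # the anomalous spike matters more than the familiar level
```

Because each theatre is normalised by its own variance, a sustained high level in a familiar theatre produces a small score while a break from a quiet baseline produces a large one — which is exactly the ordering the principle requires.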

5. Documented uncertainty is a deliverable, not a weakness

Confidence levels are explicit. Blind spots are published. The GMM’s 12.5% false-positive rate across four known indicator patterns is documented — because an analyst reading the output needs to know both the score and whether it sits in a known over-sensitivity zone. SCEM’s CONTESTED band is carried on early-watch entries until week 13 because it tells the reader something true about epistemic status. The platform’s credibility is built on this honesty, not despite it.

6. Public good, permanent commons

All output is free. All methodology is published. The platform does not offer paid access, professional tiers, API monetisation, or data sales. This constraint is not a business decision — it is a founding principle. The moment access is conditioned on payment, the platform stops being a public commons and starts being a product. The platform will not become a product.


What We Are Not

Not journalism. The platform does not break stories, conduct investigations, or interview sources. It analyses public data using structured methodology. The distinction matters: journalism tells you what happened; this platform tells you what the structured signal picture looks like and what the trajectory implies.

Not advocacy. The platform does not have a preferred outcome for any domain it monitors. It does not want democracy to win, specific actors to lose, or particular policy positions to prevail. The analytical register is cold because warm analysis rationalises uncomfortable signals. Where findings have implications that benefit one political coalition, they are published anyway.

Not a news aggregator. Seven monitors publish weekly whether or not there is major news that week. A stable or improving signal is as important to report as a deteriorating one. The platform tracks conditions, not events.

Not comprehensive. The platform tracks seven domains because those are the seven domains where asymmetric risk concentrates and where systematic analytical lag is most consequential. It does not track every important domain. Parsimony is the design, not the limitation.

Not dependent on classified material. Every finding is reproducible from public sources. The methodology pages show the work. The source hierarchy is explicit. Any analyst with access to the same sources should be able to approximate the same output. That reproducibility is the platform’s integrity guarantee.


Success Criteria

Success is not traffic. It is not subscriber count. It is not publication frequency.

Lead time: The platform surfaces a risk before it enters mainstream analytical consensus. The 6–18 month gap between WDM’s institutional-deterioration flags and the annual democracy indices is the operational definition of success in that domain.

Analytical citation: A senior analyst, policymaker, or researcher cites the platform’s output before a comparable conclusion appears in a major institutional report — because the output was earlier and better grounded.

Source discipline under pressure: When a major geopolitical event produces temptation to overclaim, the platform holds the line. A downgraded finding with an explicit uncertainty flag is success. A confident claim that turns out to be wrong is not.


About the Publisher

Published by Peter Howitt, Gibraltar. Independent — no government, institutional, or commercial affiliation.

Each monitor’s full methodology is published on its methodology page. The analytical framework, source hierarchies, and scoring rubrics are open. Nothing about how this platform reaches its conclusions is hidden.