Dharani Holdings
Risk Intelligence Platform

Independent insight.
End-to-end assurance.

The Risk Intelligence Platform is the quantitative analytics workbench of Dharani Holdings Limited. It brings clarity, confidence and discipline to the full lifecycle of major capital programmes — three-dimensional simulation of Time, Cost and Performance, evidence-grade claim packages, and persistent project knowledge that compounds across every engagement.

  • Quantitative simulation across Time, Cost & Performance
  • 30+ category lexicon — FIDIC · NEC · legal · geopolitical
  • Society of Construction Law claim packages (PDF & Word)
  • Primavera P6 .xer and PRA .plan native parsing
  • Issue, claim & dispute support with delay fragnets
© 2026 Dharani Holdings Limited. Registered in Ireland under the Companies Act 2014 · Company Number 794280 · Registered office Dublin, Ireland. All rights reserved.

Sign in

Welcome back. Continue where you left off.
Forgot?
or continue with
Internal-use environment. This workbench is intended for Dharani Holdings practitioners and authorised client engagements. Production deployment within the Dharani Holdings environment uses managed single sign-on and identity services.

Projects

Active workspace · select a project or create a new one
Guest

Data loaded — choose your workflow

Before the simulation runs, decide how much hand-holding you want.
How would you like to proceed?
Resumed previous archive
Internal-use build
Browser-local authentication and storage. Production rollout within the Dharani Holdings estate uses managed single sign-on and shared services.
Dharani Holdings · Risk Intelligence Platform

Clarity at every stage.
Confidence in every number.

Upload structured registers (Excel · CSV · JSON), unstructured documents (Word · PDF · PowerPoint · text), or Primavera P6 schedules (.xer · .plan). The workbench runs three-dimensional Monte Carlo simulation across Time, Cost & Performance with nine distributions and per-dimension overrides; classifies risks via a 30+ category lexicon (FIDIC · NEC · legal · geopolitical · regional); manages Risks, Uncertainty, Issues and Claims registers separately; produces Society of Construction Law claim packages in PDF and Word; and supports per-activity confidence forecasts, BOQ cost analysis, schedule lens filters, and a project knowledge base that compounds learning across every Dharani Holdings engagement.

Drop any document here
structured registers · unstructured documents · project-control files
.XLSX · .CSV · .JSON · .DOCX · .PDF · .PPTX · .XER · .PLAN · .TXT
or
Major-project NLP extraction
Specialist project-controls lexicon scans any document for statements about Time, Cost and Performance — capturing phrases like "30-day delay", "€2.5M overrun" or "12% throughput loss" with structured value extraction. Built on the Dharani Holdings project advisory taxonomy.
Three-dimensional simulation
Monte Carlo across Time (days), Cost (€/$/£) and Performance (% retention). Multiplicative compounding for performance, additive for time and cost. Select the dimensions in scope upfront — the dashboard adapts.
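Those compounding rules (additive for Time and Cost, multiplicative Performance retention) can be sketched in a few lines; the field names, the Bernoulli occurrence gate, and the triangular bands below are illustrative assumptions, not the engine's actual schema:

```python
import random

def sample_iteration(risks, rng):
    """One Monte Carlo iteration: Time and Cost impacts add, while
    Performance losses compound multiplicatively via residual retention.
    Field names are illustrative, not the app's real schema."""
    total_days = 0.0
    total_cost = 0.0
    retention = 1.0  # start at 100% performance retained
    for r in risks:
        if rng.random() < r["probability"]:  # Bernoulli occurrence gate
            total_days += rng.triangular(r["days_min"], r["days_max"], r["days_ml"])
            total_cost += rng.triangular(r["cost_min"], r["cost_max"], r["cost_ml"])
            loss = rng.triangular(r["perf_min"], r["perf_max"], r["perf_ml"]) / 100.0
            retention *= 1.0 - loss  # multiplicative compounding
    return total_days, total_cost, (1.0 - retention) * 100.0  # % performance loss
```

Two risks with 10% and 20% losses therefore combine to a 28% loss (1 − 0.9 × 0.8), not 30%: multiplicative compounding can never drive retention below zero, which is why it fits the Performance dimension.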
Compounding knowledge
Every confirmed schema, manual risk and deletion strengthens the engine. Detection accuracy compounds across documents and engagements — fully on-device, no external AI APIs.
01 / Overview

Risk Position Summary

The strip below shows the scheduled baseline read directly from your loaded source data (the schedule's current dates, the BOQ totals, the nominal scope). Below it, the KPI tiles show the risk-adjusted P50 / P80 / P90 outcomes produced by Monte Carlo simulation — they are additions to baseline, not a re-baseline. Use the sidebar to switch focus dimension and adjust parameters live.

S-Curve · Schedule Outcome
Cumulative probability — switch dimension in sidebar
Risk Concentration
Top Risk Drivers
Click any to drill into register
Quick Insights
Auto-generated from current data
02 / Risk Register

All Identified Risks

Click column headers to sort. Filter using the controls below or via the sidebar.

02c / Issues / Delay Events

Issue & Claim Management

Discrete delay events / disruptions used as building blocks for claim packages. Each issue can include schedule fragnets, contractual notices, causation narrative, chronology, and supporting documents. Bundle multiple issues into a Claim package via the Claims tab.

02d / Claims & Changes

Claim & Change Register

When a risk materialises it becomes an issue. Bundle issues into a claim package — if the package is approved as agreed, it becomes a Change with approved time, cost and performance values. Unresolved packages remain as Claims. The Summary view rolls everything up as a time-slice progression so claimed-vs-approved values are visible at a glance.

View
02e / Issue Modeling — Schedule Impact

Issue-to-Schedule Modeling

Model each issue's impact on the project completion date relative to the baseline. View per-issue S-curves, combined effect, and incremental contribution. Use this for retrospective delay analysis (As-Built) and forward-looking impacted programmes (Time Impact Analysis).

View mode
Combined Schedule Impact — Cumulative Delay vs Baseline
Each window adds to the cumulative project delay
Issue Contribution Table
Net excusable delay × prolongation rate · sortable
02b / Uncertainty Register

Activity-Level Uncertainty

Continuous uncertainty entries (separate from discrete risk events). Each entry covers an activity or a group, with three-point bands and per-dimension distribution. The simulation's Uncertainty scenario uses these entries.

When to use the Uncertainty Register
Risk register vs uncertainty register — quick guide
Risk register — discrete events with a probability gate. Either it happens or it doesn't. Example: "Vendor failure delays signalling delivery" with 35% probability and ±60 day band if it occurs.
Uncertainty register — continuous estimation variance that's always present. Example: "Civil works productivity rate varies between 0.85× and 1.15× of plan" — sampled every iteration with no probability gate.
Both flow into the simulation, but their bands and distributions are managed separately so the analysis stays auditable.
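A minimal sketch of that distinction, using the two examples above (function names are illustrative, and triangular bands stand in for whichever distribution is configured):

```python
import random

def sample_risk_event(prob, dmin, dml, dmax, rng):
    """Discrete risk: a Bernoulli gate decides IF it occurs;
    the impact band is sampled only when it does."""
    if rng.random() >= prob:
        return 0.0  # the event simply doesn't happen this iteration
    return rng.triangular(dmin, dmax, dml)

def sample_uncertainty(fmin, fml, fmax, planned, rng):
    """Continuous uncertainty: sampled on every iteration, no gate.
    E.g. a productivity factor of 0.85x..1.15x applied to the plan."""
    return planned * rng.triangular(fmin, fmax, fml)
```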
02b / Schedule

Project Schedule — activities

WBS-organised schedule. Click any node to expand/collapse children. Activities show duration and linked-risk count. Click a row to filter the register to risks linked to that activity.

CODE · NAME · START · FINISH · ORIG · REM · TF · FF · CRIT · % · R · ADD
02c / Bill of Quantities

BOQ / Price Breakdown

Financial line-item analysis. Each row gets row-level uncertainty bands on quantity and rate, and Monte Carlo runs over the entire bill. Sensitivity shows which line items drive the most cost variance.

Total Cost Distribution
P10 → P90 spread from row-level uncertainty
P-Value
P80
Top Cost Drivers
Line items by share of total cost variance
02d / Confidence

Schedule Confidence by Activity

Pick any activity from the list. The right pane runs a focused QSRA on just the risks linked to that activity, showing the distribution of its forecast finish date with P50/P80/P90 confidence bands.

Schedule Window
Click any activity to analyze
Select an activity
Pick an activity from the left pane to view its forecast
QSRA / Schedule

Quantitative Schedule Risk Analysis

All schedule-focused outputs in one place: Monte Carlo distribution, sensitivity drivers, pre-vs-post mitigation comparison, and probability-impact matrix — all scoped to Time.

Scenario lens
Distribution Histogram
Drag the slider to inspect impact at any P-value
· Schedule
P-Value
P80
Tornado · Top Drivers
Spearman rank correlation with outcome
S-Curve · Pre vs Post
Mitigation effectiveness comparison
Heat Map · Pre-Mitigation
Heat Map · Post-Mitigation
Sensitivity Table · All Three Dimensions
QCRA / Cost

Quantitative Cost Risk Analysis

Cost-focused Monte Carlo distribution, drivers, mitigation comparison, and probability-impact matrix.

Scenario lens
Cost Distribution Histogram
Drag the slider to inspect cost at any P-value
P-Value
P80
Tornado · Top Cost Drivers
Cost S-Curve · Pre vs Post
QPRA / Performance

Quantitative Performance Risk Analysis

Performance loss distribution, drivers, and mitigation. Performance impacts compound multiplicatively (residual retention).

Scenario lens
Performance Loss Distribution
Cumulative performance loss across iterations
P-Value
P80
Tornado · Top Performance Drivers
Performance S-Curve · Pre vs Post
06 / Intelligence

Auto-Generated Insights

Pattern recognition, calibration analysis, and concentration metrics across all dimensions.

Risk Map · Probability × Impact
Heat-map of risk concentration with bubble overlay for each item · matrix size set in Settings
Concentration · Pareto
09 / Deliverable

QSRA Report

99 / Help & Reference

Dharani Holdings — Risk Intelligence Platform User Guide

Everything you need to know to get the most out of the simulation engine. Use the side index to jump to a section.

Quick start

The Risk Intelligence Platform is the quantitative analytics workbench used by Dharani Holdings Limited. It runs entirely in the browser, brings together schedule, cost and performance risk, and produces independent, evidence-grade output for boards, sponsors, lenders and dispute panels. The workflow below takes most users from a cold start to a published claim package in under 30 minutes.

Reading the numbers correctly. Every KPI grid shows a baseline strip at the top with the planned values pulled directly from your source data (planned finish date, planned budget, nominal scope). The P50 / P80 / P90 tiles below are the risk-adjusted outputs of the Monte Carlo simulation — they are additions to baseline, not replacements. For example, a P80 of +47d on a baseline finish of 12 Dec 27 means the simulation suggests an 80%-confidence forecast finish of 28 Jan 28.
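The baseline-plus-delta reading can be checked with plain date arithmetic (a sketch, not the application's code):

```python
from datetime import date, timedelta

def forecast_finish(baseline: date, p_delta_days: int) -> date:
    """P-values are additions to baseline: forecast = baseline + delta."""
    return baseline + timedelta(days=p_delta_days)

# The worked example from the text: P80 of +47d on a 12 Dec 27 baseline.
print(forecast_finish(date(2027, 12, 12), 47))  # 2028-01-28
```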

  1. Upload any combination of risk register (Excel / CSV / JSON), schedule (Primavera .xer or PRA .plan), and unstructured documents (PDF / Word / PowerPoint). For ambiguous documents you'll be asked how to interpret them.
  2. Pick the analysis scope — Time, Cost, Performance, or any combination. The dashboard tabs (QSRA / QCRA / QPRA) only show for active dimensions.
  3. Review the Overview for the synthesised position. The "Recommended Actions" card up top tells you the dominant driver, mitigation yield, and next step.
  4. Drill into a tab — QSRA for time, QCRA for cost, QPRA for performance, or Confidence for per-activity finish dates.
  5. Adjust on the fly via the right sidebar (focus dimension, P-value, sampling method, distribution, mitigation effectiveness). Click the tab on the right edge to collapse the sidebar and use the full window width.

Accounts & Projects

The Risk Intelligence Platform is an internal workbench for Dharani Holdings Limited. All data lives in your browser by default; live Dharani Holdings deployments integrate with managed single sign-on and the firm’s shared storage and audit services.

Accounts

  • Sign in — use your Dharani Holdings credentials. An account is created automatically the first time you sign in on a given browser.
  • Sign out — top-right of the header. Your projects and knowledge base persist for next sign-in on the same browser.
  • Guest mode — “Skip → use as guest” lets you try the workbench without an account. Nothing is persisted beyond the current browser session.
  • Profile panel — click your name in the header to view your account details or sign out.

Projects & archives

After signing in you land directly in the workspace, where you can drop files and begin analysis straight away. The 📁 Projects button in the header opens the project manager when you want to organise multiple engagements.

  • 💾 Save — the primary "save my work" button. Always visible in the header once data is loaded. If no project context exists, it prompts for a project name, creates the project, and saves a first snapshot. If you're already in a project, it adds a new dated snapshot. Ctrl+S triggers the same action.
  • + New project — create a named workspace with a short description
  • Open in workspace — load the most recent archive (or start empty)
  • 📦 Archive snapshot — the project-context version of Save; appears alongside 💾 Save once a project exists. Both write the same workspace state (risks, activities, uncertainties, issues, claims, BOQ, settings, source filenames) as a labelled, time-stamped archive within the project.
  • Load archive — time-travel back to any previous state
  • Export project (.json) — download the entire project as JSON for backup or transfer between browsers
  • Active / Archived / All tabs — organise projects without deleting them

Typical engagement workflow: Sign in → drop XER and risk register into the workspace → click 💾 Save ("Pre-mitigation review") → edit risks and re-run → click 💾 Save again ("Post-mitigation, Apr 2026") → generate the SCL claim package. The archive history acts as your audit trail.

Importing on top of demo data: If you used "Try with sample dataset" first and then drop your real XER / Excel onto the workspace, the workbench detects the demo state and clears it automatically so the demo uncertainties / issues / claims don't pollute your real analysis. You'll see a "Cleared sample dataset" toast confirming the reset.

Knowledge sharing across the team

The category lexicon and learned patterns are held per-browser by default. To share knowledge across the engagement team:

  • Use Knowledge panel → Export JSON to download your knowledge base
  • Send the file to a colleague who imports it via Knowledge panel → Import JSON
  • Real-time cross-user knowledge sync is handled by the Dharani Holdings shared environment when deployed centrally

Control over the model — when does the engine run?

The workbench is designed so practitioners stay in control of when the simulation fires. There are several places where the engine could auto-run; each is gated by an explicit choice.

After file upload

When you drop files into the workbench, the parser reads them and then — before any simulation runs — shows the Workflow choice modal. You see a summary of what was loaded (schedule, register, BOQ, documents) and pick one of three paths:

  • Auto-detect & run — accept all detected risks and run immediately. Best for power-user iteration on a register you trust.
  • Review before running (recommended) — detected risks are marked pending in the Risk register. Inspect each, edit the probability/impact, accept the ones you want, then click Run Simulation. Use this for engagement-grade work where every risk going into the model needs a name on it.
  • Start with blank register — discard everything the parser detected. Schedule and BOQ still load, but the register is empty. Add risks one at a time — the engine will not fire until you click Run, and you see the marginal P80 shift after each addition.

Auto-rerun on parameter change

Auto-rerun is off by default. When you edit a risk's probability, change the mitigation effectiveness slider, switch the default distribution, or modify the inclusion filters, the Run Simulation button turns amber with a pulsing "Stale — click Run" indicator. The previous P-values stay on the dashboard until you re-run, so you can compare side by side. Turn auto-rerun back on under Settings → General if you prefer the live-update behaviour.

Marginal impact preview

When adding a new risk through the Add Risk modal, click Preview impact before saving. The workbench runs a fast 2,000-iteration what-if comparing the current register against the register-plus-this-risk, and reports the projected P80 shift in both schedule days and cost. This lets you reason about each risk's contribution incrementally rather than only seeing the aggregate.

Heuristic risks from XER

When an XER is uploaded, the workbench scans activity names for delay/cost patterns (procurement, permits, commissioning, geotechnical, interface, rail-systems, etc.) and generates candidate risks. These are clearly tagged pending in the Risk register so you can see exactly which entries came from pattern matching vs your own register. None of them affect the simulation until you accept them.

Project resume

When you re-enter a project with prior archives, the most recent archive is auto-restored — but a banner appears in the lower-left showing what was resumed, with options to Browse archives (pick a different snapshot), Start fresh (clear the workbench while keeping the archives), or Keep (dismiss the banner). The banner auto-dismisses after 12 seconds.

Tabs at a glance

Tab | Purpose | Key actions
Overview | Synthesised dashboard with recommended actions | Baseline strip (planned values from source data) + risk-adjusted P50/P80/P90 KPIs, S-curve, top driver
Risk | Discrete risk register (probability × impact events) | Add/edit/delete risks, inline category change, per-dimension distributions
Uncertainty | Continuous variance (always sampled) | Per-dimension three-point bands; Time accepts % of plan OR absolute days
Issues | Discrete delay events for claim management | Description, causation, chronology, notices, mitigation, schedule fragnet, supporting docs
Issue Modeling | Schedule impact of issues, modeled three ways | Combined cumulative S-curve · Per-issue S-curves · Incremental Time-Slice
Claims | Bundle issues into formal EOT/Variation packages | Generate SCL Time-Slice Window Analysis report (PDF or Word)
Schedule | WBS-organised activity register | Critical path / longest path / TF / FF filters · +R/+U/+I from any row
Confidence | Per-activity finish-date forecasts | Click any activity for a focused QSRA: KPI tiles show baseline finish, P50/P80 dates (delay days), and QCRA cost estimate. Days/dates toggle on chart axis.
BOQ | Cost-line Monte Carlo | Row-level uncertainty bands on quantity AND rate, baseline total + P50/P80/P90 KPIs with delta vs baseline, top cost drivers
QSRA / QCRA / QPRA | Three-dimensional quantitative analysis | Baseline strip with dimension-specific footnote + risk-adjusted KPIs (P80 tile shows Forecast ≈ [baseline + delta]) + histogram + S-curve + tornado (risks/activities/categories)
Insights | Auto-generated patterns | Hover any card to edit (saves to knowledge base for re-use)
Report | Project-level export with embedded charts | Excel (multi-sheet) · Word (.docx) · PDF executive summary · PDF detailed report (13 sections including all rendered charts as embedded images)
Help | This page | Comprehensive reference; works without any data loaded

Workflow choice — auto / review / blank

The Risk Intelligence engine is deliberately not eager. As soon as you load data, whether from a file or the sample dataset, the application stops and asks how you want to proceed. The dialog appears after schema confirmation and scope selection, immediately before the engine runs for the first time. The three choices are mutually exclusive; which one is appropriate depends on the engagement context.

Option | What happens | When to choose
Auto-detect & run | All detected risks accepted as-is. Engine runs. Dashboard populates. | You trust the source data and just want the headline P-values fast: sample previews, internal reviews, hackathons.
Review before running (recommended) | All detected risks marked as pending. They appear in the Risk Register with striped rows and "PENDING" chips. The engine runs with zero active risks (so P-values start at zero). You accept risks one at a time using the Accept buttons, then click Run Simulation. | Engagement-grade work: every risk needs to be inspected, classified, and signed off before it enters the model.
Start with blank register | All detected risks discarded. Schedule, BOQ and uncertainty entries still load. You add risks one at a time using + Add Risk, and the Marginal-impact preview button lets you see the P80 shift for each addition before you commit. | First-principles risk modelling: when the source's heuristic-extracted risks are not trustworthy, or when you want a clean register built from primary evidence (workshops, SME interviews).

The workflow dialog also surfaces a summary of what was actually loaded — activity count, BOQ totals, baseline finish, uncertainty entries, document count. That summary is part of the audit trail: it tells you what the engine saw before you made your choice.

Pending-risk review

When you choose "Review before running" the Risk Register shows every detected risk with a diagonal-striped background and a PENDING chip next to its ID. Pending risks are excluded from the simulation — the P80 will read as if they aren't there.

Each pending row has two action buttons in place of the cog/delete you see for active risks:

  • ✓ Accept — moves this single risk into the active register. The cog and delete buttons replace the accept/reject controls. The stale-results indicator turns amber so you know to click Run.
  • × — discards this risk permanently. The underlying row is removed from App.rawRows so a rebuild won't bring it back.

A banner at the top of the register summarises the count and offers bulk actions: Accept all & run (move every pending risk into active in one go) and Discard all (with confirmation). Pending state survives any operation that rebuilds the risk objects (category change, response change, override edit, etc.) because the IDs are tracked in a side-channel Set, not in the risk object itself.

Marginal-impact preview

The Add Risk modal has a Preview impact ↻ button between Cancel and Save. Click it to see how this single risk, if added, would shift the project's P80 — before you commit. A short paragraph appears showing:

  • Schedule P80 before → Schedule P80 after, with the delta in days
  • Cost P80 before → Cost P80 after, with the delta (only if Cost scope is active)
  • Activity-linkage commentary: "Linked to N activities — path-aware routing through the CPM network" or "No activity link — impact applied as project-level overlay (no float erosion)"

The preview uses a reduced iteration count (max 2,000) so it returns within a second or two even on large projects. Nothing is committed: you can re-click Preview after adjusting any field, and only the Add Risk button actually persists. This is the workhorse tool when building a register from blank — you add a risk, preview, decide if it's plausible against your project intuition, and either save or refine.

The preview number is indicative, not authoritative. The marginal effect of a risk depends on what's already in the register, on inter-risk correlations, and on Latin Hypercube sampling artefacts at low iteration counts. Treat the delta as directional: ±10% accuracy is normal. The final P80 after Save (with full 5,000-iter Run) is the auditable figure.
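Conceptually the preview is a two-run what-if. The sketch below shows the idea with a plain Monte Carlo loop and a nearest-rank P80; the risk tuples, the shared seed, and the percentile rule are illustrative assumptions, not the workbench's implementation:

```python
import random

def p80(samples):
    """Nearest-rank 80th percentile of a sample set."""
    s = sorted(samples)
    return s[int(0.8 * (len(s) - 1))]

def run_register(risks, iterations, seed):
    """risks: (probability, min_days, ml_days, max_days) tuples."""
    rng = random.Random(seed)
    totals = []
    for _ in range(iterations):
        total = 0.0
        for prob, dmin, dml, dmax in risks:
            if rng.random() < prob:
                total += rng.triangular(dmin, dmax, dml)
        totals.append(total)
    return totals

def preview_impact(register, candidate, iterations=2000, seed=42):
    """P80 shift from adding one candidate risk: run twice, difference."""
    before = p80(run_register(register, iterations, seed))
    after = p80(run_register(register + [candidate], iterations, seed))
    return after - before
```

The reduced iteration count is what makes the preview fast and also what makes it noisy: the before/after difference of two 2,000-sample percentiles is directional, as the caveat above says.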

Explicit residual / post-mitigation values

By default, post-mitigation values are derived from the Response strategy and the global Mitigation Effectiveness slider. The formula multiplies pre-mitigation probability and impact by per-strategy factors (Avoid: 0.1/0.2, Reduce: 0.5/0.6, Transfer: 0.6/0.85, Accept: 1.0/1.0) blended with the effectiveness setting.
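As a sketch, assuming the slider blends linearly between "no change" (effectiveness 0) and the full per-strategy factor (effectiveness 1); the factors are the ones quoted above, but the blending rule itself is an assumption:

```python
STRATEGY_FACTORS = {
    # strategy: (probability factor, impact factor), as quoted in the text
    "Avoid":    (0.1, 0.2),
    "Reduce":   (0.5, 0.6),
    "Transfer": (0.6, 0.85),
    "Accept":   (1.0, 1.0),
}

def derived_residual(prob, impact, strategy, effectiveness):
    """Derived post-mitigation values: blend each per-strategy factor
    with the global effectiveness slider (0..1).
    Linear blending is an assumption, not the app's documented formula."""
    pf, imf = STRATEGY_FACTORS[strategy]
    p_factor = 1.0 - effectiveness * (1.0 - pf)
    i_factor = 1.0 - effectiveness * (1.0 - imf)
    return prob * p_factor, impact * i_factor
```

At full effectiveness a Reduce risk with 40% probability and a 30-day impact drops to 20% and 18 days; at effectiveness 0 it is unchanged, whatever the strategy.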

That derivation is fine for a directional view but inadequate for engagement-grade work where you need to say: "after our specific mitigation plan, this risk has 5% probability and a 3-day most-likely impact." The Residual / Post-Mitigation values section on every risk modal supports exactly that.

Tick Explicit residuals at the top of the section and the body reveals four sub-sections matching the pre-mitigation pattern:

  • Residual probability — slider 0–100%
  • Residual schedule (days) — Min / Most Likely / Max + a distribution dropdown with all nine options
  • Residual cost — same shape
  • Residual performance (% loss) — same shape

The distribution dropdown gives input-mode flexibility identical to the pre-mitigation side: choose Point (single value) for a deterministic residual (just fill Most Likely), Two-point (50/50 Min/Max) for a binary outcome, BetaPert or Triangular for a band, or leave it as inherit pre-mitigation to use the same distribution shape as pre.

When the toggle is on, the Mitigation Effectiveness slider on the sidebar is ignored for this risk. The engine samples directly from your residual band in the Post-Mitigated scenario. Other risks (without explicit residuals) continue to use the derived formula.

Risks with explicit residuals show a small green residual chip next to their ID in the Risk Register. Residual values are also editable later — open the per-risk cog (⚙) and the same section appears, pre-populated with the saved values.

Risk matrix on the Insights tab

The Insights tab opens with a Risk Map · Probability × Impact card showing every item in the model as a coloured bubble on a heat-mapped grid. The card has three toggle groups and a size override:

  • Scenario: Pre-Mitigated (original probabilities and impacts), Post-Mitigated (uses explicit residuals where set, else the derived values), Uncertainty (probability becomes 1 — no Bernoulli gate — and the impact is the central tendency × pre probability)
  • View axis: Risks (one bubble per risk), Activities (one bubble per activity that has at least one risk linked to it, with probability aggregated as 1−∏(1−p) across linked risks), BOQ items (cost concentration on the impact axis, quantity variance on the probability axis)
  • Impact dimension: Time, Cost, Performance, or Overall (worst-of T/C/P, normalised within the data set)
  • Size override: inherit from Settings (default 5×5) or force 3×3, 4×4, 5×5 or 6×6 for this view only
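The Activities-view probability aggregation quoted above, 1 − ∏(1 − p), is a one-liner (a sketch, not the app's code):

```python
from math import prod

def activity_probability(linked_risk_probs):
    """Probability that at least one risk linked to the activity fires."""
    return 1.0 - prod(1.0 - p for p in linked_risk_probs)
```

Two linked risks at 50% each give 75%, not 100%: the aggregation treats the linked risks as independent events.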

Each cell shows its bucket count in the top-right corner. Bubbles are coloured by risk Level (red=High, amber=Medium, green=Low). Hover any bubble for the underlying values. The right-hand panel lists the level breakdown and the top-five exposure items (sorted by P × I).

The matrix is the only place in the app where the Uncertainty scenario, activities, and BOQ items can all be plotted on the same P×I framework. Use it to triage: in Pre-Mitigated mode, identify which cells are overcrowded; switch to Activities view to see which schedule milestones are sitting in the red zone; switch to BOQ to find cost concentration risks.

Claim & Change outcomes

The Claims tab is now called Claim & Change Register because every claim package can resolve into one of five outcomes — and three of them turn the claim into a Change. The lifecycle:

  1. Risk sits in the register as a probability × impact (qualitative + quantitative).
  2. If the risk materialises, the Issue Register captures it as a discrete event with delay days, prolongation cost, disruption cost, and contractual reference (FIDIC clause, NEC compensation event, etc.).
  3. One or more issues are bundled into a Claim package with a methodology (SCL Time-Slice Window Analysis, Time Impact Analysis, As-Planned vs As-Built, etc.) and submitted.
  4. The Engineer or Project Manager determines the outcome. You record this in the Outcome section of the claim modal.
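The quantum arithmetic implied by the issue fields (gross and concurrent delay days, prolongation rate, disruption cost) can be sketched as follows; the roll-up rule (net excusable delay = gross minus concurrent, prolongation = net times the daily rate, disruption added on top) is inferred from the field names and the "Net excusable delay × prolongation rate" table, so treat it as an assumption:

```python
def issue_quantum(gross_delay_days, concurrent_days, prolongation_rate,
                  disruption_cost=0.0):
    """Net excusable delay and money quantum for one issue (illustrative)."""
    net_days = max(gross_delay_days - concurrent_days, 0)
    quantum = net_days * prolongation_rate + disruption_cost
    return net_days, quantum
```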

The five outcomes:

  • Pending — claim submitted, awaiting determination. Default state.
  • Approved as Change — claim approved with values agreed. You enter Approved Time (days EOT), Approved Cost ($), Approved Performance (%) and the Change Order / Variation Reference. The package converts to a Change.
  • Partially Approved — some entitlement granted. Approved values entered; the difference vs claimed is visible in the variance column.
  • Rejected — no entitlement. Outcome notes capture the rejection grounds.
  • Withdrawn — withdrawn by the Contractor before determination.

The Summary & time-slice view on the Claims tab rolls everything up: four KPI tiles (Pending Claims, Approved as Change, Total Claimed, Recovery Ratio) and a chronological table showing every package's claimed-vs-approved values with variance columns. Recovery Ratio is the colour-coded approved-time ÷ claimed-time percentage — under 30% it's red, 30–60% amber, above 60% green.
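The Recovery Ratio traffic light described above can be sketched as follows (how the exact 30% and 60% boundaries are bucketed is an assumption):

```python
def recovery_ratio_colour(approved_days, claimed_days):
    """Approved ÷ claimed time as a percentage, colour-coded per the text:
    under 30% red, 30-60% amber, above 60% green."""
    ratio = 100.0 * approved_days / claimed_days if claimed_days else 0.0
    if ratio < 30:
        colour = "red"
    elif ratio <= 60:
        colour = "amber"
    else:
        colour = "green"
    return ratio, colour
```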

Customisable risk owners

The risk Owner field is hard-coded in many tools to a fixed list of generic roles. In Risk Intelligence it's per-engagement and lives in Settings → People. Each entry has a name (e.g. "Programme Director") and an optional organisation/discipline (e.g. "Dharani Holdings", "Contractor", "Engineer").

When the list is empty, the Owner field in the Add Risk modal is a plain text input. As soon as you add at least one entry, the field becomes a datalist-driven combobox: you get a dropdown of the pre-defined names but can still type free text for one-off entries. The same list also feeds the Issue modal's owner fields.

Owners persist with the project's settings and survive across sessions. Removing an owner from the list does not change risks already assigned to that name — they keep the value as free text. The list is intended to be tuned per engagement: a Gulf rail project will have different roles to a UK PFI hospital.

Materialise risk → issue

Each row in the Risk Register has a small button in the actions column. Click it to convert this risk into an Issue: the Add Issue modal opens pre-populated with the originating risk's title (as "Materialisation of [risk title]"), description, causation narrative (citing the risk ID), most-likely schedule and cost impacts, and linked activities.

The issue captures a linkedRiskId pointing back to the originating risk. This appears in the Excel Issue Register sheet, so the audit trail from risk → issue → claim → change is end-to-end traceable. You can edit any of the pre-filled fields before saving — the materialise button is a starting point, not a commit.

The risk is not automatically deleted from the register when it materialises. That's deliberate: a single risk can materialise multiple times (recurring weather events, repeated permit delays), and you'll often want to keep the risk in the register at reduced probability for the residual exposure that wasn't realised by this specific event.

Data & file formats

The engine accepts everything from a clean structured register to free-text policy documents. The file chooser appears whenever a file's structure is ambiguous, letting you decide how it should be read.

Format | What it's used for
.xlsx / .xls / .csv | Risk register (auto-detected schema), BOQ table, PRA Modelled Risk bridge format, or ambiguous → choose mode.
.json | Risk register with explicit field mapping.
.xer | Primavera P6 schedule. Activities, WBS, resources, and dates extracted.
.plan / .pln | PRA native schedule (PMAW8 binary). Activity codes/names extracted; durations not available.
.pdf / .docx / .pptx / .txt | Unstructured. Choose: NLP risk extraction, QCRA-cost focus, QPRA-perf focus, or add to knowledge base.

Tip: Use Settings → Advanced → Download Excel template (or the button on the empty-state page) for an 11-sheet ready-to-fill workbook covering every register and every tab.

Excel template — 11 sheets covering every tab

The Excel template is the canonical way to populate the entire app from a single workbook. Download it from Settings → Advanced → Download .xlsx, fill in the sheets you need, and re-upload — every register populates at once with no schema-confirmation prompts.

Two-minute workflow

  1. Download the template from Settings → Advanced (or from the empty-state landing page).
  2. Read the README sheet first — every field definition, three-point estimate guidance, and the risk-vs-uncertainty explainer is in there.
  3. Fill in the sheets you need. You can leave any sheet blank — the importer skips empty sheets cleanly.
  4. Save and upload via "+ Add Files" or by dropping the file onto the empty-state page.
  5. The app auto-detects the template structure, imports every register, sets the analysis scope from what you provided, and runs the simulation immediately. You'll land on the Overview tab with everything populated.

Sheet inventory

Sheet | What it does | Maps to tab
README | Workflow, field definitions, three-point estimate guidance, distribution selection guide, risk-vs-uncertainty explainer, issue/claim workflow, deployment caveats summary | Reference
QSRA - Time Risks | Schedule risk events: ID, title, description, category, probability, three-point days, distribution, response, status, activity link, owner | Risk → QSRA
QCRA - Cost Risks | Cost risk events: same shape, three-point cost amounts | Risk → QCRA
QPRA - Performance Risks | Performance risk events: three-point % loss | Risk → QPRA
Uncertainty Register | Continuous variance: per-dimension three-point bands, per-dimension distribution, time unit (pct/abs), activity links | Uncertainty
Issues | Delay events: ID, status, cause/owner, window, contractual reference, event/notice/response dates, claim ref, description, causation, chronology, notices, mitigation, schedule fragnet (gross/concurrent days, prolongation rate, disruption cost), activity links | Issues + Issue Modeling
Claims | Issue packages: claim ID, subject, type, status, dates, contract, methodology, issue IDs (semicolon-separated), executive summary, contractual basis | Claims
Schedule | Activity register (used when no XER is uploaded): code, name, WBS, start/finish, durations, %, floats, critical-path flag, type, status, resources | Schedule + Confidence
BOQ | Bill of Quantities: item no, section, description, unit, quantity, rate, amount | BOQ
Schedule Structure | Reference: how XER fields map to QRAs + analysis pipeline | Reference
Reference | Valid values: all 30+ categories with keywords, 9 distributions with required fields, response strategies, status values, issue cause/owner, methodologies | Reference

Filling each sheet — quick reference

Mandatory cells are marked with * in the column header (e.g. Risk ID*, Title*). Everything else is optional — leave blank or fill as needed. Sample rows are included on every sheet to show the expected shape; delete them or overwrite them, your choice.

  • QSRA / QCRA / QPRA. Required: Risk ID · Title · Probability · ML. Optional but useful: Min/Max (otherwise auto-bracketed at ±30%) · Distribution (defaults to BetaPert) · Activity ID (enables tornado-by-activity + Confidence drill-down) · Description (feeds NLP knowledge base).
  • Uncertainty Register. Required: Uncertainty ID · Title · at least one dimension's Min/ML/Max. Optional but useful: Time Unit (pct or abs) · per-dimension Distribution · Activity IDs · Source/WBS.
  • Issues. Required: Issue ID · Title · Gross Delay (days). Optional but useful: Cause/Owner (drives EOT compensability) · Window (groups by SCL time-slice) · Concurrent Delay days (deducted from net) · Contractual Reference · Description / Causation / Chronology / Notices / Mitigation (all surface in the claim report) · Prolongation Rate · Disruption Cost · Activity IDs.
  • Claims. Required: Claim ID · Subject · Issue IDs (semicolon-separated). Optional but useful: Type · Methodology (drives the report template) · Submitted/Cut-off dates · Executive Summary · Contractual Basis.
  • Schedule. Required: Activity Code · Activity Name. Optional but useful: Start / Finish dates · Durations (orig + remaining) · TF / FF (drives critical/longest-path filters) · Critical Path (Y/N) · WBS · % Complete · Resources.
  • BOQ. Required: Item No · Description · Quantity · Rate. Optional but useful: Section (groups by trade) · Unit · Amount (auto-computed from Qty × Rate if blank).

How the auto-import works

When you upload a workbook, the parser checks the sheet names. If two or more match the canonical template names (QSRA / QCRA / QPRA / Uncertainty Register / Issues / Claims / Schedule / BOQ), it switches into template mode and:

  • Reads every recognised sheet in one pass — no schema confirmation prompt
  • Tolerates header variations — case-insensitive, ignores * suffixes and parenthesised hints like "(days)" or "(% of plan)", accepts both "Probability" and "Probability (0-1)", both "Activity ID" and "Activity IDs", etc.
  • Merges risks across QSRA/QCRA/QPRA sheets — if you list the same Risk ID on multiple sheets, the time/cost/perf impacts merge into one risk with all three dimensions
  • Captures per-dimension distribution overrides — Distribution columns on each sheet apply only to that dimension
  • Auto-sets the analysis scope — finds time risks → enables Time scope; finds cost risks → enables Cost scope; finds uncertainty entries → enables Uncertainty scenario
  • Runs the simulation immediately — no need to click "Run" — and lands you on the Overview tab
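The header tolerance and first-word sheet matching described above can be sketched in a few lines. This is an illustrative sketch only — the function names and the exact normalisation rules here are assumptions, not the workbench's actual code:

```python
import re

# Canonical template sheet names from the inventory above.
CANONICAL_SHEETS = ["QSRA", "QCRA", "QPRA", "Uncertainty Register",
                    "Issues", "Claims", "Schedule", "BOQ"]

def normalize_header(header: str) -> str:
    """Case-fold, drop '*' markers and parenthesised hints like '(days)' or '(0-1)'."""
    h = re.sub(r"\(.*?\)", "", header)        # strip "(days)", "(0-1)", "(% of plan)"
    h = h.replace("*", "").strip().lower()
    h = re.sub(r"\s+", " ", h)                # collapse whitespace
    return re.sub(r"\bids$", "id", h)         # treat "Activity IDs" like "Activity ID"

def is_template_mode(sheet_names) -> bool:
    """Template mode fires when two or more sheet names match a canonical
    name on their first word, so "QSRA - My Custom Name" still counts."""
    firsts = {c.split()[0].lower() for c in CANONICAL_SHEETS}
    hits = sum(1 for n in sheet_names
               if n.split() and n.split()[0].lower() in firsts)
    return hits >= 2
```

Under this sketch, `normalize_header("Probability (0-1)*")` and `normalize_header("Probability")` collapse to the same key, which is the behaviour the bullet list promises.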

Common patterns

Risk affecting both time and cost? List it on QSRA with the day impacts and on QCRA with the cost impacts using the same Risk ID. The importer merges them.

Risk linked to multiple activities? Put all codes in the Activity ID column separated by ; — example: A-1010;A-2030;A-4010.

Different distributions for time vs cost on the same risk? Set Distribution (Time) on the QSRA row and Distribution (Cost) on the QCRA row independently. LogNormal for cost (long tail) + BetaPert for time is a common combo.

Don't have an XER schedule? Use the Schedule sheet — it acts as the activity source. The Confidence tab and tornado-by-activity will work the same way.

Bundling 4 issues into one EOT claim? On the Claims sheet, put ISS-001;ISS-002;ISS-003;ISS-004 in the "Issue IDs" column. The SCL claim report will group them by Window automatically.

Common pitfalls

  • Probability format: the importer accepts both 0.45 and 45 (auto-detects). Don't mix formats within one sheet.
  • Dates: any format Excel recognises works (ISO 2026-01-15, 15/01/2026, etc.). Plain text dates like "Q1 2026" won't parse.
  • Empty rows are fine — the importer skips them. Don't worry about cleaning up below your data.
  • Adding a new column? The importer ignores unknown columns silently. You can add notes / status colour / your own metadata anywhere.
  • Renaming sheets? Keep at least the first word matching (e.g. "QSRA - My Custom Name" still works). Sheets that don't pattern-match are ignored.
  • Re-uploading after edits: the new file replaces the existing data. To merge, export your current state first via Reports → JSON, then merge externally.
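The probability auto-detection in the first pitfall can be sketched as a column-level decision. The helper name is hypothetical; it simply shows why mixing 0.45-style and 45-style values in one sheet is unsafe:

```python
def parse_probabilities(values):
    """Column-level auto-detection of the 0-1 vs 0-100 probability scale.
    Any value above 1 flips the whole column to percent mode, so a lone
    0.45 in a percent-style column would silently read as 0.45%."""
    nums = [float(v) for v in values if v not in ("", None)]
    as_percent = any(n > 1 for n in nums)
    return [n / 100 if as_percent else n for n in nums]
```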

Pro tip: The README sheet inside the template carries a self-contained quick-reference that travels with the file. When you share the template with a colleague, they don't need to open the app help to know how to fill it in — everything's on Sheet 1.

Reports & Exports

Risk Intelligence produces five distinct exports, each suited to a different audience and use case. All are accessible from the Report tab in the navigation — no credit gating, no entitlement checks, no waits. Dharani Holdings branded headers, footers and disclaimers are applied automatically to every output.

  • Excel — Risk Register (.xlsx). Best for: sharing with colleagues who need to filter / pivot / annotate. Contains: cover sheet, Risk Register with all overrides, P-value confidence levels, sensitivity ranking, insights, full run log.
  • Word — QSRA Report (.docx). Best for: embedding into a project assurance document. Contains: executive summary, methodology, scenario comparison, risk register, insights — Office Open XML with branded styles.
  • PDF — Executive Summary (single page). Best for: sharing with leadership / sponsors. Contains: one-page brief with priority driver, P50/P80/P90 KPIs, top-3 risks, recommended actions.
  • PDF — Detailed Report (multi-page). Best for: full project record / claim package. Contains: 13 sections including every rendered chart as an embedded image — see breakdown below.
  • SCL Claim Report (PDF + Word, from the Claims tab). Best for: formal EOT / variation / disruption submissions. Contains: Cover · TOC · Executive Summary · Contractual Provisions · Methodology · Time-Slice Window-by-Window analysis · Mitigation · Conclusion · Issues Schedule appendix · Documents appendix.

PDF Detailed Report — what's inside (13 sections)

The detailed report captures everything currently in your dashboard. Charts are saved at full PNG resolution from the live Chart.js canvases at the moment you click Generate.

  1. Cover page — title, subtitle, client, author, document ref, classification, date
  2. Executive Summary — synthesised position, headline contingency figures at your selected confidence level
  3. Methodology — sampling method (Latin Hypercube / Monte Carlo), iterations, distribution, scope dimensions, mitigation assumptions
  4. Sensitivity Analysis — top variance contributors with Spearman ρ values
  5. Scenario Comparison — Pre-mitigation vs Post-mitigation side-by-side
  6. Risk Register — top 60 risks sorted by score, full register available in the Excel export
  7. Insights & Conclusions — pattern-based observations from the simulation results
  8. Visual Analysis · Chart Library — every rendered chart as a high-resolution embedded image, automatically scoped to your active dimensions:
    • Overview: Cumulative Probability S-Curve + Risk Concentration donut
    • Time QSRA: Distribution Histogram + Pre/Post S-Curve + Sensitivity Tornado
    • Cost QCRA: Distribution Histogram + S-Curve + Tornado
    • Performance QPRA: Distribution Histogram + S-Curve + Tornado
    • BOQ: Total Cost Distribution histogram
    • Issue Modeling: Schedule Impact chart
    Every section header also carries the baseline values (planned finish, planned budget, nominal performance), so the P-value figures can be read without flipping back to the dashboard.
  9. Schedule Summary — activity counts, critical path stats, WBS hierarchy, top 15 critical activities (only included if XER or Schedule sheet uploaded)
  10. Uncertainty Register — full table of continuous uncertainty entries with time/cost/perf bands
  11. Issues / Delay Events Summary — totals plus per-issue breakdown with cause-owner, window, net days, prolongation, disruption
  12. Claim Packages — claims register with type, status, methodology, issue counts (full SCL Time-Slice analysis lives in the dedicated Claim Report)
  13. Bill of Quantities Analysis — baseline total, P50/P80/P90 confidence bands, delta vs baseline

Generating the PDF Detailed Report — checklist

  • Visit each tab first — charts render lazily when you switch to a tab. To capture all 13 charts, click through Overview → Time QSRA → Cost QCRA → Performance QPRA → BOQ → Issue Modeling before clicking Generate. The report includes only charts that have actually been rendered.
  • Click Report → "PDF — Detailed Report"
  • Your browser's print dialog opens. Choose "Save as PDF" as the destination.
  • Set margins to "Default" or "None" for best layout.
  • Save. Done.

Tip — print only the report: the dashboard around the report is hidden via visibility: hidden in the print stylesheet, so what you see in the print preview is exactly what will save to PDF — clean, no header/sidebar/tabs.

SCL Claim Report (separate from the project-level Detailed Report)

For each claim package, the dedicated SCL Time-Slice Window Analysis report is generated from the Claims tab by clicking the claim card → "PDF Report" or "Word Report" button. Unlike the project Detailed Report, the SCL report is structured around delay events grouped by window (W01, W02, W03…) per the Society of Construction Law Delay & Disruption Protocol (2nd Ed., 2017).

See the Issues, Claims & SCL section for the SCL workflow and methodology choices.

Risk register vs Uncertainty register

Two distinct registers feed the simulation, but they model different things:

Risk Register

Discrete events with a probability gate. Either it happens or it doesn't.

Example: "Vendor failure delays signalling delivery" — 35% probability, ±60-day band if it occurs.

Sampled per iteration: occurrence is Bernoulli(prob), magnitude is your three-point estimate.

Uncertainty Register

Continuous variance always present. No probability gate.

Example: "Civil works productivity rate varies between 0.85× and 1.15× of plan."

Sampled every iteration directly from the band — the spread comes from inherent estimation error, not discrete events.
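The contrast between the two registers can be sketched in a few lines of Python. This is illustrative only: a triangular draw stands in for the engine's BetaPert default, and the function names are assumptions:

```python
import random

def sample_risk(prob, lo, ml, hi, rng):
    """Discrete risk: Bernoulli occurrence gate, then a three-point draw."""
    if rng.random() >= prob:          # the risk does not fire this iteration
        return 0.0
    return rng.triangular(lo, hi, ml)

def sample_uncertainty(lo, ml, hi, rng):
    """Continuous uncertainty: no gate, sampled on every iteration."""
    return rng.triangular(lo, hi, ml)

rng = random.Random(42)
# The vendor-failure example above: 35% probability, 20-80 day band.
risk_draws = [sample_risk(0.35, 20, 45, 80, rng) for _ in range(10_000)]
fired = sum(1 for d in risk_draws if d > 0) / len(risk_draws)
# fired lands near 0.35: roughly a third of iterations see the event at all
```

An uncertainty entry like the 0.85×-1.15× productivity band would call `sample_uncertainty` on every iteration, so its spread is always present in the result.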

Issues, Claims & SCL Protocol

The Issues and Claims tabs implement the Society of Construction Law (SCL) Delay & Disruption Protocol workflow for retrospective claim management.

Issues — building blocks

Each Issue captures one discrete delay event with the full SCL field set:

  • Identity: ID, title, status (Open/Notified/Investigating/Submitted/Closed)
  • Cause/Owner: Employer (excusable + compensable), Contractor (non-excusable), Neutral (excusable, non-compensable — e.g. force majeure), Concurrent (both), TBD
  • Window/Phase + Contractual Reference (e.g. "FIDIC Sub-Clause 8.4(b)")
  • Dates: Event, Notice (Contractor), Engineer Response, Claim Reference
  • Narrative: Description (the event), Causation (why it caused delay/cost), Chronology (timeline), Notices & Correspondence, Mitigation Measures
  • Schedule fragnet: gross delay days, concurrent days, live net excusable readout, prolongation rate per day, disruption cost
  • Linked activities (multi-select) and supporting documents (attachments)

Claims — formal submissions

A Claim bundles one or more Issues into a formal submission:

  • Type: EOT Claim, Variation/Change Order, Acceleration, Disruption, Prolongation, Combined
  • Methodology: Time Slice Window Analysis (SCL recommended), As-Planned vs As-Built, Impacted As-Planned, Time Impact Analysis, Collapsed As-Built, Longest Path Analysis
  • Generate Report in PDF or Word — produces a full SCL Time-Slice Window Analysis with cover, TOC, Executive Summary (Position/Key Dates/Quantum), Salient Contractual Provisions, Methodology, Window-by-Window Analysis (one section per window with full per-issue narrative), Mitigation, Conclusion & Quantum, Issues Schedule appendix, Supporting Documents appendix

Issue Modeling — schedule impact

The Issue Modeling tab projects how registered issues impact the project completion date relative to baseline. Three view modes:

  • Combined Impact — issues are sorted chronologically by event date and stacked into a cumulative net delay curve. The KPI line shows projected finish vs baseline.
  • Per-Issue S-Curves — each issue gets its own standalone cumulative-probability S-curve. Useful for understanding how individual issues might unfold under uncertainty.
  • Incremental (Time-Slice) — bars showing per-window incremental contribution + cumulative line. Mirrors the SCL Time-Slice Window Analysis methodology directly.

Use Combined for retrospective claim quantum, Per-Issue for sensitivity/communication, and Incremental for narrative alignment with your Time-Slice methodology.
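The Combined Impact stacking can be sketched as a short fold over date-sorted issues. Field names here are hypothetical, not the workbench's data model:

```python
from datetime import date

def combined_impact(issues):
    """Chronologically stacked net delay: each point is (event date, cumulative days)."""
    curve, total = [], 0.0
    for iss in sorted(issues, key=lambda i: i["event_date"]):
        # net excusable = gross delay minus any concurrent-delay deduction
        total += iss["gross_days"] - iss.get("concurrent_days", 0)
        curve.append((iss["event_date"], total))
    return curve
```

The last point of the curve is the projected finish delta against baseline shown on the KPI line.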

Distributions

Pick the distribution that best matches the shape you expect. Each can be set per-dimension (time / cost / performance) on each risk and uncertainty entry.

  • Point (ML only): single value — no Monte Carlo variance. Fixed contingency or flat allowance.
  • Two-point (Min, Max): 50/50 between two outcomes — binary "happens-or-not" pricing scenarios.
  • Uniform (Min, Max): equal probability across the band — when ML is genuinely unknown.
  • Bi-triangular (Min, Max): symmetric triangle, ML auto-set to midpoint. Useful when there is no preference for asymmetry.
  • Triangular (Min, ML, Max): linear ramps. When the bell shape feels too "tight".
  • BetaPert (Min, ML, Max): the default — smooth bell, weighted to ML. Most realistic for project estimates.
  • Normal (Min, ML, Max): symmetric around ML. Avoid when ML is far from the midpoint of Min-Max.
  • LogNormal (Min, ML, Max): right-skewed with a long upper tail. Cost-style risks where bad outcomes stretch far.
  • Trigen (Min(P10), ML, Max(P90)): Triangular with Min/Max interpreted as the 10th/90th percentiles rather than hard bounds (conservative tails).

The dialog automatically dims any field that the chosen distribution doesn't use — Point needs only ML, Two-point needs only Min and Max, and so on.

SRA engine — float erosion through retained logic

Schedule Risk Analysis (SRA) in this workbench is path-aware. When the loaded data includes Primavera P6 schedule logic (TASKPRED relationships), the simulation runs a full CPM forward-pass on every iteration, so float on parallel paths absorbs non-critical impacts naturally — exactly as Pertmaster, Safran Risk and Acumen Fuse would.

How it works

  1. Network build (once at simulation start). The retained Primavera logic — Finish-to-Start, Start-to-Start, Finish-to-Finish, Start-to-Finish links with lags — is loaded into a topologically-sorted graph. Cycles, if any, are flagged and skipped. The baseline CPM finish is computed from baseline activity durations and recorded as the anchor.
  2. Per iteration (5,000+ runs of the loop):
    • Working durations array is reset to the baseline.
    • For each risk that fires (probability gate via Bernoulli), the 3-point distribution is sampled to produce a duration impact for that iteration.
    • If the risk is linked to one or more activities (via the Activity ID column on the risk register), the sampled impact is added to each linked activity's working duration. Each linked activity receives the full impact — matching PRA / Pertmaster convention.
    • If the risk is unlinked (no Activity ID), its impact accumulates into a project-level overlay — the engine has no path context to resolve it, so it is added directly to the headline figure.
    • Uncertainty Register entries are routed the same way (linked → working durations; unlinked → overlay).
    • Forward-pass: early_start = max(predecessor constraints by relationship type, accounting for lag), early_finish = early_start + duration, in topological order.
    • The iteration's CPM-derived schedule delta = max(early_finish) − baseline_CPM_finish. This naturally reflects float erosion: a risk on a path with 60 days of float has zero effect until the impact exceeds 60 days; a risk on the critical path flows straight to project finish.
    • Net iteration schedule total = CPM delta + unlinked overlay.
  3. Ranking. The 5,000 schedule totals are sorted; P50, P80 and P90 are read off the empirical CDF.
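The float-erosion mechanics in steps 1 and 2 can be illustrated with a toy FS-only network (the real engine also handles SS/FF/SF links, lags and constraints). Two activities run in parallel into a common successor; the non-critical one carries 15 days of float:

```python
def cpm_finish(durations, preds):
    """Forward pass over FS links; `durations` must be in topological order."""
    ef = {}
    for act, dur in durations.items():
        es = max((ef[p] for p in preds.get(act, [])), default=0.0)
        ef[act] = es + dur
    return max(ef.values())

baseline = {"A": 20, "B": 5, "C": 10}     # A then C is the critical path (finish 30)
preds = {"C": ["A", "B"]}                 # B runs parallel to A with 15 days float

base = cpm_finish(baseline, preds)                               # 30
absorbed = cpm_finish({**baseline, "B": 5 + 10}, preds) - base   # 0: impact < float
eroded = cpm_finish({**baseline, "B": 5 + 20}, preds) - base     # 5: 20d impact minus 15d float
```

A 10-day impact on B is fully absorbed; a 20-day impact erodes the 15 days of float and only the remaining 5 days reach the project finish — exactly the behaviour the headline P-values reflect under CPM mode.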

What the dashboard tells you

The baseline strip footnote shows the engine mode and routing each time the simulation runs:

  • CPM forward-pass on N activities · M retained logic links — float erosion is active. The headline P-values reflect path competition.
  • X of Y risks routed through the network — how many of your register entries are linked to activities. Unlinked risks are still in the result, but as project-level overlay; they will not be path-resolved.
  • Linear-sum mode (no schedule logic) — falls back when no TASKPRED data is available (e.g. risk register without an XER, or a .plan file with no relationships extracted). Schedule risks are added linearly; this is conservative but path-agnostic.

Practical implications

  • P-values are usually much lower under CPM than under linear-sum. On large programmes the difference can be an order of magnitude — most risks land on activities with float and get partly or fully absorbed.
  • Link your risks to activities. An unlinked risk contributes to the overlay regardless of where in the schedule it would actually fall. The more risks you link, the more accurate the P-values.
  • Multi-activity links apply impact to each. If a single risk is linked to three activities, each activity's duration is extended by the full sampled impact. If your intent is to distribute the impact across them, split the risk into three entries — one per activity.
  • Cost and Performance dimensions stay linear-sum. CPM semantics only apply to schedule; cost and performance impacts are added across iterations regardless of activity link.

Why P80 dropped after enabling SRA. Under linear-sum, every risk's schedule impact added to total project delay, including risks on non-critical paths. Under CPM, a risk on a non-critical path delays the project only if the impact exceeds the available float. On a healthy schedule with substantive float, that's a small fraction of risks — so headline P80 falls. The new number is the genuine path-aware contingency requirement, not a softening of the analysis.

Three-point estimates

For BetaPert (the default):

  • Min — 5th-percentile case. Only ~5% chance the actual is lower.
  • ML (Most Likely) — single most plausible estimate.
  • Max — 95th-percentile case. Only ~5% chance the actual is higher.

Caution: Avoid Max ÷ Min ratios above ~5:1 — this often indicates the risk should be split into multiple separate risks.

Pre / Post / Uncertainty scenarios

Every simulation runs three scenarios in parallel. Toggle the scenario lens on QSRA / QCRA / QPRA tabs to see them.

  • Pre-Mitigated (rust orange) — risks at original probability and impact. The "as if no mitigation lands" worst case.
  • Post-Mitigated (sage green) — applies mitigation effectiveness factor to reduce probability and impact per the response strategy. Shows the value of your mitigation plan.
  • Uncertainty (teal) — no discrete risk events fire. Only the estimation uncertainty bands are sampled, plus entries from the Uncertainty Register. Isolates "how much is true uncertainty vs identifiable risk events".

Tornado modes

The tornado chart on QSRA / QCRA / QPRA can be grouped three ways:

  • Risks — every risk individually, ranked by Spearman correlation with the dimension outcome.
  • Activities — risks aggregated by linked schedule activity. Shows which activities concentrate the most exposure.
  • Categories — risks aggregated by category (e.g. Procurement, FIDIC, Geopolitical). Strategic view for risk owners.

Confidence view

Pick any activity from the schedule window on the left. The right pane runs a focused QSRA on just that activity's linked risks and shows the distribution of forecast finish dates with P50/P80/P90 bands. Toggle between calendar dates and days-delta on the chart axis. Activities with no linked risks fall back to a default uncertainty band derived from their planned duration.

Schedule filters (XER)

The Schedule tab supports filtering large schedules by classic project-controls criteria:

  • Critical path — activities with zero (or near-zero) total float. Drives the project finish.
  • Longest path — sequence of activities producing the longest run from start to finish.
  • Total float < N — near-critical activities (user-defined threshold). Useful for early warning.
  • Free float < N — activities that immediately delay successors if they slip.

Filters compose with text search — drill into a WBS branch first, then apply float thresholds.
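The composition is just an AND of predicates. A sketch with hypothetical field names (`tf` total float, `ff` free float):

```python
def filter_activities(acts, text=None, total_float_lt=None, free_float_lt=None):
    """AND-compose a text search with float-threshold filters."""
    out = acts
    if text:
        out = [a for a in out if text.lower() in a["name"].lower()]
    if total_float_lt is not None:
        out = [a for a in out if a["tf"] < total_float_lt]
    if free_float_lt is not None:
        out = [a for a in out if a["ff"] < free_float_lt]
    return out
```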

Settings reference

Open from the gear icon in the header. Five tabs:

  • General — currency (10 codes incl. AED, OMR, SAR, INR, GBP, EUR, USD) and editable symbol, workdays per week, hours per workday, risk matrix size (3 / 4 / 5), auto re-run on parameter change (default on), time display (days vs absolute dates), and the analysis-scope changer (Time / Cost / Performance toggles).
  • Categories — manage the 34+ category lexicon. Add new categories with comma-separated keywords, edit keywords, delete categories. Edits persist in the knowledge base.
  • Thresholds — risk score levels (HIGH/MEDIUM cutoffs), performance loss red/yellow zones (%), wide-range warning ratio, and per-dimension uncertainty bands controlling the Uncertainty scenario spread (defaults: ±20% time, ±25% cost, ±15% perf).
  • Performance — toggle resource-allocation-based performance signal (uses XER resource data when present) and set the over-allocation threshold (% of baseline allocation considered a perf risk).
  • Advanced — show/hide the "Recommended Actions" panel on Overview, knowledge-base statistics, and the Excel template download button.

If you change scope mid-analysis (e.g. add Cost dimension after starting with Time only), the new QRA tab appears and the simulation re-runs to populate it.

Knowledge base

Persisted in browser localStorage. Grows with every analysis — the engine learns your patterns:

  • Category lexicon — 34 seed categories including FIDIC, NEC, Legal, Geopolitical, Regional. Manual edits stick.
  • Manual risks — every risk you add manually strengthens the lexicon with its keywords.
  • Insight edits — edit any insight card; it pins to the KB and re-shows on future runs with a "PINNED" tag.
  • Dismissals — dismiss an insight pattern and the engine stops surfacing it.
  • Schema memory — when you accept a schema mapping, it's remembered for similar future imports.

Open the side knowledge panel from the header (📚 Knowledge button) to inspect everything.

Keyboard tips & UI patterns

  • Esc — close any modal.
  • Ctrl/Cmd + Enter — re-run simulation.
  • Click your name in the top-right — opens the Profile panel (name, email, organisation, role) with a Sign out button.
  • Click 📁 Projects — switch between workspaces. After sign-in you land directly in the most recently used workspace, but the projects screen is one click away.
  • Click the tab on the right edge of the screen — collapse the Analysis Controls sidebar to use the full window width. Click again to restore. Preference is remembered between sessions.
  • Click a sensitivity table row — drills into that risk's parameters.
  • Click a tornado bar — drills into that risk (Risks mode) or filters by activity (Activities mode).
  • Click a heat-map cell — filters the register to risks at that probability × impact.
  • Click any S-curve point — see the cumulative probability and outcome value at that point.
  • Hover an insight card — reveal the EDIT button.
  • On the Schedule tab — the +R / +U / +I buttons on every row let you add a Risk, Uncertainty, or Issue linked to that activity in one click.
  • On the Risk register — click the Category cell to change it inline (no modal needed).

Methodology deep-dive

This section explains every methodological choice the engine makes, with the maths spelled out. It is the reference clients and dispute panels can be pointed at when they want to understand exactly what was done.

1. The three-dimensional model

Every iteration of the simulation computes three outcomes — Time (days of schedule slippage), Cost (currency overrun) and Performance (percentage degradation of a measured capability). The dimensions are not aggregated into a single score; each is reported on its own confidence axis (P10/P50/P80/P90) and visualised separately in the QSRA, QCRA and QPRA tabs.

For each risk in the active register, each iteration evaluates four steps in order:

  1. Bernoulli gate — draws U ∼ Uniform(0, 1) and fires the risk if U < p, where p is the effective probability after applying the global probability scale and any per-risk override.
  2. Time impact — when fired, samples from the chosen distribution (BetaPert default, Triangular, Normal, Lognormal, Uniform, or Trigen depending on the per-dimension distribution setting) over the (Min, ML, Max) triplet. The sampled value is added to that iteration's total schedule overrun.
  3. Cost impact — same distribution sampling logic but on the cost triplet. The default cost-schedule correlation is 0.3 (Spearman rank), implemented via a Gaussian copula on the two uniforms before they are converted back to distribution percentiles. Lowering it to 0 means cost and schedule impacts are independent for a given risk; raising it toward 1 makes them comove tightly.
  4. Performance impact — sampled the same way but applied multiplicatively: the residual retained performance is ∏(1 − ℓᵢ) across all firing risks in the iteration, where ℓᵢ is each firing risk's sampled loss fraction, then converted back to a loss percentage.
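Step 3's Gaussian copula can be sketched with the standard normal CDF. This is an illustrative sketch, not the engine's internals; note that for a Gaussian copula the induced Spearman correlation is (6/π)·asin(ρ/2), slightly below the ρ fed in:

```python
import random
from statistics import NormalDist

def correlated_uniforms(rho, n, seed=7):
    """Pairs of uniforms tied by a Gaussian copula. The resulting Spearman
    correlation is (6/pi)*asin(rho/2): about 0.287 when rho = 0.3."""
    nd, rng, out = NormalDist(), random.Random(seed), []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)  # correlated normals
        out.append((nd.cdf(z1), nd.cdf(z2)))  # back to uniforms: time / cost percentiles
    return out
```

Each pair would then feed the inverse-CDF of the time distribution and the cost distribution for the same risk; setting ρ = 0 decouples the two impacts, pushing ρ toward 1 makes them comove.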

2. Sampling method

Latin Hypercube Sampling is the default. Each input distribution is divided into N equal-probability strata where N = iteration count; one sample is drawn from each stratum. This guarantees that low-probability tails are exercised even at modest iteration counts. Monte Carlo (independent uniforms) is offered as an alternative for users who want baseline behaviour or need exact reproducibility for a specific external comparison.
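The stratified draw can be sketched in three lines (an illustrative sketch, not the engine's code):

```python
import random

def lhs_uniforms(n, rng):
    """Latin Hypercube: one draw from each of n equal-probability strata, shuffled."""
    u = [(i + rng.random()) / n for i in range(n)]   # stratum i covers [i/n, (i+1)/n)
    rng.shuffle(u)                                   # decorrelate across input variables
    return u
```

Because every stratum contributes exactly one sample, the extreme strata (and hence the distribution tails) are hit on every run, even at small n.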

3. Distributions in detail

  • BetaPert — uses the Vose modified formula. Mean = (Min + λ·ML + Max) / (λ + 2) where λ is the shape parameter (default 4, range 2–6). The distribution is rescaled to (Min, Max) so the bounds are hard. Best when the ML carries weight.
  • Triangular — straight-line tents; bounds are hard, ML controls where the apex sits. Use when ML is itself uncertain.
  • Trigen — Triangular but with Min and Max interpreted as percentiles (default 10% and 90% from Settings → Distribution) rather than absolute bounds. Useful when SMEs are more comfortable saying "the 10–90 range is X to Y" than "the absolute minimum is X".
  • Normal — fitted to mean = ML, σ = (Max − Min) / 6; symmetric. Use only when the underlying process is genuinely symmetric (rare for project risks).
  • Lognormal — right-skewed; fitted to μ = ln(ML), σ = ln(Max/Min) / 6. Models multiplicative cost growth or duration overruns.
  • Uniform — equal probability across (Min, Max). Used as a placeholder when SMEs decline to express a preference.
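The fitting formulas above translate directly into code. A sketch with hypothetical helper names, restating the text's parameterisations verbatim:

```python
import math

def betapert_mean(lo, ml, hi, lam=4.0):
    """Vose modified PERT: mean = (Min + lambda*ML + Max) / (lambda + 2)."""
    return (lo + lam * ml + hi) / (lam + 2)

def normal_params(lo, ml, hi):
    """Normal fit per the text: mean = ML, sigma = (Max - Min) / 6."""
    return ml, (hi - lo) / 6

def lognormal_params(lo, ml, hi):
    """Lognormal fit per the text: mu = ln(ML), sigma = ln(Max/Min) / 6."""
    return math.log(ml), math.log(hi / lo) / 6
```

For example, a (10, 20, 60) triplet gives a BetaPert mean of 25 at the default λ = 4: the long upper tail pulls the mean above the ML of 20.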

4. Sensitivity (Spearman rank correlation)

For each dimension, the engine computes Spearman ρ between each risk's per-iteration occurrence indicator and the iteration's total outcome. A correlation of 0.5 on Time means the risk's occurrence is strongly associated with iterations that had high schedule overrun. The output table is sorted by absolute correlation; the tornado plot shows positive/negative drivers separately. Spearman is preferred over Pearson because the underlying risk-outcome relationship is rarely linear and Spearman is robust to outliers.
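Spearman ρ is the Pearson correlation computed on ranks, which is what makes it robust to outliers and monotone nonlinearity. A tie-free toy version (the engine's version must also handle the tied 0/1 occurrence indicator, which this sketch does not):

```python
def spearman(xs, ys):
    """Spearman rho as the Pearson correlation of ranks (no tie handling)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) - 1) / 2                      # mean rank, identical for both
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)        # equal variances without ties
    return cov / var
```

Any strictly increasing relationship scores exactly 1.0, however nonlinear, which is why Spearman suits risk-outcome sensitivity better than Pearson.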

5. Schedule (SRA) engine — float erosion model

When a Primavera XER schedule is loaded with both activities and TASKPRED logic, the engine runs a per-iteration CPM forward pass with risk-perturbed durations. Each linked risk's sampled time impact is added to the duration of its target activity (or distributed across multiple targets when a risk is linked to several activities). The forward pass propagates the perturbation through the network respecting all four predecessor types (FS, SS, FF, SF) and their lags, and clamps activities to their schedule constraints (Must Start On, Start No Earlier Than, Must End On, Finish No Earlier Than) where set in the source XER. The new project finish for that iteration is max(EF) across all activities; subtracting the unperturbed baseline gives the iteration's schedule overrun.

Risks that aren't linked to any activity in the network are treated as a project-level overlay — their full sampled impact adds directly to the iteration's overrun without going through the CPM model. This is the legacy linear-sum behaviour, retained for risks where the network mapping isn't yet wired up.

6. Risk correlation groups

Risks tagged with the same corrGroup name share a single uniform draw per iteration for the Bernoulli gate (the impact-magnitude samples remain independent). The effect is a Bernoulli copula with rank correlation ρ ≈ 1 within the group: members co-occur. This models clusters like "all three permitting risks tied to the same regulatory cycle" or "the four supplier-dependence risks that all flow from a single sole-source decision". The number of active groups and the count of pooled risks is reported in the Methodology section of the detailed PDF.

7. Post-mitigation modelling

Each risk has two parallel sets of (Min, ML, Max) triplets — the pre-mitigation values and the post-mitigation residuals. Pre-mitigation values are taken as imported; post-mitigation values are either:

  • Explicit — entered directly by the user when "Use explicit residual values" is ticked. Recommended for engagement-grade work because it makes mitigation assumptions auditable.
  • Derived — computed from the pre-mitigation values multiplied by the response-strategy factor (Avoid = 0.05, Reduce = 0.4, Transfer = 0.3, Accept = 1.0) and the global mitigation-effectiveness slider. Both factors are user-tunable in Settings.
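The derived path is plain arithmetic over the pre-mitigation triplet. A sketch using the strategy factors listed above (helper name hypothetical; the global slider is shown as a plain multiplier):

```python
STRATEGY_FACTOR = {"Avoid": 0.05, "Reduce": 0.4, "Transfer": 0.3, "Accept": 1.0}

def derived_residual(pre, strategy, effectiveness=1.0):
    """Residual (Min, ML, Max) = pre-mitigation triplet x strategy factor x slider."""
    f = STRATEGY_FACTOR[strategy] * effectiveness
    return tuple(v * f for v in pre)
```

So a Reduce response at full effectiveness turns a (10, 20, 40) triplet into (4, 8, 16), while Accept leaves it untouched.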

8. Convergence

After every run, the engine computes the relative standard error (RSE) of the schedule mean: RSE = (σ / √N) / μ. The target is set on the Convergence slider (default 1%); when actual RSE exceeds target, an amber banner on the Overview page recommends a specific iteration count N' = N × (RSE / target)² to reach the target. This makes the iteration-count decision data-driven rather than a guess.
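The banner's recommendation reduces to a two-line computation (an illustrative sketch; the function name is hypothetical):

```python
import math

def recommended_iterations(samples, target_rse=0.01):
    """RSE = (sigma / sqrt(N)) / mu; above target, scale N by (RSE / target)^2."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / (n - 1))
    rse = (sigma / math.sqrt(n)) / mu
    n_next = n if rse <= target_rse else math.ceil(n * (rse / target_rse) ** 2)
    return rse, n_next
```

The quadratic scaling follows from RSE shrinking as 1/√N: halving the error requires four times the iterations.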

9. Reproducibility

When the seed lock is enabled, every subsequent run uses the same seed regardless of the value in the seed input field. The locked seed is captured in archive snapshots, displayed on the cover page, and appears in the Run Log Excel sheet. Two analysts running the same dataset with the same locked seed will get bit-identical results. This is critical for engagement defensibility — claim-package figures should not shift between drafts because the seed rotated.
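
Bit-identical replay only requires a deterministic seeded generator. As an illustration (the workbench's actual generator is not specified here), a common choice in browser JavaScript is mulberry32:

```javascript
// mulberry32: a small deterministic PRNG. Same seed -> bit-identical stream,
// which is what makes locked-seed runs reproducible across analysts.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}
```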

FAQ

Q: My P80 schedule overrun is huge — is that a re-baseline?
No. P80 represents the schedule contingency that, added to the baseline, gives 80% confidence of completion; it is an addition to the baseline, not a replacement. The baseline strip at the top of every dashboard tab makes this explicit, and the focus KPI tile spells out the forecast date (baseline + P80 delta) so there is no ambiguity.

Q: Where do the "Scheduled finish" and "Planned budget" values in the baseline strip come from?
Scheduled finish comes from the Primavera P6 PROJECT record in your .xer file — specifically scd_end_date (the schedule's calculated end), with plan_end_date as backup. If the PROJECT record is missing those fields, the app falls back to the maximum current finish date across activities (act_end_date for completed activities, otherwise early_end_date). It deliberately does not use target_end_date as the primary source: that field is the activity's original planned finish, set when the task was first created, and on schedules that have been re-planned or have slipped it can be very far from the current scheduled state. The data date shown alongside is the schedule's status date (last_recalc_date), so you can verify the values reflect the right update.

Planned budget = sum of quantity × rate across all BOQ line items (or the Amount column if rate and quantity are blank). Nominal performance is always 100% — the simulation outputs the percentage loss against that baseline. If a dimension has no source data, its slot is omitted from the strip rather than shown as zero.
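
The finish-date fallback chain reads naturally as ordered lookups. A sketch, with field names following the XER conventions above:

```javascript
// Baseline "Scheduled finish" fallback chain (sketch).
// 1. PROJECT.scd_end_date  2. PROJECT.plan_end_date  3. max activity finish.
function scheduledFinish(project, activities) {
  if (project?.scd_end_date) return project.scd_end_date;
  if (project?.plan_end_date) return project.plan_end_date;
  const finishes = activities
    .map(a => a.act_end_date || a.early_end_date) // actual finish, else early finish
    .filter(Boolean);
  return finishes.length ? finishes.sort().pop() : null; // ISO dates sort lexically
}
```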

Q: The P80 tile says "Forecast ≈ 2028-02-16" — how is that computed?
It is the baseline finish date plus the P80 delta in days, rounded to a calendar date. For cost it is the planned budget plus the P80 cost delta. For performance it is 100% − P80 loss, shown as "Retained ≈ X%". The forecast values appear only when a baseline is known for that dimension; otherwise the tile falls back to the standard "+X above P50" sub-label.
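
The tile arithmetic is a one-liner per dimension. A sketch, assuming the baseline date is an ISO string and deltas are in days:

```javascript
// Forecast tile arithmetic (sketch): baseline plus the P80 delta per dimension.
function forecastDate(baselineISO, p80DeltaDays) {
  const d = new Date(baselineISO + "T00:00:00Z");
  d.setUTCDate(d.getUTCDate() + Math.round(p80DeltaDays)); // rolls months/years over
  return d.toISOString().slice(0, 10);
}
const forecastCost = (plannedBudget, p80CostDelta) => plannedBudget + p80CostDelta;
const retainedPerf = (p80Loss) => 100 - p80Loss; // shown as "Retained ≈ X%"
```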

Q: When should I use BetaPert vs Triangular?
BetaPert by default — it weights the ML and produces realistic bell shapes. Use Triangular when the ML is genuinely uncertain or when you want a more conservative spread.

Q: Why does Pre P80 stay the same when I change settings?
Make sure auto-run is enabled (Settings → General). Otherwise click "Run Simulation" in the sidebar to apply changes.

Q: How does the Uncertainty register interact with Settings → Thresholds → uncertainty bands?
The Settings bands set the global default. Per-entry distributions in the Uncertainty register override the default for that entry only.

Q: Can I use a non-USD currency?
Yes — Settings → General. Pick from USD, EUR, GBP, AED, OMR, INR, SAR, QAR, KWD, BHD. The symbol auto-pairs but is editable.

Q: Can I switch projects without losing my current work?
Yes. Click 📦 Archive in the header to snapshot the current workspace state (risks, activities, uncertainties, issues, claims, BOQ, settings, source filenames), then open 📁 Projects to switch to or create another workspace. When you return, the most recent archive is auto-restored.

Q: Is my data sent anywhere?
No. Risk Intelligence runs entirely in your browser. Risk registers, schedules and the knowledge base are held in local browser storage. The only network access is the initial one-time load of Chart.js, SheetJS and pdf.js from CDN. Live Dharani Holdings deployments use the firm’s managed environment for storage, identity and audit.

Q: Why does the convergence banner come up even though I ran 10,000 iterations?
Because the dataset has high variance — perhaps a few risks with very wide impact ranges or with very low probabilities. The banner is showing you the actual standard error, not the iteration count. The fix is either tighter input estimates or more iterations. The banner offers a one-click jump to the recommended count.

Q: I duplicated a risk but the override didn't carry across — is that a bug?
No, that's intentional. Duplicates inherit the rawRow (probability, impacts, category, owner, response, residual values, source citation) but not any per-risk override set via the cog modal. The reasoning: overrides usually express engagement-specific judgement on a particular risk; the copy is a new, separate risk that should start from the base values and be tweaked independently.

Q: How do correlation groups affect my P80?
If you pool risks that previously fired independently, the variance of the sum increases (they no longer offset each other across iterations). Concretely: pooling three 50%-probability risks that were independent produces a heavier right tail because all three either fire or none of them do. Expect P80 to rise and P50 to be roughly unchanged. The methodology section of the detailed PDF reports how many groups are active and how many risks they pool.
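
The pooling effect can be checked exactly on a toy case: three p = 0.5 risks with an impact of 10 each. Independent firing gives a Binomial(3, 0.5) spread of outcomes; one shared gate collapses it to all-or-nothing, fattening the right tail:

```javascript
// Exact quantile of a discrete distribution: [[value, prob], ...] sorted by value.
function quantile(dist, q) {
  let cum = 0;
  for (const [v, p] of dist) { cum += p; if (cum >= q) return v; }
}

const independent = [[0, 1/8], [10, 3/8], [20, 3/8], [30, 1/8]]; // Binomial(3, 0.5) × 10
const pooled      = [[0, 1/2], [30, 1/2]];                       // all fire or none
// quantile(independent, 0.8) → 20; quantile(pooled, 0.8) → 30 — heavier tail.
```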

Q: What's the difference between Excel, Standard PDF, and Detailed PDF exports?
Excel is the working analyst's deliverable — multi-sheet workbook with the risk register, confidence levels, sensitivity, run log, audit trail, issues, claims and outcomes. Standard PDF is the on-screen Report tab printed as PDF — quick, includes the methodology summary, top drivers and recommendations. Detailed PDF is the formal multi-page report — cover page, executive summary, methodology with full reproducibility statement, the risk matrix, sensitivity drivers, per-risk appendix. Use Detailed when the deliverable will be reviewed by clients or dispute panels. Use the new Executive Brief for a tight one-pager when you need to communicate the headline numbers quickly.

Q: Can I compare two archive snapshots?
Yes. On the Projects screen, open a project's detail view, tick two archive boxes, and click "Compare selected". The comparison modal shows headline metric deltas (risk count, schedule P80, cost P80) and which risk IDs were added or removed between snapshots.
