
Incident Report / Post-Mortem Template

Root Cause Analysis & Lessons Learned Capture

Overview

After the firefighting is over, the real work begins. A well-structured incident post-mortem turns every security incident into a learning opportunity that strengthens your defenses. This template walks you through documenting the incident timeline, identifying root causes, measuring business impact, and defining corrective actions that actually get implemented. Whether you follow a blameless post-mortem process inspired by Google SRE practices or a more traditional incident review, the goal is the same: build resilience, not assign blame.

Post-Mortem Sections

  • Executive summary and incident classification
  • Detailed timeline with UTC timestamps and key decisions
  • Root cause analysis (primary and contributing factors)
  • Impact assessment: data, systems, customers, and financial
  • Detection and response effectiveness review
  • Corrective actions with owners, deadlines, and verification steps
  • Lessons learned and process improvements
  • Communication review: internal and external response
  • Appendices: evidence logs, communication records, IOCs

Root Cause Analysis Methods

  • 5 Whys. Best for: simple, single-cause incidents. Approach: ask "why" iteratively until you reach the root cause; commonly used in blameless post-mortems.
  • Fishbone (Ishikawa). Best for: complex incidents with multiple contributing factors. Approach: categorize causes across people, process, technology, and environment.
  • Fault Tree Analysis. Best for: high-severity incidents requiring formal analysis. Approach: map failure paths in a logical tree structure.
  • Timeline Analysis. Best for: incidents with an unclear sequence of events. Approach: plot every action and event chronologically to identify gaps.
  • Contributing Factors Analysis. Best for: systemic issues across teams. Approach: identify the organizational, process, and tooling factors that enabled the incident.
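The 5 Whys method above can be sketched as a simple question-and-answer chain. This is a minimal illustration, not a prescribed tool; the incident scenario and answers are invented for the example.

```python
# Minimal 5 Whys worksheet: pair each "why" with the answer given during
# the review, treating the final answer as the candidate root cause.
def five_whys(problem, answers):
    chain = []
    current = problem
    for answer in answers:
        chain.append((f"Why did '{current}' happen?", answer))
        current = answer
    return chain, current  # last answer in the chain is the root cause

# Illustrative incident, not a real case.
chain, root = five_whys(
    "Customer data was exfiltrated",
    [
        "An attacker used a valid VPN credential",
        "The credential was phished and MFA was not enforced",
        "MFA rollout excluded contractor accounts",
        "No policy required MFA for third-party access",
        "Access policy reviews happen only at onboarding",
    ],
)
for question, answer in chain:
    print(f"{question} -> {answer}")
```

Note how the final answer is a process gap, not a person: stopping earlier (at "the credential was phished") would have produced a corrective action aimed at an individual rather than at the system.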

Writing the Timeline

The timeline is the backbone of any good post-mortem report. Start from the earliest indicator of compromise, not from when the alert fired. Include every significant event: initial access, lateral movement, detection, triage decisions, containment actions, communications sent, and full recovery. Use UTC timestamps throughout for consistency. Note where there were delays and why — were analysts waiting for approvals? Did detection take too long? Was the escalation path unclear? The timeline should tell the story of both the attack and the response.
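A timeline of UTC-stamped events also makes delay analysis mechanical: sort the events, compute the gaps, and flag any gap large enough to need an explanation. The sketch below assumes a simple list of (timestamp, event) pairs; the events and the one-hour threshold are illustrative.

```python
from datetime import datetime, timezone

# Illustrative timeline entries (UTC, ISO 8601); not from a real incident.
timeline = [
    ("2026-01-10T02:14:00", "Initial access: phished VPN credential used"),
    ("2026-01-10T03:40:00", "Lateral movement to file server"),
    ("2026-01-10T09:05:00", "EDR alert fired; triage began"),
    ("2026-01-10T10:30:00", "Containment: account disabled, host isolated"),
]

def parse(ts):
    """Parse an ISO 8601 timestamp and mark it as UTC."""
    return datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)

# Flag gaps longer than an hour so each delay gets explained in the report.
for (t1, e1), (t2, e2) in zip(timeline, timeline[1:]):
    gap = parse(t2) - parse(t1)
    if gap.total_seconds() > 3600:
        print(f"{gap} between '{e1}' and '{e2}': explain the delay")
```

In this example the largest gap sits between lateral movement and detection, which is exactly the kind of finding ("detection took too long") the post-mortem should turn into a corrective action.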

Impact Assessment Framework

Quantify the impact across multiple dimensions. How many records were exposed? Which systems were offline and for how long? Were customers directly affected? What was the financial cost including incident response fees, legal expenses, and lost revenue? Document regulatory implications: was this a reportable breach under GDPR, HIPAA, CERT-In, or state breach notification laws? Include reputational impact where measurable. The impact section is what leadership and the board care about most.
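The quantification above can be as simple as a per-dimension tally rolled up into the headline numbers leadership asks for. The categories and figures below are invented for illustration; use whatever cost dimensions your finance and legal teams track.

```python
# Illustrative impact tally; categories and amounts are example values only.
financial_impact_usd = {
    "incident_response_fees": 85_000,
    "legal_and_notification": 40_000,
    "lost_revenue_downtime": 120_000,
    "regulatory_fines": 0,  # no fine assessed in this example
}
records_exposed = 12_400
systems_offline_hours = 9

total_cost = sum(financial_impact_usd.values())
print(f"Records exposed: {records_exposed:,}")
print(f"Systems offline: {systems_offline_hours} h")
print(f"Total financial impact: ${total_cost:,}")
```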

Blameless Post-Mortem Best Practices

A blameless post-mortem culture, pioneered by Google SRE and adopted by organizations like Etsy, PagerDuty, and Atlassian, focuses on systemic improvements rather than individual fault. Key principles: (1) Assume everyone acted with the best information available at the time. (2) Focus on "what" and "how" — not "who." (3) Reward people for surfacing contributing factors honestly. (4) Document decisions that seemed reasonable in context, even if they later proved wrong. (5) Track recurring themes across post-mortems to identify systemic gaps in tooling, process, or training.

Corrective Actions That Stick

Every post-mortem produces a list of action items, but the ones that matter have three things: a clear owner, a realistic deadline, and a verification step. Assign each action to a specific person, not a team. Set deadlines within 30, 60, or 90 days depending on complexity. Schedule follow-up reviews to confirm implementation. Track completion rates across all post-mortems in tools like Jira, PagerDuty, or your GRC platform to identify systemic issues like chronic underinvestment in detection or repeated access control failures.
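The three properties above (owner, deadline, verification) map naturally onto a small tracking record, whatever system you store it in. This sketch uses a plain dataclass with invented action items and dates; the field names are assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Action:
    description: str
    owner: str          # a specific person, not a team
    deadline: date
    verified: bool = False  # set only after the follow-up review confirms it

# Illustrative action items from a hypothetical post-mortem.
actions = [
    Action("Enforce MFA on contractor VPN accounts", "A. Rao", date(2026, 2, 10), True),
    Action("Add detection rule for anomalous VPN logins", "J. Chen", date(2026, 3, 1), True),
    Action("Quarterly third-party access review", "M. Patel", date(2026, 4, 1)),
]

completion_rate = sum(a.verified for a in actions) / len(actions)
today = date(2026, 3, 15)  # review date, fixed here for the example
overdue = [a for a in actions if not a.verified and a.deadline < today]
print(f"Completion rate: {completion_rate:.0%}, overdue: {len(overdue)}")
```

Reporting the completion rate across all post-mortems, rather than per incident, is what surfaces the systemic patterns mentioned above.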

Using AI for Post-Mortem Generation

Modern incident response platforms, including Hunto AI, can auto-generate post-mortem drafts by correlating alert timelines, analyst actions, and remediation steps from your SIEM and ticketing systems. AI-generated post-mortems provide a structured starting point — pre-populating the timeline, affected assets, and detection-to-containment metrics — so your team can focus on root cause analysis and lessons learned rather than manual documentation.
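The detection-to-containment metrics mentioned above reduce to simple timestamp arithmetic once the correlated timeline exists. This is a generic sketch of that calculation, not Hunto AI's implementation; the event names and timestamps are illustrative.

```python
from datetime import datetime

# Key moments a correlated timeline would supply (illustrative values).
events = {
    "first_malicious_activity": "2026-01-10T02:14:00",
    "detected":                 "2026-01-10T09:05:00",
    "contained":                "2026-01-10T10:30:00",
}
t = {name: datetime.fromisoformat(ts) for name, ts in events.items()}

time_to_detect = t["detected"] - t["first_malicious_activity"]
time_to_contain = t["contained"] - t["detected"]
print(f"Time to detect: {time_to_detect}, time to contain: {time_to_contain}")
```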

Frequently Asked Questions

When should the post-mortem be completed after an incident?

Within two weeks of incident closure for critical incidents and within 30 days for lower-severity events. The longer you wait, the more details are lost and the less actionable the findings become. Many SRE teams schedule the post-mortem meeting within 48-72 hours while context is fresh.

Who should participate in the post-mortem review?

Everyone involved in the incident response, plus representatives from teams that own the affected systems. Include a facilitator who was not directly involved to ensure objectivity. Legal and compliance should review the final document before distribution.

How do we keep post-mortems blameless?

Focus on systems and processes, not individuals. Ask "what failed" rather than "who failed." Establish a culture where surfacing issues is rewarded. Document decisions that were reasonable given the information available at the time. Companies like Google, Atlassian, and PagerDuty publish blameless post-mortem guides that serve as excellent references.

Should post-mortem reports be shared externally?

Internal post-mortems should be treated as confidential. If regulatory notification requires a root cause summary, work with legal to prepare a version that meets disclosure requirements without exposing sensitive details about your security posture.

How do we track whether corrective actions are actually completed?

Use a centralized tracking system like Jira, PagerDuty, or your GRC platform. Assign each action a ticket with an owner and deadline. Review open actions in monthly security steering committee meetings. Report completion rates quarterly to leadership.

What is the difference between a post-mortem and an incident report?

An incident report documents the facts: what happened, when, and what was affected. A post-mortem goes deeper — it analyzes why the incident happened, evaluates the effectiveness of the response, and produces actionable improvements. The best post-mortem templates combine both: factual documentation with root cause analysis and lessons learned.

Can AI generate post-mortem reports automatically?

Yes. AI-powered platforms like Hunto AI can auto-generate post-mortem drafts by correlating SIEM logs, alert timelines, and analyst actions. The AI structures the timeline, identifies affected assets, and calculates detection-to-containment metrics. Your team then validates the analysis, adds root cause findings, and defines corrective actions.

Ready to use this resource?

Download it now or schedule a demo to see how Hunto AI can automate your security workflows.
