Overview
Detection engineering is the discipline of systematically building, testing, deploying, and maintaining threat detection capabilities. It moves detection from an art practiced by a few senior analysts to an engineering discipline with version control, testing, and measurable quality metrics. This guide covers the detection lifecycle, Sigma rule development, ATT&CK coverage mapping, and the practices that make detection-as-code work at scale.
Detection Lifecycle
- Threat research: identify techniques to detect based on intelligence and risk
- Rule development: write detection logic in a vendor-agnostic format (Sigma)
- Testing: validate against known-good and known-bad data in a test environment
- Peer review: have another engineer review the rule for logic, performance, and false positive potential
- Deployment: push to production SIEM with appropriate severity and response tags
- Tuning: monitor false positive rate and refine allowlists and thresholds
- Retirement: decommission rules that are no longer relevant or effective
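The lifecycle above can be encoded as an explicit state machine, which is useful when automating promotion gates in a detection-as-code pipeline. This is a minimal sketch; the stage names and allowed transitions are one possible encoding, not a standard.

```python
from enum import Enum, auto

class Stage(Enum):
    """Detection lifecycle stages, in order."""
    RESEARCH = auto()
    DEVELOPMENT = auto()
    TESTING = auto()
    PEER_REVIEW = auto()
    DEPLOYED = auto()
    TUNING = auto()
    RETIRED = auto()

# Allowed forward transitions. Testing and peer review can send a rule
# back to development; tuning can trigger a rewrite or retirement.
TRANSITIONS = {
    Stage.RESEARCH: {Stage.DEVELOPMENT},
    Stage.DEVELOPMENT: {Stage.TESTING},
    Stage.TESTING: {Stage.PEER_REVIEW, Stage.DEVELOPMENT},
    Stage.PEER_REVIEW: {Stage.DEPLOYED, Stage.DEVELOPMENT},
    Stage.DEPLOYED: {Stage.TUNING},
    Stage.TUNING: {Stage.DEVELOPMENT, Stage.RETIRED},
    Stage.RETIRED: set(),
}

def can_transition(current: Stage, target: Stage) -> bool:
    """Any stage may retire; otherwise follow the transition table."""
    return target is Stage.RETIRED or target in TRANSITIONS[current]
```

A CI job can call `can_transition` before updating a rule's status field, rejecting, for example, a jump straight from research to deployed.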
Sigma Rule Structure
| Component | Purpose | Example |
|---|---|---|
| title | Human-readable name | Suspicious PowerShell Encoded Command Execution |
| id | Unique UUID for tracking | a1b2c3d4-e5f6-7890-abcd-ef1234567890 |
| status | Development stage | test, experimental, stable |
| logsource | Data source specification | category: process_creation, product: windows |
| detection | Rule logic with conditions | selection, filter, condition combinations |
| level | Alert severity | critical, high, medium, low, informational |
| tags | MITRE ATT&CK mapping | attack.execution, attack.t1059.001 |
| falsepositives | Expected benign triggers | Administrative scripts, software deployment tools |
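Assembling the components from the table above, a complete rule looks like the following. The field values are illustrative; the detection logic here is a hypothetical sketch of the encoded-PowerShell example, not a production-tuned rule.

```yaml
title: Suspicious PowerShell Encoded Command Execution
id: a1b2c3d4-e5f6-7890-abcd-ef1234567890
status: experimental
description: Detects PowerShell started with an encoded command, a common obfuscation technique.
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\powershell.exe'
        CommandLine|contains:
            - ' -enc '
            - ' -EncodedCommand '
    filter_deployment:
        ParentImage|endswith: '\ccmexec.exe'
    condition: selection and not filter_deployment
falsepositives:
    - Administrative scripts
    - Software deployment tools
level: high
tags:
    - attack.execution
    - attack.t1059.001
```

The `condition` line combines the selection with a named filter, which is where tuning happens: allowlisted parents or paths are added to filters rather than weakening the selection itself.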
Detection-as-Code Practices
Treat detection rules like software code. Store them in a Git repository with version control, branching, and pull request reviews. Automate testing using a CI/CD pipeline that validates syntax, runs against test data, and checks for performance impact before deployment. Maintain test cases for each rule: both expected-to-fire samples and expected-not-to-fire samples. Use infrastructure-as-code tools to deploy rules consistently across SIEM environments. This approach enables collaboration, auditability, and rollback when a rule causes problems in production.
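The expected-to-fire and expected-not-to-fire test cases can be run automatically in CI. The sketch below is a deliberately simplified illustration, assuming a toy matcher with Sigma-like `contains` semantics; a real pipeline would evaluate the rule with pySigma or the target SIEM's query engine.

```python
def matches(selection: dict, event: dict) -> bool:
    """Toy matcher: every selection field must appear as a case-insensitive
    substring of the corresponding event field."""
    return all(
        needle.lower() in str(event.get(field, "")).lower()
        for field, needle in selection.items()
    )

def run_rule_tests(selection, should_fire, should_not_fire):
    """CI-style check: the rule fires on every positive sample and stays
    silent on every negative sample. Returns a list of failures."""
    failures = []
    for event in should_fire:
        if not matches(selection, event):
            failures.append(("missed", event))
    for event in should_not_fire:
        if matches(selection, event):
            failures.append(("false_positive", event))
    return failures

selection = {"Image": "powershell.exe", "CommandLine": "-enc"}
positives = [{"Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
              "CommandLine": "powershell.exe -enc SQBFAFgA"}]
negatives = [{"Image": r"C:\Windows\System32\cmd.exe",
              "CommandLine": "cmd.exe /c dir"}]

assert run_rule_tests(selection, positives, negatives) == []
```

Wiring this into the pull request pipeline means a rule change cannot merge if it stops catching a known-bad sample or starts flagging a known-good one.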
ATT&CK Coverage Mapping
Map every detection rule to the specific MITRE ATT&CK technique and sub-technique it covers. Use the ATT&CK Navigator to visualize your coverage and identify gaps. Prioritize coverage of the techniques most commonly used against your industry, as identified in threat intelligence reports such as Mandiant M-Trends and the CrowdStrike Global Threat Report. Do not aim for 100% coverage immediately: focus on the techniques that matter most and build outward. A few high-quality detections for critical techniques are worth more than hundreds of untested rules.
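Because Sigma rules carry their ATT&CK mapping in `tags`, coverage can be computed directly from the rule set. This is a minimal sketch assuming rules are loaded as dictionaries and prioritized techniques come from your threat intelligence; the gap list can feed an ATT&CK Navigator layer.

```python
def attck_coverage(rules, prioritized):
    """Percentage of prioritized ATT&CK techniques covered by at least
    one rule, plus the sorted list of uncovered techniques."""
    covered = {
        tag.removeprefix("attack.").upper()
        for rule in rules
        for tag in rule.get("tags", [])
        if tag.startswith("attack.t")  # technique tags, not tactic tags
    }
    gaps = sorted(t for t in prioritized if t not in covered)
    pct = 100 * (len(prioritized) - len(gaps)) / len(prioritized)
    return pct, gaps
```

For example, a rule tagged `attack.t1059.001` covers technique T1059.001; any prioritized technique with no matching tag appears in the gap list.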
Quality Metrics
- Track detection coverage as a percentage of prioritized ATT&CK techniques
- Measure true positive rate per rule to identify your highest-value detections
- Monitor false positive rate and time to tune for each new rule
- Track mean time from rule creation to production deployment
- Measure detection rule half-life: how long before a rule needs updating or retirement
- Report on detection-driven incident discoveries to demonstrate program value
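The per-rule true/false positive metric above is straightforward to compute from triage verdicts. This sketch assumes alerts arrive as (rule_id, verdict) pairs from your case management system; the precise export format will vary.

```python
from collections import defaultdict

def rule_metrics(alerts):
    """Per-rule precision (share of fired alerts confirmed as true
    positives), for ranking highest- and lowest-value detections."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0})
    for rule_id, verdict in alerts:
        counts[rule_id]["tp" if verdict == "true_positive" else "fp"] += 1
    return {
        rule_id: c["tp"] / (c["tp"] + c["fp"])
        for rule_id, c in counts.items()
    }
```

Rules that sit at the bottom of this ranking for several reporting periods are the natural candidates for tuning or retirement.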
Frequently Asked Questions
Do we need a dedicated detection engineer?
Organizations with more than a handful of detection rules benefit significantly from dedicated detection engineering. If you cannot justify a full-time role, assign detection engineering responsibilities to a senior analyst with protected time and treat it as a formal discipline.
Why Sigma over SIEM-native rules?
Sigma is vendor-agnostic: rules can be converted to the query languages of most major SIEMs using conversion tooling such as pySigma. This protects your detection investment if you change SIEM platforms and allows you to share detections with the broader community.
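Conceptually, a Sigma backend walks the rule's detection logic and emits the target SIEM's syntax. The sketch below is a deliberately simplified, hypothetical illustration of that idea using a generic key=value query syntax; real conversions go through pySigma or sigma-cli, which also handle field mappings, modifiers, and escaping.

```python
def to_query(selection: dict) -> str:
    """Toy converter: render a flat Sigma-style selection as an ANDed
    key/value query string (not any specific SIEM's language)."""
    return " AND ".join(f'{field}="{value}"' for field, value in selection.items())
```

The same selection dictionary could be rendered into Splunk SPL, Elastic KQL, or another dialect by swapping the rendering function, which is exactly the portability argument for writing rules in Sigma first.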
How do we test detection rules without impacting production?
Use a detection testing framework with synthetic or replayed log data. Tools like Atomic Red Team execute specific ATT&CK techniques to generate realistic telemetry. Run new rules in audit or shadow mode before enabling production alerting.
How many detection rules do we need?
Quality over quantity. Start with 20 to 30 well-tested rules covering the most critical techniques for your environment. Scale to 200 or more as your program matures. Every rule should have evidence of testing and a documented false positive profile.
What is the role of AI in detection engineering?
AI and ML can enhance detection by identifying behavioral anomalies, reducing false positives through contextual scoring, and accelerating rule development. However, they supplement rather than replace deterministic rules; a strong detection program uses both approaches.
Ready to use this resource?
Download it now or schedule a demo to see how Hunto AI can automate your security workflows.
