In early 2025, the security operations team at a mid-sized regional financial services firm — roughly 3,800 employees, $2.1B in annual revenue, operating across seven states — faced a problem that thousands of security teams know intimately: they were extraordinarily busy, and yet they had no real idea whether they were secure.
Their six-person SOC was processing over 4,200 alerts per week. Analysts were working through queues, closing tickets, and meeting SLA targets.
The Diagnosis: Understanding What Was Actually Broken
Before designing any solution, the new CISO spent the first six weeks conducting a structured assessment of the existing program. The findings were documented across four dimensions: people, process, technology, and intelligence. What emerged was a clear picture of a team that had been optimized for operational throughput rather than security outcomes.
Before the transformation:

- 🔴 Alert queues managed as tickets — first in, first out regardless of severity context
- 🔴 Detection rules unchanged for 14+ months; many built for threats no longer relevant
- 🔴 No documented adversary profiles for threats targeting the financial services sector
- 🔴 Two unweighted commercial threat feeds pushing raw IOCs directly to the SIEM
- 🔴 Intelligence consumed passively — no hunting, no proactive validation
- 🔴 Analysts measured on ticket closure rate, not detection quality
- 🔴 No post-incident review process feeding back into detection improvement

After the transformation:

- 🟢 Intelligence-led triage — alerts contextualized against active threat actor TTPs
- 🟢 Detection engineering driven by PIRs and MITRE ATT&CK coverage mapping
- 🟢 Documented profiles for the top 8 adversary groups targeting regional financials
- 🟢 TIP with AI relevance scoring filtering enriched IOCs before SIEM ingestion
- 🟢 Structured biweekly threat hunting sprints against priority hypotheses
- 🟢 Analysts measured on detection quality, hunting findings, and rule improvements
- 🟢 Mandatory post-incident review with detection gap closure tracking
The most revealing finding from the assessment was what the CISO called the “coverage illusion.” The team believed they had broad detection coverage because they had a large number of SIEM rules — over 340 active correlation rules at the time. But when those rules were mapped to the MITRE ATT&CK framework, significant gaps emerged: the team had strong coverage for commodity malware and known-bad IOCs, but almost no detection logic for the lateral movement, credential access, and discovery techniques favored by the financially motivated threat actors most likely to target them.
By every operational metric they were measuring, the team appeared to be performing. But when the new CISO, who joined in Q1 2025, asked a simple question — “What are we actually looking for, and why?” — the honest answer was unsettling.
They were looking for whatever their tools flagged. They had no documented intelligence requirements. No structured threat actor profiles relevant to their industry. No threat hunting program. No feedback loop between what they detected and how their detection rules evolved. They were, in the CISO’s words, “running on a treadmill — lots of motion, very little forward progress.”
What followed was one of the most rigorously documented SOC transformations we have seen shared with The Security Bench. This case study tells that story in full: the diagnosis, the program design decisions, the tooling choices, the failures along the way, and — ultimately — the measurable outcomes that justified every hour invested.
Of the organization’s 340+ SIEM correlation rules, fewer than 12% addressed techniques in the MITRE ATT&CK tactics most commonly used by eCrime and ransomware groups targeting the financial sector. The team had hundreds of rules — but was blind to the most relevant attack patterns against their industry.
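The arithmetic behind that coverage figure is worth making concrete. The sketch below shows one way to compute priority-technique coverage from a rule-to-technique mapping; the technique IDs and rule names are hypothetical placeholders, not the team's actual Splunk rule set.

```python
# Sketch: measure what fraction of priority ATT&CK techniques any SIEM rule covers.
# Technique IDs and rule mappings below are illustrative, not the team's real data.

PRIORITY_TECHNIQUES = {
    "T1021.001",  # Remote Services: Remote Desktop Protocol (lateral movement)
    "T1003.001",  # OS Credential Dumping: LSASS Memory (credential access)
    "T1087",      # Account Discovery (discovery)
    "T1486",      # Data Encrypted for Impact (ransomware)
    "T1566.001",  # Phishing: Spearphishing Attachment (initial access)
}

# rule name -> ATT&CK technique IDs the rule's logic actually detects
rule_mappings = {
    "phish_attachment_macro": {"T1566.001"},
    "known_bad_ioc_match":    set(),   # pure IOC match: no technique coverage
    "commodity_malware_hash": set(),
}

covered = set().union(*rule_mappings.values()) & PRIORITY_TECHNIQUES
coverage_pct = 100 * len(covered) / len(PRIORITY_TECHNIQUES)
print(f"Priority-technique coverage: {coverage_pct:.0f}%")
```

Counting rules rewards volume; intersecting rule mappings with a priority technique set is what surfaces the coverage illusion.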
Phase 1: Building the Intelligence Foundation with Priority Intelligence Requirements
With the assessment complete, the team’s first operational decision was the most important one they would make: before touching a single tool or writing a single detection rule, they would define what intelligence they actually needed.
This meant developing Priority Intelligence Requirements (PIRs) — a structured set of intelligence questions that would drive everything from threat feed selection to hunting hypothesis development to detection engineering priorities. PIRs were defined through a series of workshops involving the CISO, the SOC lead, the risk management team, and representatives from the business units most exposed to cyber risk.
Their Finalized PIR Set
1. Which financially motivated threat actor groups are actively targeting regional financial institutions in our geography, and what are their current TTPs?
2. What initial access vectors are most commonly used against organizations of our size and sector, and are we detecting them?
3. Are there active phishing campaigns, malware families, or exploit kits currently targeting the financial services sector that we should build detections for?
4. Which of our third-party vendors and supply chain partners have experienced recent compromises that could expose us to downstream risk?
5. Are there active credentials, data samples, or infrastructure belonging to our organization circulating on underground markets or dark web forums?
6. What CVEs are being actively exploited in the wild against technology we operate, and in what timeframe do we need to prioritize patching?
These six PIRs became the north star of the entire intelligence program. Every feed selection, every hunting hypothesis, every detection engineering priority, and every weekly intelligence brief was evaluated against one question: does this answer one of our PIRs?
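In practice, that gate can be as simple as tagging each incoming intelligence item with the PIRs it plausibly answers. The sketch below uses naive keyword matching as a stand-in; the PIR keywords and the sample item are illustrative assumptions, and a production implementation would lean on the TIP's own tagging instead.

```python
# Sketch: the "does this answer a PIR?" gate applied to incoming intel items.
# PIR keyword lists and the sample item are illustrative placeholders.

PIR_KEYWORDS = {
    "PIR-1": ["threat actor", "ttp", "ecrime", "financial"],
    "PIR-3": ["phishing", "malware family", "exploit kit"],
    "PIR-6": ["cve", "exploited in the wild", "patch"],
}

def matched_pirs(item_text: str) -> list[str]:
    """Return the PIR ids whose keywords appear in an intel item's text."""
    text = item_text.lower()
    return [pir for pir, words in PIR_KEYWORDS.items()
            if any(w in text for w in words)]

item = "New phishing campaign delivering a banking malware family to US regional banks"
pirs = matched_pirs(item)
print(pirs or "discard: answers no PIR")
```

Items that match no PIR are discarded before they consume analyst attention; that is the whole point of the gate.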
Phase 2: Tooling Decisions and the Intelligence Stack
With PIRs defined, the team turned to tooling. They had a limited budget — the total additional technology investment approved for the transformation was $280,000 annually — and needed to make every dollar count. Their existing stack included a Splunk SIEM, CrowdStrike Falcon EDR, and Palo Alto Networks next-gen firewalls. The gaps were clear: no TIP, no structured threat hunting workflow, and no dark web monitoring.
Tool Selection Rationale
One deliberate decision worth noting: the team chose not to replace their existing SIEM or EDR. The CISO’s view was that the problem was not the tools — it was the intelligence layer sitting above them. Adding a TIP and improving the quality of what fed the SIEM delivered more value than a platform migration would have at double the cost.
The team’s tooling philosophy was deliberately conservative: add the intelligence layer before changing the detection layer. Better intelligence feeding existing tools consistently outperforms better tools fed by poor intelligence. They validated this assumption during a 60-day TIP trial before committing to full procurement.
Phase 3: Program Design and the Three Operational Pillars
With the intelligence foundation and tooling in place, the team restructured their operational model around three pillars — each directly supporting their PIRs and measured by specific outcome metrics rather than activity volume.
Pillar 1: Intelligence-Led Alert Triage
The first operational change was to the alert triage process itself. Under the old model, alerts were triaged by severity level and timestamp — a first-in-first-out queue. Under the new model, every alert entering the queue was automatically enriched by the TIP before an analyst touched it. Alerts involving IOCs or behavioral patterns associated with high-priority threat actors — those mapped to their PIRs — were automatically elevated regardless of the SIEM’s raw severity score.
This single process change reduced analyst time spent on low-context alerts by 34% in the first 60 days. Analysts were no longer spending equal time on a port scan from an irrelevant scanner and a command-and-control beacon from a known eCrime group’s infrastructure.
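A minimal sketch of that elevation logic, with a hypothetical actor IOC set standing in for a live TIP lookup (the IP uses a reserved documentation range):

```python
# Sketch: TIP-style enrichment elevating alerts tied to priority-actor infrastructure.
# The actor IOC set and sample alerts are hypothetical; a real deployment would
# query the TIP instead of a local dict.

PRIORITY_ACTOR_IOCS = {"203.0.113.7": "eCrime-Group-A"}  # indicator -> actor

def triage_priority(alert: dict) -> tuple[int, str]:
    """Return (priority, reason); lower priority numbers are worked first."""
    actor = PRIORITY_ACTOR_IOCS.get(alert.get("dest_ip", ""))
    if actor:  # PIR-mapped actor infrastructure trumps the raw SIEM severity
        return (0, f"linked to {actor}")
    return (alert["siem_severity"], "raw SIEM severity")

alerts = [
    {"name": "port scan", "siem_severity": 2, "dest_ip": "10.0.0.5"},
    {"name": "c2 beacon", "siem_severity": 3, "dest_ip": "203.0.113.7"},
]
queue = sorted(alerts, key=lambda a: triage_priority(a)[0])
print([a["name"] for a in queue])  # the beacon jumps ahead of its raw severity
```

The design choice worth noting is that enrichment happens before ordering: the queue is sorted on intelligence-adjusted priority, not on whatever severity the SIEM originally assigned.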
Pillar 2: Structured Threat Hunting Sprints
The team instituted biweekly two-day threat hunting sprints, each driven by a formally documented hunting hypothesis derived from their PIRs and current threat intelligence. A senior analyst was rotated as “hunt lead” for each sprint, responsible for developing the hypothesis, executing the hunt using Splunk and CrowdStrike telemetry, documenting findings, and presenting outcomes at the sprint close-out session.
Critically, every hunt — whether it found something or not — was treated as a detection engineering input. Hunts that found evidence of malicious activity generated new detection rules. Hunts that found nothing generated documentation that the relevant technique was not currently present in the environment, providing a baseline for future comparison.
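One way to enforce that discipline is to make the hunt close-out a structured record whose outcome dictates the next action. The schema below is an illustrative assumption, not the team's actual hunt documentation format.

```python
# Sketch: every hunt closes with a structured outcome that feeds detection
# engineering. Field names here are illustrative, not the team's real schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HuntOutcome:
    hypothesis: str
    technique: str                  # ATT&CK technique the hunt targeted
    found_activity: bool
    closed: date = field(default_factory=date.today)

    def next_action(self) -> str:
        # Positive findings drive new rules; clean hunts still record a baseline.
        if self.found_activity:
            return "open detection-engineering ticket for new rule"
        return "record negative baseline for future comparison"

hunt = HuntOutcome("RDP lateral movement between branch subnets", "T1021.001", False)
print(hunt.next_action())
```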
Pillar 3: Detection Engineering Driven by ATT&CK Coverage
The third pillar addressed the coverage illusion directly. The team committed to a quarterly detection engineering review cycle using Vectr to map their current Splunk rule coverage against the MITRE ATT&CK techniques most relevant to their priority threat actors. Each quarter, the three most significant coverage gaps were assigned as detection engineering projects, with Atomic Red Team used to validate new rules before promotion to production.
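The quarterly gap-selection step can be sketched as a simple ranking: techniques used by priority actors but not yet covered by any validated rule, ordered by how heavily those actors rely on them. The prevalence weights below are illustrative assumptions.

```python
# Sketch: quarterly gap selection. Rank priority-actor techniques with no
# validated rule by prevalence and take the top three as projects.
# The prevalence weights are illustrative assumptions.

technique_weight = {   # ATT&CK technique -> prevalence among priority actors
    "T1021.001": 7,    # Remote Services: RDP
    "T1003.001": 9,    # OS Credential Dumping: LSASS Memory
    "T1087": 4,        # Account Discovery
    "T1486": 8,        # Data Encrypted for Impact
    "T1071.001": 6,    # Application Layer Protocol: Web Protocols
}
covered = {"T1486"}    # techniques an existing validated rule already detects

gaps = sorted((t for t in technique_weight if t not in covered),
              key=technique_weight.get, reverse=True)
quarter_projects = gaps[:3]   # next quarter's detection engineering projects
print(quarter_projects)
```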
In the first four quarters of the program, the team built 47 net-new detection rules — each mapped to a specific ATT&CK technique and validated against simulated adversary behavior before deployment.
The 12-Month Transformation Timeline
Measurable Outcomes: The Numbers at 12 Months
At the 12-month mark, the organization conducted a formal program review comparing current performance against baseline metrics established at the start of the transformation. The results validated the program design comprehensively — and, in several areas, exceeded the targets the CISO had set at program launch.
| Metric | Baseline (Q1 2025) | 12 Months (Q1 2026) | Change |
|---|---|---|---|
| Average Adversary Dwell Time | 18.4 days | 11.0 days | ↓ 40% |
| SIEM False Positive Rate | ~71% | ~24% | ↓ 47 pts |
| Mean Time to Detect (MTTD) | 41 hours | 12.8 hours | ↓ 69% (3.2×) |
| Weekly Alert Volume (post-filter) | 4,200+ alerts | ~940 alerts | ↓ 78% |
| ATT&CK Technique Coverage (Priority Actors) | 12% coverage | 61% coverage | ↑ 49 pts |
| Net-New Detection Rules (ATT&CK-mapped) | 0 (12 months prior) | 47 validated rules | +47 rules |
| Threat Hunt Findings (Actionable) | No formal program | 8 confirmed findings | New capability |
| Analyst Triage Time per TIP-confirmed IOC | ~45 minutes | <4 minutes | ↓ 91% |
Perhaps the most significant result was one that doesn’t appear in the metrics table: analyst retention. In the 12 months prior to the transformation, the team had lost two of six analysts to attrition — both citing burnout from alert queue work as a primary factor. In the 12 months following the program launch, the team retained all analysts and made one additional hire to support the expanded hunting function. Peers at comparable organizations consistently cite meaningful, strategic work as a primary retention driver for experienced security professionals.
Lessons Learned: What the Team Would Do Differently
No transformation of this scope unfolds without friction. The SOC lead shared six candid lessons from the program — including three things they would approach differently if starting over.
How to Replicate This: A Practical Starting Framework
This transformation was accomplished by a six-person team with a modest additional budget. The principles are fully transferable to organizations of comparable or larger scale. Here is the sequence that the SOC lead recommends for teams starting their own intelligence-led transformation.
Conclusion: Intelligence Is a Program, Not a Product
The transformation documented in this case study was not primarily a technology story. The organization didn’t buy their way out of reactive operations. They built their way out — through disciplined program design, clear intelligence requirements, structured operational processes, and a consistent commitment to measuring outcomes over activities.
The 40% reduction in dwell time is a compelling headline. But the more significant outcome — the one that will compound in value for years — is that this team now operates with a fundamentally different orientation. They are no longer running to keep pace with a queue. They are hunting. They are asking “what are the threats most likely to target us, right now?” and systematically answering that question every two weeks.
That shift — from reactive to proactive, from tool-driven to intelligence-driven — is replicable. It doesn’t require an unlimited budget or a team of twenty analysts. It requires clarity of purpose, disciplined process design, and the organizational will to measure what actually matters.
The most important investment this organization made was not in software — it was in thinking clearly about what intelligence they needed and why. Priority Intelligence Requirements cost nothing to develop and they changed everything. If your security program lacks documented PIRs, that is the most valuable gap to close before any technology procurement decision is made.