From Reactive to Proactive: How One SOC Built a Threat-Intel-Driven Defense Program


In early 2025, the security operations team at a mid-sized regional financial services firm — roughly 3,800 employees, $2.1B in annual revenue, operating across seven states — faced a problem that thousands of security teams know intimately: they were extraordinarily busy, and yet they had no real idea whether they were secure.

Their six-person SOC was processing over 4,200 alerts per week. Analysts were working through queues, closing tickets, and meeting SLA targets. By every operational metric they were measuring, the team appeared to be performing well. But when a new CISO joined in Q1 2025 and asked a simple question — “What are we actually looking for, and why?” — the honest answer was unsettling.

They were looking for whatever their tools flagged. They had no documented intelligence requirements. No structured threat actor profiles relevant to their industry. No threat hunting program. No feedback loop between what they detected and how their detection rules evolved. They were, in the CISO’s words, “running on a treadmill — lots of motion, very little forward progress.”

What followed was one of the most rigorously documented SOC transformations we have seen shared with The Security Bench. This case study tells that story in full: the diagnosis, the program design decisions, the tooling choices, the failures along the way, and — ultimately — the measurable outcomes that justified every hour invested.

The Diagnosis: Understanding What Was Actually Broken

Before designing any solution, the new CISO spent the first six weeks conducting a structured assessment of the existing program. The findings were documented across four dimensions: people, process, technology, and intelligence. What emerged was a clear picture of a team that had been optimized for operational throughput rather than security outcomes.

⚠ The Reactive State
  • 🔴 Alert queues managed as tickets — first in, first out regardless of severity context
  • 🔴 Detection rules unchanged for 14+ months; many built for threats no longer relevant
  • 🔴 No documented adversary profiles for threats targeting financial services sector
  • 🔴 Two unweighted commercial threat feeds pushing raw IOCs directly to SIEM
  • 🔴 Intelligence consumed passively — no hunting, no proactive validation
  • 🔴 Analysts measured on ticket closure rate, not detection quality
  • 🔴 No post-incident review process feeding back into detection improvement
✅ The Target State
  • 🟢 Intelligence-led triage — alerts contextualized against active threat actor TTPs
  • 🟢 Detection engineering driven by PIRs and MITRE ATT&CK coverage mapping
  • 🟢 Documented profiles for top 8 adversary groups targeting regional financials
  • 🟢 TIP with AI relevance scoring filtering enriched IOCs before SIEM ingestion
  • 🟢 Structured biweekly threat hunting sprints against priority hypotheses
  • 🟢 Analysts measured on detection quality, hunting findings, and rule improvements
  • 🟢 Mandatory post-incident review with detection gap closure tracking

The most revealing finding from the assessment was what the CISO called the “coverage illusion.” The team believed they had broad detection coverage because they had a large number of SIEM rules — over 340 active correlation rules at the time. But when those rules were mapped to the MITRE ATT&CK framework, significant gaps emerged: the team had strong coverage for commodity malware and known-bad IOCs, but almost no detection logic for the lateral movement, credential access, and discovery techniques favored by the financially motivated threat actors most likely to target them.

🔍 Critical Finding

Of the organization’s 340+ SIEM correlation rules, fewer than 12% addressed techniques in the MITRE ATT&CK tactics most commonly used by eCrime and ransomware groups targeting the financial sector. The team had hundreds of rules — but was blind to the most relevant attack patterns against their industry.
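
For teams that want to run the same check, the arithmetic is simple once each detection rule carries ATT&CK technique tags. The Python sketch below is a minimal illustration of that mapping exercise; the rule names, technique IDs, and priority-tactic groupings are hypothetical examples, not the organization's actual rule set.

```python
# Minimal sketch: estimate ATT&CK coverage for the tactics that matter most.
# All rule names and technique mappings below are hypothetical examples.

# Techniques favored by the priority (eCrime/ransomware) actors, grouped by tactic.
PRIORITY_TECHNIQUES = {
    "credential-access": {"T1110", "T1003", "T1555"},
    "lateral-movement": {"T1021", "T1570"},
    "discovery": {"T1087", "T1018", "T1082"},
}

# Each SIEM rule tagged with the ATT&CK techniques it is meant to detect.
RULE_MAPPINGS = {
    "brute_force_vpn_logins": {"T1110"},
    "commodity_malware_hash_match": {"T1204"},   # not a priority technique
    "smb_admin_share_lateral": {"T1021"},
}

def coverage_by_tactic(rules: dict[str, set[str]], priorities: dict[str, set[str]]) -> dict[str, float]:
    """Return the fraction of priority techniques covered by at least one rule, per tactic."""
    covered = set().union(*rules.values())
    return {
        tactic: len(techs & covered) / len(techs)
        for tactic, techs in priorities.items()
    }

if __name__ == "__main__":
    for tactic, pct in coverage_by_tactic(RULE_MAPPINGS, PRIORITY_TECHNIQUES).items():
        print(f"{tactic}: {pct:.0%} of priority techniques covered")
```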

 

Phase 1 · Building the Intelligence Foundation: Priority Intelligence Requirements

With the assessment complete, the team’s first operational decision was the most important one they would make: before touching a single tool or writing a single detection rule, they would define what intelligence they actually needed.

This meant developing Priority Intelligence Requirements (PIRs) — a structured set of intelligence questions that would drive everything from threat feed selection to hunting hypothesis development to detection engineering priorities. PIRs were defined through a series of workshops involving the CISO, the SOC lead, the risk management team, and representatives from the business units most exposed to cyber risk.

 

Their Finalized PIR Set

📋 Priority Intelligence Requirements — Approved Q1 2025
  • 1) Which financially motivated threat actor groups are actively targeting regional financial institutions in our geography, and what are their current TTPs?
  • 2) What initial access vectors are most commonly used against organizations of our size and sector, and are we detecting them?
  • 3) Are there active phishing campaigns, malware families, or exploit kits currently targeting the financial services sector that we should build detections for?
  • 4) Which of our third-party vendors and supply chain partners have experienced recent compromises that could expose us to downstream risk?
  • 5) Are there active credentials, data samples, or infrastructure belonging to our organization circulating on underground markets or dark web forums?
  • 6) What CVEs are being actively exploited in the wild against technology we operate, and in what timeframe do we need to prioritize patching?

These six PIRs became the north star of the entire intelligence program. Every feed selection, every hunting hypothesis, every detection engineering priority, and every weekly intelligence brief was evaluated against one question: does this answer one of our PIRs?
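
PIRs are easier to operationalize when they are tracked as structured records rather than prose in a slide deck, because every intel item, hunt, and detection can then reference the question it answers. The sketch below shows one possible representation in Python; the field names and the example record are assumptions for illustration, not the team's actual schema (the question text is PIR 1 from the list above).

```python
# Sketch of PIRs as structured records so intel, hunts, and detections can reference them.
# Field names and the example entry are illustrative assumptions, not the team's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PIR:
    pir_id: str                     # e.g. "PIR-1"
    question: str                   # the approved intelligence question
    owner: str                      # who answers for it (SOC lead, CTI analyst, ...)
    review_date: date               # PIRs are living documents, reviewed quarterly
    linked_attack_tactics: list[str] = field(default_factory=list)

PIRS = [
    PIR(
        pir_id="PIR-1",
        question=("Which financially motivated threat actor groups are actively targeting "
                  "regional financial institutions in our geography, and what are their current TTPs?"),
        owner="CTI analyst",
        review_date=date(2025, 4, 1),     # hypothetical review date
        linked_attack_tactics=["initial-access", "credential-access", "lateral-movement"],
    ),
]

def pirs_answered_by(tags: set[str]) -> list[str]:
    """Return the PIR IDs a new intel item (tagged with ATT&CK tactics) helps answer."""
    return [p.pir_id for p in PIRS if tags & set(p.linked_attack_tactics)]

print(pirs_answered_by({"credential-access"}))   # -> ['PIR-1']
```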

 

Phase 2 · Tooling Decisions: Building the Intelligence Stack

With PIRs defined, the team turned to tooling. They had a limited budget — the total additional technology investment approved for the transformation was $280,000 annually — and needed to make every dollar count. Their existing stack included a Splunk SIEM, CrowdStrike Falcon EDR, and Palo Alto Networks next-gen firewalls. The gaps were clear: no TIP, no structured threat hunting workflow, and no dark web monitoring.

 

Tool Selection Rationale

🧠 Anomali ThreatStream (TIP)
Threat Intelligence Platform
Selected for deep Splunk integration and strong STIX/TAXII support. AI relevance scoring enabled filtering of IOCs by industry and geography before SIEM ingestion — reducing raw feed volume by 78% while improving actionability.
🔭 Recorded Future (Intel Feed)
Primary Commercial Intel Source
Added as the primary commercial intelligence source, providing financial sector-specific threat actor profiles, dark web monitoring for brand and credential exposure, and vulnerability intelligence tied to active exploitation activity.
🎯 Vectr (Purple Team Platform)
ATT&CK Coverage Tracking
Used to track detection coverage across the MITRE ATT&CK matrix and plan threat hunting exercises. Enabled the team to visualize coverage gaps, prioritize detection engineering work, and measure improvement over time.
⚗️ Atomic Red Team (Detection Validation)
Open-Source Testing Framework
Free open-source framework used to execute individual ATT&CK technique simulations and validate whether existing Splunk detections fired correctly. Critical for closing the coverage illusion gap identified in the assessment.
🌐 FS-ISAC Membership (Intel Sharing)
Sector-Specific Intel Community
Financial Services ISAC membership formalized for the first time. Provided sector-specific threat intelligence, early warning on campaigns targeting regional banks, and reciprocal sharing relationships with peer institutions.
📊 Splunk SOAR (Playbook Automation)
Orchestration and Automation
Existing Splunk investment extended with SOAR playbooks. TIP-enriched high-confidence IOCs now automatically trigger investigation playbooks — reducing analyst triage time for confirmed malicious indicators from 45 minutes to under 4 minutes.
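
The common thread across the TIP and SOAR items above is filtering and enriching indicators before they ever reach the SIEM. The snippet below sketches that idea generically in Python; it does not use Anomali's or Splunk's actual APIs, and the confidence threshold, field names, and forwarding stub are assumptions chosen for illustration.

```python
# Generic sketch of pre-SIEM IOC filtering: keep only indicators that are relevant
# to the organization's sector, geography, and technology stack.
# Thresholds, field names, and the forwarding stub are illustrative assumptions;
# a real deployment would use the TIP's and SIEM's own integration APIs.

ORG_CONTEXT = {"sector": "financial-services", "region": "us-regional"}
MIN_CONFIDENCE = 70        # vendor confidence threshold (assumed 0-100 scale)

def is_relevant(ioc: dict) -> bool:
    """Drop low-confidence or off-profile indicators before they ever hit the SIEM."""
    if ioc.get("confidence", 0) < MIN_CONFIDENCE:
        return False
    targeted_sectors = set(ioc.get("targeted_sectors", []))
    # Keep sector-agnostic indicators, or those explicitly tied to our sector.
    return not targeted_sectors or ORG_CONTEXT["sector"] in targeted_sectors

def forward_to_siem(ioc: dict) -> None:
    print(f"ingest: {ioc['value']} ({ioc['type']})")   # placeholder for a real SIEM ingest call

raw_feed = [
    {"value": "198.51.100.7", "type": "ip", "confidence": 90, "targeted_sectors": ["financial-services"]},
    {"value": "bad.example.net", "type": "domain", "confidence": 35, "targeted_sectors": []},
]

kept = [ioc for ioc in raw_feed if is_relevant(ioc)]
for ioc in kept:
    forward_to_siem(ioc)
print(f"kept {len(kept)}/{len(raw_feed)} indicators")   # the team saw roughly a 78% drop at this stage
```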

 

One deliberate decision worth noting: the team chose not to replace their existing SIEM or EDR. The CISO’s view was that the problem was not the tools — it was the intelligence layer sitting above them. Adding a TIP and improving the quality of what fed the SIEM delivered more value than a platform migration would have at double the cost.

 

💡 Decision Principle

The team’s tooling philosophy was deliberately conservative: add the intelligence layer before changing the detection layer. Better intelligence feeding existing tools consistently outperforms better tools fed by poor intelligence. They validated this assumption during a 60-day TIP trial before committing to full procurement.

 

Phase 3 · Program Design: The Three Operational Pillars

With the intelligence foundation and tooling in place, the team restructured their operational model around three pillars — each directly supporting their PIRs and measured by specific outcome metrics rather than activity volume.

Pillar 1: Intelligence-Led Alert Triage

The first operational change was to the alert triage process itself. Under the old model, alerts were triaged by severity level and timestamp — a first-in-first-out queue. Under the new model, every alert entering the queue was automatically enriched by the TIP before an analyst touched it. Alerts involving IOCs or behavioral patterns associated with high-priority threat actors — those mapped to their PIRs — were automatically elevated regardless of the SIEM’s raw severity score.

This single process change reduced analyst time spent on low-context alerts by 34% in the first 60 days. Analysts were no longer spending equal time on a port scan from an irrelevant scanner and a command-and-control beacon from a known eCrime group’s infrastructure.
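
As a rough illustration of that triage change, the sketch below promotes any alert whose indicator or technique matches a priority actor profile ahead of everything else, regardless of raw SIEM severity. The profile contents, alert fields, and scoring are invented for the example; in the real program this enrichment happened in the TIP before an analyst saw the alert.

```python
# Sketch of intelligence-led triage: PIR-mapped actor context outranks raw severity.
# Actor profiles, alert fields, and the scoring scheme are illustrative assumptions.

PRIORITY_ACTOR_PROFILES = {
    "ecrime-group-a": {                      # hypothetical actor profile
        "infrastructure": {"198.51.100.7"},
        "techniques": {"T1110", "T1021"},
    },
}

def triage_priority(alert: dict) -> tuple[int, int]:
    """Sort key: alerts tied to priority actors first, then by raw SIEM severity."""
    matched = any(
        alert.get("indicator") in profile["infrastructure"]
        or alert.get("technique") in profile["techniques"]
        for profile in PRIORITY_ACTOR_PROFILES.values()
    )
    return (0 if matched else 1, -alert.get("severity", 0))

queue = [
    {"id": "A-1", "severity": 9, "indicator": "203.0.113.10", "technique": "T1595"},  # port scan
    {"id": "A-2", "severity": 4, "indicator": "198.51.100.7", "technique": "T1071"},  # known C2 infrastructure
]

for alert in sorted(queue, key=triage_priority):
    print(alert["id"])   # A-2 (actor-linked) is worked before A-1 despite lower raw severity
```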

Pillar 2: Structured Threat Hunting Sprints

The team instituted biweekly two-day threat hunting sprints, each driven by a formally documented hunting hypothesis derived from their PIRs and current threat intelligence. A senior analyst was rotated as “hunt lead” for each sprint, responsible for developing the hypothesis, executing the hunt using Splunk and CrowdStrike telemetry, documenting findings, and presenting outcomes at the sprint close-out session.

Critically, every hunt — whether it found something or not — was treated as a detection engineering input. Hunts that found evidence of malicious activity generated new detection rules. Hunts that found nothing generated documentation that the relevant technique was not currently present in the environment, providing a baseline for future comparison.
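
One way to enforce that rule is to make the hunt write-up itself structured, so a sprint cannot close without recording either new detections or a documented absence baseline. The sketch below illustrates what such a record might capture; the fields and the example rule names are hypothetical, though the hypothesis and techniques mirror the Month 6 hunt described later in the timeline.

```python
# Sketch of a structured hunt record: every sprint ends in either new detections
# or a documented "technique not observed" baseline. Fields and rule names are illustrative.
from dataclasses import dataclass, field

@dataclass
class HuntRecord:
    sprint: str                       # e.g. "2025-06 sprint 1"
    hypothesis: str                   # derived from a PIR and current intelligence
    pir_ids: list[str]
    attack_techniques: list[str]
    malicious_activity_found: bool
    new_detection_rules: list[str] = field(default_factory=list)
    baseline_notes: str = ""

    def close_out(self) -> None:
        """A hunt may not close without a detection or a baseline statement."""
        if self.malicious_activity_found and not self.new_detection_rules:
            raise ValueError("finding confirmed but no detection rule was created")
        if not self.malicious_activity_found and not self.baseline_notes:
            raise ValueError("no finding, but no baseline documentation either")

hunt = HuntRecord(
    sprint="2025-06 sprint 1",
    hypothesis="Credential-based access attempts against VPN infrastructure go undetected",
    pir_ids=["PIR-2"],
    attack_techniques=["T1078", "T1110"],
    malicious_activity_found=True,
    new_detection_rules=["vpn_password_spray_burst", "vpn_auth_from_new_asn", "vpn_lockout_spike"],
)
hunt.close_out()   # raises if the sprint tries to close without an output
```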

Pillar 3: Detection Engineering Driven by ATT&CK Coverage

The third pillar addressed the coverage illusion directly. The team committed to a quarterly detection engineering review cycle using Vectr to map their current Splunk rule coverage against the MITRE ATT&CK techniques most relevant to their priority threat actors. Each quarter, the three most significant coverage gaps were assigned as detection engineering projects, with Atomic Red Team used to validate new rules before promotion to production.

In the first four quarters of the program, the team built 47 net-new detection rules — each mapped to a specific ATT&CK technique and validated against simulated adversary behavior before deployment.
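
The selection step in each quarterly review reduces to ranking uncovered priority techniques and taking the top of the list. The sketch below shows that logic in plain Python; the prevalence weights and technique IDs are invented for the example, and in practice the team drove this from Vectr's coverage data rather than a hand-maintained dictionary.

```python
# Sketch of the quarterly gap-selection step: rank uncovered priority techniques
# and pick the top three as this quarter's detection engineering projects.
# Weights and technique IDs are illustrative assumptions, not Vectr data.

# How often each priority technique appears in intel on the actors we care about.
technique_prevalence = {
    "T1003.001": 0.9,   # LSASS credential dumping
    "T1021.001": 0.8,   # RDP lateral movement
    "T1018":     0.5,   # remote system discovery
    "T1566.001": 0.7,   # spearphishing attachment
}

# Techniques we already detect (validated with Atomic Red Team, not just assumed).
validated_coverage = {"T1566.001"}

gaps = {t: w for t, w in technique_prevalence.items() if t not in validated_coverage}
this_quarter = sorted(gaps, key=gaps.get, reverse=True)[:3]

print("Detection engineering projects this quarter:", this_quarter)
# -> ['T1003.001', 'T1021.001', 'T1018']
```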

 

The 12-Month Transformation Timeline

Month 1–2  ·  Jan–Feb 2025
Assessment and PIR Development
Six-week SOC assessment completed. MITRE ATT&CK coverage mapping reveals critical gaps. PIRs drafted, workshopped with risk management and business units, and formally approved by CISO. Baseline metrics established across all key performance indicators.
Month 3  ·  Mar 2025
TIP Proof of Concept and Vendor Selection
60-day Anomali ThreatStream POC launched using real organizational data. Recorded Future evaluated as primary commercial feed. POC demonstrates 78% reduction in raw IOC volume reaching SIEM with improved actionability. Budget approved. Contracts signed.
Month 4–5  ·  Apr–May 2025
TIP Deployment and SIEM Integration
TIP deployed and integrated with Splunk. AI relevance scoring configured with industry (financial services), geographic (regional US), and technology stack parameters. FS-ISAC feeds onboarded. Initial threat actor profiles built for top 8 adversary groups. First SOAR playbooks deployed for TIP-confirmed IOCs.
Month 6  ·  Jun 2025
First Threat Hunt — and First Real Finding
Program’s inaugural threat hunting sprint targets credential-based initial access techniques (ATT&CK T1078, T1110). Hunt surfaces evidence of password spraying activity against VPN infrastructure that had not triggered any SIEM alerts. Investigation confirms external threat actor probing. Incident contained. Detection gap closes with three new SIEM rules. Program credibility established. A sketch of this detection pattern appears after the timeline.
Month 7–9  ·  Jul–Sep 2025
Operational Cadence and Detection Engineering Acceleration
Biweekly hunting sprint cadence fully established. First quarterly detection engineering review completed — 14 new rules deployed from top coverage gap analysis. False positive rate drops below 40% for the first time. Analyst satisfaction scores improve significantly as alert noise decreases and hunting work provides more meaningful engagement.
Month 10–12  ·  Oct–Dec 2025
Maturity and Measurable Outcomes
Full program maturity reached. Second and third quarterly detection engineering cycles completed. Total of 47 net-new ATT&CK-mapped detections deployed since program launch. Year-end metrics confirm 40% reduction in average dwell time (18.4 → 11.0 days), 67% reduction in false positive rate, and 3.2× improvement in mean time to detect confirmed incidents.
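
The Month 6 finding referenced above turned on a classic detection pattern: password spraying looks like a single source failing authentication against many distinct accounts in a short window. The sketch below illustrates that logic in Python; the thresholds, field names, and sample events are assumptions, and the team's production detections were implemented as Splunk correlation rules rather than standalone scripts.

```python
# Sketch of the detection pattern behind the Month 6 finding: password spraying
# shows up as one source trying passwords against many different accounts.
# Thresholds, field names, and the sample events are illustrative assumptions.
from collections import defaultdict

MIN_DISTINCT_ACCOUNTS = 20    # tune against your VPN's normal failed-login baseline

def spraying_sources(failed_logins: list[dict]) -> dict[str, int]:
    """Return source IPs that failed against an unusually high number of distinct accounts.

    Events are assumed to be pre-filtered to a single time window (e.g. one hour).
    """
    accounts_per_source = defaultdict(set)
    for event in failed_logins:
        accounts_per_source[event["src_ip"]].add(event["account"])
    return {
        src: len(accounts)
        for src, accounts in accounts_per_source.items()
        if len(accounts) >= MIN_DISTINCT_ACCOUNTS
    }

# Tiny synthetic example: one IP probing many accounts, one user mistyping a password.
events = [{"src_ip": "203.0.113.50", "account": f"user{i}"} for i in range(25)]
events += [{"src_ip": "192.0.2.14", "account": "jsmith"}] * 5
print(spraying_sources(events))   # -> {'203.0.113.50': 25}
```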

Measurable Outcomes: The Numbers at 12 Months

At the 12-month mark, the organization conducted a formal program review comparing current performance against baseline metrics established at the start of the transformation. The results validated the program design comprehensively — and, in several areas, exceeded the targets the CISO had set at program launch.

Metric  ·  Baseline (Q1 2025)  ·  12 Months (Q1 2026)  ·  Change
Average Adversary Dwell Time  ·  18.4 days  ·  11.0 days  ·  ↓ 40%
SIEM False Positive Rate  ·  ~71%  ·  ~24%  ·  ↓ 67%
Mean Time to Detect (MTTD)  ·  41 hours  ·  12.8 hours  ·  ↓ 68% (3.2×)
Weekly Alert Volume (post-filter)  ·  4,200+ alerts  ·  ~940 alerts  ·  ↓ 78%
ATT&CK Technique Coverage (Priority Actors)  ·  12% coverage  ·  61% coverage  ·  ↑ 49 pts
Net-New Detection Rules (ATT&CK-mapped)  ·  0 (12 months prior)  ·  47 validated rules  ·  +47 rules
Threat Hunt Findings (Actionable)  ·  No formal program  ·  8 confirmed findings  ·  New capability
Analyst Triage Time per TIP-confirmed IOC  ·  ~45 minutes  ·  <4 minutes  ·  ↓ 91%

 

Perhaps the most significant result was one that doesn’t appear in the metrics table: analyst retention. In the 12 months prior to the transformation, the team had lost two of six analysts to attrition — both citing burnout from alert queue work as a primary factor. In the 12 months following the program launch, the team retained all of its analysts and made one additional hire to support the expanded hunting function. Exit interviews at comparable organizations tell the same story from the other direction: the absence of meaningful, strategic work is a consistent driver of attrition among experienced security professionals.

 

Lessons Learned: What the Team Would Do Differently

No transformation of this scope unfolds without friction. The SOC lead shared six candid lessons from the program — including three things they would approach differently if starting over.

 

01) Start with PIRs — not tooling
The team spent three weeks evaluating TIP vendors before finalizing their PIRs. In hindsight, this was backwards. PIRs should drive vendor requirements, not the other way around. They eventually got it right, but lost evaluation time as a result.
02) Communicate the “why” to analysts early
Initial analyst resistance to the new hunting cadence and metrics model was higher than expected. Two analysts initially felt that the new performance metrics were unfair compared to the old ticket-based system. Earlier, more transparent communication about the rationale would have reduced friction.
03) Don’t underestimate TIP tuning time
The AI relevance scoring in the TIP required more tuning than the vendor’s implementation timeline projected. The team allocated two weeks for calibration — it took six. Budget analyst time for a full 6-8 week calibration period before expecting production-quality relevance scores.
04) The first hunt finding transforms the program
When the Month 6 hunt surfaced real threat actor activity that no alert had flagged, the entire team’s buy-in shifted. Nothing builds program credibility internally — with analysts, management, and the business — like a hunting program that finds something real. Prioritize getting that first finding early.
05) Detection engineering is a muscle — train it consistently
The quarterly detection engineering review cycle was the right cadence in hindsight, but the team wished they had started it in Month 1 rather than Month 4. Earlier detection engineering work would have accelerated ATT&CK coverage improvements and compounded value through the year.
06) Measure outcomes, not activities
The old metrics (ticket closure rate, SLA compliance) survived longer than they should have. Transitioning to outcome metrics (dwell time, false positive rate, MTTD) required active management attention and deliberate metric retirement. Start the metrics transition on Day 1.

 

How to Replicate This: A Practical Starting Framework

This transformation was accomplished by a six-person team with a modest additional budget. The principles are fully transferable to organizations of comparable or larger scale. Here is the sequence that the SOC lead recommends for teams starting their own intelligence-led transformation.

1) Conduct an honest ATT&CK coverage assessment
Map your existing detection rules to the MITRE ATT&CK framework for the techniques most commonly used by adversaries targeting your industry. Use free tools like ATT&CK Navigator to visualize coverage and Atomic Red Team to validate whether your detections actually fire. This assessment will reveal your real coverage — not your assumed coverage — and will define your transformation starting point. A minimal Navigator-layer export sketch appears after this list.
2) Develop and document Priority Intelligence Requirements
Run PIR workshops with your CISO, risk team, and business stakeholders. Aim for 4–8 specific, answerable intelligence questions that reflect your organization’s actual threat profile. These PIRs should be approved by leadership and treated as living documents — reviewed quarterly and updated as your threat landscape evolves.
3) Add the intelligence layer before changing detection tools
If you don’t have a TIP, evaluate one — starting with a structured POC using real organizational data. If budget is constrained, start with OpenCTI (open source) or MISP and connect your existing threat feeds through it. The goal is enrichment and relevance scoring before IOCs reach your SIEM — regardless of which platform provides it.
4) Launch your first formal threat hunt within 90 days
Don’t wait for the perfect platform configuration before hunting. Develop a hypothesis based on your PIRs and current threat intelligence, execute a focused 2-day hunt using whatever telemetry you have, document the process and findings formally, and use the results to build your first new detection rule. The hunt itself is as valuable as any finding it produces.
5) Establish a detection engineering review cadence
Commit to a quarterly detection engineering cycle with a specific, measurable output: at minimum, close the top 3 ATT&CK coverage gaps identified in your assessment. Use Atomic Red Team or a comparable simulation framework to validate new rules before production deployment. Track coverage improvement over time using Vectr or ATT&CK Navigator.
6) Replace activity metrics with outcome metrics from Day 1
Stop measuring ticket closure rates and start measuring dwell time, MTTD, false positive rate, and ATT&CK coverage. Establish baselines immediately — even rough estimates — so you can demonstrate improvement. These metrics tell a security outcome story to leadership that ticket volumes never can. They also drive the right analyst behaviors organically.
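
For steps 1 and 5, one free way to visualize coverage is to generate an ATT&CK Navigator layer file from your rule-to-technique mapping and load it in the Navigator UI. The sketch below writes a minimal layer; the exact layer schema varies across Navigator versions, so treat the fields as a starting point and check the layer format documentation for the version you run.

```python
# Sketch: export a rule-to-technique mapping as an ATT&CK Navigator layer file.
# The minimal fields below are a reasonable starting point, but the exact layer
# schema depends on your Navigator version; verify against its documentation.
import json

# Hypothetical set of technique IDs with at least one validated detection rule.
covered_techniques = {"T1110", "T1021.001", "T1566.001"}

layer = {
    "name": "Detection coverage - priority actors",
    "domain": "enterprise-attack",
    "description": "Techniques with at least one validated detection rule",
    "techniques": [
        {"techniqueID": t, "score": 1, "comment": "validated with Atomic Red Team"}
        for t in sorted(covered_techniques)
    ],
}

with open("coverage_layer.json", "w") as f:
    json.dump(layer, f, indent=2)

print("Wrote coverage_layer.json; load it in ATT&CK Navigator to visualize gaps.")
```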

Conclusion: Intelligence Is a Program, Not a Product

The transformation documented in this case study was not primarily a technology story. The organization didn’t buy their way out of reactive operations. They built their way out — through disciplined program design, clear intelligence requirements, structured operational processes, and a consistent commitment to measuring outcomes over activities.

The 40% reduction in dwell time is a compelling headline. But the more significant outcome — the one that will compound in value for years — is that this team now operates with a fundamentally different orientation. They are no longer running to keep pace with a queue. They are hunting. They are asking “what are the threats most likely to target us, right now?” and systematically answering that question every two weeks.

That shift — from reactive to proactive, from tool-driven to intelligence-driven — is replicable. It doesn’t require an unlimited budget or a team of twenty analysts. It requires clarity of purpose, disciplined process design, and the organizational will to measure what actually matters.

 

🔑 The Central Lesson

The most important investment this organization made was not in software — it was in thinking clearly about what intelligence they needed and why. Priority Intelligence Requirements cost nothing to develop and they changed everything. If your security program lacks documented PIRs, that is the most valuable gap to close before any technology procurement decision is made.