CompTIA Security+ Exam Notes
Let Us Help You Pass

Tuesday, February 17, 2026

CREST: The Gold Standard for Professional Penetration Testing

 What is CREST in Penetration Testing?

CREST (Council of Registered Ethical Security Testers) is an international, not‑for‑profit accreditation and certification body for the cybersecurity industry. It sets professional standards for penetration testers and security service providers. Its certifications and company accreditations provide assurance that pentesting is performed ethically, competently, and using consistent, validated methodologies.

CREST plays two main roles:

1. Certifying individuals — penetration testers and threat‑intelligence/incident‑response specialists.

2. Accrediting organizations — pentesting consultancies that meet CREST’s operational, technical, and quality standards.

Why CREST Exists

CREST was created to address the risks of unregulated and inconsistent penetration testing, ensuring companies can trust the people and organizations performing these services. Its mission includes:

  • Providing a “stamp of approval” for high‑quality pentesting.
  • Ensuring pentesters follow strict ethical, legal, and methodological standards.
  • Validating the technical competence of testers via rigorous hands‑on exams.
  • Ensuring member companies meet quality‑assurance and data‑handling standards.

With hundreds of accredited organizations worldwide and thousands of certified testers, CREST has become one of the most recognized standards in professional pentesting.

What CREST Guarantees in a Pentest

Working with CREST‑certified testers or CREST‑accredited companies comes with strong assurances:

Repeatable, audit‑grade methodologies

  • CREST mandates documented, defensible processes for scoping, testing, evidence gathering, and reporting.

Technically vetted testers

  • Individuals must pass examinations that simulate real pentesting scenarios and require demonstrable skill.

Ethical & legal compliance

  • A strict code of conduct ensures clear boundaries, particularly in sensitive or regulated environments.

Meaningful, technically sound reports

  • CREST emphasizes producing actionable evidence (logs, PoC traces, reproducible exploit paths).

Industry and regulatory recognition

  • CREST certifications are globally recognized and often required or preferred by buyers of security services.

CREST in the Pentesting Workflow

CREST outlines structured pentesting processes to ensure consistency across engagements. This includes:

  • Scoping under defined rules of engagement
  • Pre‑engagement preparation
  • Methodical vulnerability discovery
  • Exploitation and evidence gathering
  • Risk analysis and prioritization
  • Remediation guidance

It also supports multiple pentesting domains:

  • Web application
  • Network
  • Mobile
  • Cloud
  • API
  • Vulnerability Assessment
  • Intelligence‑led (STAR) testing

CREST Certification Path for Pentesters

CREST provides a full career pathway from entry‑level to highly advanced testing roles.

1. CPSA — CREST Practitioner Security Analyst

  • Entry‑level exam covering fundamental pentesting knowledge.

2. CRT — CREST Registered Penetration Tester

  • Intermediate, hands‑on exam assessing ability to test infrastructure and web apps under time‑boxed conditions.
  • Delivered via Pearson VUE on a locked‑down Kali Linux environment. 

3. CCT (INF / APP) — CREST Certified Tester

Advanced specialization:

  • Infrastructure (CCT INF)
  • Application (CCT APP)

4. CCRTS / CCRTM — CREST Red Team certifications

  • For advanced offensive operators and managers.
  • Many governments (e.g., the UK) align CREST exams with public‑sector testing routes such as NCSC CHECK.

CREST‑Accredited Companies

CREST‑accredited pentesting firms must undergo:

  • Rigorous quality assurance audits
  • Validation of internal processes
  • Demonstration of their testers’ capabilities
  • Safe data‑handling and reporting procedures

This assures clients that accredited providers deliver consistent, ethical, and high‑quality security testing.

Why CREST Matters in Pentesting

CREST has become a gold standard because it:

  • Raises the bar for tester competence
  • Ensures methodological consistency across engagements
  • Provides buyer confidence in the quality of the pentest
  • Enhances career credibility for individual testers
  • Aligns with national cybersecurity schemes and regulators

CREST helps organizations avoid “low-quality pentests” that produce noise and false confidence. Instead, it focuses on defensible, repeatable, evidence‑backed results that stand up to audits or compliance reviews.

Summary


CREST brings trust, consistency, and professional rigor to penetration testing, benefiting both security professionals and organizations buying pentest services.

Monday, February 16, 2026

LAMP Server Explained: A Complete Guide to Linux, Apache, MySQL, and PHP

 What Is a LAMP Server?

A LAMP server is a classic, widely used web service stack consisting of:

  • Linux – the operating system
  • Apache – the web server
  • MySQL (or MariaDB) – the database server
  • PHP – the server-side scripting language

Together, these technologies create a fully functional environment for hosting dynamic websites and web applications.

1. Linux – The Foundation (Operating System)

Linux is the underlying OS that provides:

  • File system organization
  • Permissions & user access control
  • Package management
  • System security
  • Networking capabilities

Popular distros for LAMP servers:

  • Ubuntu Server
  • Debian
  • CentOS / Rocky Linux
  • Red Hat Enterprise Linux

Linux’s strengths include:

  • Stability and uptime
  • Security & permission model
  • Command-line tools for automation
  • Massive community support
  • Cost effectiveness (usually free)

2. Apache – The Web Server

Apache HTTP Server is responsible for:

  • Accepting requests from web browsers
  • Processing those requests
  • Serving web pages, images, scripts, and files

Key features:

Modular architecture

Modules like:

  • mod_php – allows PHP to run inside Apache
  • mod_ssl – enables HTTPS
  • mod_rewrite – URL rewriting

Virtual hosts

Allows multiple websites on one server:
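
For example, two name‑based virtual hosts might be defined like this (the domain names and document roots are placeholders):

    <VirtualHost *:80>
        ServerName www.example-one.com
        DocumentRoot /var/www/site1
    </VirtualHost>

    <VirtualHost *:80>
        ServerName www.example-two.com
        DocumentRoot /var/www/site2
    </VirtualHost>

Apache matches the Host header of each incoming request to the correct ServerName and serves content from that site’s DocumentRoot.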

Logging

  • Access logs
  • Error logs

Apache is extremely flexible, stable, and widely supported.

3. MySQL (or MariaDB) – The Database Server

MySQL stores application data in relational tables.

Example use cases:

  • User accounts and passwords
  • Blog posts
  • E-commerce products
  • Session data

Core concepts:

  • Databases
  • Tables
  • Rows/records
  • Columns/fields
  • Primary keys
  • SQL queries

Example SQL query:
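
A simple lookup against a hypothetical users table might look like this:

    SELECT username, email
    FROM users
    WHERE active = 1
    ORDER BY username;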

MySQL alternatives in LAMP:

  • MariaDB – a drop‑in replacement created by the original MySQL developers
  • Percona – optimized MySQL fork

4. PHP – The Web Programming Language

PHP runs on the server and generates dynamic HTML.

Example PHP script:
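
A minimal script that generates dynamic HTML (the "name" query parameter is made up for illustration):

    <?php
    // Read an optional "name" query parameter, escaping it for safe HTML output
    $name = htmlspecialchars($_GET['name'] ?? 'Guest');
    echo "<h1>Hello, $name!</h1>";
    echo "<p>Today is " . date('Y-m-d') . ".</p>";
    ?>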

PHP is ideal for:

  • Form handling
  • Database interaction
  • Generating dynamic content
  • Server-side logic

Popular PHP applications built on LAMP:

  • WordPress
  • Drupal
  • Joomla
  • phpMyAdmin

PHP alternatives within LAMP:

  • Python (Django/Flask)
  • Perl

The “P” in LAMP can also stand for Perl or Python; when PostgreSQL replaces MySQL, the stack is usually called LAPP.

How the LAMP Stack Works Together

Here’s the request flow:

1. Client browser sends request → https://yourserver.com

2. Apache receives the request

3. If PHP is needed → Apache hands the script to the PHP interpreter

4. PHP may request or modify data via MySQL

5. PHP generates HTML output

6. Apache sends the HTML response back to the browser

Everything happens in milliseconds.

Why LAMP Is Still Popular

Even though newer stacks and tools exist (Node.js, Nginx-based LEMP, containers), LAMP remains a top choice because:

  • Open source and free
  • Stable and proven for decades
  • Powers a large share of existing websites and web apps
  • Easy to set up
  • Easy to administer
  • Massive community & documentation
  • Works on nearly any hardware

Typical Directory Structure
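
On Ubuntu/Debian systems the key locations typically look like this (paths differ on RHEL-based distros, e.g., /etc/httpd):

    /var/www/html/        → default web root (HTML and PHP files)
    /etc/apache2/         → Apache configuration (apache2.conf, sites-available/)
    /etc/mysql/           → MySQL/MariaDB configuration
    /etc/php/             → PHP configuration (php.ini per version)
    /var/lib/mysql/       → database data files
    /var/log/apache2/     → access.log and error.log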


Simplified Installation Example (Ubuntu)
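
A minimal install on a current Ubuntu release looks roughly like this (package names vary on other distros):

    sudo apt update
    sudo apt install apache2 mysql-server php libapache2-mod-php php-mysql
    sudo systemctl enable --now apache2 mysql
    sudo mysql_secure_installation   # optional interactive hardening

After installation, placing a test PHP file in /var/www/html/ confirms that Apache and PHP are working together.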


Modern Variants of LAMP

  • LEMP – Nginx (“engine-x”) replaces Apache as the web server
  • LAPP – PostgreSQL replaces MySQL as the database
  • Containerized LAMP – the same components packaged and deployed with Docker

Summary

A LAMP server is a classic and powerful web development environment combining:

  • Linux – OS foundation
  • Apache – Web server
  • MySQL – Database
  • PHP – Server-side scripting

Sunday, February 15, 2026

Netcat Explained: Legitimate Uses, Security Risks, and Defensive Strategies

 What Is Netcat?

Netcat (often called nc) is a small, command‑line networking utility commonly described as the “Swiss Army knife of TCP/IP.”

It can:

  • Create TCP or UDP connections
  • Listen on ports
  • Transfer data between systems
  • Read or write directly to network sockets
  • Perform banner grabbing
  • Assist in debugging and network troubleshooting

In cybersecurity and IT operations, Netcat is widely used because it’s:

  • Lightweight
  • Built into many Linux distros
  • Available for macOS and Windows
  • Extremely flexible

Because of this flexibility, Netcat is used by penetration testers, system admins, and, unfortunately, malicious actors.

Legitimate Uses of Netcat

Professionals use Netcat for completely valid reasons, such as:

Network Debugging

  • Checking whether a specific port is open, diagnosing connection issues, or testing firewall rules.
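
For example, a quick reachability check against a single TCP port might look like this (hostname and port are placeholders; exact flags vary slightly between Netcat variants):

    nc -vz mail.example.com 443    # -v verbose, -z scan only (no data sent)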

System Administration

  • Sending files between machines internally, simple remote management in test environments, etc.

Security Testing (Ethical)

  • Pen testers simulate attacker behavior in controlled environments to help organizations find vulnerabilities.

These are all safe and normal uses of the tool.

How Netcat Can Be Misused (High‑Level, Non‑Actionable)

Since Netcat can open network connections, listen on ports, and transfer data, malicious actors sometimes abuse it for unauthorized remote access, data exfiltration, or persistence.

Below are conceptual descriptions to help you understand threats — not instructions.

1. Unauthorized Remote Access

Attackers may use Netcat’s ability to create inbound/outbound connections for:

  • Reverse connections that bypass firewalls
  • Backdoors that accept incoming connections

Security takeaway:

Monitor for unexpected listening ports or unusual outbound connections.

2. Data Exfiltration

Because Netcat can transmit raw data, an attacker could use it to move:

  • Password dumps
  • Files containing sensitive information
  • System logs revealing network structure

Security takeaway:

Use Data Loss Prevention (DLP), network monitoring, and egress filtering.

3. Port Scanning (Crude/Basic)

Netcat can be misused to probe which services are open on a target system.

Security takeaway:

Intrusion detection systems (IDS) can flag repeated access attempts across ports.

4. Simple Command Relay or “Piping”

Attackers may chain Netcat with system shells to facilitate unauthorized remote command execution.

Security takeaway:

Look for abnormal processes spawning unexpected child processes.

5. Persistence Mechanisms

Netcat can be used as part of a larger persistence strategy by keeping malicious listeners active.

Security takeaway:

Host-based intrusion detection and startup/service audits help detect this.

How Security Teams Defend Against Netcat Misuse

Even though attackers can abuse Netcat, defenders can protect systems with techniques such as:

Network Monitoring

  • Spot unusual traffic patterns, unknown listening ports, or outbound connections.
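
On a Linux host, for instance, defenders can periodically list sockets and their owning processes and compare the output against an expected baseline (a quick manual sketch, not a full monitoring solution):

    sudo ss -tulpn    # listening TCP/UDP sockets with owning processes
    sudo ss -tnp      # established TCP connections with owning processes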

Egress Filtering

  • Block unauthorized outbound traffic to prevent reverse connections.

IDS/IPS Signatures

  • Tools like Snort or Suricata can detect Netcat-like traffic patterns.

Least Privilege

  • Restrict which users can run low‑level networking tools.

Endpoint Monitoring

  • Watch for suspicious processes or binaries.

Saturday, February 14, 2026

How Octal Permissions Work in Linux (With Examples)

Understanding Octal Permissions in Linux

Linux file permissions are often represented in two ways:

1. Symbolic notation → e.g., rwxr-xr--

2. Octal (numeric) notation → e.g., 754

Octal notation is simply a numeric shorthand for symbolic permissions.

1. Symbolic Permissions (The Long Form)

Linux permissions operate on three categories:

  • User (owner) – the account that owns the file
  • Group – members of the file’s group
  • Others – everyone else

And each category can have these three permission types:

  • r – read
  • w – write
  • x – execute

Example:

rwx r-x r--

Breaks down as:

  • Owner: rwx (read, write, execute)
  • Group: r-x (read and execute)
  • Others: r-- (read only)

2. The Numeric (Octal) System

For each of the permissions (r, w, x), Linux assigns a numeric value:

  • r (read) = 4
  • w (write) = 2
  • x (execute) = 1

To convert symbolic to octal, add the values:

Examples:

  • rwx → 4 + 2 + 1 → 7
  • rw- → 4 + 2 + 0 → 6
  • r-x → 4 + 0 + 1 → 5
  • r-- → 4 + 0 + 0 → 4
  • --- → 0 + 0 + 0 → 0

3. Putting It Together: Octal Notation

A full permission set requires 3 octal digits (user, group, others):

  • (user)(group)(others)

Example:

  • 754

Breaks down to:

  • 7 = rwx (owner)
  • 5 = r-x (group)
  • 4 = r-- (others)

Symbolically:

rwx r-x r--

4. Common Octal Permission Values

For Files

  • 644 (rw-r--r--) – owner can edit, everyone can read; common for web content and documents
  • 600 (rw-------) – private files such as SSH keys or credentials
  • 755 (rwxr-xr-x) – executable scripts and programs
  • 700 (rwx------) – private scripts

For Directories

  • 755 (rwxr-xr-x) – browsable by everyone (execute = permission to enter)
  • 700 (rwx------) – private directories
  • 1777 (rwxrwxrwt) – world-writable with the sticky bit, e.g., /tmp

5. Special Bits (Setuid, Setgid, Sticky Bit)

Sometimes you’ll see 4 digits (e.g., 4755).

The first digit is for special permissions:

  • 4 = setuid (run a file with its owner’s privileges)
  • 2 = setgid (run with the group’s privileges, or inherit the group on directories)
  • 1 = sticky bit (only a file’s owner may delete it from the directory)

Examples:

  • 4755 → setuid bit + rwx r-x r-x
  • 1777 → sticky bit + rwx rwx rwx (used on /tmp)

6. Setting Permissions with chmod

Use octal notation directly:

1. chmod 754 filename – owner rwx, group r-x, others r--

2. chmod 700 private_script.sh – only the owner can read, write, and execute

3. chmod 1777 /shared/tmp – world-writable directory with the sticky bit set
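
To confirm the result, you can view permissions symbolically with ls -l or numerically with GNU stat (the file name and output below are illustrative):

    ls -l script.sh
    # -rwxr-xr--  1 alice staff  512 Feb 14 10:00 script.sh

    stat -c '%a %n' script.sh
    # 754 script.sh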

7. Why Use Octal Instead of Symbolic?

Octal is:

  • faster (chmod 755 file)
  • unambiguous
  • very common in scripts and system admin work

Symbolic mode is better for tweaking permissions:

1. chmod g+w file – add write permission for the group

2. chmod o-rwx file – remove all permissions for others

But octal mode is perfect for resetting permissions.

Friday, February 13, 2026

Understanding ITIL: Core Concepts, Practices, and Why It Matters

 What Is ITIL? (Information Technology Infrastructure Library)

ITIL is a globally recognized framework of best practices for delivering high‑quality IT services.

It provides guidelines for how IT departments should organize, manage, and continuously improve the services they deliver to the business.

Originally developed by the UK government, ITIL has become the most widely adopted IT service management (ITSM) framework worldwide.

Core Purpose of ITIL

ITIL answers one key question:

How should IT deliver value to the business consistently and efficiently?

It focuses on:

  • Aligning IT with business needs
  • Reducing costs
  • Improving service quality
  • Managing risks
  • Increasing customer satisfaction
  • Ensuring predictable and repeatable processes

ITIL Versions (Historical Context)

ITIL v2

  • Introduced service support and service delivery.

ITIL v3 / ITIL 2011

Organized around a service lifecycle, divided into 5 stages:

1. Service Strategy

2. Service Design

3. Service Transition

4. Service Operation

5. Continual Service Improvement (CSI)

ITIL 4 (Current Framework)

Released in 2019, ITIL 4 moves from process-based to value-driven, flexible practices compatible with:

  • Agile
  • DevOps
  • Lean IT

ITIL 4 introduces the Service Value System (SVS) and 34 ITIL practices.

ITIL 4 Service Value System (SVS)

At the heart of ITIL 4 is the SVS, which ensures that all organizational components work together to create value.

Components of the SVS:

1. Guiding Principles

2. Governance

3. Service Value Chain

4. Practices (34 ITIL practices)

5. Continual Improvement

ITIL Guiding Principles

These high-level recommendations apply to any organization:

1. Focus on value – Everything should aim to deliver value to customers.

2. Start where you are – Don’t rebuild systems unnecessarily.

3. Progress iteratively with feedback – Small, controlled steps.

4. Collaborate and promote visibility – Remove silos.

5. Think and work holistically – IT and business must connect.

6. Keep it simple and practical – Avoid unnecessary complexity.

7. Optimize and automate – Improve efficiency.

ITIL Service Value Chain

The value chain is a flexible model showing how services are created and delivered.

It consists of 6 activities:

1. Plan

2. Improve

3. Engage

4. Design & Transition

5. Obtain/Build

6. Deliver & Support

These activities allow IT teams to transform demand into valuable services.

The 34 ITIL Practices (Grouped)

1. General Management Practices

  • Architecture Management
  • Continual Improvement
  • Information Security Management
  • Knowledge Management
  • Measurement & Reporting
  • Organizational Change Management
  • Portfolio Management
  • Project Management
  • Relationship Management
  • Risk Management
  • Service Financial Management
  • Strategy Management
  • Supplier Management
  • Workforce & Talent Management

2. Service Management Practices

  • Availability Management
  • Business Analysis
  • Capacity & Performance Management
  • Change Enablement
  • Incident Management
  • IT Asset Management
  • Monitoring & Event Management
  • Problem Management
  • Release Management
  • Service Catalog Management
  • Service Configuration Management
  • Service Continuity Management
  • Service Design
  • Service Desk
  • Service Level Management
  • Service Request Management
  • Service Validation & Testing

3. Technical Management Practices

  • Deployment Management
  • Infrastructure & Platform Management
  • Software Development & Management

Key ITIL Concepts (Detailed Yet Simple)

1. Incident Management

  • Restore normal service as quickly as possible.
  • Example: Fixing a user’s Wi-Fi or restoring a crashed application.

2. Problem Management

  • Identify and fix the root causes of incidents.
  • Example: Investigating repeated system crashes.

3. Change Enablement (formerly Change Management)

  • Control risks when making changes to IT systems.
  • Types include:
    • Standard changes
    • Normal changes
    • Emergency changes

4. Service Desk

  • Central point of contact for users experiencing issues or requesting help.

5. Configuration Management (CMDB)

  • Tracks all IT assets and how they relate to each other.

6. Service Level Management

  • Defines and tracks service quality using SLAs, OLAs, and KPIs.

Why Organizations Use ITIL

Benefits:

  • Improved service reliability
  • Reduced operational costs
  • Better risk and compliance management
  • Higher customer satisfaction
  • Better alignment between IT and business
  • Clearer communication across teams

ITIL is widely used in:

  • Banks
  • Hospitals
  • Government agencies
  • Cloud service providers
  • MSPs (Managed Service Providers)
  • Large corporations

ITIL Certifications

Common certifications include:

1. ITIL 4 Foundation

2. ITIL 4 Managing Professional (MP)

3. ITIL 4 Strategic Leader (SL)

4. ITIL Master (highest level)

Short Summary

ITIL is:

  • A framework
  • For managing IT services
  • Using best practices
  • To ensure consistent, efficient, high‑quality service delivery

Thursday, February 12, 2026

File Integrity Monitoring Explained: How It Works and Why It Matters

 File Integrity Monitoring (FIM) 

File Integrity Monitoring (FIM) is a security control that detects unauthorized or unexpected changes to critical system files. It is used to identify suspicious activity, such as malware installation, privilege‑escalation attempts, configuration tampering, data manipulation, or attacker-persistence techniques.

At its core, FIM answers three essential questions:

  • What changed? (file, registry key, configuration, system object)
  • When did it change? (timestamp, event sequence)
  • Who or what made the change? (user account, process, system service)

Why FIM Matters

Attackers rarely compromise a system without leaving traces. Even “fileless” attacks eventually modify something persistent — a configuration file, a scheduled task, a registry entry, or a dropped payload.

FIM helps security teams:

  • Detect intrusions early
  • Identify tampering with security configurations
  • Comply with regulatory standards (PCI‑DSS, HIPAA, SOX, NIST, CIS)
  • Monitor insider threats
  • Maintain system baselines & change control discipline

How FIM Works (Step-by-Step)

1. Baseline Creation

When FIM is first deployed, it scans and records a “known-good state” of monitored files.

This baseline includes:

  • Cryptographic hashes (SHA‑256, SHA‑512)
  • File permissions
  • Ownership
  • Size
  • File attributes (hidden, read-only, etc.)
  • System Access Control Lists (SACLs)
  • Timestamps (creation, modification, access)

This baseline represents the trusted state.
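
Conceptually, the baseline is just a set of recorded attributes. A bare-bones illustration using file hashes alone (a real FIM tool such as AIDE or Wazuh records far more, including permissions and ownership):

    # Record a known-good baseline of a few critical files
    sha256sum /etc/passwd /etc/ssh/sshd_config /etc/sudoers > /var/tmp/baseline.sha256

    # Later: verify the files still match the recorded hashes
    sha256sum -c /var/tmp/baseline.sha256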

2. Continuous Monitoring

FIM then continuously (or periodically) watches for:

  • File modifications
  • Deletions
  • Additions
  • Permission changes
  • Ownership changes
  • Registry alterations (on Windows)

Depending on the implementation, this can be:

  • Real‑time monitoring – uses kernel notifications, audit logs, or OS event hooks.
  • Scheduled scans – periodically re-hashes files and compares them to the baseline.

3. Change Detection

When changes occur, FIM evaluates:

  • Was the change authorized? (e.g., system patching, admin maintenance)
  • Was it suspicious or unexpected? (could indicate compromise)

Changes trigger alerts with details such as:

  • File name and path
  • Before vs. after hash values
  • User account making the change
  • Process responsible (e.g., PowerShell.exe, unknown binary)
  • Timestamp and event sequence

4. Logging & Alerting

FIM integrates with:

  • SIEM platforms (Splunk, Sentinel, QRadar)
  • EDR/XDR platforms
  • Compliance dashboards
  • SOC alerting systems

Alerts can be enriched with threat intelligence to determine if the modification correlates with known malicious behaviors.

What Files Are Typically Monitored?

Critical system files

  • OS executables
  • Kernel modules
  • Boot loaders
  • Driver files

Security configuration files

  • Firewall rules
  • PAM configurations
  • Authentication/authorization settings
  • Audit policies

Application & server configurations

  • Web server config (Apache, NGINX, IIS)
  • Database configs
  • Application settings files

Sensitive data files

  • Financial data
  • Customer data
  • PII/PHI
  • Encryption keys

Logs (in some cases)

Although logs are expected to change, FIM monitors suspicious tampering (e.g., deletions or timestamp manipulation).

Types of FIM

1. Host-based FIM

Runs on individual servers or endpoints.

Examples:

  • Microsoft Defender (with ASR & auditing)
  • Tripwire
  • OSSEC / Wazuh
  • AIDE (Linux)

2. Network-based FIM

  • Centralizes monitoring from multiple hosts.
  • Useful for large enterprise environments.

FIM vs. Change Control Systems

While Change Control manages authorized modifications (patching, updates, deployments), FIM detects all changes, authorized or not.

Good security design integrates FIM with change management so the system can automatically suppress alerts for approved updates and flag unauthorized ones.

Benefits of File Integrity Monitoring

Early breach detection

  • Unexplained file changes are often the first sign of compromise.

Compliance enforcement

  • Many standards explicitly require FIM, including PCI‑DSS 11.5.

Protects critical systems

Stops or flags:

  • Backdoor installation
  • Unauthorized configuration changes
  • Rootkit-like behavior
  • Tampering with identity or authentication mechanisms

Reduces attacker dwell time

Helps detect stealthy post‑exploitation actions early.

What FIM Does Not Do

FIM is not:

  • Antivirus
  • Patch management
  • A full EDR solution
  • An access control system

It complements these tools.

Summary

File Integrity Monitoring (FIM) is a foundational security control that continuously checks critical files for unauthorized changes by comparing them to a known-good baseline. It provides essential visibility, flags suspicious modifications, enhances compliance, and reduces the time an attacker can remain undetected.

Wednesday, February 11, 2026

Inside LSASS Dumping: A Defender’s Guide to Protecting Windows Credentials

 What is LSASS and “LSASS dumping”?

LSASS (Local Security Authority Subsystem Service) is the Windows process that enforces local security policy and manages authentication. After a user logs on, credential material (e.g., NTLM password hashes, Kerberos tickets, and, in some configurations, plaintext) resides in LSASS memory for the session. LSASS dumping is the act of extracting that in‑memory credential material (typically after an attacker already has admin/SYSTEM privileges) to facilitate account impersonation, privilege escalation, and lateral movement. 

Why attackers target LSASS

  • High payoff: A single host may contain credentials for privileged users (e.g., domain admins), enabling rapid lateral movement and full domain compromise. 
  • Pervasive technique: LSASS dumping is among the most prevalent credential‑access techniques seen across APT and cybercrime operations, including ransomware campaigns.
  • Windows internals: Because LSASS legitimately stores authentication artifacts during normal operation, an attacker with sufficient privileges can attempt to extract them unless the host is hardened.

Common approaches (high‑level only)

Attackers can attempt in‑memory access to LSASS or create memory dumps for offline parsing with credential‑theft utilities. Variants include abusing signed/LOLBAS binaries (living‑off‑the‑land), leveraging error‑reporting paths, or deploying custom tooling to evade EDR. (Deliberately omitting command‑level detail.)

Operational prerequisites for an attacker

  • Local admin/SYSTEM privileges are typically required to read LSASS memory, so LSASS dumping is usually a post‑compromise technique.
  • Misconfigurations (e.g., legacy WDigest settings that allow plaintext caching) can worsen the impact if present.

Detection: what to watch for

Aim to detect both attempted access to LSASS and the downstream use of stolen material.

1. Process and handle‑access telemetry

  • Alert when non‑system processes open a handle to LSASS with suspicious access rights (e.g., memory read or handle duplication). Modern EDRs and Sysmon ProcessAccess telemetry (Event ID 10) are key here.

2. Anomalous child processes / LOLBAS abuse

  • Signed Windows binaries or admin tools unexpectedly interacting with LSASS (e.g., “living‑off‑the‑land” patterns).

3. Dump‑artifact forensics

  • Appearance of crash/dump files in temp or system folders, or unusual Windows Error Reporting activity tied to LSASS.

4. Post‑dump behavior

  • Sudden spikes in Kerberos ticket requests, pass‑the‑hash attempts, or lateral RDP/SMB authentication using previously unseen accounts or workstations.

Microsoft reports that modern endpoint solutions (e.g., Defender for Endpoint) include attack-surface reduction (ASR) rules and detections specifically designed to block or alert on LSASS credential theft, with demonstrated effectiveness in independent tests.

Mitigation: harden, limit, and monitor

Goal: Make LSASS hard to access, reduce the value of what’s inside it, and ensure attempts are noisy.

1. Enable LSASS protection features

  • LSA Protection / RunAsPPL (Protected Process Light): Launches LSASS as a protected process, allowing only trusted, signed code with appropriate privileges to interact with it (a configuration sketch follows below).
  • Credential Guard: Uses virtualization‑based security (VBS) to isolate derived credentials, preventing many in‑memory theft scenarios. 
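
As a minimal sketch, LSA Protection can be enabled on supported Windows builds by setting the RunAsPPL registry value and rebooting (pilot it first, since software that legitimately hooks LSASS can break):

    :: Enable LSA Protection (RunAsPPL); a reboot is required to take effect
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RunAsPPL /t REG_DWORD /d 1 /f

In managed environments the same setting is normally deployed via Group Policy or MDM rather than edited by hand on each host.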

2. Apply Attack Surface Reduction (ASR) and EDR controls

  • Use ASR rules to block credential stealing from LSASS, and ensure EDR policies alert on suspicious handle access and dump patterns. 

3. Reduce credential exposure

  • Disable/avoid WDigest plaintext credential caching where possible; ensure modern auth packages and patches are in place.
  • Prefer Kerberos and constrained delegation; avoid unnecessary credential caching on servers that host many privileged logons.

4. Least privilege & endpoint hygiene

  • Minimize local admin rights; use Privileged Access Workstations (PAWs) and Just‑Enough/Just‑In‑Time Administration to limit where high‑value credentials ever land. 

5. Memory‑dump and handle‑access controls

  • Remove “Debug programs” rights from standard administrators; restrict who can create process dumps; monitor/alert on dump‑tool invocation and WerFault anomalies.

6. Network & identity protections

  • Detect reuse of stolen credentials (pass‑the‑hash/ticket) via anomalous authentication patterns and enforce MFA where feasible to blunt the theft value.

Incident response: if LSASS dumping is suspected

  • Isolate the endpoint from the network to stop lateral movement.
  • Acquire volatile evidence (carefully, to avoid destroying forensics) and collect EDR/Sysmon logs around the time of suspected access.
  • Hunt for credential reuse across the domain (failed logons, unusual source hosts, ticket anomalies).
  • Rotate sensitive credentials (admin/service accounts), reset the krbtgt account per policy to invalidate existing Kerberos tickets, and reimage affected hosts if necessary.
  • Retrospective detection: Search for the same TTPs fleet‑wide; many attackers execute this technique on multiple hosts.

Legal and ethical note

Attempting to access LSASS memory on systems you do not own, or without explicit, written authorization, is illegal and unethical. All guidance here is intended solely to help defenders understand, detect, and mitigate this technique.

Quick FAQ

Is LSASS dumping the same as pass‑the‑hash?

  • No. LSASS dumping is a collection technique (to obtain secrets). Pass‑the‑hash/ticket are use techniques that may follow.

Does enabling Credential Guard stop all LSASS dumping?

  • It significantly reduces exposure by isolating secrets, but you should still enable RunAsPPL, ASR, and robust EDR detection; defense‑in‑depth is essential.

Where else do Windows credentials live?

  • Beyond LSASS memory, credentials/hashes can exist in the SAM, NTDS.dit (on DCs), LSA Secrets, cached domain credentials, and Credential Manager, each with its own risks and defenses.