Azure IR test notes
Don’t judge, this felt quite noobie but I enjoyed it enough to record it on my blogz in this drivel format.
Whilst learning about some forensics I thought: with so much infrastructure being in Azure/AWS, what’s it like trying to do forensics with VMs? Should be easy to request a snapshot, no faffing around with physical kit, right?
Then I read some of @GossiTheDog’s tweets about the recent Exchange vulns being observed in his honeypot setups, so I got interested enough to try it myself.
I spun up a very poor attempt at a fake environment: 2 DCs, a file server and an Exchange 2019 server.
For telemetry I installed sysmon with SwiftOnSecurity’s config, plus a free trial of Splunk (external to the environment) with the universal forwarder on each host to pick up the event logs, and an inputs.conf entry in the forwarder’s local config (snippet below) to grab the sysmon events too. I manually installed both of these, which was a bit of a faff tbh.
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = true
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
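For reference, the manual sysmon install on each box boils down to a single command, assuming the SwiftOnSecurity config is saved next to the binary as sysmonconfig-export.xml:

Sysmon64.exe -accepteula -i sysmonconfig-export.xml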
I also installed a Google GRR server, again away from the Azure environment, to try it out.
Finally I installed a Win10 host, joined it to the domain and created a non-priv’d domain user “administrator” with the password P@ssw0rd.
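A bait account like that is a one-liner with the AD module; a rough sketch of the idea (a domain already ships with a built-in Administrator, so the account name here is just a hypothetical stand-in):

# Deliberately weak, non-privileged decoy account for the honeypot
New-ADUser -Name "adminuser" -SamAccountName "adminuser" `
    -AccountPassword (ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force) `
    -Enabled $true -PasswordNeverExpires $true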
So with the Exchange box exposed on 25/80/443 and my Win10 machine exposed on RDP, it didn’t take long until someone had brute-forced the administrator password, but nothing interesting happened immediately. The brute-force attempts continued.
Over the weekend a few logins were observed, but they didn’t do anything complex: some reconnaissance like ipconfig, route, arp -an etc. One attacker uploaded a “Defender Control.exe” binary but seemed to hit a UAC prompt and give up. Another attacker uploaded “Angry IP Scanner”. I wasn’t expecting such clunky interactions.
Monday
So I checked in around 9-10am on Monday, and I guess that’s when the $dayjob attackers start their day. They ended up using the recent print spooler exploit (PrintNightmare) to privesc.
They then used mimikatz to grab domain admin creds from the box and used mstsc to RDP into the other boxes. I guess the DC gave away my honeypotting, as they reset all the user credentials and didn’t do much else. (Or I’ve not found it in the logs yet.)
While I was watching the activity I also pulled back the event logs, process lists and malicious files from the hosts using Google GRR.
Once I decided to revoke their access I set up a Network Security Group blocking everything except my telemetry.
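This rule set is worth having pre-scripted before you need it. A rough sketch of a deny-all-except-telemetry NSG using the Az PowerShell module (every name, IP and port below is a placeholder):

# Load the NSG attached to the honeypot subnet/NICs
$nsg = Get-AzNetworkSecurityGroup -Name "honeypot-nsg" -ResourceGroupName "honeypot-rg"

# Drop every inbound connection - this is the bit that actually kicks the attacker out
$nsg | Add-AzNetworkSecurityRuleConfig -Name "DenyAllInbound" -Priority 4000 -Direction Inbound `
    -Access Deny -Protocol "*" -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*"

# Still let the boxes talk out to the collectors (Splunk indexer on 9997, GRR frontend on 443)
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowSplunkOut" -Priority 100 -Direction Outbound `
    -Access Allow -Protocol Tcp -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "198.51.100.10" -DestinationPortRange 9997

$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowGrrOut" -Priority 110 -Direction Outbound `
    -Access Allow -Protocol Tcp -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "198.51.100.20" -DestinationPortRange 443

# Don't forget DNS, or name resolution dies (see the lessons further down)
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowDnsOut" -Priority 120 -Direction Outbound `
    -Access Allow -Protocol "*" -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 53

# Everything else outbound gets dropped too
$nsg | Add-AzNetworkSecurityRuleConfig -Name "DenyAllOutbound" -Priority 4000 -Direction Outbound `
    -Access Deny -Protocol "*" -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*"

# Push the updated rule set back to Azure
$nsg | Set-AzNetworkSecurityGroup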
I used the Azure “Run Command” -> Run PowerShell option to reset my credentials & get back in:
$NewPwd = ConvertTo-SecureString "NewPasswordHere" -AsPlainText -Force
Set-ADAccountPassword -Identity localadmin -NewPassword $NewPwd -Reset
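I clicked through the portal for this, but it could just as easily be scripted for next time; a minimal sketch with the Az module (resource group, VM and script names are placeholders):

# Run Command executes via the VM agent rather than over the network,
# so it still works with the NSG slammed shut
Invoke-AzVMRunCommand -ResourceGroupName "honeypot-rg" -VMName "dc01" `
    -CommandId "RunPowerShellScript" -ScriptPath ".\reset-creds.ps1"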
Once I’d got everything I thought of value from the boxes I deleted them. Prior to deletion I snapshotted the Windows 10 box to see what artefacts I could recover forensically. All in it cost about £100 (the 32GB Exchange VM ate through a lot of budget), and I think I learnt a few things.
THINGS I DID LEARNSSSS
- Don’t leave canary token settings exposed on your DC desktop; it gives the game away.
- It really doesn’t take any time at all for a trivial credential to be found by attackers.
- Not all attackers are clevers.
- Google GRR is super cool but requires a LOT of compute. I’d love to use it in a production environment, but I get the feeling I’d get rejected asking for that much resource.
- SwiftOnSecurity’s sysmon config is definitely awesome, but I really didn’t RTFM and enable clipboard monitoring and RDP file info. I wish I’d done this, as it would have given me a few of the binaries that they cleaned up before I could capture them.
- I think Credential Guard would’ve prevented them getting domain admin so easily.
- Better credential hygiene would’ve also been a good idea.
- Trying to add a DENY ALL EXCEPT ME rule isn’t as quick to deploy as you’d think. I’d left my Splunk & Google GRR access in place, yet GRR stopped working; I initially assumed it had been clobbered by the attacker, but it turned out IT WASN’T DNS, IT WAS MY LACK OF ALLOWING NAME RESOLUTION. 🙂 So it’s definitely worth preparing this triage/remediation ruleset in advance and covering off all your dependencies.
- 127GB of Azure disk image isn’t going to download fast; it took me FIVE HOURS. Once I got the image down, Autopsy could open it fine without any crypto hassle, but I did fail to find some of the artefacts I’d observed being used in the logs. In future I think it’d make a lot more sense to analyse this in Azure. (Obvious I guess, but at least it’s something else you can prepare in advance; there’s a rough sketch of scripting the snapshot export after this list.)
- In future I think I’m going to try and present a vulnerable Exchange OWA/ECP interface on a Linux box. You can change the server info, so HOW HARD CAN IT BE? (And it’d save a fortune.)
- Deploying GRR with Group Policy was a doddle; I will definitely deploy sysmon & the Splunk forwarder this way in future to avoid manually touching each box.
- The recently released Chainsaw tool for log analysis is awesome. I could run it on the event logs pulled via Google GRR without much fuss. Maybe it’d be an easy plugin to butcher? It flagged up the mimikatz usage, e.g.:
mimikatz.exe log privilege::debug "sekurlsa::logonpasswords full" exit
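On the snapshot/download points above, the whole export can be scripted with the Az module and azcopy; a rough sketch (every name and the SAS duration are placeholders):

# Snapshot the Win10 VM's OS disk before deleting the VM
$vm   = Get-AzVM -ResourceGroupName "honeypot-rg" -Name "win10-01"
$disk = Get-AzDisk -ResourceGroupName "honeypot-rg" -DiskName $vm.StorageProfile.OsDisk.Name
$cfg  = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
New-AzSnapshot -ResourceGroupName "honeypot-rg" -SnapshotName "win10-osdisk-snap" -Snapshot $cfg

# Generate a short-lived read-only SAS URL for the snapshot and pull it down with azcopy
$sas = Grant-AzSnapshotAccess -ResourceGroupName "honeypot-rg" -SnapshotName "win10-osdisk-snap" `
    -Access Read -DurationInSecond 18000
azcopy copy $sas.AccessSAS "win10-osdisk.vhd"

# Revoke the SAS once the copy has finished
Revoke-AzSnapshotAccess -ResourceGroupName "honeypot-rg" -SnapshotName "win10-osdisk-snap"

Or, to keep it all in Azure, the same snapshot can just be turned into a managed disk and attached to an analysis VM in the same region instead of downloading it at all.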
So yeah, I quite enjoyed it and will re-attempt it with some of the lessons learnt. 😀 I will definitely rebuild it, maybe using Splunk Attack Range to save time, and deploy some baselines to slow/frustrate attackers.
I appreciate a lot of the stuff listed above is pretty obvious but I guess it shows how key it is.