256,338 rows affected.
If you have the ability to do this on your first day, it’s 100% not your fault.
This is literally true and I know it because I came here to say it and then noticed you beat me by 5 minutes.
This is literally true and I know it because I came here to say it and then noticed you beat me by 21 hours.
This is literally true and I know it because these three people said so
It’s fine, just restore the backup.
The what now?
It’s right there, in the room with the unicorns and leprechauns.
Right next to the windows backup cd
You know how we installed that system and have been waiting for a chance to see how it works
You go ask the other monkey who was in charge of the backup … and all they do is incoherently scream at you.
Backup straight out the building. Ain’t about to be there when they find out.
Every seasoned IT person, DevOps or otherwise, has accidentally made a catastrophic mistake. I ask about that in interviews :D
Mine was replacing a failed hard drive in an array.
- Check array health, see one failed member
- Popped out the hot-swappable old drive, popped in the new one
- Check array health to make sure the array rebuild is underway
- See the array now has TWO failed members, and realize I can feel the drive in my hand still spinning down
shit.
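The classic safeguard here (sketched below as an illustration, not the commenter's actual process) is to confirm which physical disk actually failed by matching its serial number to the caddy label before pulling anything. A minimal sketch, assuming a Linux md array and the stock mdadm and smartctl tools:

```python
import re
import subprocess

# Hedged sketch: cross-check which member of a Linux md array is faulty and
# print its serial number so it can be matched to the drive label before any
# drive is pulled. /dev/md0 is a placeholder, not a real path from the story.

ARRAY = "/dev/md0"

def faulty_members(array: str = ARRAY) -> list[str]:
    """Return device paths that `mdadm --detail` reports as faulty."""
    detail = subprocess.run(
        ["mdadm", "--detail", array], capture_output=True, text=True, check=True
    ).stdout
    members = []
    for line in detail.splitlines():
        fields = line.split()
        if "faulty" in fields and fields[-1].startswith("/dev/"):
            members.append(fields[-1])
    return members

def serial_number(device: str) -> str:
    """Ask smartctl for the drive's serial so it can be matched to the label."""
    info = subprocess.run(["smartctl", "-i", device], capture_output=True, text=True)
    match = re.search(r"Serial Number:\s*(\S+)", info.stdout)
    return match.group(1) if match else "unknown"

if __name__ == "__main__":
    bad = faulty_members()
    if not bad:
        print(f"No faulty members reported in {ARRAY}; do not pull anything yet.")
    for dev in bad:
        print(f"Faulty: {dev} (serial {serial_number(dev)}), match this against the caddy label")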
I accidentally rm’ed /bin on a remote host located in another country, and had to wait for someone to get in and fix it.
I pushed a $1 billion test trade through production instead of my test environment… that was a sweaty 30 minutes.
Not IT but data analyst. Missed a 2% salary increase for our union members when projecting next year’s budget. $12 million mistake that was only caught once it was too late to fix.
I once deleted the whole production Kubernetes environment trying to fix an update to prod gone awry, at 11pm. My saving grace was that our systems are barely used between 10pm and 8am, and I managed to teach myself, by reading enough docs and Stack Overflow comments, to rebuild it and fix the initial mistake before 5am. Never learned how to correctly use a piece of the stack that quickly before or since.
Nothing focuses the mind more than the panicked realisation that you have just hosed the production systems
Yep. Ran a config-as-code migration on prod instead of dev. We introduced new safeguards for running against prod after that, and changed the expectation that the primary on-call spends their downtime on dev work; that time shifted to improving ops tooling or making pretty charts from all the metrics. Actually ended up reducing toil substantially over the next couple of quarters.
10/10 will absolutely still do something dumb again.
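A minimal sketch of what such a safeguard can look like, purely illustrative (the environment names and CLI shape here are assumptions, not the commenter's actual tooling): the script refuses to run against a protected environment until the operator types its name back.

```python
import sys

# Illustrative "type the environment name to confirm" guard for migration
# scripts. Environment names and the CLI shape are assumptions.

PROTECTED_ENVS = {"prod", "production"}

def confirm_environment(target: str) -> None:
    """Abort unless the operator retypes the protected environment's name."""
    if target.lower() not in PROTECTED_ENVS:
        return  # dev/staging runs go straight through
    answer = input(f"You are about to run against '{target}'. Type its name to confirm: ")
    if answer.strip().lower() != target.lower():
        sys.exit("Confirmation failed; refusing to touch a protected environment.")

def run_migration(target: str) -> None:
    confirm_environment(target)
    print(f"Running config-as-code migration against {target}...")
    # ...the actual migration steps would go here...

if __name__ == "__main__":
    run_migration(sys.argv[1] if len(sys.argv) > 1 else "dev")
```

The point of the retyping ritual is that muscle memory can hit "y" on a yes/no prompt, but it rarely types "production" by accident.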
I deleted all of our DNS records. As it turns out, you can’t make money when you can’t resolve DNS records :P
Ctrl + z.
Thank you for coming to my Ted talk.
Works on my machine (excel sheet)
Elon was onto something after all
I once bricked all the POS terminals in a 30-store chain, at once.
One checkbox allowed me to do this.
Was it the recompute hash button?
No, they were ancient ROM-based tills. I unchecked a box that was blocking firmware updates from being pushed to the tills. For some reason I still don’t completely understand, these tills received their settings by Ethernet, but received their data by dialup fucking modems. When I unchecked the box, it told the tills to cease processing commands until the firmware update was completed. But the firmware update wouldn’t happen until I dialled into every single store, one at a time, and sent the firmware down through a 56k modem with horrendous stability, to each till, also one at a time. If one till lost one packet, I had to send its firmware again.
I sat for 8 hrs watching bytes trickle down to the tills while answering calls from frantic wait staff and angry managers.
I worked with POS machines once too. Ugh. Worst. Things. Ever.
I’m curious - was it also a checkbox that applied immediately when toggled, instead of only applying once you press Save?
Immediately applied, no save button. It was labeled something like “Allow/disable firmware updates”, which is bad design. A label should say exactly one thing, either “Allow” or “Disable”, never “Allow/disable”. The software was very antiquated.
Nah. It was totally a virus attack.
Reminds me of the time all those porn pop-ups hijacked my browser and filled my history. My dad thought I’d visited all those sites on purpose for a second there.
That’s actually pretty impressive
It was all a pentest! The company should have been operating under a Zero Trust policy, and their security systems should not have permitted a new employee to have that many rights. You’re welcome; the bill for this insightful Security Audit will arrive via mail.
Pretend you thought you were hired as a disaster recovery tester
Now if you’ll excuse me while I fetch some documents from my car for my formal evaluation of your system
*Gets in car and drives away*
“Ah, shit. Oh well. They have backups.”
“…”
“They have backups, right?”
If they don’t, that’s something you can’t blame on a new start.
It’s my last day at work, and I just started to dd my work laptop… but I forgot I was ssh’d into the production database.
Did you know that morning it would be your last day at work?
Get this monkey a job at Tesla
We need an army of them working at Palantir too
256,338 rows affected.
when it gives you a time to rub it in. ‘in 0.00035 seconds’
I can do one better. Novo Nordisk lost their Canadian patent for Ozempic because someone forgot to fill out the renewal with a $400 admin fee.
They will lose $10B before the patent would have ended.
But they saved $400. Someone needs to talk to HR.
What’s with the weird vertical artifacts in this image?
this meme is made out of corduroy
Trying to hide the slop
Scanlines in tate mode.
You’re stress testing the IT department’s RTO and RPO. This is important to do regularly at random intervals.
Netflix even invented something called Chaos Monkey that randomly breaks shit to make sure they’re ready.
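A toy illustration of that idea (not Netflix's implementation, just a sketch assuming AWS, boto3 credentials already configured, and an opt-in "chaos-eligible=true" tag, all of which are assumptions):

```python
import random

import boto3
from botocore.exceptions import ClientError

# Toy chaos-monkey-style sketch: pick one opted-in instance at random and
# terminate it, so the recovery process gets exercised regularly.
# Tag name and region are illustrative assumptions.

ec2 = boto3.client("ec2", region_name="us-east-1")

def pick_random_victim() -> str | None:
    """Return the ID of one running, opted-in instance, or None."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos-eligible", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    return random.choice(ids) if ids else None

def unleash_the_monkey(dry_run: bool = True) -> None:
    victim = pick_random_victim()
    if victim is None:
        print("No opted-in instances found; the monkey goes hungry.")
        return
    try:
        ec2.terminate_instances(InstanceIds=[victim], DryRun=dry_run)
        print(f"Terminated {victim}; time to find out if the recovery runbook works.")
    except ClientError as err:
        # With DryRun=True, AWS reports "would have succeeded" via this error code.
        if err.response["Error"]["Code"] == "DryRunOperation":
            print(f"Dry run only: {victim} would have been terminated.")
        else:
            raise

if __name__ == "__main__":
    unleash_the_monkey(dry_run=True)
```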