Oh, Sh*t. Website disaster. Now what?

Don't let a "Friday night disaster" or a hacked homepage ruin your week.

From establishing a single source of truth to mastering the art of the "plain English" post-mortem, Jilly shares how to keep your cool, keep your stakeholders informed, and get back online faster.

---

We’ve all been there. Me? Too many times to mention. One minute, I’m prepping dinner on a Friday night after a week of absolutely nailing the internet business, and the next, BOOM. Client website down / hacked / fraudulent payments pinging left, right and centre.

Pick whichever disaster relates to you most; we’ve seen them all: the white screen of death, a 500 error, or, worse, the homepage taken over by a dancing badger in a ballerina costume*.

When the proverbial hits the fan (SHTF, for the acronym lovers), your first instinct might be to scream into a cushion or frantically Slack every developer you’ve ever met. For the calmer among us, you may be tempted to send a sternly worded email to your web agency or tech team with the subject line URGENT and hope that gets the issue off your desk.

Don’t do either of those things. 

At Bravand, we love the process of designing and building and testing and launching and all those fabulous productive things that come with the digital creative industry. But truth be told, whilst we’re super good at all of those things, the time when I feel our team truly shines is in a disaster.

Having managed one disaster after another for over 20 years, I’ve gone from hot-headed panic stations to a much cooler and calmer culture that we instil across our team.

If you’re managing a digital project or website, it is inevitable that you will face more than one “oh shit” moment in the near future - here is how to handle the heat without losing your cool.

  1. The "Who, How, and what not to do" of comms - When things go south, you need a single channel and source of truth. If you have five different people shouting in five different comms channels, nothing gets fixed, and everyone’s blood pressure triples.
    • The urgent ticket process: You need a crystal-clear process for raising the alarm. For us, it’s a call or text - mobile only and straight to me, plus backup contacts if you can’t get me. Email is a big no-no and everyone knows it. Everyone on your team needs to know exactly where to go.
    • Centralised tracking: Put down the Post-it notes. You need an online project management system (we use Jira, but other PM software is available) to track the fix. To be crystal clear on this: it isn't just for the devs; it’s so everyone can see progress without poking the technical team every six minutes, AND a report can be created quickly to relay back to board-level contacts if needs be. (There’s a rough sketch of how a ticket like that can be raised just after this list.)
    • The group-wide buy-in: Ensure every stakeholder is on board with this process before the crisis hits. If the CEO is still emailing a junior dev directly during a blackout, the system is broken.
  2. Keep the comms coming - in a crisis, silence is the loudest noise you can make. It screams, "We have no idea what we're doing."
    • Internal comms: Keep your team updated every 15–30 minutes, even if the update is "We’re still looking into it." It stops the internal panic from leaking out.
    • External comms: Be human. If the site is down, tell your users. A cheeky, honest social post or a "we’re working on it" landing page goes a long way. People forgive technical glitches; they don't forgive being ignored.
  3. Don’t skip the debrief - Once the fires are out and the site is humming again, it’s tempting to just go to the pub and forget it happened. But the post-mortem (without the blame) is where the learning happens: "A mistake is only a failure if you don't learn from it. Otherwise, it’s just an expensive lesson."
    • Sit down, talk to each other, look at the logs, and figure out exactly where the pipe burst. Was it a dodgy plugin? A devious hacker? A server overload? Human error? (There’s a tiny log-checking sketch after this list if you want a starting point.)
    • Fix the process, not just the code, so it doesn't happen again next Friday evening (always on a f**king Friday).
  4. Share the lesson in a way that EVERYONE can learn - So you work in tech, and the problem had a lot of elements with three-letter acronyms and posh computerised words. When relaying the issue, the solution, and the lessons back to stakeholders who may not know their cron job from their API connection, take the time to translate from tech jargon into plain English.
I cannot stress enough the benefit of ditching the arrogant IT-pro attitude and talking plainly about tech to the people you work with. They’ll get it, they’ll appreciate it, and it will make your life easier in the long run.
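
For the curious, here’s roughly what "one urgent ticket everyone can see" can look like in practice. This is a rough sketch only, assuming Jira Cloud’s REST API with a made-up instance URL, project key and credentials - your own tracker, fields and permissions will differ, so treat it as an illustration rather than a recipe.

```python
# Rough sketch: raise one urgent, visible ticket via Jira Cloud's REST API.
# The URL, credentials and project key below are placeholders, not real values.
import requests

JIRA_URL = "https://example.atlassian.net"    # hypothetical Jira instance
AUTH = ("jilly@example.com", "api-token")     # email + API token (placeholder)

def raise_urgent_ticket(summary: str, details: str) -> str:
    """Create a single ticket that devs, PMs and the board can all follow."""
    payload = {
        "fields": {
            "project": {"key": "WEB"},        # hypothetical project key
            "summary": f"URGENT: {summary}",
            "description": details,
            "issuetype": {"name": "Bug"},
            "priority": {"name": "Highest"},
        }
    }
    response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    response.raise_for_status()
    return response.json()["key"]             # e.g. "WEB-123" to share around

# Example: raise_urgent_ticket("Homepage down", "Reported 19:42 Friday by the client")
```

The point isn’t the tooling; it’s that there is exactly one place the fix is tracked, and everyone knows its name.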

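As for "look at the logs", even something this small can turn a vague "the site fell over" into "we started throwing server errors at 19:42 on Friday". A minimal sketch, assuming a standard Nginx/Apache-style access log at a made-up path; your host’s log location and format will almost certainly differ.

```python
# Minimal debrief helper: tally 5xx responses per hour from an access log.
# The log path below is a placeholder; point it at wherever your host writes logs.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # hypothetical path

def count_errors_by_hour(path: str = LOG_PATH) -> Counter:
    """Count server errors (status 5xx) per hour so the spike is obvious."""
    errors = Counter()
    with open(path) as log:
        for line in log:
            parts = line.split()
            # Common log format: ... [10/Oct/2025:19:42:01 +0000] "GET / HTTP/1.1" 500 ...
            if len(parts) > 8 and parts[8].startswith("5"):
                hour = parts[3].lstrip("[").rsplit(":", 2)[0]   # e.g. "10/Oct/2025:19"
                errors[hour] += 1
    return errors

if __name__ == "__main__":
    for hour, count in count_errors_by_hour().most_common(5):
        print(f"{hour} -> {count} server errors")
```
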
The Bottom Line?

Don't panic. Sh*t happens; but with a solid comms process and a bit of Bravand-style "best friend to your website" support, you’ll be back online quicker than it takes to figure out exactly how to pronounce our company name.

How’s your current emergency "SHTF" process looking right now? Is it a documented process, or just a prayer and a panic attack?

Get in touch

Not sure your website is in safe hands? Talk to us about our hosting and support packages - we're the team you want on speed dial when Friday night goes sideways.
