This week, we experienced a major disruption that affected many of our customers, our partners, and the people who build with Webflow every day.
While performance is now back to expected levels, I want to speak plainly about what happened, what we got wrong, and how we’re responding.
What happened
On July 28, Webflow experienced a prolonged performance issue that made it difficult for many people to log in, access the Designer, and use key parts of the platform. The root cause was a coordinated malicious attack that overwhelmed some of our backend systems. Our engineering team worked closely with our database provider to investigate what happened, stabilize the platform, and roll out fixes in real time.
If you’re looking for a more detailed breakdown of the investigation and fixes, our CTO Allan and our engineering team have a full technical writeup here.
We know this incident made it hard for teams to do their work, support their clients, and stay confident in our platform. We heard that clearly in the feedback from our customers and our community.
What we got wrong
This was not just a technical failure on our end. It tested how we communicate, how we support our partners, and how we hold the standard we set for ourselves.
- Communication fell short. Many of you learned about the incident through social media or by tracking down status updates on your own.
- Partners were left without support. Without clear communication from us, many of you didn't have answers for your clients during a stressful, high-pressure moment.
- We misjudged the disruption. The impact ran deeper than we anticipated for teams depending on Webflow to do their work.
We own these misses. Getting the platform back online was only part of the solution: the harder truth is that we made an already difficult situation more frustrating for the people trying to get work done. I'm sorry we put you in that position.
What we’re doing
Since the incident, we've rolled out backend improvements: enhanced monitoring for more reliable alerting on database latency, strengthened rate limiting and firewalls to block malicious traffic, and an upgrade of our critical database cluster to a higher-capacity single-socket CPU architecture. We're also making broader changes across the company.
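For readers who want a sense of what rate limiting means in practice, here is a simplified, generic sketch of a token-bucket limiter. This is purely illustrative and not our production code; the names and limits below are hypothetical, and the full details of our actual changes are in the technical writeup linked above.

```typescript
// Illustrative only: a generic token-bucket rate limiter, not Webflow's production code.
// Each client gets a bucket of tokens that refills over time; requests that find the
// bucket empty are rejected, which caps how fast any one source can hit the backend.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,       // max burst size (hypothetical value)
    private readonly refillPerSecond: number // sustained rate (hypothetical value)
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  tryConsume(): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false;  // request rejected: bucket is empty
  }
}

// One bucket per client IP (hypothetical limits: bursts of 20, 5 requests/second sustained).
const buckets = new Map<string, TokenBucket>();

function allowRequest(clientIp: string): boolean {
  let bucket = buckets.get(clientIp);
  if (!bucket) {
    bucket = new TokenBucket(20, 5);
    buckets.set(clientIp, bucket);
  }
  return bucket.tryConsume();
}

// Example: a flood of requests from a single source quickly starts getting rejected,
// while normal traffic from other sources is unaffected.
for (let i = 0; i < 25; i++) {
  console.log(`request ${i + 1}:`, allowRequest("203.0.113.7") ? "allowed" : "rejected");
}
```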
Here’s what’s in motion:
- We’re driving the action items from our incident Root Cause Analysis (RCA) at the highest priority, reflecting our commitment to availability.
- We’re allocating additional resources to accelerate operational improvements across our infrastructure and platform.
- We’re improving how we communicate, including faster updates and more proactive messaging for both customers and partners.
- We’re taking a close look at how we run incident response so teams can move and communicate faster when things go wrong.
- And for those who were affected, we’re working on a plan to acknowledge the disruption and follow through. More on that soon.
We’ve got more work ahead, and we’ll share those updates as they come together.
What comes next
The work outlined above is just the beginning. Our focus now is making Webflow more reliable, more resilient, and more transparent for every customer, every partner, and every team that depends on us. That’s what we’re working toward, and we’ll keep sharing progress along the way.
To everyone who reached out, whether through a support ticket, a forum thread, or a direct message: thank you. You helped us see more clearly where we need to be better, and we’re paying attention.