State of the DeployBot
Creating an automated deployment service like DeployBot is a big responsibility. Our users depend on us during their most important deployments and in their day-to-day workflows. When things break, businesses and teams suffer. We don’t take this responsibility lightly, so we are constantly working on ways to improve the stability and performance of DeployBot.
Unfortunately, much of this work happens under the hood and is not directly visible to our customers. In this post, I’m going to summarize all the good things that happened to DeployBot’s internals over the last year.
As some of you know, DeployBot originally grew out of Beanstalk, so in the beginning the two shared a lot of infrastructure. Over the last year, we separated most of the shared servers and services, and now 99% of DeployBot runs on its own. That gives us more room to scale, better isolation in case of failures, and more resources for your deployments.
While we were at it, we performed important hardware and software upgrades on our deployment and database servers, so we now have much higher capacity in reserve, ready to handle those 150-server deployments some of our customers run! Regular users should also see improved deployment speeds in many cases.
We also developed internal tools and systems that allow us to scale much faster and more efficiently than before, so we can grow organically alongside DeployBot.
Web application performance and stability
Over the last year, we focused heavily on fixing existing issues that prevented some of our users from setting things up and starting to deploy easily. We spent a lot of time identifying common issues reported by our Customer Success team and making sure they don’t recur. We also paid attention to the application’s performance, making sure it stays snappy, and resolved many cases where it wasn’t. Of course, we will never be done with these kinds of improvements and fixes, so if you notice anything wrong, please let us know.
Evolving build tools
Build Tools have been a great success since their release. More than 36% of all our active accounts use builds as part of their deployment process, and that number keeps increasing as more people learn what can be achieved with them.
When we released Build Tools, we started learning a lot from the ways people wanted to use them, so over the last year we kept tweaking and improving how builds work. There were many bug fixes, and a lot of work went into eliminating possible failure scenarios so that your build scripts run in our build environment with as few changes as possible.
Hardware upgrades also allowed us to double our resource limits for both RAM and CPU cores per build. Your builds can now use up to 2 GB of RAM and 2 CPU cores, and we have more servers processing builds.
A lot has changed since we rolled out Build Tools and our Ubuntu 14.04 LTS container. To keep up with these changes, we recently released a new Ubuntu 16.04 LTS container. The updated container features Node.js 6.9.2 (with NVM), PHP 7, Ruby 2.4 (and other versions through RVM), and other updates. But there’s more: both of our container images are now publicly available on Docker Hub, allowing you to use them for testing or running your builds locally (if you have Docker installed). This should make debugging complicated builds much easier.
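For example, reproducing a build locally might look something like the sketch below. The image name `deploybot/ubuntu-16.04` is a placeholder for illustration only; check Docker Hub for the actual image names we publish.

```shell
# Pull the build container image from Docker Hub.
# NOTE: "deploybot/ubuntu-16.04" is a placeholder name; substitute
# the actual DeployBot image name listed on Docker Hub.
docker pull deploybot/ubuntu-16.04

# Start a shell inside the container with your project mounted at
# /build, so you can run your build commands in the same environment
# DeployBot uses.
docker run --rm -it \
  -v "$(pwd)":/build \
  -w /build \
  deploybot/ubuntu-16.04 \
  /bin/bash
```

From the shell inside the container you can run your build script step by step, which makes it much easier to track down environment-specific failures before pushing a change.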
Some work also went into improving our integrations with third-party services. Among other things, we improved the DigitalOcean integration and our webhooks support, and added five new AWS regions: Canada (Central), EU (Ireland), US East (Ohio), Asia Pacific (Seoul), and Asia Pacific (Mumbai).
We hope you like these updates! If you have any ideas about what else could be improved, please don’t hesitate to contact us at email@example.com.