Few things make us as sad as seeing a support ticket asking why a build is slow or hearing from our account managers that a customer is voicing concerns about slow builds. At CircleCI, we loathe slow builds. We consider it our mission to increase the total throughput of the software development process, and build speed is certainly a significant factor in overall throughput for many teams.
How we speed up builds
We have a few ways we make builds faster today, and we’re constantly developing new ones. Today we have:
Source caching - Rather than grabbing all the source code on each build, we keep a copy on our servers and update it with just the changes since the last build.
Dependency caching - Rather than downloading dependencies on every build, we keep them on disk for quick retrieval during each build.
Parallelism - By running tests across several independent instances, we can dramatically cut total build time. If you provide test metadata, we can auto-split tests based on historical timing. Otherwise, we can auto-split by files (an equal number per container), or you can manually configure your test splitting or other parallel tasks.
On-demand build fleet - Part of the magic of CircleCI is that we keep clean execution environments at the ready, so when you start a build there’s no more than a few seconds of wait before it runs. We recycle the machine each build runs on (or each part of a build, in the case of parallel builds) every time a build finishes, so you are guaranteed never to leak source code or data, or to have cross-contamination of settings, databases, environment variables, and so on.
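To make the timing-based auto-split above concrete, here is a minimal sketch of one way such splitting can work: greedily assign the slowest remaining test file to the container with the least accumulated runtime. This is purely illustrative; the function names and data are made up for the example and are not CircleCI’s internal implementation.

```python
# Illustrative sketch of timing-based test splitting (hypothetical,
# not CircleCI's actual algorithm): assign the slowest remaining test
# file to the container with the least accumulated time so far.

def split_by_timings(timings, num_containers):
    """timings: dict mapping test file -> historical runtime in seconds."""
    buckets = [{"files": [], "total": 0.0} for _ in range(num_containers)]
    # Longest-processing-time-first keeps the buckets well balanced.
    for name, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
        target = min(buckets, key=lambda b: b["total"])
        target["files"].append(name)
        target["total"] += seconds
    return buckets

if __name__ == "__main__":
    # Example historical timings (made-up numbers).
    timings = {
        "test_api.py": 120.0,
        "test_models.py": 95.0,
        "test_views.py": 60.0,
        "test_utils.py": 30.0,
    }
    for i, bucket in enumerate(split_by_timings(timings, 2)):
        print(i, bucket["files"], round(bucket["total"], 1))
```

With the example timings, the two containers end up with 150 and 155 seconds of work, rather than the 305-second serial total, which is the balance the timing-based split is after.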
So, why does my build run faster on my laptop?
We are sometimes compared to the speed of builds running locally, but the comparison is complicated. Think of it this way: a very common development box among our customers is a MacBook Pro with four powerful CPUs that typically costs around $3,000 USD and gets replaced roughly every two years. To provide the same horsepower for every build, we would need to provision substantially more compute power and thus charge a substantially higher price for on-demand fleet capacity.
Today, when paid customers hit out-of-memory errors, they can contact support to get more memory allocated to their builds. And while our parallelism features typically increase build speed, they are not the best fit for every workload; orgs with compilation-intensive builds, such as iOS and Android apps, may see less benefit. For now, when customers need more CPU power in their builds, we suggest they talk with us about CircleCI Enterprise, where teams run their own instance of CircleCI and can tune the build fleet to their needs.
Rest assured, we’re always working on ways to get you more compute power when you need it. From on-demand execution environments, to caching of dependencies and source code, to the industry’s easiest and most scalable parallelism setup, to graphs that help identify bottlenecks, plus a host of other features we’ve built over the years, we work every day at CircleCI to do our part in improving your throughput in turning code into software. Stay tuned over the rest of the year as we continue to roll out new ways to tune your builds.
We always love to hear how you could get the most out of your CI/CD systems, so be sure to post your thoughts in our community forum.