Last week we were fortunate to host Andrew Maxwell, Senior Engineering Manager at GoPro, and Adam Zimman, VP of Product at LaunchDarkly, at CircleCI for a chat with our CTO Rob Zuber. In their talk, Andrew shared how his team at GoPro was able to reduce much of the typical risk associated with a product launch when they launched GoPro Plus last September. Watch the video or read the transcript below to see how GoPro uses continuous integration and feature flags to ensure a successful product launch.

They cover:
- the planning strategies that ensured a surprise-free launch
- how GoPro’s use of feature flags ensures that their engineers never lose sleep
- recent improvements to their CI pipeline

Webinar transcript below:

Rob Zuber: All right. Good morning, everybody. Thank you for joining us. I’m Rob Zuber, the CTO of CircleCI. And, I have with me today Adam Zimman from LaunchDarkly, who runs product and platform there, and Andrew Maxwell, who runs the entire web team for GoPro. We’re going to talk today about a recent product launch that GoPro did and how they were able to use CircleCI, LaunchDarkly, and some of the practices that we help support in order to have a successful launch and to reduce the risk of that launch. So with that, I’m going to let Andrew take it away and give us sort of an overview, and then we’ll have some discussion about the launch.

Andrew Maxwell: Okay. It starts with a big product launch called GoPro Plus that happened in September of last year. That was a big initiative, because what we were basically building was a new pillar within GoPro dedicated to software. So what we had to do was launch hardware, software, and accessories, all at one time, across the board. My team focuses on the web applications, primarily, and on our universal header that tied into the login app and marketed our new GoPro Plus products. At the same time, we had mobile applications that went out, and desktop applications. On our e-com store we had the new cameras, the Hero5 Black and Session. And then we did a livestream video, all choreographed at the same time. For that launch, my team was actually done about two weeks early, and we used LaunchDarkly to push our code to production with the apps off by default, so we had everything pushed out, deployed, and the infrastructure running live without customers actually seeing it. And when we came to do the major release, we had a big war room across all the different teams, scheduled for this massive launch within GoPro. As each team did their deploys, they turned on the features they had from the e-com perspective, and when it came to my team, we had 12 different feature flags that we just turned on and made sure everything worked. Since we had done everything two weeks beforehand, we were very confident about our deploy and made sure everything was smooth to go.
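
(To make that concrete: the “deploy dark, flip the flag at launch” pattern looks roughly like the sketch below. This is a minimal TypeScript illustration using LaunchDarkly’s Node server SDK; the flag key, user object, and render helpers are hypothetical, not GoPro’s actual code.)

```typescript
// Minimal sketch: gate an unreleased feature behind a flag that defaults
// to off, so the code can sit in production weeks before launch.
// The flag key and render helpers below are hypothetical.
import * as LaunchDarkly from 'launchdarkly-node-server-sdk';

const client = LaunchDarkly.init(process.env.LD_SDK_KEY ?? '');

function renderNewMediaLibrary(): string {
  return '<main>GoPro Plus media library</main>';
}

function renderComingSoon(): string {
  return '<main>Coming soon</main>';
}

export async function renderMediaLibrary(userId: string): Promise<string> {
  await client.waitForInitialization();
  // The third argument is the fallback value: fail closed, so the
  // unannounced feature stays hidden if LaunchDarkly is unreachable.
  const enabled = await client.variation(
    'gopro-plus-media-library',
    { key: userId },
    false
  );
  return enabled ? renderNewMediaLibrary() : renderComingSoon();
}
```

Because the fallback is false, the feature fails closed, and on launch day the flag is flipped in the LaunchDarkly dashboard with no deploy required.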

RZ: That sounds great.

Adam Zimman: That sounds awesome.

RZ: So, one of the things that you mentioned there was being done a couple weeks early, and so, you had stuff in production but feature flagged off. What was the total length of the entire release cycle? And what did it look like as you worked through it? Were you able to actually put some elements out earlier into production?

AM: Yeah. So, we knew about the products and everything that was going to be launched from GoPro for a long time. But what we wanted to do was make it a bigger bang. At GoPro, you can kind of tell, we don’t do things lightly. We want to do big announcements, and we want to come out with great products. So, to build up to that, we launched our login application, which is single sign-on, replaced an old legacy setup, and set up brand new cookies from the GoPro perspective, in March of the previous year. That was using the basic tech stack that we knew we were going to use, and from there we iterated and improved it drastically from that initial March launch. Then, through there, we had smaller features that would go out for alpha testing and beta testing along the way. So, shortly after March, we actually had most of the applications done from a core-feature standpoint, but we kept iterating on and improving those core features that we knew we were going to launch with, tested them internally, opened up who was going to be using them, and then continuously improved them along the way.

RZ: Got it. And so, of the sort of eventual features, not just the login element, but the features that you were putting out, what was the access like? Like, when did you have some of those eventual features already in production? And who was using them? Was it just internal? Or did you have external beta users? How did you manage that process?

AM: It’s a little bit of both. We do a lot of internal testing. We dogfood our own product. So, we actually had a staggered release. We had the login application, as I mentioned, in March. A couple months later we released our account center, which tied a lot of our features to the billing and subscription portion. So, we were able to push out the account center application with what are known as global, non-GoPro-Plus features, initially. That way, people were testing the app, we got a big load of users onto it, and we knew that the infrastructure was there, so we could actually do testing of the GoPro Plus features live in production, while the main features were not there for general users.

RZ: Got it. Got it. So, let’s talk a little bit about CI and CD, or at least the CI element. How do you use CI in, I guess, the early stages, before you even put those things out into production, to reduce the overall risk of either bugs or, you know, misalignment between the things that you’re building and what the stakeholders have scoped out for what you’re planning to build?

AM: Yeah. So, my team within GoPro follows the same practices as the cloud platform, or platform architecture, within the company, and then my team has a little bit of an addendum to that. So, we have develop, staging, and production branches, and then we have feature branches. Where my team does it slightly differently is that when any of my developers work on code, they can push up their feature branch to Github. From there, it automatically runs CircleCI: it runs our tests, it builds our Docker image, and it actually deploys that feature branch to its own infrastructure, with a dedicated subdomain on our internal platform. So, that engineer can actually test out their own code in isolation from everything that’s in develop. And that happens up to a hundred times a day, consistently, across any of our apps, whether it’s login, account center, media library, or private links. Those are the core fundamental apps that you see for GoPro Plus. Once that code has been looked at, they can merge into develop following normal code review practices. And as soon as that gets built, we do the same thing: we run it through CircleCI and run our integration tests directly in Circle against our Docker image. So we run Docker within CircleCI.

RZ: Mm-hmm.

AM: We test against it using mocked data. That way we actually can say, “We are consistently testing our features against what the API contract should be.”

RZ: Right.
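
(A minimal sketch of what that kind of contract test can look like, assuming a Jest-style runner and the nock library for HTTP mocking; the endpoint and payload shape are invented for illustration, not GoPro’s actual API.)

```typescript
// Minimal sketch: exercise client code in CI against a mocked response
// that encodes the agreed API contract. Endpoint and types are hypothetical.
import nock from 'nock';
import fetch from 'node-fetch';

interface MediaItem {
  id: string;
  type: 'photo' | 'video';
  createdAt: string;
}

async function listMedia(baseUrl: string): Promise<MediaItem[]> {
  const res = await fetch(`${baseUrl}/v1/media`);
  if (!res.ok) throw new Error(`Unexpected status ${res.status}`);
  return (await res.json()) as MediaItem[];
}

test('media list matches the API contract', async () => {
  // The mock stands in for the real service, frozen to the contract,
  // so the test gives the same answer on every CI run.
  nock('https://api.example.test')
    .get('/v1/media')
    .reply(200, [
      { id: 'abc123', type: 'video', createdAt: '2016-09-19T00:00:00Z' },
    ]);

  const items = await listMedia('https://api.example.test');
  expect(items).toHaveLength(1);
  expect(items[0].type).toBe('video');
});
```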

AM: Then from there, we actually push up our Docker image, and that will automatically deploy into QA. From there, we follow a similar practice for staging; that doesn’t get merged or deployed as often. We do that about two to three times a week, and then we do production one to two times a week.

RZ: Got it. So, do you find that there’s a tension or a disconnect between what sound like really great CI and CD practices leading up to getting stuff into production, and these really long release cycles? Like, what drives the long release cycle at GoPro overall? Do other teams follow similar practices, and it’s just part of that build-up? Or where’s the disconnect?

AM: Yeah. So, ideally, from the software perspective, from the cloud, our mandate is two weeks: we want to deploy and get everything out within those two weeks. And anything that’s in production still follows that mantra.

RZ: Mm-hm.

AM: For our native applications, we follow a six-week release cycle. So, we really want to turn out the features as much as we can, get them out to our users, solve problems, solve bugs. But for any bigger launches that are tied to our hardware, we want to make sure that everything is stable. And our hardware is, like, the long tail: we know that we’re going to release at certain times of the year, and we know that we ideally want it before the holiday season to reach the most consumers. So, we want to tie our software for those features directly to that.

RZ: Right.

AM: But, as much as we can, we want to decouple any long-running plan from anything that we can deploy on a continual basis.

RZ: Right. And, you mentioned the hardware; I’m just assuming there’s embedded software that goes with that as well?

AM: Mm-hmm.

RZ: What is the lead time for when that needs to be ready? And are there specific integration points for you on the web team that you need to make sure are validated as you’re getting into release? What’s that connection like between the hardware and embedded teams building that piece and the stuff that you’re building?

AM: Yeah. So, between hardware and software, we have release trains across the board at GoPro. What that does is set up what features a team is going to work on and when, and then, from that high-level product roadmap or plan, we know when those integration points are going to be. Whether it’s from web to camera, for example: we have a camera-as-a-hub feature.

RZ: Mm-hmm.

AM: With all new GoPros, once you plug it into power and you’re connected to the wifi, it automatically uploads to the cloud.

RZ: Right.

AM: So, that’s where we do integration. We want to make sure that those features are working with us.

RZ: Mm-hmm.

AM: But at all points across the board, if it’s hardware-specific, they try to do as much testing as they can in isolation, because then no other team is tied to those features. It’s only when we have those touchpoints that we have to do those tests.

RZ: Got it. Cool. And going right back to the beginning, you mentioned that you were effectively done a couple of weeks early.

AM: Mm-hmm.

RZ: Which I’m sure everyone watching this right now is super jealous of. I don’t know anyone that says, “Oh, we were done early.”

AM: Mm-hmm.

RZ: So, I’m sure that everyone else would also like to know: what did you do during that time? Were you moving on to the next project? Or just helping out other teams? Like, how did your team make use of that time?

AM: Yeah. So, being done meant done with the main core set that we signed up to deliver for that big launch. What we did with that time was full integration testing. We did any kind of regression testing. We did some manual testing, trying to break it. That way we knew we had the leeway; we wanted to make sure that if, for whatever reason, there was a problem, we would have that padding. We found a couple of little bugs in there before launch, but we were still able to solve them, because we got done early.

RZ: Mm-hmm.

AM: With that, we had a 1.0 launch, which was dedicated to GoPro Plus, and then from there, we knew what was 1.1, what was 1.2…

RZ: Got it.

AM: The features that were going to come together we tied specifically to the 1.0 release, so that we knew what was going to be delivered. It was stable. It was very consistent.

RZ: Right.

AM: But we already had a plan for the next six months, eight months-

RZ: Right.

AM: … of what we were going to build out. So, we were already onto the next sprint, trying to solve those issues.

RZ: So, obviously most of these practices that you put in place, and the approach that you took to build up to this big launch, were about finding ways to use a lot of what we do in CD practices to reduce the risk of the overall launch while still having a big launch. So, tell me about launch day. Was there anything that came out of it that was a big surprise? Or were you really successful in reducing that risk and getting to a successful launch?

AM: Yeah. We had it well-planned. So, with that big release train that I mentioned earlier, all teams knew what was going to be delivered. We were able to say, “Okay. This feature’s red. Okay. Let’s get more hands on that feature.” Or, “Okay. Let’s keep an eye on that.” Or, if they go green, okay, move on to the next thing.

RZ: Mm-hmm.

AM: So, we had a big orchestration plan for how we were going to do the big release throughout that day. There were hiccups along the way, as there always will be, but each team was able to coordinate and plan out, “Okay, if that’s a problem, here’s who can help solve it so we can continue to move on.” We actually put that padding in up front, because we knew that when we delivered, something could go wrong. It probably would. How do we make sure we solve those issues?

RZ: Right.

AM: And each team had their own way of handling their issues, but for the general public, it was a completely smooth release. For us, as a whole, it was a very smooth release, considering how big of a launch it was.

RZ: Well, congratulations.

AM: Thank you.

RZ: It’s pretty exciting, and a great use of the stuff that we talk about all the time.

AM: Yeah.

RZ: I know Adam has some questions as well. So, I’m going to give him a chance to jump in.

AZ: Definitely. So, one of the things that I’d love to have you share with folks is how you think about the three aspects of feature control, or feature management. Specifically, how you think about the concept phase: when you start ideating on what you’re going to build next and planning it out, how do you use feature flags as a mechanism to initiate that work?

AM: Yeah. So, there are a couple, there are two kind of questions there, so let me try to… From a high level, what we do is we have a core team in each department, or each team, if you will. There’s an engineering manager, like myself. There are designers. There’s product. There’s program. Each of us is involved with QA, and we look at the big roadmap plan that we want to work on from a product standpoint and come together and say, “Okay. These are the bugs that we need to work on.”

AZ: Right.

AM: “These are the features we want to work on. These are the stories.” When we work on that, we write the stories, and we have a general template that we follow: these are the analytics that we want to capture, our KPIs; these are our feature flags. So, for us, when we’re building features, we look at feature flags as part of the story, part of the actual feature we’re building, not just as a safety mechanism to turn things on or off. It’s part of it. Something could go wrong. Or we may want to turn that off in the future.

AZ: Yeah.

AM: So, by default, all new features, or core fundamental items that we add, have feature flags. And in the ticket we either put “add one,” where any engineer can come up with a general name, or, for a specific flag that we want to watch for, we work that feature flag name into the ticket itself. And all the engineers on my teams know that when they see that, okay, we have a feature flag.

AZ: Right.

AM: That we can turn on or off.

AZ: Yeah. Very cool. And then, moving to the next phase, like your launch phase…

AM: Mm-hmm.

AZ: You mentioned a little bit about how you do kind of a progressive rollout.

AM: Yeah.

AZ: Or you gradually open up features to a broader and broader audience. Have you standardized that in some way, you know, using LaunchDarkly, or just from a process perspective, to be able to say, “We’re going to have this kind of pipeline-type rollout, where we know that phase one is just the developer, phase two is 10% of the population”? Can you describe a little bit of what that looks like for you?

AM: Yeah. So, for us, we try to do a lot of the testing internally. So, we’ll start with just my team, usually the engineers. Then we open it up to QA. Then we open it up to the software org and the cloud department, so we’ll get, like, the media team involved, or identity, depending on the features that we’re working on. And then, if it’s cross-team, where we work on something with our mobile or our native apps-

AZ: Yeah.

AM: … then we’ll also include them in our testing, so that it broadens the range. We don’t do scaled rollouts right now; it’s something that we are looking into. But primarily what we focus on is, “Is the feature on or off? Can these users use it?” By the time we actually get to a product, the product team will say, “Hey. This is good to go.” Or, if it’s an engineering feature, I say it’s good to go.

AZ: Right.

AM: Then, from the team, we will do testing in staging, and make sure it works. And then when we do the deployment to production, we actually will test and say, “Okay. This feature is now there. We have a list of the new features that were added. Okay. Here’s a feature flag test.”

AZ: Got it. And in that capacity, have you started using any type of hierarchy for your feature flags, where you actually have kind of a master-level feature flag and then other dependencies underneath it, so you can turn on one and it does it for, like, a bunch of different sections of your code?

AM: Yeah. So, we use either a very small-level flag, if it’s a really simple feature, or we do the higher-level feature flag. So, for example, our notifications: we can turn notifications for the web on or off completely-

AZ: Right.

AM: … across the board. We have the UI portion: basically, you get a popup notification, you get a notification drawer. We can turn that on or off per application, and there are three apps that can actually use that. We have the ability to turn them on or off across all three, so UI across all three apps. And we have back-end notifications. Those are going to be things like media sync. So, with camera as a hub and automatic upload, we will basically get a notification when new media is ready, we will update the UI for you in real time, and then you can see the media show up in your media library. We have the ability to turn those off separately, per app, as well as across the board. And then, if we wanted at any time to say, “Okay. Notifications are having an issue for whatever reason”…

AZ: Right.

AM: Or, “We’re going to test them,” we can just turn all of it off, and it’s good to go across all of our applications.
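(A minimal sketch of that layering, with hypothetical flag keys: one master flag acts as the kill switch, and the per-app UI and media-sync flags hang underneath it.)

```typescript
// Minimal sketch: a master notifications flag short-circuits the per-app
// UI and back-end (media sync) flags beneath it. All keys are hypothetical.
import * as LaunchDarkly from 'launchdarkly-node-server-sdk';

const client = LaunchDarkly.init(process.env.LD_SDK_KEY ?? '');

type App = 'login' | 'account-center' | 'media-library';

export async function notificationSettings(userId: string, app: App) {
  await client.waitForInitialization();
  const user = { key: userId };

  // Master switch: turning this one flag off silences everything below,
  // across all apps, without touching the child flags.
  const master = await client.variation('web-notifications', user, false);
  if (!master) return { ui: false, mediaSync: false };

  return {
    ui: await client.variation(`notifications-ui-${app}`, user, false),
    mediaSync: await client.variation(`notifications-media-sync-${app}`, user, false),
  };
}
```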

AZ: Very cool. And in that context, one of the other things you brought up is, you know, thinking about post-launch, and the control aspect of having things out there in production. How do you feel things have changed with the use of feature flags, being able to provide things like kill switches and eliminate risk, not only for the engineering team in the build process, but also on the operations side of things, after the release has taken place?

AM: Yeah. For me, I’ve been using feature flags for about ten years now, along with multivariate testing, and so for me, it’s basically been ingrained, like… you don’t know when something’s going to go wrong. Something will go wrong, whether it’s at two o’clock in the morning, and you’re called, and there’s an issue.

AZ: Right.

AM: I don’t want to have to debug it at two o’clock in the morning, or have my team deal with those issues. So, being able to turn off that feature, debug it in real time, look at the logs, and actually go back and see what’s going on, versus having a poor experience for the user for a long period of time.

AZ: Right. And how has that impacted, I know you mentioned you’ve been using feature flags for ten years, how has that impacted your perspective on the difference between a short-lived feature flag that you’d use just to launch a feature, versus the notion of using feature flagging as a control mechanism for longer-running features, having them there as kill switches, or using them almost like application configuration control?

AM: Yeah. I actually see them together. And the reason why I see them together is, one, I want to be able to test features on or off, similar to our GoPro launch, and be prepared for it. But I also want to build the features in a way that each does one thing really well and is very isolated. With that in mind, I can do the feature switch at that level, and that allows me, if there’s a problem, as I mentioned, to turn that one feature on or off. And so, for me, they basically behave the same.

AZ: Right.

AM: So, I can use it for a feature launch if I need to, or for turning something off, at the same time.

AZ: And, you know, along those lines, have you also started to look at this as an opportunity, potentially, for the application owner or business owner to be able to have some of that feature control?

AM: Mm-hmm.

AZ: And being able to manipulate who gets to see what and when?

AM: Yeah. So, we do talk within our core team about how, or how soon, we want business owners to be able to run some of these features. But it’s not a major point for us. A lot of the features that we can add are things that are just going to enhance the user’s ability, versus, like, a major product launch, at this time.

AZ: Absolutely. No. That sounds great. Well, I appreciate you answering some of my questions, but I also wanted to make sure that we left some time for folks that are joining us to be able to ask some questions.

RZ: Yeah. For sure. So, we’ll jump into questions in just one sec, but it would be great to get a summary of your key takeaways from this launch, and anything that you learned out of it that maybe you’re doing differently for the next one. I mean, we could all guess, even outside of GoPro, that there’ll be some new stuff this year in time for the holidays. So, is there anything that you learned from the last one that was kind of an improvement that you’re making this year in terms of how you go to launch?

AM: With the way that we actually built GoPro Plus last time, there was learning along the way, well before the actual launch. We had an internal launch, and then we had the big public launch. So, there were learnings that we had initially, and that’s where we gained most of the growth and new training. What we’ve done since then is incorporate feature switches into our native applications, trying to tie them a little bit more to GoPro Plus, and trying to bring that same level of control to other teams within the company. That’s our biggest thing. And then, the other thing that we’ve done is we’ve changed our continuous integration pipeline from deploying a Docker image to one EC2 instance to using ECS and deploying a little bit more regularly across the board.

RZ: Mm-hmm.

AM: And we’ve actually removed the whole staging branch, making sure that production and staging are one-to-one for us.
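
(As an illustration of that kind of pipeline step, here is a hedged sketch of a deploy script that swaps a freshly built image into an ECS service using the AWS SDK for JavaScript. The cluster, service, and registry names are hypothetical, and this is one common way to drive ECS rather than GoPro’s actual setup.)

```typescript
// Minimal sketch: after CI pushes a new Docker image, register a new task
// definition revision with the updated image tag and point the ECS service
// at it. ECS then performs a rolling update. All names are hypothetical.
import AWS from 'aws-sdk';

const ecs = new AWS.ECS({ region: 'us-west-2' });

export async function deploy(imageTag: string): Promise<void> {
  // Fetch the current task definition so we only swap the image tag.
  const current = await ecs
    .describeTaskDefinition({ taskDefinition: 'web-app' })
    .promise();

  const containerDefinitions = (current.taskDefinition?.containerDefinitions ?? []).map(
    (c) => ({ ...c, image: `registry.example.com/web-app:${imageTag}` })
  );

  const registered = await ecs
    .registerTaskDefinition({ family: 'web-app', containerDefinitions })
    .promise();

  // Rolling update: ECS drains old tasks and starts new ones.
  await ecs
    .updateService({
      cluster: 'production',
      service: 'web-app',
      taskDefinition: registered.taskDefinition?.taskDefinitionArn ?? '',
    })
    .promise();
}
```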

RZ: Right. Great. Those sound like good steps as well. Awesome. Well, yeah. Let’s take some questions, and then, we’ll see where we go from there. All right. I guess I will ask the questions. Then we can decide who’s going to answer them. Well, this one is going to be for you, I’m pretty sure. So, Marta asks, “How did you create the orchestration plan?”

AM: Yeah. So, the orchestration, I’m guessing that’s about the launch of GoPro Plus across the board. We actually have our program team, who are in charge of, basically, keeping me honest, keeping the product team honest, making sure that we’re staying on track. They’re actually the ones that coordinate that big release train with the core team. So, the core team signs up for the products and features they’re going to deliver at a given date and time, and what they can comfortably accomplish. And then, across the board, we have a program meeting once a week, asking, “Are we on track for these milestones across the board?”

RZ: Got it. Sounds good. Makes sense. So, Luis asks, “What is the way in which CircleCI integrates with LaunchDarkly? Or are they more complementary?” That’s a great question. Well, first of all, I don’t know if we said this at any point. CircleCI is a LaunchDarkly customer. So, we use LaunchDarkly as well to launch features ourselves.

AZ: As is LaunchDarkly a CircleCI customer.

RZ: Right. So, we know each other very well. As far as integration, I see them more as complementary. I think there are a lot of pieces in the pipeline of successfully delivering software at a rapid clip into production. And the things that I think about in that flow, I mean, obviously, it starts with your… You described a little bit your branch process and how you use Github, or whoever your provider is, to manage the code itself, but then triggering builds off of each of those commits, even if it’s to a branch, to do CI, and then, hopefully, deploying into an environment where someone can take a look at it and do some evaluation. And then as far as the flow into production, getting through CI into CD, so deploying into production, and then within your deployment having tools. So, you mentioned using ECS; you know, some kind of container orchestration might be the case, whatever your tool is to actually get stuff deployed. And at that point, you want, in my mind, a number of things. One, a really good understanding of what’s happening with your deployment, so that would be both your business metrics and your operational metrics. And then, we did one of these [webinars] earlier with some folks from Rollbar, so understanding if something is going horribly wrong in production is usually a good thing as well. And then being able to manage what’s visible to your end users. So, very similarly, you know, here at CircleCI we’ll launch or deploy code that we don’t expect people to use; either it’s not complete, or we’re still working on the design, or whatever, but we want to get it into our master branch. We’ll put that in production, and use LaunchDarkly to ensure that it’s visible only to us as internal users, so we can evaluate it, see how it’s performing, you know, understand the operational impacts, as Adam was asking about, and the actual visual functionality, have a look at it before we slowly roll it out. So, for me, CI and CD and what we do at CircleCI is about getting that stuff quickly into production. LaunchDarkly allows us to do that, because we can put things into production that aren’t yet complete, or that we’re not 100% certain are working the way that we expected, but no one’s going to use them until we adjust the feature flags.

AM: Mm-hmm. Yeah. For me, I follow a very similar pattern, exactly what you just said. We deploy features that may be made up of a collection of multiple features. So, we can actually deploy each one individually, make sure that that one’s done, but to the end user, we don’t actually display it until it’s completely ready. Those are things like video trimming on the web. We have that ability, but in order for us to deploy it to production, we wanted to make sure we deployed it, tested with the media service in production, and made sure that each piece of infrastructure was actually working together. And with that, we make sure, “Are we logging everything? Are we tracking everything that we need for production?” By the time we turn it on, it’s ready to go. Our KPIs are already there, and we feel comfortable that we have all of the metrics and everything that we need for that feature.
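
(In code, the “visible only to internal users” pattern usually amounts to passing user attributes along with the flag evaluation, so that a targeting rule configured in the LaunchDarkly dashboard, not shown here, can match on them. A minimal sketch with a hypothetical flag key and attributes:)

```typescript
// Minimal sketch: supply attributes a dashboard targeting rule can match,
// e.g. serve `true` only when the email belongs to an internal domain.
// Flag key, attribute names, and domain are hypothetical.
import * as LaunchDarkly from 'launchdarkly-node-server-sdk';

const client = LaunchDarkly.init(process.env.LD_SDK_KEY ?? '');

export async function showUnfinishedDashboard(
  id: string,
  email: string
): Promise<boolean> {
  await client.waitForInitialization();
  const user = {
    key: id,
    email,
    custom: { internal: email.endsWith('@example.com') },
  };
  // Fail closed: everyone outside the targeting rule sees nothing new.
  return client.variation('new-dashboard', user, false);
}
```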

RZ: That’s amazing.

AZ: I think one of the ways that I oftentimes segment out how these technologies come together is thinking about it from an internal perspective, on the engineering side and the building side. And this is where feature flags are a mechanism for your engineers, as individuals or as small teams, to make sure that they’re able to move forward with flexibility, eliminating risk from the other engineering projects taking place on your master branch. Right? It gives them the flexibility to be able to check in incomplete code, like we talked about, and still know that it’s not going to negatively impact their peers. And then on the production side, it gives you that ability to eliminate risk… I don’t think that I’ve ever met someone who is just like, “Oh, no. Yeah. Our staging environment is 100% on par with production at all times.” That’s just not a thing. Right? And I think the reality is that you want to have that ability to eliminate some of your risk from the production deployment, to make sure that you can have confidence that your users are not only going to see what you expect them to, but, you know, think about it from the infrastructure side. Right? Because this is another component of the feature flagging and feature management platform that LaunchDarkly offers: it’s not just about your front-end user interface. It’s also about being able to use feature flags for infrastructure changes, and thinking about it from the perspective of your database, or, you know, how you’re going to make sure that any updates that you’re doing, from a version-control perspective, on those microservice components are actually reacting the way that you would expect them to, from a monitoring perspective, from an observation perspective.

RZ: Cool. Thanks. So, Kyle asks, “What role at GoPro is in charge of managing and creating feature flags? Product, QA, or leadership?”

AM: Yeah. Primarily that falls on me from the engineering side, because when we’re meeting as a core team, I look over all of the tickets and say, “Okay. Does this meet the technical requirements to achieve a set goal?” But that’s just me making sure that it does get implemented, or, at a high level, that if there are technical features, or a collection of features like I’ve mentioned, those are broken down in a way that they can be turned on and off, or that I have control from the engineering side to test them. But we actually meet as a core team, and we do that once a week, to make sure we’re all on the same page. When the product owner has a feature where they want to say, “Hey. Let’s make sure that we turn this off,” that’s obvious; that’s when we include product at that time. Or the program side might say, “Ah, the API that you’re using may not be ready for two more weeks. Let’s leave that off for a little bit.” So, it’s more of a call-out. Feature flags are really part of our core team and how we actually plan things going out.

RZ: Cool. Excellent. All right. Anthony asks, “So, you had a pretty smooth launch, apparently from having a solid plan. What kind of planning strategies did you employ? Things hardly ever go according to plan in my experience. So, this is very interesting.”

AM: Yeah. So, I think the main thing, definitely, with us moving across the board, is that there were hiccups along the way, as every team has, as Anthony mentioned. But, one, we had the program side doing the weekly check-ins. We actually have all key stakeholders from product and engineering and design and program show up to that meeting, making sure, like, “Okay, this team that you may not rely on today, but will in, say, a month, are they yellow, red, or green in their status? Are they looking good?” One, that keeps us honest. We want the whole team, from the stakeholders’ side, to be transparent with each other. Then, from the team side, we have multiple ways that we keep track of that plan. From the product side, they look at their list of what products they want out by when, knowing that feature one is more important than feature two, et cetera. That’s their plan. Program does a check-in to make sure that, hey, those features are good. Then, from engineering’s side, I have to, one, make sure I meet those product requirements. The way I actually set up my team is that my teams are broken down into 60/20/20 sprints: 60% product asks, 20% technical tasks, and 20% bugs. That’s always going to be part of their sprint… I’ve seen a lot of companies where they’re like, “Oh. I’ll get to that tech debt later, after the launch.”

AZ: Right.

AM: And it never happens.

AZ: Mm-hmm.

AM: From that, I make sure that I’m still on track and that risk is removed, so that the features that I am building in the future are going to be easier, and there are fewer problems for my team. Or, if my team has issues, where builds take forever, deploying takes forever, logging is missing, those are things that I address directly with my team, because they actually keep them from developing as fast as possible.

AZ: Right.

AM: So, from that, I actually break down the sprints. I’m very note-driven, so I want to make sure that I constantly check my notes and my plan, and then do the weekly check-in with the program team.

RZ: So, I’d actually like to continue with that. I think this is a really interesting question, particularly as it pertains to what we all do in terms of, you know, continuous delivery and mitigating risk, being able to launch things partway, that sort of stuff. When I think about a plan now versus a plan ten years ago, I mean, a plan ten years ago was a Gantt chart. Right? That said, between now and twelve months from now, week by week, this is exactly what’s going to happen. And when those plans were off, I mean, you just went back and changed the Gantt chart. That was, basically, your job. And we don’t do that anymore. Right? So, to me, a plan is, “This is approximately where we want to go. These are the, you know, the absolute requirements versus the nice-to-haves.” And then we can prioritize and associate risk with those things and say, “Let’s start with the things that absolutely need to be there,” you know, your APIs for the hardware devices and stuff like that, versus, “This is a super cool feature,” but if we launch without web notifications, we still launched. Right? We made the target, and we can adapt as we go. And I’m just curious if that aligns with how you thought about the planning.

AM: Yeah. So, from the high level, that’s kind of why we break it down from the product side: these are the must-haves; these are the nice-to-haves. And they are all ranked by priority: zero, like, must have, cannot go without it.

AZ: Right.

AM: Priority one, should have it, and then it’s, like, nice to have, and then, it would be great to have. Right? That’s from the product side, and they drive that roadmap. And we want them to be able to drive what the users are expecting. They work with the design team and UX to make sure that, across the board, there is a good flow for the user. From my side, I try to look at it from, “Okay, here’s how long it’s going to take.” I come up with high-level estimations of how long a feature is going to take; I usually do it by person-weeks or by sprints. I don’t ever want to get down to planning per day, like with the old Gantt chart style. But I do want to look at it from, “Okay, here’s our end date. Here’s our date today. These are the features. How do they rank, to make sure that I can get them done?” while looking back at the release train for the other teams. And that’s what we bring back to the core team, saying, “Okay, these are our must-haves. This is what can actually get done. How do we play Tetris and bring in a couple of features that we may not actually need, and some nice-to-haves that add polish for the user and look better?”

AZ: Mm-hmm. Yeah. And I think one of the things that you also brought up, and that has kind of changed the way a lot of application companies or software companies approach modern application development, is accounting in that planning for the notion of risk mitigation.

AM: Mm-hmm.

AZ: Right? This idea that it’s okay for things to go sideways, as long as you have a plan for that. Right? And I think that’s one of the areas that we also try to emphasize with LaunchDarkly: it’s not this idea that everything is going to go perfectly every time. But what can you do, as part of your build process, to accommodate some of those unforeseen challenges and be prepared for them? Right? And that’s where I think having things like kill switches really comes into play, being able to say, “Okay, look. Who knows?” There may be some, you know, kind of slow memory leak in my code that I just couldn’t foresee, and it didn’t happen until after weeks, or until millions of users interacted with it. Those are things that you are never going to be able to 100% account for. Or, if you could, you would be slowing down your development cycle to the point where you’d miss your opportunity. Right?

AM: Mm-hmm.

RZ: Right.

AZ: So, it gives you the ability to use risk mitigation as part of your planning mechanism, to make sure that you can continue to move fast. It’s like that whole notion of move fast and break things, but be prepared for when they break. Don’t just say, “Oh, it’s broken.”

AM: Yeah. “That was a surprise.” Actually, coming back to how I plan as well: one thing I learned from one of my teachers in college is “times by three.” He used to do animation on South Park, and he used to explain that things are going to go wrong. Right? So one of the things he taught me is this “times by three”; that gives you some padding. And then under-promise and over-deliver.

RZ: Yeah.

AM: Right? So I sign up for fewer features, but I want to hit those features consistently-

AZ: To nail them.

AM: … every single time. And what we do to help mitigate that is… there are times when a lot of people will build the actual feature and launch it when it’s “done,” meaning code complete.

AZ: Yeah.

AM: But that’s actually not what it means to be done. Right? There’s feature complete, like, the code is actually done. Then there’s integration with the team, whether it’s with hardware, or software, the APIs, or even on your own team: does it work in the way that you expected after it’s built? Right? And the last step is any findings that come out of integration testing, whether it’s your own or working with other teams: fix all the bugs. Right?

AZ: Yes.

AM: So, that’s all part of getting it together. Otherwise, exactly like you said, you’re going to have… Stuff’s going to be broken, and you’ll ship it anyway and be like, “Okay. I’m done.”

AZ: Yeah. Right.

RZ: Cool. That was a great one. Yeah. So, obviously, GoPro is a very consumer-oriented brand.

AM: Mm-hmm.

RZ: What do you find is different about shipping an overall consumer package versus just delivering a software product?

AM: I can only talk about it a little bit; there are dedicated, smart people at our company who do the consumer portions, and we’ve been very successful because of their planning. Part of the difference, from what I know, is that they’re tied to the hardware, whether it’s features that they need for the camera hardware, the casing, or getting the software onto the camera itself before it can be shipped. Right? So, they have a lot to actually worry about just to get the camera together. Then there’s the packaging and any kind of marketing material that goes with that. And if there are any links or any references to the website-

AZ: Right.

AM: … how does that get done in time, so that we make sure those pages are ready in production? Whereas with software, we have the ability, especially on the cloud and on the web, to push stuff out really quickly. Right? So, we’re not tied to someone else’s manufacturing deadline. We’re not tied to getting stuff printed, put together, verified, testing the content, checking for spelling mistakes as much. If we have a typo, it sucks, but it’s not as hard, because we can push out a fix the next day or the next week or next sprint.

AZ: Yeah.

AM: Right? It all comes back to that core team. With the consumer products, they have a lot more planning, but it has to be done up front: a lot more mitigation and testing than the other departments have to do.

RZ: So, I had an opportunity to work on a consumer product once. It was an embedded device as well. And I would say, well, one, I’m super thankful to be back in the world of cloud, where I can ship constantly. But two, I think it helped me develop some empathy for other types of development.

AZ: Absolutely.

RZ: And we were having a conversation before this started about, like, self-driving cars, and just really complex applications like that, that have more components. Is there anything that’s done internally at GoPro to help, say, your team develop empathy for the other teams, or understand more about what they’re trying to achieve, so that you can get that coordination in a reasonable way?

AM: Yeah. So, one thing that we did recently, as some people might have heard, is we closed down our San Francisco office, which is where our cloud platform was, and we moved to San Mateo. That was a big initiative from the company to bring hardware and software together. We’re actually doing it, and proof of that is getting us in the same building, in the same headquarters, so that, one, we’re talking a lot more. The hardware and software have the same roadmaps, so that program status check-in covers both software and hardware. Before, they were two separate tracks. That way, we don’t see them as two separate platforms. Yes, they’re two pillars of the company in terms of financials and focus, but together we want to solve the user’s end goal. Right? Which is capturing the memories and sharing those with friends and family. It’s a tag-team effort. The camera captures the memories, but once they’re on the camera, how do I get them shared? We have Quik for mobile, to quickly share on the go, and with the cloud, you can access it from any desktop or tablet. Just go to the website; now all of your memories are there. So, it’s a big collaboration between the two of us.

RZ: Right.

AZ: Along those lines, you know, I wonder, have you started to see any kind of interest from the hardware side to take a look at the practices that you’re using, and look at ways in which they can have a little bit more control over their iteration?

AM: From where I stand, I’m not really part of that, but the program team is. Since hardware and software were two separate tracks, they followed different practices, because one was dedicated to the platform and one was dedicated to the consumer products. Right?

AZ: Yeah.

AM: The hardware. They have now brought that into the same kind of planning process. So, what we’ve done in terms of the release train is they’re making sure: are we yellow, green, or red-

AZ: Right.

AM: … from our vendors. Whether it’s, for us, “Is the infrastructure ready? You know, has QA verified the content?”

AZ: Yeah.

AM: For the consumer portion of our hardware, we look at is the lens ready-

AZ: Right.

AM: … or whatever those component parts are. And so we look at it from that point, so we’re all driving towards similar goals, and we can actually see from other teams how they’re accomplishing theirs. It may be a little bit different, but it has similar aspects.

RZ: Cool. All right. We’re going to take one more question, and then wrap it up. So, what role do non-developers play in the management of feature roll out? And how do you see this changing as features are controlled more centrally?

AM: Yeah. In terms of the internal, non-engineering side: the company is very engineering-driven, as we do both hardware and software. So, like, even our product people know about the actual development of the projects. We do want them to be a part of it. So, for our native apps, the product people are actually the ones that turn those features on or off. Right?

AZ: Mm-hmm.

AM: They want that final control. From the cloud side, a lot of it is more, “Hey, we can release if that’s what we need to do.”

AZ: Right.

AM: We want to be able to turn those features on. So, a lot of those feature switches are primarily handled by QA, the department inside us; it’s not actually ready until QA has approved it. So, engineering has done their tests. We do our integration. We think it’s very stable. We want the QA team involved, but we don’t want to throw it over the fence. We want to say, “Hey, help break this.” Right?

AZ: Right.

AM: We want you to bring your expertise and focus from a different perspective to help ensure quality. And then, when they feel it’s ready, they’ll turn those feature switches on. And then, for our engineers, whoever is on call, if they need to turn something off, they know that, okay, they can turn it off. It’s very stable, but at the same time, things turn out just fine.

AZ: Sounds fun.

RZ: Cool. Well, I want to thank you both, Andrew and Adam, for joining me here today. This was super interesting, hearing about your experiences getting something out the door. And, as we’ve noted a few times, it’s pretty different from a lot of the things we usually think about, you know, tying all of these pieces together into a big launch. But seeing how you were able to take the practices that we are so used to and apply them to that was pretty fascinating. So, again, thank you both for joining.

AM: Thank you.

AZ: Thank you.

RZ: And thank you everyone for joining us online and listening and for your questions. Take care.