AI Development | Apr 9, 2026 | 7 min read

7 AI productivity lessons from the CTO of Superhuman

Jacob Schmitt

Senior Technical Content Marketing Manager

Most companies have built AI into their product by now, and many consider it the central feature of what they’re building. But plenty of those same companies are still figuring out how to get their own engineering teams to actually use AI tools day to day.

When Loïc Houssier joined Superhuman as CTO in early 2025, his team was in that exact spot. The company had been shipping AI email features for years, but internal adoption of AI dev tools was still early. His mandate was to increase velocity, so that’s where he started.

On the Confident Commit podcast, Loïc walked through what that looked like in practice. Here’s what worked, and how you can apply those same lessons to your own AI adoption efforts.

1. Kill the red tape first

The fastest way to slow down AI adoption is to make people ask permission to try things. If engineers have to submit a ticket or justify a subscription before they can experiment, most of them just won’t bother.

Loïc’s first move at Superhuman was removing that friction entirely. Engineers could grab whatever tool licenses they wanted with no approvals or justification needed. “I don’t care if you get one, two, three, ten subscriptions. Take them all,” he told the team.

Team takeaway: If people are overusing AI tools, that’s a problem you want to have. Remove approval gates for subscriptions, let engineers self-serve, and consolidate later. You can always rein it in. You can’t recover the months lost to request forms while your competitors shipped.

2. Build an AI guild, not a policy doc

Once your team has had time to experiment, the next step is identifying the people who are already getting results and giving them a way to share what they’ve learned.

At Superhuman, that took the shape of an AI guild Loïc assembled in Q2: not a governance committee, but a group of engineers who were already gravitating toward these tools and finding real use cases. They share what’s working, flag what’s half-baked, and reassess every month.

Team takeaway: The format matters less than the cadence. Whatever structure you choose, build in a monthly checkpoint. A tool that fell short four weeks ago might be a different story after an update, and locking in decisions quarterly is too slow for how fast this space moves.

3. Win over your most respected skeptic

Most teams default to letting the excited people evangelize, but enthusiasm doesn’t drive adoption the way credibility does. The person best positioned to move a team is the one everyone already respects.

Loïc didn’t lean on Superhuman’s most enthusiastic adopters to build momentum. He recruited his most senior engineer, someone the whole org looked up to and someone who wasn’t buying the hype. That engineer dug into the tools, changed his mind, and started talking about where AI was genuinely useful. The effect on the rest of the team was immediate. As Loïc put it: “If Mike is doing it, we’d better be on it.”

Team takeaway: Identify the person your team already trusts and give them real time with the tools. Let them form their own opinion. If they come around, the rest of the org will follow.

4. Give people real time to retool

AI tools change fast enough that even engineers who adopted early can fall behind. If your team doesn’t have dedicated time to revisit their setup, most people will stick with whatever they configured months ago.

Superhuman handles this with a quarterly “quality week.” There’s no roadmap and no feature pressure. The whole team spends the week knocking out bugs, with rewards for most bugs fixed, biggest user impact, that kind of thing. But the real payoff is the space it creates for retooling. The week starts with the chief architect walking through his Claude Code configuration as a baseline. Engineers who tinker on weekends get a stage to share their findings. Engineers who’d rather just be handed something that works get a golden path.

Team takeaway: Build a recurring block of time, quarterly at minimum, where the team revisits their AI tooling setup without delivery pressure. Establish a shared baseline configuration so nobody starts from scratch, and move that baseline forward every cycle.

5. Don’t start with the code

If you have engineers who are hesitant to let AI anywhere near their codebase, don’t push them straight into code generation. Start with the tedious stuff they already wish they didn’t have to do.

Loïc redirected reluctant engineers toward lower-stakes work: summarizing docs, pulling notes out of interview recordings, building context on unfamiliar parts of the codebase. He himself uses Claude Code primarily as a general-purpose assistant connected to different MCP servers, not just for writing code.

Team takeaway: Point hesitant engineers toward non-code tasks first: doc summaries, meeting notes, codebase exploration. Once someone sees AI save them 30 minutes on work they didn’t want to do anyway, the higher-stakes use cases start to feel a lot less risky.
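To make that starting point concrete, here’s a minimal sketch of the kind of doc-summarization helper a hesitant engineer might try first. It assumes the Anthropic Python SDK and a placeholder model name and file path; it’s an illustration of the pattern, not Superhuman’s setup, which runs through Claude Code and MCP servers rather than one-off scripts.

```python
# Minimal sketch: summarize an internal doc so a teammate can skim it.
# Assumes the Anthropic Python SDK (`pip install anthropic`) and an
# ANTHROPIC_API_KEY in the environment. Model name and file path are
# placeholders, not anything specific to Superhuman.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def summarize_doc(path: str, max_bullets: int = 5) -> str:
    """Return a short bullet summary of a local text or markdown file."""
    doc = Path(path).read_text()
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever model your team standardizes on
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                f"Summarize the following document in at most {max_bullets} "
                f"bullet points for an engineer who has never read it:\n\n{doc}"
            ),
        }],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(summarize_doc("docs/onboarding.md"))  # hypothetical path
```

The point isn’t the script itself; it’s that the first win should be on work nobody is emotionally invested in, so the tool earns trust before it touches production code.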

6. Run different SDLCs for different risk levels

Most teams default to one development speed for the whole product, but not every surface needs the same quality bar. Some parts of your product are core to the brand and need to ship polished. Others are areas where users actively want you to move faster, even if it means tolerating some roughness.

Superhuman’s brand runs on quality, and users pay a premium for it. Loïc compared their shipping bar to the Apple mindset of only releasing things when they’re perfect. But customers are also asking for MCP support and newer models, and they’re willing to accept some trade-offs to get there sooner. So Loïc is segmenting the delivery process: a faster, more experimental track for surfaces where users have appetite for iteration, and the high-polish process for the core experience.

Team takeaway: Audit your product surfaces and assign each one a risk tolerance. Run a faster, more experimental track for areas where users want speed, and preserve the high-polish process for your core experience. One shipping speed for the whole product is almost always the wrong default.

7. Get comfortable with shared quality ownership

When your product depends on user prompts (defining an email filter in natural language, for example), the output quality isn’t entirely in your hands anymore. A vague prompt gets a vague result, and that’s a fundamentally different dynamic than traditional software, where the engineering team owns the whole experience.

Superhuman leans on evals to catch the worst failure modes. Hallucinated email summaries are a real problem when someone’s heart rate spikes over fabricated bad news! But the engineering team also accepts this as a new reality. AI output isn’t always perfect. The job is to narrow the gap between what users expect and what the system actually delivers, model release by model release.

Team takeaway: Invest in evals to catch AI failure modes and build feedback loops that tighten output quality over time. But also accept that prompt-driven features shift some of the quality equation to the user, and design your experience accordingly.
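To show what the lightest version of such an eval might look like (this is a sketch, not Superhuman’s actual harness), here’s a crude check that flags email summaries containing numbers or names that never appear in the source message. The `generate_summary` callable and the regex heuristics are assumptions for illustration.

```python
# Minimal eval sketch (illustrative, not Superhuman's harness): flag summaries
# that mention "facts" (numbers, capitalized names) absent from the source email.
# `generate_summary` stands in for whatever model call your product makes.
import re


def extract_facts(text: str) -> set[str]:
    """Pull out the tokens most likely to be hallucinated: numbers and proper nouns.
    Crude heuristic; a production eval would use an LLM judge or structured checks."""
    numbers = set(re.findall(r"\b\d[\d,.]*\b", text))
    proper_nouns = set(re.findall(r"\b[A-Z][a-z]{2,}\b", text))
    return numbers | proper_nouns


def unsupported_facts(email_body: str, summary: str) -> set[str]:
    """Return facts in the summary that never appear in the source email."""
    return extract_facts(summary) - extract_facts(email_body)


def run_eval(cases: list[dict], generate_summary) -> float:
    """Score a batch of {'email': ...} cases; returns the pass rate."""
    passed = 0
    for case in cases:
        summary = generate_summary(case["email"])
        extras = unsupported_facts(case["email"], summary)
        if not extras:
            passed += 1
        else:
            print(f"Possible hallucination, unsupported facts: {sorted(extras)}")
    return passed / len(cases)
```

A real suite would swap the regex heuristics for an LLM judge or reference answers, and wire the pass rate into CI so a prompt or model change that regresses it fails the build instead of reaching users.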

The path to an AI-native engineering culture

Scaling AI productivity starts with reducing the cognitive load of adoption. By removing procurement friction, winning over respected skeptics, and creating a golden path for tooling, you allow your team to focus on what actually matters: shipping high-quality software. The transition to an AI-augmented workflow is less of a technical hurdle and more of a cultural shift toward continuous experimentation and shared quality ownership.

Hungry for more insights on engineering leadership and velocity? Subscribe to the Confident Commit Podcast and join Rob Zuber as he talks with industry leaders about the future of software delivery. And if you’re ready to speed up your own delivery cycles while maintaining that high-polish bar, see how CircleCI can help you automate your AI evals and get started building for free today.

Try CircleCI