It’s an exciting time to be a frontend developer. Facebook’s React turned our ideas about rendering UIs on their heads. Om, in particular, has opened up a new way of thinking about how UIs work. The past couple of years have been a rush of new ideas and growth.

But right now, things are getting really good. This year saw the release of Facebook’s GraphQL and Netflix’s Falcor, and hot on their heels comes a project that borrows the best ideas from each of them: Om Next. Om Next is the successor to the current version of Om (known these days as “Om Now”). It keeps the best things about Om, throws out what didn’t work well, and replaces it with a much better approach.

Om Next is currently in alpha, but once it’s released, we plan to use it at CircleCI. I’d like to explain why that’s so exciting for us. But first, we need to look at why we went down this path at all.

Why React?

Since the dawn of time (okay, since the dawn of JavaScript), web developers have struggled with a particular problem: when new data arrives, how do you update the UI? That is, if you’re displaying a list,

  • Artichokes

  • Cabbage

  • Eggplant

and two new items are added—”Broccoli” between the first and the second, and “Dill” between the second and third—how do you update the DOM to reflect that change? We have two options, and each of them is troublesome:

  1. We could insert new list items into the existing DOM, but finding the right place to insert them is error-prone, and each insert will cause the browser to repaint the page, which is slow.

  2. We could throw out the entire list and rebuild it in one go, but re-rendering large amounts of DOM is also slow.

To make client-side web development possible, we have to solve—or at least mitigate—one of these problems. Until recently, most JS frameworks concentrated on the first problem, either by making the DOM easier to navigate, or by tying the elements to controller objects which take responsibility for keeping them up to date. These approaches have certainly mitigated Problem #1, but they haven’t solved it. Often, as in Backbone, they spread out the problem over lots of little components. Each has only a small chunk of DOM to wrestle with, but it still has to wrestle. And we still have lots of little DOM updates repainting the page.

React solves the second problem. Using React, we pretend that re-rendering the entire page is fast and easy. React, for its part, lets us pretend that we’re re-rendering the page, when in fact we’re “rendering” a tree of JavaScript objects, called the “Virtual DOM”, a complete specification of what we’d like the page to look like. Then React compares the new Virtual DOM to the previous one to figure out exactly what has to change in the real DOM, and changes only those bits, all at once. The result is fast and easy.

(If you look closely, there’s a trick in there. React only looks like it’s solving the second problem, but in the end it really solves the first: it asks us to pretend to re-render the entire list, so that it can find the right places to insert the new items automatically.)

It’s hard to beat the simplicity of a system like this. In many cases, the view code is simply a function which takes data and returns DOM (or at least, Virtual DOM). It’s stateless and declarative. Until you need a bit of state, that is, which React can also handle. Today, I can’t imagine using anything else.

But at CircleCI we didn’t just run with React, we also switched our codebase to ClojureScript, and wrote our frontend in Om. ClojureScript was an easy sell: our backend was already written in Clojure, and there were already several React wrappers for ClojureScript. But why Om in particular?

Why Om?

Om was, if not the first ClojureScript React library, one of the first. But by the time we began moving to React, in mid-2013, there were already several, including Reagent and Quiescent. What did Om offer us that the others didn’t?

While React is concerned with how data flows into each component on the page, Om is also concerned with how your data is stored. Om is opinionated about the application state in a way that React isn’t. The major selling points for Reagent and Quiescent are that they’re more flexible than Om. But we liked Om’s opinions, and we liked what we got in exchange for agreeing with it.

  1. Application state as a single, immutable data structure. This is both the cost of entry and (depending on your opinions) a benefit itself. The entire application state is stored as a single data structure, in a single atom. Changing the state of the application always involves swap!ing the atom. That means that the application state always changes (atomically) from one consistent state to another.

Om’s creator, David Nolen, likes to show off how easy this makes it to implement undo: just remember a list of old states, and you can reset! the state atom to any of them at any time. We liked it for another reason, though: we wanted to serialize the application state so users could send our support team a snapshot of their CircleCI interface. We’d pop in the data they sent, the page would deserialize it and reset! it into the state atom, and—presto-change-o—we’d see exactly what the customer saw.

We do another trick, too: we keep part of the application state tree in the browser’s Local Storage. This is an easy way to keep track of “sticky” things after you close the app, like which repos you’ve collapsed or expanded in the sidebar, or how you prefer to sort your branches. We watch the path [:settings :browser-settings] and sync those values to a key in Local Storage. Then, on page load, we pull the data out of Local Storage and swap! it back into the application state. If we want a value to persist, we just store it in that part of the tree, and it becomes “sticky” automatically.
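A minimal sketch of the idea (not our actual code; the names are illustrative, and it assumes (:require [cljs.reader :as reader])): watch the atom, write the sticky subtree to Local Storage whenever it changes, and read it back in on page load.

(def settings-path [:settings :browser-settings])
(def storage-key "browser-settings")

(defn persist-browser-settings! [app-state]
  (add-watch app-state ::browser-settings
    (fn [_ _ old new]
      (when (not= (get-in old settings-path) (get-in new settings-path))
        ;; serialize just the sticky subtree
        (.setItem js/localStorage storage-key
                  (pr-str (get-in new settings-path)))))))

(defn restore-browser-settings! [app-state]
  (when-let [saved (.getItem js/localStorage storage-key)]
    (swap! app-state assoc-in settings-path (reader/read-string saved))))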

[Screenshot: Sticky settings stored in the app state tree.]

[Screenshot: Sticky settings automatically synched to Local Storage.]

  2. Cursors. Om needed a way to navigate that giant data structure, and it landed on cursors. Suppose you’ve got that grocery list from above, reflected in your application state.
(def app-state
  (atom {:grocery-list [{:name "Artichokes"}
                        {:name "Cabbage"}
                        {:name "Eggplant"}]}))

Suppose the user clicks an Edit button and changes “Artichokes” to “Avocadoes”. Somehow, you need to swap! the single atom that holds the entire page’s state, updating the correct element to the new value. Something like:

(swap! app-state assoc-in [:grocery-list 0 :name] "Avocadoes")

Except, the Edit button is drawn as part of the list item component. That list item shouldn’t know that it represents item 0 of its containing list, or that the list is stored at :grocery-list. It also shouldn’t know that the app state is stored in a var called app-state.

Here’s the fix: instead of passing your component regular data ({:name "Artichokes"}), Om has you pass a cursor. A cursor contains the three pieces of information you need: the data itself ({:name "Artichokes"}), the path to that data ([:grocery-list 0]), and the atom it all lives in (app-state). It magically acts like it’s just the data, so (:name item-cursor) is "Artichokes", but Om’s om/update! function knows how to get the other two parts. Now, all you do is:

(om/update! item-cursor :name "Avocadoes")

Om hides the bits your component doesn’t need to care about.

Stretching Om’s Seams

Om (that is, Om “Now”) was a great start, but it hasn’t quite held up to large-scale ClojureScript applications like ours. (And how could it: it was written before they existed.) At CircleCI, we’ve been discovering the places where Om’s model breaks down.

The Conundrum of Cursors: Most data is not a tree.

Om’s cursor-oriented architecture requires you to structure your application state as a tree, specifically a tree that matches the structure of your UI. Your root UI component, which contains the entire app, is given a root cursor to the entire state. Then it passes subtrees of the state (cursors) to its subcomponents. Above, the root component would pass (:grocery-list root-cursor) to some grocery-list component, which would pass each element of that list to some grocery-item component. The structure of the state is the structure of the UI. For simple apps, that works great.
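As a rough sketch of that shape (assuming (:require [om.core :as om] [om.dom :as dom]); the component names are illustrative), the component tree mirrors the state tree:

(defn grocery-item [item-cursor owner]
  (reify
    om/IRender
    (render [_]
      (dom/li nil (:name item-cursor)))))

(defn grocery-list [list-cursor owner]
  (reify
    om/IRender
    (render [_]
      (apply dom/ul nil
        (om/build-all grocery-item list-cursor)))))

(defn root-component [app-cursor owner]
  (reify
    om/IRender
    (render [_]
      (om/build grocery-list (:grocery-list app-cursor)))))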

But consider our [:settings :browser-settings] trick from before. What part of the UI renders all the “sticky” settings? None: any part of the UI may need to store a setting or two in that branch to make it “sticky”. Remember, the branch picker needs to store whether each repo’s branch list is collapsed or expanded. If the branch picker was only passed the list in [:dashboard :branch-list], how would it read and write to [:settings :browser-settings]? We hit this problem repeatedly from a lot of angles. As it turns out, UI elements sometimes have cross-cutting concerns. Their data doesn’t always map to a tree.

How did we solve this problem? Most of our components take the entire app state as their data. Parent components don’t pass their children subcursors with just the bits they care about; they pass them the whole enchilada. Then we have a slew of paths defined in vars, which we use to extract the data we want, as in the sketch below. It’s not ideal. But it’s what we’ve had to do.
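A simplified sketch of that pattern, with hypothetical paths and component names:

;; Paths into the app state, defined once in vars.
(def browser-settings-path [:settings :browser-settings])
(def branch-list-path [:dashboard :branch-list])

;; A component that takes the whole app state and digs out what it needs.
(defn branch-picker [app owner]
  (reify
    om/IRender
    (render [_]
      (let [branches (get-in app branch-list-path)
            settings (get-in app browser-settings-path)]
        ;; render the picker from `branches`, using `settings` to decide
        ;; which repos are collapsed or expanded
        (dom/div nil (str (count branches) " branches"))))))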

The Management of Mutations: Components don’t just change their own data.

When it comes time to change data, Om once again solves the simple version of the problem well, but quickly breaks down. As we saw above, if the grocery item “Artichokes” wants to change its name, that’s easy: it can use om/update! on its cursor. But what if it wants to delete itself? It can’t. That wouldn’t be changing the item, that would be changing the list. The grocery list component could delete the item by updating its cursor to a version with “Artichokes” removed:

(om/update! list-cursor 
  (into [] (remove #(= "Artichokes" (:name %))) list-cursor))

But in the UI, the “Delete” button should really be part of the grocery item component. Just as our data doesn’t map perfectly to our UI, neither do our mutation operations.

The usual solution to this problem in Om is core.async channels. The grocery list component would set up a channel and pass it to each list item. To delete itself, an item component would put a message on that channel. Meanwhile, the list component would run a go block to listen for the message, and when it got it, it would om/update! its cursor.
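Here’s a sketch of that pattern (assuming (:require [om.core :as om] [om.dom :as dom] [cljs.core.async :refer [chan put! <!]]) and the go-loop macro from cljs.core.async.macros):

(defn grocery-item [item-cursor owner {:keys [delete-ch]}]
  (reify
    om/IRender
    (render [_]
      (dom/li nil
        (:name item-cursor)
        ;; put the plain item data on the channel when Delete is clicked
        (dom/button #js {:onClick #(put! delete-ch @item-cursor)} "Delete")))))

(defn grocery-list [list-cursor owner]
  (reify
    om/IInitState
    (init-state [_]
      {:delete-ch (chan)})
    om/IWillMount
    (will-mount [_]
      (let [delete-ch (om/get-state owner :delete-ch)]
        ;; listen for deletions and update the list cursor
        (go-loop []
          (let [item (<! delete-ch)]
            (om/transact! list-cursor
              (fn [items] (into [] (remove #(= item %)) items)))
            (recur)))))
    om/IRenderState
    (render-state [_ {:keys [delete-ch]}]
      (apply dom/ul nil
        (om/build-all grocery-item list-cursor
                      {:opts {:delete-ch delete-ch}})))))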

If that sounds like a complex solution to a common problem, you’re right: it is. But that’s what people do. Even Om’s TodoMVC example resorts to this.

In the CircleCI app, we do the same thing, but we do it on a much larger scale. We have a pair of multimethods, frontend.controllers.controls/control-event and frontend.controllers.controls/post-control-event!, which handle messages like this for the entire app. (The first one transforms the state tree as necessary, and the second one performs any side effects.) There’s a lot of architecture here that we built ourselves. We put in a good deal of effort to avoid Om’s own approach (cursors), because they don’t fit our needs at this scale.
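In simplified form (this is a sketch, not our actual code, and api/delete-item! is a hypothetical helper), the pattern looks like this:

;; The pure multimethod returns the next state tree; the `!` version runs
;; any side effects.
(defmulti control-event (fn [message args state] message))
(defmulti post-control-event! (fn [message args previous-state current-state] message))

(defmethod control-event :grocery-item-deleted
  [_ {:keys [item-name]} state]
  (update state :grocery-list
          (fn [items]
            (into [] (remove #(= item-name (:name %))) items))))

(defmethod post-control-event! :grocery-item-deleted
  [_ {:keys [item-name]} previous-state current-state]
  ;; e.g. tell the server the item is gone
  (api/delete-item! item-name))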

The Delivery of Data: How does the data get into the app state?

Om works really well when all of the data lives on the client. Once it’s backed by a server, things get trickier. For small apps, like a grocery list, you may be able to load all of the data you’ll need when the page loads. But in an app like CircleCI, we can’t load everything you might ever want to see upfront. We load things on demand. When you go to your dashboard, we load the latest builds. When you go to a build page, we fetch that build’s details. As the components on the screen change, we discover we need different information.

Today, our hook for this is the navigation action. Our navigation system uses routing from Secretary, and layers on top of that a multimethod dispatch system very similar to the control-event system we saw above, called navigation-event. When the user navigates to a build page, for instance, we hit post-navigated-to! :build, which fires off any API calls we need to fetch the build’s details. When those API calls return, they swap! the new data into the state tree.
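Roughly (the real signatures differ, and api/fetch-build! is a hypothetical helper), a navigation handler looks like this:

(defmethod post-navigated-to! :build
  [_ {:keys [project build-num]} app-state]
  ;; fire the API call; on success, swap! the response into the state tree
  (api/fetch-build! project build-num
    (fn [build]
      (swap! app-state assoc-in [:current-build] build))))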

Meanwhile, our components are trying to render a part of the state tree that’s still empty. That’s fine: while it’s empty, they show a loading spinner. When the API system swap!s in the data, Om rerenders the components on the page, displaying the build.

It’s a good system, but it has one major flaw: it’s the build page component which knows what data it needs to display, but it’s the navigation system which knows what data to fetch. Those are miles away from each other in the codebase. If we want the build page to display something new, we have to add it to the UI on the build page and add an API call to the navigation system. If we remove something from the build page, we could easily forget to remove it from the navigation system, and make unnecessary API calls every time a user views a build.

Luckily for us, the future is bright.

Why Om Next?

Om Next is the upcoming revision to Om. It’s really more of a reboot. David Nolen and the Om team have taken the principles behind Om, applied the experience of the last few years, and built something new. It’s currently in alpha, and not production ready, but it should be ready in a couple of months. When it is, we plan to migrate to it. Why bother? I’m glad you asked.

The tree is really a graph.

In Om Next, each component gets to declare exactly what data it needs. It declares a query, using a syntax similar to Datomic’s pull query syntax. Unlike a simple path through a tree, a query like this can navigate a graph. (In fact, Om Next’s queries are analogous to, and partly inspired by, Facebook’s GraphQL.)
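Here’s a minimal sketch of a component declaring its query (assuming (:require [om.next :as om :refer-macros [defui]] [om.dom :as dom]); the component and field names are illustrative):

(defui BuildSummary
  static om/IQuery
  (query [this]
    [:id :repo-name :start-at])
  Object
  (render [this]
    (let [{:keys [repo-name start-at]} (om/props this)]
      (dom/li nil (str repo-name " started at " start-at)))))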

For instance, perhaps some component needs to display the start times of the builds previous to the builds that were recently initiated by the current user. You’d never store that information in a tree under the path [:current-user :initiated-builds 0 :previous :start-at]. You wouldn’t nest your actual build records inside other builds’ :previous references like that:

{:current-user
 {:initiated-builds 
  [{:id 146
    :repo-name "circleci/frontend"
    :start-at #inst "2015-12-17T17:13:59.167-00:00"
    :previous {:id 144
               :repo-name "circleci/frontend"
               :start-at #inst "2015-12-17T17:05:58.144-00:00"
               :previous {:id 141
                          :repo-name "circleci/frontend"
                          :start-at #inst "2015-12-17T17:05:13.512-00:00"
                          :previous ; and so on...
                          }}}
   {:id 145
    :repo-name "circleci/docker"
    :start-at #inst "2015-12-17T17:09:25.961-00:00"
    :previous {:id 143
               :repo-name "circleci/docker"
               :start-at #inst "2015-12-17T17:05:36.797-00:00"
               :previous {:id 138
                          :repo-name "circleci/docker"
                          :start-at #inst "2015-12-17T17:04:51.124-00:00"
                          :previous ; and so on...
                          }}}]}}

That would be silly. You’d store it in some list of builds; say, [:builds 145 :start-at]. Your data would look something like this:

{:current-user {:initiated-builds [[:build/by-id 146]
                                   [:build/by-id 145]]}
 :build/by-id {146 {:id 146
                    :repo-name "circleci/frontend"
                    :start-at #inst "2015-12-17T17:13:59.167-00:00"
                    :previous [:build/by-id 144]}
               145 {:id 145
                    :repo-name "circleci/docker"
                    :start-at #inst "2015-12-17T17:09:25.961-00:00"
                    :previous [:build/by-id 143]}
               144 {:id 144
                    :repo-name "circleci/frontend"
                    :start-at #inst "2015-12-17T17:05:58.144-00:00"
                    :previous [:build/by-id 141]}
               143 {:id 143
                    :repo-name "circleci/docker"
                    :start-at #inst "2015-12-17T17:05:36.797-00:00"
                    :previous [:build/by-id 138]}
               ;; and so on...
               }}

You can’t navigate that with a cursor. Once you narrow in on :current-user and its :initiated-builds, you can’t get to build #146: it’s in a different branch of the tree. And once you narrow in on build #146, you can’t back out to find its previous build, #144. It turns out your data isn’t really a tree, it’s a graph.

Queries let you navigate the graph of your data in all sorts of directions. The way Om Next does this is pretty clever: it takes something like the structure above and denormalizes the data you need into the places you expect it. Your original data is a graph, but what your UI sees is a tree, a tree that matches the structure of your UI perfectly. In Om Next, your query might be something like:

[{:current-user [{:initiated-builds [{:previous [:start-at]}]}]}]

And, depending on how you set things up (Om Next is exceptionally flexible), you might end up with a tree like:

{:current-user
 {:initiated-builds
  [{:previous {:start-at #inst "2015-12-17T17:05:58.144-00:00"}}
   {:previous {:start-at #inst "2015-12-17T17:05:36.797-00:00"}}]}}

Notice that, in the original data, each build references a previous build, but in this response we don’t have infinitely deep recursion. Why? Because we’re only getting what we asked for in the query. We asked to go one level deep, and that’s what we got.

In Om Now, you stored your app state in a tree which had to match the shape of your UI. In Om Next, you store your app state in any shape that makes sense, and let Om (with some help from you) convert that data into a tree that matches your UI on the fly. When the shape of your UI changes, the shape of the query changes with it, so the shape of the data it receives changes automatically. Your UI does not drive the shape of your data. That’s a huge win.

Mutation is a first-class operation.

Since you no longer receive data in the form in which it’s stored, you no longer operate on that data directly the way you used to: by om/transact!ing cursors. Instead, Om Next asks you to define a set of mutation operations which can change your application’s state. These mutations are named, and what they do is defined outside of the components themselves, as part of what Om Next calls the “parser”.
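Here’s a sketch of what a named mutation might look like (assuming (:require [om.next :as om]); the mutation name and state shape are illustrative):

;; The parser's mutate function dispatches on the mutation symbol; the
;; :action thunk performs the actual state change.
(defmulti mutate om/dispatch)

(defmethod mutate 'grocery/delete-item
  [{:keys [state]} _ {:keys [name]}]
  {:action
   (fn []
     (swap! state update :grocery-list
            (fn [items]
              (into [] (remove #(= name (:name %))) items))))})

A component would then trigger it with om/transact!:

(om/transact! this '[(grocery/delete-item {:name "Artichokes"})])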

Does that sound familiar? That’s exactly what we’ve already built in CircleCI, in frontend.controllers.controls, only this version doesn’t involve core.async shenanigans and doesn’t require maintaining a big custom architecture no one else has ever seen. Apparently we were on the right track, but it’s so much nicer to have this built into Om itself.

Your components know what they need.

Remember our problem with getting data from the server? We had to hook into navigation events and then guess what data the components we were about to display would need. No more. In Om Next, our components have queries! We can ask them what they need. Rather than tying our API calls to navigation events, we tie our API calls to parts of our app state (like [:current-user :initiated-builds]). If we try to show a component that needs to know the current user’s initiated builds, that triggers an API call that asks the server for the data. If we stop using that component, we stop making that request. Automatically.
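For example, a parser read function can report that a piece of data isn’t available locally by returning :remote true; the reconciler’s :send function then turns that remote query into the API call and merges the response back into the app state. A sketch, with an illustrative key:

(defmulti read om/dispatch)

(defmethod read :current-user
  [{:keys [state]} key _]
  (let [st @state]
    (if (contains? st key)
      {:value (get st key)}   ;; already have it locally
      {:remote true})))       ;; ask the server for it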

And because the mapping between data and API is centralized in the parser, we can batch our requests into fewer API calls, improving performance. We can even change which APIs deliver which data without touching the UI components at all!

So that’s what all the fuss is about

Om Next is exciting for us for a number of reasons. It vindicates some major decisions we made early on, it takes responsibility for a lot of architecture we’ve had to custom-build, and it will dramatically simplify the way we write most of our frontend application. Of course, the hard part now is waiting for it to be ready.

In the meantime, we’ll be working out the best way to gradually migrate our app from Om Now to Om Next, driving out bugs and hopefully improving the migration path for everyone. Like I said: it’s an exciting time to be a frontend developer!


If you’re excited by all this and want to try playing with it yourself, I heartily recommend Tony Kay’s om-tutorial. There are also several more focused tutorials available on the Om wiki. (All of the above are works in progress, since Om Next is still in flux, but they’re still worth checking out.)