This is a guest post by Divya Sasidharan, a web developer at the Knight Lab.
Interested in telling your story here? Reach out to our team at blog at circleci.com

test.allTheThings()

In agile development methodology, test-driven development is seen as the embodiment of good coding practice. The “test first, code later” approach to writing software not only blends well with the short sprints intrinsic to agile development, but also enables developers to focus on the requirements and design of a system without getting sidetracked. However, given that a project undergoes different stages of development with varying requirements and specifications, a test-driven approach is not always conducive to development.

In a fairly new codebase, or in a prototype application, where code is being added and refactored at a rapid pace, tests can add friction that slows down development. Conversely in a more stable codebase, tests protect the integrity of an application. Considering this, how can we more productively think about testing as a whole and adapt to the ebbs and flows of a project lifecycle?

“The act of writing a unit test is more an act of design than of verification. It is also more an act of documentation than of verification. The act of writing a unit test closes a remarkable number of feedback loops, the least of which is the one pertaining to verification of function.” — Bob Martin

Case Study: TDD till I die

In a recent project called StorylineJS, I adopted a test-driven approach and automated my tests using CircleCI. As the solo developer on the project, I found tests instrumental in keeping me on task; they removed the cognitive load of having to remember code I had already written. In the first couple of iteration cycles, with tests to guide my flow, I felt in control of my project and was able to retroactively fix conflicting and incompatible code in a timely manner.

As the pace of development picked up and specifications for the project changed, I began to drastically refactor the codebase and change prior functionality, breaking my once resilient army of test cases in the process. Gradually, the process of writing and re-writing tests became a chore that slowed down my development.

Soon enough, as the number of broken tests added up, I abandoned writing tests altogether and instead manually QA-ed the project alongside development to make sure the UI/UX was cohesive. Without working tests to guide my development, my codebase became a messy tangle of hard-to-read, brittle code, littered with comments reminding me of the purpose of brute-forced, edge-case functions. By the actual launch date, my application, though functionally working, was left with a graveyard of broken tests that no longer reflected its state.

What went wrong?

A major hitch I encountered in my project was that I succumbed to “test first fundamentalism”, or the assumption that tests would allow me to think more clearly and structure code better. While testing succeeded in helping me think deeply about the overarching structure of the application, my zealous approach to test-driven development constrained my ability to think outside structured use cases and eventually made me lose sight of the goals of the project. In software development, a project’s lifecycle is never static. A project faces different challenges as it matures and moves through the stages of its development. For one, the challenges an application faces with a small user base are drastically different from those it faces with a much larger one. The key to an effective testing strategy is therefore to acknowledge that there is no one-size-fits-all approach to testing. The best testing strategy is one that adapts to the ebbs and flows of a project as a whole.

Goal Driven Testing

Assessing the high-level goals of a project is the first step to crafting an effective testing strategy. A common goal of a project, especially in the initial stages of development, is to ensure that basic functionality and specifications are met. Testing methodologies that fit well in this paradigm include unit tests and manual tests. Unit tests verify that a function works and will continue to work as expected. With unit tests, we can ascertain that, given a specific input value, the function returns the expected output value or throws an error if the input value is invalid. We can also be sure that new changes made to the program don’t break existing code. As consistent as unit tests may be, they unfortunately provide little insight into how a user will actually interact with an application, or into whether a given function is in fact correct. This is where manual tests come in. In addition to testing that an application follows a specific user workflow, manual tests also test for correctness, making sure that an application effectively meets specification.
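
As a minimal sketch of that “given input, expect output or error” pattern, consider a hypothetical formatting helper (`toPercent` is an illustrative function, not code from StorylineJS):

```javascript
// Hypothetical helper: format a decimal rate as a percentage string.
function toPercent(rate) {
  if (typeof rate !== 'number' || isNaN(rate)) {
    throw new TypeError('rate must be a number');
  }
  return (rate * 100).toFixed(1) + '%';
}

// Given a specific input, we assert the expected output...
console.log(toPercent(0.063)); // "6.3%"

// ...and we assert that an invalid input throws.
try {
  toPercent('6.3');
} catch (e) {
  console.log(e.message); // "rate must be a number"
}
```

The same two assertions would normally live in a test suite, so they run automatically on every change.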

Another possible goal of a project may be to ensure its long-term stability. In this case, unit tests are equally applicable as a means of verifying that functions continue to work over time. Because we will be relying heavily on unit tests to measure the working state of an application, it is also worth “testing the tests” by evaluating their quality through mutation testing. Mutation tests involve deliberately modifying a program and running the test suite against the modified version to assert that the tests properly exercise the code; a test that fails in response to a mutant means a successful mutation test.
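
Here is a hand-rolled toy illustration of the idea (real projects would typically use a mutation-testing tool such as Stryker rather than mutating by hand): we flip a single operator in a function and check whether the test suite notices.

```javascript
// Original implementation.
function isAdult(age) {
  return age >= 18;
}

// A "mutant": the >= operator has been flipped to >.
function isAdultMutant(age) {
  return age > 18;
}

// A weak test suite that only checks one value far from the boundary.
function weakSuite(fn) {
  return fn(30) === true;
}

// A stronger suite that also exercises the boundary value.
function strongSuite(fn) {
  return fn(30) === true && fn(18) === true && fn(17) === false;
}

// The weak suite lets the mutant survive; the strong suite kills it.
console.log(weakSuite(isAdultMutant));   // true  -> mutant survived (weak tests)
console.log(strongSuite(isAdultMutant)); // false -> mutant killed (strong tests)
```

A surviving mutant points at exactly the kind of untested behavior that would otherwise slip into production unnoticed.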

Retrospective

Let’s return to the example of StorylineJS to get a clearer sense of how to construct better tests based on the goals of a project. The project was broken up into three major stages corresponding to three high-level goals: implementing the basic functionality of the tool, implementing the UI/UX workflow of the tool, and refining and strengthening the robustness of the application for release.

Goal 1: Basic functionality and specifications are met => Automated Unit Testing

In the beginning phase of a project, when working through basic functionality and core design requirements, the test-driven mantra forces you to focus on the overall intention of the application. Testing first means having to think about the behavior of a specific class or method rather than getting caught up in the details of its implementation. With StorylineJS, the initial functionality was to convert raw data from a CSV file to JSON. With a clearly defined input and output value and a specific task of the basic function in mind, we can easily extrapolate this function into a clearly defined test case:

var parse = require('csv-parse');
var expect = require('chai').expect;

describe('CSV parser', function() {
  describe('parse csv to json', function() {
    it('should return the result as a json object', function(done) {
      var input = '"Date","US Unemployment Rate"\n"1/31/80","6.3"';
      // columns: true tells csv-parse to use the header row as object keys
      parse(input, { columns: true }, function(err, output) {
        expect(err).to.be.null;
        expect(output).to.eql([{ 'Date': '1/31/80', 'US Unemployment Rate': '6.3' }]);
        done(); // signal mocha that the async assertion has run
      });
    });
  });
});

Since the core functionality of the application will likely remain unchanged, test-driven development and automated unit tests work well as they align the project with its specifications, ensuring that the code continues to work throughout the lifecycle of the project and beyond.

Goal 2: Determining appropriate UI => Manual/QA testing

In implementing a project’s UI, the main goal is often focused on how a user interacts with the application. In the case of StorylineJS, which was designed as a library with a configurable API, there were two parts to the UI: the actual design of the API, and the generated chart that authors could embed on a page.

Unlike building out the core functionality, determining the design standard for the UI is rarely a smooth process and is subject to constant change. Initially, I built out StorylineJS with the Revealing Module Pattern, but later refactored it to a prototype-based pattern so I could create scoped instances of a “Storyline.” Though the ultimate goal was a structured and robust API that rarely changed, test-driven development and automated unit tests would have posed a high maintenance cost at this stage, as tests would need to be continuously updated to remain relevant.
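
The refactor described above can be sketched roughly as follows; the names and methods here are illustrative, not StorylineJS’s actual API. The Revealing Module Pattern yields one shared instance, while a prototype-based design allows scoped, independent instances:

```javascript
// Revealing Module Pattern: a single, page-wide instance with private state.
var storyline = (function() {
  var data = [];
  function load(rows) { data = rows; }
  function count() { return data.length; }
  return { load: load, count: count }; // one shared state for the whole page
})();

// Prototype-based pattern: each `new Storyline()` gets its own scoped state.
function Storyline(rows) {
  this.data = rows || [];
}
Storyline.prototype.count = function() {
  return this.data.length;
};

var a = new Storyline([1, 2, 3]);
var b = new Storyline([4]);
console.log(a.count(), b.count()); // each instance keeps its own data
```

A refactor like this changes the shape of every public call, which is why unit tests written against the old module API would all have needed rewriting.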

At this stage of uncertainty, manual testing is a better approach. It lets you focus on the user’s interaction and tweak the UI to more closely match a user’s needs. Manual tests can also surface the kinds of edge cases in real usage that unit tests miss.

Goal 3: Refinement => Regression Testing

Once a project has reached a level of stability, with a somewhat standardized core functionality and UI, regression tests are a valuable means of ensuring the resilience and consistency of a codebase. Akin to unit tests, regression tests exercise an application’s source code after the fact, guarding against changes that break existing functionality. Randomized tests are another way to ensure all the bases and test cases are covered pre-production. By randomizing test inputs, we can discover subtle and unexpected errors with minimal overhead.
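
A minimal sketch of a randomized test, assuming a toy CSV row serializer and parser (hypothetical helpers, not StorylineJS code): generate many random rows and assert that parsing round-trips serialization.

```javascript
// Toy serializer: wrap each cell in quotes and join with commas.
function toCsvRow(values) {
  return values.map(function(v) { return '"' + v + '"'; }).join(',');
}

// Naive parser: split on commas and strip the surrounding quotes.
// (Only valid for cells without commas or quotes, which our generator ensures.)
function fromCsvRow(row) {
  return row.split(',').map(function(cell) { return cell.slice(1, -1); });
}

// Generate a row of 1-5 random alphanumeric cells.
function randomRow() {
  var n = 1 + Math.floor(Math.random() * 5);
  var row = [];
  for (var i = 0; i < n; i++) {
    row.push(Math.random().toString(36).slice(2, 8));
  }
  return row;
}

// Run many randomized cases; any mismatch surfaces an input we would
// never have thought to write by hand.
for (var i = 0; i < 1000; i++) {
  var row = randomRow();
  var roundTripped = fromCsvRow(toCsvRow(row));
  if (JSON.stringify(roundTripped) !== JSON.stringify(row)) {
    throw new Error('round-trip failed for: ' + JSON.stringify(row));
  }
}
console.log('1000 randomized round-trip cases passed');
```

Libraries such as jsverify or fast-check formalize this pattern as property-based testing, with automatic shrinking of failing inputs.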

Reality Over Rhetoric

Realistically speaking, deciding how, when, and how much to test a project is a perpetual trade-off between budget and time. Figuring out the right approach to testing an application is an art that requires a keen understanding of a project’s goals and of proper testing techniques. Done right, tests save your application from your own pitfalls, improve developer productivity, and reduce technical debt.

Additional Resources:

For more reading on testing strategies, here are some great articles I’d recommend:

Itamar Turner-Trauring’s blogpost and PyCon talk: https://codewithoutrules.com/2017/03/26/why-how-test-software/ https://www.youtube.com/watch?v=Vaq_e7qUA-4

Atlassian’s post on Software Testing Strategies and Best Practices: https://www.atlassian.com/software-testing/?tab=strategic-planning

About the author:

Divya Sasidharan

Divya Sasidharan is a web developer who writes open source code to help people tell stories more effectively. She currently works at the Knight Lab on tools for data-driven journalism and mentors students on good software practices. Divya is passionate about mentorship and contributing to open source. You will most likely find her in the sunniest spot in the room with a cup of tea in hand.