TDD a CLI Caching Script - Part One


This is the first in a series about writing a general-purpose script to cache CLI output. In this series we'll learn about using bats to test CLI programs, level up our Bash skills, and hopefully end up with a useful tool we can use every day.


The end result should be a script that works something like this:

cache cache-key script-name arg1 arg2 <additional args...>

  1. On first run, it invokes script-name with the given arguments and caches the STDOUT result.
  2. On subsequent runs, it returns the cached content from the prior run.

Future versions of the cache script can incorporate a TTL, async refreshes, etc.

Why is this useful?

Caching allows us to do expensive work once and use the result until it is no longer timely. Some program results can be cached permanently because the content is easily fingerprinted to a unique cache key.

A real-world example is the rake routes (or rails routes) command. This command generates a list of available routes (think URLs) in your application. Unfortunately, Rails has to essentially boot your entire app to generate this list, which takes longer and longer as your app grows.

If your Rails route setup is traditional (a single file, no surprising metaprogramming), then you can trust that your routes will only change when the config/routes.rb file changes. We can use md5 to get a simple string fingerprint of the file contents, and use that fingerprint as a permanent cache key for the result of running rake routes, because any change to the routes will change the md5 and invalidate the cache.

This means that cache $(md5 -q config/routes.rb) rake routes can reliably cache the output and cut the time down from >10 seconds on a large app to essentially zero. This is a huge difference if you're using this output for something like route completion with fzf in Vim.
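As a quick sanity check of the fingerprinting idea, here's a sketch using GNU md5sum (macOS ships md5 -q instead); the file path and contents are throwaway stand-ins:

```shell
# Hash a throwaway file; the fingerprint only changes when the content does.
demo=/tmp/routes-demo-$$.rb
printf 'get "/health", to: "health#show"\n' > "$demo"
first=$(md5sum "$demo" | awk '{print $1}')

# Re-hashing unchanged content yields the same cache key...
second=$(md5sum "$demo" | awk '{print $1}')

# ...while any edit produces a new key, and therefore a cache miss.
printf 'get "/status", to: "status#show"\n' >> "$demo"
third=$(md5sum "$demo" | awk '{print $1}')

echo "stable: $([ "$first" = "$second" ] && echo yes)"
echo "changed: $([ "$first" != "$third" ] && echo yes)"
```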

Writing our first test

Following TDD, we'll describe the behavior we wish our script had with tests. These tests will fail because the behavior doesn't exist yet. Then we'll implement just enough functionality to make the test pass. We repeat this loop until our script is feature-complete.

First we install bats (from source or via brew install bats) and make a new directory for our script. Make a new directory cli-cache and give it a subdirectory of test.
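In shell terms, that setup is just:

```shell
# Create the project directory and its test subdirectory in one step.
mkdir -p cli-cache/test
cd cli-cache
```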

Within the test directory, we'll make a new file named cache.bats and add our initial test:

@test "initial run is uncached" {
  run ./cache some-test-key echo hello
  [ "$status" -eq 0 ]
  [ $output = "hello" ]
}

run executes a given command, sets $output to the combined STDOUT and STDERR of the command, and sets $status to the exit status of the command.

Our test is really just showing that the cache script can successfully execute the command we provide it and return the output. That's a small but important first step.

We can run the test with bats test from the cli-cache directory.

 ✗ initial run is uncached
   (in test file test/cache.bats, line 3)
     `[ "$status" -eq 0 ]' failed

1 test, 1 failure

Hooray, our first failing test! The status code didn't match the expected code. If we put echo $status before our comparison, we'll see that $status is 127 which means the command is not found. That makes sense because we haven't made our cache script yet. Let's create an empty file named cache in the cli-cache folder and try again.

The test still fails, but now $status is 126 because the command isn't executable. chmod +x cache and try again.
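Those two exit codes are worth remembering; you can reproduce them in any shell (the command and file names here are arbitrary):

```shell
# A command that doesn't exist anywhere on $PATH exits 127...
nonexistent-command-xyz 2>/dev/null
status_missing=$?

# ...while a file that exists but isn't executable exits 126.
touch /tmp/not-executable-demo
chmod -x /tmp/not-executable-demo
/tmp/not-executable-demo 2>/dev/null
status_noexec=$?

echo "missing: $status_missing, not executable: $status_noexec"
```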

 ✗ initial run is uncached
   (in test file test/cache.bats, line 4)
     `[ $output = "hello" ]' failed with status 2
   /var/folders/40/y21j3fw13432jk6z_y08mnbm0000gn/T/bats.59924.src: line 4: [: =: unary operator expected

1 test, 1 failure

The status code is fine now but our $output isn't what we want since our cache script doesn't do anything. Let's modify the cache script to run the command provided so the test will pass.

#!/usr/bin/env bash

set -e

cache_key=$1
shift

"$@"
We have a shebang line. We set -e so our script exits immediately if any command fails (this is generally a best practice).

Then we assign our $cache_key to the first argument. Next we shift to remove the $cache_key from our argument list. Now we can execute the provided command.

Rerunning bats test shows success. Nice work!

Add more tests to flesh out the implementation

Let's add a new test to verify that it works for quoted arguments to the provided command:

@test "works for quoted arguments" {
  run ./cache some-test-key printf "%s - %s\n" flounder fish
  [ "$status" -eq 0 ]
  [ $output = "flounder - fish" ]
}

Hrm. That didn't work. If we echo $output, we see -%s\nflounderfish -- all our arguments to printf smushed together. To preserve the arguments, we can update our cache script by changing $@ to the quoted form "$@".
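The difference is easy to see in isolation. Here's a sketch with a hypothetical helper function (not part of the cache script):

```shell
# A hypothetical helper that prints each argument it receives on its own line.
show_args() {
  for arg in "$@"; do
    echo "arg: $arg"
  done
}

# Unquoted $@ re-splits every argument on whitespace...
unquoted() { show_args $@; }
# ...while "$@" preserves the original argument boundaries.
quoted() { show_args "$@"; }

unquoted "a b" c   # three arguments: a, b, c
quoted "a b" c     # two arguments: "a b" and c
```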

With that passing, there's one more useful fundamental to get right: the cache command should return the exit code of the underlying command.

@test "preserves the status code of the original command" {
  run ./cache some-test-key exit 1
  [ "$status" -eq 1 ]
}

That one already passes for free by virtue of the "$@" being the last line of our script.

Now we have three passing tests, but we're not actually caching anything yet. We add a new test for the caching behavior.

@test "subsequent runs are cached" {
  run ./cache some-test-key echo initial-value
  [ "$status" -eq 0 ]
  [ $output = "initial-value" ]

  run ./cache some-test-key echo new-value
  [ "$status" -eq 0 ]
  [ $output = "initial-value" ]
}

Here we call echo twice with two different strings. Since our cache-key remains the same, the second echo should never get evaluated and our script should instead return the cached value from the first echo call.

With that test failing, let's update our script to do some caching.

#!/usr/bin/env bash

set -e

cache_key=$1
shift

cache_file="${CACHE_DIR:-$TMPDIR}$cache_key"

if test -f $cache_file; then
    cat $cache_file
else
    "$@" | tee $cache_file
fi
Looks easy enough, right? If the cache file exists, we read it. Otherwise we execute the command and pipe it to tee. tee prints the output to STDOUT and also writes the output to our $cache_file.

You can specify the cache directory by setting the CACHE_DIR environment variable; otherwise we default to $TMPDIR.
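That fallback can be expressed with Bash's ${VAR:-default} parameter expansion; a minimal sketch (variable names are illustrative):

```shell
# ${CACHE_DIR:-$TMPDIR} expands to $CACHE_DIR unless it's unset or empty,
# in which case it falls back to $TMPDIR.
TMPDIR=${TMPDIR:-/tmp/}
cache_key="demo-key"

cache_file="${CACHE_DIR:-$TMPDIR}$cache_key"
echo "default location: $cache_file"

CACHE_DIR="/var/tmp/my-cache/"
cache_file="${CACHE_DIR:-$TMPDIR}$cache_key"
echo "overridden location: $cache_file"
```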

Running our tests shows (perhaps) unexpected results:

 ✓ initial run is uncached
 ✗ works for quoted arguments
   (in test file test/cache.bats, line 18)
     `[ $output = "flounder - fish" ]' failed
 ✗ preserves the status code of the original command
   (in test file test/cache.bats, line 23)
     `[ "$status" -eq 1 ]' failed
 ✗ subsequent runs are cached
   (in test file test/cache.bats, line 29)
     `[ $output = "initial-value" ]' failed

4 tests, 3 failures

Wait, why is everything broken but the first test? Oh yeah, we're caching now and all the tests use the same cache-key. We could give each test a unique cache key, but instead let's use bats' setup function to ensure we delete cached content between tests.

setup() {
  export TEST_KEY="cache-tests-key"

  # clean up any old cache file (-f because we don't care if it exists or not)
  rm -f "$TMPDIR$TEST_KEY"
}

We'll replace anywhere we're using some-test-key in the tests with $TEST_KEY.

bats test now shows everything passing except the "preserves the status code of the original command" test. This is a side-effect of piping our command to tee: tee exits with a status code of 0 because tee worked fine (even though the preceding command did not). Fortunately, we can use $PIPESTATUS to get the status of any command in the pipe chain. We just need to add the line exit ${PIPESTATUS[0]} after our "$@" | tee $cache_file line.
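Here's the behavior in isolation, using false as a stand-in for a failing command (PIPESTATUS is a Bash-specific array):

```shell
# The pipeline's overall status is tee's (0), even though `false` failed...
false | tee /dev/null
echo "pipeline status: $?"

# ...but PIPESTATUS records every command's status; index 0 is the first.
# (Re-running the pipeline here because PIPESTATUS is reset by each command.)
false | tee /dev/null
echo "first command's status: ${PIPESTATUS[0]}"
```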

 ✓ initial run is uncached
 ✓ works for quoted arguments
 ✓ preserves the status code of the original command
 ✓ subsequent runs are cached

4 tests, 0 failures


Here's the final version of the script:

#!/usr/bin/env bash

set -e

cache_key=$1
shift

cache_file="${CACHE_DIR:-$TMPDIR}$cache_key"

if test -f $cache_file; then
    cat $cache_file
else
    "$@" | tee $cache_file
    exit ${PIPESTATUS[0]}
fi
You can add this to your $PATH to invoke cache from anywhere.

Let's compare timings of ways to invoke rake routes on a large app:

command                                             cache status         seconds
time rake routes                                    no caching           12
time spring rake routes                             cold spring boot     12
time spring rake routes                             spring fully-loaded  3
time cache $(md5 -q config/routes.rb) rake routes   uncached             12
time cache $(md5 -q config/routes.rb) rake routes   cached               0.02

With a small update to the source of our fzf route completion, things are super speedy!

inoremap <expr> <c-x><c-r> fzf#complete({
  \ 'source':  'cache $(md5 -q config/routes.rb) rake routes',
  \ 'reducer': '<sid>parse_route'})

If this all feels like a lot of work to save 12 seconds, you're right. From my experience, the value is rarely in the actual time saved, but in the preservation of flow. Any time I spend waiting on the computer is time when I can get distracted or otherwise lose my flow. In my career, I've observed that disruptions compound in the negative. I've found that eliminating them (where possible) can compound in the positive as well.

Now we have a new trick to eliminate disruptions and help preserve flow.

Up next

Stay tuned for an update where we add a TTL option to specify when cached content should expire. We'll also update the script to only cache successful runs of the provided command.

You can always find the most up-to-date version of the cache script on GitHub.

It may surprise you to hear that there isn't a standard Unix utility to cache CLI script output. Thankfully, there are a number of community-provided alternatives to choose from, e.g. cachecmd, runcached, and bash-cache.

git changed-on-branch


When I'm working on a branch, I naturally want to interact with the changed files. To make this easier, I wrote a custom git subcommand I've named git-changed-on-branch.

Invoked as git changed-on-branch, the script returns all changed filenames between your current branch and origin/master (though you can pass in an alternate comparison branch/sha/etc. as an argument if you wish).

Let's look at the thoroughly-commented code:

#!/bin/bash

# List files changed on the current branch versus a provided branch/sha/etc.
# (or the default of origin/master)
to_compare=${1:-origin/master}

(
  # First get already-committed file names that have been
  # (C)opied, (R)enamed, (A)dded, or (M)odified
  # We specify the diff-filter because we explicitly _don't_ care about Deleted
  # files.
  git diff ${to_compare}... --name-only --diff-filter=CRAM &&
  # Then get files that are currently modified (staged or unstaged)
  (git status --porcelain | awk '{print $2}')
) |
# Finally, remove duplicates without changing the sort order
awk '!x[$0]++'

Save that as git-changed-on-branch somewhere in $PATH and chmod +x. In typical Unix fashion, we can compose this in some interesting ways.

  • git changed-on-branch | grep test | xargs yarn jest could run your tests.
  • git changed-on-branch | fzf -m --height 40% | xargs -o vim from your shell uses fzf to select (one or more) files to be opened in vim.

Let your imagination run wild and make some helpful aliases.

As a closing example, here's how I use this within vim with fzf#run to open files changed on the branch:

nnoremap <silent> <Leader>gt :call fzf#run({
\   'source':  'git changed-on-branch',
\   'sink':    'e',
\   'options': '--multi --reverse',
\   'down':    15
\ })<CR>

Use Git history to suggest related tests


You've started a new job (congrats!). For your first task, your PM wants you to change the default behavior of help-desk links to open in a new tab.

This is the sort of task that is either trivial or a trip down the rathole of fragile tests that depended on the original behavior.

As apps grow, two things often happen that make changes like this one slower for developers:

  1. It becomes less obvious which tests might be impacted by a change.
  2. The runtime of the test suite grows such that running the entire suite locally isn't palatable.

Many devs will run any seemingly relevant unit tests, any obvious integration tests, and then let CI tell them what they missed. But when CI takes minutes or tens of minutes to run, the feedback loop grows and this once seemingly simple tweak can derail your morning.

Fortunately, there's an easy way to find tests likely impacted by your change...

Git to the rescue (again)

If you're using atomic commits with Git, you have a rich history that groups files with their related tests.

There's no obvious relationship between a file named link-helper.js and your "Subscription Refund Integration Test" but if the two were changed in the same commit, that's a good hint that they might be related.

So if you make your change in link-helper.js, how can you use Git history to suggest related tests?

The naive version looks something like this:

#!/usr/bin/env bash

file=$1
pattern=$2

candidates=$(
    # find commits where the file was changed
    git log --format='%H' -- $file |
    # show file names from those commits
    xargs git show --pretty="" --name-only |
    # filter to only the provided pattern
    grep $pattern |
    # remove duplicates
    sort | uniq
)

echo $candidates
Save that as suggest-tests somewhere in $PATH and chmod +x it.

Now you can invoke suggest-tests app/js/link-helper.js test/ and see all files with "test/" in their path that changed when app/js/link-helper.js also changed.

A more robust solution

There are a few places where the naive solution isn't ideal:

  1. It won't follow file renames.
  2. It returns file paths that have since been deleted.
  3. It would be nice if the uniq preserved history order (most recently edited to least recently edited).
  4. It would also be nice if you could set a default pattern to avoid specifying it every time.

After some thinking, googling, and false starts, here's the version I'm using today:

#!/usr/bin/env bash

function usage {
    script=$(basename $0)

    echo "$script - use Git history to suggest tests that could be relevant to the provided file"
    echo "Usage: $script file test_pattern"
    echo "       Note: test_pattern is optional if \$DEFAULT_SUGGEST_TESTS_PATTERN is set"
    if [ -z "$DEFAULT_SUGGEST_TESTS_PATTERN" ]; then
        echo "       (\$DEFAULT_SUGGEST_TESTS_PATTERN is unset or empty)"
    fi
    echo ""
    echo "Example:"
    echo "       $ suggest-tests some_file_name.rb _test.rb"
    echo ""
    echo "You might want to pipe the results into your test runner with xargs:"
    echo "       $ suggest-tests some_file_name.rb _test.rb | xargs rake test"
    exit 1
}

if [ "$#" -gt 2 ] || [ "$#" -eq 0 ] || [ $1 == "--help" ]; then
    usage
fi

file=$1
pattern=$2

if [ -z $pattern ]; then
    pattern=$DEFAULT_SUGGEST_TESTS_PATTERN
fi

candidates=$(
    # find commits where the file was changed (following renames on the file)
    git log --follow --format='%H' -- $file |
    # show file names from those commits
    xargs git show --pretty="" --name-only |
    # get the test files from those file names
    grep $pattern |
    # uniqify the names but preserve history order
    awk '!x[$0]++'
)

# get the root in case we're called from elsewhere
git_root=$(git rev-parse --show-toplevel)

# only return candidates that still exist on disk
for candidate in $candidates; do
    if [ -f "$git_root/$candidate" ]; then
        echo $candidate
    fi
done
This solves all our issues and adds some helpful usage instructions. Also, how great is that awk trick?
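For the curious: x[$0]++ evaluates to the count *before* incrementing, which is 0 (falsy) the first time a line appears, so !x[$0]++ is true exactly once per distinct line. A standalone demo:

```shell
# Deduplicate lines while preserving first-seen order.
printf 'b\na\nb\nc\na\n' | awk '!x[$0]++'
# prints b, a, c -- first appearances only, original order preserved
```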

Running the relevant tests

I live in Ruby + minitest world most of the time so here's an example of how I run relevant tests: suggest-tests some_file_name.rb _test.rb | xargs rake test

Vim integration with fzf

When editing a file, it can sometimes be useful to edit related test files. Here's an example Vim mapping to quickly jump to these files with fzf.

nnoremap <silent> <Leader>S :call fzf#run({
\   'source':  'suggest-tests ' . bufname('%'),
\   'sink':    'e',
\   'options': '--multi --reverse',
\   'down':    15
\ })<CR>

That uses $DEFAULT_SUGGEST_TESTS_PATTERN (which I've set locally to '^test.*_test\.rb$') but you could make a binding for various patterns as you wish.

Closing thoughts

This approach isn't perfect (since you might break a test that shares no Git history with your changed file), and CI will still catch anything you miss. This script has saved me numerous CI feedback cycles over the past year and I hope it does the same for you.