# Writing Shared Scripts for CircleCI Orbs
## Problem
CircleCI orbs provide an `include` directive that can be used to inline scripts into an orb's command configurations. Unfortunately, this directive doesn't provide an obvious way of referencing other, shared scripts from within the included script.

In fact, it turns out the `include` directive is just a macro that prompts CircleCI to replace the directive with the text body of the referenced script. This means that scripts pulled in with `include` can't reference each other directly, making shared scripting modules difficult to create.
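To make the macro behavior concrete, here's a minimal sketch (the `greet.sh` script is hypothetical, not part of my orb). An orb author writes something like this:

```yaml
steps:
  - run:
      name: Greet
      command: << include(scripts/greet.sh) >>
```

When the orb is packed, CircleCI replaces the directive with the literal text of the script, producing roughly this:

```yaml
steps:
  - run:
      name: Greet
      command: |
        #! /usr/bin/env bash
        echo "Hello from an included script!"
```

There's no sourcing or path resolution involved; the script body is simply pasted in, which is why one included script has no way to reference another.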
## Motivation
Lately I've been centralizing DevOps tooling for my projects in a custom CircleCI Orb. The orb is meant to abstract away typical actions on standardized repos that I plan to create for current and future projects.
I've been working on a monorepo generator using an Nx plugin with a custom workspace preset. For test coverage reports with Codecov integration, I'd like to dynamically upload one coverage report per package in my monorepo and assign a matching Codecov flag to it.
Unfortunately, the official Codecov CircleCI orb isn't quite powerful enough to handle this use case on its own, so I ended up poaching some of their scripts to create something that understands an Nx workspace configuration and automatically performs the coverage upload segmented by package.
In the process, I wanted the ability to write shared scripts containing functions that could be re-used in other scripts, letting me keep my scripts modular, testable, and DRY.
## Solution
Fortunately, the `include` directive can be used anywhere in an orb's configurations, not just within the `command` property of a `step`.
I can even use an `include` directive to provide the body of my shared script as an environment variable to some other command.
### Not recommended: `eval` the body of the shared script
My first pass at solving this issue was to use `include` to provide the body of my shared script as an environment variable, and then `eval` those contents.
Obviously, this isn't the safest, and use of `eval` in this way is a bit of a smell. However, it did solve the problem.
`src/commands/upload-monorepo-coverage.yml`:

```yaml
steps:
  - run:
      name: Upload Monorepo Coverage Results
      command: << include(scripts/uploadMonorepoCoverageResults.sh) >>
      environment:
        PARSE_NX_PROJECTS_SCRIPT: << include(scripts/parseNxProjects.sh) >>
```
`src/scripts/parseNxProjects.sh`:

```bash
#! /usr/bin/env bash

# A common function I'd like to use in another file
parse_nx_projects() {
  # ...
}
```
`src/scripts/uploadMonorepoCoverageResults.sh`:

```bash
#! /usr/bin/env bash

eval "$PARSE_NX_PROJECTS_SCRIPT"

# The shared function from `parseNxProjects.sh` is now callable
parse_nx_projects

# ...
```
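For the curious, the mechanism at work here is easy to reproduce in plain bash, entirely outside of CircleCI (the `greet` function is a hypothetical stand-in for my shared functions):

```bash
#! /usr/bin/env bash

# Stand-in for the shared script body that `include` injects as an environment variable
SHARED_SCRIPT='greet() { echo "hello from a shared function"; }'

# Evaluating the string defines the function in the current shell...
eval "$SHARED_SCRIPT"

# ...which makes it callable like any other function
greet # => hello from a shared function
```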
### Slightly better: writing the contents of the shared script to disk
Using `eval` in the way above offended my sensibilities. Though I'm not convinced it's actually more secure, I ended up with a different approach inspired by the `eval` one: using the `include` directive, I write the contents of the shared script to somewhere predictable on disk, then provide the path to the shared script to any scripts that need to consume it.
This way I'm able to use `source` to read my shared functions, which feels a little better.
I wrote a specific command to handle this for me, which I called `write-shared-script`.
This is what the source of the command looks like:
```yaml
description: >
  This command writes shared scripts to disk so they can be consumed by other scripts
parameters:
  script-dir:
    type: string
    default: ~/@chiubaka/circleci-orb/scripts
    description: Path to the directory to write shared scripts to.
  script-name:
    type: string
    description: Name of the script to write.
  script:
    type: string
    description: The script to write. Should be included here using the include directive.
steps:
  - run:
      name: Write << parameters.script-name >> to disk
      command: << include(scripts/writeSharedScript.sh) >>
      environment:
        SCRIPT: << parameters.script >>
        SCRIPT_DIR: << parameters.script-dir >>
        SCRIPT_NAME: << parameters.script-name >>
```
And here's `writeSharedScript.sh`:

```bash
#! /usr/bin/env bash

# Expects SCRIPT, SCRIPT_DIR, and SCRIPT_NAME to be provided as environment
# variables by the calling command
SCRIPT_PATH="$SCRIPT_DIR/$SCRIPT_NAME"

mkdir -p "$SCRIPT_DIR"
echo "$SCRIPT" > "$SCRIPT_PATH"
chmod +x "$SCRIPT_PATH"
```
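As a quick sanity check, the script can also be exercised locally by supplying the same environment variables by hand. This smoke test is my own hypothetical illustration, not part of the orb:

```bash
#! /usr/bin/env bash

# Hypothetical local smoke test for writeSharedScript.sh
export SCRIPT='echo "hello from a shared script"'
export SCRIPT_DIR="/tmp/shared-scripts"
export SCRIPT_NAME="hello.sh"

./writeSharedScript.sh

# The shared script should now be on disk and executable
"$SCRIPT_DIR/$SCRIPT_NAME" # => hello from a shared script
```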
Now the steps of the command that requires the shared script look more like this:
```yaml
- write-shared-script:
    script-name: parseNxProjects.sh
    script: << include(scripts/parseNxProjects.sh) >>
- run:
    name: Upload Monorepo Coverage Results
    command: << include(scripts/uploadMonorepoCoverageResults.sh) >>
    environment:
      PARSE_NX_PROJECTS_SCRIPT: ~/@chiubaka/circleci-orb/scripts/parseNxProjects.sh
```
Finally, the `uploadMonorepoCoverageResults.sh` script now looks like this:

```bash
#! /usr/bin/env bash

source "$PARSE_NX_PROJECTS_SCRIPT"

# The shared function from `parseNxProjects.sh` is now callable
parse_nx_projects

# ...
```
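If I wanted to harden this further, a hypothetical variant could fail fast with a clear error when the shared script was never written to disk, rather than surfacing a confusing `source` failure:

```bash
#! /usr/bin/env bash

# Hypothetical defensive variant: verify the shared script exists before sourcing it
if [[ ! -f "$PARSE_NX_PROJECTS_SCRIPT" ]]; then
  echo "Shared script not found at: $PARSE_NX_PROJECTS_SCRIPT" >&2
  echo "Did a write-shared-script step run before this one?" >&2
  exit 1
fi

source "$PARSE_NX_PROJECTS_SCRIPT"
parse_nx_projects
```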
## Security Considerations
Arguably, in the context of CircleCI the difference between these two approaches isn't huge. Realistically, the input to `eval` here is always in my control so long as CircleCI is working normally. If there were a security breach in CircleCI allowing an attacker to control the input of this `eval` statement, then there would be a bigger problem dealing with an entirely different threat model.
Technically, if an attacker could control the input of that `eval` statement, that same attacker could likely also control the contents of the shared script on disk, which would amount to an attack of similar gravity.
Still, it feels better to avoid the cardinal sin of using `eval`, and at least this way my orb scripts are a bit more debuggable in production, since the shared scripts are written to disk.