Writing tests with Fastify and Node test runner

Published by Richie McColl on 3/16/2023

Node.js v18 introduced an experimental built-in test runner. This is a great addition to Node core that we're excited about at NearForm: we can now write and run tests without needing to install and set up a third-party testing framework.

Automated tests allow us to work iteratively with more confidence. Developing through the lens of testing improves how we design our applications.

In this article, I'll demonstrate how to use the test runner to test a Fastify backend API. First, we'll go through different approaches to writing tests, with examples. We'll also explore some of the command line options for running the tests.

The code referenced in this post is available at the link below:

Prerequisites

  • Node.js v19.6.0

Note: we're using this version because this release included support for different types of test reporters.

I assume some basic knowledge of Fastify and what problem it is trying to solve. You can find more information about the principles behind Fastify here

If you have some Fastify experience, but would like a more in-depth tutorial, I recommend checking out the Fastify workshop. This is a good one to work through at your own pace. Most engineers who join NearForm go through this workshop during onboarding.

Setup

If you check out the repo, we have two main files (index.js and server.js).

The first one, the index file, is responsible for building the Fastify instance. This is also where we would typically register any plugins, decorators or routes.

import Fastify from "fastify";

function app() {
  const fastify = Fastify();
  return fastify;
}

export default app;

The second file is responsible for starting the server and encapsulates the server startup logic.

import app from "./index.js";

const fastify = app();

await fastify.listen({ port: 4000 });

API testing patterns

Injecting

There are a couple of approaches we can take when writing tests. The first pattern we'll look at is request injection. This is what would be considered a typical unit test.

This allows us to send fake HTTP request objects to our server. In the example below, we're "injecting" a request into the app.

import test from "node:test";
import assert from "node:assert";
import buildApp from "../index.js";

test("GET /todos returns status 200", async (t) => {
  const app = await buildApp();

  t.after(async () => {
    await app.close();
  });

  const res = await app.inject({
    url: "/todos",
  });

  assert.deepStrictEqual(res.statusCode, 200);
});

The key thing about this pattern is that it doesn't use a socket connection. That means we can run our tests against an inactive server. In other words, server.listen is never called in these tests.

This inject behaviour comes from a library called light-my-request. You can find some documentation on that here.
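Conceptually, injection resolves the handler registered for a method and URL pair and invokes it directly with a fabricated request object, so no TCP connection is ever opened. Here is a minimal sketch of the idea; this is not light-my-request's actual implementation, and the routes map and inject function are illustrative names only:

```javascript
// Illustrative sketch of request injection: look up the handler for a
// method + url pair and call it directly -- no socket is involved.
const routes = new Map([
  ["GET /todos", () => ({ statusCode: 200, body: "[]" })],
]);

function inject({ method = "GET", url }) {
  const handler = routes.get(`${method} ${url}`);
  if (!handler) return { statusCode: 404, body: "Not Found" };
  return handler();
}

console.log(inject({ url: "/todos" }).statusCode); // 200
console.log(inject({ url: "/missing" }).statusCode); // 404
```

Because nothing touches the network, this style of test stays fast and never conflicts with ports already in use on the machine.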

There are also a few test-runner-specific things to note. We're importing the test module from node:test, which is the main interface for writing tests. Also, we're using the assert module as our assertion library. Everyone has opinions on assertion libraries, but we'll use assert for the sake of simplicity.

The async function here receives the test context as an argument. We can then use that to do things such as:

  • Call test lifecycle methods (t.before(), t.after())
  • Skip tests (t.skip())
  • Isolate subsets (t.runOnly(true))

Note: runOnly only works when starting Node with the --test-only flag. With this flag, Node skips all top-level tests except for the specified subset.

There are two ways to configure this:

  • t.runOnly(true)
  • The only flag in the test options. For example:
test("POST /todos returns status 200", { only: true }, async (t) => {
...

HTTP Client

An alternative pattern for writing tests is to start and stop a server inside the tests. This would be considered a typical integration test. Below is the code from the file in the tests folder, http-example.test.js.

import { describe, it, before, after } from "node:test";
import assert from "node:assert";
import buildApp from "../index.js";

describe("GET /todos HTTP", () => {
  let app;
  let port;

  before(async () => {
    app = await buildApp();
    await app.listen();
    port = app.server.address().port;
  });

  after(async () => {
    await app.close();
  });

  it("GET /todos returns status 200", async () => {
    const response = await fetch(`http://localhost:${port}/todos`);
    assert.deepStrictEqual(response.status, 200);
  });
});

This example introduces a few new additions. The first thing is that we're creating the Fastify app and calling listen on the server we created, so the tests run against a real socket.

From the test runner, we're also using the describe/it pattern to organise the tests.

it is an alias for test, which we've seen in the other test. To add the test lifecycle behaviour, we can import the before and after functions and use them directly.

If we run either of the test scripts, we'll see both of these tests fail.

✖ GET /todos (37.110643ms)
  at TestContext.<anonymous> (file:///tests/todos.test.js:12:10)
  at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
  at async Test.run (node:internal/test_runner/test:549:9) {
    generatedMessage: false,
    code: 'ERR_ASSERTION',
    actual: 404,
    expected: 200,
    operator: 'deepEqual'
  }

This makes sense: Fastify is returning a 404 because there is no handler for the /todos route. We can get that test green and passing by creating the todos handler in index.js.

import Fastify from "fastify";

function app() {
  const fastify = Fastify();

  fastify.get("/todos", async () => {});

  return fastify;
}

export default app;

Running either of the test scripts should output something similar to the following.

▶ /tests/todos-http.test.js
▶ GET /todos HTTP
✔ GET /todos returns status 200 (33.847374ms)
▶ GET /todos HTTP (58.346951ms)
▶ /tests/todos-http.test.js (401.023932ms)
▶ /tests/todos.test.js
✔ GET /todos returns status 200 (30.588452ms)
▶ /tests/todos.test.js (298.150638ms)

Running tests

Let's briefly examine the test scripts from the package.json

"test": "node --test tests/"

We use the --test flag to tell Node that we want to use the test runner. We also pass a test directory. This is because we want the test runner to recursively search for test files to execute.

Watch mode

"test:watch": "node --test --watch tests/"

File watching is one of the first things to think about when setting up a project. In the past, I would normally reach for a third-party package such as nodemon. From Node.js v19, that no longer has to be the default: we can now use the experimental --watch flag.

Having this functionality baked in is a game changer. It allows us to create fast feedback loops for changes we make when writing software.

Test Reporters

If we run either of these test scripts, we should see output that looks like this:

TAP version 13
# Subtest: tests/todos-http.test.js
    # Subtest: GET /todos HTTP
        # Subtest: GET /todos returns status 200
        ok 1 - GET /todos returns status 200
          ---
          duration_ms: 33.284794
          ...
        1..1
    ok 1 - GET /todos HTTP
      ---
      duration_ms: 57.4993
      ...
    1..1
ok 1 - /tests/todos-http.test.js
  ---
  duration_ms: 390.356659
  ...
# Subtest: tests/todos.test.js
    # Subtest: GET /todos returns status 200
    ok 1 - GET /todos returns status 200
      ---
      duration_ms: 29.049216
      ...
    1..1
ok 2 - /tests/todos.test.js
  ---
  duration_ms: 271.628752
  ...
1..2
# tests 2
# pass 2
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 664.450388

By default the test runner uses the TAP reporter. Some people like it, some don't. I find myself using the spec reporter, which is less verbose.

The test runner comes with three reporter options (tap, spec and dot). You can also build your own custom reporter if you're not happy with the defaults.

This feature is configured via the following flag.

--test-reporter=spec

If you run the tests with that flag enabled, you will see the test output formatted differently.

▶ /tests/todos-http.test.js
▶ GET /todos HTTP
✔ GET /todos returns status 200 (30.059459ms)
▶ GET /todos HTTP (56.219638ms)
▶ /tests/todos-http.test.js (409.66882ms)
▶ tests/todos.test.js
✔ GET /todos returns status 200 (26.434736ms)
▶ tests/todos.test.js (265.183935ms)
ℹ tests 2
ℹ pass 2
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 677.714442

Request schema validation

For this last example we'll go full TDD and demonstrate how Fastify handles request and response validation. The first thing we'll have to do is write the failing tests. Below are some example failing tests from todos-post.test.js.

import { describe, it, before, after } from "node:test";
import assert from "node:assert";
import buildApp from "../index.js";

describe("POST /todos", () => {
  let app;

  before(async () => {
    app = await buildApp();
  });

  after(async () => {
    await app.close();
    app = null;
  });

  it("should return 400 when no name is present on the payload", async () => {
    const res = await app.inject({
      url: "/todos",
      method: "POST",
      payload: {},
    });

    assert.deepStrictEqual(res?.statusCode, 400);
    assert.deepStrictEqual(
      JSON.parse(res?.body).message,
      "body must have required property 'name'"
    );
  });

  it("should return 201 when the payload is valid", async () => {
    const res = await app.inject({
      url: "/todos",
      method: "POST",
      payload: {
        name: "do the thing",
      },
    });

    assert.deepStrictEqual(res?.statusCode, 201);
    assert.deepStrictEqual(JSON.parse(res?.body).message, "created");
  });
});

Now if we run this test file, both of these tests will fail as expected, with the same 404 we saw in the previous test. So, let's add the missing route in index.js.

We'll also have to configure the route to send the correct status code, and add some request schema validation to return an error when no name property is sent in the request payload.

import Fastify from "fastify";

function app() {
  const fastify = Fastify();

  fastify.get("/todos", async () => {});

  fastify.post(
    "/todos",
    {
      schema: {
        body: {
          type: "object",
          properties: {
            name: { type: "string" },
          },
          required: ["name"],
        },
      },
    },
    async (_, reply) => reply.status(201).send({})
  );

  return fastify;
}

export default app;
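Conceptually, the required-property check Fastify performs here (via Ajv under the hood) behaves roughly like the sketch below. This is illustrative only, not Fastify's or Ajv's actual implementation, and the validateBody helper is a made-up name:

```javascript
// Illustrative sketch of required-property validation: for each property
// listed in schema.required, check that it exists on the body; if one is
// missing, produce the kind of 400 response Fastify sends back.
const bodySchema = {
  type: "object",
  properties: { name: { type: "string" } },
  required: ["name"],
};

function validateBody(schema, body) {
  for (const key of schema.required ?? []) {
    if (!(key in body)) {
      return {
        statusCode: 400,
        message: `body must have required property '${key}'`,
      };
    }
  }
  return null; // no validation errors
}

console.log(validateBody(bodySchema, {})); // 400-style error object
console.log(validateBody(bodySchema, { name: "do the thing" })); // null
```

The real validation is schema-driven and far more capable (types, formats, nested objects), but this is the shape of the check our first failing test exercises.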

If we run the tests now, we will see one test passing and one failing.

▶ POST /todos
✔ should return 400 when no name is present on the payload (52.018916ms)
✖ should return 201 when the payload is valid (3.878632ms)

Response schema serialisation

Now let's finish up the success case response. We'll add response validation as well, ensuring that responses match the schema shape we expect, i.e. that they have a message property.

import Fastify from "fastify";

function app() {
  const fastify = Fastify();

  fastify.get("/todos", async () => {});

  fastify.post(
    "/todos",
    {
      schema: {
        body: {
          type: "object",
          properties: {
            name: { type: "string" },
          },
          required: ["name"],
        },
        response: {
          201: {
            description: "Success Response",
            type: "object",
            properties: {
              message: { type: "string" },
            },
          },
        },
      },
    },
    async (_, reply) => reply.status(201).send({ message: "created" })
  );

  return fastify;
}

export default app;

With these changes, a re-run of the tests should show everything green and passing, as we expect.

▶ POST /todos
✔ should return 400 when no name is present on the payload (53.374649ms)
✔ should return 201 when the payload is valid (2.751953ms)
▶ POST /todos (72.330727ms)
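The serialisation behaviour a response schema enables can be sketched conceptually: Fastify (via fast-json-stringify) serialises only the properties declared in the response schema, dropping anything else from the payload. This sketch is illustrative, not the library's actual implementation:

```javascript
// Illustrative sketch of response serialisation: keep only the properties
// declared in the response schema when building the JSON payload.
const responseSchema = {
  type: "object",
  properties: { message: { type: "string" } },
};

function serialise(schema, payload) {
  const out = {};
  for (const key of Object.keys(schema.properties)) {
    if (key in payload) out[key] = payload[key];
  }
  return JSON.stringify(out);
}

// Undeclared properties are dropped from the serialised output
console.log(serialise(responseSchema, { message: "created", secret: "x" }));
// → {"message":"created"}
```

Beyond consistency, this is also a performance win: serialising against a known schema is faster than a generic JSON.stringify, and it prevents accidental leaking of internal fields.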

Conclusion

As mentioned in the introduction, at NearForm we're very excited by Node.js features such as the test runner and we're actively contributing to this area of Node core.

I hope this article gives you an understanding of what a modern workflow with the test runner looks like. As Node.js approaches the big version 20 milestone, the future's looking bright.
