What you'll learn
  • what the code benchmark is
  • how to enable the code benchmark
  • how to output the code benchmark measurements
Before continuing

In order to follow this article, you must use Webiny version 5.35.0 or greater.

Overview

Webiny 5.35.0 includes a benchmarking tool for measuring code execution. A piece of code is wrapped into a measurement method and, if benchmarking is enabled, the tool measures the execution time and the memory difference between the start and end points of the measurement.

This is useful for figuring out where the bottlenecks are, both in Webiny’s code and in your own.

Measuring Code Execution

As noted in the overview, the benchmark tool measures the execution time and the memory difference, from the start to the end of execution, of the piece of code wrapped in the measurement method.

The measurement method looks like this:

const result = await context.benchmark.measure("my code", async() => {
    const innerResult = await doSomething();
    
    const processedInnerResult = await process(innerResult);
    
    return output(processedInnerResult);
});
// do something with that result
passTheResult(result);

The measurement is stored in an object implementing the BenchmarkMeasurement interface; a sketch of the interface follows the list below. The properties are:

  • name - identifier of the measurement (my code in our example)
  • category - category of the measurement - default is webiny
  • start - the Date object created when the measurement started
  • end - the Date object created when the measurement ended
  • elapsed - the difference, in milliseconds, between the start and end Date values
  • memory - the difference, in bytes, between the memory measured at the start and at the end (collected via process.memoryUsage().heapUsed)
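
For reference, here is a minimal sketch of what the BenchmarkMeasurement interface could look like, derived from the properties above (the actual definition lives in the Webiny source and may differ in detail):

interface BenchmarkMeasurement {
    // identifier of the measurement, e.g. "my code"
    name: string;
    // category of the measurement, "webiny" by default
    category: string;
    // when the measurement started and ended
    start: Date;
    end: Date;
    // end - start, in milliseconds
    elapsed: number;
    // heapUsed difference between the end and the start, in bytes
    memory: number;
}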

Categorizing the Measurements

You can set your own measurement category, as the default category is webiny:

const result = await context.benchmark.measure(
    {
        name: "my code",
        category: "my category"
    },
    async () => {
        const innerResult = await doSomething();

        const processedInnerResult = await process(innerResult);

        return output(processedInnerResult);
    }
);
// do something with that result
passTheResult(result);

By setting the category, you can filter the measurements easily later. You can write anything you like in both the category and name properties.

Enabling the Measurement

The measurements are not enabled by default because, we are quite sure, nobody wants them running all the time. If you want to enable the measurements, you will need to create a plugin which enables them:

import { ContextPlugin, createHandler } from "@webiny/handler-aws";

const plugin = new ContextPlugin(async(context) => {
    context.benchmark.enableOn(async() => {
        /**
         * You can check the request headers for x-benchmark.
         */
        if (context.request.headers["x-benchmark"] === "true") {
            return true;
        }
        /**
         * You can check the query parameters for enableBenchmark. 
         */
        const query = context.request.query as Record<string, string> | undefined;
        if (query?.["enableBenchmark"] === "true") {
            return true;
        }
        /**
         * You can check the environment variable as well.
         */
        return process.env.BENCHMARK_ENABLE === "true";
    });
});

const handler = createHandler({
    plugins: [
        // the plugin which enables the benchmarking must always go to the top of the plugin list
        plugin,
        // rest of the plugins
    ],
});

In our example we show a few possibilities for enabling the benchmarking, but you can enable it however you like.
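
For example, with the header check from the plugin above in place, a client could enable benchmarking for a single request like this (a sketch only; the endpoint URL and the GraphQL query are placeholders for your own deployment):

// hypothetical client-side request that enables benchmarking via the x-benchmark header
const response = await fetch("https://your-api-domain/graphql", {
    method: "POST",
    headers: {
        "content-type": "application/json",
        // picked up by the enableOn() callback shown above
        "x-benchmark": "true"
    },
    body: JSON.stringify({ query: "{ __typename }" })
});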

Although our example uses simple checks to enable the benchmark via headers or query parameters, we recommend making it stricter. For example, you can use context.security.getIdentity() to check whether a user is logged in, and only then enable the benchmark if the correct headers are sent.

You can also check for a randomly generated string in the headers or the query string. Just make it a bit stricter than our example, as you don’t want everyone to be able to enable the benchmarking. A sketch of the identity-based approach follows below.
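
Here is a minimal sketch of that idea, assuming the security context is available on context.security at this point (the x-benchmark header name is reused from the example above):

import { ContextPlugin } from "@webiny/handler-aws";

const plugin = new ContextPlugin(async (context) => {
    context.benchmark.enableOn(async () => {
        // only consider enabling the benchmark for authenticated users
        const identity = context.security.getIdentity();
        if (!identity) {
            return false;
        }
        // additionally require the header to be explicitly sent
        return context.request.headers["x-benchmark"] === "true";
    });
});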

The Measurement Results

By default, we output the results to the console (which ends up in CloudWatch) at the end of the request.

We use Fastify as our request and response handler, and we output the measurements in the onResponse and onTimeout hooks. If you want to know more about Fastify hooks and lifecycles, there is quite extensive documentation on the Fastify lifecycle and on the onResponse and onTimeout hooks.

Customising the Measurement Results Output

You can customize the output of the measurement logs, for example to send them to a service of your choice.

import { ContextPlugin, createHandler } from "@webiny/handler-aws";

const plugin = new ContextPlugin(async (context) => {
    context.benchmark.onOutput(async ({ benchmark, stop }) => {
        // sendToUserDefinedService() is a placeholder for your own delivery logic
        const result = await sendToUserDefinedService(benchmark.measurements);
        if (!result) {
            console.log(`Could not send benchmark results to a user defined service.`);
        }
        /**
         * If you want to prevent the measurements from being sent to CloudWatch, you need to break the cycle.
         */
        return stop();
    });
});

const handler = createHandler({
    plugins: [
        // register the plugin at the end of the plugins array
        plugin
    ]
});

You can add multiple plugins that attach an onOutput method, and they will all be executed, from the last added to the first, unless you break the cycle with stop().

Also, remember that we have measurement categories: if you have set a category on your measurement point, you can filter out unnecessary measurements in a custom output, as shown in the sketch below.
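
Here is a minimal sketch of filtering by category inside a custom output handler (the "my category" value and the logMeasurements() helper are hypothetical):

import { ContextPlugin } from "@webiny/handler-aws";

const plugin = new ContextPlugin(async (context) => {
    context.benchmark.onOutput(async ({ benchmark, stop }) => {
        // keep only the measurements that belong to our own category
        const myMeasurements = benchmark.measurements.filter(m => m.category === "my category");

        // logMeasurements() stands in for whatever you do with the filtered results
        await logMeasurements(myMeasurements);

        // prevent the default console / CloudWatch output
        return stop();
    });
});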

Conclusion

The benchmark was initially intended as an internal tool to help us figure out how much execution time is spent on a given piece of code. We decided to attach it to the main context object, and to document it, because it could help our users figure out if, and why, the execution of their code is slow.