Running Studies

Automator supports multiple commands (which are shown when you pass the --help option).

You can run a study with the startStudy command. As explained in the general concepts, a single study can create multiple measurement results, e.g. in the case of testing multiple websites.

Listing Study IDs

First we need to know which studies we can start. You will have your studies defined in the publicConfig.json file, which is loaded into the Surfmeter Lab extension. You can either look at your publicConfig.json file, or fetch the list of study IDs with:

./surfmeter-automator-headless listStudyIds

Or, when running inside the Docker container:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
    listStudyIds

This will print a list of study IDs that are currently available:

{"level":30,"time":"2023-02-11T13:30:54.684Z","pid":288,"hostname":"4d3931b9c534","studyIds":["STUDY_NATIVE_DNS","STUDY_NATIVE_ICMP_PING","STUDY_TOP20_WEBSITES","STUDY_SPEEDTEST","STUDY_YOUTUBE"],"msg":"Available study IDs"}

To pretty-print the output, you can use pino-pretty, as explained in our FAQ.
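
For example, one common way is to pipe the output through pino-pretty via npx (a minimal sketch, assuming the log goes to standard output and pino-pretty is available through npm):

./surfmeter-automator-headless listStudyIds | npx pino-pretty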

In the above case, the studies are:

  • STUDY_NATIVE_DNS: A DNS measurement using the native DNS resolver of the operating system
  • STUDY_NATIVE_ICMP_PING: An ICMP ping measurement using the native ICMP ping implementation of the operating system
  • STUDY_TOP20_WEBSITES: A measurement of the top 20 websites
  • STUDY_SPEEDTEST: A speedtest measurement in the browser
  • STUDY_YOUTUBE: A YouTube video measurement in the browser

Want to run a different study?

These study IDs correspond to the IDs from the publicConfig.json file that is currently loaded in the Surfmeter Lab extension. If you want to run a study that is not listed here, you need to update the publicConfig.json file first. This is explained in the configuration section.

Starting a Study

Start a study via the following command:

./surfmeter-automator-headless startStudy \
    --studyId <studyId>

Or, when running inside the Docker container:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
    startStudy --studyId <studyId>

Here, replace <studyId> with a valid study ID from the public configuration of the Surfmeter extension you have been supplied with, e.g. one of the IDs returned by the listStudyIds command above.

For example, we usually ship a YouTube video study with the following ID:

./surfmeter-automator-headless startStudy \
    --studyId STUDY_YOUTUBE

Or, when running inside the Docker container:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
    startStudy --studyId STUDY_YOUTUBE

This will automatically open a browser in the background and perform the video measurement. Running a study may take some time, depending on how it was defined (e.g. one minute for a video measurement). You can follow the log as the study runs; look for a counter that shows how far the study has progressed:

{"level":30,"time":"2023-02-11T13:41:59.749Z","pid":749,"hostname":"4d3931b9c534","studyId":"STUDY_YOUTUBE","msg":"Measuring 4/60s"}
{"level":30,"time":"2023-02-11T13:42:00.797Z","pid":749,"hostname":"4d3931b9c534","studyId":"STUDY_YOUTUBE","msg":"Measuring 5/60s"}
{"level":30,"time":"2023-02-11T13:42:01.892Z","pid":749,"hostname":"4d3931b9c534","studyId":"STUDY_YOUTUBE","msg":"Measuring 7/60s"}

For non-video measurements there might be no such counter, but you can still see the progress in the log.

The results will be logged to the command line at the end, and the program will exit. Look for the message containing Reports from study. Depending on your build of Surfmeter Lab, the results will also be sent to our server so you can view them in the dashboard or through the API.
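
To spot that message quickly, you can filter the log (a minimal sketch, assuming the log is written to standard output or standard error; we redirect both here):

./surfmeter-automator-headless startStudy --studyId STUDY_YOUTUBE 2>&1 | grep "Reports from study"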

Storing Measurement Results (Reports)

The measurement results may be quite large, and you may not want to view them in the log.

You can also write a study's measurement results to a JSON file in order to analyze them later. This is done via the --reportFile or --reportDir option.

./surfmeter-automator-headless startStudy \
  --studyId <studyId> \
  --reportFile output.json

When running inside the Docker container, note that you need to specify the full path to the report file, as the /home/surfmeter/reports directory is mounted as a volume:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
  startStudy --studyId <studyId> --reportFile /home/surfmeter/reports/output.json

In both cases, the --reportFile option determines where the output is stored: output.json in the current directory in the first example, and under /home/surfmeter/reports/ in the Docker example.

If you want to store the output in a directory, you can use the --reportDir option instead:

./surfmeter-automator-headless startStudy \
  --studyId <studyId> \
  --reportDir /path/to/output/

Or, when running inside the Docker container:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
  startStudy --studyId <studyId> --reportDir /home/surfmeter/reports/

The output file will be named according to the current date and time, the client UUID, and the study ID, e.g.:

2023-02-08T09:58:40.371Z_1167ce7b-7a44-4034-a775-27247bf3a0ec_STUDY_YOUTUBE.json
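
Since the names start with a timestamp, you can pick up the most recently written report with standard shell tools (a sketch, assuming the default /home/surfmeter/reports path):

ls -1t /home/surfmeter/reports/*.json | head -n 1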

Storing reports is mostly useful for locally debugging individual measurements. The actual measurement results are always visible on our Surfmeter Server or dashboard environment.

Report Format

The report format is JSON. An example can be seen here:

{
    "started_at": "2023-09-04T18:36:05.736Z",
    "ended_at": "2023-09-04T18:36:20.975Z",
    "measurement_reports": [
        [
            {
                "type": "VideoMeasurement",
                "id": 2033,
                "video_measurement_id": 1331,
                // ...
            }
        ]
    ],
    "study_summary": [
        {
            "study_id": "STUDY_YOUTUBE",
            "finish_status": "aborted",
            "aborted_reason": "videoLoadError"
        }
    ],
    "diagnostics": {
        "args": [
            "/home/surfmeter/surfmeter-lab-automator/node_modules/ts-node/dist/child/child-entrypoint.js",
            "/home/surfmeter/surfmeter-lab-automator/src/index.ts",
            "startStudy",
            "--studyId",
            "STUDY_YOUTUBE",
            "--noFullscreen",
            "--reportFile",
            "report.json"
        ],
        "env": {
            // ...
        },
        "version": "1.21.0",
        "surfmeter_lab_version": "1.23.17",
        "errors": []
    }
}

Each entry in the measurement_reports list contains the measurements created for one of the studies you ran. The format of a measurement is described in the measurement data reference. Note that the results also contain the statistic values and the Client Reports, so they behave just like an API export, with Client Reports already merged in.

The started_at and ended_at fields contain the start and end time of all the studies, respectively. They are in ISO 8601 format.

The study_summary list contains a summary of the studies that were run. It contains the study ID, the finish status, and the reason for aborting the study (if applicable).

The diagnostics object contains some information about the run of the study. It contains the command line arguments, the environment variables, the version of Automator, the version of Surfmeter Lab, and any errors that occurred during the run.

Please be aware that when you have a standalone build of Surfmeter Lab, the reports may not have IDs for the measurements, as they are not sent to the server.
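
To inspect a stored report from the command line, you can use a standard JSON tool such as jq (not part of Surfmeter; a minimal sketch, assuming a report file named output.json):

# Show each study's ID and finish status
jq '.study_summary[] | {study_id, finish_status}' output.json

# Count all measurements across the report
jq '[.measurement_reports[] | length] | add' output.json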

Starting Parallel Studies

You can run some studies in parallel, notably those that combine a browser-based measurement with a native measurement, or multiple native measurements. A good example of this is running a speed test while loading a video, or running an upload and a download speed test at the same time.

To run multiple studies, specify the --studyId option multiple times:

./surfmeter-automator-headless startStudy \
  --studyId STUDY_SPEEDTEST \
  --studyId STUDY_YOUTUBE

Or, when running inside the Docker container:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
  startStudy --studyId STUDY_SPEEDTEST --studyId STUDY_YOUTUBE

This would start both a speed test and a YouTube video measurement in parallel. The results will be stored in separate measurements on the server.

Warning

At the moment, you cannot run two parallel studies in the browser. This is due to a limitation of Chrome, which only allows one instance of the browser to run at a time. We're working on a solution for this.

Extra Options

You may want to specify different command line options for running a study. See the reference for a list of all available options.

Below, we'll discuss some of the more useful options.

Study Timeout

A study timeout is useful if you want to stop a study after a certain amount of time, for example when you have multiple studies scheduled and can't wait for them all to finish. You can specify a timeout in seconds via the --studyTimeout option:

./surfmeter-automator-headless startStudy \
  --studyId <studyId> \
  --studyTimeout 60

Or, when running inside the Docker container:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
  startStudy --studyId <studyId> --studyTimeout 60

There are different timeouts that you can set for Automator:

  • --globalTimeout: The maximum time that Automator will wait for a command to complete. If the command does not complete within this time, Automator will exit with an error. This timeout is useful to prevent Automator from hanging indefinitely. It will be extended automatically to be at least as long as the deeplinkLoadTimeout, deeplinkWaitTimeout, and studyTimeout combined. Note that on a global timeout event, Automator still tries to abort any running study and send the results (i.e. the fact that the study aborted) to the server within a grace period of 10 seconds (overridable with --globalTimeoutGracePeriod). If you set the grace period to 0, Automator will not wait for the results to be sent to the server and will hard-exit immediately.
  • --deeplinkLoadTimeout: The maximum time that Automator will wait for the Surfmeter Lab extension to load in the browser and present its “deep link” interface. If the deep link does not load within this time, Automator will exit with an error. You may need to increase this timeout on slower systems, where loading the Surfmeter Lab extension in the browser takes a while; on a fast system, you can decrease it.
  • --deeplinkWaitTimeout: The maximum time that Automator will wait for the Surfmeter Lab extension to report that the deep link has been completed. If the deep link does not complete within this time, Automator will exit with an error. You may need to increase this timeout on slower networks, where the deep link action takes some time to complete; on a fast network, you can decrease it.
  • --studyTimeout: The maximum time that Automator will wait for a study to complete. If the study does not complete within this time, Automator will exit with an error. This timeout is useful to prevent Automator from hanging indefinitely. If no study timeout is set, or the nominal duration of an individual study is larger than the value given on the command line, the study timeout will be increased accordingly, and a warning will be printed.
  • --sendTimeout: The maximum time that Automator will wait for the Surfmeter Lab extension to send the measurement results to the server. If the results are not sent within this time, Automator will exit with an error. You may need to increase this timeout on slower networks, where sending the results takes some time; on a fast network, you can decrease it, or set it to 0 in case you have a standalone build of Surfmeter Lab.

The following diagram shows how the timeouts are used:

sequenceDiagram
autonumber
  opt globalTimeout + globalTimeoutGracePeriod
    opt deeplinkLoadTimeout + deeplinkWaitTimeout
        Automator->>Browser: load deep link
        Browser->>Automator: send deep link result
    end
    opt studyTimeout
        opt "timeout" attribute in study description
            Automator->>Browser: Start study
            Browser->>Browser: Navigate to video, log in, ...
            Browser->>Automator: Study started/running
        end
        opt "duration" attribute in study description
            Browser->>Browser: Wait for completion
        end
        opt sendTimeout
            Browser->>Server: Send data to server
            Server->>Browser: Study IDs from server
            Browser->>Automator: Study results/report available
        end
    end
    opt globalTimeoutGracePeriod
        Automator->>Browser: Wait for study results
        Browser->>Server: Send final study results/report
    end
  end

We recommend setting the global timeout to a high value, e.g. 5 minutes, and then setting the other timeouts accordingly. This way, you can be sure that Automator will not hang indefinitely, but you can still run long studies. Of course, your absolute global timeout determines how often you can schedule studies.

If you schedule studies back-to-back, ensure the global timeout leaves enough time for the browser to close and the next study to start. If you have a schedule of 5 minutes, set the global timeout to 4 minutes and 50 seconds, for example.
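
For illustration, such a setup might look like this (a sketch with illustrative values; we assume here that these options take seconds, as --studyTimeout does):

# 290 s global timeout, i.e. 4 minutes and 50 seconds for a 5-minute schedule
./surfmeter-automator-headless startStudy \
  --studyId STUDY_YOUTUBE \
  --globalTimeout 290 \
  --studyTimeout 120 \
  --sendTimeout 60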

Scheduling sequential studies

You can also schedule studies sequentially, i.e. one after the other. This is done by specifying the --studyId option multiple times and additionally setting the --sequential flag, as shown below. Sequential runs are faster than separate invocations, as the browser can stay open between runs, and the deep link call is only performed once.
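
A minimal sketch, reusing the study IDs from above:

./surfmeter-automator-headless startStudy \
  --studyId STUDY_SPEEDTEST \
  --studyId STUDY_YOUTUBE \
  --sequential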

Flushing the Database

In case there are some unsent measurement results from previous runs, you can flush the database with the --flushDb option:

./surfmeter-automator-headless startStudy \
  --studyId <studyId> \
  --flushDb

Or, when running inside the Docker container:

docker exec --user surfmeter -it surfmeter surfmeter-lab-automator/surfmeter-automator-headless \
  startStudy --studyId <studyId> --flushDb

This will flush the database before the study is started. You can set an additional timeout with --flushDbTimeout (in milliseconds) that determines how long Automator waits for the database to be flushed. If the timeout is reached, the study will be started anyway.
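
For example, to wait at most five seconds for the flush (an illustrative value):

./surfmeter-automator-headless startStudy \
  --studyId <studyId> \
  --flushDb \
  --flushDbTimeout 5000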

Creating TCP Dumps

Want a PCAP file generated? See our special options.

Logging Network Requests

See this page for logging network requests, which gives you more insight into what happens during a study.

Making Screen Recordings

If you need to see what happens during study execution in the browser, see our dedicated section.


If you've run a study successfully, let's dig into the configuration.