Logging and Sending Network Requests¶
Surfmeter Automator supports logging and sending network requests for additional debugging information. This captures the timing, size, remote IP, and other details of the network requests made by the browser for any study you run. This can be tremendously helpful for diagnosing issues with the browser or the network.
Logging Network Requests¶
Via the --logNetworkRequests
option, all network requests made by the browser are logged to a file in the system's temporary directory and used for further analysis.
Enabling this option provides additional statistics such as the video response time and the content server hostname/IP address. It is not enabled by default due to the possible performance penalty on low-end devices.
Tip
If you want to globally set this option for all your studies, you can add the following to your automatorConfig.json
file:
{
"version": 1,
// other properties like "updates"
"globalScheduleSettings": {
"options": {
"logNetworkRequests": true
}
},
"studySchedules": [
// your individual study schedules
]
}
For more information on the globalScheduleSettings
option, see the reference.
By default, the log file is deleted unless the keepNetworkRequests
option is specified. If you want to keep the log file for later analysis, add the --keepNetworkRequests
flag to your Automator command, or specify the "keepNetworkRequests": true
option in the automatorConfig.json
file.
The path to the file is printed in the log. The file is in gzipped line-delimited JSON format, so you can decompress and view it with a command like gunzip -c <path>
.
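Because each line of the decompressed file is one JSON object, the log can also be parsed directly from Python's standard library. The following is a minimal sketch; the `request.url` field accessed here is an assumption based on the example data format shown later in this document, not a guaranteed schema:

```python
import gzip
import json
from collections import Counter
from urllib.parse import urlparse

def read_network_log(path):
    """Yield one parsed JSON object per line of a gzipped NDJSON log."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                yield json.loads(line)

def hosts_in_log(path):
    """Count requests per URL host (field names are assumptions here)."""
    counts = Counter()
    for entry in read_network_log(path):
        url = entry.get("request", {}).get("url")
        if url:
            counts[urlparse(url).netloc] += 1
    return counts
```

This streams the file line by line instead of loading it all at once, which matters for long studies where the log can grow large.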
Sending Network Requests¶
You can also send all network requests to the server with the --sendNetworkRequests
option.
The data will be available in the Surfmeter Export API under the network_requests
resource. Note that the data is not indexed to our Analytics Dashboard.
Warning
This generates a lot of network traffic and should be used with care. Sending requests may cause multiple send processes to run. We do not yet support retry mechanisms, so if the connection is lost, the request will fail.
Also note that we do not support keeping network requests indefinitely, so the data will be deleted after at most 30 days. We recommend using the API to get the data on a daily basis if you need more long-term storage.
Getting Network Requests via the API¶
Please check our Export API documentation for more information on how to get the network requests via the API.
Note
The API defaults to a page size of 20, so increase the per_page
parameter to get more requests, or use pagination.
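As a sketch of the pagination loop, the helper below collects all pages from a paginated endpoint. The `fetch_page` callable is a stand-in for your actual HTTP call against the Export API's network_requests resource, and the page/per_page semantics are assumptions based on the note above:

```python
def fetch_all_network_requests(fetch_page, per_page=100):
    """Collect all records from a paginated endpoint.

    `fetch_page(page, per_page)` is a stand-in for the real API call;
    it should return a list of records, empty once past the last page.
    """
    results = []
    page = 1
    while True:
        batch = fetch_page(page, per_page)
        if not batch:
            break
        results.extend(batch)
        if len(batch) < per_page:  # short page means we reached the end
            break
        page += 1
    return results
```

Separating the pagination logic from the HTTP call also makes it easy to test against a stub before pointing it at the live API.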
Data Format¶
Internally, the Chrome DevTools Protocol is used to intercept and log the network requests, so the fields are in the same format as when using the DevTools Protocol. For more information, see the Chrome DevTools Protocol documentation on the Network.requestWillBeSent, Network.responseReceived, Network.loadingFinished, and Network.loadingFailed events.
In our API response, each request contains a nested response object, so you do not have to pair them yourself. Please note that we use snake_case
for field names, unlike Chrome, which uses camelCase
.
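The mapping between Chrome's camelCase names and the snake_case names in the API response can be sketched with a small conversion helper. This function is illustrative only (it is not part of the product), but it handles acronym runs such as remoteIPAddress correctly:

```python
import re

# Insert "_" at a lower/digit-to-upper boundary, or before the last
# capital of an acronym run (e.g. the "A" in "IPAddress").
_CAMEL_BOUNDARY = re.compile(r"(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])")

def camel_to_snake(name):
    """Convert a camelCase DevTools field name to snake_case."""
    return _CAMEL_BOUNDARY.sub("_", name).lower()

def snake_case_keys(obj):
    """Recursively convert all dict keys in a nested structure."""
    if isinstance(obj, dict):
        return {camel_to_snake(k): snake_case_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [snake_case_keys(v) for v in obj]
    return obj
```

For example, `camel_to_snake("mimeType")` yields `mime_type`, matching the field names shown in the example response below.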
Additionally, we provide the network_response.response.finished_or_failed_at_timestamp
field, which is the timestamp of the loadingFinished
or loadingFailed
event, whichever occurred. This is added as a convenience to make it easier to calculate total response times, which are defined as the difference between the send_end
and finished_or_failed_at_timestamp
fields.
Example¶
For example, consider the following request/response pair:
{
"id": 411,
"measurement_id": 123,
"client_time": "2025-08-19T08:18:00.115Z",
"client_time_skewed": "2025-08-19T08:18:00.115Z",
"created_at": "2025-08-19T08:18:26.956Z",
"updated_at": "2025-08-19T08:18:26.956Z",
"data": { // (1)!
"type": "Script",
"request": {
"url": "https://example.com/some-script.js",
"method": "GET",
"headers": {
"referer": "https://example.com/some-page.html",
"sec_ch_ua": "\"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"138\", \"Google Chrome\";v=\"138\"",
"user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36",
"sec_ch_ua_mobile": "?0",
"sec_ch_ua_platform": "\"Linux\""
}
},
"frame_id": "570AF5E850C2F859374DDF9BB8CB68E5",
"study_id": "STUDY_YOUTUBE", // (2)!
"loader_id": "277B02E037CBA0628F1D53F0B76FDAF1",
"timestamp": 6378353.008456,
"wall_time": 1755591480.115987,
"request_id": "1132753.12",
"document_url": "https://example.com/some-page.html"
},
"network_response": {
"id": 411,
"network_request_id": 4104074,
"client_time": "2025-08-19T08:18:00.173Z",
"client_time_skewed": "2025-08-19T08:18:00.173Z",
"created_at": "2025-08-19T08:18:26.959Z",
"updated_at": "2025-08-19T08:18:26.959Z",
"data": {
"type": "Script",
"frame_id": "570AF5E850C2F859374DDF9BB8CB68E5",
"response": {
"url": "https://example.com/some-script.js",
"status": 200,
"timing": {
"dns_end": -1,
"ssl_end": -1,
"push_end": 0,
"send_end": 1.617,
"dns_start": -1,
"proxy_end": -1,
"ssl_start": -1,
"push_start": 0,
"send_start": 1.565,
"connect_end": -1,
"proxy_start": -1,
"request_time": 6378353.035412,
"worker_ready": -1,
"worker_start": -1,
"connect_start": -1,
"worker_fetch_start": -1,
"receive_headers_end": 30.537,
"receive_headers_start": 30.501,
"worker_respond_with_settled": -1
},
"charset": "",
"headers": {
"date": "Tue, 19 Aug 2025 08:18:00 GMT",
"etag": "\"40a-63269338b8380\"",
"server": "Apache",
"content_type": "text/javascript",
"accept_ranges": "bytes",
"last_modified": "Thu, 10 Apr 2025 09:27:58 GMT",
"content_length": "1034"
},
"protocol": "h2",
"mime_type": "text/javascript",
"remote_port": 443,
"status_text": "",
"connection_id": 7951,
"response_time": 1755591480173.442,
"from_disk_cache": false,
"connection_reused": true,
"remote_ip_address": "1.2.3.4",
"encoded_data_length": 39,
"from_prefetch_cache": false,
"from_service_worker": false
},
"study_id": "STUDY_YOUTUBE",
"loader_id": "277B02E037CBA0628F1D53F0B76FDAF1",
"timestamp": 6378353.091208,
"request_id": "1132753.12",
"finished_or_failed_at_timestamp": 6378353.070411 // (3)!
}
}
}
1. This is the request data per the DevTools Protocol requestWillBeSent
event.
2. This is the study ID of the study that was running. You can use it to filter the requests by study if you ran multiple studies in parallel.
3. This is the timestamp at which the response finished or failed. We add this as a convenience to make it easier to calculate total response times.
Calculating Total Response Times¶
The following is an example of how you can calculate the total response time for a video request, based on the data from the API.
For reference, see the code from chrome-har-capturer
.
# Total Request Time Calculation Algorithm
def calculate_total_request_time(network_request_data):
    """
    Calculate the complete end-to-end duration of a network request,
    from start to finish (including response body download).
    """
    # Step 1: Extract the key timing data from the Chrome DevTools Protocol fields
    response_data = network_request_data["network_response"]["data"]
    timing_info = response_data["response"]["timing"]

    # Step 2: Get the two critical timestamps (both in Chrome's internal timeline):
    # ... when the request actually started (seconds)
    request_start_time = timing_info["request_time"]
    # ... when it was completely done (seconds) - from Surfmeter
    request_finished_time = response_data.get("finished_or_failed_at_timestamp")

    # Step 3: Handle missing data (some requests might not have finished timestamps)
    if not request_finished_time or request_finished_time <= 0:
        return None  # cannot calculate total time without a completion timestamp

    # Step 4: Normalize timestamps to milliseconds (Chrome uses inconsistent units!)
    def to_milliseconds(timestamp):
        """
        Chrome timestamps are sometimes seconds, sometimes milliseconds.
        Count the digits before the decimal point to determine the unit.
        """
        digit_count = len(str(int(timestamp)))
        if digit_count >= 13:
            # already in milliseconds (e.g., 1755591481408.136)
            return timestamp
        else:
            # in seconds, needs conversion (e.g., 1755591481.408136)
            return timestamp * 1000

    # Step 5: Convert both timestamps to a consistent millisecond format
    start_time_ms = to_milliseconds(request_start_time)
    finish_time_ms = to_milliseconds(request_finished_time)

    # Step 6: Calculate the total wall-clock time
    return finish_time_ms - start_time_ms


# Example usage with real data:
video_request = {
    "data": {
        "timestamp": 6378354.292458,  # browser-internal timestamp
        "request": {"url": "https://example.com/video_segment.ts"}
    },
    "network_response": {
        "data": {
            "response": {
                "timing": {
                    "request_time": 6378354.294126,  # request started here
                    "send_start": 0.343,
                    "send_end": 0.481,
                    # ... other fields
                    "receive_headers_end": 5.5
                }
            },
            "finished_or_failed_at_timestamp": 6378354.301374  # request completed here
        }
    }
}

# Calculate: (6378354.301374 - 6378354.294126) * 1000 = 7.248 milliseconds
total_time = calculate_total_request_time(video_request)
print(f"Total request time: {total_time:.2f} ms")  # Output: 7.25 ms