This page was generated from docs/examples/DataSet/Benchmarking.ipynb. Interactive online version: Binder badge.

Dataset Benchmarking

This notebook is a behind-the-scenes benchmarking notebook, mainly for use by developers. The recommended way for users to interact with the dataset is via the Measurement object and its associated context manager. See the corresponding notebook for a comprehensive tutorial on how to use those.

[1]:
%matplotlib inline
from pathlib import Path

import numpy as np

import qcodes as qc
from qcodes.dataset import (
    ParamSpec,
    initialise_or_create_database_at,
    load_or_create_experiment,
    new_data_set,
)
Logging hadn't been started.
Activating auto-logging. Current session state plus future input saved.
Filename       : /home/runner/.qcodes/logs/command_history.log
Mode           : append
Output logging : True
Raw input log  : False
Timestamping   : True
State          : active
Qcodes Logfile : /home/runner/.qcodes/logs/241008-17767-qcodes.log
[2]:
qc.config.core.db_location
[2]:
'~/experiments.db'
[3]:
initialise_or_create_database_at(Path.cwd() / "benchmarking.db")

Setup

[4]:
exp = load_or_create_experiment("benchmarking", sample_name="the sample is a lie")
exp
[4]:
benchmarking#the sample is a lie#1@/home/runner/work/Qcodes/Qcodes/docs/examples/DataSet/benchmarking.db
--------------------------------------------------------------------------------------------------------

Now we can create a dataset. Note two things:

- if we don't specify an exp_id but there is an experiment in the experiment container, the dataset will go into that one.
- a dataset can also be created from the experiment object
[5]:
dataSet = new_data_set("benchmark_data", exp_id=exp.exp_id)
exp
[5]:
benchmarking#the sample is a lie#1@/home/runner/work/Qcodes/Qcodes/docs/examples/DataSet/benchmarking.db
--------------------------------------------------------------------------------------------------------
1-benchmark_data-1-None-0

In this benchmark we will assume that we are doing a 2D loop and investigate the performance implications of how we write to the dataset.

[6]:
x_shape = 100
y_shape = 100

Baseline: Generate data

[7]:
%%time
for x in range(x_shape):
    for y in range(y_shape):
        z = np.random.random_sample(1)
CPU times: user 9.83 ms, sys: 1.98 ms, total: 11.8 ms
Wall time: 11.4 ms
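For reference, the same number of samples can also be drawn in a single vectorized call, which shows the raw number-generation cost without Python loop overhead (a minimal sketch, not part of the benchmark above):

```python
import numpy as np

x_shape, y_shape = 100, 100

# Draw all 10,000 samples at once instead of one per loop iteration.
z = np.random.random_sample((x_shape, y_shape))
print(z.shape)  # (100, 100)
```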

and store in memory

[8]:
x_data = np.zeros((x_shape, y_shape))
y_data = np.zeros((x_shape, y_shape))
z_data = np.zeros((x_shape, y_shape))
[9]:
%%time
for x in range(x_shape):
    for y in range(y_shape):
        x_data[x, y] = x
        y_data[x, y] = y
        z_data[x, y] = np.random.random_sample()
CPU times: user 13.2 ms, sys: 0 ns, total: 13.2 ms
Wall time: 13.1 ms

Add to dataset inside double loop

[10]:
double_dataset = new_data_set(
    "doubledata",
    exp_id=exp.exp_id,
    specs=[
        ParamSpec("x", "numeric"),
        ParamSpec("y", "numeric"),
        ParamSpec("z", "numeric"),
    ],
)
double_dataset.mark_started()

Note that this is so slow that we only do a tenth of the computation.

[11]:
%%time
for x in range(x_shape // 10):
    for y in range(y_shape):
        double_dataset.add_results([{"x": x, "y": y, "z": np.random.random_sample()}])
CPU times: user 179 ms, sys: 66.5 ms, total: 245 ms
Wall time: 594 ms
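The slowness of inserting point by point is largely SQLite transaction overhead rather than anything specific to the dataset. This standalone sketch (plain `sqlite3`, a hypothetical `results` table, not the dataset's actual schema) reproduces the trend by comparing one commit per point against one transaction for all points; an in-memory database understates the on-disk commit cost but still shows the difference:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (x REAL, y REAL, z REAL)")

rows = [(float(i), float(j), 0.5) for i in range(10) for j in range(100)]

# One execute + commit per point, mirroring add_results inside the loop.
t0 = time.perf_counter()
for row in rows:
    conn.execute("INSERT INTO results VALUES (?, ?, ?)", row)
    conn.commit()
per_point = time.perf_counter() - t0

# The same points inserted in a single transaction.
t0 = time.perf_counter()
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", rows)
conn.commit()
batched = time.perf_counter() - t0

n = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(n)  # 2000 rows in total (both runs inserted 1000 each)
print(f"per-point: {per_point:.4f}s, batched: {batched:.4f}s")
```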

Add data in the outer loop and store it as a NumPy array

[12]:
single_dataset = new_data_set(
    "singledata",
    exp_id=exp.exp_id,
    specs=[ParamSpec("x", "array"), ParamSpec("y", "array"), ParamSpec("z", "array")],
)
single_dataset.mark_started()
x_data = np.zeros(y_shape)
y_data = np.zeros(y_shape)
z_data = np.zeros(y_shape)
[13]:
%%time
for x in range(x_shape):
    for y in range(y_shape):
        x_data[y] = x
        y_data[y] = y
        z_data[y] = np.random.random_sample()
    single_dataset.add_results([{"x": x_data, "y": y_data, "z": z_data}])
CPU times: user 41.7 ms, sys: 4.95 ms, total: 46.7 ms
Wall time: 57.7 ms

Save once after loop

[14]:
zero_dataset = new_data_set(
    "zerodata",
    exp_id=exp.exp_id,
    specs=[ParamSpec("x", "array"), ParamSpec("y", "array"), ParamSpec("z", "array")],
)
zero_dataset.mark_started()
x_data = np.zeros((x_shape, y_shape))
y_data = np.zeros((x_shape, y_shape))
z_data = np.zeros((x_shape, y_shape))
[15]:
%%time
for x in range(x_shape):
    for y in range(y_shape):
        x_data[x, y] = x
        y_data[x, y] = y
        z_data[x, y] = np.random.random_sample()
zero_dataset.add_results([{"x": x_data, "y": y_data, "z": z_data}])
CPU times: user 19.3 ms, sys: 1.03 ms, total: 20.4 ms
Wall time: 20.5 ms
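Incidentally, the coordinate grids filled element by element above can also be built without any Python loop; this sketch uses `np.meshgrid` to produce the same `x_data`/`y_data` arrays:

```python
import numpy as np

x_shape, y_shape = 100, 100

# indexing="ij" gives x_data[i, j] == i and y_data[i, j] == j, matching
# the arrays the double loop fills in one element at a time.
x_data, y_data = np.meshgrid(np.arange(x_shape), np.arange(y_shape), indexing="ij")
z_data = np.random.random_sample((x_shape, y_shape))
print(x_data[3, 7], y_data[3, 7])  # -> 3 7
```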

Array parameter

[16]:
array1D_dataset = new_data_set(
    "array1Ddata",
    exp_id=exp.exp_id,
    specs=[ParamSpec("x", "array"), ParamSpec("y", "array"), ParamSpec("z", "array")],
)
array1D_dataset.mark_started()
y_setpoints = np.arange(y_shape)
[17]:
%%timeit
for x in range(x_shape):
    x_data[x, :] = x
    array1D_dataset.add_results(
        [{"x": x_data[x, :], "y": y_setpoints, "z": np.random.random_sample(y_shape)}]
    )
41.3 ms ± 1.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[18]:
x_data = np.zeros((x_shape, y_shape))
y_data = np.zeros((x_shape, y_shape))
z_data = np.zeros((x_shape, y_shape))
y_setpoints = np.arange(y_shape)
[19]:
array0D_dataset = new_data_set(
    "array0Ddata",
    exp_id=exp.exp_id,
    specs=[ParamSpec("x", "array"), ParamSpec("y", "array"), ParamSpec("z", "array")],
)
array0D_dataset.mark_started()
[20]:
%%timeit
for x in range(x_shape):
    x_data[x, :] = x
    y_data[x, :] = y_setpoints
    z_data[x, :] = np.random.random_sample(y_shape)
array0D_dataset.add_results([{"x": x_data, "y": y_data, "z": z_data}])
1.88 ms ± 52 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

Insert many

[21]:
data = []
for i in range(100):
    for j in range(100):
        data.append({"x": i, "y": j, "z": np.random.random_sample()})
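The same list of result dicts can be built a little more compactly with `itertools.product` in place of the nested loops (an equivalent construction, shown only as a sketch):

```python
from itertools import product

import numpy as np

# product(range(100), range(100)) yields (i, j) pairs in the same order
# as the nested loops above.
zs = np.random.random_sample(100 * 100)
data = [
    {"x": i, "y": j, "z": z}
    for (i, j), z in zip(product(range(100), range(100)), zs)
]
print(len(data))  # 10000
```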
[22]:
many_Data = new_data_set(
    "many_data",
    exp_id=exp.exp_id,
    specs=[
        ParamSpec("x", "numeric"),
        ParamSpec("y", "numeric"),
        ParamSpec("z", "numeric"),
    ],
)
many_Data.mark_started()
[23]:
%%timeit
many_Data.add_results(data)
18.7 ms ± 169 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)