qcodes.dataset

The dataset module contains code related to the storage and retrieval of data to and from disk.

Classes:

AbstractSweep()

Abstract sweep class that defines an interface for concrete sweep classes.

ArraySweep(param, array[, delay, ...])

Sweep the values of a given array.

ConnectionPlus(sqlite3_connection)

A class to extend the sqlite3.Connection object.

DataSetProtocol(*args, **kwargs)

DataSetType(value[, names, module, ...])

InterDependencies_([dependencies, ...])

Object containing a group of ParamSpecs and the information about their internal relations to each other

LinSweep(param, start, stop, num_points[, ...])

Linear sweep.

LogSweep(param, start, stop, num_points[, ...])

Logarithmic sweep.

Measurement([exp, station, name])

Measurement procedure container.

ParamSpec(name, paramtype[, label, unit, ...])

Specification of a parameter: its name, storage type, label, unit, and its relations to other parameters.

RunDescriber(interdeps[, shapes])

The object that holds the description of each run in the database.

SQLiteSettings()

Class that holds the machine's sqlite options.

SequentialParamsCaller(*param_meas)

ThreadPoolParamsCaller(*param_meas[, ...])

Context manager for calling given parameters in a thread pool.

TogetherSweep(*sweeps)

A combination of multiple sweeps that are performed in parallel, such that all parameters in the TogetherSweep are set to their next values before any parameter is read.

DataSetDefinition(name, independent, dependent)

A specification for the creation of a Dataset or Measurement object

LinSweeper(*args, **kwargs)

An iterable version of the LinSweep class

Exceptions:

BreakConditionInterrupt

Functions:

call_params_threaded(param_meas)

Function to create threads per instrument for the given set of measurement parameters.

connect(name[, debug, version])

Connect or create database.

datasaver_builder(dataset_definitions, *[, ...])

A utility context manager intended to simplify the creation of datasavers

do0d(*param_meas[, write_period, ...])

Perform a measurement of a single parameter.

do1d(param_set, start, stop, num_points, ...)

Perform a 1D scan of param_set from start to stop in num_points measuring param_meas at each step.

do2d(param_set1, start1, stop1, num_points1, ...)

Perform a 2D scan of param_set1 from start1 to stop1 in num_points1 (outer loop) and param_set2 from start2 to stop2 in num_points2 (inner loop), measuring param_meas at each step.

dond(*params[, write_period, ...])

Perform an n-dimensional scan from the slowest (first) to the fastest (last) parameter, measuring m measurement parameters.

dond_into(datasaver, *params[, ...])

A doNd-like utility function that writes gridded data to the supplied DataSaver

experiments([conn])

List all the experiments in the container (database file from config)

extract_runs_into_db(source_db_path, ...[, ...])

Extract a selection of runs into another DB file.

get_data_export_path()

Get the path to export data to at the end of a measurement from config

get_default_experiment_id(conn)

Returns the latest created/loaded experiment's exp_id as the default experiment.

get_guids_by_run_spec(*[, captured_run_id, ...])

Get a list of matching guids from one or more pieces of runs specification.

guids_from_dbs(db_paths)

Extract all guids from the supplied database paths.

guids_from_dir(basepath)

Recursively find all db files under basepath and extract guids.

guids_from_list_str(s)

Get tuple of guids from a python/json string representation of a list.

import_dat_file(location[, exp])

This imports a QCoDeS legacy qcodes.data.data_set.DataSet into the database.

initialise_database([journal_mode])

Initialise a database in the location specified by the config object and set atomic commit and rollback mode of the db.

initialise_or_create_database_at(...[, ...])

This function sets up QCoDeS to refer to the given database file.

initialised_database_at(db_file_with_abs_path)

Initializes or creates a database and restores the 'db_location' afterwards.

load_by_counter(counter, exp_id[, conn])

Load a dataset given its counter in a given experiment

load_by_guid(guid[, conn])

Load a dataset by its GUID

load_by_id(run_id[, conn])

Load a dataset by run id

load_by_run_spec(*[, captured_run_id, ...])

Load a run from one or more pieces of runs specification.

load_experiment(exp_id[, conn])

Load experiment with the specified id (from database file from config)

load_experiment_by_name(name[, sample, ...])

Try to load experiment with the specified name.

load_from_file(path[, path_to_db])

Create an in-memory dataset from a file.

load_from_netcdf(path[, path_to_db])

Create an in-memory dataset from a netcdf file.

load_last_experiment()

Load last experiment (from database file from config)

load_or_create_experiment(experiment_name[, ...])

Find and return an experiment with the given name and sample name, or create one if not found.

new_data_set(name[, exp_id, specs, values, ...])

Create a new dataset in the currently active/selected database.

new_experiment(name, sample_name[, ...])

Create a new experiment (in the database file from config)

plot_by_id(run_id[, axes, colorbars, ...])

Construct all plots for a given run_id.

plot_dataset(dataset[, axes, colorbars, ...])

Construct all plots for a given dataset

reset_default_experiment_id([conn])

Resets the default experiment id to the last experiment in the db.

rundescriber_from_json(json_str)

Deserialize a JSON string into a RunDescriber of the current version

class qcodes.dataset.AbstractSweep[source]

Bases: ABC, Generic[T]

Abstract sweep class that defines an interface for concrete sweep classes.

Methods:

get_setpoints()

Returns an array of setpoint values for this sweep.

__class_getitem__(params)

Parameterizes a generic class.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

Actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

abstract get_setpoints() ndarray[Any, dtype[T]][source]

Returns an array of setpoint values for this sweep.

abstract property param: ParameterBase

Returns the Qcodes sweep parameter.

abstract property delay: float

Delay between two consecutive sweep points.

abstract property num_points: int

Number of sweep points.

abstract property post_actions: ActionsT

Actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override this to implement it correctly.

classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….

class qcodes.dataset.ArraySweep(param: ParameterBase, array: Sequence[Any] | npt.NDArray[T], delay: float = 0, post_actions: ActionsT = (), get_after_set: bool = False)[source]

Bases: AbstractSweep, Generic[T]

Sweep the values of a given array.

Parameters:
  • param – Qcodes parameter for sweep.

  • array – array with values to sweep.

  • delay – Time in seconds between two consecutive sweep points.

  • post_actions – Actions to do after each sweep point.

  • get_after_set – Should we perform a get on the parameter after setting it and store the value returned by get rather than the set value in the dataset.

Methods:

get_setpoints()

Returns an array of setpoint values for this sweep.

__class_getitem__(params)

Parameterizes a generic class.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

Actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

get_setpoints() ndarray[Any, dtype[T]][source]

Returns an array of setpoint values for this sweep.

property param: ParameterBase

Returns the Qcodes sweep parameter.

property delay: float

Delay between two consecutive sweep points.

property num_points: int

Number of sweep points.

property post_actions: ActionsT

Actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override this to implement it correctly.

classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….
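
As a minimal sketch (not part of the QCoDeS documentation), the following constructs an ArraySweep over a non-uniform set of values and inspects its setpoints; the gate parameter is a placeholder, and Parameter is assumed to be importable from qcodes.parameters.

import numpy as np

from qcodes.dataset import ArraySweep
from qcodes.parameters import Parameter

# A software-only placeholder parameter standing in for a real instrument parameter.
gate = Parameter("gate", label="Gate voltage", unit="V", set_cmd=None, get_cmd=None)

# Sweep over a non-uniform set of values, waiting 10 ms after each set.
sweep = ArraySweep(gate, np.array([0.0, 0.1, 0.3, 0.7, 1.5]), delay=0.01)
print(sweep.num_points)       # 5
print(sweep.get_setpoints())  # [0.  0.1 0.3 0.7 1.5]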

exception qcodes.dataset.BreakConditionInterrupt[source]

Bases: Exception

add_note()

Exception.add_note(note) – add a note to the exception

args
with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

class qcodes.dataset.ConnectionPlus(sqlite3_connection: sqlite3.Connection)[source]

Bases: ObjectProxy

A class to extend the sqlite3.Connection object. Since sqlite3.Connection has no __dict__, we cannot add attributes to its instances directly.

It is not allowed to instantiate a new ConnectionPlus object from an existing ConnectionPlus object.

It is recommended to create a ConnectionPlus using the function connect().

Attributes:

atomic_in_progress

A bool describing whether the connection is currently in the middle of an atomic block of transactions, thus allowing atomic context managers to be nested.

path_to_dbfile

Path to the database file of the connection.

atomic_in_progress: bool = False

A bool describing whether the connection is currently in the middle of an atomic block of transactions, thus allowing atomic context managers to be nested.

path_to_dbfile: str = ''

Path to the database file of the connection.

class qcodes.dataset.DataSetProtocol(*args, **kwargs)[source]

Bases: Protocol

Attributes:

persistent_traits

pristine

running

completed

run_id

captured_run_id

counter

captured_counter

guid

number_of_results

name

exp_name

exp_id

sample_name

run_timestamp_raw

completed_timestamp_raw

snapshot

metadata

path_to_db

paramspecs

description

parent_dataset_links

export_info

cache

dependent_parameters

Methods:

prepare(*, snapshot, interdeps[, shapes, ...])

mark_completed()

run_timestamp([fmt])

completed_timestamp([fmt])

add_snapshot(snapshot[, overwrite])

add_metadata(tag, metadata)

export([export_type, path, prefix, ...])

get_parameter_data(*params[, start, end, ...])

get_parameters()

to_xarray_dataarray_dict(*params[, start, ...])

to_xarray_dataset(*params[, start, end, ...])

to_pandas_dataframe_dict(*params[, start, end])

to_pandas_dataframe(*params[, start, end])

the_same_dataset_as(other)

__class_getitem__(params)

Parameterizes a generic class.

persistent_traits: tuple[str, ...] = ('name', 'guid', 'number_of_results', 'exp_name', 'sample_name', 'completed', 'snapshot', 'run_timestamp_raw', 'description', 'completed_timestamp_raw', 'metadata', 'parent_dataset_links', 'captured_run_id', 'captured_counter')
prepare(*, snapshot: Mapping[Any, Any], interdeps: InterDependencies_, shapes: Shapes | None = None, parent_datasets: Sequence[Mapping[Any, Any]] = (), write_in_background: bool = False) None[source]
property pristine: bool
property running: bool
property completed: bool
mark_completed() None[source]
property run_id: int
property captured_run_id: int
property counter: int
property captured_counter: int
property guid: str
property number_of_results: int
property name: str
property exp_name: str
property exp_id: int
property sample_name: str
run_timestamp(fmt: str = '%Y-%m-%d %H:%M:%S') str | None[source]
property run_timestamp_raw: float | None
completed_timestamp(fmt: str = '%Y-%m-%d %H:%M:%S') str | None[source]
property completed_timestamp_raw: float | None
property snapshot: dict[str, Any] | None
add_snapshot(snapshot: str, overwrite: bool = False) None[source]
add_metadata(tag: str, metadata: Any) None[source]
property metadata: dict[str, Any]
property path_to_db: str | None
property paramspecs: dict[str, ParamSpec]
property description: RunDescriber
export(export_type: DataExportType | str | None = None, path: Path | str | None = None, prefix: str | None = None, automatic_export: bool = False) None[source]
property export_info: ExportInfo
property cache: DataSetCache[DataSetProtocol]
get_parameter_data(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None, callback: Callable[[float], None] | None = None) ParameterData[source]
get_parameters() list[ParamSpec][source]
property dependent_parameters: tuple[ParamSpecBase, ...]
to_xarray_dataarray_dict(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None, use_multi_index: Literal['auto', 'always', 'never'] = 'auto') dict[str, xr.DataArray][source]
to_xarray_dataset(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None, use_multi_index: Literal['auto', 'always', 'never'] = 'auto') xr.Dataset[source]
to_pandas_dataframe_dict(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None) dict[str, pd.DataFrame][source]
to_pandas_dataframe(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None) pd.DataFrame[source]
the_same_dataset_as(other: DataSetProtocol) bool[source]
classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….

class qcodes.dataset.DataSetType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: str, Enum

Attributes:

DataSet

DataSetInMem

Methods:

encode([encoding, errors])

Encode the string using the codec registered for encoding.

replace(old, new[, count])

Return a copy with all occurrences of substring old replaced by new.

split([sep, maxsplit])

Return a list of the substrings in the string, using sep as the separator string.

rsplit([sep, maxsplit])

Return a list of the substrings in the string, using sep as the separator string.

join(iterable, /)

Concatenate any number of strings.

capitalize()

Return a capitalized version of the string.

casefold()

Return a version of the string suitable for caseless comparisons.

title()

Return a version of the string where each word is titlecased.

center(width[, fillchar])

Return a centered string of length width.

count(sub[, start[, end]])

Return the number of non-overlapping occurrences of substring sub in string S[start:end].

expandtabs([tabsize])

Return a copy where all tab characters are expanded using spaces.

find(sub[, start[, end]])

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].

partition(sep, /)

Partition the string into three parts using the given separator.

index(sub[, start[, end]])

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].

ljust(width[, fillchar])

Return a left-justified string of length width.

lower()

Return a copy of the string converted to lowercase.

lstrip([chars])

Return a copy of the string with leading whitespace removed.

rfind(sub[, start[, end]])

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].

rindex(sub[, start[, end]])

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].

rjust(width[, fillchar])

Return a right-justified string of length width.

rstrip([chars])

Return a copy of the string with trailing whitespace removed.

rpartition(sep, /)

Partition the string into three parts using the given separator.

splitlines([keepends])

Return a list of the lines in the string, breaking at line boundaries.

strip([chars])

Return a copy of the string with leading and trailing whitespace removed.

swapcase()

Convert uppercase characters to lowercase and lowercase characters to uppercase.

translate(table, /)

Replace each character in the string using the given translation table.

upper()

Return a copy of the string converted to uppercase.

startswith(prefix[, start[, end]])

Return True if S starts with the specified prefix, False otherwise.

endswith(suffix[, start[, end]])

Return True if S ends with the specified suffix, False otherwise.

removeprefix(prefix, /)

Return a str with the given prefix string removed if present.

removesuffix(suffix, /)

Return a str with the given suffix string removed if present.

isascii()

Return True if all characters in the string are ASCII, False otherwise.

islower()

Return True if the string is a lowercase string, False otherwise.

isupper()

Return True if the string is an uppercase string, False otherwise.

istitle()

Return True if the string is a title-cased string, False otherwise.

isspace()

Return True if the string is a whitespace string, False otherwise.

isdecimal()

Return True if the string is a decimal string, False otherwise.

isdigit()

Return True if the string is a digit string, False otherwise.

isnumeric()

Return True if the string is a numeric string, False otherwise.

isalpha()

Return True if the string is an alphabetic string, False otherwise.

isalnum()

Return True if the string is an alpha-numeric string, False otherwise.

isidentifier()

Return True if the string is a valid Python identifier, False otherwise.

isprintable()

Return True if the string is printable, False otherwise.

zfill(width, /)

Pad a numeric string with zeros on the left, to fill a field of the given width.

format(*args, **kwargs)

Return a formatted version of S, using substitutions from args and kwargs.

format_map(mapping)

Return a formatted version of S, using substitutions from mapping.

maketrans

Return a translation table usable for str.translate().

__dir__()

Returns public methods and other interesting attributes.

DataSet = 'DataSet'
encode(encoding='utf-8', errors='strict')

Encode the string using the codec registered for encoding.

encoding

The encoding in which to encode the string.

errors

The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.

replace(old, new, count=-1, /)

Return a copy with all occurrences of substring old replaced by new.

count

Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.

If the optional argument count is given, only the first count occurrences are replaced.

split(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits. -1 (the default value) means no limit.

Splitting starts at the front of the string and works to the end.

Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.

rsplit(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits. -1 (the default value) means no limit.

Splitting starts at the end of the string and works to the front.

join(iterable, /)

Concatenate any number of strings.

The string whose method is called is inserted in between each given string. The result is returned as a new string.

Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'

capitalize()

Return a capitalized version of the string.

More specifically, make the first character have upper case and the rest lower case.

casefold()

Return a version of the string suitable for caseless comparisons.

title()

Return a version of the string where each word is titlecased.

More specifically, words start with uppercased characters and all remaining cased characters have lower case.

center(width, fillchar=' ', /)

Return a centered string of length width.

Padding is done using the specified fill character (default is a space).

count(sub[, start[, end]]) int

Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.

expandtabs(tabsize=8)

Return a copy where all tab characters are expanded using spaces.

If tabsize is not given, a tab size of 8 characters is assumed.

find(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

partition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing the original string and two empty strings.

index(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

ljust(width, fillchar=' ', /)

Return a left-justified string of length width.

Padding is done using the specified fill character (default is a space).

lower()

Return a copy of the string converted to lowercase.

lstrip(chars=None, /)

Return a copy of the string with leading whitespace removed.

If chars is given and not None, remove characters in chars instead.

rfind(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

rindex(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

rjust(width, fillchar=' ', /)

Return a right-justified string of length width.

Padding is done using the specified fill character (default is a space).

rstrip(chars=None, /)

Return a copy of the string with trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

rpartition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing two empty strings and the original string.

splitlines(keepends=False)

Return a list of the lines in the string, breaking at line boundaries.

Line breaks are not included in the resulting list unless keepends is given and true.

strip(chars=None, /)

Return a copy of the string with leading and trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

swapcase()

Convert uppercase characters to lowercase and lowercase characters to uppercase.

translate(table, /)

Replace each character in the string using the given translation table.

table

Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.

The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.

upper()

Return a copy of the string converted to uppercase.

startswith(prefix[, start[, end]]) bool

Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.

endswith(suffix[, start[, end]]) bool

Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.

removeprefix(prefix, /)

Return a str with the given prefix string removed if present.

If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.

removesuffix(suffix, /)

Return a str with the given suffix string removed if present.

If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.

isascii()

Return True if all characters in the string are ASCII, False otherwise.

ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.

islower()

Return True if the string is a lowercase string, False otherwise.

A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.

isupper()

Return True if the string is an uppercase string, False otherwise.

A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.

istitle()

Return True if the string is a title-cased string, False otherwise.

In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.

isspace()

Return True if the string is a whitespace string, False otherwise.

A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.

isdecimal()

Return True if the string is a decimal string, False otherwise.

A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.

isdigit()

Return True if the string is a digit string, False otherwise.

A string is a digit string if all characters in the string are digits and there is at least one character in the string.

isnumeric()

Return True if the string is a numeric string, False otherwise.

A string is numeric if all characters in the string are numeric and there is at least one character in the string.

isalpha()

Return True if the string is an alphabetic string, False otherwise.

A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.

isalnum()

Return True if the string is an alpha-numeric string, False otherwise.

A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.

isidentifier()

Return True if the string is a valid Python identifier, False otherwise.

Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.

isprintable()

Return True if the string is printable, False otherwise.

A string is printable if all of its characters are considered printable in repr() or if it is empty.

zfill(width, /)

Pad a numeric string with zeros on the left, to fill a field of the given width.

The string is never truncated.

format(*args, **kwargs) str

Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).

format_map(mapping) str

Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).

static maketrans()

Return a translation table usable for str.translate().

If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.

DataSetInMem = 'DataSetInMem'
__dir__()

Returns public methods and other interesting attributes.

class qcodes.dataset.InterDependencies_(dependencies: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, inferences: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, standalones: tuple[ParamSpecBase, ...] = ())[source]

Bases: object

Object containing a group of ParamSpecs and the information about their internal relations to each other

Methods:

validate_paramspectree(paramspectree)

Validate a ParamSpecTree.

what_depends_on(ps)

Return a tuple of the parameters that depend on the given parameter.

what_is_inferred_from(ps)

Return a tuple of the parameters that are inferred from the given parameter.

extend([dependencies, inferences, standalones])

Create a new InterDependencies_ object that is an extension of this instance with the provided input

remove(parameter)

Create a new InterDependencies_ object that is similar to this instance, but has the given parameter removed.

validate_subset(parameters)

Validate that the given parameters form a valid subset of the parameters of this instance, meaning that all the given parameters are actually found in this instance and that there are no missing dependencies/inferences.

Attributes:

paramspecs

Return the ParamSpecBase objects of this instance

non_dependencies

Return all parameters that are not dependencies of other parameters, i.e. return the top level parameters.

names

Return all the names of the parameters of this instance

static validate_paramspectree(paramspectree: dict[ParamSpecBase, tuple[ParamSpecBase, ...]]) tuple[type[Exception], str] | None[source]

Validate a ParamSpecTree. Apart from adhering to the type, a ParamSpecTree must not have any cycles.

Returns:

A tuple with an exception type and an error message, or None if the ParamSpecTree is valid.

what_depends_on(ps: ParamSpecBase) tuple[ParamSpecBase, ...][source]

Return a tuple of the parameters that depend on the given parameter. Returns an empty tuple if nothing depends on the given parameter

Parameters:

ps – the parameter to look up

Raises:

ValueError if the parameter is not part of this object

what_is_inferred_from(ps: ParamSpecBase) tuple[ParamSpecBase, ...][source]

Return a tuple of the parameters that are inferred from the given parameter. Returns an empty tuple if nothing is inferred from the given parameter

Parameters:

ps – the parameter to look up

Raises:

ValueError if the parameter is not part of this object

property paramspecs: tuple[ParamSpecBase, ...]

Return the ParamSpecBase objects of this instance

property non_dependencies: tuple[ParamSpecBase, ...]

Return all parameters that are not dependencies of other parameters, i.e. return the top level parameters. Returned tuple is sorted by parameter names.

property names: tuple[str, ...]

Return all the names of the parameters of this instance

extend(dependencies: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, inferences: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, standalones: tuple[ParamSpecBase, ...] = ()) InterDependencies_[source]

Create a new InterDependencies_ object that is an extension of this instance with the provided input

remove(parameter: ParamSpecBase) InterDependencies_[source]

Create a new InterDependencies_ object that is similar to this instance, but has the given parameter removed.

validate_subset(parameters: Sequence[ParamSpecBase]) None[source]

Validate that the given parameters form a valid subset of the parameters of this instance, meaning that all the given parameters are actually found in this instance and that there are no missing dependencies/inferences.

Parameters:

parameters – The collection of ParamSpecBases to validate

Raises:
  • DependencyError, if a dependency is missing

  • InferenceError, if an inference is missing
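
A minimal sketch of building an InterDependencies_ object by hand, assuming ParamSpecBase can be imported from qcodes.dataset.descriptions.param_spec; the parameter names are purely illustrative.

from qcodes.dataset import InterDependencies_
from qcodes.dataset.descriptions.param_spec import ParamSpecBase

x = ParamSpecBase("x", "numeric", label="X", unit="V")
y = ParamSpecBase("y", "numeric", label="Y", unit="A")

# y depends on x; there are no inferred or standalone parameters.
idps = InterDependencies_(dependencies={y: (x,)})

print(idps.names)               # names of all registered parameters, e.g. ('y', 'x')
print(idps.what_depends_on(x))  # a tuple containing the ParamSpecBase for y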

class qcodes.dataset.LinSweep(param: ParameterBase, start: float, stop: float, num_points: int, delay: float = 0, post_actions: ActionsT = (), get_after_set: bool = False)[source]

Bases: AbstractSweep[float64]

Linear sweep.

Parameters:
  • param – Qcodes parameter to sweep.

  • start – Sweep start value.

  • stop – Sweep end value.

  • num_points – Number of sweep points.

  • delay – Time in seconds between two consecutive sweep points.

  • post_actions – Actions to do after each sweep point.

  • get_after_set – Should we perform a get on the parameter after setting it and store the value returned by get rather than the set value in the dataset.

Methods:

get_setpoints()

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

__class_getitem__(params)

Parameterizes a generic class.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

Actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

get_setpoints() ndarray[Any, dtype[float64]][source]

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

property param: ParameterBase

Returns the Qcodes sweep parameter.

property delay: float

Delay between two consecutive sweep points.

property num_points: int

Number of sweep points.

property post_actions: ActionsT

Actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override this to implement it correctly.

classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….
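
A minimal sketch of using a LinSweep with dond (documented further down); the database path, experiment name and parameters are placeholders, and Parameter is assumed to be importable from qcodes.parameters.

import numpy as np

from qcodes.dataset import (
    LinSweep,
    dond,
    initialise_or_create_database_at,
    load_or_create_experiment,
)
from qcodes.parameters import Parameter

# Placeholder database and experiment; adjust the path and names as needed.
initialise_or_create_database_at("./example.db")
load_or_create_experiment("sweep_demo", sample_name="no_sample")

# Software-only placeholder parameters standing in for instrument parameters.
gate = Parameter("gate", label="Gate voltage", unit="V", set_cmd=None, get_cmd=None)
current = Parameter("current", label="Current", unit="A",
                    get_cmd=lambda: np.random.random())

# 51 evenly spaced points from 0 V to 1 V with 10 ms settling time per point.
dataset, _, _ = dond(LinSweep(gate, 0.0, 1.0, 51, delay=0.01), current)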

class qcodes.dataset.LogSweep(param: ParameterBase, start: float, stop: float, num_points: int, delay: float = 0, post_actions: ActionsT = (), get_after_set: bool = False)[source]

Bases: AbstractSweep[float64]

Logarithmic sweep.

Parameters:
  • param – Qcodes parameter for sweep.

  • start – Sweep start value.

  • stop – Sweep end value.

  • num_points – Number of sweep points.

  • delay – Time in seconds between two consecutive sweep points.

  • post_actions – Actions to do after each sweep point.

  • get_after_set – Should we perform a get on the parameter after setting it and store the value returned by get rather than the set value in the dataset.

Methods:

get_setpoints()

Logarithmically spaced numpy array for supplied start, stop and num_points.

__class_getitem__(params)

Parameterizes a generic class.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

Actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

get_setpoints() ndarray[Any, dtype[float64]][source]

Logarithmically spaced numpy array for supplied start, stop and num_points.

property param: ParameterBase

Returns the Qcodes sweep parameter.

property delay: float

Delay between two consecutive sweep points.

property num_points: int

Number of sweep points.

property post_actions: ActionsT

Actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override this to implement it correctly.

classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….

class qcodes.dataset.Measurement(exp: Experiment | None = None, station: Station | None = None, name: str = '')[source]

Bases: object

Measurement procedure container. Note that multiple measurement instances cannot be nested.

Parameters:
  • exp – Specify the experiment to use. If not given the default one is used. The default experiment is the latest one created.

  • station – The QCoDeS station to snapshot. If not given, the default one is used.

  • name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

Attributes:

parameters

write_period

Methods:

register_parent(parent, link_type[, description])

Register a parent for the outcome of this measurement

register_parameter(parameter[, setpoints, ...])

Add QCoDeS Parameter to the dataset produced by running this measurement.

register_custom_parameter(name[, label, ...])

Register a custom parameter with this measurement

unregister_parameter(parameter)

Remove a custom/QCoDeS parameter from the dataset produced by running this measurement

add_before_run(func, args)

Add an action to be performed before the measurement.

add_after_run(func, args)

Add an action to be performed after the measurement.

add_subscriber(func, state)

Add a subscriber to the dataset of the measurement.

set_shapes(shapes)

Set the shapes of the data to be recorded in this measurement.

run([write_in_background, in_memory_cache, ...])

Returns the context manager for the experimental run

property parameters: dict[str, ParamSpecBase]
property write_period: float
register_parent(parent: DataSetProtocol, link_type: str, description: str = '') T[source]

Register a parent for the outcome of this measurement

Parameters:
  • parent – The parent dataset

  • link_type – A name for the type of parent-child link

  • description – A free-text description of the relationship

register_parameter(parameter: ParameterBase, setpoints: Sequence[str | ParameterBase] | None = None, basis: Sequence[str | ParameterBase] | None = None, paramtype: str | None = None) T[source]

Add QCoDeS Parameter to the dataset produced by running this measurement.

Parameters:
  • parameter – The parameter to add

  • setpoints – The Parameter representing the setpoints for this parameter. If this parameter is a setpoint, it should be left blank

  • basis – The parameters that this parameter is inferred from. If this parameter is not inferred from any other parameters, this should be left blank.

  • paramtype – Type of the parameter, i.e. the SQL storage class, If None the paramtype will be inferred from the parameter type and the validator of the supplied parameter.

register_custom_parameter(name: str, label: str | None = None, unit: str | None = None, basis: Sequence[str | ParameterBase] | None = None, setpoints: Sequence[str | ParameterBase] | None = None, paramtype: str = 'numeric') T[source]

Register a custom parameter with this measurement

Parameters:
  • name – The name that this parameter will have in the dataset. Must be unique (will overwrite an existing parameter with the same name!)

  • label – The label

  • unit – The unit

  • basis – A list of either QCoDeS Parameters or the names of parameters already registered in the measurement that this parameter is inferred from

  • setpoints – A list of either QCoDeS Parameters or the names of parameters already registered in the measurement that are the setpoints of this parameter

  • paramtype – Type of the parameter, i.e. the SQL storage class

unregister_parameter(parameter: Sequence[str | ParameterBase]) None[source]

Remove a custom/QCoDeS parameter from the dataset produced by running this measurement

add_before_run(func: Callable[[...], Any], args: Sequence[Any]) T[source]

Add an action to be performed before the measurement.

Parameters:
  • func – Function to be performed

  • args – The arguments to said function

add_after_run(func: Callable[[...], Any], args: Sequence[Any]) T[source]

Add an action to be performed after the measurement.

Parameters:
  • func – Function to be performed

  • args – The arguments to said function

add_subscriber(func: Callable[[...], Any], state: MutableSequence[Any] | MutableMapping[Any, Any]) T[source]

Add a subscriber to the dataset of the measurement.

Parameters:
  • func – A function taking three positional arguments: a list of tuples of parameter values, an integer, and a mutable variable (list or dict) that holds state and to which updates are written.

  • state – The variable to hold the state.

set_shapes(shapes: Shapes | None) None[source]

Set the shapes of the data to be recorded in this measurement.

Parameters:

shapes – Dictionary from names of dependent parameters to a tuple of integers describing the shape of the measurement.

run(write_in_background: bool | None = None, in_memory_cache: bool | None = True, dataset_class: DataSetType = DataSetType.DataSet, parent_span: Span | None = None) Runner[source]

Returns the context manager for the experimental run

Parameters:
  • write_in_background – If True, results added within the context manager via DataSaver.add_result will be stored in the background, without blocking the main thread executing the context manager. By default the setting for writing in the background is read from the qcodesrc.json config file.

  • in_memory_cache – Should measured data be kept in memory and available as part of the dataset.cache object?

  • dataset_class – Enum representing the class used to store the data.

  • parent_span – An optional OpenTelemetry span that this run should be registered as a child of, if using OpenTelemetry.
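
A minimal sketch of the typical Measurement workflow: register an independent and a dependent parameter, then add results inside the run() context manager. The database path, experiment name and parameters are placeholders.

from qcodes.dataset import (
    Measurement,
    initialise_or_create_database_at,
    load_or_create_experiment,
)
from qcodes.parameters import Parameter

initialise_or_create_database_at("./example.db")
exp = load_or_create_experiment("measurement_demo", sample_name="no_sample")

x = Parameter("x", label="X", unit="V", set_cmd=None, get_cmd=None)
y = Parameter("y", label="Y", unit="A", set_cmd=None, get_cmd=None)

meas = Measurement(exp=exp, name="sketch")
meas.register_parameter(x)                  # independent (setpoint) parameter
meas.register_parameter(y, setpoints=(x,))  # y depends on x

with meas.run() as datasaver:
    for setpoint in (0.0, 0.5, 1.0):
        x(setpoint)
        datasaver.add_result((x, setpoint), (y, setpoint ** 2))

dataset = datasaver.dataset  # the dataset produced by this run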

class qcodes.dataset.ParamSpec(name: str, paramtype: str, label: str | None = None, unit: str | None = None, inferred_from: Sequence[ParamSpec | str] | None = None, depends_on: Sequence[ParamSpec | str] | None = None, **metadata: Any)[source]

Bases: ParamSpecBase

Parameters:
  • name – name of the parameter

  • paramtype – type of the parameter, i.e. the SQL storage class

  • label – label of the parameter

  • unit – unit of the parameter

  • inferred_from – the parameters that this parameter is inferred from

  • depends_on – the parameters that this parameter depends on

  • **metadata – additional metadata to be stored with the parameter

Attributes:

inferred_from_

depends_on_

inferred_from

depends_on

allowed_types

Methods:

copy()

Make a copy of self

__hash__()

Allow ParamSpecs in data structures that use hashing (i.e. sets).

base_version()

Return a ParamSpecBase object with the same name, paramtype, label and unit as this ParamSpec

sql_repr()

property inferred_from_: list[str]
property depends_on_: list[str]
property inferred_from: str
property depends_on: str
copy() ParamSpec[source]

Make a copy of self

__hash__() int[source]

Allow ParamSpecs in data structures that use hashing (i.e. sets)

base_version() ParamSpecBase[source]

Return a ParamSpecBase object with the same name, paramtype, label and unit as this ParamSpec

allowed_types: ClassVar[list[str]] = ['array', 'numeric', 'text', 'complex']
sql_repr() str
class qcodes.dataset.RunDescriber(interdeps: InterDependencies_, shapes: dict[str, tuple[int, ...]] | None = None)[source]

Bases: object

The object that holds the description of each run in the database. This object serialises itself to a string and is found under the run_description column in the runs table.

Extension of this object is planned for the future; for now it holds the parameter interdependencies. Extensions should be objects that can convert themselves to a dictionary and be added as attributes to the RunDescriber, such that the RunDescriber can iteratively convert its attributes when converting itself to a dictionary.

Attributes:

version

shapes

interdeps

property version: int
property shapes: dict[str, tuple[int, ...]] | None
property interdeps: InterDependencies_
class qcodes.dataset.SQLiteSettings[source]

Bases: object

Class that holds the machine’s sqlite options.

Note that the settings are not dynamically updated, so changes during runtime must be updated manually. But you probably should not be changing these settings dynamically in the first place.

Attributes:

limits

settings

limits = {'MAX_ATTACHED': 10, 'MAX_COLUMN': 2000, 'MAX_COMPOUND_SELECT': 500, 'MAX_EXPR_DEPTH': 1000, 'MAX_FUNCTION_ARG': 127, 'MAX_LENGTH': 1000000000, 'MAX_LIKE_PATTERN_LENGTH': 50000, 'MAX_PAGE_COUNT': 1073741823, 'MAX_SQL_LENGTH': 1000000000, 'MAX_VARIABLE_NUMBER': 250000}
settings = {'ATOMIC_INTRINSICS': 1, 'COMPILER': 'gcc-11.4.0', 'DEFAULT_AUTOVACUUM': True, 'DEFAULT_CACHE_SIZE': '-2000', 'DEFAULT_FILE_FORMAT': 4, 'DEFAULT_JOURNAL_SIZE_LIMIT': '-1', 'DEFAULT_MMAP_SIZE': True, 'DEFAULT_PAGE_SIZE': 4096, 'DEFAULT_PCACHE_INITSZ': 20, 'DEFAULT_RECURSIVE_TRIGGERS': True, 'DEFAULT_SECTOR_SIZE': 4096, 'DEFAULT_SYNCHRONOUS': 2, 'DEFAULT_WAL_AUTOCHECKPOINT': 1000, 'DEFAULT_WAL_SYNCHRONOUS': 2, 'DEFAULT_WORKER_THREADS': True, 'ENABLE_COLUMN_METADATA': True, 'ENABLE_DBSTAT_VTAB': True, 'ENABLE_FTS3': True, 'ENABLE_FTS3_PARENTHESIS': True, 'ENABLE_FTS3_TOKENIZER': True, 'ENABLE_FTS4': True, 'ENABLE_FTS5': True, 'ENABLE_JSON1': True, 'ENABLE_LOAD_EXTENSION': True, 'ENABLE_MATH_FUNCTIONS': True, 'ENABLE_PREUPDATE_HOOK': True, 'ENABLE_RTREE': True, 'ENABLE_SESSION': True, 'ENABLE_STMTVTAB': True, 'ENABLE_UNLOCK_NOTIFY': True, 'ENABLE_UPDATE_DELETE_LIMIT': True, 'HAVE_ISNAN': True, 'LIKE_DOESNT_MATCH_BLOBS': True, 'MALLOC_SOFT_LIMIT': 1024, 'MAX_DEFAULT_PAGE_SIZE': 32768, 'MAX_MMAP_SIZE': '0x7fff0000', 'MAX_PAGE_SIZE': 65536, 'MAX_SCHEMA_RETRY': 25, 'MAX_TRIGGER_DEPTH': 1000, 'MAX_VDBE_OP': 250000000, 'MAX_WORKER_THREADS': 8, 'MUTEX_PTHREADS': True, 'OMIT_LOOKASIDE': True, 'SECURE_DELETE': True, 'SOUNDEX': True, 'SYSTEM_MALLOC': True, 'TEMP_STORE': 1, 'THREADSAFE': 1, 'USE_URI': True, 'VERSION': '3.37.2'}
class qcodes.dataset.SequentialParamsCaller(*param_meas: ParamMeasT)[source]

Bases: _ParamsCallerProtocol

Methods:

__class_getitem__(params)

Parameterizes a generic class.

classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….

class qcodes.dataset.ThreadPoolParamsCaller(*param_meas: ParamMeasT, max_workers: int | None = None)[source]

Bases: _ParamsCallerProtocol

Context manager for calling given parameters in a thread pool. Note that parameters that have the same underlying instrument will be called in the same thread.

Usage:

...
with ThreadPoolParamsCaller(p1, p2, ...) as pool_caller:
    ...
    output = pool_caller()
    ...
    # Output can be passed directly into DataSaver.add_result:
    # datasaver.add_result(*output)
    ...
...
Parameters:
  • param_meas – parameter or a callable without arguments

  • max_workers – number of worker threads to create in the pool; if None, the number of worker threads will be equal to the number of unique “underlying instruments”

Methods:

__call__()

Call parameters in the thread pool and return (param, value) tuples.

__class_getitem__(params)

Parameterizes a generic class.

__call__() OutType[source]

Call parameters in the thread pool and return (param, value) tuples.

classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….

class qcodes.dataset.TogetherSweep(*sweeps: AbstractSweep)[source]

Bases: object

A combination of multiple sweeps that are performed in parallel, such that all parameters in the TogetherSweep are set to their next values before any parameter is read.

Attributes:

sweeps

num_points

Methods:

get_setpoints()

property sweeps: tuple[AbstractSweep, ...]
get_setpoints() Iterable[source]
property num_points: int
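
A minimal sketch of combining two sweeps in a TogetherSweep and passing it to dond, assuming a database and experiment have already been initialised (see the LinSweep sketch above); the parameters are placeholders.

from qcodes.dataset import LinSweep, TogetherSweep, dond
from qcodes.parameters import Parameter

v1 = Parameter("v1", unit="V", set_cmd=None, get_cmd=None)
v2 = Parameter("v2", unit="V", set_cmd=None, get_cmd=None)
signal = Parameter("signal", unit="A", get_cmd=lambda: v1() + v2())

# Both sweeps must have the same number of points; v1 and v2 are set to their
# next values before `signal` is read at each step.
together = TogetherSweep(
    LinSweep(v1, 0.0, 1.0, 21, delay=0.0),
    LinSweep(v2, 0.0, -1.0, 21, delay=0.0),
)
dataset, _, _ = dond(together, signal)
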
qcodes.dataset.call_params_threaded(param_meas: Sequence[ParamMeasT]) OutType[source]

Function to create threads per instrument for the given set of measurement parameters.

Parameters:

param_meas – a Sequence of measurement parameters

qcodes.dataset.connect(name: str | Path, debug: bool = False, version: int = -1) ConnectionPlus[source]

Connect to or create a database. If debug is True, the queries will be echoed back. This function takes care of registering the numpy/sqlite type converters that we need.

Parameters:
  • name – name or path to the sqlite file

  • debug – should tracing be turned on.

  • version – which version to create. We count from 0. -1 means ‘latest’. Should always be left at -1 except when testing.

Returns:

connection object to the database (note, it is ConnectionPlus, not sqlite3.Connection)
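
A small sketch of connect(); the database path is a placeholder.

from qcodes.dataset import connect

conn = connect("./example.db")   # creates the file if it does not exist
print(conn.path_to_dbfile)       # path of the underlying sqlite file
print(conn.atomic_in_progress)   # False outside of an atomic block
conn.close()                     # methods of the wrapped sqlite3.Connection are proxied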

qcodes.dataset.datasaver_builder(dataset_definitions: Sequence[DataSetDefinition], *, override_experiment: Experiment | None = None) Generator[list[DataSaver], Any, None][source]

A utility context manager intended to simplify the creation of datasavers

The datasaver builder can be used to streamline the creation of multiple datasavers where all dependent parameters depend on all independent parameters.

Parameters:
  • dataset_definitions – A set of DataSetDefinitions to create and register parameters for

  • override_experiment – Sets the Experiment for all datasets to be written to. This argument overrides any experiments provided in the DataSetDefinition

Yields:

A list of generated datasavers with parameters registered

class qcodes.dataset.DataSetDefinition(name: str, independent: Sequence[ParameterBase], dependent: Sequence[ParameterBase], experiment: Experiment | None = None)[source]

Bases: object

A specification for the creation of a Dataset or Measurement object

Attributes:

name

The name to be assigned to the Measurement and dataset

independent

A sequence of independent parameters in the Measurement and dataset

dependent

A sequence of dependent parameters in the Measurement and dataset Note: All dependent parameters will depend on all independent parameters

experiment

An optional argument specifying which Experiment this dataset should be written to

name: str

The name to be assigned to the Measurement and dataset

independent: Sequence[ParameterBase]

A sequence of independent parameters in the Measurement and dataset

dependent: Sequence[ParameterBase]

A sequence of dependent parameters in the Measurement and dataset Note: All dependent parameters will depend on all independent parameters

experiment: Experiment | None = None

An optional argument specifying which Experiment this dataset should be written to
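
A minimal sketch of datasaver_builder used together with DataSetDefinition, assuming a database and experiment have already been initialised; parameter and dataset names are placeholders.

from qcodes.dataset import DataSetDefinition, datasaver_builder
from qcodes.parameters import Parameter

x = Parameter("x", unit="V", set_cmd=None, get_cmd=None)
y = Parameter("y", unit="A", set_cmd=None, get_cmd=None)
z = Parameter("z", unit="A", set_cmd=None, get_cmd=None)

# Two datasets sharing the same independent parameter x.
definitions = [
    DataSetDefinition("dataset_y", independent=[x], dependent=[y]),
    DataSetDefinition("dataset_z", independent=[x], dependent=[z]),
]

with datasaver_builder(definitions) as datasavers:
    for setpoint in (0.0, 0.5, 1.0):
        x(setpoint)
        datasavers[0].add_result((x, setpoint), (y, 2 * setpoint))
        datasavers[1].add_result((x, setpoint), (z, 3 * setpoint))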

qcodes.dataset.do0d(*param_meas: ParamMeasT, write_period: float | None = None, measurement_name: str = '', exp: Experiment | None = None, do_plot: bool | None = None, use_threads: bool | None = None, log_info: str | None = None) AxesTupleListWithDataSet[source]

Perform a measurement of a single parameter. This is probably most useful for an ArrayParameter that already returns an array of data points.

Parameters:
  • *param_meas – Parameter(s) to measure at each step or functions that will be called at each step. The function should take no arguments. The parameters and functions are called in the order they are supplied.

  • write_period – The time after which the data is actually written to the database.

  • measurement_name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

  • exp – The experiment to use for this measurement.

  • do_plot – should png and pdf versions of the images be saved after the run. If None the setting will be read from qcodesrc.json

  • use_threads – If True measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • log_info – Message that is logged during the measurement. If None a default message is used.

Returns:

The QCoDeS dataset.

qcodes.dataset.do1d(param_set: ParameterBase, start: float, stop: float, num_points: int, delay: float, *param_meas: ParamMeasT, enter_actions: ActionsT = (), exit_actions: ActionsT = (), write_period: float | None = None, measurement_name: str = '', exp: Experiment | None = None, do_plot: bool | None = None, use_threads: bool | None = None, additional_setpoints: Sequence[ParameterBase] = (), show_progress: bool | None = None, log_info: str | None = None, break_condition: BreakConditionT | None = None) AxesTupleListWithDataSet[source]

Perform a 1D scan of param_set from start to stop in num_points measuring param_meas at each step. In case param_meas is an ArrayParameter this is effectively a 2d scan.

Parameters:
  • param_set – The QCoDeS parameter to sweep over

  • start – Starting point of sweep

  • stop – End point of sweep

  • num_points – Number of points in sweep

  • delay – Delay after setting parameter before measurement is performed

  • param_meas – Parameter(s) to measure at each step or functions that will be called at each step. The function should take no arguments. The parameters and functions are called in the order they are supplied.

  • enter_actions – A list of functions taking no arguments that will be called before the measurements start

  • exit_actions – A list of functions taking no arguments that will be called after the measurements ends

  • write_period – The time after which the data is actually written to the database.

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned.

  • measurement_name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

  • exp – The experiment to use for this measurement.

  • do_plot – should png and pdf versions of the images be saved after the run. If None the setting will be read from qcodesrc.json

  • use_threads – If True measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • show_progress – should a progress bar be displayed during the measurement. If None the setting will be read from qcodesrc.json

  • log_info – Message that is logged during the measurement. If None a default message is used.

  • break_condition – Callable that takes no arguments. If it returns True, the measurement is interrupted.

Returns:

The QCoDeS dataset.
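
A minimal sketch of a 1D sweep, assuming hypothetical parameters dac.ch1 (settable) and dmm.v1 (gettable) and an initialised database and experiment:

    from qcodes.dataset import do1d

    dataset, axes, cbs = do1d(
        dac.ch1, 0, 1, 51, 0.01,  # sweep dac.ch1 from 0 to 1 in 51 points, 10 ms delay
        dmm.v1,                   # measured at every setpoint
        measurement_name="line_trace",
    )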

qcodes.dataset.do2d(param_set1: ParameterBase, start1: float, stop1: float, num_points1: int, delay1: float, param_set2: ParameterBase, start2: float, stop2: float, num_points2: int, delay2: float, *param_meas: ParamMeasT, set_before_sweep: bool | None = True, enter_actions: ActionsT = (), exit_actions: ActionsT = (), before_inner_actions: ActionsT = (), after_inner_actions: ActionsT = (), write_period: float | None = None, measurement_name: str = '', exp: Experiment | None = None, flush_columns: bool = False, do_plot: bool | None = None, use_threads: bool | None = None, additional_setpoints: Sequence[ParameterBase] = (), show_progress: bool | None = None, log_info: str | None = None, break_condition: BreakConditionT | None = None) AxesTupleListWithDataSet[source]

Perform a 2D scan: sweep param_set1 from start1 to stop1 in num_points1 (outer loop) and, for each of those setpoints, sweep param_set2 from start2 to stop2 in num_points2 (inner loop), measuring param_meas at each step.

Parameters:
  • param_set1 – The QCoDeS parameter to sweep over in the outer loop

  • start1 – Starting point of sweep in outer loop

  • stop1 – End point of sweep in the outer loop

  • num_points1 – Number of points to measure in the outer loop

  • delay1 – Delay after setting parameter in the outer loop

  • param_set2 – The QCoDeS parameter to sweep over in the inner loop

  • start2 – Starting point of sweep in inner loop

  • stop2 – End point of sweep in the inner loop

  • num_points2 – Number of points to measure in the inner loop

  • delay2 – Delay after setting parameter before measurement is performed

  • param_meas – Parameter(s) to measure at each step or functions that will be called at each step. The function should take no arguments. The parameters and functions are called in the order they are supplied.

  • set_before_sweep – if True the outer parameter is set to its first value before the inner parameter is swept to its next value.

  • enter_actions – A list of functions taking no arguments that will be called before the measurements start

  • exit_actions – A list of functions taking no arguments that will be called after the measurements ends

  • before_inner_actions – Actions executed before each run of the inner loop

  • after_inner_actions – Actions executed after each run of the inner loop

  • write_period – The time after which the data is actually written to the database.

  • measurement_name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

  • exp – The experiment to use for this measurement.

  • flush_columns – If True, the data is written after each inner-loop column is finished, independent of the elapsed time and the write period.

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned.

  • do_plot – should png and pdf versions of the images be saved after the run. If None the setting will be read from qcodesrc.json

  • use_threads – If True measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • show_progress – should a progress bar be displayed during the measurement. If None the setting will be read from qcodesrc.json

  • log_info – Message that is logged during the measurement. If None a default message is used.

  • break_condition – Callable that takes no arguments. If it returns True, the measurement is interrupted.

Returns:

The QCoDeS dataset.
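
A minimal sketch of a 2D scan, assuming hypothetical outer and inner sweep parameters dac.ch1 and dac.ch2 and a readout parameter dmm.v1:

    from qcodes.dataset import do2d

    dataset, axes, cbs = do2d(
        dac.ch1, -1, 1, 21, 0.01,  # outer loop
        dac.ch2, -1, 1, 51, 0.01,  # inner loop
        dmm.v1,
        measurement_name="stability_diagram",
    )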

qcodes.dataset.dond(*params: AbstractSweep | TogetherSweep | ParamMeasT | Sequence[ParamMeasT], write_period: float | None = None, measurement_name: str | Sequence[str] = '', exp: Experiment | Sequence[Experiment] | None = None, enter_actions: ActionsT = (), exit_actions: ActionsT = (), do_plot: bool | None = None, show_progress: bool | None = None, use_threads: bool | None = None, additional_setpoints: Sequence[ParameterBase] = (), log_info: str | None = None, break_condition: BreakConditionT | None = None, dataset_dependencies: Mapping[str, Sequence[ParamMeasT]] | None = None, in_memory_cache: bool | None = None) AxesTupleListWithDataSet | MultiAxesTupleListWithDataSet[source]

Perform an n-dimensional scan from the slowest (first) to the fastest (last) dimension, measuring m measurement parameters. The dimensions should be specified as sweep objects, followed by the parameters to measure.

Parameters:
  • params

    Instances of n sweep classes and m measurement parameters, e.g. if linear sweep is considered:

    LinSweep(param_set_1, start_1, stop_1, num_points_1, delay_1), ...,
    LinSweep(param_set_n, start_n, stop_n, num_points_n, delay_n),
    param_meas_1, param_meas_2, ..., param_meas_m
    

    If multiple datasets need to be created, the measurement parameters should be grouped, so that one dataset is created for each group, e.g.:

    LinSweep(param_set_1, start_1, stop_1, num_points_1, delay_1), ...,
    LinSweep(param_set_n, start_n, stop_n, num_points_n, delay_n),
    [param_meas_1, param_meas_2], ..., [param_meas_m]
    

    If you want to sweep multiple parameters together:

    TogetherSweep(LinSweep(param_set_1, start_1, stop_1, num_points, delay_1),
                  LinSweep(param_set_2, start_2, stop_2, num_points, delay_2))
    param_meas_1, param_meas_2, ..., param_meas_m
    

  • write_period – The time after which the data is actually written to the database.

  • measurement_name – Name(s) of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset. If more than one is given, each dataset will have an individual name.

  • exp – The experiment to use for this measurement. If you create multiple measurements using groups you may also supply multiple experiments.

  • enter_actions – A list of functions taking no arguments that will be called before the measurements start.

  • exit_actions – A list of functions taking no arguments that will be called after the measurements ends.

  • do_plot – should png and pdf versions of the images be saved and plots be shown after the run. If None the setting will be read from qcodesrc.json

  • show_progress – should a progress bar be displayed during the measurement. If None the setting will be read from qcodesrc.json

  • use_threads – If True, measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned/swept-over.

  • log_info – Message that is logged during the measurement. If None a default message is used.

  • break_condition – Callable that takes no arguments. If it returns True, the measurement is interrupted.

  • dataset_dependencies – Optionally describe that measured datasets only depend on a subset of the setpoint parameters. Given as a mapping from measurement names to Sequence of Parameters. Note that a dataset must depend on at least one parameter from each dimension but can depend on one or more parameters from a dimension swept with a TogetherSweep.

  • in_memory_cache – Should a cache of the data be kept available in memory for faster plotting and exporting. Useful to disable if the data is very large in order to save on memory consumption. If None, the value for this will be read from qcodesrc.json config file.

Returns:

A tuple of QCoDeS DataSet, Matplotlib axis, Matplotlib colorbar. If more than one group of measurement parameters is supplied, the output will be a tuple of tuple(QCoDeS DataSet), tuple(Matplotlib axis), tuple(Matplotlib colorbar), in which each element of each sub-tuple belongs to one group, and the order of elements is the order of the supplied groups.
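
A minimal sketch of a 2D dond measurement built from two LinSweep objects; dac.ch1, dac.ch2, dmm.v1 and dmm.v2 are hypothetical parameters:

    from qcodes.dataset import LinSweep, dond

    outer = LinSweep(dac.ch1, -1, 1, 21, delay=0.01)  # slowest dimension first
    inner = LinSweep(dac.ch2, -1, 1, 51, delay=0.01)  # fastest dimension last
    dataset, axes, cbs = dond(outer, inner, dmm.v1, dmm.v2, measurement_name="2d_map")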

qcodes.dataset.dond_into(datasaver: DataSaver, *params: AbstractSweep | ParameterBase | Callable[[], None], additional_setpoints: Sequence[ParameterBase] = ()) None[source]

A doNd-like utility function that writes gridded data to the supplied DataSaver

dond_into accepts AbstractSweep objects and measurement parameters or callables. It executes the specified Sweeps, reads the measurement parameters, and stores the resulting data in the datasaver.

Parameters:
  • datasaver – The datasaver to write data to

  • params

    Instances of n sweep classes and m measurement parameters, e.g. if linear sweep is considered:

    LinSweep(param_set_1, start_1, stop_1, num_points_1, delay_1), ...,
    LinSweep(param_set_n, start_n, stop_n, num_points_n, delay_n),
    param_meas_1, param_meas_2, ..., param_meas_m
    

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned/swept-over.
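
A minimal sketch that combines dond_into with DataSetDefinition and datasaver_builder (both part of this module); dac.ch1 and dmm.v1 are hypothetical parameters:

    from qcodes.dataset import (
        DataSetDefinition,
        LinSweep,
        datasaver_builder,
        dond_into,
    )

    sweep = LinSweep(dac.ch1, 0, 1, 51, delay=0.01)
    definition = DataSetDefinition(
        name="line_cut", independent=[dac.ch1], dependent=[dmm.v1]
    )
    with datasaver_builder([definition]) as datasavers:
        dond_into(datasavers[0], sweep, dmm.v1)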

qcodes.dataset.experiments(conn: ConnectionPlus | None = None) list[Experiment][source]

List all the experiments in the container (database file from config)

Parameters:

conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

All the experiments in the container

qcodes.dataset.extract_runs_into_db(source_db_path: str | Path, target_db_path: str | Path, *run_ids: int, upgrade_source_db: bool = False, upgrade_target_db: bool = False) None[source]

Extract a selection of runs into another DB file. All runs must come from the same experiment. They will be added to an experiment with the same name and sample_name in the target db. If such an experiment does not exist, it will be created.

Parameters:
  • source_db_path – Path to the source DB file

  • target_db_path – Path to the target DB file. The target DB file will be created if it does not exist.

  • run_ids – The run_id’s of the runs to copy into the target DB file

  • upgrade_source_db – If the source DB is found to be in a version that is not the newest, should it be upgraded?

  • upgrade_target_db – If the target DB is found to be in a version that is not the newest, should it be upgraded?
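
A minimal sketch; the file names and run ids are hypothetical:

    from qcodes.dataset import extract_runs_into_db

    # Copy runs 1, 2 and 5 from source.db into target.db (created if missing)
    extract_runs_into_db("./source.db", "./target.db", 1, 2, 5)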

qcodes.dataset.get_data_export_path() Path[source]

Get the path to export data to at the end of a measurement from config

Returns:

Path

qcodes.dataset.get_default_experiment_id(conn: ConnectionPlus) int[source]

Returns the exp_id of the latest created or loaded experiment as the default experiment. If it is not set, the maximum exp_id is returned as the default. If no experiment is found in the database, a ValueError is raised.

Parameters:

conn – Open connection to the db in question.

Returns:

exp_id of the default experiment.

Raises:

ValueError – If no experiment exists in the given db.

qcodes.dataset.get_guids_by_run_spec(*, captured_run_id: int | None = None, captured_counter: int | None = None, experiment_name: str | None = None, sample_name: str | None = None, sample_id: int | None = None, location: int | None = None, work_station: int | None = None, conn: ConnectionPlus | None = None) list[str][source]

Get a list of matching guids from one or more pieces of runs specification. All fields are optional.

Parameters:
  • captured_run_id – The run_id that was originally assigned to this at the time of capture.

  • captured_counter – The counter that was originally assigned to this at the time of capture.

  • experiment_name – Name of the experiment that the run was captured in.

  • sample_name – The name of the sample given when creating the experiment.

  • sample_id – The sample_id assigned as part of the GUID.

  • location – The location code assigned as part of GUID.

  • work_station – The workstation assigned as part of the GUID.

  • conn – An optional connection to the database. If no connection is supplied a connection to the default database will be opened.

Returns:

List of guids matching the run spec.

qcodes.dataset.guids_from_dbs(db_paths: Iterable[Path]) tuple[dict[Path, list[str]], dict[str, Path]][source]

Extract all guids from the supplied database paths.

Parameters:

db_paths – Iterable of paths to the database files to read guids from.

Returns:

A tuple of two dictionaries: one mapping each db path to a list of guids (as strings), and one mapping each guid to its db path.

qcodes.dataset.guids_from_dir(basepath: Path | str) tuple[dict[Path, list[str]], dict[str, Path]][source]

Recursively find all db files under basepath and extract guids.

Parameters:

basepath – Path or str of the directory to search recursively.

Returns:

A tuple of two dictionaries: one mapping each db path to a list of guids (as strings), and one mapping each guid to its db path.

qcodes.dataset.guids_from_list_str(s: str) tuple[str, ...] | None[source]

Get tuple of guids from a python/json string representation of a list.

Extracts the guids from a string representation of a list, tuple, or set of guids or a single guid.

Parameters:

s – input string

Returns:

Extracted guids as a tuple of strings. If a provided string does not match the format, None will be returned. For an empty list/tuple/set or empty string an empty tuple is returned.

Examples

>>> guids_from_list_str(
...     "['07fd7195-c51e-44d6-a085-fa8274cf00d6', '070d7195-c51e-44d6-a085-fa8274cf00d6']"
... )
('07fd7195-c51e-44d6-a085-fa8274cf00d6', '070d7195-c51e-44d6-a085-fa8274cf00d6')

qcodes.dataset.import_dat_file(location: str | Path, exp: Experiment | None = None) list[int][source]

This imports a QCoDeS legacy qcodes.data.data_set.DataSet into the database.

Parameters:
  • location – Path to file containing legacy dataset

  • exp – Specify the experiment to store data to. If None the default one is used. See the docs of qcodes.dataset.Measurement for more details.

qcodes.dataset.initialise_database(journal_mode: Literal['DELETE', 'TRUNCATE', 'PERSIST', 'MEMORY', 'WAL', 'OFF'] | None = 'WAL') None[source]

Initialise a database in the location specified by the config object and set atomic commit and rollback mode of the db. The db is created with the latest supported version. If the database already exists the atomic commit and rollback mode is set and the database is upgraded to the latest version.

Parameters:

journal_mode – Which journal_mode should be used for atomic commit and rollback. Options are DELETE, TRUNCATE, PERSIST, MEMORY, WAL and OFF. If set to None no changes are made.

qcodes.dataset.initialise_or_create_database_at(db_file_with_abs_path: str | Path, journal_mode: JournalMode | None = 'WAL') None[source]

This function sets up QCoDeS to refer to the given database file. If the database file does not exist, it will be created and initialised.

Parameters:
  • db_file_with_abs_path – Database file name with absolute path, for example C:\mydata\majorana_experiments.db

  • journal_mode – Which journal_mode should be used for atomic commit and rollback. Options are DELETE, TRUNCATE, PERSIST, MEMORY, WAL and OFF. If set to None no changes are made.
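
A minimal sketch, reusing the example path from the parameter description above:

    from qcodes.dataset import initialise_or_create_database_at

    initialise_or_create_database_at(r"C:\mydata\majorana_experiments.db")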

qcodes.dataset.initialised_database_at(db_file_with_abs_path: str | Path) Iterator[None][source]

Context manager that initialises or creates a database at the given location and restores the previous ‘db_location’ configuration on exit.

Parameters:

db_file_with_abs_path – Database file name with absolute path, for example C:\mydata\majorana_experiments.db

class qcodes.dataset.LinSweeper(*args: Any, **kwargs: Any)[source]

Bases: LinSweep

An iterable version of the LinSweep class

Each iteration of this object sets the next setpoint and then waits the delay time.
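
A minimal sketch, assuming a running Measurement context that provides a datasaver, plus hypothetical parameters dac.ch1 (settable) and dmm.v1 (gettable):

    from qcodes.dataset import LinSweeper

    # Each iteration sets the next setpoint on dac.ch1 and waits 10 ms
    for _ in LinSweeper(dac.ch1, 0, 1, 11, delay=0.01):
        datasaver.add_result((dac.ch1, dac.ch1()), (dmm.v1, dmm.v1()))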

Methods:

__class_getitem__(params)

Parameterizes a generic class.

get_setpoints()

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

Attributes:

delay

Delay between two consecutive sweep points.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

num_points

Number of sweep points.

param

Returns the Qcodes sweep parameter.

post_actions

actions to be performed after setting param to its setpoint.

classmethod __class_getitem__(params)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo(Generic[T]): ….

property delay: float

Delay between two consecutive sweep points.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override this to implement it correctly.

get_setpoints() ndarray[Any, dtype[float64]]

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

property num_points: int

Number of sweep points.

property param: ParameterBase

Returns the Qcodes sweep parameter.

property post_actions: ActionsT

actions to be performed after setting param to its setpoint.

qcodes.dataset.load_by_counter(counter: int, exp_id: int, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a dataset given its counter in a given experiment

Lookup is performed in the database file that is specified in the config.

Note that the counter used in this function is not preserved when copying data to another db file. We recommend using load_by_run_spec() which does not have this issue and is significantly more flexible.

If the raw data is in the database, this will be loaded as a DataSet; otherwise it will be loaded as a DataSetInMemory. If the raw data has been exported to netcdf and qcodes.config.dataset.load_from_exported_file is set to True, it will be loaded from file as a DataSetInMemory regardless.

Parameters:
  • counter – counter of the dataset within the given experiment

  • exp_id – id of the experiment where to look for the dataset

  • conn – connection to the database to load from. If not provided, a connection to the DB file specified in the config is made

Returns:

DataSet or DataSetInMemory of the given counter in the given experiment

qcodes.dataset.load_by_guid(guid: str, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a dataset by its GUID

If no connection is provided, lookup is performed in the database file that is specified in the config.

If the raw data is in the database, this will be loaded as a DataSet; otherwise it will be loaded as a DataSetInMemory. If the raw data has been exported to netcdf and qcodes.config.dataset.load_from_exported_file is set to True, it will be loaded from file as a DataSetInMemory regardless.

Parameters:
  • guid – guid of the dataset

  • conn – connection to the database to load from

Returns:

qcodes.dataset.data_set.DataSet or DataSetInMemory with the given guid

Raises:
  • NameError – if no run with the given GUID exists in the database

  • RuntimeError – if several runs with the given GUID are found

qcodes.dataset.load_by_id(run_id: int, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a dataset by run id

If no connection is provided, lookup is performed in the database file that is specified in the config.

Note that the run_id used in this function is not preserved when copying data to another db file. We recommend using load_by_run_spec() which does not have this issue and is significantly more flexible.

If the raw data is in the database, this will be loaded as a DataSet; otherwise it will be loaded as a DataSetInMemory. If the raw data has been exported to netcdf and qcodes.config.dataset.load_from_exported_file is set to True, it will be loaded from file as a DataSetInMemory regardless.

Parameters:
  • run_id – run id of the dataset

  • conn – connection to the database to load from

Returns:

qcodes.dataset.data_set.DataSet or DataSetInMemory with the given run id

qcodes.dataset.load_by_run_spec(*, captured_run_id: int | None = None, captured_counter: int | None = None, experiment_name: str | None = None, sample_name: str | None = None, sample_id: int | None = None, location: int | None = None, work_station: int | None = None, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a run from one or more pieces of runs specification. All fields are optional, but the function will raise an error if more than one run matches the supplied specification. Along with the error, the specs of the runs found will be printed.

If the raw data is in the database, this will be loaded as a DataSet; otherwise it will be loaded as a DataSetInMemory. If the raw data has been exported to netcdf and qcodes.config.dataset.load_from_exported_file is set to True, it will be loaded from file as a DataSetInMemory regardless.

Parameters:
  • captured_run_id – The run_id that was originally assigned to this at the time of capture.

  • captured_counter – The counter that was originally assigned to this at the time of capture.

  • experiment_name – Name of the experiment that the run was captured in.

  • sample_name – The name of the sample given when creating the experiment.

  • sample_id – The sample_id assigned as part of the GUID.

  • location – The location code assigned as part of GUID.

  • work_station – The workstation assigned as part of the GUID.

  • conn – An optional connection to the database. If no connection is supplied a connection to the default database will be opened.

Raises:

NameError – if no run or more than one run with the given specification exists in the database

Returns:

qcodes.dataset.data_set.DataSet or DataSetInMemory matching the provided specification.
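
A minimal sketch; the captured run id and experiment name are hypothetical:

    from qcodes.dataset import load_by_run_spec

    ds = load_by_run_spec(captured_run_id=1, experiment_name="stability_diagram")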

qcodes.dataset.load_experiment(exp_id: int, conn: ConnectionPlus | None = None) Experiment[source]

Load experiment with the specified id (from database file from config)

Parameters:
  • exp_id – experiment id

  • conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

experiment with the specified id

Raises:

ValueError – If experiment id is not an integer.

qcodes.dataset.load_experiment_by_name(name: str, sample: str | None = None, conn: ConnectionPlus | None = None, load_last_duplicate: bool = False) Experiment[source]

Try to load experiment with the specified name.

Nothing prevents several experiments from having the same name and sample name. In that case, loading by name will fail unless load_last_duplicate is set to True, in which case the last of the duplicated experiments is loaded.

Parameters:
  • name – the name of the experiment

  • sample – the name of the sample

  • load_last_duplicate – If True, do not raise an error when multiple experiments share the same name and sample name; instead, load the last of the duplicated experiments.

  • conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

The requested experiment

Raises:

ValueError – either if the name and sample name are not unique, unless load_last_duplicate is True, or if no experiment found for the supplied name and sample.

qcodes.dataset.load_from_file(path: Path | str, path_to_db: Path | str | None = None) DataSetInMem[source]

Create an in-memory dataset from a file. The file is expected to contain a QCoDeS dataset that has been exported using the QCoDeS export functions. Currently, this only supports loading from netcdf files.

Parameters:
  • path – Path to the file to import.

  • path_to_db – Optional path to a database where this dataset may be exported to. If not supplied the path can be given at export time or the dataset exported to the default db as set in the QCoDeS config.

Returns:

The loaded dataset.

qcodes.dataset.load_from_netcdf(path: Path | str, path_to_db: Path | str | None = None) DataSetInMem[source]

Create an in-memory dataset from a netcdf file. The netcdf file is expected to contain a QCoDeS dataset that has been exported using the QCoDeS netcdf export functions.

Parameters:
  • path – Path to the netcdf file to import.

  • path_to_db – Optional path to a database where this dataset may be exported to. If not supplied the path can be given at export time or the dataset exported to the default db as set in the QCoDeS config.

Returns:

The loaded dataset.
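
A minimal sketch; the file path is hypothetical:

    from qcodes.dataset import load_from_netcdf

    ds = load_from_netcdf("./exports/run_1.nc")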

qcodes.dataset.load_last_experiment() Experiment[source]

Load last experiment (from database file from config)

Returns:

The last experiment

Raises:

ValueError – If no experiment exists in the db.

qcodes.dataset.load_or_create_experiment(experiment_name: str, sample_name: str | None = None, conn: ConnectionPlus | None = None, load_last_duplicate: bool = False) Experiment[source]

Find and return an experiment with the given name and sample name, or create one if not found.

Parameters:
  • experiment_name – Name of the experiment to find or create.

  • sample_name – Name of the sample.

  • load_last_duplicate – If True, do not raise an error when multiple experiments share the same name and sample name; instead, load the last of the duplicated experiments.

  • conn – Connection to the database. If not supplied, a new connection to the DB file specified in the config is made.

Returns:

The found or created experiment

Raises:

ValueError – If the name and sample name are not unique, unless load_last_duplicate is True.
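
A minimal sketch; the experiment and sample names are hypothetical:

    from qcodes.dataset import load_or_create_experiment

    exp = load_or_create_experiment("transport", sample_name="sample_A")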

qcodes.dataset.new_data_set(name: str, exp_id: int | None = None, specs: list[ParamSpec] | None = None, values: Sequence[str | complex | list | ndarray | bool | None] | None = None, metadata: Any | None = None, conn: ConnectionPlus | None = None, in_memory_cache: bool = True) DataSet[source]

Create a new dataset in the currently active/selected database.

If exp_id is not specified, the last experiment will be loaded by default.

Parameters:
  • name – the name of the new dataset

  • exp_id – the id of the experiment this dataset belongs to, defaults to the last experiment

  • specs – list of parameters to create this dataset with

  • values – the values to associate with the parameters

  • metadata – the metadata to associate with the dataset

  • conn – Existing connection to the database.

  • in_memory_cache – Should measured data be kept in memory and made available via the dataset.cache object.

Returns:

the newly created qcodes.dataset.data_set.DataSet

qcodes.dataset.new_experiment(name: str, sample_name: str | None, format_string: str = '{}-{}-{}', conn: ConnectionPlus | None = None) Experiment[source]

Create a new experiment (in the database file from config)

Parameters:
  • name – the name of the experiment

  • sample_name – the name of the current sample

  • format_string – Basic format string for the table name; it must contain 3 placeholders.

  • conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

the new experiment

qcodes.dataset.plot_by_id(run_id: int, axes: Axes | Sequence[Axes] | None = None, colorbars: Colorbar | Sequence[Colorbar] | None = None, rescale_axes: bool = True, auto_color_scale: bool | None = None, cutoff_percentile: tuple[float, float] | float | None = None, complex_plot_type: Literal['real_and_imag', 'mag_and_phase'] = 'real_and_imag', complex_plot_phase: Literal['radians', 'degrees'] = 'radians', **kwargs: Any) AxesTupleList[source]

Construct all plots for a given run_id. Here run_id is an alias for captured_run_id for historical reasons. See the docs of qcodes.dataset.load_by_run_spec() for details of loading runs. All other arguments are forwarded to plot_dataset(), see this for more details.

qcodes.dataset.plot_dataset(dataset: DataSetProtocol, axes: Axes | Sequence[Axes] | None = None, colorbars: Colorbar | Sequence[Colorbar] | Sequence[None] | None = None, rescale_axes: bool = True, auto_color_scale: bool | None = None, cutoff_percentile: tuple[float, float] | float | None = None, complex_plot_type: Literal['real_and_imag', 'mag_and_phase'] = 'real_and_imag', complex_plot_phase: Literal['radians', 'degrees'] = 'radians', **kwargs: Any) AxesTupleList[source]

Construct all plots for a given dataset

Implemented so far:

  • 1D line and scatter plots

  • 2D plots on filled out rectangular grids

  • 2D scatterplots (fallback)

The function can optionally be supplied with a matplotlib axes or a list of axes that will be used for plotting. The user should ensure that the number of axes matches the number of plots to produce. To plot several (1D) traces in the same axes, supply the same axes several times. Colorbar axes are created dynamically. If colorbar axes are supplied, they will be reused; new colorbar axes will nevertheless be returned.

The plot has a title that comprises run id, experiment name, and sample name.

**kwargs are passed to matplotlib’s relevant plotting functions. By default the data in scatter plots and heatmaps will be rasterized if more than 5000 points are supplied. This can be overridden by supplying the rasterized kwarg.

Parameters:
  • dataset – The dataset to plot

  • axes – Optional Matplotlib axes to plot on. If not provided, new axes will be created

  • colorbars – Optional Matplotlib Colorbars to use for 2D plots. If not provided, new ones will be created

  • rescale_axes – If True, tick labels and units for axes of parameters with standard SI units will be rescaled so that, for example, a ‘0.00000005’ tick label on a ‘V’ axis is transformed to ‘50’ on an ‘nV’ axis (‘n’ is ‘nano’)

  • auto_color_scale – If True, the colorscale of heatmap plots will be automatically adjusted to disregard outliers.

  • cutoff_percentile – Percentile of data that may maximally be clipped on both sides of the distribution. If given a tuple (a,b) the percentile limits will be a and 100-b. See also the plotting tutorial notebook.

  • complex_plot_type – Method for converting complex-valued parameters into two real-valued parameters, either "real_and_imag" or "mag_and_phase". Applicable only for the cases where the dataset contains complex numbers

  • complex_plot_phase – Format of phase for plotting complex-valued data, either "radians" or "degrees". Applicable only for the cases where the dataset contains complex numbers

  • **kwargs – Keyword arguments passed to the plotting function.

Returns:

A list of axes and a list of colorbars of the same length. The colorbar axes may be None if no colorbar is created (e.g. for 1D plots)

Config dependencies: (qcodesrc.json)
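
A minimal sketch that loads a dataset by (hypothetical) run id and plots it:

    from qcodes.dataset import load_by_id, plot_dataset

    ds = load_by_id(1)
    axes, colorbars = plot_dataset(ds)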

qcodes.dataset.reset_default_experiment_id(conn: ConnectionPlus | None = None) None[source]

Resets the default experiment id to the last experiment in the db.

qcodes.dataset.rundescriber_from_json(json_str: str) RunDescriber

Deserialize a JSON string into a RunDescriber of the current version