Commands
asv help
usage: asv help [-h]
options:
-h, --help show this help message and exit
asv quickstart
usage: asv quickstart [-h] [--dest DEST] [--top-level | --no-top-level]
[--verbose] [--config CONFIG] [--version]
Creates a new benchmarking suite.
options:
-h, --help show this help message and exit
--dest DEST, -d DEST The destination directory for the new benchmarking
suite
--top-level Use layout suitable for putting the benchmark suite on
the top level of the project's repository
--no-top-level Use layout suitable for putting the benchmark suite in
a separate repository
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
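For example, run from the root of a project checkout to lay the suite out at the repository's top level:

    asv quickstart --top-level

This writes an ``asv.conf.json`` and a skeleton benchmarks directory to edit from there.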
asv machine
usage: asv machine [-h] [--machine MACHINE] [--os OS] [--arch ARCH]
[--cpu CPU] [--num_cpu NUM_CPU] [--ram RAM] [--yes]
[--verbose] [--config CONFIG] [--version]
Defines information about this machine. If no arguments are provided, an
interactive console session will be used to ask questions about the machine.
options:
-h, --help show this help message and exit
--machine MACHINE A unique name to identify this machine in the results.
May be anything, as long as it is unique across all the
machines used to benchmark this project. NOTE: If changed
from the default, it will no longer match the hostname of
this machine, and you may need to explicitly use the
--machine argument to asv.
--os OS The OS type and version of this machine. For example,
'Macintosh OS-X 10.8'.
--arch ARCH The generic CPU architecture of this machine. For
example, 'i386' or 'x86_64'.
--cpu CPU A specific description of the CPU of this machine,
including its speed and class. For example, 'Intel(R)
Core(TM) i5-2520M CPU @ 2.50GHz (4 cores)'.
--num_cpu NUM_CPU The number of CPUs in the system. For example, '4'.
--ram RAM The amount of physical RAM on this machine. For example,
'4GB'.
--yes Accept all questions
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
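Run with no arguments the first time to answer the questions interactively. For scripted setups (e.g. CI), the prompts can be skipped; the machine name below is a placeholder:

    asv machine --machine ci-runner-01 --yes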
asv setup
usage: asv setup [-h] [--parallel [PARALLEL]] [-E ENV_SPEC] [--python PYTHON]
[--verbose] [--config CONFIG] [--version]
Set up virtual environments for each combination of Python version and third-
party requirements. This is called implicitly by the ``run`` command, and
doesn't generally need to be run on its own.
options:
-h, --help show this help message and exit
--parallel [PARALLEL], -j [PARALLEL]
Build (but don't benchmark) in parallel. The value is
the number of CPUs to use, or if no number provided,
use the number of cores on this machine.
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
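For example, to prepare conda environments for Python 3.12 ahead of a run, using four CPUs for the builds (a sketch; the environment spec must be compatible with your ``asv.conf.json``):

    asv setup -E conda:3.12 --parallel 4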
asv run
usage: asv run [-h] [--date-period DATE_PERIOD] [--steps STEPS]
[--bench BENCH] [--attribute ATTRIBUTE]
[--cpu-affinity ATTRIBUTE] [--profile] [--parallel [PARALLEL]]
[--show-stderr] [--durations [N]] [--quick] [-E ENV_SPEC]
[--python PYTHON] [--set-commit-hash SET_COMMIT_HASH]
[--launch-method {auto,spawn,forkserver}] [--dry-run]
[--machine MACHINE] [--skip-existing-successful]
[--skip-existing-failed] [--skip-existing-commits]
[--skip-existing] [--record-samples] [--append-samples]
[--interleave-rounds] [--no-interleave-rounds] [--no-pull]
[--verbose] [--config CONFIG] [--version]
[range]
Run a benchmark suite.
examples:
asv run main                 run for one branch
asv run main^!               run for one commit (git)
asv run "--merges main"      run for only merge commits (git)
positional arguments:
range Range of commits to benchmark. For a git repository,
this is passed as the first argument to ``git rev-
list``; for Mercurial, to ``hg log``. See the 'specifying
ranges' section of the `gitrevisions` manpage, or 'hg
help revisions', for more info. Also accepts the
special values 'NEW', 'ALL', 'EXISTING', and
'HASHFILE:xxx'. 'NEW' will benchmark all commits since
the latest benchmarked on this machine. 'ALL' will
benchmark all commits in the project. 'EXISTING' will
benchmark against all commits for which there are
existing benchmarks on any machine. 'HASHFILE:xxx'
will benchmark only a specific set of hashes given in
the file named 'xxx' ('-' means stdin), which must
have one hash per line. By default, benchmarks the
head of each configured branch.
options:
-h, --help show this help message and exit
--date-period DATE_PERIOD
Pick only one commit in each given time period. For
example: 1d (daily), 1w (weekly), 1y (yearly).
--steps STEPS, -s STEPS
Maximum number of steps to benchmark. This is used to
subsample the commits determined by range to a
reasonable number.
--bench BENCH, -b BENCH
Regular expression(s) for benchmark to run. When not
provided, all benchmarks are run.
--attribute ATTRIBUTE, -a ATTRIBUTE
Override a benchmark attribute, e.g. `-a repeat=10`.
--cpu-affinity ATTRIBUTE
Set CPU affinity for running the benchmark, in format:
0 or 0,1,2 or 0-3. Default: not set
--profile, -p In addition to timing, run the benchmarks through the
`cProfile` profiler and store the results.
--parallel [PARALLEL], -j [PARALLEL]
Build (but don't benchmark) in parallel. The value is
the number of CPUs to use, or if no number provided,
use the number of cores on this machine.
--show-stderr, -e Display the stderr output from the benchmarks.
--durations [N] Display total durations for the N (or 'all') slowest
benchmarks.
--quick, -q Do a "quick" run, where each benchmark function is run
only once. This is useful for quickly catching basic
errors in the benchmark functions. The results are
unlikely to be meaningful, and thus are not saved.
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--set-commit-hash SET_COMMIT_HASH
Set the commit hash to use when recording benchmark
results. This causes results to be saved even when
using an existing environment.
--launch-method {auto,spawn,forkserver}
How to launch benchmarks. Choices: auto, spawn,
forkserver
--dry-run, -n Do not save any results to disk.
--machine MACHINE, -m MACHINE
Use the given name to retrieve machine information. If
not provided, the hostname is used. If no entry with
that name is found, and there is only one entry in
~/.asv-machine.json, that one entry will be used.
--skip-existing-successful
Skip running benchmarks that have previous successful
results
--skip-existing-failed
Skip running benchmarks that have previous failed
results
--skip-existing-commits
Skip running benchmarks for commits that have existing
results
--skip-existing, -k Skip running benchmarks that have previous successful
or failed results
--record-samples Store raw measurement samples, not only statistics
--append-samples Combine new measurement samples with previous results,
instead of discarding old results. Implies --record-
samples. The previous run must also have been run with
--record/append-samples.
--interleave-rounds Interleave benchmarks with multiple rounds across
commits. This can avoid measurement biases from commit
ordering, but can take longer.
--no-interleave-rounds
Do not interleave benchmark rounds across commits.
--no-pull Do not pull the repository
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
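Some further flag combinations that come up in practice (the ``--bench`` pattern is a placeholder for your own benchmark names):

    asv run NEW --steps 10             benchmark new commits, subsampled to at most 10
    asv run main^! --bench Suite -e    one commit, only matching benchmarks, show stderr
    asv run --quick main^!             fast sanity check; results are not saved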
asv continuous
usage: asv continuous [-h] [--no-record-samples] [--append-samples] [--quick]
[--interleave-rounds] [--no-interleave-rounds]
[--factor FACTOR] [--no-stats] [--split]
[--only-changed] [--no-only-changed]
[--sort {name,ratio,default}] [--show-stderr]
[--bench BENCH] [--attribute ATTRIBUTE]
[--cpu-affinity ATTRIBUTE] [--machine MACHINE]
[-E ENV_SPEC] [--python PYTHON]
[--launch-method {auto,spawn,forkserver}] [--verbose]
[--config CONFIG] [--version]
[base] branch
Run a side-by-side comparison of two commits for continuous integration.
positional arguments:
base The commit/branch to compare against. By default, the
parent of the tested commit.
branch The commit/branch to test. By default, the first
configured branch.
options:
-h, --help show this help message and exit
--no-record-samples Store only statistics, not raw measurement
samples.
--append-samples Combine new measurement samples with previous results,
instead of discarding old results. Implies --record-
samples. The previous run must also have been run with
--record/append-samples.
--quick, -q Do a "quick" run, where each benchmark function is run
only once. This is useful for quickly catching basic
errors in the benchmark functions. The results are
unlikely to be meaningful, and thus are not saved.
--interleave-rounds Interleave benchmarks with multiple rounds across
commits. This can avoid measurement biases from commit
ordering, but can take longer.
--no-interleave-rounds
Do not interleave benchmark rounds across commits.
--factor FACTOR, -f FACTOR
The factor above or below which a result is considered
problematic. For example, with a factor of 1.1 (the
default value), if a benchmark gets 10% slower or
faster, it will be displayed in the results list.
--no-stats Do not use result statistics in comparisons, only
`factor` and the median result.
--split, -s Split the output into a table of benchmarks that have
improved, stayed the same, and gotten worse.
--only-changed Show only changed results.
--no-only-changed Show all results, including unchanged ones.
--sort {name,ratio,default}
Sort order
--show-stderr, -e Display the stderr output from the benchmarks.
--bench BENCH, -b BENCH
Regular expression(s) for benchmark to run. When not
provided, all benchmarks are run.
--attribute ATTRIBUTE, -a ATTRIBUTE
Override a benchmark attribute, e.g. `-a repeat=10`.
--cpu-affinity ATTRIBUTE
Set CPU affinity for running the benchmark, in format:
0 or 0,1,2 or 0-3. Default: not set
--machine MACHINE, -m MACHINE
Use the given name to retrieve machine information. If
not provided, the hostname is used. If no entry with
that name is found, and there is only one entry in
~/.asv-machine.json, that one entry will be used.
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--launch-method {auto,spawn,forkserver}
How to launch benchmarks. Choices: auto, spawn,
forkserver
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
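A typical CI invocation compares the commit under test against the main branch, flagging changes larger than 5% and splitting the output by outcome (adjust the branch names to your layout):

    asv continuous --factor 1.05 --split main HEAD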
asv find
usage: asv find [-h] [--invert] [--skip-save] [--parallel [PARALLEL]]
[--show-stderr] [--machine MACHINE] [-E ENV_SPEC]
[--python PYTHON] [--launch-method {auto,spawn,forkserver}]
[--verbose] [--config CONFIG] [--version]
from..to benchmark_name
Adaptively searches a range of commits for one that produces a large
regression. This only works well when the regression in the range is mostly
monotonic.
positional arguments:
from..to Range of commits to search. For a git repository, this
is passed as the first argument to ``git log``. See
'specifying ranges' section of the `gitrevisions`
manpage for more info.
benchmark_name Name of benchmark to use in search.
options:
-h, --help show this help message and exit
--invert, -i Search for a decrease in the benchmark value, rather
than an increase.
--skip-save Do not save intermediate results from the search
--parallel [PARALLEL], -j [PARALLEL]
Build (but don't benchmark) in parallel. The value is
the number of CPUs to use, or if no number provided,
use the number of cores on this machine.
--show-stderr, -e Display the stderr output from the benchmarks.
--machine MACHINE, -m MACHINE
Use the given name to retrieve machine information. If
not provided, the hostname is used. If no entry with
that name is found, and there is only one entry in
~/.asv-machine.json, that one entry will be used.
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--launch-method {auto,spawn,forkserver}
How to launch benchmarks. Choices: auto, spawn,
forkserver
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
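For example, to bisect a slowdown between two tags (both the range endpoints and the benchmark name below are placeholders):

    asv find v1.0..v2.0 benchmarks.TimeSuite.time_iteration

This assumes the regression is roughly monotonic across the range.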
asv rm
usage: asv rm [-h] [-y] [--verbose] [--config CONFIG] [--version]
patterns [patterns ...]
Removes entries from the results database.
positional arguments:
patterns Pattern(s) to match, each of the form X=Y. X may be one of
"benchmark", "commit_hash", "python" or any of the machine
or environment params. Y is a case-sensitive glob pattern.
options:
-h, --help show this help message and exit
-y Don't prompt for confirmation.
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
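For example, to drop every result recorded on a retired machine without prompting (the machine name is a placeholder; quote the pattern so the shell does not expand it):

    asv rm -y "machine=old-laptop"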
asv publish
usage: asv publish [-h] [--no-pull] [--html-dir HTML_DIR] [--verbose]
[--config CONFIG] [--version]
[range]
Collate all results into a website. This website will be written to the
``html_dir`` given in the ``asv.conf.json`` file, and may be served using any
static web server.
positional arguments:
range Optional commit range to consider
options:
-h, --help show this help message and exit
--no-pull Do not pull the repository
--html-dir HTML_DIR, -o HTML_DIR
Optional output directory. Default is 'html_dir' from
asv config
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
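For example, to build the site into a directory other than the configured ``html_dir``:

    asv publish --html-dir ./site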
asv preview
usage: asv preview [-h] [--port PORT] [--browser] [--html-dir HTML_DIR]
[--verbose] [--config CONFIG] [--version]
Preview the results using a local web server.
options:
-h, --help show this help message and exit
--port PORT, -p PORT Port to run webserver on. [8080]
--browser, -b Open in a web browser
--html-dir HTML_DIR, -o HTML_DIR
Optional output directory. Default is 'html_dir' from
asv config
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
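For example, to serve the generated site locally and open it in a browser:

    asv preview --port 8080 --browser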
asv profile
usage: asv profile [-h] [--gui GUI] [--output OUTPUT] [--force] [-E ENV_SPEC]
[--python PYTHON] [--launch-method {auto,spawn,forkserver}]
[--verbose] [--config CONFIG] [--version]
benchmark [revision]
Profile a benchmark.
positional arguments:
benchmark The benchmark to profile. Must be a fully-specified
benchmark name. For parameterized benchmark, it must
include the parameter combination to use, e.g.:
benchmark_name\(param0, param1, ...\)
revision The revision of the project to profile. May be a
commit hash, or a tag or branch name.
options:
-h, --help show this help message and exit
--gui GUI, -g GUI Display the profile in the given GUI. Use --gui=list
to list available GUIs.
--output OUTPUT, -o OUTPUT
Save the profiling information to the given file. This
file is in the format written by the `cProfile`
standard library module. If not provided, prints a
simple text-based profiling report to the console.
--force, -f Forcibly re-run the profile, even if the data already
exists in the results database.
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--launch-method {auto,spawn,forkserver}
How to launch benchmarks. Choices: auto, spawn,
forkserver
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
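For example, to profile a benchmark at the head of ``main`` and save the ``cProfile`` data for later viewing (the benchmark name is a placeholder):

    asv profile --output profile.dat benchmarks.TimeSuite.time_iteration main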
asv update
usage: asv update [-h] [--verbose] [--config CONFIG] [--version]
Update the results and config files to the current version.
options:
-h, --help show this help message and exit
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
asv show
usage: asv show [-h] [--details] [--durations] [--bench BENCH]
[--attribute ATTRIBUTE] [--cpu-affinity ATTRIBUTE]
[--machine MACHINE] [-E ENV_SPEC] [--python PYTHON]
[--verbose] [--config CONFIG] [--version]
[commit]
Print saved benchmark results.
positional arguments:
commit The commit to show data for
options:
-h, --help show this help message and exit
--details Show all result details
--durations Show only run durations
--bench BENCH, -b BENCH
Regular expression(s) for benchmark to run. When not
provided, all benchmarks are run.
--attribute ATTRIBUTE, -a ATTRIBUTE
Override a benchmark attribute, e.g. `-a repeat=10`.
--cpu-affinity ATTRIBUTE
Set CPU affinity for running the benchmark, in format:
0 or 0,1,2 or 0-3. Default: not set
--machine MACHINE, -m MACHINE
Use the given name to retrieve machine information. If
not provided, the hostname is used. If no entry with
that name is found, and there is only one entry in
~/.asv-machine.json, that one entry will be used.
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
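For example, to print full result details for the head of the ``main`` branch:

    asv show main --details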
asv compare
usage: asv compare [-h] [--factor FACTOR] [--no-stats] [--split]
[--only-changed] [--no-only-changed]
[--sort {name,ratio,default}] [--machine MACHINE]
[-E ENV_SPEC] [--python PYTHON] [--verbose]
[--config CONFIG] [--version]
revision1 revision2
Compare two sets of results
positional arguments:
revision1 The reference revision.
revision2 The revision being compared.
options:
-h, --help show this help message and exit
--factor FACTOR, -f FACTOR
The factor above or below which a result is considered
problematic. For example, with a factor of 1.1 (the
default value), if a benchmark gets 10% slower or
faster, it will be displayed in the results list.
--no-stats Do not use result statistics in comparisons, only
`factor` and the median result.
--split, -s Split the output into a table of benchmarks that have
improved, stayed the same, and gotten worse.
--only-changed Show only changed results.
--no-only-changed Show all results, including unchanged ones.
--sort {name,ratio,default}
Sort order
--machine MACHINE, -m MACHINE
The machine to compare the revisions for.
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
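For example, to compare a commit against its parent, sorted by ratio and split by outcome (the revisions here are illustrative; any resolvable commit reference should work):

    asv compare --sort ratio --split HEAD~1 HEAD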
asv check
usage: asv check [-h] [-E ENV_SPEC] [--python PYTHON] [--verbose]
[--config CONFIG] [--version]
Imports the benchmark suite and checks its basic validity, but does not run
the benchmarked code.
options:
-h, --help show this help message and exit
-E ENV_SPEC, --environment ENV_SPEC
Specify the environment and Python versions for
running the benchmarks. String of the format
'environment_type:python_version', for example
'conda:3.12'. If the Python version is not specified,
all those listed in the configuration file are run.
The special environment type
'existing:/path/to/python' runs the benchmarks using
the given Python interpreter; if the path is omitted,
the Python running asv is used. For 'existing', the
benchmarked project must be already installed,
including all dependencies. By default, uses the
values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
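For example, to validate the suite against the Python interpreter running asv, without building an environment:

    asv check -E existing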
asv gh-pages
usage: asv gh-pages [-h] [--no-push] [--rewrite] [--verbose] [--config CONFIG]
[--version]
Publish the results to GitHub Pages. Updates the 'gh-pages' branch in the
current repository and pushes it to 'origin'.
options:
-h, --help show this help message and exit
--no-push Update local gh-pages branch but don't push
--rewrite Rewrite gh-pages branch to contain only a single commit,
instead of adding a new commit
--verbose, -v Increase verbosity
--config CONFIG Benchmark configuration file
--version Print program version
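For example, to regenerate the local 'gh-pages' branch for inspection without pushing to 'origin':

    asv gh-pages --no-push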