asv.runner

Attributes

WIN
BENCHMARK_RUN_SCRIPT
JSON_ERROR_RETCODE
BenchmarkResult

Classes

Spawner
    Manage launching individual benchmark.py commands

ForkServer
    Manage launching individual benchmark.py commands

Functions

skip_benchmarks(benchmarks, env[, results])
    Mark benchmarks as skipped.

run_benchmarks(benchmarks, env[, results, ...])
    Run all of the benchmarks in the given Environment.

get_spawner(env, benchmark_dir, launch_method)

log_benchmark_result(results, benchmark[, show_stderr])

fail_benchmark(benchmark[, stderr, errcode])
    Return a BenchmarkResult describing a failed benchmark.

run_benchmark(benchmark, spawner, profile[, ...])
    Run a benchmark.

_run_benchmark_single_param(benchmark, spawner, ...)
    Run a benchmark for a single parameter combination index, if the benchmark is parameterized.

_combine_profile_data(datasets)
    Combine a list of profile data into a single profile.

Module Contents

asv.runner.WIN
asv.runner.BENCHMARK_RUN_SCRIPT
asv.runner.JSON_ERROR_RETCODE = -257
asv.runner.BenchmarkResult
asv.runner.skip_benchmarks(benchmarks, env, results=None)

Mark benchmarks as skipped.

Parameters

benchmarks : Benchmarks
    Set of benchmarks to skip.

env : Environment
    Environment to skip them in.

results : Results, optional
    Where to store the results. If omitted, stored to a new unnamed Results object.

Returns

results : Results
    Benchmark results.
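
Example (a minimal usage sketch; benchmarks and env are assumed to have been constructed elsewhere by asv's discovery and environment machinery):

>>> from asv.runner import skip_benchmarks
>>> results = skip_benchmarks(benchmarks, env)
>>> # Record into an existing Results object instead of a new one:
>>> results = skip_benchmarks(benchmarks, env, results=results)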

asv.runner.run_benchmarks(benchmarks, env, results=None, show_stderr=False, quick=False, profile=False, extra_params=None, record_samples=False, append_samples=False, run_rounds=None, launch_method=None)

Run all of the benchmarks in the given Environment.

Parameters

benchmarks : Benchmarks
    Benchmarks to run.

env : Environment
    Environment in which to run the benchmarks.

results : Results, optional
    Where to store the results. If omitted, stored to a new unnamed Results object.

show_stderr : bool, optional
    When True, display any stderr emitted by the benchmark.

quick : bool, optional
    When True, run each benchmark function exactly once. This is useful for quickly finding errors in the benchmark functions, without taking the time necessary to get accurate timings.

profile : bool, optional
    When True, run the benchmark through the cProfile profiler.

extra_params : dict, optional
    Override values for benchmark attributes.

record_samples : bool, optional
    Whether to retain result samples or discard them.

append_samples : bool, optional
    Whether to retain any previously measured result samples and use them in statistics computations.

run_rounds : sequence of int, optional
    Run rounds for benchmarks with multiple rounds. If None, run all rounds.

launch_method : {'auto', 'spawn', 'forkserver'}, optional
    Benchmark launching method to use.

Returns

results : Results
    Benchmark results.
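
Example (a hedged sketch; benchmarks and env are assumed to come from asv's discovery and environment setup, and the keyword values shown are illustrative):

>>> from asv.runner import run_benchmarks
>>> results = run_benchmarks(
...     benchmarks, env,
...     show_stderr=True,            # surface benchmark stderr in the log
...     quick=True,                  # one iteration each, for smoke-testing
...     launch_method="forkserver",  # see launch_method above
... )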

asv.runner.get_spawner(env, benchmark_dir, launch_method)
asv.runner.log_benchmark_result(results, benchmark, show_stderr=False)
asv.runner.fail_benchmark(benchmark, stderr='', errcode=1)

Return a BenchmarkResult describing a failed benchmark.
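
Example (a minimal sketch; benchmark is assumed to be a benchmark dict as used by run_benchmark below):

>>> from asv.runner import fail_benchmark
>>> result = fail_benchmark(benchmark, stderr="Traceback ...", errcode=1)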

asv.runner.run_benchmark(benchmark, spawner, profile, selected_idx=None, extra_params=None, cwd=None, prev_result=None)

Run a benchmark.

Parameters

benchmark : dict
    Benchmark object dict.

spawner : Spawner
    Benchmark process spawner.

profile : bool
    Whether to run with profiling.

selected_idx : set, optional
    Set of parameter indices to run for.

extra_params : {dict, list}, optional
    Additional parameters to pass to the benchmark. If a list, each entry should correspond to a benchmark parameter combination.

cwd : str, optional
    Working directory to run the benchmark in. If None, run in a temporary directory.

Returns

result : BenchmarkResult
    Result data.
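
Example (a hedged sketch; env, benchmark_dir, and benchmark are assumed to exist, and "auto" is one of the launch_method values documented above):

>>> from asv.runner import get_spawner, run_benchmark
>>> spawner = get_spawner(env, benchmark_dir, "auto")
>>> try:
...     result = run_benchmark(benchmark, spawner, profile=False)
... finally:
...     spawner.close()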

asv.runner._run_benchmark_single_param(benchmark, spawner, param_idx, profile, extra_params, cwd)

Run a benchmark for a single parameter combination index, if the benchmark is parameterized.

Parameters

benchmark : dict
    Benchmark object dict.

spawner : Spawner
    Benchmark process spawner.

param_idx : {int, None}
    Parameter index to run the benchmark for.

profile : bool
    Whether to run with profiling.

extra_params : dict
    Additional parameters to pass to the benchmark.

cwd : {str, None}
    Working directory to run the benchmark in. If None, run in a temporary directory.

Returns

result : BenchmarkResult
    Result data.

class asv.runner.Spawner(env, benchmark_dir)

Manage launching individual benchmark.py commands

env
benchmark_dir = b'.'
interrupted = False
interrupt()
create_setup_cache(benchmark_id, timeout, params_str)
run(name, params_str, profile_path, result_file_name, timeout, cwd)
preimport()
close()
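
A typical lifecycle, sketched with placeholder arguments (the literal values below are illustrative, not asv's exact calling convention):

>>> spawner = Spawner(env, benchmark_dir)
>>> spawner.preimport()  # optional: warm up imports in the environment
>>> spawner.run("pkg.TimeSuite.time_example", "{}", None,
...             "result.json", 60.0, benchmark_dir)
>>> spawner.close()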
class asv.runner.ForkServer(env, root)

Manage launching individual benchmark.py commands

tmp_dir
socket_name
server_proc
_server_output = None
stdout_reader_thread
_stdout_reader()
run(name, params_str, profile_path, result_file_name, timeout, cwd)
preimport()
_send_command(msg)
close()
asv.runner._combine_profile_data(datasets)

Combine a list of profile data into a single profile.
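
As a general technique (a sketch of one plausible approach, not necessarily asv's exact implementation), raw cProfile dumps can be merged by writing each dataset to a temporary file, loading them all into a single pstats.Stats object, and dumping the merged statistics back out:

    import os
    import pstats
    import tempfile

    def combine_profiles(datasets):
        """Merge raw cProfile dumps (bytes) into a single dump (bytes)."""
        if not datasets:
            return None
        fnames = []
        try:
            # Write each raw dump to its own temporary file, since
            # pstats.Stats loads profile data from file names.
            for data in datasets:
                fd, fname = tempfile.mkstemp()
                with os.fdopen(fd, "wb") as f:
                    f.write(data)
                fnames.append(fname)
            stats = pstats.Stats(fnames[0])
            stats.add(*fnames[1:])  # merge the remaining dumps
            fd, out = tempfile.mkstemp()
            os.close(fd)
            fnames.append(out)
            stats.dump_stats(out)   # serialize the combined profile
            with open(out, "rb") as f:
                return f.read()
        finally:
            for fname in fnames:
                os.unlink(fname)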