asv.runner¶
Attributes¶
Classes¶
| Spawner | Manage launching individual benchmark.py commands |
| ForkServer | Manage launching individual benchmark.py commands |
Functions¶
| skip_benchmarks | Mark benchmarks as skipped. |
| run_benchmarks | Run all of the benchmarks in the given Environment. |
| fail_benchmark | Return a BenchmarkResult describing a failed benchmark. |
| run_benchmark | Run a benchmark. |
| _run_benchmark_single_param | Run a benchmark for a single parameter combination index, in case the benchmark is parameterized. |
| _combine_profile_data | Combine a list of profile data to a single profile. |
Module Contents¶
- asv.runner.skip_benchmarks(benchmarks, env, results=None)[source]¶
Mark benchmarks as skipped.
Parameters¶
- benchmarks : Benchmarks
Set of benchmarks to skip
- env : Environment
Environment to skip them in
- results : Results, optional
Where to store the results. If omitted, stored to a new unnamed Results object.
Returns¶
- results : Results
Benchmark results.
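A minimal usage sketch, assuming `benchmarks` (a Benchmarks collection) and `env` (an Environment) were already obtained from asv's discovery and environment machinery; only the documented call itself is shown.

```python
from asv import runner

# Record every benchmark in the set as skipped for this environment.
# With no `results` argument, a new unnamed Results object is returned.
results = runner.skip_benchmarks(benchmarks, env)

# Alternatively, accumulate the skips into an existing Results object.
results = runner.skip_benchmarks(benchmarks, env, results=results)
```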
- asv.runner.run_benchmarks(benchmarks, env, results=None, show_stderr=False, quick=False, profile=False, extra_params=None, record_samples=False, append_samples=False, run_rounds=None, launch_method=None)[source]¶
Run all of the benchmarks in the given Environment.
Parameters¶
- benchmarks : Benchmarks
Benchmarks to run
- env : Environment object
Environment in which to run the benchmarks.
- results : Results, optional
Where to store the results. If omitted, stored to a new unnamed Results object.
- show_stderr : bool, optional
When True, display any stderr emitted by the benchmark.
- quick : bool, optional
When True, run each benchmark function exactly once. This is useful to quickly find errors in the benchmark functions, without taking the time necessary to get accurate timings.
- profile : bool, optional
When True, run the benchmark through the cProfile profiler.
- extra_params : dict, optional
Override values for benchmark attributes.
- record_samples : bool, optional
Whether to retain result samples or discard them.
- append_samples : bool, optional
Whether to retain any previously measured result samples and use them in statistics computations.
- run_rounds : sequence of int, optional
Run rounds for benchmarks with multiple rounds. If None, run all rounds.
- launch_method : {‘auto’, ‘spawn’, ‘forkserver’}, optional
Benchmark launching method to use.
Returns¶
- results : Results
Benchmark results.
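A hedged sketch of a typical call, using only the parameters documented above. `benchmarks` and `env` are assumed to exist already, and the `extra_params` values are illustrative attribute overrides rather than required settings.

```python
from asv import runner

results = runner.run_benchmarks(
    benchmarks, env,
    show_stderr=True,                         # echo benchmark stderr to the console
    quick=False,                              # full timing runs, not a single iteration
    profile=False,                            # skip cProfile data collection
    extra_params={"repeat": 1, "number": 1},  # illustrative benchmark attribute overrides
    record_samples=True,                      # keep raw samples alongside the statistics
    launch_method="auto",                     # let asv choose spawn vs. forkserver
)
```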
- asv.runner.fail_benchmark(benchmark, stderr='', errcode=1)[source]¶
Return a BenchmarkResult describing a failed benchmark.
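For example (a sketch, assuming `benchmark` is a benchmark dict as used elsewhere on this page), a failure can be recorded when the benchmark could not be launched at all; the stderr text and error code are illustrative values.

```python
from asv import runner

# Produce a BenchmarkResult marking this benchmark as failed.
result = runner.fail_benchmark(
    benchmark,
    stderr="Traceback (most recent call last): ...",
    errcode=1,
)
```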
- asv.runner.run_benchmark(benchmark, spawner, profile, selected_idx=None, extra_params=None, cwd=None, prev_result=None)[source]¶
Run a benchmark.
Parameters¶
- benchmark : dict
Benchmark object dict
- spawner : Spawner
Benchmark process spawner
- profile : bool
Whether to run with profile
- selected_idx : set, optional
Set of parameter indices to run for.
- extra_params : {dict, list}, optional
Additional parameters to pass to the benchmark. If a list, each entry should correspond to a benchmark parameter combination.
- cwd : str, optional
Working directory to run the benchmark in. If None, run in a temporary directory.
Returns¶
- result : BenchmarkResult
Result data.
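A sketch of a single-benchmark run, assuming `benchmark` (a benchmark dict) and `spawner` (a Spawner, see below) are already set up; the selected indices and the `extra_params` override are illustrative.

```python
from asv import runner

result = runner.run_benchmark(
    benchmark,
    spawner,
    profile=False,
    selected_idx={0, 2},         # run only these parameter combinations
    extra_params={"repeat": 1},  # illustrative attribute override
)
```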
- asv.runner._run_benchmark_single_param(benchmark, spawner, param_idx, profile, extra_params, cwd)[source]¶
Run a benchmark for a single parameter combination index, in case the benchmark is parameterized
Parameters¶
- benchmark : dict
Benchmark object dict
- spawner : Spawner
Benchmark process spawner
- param_idx : {int, None}
Parameter index to run the benchmark for
- profile : bool
Whether to run with profile
- extra_params : dict
Additional parameters to pass to the benchmark
- cwd : {str, None}
Working directory to run the benchmark in. If None, run in a temporary directory.
Returns¶
- result : BenchmarkResult
Result data.
- class asv.runner.Spawner(env, benchmark_dir)[source]¶
Manage launching individual benchmark.py commands
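A rough sketch of how a Spawner might be constructed and handed to run_benchmark, assuming an existing Environment `env` and a benchmark dict `benchmark`; the "benchmarks" directory name is an assumption, and only the constructor documented above is used.

```python
from asv.runner import Spawner, run_benchmark

# Create a spawner tied to the environment and the benchmark directory.
spawner = Spawner(env, "benchmarks")  # "benchmarks" is the assumed benchmark directory

# Hand the spawner to run_benchmark (documented above) to launch the benchmark.
result = run_benchmark(benchmark, spawner, profile=False)
```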