asv.results

Classes

Results

Manage a set of benchmark results for a single machine and commit hash.

Functions

iter_results_paths(results)

Iterate over all of the result file paths.

iter_results(results)

Iterate over all of the result files.

iter_results_for_machine(results, machine_name)

Iterate over all of the result files for a particular machine.

iter_results_for_machine_and_hash(results, machine_name, commit)

Iterate over all of the result files with a given hash for a particular machine.

iter_existing_hashes(results)

Iterate over all of the result commit hashes and dates, yielding each commit_hash.

get_existing_hashes(results)

Get a list of the commit hashes that have already been tested.

get_result_hash_from_prefix(results, machine_name, commit_prefix)

Get the 8-char result commit identifier from a potentially shorter prefix.

get_filename(machine, commit_hash, env_name)

Get the result filename for a given machine, commit_hash and environment.

_compatible_results(result, result_params, params)

For parameterized benchmarks, obtain values from result that are compatible with the benchmark's parameters.

format_benchmark_result(results, benchmark)

Pretty-print a benchmark result to human-readable form.

_format_benchmark_result(result, benchmark[, max_width])

Format the result from a parameterized benchmark as an ASCII table.

_format_param_value(value_repr)

Format a parameter value for displaying it as test output. The values are strings obtained via Python repr.

Module Contents

asv.results.iter_results_paths(results)

Iterate over all of the result file paths.

asv.results.iter_results(results)

Iterate over all of the result files.
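
For example, a minimal sketch that walks an existing results tree (the "results" directory name is hypothetical; each yielded item is a Results object, per the class documented below)::

    from asv import results

    # Point this at your project's configured results directory.
    for res in results.iter_results("results"):
        print(res.commit_hash, res.date, res.env_name)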

asv.results.iter_results_for_machine(results, machine_name)

Iterate over all of the result files for a particular machine.

asv.results.iter_results_for_machine_and_hash(results, machine_name, commit)

Iterate over all of the result files with a given hash for a particular machine.

asv.results.iter_existing_hashes(results)

Iterate over all of the result commit hashes and dates, yielding each commit_hash.

May return duplicates. Use get_existing_hashes if that matters.

asv.results.get_existing_hashes(results)

Get a list of the commit hashes that have already been tested.

asv.results.get_result_hash_from_prefix(results, machine_name, commit_prefix)

Get the 8-char result commit identifier from a potentially shorter prefix. Only considers the set of commits that have had results computed.

Returns None if there are no matches. Raises a UserError if the prefix is non-unique.
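
A hedged usage sketch (the machine name and prefix are hypothetical; UserError lives in asv.util)::

    from asv import results, util

    try:
        commit = results.get_result_hash_from_prefix(
            "results", "my-machine", "abc1")
    except util.UserError:
        print("prefix matches more than one tested commit")
    else:
        print("no match" if commit is None else "resolved to " + commit)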

asv.results.get_filename(machine, commit_hash, env_name)

Get the result filename for a given machine, commit_hash and environment.

If the environment name is too long, use its hash instead.
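
A minimal sketch (all argument values are hypothetical, and the exact layout of the returned path is not specified here)::

    from asv import results

    fname = results.get_filename(
        "my-machine", "0123456789abcdef0123456789abcdef", "conda-py3.11-numpy")
    print(fname)  # relative path of the result file for this combination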

asv.results._compatible_results(result, result_params, params)

For parameterized benchmarks, obtain values from result that are compatible with the benchmark's parameters.

class asv.results.Results(params, requirements, commit_hash, date, python, env_name, env_vars)

Manage a set of benchmark results for a single machine and commit hash.

api_version = 2
_params
_requirements
_commit_hash
_date
_results
_samples
_stats
_benchmark_params
_profiles
_python
_env_name
_started_at
_duration
_benchmark_version
_env_vars
_stderr
_errcode
classmethod unnamed()
property commit_hash
property date
property params
property env_vars
property started_at
property duration
set_build_duration(value)
set_setup_cache_duration(setup_cache_key, value)
property benchmark_version
property stderr
property errcode
get_all_result_keys()

Return all available result keys.

get_result_keys(benchmarks)

Return result keys corresponding to benchmarks.

Parameters

benchmarks : Benchmarks

Benchmarks to return results for. Used for checking benchmark versions.

Returns

keys : set

Set of benchmark result keys.
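
For instance, a sketch that lists the keys stored in one result file (hypothetical path; get_all_result_keys needs no benchmark metadata, whereas get_result_keys also filters by benchmark version and therefore needs the Benchmarks mapping)::

    from asv.results import Results

    res = Results.load("results/my-machine/01234567-conda-py3.11.json")
    for key in sorted(res.get_all_result_keys()):
        print(key)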

get_result_value(key, params)

Return the value of a benchmark result.

Parameters

key : str

Benchmark name to return results for.

params : {list of list, None}

Set of benchmark parameters to return values for.

Returns

value : {float, list of float}

Benchmark result value. If the benchmark is parameterized, a list of values is returned.

get_result_stats(key, params)

Return the statistical information of a benchmark result.

Parameters

key : str

Benchmark name to return results for.

params : {list of list, None}

Set of benchmark parameters to return values for.

Returns

stats : {None, dict, list of dict}

Result statistics. If the benchmark is parameterized, a list of values is returned.

get_result_samples(key, params)

Return the raw data points of a benchmark result.

Parameters

key : str

Benchmark name to return results for.

params : {list of list, None}

Set of benchmark parameters to return values for.

Returns

samples : {None, list}

Raw result samples. If the benchmark is parameterized, a list of values is returned.
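
Continuing the sketch above, the three getters share the same key/params calling convention; get_result_params, described next, supplies the stored parameter lists::

    params = res.get_result_params(key)
    value = res.get_result_value(key, params)      # float, or list of floats
    stats = res.get_result_stats(key, params)      # statistics dict(s), or None
    samples = res.get_result_samples(key, params)  # raw samples, if recorded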

get_result_params(key)

Return the benchmark parameters of the given result.

remove_result(key)

Remove results corresponding to a given benchmark.

remove_samples(key, selected_idx=None)

Remove measurement samples from the selected benchmark.

add_result(benchmark, result, started_at=None, duration=None, record_samples=False, append_samples=False, selected_idx=None)

Add benchmark result.

Parameters

benchmark : dict

Benchmark object.

result : runner.BenchmarkResult

Result of the benchmark.

started_at : datetime.datetime, optional

Benchmark start time.

duration : float, optional

Benchmark total duration in seconds.

record_samples : bool, optional

Whether to save samples.

append_samples : bool, optional

Whether to combine new samples with old.

selected_idx : set, optional

Which indices in a parameterized benchmark to update.
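
A heavily hedged sketch of recording one result. Every literal below is hypothetical, and the fields of runner.BenchmarkResult are assumptions rather than a documented contract; check asv.runner for the real definition::

    from asv import runner
    from asv.results import Results

    res = Results(
        params={"machine": "my-machine"},  # machine characteristics (assumed shape)
        requirements={},
        commit_hash="0123456789abcdef0123456789abcdef",
        date=1700000000000,                # timestamp format assumed
        python="3.11",
        env_name="conda-py3.11",
        env_vars={},
    )
    bench = {"name": "time_sum", "params": [], "version": "1"}  # assumed keys
    result = runner.BenchmarkResult(                            # field names assumed
        result=[1.2e-5], samples=[None], number=[None],
        errcode=0, stderr="", profile=None,
    )
    res.add_result(bench, result, record_samples=True)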

_mk_pstats(bytedata)
get_profile(benchmark_name)

Get the profile data for the given benchmark name.

Parameters

benchmark_name : str

Name of benchmark.

Returns

profile_data : pstats.Stats

Profile data.
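
A short sketch, assuming get_profile returns a pstats.Stats object as documented above (the benchmark name is hypothetical)::

    if res.has_profile("time_sum"):
        res.get_profile("time_sum").sort_stats("cumulative").print_stats(10)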

get_profile_stats(benchmark_name)
has_profile(benchmark_name)

Does the given benchmark data have profiling information?

save(result_dir)

Save the results to disk, replacing existing results.

Parameters

result_dir : str

Path to root of results tree.

load_data(result_dir)

Load previous results for the current parameters (if any).

classmethod load(path, machine_name=None)

Load results from disk.

Parameters

path : str

Path to results file.

machine_name : str, optional

If given, check that the results file is for the given machine.
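
A hedged roundtrip sketch (paths hypothetical; save derives the file name from the machine, commit hash, and environment, as in get_filename above)::

    res.save("results")  # writes under the root of the results tree
    loaded = Results.load(
        "results/my-machine/01234567-conda-py3.11.json",
        machine_name="my-machine",
    )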

rm(result_dir)
classmethod update(path)
property env_name
classmethod update_to_2(d)

Reformat data in api_version 1 format to version 2.

asv.results.format_benchmark_result(results, benchmark)

Pretty-print a benchmark result to human-readable form.

Parameters

results : Results

Result set object.

benchmark : dict

Benchmark dictionary.

Returns

info : {str, None}

One-line description of results.

details : {str, None}

Additional details.
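
For example, continuing the hypothetical res and bench objects from the sketches above, the two return values arrive as a pair::

    from asv.results import format_benchmark_result

    info, details = format_benchmark_result(res, bench)
    print(info)
    if details:
        print(details)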

asv.results._format_benchmark_result(result, benchmark, max_width=None)

Format the result from a parameterized benchmark as an ASCII table.

asv.results._format_param_value(value_repr)

Format a parameter value for displaying it as test output. The values are strings obtained via Python repr.