Developer Docs

This section describes details that may be of interest to developers and others interested in the internals of asv.

Note

From version 0.6.0 onwards, the functionality in asv has been split into the part needed by the code being benchmarked (asv_runner) and the rest of asv. This means that the asv documentation covers setting up environments, loading plugins, and collecting the results of the benchmarks run with asv_runner.

Development setup

The packages required to run the full asv test suite are listed in requirements-dev.txt. The minimal set of packages required for testing is: pytest, virtualenv, filelock, six, pip, setuptools, and wheel.

Separation of concerns

asv consists of the following steps:

  • Setting up an environment for the project

  • Building the project within the environment

  • Running the benchmarks

  • Collecting results and visualizing them after analysis for regressions

Note

Conceptually there are two separate parts to this process. There is the main process which orchestrates the environment creation. This is followed by a subprocess which essentially runs the project benchmarks. This subprocess must have only minimal dependencies, ideally nothing beyond the minimum Python version needed to run asv along with the dependencies of the project itself.

Changed in version 0.6.0: To clarify this, starting from v0.6.0, asv has been split into asv_runner, which is responsible for loading benchmark types, discovering them and running them within an environment, while the asv repository handles the remaining tasks.

Benchmark suite layout and file formats

A benchmark suite directory has the following layout. The $-prefixed variables refer to values in the asv.conf.json file.

  • asv.conf.json: The configuration file. See asv.conf.json reference.

  • $benchmark_dir: Contains the benchmark code, created by the user. Each subdirectory needs an __init__.py.

  • $project/: A clone of the project being benchmarked. Information about the history is grabbed from here, but the actual building happens in the environment-specific clones described below.

  • $env_dir/: Contains the environments used for building and benchmarking. There is one environment in here for each specific combination of Python version and library dependency. Generally, the dependencies are only installed once, and then reused on subsequent runs of asv, but the project itself needs to be rebuilt for each commit being benchmarked.

    • $ENVIRONMENT_HASH/: The directory name of each environment is the MD5 hash of the list of dependencies and the Python version. This is not very user-friendly, but it keeps the directory name within reasonable limits.

      • asv-env-info.json: Contains information about the environment, mainly the Python version, dependencies and build environment variables used.

      • project/: An environment-specific clone of the project repository. Each environment has its own clone so that builds can be run in parallel without fear of clobbering (particularly for projects that generate source files outside of the build/ directory). These clones are created from the main $project/ directory using the --shared option to git clone so that the repository history is stored in one place, saving disk space.

        The project is built in this directory with the standard python setup.py build command. This means repeated builds happen in the same place and ccache is able to cache and reuse many of the build products.

      • wheels/: If build_cache_size in asv.conf.json is set to something other than 0, this contains wheels of the N most recent project builds for this environment. In this way, if a build for a particular commit has already been performed and cached, it can be restored much more quickly. Each subdirectory is a commit hash, containing one .whl file and a timestamp.

      • usr/, lib/, bin/ etc.: These are the virtualenv or Conda environment directories that we install the project into and then run benchmarks from.

  • $results_dir/: This is the “database” of results from benchmark runs.

    • benchmarks.json: Contains metadata about all of the benchmarks in the suite. It is a dictionary from benchmark names (a fully-qualified dot-separated path) to dictionaries containing information about that benchmark. Useful keys include:

      • code: The Python code of the benchmark

      • params: List of lists describing the parameter values of a parameterized benchmark. If the benchmark is not parameterized, this is an empty list. Otherwise, the n-th entry of the list is a list of the Python repr() strings for the values the n-th parameter should loop over.

      • param_names: Names for parameters for a parameterized benchmark. Must be of the same length as the params list.

      • version: An arbitrary string identifying the benchmark version. The default value is a hash of code, but the user can override it.

      Other keys are specific to the kind of benchmark, and correspond to Benchmark attributes.

    • MACHINE/: Within the results directory is a directory for each machine. Putting results from different machines in separate directories makes the results trivial to merge, which is useful when benchmarking across different platforms or architectures.

      • HASH-pythonX.X-depA-depB.json: Each JSON file within a particular machine represents a run of benchmarks for a particular project commit in a particular environment. Contains the keys:

        • version: the value 2.

        • commit_hash: The project commit that the benchmarks were run on.

        • env_name: Name of the environment the benchmarks were run in.

        • date: A JavaScript date stamp of the date of the commit (not when the benchmarks were run).

        • params: Information about the machine the benchmarks were run on.

        • python: Python version of the environment.

        • requirements: Requirements dictionary of the environment.

        • env_vars: Environment variable dictionary of the environment.

        • durations: Duration information for build and setup-cache timings.

        • result_columns: List of column names for the results dictionary. It is ["result", "params", "version", "started_at", "duration", "stats_ci_99_a", "stats_ci_99_b", "stats_q_25", "stats_q_75", "stats_number", "stats_repeat", "samples", "profile"] currently.

        • results: A dictionary from benchmark names to benchmark results. The values are lists such that dict(zip(result_columns, results[benchmark_name])) pairs the appropriate keys with the values; in particular, trailing columns with missing values can be dropped. (A small sketch illustrating this follows the layout list below.)

          Some items, marked with “(param-list)” below, are lists with items corresponding to results from a parametrized benchmark (see params below). Non-parametrized benchmarks then have lists with a single item.

          Values except params can be null, indicating missing data.

          Floating-point numbers in stats_* and duration are truncated to 5 significant base-10 digits when saving, in order to produce smaller JSON files.

          • result: (param-list) contains the summarized result value(s) of the benchmark. The values are float, NaN or null.

            The values are either numbers indicating the result of a successful run, null indicating a failed benchmark, or NaN indicating a benchmark explicitly skipped by the benchmark suite.

          • params: contains a copy of the parameter values of the benchmark, as described above. If the user has modified the benchmark after the benchmark was run, these may differ from the current values. The result value is a list of results. Each entry corresponds to one combination of the parameter values. The n-th entry in the list corresponds to the parameter combination itertools.product(*params)[n], i.e., the results appear in cartesian product order, with the last parameters varying fastest.

            For non-parametrized benchmarks, [].

          • version: string, a benchmark version identifier. Results whose version is not equal to the current version of the benchmark are ignored. If the value is missing, no version comparisons are done (backward compatibility).

          • started_at: Javascript timestamp indicating start time of latest benchmark run.

          • duration: float, indicating the duration of a benchmark run in seconds.

          • stats_*: (param-list) dictionary containing various statistical indicators. Possible * are ci_99_a, ci_99_b (confidence interval estimate lower/upper values), q_25 (lower quartile), q_75 (upper quartile), repeat, and number.

          • profile: string, zlib-compressed and base64-encoded Python profile dump.

          • samples: (param-list) List of samples obtained for a benchmark. The samples are in the order they were measured in.

  • $html_dir/: The output of asv publish, which turns the raw results in $results_dir/ into something viewable in a web browser. It is an important feature of asv that the results can be shared on a static web server, so there is no server-side component, and the result data is accessed through AJAX calls from JavaScript. Most of the files at the root of $html_dir/ are completely static and are just copied verbatim from asv/www/ in the source tree.

    • index.json: Contains an index into the benchmark data, describing what is available. Important keys include:

      • benchmarks: A dictionary of benchmarks. At the moment, this is identical to the content in $results_dir/benchmarks.json.

      • revision_to_hash: A dictionary mapping revision numbers to commit hashes. This allows showing the commit tooltips in the graph and the commits involved in a regression.

      • revision_to_date: A dictionary mapping revision numbers (including those of tags) to JavaScript date stamps. This allows the x-scale of a plot to be scaled by date.

      • machines: Describes the machines used for testing.

      • params: A dictionary of parameters against which benchmark results can be selected. Each entry is a list of valid values for that parameter.

      • tags: A dictionary of git tags and their revisions, so this information can be displayed in the plot.

    • graphs/: This is a nested tree of directories where each level is a parameter from the params dictionary, in asciibetical order. Given a set of chosen parameters, the web interface can easily grab the associated graph.

      • BENCHMARK_NAME.json: At the leaves of this tree are the actual benchmark graphs. Each file contains a list of pairs, where each pair is of the form (timestamp, result_value). For parameterized benchmarks, result_value is a list of results, corresponding to itertools.product iteration over the parameter combinations, as in the result files. For non-parameterized benchmarks, it is directly the result. Missing values (e.g. failed or skipped benchmarks) are represented by null.
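
To make the results layout concrete, here is a small sketch (with a made-up subset of columns and values, not taken from a real results file) showing how dict(zip(...)) reassembles a record from a results row and how the entries of result line up with itertools.product over params:

    import itertools

    # Hypothetical column subset and row, mirroring the layout described above.
    result_columns = ["result", "params", "version", "started_at", "duration"]
    row = [[1.2e-06, 3.4e-06], [["10", "100"]], "abc123", 1700000000000, 4.2]

    # Trailing columns with missing values are dropped, so zip() pairs what is present.
    record = dict(zip(result_columns, row))

    # For a parameterized benchmark, the n-th result corresponds to the n-th
    # combination in cartesian-product order (the last parameter varies fastest).
    for combo, value in zip(itertools.product(*record["params"]), record["result"]):
        print(combo, value)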

Full-stack testing

For full-stack testing, we use Selenium WebDriver and its Python bindings; the Selenium Python bindings have additional documentation of their own.

The browser back-end can be selected via:

pytest --webdriver=PhantomJS

The allowed values include None (default), PhantomJS, Chrome, Firefox, ChromeHeadless, FirefoxHeadless, or arbitrary Python code initializing a Selenium webdriver instance.

To use them, at least one of the following needs to be installed:

  • Firefox GeckoDriver: Firefox-based controllable browser.

  • ChromeDriver: Chrome-based controllable browser. On Ubuntu, install via apt-get install chromium-chromedriver, on Fedora via dnf install chromedriver.

  • PhantomJS: Headless web browser (discontinued, prefer using Firefox or Chrome).

For other options regarding the webdriver to use, see pytest --help.

Step detection

Regression detection in ASV is based on detecting stepwise changes in the graphs. The assumptions on the data are as follows: the curves are piecewise constant plus random noise. We don’t know the scaling of the data or the amplitude of the noise, but assume the relative weight of the noise amplitude is known for each data point.

ASV measures the noise amplitude of each data point, based on a number of samples. We use this information for weighting the different data points:

\[\sigma_j = \sigma \mathrm{CI}_{99} = \sigma / w_j\]

i.e., we assume the uncertainty in each measurement point is proportional to the estimated confidence interval for each data point. Their inverses are taken as the relative weights w_j. If w_j=0 or undefined, we replace it with the median weight, or with 1 if all are undefined. The step detection algorithm determines the absolute noise amplitude itself based on all available data, which is more robust than relying on the individual measurements.
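
As a rough illustration of this weighting scheme (a sketch, not asv's actual implementation), relative weights could be derived from per-point confidence interval widths like this:

    import numpy as np

    def relative_weights(ci_widths):
        """Relative weights w_j from per-point confidence interval widths (illustrative)."""
        ci = np.asarray(ci_widths, dtype=float)
        with np.errstate(divide="ignore"):
            w = np.where(ci > 0, 1.0 / ci, 0.0)
        valid = w[w > 0]
        fallback = np.median(valid) if valid.size else 1.0
        w[w == 0] = fallback  # replace zero/undefined weights with the median (or 1)
        return w

    print(relative_weights([0.02, 0.01, 0.0, 0.04]))  # -> [ 50. 100.  50.  25.]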

Step detection is a well-studied problem. In this implementation, we mainly follow a variant of the approach outlined in [Friedrich2008] and elsewhere. This provides a fast algorithm for solving the piecewise weighted fitting problem

(1)\[\mathop{\mathrm{argmin}}_{k,\{j\},\{\mu\}} \gamma k + \sum_{r=1}^k\sum_{i=j_{r-1}}^{j_r} w_i |y_i - \mu_r|\]

The differences are: as we do not need exact solutions, we add additional heuristics to work around the \({\mathcal O}(n^2)\) scaling, which is too harsh for pure-Python code. For details, see asv.step_detect.solve_potts_approx. Moreover, we follow a slightly different approach to obtaining a suitable number of intervals, by selecting an optimal value for \(\gamma\) based on a variant of the information criterion problem discussed in [Yao1988].

[Friedrich2008]

F. Friedrich et al., "Complexity Penalized M-Estimation: Fast Computation", Journal of Computational and Graphical Statistics 17.1, 201-224 (2008). http://dx.doi.org/10.1198/106186008X285591

[Yao1988]

Y.-C. Yao, "Estimating the number of change-points via Schwarz criterion", Statistics & Probability Letters 6, 181-189 (1988). http://dx.doi.org/10.1016/0167-7152(88)90118-6
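
Returning to problem (1): for orientation, the following is a deliberately unoptimized sketch of an exact dynamic-programming solver for that weighted L1 Potts problem. asv's actual solver (asv.step_detect.solve_potts_approx) adds the approximations and heuristics mentioned above, so this is illustrative only:

    import numpy as np

    def weighted_median(y, w):
        """A minimizer of sum_i w_i |y_i - mu|."""
        order = np.argsort(y)
        y, w = np.asarray(y, float)[order], np.asarray(w, float)[order]
        cum = np.cumsum(w)
        return float(y[np.searchsorted(cum, 0.5 * cum[-1])])

    def fit_potts(y, w, gamma):
        """Minimize gamma*k + sum_r sum_i w_i |y_i - mu_r| over all segmentations."""
        y, w = np.asarray(y, float), np.asarray(w, float)
        n = len(y)
        # seg_cost[i, j]: optimal weighted L1 cost of one constant level on y[i:j+1]
        seg_cost = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                mu = weighted_median(y[i:j + 1], w[i:j + 1])
                seg_cost[i, j] = np.sum(w[i:j + 1] * np.abs(y[i:j + 1] - mu))
        best = np.full(n + 1, np.inf)  # best[j]: optimal objective for y[:j]
        best[0] = 0.0
        prev = np.zeros(n + 1, dtype=int)
        for j in range(1, n + 1):
            for i in range(j):
                cost = best[i] + gamma + seg_cost[i, j - 1]
                if cost < best[j]:
                    best[j], prev[j] = cost, i
        # Walk back through prev to recover the right edges j_1 < ... < j_k = n.
        cuts, j = [], n
        while j > 0:
            cuts.append(j)
            j = prev[j]
        return sorted(cuts)

    print(fit_potts([1, 1, 1.1, 5, 5.2, 5.1], np.ones(6), gamma=1.0))  # -> [3, 6]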

Bayesian information

To proceed, we need an argument by which to select a suitable \(\gamma\) in (1). Some of the literature on step detection, e.g. [Yao1988], suggests results based on Schwarz information criteria,

(2)\[\text{SC} = \frac{m}{2} \ln \sigma^2 + k \ln m = \text{min!}\]

where \(\sigma^2\) is the maximum likelihood variance estimator (if the noise is gaussian). For the implementation, see asv.step_detect.solve_potts_autogamma.
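
As a plain numerical illustration of (2) (again a sketch; asv's solve_potts_autogamma selects \(\gamma\) along a family of solutions rather than scoring fits directly like this), one could compare candidate fits as follows:

    import numpy as np

    def schwarz_criterion(y, mu_per_point, k):
        """SC = (m/2) ln sigma^2 + k ln m, with sigma^2 the ML residual variance."""
        y = np.asarray(y, float)
        m = len(y)
        sigma2 = np.mean((y - np.asarray(mu_per_point, float)) ** 2)
        return 0.5 * m * np.log(sigma2) + k * np.log(m)

    y = [1.0, 1.1, 0.9, 2.0, 2.1, 1.9]
    print(schwarz_criterion(y, [1.0] * 3 + [2.0] * 3, k=2))  # two levels: lower SC
    print(schwarz_criterion(y, [1.5] * 6, k=1))              # single level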

What follows is a hand-waving plausibility argument for why such an objective function makes sense, and how we end up with \(l_1\) rather than gaussians. Better approaches are probably to be found in the step detection literature. If you have a better formulation, contributions/corrections are welcome!

We assume a Bayesian model:

(3)\[P(\{y_i\}_{i=1}^m|\sigma,k,\{\mu_i\}_{i=1}^k,\{j_i\}_{i=1}^{k-1}) = N \sigma^{-m} \exp( -\sigma^{-1}\sum_{r=1}^k\sum_{i=j_{r-1}+1}^{j_r} w_i |y_i - \mu_r| )\]

Here, \(y_i\) are the \(m\) data points at hand, \(k\) is the number of intervals, \(\mu_i\) are the values of the function at the intervals, \(j_i\) are the interval breakpoints; \(j_0=0\), \(j_k=m\), \(j_{r-1}<j_r\). The noise is assumed to be Laplace rather than gaussian, which results in the more robust \(l_1\) norm fitting rather than \(l_2\). The noise amplitude \(\sigma\) is not known. \(N\) is a normalization constant that depends on \(m\) but not on the other parameters.

The optimal \(k\) comes from Bayesian reasoning: \(\hat{k} = \mathop{\mathrm{argmax}}_k P(k|\{y\})\), where

(4)\[P(k|\{y\}) = \frac{\pi(k)}{\pi(\{y\})}\int d\sigma (d\mu)^k \sum_{\{j\}} P(\{y\}|\sigma,k,\{\mu\},\{j\}) \pi(\sigma, \{\mu\},\{j\}|k)\]

The prior \(\pi(\{y\})\) does not matter for \(\hat{k}\); the other priors are assumed flat. We would need to estimate the behavior of the integral in the limit \(m\to\infty\). We do not succeed in doing this rigorously here, although it might be done in the literature.

Consider first saddle-point integration over \(\{\mu\}\), expanding around the max-likelihood values \(\mu_r^*\). The max-likelihood estimates are the weighted medians of the data points in each interval. The change in the exponent when \(\mu\) is perturbed is

(5)\[\Delta = -\sigma^{-1}\sum_{r=1}^k \sum_{i=j_{r-1}+1}^{j_r}w_i[|y_i-\mu^*_r - \delta\mu_r| - |y_i-\mu^*_r|]\]

Note that \(\sum_{i=j_{r-1}+1}^{j_r} w_i\mathrm{sgn}(y_i-\mu^*_r)=0\), so that response to small variations \(\delta\mu_r\) is \(m\)-independent. For larger variations, we have

(6)\[\Delta = -\sigma^{-1}\sum_{r=1}^k N_r(\delta \mu_r) |\delta \mu_r|\]

where \(N_r(\delta\mu)=\sum_{i} w_i s_i\), with \(s_i = \pm1\) depending on whether \(y_i\) is above or below the perturbed median. Let us assume that in a typical case, \(N_r(\delta\mu)\sim{}m_r\bar{W}_r^2\delta\mu/\sigma\), where \(\bar{W}_r = \frac{1}{m_r}\sum_i w_i\) is the average weight of the interval and \(m_r\) the number of points in the interval. This recovers a result we would have obtained in the gaussian noise case

(7)\[\Delta \sim -\sigma^{-2} \sum_r \bar{W}_r^2 m_r |\delta \mu_r|^2\]

For the gaussian case, this would not have required any questionable assumptions. After integration over \(\{\delta\mu\}\) we are left with

(8)\[\int(\ldots) \propto \int d\sigma \sum_{\{j\}} (2\pi)^{k/2}\sigma^k [\bar{W}_1\cdots \bar{W}_k]^{-1} [m_1\cdots m_k]^{-1/2} P(\{y\}|\sigma,k,\{\mu_*\},\{j\}) \pi(\sigma, \{j\}|k)\]

We now approximate the rest of the integrals/sums with only the max-likelihood terms, and assume \(m_j^*\sim{}m/k\). Then,

(9)\[\begin{split}\ln P(k|\{y\}) &\simeq C_1(m) + C_2(k) + \frac{k}{2}\ln(2\pi) - \frac{k}{2} \ln(m/k) - k \ln \bar{W} + \ln P(\{y\}|\sigma_*,k,\{\mu_*\},\{j_*\}) \\ &\approx \tilde{C}_1(m) + \tilde{C}_2(k) - \frac{k}{2} \ln m + \ln P(\{y\}|\sigma_*,k,\{\mu_*\},\{j_*\})\end{split}\]

where we neglect terms that don’t affect the asymptotics for \(m\to\infty\), and the \(C\) are constants that do not depend on both \(m\) and \(k\). The result is of course the Schwarz criterion for \(k\) free model parameters. We can suspect that the factor \(k/2\) should be replaced by a different number, since we have \(2k\) parameters. If also the other integrals/sums can be approximated in the same way as the \(\{\mu\}\) ones, we should obtain the missing part.

Substituting in the max-likelihood value

(10)\[ \sigma_* = \frac{1}{m} \sum_{r=1}^k\sum_{i=j_{r-1}^*+1}^{j_r^*} w_i|y_i - \mu_r^*|\]

we get

(11)\[\ln P(k|\{y\}) \sim C - \frac{k}{2} \ln m - m \ln\sum_{r=1}^k\sum_{i=j_{r-1}^*}^{j_r^*} w_i |y_i - \mu_r^*|\]

This is now similar to (2), apart from numerical prefactors. The final fitting problem then becomes

(12)\[\mathop{\mathrm{argmin}}_{k,\{j\},\{\mu\}} r(m) k + \ln\sum_{r=1}^k\sum_{i=j_{r-1}}^{j_r} w_i |y_i - \mu_r|\]

with \(r(m) = \frac{\ln m}{2m}\). Note that it is invariant under rescaling of the weights \(w_i\mapsto{}\alpha{}w_i\), i.e., the invariance of the original problem is retained. Since this function \(r(m)\) is not necessarily completely correct, and doing the calculation rigorously seems to require more effort than the requirements of the application justify, we take a pragmatic view and fudge the function to \(r(m) = \beta \frac{\ln m}{m}\), with \(\beta\) chosen so that things appear to work in practice for the problem at hand.

According to [Friedrich2008], problem (12) can be solved in \({\cal O}(n^3)\) time. This is too slow, however, so we instead approach it on the basis of the easier problem (1). It produces a family of solutions \([k^*(\gamma), \{\mu^*(\gamma)\}, \{j^*(\gamma)\}]\). We now evaluate (12) restricted to the curve parameterized by \(\gamma\). In particular, \([\{\mu^*(\gamma)\}, \{j^*(\gamma)\}]\) solves (12) under the constraint \(k=k^*(\gamma)\). If \(k^*(\gamma)\) obtains all values in the set \(\{1,\ldots,m\}\) when \(\gamma\) is varied, the original problem is solved completely. This is probably not a far-fetched assumption; in practice such a Bayesian information criterion appears to provide a reasonable way of selecting a suitable \(\gamma\).
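
A small sketch of that selection step (illustrative only; in practice the candidate fits would come from solving (1) for a range of \(\gamma\) values, e.g. with something like the fit_potts sketch above):

    import numpy as np

    def criterion_12(y, w, cuts, mus):
        """Evaluate r(m)*k + ln(sum_i w_i |y_i - mu|) for a candidate fit, cf. (12)."""
        y, w = np.asarray(y, float), np.asarray(w, float)
        m, k = len(y), len(cuts)
        r = np.log(m) / (2 * m)
        total, start = 0.0, 0
        for cut, mu in zip(cuts, mus):  # cuts are the right edges j_1 < ... < j_k = m
            total += np.sum(w[start:cut] * np.abs(y[start:cut] - mu))
            start = cut
        return r * k + np.log(total)

    y, w = [1, 1, 1.1, 5, 5.2, 5.1], np.ones(6)
    # Two hypothetical candidates from the gamma-parameterized family:
    print(criterion_12(y, w, cuts=[3, 6], mus=[1.0, 5.1]))  # two steps: smaller value
    print(criterion_12(y, w, cuts=[6], mus=[2.0]))          # a single level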

Overfitting

It’s possible to fit any data perfectly by choosing size-1 intervals, one per data point. For such a fit, the logarithm in (12) gives \(-\infty\), which then always minimizes SC. This artifact of the model above needs special handling.

Indeed, for \(\sigma\to0\), (3) reduces to

(13)\[P(\{y_i\}_{i=1}^m|\sigma,k,\{\mu_i\}_{i=1}^k,\{j_i\}_{i=1}^{k-1}) = \prod_{r=1}^k \prod_{i=j_{r-1} + 1}^{j_r} \delta(y_i - \mu_r)\]

which in (4) gives a contribution (assuming no repeated y-values)

(14)\[P(k|\{y\}) = \frac{\pi(m)}{\pi(\{y\})}\delta_{m,k}\int d\sigma \pi(\sigma, \{y\},\{i\}|m) f(\sigma) + \ldots\]

with \(f(\sigma)\to1\) for \(\sigma\to0\). A similar situation occurs also in other cases where perfect fitting occurs (repeated y-values). With the flat, scale-free prior \(\pi(\ldots)\propto1/\sigma\) used above, the result is undefined.

A simple fix is to give up the complete scale-freeness of the results, i.e., to fix a minimal noise level \(\pi(\sigma,\{\mu\},\{j\}|k)\propto\theta(\sigma-\sigma_0)/\sigma\), with some \(\sigma_0(\{\mu\},\{j\},k)>0\). The effect in the \(\sigma\) integral is to cut off the log-divergence, so that with sufficient accuracy we can in (12) replace

(15)\[\ln \sigma \mapsto \ln(\sigma_0 + \sigma)\]

Here, we fix a measurement accuracy floor with the following guess: sigma_0 = 0.1 * w0 * min(abs(diff(mu))), and sigma_0 = 0.001 * w0 * abs(mu) when there is only a single interval, where w0 is the median weight.
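
Spelled out as code (an illustrative sketch; the actual handling lives in asv.step_detect), this guess is roughly:

    import numpy as np

    def sigma0_floor(mu, w0):
        """Accuracy floor sigma_0 for the fitted levels mu, given the median weight w0."""
        mu = np.asarray(mu, dtype=float)
        if len(mu) > 1:
            return 0.1 * w0 * np.min(np.abs(np.diff(mu)))
        return 0.001 * w0 * abs(float(mu[0]))

    print(sigma0_floor([1.0, 1.5, 3.0], w0=2.0))  # -> 0.1 * 2.0 * 0.5 = 0.1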

Autocorrelated noise

Practical experience shows that the noise in the benchmark results can be correlated. Often benchmarks are run for multiple commits at once, for example the new commits at a given time, and the benchmark machine does something else between the runs. Alternatively, the background load from other processes on the machine varies with time.

To give a basic model for the noise correlations, we include AR(1) Laplace noise in (3),

(16)\[P(\{y_i\}_{i=1}^m|\sigma,\rho,k,\{\mu_i\}_{i=1}^k,\{j_i\}_{i=1}^{k-1}) = N \sigma^{-m} \exp(-\sigma^{-1}\sum_{r=1}^k\sum_{i=j_{r-1}+1}^{j_r} |\epsilon_{i,r} - \rho \epsilon_{i-1,r}|)\]

where \(\epsilon_{i,r}=y_i-\mu_{r}\) with \(\epsilon_{j_{r-1},r}=y_{j_{r-1}}-\mu_{r-1}\) and \(\epsilon_{j_0,1}=0\) are the deviations from the stepwise model. The correlation measure \(\rho\) is unknown, but assumed to be constant in \((-1,1)\).

Since the parameter \(\rho\) is global, it does not change the parameter counting part of the Schwarz criterion. The maximum likelihood term however does depend on \(\rho\), so that the problem becomes:

(17)\[\mathop{\mathrm{argmin}}_{k,\rho,\{j\},\{\mu\}} r(m) k + \ln\sum_{r=1}^k\sum_{i=j_{r-1}}^{j_r} |\epsilon_{i,r} - \rho\epsilon_{i-1,r}|\]

To save computation time, we do not solve this optimization problem exactly. Instead, we again minimize along the \(\mu_r^*(\gamma)\), \(j_r^*(\gamma)\) curve provided by the solution to (1), and use (17) only in selecting the optimal value of the \(\gamma\) parameter.

The minimization vs. \(\rho\) can be done numerically for given \(\mu_r^*(\gamma)\), \(j_r^*(\gamma)\). This minimization step is computationally cheap compared to the piecewise fit, so including it will not significantly change the runtime of the total algorithm.
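
For a single segment's residuals, that numerical minimization can be as simple as a one-dimensional grid search over \((-1, 1)\); a rough sketch (not asv's implementation) is:

    import numpy as np

    def ar1_l1_cost(eps, rho):
        """sum_i |eps_i - rho * eps_{i-1}|, with the deviation before the segment taken as 0."""
        eps = np.asarray(eps, float)
        return abs(eps[0]) + np.sum(np.abs(eps[1:] - rho * eps[:-1]))

    def best_rho(eps, num=199):
        """Grid search for the rho in (-1, 1) minimizing the AR(1) Laplace cost."""
        rhos = np.linspace(-0.99, 0.99, num)
        costs = [ar1_l1_cost(eps, rho) for rho in rhos]
        return float(rhos[int(np.argmin(costs))])

    eps = [0.5, 0.4, 0.3, 0.25, 0.2]  # made-up positively correlated deviations
    print(best_rho(eps))  # -> approximately 0.8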

Postprocessing

For the purposes of regression detection, we do not report all steps the above approach provides. For details, see asv.step_detect.detect_regressions.

Making use of measured variance

asv also measures the variance in the timings. This information is currently used to provide relative data weighting (see above).