Benchmarks
Introduction
Benchmarks allow us to track performance from release to release and to verify that the latest changes have not affected it drastically. The benchmarks are based on the Python pyperf module.
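As an illustration of how such benchmarks are built, here is a minimal sketch of a pyperf script; the bench_getitem function and the benchmark name are hypothetical and not taken from the project's actual benchmark suite:

import pyperf

from multidict import MultiDict


def bench_getitem():
    # Hypothetical micro-benchmark: repeated lookups on a small MultiDict.
    md = MultiDict([("key", "value")])
    for _ in range(100):
        md["key"]


runner = pyperf.Runner()
runner.bench_func("multidict getitem", bench_getitem)

When executed, pyperf runs the benchmark in multiple worker processes and reports the mean and standard deviation of the timings, which helps reduce the influence of system noise.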
How to run
The dependencies from requirements/dev.txt should be installed before running the benchmarks. Please also make sure that you have configured your OS to produce reliable results.
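For example, pyperf itself can apply a set of system tweaks (CPU pinning, disabling Turbo Boost, and similar) to reduce noise; the command below typically requires root privileges, and python -m pyperf system reset undoes the changes:

$ python -m pyperf system tune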
To run the benchmarks, execute the following command:
$ python benchmarks/benchmark.py
This would run benchmarks for both classes (MultiDict and CIMultiDict) of both implementations (pure-Python and C).
To run benchmarks for a specific class of a specific implementation, use the --impl option:
$ python benchmarks/benchmark.py --impl multidict_c
would run benchmarks only for MultiDict implemented in C.
Please use --help to see all available options. Most of the options are described in pyperf's Runner documentation.
How to compare implementations
The --impl option allows running the benchmarks for a specific implementation of a class. Combined with the compare_to command of the Python pyperf module, this gives a good picture of how an implementation performs:
$ python benchmarks/benchmark.py --impl multidict_c -o multidict_cy.json
$ python benchmarks/benchmark.py --impl multidict_py -o multidict_py.json
$ python -m pyperf compare_to multidict_cy.json multidict_py.json
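If your pyperf version is recent enough, passing --table to compare_to renders the comparison as a table, which is easier to scan when many benchmarks are involved:

$ python -m pyperf compare_to --table multidict_cy.json multidict_py.json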