Changelog History
v1.1.0 Changes
March 08, 2022
Long time, huh? I'm sorry; a combination of priorities, hard-to-fix bugs, stress, and arm problems kept me away too long. This release brings major features long in development and now finally released (reduction measurements + a profiler after the run), along with a critical bugfix around measurement accuracy for very fast functions (nanoseconds).
Features (User Facing)
- Reduction counting/measurements were implemented. Reductions are a fairly stable unit of execution implemented by the BEAM that measures, in a more abstract manner, how much work was done. It's helpful, as it shouldn't be affected by load on the system. Check out the docs.
- You can now dive straight into profiling from your benchmarks by using the profiling feature. See the docs - thanks @pablocostass
Default Behavior Changes (User Facing)
- Measuring function call overhead is now turned off by default, as it only helps with extreme nano benchmarks and has some potential for causing wrong results, so it should be opt-in. We now also print out the measured overhead for visibility.
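The options behind these 1.1.0 entries could be combined roughly like this; the option names `reduction_time`, `profile_after`, and `measure_function_call_overhead` are assumed from the docs, and all values are illustrative:

```elixir
Benchee.run(
  %{"flat_map" => fn -> Enum.flat_map(1..100, &[&1, &1]) end},
  time: 2,
  # new in 1.1.0: measure reductions in addition to run time
  reduction_time: 2,
  # new in 1.1.0: run a profiler on the benchmark after the run
  profile_after: true,
  # now off by default; opt back in for extreme nano benchmarks
  measure_function_call_overhead: true
)
```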
Bugfixes (User Facing)
- Benchee now correctly looks at the time resolution reported by `:erlang.system_info(:os_monotonic_time_source)` when determining whether a measurement is "precise" enough. Benchee also works around an Erlang bug we discovered, present in Erlang <= 22.2. Issue for reference.
- The annoying stacktrace warning has been removed - thanks @mad42
Noteworthy
- A new dependency, `statistex`, will show up - it's part of continued efforts to extract reusable libraries from Benchee.
v1.0.1 Changes
April 09, 2019
Bugfixes (User Facing)
- When memory measurements actually differed, extended statistics were displayed even though the option was not provided. They are now correctly displayed only if the option is provided and the values actually had variance.
v1.0.0 Changes
April 09, 2019
It's 0.99.0 without the deprecation warnings. Specifically:
- The old way of passing formatters (`:formatter_options`) is gone in favor of the new `:formatters` with modules, tuples, or functions taking one argument
- The configuration needs to be passed as the second argument to `Benchee.run/2`
- `Benchee.collect/1` replaces `Benchee.measure/1`
- `unit_scaling` is a top-level configuration option, not one for the console formatter
- The warning for memory measurements not working on OTP <= 18 has also been dropped (we already officially dropped OTP 18 support in 0.14.0)
We're aiming to follow Semantic Versioning as we go forward. That means formatters should be safe to use with `~> 1.0` (or even `>= 0.99.0 and < 2.0.0`).
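The new-style call described above looks roughly like this sketch; the benchmark job and formatter option are illustrative:

```elixir
Benchee.run(
  %{"map" => fn -> Enum.map(1..1_000, &(&1 * 2)) end},
  # configuration is the second argument to Benchee.run/2;
  # formatters are modules, {module, options} tuples, or one-arg functions
  formatters: [
    {Benchee.Formatters.Console, extended_statistics: true},
    fn suite -> IO.inspect(length(suite.scenarios), label: "scenarios") end
  ]
)
```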
v0.99.0 Changes
March 28, 2019
The "we're almost 1.0!" release - all the last small features, a bag of polish, and deprecation warnings. If you run this release successfully without deprecation warnings, you should be safe to upgrade to 1.0.0; if not, it's a bug :)
Breaking Changes (User Facing)
- Changed official Elixir compatibility to `~> 1.6`; 1.4+ should still work but isn't guaranteed or tested against.
Features (User Facing)
- The console comparison now also displays the absolute difference in the average (like +12 ms) so that you have an idea how much time that translates to in your application, not just that it's 100x faster
- Overhaul of the README, documentation, updated samples, etc. - a whole lot of things have also been marked `@doc false` as they're considered internal
Bugfixes (User Facing)
- Remove double empty line after the configuration display
- Fix some wrong type specs
Breaking Changes (Plugins)
- `Scenario` made it to the big leagues: it's no longer `Benchee.Benchmark.Scenario` but `Benchee.Scenario`, as it is arguably one of our most important data structures.
- The `Scenario` struct had some keys changed (last time before 2.0, I promise!) - instead of `:run_times`/`:run_time_statistics` you now have one `run_time_data` key that contains a `Benchee.CollectionData` with the keys `:samples` and `:statistics`. Same for `memory_usage`. This was done to handle different kinds of measurements more uniformly as we add more of them.
Features (Plugins)
- `Benchee.Statistics` comes with 3 new values: `:relative_more`, `:relative_less`, and `:absolute_difference`, so that you don't have to calculate these relative values yourself :)
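For plugin authors, access under the restructured keys looks roughly like this sketch (the benchmark job itself is illustrative):

```elixir
suite = Benchee.run(%{"reverse" => fn -> Enum.reverse(1..1_000) end})

for scenario <- suite.scenarios do
  # samples and statistics now live under run_time_data (a Benchee.CollectionData)
  average = scenario.run_time_data.statistics.average
  samples = length(scenario.run_time_data.samples)
  IO.puts("#{scenario.name}: #{average} ns over #{samples} samples")
end
```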
v0.14.0 Changes
February 10, 2019
Highlights of this release are a new way to specify formatter options closer to the formatters themselves, as well as maximum precision measurements.
Breaking Changes (User Facing)
- Dropped support for Erlang 18.x
- Formatters no longer have an `output/1` function; please use `Formatter.output/3` instead
- Usage of `formatter_options` is deprecated; instead please use the new tuple way
Features (User Facing)
- Benchee now uses the maximum precision available for measuring, which on Linux and macOS is nanoseconds instead of microseconds. Somewhat surprisingly, `:timer.tc/1` always cut down to microseconds although better precision is available.
- The preferred way to specify formatters and their options is as a tuple `{module, options}` instead of using `formatter_options`.
- New `Formatter.output/1` function that takes a suite and uses all configured formatters to output their results
- Add the concept of a benchmarking title that formatters can pick up
- The displayed percentiles can now be adjusted
- The inputs option can now be an ordered list of tuples, so you can determine their order
- Support FreeBSD properly (system metrics) - thanks @kimshrier
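The adjustable percentiles and ordered inputs from the list above could be used roughly like this; the `percentiles` option name is assumed from the docs, and all values are illustrative:

```elixir
Benchee.run(
  %{"sum" => fn input -> Enum.sum(input) end},
  # a list of tuples keeps the inputs in exactly this order
  inputs: [
    {"small", Enum.to_list(1..100)},
    {"large", Enum.to_list(1..100_000)}
  ],
  # display the 50th, 95th, and 99th percentiles
  percentiles: [50, 95, 99]
)
```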
Bugfixes (User Facing)
- Remove extra double quotes in the operating system report line - thanks @kimshrier
Breaking Changes (Plugins)
- All reported times are now in nanoseconds instead of microseconds
- The formatter functions `format` and `write` now take 2 arguments each, where the additional argument is the options specified for this formatter, so that you have direct access to them without peeling them out of the suite
- You can no longer `use Benchee.Formatter` - just adopt the behaviour (no more auto-generated `output/1` function; `Formatter.output/3` takes that responsibility now)
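A minimal formatter under the new contract might look like this sketch; the module name and `:verbose` option are made up for illustration:

```elixir
defmodule MyFormatter do
  @behaviour Benchee.Formatter

  # format/2 and write/2 now receive this formatter's options directly
  @impl true
  def format(suite, opts) do
    "Ran #{length(suite.scenarios)} scenarios (verbose: #{opts[:verbose]})"
  end

  @impl true
  def write(output, _opts) do
    IO.puts(output)
  end
end
```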
Features (Plugins)
- An optional title is now available in the suite for you to display
- Scenarios now come already sorted (first by run time, then by memory usage) - no need to sort them yourself!
- Add `Scenario.data_processed?/2` to check whether either run time or memory data has had statistics generated
v0.13.2 Changes
August 02, 2018
Mostly fixing memory measurement bugs and delivering them to you asap ;)
Bugfixes (User Facing)
- Remove a race condition that caused us to sometimes miss garbage collection events and hence report negative or N/A results
- Restructure measuring code to produce less overhead (micro memory benchmarks should be much better now)
- Make the console formatter more resilient to faulty memory measurements, aka don't crash
v0.13.1 Changes
August 02, 2018
Mostly fixing memory measurement bugs and related issues :) Enjoy a better memory measurement experience from now on!
Bugfixes (User Facing)
- Memory measurements now correctly take the old generation on the heap into account. In practice that means sometimes bigger results and no missing measurements. See #216 for details. Thanks to @michalmuskala for providing an interesting sample.
- Formatters are now more robust (aka not crashing) when dealing with partially missing memory measurements. Although that shouldn't happen anymore with the previous item fixed, benchee shouldn't crash on you, so we want to be on the safe side.
- It's now possible to run just memory measurements (i.e. `time: 0, warmup: 0, memory_time: 1`)
- Even when you already have scenarios tagged with `-2` etc., saving again with the same base tag name still correctly produces `-3`, `-4`, etc.
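A memory-only run as described above could be configured roughly like this (the benchmark job is illustrative):

```elixir
Benchee.run(
  %{"shuffle" => fn -> Enum.shuffle(1..1_000) end},
  # skip run time measurement entirely, measure only memory
  time: 0,
  warmup: 0,
  memory_time: 1
)
```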
v0.13.0 Changes
April 14, 2018
Memory measurements are finally here! Please report problems if you experience them.
Features (User Facing)
- Memory measurements, obviously ;) Memory measurement is currently limited to the process your function runs in - memory consumption of other processes will not be measured. More information can be found in the README. Only usable on OTP 19+. Special thanks go to @devonestes and @michalmuskala
- New `pre_check` configuration option which allows users to add a dry run of all benchmarks with each input before running the actual suite. This should save time while writing the code for your benchmarks.
Bugfixes (User Facing)
- Standard deviation is now calculated correctly for being a sample of the population (divided by `n - 1` and not just `n`)
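The corrected formula (Bessel's correction) can be sketched like this; this is illustrative, not Benchee's internal code:

```elixir
defmodule SampleStats do
  # sample standard deviation: divide the squared deviations by n - 1, not n
  def standard_deviation(samples) when length(samples) > 1 do
    n = length(samples)
    mean = Enum.sum(samples) / n

    variance =
      samples
      |> Enum.map(fn x -> (x - mean) * (x - mean) end)
      |> Enum.sum()
      |> Kernel./(n - 1)

    :math.sqrt(variance)
  end
end

SampleStats.standard_deviation([2, 4, 4, 4, 5, 5, 7, 9])
# roughly 2.14; the population version (dividing by n) would give 2.0
```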
v0.12.1 Changes
March 05, 2018
Bugfixes (User Facing)
- Formatters that use `FileCreation.each` will no longer silently fail on file creation; `/` and other problematic file name characters are now sanitized to `_`. Thanks @gfvcastro
v0.12.0 Changes
January 20, 2018
Adds the ability to save benchmarking results and load them again to compare against. Also fixes a bug for running benchmarks in parallel.
Breaking Changes (User Facing)
- Dropped support for Elixir 1.3; supported is now Elixir 1.4+
Features (User Facing)
- New `save` option specifying a path and a tag to save the results and tag them (for instance with `"master"`), and a `load` option to load those results again and compare them against your current results.
- Runs warning-free with Elixir 1.6
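The save/load workflow above might look like this sketch; the path, tag, and benchmark job are illustrative:

```elixir
jobs = %{"reverse" => fn -> Enum.reverse(1..1_000) end}

# on master: run the suite and save the results under a tag
Benchee.run(jobs, save: [path: "benchmarks/master.benchee", tag: "master"])

# later, on a feature branch: run again and compare against the saved results
Benchee.run(jobs, load: "benchmarks/master.benchee")
```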
Bugfixes (User Facing)
- If you were running benchmarks in parallel, you would see results for each parallel process. So, if you were running two jobs with `parallel: 2`, you would see four results in the formatter. This now correctly shows only the two jobs.
Features (Plugins)
- `Scenario` has a new `name` field to be adopted for displaying scenario names, as it includes the tag name and potential future additions.