Changelog History

  • v1.1.0 Changes

    March 08, 2022

    Long time, huh? I'm sorry - a combination of priorities, difficult-to-fix bugs, stress, and arm problems kept me away for too long. This release brings major features that were long in development and are now finally released (reduction measurements + profiler after run), along with a critical bugfix around measurement accuracy for very fast functions (nanoseconds).

    Features (User Facing)

    • Reduction counting/measurement was implemented. Reductions are a fairly stable unit of execution used by the BEAM that measures, in a more abstract manner, how much work was done. This is helpful because it shouldn't be affected by load on the system. Check out the docs.
    • You can now dive straight into profiling from your benchmarks by using the profiling feature (a combined sketch of both options follows this list). See the docs - thanks @pablocostass
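
    A minimal sketch of how these two options can be combined, assuming the reduction_time and profile_after options as described in the docs; the jobs and durations here are illustrative:

        list = Enum.to_list(1..10_000)

        Benchee.run(
          %{
            "flat_map" => fn -> Enum.flat_map(list, fn x -> [x, x * 2] end) end,
            "map.flatten" => fn -> list |> Enum.map(fn x -> [x, x * 2] end) |> List.flatten() end
          },
          time: 2,
          reduction_time: 2,   # also collect reduction counts, not just run times
          profile_after: true  # profile each job once the benchmark run finishes
        )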

    0๏ธโƒฃ Default Behavior Changes (User Facing)

    • 0๏ธโƒฃ measuring function call overhead is now turned off by default, as it only helps with extreme nano benchmarks and has some potential for causing wrong results, so it should only be opt in. We now also print out the measured overhead for vsibility.

    Bugfixes (User Facing)

    • Benchee now correctly looks at the time resolution reported by :erlang.system_info(:os_monotonic_time_source) when determining whether a measurement is "precise" enough. Benchee also works around an Erlang bug we discovered that is present in Erlang <= 22.2. Issue for reference.
    • The annoying stacktrace warning has been removed - thanks @mad42

    Noteworthy

    • A new dependency, statistex, will show up - it's part of continued efforts to extract reusable libraries from Benchee (a small usage sketch follows below).
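
    For the curious, a hedged sketch of using the extracted library on its own; Statistex.statistics/1 computes descriptive statistics from a list of samples (the sample values and commented results are illustrative):

        samples = [100, 200, 400, 400, 500]
        stats = Statistex.statistics(samples)

        stats.average             # => 320.0
        stats.standard_deviation  # sample standard deviation of the measurements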
  • v1.0.1 Changes

    April 09, 2019

    Bugfixes (User Facing)

    • When memory measurements actually differed, extended statistics were displayed even though the option was not provided. They are now correctly displayed only if the option is provided and the values actually had variance.
  • v1.0.0 Changes

    April 09, 2019

    It's 0.99.0 without the deprecation warnings. Specifically:

    • The old way of passing formatters (:formatter_options) is gone in favor of the new :formatters with modules, tuples, or functions taking one argument (the post-1.0 style is sketched after this list)
    • The configuration needs to be passed as the second argument to Benchee.run/2
    • Benchee.collect/1 replaces Benchee.measure/1
    • unit_scaling is a top-level configuration option, not one for the console formatter
    • The warning about memory measurements not working on OTP <= 18 has also been dropped (we already officially dropped OTP 18 support in 0.14.0)
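
    A sketch pulling the points above together in the post-1.0 style (the job, durations, and formatter options are illustrative):

        Benchee.run(
          %{"sort" => fn -> Enum.sort(Enum.shuffle(1..1_000)) end},
          # configuration goes into the second argument of Benchee.run/2
          time: 3,
          unit_scaling: :largest,  # top-level option, no longer a console formatter option
          formatters: [
            {Benchee.Formatters.Console, extended_statistics: true},             # {module, options} tuple
            fn suite -> IO.puts("ran #{length(suite.scenarios)} scenarios") end  # 1-arity function
          ]
        )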

    We're aiming to follow Semantic Versioning as we go forward. That means formatters should be safe to use ~> 1.0 (or even >= 0.99.0 and < 2.0.0).

  • v0.99.0 Changes

    March 28, 2019

    The "we're almost 1.0!" release - all the last small features, a bag of polish, and deprecation warnings. If you run this release successfully without deprecation warnings you should be safe to upgrade to 1.0.0; if not - it's a bug :)

    Breaking Changes (User Facing)

    • Changed official Elixir compatibility to ~> 1.6; 1.4+ should still work but isn't guaranteed or tested against.

    Features (User Facing)

    • The console comparison now also displays the absolute difference in the average (like +12 ms) so that you have an idea of how much time that translates to in your application, not just that it's 100x faster
    • Overhaul of the README, documentation, samples, etc. - a whole lot of things have also been marked @doc false as they're considered internal

    Bugfixes (User Facing)

    • Remove double empty line after configuration display
    • Fix some wrong type specs

    Breaking Changes (Plugins)

    • Scenario made it to the big leagues: it's no longer Benchee.Benchmark.Scenario but Benchee.Scenario, as it is arguably one of our most important data structures.
    • The Scenario struct had some keys changed (last time before 2.0, I promise!) - instead of :run_times/:run_time_statistics you now have one run_time_data key that contains Benchee.CollectionData, which has the keys :samples and :statistics. The same goes for memory_usage. This was done to handle different kinds of measurements more uniformly as we add more of them (see the sketch at the end of this entry).

    Features (Plugins)

    • Benchee.Statistics comes with 3 new values - :relative_more, :relative_less, :absolute_difference - so that you don't have to calculate these relative values yourself :) A short sketch of reading the restructured data follows.
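
    A hedged sketch of how a plugin might read the restructured scenario data; it only uses the keys named in this entry, and the summary shape is made up:

        defmodule MyPlugin.Describe do
          # Summarize one scenario using the run_time_data / CollectionData structure above.
          def describe(%Benchee.Scenario{} = scenario) do
            stats = scenario.run_time_data.statistics

            %{
              sample_count: length(scenario.run_time_data.samples),
              average: stats.average,
              # nil for the fastest (reference) scenario, otherwise filled in by Benchee
              relative_more: stats.relative_more,
              absolute_difference: stats.absolute_difference
            }
          end
        end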
  • v0.14.0 Changes

    February 10, 2019

    Highlights of this release are a new way to specify formatter options closer to the formatters themselves, as well as maximum-precision measurements.

    Breaking Changes (User Facing)

    • Dropped support for Erlang 18.x
    • Formatters no longer have an output/1 function; please use Formatter.output/3 instead
    • Usage of formatter_options is deprecated; please use the new tuple form instead

    Features (User Facing)

    • ๐Ÿง Benchee now uses the maximum precision available to measure which on Linux and OSX is nanoseconds instead of microseconds. Somewhat surprisingly :timer.tc/1 always cut down to microseconds although better precision is available.
    • The preferred way to specify formatters and their options is to specify them as a tuple {module, options} instead of using formatter_options.
    • ๐Ÿ†• New Formatter.output/1 function that takes a suite and uses all configured formatters to output their results
    • โž• Add the concept of a benchmarking title that formatters can pick up
    • the displayed percentiles can now be adjusted
    • inputs option can now be an ordered list of tuples, this way you can determine their order
    • ๐Ÿ‘Œ support FreeBSD properly (system metrics) - thanks @kimshrier
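
    A sketch combining several of the new options from this release; the jobs and values are illustrative, and the percentiles option name is an assumption based on "the displayed percentiles can now be adjusted":

        Benchee.run(
          %{"map" => fn input -> Enum.map(input, &(&1 * 2)) end},
          title: "collection mapping",
          inputs: [
            {"Small", Enum.to_list(1..100)},     # an ordered list of tuples keeps this order
            {"Large", Enum.to_list(1..100_000)}
          ],
          percentiles: [50, 95, 99],             # assumed option name for the adjustable percentiles
          formatters: [
            {Benchee.Formatters.Console, extended_statistics: true}  # {module, options}
          ]
        )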

    Bugfixes (User Facing)

    • Remove extra double quotes in operating system report line - thanks @kimshrier

    Breaking Changes (Plugins)

    • all reported times are now in nanoseconds instead of microseconds
    • Formatter functions format and write now take 2 arguments each, where the additional argument is the options specified for this formatter, so that you have direct access to them without peeling them from the suite
    • You can no longer use Benchee.Formatter - just adopt the behaviour (no more auto-generated output/1 function; Formatter.output/3 takes that responsibility now). A minimal example follows this list.
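
    A hedged, minimal sketch of a formatter that just adopts the behaviour, based on the callback shapes described above (format/2 and write/2, invoked via Formatter.output/3); the module name and output format are made up:

        defmodule MyFormatter do
          @behaviour Benchee.Formatter

          # Turn the suite into whatever this formatter wants to output.
          @impl true
          def format(suite, options) do
            "#{length(suite.scenarios)} scenarios benchmarked (options: #{inspect(options)})\n"
          end

          # Write the formatted output out - here simply to stdout.
          @impl true
          def write(output, _options) do
            IO.write(output)
            :ok
          end
        end

        # Outside of a full Benchee.run/2, roughly:
        # Benchee.Formatter.output(suite, MyFormatter, some: :option)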

    Features (Plugins)

    • An optional title is now available in the suite for you to display
    • Scenarios now come already sorted (first by run time, then by memory usage) - no need to sort them yourself!
    • Add Scenario.data_processed?/2 to check whether either run time or memory data has had statistics generated
  • v0.13.2 Changes

    August 02, 2018

    Mostly fixing memory measurement bugs and delivering the fixes to you asap ;)

    Bugfixes (User Facing)

    • Remove a race condition that caused us to sometimes miss garbage collection events and hence report negative or N/A results
    • Restructure the measuring code to produce less overhead (micro memory benchmarks should be much better now)
    • Make the console formatter more resilient to faulty memory measurements, aka don't crash
  • v0.13.1 Changes

    August 02, 2018

    ๐Ÿ‘ Mostly fixing memory measurement bugs and related issues :) Enjoy a better memory measurement experience from now on!

    Bugfixes (User Facing)

    • Memory measurements now correctly take the old generation on the heap into account. In reality that means sometimes bigger results and no missing measurements. See #216 for details. Thanks to @michalmuskala for providing an interesting sample.
    • Formatters are now more robust (aka not crashing) when dealing with partially missing memory measurements. Although it shouldn't happen anymore with the previous item fixed, Benchee shouldn't crash on you, so we want to be on the safe side.
    • It's now possible to run just memory measurements (i.e. time: 0, warmup: 0, memory_time: 1 - see the sketch after this list)
    • Even when you already have scenarios tagged with -2 etc., saving again with the same "base tag name" now correctly produces -3, -4 etc.
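
    A small sketch of such a memory-only run (the job and duration are illustrative):

        Benchee.run(
          %{"build list" => fn -> Enum.to_list(1..10_000) end},
          time: 0,        # skip run time measurements
          warmup: 0,      # skip warmup
          memory_time: 1  # only measure memory, for 1 second
        )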
  • v0.13.0 Changes

    April 14, 2018

    Memory Measurements are finally here! Please report problems if you experience them.

    Features (User Facing)

    • Memory measurements, obviously ;) Memory measurement is currently limited to the process your function is run in - memory consumption of other processes will not be measured. More information can be found in the README. Only usable on OTP 19+. Special thanks go to @devonestes and @michalmuskala
    • New pre_check configuration option which allows users to add a dry run of all
      benchmarks with each input before running the actual suite. This should save
      time while actually writing the code for your benchmarks (see the sketch after this list).
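
    A hedged sketch combining the two features, assuming OTP 19+ for memory measurements (the job, input, and durations are illustrative):

        Benchee.run(
          %{"flat_map" => fn input -> Enum.flat_map(input, &[&1, &1]) end},
          inputs: %{"list" => Enum.to_list(1..1_000)},
          memory_time: 2,  # measure memory consumption for 2 seconds
          pre_check: true  # dry-run every job/input combination before measuring
        )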

    Bugfixes (User Facing)

    • Standard deviation is now calculated correctly, treating the measurements as a sample of the population (dividing by n - 1 and not just n)
  • v0.12.1 Changes

    March 05, 2018

    Bugfixes (User Facing)

    • Formatters that use FileCreation.each will no longer silently fail on file
      creation and now also sanitize / and other special file name characters to _.
      Thanks @gfvcastro
  • v0.12.0 Changes

    January 20, 2018

    Adds the ability to save benchmarking results and load them again to compare
    against. Also fixes a bug around running benchmarks in parallel.

    Breaking Changes (User Facing)

    • Dropped support for Elixir 1.3; supported versions are now Elixir 1.4+

    Features (User Facing)

    • New save option specifying a path and a tag to save the results and tag them
      (for instance with "master"), and a load option to load those results again
      and compare them against your current results (a sketch of the workflow follows this list).
    • Runs warning-free with Elixir 1.6
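
    A sketch of the save/load workflow (the path, tag, and job are illustrative):

        # 1) On master, record a baseline and tag it:
        Benchee.run(
          %{"sort" => fn -> Enum.sort(Enum.shuffle(1..1_000)) end},
          save: [path: "benchmarks/base.benchee", tag: "master"]
        )

        # 2) On your branch, load the saved results to compare against:
        Benchee.run(
          %{"sort" => fn -> Enum.sort(Enum.shuffle(1..1_000)) end},
          load: "benchmarks/base.benchee"
        )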

    Bugfixes (User Facing)

    • If you were running benchmarks in parallel, you would see results for each
      parallel process you were running. So, if you were running two jobs and
      setting your configuration to parallel: 2, you would see four results in the
      formatter. This now correctly shows only the two jobs.

    Features (Plugins)

    • Scenario has a new name field that should be adopted for displaying scenario names,
      as it includes the tag name and potential future additions.