author    Florian Fischer <florian.fl.fischer@fau.de> 2019-03-05 16:06:22 +0100
committer Florian Fischer <florian.fl.fischer@fau.de> 2019-03-05 16:06:22 +0100
commit    cf7f1bb43d365f8bf1dc045593018478249ea444 (patch)
tree      fb9caf1b5820641d01f596b70f611e31f19dad7a
parent    89a316bb41077f97f7c79c3568abd90eed6e8fc4 (diff)
update Readme
 Readme.md         | 10
 doc/Benchmarks.md |  2
 2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/Readme.md b/Readme.md
index de27d0d..8a3a05f 100644
--- a/Readme.md
+++ b/Readme.md
@@ -16,20 +16,22 @@ git clone https://muhq.space/software/allocbench.git
## Usage
- usage: bench.py [-h] [-s] [-l LOAD] [-a ALLOCATORS] [-r RUNS] [-v]
- [-b BENCHMARKS [BENCHMARKS ...]] [-ns] [-rd RESULTDIR]
- [--license]
+ usage: bench.py [-h] [-ds, --dont-save] [-l LOAD] [-a ALLOCATORS] [-r RUNS]
+ [-v] [-vdebug] [-b BENCHMARKS [BENCHMARKS ...]] [-ns]
+ [-rd RESULTDIR] [--license]
benchmark memory allocators
optional arguments:
-h, --help show this help message and exit
- -s, --save save benchmark results in RESULTDIR
+ -ds, --dont-save don't save benchmark results in RESULTDIR
-l LOAD, --load LOAD load benchmark results from directory
-a ALLOCATORS, --allocators ALLOCATORS
load allocator definitions from file
-r RUNS, --runs RUNS how often the benchmarks run
-v, --verbose more output
+ -vdebug, --verbose-debug
+ debug output
-b BENCHMARKS [BENCHMARKS ...], --benchmarks BENCHMARKS [BENCHMARKS ...]
benchmarks to run
-ns, --nosum don't produce plots
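For illustration, the updated flags from the help text above could be combined as follows. This is a hypothetical invocation sketch: the benchmark name `loop` and the results directory `results` are placeholders, not values taken from this commit.

```shell
# run one benchmark 5 times with verbose output, without saving results
./bench.py -r 5 -b loop -v -ds

# load previously saved results instead of re-running the benchmarks,
# skipping plot generation
./bench.py -l results -ns
```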
diff --git a/doc/Benchmarks.md b/doc/Benchmarks.md
index d747500..4ab0c02 100644
--- a/doc/Benchmarks.md
+++ b/doc/Benchmarks.md
@@ -4,7 +4,7 @@ A benchmark in the context of allocbench is a command usable with exec and a
list of all possible arguments. The command is executed and measured for each
permutation of the specified arguments and for each allocator to test.
-Benchmarks are implemented as python objects that have a function `run(runs, verbose)`.
+Benchmarks are implemented as python objects that have a function `run(runs)`.
Other non mandatory functions are:
* load
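A benchmark object with the new `run(runs)` signature might be sketched as follows. This is an illustrative assumption, not allocbench's actual base class: the fields `cmd` and `args` and the timing logic are placeholders; the doc only requires a `run(runs)` method, with `load` optional.

```python
import time


class ExampleBenchmark:
    """Hypothetical allocbench-style benchmark object (field names assumed)."""

    def __init__(self):
        self.cmd = "true"                    # command usable with exec (assumed)
        self.args = {"threads": [1, 2, 4]}   # each permutation is measured
        self.results = []

    def run(self, runs):
        # Execute and measure the command `runs` times for every
        # permutation of the specified arguments.
        for threads in self.args["threads"]:
            for _ in range(runs):
                start = time.perf_counter()
                # a real benchmark would exec self.cmd here
                elapsed = time.perf_counter() - start
                self.results.append({"threads": threads, "time": elapsed})

    def load(self, path):
        # optional: load previously saved results from `path`
        pass
```

Calling `run(2)` on this sketch records 2 runs for each of the 3 argument permutations, i.e. 6 measurements.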