Add functionality to run autocannon forever. #94

Merged · 12 commits · Nov 11, 2016
10 changes: 8 additions & 2 deletions README.md
@@ -63,7 +63,7 @@ Available options:
-t/--timeout NUM
The number of seconds before timing out and resetting a connection. default: 10
-T/--title TITLE
The title to place in the results for identifcation.
The title to place in the results for identification.
-b/--body BODY
The body of the request.
-i/--input FILE
@@ -92,6 +92,8 @@ Available options:
Print all the latency data. default: false.
-j/--json
Print the output as newline delimited json. This will cause the progress bar and results not to be rendered. default: false.
-f/--forever
Run the benchmark forever. Efficiently restarts the benchmark on completion. default: false.
-v/--version
Print the version number.
-h/--help
@@ -138,7 +140,8 @@ Start autocannon against the given target.
* `overallRate`: A `Number` stating the rate of requests to make per second from all connections. `connectionRate` takes precedence if both are set. No rate limiting by default. _OPTIONAL_
* `reconnectRate`: A `Number` which makes the individual connections disconnect and reconnect to the server whenever it has sent that number of requests. _OPTIONAL_
* `requests`: An `Array` of `Object`s which represents the sequence of requests to make while benchmarking. Can be used in conjunction with the `body`, `headers` and `method` params above. The `Object`s in this array can have `body`, `headers`, `method`, or `path` attributes, which overwrite those that are passed in this `opts` object. Therefore, the ones in this (`opts`) object take precedence and should be viewed as defaults. Check the samples folder for an example of how this might be used. _OPTIONAL_.
* `cb`: The callback which is called on completion of the benchmark. Takes the following params. _OPTIONAL_.
* `forever`: A `Boolean` which allows you to set up an instance of autocannon that restarts indefinitely after emitting results with the `done` event. Useful for efficiently restarting your instance. To stop running forever, you must cause a `SIGINT` or call the `.stop()` function on your instance. _OPTIONAL_ default: `false`
* `cb`: The callback which is called on completion of a benchmark. Takes the following params. _OPTIONAL_.
* `err`: If there was an error encountered with the run.
* `results`: The results of the run.

@@ -182,13 +185,16 @@ Check out [this example](./samples/track-run.js) to see it in use, as well.

Because an autocannon instance is an `EventEmitter`, it emits several events. These are listed below:

* `start`: Emitted once everything has been set up in your autocannon instance and it has started. Useful when running the instance forever.
* `tick`: Emitted every second this autocannon is running a benchmark. Useful for displaying stats, etc. Used by the `track` function.
* `done`: Emitted when the autocannon finishes a benchmark. Passes the `results` as an argument to the callback.
* `response`: Emitted when autocannon's http-client gets an http response from the server. This passes the following arguments to the callback:
* `client`: The `http-client` itself. Can be used to modify the headers and body the client will send to the server. API below.
* `statusCode`: The http status code of the response.
* `resBytes`: The response byte length.
* `responseTime`: The time taken to get a response after initiating the request.
* `reqError`: Emitted in the case of a request error e.g. a timeout.
* `error`: Emitted if there is an error during the setup phase of autocannon.

### `Client` API

16 changes: 11 additions & 5 deletions autocannon.js
@@ -14,7 +14,7 @@ module.exports.track = track

function start () {
const argv = minimist(process.argv.slice(2), {
boolean: ['json', 'n', 'help', 'renderLatencyTable', 'renderProgressBar'],
boolean: ['json', 'n', 'help', 'renderLatencyTable', 'renderProgressBar', 'forever'],
alias: {
connections: 'c',
pipelining: 'p',
@@ -36,6 +36,7 @@ function start () {
renderProgressBar: 'progress',
title: 'T',
version: 'v',
forever: 'f',
help: 'h'
},
default: {
@@ -47,6 +48,7 @@
renderLatencyTable: false,
renderProgressBar: true,
json: false,
forever: false,
method: 'GET'
}
})
@@ -86,16 +88,20 @@ function start () {
}, {})
}

const tracker = run(argv, (err, result) => {
if (err) {
throw err
}
const tracker = run(argv)

tracker.on('done', (result) => {
if (argv.json) {
console.log(JSON.stringify(result))
}
})

tracker.on('error', (err) => {
if (err) {
throw err
}
})

// if not rendering json, or if std isn't a tty, track progress
if (!argv.json || !process.stdout.isTTY) track(tracker, argv)

2 changes: 2 additions & 0 deletions help.txt
@@ -46,6 +46,8 @@ Available options:
Print all the latency data. default: false.
-j/--json
Print the output as newline delimited json. This will cause the progress bar and results not to be rendered. default: false.
-f/--forever
Run the benchmark forever. Efficiently restarts the benchmark on completion. default: false.
-v/--version
Print the version number.
-h/--help
62 changes: 33 additions & 29 deletions lib/progressTracker.js
@@ -24,26 +24,45 @@ function track (instance, opts) {
opts = xtend(defaults, opts)

const chalk = new Chalk.constructor({ enabled: testColorSupport({ stream: opts.outputStream }) })

// this default needs to be set after chalk is setup, because chalk is now local to this func
opts.progressBarString = opts.progressBarString || `${chalk.green('running')} [:bar] :percent`

const iOpts = instance.opts
if (opts.renderProgressBar) {
let msg = `${iOpts.connections} connections`
let durationProgressBar
let amountProgressBar

if (iOpts.pipelining > 1) {
msg += ` with ${iOpts.pipelining} pipelining factor`
}
instance.on('start', () => {
if (opts.renderProgressBar) {
let msg = `${iOpts.connections} connections`

if (iOpts.pipelining > 1) {
msg += ` with ${iOpts.pipelining} pipelining factor`
}

if (!iOpts.amount) {
logToStream(`Running ${iOpts.duration}s test @ ${iOpts.url}\n${msg}\n`)
if (!iOpts.amount) {
logToStream(`Running ${iOpts.duration}s test @ ${iOpts.url}\n${msg}\n`)

trackDuration(instance, opts, iOpts)
} else {
logToStream(`Running ${iOpts.amount} requests test @ ${iOpts.url}\n${msg}\n`)
durationProgressBar = trackDuration(instance, opts, iOpts)
} else {
logToStream(`Running ${iOpts.amount} requests test @ ${iOpts.url}\n${msg}\n`)

amountProgressBar = trackAmount(instance, opts, iOpts)
}
}
})

trackAmount(instance, opts, iOpts)
// add listeners for progress bar to instance here so they aren't
// added on restarting, causing listener leaks
if (opts.renderProgressBar && opts.outputStream.isTTY) {
if (!iOpts.amount) { // duration progress bar
instance.on('tick', () => { durationProgressBar.tick() })
instance.on('done', () => { durationProgressBar.tick(iOpts.duration - 1) })
process.once('SIGINT', () => { durationProgressBar.tick(iOpts.duration - 1) })
} else { // amount progress bar
instance.on('response', () => { amountProgressBar.tick() })
instance.on('reqError', () => { amountProgressBar.tick() })
instance.on('done', () => { amountProgressBar.tick(iOpts.amount - 1) })
process.once('SIGINT', () => { amountProgressBar.tick(iOpts.amount - 1) })
Review discussion on this block:

**Owner:** can we factor this out into a function?

**Collaborator (author):** We could, but it should only be run once, so I left it in the scope of this function to make it clear to readers that this should only be run once, when the `track` function is run.

**Owner:** I mean, the code above for both `amountProgressBar` and `durationProgressBar` looks very similar. We should probably factor it out into its own function, ideally calling the same function with different parameters, or something similar. (This is a nit of style, nothing more.)

**Collaborator (author):** I could curry the callbacks for the events with the relevant progress bar var & amount to tick by (`undefined` in the case of a raw `.tick()`). Thoughts?

**Owner:** That would be a nice improvement!

**Collaborator (author):** Okay, so I did the currying thing, and felt super happy with it, and then realised why I had them written like that when it crashed. I use the scope of the progressBar variables to instantiate new ones when restarting the benchmark, but use the same variable reference. By currying, I lose access to that scope-referencing closure benefit. I would like to leave this as is, because getting something "cleaner" working would end up adding much more code without necessarily adding any benefit. Is that okay with you?

**Owner:** Yeah, but put in a comment.

**Collaborator (author):** Comment in.

}
}

@@ -117,11 +136,7 @@ function trackDuration (instance, opts, iOpts) {
})

progressBar.tick(0)
instance.on('done', () => progressBar.tick(iOpts.duration - 1))
instance.on('tick', () => progressBar.tick())
process.once('SIGINT', () => {
progressBar.tick(iOpts.duration - 1)
})
return progressBar
}

function trackAmount (instance, opts, iOpts) {
@@ -138,18 +153,7 @@
})

progressBar.tick(0)

instance.on('done', () => progressBar.tick(iOpts.amount - 1))
instance.on('response', () => {
progressBar.tick()
})
instance.on('reqError', () => {
progressBar.tick()
})

process.once('SIGINT', () => {
progressBar.tick(iOpts.amount - 1)
})
return progressBar
}

function asRow (name, stat) {