Release 1.8.0 #1322
# Description

When invoking `auth:logout` without being authenticated, Kuzzle tries to clean the token from Redis (which obviously does not exist), and the API route responds with a success status. It doesn't seem sane to access Redis when we know full well that there is no token to clean.

We have two possibilities to prevent that:

1. do nothing if no token is provided, silently acquiescing
2. respond with an error

I believe that an explicit behavior is always better, so I opted for the second solution.
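To make the intended behavior concrete, here is a minimal, hypothetical sketch of such a guard; the names (`UnauthorizedError`, `request.context.token`, `expireTokenInCache`) are illustrative and not necessarily Kuzzle's actual internals:

```js
// Hypothetical sketch only: reject anonymous logout attempts instead of
// silently succeeding. Names below are illustrative, not Kuzzle's real code.
async function logout (request) {
  const token = request.context.token;

  if (!token || token.userId === null) {
    // explicit failure: the client was never authenticated in the first place
    throw new UnauthorizedError('auth:logout cannot be invoked by an anonymous user');
  }

  // only now is it worth hitting Redis to expire the token
  await expireTokenInCache(token);

  return { acknowledged: true };
}
```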
* [bugfix] scan keys on redis cluster
* sonarqube
* eslint
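For context, scanning keys on a Redis cluster differs from a single node because keys are sharded across the master nodes. A minimal sketch of the idea using the ioredis client (an assumption for illustration, not Kuzzle's actual code):

```js
// Sketch only: on a Redis cluster, SCAN must be run against every master
// node, since keys are distributed across them.
const Redis = require('ioredis');

async function scanAllKeys (cluster, pattern) {
  const keys = [];

  // cluster.nodes('master') returns one client per master node
  for (const node of cluster.nodes('master')) {
    let cursor = '0';

    do {
      const [next, found] = await node.scan(cursor, 'MATCH', pattern, 'COUNT', 1000);
      cursor = next;
      keys.push(...found);
    } while (cursor !== '0');
  }

  return keys;
}

// usage (addresses are illustrative):
// scanAllKeys(new Redis.Cluster([{ host: 'localhost', port: 7000 }]), 'repos/*')
```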
# Description

Context: access logs are written every time an API route is invoked, to keep a trace of Kuzzle's activity.

There are two problems with how access logs are currently handled:

1. If a connection drops before the end of an API request, the warning `No connection retrieved for connection id: <connection id>` is logged. This is a problem because, even if the connection has since dropped, the query was executed. This means that our access logs are incomplete.
2. Whenever we log messages such as the warning described above, or the ones printed when an authentication token cannot be decoded, this invokes `pluginsManager.trigger`, which creates a promise in the event loop. With enough activity, this can quickly clog up the event loop and endanger Kuzzle's stability.

This PR solves the 1st problem by simply using the provided `Request` object, which holds basic connection and token information, when the raw network connection object is not available (meaning that the connection no longer exists).

The second problem is solved by simply removing the usages of the `log:xxx` events, as those didn't provide valuable information at all.

:warning: access logging is a sensitive feature of Kuzzle, please be extra careful when reviewing this PR

# Other changes

When printing an access log, we now print `(anonymous)` instead of `-1` when a query is executed by an unauthenticated user.

# How to test

1. Activate access logs by setting `server.logs.transports.silent = false` in the kuzzlerc configuration file
2. Run a script that stresses Kuzzle to the point that it starts buffering requests to prevent overload
3. Shut down the script suddenly

Before this PR: many `No connection retrieved for connection id...` warnings are printed on the console.

With this PR: access logs are properly printed even on dead connections. Moreover, Kuzzle takes far less time to recover with this PR than without, because fewer promises are put on the event loop.

Script that you can use to test this PR:

```js
const WebSocket = require('uws');

const nbjobs = 10000,
  ws = new WebSocket('ws://172.18.0.5:7512', {perMessageDeflate: false}),
  payload = JSON.stringify({controller: 'server', action: 'info'});

ws.on('open', () => {
  let done = 0;
  const start = Date.now();

  ws.on('message', data => {
    done++;

    if (done === nbjobs) {
      const elapsed = (Date.now() - start) / 1000;
      console.log(`Elapsed: ${elapsed}s, Throughput: ${Math.round(nbjobs/elapsed)} requests/s`);
      ws.close();
    }
  });

  for (let i = 0; i < nbjobs; i++) {
    ws.send(payload);
  }
});
```
This PR prepares Kuzzle to run functional tests under cluster mode:

* adds a new cucumber profile that excludes redis tests, which are incompatible with redis cluster mode
* fixes a little bug with user unsubscription notifications not being sent/received in cluster mode
## What does this PR do ?

This PR fixes the case where a bulk operation would fail on an Elasticsearch index alias.

### How should this be manually tested?

```bash
#!/bin/bash
set -x

es_container=kuz_elasticsearch_1
es=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$es_container"):9200
kuzzle="localhost:7512"

curl -XPOST $kuzzle/index/_create
curl -XPUT $kuzzle/index/collection
curl -d '{"actions": {"add": {"index": "index", "alias": "alias"}}}' $es/_aliases
curl -H "Content-type: application/json" -d '{
  "bulkData": [
    {"index": {"_type": "collection", "_index": "alias"}},
    {"foo": "bar"}
  ]
}' $kuzzle/_bulk
```
# Description

PING/PONG packets are described in the WebSocket specification, and are used to:

* efficiently detect and clean up dead sockets (a socket can stay open indefinitely if the client couldn't send a socket-closed packet)
* keep idle sockets alive

This PR:

* adds a new `heartbeat` configuration under the `protocols.websocket` part of the .kuzzlerc file
* adds a heartbeat to the WebSocket protocol, sending PING packets to client sockets and marking them as "dead" until a PONG response is received
* destroys and cleans up sockets that stayed dead after the heartbeat delay

# Other change

Deduplicated the test made on the `enabled` part of a protocol configuration: the test was made in each protocol implementation, AND in the global entrypoint object during protocols initialization.

# How to test

An easy way to test this is to use `wscat`:

* (if not already done) install it with `npm install -g wscat`
* connect it to your local Kuzzle stack: `wscat --connect http://localhost:7512`
* suspend it with a SIGSTOP signal (e.g. Ctrl+Z on the terminal)

Without this PR: you can go have lunch, return to your terminal and wake wscat up (with `fg`), and you can use it again as if nothing happened, even though the socket should have been marked as dead by Kuzzle, since it can never respond to any request.

With this PR: if you wait at least 2 minutes (the default), and wake wscat up with `fg`, it will exit immediately with the following message: `disconnected` (you need to wait for 2 heartbeats before the socket is disconnected: the 1st one marks the socket temporarily as "dead", and the second one cleans up the dead socket).

```
$ wscat --connect http://localhost:7512
connected (press CTRL+C to quit)
>
[1]  + 24777 suspended  wscat --connect http://localhost:7512
$ fg
[1]  + 24777 continued  wscat --connect http://localhost:7512
disconnected
$
```

You can follow what happens with this test using debug messages (set the `DEBUG` environment variable to `kuzzle:entry-point:protocols:websocket` before starting Kuzzle):

```
kuzzle:entry-point:protocols:websocket [090cacd2-8ca5-49b1-bf8a-8b9b37b5882d] creating Websocket connection +4s
kuzzle:entry-point:protocols:websocket [WebSocket] Heartbeat +56s
kuzzle:entry-point:protocols:websocket [WebSocket] Heartbeat +60s
kuzzle:entry-point:protocols:websocket [090cacd2-8ca5-49b1-bf8a-8b9b37b5882d] received a `close` event +2ms
kuzzle:entry-point:protocols:websocket [090cacd2-8ca5-49b1-bf8a-8b9b37b5882d] onClientDisconnection +1ms
```

These logs show that the 2nd heartbeat after the wscat connection was suspended correctly force-closes the socket, since it couldn't respond to the PING packet in time.
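For reference, the heartbeat pattern described above is commonly implemented like this; this is a simplified sketch using the `ws` library with assumed names and delays, not Kuzzle's actual protocol code:

```js
// Sketch of a WebSocket heartbeat: PING every socket, mark it "dead" until a
// PONG arrives, terminate sockets still dead at the next heartbeat.
const WebSocket = require('ws');

const heartbeatDelay = 60000; // illustrative value for a `heartbeat`-style option
const server = new WebSocket.Server({ port: 7512 });

server.on('connection', socket => {
  socket.isAlive = true;
  socket.on('pong', () => { socket.isAlive = true; });
});

setInterval(() => {
  for (const socket of server.clients) {
    if (!socket.isAlive) {
      // no PONG received since the last heartbeat: clean up the dead socket
      socket.terminate();
      continue;
    }

    socket.isAlive = false; // considered "dead" until a PONG is received
    socket.ping();
  }
}, heartbeatDelay);
```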
## What does this PR do ?

Edit the issue template to encourage users to use Stack Overflow for questions. Also add a more explicit context description.

If this PR is merged, I will use the same template for all our repositories.
# Description

Attempting a bulk update without a `doc` property results in the following error being returned to the client: `Cannot set property '_kuzzle_info' of undefined`

That error message is misleading, whether the bulk update is actually invalid or is a legitimate script update. This PR fixes that by checking that the `doc` property exists before trying to inject metadata.

# Other change

Inject metadata in the `upsert` property of a bulk update, if there is one.

# How to test

Send the following body to a bulk import API request:

```js
{
  "bulkData": [
    {"update": {"_id": "foobar"}},
    {"foo": "qux"}
  ]
}
```
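A minimal sketch of the kind of guard this PR describes (function and parameter names are illustrative, not the actual implementation):

```js
// Illustrative only: inject Kuzzle metadata into a bulk update action
// only when the corresponding properties actually exist.
function injectMetadata (action, metadata) {
  if (action.doc) {
    // partial document update: safe to attach metadata
    action.doc._kuzzle_info = metadata;
  }

  if (action.upsert) {
    // also cover the optional upsert document, as described above
    action.upsert._kuzzle_info = metadata;
  }
}
```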
## What does this PR do ?

Change the behavior of `--help` on subcommands such as `createFirstAdmin` -> all available options are now correctly listed.

### How should this be manually tested?

`./kuzzle createFirstAdmin --help`

### Other changes

Add a `usage` line to `./kuzzle --help`
Adds the ability to define the `dynamic` property for collection mappings. This property can take 3 values:

- "true": stores the document and updates the collection mapping with the inferred type
- "false": stores the document but does not update the collection mapping (fields are not indexed)
- "strict": rejects the document

See https://www.elastic.co/guide/en/elasticsearch/guide/current/dynamic-mapping.html

This is configurable in the `kuzzlerc` file within the `server.db.dynamic` property. The default value is `true`, which allows the user to push arbitrary documents to Kuzzle; Elasticsearch will then try to infer the corresponding mapping. (This is the current behaviour in Kuzzle.) A hedged mapping example is sketched after this description.

Admin-Console (wip): kuzzleio/kuzzle-admin-console#502
Documentation: kuzzleio/documentation#271

## Other changes

Fixed the usage of the `_meta` property for ES collections. This property must be passed to each call to `putMapping`. See #1268

chmod 777 on `node_modules/` because I'm tired of doing `sudo chmod +x node_modules/.bin/*` to run tests locally
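As mentioned above, a hedged example of what a collection mapping using the new `dynamic` property could look like (field names are illustrative):

```js
// Illustrative mapping only: with "strict", a document containing any field
// not declared under "properties" is rejected instead of silently indexed.
{
  "dynamic": "strict",
  "properties": {
    "title": { "type": "keyword" },
    "body":  { "type": "text" }
  }
}
```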
This PR makes Kuzzle throw an error when the realtime controller is invoked from within a plugin.

The realtime controller can be accessed by plugin developers with:

* the `execute` method
* the realtime controller exposed in the SDK object
* the `query` low-level method exposed in the SDK object

The realtime controller cannot work for plugin developers: it needs a physical network connection so that it can send notifications to subscribers.

To prevent questions and hard-to-debug errors, we need to throw a PluginImplementationError exception whenever the realtime controller is invoked by a plugin developer using one of the 3 ways listed above. The error message must make it clear why an exception has been thrown, explaining that the realtime controller cannot be used by plugins.
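A minimal sketch of such a check, assuming a `PluginImplementationError` class and a `request.input.controller` field (illustrative, not the actual code):

```js
// Sketch only: reject any request targeting the realtime controller when it
// originates from a plugin's execute/sdk/query accessors.
function assertNotRealtime (request) {
  if (request.input.controller === 'realtime') {
    throw new PluginImplementationError(
      'The realtime controller cannot be invoked by plugins: realtime ' +
      'subscriptions require a physical network connection to notify clients');
  }
}
```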
This PR fixes the case where Kuzzle would time out without an error when/if a plugin controller action returns a non-serializable response. This is notably the case if a controller action returns the original request.
…#1300)

Validate specifications:

* add new endpoint `http://kuzzle:7512/<index>/<collection>/_validateSpecifications`
* deprecate the previous endpoint `http://kuzzle:7512/_validateSpecifications`

Update specifications:

* add new endpoint `http://kuzzle:7512/<index>/<collection>/_specifications`
* deprecate the previous endpoint `http://kuzzle:7512/_specifications`
Update our Travis configuration to launch heavy and time-consuming tests nightly using Travis CRON.
Even when a protocol is disabled in its configuration, all protocols are loaded into memory. The footprint is small, but I ran into a bug with realtime notifications where Kuzzle tried to send notifications with the `dispatch` method even though the protocol was disabled. Now the protocols are initialized and, if they are disabled, Kuzzle does not keep them in memory.
Pipes & hooks can now be registered on wildcarded events, like these:

* 'foo:*'
* 'foo:after*'
* 'foo:before*'
* 'foo:error*'
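A hedged example of what this allows in a plugin declaration (event and handler names are illustrative):

```js
// Illustrative plugin skeleton: a single hook declaration now matches a whole
// family of events thanks to the wildcard support described above.
class MyPlugin {
  init (customConfig, context) {
    this.context = context;

    this.hooks = {
      'document:*': 'logDocumentEvents',   // every document event
      'document:after*': 'logAfterEvents'  // only the "after" ones
    };
  }

  logDocumentEvents (payload, event) {
    console.log(`document event received: ${event}`);
  }

  logAfterEvents (payload, event) {
    console.log(`after-event received: ${event}`);
  }
}

module.exports = MyPlugin;
```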
This PR adds two methods to the bulk controller that behave like `createOrReplace` and `mCreateOrReplace`:

* `write`: writes a document without adding metadata or performing data validation
* `mWrite`: writes several documents without adding metadata or performing data validation

The names are open to debate; I have also thought of `put` and `mPut`.

The goal is to have a low-level method that allows writing arbitrary documents into the storage layer, bypassing automatic metadata and the data validation layer. It's better to have these two endpoints than an option like `bypassMeta` on existing methods, because developers can choose not to expose these methods to users.
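A hedged example of what a request to the new `bulk:write` action might look like (index/collection names and payload shape are illustrative, not a definitive API reference):

```js
// Illustrative payload only: the document body is written as-is, with no
// metadata injection and no validation, per the description above.
{
  "controller": "bulk",
  "action": "write",
  "index": "myindex",
  "collection": "mycollection",
  "body": {
    "name": "foo"
  }
}
```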
# Description

Do not be afraid by the seemingly large number of changed lines: this PR is simpler than it appears. All it does is this:

1. it lets the `kuzzle` object, as an event emitter, trigger events itself (separation of concerns)
2. instead of only 1 `pluginsManager.trigger` method working for both hooks (simple event emissions) and pipes (awaited events returning promises, listened to by plugins), we now have the standard `kuzzle.emit` method (synchronous, events are non-blocking), and the new `kuzzle.pipe` one (async, promise-based, awaited by Kuzzle)

So, a little bit of code... and a lot of `kuzzle.pluginsManager.trigger` calls changed into either `kuzzle.emit` or `kuzzle.pipe` (a simplified illustration follows at the end of this description).

Expected benefits:

1. we have a few promise leaks and I believe that this will help me fix those (this is a prerequisite, not a fix in itself)
2. before this change, events that we documented as "hooks only" could actually be listened to by plugins as pipes (but not awaited, which can be confusing for users)
3. better separation of concerns is always better: before, it was a bit confusing since we had `kuzzle` as an event emitter, but with events triggered only by using the `pluginsManager` object

# Other changes

* removed the `validation:error` hook: it isn't documented, and I couldn't find a use for it

# How to review

I suggest that you concentrate on reviewing the modifications made to the `lib/kuzzle.js` and `lib/api/core/plugins/pluginsManager.js` files. Apart from unit test updates, all other changes are anecdotal (but it's up to you, obviously).
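As mentioned above, a simplified illustration of the emit/pipe distinction this PR introduces (event names are examples only, not necessarily real Kuzzle events):

```js
// Hook-style event: fire-and-forget, synchronous emission; listeners cannot
// block Kuzzle or alter the payload.
kuzzle.emit('some:hookEvent', 'informational payload');

// Pipe-style event: awaited and promise-based; plugins registered on the pipe
// may modify the payload before Kuzzle continues processing it.
kuzzle.pipe('some:pipeEvent', request)
  .then(updatedRequest => {
    // continue with the (possibly modified) request
  });
```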
# Context

First, some context that will help understand some of the changes made in this PR: the following explains why I went back to using `ws` instead of porting Kuzzle to [µWebSockets.js](https://github.com/uNetworking/uWebSockets.js).

Kuzzle currently depends on the obsolete `uws` module. Initially I worked on replacing this module with its newer version (µWebSockets.js), but I discovered, once well into the massive refactor it required (and after reading a lot of code because of the poor documentation), that despite its claims, this module does not fully implement the RFC. Most notably, [it doesn't allow a server to send PING requests](uNetworking/uWebSockets#819) and, as always with this library's author, you have the choice of silently coping with the library's flaws, or being insulted if you try to ask about the feature or, god forbid, if you are crazy enough to submit a PR.

So, I decided that continuing to use that author's libraries was a waste of time and even dangerous: I discovered the unimplemented PING/PONG, but there might be other unimplemented RFC features that we might discover later, and we don't need any more bad surprises. Sure, that library's performance is insane, but we have already spent a lot of time on maintenance because of it.

That said, there were a couple of good ideas I picked up while refactoring the WebSocket layer that I decided to keep. So, you might see what seem to be "unrelated" changes that I _had_ to do for µWebSockets, and since the code was there, I kept it.

# Description

Because `uws` is obsolete and doesn't work on newer Node.js versions (it seems to work with Node.js 10 now, but not with Node.js 12), we needed to use another library and, for the reasons explained above, it was decided to go back to the reliable and fully RFC-compliant [ws](https://www.npmjs.com/package/ws).

Changes:

* Kuzzle now keeps track of activity for each open socket, to be able to detect stale or idle sockets when PING/PONG is disabled, which is a sensible choice for high numbers of client connections (e.g. to terminate idle sockets even if the client is still alive). This can be configured using the new `idleTimeout` parameter in the kuzzlerc file
* WebSocket's `broadcast` method, used to send real-time notifications, has been massively optimized: since broadcasting notifications means sending very similar messages to a potentially very large number of sockets, that method now precomputes an RFC 6455 frame and accesses the low-level socket layer directly to send it, instead of asking ws to compute a new frame for each target client socket. Benchmark results show 60k+ notifications/second sent to 10k registered sockets, on a mere laptop
* The WebSocket entry point now uses Set and Map instead of raw objects to keep track of connections and real-time subscriptions: this change has been made because Set/Map are much better suited (performance-wise) to holding data with constantly occurring additions/removals
* Checks made during startup to verify user configuration are now made using `assert`, as this makes much more sense since they are supposed to make Kuzzle stop

# Other changes

* the HTTP entry point has been refactored: *no changes were made to the logic*. Initially, a refactor was required because µWebSockets opens its own TCP socket and manages its own HTTP routing, so I had to adapt that entry point accordingly and, along the way, I cleaned it up, fixed the outdated/erroneous JSDoc, and made other changes to make it much more readable and maintainable.
I reverted the modifications needed for µWebSockets, obviously, but I decided to keep the refactor: I just didn't want to throw my work out the window.
# Description

The `.kuzzlerc.sample` file at the root of the kuzzle directory is a working configuration file. Or at least, it should be: it sometimes happens to be malformed, which can be frustrating for users.

To prevent that, this PR adds a unit test checking that the sample configuration file is well formed. It loads the JSON file exactly as [rc](https://www.npmjs.com/package/rc) does, by using [strip-json-comments](https://www.npmjs.com/package/strip-json-comments) on it.

There is no label on this PR on purpose: I think that this PR should go in the "Others" chapter of the changelog.

# Other changes

* Wrapped the content of the kuzzlerc.sample file to 80 characters to make it more readable in terminals.
* Fixed outdated URLs
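A minimal sketch of such a test, assuming a mocha-style test runner and the strip-json-comments package mentioned above (file path and test wording are illustrative):

```js
// Sketch only: load .kuzzlerc.sample the same way `rc` does, and fail the
// test if the sample file is not valid (comment-stripped) JSON.
const fs = require('fs');
const stripJsonComments = require('strip-json-comments');

describe('.kuzzlerc.sample', () => {
  it('should be a well-formed JSON file once comments are stripped', () => {
    const raw = fs.readFileSync('.kuzzlerc.sample', 'utf8');

    // throws (and fails the test) if the sample file is malformed
    JSON.parse(stripJsonComments(raw));
  });
});
```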
The functions exposed by the notifier core generate many promises that are not awaited and whose rejections might not be handled, which is bad practice.
Remove `_id` from the URL for the second write route
## What does this PR do ?

Fix a notifier issue introduced by #1311: the Kuzzle notifier is instantiated before the services are, so the `notifier.cache` and `storage` properties were set to `undefined`.

### How should this be manually tested?

```bash
features/run.sh
```
## What does this PR do ?

pm2 has an optional [dependency on the `ps` tool](https://github.com/Unitech/pm2/blob/64c1ac82eb27774d1ac9612633fd10b03bb8b0c4/lib/TreeKill.js#L33). When pm2 tries to restart a process, for instance when a watched file changes, we get an exception in the log trace (though it still manages to restart the service):

```
{ Error: spawn ps ENOENT
    at _errnoException (util.js:992:11)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:190:19)
    at onErrorNT (internal/child_process.js:372:16)
    at _combinedTickCallback (internal/process/next_tick.js:138:11)
    at process._tickDomainCallback (internal/process/next_tick.js:218:9)
  code: 'ENOENT',
  errno: 'ENOENT',
  syscall: 'spawn ps',
  path: 'ps',
  spawnargs: [ '-o', 'pid', '--no-headers', '--ppid', 92 ] }
```

### How should this be manually tested?

1. Build the image locally

```bash
docker build -t kuzzleio/plugin-dev:latest --target plugin-dev .
```

2. Check that the plugin boilerplate does not throw errors anymore on file change detection.
## What does this PR do ?

Add a configurable maximum TTL for the login.

Internal ref: https://jira.kaliop.net/browse/KZL-1126
Add an integrated Vault to securely store secrets (API key, credentials, ...) in project repositories.
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #1322      +/-   ##
==========================================
- Coverage   93.84%   93.56%   -0.28%
==========================================
  Files          98       99       +1
  Lines        6853     7059     +206
==========================================
+ Hits         6431     6605     +174
- Misses        422      454      +32
==========================================
```
SonarQube analysis reported 1 issue. Note: the issue was found on lines that were not modified in the pull request, so it could not be reported as a line comment.
# 1.8.0 (2019-06-14)

## Bug fixes

## New features

## Enhancements

## Others