[server] Expose HTTP service to plugins #12464
An example of how to avoid using request-scoped services is in a temporary branch, here made into a PR only for ease of reviewing. #14482 (also updated in main description)

All the must-haves are checked off.

@azasypkin Archana sorted out the must-haves on this, but I've assigned this to you so you can draw a line somewhere on the initial HTTP service that you're shipping with the new platform in #12660. Rather than keep around some generic mega issue, if/when you're satisfied with the initial implementation, create individual issues for any followup stuff you think we should do.
We'd like a way to limit when systems can access the es client and saved objects client. One way to do this is via the http service. The `elasticsearch` module could do this (just writing from memory...):

```js
start(core, plugins) {
  const { http } = core;
  http.registerCapability('elasticsearch', {
    getScopedDataClient: (request, ...args) => {
      // some logic that returns the right scoped data client
    },
    // maybe this can't happen:
    getAdminClient: (...args) => {
    },
    // more things??
  })
}
```

The `savedobjects` module could do a similar thing, and then some dependency later can do this:

```js
// registering an endpoint
router.get/post(
  {validation object},
  handler(req, res, coreCapabilities) {
    // whoa, check it! es service already scoped to the request
    const { elasticsearch } = coreCapabilities;
    const esClient = elasticsearch.getScopedDataClient();
    // maybe getAdminClient, since it's not scoped, is available outside the handler
  },
)
```

We can work on solving exactly how admin/data clients are exposed, but basically I'm thinking the http service would let es and saved objects register stuff that is only available in a handler function. They themselves would define how the binding to ...
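The capability idea in the comment above can be sketched end-to-end. This is a minimal illustration only; the names (`registerCapability`, `HttpService`, the request shape) come from the comment or are assumptions, not real Kibana APIs:

```typescript
// Sketch of the proposal: core modules register capability factories with the
// http service, and the http service binds each factory to the incoming
// request before invoking a route handler. The handler then receives
// already-scoped services and never needs the raw request for scoping.

interface Request {
  headers: Record<string, string>;
}

type Handler = (req: Request, capabilities: Record<string, unknown>) => unknown;

class HttpService {
  private capabilities = new Map<string, (req: Request) => unknown>();

  // e.g. the elasticsearch module registers how to build a request-scoped client
  registerCapability(name: string, factory: (req: Request) => unknown) {
    this.capabilities.set(name, factory);
  }

  // scope every registered capability to the request, then call the handler
  handle(req: Request, handler: Handler): unknown {
    const scoped: Record<string, unknown> = {};
    for (const [name, factory] of this.capabilities) {
      scoped[name] = factory(req);
    }
    return handler(req, scoped);
  }
}

// usage: an "elasticsearch" capability whose data client is scoped per request
const http = new HttpService();
http.registerCapability('elasticsearch', (req) => ({
  getScopedDataClient: () => `client-for-${req.headers.authorization}`,
}));

const result = http.handle({ headers: { authorization: 'user-1' } }, (_req, caps) => {
  const es = caps.elasticsearch as { getScopedDataClient(): string };
  return es.getScopedDataClient();
});
```

The design point is that the scoped client exists only inside the handler invocation; nothing outside `handle` ever sees it.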
Thanks for sharing your ideas @archanid! One thought though: registering the elasticsearch client through a generic ...
@azasypkin Hmm... Actually this is a good thing. Maybe we have to be explicit about what capabilities can be registered. We do already know what they are going to be, among our ... Hm. The more I think about it, I don't see a clear reason to keep it generic.
API Availability: 7.4 / 7.6-7.9 / Questionable
Summary
The current http server in the new platform is purely exploration at this stage (e.g. only `get` is implemented so far). In addition to how we handle `get`, `post`, `put` and `delete`, there are some things we need to think about when designing the http server.

First of all, this is not a "generic" http server — we wrap an existing http server to build something specific for Kibana. One of the reasons we wrap an existing http server is that we want to be able to upgrade the http server at any time, which means we need to control the apis. This also means we can expose a subset of functionality that fits with Kibana. We can also make Kibana-specific adjustments that simplify (and clarify) the normal patterns we see in plugins.
Below I've described some of the things I think we need to think about (and explore) in the new http server.
Request-scoped services
In current Kibana we use `callWithRequest` often. This means we pass around `request` to many places. To alleviate this we have started to "request-scope" certain services, e.g. `getUiSettingsService`, which depends on `getSavedObjectsClient`, which depends on the `request` to scope `callWithRequest`. This simplifies getting a `savedObjectsClient`, but it also adds some problems:

- `request` becomes available everywhere. This is precisely what we're trying to avoid with the new plugin system design, where we are very specific about what you receive when you depend on another plugin. I think it's essential that transitive dependencies don't provide any values to a plugin; only direct dependencies can expose values.
- `request` makes it more difficult to type.

I think we need to explore ideas that don't involve adding functions on the request like this. It feels like a leaky abstraction.
I don't have a clear api in mind here, so I think we need to explore ideas. Currently in the `new-platform` branch there is an `onRequest` method that can be specified on a plugin's router. It's "pure exploration" code at the current time, so don't treat it as more than that. I'd be happy to see that go away for a better/different abstraction.

I think the best way to solve this problem is to find out how we want to handle #12442, as it seems like `callWithRequest` is the only reason we scope services today.

An example of how to avoid using request-scoped services is in a temporary branch, here made into a PR only for ease of reviewing: #14482
Validate fields
In the current PoC code we've implemented validating url params, query params and request body, which looks like this:
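The original code example appears to have been lost from this page capture. Below is a minimal self-contained sketch of the behavior described (validating url params, query params and body, with objects that contain unspecified keys rejected). The `schema.object`/`schema.string` names are assumptions for illustration, not the real PoC implementation:

```typescript
// Minimal validator sketch: schema.string checks a single value, and
// schema.object validates an entire object, rejecting unknown keys so that
// "the entire object must validate, not just the specified keys".

type Validator<T> = (input: unknown) => T;

const schema = {
  string: (): Validator<string> => (input) => {
    if (typeof input !== 'string') throw new Error('expected string');
    return input;
  },
  object: (props: Record<string, Validator<unknown>>): Validator<Record<string, unknown>> =>
    (input) => {
      if (typeof input !== 'object' || input === null) {
        throw new Error('expected object');
      }
      const record = input as Record<string, unknown>;
      // reject any key that was not declared in the schema
      for (const key of Object.keys(record)) {
        if (!(key in props)) throw new Error(`unknown key: ${key}`);
      }
      const out: Record<string, unknown> = {};
      for (const key of Object.keys(props)) {
        out[key] = props[key](record[key]);
      }
      return out;
    },
};

// a route's validation config for url params and query params
const validate = {
  params: schema.object({ id: schema.string() }),
  query: schema.object({ pretty: schema.string() }),
};

const params = validate.params({ id: 'abc' }); // passes
let rejected = false;
try {
  validate.params({ id: 'abc', extra: 'nope' }); // unknown key: must fail
} catch {
  rejected = true;
}
```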
When adding validation, all fields must validate (aka the entire object must validate, not just the specified keys). There are a couple of nice benefits to doing this validation:

- `params` and `query` above will be typed.

In the current implementation we do not inject `schema`, but rely on it being injected further up. I think we should change this to inject `schema` directly into `validate` instead, so it could look like this:

Currently the new platform can perform validation on query params, url params and the body. Should we explore doing this for headers too? We can't validate all headers, of course, but maybe we can do it just for the specific headers you need?
Are there use-cases where this wouldn't work?
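The "inject `schema` directly into `validate`" idea mentioned above could take roughly this shape. This is a sketch under assumed names; the real api was still undecided at the time of writing:

```typescript
// Sketch: instead of the plugin importing `schema` itself, the route's
// `validate` field is a function, and the router injects its own schema
// instance when registering the route. The router, not the plugin, then owns
// the schema implementation and its version.

type Validator = (input: unknown) => unknown;

interface Schema {
  string: () => Validator;
}

const kbnSchema: Schema = {
  string: () => (input) => {
    if (typeof input !== 'string') throw new Error('expected string');
    return input;
  },
};

interface RouteConfig {
  path: string;
  // the http service calls this with its own schema instance
  validate: (schema: Schema) => Record<string, Validator>;
}

function registerRoute(config: RouteConfig) {
  const validators = config.validate(kbnSchema);
  // return a function that validates raw incoming params field by field
  return (rawParams: Record<string, unknown>) => {
    const out: Record<string, unknown> = {};
    for (const [field, validator] of Object.entries(validators)) {
      out[field] = validator(rawParams[field]);
    }
    return out;
  };
}

const handle = registerRoute({
  path: '/api/things/{id}',
  validate: (schema) => ({ id: schema.string() }),
});

const validated = handle({ id: '42' });
```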
Not provide the full request
In current Kibana the handler for each request receives the full hapi `request` and `response` objects. I think we should explore not giving the handler the full request object. Instead of giving request handlers the full request, I think we should ONLY give them the validated fields (described above).

Why go this route? It ensures we're strict about input, treats failures on the input the same way across Kibana endpoints, makes us depend less on these objects and all their features (e.g. mutating them and passing them down the stack), and it makes it easier for us to perform major changes to the underlying request and response objects (e.g. upgrading majors) without impacting the plugin apis.
In theory we could give access to the underlying request object through an "escape hatch", e.g.
Escape hatch
Let's explore this "escape hatch" idea. With the new platform we want to create a more "stable plugin contract" and ideally "not expose internals or dependencies to plugins". However, we sometimes have plugins that need very specific access to apis, which doesn't really fit "most plugins". Two examples are Security and Reporting.
Instead of building stable apis for all use-cases, I think we should consider having an "escape hatch" that gives you access to the underlying library/framework objects. A good example of this is the request example from above:
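The request example referenced here seems to have been lost from the page capture; a sketch of what it might look like follows. The method name `unstable_getRawRequest` and the object shapes are assumptions chosen to match the `unstable_` naming discussed below:

```typescript
// Sketch of the escape hatch: the handler normally sees only a small stable
// surface (validated params), but a clearly-marked unstable method exposes the
// underlying raw request for special cases like Security.

interface RawHapiRequest {
  raw: { req: unknown };
  headers: Record<string, string>;
}

interface KibanaRequest {
  // stable, validated surface
  params: Record<string, string>;
  // escape hatch: unstable access to the underlying request object
  unstable_getRawRequest(): RawHapiRequest;
}

function createKibanaRequest(
  raw: RawHapiRequest,
  params: Record<string, string>
): KibanaRequest {
  return {
    params,
    unstable_getRawRequest: () => raw,
  };
}

const raw: RawHapiRequest = {
  raw: { req: {} },
  headers: { authorization: 'Basic abc' },
};
const req = createKibanaRequest(raw, { id: '1' });

// most plugins only touch the stable surface...
const id = req.params.id;
// ...but e.g. Security can reach through to the raw headers when it must
const authHeader = req.unstable_getRawRequest().headers.authorization;
```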
Using `unstable_` in the name indicates that this is not a part of the "stable Kibana api", and that it can change at any time. We could even do a major upgrade in a patch release if we wanted to. It's up to plugin authors to keep up-to-date if they rely on these apis. But the nice thing is that it allows us a way of not solving for all use-cases immediately.
I don't think this is something that should be used often, but it definitely feels like there are use-cases for this kind of approach. You can basically say we handle 95% of use-cases with the stable apis, but for special cases (e.g. Security) you might sometimes need to get the raw object instead.
Middleware, request filter, response filter
For the new platform we need to decide how we can run things before and after a request. In the current platform we rely on both the hapi lifecycle for this and on being able to specify `pre` on a route.

These are all the things that rely on the hapi lifecycle in Kibana today:

- `kbn-name` and `kbn-version`
- stop health checks when shutting down (not relevant in the new platform)
- redirect from the http port to the https port on request (this is actually setting up another http server, so it's not as relevant; it can just be a pure node http server)

We also rely on `pre` to get the `IndexPatternsService` and the `SavedObjectsClient`, in addition to validating proxy routes for Console and for the basepath proxy.

If we find a solution for request-scoped services as described above, there are so few other things that depend on this right now that I think we can rely on an "escape hatch" for this stuff. That way we can postpone the api design of this until we have more relevant use-cases.
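One possible shape for such an escape hatch is an explicitly-unstable pre-request hook. The function name `unstable_registerOnRequest` and everything else here are hypothetical, sketched only to show the kind of surface being discussed:

```typescript
// Sketch of running things "before a request" via an escape hatch rather than
// a fully designed middleware api: registered hooks inspect the headers and
// may reject the request before the handler runs.

type OnRequest = (headers: Record<string, string>) => { ok: boolean; reason?: string };

const hooks: OnRequest[] = [];

// hypothetical escape-hatch registration, marked unstable like the apis above
function unstable_registerOnRequest(hook: OnRequest) {
  hooks.push(hook);
}

function handle(headers: Record<string, string>): string {
  for (const hook of hooks) {
    const result = hook(headers);
    if (!result.ok) return `rejected: ${result.reason}`;
  }
  return 'handled';
}

// e.g. a kbn-version check like the one the hapi lifecycle performs today
unstable_registerOnRequest((headers) =>
  headers['kbn-version'] === '7.0.0'
    ? { ok: true }
    : { ok: false, reason: 'version mismatch' }
);

const accepted = handle({ 'kbn-version': '7.0.0' });
const denied = handle({ 'kbn-version': '6.5.0' });
```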
This escape hatch needs to be exposed, so Security can rely on it. (TODO: Explore ideas of what this could look like)
(I'm definitely open to exploring ideas here too, but it feels like it's not the highest priority task we have for the http server.)
Format on error responses
(TODO: describe)
Is this defined in Kibana today? If so, can/should we use the same format in the new platform?
It feels like we should explore continuing to use `Boom` to signal errors in the app, but we might not want to make that dep available to plugins? Hm. Maybe we need to create our own `KibanaError` class with static helpers like `notFound`, `forbidden`, `badRequest` etc.

How to fail requests
(TODO: describe. Code fails somewhere far down, what does the flow look like?)
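The two TODOs above fit together; here is one possible sketch. `KibanaError` is the hypothetical class floated in the previous section, not an existing Kibana api, and the response shape is invented for illustration:

```typescript
// Sketch: code far down the stack fails by throwing a KibanaError created via
// a static helper; the http service catches it at the top of the request flow
// and formats a consistent error response.

class KibanaError extends Error {
  constructor(public readonly statusCode: number, message: string) {
    super(message);
  }
  static notFound = (msg = 'Not Found') => new KibanaError(404, msg);
  static forbidden = (msg = 'Forbidden') => new KibanaError(403, msg);
  static badRequest = (msg = 'Bad Request') => new KibanaError(400, msg);
}

// somewhere deep in a plugin or service, failure is just a throw
function loadThing(id: string): string {
  if (id !== 'known') throw KibanaError.notFound(`thing ${id} not found`);
  return `thing ${id}`;
}

// at the top, the http service translates errors into responses uniformly
function handle(id: string): { statusCode: number; body: string } {
  try {
    return { statusCode: 200, body: loadThing(id) };
  } catch (e) {
    if (e instanceof KibanaError) {
      return {
        statusCode: e.statusCode,
        body: JSON.stringify({ statusCode: e.statusCode, error: e.message }),
      };
    }
    // anything unrecognized becomes an opaque 500
    return {
      statusCode: 500,
      body: JSON.stringify({ statusCode: 500, error: 'Internal Error' }),
    };
  }
}

const ok = handle('known');
const missing = handle('other');
```

This keeps the "how to fail" answer the same everywhere: throw, and let one place decide the wire format.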
Serve static files
(TODO: describe)
Plugins must be able to expose static files. This should likely just be some predefined folder, with no option to configure the file path. (TODO: What does the current plugin system do here?)
(TODO: How does this work with webpack et al when plugins are built on their own? Do they need to know which path they are being served from?)