[System test runner] Add more service deployers #89
For reference: "Testing on Kubernetes", written by @ChrsMark: https://github.com/elastic/integrations/blob/master/testing/environments/kubernetes/README.md
We need to cover the following providers (correct me if I missed any of them).

Notes:

Other use cases:

Technical observations:

Questions:
@kaiyan-sheng @narph @ChrsMark Would you mind describing use cases for AWS, Azure, and Kubernetes here? I'm looking forward to seeing how these cloud/infra providers can be used for testing integrations.
Thanks for the ping @mtojek, I will try to provide a scenario, with inline comments/thoughts, that would cover our k8s needs.

Vanilla Kubernetes

Note 1: I think this scenario can be expanded to test other packages like OCP.
Note 2: This is only for testing the k8s module, but it should be quite similar for testing Autodiscover.
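A minimal sketch of what such a vanilla Kubernetes scenario could look like locally, assuming a throwaway kind cluster. The cluster name and the manifest path are illustrative, not part of any actual setup; the real steps live in the README linked above.

```shell
# Bring up a disposable Kubernetes cluster for the system test.
kind create cluster --name package-system-test

# Deploy the Elastic Agent (or Metricbeat) configured with the k8s
# integration; the manifest filename here is hypothetical.
kubectl apply -f elastic-agent-standalone.yaml

# Wait until the agent pod is ready before asserting on documents.
kubectl -n kube-system wait --for=condition=Ready pod \
  -l app=elastic-agent --timeout=120s

# Tear everything down after the test run.
kind delete cluster --name package-system-test
```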
For AWS testing, we can use a Terraform script (or anything similar) per data stream/package to create AWS services for testing, and clean up after testing. I think we have an AWS account for testing in the Beats Jenkins (@jsoriano knows more about this) and we can leverage it here.

For metrics: an example can be we can run

For logs: we have sample files to test the pipelines already, but it would be good to have Terraform set up S3-SQS to test the inputs.

There are two use cases here: one is to run this in CI, and the other one is for package developers to test locally. Because creating services can be cost-inefficient, we should consider how frequently we should run them.
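As a rough illustration of the per-data-stream Terraform idea, a sketch of what such a script could look like. The resource choice, AMI filter, and the `TEST_RUN_ID` variable are all assumptions for illustration, not the actual files that were later added to the AWS package.

```hcl
# Hypothetical main.tf for an ec2_metrics-style system test.
# TEST_RUN_ID would be injected by the test runner so resources
# can be identified and cleaned up per test run.
variable "TEST_RUN_ID" {
  default = "detached"
}

provider "aws" {}

data "aws_ami" "latest_amzn" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*"]
  }
}

resource "aws_instance" "test" {
  ami           = data.aws_ami.latest_amzn.id
  instance_type = "t1.micro"
  tags = {
    Name = "elastic-package-test-${var.TEST_RUN_ID}"
  }
}
```

Running `terraform apply` on a file like this before the test, and `terraform destroy` after it, would cover the create/cleanup cycle described above.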
With PR elastic/integrations#474, tests will be executed only if the relevant packages are changed (in this case the AWS integration) or if this is the master branch. Regarding
I'm going to work on this issue.
Thank you for all the feedback, folks! We had a sync-up with @ycombinator to discuss possible options. Here is a list of action items to help us solve this issue.

Dev changes in package-spec:
@ycombinator, I still have doubts about which path we should follow. If you have any preferences or see benefits in any of them, please feel free to share.

Changes in elastic-package:
Changes in integrations:
Thanks for the heads-up @mtojek! Feel free to reach out to me if you guys have any questions about the k8s specifics, since it can be tricky with the different components we collect from, unlike other clouds where we define a single exposed endpoint.
Great, thank you!
Thanks for the write-up and breakdown of tasks, @mtojek. Very helpful!
I recall discussing the first option (Allow for data-stream level
(I came to this point based on observing the Zeek integration.) I can elaborate on this. Imagine we have an integration XYZ with data streams A, B, C, ..., Z. Every data stream is basically the same Docker image with a Terraform executor and its own set of static .tf templates. The improvement is to use a single Docker image and simply mount (switch) templates for the data-stream test scenario. This way it will be faster than building a new Docker image per data stream.
I always assumed (but probably didn't make it explicit, sorry!) that there would be one shared/common TF executor Docker image that is used by the TF service deployer. The definition and maintenance of this image is the responsibility of

The part that varies is the TF templates, whether those come from the package level (

So I think we're on the same page?
I agree with the rest of your comment. Regarding the quoted paragraph: what is the best way of processing these TF templates (belonging to particular data streams)? Load them at runtime? Include them at build time (one image build per data stream)? (I think we're on the same page, just confirming the implementation details :)
There is also a third option: include all of them at image build time (so you are not building one image per data stream) and then select the right data stream's templates at runtime. At any rate, I don't know if there's an obvious answer to this one. I would suggest trying one of the options, probably the one you think is simplest to implement, seeing how well it performs, and then iterating from there as necessary.
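A tiny sketch of what the "bake everything, select at runtime" option could look like as the shared image's entrypoint. Everything here is an assumption for illustration: the `TF_DATA_STREAM` variable, the `/templates` layout (stood in for by `TEMPLATE_ROOT` so the sketch runs anywhere), and the data stream names.

```shell
# Sketch: shared Terraform-executor entrypoint that selects one data
# stream's templates at runtime instead of building one image each.
set -eu

# Demo stand-in for the template tree baked into the image at build time.
TEMPLATE_ROOT="${TEMPLATE_ROOT:-$(mktemp -d)}"
mkdir -p "${TEMPLATE_ROOT}/ec2_metrics" "${TEMPLATE_ROOT}/sqs_logs"

# The test runner would set TF_DATA_STREAM per test scenario.
TF_DATA_STREAM="${TF_DATA_STREAM:-ec2_metrics}"
TEMPLATE_DIR="${TEMPLATE_ROOT}/${TF_DATA_STREAM}"

if [ ! -d "${TEMPLATE_DIR}" ]; then
  echo "no templates for data stream: ${TF_DATA_STREAM}" >&2
  exit 1
fi

echo "selected templates: ${TEMPLATE_DIR}"
# The real image would continue with something like:
#   cd "${TEMPLATE_DIR}" && terraform init -input=false && terraform apply -auto-approve
```

The point of the sketch is only the selection step; building one image with all templates keeps the image build out of the per-data-stream test loop.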
+1 to implementing this as a generic, declarative Terraform-based runner 👍 Some comments in case they are helpful:
Thank you for sharing your thoughts, lots of tricky ideas ;) I like the idea of kops.
Honestly, I think we're not there yet. First, the Elastic Agent needs to support autodiscovery and the Kubernetes runtime. Then we can think about potential integrations.
@mtojek @ycombinator FYI, for k8s package testing I'm using some mock APIs so as to proceed until we reach a more permanent solution. You can find more at elastic/integrations#569. While working with these mocks I realize even more the need to run against an actual k8s cluster, and more specifically to have the Agent deployed on the cluster natively. Without this, many things we need, like k8s tokens, certs, etc., will not be valid.
This is super valuable information. @mtojek and I have informally discussed the idea that for some service deployers it might make sense to deploy the Agent "alongside" the service; your findings seem to point in the same direction, so this is very valuable feedback. Thank you!
@kaiyan-sheng The AWS integration can be tested now using the Terraform executor (sample here: https://github.com/elastic/integrations/tree/master/packages/aws/data_stream/ec2_metrics). @narph this feature is written in a generic way: if you pass secrets for Azure and write some TF code, it's expected to work.

EDIT: we just need to enable secrets on the Jenkins side, but it shouldn't be a big issue (unless we don't have them generated at all).
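For reference, running such a Terraform-backed system test locally would look roughly like this. Treat it as a sketch: the exact flags and required environment variables may differ between elastic-package versions.

```shell
# Credentials picked up by the Terraform AWS provider,
# e.g. from a dedicated test account.
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# From the package directory, run system tests for one data stream.
cd packages/aws
elastic-package test system --data-streams ec2_metrics -v
```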
Let me summarize. We've delivered (and applied in Integrations):
Follow-up to #64.
Currently the system test runner only supports the Docker Compose service deployer. That is, it can only test packages whose services can be spun up using Docker Compose. We should add more service deployers to enable system testing of packages such as:

- system (probably a no-op or minimal service deployer),
- aws (probably some way to pass connection parameters and credentials via environment variables and/or something that understands Terraform files),
- kubernetes.
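For context, elastic-package conventionally discovers a data stream's service deployer from a `_dev/deploy` directory inside the package. A rough sketch of that layout follows; the paths are illustrative, so check the elastic-package documentation for the authoritative structure.

```
packages/<package>/
  data_stream/<data_stream>/
    _dev/deploy/
      docker/docker-compose.yml   # Docker Compose service deployer
      tf/main.tf                  # Terraform service deployer
      k8s/                        # Kubernetes manifests
```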