Add pipeline launcher components for other distributed training jobs #3445
What do you think about having generic launcher components that receive a resolved, serialized TaskSpec (or container image + command line) and launch the given component? What do you think about syntax like this?

```python
MyLauncher = load_component(...)

with dsl.use_launcher(MyLauncher(num_workers=10)):
    launched_task = XGBoostTrainer(training_data=..., num_trees=500)
```

or

```python
MyLauncher = load_component(...)

launcher_for_train = MyLauncher(
    num_workers=10,
    task=XGBoostTrainer(training_data=..., num_trees=500),
)
```
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
/reopen
@Jeffwan: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Any update on this feature? I believe it would be great if Kubeflow Pipelines could provide a generic launcher that creates a CRD and manages its lifespan, e.g. for MPIJob, PyTorchJob, etc. This requirement can be partially satisfied by using a Katib Experiment; however, as far as I know, that approach has some clear drawbacks.
Thus, it is desirable to have a GenericLauncher in Kubeflow Pipelines, and an operator to manage the lifespan of the launcher pod and the created CRDs.
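For illustration, here is a minimal sketch of what such a generic launcher step could do with the official kubernetes Python client: create the training CR, poll it until the operator reports a terminal condition, and delete it so the job's lifespan is tied to the launcher. The PyTorchJob group/version, the manifest, and the condition names are assumptions about the training operators, not code from this repository.

```python
import time
from kubernetes import client, config

config.load_incluster_config()  # the launcher runs as a pod inside the cluster
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL, NAMESPACE = "kubeflow.org", "v1", "pytorchjobs", "kubeflow"

def replica_spec(replicas):
    # Minimal PyTorchJob replica spec; the image name is a placeholder.
    return {"replicas": replicas, "restartPolicy": "OnFailure",
            "template": {"spec": {"containers": [
                {"name": "pytorch", "image": "my-training-image:latest"}]}}}

job = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "PyTorchJob",
    "metadata": {"name": "generic-launcher-demo", "namespace": NAMESPACE},
    "spec": {"pytorchReplicaSpecs": {"Master": replica_spec(1),
                                     "Worker": replica_spec(2)}},
}

# Create the CR, then poll its status until it is marked Succeeded or Failed.
api.create_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, job)
while True:
    cr = api.get_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL,
                                          job["metadata"]["name"])
    conditions = cr.get("status", {}).get("conditions", [])
    terminal = [c["type"] for c in conditions
                if c.get("status") == "True" and c["type"] in ("Succeeded", "Failed")]
    if terminal:
        print("PyTorchJob finished with condition:", terminal[0])
        break
    time.sleep(10)

# Delete the CR afterwards so the training pods do not outlive the launcher step.
api.delete_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL,
                                    job["metadata"]["name"],
                                    body=client.V1DeleteOptions())
```

A production version would also need to propagate termination (for example via owner references or a cleanup handler), which is exactly the gap discussed further down in this thread.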
Hi, I could run distributed training using a PyTorchJob (created via ResourceOp), but this approach has a disadvantage: it does not show the logs in the pipeline UI; it only shows the logs of the job controller, not the worker containers. @ca-scribner please help continue the PR, thanks a lot.
@jalola Thanks for the info. Would you mind sharing an example of how to define a PyTorchJob with the help of ResourceOp? Thanks in advance.
@wangli1426 For the PyTorchJob: https://github.com/kubeflow/pytorch-operator/blob/master/examples/mnist/v1/pytorch_job_mnist_nccl.yaml Remember to set success_condition, for example:
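A minimal sketch, not the original snippet, of what such a ResourceOp definition could look like with the KFP v1 SDK; the abbreviated PyTorchJob manifest and the success/failure condition strings are assumptions to adapt to your operator version.

```python
from kfp import dsl

@dsl.pipeline(name="pytorchjob-via-resourceop")
def pytorchjob_pipeline():
    manifest = {
        "apiVersion": "kubeflow.org/v1",
        "kind": "PyTorchJob",
        "metadata": {"name": "mnist-dist"},
        "spec": {
            "pytorchReplicaSpecs": {
                "Master": {"replicas": 1, "restartPolicy": "OnFailure",
                           "template": {"spec": {"containers": [
                               {"name": "pytorch",
                                "image": "my-mnist-image:latest"}]}}},
                "Worker": {"replicas": 2, "restartPolicy": "OnFailure",
                           "template": {"spec": {"containers": [
                               {"name": "pytorch",
                                "image": "my-mnist-image:latest"}]}}},
            }
        },
    }

    # The conditions are illustrative; use whatever status fields your
    # PyTorchJob version actually reports (replica status counts, conditions, ...).
    dsl.ResourceOp(
        name="pytorchjob-launcher",
        k8s_resource=manifest,
        action="create",
        success_condition="status.replicaStatuses.Master.succeeded == 1",
        failure_condition="status.replicaStatuses.Master.failed > 0",
    )
```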
Hi @jalola. Just wondering, how can we stream all worker logs (when the number of workers > 1) into the pipeline log console? Or were you looking for just the logs of the chief? Do you have any idea in mind?
I only know they have the client SDK to get logs, but I don't know how to show the logs in a pipeline component.
Sorry, I let this slip from my mind and now I don't have a good way to test. The changes requested were minor, though, and the code in the PR is still working if that helps. Maybe you could finish it off.
…On Fri, Jun 25, 2021 at 04:10 Hung Nguyen ***@***.***> wrote:

> I only know they have the client SDK to get logs.
> Example: https://github.com/kubeflow/pytorch-operator/blob/4aeb6503162465766476519339d3285f75ffe03e/sdk/python/examples/kubeflow-pytorchjob-sdk.ipynb
> API: https://github.com/kubeflow/pytorch-operator/blob/master/sdk/python/docs/PyTorchJobClient.md#get_logs
> But I don't know how to show the logs to a component of the pipeline.
You could just print them.
I am using the k8s client API (Watch and read_namespaced_pod_log) to stream the logs from the training pods, and that works. @Ark-kun Another problem I've found when using launch_crd: in the Kubeflow Pipelines UI, if users "terminate" the pipeline run, only the training controller pod (which is the launch_crd launcher) is deleted; the distributed training pods keep running. What do you think? Could you give some advice? I may implement it in #5170.
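For reference, a minimal sketch of that log-streaming approach with the official kubernetes Python client; the pod name and namespace are placeholders, and in practice you would look the worker pods up via the job's labels.

```python
from kubernetes import client, config, watch

config.load_incluster_config()  # assumes the component runs inside the cluster
core_v1 = client.CoreV1Api()

# Stream a training pod's log lines and print them, so they appear in this
# component's own log view in the pipeline UI.
w = watch.Watch()
for line in w.stream(core_v1.read_namespaced_pod_log,
                     name="mnist-dist-worker-0",   # placeholder pod name
                     namespace="kubeflow",
                     follow=True):
    print(line)
```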
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi everyone, I'm quite interested in this as well. Is there any progress towards built-in support for distributed training jobs in pipelines?
Is this still on the roadmap?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
In order to leverage different training operators in Kubeflow Pipelines, it would be better to provide high-level launcher components as an abstraction for invoking training jobs. katib-launcher and launcher are launcher components for Katib and tf-operator (see https://github.com/kubeflow/pipelines/tree/master/components/kubeflow). We definitely need more similar components for PyTorch, MXNet, MPI, XGBoost, etc.
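For context, this is roughly how such a launcher component is consumed from a pipeline with the KFP v1 SDK; the component URL and its input names below are illustrative assumptions, so check the component.yaml files under components/kubeflow for the actual interface.

```python
import kfp
from kfp import dsl, components

# Load an existing launcher component (URL shown for illustration; see the
# components/kubeflow directory linked above for the real files and inputs).
launch_tfjob_op = components.load_component_from_url(
    "https://raw.githubusercontent.com/kubeflow/pipelines/master/"
    "components/kubeflow/launcher/component.yaml"
)

@dsl.pipeline(name="distributed-training-demo")
def distributed_training_pipeline():
    # A PyTorch/MXNet/MPI/XGBoost launcher component would be wired in the
    # same way once an equivalent component.yaml exists for each operator.
    launch_tfjob_op(
        name="mnist-train",        # input names are assumptions, not verified
        namespace="kubeflow",
    )

if __name__ == "__main__":
    kfp.compiler.Compiler().compile(distributed_training_pipeline,
                                    "distributed_training_demo.yaml")
```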