
is it possible to create my own Docker swarm scheduler? #2569

Open
ViniciusDeAndrade opened this issue Mar 23, 2018 · 9 comments

Comments

@ViniciusDeAndrade

Hello everybody.
I am trying to assign a replica of a microservice to a specific node/host at runtime. I did this with Kubernetes by creating my own scheduler and setting it in the Kubernetes YAML file.
Is that possible with Docker swarm? If not, how could I do it? Can I choose a node for a replica in Docker swarm?
thanks

@dperny
Collaborator

dperny commented Mar 23, 2018

I'm not super familiar with the kubernetes model, but I think what you want can't be accomplished in swarmkit. The scheduler for swarmkit isn't an external pluggable component; it is compiled into the swarmkit binary. Its implementation can be found in manager/scheduler. You can inform its scheduling decisions by setting node constraints and placement preferences, but you can't make the scheduling decisions on its behalf.
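For reference, the constraints and placement preferences mentioned above are exposed on the `docker service create` CLI. A sketch (the service name and the `region`/`datacenter` node labels are made up for illustration):

```shell
# Restrict placement to nodes labeled region=east, then spread the
# replicas evenly across distinct values of the datacenter label.
docker service create \
  --name redis \
  --replicas 6 \
  --constraint 'node.labels.region == east' \
  --placement-pref 'spread=node.labels.datacenter' \
  redis:3.0.6
```

This only narrows and biases the built-in scheduler's choices; the final task-to-node assignment is still made inside swarmkit.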

Does that answer your question?

@ViniciusDeAndrade
Author

I guess it does. So I cannot modify the behavior of the default Docker swarm scheduler, right?

@cyli
Contributor

cyli commented Mar 29, 2018

@ViniciusDeAndrade No, not without building your own fork.

@Adirio
Contributor

Adirio commented Apr 10, 2018

I was also looking for this feature and I may even develop it myself. Has any work been done on this topic? Is anyone interested in helping? Would it have a chance of being included in the official repository?

Does swarmkit or docker have any pluggable component that could be used as inspiration for the scheduler? You know, in order to keep things consistent. This idea could probably be extended to other manager subcomponents in the future.

@dperny
Collaborator

dperny commented Apr 10, 2018

the words "pluggable scheduler" have been said a lot in the context of "things we'd like to do maybe eventually". i don't think anyone except maybe @stevvooe has put much thought into how you'd architect it. if you think you can build it, you're certainly welcome to try; a great place to start would be a simple documentation PR or issue explaining your proposed architecture.

@Adirio
Contributor

Adirio commented Apr 10, 2018

Would this be an accurate schematic of the current architecture? I purposely left the allocator out of the graph due to #1477. I want to have a common overview picture of the architecture to use as the base for the proposal. Global tasks still go through the scheduler because they are checked against some conditions even though the node was already assigned by the orchestrator. The ---Co--- on the left side is meant to represent the requested/provided interface notation from a UML deployment diagram.

 USER                                                 MANAGER NODE                                                                  WORKER NODE
            +----------------------------------------------------------------------------------------------+
            |                 +--------------+ Tasks +-----------+ Tasks (assigned to node) +------------+ |                      +-------------+
            |                 |              |------>|           |------------------------->|            | |   Tasks to execute   | +-------+   |
   O        | +-----+ Service |              |       | Scheduler |       Global tasks       |            |------------------------->|       |   |
  /|\ ---Co---| API |-------->| Orchestrator |-------|-----------|------------------------->| Dispatcher | | Worker's tasks state | | Agent |...|
  / \       | +-----+         |              |       +-----------+   Workers' tasks state   |            |<-------------------------|       |   |
            |                 |              |<---------------------------------------------|            | |                      | +-------+   |
            |                 +--------------+                                              +------------+ |                      +-------------+
            +----------------------------------------------------------------------------------------------+

@ViniciusDeAndrade
Author

Hey guys, I partially solved the problem.
We can set a list of constraints where each item is a string of the form "node.hostname == " + hostname.
The default Docker swarm scheduler will then place all of the service's replicas onto those hosts.
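One caveat worth noting with this approach (a sketch; the service and host names are hypothetical): swarm ANDs all constraints together, so two different `node.hostname ==` expressions can never be satisfied by a single node, which may be one source of the errors described below. Moving a replica therefore means swapping the constraint, not adding a second one:

```shell
# Re-pin the service to a new host by replacing the old constraint.
# Adding a second hostname constraint instead would make placement
# unsatisfiable, since every constraint must hold on the same node.
docker service update \
  --constraint-rm 'node.hostname == old-host' \
  --constraint-add 'node.hostname == new-host' \
  web
```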

In fact, when I have just one replica running on a node and I set a different host constraint, that replica is moved to the new host. But when I do that with more replicas spread over several nodes, the scheduler tries to move the replicas and I often get errors.

It looks like what I did with Kubernetes, but there it works pretty well, and I can place a particular replica onto a particular node. In Docker swarm, the scheduler chooses which replica goes to which node.

Got it?

PS: I still need help dealing with how Docker swarm moves replicas when the service has more than one.

@Adirio
Contributor

Adirio commented Apr 11, 2018

Instead of using the node's hostname, you could set a label on every node that should hold a replica and then tell the service that it is only allowed to deploy to nodes with that label. This approach lets you select multiple hosts, and since the current scheduler tries to spread a service's tasks across as many nodes as possible, that should work; but I don't think this will trigger a task move when the labels are changed.

Trying to use constraints instead of a scheduling policy may work in some cases, but it is not valid for all of them.
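The label-based approach above could be sketched as follows (node names, the `web-replica` label, and the service name are all hypothetical):

```shell
# Tag the nodes that should hold replicas...
docker node update --label-add web-replica=true node-1
docker node update --label-add web-replica=true node-2

# ...then constrain the service to the labeled nodes. The default
# spread strategy distributes the tasks across the matching nodes.
docker service create \
  --name web \
  --replicas 4 \
  --constraint 'node.labels.web-replica == true' \
  nginx:alpine
```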

@olljanat
Contributor

olljanat commented Oct 3, 2018

@ViniciusDeAndrade it sounds to me like you are re-inventing the wheel here by planning a new scheduler before you have actually checked all the possibilities for using and improving the existing one.

Did you try to use constraint with node labels?
https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint

I have been digging into the scheduler logic lately while working on #2758, so let me know if there is some feature you have not found and I can check whether it exists in the code or can be added there easily.
