Concurrency Parallelism
- Make the GC, fibers, and threads function together without crashing.
  - No changes to the scheduler or IO.
  - 90% complete.
- The scheduler should work with multiple threads.
  - Atomic operations (cmpxchg and atomicrmw) are necessary to make this efficient.
  - Mutexes can be used as placeholders until atomics are implemented.
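As a sketch of the Mutex-as-placeholder idea, the hypothetical `SafeRunQueue` below guards a plain `Deque` with a `Mutex`; the lock could later be replaced by a structure built on cmpxchg/atomicrmw. This is only an illustration, not the actual scheduler queue.

```crystal
# Hypothetical placeholder: a run queue guarded by a Mutex until lock-free
# primitives (cmpxchg / atomicrmw) are available to the scheduler.
class SafeRunQueue(T)
  def initialize
    @mutex = Mutex.new
    @deque = Deque(T).new
  end

  # Enqueue a runnable item (e.g. a Fiber) at the tail.
  def push(item : T)
    @mutex.synchronize { @deque.push(item) }
  end

  # Dequeue from the head, or return nil when the queue is empty.
  def pop? : T?
    @mutex.synchronize { @deque.shift? }
  end
end

queue = SafeRunQueue(String).new
queue.push("fiber-1")
queue.push("fiber-2")
puts queue.pop? # => "fiber-1"
```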
- IO issues need to be resolved.
  - IO objects are used by a single fiber at a time.
    - `accept`ing sockets are the exception.
  - Queueing all readers in a thread-safe manner will create overhead that's probably not necessary.
    - By default, IO ops aren't thread safe without a wrapper?
    - Or are we willing to pay overhead for less than 1% of use cases?
    - How would multiple readers consistently `read()` from the same IO? They would receive random data.
    - Writing is different. Example: loggers. Solution: wrap the IO.
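For the logger case above, one option is a thin wrapper that serializes writes to the shared IO instead of making every IO operation pay for locking. `SyncIOWrapper` is a hypothetical name used only for this sketch; `Mutex`, `Channel`, and `spawn` are the existing features it relies on.

```crystal
# Hypothetical wrapper that serializes writes to a shared IO (e.g. a logger's
# target) so concurrent writers don't interleave partial lines.
class SyncIOWrapper
  def initialize(@io : IO)
    @mutex = Mutex.new
  end

  # Write one line atomically with respect to other users of this wrapper.
  def puts(message) : Nil
    @mutex.synchronize { @io.puts(message) }
  end
end

log  = SyncIOWrapper.new(STDOUT)
done = Channel(Nil).new

4.times do |i|
  spawn do
    log.puts "worker #{i}: started"
    done.send(nil)
  end
end

4.times { done.receive }
```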
- Multiple threads by default.
  - The system will spawn N threads. All threads will share work from all created fibers.
    - Up for discussion or change.
  - Experiment using per-thread queues with work stealing between threads.
    - New fibers are rescheduled on the same thread by pushing to the head of its queue.
    - Other threads steal from the tail when there is no work left in their own queue.
  - Other systems may be tested against a work-stealing queue.
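A rough sketch of the queue discipline described above: the owning thread pushes and pops at the head, while idle threads steal from the tail. `WorkStealingQueue` is hypothetical, and the `Mutex` stands in for the atomic operations a real implementation would use.

```crystal
# Hypothetical per-thread queue: the owner works at the head (newest first),
# thieves take from the tail (oldest first). Mutex stands in for atomics.
class WorkStealingQueue(T)
  def initialize
    @mutex = Mutex.new
    @deque = Deque(T).new
  end

  # Owner: newly spawned fibers go to the head so they run soon.
  def push(item : T)
    @mutex.synchronize { @deque.unshift(item) }
  end

  # Owner: take the most recently pushed item.
  def pop? : T?
    @mutex.synchronize { @deque.shift? }
  end

  # Thief: a thread with an empty queue takes the oldest item from the tail.
  def steal? : T?
    @mutex.synchronize { @deque.pop? }
  end
end

q = WorkStealingQueue(Symbol).new
q.push(:a)
q.push(:b)
p q.pop?   # => :b (owner runs the newest fiber)
p q.steal? # => :a (thief takes the oldest fiber)
```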
- Experimentation is needed with the event loop.
  - 1 event loop or N? Other options?
- Threads are put in a thread pool.
  - New thread pools can be created to perform different types of tasks or dedicate resources to a specific task.
  - Pools may have priorities ((soft) real-time, normal, background).
  - Allow increasing or decreasing the number of threads?
    - Adapt to load, like Grand Central Dispatch?
  - Fibers only run within the thread pool where they are created.
  - Messages may be passed between pools using channels.
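Since the thread-pool API is still a proposal, the sketch below simulates two "pools" with groups of fibers and passes messages between them over buffered channels. Only `spawn` and `Channel` are existing features; the grouping and naming are illustrative.

```crystal
# Sketch of passing messages between two task groups over channels. The
# "pools" here are just groups of fibers, standing in for the proposed
# thread pools above.
requests = Channel(Int32).new(8)
results  = Channel(String).new(8)

# "Background" group: performs the work.
2.times do |worker|
  spawn do
    loop do
      n = requests.receive
      results.send "worker #{worker} computed #{n * n}"
    end
  end
end

# "Normal" group: submits work, then collects the answers.
5.times { |n| requests.send(n) }
5.times { puts results.receive }
```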
- IO to be determined based on restrictions learned in phase 1.
- Experiment with thread affinity.