Concurrency Parallelism

Phase 0.25

  • Make the GC, fibers, and threads function together without crashing (a sketch of the kind of workload involved follows this list).
  • No changes to the scheduler or IO.
  • 90% complete.
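
A minimal sketch of the kind of workload Phase 0.25 has to survive: many fibers allocating while the GC runs. Only existing public API is used (spawn, Channel, Fiber.yield); the fiber and iteration counts are arbitrary.

```crystal
done   = Channel(Nil).new
fibers = 100

fibers.times do
  spawn do
    1_000.times do |i|
      Array(Int32).new(256) { |j| j } # allocate to keep the GC busy
      Fiber.yield if i % 100 == 0     # periodically hand control to other fibers
    end
    done.send(nil)
  end
end

fibers.times { done.receive } # the main fiber waits for every worker
```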

Phase 0.5

  • The scheduler should work with multiple threads.
  • Atomic operations (cmpxchg and atomicrmw) are necessary to make this efficient.
  • Mutexes can be used as placeholders until atomics are implemented (see the sketch after this list).
  • IO issues need to be resolved.
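
A minimal sketch of the placeholder approach, assuming some piece of shared scheduler state such as a counter: a Mutex guards it for now, and a single cmpxchg/atomicrmw-backed operation would replace the lock later. The class name is hypothetical.

```crystal
# Placeholder pattern: protect shared state with a Mutex until
# cmpxchg/atomicrmw-backed atomics are available.
class RunQueueLength
  def initialize
    @count = 0
    @lock = Mutex.new
  end

  def increment
    # Later this becomes a single atomicrmw add; for now a Mutex is
    # correct, just slower.
    @lock.synchronize { @count += 1 }
  end

  def value
    @lock.synchronize { @count }
  end
end
```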

Observations:

  • IO objects are used by a single fiber at a time.
  • Accepting sockets are the exception.
  • Queueing all readers in a thread-safe manner would create overhead that is probably unnecessary.

Create a wrapper for thread-safe IO ops?

  • By default, IO ops aren't thread-safe without a wrapper?
  • Or are we willing to pay the overhead for less than 1% of use cases?
  • How would multiple readers consistently read() from the same IO? Each would receive arbitrary chunks of the stream.
  • Writing is different. Example: loggers. Solution: wrap the IO, as sketched below.
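
A possible shape for that wrapper, assuming writes are the case worth synchronizing (e.g. several fibers logging to the same IO). The class name SynchronizedWriter is made up; only Mutex, IO, spawn, and Channel from the standard library are assumed.

```crystal
# Hypothetical wrapper: serialize writes to a shared IO (e.g. a log)
# while leaving ordinary, single-fiber IO objects free of locking overhead.
class SynchronizedWriter
  def initialize(@io : IO)
    @lock = Mutex.new
  end

  def puts(message)
    @lock.synchronize { @io.puts(message) }
  end

  def flush
    @lock.synchronize { @io.flush }
  end
end

log  = SynchronizedWriter.new(STDERR)
done = Channel(Nil).new

2.times do |i|
  spawn do
    log.puts "message from fiber #{i}"
    done.send(nil)
  end
end

2.times { done.receive }
```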

Phase 1

  • Multiple threads by default
  • The system will spawn N threads. All threads will share work from all created fibers (see the sketch below).
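
What user code might look like under Phase 1, assuming spawn and Channel keep their current API: the program spawns CPU-bound fibers exactly as today, and the runtime decides which of the N threads runs each one. The worker count and workload here are made up.

```crystal
results = Channel(Int64).new

8.times do
  spawn do
    sum = 0_i64
    (1..1_000_000).each { |n| sum += n } # CPU-bound work that gains from extra threads
    results.send(sum)
  end
end

total = 0_i64
8.times { total += results.receive }
puts total # => 4000004000000
```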

Scheduler:

  • Up for discussion or change.
  • Experiment using per-thread queues with work stealing between threads (see the sketch after this list).
  • New fibers are rescheduled on the same thread by pushing to the head of its queue.
  • Other threads steal from the tail when there is no work left in their own queue.
  • Other systems may be tested against a work-stealing queue.
  • Experimentation is needed with the event loop.
  • 1 event loop or N? Other options?
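
A minimal sketch of the per-thread queue described above: the owning thread pushes new fibers to the head and pops from the head; idle threads steal from the tail. The class name is hypothetical, and a Mutex stands in for the lock-free operations a real work-stealing deque would use.

```crystal
# Hypothetical per-thread run queue: the owning thread pushes and pops at
# the head; other threads steal the oldest fiber from the tail.
class WorkStealingQueue
  def initialize
    @deque = Deque(Fiber).new
    @lock = Mutex.new
  end

  # Owner enqueues a newly spawned fiber at the head so it runs soon.
  def push(fiber : Fiber)
    @lock.synchronize { @deque.unshift(fiber) }
  end

  # Owner takes its next fiber from the head.
  def pop? : Fiber?
    @lock.synchronize { @deque.shift? }
  end

  # An idle thread steals the oldest fiber from the tail.
  def steal? : Fiber?
    @lock.synchronize { @deque.pop? }
  end
end
```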

Phase 2

  • Threads are put in a thread pool.
  • New thread pools can be created to perform different types of tasks or dedicate resources to a specific task.

Thread pools

  • May have priorities ((soft) real-time, normal, background).
  • Allow increasing or decreasing the number of threads?
  • Adapt to load, like Grand Central Dispatch?
  • Fibers only run within the thread pool where they are created.
  • Messages may be passed between pools using channels (see the sketch after this list).
  • IO to be determined based on restrictions learned in Phase 1.
  • Experiment with thread affinity.
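
A sketch of how the Phase 2 pieces might fit together. The PoolPriority enum is hypothetical and the pools themselves don't exist yet; only the channel-based message passing between the two fibers is plain Crystal that runs today.

```crystal
# Hypothetical priority levels a thread pool could be created with.
enum PoolPriority
  RealTime
  Normal
  Background
end

# The channel part is plain Crystal today; the pools are not.
requests = Channel(String).new
replies  = Channel(String).new

# Imagine this fiber living in a Background pool...
spawn do
  while job = requests.receive?
    replies.send(job.upcase) # stand-in for slow background work
  end
end

# ...and this one in a Normal pool, feeding it work over the channel.
spawn do
  requests.send("index the docs")
  requests.send("compress old logs")
  requests.close
end

2.times { puts replies.receive }
```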