
# Request refactoring test

Jeff Squyres edited this page Jul 26, 2016 · 32 revisions

Per the discussion on the 2016-07-26 webex, we decided to test several aspects of the request refactoring.

Below is a proposal for running various tests / collecting data to evaluate the performance of OMPI with and without threading, and to evaluate the performance after the request code refactoring.

## 1. Single-threaded case

#### Benchmark: OSU message rate (osu_mbw_mr)

GOAL: Single-threaded performance impact should be none/minimal.

#### Versions to be tested

  • 1.10.3
  • Master, commit before request refactoring
  • Master, commit after request refactoring
  • 2.0.0
  • Master head

## 2. Multithreaded case

#### Benchmark: Message rate

This test should spawn an even number of processes that do ping-pong between pairs and measure message rate/bandwidth (possibly a modified osu_mbw_mr).

  • a. 16 processes/1 process per core : THREAD_SINGLE
  • b. 16 processes/1 process per core : THREAD_MULTIPLE
  • c. 1 process/16 threads/1 thread per core : THREAD_MULTIPLE

GOAL: This should show that the request refactoring does not hurt typical multithreaded performance. Cases b and c should perform comparably.

#### Versions to be tested

  • Master, commit before request refactoring?
  • 2.0.0
  • Master head

## 3. Proof of request refactoring benefits

The goal of the request refactoring is to reduce contention in MPI_Wait, allowing threads that are not in MPI_Wait to work more efficiently.

#### Benchmark: 2 threads/process

  • Thread A, doing matrix multiplication and measuring flops.
  • Thread B, calling MPI_Wait on n requests.
  • Total of 16 processes.

GOAL: The FLOPS measured by thread A should increase significantly with the new request code.

#### Versions to be tested

  • Master, commit before request refactoring.
  • Master head
  • 2.0.0

Feel free to comment, suggest, or discuss. Once everyone agrees on the tests, we can move forward. I should be able to run the tests on a UTK machine, and some of you may want to run them on your hardware as well.

Regards, Arm
