
CTPPS update Run3 Direct Simulation to use old LHCInfo after LHCInfo update #42890

Merged (1 commit, Dec 21, 2023)

Conversation

@Glitchmin (Contributor) commented Sep 27, 2023

PR description:

A follow-up to #42515, this PR updates the Direct Simulation to be able to use both the old LHCInfo and the new LHCInfoPer* records, for possible future updates.
Moreover, it introduces a new Process Modifier, Run3_CTPPS_directSim, to be used when the Direct Simulation is run in Run 3. This is needed because this simulation uses the old LHCInfo regardless of the run number. (In every other case, modules run in Run 3 should look for the new LHCInfoPer* records.)
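For orientation, a minimal sketch of how a direct-simulation job would pick up this behavior, assuming the new Era is exposed as Configuration.Eras.Era_Run3_CTPPS_directSim_cff (following the usual Configuration/Eras layout; the process name is illustrative):

import FWCore.ParameterSet.Config as cms

# Hypothetical import path, following the Configuration/Eras conventions
from Configuration.Eras.Era_Run3_CTPPS_directSim_cff import Run3_CTPPS_directSim

# A direct-simulation job selects the combined Era instead of plain Run3:
# the Run 3 defaults still apply, but LHCInfo consumers fall back to the old record
process = cms.Process("CTPPSDirectSim", Run3_CTPPS_directSim)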

PR validation:

To validate, please run Validation/CTPPS/test/simu/run_multiple.

@cmsbuild (Contributor)

+code-checks

Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-42890/37026

@cmsbuild (Contributor) commented Sep 27, 2023

A new Pull Request was created by @Glitchmin (Adam Kulczycki) for master.

It involves the following packages:

  • CalibPPS/ESProducers (alca)
  • CondFormats/DataRecord (db, alca)
  • CondTools/RunInfo (db)
  • Configuration/Eras (operations)
  • RecoPPS/ProtonReconstruction (reconstruction)
  • SimPPS/DirectSimProducer (simulation)
  • SimTransport/PPSProtonTransport (simulation)
  • Validation/CTPPS (dqm)

@davidlange6, @nothingface0, @fabiocos, @antoniovagnerini, @tjavaid, @francescobrivio, @cmsbuild, @mandrenguyen, @mdhildreth, @emanueleusai, @rappoccio, @saumyaphor4252, @syuvivida, @perrotta, @civanch, @jfernan2, @antoniovilela, @rvenditti, @consuegs can you please review it and eventually sign? Thanks.
@forthommel, @seemasharmafnal, @fabiocos, @fabferro, @grzanka, @mmusich, @makortel, @tocheng, @missirol, @Martin-Grunewald, @AnnikaStein this is something you requested to watch as well.
@antoniovilela, @sextonkennedy, @rappoccio you are the release manager for this.

cms-bot commands are listed here

@Glitchmin changed the title from "update Run3 Direct Simulation to use old LHCInfo after LHCInfo update" to "CTPPS update Run3 Direct Simulation to use old LHCInfo after LHCInfo update" on Sep 27, 2023
@@ -0,0 +1,3 @@
import FWCore.ParameterSet.Config as cms

run3_directSim = cms.Modifier()
Contributor

run3_directSim might not be a nice name. We should have the same behaviour in Run 2 as well (or at least make it clear that Run 2 works in the same way).

Contributor Author

Hmm, that might be a good suggestion. If I understand correctly, I can just name it run_direct_sim and have it set useNewLHCInfo to false.

Contributor

I would also add "pps" to the name of the modifier

Contributor

> I would also add "pps" to the name of the modifier

Using a ctpps name would be more consistent with the existing modifiers.

Contributor

Yup, I believe that something like ctpps_directSim could be more appropriate, as we don't really distinguish between Run 2 and Run 3 as far as the usage of LHCInfo in the PPS direct simulation is concerned.

Contributor Author

Changed in 8617402

  void CTPPSHepMCDistributionPlotter::analyze(const edm::Event &iEvent, const edm::EventSetup &iSetup) {
    // get conditions
-   const auto &lhcInfo = iSetup.getData(lhcInfoESToken_);
+   LHCInfoCombined lhcInfoCombined(
+       iSetup, lhcInfoPerLSToken_, lhcInfoPerFillToken_, lhcInfoToken_, useNewLHCInfo_, iEvent.isRealData());
Contributor

This module is plotting HepMC stuff, so it's clearly running on MC. It won't even run on top of real data, so we can hard-code the usage of the old LHCInfo

Contributor Author

Well, it didn't cost me much time to change it, and now it offers some extras like isCrossingAngleValid. And maybe (that's actually something I don't know) the Direct Simulation will one day use the new LHCInfoPer* records?

Contributor

I don't think so, because the simulation should be set up via manual configurations. It's hard to say if run-dependent simulation will ever be used for PPS, but I'd exclude it as an option for now.


  void setFromLHCInfo(const LHCInfo& lhcInfo);
  void setFromPerLS(const LHCInfoPerLS& infoPerLS);
  void setFromPerFill(const LHCInfoPerFill& infoPerFill);

  float crossingAngle();
+ static inline bool useNewLHCInfoForDirectSimu = false;
Contributor

Do I understand correctly that this is a temporary way to switch between one mechanism and the other?
If so, it should be removed before the final PR. Also, I believe this should be const 😄

@Glitchmin (Contributor Author) Sep 28, 2023

It should be const.
If the isRealData()-based selection mechanism makes it into the final PR, this bool should stay here, unless we're 100% sure the Direct Simulation will never use the new LHCInfoPer* records.

import FWCore.ParameterSet.Config as cms

from Configuration.Eras.Era_Run3_cff import Run3
from Configuration.Eras.Modifier_run3_directSim_cff import run3_directSim

Run3_CTPPS_directSim = cms.ModifierChain(Run3, run3_directSim)
Contributor

If these Era and Modifier are not about data-taking conditions, it would be better to move the modifier to Configuration/ProcessModifiers/python.
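Concretely (a sketch, not part of the PR), the suggestion amounts to relocating the three-line fragment from the earlier diff to the proposed package:

# Proposed location: Configuration/ProcessModifiers/python/run3_directSim_cff.py
import FWCore.ParameterSet.Config as cms

run3_directSim = cms.Modifier()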



from Configuration.Eras.Modifier_run3_common_cff import run3_common
run3_common.toModify(CTPPSHepMCDistributionPlotter, useNewLHCInfo = True)
Contributor

Maybe a ctpps_2022 Modifier would be better? run3_common should not be used for subdetector-specific customizations.

Contributor

BTW, shouldn't this be run3_directSim, or whatever the final name will be?

Contributor Author

@makortel Using run3_common was decided at an AlCaDB meeting with Tamas Vami and Francesco Brivio, where we explicitly discussed which modifier to use. run3_common is also used in the already closed PR #42515.
@AndreaBellora Please notice that I'm using two different modifiers: run3_common to set useNewLHCInfo to True, and run3_directSim to set useNewLHCInfo back to False. The first is applied whenever we're in Run 3; the second reverts the useNewLHCInfo setting for the Direct Simulation.
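Sketched out, that back-and-forth looks like this (the consumer module is purely illustrative; only the two toModify calls reflect what the PR does):

import FWCore.ParameterSet.Config as cms
from Configuration.Eras.Modifier_run3_common_cff import run3_common
from Configuration.Eras.Modifier_run3_directSim_cff import run3_directSim

# Hypothetical module with a useNewLHCInfo switch
lhcInfoConsumer = cms.EDAnalyzer("LHCInfoConsumer",
    useNewLHCInfo = cms.bool(False)  # pre-Run-3 default: old LHCInfo record
)

# run3_common switches every Run 3 job to the new LHCInfoPer* records...
run3_common.toModify(lhcInfoConsumer, useNewLHCInfo = True)
# ...and run3_directSim reverts the setting for the direct simulation
run3_directSim.toModify(lhcInfoConsumer, useNewLHCInfo = False)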

Contributor

> @makortel Using run3_common was decided at an AlCaDB meeting with Tamas Vami and Francesco Brivio, where we explicitly discussed which modifier to use. run3_common is also used in the already closed PR #42515.

@Glitchmin @tvami @francescobrivio could you please remind me of the rationale for using run3_common instead of ctpps_2022? run3_common is built including ctpps_2022 anyhow, so the final effect should be the same. Using run3_common could be preferable in case a possible ctpps_2024 (for example) were to fully replace ctpps_2022 (as ctpps_2022 fully replaced the previous ctpps_2018 for Run 3): of course, in that case one would need to customize that ctpps_2024 explicitly for future runN eras, whereas by customizing run3_common the setting propagates to all future runs, including Phase 2 ones.

Contributor

@perrotta the point was that the LHCInfoPer* records are used in Run 3, and we don't plan to go back to Run 2/Run 1 with them (which is usually not the case when a new record is introduced). So putting that older PR under run3_common was very natural. I'm not sure what the (far) future plans (for a re-reco) are for the 900 GeV runs in 2021, but the LHCInfoPer* records should be populated there too, so using ctpps_2022 might be misleading there. Otherwise ctpps_2022 is probably good too.

Contributor

@tvami ctpps_2022 is actually used to define the whole Run3 era, see Era_Run3_cff.py: even in case of a possible rerun of 2021 data, that modifier will be picked up by that era, in spite of having "2022" in its name. I agree that it would have been better to name ctpps_2022 something like run3_ctpps from the beginning (as for the other subdetectors and subpackages), but here we are.

Therefore, for the modifications themselves, using ctpps_2022 or run3_common is completely equivalent. I tend to agree with @makortel that "run3_common should not be used for subdetector-specific customizations". Besides that, I don't have a strong opinion on this, and I wouldn't consider it a showstopper, unless @cms-sw/core-l2 people keep suggesting to enforce it.

Contributor

sounds good to me!

Contributor

The design intent, which has mostly been followed (but not everywhere, and not enforced), was that we create a Modifier for these "subdetector-specific" customizations, and then construct the Eras (technically ModifierChains) from these Modifiers, kind of like building a house from Lego bricks.

I realize now that the idea here seems to be to change the default for Run 3 and beyond, and then add another modifier to be able to switch the behavior back (to the Run 2 and earlier one). If the customizations that are currently tied to run3_common were instead done with another modifier, say run3_ctpps_sim (just for the example, please invent a better name), the Run3_CTPPS_directSim Era could be defined as

Run3_CTPPS_directSim = Run3.copyAndExclude([run3_ctpps_sim])

To me this approach would be cleaner than customizing things back and forth.
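For illustration, a drastically reduced sketch of that construction (the real Run3 chain contains many more modifiers; run3_ctpps_sim is the placeholder name from the comment above):

import FWCore.ParameterSet.Config as cms

# Placeholder modifier carrying the Run 3 LHCInfo customizations
run3_ctpps_sim = cms.Modifier()

# The Run3 Era is a ModifierChain built from such bricks (heavily simplified here)
Run3 = cms.ModifierChain(run3_ctpps_sim)

# Direct-simulation Era: Run 3 with the LHCInfo customization excluded,
# rather than applied and then reverted
Run3_CTPPS_directSim = Run3.copyAndExclude([run3_ctpps_sim])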

@cmsbuild (Contributor)

+code-checks

Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-42890/37464

@cmsbuild (Contributor)

@smuzaffar (Contributor)

The milestone for this pull request has been moved to CMSSW_14_0_X. Please open a backport if it should also go into CMSSW_13_3_X.

@smuzaffar smuzaffar modified the milestones: CMSSW_13_3_X, CMSSW_14_0_X Nov 6, 2023
@cmsbuild cmsbuild modified the milestones: CMSSW_14_0_X, CMSSW_13_3_X Nov 6, 2023
@tjavaid commented Dec 18, 2023

@Glitchmin, @perrotta, we cannot see the bin-by-bin comparison histograms that we need to review before we can sign. There is an open issue, #43590, being discussed, which will hopefully be fixed soon.

@civanch (Contributor) commented Dec 19, 2023

+1

@perrotta (Contributor)

please test

@cmsbuild (Contributor)

+1

Summary: https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-a9ac7a/36580/summary.html
COMMIT: e224e1e
CMSSW: CMSSW_14_0_X_2023-12-19-1100/el8_amd64_gcc12
User test area: For local testing, you can use /cvmfs/cms-ci.cern.ch/week0/cms-sw/cmssw/42890/36580/install.sh to create a dev area with all the needed externals and cmssw changes.

Comparison Summary


@antoniovilela (Contributor)

@cms-sw/dqm-l2 @cms-sw/alca-l2
Do things look ok with this PR?
Thanks.

@tjavaid commented Dec 21, 2023

+1

@antoniovilela (Contributor)

+1

@cmsbuild (Contributor)

This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will be automatically merged.
