32 commits
9d6abc2
editting readme
May 3, 2016
0d92f0f
editting readme
May 3, 2016
a3ac5e5
editting readme
May 3, 2016
6c0e313
h eta split in minus/plus in categories.py
May 3, 2016
dbe9ca0
PHYS14 to 76X in writePythonCFG.py
May 3, 2016
4130123
add cff writer file
May 3, 2016
286ad7f
edit readme
May 3, 2016
d945e75
edit run_metPhiCorr.py
May 3, 2016
9a5b305
edit run_metPhiCorr.py
May 3, 2016
821ed72
fix the cff writer
May 3, 2016
c32454d
new files for fitting the MEx,y
May 3, 2016
5e33956
cff and cfi files
May 3, 2016
dc33955
README.md editting
May 11, 2016
a88edc9
Data cfi files
May 27, 2016
6776c1b
add run_data_metPhiCorr.py
Jun 1, 2016
09acdd4
changes for 76 data
Jun 1, 2016
096ddbf
cfi files for MCs
Jun 1, 2016
3b5ea1d
80x corrections
Jun 21, 2016
0768fd8
Make the recipe work on 8_0_7
npostiau Jun 5, 2019
e0c25ed
Update scripts for 2018
npostiau Jul 15, 2019
0ec2dca
Merge with main repository
npostiau Jul 15, 2019
86f89b9
Updates to run on the 2018 data
npostiau Aug 9, 2019
0c003dd
Adapt fitting functions to corrections in nvtx
npostiau Nov 13, 2019
2afd308
Add results for full Run2
npostiau Nov 13, 2019
ab35703
Update plotting/combining scripts
npostiau Nov 15, 2019
253304e
Update running scripts
npostiau Nov 15, 2019
8817183
Update instructions
npostiau Nov 15, 2019
cfd6c12
Cleaning up of old files
npostiau Nov 15, 2019
9ba991c
Update of the instructions
npostiau Nov 15, 2019
366ebb5
Remove unnecessary files
npostiau Nov 15, 2019
9026de5
Add the fit results
npostiau Nov 15, 2019
cec5c56
Allow combination of the corrections files
npostiau Dec 11, 2019
2 changes: 0 additions & 2 deletions MVAMET/bin/BuildFile.xml
@@ -2,7 +2,6 @@
<use name="root"/>
<use name="CondFormats/EgammaObjects"/>
<use name="RecoMET/METPUSubtraction"/>
<use name="MetTools/MVAMET"/>
<flags ADD_SUBDIR="1"/>
<flags GENREFLEX_ARGS="--"/>
<flags CXXFLAGS="-O3 -ftree-loop-linear -floop-interchange -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4 -fopenmp -march=corei7"/>
@@ -12,7 +11,6 @@
<use name="root"/>
<use name="CondFormats/EgammaObjects"/>
<use name="RecoMET/METPUSubtraction"/>
<use name="MetTools/MVAMET"/>
<flags ADD_SUBDIR="1"/>
<flags GENREFLEX_ARGS="--"/>
<flags CXXFLAGS="-O3 -ftree-loop-linear -floop-interchange -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4 -fopenmp -march=corei7"/>
34 changes: 24 additions & 10 deletions MetPhiCorrections/README.md
@@ -1,15 +1,29 @@

#### Met phi corrections
Tools for the derivation of MET phi corrections. The actual corrections are in cms-met/cmssw
Tools for the derivation of MET phi corrections. The actual corrections are in cms-met/cmssw\

The full recipe below is supposed to work on `CMSSW_10_2_11`.

###### How to derive MET phi corrections
1. (Skip this step if the binning in PF Candidate species and eta stays unchanged) Met-phi corrections are constructed separately in bins of PF Candidate species and eta, roughly corresponding to the subdetectors. These Categories are defined in `MetTools/MetPhiCorrections/python/tools/categories.py`
1. (Skip this step if the binning in PF Candidate species and eta stays unchanged) Met-phi corrections are constructed separately in bins of PF Candidate species and eta, roughly corresponding to the subdetectors. These Categories are defined in `MetTools/MetPhiCorrections/python/tools/categories.py`\
Once the species are defined (no/only small changes should be necessary in categories.py), the following command creates a cfg that will be used for obtaining the MEx,y profiles, which are later parametrized.
`python MetTools/MetPhiCorrections/python/tools/writePythonCFG.py --postfix PHYS14`
This will create a file `phiCorr_PHYS14_cfi.py`. Change the postfix to something that identifies your usecase.
Move this file to `MetTools/MetPhiCorrections/python` and create a `_cff.py` that imports the cfi file. Start from the template `MetTools/MetPhiCorrections/python/phiCorr_PHYS14_cff.py`
2. Use the `_cff.py` file from the previous step to create the MEx,y profiles. If the previous steps were skipped, use `MetTools/MetPhiCorrections/python/phiCorr_PHYS14_cff.py`
Edit `MetTools/MetPhiCorrections/test/run_metPhiCorr.py` to make sure your `_cff.py` from the previous step is used. As a test, issue
`cmsRun run_metPhiCorr.py`
The output file(s) contain the histograms and MEx,y profiles. When the test job ran satisfactorily, check the output root file and if it looks good, run the cfg on crab on the data you want the produce the corrections for. Note: Output files are small and do not depend on the size of the dataset. Do a `hadd` of all output files.
3. Once you have the MEx,y go to `MetTools/MetPhiCorrections/python/tools/`. Look at the input parameters of `multiplicityFit.py` and change them in the calling script `fits.sh` according to your needs. The command `python multiplicityFit.py -h` should give you an idea. The script `fits.sh` performs the fits and writes the final cfi file. You can give it a prefix parameter, e.g. `./fits.sh myphys14` which then creates `multPhiCorr_myphys14_cfi.py` from the DY shifts that are stored in a root file in that directory. Check the parametrization in the plot and modify funtional form, fitrange etc., if needed. When you're done check the output `.py` file (e.g. the one from the repository, `multPhiCorr_phys14_cfi.py`, or the one you produced). Check that there is no overlap in eta for the same candidate species and that all categories are included. Finally, move the file to the `JetMETCorrections/Type1MET` module and apply the corrections using the `MultShiftMETcorrInputProducer`
`python MetTools/MetPhiCorrections/python/tools/writePythonCFG.py --postfix 102X`\
This will create a file `phiCorrBins_102X_cfi.py` (a sketch of the kind of bin definitions it contains is shown after this list). Change the postfix to something that identifies your use case.\
Move this file to `MetTools/MetPhiCorrections/python` and run the following command with the same postfix as before, `python MetTools/MetPhiCorrections/python/write_cff.py --postfix 102X`, to create a `_cff.py` that imports the cfi file.
2. Use the `_cff.py` file from the previous step to create the MEx,y profiles. If the previous steps were skipped, use `MetTools/MetPhiCorrections/python/phiCorr_102X_cff.py`.\
Edit `MetTools/MetPhiCorrections/test/run_metPhiCorr.py` to make sure your `_cff.py` from the previous step is used. As a test, issue\
`cmsRun run_metPhiCorr.py`.\
`run_metPhiCorr.py` should be updated manually with the correct global tag for the year on which you want to run (different for data and MC); a minimal configuration sketch is shown after this list.\
The output file(s) contain the histograms and MEx,y profiles. When the test job has run satisfactorily, check the output root file; if it looks good, run the cfg with CRAB on the data you want to produce the corrections for. You may use the existing CRAB config files as an example.\
Note: Output files are small and do not depend on the size of the dataset.\
Do a `hadd` of all output files.\
You need to do this separately for each era, and once for MC for each year (no automation of this exists yet). In principle, different eras could be grouped together, but it has been shown that the corrections depend on the era.\
3. Once you have the MEx,y go to `MetTools/MetPhiCorrections/python/tools/`. Look at the input parameters of `multiplicityFit.py` and change them in the calling script `fits.sh` according to your needs. The command `python multiplicityFit.py -h` should give you an idea. The script `fits.sh` performs the fits and writes the final cfi file. You can give it three parameters (a label for the production, the input root file path, and the output plot directory path), e.g. `./fits.sh my102X <input.root> plots`, which then creates `multPhiCorr_my102X_cfi.py` from the DY shifts that are stored in a root file in that directory, as well as plots showing the fit results. Check the parametrization in the plot and modify the functional form, fit range, etc., if needed. When you're done, check the output `.py` file (e.g. one of those from the repository, `multPhiCorr_XXX_cfi.py`, or the one you produced; a sketch of such a file is shown after this list). Check that there is no overlap in eta for the same candidate species and that all categories are included. (One can also use `runfits.sh` to run `fits.sh` several times in a row.)\
There are 3 possible parametrizations for the corrections: as a function of multiplicity, of the (scalar) pT sum, or of the number of vertices. The current recommendation is to use the number of vertices, which is found to be the most independent of the hard physics process. Later steps assume this choice is made.\
The file `multPhiCorr_my102X_cfi.py` is the one that is meant to be put in the `JetMETCorrections/Type1MET/python` module of CMSSW (if you do so, don't forget to change the module name in `pfMETmultShiftCorrections_cfi.py`). By doing so, you ensure that your corrections will be applied, based on the fits (which are quadratic functions) you just created.\
You can add different files together for different eras using the option `--combine` with `fits.sh`. See the script `runfits_combined.sh` for an example. This output should be the one to use in CMSSW.\
From there, you might want to check the impact of your corrections by applying them, using a simple analyzer.\
4. Once you have run on all the data, and on DY MC, for one given year, you can use the scripts in `MetTools/MetPhiCorrections/test/plotting`. Update the names of the input files to match the ones you obtained. There are 3 scripts you can use:
- (Optional) `root -l -q -b 'prepare_plots.cc++(YEAR)'` (where YEAR is 2016, 2017 or 2018) -> will produce plots for meaningful variables, superimposed for all eras
- `root -l -q -b 'combine_corrections.cc++(YEAR)'` -> will produce, for each era, a single combined plot for the total correction in MET X shift and another in MET Y shift, as a function of the number of vertices. It also performs a fit with a linear function (the expected behavior is linear) and writes the fit parameters to a text file. This output is recommended if you want to apply the corrections in your analysis and don't already have the corrected MET in your trees (see the application sketch after this list).
- (Optional) `root -l -q -b 'prepare_combined_plots.cc++(YEAR)'` -> will superimpose the plots produced by `combine_corrections.cc` and compare them with the fits obtained by the alternative method, which computes the corrections directly from the MET X/Y coordinates in the event. Additional options can be given to plot only a subset of eras (to avoid overcrowded plots).
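
For orientation, below are a few hedged sketches illustrating steps 1-4 above. First, a hypothetical example of the kind of bin definitions the generated `phiCorrBins_102X_cfi.py` might contain (step 1). The structure is inferred from how `metPhiCorrInfoWriter.cc` (shown further down in this diff) uses its `parameters`; apart from `name` and the meaning of `varType`, the field names and all values here are assumptions, and the real generated file may differ.

```python
import FWCore.ParameterSet.Config as cms

# Hypothetical bin definitions; field names other than "name" and all values are illustrative only.
# varType selects the variable profiled against: 0 = multiplicity, 1 = number of good vertices,
# 2 = scalar pT sum (see the Fill calls in metPhiCorrInfoWriter.cc).
phiCorrBins_102X = cms.VPSet(
    cms.PSet(
        name    = cms.string("hEtaPlus"),   # charged hadrons, positive eta
        type    = cms.int32(1),             # reco::PFCandidate::ParticleType::h (pdgId 211)
        varType = cms.int32(1),             # parametrize vs number of vertices
        etaMin  = cms.double(0.0),
        etaMax  = cms.double(3.0),
    ),
    cms.PSet(
        name    = cms.string("hEtaMinus"),
        type    = cms.int32(1),
        varType = cms.int32(1),
        etaMin  = cms.double(-3.0),
        etaMax  = cms.double(0.0),
    ),
    # ... one PSet per PF-candidate species / eta bin defined in categories.py
)
```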
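
Next, a minimal sketch of the configuration edits step 2 describes for `run_metPhiCorr.py`: load the `_cff` created in step 1 and set the global tag by hand for the year/era being processed. The process name, global tag, input file, and the assumption that the analyzer books its histograms through `TFileService` are placeholders, not part of this PR.

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("MetPhiCorrInfo")

process.load("FWCore.MessageService.MessageLogger_cfi")
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")
# Placeholder: use the global tag recommended for your year/era (data and MC differ).
process.GlobalTag.globaltag = "102X_dataRun2_v12"

process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(1000))
process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring("file:input.root")  # placeholder input file
)

# Output file for the histograms and MEx,y profiles (assuming TFileService is used).
process.TFileService = cms.Service("TFileService", fileName = cms.string("metPhiCorrInfo.root"))

# Load the _cff produced in step 1 (postfix assumed to be 102X here).
process.load("MetTools.MetPhiCorrections.phiCorrBins_102X_cff")

process.p = cms.Path(process.metPhiCorrInfoWriterSequence)
```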
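
Third, a hedged sketch of what a `multPhiCorr_my102X_cfi.py` produced by `fits.sh` in step 3 might look like. The field layout is modeled on the existing `multPhiCorr_XXX_cfi.py` files referenced above; the functional forms and numbers below are placeholders, not real fit results.

```python
import FWCore.ParameterSet.Config as cms

# Placeholder corrections: one PSet per candidate species / eta bin, each giving a
# quadratic shift of MEx and MEy as a function of the chosen variable (here nvtx).
multPhiCorr_my102X = cms.VPSet(
    cms.PSet(
        name    = cms.string("hEtaPlus"),
        type    = cms.int32(1),
        varType = cms.int32(1),                    # parametrized vs number of vertices
        etaMin  = cms.double(0.0),
        etaMax  = cms.double(3.0),
        fx      = cms.string("(x*[0])+(x*x*[1])"), # shift applied to MEx
        px      = cms.vdouble(-0.05, 0.001),       # placeholder fit parameters
        fy      = cms.string("(x*[0])+(x*x*[1])"), # shift applied to MEy
        py      = cms.vdouble(0.03, -0.002),
    ),
    # ... remaining categories, with no eta overlap for a given species
)
```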
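
Finally, a hedged sketch (not part of this PR) of how the linear nvtx fits written out by `combine_corrections.cc` could be applied to an uncorrected MET in an analysis, as suggested in step 4. The parameter names, sign convention, and example numbers are assumptions for illustration only.

```python
import math

def corrected_met(met_pt, met_phi, nvtx, cx0, cx1, cy0, cy1):
    """Shift the MET x/y components by a linear function of the number of vertices
    and rebuild (pt, phi). cx0/cx1 (cy0/cy1) are the offset and slope of the fitted
    x (y) shift; subtracting the shift is an assumed convention."""
    mex = met_pt * math.cos(met_phi) - (cx0 + cx1 * nvtx)
    mey = met_pt * math.sin(met_phi) - (cy0 + cy1 * nvtx)
    return math.hypot(mex, mey), math.atan2(mey, mex)

# Example usage with placeholder fit parameters (not real results):
pt_corr, phi_corr = corrected_met(85.0, 1.2, nvtx=30, cx0=0.3, cx1=-0.02, cy0=-0.1, cy1=0.01)
print(pt_corr, phi_corr)
```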
2 changes: 2 additions & 0 deletions MetPhiCorrections/plugins/BuildFile.xml
@@ -14,6 +14,8 @@
<use name="DataFormats/VertexReco"/>
<use name="DataFormats/Common"/>
<use name="DataFormats/ParticleFlowCandidate"/>
<use name="DataFormats/PatCandidates"/>
<use name="DataFormats/MuonReco" />

<library file="metPhiCorrInfoWriter.cc" name="MetToolsMetPhiCorrInfoWriter">
<flags EDM_PLUGIN="1"/>
158 changes: 104 additions & 54 deletions MetPhiCorrections/plugins/metPhiCorrInfoWriter.cc
@@ -6,6 +6,7 @@
#include "DataFormats/Common/interface/View.h"
#include "DataFormats/Common/interface/Association.h"
#include <string>
#include <TLorentzVector.h>

std::string namePostFix (int varType) {

@@ -15,6 +16,15 @@ std::string namePostFix (int varType) {
return std::string("unknown");
}

double delta_phi (float phi1, float phi2)
{
double dPhi = phi1 - phi2;
if (dPhi > 3.1416) dPhi -= 2*3.1416;
if (dPhi <= -3.1416) dPhi += 2*3.1416; //So that dPhi is always between -pi and +pi.
return dPhi;
}


int metPhiCorrInfoWriter::translateTypeToAbsPdgId( reco::PFCandidate::ParticleType type ) {
switch( type ) {
case reco::PFCandidate::ParticleType::h: return 211; // pi+
@@ -29,9 +39,19 @@ int metPhiCorrInfoWriter::translateTypeToAbsPdgId( reco::PFCandidate::ParticleTy
}
}

bool metPhiCorrInfoWriter::passSelection(TLorentzVector firstMuon, TLorentzVector secondMuon) {
if((firstMuon + secondMuon).Pt() > 20) return false;
if((firstMuon + secondMuon).M() > 110 or (firstMuon + secondMuon).M() < 70) return false;
if(firstMuon.Pt()/secondMuon.Pt() > 1.2 or firstMuon.Pt()/secondMuon.Pt() < 0.8) return false;
if(fabs(delta_phi(firstMuon.Phi(),secondMuon.Phi()))<2.8) return false;
return true;
}

metPhiCorrInfoWriter::metPhiCorrInfoWriter( const edm::ParameterSet & cfg ):
vertices_ ( cfg.getUntrackedParameter< edm::InputTag >("vertexCollection") ),
verticesToken_ ( consumes< reco::VertexCollection >(vertices_) ),
muons_ ( cfg.getUntrackedParameter< edm::InputTag >("muonsCollection") ),
muonsToken_(consumes< std::vector< pat::Muon> >(muons_)),
pflow_ ( cfg.getUntrackedParameter< edm::InputTag >("srcPFlow") ),
pflowToken_ ( consumes< edm::View<reco::Candidate> >(pflow_) ),
moduleLabel_(cfg.getParameter<std::string>("@module_label"))
@@ -84,69 +104,99 @@ metPhiCorrInfoWriter::metPhiCorrInfoWriter( const edm::ParameterSet & cfg ):

void metPhiCorrInfoWriter::analyze( const edm::Event& evt, const edm::EventSetup& setup) {

//get primary vertices
edm::Handle< reco::VertexCollection > hpv;
try {
// evt.getByLabel( vertices_, hpv );
evt.getByToken( verticesToken_, hpv );
} catch ( cms::Exception & e ) {
std::cout <<"[metPhiCorrInfoWriter] error: " << e.what() << std::endl;
//Select muons compatible with a Z
edm::Handle< std::vector< pat::Muon> > theMuons;
evt.getByToken(muonsToken_,theMuons );

std::vector<TLorentzVector> goodMuons;
std::vector<int> muonCharges;
bool isGoodEvent = false;
for(std::vector<pat::Muon>::const_iterator muon = (*theMuons).begin(); muon != (*theMuons).end(); muon++) {
if(!(&*muon)->passed(reco::Muon::CutBasedIdMedium)) continue;
if(!(&*muon)->passed(reco::Muon::PFIsoMedium)) continue;
TLorentzVector thisMuon;
thisMuon.SetPtEtaPhiE((&*muon)->pt(),(&*muon)->eta(),(&*muon)->phi(),(&*muon)->energy());
goodMuons.push_back(thisMuon);
muonCharges.push_back((&*muon)->charge());
}
std::vector<reco::Vertex> goodVertices;
for (unsigned i = 0; i < hpv->size(); i++) {
if ( (*hpv)[i].ndof() > 4 &&
( fabs((*hpv)[i].z()) <= 24. ) &&
( fabs((*hpv)[i].position().rho()) <= 2.0 ) )
goodVertices.push_back((*hpv)[i]);
// std::cout << "======================================================================================================" << std::endl;
// std::cout << "There are " << goodMuons.size() << " good muons." << std::endl;
if(goodMuons.size()>1){
// std::cout << "First muon has Pt = " << goodMuons[0].Pt() << ", eta = " << goodMuons[0].Eta() << ", phi = " << goodMuons[0].Phi() << ", E = " << goodMuons[0].E() << ", charge = " << muonCharges[0] << std::endl;
// std::cout << "Second muon has Pt = " << goodMuons[1].Pt() << ", eta = " << goodMuons[1].Eta() << ", phi = " << goodMuons[1].Phi() << ", E = " << goodMuons[1].E() << ", charge = " << muonCharges[1] << std::endl;
for(unsigned int first = 0 ; first < goodMuons.size()-1 ; first++){
for(unsigned int second = 1 ; second < goodMuons.size() ; second++){
if(passSelection(goodMuons[first],goodMuons[second]) and (muonCharges[first]+muonCharges[second] == 0)) isGoodEvent = true;
}
}
}
int ngoodVertices = goodVertices.size();

for (unsigned i=0;i<counts_.size();i++) {
counts_[i]=0;
sumPt_[i]=0;
MEx_[i]=0.;
MEy_[i]=0.;
}

edm::Handle< edm::View<reco::Candidate> > particleFlow;
evt.getByToken( pflowToken_, particleFlow );
for (unsigned i = 0; i < particleFlow->size(); ++i) {
const reco::Candidate& c = particleFlow->at(i);
for (unsigned j=0; j<type_.size(); j++) {

if(isGoodEvent){
// std::cout << "This is a good event." << std::endl;
//get primary vertices
edm::Handle< reco::VertexCollection > hpv;
try {
// evt.getByLabel( vertices_, hpv );
evt.getByToken( verticesToken_, hpv );
} catch ( cms::Exception & e ) {
std::cout <<"[metPhiCorrInfoWriter] error: " << e.what() << std::endl;
}
std::vector<reco::Vertex> goodVertices;
for (unsigned i = 0; i < hpv->size(); i++) {
if ( (*hpv)[i].ndof() > 4 &&
( fabs((*hpv)[i].z()) <= 24. ) &&
( fabs((*hpv)[i].position().rho()) <= 2.0 ) )
goodVertices.push_back((*hpv)[i]);
}
int ngoodVertices = goodVertices.size();

for (unsigned i=0;i<counts_.size();i++) {
counts_[i]=0;
sumPt_[i]=0;
MEx_[i]=0.;
MEy_[i]=0.;
}

edm::Handle< edm::View<reco::Candidate> > particleFlow;
evt.getByToken( pflowToken_, particleFlow );
for (unsigned i = 0; i < particleFlow->size(); ++i) {
const reco::Candidate& c = particleFlow->at(i);
for (unsigned j=0; j<type_.size(); j++) {
// if (abs(c.pdgId())==211) {
// std::cout<<"cand pdgId "<<c.pdgId()<<" testing type:"<<type_[j]<<" translated to pdg:"<<translateTypeToAbsPdgId(reco::PFCandidate::ParticleType(type_[j]))<<std::endl;
// }
if (abs(c.pdgId())== translateTypeToAbsPdgId(reco::PFCandidate::ParticleType(type_[j]))) {
if ((c.eta()>etaMin_[j]) and (c.eta()<etaMax_[j])) {
counts_[j]+=1;
sumPt_[j]+=c.pt();
MEx_[j]-=c.px();
MEy_[j]-=c.py();

pt_[j]->Fill(c.eta(), c.phi(), c.pt());
energy_[j]->Fill(c.eta(), c.phi(), c.energy());
occupancy_[j]->Fill(c.eta(), c.phi());
if (abs(c.pdgId())== translateTypeToAbsPdgId(reco::PFCandidate::ParticleType(type_[j]))) {
if ((c.eta()>etaMin_[j]) and (c.eta()<etaMax_[j])) {
counts_[j]+=1;
sumPt_[j]+=c.pt();
MEx_[j]-=c.px();
MEy_[j]-=c.py();

pt_[j]->Fill(c.eta(), c.phi(), c.pt());
energy_[j]->Fill(c.eta(), c.phi(), c.energy());
occupancy_[j]->Fill(c.eta(), c.phi());
}
}
}
}
}
for (std::vector<edm::ParameterSet>::const_iterator v = cfgCorrParameters_.begin(); v!=cfgCorrParameters_.end(); v++) {
unsigned j=v-cfgCorrParameters_.begin();
for (std::vector<edm::ParameterSet>::const_iterator v = cfgCorrParameters_.begin(); v!=cfgCorrParameters_.end(); v++) {
unsigned j=v-cfgCorrParameters_.begin();
// std::cout<<"j "<<j<<" "<<v->getParameter<std::string>("name")<<" varType "<<varType_[j]<<" counts "<<counts_[j]<<" sumPt "<<sumPt_[j]<<" nvtx "<<ngoodVertices<<" "<<MEx_[j]<<" "<<MEy_[j]<<std::endl;
if (varType_[j]==0) {
profile_x_[j]->Fill(counts_[j], MEx_[j]);
profile_y_[j]->Fill(counts_[j], MEy_[j]);
variable_[j]->Fill(counts_[j]);
}
if (varType_[j]==1) {
profile_x_[j]->Fill(ngoodVertices, MEx_[j]);
profile_y_[j]->Fill(ngoodVertices, MEy_[j]);
variable_[j]->Fill(ngoodVertices);
}
if (varType_[j]==2) {
profile_x_[j]->Fill(sumPt_[j], MEx_[j]);
profile_y_[j]->Fill(sumPt_[j], MEy_[j]);
variable_[j]->Fill(sumPt_[j]);
if (varType_[j]==0) {
profile_x_[j]->Fill(counts_[j], MEx_[j]);
profile_y_[j]->Fill(counts_[j], MEy_[j]);
variable_[j]->Fill(counts_[j]);
}
if (varType_[j]==1) {
profile_x_[j]->Fill(ngoodVertices, MEx_[j]);
profile_y_[j]->Fill(ngoodVertices, MEy_[j]);
variable_[j]->Fill(ngoodVertices);
}
if (varType_[j]==2) {
profile_x_[j]->Fill(sumPt_[j], MEx_[j]);
profile_y_[j]->Fill(sumPt_[j], MEy_[j]);
variable_[j]->Fill(sumPt_[j]);
}
}
}
}
6 changes: 6 additions & 0 deletions MetPhiCorrections/plugins/metPhiCorrInfoWriter.h
@@ -12,11 +12,13 @@
#include "DataFormats/ParticleFlowCandidate/interface/PFCandidateFwd.h"
#include "DataFormats/VertexReco/interface/Vertex.h"
#include "DataFormats/VertexReco/interface/VertexFwd.h"
#include "DataFormats/PatCandidates/interface/Muon.h"

#include <string>
#include <vector>
#include <TProfile.h>
#include <TH2F.h>
#include <TLorentzVector.h>

class metPhiCorrInfoWriter : public edm::EDAnalyzer {
public:
@@ -25,6 +27,8 @@ class metPhiCorrInfoWriter : public edm::EDAnalyzer {
private:
edm::InputTag vertices_;
edm::EDGetTokenT< std::vector<reco::Vertex> > verticesToken_;
edm::InputTag muons_;
edm::EDGetTokenT<std::vector< pat::Muon> > muonsToken_;
edm::InputTag pflow_;
edm::EDGetTokenT< edm::View<reco::Candidate> > pflowToken_;

Expand All @@ -40,6 +44,8 @@ class metPhiCorrInfoWriter : public edm::EDAnalyzer {

static int translateTypeToAbsPdgId( reco::PFCandidate::ParticleType type );

bool passSelection(TLorentzVector firstMuon, TLorentzVector secondMuon);

};

#endif
12 changes: 12 additions & 0 deletions MetPhiCorrections/python/phiCorrBins_102X_cff.py
@@ -0,0 +1,12 @@
import FWCore.ParameterSet.Config as cms

from MetTools.MetPhiCorrections.phiCorrBins_102X_cfi import phiCorrBins_102X as phiCorrBins

metPhiCorrInfoWriter = cms.EDAnalyzer("metPhiCorrInfoWriter",
vertexCollection = cms.untracked.InputTag("offlinePrimaryVertices"),
srcPFlow = cms.untracked.InputTag("particleFlow", ""),
parameters = phiCorrBins
)

metPhiCorrInfoWriterSequence = cms.Sequence( metPhiCorrInfoWriter )
