TopStreamTutorialDayOne

Forward to TopStreamTutorialDayTwo. Skip to TopStreamTutorialDayThree.

Exercise 0: create and test your setup

Let's start with a clean testarea directory and call it TopStreamTutorial. If you are starting from scratch, follow the steps in WorkBookSetAccount for the site where you are working. For SLAC, you can put "12.0.6" in the file $HOME/ATLCURRENT and source the HEPiX files (perhaps in your .profile/.login) and /afs/slac.stanford.edu/g/atlas/etc/atlas_env.[c]sh, as described on the ATLAS Environment at SLAC page.

You should make sure everything worked: type cmt show path at the prompt. The result should look like this:

# Add path /direct/usatlas+u/you/testarea/TopStreamTutorial from initialization
# Add path /afs/usatlas.bnl.gov/software/builds/AtlasOffline/12.0.6 from initialization
...

and typing athena -h should give you the ATHENA command-line help screen.

Exercise 1: Structure of the datasets

There are many streaming test datasets; together they correspond to ten runs -- five hours of data-taking in total. According to the streaming model, each stream becomes a separate dataset, so we have one dataset per stream, per run. Within a dataset is some number of files; each file corresponds to one or more luminosity blocks -- that is, no file contains a partial luminosity block (unless that luminosity block has been marked "bad" in the database; for details on the luminosity database, see tomorrow's tutorial).

The streaming datasets are all in the StreamTest_2007 project, and we will be using the inclusive streaming data.

You can use the AMI browser to find the data in the StreamTest project. The advantage of the AMI browser over other dataset browsers is that dataset metadata is included and updated in AMI.

Try this: Find the AOD datasets in the streaming test project, using the "Advanced Search" linked from the AMI dataset search page. The StreamTest project is part of the soft[ware]-test physics group. Look for AOD datasets made at the 12.0.6.4 "recon" production step.

  • Uncheck "Exclude Trashed" so we can learn about trashed datasets.
Questions:
  1. In the streaming data, the dataset number is not the same as the run number, for obvious reasons. How many datasets per run do you expect?
  2. How many inclusive jet datasets show up in the project? Why?
    • Narrowing your search is easy by backing up and specifying the keyword streamtest%inclJet in the Advanced Search (you can also click the "Refine Query" link, but this requires more patience).
  3. Do you notice anything about the streamtest.004959.inclMuo.recon.v12000604 dataset? If you follow the "details" link for this dataset, you will see that it contains 29 files. Can you research what happened here?
Tomorrow the tutorial covers (among other things) what to do about incomplete datasets.

Exercise 1.1: 'Choose' a stream

We have already chosen a stream for these tutorials, but you should be able to do this yourself. As discussed in the lecture section, a good channel in which to reconstruct top quark pair production is the "lepton + jets + missing energy" channel. We could trigger on any of these three signatures. By answering the following questions, you will choose a trigger and hence a stream for your analysis.

Questions:
  1. Look at the trigger table for the streaming test data. What are the lepton, jet and missing energy triggers?
    • Note: When the streaming test data was "collected," the muon endcap triggers were not functional.
  2. Look at the lepton+jets ttbar event characteristics on this page.
    • What trigger will you use to measure the top cross section? Which stream should you use?

If you did not choose the e25i trigger in the electron stream, you are welcome to use your own choice, but you will have a lot more work to do for the next two days!

Exercise 2: Trigger efficiency

The data at SLAC are located in /afs/slac.stanford.edu/g/atlas/work/a/andr/StreamingTutorial/AODs

Set-up

Now you will measure the efficiency of the L2_e25i trigger with respect to reconstructed electrons. You can use whatever analysis model you like, but we have provided some tools that work nicely with the AnalysisSkeleton class in the PhysicsAnalysis/UserAnalysis directory. If you would like to follow this model, you should check out the correct package:

cd $TestArea
cmt co -r UserAnalysis-00-09-10 PhysicsAnalysis/AnalysisCommon/UserAnalysis

This will not run out-of-the-box, since the example assumes you are using a Monte Carlo dataset. Moreover, you must change the requirements file to use the trigger. Please edit PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt/requirements to include the lines

use TrigParticle                TrigParticle-*                  Trigger/TrigEvent
use TrigDecisionMaker           TrigDecisionMaker-*             Trigger/TrigAnalysis

and edit AnalysisSkeleton to remove all references to truth objects: make sure to change (or avoid calling) electronSkeleton(). In fact, the only thing we need from this class is the boilerplate, so it is fine to leave just the MsgStream mLog( messageService(), name() ); and return StatusCode::SUCCESS; lines in every method.

From other tutorials, you can learn to get the trigger decision stored in the AOD:

 // find summary trigger decision
 const TriggerDecision* trigDec = 0;
 StatusCode sc=m_storeGate->retrieve( trigDec, "MyTriggerDecision");
 if( sc.isFailure()  ||  !trigDec ) {
   mLog << MSG::WARNING
                << "No TriggerDecision found in TDS"
                << endreq;
   return StatusCode::FAILURE;
 }
 // check trigger status before continuing
 if (! (trigDec->isDefined("L2_e25i", 2) && trigDec->isTriggered("L2_e25i")) ) return StatusCode::SUCCESS;

This decision corresponds to the trigger objects (clusters, jets, muons, etc.) in the AOD: it was written when the events were reconstructed in release 12.0.6 (after having been streamed according to a release 12.0.3 trigger decision). This is the decision we will be using for the following study; for L2_e25i, it makes little difference. However, if you want to try a different trigger, remember that none of the AOD decisions in the TriggerDecision object are prescaled. This means that asking isTriggered("L2_e25i") will give a different answer than checking the bit in the EventHeader.

This is how to check the trigger bits in the EventHeader:

 static const unsigned int TRIG_L2_E25I = (1 << 13);
 static const unsigned int TRIG_L2_2E15I = (1 << 14);
 
 if(! (evtInfo->trigger_info()->level2TriggerInfo() & TRIG_L2_E25I) ) {
   return CUT_TRIG;
 }

Measurement Method

As explained in the talk, measuring the efficiency of a trigger in a real dataset entails some subtlety to avoid biasing your sample. To apply the tag-and-probe method, you must be able to tell the "tag" from the "probe," which means matching reconstructed objects to the trigger objects and checking that the trigger object passed the trigger's requirements.
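The counting itself is simple once tags and probes are identified. Below is a minimal, framework-free sketch of the bookkeeping (the ZCandidate struct and its field names are illustrative, not part of any Athena class): the event is selected by the tag leg alone, so the probe leg gives an unbiased measurement.

```cpp
#include <vector>

// Hypothetical bookkeeping for one Z candidate: did each leg match a
// trigger object that passed the trigger's requirements?
struct ZCandidate {
  bool tagMatchedToTrigger;   // tag electron matched to a passing L2 object
  bool probeMatchedToTrigger; // probe electron matched to a passing L2 object
};

// Efficiency = (probes that also fired the trigger) / (all probes whose
// event was selected by a good tag).
double tagAndProbeEfficiency(const std::vector<ZCandidate>& cands) {
  int nProbe = 0, nPass = 0;
  for (const ZCandidate& z : cands) {
    if (!z.tagMatchedToTrigger) continue; // event kept by the tag only
    ++nProbe;
    if (z.probeMatchedToTrigger) ++nPass;
  }
  return nProbe > 0 ? static_cast<double>(nPass) / nProbe : 0.0;
}
```

Because the probe is never required to fire the trigger when selecting the event, dividing these two counts does not bias the measured efficiency.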

Here are some simple code fragments to help you do this.

To select tight electrons, including an isolation cut, you would apply the following cuts to your Electron:

 (  (((*elecItr1)->isEM() & 0x7ff) == 0) 
    && ((*elecItr1)->et() > 15.*GeV)
    && ((*elecItr1)->parameter(ElectronParameters::etcone20) < 5.*GeV) )

Exercise 2.1

Once you have two electrons, you can try to combine them to form a Z. (Hint: consider adding the LorentzVectors hlv() of the electrons.) A good Z mass range could be 70 to 110 GeV/c2.
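As a sanity check of the mass calculation, here is a self-contained sketch; the FourVec struct stands in for the LorentzVector returned by hlv() (in the real analysis you would simply add the two hlv() vectors and take .m()).

```cpp
#include <cmath>

// Minimal four-vector (all quantities in GeV), standing in for hlv().
struct FourVec { double px, py, pz, e; };

// Invariant mass of the two-electron system: m^2 = E^2 - |p|^2.
double invariantMass(const FourVec& a, const FourVec& b) {
  const double e  = a.e  + b.e;
  const double px = a.px + b.px;
  const double py = a.py + b.py;
  const double pz = a.pz + b.pz;
  return std::sqrt(e * e - px * px - py * py - pz * pz);
}

// The Z mass window suggested above.
bool inZWindow(double m) { return m > 70. && m < 110.; } // GeV
```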

 // Retrieving trigger objects without knowing their keys 
 const DataHandle<TrigElectronContainer> teContainerEnd,teContainerIt;
 StatusCode sc = sg->retrieve(teContainerIt,teContainerEnd);
 if (sc.isFailure()) return false;
  
 // This is how to access the trigger objects
 for ( ; teContainerIt != teContainerEnd; ++teContainerIt) {
   for (TrigElectronContainer::const_iterator l2it = teContainerIt->begin();
        l2it != teContainerIt->end();
        ++l2it
      ) {
     const TrigElectron* thisTele = *l2it;
     if (!thisTele) {
       std::cout << "Oops, couldn't follow container iterator." << std::endl;
       return false;
     }
     
     your_code_goes_here();
     
   }
 }
//did this trig electron fulfill e25i?
bool ElectronPassed(const TrigElectron & e) {
  const TrigEMCluster *c = e.cluster();
  //threshold
  bool passed(c->et() > 18*GeV);
  //Et leakage in had
  passed &= ( (c->et() > 90*GeV) || (c->ehad1()/cosh(c->eta()) < 3.8 * GeV) );
  //rCore
  passed &= ( c->e237()/c->e277() >=  .895 );
  //eRatio
  passed &= ( (c->emaxs1()-c->e2tsts1())/(c->emaxs1() + c->e2tsts1()) > .7 );
  //cluster - track matching
  passed &= ( (e.trkClusDeta() < .018) && (e.trkClusDphi() < .06) );
  //ID cuts
  passed &= (e.track()->param()->pT()  > 5*GeV);
  float etoverpt = c->et()/e.track()->param()->pT();
  passed &= ((etoverpt > 0.5) && (etoverpt < 5.5));
  return passed;
}


double deltaR(const L1EMTauObject &o, const Electron &e) {
  const CaloCluster *cluster = e.cluster();
  double phidiff = o.L1EM_phi() - cluster->phi();
  double etadiff = o.L1EM_eta() - cluster->eta();
  // wrap the phi difference into (-pi, pi]
  while (phidiff >  M_PI) phidiff -= 2.*M_PI;
  while (phidiff < -M_PI) phidiff += 2.*M_PI;
  return hypot(phidiff, etadiff);
}


Complete code can be seen at ~andr/reldirs/tutorial.saved/PhysicsAnalysis/AnalysisCommon/UserAnalysis/ (the "tutorial" code is in the file Tutorial.h).

Questions:
  1. Where is the e25i trigger 90% efficient?
  2. What is the plateau efficiency?
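One hypothetical way to answer the first question is to bin the probes in Et and scan for the bin where the efficiency first reaches 90%. The sketch below assumes you have already filled per-bin probe and pass counts from your tag-and-probe selection; the function name and binning scheme are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Given per-Et-bin counts of probes and of probes that passed the trigger,
// return the lower edge of the first bin whose efficiency reaches `target`
// (e.g. 0.90), or -1 if the target is never reached.
double turnOnPoint(const std::vector<int>& nProbe,
                   const std::vector<int>& nPass,
                   double etMin, double binWidth, double target) {
  for (std::size_t i = 0; i < nProbe.size(); ++i) {
    if (nProbe[i] > 0 &&
        static_cast<double>(nPass[i]) / nProbe[i] >= target)
      return etMin + i * binWidth;
  }
  return -1.;
}
```

With fine enough bins (and a fit to the turn-on curve for a real measurement), the same scan at a high Et range gives the plateau efficiency for the second question.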

See also this link on how to normalize e/gamma trigger efficiencies.

Results:

Charlie looked at the level of agreement between trigger bits in the header and the trigger decision object. On the first 1000 events:

  TrigDecision pass && TrigBit pass = 920
  TrigDecision pass && TrigBit fail = 7
  TrigDecision fail && TrigBit pass = 51
  TrigDecision fail && TrigBit fail = 22
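As a quick arithmetic check on this table: the EventHeader bit and the TriggerDecision object agree when both pass or both fail, i.e. in (920 + 22) out of 1000 events, or 94.2%.

```cpp
// Fraction of events where the EventHeader trigger bit and the
// TriggerDecision object agree, computed from the 2x2 table above.
double agreementFraction(int passPass, int passFail,
                         int failPass, int failFail) {
  const int total = passPass + passFail + failPass + failFail;
  return static_cast<double>(passPass + failFail) / total;
}
```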

Forward to TopStreamTutorialDayTwo. Skip to TopStreamTutorialDayThree.