
Tutorial - Part 1: Our First Model Point Test

What CheckMATE does

While CheckMATE is analysing the events, let us take a closer look at what it actually does.

In general, CheckMATE processes each event file with the C++ submodule FRITZ, which applies the following steps:

Delphes

First of all, the simulated proton events are passed through the fast detector simulation "Delphes"[1]. It simulates the limited coverage and the discrete calorimeter of the detector (in our case ATLAS), clusters calorimeter cells into jets and smears the energies and momenta of all reconstructed objects. Another very important quantity determined at the detector simulation level is the total missing transverse momentum. Delphes is a standalone code which typically produces ROOT files that store the reconstructed final state information. To save time and disk space, this ROOT file is not created here; instead, the information is passed on internally within CheckMATE.
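To get a rough feeling for what such a smearing does, here is a toy Python sketch; the resolution parametrisation below is purely illustrative and not the detector-specific one Delphes actually uses:

```python
import random

def smear_jet_pt(pt_true):
    # toy resolution: stochastic + constant term, purely illustrative;
    # Delphes uses proper detector-specific parametrisations instead
    sigma = 0.5 * pt_true ** 0.5 + 0.03 * pt_true
    return max(0.0, random.gauss(pt_true, sigma))

print(smear_jet_pt(100.0))  # a 100 GeV jet, smeared with sigma ~ 8 GeV
```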

AnalysisHandler

The Delphes output, consisting of the reconstructed final state information, is passed to a separate part which applies finite probabilities that a given final state object is identified correctly. As an example, there is only an (on average) 80% chance to reconstruct a "true electron" inside the detector as an electron. This effect has to be taken into account if the electron would be crucial to test the model. Furthermore there are finite probabilities that jets that originated from hadronic tau- or b-decays can be identified as having originated from these. These so-called flavour tagging efficiencies can also be considered in the simulation.
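As a toy illustration of how such efficiencies act event by event (the flat 80% electron number is the example from the text above; the b-tagging and mistag rates are made up for illustration):

```python
import random

def reconstruct_electron(efficiency=0.80):
    # keep a true electron with the quoted average efficiency;
    # real efficiencies depend on pT and eta
    return random.random() < efficiency

def is_b_tagged(jet_flavour, eff_b=0.70, mistag_light=0.01):
    # toy flavour tagging: tag true b-jets with probability eff_b,
    # light jets with a small mistag rate (numbers purely illustrative)
    prob = eff_b if jet_flavour == "b" else mistag_light
    return random.random() < prob

n_reco = sum(reconstruct_electron() for _ in range(10000))
print(n_reco / 10000)  # ~0.80
```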

At the beginning of a run, CheckMATE determines the settings needed for the analysis/analyses the user chose. It then automatically writes a .ini file which tells FRITZ which detector settings have to be taken into account. Delphes produces a ROOT event file which stores the information on all reconstructed final state objects in a well-defined format. Users usually do not need to look at these files, but if desired, they can be inspected; they are stored in /results/fritz.
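If you want to peek into such a file, a minimal sketch with the Python package uproot could look as follows; the file name is hypothetical, while the tree and branch names follow the standard Delphes output conventions:

```python
import uproot  # third-party package: pip install uproot

# file name is hypothetical; "Delphes", "Jet.PT" and "MissingET.MET"
# are the standard Delphes tree/branch names
tree = uproot.open("results/fritz/my_run.root")["Delphes"]
jet_pt = tree["Jet.PT"].array()
met = tree["MissingET.MET"].array()
print(jet_pt[:3], met[:3])
```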

Analysis

As soon as Delphes has determined what the detector would see, one has to quantify what the new physics model predicts. An analysis program loops over the files event by event and checks various conditions that have to be fulfilled for an event to count as a signal candidate. A single analysis usually focusses on a class of final states with certain objects. In our case, for example, we chose atlas_1405_7875 which - as you might have seen in the lines CheckMATE printed after we submitted the parameter file - focusses on events with 2-6 jets, no leptons and missing energy.
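The following minimal Python sketch illustrates the idea of such an event loop; the event format and all thresholds are made up for illustration and are not the actual atlas_1405_7875 cuts:

```python
# toy event format: jet pTs in GeV, lepton count, missing energy in GeV
events = [
    {"jet_pts": [250.0, 90.0, 70.0], "n_leptons": 0, "met": 300.0},
    {"jet_pts": [45.0], "n_leptons": 1, "met": 20.0},
]

def is_signal_candidate(event):
    # all thresholds illustrative, not the actual atlas_1405_7875 cuts
    jets = [pt for pt in event["jet_pts"] if pt > 60.0]
    if not 2 <= len(jets) <= 6:
        return False
    if event["n_leptons"] > 0:
        return False
    return event["met"] > 160.0

print(sum(is_signal_candidate(ev) for ev in events))  # -> 1
```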

Why is this final state signature promising for the model we want to check? How can one distinguish it from the Standard Model's ways of producing jets? You might want to look at the SLHA file again, especially at the decay tables of the SUSY particles we focussed on.

Every analysis is furthermore subdivided into signal regions, each of which places individual kinematic cuts on the event. In many cases (but with many exceptions) the signal regions within a given analysis are orthogonal to each other, i.e. no event is counted more than once. In our analysis atlas_1405_7875, all signal regions require at least two jets, with transverse momenta of at least 130 GeV and 60 GeV respectively, and then differentiate according to the number of additional jets (0 up to 4) with pT of at least 60 GeV.
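A toy sketch of how events could be sorted into such jet-multiplicity signal regions; the preselection values are taken from the text, while the region labels are illustrative and not the official atlas_1405_7875 names:

```python
def signal_region(jet_pts):
    # preselection mimics the structure described in the text
    jets = sorted(jet_pts, reverse=True)
    if len(jets) < 2 or jets[0] < 130.0 or jets[1] < 60.0:
        return None  # event fails the common preselection
    n_extra = sum(pt >= 60.0 for pt in jets[2:])
    return "%d-jet region" % (2 + min(n_extra, 4))

print(signal_region([400.0, 300.0, 80.0]))  # -> '3-jet region'
```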

CheckMATE's main goal is then to determine the predicted number of signal events for each signal region. These numbers are properly normalised by taking into account the cross section of the input (which we provided) and the integrated luminosity of the experimental data sample (which the experiment provides). This is done separately for each input event file.
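The normalisation itself is simple bookkeeping. A sketch with hypothetical numbers:

```python
def predicted_signal(xsect_pb, lumi_ifb, n_pass, n_total):
    # S = sigma * L * selection efficiency; 1 pb x 1 fb^-1 = 1000 events
    return xsect_pb * 1000.0 * lumi_ifb * n_pass / n_total

# hypothetical numbers: 0.05 pb, 20.3 fb^-1, 37 of 10000 events pass
print(predicted_signal(0.05, 20.3, 37, 10000))  # ~3.8 expected events
```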

Which signal regions (i.e. which number of jets) are probably most sensitive to our model? Which of the two production modes (gluino and stop) will roughly populate which signal region(s)?

Evaluation

After the above two steps have been performed for all input events, the total prediction and its error, S ± dS, is determined for each signal region by properly adding the individual numbers. The total error combines the statistical error (determined automatically from the size of the Monte Carlo sample) and the systematic error (given by the user via the XSectErr parameter).
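Assuming the two contributions are added in quadrature, which is the usual procedure, a minimal sketch:

```python
import math

def total_error(s, n_pass, xsect_err_rel):
    # statistical error from the finite MC sample, systematic error
    # from the relative cross section uncertainty, added in quadrature
    ds_stat = s / math.sqrt(n_pass) if n_pass > 0 else 0.0
    ds_sys = s * xsect_err_rel
    return math.sqrt(ds_stat ** 2 + ds_sys ** 2)

print(total_error(s=3.8, n_pass=37, xsect_err_rel=0.10))  # ~0.73
```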

Furthermore, CheckMATE knows the experimental observation O and the Standard Model expectation B ± dB for each signal region, taken from the experimental publication of the respective analysis. These numbers can be translated into a model independent observed 95% confidence limit S95_obs and an expected limit S95_exp (which, in simple words, assumes O = B).
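To get a feeling for what such a limit means, here is a toy Poisson counting version; it ignores the background uncertainty and uses a plain CLs+b criterion, so it is only a caricature of the procedure behind the published numbers (scipy is assumed to be available):

```python
from scipy.stats import poisson  # third-party package

def s95(observed, background, cl=0.95):
    # smallest signal strength that is excluded at the given CL in a
    # plain Poisson counting experiment, ignoring the background error
    s, step = 0.0, 0.01
    while poisson.cdf(observed, background + s) > 1.0 - cl:
        s += step
    return s

print(s95(observed=10, background=6.0))  # toy S95_obs
print(s95(observed=6, background=6.0))   # toy S95_exp (O set to B)
```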

Why does one distinguish between S95obs and S95exp? In which cases are they equal, in which cases can they differ?

In the standard way of running CheckMATE (as we did), the model prediction is directly compared to the upper limit, taking the uncertainty on S conservatively into account:

r_obs := (S - 1.96 dS) / S95_obs,
r_exp := (S - 1.96 dS) / S95_exp.

Each signal region has its own r_obs and r_exp. They are combined as follows (a minimal code sketch follows the list):

  • The r_exp values of all signal regions are compared,
  • the signal region X with the largest value is determined,
  • the respective value for r_obs is checked and
  • if it is larger than 1, the model is excluded. Otherwise, it is allowed.
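A minimal sketch of this combination logic, assuming the per-region r values have already been computed (the region names are made up):

```python
def model_excluded(regions):
    # pick the most sensitive region by r_exp, then test its r_obs
    best = max(regions, key=lambda sr: sr["r_exp"])
    return best["r_obs"] > 1.0, best["name"]

regions = [
    {"name": "SR-2j", "r_exp": 0.8, "r_obs": 1.1},
    {"name": "SR-4j", "r_exp": 1.3, "r_obs": 0.9},
]
print(model_excluded(regions))  # (False, 'SR-4j'): allowed, tested in SR-4j
```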

What is the motivation to use the expected value to determine the most sensitive signal region but to then use the observed value for the actual model test?

By the time you have reached this part, CheckMATE has hopefully finished (if not, please wait until it does).

So, what is the result?