[TOC]
## Quick start

To submit the jobs do:

```bash
# Activate the virtual environment created as shown below
micromamba activate post_ap
# This should give access to a shell with Ganga.
# .ganga.py has to have the path to site-packages added
# to the python path, as shown below
post_shell
# Create a proxy with 100 hours of validity
lhcb-proxy-init -v 100:00
# Submit a job over the data_2024 sample with the configuration in rk/v1.yaml.
# The job name in DIRAC will be filter_001.
job_filter_ganga -n filter_001 -p rx_2024 -s data_2024 -c rk/v1.yaml -b Dirac -v 037
```

## Description

This project is used to:
- Filter, slim, and trim the trees from a given AP production
- Rename branches
This is done using configurations in a YAML file and through Ganga jobs.
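As a rough illustration of that idea, the snippet below loads a small, purely hypothetical YAML fragment with a branch-rename map and applies it to a dictionary standing in for one tree. The keys shown are assumptions for illustration, not the actual post_ap config schema.

```python
# Minimal sketch: the 'rename' key and its layout are hypothetical and only
# illustrate the idea of driving branch renaming from a YAML configuration.
import yaml  # requires PyYAML

CONFIG = '''
rename:
  B_PT  : b_pt    # old branch name -> new branch name
  B_ETA : b_eta
'''

cfg      = yaml.safe_load(CONFIG)
branches = {'B_PT': [2500.0, 1500.0], 'B_ETA': [3.1, 2.8]}  # stand-in for a tree

# Apply the hypothetical rename map
slimmed  = {cfg['rename'][old] : values for old, values in branches.items()}
print(slimmed)
```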
## Installation

You will need to install this project in a virtual environment provided by micromamba. For that, check the micromamba installation instructions. Once micromamba is installed in your system:

- Make sure that the `${HOME}/.local` directory does not exist. If a dependency of `post_ap` is installed there, `ganga` would have to be pointed both to that location and to the location of the virtual environment. This is too complicated and should not be done.
- Create a new environment:

    ```bash
    # python 3.11 is used by DIRAC and it's better to also use it here
    micromamba create -n post_ap python==3.12
    micromamba activate post_ap
    ```

- In the `$HOME/.bashrc` export `POSTAP_PATH`, which will point to the place where your environment is installed, e.g.:

    ```bash
    export POSTAP_PATH=/home/acampove/micromamba/envs/post_ap/bin
    ```

  which is needed to find the executables.
- Install `XROOTD` using:

    ```bash
    micromamba install xrootd
    ```

  which is needed to download the ntuples; it is not a python project and therefore cannot be installed with pip.
- Install this project:

    ```bash
    # can also be installed in editable mode
    pip install post_ap
    ```

- In order to make Ganga aware of the `post_ap` package, in `$HOME/.ganga.py` add:

    ```python
    import sys
    # Or the proper place where the environment is installed in your system
    sys.path.append('/home/acampove/micromamba/envs/post_ap/lib/python3.12/site-packages')
    ```

- This project is used from inside Ganga. To have access to Ganga do:

    ```bash
    # Setup the LHCb environment
    . /cvmfs/lhcb.cern.ch/lib/LbEnv
    # Make a proxy that lasts 100 hours
    lhcb-proxy-init -v 100:00
    ```

- To check that this is working, open ganga and run:

    ```python
    from post_ap.pfn_reader import PFNReader
    ```

## Submission environment

The script in the next section needs a special environment to run. This environment is created as follows:
- Go to a virtual environment where this project is installed, which could be named e.g. `post_ap`
- Make a grid token with:

    ```bash
    lhcb-proxy-init -v 100:00
    ```

- Run:

    ```bash
    post_shell
    ```

  which will leave you in a new shell with the required environment variables
- Run the commands shown below
## Filtering jobs

To submit the filtering jobs, one would run a line like:

```bash
job_filter_ganga -n job_name -p PRODUCTION -s SAMPLE -c rx/v13.yaml -b BACKEND -v VERSION_OF_ENV
# For example
job_filter_ganga -n flt_002 -p rd_ap_2024 -s w37_39_v1r3788 -c rx/v13.yaml -b Dirac -v 037
```

- The number of jobs will be equal to the number of PFNs, up to 500 jobs (a sketch of this rule follows this list).
- The code used to filter resides in the grid; the only thing the user has to do is provide the latest version of the environment.
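The splitting rule in the first bullet is simply one job per PFN, capped at 500. The function below is only an illustration of that rule written in plain Python, not the project's actual splitter.

```python
# Illustration only: NOT the actual post_ap/ganga splitter, just the
# "one job per PFN, at most 500 jobs" rule described above.
def split_pfns(pfns : list[str], max_jobs : int = 500) -> list[list[str]]:
    '''Group PFNs into one chunk per job, with at most max_jobs chunks'''
    njobs = min(len(pfns), max_jobs)

    # Round-robin assignment keeps the chunks as even as possible
    return [ pfns[ijob::njobs] for ijob in range(njobs) ]

# With 300 hypothetical PFNs this gives 300 single-PFN jobs;
# with 1200 PFNs it gives 500 jobs of 2 or 3 PFNs each.
jobs = split_pfns([f'root://eos.example.ch//file_{i}.root' for i in range(1200)])
print(len(jobs))
```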
The options that can be used are:

```
usage: job_filter_ganga [-h] -n NAME -p PROD -s SAMP -c CONF [-b {Interactive,Local,Dirac}] [-t] -v VENV [-d]

Script used to send ntuple filtering jobs to the Grid, through ganga

options:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  Job name
  -p PROD, --prod PROD  Production
  -s SAMP, --samp SAMP  Sample
  -c CONF, --conf CONF  Relative path to config file
  -b {Interactive,Local,Dirac}, --back {Interactive,Local,Dirac}
                        Backend
  -t, --test            Will run one job only if used
  -v VENV, --venv VENV  Version of virtual environment used to run filtering
  -d, --dry_run         If used, will not create and send job, only initialize
```

RX jobs: See this.

LbpKmumu jobs: See this.
## Virtual environments in the grid

The jobs below will run with code from a virtual environment that is already in the grid. One should use the latest version of this environment. To know the latest versions, run:

```bash
# In a separate terminal, open a shell with access to dirac
post_shell
# Run this command to get a list of environments
list_venvs
```

The post_shell terminal won't be used to send jobs.
## Configuration

This is where all the configuration goes; an example of a config can be found here. One of the sections contains the list of MC samples, which can be updated by:

```bash
dump_samples -p rd_ap_2024 -g rd -v v1r2437 -a RK RKst
```

which will dump a YAML file with the samples for the rd_ap_2024 production, in the rd group and version v1r2437, used by the RK and RKst analyses.
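As a rough idea of how such a dump could be consumed, the snippet below reads a file assumed to be called samples.yaml and assumed to map sample names to their metadata (both the name and the layout are guesses, not the actual dump_samples output format) and prints the sample names.

```python
# Illustration only: the file name 'samples.yaml' and its layout (a mapping of
# sample name -> metadata) are assumptions, not the real dump_samples output.
import yaml

with open('samples.yaml') as ifile:
    samples = yaml.safe_load(ifile)

for name in sorted(samples):
    print(name)
```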
## Validation

In order to validate the slimmed ntuples run:

```bash
validate_slimming -P rk -v v1 -p /path/to/directory/with/slimmed/ntuples
```

to validate ntuples produced with the v1.yaml config from the rk project. This should check that (a rough sketch of such checks follows this list):

- Every ntuple that was meant to be slimmed was slimmed.
- Each ntuple that was meant to be slimmed produced the same number of ntuples.
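Below is a minimal sketch of what such checks could look like. It assumes, purely for illustration, that the expected number of files per sample is known and that each slimmed file is named `<sample>_<index>.root` in the output directory; it is not the actual validate_slimming implementation.

```python
# Illustration only, NOT the real validate_slimming code. It assumes a
# hypothetical layout where each slimmed file is named <sample>_<index>.root.
from pathlib import Path

def check_slimming(out_dir : str, expected : dict[str, int]) -> bool:
    '''For each sample, check that the expected number of slimmed files exists'''
    all_ok = True
    for sample, nexpected in expected.items():
        nfound = len(list(Path(out_dir).glob(f'{sample}_*.root')))
        if nfound != nexpected:
            print(f'{sample}: found {nfound} files, expected {nexpected}')
            all_ok = False

    return all_ok

# Hypothetical expectations: two samples, each meant to produce three files
ok = check_slimming('/path/to/directory/with/slimmed/ntuples',
                    {'data_2024' : 3, 'mc_bukmm' : 3})
print('Slimming looks complete' if ok else 'Slimming is incomplete')
```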
## Ganga tips

- In order to improve the ganga experience, add:

    ```bash
    # Minimizes messages when opening ganga
    # Does not start monitoring of jobs by default
    alias ganga='ganga --quiet --no-mon'
    ```

  to the `$HOME/.bashrc` file. Monitoring can be turned on by hand as explained here.
## Making your own environment

You can also:
- Modify this project
- Make a virtual environment and put it in a tarball
- Upload it to the grid and make your jobs use it.
For this, export:
- LXNAME: Your username in LXPLUS, which should also be the one in the grid, used to know where in the grid the environment will go.
- VENVS: Path to the directory where the code will place all the tarballs holding the environments.
- POSTAP_PATH: Path to the micromamba directory in which the environment where you are developing is located, e.g. `/home/acampove/micromamba/envs/run3/bin`. Here the name of the environment is `run3`.
Then do:

```bash
# This leaves you in a shell with the right environment
post_shell
# Create and upload the environment with version 030
update_tarball -v 030
```
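As a rough picture of the "put the environment in a tarball" step only, the sketch below packs the micromamba environment pointed to by POSTAP_PATH into a .tar.gz under VENVS. The output file name is made up, and this is not what update_tarball actually does; in particular it does not upload anything to the grid.

```python
# Illustration only: packs an environment directory into a tarball under $VENVS.
# NOT the update_tarball implementation; the grid upload step is not shown.
import os
import tarfile
from pathlib import Path

env_dir = Path(os.environ['POSTAP_PATH']).parent   # POSTAP_PATH points to .../envs/<name>/bin
out_dir = Path(os.environ['VENVS'])                # directory holding the tarballs
version = '030'

out_path = out_dir / f'{env_dir.name}_{version}.tar.gz'   # hypothetical naming scheme
with tarfile.open(out_path, 'w:gz') as tar:
    tar.add(env_dir, arcname=env_dir.name)

print(f'Wrote {out_path}')
```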
## Other utilities

For brevity, these utilities are documented separately. The utilities are:
- dump_sample: Used to search for existing and missing samples in a production
## Known issues

For brevity, each of these issues will be documented in a separate file.

- Missing ganga jobs: If ganga has problems, the jobs might not appear, even though they did run.