diff --git a/_episodes/01-introduction.md b/_episodes/01-introduction.md index 94e9a1f..de1b81e 100644 --- a/_episodes/01-introduction.md +++ b/_episodes/01-introduction.md @@ -12,6 +12,8 @@ keypoints: - This tutorial is brought to you by the DUNE Computing Consortium. - The goals are to give you the computing basics to work on DUNE. --- + +{% include 01-introduction.toc.md %} ## DUNE Computing Consortium The DUNE Computing Consortium works to establish a global computing network that will handle the massive data streams produced by DUNE by distributing these across the computing grid. diff --git a/_episodes/01.5-documentation.md b/_episodes/01.5-documentation.md index 51fa188..0ca5c6d 100644 --- a/_episodes/01.5-documentation.md +++ b/_episodes/01.5-documentation.md @@ -11,6 +11,8 @@ keypoints: - There is documentation somewhere! --- +{% include 01.5-documentation.toc.md %} + ## Documentation access Much of DUNE's computing documentation is public and hosted in github @@ -77,4 +79,5 @@ Many repositories have wikis or associated dune.github.io pages. [Computing FAQ](https://github.com/orgs/DUNE/projects/19) -Lists of common connection problems and issues with running jobs. \ No newline at end of file +Lists of common connection problems and issues with running jobs. + diff --git a/_episodes/02-storage-spaces.md b/_episodes/02-storage-spaces.md index a2b4b25..32cbce2 100644 --- a/_episodes/02-storage-spaces.md +++ b/_episodes/02-storage-spaces.md @@ -17,6 +17,7 @@ keypoints: - The tool suites ifdh and XRootD allow for accessing data with the appropriate transfer method and in a scalable way. --- +{% include 02-storage-spaces.toc.md %} ## This is an updated version of the 2023 training -**Locally mounted volumes** are physical disks, mounted directly on the computer +### Locally mounted volumes +Local volumes are physical disks, mounted directly on the computer * physically inside the computer node you are remotely accessing * mounted on the machine through the motherboard (not over the network) * used as temporary storage for infrastructure services (e.g. /var, /tmp) @@ -90,17 +98,24 @@ Each has its own advantages and limitations, and knowing which one to use when i -**Network Attached Storage (NAS)** element behaves similar to a locally mounted volume. +### Network Attached Storage (NAS) +NAS elements behave similarly to a locally mounted volume. * functions similarly to services such as Dropbox or OneDrive * fast and stable POSIX access to these volumes * volumes available only on a limited number of computers or servers * not available on grid computing (FermiGrid, Open Science Grid, WLCG, HPC, etc.) +#### At Fermilab * /exp/dune/app/users/.... has periodic snapshots in /exp/dune/app/..../.snap, but /exp/dune/data does NOT * easy to share files with colleagues using /exp/dune/data and /exp/dune/app +* See the [Ceph](https://fifewiki.fnal.gov/wiki/Ceph) documentation for details on those systems. +#### At CERN +At CERN the analog is EOS. +See [EOS](https://cern.service-now.com/service-portal?id=kb_article&n=KB0001998) for information about using EOS. - -## Grid-accessible storage volumes +### Grid-accessible storage volumes + +The following areas are grid accessible via methods such as `xrdcp/xrootd` and `ifdh`. You can read files in dCache across DUNE if you have the appropriate authorization. Writing files may require special permissions. - At Fermilab, an instance of dCache+CTA is used for large-scale, distributed storage with capacity for more than 100 PB of storage and O(10000) connections.
- At CERN, the analog is EOS+CASTOR @@ -111,31 +126,85 @@ DUNE also maintains disk copies of most recent files across many sites worldwide Whenever possible, these storage elements should be accessed over xrootd (see next section) as the mount points on interactive nodes are slow, unstable, and can cause the node to become unusable. Here are the different dCache volumes: -**Persistent dCache**: the data in the file is actively available for reads at any time and will not be removed until manually deleted by user. The persistent dCache contains 3 logical areas: (1) /pnfs/dune/persistent/users in which every user has a quota up to 5TB total (2) /pnfs/dune/persistent/physicsgroups. This is dedicated for DUNE Physics groups and managed by the respective physics conveners of those physics groups. +#### Persistent dCache + `/pnfs/dune/persistent/` is "persistent" storage. If a file is in persistent dCache, the data in the file is actively available for reads at any time and will not be removed until manually deleted by the user. The persistent dCache contains 3 logical areas: (1) /pnfs/dune/persistent/users, in which every user has a quota of up to 5TB total; (2) /pnfs/dune/persistent/physicsgroups, which is dedicated to the DUNE physics groups and managed by the respective physics conveners of those groups. + https://wiki.dunescience.org/wiki/DUNE_Computing/Using_the_Physics_Groups_Persistent_Space_at_Fermilab gives more details on how to get access to these groups. In general, if you need to store more than 5TB in persistent dCache you should be working with the Physics Groups areas. (3) the "staging" area /pnfs/dune/persistent/staging, which is not accessible by regular users but is by far the largest of the three. It is used for official datasets. + + -**Scratch dCache**: large volume shared across all experiments. When a new file is written to scratch space, old files are removed in order to make room for the newer file. Removal is based on Least Recently Utilized (LRU) policy, and performed by an automated daemon. -**Tape-backed dCache**: disk based storage areas that have their contents mirrored to permanent storage on CTA tape. +#### Scratch dCache +`/pnfs/dune/scratch` is a large volume shared across all experiments. When a new file is written to scratch space, old files are removed in order to make room for the newer file. Removal is based on a Least Recently Used (LRU) policy and performed by an automated daemon. + + +#### Tape-backed dCache +Tape-backed dCache areas are disk-based storage areas whose contents are mirrored to permanent storage on CTA tape. Files are not available for immediate read on disk, but need to be 'staged' from tape first ([see video of a tape storage robot](https://www.youtube.com/watch?v=kiNWOhl00Ao)). -**Rucio Storage Elements**: Rucio Storage Elements (or RSEs) are storage elements provided by collaborating institution for official DUNE datasets. Data stored in DUNE RSE's must be fully cataloged in the [metacat][metacat] catalog and is managed by the DUNE data management team. This is where you find the official data samples. +#### Rucio Storage Elements + Rucio Storage Elements (or RSEs) are storage elements provided by collaborating institutions for official DUNE datasets. Data stored in DUNE RSEs must be fully cataloged in the [metacat][metacat] catalog and is managed by the DUNE data management team. This is where you find the official data samples.
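+
+For any of these dCache areas, streaming over xrootd is the preferred access method. A minimal, illustrative sketch (the path below is a placeholder; substitute a real file, and note that `/pnfs/...` maps to `/pnfs/fnal.gov/usr/...` behind the xrootd door):
+
+~~~
+# placeholder path: streaming over xrootd avoids the slow /pnfs NFS mounts
+xrdcp root://fndcadoor.fnal.gov:1094//pnfs/fnal.gov/usr/dune/scratch/users/${USER}/my_first_login.txt /dev/null
+~~~
+{: .language-bash}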
+ + See the [data management]({{ site.baseurl }}/03-data-management) lesson for much more information about using the `rucio` system to find official data. -**CVMFS**: CERN Virtual Machine File System is a centrally managed storage area that is distributed over the network, and utilized to distribute common software and a limited set of reference files. CVMFS is mounted over the network, and can be utilized on grid nodes, interactive nodes, and personal desktops/laptops. It is read only, and the most common source for centrally maintained versions of experiment software libraries/executables. CVMFS is mounted at `/cvmfs/` and access is POSIX-like, but read only. +### CVMFS +CVMFS, the CERN Virtual Machine File System, is a centrally managed storage area that is distributed over the network and utilized to distribute common software and a limited set of reference files. CVMFS is mounted over the network, and can be utilized on grid nodes, interactive nodes, and personal desktops/laptops. It is read only, and the most common source for centrally maintained versions of experiment software libraries/executables. CVMFS is mounted at `/cvmfs/` and access is POSIX-like, but read only. See [CVMFS]({{ site.baseurl }}/02.3-cvmfs) for more information. +### What is my quota? + +We use multiple systems, so there are multiple ways to check your disk quota. + +#### Your home area at FNAL + +~~~ +quota -u -m -s +~~~ +{: .language-bash} + +#### Your home area at CERN +~~~ +fs listquota +~~~ +{: .language-bash} + +#### The /app/ and /data/ areas at FNAL + +These use the Ceph file system, which has directory quotas instead of user quotas. +See the quota section of: +[https://fifewiki.fnal.gov/wiki/Ceph#Quotas](https://fifewiki.fnal.gov/wiki/Ceph#Quotas) + +The most useful commands for general users are +~~~ +getfattr -n ceph.quota.max_bytes /exp/dune/app/users/$USER +getfattr -n ceph.quota.max_bytes /exp/dune/data/users/$USER +~~~ +{: .language-bash} + +#### EOS at CERN + +~~~ +export EOS_MGM_URL=root://eosuser.cern.ch +eos quota +~~~ +{: .language-bash} + +#### Fermilab dCache + +Go to [https://fndca.fnal.gov/cgi-bin/quota.py](https://fndca.fnal.gov/cgi-bin/quota.py) - you need to be on the Fermilab VPN - otherwise the page will just hang and never load. + > ## Note - When reading from dcache always use the root: syntax, not direct /pnfs -> The Fermilab dcache areas have NFS mounts. These are for your convenience, they allow you to look at the directory structure and, for example, remove files. However, NFS access is slow, inconsistent, and can hang the machine if I/O heavy processes use it. Always use the `xroot root://` ... when reading/accessing files instead of `/pnfs/` directly. Once you have your [dune environment set up](software_setup) the `pnfs2xrootd` command can do the conversion to `root:` format for you (only for files at FNAL for now). +> The Fermilab dcache areas have NFS mounts. These are for your convenience: they allow you to look at the directory structure and, for example, remove files. However, NFS access is slow, inconsistent, and can hang the machine if I/O-heavy processes use it. Always use the `xroot root://` ... syntax when reading/accessing files instead of `/pnfs/` directly. Once you have your dune environment set up, the `pnfs2xrootd` command can do the conversion to `root:` format for you (only for files at FNAL for now).
{: .callout} ## Summary on storage spaces @@ -187,7 +256,7 @@ This section will teach you the main tools and commands to display storage infor Another useful data handling command you will soon come across is ifdh. This stands for Intensity Frontier Data Handling. It is a tool suite that facilitates selecting the appropriate data transfer method from many possibilities while protecting shared resources from overload. You may see *ifdhc*, where *c* refers to *client*. > ## Note -> ifdh is much more efficient than NFS file access. Please use it and/or xrdcp when accessing remote files. +> `ifdh` is much more efficient than NFS file access. Please use it and/or `xrdcp/xrootd` when accessing remote files. {: .challenge} Here is an example to copy a file. Refer to the [Mission Setup]({{ site.baseurl }}/setup.html) for the setting up the `DUNELAR_VERSION`. @@ -196,42 +265,41 @@ Here is an example to copy a file. Refer to the [Mission Setup]({{ site.baseurl > For now do this in the Apptainer {: .challenge} -~~~ -/cvmfs/oasis.opensciencegrid.org/mis/apptainer/current/bin/apptainer shell --shell=/bin/bash \ --B /cvmfs,/exp,/nashome,/pnfs/dune,/opt,/run/user,/etc/hostname,/etc/hosts,/etc/krb5.conf --ipc --pid \ -/cvmfs/singularity.opensciencegrid.org/fermilab/fnal-dev-sl7:latest -~~~ -{: .language-bash} +Do the standard [sl7 setup]({{ site.baseurl }}/sl7_setup) + -once in the Apptainer +once you are set up ~~~ -source /cvmfs/dune.opensciencegrid.org/products/dune/setup_dune.sh -setup ifdhc -export IFDH_TOKEN_ENABLE=1 +export IFDH_TOKEN_ENABLE=1 # only need to do this once ifdh cp root://fndcadoor.fnal.gov:1094/pnfs/fnal.gov/usr/dune/tape_backed/dunepro/physics/full-reconstructed/2023/mc/out1/MC_Winter2023_RITM1592444_reReco/54/05/35/65/NNBarAtm_hA_BR_dune10kt_1x2x6_54053565_607_20220331T192335Z_gen_g4_detsim_reco_65751406_0_20230125T150414Z_reReco.root /dev/null ~~~ {: .language-bash} +This should go quickly as you are not actually writing the file. + Note, if the destination for an ifdh cp command is a directory instead of filename with full path, you have to add the "-D" option to the command line. -Prior to attempting the first exercise, please take a look at the full list of IFDH commands, to be able to complete the exercise. In particular, mkdir, cp, rmdir, +Prior to attempting the first exercise, please take a look at the full list of IFDH commands, to be able to complete the exercise. In particular, cp, rmdir, **Resource:** [ifdh commands](https://cdcvs.fnal.gov/redmine/projects/ifdhc/wiki/Ifdh_commands) + > ## Exercise 1 -> Using the ifdh command, complete the following tasks: -> * create a directory in your dCache scratch area (/pnfs/dune/scratch/users/${USER}/) called "DUNE_tutorial_2025" +> use normal `mkdir` to create a directory in your dCache scratch area (/pnfs/dune/scratch/users/${USER}/) called "DUNE_tutorial_2025" +> Using the `ifdh command, complete the following tasks: > * copy /exp/dune/app/users/${USER}/my_first_login.txt file to that directory > * copy the my_first_login.txt file from your dCache scratch directory (i.e. DUNE_tutorial_2024) to /dev/null > * remove the directory DUNE_tutorial_2025 > * create the directory DUNE_tutorial_2025_data_file > Note, if the destination for an ifdh cp command is a directory instead of filename with full path, you have to add the "-D" option to the command line. Also, for a directory to be deleted, it must be empty. > +> Note `ifdh` no longer has a `mkdir` command as it auto-creates directories. 
In this example, we use the NFS command `mkdir` directly for clarity. +> > > ## Answer > > ~~~ -> > ifdh mkdir /pnfs/dune/scratch/users/${USER}/DUNE_tutorial_2025 +> > mkdir /pnfs/dune/scratch/users/${USER}/DUNE_tutorial_2025 > > ifdh cp -D /exp/dune/app/users/${USER}/my_first_login.txt /pnfs/dune/scratch/users/${USER}/DUNE_tutorial_2025 > > ifdh cp /pnfs/dune/scratch/users/${USER}/DUNE_tutorial_2025/my_first_login.txt /dev/null > > ifdh rm /pnfs/dune/scratch/users/${USER}/DUNE_tutorial_2025/my_first_login.txt @@ -296,6 +364,10 @@ root://fndca1.fnal.gov:1094//pnfs/fnal.gov/usr/dune/tape_backed/dunepro/protodun ~~~ {: .output} +> ## Note - if you don't have pfns2xrootd on your system +> Copy [this]({{ site.baseurl}}/pnfs2xrootd) to your local area, make it executable and use it instead. +{: .callout} + you can then ~~~ @@ -314,6 +386,9 @@ export DUNELAR_QUALIFIER=e26:prof export UPS_OVERRIDE="-H Linux64bit+3.10-2.17" source /cvmfs/dune.opensciencegrid.org/products/dune/setup_dune.sh setup dunesw $DUNELAR_VERSION -q $DUNELAR_QUALIFIER +setup justin # use justin to get appropriate tokens +justin time # this will ask you to authenticate via web browser +justin get-token # this actually gets you a token ~~~ {: .language-bash} diff --git a/_episodes/02.3-cvmfs.md b/_episodes/02.3-cvmfs.md index ecf8ec3..e7a6749 100644 --- a/_episodes/02.3-cvmfs.md +++ b/_episodes/02.3-cvmfs.md @@ -10,6 +10,7 @@ keypoints: - CVMFS distributes software and related files without installing them on the target computer (using a VM, Virtual Machine). --- +{% include 02.3-cvmfs.toc.md %} ## CVMFS **What is CVMFS and why do we need it?** diff --git a/_episodes/03-data-management.md b/_episodes/03-data-management.md index 7eafab6..be0533d 100644 --- a/_episodes/03-data-management.md +++ b/_episodes/03-data-management.md @@ -13,7 +13,9 @@ keypoints: - Xrootd allows user to stream data files. --- -#### Session Video +{% include 03-data-management.toc.md %} + +## Session Video @@ -77,7 +79,11 @@ If you want to process data using the full power of DUNE computing, you should t ## How to find and access official data -### What is metacat? +{% include OfficialDatasets_include.md %} + +You can also query the catalogs yourself using [metacat][metacat] and [rucio][rucio] catalogs. Metacat contains information about file content and official datasets, rucio stores the physical location of those files. Files should have entries in both catalogs. Generally you ask metacat first to find the files you want and then ask rucio for their location. + +## What is metacat? Metacat is a file and dataset catalog - it allows you to search for files and datasets that have particular attributes and understand their provenance, including details on all of their processing steps. It also allows for querying jointly the file catalog and the DUNE conditions database. @@ -94,9 +100,9 @@ DUNE runs multiple experiments (far detectors, protodune-sp, protodune-dp hd-pro To find your data you need to specify at the minimum -- `core.run_type` (the experiment) +- `core.run_type` (the experiment: fardet-vd, hd-protodune ...) - `core.file_type` (mc or detector) -- `core.data_tier` (the level of processing raw, full-reconstructed, root-tuple) +- `core.data_tier` (the level of processing raw, full-reconstructed, root-tuple ...) and when searching for specific types of data @@ -145,7 +151,8 @@ First get metacat if you have not already done so token authentication. 
{: .callout} --> -### then do queries to find particular sets of files +### then do queries to find particular groups of files + ~~~ metacat query "files from dune:all where core.file_type=detector and core.run_type=hd-protodune and core.data_tier=raw and core.runs[any]=27331 limit 1" @@ -240,10 +247,9 @@ Total size: 17553648200600 (17.554 TB) {: .output} - -## Official datasets + + +### find out how much data there is in a dataset + +Do a query using the `-s` or `--summary` option + +~~~ +metacat query -s "files from fardet-vd:fardet-vd__full-reconstructed__v09_81_00d02__reco2_dunevd10kt_anu_1x8x6_3view_30deg_geov3__prodgenie_anu_numu2nue_nue2nutau_dunevd10kt_1x8x6_3view_30deg__out1__v2_official" +~~~ +{: .language-bash} + +~~~ +Files: 20648 +Total size: 34550167782531 (34.550 TB) +~~~ +{: .output} +this may take a while as that is a big dataset. + ### What describes a dataset? -Let's look at the metadata describing that anti-neutrino dataset: the -j means json output +Let's look at the metadata describing an anti-neutrino dataset: the -j means json output ~~~ metacat dataset show -j fardet-vd:fardet-vd__full-reconstructed__v09_81_00d02__reco2_dunevd10kt_anu_1x8x6_3view_30deg_geov3__prodgenie_anu_numu2nue_nue2nutau_dunevd10kt_1x8x6_3view_30deg__out1__v2_official @@ -386,7 +410,7 @@ You can use any of those keys to refine dataset searches as we did above. You pr ### What files are in that dataset and how do I use them? -You can either click on a dataset in the web data catalog or: +You can either locate and click on a dataset in the [web data catalog](https://dune-tech.rice.edu/dunecatalog/) or use the[metacat web interface](https://metacat.fnal.gov:9443/dune_meta_prod/app/gui) or use the command line: ~~~ metacat query "files from fardet-vd:fardet-vd__full-reconstructed__v09_81_00d02__reco2_dunevd10kt_anu_1x8x6_3view_30deg_geov3__prodgenie_anu_numu2nue_nue2nutau_dunevd10kt_1x8x6_3view_30deg__out1__v2_official limit 10" @@ -398,7 +422,7 @@ will list the first 10 files in that dataset (you probably don't want to list al You can also use a similar query in your batch job to get the files you want. -### Finding those files on disk +## Finding those files on disk To find your files, you need to use [Rucio](#Rucio) directly or give the [justIN](https://dunejustin.fnal.gov/docs/tutorials.dune.md) batch system your query and it will locate them for you. @@ -417,7 +441,8 @@ export SAM_EXPERIMENT=dune --> ## Getting file locations using Rucio -### What is Rucio? +### What is Rucio? + Rucio is the next-generation Data Replica service and is part of DUNE's new Distributed Data Management (DDM) system that is currently in deployment. Rucio has two functions: 1. A rule-based system to get files to Rucio Storage Elements around the world and keep them there. @@ -427,7 +452,7 @@ As of the date of the 2025 tutorial: - The Rucio client is available in CVMFS and Spack - Most DUNE users are now enabled to use it. New users may not automatically be added. -### You will need to authenticate to use read files +### You will need to authenticate to read files > #### For SL7 use justin to get a token {:.callout} @@ -498,7 +523,7 @@ which the locations of the file on disk and tape. 
We can use this to copy the f > Try to access the file at manchester using the command: > ~~~ > root -l root://meitner.tier2.hep.manchester.ac.uk:1094//cephfs/experiments/dune/RSE/fardet-vd/fd/a6/prodmarley_nue_es_flat_radiological_decay0_dunevd10kt_1x8x14_3view_30deg_20250217T033222Z_gen_004122_supernova_g4stage1_g4stage2_detsim_reco.root -> _file0->ls +> _file0->ls() > ~~~ > {: .language-bash} {: .challenge} diff --git a/_episodes/03.2-UPS.md b/_episodes/03.2-UPS.md index 9605449..c6d7f1a 100644 --- a/_episodes/03.2-UPS.md +++ b/_episodes/03.2-UPS.md @@ -17,7 +17,7 @@ keypoints: > You need to be in the Apptainer to use it. > UPS is being replaced by a new [spack][Spack Documentation] system for Alma9. We will be adding a Spack tutorial soon but for now, you need to use SL7/UPS to use the full DUNE code stack. > -> Go back and look at the [SL7/Apptainer]({{ site.baseurl }}/setup.html#SL7_setup) instructions to get an SL7 container for this section. +> Go back and look at the [SL7/Apptainer]({{ site.baseurl }}/sl7_setup) instructions to get an SL7 container for this section. {: .challenge} An important requirement for making valid physics results is computational reproducibility. You need to be able to repeat the same calculations on the data and MC and get the same answers every time. You may be asked to produce a slightly different version of a plot for example, and the data that goes into it has to be the same every time you run the program. diff --git a/_episodes/04-Spack.md b/_episodes/04-Spack.md index baffa25..75efa5e 100644 --- a/_episodes/04-Spack.md +++ b/_episodes/04-Spack.md @@ -10,6 +10,8 @@ keypoints: - Spack is a tool to deliver well defined software configurations - CVMFS distributes software and related files without installing them on the target computer (using a VM, Virtual Machine). --- + +{% include 04-Spack.toc.md %} ## What is Spack and why do we need it? > ## Note diff --git a/_episodes/05.1-improve-code-efficiency.md b/_episodes/05.1-improve-code-efficiency.md index 33a141f..6822096 100644 --- a/_episodes/05.1-improve-code-efficiency.md +++ b/_episodes/05.1-improve-code-efficiency.md @@ -10,11 +10,13 @@ keypoints: - CPU, memory, and build time optimizations are possible when good code practices are followed. --- -#### Session Video +## Improve your Code efficiency + +### Session Video The session will be captured on video a placed here after the workshop for asynchronous study. -#### Live Notes +### Live Notes diff --git a/_extras/Common-Error-Messages.md b/_extras/Common-Error-Messages.md index ef725e3..d6e8ddd 100644 --- a/_extras/Common-Error-Messages.md +++ b/_extras/Common-Error-Messages.md @@ -5,37 +5,40 @@ keypoints: - Errors that people report in doing the tutorial --- -- #### `/usr/bin/xauth: unable to write authority file` - #### `disk quota exceeded error with metacat auth login` +## Common Error Messages - These likely means your kerberos ticket was not forwarded and you can't access your home are without it. do a kinit in your terminal session. Or possibly you really have filled your home area. -- #### `bash: setup: command not found` +{% include Common-Error-Messages.toc.md %} - setup is a UPS command. You need to be running in the Apptainer and setup the DUNE ups system - check out the instructions in [SL7 setup] - ({{ site.baseurl }}/sl7_setup) +### Error: /usr/bin/xauth: unable to write authority file +These likely means your kerberos ticket was not forwarded and you can't access your home are without it. do a kinit in your terminal session. 
Or possibly you really have filled your home area. -- #### `SyntaxError: future feature annotations is not defined` +### bash: setup: command not found - This looks like a bad python version, try doing `which python` if it isn't > 3.9 you don't have a modern python version. +setup is a UPS command. You need to be running in the Apptainer and setup the DUNE ups system - check out the instructions in [SL7 setup]({{ site.baseurl }}/sl7_setup) - - On SL7 we suggest setting up the dunesw as shown in the example setup. alternatively you can - ~~~ - setup root -v v6_28_12 -q e26:p3915:prof - ~~~ - {: .language-bash} +### SyntaxError: future feature annotations is not defined - - On AL9 we suggest loading ROOT which brings in a modern version of python and allows xrootd access to data. +This looks like a bad python version, try doing `which python` if it isn't > 3.9 you don't have a modern python version. - ~~~ - spack load root@6.28.12 - ~~~ - {: .language-bash} +- On SL7 we suggest setting up the dunesw as shown in the example setup. alternatively you can +~~~ +setup root -v v6_28_12 -q e26:p3915:prof +~~~ +{: .language-bash} -- #### Spack ==> `Error: somecode matches multiple packages +- On AL9 we suggest loading ROOT which brings in a modern version of python and allows xrootd access to data. + +~~~ +spack load root@6.28.12 +~~~ +{: .language-bash} + + +### Spack : Error: somecode matches multiple packages ~~~ Matching packages: jhpj2js somecode@6.28.06%gcc@12.2.0 arch=linux-almalinux9-x86_64_v2 diff --git a/_extras/ComputerSetup.md b/_extras/ComputerSetup.md index 1c074b7..ba93d73 100644 --- a/_extras/ComputerSetup.md +++ b/_extras/ComputerSetup.md @@ -12,13 +12,17 @@ keypoints: - It is also something almost all people who get paid to program are expected to know well --- -## 0. Back up your machine +## Computer setup + +{% include ComputerSetup.toc.md %} + +### Back up your machine We are going to be messing with your operating system at some level so it is extremely wise to do a complete backup of your machine to an external drive right now. Also turn off automatic updates. Operating system updates can mess with your setup. Generally, back up before doing updates so you can revert if necessary. -## 1. Open a unix terminal window +### Open a unix terminal window First figure out how to open a terminal on your system. The Carpentries Shell Training has a [section that explains this][New Shell] @@ -35,7 +39,7 @@ On Windows it's a bit more complicated as the underlying operating system is not -## 2. Learn how to use the Unix Shell +### Learn how to use the Unix Shell @@ -47,7 +51,7 @@ It tells you how to start a terminal session in Windows, Mac OSX and Unix system Please do that [unix shell tutorial][Unix Shell Basics] to learn about the basic command line. -## 3. Install an x-windows emulator +### Install an x-windows emulator #### MacOS @@ -88,7 +92,7 @@ See the information about [Windows]({{ site.baseurl }}/Windows.html) terminal co > You should now be ready to go for the ({{ site.baseurl }}/setup) {: .callout} -## Extra - Get a compiler/code editor +### Extra - Get a code editor Although you will mainly be using python to code to begin with, most HEP code is actually C++ and it is good to have access to a C++ compiler. Bonus is that you normally get a good editor as well. @@ -108,7 +112,7 @@ You can also use vim or emacs if you are old school. 
Likely you should load up the full [Visual Studio][Visual Studio] as it has a nice C++ compiler. -### Useful Links +## Useful Links [HSF Training Center][HSF Training Center] diff --git a/_extras/InstallConda.md b/_extras/InstallConda.md index 5ca6f8a..b6c8b5d 100644 --- a/_extras/InstallConda.md +++ b/_extras/InstallConda.md @@ -13,6 +13,8 @@ keypoints: ## Installing conda and root +{% include InstallConda.toc.md %} + This is derived from the excellent [https://iscinumpy.gitlab.io/post/root-conda/](https://iscinumpy.gitlab.io/post/root-conda/) by Henry Schreiner Currently this has been tested on OSX and Linux distributions SL7 and AL9 diff --git a/_extras/OfficialDatasets.md b/_extras/OfficialDatasets.md new file mode 100644 index 0000000..558bc54 --- /dev/null +++ b/_extras/OfficialDatasets.md @@ -0,0 +1,6 @@ +--- +title: Official Datasets +permalink: OfficialDatasets +--- + +{% include OfficialDatasets_include.md %} \ No newline at end of file diff --git a/_extras/Tokens.md b/_extras/Tokens.md index d5bb0ae..3eff642 100644 --- a/_extras/Tokens.md +++ b/_extras/Tokens.md @@ -2,10 +2,14 @@ title: Tokens --- +{% include Tokens.toc.md %} + ## SL7 Tokens {% include sl7_token.md %} ## AL9 Tokens -{% include al9_token.md %} \ No newline at end of file +{% include al9_token.md %} + +Check your token with `httokendecode` \ No newline at end of file diff --git a/_extras/al9_setup.md b/_extras/al9_setup.md index 87c3f1d..204b2b1 100644 --- a/_extras/al9_setup.md +++ b/_extras/al9_setup.md @@ -6,6 +6,12 @@ keypoints: - getting authentication set up --- +## How to set up a basic session in al9 + +{% include al9_setup.toc.md %} + +### a tip + You can store the code below as `myal9.sh` and run it every time you log in. diff --git a/_extras/al9_speedrun.md b/_extras/al9_speedrun.md index 865e364..12310cb 100644 --- a/_extras/al9_speedrun.md +++ b/_extras/al9_speedrun.md @@ -5,6 +5,11 @@ keypoints: - all in one place --- +## 2025 Speedrun of AL9 setup and test + +{% include al9_speedrun.toc.md %} + + {% include al9_setup_2025a.md %} {% include al9_token.md %} diff --git a/_extras/howToBuild.md b/_extras/howToBuild.md new file mode 100644 index 0000000..aff194f --- /dev/null +++ b/_extras/howToBuild.md @@ -0,0 +1,29 @@ +--- +title: Notes on building these pages on MACOS +permalink: How to build these pages +keypoints: +- can build a (flimsy) table of contents +- need a recent version of ruby +--- + +{% include howToBuild.toc.md %} + +## Making a table of contents + +The script `addtoc.sh` calls `code/tocgen.py` to make a table of contents for each episode based on the section headings. That table is then added to the episode using an include of the corresponding `_includes/<episode>.toc.md`. + +The TOC script does not do a good job with named links yet and generally requires that section headings be reasonably simple. + + +### Things not to do: +- start a section heading with a number - instead use things like "Step 2." +- make very long headings +- put special characters like `()` in your heading + +It also currently interprets headings within `<!-- -->` comment syntax as real. If you do make a comment, also indent it.
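+
+For reference, the generated `*.toc.md` files use the standard kramdown anchor style: lowercase the heading text and replace spaces with hyphens. Below is a rough bash sketch of that transformation (an illustration only, not the actual `tocgen.py`; it ignores heading depth and special characters, which is part of why simple headings work best):
+
+~~~
+# illustration: turn the headings of one episode into flat TOC entries
+grep -E '^#{1,6} ' _episodes/02-storage-spaces.md | while read -r line; do
+  title=$(echo "$line" | sed -E 's/^#+ +//')
+  anchor=$(echo "$title" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
+  echo "- [$title](#$anchor)"
+done
+~~~
+{: .language-bash}
+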
+ +## Ruby +on macs you need to use homebrew to install ruby +then you need to override the system version + +`export PATH=/opt/homebrew/opt/ruby/bin:$PATH` diff --git a/_extras/pnfs2xrootd.md b/_extras/pnfs2xrootd.md index 436011c..61225de 100644 --- a/_extras/pnfs2xrootd.md +++ b/_extras/pnfs2xrootd.md @@ -12,7 +12,7 @@ permalink: pnfs2xrootd while true do -echo -n `readlink -f $1` | sed -e 's%/pnfs%root://fndca1.fnal.gov:1094//pnfs/fna +echo -n `readlink -f $1` | sed -e 's%/pnfs%root://fndcadoor.fnal.gov:1094//pnfs/fna l.gov/usr%' shift if [ x$1 == x ]; then break; fi diff --git a/_extras/setup_ruby.md b/_extras/setup_ruby.md deleted file mode 100644 index 9c46cd2..0000000 --- a/_extras/setup_ruby.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Notes on building these pages on MACOS -permalink: Building these pages -keypoints: -- need a recent version of ruby ---- -on macs you need to use homebrew to install ruby -then you need to override the system version - -`export PATH=/opt/homebrew/opt/ruby/bin:$PATH` diff --git a/_extras/sl7_setup.md b/_extras/sl7_setup.md index 5488b83..07ca2a0 100644 --- a/_extras/sl7_setup.md +++ b/_extras/sl7_setup.md @@ -30,10 +30,16 @@ Start the Apptainer {: .challenge} -## then do the following +## then do the following to set up DUNE code you can store this as `mysl7.sh` and run it every time you log in. {% include sl7_setup_2025.md %} + +## then do the following to get authentication to remote data and batch systems + +If you want to do data access or submit batch jobs, you also need to do some authentication + +{% include sl7_token.md %} diff --git a/_includes/01-introduction.md.toc.md b/_includes/01-introduction.md.toc.md new file mode 100644 index 0000000..14f5f96 --- /dev/null +++ b/_includes/01-introduction.md.toc.md @@ -0,0 +1,7 @@ +## Table of Contents +- [DUNE Computing Consortium](#dune computing consortium) +- [Schedule](#schedule) + - [Workshop Introduction Video from December 2024](#workshop introduction video from december 2024) +- [Basic setup reminder](#basic setup reminder) +- [Instructional Crew](#instructional crew) +- [Support](#support) \ No newline at end of file diff --git a/_includes/01-introduction.toc.md b/_includes/01-introduction.toc.md new file mode 100644 index 0000000..e455fa6 --- /dev/null +++ b/_includes/01-introduction.toc.md @@ -0,0 +1,9 @@ + + +**Table of Contents for 01-introduction** +- [DUNE Computing Consortium](#dune-computing-consortium) +- [Schedule](#schedule) + - [Workshop Introduction Video from December 2024](#workshop-introduction-video-from-december-2024) +- [Basic setup reminder](#basic-setup-reminder) +- [Instructional Crew](#instructional-crew) +- [Support](#support) \ No newline at end of file diff --git a/_includes/01.5-documentation.toc.md b/_includes/01.5-documentation.toc.md new file mode 100644 index 0000000..d943263 --- /dev/null +++ b/_includes/01.5-documentation.toc.md @@ -0,0 +1,11 @@ + + +**Table of Contents for 01.5-documentation** +- [Documentation access](#documentation-access) +- [DUNE tools list](#dune-tools-list) +- [Docdb (Requires FNAL SSO)](#docdb-(requires-fnal-sso)) +- [DUNE wiki (Requires FNAL SSO)](#dune-wiki-(requires-fnal-sso)) + - [Tutorials list on the wiki](#tutorials-list-on-the-wiki) +- [CERN EDMS](#cern-edms) +- [Github repositories](#github-repositories) +- [DUNE Computing FAQ](#dune-computing-faq) \ No newline at end of file diff --git a/_includes/02-storage-spaces.toc.md b/_includes/02-storage-spaces.toc.md new file mode 100644 index 0000000..201e99e --- /dev/null +++ 
b/_includes/02-storage-spaces.toc.md @@ -0,0 +1,37 @@ + + +**Table of Contents for 02-storage-spaces** +- [This is an updated version of the 2023 training](#this-is-an-updated-version-of-the-2023-training) + - [Live Notes](#live-notes) + - [Workshop Storage Spaces Video from December 2024](#workshop-storage-spaces-video-from-december-2024) +- [Introduction](#introduction) +- [Vocabulary](#vocabulary) +- [Interactive storage volumes (mounted on dunegpvmXX.fnal.gov or lxplus.cern.ch)](#interactive-storage-volumes-(mounted-on-dunegpvmxx.fnal.gov-or-lxplus.cern.ch)) + - [Your home area](#your-home-area) + - [at Fermilab](#at-fermilab) + - [at CERN](#at-cern) + - [Locally mounted volumes](#locally-mounted-volumes) + - [Network Attached Storage (NAS)](#network-attached-storage-(nas)) + - [At Fermilab](#at-fermilab) + - [At CERN](#at-cern) + - [Grid-accessible storage volumes](#grid-accessible-storage-volumes) + - [Persistent dCache](#persistent-dcache) + - [Scratch dCache](#scratch-dcache) + - [Tape-backed dCache](#tape-backed-dcache) + - [Rucio Storage Elements](#rucio-storage-elements) + - [CVMFS](#cvmfs) + - [What is my quota?](#what-is-my-quota) + - [Your home area at FNAL](#your-home-area-at-fnal) + - [Your home area at CERN](#your-home-area-at-cern) + - [The /app/ and /data/ areas at FNAL](#the-/app/-and-/data/-areas-at-fnal) + - [EOS at CERN](#eos-at-cern) + - [Fermilab dCache](#fermilab-dcache) +- [Summary on storage spaces](#summary-on-storage-spaces) +- [Monitoring and Usage](#monitoring-and-usage) +- [Commands and tools](#commands-and-tools) + - [ifdh](#ifdh) + - [xrootd](#xrootd) + - [What is the right xroot path for a file.](#what-is-the-right-xroot-path-for-a-file.) + - [The df command](#the-df-command) +- [Quiz](#quiz) +- [Useful links to bookmark](#useful-links-to-bookmark) \ No newline at end of file diff --git a/_includes/02.3-cvmfs.toc.md b/_includes/02.3-cvmfs.toc.md new file mode 100644 index 0000000..70f05bf --- /dev/null +++ b/_includes/02.3-cvmfs.toc.md @@ -0,0 +1,6 @@ + + +**Table of Contents for 02.3-cvmfs** +- [CVMFS](#cvmfs) +- [Restrictions](#restrictions) +- [Useful links to bookmark](#useful-links-to-bookmark) \ No newline at end of file diff --git a/_includes/03-data-management.toc.md b/_includes/03-data-management.toc.md new file mode 100644 index 0000000..254d1d8 --- /dev/null +++ b/_includes/03-data-management.toc.md @@ -0,0 +1,43 @@ + + +**Table of Contents for 03-data-management** +- [Session Video](#session-video) + - [Live Notes](#live-notes) +- [Introduction](#introduction) + - [What we need to do to produce accurate physics results](#what-we-need-to-do-to-produce-accurate-physics-results) + - [How we do it ?](#how-we-do-it-) + - [How do I use this.](#how-do-i-use-this.) 
+- [How to find and access official data](#how-to-find-and-access-official-data) +- [Official datasets ](#Official_Datasets) + - [Fast web catalog queries](#fast-web-catalog-queries) + - [Command line tools and advanced queries](#command-line-tools-and-advanced-queries) + - [metacat web interface](#metacat-web-interface) + - [Example of finding reconstructed Monte Carlo](#example-of-finding-reconstructed-monte-carlo) + - [you can use the web data catalog to do advanced searches](#you-can-use-the-web-data-catalog-to-do-advanced-searches) +- [What is metacat?](#what-is-metacat) + - [Find a file in metacat](#find-a-file-in-metacat) + - [Example of doing a metacat search](#example-of-doing-a-metacat-search) + - [then do queries to find particular groups of files](#then-do-queries-to-find-particular-groups-of-files) + - [What do those fields mean?](#what-do-those-fields-mean) + - [find out how much raw data there is in a run using the summary option](#find-out-how-much-raw-data-there-is-in-a-run-using-the-summary-option) + - [Fast web catalog queries](#fast-web-catalog-queries) + - [Command line tools and advanced queries](#command-line-tools-and-advanced-queries) + - [metacat web interface](#metacat-web-interface) + - [Example of finding reconstructed Monte Carlo](#example-of-finding-reconstructed-monte-carlo) + - [you can use the web data catalog to do advanced searches](#you-can-use-the-web-data-catalog-to-do-advanced-searches) + - [find out how much data there is in a dataset](#find-out-how-much-data-there-is-in-a-dataset) + - [What describes a dataset?](#what-describes-a-dataset) + - [What files are in that dataset and how do I use them?](#what-files-are-in-that-dataset-and-how-do-i-use-them) +- [Finding those files on disk](#finding-those-files-on-disk) +- [Getting file locations using Rucio](#getting-file-locations-using-rucio) + - [What is Rucio?](#what-is-rucio) + - [You will need to authenticate to read files](#you-will-need-to-authenticate-to-read-files) + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) + - [finding a file](#finding-a-file) +- [More finding files by characteristics using metacat](#more-finding-files-by-characteristics-using-metacat) +- [Accessing data for use in your analysis](#accessing-data-for-use-in-your-analysis) +- [Quiz](#quiz) +- [Useful links to bookmark](#useful-links-to-bookmark) \ No newline at end of file diff --git a/_includes/03.2-UPS.toc.md b/_includes/03.2-UPS.toc.md new file mode 100644 index 0000000..286284c --- /dev/null +++ b/_includes/03.2-UPS.toc.md @@ -0,0 +1,5 @@ + + +**Table of Contents for 03.2-UPS** +- [What is UPS and why do we need it?](#what-is-ups-and-why-do-we-need-it) + - [UPS basic commands](#ups-basic-commands) \ No newline at end of file diff --git a/_includes/04-Spack.toc.md b/_includes/04-Spack.toc.md new file mode 100644 index 0000000..a6a4f7a --- /dev/null +++ b/_includes/04-Spack.toc.md @@ -0,0 +1,7 @@ + + +**Table of Contents for 04-Spack** +- [What is Spack and why do we need it?](#what-is-spack-and-why-do-we-need-it) +- [Minimal spack for root analysis and file access](#minimal-spack-for-root-analysis-and-file-access) +- [A more flexible environment with more packages but you have to make choices of 
versions](#a-more-flexible-environment-with-more-packages-but-you-have-to-make-choices-of-versions) + - [Spack basic commands](#spack-basic-commands) \ No newline at end of file diff --git a/_includes/05-end-of-basics.toc.md b/_includes/05-end-of-basics.toc.md new file mode 100644 index 0000000..62a2c3e --- /dev/null +++ b/_includes/05-end-of-basics.toc.md @@ -0,0 +1,5 @@ + + +**Table of Contents for 05-end-of-basics** +- [You can ask questions here:](#you-can-ask-questions-here) +- [You can continue on with these additional modules.](#you-can-continue-on-with-these-additional-modules.) \ No newline at end of file diff --git a/_includes/05.1-improve-code-efficiency.toc.md b/_includes/05.1-improve-code-efficiency.toc.md new file mode 100644 index 0000000..18b1ecc --- /dev/null +++ b/_includes/05.1-improve-code-efficiency.toc.md @@ -0,0 +1,14 @@ + + +**Table of Contents for 05.1-improve-code-efficiency** +- [Improve your Code efficiency](#improve-your-code-efficiency) + - [Session Video](#session-video) + - [Live Notes](#live-notes) + - [Code Make-over](#code-make-over) + - [CPU optimization:](#cpu-optimization) +- [Memory optimization:](#memory-optimization) +- [I/O optimization:](#i/o-optimization) +- [Build time optimization:](#build-time-optimization) +- [Workflow optimization:](#workflow-optimization) +- [Software readability and maintainability:](#software-readability-and-maintainability) +- [Coding for Thread Safety](#coding-for-thread-safety) \ No newline at end of file diff --git a/_includes/10-closing-remarks.toc.md b/_includes/10-closing-remarks.toc.md new file mode 100644 index 0000000..45386ba --- /dev/null +++ b/_includes/10-closing-remarks.toc.md @@ -0,0 +1,8 @@ + + +**Table of Contents for 10-closing-remarks** +- [Video Session](#video-session) +- [Two Days of Training](#two-days-of-training) +- [Survey time!](#survey-time!) 
+- [Next Steps](#next-steps) +- [Long Term Support](#long-term-support) \ No newline at end of file diff --git a/_includes/Common-Error-Messages.toc.md b/_includes/Common-Error-Messages.toc.md new file mode 100644 index 0000000..89238bf --- /dev/null +++ b/_includes/Common-Error-Messages.toc.md @@ -0,0 +1,8 @@ + + +**Table of Contents for Common-Error-Messages** +- [Common Error Messages](#common-error-messages) + - [Error: /usr/bin/xauth: unable to write authority file](#error-/usr/bin/xauth-unable-to-write-authority-file) + - [bash: setup: command not found](#bash-setup-command-not-found) + - [SyntaxError: future feature annotations is not defined](#syntaxerror-future-feature-annotations-is-not-defined) + - [Spack : Error: somecode matches multiple packages](#spack--error-somecode-matches-multiple-packages) \ No newline at end of file diff --git a/_includes/ComputerSetup.toc.md b/_includes/ComputerSetup.toc.md new file mode 100644 index 0000000..a6b8464 --- /dev/null +++ b/_includes/ComputerSetup.toc.md @@ -0,0 +1,16 @@ + + +**Table of Contents for ComputerSetup** +- [Computer setup](#computer-setup) + - [Back up your machine](#back-up-your-machine) + - [Open a unix terminal window](#open-a-unix-terminal-window) + - [Learn how to use the Unix Shell](#learn-how-to-use-the-unix-shell) + - [Install an x-windows emulator](#install-an-x-windows-emulator) + - [MacOS](#macos) + - [Unix](#unix) + - [Windows](#windows) + - [Extra - Get a code editor](#extra---get-a-code-editor) + - [OSX](#osx) + - [Unix](#unix) + - [Windows](#windows) +- [Useful Links](#useful-links) \ No newline at end of file diff --git a/_includes/InstallConda.toc.md b/_includes/InstallConda.toc.md new file mode 100644 index 0000000..4bd4b55 --- /dev/null +++ b/_includes/InstallConda.toc.md @@ -0,0 +1,9 @@ + + +**Table of Contents for InstallConda** +- [Installing conda and root](#installing-conda-and-root) +- [Download miniconda](#download-miniconda) +- [Install conda](#install-conda) +- [Making an environment with root in it](#making-an-environment-with-root-in-it) +- [Try out your new environment](#try-out-your-new-environment) +- [Testing](#testing) \ No newline at end of file diff --git a/_includes/OfficialDatasets.toc.md b/_includes/OfficialDatasets.toc.md new file mode 100644 index 0000000..c67ee77 --- /dev/null +++ b/_includes/OfficialDatasets.toc.md @@ -0,0 +1,9 @@ + + +**Table of Contents for OfficialDatasets** +- [Official datasets ](#Official_Datasets) + - [Fast web catalog queries](#fast-web-catalog-queries) + - [Command line tools and advanced queries](#command-line-tools-and-advanced-queries) + - [metacat web interface](#metacat-web-interface) + - [Example of finding reconstructed Monte Carlo](#example-of-finding-reconstructed-monte-carlo) + - [you can use the web data catalog to do advanced searches](#you-can-use-the-web-data-catalog-to-do-advanced-searches) \ No newline at end of file diff --git a/_includes/OfficialDatasets_include.md b/_includes/OfficialDatasets_include.md new file mode 100644 index 0000000..cc0d064 --- /dev/null +++ b/_includes/OfficialDatasets_include.md @@ -0,0 +1,94 @@ + +## Official datasets + +The production group make official datasets which are sets of files which share important characteristics such as experiment, data_tier, data_stream, processing version and processing configuration. Often all you need is an official dataset. + +See [DUNE Physics Datasets](https://docs.dunescience.org/cgi-bin/sso/RetrieveFile?docid=29787&filename=DUNEdataset_v1.pdf) for a detailed description. 
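+
+As a rough illustration of how those shared characteristics turn into a catalog query (the namespace below is one of the examples mentioned later on this page, and the result depends on which datasets actually exist), you can ask metacat for matching official datasets directly:
+
+~~~
+# illustrative query; the worked examples below use real official dataset names
+metacat query "datasets matching hd-protodune-det-reco:* having core.data_tier=full-reconstructed"
+~~~
+{: .language-bash}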
+ +### Fast web catalog queries + +You can do fast string queries based on keywords embedded in the dataset name. + +Go to [dunecatalog](https://dune-tech.rice.edu/dunecatalog/) and log in with your services password. + +Choose your apparatus (Far Detector for example), use the category key to further refine your search and then type in keywords. Here I chose the `Far Detectors` tab and the `FD-VD` category from the pulldown menu. + +![Fast keyword search]({{ site.baseurl }}/fig/keywordquery.png){: .image-with-shadow } + +If you click on a dataset you can see a sample of the files inside it. + + +You can find a more detailed tutorial for the dunecatalog site at: +[Dune Catalog Tutorial](https://docs.dunescience.org/cgi-bin/sso/RetrieveFile?docid=33738&filename=DUNE%20Catalog%20Presentation.pdf&version=2) + + + +### Command line tools and advanced queries + +You can also explore and find the right dataset on the command line by using metacat dataset keys: + +First you need to know your namespace and then explore within it. + +~~~ +metacat namespace list # find likely namespaces +~~~ +{: .language-bash} + +There are official looking ones like `hd-protodune-det-reco` and ones for users doing production testing like `schellma`. The default for general use is `usertests` + +Creation of namespaces by non-privileged users is currently disabled. A tool is in progress which will automatically make one namespace for each user + +### metacat web interface + +Metacat also has a web interface that is useful in exploring file parentage [metacat gui](https://metacat.fnal.gov:9443/dune_meta_prod/app/gui) + +### Example of finding reconstructed Monte Carlo + +Let's look for some reconstructed Monte Carlo from the VD far detector. + +~~~ +metacat query "datasets matching fardet-vd:*official having core.data_tier=full-reconstructed" +~~~ +{: .language-bash} + +Lots of output ... looks like there are 2 types of official ones - let's get "v2" + +~~~ +metacat query "datasets matching fardet-vd:*v2_official having core.data_tier=full-reconstructed" +~~~ +{: .language-bash} + +and there are then several different generators. Let's explore reconstructed simulation of the vertical drift far detector. + +~~~ +metacat query "datasets matching fardet-vd:*v2_official having core.data_tier=full-reconstructed and dune_mc.gen_fcl_filename=prodgenie_nu_numu2nue_nue2nutau_dunevd10kt_1x8x6_3view_30deg.fcl" +~~~ +{: .language-bash} + +Ok, found the official neutrino beam dataset: + +~~~ +fardet-vd:fardet-vd__full-reconstructed__v09_81_00d02__reco2_dunevd10kt_nu_1x8x6_3view_30deg_geov3__prodgenie_nu_numu2nue_nue2nutau_dunevd10kt_1x8x6_3view_30deg__out1__v2_official +~~~ +{: .output} + + +~~~ +metacat query "datasets matching fardet-vd:*v2_official having core.data_tier=full-reconstructed and dune_mc.gen_fcl_filename=prodgenie_anu_numu2nue_nue2nutau_dunevd10kt_1x8x6_3view_30deg.fcl" +~~~ + +And the anti-neutrino dataset: + +~~~ +fardet-vd:fardet-vd__full-reconstructed__v09_81_00d02__reco2_dunevd10kt_anu_1x8x6_3view_30deg_geov3__prodgenie_anu_numu2nue_nue2nutau_dunevd10kt_1x8x6_3view_30deg__out1__v2_official +~~~ +{: .output} + + + +### you can use the web data catalog to do advanced searches + +You can also do keyword/value queries like the ones above using the Other tab on the web-based Data Catalog. 
+ +![Full query search]({{ site.baseurl }}/fig/otherquery.png){: .image-with-shadow } + diff --git a/_includes/Tokens.toc.md b/_includes/Tokens.toc.md new file mode 100644 index 0000000..5bfaaca --- /dev/null +++ b/_includes/Tokens.toc.md @@ -0,0 +1,9 @@ + + +**Table of Contents for Tokens** +- [SL7 Tokens ](#SL7_token) + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) +- [AL9 Tokens ](#AL9_token) + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) \ No newline at end of file diff --git a/_includes/TutorialsMasterList.toc.md b/_includes/TutorialsMasterList.toc.md new file mode 100644 index 0000000..039ae1c --- /dev/null +++ b/_includes/TutorialsMasterList.toc.md @@ -0,0 +1,7 @@ + + +**Table of Contents for TutorialsMasterList** +- [[Computing Basics](https://dune.github.io/computing-basics/)](#[computing-basics](https//dune.github.io/computing-basics/)) +- [[The Justin Workflow System](https://dunejustin.fnal.gov/docs/)](#[the-justin-workflow-system](https//dunejustin.fnal.gov/docs/)) +- [[LArTPC Reconstruction Training](https://indico.ph.ed.ac.uk/event/268/)](#[lartpc-reconstruction-training](https//indico.ph.ed.ac.uk/event/268/)) +- [[FAQ](https://github.com/orgs/DUNE/projects/19/views/1)](#[faq](https//github.com/orgs/dune/projects/19/views/1)) \ No newline at end of file diff --git a/_includes/Windows.toc.md b/_includes/Windows.toc.md new file mode 100644 index 0000000..b3ffa26 --- /dev/null +++ b/_includes/Windows.toc.md @@ -0,0 +1,12 @@ + + +**Table of Contents for Windows** +- [Instructions for running remote terminal sessions on unix machines from Windows](#instructions-for-running-remote-terminal-sessions-on-unix-machines-from-windows) +- [Kerberos Ticket Manager](#kerberos-ticket-manager) +- [Terminal emulators](#terminal-emulators) + - [MobaXterm](#mobaxterm) + - [PuTTY/Xming](#putty/xming) + - [PuTTY](#putty) + - [Xming with PuTTY](#xming-with-putty) + - [Configuring PuTTY for remote use](#configuring-putty-for-remote-use) +- [Done!](#done!) 
\ No newline at end of file diff --git a/_includes/about.toc.md b/_includes/about.toc.md new file mode 100644 index 0000000..0a33b39 --- /dev/null +++ b/_includes/about.toc.md @@ -0,0 +1,3 @@ + + +**Table of Contents for about** \ No newline at end of file diff --git a/_includes/al9_setup.toc.md b/_includes/al9_setup.toc.md new file mode 100644 index 0000000..f9d3bfa --- /dev/null +++ b/_includes/al9_setup.toc.md @@ -0,0 +1,9 @@ + + +**Table of Contents for al9_setup** +- [How to set up a basic session in al9](#how-to-set-up-a-basic-session-in-al9) + - [a tip](#a-tip) +- [Set up software](#set-up-software) +- [Get a token](#get-a-token) + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) \ No newline at end of file diff --git a/_includes/al9_setup_2025a.md b/_includes/al9_setup_2025a.md index ad4eda7..77b9ba1 100644 --- a/_includes/al9_setup_2025a.md +++ b/_includes/al9_setup_2025a.md @@ -2,31 +2,34 @@ # find a spack environment and set it up # setup spack (pre spack 1.0 version) -source /cvmfs/larsoft.opensciencegrid.org/spack-v0.22.0-fermi/setup-env.sh - -# get the packages you need to run this - this will become simple in future -echo "ROOT" -spack load root@6.28.12%gcc@12.2.0 arch=linux-almalinux9-x86_64_v3 - -echo "CMAKE" -spack load cmake@3.27.9%gcc@11.4.1 arch=linux-almalinux9-x86_64_v3 - +source /cvmfs/dune.opensciencegrid.org/spack/setup-env.sh +echo "Activate dune-workflow" +spack env activate dune-workflow +echo "load GCC and CMAKE so don't use system" echo "GCC" -spack load gcc@12.2.0 - -echo "Rucio and metacat" -spack load r-m-dd-config experiment=dune lab=fnal.gov -export RUCIO_ACCOUNT=justinreadonly - -echo "IFDHC" -spack load ifdhc@2.8.0%gcc@12.2.0 arch=linux-almalinux9-x86_64_v3 -spack load ifdhc-config@2.6.20%gcc@11.4.1 arch=linux-almalinux9-x86_64_v3 - +spack load gcc@12.5.0 arch=linux-almalinux9-x86_64_v2 echo "PY-PIP" spack load py-pip@23.1.2%gcc@11.4.1 arch=linux-almalinux9-x86_64_v3 -echo "no justIN yet" -#spack load justin ~~~ -{: .language-bash} \ No newline at end of file +{: .language-bash} + +You can check your environment by doing this + +~~~ +# test-paths.sh +echo "which root" +which root +root --version +echo "which gcc" +which gcc +gcc --version +echo "which python" +which python +python --version +echo "which cmake" +which cmake +cmake --version +~~~ +{: .language-bash} diff --git a/_includes/al9_speedrun.toc.md b/_includes/al9_speedrun.toc.md new file mode 100644 index 0000000..643a0fa --- /dev/null +++ b/_includes/al9_speedrun.toc.md @@ -0,0 +1,6 @@ + + +**Table of Contents for al9_speedrun** +- [2025 Speedrun of AL9 setup and test](#2025-speedrun-of-al9-setup-and-test) + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) \ No newline at end of file diff --git a/_includes/al9_token.md b/_includes/al9_token.md index 5f1ed5b..b8620af 100644 --- a/_includes/al9_token.md +++ b/_includes/al9_token.md @@ -1,28 +1,11 @@ +### Interactive file access -> ## Note: The justin get-token method for authentication does not currently work on AL9 -> The justin get-token command is not distributed on AL9/Spack currently. -> Please use [SL7]({{ site.baseurl }}/sl7_setup) if you need to use rucio. 
-> -> normal tokens (below) for `xroot` access do work -> -{: .caution} - -### getting a token for xroot access in AL9 Make certain you have [al9 set up]({{ site.baseurl }}/al9_setup) - - -Then use htgettoken to get a token so you can read the files you find. +Then use `htgettoken` to get a token so you can read the files you find. ~~~ -htgettoken -i dune --vaultserver htvaultprod.fnal.gov #:8200 +htgettoken -i dune --vaultserver htvaultprod.fnal.gov -r interactive export BEARER_TOKEN_FILE=/run/user/`id -u`/bt_u`id -u` ~~~ {: .language-bash} @@ -49,3 +32,42 @@ YgZTGDqHQg6NOO77NsCY5J88uyIkkoZ1tRb6iTXK0j5RsX0AjA You should be able to read files at remote sites now. You may need to repeat the `htgettoken` as the interactive tokens are pretty short-lived. Batch jobs do their own tokens. +### Accessing rucio and justIn resources requires a bit more + +You should already be set up above. Now you can use `justIn` to get you a token. + +1. First tell `justIn` knows about you + +~~~ +justin time +~~~ +{: ..language-bash} + +The first time you do this you will get asked (after the `justin time` command) + +~~~ +To authorize this computer to run the justin command, visit this page with your +usual web browser and follow the instructions within the next 10 minutes: +https://dunejustin.fnal.gov/authorize/XXXXX + +Check that the Session ID displayed on that page is BfhVBmQ + +Once you've followed the instructions on that web page, please run the command +you tried again. You won't need to authorize this computer again for 7 days. +~~~ +{: ..output} + +Once again go to the website that appears and authenticate. + +2. After the first authentication to justIn you need to do a second justin call + +~~~ +justin get-token +~~~ +{: ..language-bash} + +You will need to do this sequence weekly as your justin access expires. + +> ## Note: +> Despite the name of this command it gets you both a token and a special X.509 proxy and it is the latter you are actually using to talk to rucio in these SL7 examples +{: .callout} \ No newline at end of file diff --git a/_includes/allTOC.md b/_includes/allTOC.md new file mode 100644 index 0000000..d2ec08e --- /dev/null +++ b/_includes/allTOC.md @@ -0,0 +1,306 @@ + + +**Table of Contents for setup** +- [Objectives](#objectives) +- [Requirements](#requirements) +- [Step 1: DUNE membership](#step-1-dune-membership) +- [Step 2: Getting accounts](#step-2-getting-accounts) + - [With FNAL](#with-fnal) + - [With CERN](#with-cern) +- [Step 3: Mission setup (rest of this page)](#step-3-mission-setup-(rest-of-this-page)) +- [0. Basic setup on your computer.](#0.-basic-setup-on-your-computer.) +- [1. Kerberos business](#1.-kerberos-business) +- [2. ssh-in](#2.-ssh-in) +- [3. Get a clean shell](#3.-get-a-clean-shell) +- [Software setup ](#software_setup) + - [4.1 Setting up DUNE software - Scientific Linux 7 version ](SL7setup) + - [Caveats for later](#caveats-for-later) + - [4.2 Setting up DUNE software - Alma9 version ](#AL9_setup) + - [Caveats for later](#caveats-for-later) +- [4.2 Setting up DUNE software - Alma9 version](#4.2-setting-up-dune-software---alma9-version) + - [Caveats](#caveats) +- [5. Exercise! (For SL7 - it's easy)](#5.-exercise!-(for-sl7---it's-easy)) +- [5. Exercise! (For AL9 - it's easy)](#5.-exercise!-(for-al9---it's-easy)) +- [6. Getting setup for streaming and grid access](#6.-getting-setup-for-streaming-and-grid-access) + - [Tokens method ](#tokens) + - [1. Get and store your token](#1.-get-and-store-your-token) + - [2. 
Tell the system where your token is](#2.-tell-the-system-where-your-token-is) +- [Set up on CERN machines ](#setup_CERN) + - [1. Setup in Alma9](#1.-setup-in-alma9) + - [2. For SL7](#2.-for-sl7) + - [Source the DUNE environment SL7 setup script](#source-the-dune-environment-sl7-setup-script) + - [3. Getting authentication for data access](#3.-getting-authentication-for-data-access) + - [4. Access tutorial datasets](#4.-access-tutorial-datasets) + - [5. Notify us](#5.-notify-us) +- [Useful Links](#useful-links) + +**Table of Contents for 01-introduction** +- [DUNE Computing Consortium](#dune-computing-consortium) +- [Schedule](#schedule) + - [Workshop Introduction Video from December 2024](#workshop-introduction-video-from-december-2024) +- [Basic setup reminder](#basic-setup-reminder) +- [Instructional Crew](#instructional-crew) +- [Support](#support) + +**Table of Contents for 01.5-documentation** +- [Documentation access](#documentation-access) +- [DUNE tools list](#dune-tools-list) +- [Docdb (Requires FNAL SSO)](#docdb-(requires-fnal-sso)) +- [DUNE wiki (Requires FNAL SSO)](#dune-wiki-(requires-fnal-sso)) + - [Tutorials list on the wiki](#tutorials-list-on-the-wiki) +- [CERN EDMS](#cern-edms) +- [Github repositories](#github-repositories) +- [DUNE Computing FAQ](#dune-computing-faq) + +**Table of Contents for 02-storage-spaces** +- [This is an updated version of the 2023 training](#this-is-an-updated-version-of-the-2023-training) + - [Live Notes](#live-notes) + - [Workshop Storage Spaces Video from December 2024](#workshop-storage-spaces-video-from-december-2024) +- [Introduction](#introduction) +- [Vocabulary](#vocabulary) +- [Interactive storage volumes (mounted on dunegpvmXX.fnal.gov or lxplus.cern.ch)](#interactive-storage-volumes-(mounted-on-dunegpvmxx.fnal.gov-or-lxplus.cern.ch)) + - [Your home area](#your-home-area) + - [at Fermilab](#at-fermilab) + - [at CERN](#at-cern) + - [Locally mounted volumes](#locally-mounted-volumes) + - [Network Attached Storage (NAS)](#network-attached-storage-(nas)) + - [At Fermilab](#at-fermilab) + - [At CERN](#at-cern) + - [Grid-accessible storage volumes](#grid-accessible-storage-volumes) + - [Persistent dCache](#persistent-dcache) + - [Scratch dCache](#scratch-dcache) + - [Tape-backed dCache](#tape-backed-dcache) + - [Rucio Storage Elements](#rucio-storage-elements) + - [CVMFS](#cvmfs) + - [What is my quota?](#what-is-my-quota) + - [Your home area at FNAL](#your-home-area-at-fnal) + - [Your home area at CERN](#your-home-area-at-cern) + - [The /app/ and /data/ areas at FNAL](#the-/app/-and-/data/-areas-at-fnal) + - [EOS at CERN](#eos-at-cern) + - [Fermilab dCache](#fermilab-dcache) +- [Summary on storage spaces](#summary-on-storage-spaces) +- [Monitoring and Usage](#monitoring-and-usage) +- [Commands and tools](#commands-and-tools) + - [ifdh](#ifdh) + - [xrootd](#xrootd) + - [What is the right xroot path for a file.](#what-is-the-right-xroot-path-for-a-file.) 
+ - [The df command](#the-df-command) +- [Quiz](#quiz) +- [Useful links to bookmark](#useful-links-to-bookmark) + +**Table of Contents for 02.3-cvmfs** +- [CVMFS](#cvmfs) +- [Restrictions](#restrictions) +- [Useful links to bookmark](#useful-links-to-bookmark) + +**Table of Contents for 03-data-management** +- [Session Video](#session-video) + - [Live Notes](#live-notes) +- [Introduction](#introduction) + - [What we need to do to produce accurate physics results](#what-we-need-to-do-to-produce-accurate-physics-results) + - [How we do it ?](#how-we-do-it-) + - [How do I use this.](#how-do-i-use-this.) +- [How to find and access official data](#how-to-find-and-access-official-data) +- [Official datasets ](#Official_Datasets) + - [Fast web catalog queries](#fast-web-catalog-queries) + - [Command line tools and advanced queries](#command-line-tools-and-advanced-queries) + - [metacat web interface](#metacat-web-interface) + - [Example of finding reconstructed Monte Carlo](#example-of-finding-reconstructed-monte-carlo) + - [you can use the web data catalog to do advanced searches](#you-can-use-the-web-data-catalog-to-do-advanced-searches) +- [What is metacat?](#what-is-metacat) + - [Find a file in metacat](#find-a-file-in-metacat) + - [Example of doing a metacat search](#example-of-doing-a-metacat-search) + - [then do queries to find particular groups of files](#then-do-queries-to-find-particular-groups-of-files) + - [What do those fields mean?](#what-do-those-fields-mean) + - [find out how much raw data there is in a run using the summary option](#find-out-how-much-raw-data-there-is-in-a-run-using-the-summary-option) + - [Fast web catalog queries](#fast-web-catalog-queries) + - [Command line tools and advanced queries](#command-line-tools-and-advanced-queries) + - [metacat web interface](#metacat-web-interface) + - [Example of finding reconstructed Monte Carlo](#example-of-finding-reconstructed-monte-carlo) + - [you can use the web data catalog to do advanced searches](#you-can-use-the-web-data-catalog-to-do-advanced-searches) + - [find out how much data there is in a dataset](#find-out-how-much-data-there-is-in-a-dataset) + - [What describes a dataset?](#what-describes-a-dataset) + - [What files are in that dataset and how do I use them?](#what-files-are-in-that-dataset-and-how-do-i-use-them) +- [Finding those files on disk](#finding-those-files-on-disk) +- [Getting file locations using Rucio](#getting-file-locations-using-rucio) + - [What is Rucio?](#what-is-rucio) + - [You will need to authenticate to read files](#you-will-need-to-authenticate-to-read-files) +- [Accessing `rucio` and `justin` require a bit more](#accessing-`rucio`-and-`justin`-require-a-bit-more) + - [getting a token for xroot access in AL9](#getting-a-token-for-xroot-access-in-al9) + - [finding a file](#finding-a-file) +- [More finding files by characteristics using metacat](#more-finding-files-by-characteristics-using-metacat) +- [Accessing data for use in your analysis](#accessing-data-for-use-in-your-analysis) +- [Quiz](#quiz) +- [Useful links to bookmark](#useful-links-to-bookmark) + +**Table of Contents for 03.2-UPS** +- [What is UPS and why do we need it?](#what-is-ups-and-why-do-we-need-it) + - [UPS basic commands](#ups-basic-commands) + +**Table of Contents for 04-Spack** +- [What is Spack and why do we need it?](#what-is-spack-and-why-do-we-need-it) +- [Minimal spack for root analysis and file access](#minimal-spack-for-root-analysis-and-file-access) +- [A more flexible environment with more packages but you have 
to make choices of versions](#a-more-flexible-environment-with-more-packages-but-you-have-to-make-choices-of-versions) + - [Spack basic commands](#spack-basic-commands) + +**Table of Contents for 05-end-of-basics** +- [You can ask questions here:](#you-can-ask-questions-here) +- [You can continue on with these additional modules.](#you-can-continue-on-with-these-additional-modules.) + +**Table of Contents for 05.1-improve-code-efficiency** +- [Improve your Code efficiency](#improve-your-code-efficiency) + - [Session Video](#session-video) + - [Live Notes](#live-notes) + - [Code Make-over](#code-make-over) + - [CPU optimization:](#cpu-optimization) +- [Memory optimization:](#memory-optimization) +- [I/O optimization:](#i/o-optimization) +- [Build time optimization:](#build-time-optimization) +- [Workflow optimization:](#workflow-optimization) +- [Software readability and maintainability:](#software-readability-and-maintainability) +- [Coding for Thread Safety](#coding-for-thread-safety) + +**Table of Contents for 10-closing-remarks** +- [Video Session](#video-session) +- [Two Days of Training](#two-days-of-training) +- [Survey time!](#survey-time!) +- [Next Steps](#next-steps) +- [Long Term Support](#long-term-support) + +**Table of Contents for about** + +**Table of Contents for al9_setup** +- [How to set up a basic session in al9](#how-to-set-up-a-basic-session-in-al9) + - [a tip](#a-tip) +- [Set up software](#set-up-software) +- [Get a token](#get-a-token) + - [getting a token for xroot access in AL9](#getting-a-token-for-xroot-access-in-al9) + +**Table of Contents for al9_speedrun** +- [2025 Speedrun of AL9 setup and test](#2025-speedrun-of-al9-setup-and-test) + - [getting a token for xroot access in AL9](#getting-a-token-for-xroot-access-in-al9) + +**Table of Contents for Common-Error-Messages** +- [Common Error Messages](#common-error-messages) + - [Error: /usr/bin/xauth: unable to write authority file](#error-/usr/bin/xauth-unable-to-write-authority-file) + - [bash: setup: command not found](#bash-setup-command-not-found) + - [SyntaxError: future feature annotations is not defined](#syntaxerror-future-feature-annotations-is-not-defined) + - [Spack : Error: somecode matches multiple packages](#spack--error-somecode-matches-multiple-packages) + +**Table of Contents for ComputerSetup** +- [Computer setup](#computer-setup) + - [Back up your machine](#back-up-your-machine) + - [Open a unix terminal window](#open-a-unix-terminal-window) + - [Learn how to use the Unix Shell](#learn-how-to-use-the-unix-shell) + - [Install an x-windows emulator](#install-an-x-windows-emulator) + - [MacOS](#macos) + - [Unix](#unix) + - [Windows](#windows) + - [Extra - Get a code editor](#extra---get-a-code-editor) + - [OSX](#osx) + - [Unix](#unix) + - [Windows](#windows) +- [Useful Links](#useful-links) + +**Table of Contents for figures** + +**Table of Contents for guide** + - [Instructor Guide](#instructor-guide) + +**Table of Contents for helpers** + +**Table of Contents for InstallConda** +- [Installing conda and root](#installing-conda-and-root) +- [Download miniconda](#download-miniconda) +- [Install conda](#install-conda) +- [Making an environment with root in it](#making-an-environment-with-root-in-it) +- [Try out your new environment](#try-out-your-new-environment) +- [Testing](#testing) + +**Table of Contents for OfficialDatasets** +- [Official datasets ](#Official_Datasets) + - [Fast web catalog queries](#fast-web-catalog-queries) + - [Command line tools and advanced 
queries](#command-line-tools-and-advanced-queries) + - [metacat web interface](#metacat-web-interface) + - [Example of finding reconstructed Monte Carlo](#example-of-finding-reconstructed-monte-carlo) + - [you can use the web data catalog to do advanced searches](#you-can-use-the-web-data-catalog-to-do-advanced-searches) + +**Table of Contents for pnfs2xrootd** + +**Table of Contents for putty** + - [PuTTY](#putty) +- [Kerberos Ticket Manager](#kerberos-ticket-manager) +- [Xming](#xming) +- [Configuring PuTTY](#configuring-putty) + +**Table of Contents for setup_ruby** + +**Table of Contents for setup** +- [Objectives](#objectives) +- [Requirements](#requirements) +- [Step 1: DUNE membership](#step-1-dune-membership) +- [Step 2: Getting accounts](#step-2-getting-accounts) + - [With FNAL](#with-fnal) + - [With CERN](#with-cern) +- [Step 3: Mission setup (rest of this page)](#step-3-mission-setup-(rest-of-this-page)) +- [0. Basic setup on your computer.](#0.-basic-setup-on-your-computer.) +- [1. Kerberos business](#1.-kerberos-business) +- [2. ssh-in](#2.-ssh-in) +- [3. Get a clean shell](#3.-get-a-clean-shell) +- [Software setup ](#software_setup) + - [4.1 Setting up DUNE software - Scientific Linux 7 version ](SL7setup) + - [Caveats for later](#caveats-for-later) + - [4.2 Setting up DUNE software - Alma9 version ](#AL9_setup) + - [Caveats for later](#caveats-for-later) +- [4.2 Setting up DUNE software - Alma9 version](#4.2-setting-up-dune-software---alma9-version) + - [Caveats](#caveats) +- [5. Exercise! (For SL7 - it's easy)](#5.-exercise!-(for-sl7---it's-easy)) +- [5. Exercise! (For AL9 - it's easy)](#5.-exercise!-(for-al9---it's-easy)) +- [6. Getting setup for streaming and grid access](#6.-getting-setup-for-streaming-and-grid-access) + - [Tokens method ](#tokens) + - [1. Get and store your token](#1.-get-and-store-your-token) + - [2. Tell the system where your token is](#2.-tell-the-system-where-your-token-is) +- [Set up on CERN machines ](#setup_CERN) + - [1. Setup in Alma9](#1.-setup-in-alma9) + - [2. For SL7](#2.-for-sl7) + - [Source the DUNE environment SL7 setup script](#source-the-dune-environment-sl7-setup-script) + - [3. Getting authentication for data access](#3.-getting-authentication-for-data-access) + - [4. Access tutorial datasets](#4.-access-tutorial-datasets) + - [5. 
Notify us](#5.-notify-us) +- [Useful Links](#useful-links) + +**Table of Contents for sites** + +**Table of Contents for sl7_setup** +- [launch the Apptainer](#launch-the-apptainer) +- [then do the following to set up DUNE code](#then-do-the-following-to-set-up-dune-code) +- [then do the following to get authentication to remote data and batch systems](#then-do-the-following-to-get-authentication-to-remote-data-and-batch-systems) +- [Accessing `rucio` and `justin` require a bit more](#accessing-`rucio`-and-`justin`-require-a-bit-more) + +**Table of Contents for sl7_speedrun** +- [Accessing `rucio` and `justin` require a bit more](#accessing-`rucio`-and-`justin`-require-a-bit-more) +- [check root](#check-root) +- [check rucio](#check-rucio) + +**Table of Contents for Tokens** +- [SL7 Tokens ](#SL7_token) +- [Accessing `rucio` and `justin` require a bit more](#accessing-`rucio`-and-`justin`-require-a-bit-more) +- [AL9 Tokens ](#AL9_token) + - [getting a token for xroot access in AL9](#getting-a-token-for-xroot-access-in-al9) + +**Table of Contents for TutorialsMasterList** +- [[Computing Basics](https://dune.github.io/computing-basics/)](#[computing-basics](https//dune.github.io/computing-basics/)) +- [[The Justin Workflow System](https://dunejustin.fnal.gov/docs/)](#[the-justin-workflow-system](https//dunejustin.fnal.gov/docs/)) +- [[LArTPC Reconstruction Training](https://indico.ph.ed.ac.uk/event/268/)](#[lartpc-reconstruction-training](https//indico.ph.ed.ac.uk/event/268/)) +- [[FAQ](https://github.com/orgs/DUNE/projects/19/views/1)](#[faq](https//github.com/orgs/dune/projects/19/views/1)) + +**Table of Contents for Windows** +- [Instructions for running remote terminal sessions on unix machines from Windows](#instructions-for-running-remote-terminal-sessions-on-unix-machines-from-windows) +- [Kerberos Ticket Manager](#kerberos-ticket-manager) +- [Terminal emulators](#terminal-emulators) + - [MobaXterm](#mobaxterm) + - [PuTTY/Xming](#putty/xming) + - [PuTTY](#putty) + - [Xming with PuTTY](#xming-with-putty) + - [Configuring PuTTY for remote use](#configuring-putty-for-remote-use) +- [Done!](#done!) 
\ No newline at end of file diff --git a/_includes/all_toc.md b/_includes/all_toc.md new file mode 100644 index 0000000..6946765 --- /dev/null +++ b/_includes/all_toc.md @@ -0,0 +1,32 @@ +{% include +01-introduction.toc.md %}{% include +01.5-documentation.toc.md %}{% include +02-storage-spaces.toc.md %}{% include +02.3-cvmfs.toc.md %}{% include +03-data-management.toc.md %}{% include +03.2-UPS.toc.md %}{% include +04-Spack.toc.md %}{% include +05-end-of-basics.toc.md %}{% include +05.1-improve-code-efficiency.toc.md %}{% include +10-closing-remarks.toc.md %}{% include +Common-Error-Messages.toc.md %}{% include +ComputerSetup.toc.md %}{% include +InstallConda.toc.md %}{% include +OfficialDatasets.toc.md %}{% include +Tokens.toc.md %}{% include +TutorialsMasterList.toc.md %}{% include +Windows.toc.md %}{% include +about.toc.md %}{% include +al9_setup.toc.md %}{% include +al9_speedrun.toc.md %}{% include +figures.toc.md %}{% include +guide.toc.md %}{% include +helpers.toc.md %}{% include +howToBuild.toc.md %}{% include +pnfs2xrootd.toc.md %}{% include +putty.toc.md %}{% include +setup.toc.md %}{% include +setup_ruby.toc.md %}{% include +sites.toc.md %}{% include +sl7_setup.toc.md %}{% include +sl7_speedrun.toc.md %} \ No newline at end of file diff --git a/_includes/figures.toc.md b/_includes/figures.toc.md new file mode 100644 index 0000000..bc1c131 --- /dev/null +++ b/_includes/figures.toc.md @@ -0,0 +1,3 @@ + + +**Table of Contents for figures** \ No newline at end of file diff --git a/_includes/guide.toc.md b/_includes/guide.toc.md new file mode 100644 index 0000000..c74144d --- /dev/null +++ b/_includes/guide.toc.md @@ -0,0 +1,4 @@ + + +**Table of Contents for guide** + - [Instructor Guide](#instructor-guide) \ No newline at end of file diff --git a/_includes/helpers.toc.md b/_includes/helpers.toc.md new file mode 100644 index 0000000..17ec3bd --- /dev/null +++ b/_includes/helpers.toc.md @@ -0,0 +1,3 @@ + + +**Table of Contents for helpers** \ No newline at end of file diff --git a/_includes/howToBuild.toc.md b/_includes/howToBuild.toc.md new file mode 100644 index 0000000..03c0055 --- /dev/null +++ b/_includes/howToBuild.toc.md @@ -0,0 +1,6 @@ + + +**Table of Contents for howToBuild** +- [Making a table of contents](#making-a-table-of-contents) + - [Things not to do:](#things-not-to-do) +- [Ruby](#ruby) \ No newline at end of file diff --git a/_includes/links.md b/_includes/links.md index cb9dd9c..3c36e1b 100644 --- a/_includes/links.md +++ b/_includes/links.md @@ -30,6 +30,7 @@ [lesson-mainpage]: {{ relative_root_path }}{% link index.md %} [lesson-reference]: {{ relative_root_path }}{% link reference.md %} [lesson-setup]: {{ relative_root_path }}{% link setup.md %} +[lxplus]: https://twiki.cern.ch/twiki/bin/view/Main/HowtoUseLxplus [mit-license]: https://opensource.org/licenses/mit-license.html [morea]: https://morea-framework.github.io/ [numfocus]: https://numfocus.org/ diff --git a/_includes/pnfs2xrootd.toc.md b/_includes/pnfs2xrootd.toc.md new file mode 100644 index 0000000..6471a0e --- /dev/null +++ b/_includes/pnfs2xrootd.toc.md @@ -0,0 +1,3 @@ + + +**Table of Contents for pnfs2xrootd** \ No newline at end of file diff --git a/_includes/putty.toc.md b/_includes/putty.toc.md new file mode 100644 index 0000000..84d7cfa --- /dev/null +++ b/_includes/putty.toc.md @@ -0,0 +1,7 @@ + + +**Table of Contents for putty** + - [PuTTY](#putty) +- [Kerberos Ticket Manager](#kerberos-ticket-manager) +- [Xming](#xming) +- [Configuring PuTTY](#configuring-putty) \ No newline at end of file diff 
--git a/_includes/setup.toc.md b/_includes/setup.toc.md new file mode 100644 index 0000000..7518e69 --- /dev/null +++ b/_includes/setup.toc.md @@ -0,0 +1,33 @@ + + +**Table of Contents for setup** +- [Objectives](#objectives) +- [Requirements](#requirements) +- [Step 1: DUNE membership](#step-1-dune-membership) +- [Step 2: Getting accounts](#step-2-getting-accounts) + - [With FNAL](#with-fnal) + - [With CERN](#with-cern) +- [Step 3: Mission setup](#step-3-mission-setup) + - [Basic setup on your computer](#basic-setup-on-your-computer) + - [Kerberos business](#kerberos-business) + - [ssh-in](#ssh-in) + - [Get a clean shell](#get-a-clean-shell) +- [Step 4. Software setup](#step-4.-software-setup) + - [SL7 version](#sl7-version) + - [SL7 Caveats for batch submission](#sl7-caveats-for-batch-submission) + - [Alma9 version](#alma9-version) + - [Alma9 Caveats](#alma9-caveats) +- [Step 5. Exercises](#step-5.-exercises) + - [Exercise! (For SL7 - it's easy)](#exercise!-(for-sl7---it's-easy)) + - [Exercise! (For AL9 - it's easy)](#exercise!-(for-al9---it's-easy)) +- [Step 6. Getting authentication for streaming and grid access](#step-6.-getting-authentication-for-streaming-and-grid-access) + - [Tokens method](#tokens-method) + - [Get and store your token](#get-and-store-your-token) +- [Set up on CERN machines](#set-up-on-cern-machines) + - [Setup in Alma9](#setup-in-alma9) + - [For SL7](#for-sl7) + - [Source the DUNE environment SL7 setup script](#source-the-dune-environment-sl7-setup-script) + - [Getting authentication for data access](#getting-authentication-for-data-access) + - [Access tutorial datasets](#access-tutorial-datasets) + - [Notify us](#notify-us) +- [Useful Links](#useful-links) \ No newline at end of file diff --git a/_includes/setup_ruby.toc.md b/_includes/setup_ruby.toc.md new file mode 100644 index 0000000..c806f3e --- /dev/null +++ b/_includes/setup_ruby.toc.md @@ -0,0 +1,6 @@ + + +**Table of Contents for setup_ruby** +- [Making a table of contents](#making-a-table-of-contents) + - [Things not to do:](#things-not-to-do) +- [Ruby](#ruby) \ No newline at end of file diff --git a/_includes/sites.toc.md b/_includes/sites.toc.md new file mode 100644 index 0000000..23d30b4 --- /dev/null +++ b/_includes/sites.toc.md @@ -0,0 +1,3 @@ + + +**Table of Contents for sites** \ No newline at end of file diff --git a/_includes/sl7_setup.toc.md b/_includes/sl7_setup.toc.md new file mode 100644 index 0000000..9ab2e9d --- /dev/null +++ b/_includes/sl7_setup.toc.md @@ -0,0 +1,8 @@ + + +**Table of Contents for sl7_setup** +- [launch the Apptainer](#launch-the-apptainer) +- [then do the following to set up DUNE code](#then-do-the-following-to-set-up-dune-code) +- [then do the following to get authentication to remote data and batch systems](#then-do-the-following-to-get-authentication-to-remote-data-and-batch-systems) + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) \ No newline at end of file diff --git a/_includes/sl7_setup_2025.md b/_includes/sl7_setup_2025.md index 17e93d7..a3543f1 100644 --- a/_includes/sl7_setup_2025.md +++ b/_includes/sl7_setup_2025.md @@ -21,9 +21,5 @@ export DUNELAR_QUALIFIER=e26:prof # you want to update this setup -B dunesw ${DUNELAR_VERSION} -q ${DUNELAR_QUALIFIER} -setup metacat -setup rucio -export RUCIO_ACCOUNT=justinreadonly -setup justin ~~~ {: .language-bash} \ No newline at end of file diff --git a/_includes/sl7_speedrun.toc.md 
b/_includes/sl7_speedrun.toc.md new file mode 100644 index 0000000..33f812f --- /dev/null +++ b/_includes/sl7_speedrun.toc.md @@ -0,0 +1,7 @@ + + +**Table of Contents for sl7_speedrun** + - [Interactive file access](#interactive-file-access) + - [Accessing rucio and justIn resources requires a bit more](#accessing-rucio-and-justin-resources-requires-a-bit-more) +- [check root](#check-root) +- [check rucio](#check-rucio) \ No newline at end of file diff --git a/_includes/sl7_token.md b/_includes/sl7_token.md index b74881f..a5bce3c 100644 --- a/_includes/sl7_token.md +++ b/_includes/sl7_token.md @@ -1,34 +1,79 @@ +### Interactive file access -To get a token that allows you to access files (and rucio) in SL7 +To get a token that allows you to access files interactively in SL7 ~~~ +htgettoken -i dune --vaultserver htvaultprod.fnal.gov -r interactive +export BEARER_TOKEN_FILE=/run/user/`id -u`/bt_u`id -u` +export X509_CERT_DIR=/cvmfs/oasis.opensciencegrid.org/mis/certificates +~~~ +{: .language-bash} + +Put this in a file called `dune_token.sh` so you can reuse it. + +The first time you do this you will see: +~~~ +Attempting kerberos auth with https://htvaultprod.fnal.gov:8200 ... succeeded +Attempting to get token from https://htvaultprod.fnal.gov:8200 ... failed +Attempting OIDC authentication with https://htvaultprod.fnal.gov:8200 + +Complete the authentication at: + https://cilogon.org/device/?user_code=XXXX +No web open command defined, please copy/paste the above to any web browser +Waiting for response in web browser +~~~ +{: .output} + +Go to that web site and authenticate. + +~~~ +Storing vault token in /tmp/vt_uXXX +Saving credkey to /nashome/s/USER/.config/htgettoken/credkey-dune-interactive +Saving refresh token ... done +Attempting to get token from https://htvaultprod.fnal.gov:8200 ... succeeded +Storing bearer token in /run/user/XXXX/bt_XXXX +~~~ +{: .output} + + + +### Accessing rucio and justIn resources requires a bit more + +In SL7, put this in a file called `dune_data_sl7.sh` so you can use it again. + +~~~ +setup metacat +setup rucio +export RUCIO_ACCOUNT=justinreadonly setup justin -justin time # this just tells justin you want to authenticate +justin time # this just tells justin that you exist and want to authenticate +justin get-token # this actually gets a token and associated proxy for access to rucio and the batch system ~~~ -{: .language-bash} +{: .language-bash} -The first time it will ask you to open a web browser, authenticate and enter the long string it delivers to you. +The first time you do this you will be asked (after the `justin time` command): ~~~ To authorize this computer to run the justin command, visit this page with your usual web browser and follow the instructions within the next 10 minutes: -https://dunejustin.fnal.gov/authorize/_W_azUJcLhYmAOqClYz9RAsnKbDgzQ6lNA +https://dunejustin.fnal.gov/authorize/XXXXX -Check that the Session ID displayed on that page is -cprbbe +Check that the Session ID displayed on that page is BfhVBmQ -Once you've followed the instructions on that web page, you can run the justin -command without needing to authorize this computer again for 7 days. +Once you've followed the instructions on that web page, please run the command +you tried again. You won't need to authorize this computer again for 7 days. ~~~ -{: .output} +{: .output} -That gave you authorization to use justin. Now do the command again to get an actual token. +Once again go to the website that appears and authenticate. 
After the first authentication to justIn, you need to do a second justin call: ~~~ justin get-token ~~~ -{: .language-bash} +{: .language-bash} -You will have to do this sequence weekly as your justin access expires. +You will need to do this sequence weekly as your justin access expires. > ## Note: -> Despite the name of this command it gets you both a token and a special X.509 proxy and it is the latter you are actually using to talk to rucio in these SL7 examples \ No newline at end of file +> Despite the name of this command, it gets you both a token and a special X.509 proxy, and it is the latter you are actually using to talk to rucio in these SL7 examples +{: .callout} \ No newline at end of file diff --git a/_includes/sl7_token_justin.md b/_includes/sl7_token_justin.md new file mode 100644 index 0000000..b74881f --- /dev/null +++ b/_includes/sl7_token_justin.md @@ -0,0 +1,34 @@ + +To get a token that allows you to access files (and rucio) in SL7 + +~~~ +setup justin +justin time # this just tells justin you want to authenticate +~~~ +{: .language-bash} + +The first time it will ask you to open a web browser, authenticate and enter the long string it delivers to you. + +~~~ +To authorize this computer to run the justin command, visit this page with your +usual web browser and follow the instructions within the next 10 minutes: +https://dunejustin.fnal.gov/authorize/_W_azUJcLhYmAOqClYz9RAsnKbDgzQ6lNA + +Check that the Session ID displayed on that page is -cprbbe + +Once you've followed the instructions on that web page, you can run the justin +command without needing to authorize this computer again for 7 days. +~~~ +{: .output} + +That gave you authorization to use justin. Now do the command again to get an actual token. + +~~~ +justin get-token +~~~ +{: .language-bash} + +You will have to do this sequence weekly as your justin access expires. 
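Not part of the original patch, but a quick sanity check can save confusion after `justin get-token`: confirm that the credentials actually landed where the examples above assume. The sketch below only uses the `/run/user/<uid>` token location shown earlier; `httokendecode` ships with the htgettoken tools but may not be present on every node, so it is guarded as optional.

~~~
# list the credential directory - expect a bearer token file bt_u<uid>
ls -l /run/user/$(id -u)/
# confirm the variable exported in the examples above points at that file
echo ${BEARER_TOKEN_FILE:-"BEARER_TOKEN_FILE is not set"}
# optionally decode the token to see its expiry, if httokendecode is installed
command -v httokendecode >/dev/null 2>&1 && httokendecode
~~~
{: .language-bash}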
+ +> ## Note: +> Despite the name of this command it gets you both a token and a special X.509 proxy and it is the latter you are actually using to talk to rucio in these SL7 examples \ No newline at end of file diff --git a/addtoc.sh b/addtoc.sh new file mode 100755 index 0000000..d403997 --- /dev/null +++ b/addtoc.sh @@ -0,0 +1,36 @@ + +python code/tocgen.py setup.md +python code/tocgen.py _episodes/01-introduction.md +python code/tocgen.py _episodes/01.5-documentation.md +python code/tocgen.py _episodes/02-storage-spaces.md +python code/tocgen.py _episodes/02.3-cvmfs.md +python code/tocgen.py _episodes/03-data-management.md +python code/tocgen.py _episodes/03.2-UPS.md +python code/tocgen.py _episodes/04-Spack.md +python code/tocgen.py _episodes/05-end-of-basics.md +python code/tocgen.py _episodes/05.1-improve-code-efficiency.md +python code/tocgen.py _episodes/10-closing-remarks.md +python code/tocgen.py _extras/about.md +python code/tocgen.py _extras/al9_setup.md +python code/tocgen.py _extras/al9_speedrun.md +python code/tocgen.py _extras/Common-Error-Messages.md +python code/tocgen.py _extras/ComputerSetup.md +python code/tocgen.py _extras/figures.md +python code/tocgen.py _extras/guide.md +python code/tocgen.py _extras/helpers.md +python code/tocgen.py _extras/InstallConda.md +python code/tocgen.py _extras/OfficialDatasets.md +python code/tocgen.py _extras/pnfs2xrootd.md +python code/tocgen.py _extras/putty.md +python code/tocgen.py _extras/howToBuild.md +python code/tocgen.py _extras/sites.md +python code/tocgen.py _extras/sl7_setup.md +python code/tocgen.py _extras/sl7_token.md +python code/tocgen.py _extras/sl7_speedrun.md +python code/tocgen.py _extras/Tokens.md +python code/tocgen.py _extras/TutorialsMasterList.md +python code/tocgen.py _extras/Windows.md + +#cat _includes/setup.toc.md | sed s/TableOfContents// > _includes/allTOC.md +#cat _includes/*.toc.md | sed s/TableOfContents// >> _includes/allTOC.md + diff --git a/code/makealltoc.py b/code/makealltoc.py new file mode 100644 index 0000000..f80034a --- /dev/null +++ b/code/makealltoc.py @@ -0,0 +1,12 @@ +import sys,os + +table = os.path.join('_includes','all_toc.md') +out = open(table,'w') +files = os.listdir('./_includes') +files.sort() +for file in files: + print (file) + if file.endswith('.toc.md'): + filename = os.path.basename(file) + out.write("%s\n" % '{% include '+filename+' %}') + \ No newline at end of file diff --git a/code/tocgen.py b/code/tocgen.py new file mode 100644 index 0000000..e205591 --- /dev/null +++ b/code/tocgen.py @@ -0,0 +1,52 @@ +import os,sys,string + +def dealwithline(input,toclines): + for line in input: + if line.startswith('{% include'): + print ("dealing with include",line) + includename = line.split('%')[1].strip().split(' ')[1] + newfile = os.path.join("_includes/",includename) + newlines = open(newfile,'r').readlines() + dealwithline(newlines,toclines) + continue + # remove any labels + # if " ## If you run into problems now or later, check out the [Common Error Messages]({{ site.baseurl }}/ErrorMessages) page and the [FAQ page](https://github.com/orgs/DUNE/projects/19/) > if that doesn't help, use [DUNE Slack](https://dunescience.slack.com/archives/C02TJDHUQPR) channel `#computing-training-basics` to ask us about the problem - there is always a new one cropping up. 
{: .challenge} +{%include setup.toc.md%} + ## Requirements @@ -59,10 +60,12 @@ If you do not have any FNAL accounts yet, you need to contact your supervisor a ### With CERN If you have a valid CERN account and access to CERN machines, you will be able to do many of the exercises as some data is available at CERN. The LArSoft tutorial has been designed to work from CERN. We strongly advise pursuing the FNAL computing account though. +See [lxplus documentation](#lxplus) for information about using lxplus. + If you have trouble getting access, please reach out to the training team several days ahead of time. Some issues take some time to resolve. Please do not put this off. We cannot help you the day of the tutorial as we are busy doing the tutorial. -## Step 3: Mission setup (rest of this page) +## Step 3: Mission setup @@ -81,15 +84,15 @@ Also check out our [Computing FAQ](https://github.com/orgs/DUNE/projects/19/view -## 0. Basic setup on your computer. +### Basic setup on your computer [Computer Setup]({{ site.baseurl }}/ComputerSetup) goes through how to find a terminal and set up xwindows on MacOS and Windows. You can skip this if already familiar with doing that. > ## Note -> The instructions directly below are for FNAL accounts. If you do not have a valid FNAL account but a CERN one, go at the bottom of this page to the [Setup on CERN machines](#setup_CERN) section. +> The instructions directly below are for FNAL accounts. If you do not have a valid FNAL account but a CERN one, go at the bottom of this page to the [Setup on CERN machines](#Setup) section. {: .challenge} -## 1. Kerberos business +### Kerberos business @@ -294,7 +298,7 @@ To set up your environment in SL7, the commands are: Log into a DUNE machine running Alma9 -> ### Launch an SL7 container +> #### Launch an SL7 container > > > ## gpvm apptainer > > ~~~ @@ -333,7 +337,7 @@ Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/ {: .output} -> ### See if ROOT works +> #### See if ROOT works > > > ## Try testing ROOT to make certain things are working > > ~~~ @@ -344,19 +348,20 @@ Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/ > > You should see a plot that updates and then terminates. You may need to `export DISPLAY=0:0`. > {: .solution} {: .challenge} - -### 4.2 Setting up DUNE software - Alma9 version + +### Alma9 version We are moving to the Alma9 version of unix. Not all DUNE code has been ported yet but if you are doing basic root analysis work, try it out. @@ -418,12 +423,12 @@ Here is how you set up basic DUNE software on Alma 9. We are using the super-com {: .callout} -### Caveats +### Alma9 Caveats -We don't have a full ability to rebuild DUNE Software packages such as LArSoft using Spack yet. We will be adding more functionality soon. Unless you are doing simple ROOT based analysis you will need to use the [SL7 Container](#SL7_setup) method for now. +We don't have a full ability to rebuild DUNE Software packages such as LArSoft using Spack yet. We will be adding more functionality soon. Unless you are doing simple ROOT based analysis you will need to use the [SL7 Container](#sl7-version) method for now. -> ## 4.3 Optional - make an alias! +> ## Optional - make an alias! > > ## See how you can make an alias so you don't have to type everything > > You can store this in your (minimal) .bashrc or .profile if you want this alias to be available in all sessions. The alias will be defined but not executed. 
Only if you type the command `dune_setup7` yourself.> Not familiar with aliases? Read below. > > @@ -458,15 +463,15 @@ We don't have a full ability to rebuild DUNE Software packages such as LArSoft u +## Step 5. Exercises - -## 5. Exercise! (For SL7 - it's easy) +### Exercise! (For SL7 - it's easy) This exercise will help organizers see if you reached this step or need help. 1) Start in your home area `cd ~` on the DUNE machine (normally CERN or FNAL) and create the file ```dune_presetup_2025_sl7.sh```. -Launch the *Apptainer* as described above in the [SL7 version](#SL7_setup) +Launch the *Apptainer* as described above in the [SL7 version](#sl7-version) Write in it the following: ~~~ @@ -504,7 +509,7 @@ date >& /exp/dune/app/users/${USER}/my_first_login.txt {: .language-bash} 4) With the above, we will check if you reach this point. However we want to tailor this tutorial to your preferences as much as possible. We will let you decide which animals you would like to see in future material, between: "puppy", "cat", "squirrel", "sloth", "unicorn pegasus llama" (or "prefer not to say" of course). Write your desired option on the second line of the file you just created above. -## 5. Exercise! (For AL9 - it's easy) +### Exercise! (For AL9 - it's easy) This exercise will help organizers see if you reached this step or need help. 1) Start in your home area `cd ~` on the DUNE machine (normally CERN or FNAL) and create the file ```dune_presetup_2025_al9.sh```. @@ -539,7 +544,7 @@ date >& /exp/dune/app/users/${USER}/my_first_login.txt > If you experience difficulties, please ask for help in the Slack channel [#computing-training-basics](https://dunescience.slack.com/archives/C02TJDHUQPR). Please mention in your message this is about the Setup step 5. Thanks! {: .challenge} -## 6. Getting setup for streaming and grid access +## Step 6. Getting authentication for streaming and grid access In addition to your kerberos access, you need to be in the DUNE VO (Virtual Organization) to access to global DUNE resources. This is necessary in particular to stream data and submit jobs to the grid. If you are on the DUNE collaboration list and have a Fermilab ID you should have been added automatically to the DUNE VO. -### Tokens method +### Tokens method We have moved from proxies to tokens for authentication as of 2025. -#### 1. Get and store your token +#### Get and store your token [Scientific Linux 7]({{ site.baseurl }}/Tokens/index.html#sl7-tokens-) @@ -649,7 +654,7 @@ Storing condor credentials for dune you should only have to do the web thing once/month -#### 2. Tell the system where your token is + #### 2. Tell the system where your token is ~~~ @@ -670,7 +675,7 @@ With this done, you should be able to submit jobs and access remote DUNE storage -## Set up on CERN machines +## Set up on CERN machines @@ -679,11 +684,11 @@ With this done, you should be able to submit jobs and access remote DUNE storage See [https://github.com/DUNE/data-mgmt-ops/wiki/Using-Rucio-to-find-Protodune-files-at-CERN](https://github.com/DUNE/data-mgmt-ops/wiki/Using-Rucio-to-find-Protodune-files-at-CERN) for instructions on getting full access to DUNE data via metacat/rucio from lxplus. -### 1. Setup in Alma9 +### Setup in Alma9 -The directions above at: [AL9_setup](#AL9_setup) above should work directly at CERN, do those and proceed to step 3. +The directions above at: [AL9_setup](#alma9-version) above should work directly at CERN, do those and proceed to step 3. -### 2. 
For SL7 +### For SL7 #### Source the DUNE environment SL7 setup script @@ -691,7 +696,7 @@ CERN access is mainly for ProtoDUNE collaborators. If you have a valid CERN ID a log into `lxplus.cern.ch` -fire up the Apptainer as explained in [SL7 Setup](#SL7_setup) but with a slightly different version as mounts are different. +fire up the Apptainer as explained in [SL7 Setup](#sl7-version) but with a slightly different version as mounts are different. ~~~ {% include apptainer_cern.md %} @@ -716,13 +721,13 @@ Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/ ~~~ {: .output} -### 3. Getting authentication for data access +### Getting authentication for data access -If you have a Fermilab account already, get a token as described in [tokens](#tokens) +If you have a Fermilab account already, get a token as described in [tokens](#tokens-method) -### 4. Access tutorial datasets +### Access tutorial datasets Normally, the datasets are accessible through the grid resources. But with your CERN account, you may not be part of the DUNE VO yet (more on this during the tutorial). We found a workaround: some datasets have been copied locally for you. You can check them here: ~~~ ls /afs/cern.ch/work/t/tjunk/public/may2023tutorialfiles/ @@ -734,7 +739,7 @@ PDSPProd4_protoDUNE_sp_reco_stage1_p1GeV_35ms_sce_datadriven_41094796_0_20210121 ~~~ {: .output} -### 5. Notify us +### Notify us You should be good to go. If you are experiencing issues, please contact us as soon as possible. Be sure to mention "Setup on CERN machines" if that is the case, and we will do our best to assist you. @@ -747,16 +752,16 @@ If you are experiencing issues, please contact us as soon as possible. Be sure t {: .checklist} > ## Issues -> If you have issues here, please go to the [#computing-training-basics](https://dunescience.slack.com/archives/C02TJDHUQPR)Slack channel to get support. Please note that you are on a CERN machine in your message. Thanks! +> If you have issues here, please go to the [#computing-training-basics](https://dunescience.slack.com/archives/C02TJDHUQPR) Slack channel to get support. Please note that you are on a CERN machine in your message. Thanks! {: .discussion} -### Useful Links +## Useful Links The [DUNE FAQ][DUNE FAQ] on GitHub. [Wiki page][dune-wiki-interactive-resources] on DUNE's interactive computing resources, including tips on using Kerberos and VNC. -{%include links.md%} +{% include links.md %} [SL7_to_Alma9]: https://wiki.dunescience.org/wiki/SL7_to_Alma9_conversion#SL7_to_Alma_9_conversion diff --git a/setup.toc.md b/setup.toc.md new file mode 100644 index 0000000..157a1d0 --- /dev/null +++ b/setup.toc.md @@ -0,0 +1,32 @@ +- [Objectives](#objectives) +- [Requirements](#requirements) +- [Step 1: DUNE membership](#step 1: dune membership) +- [Step 2: Getting accounts](#step 2: getting accounts) + - [With FNAL](#with fnal) + - [With CERN](#with cern) +- [Step 3: Mission setup (rest of this page)](#step 3: mission setup (rest of this page)) +- [0. Basic setup on your computer.](#0. basic setup on your computer.) +- [1. Kerberos business](#1. kerberos business) +- [2. ssh-in](#2. ssh-in) +- [3. Get a clean shell](#3. 
get a clean shell) +- [Software setup ](#software setup ) + - [4.1 Setting up DUNE software - Scientific Linux 7 version ](#4.1 setting up dune software - scientific linux 7 version ) + - [Caveats for later](#caveats for later) + - [4.2 Setting up DUNE software - Alma9 version ](#4.2 setting up dune software - alma9 version ) + - [Caveats for later](#caveats for later) +- [4.2 Setting up DUNE software - Alma9 version](#4.2 setting up dune software - alma9 version) + - [Caveats](#caveats) +- [5. Exercise! (For SL7 - it's easy)](#5. exercise! (for sl7 - it's easy)) +- [5. Exercise! (For AL9 - it's easy)](#5. exercise! (for al9 - it's easy)) +- [6. Getting setup for streaming and grid access](#6. getting setup for streaming and grid access) + - [Tokens method ](#tokens method ) + - [1. Get and store your token](#1. get and store your token) + - [2. Tell the system where your token is](#2. tell the system where your token is) +- [Set up on CERN machines ](#set up on cern machines ) + - [1. Setup in Alma9](#1. setup in alma9) + - [2. For SL7](#2. for sl7) + - [Source the DUNE environment SL7 setup script](#source the dune environment sl7 setup script) + - [3. Getting authentication for data access](#3. getting authentication for data access) + - [4. Access tutorial datasets](#4. access tutorial datasets) + - [5. Notify us](#5. notify us) + - [Useful Links](#useful links) \ No newline at end of file
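The `*.toc.md` files added throughout this patch are generated rather than hand-written. A minimal regeneration pass, assuming you run it from the repository root with the scripts this patch adds (`addtoc.sh`, `code/tocgen.py`, `code/makealltoc.py`):

~~~
# regenerate every per-page TOC include (one tocgen.py call per page, as listed in addtoc.sh)
./addtoc.sh
# rebuild the combined _includes/all_toc.md from the individual _includes/*.toc.md files
python code/makealltoc.py
# to cover a new page, append another "python code/tocgen.py <path-to-page>.md" line to addtoc.sh
~~~
{: .language-bash}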