Data Release Production

Travel Calendar

We use Google Calendar to keep track of group travel. Please ask Jim, Robert or Yusra for access. Use it to share details of any substantial travel plans: vacations, conferences, etc. It is not expected that you record the minutiae of everyday life: please don’t bother telling us about your trip to the dentist, DMV, etc!

JIRA Usage

Use the following JIRA labels to identify related work. Please feel free to define more labels as needed; list those which might be of interest to others here. See also the project-wide Labels.

Label       Meaning
auxtel      Work related to the Auxiliary Telescope.
galmodel    Work related to galaxy model fitting.
hsc         Work requested and/or carried out by the HSC team.
pfs         Work requested and/or carried out by the PFS team.

Princeton HPC Systems

In addition to the regular LSST-provided compute systems, DRP team members have access to a number of clusters hosted by the Research Computing Group in Princeton. Please refer to the Research Computing Group’s pages for information on getting started, how to connect with SSH, usage policies, FAQs, etc. Be aware that you must comply with all their rules when using these systems.

Obtaining Accounts

Accounts are issued on demand at the request of an appropriate PI. For our group, that means you should speak to either Robert or Yusra, and they will arrange one for you. When your account has been created, you should check that you are a member of the groups astro, hsc, and lsst (use the groups command).
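For example, once your account is active you can confirm these memberships from a login shell (the output may include additional groups):

groups
# the output should include: astro hsc lsst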

Available Systems

Typically, LSST (and HSC/DECam) data processing is carried out using the Tiger cluster.

The Princeton astronomical software group owns a head node on the Tiger cluster called tiger2-sumire. You can use this node for building software and running small and/or short-lived jobs.
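If you are already inside the Princeton network, you can log in to this node directly with something like the following (substitute your own NetID; see the SSH configuration tips below for connecting from outside):

ssh <YourNetID>@tiger2-sumire.princeton.edu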

Shared Stack

The Tiger cluster has access to regularly-updated installations of the LSST “stack” through the shared /tigress filesystem. The stack is automatically updated every Thursday evening (i.e., 24h after a new weekly gets cut). To initialize the stack in your shell, run:

source /tigress/HSC/LSST/stack/loadLSST.sh
setup lsst_distrib
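As a quick sanity check that the stack is active, you can list the version of lsst_distrib currently set up (the eups command is provided by loadLSST.sh):

eups list -s lsst_distrib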

Note

The current default shared stack, described above, is a symbolic link to the latest build using the post-RFC-584 Conda environment. Older builds are also available in /tigress/HSC/LSST/stack with the syntax stack_YYYYMMDD.

Repositories

The primary HSC/LSST butler data repository is located at /projects/HSC/repo/main. All raw HSC data on disk has been ingested into this Gen3 repo. For more information on accessing and using this repository, including setting up the required permissions, see the README.md file contained within it.
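As a hedged example, once your permissions are in place you can inspect the repository with the command-line butler provided by the stack, for instance by listing the available collections:

butler query-collections /projects/HSC/repo/main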

Storage

HSC data (both public data releases and private data, which may not be shared outside the collaboration) are available in /tigress/HSC. This space may also be used to store your results. Note however that space is at a premium; please clean up any data you are not actively using. Also, be sure to set umask 002 so that your colleagues can reorganize the shared space.
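For example, to make that umask the default in every new shell (assuming you use bash), you can append it to your startup file:

echo 'umask 002' >> ~/.bashrc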

For temporary data processing storage, shared space is available in /scratch/gpfs/<YourNetID> (you may need to make this directory yourself). This General Parallel File System (GPFS) space is large and visible from all Princeton clusters; however, it is not backed up. More information on Princeton cluster data storage can be found online.
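A minimal sketch of creating your personal scratch directory (this assumes your username on the cluster matches your NetID):

mkdir -p /scratch/gpfs/$USER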

Space is also available in /scratch/<YourNetID> and in your home directory, but note that these are not shared across clusters (and, in the case of /scratch, not backed up).

Use the checkquota command to check your current storage usage and limits. More information on storage limits, including how to request a quota increase, can be found on the Research Computing Group’s storage pages.

Cluster Usage

Jobs are managed on these systems using SLURM; refer to its documentation for details.

It is occasionally useful to be able to bring up an interactive shell on a compute node. The following should work:

salloc --nodes 1 --ntasks 16 --time=1:00:00  # hh:mm:ss

A list of all available nodes is given using the snodes command. To get an estimate of the start time for any submitted jobs, use this command:

squeue -u $USER --start

See Useful Slurm Commands for additional tools which may be used in conjunction with Slurm.
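For non-interactive work, a minimal batch script might look like the sketch below; the job name, resource requests, and final command are illustrative placeholders, so adjust them to your own processing task:

#!/bin/bash
#SBATCH --job-name=drp-example
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=1:00:00  # hh:mm:ss

# set up the shared stack, then run your processing command
source /tigress/HSC/LSST/stack/loadLSST.sh
setup lsst_distrib
echo "Hello from $(hostname)"  # replace with your actual command

Submit the script with sbatch and monitor it with squeue -u $USER.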

Connecting from Outside Princeton

Access to all of the Princeton clusters is only available from within the Princeton network. If you are connecting from outside, you will need to bounce through another host on campus first. Options include a Peyton Hall machine (for example, coma.astro.princeton.edu) or the Research Computing gateway, tigressgateway.princeton.edu.

If you choose the first option, you may find the ProxyCommand option to SSH helpful. For example, adding the following to ~/.ssh/config will automatically route your connection to the right place when you run ssh tiger:

Host tiger
    HostName tiger2-sumire.princeton.edu
    ProxyCommand ssh coma.astro.princeton.edu -W %h:%p

The following SSH configuration allows access via the Research Computing gateway:

Host tigressgateway
    HostName tigressgateway.princeton.edu
Host tiger* tigressdata*
    ProxyCommand ssh -q -W %h:%p tigressgateway.princeton.edu
Host tiger
    HostName tiger2-sumire.princeton.edu

(It may also be necessary to add a User line under Host tigressgateway if there is a mismatch between your local and Princeton usernames.) Entry to tigressgateway requires 2FA; we recommend using the ControlMaster feature of SSH to persist connections, e.g.:

ControlMaster auto
ControlPath ~/.ssh/controlmaster-%r@%h:%p
ControlPersist 5m

See also the Peyton Hall tips on using SSH.

Help & Support

Contact the Computational Science and Engineering Support group at cses@princeton.edu for technical support when using these systems. Note that neither the regular Peyton Hall sysadmins (help@astro) nor the LSST Project can provide help.