Using the lsst-login Servers

The following login nodes are run by NCSA for access to select Rubin Observatory development resources at NCSA:


To get an account, see the Onboarding Checklist.

This page is designed to assist developers in use of the lsst-login servers:

  1. Overview
  2. Connecting and Authenticating
  3. Development Work
  4. Select Appropriate Developer Tools
  5. Load the LSST Environment
  6. Validation/Test Data Sets
  7. Configure Git LFS
  8. Configure Remote Display with xpra


Overview

The lsst-login servers are primarily intended as bastions used to access other resources at NCSA. Additional capabilities include:

  • light development work with short-running processes that require modest resources (e.g., building docs, short compilations against the LSST software stack)
  • viewing files (e.g., FITS files)

Users are encouraged to submit batch jobs to perform work that requires more significant resources. Please see Using the Rubin Batch Systems for more information.

The lsst-login nodes have access to the LDF file systems.

For system status and issues:

Connecting and Authenticating

You can log into Rubin Observatory development servers at NCSA with your NCSA account as follows:

  • NCSA username and password OR valid Kerberos ticket from workstation/laptop, AND
  • NCSA Duo authentication

You can reset your NCSA password at the following URL:

Information on setting up NCSA Duo is available at the following URL:

If you are using OpenSSH on your local machine and you wish to use Kerberos from your local machine (instead of entering your password on the login node), you could add something like this to your local ~/.ssh/config file:

# Prefer Kerberos (GSSAPI) tickets, falling back to interactive methods
GSSAPIAuthentication yes
PreferredAuthentications gssapi-with-mic,keyboard-interactive,password

The Kerberos domain for the lsst-login servers is NCSA.EDU, so something like this may work to generate a Kerberos ticket on your local machine:

kinit username@NCSA.EDU
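
To confirm that a ticket was created and check its expiry, run:

klist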

You may wish to use an lsst-login node as a “jump host” when connecting to nodes on the internal network. If you are using OpenSSH on your local machine, you can do this with a stanza like the following in your ~/.ssh/config file (the destination hostname here is a placeholder; substitute the internal node you wish to reach):

Host internal-node.ncsa.illinois.edu
   User ncsausername
   ProxyJump lsst-login01.ncsa.illinois.edu

When using an lsst-login node as a “jump host” you may also wish to configure port forwarding through the lsst-login node to the internal cluster node. To do that, add a DynamicForward line to the same kind of stanza in your OpenSSH config file:

Host internal-node.ncsa.illinois.edu
   User ncsausername
   ProxyJump lsst-login01.ncsa.illinois.edu
   DynamicForward yourportnumber
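
With DynamicForward in place, OpenSSH runs a SOCKS proxy on the chosen local port. As a hypothetical illustration (internal-service.example is a placeholder), a SOCKS-aware client can then reach internal hosts through the tunnel:

curl --socks5-hostname localhost:yourportnumber http://internal-service.example/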

You may also wish to reuse a single connection to/through an lsst-login node via a control socket/multiplexing. See for example OpenSSH Cookbook - Multiplexing.
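
As a minimal sketch (these are standard OpenSSH options; adjust the socket path and persistence interval to taste), connection sharing for a login node looks like this in ~/.ssh/config:

Host lsst-login01.ncsa.illinois.edu
   ControlMaster auto
   ControlPath ~/.ssh/cm-%r@%h:%p
   ControlPersist 5m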

Development Work

lsst-login nodes can be used for (light) development work. Users are encouraged to utilize batch compute nodes when more significant resources are required. Please see Using the Rubin Batch Systems for more information.

Select Appropriate Developer Tools


Although the material presented below remains valid, the shared stack from May 2020 onwards (/software/lsstsw/stack_20200504) provides the complete toolchain required for Science Pipelines development. It is no longer necessary to load a software collection to work with the shared stack.

The lsst-login systems are configured with the latest CentOS 7.x as their operating system. This release of CentOS provides an old set of development tools, centered around version 4.8.5 of the GNU Compiler Collection (GCC). Updated toolchains are made available through the “Software Collection” system. The following Software Collections are currently available:

Name Description
devtoolset-6 Updated compiler toolchain providing GCC 6.3.1.
devtoolset-7 Updated compiler toolchain providing GCC 7.3.1.
devtoolset-8 Updated compiler toolchain providing GCC 8.3.1.

To enable a particular Software Collection use the scl command. For example:

scl enable devtoolset-8 bash
gcc --version
gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


Code compiled by different versions of GCC may not be compatible: it is generally better to stick to a particular toolchain for a given project. In particular, if you are using a shared stack you must use the matching toolchain.

You may wish to automatically enable a particular software collection every time you log in to lsst-login01 and other Rubin Observatory development systems at NCSA. Take care if you do this: it’s easy to accidentally start recursively spawning shells and run out of resources, or to lock yourself out of machines which don’t have the particular collection you’re interested in installed. If you are using Bash (the default shell on the lsst-login servers), try placing the following at the end of ~/.bash_profile and customising the list of desired_scls.

# User-specified space-delimited list of SCLs to enable.
desired_scls="devtoolset-8"

# Only do anything if /usr/bin/scl is executable.
if [ -x /usr/bin/scl ]; then

    # Select those of the user's desired SCLs which are both available
    # and not currently enabled.
    scls=()
    avail_scls=$(scl --list)
    for scl in $desired_scls; do
        if [[ $avail_scls =~ $scl && ! $X_SCLS =~ $scl ]]; then
            scls+=("$scl")
        fi
    done

    # Use `tty -s` to output messages only if connected to a terminal;
    # avoids causing problems for non-interactive sessions.
    if [ ${#scls[@]} != 0 ]; then
        tty -s && echo "Enabling ${scls[@]}."
        exec scl enable ${scls[@]} bash
    else
        tty -s && echo "No software collections to enable."
    fi
fi

Load the LSST Environment

We provide a ready-to-use “shared” version of the LSST software stack to enable developers to get up and running quickly with no installation step. The shared stack includes a fully-fledged Miniconda-based Python environment, a selection of additional development tools, and a selection of builds of the lsst_distrib meta-package. It is located on GPFS-based network storage and is therefore cross-mounted across a variety of Rubin Observatory development systems at the Data Facility, including those configured as part of the HTCondor pool and Verification Cluster. The current stack is regularly updated to include the latest weekly release, which is tagged as current.

The following stacks are currently being updated:

Path Toolchain Description
/software/lsstsw/stack_20200515 Internal (Conda) Provides weeklies w_2020_19 and later of lsst_distrib, and w_2020_20 and later of lsst_sims. Based on scipipe_conda_env 46b24e8 with the following additional packages installed:

  • bokeh
  • cx_Oracle
  • dask-jobqueue
  • datashader
  • pyct
  • fastparquet
  • holoviews
  • hvplot
  • ipdb
  • jupyter
  • numba
  • panel
  • pep8
  • psycopg2
  • pyflakes
  • pyviz_comms


When using a shared stack, you must use the corresponding developer toolchain. If this is listed in the table above as “Internal (Conda)” then no further action on your part is required; otherwise, see above for details of how to Select Appropriate Developer Tools.

In addition, the following symbolic links point to particular versions of the stack:

Path Description
/software/lsstsw/stack The latest version of the stack.

Add a shared stack to your environment and set up the latest build of the LSST applications by running, for example:

source /software/lsstsw/stack/loadLSST.bash
setup lsst_apps

(substitute loadLSST.csh, loadLSST.ksh or loadLSST.zsh, depending on your preferred shell).
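
Because the shared stack carries multiple tagged builds, you can also set up a specific weekly rather than the current default; for example, using a weekly tag listed in the table above:

setup lsst_distrib -t w_2020_19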


Initializing the stack will prepend the string (lsst-scipipe) to your prompt. If you wish, you can disable this by running:

conda config --set changeps1 false

Although the latest weeklies of LSST software are regularly installed into the shared stacks, the rest of their contents are held fixed (to avoid API or ABI incompatibilities with old stack builds). We therefore periodically retire old stacks and replace them with new ones. The following retired stacks are currently available:

Path Toolchain Description
/software/lsstsw/stack_20171023 devtoolset-6 Provides a selection of weekly and release builds dating between October 2017 and October 2018.
/software/lsstsw/stack_20181012 devtoolset-6 Provides weeklies w_2018_41 through w_2019_12; release candidates v17_0_rc1, v17_0_rc2, and v17_0_1_rc1; and releases v_17_0 and v_17_0_1. Based on the pre-RFC-584 Conda environment.
/software/lsstsw/stack_20190330 devtoolset-6 Provides weekly w_2019_12 through w_2019_38 and daily d_2019_09_30. Based on the post-RFC-584 Conda environment.
/software/lsstsw/stack_20191001 devtoolset-8 Provides weeklies w_2019_38 through w_2019_42.
/software/lsstsw/stack_20191101 devtoolset-8 Provides weekly w_2019_43 through w_2020_09 of lsst_distrib, and w_2019_43 through w_2020_07 of lsst_sims. Based on scipipe_conda_env 4d7b902 (RFC-641).
/software/lsstsw/stack_20200220 devtoolset-8 Provides weekly w_2020_07 through w_2020_17 of lsst_distrib, and weekly w_2020_10 through w_2020_16 of lsst_sims. Based on scipipe_conda_env 984c9f7 (RFC-664).
/software/lsstsw/stack_20200504 Internal (Conda) Provides weeklies w_2020_18 and w_2020_19 of lsst_distrib. Based on scipipe_conda_env 2deae7a (RFC-679).
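
For example, to work with a retired stack built against devtoolset-8, enable the matching collection before sourcing the stack (a sketch combining the commands shown above; the weekly tag is one listed for that stack):

scl enable devtoolset-8 bash
source /software/lsstsw/stack_20200220/loadLSST.bash
setup lsst_distrib -t w_2020_10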

Administrators may wish to note that the shared stack is automatically updated using the script ~lsstsw/shared-stack/, which is executed nightly by cron.

Validation/Test Data Sets

There are two cron jobs that update a set of validation data repositories and test data repositories. These updates run overnight on the lsst-dev system. In most cases this is a straightforward git pull, but if corruption is detected, the repository is cloned afresh. The verification data are currently used primarily by validate_drp to measure various metrics on the reduced data. The test data serve a variety of purposes, but are generally included via a setupOptional in a package table file.
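
For illustration, a hypothetical package that wants test data set up when available might include a line like the following in its EUPS table file (the package name is a placeholder):

# ups/mypackage.table: set up the test data package if it is present
setupOptional(testdata_example)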

Test data location is: /project/shared/data/test_data

Included test data repositories are:


Validation data location is: /project/shared/data/validation_data

Included validation data repositories are:


These are maintained by the lsstsw user (this is the same user that curates the shared stack on the lsst-dev system). Please ask in the #dm-infrastructure Slack channel in case of problems.

Configure Git LFS


Although the material presented below remains valid, the shared stack from May 2020 onwards (/software/lsstsw/stack_20200504) provides Git LFS as part of the environment: it is no longer necessary to explicitly run setup as described below (although it is still necessary to follow DM’s Git LFS guide). The setup step is only needed for older shared stacks, those marked with a devtoolset toolchain in the table above; for newer shared stacks (toolchain “Internal (Conda)”), it is not relevant.

After you have initialized a shared stack, you can enable Git LFS using EUPS:

setup git_lfs
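
You can verify that the Git LFS client is now available with:

git lfs version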

The first time you use Git LFS you’ll need to configure it by following these steps from DM’s Git LFS guide:

  1. Basic configuration
  2. Configuring Git LFS

Configure Remote Display with xpra

xpra can be thought of as “screen for X” and offers advantages over VNC. It can be very handy and efficient for remote display from Rubin Observatory development compute nodes to your local machine (e.g., when debugging with ds9): it is much faster than a regular X connection when you don’t have a lot of bandwidth (e.g., working remotely), and it saves state between connections. Here’s how to use it:

On lsst-login01:

xpra start :10
export DISPLAY=:10

You may have to choose a different display number (>10) if :10 is already in use.
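
With the display exported, any X client you start in that shell on lsst-login01 (for example ds9, as mentioned above) renders to the xpra session:

ds9 &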

On your local machine, do:

xpra attach ssh:lsst-login01.ncsa.illinois.edu:10

(adjust the host and display number to match where you started xpra). If the connection needs debugging, you can also pass verbose SSH options explicitly:

xpra attach --ssh="ssh -vvv -o='PreferredAuthentications=gssapi-with-mic,keyboard-interactive,password'" ssh:lsst-login01.ncsa.illinois.edu:10

You may leave that running, or put it in the background and later use:

xpra detach

Then you can open windows on lsst-login01 (with DISPLAY=:10) and they will appear on your machine. If you now kill the xpra attach on your machine, you’ll lose those windows. When you reattach, they’ll reappear.


xpra requires the use of Python 2.

If you are using a Python 3 LSST Stack, you’ll encounter an error like the following:

File "/ssd/lsstsw/stack3_20171021/stack/miniconda3-4.3.21-10a4fa6/Linux64/pyyaml/3.11.lsst2/lib/python/yaml/", line 284
  class YAMLObject(metaclass=YAMLObjectMetaclass):
SyntaxError: invalid syntax

The solution in this case is to start xpra in a separate shell where you haven’t yet setup the Python 3 LSST Stack.


If you run into issues getting xpra to authenticate when you attempt to attach, you may find that including explicit authentication options helps:

xpra attach --ssh="ssh -o='PreferredAuthentications=gssapi-with-mic,keyboard-interactive,password'" ssh:lsst-login01.ncsa.illinois.edu:10


It is possible to use xpra through a tunneled connection to an “interior” node that also has xpra, e.g., when using a login node as a “jump host” to reach a submit node, as described above, you may wish to use xpra on the submit node.

First, make your tunneled connection to the destination host (as detailed above).

Then attach to the xpra session on the submit host by also telling xpra to jump/tunnel through the login node (the submit hostname here is a placeholder):

xpra attach --ssh="ssh -J lsst-login01.ncsa.illinois.edu" ssh:submit-node.ncsa.illinois.edu:10