Tech improves pandemic life

I can’t imagine going through the COVID-19 pandemic without computers. Tech improves pandemic life, and it makes it easier for us to make good decisions.

For reasons of both personal caution and what I see as a moral duty, I am probably in the 80th percentile for cautious behavior during the pandemic. I live alone, and my job lends itself to remote work for almost everything. What’s more, my workplace is a socially-conscious liberal arts college. As a result, I interact with very few people (those I do see are always masked-up).

That lifestyle is only sustainable because of computer technology. I buy and pick up groceries through an app. Meetings take place over video chats. Songs or podcasts play in the background while I cook. I can stream almost anything I want to see. I’ve continued to learn and to work using some excellent rectangles.

(Image: Ron Swanson declaring, "This is an excellent rectangle." Tech life.)

There are tradeoffs, of course, but I have basically lived this way since March. In doing so, I have weathered the pandemic as well as I could hope (so far).

The national dialogue now includes a lot of chatter about how to stay safe for the holidays. I’m cautious and want to model good behavior. That means I’ll be on FaceTime for Thanksgiving, Christmas, and New Year’s Eve. That’s not great, and it’ll be sad not to be physically visiting family.

But for people like me, the alternative to a FaceTime holiday isn’t an in-person holiday, but a canceled holiday, spent in isolation. Thanks to the people in my industry, I don’t have to do that. Technology brings people together. It’s one reason I remain idealistic about the work I do.

Amidst the tragedies and terrors of 2020, pause to appreciate the age we live in and the cool things we’ve invented. Tech improves pandemic life – and improves life in general. There’s lots to worry about if you want (conspiracy theories, AI risk, etc.), but I’m happy to live in a technologically advanced society.

Jupyterhub user issues: a 90% improvement

(Photo of Jupiter, the planet, as a play on words. Jupyter errors are not to be confused with Jupiter errors.)

At Earlham Computer Science we have to support a couple dozen intro CS students per semester (or, in COVID times, per 7-week term). We teach Python, and we want to make sure everyone has the right tools to succeed. To do that, we use the Jupyterhub notebook environment, and we periodically respond to user issues related to running notebooks there.

A couple of dozen people running Python code on a server can gobble up resources and induce problems. Jupyter has historically been our toughest service to support, but we’ve vastly improved. In fact, as I’ll show, we have reduced the frequency of incidents by about 90 percent over time.

Note: we only recently began automatic tracking of uptime, so that data is almost useless for comparisons over time. This is the best approximation we have. If new information surfaces to discredit any of my methods, I’ll change it, but my colleagues have confirmed to me that this analysis is at least plausible.

Retrieving the raw data

I started my job at Earlham in June 2018. In November 2018, we resolved an archiving issue with our help desk/admin mailing list, and that archive gives us our first dataset.

I ran a grep for the “Messages:” string in the thread archives:

grep 'Messages:' */thread.html # super complicated

I did a little text processing to generate the dataset: regular expression find-and-replace in an editor. That reduced the data to a column of YYYY-Month values and a column of message counts.
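Roughly speaking, that step can also be done in one pass with sed. This is a sketch: the exact pattern depends on how the archive pages format the "Messages:" line, and the output filename is made up, so treat it as illustrative rather than the precise commands I ran.

grep 'Messages:' */thread.html \
  | sed -E 's|^([0-9]{4}-[A-Za-z]+)/thread.html.*Messages:[^0-9]*([0-9]+).*|\1 \2|' \
  > messages-by-month.dat   # two columns: YYYY-Month and message count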

Then I went and searched for all lines with subject matching “{J,j}upyter” in the subject.html files:

grep -i jupyter {2018,2019,2020}*/subject.html 

I saved it to jupyter-messages-18-20.dat. I did some text processing – again regexes, find and replace – and then decided that followup messages are not what we care about and ran uniq against that file. A few quick wc -l commands later and we find:

  • 21 Jupyter requests in 2018
  • 17 Jupyter requests in 2019
  • 19 Jupyter requests in 2020
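For reference, the counting boiled down to a per-year pipeline like the one below. This is a sketch: the real thing involved an editor and a couple of manual passes, and sort -u here stands in for the uniq step.

# count unique Jupyter-related subject lines per year
for year in 2018 2019 2020; do
  grep -i jupyter ${year}*/subject.html \
    | sed -E 's/^[^:]*: *//; s/^(Re|Fwd): *//' \
    | sort -u \
    | wc -l
done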

One caveat is that in 2020 we moved a lot of communication to Slack. This adds some uncertainty to the data. However, I know from context that Jupyter requests have continued to flow disproportionately through the mailing list. As such, Slack messages are likely to be the sort of redundant information already collapsed by uniq in the text processing.

Another qualifier is that a year or so ago we began using GitLab’s Issues as a ticket-tracking system. I searched that as well and found 11 more Jupyter issues, all from 2020. Fortunately, only one of those was a problem that did not overlap with a mailing list entry.

Still, I think those raw numbers are a good baseline. At one level, it looks bad. The 2020 number has barely budged from 2018 and in fact it’s worse than 2019. That’s misleading, though.

Digging deeper into the data

Buried in that tiny dataset is some good news about the trends.

For one thing, those 21 Jupyter requests were spread across only 4 months of the year – in other words, we were wildly misconfigured and putting out a lot of unnecessary technical fires. (That’s nobody’s fault – it’s primarily because my position did not exist for about a year before I arrived, so things had atrophied.)

What’s more, by inspection, about half of this year’s 19 are password or feature requests rather than real problems, whereas the 17 we saw in 2019 were, I think, genuine issues.

So in terms of Jupyter problems in the admin list, I find:

  • around 20 in the latter third of 2018
  • 17 in ALL OF 2019
  • only two (granted one was a BIG problem but still only 2) in 2020

That’s a 90% reduction in Jupyterhub user issues over three years, by my account.

“That’s amazing, how’d you do it?”

Number one: thank you, imaginary reader, you’re too kind.

Number two: a lot of ways.

In no particular order:

  1. We migrated off of a VM, which given our hardware constraints was not conducive to a resource-intensive service like Jupyterhub.
  2. Gradually over time, we’ve upgraded our storage hardware, as some of it was old and (turns out) failing.
  3. We added RAM. When it comes to RAM, some is good, more is better, and too much is just enough.
  4. We manage user directories better. We export these over NFS but have done all we can to reduce network dependencies. That significantly reduces the amount of time the CPU spends twiddling its thumbs (there’s a quick way to check that, sketched just below).
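If you want to see how much of that thumb-twiddling is happening on a given box, iostat (from the sysstat package) is one quick way to check – the %iowait column is the tell. This is just an illustrative invocation:

iostat -c 5 3   # CPU utilization, including %iowait, sampled every 5 seconds, 3 times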

What’s more, we’re not stopping here. We’re currently exploring load-balancing options – for example, running Jupyter notebooks through a batch scheduler like Slurm, or potentially a containerized environment like Kubernetes. There are several solutions, but we haven’t yet determined which is best for our use case.

This is the work of a team of people, not just me, but I wanted to share it as an example of growth and progress over time. It’s incremental but it really does make a difference. Jupyterhub user issues, like so many issues, are usually solvable.

I’m making websites!

As the exclamation point indicates, I’m excited to announce this: I’m now making websites again!

A bit over two years ago, I left self-employment as an all-around tech services provider and joined my alma mater, Earlham College. That was a good move. I have built my skills across the board, and having this job has kept my career steady through events like COVID.

However, I’ve missed some of the work from those days, as well as the independence. I don’t like having only one income source in a time of high economic unpredictability. I also want to continue expanding my skillset, growing my portfolio, and controlling the course of my own career.

For all these reasons, I’m accepting new projects effective now. You can click here to see plans and examples, or reach out (cearley@craigearley.com) to hire me to make a website for you.

My particular passions are making websites for individuals and small businesses (including online stores). Most likely if you’re at a larger scale than that, you have in-house web and sysadmin teams anyway. 🙂 If what I offer is right for you, please reach out. I look forward to hearing from you.

Meet our Terrestrial Mapping Platform!

(Just a nice photo from Iceland.)

I’m excited to share that the Earlham field science program is now sharing the core of our Terrestrial Mapping Platform (TMP)! This is very much a work-in-progress, but we’re excited about it and wanted to share it as soon as we could.

We had to delay the 2020 Iceland trip because of COVID-19. That of course pushed back the implementation and case study component of this project, which was Iceland-centric. But we are moving forward at full speed with everything else. As Earlham has now started the new academic year, we have also resumed work on the TMP.

The project is a UAV hardware-software platform for scientists. It consists of:

  • a consumer-grade drone for capturing images
  • flight plan generation software and application to automate drone flights
  • data analysis workflows for the images – visible light and NIR, assembled into 2D and 3D models

All of this goes toward making science more accessible to a broader range of domain scientists. Archaeologists and glaciologists are our current target cohort, but many more could find use for this work if it’s successful.

We will make all of this accessible in repositories with open licenses on our GitLab instance. Some are already available. Others we will share once we review them for (e.g.) accidentally-committed credentials.

That was all planned, if delayed. We’re also using our extra year of preparation time to make the project better in a few ways:

  • Reevaluating our choice of UAV make and model
  • Prettifying our web presence, which very much includes blog posts like this
  • Reducing the friction and pain points in our current workflow
  • Making our code and infrastructure better in general (I’ve covered my growing emphasis on quality here before)

The team mostly comprises students and faculty (of whom I’m the junior-most). Additionally, there are a few on-site partners in Iceland and innumerable personal supporters who make this possible. We’ll be sharing more at the Earlham Field Science blog as we go. I will undoubtedly share more here as well.

COVID is bad, but we want to make the best of this era. This is one way we’re doing that.

(Disclosure: We received funding for this from a National Geographic grant. None of the views in this blog post or our online presence represents, or is endorsed by, Nat Geo.)

Give yourself the gift of quality control

If you spend any time at all in the tech chatter space, you have probably heard a lot of discontent about the quality of software these days.

I can’t do anything about the cultural, economic, and social environment that cultivates these issues. (So maybe I shouldn’t say anything at all? 🙂 )

I can say that, if you’re in a position to do something about it, you should treat yourself to quality control.

The case I’d like to briefly highlight is about our infrastructure rather than a software package, but I think this principle can be generalized.

Case study: bringing order to a data center

After a series of (related) service outages in the spring of 2020, shortly before the onset of the COVID-19 crisis, we cut back on some expansionary ambitions to get our house in order.

Here’s a sample, not even a comprehensive list, of the things we’ve fixed in the last couple of months:

  • updated every OS we run such that most of our systems will need only incremental upgrades for the next few years
  • transitioned to the Slurm scheduler for all of our clusters and compute nodes, which has already made it easier to track and troubleshoot batch jobs
  • modernized hardware across the board, including upgraded storage and network cards
  • retired unreliable nodes
  • implemented comprehensive monitoring and alerts
  • replaced our old LDAP server and map with a new one that will better suit our authentication needs across many current and future services
  • fixed the configuration of our Jupyterhub instances for efficiency

Notice: None of those are “let’s add a new server” or “let’s support 17 new software packages”. It’s all about improving the things we already supported.

There are a lot of institutional reasons our systems needed this work, primarily the shortage of staffing that affects a lot of small colleges. But from a pragmatic perspective, to me and to the student admins, these reasons don’t matter. What matters is that we were in a position to fix them.

By consciously choosing to do so, we think we’ve reduced future overhead and downtime risk substantially. Quantitatively, we’ve gone from a few dozen open issue tickets to 19 as of this writing. Six others are advancing rapidly.

How we did it and what’s next

I don’t have a dramatic reveal here. We just made the simple (if not always easy) decision to confront our issues and make quality a priority.

Time is an exhaustible, non-renewable resource. We decided to spend our time on making existing systems work much much better, rather than adding new features. This kind of focus can be boring, because of how strictly it blocks distractions, but the results speak for themselves.

After all that work, now we can pivot to the shiny new thing: installing, supporting, and using new software. We’ve been revving up support for virtual machines and containers for a long time. HPC continues to advance and discover new applications. The freedom to explore these domains will open up a lot of room for student and faculty research over time. It may also help as we prepare to move into our first full semester under COVID-19, which is likely to have (at minimum) a substantial remote component.

Some thoughts on moving from Torque to Slurm

This is more about the process than the feature set.

Torque moved out of open-source space a couple of years ago. This summer we are finally making the full shift to Slurm. I’m not going to trash the old thing here. Instead I want to celebrate the new thing and reflect on the process of installing it.

  1. I haven’t researched the lineage of Slurm as a project, but the UI seems engineered to make this shift easier. There are tables all over the Internet (including on our wiki!) of the Torque<->Slurm translations (a few common ones are sketched just after this list).
  2. Slurm’s accounting features were the trickiest part of all this to configure, but taking the time was worth it. Even at the testing stage, the sacct command’s output is super-informative.
  3. SchedMD’s documentation is among the best of any large piece of software I’ve worked with. If you’re doing this and you feel like you’re missing something, double-check their documents before flogging Stack Overflow etc.
  4. You can in fact do a single-server install as well as a cluster install. We did both, the latter in conjunction with Ansible. Neither is actually much more difficult than the other. That’s because the same three pieces of software (the controller slurmctld, the accounting database slurmdbd, and the worker daemon slurmd) have to run no matter the topology. It’s just that the worker runs on every compute node while the controller and database run only on the head node.
  5. We’ve been successful in using an A –> AB –> B approach to this transition. Right now we have both schedulers next to each other on each of these systems. That will remain the case for a few weeks, until we confirm we’ve done Slurm right.
  6. Schedulers have the most complicated build process of any piece of software I’ve worked with – except gcc, the building of which sometimes makes one want to walk into the ocean.
  7. Dependencies and related programs (e.g. your choice of email tool) are as much a complexity as the scheduler itself.
  8. From a branding perspective, Slurm managed to pull off an impressive feat. Its name is clear and distinctive in the software space, but a fun Easter egg if you have a certain geek pop culture interest/awareness.
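For the curious, here are a few of the user-facing translations I mentioned in point 1, plus a sample accounting query along the lines of point 2. The sacct format fields are just common, handy ones rather than a prescription, and the date is obviously illustrative.

# Common Torque -> Slurm translations:
#   qsub job.sh    ->  sbatch job.sh
#   qstat          ->  squeue
#   qdel <jobid>   ->  scancel <jobid>
#   pbsnodes -a    ->  sinfo -N -l

# Once accounting is set up, summarize recent jobs:
sacct --starttime 2020-07-01 --format=JobID,JobName,Partition,State,Elapsed,MaxRSS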

This has been successful so far. We’ve soft-launched Slurm installs on our scientific computing servers. We should be all-Slurm when classes and researchers return.

Batten down the (network) hatches

It’s been a long time since we systematically updated our security measures at Earlham CS. I spent some time on that this week. I wanted to share some of the changes we made so that if you’re running a small-to-midsize network you might implement similar fixes.

The bare minimum

We’ve been using two critical and often unmentioned security measures already:

  • physically locking down the data center
  • running a network firewall

These two things alone do a lot to secure the system.

Securing services

Of course, we also provide a lot of services over the network, everything from web servers to shells. We have to secure access to all of those tools, plus our data. We want the necessary cracks in our firewall to have as low a risk as possible of being exploited.

What remained, then, was the installation and configuration of server tools to harden security above and beyond physical locks and firewalls – in a word, “DevSecOps”.

First, on those machines that didn’t already have it, we installed unattended-upgrades (Debian/Ubuntu), yum-cron (CentOS 7), or dnf-automatic (CentOS 8). We use these to automatically apply security patches to package-managed software. We’re still free to install larger updates manually each semester to minimize disruptions. It’s a good balance of stability and security vigilance.
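For illustration, the Debian/Ubuntu side boils down to two lines of apt configuration, and the CentOS 8 side to a config tweak plus a timer. These are the stock locations and values, not necessarily a byte-for-byte copy of what we deploy.

// /etc/apt/apt.conf.d/20auto-upgrades (Debian/Ubuntu)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# CentOS 8: set apply_updates = yes in /etc/dnf/automatic.conf, then
systemctl enable --now dnf-automatic.timer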

Next we installed fail2ban on the small number of servers to which our firewall allows SSH access. It detects and blocks possibly-malicious IP addresses trying to connect to the servers. We enabled two “jails” in fail2ban: sshd, which catches likely bad actors attempting ssh connections and bans them for a short time; and recidive, which checks the log records from sshd (and potentially other jails), detects repeat offenders, and imposes longer-lasting bans against them.

(This is the digital equivalent of locking up your house so that the lazy would-be burglar going door-to-door checking knobs can’t get in.)
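A minimal jail.local along those lines looks something like the following. The ban and lookback times here are examples rather than our production values, so tune them to your own tolerance.

# /etc/fail2ban/jail.local (illustrative values)
[sshd]
enabled  = true
maxretry = 5
# ban for 10 minutes
bantime  = 600

[recidive]
enabled  = true
# look back one day; ban repeat offenders for a week
findtime = 86400
bantime  = 604800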

We then ran trufflehog on our public GitLab repos. It gave us a few warnings but none that actually contained compromising system or user information. I consider this good luck more than anything, and we’re taking steps now proactively to prevent such mistakes.
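If you want to do the same, the invocation is simple. The flags vary a bit between trufflehog versions, so check --help, but scanning a single repo looks roughly like this (repo URL hypothetical):

trufflehog --regex https://gitlab.example.edu/group/project.git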

Still to come

Our next security steps will focus on improved monitoring and notification. This has been an issue in the past for stability, but fixing it will also contribute to security. We are also constantly reevaluating security approaches at a department policy level.

Thanks to this post for pointing me to some of the tools mentioned here.

How to enable a custom systemctl service unit without disabling SELinux

We do a lot of upgrades in the summer. This year we’re migrating from the Torque scheduler (which is no longer open-source) to the Slurm scheduler (which is). It’s a good learning experience in addition to being a systems improvement.

First I installed it on a cluster, successfully. That took a while: it turns out schedulers have complicated installation processes with specific dependency chains. To save time in the future, I decided I would attempt to automate the installation.

This has gone better than you might initially guess.

I threw my command history into a script, spun up a VM, and began iterating. After a bit of work, I’ve made the installation script work consistently with almost no direct user input.

Then I tried running it on another machine and ran headfirst into SELinux.

The problem

The installation itself went fine, but the OS displayed this message every time I tried to enable the Slurm control daemon:

[root@host system]# systemctl enable slurmctld.service
Failed to enable unit: Unit file slurmctld.service does not exist.

I double- and triple-checked that my file was in a directory that systemctl expected. After that I checked /var/log/messages and saw a bunch of errors like this …

type=AVC msg=audit(1589561958.124:5788): avc:  denied  { read } for  pid=1 comm="systemd" name="slurmctld.service" dev="dm-0" ino=34756852 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:admin_home_t:s0 tclass=file permissive=0

… and this:

type=USER_END msg=audit(1589562370.317:5841): pid=3893 uid=0 auid=1000 ses=29 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix acct="slurm" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'UID="root" AUID="[omitted]"
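(Side note: if auditd is running, ausearch is a handier way to pull just the recent AVC denials than scrolling through logs, and you can pipe them to audit2why for an explanation of why SELinux objected:)

ausearch -m avc -ts recent               # AVC denials from the last few minutes
ausearch -m avc -ts recent | audit2why   # same, with SELinux's reasoning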

Then I ran ls -Z on the service file’s directory to check its SELinux context:

-rw-r--r--. 1 root root unconfined_u:object_r:admin_home_t:s0                  367 May 15 13:01 slurmctld.service
[...]
-rw-r--r--. 1 root root system_u:object_r:systemd_unit_file_t:s0               337 May 11  2019 smartd.service

Notice that the smartd file has a different context (system_u...) than does the slurmctld file (unconfined_u...). My inference was that the slurmctld file’s context was a (not-trusted) default, and that the solution was to make its context consistent with the context of the working systemctl unit files.

The solution

Here’s how to give the service file a new context in SELinux:

chcon system_u:object_r:systemd_unit_file_t:s0 slurmctld.service 

To see the appropriate security context, check ls -Z. Trust that more than my command, because your context may not match mine.
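For what it’s worth, chcon changes don’t survive a full filesystem relabel. If you want something more durable and have the semanage tool installed (policycoreutils-python-utils on recent CentOS), the approach below should work – treat it as a sketch, and note that the unit file path is an assumption, not necessarily where yours lives:

semanage fcontext -a -t systemd_unit_file_t '/etc/systemd/system/slurmctld.service'
restorecon -v /etc/systemd/system/slurmctld.service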

Concluding remarks

I am early-career and have done very little work with SELinux, so this is not a specialty of mine right now. As such, this may or may not be the best solution. But, mindful of some security advice, I think it is preferable to disabling SELinux altogether.

Between the files and the disks of your server

I recently took a painful and convoluted path to understanding management of disks in Linux. I wanted to post that here for my own reference, and maybe you will find it useful as well. Note that these commands should generally not be directly copy-pasted, and should be used advisedly after careful planning.

Let’s take a plunge into the ocean, shall we?

Filesystem

You’ve definitely seen this. This is the surface level, the very highest layer of abstraction. If you’re not a sysadmin, there’s a good chance this is the only layer you care about (on your phone, it’s likely you don’t even care about this one!).

The filesystem is where files are kept and managed. There are tools to mount either the underlying device (/dev/mapper/vgname-lvname or /dev/vgname/lvname) or the filesystem itself to mount points – for example, over NFS. You can also use the filesystem on a logical volume (see below) as the disk for a virtual machine.

This is where ext2, ext3, ext4, xfs, and more come in. This is not a post about filesystems (I don’t know enough about filesystems to credibly write that post) but they each have features and associated utilities. Most of our systems are ext4 but we have some older systems with ext2 and some systems with xfs.

Commands (vary by filesystem)

  • mount and umount; see /etc/fstab and /etc/exports
  • df -h can show you if your filesystem mount is crowded for storage
  • fsck (a front end that dispatches to filesystem-specific checkers such as e2fsck)
  • resize2fs /dev/lv/home 512G # resize a filesystem to be 512G, might accompany lvresize below
  • xfsdump/xfsrestore for XFS filesystems
  • mkfs /dev/lvmdata/device # make a filesystem on a device
  • fdisk -l isn’t technically a filesystem tool, but it operates at a high level of abstraction and you should be aware of it

LVM

Filesystems are made on top of underlying volumes in LVM, the “logical volume manager” – Linux’s flexible partitioning layer. (Actively managing LVM rather than passively using simple defaults is technically optional, but it’s widely done.)

LVM has three layers of abstraction within itself, each with its own set of utilities. The same abstraction pattern repeats at every layer of this stack.

LVM logical volumes

A volume group can then be organized into logical volumes. The commands here are incredibly powerful and give you the ability to manage disk space with ease (we’re grading “easy” on a curve).

If you resize a filesystem, there’s a good chance you’ll also need to resize the logical volume underneath it. Order matters: when growing, extend the volume first and then the filesystem; when shrinking, shrink the filesystem first and then the volume.

Commands:

  • lvdisplay
  • lvscan
  • lvcreate -L 20G -n mylv myvg # create a 20GB logical volume called mylv in group myvg
  • lvresize -L 520G /dev/lv/home # make the logical volume at /dev/lv/home 520GB in size
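For example, growing a home volume and the ext4 filesystem on it goes like this (VG/LV names hypothetical; check free space in the volume group with vgdisplay first):

lvresize -L +20G /dev/myvg/home     # grow the logical volume by 20GB
resize2fs /dev/myvg/home            # then grow the filesystem to fill it
# or do both in one step:
lvresize -r -L +20G /dev/myvg/home  # -r / --resizefs resizes the filesystem too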

LVM volume groups

A volume group is a pool of space from which logical volumes are created. It’s a collection of one or more LVM “physical” volumes (see below).

Commands:

  • vgscan
  • vgdisplay
  • pvmove /dev/mydevice # to get stuff off of a PV and move it to available free space elsewhere in the VG

LVM physical volumes

At the lowest LVM layer there are “physical” volumes. These might actually correspond to physical disks or partitions (if you have no hardware RAID), or they might be other /dev objects in the OS (/dev/md127 would be a physical volume in this model).

These are the LVM analog to disk partitions.

Commands:

  • pvscan
  • pvdisplay
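To tie the three LVM layers together, building up from a blank partition to a mounted filesystem goes roughly like this. Device and mount point names are hypothetical, and pvcreate will happily clobber whatever is on that partition, so plan carefully:

pvcreate /dev/sdb1                 # mark the partition as an LVM physical volume
vgcreate myvg /dev/sdb1            # pool it into a volume group
lvcreate -L 20G -n mylv myvg       # carve out a 20GB logical volume
mkfs.ext4 /dev/myvg/mylv           # put a filesystem on it
mount /dev/myvg/mylv /mnt/scratch  # and mount it somewhere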

Software RAID (optional)

RAID combines multiple disks into a single logical device for redundancy, performance, or both. There are both “hardware” and “software” implementations of RAID, and software RAID sits at a higher level of abstraction, which makes it convenient for a (super-)user to manage. Our machines (like many) use mdadm, but there are other tools.

Commands:

  • mdadm --detail --scan
  • mdadm -D /dev/mdXYZ # details
  • mdadm -Q /dev/mdXYZ # short, human-readable
  • cat /proc/mdstat
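For completeness, creating an array is a one-liner too. The device names here are hypothetical, and mdadm will eat whatever you point it at, so double-check them:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # two-disk mirror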

Devices in the OS

“In UNIX, everything is a file.” In Linux that’s mostly true as well.

The /dev directory contains the files that correspond to each particular device detected by the OS. I found these useful mostly for reference, because everything refers to them in some way.

If you look closely, things like /dev/mapper/devicename are often symlinks (pointers) to other devices.

All the other layers provide you better abstractions and more powerful tools for working with devices. For that reason, you probably won’t do much with these directly.

(The astute will observe that /dev is a directory so we’ve leapt up the layers of abstraction here. True! However, it’s the best lens you as a user have on the things the OS detects in the lower layers.)

Also: dmesg. Use dmesg. It will help you.

Hardware RAID (optional)

If you use software RAID for convenience, you use hardware RAID for performance and information-hiding.

Hardware RAID presents the underlying drives to the OS at boot time by way of a RAID controller on the motherboard. At boot, you can access a tiny bit of software (with a GUI that’s probably older than me) to create and modify hardware RAID volumes. In other words, the RAID volume(s), not the physical drives, appear to you as a user.

At least some, and I presume most, RAID controllers have software that you can install on the operating system that will let you get a look at the physical disks that compose the logical volumes.

Relevant software at this level:

  • MegaCLI # we have a MegaRAID controller on the server in question
  • smartctl --scan
  • smartctl -a -d megaraid,15 /dev/bus/6 # substitute the identifying numbers from the scan command above
  • not much else – managing hardware RAID carefully requires a reboot; for this reason we tend to keep ours simple

Physical storage

We have reached the seafloor, where you have some drives – SSD’s, spinning disks, etc. Those drives are the very lowest level of abstraction: they are literal, physical machines. Because of this, we don’t tend to work with them directly except at installation and removal.

Summary and context

From highest to lowest layers of abstraction:

  1. filesystem
  2. LVM [lv > vg > pv]
  3. software RAID
  4. devices in the OS
  5. hardware RAID
  6. disks

The origin story of this blog post (also a wiki page, if you’re an Earlham CS sysadmin student!): necessity’s the mother of invention.

I supervise a sysadmin team. It’s a team of students who work part-time, so in practice I’m a player-coach.

In February, we experienced a disk failure that triggered protracted downtime on an important server. It was a topic I was unfamiliar with, so I did a lot of on-the-job training and research. I read probably dozens of blog posts about filesystems, but none used language that made sense to me in a coherent, unified, and specific way. I hope I’ve done so here, so that others can learn from my mistakes!