Why Earlham CS restarts its servers once per semester

Last weekend, the CS sysadmin students performed a complete shutdown and restart of the servers we manage. We do this in the last month of every semester, and it’s a consistently valuable experience for us.

The department manages two subnets full of servers: cs.earlham.edu and cluster.earlham.edu.

  • On the cs.earlham.edu side, we are (fittingly enough) mostly CS department-focused: the website lives there, as do the software tools students use in their intro courses, the wiki we use for information management, and the tools for senior projects. These services mostly run on virtual machines.
  • In the cluster domain, we support scientific and high-performance computing in other departments, most commonly chemistry, physics, and biology. That includes parallel processing across a tightly-linked “cluster” of small servers as well as “fat nodes” that provide large amounts of RAM, storage, and CPU power on a single machine. In contrast to cs, there are no virtual machines in the cluster domain.

Manually shutting down all servers in both domains is complex. It requires grappling with the quirks of each: the bare metal/virtual machine distinction, file system mounting, network configuration, and which services start at boot (and whether they should). There are no “trick questions,” but there are plenty of places where problems can appear.

Since the systems are complicated, we keep the process orderly. We look for basic system health indicators. Do all of our virtual servers come back? (Yep.) Does everything that should launch at startup actually launch? (Mostly!) Do NFS mounts in each domain work as we expect? (Mostly!) Are we backing up everything we need? (No, but we’re fixing that.)
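For a sense of what those checks can look like in practice, here is a minimal sketch of a post-restart health check. The hostnames and mount points are placeholders, not our real layout:

```python
#!/usr/bin/env python3
"""Rough post-restart health check: ping hosts and verify NFS mounts.

The host and mount lists are placeholders, not Earlham's real layout.
"""
import subprocess

HOSTS = ["web.cs.example.edu", "wiki.cs.example.edu", "head.cluster.example.edu"]
NFS_MOUNTS = ["/mnt/home", "/mnt/data"]

def ping(host: str) -> bool:
    """Return True if the host answers a single ping within two seconds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def mounted(path: str) -> bool:
    """Return True if the path appears in the kernel's mount table."""
    with open("/proc/mounts") as mounts:
        return any(line.split()[1] == path for line in mounts)

if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: {'up' if ping(host) else 'DOWN'}")
    for path in NFS_MOUNTS:
        print(f"{path}: {'mounted' if mounted(path) else 'MISSING'}")
```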

We enforce this simplicity with two tools:

  1. A clear and unambiguous plan, communicated from the very start, that does not change except by necessity.
  2. One of the best note-taking tools ever invented: a yellow legal pad and a cheap pen. It lets us take notes on the fly, separating the capture of notes from their aggregation and curation, which is better done outside the heat of a major operation.

In doing this, we always detect some problems. Some are system problems, but just as often they’re problems of knowledge transfer: no one wrote down that the VMs have extra dependencies to manage at startup, for example, so we have a cascade of minor failures across the CS domain to fix. We add any issues to a project list in a local instance of RequestTracker.

As usual, we booked three hours to do this last weekend. Almost all of it was done in that time. There’s always something left over at the end, but most systems were running again on schedule.

The coders and admins of the world may, at this point, wonder why we would go through all this and why (if we must do it) we don’t just have a script for it.

We definitely could, but the technical value of the shutdown is orthogonal to its purpose for us. We don’t do the server shutdown because the servers strictly need to be powered off and back on every six months. We do it because it’s one of the few projects that…

  • exposes the logic and structure of the entire server system to the students managing it,
  • provides opportunities to learn a lot in terms of both computing and teamwork,
  • forces us to be accountable for what we’ve installed and how we’ve configured it,
  • involves every sysadmin student from first-timers to seniors,
  • and yet is tightly constrained in time and scope.

I like the way this works so much that I’m engineering other projects that meet these criteria and can be implemented more readily throughout the regular academic calendar.

Some reflections on guiding a student sysadmin team

How does a team of students administer a powerful data center for education and research at a small undergraduate liberal arts college?

Success at my job is largely dependent on how well I can answer that question and implement the answer.

Earlham CS, under the banner of the Applied Groups, has a single team of students running our servers:

The Systems Admin Group’s key functions include the maintenance of the physical machines used by the Earlham Computer Science Department. They handle both the hardware and software side of Earlham’s Computer Science systems. The students in the sysadmin group configure and manage the machines, computational clusters, and networks that are used in classes, for our research, and for use by other science departments at Earlham. 

The students in that group are supervised by me, with the invaluable cooperation of a tenured professor.

The students are talented, with experience levels ranging from beginner to proficient. Every semester there’s some turnover because of time, interest, graduations, and more.

And students are wonderful and unpredictable. Some join with a specific passion in mind: “I want to learn about cybersecurity.” “I want to administer the bioinformatics software for the interdisciplinary CS-bio class and Icelandic field research.” Others have a vague sense that they’re interested in computing – maybe system administration but maybe not – but no specific focus yet. (In my experience the latter group is consistently larger.)

In addition to varieties of experience and interest, consider our relatively small labor force. To grossly oversimplify:

  • Say I put 20 hours of my week into sysadmin work, including meetings, projects, questions, and troubleshooting.
  • Assume a student works 8 hours per week, the minimum for a regular work-study position. We have a budget for 7 students. (I would certainly characterize us as two-pizza-compliant.)
  • There are other faculty who do some sysadmin work with us, but it’s not their only focus. Assume they put in 10 hours.
  • Ignore differences in scheduling during winter and summer breaks. Also ignore emergencies, which are rare but can consume more time.

That’s a total of 86 weekly person-hours to manage all our data, computation, networking, and sometimes power. That number itself limits the amount we can accomplish in a given week.

Because of all those factors, we have to make tradeoffs all the time:

  • interests versus needs
  • big valuable projects versus system stability/sustainability
  • work getting done versus documenting the work so future admins can learn it
  • innovation versus fundamentals
  • continuous service versus momentary unplanned disruption because someone actually had time this week to look at the problem and they made an error the first time

I’ve found ways to turn some of those tradeoffs into “both/and”, but that’s not always possible. When I have to make a decision, I tend to err on the side of education and letting the students learn, rather than getting it done immediately. The minor headache of today is a fair price to pay for student experience and a deepened knowledge base in the institution.

In some respects, this is less pressure than at a traditional company or startup. The point is education, so failure is expected and almost always manageable. We’re not worried about reporting back to our shareholders with good quarterly earnings numbers. When something goes wrong, we have several layers of backups to prevent real disaster.

On the other hand, I am constantly turning the dials on my management and technical work to maximize for something – it’s just that instead of profit, that something is the educational mission of the college. Some of that happens by teaching the admins directly; some by continuing our support for interesting applications like genome analysis, data visualization, web hosting, and image aggregation. If students aren’t learning, I’m doing something wrong.

In the big picture, what impresses me about the group I work with is that we have managed to install, configure, troubleshoot, upgrade, retire, protect, and maintain a somewhat complex computational environment with relatively few unplanned interruptions – and we’ve done it for quite a few years now. This is a system with certain obvious limitations, and I’m constantly learning to do my job better, but in aggregate I would consider it an ongoing success. And at a personal level, it’s deeply rewarding.

Responding to emergencies in the Earlham CS server room

A group of unrelated problems overlapped in time last week and redirected my entire professional energy. It was the most informative week I’ve had in months, and I’m committing a record of it here for posterity. Each problem we admins encountered last week is briefly described below.

DNS on the CS servers

It began Tuesday morning with a polite request from a colleague to investigate why the CS servers were down. Unable to ping anything, I walked to the server room and found a powered-on but non-responsive server. I crashed it with the power button and brought it back up.

Even then we couldn’t reach our virtual machines over ssh, so I began investigating, starting by perusing /var/log.

An hour or so later that morning, I was joined by two other admins. One had an idea for where we might look for problems. We discovered that one of our admins had, innocently enough, used underscores in the hostnames assigned to two computers used by sysadmins-in-training. Underscores are not generally acceptable in DNS hostnames, so we fixed that and restarted bind. That resolved the problem.

The long-term solution to this is to train our sysadmin students in DNS, DHCP, etc. more thoroughly — and to remind them consistently to RTFM.
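A cheap additional guardrail is to lint the zone files before reloading. Here is a minimal sketch, assuming BIND-style zone files gathered in one directory; the path and file glob are placeholders:

```python
#!/usr/bin/env python3
"""Flag underscores in the hostname labels of BIND-style zone files.

The zone directory and file glob are placeholders; point them at your config.
"""
import pathlib

ZONE_DIR = pathlib.Path("/etc/bind/zones")

for zone in ZONE_DIR.glob("*.zone"):
    for lineno, line in enumerate(zone.read_text().splitlines(), start=1):
        if not line.strip() or line.lstrip().startswith((";", "$")):
            continue  # skip blanks, comments, and directives like $TTL
        owner = line.split()[0]
        # Leading-underscore names (_dmarc, SRV records, etc.) are legitimate;
        # an underscore anywhere else in a hostname is almost certainly a typo.
        if "_" in owner and not owner.startswith("_"):
            print(f"{zone.name}:{lineno}: underscore in hostname {owner!r}")
```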

Fumes

Another issue that materialized at the same time and worried me more: we discovered a foul smell in the server room, like some mix of burning and melting plastic. Was this related to the server failure? At the time, we didn’t know. Using a fan and an open door we ventilated the server room and investigated.

We were lucky.

By way of the sniff test, we discovered the smell came from components that had melted in an out-of-use, but still energized, security keypad control box. I unplugged the box. The smell lingered but faded after a few days, at which point we filed a work order to have it removed from the room altogether.

I want to emphasize our good fortune in this case: had the smell pointed to a problem in the servers or the power supply, we would have faced much worse problems, ones that might have lasted a long time and cost us a lot. Our long-term fix should be to detect such problems automatically, at least well enough that someone can respond to them quickly.

Correlated outages

Finally, the day after those two events, and while we were still investigating them, we experienced a rash of outages across both domains we manage. Fortunately, each of these servers is mature and well-configured, and in every case pushing the power button restored the system to normal.

Solving this problem turned out to be entertaining and enlightening as a matter of digital forensics.

Another admin student and I sat in my office for an hour and looked for clues. Examining the system and security log files on each affected server consistently pointed us toward the log file for a script run once per minute under cron.

This particular script checks (via an SNMP query) whether our battery level is getting low and staying low – i.e., whether we’ve lost power and need to shut down the servers before the batteries are fully drained and we experience a hard crash. The script acted properly, but we’d made it too sensitive: it allowed only two minutes at a battery level below 80% before concluding that every server running it needed to shut down. Shutdowns occurred on every server running the script – and on none of the servers that weren’t.

We’re fixing the code now to allow more time before shutting down. We’re also investigating why the batteries started draining at that time: they never drained so much as to cut power to the entire system, but they clearly dipped for a time.
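For the curious, here is a minimal sketch of the more forgiving logic. It assumes an SNMP-queryable UPS and the net-snmp command-line tools; the hostname, threshold, grace period, and state-file path are illustrative, not our production values:

```python
#!/usr/bin/env python3
"""Cron-driven UPS check: shut down only after a sustained low-battery period.

The hostname, threshold, grace period, and state-file path are illustrative.
"""
import subprocess
import time
from pathlib import Path

UPS_HOST = "ups.example.edu"            # placeholder
CHARGE_OID = "1.3.6.1.2.1.33.1.2.4.0"   # upsEstimatedChargeRemaining (UPS-MIB)
THRESHOLD = 80                          # percent
GRACE_SECONDS = 15 * 60                 # tolerate low readings this long
STATE = Path("/var/run/ups-low-since")  # records when the low streak began

def battery_percent() -> int:
    """Ask the UPS for its estimated remaining charge via snmpget."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", "public", "-Oqv", UPS_HOST, CHARGE_OID],
        text=True,
    )
    return int(out.strip())

charge = battery_percent()
if charge >= THRESHOLD:
    if STATE.exists():
        STATE.unlink()                      # healthy again: reset the clock
elif not STATE.exists():
    STATE.write_text(str(time.time()))      # first low reading: start the clock
elif time.time() - float(STATE.read_text()) > GRACE_SECONDS:
    subprocess.run(["shutdown", "-h", "+1", "UPS battery low for too long"])
```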

To my delight (if anything in this process can be a delight), a colleague who’s across the ocean for a few weeks independently came to the same conclusion we reached sitting in my office.

Have a process

Details varied, but we walked through the same process to solve each problem:

  1. Observe the issue.
  2. Find a non-destructive short-term fix.
  3. Immediately, while it’s fresh on the mind and the log files are still current, gather data. Copy relevant log files for reference but look at the originals if you still can. If there’s a risk a server will go down again, copy relevant files off that server. Check both hardware and software for obvious problems.
  4. Look for patterns in the data. Time is a good way to make an initial cut: according to the timestamps in the logs, what happened around the time we started observing a problem? (A sketch of this kind of time-window filter follows the list.)
  5. Based on the data and context you have, exercise some common sense and logic. Figure it out. Ask for help.
  6. Based on what you learn, implement the long-term fix.
  7. Update stakeholders. Be as specific as is useful – maybe a little more so. [Do this earlier if there are larger, longer-lasting failures to address.]
  8. Think of how to automate fixes for the problem and how to avoid similar problems in the future.
  9. Implement those changes.
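As promised in step 4, here is a minimal sketch of a time-window pass over a syslog-style log. The path, incident time, and timestamp format are assumptions about a typical Linux box, not our exact setup:

```python
#!/usr/bin/env python3
"""Print syslog lines that fall within a window around an incident.

Assumes the classic "Mon DD HH:MM:SS" syslog timestamp; adjust to your format.
"""
from datetime import datetime, timedelta

LOGFILE = "/var/log/syslog"              # placeholder path
INCIDENT = datetime(2019, 2, 12, 9, 45)  # when we first noticed trouble
WINDOW = timedelta(minutes=30)

with open(LOGFILE, errors="replace") as log:
    for line in log:
        try:
            stamp = datetime.strptime(line[:15], "%b %d %H:%M:%S")
        except ValueError:
            continue                               # not a timestamped line
        stamp = stamp.replace(year=INCIDENT.year)  # classic syslog omits the year
        if abs(stamp - INCIDENT) <= WINDOW:
            print(line, end="")
```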

For us, the early stages will be sped up when we finish work on our monitoring and notification systems, though that would not have helped much in this case. Even with incomplete monitoring, we discovered each of the problems described here within minutes or hours, because of how frequently and intensively the Earlham community uses these systems.

I would add that, based on my observations, it’s easy to become sloppy about what goes into a log file and what doesn’t. Cleaning those logs up (carefully and conservatively) will be added to the student sysadmins’ task list.

Work together

We admins left for the weekend with no outstanding disasters on the table, after a week in which three unrelated, time-consuming problems surfaced. That’s tiring but incredibly satisfying. It’s to the credit of all the student admins and my colleagues in the CS faculty, whose collective patience, insights, and persistence made it work.

A brief salute to knowledge aggregation

I deleted my Reddit account after only a few months because it was an attention sink that returned little value to me over time, but (anecdotally) I still find the site extremely useful as a repository of the aggregated knowledge of groups of people with specific interests.

Example: Today I wanted to do a little simple video editing, and I tried three different commonly recommended free video editing software tools. For various reasons none worked, and I didn’t like the interfaces anyway.

A Google search took me to a page on /r/Filmmakers with two free filmmaker-oriented software options, and the one I tried (HitFilm Express) worked instantly. I may still try something else, but this instantly relieved some frustration and let me get to work on the project I actually cared about rather than continuing to try unfamiliar programs all day.

This fulfills one of the central promises of the Internet: facilitating the aggregation of knowledge from people or groups who really have that knowledge, so that people can learn and do more than we could before. I don’t need a Reddit account anymore (though I would happily create another one and just not subscribe to any subreddits if they, e.g., made pages members-only), but for all their problems I’m glad such sites exist.

Chrome History Inspector

I’m interested in digital minimalism right now, so I wanted to examine my browser history. Google Chrome on my desktop is where I spend most of my time online, and it’s also a black box to me. Unlike iOS with its Screen Time feature, Chrome gives me no obvious window into my activity over time. All chrome://history shows is a stream of links you’ve clicked, in reverse-chronological order, with no aggregation options. I have a rough idea of where my time goes, but that’s not much to go on.

This weekend I decided I wanted to investigate. At first I thought I’d keep it simple: get my history as a CSV file and open it in Google Sheets. That didn’t work: 15,000 lines is apparently a lot for a web-connected browser-based tool, and it crashed my tab. I could have used macOS’s Numbers, but I realized quickly that my task lent itself to programming better than to a spreadsheet.

As a rough cut (and presented to you now), I made a Python program – code here – that, given a history file of a particular format, produces a graph of your most-visited websites. It uses the pandas, matplotlib, and seaborn libraries. The earliest date in my dataset is October 12, 2018. The program produced this graph:
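(The real code is at the link above; for flavor, here is a minimal sketch of the same idea. It assumes the history has already been exported to a CSV with a url column – the filename and column name are assumptions, not the exact format my tool consumes.)

```python
#!/usr/bin/env python3
"""Plot the most-visited sites from an exported browser history.

Assumes a CSV with a 'url' column; the filename and column name are guesses
about the export format, not the exact one my tool uses.
"""
from urllib.parse import urlparse

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

history = pd.read_csv("history.csv")

# Reduce each URL to a bare domain so visits aggregate per site.
domains = history["url"].map(lambda u: urlparse(u).netloc.replace("www.", ""))
top = domains.value_counts().head(20)

sns.barplot(x=top.values, y=top.index)
plt.xlabel("visits")
plt.title("Most-visited sites")
plt.tight_layout()
plt.savefig("top_sites.png")
```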

The first thing I noticed in the image was that I clicked into Reddit a lot. I’d had a Reddit account for less than a year, so I knew I could live without it, and I swiftly deleted it.

What was left fell into a few categories:

  • search/reference: I was surprised and then immediately unsurprised by Google’s supremacy on this list; Wikipedia and StackOverflow are also in this category
  • news: Instapaper, Feedly, Twitter
  • professional tools: Gitlab, GitHub, Wiki, Google Drive, and WordPress
  • entertainment: Netflix, TVTropes, Amazon, Facebook, YouTube – all of which I regulate using Freedom
  • Esquire scored surprisingly high, I think because viewing a slideshow there requires a click per slide and I’ve visited a few of them.

A few caveats about this approach:

  • I’d like something more dynamic, maybe an improved version of some of the old browser extensions I found in my initial research on this idea. This got me the very specific information I wanted, but now I want more.
  • I’ve separated the code that obtains the data (which I didn’t write) from the code that processes it. This way when Google inevitably changes how it manages history data, I don’t have to disturb the processing code.
  • I used this tool to decide my Reddit account should be axed, but it’s arguably unfair to Reddit: I actually read a lot more tweets than Reddit posts, but when you want to expand a Reddit post you click it and it changes the URL. (One minor change I may make is to aggregate twitter dot com, t dot co, and tweetdeck into one row.)
  • This analyzes page visits, not time spent. Measuring time spent, I imagine, would be a much stickier problem: I’d need an indicator of when the tab and window were both active, and the numbers would be distorted by the frequent distractions of my office. It would also be a much more useful thing to display. Maybe Google can get with the “digital wellness” moment on this.
  • Future work: group by time. I have a much better idea of when I’m on the Internet than of what sites I’m visiting most frequently over time, so this wasn’t my priority. That said, it’s possible I could learn something interesting.
  • Sites visited in Incognito Mode don’t appear in the history so they also don’t appear on the chart.

Finally, through the lens of digital minimalism, that graph is better than I had expected. There’s not a lot of cruft, the cruft that does exist can be removed pretty easily, and most of the sites provide real value to me. This has been a useful exercise.

Introducing @cooltreepix!

Once upon a time I was a character in a (wholesome!) meme a friend posted to a publicly-visible Earlham Facebook group. The meme, which I’ve stored for posterity here, said that one quality about me is “Takes pictures of cool trees”.

So my Twitter bot is super on-brand.

You can now follow @cooltreepix for pictures of cool trees! I took all of the pictures myself, and the bot tweets one per day. I’ve removed or never added geolocations, but probably 90% are from eastern Montana or the Richmond, Indiana, area.

Details follow for the curious. 🙂

What this bot does

Every day this bot tweets a tree picture.

That’s… that’s it: it tweets a picture containing one or more trees. Sometimes the tree is the subject of the picture. Other times the tree somehow accentuates the main element, e.g. fall color. The images are of varying quality, but most were taken with my iPhone (currently an iPhone 7).

I kept it simple. I didn’t (and still don’t) want to collect your data or do much in the way of analytics. I just wanted to make a simple non-spammy bot that tweets a nice picture once a day.

Process

Here are the steps I followed, roughly, so that you can try your own:

  1. Create a Twitter developer account. I was doing this for education and with no intent to collect data etc., so I had no problems at all in creating the account.
  2. Create your app. If you’re not going to use it for your own account – i.e. if you want to allow the app to tweet on an account other than the @username of your developer account – make sure to enable “Sign in with Twitter”, though there are some more complex ways to do this if you have a specific reason to try them.
  3. Get a cloud-based server to set up your dev environment and hold any assets you need. At first I used AWS because (1) I needed something guaranteed to always be running and (2) I’ve been wanting to learn AWS. Ultimately I decided to stay on Earlham’s servers, but I’m glad to now have the AWS account and some experience with it. Your environment should have some flavor of twurl to make authentication via terminal easier (for more on that, see the next section).
  4. Write your code. I used Python and the tweepy library (a sketch of the core follows these steps). My code is simple. As I describe in more detail below, the setup process was much harder than the coding. If I add any features there will be changes to make, but for now I’m happy with it.
  5. Try it out!
  6. Iterate until it works, fixing or adding one thing at a time.
  7. Maintenance.

When you’re done, most of the time your bot should live on its own, just a bot doing bot things.
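For the curious, the core of such a bot looks roughly like this. The keys, paths, and caption are placeholders, and this is a sketch of the general approach with tweepy rather than the bot’s exact code; a cron entry handles the once-a-day part.

```python
#!/usr/bin/env python3
"""Tweet one tree picture per run; cron handles the once-a-day scheduling.

Keys, paths, and the caption are placeholders, not the real bot's values.
"""
import os
import random

import tweepy

# App credentials plus the tokens authorized for the bot account
# (after twurl authorization, these live in ~/.twurlc).
auth = tweepy.OAuthHandler(os.environ["CONSUMER_KEY"], os.environ["CONSUMER_SECRET"])
auth.set_access_token(os.environ["ACCESS_TOKEN"], os.environ["ACCESS_TOKEN_SECRET"])
api = tweepy.API(auth)

PHOTO_DIR = "tree_photos"                     # placeholder directory of images
photo = random.choice(os.listdir(PHOTO_DIR))  # pick today's tree

media = api.media_upload(os.path.join(PHOTO_DIR, photo))
api.update_status(status="A cool tree 🌲", media_ids=[media.media_id])
```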

Biggest challenges

Coding, it turns out, wasn’t the hardest part. I probably only spent about 10 percent of my time on this project programming. The greatest challenges:

  1. Authentication: This was easily my biggest time burner and the problem that most of the steps above solve. It’s easy to make a bot tweet to your personal developer account, but there are extra steps to tweet to a different account, as I wanted to. Worth noting: after you’ve authorized the app on whatever account you want, check your twurl environment (e.g. ~/.twurlc if you’re running Linux) to get the consumer and access tokens the bot needs.
  2. Image transfer: It turns out that when you have a lot of images they take up a lot of storage, so moving them around (i.e. downloading and uploading them) takes gobs of time and bandwidth. I knew this from a project in college, but if I needed a reminder I certainly got it this time.
  3. AWS: I now have a free-tier AWS account, which took some wrangling to figure out. I decided not to use it for this project in the end, but the learning experience was good. I want to try configuring it better for my needs next time I do a similar project.
  4. Image sizes: Twitter caps your images at a particular size, which produced errors at the terminal and tweets that failed to post. I eventually used ImageMagick’s convert (via a Python subprocess) to solve the problem, as sketched below.
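That fix amounts to a one-liner wrapped in subprocess. A minimal sketch, where the target dimensions and quality are illustrative choices rather than Twitter’s exact limits:

```python
import subprocess

def shrink_for_twitter(src: str, dst: str) -> None:
    """Downscale an image with ImageMagick so it fits under the upload cap.

    The 2048x2048 ceiling and quality setting are illustrative; the '>'
    geometry modifier tells convert to only shrink, never enlarge.
    """
    subprocess.run(
        ["convert", src, "-resize", "2048x2048>", "-quality", "85", dst],
        check=True,
    )

shrink_for_twitter("tree_raw.jpg", "tree_tweet.jpg")
```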

Notes on ownership

All photos tweeted directly by the bot are mine (Craig Earley’s) unless otherwise noted. Please don’t sell them or use them commercially, as they are intended for everyone’s benefit. Also please give me a photo credit and share this link to my site if you use them for your own project.

If you want to submit a tree photo, tweet it to me @cooltreepix or @craigjearley. If it’s a real picture of a tree, I’ll retweet as soon as I see it.

My logo is from the Doodle Library (shared under a CC 4.0 license) and edited by me to add color. My version is under the same license.

Finally you can put a little something in my tip jar if you want to support work like this.

A hopeful 2019 preview

On the new year’s edition of Pod Save America, Ana Marie Cox used the language of “intentions” rather than “resolutions” to describe changes we’d like to make in the new year, so as to avoid risk of failure and discouragement. That’s how I understand this list: the intentions underlying a series of changes I want to make in 2019.

(Item 0, by the way, is to keep doing what’s working, such as succeeding at my job, enjoying great media and culture, and managing my money properly.)

  1. Improve my social life: I intend to make more friends, spend more time interacting with current friends, and get better at interacting with people I don’t know.
  2. Create more: I intend to do more writing, shell scripts and maybe larger software projects, and dumb photo art.
  3. I intend to continue to train my developer and sysadmin skills, by way of (2) when possible.
  4. Finally, I intend to improve my fitness: I’m about 75% of the way to where I’d like to be. I exercise a few times per week and my diet’s fine. The last stretch will take some more work, but it’s a good way to get away from screens.

Happy New Year!

My 2018 in review

I know it’s filler for news orgs, but I like year in review #content, so here’s mine. This is the public portion of a basic internal self-audit I did this year and plan to do again next year.

Work

I successfully executed two large projects as a self-employed IT guy back in my hometown. I set up much of the networking and inventory for a convenience store in collaboration with a gas station technician, the clerks, and the owners. I also set up a two-building enterprise-grade hotel WiFi network in collaboration with management. I left a few other things unfinished when I changed jobs, including one big project for a neighbor, so I had to write refund checks and send some sincere apologies. Still, I’m mostly satisfied with the work I did.

That said, I’m pleased I’m now doing something else. I like my job at Earlham. I work with a group of talented students administering high-performance computing clusters and other campus-scale server systems – a phenomenal way to boost my experience and grow as a member of a community. I’m competent enough now in my technology skills to explore my prospects in software development or system administration after Earlham.

I’m starting out, so I’m experiencing many of the challenges of starting out – but it’s a good start.

Life

I moved!

When I graduated college in December 2016, I wasn’t sure what I wanted to do, and moving back home was my safety net (I’m fortunate to have had it). After over a year, that had to change. When a job opened at Earlham, I applied and was selected. I moved a few weeks later. Lifestyle, more than anything about being self-employed, drove me to Earlham.

There’s plenty of criticism of the idea that changing your environment improves happiness over time. I take that point, but for me it has been an important step.

That said, I still don’t get out much socially, and that’s the next major change I need to make in my life.

Play

Like a lot of people, I spend much of my leisure time consuming culture. For details see my previous post!

Creativity probably goes in this category as well, and for that see my series on making things and why I think it’s important. I intend to share more creative work in the future.

Overall

After a year of (net) stagnation in 2017, 2018 was a transitional year for me. It contained some important moments and provided a lot of clarity, though it was fundamentally about setup, not action.

I like that I took in a lot more culture, which I interpret as one way of participating in our shared experience as a society. I’m also satisfied that I learned to run a business, moved, changed jobs, and succeeded at each of those steps. I have a sense of substantial progress over December of last year.

Thanks to the work done in 2018, it’s possible I’ll have a very good year in 2019. Shortly I’ll publish a post looking forward, discussing how I intend to make the most of next year.

My 2018 in culture and media

This was part of my upcoming year-in-review post. It got too long so I spun it off. It’s possible to consume a lot of media in a year.

(Assume links are not safe for work – I haven’t vetted all of them recently.)

Reading

The big one was Robert Caro’s The Power Broker, his stunning doorstopper about Robert Moses, which I spent months reading on and off in 2017 and 2018 before finishing. I love long books, and this is a good example of a book that absolutely had to be long in order to capture its story and central character. In addition, I read and enjoyed two Vonnegut novels (God Bless You, Mr. Rosewater and Cat’s Cradle) and Maureen Johnson’s young adult mystery Truly Devious.

I also read for professional growth. I reread Cal Newport’s Deep Work, which I consider the definitive book about how to do creative or technical work. I also read Peak: Secrets from the New Science of Expertise by Anders Ericsson and Robert Pool, which covers similar themes.

I spent way more time reading articles than reading books, which is typical for me. Rather than share all of them, here are some favorite articles on politics, pop culture, and tech in the past year:

In the new year I’d like to shift my balance of reading in favor of books and away from articles, but that’s not my top priority so it may not happen.

Podcasts

If the book list was too short, this one is probably too long. I swap out podcasts relatively often based on mood and interest, but here are some highlights:

Movies and TV watched

I started a classic-film kick around a year ago, beginning with the big-name classics. Midway through the year I was joined by the excellent Unspooled podcast, got my heart broken by the loss of FilmStruck (I’m likely to subscribe to the Criterion Channel soon!), and came to love film and the art of film. So that I don’t forget what I’ve watched, I’ve built a spreadsheet of everything I’ve seen. I’m tempted to write some movie reviews and whatnot.

Here are some movies I watched for the first time this year and especially liked – spanning genres, tones, etc. Not all fall into that “classic film” category, but as I don’t see many movies in theaters most are from before 2018. The list is roughly in the order I watched them, not a ranking:

  • Citizen Kane
  • Love, Simon
  • The Treasure of the Sierra Madre
  • Titanic
  • O.J.: Made in America
  • Bride of Frankenstein
  • Little Shop of Horrors
  • Quiz Show
  • The Iron Giant
  • Duck Soup
  • Bohemian Rhapsody

(Note: “here’s a list of movies I like” is a good opportunity to not tell me why I should hate them instead. 🙂 )

For film YouTube I like:

Each of those links goes to one of my favorite of their videos.

For television, my highlights were Breaking Bad (another great opportunity: don’t tell me I should have seen this much sooner!), The Flash, and Queer Eye.

Gaming

I played through the whole Kingdom Hearts series this year, my first sustained gaming experience since the Banjo games I loved as a kid. And KH is great! I’m looking forward with everyone else to the release of Kingdom Hearts 3 in a few weeks. For a completely different experience, one that uses more of the PS4’s capabilities, check out Horizon Zero Dawn.

Coda: More stuff

This doesn’t really belong anywhere but it’s all stuff I liked.

POST-SCRIPT: While compiling this post I noticed a need to diversify my sources next year. Fortunately, a nice thing about the Internet is I know I can find a lot of great content by creators of color, women, etc.

The sanitation layer of the Internet

Facebook is clean, while the rest of the Internet is not.

Along with network effects, birthday reminders, news- and information-gathering, and other surface-level benefits, the fundamental clean user experience is what (for now) gives Facebook an edge over everything else online.

Facebook is, in no particular order…

  • safe to log into.
  • trustworthy with regard to basic tools like adding friends, joining groups, following people, and sharing links.
  • devoid of NSFW imagery and similar content.
  • pleasant to look at, with soothing blues and grays.
  • familiar.
  • easy to explain.
  • a just-fine networking tool.
  • well-understood by the general public.

The rest of the Internet is comparatively bad. To quote Tim Wu:

Today, you wander off the safe paths of the Internet and it’s like a trap. You know, you click on the wrong thing, suddenly fifty pop-ups come up, something says, hey, you’ve been infected with a virus, click here to fix it, which of course, if you do click on it, it does infect you with a virus, it’s teeming with weird listicles and crazy things like, reason number four and how you can increase your sperm count or something, and you have to kind of constantly control yourself. You have to be on guard, it’s worse than, it’s a mixture of being in a bad neighborhood and a used car sales place and a casino and a infectious disease ward, all combined into one, and that is not relaxing. Yeah, let’s just put it that way.

We could add: those obnoxious full-page ad spreads, malicious JavaScript, phishing links, link rot, post-GDPR cookie warnings on every other website, and programs that install extensions to your browser because you were in a hurry and didn’t un-check that one box in that one view.

Compared to Facebook, much of the Internet is…

  • dangerous.
  • untrustworthy.
  • rife with NSFW content.
  • ugly, gaudy, tacky.
  • weird and full of tricks.
  • devoid of obvious context.
  • hard to explain.

In other words, it’s everything that the Facebook timeline has been designed and engineered to avoid.

If social media companies and their services are as bad as a lot of people think, we should understand the advantages they do provide as part of imagining and creating alternatives. This is one of those advantages.