Golang and more: this week’s personal tech updates

First I haz a sad. After a server choke last week, the Earlham CS admins finally had to declare time-of-death on the filesystem underlying one of our widely used virtual machines. Definitive causes evade us (I think they are lost to history), so we will now pivot to rebuilding and improving the system design.

In some respects this was frustrating, and it produced a lot of stress for us. On the other hand, it's a sweet demo of the power of virtualization. The server died, but the hardware underlying it was still fine. That means we can rebuild at a fraction of the cost of discovering, purchasing, installing, and configuring new metal. The problem doesn't disappear, but it moves from hardware to software.

I’ve also discovered a few hardware problems. One of the drones we will take to Iceland needs a bit of work, for example. Our Canon camera may also have a bad orientation sensor, so the LCD doesn’t auto-rotate. Discovering those things in February is not fun. Discovering them in May or June would have been much worse.

Happier news: I began learning Go this week. I have written a lot of Java and C, and some Python, but for whatever reason I’ve taken to Golang as I have to no other language. It has a lot of the strengths of C, a little less syntactical cruft, good documentation, and a rich developer literature online. I also have more experience.

A brief elaboration on experience: A lot of people say you have to really love programming or you have no hope of being good at it. Maybe. But I’m more partial to thinking of software engineering as a craft. Preexisting passion is invaluable but not critical, because passion can be cultivated (cf. Cal Newport). It emerges from building skills, trying things, perseverance, solving some interesting problems, and observing your own progress over time. In my experience (like here and here), programming as a student, brand-new to the discipline, was often frustrating and opaque. Fast forward, and today I spent several hours on my day off learning Golang because it was interesting and fun. 🤷‍♂️

Your mileage may vary, but that was my experience.
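
If you’re curious what the appeal is, here’s a small, made-up snippet along the lines of what I’ve been writing this week (not from any real project). Goroutines and channels give you concurrency with almost no ceremony:

```go
// A tiny example of what makes Go pleasant: spin up a goroutine,
// send values over a channel, and range over the results.
package main

import (
	"fmt"
	"strings"
)

// shout uppercases each word and sends it down the channel,
// closing the channel when it's done.
func shout(words []string, out chan<- string) {
	for _, w := range words {
		out <- strings.ToUpper(w)
	}
	close(out)
}

func main() {
	out := make(chan string)
	go shout([]string{"go", "is", "fun"}, out)
	for w := range out {
		fmt.Println(w)
	}
}
```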

Finally, here are a few articles I read or re-read this week:

Earlham’s four-day weekend runs from today through Sunday. After a couple of stressful weeks, I’m going to take advantage of the remainder of the time off to decompress.

My new laptop

Now that Apple fixed the keyboard, I finally upgraded my Mac.

I am a non-combatant in the OS wars. I have my Mac laptop and a Windows desktop. I have an iPhone and build for Android at work. I run Linux servers professionally. I love everybody.

My little 2015 MacBook Air is a delightful machine. But it wasn’t keeping up with my developer workloads, in particular Android app builds. I bought the $2799 base model MacBook Pro 16″.

I started from scratch with a fresh install. Generally I try to avoid customizing my environments too much, on the principle of simplicity, so I didn’t bother migrating most of my old configs (one exception: .ssh/config).

Instead I’ve kept the default apps and added my own one by one. I migrated my data manually – not as daunting as it sounds, given that the old laptop had only 128GB of storage and much of it was consumed by the OS. I closed with an initial Time Machine backup to my (aging) external hard drive.

Now I’ve had a couple of weeks to actually use the MacBook Pro. Scattered observations:

  • WOW this screen.
  • WOW these speakers.
  • WOW the time I’m going to save building apps (more on that later).
  • I’m learning zsh now that it’s the Mac’s default shell.
  • Switching from MagSafe to USB-C for charging was ultimately worth the tradeoff.
  • I was worried about the footprint of this laptop (my old laptop is only 11-inch!), but I quite like it. Once I return to working at my office, I think it will be even better.
  • I am running Catalina. It’s fine. I haven’t hit any of the bad bugs people have discussed – at least not yet.
  • I’m holding on to my old Mac as a more passive machine or as a fallback if something happens to this one.

Only one of those really matters, though.

Much better for building software

The thing that makes this laptop more than a $2799 toy is the boon to my development work. I wanted to benchmark it, not in a strictly scientific way (there are websites that will do that) but in a comparative way, in my actual day-to-day use case: building Android apps.

The first thing I noticed: a big cut in the time it takes to launch Android Studio. It’s an immediate lifestyle improvement.

I invalidated caches and restarted Android Studio on both machines, with both app projects open at the same time (not optimal performance-wise, but not uncommon when I’m working on these apps intensively).

I then ran and recorded times for three events, on each machine, for both of the apps I regularly build:

  • Initial Gradle sync and build
  • Build but don’t install (common for testing)
  • Build and install
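
As an aside, if you wanted to script this kind of timing from the command line, a rough sketch in Go might look like the following. The Gradle task names (assembleDebug, installDebug) are placeholders for a typical Android project:

```go
// timebuild.go: a minimal sketch for timing Gradle tasks from the command line.
// Run it from the root of an Android project that has a ./gradlew wrapper.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// timeTask runs a single Gradle task and reports how long it took.
func timeTask(task string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("./gradlew", task)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	err := cmd.Run()
	return time.Since(start), err
}

func main() {
	// assembleDebug builds without installing; installDebug builds and installs.
	// These task names are placeholders for a typical Android project.
	for _, task := range []string{"assembleDebug", "installDebug"} {
		elapsed, err := timeTask(task)
		if err != nil {
			fmt.Printf("%s failed: %v\n", task, err)
			continue
		}
		fmt.Printf("%s: %s\n", task, elapsed.Round(time.Second))
	}
}
```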

Shock of all shocks, the 2019 pro computer is much better than the 2015 budget-by-Apple’s-standards computer (graphs generated with a Jupyter notebook on the new laptop, smaller bars are better; code here):

Yeah. I needed a new computer. 🙂

I expect 2020 to be a big year for me for a number of reasons I’ll share over time, and my old laptop just couldn’t keep up. This one will, and I’m happy with it.

The sanitation layer of the Internet

Facebook is clean, while the rest of the Internet is not.

Along with network effects, birthday reminders, news- and information-gathering, and other surface-level benefits, the fundamental clean user experience is what (for now) gives Facebook an edge over everything else online.

Facebook is, in no particular order…

  • safe to log into.
  • trustworthy with regard to basic tools like adding friends, joining groups, following people, and sharing links.
  • devoid of NSFW imagery and similar content.
  • pleasant to look at, with soothing blues and grays.
  • familiar.
  • easy to explain.
  • a just-fine networking tool.
  • well-understood by the general public.

The rest of the Internet is comparatively bad. To quote Tim Wu:

Today, you wander off the safe paths of the Internet and it’s like a trap. You know, you click on the wrong thing, suddenly fifty pop-ups come up, something says, hey, you’ve been infected with a virus, click here to fix it, which of course, if you do click on it, it does infect you with a virus, it’s teeming with weird listicles and crazy things like, reason number four and how you can increase your sperm count or something, and you have to kind of constantly control yourself. You have to be on guard, it’s worse than, it’s a mixture of being in a bad neighborhood and a used car sales place and a casino and a infectious disease ward, all combined into one, and that is not relaxing. Yeah, let’s just put it that way.

We could add: those obnoxious full-page ad spreads, malicious JavaScript, phishing links, link rot, post-GDPR cookie warnings on every other website, and programs that install extensions to your browser because you were in a hurry and didn’t un-check that one box in that one view.

Compared to Facebook, much of the Internet is…

  • dangerous.
  • untrustworthy.
  • rife with NSFW content.
  • ugly, gaudy, tacky.
  • weird and full of tricks.
  • devoid of obvious context.
  • hard to explain.

In other words, it’s everything that the Facebook timeline has been designed and engineered to avoid.

If social media companies and their services are as bad as a lot of people think, we should understand the advantages they do provide as part of imagining and creating alternatives. This is one of those advantages.

Broken links

I was looking up some old articles and blog posts (circa 2009–2012, so not that old). I was saddened to be reminded how littered the modern Internet is with broken links.

The excellent archive.org, via its Wayback Machine, saves as much of the public Internet as it can, but it doesn’t quite capture everything.

If the link is broken and the Wayback Machine doesn’t have it? Try your luck with a search engine and hope that someone copy-pasted the content, or that the author cross-posted it to a site that’s still online.
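
As an aside, the Wayback Machine has an availability API you can query programmatically to check for a snapshot before giving up. Here’s a minimal sketch in Go; the target URL is made up:

```go
// waybackcheck.go: ask the Wayback Machine whether it has a snapshot of a URL,
// using the availability endpoint at https://archive.org/wayback/available.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// availability mirrors the relevant part of the API's JSON response.
type availability struct {
	ArchivedSnapshots struct {
		Closest struct {
			Available bool   `json:"available"`
			URL       string `json:"url"`
			Timestamp string `json:"timestamp"`
		} `json:"closest"`
	} `json:"archived_snapshots"`
}

func main() {
	target := "http://example.com/some-old-post" // hypothetical dead link
	resp, err := http.Get("https://archive.org/wayback/available?url=" + url.QueryEscape(target))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	var result availability
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		fmt.Println("decode failed:", err)
		return
	}

	if result.ArchivedSnapshots.Closest.Available {
		fmt.Println("snapshot:", result.ArchivedSnapshots.Closest.URL)
	} else {
		fmt.Println("no snapshot found; time to try a search engine")
	}
}
```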

This highlights some significant modern problems:

  • Slipping through the cracks this way seems like about the only way to actually disappear from the Internet.
  • The web today, which has gotten pretty bad (but not irredeemably so), dominates search engine results in part because of recency bias and in part because those pages do, in fact, still exist.
  • The central challenge of the age of ubiquitous “information” is that it’s hard to find real, good information.

The web is a god of chaos. Broken links are the debris of its manic, arbitrary antics.

HTML > PDF

I stumbled into the world of PDF critics this week.

Some criticism from as far back as 2001 (emphasis in original):

PDF was designed to specify printable pages. PDF content is thus optimized for letter-sized sheets of paper, not for display in a browser window. I often see users getting lost in PDF because the print-oriented viewer gives them only a small peephole on a big, complicated layout and they can’t scroll it in the simple, linear manner they are accustomed to on the Web. Instead, PDF files often use elaborate graphic layouts and split the content into separate units for each sheet of print. Although this is highly appropriate for printed documents, it causes severe usability problems online.

PDF pages lack navigation bars and other apparatus that might help users move within the information space and relate to the rest of the site. Because PDF documents can be very big, the inability to easily navigate them takes a toll on users. PDF documents also typically lack hypertext, again because they are designed with print in mind.

In a recent study of how journalists use the Web, we found that PDF files sometimes crashed the user’s computer. This happened most often to journalists working from home on low-end computers (especially old Macs). The more fancy the company’s press kit, the less likely it would get quoted.

Because PDF is not the standard Web page format, it dumps users into a non-standard user interface. Deviating from the norm hurts usability because, for example, scrolling works differently, as do certain commands, such as the one to make text larger (or smaller). Also, after finishing with a PDF document, users sometimes close the window instead of clicking the Back button, thus losing their navigation history. Although this behavior is not common, it is symptomatic of the problems caused when you present users with a non-standard Web page that both looks different and follows different rules.

Not all of this holds up almost two decades later. PDFs are quicker to load than they once were, for example, simply because computers and the Internet are faster now. Memory constraints are not nearly as strict either, so it’s rare for a PDF to crash a computer or even a program.

That doesn’t mean every criticism is now obsolete. From gov.uk this year (emphasis added):

GOV.UK exists to make government services and information as easy as possible to find and use.

For that reason, we’re not huge fans of PDFs on GOV.UK.

Compared with HTML content, information published in a PDF is harder to find, use and maintain. More importantly, unless created with sufficient care PDFs can often be bad for accessibility and rarely comply with open standards.

We’ll continue to improve GOV.UK content formats so it’s easy to create great-looking, usable and accessible HTML documents.

We also intend to build functionality for users to automatically generate accessible PDFs from HTML documents. This would mean that publishers will only need to create and maintain one document, but users will still be able to download a PDF if they need to. (This work is downstream of some higher priorities, but is on the long-term roadmap).

I encourage you to read the gov.uk piece, as it covers persisting problems with this format. I’m not always a partisan in these debates, but in this case I’m persuaded.

I do think that for exchanging documents in emails, PDF is still a good option. Anti-PDF posts often grant that PDF can be better than HTML for printing as well: it provides a clear print preview and is supported consistently across flavors of printer hardware and software. But for online reading, HTML wins.

Glass on the floor of the Internet

I saw this tweet today:

[tweet https://twitter.com/sarah_edo/status/1013427276350873600]

For posterity:

Sarah Drasner @sarah_edo: I miss the useless web. I miss your grandpa’s blog. I miss weird web art projects that trolled me. I miss fan pages for things like hippos. I wish I didn’t feel like the web was collapsing into just a few sites plus a thousand resumes.

I was using dial-up as a kid at the time the “useless web” last existed in a meaningful way. I don’t remember it well. I suspect I would have liked it.

She’s right that it doesn’t exist anymore, at any real scale. That got me thinking back to an interview between MSNBC’s Chris Hayes and the tech/media guru Tim Wu, talking about why the Internet is now bad:

TIM WU: And you know, the old days, links were seen as treasures. Imagine that.

CHRIS HAYES: So true.

TIM WU: So like, someone, Yahoo, its basic idea is, here are some good links to go to. You know, a whole business was built on that premise. It was like, we’re a bunch of guys who hang on the internet and here are the coolest links.

CHRIS HAYES: [laughs] Here are some links.

TIM WU: Yeah, so now, so that’s the experience back then. Today, you wander off the safe paths of the internet and it’s like a trap. You know, you click on the wrong thing, suddenly fifty pop-ups come up, something says, hey, you’ve been infected with a virus, click here to fix it, which of course, if you do click on it, it does infect you with a virus, it’s teeming with weird listicles and crazy things like, reason number four and how you can increase your sperm count or something, and you have to kind of constantly control yourself. You have to be on guard, it’s worse than, it’s a mixture of being in a bad neighborhood and a used car sales place and a casino and a infectious disease ward, all combined into one, and that is not relaxing. Yeah, let’s just put it that way.

I’ve thought about that exchange a lot.

For a regular user of the Internet, and even for someone like me with comparatively huge advantages in privilege, education, access to technology, and skills, the Internet is a dangerous place to be. Yes, you can get to just about anything you want, but the journey is perilous. It’s as if a mirror shattered and you have to walk across the glass to get to your bookshelf, or desk, or couch. It’ll be great, as long as your feet don’t get punctured.

Wu traces the problem back to (I think) the correct source: consolidation of power on the Internet that broadly reflects similar consolidations in radio and television. My inference is that it goes something like this: talent flocks to the places that can pay a lot. Smaller places can’t afford the talent, so they aren’t able to handle security as well, or to defend their IP against copycats or knockoffs, or to ensure that the public knows they can be trusted where another site can’t. Aggregation theory takes over because the major players can provide a better (in this case, read: safer) experience to users. Smaller players shrivel and disappear, so scammers (and personal resumes, as Sarah Drasner mentioned) come to dominate what remains of the small web. And you get the bad Internet.

There are ways that individuals, companies, and organizations can mitigate the problem. But the fundamental structure and incentives drain smaller sites of the resources needed to make and share good stuff online.

This is why, like many people, I don’t feel comfortable trusting some random link to a site I’ve never heard of. I mostly ignore those links and stay within the “safe paths of the Internet.” I’m sure I miss good stuff, but I also miss a ton of bad stuff.

I want more useless things online. I want to see Rule of Fun apply to more of what circulates online. I want the open web to work. Making that happen in a sustainable way is as much a political project as an individual one.

If you want a small taste of what the useless web might look like, check out Inspiring Online, a great source for a grab bag of delightful Internet miscellany in its purest form.

Robot forecasting, circa 1978

In The People’s Almanac 2, published in 1978, there is a section entitled, unpleasantly, “Robots – Artificial Slaves”. It’s a reminder that fear of the robots coming for us all isn’t new.

After some history of androids in ancient literature and mythology, it gets to the interesting parts. For example:

Modern robots would not be possible without miniaturized electronic circuitry and sophisticated computer technology. Their most important component is the computer brain, housed in the robot body or elsewhere, which is programmed to perform certain tasks or react in certain ways to specific stimuli.

I’ve always appreciated this way of thinking about prosthetics and pacemakers:

[Robotic] devices used in medicine make the Bionic Man and Bionic Woman seem plausible. Artificial limbs employ signals from the nerves to the muscles so that people wearing them can use them as if they were really their own. Some devices, called cyborgs [!], go inside the body; e.g., the pacemaker, which regulates heartbeats.

Today most people have at least passing awareness of robotics, but this makes clear what a niche conversation it was at the time:

The leaders of robot manufacture for industry, which, according to robotics expert Gene Bartczak, is an extremely fast-growing field…

Finally, here’s the vision that – while humorously premature in its timeline – has a ring of prescience:

In the future, robots, not people, will go to distant planets with inhospitable climates, and there they will work for a few years and die.

[British roboticist M.W.] Thring predicts for future household use a robot that will scrub, sweep, clean, make beds, dry-clean clothes, tape television shows to be replayed, activate locks, choose library materials and print them by teletype, and more. It will not look human, though it will be sized for human households. In all likelihood, its computer brain will not be attached to its body, but instead will be conveniently housed in a closet. Its spoked but rimless wheels will enable it to climb stairs. Through a sophisticated computer program, it will be able to recognize and categorize objects – differentiate between a drinking glass and a cup, for instance. Available sometime in the 1980s, according to Thring, it will cost about $20,000 and have a life of about 25 years.

At the Third International Joint Conference on Artificial Intelligence at Stanford in 1973, scientists predicted robot tutors by 1983, robot judges by 1988, robot psychiatrists by 1990, and robot chauffeurs by 1992.

The story draws its inevitable ominous conclusion:

By 2000, scientists predict, there will be one robot for every 500 blue-collar workers, and robots will be smarter than humans and able to reproduce themselves for their own ends. And then, it is possible, but not likely, that the human dream of owning the perfect slave will turn into a nightmare, as the robots turn their attention to their human masters.

For what it’s worth, here are the 2015 robot density figures for a few advanced countries and for the world (figures from the International Federation of Robotics, highlighted by Robotics Business Review). Note that this chart shows robot density across all workers, not just blue-collar workers, so an apples-to-apples comparison (robots per blue-collar worker) would look even more dramatic, given the smaller denominator.

Code here.

I have four observations.

  1. The slavery analogy was probably meant to be clever or illuminating. It’s not.
  2. The robot/AI apocalypse is not upon us, even in the age of high and rising robot density. The tech community should work on mitigating the effects of mass automation, to be sure, but that work should not come at the expense of addressing existing problems of economic inequality, racism, demagoguery, and institutional stagnation. Tech policy changes should focus on what to do about workers who have already lost their jobs to automation, or who will in the next five to ten years.
  3. A lot of what the article predicts is likely to happen at some point in the future. I expect “robot psychiatrists” and “robot tutors” will come before “robot judges” for institutional reasons, but all of it will probably arrive eventually, maybe in my lifetime. I’m still not worried about an AI apocalypse.
  4. The generally delightful People’s Almanac 2, while sounding close to modern in discussing robotics, contains just 4 indexed references to computers. Go figure.

[I wrote this post and most of the code months ago, and I’ve added it here as part of migrating some of my favorite content to my new site.]