Lessons from running my business

Millions of people are self-employed or run small businesses, and millions more do one of those for a while and then move on to something else. It’s hard to imagine that any of us has learned something truly unique.

It’s also incredibly easy to find other writers talking about it, sharing the things they learned, and the lists have a lot of overlap. A lot of the lessons also come across as pedestrian.

Still, I wanted to do a retrospective of my own experience. I wanted to approach it from a somewhat narrower perspective. To compile my list, I asked: What made an impact? What lessons did I learn from starting, running, and closing a one-person business that will cause me to act differently than I acted before I ran the business? Here are some of my answers, which of course I reserve the right to revisit.

People will give way more power to you than they should, and you have to be ethical about that.

I always asked permission before going into people’s files. And yet I was consistently told, “Oh, there’s nothing in there that’s secretive or anything, do anything you need to.” This was stunning to me.

I also saw the downside of this trust, repeatedly: computers bricked entirely by the IT support scam industry exploiting predominantly elderly people with little computer savvy and trusting hearts. On a few occasions I saved the computer only because of the sheer laziness of the scammers: one locked up a Windows computer with a particular encryption tool and set the password to “123456”.

Work honestly, and beware people who are willing to do otherwise.

Sometimes you fail. That hurts but it’s (probably) okay in the end.

This was especially true late in the business, as I was winding down and could see I had too little time to finish everything. I had a few uncomfortable phone calls and text exchanges trying to establish a compromise that met as many of my (understandably) annoyed client’s needs as possible within some tight time constraints. The end results were good for none of us but acceptable for all of us.

You will make mistakes and fall short. And that’s (usually) fine, if you patch it up as best you can, make amends, and learn from the experience.

Location is important.

I did not quit because I didn’t like the work. The job itself was fine. A few projects were great, a few were terrible, and most were in between.

I quit because the place I was living didn’t have what I needed for personal and social fulfillment (I’ll likely return to this topic in the future). I’ve never wanted to define my life around economic considerations alone, and I have the luxury of making that choice. When I got the chance at a job that paid about the same, in a place that in other respects was much better for personal development, I took the opportunity and left.


I closed down but mostly consider the business a success. I started because I needed to generate money for student loan and car payments. I made some money, got a few nice things, built my confidence, and proved my self-sufficiency. I figured out a few things I don’t want to spend my life doing. I could hardly have asked for a better first job after college.

Robot forecasting, circa 1978

In The People’s Almanac 2, published in 1978, there is a section entitled, unpleasantly, “Robots – Artificial Slaves”. It’s a reminder that fear of the robots coming for us all isn’t new.

After some history of androids in ancient literature and mythology, it gets to the interesting parts. For example:

Modern robots would not be possible without miniaturized electronic circuitry and sophisticated computer technology. Their most important component is the computer brain, housed in the robot body or elsewhere, which is programmed to perform certain tasks or react in certain ways to specific stimuli.

I’ve always appreciated this way of thinking about prosthetics and pacemakers:

[Robotic] devices used in medicine make the Bionic Man and Bionic Woman seem plausible. Artificial limbs employ signals from the nerves to the muscles so that people wearing them can use them as if they were really their own. Some devices, called cyborgs [!], go inside the body; e.g., the pacemaker, which regulates heartbeats.

Today most people have at least passing awareness of robotics, but this makes clear what a niche conversation it was at the time:

The leaders of robot manufacture for industry, which, according to robotics expert Gene Bartczak, is an extremely fast-growing field…

Finally, here’s the vision that – while humorously premature in its timeline – has a ring of prescience:

In the future, robots, not people, will go to distant planets with inhospitable climates, and there they will work for a few years and die.

[British roboticist M.W.] Thring predicts for future household use a robot that will scrub, sweep, clean, make beds, dry-clean clothes, tape television shows to be replayed, activate locks, choose library materials and print them by teletype, and more. It will not look human, though it will be sized for human households. In all likelihood, its computer brain will not be attached to its body, but instead will be conveniently housed in a closet. Its spoked but rimless wheels will enable it to climb stairs. Through a sophisticated computer program, it will be able to recognize and categorize objects – differentiate between a drinking glass and a cup, for instance. Available sometime in the 1980s, according to Thring, it will cost about $20,000 and have a life of about 25 years.

At the Third International Joint Conference on Artificial Intelligence at Stanford in 1973, scientists predicted robot tutors by 1983, robot judges by 1988, robot psychiatrists by 1990, and robot chauffeurs by 1992.

The story draws its inevitable ominous conclusion:

By 2000, scientists predict, there will be one robot for every 500 blue-collar workers, and robots will be smarter than humans and able to reproduce themselves for their own ends. And then, it is possible, but not likely, that the human dream of owning the perfect slave will turn into a nightmare, as the robots turn their attention to their human masters.

For what it’s worth, here are the 2015 robot density figures for a few advanced countries and for the world (figures from the International Federation of Robotics, highlighted by Robotics Business Review). Note that this chart shows robot density for all workers, not just blue-collar workers, so the apples-to-apples ratio should be even more dramatic given the smaller denominator.

Code here.
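To make the apples-to-apples point concrete, here’s a minimal sketch with made-up numbers (not IFR data). Robot density is conventionally quoted as robots per 10,000 workers; for a fixed robot count, the density over blue-collar workers alone is necessarily higher than the density over all workers, since the denominator is smaller.

```python
# Illustrative only: hypothetical counts, not real IFR figures.

def robot_density(robots: int, workers: int) -> float:
    """Robots per 10,000 workers -- the conventional density measure."""
    return robots / workers * 10_000

# Suppose a country has 200,000 industrial robots,
# 50 million workers overall, of whom 10 million are blue-collar.
robots = 200_000
all_workers = 50_000_000
blue_collar = 10_000_000

density_all = robot_density(robots, all_workers)    # 40 per 10,000
density_blue = robot_density(robots, blue_collar)   # 200 per 10,000

# For comparison, the 1978 forecast of "one robot per 500
# blue-collar workers" works out to 20 per 10,000.
forecast_density = 1 / 500 * 10_000                 # 20 per 10,000

print(density_all, density_blue, forecast_density)
```

The smaller denominator drives the whole comparison: the same robot count yields a density five times higher when measured against blue-collar workers alone.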

I have four observations.

  1. The slavery analogy was probably meant to be clever or illuminating. It’s not.
  2. The robot/AI apocalypse is not upon us, even in the age of high and rising robot density. The tech community should work on mitigating the effects of mass automation, to be sure, but it should not come at the expense of addressing existing problems of economic inequality, racism, demagoguery, and institutional stagnation. Tech policy changes should focus on what to do about workers who have already lost their jobs to automation, or who will in the next five to ten years.
  3. A lot of what the article predicts is likely at some time in the future. I expect “robot psychiatrists” and “robot tutors” will come before the “robot judges” for institutional reasons, but it will probably happen, maybe in my lifetime. I’m still not worried about an AI apocalypse.
  4. The generally delightful People’s Almanac 2, while sounding close to modern in discussing robotics, contains just 4 indexed references to computers. Go figure.

[I wrote this post and most of the code months ago, and I’ve added it here as part of migrating some of my favorite content to my new site.]

AI will be fine

Artificial intelligence will probably save lives, make lives better, and not destroy all of humanity.

On the one hand I just want to believe this. On the other hand there seems to be evidence that it’s true.

First, AI will save lives by a direct substitution of hardware/software for human labor. An explosion that would kill a mine worker would destroy a robot instead (raising questions about the personhood of software but clearly saving the flesh-and-blood person). A network of self-driving trucks will make fewer bad driving decisions than a network of tired human truckers.

Second, human intelligence seems to have risen over roughly the same period that human violence has declined. If intelligence causes people to behave less violently (arguable but plausible), then a piece of software vastly smarter than a human might be at least somewhat less violent than one.

For example, advanced AI can create better weapons, but it might also be less eager to launch them. Enter this classic xkcd:

I don’t worry about the rise of artificial intelligence from an extinction-of-the-human-species perspective. I do worry about it from the perspective of standards of living and human purpose, which are a lot harder to come by if virtually all work is done by technology. That’s a social problem that’s a century or so away, but we’re probably already seeing early signs of it.

By all means, worry about AI if you’re in a social/economic/demographic position where that’s your most imminent worry. But worry for the right reasons.