
Appendix to JAWWS: An Incrementally Rewritten Paragraph

Yesterday, I published a post describing an idea to improve scientific style by rewriting papers as part of a new science journal. I originally wanted to conclude the post with a demonstration of how the rewriting could be done, but I didn’t want to add too much length. Here it is as an appendix.

We start with a paragraph taken more or less at random from a biology paper titled “Shedding light on the ‘dark side’ of phylogenetic comparative methods”, published by Cooper et al. in 2016. Then, in five steps, we’ll incrementally improve it — at least according to my preferences! Let me know if it fits your own idea of good scientific writing as well.

1. Original

Most models of trait evolution are based on the Brownian motion model (Cavalli-Sforza & Edwards 1967; Felsenstein 1973). The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species (Hansen 1997; Butler & King 2004). OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations (Cooper et al. 2016b). For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes (Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013). However, the OU model has several well-known caveats (see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014). For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses (the median number of taxa used for OU studies is 58; Cooper et al. 2016b). Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process (Boettiger, Coop & Ralph 2012; Pennell et al. 2015). Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model (e.g. Hansen 1997; Hansen & Orzack 2005), but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies.

Okay, first things first: let’s banish all those horrendous inline citations to footnotes.

2. With footnotes

Most models of trait evolution are based on the Brownian motion model.[1: Cavalli-Sforza & Edwards 1967; Felsenstein 1973] The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species.[2: Hansen 1997; Butler & King 2004] OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations.[3: Cooper et al. 2016b] For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes.[4: Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013] However, the OU model has several well-known caveats.[5: see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014] For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses.[6: the median number of taxa used for OU studies is 58; Cooper et al. 2016b] Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process.[7: Boettiger, Coop & Ralph 2012; Pennell et al. 2015] Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model,[8: e.g. Hansen 1997; Hansen & Orzack 2005] but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies.

Much better.

Does this need to be a single paragraph? No, it doesn’t. Let’s not go overboard with cutting it up, but I think a three-fold division makes sense.

3. Multiple paragraphs

Most models of trait evolution are based on the Brownian motion model.[9: Cavalli-Sforza & Edwards 1967; Felsenstein 1973]

The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species.[10: Hansen 1997; Butler & King 2004] OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations.[11: Cooper et al. 2016b] For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes.[12: Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013]

However, the OU model has several well-known caveats.[13: see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014] For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses.[14: the median number of taxa used for OU studies is 58; Cooper et al. 2016b] Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process.[15: Boettiger, Coop & Ralph 2012; Pennell et al. 2015] Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model,[16: e.g. Hansen 1997; Hansen & Orzack 2005] but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies.

We haven’t rewritten anything yet — the changes so far are really low-hanging fruit! Let’s see if we can improve the text more with some rephrasing. This is trickier, because there’s a risk I change the original meaning, but it’s not impossible.

4. Some rephrasing

Most models of trait evolution are based on the Brownian motion model, in which traits evolve randomly and accrue variance over time.[17: Cavalli-Sforza & Edwards 1967; Felsenstein 1973]

What if we add a parameter to measure how much the trait motion returns to a theoretical optimum for a given clade or set of species? Then we get a family of models called Ornstein-Uhlenbeck,[18: Hansen 1997; Butler & King 2004] first developed as a way to describe friction in the Brownian motion of a particle. These models have become increasingly popular, both because they tend to fit the data better than simple Brownian motion, and because they have attractive biological interpretations.[19: Cooper et al. 2016b] For example, fit to an Ornstein-Uhlenbeck model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes.[20: Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013]

However, Ornstein-Uhlenbeck models have several well-known caveats.[21: see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014] For example, they are frequently — and incorrectly — favoured over simpler Brownian models. This occurs with likelihood ratio tests, particularly for the small data sets that are commonly used in these analyses.[22: the median number of taxa used for Ornstein-Uhlenbeck studies is 58; Cooper et al. 2016b] It also happens when there is error in the data set, even very small amounts of error, simply because Ornstein-Uhlenbeck models accommodate more variance towards the tips of the phylogeny — therefore suggesting an interesting biological process where there is none.[23: Boettiger, Coop & Ralph 2012; Pennell et al. 2015] Additionally, users of Ornstein-Uhlenbeck models often state that clade-wide stabilising selection accounts for data fitting the model, even though the literature describing the model warns that such a simple explanation is unlikely.[24: e.g. Hansen 1997; Hansen & Orzack 2005] Unfortunately, these limitations are rarely taken into account in empirical studies.

What did I do here? First, I completely got rid of the “OU” acronym. Acronyms may look like they simplify the writing, but in fact they often demand more cognitive resources from the reader, who has to constantly remember that OU means Ornstein-Uhlenbeck.

Then I rephrased several sentences to make them flow better, at least according to my taste.

I also added a short explanation of what Brownian and Ornstein-Uhlenbeck models are. That might not be necessary, but it’s always good to make life easier for the reader. Even if you defined the terms earlier in the paper, repetition is useful because it spares the reader the effort of remembering. And even if everyone reading your paper is expected to know what Brownian motion is, there’ll be some student somewhere thanking you for reminding them.[25: I considered doing this with the “evolutionary constraints, stabilising selection, niche conservatism and selective regimes” enumeration too, but these are mere examples, less critical to the main idea of the section. Adding definitions would make the sentence quite long and detract from the main flow. Also I don’t know what the definitions are and don’t feel like researching lol.]

This is already pretty good, and still close enough to the original. What if I try to go further?

5. More rephrasing

Most models of trait evolution are based on the Brownian motion model.[26: Cavalli-Sforza & Edwards 1967; Felsenstein 1973] Brownian motion was originally used to describe the random movement of a particle through space. In the context of trait evolution, it assumes that a trait (say, beak size in some group of bird species) changes randomly, with some species evolving a larger beak, some a smaller one, and so on. Brownian motion implies that variance in beak size, across the group of species, increases over time.

This is a very simple model. What if we refined it by adding a parameter? Suppose there is a theoretical optimal beak size for this group of species. The new parameter measures how much the trait tends to return to this optimum. This gives us a type of model called Ornstein-Uhlenbeck,[27: Hansen 1997; Butler & King 2004] first developed as a way to add friction to the Brownian motion of a particle.

Ornstein-Uhlenbeck models have become increasingly popular in trait evolution, for two reasons.[28: Cooper et al. 2016b] First, they tend to fit the data better than simple Brownian motion. Second, they have attractive biological interpretations. For example, fit to an Ornstein-Uhlenbeck model has been seen as evidence of a number of processes, including evolutionary constraints, stabilising selection, niche conservatism and selective regimes.[29: Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013]

Despite this, Ornstein-Uhlenbeck models are not perfect, and have several well-known caveats.[30: see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014] Sometimes you really should use a simpler model! It is common, but incorrect, to favour an Ornstein-Uhlenbeck model over a Brownian model after performing likelihood ratio tests, particularly for the small data sets that are often used in these analyses.[31: the median number of taxa used for Ornstein-Uhlenbeck studies is 58; Cooper et al. 2016b] Then there is the issue of error in data sets. Even a very small amount of error can lead researchers to pick an Ornstein-Uhlenbeck model, simply because they accommodate more variance towards the tips of the phylogeny — therefore suggesting interesting biological processes where there is none.[32: Boettiger, Coop & Ralph 2012; Pennell et al. 2015]

Additionally, users of Ornstein-Uhlenbeck models often state that the reason their data fits the model is clade-wide stabilising selection (for instance, selection for intermediate beak sizes, rather than extreme ones, across the group of birds). Yet the literature describing the model warns that such simple explanations are unlikely.[33: e.g. Hansen 1997; Hansen & Orzack 2005]

Unfortunately, these limitations are rarely taken into account in empirical studies.

Okay, many things to notice here. First, I added an example, bird beak size. I’m not 100% sure I understand the topic well enough for my example to be particularly good, but I think it’s decent. I also added more explanation of what Brownian models are in trait evolution. Then I rephrased other sentences to make the tone less formal.

As a result, this version is longer than the previous ones. It seemed justified to cut it up into more paragraphs to accommodate the extra length. It’s plausible that the authors originally tried to include too much content in too few words, perhaps to satisfy a length constraint posed by the journal.
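For readers who find a simulation clearer than prose, the distinction the rewritten paragraphs keep circling — random drift versus drift with a pull back towards an optimum — can be sketched in a few lines of Python. This is my own toy illustration, not anything from Cooper et al.; the function names and parameters (theta, sigma, optimum) are made up for the example:

```python
import random

def simulate_trait(n_steps, theta, optimum=0.0, sigma=1.0, seed=0):
    """Simulate one lineage's trait value (e.g. beak size).

    theta = 0 gives plain Brownian motion; theta > 0 adds the
    Ornstein-Uhlenbeck pull back towards `optimum`. This is a crude
    discretisation for illustration, not the paper's actual method.
    """
    rng = random.Random(seed)
    x = optimum
    for _ in range(n_steps):
        pull = -theta * (x - optimum)        # restoring "friction" (zero for Brownian)
        x += pull + sigma * rng.gauss(0, 1)  # random evolutionary step
    return x

def variance_across_lineages(theta, n_lineages=500, n_steps=200):
    """Variance of the trait across replicate lineages after n_steps,
    like variance in beak size across a group of species."""
    vals = [simulate_trait(n_steps, theta, seed=i) for i in range(n_lineages)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Under Brownian motion the variance keeps growing with time;
# under Ornstein-Uhlenbeck it plateaus near a stationary value.
print(variance_across_lineages(theta=0.0))  # large, grows with n_steps
print(variance_across_lineages(theta=0.5))  # small, bounded
```

Running this shows the variance across lineages growing without bound when theta is zero, and saturating at a modest value when theta is positive — which is exactly the “strength of return towards a theoretical optimum” that the original paragraph packs into one clause.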

Let’s do one more round…

6. Rephrasing, extreme edition

Suppose you want to model the evolution of beak size in some fictional family of birds. There are 20 bird species in the family, all with different average beak sizes. You want to create a model of how their beaks changed over time, so you can reimagine the beak of the family’s ancestor and understand what happened exactly.

Most people who try to model the evolution of a biological trait use some sort of Brownian motion model.[34: Cavalli-Sforza & Edwards 1967; Felsenstein 1973] Brownian motion, originally, refers to the random movement of a particle in a liquid or gas. The mathematical analogy here is that beak size evolves randomly: it becomes very large in some species, very small in others, with various degrees of intermediate forms between the extremes. Therefore, across the 20 species, the variance in beak size increases over time.

Brownian motion is a very simple model. What if we add a parameter to get a slightly more complicated one? Let’s assume there’s a theoretical optimal beak size for our family of birds — maybe because the seeds they eat have a constant average diameter. The new parameter measures how much beak size tends to return to the optimum during its evolution. This gives us a type of model called Ornstein-Uhlenbeck,[35: Hansen 1997; Butler & King 2004] first developed as a way to add friction to the Brownian motion of a particle. We can imagine the “friction” to be the resistance against deviating from the optimum.

Ornstein-Uhlenbeck models have become increasingly popular, for two reasons.[36: Cooper et al. 2016b] First, they often fit real-life data better than simple Brownian motion. Second, they are easy to interpret biologically. For example, maybe our birds don’t have as extreme beak sizes as we’d expect from a Brownian model, so it makes sense to assume there’s some force pulling the trait towards an intermediate optimum. That force might be an evolutionary constraint, stabilising selection (i.e. selection against extremes), niche conservatism (the tendency to keep ancestral traits), or selective regimes. Studies using Ornstein-Uhlenbeck models have been seen as evidence for each of these patterns.[37: Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013]

Of course, Ornstein-Uhlenbeck models aren’t perfect, and in fact have several well-known caveats.[38: see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014] For example, simpler models are sometimes better. It’s common for researchers to incorrectly choose Ornstein-Uhlenbeck instead of Brownian motion when using likelihood ratio tests to compare models, a problem made worse by the small data sets that are often used in these analyses.[39: the median number of taxa used for Ornstein-Uhlenbeck studies is 58; Cooper et al. 2016b] Then there is the issue of error in data sets (e.g. when your beak size data isn’t fully accurate). Even a very small amount of error can lead researchers to pick an Ornstein-Uhlenbeck model, simply because it’s better at accommodating variance among closely related species at the tips of a phylogenetic tree. This can suggest interesting biological processes where there are none.[40: Boettiger, Coop & Ralph 2012; Pennell et al. 2015]

One particular mistake that users of Ornstein-Uhlenbeck models often make is to assume that their data fits the model due to clade-wide stabilising selection (e.g. selection for intermediate beak sizes, rather than extreme ones, across the family of birds). Yet the literature warns against exactly that — according to the papers describing the models, such simple explanations are unlikely.[41: e.g. Hansen 1997; Hansen & Orzack 2005]

Unfortunately, these limitations are rarely taken into account in empirical studies.

This is longer still than the previous version! At this point I’m convinced the original paragraph was artificially short. That is, it packed far more information than a text of its size normally should.

This is a common problem in science writing. Whenever you write something, there’s a tradeoff between brevity, clarity, amount of information, and complexity: you can only maximize three of them. Since science papers often deal with a lot of complex information, and have word limits, clarity often gets the short end of the stick.

Version 6 is a good example of sacrificing brevity to get more clarity. In this case it’s important to keep the amount of information constant, because I don’t want to change what the original authors were saying. It is possible that they were saying too many things. On the other hand, this is only one paragraph in a longer paper, so maybe it made sense to simply mention some ideas without developing them.

I tried a Version 7 in which I aimed for a shorter paragraph, on the scale of the original one, but I failed. To be able to keep all the information, I would have had to sacrifice the extra explanations and the bird beak example, and we’d be back to square one. This suggests that the original paragraph and my rewritten version sit at different points on the tradeoff curve. The original is brief, information-rich, and complex, but dense; my version is information-rich, complex, and clear, but long. To get brief and clear would require taking some information out, which I can’t do as a rewriter.

It is my opinion that sacrificing clarity is the worst possible choice, at least in most contexts. We could then rephrase my project as an attempt to emphasize clarity above all else: after all, brevity, information richness and complexity serve no purpose if the text fails to communicate.


The Journal of Actually Well-Written Science

Update: The project described below is actually happening! Head to jawws.org for more content and posts.

Once upon a time, I was a master’s student in evolutionary biology, on track towards a PhD and an academic research career.

Some gloomy day (it was autumn and it was Sweden), a professor suggested that we organize a journal club — a weekly gathering to discuss a scientific paper — as an optional addition to regular coursework. I immediately thought, “Reading science papers sucks, so obviously I’m not going to do more of that just for fun.” But all my classmates enthusiastically signed up for it, so I caved in and joined too. And so, every week, I went to the journal club and tried to hide the fact that I had barely skimmed the assigned paper.

I am no longer on track towards a PhD and an academic research career.

There were, of course, many reasons to leave the field after my master’s degree, some better than others. “I hate reading science papers” doesn’t sound like a very serious reason — but if I’m honest with myself, it was a true motivation to quit.

And I think that generalizes far beyond my personal experience.

Science papers are boring. They’re boring even when they should be interesting. They’re awful at communicating their contents. They’re a chore to read. They’re work.

In a way, that’s expected — papers aren’t meant to be entertainment — but over time, I’ve grown convinced that the pervasiveness of bad writing is a major problem in science. It requires a lot of researchers’ precious time and energy. It keeps the public out, including people who disseminate knowledge, such as teachers and journalists, and those who take decisions about scientific matters, such as politicians and business leaders. It discourages wannabe scientists. In short, it makes science harder than it needs to be.

The quality of the writing is, of course, only one of countless problems with current academic publishing. Others include access,[1: most papers are gated by journals and very expensive to get access to] peer review,[2: a very bad system in which anonymous scientists must review your paper before it gets published, and may arbitrarily reject your work, especially if they are in competition with you, or ask you to perform more experiments] labor exploitation,[3: scientists don’t get paid for writing papers, or for reviewing them, and journals take all the financial upside] the failure to report negative results,[4: which are less exciting than positive results] fraud, and so on. These issues are important, but they are not the focus of this essay. The focus here is to examine and suggest a solution to a question that sounds petty and unserious, but is actually a genuine problem: the fact that science papers are incredibly tiresome.

This post contains three main sections:

If you’re short on time, please read the third one, which includes the sketch of a plan to improve scientific style. The other two sections provide background and justification for the plan.

Additionally, I published an appendix in which I rewrite a paragraph multiple times as a demonstration.

What makes scientific papers difficult to read?

Three reasons: topic, content, and style.

Boring topics

Science today is hyperspecialized. To make a new contribution, you need to be hyperspecialized in some topic, and read hyperspecialized papers, and write hyperspecialized ones. It’s unavoidable — science is too big and complex to allow people to make sweeping general discoveries all the time.

As a result, any hyperspecialized paper in a field that isn’t your own isn’t going to be super interesting to you. Consider these headlines:[5: These are a few titles taken at random from the journal Nature, all published on 30 June 2021.]

I could see myself maybe skimming the third one because I’ve been interested in covid vaccines to some superficial extent, but none of them strike me as fun reading. But if you work in superconductors, maybe the Wigner crystal one (whatever that is) sounds appealing to you.

One of the reasons I quit biology is that I eventually figured out that I wasn’t sufficiently interested in the field. Surely that also contributed to my lack of eagerness to read papers. But that isn’t the whole story. There were scientific questions I was genuinely curious about, and for which I should have been enthusiastic about reading the latest research. Yet that almost never happened.

Just as you’re sometimes attracted to a novel or movie because of its premise, only to be disappointed by the actual execution, some papers should be interesting thanks to their topic, but still fail because of their content or style.

Tedious content

The primary goal of a scientific paper is to communicate science. Surprisingly, we tend to forget this, because, as I said, papers are also a measure of work output. But still, they’re supposed to contain useful information. A good science paper should answer a question and allow another scientist to understand and perhaps replicate the methods.

That means that, sometimes, there is stuff that must be there even though it’s not interesting. A paper might contain a lengthy description of an experimental setup or statistical methods which, no matter what you do, will probably never be particularly compelling.

Besides, it might be very technical and complicated. It’s possible to write complex material that is engaging, but that’s a harder bar to clear.

And then sometimes your results just aren’t that interesting. Maybe they disprove the cool hypothesis you wanted to prove. Maybe you merely found a weak statistical correlation. Maybe “more research is needed.” It’s important to publish results even if they’re negative or unimpressive, but of course that means your paper will have a hard time generating excitement.

So there’s not much we can do in general about content. All scientists try to do the most engaging and life-changing research they can, but only a few will succeed, and that’s okay. (And some scientists adopt a strategy of publishing wrong or misleading content in order to generate excitement, which, well, is a rather obvious bad idea.)

Awful style

Style is somehow both the least important and the most important part of writing.

It’s the least important because it rarely is the reason we read anything. Except for some entertainment,[6: And even then! There’s some intellectual pleasure to be gleaned from looking at the form of a poem, but it rarely is the top reason we like poetry and songs.] we pick what to read based on the contents, whether we expect to learn new things or be emotionally moved. Good style makes it easier to get the stuff, but it’s just a vehicle for the content.

And yet style is incredibly important because without good style (or, as per the transportation analogy, without a functioning vehicle), a piece of writing will never get anywhere. You could have the most amazing topic with excellent content — if it’s badly written, if it’s a chore to read, then very few people will read it.

Scientific papers suck at style.

(Quick disclaimer: As we’re going to discuss below, this isn’t the fault of any individual scientist. It’s a question of culture and social norms.)

Anyone who’s ever read anything knows that long, dense paragraphs aren’t enjoyed by anyone. Yet scientific papers somehow consist of nothing but long and dense paragraphs.[7: That’s not to say giant paragraphs are always bad; they serve a purpose, which is to make a coherent whole out of several ideas, and they can be written well. But often they aren’t written well, and sometimes they’re messy at the level of ideas. As a result, they often make reading harder, for no gain.] Within the paragraphs, too many sentences are long and winding. The first person point of view is often eschewed in favor of some neutral-sounding (but not actually neutral, and very stiff) third person passive voice. The vocabulary tends to be full of jargon. The text is commonly sprinkled with an overabundance of AAAs,[8: Acronyms And Abbreviations, an acronym I just made up for illustrative purposes.] even though they are rarely justified as a way to save space in this age where most papers are published digitally. Citations, which are of course a necessity, are inserted everywhere, impeding the flow of sentences.

Here’s an example, selected at random from an old folder of PDFs from one of my master’s projects back in the day. Ironically, it discusses the fact that some methods in evolutionary biology are applied incorrectly because… it’s hard to extract the info from long, technical papers.[9: Here’s the original paper, which by a stroke of luck for me, is open access and shared under a Creative Commons license.]

Don’t actually read it closely! This is just for illustration. Skim it and scroll down to the end to keep reading my essay.

Most models of trait evolution are based on the Brownian motion model (Cavalli-Sforza & Edwards 1967; Felsenstein 1973). The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species (Hansen 1997; Butler & King 2004). OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations (Cooper et al. 2016b). For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes (Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013). However, the OU model has several well-known caveats (see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014). For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses (the median number of taxa used for OU studies is 58; Cooper et al. 2016b). Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process (Boettiger, Coop & Ralph 2012; Pennell et al. 2015). Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model (e.g. Hansen 1997; Hansen & Orzack 2005), but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies.

This paragraph is not good writing by any stretch of the imagination.

First, it’s a giant paragraph.[10: Remarkably, it is the sole paragraph in a subsection titled “Ornstein-Uhlenbeck (Single Stationary Peak) Models of Traits Evolution,” which means that the paragraph’s property of saying “hey, these ideas go together” isn’t even used; the title would suffice.] It contains two related but distinct ideas, which are that (1) the Ornstein–Uhlenbeck model can be useful, and that (2) it has caveats. Why not split it? Speaking of which, the repetition of the “OU” acronym is jarring. It doesn’t even seem to serve a purpose other than to shorten the text a little bit. It’d be better to spell “Ornstein-Uhlenbeck” out each time, and try to avoid repeating it so much.

The paragraph also contains inline citations to an absurd degree. Yes, I’m sure they’re all relevant, and you do need to show your sources, but this is incredibly distracting. Did you notice the following sentence when reading or skimming?

However, the OU model has several well-known caveats.

It’s a key sentence to understand the structure of the paragraph, indicating a transition from idea (1) to idea (2), but it is inelegantly sandwiched between two long enumerations of references:

(Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013). However, the OU model has several well-known caveats (see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014).

Any normal human will just gloss over these lines and fail to grasp the structure of the paragraph. Not ideal.[11: The ideal format for citations in scientific writing is actually a matter of some debate, and depends to some extent on personal preference. As a friend said: “The numbered citation style (like in Science or Nature) is really nice because it doesn’t interrupt paragraphs, especially when there are a lot of citations. But many people also like to see which paper/work you are referencing without flipping to the end of the article to the references section.”]

I admit I am biased towards prioritizing reading flow, but it’s true that having to match numbers to references at the end of a paper can be tedious. In print and PDFs, I’d be in favor of true footnotes (as opposed to endnotes), so that you don’t have to turn a page to read them. In digital formats, I’d go with collapsible footnotes (like the one you’re reading right now if you’re on my blog). Notes in the margin can also work, either in print or online. Alexey Guzey’s blog is a good example.

And if mentioning a reference is useful to understand the text, the writer should simply spell it out directly in the sentence.

Finally, there is quite a bit of specialized vocabulary that will make no sense to most readers, such as “niche conservatism” or “clade-wide stabilising selection.” That may be fine, depending on the intended audience; knowing what is or isn’t obvious to your audience is a difficult problem. I tend to err on the side of not including a term if a general lay audience wouldn’t understand it, but that’s debatable and dependent on the circumstances.

Now, I don’t mean to pick on this example or its authors in particular. In fact, it isn’t even a particularly egregious example; many papers are worse! (Interestingly, the more I examined the paragraph in depth, the less I thought it was bad writing. This is because, I think, becoming familiar with something makes us see it in a more favorable light, which is also why authors are often blind to the flaws in their own writing. But by definition, a paper is written for people who aren’t familiar with it.) Still, as we saw, the paragraph is far from being a breeze to read. Bad, boring style is so widespread that even “good” papers aren’t much fun.

Yet science can definitely be fun. Some Scott Alexander blog posts manage to make me read thousands of (rigorous!) words about psychiatric drugs, thanks to his use of microhumor. And then, of course, there’s an entire genre devoted to “translating” scientific papers into pleasant prose: popular science. Science popularizers follow different incentives than scientists: their goal is to attract clicks, so they have to write in a compelling way. They take tedious papers as input, and then produce fun stories as output.

There is no fundamental reason why scientists couldn’t write directly in the style of science popularizers. I’m not saying they should copy that exactly — there are problems with popular science too, like sensationalism and inaccuracies — but scientists could at least aim at making their scientific results accessible and enjoyable to interested and educated laypeople, or to undergraduate students in their discipline. I don’t think we absolutely need a layer of people who interpret the work of scientists for the rest of us, in a way akin to the Ted Chiang story about the future of human science.

Topic and content are hard to solve as a general problem. But I think we can improve style. We can create better norms. I have a crazy idea to do that, which we’ll get into at the end of the post, but first, we need to discuss the reasons behind the dismal state of current scientific style.

Why is scientific style so bad?

There are many reasons why science papers suck at style. One is that the people writing them, scientists, aren’t selected for their writing ability. They have a lot on their plate already, from designing experiments to performing them to applying for funding to teaching classes. Writing plays an integral part in the process of science, but it’s only a part — unlike in, say, fields like journalism or literature.

Another problem is language proficiency. Almost all science (at least in the more technical fields) today is published in English, and since native English speakers are a small minority of the world’s population, it follows that most papers are written by people who have only partial mastery over the language. You can’t exactly expect stellar style from a French or Russian or Chinese scientist who is forced to publish their work in a language that isn’t their own.

Both these reasons are totally valid! There’s no point blaming scientists for not being good writers. It’d be great if all scientists suddenly became masters of English prose, but we all know that’s not going to happen.

The third and most important reason for bad style is social norms.

Imagine being a science grad student, and having to write your first Real Science Paper that will be submitted to a Legit Journal. You’ve written science stuff before, for classes, for your undergrad thesis maybe, but this is the real deal. You really want it to be published. So you try to understand what exactly makes a science paper publishable. Fortunately, you’ve read tons of papers, so you have absorbed a lot of the style. You set out to write it… and reproduce the same crappy style as all the science papers before you.

Or maybe you don’t, and you try to write in an original, lively manner… until your thesis supervisor reads your draft and tells you you must rewrite it all in the passive voice and adopt a more formal style and avoid the verb “to sparkle” because it is “non-scientific.” (The “sparkle” example happened to a friend of mine recently.)

Or maybe you have permissive supervisors, so you submit your paper written in an unconventional style… and the journal’s editors reject it. Or they shrug and send it to peer review, from whence it comes back with lots of comments by Reviewer 2 telling you your work is interesting but the paper must be completely rewritten in the proper style.

Who decides what style is proper? No one, and everyone. Social norms self-perpetuate as people copy other people. For this reason, they are extremely difficult to change.

As a scientist friend, Erik Hoel, told me on Twitter:

There is definitely a training period where grad students are learning to write papers (basically a “literary” art like learning how to write short stories) wherein you are constantly being told that things need to be rephrased to be more scientific

And of course there is. Newbie scientists have to learn the norms and conventions of their field. Not doing so would be costly for their careers.

The problem isn’t that norms exist. The problem is that the current norms are bad. In developing its own culture, with its traditions and rituals and “ways we do things,” science managed to get stuck with this horrible style that everyone is somehow convinced is the only way you can write and publish science papers, forever.

It wasn’t always like this. If you go back and look at science papers from the 19th century, for instance, you’ll find a rather different style, and, dare I say, a more pleasant one.

I know this thanks to a workshop I went to in undergrad biology, almost a decade ago. Prof. Linda Cooper of McGill University (now retired, as I found out when trying to contact her while writing this post) showed us a recent physics paper, and a paper written in 1859 by Carlo Matteucci about neurophysiology experiments in frogs, titled Note on some new experiments in electro-physiology. (At least I think this is it; my memory of the workshop is very dim. Dr. David Green, local frog expert, helped me find this paper, and it fits all the details I can remember.) You might expect very old papers to be difficult to parse — but no! It’s crystal clear and in fact rather delightful. Here’s a screenshot of the introduction:

It isn’t quite clickbait, but there’s an elegant quality to it. First, it’s told in first person. Second, there’s very little jargon. Third, we quickly get to the point; there’s no lengthy introduction that only serves as proof that you know your stuff. Fourth, there are no citations. Okay, again, we do want citations, but at least we see here that avoiding them can help the writing flow better. (No citations also means that you can’t leave something unexplained by directing the reader to some reference they would prefer not to read. Cite to give credit, but not as a way to avoid writing a clear explanation.)

By contrast, the contemporary physics paper shown at the workshop was basically non-human-readable. I can’t remember what it was, which is probably a good thing for all parties involved.

In the past 150 years, science has undoubtedly progressed in a thousand ways; yet in the quality of the writing, we are hardly better than the scientists of old.

I want to be somewhat charitable, though, so let’s point out that some things are currently done well. For example, I think the basic IMRaD structure — introduction, methods, results, and discussion — is sound. (Although one could argue that IMRaD is perhaps too often followed without thought, like a recipe.) The systematic use of abstracts, and the growing tendency to split them into multiple paragraphs, is an excellent development.

There’s been a little bit of progress — but we should be embarrassed that we haven’t improved more.

What happened? It’s hard to say. Some plausible hypotheses, all of which might be true:

  • In the absence of a clear incentive to maximize the number of readers, good style doesn’t develop. The dry and boring style that currently dominates is simply the default.
  • Everyone has their own idea of what good scientific writing should be, and we’ve naturally converged onto a safe middle ground that no one particularly loves, but that people don’t hate enough to change.
  • The current style is favored because it is seen as a mark of positive qualities in science such as objectivity, rigor, or detachment.
  • The style serves as an in-group signal for serious scientists to recognize other serious scientists. Put differently, it is a form of elitism. This might mean that for the people in the in-group, poor style is a feature, not a bug. (Just like unpleasant bureaucracy acts as a filter so that only the most motivated people manage to pass through the system.)
  • Science is too globalized and anglicized. There is only one scientific culture, so if it gets stuck on poor norms, there isn’t an alternative culture that can come to the rescue by doing its own thing and stumbling upon better norms.

It’s possible that these forces are too powerful for anyone to successfully change the current norms. Maybe most scientists would think I’m a fool for wanting to improve them. But it does seem to me that we should at least try.

How can we forge better norms?

First, I want to emphasize that the primary goal of scientific writing is communication among researchers, not between researchers and the public. Facilitating this communication, and lowering the barriers to entry into hyperspecialized fields (for students, and for scientists in adjacent fields), are the things I want to optimize for.

However, I do think there are benefits to making science more accessible to non-specialists — scientists in very different fields, academics outside science, journalists, teachers, politicians, etc. — without having to rely on the layer of popular science. So while we won’t optimize for this directly, it’s worth improving it along the way if we can.

With that in mind, how can we improve the social norms for style across all of scientific writing?

Here’s one recipe for failure. Come up with a new style guide, and share it with grad students and professors. Publish op-eds and give conference talks on your new approach. Teach writing classes. In short, try to convince individual scientists. Then watch as they just write in the old style, because it’s all they know and there’s no point in making it harder for themselves to publish their papers and get recognition.

Science is an insanely competitive field. Most scientists, especially grad students, postdocs and junior professors, are caught in a rat race. They will not want to reduce their chances of publication, even if they privately agree that scientific style should be improved.

(Not to mention, many have been reading and writing in that style for so long that they don’t even see it as problematic anymore.)

By definition, social norms are borderline impossible to change if you’re subject to them. That means that the impulse to change must come from someone who’s not subject to them. Either an extremely well established person, i.e. somebody famous enough to get away with norm-defying behavior, or an outsider — i.e. somebody who just doesn’t care.

Well, I don’t have a Nobel Prize, but I gave up on science years ago and I have zero attachment to current scientific norms, so I think I qualify as an outsider.

But what can an outsider do, if you can’t convince scientists to change? The answer is: do the work for them. Create something new, better, that scientists have an incentive to copy.

Here’s a sketch of how that could be done. Mind you, it’s very much at the stage of “crazy idea”; I don’t know if it would work. But I think there’s at least a plausible path.

The Plan

1. Found a new journal

Let’s call it the Journal of Actually Well-Written Science. I’ll make an exception to my anti-abbreviation stance and call it JAWWS because I just realized it’s a pretty cool and memorable one.

The journal would have precise writing guidelines. Those guidelines are the new norms we’ll try to get established. They would be dependent on personal taste to some extent, but I think it’s possible to come up with a set of guidelines that make sense.

Here’s some of what I have in mind:

  • If it’s a choice between clarity and brevity, prioritize clarity.
  • Split long paragraphs into shorter ones.
  • Use examples. Avoid expressing abstract ideas without supporting them with concrete examples.
  • Whenever possible, place the example before the abstract idea to draw the reader in.
  • Avoid abbreviations and acronyms unless they’re already well-known (e.g. DNA). If you must use or create one, make sure it’s effortless for the reader to remember what it means.
  • Allow as little space as possible for references while still citing appropriately. Of course, it’s fine to write a reference in full if you want to draw attention to it. Also, don’t use a citation as a way to avoid explaining something.
  • Write in the first person, even in the introduction and discussion. Your paper is being written by you, a human being, not by the incorporeal spirit of science.
  • Don’t hesitate to use microhumor; it is often the difference between competent and great writing. My mention of the incorporeal spirit of science is an example of that.
  • Avoid systematic use of the passive voice.
  • Avoid ornamental writing for its own sake. Occasionally, a good metaphor can clarify a thought, but be mindful that it’s easy to overuse them.
  • Remember that the primary goal of your paper is to communicate methods or results. Always keep the reader in mind. And make that imaginary reader an educated nonspecialist, i.e. you whenever you read papers not directly relevant to your field.

In the appendix, I show a multistep application of this to the paragraph I quoted above as an example.

Again, we’re not trying to reinvent popular science writing. We will borrow techniques and ideas from it, and try to emulate it insofar as it’s good at communicating its content. But the end goal is very different — JAWWS is intended not to entertain, but to publish full, rigorous methods and results that can be cited by researchers. I want it to be a new kind of scientific journal, but a scientific journal nonetheless.

2. Hire great writers

JAWWS will eventually accept direct submissions by researchers. But as a new journal, it will have approximately zero credibility at first. So we will start by republishing existing papers that have gone through a process of rewriting by highly competent science communicators.

Finding those communicators might be the hardest part. We need people who can understand scientific papers in their current dreadful state, but who haven’t already accepted the current style as inevitable. And we need them to be excellent at their job. If we rewrite a paper into something that’s no better than the original — or, worse, if we introduce mistakes — then the whole project falls apart.

On the other hand, tons of people want to be writers in general and science writers in particular, so there is some hope.

3. Pick papers to rewrite

It’s unclear how many science papers are published each year, but a reasonable estimate is quite a lot. I saw the 2,000,000 per year figure somewhere; I have no idea if it’s accurate, but even if it’s off by an order of magnitude or two, that’s still a lot.

How should JAWWS select the papers it rewrites?

I’m guessing that one criterion will be copyright status. I’m no intellectual property specialist, so I have no idea if it’s legal to rewrite an entire article that’s protected by copyright. Fortunately, there are many papers that are released with licenses allowing people to adapt them, so I suggest we start with those. Another avenue is to rewrite papers by scientists who like this project and grant us permission to use their work.

Then there are open questions. Should JAWWS focus on a particular field at first? Should it rewrite top papers? Neglected papers? Particularly difficult papers? Randomly selected papers? Should it focus more on literature reviews, experimental studies, meta-analyses, or methods papers? Should it accept applications by scientists who’d like our help? We can settle these questions in due time.

Crucially, the authors of a JAWWS rewritten paper will be the same as those of the paper it is based on. When people cite it, they’ll give credit to the original authors, not the rewriter, whose name should be mentioned separately. This also means that the original authors should approve the rewritten paper, since it’ll be published under their names. (My friend Caroline Nguyen makes an important point: the process must involve very little extra work for scientists who are already burdened with many tasks. Their approval could therefore be optional — i.e. they can veto, but by default we assume that they approve. It might also be possible to involve a writer earlier in the research process, so that they are in close contact with a team of scientists and are able to publish a JAWWS paper at the same time as the scientists publish a traditional one. In all cases, we can expect the first participating researchers to be the ones who agree with the aims of our project and trust that JAWWS is a good initiative.)

4. Build prestige over time

If the rewritten papers are done well, then they’ll be pleasant to read. If they’re pleasant to read, more people will read them. If more people read them, then they’re likely to get cited more. If they get cited more, then they will have more impact. If JAWWS publishes a lot of high-impact papers, then JAWWS will become prestigious.

There’s no point in aiming low — we should try making JAWWS as prestigious as, if not more than, top journals like Nature, Science, or Cell. (Is this a good goal? Wouldn’t it be better to just try to build something different? Well, I see this project kind of like Tesla for cars: Tesla isn’t trying to replace cars with something else, it’s just trying to make cars much better. So I would like JAWWS to be taken as seriously as the prestigious journals — while being an improvement over them. The danger in building a new thing is that you just create your little island of people who care about style while the rest of science is still busy competing for papers in the prestigious journals. That wouldn’t be a good outcome.)

Of course, that won’t happen overnight. But I don’t see why it wouldn’t be an achievable goal. And even if we don’t quite get there, the “aim for the moon, if you fail you’ll fall among the stars” principle comes into play. JAWWS can have a positive influence even if it doesn’t become a top journal.

Along the way, JAWWS will become able to accept direct submissions and publish original science papers. It might also split into several specialized journals. At this point we’ll be a major publishing business!

5. Profit!

I don’t know a lot about the business side of academic publishing, but my understanding is that there are two main models:

  • Paywall: researchers/institutions pay to access the contents of the journal.
  • Open-access: researchers/institutions pay to publish content that is then made accessible to everyone.

For JAWWS, a paywall model might make sense, since the potential audience would be larger than just scientists. But it would run contrary to the ideal of making science accessible to as many people as possible. Open-access seems more promising, and it feels appropriate to ask for a publication fee as compensation for the work needed to rewrite a paper. But that might be hard to set up in the beginning when we haven’t proven ourselves yet.

Maybe some sort of freemium model is conceivable, e.g. make papers accessible on a website but provide PDFs and other options to subscribers only.

Another route would be to set up JAWWS as a non-profit organization. An example of a journal that is also a non-profit is eLife. This might help with gaining respectability within some circles, but my general feeling is that profitability is better for the long-term survival of the project.

6. Improve science permanently

No, “profit” is not the last step in the plan. Making money is great, but we can and should think bigger. The end goal of this project is to improve science writing norms forever.

If JAWWS becomes a reasonably established journal, then other publications might copy its style. That would be very good and highly encouraged. But more importantly, it would show that it’s possible to change the norms for the better. Other journals will feel more free to experiment with different formats. Scientists will gain freedom in the way they share their work. Maybe we can even get rid of other problems like the ones associated with peer review while we’re at it.

One dark-side outcome I can imagine is that the norms are simply destroyed, we lose the coherence that science currently has, and then it becomes harder to find reliable information. To which I respond… that I’m not sure that it would be worse than the present situation. But anyway, it seems unlikely to happen. There will always be norms. There will always be prestigious people and publications that you can copy to make sure you write in the most prestigious style. We are a very mimetic bunch, after all.

And if we succeed… then science becomes fun again.

Fewer young researchers will drop out (like I did). Random curious people will read science directly instead of through sensationalist popularizers. It’ll be easier for the public (who pays for most of science, after all) to keep informed about the latest research. Maybe it’ll even encourage more kids to get into the field. If everything goes well, we’ll get one step closer to a new golden age of humanity.

Okay, maybe I’m getting ahead of myself. But then again, like I said, there’s no point in aiming low.

To repeat, this is still a crazy idea. It did get less crazy after I finished writing the above plan, though. I have a feeling it might really work.

But it’s very possible I’m wrong. Maybe there are some major problems I haven’t foreseen. Maybe the entire scientific establishment will hate me for trying to change their norms. Maybe it’s just too ambitious a project, and it will fail if somebody doesn’t devote themselves to it. I don’t know if I should devote myself to it.

So, I’d really love for this post to be shared widely, and for readers — whether professional scientists, writers, students, science communicators, or really anyone who’s interested in science — to let me know what they think. Like science as a whole, this should be a collaborative effort.


Further reading


Thanks to Khalis Afnan, Dan Stern, Caroline Nguyen, Mahwash Jamy, Daniel Golliher, and Ulkar Aghayeva for feedback on this piece.


Of Emoji and Hieroglyphs

Emoji are pictograms that are used to add nuance and meaning to electronic written text. They were invented in Japan in the 1990s and are now widely used across the world. Random examples: 🤾‍♂️ 😒 🦑 🔊 💚

Egyptian hieroglyphs are characters, mostly based on real objects, that were used to write the Ancient Egyptian language. They were invented around the 32nd century BC and fell into disuse by Late Antiquity. Random examples: 𓊛 𓋊 𓃕 𓌗 𓎁 (If you only see squares, that means you need to install a font that supports those Unicode characters. Most browsers will display them automatically, but I’m not sure about the details.)

There’s an obvious parallel to be drawn between the two, which multiple people have pointed out, usually with cries of “Thousands of years of language evolution and we’re back to using pictograms!” Even I tweeted about it a few months ago:

As Twitter threads go, this was a reasonably popular one, which means there was some value in investigating the links between emoji and hieroglyphs. But maybe not enough to write more than a few tweets, and so the matter was put to rest.

Then I read Clo’s excellent piece on emoji and our relationship with them, and it made me want to revisit the topic. So I embarked on a small and silly side project.

The result is being released today. It is a browser extension. It is called Emoji to Hieroglyphs. It replaces the former with the latter whenever possible as you browse the web. It’s stupid and fun. And it can be downloaded here.

How it works

Emoji to Hieroglyphs is based on the famous cloud-to-butt extension — which replaces “the cloud” with “your butt” all over the internet — because I don’t really know any JavaScript so it was simpler to steal code from somewhere. Good thing that cloud-to-butt is released under the “Do What The F*ck You Want To Public License”, which I’m also using for Emoji to Hieroglyphs.

The extension searches text in web pages for certain emoji, and replaces them with the closest hieroglyphic visual equivalent I could find. Here are some examples:

🤸 → 𓀡

✍️ → 𓃈

🐇 → 𓃹

⛵ → 𓊝

(Of course, the extension needs to be uninstalled for these examples to make sense.)
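Under the hood, the idea is as simple as it sounds. Here’s a minimal sketch of the core substitution step, with a toy three-entry table; the table and function names are illustrative, not the extension’s actual code:

```javascript
// Illustrative sketch: a lookup table from emoji to hieroglyphs,
// applied to a string. The real extension has over 300 entries and
// runs this over every text node on the page.
const EMOJI_TO_HIEROGLYPH = {
  "🤸": "𓀡",
  "🐇": "𓃹",
  "⛵": "𓊝",
};

// Replace every mapped emoji in a string with its hieroglyph.
function hieroglyphify(text) {
  let out = text;
  for (const [emoji, glyph] of Object.entries(EMOJI_TO_HIEROGLYPH)) {
    out = out.split(emoji).join(glyph); // replaces all occurrences
  }
  return out;
}
```

The cloud-to-butt approach it borrows from works the same way: walk the page’s text nodes (e.g. with a DOM TreeWalker) and run each one through a substitution function like this.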

Not all emoji have a hieroglyphic equivalent. As of today, there are 3,521 emoji in Unicode 13.1, but only 1,071 hieroglyphs. A lot of the extra emoji are things that didn’t exist in Ancient Egypt, such as soccer ⚽, helicopters 🚁, Japan 🗾, or jack-o-lanterns 🎃. Many others represent something that did exist along the banks of the Nile, but that the Egyptians didn’t bother making a hieroglyph for, e.g. skulls 💀, grapes 🍇, or crabs 🦀. I assume the Ancient Egyptians had emotions, but there aren’t any hieroglyphs to represent them directly, so smileys such as 😄, 😍, 🤯, or 🤑 are also not affected by my extension.

Not all hieroglyphs have an emoji equivalent, either. Many are just too abstract, like 𓊖, which is supposed to mean “village.” Several others are combinations, like 𓆲, combining an owl and a branch; I could’ve used it to replace 🪵🦉 and 🦉🪵, and indeed I did this for a few combos, but usually that’s just not very interesting. A few hieroglyphs represent things that the Unicode Consortium has prudishly decided not to depict as emoji, such as breasts or phalluses. (Ancient Egyptian has three hieroglyphs for the penis: 𓂸, 𓂹 (phallus combined with cloth), and 𓂺 (phallus with emission). I considered replacing the eggplant emoji 🍆 with 𓂸, but then I decided it’d be confusing and offensive for people using it as, uh, an actual eggplant.) And a lot are just too specific to Ancient Egypt. For instance, there are regrettably not yet emoji for “pyramid” (although I did use the pyramid hieroglyph to replace the Tokyo Tower emoji 🗼, because why not), “mummy-shaped god,” “crocodile on shrine,” or “human-headed bird with bowl with smoke.”

𓉴 𓁰 𓆋 𓅽

Maybe in Unicode 14.

I did manage to create more than 300 mappings, not counting all the skin tone and gender emoji variations, which I have for the most part merged together. Everyone is an Egyptian in my extension! Also, almost everyone is male, because there are only a few specifically female hieroglyphs, usually related to pregnancy or child rearing. Don’t blame me, blame the Ancients.
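Mechanically, the skin-tone merging can be done by stripping the Unicode Fitzpatrick modifiers (U+1F3FB through U+1F3FF) before looking an emoji up, so all five tone variants reduce to the same base character. A sketch of one way to do it, not necessarily how the extension itself handles it:

```javascript
// Strip Unicode skin-tone (Fitzpatrick) modifiers, U+1F3FB..U+1F3FF,
// so that e.g. "👋🏿" and "👋🏻" both reduce to the base emoji "👋"
// and can share a single hieroglyph mapping.
function stripSkinTones(text) {
  return text.replace(/[\u{1F3FB}-\u{1F3FF}]/gu, "");
}
```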

The most affected emoji categories are people (except smileys), animals, plants, and a bunch of random objects such as containers or bread-like foods.

Here’s a screenshot from Emojipedia’s list of people emoji, modified with the extension:

I should note that I created mappings only based on the visual appearance of the symbols. The word “doctor” in Ancient Egyptian is written with three glyphs, 𓌕𓏌𓀃 (the arrow should be above the pot, but I can’t do that in linear text), but I didn’t map the emoji 🧑‍⚕️ to that combination, since it wouldn’t be very evocative. Such a mapping would be more akin to a translation, which isn’t the goal here.

On the other hand, not all visual mappings are as obvious as 🐘 to 𓃰. Consider 𓆳, which is supposed to be a palm branch. Since there is no palm tree hieroglyph, I used the palm branch to replace the palm tree emoji.

🌴 → 𓆳

The link may not be crystal clear to users, but I included it anyway in the interest of having as many mappings as possible. Here are a few other examples where the emoji and hieroglyphs do represent the same object, but where the resemblance isn’t that strong:

🔥 → 𓊮

🏠 → 𓉐

💩 → 𓄽

Conversely, some mappings are just based on superficial resemblance. The sistrum is an ancient percussion instrument which, as you can imagine, doesn’t have a close emoji equivalent. But since it’s about music and sort of resembles a microphone, that’s what I decided to use it for. There are also “woman holding sistrum” and “man holding sistrum” hieroglyphs, so it made sense to replace the female and male singer emoji with those.

🎤 → 𓏣

👩‍🎤 → 𓁙

👨‍🎤 → 𓁋

Finally, not all mappings are 1:1. Sometimes multiple emoji together make a single hieroglyph.

🌊 → 𓈖

🌊🌊🌊 → 𓈗

And sometimes a single emoji is expressed through multiple hieroglyphs.

🏡 → 𓆭𓉐

👀 → 𓁹𓁹
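These non-1:1 mappings come with one implementation wrinkle: the multi-emoji sequences have to be tried before their single-emoji components, or 🌊🌊🌊 would come out as 𓈖𓈖𓈖 instead of 𓈗. Sorting the table’s keys by length, longest first, is one simple way to handle that. Again, a sketch rather than the extension’s actual code:

```javascript
// A few of the non-1:1 mappings mentioned above.
const MAPPINGS = {
  "🌊🌊🌊": "𓈗", // three emoji -> one hieroglyph
  "🌊": "𓈖",
  "🏡": "𓆭𓉐",   // one emoji -> two hieroglyphs
  "👀": "𓁹𓁹",
};

// Apply longer keys first, so "🌊🌊🌊" wins over three separate "🌊"s.
function rewrite(text) {
  const keys = Object.keys(MAPPINGS).sort((a, b) => b.length - a.length);
  let out = text;
  for (const key of keys) {
    out = out.split(key).join(MAPPINGS[key]);
  }
  return out;
}
```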

There are a few combinations that could be considered Easter eggs. I will not tell you which.

Overall, don’t expect a lot of consistency. This is obviously just for fun, and I hope some of you do have fun with it. I had fun making it; I even learned a few things! Which we’ll get into presently.

Some linguistics

To some, emoji mark a return to a more primitive form of language. We started out with cave paintings, then we developed pictograms (character = picture), then we got more general logograms (character = word), and then we gradually invented more symbolic forms of writing, culminating in clean phonetic alphabets with a few dozen characters. (Alphabets aren’t actually clean; they’re super redundant and inconsistent, but let’s allow this for the sake of the argument. And this is true at least in the West: Chinese has remained at the logogram stage, and there aren’t any strong reasons to think it’s inferior to alphabetic writing, which should make us dubious of claims that the evolution of written language has followed any sort of natural progress.) And now, with the advent of mind-numbing technology such as smartphones and Twitter, we’re apparently back to pictograms.

Thus joke images such as:



(Two notes about this last image: first, those mappings are terrible, and second, the image on the left isn’t even a picture of actual hieroglyphs. There isn’t a hieroglyph that looks like “#”. I don’t know where it’s from, but it’s very fake.)

Many media pieces discuss the question, and they all converge on the same point: No, emoji and hieroglyphs are not the same thing. Hieroglyphs weren’t just cute drawings to decorate Egyptian temples! They were a full-fledged writing system! A single hieroglyph, say the wigeon duck, 𓅰, could be used to represent an actual wigeon, yes, but it could also represent the idea of food, or the verb “to fatten,” and it had full phonetic value just like our letters, being used to transcribe the consonant sounds wšꜣ! (The symbol “ꜣ”, if you’re curious, is the conventional transcription of the letter aleph in Egyptology, indicating something like a glottal stop.)

Whereas emoji aren’t a writing system. They are mostly cute drawings we use to decorate our sentences. They carry meaning, and are linguistically interesting, but you can’t express arbitrary sentences with them, at least not at the moment.

Perhaps, like hieroglyphs, emoji could one day represent sounds directly. Say 🥶 = “fr”, 😇 = “en”, and 🍩 = “d”. Then 🥶😇🍩 could be used to represent the spoken word “friend,” even though the symbols have mostly nothing to do with friends. Add a ship, 🛳, and now we get a hybrid word, combining phonograms and logograms: 🥶😇🍩🛳, “friendship.” But we’re unlikely to get there, because, well, we already have symbols to represent sounds. The 26 letters of the English version of the Latin alphabet, for example. Or the > 160 symbols of the International Phonetic Alphabet, if you want more comprehensiveness. The reason the Egyptians gave phonetic value to their cute little drawings is that they were all they had.
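The rebus principle at work here is mechanical enough to sketch in a few lines of Python. To be clear, the emoji-to-sound table below is invented for this example, just like in the paragraph above; real emoji carry no phonetic value.

```python
# Hypothetical emoji "phonograms" from the example above. Real emoji
# carry no phonetic value; this table is invented for illustration.
PHONOGRAMS = {"🥶": "fr", "😇": "en", "🍩": "d"}

# A logogram: a symbol standing for a whole word.
LOGOGRAMS = {"\U0001F6F3": "ship"}  # the ship emoji, written as an escape

def read_aloud(emoji_text: str) -> str:
    """Concatenate the sound (or word) each symbol stands for."""
    return "".join(
        PHONOGRAMS.get(ch) or LOGOGRAMS.get(ch, ch)
        for ch in emoji_text
        if ch != "\ufe0f"  # ignore emoji variation selectors
    )

print(read_aloud("🥶😇🍩"))  # frend ("friend")
```

The hybrid word works the same way: appending the ship logogram yields “frendship,” mixing phonograms and logograms exactly as Egyptian writing did.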

But I want to go in a somewhat different direction than both the joke images and the serious linguistics articles.

I claim that we never actually stopped using Ancient Egyptian hieroglyphs. I claim that we’re still at the stage of using cute little drawings to represent language.

Consider the letter A, the first in the Latin alphabet. Where does it come from? The Latin alphabet is descended from the Greek one, by way of the Etruscan alphabet. So the letter A comes from the Greek equivalent, Α/α, pronounced “alpha.” But where did alpha come from?

It came from the Phoenician alphabet, whose immediate ancestor is the Proto-Sinaitic script, considered the first alphabet in the world. The Phoenicians were a coastal people of the Levant in Antiquity. Their invention of the alphabet turned out to be quite influential, since the vast majority of the world today writes in systems descended from it: Latin and Greek, but also Cyrillic (used to write Russian, among others), Arabic, Hebrew, Ge’ez (used to write Amharic and other Ethiopian languages), all of the scripts used in India and Southeast Asia, and even Mongolian. In other words, pretty much everything on this map except China, Korea, Japan, possibly Georgia, and the syllabary used for indigenous languages in northern Canada.[8: gray = Latin, teal = Cyrillic, green = Arabic; see the original source for others]

[Map: writing systems worldwide]

The equivalent to A and alpha in Phoenician is 𐤀, pronounced “aleph.” It has an equivalent in all those other scripts, such as Hebrew א (also called aleph). Okay. But where did aleph come from?

At this point we’re quite far out in the past, with the Proto-Sinaitic script having been in use from the 19th to the 15th centuries BC, so things get a bit murky. But the land of Canaan, where the script was used, is right next to Egypt. And 𐤀 kind of looks like a stylized ox head. So does A, for that matter, except upside down. Look at the math symbol ∀ (“for all”). Pretty easy to see an animal head with horns, right? And so it is commonly accepted that the letter A is descended from the Egyptian hieroglyph 𓃾.[9: Below, 𐌀 is the Etruscan (Old Italic) version. I’m not showing Greek Α/α because it would have to go between 𐤀 and 𐌀, but it looks more similar to A than to 𐌀; the actual Greek letter that led to the Etruscan version was an archaic form that is not in Unicode. For more details and more intermediate forms, see Wikipedia on the history of A.]

𓃾 → 𐤀 → 𐌀 → A/a

Yes. Each time you use the symbol A or a, which, if you write at all, probably happens dozens or hundreds of times a day, you are in fact using something that ultimately comes from the Ancient Egyptian version of “🐮”.

And all of our letters are like this! (With one exception.) Some are a bit obscure, like B, which apparently comes from the house hieroglyph:

𓉐 → 𐤁 → 𐌁 → B/b

But most others are pretty clear.

𓈖 → 𐤌 → 𐌌 → M/m

𓆓 → 𐤍 → 𐌍 → N/n

𓁹 → 𐤏 → 𐌏 → O/o

(And then, of course, the O became the many-eyed or multiocular O, whose Unicode version is “ꙮ”, in one hilarious and terrifying instance of a monk doodling something in his copy of the Orthodox Christian Bible.)

Here’s the full Latin emoji alphabet based on the hieroglyphic origins of the letters. Hang a version in your toddler’s bedroom, to thoroughly confuse him or her![10: You can notice the exception: the letter X comes from Greek Χ/χ (chi), but chi was apparently a native Greek invention and wasn’t derived from Phoenician or Egyptian hieroglyphs. So I left it as is.]




Maybe next time I’ll create an extension to turn all Latin letters into hieroglyphs or emoji. Just to confuse everyone.
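At its core, such an extension would just be character substitution. Here’s a minimal Python sketch using only the letter origins shown in this post; the mapping is deliberately partial and illustrative, not a real browser extension.

```python
# Letter-to-hieroglyph substitution, using only the correspondences
# shown above. Letters without a listed ancestor pass through unchanged.
HIEROGLYPH_ANCESTORS = {
    "A": "𓃾",  # ox head
    "B": "𓉐",  # house
    "M": "𓈖",  # water
    "N": "𓆓",  # snake
    "O": "𓁹",  # eye
}

def to_hieroglyphs(text: str) -> str:
    """Replace each letter with its hieroglyphic ancestor, if known."""
    return "".join(HIEROGLYPH_ANCESTORS.get(ch, ch) for ch in text.upper())

print(to_hieroglyphs("Moon"))  # 𓈖𓁹𓁹𓆓
```

A full version would need the other twenty-odd letters (minus X, the native Greek invention), but the principle is the same dictionary lookup.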

To conclude, emoji aren’t a return to anything. We’re still using symbols based on real objects, even if most of them aren’t recognizable anymore. Our system is a bit more advanced than the Egyptians’ — for one thing, we have vowels, they didn’t — but it isn’t fundamentally any different.

Of course, emoji do fulfill some needs — otherwise we wouldn’t use them. They are recognizable as objects and ideas, unlike our letters. They’re diverse. They’re fun. Maybe a good, complete writing system should feature small pictures to convey emotion, nuance, and humor. In a way, the Egyptians had a bit of that. Now we do too, thanks to emoji.

I would say it is a good development.

✨ Download the Emoji to Hieroglyphs extension here ✨


In Defence of Parentheses (and Footnotes)

A hierarchy of methods to add extra detail to a sentence, from “the info is super duper important” to “I kind of want to mention it but it’s definitely far from essential”:

1. No punctuation

If there’s no punctuation, a piece of information becomes integral to the sentence and seems important. Without it, we’d be missing something key. Consider:

The top hat-wearing President arrived at high noon.

We could remove “top hat-wearing,” but the picture created in your mind would be very different; it would even feel like we’re talking about another President.[1: For the sake of completeness, here’s an extra level, called level 0, which probably should have gone first. But it belongs outside the hierarchy, in my opinion. So now you get to read level 0 in this awkward spot between levels 1 and 2.]

0. Separate clauses or sentences

When two pieces of information are separated by something like a semicolon or a period, they’re given equal weight; neither is more important than the other. Consider:

“The President arrived at high noon; he was wearing a top hat.”

“The President arrived at high noon. He was wearing a top hat.”

Here, the arrival of the President isn’t more important than the top hat. The rest of the hierarchy, below, assumes the opposite: the top hat is just added detail to the main idea of the President arriving at noon.

2. Commas

Commas serve many purposes, but one of them is to break the flow of a sentence to add extra info. The break, however, is minimal. Commas are subtle, like a short breath.

The President, wearing a top hat, arrived at high noon.

Compared to the previous example, the top hat has been relegated to the status of important but not essential detail.

3. Em-dashes

Em-dashes[2: which, by the way, are the symbol “—”, not “–” and certainly not “-” or, God Most High forbid, “–”][3: hey, another digression: did you know the “em” in “em-dash” stands for the letter M, representing the width of the dash? Unlike what some poor souls mistakenly believe, it doesn’t mean “emphasis dash”.] are sort of a middle ground between commas and parentheses, and can replace either. They’re striking, but also make it obvious the info is outside the normal flow of the sentence.

The President — who was wearing a top hat — arrived at high noon.

Wow! A top hat! So striking! We could do without it — it’s clearly identified as extra detail — but the em-dashes draw attention to it. Use em-dashes if you want the reader to actually notice.

4. Parentheses

Parentheses have a similar effect but are more delicate. While the em-dash is crashing the party and getting everyone’s attention, the parenthesis is instead just lounging quietly in a corner, happy to talk to you if you want, but making no special effort to come to you.

As a friend of mine puts it, parentheses are like a mid-sentence whisper.

The President (donning his famous top hat) arrived at high noon.

A parenthetical statement is like a thread running in parallel, a branch leaving the main sentence and rejoining it later. It gives the writing a more complex structure, which of course can be either good or bad.

5. Footnotes

Ah, the footnote.[4: Also called an endnote if it’s at the end of a book.] A little asterisk, or number, or other symbol, timidly tugging at your sleeve and saying, “Hey, there’s some other info I can give you on this, but it’s kind of outside the scope of this sentence. Read it only if you truly want to… Actually, forget I said anything. It’s not that interesting anyway. If it were interesting, it’d be included in the main text with any of the above four methods, am I right? Haha, bye!”

The President arrived at high noon.[5: Intriguingly for an elected head of state in the 21st century, the President was wearing a top hat.]

Did you click the footnote? Did you feel compelled to? Or did you just feel like ignoring it, dismissing it as irrelevant information?

If you did click it, you may have noticed it was longer than the top hat info in the other examples. Footnotes give a writer more freedom. In fact, the lower we are in this hierarchy, the more freedom the writer has, because we are further removed from the main sentence. But then the reader also has more freedom — freedom to ignore the piece of information.

Is it worth using the bottom of the hierarchy, parentheses and footnotes, at all?

These methods clearly indicate that a piece of information is less important. According to style guides, the reader should be allowed to skip them without changing their understanding of the text. So the writer might as well just skip them too. Right?

As always, the answer is “it depends.”

It depends on the genre, for one thing. If you’re copywriting, you really just want to show the key info, and show it in big bold letters. No room for parentheses or footnotes.[6: Or so I assume; I’m not a copywriter.] If you’re writing fiction, digressions are likely to detract too much from the story, as you know if you’ve read a scholarly edition of something like The Odyssey that is full of endnotes telling you that translators don’t agree on how to translate οἶνοψ πόντος exactly.[7: It is usually translated as “wine-dark sea,” if you must know.] Of course, some fiction writers use them anyway. Notoriously, the novel Infinite Jest includes “388 endnotes, some with footnotes of their own.”

What you’re reading right now belongs to the age-old genre of “informal blog post with some microhumor,” for which parentheses and footnotes work very well. At least so I think. Obviously, whether a writer should use them also depends enormously on their writing style. They’re an extra color in the writer’s palette; another tool in the toolbox. You may or may not want to use them.

As the astute reader will have noticed, parentheses and footnotes tend to show up quite a lot in my own writing.[8: I am going a little overboard with this essay, but that’s on purpose.] I really like them. I think they’re a super useful tool. So I even installed a WordPress plugin to make it easy to include and read footnotes, like this.[9: I don’t actually have anything to say in this footnote. Have a bright, wonderful day!]

(I also sometimes put entire paragraphs between parentheses. Like this. An entire paragraph! This is kind of a middle ground between a simple parenthetical phrase or sentence, and a full-fledged footnote. I love these paragraphs, whether they’re mine or other writers’.)

The reason I use parentheses and footnotes aplenty is that I really like them as a reader, too.

Not everyone does, though. Some readers don’t like parentheses and actually — the nerve! — quickly skim their contents until they get to the closing bracket. Style guides will warn you of this. They always tell you to use parentheses “sparingly.” Or even avoid them altogether (source):

Because they are so jarring to the reader, parentheses should be avoided whenever possible.

If removing a parenthetical note changes the meaning of the sentence, it should not be in parentheses.

Or (source):

Some of us love to use parentheses. Unfortunately, some readers ignore anything that appears in parentheses, so don’t put important information in parentheses if you can help it.

Even those who do use parentheses and footnotes, and use them well, can feel guilty about it. Here’s Scott Alexander, in an essay about nonfiction writing advice:

I agonize a lot about where it is versus isn’t appropriate to break the flow of ideas. Sometimes I use the really ugly solution of having an entire paragraph within parentheses, as if to say “I really wanted to bring this up here, but remember it’s not actually part of the structure of this argument!”

(this is a good meta-level example. I used the word “actually” there, and I wanted to point it out as an example of what I was talking about before, but doing that would break the flow of this whole argument about how you shouldn’t break the flow of things. So, in accordance with the prophecy, into a paragraph-long set of parentheses it goes. I’m starting to think maybe I’m not the best person to be giving writing advice…)

Scott writes amazingly well and is definitely one of the best people to give writing advice. In fact, I feel a twinge of excitement whenever I get to a parenthetical paragraph in one of his essays. Where other people may think “hey, this is irrelevant, I’ll just skip it,” I think “huh, this is outside the normal flow of ideas, but he still chose to include it… Must be extra interesting!”

This is the core idea of this essay so I’ll make it bold: Parentheses and footnotes are fun because they are acknowledged digressions.

A lot of the fun in life comes from digressions — from stepping outside the bounds of the ordinary, skipping class to go on an adventure, following a Wikipedia rabbit hole instead of finishing the article you were reading. Of course, digressions can also be distractions. But when they are acknowledged with punctuation marks, we, as readers, are warned. We are free to either skip the digression, or dive straight into the part that the writer thought was so good he couldn’t bring himself to take it out.

Think about it: writers are always encouraged to cut out the boring parts. They’re also encouraged to cut out parenthetical statements, as the style guides say. So, to survive, a parenthetical statement should be the opposite of boring; otherwise the selective pressure against it would be too strong.

(Or, well, it should be. Not all writing is good, and boring digressions happen just as superfluous adverbs and stale metaphors do.)

Also, as I mentioned earlier, the inherently skippable nature of acknowledged digressions means that the writer gets more freedom. More freedom tends to mean more risk-taking. And more risk-taking, in writing as elsewhere, often means better rewards.

Footnotes, especially, allow a writer to experiment. What’s the worst that can happen? That the reader just goes back to the body of the text? That’s… perfectly fine, right? So, have fun: Write in a totally different style.[10: Indeed, one of the crucial sources of enjoyment in the consumption of literate material is so-called “code-switching” between linguistic registers ranging from the familiar or, even, the vulgar, to the formal. It is commonly believed that a single piece of written work should utilize a single type of phraseology; yet the juxtaposition of words pertaining to greatly differing registers may augment the vitality of the discourse to such an extent that readers may become elated.] Add colors, different fonts, emoji.[11: 🌈 color sometimes looks unserious but it can be so much fun! 🌈] Include super technical detail.[12: My footnote plugin is called Modern Footnotes. To insert a footnote, I add the tags [ mfn ] and [ /mfn ] (without spaces) to the text of my essay in WordPress.] Attempt to be funny.

That last one is important. Being funny is a great quality to have in most writing, but it’s risky business — a joke always has a chance of falling flat. It’s not easy to write comedy or, as I try to do, microhumor;[13: What is microhumor? It is tiny dashes of writing that will not make the reader laugh, exactly, but will bring a smile to his or her beautiful face. It can be done in many ways: exaggeration, hedge words, unusual juxtaposition, etc. I subscribe to Scott Alexander’s view that microhumor is “maybe the number one thing that separates really enjoyable writers from people who are technically proficient but still a chore to read.”] but I do find it easier to write it in footnotes, for some reason. Now I realize what the reason is: it’s the freedom. (So don’t skip my footnotes. My best bits of writing are often hidden there!)[14: This parenthetical sentence used to be a footnote, but then I realized that people who don’t click footnotes are its intended audience, so there.]

As an example, consider these footnotes from Shea Serrano’s Conference Room, Five Minutes: Ten Illustrated Essays About The Office:

Now these aren’t that funny in isolation, and I’m not going to copy-paste the entire page (sorry), but I really like Footnote #9: “LOL.”

Another writer who does digressions well is Tim Urban from Wait But Why. Tim is a master at this. He has two types of footnotes: gray squares for boring ones, such as citations, and blue circles for interesting stuff that didn’t make it into the body of the essay. This distinction is an excellent way to reinforce the positive signal on the fun footnotes.

Tim also invented a custom sixth level to the digression hierarchy: the blue box. Which can be nested through the use of the bluer box. Here’s a screenshot from his post about AI:


Writers can get even more creative than that. Online, especially, there’s no shortage of devices you can use to structure a piece of writing. Julian Shapiro, for instance, has collapsible sections in his guides, such as this one on writing first drafts. They’re collapsed by default, thus working similarly to footnotes.

Of course, you can also just add information by providing a link. Or you could embed a YouTube video. At this point, however, we’re leaving the world of pure writing and entering the wider world of multimedia content.

Scott Alexander, in the writing advice essay I quoted above, suggests that it’s good to break the flow of the writing to provide variety. You can do this in many ways — bold, italics, images, links, quotations.[15: By contrast with parentheses and footnotes, I don’t actually like to read quotations that much, and I will often skim them. I think it’s because unless the author is self-quoting, they tend to be written in a different style from the author’s, and quite often less to my taste. After all, I selected the writer I’m reading, but had no say in selecting the author of the quote. (Yes, I’m aware that I used them several times in this essay. They are useful for a writer. In my defence, Scott Alexander writes better than I do.)] Parentheses, footnotes, and Tim Urban’s blue boxes are simply extra tools for this purpose. Extra colors on the palette.

Don’t use them if you don’t like them. Don’t read them if you prefer not to. But remember that they can be, for readers and writers alike, a lot of fun.


Thanks to Rishi, Alicia, Kritika, Kushaan and Tamara for the original idea and comments on the first draft.


The Four Shapes of History

History, which contains literally everything that has happened to human civilization, is the data of the social sciences.

If you want to understand virtually anything that’s going on today, whether in politics or the economy or cultural shifts, it’s useful to look at the data. At best, you’ll recognize patterns and make valid predictions. At worst, what is happening today is so new that the data isn’t useful — but to know that, you need to know the data.

There is a complication.

Think about everything that’s happened so far in your life. It would probably be a long story if you were to tell it. Now consider that we’re approaching a population of 8 billion people on Earth. That’s a lot of life events. And now let’s add everyone who’s ever lived. World population used to be far lower, but it’s still estimated that there have been 105 billion human births in the past 50,000 years. 105 billion lives’ worth of data.[1: To be fair, a lot of those unfortunately died as infants, so they don’t add that many life events to the total.] Only a small fraction made it into recorded history, but that’s still a lot.

As The Scholar’s Stage puts it:

The problem with history is that it is too big. It is impossible to get a fine grained picture of every people and era on the planet. There is just too much of it.

So what can we do? Well, there’s no option but to compress. Make a simplified picture of the full dataset. Something that’s easier to handle.

This compression can take any of four basic shapes. To understand them, we first need to look at the two dimensions of history.

The first dimension is depth, or verticality, or time. It describes the changes to any single thing in the world across time.

The second dimension is breadth, or horizontality, or space. It describes the state of the world at any given time point. (This includes the usual three spatial dimensions, but it’s much more comprehensive than that. We’re in fact collapsing the full multidimensional complexity of the world into a single dimension. Moving along this dimension might mean moving across geographical space, but it could also mean moving across conceptual space.)

Thus we can draw the entirety of human history like this:[2: Where should the chart stop? When does history begin? The traditional definition is that history starts around 3000 BC, when the first civilizations arose, and was preceded by prehistory. But the word ‘history’ is often used in a broader sense, going back e.g. to the appearance of anatomically modern humans 300,000 years ago, or even to the birth of the universe about 13.7 billion years ago. It doesn’t really matter for this post, though. I’ll just go with “the far past.”]

When we compress history, we take a slice out of this two-dimensional pie. Slices come in many shapes, but we can classify them into four basic types.

  • a zero-dimensional point, i.e. a single event
  • a 1D vertical line, i.e. across time
  • a 1D horizontal line, i.e. across space
  • a rectangle across both dimensions (or some other 2D shape).

Let’s add a few examples to the chart above:[3: It was pointed out to me that the invention of writing happened multiple times, yet I put a single point. Very true. Let’s pretend that point is “Invention of writing in Sumeria.”]

In practice, all of these are two-dimensional shapes to some extent, just like there can’t be a physical object in our world with a thickness of zero. But compression usually removes most of either the breadth, the depth, or both, leaving us with what is essentially a point or a line. In fact, you need to squint really hard to grasp a full two-dimensional slice of history, as we’ll see below.

Let’s start with the most basic shape: the point.

Zero-dimensional history

A point, in math, is a zero-dimensional object: it has no length in any dimension. Looking at history in a zero-dimensional way means looking at a specific point in space and time. In other words, a single historical moment.

The canonical way to visualize history in 0D is art. For instance, this painting from 1770 by Benjamin West:

[Painting: The Death of General Wolfe, by Benjamin West]

The painting represents the death of General James Wolfe, the commander of a British army that fought at Quebec City in 1759. Wolfe was victorious in battle, but died from gunshot wounds. The painting captures this single moment (although in a semi-mythologized way).

In this age, of course, photography is a more common way to capture a moment. Here is French president Charles de Gaulle addressing a crowd in Montreal in 1967:

[Photo caption: In France, though many were sympathetic to the cause of Quebec nationalism, de Gaulle’s speech was criticized by much of the media as a breach of protocol.]

Now, unless you have good prior knowledge of Quebec history, those pictures may not have informed you very much. They lack context. And that’s exactly the problem with zero-dimensional history: a true point contains very little data. A single painting can’t capture the full significance of the Battle of the Plains of Abraham, in which the fate of the French colony of Canada was changed forever, setting up its annexation by Britain four years later. A photograph can’t tell the tale of de Gaulle’s unorthodox visit to the province of Quebec during the Quiet Revolution, or the impact his “Vive le Québec libre” exclamation had in a place that was yearning for national recognition and emancipation.

But even if we add a bit of context, we can still effectively (but not strictly) remain in 0D, compared to the full size of history.

Say you’re really interested in the Battle of Vimy Ridge, which pitted Canada, as part of the British Empire, against Germany between April 9 and April 12, 1917. There’s a tiny bit of depth (four days, perhaps more if we’re considering the background and aftermath), and some breadth (you need to know about the general world situation in 1917, such as the fact that the British Empire and Germany were at war). But if you wanted, you could spend years learning all about that battle and only that battle. The preparations, the tactics, the operations, the follow-up events. You would get a very limited view of history as a whole, but that doesn’t matter, if you just care about the Battle of Vimy Ridge.[4: perhaps, for instance, if you’re a veteran who fought in it]

[Painting: The Battle of Vimy Ridge]

A detailed description of a historical event, just like a painting, a photograph, an artifact, or a period film[5: unless it is epic in scope, spanning multiple locations and periods], or anything else that remains bounded to a single point in space and time, is a snapshot of history. It can’t, by itself, provide a deep or a broad view.

Which doesn’t mean, of course, that points aren’t useful. They are, especially when we combine them into larger clusters. But then we aren’t in 0D anymore. Depending on whether those points are organized vertically or horizontally, or both, we’ll get either a deep, a broad, or a 2D view of history.

Deep history

Deep history is history viewed across time. That sounds a bit tautological, right? That’s because, usually, deep history is what we implicitly mean when we use the word ‘history’ on its own.

Anything labeled “the history of X” is deep history. By specifying an X, we exclude everything that is not X (unless it is directly relevant to X). If you decide to study the history of Canada, or the history of the fur trade, or the history of the Vachon cake company, then you’re (probably) not going to get into the history of Mesopotamia, space exploration, or the Leclerc cookie company.

Thus breadth is sacrificed. In the interest of building a narrative (it’s no coincidence that history and story are similar words), you take out everything not relevant, and end up with only a small sliver of the entire state of the world at any given time point. Of course, that sliver may contain some breadth, if you’re writing a narrative of something large like “the Americas” or “Western civilization.” But it’s still only a sliver, because your narrative will have to focus on a small part of the Americas or Western civilization at any given point.

Even when telling the story of the whole world, we only get slivers. The video “history of the entire world, i guess” by Bill Wurtz, arguably the best narrative of human history ever made, does a great job of jumping around to summarize the most significant events of the past 13 billion years, but it still results in an extremely zigzaggy slice of the pie.

Written narratives are the most common way to do deep history, but we can also visualize it.[6: There are also visual narratives; time lapses like this one about the history of Manhattan fall into this category.] The canonical way is the timeline. Here’s a timeline of Canadian history in the form of a Wikipedia table, for example. Or the Vachon cake company:

This website has a bunch of cool timelines on the history of various appliances:

You can see the lack of breadth easily. This is a good thing, because it allows us to zoom into the interesting stuff, like kettle designs, which are unlikely to be talked about when we make a low-resolution summary of the entire history of the world.

In fact, we can try to create a timeline of the entire world. It might look like this:[7: This is the most common periodization of world history according to French historians: Antiquity, Middle Ages, Modern Times, and Contemporary Times. With the French Revolution as a cutoff date, it is, yes, very French-centric. Source.]

Not terribly informative, is it? Such a timeline pretends to be broad and to cover everything, but it has so little information at any given point that it remains basically one-dimensional. When timelines truly get 2D, they tend not to look like timelines — or, at the very least, they combine multiple timelines together.

Let’s now examine what broad history actually looks like.

Broad history

Broad history is the full state of a relatively large part of the world, or of the entire world, at a fixed time point.

Consider everything that is happening today, which as of writing this is May 8th, 2021. It’s not history yet, because it’s in the present, but it will very soon be in the past. In fact, it already is history as of you reading this, though presumably not very old history (unless somehow this essay ends up being read in the far future).

If you or some far-future historian wonder what the state of the world was on May 8th, 2021, and decide to reconstruct it in some way, then you’re doing broad history. Same thing if I decide to wonder what went down in 1759, or AD 476, or December 16th, 2334 BC.[8: It turns out 2334 BC is when Sargon of Akkad started conquering Mesopotamia, though the clay tablets do not say if that was in December.]

Broad history is very much non-narrative, which is why, I think, it’s a somewhat less common approach than deep history. Humans like stories. They’re one of the major ways we make sense of the world. To do broad history, instead, we need to deal with a lot of simultaneous data, without (or with very little) time being involved. Our brains aren’t great at this task.

Also, especially for pre-modern times, the data in various parts of the world are often so uncorrelated that there’s rarely any point to studying them at the same time. The most extreme example of this is probably the separation of the Old World (Eurasia & Africa) and the New World (the Americas). A narrative set in either is likely to completely ignore the other (until the point when they established permanent contact, of course).

There are countless less extreme examples. For instance, did you know both the Roman and the Han Chinese Empire were going strong in 1 AD, and that there’s a whole field of historical studies about their indirect relations?

Or that J.R.R. Tolkien, Adolf Hitler, and Anne Frank’s father participated in the same World War I battle in 1916?

Those facts are surprising because they join together what we usually think of as distinct narratives. The rise to power of Hitler has little to do with Tolkien’s life and writing career. The story of how the Romans founded a republic that conquered the Mediterranean and became an empire has little to do with the unification of China under the Qin dynasty and its transition to the Han dynasty.

Put another way: There’s a Wikipedia list of Roman emperors. There’s a Wikipedia list of Chinese emperors. But there’s no Wikipedia list of emperors and leaders in AD 1.

The best we have is a list of events, births and deaths for every year (and decade, and century). Here’s 1 AD, for instance. This is a form of broad history. Another form is to write a long, detailed description of the world at some time point, for instance a book titled “The State of the World in 2021” or some such.

But broad history, unlike deep history, is best studied visually. And the canonical way to do that is the historical map. The map with China and Rome above is an example. Here’s a more complete one for the same year, AD 1, from Wikipedia’s cool collection of historical maps of the Eastern hemisphere by Thomas Lessman:

Another version, now covering the whole world:

GeaCron’s Interactive World History Atlas since 3000 BC is a great resource to look at the world map of any specific year in the past five millennia. AD 1 again:

Of course, such maps have low resolution. They give an overview of the state of the world, but no details of what was going on in Rome or China or among the Arctic Marine Mammal Hunters up in Siberia. Textual descriptions can do better, but in general, it’s hard to get information of this kind with broad history. 0D and deep history might be better approaches.

Or we can just go the opposite way and start stacking our broad historical maps over the temporal dimension, and see history in its majestic two-dimensional glory.

Two-dimensional history

You know how it’s always awkward to display a 3D object in a 2D space? For instance, here’s a cube:


This isn’t actually a cube. You’re (probably) reading this on a screen in 2D. So what you see is the 2D projection of a cube, which your brain converts to a 3D object in your imagination.

Your brain can do that because it’s used to 3D space. But consider a 4D hypercube, a.k.a. a tesseract. This isn’t something we can imagine well. The best we can do is project it in 3D or 2D. Here’s what it looks like when a 4D hypercube is projected onto a 2D screen:


We can do slightly better if we add time as a third dimension:

You can sort of see the hypercube with this gif, if you squint really hard and let go of your preconceptions about the physical world. But you can’t really make a good mental image of a 4D object, in its full detail, because that’s not a thing human brains do.
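Incidentally, the projection itself is just linear algebra. Here's a minimal Python sketch (the function names are my own, not from any particular library) that builds the 16 vertices of a tesseract, rotates them in the x-w plane so the fourth dimension becomes visible, and flattens them to 2D with two successive perspective projections:

```python
import math
from itertools import product

def tesseract_vertices():
    # The 16 corners of a unit 4D hypercube: every combination of 0/1 in 4 coordinates.
    return [tuple(v) for v in product((0.0, 1.0), repeat=4)]

def rotate_xw(p, angle):
    # Rotate in the x-w plane, so the 4th dimension shows up after projection.
    x, y, z, w = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_2d(p, distance=3.0):
    # Two successive perspective projections: 4D -> 3D, then 3D -> 2D.
    x, y, z, w = p
    f3 = distance / (distance - w)  # 4D -> 3D
    x, y, z = x * f3, y * f3, z * f3
    f2 = distance / (distance - z)  # 3D -> 2D
    return (x * f2, y * f2)

flat = [project_to_2d(rotate_xw(v, 0.5)) for v in tesseract_vertices()]
print(len(flat))  # 16 projected points
```

Animating the rotation angle over time is exactly what the gif does: time stands in for the dimension we can't draw.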

History across both breadth and depth is the same deal. You can never quite grasp its full depth and breadth. But you can sort of see it if you project it cleverly and squint really hard.

The canonical way to do that is the kind of super complicated chart that combines multiple timelines and maps. Like this chart by Andreas Nothiger:


Or the Histomap, created by John B. Sparks in 1931 (it may take a while to scroll down):


Or the incredible Temple of Time by Emma Willard (drawn in 1846):


(See this Twitter thread for some other examples.)

I find those pictures impressive. They are masterful attempts at compressing all of history in the least lossy way possible. But you immediately see the limitations. They’re hard to read. They take a lot of space. They involve a lot of choices as to what information should be included.

A good example of the dilemma of inclusion is Tim Urban’s Horizontal History post on the blog Wait But Why. (Despite the title, it’s 2D history, not broad history.) Tim’s large charts contain famous people’s birth and death years, and little else. They’re great! We rarely see the data displayed like this. But everyone will agree that the post merely scratches the surface of the history of any given time or place. Important people are just a tiny part of the whole story.

Also, all of the above are static visualizations. What if we used time directly to add an extra dimension?

Go back to GeaCron, pick an arbitrary year (say, 1867), and then just click on the back or forward arrows repeatedly. You’ll be able to see how the political map of the world changes across time, and sort of get a grasp of history in its full breadth and depth.

Of course, you might just as well look at a time lapse instead of clicking repeatedly. There are many such videos. Here’s a 19-minute one:

Here’s another, covering less breadth and depth, since it focuses on the colonial and post-colonial states of North America after 1750. But it’s still definitely 2D.


As we saw in the broad history section, maps, fun as they are, give us only a high-level overview. What if we could zoom into the map and examine the daily life of the women and men at any point in space and time? What if we could do this without sacrificing depth and breadth to get back to 0D history?

Now we’re entering the realm of detailed historical simulations. Those are extremely difficult to build, considering the enormous amount of data required. I’m not sure that a good, accurate, detailed simulator of world history exists, or ever will.

Fortunately, there’s an entire industry devoted to the creation of high-quality simulations, although that quality comes at the expense of accuracy: video games.

Specifically, grand strategy and 4X games (a subgenre involving eXploring, eXpanding, eXploiting, and eXterminating), of which the most prominent examples, and the ones I’m most familiar with, are the games in the Civilization series. Other examples include Age of Empires and pretty much anything by Paradox Interactive, like Europa Universalis or Crusader Kings. (Disclaimer: I’ve played zero to very little of any of those games except Civilization II to V.)

(Any game with a relatively accurate historical setting can also be used as a simulation. I once saw an exhibition about Ancient Egypt, and the museum had partnered with Ubisoft to provide visitors with a special educational version of Assassin’s Creed Origins. You could explore the city of Alexandria as it might have appeared in the time of Queen Cleopatra VII, around 49-43 BC. Pretty cool! However, there wasn’t much breadth and even less depth; it was much closer to 0D than 2D.)

The people making and playing grand strategy games tend to be careful about claiming that they’re a good way to learn history. You can see why: they’re often wildly inaccurate. In a typical game of Civilization, you’ll have a randomly-generated map in which the Roman Empire coexists with the United States of America, the Aztec Triple Alliance, and the Mongol Empire in 2500 BC. In Europa Universalis IV, you can end up with such sights as this:

The reason games tend to be historically inaccurate is that they optimize for gameplay and fun. As interactive media, they must give players freedom, which necessarily means freedom to diverge from actual history. As a result, a lot of the details will simply be wrong.

But games can still be useful in two ways. The first way is to spark curiosity. After playing as the Byzantine Empire, you might get curious about this civilization, wonder why you never learned about it in school, and go binge-read Wikipedia about it.

The second way is to give a better understanding of the concepts and forces that drive history. For instance, most 4X games involve a technological tree. As the game progresses, your country develops new technologies; at some point, you learn how to build ocean-going ships; you send your new ships out and find a new continent; you meet faraway civilizations and have to decide whether to conquer or ally with them. Even if the specifics are inaccurate (say, you’re playing the Zulus and conquer Russia), role-playing the part of a conquistador civilization can help you grasp reality with more clarity. Being the Kingdom of Spain and conquering the Inca Empire is a fuller experience than reading about the conquest or looking at changes in the map of South America.
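As an aside, the tech-tree mechanic described above is easy to sketch in code. A toy version in Python (the technology names and prerequisite graph are illustrative, not taken from any actual game):

```python
# A toy tech tree as a prerequisite graph: each technology maps to the
# list of technologies that must be known before it can be researched.
TECH_TREE = {
    "sailing": [],
    "astronomy": ["sailing"],
    "ocean_going_ships": ["sailing", "astronomy"],
}

def can_research(tech, known):
    # A tech becomes available once all of its prerequisites are known.
    return all(prereq in known for prereq in TECH_TREE[tech])

known = {"sailing"}
print(can_research("ocean_going_ships", known))  # False: astronomy missing
known.add("astronomy")
print(can_research("ocean_going_ships", known))  # True
```

The appeal of the mechanic is that it makes dependencies tangible: you feel why ocean-going ships arrive only after a chain of earlier developments.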

I credit a lot of my understanding of history to a heavily modified version of Civilization IV called Dawn of Civilization. This mod, which is still undergoing active development sixteen years after the base game came out (including a few very modest contributions by myself), tries to follow real history as much as possible without sacrificing gameplay. It’s played on a reasonably accurate real-world map, and civilizations arise (and fall) at approximately correct dates.

Any single game of Dawn of Civilization diverges from reality, especially the parts that are affected by the player’s actions. But the computer-controlled civilizations are programmed to act historically, and the overall narrative tends to follow real history fairly well. Which means it’s actually useful to get a 2D view.

For instance, you could be playing as China and spend all your time dealing with the northern barbarians and the Mongols and the Koreans while having no idea what’s going on on the other side of the world. Then you get ocean-going ships and reach Europe and the Mediterranean… at which point you realize these places have been doing their own thing for centuries. Wars, collapses, religions you knew nothing about.

(And you might see a rare occurrence, such as the civilization of Carthage in North Africa surviving until the 18th century:)

The simulation simulates everything, even if it’s not relevant to your immediate game experience. But of course, you never know what is relevant. Maybe you decide to play the Ottoman Turks. But unbeknownst to you, a stronger Persian Empire in this particular game thwarted the spread of Islam by the Arabian Caliphate, centuries before you start your game, and that’s why you end up playing a Christian Orthodox Ottoman civilization.

The 2D view of history you get from a computer game isn’t accurate, but I know of no better way to viscerally feel, as opposed to read or watch, the forces that underlie human affairs. Plus, games have the benefit of being, you know, fun to play. You’re never going to learn any history if you don’t find it fun.


None of the four basic shapes of history is better than any of the others. Like in any complex field, the only way to gain a good understanding of the whole thing is to spend a lot of time studying it. But you can be smart about that, and study history in a variety of ways.

Art and photography give you a snapshot of a time and place, a detailed idea of what it would have been like to be there.

Narratives and timelines make history coherent and interesting by hijacking our love of stories.

Maps provide a large-scale overview of an entire period, joining together distinct narratives.

And complicated 2D projections, as well as simulations and games, create this awe-inspiring impression of seeing all of history at once, even if we know the picture will always be too much for our feeble brains.

Get enough of each, and you will understand the data of the world.


Thanks to Sachin Maini, Alicia Kenworthy, Kritika Sony, Rishi Dhanaraj, and Kushaan Shah for reading drafts of this piece.


Leveling Up the Skill of Friendship: Maintenance

Wishing to be friends is quick work, but friendship is a slow ripening fruit. — Aristotle


How many friends have you lost contact with for… no specific reason at all?

It seems to happen all the time, doesn’t it? We like to tell new friends, “let’s keep in touch,” but then we don’t. Our past is littered with abandoned friendships.

In a sense, this is expected. Friendships have a life cycle and most don’t last forever. If you’re continually making new, better friends, you should expect to have less time for your existing friendships. Those then wither and die. Or, more often, they enter permanent stasis, frozen in time until something stirs them up again.

But sometimes, friendships wither, die, or enter permanent stasis simply because we haven’t put in the maintenance work.

“Maintenance.” Not a very sexy word. It brings to mind a whole part of the lexicon we’d prefer never to think about: “chores,” “cleaning,” “troubleshooting,” “calling a plumber because the kitchen sink is a disgusting mess,” and so on. Yet everyone recognizes that maintenance is necessary. It’s unsexy work, but it’s work that must be done — otherwise the kitchen sink becomes grimy and might break down and then you’ll definitely need to call an expensive plumber.

This analogy is imperfect because maintaining a friendship is much brighter and more fun than taking care of yucky plumbing. But it’s worth emphasizing that it’s not as effortless as we might like. You need to take care of your friendships, to nurture them, or they will most likely break down or become forgotten.

Here are my thoughts on how to do this. I don’t claim to be an authority — I’m just someone who’s figuring it out, and who does that through writing an essay like this one. I think and hope it will be useful to you, too.

Table of Contents

This is part 2.2 of my series on friendship. You can access the previous parts, on why friendship matters and how to make new friends, from this table of contents. You can also jump to any of the subsections:


More classic art (Luncheon of the Boating Party by Renoir) to represent friendship, in this case showcasing how wine, food, and cool hats are the key to keeping good lifelong friends

Recognizing that friendship is not effortless

When I was writing this, I had an open email window on my laptop. The window was mostly empty. There was a friend’s email address in the To field. Then there was the very beginning of an email, something like “Hey, how’s it going?” Nothing more.

This is a friend with whom, years ago, I used to communicate via long, old-fashioned emails. I hadn’t talked with him since 2018, and I’d been meaning to reestablish contact with the same mode of communication as before.

That email stayed both open and unwritten on my computer for about three months, until — coincidentally enough — that friend contacted me through some other means, making me slightly ashamed to have waited until then.

Why didn’t I write that email? Not because I didn’t know what to say. Not because I was afraid of his reaction, or because I didn’t care about that friend.

Simply because it was work.

Communication is not effortless. It’s not always a huge effort — sending a text message to a close friend is pretty easy — but it’s more effort than doing nothing. And communication is only part of a relationship. People need love, help, support, and time. You need to be ready to give those out to the people who are important to you. None of that is effortless.

From an essay called “Friendship is work, and that’s okay“:

Just like romantic partnerships and family relationships, friendships require a delicate balance of compromising, apologizing, spending quality time, and even calling out inconsiderate or upsetting behavior instead of bottling it up.

We often don’t realize that friendship demands effort, because it is usually very enjoyable. Which is great! That’s why we want friends to begin with. But the need for work can be obscured by that enjoyment, and then we don’t realize we haven’t done enough, and then our friendship suffers.

So the first part of my maintenance advice can be summarized thus: recognize that friendship is work.

Avoid the somehow widespread view that friendship happens magically, that friendships are governed by fate. (Yeah, I realize I’m the one who put a My Little Pony: Friendship is Magic image in the last post. It is magic, but only in the sense of spending hours drawing pentagrams and incanting complicated spells. Magic is work!) It’s seductive to think lifelong friendships are created instantly through the mere act of meeting a kindred spirit. Just like it’s seductive to think that romantic love is all about finding your soulmate and spending the rest of your life in unending bliss…

… which everyone who has been in an actual romantic relationship knows is bullshit. Romantic love is hard. But somehow we haven’t internalized the same lesson about friendship.

Keeping a friend you’ve just made

Maintenance work starts as soon as friendship is born. But at first, it’s usually easy work.

This is because the conditions that provided the initial spark are still there. That friend you made at work will remain your friend as long as both of you keep your job. Friends you met when you became passionate about your theater classes will easily stay friends with you as long as you’re into theater.

Let’s point out that almost everything in the make new friends section from the last essay still applies. If you keep doing whatever made your acquaintance turn into a friend in the first place — being kind, being interesting, making the relationship easy for them — then the friendship is likely to endure.

At least, until the conditions change. If you can control it, and care a lot about the friendships you made in some context, try whatever you can to keep the conditions constant. But of course, there are a lot of life circumstances we can’t control, such as finishing school or quitting a job, losing interest in theater classes, or simply having friends leave for their own reasons. And even when we are in control, other considerations — a career, a romantic relationship, family, etc. — will force us to sacrifice the ideal environment to keep our friendships. What then?

Keeping in touch when conditions change

That’s where the real challenge of friendship maintenance occurs.

How do you keep your friends from high school or university or your first summer job or the hometown you left years ago?

How do you keep friends that you met through someone who’s no longer in your life — for instance, the friends of an ex?

One answer is that sometimes, your friendship was tied to that bygone context to such an extent that it will never be the same again, and you shouldn’t try. But that certainly isn’t always true.

If you’re intent on staying friends even after circumstances have pulled you apart, you’ll need to put in some more maintenance work. The first step is to initiate contact frequently. People are bad at reaching out. People vaguely think about contacting you, and then they keep a draft message open for three months and never send it. You can’t fix that for them, but you can fix it for yourself and maintain your friendships that way.

One trap you want to avoid is assuming that if a friend hasn’t contacted you already, it means they don’t want to see you. It’s really easy to think that, especially if you’re shy or socially anxious! (Source: I am a shy and socially anxious person, or at least I used to be.) But it’s also easy to convince yourself that it’s almost never true: just realize that your friend might be making the exact same assumption, since you also haven’t contacted them for exactly as long.

Of course, it’s sometimes the case that someone you consider a friend doesn’t want to see you anymore. You need to read the room. If you’re always the one initiating contact and your friend doesn’t seem that keen to see you, you might want to stop trying. We’ll consider this in more detail in the essay on ending friendship. But I want to say this: a long time with no contact is not by itself evidence of unrequited friendship.

Concretely, what does maintaining a friendship look like?

It can be as simple as a “maintenance text” — just sending a short message every once in a while. My friend Rishi writes, in How I keep in touch with friends after college:

It can be a:

  • Dank meme
  • Interesting article
  • Piece of sports news
  • Shared memory
  • etc.

Receiving any of those things means the other person was thinking of you, which is just the sweetest thing.

It can go further, of course. Invite your friend over. Grab some food or play a game. Suggest a phone call or video chat. If they live far away and you can afford it (and there’s none of this horrible covid-19 thing restricting travel), travel to visit them! As a bonus, they’ll even be able to show you around, which is the best way to do tourism.

Some occasions are particularly suitable for maintenance texts/calls/etc. Birthdays, for instance: you have the perfect excuse to send something to a friend — so why not add to your birthday wishes something like “We should have a call, it’s been a while!” Your own birthday is also a good time to suggest hanging out to whoever is sending you wishes. Other holidays, e.g. New Year’s Day, are similarly convenient.

Those calendar events are Schelling points — easy dates to coordinate around by default. But you can also create your own recurring events with a friend. For example, decide that you both go watch a movie at the beginning of each month. In other words, create a habit — which is one of the best ways to make work not feel like work.


Not everyone is that bad at reaching out, and so you will also be on the receiving end for many of your friendships. My suggestion, if you care about those friends, is to make a habit of replying quickly when they contact you. In fact, make a habit of saying yes to whatever they’re inviting you to — or, if you must say no, immediately reschedule or suggest another activity.

There’s a real danger, when you decline invitations too often, of letting your friends think you don’t want to see them. It helps to directly tell them that it’s not the case. Say “Sorry, I’m really too busy this week, but I’m really happy you invited me! How about we grab lunch together next week?”

(I have a friend who told me something like that, and it truly warmed my heart. But it’s been a couple of months now. Hopefully that wasn’t a strategy to tactfully get rid of me.)

If on the other hand you don’t care about those friends, well, perhaps friends isn’t quite the right word then. This is a matter for when I’ll write about the end of friendship.

You may also find yourself on the side of doing the work alone, with no reciprocity. This is not an ideal situation. As I mentioned earlier, that may just mean the friend isn’t very skilled at friendship, but it could mean they’re the ones who don’t care that much about your relationship. As a friend told me, “even a ‘well-maintained’ friendship can crumble if the maintenance work isn’t shared.” I don’t think the work should necessarily be shared equally, but it can’t all fall on the shoulders of a single person. That’s too much to bear.


If you have a lot of friends and acquaintances, that’s… a lot of data to keep track of, actually. There’s basic stuff like birthdays and email addresses, but also who likes what kind of food, who has feelings for whom, who enjoys what sort of event… Humans do come equipped with a lot of mental power to process social data — for around 150 friends, to bring up Dunbar’s number again — but there’s no shame in getting a bit of technological help.

For many, social media data is largely sufficient. Facebook will remind you of birthdays and provide a convenient way to contact people. But not everyone is on whatever social media you’re using (probably for good reason), so you might still want to take extra notes. It can be as simple as keeping track of email and physical addresses in a notebook or text file.

But you can get fancier. Nat Eliason has a detailed blog post on what he calls his personal CRM (contact relationship management) system. He keeps track of:

  • First & last name, email
  • Industry / skills, Company
  • Interests
  • Location
  • How we met

I don’t think that this particular solution fits everyone, but I’ve started doing something low-key similar, and so far I like it. Sometimes there are little details about people that you’re not going to remember by yourself. For instance, I have a friend whose taste buds will not tolerate cilantro. If I invite her over for dinner, and look at my few notes about her, I will not cook food with lots of cilantro (I love cilantro). Everyone wins!
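For the curious, a personal CRM along these lines doesn't need fancy software; a few lines of Python are enough for a sketch. The fields loosely follow the list above, and the example entry and helper names are entirely hypothetical:

```python
from dataclasses import dataclass, field

# A minimal personal-CRM record. Fields mirror the kind of data discussed
# above; extend or trim to taste.
@dataclass
class Contact:
    name: str
    email: str = ""
    interests: list = field(default_factory=list)
    location: str = ""
    how_we_met: str = ""
    notes: list = field(default_factory=list)

contacts = {}

def add_note(name, note):
    # Append a free-form note to an existing contact.
    contacts[name].notes.append(note)

contacts["Alex"] = Contact(name="Alex", interests=["theater"])
add_note("Alex", "cannot stand cilantro")
print(contacts["Alex"].notes)  # ['cannot stand cilantro']
```

A plain text file works just as well; the point is simply having one place to look before you plan the dinner menu.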

Is it weird, though, to take detailed notes on people? It somehow feels like it goes against some social norms — yet I can’t really see why. You keep information and impressions of people in your head all the time; it’s not very different to keep them in written form, as long as they remain private. Still, I rarely write anything personal in my own notes, perhaps because it’d feel like a transgression. I suppose the answer here depends on your own level of comfort.

Scaling it up

Everything I’ve told you so far is useful (at least I hope so), but it all assumes we’re talking about 1-on-1 relationships. If you have a fair number of friendships you want to maintain, it’ll be a lot of work, both for you and each of your friends. Fortunately, there are ways to become more efficient. (“Efficient” might sound too productivity-centered for something as sacred and beautiful as human relationships, but even if you don’t think of it in those terms, you still need to be efficient! You have limited energy to do the work of maintenance, and it’s good to be wise about how you spend it.)

Am I just going to tell you to see several friends at once? Yeah. Organizing a group event requires more work than a 1-on-1 — scheduling is more complicated, and maybe you need to get your house in order if you’re hosting — but it’s often worth it. It’s rewarding, it makes people happy to be invited, and you see many of them at once in a fun context.

But if you’re going to organize a hangout with many friends, why not make it recurring? I already mentioned that creating habits is a good way to foster a 1-on-1 friendship; that’s even more true for group events. Make them low-key, with flexible guest lists, so that they happen regardless of any specific friend’s availability — and suddenly you have created a Schelling point for a whole group of people.

Weekly dinners are perhaps the most common example. I lived in the Boston area for a little while, and the Friday evenings at some friends’ group house were a highlight of my time there. There was something really warm about the idea of joining this ever-changing group of people (there was, however, a core of regulars, which I suspect is a prerequisite for the event to persist over time) for fun discussions and food every week.

It doesn’t have to be weekly, though — monthly or even yearly recurring events can work great. It can also go beyond dinner parties, like outdoor activities or game nights, as long as the events don’t get too complicated or costly.

The whole idea is to make it easy for your friends to join. To perform the maintenance work for them. This reduces the total amount of work needed, and isn’t much harder than organizing 1-on-1 hangouts.

Self-sustaining friendship

There is a final trick I need to tell you about, and it is the best trick of all. With this strategy, you can make the work of friendship maintenance perform itself with minimal effort, most of the time. The trick is to create a virtuous cycle, in which friendship begets friendship.

Friend groups are the logical next step to recurring group events. When you have a cohesive set of people who see each other regularly, eventually the group seems to get an identity of its own. It becomes a tribe; you get a feeling of belonging. But more importantly, the group’s continued existence depends on very little individual maintenance work; everyone can contribute, to the advantage of everyone else.

Rishi writes:

If you’re like me, you can’t be trusted to maintain a friendship 1:1 by calling and texting all the time. You get busy. In a friend group situation, your amazing friends keep each other engaged. This way, you’re relying on a system instead of willpower.

You still need to contribute some; if no one does the maintenance work, the group will disintegrate. In fact it’s likely that the distribution of work is unequal — we all know of situations where one friend is responsible for holding the group together. Which isn’t necessarily problematic, except insofar as it makes the group fragile.

But in an ideal group, any effort you put in will be amplified by your other friends, and their actions will be in turn amplified by you. Thus the system becomes self-sustaining.

Concretely, the easiest way to get this is to join an existing friend group, though you can also create your own. The means are diverse: recurring events, co-living, common projects, or simply online chat groups.

There is another, more personal type of self-sustaining friendship: deep friendship.

When you get very close to someone, it doesn’t really matter whether you send a text message every once in a while. You can in fact spend years without any contact. “No matter how long you spend apart, when you come back with your friend, it is like no time has passed at all,” as C.S. Lewis wrote.

The ordinary maintenance work becomes unnecessary, or so easy to do that it goes unnoticed — although perhaps that is an illusion. We’re less at risk of losing a close friend due to neglect, but when help must be given, for instance in times of crisis, the demands will be greater. Maintenance work is still needed, but its nature has changed.

Still, deep friendship is rewarding and less taxing in our daily lives. It’s a whole other topic, however. We’ll cover it in the next post.


Thanks to Gregory Yang Kam Wing, Taylor Pullinger, and the wonderful people in Writing Workshop Juliet for feedback on this piece.


Checkpoint 30

I’m turning 30 today.

People who are younger than 30 (which I still am as of writing this) tend to feel immense dread at the thought of reaching that age. People who are older than 30, of course, find this laughable. Those people are wrong. They’re wrong because they have forgotten. Turning 30 is a Big Deal.

I recognize that despite writing this, I’ll probably forget the significance of turning 30 when I’m 45 or something. I’m sure I’ll laugh at 29-year-olds who think they’re old. Oh, hi 45-year-old me! How’s it going? Yeah, I know. I swear, it was a big deal at the time.

Whatever you think about turning 30, it is a milestone. And I felt it would be appropriate to write something symbolic for the occasion.

I’ve never written anything like an autobiography. Short anecdotes, sure. Random excerpts of the movie of my life. Yet a coherent narrative of my existence so far doesn’t exist. Maybe I never made one because it would have felt pretentious? Pointless? But I don’t think what I’m doing now is either of those things — because I’m writing it for myself. I’ll make it fun, so that others who are interested in my life story, for whatever reason, have a good time reading. But this is primarily for myself. For 45-year-old me, and for all the others.

Let’s call it Checkpoint 30. (A warning: this piece is full of footnotes. Usually, my best bits of writing end up in the footnotes somehow, so I recommend you read them, although they definitely are tangential to the main narrative.)

0-9: A Happy Childhood

I was born in Quebec City, Quebec, in 1991. (On Tuesday, March 19th, 1991, at 4:42 am, to be precise. I scheduled this post to be published on exactly the minute I turn 30.)

My dad doesn’t like when I say I was born in Quebec City, because that’s just where the hospital was. We were living across the St. Lawrence river, in what is now the suburban city of Lévis, and where my dad would eventually hold local political office, which makes him sort of adorably chauvinistic about things like where I say I’m from.

But no one outside of Quebec knows about Lévis, and people may know about Quebec City, and since I was actually born there, I stand my ground: I’m from Quebec City. Sorry, dad.

The part of Lévis where we lived, and where I grew up, is probably the most rural of Quebec City’s suburbs. It’s still distinctively a suburb, though. A quiet, banal place, whose economic activity mainly consists of people commuting to the city for work (often government work; Quebec City is a city of public servants). Somehow, the most striking thing I remember from my childhood there is my… utter lack of class consciousness. It took me a long time to realize that social classes even exist. When I was a kid, everyone seemed to have the same socioeconomic status.

I suppose that’s what it means to be middle-class. Or upper-middle-class. My parents are university-educated and earn(ed) much more than average, though I had no idea at the time. I would hazard a guess that being upper-middle-class in a quiet suburb is the ideal situation for avoiding class consciousness — a rich or poor or urban kid would notice the differences far more than I did.

Now, of course, I know that there were many class markers. We watched public television (Radio-Canada) rather than the more popular, private network TVA. We shopped for groceries at the store that had the fancier “President’s Choice” brand. We (well, my parents) drank coffee out of bowls.

When I go back to that suburb as an adult, I find it incredibly ordinary. I think we can outgrow places, and I certainly outgrew that one. I have become an urban dweller and would never even consider moving anywhere suburban. The rows of houses with mowed lawns, the complete separation of residential and commercial areas, the primacy of the car (despite the French language, make no mistake: Quebecers are far more North American than European) feel like a mild version of hell to 29-year-old me.

Yet my memories are of a happy childhood.

Like everyone, I remember nothing of my first few years. I can’t even tell what my first memory is, because it blurs together with the photographs my parents took and kept in large albums; this was the early 1990s, and physical photo albums were still a thing.

So I don’t remember the birth of my younger brother when I was 3. I have no idea how I reacted. Let’s just assume I reacted very well. My brother and I always had a good relationship. (Notwithstanding the anecdote that my brother loves to bring up to annoy me; he knows what I’m talking about.)

We grew up in a loving, unproblematic household. My grandmother, who used to earn a living as a nanny, served as a nanny for us too, and so we grew really close to her. Fun fact: she has a summer cottage that is actually a converted school bus. The cottage/bus is next to a lake that was single-handedly dug by my great-grandfather for reasons I have never been able to fathom. I guess he thought that diverting a river to create five artificial lakes was his idea of a cool project.

In any case, it created a nice quiet spot down in the Beauce region, where my brother and I would spend a few weeks each summer. We would sleep in the bus or in a tent; catch frogs and loaches; throw ourselves into the lake from a swing tied to a towering tree. We also used to go wild blueberry picking. We could never pick them as fast as my grandma could. She was amazing at it, and the freezer back home was full of wild blueberries year-round.

a view from inside the school bus-turned-cottage

Most of my extended family used to live in the same area as we did in Lévis. Grandparents, uncles, aunts, cousins — some of whom had far more tumultuous childhoods than my brother or I did. Oh, also, no one ever died. Even as of writing this, the only close family member who died is my paternal grandfather, in 2015. For most of my life I had no experience of mortality or grief. I still have had very little.

This is boring, isn’t it? Sorry for having basically no family drama to tell.

School was similarly uneventful. I was probably the smartest student. In grade 1, I was offered the chance to skip ahead to grade 2, since I had somehow learned to read on my own beforehand, but I refused. I’m not sure why; today it feels a bit cowardly.

In any case, school was easy, and I was that earnest, bright kid who would volunteer to do more than necessary, or to help the other kids. I was a nerd, I guess. I was very interested in plants at some point. In the solar system at another. In the water cycle. In mushrooms. In video games — the first two I played were Pokémon Snap and Zelda: Ocarina of Time, when my parents bought a Nintendo 64. I spent so much time stuck in the forest area at the beginning of the Zelda game because I was too afraid of the big boss in the first dungeon. I would talk multiple times to every single character and go ask my parents for a translation — back then, games weren’t translated into French for the Quebec market. That’s how I started learning English.

Good times.

And then at some point, soon after the beginning of the third millennium, I turned 10.

10-19: Teen Turmoils

Nothing special happened for a couple of years, at least for me. I guess things did happen, like the 9/11 attacks, which I don’t remember much of. I believe I was mostly wondering what the fuss was all about. Evidently I had no grasp yet of what mortality truly means. (I have memories of thinking that ecological crises should be solved by simply removing many humans from the Earth. This is a horrible thought, and I am appalled to have even considered something like that, but it’s not unusual. It takes some learning to see the inherent value in all human lives — or, more generally, in all self-aware lives — and to properly recoil at events like 9/11 or natural disasters. Some people haven’t understood this even as adults, and this occasionally leads to moral catastrophes.)

Eventually primary school was over, and my parents sent me to a secondary school in Quebec City that had an international education program. As one of my teachers would put it, it was “a private school for cheap parents”: as selective and as skewed toward high achievers as a private school, but, unlike private schools, it didn’t cost anything. (Social class, Scott Alexander writes, isn’t really about money, but about culture. This was a school for the upper-middle class, for those who couldn’t or didn’t want to spend on an expensive private school.)

It was an excellent school and I’m glad my parents (who are certainly not cheap) sent me there.

In Quebec, secondary school lasts five years, from ages 12 to 17. In other words, the exact period in which one discovers one’s sexuality and love. A few months into my first year, I fell in love. Ah, the extreme feeling of having a crush at age 12! I lost my appetite for days. This was December, so I had to endure a very lonely Christmas vacation away from her. I couldn’t wait to go back to school and… do nothing except hope that she would be my teammate in a group project or something.

It feels so bizarre to write this today, because, as I was very soon to discover, I am not and was not sexually attracted to girls at all.

Which led to about five years of utmost confusion. It became unquestionable that I was gay around age 14, although I couldn’t accept it — in large part because it was unquestionable that I had been in love with that girl! Over time, I stopped denying reality, but then it was 100% clear I was never going to tell anyone. I knew with high confidence that everyone around me would have no problems accepting my orientation. My parents had said so explicitly. That didn’t change a thing. I kept the secret for the rest of my teen years. Oh, I never did tell that girl I had loved her, either.

What a disaster, in retrospect. So much time lost.

The turmoils of my inner life didn’t really impact the rest, though. I kept being a very good student. In fact, I was objectively the best student in my year, as my Governor General’s Academic Medal can vouch: it was given to me at graduation as a reward for having the best grade average out of the 400 or so students. When I received it, there were some puzzled looks in the crowd, because I wasn’t very well known among the class of 2008, especially the half who weren’t in the international program. In our last year, there had been a poll where we could vote on various superlatives, and I wasn’t even included among the choices for “the next Einstein”! Can you imagine!

The reason I wasn’t very well known among my schoolmates is that over time, the earnest and outgoing kid had become shy and introverted. This is a development I don’t quite understand yet — it really seems that I underwent a personality shift. But why? My best hypothesis is that it was a defence mechanism against potential bullying. If so, it worked. I was never bullied, despite clearly being one of the biggest nerds around.

But sometimes I wonder if this protective aura of keeping to myself made me lose more than I gained. Today I realize how much wealth is in the connections we make with the people around us. Although I’ve never been isolated, my shyness made me miss out on many of those connections. I’m trying to redefine myself now; I am slowly shedding the introvert identity. It feels strange, but healthy.

But that is about who I am today — let’s go back to who I was as a teenager.

Around age 16, I grew annoyed at teachers who always asked us to limit our writing assignments to 750 words or so. That was my motivation to start writing my first novel. It was science fiction, set in an Ancient Greece-inspired fictional universe, and using various sources of energy (fire, light, dark matter) as a basis for a magic system. That would have been quite something if I had finished it, which of course never happened. Nevertheless, it was a milestone in the sense that I first realized I could write whatever I wanted.

My first complete novel manuscript was written over the month of November 2009, during NaNoWriMo. It was about an albino kid in 19th century southern Africa and London and it dealt with issues such as skin color and yes I cringe at the very thought of a 18-year-old Quebecer writing a novel about any of those things (the 19th century, southern Africa, Victorian London, or skin color). But I proved to myself I could write an entire book! My next goal would be to write one that would be good enough to be published. Still working on that.

At the end of high school came the time to make Life Choices. The Quebec education system has a step before university called college, or cégep (which, for Americans and English Canadians, is roughly equivalent to grade 12 of high school plus the freshman year of university). I had to pick my course of study. Good students studied the sciences, so I chose that. Also, I had been into plants and planets and so on, so it made sense.

But, to shake things up a bit, I applied to a double cégep diploma that combined science with music. Music classes in school — I was a saxophonist — had been among my favorite experiences, and since I felt I would never play any music again if I didn’t study it, I decided to make things harder for myself and commit to practicing a lot of saxophone in addition to studying musical theory, basic physics, calculus, and (ugh) chemistry.

I was right: not studying music meant quitting playing altogether. I abandoned the music part of my program after one semester, and never played the saxophone again. I sold my tenor sax last summer after 10 years of letting it collect dust.

I did well in the science part, though, and then Life Choices knocked on my door again. What should I study at university? The obvious choice was medicine — I had the necessary grades, without having tried hard — but the convoluted admission process sounded like a pain, and I wasn’t really into it anyway, so I rebelled and decided I would be a [solemn voice] scientist [/solemn voice] instead. Which science? I hesitated between physics and biology. I also hesitated between the local Laval University in Quebec City and McGill in Montreal. In the end, I didn’t like math enough to do physics, and McGill offered a slightly larger entrance scholarship, so I picked biology there.

The summer before university, I had my first real summer job. It was pretty cool, as summer jobs go. I worked at a small observatory about an hour’s drive south of Quebec City. My task was to operate the telescope and show visitors around. There were barely any visitors, so it was pretty chill. On clear nights, the other guy and I would basically just watch the sky — the cloud bands on Jupiter, the bichromatic stars of Albireo, various star clusters of all shapes.

the Mont-Cosmos observatory — a nice concrete tower in the middle of nowhere built by the local university, and almost immediately abandoned because they built a better one 100 km from there

And then I moved to the big city. Or, more accurately, to a suburb of it. Yeah. Longueuil is the Lévis of Montreal. I lived close to public transport, but I was still living a life of commuting rather than the full urban experience.

There was, of course, a reason for this: my pseudo-great-grandmother (pseudo because she was the second spouse of my great-grandfather, so we’re not actually related; also, she was too young to be a great-grandma, at 80 years old when I lived there in 2010-2011), virtually the only family member I had in the area, let me live in her place for free. A great deal, financially speaking. And my pseudo-great-grandma is a lovely woman. But in hindsight, living in the suburban apartment of an octogenarian wasn’t a very good choice. It made it hard to make friends.

Studying at McGill also meant living mostly in English. McGill is a very weird place to be for a French-speaking person because you feel like a minority (French-speaking students) within a minority (the English-speaking community in Montreal) within a minority (French Canada). I had a perfectly sufficient grasp of English to study in the language, but it compounded my shyness and stunted the growth of my social life.

Overall, my first year there was awful. I escaped to my parents’ house most weekends.

I turned 20 towards the end of my first year of university, full of uncertainty about whether I was in the right place at all.

20-29: The Meandering Life of a Twentysomething

By this point I had still never come out as gay to anyone. It was eating me from the inside. I thought about it every day. I knew I had to burst the secret open, I even knew no one would really care, and yet I didn’t do anything for years.

I can only imagine how gay kids in non-supportive environments feel. It’s hard even when it’s easy. (Although I wonder if the huge rise in positive media representations of LGBT people in recent years has made it easier for the kids of today. I hope so.)

I finally mustered the courage and told a friend over a messaging app in November of 2011. She was totally cool with it (and relieved the “important thing I need to tell you” wasn’t that I had a crush on her). A few days later, I told my brother. The next day, my parents. They were mildly surprised, but only because I had told them exactly the opposite thing some years prior.

After that, it became quite easy, even fun, to come out. There was a little thrill to it — you never know exactly how people will react — but I was confident I was safe with pretty much everyone in my social circles. (I didn’t really need to come out to any of my extended family members, because my parents told everyone while I was busy studying in Montreal. I am lucky that I’ve never actually felt unsafe because of my sexual orientation.)

It was the first semester of my second year at McGill, and at that point I was starting to make friends at last. I came out to the closest of those new friends. He reacted much like my other friends — he thanked me for trusting him, and didn’t press the subject.

But that was all a façade: little did I know that he was gay, too. He didn’t use my coming out as an opportunity to come out himself, however, because he came from a triply conservative background — African, Chinese, and Catholic — and had decided never to tell anyone. (I’ll admit that his reasons were stronger than mine.)

So we just became closer and closer friends over the next few months, until it was clear that I was in love. I told him so one June morning, despite having no idea (except vague hints) whether he was gay too. Did he come out then? No! That took another (very soul-crushing) week. But then he did (the vague hints had been correct), and now we’ve been together for close to nine years (minus a five-month breakup in 2018-2019, which I will keep quiet about).

I quickly moved in with him in the stereotypically wealthy neighborhood of Westmount, in an apartment that wasn’t wealthy by any standard. This change provided me with roommates and an actual social life. Life in Montreal became much better. At school, I dropped my French literature minor to take computer science classes instead, and it turned out I liked them more than my biology major. After I graduated, I took more classes to qualify for a master’s degree in CS. But I still wanted to be a [solemn voice] scientist [/solemn voice], so I applied to a master’s program in Europe to study evolutionary biology, got in with a full two-year scholarship (€48,000, if you must know; thanks, European Union taxpayers!), and decided it was time to experience life abroad.

What followed were two years of living in Sweden, France, and the US, learning how to deal with a long-distance relationship, trying hard to convince myself that trying to be a [solemn voice] scientist [/solemn voice] was still the right path for me, learning and doing some actual science, meeting lots of awesome people (many of whom are still close friends) and travelling all over the European continent.

one of my favorite pictures from my time in Uppsala, Sweden

though actually my daily life looked more like this

My master’s program — called MEME, by the way, as a nod to Richard Dawkins — was very much geared towards academic research. I was finding out that I wasn’t geared towards academic research at all. In the last semester, as I was having fun discovering the vibrant intellectual culture of Cambridge, Massachusetts as a Harvard research-intern-not-quite-student, I finally decided not to apply to any PhD programs.

So I returned to Montreal with… no clear goals at all.

The autumn of 2016 wasn’t very enjoyable. By a cruel twist of fate, just a month after I came back from the exotic lands of Europe and the northeastern US, my partner had to move to Western Canada for job training in weather forecasting. So we kept the long-distance relationship going for another seven months. Meanwhile, I was looking for work, which isn’t easy when all you’ve done is scientific research in a discipline as useless as evolutionary biology (useless to industry, anyway).

I finally found a job as a data analyst for a McGill-affiliated organization that sought to improve cancer care. Basically, I was an Excel specialist. Once, I learned (after about 45 seconds of googling) how to lock Excel data columns to prevent other people from editing them. Then my colleagues would always send me their Excel spreadsheets so that I could lock columns for them.

The job was… okay. The people were nice. The conditions were good. The offices were very, very gray.

Work aside, I kept writing. I published a short story in a literary magazine. The magazine had a special collaboration with some independent filmmakers to adapt some of the stories into short films, and mine was selected. So even though I had published a grand total of two short stories by that point, I could now brag that my work had been made into a movie, which is pretty cool.

This is also around when I started blogging. I had the vague goal of writing the French-language counterpart to smart, generalist blogs such as Wait But Why or Slate Star Codex. But I never quite got there. I had very few readers, and I fell into the lack-of-motivation trap. So I wrote on and off, and that blog never led anywhere, sadly.

snapshot of what my life looked like back in early 2018

After some time (not that much), I grew bored with work and decided to relearn computer science. I went back to McGill, quit my job, and patched together another degree in a little more than a year. This was during the AI boom, and I was going to follow the trend. I had a research internship in machine learning one summer. At the end of 2018, I was done, and I looked for work in a more directed way than ever before. (All while dealing with a breakup, moving out of the place my ex and I had bought together into a smaller apartment with a roommate, and exploring the strange and foreign universe of dating apps.)

I got a job at Druide, a company making the software Antidote, which I have described as “fancy Grammarly for French and English.” It was good! It was the first time I was working a job I felt fit me, with a clear, exciting goal — improving Antidote, a piece of software that I use daily and which is quite well-known in French Canada. I was also working with linguists. I enjoyed working with linguists.

Then, just as I was thinking I should maybe grow more ambitious than working as a programmer for a mid-size company in Montreal, the pandemic swept the world, ruined everything, and made me hate my work.

I first escaped by getting more serious about my writing. I switched to English, something I had resisted for years. I started using Twitter more. I joined online communities, most notably Interintellect. I began casting away my identity as a shy and introverted person and making a real effort to meet more people.

And here we are, at the beginning of 2021, and at the end of my third decade in the world.

The obvious question now is: What’s next?

I don’t know for sure. I know whatever comes is going to be somewhat messy. (Coincidentally, Paul Graham also wrote an autobiography of sorts last month, and I must credit it for inspiration. A key quote, related to mess: “I wrote an essay for myself to answer that question, and I was surprised how long and messy the answer turned out to be. If this surprised me, who’d lived it, then I thought perhaps it would be interesting to other people, and encouraging to those with similarly messy lives.”) I don’t think I’m cut out for simple, linear career and life paths. I want some chaos. Chaos is more fun, as long as it’s not excessive.

A chaotic life requires some self-confidence, however. And somehow, over the past six months or so, I’ve improved a lot in this regard. This is what allowed me to quit my job last month with no precise plans, something that two years ago would have sounded insane. Now I intend to work on some freelance stuff, write more, accelerate my efforts to meet people and expose myself to the serendipity of human connections.

And so, in this atmosphere of desired chaos, I’m turning 30. That feels both young and old.

Old because I have already lived through a lot of stuff, as the exercise of summarizing my entire lifetime has made obvious. (And there’s so much that I had to leave out completely!)

And young because I still feel that my life is really just beginning.

where I’m sitting on March 6th, 2021, just as I finish writing the first draft of this essay. Yes, that’s a pumpkin from last October. It’s probably all rotten inside by now


Thanks to Gregory Yang Kam Wing, Kushaan Shah, Rishi Dhanaraj, Tamara Scott, and Alicia Kenworthy for their feedback!


Leveling Up the Skill of Friendship: How to Make Friends

Do you feel lonely?

Do you think your current relationships are unfulfilling?

Did you just move to a new city and haven’t built a social circle yet? Do you feel like your career or intellectual life is stuck? Are you unhappy with your romantic relationships? Does your life lack purpose?

The solution to all of these problems is simple. Make friends!

And keep them. And get the most out of your relationship with them.

In my previous essay on the topic, I argued that friendship is a skill, and that it’s an important one. Perhaps the most important.

This 3-part essay is about how to level up this skill.

Table of Contents

This was initially meant to be a single essay, but it grew into an untamable beast. I had to cut it into three smaller critters.

I like to divide the skill of friendship into five subskills, which form what we can call the cycle of friendship. The post you’re currently reading is Part 1 (or Part 2.1, I guess, if we count the why essay as the first part) and covers the first subskill, about making friends.

It’s hard to find classic paintings to represent friendship, but I think this one (A Tale From the Decameron, John William Waterhouse) does a decent job


A friend is someone who knows all about you and still loves you.
― Elbert Hubbard

I’m not, by any means, the first to write about friendship. The tradition goes back a long way. At least 2,360 years, in fact: ever since Aristotle dedicated two books of his Nicomachean Ethics to friendship. I could cite many more authors, but you can just look up famous quotes on friendship (some of which I stole shamelessly as section epigraphs) and you’ll get the idea.

This isn’t surprising. We’re a social species. Developing bonds with others is a big part of what it means to be human, so of course thinkers have always been writing about friendship.

Yet there can be no shortage of writing about it. Because friendship, like love, or family life, or work, is an endlessly complex and central facet of human life. It is always worth refining our conception of it — and coming up with good advice to improve the skill.

I’m not calling this “The Ultimate Guide to Friendship,” because I don’t see myself as anything like an expert, and “ultimate” would sound pretentious. But I do want it to serve as a useful repository, with lots of links and quotations. Some by famous authors or ancient philosophers. Others by popular bloggers and personal friends of mine. This is meant to be nothing more than a compilation of advice, wrapped with discussion to make sense of it all.

It’s also meant to be a living document. I plan on continuously updating the post as I find more of the best advice.

What is a friendship, exactly?

A friendship is a relationship between two people who like each other. The each other part is important. If it’s not mutual, it’s not a friendship.

This implies that to be friends with someone, you have to be likable to them — and they to you. Each of you has to provide something that the other values.

Aristotle identified three types of friendship based on three types of values: the useful, the pleasurable, and the virtuous. It is clear from reading Aristotle that the third type is the “ideal” or “true” friendship. There’s nothing wrong with relationships that are based only on mutual utility or pleasure; but the most fulfilling bonds are those that we make because we value the other person for themselves, rather than for whatever benefits they bring us. That’s what we’ll be focusing on here.

In my view, the two main qualities you need to be a great (or virtuous) friend are kindness and interestingness. Kindness can be seen as giving support and love. Interestingness can be seen as bringing value and fun.

Importantly, you need both.

Imagine a person who’s very kind, always willing to help others. You’re glad to have him in your life when you need something. But you also can’t help but think he’s a bit dull. He never voices opinions. You don’t know what his tastes are. Or his ambitions. His dreams. He works some ordinary job — or so you assume, because he rarely talks about it. He has no known hobbies. You call for his help every once in a while, but you always feel some unease, and as soon as the help is given, you leave, not wanting to feel the awkwardness of hanging out with someone who bores you.

Now imagine another person, one who’s super interesting. She has a crazy life, always a cool story to tell, endless facts to teach. But she’s also kinda mean. She asks for help but rarely gives it. She is great at sarcasm — and you laugh each time she demolishes something or someone with her deadpan humor, but not quite in a wholesome, feel-good way. You’re glad you know her, because she’s fun to watch. But you don’t want to get too close. She radiates an aura of mild toxicity.

Would you want to be friends with either of them?

Of course, people contain multitudes. Kindness and interestingness can be shown — or interpreted — in many ways. But the first step to becoming good at friendship is to ask yourself, whenever you interact with people you like: Am I being kind? And am I being interesting?

I think that if you focus on this, instead of worrying too much about social norms, you can go a long way.

With that, let’s dive into the first step of the friendship life cycle: making friends.

Making new friends

Of all the means to insure happiness throughout the whole life, by far the most important is the acquisition of friends.
― Epicurus


The first step to being good at friendship is, quite obviously, to have friends.

This section is relevant to you if you feel like you have no friends, or don’t have enough, or you have “friends,” but they’re not real friends. But anyone can benefit from being better at making new friends.

Concentric circles

Here’s a common way of categorizing relationships. We can view each category as a set that includes the ones after, like concentric circles:

  1. Stranger
  2. Acquaintance
  3. Friend
  4. Close friend

When we talk about making friends, we talk about moving people from stranger to acquaintance, and then from acquaintance to friend. (The friend to close friend transition will be covered in part 2.2.)

Is there an optimal number of friends?

The anthropologist Robin Dunbar famously estimated the number of stable relationships a person can cognitively maintain, based on observations of primate brains. The average number is 150 — though the real value falls into a range of 100 to 200 depending on each individual’s circumstances.

For the purposes of Dunbar’s number, a friend is someone “you would not feel embarrassed about joining uninvited for a drink if you happened to bump into them in a bar,” in the words of Dunbar himself (from his book Grooming, Gossip, and the Evolution of Language, p. 77). We’re free to give the word “friend” a more or less strict definition, and to change the number accordingly. But the point remains: there may be an upper limit to the number of friendships we can have.

Does that mean it’s possible to make too many friends?

Past me definitely did believe this. I used to routinely tell myself that I had sufficient friends and didn’t want more. (And yet, I was probably nowhere near 150 friends at the time.) Present me realizes that this was a cop-out. It was a way for Past me to avoid facing the fact that he wasn’t very good at making friends.

Present me no longer thinks you can make too many friends.

See, Dunbar’s number is not a hard limit; it’s a soft one. You can always meet new people, and if a meaningful connection forms, you befriend them. If you happen to already have maxed out your cognitive capacities for relationships, well, at worst, one of your 150 will be bumped into the lower tier of acquaintances. In fact, it is expected that the exact contents of each group change over time.

The only downside to making a new friend is the opportunity cost. The time and energy could have been spent doing something else, making another friend instead, or hanging out with people who are already your friends.

If you’re 100% satisfied with your social life, and new friends would mean less time with your amazing current friends, then feel free to skip the rest of this post. Otherwise, making new friends is the way to improve your social circle. You should be spending most of your precious time with the people that you prefer — and your preferences can change. Unlike with romantic relationships, there is no expectation that your current friendships stay stable forever. They’re always in flux.

Finding potential friends

As of writing this, there are more than 7.8 billion people in the world. Actually, let’s write this number in full. 7,800,000,000. Go here to visualize how many that is. (Warning: that website is somewhat dizzying!)

screenshot from Dec 20, 2020, showing a tiny fraction of humanity

You personally know a few hundred of those people, which means there are approximately 7,800,000,000 left that you could potentially get to know. Yes, that’s the same number, because on this scale, a few hundred is a rounding error.

Now, not many of them are people you would like to have as friends, nor would most of them be interested in you. Also, well, most of them don’t speak your language, some are babies, some are twice as old or young as you are, and some live in North Korea. But the point is that the world is big. No matter who you are, there is the numeric potential to make new friends. You can never run out.

Visakan Veerasamy (a.k.a. Visa; this essay does not talk about credit card companies, so you can safely assume “Visa” refers to him) writes:

there are a *lot* of people in the world, we chronically underestimate this

Meeting strangers

So, how do we meet all these strangers? They’re all around us, obviously, but how do we talk to them?

Here’s a recipe:

  • go within hearing distance of some people;
  • open your mouth;
  • utter some words.

Okay, I know, it’s not so easy in real life. Talking to strangers is scary! Especially if you’re shy or introverted (as am I).

It’s possible to bond with strangers by talking to them in a park, or coffee shop, or grocery store. But it’s not the most common way to do it. Author Kio Stark has some interesting insights about that, but when I read her piece on experimenting with talking to strangers, I couldn’t bring myself to apply any of it. (Well, except for the “observe and take notes” part.)

An easier way is to put yourself in an environment where talking with strangers is normal and expected. That is, go to public events: there’ll be people you don’t know, and who share an interest with you. (Yeah, don’t go to an event you’re not interested in; that won’t be fun for anyone.) That makes it simple to strike up a conversation!

These events can be anything: shows, classes, hobbyist activities, organized sports, etc. They can be online events, too. Bonus points if it’s regular — you’ll keep seeing the same people, which builds familiarity. Bonus points, too, if the activity involves interacting with other attendees. As an example, improv worked well for me. Board game nights — or any sort of event where you play, really — are a great hack: games make everything easier by providing a clear framework for social interaction.

A good mindset to adopt is to say yes by default. Unless you have a good reason, if you’re invited somewhere, go — even if you don’t feel like going very much. Even if you’re shy.

I love this six-month experiment by Default Friend (yes, her pseudonym is very topical):

When I first moved to California, and long before the days of COVID-19, I would go to any part of the Bay Area, at any time, to do (almost!) anything with anyone. . . .

I’d say I made in the neighborhood of twenty friends using this approach. Five or six close friends. Plenty of acquaintances. Tons of people who linger in the space between “I like you a lot,” and “the timing just hasn’t been right yet.” . . .

Anyway, that’s what “default friend” means. In some ways, it’s living your life like it’s an improv show.

It’s responding to every offer with a, “Yes, and…”

If and once you have an existing network of friends, you can also use it to expand your social life. Go to events with your friends. Ask them for introductions to other people they think you’d get along with. Get to know your friends’ friends. This has the extra benefit of creating friend groups, which are valuable in themselves.

From stranger to acquaintance

Okay, you’ve met a stranger. You said hi. Now what?

On the path to making a new friend, you’ll have to cross the stage of acquaintance. What does it mean to make an acquaintance out of a stranger?

Not much, depending on how you define “knowing” someone. Just saying “hello” isn’t enough, but as soon as you have a real conversation, or know each other’s names, or know how to get in touch again, then that’s it. You’re acquainted.

It helps, here as elsewhere, to be kind and interesting. Kindness with a stranger translates to being polite, warm, and open to giving help. Interestingness means being able to have a good first conversation, and that goes beyond small talk.

Small talk is useful, of course. It’s a skill of its own. Mastering it will make it easier to talk with strangers, since by definition you don’t know what you share with them — so you often need mundane topics like the weather to get started. But if you can’t dive into deeper topics after a little while of talking with someone, it may be hard to have the meaningful conversations that will build friendship over time.

(But note that the length of that little while can vary enormously depending on the person! Some are allergic to small talk, so avoid it as soon as possible; others can be fun to talk with about ordinary things for hours. There will be signs — read them.)

Acquaintances aren’t what we’re after — we tend to have a lot of them, and they don’t necessarily bring a lot of value to our lives. Besides, they take up some of our time, so they’re not free. But they’re a necessary first step, and having a large network of acquaintances can certainly be useful.

From acquaintance to friend

To make a friend out of a person you know, you need repeated interactions. Phrased differently, you need time and proximity. You need to share experiences and memories.

You may be lucky and develop a bond very quickly with someone, if you have exceptional compatibility. (Or if the circumstances give you lots of repeated interactions in a short amount of time, which is just a compression of the above.) But in most cases, it’ll take a while until you can call someone a friend. That’s okay. Don’t expect it to be fast. Focus on being kind and interesting — provide help, compliments, fun times — and you’ll get there.

Or you won’t, in which case the best course of action is to befriend someone else instead.

Let’s emphasize this point: you can’t force a friendship, so it’s useful to recognize when an acquaintance won’t bloom into a true friendship. One sign that your friendship won’t work is if you’re consistently the only one reaching out to the other person. Since a friendship has to be mutual by definition, if you find yourself in a one-sided situation, you should quickly move on.

Which brings us to the actionable advice in this section: do reach out! Unless you’re in the workplace or some environment where you meet the same people regularly, repeated interactions won’t happen by magic.

From Alexey Guzey’s How to make friends over the internet:

90% of meeting people is reaching out, so, unless you’re already very well-known, most of your network building will consist of actively initiating conversations.

This applies to online environments, and the 90% figure might be lower in the physical world. But the point stands: you’ll get much faster results if you try to make friendship happen rather than wait for it.

In fact, my personal experience is that the vast majority of people are really bad at reaching out. They just… don’t. Or rarely, anyway, especially with people they don’t know well yet. So if you also don’t reach out, you’ll end up becoming friends only with the small set of people who do, and it’ll take a long time.

What should you tell people when you reach out? Be kind and interesting, sure. But a more precise piece of advice is to make it easy for the person. They’re not your friend yet, so they won’t necessarily want to put a lot of effort into knowing you.

Ask them things — people love to help! — but ask them straightforward, specific things. It’s fine to suggest just to “connect” or “hang out,” but then do the work of finding a place and time, instead of hoping that they do it for you.

Especially avoid asking vague questions like “Will you be my friend?” It’s awkward, it rarely works, and it puts the onus on the other person to figure out what it is exactly that you want. Visa again:

“Can I know you?” “Will you be my friend?” “I think I’ve fallen in love with you” are all selfish, short-sighted, things to say that force the other person to do all the work of figuring out the relationship you’re trying to establish.

Instead of asking, “I’m lonely, please be my friend,” ask “I’m having a rough day, would you like to take a thirty minute walk with me tonight after work?” It’s much easier to say yes to that.


Making friends online

In the very weird but ever-growing online world, making friends also requires being kind and interesting. But these words take on slightly different meanings.

Let’s start with interestingness. To be interesting online, well, the first step is to put stuff online.

In other words, create. Write a blog, a newsletter, thoughtful comments on other people’s stuff, social media posts, anything. It doesn’t have to be writing, actually. You can make videos or post pictures or whatever you like. But you need to have an online presence. You need to exist.

Ideally, create well. If you manage to build an interesting online presence, you’ll attract people. You’ll craft your “personal bat signal” (this is a Batman reference, even though I know next to nothing about Batman) that people recognize. But don’t let the need to create “well” stop you; quality is much less important than just showing up, and showing up often.

Use social media actively, not passively. (This is also a good way to make social media have a positive rather than negative impact on your life; passive use of social media might be linked to depressive symptoms.) Yet the vast majority of people don’t really engage actively. Despite what you may think, Twitter is a good place to make friends. I can vouch for that — I started using Twitter seriously a few months ago and I have, as a matter of fact, made friends through it.

At this point, the fraction of my readers who don’t use Twitter will be thinking: wait, isn’t Twitter literally the worst place on the internet?

I don’t know. Maybe it is. But I just don’t hang around the bad parts. I keep to the parts where people are being kind.

For some reason, it seems people easily forget to be kind when they’re online. Trolls roam about, ready to insult anyone who comes close. Disagreements are pretexts for insults. We’re always a few bad comments away from a flame war.

I don’t know why we do that. Human nature, I guess. But it’s easy for you not to do it. Just be kind. Give compliments. Voice your opposition in a mild, pleasant manner. Follow the common improv advice (you may have noted this is the third improv reference in this essay; I feel that there’s a deep connection between friendship and improvisation, but that would be another essay): say “yes, and…” to validate other people’s comments before adding your input.

And then, reach out. When you’ve interacted with people that seem interesting, send them a direct message or an email. 90% of meeting people is reaching out! From Alexey Guzey again:

any platform that allows you to send private messages to people is a platform where you can make friends.

  • on reddit, when you see a comment or a post in which the author seems like somebody you want to be friends with, message them
  • on Goodreads, when you see a book review that really speaks to you, message the author and share your thoughts on it
  • on someone’s personal site, when you enjoy the writing, shoot them an email and let them know about it

The essay goes on to describe 20 examples of people Alexey Guzey met through direct messaging. (It’s also interesting because he used to not have any friends; advice from someone who had a problem and then solved it is the best kind of advice. He also recommended the book The Science of Making Friends: Helping Socially Challenged Teens and Young Adults, which I haven’t read, but which seems of great interest for people with autism, ADHD, and other such conditions.) You can go read it (it’s really good), but I’d rather you finish my essay instead, so here’s one example of how it went for me with Twitter.

There was this guy I had noticed a few times in the endless chaos of the Twitter feed, without really giving it much thought. Eventually I realized he was part of a community I’m also in (online communities are a great way to make friends!), but also, especially, that part of his Twitter bio was exactly the same as mine from before I used Twitter seriously. Which suggested we had similar interests and ways of thinking.

Armed with these two pieces of information, I followed him; he followed back, I think (I’m not sure of the exact order of these events); and then I sent him a direct message (a.k.a. a DM). I said something to the effect of “Hi! I wanted to say that the reason I followed you was [thing in common #1] but also [thing in common #2].” He replied, and then we had some small talk in the DMs, for instance about what our personal projects were. That lasted for a few days. Soon enough, we had a Zoom call. Since then, we’ve had another call, and interacted a lot through DMs and as part of the online community I mentioned. Someday, when travel is open once again, I’m sure we’ll meet in real life.

The process will never go exactly like this again. But we can abstract it into the following steps:

  • Find someone who interests you
  • Optional: interact in public (Twitter comments, forum discussions, etc.)
  • Reach out in a private channel (DMs, email, etc.)
  • Converse
  • Meet in real life, video call, etc.

Making new friends online is weird, in a way. It has been happening for a couple decades now, but we still haven’t really built norms around it. Yet, it will happen more and more. And it can happen to you, if you make it.

Aaaand I’m done with Part 2.1. I hope it will be useful to someone. Writing it has certainly been useful to me. (Here’s to hoping this very essay will land me some amazing new friends!)

Next up, in Part 2.2, we’ll cover close friendships and friendship maintenance. Feel free to subscribe to the blog or to the newsletter to make sure you don’t miss it.

Extra resources

Here are some useful links that didn’t make it into the main text:


Quite a few people provided feedback and help at various stages of the writing, including: Gregory Yang Kam Wing, Kushaan Shah, Rishi Dhanaraj, Tamara Scott, Kritika Sony, Alicia Kenworthy, Tom White, Daniel Hightower, Dan Stern, Liz Koblyk, James Quiambao, Kyla Scanlon, and Nivi Jayasekar. This list may not even be exhaustive! (Let me know if you gave me feedback and I committed the egregious error of forgetting you 🙁)


Nothing Is Inherently Obvious

You’re sitting in a math class in university. The professor is writing a proof on the blackboard.

You’re extremely focused. The logic is spelled out with perfect clarity. Each step makes sense.

Then, the instructor utters a word—perhaps “obviously” or “trivially”—and proceeds to write the last line of the proof.

You blink once.


You read the result a couple of times. You frown. It doesn’t make sense anymore. Something happened between the last two lines of the proof, but you have no idea what.

This proof wasn’t obvious to you. It certainly wasn’t trivial. You glance around, and to your relief, you see you’re not alone. Your classmates keep silent, but they look confused too.

The instructor has overestimated the obviousness of the proof.

And so does everyone, all the time, with everything.

What does it mean for a piece of information to be obvious?

It really just means that the piece of info is known. (For those who want a more precise definition of “known,” let’s say “assumed to be true”; for our purposes, it doesn’t matter if the information is false, as long as it is believed by everyone involved.) Once you learn that the sky is blue, or that the Earth is round, or that the longest-lived human was a 122-year-old French woman, then these facts sound obvious to you. No one can impress you by telling you these things anymore.

Since most things aren’t known by most people, universally obvious facts are rare. The only true candidates are probably very basic facts about human biology and the natural world, such as “I can see stuff only if I open my eyes.”

For everything else, obviousness is a function of the audience.

If you’re a math teacher at a university, you can consider “2 > 1” to be obvious. Your students know this. But some more advanced concepts won’t be that obvious to them. Since by definition you know more math than your students, many things will sound more obvious to you than to them.

To a large extent, it is your job to be aware of the gap between your students’ knowledge and your own, so that you can pick the right things to teach. It can’t be too obvious, or it’ll be boring. And it can’t rely too much on non-obvious other things, or it’ll be confusing.

To do this, you need to estimate your audience’s knowledge. Fortunately for you as a hypothetical math teacher, that’s not too hard in the context of a classroom. Standardized tests and prerequisite coursework are your allies here. Misreadings can happen, like in the math proof example, but the system is tuned to minimize them.


Now let’s generalize to the larger group of people who publicly express facts and opinions — writers, journalists, podcasters, social media users, youtubers, and other communicators. We’ll take writers as an example.

In most cases, your audience as a writer is less easy to characterize than it is for a teacher. Even if you write about a very niche topic, you’re likely to be read by both novices and experts. And if your work is less specialized, or more widely distributed, estimating your audience’s knowledge gets even harder. At worst, all you know is that they can read your language.

And when you can’t get a precise estimate, you start worrying.

You might worry about confusing your audience, if you’re trying to explain a complicated point. But another common task of writers is to find interesting things to say. In that context, the danger from overestimating obviousness changes: the risk is that you (incorrectly) decide that an idea isn’t interesting enough for you to write about.

This is something I’m often concerned with. Whenever I have an interesting fact or idea to share, I’ll have thoughts along the lines of: “This is obviously true. Why am I wasting my and my readers’ time writing about it?”

Just writing this down makes it painfully obvious that it’s wrong. (So obvious that you guys have probably all figured this out, right? Right? See, even as I write about obviousness, I worry that I’m just stating the obvious!) It’s healthy to put effort into making sure that your writing is worthy of being read, of course. But it’s also easy to err on the side of worrying too much. Of never saying anything interesting, because you’re afraid it’ll be obvious and boring.

Perhaps not every writer needs to worry about this. Maybe your problem is that you underestimate obviousness, say lots of things that are self-evident to your readers, and should consider shutting up a little.

But I think overestimation is a more widespread problem than underestimation. The reason is simple: often, the main evidence we have about other people’s knowledge is just our own knowledge. Psychology has a word for this: projection bias. We tend to project onto others and assume they are more similar to us than they truly are.

I struggle with this. So I’ve been coming up with rules to help me deal with it.

1. If you’ve never heard anyone say it, and you can’t find anything when you google it, and you share it with a professor in the relevant field and they say, “wow, I’ve never thought of this before,” then it’s not obvious

Congratulations! You’ve generated a new idea. This is extremely rare. Quick, write a paper, book, or blog post before someone else comes up with it.

2. If you’re hesitating about whether it’s obvious, then it’s not obvious

If it were obvious, you’d obviously know, wouldn’t you?

Well, not necessarily. I can come up with a contrived counterexample. Imagine that everyone in your audience has seen a tweet which you haven’t. You happen to write about the same idea, believing it’s a mind-blowing insight, and… turns out it isn’t. It was obvious to everyone, but you couldn’t know that.

Since these situations are probably very rare, I claim that my heuristic is useful.

3. If you were excited enough to write about it, then it’s not obvious

If you took the time to write something, you must have thought it was interesting. Chances are that others will, too. So, as you hover above the “Publish” button, wondering if you were really just stating the obvious, I’m here to tell you, “Click. Just click the button.”

4. If you learned it by following your own curiosity, or doing something special, then it’s not obvious

Everything you know, you had to learn.

If you learned something by following your interests, perhaps going down a Wikipedia rabbit hole, or a YouTube spiral, then it’s likely that others haven’t. If you’ve learned it doing something that most people haven’t done, like work at a super secret company or travel to Kyrgyzstan, then you can be fairly sure lots of people don’t know about that.

Either way, you’re qualified to share the knowledge with the world.

5. If you’re learning it right now, then it’s not obvious

There was a picture I saw on Twitter, but which I can’t find anymore. I’ll try to reproduce it from memory:

When you know a lot, a topic may sound obvious and boring to you. When you know very little, you won’t feel qualified. In both cases, you won’t be inclined to write about it.

The ideal is to hit the sweet spot in the middle. Guessing where you are on the curve is not easy, but a simple trick is to share as you learn. As soon as you feel you know enough, write about it, before you reach the right side of the peak. It’s almost guaranteed that many people are just to your left, and will enjoy being brought along. 

6. If it sounds obvious but you’re combining it in a non-obvious way, then it’s not obvious

Here are two obvious statements:

  • Whales are mammals.
  • Milk can be used to make cheese.

We combine these two statements, and voilà, we get something much less obvious: cheese made from whale milk is a thing (or, at any rate, a theoretical possibility). I don’t know about you, but I had literally never thought of whale cheese until I came up with this example. (To be honest, it totally sounds obvious now that I’ve stated it; passing that peak took, what, fifteen seconds? Also, I looked it up, and apparently whale milk is so high in fat that it’s almost like cheese already, and the consensus seems to be that it’s gross.)

Thus the common, but true, advice: it’s far easier to combine existing ideas than to generate new ones. Everything is a remix.

Note that the rule also applies to combining an obvious idea with a non-obvious one. The beauty here is that the non-obvious idea can be as simple as some personal story. “Love hurts” is obvious to most, but we still enjoy stories that combine it with personal details.

The real reason I wrote this essay is to deal with my own insecurity.

The next time I worry I’m writing something obvious, like the importance of friendship or the complicated origin of cakes, I’ll tell myself, “No, see, nothing is actually inherently obvious. It all depends on the audience, and you’re probably overestimating what your audience knows. So go for it!”

I’ll get it wrong sometimes, but that’s better than preventing everyone from learning something because I wrongly assumed they knew.

Even now, at the end of this essay, I have a nagging feeling that all of the above is obvious. I know it isn’t! But feel free to provide me with additional supportive evidence, if you’ve learned a thing or two from it.


Thanks for the help and feedback from Gregory Yang Kam Wing, Alicia Kenworthy, Tom White, Justus Myers, Nivi Jayasekar, Kushaan Shah, Parthi Loganathan, Rob Terrin, Jihii Jolly, Default Friend, Madison Kanna, Rohen Sood, Reza Saeedi, and Anand Mariappan.


Leveling Up the Skill of Friendship: The Why

Says the popular adage:

Give a man a fish and you feed him for a day.

Teach a man to fish and you feed him for a lifetime.

Teach a man to make friends and now all his fisherman buddies give him freshly caught fish whenever he visits them down by the pier, plus they water his plants when he’s out of town, plus they help him pull through when he’s feeling down, plus he’s also friends with the bakers so he gets fresh BREAD in addition to the fish.

At the risk of stating the blindingly obvious, friends are great. Almost any relationship, including romantic or professional ones, is better when it involves friendship.

But friendship isn’t necessarily easy or natural. Everyone is somewhere between good and bad at it.

Friendship is, in fact, a skill. Even if we rarely think of it as one.

And unless you’re especially talented, or unless you’ve deliberately practiced, there’s a reasonable chance that you’re not as good at friendship as you could be. Yet there may not be a skill more worth mastering, as the (totally authentic) proverb above illustrates.

This is an introductory essay in what I expect to be a series of posts on friendship. Here, we’ll discuss the “why”: why friendship is a skill, and why it’s an important one. At the end, we’ll briefly lay some groundwork for another essay about the “how.”

But first, a personal anecdote.

An auction of dreams

In the Quebec secondary school system, there used to be a class called “Personal and Social Development.” It was a hodgepodge of sex education, ethics, and other things neither I nor my high school friends can remember. (Though many of us remember that time the teacher brought a chocolate cake made with mayonnaise, which seemed like a mind-blowing idea at the time.)

One of the few things I do remember from that class was the “auction of dreams” we had once. The teacher would read out some ambitious life goals, like “to become the prime minister,” and we would bid points to buy them. The one dream I won was “to have a true friend.” I didn’t need to spend a great deal of points for it; my classmates seemed less interested. Perhaps because, to them, it felt more ordinary than the other dreams, or more easily attainable.

But I knew that making new friends was, for me, a slow process. That deep, true friendship was rare. I wasn’t (and am not) disastrously bad at social skills, but I’ve always felt a bit below average, just enough to often be somewhat dissatisfied with my social life. I regularly wish I were part of some cohesive community of friends.

In the narrative of my life, that auction of dreams has become a defining moment. A sign I knew I was slightly inadequate at something very important.

Usually, though, I’m doing just fine. “Usually” as in “when there isn’t a horrible virus floating around and forcing us to cancel so many of the good things in the world.” The pandemic contact restrictions feel as if someone had detonated dynamite in the fragile social life that I had been slowly and painstakingly growing for years. It’s not that I’ve lost friends. But all the contexts — work, organized activities, parties — in which we effortlessly and serendipitously spend time with people are now gone. Social interaction has become more intentional. It takes more energy.

So I’ve been sitting at home, wondering how to save what I could, and how to efficiently rebuild my social life. Then I happened upon the work of Visakan Veerasamy, better known as Visa. If your Twitter is anything like mine, you’ll know who I’m talking about. He’s all over the place.

Visa’s mission is “to build the greatest social graph of friendly, ambitious nerds that the world has ever seen.” As he writes in his book Friendly Ambitious Nerd:

Friendliness​ is about being a nourishing presence. It’s about becoming somebody who people (including yourself!) love and enjoy. It’s about creating supportive, encouraging spaces where people can feel comfortable sharing their honest feelings. Humans are a social species, we’re practically wired to desire kinship, to be part of something bigger than ourselves. So why not get good at it?

Yeah, why not? Why not see it as a skill, as something you can learn and master?

You can be good or bad at basic human needs

I can think of a few reasons why friendship is rarely framed as a skill. (It’s more common to talk about “social skills,” but my view is that this is both more general and less important than just friendship.) One is that almost everyone picks it up to some extent in their youth, usually in school. Another is that socialization is a basic human need. So we learn the art of friendship the same way we learn the art of eating: as a necessity for survival.

But even a basic need like eating involves several activities that we can be good or bad at: selecting foods to buy, meal planning, cooking, picking a restaurant, finding a balanced diet, and even just enjoying what you eat. (Some people apparently never enjoy their food and eat for sustenance only; for an example, read the beginning of this Slate Star Codex piece.)

Similarly, the skill of friendship comprises a bunch of things we can be good or bad at:

  • making new friends
  • keeping in touch; general friendship maintenance
  • providing support and value to friends
  • moving from a superficial to a deeper friendship
  • managing conflicts and unhealthy dynamics
  • ending a friendship, as we sometimes must.

Even though most people learn to do most of these things from a young age, there are signs that we’re not, collectively, that good at them. Many people are lonely, and the media regularly mentions a “loneliness epidemic” (it’s unclear that there really is such a thing, depending on whether you focus on subjective perception of loneliness or increased atomization, but the fact that we worry about it is telling). It’s not uncommon to hear friends say they “haven’t been a good friend” when you talk to them for the first time in a while. And it has become trite to state that “it’s hard to make friends as an adult.”

Of course, there are many reasons why it can be hard to make friends, or why we feel socially isolated. Lack of skill perhaps shouldn’t be seen as a cause. But surely becoming skilled at the items in the list above can be an antidote.

Now, is it worth our time, relative to other skills we could learn?

Friendship as magic

What are friends good for?

I don’t think I really need to enumerate this. But let’s do it anyway.

There is the mundane: having your fish fed when you travel, borrowing stuff you need. There’s everything related to mental wellbeing: staving off loneliness, feeling important, getting emotional support when things fall apart. And then there’s what we like to call “networking”: finding jobs, ideas, projects, things to learn, people to meet, and more friends.

Friends are also shortcuts to skills you don’t have — like fishing. It’s more common to get these shortcuts through commercial transactions, of course, by trading some skill you have for money and then trading the money for someone’s services. But friendship provides a second way to “cheat.” It can even be the skill that earns you a living, if you’re especially good at it — through sales or community organization, for instance.

One cool way of describing the benefits of friendship is in terms of luck. Luck is a word we use to describe the good things that unexpectedly happen to us. Unlike what you may think, it’s perfectly possible to increase your luck. You simply have to expose yourself to as many situations with potential good outcomes — new opportunities — as you can. (I was reading Antifragile by Nassim Taleb as I was writing this piece, and it sounds like Taleb would call this “optionality” as opposed to luck: you increase the probability of a good outcome by giving yourself more options, especially those with more potential upsides than downsides. I’m okay with calling this luck.)

And what better way to do that than interact with lots of people who like you?

So friends can provide you with material benefits. They can replace the need to learn some skills. They allow you to manipulate luck. And they do all that while making you feel great.

Life skills don’t get any more magical than that.

logo of My Little Pony: Friendship is Magic

right, I knew I’d seen this idea expressed somewhere before

Towards the how

Okay, so how do we master the skill? How do we get better at making new friends, managing friendships, being a good friend, and dealing with the unfortunate uglier parts?

This question deserves a full essay, and one that I’m planning to write very soon. (Feel free to subscribe if you don’t want to miss it.)

The central point, however, is clear. To be good at friendship, you need to be a valuable person to your friends. You can be valuable in many ways, but most boil down to two main qualities: be kind, and be interesting.

To be kind is to care about the people around you. To remember their names and birthdays. To be there when they need help. To apologize when you’ve wronged them. To have empathy.

To be interesting is to bring value to your friends. To teach them things. To bring them new encounters and opportunities. To be a good conversation partner. To nourish their minds.

How can we get better at being kind and interesting? How do we level up the skill of friendship? There are many paths. But the first step is to realize that it is a skill.


Many thanks to the amazing people who read drafts and provided feedback: Gregory Yang Kam Wing, Sarah Boudreault, Rhishi Pethe, Carolina Perez, Padmini Pyapali, Alexander Hugh Sam, Charlene Wang, Sachin Maini, Nivi Jayasekar, Snigdha Roy, Mohammed Malik, Tom White, Alicia Kenworthy, Kritika Sony, and Christian Keil.