In the top-down model of meta-learning, you go from basic to advanced. In the bottom-up model, you go from special case to general law. In the rhizomatic model of meta-learning, you go from anywhere to anywhere.
Let me explain.
I was first introduced to “rhizomes” by Venkatesh Rao, in a Breaking Smart newsletter called “Frankenstacks and Rhizomes”. From the start of the newsletter:
“1/ Consider the difference between an onion and a piece of ginger. The ginger root is the motif for what philosophers call a rhizome. The onion for what they call an arborescence.
2/ With an onion, you can always tell which way is up, and can distinguish horizontal sections apart from vertical sections easily by visual inspection.
3/ With a piece of ginger, there is no clear absolute orientation around an up, and no clear distinction between horizontal and vertical.
4/ According to the linked Wikipedia entry (worth reading), a rhizome “allows for multiple, non-hierarchical entry and exit points in data representation and interpretation.”
5/ If you tend to use the cliched “hierarchies versus networks” metaphor for talking about old versus new IT, you would do well to shift to the rhizomatic/arborescent distinction.
6/ Both onions and ginger roots show some signs of both hierarchical structure and network-like internal connections.
7/ The difference is that one has no default orientation, direction of change, or defining internal symmetries. Rhizomes are disorderly and messy in architectural terms.”
In a rhizomatic structure, relationships are implicit, ill-defined, and constantly sprouting, dying and shifting. For example, think of how the brain works. There are many different sections, which perform many different functions. These sections are composed of billions of neurons which each communicate, via axons and synapses, with thousands of other neurons across the brain. That’s what makes the brain so hard to understand and map, and it’s also one of the reasons people cite for the improbability of building an AI with human-like, or super-human, intelligence: the brain is just so damn complex.
Now, the reason I’m describing rhizomes is because they came to mind when I was trying to learn about two admittedly complex subjects: cryptocurrency (and consequently, the blockchain) and market crashes. I tried the top-down method of meta-learning—organising the disciplines into basic topics, mastering them, and progressing on to the more advanced ideas. That didn’t work. I also tried the bottom-up method—I looked at specific scenarios and tried to pull higher-level insights from them. Nope. Pretty soon I realised that it didn’t really matter where or with what I started; I was going to have to thrash about and tread water for a while before I was able to make sense of it all.
To make this point visually: I had assumed that blockchains and market crashes, and the theories and ideas that held them up, were structured like this:
A rhizomatic domain is a domain which is heavily inter-dependent. In other words, it’s a network of dependencies where one node is tied to many other nodes. This means that it’s near-impossible to isolate (learn about) one node without having to consider many others. Because of this inter-dependency it’s hard to see which particular nodes matter and which ones don’t, so you have to go with the assumption that any node could matter under particular circumstances.
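You can make this concrete with a toy sketch. Below is a hypothetical dependency graph for a rhizomatic domain—the topics and edges are purely illustrative, not a real curriculum—and a short breadth-first walk showing that trying to “isolate” one node drags in nearly the whole graph:

```python
from collections import deque

# A hypothetical dependency graph for a rhizomatic domain. Each topic
# lists the other topics you'd need to touch to make sense of it.
# (Topics and edges are illustrative, invented for this sketch.)
DEPENDENCIES = {
    "blockchain": ["cryptography", "consensus", "incentives"],
    "cryptography": ["number theory", "hashing"],
    "consensus": ["networking", "incentives"],
    "incentives": ["game theory", "market crashes"],
    "market crashes": ["incentives", "herd behaviour"],
    "herd behaviour": ["game theory"],
    "number theory": [],
    "hashing": [],
    "networking": [],
    "game theory": [],
}

def topics_dragged_in(start):
    """Breadth-first walk: every topic transitively reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        for dep in DEPENDENCIES[queue.popleft()]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Pulling on one thread unravels most of the graph.
print(sorted(topics_dragged_in("blockchain")))
```

In this made-up graph, starting from “blockchain” reaches all ten topics—which is the rhizomatic property in miniature: there’s no clean sub-tree you can master first, because every entry point eventually leads everywhere else.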
So, when faced with the task of learning about a rhizomatic domain, we’re left with two choices. We can try to deconstruct and plan our way through the rhizome—an impossible task; ask neuroscientists—or, we can say “Fuck it” and jump in, hoping we can figure out how to swim whilst we’re in the water.