Chemistry lessons. Apart from the explosions, fire, smells and my secondary school teacher’s classic lines and asbestos hands, one thing sticks with me. The greater the insight, the greater the ambiguity. First we learned about molecules. What? Tiny things that make up everything? Sounds far-fetched, but I’m heavily incentivised to concur. Tiny bloody things. But not the smallest. Because then, perhaps a year later, we learn that they’re not the smallest things. They’re in fact made up of atoms. Eh? Smaller still? Well, again, far-fetched, more so even, but OK. We continue. Then we learn that not only are atoms not the smallest, but they’re made up of three types of standard equipment – protons, neutrons and negatrons, I mean electrons. And they’re all the same. And they’re tiny. And the core of an atom, the nucleus, is crazy wicked heavy even though it’s tiiinyyy. And the electrons move in circles…pause…no, ellipses. What the actual hoopla is this jiggery-pokery, this ballyhoo, this sorcery of the mind? Well, it’s complexity at work. It’s as close to a parable as I’ll ever write. And now some cheeky jokesters wanna talk about quantum mechanics, where particles’ locations relate to probability. It’s totally beyond me, if you hadn’t already guessed by that pitiful attempt at a description.
But why am I spouting all of this tomfoolery? Well. Because I’ve written a lot about complexity. A lot about how I see everything as part of an effectively infinite connected web of movement and interaction. And then I talk about simplistic models. And that juxtaposition needs addressing. I think they can sit together, and I’ll try to explain how I hope they can, and how I see them too often being treated as effectively mutually exclusive.
As I hope I’ve clearly implied or explicitly discussed, the need for models essentially stems from our limitations as computational devices. We can’t hold that much complexity at once. We need it sorted, packaged, labelled. Then we can figure out where that information should go, as a grouping. We resultantly make informed but mildly homogenising decisions every time this is done. When moving house, the most organised of us will sort items perhaps by theme, perhaps by room. Because moving everything individually is hugely laborious, so, out of pragmatism, we organise, group, label. We categorise. And that means that each box, as a grouping, arrives at the best destination. Knives end up in the kitchen. Books in the living room. Lovely. Here’s where I’m gonna get fruity.
The problem I have is that conceptually, unlike moving house, we keep things firmly in their boxes, their labels and their categories. To continue the analogy: the great thing about moving house is that you then unpack these items into their specific places. They can move freely from room to room, temporarily or more permanently. Their packaging was for a purpose. Once that purpose is served, they are no longer just a part of the kitchen box. They are only temporarily a target of homogenisation, only temporarily a member of a reductionistic categorisation. They are then freed from that paradigm and, as such, treated with respect as individual items.
When it comes to analysis, both formal and informal, I think this unpacking is rare. There is a justification for the categories, and they are held dear, allowing temporary pragmatism to bleed into longer-term reductionism. And that’s just not fit for purpose and, for me, it’s not good enough. I hope that any model or method I provide is only temporary in its pragmatism, and specifically that they are seen as categories that operate as a layer on top of complexity, and not a replacement for it. Consider scattering rice on a table and placing cookie cutters over it. Those cookie cutters act as groupings, as categories. What I see most people doing is scattering the rice, placing the cookie cutters and then wiping away the rice, keeping only the cookie cutters in place. A more helpful approach is trying to hold both in parallel and seeing categories for what they really are: artificial tools of simplification, so that we can compute large amounts of information.
Let me give another example. Mental health. Categorisations, specifically diagnoses, operate similarly to the cookie cutter / rice approach. They are human-made – and therefore artificial – groupings of symptoms as we can perceive them. Everyone’s mental health is a myriad of factors and, like everything, is complex, dynamic, pluralistic and relational. But to ease analysis, to provide more focused research, to be more prescriptive about treatment, we use these categories. Again, it’s pragmatic. But if we focus too much on the cookie cutters, the categories, we start to blur out the rice. Seeing individuals as individuals, and understanding their relation to categories: that makes sense to me. Seeing them merely as shapes to be placed within singular categories: that’s a concern.
I’m not sure how much that needs to be brought back to any industry-specific context or category; however, if nothing else it provides some of the underpinning thinking behind a few of these rants. Such as Vulnerability Criteria and other such categories that perhaps were once designed to illuminate topics, but now seem mainly to cast shadows. As Virginia Woolf wrote: “A light here required a shadow there”. Whether it is intentionally required is another conversation, but the simplest interpretation has the most expansive application. It’s a lesson of consequence and causality. So yes, this comes back to Newton’s third law of humanitarianism – of Taoist proverbs – of the need to consider the practically negative impacts of principally positive intentions. It’s all starting to come together. And I do love it when a plan comes together.