Intuitively, I’ve always pictured technology as something big: cutting-edge lasers, spacecraft, etc. Probably inspired by sci-fi movies and books… Instead, after checking with my friend Wikipedia, the definition is muuuuch broader:
Technology ("science of craft", from Greek) is the sum of any techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives. […] The simplest form of technology is the development and use of basic tools.
At first glance, this definition looks a bit dull. Technology encompasses so many things. But the devil tends to be hidden in the details… Looking at some of those details is the objective of this post. Let’s take a look at the Invisible Technologies! Tum, tum, tum!🪘
Tools for management 🦴
Michel Berry, a former director of my research lab, wrote in 1983 a synthesis of his peers’ works in Technologies invisibles, where he put forward a program to study the tools used for management. Those tools can take many forms: from the ratios used in banks to the objectives set for salesmen.
The main purpose of those tools is to reduce the complexity of a situation, as individuals often don’t possess the time to study each case in detail. Hence, organisations tend to put simplifications in place:
“Abstractions of the truth”, reduced to a set of numbers (e.g. the Gross Domestic Product to represent a national economy in the international market);
“Abstractions of what is right” under the form of maxims (e.g. “more than a 2% GDP growth reflects a thriving economy”).
As the GDP indicator shows, even when those tools are criticised and proven wrong, changing them is no easy task. Those tools are embedded in a complex system of interactions between various stakeholders. To change a tool is to reopen every previously negotiated consensus. You can think of a tool as a “translation” of a situation: it needs to find the right way to represent the elements (both human and non-human) involved, to paraphrase Callon’s analysis (see previous post).
No one best way 🌻
Unfortunately, despite all the scientific positivist trends, no “one best way” has ever been found. This work requires a custom-made approach. Once upon a time, McKinsey’s selling point was: the best brains come to tackle your problem by analysing your situation without preconceived solutions. However, for many interconnected reasons this method wasn’t sustainable: preset tools are more cost-efficient, newcomers tend to bring prices down, “proven” tools tend to produce an anxiolytic effect on clients... Hence, decision-makers tend to choose “standardised” tools.
Changing premises 🍂
Even a custom-made tool, or one that fits an organisation well enough, is doomed to become outdated. The context in which the tool performed is bound to change, mostly along the following dimensions: access to information (e.g. digitalisation), institutional logics, culture (e.g. how much did covid change practices?) or individuals (e.g. millennials conceive of work differently).
This difficulty is also inherent to the organisational sciences. As situations change, previous knowledge becomes obsolete and new research is required to assess each situation. This keeps the field of research evolving. This predicament, specific to the social sciences, can be illustrated by the Red Queen's race in Lewis Carroll's Through the Looking-Glass:
"Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"
Physical laws, even left untended, still work for the purpose they were intended. (Hopefully, gravity won’t suddenly change.) For most organisational (and societal) knowledge, however, obsolescence does happen!
From scaling-up to scaling deep 🐜
As implied above, when a tool somehow works somewhere, it depends on a local logic. This hardly applies to other places as each site is unique. Yet, as individuals are looking for shortcuts and simplifications, they tend to scale those “solutions” to other places.
Several weeks ago, I attended a talk by a sociologist, Brice Laurent, who studies such phenomena. He wrote an article on the “politics of scaling”.
Taking the effects of scaling up seriously would require scrutiny of normative, epistemic and material configurations before things scale, while they scale, and after they have scaled.
His point isn't so much against scaling up as about doing it intelligently. In the talk, he also advanced the concept of “scaling deep”: the idea that a successful implementation of a technology in a local context could benefit a lot from further study (to understand why it succeeded, how it can adapt itself as the situation changes, etc.). So, instead of trying to replicate the success elsewhere, it can be worth pondering how to further inscribe this local achievement!
Back to AI 🪃
As you might have guessed, those ideas are linked to Artificial Intelligence. In short, AI tools work (when they do) under local logics, as they use data from a specific point in space and time. Hence, their scalability has to be proven case by case. Not to forget time-dependency! Yet, most promises about AI take this for granted. (This is less and less true, however, and many adopters try to get assurances on this point.)
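To make this concrete, here is a toy Python sketch (all numbers and names invented): even the simplest “model”, fitted on data from one local context, degrades badly when applied to another context where the underlying distribution has shifted.

```python
import random

random.seed(0)

def fit_mean(samples):
    """Our whole 'model': the average of the training data."""
    return sum(samples) / len(samples)

def error(model, samples):
    """Mean absolute error of this constant predictor on some data."""
    return sum(abs(model - x) for x in samples) / len(samples)

# Local data: hypothetical housing prices centred around 100 (arbitrary units).
city_a = [random.gauss(100, 5) for _ in range(1000)]
# Another place and time: same phenomenon, but centred around 160.
city_b = [random.gauss(160, 5) for _ in range(1000)]

model = fit_mean(city_a)
print(f"error at home: {error(model, city_a):.1f}")    # small
print(f"error elsewhere: {error(model, city_b):.1f}")  # large: the local logic did not travel
```

Nothing in the model is “wrong” at home; it simply encodes a local logic that has to be re-validated, case by case, anywhere else.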
Also, AI is anything but invisible. The whole hype machine exacerbates the visibility of any AI project. Yet, as Technologies invisibles shows, once in practice, it risks going unnoticed…
Takeaways 💝
Taking into consideration the invisible technologies surrounding you might be tiring. Yet, acknowledging and perceiving their influence on decision-making might help reduce their shortcomings, especially when the situation changes.
It also helps to understand why some situations, a PhD for instance, take a psychological toll on individuals. Such tools are impracticable when the context changes constantly, and their absence demands thinking things through thoroughly at each step.
Corollary for my innovator friends, and myself: if those misfit tools work, it also means a new tool doesn’t need to be 100% perfect. It never will be anyway. So, rather than focusing on being as accurate as possible while creating a tool, it can be worth lowering our expectations and instead investing in how it will actually be used in a particular context.