In my class, I sometimes like to show students a disposable plastic water bottle, one that is so thin that just holding the empty bottle dents it. I ask them whether a plastic water bottle represents technology. Slowly but inevitably we conclude that it is – perhaps not a particularly advanced one by modern standards, but certainly one that changed how humans transport and consume water and other liquids.
Next, I show them another water bottle, this one made of steel. I ask them: is technology neutral? Do these two bottles assume different lifestyles? One or two students will point out the obvious – that one is intended for reuse while the other isn't.
That is usually a good entry point for a discussion of the assumptions underlying our use of technology – each product assumes certain lifestyles and what software designers call "use cases". A thin plastic water bottle assumes a single-use scenario, while a steel one imagines that the user will carry the bottle with her everywhere she goes and that clean water will be available to refill it as needed.
I ask them the same question about apps, cars, and public transport – are these technologies neutral? It takes a while for students to realize that in each of these cases a certain type of user has been imagined. Even after 100 years of car design and manufacturing experience, car designers haven't made cars that meet women's need to put their handbags somewhere within reach. Thousands of years of clothing design have not made designers realize that women need pockets as much as men do. Apple's Health app, when first released, did not include the one thing every adult woman tracks – her menstrual cycle. And that is just a short list of issues from a gender perspective; one could add even more problematic issues relating to the usability of products for people with physical disabilities.
These "technologies" which we take for granted aren't neutral; they are manifestations of the imaginations of their designers – of who will use them and what their lifestyle is (or should be). The designers of a kitchen assume a "standard" height for cooks. The designer of a mobile phone assumes that the user is literate. And the designer of a nuclear bomb assumes that there will be a situation during war in which one side feels justified in killing millions of innocent citizens simply because they live on the other side of a political boundary.
Recently I listened to a Radiolab podcast on AI and ethics. It started with the usual trolley problem: an out-of-control train is hurtling towards five workers on a track, and they will not be able to get out of the way in time. You have the option of flipping a lever so that the train is diverted to a different track where one person is standing. Faced with this choice of killing one person or five, most people choose to kill one. To this well-known ethics problem, the podcast added a twist. Now there is a self-driving car controlled by an AI. In an unavoidable accident, the AI has to decide whom to kill – its one passenger or five pedestrians. The arithmetic hasn't changed, yet while most people want to save five over one in the trolley problem, most buyers of a self-driving car want the car's AI to protect them instead of the five pedestrians. Now imagine thousands of self-driving cars making such decisions on behalf of humanity. Car manufacturers, naturally, pay more attention to the needs of the buyer and passenger, and so current cars are being designed to save the passenger.
Ask most people if they think technology is neutral and they'll say 'yes, it is how one uses it that determines whether it is good or bad'. But modern technology is so complex that we hardly understand what we are using – whether it is the nuances of Facebook's feed algorithms, the persistent need to get more 'likes', or the ethical principles underlying self-driving cars. Even non-smart technology like the ubiquitous plastic water bottle nudges us towards a different vision of the modern lifestyle than the one we might have chosen for ourselves.
When we promote e-learning in school classrooms, we imagine that teachers are unable to teach their subjects effectively, that students can learn as well from impersonal videos as from human teachers, and that instead of investing in building teacher capacity it is easier, faster and "more efficient" to develop e-content.
When the government imagines a universal Aadhaar card linked to social security programs like the Public Distribution System (PDS), it imagines the need to track and trace every benefit its citizens receive. Aadhaar's designers imagine that there are situations in which a government needs to identify individuals, either to control access to certain services or to track their actions.
Rather than assuming that such uses of technology are inevitable, we should pause and think about the direction such technologies are nudging us towards.
When most people say 'technology is neutral', what they really mean is that the science behind the technology is universal. Broadly defined, technology is the application of scientific knowledge for practical purposes. While the science of making a plastic water bottle may be neutral, deciding that such bottles represent a desirable lifestyle and must therefore be cheaply available is not. It implicitly privileges the privatisation of the drinking water supply over the (harder) work of holding governments accountable for ensuring that the water coming out of our taps is potable.
I read with great interest the book "Geek Heresy" by my friend Kentaro Toyama, who makes a convincing argument that technology amplifies underlying social processes and realities rather than fixing them. For example, e-education amplifies existing inequities in access to education, as does e-medicine.
These days it has become fashionable to invest in agri-tech, mobile-based health services, app-enabled government petitions, and the like. Now that we have reached a point where we hardly ever interact with other people without the mediation of technology, it is important to occasionally take stock and reconsider: which technological products represent "use cases" that are desirable (piped water in every home?), which should be limited to temporary stop-gap measures until longer-term initiatives take effect (mobile health apps?), and which should be avoided completely (plastic straws?).
Given the pervasiveness of all kinds of technology around us (when was the last time you touched something not made by humans?), these questions should no longer be decided by product companies but by society. As one commentator on the Radiolab show put it colourfully, "I don't want a 20 year old wearing jeans and sipping coke at his computer deciding for all of us." I have nothing against 20-year-olds – in fact, they are the ones burdened with solving the problems my generation has created. But the commentator highlights an important point: these decisions about technology are ones we must make collectively as a society – whether they concern the use of plastic straws, the ethical principles of self-driving cars, the algorithms behind Google and Facebook feeds, or the Aadhaarification of social security benefits.