Since prominent cases of racist and sexist "artificial intelligence" have received media attention, biased algorithms are discussed more widely. However, "bias" is sometimes understood as something that can easily be overcome by collecting more or "better" data. I want to argue why the term "bias", with its implication of potentially "unbiased" technology, is fundamentally misleading. To that end, I will connect the concepts of model and modeling from computer science with sociological insights from science and technology studies. This illustrates how individual conceptions and the necessary limitations of concrete use cases restrict the models underlying further technical development, restrictions that are rarely contested later on.
After that, drawing on the sociology of technology, I will discuss how technology shapes individual action and society at large, in order to stress the importance of evaluating and rethinking technical configurations and the specific biases and shortcomings of the models that algorithms and technologies are based on.
To illustrate how specific values shape the development of software, and how the software in turn shapes its users, I will revisit empirical examples from the Linux communities Arch, Debian, and Ubuntu. As a second case, I will discuss the model assumptions of Android permission management, which strongly frame both the possibilities available to app developers and the personal privacy management of users.
Drawing on these empirical examples, the talk illustrates the politics of technology and explains the underlying mechanisms, in order to underline the responsibility that follows from the process of construction.