Ethics Diary: Reflections on Values, Responsibility, and the Human Sciences
This essay was written as part of the course 10,161,1.00 "Ethics, Responsibility & Sustainability in Management Research" at the University of St. Gallen. The course challenged me to confront assumptions I had carried from my technical background and to think more deeply about what it means to conduct research that takes human beings as its subject. What follows is my attempt to reconcile the precision I value as a computer scientist with the inherent messiness of studying creatures capable of choice.
Before attending these sessions, my understanding of ethics and responsibility in research was, in hindsight, rather naïve. Coming from a quantitative and technical research background, a domain where one can take refuge in the elegance of numbers, the mathematical beauty of proofs, and the precision of formal arguments, I had convinced myself that my only obligation as a researcher was methodological rigor. Be accurate, be precise, and let others interpret the findings as they will. The rest, I assumed, was not my concern. Throughout the course my stance remained broadly consistent, though I admit it gained a few nuances. The passages that follow therefore record some of my views on ethics and how they relate to the course's topics.
Day 1
The central argument of this day, that science cannot be truly value-free, initially struck me as overstated. Upon reflection, however, I started to accept its validity, though with an important qualification: values become inevitable in science specifically when it concerns itself with human beings and their individual decisions.
Consider the natural sciences in their purest form. A physicist investigating the fundamental laws of thermodynamics or the behavior of subatomic particles operates in a domain governed entirely by natural constants. These constants are universal, observer-independent, and carry no moral weight. There are no ethical dimensions to the gravitational constant. It simply is. In this sense, I would argue that science can indeed be value-free, at least at the level of describing natural phenomena. This aligns with the traditional positivist view of science as a mirror of nature, reflecting objective truths that exist independently of human interpretation (Comte, 1853).
However, the moment human beings enter the equation, everything changes. What distinguishes humans from natural constants is the capacity for decision-making, or what philosophers have long debated as free will (Kane, 2005). Natural phenomena are bound strictly to physical laws. A falling object cannot choose to defy gravity. Humans, though also bound by basic natural laws, have created a world that allows them to deliberate, weigh alternatives, and act contrary to predictions. Nature does not prevent a person from committing murder, but their values, for the most part, do. This capacity for choice based on individual traits (or, let us call them values) is, in my opinion, precisely what introduces "values" into any scientific inquiry that takes human behavior as its object.
This observation, however, leads to a practical challenge we face every day. If values are inherently individual and personal, shaped by upbringing, experience, culture, and countless other unique factors, then what happens when humans come together to work within institutions, fully exposed to the decisions of others? Any organization, whether a university, a corporation, or a research consortium, is composed of individuals whose value systems will inevitably differ to some degree. For such institutions to function coherently, they cannot simply hope that everyone's values happen to align. They must make their core values explicit and communicate them clearly. This is not merely a matter of good governance; it is a necessity. Without articulated shared values, an institution becomes a collection of individuals pulling in different directions, incapable of consistent decision-making or unified purpose.
Even once values are identified and articulated for everyone to understand (as seen in the Policy Statement on Objectivity, Independence, and Excellence, 2024), another challenge remains: values are not static. Societies evolve, moral understanding deepens, and circumstances change. What was considered acceptable a century ago may appear wrong today, and what we accept now will likely be scrutinized by future generations. Institutions cannot simply establish their values once and consider the matter settled. They must engage in something resembling continuous change management, regularly revisiting their stated values to ensure alignment with an ever-changing environment. The best an institution can hope for is a core set of values that most members can support, while acknowledging that complete consensus is neither achievable nor perhaps even desirable. This is not merely a practical difficulty but a theoretical one. Arrow's Impossibility Theorem demonstrates that no aggregation method can perfectly translate individual preferences (which values essentially are, since they are not natural laws) into a collective decision while satisfying basic fairness criteria (Arrow, 1951). In other words, computer and information science might already hint that the messiness of institutional value-finding is not a failure of effort or goodwill, but a structural feature of collective decision-making, almost a natural law governing human coordination, to which we are all exposed. If perfect consensus is mathematically unattainable, then the only viable response is continuous discourse. Human collaboration, in my opinion, therefore requires an ongoing process of negotiation, revision, and realignment rather than a search for definitive answers that will never come. This is, perhaps, what fundamentally sets the human sciences apart from the natural sciences.
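To make this concrete for myself, the small sketch below reproduces the classic Condorcet cycle that motivates Arrow's result. The voters and the value names ("rigor", "relevance", "transparency") are entirely hypothetical, not taken from any real institution; the point is only that three perfectly consistent individual rankings can produce contradictory pairwise majorities.

```python
# A minimal, hypothetical sketch: three voters rank three institutional values,
# yet pairwise majority voting yields a cycle with no stable collective ranking.
from itertools import combinations

# Hypothetical voters, each ranking the values from best to worst.
rankings = {
    "voter_1": ["rigor", "relevance", "transparency"],
    "voter_2": ["relevance", "transparency", "rigor"],
    "voter_3": ["transparency", "rigor", "relevance"],
}

def majority_prefers(a, b):
    """Return True if a majority of voters rank value a above value b."""
    votes_for_a = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes_for_a > len(rankings) / 2

for a, b in combinations(["rigor", "relevance", "transparency"], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"majority prefers {winner} over {loser}")

# Output: rigor beats relevance, relevance beats transparency,
# and transparency beats rigor -- every individual is rational,
# but the collective preference is circular.
```

The cycle is exactly the kind of structural obstacle I mean: no amount of goodwill among the three voters removes it, which is why continuous discourse, rather than a one-time vote, seems to me the only workable response.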
This shift from seeking "final answers" to engaging in "continuous discourse" redefines, for me, the nature of responsibility within research. In the positivist tradition I described earlier, I saw the responsibility of science as a narrow technical duty: be precise, be accurate, and let others worry about the rest. However, if we accept that the decisions we make during human-centered research are inherently value-laden (which variables to prioritize, which inductive risks to accept (Douglas, 2000), or even which effects we choose to explore), then responsibility, for me, expands into a form of stewardship. It is the commitment to be transparent about the normative choices we make behind the veil of methodology. Then, I would argue, we can move our (or rather my) mindset away from the "safety" of numerical neutrality and transition from being mere observers to moral agents who are accountable for how our models shape social reality.
Ultimately, this is where the human sciences and the natural sciences part ways for me. The physicist can, in principle, arrive at a final answer; the gravitational constant will not change its mind. But any science that takes human beings as its object must accept that its subject matter is perpetually in flux, shaped by the very values it seeks to study. It is this inescapable entanglement, I would argue, that ultimately validates the claim of value-laden science and stewardship, not as a corruption of objectivity, but as an honest acknowledgment of what it means to study creatures capable of choice.
Day 2
Building on my thoughts from Day 1, that values are not constants, that they are individual, contextual, and constantly evolving, Day 2, with Ghoshal, introduced another phenomenon: values (which in turn influence decisions) respond to attention, again because of their deep roots in individual decision-making.
Ghoshal (2005) argues that management theories do not just describe managerial behavior; they may even shape or legitimize it. If the theories we derive are active forces that influence behavior, then teaching values is never a simple communication of them, but an intervention in the whole system. When we articulate values in our textbooks, classrooms, and corporate training programs, we participate in constructing "what will be" rather than "what already is". Economists call this phenomenon reflexivity (Soros, 2013): observation or prediction changes the very thing being observed. Goodhart's Law puts it simply: "When a measure becomes a target, it ceases to be a good measure" (Goodhart, 1975). The moment we codify a value into a metric, a curriculum, or an institutional policy, we create incentives to optimize for the measure rather than the underlying phenomenon it was meant to capture.
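A toy illustration of this dynamic, with entirely made-up numbers and no claim to empirical realism, might look as follows: a proxy score tracks the underlying quality only as long as nobody targets it, but once effort shifts from improving quality to inflating the measure, the two quantities decouple.

```python
# A minimal sketch (hypothetical numbers) of the Goodhart dynamic:
# the proxy keeps rising while the quality it was meant to capture falls.
import random

random.seed(0)

def observe(quality, gaming_effort):
    # Proxy = noisy function of true quality plus whatever is spent on gaming it.
    return quality + 2 * gaming_effort + random.gauss(0, 0.5)

budget = 10.0
print(" gaming  true_quality  proxy_score")
for gaming_effort in [0, 2, 4, 6, 8, 10]:
    quality = budget - gaming_effort   # effort diverted to gaming no longer improves quality
    proxy = observe(quality, gaming_effort)
    print(f"{gaming_effort:7.1f} {quality:13.1f} {proxy:12.2f}")

# Once the measure becomes the target, the printed proxy climbs steadily
# even as the underlying quality drops to zero.
```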
Coming from computer science, I have encountered reflexivity in many forms, and this course made me think about how those lessons might translate into the realm of managerial research. As an example, within distributed computing, the concept of load balancing faces a similar problem: if all nodes receive identical information about which server is least loaded and all redirect incoming traffic to it simultaneously, that server immediately becomes overloaded. The information, meant to optimize the system, destabilizes it. Engineers learned to introduce deliberate randomness and delayed information to prevent these oscillations. Recommendation algorithms present yet another variant: systems designed to predict user preferences end up shaping those preferences, thereby creating the very patterns they were meant to detect. The prediction becomes self-fulfilling, and the system loses its capacity to distinguish between what users want and what the algorithm has taught them to want.
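To illustrate the herding effect, the toy simulation below contrasts a naive policy, in which every client targets the currently least-loaded server, with a randomized "power of two choices" policy. The server count, client count, and starting loads are invented for the example; the randomized policy stands in for the deliberate randomness mentioned above and is not meant as the definitive fix.

```python
# A minimal sketch, assuming a toy setup of 3 servers and 300 clients per round.
import random

SERVERS, CLIENTS, ROUNDS = 3, 300, 5

def simulate(policy):
    loads = [100] * SERVERS          # hypothetical starting load per server
    history = []
    for _ in range(ROUNDS):
        new_requests = [0] * SERVERS
        for _ in range(CLIENTS):
            new_requests[policy(loads)] += 1
        loads = new_requests         # next round, everyone sees this snapshot
        history.append(list(loads))
    return history

def greedy(loads):
    # Every client sees the same snapshot and picks the least-loaded server.
    return loads.index(min(loads))

def two_choices(loads):
    # "Power of two choices": sample two servers at random, pick the lighter one.
    a, b = random.sample(range(SERVERS), 2)
    return a if loads[a] <= loads[b] else b

print("greedy:     ", simulate(greedy))       # all 300 requests pile onto one server each round
print("two choices:", simulate(two_choices))  # load stays roughly evenly spread
```

The greedy run oscillates exactly as described: the shared information turns every client into the same client, and the "optimal" server collapses under the stampede, while a little randomness keeps the system stable.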
For me, these are not isolated phenomena from an unrelated field. I would consider them another manifestation of the fundamental difference between natural and human systems, surfacing in a discipline that, like few others, must deal with both natural laws and human behavior. Gravity does not change when we publish equations about it. Human behavior does, and so does information traffic.
In my opinion, this creates a recursion for management education. If values evolve through continuous discourse, as I argued in Day 1, then codifying them into teachable frameworks risks freezing them at a particular moment. What was once a living negotiation becomes a fixed curriculum. Students learn which values to hold rather than how to engage in value discourse.
How, then, can we navigate this? Computer science might offer a partial answer. It is not a full solution, but perhaps a coping strategy. Engineers building distributed systems learned that they cannot eliminate reflexivity; they can only manage it through constant monitoring, feedback loops, and something they had to adopt out of necessity given the rapid pace of development: the capability and willingness to adjust as conditions change. In my opinion, if Goodhart's Law is inescapable, the only viable response is to accept its implications and build continuous evaluation into the system itself. We must treat our measures, our theories, and our curricula not as settled truths but as provisional instruments, always subject to reassessment in light of the behaviors they produce.
Tsui (2016) argues for responsible research that serves both scientific rigor and societal usefulness, oriented around real problems rather than literature gaps. This is, in essence, a call to resist the Goodhart trap: to stop optimizing for publication metrics and start asking what genuinely matters. There is something almost engineering-like about this orientation for me. Engineers, by definition, are problem-solvers who apply scientific knowledge to practical ends. Petroski (1992) even argues that their professional identity is bound not to theoretical elegance but to whether things work in the real world. Yet adopting an engineering mindset in the human sciences, as compelling as it sounds to me, raises its own difficulties. When system engineers introduce information delays or randomness to manage reflexivity in distributed systems, no ethical alarm bells ring. These are technical objects, indifferent to manipulation and bound only to the natural laws of physics and information. But humans are not servers. The same interventions that stabilize load-balancing systems become ethically fraught when applied to people. Deliberately delaying information, introducing asymmetries, or designing choice architectures that steer behavior may be effective, but within the human sciences they raise questions about autonomy, consent, and manipulation. I do not know a solution to this dilemma, but the discussion has left me with the realization that even if I can model human behavior with the precision of distributed systems within my research, I have yet to find a way to reconcile the efficiency of those models with the ethical requirement to treat every 'node' as a self-determining agent.
Ultimately, my journey through these two days has not led me to a definitive set of moral answers, but rather to a more rigorous understanding of the question. I leave realizing that in the human sciences the "proof" is never final, as the variables have the capacity to read the paper and then change their behavior. If I see my research as an intervention, then our primary responsibility is not just the accuracy of the model, but the integrity of the discourse it creates. We must move from an "Engineering of Control", where we treat humans as nodes to be load-balanced, to an "Engineering of Stewardship", where we design systems that enhance rather than bypass human agency.
In the end, perhaps the "messiness" I once sought to avoid is the most vital part of our work. By acknowledging the inescapable entanglement of values, we stop hiding behind the safety of numerical neutrality. We accept that we are not mere observers of a static reality, but architects of a social one.
References
- Arrow, K. J. (1951). Social choice and individual values. New York: Wiley.
- Comte, A. (1853). The positive philosophy of Auguste Comte (H. Martineau, Trans. & Condensed). London: John Chapman. (Original work published 1830–1842)
- Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67(4), 559–579.
- Ghoshal, S. (2005). Bad management theories are destroying good management practices. Academy of Management Learning & Education, 4(1), 75–91.
- Goodhart, C. A. E. (1975). Monetary relationships: A view from Threadneedle Street. In Papers in Monetary Economics (Vol. 1). Sydney: Reserve Bank of Australia.
- Kane, R. (2005). A contemporary introduction to free will. Oxford University Press.
- Petroski, H. (1992). To engineer is human: The role of failure in successful design (1st Vintage Books ed.). New York: Vintage Books. (Original work published 1985)
- Soros, G. (2013). Fallibility, reflexivity, and the human uncertainty principle. Journal of Economic Methodology, 20(4), 309–329.
- Tsui, A. S. (2016). Reflections on the so-called value-free ideal: A call for responsible science in the business schools. Cross Cultural & Strategic Management, 23(1), 4–28.