Moral Dilemma in Specific Major

Oct 7th, 2024 (edited)
Moral Dilemma in the Field of Computer Science: Algorithms

In the world of social media, “algorithm” refers to the process by which content is automatically filtered, censored, and promoted on a platform. These algorithms predict which content will keep users on the platform “by collecting user-provided information and the constant monitoring of the online activities of these users” (Presuel & José). However, a moral dilemma arises for the developers of such algorithmic tools when members of vulnerable groups engage with and upload content that places them at risk of predatory marketing.

In this dilemma, I will imagine myself as a freelance software developer contracted to create an algorithm that promotes semaglutide to users who may be interested in it. Semaglutide is an injection used to treat diabetes by triggering an insulin response in the body, which lowers blood sugar; however, it has the side effect of suppressing appetite (Surampudi), which makes it an attractive medication for weight loss. My algorithm will therefore display targeted advertisements to users who search for information relating to diabetes, obesity, and weight loss. Based on trial runs and field research, I learn that my algorithm displays these advertisements to online communities dedicated to the discussion of eating disorders such as anorexia nervosa, bulimia nervosa, and binge eating disorder, and that it proves wildly successful in selling semaglutide to these users. My moral dilemma, then, is as follows: either continue to run this algorithm, targeting users who suffer from eating disorders, or exclude these communities from the spaces where my algorithm may display these advertisements. For the sake of exploring this dilemma, the following conditions hold: I am contractually allowed to exclude these communities at my discretion, and doing so will not result in my employer terminating the contract; and it is known for certain that the users purchasing semaglutide through advertisements on these forums will abuse the medication to the detriment of their health (on the basis that their disorders cause such behaviours), thereby making this algorithm a predatory advertiser. To settle this dilemma, I decide to assess it using Utilitarianism and the Second Formulation of the Categorical Imperative.

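The targeting and exclusion logic described above can be sketched in a few lines of Python; the keyword set, the community name, and the `User` shape are all hypothetical illustrations for this thought experiment, not any real platform's API:

```python
from dataclasses import dataclass, field

# Hypothetical target keywords and excluded communities (illustrative only).
TARGET_KEYWORDS = {"diabetes", "obesity", "weight loss"}
EXCLUDED_COMMUNITIES = {"eating-disorder-support"}  # assumed community name

@dataclass
class User:
    searches: set[str]
    communities: set[str] = field(default_factory=set)

def should_show_ad(user: User) -> bool:
    """Show the semaglutide ad only to users whose searches match a target
    keyword and who do not belong to an excluded community."""
    if user.communities & EXCLUDED_COMMUNITIES:
        return False  # the discretionary exclusion the contract permits
    return bool(user.searches & TARGET_KEYWORDS)
```

Under this sketch, a user searching for “weight loss” from within an excluded community receives no advertisement, while the same search elsewhere does; the dilemma is precisely whether that `if` clause should exist.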
According to utilitarian doctrine, the morally correct course of action is the one that maximises utility, that is to say, “[the] course of conduct that will promote the greatest amount of happiness for all those who will be affected” (Rachels 81). To calculate the expected utility of any given action, we estimate the expected pleasure and subtract from it the expected pain, as those are viewed as the only intrinsic good and evil, respectively. In this dilemma, I must consider the pleasure and pain inflicted upon three parties: my employer, the disordered users targeted by my algorithm, and myself. If I let the algorithm run as is, my employer will gain pleasure, as targeting this community will earn their company the greatest profits. The users themselves will experience pleasure as well, since they will satisfy their desire to lose weight, but they will ultimately experience greater pain from the side effects of abusing the medication, such as nausea, vomiting, and diarrhoea (Smits & Raalte), as well as the symptoms of malnutrition, such as reduced cardio-respiratory, gastrointestinal, muscular, and immune function (Saunders & Smith). (Since semaglutide causes weight loss by suppressing the patient’s appetite, it is reasonable to assume that the symptoms of malnutrition experienced by individuals with disordered eating habits are directly tied to their use of the medication.) As for myself, the developer, I would experience only pain, from the guilt of knowingly targeting vulnerable people, and no pleasure (since excluding these communities would not breach the contract, I stand to gain nothing from letting the algorithm run). Overall, the suffering experienced by the targeted users as a direct result of my algorithm advertising semaglutide to them is greater than either their momentary pleasure of satisfying the desire for weight loss or my employer’s pleasure of increasing their profit margins.
Conversely, if I choose to exclude these users from my algorithm, the pain that my employer would experience from the reduction of their profit margins is outweighed by the pleasure I would experience from having preserved the health of vulnerable users (in this case, the users remain unaffected and cannot be included in the calculation of utility). Therefore, between these two actions, the one that maximizes utility is excluding vulnerable communities from my algorithm.

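The hedonic calculus above can be made concrete with a toy calculation; the numeric magnitudes are entirely assumed placeholders, chosen only so that their relative ordering mirrors the argument (the users' long-term harm outweighs the combined pleasures of letting the algorithm run):

```python
def utility(pleasures: list[float], pains: list[float]) -> float:
    """Expected utility = total expected pleasure minus total expected pain."""
    return sum(pleasures) - sum(pains)

# Option A: let the algorithm run as is.
run_as_is = utility(
    pleasures=[3.0,   # employer's increased profits
               2.0],  # users' momentary satisfaction of the desire to lose weight
    pains=[8.0,       # users' long-term harm from abusing the medication
           1.0],      # my guilt as the developer
)

# Option B: exclude the vulnerable communities (users unaffected, so omitted).
exclude = utility(
    pleasures=[2.0],  # my relief at having preserved vulnerable users' health
    pains=[3.0],      # employer's reduced profit margins
)

print(exclude > run_as_is)  # prints True: excluding the communities maximizes utility
```

Whatever specific magnitudes one assumes, the conclusion holds so long as the users' suffering under option A exceeds every other term, which is exactly the premise of the argument.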
The Second Formulation of the Categorical Imperative states that one must “treat humanity [...] always as an end and never as a means only” (Rachels 131). This imperative prohibits me from disregarding a person’s ends while using them as a tool to achieve my own. In this dilemma, choosing to exclude these persons from my algorithm’s target range would serve my own ends, by absolving me of guilt over the long-term consequences these users would suffer as a result of my algorithm, at the expense of these individuals’ end, which is to lose weight. This action would therefore be forbidden under Kant’s Second Formulation of the Categorical Imperative, as I would be using other rational persons as a mere means to assuage my own guilt. On the contrary, allowing my algorithm to target them would align with this formulation, as it would respect the agency of these individuals and treat them as ends to aid, rather than as means only.

In conclusion, this dilemma exemplifies how the Utilitarian approach differs from the Kantian one: the former is consequentialist in nature, concerning itself with maximizing the utility of any given action’s consequences, while Kant’s approach is deontological, prohibiting any action which, in principle, denies a rational person the agency to choose their ends freely.