
Engineering Ethics - A reflection of my ethics class - E299

  • Writer: Pradyot Bathuri
  • Oct 17
  • 4 min read

Updated: Oct 21


When I first enrolled in E299, I was aware of the corporate ethics scandals that occasionally make the news, and I often read articles about the trials and how they were handled, but they felt distant from my perception, too far away to really affect my day-to-day life. As for the codes of ethics most organizations maintain, I took a cynical view: many seemed like densely written documents meant to show "they comply" and to serve as a defense in a court of law. That is, rules become rules based on that instant's interpretation of the written text, but companies, especially upper management, rarely follow them internally while chasing innovation and profit.



While developing the Ethical Use of AI Guidelines for higher education and working on an ethics extension for Chrome, one that calculates energy consumption and enhances prompts to discourage the repeated use of AI by students for menial tasks that hinder their critical thinking, I began looking more deeply at how the rules we impose on ourselves, restricting our actions in certain directions for social purposes, lead to a united front of advocating for and following those rules, and to more intentional building of rule-compliant solutions. Ethics isn't about restriction; it's about conscious responsibility in every technical decision. This project gave me a clearer sense of what it means to be an engineer, especially in a world after 2022, where the widespread use of generative AI blurs the line between ethical conduct, innovation, and accountability.
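To make the idea concrete, here is a minimal sketch of the kind of logic such an extension could use: estimate a prompt's energy footprint from token counts and nudge the user when usage crosses a threshold. This is illustrative only, not the actual extension code; the per-token figure and the threshold are hypothetical placeholders, since real values vary widely by model and data center.

```python
# Hypothetical per-token energy figure (Wh per token) -- a placeholder
# for illustration, not a measured value.
WH_PER_TOKEN = 0.0003

def estimate_energy_wh(prompt_tokens: int, response_tokens: int) -> float:
    """Rough energy estimate, in watt-hours, for one prompt/response exchange."""
    return (prompt_tokens + response_tokens) * WH_PER_TOKEN

def nudge_message(energy_wh: float, threshold_wh: float = 0.5) -> str:
    """Return a gentle reminder once estimated usage crosses the threshold."""
    if energy_wh >= threshold_wh:
        return (f"This session used ~{energy_wh:.2f} Wh. "
                "Could this task be done without AI?")
    return f"Estimated usage: ~{energy_wh:.2f} Wh."

# Example: a 200-token prompt with an 800-token response.
print(nudge_message(estimate_energy_wh(200, 800)))
```

The design intent is a nudge rather than a block: the goal is to make the environmental cost visible at the moment of use, in the spirit of the "Be Mindful of Energy Usage" guideline discussed below.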

 

Personal Learning and Growth

 

Previously, I thought AI ethics was mostly about plagiarism checks or avoiding ChatGPT for homework. I wasn't yet thinking about the deeper problems like privacy, data ownership, and environmental costs, and how each ties into professional responsibility. I was aware of IP rights and how international IP rights are often exploited, especially in software. With AI's widespread use and incredible pace of development, however, it seems only a matter of time before almost anything can be recreated or obtained, irrespective of IP rights.


Before, it was human solidarity that limited the social groups forming around these paths, like the dark web; but with the proficient use of AI, one person can match the automation of a highly efficient team. With such rapid developments and rising issues, I believe ethical engineering in the future will have to be proactive rather than reactive.

 

Connecting Theory to Practice

 

Utilitarian and duty-based reasoning are what drive many individuals to choose paths that might be considered broadly unethical. But what counts as ethical is simply what a group of humans has defined; those people do not have all-encompassing knowledge, yet they hold the belief that their definition serves the best interest of the majority, which is itself a source of bias. So, in a way, it is counterintuitive to treat AI ethics as fixed guidelines; I would rather interpret them as best social practices. For example, our rule to "Disclose AI Assistance" reflects duty-based ethics, which argues that the action is inherently right if disclosed and inherently wrong if not, regardless of the positive or negative outcome. This is not a matter of fairness but of what is considered socially just. Yet where in this world do we actually encounter justice on an international scale, or even on a door-to-door basis? Everyone, apart from places like Bhutan that enforce or practice a uniformity of class, is unequal.

 

But it is the duty of a student to be transparent, to try to keep things socially just, and to follow best learning practices in this changing tide. "Be Mindful of Energy Usage" is another such example; it is utilitarian in nature and beneficial to follow, something socially good. We often cross-referenced the NSPE Code of Ethics, which emphasizes public welfare and honesty, and asked questions like, "How would an engineer's ethical code apply to a student using ChatGPT?" or "What's the social responsibility behind an AI prompt?"



A challenge was distinguishing between assistance and authorship. Is Grammarly's LLM assistance or authorship when it fills in sentences at a very broad scope, serving a large audience with generalized suggestions, while an author rushing to finish a work might lose its originality? Will we ever have a J.K. Rowling or a Rick Riordan again? I would argue that any tool using even the most minimal AI acts as a collaborator, another person sitting beside you, influencing your thoughts and actions, rather than a tool that merely augments your own thinking. However, this is a very interchangeable card; an AI might just as well be used as a tool, and we will not know until neuroscience has advanced to the level of reading the mind's mechanism of thinking. This tension shaped our guideline "Never claim authorship of AI outputs," marking the boundary between human creativity and machine generation.

 

 

Engineering Ethics and Professional Responsibility

 

Acting responsibly as an engineer in an AI-driven world means understanding that design choices ripple out to users, ecosystems, and social structures, and can dynamically change them. Every idea you put forward can be monumentally impactful in the right place, at the right time, among the right people.

 

Bias recognition and societal foresight play a critical role for a future AI engineer who is entrepreneurial in nature. For instance, when we discussed "AI ≠ Healthcare Professional," we weren't just advocating against misinformation; we were also emphasizing the irreplaceable value of human judgment, and the bias and intuition that enter the equation.

 

Recommendations and Future Implementation


Our guidelines did not go deep enough into the more critical problems of AI usage; that would require more scrutiny and critical analysis than a two-week project allows. I believe future iterations could go deeper into case-based learning. For example, analyzing real university incidents, such as data breaches or AI-assisted cheating cases, would ground theoretical ethics in context. Quantifying impact, linking data to ethics, would make sustainability tangible.

 

Also, I think IU should publicly share student-developed AI guidelines on its website and in its orientation materials. Over time, these guidelines could evolve into an official document.

 

Conclusion

 

This project redefined how I perceive ethics: not only as limitations but as engineering parameters for building a just (on an individual basis), sustainable (from a world perspective), and human-centered world of creativity. Ethical reasoning is essential but subjective from person to person. We should develop a new true mark of professionalism, one bound by ethical, social, and private goals, and be thoughtful and strategic about how we present and deploy our ideas.

 
 
 
