
The Ethical Dilemma of Artificial Intelligence

December 23, 2024

This blog is also available on my Medium page!

"AI can do a lot of our tasks, but they can't do 100% of our jobs," says the CEO of NVIDIA, Jensen Huang. A common misconception about artificial intelligence is that it will reduce the majority of the job market in the future or AI will replace human interaction with a much more robotic and stale interaction, but the truth is, AI is so much more than that. There's no denying that AI agents that have been established until now certainly have a lot of potential to offer, and at times, their implications can sometimes feel daunting. Its impact on our society has developed rapidly; from the medical aspect to the educational aspect, or even the political aspect, it's a game-changing promise to improve efficiency, accelerate research and development, and lower costs, and it will continue to grow as the world goes. But AI is not only a tool; AI will also determine the shape of our world, involving our privacy, security, morals, equality, and even our ethics. In the context of AI, ethical dilemmas are one of the key issues and should be our primary concern when establishing it in the real world. Will AI be able to control the economic system equally or unequally created by the machines? Will it be able to affect our behavior and interaction by talking to chatbots on your phone screen? Or maybe AI will be dumber than we thought and end up causing chaos in our future society. The main thing is that whether or not AI will replace your job, we must consider the following aspects of the ethical dilemmas first before thinking of anything else. There are four main ethical questions that I personally think are pivotal in determining the ethical state of AI. First, in terms of fairness, how can we ensure AI models trained on human data remain objective and unbiased? Second, in terms of accountability, the more impact AI has in our lives, the more critical it becomes for us to establish a clear line of responsibility. For example, who should be held accountable when automated healthcare misdiagnoses a patient and ends up risking the patient's life? Third, in terms of privacy, AI will receive more data to collect and analyze; it will be at a point that it has become more sophisticated. How much trust can we put in these AI models to not take all of our private information and guarantee that it will be acquired for a good use? And lastly, in terms of societal impact, these questions directly impact how societies function and whether or not AI becomes a tool for progress or a source of harm. We will explore more of the ethical challenges of artificial intelligence from the perspective of real-world scenarios, providing valuable insights into the consequences and what types of actions are needed to combat them. Understanding these ethical dilemmas is crucial, as the future of AI will determine not only how we live but who we become as a society.

Bias in AI Algorithms

A simple demonstration of bias in AI is an image search in your favorite search engine for the keyword "school girl." What do you see on the page? A bunch of ordinary girls dressed as normal teenage students? Or pictures of girls in sexualized poses wearing school attire? My guess is the latter. By contrast, searching for "school boy" returns mostly ordinary young boys in school uniforms. Do you see the difference? This is just one of hundreds of cases where bias can be found in AI.

But why is that the case? Isn't AI smart enough to detect these kinds of distinctions between the two genders? Why is it stereotypical? The answer is people like us. AI, at its core, is not neutral: it processes a massive amount of data and prioritizes results based on what gets the most clicks, shaped by user preferences and location. A search engine thus becomes an environment that absorbs biases from the real world and further entrenches those opinions and stereotypes online.

The question then becomes: how can we ensure less biased, more accurate results? What would accurate results even look like? And would they guarantee fairness for everyone? It's a difficult task to resolve, but the main requirement for these algorithms is that they make nondiscriminatory decisions, regardless of gender, race, or any other factor. AI must not favor certain groups in ways that hinder opportunities for others.

Good ways to implement fair AI involve regularly auditing AI agents for bias with a diverse group of people from around the world throughout the development process. Taking the global community's opinions into consideration makes inclusivity a priority we can all benefit from while reducing the harm that comes from AI. Another way to avoid replicating stereotypical representations in the digital realm is to form a global standard on a given subject, for example, gender. UNESCO addresses gender bias in AI through its Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject. Researchers, companies, and experts have also taken steps to address bias in AI; one such company is Google, which introduced "Model Cards," essentially a tool for creating more transparency around algorithm behavior. Model cards can help investigate concerns like unfair bias, bringing clarity when a model performs differently, and unintentionally, across a diverse range of people based on characteristics like skin color or region. Another step worth considering is introducing ethical AI courses for university students, encouraging them to think about their impact on a diverse range of people from the very beginning of the development process. Results from these measures may not come as quickly as you'd like, but they are a worthy attempt to make diversity a priority in AI algorithms. Developers from all around the world therefore need to continuously monitor these AI models and keep improving them until no bias is left behind.
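
To make "regularly auditing for bias" a little more concrete, here is a minimal sketch of one common fairness check, the demographic parity gap: comparing how often a model produces a favorable outcome for each demographic group. Everything here, the group labels, the decisions, and the 0.1 alert threshold, is invented for illustration; real audits use far richer data and several complementary metrics.

```python
# A minimal, hypothetical bias audit: compare a model's favorable-outcome
# rates across demographic groups. All data and thresholds are invented.
from collections import defaultdict

def selection_rates(records):
    """Return the fraction of favorable outcomes per group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += outcome  # outcome is 1 (favorable) or 0 (not)
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical audit log of (group, model decision) pairs.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                   # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:                  # illustrative threshold, not a legal standard
    print("warning: outcomes skew toward one group; inspect the training data")
```

A check like this is cheap enough to run on every model update, which is exactly the kind of continuous monitoring described above; the hard part is deciding which groups and which metric actually matter for a given application.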

AI and Privacy Concerns

AI has the potential to change how we interact with machines. Many AIs already embody human characteristics, like the human-sounding voices used in personal assistants such as Siri or Alexa. This creates a trusting relationship between humans and machines, which in turn makes us inclined to share an increasing amount of personal information with them. But while AI makes our lives easier, it also makes it harder to keep our personal information private.

Much of the information privacy discourse around AI has overlooked the growing power imbalance between the companies that accumulate data and the users, the individuals who produce that data. Current AI systems mostly treat data as a good to be traded and give little weight to the concerns of people asked to make decisions about their data within systems they don't understand. Privacy considerations need to be built into these AI models by changing how they collect and handle user data: decisions should rest on informed consent, proper notice, and a meaningful ability to access and control your personal information. The challenge, of course, is how to balance leveraging data for innovation against protecting users' privacy rights.

For these AI models, the concept of personal information hinges on identifiability: whether a person's identity can be discovered from a particular piece of information. The problem is that the line between what counts as "personal" and what does not has become harder to draw, given the growing ability to link and match data back to individuals. Privacy law therefore needs to set guidelines protecting individuals' information privacy in an AI environment, and such guidelines have been built on three pillars of information privacy stemming from the OECD guidelines: collection limitation, purpose specification, and use limitation. This may not be the most impactful or effective way to protect individual privacy, but it's a step toward continuing and improving privacy protection in the AI space.

Collection limitation, however, reduces the functionality of the AI models themselves; limiting the information an AI model can collect conflicts with how the technology works. Put simply, without data, AI does not work at all; to improve, it needs a vast amount of information, and that need brings privacy risks. Purpose specification, by contrast, enhances a user's ability to understand how their personal information is being used, creating a more effective and transparent environment while meeting the expectations of the law. And use limitation means companies must ensure personal information is used only for the purpose for which it was collected, building trust on both sides, the institution's and the individual's.
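
To see how purpose specification and use limitation might translate into software, here is a toy sketch, entirely my own invention rather than anything mandated by the OECD guidelines: each piece of personal data is tagged with the purpose the user consented to, and any access for a different purpose is refused.

```python
# A toy sketch of purpose specification and use limitation in code.
# All names (PersonalRecord, access, the purpose strings) are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class PersonalRecord:
    owner: str
    value: str
    collected_for: str  # the purpose the user consented to at collection time

class UseLimitationError(Exception):
    """Raised when data is requested for a purpose it wasn't collected for."""

def access(record: PersonalRecord, purpose: str) -> str:
    """Release the data only for its declared collection purpose."""
    if purpose != record.collected_for:
        raise UseLimitationError(
            f"data collected for '{record.collected_for}' "
            f"cannot be used for '{purpose}'"
        )
    return record.value

email = PersonalRecord("alice", "alice@example.com",
                       collected_for="account_recovery")
print(access(email, "account_recovery"))  # allowed: same purpose
try:
    access(email, "ad_targeting")         # refused: different purpose
except UseLimitationError as err:
    print("blocked:", err)
```

A real system would also need audit logs and a way for users to revoke consent, but even this small gate captures the principle: the purpose travels with the data.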

AI in Autonomous Weapons

During my childhood, I always loved watching films and playing video games involving robots and machines; they fascinated me the way anything "cool" fascinates an innocent kid. Films like The Terminator and games like Detroit: Become Human were constant topics among folks my age because of their iconic, distinctive worlds. But they also raised questions that would come to concern people later: Will robots really take over the world? Are they going to be a massive threat to humans in the future? In the end, it was always just fiction. Until now.

That concern is resurfacing because of AI's significant impact on weaponry. Militaries have now adopted autonomous systems in weapons such as mines, torpedoes, and heat-guided missiles. Although AI typically raises concerns in industrial or social life, it raises them in weapon design too. AI-powered weapons often take the form of drones, robots, and other automated systems that can be weaponized. These systems pose a threat because of the lack of human control they leave us, and they will only grow more capable, sophisticated, and widely used as AI develops. The battlefield is apparently one of the simpler applications for AI, since the technical task of finding human beings to kill is comparatively easy. That may sound frightening, straight out of The Terminator, but it's a point made by Stuart Russell, a computer scientist at the University of California, Berkeley.

One might then ask: how do we keep this AI from being misused in ways that threaten humanity or violate international law? Looking back a hundred years and more, regulating weapons has always been a struggle. In medieval times, knights would agree not to target each other's horses with their lances; in 1675, France and the Holy Roman Empire agreed to disallow the use of poison bullets; and in 1983, the Convention on Certain Conventional Weapons (CCW) entered into force, later banning weapons such as blinding lasers. Looking at our past record, we have always found a way, or at least made an effort, to control the use of weapons; today we face a different kind of challenge. The CCW has been investigating AI-boosted weapons since 2013, but so many problems have arisen along the way that progress has slowed. The core problem is the lack of a shared view or agreement on lethal autonomous weapons systems (LAWS). Countries and organizations, such as the North Atlantic Treaty Organization (NATO), each maintain their own definition of what an autonomous weapons system is. This shows not only how little the world agrees but also how limited our knowledge of AI itself remains. All in all, there needs to be a general agreement on the laws and regulation of AI to keep the use of these weapons peaceful and safe globally, creating an environment with fewer harms and fewer fears for people around the globe.

Conclusion

AI's ethical dilemmas will keep growing and developing as long as we make no effort to solve them. Not every dilemma will affect you directly, but each certainly affects others who are disadvantaged by it. Bias in AI, for example, weighs heavily on particular genders, races, ethnicities, and cultures, and it surfaces from something as simple as gathering our data around a single keyword on Google. Monitoring AI development with a more diverse team could help detect these kinds of biases and resolve them quickly.

AI's impact on privacy also shapes our concerns for the future. The lack of meaningful consent that companies obtain before taking our data often makes us think twice before using their products. There needs to be a transparent agreement between customer and company about how the data will be used, and a guarantee that it will not leak to the public, so that a healthy relationship can form between them.

And lastly, weaponized AI: it's a dangerous topic to delve into, but it's also a key problem in redefining global peace. It requires many countries and organizations to work together toward a shared view on the usage of weapons that the world can agree on, with the sole goal of a much safer future.

Going it alone will fail; global collaboration is crucial to resolving these challenges. And you don't need to be part of a big company or organization to make an impact; simply being a better person in how you use your technology can make a bigger impact than you may ever realize. AI is not a technology that controls us; it's a tool we can benefit from to make ourselves even more intelligent. It's not something that has to divide or harm us, as long as developers, lawmakers, and you yourself work together to guarantee that AI fully serves ethical and equitable goals.