AI for Good or Bad?

The COVID-19 pandemic has brought many tragedies to the world. In the US, for instance, a Boston newspaper published 15 pages of obituaries for a single day. At the same time, the crisis has drawn out a lot of goodness in people. Front-line doctors, nurses, and first responders are putting their lives on the line to help patients. Many landlords waived rent for the month of April. In Korea, volunteers made masks by hand when disposable ones were in short supply. Others volunteered to make and deliver food for the elderly and the disabled. Many people donated money to help those in need.

In the field of AI, universities, companies, and research institutions are pouring resources into finding a cure and solutions for the current crisis. On April 1st, Stanford's HAI (Human-Centered Artificial Intelligence) convened a special virtual conference, "COVID-19 and AI: A Virtual Conference," where several world-renowned AI researchers presented and discussed how they are using AI to solve real-world problems. The virtual conference is part of a bigger movement that predates COVID-19: the AI for Good Global Summit, an annual conference sponsored by the UN to promote the use of AI for social good. Several researchers from the Stanford virtual conference were already involved in AI for Good, most notably Dr. Fei-Fei Li of Stanford.

AI gone bad?

Ironically, the term "AI for Good" implies that something has gone wrong in the field of AI. We do not hear of a "photography for good" movement, though pictures can certainly be used to cause harm. The same can be said of biology, literature, mechanical engineering, and any other field of study. Anything can be used for good or for ill; it is up to the person who uses it. So why should AI be any different? Why must AI be good?

The short answer is that AI has almost become one of "us." AI was once an esoteric research area within cognitive science, but it began to defeat humans in areas where no one expected computers to win. AlphaGo, for instance, took the world by storm after beating the world-class Go master Lee Sedol in a historic match. Self-driving cars are already on many public roads and have caused multiple casualties. AI can sing songs, compose music, paint beautiful pictures, write books, and even invent new technologies. Recently, courts in the US and the UK had to rule that an AI cannot be credited as an inventor on a patent. We have stopped questioning what AI can do; instead, we are asking what else AI cannot do.

In the image of man

AI was modeled after the workings of the human brain. Neurons are connected to one another through synapses, across which signals are transmitted. A mathematical model of this process was developed and later implemented in computers as artificial neural networks, where the neuron became a node and the synapse a weighted connection between nodes. The foundation of deep learning was laid, and there has been no turning back since.
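The analogy is direct enough to sketch in a few lines of code. Below is a minimal illustration in Python of a single artificial neuron; the names and numbers are mine, chosen only to show how weighted connections and an activation function stand in for synapses and neuron firing — it is a teaching sketch, not a real deep-learning implementation.

```python
# A minimal sketch of one artificial neuron (illustrative only).
# Inputs arrive over weighted connections (the "synapses"); the node
# sums them and applies an activation function to produce its output.
import numpy as np

def sigmoid(z):
    # Squash the summed signal into the range (0, 1), like a firing rate.
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # One node: weighted sum of incoming signals, then activation.
    return sigmoid(np.dot(inputs, weights) + bias)

x = np.array([0.5, -1.2, 3.0])   # incoming signals from other nodes
w = np.array([0.8, 0.1, -0.4])   # synaptic strengths (learned weights)
print(neuron(x, w, bias=0.1))    # the node's outgoing signal
```

Layering many such nodes and adjusting the weights from data is, in essence, all that a deep neural network does.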

If the goal of building AI is to make a system that behaves like a human, why are we afraid of AI? I think we implicitly know the answer. We talk about AI taking away our jobs and AI dictating our lives, but the crux of the matter is that we know what is in man. I do not know about you, but when I introspect, I know for sure that I cannot find much goodness there. If AI is made in the image of man, it will not be better than what we are.

What is in man

For every technological advancement we can think of, there are ways it has been abused or used in ways its creators never expected.

  • Deepfake technologies are used to make celebrities appear to do things they never did; some are just for laughs, but others are made with malicious intent.
  • Megvii's advanced object-tracking algorithms are used to authenticate people and monitor neighborhoods, but they can also be used by governments to track political activists.
  • The wonderfully connected world of the internet is also a playground for hackers, who launch an attack every 39 seconds on average.
  • The creator of YOLO excitedly announced his fast object-detection engine in a 2017 TED talk. To his horror, he later found that the engine was being used by the military in drone warfare.

Figure 1: YOLO in 2017 (https://www.youtube.com/watch?v=Cgxsv1riJhI)

Data ethics

One of the main businesses of Testworks Inc. is data processing, and we take pride in the quality and security of our datasets. We are constantly on our toes to ensure that accuracy is high and privacy is protected, so data ethics is close to our heart. Bias amplification is a well-known phenomenon among AI practitioners: often, we do not even know what prejudices we hold until we build datasets and train a model on them. In a recent study, gender bias was greatly amplified when a model was trained to identify an activity from a picture. The activity of cooking was 33% more likely to be associated with women in the training set, but the trained model amplified that disparity to 68%.
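To make the arithmetic concrete, here is a minimal sketch of how such amplification can be measured. The counts below are hypothetical, invented only so that the skew works out to roughly the 33% and 68% figures from the study; the function and variable names are mine, not the study's.

```python
# Hypothetical illustration of measuring bias amplification.
# Skew: how much more often an activity co-occurs with "woman" than
# "man", as a fraction of all labeled examples of that activity.
def gender_skew(woman_count, man_count):
    return (woman_count - man_count) / (woman_count + man_count)

# Invented counts for images labeled "cooking" (not real data).
train_skew = gender_skew(woman_count=400, man_count=200)  # +0.33 in training data
model_skew = gender_skew(woman_count=504, man_count=96)   # +0.68 in model predictions

print(f"training-set skew: {train_skew:+.2f}")
print(f"prediction skew:   {model_skew:+.2f}")
print(f"amplification:     {model_skew - train_skew:+.2f}")  # positive = bias amplified
```

The point of the exercise is that the model does not merely reproduce the imbalance in its training data; it exaggerates it, which is why auditing datasets before training matters so much.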

Figure 2: Testworks' data annotation and management solution, 'blackolive'

In a now-infamous gaffe by Google, a photo-categorizing app classified a picture of two African Americans as gorillas. Google later "fixed" the problem by simply removing the word gorillas from the app's vocabulary, along with related words like chimpanzees and monkeys. Unfortunately, Google did not really fix the problem.

Source: https://www.dailymail.co.uk/sciencetech/article-5270891/Google-bans-word-gorilla-racist-Photos-app.html

Asking questions is the hallmark of human intelligence

Ethics seemed irrelevant to the fields of IT and AI until recently. If I asked engineers whether what they are doing is right or wrong, I would probably get a lot of blank stares. Introverted engineers are assumed to be bad at asking such questions, and engineering education is focused almost entirely on teaching techniques for finding solutions. With the advancement of AI, the time is ripe to reshape that education. Rather than focusing entirely on technique, ethics must become part of every field of study, equipping students to ask questions of right and wrong. The questions of 'what' and 'why' will become more important than 'how' in the brave new world of AI and big data.

The conundrum engineers face is that they cannot simply evade the question by saying they are just engineers doing their job. Not many people associate IBM with Nazi Germany, but its punch-card tabulating machines were used by the Third Reich to profile Holocaust victims. Anyone who does not ask the difficult questions may have already signed on as an accomplice to a horrendous crime.

Conclusion

A crisis tends to push people to two extremes. World War II, instigated by Hitler, wreaked havoc on Europe and beyond. Many simply complied with the status quo and did their jobs. Others questioned. Dietrich Bonhoeffer declined an offer to escape and chose to remain in prison until he was executed. An ordinary businessman, Oskar Schindler, became a savior to thousands of Jews through his business.

The current COVID-19 crisis is no exception. There are Covidiots who care about nothing but their own interests. Then there are those who sacrifice their time, money, and their own safety for the good of others. When push comes to shove, would I be a Hitler or a Schindler? I am afraid to find out the answer, but at least I am asking the question.

Written by Testworks CTO Changsin Lee (changsin@testworks.co.kr)

Testworks Inc., Seoul, Korea, CTO, Research & Development

Amazon Corporation (2014-2019), Seattle, WA, Sr. Software Development Engineer

Microsoft Corporation (1999-2014), Redmond, WA, Software Engineer