‘On Democratization of AI’

Bias in the system

On June 10th, big tech companies like Amazon, Microsoft, and IBM declared a one-year moratorium on selling facial recognition technology to the police, or at least until the government puts regulations in place on the use of such technologies. The announcement came on the heels of the national upheaval and riots in major US cities after the death of George Floyd at the hands of the police. Facial recognition raises privacy and security concerns when government agencies use it to track citizens.

A bigger issue is that even when it is used for good purposes like tracking criminals, the technology is not perfect and tends to be biased against minorities like African-Americans. A flawed facial recognition system used by the police produces more mistaken identities for under-represented groups, and the problem is not limited to ethnicity. If the data set associates more males with criminal activities, you can imagine that males will be harassed more. The same bias occurs with regard to age, region, dialect, and disability. Given that more than 75% of IT workers in the US are male, explicit and implicit biases can be present in many AI systems.

Democratization

Until recently, working in the field of AI required years of graduate education. When power is concentrated in a few individuals, it goes by different names: monarchy, autocracy, dictatorship, authoritarianism, and so on. The bias in AI is a direct result of technocracy, where decisions are made by the hands of a select few. The democratization of AI is the movement to fix the problem at its root by making AI accessible to everyone.

Popular ML libraries like TensorFlow, Keras, PyTorch, and Caffe are open-source and freely available. Companies like fast.ai and OpenAI make it their mission to provide easy-to-use AI interfaces and offer great learning resources for everyone. Top universities like Harvard, MIT, and Stanford all have AI and Machine Learning courses open to anyone. Similar courses are also available from MOOC platforms like Coursera and Udemy. There are many success stories. In 2018, a small team of fast.ai students beat top Google coders in an image classification competition. Some of the students of fast.ai and other online classes are entrepreneurs, dairy farmers, fishermen, homemakers, and senior citizens.

Radicalization

While empowering more people to do AI certainly needs to continue, the problems run deeper. Collaborative filtering is a technique used by most recommender systems. Simply put, it is a way to predict a user’s preferences based on information from other, similar users. Popular social media platforms like YouTube, Twitter, and Facebook employ similar methods to recommend links that might be of interest to a user. I am pretty sure we have all had the experience of clicking link after link until we find ourselves very far from where we started. Freedom is an illusion.
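
To make the idea concrete, here is a minimal sketch of user-based collaborative filtering in Python. The rating matrix, the similarity measure, and the numbers are all invented for illustration; real systems operate on millions of users and far richer signals.

import numpy as np

# Rows are users, columns are items; 0 means "not yet rated". (Toy data.)
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1
    [1, 0, 5, 4],   # user 2
], dtype=float)

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item):
    # Weight every other user's rating of the item by how similar that user is to ours.
    weights, weighted_sum = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        sim = cosine_similarity(ratings[user], ratings[other])
        weights += sim
        weighted_sum += sim * ratings[other, item]
    return weighted_sum / weights if weights else 0.0

# User 0 has never seen item 2; the prediction is pulled toward the ratings
# of the most similar user (here, user 1).
print(predict(user=0, item=2))

The same principle, scaled up, is what decides which video or post you see next.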

On the surface, the problem seems innocuous. You just wasted precious time that you could have used more productively. A more sinister problem lurks behind the mindless clicking, however. As you click through the linked content that the AI recommends, your mind tends to become more biased. Your thinking is largely determined by what you see and hear. This is bias amplification: the AI recommends more biased content to the user, and each click adds more biased click-stream data for training the AI. A vicious cycle is created.
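
A toy simulation sketches the loop; the update rule and numbers below are purely hypothetical and only meant to show how small, repeated nudges can compound.

import random

preference = 0.5        # chance the user clicks on biased content (hypothetical)
recommend_bias = 0.5    # fraction of biased content the system serves (hypothetical)

for step in range(10):
    shown_biased = random.random() < recommend_bias
    clicked = shown_biased and random.random() < preference
    if clicked:
        # The click is logged as training data, nudging both the system and the user.
        recommend_bias = min(1.0, recommend_bias + 0.05)
        preference = min(1.0, preference + 0.05)
    print(f"step {step}: recommend_bias={recommend_bias:.2f}, preference={preference:.2f}")

Run long enough, this toy loop saturates: the system shows mostly biased content and the user mostly clicks on it.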

The radicalization of social media users is a well-observed phenomenon. As like-minded people flock to the same communities, they volunteer themselves to be brainwashed. Radical groups like white supremacists and ISIS have exploited social media to disseminate their extremist ideologies and to recruit young people into their fold. The trend will only intensify as people become more isolated due to the global pandemic.

Agency

To stop the radicalization, a radical measure is needed. A simple way might be to stop using social media altogether. In fact, some people decide to do just that, especially after a bad experience with their online “friends”. However, this is not easy in the age of the internet, especially in the age of “remote everything” during COVID-19. Government regulations and corporate business ethics are certainly needed, but each internet user can also assert their own agency by exercising discipline in how they use the internet.

As parents, we try to regulate media time for our kids. The rule is that the smartphone can be used only after homework is done. One of our kids calls it “free time”, but we know exactly what she will do: she will be glued to her smartphone the whole time. Her free time does not seem so free. Freedom is an illusion.

People might look back at our time and be shocked to find out that we let our children use smartphones at such a young age. Eighteen is the legal age for smoking in most countries, but the US raised it to twenty-one in 2019. Legal limits on the smoking age were put in place in the US in the 1880s, yet the harmful effects of tobacco had been noted as early as 1602. It took more than two hundred years before the ban was written into law. Let us hope that humanity has progressed since then, so that the regulation of the internet and social media will happen sooner.

While we are waiting, the onus is on parents, educators, corporate leaders, and individuals to learn the discipline to use technology wisely and responsibly. It might mean setting time limits, or unplugging over the weekends and after work. Only when individuals assume full responsibility for their own agency can they be free. Discipline means freedom.

Conclusion

For better or worse, AI is here to stay. To mitigate the effects of the biases in AI, we need action from lawmakers, corporate leaders, and individuals alike. Everyone should learn, at least conceptually, how AI works and get involved in the making of AI systems. Sign up for free Machine Learning courses, donate data, participate in the annotation process, stop mindlessly clicking through recommended links, or just put down the smartphone and take a walk. Only when each user becomes a free agent can the spring of AI come. Only then can we say that we are free at last.

“Free at last, Free at last, Thank God Almighty we are free at last.” -MLK

Written by CHANGSIN LEE at changsin@testworks.co.kr

Testworks Inc., Seoul, Korea, CTO, Research & Development
Amazon Corporation (2014-2019), Seattle, WA, Sr. Software Development Engineer
Microsoft Corporation (1999-2014), Redmond, WA, Software Engineer