A study recently published by researchers from the University of Washington, the Mozilla Foundation, and Google Research delves into machine learning and how its shortcomings are all too often linked to human bias and culture. We'll be taking the time to discuss some key findings, as well as ML's place in today's world.
Machine learning is the hot topic every scientist, engineer, programmer, and tech company is after. And while it's certainly entered an entirely new playing field over the last few years, AI has been a staple of pop culture and technology for decades. Ever since HAL 9000 from 2001: A Space Odyssey, no one's really been able to remove the idea of sentient machines from their mind. Certainly not developers and researchers, who are putting their heart and soul into building machine learning datasets that can help AI "learn" better and faster.
Bing revealed in early 2019 that about 90% of its search engine's ranking algorithm relies on machine learning. AI has carved out a special niche for itself in the medical and pharmaceutical fields, where it is used to identify diseases and suggest initial treatments. Even major tech companies like Apple have acquired small AI companies and start-ups as they look towards the future. Yes, AI and machine learning seem to be the way forward.
However, as the study points out, it isn't all good. Machine learning has quite a few obstacles in front of it, and the paper's discussion of limitations is founded less upon technical features and more on sociological ones.
The paper, entitled Data and its (Dis)contents, discusses AI and machine learning's shortcomings at length. Essentially, quite a few of the issues that crop up with machine learning boil down to the human element that goes into building datasets for these tasks. As the study reveals, datasets are often assembled from information acquired without proper permission. Since AI can't, for the most part, figure out which information is fairly obtainable and which isn't, copyrights and personal boundaries are left in the dust, with developers not caring much, since more information means better-working algorithms.
Another issue with machine learning is that it can easily be misled into perpetuating biases against minorities and marginalised populations. With the developers and companies behind such AI relying only on their own technical know-how and data acquired online (which mostly depicts the viewpoints of population majorities), the learning process becomes much narrower in vision, to the further detriment of minorities.
AI still has a long way to go before reaching sci-fi glory. Developers need to take extra steps to ensure that the libraries of information being fed to their products are curated from legal, unbiased sources. Which, considering how 2020 only further revealed how much hatred communities can harbour towards their minorities, seems an all-too-daunting task.