Researchers, including those of Indian origin, have introduced a new tool to improve the fairness of online search rankings without sacrificing their usefulness or relevance.
Since most people choose from the top of online search lists, they rarely see the vast majority of the options, creating the potential for bias in everything from hiring to media exposure to e-commerce.
"If you could examine all your choices equally and then decide what to pick, that may be considered ideal. But since we can't do that, rankings become a crucial interface to navigate these choices," said computer science doctoral student Ashudeep Singh from Cornell University in the US.
For example, many YouTubers will post videos of the same recipe, but some of them get seen way more than others, even though they might be very similar.
"This happens because of the way search results are presented to us. We generally go down the ranking linearly and our attention drops off fast," Singh said.
The researchers' method, called FairCo, gives roughly equal exposure to equally relevant choices and avoids preferential treatment for items that are already high on the list.
This can correct the unfairness inherent in existing algorithms, which can exacerbate inequality and political polarization, and curtail personal choice.
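In code, the idea behind such a method can be sketched roughly as follows: rank items by relevance, but give a boost to items whose accumulated exposure has fallen behind their share of merit. The snippet below is only an illustration of that idea, not the FairCo implementation itself; the 1/log2(rank + 1) attention weights and the correction strength `lam` are assumptions made for the example.

```python
import math

def position_weights(n):
    """Assumed examination probability at each rank (1-indexed)."""
    return [1.0 / math.log2(k + 1) for k in range(1, n + 1)]

def rank_with_correction(relevance, exposure, merit, lam=1.0):
    """Order items by relevance plus a boost for items whose share of past
    exposure lags behind their share of merit."""
    total_exp = sum(exposure) or 1.0
    total_merit = sum(merit) or 1.0
    scores = [
        rel + lam * (merit[i] / total_merit - exposure[i] / total_exp)
        for i, rel in enumerate(relevance)
    ]
    return sorted(range(len(relevance)), key=lambda i: scores[i], reverse=True)

# Two near-identical recipe videos. Without the correction term, item 0 is
# always ranked first and soaks up most of the attention; with it, the two
# items alternate in the top slot.
relevance = [0.81, 0.80]
merit = relevance[:]           # here an item's merit is just its relevance
exposure = [0.0, 0.0]          # exposure accumulated over past rankings
weights = position_weights(len(relevance))

for step in range(6):
    order = rank_with_correction(relevance, exposure, merit, lam=1.0)
    for rank, item in enumerate(order):
        exposure[item] += weights[rank]
    print(f"step {step}: order={order}, exposure={[round(e, 2) for e in exposure]}")
```

Run for a few rounds, the two nearly identical videos take turns in the top position instead of one of them permanently absorbing most of the attention.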
"What ranking systems do is they allocate exposure. So how do we make sure that everybody receives their fair share of exposure?" said Thorsten Joachims, professor of computer science and information science.
What constitutes fairness is probably very different in, say, an e-commerce system and a system that ranks resumes for a job opening.
"We came up with computational tools that let you specify fairness criteria, as well as the algorithm that will probably enforce them."
Algorithms seek the most relevant items to searchers, but because the vast majority of people choose one of the first few items in a list, small differences in relevance can lead to huge discrepancies in exposure.
For example, if 51 per cent of the readers of a news publication prefer opinion pieces that skew conservative, and 49 per cent prefer essays that are more liberal, all of the top stories highlighted on the home page could conceivably lean conservative, according to the paper.
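A back-of-the-envelope calculation makes that gap concrete. The numbers below, ten articles per side and a DCG-style attention model, are assumptions for illustration rather than figures from the paper.

```python
import math

# Illustrative numbers, not from the paper: ten conservative-leaning pieces
# with relevance 0.51 and ten liberal-leaning ones with relevance 0.49,
# competing for ten home-page slots under a DCG-style attention model.
articles = [("conservative", 0.51)] * 10 + [("liberal", 0.49)] * 10
attention = [1.0 / math.log2(k + 1) for k in range(1, 11)]

# Sorting purely by relevance puts every 0.51 piece above every 0.49 piece,
# so conservative-leaning pieces occupy all ten slots.
ranked = sorted(articles, key=lambda a: a[1], reverse=True)[:10]
conservative_attention = sum(
    w for (side, _), w in zip(ranked, attention) if side == "conservative"
)
print(f"share of attention on conservative pieces: {conservative_attention / sum(attention):.0%}")
# Prints 100%; an exposure-proportional-to-merit criterion would target ~51%.
```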
"When small differences in relevance lead to one side being amplified, that often causes polarization, where some people tend to dominate the conversation and other opinions get dropped without their fair share of attention," Joachims said in a university statement.
The paper titled 'Controlling Fairness and Bias in Dynamic Learning-to-Rank' won the Best Paper Award at the Association for Computing Machinery SIGIR Conference on Research and Development in Information Retrieval. (IANS)