SCI-ART LAB

Science, Art, Litt, Science based Art & Science Communication

Q: Why do all researchers seem to have biases?

Krishna: Here we have to consider four things:

1. You said 'all researchers'. All researchers are human beings.

Human beings have lopsided thought processes because of emotions and the conditioning of their minds.

The problem arises only if you don't notice your biases, or if you notice them but refuse to overcome them.

2. Then there are researchers in science and researchers in other subjects. 

I did research in both science and art and published papers in both subjects in peer-reviewed journals.

In science you have to follow the scientific method, which eliminates almost all of your biases. Your peers will try to point out any that remain and help you throw them into the trash. Although not perfect, this is a very good process that makes you overcome biases and establish evidence-based facts. That is why science works most of the time. This makes you very satisfied as a researcher.

Then there is research in other subjects. In art subjects, there is no scientific method to accurately guide you.

There are two paramount differences between art and science. The first is that art is subjective while science is objective. The second is that art expresses knowledge, most often in the form of subjective representation, while science is the system of acquiring knowledge using checks and balances.

Art is mostly based on human perception, or it is conceptual in nature, and so it can come under the influence of human biases. Unlike science, art doesn't need evidence to be accepted.

I had complete freedom when I did my research in art. Where there is total freedom, you tend to come under the influence of your biases. In art there is no universal principle aligned with actual facts to discover, so whatever you think is true is presented as a concept, not as "a theory that has been verified" the way it is in science.

3. Sample bias. I recently read an interesting article (1). It deals with the "Big Data Paradox". This is not about researcher bias but about sample bias.

It discusses a 2018 analysis of polling during the 2016 US presidential election. Famous for predicting a Hillary Clinton presidency, those election polls were skewed by what is termed "nonresponse bias", which in this case was the tendency of Trump voters either not to respond or to define themselves as "undecided".

The danger posed by the paradox is that a biased big data survey has the potential to be worse than no survey at all, because with no survey, researchers still understand that they don't know the answer. When underlying bias is poorly understood—as in the 2016 election—it can be masked by the confidence given by the large sample size, leading researchers and subsequent consumers of survey results to mistakenly think they know the answer.

Though it is broadly known that survey accuracy comes from both data quantity and data quality, data quantity has stolen the spotlight in recent years as technology has dramatically increased our ability to both collect and process massive data sets. These potentially offer insights never before possible, particularly into subpopulations that were previously difficult to study. But if attention isn't paid to data quality, gained by ensuring your sample is representative of the larger population, or by understanding how it differs so the results can be adjusted, the results can be misleading.

There's a drive to get the biggest data sets possible, and modern technology, big data, has made that possible. That allows analysis at a more granular level than ever before, but we need to be mindful that the effect of biases in the data gets worse with bigger sample sizes, and that carries right through to the subgroups.

The problems posed by big data have been recognised only recently. An official met with a group of statisticians and asked them about the handling of data sets that were becoming available covering large percentages of the U.S. population. Using the hypothetical example of tax data collected by the IRS, he asked whether the statisticians would prefer a sample covering 5 percent of the population that they knew was representative of the larger population, or IRS data that they weren't sure was representative but covered 80 percent of the population. The statisticians chose the 5 percent. "What if it was 90 percent?" he asked. The statisticians still chose the 5 percent, because if they understood the data, their answer would likely be more accurate than one drawn from a much larger set with unknown biases.
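The statisticians' preference can be illustrated with a small simulation. In this minimal Python sketch (the 52% support level and the response rates are invented for illustration), a huge survey affected by nonresponse bias produces a confidently wrong estimate with a tiny margin of error, while a small representative sample lands near the truth:

```python
import random

random.seed(42)

# Hypothetical population: 52% support candidate A,
# but A's supporters respond to the survey only half as often.
TRUE_P = 0.52
RESPONSE_RATE = {True: 0.45, False: 0.90}  # supporter -> response probability

def survey(n_contacted):
    """Contact n people; only some respond (nonresponse bias)."""
    responses = []
    for _ in range(n_contacted):
        supporter = random.random() < TRUE_P
        if random.random() < RESPONSE_RATE[supporter]:
            responses.append(supporter)
    return responses

def estimate(sample):
    """Return the sample proportion and a 95% margin of error."""
    n = len(sample)
    p = sum(sample) / n
    margin = 1.96 * (p * (1 - p) / n) ** 0.5
    return p, margin

big = survey(1_000_000)                                    # big but biased
small = [random.random() < TRUE_P for _ in range(1_000)]   # small but representative

p_big, m_big = estimate(big)
p_small, m_small = estimate(small)
print(f"big biased sample:    {p_big:.3f} +/- {m_big:.3f}")
print(f"small representative: {p_small:.3f} +/- {m_small:.3f}")
```

The big survey's narrow confidence interval excludes the true 52% entirely, which is exactly the paradox: the large sample size creates confidence in an answer that the unnoticed bias has already ruined.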

Every data set is going to have certain quirks, but the question is whether the quirk matters to whatever your problem is. Social media has tons of data just sitting there. And they may think they have a public sample, but may not realize that their population is biased to start.

Indeed, nonresponse bias remains pernicious even when survey researchers are aware of its dangers. For example, a 2020 article correctly predicted overconfidence in the 2020 US presidential election polls, despite new methods being introduced in the aftermath of 2016.

Here, although there is no wilful bias, it can still look as if the researchers have a bias towards someone or something!

4. Then, when research is conducted and the results are published, the general public's skewed understanding of it also makes people feel it is biased. Why do some people refuse to accept the evidence on vaccines, evolution, GM foods and the like? Some people still think scientists are biased and therefore manipulate the figures to get these results. The same is true for exit polls: if they say your favourite candidate is not winning, you might feel they are biased. Denial of truth!

When so many things are influencing your perception of research, I am not surprised you asked this question.

Footnotes:

1. Seth Flaxman, Unrepresentative big surveys significantly overestimate US vaccine uptake, Nature (2021). DOI: 10.1038/s41586-021-04198-4. www.nature.com/articles/s41586-021-04198-4


© 2024   Created by Dr. Krishna Kumari Challa.