Science, Art, Litt, Science based Art & Science Communication

There is a lot of confusion in the general public regarding science. Several of my friends from non-scientific fields ask me why they see and read contradictory reports on a single subject in science. Well, I agree with them. I too read these reports in the media. As a person from the field of science, I can understand the reasons, but others might lose faith in science if this aspect is not explained properly. So I am trying to explain here why this happens.

The main reason is scientific uncertainty. Some of these uncertainties arise when risk assessments are not based on studies conducted directly on people or on the relevant features of our environment, but on models. Chemicals are often tested on rodents rather than people, for example, and climate change is studied using computer models rather than by experimenting on the atmosphere.

Researchers will be asking and answering different questions on the same topic, and consequently gathering and analysing different bodies of evidence.

Read these articles in Scientific American:

and this one on beta-carotene:

If you read them, you will find how complex scientific research is. There are several aspects that influence a test result. All the aspects that govern a scientific experiment and its results have to be taken into account, and the whole picture has to be covered to get the facts correct. *Even if one point is missed, the whole process will go to waste.*

Here is another article that supports my view:

and another one:

Turning the tables on obesity and BMI: When more can be better.

Usually scientists will try to cover all the aspects while planning an experiment. However, in today's cut-throat world of competition, some scientists, in order "to get there first", might knowingly or unknowingly ignore some points. Different conditions in which different experiments are conducted might give different results. Scientists stress this and clearly state the conditions in which their work is done. Moreover, the scientific world at present lacks proper equipment to deal with all the complexities of Nature and the Universe. *So experiments conducted now with primitive equipment might yield different results from those conducted several years from now, when technology will be more advanced.* The world of science will always be prepared for such corrections.

For example, we have been told all these days that indiscriminate use of antibiotics is responsible for antibiotic-resistant bacteria. However, recent studies throw new light on this matter. Please read these articles:

Data available in the primary stages of any scientific research make scientists think in a certain way. Given the conditions in which they are working, their conclusions may be true up to a certain extent. As and when new data arrive, science corrects its old conclusions. Therefore, keeping up with new findings gives a fresh perspective on the scientific outlook, and it is very important to always keep in touch with science to find out which findings are closer to the truth and the facts.

Now read this article in The Hindu:

this one in Times of India:

and this one in Physics News:

What do you think? I feel *the persons who conducted those earlier studies jumped to conclusions too early, without conducting their experiments in a proper manner.*

*When scientists from other areas of work try to do research in an area that is not their field of specialization, they too might go wrong.* Here is an example of what happens when mathematicians who cannot understand Biology properly try to formulate things:

A study - especially in psychology - that uses a select group, say students, as its subject population, and jumps to conclusions based on that incomplete information, might not yield results that can be applied to everybody on this planet. *Results obtained by testing a narrow group such as students, or people from one particular culture, cannot be reliably generalized to represent all of humanity. The results do not necessarily apply to ‘people’ in general but specifically to students. The only group to which the general statement correctly applies is the one tested.* This should be mentioned clearly by the researchers. Students are a special population with unique characteristics that produce specific results when tested under specific conditions. Those results may or may not apply to or represent other specific populations, let alone humans in general. This failing is ubiquitous in the ‘scientific’ study of human characteristics.

*There are forces that try to influence the results too.* Here is an example:

(Behavioral research may overstate results

Analysis implicates ambiguous methods and publish-or-perish culture in the United States)
Now read here why *results obtained in labs fail in field conditions*:
Feces in termites' nests block biological pest control
Built-in poop nourishes bacteria that protect notorious Formosan species

Mixing their own poop into nest walls gives Formosan termites a bacterial boost in fighting off human attempts to destroy them with insect plagues.

A bacterial strain found in the fecally-enhanced nest walls of pest termites Coptotermes formosanus helps protect them from a potentially deadly fungus, says entomologist Nan-Yao Su of the University of Florida in Fort Lauderdale. Such live-in boosters could help explain why efforts to control the termites with fungal diseases have been a failure, Su and his colleagues report September 18 in the Proceedings of the Royal Society B.

“You can put the fungus on an insect in a lab dish and say, ‘Hah! We killed the termite,’” Su says. But for termites in their natural colonies, the soil-dwelling fungus Metarhizium anisopliae has failed to devastate.

There lies the difference! *You need not get the same results you get in labs in natural conditions where several unknown and other factors play a major role in the complex ecological webs!*
*And when commercial interests like the tobacco industry fund research, they try to manipulate the results about smoking and its ill effects by downplaying them. Their results might differ from those of studies conducted by medical practitioners.*

Some researchers say these things "might be responsible", not that they certainly "are responsible". Bad science. I agree. Peer review is what checks such anomalies, but sometimes even the peers are unable to find these mistakes. Scientists are human beings too. They too make mistakes. But what bothers me is that those mistakes should not be so bad as to give science a bad name. Personal ambitions have no place in science.

*In psychology you get the most controversial reports, because emotions and feelings are abstract and cannot be proved with certainty. So the author's opinion and speculation, rather than facts, get into the reports. Another such field is Astrophysics, where the inadequacy of human understanding, and of the equipment we have right now, hinders coming to any firm conclusion.*

In the absence of advanced equipment to probe all the complexities of this universe, scientists first put forward theories derived from formulas and calculations based on the available data. These theories will be tested as time goes by and, when found correct, will be incorporated into mainstream science; otherwise they will be discarded. That is why you get so many models and theories in several scientific fields - like the Higgs boson, or "God particle", theory. For now they are just theories and should be taken as such. Scientists caution you that more research is needed in these cases to prove the theories. *Science is a work in progress. Nothing is absolute truth here.*

*Sometimes when you conduct experiments at different times of the day you might get different results!* For example, work on circadian rhythms. Read an interesting article here on how researchers working on the same problem got different results when the work was done at different times of the same day:

*The ambient temperature at which animals are kept can also skew the results.*

Keeping mice in cold cages hides the effects of diet on metabolic disorders. Warming up those mice might lead to more reliable study results, researchers conclude in a study published online November 5 (2015) in Cell Metabolism.

Lab mice living in warmer habitats and fed high-fat and high-cholesterol diets showed more inflammation than they did when fed the same diets in colder homes. In mice engineered to be prone to metabolic disorders, the inflammation sped up the progression of atherosclerosis, a disease that causes hardening of arteries. But the inflammation surprisingly didn’t affect how well cells respond to insulin, a problem for people with obesity and type 2 diabetes.

Although an ideal temperature for mice might be closer to 30 degrees Celsius (86 degrees Fahrenheit), lab mice are typically kept at temperatures between 19° C and 22° C (66° F to 71° F). That temperature range might be OK for fully clothed humans, but it stresses mice, says study coauthor Ajay Chawla, a molecular physiologist at the University of California, San Francisco.

To keep warm, mice move around more and expend more energy than they would at higher, more comfortable temperatures. Heart rate and blood pressure go up, too. And unlike people, who can cope with the stress of being uncomfortably cool — by turning up the heat or putting on more clothes, for example — the mice experience this thermal stress all the time, Chawla notes. “It’s one thing to live in stress and have it dissipate,” he says. “It’s another to live in this kind of stress forever.”

The new results add to the concern that thermal stress affects the results of studies that use mice to model human disease. Earlier studies using mice kept at cooler temperatures could have masked correlations between diet and metabolic disease, and the problem could be broader, Chawla says. “Perhaps we’re not modeling human disease in mice well.” Hmmm....

Source of this observation:

Now read this article on left-handed and right-handed people:

Here there are only two kinds of people to deal with. If you conduct one study, the result might depend on which kind of people happened to score higher in that particular sample - say right-handed people. This gives you one result. And if somebody else undertakes another study at another time and place, left-handed people might turn out to have the higher IQ! Then which one is correct? *So statistics says that, to avoid such mistakes, a large number of studies on large numbers of (sample) people have to be conducted. But in today's world that rule is not being followed properly, and therefore the studies are becoming flawed.*
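The statistical point above can be sketched in a few lines of Python (a toy illustration of my own, with hypothetical group labels and sample sizes): both groups are drawn from one and the same IQ distribution, so the true difference is zero, yet small studies keep declaring a winner - in both directions.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def small_study(n=20):
    """One under-powered 'study': sample n people per group from the
    SAME distribution (mean 100, sd 15) and report which group won."""
    group_a = [random.gauss(100, 15) for _ in range(n)]  # e.g. right-handed
    group_b = [random.gauss(100, 15) for _ in range(n)]  # e.g. left-handed
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    return "A higher" if diff > 0 else "B higher"

results = [small_study() for _ in range(1000)]
print("Studies concluding group A scored higher:", results.count("A higher"))
print("Studies concluding group B scored higher:", results.count("B higher"))
# Roughly half the 'studies' contradict the other half, although the
# two populations are identical by construction.
```

Larger samples, and pooling many studies, shrink this sampling noise - which is exactly the statistical rule the paragraph above says is often skipped.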

Here is another contradiction:

Here you can read another one:

Some time back I read a paper which says luck plays a part in getting cancer. And here is what other scientists working in the field think about it:

Iris Aging Controversy In Biometrics

Here is a dumb study:

It says people who have tattoos and ear piercings drink too much!

Maybe they are influenced by the company they keep, and doing all these things is cool for them. But what is the correlation? Isn't the study dumb? I want to stress the point given at the end of this report: caution against a "tendency to see a tattoo or piercing and automatically profile or stereotype that individual as a 'high-risk person'." Yes, the findings of this study have to be taken with a pinch of salt.

Another study explains why science can't really tell us whether pets are good for human health

because details about the animal-owner relationship are probably critical factors in determining whether pets are beneficial or not. And, again, they're impossible to control in experimental situations. Researchers are working hard to solve these problems but the most we can say for certain is that pets almost certainly benefit the health of some people, some of the time. Not always though! Other people probably don't benefit from owning a pet and, for some, it's likely to be a costly exercise that may increase stress levels and result in health problems. People who own pets that fit their lifestyle and meet their particular needs are most likely to benefit from pet ownership. But they don't need science to tell them they're happy with their animal companion anyway.

One issue is that it's extremely difficult to design conclusive studies in this area of research.

When studying the effects of a new drug, scientists typically assign participants to two groups. Neither the participants nor those responsible for data collection know which participants are receiving the real drug or an identical-looking fake.

Any difference in outcomes between the two groups can then be safely attributed to the effects of the drug. But living animals can clearly not be randomly assigned to people who may not want them or know how to care for them correctly.

Nor can the participants or the researchers be unaware of whether someone is assigned to a pet or placebo; it's just too difficult to hide a big furry dog inside a brightly coloured capsule!

This means *most people in human-animal studies own pets because they want to, which is likely to bias the results.*

So even in studies where there's a clear link between pet ownership and health outcomes, we can rarely say that the health outcomes are caused by the pet rather than some other factor.

People who own pets may have better health, but this may be because healthier people are more likely to own pets in the first place…because they can care for them. And so on. So?!
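The circularity described above ("healthier people own pets in the first place") is a classic confounding problem, and it can be sketched in code. This is entirely a toy model of my own, not data from any study: a hidden baseline-health factor drives both pet ownership and the health outcome, pet ownership has zero direct effect, and yet owners still look healthier.

```python
import random

random.seed(1)  # reproducible illustration

people = []
for _ in range(10_000):
    baseline = random.random()                  # hidden health/fitness factor
    owns_pet = random.random() < baseline       # healthier people adopt pets more often
    outcome = baseline + random.gauss(0, 0.1)   # outcome depends ONLY on baseline
    people.append((owns_pet, outcome))

def mean(xs):
    return sum(xs) / len(xs)

owners = [o for owns, o in people if owns]
non_owners = [o for owns, o in people if not owns]

print(f"pet owners' mean health: {mean(owners):.2f}")
print(f"non-owners' mean health: {mean(non_owners):.2f}")
# Owners come out 'healthier' although pets do nothing in this model:
# the whole gap is created by the hidden confounder.
```

Random assignment would break the link between the hidden factor and ownership - which is precisely what, as the article notes, cannot be done with living animals.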

( )
This is another way we could go wrong in research involving human beings: the persistent Y chromosome and its store of widespread regulatory genes brings up another issue for biologists - the cells of men and women could be biochemically different. Men, with their Y-related genes, will have slightly different cells than women, who have two X chromosomes, and that goes above and beyond the differences related to sex determination. *When biologists experiment with cell lines, they typically don't note whether the cells originally came from a male or a female. "We've been operating with a unisex model for a long time", and it may not be valid. An experiment on an XX cell line may not give the same result as the same experiment run on an XY cell line.*
This matters because some diseases, like autoimmune illnesses, appear to affect women to a greater degree, while other problems, like autism-spectrum disorders, affect more men. But *biologists trying to untangle these mysteries on a cellular level have been, by and large, blind to subtle biochemical differences that could affect their results, because they are not comparing male and female cell lines. It is time to take those blinders off.*
Now we have a very interesting study:

Male Researchers Stress Out Rodents

*Rats and mice show increased stress levels when handled by men rather than women, potentially skewing study results!
Yes, the researchers got different results when male researchers were present than when female researchers were present!*
Male, but not female, experimenters induce intense stress in rodents that can dampen pain responses, according to a paper published today in Nature Methods. Such reactions affect the rodents’ behaviour and potentially confound the results of animal studies, the study suggests.
The authors discovered this surprising gender disparity while investigating whether the presence of experimenters affects rodent pain studies. For years, anecdotal reports have suggested that rodents show a diminished pain response when a handler remains in the room.

A T-shirt worn by a man the previous night, placed in the room with the animals, had the same effect. And so did the scent of chemicals from the armpit, called axillary secretions, some of which are found at higher concentrations in male mammals than in females.

But women experimenters did not alter the animals’ pain response — in fact, a female presence (or that of their T-shirts) seemed to counteract the response to men.

When the authors dug further, they discovered that these male scent stimuli weren’t acting on pain pathways, as an analgesic does. Instead, the stressed-out animals had elevated blood levels of the stress hormone corticosterone. The stress had, in effect, temporarily quashed the pain response.

*It wasn’t just men who caused the stress spike in the rodents, but any nearby male animal, including guinea pigs, rats, cats and dogs. Male cage-mates of the animal being tested were the only exception, and produced no changes in stress hormone levels.

“What this boils down to is that olfactory exposure to male stimuli is stressful for mice — and just shockingly stressful, compared to other known stressors.”*

More than just a curiosity, this stress response can throw a curveball into study results. On reanalysing data from the group's past studies, such as on pain sensitivity to hot water, the researchers found that mice tested by men showed lower baseline pain sensitivity than mice tested by women. The work indirectly demonstrates potential effects on nearly any kind of medical research.
The findings should at least prompt researchers to report the gender of experimenters in their publications and, if the experimenters change mid-stream, to include their gender as a variable in the analysis. Therefore, according to these researchers, animal researchers will have to embrace statistical methods that compensate for a greater range of variability. "We need to think about animals as more like human subjects than as controllable reagents."
What is more interesting is that people got the same results when these experiments were repeated elsewhere.
( )
In animal-based biomedical research, both the sex and the age of the animals studied affect disease phenotypes by modifying their susceptibility, presentation and response to treatment. The accurate reporting of experimental methods and materials, including the sex and age of animals, is essential so that other researchers can build on the results of such studies.  The percentage of papers reporting the sex and age of mice has increased over the past two decades: however, only about 50% of the papers published in 2014 reported these two variables.
When the quality of reporting in six preclinical research areas was compared, evidence was found for different levels of sex bias in these areas: the strongest male bias was observed in cardiovascular disease models, and the strongest female bias was found in infectious disease models. These results demonstrate the ability of text mining to contribute to the ongoing debate about the reproducibility of research, and confirm the need to continue efforts to improve the reporting of experimental methods and materials. ( )
And painkillers and antidepressants work differently for men and women. Women are prescribed drugs that may never have been specifically tested on females!

The risks of popping a painkiller or antidepressant without a second thought depend on your sex.

This is because hormones and genes affect how the body metabolises drugs. Recent research reveals that women are prescribed drugs that may have never been specifically tested on females, as they are excluded from clinical trials under the assumption that ‘one size fits all.’

“Right now, when you go to the doctor and you are given a prescription, it might never have been specifically tested in females,” say the researchers. Although it is believed that a new painkiller or antidepressant will be equally effective in either sex, a growing number of scientists say hormones and genetic differences affect how medicines behave in the body - meaning drugs might affect women differently to men.

Almost all basic research – regardless of whether it involves rodent models, dogs, or humans – is predominately done in males. The majority of research is done with the assumption that men and women are biologically the same.

Experts, in the journal Cell Metabolism, said both men and women must be accounted for in trials to move medical advances forward. They further said that one reason women are excluded from studies is levels of hormones such as oestrogen and progesterone fluctuate during the menstrual cycle.

This may impact the study, so researchers often use men instead. But the sex hormones are implicated in all biological processes, including sensitivity to fatty acids, or the ability to metabolise simple sugars.

Researchers stated that the differences have implications for all clinical trials, whether they are testing the effects of a drug or a body's ability to tolerate an organ transplant. They claim many researchers don't know how to properly include sex as a variable in their experiments, adding that they include females in their studies without addressing whether they are pre- or post-menopause, whether they are on birth control pills, or whether they are taking hormone blockers.

“Without addressing all of these variables in your analysis, you’re still not accurately reflecting the impact of hormones and chromosomes in your research. It would be great if there were drugs that were specifically tested and dosed based on sex.”

*“There are so many variables in medical research that can’t be solved by placing all women, regardless of age, into one category, and certainly can’t be solved by excluding us completely. With the goal of personalised medicine, it is important to begin to address and focus on sex as a biological variable.”*

*Poor-quality chemicals and equipment used by researchers also undermine experiments in molecular biology and drug discovery.*

*When a few causal parameters are omitted, neglected or overlooked, the research can turn out faulty.*
An example can be taken from cancer research. Earlier it was reported that coffee can cause cancer. Even the WHO classified coffee as a possible carcinogen that could lead to bladder cancer. But now people have realized that the 'extreme hotness' of a beverage causes cancer of the oesophagus, not the coffee itself!
So in the earlier studies 'hotness' was not considered as a possible cause; this factor was overlooked, and only the coffee was taken into account. Therefore, that research was flawed! Now it has been corrected.

*High-resolution thinking is very important in research: concentrate on all the finer details, otherwise the results will be flawed. Resolution gives "clarity" to science, as the field demands it. In order to get it, you have to consider and take into account everything that might affect the result of an experiment. Otherwise the results of scientific research become cloudy.* I will give a simple example here. I read an article in one of the prestigious science journals about research on lie detection, which showed how forensic scientists should go about detecting lies. One of the important points mentioned there was about "establishing eye contact". The paper says that if a person is unable to look directly into the eyes of an investigating officer, it suggests that the person is lying and hiding something, and can therefore be considered a "suspect". I pointed out that the research was flawed, as it was not fine-tuned to all the possible truths. I told the people who conducted the research that in my part of the world, the culture tells women not to look directly into the eyes of men who are strangers, as it is considered bad manners. So if we go to some other part of the world, and there we are being investigated for some crime we haven't committed, and our minds, conditioned by our cultures, don't allow us to look directly into the eyes of a male investigating officer, does that mean we are lying and can therefore be treated as suspects?

( A girl from Vietnam who immigrated to the US told me this: Making direct eye contact seems weird to us - this one was difficult for me but I got better at it.  I spent so much of my youth bowing down to elders and revering them.  You knew your place and elders are held in such high esteem and regard - think of how much life and wisdom they have!  It was unimaginable to directly look at an elder in the eye - so startling and confronting!  I shudder at the very behavior even though I am more "elder" ranking now - still there will always be someone older than you.  Anyhow - juxtapose this "norm" to what is expected in the US - direct respectful eye contact. The practice is very difficult to follow! )

In the research work done by the forensic scientists, this aspect of cultural conditioning of the mind was not taken into account. Therefore the work is not of "high-resolution quality" and is flawed. That is what peer review of scientific research does: it acts as a high-resolution instrument where all the aspects of your work, especially the flaws, are put under a microscope and thoroughly examined; it tells you whether you are correct or not, whether you followed all the rules of science or not, and asks you to correct yourself if your work is flawed. Here the criticism is based on facts and rules. Therefore in science you should have clarity (of high quality) about what you are doing, and to get it your thoughts must be tuned to high resolution so as to get all the details correct. No wonder stressed-out scientists make so many mistakes while conducting research alongside running the rat races!

Faulty Antibodies Undermine Widespread Research

Two papers reveal that many commonly used research antibodies don’t bind as believed, highlighting the need to validate these reagents before use.

The ability of research antibodies to bind to their proteins of interest, or to the peptide tags attached to such proteins, can vary depending on the particular amino acids surrounding the target binding sites or modifications to those sites, according to reports published Tuesday (January 28) in Science Signaling. In some cases, such antibodies even cross-react with other proteins, the authors warn.

“These papers, like others, clearly indicate that even commercially available antibodies need thorough evaluations for the respective purposes for which they will be used”.

But these antibodies don’t always bind as expected in experiments, as a number of recent articles have highlighted. Still, the issue “keeps popping up its head because people don’t follow directions for validation,” says pathologist David Rimm of Yale School of Medicine. “[Researchers] don’t take it seriously enough,” he says, explaining that data from experiments with unvalidated antibodies are still being published in the literature.

Before going to peer review, researchers themselves can take precautions to avoid blunders in their work.

Problematic areas can be avoided when people work in groups where each member can contribute, and check and recheck the conditions under which the experiments are conducted.

All researchers in science have to keep the whole picture of their work before them, understand it properly, and discuss it with their group members in detail.

Repeating experiments under various conditions is another way to check and recheck the results.
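Why repetition helps can be shown with a small sketch (my own illustration, with made-up numbers): averaging several independent runs of a noisy experiment gives estimates that scatter far less than any single run does.

```python
import random
import statistics

random.seed(7)  # reproducible illustration

def one_run(true_value=10.0, noise=2.0, n=5):
    """One noisy experiment: the mean of n noisy measurements."""
    return statistics.mean(random.gauss(true_value, noise) for _ in range(n))

# 200 single experiments vs 200 averages of 10 replicated experiments
single_runs = [one_run() for _ in range(200)]
pooled_runs = [statistics.mean(one_run() for _ in range(10)) for _ in range(200)]

print(f"spread of single experiments: {statistics.stdev(single_runs):.2f}")
print(f"spread of 10-run averages:    {statistics.stdev(pooled_runs):.2f}")
# The pooled estimates cluster much more tightly around the true value,
# so a one-off fluke is far less likely to survive replication.
```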

Your lab results fail under field conditions because of unknown factors at work in the field. Fine-tuning your grey matter to all the effects and possibilities is one way to counter this. But still, you never know what is waiting to spoil your results. Years of work can go to waste. Be prepared for these unknown possibilities.
I know, as a person of science, how difficult it is to explain this to people and really build trust in science. Being honest about science's inadequacies is perhaps one way to do it.

Clarity about the problem undertaken - what the main aim of your work is, how to go about it, what the hurdles are, how to overcome them, how to get flawless results and how to come to conclusions - is very important in any scientific research.

Not participating in rat races, and conducting studies just for the sake of science, is very important. What is the use of a flawed study - even if it is the first of its kind - if you regret it later?

I think the suggestions given above, when followed properly, will help researchers in science to conduct flawless studies.

When there are no controversies and confusion, I am sure people will trust science more and more.


Replies to This Discussion

Many researchers maintain close financial ties to the drug companies that stand to gain from the results of their research.
Congress passed the Physician Payments Sunshine Act, which, starting in 2013, will compel pharmaceutical firms and medical device manufacturers to reveal most of the money that they are putting into the pockets of physicians.
Yet as the case study in this article shows, neither scientific institutions nor the scientists themselves have shown a willingness to police conflicts of interest in research.

How Drug Company Money Is Undermining Science

The pharmaceutical industry funnels money to prominent scientists who are doing research that affects its products--and nobody can stop it

In the past few years the pharmaceutical industry has come up with many ways to funnel large sums of money—enough sometimes to put a child through college—into the pockets of independent medical researchers who are doing work that bears, directly or indirectly, on the drugs these firms are making and marketing. The problem is not just with the drug companies and the researchers but with the whole system—the granting institutions, the research labs, the journals, the professional societies, and so forth. No one is providing the checks and balances necessary to avoid conflicts. Instead organizations seem to shift responsibility from one to the other, leaving gaps in enforcement that researchers and drug companies navigate with ease, and then shroud their deliberations in secrecy.


Here you will find another blog that says the same:

And another one:

Futile pieces of research as news


Contradictory reports: 

Weak Immune Response in Women May Raise Autism Risk in Children

Researchers rethink the link between infections in pregnant women and autism in children


Heat kills: We need consistency in the way we measure these deaths

Heat or cold? Which one kills more people? Two papers report two different things. Why? Because there was no consistency!
What Do Scientific Studies Show?


Retraction Watch

Tracking retractions as a window into the scientific process

Caught Our Notice: Dear peer reviewer, please read the methods sect...


Title: Plasma contributes to the antimicrobial activity of whole blood against Mycobacterium tuberculosis

What Caught Our Attention: A big peer review (and perhaps academic mentorship) fail.  These researchers used the wrong anticoagulant for their blood samples, leading them to believe that certain blood components were fighting microbes. The authors counted the number of colonies to show how well or poorly Tuberculin mycobacteria were growing in cultures — but blood samples need anticoagulants to prevent clots before analysis, and they used an anticoagulant that actually prevented the microbes from colonizing. The authors (and reviewers) should have known this from a 1999 CDC publication about the diagnosis of tuberculosis (echoed in virtually every public health pamphlet since), which explicitly says not to use their anticoagulant — ethylenediaminetetraacetic acid (EDTA) — if intending to culture the blood sample for mycobacteria.

At least the post-publication peer review process seemed to work…a year later.

Journal: Innate Immunity

Authors: Ramiro López-Medrano, José Manuel Guerra-Laso, Eduardo López-Fidalgo, Cristina Diez-Tascón, Silvia García-García, Sara Blanco-Conde, Octavio Miguel Rivero-Lezcano

Affiliations: Hospital Comarcal del Bierzo, León, Spain; Complejo Asistencial Universitario de León, (CAULE), León, Spain; Universidad de León, León, Spain; Fundación Instituto de Estudios de Ciencias de la Salud de Castilla y León (IECSCYL), León, Spain

The Notice:  

The authors did not realise that the use of EDTA created errors in the paper. Although mycobacteria remain alive in the presence of EDTA, the formation of visible colonies is inhibited, which affected the enumeration of colony forming units for the quantification of the antimicrobial activity. Consequently, the absence of colonies was erroneously interpreted as mycobactericidal activity. Due to this error the article’s main finding about the antimycobacterial activity detected in plasma is incorrect.

Date of Article: August 22, 2016

Times Cited, according to Clarivate Analytics’ Web of Science: Zero

Date of Notice: September 27, 2017
An expert usually:
(1) Looks at the authors' affiliations.
(2) Looks at the funding source.
(3) Looks through all the figures and arrives at their own conclusions.
(4) Reads the Results and Discussion and sees whether they agree with the authors.

Impact factors are calculated mostly by counting how many times a journal gets cited. What's missing is the fact that authors often cite articles they disagree with, or are proving to be flat-out wrong. The metric is also prone to artificial inflation, since sub-fields will all cite certain journals and boost the metric that way.
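To make the "counting citations" point concrete, here is a minimal sketch of how the standard two-year impact factor is computed. All the numbers are hypothetical, chosen only to illustrate the arithmetic; the real calculation is done by Clarivate from its own citation database.

```python
# Hypothetical sketch of the two-year journal impact factor:
# citations received this year to articles the journal published in the
# previous two years, divided by the number of citable items it
# published in those two years.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor. Note that the numerator counts every
    citation the same way, whether the citing paper agrees with the
    cited one or is refuting it."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal that published 200 citable items over two years, whose
# articles from those years were cited 500 times this year:
print(impact_factor(500, 200))  # prints 2.5
```

The sketch makes the article's criticism visible: the formula has no way to distinguish an approving citation from a critical one, so "highly cited" does not mean "widely agreed with".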

I like article-level metrics, but these have flaws as well. They tell you how many people looked at your article, not how many agreed with your conclusions.

However, the only real way is post-publication review. A new site does exactly this: only authors whose emails have been published in a peer-reviewed journal are allowed to comment. This ensures that it's not just some "random internet person" commenting; it's an actual scientist who has published in the literature.

In general, the public should bear in mind that the scientific record, i.e. peer-reviewed articles, is not a textbook. It is a record of the conversation between experts discussing what they found out in the lab. Once an experiment is independently replicated and a consensus is reached, it turns into the textbook.
5 Steps to Separate Science from Hype, No PhD Required
Spring (and Scientific Fraud) Is Busting Out all over
Behavioral research may overstate results
Analysis implicates ambiguous methods and publish-or-perish culture in the United States
Here’s a hard pill to swallow for practitioners of “soft” sciences: Behavioral studies statistically exaggerate findings more often than investigations of biological processes do, especially if U.S. scientists are involved, a new report finds.

The inflated results stem from there being little consensus about experimental methods and measures in behavioral research, combined with intense publish-or-perish pressure in the United States, say evolutionary biologist Daniele Fanelli of the University of Edinburgh and epidemiologist John Ioannidis of Stanford University. Without clear theories and standardized procedures, behavioral scientists have a lot of leeway to produce results that they expect to find, even if they’re not aware of doing so, the researchers conclude Aug. 26 in the Proceedings of the National Academy of Sciences.
“U.S. studies in our sample overestimated effects not because of a simple reluctance of researchers to publish nonsignificant findings, but because of how studies were conceived and carried out,” Fanelli says.

The new study appears as psychologists consider ways to clean up research practices (SN: 6/1/13, p. 26).

“Sadly, the general finding about U.S. science sounds rather plausible,” remarks psychologist Hal Pashler of the University of California, San Diego.

Fanelli and Ioannidis examined the primary findings of 1,174 studies that appeared in 82 recently published meta-analyses. A meta-analysis weighs and combines results from related studies to estimate the true effect in a set of reported findings (SN: 3/27/10, p. 26).
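The "weighs and combines" step can be sketched in a few lines. This is a minimal fixed-effect meta-analysis using inverse-variance weights (one standard approach, not necessarily the exact method used in the 82 meta-analyses the authors examined); the study numbers below are hypothetical.

```python
# Minimal sketch of a fixed-effect meta-analysis: each study's effect
# estimate is weighted by the inverse of its variance, so larger, more
# precise studies count for more in the pooled estimate.

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled

# Three hypothetical studies of the same effect:
effects = [0.80, 0.30, 0.45]    # observed effect sizes
variances = [0.10, 0.02, 0.05]  # smaller variance = bigger, more precise study

print(round(fixed_effect_summary(effects, variances), 3))  # prints 0.4
```

The example also illustrates the article's main finding: the small, noisy study (effect 0.80) reports double the pooled estimate (0.40), which is exactly the kind of exaggeration a meta-analysis exposes.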

The researchers chose psychological and other behavioral studies that examined impulsive acts or other deeds that can be measured with different scales or instruments. Studies in genetics and several other nonbehavioral fields investigated unambiguous outcomes, such as death. Biobehavioral studies from neurology and a few other areas probed a combination of biological and behavioral effects.

The studies’ authors primarily came from the United States, Europe and Asia.

Of the three study types, individual behavioral studies were most likely to report effects greater than those calculated in associated meta-analyses. Behavioral studies with a lead author in the United States showed an especially strong tendency to find what researchers had predicted before performing the research.

Biobehavioral studies displayed a smaller “U.S. effect.” No such tendency characterized nonbehavioral investigations, in which findings differed from those of meta-analyses mostly due to the use of samples that were unrepresentative of populations being studied, the researchers say.

It’s doubtful that the U.S. effect reflects a superior ability of U.S. scientists to formulate correct hypotheses, the researchers add. Fanelli and Ioannidis accounted for the studies’ differing choices of hypotheses across fields and for whether researchers recruited large samples of participants, which makes it easier to detect true effects.

Debunking The Myth Of The Power Of Eye Contact
You may have heard that making eye contact is an effective way to help communicate an important message or make an effective point, but new research published in the journal Psychological Science found that eye contact may actually make people more resistant to ideas that they aren’t inclined to agree with.

“There is a lot of cultural lore about the power of eye contact as an influence tool,” said study author Frances Chen, a University of British Columbia professor who conducted the research at the University of Freiburg in Germany. “But our findings show that direct eye contact makes skeptical listeners less likely to change their minds, not more, as previously believed.”

To break down this myth of eye contact as an influence tool, Chen and colleagues turned to state-of-the-art eye-tracking technology. In an initial experiment, participants were instructed to freely watch videos of speakers espousing various viewpoints on controversial sociopolitical issues.

By tracking study participants’ eye movements during these videos, the researchers found that the more volunteers watched a speaker’s eyes, the less persuaded they were by the argument: volunteers’ attitudes on the issues changed less the more eye contact they made.

Extended periods of eye contact were only found to be associated with greater receptiveness to the opinion being presented if participants already agreed with the speaker.

In a second experiment, participants were asked to look at either only the eyes or only the mouths of speakers presenting arguments counter to the volunteers’ own attitudes. The results of this trial indicated that participants who watched the speaker’s eyes were less interested in the argument being presented and less receptive to the notion of interacting with someone espousing the opposing view. These participants showed a lower level of persuasion than those who were told to watch the speaker’s mouth.

According to study author Julia Minson of Harvard’s Kennedy School of Government, the study’s findings emphasize the idea that eye contact can mean different things in different situations. While eye contact could denote a positive connection in friendly situations, it may be seen as an attempt at dominance or intimidation in disagreeable situations. Minson advocated a “soft sell” approach as more effective in the latter situation.

“Whether you’re a politician or a parent, it might be helpful to keep in mind that trying to maintain eye contact may backfire if you’re trying to convince someone who has a different set of beliefs than you,” she said.

The study researchers said future studies will examine whether eye contact is associated with certain patterns of brain activity, hormonal responses and increases in heart rate during a disagreeable situation.

“Eye contact is so primal that we think it probably goes along with a whole suite of subconscious physiological changes,” Chen said.

In a 2006 article for Esquire, writer Tom Chiarella described the power that eye contact can have in social situations.

“A person’s gaze has weight, resistance, muscularity,” Chiarella wrote. “Clearly, there are people who use their eyes well. You know them: the sales rep, the fundraiser, the tyrannical supervisor. Their eyes force the question. These people may be as dumb as streetlamps, but they are an undeniable presence in the room. They know they must be dealt with. You know it, too.”

Source: Brett Smith for Your Universe Online

Have I really been in the field of science all this time? You have truly shown me what a scientific mind should be like!


© 2024   Created by Dr. Krishna Kumari Challa.