Science Simplified!

                       JAI VIGNAN

All about Science - to remove misconceptions and encourage scientific temper

Communicating science to the common people

'To make them see the world differently through the beautiful lens of science'

  • Dr. Krishna Kumari Challa

    New study finds that traumatic stress is associated with a smaller cerebellum

    Adults with posttraumatic stress disorder (PTSD) have smaller cerebellums, according to new research from a brain imaging study.

    The cerebellum, a part of the brain well-known for helping to coordinate movement and balance, can influence emotion and memory, which are impacted by PTSD. What isn't known yet is whether a smaller cerebellum predisposes a person to PTSD or PTSD shrinks the brain region.

    The differences were largely within the posterior lobe, where many of the cerebellum's more cognitive functions appear to localize, and in the vermis, which is linked to emotional processing.

    PTSD is a mental health disorder brought about by experiencing or witnessing a traumatic event, such as a car accident, sexual abuse, or military combat.

    Though most people who endure a traumatic experience are spared from the disorder, about 6% of adults develop PTSD, which is often marked by increased fear and reliving the traumatizing event.

    Researchers have found several brain regions involved in PTSD, including the almond-shaped amygdala that regulates fear, and the hippocampus, a critical hub for processing memories and routing them throughout the brain.

    The cerebellum (Latin for "little brain"), by contrast, has received less attention for its role in PTSD.

    A grapefruit-sized lump of cells that looks like it was clumsily tacked underneath the back of the brain as an afterthought, the cerebellum is best known for its role in coordinating balance and choreographing complex movements, like walking or dancing. But there is much more to it than that.

    It's a really complex area. If you look at how densely populated with neurons it is relative to the rest of the brain, it's not that surprising that it does a lot more than balance and movement.

    Dense may be an understatement. The cerebellum makes up just 10% of the brain's total volume, but packs in more than half of the brain's 86 billion nerve cells.
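
    As a rough back-of-the-envelope illustration of that density claim, here is a minimal Python sketch. It assumes a straight 50/50 split of the 86 billion neurons as a conservative reading of "more than half"; the numbers are the rounded figures quoted above, not exact counts.

```python
# Back-of-the-envelope check of the density claim above (illustrative only).
TOTAL_NEURONS = 86e9                 # total nerve cells in the human brain
CEREBELLUM_VOLUME_FRACTION = 0.10    # ~10% of total brain volume
CEREBELLUM_NEURON_FRACTION = 0.50    # "more than half" -> use one half as a lower bound

cerebellar_density = CEREBELLUM_NEURON_FRACTION / CEREBELLUM_VOLUME_FRACTION
rest_density = (1 - CEREBELLUM_NEURON_FRACTION) / (1 - CEREBELLUM_VOLUME_FRACTION)

print(f"Cerebellum holds at least {CEREBELLUM_NEURON_FRACTION * TOTAL_NEURONS:.0e} neurons")
print(f"Neuron density, cerebellum vs. rest of brain: ~{cerebellar_density / rest_density:.0f}x")
# -> roughly 9x denser in neurons than the rest of the brain, on these rough figures
```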

    Researchers have recently observed changes to the size of the tightly-packed cerebellum in PTSD. Most of that research, however, is limited by either a small dataset (fewer than 100 participants), broad anatomical boundaries, or a sole focus on certain patient populations, such as veterans or sexual assault victims with PTSD.

    Part 1

  • Dr. Krishna Kumari Challa

    To overcome those limitations, researchers from more than 40 research groups that are part of a larger data-sharing initiative pooled their brain imaging scans to study PTSD as broadly and universally as possible. The group ended up with MRI scans from 4,215 adults, about a third of whom had been diagnosed with PTSD.

    The result of this thorough methodology was a fairly simple and consistent finding: PTSD patients had cerebellums about 2% smaller.

    The results are an important first step at looking at how and where PTSD affects the brain.

    Smaller Total and Subregional Cerebellar Volumes in Posttraumatic Stress Disorder: A Mega-Analysis by the ENIGMA-PGC PTSD Workgroup, Molecular Psychiatry (2024). DOI: 10.1038/s41380-023-02352-0, www.nature.com/articles/s41380-023-02352-0

    Part 2

    **

  • Dr. Krishna Kumari Challa

    More than 900 chemicals, many found in consumer products and the environment, display breast-cancer-causing traits

    With tens of thousands of synthetic chemicals on the market, and new ones in development all the time, knowing which ones might be harmful is a challenge both for the federal agencies that regulate them and the companies that use them in products. Now scientists have found a quick way to predict whether a chemical is likely to cause breast cancer based on whether the chemical harbors specific traits.

    This new study provides a roadmap for regulators and manufacturers to quickly flag chemicals that could contribute to breast cancer in order to prevent their use in consumer products and find safer alternatives.

    The study, titled "Application of the Key Characteristics framework to identify potential breast carcinogens using publicly available in vivo, in vitro, and in silico data," was published in Environmental Health Perspectives.

    The researchers identified a total of 921 chemicals that could promote the development of breast cancer. Ninety percent of the chemicals are ones to which people are commonly exposed in consumer products, food and drink, pesticides, medications, and workplaces.

    A breakdown of the list revealed 278 chemicals that cause mammary tumors in animals. More than half of the chemicals cause cells to make more estrogen or progesterone, and about a third activate the estrogen receptor.

    Breast cancer is a hormonal disease, so the fact that so many chemicals can alter estrogen and progesterone is concerning.

    Since damage to DNA can also trigger cancer, the researchers searched additional databases and found that 420 of the chemicals on their list both damage DNA and alter hormones, which could make them riskier. What's more, the team's analysis found that chemicals that cause mammary tumors in animals are more likely to have these DNA-damaging and hormone-disrupting characteristics than ones that don't.

    These findings show that screening chemicals for these hormonal traits could be an effective strategy for flagging potential breast carcinogens.

    Application of the Key Characteristics framework to identify potential breast carcinogens using publicly available in vivo, in vitro, and in silico data, Environmental Health Perspectives (2024). DOI: 10.1289/EHP13233, ehp.niehs.nih.gov/ehp13233

  • Dr. Krishna Kumari Challa

    Higher viral load during HIV infection can shape viral evolution

    A new paper in Molecular Biology and Evolution finds that HIV populations in people with higher viral loads also have higher rates of viral recombination. In effect, the more HIV in the blood, the easier it is for the virus to diversify.

    One of the reasons HIV has historically been so difficult to combat is the virus's exceptionally high rate of recombination. Recombination enables the exchange of genetic information across strains of the virus and drives HIV's evolution within people. This genetic exchange helps the virus evade the immune system and become resistant to many drugs designed to treat HIV.

    More generally, recombination is an important evolutionary driver, permitting organisms to purge destructive mutations and combine beneficial ones.

    Understanding the factors that impact recombination rate in a well-studied system such as HIV can help uncover some of the effects that recombination has on evolution more broadly.

    One important yet understudied step in HIV recombination is coinfection, in which two different virus particles infect the same cell.

    The researchers involved with the new study hypothesized that people with higher viral loads (more HIV in the blood) would have more cells that were coinfected, which would lead to higher rates of recombination for the virus. To investigate this hypothesis, they developed a new approach called Recombination Analysis via Time Series Linkage Decay (RATS-LD) to quantify recombination using genetic associations between mutations over time.

    The investigation found that while HIV populations with viral loads in the lowest third of the data set have recombination rates in line with previous estimates, populations with viral loads in the upper third have a median recombination rate that is nearly six-fold higher. Furthermore, the researchers observed patterns of viral load and effective recombination rate increasing simultaneously within single individuals.
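
    To make that tertile comparison concrete, here is a minimal, hypothetical Python sketch of the kind of analysis described above. It is not the authors' RATS-LD implementation; the toy data and the tertile_medians helper are invented purely for illustration.

```python
# Illustrative sketch (not the authors' RATS-LD code): given per-individual viral
# loads and recombination-rate estimates, split the cohort into viral-load tertiles
# and compare median recombination rates, mirroring the comparison described above.
import numpy as np

def tertile_medians(viral_load, recomb_rate):
    """Median recombination rate in the lowest and highest viral-load thirds."""
    viral_load = np.asarray(viral_load, dtype=float)
    recomb_rate = np.asarray(recomb_rate, dtype=float)
    low_cut, high_cut = np.percentile(viral_load, [100 / 3, 200 / 3])
    low_median = np.median(recomb_rate[viral_load <= low_cut])
    high_median = np.median(recomb_rate[viral_load >= high_cut])
    return low_median, high_median

# Toy data, purely to exercise the comparison:
rng = np.random.default_rng(0)
vl = rng.lognormal(mean=10, sigma=1.5, size=90)                        # viral loads (copies/mL)
rate = 1e-5 * (vl / np.median(vl)) ** 0.8 * rng.lognormal(0, 0.3, 90)  # per-site recombination rates

low, high = tertile_medians(vl, rate)
print(f"Median rate, lowest third: {low:.2e}; highest third: {high:.2e} (~{high / low:.1f}-fold higher)")
```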

    Elena Romero et al, Elevated HIV viral load is associated with higher recombination rate in vivo, Molecular Biology and Evolution (2023). DOI: 10.1093/molbev/msad260, academic.oup.com/mbe/article-l … .1093/molbev/msad260

  • Dr. Krishna Kumari Challa

    Is Vitamin D that important?
    For a time, vitamin D was touted as a potential miracle vitamin, thought to prevent everything from heart disease to cancer to diabetes. But several recent randomized controlled trials showed no significant benefit of vitamin D for any major condition. To be sure, vitamin D plays a vital role in health, but most people get all they need from several minutes of daily sunlight.

    Why this matters: Overtesting by doctors of vitamin D serum levels remains widespread. Experts disagree about how to interpret the test and many doctors still unnecessarily recommend vitamin D supplements. The supplements represent more than a $1 billion market, despite the lack of evidence that they are necessary for the majority of people (not to mention the fact that vitamins are not independently tested for purity or dosing).

    What the experts say: “There’s a religiosity around vitamin D,” says Clifford Rosen, an endocrinologist at the Maine Medical Center Research Institute. “The evidence is out there. People don’t want to pay attention to it.”


  • Dr. Krishna Kumari Challa

    Scientists tame chaotic protein fueling 75% of cancers

    MYC is the shapeless protein responsible for making the majority of human cancer cases worse. Researchers have now found a way to rein it in, offering hope for a new era of treatments.

    In healthy cells, MYC helps guide the process of transcription, in which genetic information is converted from DNA into RNA and, eventually, into proteins. Normally, MYC's activity is strictly controlled. In cancer cells, it becomes hyperactive and is not regulated properly.

    MYC is less like food for cancer cells and more like a steroid that promotes cancer's rapid growth. That is why MYC is a culprit in 75% of all human cancer cases.

    In 2018, the researchers noticed that changing the rigidity and shape of a peptide improves its ability to interact with structureless protein targets such as MYC.

    Peptides can assume a variety of forms, shapes, and positions. Once you bend and connect them to form rings, they cannot adopt other possible forms, so they then have a low level of randomness. This helps with the binding.

    Part 1

  • Dr. Krishna Kumari Challa

    In a new paper, scientists describe a new peptide that binds directly to MYC with what is called sub-micromolar affinity, which is getting closer to the strength of an antibody. In other words, it is a very strong and specific interaction.

    The researchers have now improved the binding performance of this peptide by two orders of magnitude over previous versions, bringing it closer to their drug development goals.

    Currently, the researchers are using lipid nanoparticles to deliver the peptide into cells. These are small spheres made of fatty molecules, and they are not ideal for use as a drug. Going forward, the researchers are developing chemistry that improves the lead peptide's ability to get inside cells.

    Once the peptide is in the cell, it will bind to MYC, changing MYC's physical properties and preventing it from performing transcription activities.

    MYC represents chaos, basically, because it lacks structure. That, and its direct impact on so many types of cancer, makes it one of the holy grails of cancer drug development.

    Zhonghan Li et al, MYC-Targeting Inhibitors Generated from a Stereodiversified Bicyclic Peptide Library, Journal of the American Chemical Society (2024). DOI: 10.1021/jacs.3c09615

    Part 2

  • Dr. Krishna Kumari Challa

    Watching others visibly dislike vegetables might make onlookers dislike them, too

    Humans learn which behaviours pay off and which don't from watching others. Based on this, we may draw conclusions about how to act—or eat. In the latter's case, people may use each other as guides to determine what and how much to eat. This is called social modeling and is one of the most powerful social influences on eating behaviour.

    In a new study, researchers  investigated whether observing others' facial expressions while eating raw broccoli influenced young women's liking and desire to eat raw broccoli.

    They show that watching others eating a raw vegetable with a negative facial expression reduces adult women's liking of that vegetable, but not their desire to eat it. This highlights the power of observing food dislike on adults' eating behaviour. 

    Previous research shows that behaviors are more likely to be imitated if positive consequences are observed, while the reverse is true if negative outcomes are witnessed. In the present study, however, this correlation was observed only partially: Exposure to models eating broccoli while conveying negative facial expressions resulted in a greater reduction in liking ratings, whereas the reverse did not hold. Watching others eating a raw vegetable with a positive facial expression did not increase adults' vegetable liking or eating desire.

    One possible explanation may be that avoiding any food—irrespective of whether it is commonly liked or disliked—that appears disgusting can protect us from eating something that tastes bad or is harmful. Another reason may be that smiling while eating is perceived as an untypical display of liking a certain food.

    "This might imply that watching someone eating a raw vegetable with positive facial expressions does not seem an effective strategy for increasing adults' vegetable consumption.

    Part 1

  • Dr. Krishna Kumari Challa

    Copy and taste

    There is still much that needs to be understood about the interplay of obvious enjoyment and the liking of food. For example, the researchers have focused on adults, and while this was not tested on this occasion, they said that given the power of negative facial expressions, and because children tend to be less willing to try vegetables by default, these findings could generalize to kids.

    "For example, if a child sees their parent showing disgust while eating vegetables, this could have negative consequences on children's vegetable acceptance.

    In the present study, participants also watched short video clips rather than watching people eat in front of them. This allowed them to observe the dynamic nature of reactive facial expressions, which is more realistic than looking at static pictures; however, in the future, an important focus will be to examine the effect of watching live food enjoyment on eating behavior, the researchers said.

    "We also need more research to see whether the findings from this study translate to adults' actual intake of vegetables.

    Katie L. Edwards et al, Facial expressions and vegetable liking, Frontiers in Psychology (2024). DOI: 10.3389/fpsyg.2023.1252369

    Part 2

    **

  • Dr. Krishna Kumari Challa

    Lab-grown retinas explain why people see colours some animals can't

    With human retinas grown in a petri dish, researchers discovered how an offshoot of vitamin A generates the specialized cells that enable people to see millions of colors, an ability that dogs, cats, and other mammals do not possess.

    These retinal organoids allowed scientists for the first time to study this very human-specific trait. It's a huge question about what makes us human, what makes us different.

    The findings, published in PLOS Biology, increase understanding of color blindness, age-related vision loss, and other diseases linked to photoreceptor cells. They also demonstrate how genes instruct the human retina to make specific color-sensing cells, a process scientists thought was controlled by thyroid hormones. By tweaking the cellular properties of the organoids, the research team found that a molecule called retinoic acid determines whether a cone will specialize in sensing red or green light. Only humans with normal vision and closely related primates develop the red sensor.

    Scientists for decades thought red cones formed through a coin toss mechanism where the cells haphazardly commit to sensing green or red wavelengths—and recent research hinted that the process could be controlled by thyroid hormone levels. Instead, the new research suggests red cones materialize through a specific sequence of events orchestrated by retinoic acid within the eye.

    The team found that high levels of retinoic acid in early development of the organoids correlated with higher ratios of green cones. Conversely, low levels of the acid changed the retina's genetic instructions and generated red cones later in development.

    There still might be some randomness to it, but the big finding is that you make retinoic acid early in development. This timing really matters for learning and understanding how these cone cells are made.

    Part 1

  • Dr. Krishna Kumari Challa

    Green and red cone cells are remarkably similar except for a protein called opsin, which detects light and tells the brain what colours people see. Different opsins determine whether a cone will become a green or a red sensor, though the genes of each sensor remain 96% identical. With a breakthrough technique that spotted those subtle genetic differences in the organoids, the researchers tracked cone ratio changes over 200 days.

    The researchers also mapped the widely varying ratios of these cells in the retinas of 700 adults. Seeing how the green and red cone proportions changed in humans was one of the most surprising findings of the new research.

    Retinoic acid signaling regulates spatiotemporal specification of human green and red cones, PLoS Biology (2024). DOI: 10.1371/journal.pbio.3002464, journals.plos.org/plosbiology/ … journal.pbio.3002464

    --

    Scientists still don't fully understand how the ratio of green and red cones can vary so greatly without affecting someone's vision.

    To build understanding of diseases like macular degeneration, which causes loss of light-sensing cells near the center of the retina, the researchers are working with other Johns Hopkins labs. The goal is to deepen their understanding of how cones and other cells link to the nervous system.

    Part 2

  • Dr. Krishna Kumari Challa

    Biomaterials contribute greatly to reduction of greenhouse gas emissions

    On average, bio-based products emit 45% less greenhouse gas emissions than the fossil materials they replace, according to research conducted by Radboud University, published in Nature Communications. At the same time, there is a large variation between individual bio-based products and more efforts are required to achieve climate neutrality. Additionally, biomaterials may have less favorable environmental impacts in other areas.

    Globally, there is a lot of investment in developing new materials from biomass, commonly known as biomaterials, to mitigate CO2 emissions from fossil materials. Biomaterials are derived from plants and are intended to replace materials made from fossil fuels, such as bio-plastics or bio-fibers (for clothing). It is assumed that biomaterials are better in terms of environmental impact.

    Research shows that, on average, new biomaterials emit 45% less CO2 than their counterparts made from fossil fuels. The researchers analyzed data from 98 new biomaterials reported in 130 international studies. "These studies considered the entire chain: from raw material extraction and production to the final waste processing."

    Part 1

  • Dr. Krishna Kumari Challa

    Even though CO2 emissions on the whole decrease, significant differences exist between and among biomaterials, necessitating more action to achieve complete climate neutrality across the entire production chain. No material is 100% climate-neutral. Some are close, but others even emit more CO2 than the fossil materials they replace.

    Another consideration is that, despite reducing CO2 emissions, the production of biomaterials may still cause other environmental impacts. For instance, through the use of fertilizers in the production of biomass used for biomaterials. This can lead to eutrophication: an excess of nutrients resulting in oxygen depletion in surface waters.

    The researchers calculated that, on average, the production of biomaterials contributed to an increase in eutrophication impact. Reducing CO2 emissions is very important in mitigating climate change; however, we should avoid shifting the impact to other areas. Extra attention is therefore needed if we decide to transition to biomaterials on a large scale.

    Emma A. R. Zuiderveen et al, The potential of emerging bio-based products to reduce environmental impacts, Nature Communications (2023). DOI: 10.1038/s41467-023-43797-9

    Part 2

    **

  • Dr. Krishna Kumari Challa

    Mouse study finds aging sperm affects microRNA, increasing the risk of neurodevelopmental disorders

    A recent study has reported that changes in mice sperm microRNAs brought about by aging may affect the growth and development of offspring. The finding adds to the growing literature on the effects of paternal aging on offspring.

    Marriages and childbearing later in life are increasingly becoming the norm. While the impacts of maternal age on offspring, such as a higher risk of miscarriage and Down syndrome, are widely understood, the impacts from the paternal side are less so. Yet this is changing.

    Recent epidemiological studies have demonstrated that paternal aging exerts a more substantial influence on the heightened risk of neurodevelopmental disorders such as autism spectrum disorder.

    A research team has previously revealed* that epigenetic factors, including histone modifications in spermatogenesis and DNA methylation in mice sperm, undergo changes with age. These alterations might lead to transgenerational effects.

    However, the impact of paternal aging on microRNAs (miRNAs), small, non-coding RNA molecules that play a crucial role in regulating gene expression, remains under-explored.

    * Misako Tatehana et al. Comprehensive histochemical profiles of histone modification in male germline cells during meiosis and spermiogenesis: Comparison of young and aged testes in mice, PLOS ONE (2020). DOI: 10.1371/journal.pone.0230930

    Part 1

  • Dr. Krishna Kumari Challa

    To rectify this, the same research team has conducted a comprehensive analysis of age-related variations in microRNAs in mice sperm. They compared microRNAs in sperm from mice aged 3, 12, and 20 months and identified the microRNAs that had changed in quantity.

    The researchers discovered significant age-associated differences in the microRNAs. Some changes were in microRNAs responsible for regulating the nervous system and genes related to autism spectrum disorder, and these altered microRNAs included those transferred to fertilized eggs.

    The present study reveals a potential link between paternal aging and alterations in sperm microRNAs, underscoring the significance of investigating the impact of sperm microRNAs on offspring, an aspect that has been relatively overlooked in previous research.

    The anticipation is that further exploration of epigenetic factors, specifically microRNAs, will not only contribute to unraveling the pathogenic mechanisms underlying neurodevelopmental disorders but will also offer insights into promoting the health and disease prevention of successive generations.

    Kazusa Miyahara et al, Investigating the impact of paternal aging on murine sperm miRNA profiles and their potential link to autism spectrum disorder, Scientific Reports (2023). DOI: 10.1038/s41598-023-47878-z

    Part 2

  • Dr. Krishna Kumari Challa

    Bizarre Galaxy Discovered With Seemingly No Stars Whatsoever

    A newly discovered object is stretching our understanding of what constitutes a galaxy.

    Called J0613+52, this massive blob of something some 270 million light-years away appears to have no stars whatsoever. At least, none that can be seen. It's just a haze made of the kind of gas that's found between stars in normal galaxies, drifting around on its own.

    Its mass and motion appear to be normal for what we'd expect of a spiral galaxy… in fact, if you extracted the stars from a spiral galaxy like the Milky Way or Andromeda, J0613+52 is pretty much what you'd end up with.

    According to a team of astronomers, it could be the first discovery of a primordial galaxy in the nearby Universe – a galaxy made up mostly of the gas that formed at the beginning of time.

    The object appears to be isolated and undisturbed, having experienced no gravitational interactions over the course of 13.8 billion years that would have disrupted the gas, either tearing it apart or pushing it into the clumps needed to trigger significant star formation. This makes J0613+52 an object unlike any other we've ever seen before.

    It's a galaxy made only out of gas – it has no visible stars. Stars could be there, we just can't see them.

    The discovery – one made purely by chance – has been presented at the 243rd meeting of the American Astronomical Society.

  • Dr. Krishna Kumari Challa

    Startling Signs of Gravity's Laws Breaking Down Detected in Twin Stars

    In 1859, French astronomer and mathematician Urbain Le Verrier detected something strange: Mercury deviated in its dance around the Sun, defying the orderly precession predicted by Newtonian physics.

    This odd anomaly couldn't be explained by unknown planets tugging at Mercury's orbit; only by physicist Albert Einstein's 1915 general theory of relativity, which describes how gravity creates curves in the fabric of space-time.

    Einstein's general theory has held strong in the century since, but there are a few things about the Universe his mind-bending model can't explain. It breaks down in the centers of black holes and at the dawn of the Universe, for example, and doesn't fit very easily with quantum mechanics, leading some physicists to ponder alternative takes on how gravity works.

    While those ideas remain fringe theories, the discovery of gravitational anomalies in widely separated twin stars at infinitesimally low acceleration is once again challenging Einstein's general theory.

    Part 1

  • Dr. Krishna Kumari Challa

    In a new study, astrophysicist Kyu-Hyun Chae of Sejong University in Korea has analyzed nearly 2,500 wide binary star systems observed by European Space Agency's Gaia space telescope, arriving at the conclusion that standard gravity is breaking down at certain points within them.

    Chae first reported finding gravitational anomalies midway through 2023 in a study of the orbital motions of wide binaries, anomalies which he thought represented evidence of one theory of modified gravity, called modified Newtonian dynamics (MOND).

    However, some physicists disagreed and instead suggested his sample had been 'contaminated' by the pull of undetected close companions in the binary star systems. In other words, the larger-than-expected accelerations Chae observed in some wide binaries were more likely the effect of interlopers lurking in the shadows that Chae had missed.

    So the Sejong University physicist sought to test his methods again in a smaller, refined subset of 'pure' binary stars. Chae found that closely orbiting twin stars were behaving consistently with classical Newtonian dynamics, so no problems there.

    Part 2

  • Dr. Krishna Kumari Challa

    But binary stars separated by more than 2,000 astronomical units appeared to get a velocity 'boost' at low accelerations, inconsistent with what classical mechanics predicts and regardless of whether hypothetical dark matter was included in the models.

    "This gravitational anomaly implies a low-acceleration breakdown of both Newtonian dynamics and general relativity and so has immense implications for astrophysics, cosmology, and fundamental physics," Chae writes in his new paper.

    "Thus, one cannot overemphasize the importance of confirming the claimed anomaly from as many independent studies as possible."

    While two studies from the same researcher are light-years away from the independent verification theory-overturning results demand, Chae thinks his methods are solid. Although he does admit that theoretical interpretations of the reported anomaly are "wide open."

    However, he also makes some big claims in his paper such as "the dark matter paradigm seems now doomed to be abandoned" and that "standard cosmology based on general relativity seems no longer valid, even in principle."

    Part 3

  • Dr. Krishna Kumari Challa

    Those types of claims need ridiculously strong evidence to back them up, replicated multiple times over. Chae's paper will no doubt be scrutinized closely by his peers. Nonetheless, it's in findings like this that we might find a way to bridge our gaps in knowledge over gravity's remaining mysteries.

    "The evidence for the gravity boost in the low-acceleration regime is now clear enough," Chae writes, "although the scientific community should keep gathering further evidence from future observations."

    The study has been published in The Astrophysical Journal.

    Part 4

    **

  • Dr. Krishna Kumari Challa

    First prehistoric people with different syndromes identified from ancient DNA

    Researchers  have developed a new technique to measure the number of chromosomes in ancient genomes more precisely, using it to identify the first prehistoric person with mosaic Turner syndrome (characterized by one X chromosome instead of two [XX]), who lived about 2,500 years ago.

    As part of their research published in Communications Biology, they also identified the earliest known person with Jacob's syndrome (characterized by an extra Y chromosome—XYY) in the Early Medieval Period, three people with Klinefelter syndrome (characterized by an extra X chromosome—XXY) across a range of time periods, and an infant with Down Syndrome from the Iron Age.

    Most cells in the human body have 23 pairs of DNA molecules called chromosomes, and the sex chromosomes are typically XX (female) or XY (male), although there are differences in sexual development. Aneuploidy occurs when a person's cells have an extra or missing chromosome. If this occurs in the sex chromosomes, a few differences like delayed development or changes in height can be seen around puberty.

    Ancient DNA samples can erode over time and can be contaminated by DNA from other ancient samples or from people handling them. This makes it difficult to accurately capture differences in the number of sex chromosomes.

    The team  of researchers developed a computational method that aims to pick up more variation in sex chromosomes. For the sex chromosomes, it involves counting the number of copies of X and Y chromosomes, and comparing the outcome to a predicted baseline (what you would expect to see).
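
    A minimal, hypothetical sketch of that counting-and-comparison idea is shown below. It is not the study's actual pipeline; the read counts, chromosome lengths, and tolerance are invented for illustration.

```python
# Hypothetical illustration: estimate X and Y copy numbers from normalized read
# counts and compare them to the expected XX / XY baselines described above.
EXPECTED = {           # (X copies, Y copies) for the two typical karyotypes
    "XX": (2, 0),
    "XY": (1, 1),
}

def estimate_copies(reads_x, reads_y, reads_autosomal, len_x, len_y, len_autosomal):
    """Copy-number estimates for X and Y, normalized against (diploid) autosomal coverage."""
    autosomal_cov = reads_autosomal / len_autosomal          # coverage from two autosomal copies
    x_copies = 2 * (reads_x / len_x) / autosomal_cov
    y_copies = 2 * (reads_y / len_y) / autosomal_cov
    return x_copies, y_copies

def classify(x_copies, y_copies, tolerance=0.4):
    """Karyotypes whose expected copy counts are consistent with the estimates."""
    matches = [k for k, (ex, ey) in EXPECTED.items()
               if abs(x_copies - ex) < tolerance and abs(y_copies - ey) < tolerance]
    return matches or ["outside XX/XY baseline - possible aneuploidy (e.g. X0, XXY, XYY)"]

# Example: roughly two X doses plus one Y dose (an XXY-like pattern), invented numbers
x, y = estimate_copies(reads_x=3.0e6, reads_y=0.55e6, reads_autosomal=60e6,
                       len_x=155e6, len_y=57e6, len_autosomal=2.9e9)
print(round(x, 2), round(y, 2), classify(x, y))
```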

    The team used the new method to analyze ancient DNA from a large dataset of individuals collected as part of their Thousand Ancient British Genomes project across British history, identifying six individuals with aneuploidies across five sites in Somerset, Yorkshire, Oxford and Lincoln (two sites). The individuals lived across a range of time periods, from the Iron Age (2,500 years ago) up to the Post-Medieval Period (about 250 years ago).

    They identified five people who had sex chromosomes that fell outside the XX or XY categories. All were buried according to their society's customs, although no possessions were found with them to shed more light on their lives.

    The three individuals with Klinefelter syndrome lived across very different time periods, but they shared some similarities—all were slightly taller than average and showed signs of delayed development in puberty.

    By investigating details on the bones, the research team could see that it was unlikely that the individual with Turner syndrome had gone through puberty and started menstruation, despite their estimated age of 18-22. Their syndrome was shown to be mosaic; some cells had one copy of chromosome X and some had two.

    Through precisely measuring sex chromosomes, they were able to show the first prehistoric evidence of Turner syndrome 2,500 years ago, and the earliest known incidence of Jacob's syndrome around 1,200 years ago. It's hard to see a full picture of how these individuals lived and interacted with their society, as they weren't found with possessions or in unusual graves, but it can allow some insight into how perceptions of gender identity have evolved over time.

    Detection of chromosomal aneuploidy in ancient genomes, Communications Biology (2024), 7(1). DOI: 10.1038/s42003-023-05642-z

  • Dr. Krishna Kumari Challa

    The Key to Creating Blood Stem Cells May Lie in Your Own Blood

    The development of blood stem cells relies on a seemingly unrelated microbe-sensing protein receptor, according to a new study.

    The discovery could break new ground in the ongoing quest to produce blood stem cells from a person's own blood – thereby negating the need for bone marrow transplants.

    The protein receptor in question, called Nod1, is already known for its role in helping recognize bacterial infections in the body and rallying an immune response, the study's authors note.

    But according to their research, Nod1 also seems to serve a different purpose much earlier in life, when an embryo's vascular system is still developing.

    The study suggests this microbial sensor helps embryos force some of their vascular endothelial cells to become blood stem cells.

    That could be valuable information, given its potential for shedding light on how an embryo makes blood stem cells – and perhaps how we can grow them much later in life, too.
    This would eliminate the challenging task of finding compatible bone marrow transplant donors and the complications that occur after receiving a transplant, improving the lives of many leukemia, lymphoma, and anemia patients.

    Blood stem cells are progenitors of all white and red cells in our blood, producing all the components of our blood in a process called hematopoiesis.

    These blood stem cells, also known as hematopoietic stem cells, arise in the body before birth, developing from endothelial cells within an embryo's aorta.
    Part 1
  • Dr. Krishna Kumari Challa

    The researchers first homed in on Nod1 by analyzing public databases of human embryos, then studied the receptor further using zebrafish, a commonly used model organism that shares roughly 70 percent of its genome with humans. By inhibiting or boosting Nod1, the researchers demonstrated a positive correlation with the creation of blood stem cells.

    Most of a person's blood stem cells reside in their bone marrow, so patients with certain blood disorders often need a bone marrow transplant to provide a vital supply of blood stem cells.

    But armed with this evidence about Nod1's role in creating blood stem cells in embryos, scientists have new hope for devising a way to produce new blood stem cells from human samples, potentially even from patients' own blood.

    That could help avoid not only the logistical challenges of arranging and performing bone marrow transplants, the researchers note, but also complications like graft-versus-host disease, in which transplanted immune cells recognize the host as foreign and attack the recipient's cells.

    https://www.nature.com/articles/s41467-023-43349-1.pdf

    Part 2

  • Dr. Krishna Kumari Challa

    Water molecule discovery contradicts textbook models

    Textbook models will need to be re-drawn after a team of researchers found that water molecules at the surface of salt water are organized differently than previously thought.

    Many important reactions related to climate and environmental processes take place where water molecules interface with air. For example, the evaporation of ocean water plays an important role in atmospheric chemistry and climate science. Understanding these reactions is crucial to efforts to mitigate the human effect on our planet.

    The distribution of ions at the interface of air and water can affect atmospheric processes. However, a precise understanding of the microscopic reactions at these important interfaces has so far been intensely debated.

    In a paper published in the journal Nature Chemistry, researchers show that ions and water molecules at the surface of most salt-water solutions, known as electrolyte solutions, are organized in a completely different way than traditionally understood. This could lead to better atmospheric chemistry models and other applications.

    Part 1

  • Dr. Krishna Kumari Challa

    The researchers set out to study how water molecules are affected by the distribution of ions at the exact point where air and water meet. Traditionally, this has been done with a technique called vibrational sum-frequency generation (VSFG). With this laser radiation technique, it is possible to measure molecular vibrations directly at these key interfaces.

    However, although the strength of the signals can be measured, the technique does not measure whether the signals are positive or negative, which has made it difficult to interpret findings in the past. Additionally, using experimental data alone can give ambiguous results.

    The team overcame these challenges by utilizing a more sophisticated form of VSFG, called heterodyne-detected (HD)-VSFG, to study different electrolyte solutions. They then developed advanced computer models to simulate the interfaces in different scenarios.

    The combined results showed that both positively charged ions, called cations, and negatively charged ions, called anions, are depleted from the water/air interface. The cations and anions of simple electrolytes orient water molecules in both up- and down-orientation. This is a reversal of textbook models, which teach that ions form an electrical double layer and orient water molecules in only one direction.
    This work demonstrates that the surface of simple electrolyte solutions has a different ion distribution than previously thought and that the ion-enriched subsurface determines how the interface is organized: at the very top there are a few layers of pure water, then an ion-rich layer, then finally the bulk salt solution.

    Kuo-Yang Chiang et al, Surface stratification determines the interfacial water structure of simple electrolyte solutions, Nature Chemistry (2024). DOI: 10.1038/s41557-023-01416-6, www.nature.com/articles/s41557-023-01416-6

  • Dr. Krishna Kumari Challa

    Deadly 'Zombie Drug' Believed to Contain Human Bones Wreaks Havoc in West Africa

    A new drug called kush is wreaking havoc in west Africa, particularly in Sierra Leone where it is estimated to kill around a dozen people each week and hospitalise thousands.
    The drug, taken mostly by men aged 18 to 25, causes people to fall asleep while walking, to fall over, to bang their heads against hard surfaces and to walk into moving traffic.
    Kush should not be confused with the drug of the same name found in the US, which is a mixture of "an ever-changing host of chemicals" sprayed on plant matter and smoked.

    Kush in Sierra Leone is quite different; it is a mixture of cannabis, fentanyl, tramadol, formaldehyde and – according to some – ground down human bones.

    It is mixed by local criminal gangs, but the constituent drugs have international sources, facilitated no doubt by the internet and digital communications.

    While cannabis is widely grown in Sierra Leone, the fentanyl is thought to originate in clandestine laboratories in China where the drug is manufactured illegally and shipped to west Africa.

    Tramadol has a similar source, namely illegal laboratories across Asia. Formaldehyde, which can cause hallucinations, is also reported in this mixture.

    As for ground human bones, there is no definitive answer about whether or not they occur in the drug, where such bones would come from, or why they might be incorporated into the drug.

    Some people say that grave robbers provide the bones, but there is no direct evidence of this.
    But why would bones be incorporated into the drug? Some suggest that the sulphur content of the bones causes a high. Another reason might be the drug content of the bones themselves, if the deceased was a fentanyl or tramadol user.

    However, both are unlikely. Sulphur levels in bones are not high. Smoking sulphur would result in highly toxic sulphur dioxide being produced and inhaled. Any drug content in bones is orders of magnitude less than that required to cause a physiological effect.
    Part 1
  • Dr. Krishna Kumari Challa

    Where is the drug found?
    The drug is reported in both Guinea and Liberia, which share porous land borders with Sierra Leone, making drug trafficking easy.

    Kush costs around five leones (20 UK pence) per joint, which may be used by two or three people, with up to 40 joints being consumed in a day.

    This represents a massive spend on drugs and illustrates the addictive nature of the mixture, in a country where the annual income per capita is around £500.
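
    As a rough worked example of that spend, here is a small Python sketch using the figures quoted above; the three-way sharing of each joint is an approximation taken from the "two or three people" estimate.

```python
# Rough, illustrative arithmetic for the "massive spend" point above.
price_per_joint_gbp = 0.20      # ~5 leones, as quoted
joints_per_day = 40             # upper end of daily consumption
users_per_joint = 3             # each joint shared by two or three people

daily_spend_per_user = price_per_joint_gbp * joints_per_day / users_per_joint
annual_spend_per_user = daily_spend_per_user * 365
print(f"~GBP {annual_spend_per_user:.0f} per user per year vs ~GBP 500 annual income per capita")
# -> roughly GBP 970 a year even with three-way sharing, well above the average annual income
```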

    The effects of the drug vary and depend on the user and the drug content. Cannabis causes a wide variety of effects, which include euphoria, relaxation and an altered state of consciousness.
    Fentanyl, an extremely potent opioid, produces euphoria and confusion and causes sleepiness among a wide range of other side-effects.

    Similarly, tramadol, which is also an opioid but less potent than fentanyl (100mg tramadol has the same effect as 10mg morphine) results in users becoming sleepy and "spaced out" – disconnected from things happening around them.

    The danger of the drug is twofold: the risk of self-injury to the drug taker and the highly addictive nature of the drug itself. A further problem is the need to finance the next dose, often achieved through prostitution or criminal activity.
    Part 2

  • Dr. Krishna Kumari Challa

    Joining the ranks of existing polydrugs
    Kush is another example of polydrug mixtures of which forensic scientists are becoming increasingly aware. Another tobacco and cannabis-based drug, nyaope, otherwise known as whoonga, is found in South Africa.

    This time the tobacco and cannabis are mixed with heroin and antiretroviral drugs used to treat Aids, some of which are hallucinogenic.

    A further polydrug, "white pipe", a mixture of methaqualone (Mandrax), cannabis and tobacco, is smoked in southern Africa.

    These drugs are inexpensive and provide an escape from unemployment, the drudgery of poverty, sexual and physical abuse, and the effect, in some cases, especially in west Africa, from having been a child soldier.
    So what can be done about these drugs? The effectiveness of legislation alone is questionable, and many of those who attend the very limited rehabilitation centres return to drug use.

    Perhaps what is required is an integrated forensic healthcare system where legislative control is backed up by properly resourced rehabilitation centres coupled with a public health and employment programme. What changes are made in response to this epidemic remains to be seen.

    Michael Cole, Professor of Forensic Science, Anglia Ruskin University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Part 3

    **

  • Dr. Krishna Kumari Challa

    Scientists Film Plant 'Talking' to Its Neighbour

    Imperceptible to us, plants are surrounded by a fine mist of airborne compounds that they use to communicate and protect themselves. Kind of like smells, these compounds repel hungry herbivores and warn neighboring plants of incoming assailants.

    Scientists have known about these plant defenses since the 1980s, detecting them in over 80 plant species since then. Now, a team of Japanese researchers has deployed real-time imaging techniques to reveal how plants receive and respond to these aerial alarms.

    This was a big gap in our understanding of plant chatter: we knew how plants send messages, but not how they receive them.

    In this study, Yuri Aratani and Takuya Uemura, molecular biologists at Saitama University in Japan, and colleagues rigged up a pump to transfer compounds emitted by injured and insect-riddled plants onto their undamaged neighbors, and a fluorescence microscope to watch what happened.

  • Dr. Krishna Kumari Challa

    Caterpillars (Spodoptera litura) were set upon leaves cut from tomato plants and Arabidopsis thaliana, a common weed in the mustard family, and the researchers imaged the responses of a second, intact, insect-free Arabidopsis plant to those danger cues.

    These plants weren't any ordinary weeds: they had been genetically altered so their cells contained a biosensor that fluoresced green when an influx of calcium ions was detected. Calcium signaling is something human cells use to communicate too.

    The team used a similar technique to measure calcium signals in a study last year of fluorescent Mimosa pudica plants, which quickly move their leaves in response to touch, to avoid predators.

    This time, the team visualized how plants responded to being bathed in volatile compounds, which plants release within seconds of wounding.

    Part 2

  • Dr. Krishna Kumari Challa

    It wasn't a natural set-up; the compounds were concentrated in a plastic bottle and pumped onto the recipient plant at a constant rate, but this allowed the researchers to analyze what compounds were in the pungent mix.

    In the researchers' video footage, the undamaged plants received the messages of their injured neighbors loud and clear, responding with bursts of calcium signaling that rippled across their outstretched leaves.

    Analyzing the airborne compounds, the researchers found that two compounds called Z-3-HAL and E-2-HAL induced calcium signals in Arabidopsis.

    They also identified which cells are the first to respond to the danger cues by engineering Arabidopsis plants with fluorescent sensors exclusively in guard, mesophyll, or epidermal cells.

    Guard cells are bean-shaped cells on plant surfaces that form stomata, small pores that open up to the atmosphere when plants 'breathe' in CO2. Mesophyll cells are the inner tissue of leaves, and epidermal cells are the outermost layer or skin of plant leaves.

    When Arabidopsis plants were exposed to Z-3-HAL, guard cells generated calcium signals within a minute or so, after which mesophyll cells picked up the message.

    What's more, pre-treating plants with a phytohormone that shuts stomata significantly reduced calcium signaling, suggesting stomata act as the 'nostrils' of the plant.
    Part 3

  • Dr. Krishna Kumari Challa

    "We have finally unveiled the intricate story of when, where, and how plants respond to airborne 'warning messages' from their threatened neighbors," says Masatsugu Toyota, a molecular biologist at Saitama University in Japan and senior author of the study.

    "This ethereal communication network, hidden from our view, plays a pivotal role in safeguarding neighboring plants from imminent threats in a timely manner."

    The study has been published in Nature Communications.

    https://www.sciencealert.com/scientists-film-plant-talking-to-its-n...

    Part 4

    **

  • Dr. Krishna Kumari Challa

    UK-wide study reveals harm done by people not getting COVID jabs

    More than 7,000 people were hospitalized or died from COVID-19 in the UK during the summer of 2022 because they had not received the recommended number of vaccine doses, according to a study released Tuesday that was the first to cover Britain's entire population.

    The researchers said the "landmark" population-wide study showed how important it is for people to keep getting booster jabs as COVID continues to pose a major health threat. More than 90 percent of the UK's adult population were vaccinated during the earlier stages of the pandemic. However, between June and September 2022, after the pandemic's emergency phase was declared over and attention turned elsewhere, around 44 percent of Britons were under-vaccinated, the researchers said.

    Using individual health data from the National Health Service (NHS) as well as modeling, the researchers estimated that there would have been 7,180 fewer hospitalizations or deaths if everyone had been up to date with their shots. That means that nearly 20 percent of the 40,000 COVID hospitalizations or deaths over the summer could have been avoided if Britons had been fully vaccinated.

    Undervaccination and severe COVID-19 outcomes: meta-analysis of national cohort studies in England, Northern Ireland, Scotland, and Wales, The Lancet (2024). DOI: 10.1016/S0140-6736(23)02467-4, www.thelancet.com/journals/lan … (23)02467-4/fulltext

  • Dr. Krishna Kumari Challa

    Amnesia caused by head injury reversed in early mouse study

    A mouse study designed to shed light on memory loss in people who experience repeated head impacts, such as athletes, suggests the condition could potentially be reversed. The research in mice finds that amnesia and poor memory following head injury are due to inadequate reactivation of neurons involved in forming memories.

    The study was reported on January 16, 2024, in The Journal of Neuroscience.

    Importantly for diagnostic and treatment purposes, the researchers found that the memory loss attributed to head injury was not a permanent pathological event driven by a neurodegenerative disease. Indeed, the researchers could reverse the amnesia to allow the mice to recall the lost memory, potentially allowing cognitive impairment caused by head impact to be clinically reversed.

    The investigators had previously found that the brain adapts to repeated head impacts by changing the way the synapses in the brain operate. This can cause trouble in forming new memories and remembering existing memories. In their new study, investigators were able to trigger mice to remember memories that had been forgotten due to head impacts.

     Amnesia after repeated head impact is caused by impaired synaptic plasticity in the memory engram, The Journal of Neuroscience (2024).

  • Dr. Krishna Kumari Challa

    Energy-starved breast cancer cells consume their surroundings for fuel, research suggests

    Breast cancer cells ingest and consume the matrix surrounding them to overcome starvation, according to a new study published January 16 in the open access journal PLOS Biology. The finding elucidates a previously unknown mechanism of cancer cell survival, and may offer a new target for therapy development.

    Cells in the breast, including tumor cells, are embedded in a meshwork called the extracellular matrix (ECM). Nutrients are scarce in the ECM, due to limited blood flow, and become even scarcer as tumor cells grow. And yet they continue to grow, leading the authors to investigate how tumor cells supply themselves with the raw materials to support that growth.

    To do so, they seeded breast adenocarcinoma cells into either collagen (a major component of the ECM) or a commercial matrix preparation, or onto plastic, with or without certain critical amino acids. Without those amino acids, cells on plastic fared poorly compared to those in one or the other matrix. Similar results were seen with other matrix models—the tumor cells were able to overcome the reduction of amino acids when surrounded by matrix.

    Next, by fluorescently labeling the collagen and watching its journey through the cell, the authors showed that the cells took up ECM and broke it down in digestive compartments called lysosomes; when the ECM was chemically treated to cross-link its components, the cells were unable to ingest it. Further investigation indicated that uptake was through an ingestion process called macropinocytosis, in which the cell engulfs large quantities of extracellular material.

    What were the tumor cells after? Analysis of their metabolome indicated that procurement and breakdown of two amino acids, tyrosine and phenylalanine, dominated the metabolic changes in response to starvation. The authors noted that these two can serve as the raw material for energy production through the mitochondrial tricarboxylic acid (Krebs) cycle. When they knocked down HPDL, a central enzyme in the pathway from phenylalanine to the TCA, cell growth was significantly impaired. Blocking or reducing expression of HPDL, or the macropinocytosis promoter PAK1, reduced the ability of tumor cells to migrate and to invade surrounding tissue.

    These results indicate that breast cancer cells take advantage of nutrients in the extracellular matrix in times of nutrient starvation, and that this process depends on both macropinocytosis and metabolic conversion of key amino acids to energy-releasing substrates.

    Nazemi M, Yanes B, Martinez ML, Walker HJ, Pham K, Collins MO, et al. The extracellular matrix supports breast cancer cell growth under amino acid starvation by promoting tyrosine catabolism. PLoS Biology (2024). DOI: 10.1371/journal.pbio.3002406

  • Dr. Krishna Kumari Challa

    Soldering wounds with light and nano thermometers

    Not every wound can be closed with needle and thread. Researchers have now developed a soldering process with nanoparticles that gently fuses tissue. The soldering technique is expected to prevent wound healing disorders and life-threatening complications from leaking sutures.

    Some time more than 5,000 years ago, humankind came up with the idea of suturing a wound with a needle and thread. Since then, this surgical principle has not changed much: Depending on the fingertip feeling of the person performing the operation and the equipment, cuts or tears in the tissue can be joined together more or less perfectly. Once both sides of a wound are neatly fixed to each other, the body can begin to close the tissue gap permanently in a natural way.

    However, the suture does not always achieve what it is supposed to. In very soft tissues, the thread can cut through the tissue and cause additional injury. And if the wound closure does not seal on internal organs, permeable sutures can pose a life-threatening problem. Researchers have now found a way to solder wounds using lasers.

    Soldering usually involves joining materials together by means of heat via a melting bonding agent. The fact that this thermal reaction must remain within narrow limits for biological materials and at the same time the temperature is difficult to measure in a non-invasive way has been a problem for the application of soldering processes in medicine.

    The researchers tinkered with a smart wound closure system in which laser soldering can be controlled gently and efficiently. For this purpose, they developed a bonding agent with metallic and ceramic nanoparticles and used nanothermometry to control the temperature.

    The elegance of the new soldering process is also based on the interaction of the two types of nanoparticles in the bonding protein-gelatin paste. While the paste is irradiated by laser, titanium nitride nanoparticles convert the light into heat. The specially synthesized bismuth vanadate particles in the paste, on the other hand, act as tiny fluorescent nano thermometers. They emit light of a specific wavelength in a temperature-dependent manner, allowing extremely precise temperature regulation in real time.

    This makes the method particularly suitable for use in minimally invasive surgery, as it does not require stirring and determines temperature differences with extremely fine spatial resolution in superficial and deep wounds.

    The researchers also succeeded in replacing the laser light source with gentler infrared (IR) light. This brings the soldering technology another step closer to being used in hospitals.

    Oscar Cipolato et al, Nanothermometry‐Enabled Intelligent Laser Tissue Soldering, Small Methods (2023). DOI: 10.1002/smtd.202300693

  • Dr. Krishna Kumari Challa

    Scientists report fundamental asymmetry between heating and cooling

    A new study has found a fundamental asymmetry showing that heating is consistently faster than cooling, challenging conventional expectations and introducing the concept of "thermal kinematics" to explain this phenomenon. The findings are published in Nature Physics.

    Traditionally, heating and cooling, fundamental processes in thermodynamics, have been perceived as symmetric, following similar pathways.

    On a microscopic level, heating involves injecting energy into individual particles, intensifying their motion. On the other hand, cooling entails the release of energy, dampening their motion. However, one question has always remained: Why is heating more efficient than cooling?

    To answer this question, researchers have introduced a new framework: thermal kinematics.

    Part 1

  • Dr. Krishna Kumari Challa

    At the microscopic level, heating and cooling are processes involving the exchange and redistribution of energy among individual particles within a system.

    In heating, energy is injected into each particle of a system, leading to an intensification of the particles' motion. This causes them to move more vigorously. The higher the temperature, the more intense the Brownian (or random) motion of these particles due to increased collisions with surrounding water molecules.

    On the other hand, cooling at the microscopic level involves the release of energy from individual particles, resulting in a dampening of their motion. This process corresponds to the system losing energy, leading to a decrease in the intensity of particle movement.

    The researchers took an object from a boiling-water bath (at 100 degrees Celsius) and immersed it in a mixture of water and ice (at 0 degrees Celsius).

    They compared how fast the system equilibrates with the reverse protocol when the object is initially in the cold bath and heated in boiling water. They observed that, at the microscale, heating is faster than cooling, and they explained this theoretically by developing a new framework they call thermal kinematics.

    Part 2

  • Dr. Krishna Kumari Challa

    The researchers employed a sophisticated experimental setup to observe and quantify the dynamics of microscopic systems undergoing thermal relaxation. At the heart of their experimentation were optical tweezers—a powerful technique using laser light to capture single microparticles made of silica or plastic.

    "These tiny objects move in an apparently random fashion due to the collisions with water molecules, executing the so-called Brownian motion while they are confined to a small region by tweezers. The higher the temperature of the water, the more intense the Brownian motion will be due to more frequent and intense collisions with water molecules.

    To induce thermal changes, the researchers subjected the confined microparticles to varying temperatures. They carefully controlled the temperature of the surrounding environment using a noisy electrical signal, simulating a thermal bath.

    This experimental device allows the physicists to track the motion of the particle with exquisite precision, giving access to these previously unexplored dynamics.

    By manipulating the temperature and observing the resulting movements, the team gathered crucial data to understand the intricacies of heating and cooling at the microscale level.

    The development of the theoretical framework (thermal kinematics) played a pivotal role in explaining the observed phenomena. This framework combined principles from stochastic thermodynamics—a generalization of classical thermodynamics to individual stochastic trajectories—with information geometry.

    Thermal kinematics provided a quantitative means to elucidate the observed asymmetry between heating and cooling processes. This allowed the researchers not only to validate theoretical predictions but also to explore the dynamics between any two temperatures, revealing a consistent pattern of heating being faster than cooling.
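    As a rough illustration of what such an asymmetry looks like, here is a minimal Python sketch using a textbook stand-in for a tweezed microparticle (an overdamped particle in a harmonic trap), with the distance from equilibrium measured by the Kullback-Leibler divergence between the instantaneous and target position distributions. This is not the authors' analysis, and the temperatures and trap parameters are arbitrary assumptions; it only shows the qualitative pattern that a cold-to-hot quench stays closer to its new equilibrium than the reverse hot-to-cold quench.

    # Minimal sketch (not the authors' code): heating vs cooling of an overdamped
    # Brownian particle in a harmonic trap, a standard model for a bead held by
    # optical tweezers. Closeness to the target equilibrium is measured by the
    # Kullback-Leibler (KL) divergence between zero-mean Gaussian distributions.
    import numpy as np

    k = 1.0                      # trap stiffness (arbitrary units)
    gamma = 1.0                  # friction coefficient
    T_cold, T_hot = 0.2, 1.0     # bath temperatures (with k_B = 1)

    def variance(t, var_start, var_target):
        """Exact variance relaxation for the overdamped harmonic trap."""
        return var_target + (var_start - var_target) * np.exp(-2.0 * k * t / gamma)

    def kl_to_target(var, var_target):
        """KL divergence between two zero-mean Gaussians with the given variances."""
        r = var / var_target
        return 0.5 * (r - 1.0 - np.log(r))

    t = np.linspace(0.0, 3.0, 301)
    # Heating: equilibrated in the cold bath, then plunged into the hot bath.
    D_heat = kl_to_target(variance(t, T_cold / k, T_hot / k), T_hot / k)
    # Cooling: equilibrated in the hot bath, then plunged into the cold bath.
    D_cool = kl_to_target(variance(t, T_hot / k, T_cold / k), T_cold / k)

    assert np.all(D_heat < D_cool)   # heating is closer to equilibrium at every time
    # Time for each process to come within a small KL distance (0.01) of equilibrium:
    print("heating:", t[np.argmax(D_heat < 0.01)], "cooling:", t[np.argmax(D_cool < 0.01)])

    With these made-up parameters the heated particle crosses the same closeness threshold in roughly half the time the cooled one needs, which is the kind of asymmetry the thermal-kinematics framework quantifies for the real experiments.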

    M. Ibáñez et al, Heating and cooling are fundamentally asymmetric and evolve along distinct pathways, Nature Physics (2024). DOI: 10.1038/s41567-023-02269-z

    Part 3

  • Dr. Krishna Kumari Challa

    The world smells different for female silk moths than for males

    A team of researchers has studied olfaction in female silk moths. Using electrophysiological methods, they discovered that the antenna, which in males is specialized to detect female pheromones, is in females particularly sensitive to the scent of silkworm excrement.

    Components of this scent turned out to be a deterrent for mated females, probably allowing them to avoid competition for their own offspring when laying eggs. The responsible sensory neurons are located in hair-like structures called sensilla.
    In males, the detection of pheromones takes place in a long type of these sensilla, whereas the long sensilla neurons of females detect the odor of larval excrement. The odor of the mulberry tree, the only silkworm host plant, on the other hand, is detected by the female silk moths' sensory neurons in medium-length sensilla.
    In humans, the sense of smell is similarly developed in men and women, although women have slightly more olfactory neurons and therefore a slightly more sensitive nose. On the whole, however, they perceive the same odors. Male moths, on the other hand, live in a completely different olfactory world to their female counterparts. For example, the antennae of male silk moths—their "nose"—are highly specialized to detect female sex pheromones, while females cannot even smell their own pheromones.

    Females smell differently: characteristics and significance of the most common olfactory sensilla of female silkmoths, Proceedings of the Royal Society B: Biological Sciences (2024). DOI: 10.1098/rspb.2023.2578

  • Dr. Krishna Kumari Challa

    Study finds microplastics from natural fertilizers are blowing in the wind more often than once thought

    Though natural fertilizers made from treated sewage sludge are used to reintroduce nutrients onto agricultural fields, they bring along microplastic pollutants too. And according to a small-scale study published in Environmental Science & Technology Letters, more plastic particles get picked up by the wind than once thought. Researchers have discovered that the microplastics are released from fields more easily than similarly sized dust particles, becoming airborne from even a slight breeze.

    Microplastics, or small bits of plastic less than 5 millimeters long, have appeared everywhere from clouds to heart tissues. And with these plastics' increasing prevalence in people and water supplies, they've also been found in sewage and wastewater.

    Though sewage solids might not immediately seem like a useful product, after treatment they can form "biosolids," which are applied to agricultural soils as a natural, renewable source of fertilizer.
    According to estimates by the U.S. Environmental Protection Agency, over 2 million dry metric tons of biosolids—roughly half of the total amount collected by wastewater treatment plants—are applied to land each year.

    As a result, microplastics in these biosolids have the chance to reenter the environment. Because the plastics could carry other pollutants from the wastewater they originated from, they can be potentially dangerous when inhaled.
    Researchers analyzed airborne microplastics in wind-blown sediments that were gathered during wind-tunnel experiments on two plots of biosolid-treated land in rural areas.
    The researchers discovered that these wind-blown sediments contained higher concentrations of microplastics than either the biosolids or the source soil itself. This enrichment effect is caused by the plastic particles being less dense than soil minerals, such as quartz, and less "sticky"—they're not trapped as easily by moisture as the soil minerals are.

    As a result, microplastics can be picked up by a breeze more easily than soil minerals, and winds that might not be strong enough to kick up dust could still be introducing microplastics into the air.

    The researchers say that previous models did not take this reduced stickiness and other unique properties of microplastics into account when estimating emissions from treated fields. Therefore, these older models are likely to underestimate the actual amount of plastic particles released into the air.
    The present work indicates that microplastics may be emitted from barren agricultural fields by nearly two and a half times more wind events than previously estimated.

     Preferential Emission of Microplastics from Biosolid-Applied Agricultural Soils: Field Evidence and Theoretical Framework, Environmental Science & Technology Letters (2024). DOI: 10.1021/acs.estlett.3c00850

  • Dr. Krishna Kumari Challa

    Indian Tectonic Plate Is Splitting in Two Beneath Tibet, Latest Analysis Finds

    The engines driving the growth of the world's highest mountains into the sky run deep beneath the planet's skin. Geologists have some idea of the mechanisms at work, but evidence has so far left plenty of room for debate over the details.

    Combined with a fresh look at previous research, a recent analysis of new seismic data collected from across southern Tibet has delivered a surprising depiction of the titanic forces operating below the Himalayas.

    Presenting at the American Geophysical Union conference in San Francisco last December, researchers from institutions in the US and China described a disintegration of the Indian continental plate as it grinds along the basement of the Eurasian tectonic plate that sits atop it. It is a surprising compromise between the two models currently favored as explanations for the lifting of the Tibetan plateau and the colossal Himalayan mountain range.

    In both cases, a collision between the chunks of crust belonging to India and Eurasia is responsible. Starting around 60 million years ago, the Indian plate was driven beneath its northern neighbor as it was carried along by currents of molten rock within the mantle.

    Bit by bit, the Eurasian land mass has been lifted skyward on the shoulders of a drowned giant, giving us Earth's highest elevations.

    Studies of the density of the mantle and the crust suggest the rather buoyant Indian continental plate shouldn't sink so easily, however, meaning it's likely the submerged sections of the crust should still be grinding along under the belly of the Eurasian plate rather than being plunged into the mantle's depths.

    Another possibility is the Indian plate is distorting in a way that causes some parts to wrinkle and fold, and others to dip and dive.

    Different perspectives emerge depending on which kinds of evidence are favored and how data is processed.

    Part 1

  • Dr. Krishna Kumari Challa

    In an investigation led by Ocean University of China geophysicist Lin Liu, researchers amassed 'up-and-down' S-wave and shear-wave splitting data from 94 broadband seismic stations arranged west-to-east across southern Tibet, and combined it with previously collected 'back-and-forth' P-wave data to come up with a more nuanced view of the dynamics below.

    They determined the Indian slab wasn't merely bobbing along smoothly below the Eurasian plate, nor was it bunching up like a rug on a slippery floor.

    Instead, it is delaminating: its dense base is peeling free and sinking into the mantle while its lighter upper half continues its journey just beneath the surface.

    While computer models had suggested thicker sections of some plates could come apart like this, the study provides the first empirical evidence of it occurring.

    The team's description is consistent with geological models based on where helium-3-enriched spring water emerges and on patterns of fractures and earthquakes near the surface. Taken together, these support a map of carnage below, in which some sections of the old Indian plate seem more or less intact while others are stripping apart around 100 kilometers down, allowing the base to warp down into the planet's molten heart.

    Having a clear 3D description of the boundaries and borders of plates as they grind together not only makes it easier to understand how our surface came to look as it does, but could inform future methods of earthquake prediction.

    The study was presented at the 2023 American Geophysical Union conference. A pre-print copy of the study is available online.

    https://www.sciencealert.com/indian-tectonic-plate-is-splitting-in-...

    Part 2


  • Dr. Krishna Kumari Challa

    Gut Bacteria Are Surprisingly Resilient to Antibiotics, Long-Term Study Shows

    Antibiotic resistance is causing us all sorts of havoc. It's turning once-manageable pathogens into superbugs, hitting our most vulnerable people the hardest. Superbugs were involved in nearly 5 million deaths globally in 2019, so antibiotic resistance is generally not something we want to promote in the slightest.

    But while resistance is boosting our bacterial nemeses, it also means our good bacteria are becoming more resistant to antibiotics. A new study has found that antibiotic resistance genes are also giving our gut bacteria an advantage, allowing them to stay on top in their never-ending battle against microbes that cause us harm. Antimicrobial resistance mutations in commensals can have paradoxically beneficial effects by promoting microbiome resilience to antimicrobials.

    Researchers investigated the war between good and bad gut bacteria in 24 patients with multidrug-resistant tuberculosis (Mycobacterium tuberculosis), detailing the battle in a blow-by-blow account. The patients had received numerous types of antibiotics to keep their illness at bay, a regimen they were required to maintain for 20 months.

    Treatment initially devastated their gut microbiomes, causing disruptions in the metabolism, composition, and diversity of bacterial species. Unpleasant side effects included diarrhea and gut inflammation, which isn't surprising given that antibiotic use has also been linked to increased cases of irritable bowel syndrome.

    Yet over time, something rather surprising was observed among the patients. The study findings document an unexpected resilience of the microbiome to disruption by long-term antibiotics, write the researchers, explaining how the microbiomes in the patients spontaneously re-established the dominance of friendly bacteria in the wake of the initial imbalance.

    Part 1

  • Dr. Krishna Kumari Challa

    What's more, they found the state of the patient's gut microbiome as well as the clearance of the tuberculosis bacteria itself are linked to recovery from tuberculosis-driven inflammation. Blooms of bacteria from the Enterobacteriaceae family post-treatment led to more widespread inflammation, whereas microbiomes with more Clostridia and fewer Enterobacteriaceae were associated with a faster drop in the inflammation that drives tuberculosis symptoms.

    Further tests in mice involving fecal transplants from the patients revealed their microbiomes had acquired a resistance to subsequent antibiotic disruption. These resistance genes were still seen in some of the friendly bacterial species a year after antibiotic treatments had ceased in the mice.
    These mutations may have a fitness benefit in the microbiome ecosystem established by long-term antibiotics.

    https://www.science.org/doi/10.1126/scitranslmed.adi9711

    Part 2

  • Dr. Krishna Kumari Challa

    Researchers discover rare phages that attack dormant bacteria

    In nature, most bacteria live on the bare minimum. If they experience nutrient deficiency or stress, they shut down their metabolism in a controlled manner and go into a resting state. In this stand-by mode, certain metabolic processes still take place that enable the microbes to perceive their environment and react to stimuli, but growth and division are suspended.

    This also protects bacteria from, say, antibiotics or from viruses that prey exclusively on bacteria. Such bacteria-infecting viruses, known as phages, are considered a possible alternative to antibiotics that are no longer (sufficiently) effective due to drug resistance. Until now, expert consensus held that phages successfully infect bacteria only when the latter are growing.

    The researchers then asked themselves whether evolution might have produced bacteriophages that specialize in dormant bacteria and could be used to target them. They began their search in 2018. Now, in a new publication in the journal Nature Communications, they show that such phages, though rare, do indeed exist.

    They first found one in rotting plant material: a virus that can infect and destroy dormant bacteria. It is the first phage described in the literature that has been shown to attack bacteria in a dormant state. The researchers have named their new phage Paride.

    The virus the researchers found infects Pseudomonas aeruginosa, a bacterium commonly found in many environments. Various strains colonize bodies of water, plants, the soil—and people. In the human body, certain strains can cause serious respiratory diseases such as pneumonia, which can be fatal. How the new phage takes dormant P. aeruginosa germs by surprise, however, is not yet clear to the researchers. They suspect that the virus uses a specific molecular key to awaken the bacteria, and then hijacks the cell's multiplication machinery for its own reproduction. However, the researchers have not yet been able to clarify exactly how this works.

    They now aim to elucidate the genes or molecules that underlie this awakening mechanism. Based on this, they could develop substances in a test tube that take over the wake-up process. Such a substance could then be combined with a suitable antibiotic that completely eliminates the bacteria.

    Part 1

  • Dr. Krishna Kumari Challa

    To test the efficacy of the Paride phage, the researchers paired it with an antibiotic called meropenem. Meropenem disrupts cell wall synthesis, a process the phages do not rely on, so it does not interfere with the phage itself. The antibiotic has no effect on dormant bacteria, as these don't synthesize a new cell wall.

    When tested in cell culture dishes, the virus was able to kill 99% of all dormant bacteria but left 1% alive. Only the combination of Paride phages and meropenem was able to eradicate the bacterial culture completely, even though the latter had no detectable effect on its own.

    In a further experiment, other researchers tested this combination on mice with a chronic infection. Neither the phage nor the antibiotic alone worked particularly well in the mice, but the interaction between phages and antibiotics proved to be very effective in living organisms as well.

    In the case of chronic infections, that means it would be important to know the physiological state of the bacteria in question. Then the right phages, combined with antibiotics, could be used in a targeted manner. However, you need to know exactly how a phage attacks a bacterium before you can select the right phages for a particular treatment.

    The researchers will now investigate precisely how the new phage brings bacteria out of deep sleep, infects them and makes them susceptible to antibiotics.

    Enea Maffei et al, Phage Paride can kill dormant, antibiotic-tolerant cells of Pseudomonas aeruginosa by direct lytic replication, Nature Communications (2024). DOI: 10.1038/s41467-023-44157-3

  • Dr. Krishna Kumari Challa

    Energy supply in human cells is subject to quality control, researchers discover

    Researchers have discovered a new quality control mechanism that regulates energy production in human cells. This process takes place in mitochondria, the power plants of the cell.

    Malfunctions of mitochondria lead to serious diseases of the nerves, the muscles and the heart. The findings could contribute to the development of new therapies for affected patients. The results have been published in Molecular Cell.

    Mitochondria play a central role for cellular metabolism. Therefore, malfunctions of the mitochondria lead to serious, often fatal, heart, muscle or nerve diseases. Mitochondria are surrounded by two membranes, an outer and an inner one, which separate them from the surrounding cell. The final conversion of food into energy takes place in the inner membrane. Proteins are involved in this process. Central proteins for energy production are formed in the mitochondria, transported to the inner membrane and inserted there.

    The protein OXA1L is mainly responsible for the insertion of proteins into the membrane, where larger complex structures are formed with other proteins that interact with each other and ensure energy production. How the incorporation and assembly of these structures works in detail has been poorly investigated thus far.

    Scientists have now discovered that the process of energy production depends on the interaction of the protein OXA1L with the protein TMEM126A.

    If TMEM126A is missing, a quality control mechanism is activated in the inner membrane of the mitochondria, which ensures that OXA1L and the proteins newly made for the energy production machinery are degraded and thus cannot be incorporated into the membrane. This shows that the protein TMEM126A is critical for energy production in mitochondria.

    "This finding is an important step in the search for new therapeutic approaches for affected patients. Understanding how proteins interact with each other in mitochondria could help to identify the causes of certain diseases. If we know what is missing in the cell or which process is not working properly in certain diseases, we can develop treatment measures to 'repair' this defect."

    Sabine Poerschke et al, Identification of TMEM126A as OXA1L-interacting protein reveals cotranslational quality control in mitochondria, Molecular Cell (2024). DOI: 10.1016/j.molcel.2023.12.013

  • Dr. Krishna Kumari Challa

    Bugs as Drugs to Boost Cancer Therapy

    Bioengineered bacteria sneak past solid tumor defenses to guide CAR T cells’ attacks.

    In the 1890s, physician William Coley injected patients who had cancer with bacteria after learning about patients who experienced spontaneous tumor regression following a concurrent bacterial infection. He was one of the first scientists to link immune system activation with an antitumor response, which earned him the appellation “the father of immunotherapy.” Despite some successes in the clinic, safety concerns and the rise of radiotherapy caused these bacterial elixirs, known as Coley’s toxins, to fall by the wayside.

    Over the last few decades, advancements across immunology, microbiology, and synthetic biology renewed interest in bioengineering bugs for cancer therapies. In a paper published in Science, researchers designed probiotics to colonize tumors and guide engineered T cells to the cancer site.

     Their novel platform not only shows that engineered bacteria can help existing immunotherapies gain access to difficult-to-treat solid tumors, but also highlights the broader potential of living drugs.

    Part 1

  • Dr. Krishna Kumari Challa

    The tumor microenvironment is an inhospitable ecosystem. “For bacteria, they kind of don't care about all of that. They are really, at a very general level, looking for a place where they can survive in the body that's away from the immune system.”

    The hypoxic, minimally surveilled core of a solid tumor is the perfect bacterial bungalow. However, some bacterial strains can still inhabit healthy organs, so researchers need to explore ways to modify the bacterial genome to reduce virulence and toxicity. Although attenuated bacteria proved safer in both mice and humans, researchers observed poor tumor colonization and no tumor regression. Now, to improve tumor targeting and specificity, researchers are searching for synthetic biology solutions.

    Bacteria already have a proclivity for the tumor microenvironment.

    So the researchers engineered the probiotic strain Escherichia coli Nissle 1917, equipping it with Danino’s quorum-sensing lysis circuit. Once the bacteria reached quorum, they released synthetic antigens that stuck to the tumor. Specifically, the researchers fused a green fluorescent protein (GFP) to the heparin-binding domain (HBD) of a placental growth factor protein. The sticky HBD anchored to collagens and polysaccharides, ubiquitous components of the tumor environment, thus planting GFP flags on the tumor. Although these molecules are also found in healthy tissue, they are highly abundant in the tumor.

    Part 2