Scientists discover caves carved by water on Mars that may have once harbored life
If there is, or ever has been, life on Mars, the chances are it would exist in caves protected from the severe dust storms, extreme temperatures, and high radiation present on its surface. One place to focus our attention could be eight possible cave sites (called skylights) recently discovered by researchers.
In a paper published in The Astrophysical Journal Letters, the team presents the first evidence of a new type of cave on the red planet, formed by water dissolving rock. Most Martian caves discovered so far have been lava tubes, but the study authors argue that they have identified the first documented karstic caves on Mars.
"These skylights are interpreted as the first known potential karstic caves on Mars, representing collapse entrances formed through the dissolution of water-soluble lithologies—defining a new cave-forming class distinct from all previously reported volcanic and tectonic skylights," wrote the researchers in their paper.
On Earth, karstic caves are typically formed when water dissolves soluble rock such as limestone or gypsum, creating and enlarging underground cracks and fractures that grow large enough to become caves. The paper proposes a similar process on Mars, where ancient Martian water may have dissolved carbonate- and sulfate-rich rocks in the crust.
The candidate caves are eight pits in Hebrus Valles, a valley system in Mars' northern hemisphere, that were mapped by previous Mars missions. They are deep, predominantly circular depressions rather than impact craters, which typically have raised rims and ejected debris around them.
The researchers studied data from the Thermal Emission Spectrometer (TES) that was onboard NASA's Mars Global Surveyor and discovered that the rocks around the pits are rich in carbonates and sulfates. These are the types of rocks that water can easily dissolve. The team also used high-resolution imagery to create 3D structural models of the pits, which showed that their shapes are consistent with collapse caused by water rather than volcanic or tectonic activity.
The search for life on Mars is like looking for a needle in a haystack, but it can be narrowed by focusing on well-defined targets that are more likely than others to have hosted life.
Therefore, the scientists behind this latest study think the eight possible karstic caves should be high-priority targets for future human or robotic missions to the planet. Even if no life is there, they could serve as landing sites and natural shelters for astronauts when they are not exploring the surface.
Ravi Sharma et al, Water-driven Accessible Potential Karstic Caves in Hebrus Valles, Mars: Implications for Subsurface Habitability, The Astrophysical Journal Letters (2025). DOI: 10.3847/2041-8213/ae0f1c
Gut microbes pass down behavioral traits in mice offspring independent of genes
Gut microbes are essential partners that help digest food, produce vitamins and train the immune system. They can also pass on behavioral traits to their host's offspring, at least in mice. Scientists have discovered that the mouse microbiome can alter the animal's behavior in just four generations, independent of its genes. The study is published in the journal Nature Communications.
Animals and their microbes have coevolved for millions of years. While it was already known that bacteria, viruses and fungi living in the gut can drive inherited changes in some simple creatures, like insects, scientists didn't know until now whether they could be the sole mechanism for passing down a specific trait (like a behavioral tendency) in more complex animals, such as mammals.
The research team designed an experiment in which they strictly controlled the host animal's genes. First, they took gut microbiota from wild-derived mice and gave it to genetically identical germ-free mice in the lab.
Then they set up a selection line, repeatedly choosing the two mice that traveled the least and transferring their microbes into a fresh batch of genetically identical, germ-free mice for the next generation. The scientists focused on locomotor activity (movement) because previous tests had confirmed that it was a behavior strongly influenced by the microbiome. The researchers also ran a control line where two donor mice were chosen at random.
This serial transfer of gut microbes carried on for four generations. By using germ-free mice, the researchers could be sure that any behavioral changes they observed were due to the selection and transfer of the microbial community.
Selecting for low activity successfully caused slower movement across the four generations. The researchers analyzed the composition of the microbial community and found that this reduction in activity was linked to higher levels of the bacterium Lactobacillus, which produces the substance indolelactic acid (ILA). They also showed the link was causal. When they independently gave Lactobacillus or ILA to other mice, it was enough to suppress their locomotion.
"This work highlights the role of microbiome-mediated trait inheritance in shaping host ecology and evolution," wrote the researchers in their paper.
The novelty in our study lies in experimentally demonstrating that selection on a host trait can lead to changes in that same trait over time purely through microbiome transmission, without any genetic evolution in the host.
Taichi A. Suzuki et al, Selection and transmission of the gut microbiome alone can shift mammalian behavior, Nature Communications (2025). DOI: 10.1038/s41467-025-65368-w
Scientists tie lupus to a virus nearly all of us carry
One of humanity's most ubiquitous infectious pathogens bears the blame for the chronic autoimmune condition called systemic lupus erythematosus (lupus), Stanford Medicine investigators and their colleagues have found.
The Epstein-Barr virus (EBV), which usually resides silently inside the bodies of 19 out of 20 people, is directly responsible for commandeering what starts out as a minuscule number of immune cells to go rogue and persuade far more of their fellow immune cells to launch a widespread assault on the body's tissues, the scientists have shown.
The work is published in the journal Science Translational Medicine.
About five million people worldwide have lupus, in which the immune system attacks the contents of cell nuclei. This results in damage to organs and tissues throughout the body—skin, joints, kidneys, heart, nerves and elsewhere—with symptoms varying widely among individuals. For unknown reasons, nine out of 10 lupus patients are women.
With appropriate diagnosis and medication, most lupus patients can live reasonably normal lives, but for about 5% of them the disorder can be life-threatening.
Existing treatments slow down disease progression but don't 'cure' it.
By the time we've reached adulthood, the vast majority of us have been infected by EBV. Transmitted in saliva, EBV infection typically occurs in childhood, from sharing a spoon with or drinking from the same glass as a sibling or a friend, or maybe during our teen years, from exchanging a kiss. EBV can cause mononucleosis, "the kissing disease," which begins with a fever that subsides but lapses into a profound fatigue that can persist for months.
Practically the only way to not get EBV is to live in a bubble. If you've lived a normal life the odds are nearly 20 to 1 you've got it.
Once you've been infected by EBV, you can't get rid of it, even if you remain or become symptom-free. EBV belongs to a large family of viruses, including those responsible for chickenpox and herpes, that can deposit their genetic material into the nuclei of infected cells. There the virus slumbers in a latent form, hiding from the immune system's surveillance agents. This can last as long as the cell it's hiding in stays alive; or, under certain conditions, the virus may reactivate and force the infected cell's replicative machinery to produce myriad copies of itself that break out to infect other cells and other people.

Among the cell types in which EBV takes up permanent residence are B cells, immune cells that do a couple of important things after they ingest bits of microbial pathogens. For one, they can produce antibodies: customized proteins that find and bind immune-system-arousing proteins or other molecules (immunologists call them "antigens") on microbial pathogens that have infected an individual, or are trying to.
In addition, B cells are what immunologists call "professional antigen-presenting cells": They can process antigens and display them on their surfaces in a way that encourages other immune cells to raise the intensity of their hunt for the pathogen in question. That's a substantial force multiplier for kick-starting an immune response.
Our bodies harbor hundreds of billions of B cells, which over the course of numerous rounds of cell division develop an enormous diversity of antibodies. In the aggregate, these antibodies can bind an estimated 10 billion to 100 billion different antigenic shapes. This is why we're able to mount a successful immune response to so many different pathogens.
Oddly, about 20% of the B cells in our bodies are autoreactive. They target antigens belonging to our own tissues—not by design, but due to the random way B-cell diversity comes about: through sloppy replication, apparently engineered by evolution to ensure diversification. Fortunately, these B cells are typically in a dopey state of inertia, and they pretty much leave our tissues alone.
But at times, somnolent autoreactive B cells become activated, take aim at our own tissues and instigate one of the disorders collectively called autoimmunity. Some awakened autoreactive B cells crank out antibodies that bind to proteins and DNA inside the nuclei of our cells. Such activated "antinuclear antibodies"—the hallmark of lupus—trigger damage to tissues randomly distributed throughout the body, because virtually all our body's cells have nuclei.
The vast majority of EBV-infected people (most of us, that is) have no idea they're still sheltering a virus and never get lupus. But essentially everyone with lupus is EBV-infected, studies have shown. An EBV-lupus connection has long been suspected but never nailed down until now. Although latent EBV is ubiquitous in the sense that almost everybody carries it, it resides in only a tiny fraction of any given person's B cells. As a result, until the new study, it was virtually impossible for existing methods to identify infected B cells and distinguish them from uninfected ones.
The researchers developed an extremely high-precision sequencing system that enabled them to do just that. They found that fewer than 1 in 10,000 of a typical EBV-infected but otherwise healthy individual's B cells are hosting a dormant EBV viral genome.
Employing their new EBV-infected-B-cell-identifying technology along with bioinformatics and cell-culture experimentation, the researchers found out how such small numbers of infected cells can cause a powerful immune attack on one's own tissues. In lupus patients, the fraction of EBV-infected B cells rises to about 1 in 400—a 25-fold difference. It's known that the latent EBV, despite its near-total inactivity, nonetheless occasionally nudges the B cell in which it's been snoozing to produce a single viral protein, EBNA2. The researchers showed that this protein acts as a molecular switch—in geneticists' language, a transcription factor—activating a battery of genes in the B cell's genome that had previously been at rest. At least two of the human genes switched on by EBNA2 are recipes for proteins that are, themselves, transcription factors that turn on a variety of other pro-inflammatory human genes.
The net effect of all these genetic fireworks is that the B cell becomes highly inflammatory: It dons its "professional antigen-presenting cell" uniform and starts stimulating other immune cells (called helper T cells) that happen to share a predilection for targeting cell-nuclear components. These helper T cells enlist multitudes of other antinuclear B cells as well as antinuclear killer T cells, vicious attack dogs of the immune system.
When that militia bulks up, it doesn't matter whether any of the newly recruited antinuclear B cells are EBV-infected or not. (The vast majority of them aren't.) If there are enough of them, the result is a bout of lupus.
Scientists suspect that this cascade of EBV-generated self-targeting B-cell activation might extend beyond lupus to other autoimmune diseases such as multiple sclerosis, rheumatoid arthritis and Crohn's disease, where hints of EBV-initiated EBNA2 activity have been observed. The million-dollar question: If about 95% of us are walking around with latent EBV in our B cells, why do some of us—but not all of us—get autoimmunity? The researchers speculate that perhaps only certain EBV strains spur the transformation of infected B cells into antigen-presenting "driver" cells that broadly activate huge numbers of antinuclear B cells. Many companies are working on an EBV vaccine, and clinical trials of such a vaccine are underway. But that vaccine would have to be given soon after birth, they noted, as such vaccines are unable to rid an already-infected person of the virus.
Shady Younis et al, Epstein-Barr virus reprograms autoreactive B cells as antigen presenting cells in systemic lupus erythematosus, Science Translational Medicine (2025). DOI: 10.1126/scitranslmed.ady0210
In the future, clothes might come from vats of living microbes. Reporting in the journal Trends in Biotechnology, researchers demonstrate that bacteria can both create fabric and dye it in every color of the rainbow—all in one pot. The approach offers a sustainable alternative to the chemical-heavy practices used in today's textile industry.
The industry relies on petroleum-based synthetic fibers and chemicals for dyeing, which include carcinogens, heavy metals, and endocrine disruptors.
These processes generate lots of greenhouse gases, degrade water quality, and contaminate the soil, so researchers wanted to find a better solution.
Known as bacterial cellulose, fibrous networks produced by microbes during fermentation have emerged as a potential alternative to petroleum-based fibers such as polyester and nylon.
Researchers set out to create fibers with vivid natural pigment by growing cellulose-spinning bacteria alongside color-producing microbes. The microbial colors stemmed from two molecular families: violaceins—which range from green to purple—and carotenoids, which span from red to yellow.
But the first experiments failed. The team learned that the cellulose-spinning bacteria Komagataeibacter xylinus and the color-producing bacteria Escherichia coli interfered with each other's growth.
Tweaking their recipe, the researchers found a way to make peace between the microbes. For the cool-toned violaceins, they developed a delayed co-culture approach by adding in the color-producing bacteria after the cellulose bacteria had already begun growing, allowing each to do its job without thwarting the other.
For the warm-toned carotenoids, the team devised a sequential culture method, where the cellulose is first harvested and purified, then soaked in the pigment-producing cultures. Together, the two strategies yielded a vibrant palette of bacterial cellulose sheets in purple, navy, blue, green, yellow, orange, and red.
To see if the colors could survive the rigors of daily life, the team tested the materials by washing, bleaching, and heating them, as well as soaking them in acid and alkali. Most held their hues, and the violacein-based textile even outperformed synthetic dye in washing tests.
The researchers propose this as an environmentally friendly route to sustainable textile dyeing that produces cellulose at the same time.
Accepting it is in your hands!
Scaling up production and competing with low-cost petroleum products are among the remaining hurdles. Real progress will also require a shift in the consumer mindset toward prioritizing sustainability over price.
But the bacteria-based fabrics are at least five years from store shelves.
Could altering mosquitoes' internal clocks stop them from biting?
People who live in the tropical areas where Aedes aegypti mosquitoes reside have probably known for centuries, or even millennia—thanks to their itchy bites—that the mosquitoes hunt most often at dawn and dusk.
A new study offers scientific proof of that hunting behavior, and new insight into the biological mechanism behind it. It also offers a potential path to reducing bites and helping stop the spread of deadly, mosquito-borne disease.
The research focuses on Aedes aegypti, a type of mosquito that lives primarily in tropical areas and that can carry diseases like dengue, chikungunya, and Zika. Although this mosquito species cannot survive harsh winters, in recent years it has established itself in new areas as climate change enables it to thrive in more temperate climates.
There are many reasons people could, in theory, get more bites from these mosquitoes at dawn and dusk. It could be that they're just outside more, or that humans are more attractive to them at these times. This study focused on understanding if this biting pattern is also influenced by mosquitoes' own daily rhythm.
The research team began by video recording mosquitoes, and watching how they responded to carbon dioxide, a signal mosquitoes use to locate humans. The team used machine learning to quantify the mosquitoes' movements.
They found that the mosquitoes had a more persistent response to the same amount of carbon dioxide at dawn and dusk, exactly the times of day that many people (both mosquito researchers and not) have reported getting more bites.
This is consistent with the idea that they're actually better predators at that time of day.
Suspecting that what was changing was not the mosquitoes' ability to sense us at these times of day, but the persistence, or aggressiveness, of their response, the research team used CRISPR-Cas9, a gene-editing tool, to mutate a gene that controls mosquitoes' internal clocks. They found that mutating the gene changed the insects' behavioral timing, making the mosquitoes less persistently responsive to carbon dioxide in the morning. Normally, biting rates are high in the mornings, but when the researchers disrupted the clock gene, the mutant mosquitoes were less successful at feeding during that time.

This is the first time an internal rhythm has been found in the mosquito's behavior that could be driving these bites at dawn and dusk. Their internal clocks make them more persistent and predatory in their response to humans at these times of day. Scientists could, in theory, find ways to lock mosquitoes in a state that prevents them from effectively seeking out humans. This could mean fewer uncomfortable bites and less disease.

But the mosquitoes where we live bite us all the time, not only at dawn and dusk. When we try to protect ourselves at dawn and dusk, they evolve to bite us all through the day! Hmmm! Researchers, are you listening?
Linhan Dong et al, Time-of-day modulation in mosquito response persistence to carbon dioxide is controlled by Pigment-Dispersing Factor, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2520826122
Sleep is an absolute necessity but most of us aren't getting enough of it.
Sleep is required to consolidate memories, regenerate tissue and build up energy for emotional regulation and alertness during the day. Sleep deprivation disrupts these processes, increases stress and predisposes people to heart, psychiatric and neurological problems.
Factors like stress, jet lag, work, diet and screen time all interfere with our ability to get the 7–9 hours of solid slumber needed to recover from the physiological pummels of the day.
But there's more to this story than travel and cellphones. How much sleep we get, and its quality, may also be tied to the microbial variety our bodies harbor!
We may have some control over how our sleep plays out, but many pieces of the puzzle—particularly those at the cellular level—are out of our hands. Some of them, in fact, are in our guts.
Studies in animal models and humans suggest there is a bidirectional relationship between the composition of the gut microbiota and sleep quality and duration. In mice, altering the gut microbial community, such as through antibiotic treatment, can lead to poorer, more fragmented sleep.
Indeed, a more diverse microbiome is generally correlated with increased sleep efficiency and total sleep time. Better sleep is also associated with a higher abundance of bacteria with health-promoting metabolic functions, like production of short-chain fatty acids (SCFAs), whereas conditions like insomnia are linked with lower abundances of these microbes. Whether insomnia causes these alterations, or vice versa, is still unclear—likely, they feed into one another.
Like many of the chemicals and processes powering our bodies, the composition of the microbiome naturally fluctuates throughout the day. These changes are linked to host circadian processes, and alterations in sleep (e.g., jet lag) can disrupt microbiota rhythms, resulting in important health implications.
"[What] we find in stress-related disorders, [and] in many mental health disorders in general, is that they're often associated with disordered sleep and dysregulation of sleep and circadian rhythms. Researchersrecently showed in animal models that daily fluctuations in stress pathways closely linked to sleep (e.g., cortisol levels) are modulated by the microbiome.
They found that there are circuits in the brain that are sensitive to microbial signals. Now, they are seeking to identify the mechanisms that underlie those sensitivities.
The internal clock of immune cells: Is the immune system younger in the morning?
As the immune system ages, it reacts more slowly to pathogens, vaccines become less effective, and the risk of cancer increases. At the same time, the immune system follows a 24-hour rhythm, as the number and activity of many immune cells fluctuate throughout the day.
Researchers have now investigated whether this daily rhythm influences the aging of the immune system and whether the immune system behaves "younger" or "older" at certain times of the day.
For the new study, published in Frontiers in Aging, researchers took blood samples from participants in the morning, at noon, and in the evening. Using the so-called "IMMune Age indeX (IMMAX)," they determined each individual's immune age and analyzed how it changed throughout the day.
The IMMAX is a biomarker determined by the ratio of certain immune cells in the blood. As part of biological age, it correlates with actual chronological age.
Individual immune cells that are relevant for calculating the IMMAX fluctuate throughout the day. For example, in the morning, researchers observed an increased frequency of natural killer cells (NK cells)—key protective cells that defend the body against infections and cancer. In contrast, other immune cell types showed the opposite pattern.
The circadian rhythm regulates immune system activity. Hormones, body temperature, nerve signals, and messenger molecules tell immune cells when to move or become active. This leads to daily fluctuations in the number of immune cells in the blood. However, these fluctuations do not appear to influence immune age over the course of the day, as the researchers discovered. Despite measurable daily differences, the IMMAX remained largely stable, as individual immune cell types apparently balance each other out.
The IMMAX is a biomarker for immune age that is largely independent of the time of day. Nevertheless, slight differences were observed depending on a person's chronotype—that is, whether they are more active early in the day ("larks") or late in the day ("owls"). For early risers, the IMMAX value decreased slightly from morning to noon. This suggests that the time of blood sampling in relation to waking up is important. "When we wake up in the morning and become active, this apparently influences the movement of our immune cells and thus has a slight effect on the IMMAX value."
In large cohort studies, the timing of sampling is less critical, as fluctuations are balanced out.
Sina Trebing et al, Influence of circadian rhythm on the determination of the IMMune age indeX (IMMAX), Frontiers in Aging (2025). DOI: 10.3389/fragi.2025.1716985
How Lung and Breast Cells Fight Cancer Differently
The study findings can help develop early-stage cancer prevention strategies.
Researchers have discovered why a specific mutation leads to tumorous growth in human lungs. The same mutation, however, fails to produce a tumor in human breast tissue, reducing the chance of cancer.
They focused on the human epithelial tissues, the thin sheets of cells that line our vital organs. That’s because eighty percent of all human cancers begin in epithelial tissues.
Epithelial cells act as the body’s first barrier and are constantly exposed to stress, damage, and mutations. In fact, both the lung and the breast epithelial tissues display ‘epithelial defense against cancer’ — the epithelium’s natural act against tumorous cells. That’s why the group tested a specific cancerous mutation that occurs in both breast and lung epithelial tissues.
Researchers have long recognized that breast and lung epithelial tissues exhibit distinct differences. Breast epithelium is relatively stable and compact — the cells are tightly packed with stronger junctions. On the other hand, lung epithelium expands and relaxes with each breath, which is why its cells are more flexible, elongated, and loosely connected.
The group, through a combination of live imaging and computer models, has determined how these differences directly lead to the lung epithelium being more prone to growing tumors compared to the breast.
Once a cancerous mutation sets in the breast tissue, single mutant cells are pushed out, and groups of cells get stuck together. In lung tissue, the same mutant cells spread easily and make finger-like shapes.
The team found that the “tug-of-war” forces between normal and mutant cells decide whether cancerous cells are removed, trapped, or allowed to grow. Thus, cell mechanics help explain tissue-specific cancer risk. The surprise was how directly those mechanics determine whether mutant cells are restrained or allowed to spread.
A belt forms around the tumor in the breast epithelia, raises tension, jams the mutant cluster, and often forces out single mutants. Thus, this belt restrains the tumor.
In the lung epithelia, the cells are more elongated, motile, and weakly connected. So, no such belt forms, allowing the mutant cells to survive, grow longer, and spread into the tissue.
The group’s work demonstrates a pathway to resisting the growth of mutations into tumors, and eventually, into cancers.
Protein needs can vary from person to person and can change throughout a lifetime. “For older adults, 1.2 grams per kilogram is just giving them a little extra protection,” says nutrition and exercise researcher Nicholas Burd.
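As a rough illustration of how such a per-kilogram guideline translates into a daily amount, here is a minimal arithmetic sketch. The 1.2 g/kg figure for older adults is the one quoted above; the 0.8 g/kg default used for comparison is the commonly cited baseline for healthy adults and the example body weights are illustrative, not values from this article.

```python
# Minimal sketch: convert a per-kilogram protein guideline into a daily target.
# 1.2 g/kg is the figure quoted for older adults; 0.8 g/kg is the common adult
# baseline used here only for comparison (an assumption, not from the article).

def daily_protein_grams(weight_kg: float, grams_per_kg: float = 0.8) -> float:
    """Return an estimated daily protein target in grams."""
    return weight_kg * grams_per_kg

if __name__ == "__main__":
    for weight in (60, 75, 90):
        baseline = daily_protein_grams(weight)               # 0.8 g/kg baseline
        older_adult = daily_protein_grams(weight, 1.2)       # 1.2 g/kg for older adults
        print(f"{weight} kg: ~{baseline:.0f} g/day baseline, ~{older_adult:.0f} g/day at 1.2 g/kg")
```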
Meeting your protein needs with plants rather than meat is better for your heart and the planet, but it might require a greater variety of foods (and maybe supplements).
If you’re getting enough calories and eating a varied diet, you’re probably getting enough protein.
For most people, unless you have kidney disease, too much protein won’t hurt you.
How fast and in which direction is our solar system moving through the universe? This seemingly simple question is one of the key tests of our cosmological understanding. A research team of astrophysicists has now found new answers, ones that challenge the established standard model of cosmology.
The study's findings have just been published in the journal Physical Review Letters.
Their analysis shows that the solar system is moving more than three times faster than current models predict. This result clearly contradicts expectations based on standard cosmology and forces us to reconsider our previous assumptions.
To determine the motion of the solar system, the team analyzed the distribution of so-called radio galaxies, distant galaxies that emit particularly strong radio waves, a form of electromagnetic radiation with very long wavelengths similar to those used for radio signals. Because radio waves can penetrate dust and gas that obscure visible light, radio telescopes can observe galaxies invisible to optical instruments.
As the solar system moves through the universe, this motion produces a subtle "headwind": slightly more radio galaxies appear in the direction of travel. The difference is tiny and can only be detected with extremely sensitive measurements.
Using data from the LOFAR (Low Frequency Array) telescope, a Europe-wide radio telescope network, combined with data from two additional radio observatories, the researchers were able to make an especially precise count of such radio galaxies for the first time. They applied a new statistical method that accounts for the fact that many radio galaxies consist of multiple components. This improved analysis yielded larger but also more realistic measurement uncertainties.
Despite this, the combination of data from all three radio telescopes revealed a deviation exceeding five sigma, a statistical threshold that physicists generally regard as strong evidence of a genuine signal.
The measurement shows an anisotropy ("dipole") in the distribution of radio galaxies that is 3.7 times stronger than what the standard model of the universe predicts. This model describes the origin and evolution of the cosmos since the Big Bang and assumes a largely uniform distribution of matter.
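To give a feel for the numbers, here is a back-of-the-envelope sketch of the expected source-count dipole produced by our motion, using the standard Ellis and Baldwin (1984) expression. The spectral index and count slope below are typical assumed values, not parameters taken from this paper, so the result is only indicative.

```python
# Rough sketch of the expected kinematic radio source-count dipole
# (Ellis & Baldwin 1984): D = [2 + x * (1 + alpha)] * v / c.
# alpha and x below are typical assumed values, not this study's numbers.

V_CMB = 370e3    # solar system speed inferred from the CMB dipole, m/s
C = 3.0e8        # speed of light, m/s
alpha = 0.75     # typical radio spectral index (assumption)
x = 1.0          # typical slope of the cumulative source counts (assumption)

expected_dipole = (2 + x * (1 + alpha)) * V_CMB / C
observed_dipole = 3.7 * expected_dipole   # the study reports an amplitude ~3.7x the expectation

print(f"Expected kinematic dipole amplitude: {expected_dipole:.4f}")
print(f"Observed dipole amplitude (~3.7x):   {observed_dipole:.4f}")
```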
If our solar system is indeed moving this fast, we need to question fundamental assumptions about the large-scale structure of the universe, say the cosmologists.
Alternatively, the distribution of radio galaxies itself may be less uniform than we have thought. In either case, the current models are being put to the test.
The new results confirm earlier observations in which researchers studied quasars, the extremely bright centers of distant galaxies where supermassive black holes consume matter and emit enormous amounts of energy. The same unusual effect appeared in these infrared data, suggesting that it is not a measurement error but a genuine feature of the universe.
Lukas Böhme et al, Overdispersed Radio Source Counts and Excess Radio Dipole Detection, Physical Review Letters (2025). DOI: 10.1103/6z32-3zf4
Chinese team finds a fern that makes rare earth elements
Scientists have discovered a fern from South China that naturally forms tiny crystals containing rare earth elements (REEs). This breakthrough opens the door to a promising new way of "green mining" of these minerals called phytomining.
REEs are a group of 17 elements, all metals with similar properties that are essential for everything from wind turbines and electric car batteries to smartphones and medical scanners.
Extracting them is expensive and normally involves large-scale conventional mining operations that rely on harsh chemicals and cause significant pollution and land damage. That's why researchers are exploring cleaner, sustainable plant-based alternatives to collect REEs.
According to a paper published in the journal Environmental Science & Technology, the researchers studied the Blechnum orientale fern, which had been collected from REE-rich areas in South China. It was already known to be a hyperaccumulator, which is a plant that can grow in soil and water with high concentrations of metals and absorb them through its roots. But what they didn't know was the chemical form the REEs take when inside the plant. This knowledge is vital for designing the most efficient extraction process.
Using high-powered imaging technology and chemical analysis, the team discovered that the fern was forming nanoscale crystals of the REE-rich mineral monazite within its tissues, particularly in the cell walls and spaces between cells. Monazite is one of the main sources of rare earth elements in geological ore deposits worldwide.
The study authors also observed the crystal form, noting that it grows in a highly complex self-organizing pattern of tiny branches, likening it to a microscopic "chemical garden." This is the first time scientists have seen a living plant create a rare earth element crystal.
While we might not be going out to garden for REEs in the immediate future, the research is further proof that phytomining is feasible. The discovery, even with just one fern, strengthens the case that plants could one day provide a valuable, cheaper, and less destructive method for extracting much-needed rare earth elements.
Liuqing He et al, Discovery and Implications of a Nanoscale Rare Earth Mineral in a Hyperaccumulator Plant, Environmental Science & Technology (2025). DOI: 10.1021/acs.est.5c09617
Magnetic nanoparticles that successfully navigate complex blood vessels may be ready for clinical trials
Magnetic microrobots with iron oxide and tantalum nanoparticles can be precisely guided through complex blood vessels using modular electromagnetic navigation, delivering drugs directly to target sites such as thrombi. These microrobots achieve over 95% delivery accuracy in realistic vessel models and animal tests, supporting readiness for clinical trials and potential applications beyond vascular occlusions.
Every year, 12 million people worldwide suffer a stroke; many die or are permanently impaired. Currently, drugs are administered to dissolve the thrombus that blocks the blood vessel. These drugs spread throughout the entire body, meaning a high dose must be administered to ensure that the necessary amount reaches the thrombus. This can cause serious side effects, such as internal bleeding.
Since medicines are often only needed in specific areas of the body, medical research has long been searching for a way to use microrobots to deliver pharmaceuticals to where they need to be: in the case of a stroke, directly to the stroke-related thrombus.
Now, a team of researchers at ETH Zurich has made major breakthroughs on several levels. They have published their findings in Science.
The microrobot the researchers use comprises a proprietary spherical capsule made of a soluble gel shell that they can control with magnets and guide through the body to its destination. Iron oxide nanoparticles in the capsule provide the magnetic properties.
Because the vessels in the human brain are so small, there is a limit to how big the capsule can be. The technical challenge is to ensure that a capsule this small also has sufficient magnetic properties.
The microrobot also needs a contrast agent to enable doctors to track via X-ray how it is moving through the vessels. The researchers focused on tantalum nanoparticles, which are commonly used in medicine but are more challenging to control due to their greater density and weight.
Combining magnetic functionality, imaging visibility and precise control in a single microrobot required perfect synergy between materials science and robotics engineering, which has taken the researchers many years to successfully achieve.
They developed precision iron oxide nanoparticles that enable this delicate balancing act.
The microrobots also contain the active ingredient they need to deliver. The researchers successfully loaded the microrobots with common drugs for a variety of applications—in this case, a thrombus-dissolving agent, an antibiotic or tumor medication.
These drugs were released by a high-frequency magnetic field that heats the magnetic nanoparticles, dissolving the gel shell and the microrobot.
The researchers used a two-step strategy to bring the microrobot close to its target: first, they injected the microrobot into the blood or cerebrospinal fluid via a catheter. They went on to use an electromagnetic navigation system to guide the magnetic microrobot to the target location.
The catheter's design is based on a commercially available model with an internal guidewire connected to a flexible polymer gripper. When pushed beyond the external guide, the polymer gripper opens and releases the microrobot. To precisely steer the microrobots, the researchers developed a modular electromagnetic navigation system suitable for use in the operating theater.
Iron rusts. On Earth, this common chemical reaction often signals the presence of something far more interesting than just corroding metal—for example, living microorganisms that make their living by manipulating iron atoms. Now researchers argue these microbial rust makers could provide some of the most promising biosignatures for detecting life on Mars and the icy moons of the outer solar system.
Researchers now have compiled a comprehensive review of how iron metabolizing bacteria leave distinctive fingerprints in rocks and minerals, and why these signatures matter for astrobiology. The research, published in Earth-Science Reviews, bridges decades of terrestrial microbiology with the practical challenges of searching for life beyond Earth.
Iron ranks among the most abundant elements in the solar system, and Earth's microorganisms have evolved remarkably diverse ways to exploit it. Some bacteria oxidize ferrous iron to generate energy, essentially breathing iron the way humans breathe oxygen. Others reduce ferric iron, using it as the final electron acceptor in their metabolism. These processes don't happen in isolation. Iron-metabolizing microbes link their element of choice to the carbon and nitrogen cycles, coupling iron transformations to carbon dioxide fixation, organic matter degradation, and even photosynthesis.
The byproducts of these microbial reactions create what researchers call biogenic iron oxyhydroxide minerals. These aren't subtle traces. Organisms that thrive in neutral pH environments and oxidize iron produce distinctive structures such as twisted stalks, tubular sheaths, and filamentous networks of iron minerals mixed with organic compounds. The minerals precipitate as the bacteria work, forming rusty deposits that can persist in the geological record for billions of years.
This durability makes iron biosignatures particularly attractive for planetary exploration. Unlike fragile organic molecules that degrade under radiation and harsh chemistry, mineralized iron structures can survive. Researchers have identified these biosignatures in environments ranging from hydrothermal vents on the ocean floor to terrestrial soils, from acidic mine drainage to neutral freshwater springs. Wherever liquid water contacts iron-bearing rocks, iron-metabolizing bacteria typically establish themselves.
Mars presents an obvious target. The planet's distinctive red color comes from oxidized iron in surface dust and rocks. Ancient Mars hosted liquid water, and spacecraft have documented iron-rich minerals throughout the geological record. If microbial life ever evolved on Mars, iron metabolism would have provided an accessible energy source. The minerals these hypothetical organisms produced could still exist, locked in ancient sediments awaiting discovery by rovers equipped with the right instruments.
The icy moons Europa and Enceladus offer different but equally compelling possibilities. Both harbor subsurface oceans beneath frozen shells. Europa's ocean likely contacts a rocky seafloor, where water and rock interactions would release dissolved iron. Enceladus actively vents ocean material through ice geysers at its south pole. Mission concepts propose sampling these plumes or landing near the vents, analyzing ejected particles for iron minerals that might betray biological origins.
The review emphasizes that recognizing biogenic iron minerals requires understanding how they form, what textures they create, and how they differ from abiotic iron precipitates. Mission planners must equip spacecraft with instruments capable of detecting not just iron minerals generally, but the specific morphological and chemical signatures that distinguish biology from geology.
The stakes are high. Finding iron biosignatures on another world wouldn't just confirm that life exists elsewhere; it would reveal that the same fundamental chemistry supporting Earth's deep biosphere operates throughout the solar system.
Laura I. Tenelanda-Osorio et al, Terrestrial iron biosignatures and their potential in solar system exploration for astrobiology, Earth-Science Reviews (2025). DOI: 10.1016/j.earscirev.2025.105318
How chromosomes separate accurately: Molecular 'scissors' caught in action
Cell division is a process of remarkable precision: during each cycle, the genetic material must be evenly distributed between the two daughter cells. To achieve this, duplicated chromosomes, known as sister chromatids, are temporarily linked by cohesin—a ring-shaped protein complex that holds them together until separation.
Researchers have uncovered the mechanism by which separase—the molecular ''scissors'' responsible for this cleavage—recognizes and cuts cohesin.
Their findings, published in Science Advances, shed new light on chromosome segregation errors that can lead to certain forms of cancer.
Before a cell divides, its chromosomes are duplicated. These identical copies, called sister chromatids, are held together by cohesin—a ring-like structure composed of several proteins that prevents premature separation.
When the cell is ready to divide, separase, a specialized enzyme, cleaves one of the cohesin subunits, the protein SCC1, allowing the chromatids to separate and the genetic material to be evenly distributed between the two daughter cells. Any malfunction in this process can compromise genome stability, potentially resulting in severe diseases, including cancer.
Using cryo-electron microscopy (cryo-EM)—a cutting-edge technique that enables biological samples to be visualized in their native state at near-atomic resolution—the team captured the interaction between separase and SCC1 and identified the precise cleavage sites on the protein.
Biochemical and structural analyses also revealed multiple "docking sites" on the surface of separase, ensuring high-affinity binding of SCC1 to separase prior to cleavage. These contact points include five phosphate-binding sites that recognize phosphorylated residues on SCC1.
Affinity experiments further showed that these phosphate–separase interactions stabilize the complex and accelerate SCC1 cleavage, ensuring fast and precise separation of chromosomes.
The work provides an extensive functional and structural framework to understand how separase is regulated and how it recognizes its substrates.
Testosterone in body odor linked to perceptions of social status
As humans, we are constantly navigating social status, using subconscious strategies to assert either our dominance or prestige.
We often use voice or body language to communicate this. Imagine a politician with a slow, booming voice, expanding their chest and extending their arms, quickly asserting authority over their audience.
We also use our sense of smell, according to new research from the University of Victoria (UVic), published in Evolution and Human Behavior.
This study examines the role of body odor in people's perceptions of others' social status.
Researchers examined whether scent cues associated with levels of circulating testosterone impact people's social status judgments. They found that both male and female participants in their study perceived men with higher levels of testosterone to be more dominant than men with lower testosterone levels.
Chemical signaling is the most widespread form of communication on Earth. Many animals will use scent to express and understand social status within their group. Mice, for example, scent-mark their territory to assert their dominance.
Previous research shows that humans use two different strategies to assert and maintain social status: dominance and prestige. Dominance is coercive, using tactics to force compliance. Prestige, on the other hand, involves showing valuable skills and traits that lead others to show deference voluntarily.
Research also reveals that scent plays an important role in human communication—of fear, sickness, safety, attraction, and personality traits such as dominance and neuroticism.
This is the first study to directly examine whether humans use scent cues related to circulating testosterone levels in the formation of social status judgments.
No significant relationship was found in the study between testosterone levels and perceived prestige. Perceptions of dominance, on the other hand, were associated with higher testosterone levels.
This study contributes to a growing body of work seeking to understand how social communication occurs through scent.
However, the findings should be interpreted with caution. The study involved a relatively small and uniform sample, and replication with larger and more diverse groups will be important to confirm whether these patterns hold.
Marlise K. Hofer et al, The role of testosterone in odor-based perceptions of social status, Evolution and Human Behavior (2025). DOI: 10.1016/j.evolhumbehav.2025.106752
Hitler's DNA reveals possible genetic disorder tied to sexual and social behaviour
Adolf Hitler most likely suffered from the genetic condition Kallmann Syndrome that can manifest itself in undescended testicles and a micropenis, researchers said this week, following DNA testing of the Nazi dictator's blood.
The new research also quashes the suggestion that Hitler had Jewish ancestry.
Popular World War II songs often mocked Hitler's anatomy but lacked any scientific basis.
The findings by an international team of scientists and historians now appear to confirm longstanding suspicions around his sexual development.
No one has ever really been able to explain why Hitler was so uncomfortable around women throughout his life, or why he probably never entered into intimate relations with women.
But now that we know he had Kallmann Syndrome, this could be the answer we've been looking for, say the researchers.
The research findings are featured in a new documentary, "Hitler's DNA: Blueprint of a Dictator."
The testing found a "high likelihood" that Hitler had Kallmann Syndrome and "very high" scores—in the top one percent—for a predisposition to autism, schizophrenia and biopolar disorder.
The research team stressed that such conditions, however, could not explain or excuse Hitler's warmongering or racist policies.
The testing was made possible after researchers obtained a sample of Hitler's blood from a piece of material taken from the sofa on which he shot himself.
The DNA results additionally rule out the possibility that Hitler had a Jewish grandfather via his grandmother.
Randomized trials show no evidence of non-specific vaccine effects
Researchers have conducted randomized trials involving thousands of children in Guinea-Bissau and Denmark to demonstrate so-called non-specific vaccine effects—that is, whether vaccines also protect against diseases other than the one they are designed to prevent.
A new comprehensive Danish review now shows that the trials have been unable to demonstrate non-specific effects for the widely used vaccinations against measles, tuberculosis, diphtheria, tetanus, and whooping cough. The study has just been published in the journal Vaccine.
It is concerning that such a prominent research group has conducted so many randomized trials over such a long period without finding real results. Randomized trials are normally considered the gold standard in medical research, so if they do not show anything, one should be very cautious about presenting it as convincing evidence.
The new study is the first to systematically analyze all of Benn and Aaby's randomized trials. While others have previously criticized individual studies, the researchers behind the new review examined the full body of work.
The reviewers found indications that the trial researchers systematically selected and highlighted results that supported their theories, while downplaying the fact that the trials did not confirm the primary hypotheses they were designed to test. When you look at the overall picture, there are almost no real findings left.
According to the trial researchers, their results meant it was time to change the global approach to vaccination: all new vaccines should routinely be assessed for non-specific effects, and vaccination programs should be revised worldwide.
The new review is based on 13 randomized trials presented in 26 articles containing more than 1,400 separate statistical analyses. Only one of the 13 randomized trials demonstrated the effect it was designed to detect—and that trial was stopped early and was considered unsuccessful by the researchers themselves.
The review showed that only about 7% of the many hypotheses tested by the researchers could be expected to be correct. Notably, this did not apply to the researchers' own primary hypotheses—these were not supported by the corrected results.
In 23 out of 25 articles, the researchers highlighted secondary findings as support for their theories, but in 22 of these cases the evidence disappeared after proper statistical handling. Overall, the researchers' interpretation did not take into account how many analyses they had conducted, and they did not focus on the main outcomes of the trials. The researchers stress that the purpose was not to determine whether non-specific vaccine effects exist, but to examine Benn and Aaby's research practices.
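To illustrate why the sheer number of analyses matters, here is a minimal sketch of the multiple-testing arithmetic. The figure of roughly 1,400 analyses comes from the review; the 5% significance threshold and the Bonferroni correction are standard statistical conventions used purely for illustration, not methods attributed to the review itself.

```python
# Illustration of multiple-testing inflation: with many analyses, some "significant"
# results are expected purely by chance. The 1,400 figure is from the article; the
# 5% threshold and Bonferroni correction are standard conventions, used here as an example.

n_tests = 1400
alpha = 0.05

expected_false_positives = n_tests * alpha   # chance "hits" if every null hypothesis were true
bonferroni_threshold = alpha / n_tests       # a standard per-test correction for n_tests comparisons

print(f"Expected chance 'hits' at p < {alpha}: about {expected_false_positives:.0f}")
print(f"Bonferroni-corrected per-test threshold: p < {bonferroni_threshold:.2e}")
```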
"We hope that others in the field will now re-evaluate the evidence—what do we actually know about non-specific vaccine effects? Although Benn and Aaby have contributed about one-third of all research in the area, others have also studied the question, and this should be included to form a complete picture.
Henrik Støvring et al, What is actually the emerging evidence about non-specific vaccine effects in randomized trials from the Bandim Health Project?, Vaccine (2025). DOI: 10.1016/j.vaccine.2025.127937
What puts the ‘cable’ in cable bacteria
The ‘wires’ that cable bacteria use to conduct electricity seem to be made of repeating units of a compound that contains nickel and sulfur. Researchers found that these units make up ‘nanoribbons’, which are woven together like a plait to form the larger wires. The work has yet to be peer reviewed, but if confirmed, the finding could be the first known example of a biologically produced metal–organic framework. “If it holds true, this is a major step in our understanding of what cable bacteria can accomplish,” says electromicrobiologist Lars Peter Nielsen, who co-discovered the microorganisms in 2009.
AI at the speed of light just became a possibility
Researchers have demonstrated single-shot tensor computing at the speed of light, a remarkable step towards next-generation artificial general intelligence hardware powered by optical computation rather than electronics.
Tensor operations are the kind of arithmetic that forms the backbone of nearly all modern technologies, especially artificial intelligence, yet they extend beyond the simple math we're familiar with. Imagine the mathematics behind rotating, slicing, or rearranging a Rubik's cube along multiple dimensions. While humans and classical computers must perform these operations step by step, light can do them all at once.
Today, every task in AI, from image recognition to natural language processing, relies on tensor operations. However, the explosion of data has pushed conventional digital computing platforms, such as GPUs, to their limits in terms of speed, scalability and energy consumption.
Motivated by this pressing problem, an international research collaboration has unlocked a new approach that performs complex tensor computations using a single propagation of light. The result is single-shot tensor computing, achieved at the speed of light itself.
The new method performs the same kinds of operations that today's GPUs handle, like convolutions and attention layers, but does them all at the speed of light.
To achieve this, the researchers encoded digital data into the amplitude and phase of light waves, effectively turning numbers into physical properties of the optical field. When these light fields interact and combine, they naturally carry out mathematical operations such as matrix and tensor multiplications, which form the core of deep learning algorithms.
By introducing multiple wavelengths of light, the team extended this approach to handle even higher-order tensor operations.
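As a rough numerical analogy of the encoding idea described above (and emphatically not the actual optical hardware or the study's encoding scheme), the sketch below represents data as complex amplitudes, letting a single matrix product stand in for the coherent summation that interfering light fields perform in one pass. The array sizes and random values are arbitrary.

```python
import numpy as np

# Numerical analogy only: encode data in the amplitude and phase of complex "fields"
# and let their superposition perform a matrix-vector product in one pass, the way
# interfering light fields accumulate products at a detector. This mimics the idea,
# not the optical hardware or encoding used in the study.

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))  # complex "transmission" weights
x = rng.normal(size=8) + 1j * rng.normal(size=8)            # input encoded as amplitude + phase

# Each detector coherently sums weighted field contributions: a multiply-accumulate in one shot.
detector_fields = W @ x

# A conventional step-by-step computation gives the same result.
reference = np.array([sum(W[i, j] * x[j] for j in range(8)) for i in range(4)])
assert np.allclose(detector_fields, reference)

print(np.abs(detector_fields))  # field magnitudes; a real detector would read out |field|^2
```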
Another key advantage of this method is its simplicity. The optical operations occur passively as the light propagates, so no active control or electronic switching is needed during computation.
Direct link between peak air pollution and cardiac risk revealed
The risk of suffering cardiac arrest may increase on days recording high levels of air pollution in some parts of the world. This emerges from a study conducted by researchers and published in the journal Global Challenges.
Researchers analyzed 37,613 cases of out-of-hospital cardiac arrest in Lombardy between 2016 and 2019 by assessing, for each episode, the daily concentrations of various pollutants (PM₂.₅, PM₁₀, NO₂, O₃ and CO) obtained from satellite data of the European Copernicus program (ESA). The study used advanced spatio-temporal statistical models to identify the relationship between pollution peaks and increased risk of cardiac events.
They observed a strong association with nitrogen dioxide (NO₂). Indeed, for every 10 micrograms per cubic meter increase, the risk of cardiac arrest rises by 7% over the next 96 hours.
Particulate matter was also associated with increased risk: PM₂.₅ and PM₁₀ showed 3% and 2.5% increases, respectively, on the same day of exposure.
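As a simple illustration of how such a figure scales, the sketch below converts the reported 7% increase per 10 µg/m³ of NO₂ to other increments, assuming the log-linear exposure-response form commonly used in this kind of study; the paper's exact model may differ, and the increments chosen are arbitrary examples.

```python
# Sketch: scale the reported relative risk (+7% per 10 ug/m3 of NO2) to other
# increments, assuming a log-linear exposure-response relationship (a common
# convention in case-crossover studies; the paper's exact model may differ).

RR_PER_10 = 1.07   # from the article: +7% cardiac-arrest risk per 10 ug/m3 NO2

def relative_risk(delta_no2_ugm3: float) -> float:
    """Relative risk for an NO2 increase of delta_no2_ugm3, under log-linear scaling."""
    return RR_PER_10 ** (delta_no2_ugm3 / 10.0)

for delta in (5, 10, 20, 30):
    print(f"+{delta} ug/m3 NO2 -> relative risk ~ {relative_risk(delta):.2f}")
```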
The effect is more pronounced in urban areas, but significant associations are also observed in rural towns. The risk rises particularly in the warm months, suggesting a possible interaction between heat and pollutants. The association was also observed at pollution levels below legal limits, suggesting that there is no safe exposure threshold.
The link between air quality and out-of-hospital cardiac arrest is a wake-up call for local health systems.
Amruta Umakant Mahakalkar et al, Short‐Term Effect of Air Pollution on Out of Hospital Cardiac Arrest (OHCA) in Lombardy—A Case‐Crossover Spatiotemporal Study, Global Challenges (2025). DOI: 10.1002/gch2.202500241
Neanderthals may have never truly gone extinct, according to new research – at least not in the genetic sense.
A new mathematical model has explored a fascinating scenario in which Neanderthals gradually disappeared not through "true extinction" but through genetic absorption into a more prolific species: Us.
According to the analysis, the long and drawn-out 'love affair' between Homo sapiens and Neanderthals could have led to almost complete genetic absorption within 10,000–30,000 years.
The model is simple and not regional, but it provides a "robust explanation" for the observed Neanderthal demise, say the researchers.
Once, the idea that Neanderthals and Homo sapiens interbred was a radical notion, and yet now, modern genome studies and archaeological evidence have provided strong evidence that our two lineages were hooking up and reproducing right across Eurasia for tens of thousands of years.
Today, people of non-African ancestry have inherited about 1 to 4 percent of their DNA from Neanderthals.
No one knows why Neanderthals disappeared from the face of our planet roughly 40,000 years ago, but experts tend to agree that there are probably numerous factors that played a role, like environmental changes, loss of genetic diversity, or competition with Homo sapiens.
Now researchers present a model that does not rule out these other explanations.
It does suggest that genetic drift played a strong role – even with the researchers assuming the Neanderthal genes our species 'absorbed' had no survival benefit.
If the model were to include the potential advantages of some Neanderthal genes to the larger Homo sapiens population, then mathematical support for genetic dilution may become even stronger.
Like all models, this new one is based on imperfect assumptions. It uses the birth rates of modern hunter-gatherer tribes to predict how quickly small Neanderthal tribes would be engulfed by a much larger human population, given how frequently it seems we were interbreeding.
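As a purely illustrative toy version of the dilution idea (not the published model, which is demographic and far more detailed), the sketch below lets a small Neanderthal gene pool mix with a much larger Homo sapiens one each generation. The mixing rate and generation time are made-up values; with the numbers shown, near-complete dilution takes a few tens of thousands of years, broadly the range the study reports, but that agreement is a product of the chosen parameters.

```python
# Toy sketch of genetic dilution, not the published model: a small Neanderthal
# population exchanges mates with a much larger Homo sapiens population each
# generation, so Neanderthal ancestry in the mixed gene pool decays over time.
# The mixing rate and generation time below are illustrative guesses only.

mixing_rate = 0.005        # fraction of the Neanderthal gene pool replaced per generation (assumed)
generation_years = 29      # rough human generation time in years (assumed)

ancestry = 1.0             # start from a fully Neanderthal gene pool
generations = 0
while ancestry > 0.02:     # run until ancestry is ~98% diluted
    ancestry *= (1 - mixing_rate)
    generations += 1

print(f"~98% dilution after {generations} generations (~{generations * generation_years:,} years)")
```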
The results are consistent with recent archaeological discoveries and align with a body of evidence that suggests the decline of Neanderthals in Europe was gradual rather than sudden.
Homo sapiens seem to have started migrating out of Africa much earlier than scientists previously thought, and they arrived in Europe in several influxes, possibly starting more than 200,000 years ago. As each wave of migration came crashing into the region, it engulfed local Neanderthal communities, diluting their genes like sand pulled into the sea. Today, some scientists argue that there is more to unite Homo sapiens and Neanderthals than there is to differentiate us. Our lineages, they say, should not be regarded as two separate species, but rather as distinct populations belonging to a "common human species."
Neanderthals were surprisingly adaptable and intelligent. They made intricate tools, created cave art, and used fire – and when it came to communication, they were probably capable of far more than simple grunting. Neanderthal populations and cultures may no longer exist, but their genetic legacy lives on inside of us. These are not just our cousins; they are also our ancestors.
Parasitic ant tricks workers into killing their queen, then takes the throne
Scientists document a new form of host manipulation where an invading, parasitic ant queen "tricks" ant workers into killing their queen mother. The invading ant integrates herself into the nest by pretending to be a member of the colony, then sprays the host queen with fluid that causes her daughters to turn against her. The parasitic queen then usurps the throne, having the workers serve her instead as the new queen regent. This work was published in Current Biology on November 17.
Matricide, a behavior where offspring kill or eat their mother, is a rarely seen phenomenon in nature. Despite appearing maladaptive at first glance, it does offer advantages by either nourishing the young and giving the mother indirect benefits through increased offspring survival or allowing the young to invest in offspring of their own.
Up until now, only two types of matricide have been recorded, in which either the mother or the offspring benefit. In the novel matricide the researchers report, neither profits; only the parasitic third party does.
The ants Lasius orientalis and umbratus, commonly referred to as the "bad-smell ants" in Japanese, are so-called "social parasites" that execute a covert operation to infiltrate and eventually take over the colonies of their unsuspecting host queens, Lasius flavus and japonicus, respectively. The parasitic queen exploits ants' reliance on smell to identify both friend and foe, duping worker ants into believing she is part of the family.
Ants live in the world of odors.
Before infiltrating the nest, the parasitic queen stealthily acquires the colony's odor on her body from workers walking outside so that she is not recognized as the enemy.
Once these "bad-smell" ants have been accepted by the colony's workers and locate the queen, the parasitic ant douses her with a foul-smelling chemical researchers presume to be formic acid—a chemical unique to some ants and stored in a specialized organ.
The parasitic ants exploit that ability to recognize odors by spraying formic acid to mask the queen's normal scent with a repugnant one. This causes the daughters, who normally protect their queen mother, to attack her as an enemy.
Then, like fleeing the scene of the crime, the parasitic queen immediately (but temporarily) retreats. "She knows the odor of formic acid is very dangerous, because if host workers perceive the odor they would immediately attack her as well."
She will periodically return and spray the queen multiple times until the workers have killed and disposed of their mother queen. Then, once the dust has settled, the parasitic ant queen returns and begins laying eggs of her own. With this newly accepted parasitic queen in the colony and no other queen to compete with, the matricidal workers begin taking care of her and her offspring instead.
Lead-free alternative discovered for essential electronics component
Ferroelectric materials are used in infrared cameras, medical ultrasound, computer memory and actuators that convert electrical signals into mechanical motion and vice versa. Most of these essential materials, however, contain lead and can therefore be toxic.
Therefore, for the last 10 years, there has been a huge initiative all over the world to find ferroelectric materials that do not contain lead.
The atoms in a ferroelectric material can have more than one crystalline structure. Where two crystalline structures meet is called a phase boundary, and the properties that make ferroelectric materials useful are strongest at these boundaries.
Using chemical processes, scientists have manipulated the phase boundaries of lead-based ferroelectric materials to create higher performance and smaller devices. Chemically tuning the phase boundaries of lead-free ferroelectric material, however, has been challenging.
New research found a way to enhance lead-free ferroelectrics using strain, or mechanical force, rather than a chemical process. The discovery could produce lead-free ferroelectric components, opening new possibilities for devices and sensors that could be implanted in humans.
Ferroelectric materials, first discovered in 1920, have a natural electrical polarization that can be reversed by an electric field. That polarization remains reversed even once the electric field has been removed.
The materials are dielectric, meaning they can be polarized by the application of an electric field. That makes them highly effective in capacitors.
Ferroelectrics are also piezoelectric, which means they can generate an electrical response to mechanical energy, and vice versa. This quality is used in sonar, fire sensors, the tiny speakers in a cell phone, and the actuators that precisely form letters in an inkjet printer.
All these properties can be enhanced by manipulating the phase boundary of ferroelectric materials. In a lead-based ferroelectric, such as lead zirconate titanate, one can chemically tune the composition to land right at the phase boundary. Lead-free ferroelectrics, however, contain highly volatile alkali metals, which can become gaseous and evaporate during chemical tuning.

The researchers instead created a thin film of the lead-free ferroelectric sodium niobate (NaNbO3). The material is known to have a complex crystalline ground-state structure at room temperature, and it is also flexible. Scientists have long known that changing the temperature of sodium niobate can produce multiple phases, or different arrangements of atoms. Instead of a chemical process or temperature manipulation, the researchers changed the structure of the atoms in sodium niobate by strain.
They grew a thin film of sodium niobate on a substrate. The atoms in the sodium niobate contract and expand as they try to match the arrangement of atoms in the substrate, and this mismatch puts the sodium niobate under strain.
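The strain itself is just the lattice mismatch between film and substrate. Here is a minimal sketch of that calculation; the sodium niobate value is an approximate pseudocubic lattice constant, while the substrate value is a hypothetical placeholder since the study's substrate is not named here.

# Epitaxial misfit strain (standard textbook definition). Substrate value is a
# hypothetical placeholder; the NaNbO3 value is an approximate pseudocubic constant.

def misfit_strain(a_substrate: float, a_film: float) -> float:
    """In-plane strain imposed on the film (positive = tensile, negative = compressive)."""
    return (a_substrate - a_film) / a_film

a_nanbo3 = 3.91       # angstrom, approximate pseudocubic lattice constant of NaNbO3
a_substrate = 3.79    # angstrom, hypothetical substrate chosen for illustration

print(f"Misfit strain: {misfit_strain(a_substrate, a_nanbo3):.2%}")  # about -3%, compressive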
"What is quite remarkable with sodium niobate is if you change the length a little bit, the phases change a lot. To the researchers' surprise, the strain caused the sodium niobate to have three different phases at once, which optimizes the useful ferroelectric properties of the material by creating more boundaries. The experiments were conducted at room temperature. The next step will be seeing if sodium niobate responds to strain in the same way at extreme temperatures ranging from minus 270 C to 1,000 C above.
Reducing arsenic in drinking water cuts risk of death, even after years of chronic exposure: 20-year study
Arsenic is among the most common chemical pollutants of ground water.
Arsenic is a naturally occurring element that accumulates in groundwater, and because it has no taste or odor, people can unknowingly drink contaminated water for years.
A 20-year study of nearly 11,000 adults in Bangladesh found that lowering arsenic levels in drinking water was associated with up to a 50% lower risk of death from heart disease, cancer and other chronic illnesses, compared with continued exposure.
The study provides the first long-term, individual-level evidence that reducing arsenic exposure may lower mortality, even among people exposed to the toxic contaminant for years.
The landmark analysis by researchers is important for public health because groundwater contamination from naturally occurring arsenic remains a serious issue worldwide.
The study shows what happens when people who are chronically exposed to arsenic are no longer exposed. You're not just preventing deaths from future exposure, but also from past exposure.
The results provide the clearest evidence to date of the link between arsenic reduction and lower mortality.
For two decades, the research team followed each participant's health and repeatedly collected urine samples to track exposure, which they say strengthened the accuracy of their findings.
People whose urinary arsenic levels dropped from high to low had mortality rates identical to those who had consistently low exposure throughout the duration of the study. The larger the drop in arsenic levels, the greater the decrease in mortality risk. By contrast, individuals who continued drinking high-arsenic water saw no reduction in their risk of death from chronic disease.
Lethal dose of plastics for ocean wildlife: Surprisingly small amounts can kill seabirds, sea turtles and marine mammals
By studying more than 10,000 necropsies, researchers now know how much plastic it takes to kill seabirds, sea turtles, and marine mammals, and the lethal dose is much smaller than you might think. Their new study, titled "A quantitative risk assessment framework for mortality due to macroplastic ingestion in seabirds, marine mammals, and sea turtles," is published in the Proceedings of the National Academy of Sciences.
Led by Ocean Conservancy researchers, the paper is the most comprehensive study yet to quantify the extent to which a range of plastic types—from soft, flexible plastics like bags and food wrappers; to balloon pieces; to hard plastics ranging from fragments to whole items like beverage bottles—result in the death of seabirds, sea turtles, and marine mammals that consume them.
The study reveals that, on average, consuming less than three sugar cubes' worth of plastics for seabirds like Atlantic puffins (which measure approximately 28 centimeters, or 11 inches, in length); just over two baseballs' worth of plastics for sea turtles like Loggerheads (90 centimeters or 35 inches); and about a soccer ball's worth of plastics for marine mammals like harbor porpoises (1.5 meters, or 60 inches), has a 90% likelihood of death.
At the 50% mortality threshold, the volumes are even more startling: consuming less than one sugar cube's worth of plastics kills one in two Atlantic puffins; less than half a baseball's worth of plastics kills one in two Loggerhead turtles; and less than a sixth of a soccer ball kills one in two harbor porpoises.
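The 50% and 90% figures are points on a dose-response curve. As a minimal sketch of how such thresholds relate, here is a generic logistic curve in log-dose; it is not the paper's fitted risk framework, and all numbers are made up.

import math

# Generic logistic dose-response in log-dose. Illustrative only; the study fits its
# own quantitative risk-assessment framework with species-specific parameters.

def p_death(dose: float, ld50: float, slope: float) -> float:
    """Probability of death after ingesting a given plastic volume."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log(dose) - math.log(ld50))))

def dose_for(p: float, ld50: float, slope: float) -> float:
    """Dose at which mortality probability reaches p."""
    return ld50 * math.exp(math.log(p / (1.0 - p)) / slope)

ld50, slope = 1.0, 2.0   # hypothetical units ("sugar cubes' worth") and steepness
print(dose_for(0.90, ld50, slope))   # ~3.0: here the 90% dose is ~3x the 50% dose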
The lethal dose varies based on the species, the animal's size, the type of plastic it's consuming, and other factors, but overall it's much smaller than you might think, which is troubling when you consider that more than a garbage truck's worth of plastics enters the ocean every minute.
Nearly half (47%) of all sea turtles, a third (35%) of seabirds, and 12% of marine mammals in the dataset had plastics in their digestive tracts at their time of death. Overall, one in five (21.5%) of the animals recorded had ingested plastics, often of varying types.

Additional findings included:

Seabirds: Of seabirds that ate plastic, 92% ate hard plastics, 9% ate soft plastics, 8% ate fishing debris, 6% ate rubber, and 5% ate foams, with many individuals eating multiple plastic types. Seabirds are especially vulnerable to synthetic rubber: just six pieces, each smaller than a pea, are 90% likely to cause death.

Sea turtles: Of sea turtles that ate plastic, 69% ate soft plastics, 58% ate fishing debris, 42% ate hard plastics, 7% ate foam, 4% ate synthetic rubbers, and 1% ate synthetic cloth. Sea turtles, which on average weigh several hundred pounds, are especially vulnerable to soft plastics, like plastic bags: just 342 pieces, each about the size of a pea, would be lethal with 90% certainty.

Marine mammals: Of marine mammals that ate plastic, 72% ate fishing debris, 10% ate soft plastics, 5% ate rubber, 3% ate hard plastics, 2% ate foam, and 0.7% ate synthetic cloth. Marine mammals are especially vulnerable to fishing debris: 28 pieces, each smaller than a tennis ball, are enough to kill a sperm whale in 90% of cases.

Threatened species and broader impacts: The study also found that nearly half of the individual animals that had ingested plastics are red-listed as threatened (near-threatened, vulnerable, endangered or critically endangered) by the IUCN. Notably, the study only analyzed the impacts of ingesting large plastics (greater than 5 millimeters) and did not account for all plastic impacts and interactions; for example, it excluded entanglement, sublethal impacts of ingestion that can affect overall animal health, and microplastics consumed.
This research really drives home how ocean plastics are an existential threat to the diversity of life on our planet.
Murphy, Erin L., A quantitative risk assessment framework for mortality due to macroplastic ingestion in seabirds, marine mammals, and sea turtles, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2415492122
Dogs 10,000 years ago roamed with bands of humans and came in all shapes and sizes
Analysis of ancient dog skulls and genomes indicates that significant physical and genetic diversity in dogs emerged over 10,000 years ago, predating modern selective breeding. Distinctive dog-like skulls appeared around 11,000 years ago, and ancient DNA reveals that dogs often migrated with human groups, reflecting complex, intertwined histories and early biocultural exchanges.
New study identifies part of brain animals use to make inferences
Animals survive in changing and unpredictable environments by not merely responding to new circumstances, but also, like humans, by forming inferences about their surroundings—for instance, squirrels understand that certain bird noises don't signal the presence of a predator, so they won't seek shelter when they later hear these same sounds. But less clear is how the brain works to create these inferences.
In a study published in the journal Neuron, a team of researchers identified a particular part of the brain that serves as an "inference engine." The region, the orbitofrontal cortex (OFC), allows animals to update their understanding of their surroundings based on changing circumstances.
To survive, animals cannot simply react to their surroundings. They must generalize and make inferences—a cognitive process that is among the most vital and complicated operations that nervous systems perform. These findings advance our knowledge of how the brain works in applying what we've learned.
The scientists add that the results offer promise for better understanding the nature of neuropsychiatric disorders, such as bipolar disorder and schizophrenia, in which our ability to make inferences is diminished.
In experiments in which the brain's OFC was disrupted, the trained rats could no longer update their understanding of what other rewards might be available—specifically, they could not make distinctions among hidden states.
These results, based on recordings of more than 10,000 neurons, suggest that the OFC is directly involved in helping the brain make inferences in changing situations.
Learning to see after being born blind: Brain imaging study highlights infant adaptability
Some babies are born blind due to dense bilateral congenital cataracts and require surgery to restore their sight. This period of several months without vision can leave a lasting mark on how the brain processes visual details, but surprisingly little on the recognition of faces, objects, or words. This is the main finding of an international study conducted by neuroscientists.
Using brain imaging, the researchers compared adults who had undergone surgery for congenital cataracts as babies with people born with normal vision. The results are striking: in people born with cataracts, the area of the brain that analyzes small visual details (contours, contrasts, etc.) retains a lasting alteration from this early blindness.
On the other hand, the more advanced regions of the visual brain, responsible for recognizing faces, objects, and words, function almost normally. These "biological" results have been validated by computer models involving artificial neural networks. This distinction between altered and preserved areas of the brain paves the way for new treatments. In the future, clinicians may be able to offer visual therapies that are better tailored to each patient.
Babies' brains are highly adaptable. Even if vision is lacking at the very beginning of life, the brain can adapt and learn to recognize the world around it, even on the basis of degraded information.
These findings also challenge the idea of a single "critical period" for visual development. Some areas of the brain are more vulnerable to early vision loss, while others retain a surprising capacity for recovery. The brain is both fragile and resilient. Early experiences matter, but they don't determine everything.
Impact of a transient neonatal visual deprivation on the development of the ventral occipito-temporal cortex in humans, Nature Communications (2025).
Frozen RNA survived 50,000 years
Researchers have found the oldest RNA molecules to date in mummified woolly mammoth tissue. RNA is a fragile molecule, which makes intact ancient samples few and far between. But such samples are sought after because analysing ancient RNA could shed light on the gene activity of extinct animals. Scientists used enzymes to convert RNA in the mammoth tissue to DNA, and then reverse-engineered the original RNA sequences. This technique recovered fragments of RNA from three samples, dated to between 39,000 and 52,000 years old.
Psilocybin could reverse effects of brain injuries resulting from intimate partner violence, rat study finds
The term intimate partner violence (IPV) refers to physical, sexual or psychological abuse perpetrated by an individual on their romantic partner or spouse. Victims of IPV who are violently attacked and physically abused on a regular basis can sometimes present injuries that have lasting consequences on their mood, mental processes and behaviour.
Chronic neurobehavioral sequelae from IPV-related brain injury (IPV-BI) are associated with neuroinflammation and impaired neuroplasticity, and effective treatment options are scarce, particularly in the context of IPV.
Common types of injuries observed in IPV victims who are periodically attacked physically include mild traumatic brain injuries (mTBI) and disruptions in the flow of blood or oxygen to the brain emerging from non-fatal strangulation (NFS). Both these have been linked to inflammation in the brain and a hindered ability to form new connections between neurons or change older connections (i.e., neuroplasticity).
Researchers recently carried out a study involving rats aimed at assessing the potential of the psychedelic compound psilocybin for reversing the chronic effects of IPV-related brain injuries. Their findings, published in Molecular Psychiatry, suggest that psilocybin could in fact reduce inflammation and anxiety, improve memory and facilitate learning following brain injuries caused by repeated physical trauma.
Psilocybin, a 5-HT2A receptor agonist with therapeutic potential in psychiatric disorders that share overlapping pathophysiology with brain injury, is a promising candidate.
As the majority of IPV victims are female, Allen, Sun and their colleagues performed their experiments on female rats. To model IPV-related brain injuries, they subjected the rats to mild injuries that mirrored those observed in many victims of IPV.
"Female rats underwent daily mTBI (lateral impact) followed by NFS (90 s) for five days, followed by 16 weeks of recovery," the authors explained.
Four months after the rats had been subjected to the injuries, they were given either a dose of psilocybin or a placebo (i.e., saline) injection. Twenty-four hours later, they completed behavioral tasks designed to assess their memory, learning, motivation and anxiety levels.
Psilocybin activates 5-HT2A receptors, a subtype of serotonin receptor known to play a role in regulating mood, mental processes and the brain's adaptability (i.e., plasticity). Using a drug that blocks 5-HT2A receptors, the researchers also tried to determine whether these receptors played a key role in any effects they observed.
"To investigate whether psilocybin's effects were 5-HT2A receptor dependent, additional rats received pre-treatment with selective 5-HT2A receptor antagonist M100907 (1.5 mg/kg) one hour before psilocybin administration," wrote the authors. in their paper. Overall, the results of this research team's experiments suggest that psilocybin could help to reverse some of the behavioral, cognitive and brain-related damage caused by repeated physical attacks. The female rats they tested were found to exhibit less anxiety-like and depression-like behaviors after they received psilocybin, while also performing better in memory and learning tests.
As this study was carried out in rats, its findings cannot be confidently applied to humans yet. In the future, human clinical trials could help to determine whether psilocybin is in fact a safe and effective therapeutic strategy to aid recovery from IPV-related brain injuries.
Josh Allen et al, Psilocybin mitigates chronic behavioral and neurobiological alterations in a rat model of recurrent intimate partner violence-related brain injury, Molecular Psychiatry (2025). DOI: 10.1038/s41380-025-03329-x.
How most of the universe's visible mass is generated: Experiments explore emergence of hadron mass
Deep in the heart of the matter, some numbers don't add up. For example, while protons and neutrons are made of quarks, nature's fundamental building blocks bound together by gluons, their masses are much larger than the individual quarks from which they are formed.
This leads to a central puzzle … why? In the theory of the strong interaction, known as quantum chromodynamics or QCD, quarks acquire their bare mass through the Higgs mechanism. The long-hypothesized process was confirmed by experiments at the CERN Large Hadron Collider in Switzerland and led to the Nobel Prize for Peter Higgs in 2013.
Yet the inescapable issue remains that this mechanism contributes to the measured proton and neutron masses at the level of less than 2%. This clearly demonstrates that the dominant part of the mass of real-world matter is generated through another mechanism, not through the Higgs. The rest arises from emergent phenomena.
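For scale, using typical current-quark masses of a few MeV (roughly the standard Particle Data Group values), the Higgs-generated contribution to a proton's three valence quarks is

\[ m_u + m_u + m_d \;\approx\; 2(2.2\,\text{MeV}) + 4.7\,\text{MeV} \;\approx\; 9\,\text{MeV}, \qquad m_p \approx 938\,\text{MeV}, \]

i.e. only about 1% of the measured proton mass, consistent with the less-than-2% figure above.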
The unaccounted mass and how it arises have long been open problems in nuclear physics, but scientists are now getting a more detailed understanding of this mass-generating process than ever before.
So, what gives protons and other strongly interacting particles (together called hadrons) their added "heft"? The answer lies in the dynamics of QCD. Through QCD, the strong interaction generates mass from the energy stored in the fields of the strongly interacting quarks and gluons. This is termed the emergence of hadron mass, or EHM.
Over the past decade, significant progress has been made in understanding the dominant portion of the universe's visible mass. This is driven by studies of the distance (or momentum) dependence of the strong interaction within a QCD-based approach known as the continuum Schwinger method (CSM).
Bridging CSM and experiment via phenomenology, physicists analyzed nearly 30 years of accumulated data. This comprehensive effort has given scientists the most detailed look yet at the mechanisms responsible for EHM.
QCD describes the dynamics of the most elementary constituents of matter known so far: quarks and gluons. Through QCD processes, all hadronic matter is generated. This includes protons, neutrons, other bound quark-gluon systems and, ultimately, all atomic nuclei. A distinctive feature of the strong force is gluon self-interaction.
Without gluon self-interaction, the universe would be completely different.
It creates beauty through the diversity of particle properties and gives rise to real-world hadron phenomena through emergent physics.
Because of this property, the strong interaction evolves rapidly with distance. This evolution of strong-interaction dynamics is described within the CSM approach. At distances comparable to the size of a hadron, about 10⁻¹³ cm, the relevant constituents are no longer the bare quarks and gluons of QCD.
Instead, dressed quarks and dressed gluons emerge when bare quarks and gluons are surrounded by clouds of strongly coupled quarks and gluons undergoing continual creation and annihilation.
In this regime, dressed quarks acquire dynamically generated masses that evolve with distance. This provides a natural explanation for EHM: a transition from the nearly massless bare quarks (with masses of only a few MeV) to fully dressed quarks of approximately 400 MeV mass. Strong interactions among the three dressed quarks of the proton generate its mass of about 1 GeV, as well as the masses of its excited states in the range of 1.0–3.0 GeV.
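As a back-of-the-envelope consistency check (not the detailed CSM calculation), three dressed quarks of roughly 400 MeV each give

\[ 3 \times M_{\text{dressed}} \;\approx\; 3 \times 400\,\text{MeV} \;\approx\; 1.2\,\text{GeV}, \]

which is of the same order as the measured proton mass of about 0.94 GeV, with the difference carried by the interaction energy among the dressed quarks.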
This raises the question: Can EHM be elucidated by mapping the momentum dependence of the dressed-quark mass from experimental studies of the proton and its excited states?
Investigations, comparing experiment with theory, demonstrate conclusively that dressed quarks with dynamically generated masses are the active degrees of freedom underlying the structure of the proton and its excited states. They also solidify the case that experimental results can be used to assess the mechanisms responsible for EHM.
Could atoms be reordered to enhance electronic devices?
Reordering atoms in SiGeSn barriers surrounding GeSn quantum wells unexpectedly increases charge carrier mobility, contrary to prior assumptions. This enhancement is likely due to atomic short-range ordering, suggesting that manipulating atomic arrangements could significantly improve electronic device performance and miniaturization, with implications for neuromorphic and quantum computing.
Humans are evolved for nature, not cities, say anthropologists
A new paper by evolutionary anthropologists argues that modern life has outpaced human evolution. The study suggests that chronic stress and many modern health issues are the result of an evolutionary mismatch between our primarily nature-adapted biology and the industrialized environments we now inhabit.
Over hundreds of thousands of years, humans adapted to the demands of hunter-gatherer life—high mobility, intermittent stress and close interaction with natural surroundings.
Industrialization, by contrast, has transformed the human environment in only a few centuries, by introducing noise, air and light pollution, microplastics, pesticides, constant sensory stimulation, artificial light, processed foods and sedentary lifestyles.
In our ancestral environments, we were well adapted to deal with acute stress to evade or confront predators.
The lion would come around occasionally, and you had to be ready to defend yourself—or run. The key is that the lion goes away again.
Today's stressors—traffic, work demands, social media and noise, to name just a few—trigger the same biological systems, but without resolution or recovery. Our bodies react as though every one of these stressors were a lion, say the researchers.
Whether it's a difficult discussion with your boss or traffic noise, your stress response system is still the same as if you were facing lion after lion. As a result, you have a very powerful response from your nervous system, but no recovery.
This stress is becoming constant.
In their review, the researchers synthesize evidence suggesting that industrialization and urbanization are undermining human evolutionary fitness. From an evolutionary standpoint, the success of a species depends on survival and reproduction. According to the authors, both have been adversely affected since the Industrial Revolution.
They point to declining global fertility rates and rising levels of chronic inflammatory conditions such as autoimmune diseases as signs that industrial environments are taking a biological toll.
There's a paradox where, on the one hand, we've created tremendous wealth, comfort and health care for a lot of people on the planet, but on the other hand, some of these industrial achievements are having detrimental effects on our immune, cognitive, physical and reproductive functions.
One well-documented example is the global decline in sperm count and motility observed since the 1950s, which the researchers link to environmental factors. This is thought to be tied to pesticides and herbicides in food, but also to microplastics.
Given the pace of technological and environmental change, biological evolution cannot keep up. Biological adaptation is very slow. Longer-term genetic adaptations are multigenerational—tens to hundreds of thousands of years.
That means the mismatch between our evolved physiology and modern conditions is unlikely to resolve itself naturally. Instead, the researchers argue, societies need to mitigate these effects by rethinking their relationship with nature and designing healthier, more sustainable environments.
Addressing the mismatch requires both cultural and environmental solutions.
One approach is to fundamentally rethink our relationship with nature—treating it as a key health factor and protecting or regenerating spaces that resemble those of our hunter-gatherer past. Another is to design healthier, more resilient cities that take human physiology into account. This new, research-based thinking can identify which stimuli most affect blood pressure, heart rate or immune function, for example, and pass that knowledge on to decision-makers. We need to get our cities right—and at the same time regenerate, value and spend more time in natural spaces, say the researchers.
Daniel P. Longman et al, Homo sapiens, industrialisation and the environmental mismatch hypothesis, Biological Reviews (2025). DOI: 10.1111/brv.70094
The research team designed an experiment in which they strictly controlled the host animals' genes. First, they took gut microbiota from wild-derived mice and gave it to genetically identical germ-free mice in the lab. Then they set up a selection line, repeatedly choosing the two mice that traveled the least and transferring their microbes into a fresh batch of genetically identical, germ-free mice for the next generation. The scientists focused on locomotor activity (movement) because previous tests had confirmed that it was a behavior strongly influenced by the microbiome. The researchers also ran a control line in which two donor mice were chosen at random.
This serial transfer of gut microbes carried on for four generations. By using germ-free mice, the researchers could be sure that any behavioral changes they observed were due to the selection and transfer of the microbial community.
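A minimal sketch of that selection loop, with made-up numbers: group size, effect sizes and noise are placeholders, and each microbial community is reduced to a single "activity effect" it confers on its host, which is not how the study quantified the microbiome.

import random

def run_line(n_mice=10, generations=4, select_random=False):
    # Each community is summarised by the locomotor-activity level it confers.
    communities = [random.gauss(100.0, 10.0) for _ in range(n_mice)]
    for _ in range(generations):
        measured = [c + random.gauss(0.0, 5.0) for c in communities]   # activity assay
        if select_random:                                              # control line
            donors = random.sample(range(n_mice), 2)
        else:                                                          # selection line
            donors = sorted(range(n_mice), key=lambda i: measured[i])[:2]
        seed = sum(communities[i] for i in donors) / 2.0               # pooled donor microbiota
        # Next generation: identical germ-free recipients colonised with the donor mix,
        # plus a little drift in community composition.
        communities = [seed + random.gauss(0.0, 3.0) for _ in range(n_mice)]
    return sum(communities) / n_mice

print("selected line:", round(run_line(), 1),
      " control line:", round(run_line(select_random=True), 1))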
Selecting for low activity successfully caused slower movement across the four generations. The researchers analyzed the composition of the microbial community and found that this reduction in activity was linked to higher levels of the bacterium Lactobacillus, which produces the substance indolelactic acid (ILA). They also showed the link was causal. When they independently gave Lactobacillus or ILA to other mice, it was enough to suppress their locomotion.
"This work highlights the role of microbiome-mediated trait inheritance in shaping host ecology and evolution," wrote the researchers in their paper.
The novelty in our study lies in experimentally demonstrating that selection on a host trait can lead to changes in that same trait over time purely through microbiome transmission, without any genetic evolution in the host.
Taichi A. Suzuki et al, Selection and transmission of the gut microbiome alone can shift mammalian behavior, Nature Communications (2025). DOI: 10.1038/s41467-025-65368-w
Scientists tie lupus to a virus nearly all of us carry
One of humanity's most ubiquitous infectious pathogens bears the blame for the chronic autoimmune condition called systemic lupus erythematosus (lupus), Stanford Medicine investigators and their colleagues have found.
The Epstein-Barr virus (EBV), which usually resides silently inside the bodies of 19 out of 20 people, is directly responsible for prompting what starts out as a minuscule number of immune cells to go rogue and persuade far more of their fellow immune cells to launch a widespread assault on the body's tissues, the scientists have shown.
The work is published in the journal Science Translational Medicine.
About five million people worldwide have lupus, in which the immune system attacks the contents of cell nuclei. This results in damage to organs and tissues throughout the body—skin, joints, kidneys, heart, nerves and elsewhere—with symptoms varying widely among individuals. For unknown reasons, nine out of 10 lupus patients are women.
With appropriate diagnosis and medication, most lupus patients can live reasonably normal lives, but for about 5% of them the disorder can be life-threatening.
Existing treatments slow down disease progression but don't 'cure' it.
By the time we've reached adulthood, the vast majority of us have been infected by EBV. Transmitted in saliva, EBV infection typically occurs in childhood, from sharing a spoon with or drinking from the same glass as a sibling or a friend, or maybe during our teen years, from exchanging a kiss. EBV can cause mononucleosis, "the kissing disease," which begins with a fever that subsides but lapses into a profound fatigue that can persist for months.
Practically the only way to avoid getting EBV is to live in a bubble. If you've lived a normal life, the odds are nearly 20 to 1 that you've got it.
Once you've been infected by EBV, you can't get rid of it, even if you remain or become symptom-free. EBV belongs to a large family of viruses, including those responsible for chickenpox and herpes, that can deposit their genetic material into the nuclei of infected cells. There the virus slumbers in a latent form, hiding from the immune system's surveillance agents. This can last as long as the cell it's hiding in stays alive; or, under certain conditions, the virus may reactivate and force the infected cell's replicative machinery to produce myriad copies of itself that break out to infect other cells and other people.
Among the cell types in which EBV takes up permanent residence are B cells, immune cells that do a couple of important things after they ingest bits of microbial pathogens. For one, they can produce antibodies: customized proteins that find and bind immune-system-arousing proteins or other molecules (immunologists call them "antigens") on microbial pathogens that have infected an individual, or are trying to.
In addition, B cells are what immunologists call "professional antigen-presenting cells": They can process antigens and display them on their surfaces in a way that encourages other immune cells to raise the intensity of their hunt for the pathogen in question. That's a substantial force multiplier for kick-starting an immune response.
Our bodies harbor hundreds of billions of B cells, which over the course of numerous rounds of cell division develop an enormous diversity of antibodies. In the aggregate, these antibodies can bind an estimated 10 billion to 100 billion different antigenic shapes. This is why we're able to mount a successful immune response to so many different pathogens.
Oddly, about 20% of the B cells in our bodies are autoreactive. They target antigens belonging to our own tissues—not by design, but due to the random way B-cell diversity comes about: through sloppy replication, apparently engineered by evolution to ensure diversification. Fortunately, these B cells are typically in a dopey state of inertia, and they pretty much leave our tissues alone.
But at times, somnolent autoreactive B cells become activated, take aim at our own tissues and instigate one of the disorders collectively called autoimmunity. Some awakened autoreactive B cells crank out antibodies that bind to proteins and DNA inside the nuclei of our cells. Such activated "antinuclear antibodies"—the hallmark of lupus—trigger damage to tissues randomly distributed throughout the body, because virtually all our body's cells have nuclei.
The vast majority of EBV-infected people (most of us, that is) have no idea they're still sheltering a virus and never get lupus. But essentially everyone with lupus is EBV-infected, studies have shown. An EBV-lupus connection has long been suspected but never nailed down until now.
Although latent EBV is ubiquitous in the sense that almost everybody carries it, it resides in only a tiny fraction of any given person's B cells. As a result, until the new study, it was virtually impossible for existing methods to identify infected B cells and distinguish them from uninfected ones.
But researchers have now developed an extremely high-precision sequencing system that enabled them to do this. They found that fewer than 1 in 10,000 of a typical EBV-infected but otherwise healthy individual's B cells host a dormant EBV genome.
Employing their new EBV-infected-B-cell-identifying technology along with bioinformatics and cell-culture experimentation, the researchers found out how such small numbers of infected cells can cause a powerful immune attack on one's own tissues. In lupus patients, the fraction of EBV-infected B cells rises to about 1 in 400—a 25-fold difference.
It's known that the latent EBV, despite its near-total inactivity, nonetheless occasionally nudges the B cell in which it's been snoozing to produce a single viral protein, EBNA2. The researchers showed that this protein acts as a molecular switch—in geneticists' language, a transcription factor—activating a battery of genes in the B cell's genome that had previously been at rest. At least two of the human genes switched on by EBNA2 are recipes for proteins that are, themselves, transcription factors that turn on a variety of other pro-inflammatory human genes.
The net effect of all these genetic fireworks is that the B cell becomes highly inflammatory: It dons its "professional antigen-presenting cell" uniform and starts stimulating other immune cells (called helper T cells) that happen to share a predilection for targeting cell-nuclear components. These helper T cells enlist multitudes of other antinuclear B cells as well as antinuclear killer T cells, vicious attack dogs of the immune system.
When that militia bulks up, it doesn't matter whether any of the newly recruited antinuclear B cells are EBV-infected or not. (The vast majority of them aren't.) If there are enough of them, the result is a bout of lupus.
Scientists suspect that this cascade of EBV-generated self-targeting B-cell activation might extend beyond lupus to other autoimmune diseases such as multiple sclerosis, rheumatoid arthritis and Crohn's disease, where hints of EBV-initiated EBNA2 activity have been observed.
The million-dollar question: If about 95% of us are walking around with latent EBV in our B cells, why do some of us—but not all of us—get autoimmunity? The researchers speculate that perhaps only certain EBV strains spur the transformation of infected B cells into antigen-presenting "driver" cells that broadly activate huge numbers of antinuclear B cells.
Many companies are working on an EBV vaccine, and clinical trials of such a vaccine are underway. But that vaccine would have to be given soon after birth, they noted, as such vaccines are unable to rid an already-infected person of the virus.
Shady Younis et al, Epstein-Barr virus reprograms autoreactive B cells as antigen presenting cells in systemic lupus erythematosus, Science Translational Medicine (2025). DOI: 10.1126/scitranslmed.ady0210
Bacteria spin rainbow-colored, sustainable textiles
In the future, clothes might come from vats of living microbes. Reporting in the journal Trends in Biotechnology, researchers demonstrate that bacteria can both create fabric and dye it in every color of the rainbow—all in one pot. The approach offers a sustainable alternative to the chemical-heavy practices used in today's textile industry.
The industry relies on petroleum-based synthetic fibers and chemicals for dyeing, which include carcinogens, heavy metals, and endocrine disruptors.
These processes generate lots of greenhouse gases, degrade water quality, and contaminate the soil, so researchers wanted to find a better solution.
Known as bacterial cellulose, fibrous networks produced by microbes during fermentation have emerged as a potential alternative to petroleum-based fibers such as polyester and nylon.
Researchers set out to create fibers with vivid natural pigment by growing cellulose-spinning bacteria alongside color-producing microbes. The microbial colors stemmed from two molecular families: violaceins—which range from green to purple—and carotenoids, which span from red to yellow.
But the first experiments failed. The team learned that the cellulose-spinning bacteria Komagataeibacter xylinus and the color-producing bacteria Escherichia coli interfered with each other's growth.
Tweaking their recipe, the researchers found a way to make peace between the microbes. For the cool-toned violaceins, they developed a delayed co-culture approach by adding in the color-producing bacteria after the cellulose bacteria had already begun growing, allowing each to do its job without thwarting the other.
For the warm-toned carotenoids, the team devised a sequential culture method, where the cellulose is first harvested and purified, then soaked in the pigment-producing cultures. Together, the two strategies yielded a vibrant palette of bacterial cellulose sheets in purple, navy, blue, green, yellow, orange, and red.
To see if the colors could survive the rigors of daily life, the team tested the materials by washing, bleaching, and heating them, as well as soaking them in acid and alkali. Most held their hues, and the violacein-based textile even outperformed synthetic dye in washing tests.
Researchers are proposing an environmentally friendly direction toward sustainable textile dyeing while producing cellulose at the same time.
Accepting it is in your hands!
Scaling up production and competing with low-cost petroleum products are among the remaining hurdles. Real progress will also require a shift in the consumer mindset toward prioritizing sustainability over price.
But the bacteria-based fabrics are at least five years from store shelves.
One-pot production of colored bacterial cellulose, Trends in Biotechnology (2025). DOI: 10.1016/j.tibtech.2025.09.019
Could altering mosquitoes' internal clocks stop them from biting?
People who live in the tropical areas where Aedes aegypti mosquitoes reside have probably known for centuries, or even millennia—thanks to their itchy bites—that the mosquitoes hunt most often at dawn and dusk.
A new study offers scientific proof of that hunting behavior, and new insight into the biological mechanism behind it. It also offers a potential path to reducing bites and helping stop the spread of deadly, mosquito-borne disease.
The research focuses on Aedes aegypti, a mosquito that lives primarily in tropical areas and can carry diseases like dengue, chikungunya, and Zika. Although this species cannot survive harsh winters, in recent years it has established habitats in new areas as climate change enables it to thrive in more temperate climates.
There are many reasons people could, in theory, get more bites from these mosquitoes at dawn and dusk. It could be that they're just outside more, or that humans are more attractive to them at these times. This study focused on understanding if this biting pattern is also influenced by mosquitoes' own daily rhythm.
The research team began by video recording mosquitoes and watching how they responded to carbon dioxide, a signal mosquitoes use to locate humans. The team used machine learning to quantify the mosquitoes' movements. They found that the mosquitoes had a more persistent response to the same amount of carbon dioxide at dawn and dusk, exactly the times of day when many people (both mosquito researchers and not) have reported getting more bites.
This is consistent with the idea that they're actually better predators at that time of day.
Suspecting that what changes at these times of day is not the mosquitoes' ability to sense us but the persistence, or aggressiveness, of their response, the research team used CRISPR-Cas9, a gene-editing tool, to mutate a gene that controls mosquitoes' internal clocks.
They found that mutating the gene changed their behavioral timing, making the mosquitoes less persistently responsive to carbon dioxide in the morning. Normally, biting rates are high in the mornings, but when they disrupted the clock gene, the mutant mosquitoes were less successful at feeding during that time.
This is the first time we've found that there's an internal rhythm in the mosquito's behavior that could be driving these bites at dawn and dusk. Their internal clocks make them more persistent and predatory in their response to humans at these times of day.
Scientists could, in theory, find ways to lock mosquitoes in a state that prevents them from effectively seeking out humans. This could mean fewer uncomfortable bites and less disease.
But the mosquitoes in our place bite us all the time, not only during dawn and dusk.
When we try to protect ourselves at dawn and dusk, they evolve to bite us all through the day!
Hmmm! Researchers, are you listening?
Linhan Dong et al, Time-of-day modulation in mosquito response persistence to carbon dioxide is controlled by Pigment-Dispersing Factor, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2520826122
Can't sleep? Your microbiome may play a role
Sleep is an absolute necessity, but most of us aren't getting enough of it.
Sleep is required to consolidate memories, regenerate tissue and build up energy for emotional regulation and alertness during the day. Sleep deprivation disrupts these processes, increases stress and predisposes people to heart, psychiatric and neurological problems.
Factors like stress, jet lag, work, diet and screen time all interfere with our ability to get the 7–9 hours of solid slumber needed to recover from the physiological pummeling of the day.
But there's more to this story than travel and cellphones. How much sleep we get, and its quality, may be tied to the microbial variety our bodies harbour!
We may have some control over how our sleep plays out, but many pieces of the puzzle—particularly those at the cellular level—are out of our hands. Some of them, in fact, are in our guts. Studies in animal models and humans suggest there is a bidirectional relationship between the composition of the gut microbiota and sleep quality and duration. In mice, altering the gut microbial community, such as through antibiotic treatment, can lead to poorer, more fragmented sleep.
Indeed, a more diverse microbiome is generally correlated with increased sleep efficiency and total sleep time. Better sleep is also associated with a higher abundance of bacteria with health-promoting metabolic functions, like production of short-chain fatty acids (SCFAs), whereas conditions like insomnia are linked with lower abundances of these microbes. Whether insomnia causes these alterations, or vice versa, is still unclear—likely, they feed into one another.
Like many of the chemicals and processes powering our bodies, the composition of the microbiome naturally fluctuates throughout the day. These changes are linked to host circadian processes, and alterations in sleep (e.g., jet lag) can disrupt microbiota rhythms, resulting in important health implications.
"[What] we find in stress-related disorders, [and] in many mental health disorders in general, is that they're often associated with disordered sleep and dysregulation of sleep and circadian rhythms.
Researchers recently showed in animal models that daily fluctuations in stress pathways closely linked to sleep (e.g., cortisol levels) are modulated by the microbiome.
They found that there are circuits in the brain that are sensitive to microbial signals. Now, they are seeking to identify the mechanisms that underlie those sensitivities.
https://asm.org/articles/2025/november/cant-sleep-your-microbiome-m...
The internal clock of immune cells: Is the immune system younger in the morning?
As the immune system ages, it reacts more slowly to pathogens, vaccines become less effective, and the risk of cancer increases. At the same time, the immune system follows a 24-hour rhythm, as the number and activity of many immune cells fluctuate throughout the day.
Researchers have now investigated whether this daily rhythm influences the aging of the immune system and whether the immune system behaves "younger" or "older" at certain times of the day.
For the new study published in Frontiers in Aging, researchers took blood samples from participants in the morning, at noon, and in the evening. Using the so-called "IMMune Age indeX (IMMAX)," they determined each individual's immune age and analyzed how it changed throughout the day.
The IMMAX is a biomarker determined by the ratio of certain immune cells in the blood. As part of biological age, it correlates with actual chronological age.
Individual immune cells that are relevant for calculating the IMMAX fluctuate throughout the day. For example, in the morning, researchers observed an increased frequency of natural killer cells (NK cells)—key protective cells that defend the body against infections and cancer. In contrast, other immune cell types showed the opposite pattern.
The circadian rhythm regulates immune system activity. Hormones, body temperature, nerve signals, and messenger molecules tell immune cells when to move or become active. This leads to daily fluctuations in the number of immune cells in the blood. However, these fluctuations do not appear to influence immune age over the course of the day, as the researchers discovered. Despite measurable daily differences, the IMMAX remained largely stable, as individual immune cell types apparently balance each other out.
The IMMAX is a biomarker for immune age that is largely independent of the time of day. Nevertheless, slight differences were observed depending on a person's chronotype—that is, whether they are more active early in the day ("larks") or late in the day ("owls"). For early risers, the IMMAX value decreased slightly from morning to noon. This suggests that the timing of blood sampling relative to waking up matters.
"When we wake up in the morning and become active, this apparently influences the movement of our immune cells and thus has a slight effect on the IMMAX value.
In large cohort studies, the timing of sampling is less critical, as fluctuations are balanced out.
Sina Trebing et al, Influence of circadian rhythm on the determination of the IMMune age indeX (IMMAX), Frontiers in Aging (2025). DOI: 10.3389/fragi.2025.1716985
The study findings can help develop early-stage cancer prevention strategies.
Researchers have discovered why a specific mutation leads to tumorous growth in human lungs. The same mutation, however, fails to develop a tumor in human breasts, reducing the chances of a cancer.
They focused on the human epithelial tissues, the thin sheets of cells that line our vital organs. That’s because eighty percent of all human cancers begin in epithelial tissues.
Epithelial cells act as the body’s first barrier and are constantly exposed to stress, damage, and mutations. In fact, both the lung and the breast epithelial tissues display ‘epithelial defense against cancer’ — the epithelium’s natural defense against tumorous cells. That’s why the group tested a specific cancerous mutation that occurs in both breast and lung epithelial tissues.
Researchers have long recognized that breast and lung epithelial tissues exhibit distinct differences. Breast epithelium is relatively stable and compact — the cells are tightly packed with stronger junctions. On the other hand, lung epithelium expands and relaxes with each breath, which is why its cells are more flexible, elongated, and loosely connected.
The group, through a combination of live imaging and computer models, has determined how these differences directly lead to the lung epithelium being more prone to growing tumors compared to the breast.
Once a cancerous mutation sets in the breast tissue, single mutant cells are pushed out, and groups of cells get stuck together. In lung tissue, the same mutant cells spread easily and make finger-like shapes.
The team found that the “tug-of-war” forces between normal and mutant cells decide whether cancerous cells are removed, trapped, or allowed to grow. Thus, cell mechanics help explain tissue-specific cancer risk. The surprise was how directly those mechanics determine whether mutant cells are restrained or allowed to spread.
A belt forms around the tumor in the breast epithelia, raises tension, jams the mutant cluster, and often forces out single mutants. Thus, this belt restrains the tumor.
In the lung epithelia, the cells are more elongated, motile, and weakly connected. So, no such belt forms, allowing the mutant cells to survive, grow longer, and spread into the tissue.
The group’s work demonstrates a pathway to resisting the growth of mutations into tumors, and eventually, into cancers.
https://elifesciences.org/reviewed-preprints/106893v1
How much protein you really need
Most official guidelines recommend a bare minimum of close to one gram of protein per kilogram of body weight per day. But many scientists object to suggestions flying around social media that people need m.... Here’s what experts advise:
https://www.nature.com/articles/d41586-025-03632-1?utm_source=Live+...
Our solar system is moving faster than expected
How fast and in which direction is our solar system moving through the universe? This seemingly simple question is one of the key tests of our cosmological understanding. A research team of astrophysicists has now found new answers, ones that challenge the established standard model of cosmology.
The study's findings have just been published in the journal Physical Review Letters.
Their analysis shows that the solar system is moving more than three times faster than current models predict. This result clearly contradicts expectations based on standard cosmology and forces us to reconsider our previous assumptions.
To determine the motion of the solar system, the team analyzed the distribution of so-called radio galaxies, distant galaxies that emit particularly strong radio waves, a form of electromagnetic radiation with very long wavelengths similar to those used for radio signals. Because radio waves can penetrate dust and gas that obscure visible light, radio telescopes can observe galaxies invisible to optical instruments.
As the solar system moves through the universe, this motion produces a subtle "headwind": slightly more radio galaxies appear in the direction of travel. The difference is tiny and can only be detected with extremely sensitive measurements.
Using data from the LOFAR (Low Frequency Array) telescope, a Europe-wide radio telescope network, combined with data from two additional radio observatories, the researchers were able to make an especially precise count of such radio galaxies for the first time. They applied a new statistical method that accounts for the fact that many radio galaxies consist of multiple components. This improved analysis yielded larger but also more realistic measurement uncertainties.
Despite this, the combination of data from all three radio telescopes revealed a deviation exceeding five sigma, the threshold conventionally regarded in physics as evidence of a significant result.
The measurement shows an anisotropy ("dipole") in the distribution of radio galaxies that is 3.7 times stronger than what the standard model of the universe predicts. This model describes the origin and evolution of the cosmos since the Big Bang and assumes a largely uniform distribution of matter.
If our solar system is indeed moving this fast, we need to question fundamental assumptions about the large-scale structure of the universe, say the cosmologists.
Alternatively, the distribution of radio galaxies itself may be less uniform than we have thought. In either case, the current models are being put to the test.
The new results confirm earlier observations in which researchers studied quasars, the extremely bright centers of distant galaxies where supermassive black holes consume matter and emit enormous amounts of energy. The same unusual effect appeared in these infrared data, suggesting that it is not a measurement error but a genuine feature of the universe.
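For a sense of scale, the expected size of this "headwind" can be estimated from the standard kinematic-dipole formula of Ellis and Baldwin, in which the dipole amplitude is d = [2 + x(1 + α)] v/c, with x the slope of the radio source counts and α the typical spectral index. The short sketch below uses commonly assumed illustrative values (not the ones fitted in the new analysis) to show why a dipole 3.7 times the expectation is hard to reconcile with the standard picture.

# Illustrative estimate of the kinematic radio dipole (Ellis & Baldwin formula).
# Parameter values are typical textbook choices, not those of the new analysis.
C_KM_S = 299_792.458  # speed of light in km/s

def kinematic_dipole(v_km_s, x=1.0, alpha=0.75):
    """Expected dipole amplitude d = [2 + x(1 + alpha)] * v / c."""
    return (2.0 + x * (1.0 + alpha)) * v_km_s / C_KM_S

expected = kinematic_dipole(370.0)   # solar velocity inferred from the CMB
observed = 3.7 * expected            # the reported excess over the prediction
print(f"expected dipole ~ {expected:.4f}, observed ~ {observed:.4f}")

Because the amplitude scales linearly with velocity, taking the measured dipole at face value would imply a solar system speed well above 1,000 km/s, unless the radio sky itself is less uniform than assumed.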
Lukas Böhme et al, Overdispersed Radio Source Counts and Excess Radio Dipole Detection, Physical Review Letters (2025). DOI: 10.1103/6z32-3zf4
Chinese team finds a fern that makes rare earth elements
Scientists have discovered a fern from South China that naturally forms tiny crystals containing rare earth elements (REEs). The discovery opens the door to a promising new approach to "green mining" these metals, known as phytomining.
REEs are a group of 17 elements, all metals with similar properties that are essential for everything from wind turbines and electric car batteries to smartphones and medical scanners.
Extracting them is expensive and normally involves large-scale conventional mining operations that rely on harsh chemicals and cause significant pollution and land damage. That's why researchers are exploring cleaner, sustainable plant-based alternatives to collect REEs.
According to a paper published in the journal Environmental Science & Technology, the researchers studied the Blechnum orientale fern, which had been collected from REE-rich areas in South China. It was already known to be a hyperaccumulator, which is a plant that can grow in soil and water with high concentrations of metals and absorb them through its roots. But what they didn't know was the chemical form the REEs take when inside the plant. This knowledge is vital for designing the most efficient extraction process.
Using high-powered imaging technology and chemical analysis, the team discovered that the fern was forming nanoscale crystals of the REE-rich mineral monazite within its tissues, particularly in the cell walls and spaces between cells. Monazite is one of the main sources of rare earth elements in geological ore deposits worldwide.
The study authors also observed the crystal form, noting that it grows in a highly complex self-organizing pattern of tiny branches, likening it to a microscopic "chemical garden." This is the first time scientists have seen a living plant create a rare earth element crystal.
While we might not be going out to garden for REEs in the immediate future, the research is further proof that phytomining is feasible. The discovery, even with just one fern, strengthens the case that plants could one day provide a valuable, cheaper, and less destructive method for extracting much-needed rare earth elements.
Liuqing He et al, Discovery and Implications of a Nanoscale Rare Earth Mineral in a Hyperaccumulator Plant, Environmental Science & Technology (2025). DOI: 10.1021/acs.est.5c09617
Magnetic nanoparticles that successfully navigate complex blood vessels may be ready for clinical trials
Magnetic microrobots with iron oxide and tantalum nanoparticles can be precisely guided through complex blood vessels using modular electromagnetic navigation, delivering drugs directly to target sites such as thrombi. These microrobots achieve over 95% delivery accuracy in realistic vessel models and animal tests, supporting readiness for clinical trials and potential applications beyond vascular occlusions.
Every year, 12 million people worldwide suffer a stroke; many die or are permanently impaired. Currently, drugs are administered to dissolve the thrombus that blocks the blood vessel. These drugs spread throughout the entire body, meaning a high dose must be administered to ensure that the necessary amount reaches the thrombus. This can cause serious side effects, such as internal bleeding.
Since medicines are often only needed in specific areas of the body, medical research has long been searching for a way to use microrobots to deliver pharmaceuticals to where they need to be: in the case of a stroke, directly to the stroke-related thrombus.
Now, a team of researchers at ETH Zurich has made major breakthroughs on several levels. They have published their findings in Science.
The microrobot the researchers use comprises a proprietary spherical capsule made of a soluble gel shell that they can control with magnets and guide through the body to its destination. Iron oxide nanoparticles in the capsule provide the magnetic properties.
Because the vessels in the human brain are so small, there is a limit to how big the capsule can be. The technical challenge is to ensure that a capsule this small also has sufficient magnetic properties.
The microrobot also needs a contrast agent to enable doctors to track via X-ray how it is moving through the vessels. The researchers focused on tantalum nanoparticles, which are commonly used in medicine but are more challenging to control due to their greater density and weight.
Combining magnetic functionality, imaging visibility and precise control in a single microrobot required perfect synergy between materials science and robotics engineering, which has taken the researchers many years to successfully achieve.
They developed precision iron oxide nanoparticles that enable this delicate balancing act.
The microrobots also contain the active ingredient they need to deliver. The researchers successfully loaded the microrobots with common drugs for a variety of applications—in this case, a thrombus-dissolving agent, an antibiotic or tumor medication.
These drugs were released by a high-frequency magnetic field that heats the magnetic nanoparticles, dissolving the gel shell and the microrobot.
The researchers used a two-step strategy to bring the microrobot close to its target: first, they injected the microrobot into the blood or cerebrospinal fluid via a catheter. They went on to use an electromagnetic navigation system to guide the magnetic microrobot to the target location.
The catheter's design is based on a commercially available model with an internal guidewire connected to a flexible polymer gripper. When pushed beyond the external guide, the polymer gripper opens and releases the microrobot.
To precisely steer the microrobots, the researchers developed a modular electromagnetic navigation system suitable for use in the operating theater.
Fabian C. Landers et al, Clinically ready magnetic microrobots for targeted therapies, Science (2025). DOI: 10.1126/science.adx1708. www.science.org/doi/10.1126/science.adx1708
The rust that could reveal alien life
Iron rusts. On Earth, this common chemical reaction often signals the presence of something far more interesting than corroding metal: microorganisms that make a living by manipulating iron atoms. Now researchers argue these microbial rust makers could provide some of the most promising biosignatures for detecting life on Mars and the icy moons of the outer solar system.
Researchers now have compiled a comprehensive review of how iron metabolizing bacteria leave distinctive fingerprints in rocks and minerals, and why these signatures matter for astrobiology. The research, published in Earth-Science Reviews, bridges decades of terrestrial microbiology with the practical challenges of searching for life beyond Earth.
Iron ranks among the most abundant elements in the solar system, and Earth's microorganisms have evolved remarkably diverse ways to exploit it. Some bacteria oxidize ferrous iron to generate energy, essentially breathing iron the way humans breathe oxygen. Others reduce ferric iron, using it as the final electron acceptor in their metabolism. These processes don't happen in isolation. Iron metabolizing microbes link their element of choice to the carbon and nitrogen cycles, coupling iron transformations to carbon dioxide fixation, organic matter degradation, and even photosynthesis.
The byproducts of these microbial reactions create what researchers call biogenic iron oxyhydroxide minerals. These aren't subtle traces. Organisms that thrive in neutral pH environments and oxidize iron produce distinctive structures such as twisted stalks, tubular sheaths, and filamentous networks of iron minerals mixed with organic compounds. The minerals precipitate as the bacteria work, forming rusty deposits that can persist in the geological record for billions of years.
This durability makes iron biosignatures particularly attractive for planetary exploration. Unlike fragile organic molecules that degrade under radiation and harsh chemistry, mineralized iron structures can survive. Researchers have identified these biosignatures in environments ranging from hydrothermal vents on the ocean floor to terrestrial soils, from acidic mine drainage to neutral freshwater springs. Wherever liquid water contacts iron-bearing rocks, iron-metabolizing bacteria typically establish themselves.
Mars presents an obvious target. The planet's distinctive red color comes from oxidized iron in surface dust and rocks. Ancient Mars hosted liquid water, and spacecraft have documented iron-rich minerals throughout the geological record. If microbial life ever evolved on Mars, iron metabolism would have provided an accessible energy source. The minerals these hypothetical organisms produced could still exist, locked in ancient sediments awaiting discovery by rovers equipped with the right instruments.
The icy moons Europa and Enceladus offer different but equally compelling possibilities. Both harbor subsurface oceans beneath frozen shells. Europa's ocean likely contacts a rocky seafloor, where water and rock interactions would release dissolved iron. Enceladus actively vents ocean material through ice geysers at its south pole. Mission concepts propose sampling these plumes or landing near the vents, analyzing ejected particles for iron minerals that might betray biological origins.
The review emphasizes that recognizing biogenic iron minerals requires understanding how they form, what textures they create, and how they differ from abiotic iron precipitates. Mission planners must equip spacecraft with instruments capable of detecting not just iron minerals generally, but the specific morphological and chemical signatures that distinguish biology from geology.
The stakes are high. Finding iron biosignatures on another world wouldn't just confirm life exists elsewhere, it would reveal that the same fundamental chemistry supporting Earth's deep biosphere operates throughout the solar system.
Laura I. Tenelanda-Osorio et al, Terrestrial iron biosignatures and their potential in solar system exploration for astrobiology, Earth-Science Reviews (2025). DOI: 10.1016/j.earscirev.2025.105318
How chromosomes separate accurately: Molecular 'scissors' caught in action
Cell division is a process of remarkable precision: during each cycle, the genetic material must be evenly distributed between the two daughter cells. To achieve this, duplicated chromosomes, known as sister chromatids, are temporarily linked by cohesin—a ring-shaped protein complex that holds them together until separation.
Researchers have uncovered the mechanism by which separase—the molecular "scissors" responsible for this cleavage—recognizes and cuts cohesin.
Their findings, published in Science Advances, shed new light on chromosome segregation errors that can lead to certain forms of cancer.
Before a cell divides, its chromosomes are duplicated. These identical copies, called sister chromatids, are held together by cohesin—a ring-like structure composed of several proteins that prevents premature separation.
When the cell is ready to divide, separase, a specialized enzyme, cleaves one of the cohesin subunits, the protein SCC1, allowing the chromatids to separate and the genetic material to be evenly distributed between the two daughter cells. Any malfunction in this process can compromise genome stability, potentially resulting in severe diseases, including cancer.
Using cryo-electron microscopy (cryo-EM)—a cutting-edge technique that enables biological samples to be visualized in their native state at near-atomic resolution—the team captured the interaction between separase and SCC1 and identified the precise cleavage sites on the protein.
Biochemical and structural analyses also revealed multiple "docking sites" on the surface of separase, ensuring high-affinity binding of SCC1 to separase prior to cleavage. These contact points include five phosphate-binding sites that recognize phosphorylated residues on SCC1.
Affinity experiments further showed that these phosphate–separase interactions stabilize the complex and accelerate SCC1 cleavage, ensuring fast and precise separation of chromosomes.
The work provides an extensive functional and structural framework to understand how separase is regulated and how it recognizes its substrates.
Jun Yu et al, Substrate recognition by human separase, Science Advances (2025). DOI: 10.1126/sciadv.ady9807
Testosterone in body odor linked to perceptions of social status
As humans, we are constantly navigating social status, using subconscious strategies to assert either our dominance or prestige.
We often use voice or body language to communicate this. Imagine a politician with a slow, booming voice, expanding their chest and extending their arms, quickly asserting authority over their audience.
We also use our sense of smell, according to new research from the University of Victoria (UVic), published in Evolution and Human Behavior.
This study examines the role of body odor in people's perceptions of others' social status.
Researchers examined whether scent cues associated with levels of circulating testosterone impact people's social status judgments. They found that both male and female participants in their study perceived men with higher levels of testosterone to be more dominant than men with lower testosterone levels.
Chemical signaling is the most widespread form of communication on Earth. Many animals will use scent to express and understand social status within their group. Mice, for example, scent-mark their territory to assert their dominance.
Previous research shows that humans use two different strategies to assert and maintain social status: dominance and prestige. Dominance is coercive, using tactics to force compliance. Prestige, on the other hand, involves showing valuable skills and traits that lead others to show deference voluntarily.
Research also reveals that scent plays an important role in human communication—of fear, sickness, safety, attraction, and personality traits such as dominance and neuroticism.
This is the first study to directly examine whether humans use scent cues related to circulating testosterone levels in the formation of social status judgments.
No significant relationship was found in the study between testosterone levels and perceived prestige. Perceptions of dominance, on the other hand, were associated with higher testosterone levels.
This study contributes to a growing body of work seeking to understand how social communication occurs through scent.
However, the findings should be interpreted with caution. The study involved a relatively small and uniform sample, and replication with larger and more diverse groups will be important to confirm whether these patterns hold.
Marlise K. Hofer et al, The role of testosterone in odor-based perceptions of social status, Evolution and Human Behavior (2025). DOI: 10.1016/j.evolhumbehav.2025.106752
Hitler's DNA reveals possible genetic disorder tied to sexual and social behaviour
Adolf Hitler most likely suffered from the genetic condition Kallmann Syndrome that can manifest itself in undescended testicles and a micropenis, researchers said this week, following DNA testing of the Nazi dictator's blood.
The new research also quashes the suggestion that Hitler had Jewish ancestry.
Popular World War II songs often mocked Hitler's anatomy but lacked any scientific basis.
The findings by an international team of scientists and historians now appear to confirm longstanding suspicions around his sexual development.
No one has ever really been able to explain why Hitler was so uncomfortable around women throughout his life, or why he probably never entered into intimate relations with women.
But now that we know he had Kallmann Syndrome, this could be the answer we've been looking for, say the researchers.
The research findings are featured in a new documentary, "Hitler's DNA: Blueprint of a Dictator."
The testing found a "high likelihood" that Hitler had Kallmann Syndrome and "very high" scores—in the top one percent—for a predisposition to autism, schizophrenia and bipolar disorder.
The research team stressed that such conditions, however, could not explain or excuse Hitler's warmongering or racist policies.
The testing was made possible after researchers obtained a sample of Hitler's blood from a piece of material taken from the sofa on which he shot himself.
The DNA results additionally rule out the possibility that Hitler had a Jewish grandfather via his grandmother.
Source: AFP
Randomized trials show no evidence of non-specific vaccine effects
Researchers from the Bandim Health Project, led by Christine Stabell Benn and Peter Aaby, have conducted randomized trials involving thousands of children in Guinea-Bissau and Denmark to demonstrate so-called non-specific vaccine effects—that is, whether vaccines also protect against diseases other than the one they are designed to prevent.
A new comprehensive Danish review now shows that the trials have been unable to demonstrate non-specific effects for the widely used vaccinations against measles, tuberculosis, diphtheria, tetanus, and whooping cough. The study has just been published in the journal Vaccine.
It is concerning that such a prominent research group has conducted so many randomized trials over such a long period without finding real results. Randomized trials are normally considered the gold standard in medical research, so if they do not show anything, one should be very cautious about presenting it as convincing evidence.
The new study is the first to systematically analyze all of Benn and Aaby's randomized trials. While others have previously criticized individual studies, the researchers behind the new review examined the full body of work.
The review's authors found indications that Benn and Aaby systematically selected and highlighted results that supported their theories, while downplaying the fact that the trials did not confirm the primary hypotheses they were designed to test. When you look at the overall picture, there are almost no real findings left, they conclude.
Benn and Aaby have argued that their results meant it was time to change the global approach to vaccination—that all new vaccines should routinely be assessed for non-specific effects and that vaccination programs should be revised worldwide.
The new review is based on 13 randomized trials presented in 26 articles containing more than 1,400 separate statistical analyses. Only one of the 13 randomized trials demonstrated the effect it was designed to detect—and that trial was stopped early and was considered unsuccessful by the researchers themselves.
The review showed that only about 7% of the many hypotheses tested by the researchers could be expected to be correct. Notably, this did not apply to the researchers' own primary hypotheses—these were not supported by the corrected results.
In 23 out of 25 articles, the researchers highlighted secondary findings as support for their theories, but in 22 of these cases the evidence disappeared after proper statistical handling. Overall, the researchers' interpretation did not take into account how many analyses they had conducted, and they did not focus on the main outcomes of the trials.
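The multiple-comparisons point is easy to illustrate: with on the order of 1,400 analyses and a conventional 5% significance threshold, dozens of nominally "significant" findings are expected even if no effect is real, which is why corrections such as Bonferroni matter. A minimal sketch (the test count comes from the review's description; the simulation itself is purely illustrative):

# Illustrative simulation: how many "significant" results appear by chance
# when many hypotheses are tested and none of the underlying effects are real.
import numpy as np

rng = np.random.default_rng(0)
n_tests, alpha = 1400, 0.05
p_values = rng.uniform(size=n_tests)       # p-values under the null hypothesis

naive_hits = int(np.sum(p_values < alpha))                 # roughly 70 expected
corrected_hits = int(np.sum(p_values < alpha / n_tests))   # Bonferroni threshold
print(f"uncorrected 'significant' results: {naive_hits}")
print(f"after Bonferroni correction: {corrected_hits}")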
The researchers stress that the purpose was not to determine whether non-specific vaccine effects exist, but to examine Benn and Aaby's research practices.
"We hope that others in the field will now re-evaluate the evidence—what do we actually know about non-specific vaccine effects? Although Benn and Aaby have contributed about one-third of all research in the area, others have also studied the question, and this should be included to form a complete picture.
Henrik Støvring et al, What is actually the emerging evidence about non-specific vaccine effects in randomized trials from the Bandim Health Project?, Vaccine (2025). DOI: 10.1016/j.vaccine.2025.127937
What puts the ‘cable’ in cable bacteria
The ‘wires’ that cable bacteria use to conduct electricity seem to be made of repeating units of a compound that contains nickel and sulfur. Researchers found that these units make up ‘nanoribbons’, which are woven together like a plait to form the larger wires. The work has yet to be peer reviewed, but if confirmed, the finding could be the first known example of a biologically produced metal–organic framework. “If it holds true, this is a major step in our understanding of what cable bacteria can accomplish,” says electromicrobiologist Lars Peter Nielsen, who co-discovered the microorganisms in 2009.
https://www.biorxiv.org/content/10.1101/2025.10.10.681601v1
https://www.science.org/content/article/metal-scaffolds-turn-bacter...
AI at the speed of light just became a possibility
Researchers have demonstrated single-shot tensor computing at the speed of light, a remarkable step towards next-generation artificial general intelligence hardware powered by optical computation rather than electronics.
Tensor operations are the kind of arithmetic that forms the backbone of nearly all modern technologies, especially artificial intelligence, yet they extend beyond the simple math we're familiar with. Imagine the mathematics behind rotating, slicing, or rearranging a Rubik's cube along multiple dimensions. While humans and classical computers must perform these operations step by step, light can do them all at once.
Today, every task in AI, from image recognition to natural language processing, relies on tensor operations. However, the explosion of data has pushed conventional digital computing platforms, such as GPUs, to their limits in terms of speed, scalability and energy consumption.
Motivated by this pressing problem, an international research collaboration has unlocked a new approach that performs complex tensor computations using a single propagation of light. The result is single-shot tensor computing, achieved at the speed of light itself.
The new method performs the same kinds of operations that today's GPUs handle, like convolutions and attention layers, but does them all at the speed of light.
To achieve this, the researchers encoded digital data into the amplitude and phase of light waves, effectively turning numbers into physical properties of the optical field. When these light fields interact and combine, they naturally carry out mathematical operations such as matrix and tensor multiplications, which form the core of deep learning algorithms.
By introducing multiple wavelengths of light, the team extended this approach to handle even higher-order tensor operations.
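Conceptually, the "single shot" works because propagation through a linear optical system is itself a matrix operation: encode the input numbers in the complex amplitudes (amplitude plus phase) of the optical modes, and the field reaching the detectors is that input multiplied by the system's transmission matrix, with no clocked steps in between. The toy simulation below makes that point numerically; the mode counts, matrix and encoding are illustrative stand-ins, not the actual scheme reported in the paper.

# Toy model: a passive linear optical system applies its transmission matrix to
# the input field in a single pass, i.e. one propagation = one matrix product.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 4

# Input data encoded in the amplitude and phase of the input modes.
x = rng.normal(size=n_in) + 1j * rng.normal(size=n_in)

# Transmission matrix of the optical system (set, e.g., by a spatial modulator).
T = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))

y_optical = T @ x  # what arrives at the detectors after one propagation
y_digital = np.array([sum(T[i, j] * x[j] for j in range(n_in)) for i in range(n_out)])
print(np.allclose(y_optical, y_digital))  # True: the same matrix-vector product

Running the same trick in parallel at several wavelengths is what lifts the operation from matrices to higher-order tensors.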
Another key advantage of this method is its simplicity. The optical operations occur passively as the light propagates, so no active control or electronic switching is needed during computation.
Direct tensor processing with coherent light, Nature Photonics (2025). DOI: 10.1038/s41566-025-01799-7.
Direct link between peak air pollution and cardiac risk revealed
The risk of suffering cardiac arrest may increase on days recording high levels of air pollution in some parts of the world. This emerges from a study conducted by researchers and published in the journal Global Challenges.
Researchers analyzed 37,613 cases of out-of-hospital cardiac arrest in Lombardy between 2016 and 2019 by assessing, for each episode, the daily concentrations of various pollutants (PM₂.₅, PM₁₀, NO₂, O₃ and CO) obtained from satellite data of the European Copernicus program (ESA). The study used advanced spatio-temporal statistical models to identify the relationship between pollution peaks and increased risk of cardiac events.
The strongest association was with nitrogen dioxide (NO₂): for every 10-microgram-per-cubic-meter increase, the risk of cardiac arrest rose by 7% over the following 96 hours.
Particulate matter also played a role: PM₂.₅ and PM₁₀ were associated with 3% and 2.5% increases in risk, respectively, on the same day of exposure.
The effect is more pronounced in urban areas, but significant associations were also observed in rural towns. The risk rises particularly during the warm months, suggesting a possible interaction between heat and pollutants. The association was also observed at levels below legal limits, suggesting that there is no safe exposure threshold.
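To put the NO₂ figure in perspective: if the reported 7% increase per 10 micrograms per cubic meter is treated as multiplicative, a common working assumption for this kind of case-crossover estimate and used here only for illustration, the relative risk for a pollution spike of Δ micrograms per cubic meter scales roughly as

\mathrm{RR}(\Delta) \approx 1.07^{\Delta/10}, \qquad \text{e.g. } \mathrm{RR}(30\ \mu\mathrm{g/m^3}) \approx 1.07^{3} \approx 1.23,

so a 30-microgram spike would correspond to roughly a 23% higher risk of out-of-hospital cardiac arrest over the following 96 hours, under that assumption.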
The link between air quality and out-of-hospital cardiac arrest is a wake-up call for local health systems.
Amruta Umakant Mahakalkar et al, Short‐Term Effect of Air Pollution on Out of Hospital Cardiac Arrest (OHCA) in Lombardy—A Case‐Crossover Spatiotemporal Study, Global Challenges (2025). DOI: 10.1002/gch2.202500241
Neanderthals May Never Have Truly Gone Extinct
Neanderthals may have never truly gone extinct, according to new research – at least not in the genetic sense. A new mathematical model has explored a fascinating scenario in which Neanderthals gradually disappeared not through "true extinction" but through genetic absorption into a more prolific species: us.
According to the analysis, the long and drawn-out 'love affair' between Homo sapiens and Neanderthals could have led to almost complete genetic absorption within 10,000–30,000 years.
The model is simple and not region-specific, but it provides a "robust explanation" for the observed Neanderthal demise, say the researchers.
Once, the idea that Neanderthals and Homo sapiens interbred was a radical notion, and yet now, modern genome studies and archaeological evidence have provided strong evidence that our two lineages were hooking up and reproducing right across Eurasia for tens of thousands of years.
Today, people of non-African ancestry have inherited about 1 to 4 percent of their DNA from Neanderthals.
No one knows why Neanderthals disappeared from the face of our planet roughly 40,000 years ago, but experts tend to agree that there are probably numerous factors that played a role, like environmental changes, loss of genetic diversity, or competition with Homo sapiens.
Now researchers present a model that does not rule out these other explanations.
It does suggest that genetic drift played a strong role – even with the researchers assuming the Neanderthal genes our species 'absorbed' had no survival benefit.
If the model were to include the potential advantages of some Neanderthal genes to the larger Homo sapiens population, then mathematical support for genetic dilution may become even stronger.
Like all models, this new one is based on imperfect assumptions. It uses the birth rates of modern hunter-gatherer tribes to predict how quickly small Neanderthal tribes would be engulfed by a much larger human population, given how frequently it seems we were interbreeding.
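The logic of genetic absorption can be illustrated with a deliberately simple, selection-free toy model: a small Neanderthal population that interbreeds at a modest rate with a much larger Homo sapiens population each generation. The sketch below is not the study's model; the mixing rate, generation time and threshold are placeholder values chosen only to show how quickly a minority gene pool is diluted.

# Toy neutral-dilution model: a small population interbreeding with a much
# larger one loses its distinct ancestry without any selective disadvantage.
# All parameter values are illustrative placeholders, not the study's inputs.
def years_to_absorption(mixing_per_gen=0.005, gen_years=29, threshold=0.05,
                        horizon_years=30_000):
    """Years until average 'Neanderthal' ancestry falls below the threshold."""
    ancestry, years = 1.0, 0
    while years < horizon_years:
        ancestry *= (1.0 - mixing_per_gen)  # a small fraction of matings each
        years += gen_years                  # generation are with the larger group
        if ancestry < threshold:
            return years, ancestry
    return years, ancestry

print(years_to_absorption())  # with these placeholders: roughly 17,000 years

Even this crude version, with these particular placeholders, lands inside the 10,000–30,000-year window reported for the published model; the broader point is that dilution alone, with no selection, is fast on these timescales.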
The results are consistent with recent archaeological discoveries and align with a body of evidence that suggests the decline of Neanderthals in Europe was gradual rather than sudden.
Homo sapiens seem to have started migrating out of Africa much earlier than scientists previously thought, and they arrived in Europe in several influxes, possibly starting more than 200,000 years ago.
As each wave of migration came crashing into the region, it engulfed local Neanderthal communities, diluting their genes, like sand pulled into the sea.
Today, some scientists argue that there is more to unite Homo sapiens and Neanderthals than there is to differentiate us. Our lineages, they say, should not be regarded as two separate species, but rather as distinct populations belonging to a "common human species."
Neanderthals were surprisingly adaptable and intelligent. They made intricate tools, created cave art, and used fire – and when it came to communication, they were probably capable of far more than simple grunting.
Neanderthal populations and cultures may no longer exist, but their genetic legacy lives on inside of us.
These are not just our cousins; they are also our ancestors.
https://www.nature.com/articles/s41598-025-22376-6
Parasitic ant tricks workers into killing their queen, then takes the throne
Scientists document a new form of host manipulation where an invading, parasitic ant queen "tricks" ant workers into killing their queen mother. The invading ant integrates herself into the nest by pretending to be a member of the colony, then sprays the host queen with fluid that causes her daughters to turn against her. The parasitic queen then usurps the throne, having the workers serve her instead as the new queen regent. This work was published in Current Biology on November 17.
Matricide, a behavior where offspring kill or eat their mother, is a rarely seen phenomenon in nature. Despite appearing maladaptive at first glance, it does offer advantages by either nourishing the young and giving the mother indirect benefits through increased offspring survival or allowing the young to invest in offspring of their own.
Up until now, only two types of matricide have been recorded in which either the mother or offspring benefit. In this novel matricide that researchers reported, neither profit; only the parasitic third party.
The ants Lasius orientalis and umbratus, commonly referred to as the "bad-smell ants" in Japanese, are so-called "social parasites" that execute a covert operation to infiltrate and eventually take over the colony of their unsuspecting ant queen hosts, Lasius flavus and japonicus, respectively. The parasitic queen takes advantage of ants' reliance on smell to identify both friends and foes to dupe unsuspecting worker ants into believing she is part of the family.
Ants live in the world of odors.
Before infiltrating the nest, the parasitic queen stealthily acquires the colony's odor on her body from workers walking outside so that she is not recognized as the enemy.
Once these "bad-smell" ants have been accepted by the colony's workers and locate the queen, the parasitic ant douses her with a foul-smelling chemical researchers presume to be formic acid—a chemical unique to some ants and stored in a specialized organ.
The parasitic ants exploit that ability to recognize odors by spraying formic acid to disguise the queen's normal scent with a repugnant one. This causes the daughters, who normally protected their queen mother, to attack her as an enemy.
Then, like fleeing the scene of the crime, the parasitic queen immediately (but temporarily) retreats. "She knows the odor of formic acid is very dangerous, because if host workers perceive the odor they would immediately attack her as well."
She will periodically return and spray the queen multiple times until the workers have killed and disposed of their mother queen. Then, once the dust has settled, the parasitic ant queen returns and begins laying eggs of her own. With this newly accepted parasitic queen in the colony and no other queen to compete with, the matricidal workers begin taking care of her and her offspring instead.
Host daughter ants manipulated into unwitting matricide by a social parasitic queen, Current Biology (2025). DOI: 10.1016/j.cub.2025.09.037. www.cell.com/current-biology/f … 0960-9822(25)01207-2
Lead-free alternative discovered for essential electronics component
Ferroelectric materials are used in infrared cameras, medical ultrasounds, computer memory and actuators that convert electrical signals into mechanical motion and vice versa. Most of these essential materials, however, contain lead and can therefore be toxic.
Therefore, for the last 10 years, there has been a huge initiative all over the world to find ferroelectric materials that do not contain lead.
The atoms in a ferroelectric material can have more than one crystalline structure. Where two crystalline structures meet is called a phase boundary, and the properties that make ferroelectric materials useful are strongest at these boundaries.
Using chemical processes, scientists have manipulated the phase boundaries of lead-based ferroelectric materials to create higher performance and smaller devices. Chemically tuning the phase boundaries of lead-free ferroelectric material, however, has been challenging.
New research found a way to enhance lead-free ferroelectrics using strain, or mechanical force, rather than a chemical process. The discovery could produce lead-free ferroelectric components, opening new possibilities for devices and sensors that could be implanted in humans.
Ferroelectric materials, first discovered in 1920, have a natural electrical polarization that can be reversed by an electric field. That polarization remains reversed even once the electric field has been removed.
The materials are dielectric, meaning they can be polarized by the application of an electric field. That makes them highly effective in capacitors.
Ferroelectrics are also piezoelectric, which means they can generate electric properties in response to mechanical energy, and vice versa. This quality can be used in sonars, fire sensors, tiny speakers in a cell phone or actuators that precisely form letters in an inkjet printer.
All these properties can be enhanced by manipulating the phase boundary of ferroelectric materials.
In a lead-based ferroelectric, such as lead zirconate titanate, one can chemically tune the composition to land right at the phase boundary.
Lead-free ferroelectrics, however, contain highly volatile alkaline metals, which can become a gas and evaporate when chemically tuned.
The researchers instead created a thin film of the lead-free ferroelectric material sodium niobate (NaNbO3). The material is known to have a complex crystalline ground state structure at room temperature. It is also flexible. Scientists have long known that changing the temperature of sodium niobate can produce multiple phases, or different arrangements of atoms.
Instead of a chemical process or manipulating the temperature, the researchers changed the structure of the atoms in sodium niobate by strain.
They grew a thin film of sodium niobate on a substrate. The atoms in the sodium niobate contract and expand as they try to match the arrangement of atoms in the substrate, and this mismatch puts the film under strain.
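The amount of strain imposed this way is set by the lattice mismatch between film and substrate. Using the standard definition of epitaxial misfit strain (shown here for orientation; the actual lattice constants involved are not quoted in this summary),

\varepsilon = \frac{a_{\mathrm{substrate}} - a_{\mathrm{film}}}{a_{\mathrm{film}}},

a substrate whose in-plane lattice constant is 1% larger than that of relaxed sodium niobate puts the film under roughly 1% tensile strain.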
"What is quite remarkable with sodium niobate is if you change the length a little bit, the phases change a lot.
To the researchers' surprise, the strain caused the sodium niobate to have three different phases at once, which optimizes the useful ferroelectric properties of the material by creating more boundaries.
The experiments were conducted at room temperature. The next step will be to see whether sodium niobate responds to strain in the same way at extreme temperatures, from minus 270 C up to 1,000 C.
Reza Ghanbari et al, Strain-induced lead-free morphotropic phase boundary, Nature Communications (2025). DOI: 10.1038/s41467-025-63041-w
Reducing arsenic in drinking water cuts risk of death, even after years of chronic exposure: 20-year study
Arsenic is among the most common chemical pollutants of groundwater.
It is a naturally occurring element that accumulates in groundwater, and because it has no taste or odor, people can unknowingly drink contaminated water for years.
A 20-year study of nearly 11,000 adults in Bangladesh found that lowering arsenic levels in drinking water was associated with up to a 50% lower risk of death from heart disease, cancer and other chronic illnesses, compared with continued exposure.
The study provides the first long-term, individual-level evidence that reducing arsenic exposure may lower mortality, even among people exposed to the toxic contaminant for years.
The landmark analysis by researchers is important for public health because groundwater contamination from naturally occurring arsenic remains a serious issue worldwide.
The study shows what happens when people who are chronically exposed to arsenic are no longer exposed. You're not just preventing deaths from future exposure, but also from past exposure.
The results provide the clearest evidence to date of the link between arsenic reduction and lower mortality.
For two decades, the research team followed each participant's health and repeatedly collected urine samples to track exposure, which they say strengthened the accuracy of their findings.
People whose urinary arsenic levels dropped from high to low had mortality rates identical to those who had consistently low exposure throughout the duration of the study. The larger the drop in arsenic levels, the greater the decrease in mortality risk. By contrast, individuals who continued drinking high-arsenic water saw no reduction in their risk of death from chronic disease.
Arsenic Exposure Reduction and Chronic Disease Mortality, JAMA (2025). jamanetwork.com/journals/jama/ … 1001/jama.2025.19161
Lethal dose of plastics for ocean wildlife: Surprisingly small amounts can kill seabirds, sea turtles and marine mammals
By studying more than 10,000 necropsies, researchers now know how much plastic it takes to kill seabirds, sea turtles, and marine mammals, and the lethal dose is much smaller than you might think. Their new study titled "A quantitative risk assessment framework for mortality due to macroplastic ingestion in seabirds, marine mammals, and sea turtles" is published in the Proceedings of the National Academy of Sciences.
Led by Ocean Conservancy researchers, the paper is the most comprehensive study yet to quantify the extent to which a range of plastic types—from soft, flexible plastics like bags and food wrappers; to balloon pieces; to hard plastics ranging from fragments to whole items like beverage bottles—result in the death of seabirds, sea turtles, and marine mammals that consume them.
The study reveals that, on average, consuming less than three sugar cubes' worth of plastics (for seabirds like Atlantic puffins, which measure approximately 28 centimeters, or 11 inches, in length), just over two baseballs' worth (for sea turtles like Loggerheads, 90 centimeters or 35 inches), or about a soccer ball's worth (for marine mammals like harbor porpoises, 1.5 meters, or 60 inches) carries a 90% likelihood of death.
At the 50% mortality threshold, the volumes are even more startling: consuming less than one sugar cube's worth of plastics kills one in two Atlantic puffins; less than half a baseball's worth of plastics kills one in two Loggerhead turtles; and less than a sixth of a soccer ball kills one in two harbor porpoises.
The lethal dose varies based on the species, the animal's size, the type of plastic it's consuming, and other factors, but overall it's much smaller than you might think, which is troubling when you consider that more than a garbage truck's worth of plastics enters the ocean every minute.
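The 50% and 90% figures are points on a dose-response curve: the probability of death climbs steeply with the volume of plastic ingested. As a rough illustration, a generic logistic curve can be pinned to the two Atlantic puffin figures quoted above (about one sugar cube for 50% mortality, about three for 90%); this is only a sketch for intuition, not the study's actual risk-assessment framework.

# Illustrative logistic dose-response curve pinned to the Atlantic puffin
# figures above (~1 sugar cube of plastic -> 50% mortality, ~3 -> 90%).
# A sketch for intuition only, not the study's statistical model.
import math

D50, D90 = 1.0, 3.0                          # doses in "sugar cubes" of plastic
slope = math.log(D90 / D50) / math.log(9.0)  # fixes the 90% point on a log scale

def mortality_risk(dose):
    return 1.0 / (1.0 + math.exp(-(math.log(dose) - math.log(D50)) / slope))

for dose in (0.5, 1.0, 2.0, 3.0, 5.0):
    print(f"{dose:>3} cubes -> {mortality_risk(dose):.0%} estimated mortality risk")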
Nearly half (47%) of all sea turtles; a third (35%) of seabirds; and 12% of marine mammals in the dataset had plastics in their digestive tracts at their time of death. Overall, one in five (21.5%) of the animals recorded had ingested plastics, often of varying types. Additional findings included:
Seabirds
Of seabirds that ate plastic, 92% ate hard plastics, 9% ate soft plastics, 8% ate fishing debris, 6% ate rubber, and 5% ate foams, with many individuals eating multiple plastic types.
Seabirds are especially vulnerable to synthetic rubber: just six pieces, each smaller than a pea, are 90% likely to cause death.
Sea turtles
Of sea turtles that ate plastic, 69% ate soft plastics, 58% ate fishing debris, 42% ate hard plastics, 7% ate foam, 4% ate synthetic rubbers, and 1% ate synthetic cloth.
Sea turtles, which on average weigh several hundred pounds, are especially vulnerable to soft plastics, like plastic bags: just 342 pieces, each about the size of a pea, would be lethal with 90% certainty.
Mammals
Of marine mammals that ate plastic, 72% ate fishing debris, 10% ate soft plastics, 5% ate rubber, 3% ate hard plastics, 2% ate foam, and 0.7% ate synthetic cloth.
Marine mammals are especially vulnerable to fishing debris: 28 pieces, each smaller than a tennis ball, are enough to kill a sperm whale in 90% of cases.
Threatened species and broader impacts
The study also found that nearly half of the individual animals who had ingested plastics are red-listed as threatened—that is, near-threatened, vulnerable, endangered or critically endangered—by the IUCN. Notably, the study only analyzed the impacts of ingesting large plastics (greater than 5 millimeters) on these species, and did not account for all plastic impacts and interactions. For example, they excluded entanglement, sublethal impacts of ingestion that can impact overall animal health, and microplastics consumed.
This research really drives home how ocean plastics are an existential threat to the diversity of life on our planet.
Murphy, Erin L., A quantitative risk assessment framework for mortality due to macroplastic ingestion in seabirds, marine mammals, and sea turtles, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2415492122. doi.org/10.1073/pnas.2415492122
Dogs 10,000 years ago roamed with bands of humans and came in all shapes and sizes
Analysis of ancient dog skulls and genomes indicates that significant physical and genetic diversity in dogs emerged over 10,000 years ago, predating modern selective breeding. Distinctive dog-like skulls appeared around 11,000 years ago, and ancient DNA reveals that dogs often migrated with human groups, reflecting complex, intertwined histories and early biocultural exchanges.
https://theconversation.com/dogs-10-000-years-ago-roamed-with-bands...
New study identifies part of brain animals use to make inferences
Animals survive in changing and unpredictable environments by not merely responding to new circumstances, but also, like humans, by forming inferences about their surroundings—for instance, squirrels understand that certain bird noises don't signal the presence of a predator, so they won't seek shelter when they later hear these same sounds. But less clear is how the brain works to create these inferences.
In a study published in the journal Neuron, a team of researchers identified a particular part of the brain that serves as an "inference engine." The region, the orbitofrontal cortex (OFC), allows animals to update their understanding of their surroundings based on changing circumstances.
To survive, animals cannot simply react to their surroundings. They must generalize and make inferences—a cognitive process that is among the most vital and complicated operations that nervous systems perform. These findings advance our knowledge of how the brain works in applying what we've learned.
The scientists add that the results offer promise for better understanding the nature of neuropsychiatric disorders, such as bipolar disorder and schizophrenia, in which our ability to make inferences is diminished.
In the experiments conducted when the brain's OFC was disrupted, the trained rats could no longer update their understanding of what the other available rewards might be—specifically, they couldn't make distinctions among hidden states.
These results, based on recordings of more than 10,000 neurons, suggest that the OFC is directly involved in helping the brain make inferences in changing situations.
The orbitofrontal cortex updates beliefs for state inference, Neuron (2025). DOI: 10.1016/j.neuron.2025.10.024. www.cell.com/neuron/fulltext/S0896-6273(25)00805-0
Learning to see after being born blind: Brain imaging study highlights infant adaptability
Some babies are born with early blindness due to dense bilateral congenital cataracts, requiring surgery to restore their sight. This period of several months without vision can leave a lasting mark on how the brain processes visual details, but surprisingly little on the recognition of faces, objects, or words. This is the main finding of an international study conducted by neuroscientists.
Using brain imaging, the researchers compared adults who had undergone surgery for congenital cataracts as babies with people born with normal vision. The results are striking: in people born with cataracts, the area of the brain that analyzes small visual details (contours, contrasts, etc.) retains a lasting alteration from this early blindness.
On the other hand, the more advanced regions of the visual brain, responsible for recognizing faces, objects, and words, function almost normally. These "biological" results have been validated by computer models involving artificial neural networks. This distinction between altered and preserved areas of the brain paves the way for new treatments. In the future, clinicians may be able to offer visual therapies that are better tailored to each patient.
Babies' brains are highly adaptable. Even if vision is lacking at the very beginning of life, the brain can adapt and learn to recognize the world around it, even on the basis of degraded information.
These findings also challenge the idea of a single "critical period" for visual development. Some areas of the brain are more vulnerable to early vision loss, while others retain a surprising capacity for recovery. The brain is both fragile and resilient. Early experiences matter, but they don't determine everything.
Impact of a transient neonatal visual deprivation on the development of the ventral occipito-temporal cortex in humans, Nature Communications (2025).
https://www.nature.com/articles/s41467-025-65468-7#:~:text=We%20sho...
Frozen RNA survived 50,000 years
Researchers have found the oldest RNA molecules to date in mummified woolly mammoth tissue. RNA is a fragile molecule, which makes intact ancient samples few and far between. But such samples are sought after because analysing ancient RNA could shed light on the gene activity of extinct animals. Scientists used enzymes to convert RNA in the mammoth tissue to DNA, and then reverse-engineered the original RNA sequences. This technique recovered fragments of RNA from three samples, dated to between 39,000 and 52,000 years old.
https://www.cell.com/cell/fulltext/S0092-8674(25)01231-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867425012310%3Fshowall%3Dtrue
https://www.science.org/content/article/forty-thousand-year-old-mam...
Psilocybin could reverse effects of brain injuries resulting from intimate partner violence, rat study finds
The term intimate partner violence (IPV) refers to physical, sexual or psychological abuse perpetrated by an individual on their romantic partner or spouse. Victims of IPV who are violently attacked and physically abused on a regular basis can sometimes present injuries that have lasting consequences on their mood, mental processes and behaviour.
Chronic neurobehavioral sequelae from IPV-related brain injuries (IPV-BI) are associated with neuroinflammation and impaired neuroplasticity, and effective treatment options are scarce, particularly in the context of IPV.
Common types of injuries observed in IPV victims who are periodically attacked physically include mild traumatic brain injuries (mTBI) and disruptions in the flow of blood or oxygen to the brain emerging from non-fatal strangulation (NFS). Both these have been linked to inflammation in the brain and a hindered ability to form new connections between neurons or change older connections (i.e., neuroplasticity).
Researchers recently carried out a study involving rats aimed at assessing the potential of the psychedelic compound psilocybin for reversing the chronic effects of IPV-related brain injuries. Their findings, published in Molecular Psychiatry, suggest that psilocybin could in fact reduce inflammation and anxiety, improve memory and facilitate learning following brain injuries caused by repeated physical trauma.
Psilocybin, a 5-HT2A receptor agonist with therapeutic potential in psychiatric disorders that share overlapping pathophysiology with brain injury, is a promising candidate.
As the majority of IPV victims are female, Allen, Sun and their colleagues performed their experiments on female rats. To model IPV-related brain injuries, they subjected the rats to mild injuries that mirrored those observed in many victims of IPV.
"Female rats underwent daily mTBI (lateral impact) followed by NFS (90 s) for five days, followed by 16 weeks of recovery," the authors explained.
Four months after the rats had been subjected to the injuries, they were given either a dose of psilocybin or a placebo (i.e., saline) injection. Twenty-four hours later, they completed behavioral tasks designed to assess their memory, learning, motivation and anxiety levels.
Psilocybin is known to activate 5-HT2A receptors, a subtype of serotonin receptor that are known to play a role in the regulation of mood, mental processes and the brain's adaptability (i.e., plasticity). Using a drug that can block the activity of 5-HT2A receptors, the researchers also tried to determine whether these receptors played a key role in any effects they observed.
"To investigate whether psilocybin's effects were 5-HT2A receptor dependent, additional rats received pre-treatment with selective 5-HT2A receptor antagonist M100907 (1.5 mg/kg) one hour before psilocybin administration," wrote the authors. in their paper.
Overall, the results of this research team's experiments suggest that psilocybin could help to reverse some of the behavioral, cognitive and brain-related damage caused by repeated physical attacks. The female rats they tested were found to exhibit less anxiety-like and depression-like behaviors after they received psilocybin, while also performing better in memory and learning tests.
As this study was carried out in rats, its findings cannot be confidently applied to humans yet. In the future, human clinical trials could help to determine whether psilocybin is in fact a safe and effective therapeutic strategy to aid recovery from IPV-related brain injuries.
Josh Allen et al, Psilocybin mitigates chronic behavioral and neurobiological alterations in a rat model of recurrent intimate partner violence-related brain injury, Molecular Psychiatry (2025). DOI: 10.1038/s41380-025-03329-x.
How most of the universe's visible mass is generated: Experiments explore emergence of hadron mass
Deep in the heart of the matter, some numbers don't add up. For example, while protons and neutrons are made of quarks, nature's fundamental building blocks bound together by gluons, their masses are much larger than the individual quarks from which they are formed.
This leads to a central puzzle … why? In the theory of the strong interaction, known as quantum chromodynamics or QCD, quarks acquire their bare mass through the Higgs mechanism. The long-hypothesized process was confirmed by experiments at the CERN Large Hadron Collider in Switzerland and led to the Nobel Prize for Peter Higgs in 2013.
Yet the inescapable issue remains that this mechanism contributes to the measured proton and neutron masses at the level of less than 2%. This clearly demonstrates that the dominant part of the mass of real-world matter is generated through another mechanism, not through the Higgs. The rest arises from emergent phenomena.
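A rough back-of-envelope check makes the point. Using standard reference values not quoted in the article (current-quark masses of roughly m_u ≈ 2.2 MeV and m_d ≈ 4.7 MeV, and a proton mass of about 938 MeV), the Higgs-generated quark masses account for only about 1% of the proton's mass:
\[
\frac{2 m_u + m_d}{M_p} \approx \frac{2(2.2) + 4.7~\mathrm{MeV}}{938~\mathrm{MeV}} \approx 0.01,
\]
in line with the less-than-2% figure cited above; the remaining ~99% must come from somewhere else.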
The unaccounted mass and how it arises have long been open problems in nuclear physics, but scientists are now getting a more detailed understanding of this mass-generating process than ever before.
So, what gives protons and other strongly interacting particles (together called hadrons) their added "heft"? The answer lies in the dynamics of QCD. Through QCD, the strong interaction generates mass from the energy stored in the fields of the strongly interacting quarks and gluons. This is termed the emergence of hadron mass, or EHM.
Part 1
4 hours ago
Dr. Krishna Kumari Challa
Over the past decade, significant progress has been made in understanding the dominant portion of the universe's visible mass. This is driven by studies of the distance (or momentum) dependence of the strong interaction within a QCD-based approach known as the continuum Schwinger method (CSM).
Bridging CSM and experiments via phenomenology, physicists analyzed nearly 30 years of collected data. The comprehensive effort has given scientists the most detailed look yet at the mechanisms responsible for EHM.
QCD describes the dynamics of the most elementary constituents of matter known so far: quarks and gluons. Through QCD processes, all hadronic matter is generated. This includes protons, neutrons, other bound quark-gluon systems and, ultimately, all atomic nuclei. A distinctive feature of the strong force is gluon self-interaction.
Without gluon self-interaction, the universe would be completely different. This self-interaction shapes the variety of particle properties we observe and gives rise to real-world hadron phenomena through emergent physics.
Because of this property, the strong interaction evolves rapidly with distance. This evolution of strong-interaction dynamics is described within the CSM approach. At distances comparable to the size of a hadron, ~10⁻¹³ cm, the relevant constituents are no longer the bare quarks and gluons of QCD.
Instead, dressed quarks and dressed gluons emerge when bare quarks and gluons are surrounded by clouds of strongly coupled quarks and gluons undergoing continual creation and annihilation.
In this regime, dressed quarks acquire dynamically generated masses that evolve with distance. This provides a natural explanation for EHM: a transition from the nearly massless bare quarks (with masses of only a few MeV) to fully dressed quarks of approximately 400 MeV mass. Strong interactions among the three dressed quarks of the proton generate its mass of about 1 GeV, as well as the masses of its excited states in the range of 1.0–3.0 GeV.
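As a minimal sanity check on these figures (an illustration based only on the numbers quoted above, not the full CSM calculation):
\[
3 \times 400~\mathrm{MeV} \approx 1.2~\mathrm{GeV} \sim M_p \approx 0.94~\mathrm{GeV},
\qquad
3 \times \mathcal{O}(\mathrm{few~MeV}) \approx 10~\mathrm{MeV} \approx 1\%~\mathrm{of}~M_p.
\]
The difference between three dressed-quark masses and the measured proton mass reflects binding and interaction energy, which is what the detailed CSM analysis accounts for.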
This raises the question: Can EHM be elucidated by mapping the momentum dependence of the dressed-quark mass from experimental studies of the proton and its excited states?
Part 2
4 hours ago
Dr. Krishna Kumari Challa
Investigations comparing experiment with theory demonstrate conclusively that dressed quarks with dynamically generated masses are the active degrees of freedom underlying the structure of the proton and its excited states. They also solidify the case that experimental results can be used to assess the mechanisms responsible for EHM.
Patrick Achenbach et al, Electroexcitation of Nucleon Resonances and Emergence of Hadron Mass, Symmetry (2025). DOI: 10.3390/sym17071106. www.mdpi.com/2073-8994/17/7/1106
Part 3
4 hours ago
Dr. Krishna Kumari Challa
Could atoms be reordered to enhance electronic devices?
Reordering atoms in SiGeSn barriers surrounding GeSn quantum wells unexpectedly increases charge carrier mobility, contrary to prior assumptions. This enhancement is likely due to atomic short-range ordering, suggesting that manipulating atomic arrangements could significantly improve electronic device performance and miniaturization, with implications for neuromorphic and quantum computing.
Christopher R. Allemang et al, High Mobility and Electrostatics in GeSn Quantum Wells With SiGeSn Barriers, Advanced Electronic Materials (2025). DOI: 10.1002/aelm.202500460. advanced.onlinelibrary.wiley.c … .1002/aelm.202500460
4 hours ago
Dr. Krishna Kumari Challa
Humans are evolved for nature, not cities, say anthropologists
A new paper by evolutionary anthropologists argues that modern life has outpaced human evolution. The study suggests that chronic stress and many modern health issues are the result of an evolutionary mismatch between our primarily nature-adapted biology and the industrialized environments we now inhabit.
Over hundreds of thousands of years, humans adapted to the demands of hunter-gatherer life—high mobility, intermittent stress and close interaction with natural surroundings.
Industrialization, by contrast, has transformed the human environment in only a few centuries, by introducing noise, air and light pollution, microplastics, pesticides, constant sensory stimulation, artificial light, processed foods and sedentary lifestyles.
In our ancestral environments, we were well adapted to deal with acute stress, the kind needed to evade or confront predators.
The lion would come around occasionally, and you had to be ready to defend yourself—or run. The key is that the lion goes away again.
Today's stressors—traffic, work demands, social media and noise, to name just a few—trigger the same biological systems, but without resolution or recovery. Our bodies react as though each of these stressors were a lion, say the researchers.
Whether it's a difficult discussion with your boss or traffic noise, your stress response system is still the same as if you were facing lion after lion. As a result, you have a very powerful response from your nervous system, but no recovery.
This stress is becoming constant.
In their review, the researchers synthesize evidence suggesting that industrialization and urbanization are undermining human evolutionary fitness. From an evolutionary standpoint, the success of a species depends on survival and reproduction. According to the authors, both have been adversely affected since the Industrial Revolution.
They point to declining global fertility rates and rising levels of chronic inflammatory conditions such as autoimmune diseases as signs that industrial environments are taking a biological toll.
There's a paradox where, on the one hand, we've created tremendous wealth, comfort and health care for a lot of people on the planet, but on the other hand, some of these industrial achievements are having detrimental effects on our immune, cognitive, physical and reproductive functions.
One well-documented example is the global decline in sperm count and motility observed since the 1950s, which the researchers link to environmental factors. This is thought to be tied to pesticides and herbicides in food, but also to microplastics.
Part 1
4 hours ago
Dr. Krishna Kumari Challa
Given the pace of technological and environmental change, biological evolution cannot keep up. Genetic adaptation is multigenerational, unfolding over tens to hundreds of thousands of years.
That means the mismatch between our evolved physiology and modern conditions is unlikely to resolve itself naturally. Instead, the researchers argue, societies need to mitigate these effects by rethinking their relationship with nature and designing healthier, more sustainable environments.
Addressing the mismatch requires both cultural and environmental solutions.
One approach is to fundamentally rethink our relationship with nature—treating it as a key health factor and protecting or regenerating spaces that resemble those from our hunter-gatherer past.
Another is to design healthier, more resilient cities that take human physiology into account.
This new, research-based way of thinking can identify which stimuli most affect blood pressure, heart rate or immune function, for example, and pass that knowledge on to decision-makers.
We need to get our cities right—and at the same time regenerate, value and spend more time in natural spaces, say the researchers.
Daniel P. Longman et al, Homo sapiens, industrialisation and the environmental mismatch hypothesis, Biological Reviews (2025). DOI: 10.1111/brv.70094
Part 2
4 hours ago