Newborns whose mothers spoke a mix of languages during pregnancy are more sensitive to a wider range of sound pitches
It's well established that babies in the womb hear and learn about speech, at least in the third trimester. For example, newborns have been shown to already prefer the voice of their mother, recognize a story that had been repeatedly told to them while in the womb, and tell apart their mother's native language.
What wasn't known until now was how developing fetuses learn about speech when their mother speaks to them in a mix of languages. Yet this is common: there are 3.3 billion bilingual people (43% of the population) worldwide, and in many countries, bilingualism or multilingualism is the norm.
Researchers showed that exposure to monolingual or bilingual speech has different effects at birth on 'neural encoding' of voice pitch and vowel sounds: that is, on how information about these aspects of speech was initially learned by the fetus.
At birth, newborns from bilingual mothers appear more sensitive to a wider range of acoustic variation of speech, whereas newborns from monolingual mothers seem to be more selectively tuned to the single language they have been immersed in.
Exposure to bilingual or monolingual maternal speech during pregnancy affects the neurophysiological encoding of speech sounds in neonates differently, Frontiers in Human Neuroscience (2024). DOI: 10.3389/fnhum.2024.1379660
Exposure to endocrine-disrupting chemicals in utero associated with higher odds of metabolic syndrome in children
The term 'metabolic syndrome' (MetS) encompasses a group of factors, such as abdominal obesity, hypertension and insulin resistance, that together increase the risk of cardiovascular disease and type 2 diabetes.
A new study suggests that prenatal exposure to a combination of endocrine disrupting chemicals (EDCs) is associated with poorer metabolic health in childhood, which in turn may contribute to an increased risk of metabolic syndrome in adulthood.
EDCs are chemical substances so named because of their ability to interfere with the functioning of our hormonal system, growth, energy balance and metabolism. Given their ubiquity in our environment, exposure to them is difficult to escape.
Previous studies have already shown a link between individual exposure to some of these compounds during the prenatal phase and some of the factors that make up the metabolic syndrome, particularly obesity and blood pressure.
The study involved 1,134 mothers and their children from six European countries (Spain, France, Greece, Lithuania, Norway and the United Kingdom), all volunteers from the HELIX (Human Early Life Exposome) cohort. Prenatal exposure to a total of 45 endocrine disruptors was analyzed through blood and urine samples collected from the mothers during pregnancy or from the umbilical cord after birth.
Later, when the children were between 6 and 11 years old, they were followed up, including a clinical examination, interview and collection of biological samples. This yielded data on waist circumference, blood pressure, cholesterol, triglycerides and insulin levels, which were aggregated to obtain a risk index for metabolic syndrome.
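The study's exact scoring method isn't described here, but a common way to aggregate such measurements into a composite risk index is to standardize each component as a z-score and average them. A minimal sketch, with invented component names and values (not the HELIX cohort's actual data or method):

```python
from statistics import mean, pstdev

def z_scores(values):
    # Standardize a list of measurements: subtract the mean, divide by SD
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def risk_index(components):
    # components: dict of marker name -> per-child measurements;
    # the index is each child's average z-score across all markers
    standardized = {k: z_scores(v) for k, v in components.items()}
    n = len(next(iter(standardized.values())))
    return [mean(standardized[k][i] for k in standardized) for i in range(n)]

# Hypothetical measurements for four children
cohort = {
    "waist_cm":      [58, 62, 70, 66],
    "systolic_mmHg": [98, 104, 112, 101],
    "triglycerides": [55, 80, 120, 70],
}
scores = risk_index(cohort)  # higher score = worse metabolic profile
```

By construction the index averages to zero across the cohort, so a child's score reflects their position relative to the group rather than any absolute clinical threshold.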
Statistical analysis showed that mixtures of metals, per- and polyfluoroalkyl substances (PFAS), organochlorine pesticides and flame retardants (PBDEs) were associated with a higher risk of metabolic syndrome. In the case of metals, the association observed was mainly due to the effect of mercury, the main source of which is the consumption of large fish.
PFASs are among the most widely used families of chemical compounds, found in pesticides, paints, non-stick pans and fast-food packaging, among many other products. Because of their persistence, they are also known as "forever chemicals." Also very persistent are organochlorine pesticides, which were banned in Europe in the 1970s but to which we are still widely exposed due to their permanence in the environment.
Researchers also observed that associations were stronger in girls for mixtures of PFASs and polychlorinated biphenyls (PCBs), while boys were more susceptible to exposure to parabens. Since endocrine disruptors interfere with sex steroid hormones, these differences fall within what would be expected.
These results suggest that exposure to widespread mixtures of endocrine disruptors during pregnancy may be associated with adverse metabolic health in both boys and girls. This association may contribute to the current increase in the prevalence of metabolic syndrome, which affects a quarter of the adult population, with upward trends evident even among young people.
Prenatal Exposure to Chemical Mixtures and Metabolic Syndrome Risk in Children, JAMA Network Open (2024). DOI: 10.1001/jamanetworkopen.2024.12040
Why the brain can robustly recognize images, even without colour
Even though the human visual system has sophisticated machinery for processing colour, the brain has no problem recognizing objects in black-and-white images. A new study offers a possible explanation for how the brain comes to be so adept at identifying both colour and colour-degraded images.
Using experimental data and computational modeling, the researchers found evidence suggesting the roots of this ability may lie in development. The work has been published in Science.
Early in life, when newborns receive very limited colour information, the brain is forced to learn to distinguish objects based on their luminance, or the intensity of light they reflect or emit, rather than their colour. Later in life, when the retina and cortex are better equipped to process colour, the brain incorporates colour information as well, but also maintains its previously acquired ability to recognize images without relying critically on colour cues.
The findings also help to explain why children who are born blind but have their vision restored later in life, through the removal of congenital cataracts, have much more difficulty identifying objects presented in black and white. Those children, who receive rich colour input as soon as their sight is restored, may develop an overreliance on colour that makes them much less resilient to changes or removal of colour information.
Researchers have observed that limitations in early sensory input can also benefit other aspects of vision, as well as the auditory system.
In 2022, they used computational models to show that early exposure to only low-frequency sounds, similar to those that babies hear in the womb, improves performance on auditory tasks that require analyzing sounds over a longer period of time, such as recognizing emotions.
Nanoparticle vaccines: A potential leap forward in veterinary medicine
Classical vaccines often rely on traditional technologies, such as live attenuated or inactivated pathogens, which carry inherent risks including reduced immunogenicity under certain conditions and potential safety concerns. This has spurred the need for innovative approaches that can provide safer and more effective prophylactic solutions in veterinary medicine.
Self-assembled protein nanoparticles (SAPNs) emerge as a cutting-edge solution, harnessing the power of nanotechnology to revolutionize vaccine design and implementation.
In an article published on 10 May 2024 in Animal Diseases, researchers at Zhejiang University's Institute of Preventive Veterinary Medicine delve into the development and application of SAPNs and virus-like nanoparticles (VLPs), offering a detailed discussion of their potential in veterinary medicine.
The article focuses on various types of SAPNs, including natural and synthetically designed nanoparticles. These nanoparticles are tailored to enhance the immune system's ability to recognize and respond to pathogens more effectively.
Key highlights include the use of animal virus-derived nanoparticles and bacteriophage-derived nanoparticles, which have shown the potential to elicit strong cellular and humoral responses. The nanoparticles' ability to mimic pathogen structures enables them to trigger a more substantial immune reaction, potentially leading to long-lasting immunity.
Researchers have documented successes in using these nanoparticles to protect against diseases like foot-and-mouth disease and swine fever, showcasing their broad applicability and effectiveness.
Veterinary nanoparticle vaccines have broad implications, with the potential to extend the benefits beyond veterinary applications into human health. The enhanced safety and immunogenicity of these vaccines could lead to the development of advanced vaccines for human use.
Additionally, by reducing the environmental impact of livestock diseases, this technology may contribute to more sustainable agricultural practices globally.
Meiqi Sun et al, Toward innovative veterinary nanoparticle vaccines, Animal Diseases (2024). DOI: 10.1186/s44149-024-00119-w
How a world record 'squeeze' could offer comfort for dark matter hunters
Quantum engineers have developed a new amplifier that could help other scientists search for elusive dark matter particles.
Imagine throwing a ball. You'd expect science to be able to work out its exact speed and location at any given moment, right? Well, the theory of quantum mechanics says you can't actually know both with infinite precision at the same time.
It turns out that as you more precisely measure where the ball is, knowing its speed becomes less and less accurate.
This conundrum is commonly referred to as Heisenberg's uncertainty principle, named after the famous physicist Werner Heisenberg who first described it.
For the ball, this effect is imperceptible, but in the quantum world of small electrons and photons the measurement uncertainty suddenly becomes very significant.
That's the problem being addressed by a team of engineers who have developed an amplifying device that performs precise measurements of very weak microwave signals, and it does so through a process known as squeezing.
Squeezing involves reducing the certainty of one property of a signal in order to obtain ultra-precise measurements of another property.
The team of researchers have significantly increased the accuracy of measuring signals at microwave frequencies, like those emitted by your mobile phone, to the point of setting a new world record.
The precision of measuring any signal is fundamentally limited by noise. Noise is the fuzziness that creeps in and masks signals, which is something you may have experienced if you've ever ventured out of range when listening to AM or FM radio.
However, uncertainty in the quantum world means there is a limit as to how low noise can be made in a measurement.
Even in a vacuum, a space void of everything, the uncertainty principle tells us we must still have noise. We call this 'vacuum' noise. For many quantum experiments, vacuum noise is the dominant effect that prevents us from making more precise measurements.
The squeezer produced by the research team can now beat this quantum limit.
The device amplifies noise in one direction, so that noise in another direction is significantly reduced, or 'squeezed.' Think of the noise as a tennis ball: if we stretch it vertically, it must shrink horizontally to maintain its volume. Researchers can then use the reduced part of the noise to do more precise measurements.
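A toy calculation illustrates the trade-off. In the textbook idealization of squeezing (not a model of the team's actual device), vacuum noise has equal variance 1/2 in two properties of the signal, called quadratures (in units where ħ = 1). A squeezer with strength r amplifies the noise in one quadrature and shrinks it in the other, while their product stays pinned at the Heisenberg minimum:

```python
import math

def squeezed_variances(r):
    # Ideal single-mode squeezing: one quadrature's noise grows by e^(2r),
    # the other's shrinks by e^(-2r); the uncertainty product is unchanged
    var_amplified = 0.5 * math.exp(2 * r)
    var_squeezed = 0.5 * math.exp(-2 * r)
    return var_amplified, var_squeezed

va, vs = squeezed_variances(1.0)
# vs is well below the vacuum level of 0.5, yet va * vs still equals 0.25,
# the minimum allowed by the uncertainty principle
squeezing_db = 10 * math.log10(0.5 / vs)  # how far below vacuum noise, in dB
```

The measurement is then arranged so that only the quiet (squeezed) quadrature matters, which is how the device beats the vacuum-noise limit without violating quantum mechanics.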
They showed that the squeezer is able to reduce noise to record low levels.
Squeezing is very difficult at microwave frequencies because the materials used tend to destroy the fragile squeezed noise quite easily.
What they've done is a lot of engineering to remove sources of loss, which meant using very high-quality superconducting materials to build the amplifier.
And the team think that the new device could help speed up the search for notoriously elusive particles known as axions, which are so far only theoretical, but proposed by many as the secret ingredient of mysterious dark matter.
The squeezed noise itself could even be used in future quantum computers.
It turns out that squeezed vacuum noise is an ingredient to build a certain type of quantum computer. Excitingly, the level of squeezing they've achieved is not far off the amount needed to build such a system.
Arjen Vaartjes et al, Strong microwave squeezing above 1 Tesla and 1 Kelvin, Nature Communications (2024). DOI: 10.1038/s41467-024-48519-3
Atomic-resolution imaging shows why ice is so slippery
A team of physicists has uncovered the reason behind the slipperiness of ice. In their study, published in the journal Nature, the group used atomic force microscopy to get a closer look at the surface of ice at different temperatures.
Prior research and day-to-day experiences have shown that ice is slippery, even when temperatures are well below the freezing point. Research has suggested this is because of a pre-melt coating that develops at the surface, which serves as a lubricant.
In this new study, the research team used an atomic force microscope fitted with a carbon monoxide atom on its tip to get a better look at the structure of normal ice and its pre-melt coating.
The researchers began by chilling ice inside the microscope chamber to -150°C and then using the microscope to look at its atomic structure. They could see that the internal ice (known as ice Ih) and the ice at the surface were different.
The ice Ih, as expected, was arranged in stacked hexagons. The ice on the surface, by contrast, was only partially hexagonal. The researchers also found defects in the ice at the border between the two types of ice that occurred as the different ice shapes met one another.
The researchers then raised the temperature in the chamber slightly, which resulted in more disorder as the differences in shape became more pronounced. The team then created a simulation showing how such disorder would impact the surface as a whole unit—it showed the disorder expanding all the way across the surface, giving the ice a liquid-like appearance that would be slippery if trod upon.
Jiani Hong et al, Imaging surface structure and premelting of ice Ih with atomic resolution, Nature (2024). DOI: 10.1038/s41586-024-07427-8
Bizarre bacteria scramble workflow of life
Bacteria have stunned biologists by reversing the usual flow of information. Typically, genes written in DNA serve as the template for making RNA molecules, which are then translated into proteins. Some viruses are known to have an enzyme, reverse transcriptase, that reverses this flow by transcribing RNA into DNA.
Now, scientists have discovered an even weirder twist. A bacterial version of reverse transcriptase reads RNA as a template to make completely new genes written in DNA. These genes are then transcribed back into RNA, which is translated into protective proteins when a bacterium is infected by a virus. By contrast, viral reverse transcriptases don't make new genes; they merely transfer information from RNA to DNA. The finding should change the way we look at the genome.
The discovery that reverse transcriptase — which has previously been known only for copying genetic material — can create completely new genes has left other researchers gobsmacked.
How gut microbes drive tumour growth
Scientists have long known that obese people have poorer cancer survival rates. Now they have some idea why. A high-fat diet increases the number of Desulfovibrio bacteria in the gut of mice. These release leucine, an amino acid, which encourages the proliferation of a kind of cell that suppresses the immune system. With a suppressed immune system, tumour growth can increase. In breast cancer patients, poorer outcomes were seen for women with higher body-mass index, who also had higher levels of Desulfovibrio bacteria in their gut and leucine in their blood. It's a provocative finding that will open up new avenues that we should be thinking about.
Biggest risk factors for disease spread
A meta-analysis of five ways that humanity's environmental footprint spreads disease — biodiversity loss, chemical pollution, climate change, invasive species and deforestation/urbanization — suggests that conserving biodiversity, controlling invasive species and lowering greenhouse-gas emissions would reduce disease spread the most. This evidence can be used in international policy to spur action on climate change and biodiversity loss, given their roles in driving disease spread.
Biologists observe recurring evolutionary changes, over time, in stick insects
A long-standing debate among evolutionary scientists goes something like this: Does evolution happen in a predictable pattern or does it depend on chance events and contingency? That is, if you could turn back the clock, as celebrated scientist Stephen Jay Gould (1941–2002) described in his famous metaphor, "Replaying the Tape of Life," would life on Earth evolve, once again, as something similar to what we know now, or would it look very, very different?
Now researchers report evidence of repeatable evolution in populations of stick insects in the paper "Evolution repeats itself in replicate long-term studies in the wild," in Science Advances.
The team examined three decades of data on the frequency of cryptic color-pattern morphs in the stick insect species Timema cristinae in ten naturally replicated populations in California. T. cristinae is polymorphic in its body colour and pattern. Plain green morphs blend in with California lilac (Ceanothus spinosus) shrubs, allowing the wingless, plant-feeding insects to hide, while green striped morphs disappear against chamise (Adenostoma fasciculatum) shrubs.
Hiding among the plants is one of T. cristinae's key defenses, as hungry birds, such as scrub jays, are insatiable predators of the stick insects.
Bird predation is a constant driver shaping the insects' organismal traits, including coloration and striped vs. non-striped.
Scientists observed predictable 'up-and-down' fluctuations in stripe frequency in all populations, representing repeatable evolutionary dynamics based on standing genetic variation.
A field experiment demonstrates these fluctuations involved negative frequency-dependent natural selection (NFDS), where cryptic colour patterns are more beneficial when rare rather than common. This is likely because birds develop a 'search image' for very abundant prey.
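The logic of NFDS can be illustrated with a minimal simulation (parameter values are invented, not taken from the study): a morph whose fitness declines as it becomes common is pushed back toward an intermediate frequency rather than being fixed or lost.

```python
def step(p, s=0.3):
    # One generation of selection: the striped morph's fitness falls as
    # its own frequency p rises, so it is favored only when rare (p < 0.5)
    w_striped = 1 + s * (0.5 - p)
    w_plain = 1.0
    w_mean = p * w_striped + (1 - p) * w_plain
    return p * w_striped / w_mean  # standard selection update

# Start the striped morph rare and iterate many generations
p = 0.1
history = [p]
for _ in range(200):
    p = step(p)
    history.append(p)
# Rather than vanishing or sweeping to fixation, the striped morph is
# drawn toward the 0.5 equilibrium where both morphs do equally well
```

This deterministic toy model converges smoothly; adding a time lag in how predators form their search image, or random perturbations, produces the up-and-down fluctuations around the equilibrium that the researchers observed in the field.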
At short time scales, evolution involving existing variations can be quite predictable. You can count on certain drivers always being there, such as birds feeding on the insects.
But at longer time scales, evolutionary dynamics become less predictable.
The populations might experience a chance event, such as a severe drought or a flooding event, that disrupts the status quo and thus, the predictable outcomes.
On long time scales, a new mutation in the species could introduce a rare trait. That's about as close to truly random as you can get.
Rare things are easily lost by chance, so there's a strong probability a new mutation could disappear before it gains a foothold.
Indeed, another species of Timema stick insect that also feeds on chamise either never had or quickly lost the mutations making the cryptic stripe trait. Thus, the evolution of stripe is not a repeatable outcome of evolution at this long scale.
Replicated, long-term studies from natural populations, including research on the famous Darwin's finches, are rare.
Because most of this work is restricted to one or few populations, it is difficult to draw inferences on repeatability among multiple evolutionary independent populations.
Such studies are challenging to implement not only because they take concerted effort, but also because you can't rush time.
Fine air particles, less than 2.5 micrometers in diameter (PM2.5), are a major air pollutant linked to various health problems. These particles can travel deep into the lungs and even enter the bloodstream when inhaled. Recent research suggests a major health concern: PM2.5 exposure can also damage the digestive system, including the liver, pancreas, and intestines.
The work is published in the journal eGastroenterology. This recent research has been focused on how PM2.5 exposure triggers stress responses within the digestive system's cells. These stress responses involve specialized subcellular structures within cells called organelles, such as the endoplasmic reticulum (ER), mitochondria, and lysosomes. When PM2.5 disrupts these organelles, it creates a chain reaction within the cells that can lead to inflammation and other harmful effects.
The liver, a major organ for detoxification and metabolism, is particularly susceptible to PM2.5 damage. Studies have shown that PM2.5 exposure can lead to a cascade of problems within the liver, including inflammation, stress responses, damage to the organelles, and disrupted energy metabolism. These effects can contribute to the development of non-alcoholic fatty liver disease (NAFLD) and type 2 diabetes.
PM2.5 exposure does not stop at the liver. It can also harm the pancreas and intestines. Studies have linked PM2.5 to an increased risk of pancreatic impairment in people with diabetes, as well as damage to intestinal cells and an increase in their permeability. This increased permeability can lead to a variety of digestive issues.
Researchers are exploring whether dietary or pharmaceutical interventions can mitigate PM2.5 damage. Interestingly, some studies suggest that certain nutrients, like monounsaturated fatty acids and vitamins, may offer some protection against the harmful effects of PM2.5.
Many plants and plant oils are high in monounsaturated fats but low in saturated fats. These include oils from olives, peanuts, canola seeds, safflower seeds and sunflower seeds, as well as avocados, pumpkin seeds, sesame seeds, almonds, cashews, peanuts and peanut butter, and pecans.
Air pollution is a complex issue with no easy solutions. While research into mitigating PM2.5 exposure continues, the current understanding of its impact on the digestive system highlights the far-reaching consequences of air pollution on human health. It underscores the need for continued efforts to reduce air pollution levels and to develop strategies to protect ourselves from its detrimental effects.
Big brands are 'failing to curb plastic sachet use'
Small plastic sachets commonly used in low- and middle-income countries must be phased out and packaging reuse systems promoted, urge campaigners and waste pickers, as new analysis reveals major corporations have failed to curb their use.
Pocket-sized individual portions of goods ranging from shampoo to instant coffee have become popular in lower-income communities for their affordability.
An estimated 855 billion sachets are sold globally each year, with Southeast Asia consuming nearly half of the total; this figure is projected to rise to 1.3 trillion by 2027, according to environmental groups.
But the convenience of sachets comes with a heavy environmental cost as they end up as significant contributors to plastic pollution. Their commonly multi-layered design, using different materials, makes them hard to recycle.
Consumer goods giants Unilever, Nestlé and Procter & Gamble are among the biggest contributors to plastic sachet pollution in developing countries in Asia, despite promises to reduce plastic packaging, according to a multi-country environmental audit report.
It said some corporations were trying to deal with the waste problem by burning sachets as fuel, creating further pollution.
Environmental groups in Asia have long been demanding that firms phase out their sachet packaging, as the resulting waste is deluging the region's landfills and waters.
Consumers can help by avoiding products sold in these small plastic sachets.
When massive stars die, as we understand the Universe, they don't go quietly. As their fuel runs out, they become unstable, wracked by explosions before finally ending their lives in a spectacular supernova.
But some massive stars, scientists have found, have simply vanished, leaving no trace in the night sky. Stars clearly seen in older surveys are inexplicably absent from newer ones. A star isn't exactly a set of keys – you can't just lose it down the back of the couch. So where the heck do these stars go?
A new study has given us the most compelling explanation yet. Some massive stars, suggest an international team led by astrophysicist Alejandro Vigna-Gómez of the Niels Bohr Institute in Denmark and the Max Planck Institute for Astrophysics in Germany, can die, not with a bang, after all, but a whimper.
Their evidence? A binary system named VFTS 243 in the Large Magellanic Cloud, consisting of a black hole and a companion star. This system shows no signs of a supernova explosion that, according to the models, ought to have accompanied the formation of the black hole.
Were one to stand gazing up at a visible star going through a total collapse, it might, just at the right time, be like watching a star suddenly extinguish and disappear from the heavens.
The collapse is so complete that no explosion occurs, nothing escapes, and one wouldn't see any bright supernova in the night sky. Astronomers have actually observed the sudden disappearance of brightly shining stars in recent times. Scientists cannot be sure of a connection, but the results they have obtained from analyzing VFTS 243 have brought them much closer to a credible explanation.
When a star more massive than about 8 times the mass of the Sun goes supernova, it's extremely messy. The outer layers – most of the star's mass – are explosively ejected into the space around the star, where they form a huge, expanding cloud of dust and gas that lingers for hundreds of thousands to millions of years.
Meanwhile, the star's core, no longer supported by the outward pressure of fusion, collapses under gravity to form an ultradense object, a neutron star or a black hole, depending on the initial star's mass.
These collapsed cores don't always stay put; if the supernova explosion is lopsided, this can punt the core off into space in a natal kick. We can also sometimes trace the core's trajectory back to the cloud of material it ejected as it died, but if enough time has elapsed, the material may have dissipated. But the signs of the natal kick can remain a lot longer.
VFTS 243 is a very interesting system. It consists of a massive star that's around 7.4 million years old and around 25 times the mass of the Sun, and a black hole around 10 times the mass of the Sun.
Although we can't see the black hole directly, we can measure it based on the orbital motion of its companion star – and, of course, we can infer other things about the system. One interesting thing is the shape of the orbit: it's almost circular. This, together with the motion of the system in space, suggests that the black hole did not receive a huge kick from a supernova. The researchers who discovered the black hole back in 2022 suspected as much; now, the work of Vigna-Gómez and his colleagues has confirmed it. A growing body of evidence suggests that massive stars can sometimes collapse directly into black holes, without going supernova or collecting 200 space dollars. VFTS 243 represents the best evidence we have for this scenario to date.
"Our results highlight VFTS 243 as the best observable case so far for the theory of stellar black holes formed through total collapse, where the supernova explosion fails and which our models have shown to be possible," says astrophysicist Irene Tamborra of the Niels Bohr Institute. "It is an important reality check for these models. And we certainly expect that the system will serve as a crucial benchmark for future research into stellar evolution and collapse."
In recent years, scientists have found that not everyone has the sense of an inner voice – and a new study sheds some light on how living without an internal monologue affects how language is processed in the brain.
This condition, dubbed anendophasia, is similar to anauralia, a term researchers coined in 2021 for people who have no inner voice and cannot imagine sounds, like a musical tune or a siren.
Focusing on inner voices in a study, a research team recruited 93 volunteers, half of whom said they had low levels of inner speech, while the other half reported having a very chatty internal monologue. These participants attempted a series of tasks – including one where they had to remember the order of words in a sequence, and another where rhyming words had to be paired together.
These tasks are difficult for everyone, but the researchers hypothesized they would be even harder for people without an inner voice, because most people repeat the words to themselves in their heads in order to remember them.
And this hypothesis turned out to be true.
The volunteers who reported hearing inner voices during everyday life did significantly better at the tasks than those without inner monologues: Inner speakers recalled more words correctly, and matched rhyming words faster. The researchers think this could be evidence that inner voices help people process words.
It's interesting to note that the performance differences disappeared when the volunteers spoke out loud to try and solve the problems they were given. It may be that using an audible voice is just as effective as using an inner voice in these situations.
In two other tasks, covering multitasking and distinguishing between different picture shapes, there was no difference in performance. The researchers take this as a sign that the way inner speech affects behavior depends on what we're doing.
Maybe people who don't have an inner voice have just learned to use other strategies. For example, some said that they tapped with their index finger when performing one type of task and with their middle finger when it was another type of task.
The researchers are keen to emphasize that the differences they found would not cause delays you would notice in regular conversation. Scientists are still at the very early stages of figuring out how anendophasia might affect someone – and likewise anauralia.
Dyson spheres: Astronomers report potential candidates for alien structures, and evidence against their existence
There are three ways to look for evidence of alien technological civilizations. One is to look out for deliberate attempts by them to communicate their existence, for example, through radio broadcasts. Another is to look for evidence of them visiting the solar system. And a third option is to look for signs of large-scale engineering projects in space.
A team of astronomers have taken the third approach by searching through recent astronomical survey data to identify seven candidates for alien megastructures, known as Dyson spheres, "deserving of further analysis." Their research is published in the journal Monthly Notices of the Royal Astronomical Society.
This is a detailed study looking for "oddballs" among stars—objects that might be alien megastructures. However, the authors are careful not to make any overblown claims. The seven objects, all located within 1,000 light-years of Earth, are "M-dwarfs"—a class of stars that are smaller and less bright than the sun.
Dyson spheres were first proposed by the physicist Freeman Dyson in 1960 as a way for an advanced civilization to harness a star's power. Consisting of floating power collectors, factories and habitats, they'd take up more and more space until they eventually surrounded almost the entire star like a sphere.
What Dyson realized is that these megastructures would have an observable signature. Dyson's signature (which the team searched for in the recent study) is a significant excess of infrared radiation. That's because megastructures would absorb visible light given off by the star, but they wouldn't be able to harness it all. Instead, they'd have to "dump" excess energy as infrared light with a much longer wavelength.
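This reasoning can be made concrete with a back-of-envelope blackbody estimate (the values below are standard physical constants and an assumed Sun-like star at Earth's orbital distance, not figures from the study). Equating the absorbed luminosity L to the power re-radiated by a shell of radius R gives a waste-heat temperature T = (L / (16πσR²))^(1/4):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.496e11            # astronomical unit, m

def shell_temperature(lum_watts, radius_m):
    # A shell absorbing the star's full output must re-emit the same power
    # from both its inner and outer surfaces: L = 2 * 4*pi*R^2 * sigma * T^4
    return (lum_watts / (16 * math.pi * SIGMA * radius_m**2)) ** 0.25

t = shell_temperature(L_SUN, AU)   # roughly room temperature
wien_peak_m = 2.898e-3 / t         # Wien's law: wavelength of peak emission
```

A shell around a Sun-like star at 1 AU comes out near 280 K, with emission peaking around 10 micrometres — deep in the infrared, which is exactly the kind of excess the survey searched for.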
Unfortunately, such light can also be a signature of a lot of other things, such as a disk of gas and dust, or disks of comets and other debris. But the seven promising candidates aren't obviously due to a disk, as they weren't good fits to disk models.
It is worth noting there is another signature of a Dyson sphere: visible light from the star dips as the megastructure passes in front of it. Such a signature has been found before. There was a lot of excitement about Tabby's star, or KIC 8462852, which showed many unusual dips in its light that could be due to an alien megastructure.
It almost certainly isn't an alien megastructure. A variety of natural explanations have been proposed, such as clouds of comets passing through a dust cloud. But it is an odd observation. An obvious follow-up on the seven candidates would be to look for this signature as well.
The case against Dyson spheres
Dyson spheres may well not even exist, however. I think they are unlikely to be there. That's not to say they couldn't exist, rather that any civilization capable of building them would probably not need to (unless it was some mega art project).
Dyson's reasoning for considering such megastructures assumed that advanced civilizations would have vast power requirements. Around the same time, astronomer Nikolai Kardashev proposed a scale on which to rate the advancement of civilizations, which was based almost entirely on their power consumption.
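The scale is easy to state in code. A small sketch using Carl Sagan's later continuous interpolation of Kardashev's types (the formula and benchmark power levels are standard textbook values, not from the study above):

```python
import math

# Sagan's continuous interpolation of the Kardashev scale. Benchmarks:
# Type I ~ 1e16 W, Type II ~ 1e26 W (roughly a full Dyson sphere),
# Type III ~ 1e36 W.
def kardashev(power_watts):
    return (math.log10(power_watts) - 6) / 10

humanity = kardashev(2e13)   # ~20 TW global power use: about 0.73
dyson = kardashev(3.8e26)    # capturing the Sun's whole output: about 2.06
print(humanity, dyson)
```

On this scale humanity sits around 0.7, while a civilization running a full Dyson sphere would be a Type II.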
In the 1960s, this sort of made sense. Looking back over history, humanity had just kept exponentially increasing its power use as technology advanced and the number of people increased, so they just extrapolated this ever-expanding need into the future.
However, our global energy use has started to grow much more slowly over the past 50 years, and especially over the last decade. What's more, Dyson and Kardashev never specified what these vast levels of power would be used for; they just (fairly reasonably) assumed they'd be needed to do whatever it is that advanced alien civilizations do.
But as we now look ahead to future technologies, we see that efficiency, miniaturization and nanotechnology promise vastly lower power use (the performance per watt of pretty much all technologies is constantly improving).
A quick calculation reveals that, if we wanted to collect 10% of the sun's energy at the distance the Earth is from the sun, we'd need a surface area equal to 1 billion Earths. And if we had a super-advanced technology that could make the megastructure only 10km thick, that'd mean we'd need about a million Earths worth of material to build them from.
A significant problem is that our solar system only contains about 100 Earths worth of solid material, so our advanced alien civilization would need to dismantle all the planets in 10,000 planetary systems and transport it to the star to build their Dyson sphere. To do it with the material available in a single system, each part of the megastructure could only be one meter thick.
This is assuming they use all the elements available in a planetary system. If they needed, say, lots of carbon to make their structures, then we're looking at dismantling millions of planetary systems to get hold of it. Now, I'm not saying a super-advanced alien civilization couldn't do this, but it is one hell of a job.
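The back-of-envelope material budget above can be sketched explicitly. The "Earths' worth" equivalences depend heavily on assumptions (shell geometry, what counts as one Earth of area or material), so treat the figures below as order-of-magnitude checks rather than exact values:

```python
import math

AU = 1.496e11       # m, Earth-Sun distance
R_EARTH = 6.371e6   # m

collector_area = 0.10 * 4 * math.pi * AU**2   # 10% of a full shell at 1 au
earth_surface = 4 * math.pi * R_EARTH**2
earths_of_area = collector_area / earth_surface

shell_volume = collector_area * 10_000        # assume a 10 km thick structure
earth_volume = (4 / 3) * math.pi * R_EARTH**3
earths_of_material = shell_volume / earth_volume

print(f"{earths_of_area:.1e} Earth surface areas of collector")
print(f"{earths_of_material:.1e} Earth volumes of material")
```

Depending on whether one counts surface area or cross-section, and on the assumed thickness, the totals land in the tens of millions of Earth areas and hundreds of thousands of Earth volumes. The point survives any reasonable choice of assumptions: the material budget dwarfs what a single planetary system can supply.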
I'd also strongly suspect that by the time a civilization got to the point of having the ability to build a Dyson sphere, they'd have a better way of getting the power than using a star, if they really needed it (I have no idea how, but they are a super-advanced civilization).
Maybe I'm wrong, but it can't hurt to look.
Matías Suazo et al, Project Hephaistos – II. Dyson sphere candidates from Gaia DR3, 2MASS, and WISE, Monthly Notices of the Royal Astronomical Society (2024). DOI: 10.1093/mnras/stae1186
Scientists report unified framework for diverse aurorae across planets
The awe-inspiring aurorae seen on Earth, known as the Northern and Southern Lights, have been a source of fascination for centuries. Between May 10 and 12, 2024, the most powerful aurora event in 21 years reminded us of the stunning beauty of these celestial light shows.
Recently, space physicists have published a paper in Nature Astronomy that explores the fundamental laws governing the diverse aurorae observed across planets, such as Earth, Jupiter and Saturn.
This work provides new insights into the interactions between planetary magnetic fields and solar wind, updating the textbook picture of giant planetary magnetospheres. Their findings can improve space weather forecasting, guide future planetary exploration, and inspire further comparative studies of magnetospheric environments.
Earth, Saturn and Jupiter all generate their own dipole-like magnetic fields, resulting in a funnel-canopy-shaped magnetic geometry that guides energetic electrons from space to precipitate into the polar regions and cause polar auroral emissions.
Yet the three planets differ in many aspects, including their magnetic strength, rotating speed, solar wind condition, moon activities, etc. It is unclear how these different conditions are related to the different auroral structures that have been observed on those planets for decades.
Using three-dimensional magnetohydrodynamics calculations, which model the coupled dynamics of electrically conducting fluids and electromagnetic fields, the research team assessed the relative importance of these conditions in controlling the main auroral morphology of a planet.
Combining solar wind conditions and planetary rotation, they defined a new parameter that controls the main auroral structure, which for the first time, nicely explains the different auroral structures observed at Earth, Saturn and Jupiter.
Stellar winds' interaction with planetary magnetic fields is a fundamental process in the universe. The research can be applied to understand the space environments of Uranus, Neptune, and even exoplanets.
This study has revealed the complex interplay between solar wind and planetary rotation, providing a deeper understanding of aurorae across different planets. These findings will not only enhance our knowledge of the aurorae in our solar system but also potentially extend to the study of aurorae in exoplanetary systems.
The aurorae at Earth and Jupiter are different. Yet, surprisingly, they can be explained by a unified framework.
By advancing our fundamental understanding of how planetary magnetic fields interact with the solar wind to drive auroral displays, this research has important practical applications for monitoring, predicting, and exploring the magnetic environments of the solar system.
This study also represents a significant milestone in understanding auroral patterns across planets that deepen our knowledge of diverse planetary space environments, paving the way for future research into the mesmerizing celestial light shows that continue to capture our imagination.
B. Zhang et al, A unified framework for global auroral morphologies of different planets, Nature Astronomy (2024). DOI: 10.1038/s41550-024-02270-3
Light therapy increases brain connectivity following injury, study finds
Low-level light therapy appears to affect healing in the brains of people who suffered significant brain injuries, according to a study published in Radiology.
Lights of different wavelengths have been studied for years for their wound-healing properties. Researchers at Massachusetts General Hospital (MGH) conducted low-level light therapy on 38 patients who had suffered moderate traumatic brain injury, an injury to the head serious enough to alter cognition and/or be visible on a brain scan. Patients received light therapy within 72 hours of their injuries through a helmet that emits near-infrared light.
The skull is quite transparent to near-infrared light. Once you put the helmet on, your whole brain is bathing in this light.
The researchers used an imaging technique called functional MRI to gauge the effects of the light therapy. They focused on the brain's resting-state functional connectivity, the communication between brain regions that occurs when a person is at rest and not engaged in a specific task. The researchers compared MRI results during three recovery phases: the acute phase of within one week after injury, the subacute phase of two to three weeks post-injury and the late-subacute phase of three months after injury.
Of the 38 patients in the trial, 21 did not receive light therapy while wearing the helmet. This was done to serve as a control to minimize bias due to patient characteristics and to avoid potential placebo effects.
Patients who received low-level light therapy showed a greater change in resting-state connectivity in seven brain region pairs during the acute-to-subacute recovery phase compared to the control participants.
There was increased connectivity in those receiving light treatment, primarily within the first two weeks. Researchers were unable to detect differences in connectivity between the two treatment groups long term, so although the treatment appears to increase the brain connectivity initially, its long-term effects are still to be determined.
The precise mechanism of the light therapy's effects on the brain is also still to be determined. Previous research points to the alteration of an enzyme in the cell's mitochondria (often referred to as the "powerhouse" of a cell).
This leads to more production of adenosine triphosphate, a molecule that stores and transfers energy in the cells. Light therapy has also been linked with blood vessel dilation and anti-inflammatory effects.
"There is still a lot of work to be done to understand the exact physiological mechanism behind these effects, though.
While connectivity increased for the light therapy-treated patients during the acute to subacute phases, there was no evidence of a difference in clinical outcomes between the treated and control participants. Additional studies with larger cohorts of patients and correlative imaging beyond three months may help determine the therapeutic role of light in traumatic brain injury.
Effects of Low-Level Light Therapy on Resting-State Connectivity Following Moderate Traumatic Brain Injury: Secondary Analyses of a Double-blinded, Placebo-controlled Study, Radiology (2024).
Study demonstrates how cytokines produce long lasting humoral immunity following vaccination
A new study sheds light on how cytokines, in particular interleukin 21 (IL-21), shape long-lasting humoral immunity following vaccination.
Published in Nature Communications, the study elucidates how various immune responses impact the recruitment and maintenance of memory plasma cells in the bone marrow. These cells are crucial for secreting protective antibodies and sustaining humoral immunity throughout a lifetime.
Researchers revealed the heterogeneity of human bone marrow plasma cells (BMPCs) and their origins from various immune reactions.
By analyzing single-cell transcriptomes from individuals vaccinated against SARS-CoV-2 and the triple vaccine against diphtheria, tetanus, and pertussis (DTaP), the team uncovered distinct pathways through which plasma cells are recruited to the bone marrow.
The study categorizes BMPCs into different "clans" based on their transcriptional profiles, presuming that these cells reflect the specific signals they received during their activation in the tissue.
Understanding the mechanisms behind the recruitment and maintenance of plasma cells in the bone marrow is crucial for improving vaccine strategies and developing therapies for immune-related diseases, including chronic inflammatory diseases such as systemic lupus erythematosus.
IL-21 plays a crucial role in the formation of bone marrow plasma cells. Cells formed in IL-21-dependent follicular germinal centers have low CD19 expression, while those from IL-21-independent extrafollicular reactions have high CD19 levels.
Primary immune responses produce both CD19low and CD19high BMPCs, but secondary responses mainly create CD19high cells from reactivated memory B cells in extrafollicular sites. This finding is important for understanding how long-term immunity is maintained and how previous immune responses can impact the effectiveness of future vaccinations.
The rapid extrafollicular immune responses in tissues, like the bone marrow itself, are especially crucial for responding to emerging variants quickly. This research could help optimize vaccination strategies for better long-term protection.
Marta Ferreira-Gomes et al, Recruitment of plasma cells from IL-21-dependent and IL-21-independent immune reactions to the bone marrow, Nature Communications (2024). DOI: 10.1038/s41467-024-48570-0
Green chemistry: Producing gold nano-particles (and hydrogen) in water without the need for toxic chemicals
In a surprise discovery, Flinders University nanotechnology researchers have produced a range of different types of gold nanoparticles by adjusting water flow in the novel vortex fluidic device—without the need for toxic chemicals. The article, "Nanogold Foundry Involving High-Shear-Mediated Photocontact Electrification in Water," has been published in Small Science.
The green chemistry lab work on nano gold formation also led to the discovery of a contact electrification reaction in water in the device—which resulted in the generation of hydrogen and hydrogen peroxide.
In their study, the scientists investigated how the size and form of gold nanoparticles develop under various VFD processing parameters and concentrations of gold chloride solution.
Through this research, they have discovered a new phenomenon in the vortex fluidic device: a photo-contact electrification process at the solid-liquid interface that could be used in other chemical and biological reactions.
They also have achieved synthesis of pure, pristine gold nanoparticles in water in the VFD, without the use of chemicals commonly used—and thus minimizing waste.
This method is significant for the formation of nanomaterials in general because it is a green process, quick, scalable and yields nanoparticles with new properties.
Gold nanoparticles' size and shape are critical for a range of applications—from drug delivery to catalysis, sensing and electronics—due to their physical, chemical and optical properties.
The vortex fluidic device, devised a decade ago, is a rapidly rotating tube open at one end with liquids delivered through jet feeds. Different rotational speeds and external application of light in the device can be used to synthesize particles to specification.
Researchers around the world are now finding the continuous flow, thin film fluidic device useful in exploring and optimizing more sustainable nano-scale processing techniques.
In this latest experiment, the researchers hypothesize that the high shear regimes of the VFD led to the quantum mechanical effect known as contact electrification, which is another exciting development.
Badriah M. Alotaibi et al, Nanogold Foundry Involving High‐Shear‐Mediated Photocontact Electrification in Water, Small Science (2024). DOI: 10.1002/smsc.202300312
Biologists find nanoplastics in developing chicken heart
Nanoplastics can accumulate in developing hearts, according to a study published in Environment International by biologists. Research on chicken embryos sheds new light on how these tiny plastic particles pose a threat to our health.
Disposable cups, plastic bags and packaging material: Plastics exposed to the elements become brittle over time, and start shedding small particles from their surface into nature. These particles can be as tiny as only a few nanometers in size.
You can find these nanoplastics everywhere now: in the sea, in the soil, in the food chain… and in our blood. They have even been found in human placentas.
This made scientists think: What happens when those nanoplastics end up in the blood of the embryo?
During an earlier study, investigators discovered that a high concentration of nanoplastics can cause malformations in the heart, eyes, and nervous systems of chicken embryos. But for a more complete understanding of the toxicity of nanoplastics, they first need more information about how they spread from the blood throughout the rest of the body.
That knowledge will also be informative in nanomedicine, where scientists aim to use nanoplastics (and other nanoparticles) as vehicles for drug-delivery.
Researchers administered polystyrene nanoparticles directly into the bloodstream of chicken embryos. Chicken embryos are a widely used model for research on growth and development. In mammals, it's much more challenging to administer substances or take measurements because their embryos develop inside the mother's womb.
Because nanoparticles are so small, it's impossible to see them using conventional microscopes. Therefore, the researchers tagged the nanoparticles with either fluorescence or europium, a rare metal that is not naturally present in the human body.
They found that the nanoplastics can cross blood vessel walls, and that they accumulated to relatively high levels in the heart, liver and kidneys. Some nanoplastics were excreted by the kidneys.
Interestingly, the researchers also found nanoplastics in the avascular heart cushions: a type of heart tissue without blood vessels. They think the nanoplastics might enter the heart through fenestrations: small openings within the developing heart tissue that play a role in the formation and remodeling of the heart's structure during development. These fenestrations are temporary structures that typically close as the heart matures.
Now we know how these nanoplastics spread, we can start investigating the health risks.
There is already research linking nanoparticles to a higher risk of heart attacks and strokes. Especially during the developmental stage, nanoparticles could potentially be quite dangerous.
We now understand that we shouldn't administer nanomedicines to pregnant women indiscriminately, as there is a risk that nanoparticles could reach and affect the developing organs of their babies.
Meiru Wang et al, The biodistribution of polystyrene nanoparticles administered intravenously in the chicken embryo, Environment International (2024). DOI: 10.1016/j.envint.2024.108723
Study suggests 'biodegradable' teabags don't readily deteriorate in the environment
Some teabags manufactured using plastic alternatives do not degrade in soil and have the potential to harm terrestrial species, a new study has shown.
The research looked at commonly available teabags made using three different compositions of polylactic acid (PLA), which is derived from sources such as corn starch or sugar cane.
The teabags were buried in soil for seven months, and a range of techniques were then used to assess whether—and to what extent—they had deteriorated.
The results showed that teabags made solely from PLA remained completely intact. However, the two types of teabags made from a combination of cellulose and PLA broke down into smaller pieces, losing between 60% and 80% of their overall mass and with the PLA component remaining.
The study also examined the impacts of the disks cut from the teabags on a species of earthworm, Eisenia fetida, which has a critical role in soil nutrient turnover as it consumes organic matter.
Researchers found that being exposed to three different concentrations of teabag disks—equivalent to the mass of half, one and two teabags—resulted in up to 15% greater mortality, while some concentrations of PLA had a detrimental effect on earthworm reproduction.
Writing in the journal Science of the Total Environment, the study's authors highlight the need for accurate disposal information to be clearly displayed on product packaging.
Only one of the manufacturers whose products were chosen for the study indicated on the packaging that the teabags were not home compostable.
This could lead to them ending up in soil, while there is also high potential for consumer confusion about the meaning of terms such as plant-based or biodegradable, emphasizing the need for clear guidance on appropriate disposal.
W. Courtene-Jones et al, Deterioration of bio-based polylactic acid plastic teabags under environmental conditions and their associated effects on earthworms, Science of The Total Environment (2024). DOI: 10.1016/j.scitotenv.2024.172806
Landslides happen when the pull from gravity exceeds the strength of the geomaterial forming the slope of a hill or mountain. Geomaterials can be as varied as rocks, sand, silt and clays.
Then, part of this slope starts sliding downhill. Depending on where the slope fails, the material sliding down can be just a few cubic meters or a few million cubic meters in volume.
Why do slopes fail? Most natural landslides are triggered by earthquakes or rainfall, or a combination of both.
Earthquakes shake the ground, stress it and weaken it over time. Rainwater can seep through the ground and soak it—the ground is often porous like a sponge—and add weight to the slope. This is why PNG is so prone to landslides, as it sits on an active fault and is subjected to heavy rainfalls.
Another adverse effect of water is erosion: the constant action of waves undercuts coastal slopes, causing them to fail. Groundwater can also dissolve rocks within slopes.
Humans can (and do) cause landslides in several ways, too. For example, deforestation has a negative impact on slope stability, as tree rootsnaturally reinforce the ground and drain water out. Also, mine blasts produce small earthquake-like ground vibrations that shake slopes nearby.
It's very difficult to predict and mitigate landslide risk effectively.
So what would it take to warn people of a coming landslide? You would need a prediction for earthquakes and rainfall, in addition to a perfect knowledge of the slope-forming geomaterial.
Under our feet, geomaterials may include multiple, entangled layers of various kinds of rocks and particulate materials, such as sand, silt and clays. Their strength varies from a factor of one to 1,000, and their spatial distribution dictates where the slope is likely to fail.
To accurately assess the stability of the slope, a three-dimensional mapping of these materials and their strengths is needed. No sensor can provide this information, so geologists and geotechnical engineers must deal with partial information obtained at a few selected locations and extrapolate this data to the rest of the slope.
The weakest link of the chain—such as an existing fracture in a rock mass—is easily missed. This is an inevitable source of uncertainty when trying to predict how much material might slip. We do know that the larger the volume of a landslide, the farther its runout distance. But it's hard to gauge the exact size of a landslide, making predictions of runout distances and safe zones uncertain.

The question of when a landslide will occur is also uncertain. Mechanical analysis enables us to estimate the vulnerability of a slope in a particular scenario, including earthquake magnitude and distribution of groundwater. But predicting if and when these triggers will happen is as "easy" as predicting the weather and seismic activity—a difficult task. Unfortunately, all the money in the world can't buy accurate landslide predictions—especially in remote parts of the world.
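The mechanical analysis mentioned above is often a factor-of-safety calculation: the ratio of resisting shear strength to driving shear stress. A minimal "infinite slope" sketch from standard soil mechanics follows; all parameter values are illustrative assumptions, not measurements from any real slope:

```python
import math

def factor_of_safety(c_kpa, gamma, z, beta_deg, phi_deg, pore_pressure_kpa=0.0):
    """Infinite-slope model: Mohr-Coulomb shear strength divided by the
    driving shear stress on a slip plane at depth z (stresses in kPa)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2                 # normal stress
    driving = gamma * z * math.sin(beta) * math.cos(beta)    # shear stress
    resisting = c_kpa + (normal - pore_pressure_kpa) * math.tan(phi)
    return resisting / driving

# Illustrative soil: cohesion 5 kPa, unit weight 18 kN/m^3, slip plane 3 m
# deep, 30 degree slope, friction angle 30 degrees
dry = factor_of_safety(5, 18, 3, 30, 30)
# Saturated case: pore pressure from a water table at the ground surface
u = 9.81 * 3 * math.cos(math.radians(30)) ** 2
wet = factor_of_safety(5, 18, 3, 30, 30, pore_pressure_kpa=u)
print(dry)  # ~1.21: marginally stable
print(wet)  # ~0.67: predicted failure
```

Raising the pore water pressure (as rainfall does) drops the factor of safety below 1: the model's way of expressing why wet slopes fail.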
Landslides occur due to failure of slope stability. The following factors cause slopes to fail.
Clay and shale beds are weak materials and tend to slide, particularly in the presence of water.
Groundwater and precipitation – the groundwater conditions of an area depend on geological and hydrological parameters: storage, recharge, duration and intensity of precipitation, lithological characteristics of the area and gradient of the slope. In general, slope stability is adversely affected by a rise in the groundwater table, which increases pore water pressure. This reduces the shearing resistance of the mineral grains, which can lead to landslides.
In-situ stresses – stress within the slope rock is generated when load is placed at the top of the slope. This may be due to accumulation of rain or snow, or human activities such as stockpiling of ore deposits, waste deposits, mine tailings and buildings, or the removal of natural rock for construction or foundation works.
Textures – anisotropy in fabric is the weakest internal structure of a rock and facilitates movement of the rock mass. Rock with a uniform or medium-grained mineral content bears greater strength than rock of unequal grain sizes.
Weak planes – schistosity, slaty structure and lamination are weak points. It has been observed that massive slides take place in sedimentary rocks interbedded with limestones and shales.
Weathering – weathering weakens the physical and chemical bonding in rocks, leading to failure of the rock mass even in hard rocks.
Structural features
Bedding planes, fractures, joints, faults, fissures and schistosity influence the downward movement of a rock mass. In nature, no rock mass is truly continuous; it will be broken by fractures, joints, bedding planes, dykes and veins. These discontinuities lead to instability of the strata.
Seismic activity – elastic waves released during an earthquake travel through the ground, increasing the shear stress in the slope and decreasing the volume of open spaces within the rock material, thereby increasing the water pressure in void spaces and leading to failure of the rock mass.
The death of Vulcan: Study reveals planet is actually an astronomical illusion caused by stellar activity
A planet thought to orbit the star 40 Eridani A—host to Mr. Spock's fictional home planet, Vulcan, in the "Star Trek" universe—is really a kind of astronomical illusion caused by the pulses and jitters of the star itself, a new study shows.
A science team led by astronomer Abigail Burrows of Dartmouth College, and previously of NASA's Jet Propulsion Laboratory, has published a paper describing the new result, titled "The death of Vulcan: NEID reveals the planet candidate orbiting HD 26965 is stellar activity," in The Astronomical Journal. (Note: HD 26965 is an alternate designation for the star 40 Eridani A.)
The possible detection of a planet orbiting a star that Star Trek made famous drew excitement and plenty of attention when it was announced in 2018. Only five years later, the planet appeared to be on shaky ground when other researchers questioned whether it was there at all.
Now, precision measurements using a NASA-NSF instrument, installed a few years ago atop Kitt Peak in Arizona, seem to have returned the planet Vulcan even more definitively to the realm of science fiction.
Two methods for detecting exoplanets—planets orbiting other stars—dominate all others in the continuing search for strange new worlds. The transit method, watching for the tiny dip in starlight as a planet crosses the face of its star, is responsible for the vast majority of detections. But the "radial velocity" method also has racked up a healthy share of exoplanet discoveries.
This method is especially important for systems with planets that don't, from Earth's point of view, cross the faces of their stars. By tracking subtle shifts in starlight, scientists can measure "wobbles" in the star itself, as the gravity of an orbiting planet tugs it one way, then another. For very large planets, the radial velocity signal mostly leads to unambiguous planet detections. But not-so-large planets can be problematic.
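Both detection methods reduce to simple formulas. A back-of-envelope sketch using standard constants and well-known solar-system values (illustrative only, not measurements from this study):

```python
import math

G = 6.674e-11                    # gravitational constant
M_SUN, R_SUN = 1.989e30, 6.957e8  # kg, m
YEAR = 3.156e7                   # s

def transit_depth(r_planet, r_star=R_SUN):
    """Fractional dip in starlight as the planet crosses the star's face."""
    return (r_planet / r_star) ** 2

def rv_semi_amplitude(m_planet, period_s, m_star=M_SUN):
    """Star's wobble speed (m/s) for a circular, edge-on orbit, m_p << M*."""
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet / m_star ** (2 / 3)

print(transit_depth(6.371e6))                     # Earth-sized: ~84 ppm dip
print(rv_semi_amplitude(1.898e27, 11.86 * YEAR))  # Jupiter: ~12.5 m/s wobble
print(rv_semi_amplitude(5.972e24, YEAR))          # Earth: ~0.09 m/s wobble
```

A Jupiter tugs its star at metres per second, an easy radial velocity detection, while an Earth-mass planet produces centimetres per second, which is why "not-so-large" planets are problematic.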
Even the scientists who made the original, possible detection of planet HD 26965 b—almost immediately compared to the fictional Vulcan—cautioned that it could turn out to be messy stellar jitters masquerading as a planet. They reported evidence of a "super-Earth"—larger than Earth, smaller than Neptune—in a 42-day orbit around a sun-like star about 16 light-years away. The new analysis, using high-precision radial velocity measurements not yet available in 2018, confirms that caution about the possible discovery was justified.
The bad news for "Star Trek" fans comes from an instrument known as NEID, a recent addition to the complex of telescopes at Kitt Peak National Observatory. NEID, like other radial velocity instruments, relies on the Doppler effect: shifts in the light spectrum of a star that reveal its wobbling motions. In this case, parsing out the supposed planet signal at various wavelengths of light, emitted from different levels of the star's outer shell (photosphere), revealed significant differences between individual wavelength measurements—their Doppler shifts—and the total signal when they were all combined.
That means, in all likelihood, that the planet signal is really the flickering of something on the star's surface that coincides with a 42-day rotation—perhaps the roiling of hotter and cooler layers beneath the star's surface, called convection, combined with stellar surface features such as spots and "plages," which are bright, active regions. Both can alter a star's radial velocity signals.
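The wavelength test that NEID enables can be illustrated with a toy model: a genuine planet shifts every part of the spectrum by the same velocity, while activity mimics produce apparent velocities that vary with wavelength. The "chromatic bias" below is a made-up stand-in for an activity signal, purely for illustration:

```python
C = 2.998e8  # m/s, speed of light
wavelengths_nm = [400, 550, 700, 850]

def apparent_velocities(true_v, chromatic_bias=0.0):
    """Velocity recovered in each wavelength channel. `chromatic_bias` is a
    hypothetical activity signal that grows with channel index; a real
    planet's Doppler shift has no such wavelength dependence."""
    out = []
    for i, lam in enumerate(wavelengths_nm):
        shift_nm = lam * (true_v + chromatic_bias * i) / C  # observed shift
        out.append(C * shift_nm / lam)                      # invert per channel
    return out

planet = apparent_velocities(true_v=2.0)                 # consistent channels
activity = apparent_velocities(0.0, chromatic_bias=2.0)  # drifting channels
print(planet)    # all ~2.0 m/s: a planet
print(activity)  # 0, 2, 4, 6 m/s: stellar activity, not a planet
```

Disagreement between per-wavelength velocities and the combined signal is exactly the tell that unmasked HD 26965 b.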
The demonstration of such finely tuned radial velocity measurements holds out the promise of making sharper observational distinctions between actual planets and the shakes and rattles on surfaces of distant stars.
Abigail Burrows et al, The Death of Vulcan: NEID Reveals That the Planet Candidate Orbiting HD 26965 Is Stellar Activity*, The Astronomical Journal (2024). DOI: 10.3847/1538-3881/ad34d5
New molecule found to suppress bacterial antibiotic resistance evolution
Researchers have developed a new small molecule that can suppress the evolution of antibiotic resistance in bacteria and make resistant bacteria more susceptible to antibiotics. The paper, "Development of an inhibitor of the mutagenic SOS response that suppresses the evolution of quinolone antibiotic resistance," has been published in the journal Chemical Science.
The global rise in antibiotic-resistant bacteria is one of the top global public health and development threats, with many common infections becoming increasingly difficult to treat. It is estimated that drug-resistant bacteria are already directly responsible for around 1.27 million global deaths each year and contribute to a further 4.95 million deaths. Without the rapid development of new antibiotics and antimicrobials, this figure is set to rise significantly.
A new study led by researchers at the Ineos Oxford Institute for antimicrobial research (IOI) and the Department of Pharmacology at Oxford University offers hope in the discovery of a small molecule that works alongside antibiotics to suppress the evolution of drug-resistance in bacteria.
One of the ways that bacteria become resistant to antibiotics is due to new mutations in their genetic code. Some antibiotics (such as fluoroquinolones) work by damaging bacterial DNA, causing the cells to die. However, this DNA damage can trigger a process known as the "SOS response" in the affected bacteria.
The SOS response repairs the damaged DNA in bacteria and increases the rate of genetic mutations, which can accelerate the development of resistance to the antibiotics. In the new study, the Oxford scientists identified a molecule capable of suppressing the SOS response, ultimately increasing the effectiveness of antibiotics against these bacteria.
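The logic of suppressing the SOS response can be illustrated with a toy population model (this is not the study's method, and all rates are arbitrary illustrative assumptions): raising the per-division mutation rate makes a resistant mutant appear far sooner.

```python
import random

def generations_until_resistance(mutation_rate, population=1e6):
    """Generations until at least one resistant mutant appears, treating
    each generation as an independent Bernoulli trial."""
    p_any = 1 - (1 - mutation_rate) ** population  # chance per generation
    gen = 1
    while random.random() >= p_any:
        gen += 1
    return gen

random.seed(1)
trials = 200
baseline = sum(generations_until_resistance(1e-9) for _ in range(trials)) / trials
sos_on = sum(generations_until_resistance(1e-7) for _ in range(trials)) / trials
print(baseline)  # on the order of a thousand generations
print(sos_on)    # on the order of ten generations
```

In this caricature, a hundredfold jump in mutation rate (the SOS effect) collapses the waiting time for resistance by roughly the same factor, which is why an SOS inhibitor can slow resistance evolution.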
The researchers studied a series of molecules previously reported to increase the sensitivity of methicillin-resistant Staphylococcus aureus (MRSA) to antibiotics, and to prevent the MRSA SOS response. MRSA is a type of bacteria that usually lives harmlessly on the skin. But if it gets inside the body, it can cause a serious infection that needs immediate treatment with antibiotics. MRSA is resistant to all beta-lactam antibiotics such as penicillins and cephalosporins.
Researchers modified the structure of different parts of the molecule and tested their action against MRSA when given with ciprofloxacin, a fluoroquinolone antibiotic. This identified the most potent SOS inhibitor molecule reported to date, called OXF-077. When combined with a range of antibiotics from different classes, OXF-077 made these more effective in preventing the visible growth of MRSA bacteria.
In a key result, the team then tested the susceptibility of bacteria treated with ciprofloxacin over a series of days to determine how quickly resistance to the antibiotic was developing, either with or without OXF-077. They found that the emergence of resistance to ciprofloxacin was significantly suppressed in bacteria treated with OXF-077, compared to those not treated with OXF-077.
This is the first study to demonstrate that an inhibitor of the SOS response can suppress the evolution of antibiotic resistance in bacteria. Moreover, when resistant bacteria previously exposed to ciprofloxacin were treated with OXF-077, it restored their sensitivity to the antibiotic to the same level as bacteria that had not developed resistance.
Jacob D. Bradbury et al, Development of an inhibitor of the mutagenic SOS response that suppresses the evolution of quinolone antibiotic resistance, Chemical Science (2024). DOI: 10.1039/D4SC00995A
Chocolate's tasty flavors might pose a health risk in other desserts
What makes chocolate taste and smell so delicious? Chemistry, of course. A variety of molecules work together to create that unmistakable aroma, but those same molecules might carry some unwanted health effects if there are too many around. According to research published in Journal of Agricultural and Food Chemistry, while many of the compounds appeared in chocolate in low enough concentrations to be safe, higher amounts were found in some baked sweet treats.
When making chocolate, cocoa beans are roasted to help their chocolatey flavors shine. During this process, new molecules such as α,β-unsaturated carbonyls form as compounds in the beans react with other ingredients at high temperatures. This class of carbonyls is highly reactive and potentially genotoxic, meaning it can damage DNA when consumed.
Though naturally found in many foods, these carbonyls are also used as flavoring additives, and some, including the buttery-tasting furan-2(5H)-one, have been banned in certain countries. To better understand how these molecules form naturally in foods, and whether they are present at levels that could pose a health concern, researchers tested chocolates and other sweet treats for 10 different α,β-unsaturated carbonyls, some of which have been confirmed as safe by the European Food Safety Authority, while others are still under evaluation.
The team created its own chocolates and found that α,β-unsaturated carbonyls formed during roasting and after the addition of cocoa butter; however, their concentrations remained too low to pose any health concerns from consuming the chocolates.
Next, researchers screened 22 commercially available desserts, including crepes, waffles, cakes and biscuits, either with or without chocolate. In these packaged treats, they found even lower concentrations of nine of the 10 carbonyls compared to the chocolates.
The remaining carbonyl—genotoxic furan-2(5H)-one—appeared in much higher concentrations in the crepe and cake samples, reaching up to 4.3 milligrams per kilogram. Considering that the recommended threshold for genotoxic substances is only 0.15 micrograms per person per day, consuming these desserts could exceed that limit, though additional studies are needed to accurately assess the potential health risk.
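A quick back-of-envelope calculation, using only the two figures quoted above, shows how small a serving would reach that threshold:

```python
# Back-of-envelope check using the figures quoted above.
concentration_mg_per_kg = 4.3   # furan-2(5H)-one in some crepe/cake samples
threshold_ug_per_day = 0.15     # recommended daily limit for genotoxic substances

# 1 mg/kg of food is the same as 1 microgram per gram of food.
concentration_ug_per_g = concentration_mg_per_kg

grams_to_threshold = threshold_ug_per_day / concentration_ug_per_g
print(f"about {grams_to_threshold:.3f} g of dessert reaches the daily threshold")
```

In other words, well under a tenth of a gram of the most contaminated samples would already hit the recommended limit, which is why even modest servings could exceed it.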
Researchers concluded that the furan-2(5H)-one molecule likely formed during the baking process and did not seem to correlate with the amount of chocolate present in the packaged desserts. The team says that this work helps to better understand where these carbonyls come from in chocolate and highlights the importance of monitoring flavorings in food to keep consumers informed and safe.
More rogue planets discovered
The Euclid space telescope has discovered seven more rogue planets, shining a light on the dark and lonely worlds floating freely through the universe, untethered to any star.
The Euclid study also offered clues to how rogue planets are created: Some could be formed in the outer part of a solar system before getting detached from their star and floating away.
But the study indicates that many rogue planets may be created as a "natural byproduct" of the star-formation process. This suggests a "really close connection between stars and planets and how they form".
Without being bound to a star, as the Earth is to the sun, there are no days or years on these planets, which languish in perpetual night. Yet scientists think there is a chance they could host life, and estimate there may be trillions of them dotted throughout the Milky Way.
Last week the European Space Agency released the Euclid telescope's first scientific results since the mission launched in July.
Among the discoveries were seven new free-floating planets, gas giants at least four times the mass of Jupiter.
They were spotted in the Orion Nebula, the nearest star-forming region to Earth, roughly 1,500 light years away.
Euclid also confirmed the existence of dozens of other previously detected rogue planets. This is likely to be just the tip of the iceberg.
Because they do not reflect the light of a star, spotting rogue planets is like "finding a needle in a haystack".
Younger planets, such as those discovered by Euclid, are hotter, making them a little easier to see.
Some research has suggested there are around 20 rogue planets for every star, which could put their number in the trillions in our home galaxy alone.
Given there are thought to be hundreds of billions of galaxies across the universe, the potential number of free-floating worlds becomes difficult to fathom.
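The scale of these estimates is easy to reproduce. In the sketch below, the star and galaxy counts are commonly quoted rough figures, not values from the Euclid study:

```python
rogues_per_star = 20          # upper-end estimate cited above
stars_in_milky_way = 2e11     # ~200 billion, a commonly quoted rough figure
galaxies_in_universe = 2e11   # "hundreds of billions"

milky_way_rogues = rogues_per_star * stars_in_milky_way
universe_rogues = milky_way_rogues * galaxies_in_universe

print(f"Milky Way: ~{milky_way_rogues:.0e} rogue planets")   # trillions
print(f"Observable universe: ~{universe_rogues:.0e}")
```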
When NASA's Roman space telescope launches in 2027 it is expected to find many more rogue planets, possibly offering clarity about how many could be out there.
But not all rogue planets wander alone. Four of the more than 20 confirmed by Euclid are believed to be binaries—two planets orbiting each other in a single system.
If rogue planets are habitable, they could be a key target in humanity's search for extraterrestrial life.
Lacking heat from a nearby star, free-floating planets are believed to be cold, with frozen surfaces.
That means any life-supporting energy would have to come from inside the planet.
On Earth, geothermal vents allow animals that have never seen the sun's rays to survive.
But even under the best conditions, this extreme isolation would likely be able to support only bacterial and microbial life.
Advantages of being a rogue planet: Rogue planets could be thought of as traversing a lonely path through the cosmos.
But "being around a star has its downsides".
Once the sun becomes a red giant—in an estimated 7.6 billion years—it will greatly expand, swallowing the Earth.
Rogue planets do not have to worry about eventually being destroyed by a star. "These things will last forever" (provided, that is, they never pass close to another star, planet or other body that could disturb their path or structure).
"If you don't mind the cold temperatures you could survive on these planets for eternity."
The study was published on the arXiv.org e-Print archive on Friday.
Is a train's risk of derailment affected by its length?
Longer freight trains are more likely to derail compared with shorter trains, according to new research published in Risk Analysis. The increased risk held even after accounting for the need for fewer trains if more cars were on each train.
For the study, investigators assessed information on US freight train accidents from 2013 to 2022 using Federal Railroad Administration databases. The team found that running 100-car trains would lead to an 11% higher risk of derailment compared with running 50-car trains, even when accounting for the fact that only half as many 100-car trains would need to run. For 200-car trains, the risk was 24% higher than for 50-car trains.
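Those two figures are mutually consistent with a simple power-law picture of per-train risk. As a sketch (our own illustration, not the model used in the paper): if a single train of n cars derails with probability proportional to n^k, moving a fixed number of cars needs proportionally fewer long trains, so the fleet-level risk scales as n^(k-1).

```python
import math

# Solve (100/50)**(k - 1) = 1.11 for k, then check the 200-car figure.
k = 1 + math.log(1.11) / math.log(2)

predicted_200 = (200 / 50) ** (k - 1)
print(f"k = {k:.2f}; predicted 200-car excess risk = {predicted_200:.2f}")
# The predicted ratio (~1.23) is close to the reported 24% excess risk.
```

That both reported ratios fall out of a single exponent suggests the derailment data behave roughly like a power law in train length, though the paper's own statistical model may differ.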
The Relationship between Freight Train Length and the Risk of Derailment, Risk Analysis (2024). DOI: 10.1111/risa.14312
The eye is one of a few sites in the body with something called immune privilege. That special status makes the eye an ideal environment for researching certain therapies for treating vision loss.
The body's immune system, made up of organs, tissues and cells, works to protect us from infection and disease. When a virus or other foreign substance is detected, the immune system kicks in, making molecules called antibodies to attack the invading substances, known as antigens. This defensive reaction is called an inflammatory response, and it results in tissue swelling and a higher-than-normal temperature.
While an inflammatory response can help fight off infection or disease, it can also cause problems. For example, if someone has a donated organ transplanted in their body, their immune system may recognize the organ as foreign tissue and mount an inflammatory response. This can cause the transplant to fail.
Interestingly, certain areas of the body have something called immune privilege. This means that the body’s normal inflammatory immune response is limited here. Scientists think the purpose of immune privilege is to protect these important areas from damage that may occur with swelling and higher temperatures from the immune response. The eye is one of a few areas of the body with immune privilege. The eye limits its inflammatory immune response so that vision isn’t harmed by swelling and other tissue changes. Other sites with immune privilege include the brain, testes, placenta and fetus.
Because of this immune privilege, the eye offers an excellent location for certain kinds of research and therapy. For example, scientists can implant types of cells called stem cells in the eye to study their role in regrowing or repairing damaged tissue. Cells implanted in the immune-privileged eye are less likely to be rejected than they might be in other parts of the body. Studies of stem cell use in the eye have shown promise in treating vision loss.
Another reason the eye is a good place for researching new therapies? It is relatively easy to reach and to see inside, which makes implanting cells in the eye much easier than in other areas of the body.
The ocular immune system protects the eye from infection and regulates healing following injury. The interior of the eye lacks lymph vessels but is highly vascularized, and many immune cells reside in the uvea, mostly macrophages, dendritic cells and mast cells. These cells fight off intraocular infections, and intraocular inflammation can manifest as uveitis (including iritis) or retinitis.
The cornea is immunologically a very special tissue. Its constant exposure to the exterior world means it is vulnerable to a wide range of microorganisms, and its moist mucosal surface makes it particularly susceptible to attack. At the same time, its lack of vasculature and relative immune separation from the rest of the body make immune defense difficult. Lastly, the cornea is a multifunctional tissue: it provides a large part of the eye's refractive power, so it must maintain remarkable transparency, while also serving as a barrier that keeps pathogens from reaching the rest of the eye, much as the dermis and epidermis protect underlying tissues. Immune reactions within the cornea come from surrounding vascularized tissues as well as from innate immune cells that reside within the cornea.
Astronomers find most distant galaxy using James Webb Space Telescope
An international team of astronomers recently announced the discovery of the two earliest and most distant galaxies ever seen, dating back to only 300 million years after the Big Bang. These results, using NASA's James Webb Space Telescope (JWST), mark a major milestone in the study of the early universe.
The discoveries were made by the JWST Advanced Deep Extragalactic Survey (JADES) team.
Because of the expansion of the universe, the light from distant galaxies stretches to longer wavelengths as it travels. This effect is so extreme for these two galaxies that their ultraviolet light is shifted to infrared wavelengths where only JWST can see it. Because light takes time to travel, more distant galaxies are also seen as they were earlier in time.
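The size of the stretch is easy to compute: observed wavelength = rest wavelength × (1 + z). As an illustration (the Lyman-alpha line is our standard example here, not a measurement quoted in the announcement), at the reported redshift of about 14:

```python
z = 14            # approximate redshift of the new record holders
rest_nm = 121.6   # Lyman-alpha line, deep ultraviolet

observed_nm = rest_nm * (1 + z)
print(f"observed at {observed_nm:.0f} nm = {observed_nm / 1000:.1f} micrometres (infrared)")
```

Light emitted in the far ultraviolet thus arrives at Earth at around 1.8 micrometres, squarely in the infrared range where JWST's instruments operate.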
The two record-breaking galaxies are called JADES-GS-z14-0 and JADES-GS-z14-1, the former being the more distant of the two. In addition to being the new distance record holder, JADES-GS-z14-0 is remarkable for how big and bright it is.
The size of the galaxy clearly proves that most of the light is being produced by large numbers of young stars, rather than by material falling onto a supermassive black hole in the galaxy's center, which would appear much smaller.
The combination of the extreme brightness and the fact that young stars are fueling this high luminosity makes JADES-GS-z14-0 the most striking evidence yet found for the rapid formation of large, massive galaxies in the early universe. It is stunning that the universe can make such a galaxy in only 300 million years.
The galaxy is located in a field where the JWST Mid-Infrared Instrument had conducted an ultra-deep observation. Its brightness at intermediate infrared wavelengths is a sign of emission from hydrogen and even oxygen atoms in the early universe. Despite being so young, the galaxy is already hard at work creating the elements familiar to us on Earth.
This amazing object shows that galaxy formation in the early universe is very rapid and intense.
A shining cosmic dawn: spectroscopic confirmation of two luminous galaxies at z∼14, arXiv:2405.18485 [astro-ph.GA]. arxiv.org/abs/2405.18485
JWST/MIRI photometric detection at 7.7 μm of the stellar continuum and nebular emission in a galaxy at z>14, arXiv:2405.18462 [astro-ph.GA]. arxiv.org/abs/2405.18462
Brant Robertson et al, Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic Star-Formation Rate Density 300 Myr after the Big Bang, arXiv (2023). DOI: 10.48550/arxiv.2312.10033
Dr. Krishna Kumari Challa
Exposure to endocrine-disrupting chemicals in utero associated with higher odds of metabolic syndrome in children
The term 'metabolic syndrome' (MetS) encompasses a group of factors, such as abdominal obesity, hypertension and insulin resistance, that together increase the risk of cardiovascular disease and type 2 diabetes.
A new study suggests that prenatal exposure to a combination of endocrine disrupting chemicals (EDCs) is associated with poorer metabolic health in childhood, which in turn may contribute to an increased risk of metabolic syndrome in adulthood.
EDCs are chemical substances so named because of their ability to interfere with the functioning of our hormonal system, growth, energy balance and metabolism; given their ubiquity in our environment, exposure to them is difficult to escape.
Previous studies have already shown a link between individual exposure to some of these compounds during the prenatal phase and some of the factors that make up the metabolic syndrome, particularly obesity and blood pressure.
The study involved 1,134 mothers and their children from six European countries (Spain, France, Greece, Lithuania, Norway and the United Kingdom), all volunteers from the HELIX (Human Early Life Exposome) cohort. Prenatal exposure to a total of 45 endocrine disruptors was analyzed through blood and urine samples collected from the mothers during pregnancy or from the umbilical cord after birth.
Later, when the children were between 6 and 11 years old, they were followed up, including a clinical examination, interview and collection of biological samples. This yielded data on waist circumference, blood pressure, cholesterol, triglycerides and insulin levels, which were aggregated to obtain a risk index for metabolic syndrome.
Statistical analysis showed that mixtures of metals, perfluoroalkylated and polyfluoroalkylated substances (PFAS), organochlorine pesticides and flame retardants (or PBDEs) were associated with a higher risk of metabolic syndrome. In the case of metals, the association observed was mainly due to the effect of mercury, the main source of which is the intake of large fish.
PFASs are one of the most widely used families of chemical compounds, being used in pesticides, paints, non-stick pans or fast food packaging, among many other common uses. Because of their persistence, they are also known as the "forever chemicals." Also very persistent are organochlorine pesticides, which were already banned in Europe in the 1970s, but to which we are still widely exposed due to their permanence in the environment.
Researchers also observed that associations were stronger in girls for mixtures of PFASs and polychlorinated biphenyls (PCBs), while boys were more susceptible to exposure to parabens. Since endocrine disruptors interfere with sex steroid hormones, these differences fall within what would be expected.
These results suggest that exposure to widespread mixtures of endocrine disruptors during pregnancy may be associated with adverse metabolic health in both boys and girls. This may contribute to the rising prevalence of metabolic syndrome, which currently affects a quarter of the adult population, with upward trends evident even among young people.
Prenatal Exposure to Chemical Mixtures and Metabolic Syndrome Risk in Children, JAMA Network Open (2024). DOI: 10.1001/jamanetworkopen.2024.12040
May 24
Dr. Krishna Kumari Challa
Why the brain can robustly recognize images, even without colour
Even though the human visual system has sophisticated machinery for processing colour, the brain has no problem recognizing objects in black-and-white images. A new study offers a possible explanation for how the brain comes to be so adept at identifying both colour and colour-degraded images.
Using experimental data and computational modeling, the researchers found evidence suggesting the roots of this ability may lie in development. The work has been published in Science.
Early in life, when newborns receive strongly limited colour information, the brain is forced to learn to distinguish objects based on their luminance, or the intensity of the light coming from them, rather than their colour. Later in life, when the retina and cortex are better equipped to process colour, the brain incorporates colour information as well, but also maintains its previously acquired ability to recognize images without relying critically on colour cues.
The findings also help to explain why children who are born blind but have their vision restored later in life, through the removal of congenital cataracts, have much more difficulty identifying objects presented in black and white. Those children, who receive rich colour input as soon as their sight is restored, may develop an overreliance on colour that makes them much less resilient to changes or removal of colour information.
Researchers have observed that limitations in early sensory input can also benefit other aspects of vision, as well as the auditory system.
In 2022, they used computational models to show that early exposure to only low-frequency sounds, similar to those that babies hear in the womb, improves performance on auditory tasks that require analyzing sounds over a longer period of time, such as recognizing emotions.
Marin Vogelsang et al, Impact of early visual experience on later usage of color cues, Science (2024). DOI: 10.1126/science.adk9587. www.science.org/doi/10.1126/science.adk9587
May 24
Dr. Krishna Kumari Challa
Nanoparticle vaccines: A potential leap forward in veterinary medicine
Classical vaccines often rely on traditional technologies, such as live attenuated or inactivated pathogens, which carry inherent risks including reduced immunogenicity under certain conditions and potential safety concerns. This has spurred the need for innovative approaches that can provide safer and more effective prophylactic solutions in veterinary medicine.
Self-assembled protein nanoparticles (SAPNs) emerge as a cutting-edge solution, harnessing the power of nanotechnology to revolutionize vaccine design and implementation.
In an article published on 10 May 2024 in Animal Diseases, researchers at Zhejiang University's Institute of Preventive Veterinary Medicine delve into the development and application of SAPNs and virus-like nanoparticles (VLPs), offering a detailed discussion of their potential in veterinary medicine.
The article focuses on various types of SAPNs, including natural and synthetically designed nanoparticles. These nanoparticles are tailored to enhance the immune system's ability to recognize and respond to pathogens more effectively.
Key highlights include the use of animal virus-derived nanoparticles and bacteriophage-derived nanoparticles, which have shown the potential to elicit strong cellular and humoral responses. The nanoparticles' ability to mimic pathogen structures enables them to trigger a more substantial immune reaction, potentially leading to long-lasting immunity.
Researchers have documented successes in using these nanoparticles to protect against diseases like foot-and-mouth disease and swine fever, showcasing their broad applicability and effectiveness.
Veterinary nanoparticle vaccines have broad implications, with the potential to extend the benefits beyond veterinary applications into human health. The enhanced safety and immunogenicity of these vaccines could lead to the development of advanced vaccines for human use.
Additionally, by reducing the environmental impact of livestock diseases, this technology may contribute to more sustainable agricultural practices globally.
Meiqi Sun et al, Toward innovative veterinary nanoparticle vaccines, Animal Diseases (2024). DOI: 10.1186/s44149-024-00119-w
May 24
Dr. Krishna Kumari Challa
How a world record 'squeeze' could offer comfort for dark matter hunters
Quantum engineers have developed a new amplifier that could help other scientists search for elusive dark matter particles.
Imagine throwing a ball. You'd expect science to be able to work out its exact speed and location at any given moment, right? Well, the theory of quantum mechanics says you can't actually know both with infinite precision at the same time.
It turns out that as you more precisely measure where the ball is, knowing its speed becomes less and less accurate.
This conundrum is commonly referred to as Heisenberg's uncertainty principle, named after the famous physicist Werner Heisenberg who first described it.
For the ball, this effect is imperceptible, but in the quantum world of small electrons and photons the measurement uncertainty suddenly becomes very significant.
That's the problem being addressed by a team of engineers who have developed an amplifying device that performs precise measurements of very weak microwave signals, and it does so through a process known as squeezing.
Squeezing involves reducing the certainty of one property of a signal in order to obtain ultra-precise measurements of another property.
The team of researchers have significantly increased the accuracy of measuring signals at microwave frequencies, like those emitted by your mobile phone, to the point of setting a new world record.
The precision of measuring any signal is fundamentally limited by noise. Noise is the fuzziness that creeps in and masks signals, which is something you may have experienced if you've ever ventured out of range when listening to AM or FM radio.
However, uncertainty in the quantum world means there is a limit as to how low noise can be made in a measurement.
Even in a vacuum, a space void of everything, the uncertainty principle tells us we must still have noise. We call this 'vacuum' noise. For many quantum experiments, vacuum noise is the dominant effect that prevents us from making more precise measurements.
The squeezer produced by the research team can now beat this quantum limit.
The device amplifies noise in one direction so that noise in another direction is significantly reduced, or 'squeezed.' Think of the noise as a tennis ball: if we stretch it vertically, it must shrink horizontally to maintain its volume. Researchers can then use the reduced part of the noise to make more precise measurements.
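The tennis-ball picture can be made concrete with a minimal numerical sketch. Units are chosen so the vacuum noise product is 1/2, and the squeezing parameter below is illustrative, not the team's measured value: stretching one quadrature by e^r shrinks the other by e^-r, leaving the uncertainty product unchanged.

```python
import math

vacuum = math.sqrt(0.5)      # vacuum noise in each quadrature (hbar = 1 units)
r = 1.2                      # squeezing parameter, illustrative only

dx = vacuum * math.exp(-r)   # squeezed quadrature: reduced noise
dp = vacuum * math.exp(+r)   # anti-squeezed quadrature: amplified noise

print(dx * dp)               # the product stays at the vacuum limit of 0.5
```

Measurements made along the squeezed quadrature then see less noise than the vacuum itself, without violating the uncertainty principle.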
They showed that the squeezer is able to reduce noise to record low levels.
Squeezing is very difficult at microwave frequencies because the materials used tend to destroy the fragile squeezed noise quite easily.
What they've done is a lot of engineering in order to remove sources of loss, which means utilizing very high-quality superconducting materials to build the amplifier.
Part 1
May 24
Dr. Krishna Kumari Challa
And the team think that the new device could help speed up the search for notoriously elusive particles known as axions, which are so far only theoretical, but proposed by many as the secret ingredient of mysterious dark matter.
The squeezed noise itself could even be used in future quantum computers.
It turns out that squeezed vacuum noise is an ingredient to build a certain type of quantum computer. Excitingly, the level of squeezing they've achieved is not far off the amount needed to build such a system.
Arjen Vaartjes et al, Strong microwave squeezing above 1 Tesla and 1 Kelvin, Nature Communications (2024). DOI: 10.1038/s41467-024-48519-3
Part 2
May 24
Dr. Krishna Kumari Challa
Atomic-resolution imaging shows why ice is so slippery
A team of physicists has uncovered the reason behind the slipperiness of ice. In their study, published in the journal Nature, the group used atomic force microscopy to get a closer look at the surface of ice at different temperatures.
Prior research and day-to-day experiences have shown that ice is slippery, even when temperatures are well below the freezing point. Research has suggested this is because of a pre-melt coating that develops at the surface, which serves as a lubricant.
In this new study, the research team used an atomic force microscope fitted with a carbon monoxide atom on its tip to get a better look at the structure of normal ice and its pre-melt coating.
The researchers began by chilling ice inside the microscope chamber to -150°C and then used the microscope to look at its atomic structure. They could see that the internal ice (known as ice Ih) and the ice at the surface were different.
The ice Ih, as expected, was arranged in stacked hexagons. The ice on the surface, by contrast, was only partially hexagonal. The researchers also found defects in the ice at the border between the two types of ice that occurred as the different ice shapes met one another.
The researchers then raised the temperature in the chamber slightly, which resulted in more disorder as the differences in shape became more pronounced. The team then created a simulation showing how such disorder would impact the surface as a whole unit—it showed the disorder expanding all the way across the surface, giving the ice a liquid-like appearance that would be slippery if trod upon.
Jiani Hong et al, Imaging surface structure and premelting of ice Ih with atomic resolution, Nature (2024). DOI: 10.1038/s41586-024-07427-8
May 24
Dr. Krishna Kumari Challa
Bizarre bacteria scramble workflow of life
Bacteria have stunned biologists by reversing the usual flow of information. Typically, genes written in DNA serve as the template for making RNA molecules, which are then translated into proteins. Some viruses are known to have an enzyme that reverses this flow by scribing RNA into DNA. Now scientists have found bacteria with a similar enzyme that can even make completely new genes by reading RNA as a template. These genes produce protective proteins when a bacterium is infected by a virus. It should change the way we look at the genome.
https://www.biorxiv.org/content/10.1101/2024.05.08.593200v1
Part 1
May 24
Dr. Krishna Kumari Challa
Strange bacteria defy textbooks by writing new genes
Genetic information usually travels down a one-way street: genes written in DNA serve as the template for making RNA molecules.... That tidy textbook story got a bit complicated in 1970 when scientists discovered that some viruses have enzymes called reverse transcriptases, which scribe RNA into DNA — the reverse of the usual traffic flow.
Now, scientists have discovered an even weirder twist. A bacterial version of reverse transcriptase reads RNA as a template to make completely new genes written in DNA. These genes are then transcribed back into RNA, which is translated into protective proteins when a bacterium is infected by a virus. By contrast, viral reverse transcriptases don’t make new genes; they merely transfer information from RNA to DNA.
This is crazy molecular biology!
https://www.nature.com/articles/d41586-024-01477-8?utm_source=Live+...
Part 2
The discovery that reverse transcriptase — which has previously been known only for copying genetic material — can create completely new genes has left other researchers gobsmacked.
May 24
Dr. Krishna Kumari Challa
Micro-ballistics research has shown metals hardening as they are heated under extreme strain rates.
May 24
Dr. Krishna Kumari Challa
How gut microbes drive tumour growth
Scientists have long known that obese people have poorer cancer survival rates. Now they have some idea why. A high-fat diet increases the number of Desulfovibrio bacteria in the gut of mice. These release leucine, an amino acid, which encourages the proliferation of a kind of cell that suppresses the immune system. With a suppressed immune system, tumour growth can increase. In breast cancer patients, poorer outcomes were seen for women with higher body-mass index, who also had higher levels of Desulfovibrio bacteria in their gut and leucine in their blood. It’s a provocative finding that will open up new avenues that we should be thinking about.
https://www.pnas.org/doi/10.1073/pnas.2306776121?utm_source=Live+Au...
----
Biggest risk factors for disease spread
A meta-analysis of five ways that humanity's environmental footprint spreads disease (biodiversity loss, chemical pollution, climate change, invasive species, and deforestation/urbanization) suggests that conserving biodiversity, controlling invasive species and lowering greenhouse-gas emissions would reduce disease spread the most. This evidence can be used in international policy to spur action on climate change and biodiversity loss, given their negative impacts on disease.
https://www.nature.com/articles/s41586-024-07380-6?utm_source=Live+...
May 24
Dr. Krishna Kumari Challa
Biologists observe recurring evolutionary changes, over time, in stick insects
A long-standing debate among evolutionary scientists goes something like this: Does evolution happen in a predictable pattern or does it depend on chance events and contingency? That is, if you could turn back the clock, as celebrated scientist Stephen Jay Gould (1941–2002) described in his famous metaphor, "Replaying the Tape of Life," would life on Earth evolve, once again, as something similar to what we know now, or would it look very, very different?
Now researchers report evidence of repeatable evolution in populations of stick insects in the paper "Evolution repeats itself in replicate long-term studies in the wild," in Science Advances.
The team examined three decades of data on the frequency of cryptic color-pattern morphs in the stick insect species Timema cristinae in ten naturally replicated populations in California. T. cristinae is polymorphic in regard to its body colour and pattern. Some insects are green, which allows the wingless, plant-feeding insect to blend in with California lilac (Ceanothus spinosus) shrubs. In contrast, green striped morphs disappear against chamise (Adenostoma fasciculatum) shrubs.
Hiding among the plants is one of T. cristinae's key defenses, as hungry birds, such as scrub jays, are insatiable predators of the stick insects.
Bird predation is a constant driver shaping the insects' organismal traits, including coloration and striped vs. non-striped patterning.
Scientists observed predictable 'up-and-down' fluctuations in stripe frequency in all populations, representing repeatable evolutionary dynamics based on standing genetic variation.
A field experiment demonstrates these fluctuations involved negative frequency-dependent natural selection (NFDS), where cryptic colour patterns are more beneficial when rare rather than common. This is likely because birds develop a 'search image' for very abundant prey.
At short time scales, evolution involving existing variations can be quite predictable. You can count on certain drivers always being there, such as birds feeding on the insects.
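A minimal simulation (with made-up selection strength and starting frequency, purely illustrative, not parameters estimated from the study) shows how negative frequency-dependent selection pulls a rare morph back toward an intermediate equilibrium:

```python
# Minimal sketch of negative frequency-dependent selection (NFDS).
# All parameters here are hypothetical illustrations.

def nfds_step(p, s=0.5):
    """One generation: each morph's fitness falls as it becomes common."""
    w_striped = 1.0 + s * (0.5 - p)      # striped morph fitter when rare
    w_unstriped = 1.0 + s * (p - 0.5)    # unstriped morph fitter when striped is common
    mean_w = p * w_striped + (1.0 - p) * w_unstriped
    return p * w_striped / mean_w

def simulate(p0=0.1, s=0.5, generations=100):
    """Track the striped-morph frequency across generations."""
    p = p0
    freqs = [p]
    for _ in range(generations):
        p = nfds_step(p, s)
        freqs.append(p)
    return freqs

freqs = simulate()
# The rare striped morph rises toward the intermediate equilibrium (0.5).
print(f"start={freqs[0]:.2f}, end={freqs[-1]:.3f}")
```

With stronger selection or added environmental noise, the frequency overshoots and fluctuates around the equilibrium, akin to the up-and-down dynamics described above.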
But at longer time scales, evolutionary dynamics become less predictable.
The populations might experience a chance event, such as a severe drought or a flooding event, that disrupts the status quo and thus, the predictable outcomes.
On long time scales, a new mutation in the species could introduce a rare trait. That's about as close to truly random as you can get.
Rare things are easily lost by chance, so there's a strong probability a new mutation could disappear before it gains a stronghold.
Indeed, another species of Timema stick insect that also feeds on chamise either never had or quickly lost the mutations making the cryptic stripe trait. Thus, the evolution of stripe is not a repeatable outcome of evolution at this long scale.
Part 1
May 25
Dr. Krishna Kumari Challa
Replicated, long-term studies from natural populations, including research on the famous Darwin's finches, are rare.
Because most of this work is restricted to one or a few populations, it is difficult to draw inferences on repeatability among multiple evolutionarily independent populations.
Such studies are challenging to implement not only because they take concerted effort, but also because you can't rush time.
Patrik Nosil et al, Evolution repeats itself in replicate long-term studies in the wild, Science Advances (2024). DOI: 10.1126/sciadv.adl3149. www.science.org/doi/10.1126/sciadv.adl3149
Part 2
May 25
Dr. Krishna Kumari Challa
How air pollution affects the digestive system
Fine air particles, less than 2.5 micrometers in diameter (PM2.5), are a major air pollutant linked to various health problems. These particles can travel deep into the lungs and even enter the bloodstream when inhaled. Recent research suggests a major health concern: PM2.5 exposure can also damage the digestive system, including the liver, pancreas, and intestines.
The work is published in the journal eGastroenterology. The research focuses on how PM2.5 exposure triggers stress responses within the digestive system's cells. These stress responses involve specialized subcellular structures within cells called organelles, such as the endoplasmic reticulum (ER), mitochondria, and lysosomes. When PM2.5 disrupts these organelles, it creates a chain reaction within the cells that can lead to inflammation and other harmful effects.
The liver, a major organ for detoxification and metabolism, is particularly susceptible to PM2.5 damage. Studies have shown that PM2.5 exposure can lead to a cascade of problems within the liver, including inflammation, stress responses, organelle damage and disrupted energy metabolism. These effects can contribute to the development of non-alcoholic fatty liver disease (NAFLD) and type 2 diabetes.
PM2.5 exposure does not stop at the liver. It can also harm the pancreas and intestines. Studies have linked PM2.5 to an increased risk of pancreatic impairment in people with diabetes, as well as damage to intestinal cells and an increase in their permeability. This increased permeability can lead to a variety of digestive issues.
Researchers are exploring whether dietary or pharmaceutical interventions can mitigate PM2.5 damage. Interestingly, some studies suggest that certain nutrients, like monounsaturated fatty acids and vitamins, may offer some protection against the harmful effects of PM2.5.
Part 1
May 25
Dr. Krishna Kumari Challa
Many plants and plant oils are high in monounsaturated fats but low in saturated fats. These include oils from olives, peanuts, canola seeds, safflower seeds and sunflower seeds, as well as avocados, pumpkin seeds, sesame seeds, almonds, cashews, peanuts and peanut butter, and pecans.
Air pollution is a complex issue with no easy solutions. While research into mitigating PM2.5 exposure continues, the current understanding of its impact on the digestive system highlights the far-reaching consequences of air pollution on human health. It underscores the need for continued efforts to reduce air pollution levels and develop strategies to protect ourselves from its detrimental effects.
Kezhong Zhang, Environmental PM2.5-triggered stress responses in digestive diseases, eGastroenterology (2024). DOI: 10.1136/egastro-2024-100063
Part 2
May 25
Dr. Krishna Kumari Challa
Big brands are 'failing to curb plastic sachet use'
Small plastic sachets commonly used in low- and middle-income countries must be phased out and packaging reuse systems promoted, urge campaigners and waste pickers, as new analysis reveals major corporations have failed to curb their use.
Pocket-sized individual portions of goods ranging from shampoo to instant coffee have become popular in lower-income communities for their affordability.
An estimated 855 billion sachets are sold globally each year, with Southeast Asia consuming nearly half of the total, and the figure is projected to rise to 1.3 trillion by 2027, according to environmental groups.
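As a rough sanity check (assuming the 855-billion figure applies to 2024 and that growth compounds annually, neither of which the report states), the projection implies roughly 15% growth per year:

```python
# Implied compound annual growth rate if sachet sales rise from
# 855 billion to 1.3 trillion over three years. The three-year
# window is an assumption (the base year is not stated).

def implied_annual_growth(start, end, years):
    """Compound annual growth rate taking sales from start to end."""
    return (end / start) ** (1.0 / years) - 1.0

growth = implied_annual_growth(855e9, 1.3e12, 3)
print(f"~{growth * 100:.0f}% per year")
```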
But the convenience of sachets comes with a heavy environmental cost as they end up as significant contributors to plastic pollution. Their commonly multi-layered design, using different materials, makes them hard to recycle.
Consumer goods giants Unilever, Nestlé and Procter & Gamble are among the biggest contributors to plastic sachet pollution in developing countries in Asia, despite promises to reduce plastic packaging, according to a multi-country environmental audit report.
It said some corporations were trying to deal with the waste problem by burning sachets as fuel, creating further pollution.
Environmental groups in Asia have long demanded that firms phase out their sachet packaging, as the resulting waste is deluging the region's landfills and waters.
Consumers, can you please help by avoiding these small plastic sachet based products?
Source: SciDev.Net
May 25
Dr. Krishna Kumari Challa
Why some huge stars have disappeared from the sky
When massive stars die, as we understand the Universe, they don't go quietly. As their fuel runs out, they become unstable, wracked by explosions before finally ending their lives in a spectacular supernova.
But some massive stars, scientists have found, have simply vanished, leaving no trace in the night sky. Stars clearly seen in older surveys are inexplicably absent from newer ones. A star isn't exactly a set of keys – you can't just lose it down the back of the couch. So where the heck do these stars go?
A new study has given us the most compelling explanation yet. Some massive stars, suggest an international team led by astrophysicist Alejandro Vigna-Gómez of the Niels Bohr Institute in Denmark and the Max Planck Institute for Astrophysics in Germany, can die, not with a bang, after all, but a whimper.
Their evidence? A binary system named VFTS 243 in the Large Magellanic Cloud, consisting of a black hole and a companion star. This system shows no signs of a supernova explosion that, according to the models, ought to have accompanied the formation of the black hole.
Were one to stand gazing up at a visible star going through a total collapse, it might, just at the right time, be like watching a star suddenly extinguish and disappear from the heavens.
The collapse is so complete that no explosion occurs, nothing escapes and one wouldn't see any bright supernova in the night sky. Astronomers have actually observed the sudden disappearance of brightly shining stars in recent times. Scientists cannot be sure of a connection, but the results they have obtained from analyzing VFTS 243 have brought them much closer to a credible explanation.
Part 1
May 25
Dr. Krishna Kumari Challa
When a star more massive than about 8 times the mass of the Sun goes supernova, it's extremely messy. The outer layers – most of the star's mass – are explosively ejected into the space around the star, where they form a huge, expanding cloud of dust and gas that lingers for hundreds of thousands to millions of years.
Meanwhile, the star's core, no longer supported by the outward pressure of fusion, collapses under gravity to form an ultradense object, a neutron star or a black hole, depending on the initial star's mass.
These collapsed cores don't always stay put; if the supernova explosion is lopsided, this can punt the core off into space in a natal kick. We can also sometimes trace the core's trajectory back to the cloud of material it ejected as it died, but if enough time has elapsed, the material may have dissipated. But the signs of the natal kick can remain a lot longer.
VFTS 243 is a very interesting system. It consists of a massive star that's around 7.4 million years old and around 25 times the mass of the Sun, and a black hole around 10 times the mass of the Sun.
Although we can't see the black hole directly, we can measure it based on the orbital motion of its companion star – and, of course, we can infer other things about the system. One interesting thing is the shape of the orbit. It's almost circular. This, together with the motion of the system in space, suggests that the black hole did not receive a huge kick from a supernova. The researchers who discovered the black hole back in 2022 suspected as much; now, the work of Vigna-Gómez and his colleagues has confirmed it. There has been a growing body of evidence that suggests that sometimes, massive stars can collapse directly into black holes, without passing supernova or collecting 200 space dollars. VFTS 243 represents the best evidence we have for this scenario to date.
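As an illustrative sketch of the principle (hypothetical numbers and a simplified circular two-body orbit, not the actual VFTS 243 fit), Kepler's third law links a binary's period and separation to its total mass:

```python
import math

# Kepler's third law: M_total = 4*pi^2 * a^3 / (G * P^2).
# The numbers below are made up for the sketch, not measured values.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def total_mass_solar(period_days, semi_major_axis_au):
    """Total binary mass (in solar masses) from period and separation."""
    P = period_days * 86400.0
    a = semi_major_axis_au * AU
    return 4.0 * math.pi ** 2 * a ** 3 / (G * P ** 2) / M_SUN

# e.g. a tight massive binary with a 10-day period and 0.3 au separation:
print(f"{total_mass_solar(10.0, 0.3):.1f} solar masses")
```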
"Our results highlight VFTS 243 as the best observable case so far for the theory of stellar black holes formed through total collapse, where the supernova explosion fails and which our models have shown to be possible," says astrophysicist Irene Tamborra of the Niels Bohr Institute. "It is an important reality check for these models. And we certainly expect that the system will serve as a crucial benchmark for future research into stellar evolution and collapse."
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.132.191403
Part 2
May 25
Dr. Krishna Kumari Challa
Anendophasia: not having any inner speech
In recent years, scientists have found that not everyone has the sense of an inner voice – and a new study sheds some light on how living without an internal monologue affects how language is processed in the brain.
This is similar to anauralia, a term researchers coined in 2021 for people who have no inner voice and cannot imagine sounds, such as a musical tune or a siren.
Focusing on inner voices in a study, a research team recruited 93 volunteers, half of whom said they had low levels of inner speech, while the other half reported having a very chatty internal monologue. These participants attempted a series of tasks – including one where they had to remember the order of words in a sequence, and another where rhyming words had to be paired together.
These tasks are difficult for everyone, but the researchers hypothesized that they would be even harder for people without an inner voice, since remembering the words typically involves repeating them to yourself inside your head.
And this hypothesis turned out to be true.
The volunteers who reported hearing inner voices during everyday life did significantly better at the tasks than those without inner monologues: Inner speakers recalled more words correctly, and matched rhyming words faster. The researchers think this could be evidence that inner voices help people process words.
It's interesting to note that the performance differences disappeared when the volunteers spoke out loud to try and solve the problems they were given. It may be that using an audible voice is just as effective as using an inner voice in these situations.
In two other tasks, covering multitasking and distinguishing between different picture shapes, there was no difference in performance. The researchers take this as a sign that the way inner speech affects behavior depends on what we're doing.
Maybe people who don't have an inner voice have just learned to use other strategies. For example, some said that they tapped with their index finger when performing one type of task and with their middle finger when it was another type of task.
The researchers are keen to emphasize that the differences they found would not cause delays that you would notice in regular conversation. Scientists are still at the very early stages of figuring out how anendophasia might affect someone – and likewise anauralia.
https://journals.sagepub.com/doi/10.1177/09567976241243004
May 27
Dr. Krishna Kumari Challa
Dyson spheres: Astronomers report potential candidates for alien structures, and evidence against their existence
There are three ways to look for evidence of alien technological civilizations. One is to look out for deliberate attempts by them to communicate their existence, for example, through radio broadcasts. Another is to look for evidence of them visiting the solar system. And a third option is to look for signs of large-scale engineering projects in space.
A team of astronomers have taken the third approach by searching through recent astronomical survey data to identify seven candidates for alien megastructures, known as Dyson spheres, "deserving of further analysis." Their research is published in the journal Monthly Notices of the Royal Astronomical Society.
This is a detailed study looking for "oddballs" among stars—objects that might be alien megastructures. However, the authors are careful not to make any overblown claims. The seven objects, all located within 1,000 light-years of Earth, are "M-dwarfs"—a class of stars that are smaller and less bright than the sun.
Dyson spheres were first proposed by the physicist Freeman Dyson in 1960 as a way for an advanced civilization to harness a star's power. Consisting of floating power collectors, factories and habitats, they'd take up more and more space until they eventually surrounded almost the entire star like a sphere.
What Dyson realized is that these megastructures would have an observable signature. Dyson's signature (which the team searched for in the recent study) is a significant excess of infrared radiation. That's because megastructures would absorb visible light given off by the star, but they wouldn't be able to harness it all. Instead, they'd have to "dump" excess energy as infrared light with a much longer wavelength.
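A quick way to see why the waste heat shows up in the infrared is Wien's displacement law: a Sun-like photosphere peaks in visible light, while a warm megastructure re-radiates around 10 micrometres. The two temperatures below are illustrative assumptions, not values from the study:

```python
# Wien's displacement law: the peak emission wavelength of a blackbody
# scales inversely with its temperature.

WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k):
    """Blackbody peak emission wavelength in micrometres."""
    return WIEN_B / temperature_k * 1e6

star_peak = peak_wavelength_um(5800.0)   # Sun-like photosphere: visible light
shell_peak = peak_wavelength_um(300.0)   # room-temperature structure: mid-infrared
print(f"star peak: {star_peak:.2f} um, shell peak: {shell_peak:.1f} um")
```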
Part 1
May 28
Dr. Krishna Kumari Challa
Unfortunately, such light can also be a signature of a lot of other things, such as a disk of gas and dust, or disks of comets and other debris. But the seven promising candidates aren't obviously due to a disk, as they weren't good fits to disk models.
It is worth noting there is another signature of a Dyson sphere: that visible light from the star dips as the megastructure passes in front of it. Such a signature has been found before. There was a lot of excitement about Tabby's star, or KIC 8462852, which showed many really unusual dips in its light that could be due to an alien megastructure.
It almost certainly isn't an alien megastructure. A variety of natural explanations have been proposed, such as clouds of comets passing through a dust cloud. But it is an odd observation. An obvious follow up on the seven candidates would be to look for this signature as well.
The case against Dyson spheres
Dyson spheres may well not even exist, however. I think they are unlikely to be there. That's not to say they couldn't exist, rather that any civilization capable of building them would probably not need to (unless it was some mega art project).
Dyson's reasoning for considering such megastructures assumed that advanced civilizations would have vast power requirements. Around the same time, astronomer Nikolai Kardashev proposed a scale on which to rate the advancement of civilizations, which was based almost entirely on their power consumption.
In the 1960s, this sort of made sense. Looking back over history, humanity had just kept exponentially increasing its power use as technology advanced and the number of people increased, so they just extrapolated this ever-expanding need into the future.
Part 2
May 28
Dr. Krishna Kumari Challa
However, our global energy use has started to grow much more slowly over the past 50 years, and especially over the last decade. What's more, Dyson and Kardashev never specified what these vast levels of power would be used for, they just (fairly reasonably) assumed they'd be needed to do whatever it is that advanced alien civilizations do.
But as we now look ahead to future technologies, we see that efficiency, miniaturization and nanotechnology promise vastly lower power use (the performance per watt of pretty much all technologies is constantly improving).
A quick calculation reveals that, if we wanted to collect 10% of the sun's energy at the distance the Earth is from the sun, we'd need a surface area equal to 1 billion Earths. And if we had a super-advanced technology that could make the megastructure only 10km thick, that'd mean we'd need about a million Earths worth of material to build them from.
Part3
May 28
Dr. Krishna Kumari Challa
A significant problem is that our solar system only contains about 100 Earths worth of solid material, so our advanced alien civilization would need to dismantle all the planets in 10,000 planetary systems and transport it to the star to build their Dyson sphere. To do it with the material available in a single system, each part of the megastructure could only be one meter thick.
This is assuming they use all the elements available in a planetary system. If they needed, say, lots of carbon to make their structures, then we're looking at dismantling millions of planetary systems to get hold of it. Now, I'm not saying a super-advanced alien civilization couldn't do this, but it is one hell of a job.
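A back-of-the-envelope recomputation of the figures above (assuming Earth-density material and taking "an Earth's worth" to mean Earth's surface area or volume; treat both the quoted numbers and these as order-of-magnitude only):

```python
import math

# Rough check: collector area and material volume for a shell gathering
# 10% of the Sun's output at 1 au, compared against Earth's surface area
# and volume. Order-of-magnitude estimates only.

AU = 1.496e11          # metres
R_EARTH = 6.371e6      # metres

collector_area = 0.10 * 4 * math.pi * AU ** 2     # 10% of a 1-au sphere
earth_surface = 4 * math.pi * R_EARTH ** 2
earth_volume = (4 / 3) * math.pi * R_EARTH ** 3

area_in_earths = collector_area / earth_surface    # tens of millions of Earth-surfaces
volume_10km = collector_area * 1e4                 # a 10 km thick shell
material_in_earths = volume_10km / earth_volume    # hundreds of thousands of Earth-volumes

# Spreading 100 Earths of solid material over the same area:
thickness_m = 100 * earth_volume / collector_area  # a few metres

print(f"{area_in_earths:.1e} Earth-areas, {material_in_earths:.1e} Earth-volumes, "
      f"{thickness_m:.1f} m thick")
```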
I'd also strongly suspect that by the time a civilization got to the point of having the ability to build a Dyson sphere, they'd have a better way of getting the power than using a star, if they really needed it (I have no idea how, but they are a super-advanced civilization).
Maybe I'm wrong, but it can't hurt to look.
Matías Suazo et al, Project Hephaistos – II. Dyson sphere candidates from Gaia DR3, 2MASS, and WISE, Monthly Notices of the Royal Astronomical Society (2024). DOI: 10.1093/mnras/stae1186
https://phys.org/news/2024-05-dyson-spheres-astronomers-potential-c...
Part 4
By Simon Goodwin
May 28
Dr. Krishna Kumari Challa
Scientists report unified framework for diverse aurorae across planets
The awe-inspiring aurorae seen on Earth, known as the Northern and Southern Lights, have been a source of fascination for centuries. Between May 10 and 12, 2024, the most powerful aurora event in 21 years reminded us of the stunning beauty of these celestial light shows.
Recently, space physicists have published a paper in Nature Astronomy that explores the fundamental laws governing the diverse aurorae observed across planets, such as Earth, Jupiter and Saturn.
This work provides new insights into the interactions between planetary magnetic fields and solar wind, updating the textbook picture of giant planetary magnetospheres. Their findings can improve space weather forecasting, guide future planetary exploration, and inspire further comparative studies of magnetospheric environments.
Earth, Saturn and Jupiter all generate their own dipole-like magnetic fields, resulting in a funnel-canopy-shaped magnetic geometry that leads energetic electrons from space to precipitate into the polar regions and cause auroral emissions.
Yet the three planets differ in many respects, including their magnetic field strength, rotation speed, solar wind conditions and moon activity. It has been unclear how these different conditions relate to the different auroral structures that have been observed on those planets for decades.
Using three-dimensional magnetohydrodynamics calculations, which model the coupled dynamics of electrically conducting fluids and electromagnetic fields, the research team assessed the relative importance of these conditions in controlling the main auroral morphology of a planet.
Combining solar wind conditions and planetary rotation, they defined a new parameter that controls the main auroral structure, which, for the first time, nicely explains the different auroral structures observed at Earth, Saturn and Jupiter.
May 28
Dr. Krishna Kumari Challa
The aurorae at Earth and Jupiter are different. Yet, it is a big surprise that they can be explained by a unified framework.
By advancing our fundamental understanding of how planetary magnetic fields interact with the solar wind to drive auroral displays, this research has important practical applications for monitoring, predicting, and exploring the magnetic environments of the solar system.
This study also represents a significant milestone in understanding auroral patterns across planets, deepening our knowledge of diverse planetary space environments and paving the way for future research into the mesmerizing celestial light shows that continue to capture our imagination.
B. Zhang et al, A unified framework for global auroral morphologies of different planets, Nature Astronomy (2024). DOI: 10.1038/s41550-024-02270-3
Part 2
May 28
Dr. Krishna Kumari Challa
Debunking Fake Banana Hack Viral Videos
Debunking YouTube video myths, disinformation and misinformation using bananas
May 28
Dr. Krishna Kumari Challa
Light therapy increases brain connectivity following injury, study finds
Low-level light therapy appears to affect healing in the brains of people who suffered significant brain injuries, according to a study published in Radiology.
Lights of different wavelengths have been studied for years for their wound-healing properties. Researchers at Massachusetts General Hospital (MGH) conducted low-level light therapy on 38 patients who had suffered moderate traumatic brain injury, an injury to the head serious enough to alter cognition and/or be visible on a brain scan. Patients received light therapy within 72 hours of their injuries through a helmet that emits near-infrared light.
The skull is quite transparent to near-infrared light. Once you put the helmet on, your whole brain is bathing in this light.
The researchers used an imaging technique called functional MRI to gauge the effects of the light therapy. They focused on the brain's resting-state functional connectivity, the communication between brain regions that occurs when a person is at rest and not engaged in a specific task. The researchers compared MRI results during three recovery phases: the acute phase of within one week after injury, the subacute phase of two to three weeks post-injury and the late-subacute phase of three months after injury.
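Resting-state functional connectivity is commonly quantified as the correlation between two regions' fMRI time series; the sketch below illustrates that idea with synthetic signals (it is not the study's actual analysis pipeline):

```python
import math

# Illustrative sketch: connectivity between two "regions" measured as the
# Pearson correlation of their time series. Signals here are synthetic.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two synthetic region signals: region_b partly follows region_a.
t = [i * 0.1 for i in range(200)]
region_a = [math.sin(v) for v in t]
region_b = [0.8 * math.sin(v) + 0.2 * math.cos(3 * v) for v in t]

print(f"connectivity r = {pearson(region_a, region_b):.2f}")
```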
Of the 38 patients in the trial, 21 did not receive light therapy while wearing the helmet. This was done to serve as a control to minimize bias due to patient characteristics and to avoid potential placebo effects.
Patients who received low-level light therapy showed a greater change in resting-state connectivity in seven brain region pairs during the acute-to-subacute recovery phase compared to the control participants.
There was increased connectivity in those receiving light treatment, primarily within the first two weeks. Researchers were unable to detect differences in connectivity between the two treatment groups long term, so although the treatment appears to increase the brain connectivity initially, its long-term effects are still to be determined.
The precise mechanism of the light therapy's effects on the brain is also still to be determined. Previous research points to the alteration of an enzyme in the cell's mitochondria (often referred to as the "powerhouse" of a cell).
This leads to more production of adenosine triphosphate, a molecule that stores and transfers energy in the cells. Light therapy has also been linked with blood vessel dilation and anti-inflammatory effects.
"There is still a lot of work to be done to understand the exact physiological mechanism behind these effects, though.
While connectivity increased for the light therapy-treated patients during the acute to subacute phases, there was no evidence of a difference in clinical outcomes between the treated and control participants. Additional studies with larger cohorts of patients and correlative imaging beyond three months may help determine the therapeutic role of light in traumatic brain injury.
Effects of Low-Level Light Therapy on Resting-State Connectivity Following Moderate Traumatic Brain Injury: Secondary Analyses of a Double-blinded, Placebo-controlled Study, Radiology (2024).
May 29
Dr. Krishna Kumari Challa
Study demonstrates how cytokines produce long-lasting humoral immunity following vaccination
A new study has shed light on how cytokines, in particular interleukin-21 (IL-21), shape long-lasting humoral immunity following vaccination.
Published in Nature Communications, the study elucidates how various immune responses impact the recruitment and maintenance of memory plasma cells in the bone marrow. These cells are crucial for secreting protective antibodies and sustaining humoral immunity throughout a lifetime.
Researchers revealed the heterogeneity of human bone marrow plasma cells (BMPCs) and their origins from various immune reactions.
By analyzing single-cell transcriptomes from individuals vaccinated against SARS-CoV-2 and the triple vaccine against diphtheria, tetanus, and pertussis (DTaP), the team uncovered distinct pathways through which plasma cells are recruited to the bone marrow.
The study categorizes BMPCs into different "clans" based on their transcriptional profiles, presuming that these cells reflect the specific signals they received during their activation in the tissue.
Understanding the mechanisms behind the recruitment and maintenance of plasma cells in the bone marrow is crucial for improving vaccine strategies and developing therapies for immune-related diseases, including chronic inflammatory diseases such as systemic lupus erythematosus.
Part 1
May 29
Dr. Krishna Kumari Challa
IL-21 plays a crucial role in the formation of bone marrow plasma cells. Cells formed in IL-21-dependent follicular germinal centers have low CD19 expression, while those from IL-21-independent extrafollicular reactions have high CD19 levels.
Primary immune responses produce both CD19low and CD19high BMPCs, but secondary responses mainly create CD19high cells from reactivated memory B cells in extrafollicular sites. This finding is important for understanding how long-term immunity is maintained and how previous immune responses can impact the effectiveness of future vaccinations.
The rapid extrafollicular immune responses in tissues, like the bone marrow itself, are especially crucial for responding to emerging variants quickly. This research could help optimize vaccination strategies for better long-term protection.
Marta Ferreira-Gomes et al, Recruitment of plasma cells from IL-21-dependent and IL-21-independent immune reactions to the bone marrow, Nature Communications (2024). DOI: 10.1038/s41467-024-48570-0
Part 2
May 29
Dr. Krishna Kumari Challa
Green chemistry: Producing gold nano-particles (and hydrogen) in water without the need for toxic chemicals
In a surprise discovery, Flinders University nanotechnology researchers have produced a range of different types of gold nanoparticles by adjusting water flow in the novel vortex fluidic device—without the need for toxic chemicals. The article, "Nanogold Foundry Involving High-Shear-Mediated Photocontact Electri...," has been published in Small Science.
The green chemistry lab work on nano gold formation also led to the discovery of a contact electrification reaction in water in the device—which resulted in the generation of hydrogen and hydrogen peroxide.
In their study, the scientists investigated how the size and form of the gold nanoparticles depend on various VFD processing parameters and concentrations of gold chloride solution.
Through this research, they discovered a new phenomenon in the vortex fluidic device: a photo-contact electrification process at the solid-liquid interface, which could be used in other chemical and biological reactions.
They also achieved the synthesis of pure, pristine gold nanoparticles in water in the VFD, without the use of commonly used chemicals, thus minimizing waste.
This method is significant for the formation of nanomaterials in general because it is a green process, quick, scalable and yields nanoparticles with new properties.
Gold nanoparticles' size and shape are critical for a range of applications—from drug delivery to catalysis, sensing and electronics—due to their physical, chemical and optical properties.
The vortex fluidic device, devised a decade ago, is a rapidly rotating tube open at one end, with liquids delivered through jet feeds. Different rotational speeds and external application of light in the device can be used to synthesize particles to specification.
Researchers around the world are now finding the continuous flow, thin film fluidic device useful in exploring and optimizing more sustainable nano-scale processing techniques.
In this latest experiment, the researchers hypothesize that the high shear regimes of the VFD led to the quantum mechanical effect known as contact electrification, which is another exciting development.
Badriah M. Alotaibi et al, Nanogold Foundry Involving High‐Shear‐Mediated Photocontact Electrification in Water, Small Science (2024). DOI: 10.1002/smsc.202300312
May 29
Dr. Krishna Kumari Challa
Biologists find nanoplastics in developing chicken heart
Nanoplastics can accumulate in developing hearts, according to a study published in Environment International by biologists. Research on chicken embryos sheds new light on how these tiny plastic particles pose a threat to our health.
Disposable cups, plastic bags and packaging material: Plastics exposed to the elements become brittle over time, and start shedding small particles from their surface into nature. These particles can be as tiny as only a few nanometers in size.
You can find these nanoplastics everywhere now: in the sea, in the soil, in the food chain… and in our blood. They have even been found in human placentas.
This made scientists think: What happens when those nanoplastics end up in the blood of the embryo?
During an earlier study, investigators discovered that a high concentration of nanoplastics can cause malformations in the heart, eyes, and nervous systems of chicken embryos. But for a more complete understanding of the toxicity of nanoplastics, they first need more information about how they spread from the blood throughout the rest of the body.
That knowledge will also be informative in nanomedicine, where scientists aim to use nanoplastics (and other nanoparticles) as vehicles for drug-delivery.
Researchers administered polystyrene nanoparticles directly into the bloodstream of chicken embryos. Chicken embryos are a widely used model for research on growth and development. In mammals, it's much more challenging to administer substances or take measurements because their embryos develop inside the mother's womb.
Part 1
May 29
Dr. Krishna Kumari Challa
Because nanoparticles are so small, it's impossible to see them using conventional microscopes. Therefore, the researchers tagged the nanoparticles with either fluorescence or europium, a rare metal that is not naturally present in the human body.
They found that the nanoplastics can cross blood vessel walls, and that they accumulated to relatively high levels in the heart, liver and kidneys. Some nanoplastics were excreted by the kidneys.
Interestingly, the researchers also found nanoplastics in the avascular heart cushions: a type of heart tissue without blood vessels. They think the nanoplastics might enter the heart through fenestrae, small openings within the developing heart tissue that play a role in the formation and remodeling of the heart's structure. These fenestrations are temporary and typically close as the heart matures.
Now that we know how these nanoplastics spread, we can start investigating the health risks.
There is already research linking nanoparticles to a higher risk of heart attacks and strokes. Especially during the developmental stage, nanoparticles could potentially be quite dangerous.
We now understand that we shouldn't administer nanomedicines to pregnant women indiscriminately, as there is a risk that nanoparticles could reach and affect the developing organs of their babies.
Meiru Wang et al, The biodistribution of polystyrene nanoparticles administered intravenously in the chicken embryo, Environment International (2024). DOI: 10.1016/j.envint.2024.108723
Part 2
May 29
Dr. Krishna Kumari Challa
Study suggests 'biodegradable' teabags don't readily deteriorate in the environment
Some teabags manufactured using plastic alternatives do not degrade in soil and have the potential to harm terrestrial species, a new study has shown.
The research looked at commonly available teabags made using three different compositions of polylactic acid (PLA), which is derived from sources such as corn starch or sugar cane.
The teabags were buried in soil for seven months, and a range of techniques were then used to assess whether—and to what extent—they had deteriorated.
The results showed that teabags made solely from PLA remained completely intact. However, the two types of teabags made from a combination of cellulose and PLA broke down into smaller pieces, losing between 60% and 80% of their overall mass and with the PLA component remaining.
The study also examined the impacts of the disks cut from the teabags on a species of earthworm, Eisenia fetida, which has a critical role in soil nutrient turnover as it consumes organic matter.
Researchers found that being exposed to three different concentrations of teabag disks—equivalent to the mass of half, one and two teabags—resulted in up to 15% greater mortality, while some concentrations of PLA had a detrimental effect on earthworm reproduction.
Writing in the journal Science of the Total Environment, the study's authors highlight the need for accurate disposal information to be clearly displayed on product packaging.
Only one of the manufacturers whose products were chosen for the study indicated on the packaging that the teabags were not home compostable.
This could lead to them ending up in soil, while there is also high potential for consumer confusion about the meaning of terms such as plant-based or biodegradable, emphasizing the need for clear guidance on appropriate disposal.
W. Courtene-Jones et al, Deterioration of bio-based polylactic acid plastic teabags under environmental conditions and their associated effects on earthworms, Science of The Total Environment (2024). DOI: 10.1016/j.scitotenv.2024.172806
May 29
Dr. Krishna Kumari Challa
What causes landslides?
Landslides happen when the pull from gravity exceeds the strength of the geomaterial forming the slope of a hill or mountain. Geomaterials can be as varied as rocks, sand, silt and clays.
Then, part of this slope starts sliding downhill. Depending on where the slope fails, the material sliding down can be just a few cubic meters or a few million cubic meters in volume.
Why do slopes fail? Most natural landslides are triggered by earthquakes or rainfall, or a combination of both.
Earthquakes shake the ground, stress it and weaken it over time. Rainwater can seep through the ground and soak it—the ground is often porous like a sponge—and add weight to the slope. This is why Papua New Guinea (PNG) is so prone to landslides: it sits on an active fault and is subjected to heavy rainfall.
Another adverse effect of water is erosion: the constant action of waves undercuts coastal slopes, causing them to fail. Groundwater can also dissolve rocks within slopes.
Humans can (and do) cause landslides in several ways, too. For example, deforestation has a negative impact on slope stability, as tree roots naturally reinforce the ground and drain water out. Also, mine blasts produce small earthquake-like ground vibrations that shake slopes nearby.
It's very difficult to predict and mitigate landslide risk effectively.
Part 1
May 29
Dr. Krishna Kumari Challa
So what would it take to warn people of a coming landslide? You would need a prediction for earthquakes and rainfall, in addition to a perfect knowledge of the slope-forming geomaterial.
Under our feet, geomaterials may include multiple, entangled layers of various kinds of rocks and particulate materials, such as sand, silt and clays. Their strength varies from a factor of one to 1,000, and their spatial distribution dictates where the slope is likely to fail.
To accurately assess the stability of the slope, a three-dimensional mapping of these materials and their strengths is needed. No sensor can provide this information, so geologists and geotechnical engineers must deal with partial information obtained at a few selected locations and extrapolate this data to the rest of the slope.
The weakest link of the chain—such as an existing fracture in a rock mass—is easily missed. This is an inevitable source of uncertainty when trying to predict how much material might slip. We do know that the larger the volume of a landslide, the farther its runout distance. But it's hard to gauge the exact size of a landslide, making predictions of runout distances and safe zones uncertain.

The question of when a landslide will occur is also uncertain. Mechanical analysis enables us to estimate the vulnerability of a slope in a particular scenario, including earthquake magnitude and distribution of groundwater. But predicting if and when these triggers will happen is as "easy" as predicting the weather and seismic activity—a difficult task. Unfortunately, all the money in the world can't buy accurate landslide predictions—especially in remote parts of the world.
https://theconversation.com/what-causes-landslides-can-we-predict-t...
Part 2
May 29
Dr. Krishna Kumari Challa
Landslides occur due to failure of slope stability. The following factors cause slopes to fail.
Clay and shale beds are weak materials and tend to slide, particularly in the presence of water.
Groundwater and precipitation – the groundwater conditions of an area depend on geological and hydrological parameters: storage, recharge, duration and intensity of precipitation, lithological characteristics of the area, gradient of the slope, and so on. In general, slope stability is adversely affected by a rise in the groundwater table, which raises pore water pressure. The shearing resistance of the mineral grains is reduced, and this leads to landslides.
In-situ stresses – stress within the slope rock is generated when a load is placed at the top of the slope. This may be due to accumulated rain or snow, or to human activities such as stockpiling of ore deposits, waste deposits, mine tailings and buildings, or the removal of natural rock for construction or foundation works.
Textures – anisotropy in fabric is the weakest internal structure of a rock and facilitates movement of the rock mass. Rock of uniform or medium-grained mineral content bears greater strength than rock of unequal grain sizes.
Weak planes – schistosity, slaty structure and lamination are weak points. It has been observed that massive slides take place in sedimentary rocks interbedded with limestones and shales.
Weathering weakens the physical and chemical bonding in rocks and this leads to failure of mass even in hard rocks.
Structural features
Bedding planes, fractures, joints, faults, fissures and schistosity all influence the downward movement of a rock mass. In nature no rock mass is truly continuous: it will be broken by fractures, joints, bedding planes, dykes and veins. These discontinuities lead to instability of the strata.
Seismic activity – elastic waves released during an earthquake travel through the ground and increase the shear stress in the slope. They also compress the open spaces within the rock material, increasing the water pressure in the void space, which leads to failure of the rock mass.
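The role of pore water pressure described above can be sketched with the classic infinite-slope factor-of-safety model from geotechnical engineering. This is a minimal illustration with hypothetical parameter values, not an analysis from the article:

```python
# Infinite-slope stability model (simplified illustration):
#   FS = (c + (gamma*z - u) * cos^2(beta) * tan(phi)) / (gamma * z * sin(beta) * cos(beta))
# where u = gamma_w * m * z * cos^2(beta) is the pore water pressure for a
# water table at fraction m of the slip depth. FS < 1 indicates failure.
import math

def factor_of_safety(c, phi_deg, beta_deg, gamma, z, m, gamma_w=9.81):
    """c: cohesion (kPa), phi: friction angle (deg), beta: slope angle (deg),
    gamma: soil unit weight (kN/m^3), z: slip depth (m), m: saturation 0..1."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    effective_normal = (gamma * z - gamma_w * m * z) * math.cos(beta) ** 2
    resisting = c + effective_normal * math.tan(phi)   # shear strength available
    driving = gamma * z * math.sin(beta) * math.cos(beta)  # shear stress from gravity
    return resisting / driving

# Same slope, dry versus fully saturated (illustrative values):
dry = factor_of_safety(c=5, phi_deg=30, beta_deg=35, gamma=19, z=2, m=0.0)
wet = factor_of_safety(c=5, phi_deg=30, beta_deg=35, gamma=19, z=2, m=1.0)
print(f"FS dry: {dry:.2f}, FS saturated: {wet:.2f}")  # FS dry: 1.10, FS saturated: 0.68
```

The same slope that is marginally stable when dry fails outright when saturated, which is why heavy rainfall is such a common landslide trigger.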
Part 3
May 29
Dr. Krishna Kumari Challa
The death of Vulcan: Study reveals planet is actually an astronomical illusion caused by stellar activity
A planet thought to orbit the star 40 Eridani A—host to Mr. Spock's fictional home planet, Vulcan, in the "Star Trek" universe—is really a kind of astronomical illusion caused by the pulses and jitters of the star itself, a new study shows.
A science team led by astronomer Abigail Burrows of Dartmouth College, and previously of NASA's Jet Propulsion Laboratory, has published a paper describing the new result, titled "The death of Vulcan: NEID reveals the planet candidate orbiting HD 26965 is stellar activity," in The Astronomical Journal. (Note: HD 26965 is an alternate designation for the star 40 Eridani A.)
The possible detection of a planet orbiting a star that Star Trek made famous drew excitement and plenty of attention when it was announced in 2018. Only five years later, the planet appeared to be on shaky ground when other researchers questioned whether it was there at all.
Now, precision measurements using a NASA-NSF instrument, installed a few years ago atop Kitt Peak in Arizona, seem to have returned the planet Vulcan even more definitively to the realm of science fiction.
Two methods for detecting exoplanets—planets orbiting other stars—dominate all others in the continuing search for strange new worlds. The transit method, watching for the tiny dip in starlight as a planet crosses the face of its star, is responsible for the vast majority of detections. But the "radial velocity" method also has racked up a healthy share of exoplanet discoveries.
This method is especially important for systems with planets that don't, from Earth's point of view, cross the faces of their stars. By tracking subtle shifts in starlight, scientists can measure "wobbles" in the star itself, as the gravity of an orbiting planet tugs it one way, then another. For very large planets, the radial velocity signal mostly leads to unambiguous planet detections. But not-so-large planets can be problematic.
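The scale of these wobble signals can be sketched with the non-relativistic Doppler relation, Δλ/λ = v/c. The wobble velocities below are standard textbook figures, not values from the study:

```python
# Fractional wavelength shift from a stellar "wobble" of radial velocity v:
#   delta_lambda / lambda = v / c   (non-relativistic approximation)
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(rest_wavelength_nm: float, v_mps: float) -> float:
    """Wavelength shift (nm) of a spectral line for a given radial velocity."""
    return rest_wavelength_nm * v_mps / C

# Jupiter induces a ~12.5 m/s wobble on the Sun; an Earth-like planet only ~0.09 m/s.
# For a visible line at 550 nm:
print(f"Jupiter-like: {doppler_shift(550.0, 12.5):.1e} nm")  # ~2.3e-05 nm
print(f"Earth-like:   {doppler_shift(550.0, 0.09):.1e} nm")  # ~1.7e-07 nm
```

Shifts this tiny are why stellar surface activity, which can mimic a few m/s of apparent motion, is so easily mistaken for a small planet.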
Part 1
May 29
Dr. Krishna Kumari Challa
Even the scientists who made the original, possible detection of planet HD 26965 b—almost immediately compared to the fictional Vulcan—cautioned that it could turn out to be messy stellar jitters masquerading as a planet. They reported evidence of a "super-Earth"—larger than Earth, smaller than Neptune—in a 42-day orbit around a sun-like star about 16 light-years away. The new analysis, using high-precision radial velocity measurements not yet available in 2018, confirms that caution about the possible discovery was justified.
The bad news for "Star Trek" fans comes from an instrument known as NEID, a recent addition to the complex of telescopes at Kitt Peak National Observatory. NEID, like other radial velocity instruments, relies on the Doppler effect: shifts in the light spectrum of a star that reveal its wobbling motions. In this case, parsing out the supposed planet signal at various wavelengths of light, emitted from different levels of the star's outer shell (photosphere), revealed significant differences between individual wavelength measurements—their Doppler shifts—and the total signal when they were all combined.
That means, in all likelihood, that the planet signal is really the flickering of something on the star's surface that coincides with a 42-day rotation—perhaps the roiling of hotter and cooler layers beneath the star's surface, called convection, combined with stellar surface features such as spots and "plages," which are bright, active regions. Both can alter a star's radial velocity signals.
The demonstration of such finely tuned radial velocity measurements holds out the promise of making sharper observational distinctions between actual planets and the shakes and rattles on surfaces of distant stars.
Abigail Burrows et al, The Death of Vulcan: NEID Reveals That the Planet Candidate Orbiting HD 26965 Is Stellar Activity*, The Astronomical Journal (2024). DOI: 10.3847/1538-3881/ad34d5
Part 2
May 29
Dr. Krishna Kumari Challa
The digital touchscreens of the future could let you 'feel' items through your phone
May 29
Dr. Krishna Kumari Challa
New molecule found to suppress bacterial antibiotic resistance evolution
Researchers have developed a new small molecule that can suppress the evolution of antibiotic resistance in bacteria and make resistant bacteria more susceptible to antibiotics. The paper, "Development of an inhibitor of the mutagenic SOS response that suppresses the evolution of quinolone antibiotic resistance," has been published in the journal Chemical Science.
The global rise in antibiotic-resistant bacteria is one of the top global public health and development threats, with many common infections becoming increasingly difficult to treat. It is estimated that drug-resistant bacteria are already directly responsible for around 1.27 million global deaths each year and contribute to a further 4.95 million deaths. Without the rapid development of new antibiotics and antimicrobials, this figure is set to rise significantly.
A new study led by researchers at the Ineos Oxford Institute for antimicrobial research (IOI) and the Department of Pharmacology at Oxford University offers hope in the discovery of a small molecule that works alongside antibiotics to suppress the evolution of drug-resistance in bacteria.
One of the ways that bacteria become resistant to antibiotics is due to new mutations in their genetic code. Some antibiotics (such as fluoroquinolones) work by damaging bacterial DNA, causing the cells to die. However, this DNA damage can trigger a process known as the "SOS response" in the affected bacteria.
The SOS response repairs the damaged DNA in bacteria and increases the rate of genetic mutations, which can accelerate the development of resistance to the antibiotics. In the new study, the Oxford scientists identified a molecule capable of suppressing the SOS response, ultimately increasing the effectiveness of antibiotics against these bacteria.
The researchers studied a series of molecules previously reported to increase the sensitivity of methicillin-resistant Staphylococcus aureus (MRSA) to antibiotics, and to prevent the MRSA SOS response. MRSA is a type of bacteria that usually lives harmlessly on the skin. But if it gets inside the body, it can cause a serious infection that needs immediate treatment with antibiotics. MRSA is resistant to all beta-lactam antibiotics such as penicillins and cephalosporins.
Researchers modified the structure of different parts of the molecule and tested their action against MRSA when given with ciprofloxacin, a fluoroquinolone antibiotic. This identified the most potent SOS inhibitor molecule reported to date, called OXF-077. When combined with a range of antibiotics from different classes, OXF-077 made these more effective in preventing the visible growth of MRSA bacteria.
Part 1
May 29
Dr. Krishna Kumari Challa
In a key result, the team then tested the susceptibility of bacteria treated with ciprofloxacin over a series of days to determine how quickly resistance to the antibiotic was developing, either with or without OXF-077. They found that the emergence of resistance to ciprofloxacin was significantly suppressed in bacteria treated with OXF-077, compared to those not treated with OXF-077.
This is the first study to demonstrate that an inhibitor of the SOS response can suppress the evolution of antibiotic resistance in bacteria. Moreover, when resistant bacteria previously exposed to ciprofloxacin were treated with OXF-077, it restored their sensitivity to the antibiotic to the same level as bacteria that had not developed resistance.
Jacob D. Bradbury et al, Development of an inhibitor of the mutagenic SOS response that suppresses the evolution of quinolone antibiotic resistance, Chemical Science (2024). DOI: 10.1039/D4SC00995A
Part 2
May 29
Dr. Krishna Kumari Challa
Debunking "Healthy" TikTok DESSERTS
May 29
Dr. Krishna Kumari Challa
This (Edible) Mushroom Could Kill You
May 29
Dr. Krishna Kumari Challa
Chocolate's tasty flavors might pose a health risk in other desserts
What makes chocolate taste and smell so delicious? Chemistry, of course. A variety of molecules work together to create that unmistakable aroma, but those same molecules might carry some unwanted health effects if there are too many around. According to research published in Journal of Agricultural and Food Chemistry, while many of the compounds appeared in chocolate in low enough concentrations to be safe, higher amounts were found in some baked sweet treats.
When making chocolate, cocoa beans are roasted to help their chocolatey flavors shine. During this process, new molecules, including α,β-unsaturated carbonyls, are formed as compounds in the beans react with other ingredients under high temperatures. This class of carbonyls is highly reactive and potentially genotoxic, or able to cause damage to DNA when consumed.
Though naturally found in many foods, these carbonyls are also used as flavoring additives, and some have been banned in some countries, including the buttery-tasting furan-2(5H)-one. To better understand how these molecules form naturally in foods, and whether or not they are present in levels that could pose a health concern, researchers tested chocolates and other sweet treats for 10 different α,β-unsaturated carbonyls—some of which have been confirmed as safe by the European Food Safety Authority, while others are still under evaluation.
The team created its own chocolates and found that α,β-unsaturated carbonyls formed during roasting and after the addition of cocoa butter; however, their concentrations remained too low to pose any health concerns from consuming the chocolates.
Next, researchers screened 22 commercially available desserts, including crepes, waffles, cakes and biscuits, either with or without chocolate. In these packaged treats, they found even lower concentrations of nine of the 10 carbonyls compared to the chocolates.
The remaining carbonyl—genotoxic furan-2(5H)-one—appeared in much higher concentrations in the crepe and cake samples, reaching up to 4.3 milligrams per kilogram. Considering that the recommended threshold for genotoxic substances is only 0.15 micrograms per person per day, consuming these desserts could exceed that limit, though additional studies are needed to accurately assess the potential health risk.
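A back-of-the-envelope calculation shows why the crepe and cake figures raise concern. The serving size here is an assumption of mine for illustration; only the 4.3 mg/kg concentration and the 0.15 µg/day threshold come from the article:

```python
# How does one serving at the highest measured furan-2(5H)-one level
# compare to the recommended intake threshold for genotoxic substances?
concentration_mg_per_kg = 4.3    # highest level found in crepe/cake samples (from the article)
serving_g = 80.0                 # assumed serving size (hypothetical)
threshold_ug_per_day = 0.15      # recommended daily threshold (from the article)

# mg/kg * kg eaten -> mg, then -> micrograms
intake_ug = concentration_mg_per_kg * (serving_g / 1000.0) * 1000.0
print(f"Intake per serving: {intake_ug:.0f} ug, "
      f"about {intake_ug / threshold_ug_per_day:.0f}x the daily threshold")
```

Even a much smaller portion would exceed the threshold by orders of magnitude, which is consistent with the authors' call for further risk assessment.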
Researchers concluded that the furan-2(5H)-one molecule likely formed during the baking process and did not seem to correlate with the amount of chocolate present in the packaged desserts. The team says that this work helps to better understand where these carbonyls come from in chocolate and highlights the importance of monitoring flavorings in food to keep consumers informed and safe.
Occurrence and Synthesis Pathways of (Suspected) Genotoxic α,β-Unsaturated Carbonyls in Chocolate and Other Commercial Sweet Snacks, Journal of Agricultural and Food Chemistry (2024). DOI: 10.1021/acs.jafc.4c01043 , pubs.acs.org/doi/abs/10.1021/acs.jafc.4c01043
May 30
Dr. Krishna Kumari Challa
More rogue planets discovered
The Euclid space telescope has discovered seven more rogue planets, shining a light on the dark and lonely worlds floating freely through the universe untethered to any star.
The Euclid study also offered clues to how rogue planets are created: Some could be formed in the outer part of a solar system before getting detached from their star and floating away.
But the study indicates that many rogue planets may be created as a "natural byproduct" of the star-formation process. This suggests a "really close connection between stars and planets and how they form".
Without being bound to a star, as the Earth is to the sun, there are no days or years on these planets, which languish in perpetual night. Yet scientists think there is a chance they could be able to host life—and estimate there may be trillions dotted throughout the Milky Way.
Last week the European Space Agency released the Euclid telescope's first scientific results since the mission launched in July.
Among the discoveries were seven new free-floating planets, gas giants at least four times the mass of Jupiter.
They were spotted in the Orion Nebula, the nearest star-forming region to Earth, roughly 1,500 light years away.
Euclid also confirmed the existence of dozens of other previously detected rogue planets. This is likely to be just the tip of the iceberg.
Because they do not reflect the light of a star, spotting rogue planets is like "finding a needle in a haystack".
Younger planets, such as those discovered by Euclid, are hotter, making them a little easier to see.
Some research has suggested there are around 20 rogue planets for every star, which could put their number in the trillions in our home galaxy alone.
Given there are thought to be hundreds of billions of galaxies across the universe, the potential number of free-floating worlds becomes difficult to fathom.
When NASA's Roman space telescope launches in 2027 it is expected to find many more rogue planets, possibly offering clarity about how many could be out there.
But not all rogue planets wander alone. Four of the more than 20 confirmed by Euclid are believed to be binaries—two planets orbiting each other in a single system.
If rogue planets are habitable, they could be a key target in humanity's search for extraterrestrial life.
Lacking heat from a nearby star, free-floating planets are believed to be cold, with frozen surfaces.
That means any life-supporting energy would have to come from inside the planet.
And geothermal vents allow animals to survive on Earth that have never seen the sun's rays.
But even under the best conditions, this extreme isolation would likely be able to support only bacterial and microbial life.
Advantages of being a rogue planet: Rogue planets could be thought of as traversing a lonely path through the cosmos.
But "being around a star has its downsides".
Once the sun becomes a red giant—in an estimated 7.6 billion years—it will greatly expand, swallowing the Earth.
Rogue planets do not have to worry about eventually being destroyed by a star. "These things will last forever." (Maybe, if they don't go near any other star or planet or anything that can influence their path and structure.)
"If you don't mind the cold temperatures you could survive on these planets for eternity."
The study was published on the arXiv.org e-Print archive on Friday.
https://arxiv.org/abs/2405.13497
May 30
Dr. Krishna Kumari Challa
Is a train's risk of derailment affected by its length?
Longer freight trains are more likely to derail compared with shorter trains, according to new research published in Risk Analysis. The increased risk held even after accounting for the need for fewer trains if more cars were on each train.
For the study, investigators assessed information on US freight train accidents from 2013 to 2022 in Federal Railroad Administration databases. The team found that running 100-car trains would lead to an 11% higher risk of derailment compared with running 50-car trains, even when accounting for the fact that only half as many 100-car trains would need to run. For 200-car trains, the risk was 24% higher than for 50-car trains.
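The two reported figures are mutually consistent under a simple scaling argument. This is a toy model of my own, not the study's statistical method: suppose per-train derailment risk grows as length to some power a, so that moving fixed freight with trains of length L (needing 1/L as many trains) changes fleet-wide risk by (L/50)^(a-1) relative to 50-car trains:

```python
# Fit the exponent a to the reported 100-car result, then check it
# against the reported 200-car result (a consistency check, not a proof).
import math

# (100/50)**(a-1) = 1.11  =>  a - 1 = log2(1.11)
a = 1 + math.log(1.11, 2)
print(f"implied exponent a ~= {a:.3f}")  # ~1.151: risk grows faster than linearly

predicted_200 = (200 / 50) ** (a - 1)
print(f"predicted 200-car relative risk: {predicted_200:.3f}")  # ~1.232, near the reported 1.24
```

The near-match with the reported 24% figure suggests the derailment risk in the data scales roughly as a power law in train length.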
The Relationship between Freight Train Length and the Risk of Derailment, Risk Analysis (2024). DOI: 10.1111/risa.14312
May 30
Dr. Krishna Kumari Challa
The eye is one of a few sites in the body with something called immune privilege. That special status makes the eye an ideal environment for researching certain therapies for treating vision loss.
The body’s immune system—made up of organs, tissues and cells—works to protect us from infection and disease. When a virus or other foreign substance is detected, the immune system kicks in. It makes molecules called antibodies to attack these invading substances known as antigens. This natural defense system is called an inflammatory response, and results in swelling of tissue and a higher-than-normal temperature.
While an inflammatory response can help fight off infection or disease, it can also cause problems. For example, if someone has a donated organ transplanted in their body, their immune system may recognize the organ as foreign tissue and mount an inflammatory response. This can cause the transplant to fail.
Interestingly, certain areas of the body have something called immune privilege. This means that the body’s normal inflammatory immune response is limited here. Scientists think the purpose of immune privilege is to protect these important areas from damage that may occur with swelling and higher temperatures from the immune response. The eye is one of a few areas of the body with immune privilege. The eye limits its inflammatory immune response so that vision isn’t harmed by swelling and other tissue changes. Other sites with immune privilege include the brain, testes, placenta and fetus.
Because of this immune privilege, the eye offers an excellent location for certain kinds of research and therapy. For example, scientists can implant types of cells called stem cells in the eye to study their role in regrowing or repairing damaged tissue. Cells implanted in the immune-privileged eye are less likely to be rejected than they might be in other parts of the body. Studies of stem cell use in the eye have shown promise in treating vision loss.
Another reason the eye is a good place for researching new therapies? It is relatively easy to reach and see inside of the structure. That makes implanting cells in the eye much easier than other areas of the body.
Source: https://www.aao.org/eye-health/tips-prevention/eye-immune-privilege
May 30
Dr. Krishna Kumari Challa
The ocular immune system protects the eye from infection and regulates healing processes following injuries. The interior of the eye lacks lymph vessels but is highly vascularized, and many immune cells reside in the uvea, including mostly macrophages, dendritic cells, and mast cells. These cells fight off intraocular infections, and intraocular inflammation can manifest as uveitis (including iritis) or retinitis.

The cornea of the eye is immunologically a very special tissue. Its constant exposure to the exterior world means that it is vulnerable to a wide range of microorganisms, while its moist mucosal surface makes the cornea particularly susceptible to attack. At the same time, its lack of vasculature and relative immune separation from the rest of the body make immune defense difficult.

Lastly, the cornea is a multifunctional tissue. It provides a large part of the eye's refractive power, meaning it has to maintain remarkable transparency, but it must also serve as a barrier to keep pathogens from reaching the rest of the eye, similar to the function of the dermis and epidermis in protecting underlying tissues. Immune reactions within the cornea come from surrounding vascularized tissues as well as from innate immune cells that reside within the cornea.
https://en.wikipedia.org/wiki/Ocular_immune_system
May 30
Dr. Krishna Kumari Challa
Astronomers find most distant galaxy using James Webb Space Telescope
An international team of astronomers recently announced the discovery of the two earliest and most distant galaxies ever seen, dating back to only 300 million years after the Big Bang. These results, using NASA's James Webb Space Telescope (JWST), mark a major milestone in the study of the early universe.
The discoveries were made by the JWST Advanced Deep Extragalactic Survey (JADES) team.
Because of the expansion of the universe, the light from distant galaxies stretches to longer wavelengths as it travels. This effect is so extreme for these two galaxies that their ultraviolet light is shifted to infrared wavelengths where only JWST can see it. Because light takes time to travel, more distant galaxies are also seen as they were earlier in time.
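The stretch described above follows the standard cosmological redshift relation, λ_observed = λ_rest × (1 + z). A quick illustrative calculation for a z ≈ 14 galaxy, using the well-known Lyman-alpha hydrogen line:

```python
# Observed wavelength of light emitted at redshift z:
#   lambda_obs = lambda_rest * (1 + z)
def observed_wavelength_nm(rest_nm: float, z: float) -> float:
    return rest_nm * (1 + z)

lyman_alpha = 121.6   # nm: a key ultraviolet hydrogen emission line
z = 14.0
obs = observed_wavelength_nm(lyman_alpha, z)
print(f"Lyman-alpha at z={z}: {obs:.0f} nm (~{obs/1000:.1f} micrometres)")  # ~1824 nm
```

Ultraviolet light emitted at 121.6 nm arrives at about 1.8 micrometres, squarely in the infrared range where JWST's instruments operate.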
The two record-breaking galaxies are called JADES-GS-z14-0 and JADES-GS-z14-1, the former being the more distant of the two. In addition to being the new distance record holder, JADES-GS-z14-0 is remarkable for how big and bright it is.
The size of the galaxy clearly proves that most of the light is being produced by large numbers of young stars, rather than by material falling onto a supermassive black hole in the galaxy's center, which would appear much smaller.
The combination of the extreme brightness and the fact that young stars are fueling this high luminosity makes JADES-GS-z14-0 the most striking evidence yet found for the rapid formation of large, massive galaxies in the early universe. It is stunning that the universe can make such a galaxy in only 300 million years.
The galaxy is located in a field where the JWST Mid-Infrared Instrument had conducted an ultra-deep observation. Its brightness at intermediate infrared wavelengths is a sign of emission from hydrogen and even oxygen atoms in the early universe. Despite being so young, the galaxy is already hard at work creating the elements familiar to us on Earth.
This amazing object shows that galaxy formation in the early universe is very rapid and intense.
A shining cosmic dawn: spectroscopic confirmation of two luminous galaxies at z∼14, arXiv:2405.18485 [astro-ph.GA] arxiv.org/abs/2405.18485
JWST/MIRI photometric detection at 7.7 μm of the stellar continuum and nebular emission in a galaxy at z>14, arXiv:2405.18462 [astro-ph.GA] arxiv.org/abs/2405.18462
Brant Robertson et al, Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic Star-Formation Rate Density 300 Myr after the Big Bang, arXiv (2023). DOI: 10.48550/arxiv.2312.10033
May 31