Pub garden smoking ban dropped from government plans

The Health Secretary said he didn’t want to cause further harm to the hospitality industry in England.

Women claim injury and disfigurement after liposuction

One woman was hospitalised after being treated at a clinic in south-west London, the BBC is told.

Persistent problems with AI-assisted genomic studies

University of Wisconsin-Madison researchers are warning that artificial intelligence tools gaining popularity in the fields of genetics and medicine can lead to flawed conclusions about the connection between genes and physical characteristics, including risk factors for diseases like diabetes.

The faulty predictions are linked to researchers’ use of AI to assist genome-wide association studies. Such studies scan through hundreds of thousands of genetic variations across many people to hunt for links between genes and physical traits. Of particular interest are possible connections between genetic variations and certain diseases.
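
In essence, such a study runs one statistical test per genetic variant and flags the variants that clear a multiple-testing threshold. A minimal sketch of that per-variant scan, with simulated genotypes and a simulated trait (every number below is invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_variants = 5_000, 1_000  # real studies use far more of both

# Simulated genotypes: 0, 1, or 2 copies of the minor allele per variant.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))

# Simulated trait: only the first 10 variants truly affect it.
true_effects = np.zeros(n_variants)
true_effects[:10] = 0.2
trait = genotypes @ true_effects + rng.normal(size=n_people)

# The scan: one simple linear regression per variant, collecting p-values.
p_values = np.array([
    stats.linregress(genotypes[:, j], trait).pvalue
    for j in range(n_variants)
])

# Bonferroni-style threshold to account for the number of tests performed.
hits = np.flatnonzero(p_values < 0.05 / n_variants)
print("variants flagged as associated:", hits)  # should recover (most of) 0-9
```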

Genetics’ link to disease not always straightforward

Genetics play a role in the development of many health conditions. While changes in some individual genes are directly connected to an increased risk for diseases like cystic fibrosis, the relationship between genetics and physical traits is often more complicated.

Genome-wide association studies have helped to untangle some of these complexities, often using large databases of individuals’ genetic profiles and health characteristics, such as the National Institutes of Health’s All of Us project and the UK Biobank. However, these databases are often missing data about health conditions that researchers are trying to study.

“Some characteristics are either very expensive or labor-intensive to measure, so you simply don’t have enough samples to make meaningful statistical conclusions about their association with genetics,” says Qiongshi Lu, an associate professor in the UW-Madison Department of Biostatistics and Medical Informatics and an expert on genome-wide association studies.

The risks of bridging data gaps with AI

Researchers are increasingly attempting to work around this problem by bridging data gaps with ever more sophisticated AI tools.

“It has become very popular in recent years to leverage advances in machine learning, so we now have these advanced machine-learning AI models that researchers use to predict complex traits and disease risks even with limited data,” Lu says.

Now, Lu and his colleagues have demonstrated the peril of relying on these models without also guarding against biases they may introduce. The team describes the problem in a paper recently published in the journal Nature Genetics. In it, Lu and his colleagues show that a common type of machine learning algorithm employed in genome-wide association studies can mistakenly link several genetic variations with an individual’s risk for developing Type 2 diabetes.

“The problem is if you trust the machine learning-predicted diabetes risk as the actual risk, you would think all those genetic variations are correlated with actual diabetes even though they aren’t,” says Lu.

These “false positives” are not limited to these specific variations and diabetes risk, Lu adds, but are a pervasive bias in AI-assisted studies.
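
One way this bias can arise is easy to reproduce: if the model predicts the trait from features that happen to correlate with a variant, the predicted risk inherits that correlation even when the true trait shows none. A toy illustration of that mechanism (invented data, not the paper’s analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20_000

# A variant with no effect on the true trait.
genotype = rng.binomial(2, 0.3, size=n)

# A covariate that happens to correlate with the variant...
covariate = 0.3 * genotype + rng.normal(size=n)

# ...while the true trait is pure noise, unrelated to this variant.
true_trait = rng.normal(size=n)

# A model trained elsewhere predicts the trait from the covariate.
predicted_trait = 0.8 * covariate  # stand-in for an ML model's output

# Testing against the true trait finds nothing; testing against the
# prediction produces a spurious, highly "significant" association.
print("p vs true trait:     ", stats.linregress(genotype, true_trait).pvalue)
print("p vs predicted trait:", stats.linregress(genotype, predicted_trait).pvalue)
```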

New statistical method can reduce false positives

In addition to identifying the problem of overreliance on AI tools, Lu and his colleagues propose a statistical method that researchers can use to ensure the reliability of their AI-assisted genome-wide association studies. The method helps remove the bias that machine learning algorithms can introduce when they make inferences based on incomplete information.

“This new strategy is statistically optimal,” Lu says, noting that the team used it to better pinpoint genetic associations with individuals’ bone mineral density.
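
The paper’s estimator is not reproduced here, but corrections in this general family typically use a smaller labeled subset, in which both the true trait and the model’s prediction are observed, to measure the prediction-induced bias and subtract it. A toy sketch of that idea on invented data (not the authors’ method):

```python
import numpy as np

rng = np.random.default_rng(2)
n_labeled, n_unlabeled = 2_000, 50_000

def effect(g, y):
    """Regression slope of trait y on genotype g."""
    g = g - g.mean()
    return (g @ y) / (g @ g)

# Labeled subset: the true trait AND the model prediction are observed.
# The prediction is biased: it leaks a genotype effect that isn't real.
g_lab = rng.binomial(2, 0.3, size=n_labeled)
y_lab = rng.normal(size=n_labeled)                    # true trait (null variant)
yhat_lab = 0.2 * g_lab + rng.normal(size=n_labeled)   # biased ML prediction

# Unlabeled samples: only the model prediction is available.
g_unl = rng.binomial(2, 0.3, size=n_unlabeled)
yhat_unl = 0.2 * g_unl + rng.normal(size=n_unlabeled)

naive = effect(g_unl, yhat_unl)  # treats the prediction as the real trait

# Correction: subtract the bias measured where the truth is known.
bias = effect(g_lab, yhat_lab) - effect(g_lab, y_lab)
corrected = naive - bias

print(f"naive estimate:     {naive:+.3f}")      # pulled away from zero
print(f"corrected estimate: {corrected:+.3f}")  # near zero, as it should be
```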

AI not the only problem with some genome-wide association studies

While the group’s proposed statistical method could help improve the accuracy of AI-assisted studies, Lu and his colleagues also recently identified problems with similar studies that fill data gaps with proxy information rather than algorithms.

In another paper recently published in Nature Genetics, the researchers sound the alarm about studies that over-rely on proxy information in an attempt to establish connections between genetics and certain diseases.

For instance, large health databases like the UK Biobank hold a wealth of genetic information about large populations, but they contain relatively little data on the incidence of diseases that tend to crop up later in life, like most neurodegenerative diseases.

For Alzheimer’s disease specifically, some researchers have attempted to bridge that gap with proxy data gathered through family health history surveys, where individuals can report a parent’s Alzheimer’s diagnosis.

The UW-Madison team found that such proxy-information studies can produce “highly misleading genetic correlation” between Alzheimer’s risk and higher cognitive abilities.

“These days, genomic scientists routinely work with biobank datasets that have hundreds of thousands of individuals. However, as statistical power goes up, biases and the probability of errors are also amplified in these massive datasets,” says Lu. “Our group’s recent studies provide humbling examples and highlight the importance of statistical rigor in biobank-scale research studies.”

Two new cases of more spreadable mpox found in UK

All three patients were infected with the Clade 1b variant, which appears to transmit more easily.

Scientists tackle farm nutrient pollution with sustainable, affordable designer biochar pellets

What if farmers could not only prevent excess phosphorus from polluting downstream waterways, but also recycle that nutrient as a slow-release fertilizer, all without spending a lot of money? In a first-of-its-kind field study, University of Illinois Urbana-Champaign researchers show it’s possible and economical.

“Phosphorus removal structures have been developed to capture dissolved phosphorus from tile drainage systems, but current phosphorus sorption materials are either inefficient or they are industrial waste products that aren’t easy to dispose of. This motivated us to develop an eco-friendly and acceptable material to remove phosphorus from tile drainage systems,” said study author Hongxu Zhou, who completed the study as a doctoral student in the Department of Agricultural and Biological Engineering (ABE), part of the College of Agricultural, Consumer and Environmental Sciences and The Grainger College of Engineering at U. of I.

Zhou and his co-authors used sawdust and lime sludge, byproducts from milling and drinking water treatment plants, respectively. They mixed the two ingredients, formed the mixture into pellets, and slow-burned them under low-oxygen conditions to create a “designer” biochar with significantly higher phosphorus-binding capacity compared to lime sludge or biochar alone. Importantly, once these pellets bind all the phosphorus they can hold, they can be spread onto fields where the captured nutrient is slowly released over time.

Leveraging designer biochar’s many sustainable properties, the team tested pellets in working field conditions for the first time, monitoring phosphorus removal in Fulton County, Illinois, fields for two years. Like the majority of Midwestern corn and soybean fields, the experimental fields were fitted with subsurface drainage pipes. The drainage water flowed through phosphorus removal structures filled with designer biochar pellets of two different sizes: the team tested 2- to 3-centimeter pellets during the first year of the experiment, then replaced them with 1-centimeter pellets for the second year.

Both pellet sizes removed phosphorus, but the 1-centimeter pellets performed much better, reaching 38 to 41% phosphorus removal efficiency, compared with 1.3 to 12% efficiency for the larger pellets.

The result was not a surprise for study co-author Wei Zheng, who said smaller particle sizes allow more contact time for phosphorus to stick to the designer biochar. Zheng, a principal research scientist at the Illinois Sustainable Technology Center (ISTC), part of the Prairie Research Institute at U. of I., has done previous laboratory studies showing a powdered form of designer biochar is highly efficient for phosphorus removal. But powdered materials wouldn’t work in the field.

“If we put powder-form biochar in the field, it would easily wash away,” Zhou said. “This is why we have to make pellets. We have to sacrifice some efficiency to ensure the system will work under field conditions.”

After showing the pellets are effective in real-world scenarios, the research team performed techno-economic and life-cycle analyses to evaluate the economic breakdown for farmers and the overall sustainability of the system.

The cost to produce designer biochar pellets was estimated at $413 per ton, less than half the market cost of alternatives such as granular activated carbon ($800-$2,500 per ton). The team also estimated the total cost of phosphorus removal using the system, arriving at an average of $359 per kilogram removed. This figure varied with inflation and with how often the pellets were replaced; replacing them every two years appeared to be the most cost-effective scenario.
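
As a rough illustration of how a per-kilogram figure like that comes together, here is a back-of-the-envelope sketch; only the $413-per-ton pellet cost is taken from the study, and every other quantity below is invented:

```python
# Hypothetical techno-economic sketch. Only the pellet production cost is
# from the study; the fill mass, inflow load, and removal efficiency are
# assumptions made up for illustration.
pellet_cost_per_ton = 413.0     # USD per ton, from the study
structure_fill_tons = 1.5       # assumed pellet mass in one structure
replacement_years = 2           # most cost-effective interval per the study
inflow_p_kg_per_year = 2.0      # assumed dissolved P entering the structure
removal_efficiency = 0.40       # roughly the 38-41% seen for 1 cm pellets

pellet_cost = pellet_cost_per_ton * structure_fill_tons
p_removed = inflow_p_kg_per_year * replacement_years * removal_efficiency

print(f"pellet cost over {replacement_years} years: ${pellet_cost:.0f}")
print(f"phosphorus removed: {p_removed:.1f} kg")
print(f"cost per kg removed: ${pellet_cost / p_removed:.0f}")
```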

The life cycle analysis showed the system — including returning spent biochar pellets to crop fields and avoiding additional phosphorus and other inputs — could save 12 to 200 kilograms of carbon dioxide-equivalent per kilogram of phosphorus removed. Zhou says the benefits go beyond nutrient loss reduction and carbon sequestration to include energy production, reduction of eutrophication, and improving soils.

“At the moment, there’s no regulation that requires farmers to remove phosphorus from drainage water. But we know there are many conservation conscious farmers who want to reduce nitrate and phosphorus losses from their fields,” said co-author Rabin Bhattarai, associate professor in ABE. “If they’re already installing a woodchip bioreactor to remove nitrate, all they’d have to do is add the pellets to the control structure to remove the phosphorus at the same time. And there’s something very attractive about being able to reuse the pellets on the fields.”

AI for real-time, patient-focused insight

A picture may be worth a thousand words, but pictures and words alike have a lot of work to do to catch up to BiomedGPT.

Covered recently in the journal Nature Medicine, BiomedGPT is a new type of artificial intelligence (AI) designed to support a wide range of medical and scientific tasks. The study, conducted in collaboration with multiple institutions, describes the model as “the first open-source and lightweight vision-language foundation model, designed as a generalist capable of performing various biomedical tasks.”

“This work combines two types of AI into a decision support tool for medical providers,” explains Lichao Sun, an assistant professor of computer science and engineering at Lehigh University and a lead author of the study. “One side of the system is trained to understand biomedical images, and one is trained to understand and assess biomedical text. The combination of these allows the model to tackle a wide range of biomedical challenges, using insight gleaned from databases of biomedical imagery and from the analysis and synthesis of scientific and medical research reports.”
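
As a rough architectural picture of what “one side for images, one side for text” means, here is a minimal two-encoder fusion sketch in PyTorch; the layer sizes and module names are invented and are not BiomedGPT’s actual architecture:

```python
import torch
import torch.nn as nn

class ToyVisionLanguageModel(nn.Module):
    """Illustrative only: a tiny image encoder and text encoder whose
    outputs are fused for a downstream answer. Not BiomedGPT."""

    def __init__(self, vocab_size=10_000, dim=128, n_answers=100):
        super().__init__()
        # Image side: a small convolutional encoder.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, dim),
        )
        # Text side: token embeddings pooled into a single vector.
        self.text_embed = nn.Embedding(vocab_size, dim)
        # Fusion head: concatenate both modalities, predict an answer.
        self.head = nn.Linear(2 * dim, n_answers)

    def forward(self, image, token_ids):
        img_vec = self.vision(image)                  # (batch, dim)
        txt_vec = self.text_embed(token_ids).mean(1)  # (batch, dim)
        return self.head(torch.cat([img_vec, txt_vec], dim=-1))

model = ToyVisionLanguageModel()
scan = torch.randn(2, 3, 224, 224)            # stand-in for an image batch
question = torch.randint(0, 10_000, (2, 16))  # stand-in tokenized question
print(model(scan, question).shape)            # torch.Size([2, 100])
```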

’16 state-of-the-art results’ for medical practitioners and patients

The key innovation described in the August 7 Nature Medicine article, “A generalist vision-language foundation model for diverse biomedical tasks,” is that this AI model doesn’t need to be specialized for each task. Typically, AI systems are trained for specific jobs, like recognizing tumors in X-rays or summarizing medical papers. However, this new model can handle many different tasks using the same underlying technology. This versatility makes it a “generalist” model, and a powerful new tool in the hands of medical providers.

“BiomedGPT is based on foundation models, a recent development in AI,” says Sun. “Foundation models are large, pre-trained AI systems that can be adapted to various tasks with minimal additional training. The generalist model described in the article has been trained on vast amounts of biomedical data, including images and text, enabling it to perform well across different applications.”

“By evaluating 25 datasets across 9 biomedical tasks and different modalities,” says Kai Zhang, a Lehigh PhD student advised by Sun who serves as first author of the Nature Medicine article, “BiomedGPT achieved 16 state-of-the-art results. A human evaluation of BiomedGPT on three radiology tasks showcased the model’s robust predictive abilities.”

Zhang says that he is proud that the open-source codebase is available for other researchers to use as a springboard to drive further development and adoption.

The team reports that the technology behind BiomedGPT may one day help doctors by interpreting complex medical images, assist researchers by analyzing scientific literature, or even aid in drug discovery by predicting how molecules behave.

“The potential impact of such technology is significant,” Zhang says, “as it could streamline many aspects of healthcare and research, making them faster and more accurate. Our method demonstrates that effective training with diverse data can lead to more practical biomedical AI for improving diagnosis and workflow efficiency.”

A team effort for clinical validation, and more

A crucial step in the process was validation of the model’s effectiveness and applicability in real-world healthcare settings.

“Clinical testing involves applying the AI model to real patient data to assess its accuracy, reliability, and safety,” Sun says. “This testing ensures that the model performs well across different scenarios. The outcomes of these tests helped refine the model, demonstrating its potential to improve clinical decision-making and patient care.”

Massachusetts General Hospital (MGH), a founding member of the Mass General Brigham healthcare system and teaching affiliate of Harvard Medical School, played a crucial role in the development and validation of the BiomedGPT model. The institution’s involvement primarily focused on providing clinical expertise and facilitating the evaluation of the model’s effectiveness in real-world healthcare settings. For instance, the model was tested with radiologists at MGH, where it demonstrated superior performance in tasks like visual question answering and radiology report generation. This collaboration helped ensure that the model was both accurate and practical for clinical use.

Other contributors to BiomedGPT include researchers from University of Georgia, Samsung Research America, University of Pennsylvania, Stanford University, University of Central Florida, UC-Santa Cruz, University of Texas-Health, Children’s Hospital of Philadelphia, and the Mayo Clinic.

“This research is highly interdisciplinary and collaborative,” says Sun. “The research involves expertise from multiple fields, including computer science, medicine, radiology, and biomedical engineering. Each author contributes specialized knowledge necessary to develop, test, and validate the model across various biomedical tasks. Large-scale projects like this often require access to diverse datasets and computational resources, along with access to skills in algorithm development, model training, evaluation, and application to real-world scenarios, as well as clinical testing and validation.

“This was a true team effort,” he says. “Creating something that can truly help the medical community improve patient outcomes across a wide range of issues is a very complex challenge. With such complexity, collaboration is key to creating impact through the application of science and engineering.”

‘Don’t delay’ making stroke 999 call – NHS

The average time taken to call an ambulance for a stroke was nearly 88 minutes, analysis found.

‘I can’t afford a child on £53,000 salary’ – why fertility rate is falling

From ‘fruitless’ dating to financial pressures, people share their views on falling fertility rates.

Exposure to particular sources of air pollution is harmful to children’s learning and memory

A new USC study involving 8,500 children from across the United States reveals that a form of air pollution, largely the product of agricultural emissions, is linked to poor learning and memory performance in 9- and 10-year-olds.

The specific component of fine-particle air pollution (PM2.5) in question, ammonium nitrate, is also implicated in Alzheimer’s disease and dementia risk in adults, suggesting that PM2.5 may cause neurocognitive harm across the lifespan. Ammonium nitrate forms when ammonia gas and nitric acid, produced by agricultural activities and fossil fuel combustion, respectively, react in the atmosphere.

The findings appear in Environmental Health Perspectives.

“Our study highlights the need for more detailed research on particulate matter sources and chemical components,” said senior author Megan Herting, an associate professor of population and public health sciences at the Keck School of Medicine of USC. “It suggests that understanding these nuances is crucial for informing air quality regulations and understanding long-term neurocognitive effects.”

For the last several years, Herting has been working with data from the largest brain study in the United States, known as the Adolescent Brain Cognitive Development Study, or ABCD, to understand how PM2.5 may affect the brain.

PM2.5, a key indicator of air quality, is a mixture of dust, soot, organic compounds and metals in particles less than 2.5 micrometers in diameter. These particles can travel deep into the lungs, pass into the bloodstream and bypass the blood-brain barrier, causing serious health problems.

Fossil fuel combustion is one of the largest sources of PM2.5, especially in urban areas, but sources like wildfires, agriculture, marine aerosols and chemical reactions are also important.

In 2020, Herting and her colleagues published a paper that examined PM2.5 as a whole and its potential impact on cognition in children, and found no relationship.

For this study, they used statistical techniques to tease apart 15 chemical components of PM2.5 and their sources. That’s when airborne ammonium nitrate, which typically results from agricultural operations, emerged as a prime suspect.

“No matter how we examined it, on its own or with other pollutants, the most robust finding was that ammonium nitrate particles were linked to poorer learning and memory,” Herting said. “That suggests that overall PM2.5 is one thing, but for cognition, it’s a mixture effect of what you’re exposed to.”
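
The “on its own or with other pollutants” comparison refers to fitting single-pollutant and mutually adjusted multi-pollutant models and checking that the ammonium nitrate association survives both. A toy sketch of that kind of check on fabricated data (not the study’s analysis):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 8_500  # roughly the study's sample size

# Fabricated, standardized exposures for three PM2.5 components.
ammonium_nitrate = rng.normal(size=n)
sulfate = 0.5 * ammonium_nitrate + rng.normal(size=n)  # correlated component
black_carbon = rng.normal(size=n)

# Fabricated memory score driven only by one component in this fake data.
memory_score = -0.1 * ammonium_nitrate + rng.normal(size=n)

# "On its own": single-pollutant model.
single = sm.OLS(memory_score, sm.add_constant(ammonium_nitrate)).fit()

# "With other pollutants": mutually adjusted multi-pollutant model.
X = sm.add_constant(np.column_stack([ammonium_nitrate, sulfate, black_carbon]))
joint = sm.OLS(memory_score, X).fit()

# A robust association keeps a similar coefficient in both models.
print("single-pollutant coefficient:", round(single.params[1], 3))
print("multi-pollutant coefficient: ", round(joint.params[1], 3))
```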

For their next project, the researchers hope to examine how these mixtures and sources may map onto individual differences in brain phenotypes during child and adolescent development.

In addition to Herting, other study authors include Rima Habre, Kirthana Sukumaran, Katherine Bottenhorn, Jim Gauderman, Carlos Cardenas-Iniguez, Rob McConnell and Hedyeh Ahmadi, all of the Keck School of Medicine; Daniel A. Hackman of the USC Suzanne Dworak-Peck School of Social Work; Kiros Berhane of the Columbia University Mailman School of Public Health; Shermaine Abad of University of California, San Diego; and Joel Schwartz of the Harvard T.H. Chan School of Public Health.

The research was supported by grants from the National Institutes of Health [NIEHS R01ES032295, R01ES031074, P30ES007048] and the Environmental Protection Agency [RD 83587201, RD 83544101].

Evolutionary paths vastly differ for birds, bats

New Cornell University research has found that, unlike birds, the evolution of bats’ wings and legs is tightly coupled, which may have prevented them from filling as many ecological niches as birds.

“We initially expected to confirm that bat evolution is similar to that of birds, and that their wings and legs evolve independently of one another. The fact we found the opposite was greatly surprising,” said Andrew Orkney, postdoctoral researcher in the laboratory of Brandon Hedrick, assistant professor of biomedical sciences.

Both researchers are co-corresponding authors of the study, published Nov. 1 in Nature Ecology & Evolution.

Because legs and wings perform different functions, researchers had previously thought that the origin of flight in vertebrates required forelimbs and hindlimbs to evolve independently, allowing each to adapt to its distinct task more easily. Comparing bats and birds makes it possible to test this idea because the two groups do not share a common flying ancestor and therefore constitute independent replicates of the evolution of flight.

The researchers observed in both bats and birds that the shapes of the bones within a species’ wing (handwing, radius, humerus), or within a species’ leg (femur and tibia), are correlated, meaning that within a limb, bones evolve together. However, when looking at the correlation between legs and wings, the results differ: bird species show little to no correlation, whereas bats show a strong one.

This means that, contrary to birds, bats’ forelimbs and hindlimbs did not evolve independently: when wing shape changes (when wings grow larger or smaller, for example), leg shape changes in the same direction.
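
The underlying comparison can be pictured as correlating evolutionary shape change across species, first within a limb and then between limbs. A toy sketch on fabricated numbers, ignoring the phylogenetic corrections a real analysis requires:

```python
import numpy as np

rng = np.random.default_rng(4)
n_species = 200

def simulate(cross_limb_coupling):
    """Fabricated per-species shape scores for a wing bone and a leg bone."""
    shared = rng.normal(size=n_species)  # evolutionary change both limbs share
    wing = shared + rng.normal(size=n_species)
    leg = cross_limb_coupling * shared + rng.normal(size=n_species)
    return wing, leg

for name, coupling in [("birds (decoupled)", 0.0), ("bats (coupled)", 1.0)]:
    wing, leg = simulate(coupling)
    r = np.corrcoef(wing, leg)[0, 1]
    print(f"{name}: wing-leg correlation r = {r:+.2f}")
```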

“We suggest that the coupled evolution of wing and leg limits bats’ capability to adapt to new ecologies,” Hedrick said.

Following their discovery, the team began re-examining the evolution of bird skeletons in greater depth.

“While we showed that the evolution of birds’ wings and legs is independent, and it appears this is an important explanation for their evolutionary success,” Orkney said, “we still don’t know why birds are able to do this or when it began to occur in their evolutionary history.”
