Gene therapy experiment gives children ‘life-changing’ sight boost

Four toddlers born with a rare eye condition have seen “life-changing improvements”, say UK doctors.


Cooling materials — Out of the 3D printer

Rapid, localized heat management is essential for electronic devices and could have applications ranging from wearable materials to burn treatment. While so-called thermoelectric materials convert temperature differences to electrical voltage and vice versa, their efficiency is often limited, and their production is costly and wasteful. In a new paper published in Science, researchers from the Institute of Science and Technology Austria (ISTA) used a 3D printing technique to fabricate high-performance thermoelectric materials, reducing production costs significantly.

Thermoelectric coolers, also called solid-state refrigerators, can induce localized cooling by using an electric current to transfer heat from one side of the device to another. Their long lifetimes, invulnerability to leaks, size and shape tunability, and the lack of moving parts (such as circulating liquids) make these devices ideal for diverse cooling applications, such as electronics. However, manufacturing them out of ingots is associated with high costs and generates lots of material waste. In addition, the devices’ performance remains limited.

Now, a team at the Institute of Science and Technology Austria (ISTA), led by Verbund Professor for Energy Sciences and Head of the Werner Siemens Thermoelectric Laboratory Maria Ibáñez, with first author and ISTA postdoc Shengduo Xu, developed high-performance thermoelectric materials out of the 3D printer and used them to build a thermoelectric cooler. “Our innovative integration of 3D printing into thermoelectric cooler fabrication greatly improves manufacturing efficiency and reduces costs,” says Xu. Also, in contrast to previous attempts at 3D printing thermoelectric materials, the present method yields materials with considerably higher performance. ISTA Professor Ibáñez adds, “With commercial-level performance, our work has the potential to extend beyond academia, holding practical relevance and attracting interest from industries seeking real-world applications.”

Pushing the boundaries of thermoelectric technologies

While all materials demonstrate some thermoelectric effect, it is often too negligible to be useful. Materials exhibiting a high enough thermoelectric effect are usually so-called “degenerate semiconductors,” i.e., “doped” semiconductors, to which impurities are introduced intentionally so they behave like conductors. Current state-of-the-art thermoelectric coolers are produced using ingot-based manufacturing techniques — expensive and power-hungry procedures requiring extensive machining processes after production, where a lot of material is wasted. “With our present work, we can 3D print exactly the needed shape of thermoelectric materials. In addition, the resulting devices exhibit a net cooling effect of 50 degrees in the air. This means that our 3D-printed materials perform similarly to ones that are significantly more expensive to manufacture,” says Xu. Thus, the team of ISTA material scientists proposes a scalable and cost-effective production method for thermoelectric materials, circumventing energy-intensive and time-consuming steps.
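As background (the article itself does not give the formulas), thermoelectric performance is usually summarized by the dimensionless figure of merit ZT = S²σT/κ, and ZT in turn bounds how far a single-stage Peltier cooler can cool below its hot side. A minimal sketch with generic, illustrative material values (not figures from the ISTA paper):

```python
import math

# Illustrative sketch with generic Bi2Te3-class numbers; the actual material
# parameters of the ISTA 3D-printed samples are not given in this article.

def figure_of_merit(seebeck, elec_cond, thermal_cond, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck**2 * elec_cond * temp_k / thermal_cond

def max_cooling(seebeck, elec_cond, thermal_cond, hot_side_k):
    """Maximum temperature drop of a single-stage Peltier cooler.

    Uses dT_max = Z * Tc^2 / 2 with the cold side Tc = Th - dT_max,
    which solves to Tc = (sqrt(1 + 2*Z*Th) - 1) / Z, where Z = S^2*sigma/kappa.
    """
    z = seebeck**2 * elec_cond / thermal_cond  # material Z, in 1/K
    tc = (math.sqrt(1 + 2 * z * hot_side_k) - 1) / z
    return hot_side_k - tc

S, sigma, kappa = 200e-6, 1e5, 1.5  # V/K, S/m, W/(m*K)
print(figure_of_merit(S, sigma, kappa, 300.0))   # ~0.8
print(max_cooling(S, sigma, kappa, 300.0))       # ~70 K
```

With these illustrative numbers the theoretical limit lands near 70 K, the same order of magnitude as the roughly 50-degree net cooling reported above.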

Printed materials with optimized particle bonding

Beyond applying 3D printing techniques to produce thermoelectric materials, the team designed the inks so that, as the carrier solvent evaporates, effective and robust atomic bonds are formed between grains, creating an atomically connected material network. As a result, the interfacial chemical bonds improve the charge transfer between grains. This explains how the team managed to enhance the thermoelectric performance of their 3D-printed materials while also shedding new light on the transport properties of porous materials. “We employed an extrusion-based 3D printing technique and designed the ink formulation to ensure the integrity of the printed structure and boost particle bonding. This allowed us to produce the first thermoelectric coolers from printed materials with comparable performance to ingot-based devices while saving material and energy,” says Ibáñez.

Medical applications, energy harvesting, and sustainability

Beyond rapid heat management in electronics and wearable devices, thermoelectric coolers could have medical applications, including burn treatment and muscle strain relief. In addition, the ink formulation method developed by the team of ISTA scientists can be adapted for other materials to be used in high-temperature thermoelectric generators — devices that can generate electrical voltage from a temperature difference. According to the team, such an approach could broaden the applicability of thermoelectric generators across various waste energy harvesting systems.

“We successfully executed a full-cycle approach, from optimizing the raw materials’ thermoelectric performance to fabricating a stable, high-performance end-product,” says Ibáñez. Xu adds, “Our work offers a transformative solution for thermoelectric device production and heralds a new era of efficient and sustainable thermoelectric technologies.”


Groundbreaking study shows potential of new mRNA vaccine to help fight tuberculosis

A new vaccine that boosts immunity against tuberculosis (TB) has been shown to be effective in pioneering pre-clinical trials, as part of a successful collaboration between three leading Australian research institutions.

A study into the vaccine’s effectiveness, published in eBioMedicine, was led by experts from the Sydney Infectious Diseases Institute at University of Sydney, the Centenary Institute and the Monash Institute of Pharmaceutical Science (MIPS) at Monash University.

Currently, the only approved vaccine for TB is the century-old Bacillus Calmette-Guérin (BCG) vaccine, which is widely used despite its inconsistent effectiveness in adults.

The study found that the new mRNA vaccine was successful in triggering an immune defence response that helped to reduce TB numbers in infected mice. In addition, the researchers discovered that for mice that had received the BCG vaccine, a booster dose of the new mRNA vaccine significantly improved their long-term protection.

The vaccine used mRNA technology, in which genetic instructions trigger an immune response in the body, as opposed to a weakened or inactivated version of the pathogen.

Senior author Professor Jamie Triccas, Deputy Director of the Sydney Infectious Diseases Institute, said: “Our findings demonstrate that an mRNA vaccine can induce potent, pathogen-specific immune responses that target TB, a disease that has long evaded effective vaccine development. This represents a major advance in TB vaccine research and provides a strong rationale for further clinical development.”

TB is the leading cause of infectious mortality worldwide, responsible for approximately 1.3 million deaths annually, with a particular prevalence in countries such as India, Indonesia, Vietnam and Pakistan.

The researchers hope that the mRNA vaccine will ultimately be more effective and consistent than the BCG when used in humans. This is because, unlike protein-based or live-attenuated vaccines (those that contain a weakened version of a pathogen), mRNA vaccines allow for rapid adaptation, making them an attractive option for global TB control efforts.

Dr Claudio Counoupas, co-lead author from the Centenary Institute’s Centre for Infection & Immunity, highlighted the vaccine’s potential impact: “mRNA vaccines offer a scalable, cost-effective, and adaptable platform that can be rapidly deployed against infectious diseases. This study is an important step in demonstrating that mRNA technology is not just for COVID-19 but could be a game-changer for bacterial diseases like TB.”

Professor Colin Pouton from Monash University, a key contributor to the study, explained: “The success of mRNA vaccines in the COVID-19 pandemic underscored their ability to generate strong immune responses. Our study provides the evidence that this platform can be harnessed for TB, potentially improving protection and durability of immunity in a way that traditional vaccines cannot.”

Following the study’s promising results, the research team is now looking to advance the vaccine to clinical trials.

“Our next goal is to refine the formulation and assess its efficacy in larger models before moving to human studies,” said Professor Triccas. “Given the global burden of TB and the limitations of current vaccines, we believe this platform could provide a new pathway toward eradicating this disease.”


Closing the recycle loop: Waste-derived nutrients in liquid fertilizer

Growing plants can be a joyous yet frustrating process, as plants require a delicate balance of nutrients, sun, and water to be productive.

Phosphorus and nitrogen, which are essential for plant growth, are often supplemented with chemical fertilizers to ensure a proper balance and good crop yields. However, excessive fertilizer use is causing these nutrients to accumulate in the environment, which in turn causes various environmental problems. For this reason, there is a growing movement to promote sustainable agriculture through the recycling of phosphorus and nitrogen. In Japan, a target has been set to reduce the use of chemical fertilizers by 30% by 2050.

With this in mind, a research group led by Ryosuke Endo, a lecturer, and graduate student Satoru Sakuma at Osaka Metropolitan University’s Graduate School of Agriculture conducted an experiment on producing recycled liquid fertilizer from organic waste as a replacement for chemical fertilizers. Using food waste, manure, and sewage sludge, the researchers filled nitrification reactors with organic waste and tap water, then extracted nitrified biogas digestate (f-NBD) to use as a seed culture. The phosphorus and nitrogen outputs from each type of organic waste were compared. This method produced nutrient solutions capable of replacing unsustainable chemical phosphorus and nitrogen.

Additionally, the researchers established an improved method that increases phosphorus solubility, as phosphorus often fails to dissolve during traditional fertilizer production. By temporarily lowering the pH of the waste-derived liquid fertilizer, the phosphorus is dissolved, yielding a high phosphorus content; the pH is then restored to its original level.

“This research suggests that it is possible to replace up to 100% of the nitrogen and up to 77% of the phosphorus in liquid chemical fertilizers with the solution produced in this study,” stated graduate student Sakuma.

“Reducing the use of chemical fertilizers has become a global trend,” Dr. Endo added, “but hydroponic agricultural systems are highly dependent on them. By applying the results of this research and reusing the phosphorus contained in organic waste as liquid fertilizer, we hope that this will lead to the development of recycling-oriented agriculture.”


Norovirus hospital cases reach highest level ever

More than 1,100 patients a day ill in hospital with vomiting bug last week in England.


‘My eye was saved by a placenta after acid attack’

Newcastle doctors use donated tissue to help save Paul Laskey’s eyesight.


New graves mark lost generation in drugs-ravaged Scottish town

In just a year and a half, at least eight victims of drug misuse have been buried in Oban’s Pennyfuir cemetery.


New therapy may effectively control HIV in Uganda

A multi-national, multi-institutional study led by Weill Cornell Medicine investigators found little natural resistance to a new HIV therapy called lenacapavir in a population of patients in Uganda.

The study, published Jan. 30 in the Journal of Antimicrobial Chemotherapy, adds to growing evidence that lenacapavir may be a powerful new tool in the global anti-HIV drug arsenal. Approximately 1.5 million people are living with HIV in Uganda.

“Our data shows that only 1.6% of the individuals studied are living with HIV strains that have any known lenacapavir-associated resistance mutations,” said senior author Dr. Guinevere Lee, assistant professor of virology in medicine at Weill Cornell Medicine. “That’s important because it shows lenacapavir is likely to be effective against strains of HIV circulating in East Africa.”

Since the 1990s, HIV drug combinations targeting different steps in the virus’ life cycle have been able to reduce virus load in patients to nearly undetectable levels. But drug resistance is a growing concern as the virus has evolved ways to thwart existing therapies. Lenacapavir, however, is the first drug to disrupt the protective capsid layer surrounding HIV’s genetic material (RNA), blocking the virus’s ability to reproduce and be transmitted from person to person.

Treatment twice a year with lenacapavir has been effective in patients who have never been treated and those with HIV strains that are resistant to other drugs. Last year, clinical trials showed that lenacapavir injections were 100% effective in preventing HIV infection among HIV-negative women in sub-Saharan Africa.

However, little information was available about pre-existing resistance to lenacapavir in less well-studied HIV-1 strains like subtypes A1 and D, which are more common in Eastern and Southern Africa. HIV-1 subtype B strains, which predominate in Europe and the United States, rarely have pre-existing mutations that would cause lenacapavir drug resistance.

Dr. Lee and her colleagues at Mbarara University of Science and Technology in Uganda and Massachusetts General Hospital in Boston helped fill that gap. They sequenced the capsid proteins from HIV-1 subtypes A1 and D in 546 Ugandan patients who had never used antiretroviral therapy. This approach allowed the investigators to examine naturally circulating viral variants.
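The screening step can be pictured as a simple sequence scan: flag any sequence carrying a residue from a resistance watchlist. A minimal sketch, with a made-up watchlist and placeholder sequences (the study’s actual resistance panel and capsid sequences are not given in this article):

```python
# Sketch of the kind of screen described: flag sequences carrying any
# mutation from a watchlist. The watchlist below is illustrative only,
# not the panel used in the study.
WATCHLIST = {66: "I", 67: "H", 70: "N", 74: "D"}  # position -> resistant residue

def resistant_positions(capsid_seq, watchlist=WATCHLIST):
    """Return watchlist positions (1-based) where the sequence carries
    the resistance-associated residue."""
    return [pos for pos, aa in watchlist.items()
            if pos <= len(capsid_seq) and capsid_seq[pos - 1] == aa]

# Toy cohort: a wild-type-like sequence and one carrying a change at position 67.
wild_type = "M" * 100                                # placeholder, not a real capsid
variant = wild_type[:66] + "H" + wild_type[67:]

hits = {i: resistant_positions(s) for i, s in enumerate([wild_type, variant])}
print(hits)  # {0: [], 1: [67]}
```

In the study’s terms, a cohort-level result is then just the fraction of sequences with a non-empty hit list.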

They found that none of the patients had genetic mutations that would lead to major lenacapavir resistance. Only nine participants had minor lenacapavir resistance mutations that could partially reduce the effectiveness, but not enough to cause full resistance to the drug.

“Our study supports lenacapavir’s potential efficacy in this region. As lenacapavir is rolled out in East Africa, further studies will need to monitor for the emergence of drug-resistant strains,” Dr. Lee said. “It is important that we ensure HIV research reaches understudied communities where unique viral strains circulate.”


Like human brains, large language models reason about diverse data in a general way

While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.

MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.

Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.

These findings could help scientists train future LLMs that are better able to handle diverse data.

“LLMs are big black boxes. They have achieved very impressive performance, but we have very little knowledge about their internal working mechanisms. I hope this can be an early step to better understand how they work so we can improve upon them and better control them when needed,” says Zhaofeng Wu, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this research.

His co-authors include Xinyan Velocity Yu, a graduate student at the University of Southern California (USC); Dani Yogatama, an associate professor at USC; Jiasen Lu, a research scientist at Apple; and senior author Yoon Kim, an assistant professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.

Integrating diverse data

The researchers based the new study upon prior work which hinted that English-centric LLMs use English to perform reasoning processes on various languages.

Wu and his collaborators expanded this idea, launching an in-depth study into the mechanisms LLMs use to process diverse data.

An LLM, which is composed of many interconnected layers, splits input text into words or sub-words called tokens. The model assigns a representation to each token, which enables it to explore the relationships between tokens and generate the next word in a sequence. In the case of images or audio, these tokens correspond to particular regions of an image or sections of an audio clip.

The researchers found that the model’s initial layers process data in its specific language or modality, like the modality-specific spokes in the human brain. Then, the LLM converts tokens into modality-agnostic representations as it reasons about them throughout its internal layers, akin to how the brain’s semantic hub integrates diverse information.

The model assigns similar representations to inputs with similar meanings, regardless of their data type, including images, audio, computer code, and arithmetic problems. Even though an image and its text caption are distinct data types, because they share the same meaning, the LLM would assign them similar representations.

For instance, an English-dominant LLM “thinks” about a Chinese-text input in English before generating an output in Chinese. The model has a similar reasoning tendency for non-text inputs like computer code, math problems, or even multimodal data.

To test this hypothesis, the researchers passed a pair of sentences with the same meaning but written in two different languages through the model. They measured how similar the model’s representations were for each sentence.

Then they conducted a second set of experiments where they fed an English-dominant model text in a different language, like Chinese, and measured how similar its internal representation was to English versus Chinese. The researchers conducted similar experiments for other data types.
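The comparison described above can be sketched with cosine similarity, a common way to measure how close two representation vectors are (the article does not name the paper’s exact metric, and the vectors below are toy stand-ins for real hidden states):

```python
import math

# Sketch of the comparison described: how close are a model's internal
# representations for two inputs? The vectors are toy stand-ins, not
# hidden states from a real LLM.

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical hidden states for a sentence in English, its Chinese
# translation, and an unrelated sentence.
h_english = [0.9, 0.1, 0.4]
h_chinese = [0.8, 0.2, 0.5]
h_unrelated = [-0.5, 0.9, -0.1]

print(cosine_similarity(h_english, h_chinese))    # high: same meaning
print(cosine_similarity(h_english, h_unrelated))  # low: different meaning
```

A semantic hub would show up as high similarity for the translation pair deep in the network, despite the surface languages differing.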

They consistently found that the model’s representations were similar for sentences with similar meanings. In addition, across many data types, the tokens the model processed in its internal layers were more similar to English-centric tokens than to tokens of the input data type.

“A lot of these input data types seem extremely different from language, so we were very surprised that we can probe out English tokens when the model processes, for example, mathematical or coding expressions,” Wu says.

Leveraging the semantic hub

The researchers think LLMs may learn this semantic hub strategy during training because it is an economical way to process varied data.

“There are thousands of languages out there, but a lot of the knowledge is shared, like commonsense knowledge or factual knowledge. The model doesn’t need to duplicate that knowledge across languages,” Wu says.

The researchers also tried intervening in the model’s internal layers using English text when it was processing other languages. They found that they could predictably change the model outputs, even though those outputs were in other languages.

Scientists could leverage this phenomenon to encourage the model to share as much information as possible across diverse data types, potentially boosting efficiency.

But on the other hand, there could be concepts or knowledge that are not translatable across languages or data types, like culturally specific knowledge. Scientists might want LLMs to have some language-specific processing mechanisms in those cases.

“How do you maximally share whenever possible but also allow languages to have some language-specific processing mechanisms? That could be explored in future work on model architectures,” Wu says.

In addition, researchers could use these insights to improve multilingual models. Often, an English-dominant model that learns to speak another language will lose some of its accuracy in English. A better understanding of an LLM’s semantic hub could help researchers prevent this language interference, he says.

This research is funded, in part, by the MIT-IBM Watson AI Lab.


Data from all 50 states shows early onset breast cancer is on the rise in younger women: Does place of exposure matter?

Breast cancer incidence trends in U.S. women under 40 vary by geography, a finding that supports incorporating location information, alongside established risk factors, into risk prediction to better identify groups of younger women at higher risk for early-onset breast cancer, according to a new study at Columbia University Mailman School of Public Health. This study comprehensively examined trends across different states, regions, metropolitan versus non-metropolitan areas, and racial and ethnic groups. It is also one of the first to incorporate registry data from all 50 states to examine age-specific breast cancer trends. The findings are published in the journal Cancer Causes & Control.

“Breast cancer incidence is increasing in U.S. women under 40, but until now, it was unknown if incidence trends varied by U.S. geographic region,” said Rebecca Kehm, PhD, assistant professor of Epidemiology at Columbia Mailman School, and first author. “Our findings can more accurately inform whether exposures that vary in prevalence across the U.S. also contribute to breast cancer risk in younger women.”

Using the U.S. Cancer Statistics database, the researchers analyzed age-adjusted breast cancer-incidence rates from 2001 to 2020 in women aged 25-39. They calculated the average annual percent change using statistical regression formulas and performed age-distribution analyses.
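A common way to compute the average annual percent change (APC) is a log-linear least-squares fit: regress ln(rate) on year, then convert the slope back to a yearly percentage. A minimal sketch with made-up rates (the study’s exact regression model is not given in this article):

```python
import math

# Log-linear APC sketch: fit ln(rate) = a + b*year by ordinary least squares,
# then APC = (exp(b) - 1) * 100. The rates below are fabricated for
# illustration, not the study's data.

def annual_percent_change(years, rates):
    """Average annual percent change from a log-linear least-squares fit."""
    n = len(years)
    log_rates = [math.log(r) for r in rates]
    mean_x = sum(years) / n
    mean_y = sum(log_rates) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, log_rates))
             / sum((x - mean_x) ** 2 for x in years))
    return (math.exp(slope) - 1) * 100

years = list(range(2001, 2021))
rates = [30.0 * 1.005 ** (y - 2001) for y in years]  # steady 0.5%/year growth
print(annual_percent_change(years, rates))  # ~0.5
```

On perfectly exponential toy data the fit recovers the 0.5 percent annual growth exactly; real registry rates are noisier, which is why regression is used rather than endpoint-to-endpoint comparison.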

“Two-thirds of all cancers identified both in the U.S. and globally are diagnosed in women,” said Mary Beth Terry, PhD, professor of Epidemiology at Columbia Mailman School of Public Health, and senior author of the study.

From 2001 to 2020, breast cancer incidence in women under 40 increased by more than 0.50 percent per year in 21 states, while remaining stable or decreasing in the other states. Incidence was 32 percent higher in the five states with the highest rates than in the five states with the lowest rates. The Western region had the highest rate of increase from 2001 to 2020; the Northeast had the highest absolute rate among women under 40 and experienced a significant increase over time. The South was the only region where breast cancer incidence among women under 40 did not increase from 2001 to 2020.

The overall incidence of early-onset breast cancer ranged from 28.6 cases per 100,000 women in Wyoming to 41 per 100,000 in Connecticut. The five states with the highest early-onset incidence from 2001 to 2020 were Maryland, New York, New Jersey, Hawaii, and Connecticut. Hispanic women had the lowest early-onset incidence rates in all regions, ranging from 26 per 100,000 in the Midwest to 32.6 per 100,000 in the Northeast.

Non-Hispanic White women were the only group to experience a statistically significant increase in early-onset breast cancer incidence across all four regions of the U.S. Non-Hispanic Black women had the highest incidence of early-onset breast cancer. This was true across the regions of the country.

The authors note the importance of investigating other risk factors including alcohol consumption, an established risk factor for breast cancer and which is known to vary across states and also be influenced by state alcohol policies.

“The increase in incidence we are seeing is alarming and cannot be explained by genetic factors alone, which evolve over much longer periods, nor by changes in screening practices, given that women under 40 years are below the recommended age for routine mammography screening,” noted Kehm.

“While the causes behind the rising incidence of early onset breast cancer are not yet fully understood, studying how trends vary across different population subgroups can offer valuable insights and help generate hypotheses for future research,” said Professor Terry. “We also are able to gain an understanding into the increase in breast cancer incidence among women who are not currently recommended for routine screening.”

Co-authors are Josephine Daaboul, Columbia Mailman School of Public Health and Fielding School of Public Health, University of California Los Angeles; and Parisa Tehranifar, Columbia Mailman School of Public Health.

The study was supported by the National Cancer Institute (R00CA263024).
