Pregnancy after sterilization turns out to be surprisingly common

Tubal sterilization is thought to be a permanent form of birth control and is the most common method of contraception nationally. But a new study led by UC San Francisco reports that tubal surgery fails often enough that some other forms of birth control are usually more effective.

The authors found that 3 to 5% of women in the United States who had their tubes tied later reported an unplanned pregnancy. This failure rate led the authors to suggest that patients who really want to avoid future pregnancy should instead use a contraceptive arm implant or intrauterine device (IUD).

The paper appears August 27 in NEJM Evidence.

Interest in permanent contraception has risen since the 2022 U.S. Supreme Court Dobbs decision removed federal protections for abortion and limited access to abortion services in many states. As a result, the researchers say, information about contraceptive effectiveness is especially important.

“Since the Dobbs decision, many more people are worried about how pregnancy may impact their health and family life,” said first author Eleanor Bimla Schwarz, MD, chief of the UCSF Division of General Internal Medicine at Zuckerberg San Francisco General. “This is especially true for patients with medical conditions like diabetes and high blood pressure that can complicate pregnancy.

“This study shows that tubal surgery cannot be considered the best way to prevent pregnancy,” Schwarz said. “People using a contraceptive arm implant or an IUD are less likely to become pregnant than those who have their tubes tied.”

Many U.S. women get tubal surgeries

About 65% of women ages 15 to 49 in the U.S. use birth control, according to national statistics, and tubal sterilization — an abdominal surgery in which the fallopian tubes are clamped or cut and removed — is used by more than 21% of women ages 30 to 39 and 39% of women older than 40. These surgeries are especially common among low-income people and those with chronic medical conditions.

Tubal sterilization aims to permanently end fertility, but as previously reported, women can nonetheless get pregnant. Based on older studies, the American College of Obstetricians and Gynecologists has advised that fewer than 1% of patients become pregnant after tubal sterilization.

In the new study, the authors examined four independent rounds of the National Survey of Family Growth from 2002 to 2015. Data were collected from more than 31,000 women, including 4,184 who reported having undergone tubal sterilization and were the focus of the study.

The researchers estimated that 2.9% of those who reported having been sterilized in 2013 to 2015 became pregnant within the first year after tubal surgery. The chance of pregnancy was highest among those who were younger at the time of their tubal surgery.
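The article doesn't include the study's analysis code, but survey-based failure estimates like the 2.9% first-year figure are typically produced with survival (life-table) methods that can handle respondents followed for less than a full year. A minimal sketch of that idea, with entirely invented follow-up data:

```python
# Illustrative sketch only — not the NEJM Evidence study's actual code or
# survey weighting. A simple discrete-time (life-table) estimate of the
# probability of pregnancy within 12 months of sterilization.

def first_year_failure(records):
    """records: list of (months_followed, became_pregnant) tuples.

    Handles censoring: women followed for fewer than 12 months only
    contribute to the months during which they were observed.
    """
    surviving = 1.0  # running probability of no pregnancy so far
    for month in range(1, 13):
        at_risk = sum(1 for m, _ in records if m >= month)
        events = sum(1 for m, p in records if p and m == month)
        if at_risk:
            surviving *= 1 - events / at_risk
    return 1 - surviving

# Hypothetical cohort: 97 women followed a full year without pregnancy,
# plus 3 pregnancies at months 4, 7 and 11.
cohort = [(12, False)] * 97 + [(4, True), (7, True), (11, True)]
print(round(first_year_failure(cohort), 3))  # prints 0.03
```

With complete follow-up the estimate reduces to a simple proportion (3 of 100 here); the life-table form matters when follow-up is incomplete.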

Patients who had Medicaid-funded procedures were not more likely than those with private insurance to become pregnant. In recent years, the proportion of respondents who reported a tubal sterilization funded by Medicaid has increased from 18% in 2002 to about 36% from 2013 to 2015.

“When choosing what birth control will work best for them, people consider many different things including safety, convenience and how fast they can start to use the method,” Schwarz said. “For people who have chosen a ‘permanent’ method, learning they got pregnant can be very distressing. It turns out this is unfortunately a fairly common experience.”

NHS 111 offers new mental health service

People in crisis can now access urgent help in England by calling 111 and talking to trained staff.

A switch for immune memory and anti-tumor immunity

A Ludwig Cancer Research study has identified a metabolic switch in the immune system’s T cells that is essential to the generation of memory T cells — which confer lasting immunity to previously encountered pathogens — and a T cell subtype found in tumors that drives anti-tumor responses during immunotherapy.

Led by Ludwig Lausanne’s Ping-Chih Ho and Alessio Bevilacqua and published in the current issue of Science Immunology, the study identifies PPARβ/δ, a master regulator of gene expression, as that essential molecular switch. Ho, Bevilacqua and their colleagues also show that the switch’s dysfunction compromises T cell “memory” of previously encountered viruses as well as the induction of anticancer immune responses in mice.

“Our findings suggest that we might be able to engage this switch pharmacologically to improve the efficacy of cancer immunotherapies,” said Ho.

When killer (or CD8+) T cells, which kill sick and cancerous cells, are activated by their target antigen, they switch on metabolic pathways that most other healthy cells only use when starved of oxygen. This type of metabolism — involving a metabolic process known as aerobic glycolysis — supports multiple processes essential to the killer T cell’s ability to proliferate and destroy its target cells.

Most killer T cells die off after they’ve cleared an infection. A few, however, transform into central memory CD8+ T cells (Tcms) that linger in the circulation to establish what we call immunity: the ability to mount a swift and lethal response to the same pathogen if it is ever encountered again. To achieve this transformation, T cells switch off aerobic glycolysis and otherwise adapt their metabolism to persist over the long term in tissues or in the circulation. How precisely they do this was until now unknown.

Aware that PPARβ/δ activates many of the metabolic processes characteristic of Tcms, Ho, Bevilacqua and their colleagues hypothesized it might play a key role in Tcm formation. They examined immunologic gene expression data collected from yellow fever vaccine recipients long after vaccination and, as expected, saw that PPARβ/δ was produced abundantly in their Tcms.

Their studies in mice revealed that PPARβ/δ is activated in T cells not in the peak phase of the immune response to viral infection but as that response winds down. Further, CD8+ T cells were unable to make the metabolic switch required to become circulating Tcms if they failed to express PPARβ/δ. Disrupting its expression impaired survival of such Tcms and resident memory T cells in the intestines following infection.

The researchers show that T cell exposure to interleukin-15 — an immune factor important for Tcm formation — and expression of a protein named TCF1 together engage the PPARβ/δ pathway. TCF1 is already known to be critical to the rapid expansion of Tcms when they encounter their target pathogen. The researchers show in this study that it is also important to the maintenance of Tcms.

As it happens, TCF1 expression is a hallmark of a subset of CD8+ T cells — progenitor-exhausted T cells — that are found in tumors. These progenitor-exhausted T cells follow one of two paths: they either become completely lethargic, “terminally exhausted” T cells; or, given the appropriate stimulus, proliferate to produce “effector” CD8+ T cells that kill cancer cells. Checkpoint blockade immunotherapies, like anti-PD-1 antibodies, can provide such stimulus.

The observation that TCF1 engages the PPARβ/δ pathway in T cells raised the possibility that PPARβ/δ might also be essential to the formation and maintenance of progenitor-exhausted T cells. The researchers showed that this is indeed the case: deleting the PPARβ/δ gene from T cells led to the loss of progenitor-exhausted T cells in a mouse model of melanoma. They also demonstrate that the PPARβ/δ pathway curtails the tendency of progenitor-exhausted T cells to slide toward terminal exhaustion.

To assess the therapeutic potential of their findings, Ho, Bevilacqua and their colleagues exposed T cells to a molecule that stimulates PPARβ/δ activity and used the treated cells against a mouse model of melanoma. These cells delayed the growth of melanoma tumors in mice more efficiently than their untreated counterparts and bore biochemical hallmarks of progenitor exhausted T cells primed to generate cancer-killing descendants.

“Based on these findings,” said Bevilacqua, “we suggest that targeting PPARβ/δ signaling may be a promising approach to improve T cell-mediated anti-tumor immunity.”

How exactly this might be achieved in people is a subject for further study that will doubtless be pursued by the Ho laboratory.

This study was supported by Ludwig Cancer Research, the Swiss National Science Foundation, the European Research Council, the Swiss Cancer Foundation, the Cancer Research Institute, Helmut Horten Stiftung, the Melanoma Research Alliance, the Taiwan Ministry of Science and Technology, the NYU Abu Dhabi Research Institute Award and Academia Sinica.

Ping-Chih Ho is a member of the Lausanne Branch of the Ludwig Institute for Cancer Research and a full professor at the University of Lausanne.

Study finds nearly half of U.S. counties have at least one ‘pharmacy desert’

Nearly half of counties in the United States have at least one ‘pharmacy desert’ where there is no retail pharmacy within 10 miles, according to a new study published by researchers at The Ohio State University Comprehensive Cancer Center — Arthur G. James Cancer Hospital and Richard J. Solove Research Institute (OSUCCC — James).

“As pharmacies close, more and more Americans are left without easy access to medications, with disproportionate consequences on certain communities. We found that patients in counties with higher social vulnerabilities and fewer primary care providers were up to 40% more likely to reside in a region with a pharmacy desert,” said Timothy Pawlik, MD, senior author of the study and holder of the Urban Meyer III and Shelley Meyer Chair for Cancer Research at the OSUCCC — James. Pawlik also serves as surgeon-in-chief at The Ohio State University Wexner Medical Center and as chair of the Department of Surgery in the Ohio State College of Medicine.

The U.S. Centers for Disease Control and Prevention (CDC) defines social vulnerability as “potential negative effects on communities caused by external stresses on human health.”

“These findings highlight how disparities compound the lack of access to basic health care and how it can lead to many people not taking their prescribed medications and having worse health outcomes, especially for chronic conditions like diabetes and hypertension,” Pawlik added.

Study results were published today in JAMA Network Open.

Methods and Results

Researchers reviewed data on communities located more than 10 miles from the nearest retail pharmacy from the publicly available TelePharm Map. Counties were classified as having a high pharmacy desert density if their number of pharmacy deserts per 1,000 residents was at or above the 75th percentile. Social vulnerability index (SVI) and healthcare provider data were obtained from the CDC’s Agency for Toxic Substances and Disease Registry and the Area Health Resource File databases, respectively. The researchers used statistical methods to analyze the relationships between these factors.
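As a rough illustration of the percentile rule described above (not the study's actual code — the county names, desert counts and populations below are invented):

```python
# Hedged sketch: flag a county as "high pharmacy desert density" when its
# deserts per 1,000 residents fall at or above the 75th percentile across
# counties. All data are hypothetical.

def high_density_counties(counties):
    """counties: list of (name, desert_count, population) tuples."""
    density = {name: 1000 * d / pop for name, d, pop in counties}
    values = sorted(density.values())
    # 75th percentile via the nearest-rank method (one common convention)
    cutoff = values[max(0, -(-75 * len(values) // 100) - 1)]
    return sorted(name for name, v in density.items() if v >= cutoff)

counties = [
    ("Adams", 0, 20_000),   # 0.00 deserts per 1,000 residents
    ("Baker", 1, 50_000),   # 0.02
    ("Clark", 3, 30_000),   # 0.10
    ("Dawes", 4, 10_000),   # 0.40
]
print(high_density_counties(counties))  # prints ['Clark', 'Dawes']
```

Real analyses would typically use a statistics library's percentile routine; the nearest-rank form is shown only to keep the sketch dependency-free.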

The study found almost 46% of the 3,143 counties had at least one pharmacy desert. Counties with a high density of pharmacy deserts had higher social vulnerability and fewer primary care providers. People in these high-density pharmacy desert areas were more likely to face difficulties accessing medications and healthcare services.

Collaborators in this study include Giovanni Catalano, MD, Muhammad Muntazir Mehdi Khan, MBBS, and Odysseas P. Chatzipanagiotou, MD.

Placebos reduce stress, anxiety, depression — even when people know they are placebos

A study out of Michigan State University found that nondeceptive placebos, or placebos given with people fully knowing they are placebos, effectively manage stress — even when the placebos are administered remotely.

Researchers recruited participants experiencing prolonged stress from the COVID-19 pandemic for a two-week randomized controlled trial. Half of the participants were randomly assigned to a nondeceptive placebo group and the other half to a control group that took no pills. The participants interacted with a researcher online through four virtual sessions on Zoom. Those in the nondeceptive placebo group received information on the placebo effect and were sent placebo pills in the mail along with instructions on taking the pills.

The study, published in Applied Psychology: Health and Well-Being, found that the nondeceptive group showed a significant decrease in stress, anxiety and depression in just two weeks compared to the no-treatment control group. Participants also reported that the nondeceptive placebos were easy to use, not burdensome and appropriate for the situation.

“Exposure to long-term stress can impair a person’s ability to manage emotions and cause significant mental health problems long-term, so we’re excited to see that an intervention that takes minimal effort can still lead to significant benefits,” said Jason Moser, co-author of the study and professor in MSU’s Department of Psychology. “This minimal burden makes nondeceptive placebos an attractive intervention for those with significant stress, anxiety and depression.”

The researchers are particularly hopeful in the ability to remotely administer the nondeceptive placebos by health care providers.

“This ability to administer nondeceptive placebos remotely increases scalability potential dramatically,” said Darwin Guevarra, co-author of the study and postdoctoral scholar at the University of California, San Francisco, “Remotely administered nondeceptive placebos have the potential to help individuals struggling with mental health concerns who otherwise would not have access to traditional mental health services.”

CRISPR-based genome editing in Nile grass rats

A team of researchers at Michigan State University has developed a set of methods that enabled the first successful CRISPR-based genome editing in Nile grass rats.

The study, published in BMC Biology, is the first to report successful genome editing in Nile grass rats. As diurnal rodents, Nile grass rats have sleep/wake patterns similar to those of humans, which could be advantageous in preclinical and translational research.

Currently, preclinical research relies heavily on laboratory mice, which are nocturnal rodents, active at night and asleep during the day. Because of these different activity patterns, diurnal and nocturnal mammals have evolved differently, including distinct wiring of neural circuits and gene-regulatory networks.

“The differences between diurnal and nocturnal mammals present a significant translational flaw when applying the research findings obtained from mice to humans. Numerous therapeutic agents such as neuroprotectants proven effective in mouse or rat models of cerebral ischemia have failed in human clinical stroke trials, with mounting evidence suggesting the nocturnal and diurnal differences causing such failures,” said Lily Yan, co-author of the study and professor in MSU’s Department of Psychology.

Katrina Linning-Duffy and Jiaming Shi, also co-authors on the research, work in Yan’s Lab.

Because the differences between diurnal and nocturnal animals are complex, the researchers believe a diurnal model is essential to untangle the relationship between genes and behaviors that are relevant to human health and disease.

The method developed includes a superovulation protocol that can yield nearly 30 eggs per female. They also developed protocols for in vitro — outside of the body — embryo culture and manipulation and in vivo — in the living body — gene targeting using GONAD, or Genome editing via Oviductal Nucleic Acids Delivery, methods.

The Nile grass rat colony is a unique resource available at MSU. Thanks to the joint efforts of the departments of Psychology and Integrative Biology and the Transgenic and Genome Editing Facility, a Nile grass rat colony was established on campus in 1993.

Research projects at MSU involving Nile grass rats have been continuously funded for more than 30 years. Animals from the MSU grass rat colony have been shared with over 20 research labs in the U.S., Canada, Belgium, China and Japan that study topics including circadian rhythms and sleep, mood and cognition, immune function, metabolic syndromes and evolutionary biology.

Huirong Xie is the program director of MSU’s Transgenic and Genome Editing Facility.

“We hope that Nile grass rats will eventually become an alternative mammalian model to investigate genes’ roles in any biological processes, particularly in which chronotype (diurnal vs. nocturnal) is a critical biological variable,” Yan said. “This study will be an essential first step towards the far-reaching goal.”

Researcher finds sound progress in babies’ speech development

The sounds babies make in their first year of life may be less random than previously believed, according to a language development researcher from The University of Texas at Dallas.

Dr. Pumpki Lei Su, an assistant professor of speech, language, and hearing in the School of Behavioral and Brain Sciences, is co-lead author on two recent articles in which researchers examined the sounds babies make. The results suggest that children in their first year are more active than previously thought in their acquisition of speech.

“We observed in these studies that infant vocalizations are not produced randomly; they form a pattern, producing three categories of sounds in clusters,” said Su, who also directs the Language Interaction and Language Acquisition in Children Lab (LILAC Lab) at the Callier Center for Communication Disorders. “The home recordings we analyzed included times when adults were interacting with their child and when children were on their own, showing that children explore their vocal capabilities with or without language input from an adult.”

One study, published May 29 in PLOS ONE, focused on typically developing infants, and the other, published Feb. 25 in the Journal of Autism and Developmental Disorders, focused on infants who later received a confirmed diagnosis of autism. The researchers documented how children “play” vocally, learning what actions produce certain sounds and then repeating that process.

Within the past 40 to 50 years, scientists have realized that vocalizations before a child’s first word are meaningful precursors for speech and can be broken into sequential stages of cooing, vocal play and babbling. Su’s team studied a dataset of all-day home recordings from more than 300 children amassed by the Marcus Autism Center, a subsidiary of Children’s Healthcare of Atlanta, and coded by senior author Dr. D. Kimbrough Oller’s team at The University of Memphis.

“Parents tell us that sometimes a baby will just scream or make low-frequency sounds for a really long period. But it’s never been studied empirically,” Su said. “With access to a huge dataset from hundreds of children during the first 12 months of their lives, we set out to quantitatively document how babies explore and cluster patterns as they practice different sound categories.”

Sound types are characterized by pitch and wave frequency as squeals, growls or vowel-like sounds. The PLOS ONE study used more than 15,000 recordings from 130 typically developing children in the dataset. Infants showed significant clustering patterns: 40% of recordings showed significantly more squeals than expected by chance, and 39% showed clustered growls. Clustering was common at every age, with the highest rates occurring after 5 months of age.

“Of the 130 infants, 87% showed at least one age at which their recordings showed significant squeal clustering and at least one age with significant growl clustering,” Su said. “There was not a single infant who, on evaluation of all the available recordings, showed neither significant squeal nor growl clustering.”

Su said the study represents the first large-scale empirical study investigating the nonrandom occurrence of the three main sound types in infancy.
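The articles don't detail the authors' statistical procedure; one simple, hypothetical way to flag a recording as showing "more squeals than expected by chance" is a one-sided binomial test against a baseline squeal rate. All numbers below are invented:

```python
# Illustrative sketch, not the authors' analysis code. Assumes a baseline
# squeal rate estimated from all recordings; counts are made up.
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of seeing k or more squeals."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

baseline_rate = 0.2   # squeals as a share of all vocalizations (assumed)
vocalizations = 50    # vocalizations in one recording
squeals = 18          # observed squeals in that recording

p_value = binomial_tail(squeals, vocalizations, baseline_rate)
print(p_value < 0.05)  # prints True: more squeals than chance would predict
```

A full analysis would also need to correct for testing many recordings per child; the sketch shows only the single-recording logic.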

In the Journal of Autism and Developmental Disorders article, Su and her colleagues demonstrated that this exploration behavior also occurs during the first year in children who are later diagnosed with autism spectrum disorder.

“Whether or not a child is eventually diagnosed with autism, they are clustering sounds within one vocal category at a time,” Su said. “While one cannot rule out the possibility that some patterns may be mimicry, these are not just imitations; they are doing this with and without the presence of a parent, even in the first month of life. This process of learning to produce sounds is more endogenous, more spontaneous than previously understood.

“We tend to think babies are passive recipients of input. And certainly, parents are their best teachers. But at the same time, they’re doing a lot of things on their own.”

Su has received a three-year grant from the National Institute on Deafness and Other Communication Disorders (NIDCD) to study parents’ use of “parentese” — or baby talk — with autistic children. Parentese is an exaggerated style of speech often containing high-pitched elongated words and singsong diction.

Parentese is portrayed in the literature as a type of optimal input for typically developing children, who tend to pay better attention and respond to it more than they do to normal speech. It also helps children learn to segment words. But is it also ideal for autistic children?

“One hypothesis of why parentese works is that it encourages social interaction by being very animated,” Su said. “Autistic children have differences in social communication and responses to sensory stimuli. Would they also find parentese engaging? Could it be too loud or extreme? This new grant will allow me to examine whether parentese facilitates word learning for autistic children compared to a more standard adult-directed register.”

Other researchers who contributed to both articles include co-lead author Dr. Hyunjoo Yoo of The University of Alabama; Dr. Edina Bene from The University of Memphis; Dr. Helen Long of Case Western Reserve University; and Dr. Gordon Ramsay from the Emory University School of Medicine. Additional researchers from the Marcus Autism Center contributed to the Journal of Autism and Developmental Disorders study.

The research was funded by grants from the NIDCD (R01DC015108) and the National Institute of Mental Health (P50MH100029), both components of the National Institutes of Health.

Two epicenters led to Japan’s violent Noto earthquake on New Year’s Day

The first seven months of 2024 have been so eventful, it’s easy to forget that the year started off with a magnitude 7.5 earthquake centered beneath Japan’s Noto Peninsula on New Year’s Day. The earthquake killed more than 280 people and damaged more than 83,000 homes.

Geologists have now discovered that the earthquake began almost simultaneously at two different points on the fault, allowing the seismic rupture to encircle and break through a resistant area on the fault known as a barrier. This rare “dual-initiation” mechanism applied intense pressure from both sides of the barrier, leading to the powerful release of energy and substantial ground shaking across the Noto Peninsula.

The Noto earthquake was preceded by intense seismic swarms, which are sequences of many small earthquakes that can sometimes lead to a larger, catastrophic event. By using advanced seismic and geodetic technologies, the research team meticulously analyzed the movements within the Earth during this swarm that led to the earthquake.

The study, published in the journal Science, offers insight into the role of fault barriers, also known as asperities, in earthquake genesis, and will help improve seismic risk assessments and future earthquake forecasting.

Earthquakes happen when fractures in the Earth’s crust, known as faults, allow blocks of rocks on either side of the fault to move past each other. This movement is localized, not continuous along the fault line, because the line is not even or smooth, which dissipates energy and eventually stops the movement.

A barrier is a rough area that locks the two sides of a fault in place. Barriers absorb the energy of fault movement, slowing it down or stopping it altogether. But there’s only so much energy the barrier can absorb, and under the right conditions, the pent-up energy causes it to break violently, leading to strong shaking. A swarm of small earthquakes might not be enough to break a barrier, but if much stronger subsequent movement occurs on the fault, the barrier’s rupture will release all that stored-up energy.

Led by Lingsen Meng, a UCLA associate professor of earth, planetary and space sciences, UCLA graduate student Liuwei Xu and UC Santa Barbara geophysics professor Chen Ji, an international team of researchers from the United States, France, China and Japan analyzed geospatial data and recordings of seismic waves to understand the relationships between the swarm of smaller tremors and the larger earthquake that followed them. They identified a previously unknown barrier in the region of the swarm.

To their surprise, the New Year’s Day earthquake began almost simultaneously in two separate locations on the fault. Energy from each location moved toward the barrier, causing a violent rupture and extremely strong shaking.

“The earthquake started in two places and circled together,” Meng said. “The first one started waves that traveled fast and triggered a different epicenter. Then both parts propagated outward together and met in the middle, where the barrier was, and broke it.”

The mechanics resemble bending a pencil on both ends until it snaps in the middle.

The finding was surprising because although dual initiation, as the process is known, has been seen in simulations, it has been much harder to observe in nature. Dual-initiation mechanisms require just the right conditions, which can be set in the lab but are less predictable in the real world.

“We were able to observe it because Japan has very good seismic monitoring stations and we also used GPS and satellite radar data. We grabbed all the data we could find! It’s only through all of this data together that we got really good resolution on this fault and could get into these fine details,” Meng said.

The vast majority of earthquakes don’t have anywhere near this level of data collected, so it’s possible that earthquakes with dual-initiation mechanisms are more common than geologists think.

“It could be that through better imaging and resolution, we’ll identify more like this in the future,” Meng said.

Earthquakes with dual epicenters have a higher risk for stronger shaking because there is stronger movement. Meng’s group plans to consider future scenarios to learn about the conditions and probabilities of these earthquakes.

“Our findings emphasize the complex nature of earthquake initiation and the critical conditions that can lead to large-scale seismic events,” Meng said. “Understanding these processes is vital for improving our ability to predict and mitigate the impacts of future earthquakes.”

Key takeaways

  • The magnitude-7.5 earthquake beneath Japan’s Noto Peninsula on Jan. 1, 2024, occurred when a “dual-initiation mechanism” applied enough energy from two different locations to break through a fault barrier — an area that locks two sides of a fault in place and absorbs the energy of fault movement, slowing it down or stopping it altogether.
  • An international team of researchers led by UCLA graduate student Liuwei Xu, professor Lingsen Meng and UC Santa Barbara’s Chen Ji analyzed a preceding seismic swarm and identified a previously unknown barrier in the region of the swarm.
  • The team’s data collection methods could aid future research into the conditions and probabilities of dual-initiation earthquakes.

A leaky sink: Carbon emissions from forest soil will likely grow with rising temperatures

The soils of northern forests are key reservoirs that help keep the carbon dioxide that trees inhale and use for photosynthesis from making it back into the atmosphere.

But a unique experiment led by Peter Reich of the University of Michigan is showing that, on a warming planet, more carbon is escaping the soil than is being added by plants.

“This is not good news because it suggests that, as the world warms, soils are going to give back some of their carbon to the atmosphere,” said Reich, director of the Institute for Global Change Biology at U-M.

“The big picture story is that losing more carbon is always going to be a bad thing for climate,” said Guopeng Liang, the lead author of the study published in Nature Geoscience. Liang was a postdoctoral researcher at the University of Minnesota during the study and is now a postdoctoral researcher at Yale University and an exchange fellow at the Institute for Global Change Biology.

By understanding how rising temperatures affect the flow of carbon into and out of soils, scientists can better understand and forecast changes in our planet’s climate. Forests, for their part, store roughly 40% of the Earth’s soil carbon.

Because of that, there have been many research projects studying how climate change affects the carbon flux from forest soils. But few have lasted for longer than three years and most look at warming either in the soil or in air above it, but not both, Reich said.

In the experiment, believed to be the first of its kind, Reich and his colleagues controlled both soil and above-ground temperatures in open air, without any kind of enclosure. They also kept the study running for more than a dozen years.

“Our experiment is unique,” said Reich, who is also a professor at the U-M School for Environment and Sustainability. “It’s far and away the most realistic experiment like this in the world.”

The trade-off is that running such a sophisticated experiment for so long is expensive. The research was supported by the National Science Foundation, the U.S. Department of Energy and the University of Minnesota, where Reich is also a Distinguished McKnight University Professor.

Joining Reich and Liang on the study were colleagues from the University of Minnesota, the University of Illinois and the Smithsonian Environmental Research Center.

The team worked at two sites in northern Minnesota on a total of 72 plots, investigating two different warming scenarios compared with ambient conditions.

In one, plots were kept at 1.7 degrees Celsius above ambient and, in the other, the difference was 3.3 degrees Celsius (or about 3 and 6 degrees Fahrenheit, respectively). Soil respiration — the process that releases carbon dioxide — increased by 7% in the more modest warming case and by 17% in the more extreme case.
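As a quick arithmetic check of the Fahrenheit figures above (a temperature difference converts by the 9/5 factor alone, with no +32 offset):

```python
# Converting the two warming deltas from Celsius to Fahrenheit.
for delta_c in (1.7, 3.3):
    delta_f = delta_c * 9 / 5
    print(f"{delta_c} C above ambient = {delta_f:.2f} F above ambient")
# 1.7 C -> 3.06 F and 3.3 C -> 5.94 F, i.e. "about 3 and 6 degrees Fahrenheit"
```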

The respired carbon comes from the metabolism of plant roots and of soil microbes feeding on carbon-containing snacks available to them: sugars and starches leached out of roots, dead and decaying plant parts, soil organic matter, and other live and dead microorganisms.

“The microbes are a lot like us. Some of what we eat is respired back to the atmosphere,” Reich said. “They use the same exact metabolic process we do to breathe CO2 back out into the air.”

Although the amount of respired carbon dioxide increased in plots at higher temperatures, it likely didn’t jump as much as it could have, the researchers found.

Their experimental setup also accounted for soil moisture, which decreased at warmer temperatures because of faster water loss from plants and soils. Microbes prefer wetter soils, however, and the drier conditions constrained respiration.

“The take-home message here is that forests are going to lose more carbon than we would like,” Reich said. “But maybe not as much as they would if this drying wasn’t happening.”


Bioengineers develop lotus leaf-inspired system to advance study of cancer cell clusters

The lotus leaf is a pioneer of self-cleaning, water-repellent engineering. Water droplets all but hover on its surface, whose unique texture traps air in its nanosized ridges and folds.

Rice University bioengineers report harnessing the lotus effect to develop a system for culturing cancer cell clusters that can shed light on hard-to-study tumor properties. The new zinc oxide-based culturing surface mimics the lotus leaf surface structure, providing a highly tunable platform for the high-throughput generation of three-dimensional nanoscale tumor models.

The superhydrophobic array device (SHArD) designed by Rice bioengineer Michael King and collaborators can be used to create tunable, compact, physiologically relevant models for studying the progression of cancer, including metastasis — the stage in the disease when cancerous cells travel through the bloodstream from a primary tumor site to other parts of the body.

“The study of metastasis — the leading cause of cancer deaths — poses a particular challenge in part due to the difficulty of developing accurate, high-throughput models,” said King, who is corresponding author on a study published in ACS Nano that describes the new culturing platform. “We hope this tool will unlock new knowledge about this problematic stage of the disease and help us identify ways to intervene in order to stop or prevent it from happening.”

Scientists and clinicians now rely on blood samples containing circulating tumor cells — a key marker of metastasis — to understand the properties of primary tumors as well as what causes cancer to spread. Often referred to as “liquid biopsy,” this sampling approach typically does not yield enough of a “catch” to enable in-depth, large-scale studies of metastatic processes.

“‘Safety in numbers’ unfortunately also applies to cancer cells circulating in the bloodstream,” said Alexandria Carter, a researcher in the King lab who is a co-author on the study. “Cancer cells traveling alone are more likely to succumb to shear stress destruction or immune cell attacks. However, when they travel in groups, the likelihood that they successfully reach and settle in other parts of the body increases.

“Those few lone cancer cells in a single blood draw are already rare, so isolating enough clusters for a detailed study is especially challenging. This is why SHArD is an exciting new tool for understanding primary and metastatic cancer.”

The King lab had previously succeeded in creating nanorod layers of halloysite, a naturally occurring substance whose texture promotes the adhesion of circulating tumor cells while simultaneously repelling blood cells.

“When Kalana Jayawardana joined our lab as a new postdoctoral fellow in 2018, he started to experiment with growing zinc oxide nanorod surfaces,” said King, a Cancer Prevention and Research Institute of Texas Scholar who recently joined Rice as the E.D. Butcher Chair of Bioengineering and special adviser to the provost on life science collaborations with the Texas Medical Center. “At first, we didn’t have a specific application in mind, but we were curious and hopeful that the new material would have special properties that would be useful for cancer biology.”

The project was later taken over by a doctoral student in the King lab, Maria Lopez-Cavestany, and took off in an exciting direction. Lopez-Cavestany, now a Ph.D. graduate, is the first author on the study.

Once they were able to grow a stable “carpet” of zinc oxide nanorods, the researchers added a Teflon-like coating on top, in essence recreating the lotus leaf structure — nanoscale roughness combined with a hydrophobic layer that together gave rise to true superhydrophobicity, a word stemming from the Greek for “extreme fear of water.” To create SHArD, the researchers added a microwell grid with precisely sized compartments, then tested the system to assess its performance.

“SHArD is ready to use in biomedical research,” Carter said. “Any lab with clean room access can follow our protocols and create versions of this platform that meet the exact needs of their specific research projects.”

Initially intended as a means to reliably culture primary tumor models at a higher throughput, SHArD is highly tunable and can easily be adapted to culture metastatic clusters as well. The fact that SHArD was successfully used to grow spheroidal models of primary tumors already expands the cancer modeling toolkit, making it possible to create superhydrophobic culturing devices in the absence of highly specialized equipment.

“The cluster-forming device has opened the door to new areas of research into the dangerous clusters found in the bloodstream of late-stage cancer patients,” King said.
