Tech meets tornado recovery

It started as a low, haunting roar building in the distance. It grew into a deafening thunder that drowned out all else. The sky turned an unnatural shade of green, then black. The wind lashed at trees and buildings with brutal force. Sirens wailed. Windows shattered and buildings exploded.

In spring 2011, Joplin, Missouri, was devastated by an EF5 tornado with estimated winds exceeding 200 mph. The storm caused 161 fatalities, injured over 1,000 people, and damaged or destroyed around 8,000 homes and businesses. The tornado carved a mile-wide path through the densely populated south-central area of the city, leaving behind miles of splintered rubble and causing over $2 billion in damage.

The powerful winds of tornadoes often surpass the design limits of most residential and commercial buildings. Traditional methods of assessing damage after a disaster can take weeks or even months, delaying emergency response, insurance claims and long-term rebuilding efforts.

New research from Texas A&M University might change that. Led by Dr. Maria Koliou, associate professor and Zachry Career Development Professor II in the Zachry Department of Civil and Environmental Engineering at Texas A&M, researchers have developed a new method that combines remote sensing, deep learning and restoration models to speed up building damage assessments and predict recovery times after a tornado. Once post-event images are available, the model can produce damage assessments and recovery forecasts in less than an hour.

The researchers published their model in Sustainable Cities and Society.

“Manual field inspections are labor-intensive and time-consuming, often delaying critical response efforts,” said Abdullah Braik, coauthor and a civil engineering doctoral student at Texas A&M. “Our method uses high-resolution sensing imagery and deep learning algorithms to generate damage assessments within hours, immediately providing first responders and policymakers with actionable intelligence.”

The model does more than assess damage — it also helps predict repair costs and estimate recovery times. By combining deep learning, a type of artificial intelligence, with advanced recovery models, the researchers can estimate these timelines and costs under different scenarios.

“We aim to provide decision-makers with near-instantaneous damage assessment and probabilistic recovery forecasts, ensuring that resources are allocated efficiently and equitably, particularly for the most vulnerable communities,” Braik said. “This enables proactive decision-making in the aftermath of a disaster.”

How It Works

Researchers combined three tools to create the model: remote sensing, deep learning and restoration modeling.

Remote sensing uses high-resolution satellite or aerial images from sources such as NOAA to show the extent of damage across large areas.

“These images are crucial because they offer a macro-scale view of the affected area, allowing for rapid, large-scale damage detection,” Braik said.

Deep learning automatically analyzes these images to identify the severity of the damage. The AI is trained before disasters on thousands of images of past events, learning to recognize visible signs of damage such as collapsed roofs, missing walls and scattered debris. The model then classifies each building into categories such as no damage, moderate damage, major damage, or destroyed.
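As a rough illustration of this classification step, a minimal sketch is shown below: a convolutional network assigns one of the four damage categories to a cropped image of a single building. The ResNet backbone, the checkpoint file and the patch filename are hypothetical stand-ins, not details taken from the paper.

```python
# Minimal sketch: classifying per-building image patches into damage states.
# The four labels follow the article; the backbone, checkpoint and input
# file are hypothetical placeholders, not the authors' actual pipeline.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

DAMAGE_CLASSES = ["no damage", "moderate damage", "major damage", "destroyed"]

# A standard ResNet-18 with a 4-way head stands in for the paper's network.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(DAMAGE_CLASSES))
model.load_state_dict(torch.load("damage_model.pt"))  # hypothetical checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_patch(path: str) -> str:
    """Return a damage label for one building patch cropped from aerial imagery."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return DAMAGE_CLASSES[int(logits.argmax(dim=1))]

print(classify_patch("building_0042.png"))  # e.g. "major damage"
```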

Restoration modeling uses past recovery data, building and infrastructure details and community factors — like income levels or access to resources — to estimate how long it might take for homes and neighborhoods to recover under different funding or policy conditions.
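To give a sense of what a restoration model produces, here is a minimal Monte Carlo sketch that samples recovery times by damage state and applies a delay multiplier standing in for community factors such as limited access to resources. The lognormal form and all parameter values are invented for illustration; they are not the paper's calibrated inputs.

```python
# Minimal sketch of a restoration model: Monte Carlo sampling of repair
# times per damage state. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical median repair times (days) and lognormal dispersions.
REPAIR_PARAMS = {
    "moderate damage": (60, 0.4),
    "major damage": (180, 0.5),
    "destroyed": (365, 0.6),
}

def sample_recovery_days(state, delay_factor=1.0, n=10_000):
    """Sample recovery times; delay_factor > 1 models limited resources."""
    median, beta = REPAIR_PARAMS[state]
    return delay_factor * rng.lognormal(np.log(median), beta, size=n)

days = sample_recovery_days("major damage", delay_factor=1.3)
print(f"median: {np.median(days):.0f} days, "
      f"90th percentile: {np.percentile(days, 90):.0f} days")
```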

When these three tools are combined, the model can quickly assess the damage and predict short- and long-term recovery timelines for communities affected by disasters.

“Ultimately, this research bridges the gap between rapid disaster assessment and strategic long-term recovery planning, offering a risk-informed yet practical framework for enhancing post-tornado resilience,” Braik said.

Testing the Model

Koliou and Braik tested their model on data from the 2011 Joplin tornado because of the storm's massive size and intensity and the availability of high-quality post-disaster information. The tornado destroyed thousands of buildings, creating a diverse dataset that allowed the model to be trained and tested across various levels of structural damage. Detailed ground-level damage assessments provided a reliable benchmark for checking how accurately the model could classify the severity of the damage.

“One of the most interesting findings was that, in addition to detecting damage with high accuracy, we could also estimate the tornado’s track,” Braik said. “By analyzing the damage data, we could reconstruct the tornado’s path, which closely matched the historical records, offering valuable information about the event itself.”

Future Directions

Researchers are working on using this model for other types of disasters, such as hurricanes and earthquakes, as long as satellites can detect damage patterns.

“The key to the model’s generalizability lies in training it to use past images from specific hazards, allowing it to learn the unique damage patterns associated with each event,” Braik said. “We have already tested the model on hurricane data, and the results have shown promising potential for adapting to other hazards.”

The research team believes their model could be critical in future disaster response, helping communities recover faster and more efficiently. The team wants to extend the model beyond damage assessment to track recovery over time and provide real-time updates on rebuilding progress.

“This will allow for more dynamic and informed decision-making as communities rebuild,” he said. “We aim to create a reliable tool that enhances disaster management efficiency and supports quicker recovery efforts.”

The technology has the potential to transform how emergency officials, insurers and policymakers respond in the crucial hours and days after a storm by delivering near-instant assessments and recovery projections.

Funding for this research was provided by the National Science Foundation.

Research shows how hormone can reverse fatty liver disease in mice

A pioneering research study published today in Cell Metabolism details how the hormone FGF21 (fibroblast growth factor 21) can reverse the effects of fatty liver disease in mice. The hormone works primarily by signaling the brain to improve liver function.

University of Oklahoma researcher Matthew Potthoff, Ph.D., is the lead author of the study, which provides valuable insight into the mechanism of action of the hormone, a target for a new class of highly anticipated drugs now in Phase 3 clinical trials.

“Fatty liver disease, or MASLD (metabolic dysfunction-associated steatotic liver disease), is a buildup of fat in the liver. It can progress to MASH (metabolic dysfunction-associated steatohepatitis) during which fibrosis and, ultimately, cirrhosis can occur. MASLD is becoming a very big problem in the United States, affecting 40% of people worldwide, and there is currently only one treatment approved by the Food and Drug Administration to treat MASH. A new class of drugs, based on FGF21 signaling, is showing good therapeutic benefits in clinical trials, but until now, the mechanism for how they work has been unclear,” said Potthoff, a professor of biochemistry and physiology at the University of Oklahoma College of Medicine and deputy director of OU Health Harold Hamm Diabetes Center.

The study’s results demonstrated that FGF21 triggered signaling in the mice that changed the liver’s metabolism. As a result, fat in the liver was lowered and fibrosis was reversed. The hormone also sent a separate signal directly to the liver, specifically to lower cholesterol.

“It’s a feedback loop where the hormone sends a signal to the brain, and the brain changes nerve activity to the liver to protect it,” Potthoff said. “The majority of the effect comes from the signal to the brain as opposed to signaling the liver directly, but together, the two signals are powerful in their ability to regulate the different types of lipids in the liver.”

Similar to the family of weight loss drugs known as GLP-1s (glucagon-like peptide 1), which help regulate blood sugar levels and appetite, FGF21 acts on the brain to regulate metabolism. In addition, both are hormones produced from peripheral tissues — GLP-1 from the intestine and FGF21 from the liver — and both work by sending a signal to the brain.

“It is interesting that this metabolic hormone/drug works primarily by signaling to the brain instead of to the liver directly, in this case,” he said. “FGF21 is quite powerful because it not only led to a reduction of fat, but it also mediated the reversal of fibrosis, which is the pathological part of the disease, and it did so while the mice were still eating a diet that would cause the disease. Now, we not only understand how the hormone works, but it may guide us in creating even more targeted therapies in the future.”

Study shows vision-language models can’t handle queries with negation words

Imagine a radiologist examining a chest X-ray from a new patient. She notices the patient has swelling in the tissue but does not have an enlarged heart. Looking to speed up diagnosis, she might use a vision-language machine-learning model to search for reports from similar patients.

But if the model mistakenly identifies reports with both conditions, the most likely diagnosis could be quite different: If a patient has tissue swelling and an enlarged heart, the condition is very likely to be cardiac related, but with no enlarged heart there could be several underlying causes.

In a new study, MIT researchers have found that vision-language models are extremely likely to make such a mistake in real-world situations because they don’t understand negation — words like “no” and “doesn’t” that specify what is false or absent.

“Those negation words can have a very significant impact, and if we are just using these models blindly, we may run into catastrophic consequences,” says Kumail Alhamoud, an MIT graduate student and lead author of this study.

The researchers tested the ability of vision-language models to identify negation in image captions. The models often performed no better than a random guess. Building on those findings, the team created a dataset of images with corresponding captions that include negation words describing missing objects.

They show that retraining a vision-language model with this dataset leads to performance improvements when a model is asked to retrieve images that do not contain certain objects. It also boosts accuracy on multiple choice question answering with negated captions.

But the researchers caution that more work is needed to address the root causes of this problem. They hope their research alerts potential users to a previously unnoticed shortcoming that could have serious implications in high-stakes settings where these models are currently being used, from determining which patients receive certain treatments to identifying product defects in manufacturing plants.

“This is a technical paper, but there are bigger issues to consider. If something as fundamental as negation is broken, we shouldn’t be using large vision/language models in many of the ways we are using them now — without intensive evaluation,” says senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Ghassemi and Alhamoud are joined on the paper by Shaden Alshammari, an MIT graduate student; Yonglong Tian of OpenAI; Guohao Li, a former postdoc at Oxford University; Philip H.S. Torr, a professor at Oxford; and Yoon Kim, an assistant professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

Neglecting negation

Vision-language models (VLMs) are trained using huge collections of images and corresponding captions, which they learn to encode as sets of numbers, called vector representations. The models use these vectors to distinguish between different images.

A VLM utilizes two separate encoders, one for text and one for images, and the encoders learn to output similar vectors for an image and its corresponding text caption.
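As a rough illustration of this dual-encoder setup, the sketch below uses an off-the-shelf CLIP model, not necessarily one of the models the researchers tested, to rank candidate captions against an image by embedding similarity. The image file is a hypothetical placeholder; the failure mode the study describes is that negated captions score much like their affirmative counterparts.

```python
# Minimal sketch: a CLIP-style dual encoder embeds an image and candidate
# captions in a shared vector space and ranks captions by similarity.
# The model choice and image file are illustrative, not the paper's setup.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = [
    "a dog jumping over a fence",
    "a dog jumping over a fence, with no helicopters",
    "a helicopter over a fence, with no dog",  # negation a VLM tends to miss
]
image = Image.open("dog.jpg")  # hypothetical local image

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```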

“The captions express what is in the images — they are a positive label. And that is actually the whole problem. No one looks at an image of a dog jumping over a fence and captions it by saying ‘a dog jumping over a fence, with no helicopters,'” Ghassemi says.

Because the image-caption datasets don’t contain examples of negation, VLMs never learn to identify it.

To dig deeper into this problem, the researchers designed two benchmark tasks that test the ability of VLMs to understand negation.

For the first, they used a large language model (LLM) to re-caption images in an existing dataset by asking the LLM to think about related objects not in an image and write them into the caption. Then they tested models by prompting them with negation words to retrieve images that contain certain objects, but not others.
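A minimal sketch of that re-captioning step might look like the following; `call_llm` is a hypothetical stand-in for whichever LLM client is used, and the prompt wording is illustrative rather than the paper's.

```python
# Minimal sketch of the re-captioning step: an LLM names a related object
# absent from the image and rewrites the caption to negate it.
# `call_llm` is a hypothetical stand-in, not a real client.
PROMPT = (
    "Caption: '{caption}'\n"
    "Name one object that is plausibly related to this scene but absent "
    "from it, then rewrite the caption to state that the object is not "
    "present. Return only the rewritten caption."
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def negate_caption(caption: str) -> str:
    """e.g. 'a dog jumping over a fence' ->
    'a dog jumping over a fence, with no helicopters'"""
    return call_llm(PROMPT.format(caption=caption))
```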

For the second task, they designed multiple choice questions that ask a VLM to select the most appropriate caption from a list of closely related options. These captions differ only by adding a reference to an object that doesn’t appear in the image or negating an object that does appear in the image.

The models often failed at both tasks, with image retrieval performance dropping by nearly 25 percent with negated captions. When it came to answering multiple choice questions, the best models only achieved about 39 percent accuracy, with several models performing at or even below random chance.

One reason for this failure is a shortcut the researchers call affirmation bias — VLMs ignore negation words and focus on objects in the images instead.

“This does not just happen for words like ‘no’ and ‘not.’ Regardless of how you express negation or exclusion, the models will simply ignore it,” Alhamoud says.

This was consistent across every VLM they tested.

“A solvable problem”

Since VLMs aren’t typically trained on image captions with negation, the researchers developed datasets with negation words as a first step toward solving the problem.

Using a dataset with 10 million image-text caption pairs, they prompted an LLM to propose related captions that specify what is excluded from the images, yielding new captions with negation words.

They had to be especially careful that these synthetic captions still read naturally, or it could cause a VLM to fail in the real world when faced with more complex captions written by humans.

They found that fine-tuning VLMs with their dataset led to performance gains across the board. It improved models’ image retrieval abilities by about 10 percent, while also boosting performance in the multiple-choice question answering task by about 30 percent.

“But our solution is not perfect. We are just recaptioning datasets, a form of data augmentation. We haven’t even touched how these models work, but we hope this is a signal that this is a solvable problem and others can take our solution and improve it,” Alhamoud says.

At the same time, he hopes their work encourages more users to think about the problem they want to use a VLM to solve and design some examples to test it before deployment.

In the future, the researchers could expand upon this work by teaching VLMs to process text and images separately, which may improve their ability to understand negation. In addition, they could develop additional datasets that include image-caption pairs for specific applications, such as health care.

The risk of death or complications from broken heart syndrome was high from 2016 to 2020

Takotsubo cardiomyopathy, also known as broken heart syndrome, is associated with a high rate of death and complications, and those rates were unchanged between 2016 and 2020, according to new research published today in the Journal of the American Heart Association, an open-access, peer-reviewed journal of the American Heart Association.

Takotsubo cardiomyopathy is a stress-related heart condition in which part of the heart temporarily enlarges and doesn’t pump well. It is thought to be a reaction to a surge of stress hormones that can be caused by an emotionally or physically stressful event, such as the death of a loved one or a divorce. It can lead to severe, short-term failure of the heart muscle and can be fatal. Takotsubo cardiomyopathy may be misdiagnosed as a heart attack because the symptoms and test results are similar.

This study is one of the largest to assess in-hospital death rates and complications of the condition, as well as differences by sex, age and race over five years.

“We were surprised to find that the death rate from Takotsubo cardiomyopathy was relatively high without significant changes over the five-year study, and the rate of in-hospital complications also was elevated,” said study author M. Reza Movahed, M.D., Ph.D., an interventional cardiologist and clinical professor of medicine at the University of Arizona’s Sarver Heart Center in Tucson, Arizona. “The continued high death rate is alarming, suggesting that more research be done for better treatment and finding new therapeutic approaches to this condition.”

Researchers reviewed health records in the Nationwide Inpatient Sample database to identify people diagnosed with Takotsubo cardiomyopathy from 2016 to 2020.

The analysis found:

  • The death rate was considered high at 6.5%, with no improvement over the study period.
  • The death rate among men, 11.2%, was more than double the 5.5% rate among women.
  • Major complications included congestive heart failure (35.9%), atrial fibrillation (20.7%), cardiogenic shock (6.6%), stroke (5.3%) and cardiac arrest (3.4%).
  • People older than age 61 had the highest incidence rates of Takotsubo cardiomyopathy. However, there was a 2.6 to 3.25 times higher incidence of this condition among adults ages 46-60 compared to those ages 31-45 during the study period.
  • White adults had the highest rate of Takotsubo cardiomyopathy (0.16%), followed by Native American adults (0.13%) and Black adults (0.07%).
  • In addition, socioeconomic factors, including median household income, hospital size and health insurance status, varied significantly.

“Takotsubo cardiomyopathy is a serious condition with a substantial risk of death and severe complications,” Movahed said. “The health care team needs to carefully review coronary angiograms that show no significant coronary disease with classic appearance of left ventricular motion, suggesting any subtypes of stress-induced cardiomyopathy. These patients should be monitored for serious complications and treated promptly. Some complications, such as embolic stroke, may be preventable with an early initiation of anti-clotting medications in patients with a substantially weakened heart muscle or with an irregular heart rhythm called atrial fibrillation that increases the risk of stroke.”

He also noted that age-related findings could serve as a useful diagnostic tool in discriminating between heart attack/chest pain and Takotsubo cardiomyopathy, which may prompt earlier diagnosis of the condition and could also remove assumptions that Takotsubo cardiomyopathy only occurs in the elderly.

Among the study’s limitations is that it relied on data from hospital codes, which could have errors or overcount patients hospitalized more than once or transferred to another hospital. In addition, there was no information on outpatient data, different types of Takotsubo cardiomyopathy or other conditions that may have contributed to patients’ deaths.

Movahed said further research is needed about the management of patients with Takotsubo cardiomyopathy and the reason behind differences in death rates between men and women.

Study details, background and design:

  • The analysis included 199,890 U.S. adults from across the nation (average age 67; 83% of cases were among women). White adults comprised 80% of the Takotsubo cardiomyopathy patients, while 8% were Black adults, 6% were Hispanic adults, 2% were Asian/Pacific Islander adults, 0.64% were Native American adults and 2.2% were reported as Other.
  • The Nationwide Inpatient Sample database is the largest publicly available source detailing publicly and privately paid hospital care in the U.S. It produces estimates of inpatient utilization, access, cost, quality and outcomes for about 35 million hospitalizations nationwide each year.

Got data? Breastfeeding device measures babies’ milk intake in real time

While breastfeeding has many benefits for a mother and her baby, it has one major drawback: It’s incredibly difficult to know how much milk the baby is consuming.

To take the guesswork out of breastfeeding, an interdisciplinary team of engineers, neonatologists and pediatricians at Northwestern University has developed a new wearable device that can provide clinical-grade, continuous monitoring of breast milk consumption.

The unobtrusive device softly and comfortably wraps around the breast of a nursing mother during breastfeeding and wirelessly transmits data to a smartphone or tablet. The mother can then view a live graphical display of how much milk her baby has consumed in real time.

By eliminating uncertainty, the device can provide peace of mind for parents during their baby’s first days and weeks. In particular, the new technology could help reduce parental anxiety and improve clinical management of nutrition for vulnerable babies in the neonatal intensive care unit (NICU).

The study will be published on Wednesday (May 14) in the journal Nature Biomedical Engineering. To ensure its accuracy and practicality, the device underwent several stages of rigorous assessment, including theoretical modeling, benchtop experiments and testing on a cohort of new mothers in the hospital.

“Knowing exactly how much milk an infant is receiving during breastfeeding has long been a challenge for both parents and healthcare providers,” said Northwestern’s John A. Rogers, who led the device development. “This technology eliminates that uncertainty, offering a convenient and reliable way to monitor milk intake in real time, whether in the hospital or at home.”

“Uncertainty around whether an infant is getting sufficient nutrition can cause stress for families, especially for breastfeeding mothers with preterm infants in the NICU,” said Dr. Daniel Robinson, a Northwestern Medicine neonatologist and co-corresponding author of the study. “Currently, only cumbersome ways exist for measuring how much milk a baby has consumed during breastfeeding, such as weighing the baby before and after they have fed. We expect this sensor to be a big advance in lactation support, reducing stress for families and increasing certainty for clinicians as infants make progress with breastfeeding but still need nutritional support. Reducing uncertainty and helping families achieve their breastfeeding goals will lead to healthier children, healthier mothers and healthier communities.”

A bioelectronics pioneer, Rogers is the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery at Northwestern — where he has appointments in the McCormick School of Engineering and Feinberg School of Medicine — and the director of the Querrey Simpson Institute for Bioelectronics (QSIB). Robinson is an associate professor of pediatrics at Feinberg and an attending physician in the division of neonatology at Ann & Robert H. Lurie Children’s Hospital of Chicago. Rogers and Robinson co-led the study with Dr. Craig Garfield, a professor of pediatrics at Feinberg and attending physician at Lurie Children’s, and Dr. Jennifer Wicks, a pediatrician at Lurie Children’s.

Three postdoctoral researchers at QSIB contributed equally to the project, each of whom is now a faculty member in Korea: Jiyhe Kim, an assistant professor at Ajou University, led the device design and supported clinical trials; Seyong Oh, an assistant professor at Hanyang University, engineered the wireless electronics; and Jae-Young Yoo, an assistant professor at Sungkyunkwan University, developed methods for data analytics. Kim and Oh are co-first authors with Raudel Avila, an assistant professor of mechanical engineering at Rice University and Northwestern Ph.D. graduate, who led the computational modeling.

Addressing an unmet need

The project started four years ago, when neonatologists and pediatricians at Lurie Children’s approached Rogers’ team with a critical unmet need. Because the transfer of milk from mother to baby during breastfeeding is not visible and the flow of milk varies, it’s nearly impossible to know the precise volume of milk a baby consumes in one sitting.

“Currently, there are no reliable ways to know how much babies are eating when they are breastfeeding,” said Wicks, who is a mother of three. “Some pediatricians and lactation consultants will use scales to weigh a baby before and after feeding, and that measurement gives a decent estimate of the amount of milk the baby drank. But unfortunately, baby scales are not small, and most people do not own baby scales. So, while that can provide an estimate, it’s not convenient.”

As another option, mothers can pump breastmilk into a bottle. While bottle-feeding offers precise volume measurements and visual reassurance that the baby is consuming milk, it removes the benefits of skin-to-skin contact. And the extra steps of pumping, storing and handling milk are time-consuming and can even increase the risk of bacterial contamination.

“There are several advantages to breastfeeding at the breast compared to feeding breast milk with a bottle,” Wicks said. “First and foremost, that skin-to-skin bond is beneficial for both babies and moms. Additionally, milk production is oftentimes stimulated better by actual breastfeeding.”

Although other academic researchers and small startup companies have explored technologies to monitor aspects of breast milk and feeding, peer-reviewed studies are scarce.

“Based on our reviews of the scientific literature and our discussions with pediatricians and neonatologists, there are no clinically validated technologies that address this important medical need,” Rogers said. “Our work fills that gap.”

Pinpointing the right strategy

Rogers’ team previously developed soft, flexible wireless body sensors for monitoring babies in the NICU as well as wearable sensors for tracking the drainage of fluid through shunts, which are commonly used to treat patients with hydrocephalus. With experience working with vulnerable populations and developing devices capable of measuring fluid flow, Rogers and his team were ideal candidates for the project.

“Our clinical colleagues asked us whether we could develop a sensor that would allow new mothers to determine how much milk their babies are consuming during a nursing session,” Rogers said. “At first, we weren’t sure how to approach the problem. The strategies we used to track flow through shunts as they pass through locations superficially below the skin don’t work because milk ducts lie too far beneath the skin’s surface.”

After years of failed attempts with other approaches, including monitoring the optical properties of the breast, quantifying suckling motions and tracking swallowing events, the engineers finally settled on a remarkably simple technique. The device sends a tiny, safe electrical current through the breast using two small pads, or electrodes, placed on the skin. Another pair of electrodes captures the voltage difference associated with that current.

As the baby drinks milk, the amount of milk in the breast decreases. This reduction leads to a change in the electrical properties of the breast in a subtle but measurable manner. These changes directly relate to the amount of milk removed from the breast. The larger the amount, the bigger the change in electrical properties. Though subtle, that change can be accurately calibrated and quantified for real-time display on a smartphone during breastfeeding.

“This is a concept called bioimpedance, and it’s commonly used to measure body fat,” Rogers said. “Because muscle, fat, bone and tissues conduct electricity differently, bioimpedance can yield an accurate measurement of fat content. In a conceptually similar way, we can quantify the change in milk volume within the breast. This was the last strategy we tried, unfortunately. But fortunately, we found that we were able to make it work really well.”

Rigorous testing

After designing initial prototypes, the engineering team optimized the device through several stages of testing and modeling. First, they built simplified models of a breast using materials that mimic the electrical properties of skin, fat and milk. By precisely controlling the amount of “milk” in these models, the researchers could see how the device’s data changed as the volume of “milk” changed.

Led by Avila at Rice, the team then created detailed computer models of the breast, based on real anatomy. Their physics-based computer simulations monitored the physiological changes that occur during breastfeeding. Using bioimpedance, Avila linked the flow of electrical signals to the amount of milk leaving the breast in real time. His team’s anatomically correct computer models incorporate patient-specific breast shapes and tissue distributions, enabling them to test how sensor placement and tissue variation affect readings.

“Our simulation results matched the trends of experiments and human clinical studies,” Avila said. “Connecting our models to impact in the real world is always a highlight, and it’s only possible through the collaboration among experimental, modeling and clinical teams.”

Personalized for all shapes and sizes

The resulting device is a thin, soft, pliable cord that lightly wraps around the outer circumference of the breast. Electrodes, which gently adhere to the skin, are integrated into each end of the cord. A small, lightweight “base station,” which also softly mounts onto the skin, sits in the middle of the cord between the electrodes. Enclosed in a soft, silicone case, the base station holds a small rechargeable battery, Bluetooth technology for wireless data transfer and a memory chip.

Because every mother has differences in breast density, shape and size, the device can be personalized through a single calibration. To calibrate the system, the mother wears the device while using a breast pump connected to a bottle with volume markings. This enables the user to know the precise volume of milk being expressed over a specific period of time. Meanwhile, the device records the breast’s electrical properties throughout the pumping process. This calibration scheme teaches the device how to interpret the changes in electrical signals for each specific mother.
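Conceptually, this calibration reduces to fitting a per-mother mapping from the change in the electrical signal to the volume expressed. The sketch below uses a simple linear least-squares fit; the numbers, units and the linear form itself are assumptions made for illustration, not the device's actual signal processing.

```python
# Minimal sketch of the calibration idea: during one pumping session the
# device logs bioimpedance while a marked bottle gives ground-truth volume,
# and a per-mother linear fit maps impedance change to milk removed.
# All values, units and the linear form are illustrative assumptions.
import numpy as np

# Hypothetical readings logged during a calibration pumping session.
impedance_ohm = np.array([52.0, 52.6, 53.1, 53.8, 54.5])  # from the device
volume_ml = np.array([0.0, 20.0, 40.0, 60.0, 80.0])       # read off the bottle

# Least-squares fit: volume removed as a function of impedance change.
slope, intercept = np.polyfit(impedance_ohm - impedance_ohm[0], volume_ml, 1)

def milk_removed_ml(current_impedance: float) -> float:
    """Estimate milk removed so far from the live impedance reading."""
    return slope * (current_impedance - impedance_ohm[0]) + intercept

print(f"{milk_removed_ml(53.4):.0f} mL removed")  # live estimate mid-feed
```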

After developing prototypes, the team tested the device on 12 breastfeeding mothers — both in the NICU and at home. To assess whether the device was consistent and reliable over time, the researchers took multiple measurements from the same mothers over spans of time as long as 17 weeks.

In this first stage of testing, mothers wore the sensor while they pumped, since this step required knowing precisely how much milk was expressed. In one testing session, the researchers compared the device’s data to the difference in the baby’s weight before and after breastfeeding. Overall, in the pumping tests, the amounts in the bottle and the amounts detected by the sensor were strikingly similar.

Improving care in the NICU

While the device would provide reassurance and useful information to all parents, Robinson and Wicks say NICU babies would benefit the most from careful monitoring. Knowing exactly how much a NICU baby is eating is even more critical than it is for healthy, full-term infants.

These babies often have precise nutritional needs. Premature babies, for example, may have underdeveloped digestive systems, making them more vulnerable to feeding intolerance. Precise feeding volumes can help minimize the risks of developing intestinal disorders and reflux.

“Some babies are limited to a certain number of feeds at a time,” Wicks said. “For babies who are born prematurely or who are recovering from a surgery, they can only eat small amounts of milk very slowly. Oftentimes, we cannot allow them to breastfeed because there’s no way for us to know how much milk they are getting from mom. Having a sensor to monitor this would enable these babies to breastfeed more successfully with their mom.”

Future directions

To become even more user-friendly, the researchers envision the technology eventually could be integrated into comfortable undergarments like breastfeeding bras. This would further enhance the device’s ease of use and overall experience for mothers.

The researchers still plan to complete comprehensive comparisons with pre- and post-feed weighing. The team also aims to ensure the sensor is usable for mothers with a wide range of skin tones. While the current version of the device detects the amount of milk flowing out of the breast, future iterations could measure milk refilling into the breast, letting mothers track changes in milk production over time. The team also plans to continue optimizing the device so it can glean even more insights, such as milk quality and fat content.

“Breastfeeding can be extremely emotional for mothers, in part due to the uncertainty surrounding how much milk their babies are getting,” Wicks said. “It can come with a lot of sadness because mothers feel anxious and like they aren’t doing a good job. Oftentimes, mothers experience anxiety, frustration or symptoms of depression and give up on breastfeeding altogether.

“There are many factors that make breastfeeding difficult. Being able to remove one piece of uncertainty and being able to help reassure them that they are producing enough milk will really help decrease some of that stress and anxiety. For all moms around the world — who are in all different stages of their breastfeeding journeys — this device will be incredibly helpful. We’re looking forward to bringing it to more people.”

Study sheds light on how autistic people communicate

There is no significant difference in the effectiveness of how autistic and non-autistic people communicate, according to a new study, challenging the stereotype that autistic people struggle to connect with others.

The findings suggest that social difficulties often faced by autistic people are more about differences in how autistic and non-autistic people communicate, rather than a lack of social ability in autistic individuals, experts say.

Researchers hope the results of the study will help reduce the stigma surrounding autism, and lead to more effective communication support for autistic people.

Autism is a lifelong neurodivergence that influences how people experience and interact with the world.

Autistic people often communicate more directly and may struggle with reading social cues and body language, leading to differences in how they engage in conversation compared to non-autistic people.

The study, led by experts from the University of Edinburgh, tested how effectively information was passed between 311 autistic and non-autistic people.

Participants were tested in groups where everyone was autistic, everyone was non-autistic, or a combination of both.

The first person in the group heard a story from the researcher, then passed it along to the next person. Each person had to remember and repeat the story, and the last person in the chain recalled the story aloud.

The amount of information passed on at each point in the chain was scored to discern how effective participants were at sharing the story. Researchers found there were no differences between autistic, non-autistic, and mixed groups.
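One simple way to score such a chain is to count how many of the story's key details survive each retelling; the sketch below is an invented illustration of that idea, not the study's actual scoring rubric.

```python
# Minimal sketch of diffusion-chain scoring: each retelling is checked
# against a list of key story details, and the fraction retained is the
# score. The details and retellings are invented for illustration.
STORY_DETAILS = ["bear", "river", "red canoe", "thunderstorm", "lost map"]

def detail_score(retelling: str) -> float:
    """Fraction of the original story details present in one retelling."""
    text = retelling.lower()
    return sum(d in text for d in STORY_DETAILS) / len(STORY_DETAILS)

chain = [
    "A bear watched a red canoe cross the river before a thunderstorm hit.",
    "A bear saw a canoe on the river during a storm.",
    "Someone saw a bear near a river.",
]
for position, retelling in enumerate(chain, start=1):
    print(f"position {position}: {detail_score(retelling):.2f}")
```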

After the task, participants rated how much they enjoyed the interaction with the other participants, based on how friendly, easy, or awkward the exchange was.

Researchers found that non-autistic people preferred interacting with others like themselves, and autistic people preferred learning from fellow autistic individuals. This is likely down to the different ways that autistic and non-autistic people communicate, experts say.

The findings replicate those of a previous, smaller study by the same researchers. They say the new evidence should lead to increased understanding of autistic communication styles as a difference, not a deficiency.

Dr Catherine Crompton, Chancellor’s Fellow at the University of Edinburgh’s Centre for Clinical Brain Sciences, said: “Autism has often been associated with social impairments, both colloquially and in clinical criteria. Researchers have spent a lot of time trying to ‘fix’ autistic communication, but this study shows that despite autistic and non-autistic people communicating differently it is just as successful. With opportunities for autistic people often limited by misconceptions and misunderstandings, this new research could lead the way to bridging the communication gap and create more inclusive spaces for all.”

Helping on the farm reaps mental health benefits

Farm Care has “seen an impact” on youngsters’ mental health on east Surrey farms.

For, against, undecided: Three GPs give their views on assisted dying

GPs from different areas of England tell us how they feel about plans to legalise assisted dying.

GPs split over assisted dying plans, BBC research suggests

GPs are deeply divided over assisted dying with personal beliefs shaping their views, BBC research reveals.

Government has no clear plan for NHS England abolition, say MPs

Cross-party group of MPs say move is causing uncertainty at time when NHS is under huge pressure.
