Is long-term beta-blocker therapy needed after a heart attack?

Interrupting beta-blocker therapy in patients with a history of myocardial infarction (MI) could not be shown to be as safe, in cardiovascular terms, as continuing it, and interruption offered no benefit to patients’ quality of life, according to late-breaking research presented on 30 August at ESC Congress 2024.1

“Improvements in MI management and data from observational studies have led physicians to question whether continuing beta-blockers after 1 year post-MI is needed since unnecessary treatment may result in side effects.2-5 We conducted the ABYSS trial to provide conclusive randomised data on the effects of beta-blocker interruption vs. continuation on cardiovascular events and quality of life, but we were unable to show safety preservation in terms of clinical events nor any benefit on quality of life with beta-blocker interruption,” said Principal Investigator, Professor Johanne Silvain of the Sorbonne University, Paris, France.

The open-label, non-inferiority, randomised ABYSS trial, conducted by the ACTION Group, included patients with a prior MI taking long-term beta-blockers, with a left ventricular ejection fraction of at least 40% and no cardiovascular events in the previous 6 months. Participants were randomised (1:1) to interrupting or continuing their beta-blocker medication.

The primary endpoint was a composite of death, non-fatal MI, non-fatal stroke or hospitalisation for cardiovascular reasons at the longest follow-up (minimum 1 year), analysed for non-inferiority, defined as an upper boundary of the two-sided 95% confidence interval (CI) for the between-group absolute difference of less than 3 percentage points. The main secondary endpoint was the change in quality of life as measured by the European Quality of Life-5 Dimensions questionnaire.

In total 3,698 patients were randomised from 49 sites in France. The mean age was 64 years and 17% were female. The median time between last MI and randomisation was 2.9 years (interquartile range 1.2-6.4 years).

Over median follow-up of 3 years, interruption of long-term beta-blocker treatment was not shown to be non-inferior to beta-blocker continuation. A primary-outcome event occurred in 23.8% of patients in the interruption group and in 21.1% in the continuation group (risk difference 2.8 percentage points; 95% CI <0.1-5.5), with a hazard ratio of 1.16 (95% CI 1.01-1.33; p=0.44 for non-inferiority).
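
As a rough check on the arithmetic, a minimal sketch (not the trial’s analysis code) using the reported event percentages and an assumed 1:1 split of the 3,698 randomised patients approximately reproduces the published risk difference and confidence interval:

```python
# Illustrative non-inferiority check with a Wald-type 95% CI for a risk
# difference. Per-arm sizes are assumed (3,698 patients split 1:1); event
# rates are the percentages reported above, so the output is approximate.
from math import sqrt

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    """Return (difference, lower, upper) for p1 - p2 with a Wald CI."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

n_per_arm = 3698 // 2                      # ~1,849 patients per group (assumed)
diff, lo, hi = risk_difference_ci(0.238, n_per_arm, 0.211, n_per_arm)
print(f"risk difference {diff:.1%}, 95% CI ({lo:.1%} to {hi:.1%})")
print("non-inferiority margin met (upper bound < 3 points):", hi < 0.03)
```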

Death occurred in 4.1% in the interruption group and 4.0% in the continuation group, while MI occurred in 2.5% and 2.4%, respectively. Of note, hospitalisation for cardiovascular causes occurred in 18.9% in the interruption group and 16.6% in the continuation group. Beta-blocker interruption was also associated with increases in systolic and diastolic blood pressure and heart rate at 6 months (all p<0.001 vs. beta-blocker continuation) and throughout study follow-up. Beta-blocker interruption did not improve the patients’ quality of life.

Summing up the evidence from the ABYSS trial, Professor Silvain concluded: “Differences between the groups with respect to hospitalisation for cardiovascular reasons and the negative effect on blood pressure levels, together with the absence of quality-of-life improvement, do not support interruption of chronic beta-blocker treatment in post-MI patients. These results must be put into context with recent findings from the open-label REDUCE-AMI6 trial and ongoing trials to provide additional evidence on the optimal use of beta-blockers after MI.”

1 ‘Beta blocker interruption in patients with prior myocardial infarction: results of the ABYSS trial and effect on blood pressure and heart rate control’ will be discussed during Hot Line 1 on Friday 30 August in room London.

2 Holt A, Blanche P, Zareini B, et al. Effect of long-term beta-blocker treatment following myocardial infarction among stable, optimally treated patients without heart failure in the reperfusion era: a Danish, nationwide cohort study. Eur Heart J. 2021;42:907-914.

3 Park CS, Yang H-M, Ki Y-J, et al. Left ventricular ejection fraction 1 year after acute myocardial infarction identifies the benefits of the long-term use of beta-blockers: analysis of data from the KAMIR-NIH Registry. Circ Cardiovasc Interv. 2021;14:e010159.

4 Puymirat E, Riant E, Aissaoui N, et al. β Blockers and mortality after myocardial infarction in patients without heart failure: multicentre prospective cohort study. BMJ. 2016;354:i4801.

5 Kim J, Kang D, Park H, et al. Long-term β-blocker therapy and clinical outcomes after acute myocardial infarction in patients without heart failure: nationwide cohort study. Eur Heart J. 2020;41:3521-3529.

6 Yndigegn T, Lindahl B, Mars K, et al. Beta-blockers after myocardial infarction and preserved ejection fraction. N Engl J Med. 2024;390:1372-1381.


How hope beats mindfulness when times are tough

A recent study finds that hope appears to be more beneficial than mindfulness at helping people manage stress and stay professionally engaged during periods of prolonged stress at work. The study underscores the importance of looking ahead, rather than living “in the moment,” during hard times.

Mindfulness refers to an individual’s ability to focus attention on the present in a way that is open, curious and non-judgmental; essentially, it is the ability to be fully in the moment.

“There’s a lot of discussion about the benefits of mindfulness, but it poses two challenges when you’re going through periods of stress,” says Tom Zagenczyk, co-author of a paper on the work and a professor of management in North Carolina State University’s Poole College of Management. “First, it’s hard to be mindful when you’re experiencing stress. Second, if it’s a truly difficult time, you don’t necessarily want to dwell too much on the experience you’re going through.

“Because hope is inherently forward looking, while mindfulness is about appreciating your current circumstances, we wanted to see how each of these two mindsets influenced people’s well-being and professional attitudes during difficult times,” Zagenczyk says. “The COVID pandemic presented us with an unfortunate, but useful, opportunity to explore this topic. And we chose to focus on the performing arts since that sector was particularly hard hit by the pandemic.”

For the study, researchers recruited 247 professional musicians from the organization MusiCares to take two surveys, one month apart. The first survey was given in September 2021. In addition to collecting broad demographic data, study participants were asked about their thoughts and experiences at the beginning of the pandemic — March to August 2020. They were also asked questions aimed at capturing how hopeful and mindful they were from September 2020 through March 2021.

The second survey was given in October 2021 and asked study participants questions aimed at capturing work engagement, work tensions, how positive their emotions were, and the extent to which they were experiencing distress.

The researchers then used statistical techniques to identify relationships between hope, mindfulness, and outcomes related to the participants’ personal well-being and attitudes toward work.

“Fundamentally, our findings tell us that hope was associated with people being happy, and mindfulness was not,” says Kristin Scott, study co-author and a professor of management at Clemson University. “And when people are hopeful — and happy — they experience less distress, are more engaged with their work, and feel less tension related to their professional lives.”

“Being mindful can be tremendously valuable — there are certainly advantages to living in the moment,” says Sharon Sheridan, study co-author and an assistant professor of management at Clemson. “But it’s important to maintain a hopeful outlook — particularly during periods of prolonged stress. People should be hopeful while being mindful — hold on to the idea that there’s a light at the end of the tunnel.”

While the study focused on musicians during an extreme set of circumstances, the researchers think there is a takeaway message that is relevant across industry sectors.

“Whenever we have high levels of job stress, it’s important to be hopeful and forward looking,” says Emily Ferrise, study co-author and a Ph.D. student at Clemson. “And to the extent possible, there is real value for any organization to incorporate hope and forward thinking into their corporate culture — through job conditions, organizational communications, etc.”

“Every work sector experiences periods of high stress,” says Zagenczyk. “And every company should be invested in having happy employees who are engaged with their work.”


New findings on TB could change how we treat inflammatory disorders

Tuberculosis is a confounding scourge. It’s the leading cause of death from infectious disease in the world, and yet those deaths are estimated to represent perhaps 5% of infections with Mycobacterium tuberculosis (Mtb). Antibiotics can take credit for saving the lives of some of those with Mtb, but a chasm nevertheless persists between how widespread infection is and how narrowly its severest impact falls. A growing body of evidence suggests genetic vulnerabilities to TB account for that gap.

Now researchers from The Rockefeller University have found another rare mutation that leaves its carriers much more likely to become ill with TB — but, curiously, not with other infectious diseases. This finding, recently published in Nature, may upend long-held assumptions about the immune system.

It’s long been known that an acquired deficiency of a pro-inflammatory cytokine called TNF is linked to an increased risk of developing TB. The current study, led by Rockefeller’s Stéphanie Boisson-Dupuis and Jean-Laurent Casanova, revealed a genetic cause of TNF deficiency, as well as the underlying mechanism: a lack of TNF incapacitates a specific immune process in the lungs, leading to severe — but surprisingly targeted — illness.

The findings suggest that TNF, long considered a key galvanizer of the immune response, might actually play a much narrower role — a discovery with far-reaching clinical implications.

“The past 40 years of scientific literature have attributed a wide variety of pro-inflammatory functions to TNF,” says Casanova, head of the St. Giles Laboratory of Human Genetics of Infectious Diseases. “But beyond protecting the lungs against TB, it may have a limited role in inflammation and immunity.”

Rare risk

Casanova’s lab has been studying the genetic causes of TB for more than two decades through field work in several countries and a wide network of collaborating physicians across the world. They maintain an ever-growing database of whole-exome sequences from a global pool of patients — more than 25,000 people to date. Of those, some 2,000 have had TB.

Over the years they’ve identified several rare genetic mutations that render some people vulnerable to TB. For example, mutations in a gene called CYBB can disable an immune mechanism called the respiratory burst, which produces chemicals called reactive oxygen species (ROS). Despite its pulmonary-sounding name, the respiratory burst takes place in immune cells throughout the body.

ROS help pathogen-consuming white blood cells called phagocytes (from the Greek for “eating”) to destroy the invaders they’ve devoured. If ROS aren’t produced, those pathogens can thrive unchecked, leading to debilitating complications. As a result, carriers of this CYBB mutation become vulnerable to not just TB but to a wide variety of infectious diseases.

For the current study, the team suspected that a similar inborn error of immunity might lie behind the severe, recurring TB infections experienced by two people in Colombia — a 28-year-old woman and her 32-year-old cousin — who had been repeatedly hospitalized with significant lung conditions. In each cycle, they initially responded well to anti-TB antibiotics, but within a year, they were sick again.

Puzzlingly, however, their long-term health records showed that their immune systems functioned normally, and that they were otherwise healthy.

A telling deficiency

To find out why they were particularly prone to getting TB, the researchers performed whole-exome sequencing on the two, as well as a genetic analysis of their respective parents and relatives.

The two were the only members of their extended family with a mutation in the TNF gene, which encodes a protein involved in regulating a variety of biological processes. Short for “tumor necrosis factor,” TNF in excess is also associated with a variety of conditions, including septic shock, cancer, rheumatoid arthritis, and cachexia, which causes dangerous weight loss.

The protein is largely secreted by a type of phagocyte called a macrophage, which relies on the ROS molecules generated by the respiratory burst to finish off the pathogens it has consumed.

In these two patients, the TNF gene failed to function, preventing the respiratory burst from occurring, and thus the creation of ROS molecules. As a result, the patients’ alveolar macrophages, located in their lungs, were overrun with Mtb.

“We knew that the respiratory burst was important for protecting people against various types of mycobacteria, but now we know that TNF is actually regulating the process,” says Boisson-Dupuis. “And when it’s missing in alveolar macrophages, people will be susceptible to airborne TB.”

She adds, “It’s very surprising that the people we studied are adults who have never been sick with other infectious diseases, despite being repeatedly exposed to their microbes. They are apparently selectively at risk for TB.”

Treatment potential

The discovery also solves a long-standing mystery about why TNF inhibitors, which are used to treat autoimmune and inflammatory diseases, raise the chances of contracting TB. Without TNF, a key part of the defense against it is defunct.

The findings may lead to a radical reassessment of TNF’s role in immune function — and new treatment possibilities. “TNF is required for immunity against Mtb, but it seems to be redundant for immunity against many other pathogens,” Casanova says. “So the question is, what other pro-inflammatory cytokines are doing the jobs we thought TNF was doing? If we can discover that, we may be able to block these cytokines rather than TNF to treat diseases where inflammation plays a role.”


Doughnut-shaped region found inside Earth’s core deepens understanding of planet’s magnetic field

A doughnut-shaped region thousands of kilometres beneath our feet within Earth’s liquid core has been discovered by scientists from The Australian National University (ANU), providing new clues about the dynamics of our planet’s magnetic field.

The structure within Earth’s liquid core is found only at low latitudes and sits parallel to the equator. According to ANU seismologists, it has remained undetected until now.

The Earth has two core layers: the inner core, a solid layer, and the outer core, a liquid layer. Surrounding the Earth’s core is the mantle. The newly discovered doughnut-shaped region is at the top of Earth’s outer core, where the liquid core meets the mantle.

Study co-author and ANU geophysicist, Professor Hrvoje Tkalčić, said the seismic waves detected are slower in the newly discovered region than in the rest of the liquid outer core.

“The region sits parallel to the equatorial plane, is confined to the low latitudes and has a doughnut shape,” he said.

“We don’t know the exact thickness of the doughnut, but we inferred that it reaches a few hundred kilometres beneath the core-mantle boundary.”

Rather than using traditional seismic wave observation techniques and observing signals generated by earthquakes within the first hour, the ANU scientists analysed the similarities between waveforms many hours after the earthquake origin times, leading them to make the unique discovery.

“By understanding the geometry of the paths of the waves and how they traverse the outer core’s volume, we reconstructed their travel times through the Earth, demonstrating that the newly discovered region has low seismic speeds,” Professor Tkalčić said.

“The peculiar structure remained hidden until now as previous studies collected data with less volumetric coverage of the outer core by observing waves that were typically confined within one hour after the origin times of large earthquakes.

“We were able to achieve much better volumetric coverage because we studied the reverberating waves for many hours after large earthquakes.”
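
As a loose illustration of the waveform-similarity idea described above (a sketch on synthetic data, not the study’s actual processing), cross-correlating two late-arriving waveform windows recovers the relative delay between them:

```python
# Toy example: measure the relative delay between two synthetic "coda"
# traces by cross-correlation. Real analyses use hours-long records from
# many stations; here trace_b is simply trace_a delayed by 3 seconds.
import numpy as np

def lag_seconds(late, reference, dt):
    """Delay (s) of `late` relative to `reference`, from the correlation peak."""
    a = (late - late.mean()) / late.std()
    b = (reference - reference.mean()) / reference.std()
    xcorr = np.correlate(a, b, mode="full")
    return (np.argmax(xcorr) - (len(b) - 1)) * dt

dt = 0.1                                   # 10 samples per second
rng = np.random.default_rng(0)
trace_a = rng.standard_normal(6000)        # 10 minutes of synthetic signal
shift = int(3.0 / dt)
trace_b = np.concatenate([np.zeros(shift), trace_a[:-shift]])  # delayed copy

print(f"estimated delay: {lag_seconds(trace_b, trace_a, dt):.1f} s")  # ~3.0 s
```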

Study co-author, Dr Xiaolong Ma, said that the discovery uncovers some mysteries of the dynamics of Earth’s magnetic field.

“There are still mysteries about the Earth’s outer core that are yet to be solved, which requires multidisciplinary efforts from seismology, mineral physics, geomagnetism and geodynamics,” Dr Ma said.

The outer core is predominantly made of liquid iron and nickel, and the vigorous movement of this electrically conductive liquid creates Earth’s magnetic field, which acts as a shield around the planet and helps to sustain life by protecting it from the damaging solar wind and harmful radiation.

The scientists believe that knowing more about the Earth’s outer core’s composition, including light chemical elements, is fundamental to understanding the magnetic field and predicting when it could potentially cease or weaken.

“Our findings are interesting because this low velocity within the liquid core implies that we have a high concentration of light chemical elements in these regions that would cause the seismic waves to slow down. These light elements, alongside temperature differences, help stir liquid in the outer core,” Professor Tkalčić said.

“The magnetic field is a fundamental ingredient that we need for life to be sustained on the surface of our planet.

“The dynamics of Earth’s magnetic field is an area of strong interest in the scientific community, so our results could promote more research about the magnetic field on both Earth and other planets.”

The research is published in Science Advances.


Study combines data, molecular simulations to accelerate drug discovery

Researchers from the University of Cincinnati College of Medicine and Cincinnati Children’s Hospital have found a new method to increase both speed and success rates in drug discovery.

The study, published Aug. 30 in the journal Science Advances, offers renewed promise when it comes to discovering new drugs.

“The hope is we can speed up the timeline of drug discovery from years to months,” said Alex Thorman, PhD, co-first author and a postdoctoral fellow in the Department of Environmental and Public Health Sciences in the College of Medicine.

Researchers combined two approaches for screening potential new drugs. First, they used a database from the Library of Integrated Network-based Cellular Signatures (LINCS) to screen tens of thousands of small molecules with potential therapeutic effects simultaneously. Then they combined that search with targeted docking simulations, which model the interaction between small molecules and their protein targets, to find compounds of interest. That dramatically compressed the timeline, taking the weeks of work required for initial screening down to an afternoon.
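
The two-stage idea lends itself to a simple pipeline shape. The sketch below is hypothetical (the compound names and scoring functions are stand-ins, not LINCS queries or any real docking engine): a cheap, library-wide signature score prunes the library, and an expensive docking score re-ranks only the shortlist.

```python
# Hypothetical two-stage virtual-screening sketch: a fast signature score over
# the whole library, then docking only the shortlist. Scoring functions here
# are random stand-ins for real signature matching and docking runs.
import random
from dataclasses import dataclass

@dataclass
class Compound:
    name: str
    signature_score: float = 0.0   # similarity to a disease-reversal signature
    docking_score: float = 0.0     # predicted binding energy (lower is better)

def signature_screen(library, score_fn, top_n):
    """Stage 1: cheap, library-wide ranking; keep the best top_n compounds."""
    for c in library:
        c.signature_score = score_fn(c)
    return sorted(library, key=lambda c: c.signature_score, reverse=True)[:top_n]

def docking_rerank(shortlist, dock_fn):
    """Stage 2: run the expensive docking model only on the shortlist."""
    for c in shortlist:
        c.docking_score = dock_fn(c)
    return sorted(shortlist, key=lambda c: c.docking_score)

random.seed(0)
library = [Compound(f"cmpd_{i}") for i in range(10_000)]
shortlist = signature_screen(library, lambda c: random.random(), top_n=50)
hits = docking_rerank(shortlist, lambda c: random.uniform(-12.0, -4.0))
print("top candidates:", [c.name for c in hits[:5]])
```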

Thorman said this faster screening method for compounds that could become drugs accelerates the drug research process. But it’s not only speed that is crucial.

He added that this newer approach is more efficient at identifying potentially effective compounds.

“And the accuracy will only improve, hopefully offering new hope to many people who have diseases with no known cure, including those with cancer,” Thorman said.

It can also create more targeted treatment options in precision medicine, an innovative approach to tailoring disease prevention and treatment that takes into account differences in people’s genes, environments and lifestyles.

“An accelerated drug discovery process also could be a game changer in the ability to respond to public health crises, such as the COVID-19 pandemic,” said Thorman. “The timeline for developing effective drugs could be expedited.”

The other co-first authors were Jim Reigle, PhD, a postdoctoral fellow at Cincinnati Children’s Hospital, and Somchai Chutipongtanate, PhD, an associate professor in the Department of Environmental and Public Health Sciences in the College of Medicine.

The corresponding authors of the study were Jarek Meller, PhD, a professor of biostatistics, health informatics and data sciences in the College of Medicine, and Andrew Herr, PhD, a professor of immunobiology in the Department of Pediatrics in the College of Medicine.

Other co-investigators included Mario Medvedovic, PhD, professor and director of the Center for Biostatistics and Bioinformatics Services in the College of Medicine, and David Hildeman, PhD, professor of immunobiology in the College of Medicine. Both Herr and Hildeman have faculty research labs at Cincinnati Children’s Hospital.

This research was funded in part by grants from the National Institutes of Health, a Department of Veterans Affairs merit award, a UC Cancer Center Pilot Project Award and a Cincinnati Children’s Hospital Innovation Fund award.

Those involved in the research are also co-inventors on three U.S. patents that are related to their work and have been filed by Cincinnati Children’s Hospital.


Like people, vultures get set in their ways and have fewer friends as they age

If you’d rather be watching TV on your couch than dancing at the club, you might have something in common with aging griffon vultures. New research shows that young griffon vultures move frequently between sleeping sites in different locations and interact with many friends, but get set in their ways as they age, roosting in the same spots with the same individuals. As moving between roosts becomes a grind, older vultures follow the same paths, establishing movement routines that are not seen in young vultures.

Younger vultures shy away from the most popular roosts, suggesting they might be intimidated by the older ones or that there’s a vulture equivalent of “Hey you kids, get off my lawn.”

The research, published in Proceedings of the National Academy of Sciences, shows that like many people, older vultures tend to have fewer, more selective friendships with stronger bonds. They may also have a more thorough knowledge of where to find food resources.

Eurasian griffon vultures, or Gyps fulvus, are large vultures that live in the Mediterranean, the Middle East and India. With wingspans up to 9 feet, they’re much larger than North American turkey vultures and bigger than bald eagles.

Finding food can be tricky for vultures because it depends on locating animal carcasses — an unpredictable and ephemeral source. When griffon vultures find a carcass, they tend to sleep or roost nearby and feed on it over a period of days. Roosting sites can thus be ‘information hubs,’ where vultures that recently fed signal to others about food sources; they then follow each other to carcasses and form friendships that help them stay in the loop about food.

The researchers wanted to know if an individual griffon vulture’s movement patterns and social behavior changed over the course of its life. They used GPS data from 142 individually tagged birds in Israel gathered over a period of 15 years to cross-reference the vultures’ ages with their movement and social interactions at roost sites.

“What we found was as they age, their loyalty to certain roost sites increases,” said co-author Noa Pinter-Wollman, a UCLA professor of ecology and evolutionary biology. “Young vultures check out many different roosts but in middle age, they start going repeatedly to the same places.”

The study showed young vultures sometimes returned to the same roost but usually chose different ones, rarely spending two nights in the same place. From young adulthood at around 5 years old through middle age, they spent about half their nights at the same “home” site and half elsewhere. In old age, they became true homebodies.
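
That kind of “fraction of nights at the home roost” measure is simple to compute from nightly roost records. The sketch below uses made-up site IDs rather than the study’s GPS data:

```python
# Illustrative only: roost-site fidelity as the share of nights an individual
# spends at its single most-used ("home") roost, from a list of nightly sites.
from collections import Counter

def roost_fidelity(nightly_sites):
    """Fraction of nights spent at the most-used roost site."""
    most_common_count = Counter(nightly_sites).most_common(1)[0][1]
    return most_common_count / len(nightly_sites)

young_bird = ["A", "B", "C", "A", "D", "E", "B", "F", "C", "G"]   # rarely repeats
old_bird   = ["A", "A", "A", "B", "A", "A", "A", "A", "B", "A"]   # mostly "home"
print(f"young: {roost_fidelity(young_bird):.0%}, old: {roost_fidelity(old_bird):.0%}")
```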

“When they are old, from the age of 10 onward, they no longer have the energy to be ‘out and about’ and return consistently to the same site,” said corresponding author Orr Spiegel of Tel Aviv University. “Those who were adventurous at the age of 5 became more sedentary by age 10.”

As the vultures grew older, the strength of their social bonds decreased as well for at least part of the year. The number of individuals they interacted with didn’t change with age — if they had five friends when young, they still had five when older. But the amount of time they spent with vultures outside of their close friend group plummeted. Older vultures spent most of their time with and roosted mostly with these close friends. Their movements also became more routine, eventually following a predictable pattern.

The study is unique because the researchers were able to track the movements and social behaviors of the same vultures for up to 12 nearly consecutive years over a 15-year period.

“We are able to show that the trend of individuals becoming more loyal to the same sites with age is not because the more exploratory individuals die earlier and live shorter lives, and the older, more sedentary individuals live longer lives,” said first author and Tel Aviv University postdoctoral fellow Marta Acácio. “Individuals actually change their behavior with age, and this has rarely been shown in nature for long-lived birds due to the difficulty of tracking individuals for such a long time.”

The research backs up findings from studies in other species that, with age, animals become more faithful to their known sites and routines — and potentially become more selective in their social relationships. These behaviors are commonly attributed to aging in humans and can help improve understanding of how animal populations move about in their environments and relate to other members of their species, as well as identify better ways to protect them from threats. For griffon vultures, this could mean better protection of important roosting sites and using knowledge about their social interactions to reduce the risk of poisoning.

“It looks like they just get set in their ways,” Pinter-Wollman said. “They’ve gathered information over the years, and they might as well use it. Carcasses are hard to come by and roosts are information hubs. Some roosts become popular for a reason; for example, they tend to be closer to reliable food sources and older vultures potentially monopolize these roosts.”


Morphing facial technology sheds light on the boundaries of self-recognition

Facial recognition is a critical part of self-image and social interactions. In an era of advanced digital technology, we face intriguing questions about communication and identity. How does altering our facial identity affect our sense of “self” and our interactions with others? These are questions Dr. Shunichi Kasahara, a researcher in the Cybernetic Humanity Studio at the Okinawa Institute of Science and Technology (OIST), is investigating using real-time morphing of facial images (turning our faces into someone else’s and vice versa). The studio was established in 2023 as a platform for joint research between OIST and Sony Computer Science Laboratories, Inc.

Dr. Kasahara and his collaborators have investigated the dynamics of face recognition using motor-visual synchrony — the coordination between a person’s physical movements and the visual feedback they receive from those movements. They found that whether we influence the movement of our self-image or not, levels of identification with our face remain consistent. Therefore, our sense of agency, or subjective feelings of control, do not impact our level of identification with our self-image. Their results have been published in Scientific Reports.

The effect of agency on perceptions of identity

With psychological experiments using displays and cameras, the scientists investigated where the “self-identification boundary” is and what impacts this boundary. Participants were seated and asked to look at screens showing their faces gradually changing. At some point, the participants could notice a change in their facial identity and were asked to press a button when they felt that the image on the screen was no longer them. The experiment was done in both directions: the image changing from self to other and other to self.

“It’s like watching your face in a mirror as you move it and you identify yourself, but your face slowly changes up to a point and you realize this is no longer you,” Dr. Kasahara explained.

The researchers examined how three movement conditions affect the facial boundary: synchronous, asynchronous, and static. They hypothesized that if the motions were synchronized, participants would identify with the images to a greater extent. Surprisingly, they found that participants’ facial-identity boundaries were similar whether movements were synchronized or not. Additionally, participants were more likely to identify with static images of themselves than with images in which their faces were moving.

Interestingly, the direction of morphing — whether from self to other or other to self — influenced how participants perceived their own facial boundaries: participants were more likely to identify with their facial images when these images morphed from self to other rather than from other to self. Overall, the results suggest that a sense of agency of facial movements does not significantly impact our ability to judge our facial identity.
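
A minimal sketch of how such a boundary could be summarised from button-press data, using hypothetical trial records (the field names and values below are illustrative, not the study’s dataset):

```python
# Hypothetical records: (participant, morph direction, movement condition,
# morph level at button press), where 0.0 = fully self and 1.0 = fully other.
from statistics import mean

trials = [
    ("p01", "self_to_other", "synchronous", 0.62),
    ("p01", "self_to_other", "static",      0.70),
    ("p01", "other_to_self", "synchronous", 0.48),
    ("p02", "self_to_other", "synchronous", 0.58),
    ("p02", "other_to_self", "static",      0.52),
]

def mean_boundary(trials, direction, condition):
    """Average morph level at which participants stopped identifying as 'self'."""
    levels = [m for _, d, c, m in trials if d == direction and c == condition]
    return mean(levels) if levels else float("nan")

print("self-to-other, synchronous:", mean_boundary(trials, "self_to_other", "synchronous"))
print("self-to-other, static:     ", mean_boundary(trials, "self_to_other", "static"))
```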

“Consider the example of deepfakes, which are essentially a form of asynchronous movement. When I remain still but the visual representation moves, it creates an asynchronous situation. Even in these deepfake scenarios, we can still experience a feeling of identity connection with ourselves,” Dr. Kasahara explained. “This suggests that even when we see a fake or manipulated version of our image, for example, someone else using our face, we might still identify with that face. Our findings raise important questions about our perception of self and identity in the digital age.”

How does identity impact perceptions of control?

What about the other way around? How does our sense of identity impact our sense of agency? Dr. Kasahara recently published a paper in collaboration with Dr. Wen Wen, a Professor of Psychology at Rikkyo University who specializes in research on our sense of agency. They investigated how recognizing oneself through facial features might affect how people perceive control over their own movements.

During experiments, participants observed either their own face or another person’s face on a screen and could interact and control the facial and head movements. They were asked to observe the screen for about 20 seconds while moving their faces and changing their facial expressions. The motion of the face was controlled either only by their own facial and head motion or by an average of the participant’s and the experimenter’s motion (full control vs. partial control). Thereafter, they were asked “how much did you feel that this face looks like you?” and “how much control did you feel over this presented face?”

Again, the main findings were intriguing: participants reported a higher sense of agency over the “other face” rather than the “self-face.” Additionally, controlling someone else’s face resulted in more variety of facial movements than controlling one’s own face.

“We gave the participants a different face, but they could control the facial movements of this face — similar to deepfake technology, where AI can transfer movement to other objects. This AI technology allows us to go beyond the conventional experience of simply looking into a mirror, enabling us to disentangle and investigate the relationship between facial movements and visual identity,” Dr. Kasahara stated.

“Based on previous research, one might expect that if I see my own face, I will feel more control over it. Conversely, if it’s not my face, I might expect to feel less control because it’s someone else’s face. That’s the intuitive expectation. However, the results are the opposite — when people see their own face, they report a lower sense of agency. Conversely, when they see another person’s face, they’re more likely to feel a sense of agency.” These surprising results challenge what we thought we knew about how we see ourselves in images.

Dr. Kasahara emphasized that the acceptance of technology in society plays a crucial role in technological advancements and human evolution: “The relationship between technology and human evolution is cyclical; we evolve together. But concerns about certain computer technology may lead to restrictions. My goal is to help foster acceptance within society and update our understanding of ‘the self’ in relation to human-computer integration technology.”


Topological quantum simulation unlocks new potential in quantum computers

Researchers from the National University of Singapore (NUS) have successfully simulated higher-order topological (HOT) lattices with unprecedented accuracy using digital quantum computers. These complex lattice structures can help us understand advanced quantum materials with robust quantum states that are highly sought after in various technological applications.

The study of topological states of matter and their HOT counterparts has attracted considerable attention among physicists and engineers. This fervent interest stems from the discovery of topological insulators, materials that conduct electricity only on their surfaces or edges while their interiors remain insulating. Due to the unique mathematical properties of topology, the electrons flowing along the edges are not hampered by any defects or deformations present in the material. Hence, devices made from such topological materials hold great potential for more robust transport or signal-transmission technology.

Using many-body quantum interactions, a team of researchers led by Assistant Professor Lee Ching Hua from the Department of Physics under the NUS Faculty of Science has developed a scalable approach to encode large, high-dimensional HOT lattices representative of actual topological materials into the simple spin chains that exist in current-day digital quantum computers. Their approach leverages the exponential amounts of information that can be stored using quantum computer qubits while minimising quantum computing resource requirements in a noise-resistant manner. This breakthrough opens up a new direction in the simulation of advanced quantum materials using digital quantum computers, thereby unlocking new potential in topological material engineering.

The findings from this research have been published in the journal Nature Communications.

Asst Prof Lee said, “Existing breakthrough studies in quantum advantage are limited to highly-specific tailored problems. Finding new applications for which quantum computers provide unique advantages is the central motivation of our work.”

“Our approach allows us to explore the intricate signatures of topological materials on quantum computers with a level of precision that was previously unattainable, even for hypothetical materials existing in four dimensions,” added Asst Prof Lee.

Despite the limitations of current noisy intermediate-scale quantum (NISQ) devices, the team was able to measure the topological state dynamics and protected mid-gap spectra of higher-order topological lattices with unprecedented accuracy, thanks to advanced error-mitigation techniques developed in-house. This breakthrough demonstrates the potential of current quantum technology to explore new frontiers in material engineering. The ability to simulate high-dimensional HOT lattices opens new research directions in quantum materials and topological states, suggesting a potential route to achieving true quantum advantage in the future.
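
For readers unfamiliar with “mid-gap” topological states, the sketch below is not the paper’s spin-chain encoding; it is the textbook one-dimensional Su-Schrieffer-Heeger (SSH) chain, whose near-zero-energy edge modes are the simplest example of the kind of protected spectra that higher-order topological lattices generalise to corners and hinges:

```python
# Minimal numpy illustration (not the study's method): an open SSH chain with
# alternating hoppings. In the topological phase (inter-cell hopping stronger
# than intra-cell), two states sit in the middle of the bulk gap, near E = 0.
import numpy as np

def ssh_hamiltonian(n_cells, t_intra, t_inter):
    """Tight-binding matrix for an open SSH chain with 2*n_cells sites."""
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(n - 1):
        t = t_intra if i % 2 == 0 else t_inter
        h[i, i + 1] = h[i + 1, i] = t
    return h

energies = np.linalg.eigvalsh(ssh_hamiltonian(40, t_intra=0.5, t_inter=1.0))
# The two smallest |E| values are the protected edge modes; the next ones are bulk states.
print("states closest to mid-gap:", np.round(np.sort(np.abs(energies))[:4], 3))
```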


What a submerged ancient bridge discovered in a Spanish cave reveals about early human settlement

A new study led by the University of South Florida has shed light on the human colonization of the western Mediterranean, revealing that humans settled there much earlier than previously believed. This research, detailed in a recent issue of the journal Communications Earth & Environment, challenges long-held assumptions and narrows the gap between the settlement timelines of islands throughout the Mediterranean region.

Reconstructing early human colonization on Mediterranean islands is challenging due to limited archaeological evidence. By studying a 25-foot submerged bridge, an interdisciplinary research team — led by USF geology Professor Bogdan Onac — was able to provide compelling evidence of earlier human activity inside Genovesa Cave, located on the Spanish island of Mallorca.

“The presence of this submerged bridge and other artifacts indicates a sophisticated level of activity, implying that early settlers recognized the cave’s water resources and strategically built infrastructure to navigate it,” Onac said.

The cave, located near Mallorca’s coast, has passages now flooded due to rising sea levels, with distinct calcite encrustations forming during periods of high sea level. These formations, along with a light-colored band on the submerged bridge, serve as proxies for precisely tracking historical sea-level changes and dating the bridge’s construction.

Mallorca, despite being the sixth largest island in the Mediterranean, was among the last to be colonized. Previous research suggested human presence as far back as 9,000 years ago, but inconsistencies and poor preservation of the radiocarbon-dated material, such as nearby bones and pottery, cast doubt on those findings. Newer studies have used charcoal, ash and bones found on the island to place human settlement at about 4,400 years ago, which aligns the timeline of human presence with significant environmental events, such as the extinction of the goat-antelope Myotragus balearicus.

By analyzing mineral overgrowths on the bridge and the elevation of a coloration band on it, Onac and the team discovered that the bridge was constructed nearly 6,000 years ago, more than 2,000 years earlier than previously estimated, narrowing the timeline gap between eastern and western Mediterranean settlements.

“This research underscores the importance of interdisciplinary collaboration in uncovering historical truths and advancing our understanding of human history,” Onac said.

This study was supported by several National Science Foundation grants and involved extensive fieldwork, including underwater exploration and precise dating techniques. Onac will continue exploring cave systems, some of which have deposits that formed millions of years ago, so he can identify preindustrial sea levels and examine the impact of modern greenhouse warming on sea-level rise.

This research was done in collaboration with Harvard University, the University of New Mexico and the University of the Balearic Islands.


Transparency is often lacking in datasets used to train large language models

In order to train more powerful large language models, researchers use vast dataset collections that blend diverse data from thousands of web sources.

But as these datasets are combined and recombined into multiple collections, important information about their origins, and restrictions on how they can be used, is often lost or confounded in the shuffle.

Not only does this raise legal and ethical concerns, it can also damage a model’s performance. For instance, if a dataset is miscategorized, someone training a machine-learning model for a certain task may end up unwittingly using data that are not designed for that task.

In addition, data from unknown sources could contain biases that cause a model to make unfair predictions when deployed.

To improve data transparency, a team of multidisciplinary researchers from MIT and elsewhere launched a systematic audit of more than 1,800 text datasets on popular hosting sites. They found that more than 70 percent of these datasets omitted some licensing information, while about 50 percent had information that contained errors.

Building off these insights, they developed a user-friendly tool called the Data Provenance Explorer that automatically generates easy-to-read summaries of a dataset’s creators, sources, licenses, and allowable uses.

“These types of tools can help regulators and practitioners make informed decisions about AI deployment, and further the responsible development of AI,” says Alex “Sandy” Pentland, an MIT professor, leader of the Human Dynamics Group in the MIT Media Lab, and co-author of a new open-access paper about the project.

The Data Provenance Explorer could help AI practitioners build more effective models by enabling them to select training datasets that fit their model’s intended purpose. In the long run, this could improve the accuracy of AI models in real-world situations, such as those used to evaluate loan applications or respond to customer queries.

“One of the best ways to understand the capabilities and limitations of an AI model is understanding what data it was trained on. When you have misattribution and confusion about where data came from, you have a serious transparency issue,” says Robert Mahari, a graduate student in the MIT Human Dynamics Group, a JD candidate at Harvard Law School, and co-lead author on the paper.

Mahari and Pentland are joined on the paper by co-lead author Shayne Longpre, a graduate student in the Media Lab; Sara Hooker, who leads the research lab Cohere for AI; as well as others at MIT, the University of California at Irvine, the University of Lille in France, the University of Colorado at Boulder, Olin College, Carnegie Mellon University, Contextual AI, ML Commons, and Tidelift. The research is published today in Nature Machine Intelligence.

Focus on fine-tuning

Researchers often use a technique called fine-tuning to improve the capabilities of a large language model that will be deployed for a specific task, like question-answering. For fine-tuning, they carefully build curated datasets designed to boost a model’s performance for this one task.

The MIT researchers focused on these fine-tuning datasets, which are often developed by researchers, academic organizations, or companies and licensed for specific uses.

When crowdsourced platforms aggregate such datasets into larger collections for practitioners to use for fine-tuning, some of that original license information is often left behind.

“These licenses ought to matter, and they should be enforceable,” Mahari says.

For instance, if the licensing terms of a dataset are wrong or missing, someone could spend a great deal of money and time developing a model they might be forced to take down later because some training data contained private information.

“People can end up training models where they don’t even understand the capabilities, concerns, or risk of those models, which ultimately stem from the data,” Longpre adds.

To begin this study, the researchers formally defined data provenance as the combination of a dataset’s sourcing, creating, and licensing heritage, as well as its characteristics. From there, they developed a structured auditing procedure to trace the data provenance of more than 1,800 text dataset collections from popular online repositories.

After finding that more than 70 percent of these datasets contained “unspecified” licenses that omitted much information, the researchers worked backward to fill in the blanks. Through their efforts, they reduced the number of datasets with “unspecified” licenses to around 30 percent.

Their work also revealed that the correct licenses were often more restrictive than those assigned by the repositories.

In addition, they found that nearly all dataset creators were concentrated in the global north, which could limit a model’s capabilities if it is trained for deployment in a different region. For instance, a Turkish language dataset created predominantly by people in the U.S. and China might not contain any culturally significant aspects, Mahari explains.

“We almost delude ourselves into thinking the datasets are more diverse than they actually are,” he says.

Interestingly, the researchers also saw a dramatic spike in restrictions placed on datasets created in 2023 and 2024, which might be driven by concerns from academics that their datasets could be used for unintended commercial purposes.

A user-friendly tool

To help others obtain this information without the need for a manual audit, the researchers built the Data Provenance Explorer. In addition to sorting and filtering datasets based on certain criteria, the tool allows users to download a data provenance card that provides a succinct, structured overview of dataset characteristics.
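
The summaries the tool produces can be thought of as structured records. The sketch below is hypothetical (the field names and filtering helper are assumptions for illustration, not the Data Provenance Explorer’s actual schema or API):

```python
# Hypothetical "data provenance card" structure and a filter that keeps only
# datasets whose recorded license explicitly allows a given use.
from dataclasses import dataclass, field

@dataclass
class ProvenanceCard:
    name: str
    creators: list = field(default_factory=list)
    sources: list = field(default_factory=list)
    license: str = "unspecified"
    allowed_uses: set = field(default_factory=set)   # e.g. {"research", "commercial"}

def usable_for(cards, purpose):
    """Return datasets whose recorded license explicitly allows `purpose`."""
    return [c for c in cards if purpose in c.allowed_uses]

cards = [
    ProvenanceCard("qa_set_a", license="CC-BY-4.0",
                   allowed_uses={"research", "commercial"}),
    ProvenanceCard("qa_set_b", license="unspecified"),
]
print([c.name for c in usable_for(cards, "commercial")])   # -> ['qa_set_a']
```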

“We are hoping this is a step, not just to understand the landscape, but also help people going forward to make more informed choices about what data they are training on,” Mahari says.

In the future, the researchers want to expand their analysis to investigate data provenance for multimodal data, including video and speech. They also want to study how terms of service on websites that serve as data sources are echoed in datasets.

As they expand their research, they are also reaching out to regulators to discuss their findings and the unique copyright implications of fine-tuning data.

“We need data provenance and transparency from the outset, when people are creating and releasing these datasets, to make it easier for others to derive these insights,” Longpre says.
