Mum spends £24k a year on daughter’s cannabis oil

Emma Appleby fundraises to afford the £2,000 monthly bill for her daughter’s medicinal cannabis.

‘I won’t swim in water polluted with antibiotics’

Visitors to a Derbyshire waterway say they are horrified at drug pollution levels.

New methods for whale tracking and rendezvous using autonomous robots

Project CETI (Cetacean Translation Initiative) aims to collect millions to billions of high-quality, highly contextualized vocalizations in order to understand how sperm whales communicate. But finding the whales and knowing where they will surface to capture the data is challenging — making it difficult to attach listening devices and collect visual information.

Today, a Project CETI research team led by Stephanie Gil, Assistant Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), has proposed a new reinforcement learning framework that uses autonomous drones to find sperm whales and predict where they will surface.

The research is published in Science Robotics.

This new study uses various sensing devices, such as Project CETI aerial drones with very high frequency (VHF) signal sensing capability, which leverage signal phase along with the drone’s motion to emulate an ‘antenna array in air’ for estimating the directionality of pings received from CETI’s on-whale tags. It demonstrates that it is possible to predict when and where a whale may surface by combining these sensor data with predictive models of sperm whale dive behavior. With that information, Project CETI can now design algorithms for the most efficient route for a drone to rendezvous with — or encounter — a whale at the surface. This also opens up possible conservation applications, such as helping ships avoid striking whales at the surface.
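The ‘antenna array in air’ idea can be sketched in a few lines: as the drone moves, it samples the VHF carrier phase at two positions along its path, and the phase difference between those samples constrains the ping’s angle of arrival (AOA). The sketch below is a minimal two-sample illustration, not Project CETI’s implementation; the carrier frequency and sample spacing are assumed values chosen for clarity.

```python
# Minimal sketch of phase-difference AOA estimation, the principle
# behind emulating an antenna array with a moving drone. The carrier
# frequency and sample spacing are illustrative assumptions.
import math

C = 3e8                  # speed of light, m/s
FREQ = 150e6             # an assumed VHF carrier, Hz
LAM = C / FREQ           # wavelength = 2 m
D = 0.5                  # drone displacement between samples, m (< LAM/2)

def aoa_from_phase(delta_phase):
    """Angle of arrival (radians from broadside) implied by the phase
    difference between two measurement positions spaced D apart."""
    return math.asin(delta_phase * LAM / (2 * math.pi * D))

# Forward model: a ping arriving 30 degrees off broadside produces
# this phase difference between the two sample positions...
true_aoa = math.radians(30)
delta = 2 * math.pi * D * math.sin(true_aoa) / LAM

# ...and inverting the phase difference recovers the angle.
print(round(math.degrees(aoa_from_phase(delta)), 1))  # → 30.0
```

In practice many samples along the drone’s trajectory are combined, which is what makes the moving drone behave like a larger synthetic array; spacing the samples closer than half a wavelength, as here, avoids angular ambiguity.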

The study presents the Autonomous Vehicles for whAle Tracking And Rendezvous by remote Sensing, or AVATARS, framework, which jointly develops two interrelated components: autonomy, which determines the positioning commands of the autonomous robots to maximize visual whale encounters; and sensing, which measures the Angle-of-Arrival (AOA) of whale tag signals to inform the decision-making process. Measurements from the autonomous drone to surfaced tags, acoustic AOA from existing underwater sensors, and whale motion models from previous biological studies of sperm whales are provided as inputs to the AVATARS decision-making algorithm, which in turn aims to minimize missed rendezvous opportunities with whales.

AVATARS is the first co-development of VHF sensing and reinforcement-learning decision-making for maximizing rendezvous of robots and whales at sea. A well-known application of time-critical rendezvous is the rideshare app, which uses real-time sensing to track the dynamic paths and positions of drivers and potential riders; when a rider requests a ride, the app assigns a driver to rendezvous with the rider as efficiently and promptly as possible. Project CETI’s case is similar: the team tracks the whale in real time, with the goal of coordinating the drone’s rendezvous to meet the whale at the surface.

This research advances Project CETI’s goal of obtaining millions to billions of high-quality, highly contextualized whale vocalizations. The addition of diverse types of data will improve location estimates and routing algorithms — helping Project CETI meet that goal more efficiently.

“I’m excited to contribute to this breakthrough for Project CETI. By leveraging autonomous systems and advanced sensor integration, we’re able to solve key challenges in tracking and studying whales in their natural habitats. This is not only a technological advancement, but also a critical step in helping us understand the complex communications and behaviors of these creatures,” said Gil.

“This research is a major milestone for Project CETI’s mission. We can now significantly enhance our ability to gather a high-quality, large-scale dataset on whale vocalizations and the associated behavioral context, putting us one step closer to better listening to and translating what sperm whales are saying,” said David Gruber, Founder and Lead of Project CETI.

“This research was an amazing opportunity to test our systems and algorithms in a challenging marine environment. This interdisciplinary work, which combines wireless sensing, artificial intelligence and marine biology, is a prime example of how robotics can be part of the solution for further deciphering the social behavior of sperm whales,” said Ninad Jadhav, Harvard University PhD candidate and first author on the paper.

“This project provides an excellent opportunity to test our algorithms in the field, where robotics and artificial intelligence can enrich data collection and expedite research for broader science in language processing and marine biology, ultimately protecting the health and habitat of sperm whales,” said Sushmita Bhattacharya, a postdoctoral researcher in Gil’s REACT Lab at SEAS.

More information:

https://www.projectceti.org/

Sleeping for 2: Insomnia therapy reduces postpartum depression, study shows

While many people believe that poor sleep during pregnancy is inevitable, new research has determined that cognitive behavioral therapy for insomnia (CBTi) while pregnant can not only improve sleep patterns but also address postpartum depression.

Researchers from UBC’s Okanagan and Vancouver campuses, as well as the University of Calgary, discovered that delivering CBTi during pregnancy significantly reduces postpartum depressive symptoms after a baby arrives.

“Early intervention is crucial for infant and parental mental health,” says Dr. Elizabeth Keys, an Assistant Professor in UBCO’s School of Nursing and a study co-author. “Our research explores how addressing sleep problems like insomnia can lead to better mental health outcomes for families, helping parents and their children thrive.”

CBTi is a therapeutic intervention that identifies thoughts, behaviors and sleep patterns that contribute to insomnia. Treatment includes challenging or reframing misconceptions and restructuring habits to improve sleep quality.

“CBTi is the gold standard for the treatment of insomnia and has consistently been shown to improve symptoms of depression,” says Dr. Keys. “Its treatment effects are similar to antidepressant medications among adults, but with fewer side effects, and it is therefore often preferred by pregnant individuals.”

Sixty-two women assessed for insomnia and depressive symptoms participated in the study — with half randomly assigned to an intervention group and half to a control group.

“We found that CBTi during pregnancy significantly improved sleep and reduced postpartum depressive symptoms for participants,” explains Dr. Keys. “These are enormously encouraging results for anyone who has struggled in those early weeks and months with their newborns.”

Results indicate that effective insomnia treatment during pregnancy may serve as a protective factor against postpartum depression.

“Our study adds to the growing evidence that treating insomnia during pregnancy is beneficial for various outcomes,” Dr. Keys says. “It’s time to explore how we can make this treatment more accessible to pregnant individuals across the country to improve sleep health equity.”

The research highlights the interdisciplinary collaborations between researchers across Canada and UBC’s Vancouver and Okanagan campuses. Dr. Elizabeth Keys is from UBCO, while Dr. Lianne M. Tomfohr-Madsen, a Canada Research Chair in Mental Health and Intersectionality, is based at UBC Vancouver.

Dr. Keys and Dr. Tomfohr-Madsen are lead investigators on the Canadian Institutes of Health Research (CIHR) Sleep Equity Reimagined team and Canadian Sleep Research Consortium members.

How fruit flies achieve accurate visual behavior despite changing light conditions

Researchers identify neuronal networks and mechanisms that show how contrasts can be rapidly and reliably perceived even when light levels vary.

When light conditions rapidly change, our eyes have to respond to this change in fractions of a second to maintain stable visual processing. This is necessary when, for example, we drive through a forest and thus move through alternating stretches of shadow and clear sunlight. “In situations like these, it is not enough for the photoreceptors to adapt; an additional corrective mechanism is required,” said Professor Marion Silies of Johannes Gutenberg University Mainz (JGU). Earlier work undertaken by her research group had already demonstrated that such a corrective ‘gain control’ mechanism exists in the fruit fly Drosophila melanogaster, where it acts directly downstream of the photoreceptors. Silies’ team has now managed to identify the algorithms, mechanisms, and neuronal networks that enable the fly to sustain stable visual processing when light levels change rapidly. The corresponding article was recently published in Nature Communications.

Rapid changes in luminance challenge stable visual processing

Our vision needs to function accurately in many different situations — when we move through our surroundings as well as when our eyes follow an object that moves from light into shade. This applies to us humans and to the many thousands of animal species that rely heavily on vision to navigate. Rapid changes in luminance are also a problem for inanimate information-processing systems, for example camera-based navigation. Hence, many self-driving cars depend on additional radar- or lidar-based technology to properly compute the contrast of an object relative to its background. “Animals are capable of doing this without such technology. Therefore, we decided to see what we could learn from animals about how visual information is stably processed under constantly changing lighting conditions,” explained Marion Silies.

Combination of theoretical and experimental approaches

The compound eye of Drosophila melanogaster consists of 800 individual units, or ommatidia. The contrast between an object and its background is determined postsynaptically, downstream of the photoreceptors. However, if luminance conditions suddenly change, as in the case of an object moving into the shadow of a tree, there will be differences in contrast responses. Without gain control, this would have consequences for all subsequent stages of visual processing, resulting in the object appearing different. The recent study, with lead author Dr. Burak Gür, used two-photon microscopy to pinpoint where in the visual circuitry stable contrast responses are first generated. This led to the identification of neuronal cell types positioned two synapses downstream of the photoreceptors.

These cell types respond only very locally to visual information. For the background luminance to be correctly included in computing contrast, this information needs narrow spatial pooling, as revealed by a computational model implemented by co-author Dr. Luisa Ramirez. “We started with a theoretical approach that predicted an optimal radius in images of natural environments to capture the background luminance across a particular region in visual space while, in parallel, we were searching for a cell type that had the functional properties to achieve this,” said Marion Silies, head of the Neural Circuits lab at the JGU Institute of Developmental Biology and Neurobiology (IDN).

Information on luminance is spatially pooled

The Mainz-based team of neuroscientists has identified a cell type that meets all required criteria. These cells, designated Dm12, pool luminance signals over a specific radius, which in turn corrects the contrast response between the object and its background in rapidly changing light conditions. “We have discovered the algorithms, circuits, and molecular mechanisms that stabilize vision even when rapid luminance changes occur,” summarized Silies, who has been investigating the visual system of the fruit fly over the past 15 years. She predicts that luminance gain control in mammals, including humans, is implemented in a similar manner, particularly as the necessary neuronal substrate is available.
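The core computation — normalizing contrast by a spatially pooled background luminance so that the same object looks the same under dim and bright illumination — can be sketched in a few lines. The sketch below uses a Weber-style contrast and a simple square pooling window; it is loosely analogous to the Dm12 pooling the study describes, but the radius and the exact formula are illustrative assumptions, not the paper’s model.

```python
# Minimal sketch of luminance gain control via spatial pooling:
# contrast is normalized by the local background luminance, pooled
# over a fixed radius. The radius and formula are illustrative.

def local_contrast(image, i, j, radius=2):
    """Weber-style contrast at pixel (i, j): (I - background) / background,
    with background estimated as the mean over a (2*radius+1)^2 window."""
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            # clamp to the image edge so border pixels still pool
            y = min(max(i + di, 0), h - 1)
            x = min(max(j + dj, 0), w - 1)
            total += image[y][x]
            count += 1
    background = total / count
    return (image[i][j] - background) / background

# A bright spot on a uniform surround, seen under dim light...
dim = [[10.0] * 9 for _ in range(9)]
dim[4][4] = 20.0
# ...and the same scene under 100x brighter illumination.
bright = [[v * 100 for v in row] for row in dim]

c1 = local_contrast(dim, 4, 4)
c2 = local_contrast(bright, 4, 4)
print(round(c1, 3), round(c2, 3))  # → 0.923 0.923: contrast is invariant
```

Because the background estimate scales with the illumination, the normalized contrast is identical in both conditions — the essence of why spatially pooled gain control keeps an object looking the same as it moves between sun and shade.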

‘Crazy’ to leave breast tissue behind – Paterson

Rogue surgeon Ian Paterson gives evidence into the death of a patient, one of 62 inquests being held.

Ed Davey ‘minded’ to vote against assisted dying bill

Sir Ed fears elderly and disabled people might feel pressured to end their lives if they felt like a “burden”.

Warning tax rises could force care homes to close

Social care providers say the sector is in “unprecedented danger” without more funding.

Mother says NHS failed to give daughter safe care

Phoebe Ockenden, who has learning difficulties, was left in a chair in A&E for seven hours.

It’s not to be. Universe too short for Shakespeare typing monkeys

A monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of Shakespeare purely by chance, according to the Infinite Monkey Theorem.

This widely known thought-experiment is used to help us understand the principles of probability and randomness, and how chance can lead to unexpected outcomes. The idea has been referenced in pop culture from The Simpsons to Hitchhiker’s Guide to the Galaxy and on TikTok.

However, a new study reveals it would take an unbelievably long time, far longer than the lifespan of our universe, for a typing monkey to randomly produce Shakespeare. So, while the theorem is true, it is also somewhat misleading.

Mathematicians Associate Professor Stephen Woodcock and Jay Falletta from the University of Technology Sydney (UTS) decided to examine the theorem using the limits of our finite universe instead.

“The Infinite Monkey Theorem only considers the infinite limit, with either an infinite number of monkeys or an infinite time period of monkey labour,” said Associate Professor Woodcock.

“We decided to look at the probability of a given string of letters being typed by a finite number of monkeys within a finite time period consistent with estimates for the lifespan of our universe,” he said.

The serious but light-hearted study, A numerical evaluation of the Finite Monkeys Theorem, has just been published in the peer-reviewed journal Franklin Open.

For number-crunching purposes, the researchers assumed that a keyboard contains 30 keys including all the letters of the English language plus common punctuation marks.

As well as a single monkey, they also did the calculations using the current global population of around 200,000 chimpanzees, and they assumed a rather productive typing speed of one key every second until the end of the universe in about 10^100 years — that’s a 1 followed by 100 zeros.

The results reveal that it is possible (around a 5% chance) for a single chimp to type the word ‘bananas’ in its own lifetime. However, even with all chimps enlisted, the Bard’s entire works (with around 884,647 words) will almost certainly never be typed before the universe ends.
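The single-chimp ‘bananas’ figure is easy to reproduce as a back-of-the-envelope calculation: with 30 equally likely keys, any run of 7 keystrokes spells ‘bananas’ with probability 1/30^7, and a lifetime of one-per-second keystrokes offers roughly a billion starting positions. The sketch below assumes a ~30-year chimp lifespan, an assumption of this illustration rather than a figure from the study.

```python
# Back-of-the-envelope check of the finite-monkey 'bananas' estimate,
# assuming a 30-key keyboard, one keystroke per second, and a ~30-year
# chimp lifespan (the lifespan is an assumption, not from the paper).
import math

KEYS = 30
WORD = "bananas"
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIFESPAN_SECONDS = 30 * SECONDS_PER_YEAR

# Chance that a given run of len(WORD) keystrokes spells the word
p_hit = (1 / KEYS) ** len(WORD)

# Number of possible starting positions in a lifetime of keystrokes
starts = LIFESPAN_SECONDS - len(WORD) + 1

# Probability of at least one occurrence, via a Poisson approximation
# (occurrences at different starts are approximately independent)
p_at_least_once = 1 - math.exp(-starts * p_hit)
print(f"P(chimp types '{WORD}' in a lifetime) ≈ {p_at_least_once:.1%}")
```

Under these assumptions the probability comes out at roughly 4–5%, matching the article’s ‘around a 5% chance’; scaling the same arithmetic up to 884,647 words makes it clear why the complete works remain out of reach even for 200,000 chimps typing until the universe ends.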

“It is not plausible that, even with improved typing speeds or an increase in chimpanzee populations, monkey labour will ever be a viable tool for developing non-trivial written works,” the authors muse.

“This finding places the theorem among other probability puzzles and paradoxes — such as the St. Petersburg paradox, Zeno’s paradox, and the Ross-Littlewood paradox — where using the idea of infinite resources gives results that don’t match up with what we get when we consider the constraints of our universe,” said Associate Professor Woodcock.

In the era of generative AI, the Infinite Monkey Theorem, and its finite version, perhaps also challenge readers to consider philosophical questions around the nature of creativity, meaning and consciousness, and how these qualities emerge.
