EurekAlert! Staff Picks

Each week, our team members share their favorite recent news releases, stories that caught their eye, sparked their curiosity, or made them think. We hope you’ll find them just as interesting!

Seth Rose

Editorial Content Manager

Scientists devise way to track space junk as it falls to Earth

"Space junk" has always seemed like an interesting and underrated problem to me. A lot of the things we think about in space are so distant, black holes and galaxies unfathomable distances away. Turns out though that one of the most pressing space-related problems is close enough to us to be essentially ecological: in space, no one can hear you litter.

...or at least they can't until your litter reenters the atmosphere. When space debris falls back to Earth, it does so fast enough to create the same sort of sonic booms produced by jets breaking the sound barrier. Johns Hopkins scientist Benjamin Fernando and collaborators at Imperial College London used this fact to tackle an aspect of the space junk problem from an unexpected direction: seismology. They set up a network of 127 seismometers in California to track the path of debris left by a Chinese satellite entering the atmosphere. The results were encouraging: noticeably more accurate and faster than traditional radar estimates. With luck, their research will give us a much better handle on when and where our space junk returns to us.

We have no idea what most of the universe is made of, but scientists are closer than ever to finding out

I love when a press release can open with a claim as absurd-sounding as "we have no idea what most of the universe is made of". That might feel like marketing hyperbole in a lot of other contexts, but when the subject is dark matter and dark energy, it's not far off the scientific mark. Astronomy is already a subject where the objects of study are theoretical in the sense that we can't reach out and observe them in person, but even in most of those cases we know where things theoretically are. Not so with dark matter: it's all theory, a huge proportion of the total matter in the universe that we couldn't see or touch or feel even if we could get to it.

That quality also makes writing about it extremely difficult, and this release from Texas A&M University does an excellent job bridging the gap. It covers some of the recent research Dr. Rupak Mahapatra has contributed to as part of a team of scientists using a super-sensitive detector called TESSARACT, but it also takes care to frame his work with a generous, not-too-technical overview of the dark matter "problem" and how they're aiming to move the needle toward solving it. I probably understand the details of this topic even less than I understand most in astronomy, but the sheer novelty of it means I love reading about it every time it comes up, and good interlocutors are always appreciated.

How your brain understands language may be more like AI than we ever imagined

I grew up loving stories about artificial intelligence, especially ones that tackle questions of the "humanity" of that intelligence (shout out Legion from Mass Effect, you will always have a soul in my book). While ChatGPT is a far cry from HAL 9000 (for now), it's still so incredibly surreal to be living as an adult in a world in which we can reasonably ask some of those same questions of actually existing technologies.

Researchers at the Hebrew University of Jerusalem, Google Research, and Princeton University approached the question by studying language processing in humans: they took electrocorticography recordings of a group of people as they listened to 30 minutes of a podcast and tracked how different parts of the brain were activated as they processed the information. They observed a layered, "step-by-step" pattern in which early responses sort out basic acoustic information in one area of the brain and processing of context happens later, in "deeper" layers. According to the researchers, this process is not all that dissimilar from the multi-tiered approach AI uses to "understand" the world.

What stands out to me so starkly from this story is not just that the human brain readings apparently matched AI methods more closely than expected, but that this understanding of our own brain needed to be filled in in the first place. You hear people talk about how modern AI is a "black box" where we can set the output parameters but don't really understand exactly what's going on under the hood. And yet...is our understanding of our own brains that much more advanced*? Even with all our modern technology and centuries of research to build on, there are still aspects of the brain whose workings we just don't fully understand, and I like the idea that only once we have an even vaguely similar parallel to compare against will we be able to truly complete that understanding.

(*it is, I just think it's still an interesting question)