PolyU research unveils mechanoelectrical perception in sea urchin spines, empowering next-generation biomimetic sensors
Peer-Reviewed Publication
Last Updated: 29-Apr-2026 03:15 ET (29-Apr-2026 07:15 GMT/UTC)
A research team has successfully developed a deep neural network (DNN) model capable of predicting nuclear charge density distributions with high precision. Trained on advanced theoretical data, this model outperforms existing methods in accuracy and has yielded a comprehensive global dataset of charge densities spanning a wide range of nuclides. This achievement provides invaluable data support for research in nuclear physics, atomic and molecular physics, and related fundamental fields.
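The article does not reproduce the team's architecture or training data, but the underlying task — regressing a charge-density profile from nuclide features — can be sketched with a tiny NumPy MLP. Here a two-parameter Fermi profile stands in for the "theoretical" target, and the feature scaling, network size, and learning rate are all illustrative assumptions, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fermi_density(r, Z, N):
    """Two-parameter Fermi profile used as a stand-in 'theoretical' target
    (illustrative only; not the training data from the study)."""
    A = Z + N
    c = 1.1 * A ** (1.0 / 3.0)   # half-density radius (fm), rough empirical form
    a = 0.55                     # surface diffuseness (fm)
    return 1.0 / (1.0 + np.exp((r - c) / a))   # normalized to 1 at r = 0

# Inputs: proton number Z, neutron number N, radius r; output: density at r
Z = rng.integers(8, 82, size=400).astype(float)
N = Z + rng.integers(0, 30, size=400)
r = rng.uniform(0.0, 12.0, size=400)
X = np.stack([Z / 100.0, N / 150.0, r / 12.0], axis=1)   # crude feature scaling
y = fermi_density(r, Z, N)

# Minimal one-hidden-layer MLP trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(np.mean(err ** 2))
    g = 2 * err[:, None] / len(y)        # d(MSE)/d(pred)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)       # backprop through tanh
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
```

The training loss drops steadily on this synthetic target; the published model presumably uses a far richer parameterization and real theoretical densities, but the input-output shape of the problem is the same.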
A team of Korean researchers has developed the world’s first technology that can freely connect and disconnect core computing resources, such as memory and accelerators, using light in next-generation artificial intelligence (AI) datacenters. The Electronics and Telecommunications Research Institute (ETRI) announced the development of a new optical-switch-based datacenter resource interconnection technology, Optical Disaggregation (OD). This is regarded as a core next-generation optical network technology, designed to resolve the shortage of computing resources caused by growing AI services and to enable faster, more efficient operation of future datacenters.
A computational method called scSurv, developed by researchers at the Institute of Science Tokyo, links individual cells to patient outcomes using widely available bulk RNA sequencing data. The approach uses single-cell reference datasets together with patient survival data to infer the contributions of individual cells within complex tissues. The model identified cell populations associated with survival across several cancers, offering a way to uncover disease-driving cells and support the development of more targeted treatment strategies.
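scSurv's actual estimator is more sophisticated, but its core idea — deconvolve bulk profiles against a single-cell reference, then associate the inferred per-cell-type contributions with survival — can be sketched on synthetic data. Everything below (signatures, sample sizes, the survival model, and the correlation-based scoring) is invented for illustration and is not the published method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-cell reference: mean expression of 50 genes
# for 3 cell types (all numbers are invented for illustration).
n_genes, n_types, n_patients = 50, 3, 40
signatures = rng.gamma(2.0, 1.0, (n_genes, n_types))

# Simulate bulk samples as noisy mixtures of the reference signatures
true_frac = rng.dirichlet(np.ones(n_types), size=n_patients)  # (patients, types)
bulk = true_frac @ signatures.T + rng.normal(0, 0.05, (n_patients, n_genes))

# Survival times driven by cell type 0 (higher fraction -> shorter survival)
surv_time = np.exp(2.0 - 3.0 * true_frac[:, 0] + rng.normal(0, 0.2, n_patients))

# Step 1: deconvolution by least squares, clipped and renormalized to fractions
est, *_ = np.linalg.lstsq(signatures, bulk.T, rcond=None)     # (types, patients)
est = np.clip(est.T, 0, None)
est /= est.sum(axis=1, keepdims=True)

# Step 2: score each cell type by correlation with log survival time
scores = [np.corrcoef(est[:, k], np.log(surv_time))[0, 1] for k in range(n_types)]
risk_type = int(np.argmin(scores))   # most negative correlation = risk-associated
```

On this toy data, the deconvolved fractions recover cell type 0 as the survival-associated population, mirroring the kind of disease-driving cells the method is designed to surface.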
Seamless integration between electronics and the human body is the goal, and gallium-based liquid metals (Ga-LMs) are key to this transformation. Essential properties such as fluidity, conductivity, and biocompatibility enable Ga-LMs to form stretchable, self-healing circuits, paving the way for advanced wearables, soft robotics, and medical implants that promise to redefine human-machine interaction.
Neuroimaging analysis in brain disorders faces a persistent challenge: brain signals are complex and high-dimensional, while high-quality labeled datasets remain limited. This review article systematically examines how self-supervised learning can help address that gap by learning meaningful representations directly from unlabeled neuroimaging data. It covers major methodological families, including contrastive, generative, and hybrid generative-contrastive approaches, and discusses their applications in functional MRI, EEG, and multimodal brain network analysis.
The review argues that self-supervised learning offers more than annotation efficiency. It may enable more transferable and clinically useful representations for disease screening, diagnosis, and prognosis across heterogeneous datasets and disorders. At the same time, interpretability, data heterogeneity, missing modalities, and clinical validation remain major barriers. Future work will likely focus on stronger multimodal fusion, better cross-site generalization, and more clinically adaptable model design.
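As a concrete illustration of the contrastive family the review covers, here is a minimal NumPy InfoNCE-style loss applied to toy embeddings of two augmented "views" of the same unlabeled recordings. The augmentations, embedding dimension, and temperature are arbitrary choices for the sketch, not a method taken from the review:

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(z1, z2, temperature=0.1):
    # InfoNCE / NT-Xent-style loss between two augmented views:
    # row i of z1 should match row i of z2 and repel every other row.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                  # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on the diagonal

# Toy "embeddings" of 8 unlabeled recordings under two random perturbations
base = rng.normal(0, 1, (8, 16))
view1 = base + rng.normal(0, 0.1, base.shape)   # e.g. temporal jitter
view2 = base + rng.normal(0, 0.1, base.shape)   # e.g. channel noise

aligned = info_nce(view1, view2)
shuffled = info_nce(view1, view2[::-1])   # destroy the positive pairing
```

The loss is low when the two views are correctly paired and high when the pairing is scrambled, which is exactly the training signal a contrastive encoder exploits on unlabeled fMRI or EEG data.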