Groundbreaking research compares prompt styles and LLMs for structured data generation, unveiling key trade-offs for real-world AI applications
Peer-Reviewed Publication
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
Last Updated: 30-Dec-2025 00:11 ET (30-Dec-2025 05:11 GMT/UTC)
Nashville, TN & Williamsburg, VA – 24 Nov 2025 – A new study published in Artif. Intell. Auton. Syst. delivers the first systematic cross-model analysis of prompt engineering for structured data generation, offering actionable guidance for developers, data scientists, and organizations leveraging large language models (LLMs) in healthcare, e-commerce, and beyond. Led by Ashraf Elnashar from Vanderbilt University, alongside co-authors Jules White (Vanderbilt University) and Douglas C. Schmidt (William & Mary), the research benchmarks six prompt styles across three leading LLMs to solve a critical challenge: balancing accuracy, speed, and cost in structured data workflows.
Structured data—from medical records and receipts to business analytics—powers essential AI-driven tasks, but its quality and efficiency depend heavily on how prompts are designed. “Prior research only scratched the surface, testing a limited set of prompts on single models,” said Elnashar, the study’s corresponding author and a researcher in Vanderbilt’s Department of Computer Science. “Our work expands the horizon by evaluating six widely used prompt formats across ChatGPT-4o, Claude, and Gemini, revealing clear trade-offs that let practitioners tailor their approach to real-world needs.”
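The kind of cross-model comparison the study describes can be pictured as a small benchmarking harness that runs each prompt style over a labeled dataset and records accuracy and latency. The sketch below is illustrative only: the prompt names, templates, and scoring are assumptions for demonstration, not the paper's actual formats, and a mock function stands in for the ChatGPT-4o, Claude, and Gemini APIs.

```python
import json
import time

# Hypothetical prompt styles (illustrative names, not the paper's templates).
# Literal JSON braces are doubled so str.format leaves them intact.
PROMPT_STYLES = {
    "zero_shot": "Extract the fields as JSON: {text}",
    "few_shot": (
        'Example: "Receipt from Foo, total $3.00" -> {{"name": "Foo", "total": 3.0}}\n'
        "Now extract the fields as JSON: {text}"
    ),
    "schema_guided": 'Return JSON matching the schema {{"name": str, "total": float}}: {text}',
}

def mock_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; always returns valid JSON here."""
    return json.dumps({"name": "ACME", "total": 12.5})

def score_output(raw: str, expected: dict) -> float:
    """Field-level accuracy: fraction of expected keys whose values match."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return 0.0
    hits = sum(1 for k, v in expected.items() if parsed.get(k) == v)
    return hits / len(expected)

def benchmark(styles: dict, records: list) -> dict:
    """Run every prompt style over every record; report accuracy and wall time."""
    results = {}
    for name, template in styles.items():
        t0 = time.perf_counter()
        acc = sum(
            score_output(mock_llm(template.format(text=r["text"])), r["expected"])
            for r in records
        ) / len(records)
        results[name] = {"accuracy": acc, "latency_s": time.perf_counter() - t0}
    return results

records = [
    {"text": "Receipt from ACME, total $12.50",
     "expected": {"name": "ACME", "total": 12.5}},
]
print(benchmark(PROMPT_STYLES, records))
```

Swapping the mock for real API clients (and adding per-call token counts) would extend the same loop to the cost dimension the study weighs alongside accuracy and speed.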
An Asia-led, “sovereign-by-design” platform built for secure, decentralised pathogen intelligence-sharing across borders aims to break data silos and shorten the “time to actionable insight” for outbreaks, from detection to control measures being in place.
Concordia University researchers unveiled a new audio-tokenization method, FocalCodec, that compresses speech into compact tokens while preserving meaning and quality.
By using binary spherical quantization and focal modulation, FocalCodec dramatically reduces bitrate, making speech easier for large language models to process.
In listening tests with 33 participants, reconstructed speech was judged nearly indistinguishable from the original recordings, showing that the method preserves natural sound quality.
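Binary spherical quantization, the core idea behind FocalCodec's bitrate reduction, can be sketched in a few lines. In the standard formulation, each latent vector is quantized dimension-wise to its sign and scaled by 1/sqrt(d), so the codeword is one of 2^d points on the unit sphere and d bits replace d floats. This is a minimal sketch under that assumption, not FocalCodec's actual implementation; the input vector is made up for illustration.

```python
import math

def binary_spherical_quantize(z):
    """Quantize a latent vector onto a corner of the unit hypersphere.

    Each dimension collapses to its sign, scaled by 1/sqrt(d), so only
    d bits are needed per vector regardless of the input's precision.
    """
    d = len(z)
    scale = 1.0 / math.sqrt(d)
    # Treat sign(0) as +1 so the codeword always lies exactly on the sphere.
    return [scale if x >= 0 else -scale for x in z]

z = [0.8, -0.1, 0.3, -2.0]
q = binary_spherical_quantize(z)
# The codeword has unit L2 norm regardless of the input's magnitude.
print(q, math.fsum(x * x for x in q))
```

Because only the sign pattern survives, a decoder (a neural network in codecs of this kind) must learn to reconstruct fine acoustic detail from these very coarse tokens, which is where the reported listening-test quality becomes notable.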