MathEval: a comprehensive benchmark for evaluating large language models on mathematical reasoning capabilities
Mathematical reasoning is a fundamental aspect of intelligence, spanning basic arithmetic through intricate problem-solving. Recent investigations into the mathematical abilities of large language models (LLMs) have yielded inconsistent and incomplete assessments. In response, we introduce MathEval, a comprehensive benchmark designed to methodically evaluate the mathematical problem-solving proficiency of LLMs across various contexts, adaptation strategies, and evaluation metrics. MathEval consolidates 22 distinct datasets covering a broad range of mathematical disciplines, languages (including English and Chinese), and problem categories (from arithmetic and competition mathematics to higher mathematics), with difficulty levels ranging from elementary to advanced. To handle the complexity of mathematical reasoning outputs and adapt to diverse models and prompts, we employ GPT-4 within an automated pipeline for answer extraction and comparison. Additionally, we train a model on GPT-4's judgments, initialized from the publicly available DeepSeek-LLM-7B-Base, enabling accurate answer validation without requiring GPT-4 access. To mitigate potential test data contamination and truly gauge progress, MathEval incorporates an annually refreshed set of problems from the latest Chinese National College Entrance Examination (Gaokao-2023, Gaokao-2024), thereby benchmarking genuine advancements in mathematical problem-solving skills.
- Journal: Frontiers of Digital Education
- Publisher: Higher Education Press
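
To illustrate the kind of automated answer extraction and comparison pipeline the abstract describes, the sketch below prompts an LLM judge to pull the final answer out of a model's free-form solution and then asks it to check that answer against the reference. This is a minimal illustration under stated assumptions: the prompt wording, function names, and use of the OpenAI chat completions client are not taken from the authors' released code.

```python
# Minimal sketch of an LLM-judged answer extraction / comparison step.
# Assumptions (not from the paper's released code): prompt wording,
# function names, and use of the OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACT_PROMPT = (
    "Read the following solution to a math problem and report only the "
    "final answer, with no explanation.\n\nSolution:\n{solution}"
)

COMPARE_PROMPT = (
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Candidate answer: {candidate}\n"
    "Are the two answers mathematically equivalent? Reply with exactly "
    "'Yes' or 'No'."
)


def ask(prompt: str, model: str = "gpt-4") -> str:
    """Send a single-turn prompt to the judge model and return its reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


def judge(question: str, solution: str, reference: str) -> bool:
    """Extract the final answer from a model's solution, then ask the
    judge whether it matches the reference answer."""
    candidate = ask(EXTRACT_PROMPT.format(solution=solution))
    verdict = ask(
        COMPARE_PROMPT.format(
            question=question, reference=reference, candidate=candidate
        )
    )
    return verdict.lower().startswith("yes")


if __name__ == "__main__":
    print(
        judge(
            question="What is 12 * 7?",
            solution="12 times 7: 12*7 = 84, so the answer is 84.",
            reference="84",
        )
    )
```

In a setup like this, the fine-tuned open judge described in the abstract could replace GPT-4 simply by pointing the `model` argument (or the client's endpoint) at a locally served checkpoint, so evaluation no longer depends on GPT-4 access.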