---
displayName: AGIEval
license:
- MIT
taskTypes: []
mediaTypes:
- Text
labelTypes: []
tags:
- attrs: null
id: 11864
name:
en: ''
zh: 文本检索
publisher:
- Microsoft
publishDate: '2023-04-01'
publishUrl: https://huggingface.co/datasets/lighteval/agi_eval_en
paperUrl: https://arxiv.org/pdf/2304.06364.pdf
---
# Dataset Introduction
## Overview
AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., the Chinese College Entrance Exam (Gaokao) and the American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. For a full description of the benchmark, please refer to the paper.
## Citation
```
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Download dataset
:modelscope-code[]{type="git"}
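For a scripted download, the sketch below pulls the English AGIEval data from the Hugging Face repository listed under `publishUrl` (`lighteval/agi_eval_en`) using the `datasets` library. This is a minimal example, not the official loading recipe; the subset names are discovered at runtime rather than assumed, since this card does not list them.

```python
# Minimal sketch: load AGIEval from the Hugging Face Hub with the `datasets`
# library. The repo id comes from the publishUrl above; subset names are
# queried at runtime instead of being hard-coded.
from datasets import get_dataset_config_names, load_dataset

repo_id = "lighteval/agi_eval_en"

# Discover which exam subsets this repository exposes.
configs = get_dataset_config_names(repo_id)
print(configs)

# Load one subset as an example and inspect its splits.
dataset = load_dataset(repo_id, configs[0])
print(dataset)
```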