add mgsm datasets (open-compass#1081)
* add mgsm datasets

* fix lint

* fix lint

* update mgsm

* update mgsm

* ease code spell

* update

* update

* update

---------

Co-authored-by: Leymore <[email protected]>
2 people authored and BunnyRunnerX committed May 14, 2024
1 parent c4418a9 commit 3b8615d
Showing 7 changed files with 216 additions and 0 deletions.
67 changes: 67 additions & 0 deletions configs/datasets/mgsm/README.md
@@ -0,0 +1,67 @@
# MGSM

## Introduction

The following introduction comes from the abstract of [Language models are multilingual chain-of-thought reasoners](https://arxiv.org/abs/2210.03057):

```
We introduce the Multilingual Grade School Math (MGSM) benchmark, by manually translating 250 grade-school math problems from the GSM8K dataset into ten typologically diverse languages.
```

## Official link

### Paper

[Language models are multilingual chain-of-thought reasoners](https://arxiv.org/abs/2210.03057)

### Repository

[MGSM](https://github.com/google-research/url-nlp)

## Examples

Input example I:

```
Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of "Answer:". Do not add anything other than the integer answer after "Answer:".
Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?
```

Output example I (from GPT-4):

```
Answer: 18
```
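
For reference, the arithmetic behind this output: the ducks lay 16 eggs, Janet uses 3 + 4 = 7 of them, leaving 9 to sell at $2 each, so she makes 9 × $2 = $18 per day.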

## Evaluation results

```
dataset         version    metric         mode    llama-3-8b-instruct-hf
--------------  ---------  -------------  ------  ------------------------
mgsm_bn         b65151     accuracy       gen     14.4
mgsm_de         2cc8ae     accuracy       gen     60
mgsm_en         5de71e     accuracy       gen     76
mgsm_es         d6b459     accuracy       gen     61.6
mgsm_fr         813e3c     accuracy       gen     54.4
mgsm_ja         04424f     accuracy       gen     42.8
mgsm_ru         400469     accuracy       gen     62.8
mgsm_sw         9e41ed     accuracy       gen     0.8
mgsm_te         346d97     accuracy       gen     0
mgsm_th         e70bee     accuracy       gen     44
mgsm_zh         d5cf30     accuracy       gen     28.4
mgsm_latin      -          naive_average  gen     50.56
mgsm_non_latin  -          naive_average  gen     32.07
mgsm            -          naive_average  gen     40.47
```
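
The grouped rows (`mgsm_latin`, `mgsm_non_latin`, `mgsm`) are unweighted means of the per-language accuracies; a minimal sketch of the computation, with the numbers copied from the table above:

```
LATIN = {"de": 60, "en": 76, "es": 61.6, "fr": 54.4, "sw": 0.8}
NON_LATIN = {"bn": 14.4, "ja": 42.8, "ru": 62.8, "te": 0, "th": 44, "zh": 28.4}

def naive_average(scores):
    return round(sum(scores) / len(scores), 2)

print(naive_average(LATIN.values()))                          # 50.56
print(naive_average(NON_LATIN.values()))                      # 32.07
print(naive_average([*LATIN.values(), *NON_LATIN.values()]))  # 40.47
```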

## Reference

```
@article{shi2022language,
title={Language models are multilingual chain-of-thought reasoners},
author={Shi, Freda and Suzgun, Mirac and Freitag, Markus and Wang, Xuezhi and Srivats, Suraj and Vosoughi, Soroush and Chung, Hyung Won and Tay, Yi and Ruder, Sebastian and Zhou, Denny and others},
journal={arXiv preprint arXiv:2210.03057},
year={2022}
}
```
4 changes: 4 additions & 0 deletions configs/datasets/mgsm/mgsm_gen.py
@@ -0,0 +1,4 @@
from mmengine.config import read_base

with read_base():
    from .mgsm_gen_d967bc import mgsm_datasets
56 changes: 56 additions & 0 deletions configs/datasets/mgsm/mgsm_gen_d967bc.py
@@ -0,0 +1,56 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import JiebaRougeEvaluator
from opencompass.datasets import MGSMSDataset, MGSM_Evaluator, mgsm_postprocess


ALL_LANGUAGES = ["bn", "de", "en", "es", "fr", "ja", "ru", "sw", "te", "th", "zh"]

LANG_TO_INSTRUCTIONS = {
"en": """Solve this math problem. Give the reasoning steps before giving the final answer on the last line by itself in the format of "Answer:". Do not add anything other than the integer answer after "Answer:".\n\n{question}""",
"bn": """এই গণিতের সমস্যাটি সমাধান করুন। চূড়ান্ত উত্তর দেওয়ার আগে যুক্তিসম্পন্ন পদক্ষেপ প্রদান করুন। চূড়ান্ত উত্তরটি একক সংখ্যা হিসাবে "উত্তর:" এর পরে শেষ লাইনে দিন। "উত্তর:" এর পরে অন্য কিছু যুক্ত করবেন না।.\n\n{question}""",
"de": """Löse dieses Mathematikproblem. Gib die Schritte zur Begründung an, bevor du die endgültige Antwort in der letzten Zeile alleine im Format "Antwort:" gibst. Füge nichts anderes als die ganzzahlige Antwort nach "Antwort:" hinzu.\n\n{question}""",
"es": """Resuelve este problema matemático. Proporciona los pasos de razonamiento antes de dar la respuesta final en la última línea por sí misma en el formato de "Respuesta:". No añadas nada más que la respuesta entera después de "Respuesta:".\n\n{question}""",
"fr": """Résolvez ce problème de mathématiques. Donnez les étapes de raisonnement avant de fournir la réponse finale sur la dernière ligne elle-même dans le format de "Réponse:". N'ajoutez rien d'autre que la réponse entière après "Réponse:".\n\n{question}""",
"ja": """の数学の問題を解いてください。最終的な答えを出す前に、解答の推論過程を記述してください。そして最後の行には "答え:" の形式で答えを記述し、その後には整数の答え以外何も追加しないでください。\n\n{question}""",
"ru": """Решите эту математическую задачу. Объясните шаги рассуждения перед тем, как дать окончательный ответ в последней строке сам по себе в формате "Ответ:". Не добавляйте ничего, кроме целочисленного ответа после "Ответ:".\n\n{question}""",
"sw": """Suluhisha tatizo hili la hesabu. Toa hatua za mantiki kabla ya kutoa jibu la mwisho kwenye mstari wa mwisho peke yake katika muundo wa "Jibu:". Usiongeze chochote kingine isipokuwa jibu la integer baada ya "Jibu:".\n\n{question}""",
"te": """ఈ గణిత సమస్యను పరిష్కరించండి. చివరి సమాధానాన్ని ఇవ్వదానికి ముందు తర్కాత్మక అదుగులను ఇవ్వండి. చివరి పంక్తిలో మాత్రమే 'సమాధానం:' అనే ఆకారంలో చివరి సమాధానాద్ని ఇవ్వండి సమాధానం: తర్వాత పూర్ణాంక సమాధానానికి తప్పించి ఎదేనా చేర్చవద్దు.\n\n{question}""",
"th": """แก้ปัญหาคณิตศาสตร์นี้ ให้ให้ขั้นตอนการใช้เหตุผลก่อนที่จะให้คำตอบสุดท้ายในบรรทัดสุดท้ายโดยอยู่ในรูปแบบ "คำตอบ:" ไม่ควรเพิ่มอะไรนอกจากคำตอบที่เป็นจำนวนเต็มหลังจาก "คำตอบ:"\n\n{question}""",
"zh": """解决这个数学问题。在最后一行给出答案前,请提供推理步骤。最后一行应该以 "答案: " 的形式独立给出答案。在 "答案:" 后不要添加除整数答案之外的任何内容。\n\n{question}""",
}

mgsm_datasets = []
for lang in ALL_LANGUAGES:
    mgsm_reader_cfg = dict(input_columns=["question"], output_column="answer")

    mgsm_infer_cfg = dict(
        prompt_template=dict(
            type=PromptTemplate,
            template=dict(
                round=[
                    dict(role="HUMAN", prompt=LANG_TO_INSTRUCTIONS[lang]),
                ]
            ),
        ),
        retriever=dict(type=ZeroRetriever),
        inferencer=dict(type=GenInferencer, max_out_len=512),
    )

    mgsm_eval_cfg = dict(
        evaluator=dict(type=MGSM_Evaluator),
        pred_role="BOT",
        pred_postprocessor=dict(type=mgsm_postprocess, lang=lang),
    )

    mgsm_datasets.append(
        dict(
            type=MGSMSDataset,
            abbr=f"mgsm_{lang}",
            path=f"data/mgsm/mgsm_{lang}.tsv",
            reader_cfg=mgsm_reader_cfg,
            infer_cfg=mgsm_infer_cfg,
            eval_cfg=mgsm_eval_cfg,
        )
    )
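
For orientation, a per-dataset config like the one above is normally pulled into a top-level evaluation config via `read_base` and launched with OpenCompass's `run.py`; a minimal sketch, where the model config import is only an illustrative assumption:

```
# eval_mgsm.py -- minimal sketch; the model import path is an assumption
from mmengine.config import read_base

with read_base():
    from .datasets.mgsm.mgsm_gen import mgsm_datasets
    from .models.hf_llama.hf_llama3_8b_instruct import models  # illustrative

datasets = mgsm_datasets
```

Such a file would typically be run as `python run.py configs/eval_mgsm.py`.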
1 change: 1 addition & 0 deletions configs/summarizers/example.py
@@ -11,6 +11,7 @@
from .groups.tydiqa import tydiqa_summary_groups
from .groups.xiezhi import xiezhi_summary_groups
from .groups.scibench import scibench_summary_groups
from .groups.mgsm import mgsm_summary_groups

summarizer = dict(
    summary_groups=sum([v for k, v in locals().items() if k.endswith("_summary_groups")], []),
9 changes: 9 additions & 0 deletions configs/summarizers/groups/mgsm.py
@@ -0,0 +1,9 @@
ALL_LANGUAGES = ["bn", "de", "en", "es", "fr", "ja", "ru", "sw", "te", "th", "zh"]
LATIN_LANGUAGES = ["de", "en", "es", "fr", "sw"]
NON_LATIN_LANGUAGES = ["bn", "ja", "ru", "te", "th", "zh"]

mgsm_summary_groups = [
{'name': 'mgsm_latin', 'subsets': [f'mgsm_{lang}' for lang in LATIN_LANGUAGES]},
{'name': 'mgsm_non_latin', 'subsets': [f'mgsm_{lang}' for lang in NON_LATIN_LANGUAGES]},
{'name': 'mgsm', 'subsets': [f'mgsm_{lang}' for lang in ALL_LANGUAGES]},
]
1 change: 1 addition & 0 deletions opencompass/datasets/__init__.py
@@ -77,6 +77,7 @@
from .mathbench import * # noqa: F401, F403
from .mbpp import * # noqa: F401, F403
from .medbench import * # noqa: F401, F403
from .mgsm import * # noqa: F401, F403
from .mmlu import * # noqa: F401, F403
from .multirc import * # noqa: F401, F403
from .narrativeqa import * # noqa: F401, F403
78 changes: 78 additions & 0 deletions opencompass/datasets/mgsm.py
@@ -0,0 +1,78 @@
import re

from datasets import Dataset

from opencompass.openicl.icl_evaluator import BaseEvaluator
from opencompass.registry import LOAD_DATASET

from .base import BaseDataset


@LOAD_DATASET.register_module()
class MGSMSDataset(BaseDataset):

    @staticmethod
    def load(path: str):
        src_lines = open(path, 'r', encoding='utf-8').readlines()
        data = {'question': [], 'answer': []}
        for lines in src_lines:
            question, answer = lines.strip().split('\t')
            data['question'].append(question)
            data['answer'].append(answer)

        dataset = Dataset.from_dict({
            'question': data['question'],
            'answer': data['answer']
        })
        return dataset


LANG_TO_ANSWER_PREFIX = {
'en': 'Answer',
'bn': 'উত্তর',
'de': 'Antwort',
'es': 'Respuesta',
'fr': 'Réponse',
'ja': '答え',
'ru': 'Ответ',
'sw': 'Jibu',
'te': 'సమాధానం',
'th': 'คำตอบ',
'zh': '答案',
}


def mgsm_postprocess(text: str, lang: str) -> str:
    answer_prefix = LANG_TO_ANSWER_PREFIX[lang]
    if answer_prefix not in text:
        return ''
    answer_text = text.split(answer_prefix)[-1].strip()
    numbers = re.findall(r'\d+\.?\d*', answer_text.replace(',', ''))
    return numbers[-1].rstrip('.') if numbers else ''


class MGSM_Evaluator(BaseEvaluator):

    def score(self, predictions, references):
        assert len(predictions) == len(references)

        num_correct, total = 0, 0
        details = {}
        for index, (references_answer, predictions_answer) in enumerate(
                zip(references, predictions)):
            if references_answer == predictions_answer:
                is_correct = True
            else:
                is_correct = False

            num_correct += is_correct
            total += 1
            details[str(index)] = {
                'references': references_answer,
                'predictions': predictions_answer,
                'correct': is_correct,
            }

        accuracy = num_correct / total * 100
        final_result = {'accuracy': accuracy, 'details': details}
        return final_result
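
Taken together, the postprocessor and evaluator behave roughly as in this small sketch (assuming the module is importable as registered in the `__init__.py` change above):

```
from opencompass.datasets.mgsm import MGSM_Evaluator, mgsm_postprocess

# The postprocessor keeps only the last number after the language-specific prefix.
print(mgsm_postprocess('16 - 3 - 4 = 9 eggs. Answer: 18', lang='en'))  # '18'
print(mgsm_postprocess('I am not sure.', lang='en'))                   # ''

# The evaluator does exact string matching and reports accuracy in percent.
result = MGSM_Evaluator().score(predictions=['18', ''], references=['18', '7'])
print(result['accuracy'])  # 50.0
```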
