CPsyExam: A Chinese Benchmark for Evaluating Psychology using Examinations (2024)

Jiahao Zhao1,2∗†  Jingwei Zhu3∗  Minghuan Tan1‡  Min Yang1‡
Di Yang3  Chenhao Zhang1,4†  Guancheng Ye1,5†
Chengming Li6  Xiping Hu6
∗ Equal contribution. † Work done on the Science and Technology Innovation Project of UCAS directed by SIAT. ‡ Corresponding author.
1 Shenzhen Key Laboratory for High Performance Data Mining,
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences.
2 Jilin University. 3 University of Science and Technology of China.
4 Huazhong University of Science and Technology.
5 South China University of Technology. 6 Shenzhen MSU-BIT University.
zhaojh2121@mails.jlu.edu.cn, {jingweizhu,di-yang}@mail.ustc.edu.cn,
{mh.tan,min.yang}@siat.ac.cn, ch_zhang@hust.edu.cn, {licm,huxp}@smbu.edu.cn

Abstract

In this paper, we introduce a novel psychological benchmark, CPsyExam, constructed from questions sourced from Chinese language examinations. CPsyExam is designed to prioritize psychological knowledge and case analysis separately, recognizing the significance of applying psychological knowledge to real-world scenarios. From the pool of 22k questions, we utilize 4k to create a benchmark that offers balanced coverage of subjects and incorporates a diverse range of case analysis techniques. Furthermore, we evaluate a range of existing large language models (LLMs), spanning from open-sourced to API-based models. Our experiments and analysis demonstrate that CPsyExam serves as an effective benchmark for enhancing the understanding of psychology within LLMs and enables the comparison of LLMs across various granularities.


1 Introduction

The evaluation of language models has been an important topic with sustained vitality in the natural language processing community (Chang et al., 2023). With the development of pretrained language models such as GPT (Radford et al., 2018, 2019) and BERT (Devlin et al., 2019), their increasing abilities to execute a wide range of linguistic tasks across different domains call for more challenging and inclusive evaluation settings with comprehensive human baselines.

Recently, researchers have constructed a series of benchmarks, such as GLUE (Wang et al., 2019b), SuperGLUE (Wang et al., 2019a) and CLUE (Xu et al., 2020), to evaluate natural language understanding (NLU) tasks. As more and more models surpass human performance on these tasks, massive multi-task benchmarks based on real-world exams, such as MMLU (Hendrycks et al., 2021), CMMLU (Li et al., 2023) and CEVAL (Huang et al., 2023), have been constructed to comprehensively assess the abilities of LLMs. There is also a trend toward constructing diverse benchmarks that focus on different abilities and knowledge across different domains (Hendrycks et al., 2021; Li et al., 2023; Wang et al., 2023).

With the increasing adoption of LLMs in psychological counselling (Lai et al., 2023) and mental health support (Qiu et al., 2023), there is an urgent need for a psychological evaluation benchmark that measures to what extent current LLMs understand psychological knowledge. To the best of our knowledge, the domain of psychology in Chinese is still overlooked by existing benchmarks. First, not all benchmarks for LLMs cover psychological knowledge, and those that do offer inadequate coverage: CMMLU (Li et al., 2023) has only one subject related to psychology, and CEVAL (Huang et al., 2023) does not include psychology-related subjects at all. Second, although there are concurrent works such as PsyBench (Zhang et al., 2023a) and PsyEval (Jin et al., 2023), the questions in these benchmarks are either automatically generated by LLMs or limited in size. For example, PsyEval constructs a Mental Health QA set of 726 questions from MedQA (Jin et al., 2020) through keyword matching and manual screening. PsyBench is constructed with GPT-4 and focuses on balanced knowledge coverage, but is limited in dataset size.

In this paper, we construct CPsyExam, a large-scale psychological evaluation benchmark built from a series of Chinese examinations that contain psychology subjects. CPsyExam contains three kinds of questions: multiple-choice question answering (MCQA), multiple-response question answering (MRQA), and question answering (QA). Given the specificity of the psychology domain, and in order to comprehensively assess the ability of LLMs to understand psychological cases, we divide CPsyExam into two parts: (1) Knowledge (KG) contains factoid-oriented questions with wide coverage of psychology concepts, drawn from real examinations for professional counsellors; (2) Case Analysis (CA) contains case-oriented questions focusing on the methods, diagnoses and treatments required during counselling.

We further compare the performance of recent general-domain LLMs and psychology-specific LLMs on CPsyExam. Our experiments reveal that, compared to the foundation models, these fine-tuned models exhibit marginal gains or no improvement in understanding psychological knowledge. In some cases, their ability to analyze cases may even be compromised. Evidently, LLMs still have room for improvement in mastering psychological knowledge and applying it to psychological case analysis. CPsyExam serves as a valuable benchmark for advancing LLMs' understanding of psychology.

Our work makes the following contributions:

  1. We provide a comprehensive and balanced dataset of Chinese psychology examination questions.

  2. We propose a psychological assessment framework that includes a knowledge session and a case-analysis session.

  3. We construct the benchmark and release the SFT data, which contribute to enhancing the psychological competence of LLMs.

[Figure 1 (image not shown).]

2 Related Work

2.1 Datasets in Psychology Domain

In the psychological domain, language resources are abundant for mental health support, in the form of question answering as well as conversations. For example, EmpathicReactions (Buechel et al., 2018) investigates people's reactions to news stories and distinguishes between multiple psychological properties of empathy. EmpathicConversations (Omitaomu et al., 2022) constructs dyadic (two-person) text conversations of crowd workers about news articles. The two datasets are further adopted by the WASSA Shared Tasks (Tafreshi et al., 2021; Barriere et al., 2022, 2023) for empathy prediction. EmpatheticDialogues (Rashkin et al., 2019) contains conversations in which two people discuss a situation that happened to one of them, related to a given feeling (32 emotion labels). EDOS (Welivita et al., 2021) uses emotional dialogues from OpenSubtitles (Lison et al., 2018) and adds annotations with 32 fine-grained emotions, eight empathetic response intents, and a Neutral category.

To better understand how empathy is expressed in context, several works offer extra annotations from additional psychological perspectives. MentalHealthSubreddits (Sharma et al., 2020) develops a framework for characterizing the communication of empathy in text-based conversations and annotates a subset of 10k interactions for empathy. PsyQA (Sun et al., 2021) adopts different annotation strategies to present an in-depth analysis of both lexical features and strategy patterns in counseling answers. ESConv (Liu et al., 2021) proposes an ESC Framework (grounded in the Helping Skills Theory) and provides annotations for help-seekers' problems, emotions, feedback, and support strategies. CHQ-SocioEmo (Alasmari et al., 2023) contains a total of 1,500 question-answer pairs and covers a range of social support categories, such as informational, emotional, esteem, network, and tangible support.

There are also datasets constructed by rewriting existing conversations with LLMs to enhance their topic coverage and expressiveness. AugESC (Zheng et al., 2023) is an augmented dataset derived from the crowdsourced ESConv corpus, obtained by prompting an LLM to complete full dialogues. SMILE (Qiu et al., 2023) employs ChatGPT to extend public single-turn dialogues from PsyQA into multi-turn ones.

Considering the privacy and security issues of psychological consultations, real counseling data between practitioners and patients remains hard to access. CounselChat (Bertagnolli, 2020) originates from counselchat.com, a website where individuals can seek assistance from licensed therapists. "Reflections" are a fundamental verbal skill employed by mental health counselors to convey understanding and recognition of the client's experiences and concerns, and PAIR (Min et al., 2022) is a dataset of reflections portraying different levels of reflective listening skill. However, such datasets are usually not released to the public.

The distribution of datasets in the psychology domain is not well balanced, as most of them focus on empathy. Although these resources can be used to evaluate the level of empathy in LLMs, there is still an urgent need to evaluate their knowledge of psychology.

2.2 Benchmarks of Large Language Models

Benchmarks for large language models can be categorized by the domains they focus on. There are integrated evaluation benchmarks targeting the general domain as well as specific domains. The evaluations usually adopt a diverse set of evaluation methods, such as zero-shot, few-shot, and chain-of-thought evaluation.

In the general domain, benchmarks typically comprehensively evaluate the scope and depth of a model's academic and professional knowledge, such as MMLU (Hendrycks et al., 2021), CMMLU (Li et al., 2023), CEVAL (Huang et al., 2023), BIG-Bench (Srivastava et al., 2023), HELM (Liang et al., 2023), M3KE (Liu et al., 2023), Xiezhi (Gu et al., 2023) and MMCU (Zeng, 2023).

In specific domains, there are also benchmarks focusing on the evaluation of expertise in LLMs. In the medical domain, various benchmarks have been proposed, such as webMedQA (He et al., 2019), NLPEC (Li et al., 2020), IMCS21 (Chen et al., 2022), MedMCQA (Pal et al., 2022) and CMB (Wang et al., 2023). In the financial domain, there are FinanceBench (Islam et al., 2023), PiXiu (Xie et al., 2023) and FinEval (Zhang et al., 2023b). In the legal domain, there are LexGLUE (Chalkidis et al., 2022) and LegalBench (Guha et al., 2023).

There are also benchmarks integrating multiple high-quality open-source datasets from different domains, such as OpenCompass (Contributors, 2023) and AGIEval (Zhong et al., 2023).

However, in the psychological domain, despite the ongoing development of benchmarks such as PsyBench (Zhang et al., 2023a) and PsyEval (Jin et al., 2023), there is still a gap to be filled with genuine questions crafted by experts and problems derived from psychological case analyses.

3 Dataset

We gather psychological data from publicly available resources and adopt taxonomy criteria specific to different subjects within the domain to further categorize the questions.

3.1 Design Principles

Multi-capability

In the realm of psychology, the significance of case studies is on par with that of psychological knowledge. Analyzing cases is often a testament to a practitioner's ability to apply their skills within the field of psychology. Consequently, our dataset encompasses two key components: one designed to evaluate the LLM's grasp of psychological knowledge (KG), and the other aimed at assessing the LLM's proficiency in case analysis (CA).

Comprehensive but Balanced

The benchmark needs to accurately reflect an LLM's competence in the field of psychology within the Chinese context. Therefore, we anticipate that it should cover the majority of topics tested in Chinese psychology examinations, but with a balanced distribution of questions across subjects. We identify a wide range of examinations that include psychology subjects under the Chinese examination system.

3.2 Data Collection

Table 1: Statistics of CPsyExam.

                    Knowledge                          Case Analysis
            MCQA     MRQA    QA      Total     MCQA    MRQA    QA     Total
Train       6,852    2,230   2,904   11,986    44      729     17     790
Dev         764      245     322     1,331     5       83      1      89
Test        2,321    781     100     3,202     600     200     100    900
Reserved    2,321    781     100     3,202     600     200     100    900
Total       12,240   4,037   3,426   19,721    1,249   1,212   218    2,679
[Figure 2: example KG and CA questions from CPsyExam (image not shown).]
The System of Psychology Examinations in China

Due to the advancement of psychology and initiatives by global organizations, there has been growing awareness of mental health in developing countries, including China. Educationally, China has established an examination system for psychology that assesses the foundational psychological knowledge of practitioners from diverse occupations and provides professional certifications for individuals aspiring to enter the field of psychological counseling.

Below is a list of examinations for target groups:

  • PCE (Psychological Counselor Examination): The professional qualification examinations for first-, second-, and third-tier psychological counselors are designed to assess the competence and knowledge of individuals working in the field of psychology. These exams are typically structured as a series of multiple-choice or essay questions that cover various aspects of psychology.

  • TQE (Teachers' Qualification Examination): The Teachers' Qualification Examination is a standardized assessment for individuals who wish to teach psychology in primary, middle, and high schools, as well as in vocational schools and higher education institutions. It is designed to ensure that educators possess the necessary knowledge and skills to effectively teach psychology courses.

  • GEE (Graduate Entrance Examinations): It covers a wide range of subjects, such as general psychology, social psychology, experimental psychology, contemporary educational psychology, psychology and educational measurement, developmental psychology, modern psychology, and educational statistics.

  • SSE (Self-study Examination): The Self-study Examination assesses an individual's knowledge and understanding of psychology, specifically in areas such as educational psychology, medical psychology, advertising psychology, and journalism psychology. It is typically taken by individuals who are self-studying or not enrolled in a formal educational program.

This list reflects the comprehensive examination system of psychology education in China.

Crawling

Based on the categorization of psychology examinations, we crawl publicly available online resources to construct a database of questions. The websites used for data crawling include ExamCoo (https://examcoo.com), StudyEZ (http://www.studyez.com/psychology/), Hxter (www.hxter.com) and MXQE (http://tk.mxqe.com), complemented by a book corpus on the GEE.

Data Processing

We collect our data from both websites and books. For the data scraped from websites, we use a parsing program to extract the questions, while for the data from books, we extract them manually, resulting in structured data in a uniform format. Afterward, we preprocess all the data to eliminate duplicates and questions with incorrect formatting. We also remove questions that contain image links and standardize the question format by removing question numbers and option letters. Finally, we manually validate the dataset to ensure that there are no apparent grammatical errors in the questions.
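For illustration, this cleaning step can be sketched as follows; the field names, regular expressions, and deduplication key are assumptions made for the sketch rather than the project's actual implementation.

```python
# A minimal sketch (assumption, not the project's actual pipeline) of the
# preprocessing described above: dropping image-bearing items, stripping
# question numbers and option letters, and deduplicating. Field names are
# hypothetical.
import re

def clean_question(raw: dict) -> dict | None:
    """Return a normalized question record, or None if it should be discarded."""
    text = raw["question"].strip()
    if re.search(r"https?://\S+\.(png|jpe?g|gif)", text):   # drop image links
        return None
    text = re.sub(r"^\s*\d+[.、]\s*", "", text)              # leading question number
    options = [re.sub(r"^[A-F][.、]\s*", "", o.strip())      # leading option letters
               for o in raw["options"]]
    return {"question": text, "options": options, "answer": raw["answer"].strip()}

def preprocess(raw_items: list[dict]) -> list[dict]:
    """Clean every raw item and deduplicate on the normalized question text."""
    seen, cleaned = set(), []
    for raw in raw_items:
        item = clean_question(raw)
        if item is None or item["question"] in seen:
            continue
        seen.add(item["question"])
        cleaned.append(item)
    return cleaned
```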

We identify three types of questions from the preprocessed data: multiple-choice questions (MCQA), multiple-response questions (MRQA), and free-form question-answering (QA) questions. MCQA questions are commonly used in domain-specific evaluations of LLMs, despite their potentially uneven difficulty and the possibility of random guessing by models. This question type offers advantages for assessment and comparison due to the extensive availability of resources. Consequently, we construct our dataset primarily from multiple-choice questions in psychology. MRQA questions, although less common in most evaluation benchmarks, are frequently used in human tests to challenge candidates by requiring them to select all the appropriate answers. We include this question type to further assess the abilities of LLMs with respect to psychological knowledge. QA questions typically involve diverse topics in psychology and the application of specific techniques. We gather this type of question to evaluate the generation abilities of LLMs in psychology.

CA questions can be distinguished from KG questions using empirical methods such as keyword matching, as sketched below.
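The following is a minimal sketch of such a keyword-matching heuristic; the keyword list, the record layout, and the toy inputs are illustrative assumptions, not the exact rules used to build CPsyExam.

```python
# Illustrative sketch of a keyword-based KG/CA separator. The actual keyword
# list used for CPsyExam is not published; the one below is an assumption.
CASE_KEYWORDS = ("求助者", "来访者", "案例", "咨询师", "该患者")
# roughly: "help-seeker", "visitor", "case", "counselor", "this patient"

def is_case_analysis(question: str) -> bool:
    """Heuristically flag case-analysis (CA) questions; everything else is KG."""
    return any(kw in question for kw in CASE_KEYWORDS)

print(is_case_analysis("求助者，女，32岁，因长期失眠前来咨询……"))  # True  (CA)
print(is_case_analysis("注意的基本特征包括哪些？"))                # False (KG)
```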

3.3 CPsyExam

CPsyExam is a multi-task psychology dataset designed to assess both the knowledge and the reasoning abilities of LLMs in Chinese. We use the collected 20k exam questions about psychology and further arrange them into tasks based on subjects and question types.

CPsyExam-KG

We align the taxonomy of CPsyExam-KG questions with the Chinese examination system for psychology. We then select all the psychological subjects in each examination as subcategories; the detailed directory list can be found in Appendix A.

CPsyExam-CA

Following psychological counselling conventions, we divide the CA questions into three categories: Method, Diagnosis and Treatment. The Method category assesses the LLM's ability to apply the appropriate methodology to a specific case. The Diagnosis category focuses on the LLM's ability to diagnose the visitor's condition. The Treatment category evaluates the LLM's ability to propose treatment for patients.

To facilitate supervised fine-tuning as well as few-shot learning, the dataset for each task is further partitioned into train, dev, test and reserved splits. The test split is used for the evaluation of LLMs. The reserved split will not be released and acts as a control set for further evaluation. We sample psychology subjects uniformly under each exam, ensuring that the number of questions is consistent across all four exams; this approach is used to create the test and reserved splits (a simplified sketch of the procedure follows), and the remaining questions are all allocated to the train split. Statistics of the dataset are listed in Table 1. We show three examples from both KG and CA in Figure 2.
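The split construction can be pictured with the simplified sketch below; the per-exam quota, the field names, and the equal test/reserved division are assumptions that only mirror the procedure described above.

```python
# A simplified sketch (assumption) of the split construction: sample questions
# uniformly across the subjects of each exam for the held-out splits, then put
# the remainder into train. Records are assumed to carry "exam" and "subject".
import random
from collections import defaultdict

def make_splits(questions, per_exam=800, seed=0):
    rng = random.Random(seed)
    by_exam = defaultdict(list)
    for q in questions:
        by_exam[q["exam"]].append(q)

    train, test, reserved = [], [], []
    for exam, items in by_exam.items():
        by_subject = defaultdict(list)
        for q in items:
            by_subject[q["subject"]].append(q)
        quota = max(1, per_exam // max(1, len(by_subject)))  # uniform per subject
        held_out = []
        for subj_items in by_subject.values():
            rng.shuffle(subj_items)
            held_out.extend(subj_items[: 2 * quota])          # test + reserved share
            train.extend(subj_items[2 * quota:])              # rest goes to train
        rng.shuffle(held_out)
        half = len(held_out) // 2
        test.extend(held_out[:half])
        reserved.extend(held_out[half:])
    return train, test, reserved
```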

We hold that CPsyExam-KG and CPsyExam-CA are essential and complementary in assessing psychological competency. Together, they serve as a comprehensive evaluation not only of practitioners' competency in psychology but also of the LLM's expertise in the field.

3.4 Comparison with Existing Psychology Benchmarks

In the psychological domain, benchmarks such as PsyBench and PsyEval are under ongoing development. However, in pursuit of balanced knowledge coverage, PsyBench feeds knowledge points into GPT-4 to generate its questions. Although its questions are subsequently corrected by experts, the questions in our dataset were meticulously crafted from scratch by subject-matter experts, resulting in a more professional and authoritative approach to assessing knowledge and designing options than GPT-4-generated questions.

PsyEval, in turn, focuses on the field of mental health and measures LLMs' abilities within that field, whereas CPsyExam covers all subjects related to psychology and therefore offers more comprehensive coverage of topics in the field.

Table 2: Zero-shot and few-shot accuracy (%) of LLMs on CPsyExam MCQA (MC) and MRQA (MR) questions.

                        ---------- Knowledge ----------   -------- Case Analysis --------
                        Zero-shot        Few-shot         Zero-shot        Few-shot
Model           Avg.    MC      MR       MC      MR       MC      MR       MC      MR
ChatGLM2-6B     43.46   49.89   9.86     53.81   14.85    52.50   16.00    48.50   20.00
ChatGLM3-6B     42.23   53.51   5.63     55.75   5.51     47.00   17.00    47.33   13.50
YI-6B           25.81   33.26   0.26     25.39   14.01    38.83   0.00     20.00   13.25
QWEN-14B        30.68   24.99   1.54     38.17   13.19    20.33   2.00     30.00   14.00
YI-34B          27.52   25.03   1.15     33.69   18.18    20.50   0.50     22.33   8.00
MeChat-6B       40.62   50.24   4.10     51.79   11.91    48.67   13.50    44.83   10.50
MindChat-7B     40.39   49.25   6.27     56.92   5.51     40.83   5.00     33.83   4.50
MindChat-1.8B   21.04   26.50   0.00     26.50   0.13     34.17   0.00     34.17   0.00
Ours-SFT-6B     43.70   52.95   10.50    58.77   2.94     46.50   5.50     48.67   13.00
ERNIE-Bot       43.85   52.48   6.66     56.10   10.37    42.50   8.50     50.67   12.00
ChatGPT         51.15   57.43   11.14    61.53   24.71    47.33   9.00     52.67   29.50
ChatGLM         64.58   63.29   26.12    73.85   42.13    69.00   20.50    65.33   42.50
GPT-4           67.43   76.56   10.76    78.63   43.79    60.33   13.00    64.17   39.50

4 Experiments

4.1 Experiment Setup

In this section, we benchmark a series of publicly accessible LLMs using CPsyExam. We choose both open-sourced LLMs with model sizes ranging from 6B to 34B, such as ChatGLM2-6B, YI-6B, QWEN-14B and YI-34B, and API-based LLM services such as ERNIE-Bot-Turbo, ChatGPT, ChatGLM-Turbo and GPT-4. In addition, a series of psychology-oriented models available online are considered for comparison, such as MeChat (https://huggingface.co/qiuhuachuan/MeChat), MindChat (https://github.com/X-D-Lab/MindChat), and SoulChat (https://github.com/scutcyr/SoulChat). Specifically, MeChat is fine-tuned from ChatGLM2-6B. MindChat has released two versions, MindChat-Qwen-7B-v2 and MindChat-Qwen-1_8B. SoulChat has a version based on ChatGLM-6B, but it is not compatible with most evaluation frameworks and is therefore not considered for comparison.

As CPsyExam includes a training set for supervision purposes, we construct an instruction set for Supervised Fine-Tuning (SFT). In this work, we conduct SFT over ChatGLM3-6B, applying specific parameters to optimize the training process. Specifically, the SFT is carried out over 4 epochs with a batch size of 128. The learning rate is set to 1×10⁻⁶ in order to balance learning efficiency and the risk of over-fitting. These parameters were chosen based on preliminary experiments aimed at maximizing the model's performance on the validation sets.
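The reported hyperparameters can be expressed, for example, with the Hugging Face TrainingArguments shown below; this is a sketch of an equivalent configuration, not the authors' actual training script, and the scheduler and precision settings are assumptions.

```python
# A minimal sketch (assumption, not the authors' exact script) of the reported
# SFT hyperparameters expressed as Hugging Face TrainingArguments.
from transformers import TrainingArguments

sft_args = TrainingArguments(
    output_dir="chatglm3-6b-cpsyexam-sft",
    num_train_epochs=4,              # 4 epochs, as reported
    per_device_train_batch_size=8,   # 8 per device x 16 accumulation steps
    gradient_accumulation_steps=16,  #   = effective batch size of 128
    learning_rate=1e-6,              # 1e-6, as reported
    lr_scheduler_type="cosine",      # assumption; the scheduler is not specified
    bf16=True,                       # assumption; precision is not specified
    save_strategy="epoch",
    logging_steps=10,
)
# The ChatGLM3-6B model and the CPsyExam instruction set would then be passed
# to a standard Trainer / SFT trainer; that wiring is omitted here.
print(sft_args.num_train_epochs, sft_args.learning_rate)
```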

4.2 Benchmarking Result

Performance of LLMs on MCQA and MRQA

We conduct both zero-shot and few-shot evaluations for each model discussed above. Given that CPsyExam focuses on how models perform on Knowledge and Case Analysis questions, we report the two parts separately. We further differentiate MCQA and MRQA questions, as different models may have varying abilities to follow instructions. We also design a psychology-oriented prompt for the evaluation, which can be found in Figure 5 in the Appendix; a simplified sketch of the prompting and scoring procedure is shown below. Evaluation results are listed in Table 2. The table has three sections: (1) Open-sourced models. Our findings indicate that (a) increased model size does not necessarily ensure improved performance on CPsyExam, and (b) models that excel in other domains, such as YI-34B in the medical domain, may not necessarily perform optimally on CPsyExam. (2) Psychology-oriented models. Compared to the foundation models, these fine-tuned models show marginal gains or no improvement in understanding psychological knowledge, and their case-analysis abilities may even be compromised in some instances. (3) API-based models. GPT-4 continues to outperform all other API-based models by a significant margin in the Knowledge setting. Conversely, ChatGLM-turbo performs exceptionally well in the Case Analysis setting.
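As a rough illustration of the MCQA/MRQA evaluation protocol, the sketch below formats an item into a prompt and scores the predicted option letters by exact set match; the prompt wording, record fields, toy item, and exact-match criterion are assumptions and do not reproduce the prompt in Figure 5.

```python
# Illustrative sketch of zero-shot MCQA/MRQA prompting and scoring (assumption,
# not the released evaluation code).
def build_prompt(item: dict, multi_response: bool = False) -> str:
    """Format one CPsyExam-style item as an exam-style prompt."""
    instruction = ("以下是一道心理学多项选择题，请给出所有正确选项的字母。"
                   if multi_response else
                   "以下是一道心理学单项选择题，请给出正确选项的字母。")
    lines = [instruction, item["question"]]
    for letter, text in zip("ABCDEF", item["options"]):
        lines.append(f"{letter}. {text}")
    lines.append("答案：")
    return "\n".join(lines)

def exact_match(predicted: str, gold: str) -> bool:
    """Score by exact match on the set of chosen option letters."""
    letters = set("ABCDEF")
    pick = lambda s: {c for c in s.upper() if c in letters}
    return pick(predicted) == pick(gold)

# Toy item for demonstration only (not drawn from CPsyExam).
item = {"question": "下列哪一项属于记忆的基本过程？",
        "options": ["识记", "想象", "注意", "思维"],
        "answer": "A"}
print(build_prompt(item))
print(exact_match("A", item["answer"]))  # True
```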

Performance of API-based LLMs on Question Answering

Besides MCQA and MRQA, CPsyExam includes an extra QA test set to evaluate generation-based questions. We adopt GPT-4 to judge the answers produced by the API-based LLMs used in this work; a minimal sketch of such a judging call is given below. The prompt used for scoring is listed in Figure 6 in the Appendix.
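The sketch below shows one way such a GPT-4 judging call could be issued; the rubric is a placeholder rather than the prompt in Figure 6, and the 0–100 scale is an assumption.

```python
# A minimal sketch (assumption, not the authors' exact scoring setup) of using
# GPT-4 as a judge for a free-form QA answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, reference: str, candidate: str) -> str:
    """Ask GPT-4 to grade a candidate answer against a reference answer."""
    rubric = ("You are grading an answer to a Chinese psychology exam question. "
              "Compare the candidate answer with the reference answer and return "
              "a score from 0 to 100, followed by a one-sentence justification.")
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": rubric},
                  {"role": "user",
                   "content": f"Question: {question}\n"
                              f"Reference answer: {reference}\n"
                              f"Candidate answer: {candidate}"}],
        temperature=0,
    )
    return resp.choices[0].message.content
```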

Table 3: GPT-4-judged scores of API-based LLMs on the QA test set.

Model            Score
ERNIE-Bot        73.55
ChatGLM-turbo    77.79
ChatGPT          72.88
[Figure 3: error analysis of MCQA questions, by examination for CPsyExam-KG and by case-analysis aspect for CPsyExam-CA (images not shown).]
[Figure 4: subject-level performance of ChatGLM-Turbo and GPT-4 (image not shown).]

The results suggest that ChatGLM-turbo has a better understanding of psychological knowledge and can be effectively prompted for psychological purposes.This may partially explain its strong performance on the MCQA and MRQA questions in CPsyExam.

4.3 Analysis from Multiple Perspectives

4.3.1 Analysis on the model aspect

Do few-shot examples help?

When the model size is relatively small, the performance improvement from few-shot examples is in most cases not very significant and may even be negative. However, as the model size increases, the benefits of few-shot learning become much more pronounced. For example, ChatGLM-turbo, which already performs well in the zero-shot setting, roughly doubles its MRQA performance on the CA task with few-shot prompting. This is likely because larger models have greater learning capacity and expressive power: they are able to capture more complex patterns and latent semantic relationships in the data, enabling them to learn and generalize more quickly from a small number of examples.

Performance of psychology-oriented models versus their base models

Based on the experimental findings, the models fine-tuned to enhance psychological capabilities did not outperform their base models, and in some cases even showed a decline in performance. For example, MeChat is fine-tuned from ChatGLM2-6B, but its overall performance on CPsyExam is weaker than that of ChatGLM2-6B. We infer that although MeChat's fine-tuning enhanced its conversational abilities, this may have come at the cost of reduced performance on knowledge reasoning and text comprehension tasks: the model may have over-adapted to the fine-tuning data while neglecting the knowledge it had learned during pre-training.

4.3.2 Analysis on the benchmark aspect

Analysis of MCQA Questions

Due to the persistently low performance on MRQA questions, we focus solely on MCQA questions for error analysis. For CPsyExam-KG, we perform the analysis at the examination level, as depicted in Figure 3a. For CPsyExam-CA, we examine the various aspects of case analysis, as presented in Figure 3b. From both figures, we find that GPT-4 exhibits a stronger grasp of psychological knowledge across all examinations, yet it continues to face challenges with case-analysis questions. The major gap for GPT-4 comes from Diagnosis and Treatment.

Analysis of Performance at Subject Level

For each subject included in CPsyExam, there are at least 32 questions, which exceeds the number of questions typically found in quizzes designed for human participants. We select the two best-performing models on the CPsyExam benchmark and visualize their performance on each subject. Some subjects share a common background and domain, and we merge such subjects before visualization.

The results for ChatGLM-Turbo and GPT-4 are presented in Figure 4. Despite being one of the two best-performing models on our CPsyExam benchmark, ChatGLM-Turbo shows insufficient robustness on certain subjects and is still outperformed by GPT-4 across a variety of subjects. We also observe that specific subjects, such as "Psychology in Advertising", are more challenging than others.

5 Discussions

5.1 Data leakage problem

It is true that a significant number of questions in CPsyExam are derived from publicly accessible information on the web. However, the answers on the selected websites are not directly paired with the questions in a straightforward manner, making it difficult for LLMs to simply recall the correct answers. To obtain the final question-answer pairs, we carried out relatively complex data preprocessing, and our experiments support this approach: even psychology-focused models like MeChat and MindChat, which leveraged Chinese psychology materials, scored only around 50%, while general-domain models like GPT-4 and ChatGLM-turbo, which were less likely to have encountered the questions before, performed better, achieving 67.43% and 64.58% respectively. Additionally, we reserve a portion of the dataset for future validation, which helps mitigate the potential data leakage problem.

5.2 CPsyExam compared with English psychology standards

The EPPP is the Examination for Professional Practice in Psychology in North America. Like CPsyExam, it divides the entire examination into two parts, a knowledge part and a skill part, so we compare the two parts separately.

In the knowledge part

The EPPP primarily assesses eight aspects of psychological knowledge. Corresponding subjects can be found for almost every one of them in the GEE and PCE parts of CPsyExam-KG, as both aim to evaluate candidates' grasp of psychological knowledge. Furthermore, because the EPPP focuses on evaluating psychological counselors, while CPsyExam-KG aims to assess a broader range of LLMs' psychological abilities, CPsyExam contains additional sections that are not present in the EPPP, such as SSE and TQE. These sections include questions involving psychological analysis of phenomena in various industries.

In the skill part

The EPPP primarily assesses six main areas of content. In this respect, there are significant differences between CPsyExam-CA and the EPPP. CPsyExam-CA focuses on whether the test taker can analyze complex case scenarios and provide correct answers, examining the ability to solve problems in psychological contexts. The EPPP, in contrast, places more emphasis on the key skills and professional competence of psychologists. This difference may stem from the different target test takers of the two exams: CPsyExam-CA aims to assess LLMs, while the EPPP targets human test takers. Therefore, although both exams evaluate performance in realistic scenarios, the content of the test questions differs.

6 Conclusion

In conclusion, CPsyExam is a benchmark for psychology that is crafted from human-generated questions and covers a broad range of subjects within the Chinese examination system. It is designed to assess LLMs' proficiency in both psychological knowledge and case analysis. CPsyExam is not only suitable for benchmarking LLMs but also provides a valuable resource for comparing the differences in psychology education across countries.

Limitations

  1. Using GPT-4 to evaluate QA answers may be influenced by its own knowledge. In the future, expert scoring will be introduced and combined with model scoring for the QA section, improving the reliability of the evaluation.

  2. We adopt multiple-choice questions to assess LLMs' proficiency in psychology, but the ability of LLMs to answer questions in this format may affect the overall score, potentially leading to inaccurate results.

Acknowledgements

This work was partially supported by the China Postdoctoral Science Foundation (2023M733654) and the Guangdong Basic and Applied Basic Research Foundation (2023A1515110496).

References

  • Alasmari etal. (2023)Ashwag Alasmari, Luke Kudryashov, Shweta Yadav, Heera Lee, and Dina Demner-Fushman. 2023.CHQ- SocioEmo: Identifying Social and Emotional Support Needs in Consumer-Health Questions.Scientific Data, 10(1):329.
  • Barriere etal. (2023)Valentin Barriere, João Sedoc, Shabnam Tafreshi, and Salvatore Giorgi. 2023.Findings of WASSA 2023 shared task on empathy, emotion and personality detection in conversation and reactions to news articles.In Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, pages 511–525, Toronto, Canada. Association for Computational Linguistics.
  • Barriere etal. (2022)Valentin Barriere, Shabnam Tafreshi, João Sedoc, and Sawsan Alqahtani. 2022.WASSA 2022 shared task: Predicting empathy, emotion and personality in reaction to news stories.In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 214–227, Dublin, Ireland. Association for Computational Linguistics.
  • Bertagnolli (2020)Nicolas Bertagnolli. 2020.Counsel chat: Bootstrapping high-quality therapy data.
  • Buechel etal. (2018)Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Ungar, and João Sedoc. 2018.Modeling empathy and distress in reaction to news stories.In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4758–4765, Brussels, Belgium. Association for Computational Linguistics.
  • Chalkidis etal. (2022)Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Katz, and Nikolaos Aletras. 2022.LexGLUE: A benchmark dataset for legal language understanding in English.In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4310–4330, Dublin, Ireland. Association for Computational Linguistics.
  • Chang etal. (2023)Yupeng Chang, XuWang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, YiChang, PhilipS. Yu, Qiang Yang, and Xing Xie. 2023.A survey on evaluation of large language models.
  • Chen etal. (2022)Wei Chen, Zhiwei Li, Hongyi Fang, Qianyuan Yao, Cheng Zhong, Jianye Hao, QiZhang, Xuanjing Huang, Jiajie Peng, and Zhongyu Wei. 2022.A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks and Datasets.Bioinformatics.Btac817.
  • Contributors (2023)OpenCompass Contributors. 2023.Opencompass: A universal evaluation platform for foundation models.https://github.com/open-compass/opencompass.
  • Devlin etal. (2019)Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.BERT: Pre-training of deep bidirectional transformers for language understanding.In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Gu etal. (2023)Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, and Yanghua Xiao. 2023.Xiezhi: An ever-updating benchmark for holistic domain knowledge evaluation.
  • Guha etal. (2023)Neel Guha, Julian Nyarko, DanielE. Ho, Christopher Ré, Adam Chilton, and AdityaNarayana etal. 2023.Legalbench: A collaboratively built benchmark for measuring legal reasoning in large language models.
  • He etal. (2019)Junqing He, Mingming Fu, and Manshu Tu. 2019.Applying deep matching networks to chinese medical question answering: A study and a dataset.BMC Medical Informatics and Decision Making, 19(2):52.
  • Hendrycks etal. (2021)Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021.Measuring massive multitask language understanding.In International Conference on Learning Representations.
  • Huang etal. (2023)Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023.C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models.In Advances in Neural Information Processing Systems.
  • Islam etal. (2023)Pranab Islam, Anand Kannappan, Douwe Kiela, Rebecca Qian, Nino Scherrer, and Bertie Vidgen. 2023.Financebench: A new benchmark for financial question answering.
  • Jin etal. (2020)DiJin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020.What disease does this patient have? a large-scale open domain question answering dataset from medical exams.arXiv preprint arXiv:2009.13081.
  • Jin etal. (2023)Haoan Jin, Siyuan Chen, Mengyue Wu, and KeZhu. 2023.Psyeval: A comprehensive large language model evaluation benchmark for mental health.ArXiv, abs/2311.09189.
  • Lai etal. (2023)Tin Lai, Yukun Shi, Zicong Du, Jiajie Wu, Ken Fu, Yichao Dou, and Ziqi Wang. 2023.Psy-llm: Scaling up global mental health psychological services with ai-based large language models.
  • Li etal. (2020)Dongfang Li, Baotian Hu, Qingcai Chen, Weihua Peng, and Anqi Wang. 2020.Towards medical machine reading comprehension with structural knowledge and plain text.In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1427–1438, Online. Association for Computational Linguistics.
  • Li etal. (2023)Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023.Cmmlu: Measuring massive multitask language understanding in chinese.
  • Liang etal. (2023)Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, and DilaraSoylu etal. 2023.Holistic evaluation of language models.Transactions on Machine Learning Research.Featured Certification, Expert Certification.
  • Lison etal. (2018)Pierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018.OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora.In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
  • Liu etal. (2023)Chuang Liu, Renren Jin, Yuqi Ren, Linhao Yu, Tianyu Dong, Xiaohan Peng, Shuting Zhang, Jianxiang Peng, Peiyi Zhang, Qingqing Lyu, Xiaowen Su, Qun Liu, and Deyi Xiong. 2023.M3ke: A massive multi-level multi-subject knowledge evaluation benchmark for chinese large language models.
  • Liu etal. (2021)Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, YuLi, Zhou Yu, Yong Jiang, and Minlie Huang. 2021.Towards emotional support dialog systems.In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469–3483, Online. Association for Computational Linguistics.
  • Min etal. (2022)DoJune Min, Verónica Pérez-Rosas, Kenneth Resnicow, and Rada Mihalcea. 2022.PAIR: Prompt-aware margIn ranking for counselor reflection scoring in motivational interviewing.In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 148–158, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
  • Omitaomu etal. (2022)Damilola Omitaomu, Shabnam Tafreshi, Tingting Liu, Sven Buechel, Chris Callison-Burch, Johannes Eichstaedt, Lyle Ungar, and João Sedoc. 2022.Empathic conversations: A multi-level dataset of contextualized conversations.
  • Pal etal. (2022)Ankit Pal, LogeshKumar Umapathi, and Malaikannan Sankarasubbu. 2022.Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering.In Proceedings of the Conference on Health, Inference, and Learning, volume 174 of Proceedings of Machine Learning Research, pages 248–260. PMLR.
  • Qiu etal. (2023)Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, and Zhenzhong Lan. 2023.Smile: Single-turn to multi-turn inclusive language expansion via chatgpt for mental health support.
  • Radford etal. (2018)Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018.Improving language understanding by generative pre-training.
  • Radford etal. (2019)Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019.Language models are unsupervised multitask learners.
  • Rashkin etal. (2019)Hannah Rashkin, EricMichael Smith, Margaret Li, and Y-Lan Boureau. 2019.Towards empathetic open-domain conversation models: A new benchmark and dataset.In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics.
  • Sharma etal. (2020)Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020.A computational approach to understanding empathy expressed in text-based mental health support.In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263–5276, Online. Association for Computational Linguistics.
  • Srivastava etal. (2023)Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu AwalMd Shoeb, and AbubakarAbid etal. 2023.Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.Transactions on Machine Learning Research.
  • Sun etal. (2021)Hao Sun, Zhenru Lin, Chujie Zheng, Siyang Liu, and Minlie Huang. 2021.PsyQA: A Chinese dataset for generating long counseling text for mental health support.In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1489–1503, Online. Association for Computational Linguistics.
  • Tafreshi etal. (2021)Shabnam Tafreshi, Orphee DeClercq, Valentin Barriere, Sven Buechel, João Sedoc, and Alexandra Balahur. 2021.WASSA 2021 shared task: Predicting empathy and emotion in reaction to news stories.In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 92–104, Online. Association for Computational Linguistics.
  • Wang etal. (2019a)Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and SamuelR. Bowman. 2019a.SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. Curran Associates Inc., Red Hook, NY, USA.
  • Wang etal. (2019b)Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and SamuelR. Bowman. 2019b.GLUE: A multi-task benchmark and analysis platform for natural language understanding.In International Conference on Learning Representations.
  • Wang etal. (2023)Xidong Wang, GuimingHardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, etal. 2023.Cmb: A comprehensive medical benchmark in chinese.arXiv preprint arXiv:2308.08833.
  • Welivita etal. (2021)Anuradha Welivita, Yubo Xie, and Pearl Pu. 2021.A large-scale dataset for empathetic response generation.In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1251–1264, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Xie etal. (2023)Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng, Alejandro Lopez-Lira, and Jimin Huang. 2023.Pixiu: A large language model, instruction data and evaluation benchmark for finance.
  • Xu etal. (2020)Liang Xu, Hai Hu, Xuanwei Zhang, LuLi, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, BoShi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, HeZhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020.CLUE: A Chinese language understanding evaluation benchmark.In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762–4772, Barcelona, Spain (Online). International Committee on Computational Linguistics.
  • Zeng (2023)Hui Zeng. 2023.Measuring massive multitask chinese understanding.
  • Zhang etal. (2023a)Junlei Zhang, Hongliang He, Nirui Song, Shuyuan He, Shuai Zhang, Huachuan Qiu, Anqi Li, Lizhi Ma, and Zhenzhong Lan. 2023a.Psybench: a balanced and in-depth psychological chinese evaluation benchmark for foundation models.
  • Zhang etal. (2023b)Liwen Zhang, Weige Cai, Zhaowei Liu, Zhi Yang, Wei Dai, Yujie Liao, Qianru Qin, Yifei Li, Xingyu Liu, Zhiqiang Liu, Zhoufan Zhu, Anbo Wu, Xin Guo, and Yun Chen. 2023b.Fineval: A chinese financial domain knowledge evaluation benchmark for large language models.
  • Zheng etal. (2023)Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng Zhang, and Minlie Huang. 2023.AugESC: Dialogue augmentation with large language models for emotional support conversation.In Findings of the Association for Computational Linguistics: ACL 2023, pages 1552–1568, Toronto, Canada. Association for Computational Linguistics.
  • Zhong etal. (2023)Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023.Agieval: A human-centric benchmark for evaluating foundation models.

Appendix A Subjects in Psychology Examinations

Subject                                                   Number   Examination
Psychology for Primary School Teachers                    2,215    TQE
Psychology for Middle School Teachers                     3,970    TQE
Psychology for Higher Education Teachers                  1,602    TQE
First-Tier Psychological Counselors                       785      PCE
Second-Tier Psychological Counselors                      1,698    PCE
Third-Tier Psychological Counselors                       2,107    PCE
General Psychology                                        1,606    GEE
Developmental Psychology                                  864      GEE
Social Psychology                                         206      GEE
Personality Psychology                                    188      GEE
Psychological Statistics and Measurement                  950      GEE
Experimental Psychology                                   781      GEE
Management Psychology                                     210      GEE
Abnormal Psychology                                       217      GEE
Educational Psychology                                    528      GEE
Clinical and Counselling Psychology                       205      GEE
Physiological Psychology in Education                     103      SSE
Education Psychology in Education                         108      SSE
Experimental Psychology in Education                      108      SSE
Developmental Psychology in Education                     107      SSE
Developmental and Educational Psychology in Education     71       SSE
Medical Psychology in Medicine                            117      SSE
Psychology of preschool education in Medicine             174      SSE
School Psychology in Medicine                             95       SSE
The Psychology of Human Relationships in Medicine         135      SSE
Mental Health in Medicine                                 108      SSE
Mental Health and Counselling in Medicine                 229      SSE
Public Relations Psychology in Medicine                   154      SSE
Cognitive Psychology in Medicine                          108      SSE
Psychology in Medicine                                    108      SSE
Introduction to Psychology in Medicine                    103      SSE
Psychological counselling and guidance in Medicine        131      SSE
Psychology of Advertising in Literature                   107      SSE
Psychology of Journalism in Literature                    109      SSE
Social Psychology in Management                           103      SSE
Managerial Psychology in Management                       122      SSE
Tourism Psychology in Engineering                         108      SSE
Consumer psychology in Economy                            108      SSE
Psychological foundations of agricultural extension       108      SSE

Appendix B Prompts Used for Evaluation

[Figure 5: the psychology-oriented prompt used for MCQA/MRQA evaluation (image not shown).]
[Figure 6: the prompt used for GPT-4 scoring of QA answers (image not shown).]