Process for the development of the certificate
The establishment and delivery of the program was a multistage process designed to conform to the Division of Healthcare Professionals and Accreditation Council for Continuing Medical Education's (ACCME) standards [22, 23]. The process began with a gap analysis informed by a review of the literature and the personal experiences of the course director, an experienced senior statistician actively engaged in advising HCPs on statistical analysis and study design. The next phase advanced with the collaboration of the CPD division at Weill Cornell Medicine-Qatar. This engagement was essential to ensure that the course development adhered to accreditation requirements and that measurable learning outcomes were identified.
To align with accreditation standards, a scientifically diverse advisory committee was convened to deliberate on key aspects such as course content, software selection, and the practical applicability of the curriculum within their respective fields. Through this collaboration, a three-day CPD course in biostatistics was designed, comprising introductory, intermediate, and advanced workshops. Conducted over three consecutive weekends, the workshops were designed to promote research output by providing HCPs with foundational skills for organizing and managing data using IBM-SPSS software, along with comprehensive knowledge and practical expertise in the analysis and interpretation of biostatistical data. Optional written assignments, to be completed after each workshop, were designed to foster active learning and reinforce the skills acquired during the sessions. A Certificate in the Analysis of Medical Data was awarded to participants who successfully completed all three assignments. The inclusion of a certificate was informed by recurring feedback from participants across various CPD programs at our institution, where certification was frequently identified as a desirable feature. It was therefore hoped that offering a certificate would enhance participant motivation and engagement in the program.
The three sets of workshops were delivered face-to-face to two cohorts, the first in 2019 and the second in 2020, and were open to all HCPs.
The workshops were designed to provide comprehensive training in biostatistical methods, as outlined in Table 1, which lists the specific statistical tests and objectives covered.
While the intermediate and advanced workshops each addressed multiple statistical topics, the sessions were intended as introductory overviews rather than in-depth technical training. Each topic—for example, linear regression, logistic regression, and confounding—was covered at a conceptual level, with an emphasis on practical understanding and relevance to clinical research. The workshops incorporated real-world examples and software demonstrations to illustrate core principles, focusing on interpretation of results and common applications. Given the constraints of a one-day format, the content was carefully selected to provide a broad yet accessible foundation, suitable for participants with varying levels of prior experience. Supplementary materials, including all presentation slides and datasets for hands-on practice, were provided to support continued learning beyond the workshop. Participants who opted to pursue a certificate submitted post-workshop assignments and received detailed, individualized feedback. In cases where errors were identified, participants were encouraged to revise and resubmit their work. This iterative feedback process reinforced key learning objectives, particularly in selecting appropriate statistical methods, executing analyses using software, and interpreting results accurately.
Each workshop level—introductory, intermediate, and advanced—was structured to progressively build participants’ skills and understanding of biostatistics. Participants had the flexibility to register for individual workshops based on their specific needs and interests, rather than being required to attend all three sessions.
Participants engaged in hands-on experience with each statistical test listed in Table 1. The course tutor provided case studies and mock datasets to participants prior to the sessions. These materials were carefully crafted to simulate real-world research scenarios, allowing participants to apply the biostatistical methods in a practical context.
During the workshops, participants used IBM-SPSS software to practice each biostatistical method. This included performing data analysis, interpreting results, and understanding the application of statistical tests in healthcare research. The hands-on activities were integral to reinforcing theoretical knowledge and enhancing participants’ competence in using statistical software for data analysis.
By integrating case studies and practical exercises, the workshops aimed to equip participants with the skills necessary to conduct robust statistical analyses and apply these techniques in their professional practice.
Approximately one week after the completion of each workshop, participants received the optional assignment. Assignments were completed using IBM-SPSS software and included written questions evaluating interpretation and understanding. Coursework was assessed by the course director, and participants who successfully completed all three assignments were awarded the certificate. The course was accredited by the ACCME.
Design and data collection
A cross-sectional descriptive analysis utilized anonymous routine data collected during course administration and program evaluation. This involved a post-activity survey distributed upon completion of all accredited CPD activities. To address longitudinal objectives, including enhanced research output, an additional survey was administered 12 months following the final workshop. This extended timeframe aimed to ensure a thorough evaluation of the program's impact on research productivity and provided participants with sufficient time for reflection on encountered barriers and challenges. The survey items were developed in line with ACCME accreditation standards, which are underpinned by Kirkpatrick's Four-Level Training Evaluation Model [24]. Accordingly, the surveys assessed participant satisfaction with the workshop (Level 1), self-reported knowledge and competence (Level 2), and perceived application of statistical skills in research and practice (Level 3). Additionally, operational data from the CPD division included attendee numbers and the number of participant certificates issued. Participation in the evaluation survey was voluntary and anonymous, and by completing the survey, participants provided implicit consent. Due to the anonymous nature of the post-activity surveys, individual progression through the workshops could not be tracked.
Post-activity evaluation
One week following each workshop, participants received an email containing a link to the post-activity evaluation for accredited CE/CPD activities. The survey was designed to assess whether course objectives were met and is a standard survey sent to all participants completing CPD courses at WCM-Q (available in the supplementary file). The evaluation distinguished between multiple domains of impact, including perceived knowledge acquisition, competence, and performance, through separate survey items. These domains align with standard CPD outcome levels and were analysed individually. The survey consisted of single-item questions and was not designed for psychometric validation; therefore, internal consistency measures such as Cronbach's alpha were not applicable. Completion was incentivized by offering a certificate of completion for each workshop.
The survey covered aspects such as registration, marketing, venue, speakers, and disclosure of commercial bias. This paper focuses on responses regarding the course's perceived impact and achievement of objectives, rated on a 5-point Likert scale from strongly disagree to strongly agree. Questions addressed: 1) new knowledge acquisition, 2) impact on competence, 3) impact on performance, and 4) potential effect on patient outcomes. Additionally, participants rated the workshop format on a 3-point Likert scale (yes, somewhat, no), with an open-ended question for further comments.
Program evaluation
In February 2021, an anonymous survey was emailed to all attendees of the 2019 and 2020 workshops using the Qualtrics survey distribution tool. This survey was designed to assess the specific objectives of the course and consisted of single-item Likert-scale and open-ended questions aligned with the course’s educational objectives (available in the supplementary file). As the survey was designed for descriptive evaluation rather than psychometric testing, internal reliability analysis (e.g., Cronbach’s alpha) was not conducted.
The survey aimed to evaluate the impact of the workshops in supporting participants to draft and publish a manuscript, as well as their ability to interpret data and perform statistical analyses (Fig. 2). Responses to 7 items (Fig. 2) were rated on a 5-point Likert scale. The survey also asked participants to confirm their certificate completion status, specify the sessions they attended, and provide reasons for not completing the program. Three open-ended questions were included to gauge potential difficulties participants encountered in completing the ‘Certificate in the Analysis of Medical Data’, barriers to achieving the long-term outcomes, and any further comments.
Each participant completed the follow-up survey only once, regardless of how many workshops they attended. As such, Fig. 2 reflects unique individual responses rather than pooled data from multiple sessions.
The 2019 and 2020 cohorts were selected as they represent the first two fully completed cycles of the CPD biostatistics course. Evaluating these cohorts allowed for a structured assessment of the program’s immediate and long-term impact. Furthermore, the 2021 follow-up survey provided a sufficient post-course timeframe to assess changes in research productivity and application of biostatistical skills in practice.
A flowchart summarizing data collection and assessment is available in Supplementary Fig. 1.
Analysis
Data from the surveys were summarized as frequencies and percentages using IBM-SPSS software (version 20; IBM Corp., Armonk, NY, USA). Due to the anonymous nature of the two surveys, it was not possible to cross-link responses between them. Open-ended questions were categorised thematically and presented as frequencies of occurrence.
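All analyses were performed in SPSS; purely for illustration, the same frequency-and-percentage tabulation of a single Likert item can be sketched in Python. The response values below are hypothetical placeholders, not study data:

```python
from collections import Counter

# Hypothetical responses to one 5-point Likert item
# (1 = strongly disagree ... 5 = strongly agree)
responses = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]

counts = Counter(responses)
total = len(responses)

# Tabulate each response level as n (%)
for level in range(1, 6):
    n = counts.get(level, 0)
    print(f"Level {level}: n={n} ({100 * n / total:.1f}%)")
```

This mirrors the descriptive output of an SPSS frequencies procedure, reporting each response level as a count with its percentage of all responses.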
Due to the descriptive nature of this study, no formal sample size calculation was performed. All eligible participants from the 2019 and 2020 cohorts were invited to participate in the evaluation surveys. Response rates were calculated by dividing the number of completed surveys by the total number of invited participants and expressing this as a percentage.
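The response-rate arithmetic described above can be sketched as follows; the counts shown are illustrative placeholders, not the study's actual figures:

```python
def response_rate(completed: int, invited: int) -> float:
    """Response rate: completed surveys as a percentage of invited participants."""
    return 100.0 * completed / invited

# Illustrative example: 45 completed surveys out of 90 invitations
rate = response_rate(45, 90)
print(f"Response rate: {rate:.1f}%")  # Response rate: 50.0%
```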
The project received ethical exemption from the local Institutional Review Board (IRB reference number: 24-00022), as it involved routine program evaluation in an educational setting and did not involve collection of identifiable data.