
Confirmatory Factor Analysis of the Public Health Associate Program Service-Learning Scale: A Validation Study


Abstract

Service-learning programs play an important role in the recruitment and development of the public health workforce. Such programs serve as necessary pathways for trainees to enter public health and related fields (McClamroch & Montgomery, 2009; Horney et al., 2014; Yeager, Beitsch, & Hasbrouch, 2016; Leider, Resnick, & Erwin, 2022; Leider et al., 2023), providing participants with hands-on career experience and giving organizations access to a pool of early-career applicants (Furco, 1996; Cashman & Seifer, 2008; Thacker et al., 2008; Meritt & Murphy, 2019; Markaki et al., 2021). Service-learning participants offer valuable insight into program quality and effectiveness, and gathering this input through surveys is among the most widely used approaches to evaluating training and professional development programs (Gelmon et al., 2001; Brown, 2005; Kirkpatrick & Kirkpatrick, 2006). Although certain scales designed to evaluate different components of service-learning have been examined previously (e.g., Eyler et al., 1997; Shiarella et al., 2000; Moely et al., 2002; Snell & Lau, 2020; Lee et al., 2021), the overall body of evidence derived from psychometric evaluation is limited (Gelmon et al., 2001; Toncar et al., 2006; Ma et al., 2019; Snell & Lau, 2020). This is particularly true for service-learning programs in public health and related fields and for programs sponsored by non-academic institutions.

The Public Health Associate Program (PHAP) Service-Learning Scale (PSLS) (Appendix) was first developed in 2016. It was designed to evaluate participant experience and satisfaction with PHAP, a service-learning fellowship program managed by the Centers for Disease Control and Prevention (CDC). Using an exploratory factor analysis (EFA), the initial pilot assessment of the PSLS provided evidence of validity and reliability and identified an underlying factor structure for the scale (Colman et al., 2018). For the pilot study, EFA was the more appropriate methodology because the scale was still in development and hypothesized factors had not yet been generated (Kelloway, 1995). As explained by Hurley et al. (1997), psychometric research on a particular scale can proceed in phases, beginning with an EFA study and followed by a CFA study to determine which findings can be confirmed. The purpose of the current study is to reexamine and confirm the previously reported factor structure of the subscales and to provide evidence of the scale's validity using a confirmatory factor analysis (CFA). Although the samples for these psychometric evaluations have been limited to PHAP participants, a validated instrument would have utility for a wide range of service-learning programs.
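To illustrate the general analytic approach, the sketch below shows how a hypothesized factor structure can be specified and tested with a CFA. It assumes the Python semopy package and a tidy item-level dataset; the factor and item names (e.g., Satisfaction, item1) and the file psls_responses.csv are hypothetical placeholders, not the actual PSLS subscales or items.

```python
# Minimal CFA sketch using the semopy package (assumed available).
# Factor and item names below are hypothetical placeholders, not the
# actual PSLS subscales or survey items.
import pandas as pd
import semopy

# Hypothesized two-factor measurement model in lavaan-style syntax:
# each latent factor is measured by a set of observed survey items.
model_desc = """
Satisfaction =~ item1 + item2 + item3
SkillDevelopment =~ item4 + item5 + item6
"""

# Item-level responses: one row per respondent, one column per item.
data = pd.read_csv("psls_responses.csv")

model = semopy.Model(model_desc)
model.fit(data)

# Parameter estimates, including factor loadings.
print(model.inspect())

# Global fit indices (e.g., chi-square, CFI, TLI, RMSEA) used to judge
# whether the hypothesized structure is consistent with the data.
print(semopy.calc_stats(model))
```

In practice, adequate values on such fit indices, together with interpretable and statistically significant loadings, would support confirmation of a factor structure identified in an earlier EFA.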


Published on
2024-12-31

Peer Reviewed