Research Articles

A Content Analysis of the Emergent Scholarship on Digital and Open Badges in Higher Education Learning and Assessment


Abstract

Higher education has seen its fair share of innovations, many of which were made possible by digital technologies. Digital and open badges (DOBs) are emergent technologies that many believe could further reform, even disrupt, key tenets of higher education, including learning, assessment, and credentialing. This study examined the emerging scholarship and best practices on DOB adoption in higher education through the lens of peer-reviewed publications. A content analysis explored two questions related to publication patterns and research goals (question 1) and assessment practices supported by DOBs and stakeholder perceptions of DOBs (question 2). Findings revealed that non-empirical papers were more likely to focus on the reform-related potential of DOBs, while empirical applications focused on traditional concerns, such as student motivation and engagement, and conventional approaches to assessment. Stakeholder perceptions of the value and role of DOBs were also mixed. Limitations, implications, and further study are discussed.

Keywords: Content Analysis, Digital Badge, Open Badge, Higher Education Assessment, Student Learning

How to Cite: Haughton, N. A., & Singh, P. R. (2019). "A Content Analysis of the Emergent Scholarship on Digital and Open Badges in Higher Education Learning and Assessment". Issues and Trends in Learning Technologies, 7(2). doi: https://doi.org/10.2458/azu_itet_v7i2_haughton

A Content Analysis of the Emergent Scholarship on Digital and Open Badges in Higher Education

Noela A. Haughton
University of Toledo

Priti R Singh
University of Toledo

Higher education is subject to ongoing demands for accountability and reform in terms of cost and student success (Morris, 2016), with some American states also demanding shorter times to degree and alternate pathways that recognize prior learning in other settings (Ohio Department of Education, n.d.). Education reforms refer to policies and practices that promote academic quality, including learning and assessment, and student success in terms of ease of access, achievement, retention, and completion (Klein-Collins, 2012; Klein-Collins & Wertheimer, 2013). Some information and communications technologies (ICTs), such as asynchronous web-based distance education, open-source materials, flipped classrooms, and massive open online courses (MOOCs), have been credited with several reforms that have changed, even disrupted (Christensen, Horn, & Johnson, 2008), the access to and delivery of higher education (Leon & Price, 2016; Morris, 2016). It is also important to remember that ICTs have not always delivered on their promises, including those related to technology integration, teaching practices, and student learning (Burns, 2013; Clark, 1991; Cuban, 1996, 2001; Cuban & Jandrić, 2015; Whitworth, 2012). Thus, the debate about the benefits and drawbacks of ICTs continues (Burns, 2013). Within the context of this debate is the belief that one newer technology – the digital badge – could motivate learners and prompt further reform, even disrupting some key tenets of learning and assessment in higher education (Casilli & Hickey, 2016; Gibson, Ostashewski, Flintoff, Grant, & Knight, 2015).

A digital badge is a web-enabled token (O'Byrne, Schenke, Willis, & Hickey, 2015) that represents some form of learning (Buckingham, 2014) and may be awarded by institutions, organizations, groups, and individuals. The badges themselves are visual representations of knowledge, experience, or achievement of some kind (Carey, 2012; Frederiksen, 2013; Mozilla Foundation, 2016) that can be linked and shared in multiple spaces, including web pages, blogs, and various social networking sites such as LinkedIn and Facebook (Buckingham, 2014; Stone, 2015). The ability to represent skills and achievement "at a more fine-grained level" (Bowen & Thomas, 2014, p. 21) enables colleges and universities to document learning outcomes in a new way.

Open badges are digital badges that follow the Mozilla Open Badges Infrastructure (OBI), a non-proprietary standard that allows organizations, including colleges and universities, to create and issue badges that are sharable across a variety of platforms (Bowen & Thomas, 2014). The metadata embedded within the open badge's image include the issuer, the criteria for earning the badge, and the supporting evidence submitted and assessed (Jovanovic & Devedzic, 2015; O'Byrne, Schenke, Hickey, Willis, & Quick, 2015; Mozilla Foundation, 2016). The OBI also enables badge earners to accumulate badges from multiple issuers in backpacks (Mozilla Foundation, 2016), privately stored collections of open badges, which can then be parlayed as knowledge artifacts in multiple settings to multiple stakeholders, including future employers (Berge & Muilenburg, 2016; Mozilla Foundation, 2016). As of October 2016, close to a million badges were connected to millions of issuers and learners around the world (Mozilla Foundation, 2016). These numbers include higher education applications to recognize and communicate student learning and other competences (Derryberry, Everhart, & Knight, 2016).
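To make the embedded metadata concrete, the sketch below models the kind of assertion an issuer might attach to a badge image. It is a minimal illustration in Python, assuming only the general shape of an Open Badges assertion; the field names, URLs, and values shown are hypothetical, not a verbatim excerpt of the OBI specification.

```python
# A minimal sketch of open badge metadata, assuming the general shape of an
# Open Badges assertion; all names, URLs, and values are illustrative.
badge_assertion = {
    "recipient": "learner@example.edu",          # hypothetical badge earner
    "badge": {
        "name": "Peer Review Fundamentals",      # hypothetical badge name
        "issuer": "Example State University",    # issuing organization
        "criteria": "https://badges.example.edu/peer-review/criteria",
        "description": "Completed three assessed peer reviews.",
    },
    # Link to the evidence that was submitted and assessed.
    "evidence": "https://portfolios.example.edu/learner/peer-reviews",
    "issuedOn": "2016-10-01",
    # Verification data lets third parties (e.g., employers) confirm the award.
    "verify": {"type": "hosted",
               "url": "https://badges.example.edu/assertions/12345.json"},
}
```

Because this metadata travels with the badge image, any stakeholder viewing a shared badge can, in principle, trace it back to the issuer, the award criteria, and the assessed evidence.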

To date, there have been multiple initiatives to synthesize and describe the emergent scholarship and best practices on digital and open badges (DOBs). An annotated bibliography by Grant and Shawgo (2013) included over 200 conference proceedings, scholarly journal articles, and book chapters. Otto and Hickey (2014) derived 37 design principles from an analysis of 30 educational programs. Liyanagunawardena, Scalzavara, and Williams (2017) examined peer-reviewed literature on open badges spanning the years 2011 to 2015. There are also at least two edited volumes that discuss a variety of digital and open badge-related applications and issues in multiple settings. Within this context is the need to further describe DOB adoption in higher education while being mindful of prior failures of technology to live up to expectations (Burns, 2013; Clark, 1991; Cuban, 1996, 2001; Cuban & Jandrić, 2015; Whitworth, 2012). The present study's purpose was to extend these prior initiatives by conducting a content analysis of peer-reviewed journal articles to further describe how DOBs are being conceptualized and implemented in higher education learning settings. Peer review is critical to developing a collective knowledge base and building a scientific community (Schaffner, 1994). This synthesis examines peer-reviewed articles published in English-language journals between 2012 and 2018. Conference papers, conference proceedings, books, and book chapters were excluded to minimize duplication.

Conceptual Framework

Redefining Learning

Learning results in change, which is inferred from various forms of evidence (Barron et al., 2015). Bloom's three domains – cognitive, affective, and psychomotor (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) – and related taxonomies of evidence (Anderson & Krathwohl, 2001) are widely recognized and commonly understood learning frameworks. A more recent typology describes learning as incidental, continuous, and context-independent, and classifies it as formal, non-formal, or informal (Singh & Duvekot, 2013). Formal learning is deliberately sought by learners, is structured, occurs in traditional classroom settings such as those in colleges and universities, and leads to credentials such as degrees, diplomas, and certificates. Non-formal learning is also deliberately sought by learners, occurs in workplace settings, and may be structured or semi-structured; it does not always lead to credentials and may complement formal learning. Informal learning recognizes that learning also occurs as a natural part of development and results from experiences pursued for pleasure and personal enrichment, not for credentials.

The formal, non-formal, and informal typology supports the view of learning being "lifelong and life-wide" (UNESCO, 2009, p. 1), thus reflecting the need for flexible assessment practices and procedures that decouple learning from its setting. The recognition of prior learning (RPL), for example, facilitates alternate pathways to formal higher education by using established higher education qualification frameworks to recognize, validate, and accredit prior learning regardless of setting (Singh & Duvekot, 2013). DOB technologies are considered to be uniquely positioned to support these and other flexible assessment practices (Finkelstein, Knight, & Manning, 2013; Gibson et al., 2015; Peck, Bowen, Rimland, & Oberdick, 2016), potentially reforming higher education assessment practices.

Assessing Learning

Assessment as a practice involves the systematic collection, review, and use of quantitative and qualitative information to improve student learning and support various policy and program decisions (Palomba & Banta, 2001), and is influenced by a number of factors, such as the type of learning outcomes (Nitko & Brookhart, 2011). Cognitive and teachable outcomes, which examine gains and achievement in academic subject matter (hard skills), are usually assessed by large-scale or classroom-based direct assessments within a formal learning setting. Traditional sources of evidence are tests, quizzes, and papers. Direct assessments are often complemented with indirect assessments via self-report inventories that assess non-cognitive outcomes, such as beliefs, feelings, and perspectives about important personal attributes (soft skills), individual differences, and the learning experience itself. Direct evidence is visible and self-explanatory, while indirect evidence is less clear and can be less convincing (Suskie, 2009). Performance-oriented outcomes require demonstrations of mastery through the completion of an authentic task that typically integrates both hard skills and soft skills (Rothwell & Gaber, 2010; Voorhees, 2001) and are assessed by performance (or authentic or alternative) assessment (Suskie, 2009).

The purposes, uses, and interpretation of assessment results are also key factors. Formative uses guide and monitor learning while summative uses evaluate the achievement of learning outcomes (Nitko & Brookhart, 2011). Interpretation of assessment evidence can be relative to peers' performance (norm-based) or relative to a specific standard (criterion-based) (Suskie, 2009). Even though assessment decisions are typically made by the instructor, peer evaluations may also be used to support assessment planning and decisions. Additionally, valid decisions and inferences about learning should be based on evidence that meets basic criteria of reliability and fairness (Nitko & Brookhart, 2011; Suskie, 2009).

The classification of learning as formal, non-formal, and informal has implications for higher education assessment practice, which largely focuses on knowledge gained in formal settings. Higher education disciplines could also reconsider how critical soft skills are captured and assessed throughout the formal and informal curriculum, especially in performance-based experiences such as internships. Being able to successfully accredit knowledge, especially when supported by disciplinary, competency-based qualification frameworks that align with existing degree outcomes, could support reforms such as increased access, personalized learning, and shorter time to degree (Klein-Collins, 2012; Klein-Collins & Wertheimer, 2013). These ideas are also intertwined with many of the promises of DOBs.

Digital Badge Promises and Applications

The literature on DOBs offers several promises in the areas of credentialing, recognizing learning, motivation, competence-based assessment, soft skill assessment, and professional development (Casilli & Hickey, 2016; Fanfarelli & McDaniel, 2015; Grant, 2016; Jovanovic & Devedzic, 2015). The DOB as a micro-credential documents learning outcomes and experiences (Gamrat, Toomey, Zimmerman, Dudek, & Peck, 2014), including various competences in non-formal and informal settings (Casilli & Hickey, 2016; Derryberry et al., 2016; MacArthur Foundation, 2015; Peck et al., 2016). Hence, DOBs could potentially serve as an alternative credentialing system and/or as an extension of current higher education credentialing systems (Casilli & Hickey, 2016; Grant, 2014, 2016; Peck et al., 2016). Thus, DOBs, along with prior learning assessment, could increase access to higher education through flexible pathways.

The level of DOB adoption is reflected in the number of DOBs issued (Grant, 2014; Gibson et al., 2015; Mozilla Foundation, 2016). Thus, a description of how higher education is conceptualizing and actually implementing DOBs continues to be timely. Of interest is whether these published accounts reflect the promises of reforms in terms of: changing how learning is captured, represented, and shared; supporting prior learning assessment and competency-based assessment; creating alternate pathways to degrees and certifications; capturing often-overlooked important soft skills; increasing student motivation; and making inroads into how colleges and universities represent and credential learning.

Research Questions

The broad research question guiding this review involves how DOBs are being conceptualized and implemented in higher education. Two sub-questions further facilitate a description of the research in terms of publication patterns, conceptualization based on the promises of DOBs, and the actual implementations of DOBs in learning and assessment contexts.

Research Question 1 (RQ1): What are the publication patterns of digital and open badges in higher education settings? These results are presented within two sub-questions. RQ1A: What are the publication characteristics? This provides a description of the published accounts, including study type (primary/empirical vs. non-empirical), publication rates, journal focus (reflective of discipline interest in DOBs), and country of origin. RQ1B: How do the research goals of empirical and non-empirical publications differ? Of interest is whether both empirical and non-empirical publications are focused on reform-related ideas and the duality of the promises of technology versus its implementation. These results also set the context for describing the learning and assessment settings within which DOBs were implemented.

Research Question 2 (RQ2): What are the assessment practices and results of the digital and open badge implementations? These results address two sub-questions. RQ2A: What assessment and pedagogical practices are supported with DOBs? Of special interest is identifying reform-related practices that are associated with the promises of DOBs, namely recognizing non-formal and informal learning, supporting alternate pathways to degrees, and implementing alternate forms of credentials. RQ2B: What is the perceived value of DOBs? This describes the perceived value of DOBs primarily through the eyes of DOB recipients.

Content Analysis Methods and Procedures

Content analysis provided a suitable research method for examining the selected sample of publications in a standardized form according to a conceptual framework (Babbie, 2010). The six-step process is described in Figure 1. Step 1 established the unit of analysis: peer-reviewed journal publications related to DOBs in higher education. Step 2 was the development of the initial a priori codebook. Selection of studies (Step 3), coding (Step 4), data preparation (Step 5), and data analysis and reporting (Step 6) were the last four iterative steps. Codebook refinements and additions were made throughout the process until the sample was finalized. A concurrent mixed methods approach (Creswell, 2012) was used to analyze the final dataset and report findings.

Selection of Studies

Figure 1, which is informed by Liyanagunawardena, Scalzavara, and Williams (2017), also details the sampling procedures (Step 3). The EBSCO databases and Google Scholar were searched using the keywords "digital badge" and "open badge". The initial searches yielded 658 and 582 entries, respectively. Both result sets were merged into one dataset, which was further screened to remove conference proceedings and editorials. The final sample, as of November 2018, included 63 articles published between 2012 and 2018.
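As an illustration of this merge-and-screen step, the sketch below shows one way the two search exports could be combined and filtered. It is a minimal sketch under stated assumptions: the file names, column names, and screening labels are hypothetical, not the study's actual workflow.

```python
# A minimal sketch of Step 3 (merge and screen), assuming each keyword search
# was exported as a CSV with "title", "year", and "type" columns; file and
# column names are illustrative assumptions.
import pandas as pd

digital = pd.read_csv("digital_badge_results.csv")  # 658 entries
open_b = pd.read_csv("open_badge_results.csv")      # 582 entries

merged = pd.concat([digital, open_b], ignore_index=True)
merged = merged.drop_duplicates(subset="title")     # overlap between the two searches

# Keep entries within the review window; drop proceedings and editorials.
screened = merged[
    merged["year"].between(2012, 2018)
    & ~merged["type"].isin(["conference proceeding", "editorial"])
]
print(len(screened))  # the study's final sample was 63 articles
```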

Figure 1. Content analysis procedures, selection of studies, and screening.

Content Analysis Codebook

Table 1 describes the coding scheme and its alignment with the research questions. Section 1 focused on publication characteristics. Section 2's contents were informed by Otto and Hickey (2014) and described the technological aspects of the DOBs.

Table 1. Partial Content Analysis Codebook
Section 1: About the Publication (RQ1)
Type of Study: primary or non-empirical
Publication Year:
Country of Origin: examples include the United States, Canada, Finland, Turkey
Journal Focus: areas include Information / Educational & Communications Technology, Medical Education, and Library & Information Science
Number of Keywords:
Actual Keywords: examples include certification, assessment, gamification, motivation
Section 2: Badge Technology (RQ2)
Badge Type: open badge or digital badge
Badge Platform: examples include BadgeUSA, Credly, Blackboard, Canvas, Mozilla
Metadata: Issuer; Assessment Criteria; Assessment Evidence; Date Issued
Section 3: The Assessment (RQ2)
Content Area: sample content areas include English Composition, Educational Technology
Knowledge Setting: formal, non-formal, informal
Education Level: undergraduates, graduate students, professional development
Learning Domain: cognitive, affective, psychomotor
Cognitive Taxonomy: knowledge, comprehension, application, analysis, synthesis, evaluation
Type of Outcomes: cognitive, affective, psychomotor
Skill Type: hard or soft
Source of Evidence: direct or indirect
Sources of Evidence: tests, quizzes, papers, projects, affective measures / self-report inventories
Assessment Decision: formative, summative
Badge Awarder: peers or instructor
Qualitative Feedback / Perception:
Section 4: Research Methods (RQ2)
Research Approach: quantitative, qualitative, mixed methods
Sampling Units: students, faculty, employees, other
Sample Size:
Study Results: descriptive, inferential, both
Quality of Design Discussed: reliability, validity, credibility strategies, none

Minimal metadata were captured because most publications did not include them. Section 3 had the most codes and focused on important higher education assessment practices. Section 4's codes were informed by best practices as outlined in Creswell (2012) and McMillan (2012).

Data Analysis and Procedures

Coding was done iteratively in multiple phases to support the validity of the process and the subsequent results. Entries were coded separately and independently by each author. Both authors met routinely to review all aspects of each research phase and to establish intercoder agreement (Creswell, 2012). The quantitative data were summarized using the Statistical Package for the Social Sciences (SPSS), Version 25. The qualitative data were inductively summarized for themes (Creswell, 2012; McMillan, 2012). Author information and other identifying characteristics were not included in the results. Given the "non-human subjects" nature of this research, Institutional Review Board approval was not sought.
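For readers who want to quantify such agreement, the sketch below shows one common way to check intercoder agreement on a single codebook field. The data and the choice of Cohen's kappa are illustrative assumptions; the study itself reports establishing agreement through routine review meetings rather than a specific statistic.

```python
# A minimal sketch of an intercoder agreement check on one codebook field
# (knowledge setting), assuming both coders' codes are stored as parallel
# lists; the data and the kappa statistic are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score

coder_a = ["formal", "non-formal", "formal", "mixed", "formal", "non-formal"]
coder_b = ["formal", "non-formal", "formal", "formal", "formal", "non-formal"]

# Simple percent agreement and a chance-corrected alternative.
percent = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percent agreement: {percent:.2f}")
print(f"Cohen's kappa: {kappa:.2f}")
```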

Results

RQ1: What are the publication patterns of digital and open badges in higher education settings?

RQ1A: Publication Characteristics. Figure 2 summarizes the 63 journal articles. Thirty-six (57.1%) were non-empirical descriptions of DOBs, including their potential benefits and applications in higher education settings. The remaining 27 (42.9%) were primary studies of DOB implementations. Publication rates peaked in 2015 (n=20, 31.7%), followed by 13 (21%) in 2016, with a noticeable decline in 2017. The vast majority of the publications were from the United States (n=43, 68.3%), followed by the United Kingdom (n=4, 6.3%), and Australia and Canada (n=3, 4.8% each). The first author's country was used in cases where co-authors were from different countries. The majority of the publications appeared in ICT journals (n=31, 49.2%), followed by Library Information and Management, including Customer Service (n=8, 12.7%).

Figure 2. Digital and open badges publication rates (2012-2018) and journal focus.

The remaining publications appeared in journals focused on other disciplinary areas, as shown in Figure 2.

RQ1B: Research Goals of Primary and Non-Empirical Publications. Figure 3 describes the research goals by study type, as reflected by original keywords. Keyword usage was inconsistent and somewhat duplicative across the 63 papers. For example, it is likely that assessment of learning, assessments, and testing/assessment all refer to assessment. However, given this study's unobtrusive measures approach (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981), original keywords were retained. Primary studies were more likely to have keywords than non-empirical publications, r(63) = -.263, p < .05. With the duplication issues in mind, and excluding open badge, digital badge, badges, and similar terms, the most frequently used keywords in primary papers were motivation and gamification (six times each), followed by e-learning (4) and engagement (3). The most frequently used keywords in non-empirical papers were assessment (7 times), credential (5), gamification (4), and certification (3).
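The reported relationship between study type and keyword use is a point-biserial correlation (a Pearson correlation between a dichotomous and a continuous variable). The sketch below shows how such a statistic can be computed; the coding direction (0 = primary, 1 = non-empirical) and the data are illustrative assumptions, not the study's dataset.

```python
# A minimal sketch of a point-biserial correlation between study type and
# keyword count, assuming 0 = primary/empirical and 1 = non-empirical;
# the data are illustrative, not the study's actual dataset.
from scipy.stats import pointbiserialr

study_type = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]      # dichotomous variable
keyword_count = [6, 5, 7, 6, 2, 0, 3, 1, 5, 2]   # keywords per paper

r, p = pointbiserialr(study_type, keyword_count)
print(f"r = {r:.3f}, p = {p:.3f}")  # a negative r mirrors the reported direction
```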

Figure 3. Research purpose of journal articles based on keyword usage.

Note. Assessment included alternative-assessment, learner-centered-assessment, and performance-assessment. Learning included connected-learning, assessment-of-learning, online-learning, adult-learning, informal-learning, and learning.

In summary, DOBs were of interest to a variety of disciplines but had the highest concentration of papers in Information and Communications Technology / Educational Technology, followed by Library Information, Management, and Customer Service. The sample included papers published between 2012 and 2018, with publication rates peaking in 2015, followed by 2016. Most of the papers originated from the United States, followed by the United Kingdom; this is likely to be, at least in part, an artifact of the decision to focus on English-language journals. The analysis results so far, though encouraging, show preliminary evidence of the duality of the promise of DOBs versus their actual, more traditional, implementation.

Implementation studies mostly focused on motivating and engaging students, which are traditional higher education concerns. Though present in some supporting literature, reform-related ideas that speak to the DOB promises, namely how knowledge is captured and credentialed, prior learning recognition, alternate pathways to degrees, and changing how universities represent and credential learning, were not the dominant research goals of primary studies. Rather, these ideas were the focus of the non-empirical papers. The remainder of this content analysis focused on the 27 primary studies and RQ2 results.

RQ 2: What are the assessment practices and results of the digital and open badge implementations?

For context, this section begins with an overview of the 27 primary studies' research methods, as shown in Table 2. Most studies used descriptive, non-experimental designs, including 17 (63%) mixed methods, eight (29.6%) quantitative, and two (7.4%) qualitative. Approximately half discussed validity and reliability. Most studies used non-probability convenience samples (96.3%), with sample sizes ranging from one to 2,448. Multiple DOB implementations were explored in three studies. Hence, the actual number of DOB implementations was between 29 and 31, depending on how the courses were clustered and the research outcomes reported. The most studied participants were undergraduates: approximately 2,910 students across nine studies and ten DOB implementations. Nine studies, accounting for ten DOB implementations, explored non-formal professional development settings, and the majority of those participants were university staff, including faculty and student workers. Direct assessments in the form of quizzes, projects, and examinations were the primary data sources. Indirect assessment data were also gathered in most studies. Of interest were satisfaction with and/or perception of DOBs (14 studies, 51.9%) and individual differences, motivation, and learning style (9 studies, 33.3%).

Qualitative data were gathered from open-ended survey questions, in-person and phone interviews, focus groups, and course comments, including peer reviews. Five studies (18.5%) used LMS logs to examine various student behaviors, including levels of motivation and engagement. Finally, in terms of DOB platforms, several institutions (n=17, 70%) developed in-house systems, some with OBI-standard plug-ins to promote sharing across systems and contexts. The most frequently cited external platforms were Mozilla Backpack and Canvas (four times each), Credly and Moodle (n=3), and Blackboard (n=2).

Table 2. Summary of Research Methods in Primary Digital and Open Badge Research in Higher Education, 2013 to 2018
Research Component: Number of Studies (n=27)
Design
  Non-experimental: 22 (81.5%)
  Quasi-experimental: 5 (18.5%)
Approach
  Qualitative: 2 (7.4%)
  Quantitative: 8 (29.6%)
  Mixed Methods: 17 (63%)
Badge Recipients
  Students (graduate & undergraduate): 13 (48.1%)
  Teachers in University-based Professional Development: 2 (7.4%)
  University Staff Professional Development: 5 (18.5%)
  Multiple Recipients: 5 (18.5%)
  Other University-based Professional Development Attendees: 2 (7.4%)
Sampling Method
  Non-probability: 26 (96.3%)
  Probability: 1 (3.7%)
Sample Size
  Fewer than 31: 6 (22.2%)
  31 to 99: 10 (37.1%)
  100 to 499: 6 (22.2%)
  500 or more: 5 (18.5%)
Data Sources (categories overlap; will not sum to 27)
  Academic Assignments: 25 (92.6%)
  Survey About Badges: 14 (51.9%)
  Other Survey (Motivation / Learning Style / Individual Difference Scale): 9 (33.3%)
  LMS Logs: 5 (18.5%)
  Qualitative Data (survey comments, interviews, focus groups): 11 (40.7%)
Data Analysis
  Descriptive (quantitative): 4 (14.8%)
  Descriptive (qualitative): 2 (7.4%)
  Descriptive (mixed methods): 11 (40.8%)
  Descriptive & inferential (quantitative): 4 (14.8%)
  Descriptive & inferential (mixed methods): 6 (22.2%)
Quality
  Credibility / Reliability / Validity Mentioned or Detailed: 14 (51.9%)

RQ2A: Assessment and Pedagogical Practices. Table 3 describes the assessment and pedagogical conditions of the DOB implementations. Thirteen studies (48.1%) were set in formal, nine (33.3%) in non-formal, and five (18.5%) in mixed knowledge settings. The five mixed studies explored multiple implementations in different settings. Nearly all the implementations in formal settings assessed cognitive learning outcomes, 11 of them (84.6%) in combination with affective outcomes and one in combination with both affective and psychomotor outcomes. DOBs were mostly awarded summatively (76.9%) by instructors (92.3%) in satisfaction of specific completion criteria that coincided with typical course milestones such as assignment and module completion. These academic outcomes were graded primarily by combining direct and indirect assessments (84.6%). The cognitive taxonomy levels that could be determined were mostly higher order (n=11), and artifacts included discussion posts, peer reviews, a variety of course projects, tests, and quizzes. DOBs were used to recognize high performers (n=7), recognize completion of various artifacts (n=8), encourage a range of positive academic behaviors such as motivation, engagement, timeliness, and creativity (n=10), and credential and recognize soft skills such as collaboration, quality peer review, and mentoring (n=4). Four studies examined the role of individual differences such as gender, learning style, and personality type on outcomes such as academic achievement, number of badges earned, and motivation. Hard skill and soft skill attainment between the badge groups (experimental) and non-badge groups (control) was not consistently different in statistical terms; this was also the case for learning style and personality type. DOBs appealed more to students with pre-existing interests in gaming and technology (n=2). Seven studies reported positive perceptions and behaviors related to the DOB experience, including spending more time on assignments, improving positive academic behaviors (self-monitoring, preparation, collaboration, and engagement), feeling more motivated, and supporting the continued use of badges within the course.

Table 3. Summary of Assessment and Pedagogical Practices Supported by Digital and Open Badges, 2013 to 2018
Assessment Conditions: Formal (n=13) | Non-Formal (n=9) | Mixed (n=5) | Total (n=27)
Badges by Recipients' Education Level
  Undergraduates Only: 9 (100%) | 0 | 0 | 9 (100%)
  Graduate Students: 4 (100%) | 0 | 0 | 4 (100%)
  University Professional Development: 0 | 5 (100%) | 0 | 5 (100%)
  Higher Education (Mixed with Students & PD): 0 | 0 | 2 (100%) | 2 (100%)
  Other University-Related Professional Development: 0 | 3 (100%) | 0 | 3 (100%)
  Academic Professional Association: 0 | 1 (100%) | 0 | 1 (100%)
  Cannot be Determined: 0 | 0 | 3 (100%) | 3 (100%)
Learning Domain
  Cognitive Only: 0 | 3 (100%) | 0 | 3 (100%)
  Affective Only: 1 (33.3%) | 2 (66.7%) | 0 | 3 (100%)
  Cognitive & Affective: 11 (57.9%) | 4 (21.05%) | 4 (21.05%) | 19 (100%)
  Cognitive, Affective & Psychomotor: 1 (100%) | 0 | 0 | 1 (100%)
  Cannot be Determined: 0 | 0 | 1 (100%) | 1 (100%)
Cognitive Taxonomy Level
  Lower Order: 1 (16.7%) | 4 (66.6%) | 1 (16.7%) | 6 (100%)
  Includes Higher Order: 11 (64.7%) | 4 (23.5%) | 2 (11.8%) | 17 (100%)
  Not Applicable: Affective Only: 1 (50%) | 1 (50%) | 0 | 2 (100%)
  Cannot be Determined: 0 | 0 | 2 (100%) | 2 (100%)
Assessment Source
  Direct: 1 (40%) | 4 (60%) | 0 | 5 (100%)
  Indirect: 1 (33.3%) | 2 (66.7%) | 0 | 3 (100%)
  Direct & Indirect: 11 (57.9%) | 3 (15.8%) | 5 (26.3%) | 19 (100%)
Assessment Decision
  Formative: 0 | 0 | 1 (100%) | 1 (100%)
  Summative: 10 (50%) | 7 (35%) | 3 (15%) | 20 (100%)
  Formative & Summative: 2 (100%) | 0 | 0 | 2 (100%)
  N/A: Affective Domain: 1 (33.3%) | 2 (66.7%) | 0 | 3 (100%)
  Cannot be Determined: 0 | 0 | 1 (100%) | 1 (100%)
Who Awards Badges
  Instructor: 12 (48%) | 9 (36%) | 4 (16%) | 25 (100%)
  Instructor & Peers: 1 (50%) | 0 | 1 (50%) | 2 (100%)
Overall Badge Result
  Positive: 4 (66.7%) | 2 (33.3%) | 0 | 6 (100%)
  Negative: 3 (50%) | 2 (33.3%) | 1 (16.7%) | 6 (100%)
  Mixed: 5 (62.5%) | 2 (25%) | 1 (12.5%) | 8 (100%)
  N/A: Describes Awarding and Not Results of Badges: 1 (14.2%) | 3 (42.9%) | 3 (42.9%) | 7 (100%)
Note. Percentages in parentheses are row percentages.

The nine non-formal studies (11 DOB implementations) related to professional development and continuing education. University-based professional development was in the areas of Library Management, Information, and Customer Service (such as basic library skills, library services, policies and procedures, and effective communication) and Academic Practice (such as general university teaching, the use of ICTs in teaching, and developing effective assessments). In all these cases, DOBs were awarded summatively by instructors and used criterion-based, direct assessments to credential learning outcomes. DOBs were also implemented to motivate participants to complete various trainings. Cognitive taxonomy levels that could be determined were evenly split between lower and higher levels, and artifacts included discussion posts, application scenarios, projects, tests, and quizzes. Indirect assessment focused on the participants' learning experiences, including thoughts about DOBs as compared with traditional certificates. One purely affective implementation explored respondents' attitudes and motivations to share research-related data.

The five "Mixed Settings" studies included three ex post facto summaries, which may have included some non-formal learners. One study's stated goal was to recognize and credential informal learning for participants in unattended MOOCs who may be interested in pursuing formal higher education. The content areas, such as Basic Mathematics, were introductory. Learners varied in terms of prior post-secondary experience and included professionals and retirees. DOBs were awarded upon completion of criterion-based, direct assessments in the form of quizzes. The actual number of participants who went on to formal learning, including those who used their DOB credentials to support this transition was not presented.

A second ex post facto study summarized badges awarded in one academic year but did not provide specific learning context and assessment details. The third study focused on levels of engagement in three MOOCs by comparing the number of badges awarded to module completers with the number of certificates awarded to course completers. The findings suggested that micro-credentials, rather than certificates, were more accurate reflections of engagement, which, in turn, might be a more positive reflection of MOOC-related learning outcomes.

RQ2B: Perceived Value of Digital and Open Badges. Perceptions of and experiences with DOBs were mixed across groups, as shown in Table 4. Undergraduates' motivation, engagement, and satisfaction tended to be positively related to the number of badges earned. Students also felt DOBs were useful when grades were absent, felt they were useful when directly related to specific content, and considered them good adjunct learning supports. On the negative side, some students thought DOBs were "childish," not worth the effort, and unnecessary when a traditional grading system was present. Levels of motivation to earn DOBs fell in two studies. Two other studies reported technological and logistical difficulties. Students in one study thought grades were more valuable than DOBs and said they would not use DOBs in the future. Students in another study had positive perceptions of their DOB experience but also felt DOBs were useless. Negative perceptions and behaviors were also reported in a non-badge control group; some of these students reported decreased motivation, jealousy, and resentment of the competition that earning DOBs created.

Graduate students also gave DOBs mixed reviews. Some thought they were "kind of cool," kept them updated, and, like undergraduates, felt DOBs were useful as an adjunct assessment method. Some felt motivated (n=2) and reported increased levels of interest in the course (n=1) and improved organization (n=1). On the other hand, some reported DOBs were not useful (n=3) and did not share them professionally (n=2).

Table 4. Summary of Perceived Value of Digital and Open Badges by Education Level, 2013 to 2018
Perception of Digital and Open Badges by Education Level
Undergraduate Implementations Only (n=10): positive 3; negative 3; mixed 4
positive perceptions and experiences
number of badges related to motivation, engagement, satisfaction (7), & achievement (3); credentialing, recognition (3); credential soft skills (1)
sample comments: "Allow students to see progress"; "I get prepared to receive a higher score"

negative perceptions and experiences

unnecessary when grades are present (1); erodes intrinsic motivation over time (3); difficult logistics (2); distraction (3)
sample comments: "not worth the extra effort"; "waste of time, childish (don't need stickers)"; "jealousy can occur"
Graduate Implementations Only (n=4): positive 1; negative 1; mixed 2
positive perceptions and experiences
motivation / quantity & quality of engagement (2); interest (1); organization (1)
sample comments: "keep me updated"; "kind of cool"; useful as an adjunct method

negative perceptions and experiences
not useful (3); not shared (2); technology gap & logistics (1); engagement (1); did not democratize (1)
sample comments: "underwhelmed", "nothing special", "did it for the grade"; "kindergarten star system"; "prefer bar graph"
All Professional Development (n=10): positive 2; negative 3; mixed 3; not determined 2
positive perceptions and experiences
explicit learning outcomes (1); motivation / meaningful (2); recognition (1); see potential (2)
sample comments: "happy to receive"; "reminder of what I do"; "related with my bonus points"; convenient to display (1)

negative perceptions and experiences
logistics (2); discontinued (1); not meaningful / does not motivate (4); prefer certificates (1); lack of recognition by other institutions (1)

Students in one study also experienced logistical difficulties with navigating between multiple platforms, which led to confusion and frustration. Some students reported being "underwhelmed" by DOBs and likened them to a "kindergarten star system."

DOB implementations in non-formal settings followed the theme of mixed reviews. On the positive side, there was a general recognition of the potential value of DOBs in professional development. Two groups of recipients felt motivated, were "happy to receive badges," shared DOBs on social media, and liked when DOBs were linked to explicit learning outcomes. On the negative side, in addition to logistical difficulties that led to a discontinuation of use in one setting, recipients in three studies felt DOBs were not meaningful and would not motivate them, were concerned about DOBs' value beyond their current institution, and preferred actual certificates. Perceptions of DOBs were not reported in the informal implementations.

In summary, the 27 journal articles that examined DOB implementation focused on a variety of content areas, primarily in formal settings. Implementations mostly supported non-reform practice. Learning environments were generally traditional, focused on formal learning of traditional academic content, with learning goals primarily related to both cognitive and affective outcomes. DOBs were mostly awarded summatively to students, mainly by instructors upon completion of criterion-based direct assessments.

In some undergraduate contexts, the number of DOBs received was related to higher levels of achievement, time spent on assignments, motivation, and satisfaction. Perceptions of the role and value of DOBs were mixed at best in formal and non-formal settings and across education levels. DOBs were thought to be valuable when they helped to make learning outcomes explicit and to credential soft skills. Negative perceptions were typically linked to the time and effort needed to achieve a badge, the perceived lack of usefulness in the presence of grades and traditional certificates, negative emotions such as jealousy, and the childishness of receiving "stickers." So far, and based on this sample of studies, DOBs have failed to democratize the assessment process, as most badges were created and awarded by instructors. In most cases, the DOBs themselves were not shared professionally or in other contexts. Three studies reported logistical issues related to connectivity with third-party open badge platforms, leading to the suspension of implementation in one non-formal context. Finally, and further confirming the results of RQ1B, none of these studies reported progress on reforms such as prior learning recognition, alternate pathways to degrees, and changing how universities represent and credential learning.

Discussion

This study, like others, had limitations, the primary one being the very emergent nature of DOB technology and, therefore, the limited sample of available publications. While every effort was made to find all eligible publications in the review period, it is always possible that other eligible cases were unintentionally excluded. The number of papers was further limited by the decision to focus exclusively on English-language journals. Additionally, limitations of the peer review process itself include potential bias (Derrick & Pavone, 2013) and poor reliability (Bjork & Hedlund, 2015). Therefore, with these limitations in mind, only very preliminary statements will be made based on the findings of this content analysis.

There are certainly many possibilities for DOBs and their potential applications in higher education learning and assessment. The evidence from this synthesis suggests DOB implementations are concentrated within disciplines and interest areas mostly related to ICTs. Many authors offer innovative ideas for how DOBs can be used to capture and document learning (Blackburn, Porto, & Thompson, 2016; Carey, 2012; Casilli & Hickey, 2016; Derryberry et al., 2016; Gibson, 2016). Implementation of these ideas would likely lead to increased access, alternative pathways to degrees, shorter degree completion, and, consequently, lower costs (Klein-Collins, 2012; Klein-Collins & Wertheimer, 2013).

Peer-reviewed evidence from actual DOB implementations is still not available in large numbers, as reflected in the current sample of primary studies. While noteworthy and valuable to potential DOB adopters, these implementations were largely traditional in their approach and did not address key reforms. Most did not stray from traditional practice: summative assessment of mostly cognitive outcomes of traditional students in formal learning settings. Ten implementations supported credentialing in non-formal settings. While one informal application discussed credentialing the informal knowledge of learners who may pursue formal education, the actual conversion of informal knowledge to formal knowledge was not discussed. Also, the learners in question were already part-time students in formal settings.

Most implementations linked the achievement of badges to specific learning outcomes but did not mention competency-based assessment. Those that did were inconsistent in their definitions, which aligns with previous findings (Zlatkin-Troitschanskaia, Shavelson, & Kuhn, 2015). Zlatkin-Troitschanskaia et al. also found that most competency-based implementations used local assessment systems, which limits generalizability. Thus, in keeping with the history of ICTs not always delivering (e.g., Cuban, 1996, 2001), DOB implementations were mostly traditionally focused and not living up to the promises of the technology.

On the positive side, the sample of implementations does, however, provide several examples of how potential users can implement DOBs in a variety of learning situations. The range of research methods and approaches used is transferable to a variety of assessment and learning contexts, providing readers with clear sets of best practices. These best practices include criteria for awarding DOBs, linking these criteria to learning outcomes, and using various methodological approaches to document and examine the results.

Also important is the critical insight available to potential users, who can anticipate and therefore navigate the documented challenges. One such challenge involves the logistics of using third-party DOB platforms that are not integrated into course LMSs. Potential users can also approach their own adoption with realistic expectations of what DOBs will help them to accomplish in their learning environments. For example, the following learner profiles are not likely to be motivated purely by the potential to earn DOBs: mature learners; learners who are already intrinsically motivated; learners with limited pre-existing interest in technologies; and learners who are pursuing advanced studies and professional development. Learners at all levels, however, appreciate clear expectations and milestones, which DOBs can help to provide. Also, depending on instructional goals, negative motivation and emotions can occur as unintended consequences. Therefore, the results of this synthesis can help potential adopters make sound implementation decisions based on their respective instructional needs, assessment and evaluation conditions, current technology access, and learner characteristics.

Conclusion

This content analysis examined DOB adoption in higher education settings by exploring research questions related to trends, assessment conditions, and assessment outcomes. Sixty-three peer-reviewed journal articles published between 2012 and 2018, including 27 primary studies, were analyzed. The majority of these papers originated from the United States and appeared in a variety of disciplinary journals, most of which are ICT / Educational Technology focused. DOB technology is still in its emergent phase, and additional publications, including more documented implementations, are likely to become available. Thus, periodic examinations of the peer-reviewed literature, and of whether the technology is delivering on its promises, continue to be warranted.

Though adoption is growing, as reflected in the number of DOBs awarded (Grant, 2014; Gibson et al., 2015; Mozilla Foundation, 2016), the findings from this synthesis were mixed. The enthusiasm for these technologies and their related promises and potential is real. However, findings based on the current sample revealed that the peer-reviewed literature is mostly driven by the ICT discipline and others with similar interests. Researchers, educators, and other interested stakeholders should be mindful of technology's mixed history and prior failures to deliver on its potential (Burns, 2013; Clark, 1991; Cuban, 1996, 2001; Cuban & Jandrić, 2015; Whitworth, 2012). In addition to periodic examinations of the literature that include new publications, future research should also address this study's limitations, namely by including documented DOB implementations that are published in non-English-language journals. The current analysis is just one of many efforts to document the unfolding story of an emerging technology in higher education settings.