Is Competency the New Mastery?

In 2014, I started seeing more articles about Competency-Based Education (CBE) as the new approach to higher education degrees. In 2013, I think that “mastery” might have been the buzzier word. Mastery got a big push last year from things like Khan Academy and founder Sal Khan’s belief that mastery of a skill or concept before moving on was what was lacking in American education overall.

A simplified explanation of the difference, perhaps from an employer's point of view, is measuring what a person knows (mastery) versus what that person can do (competency).

Is competency the new mastery? I did some searching and turned up a piece called “Competency vs. Mastery” by John F. Ebersole (president of Excelsior College) on the Inside Higher Ed site that compares these two approaches to “validating” learning.

He suggests that “competency” could be akin to subject matter “mastery” and might be measured in traditional ways – examinations, projects and other forms of assessment.

Ask that hypothetical woman-on-the-street whether she would rather hire someone who had mastered a skill or someone who was merely competent, and I suspect mastery would win out. Of course, that person's ability to apply what they have mastered in practice might still be in question.

It may be semantics, but considering someone to be “competent” sounds to many people like “adequate.” That article gives as an example those instructors we have experienced as students who had “complete command of their subjects, but who could not effectively present to their students. The mastery of content did not extend to their being competent as teachers.”

What would you say a subject matter exam measures? Mastery? Might an undergraduate have mastered subject matter or skills but still not be competent in her chosen field?

Looking online at the available books on competency-based education and training, most of them are in healthcare and clinical supervision, which are also the programs discussed in the article. Does the CBE approach work with other disciplines?

Some interest in CBE comes from that often-heard idea that employers don’t view new college graduates as ready to do the job. They expect to have to further train the new hire who has “mastered content, but [is] not demonstrating competencies.”

Ebersole says that “To continue to use ‘competency’ when we mean ‘mastery’ may seem like a small thing. Yet, if we of the academy cannot be more precise in our use of language, we stand to further the distrust which many already have of us.”

Yesterday, I was thinking about differentiating mastery and competency in the light of movements such as competency-based education and degree programs.

The Mozilla Open Badge project and other initiatives have tried to standardize the use of badges for documenting learning. I like the idea but I don’t see that badges have made any serious entry into educational institutions.

Badges have been used to mark what a person knows or what they can do. Proponents say using them is more student-centered and more about real student learning. It's certainly more real than using seat time and time on task as measurements. That a student has completed 9 credit hours proves nothing, and more often we hear that employers question whether an "A" grade for those 9 credits proves any mastery or competency either. Enter competency-based or evidence-based approaches to learning.

I still think about the merit badges I earned in scouting when this topic comes up. The badges were extrinsic motivators and they worked for me and most of my fellow scouts. You wanted to get them. I liked the ceremonial awarding of them at meetings and the recognition. My mom and my “den mother” were pretty conscientious about signing off that I had completed the requirements to earn them. But much of the work I had to do was on the “honor system” and I’m sure I cut corners on some things and got away with it.

If I earned a badge for “climbing” (as in rock and mountains), would you say I was competent at the sport? Would you say I had mastered it? I don’t think I’d be comfortable saying either one of those things. I had learned about it and I had done some actual activities involved with it. I had not mastered it and I’m not sure a real climber would say I was competent enough to do it on my own or very seriously.

As Bernard Bull and others have pointed out, this same critique can be leveled at letter grades. Do both make school about “earning instead of learning?”

We also associate badges with video games and in the gamification of learning they play an important role. In the pure gaming environment, earning badges, points, power pills or whatever tokens are given sometimes does take precedence over learning. Then again, some games aren’t much interested in learning.

It’s better to think of badges as markers, milestones of progress rather than as a goal.

The Mozilla project and others have tried to build more trust in badges as credentials and educational currency. Education has always valued tests, scores and credits as evidence of learning, even though we have been arguing about that evidence for hundreds of years and continue to do so.

If the organization awarding the badge is credible, then the real concern is what evidence is being used to determine the completion. As with the goals and objectives we now hold as important in schools, some things are more easily measured.

Want to earn the “Miler” badge? Then run a mile in under 5 minutes and have it verified by the teacher or coach. Want to earn the “Team Player” or “Leadership” badges?  Then… play on a team… be the captain…  Hmmm. Those are tougher things to measure.

Students, teachers and schools have talked for a long time about trying to get away from a reliance on just grades, but grades persist. Portfolio assessment and other movements have made a dent in some instances, but the quantifiable test score still wins the day. That stopwatch on the mile runner is easily validated. Today there is more testing and data being used and more complaints about its use.

Learning Beyond Letter Grades was a course offered last year that examined why so many schools use and rely on letter grades. “Where did they come from? What do they tell us and fail to tell us about the learners? What is the relationship between letter grades, student learning, and assessment?” That’s a lot to ask in a six-week course, but it comes from this desire many of us have to consider authentic and alternative assessments, peer assessment, self-assessment and badges.

Some badges set an expiration date, meaning the badge bearer will need to return for more training or provide updated evidence to keep the badge.  That’s an idea from the world of professional development, licensing and credentials. If you earned a computer programming or phlebotomy badge in 2001, should it still be valid today? Perhaps not.

Perhaps the most difficult hurdle in launching a competency or mastery-based program might be how to assess/validate learning. We have been hitting that one back and forth for centuries.

This post also appeared at Serendipity35

The ABD Club

ABD stands for “all but dissertation,” which is a description of a student who has finished coursework and perhaps also passed comprehensive exams, but has yet to complete and defend the doctoral thesis. It is a kind of club, though you don’t really see people putting the ABD bumper sticker on their car.

Last weekend, I wrote about “The Art of Procrastination” and rethinking what is and isn’t true procrastination. That led me to think about why so many doctoral students, myself included, give up on that degree.

I had read an article by Rebecca Schuman about the Ph.D. Completion Project, which estimates the ten-year completion rate for the degree. For STEM disciplines, it is 55–64 percent. It's 56 percent in the social sciences, and 49 percent in the humanities. So about half of those in these doctoral programs don't make it even after a decade of working at it. Some of those people don't make it all the way to the dissertation phase. I am in that particular club.

David D. Perlmutter wrote a series focused on the "getting it done" aspects of the dissertation; it accepts that there may be factors beyond your control, but it pushes the completion agenda.

The Ph.D. Completion Project graphs start leveling out around year 8, and since the dissertation phase begins in year 3 or 4, we can assume a lot of these folks are into the dissertation before they bail out.

ABDs live in an odd parallel universe of academia. They clock up years of research and tuition bills, but come away with nothing to show except three scarlet letters they can wear.

Some of them can get teaching jobs at 2-year colleges, or, with some impressive job experience or big publications, might get a position (non-tenure, probably) at a 4-year school. It has been suggested that a new kind of degree between an M.A. and a doctorate might be offered, something like the M.F.A. but in other fields.

I attended a party last summer for a friend who had finally completed the dissertation and degree. He is in his late 50s. He started late and plowed ahead because he enjoyed learning. He is an adjunct professor at a nearby university, and I doubt that he expects to pick up a full-time position at this stage of his life. That's a realistic place to be, because the odds are against him.

I have written about procrastination on another blog of mine, and it’s not that I don’t get things done. Part of my problem has always been putting too many things on that never-ending “To Do” list.

The things undone on those lists are a constant cause of stress and a sense of failure. I lay a lot of guilt on myself about all the things I do to avoid doing the things I really need to do – like making and drinking a few cups of coffee while staring at the sky on the deck, taking the dirty laundry downstairs, writing a blog post, watering the plants, taking a walk.

But of late, I have been rethinking procrastination, and I’m not the only one doing that. Scientists who study procrastination find that most of us are lousy at weighing costs and benefits across time. For example, we might avoid doctor and dental appointments, exercising, dieting, or saving for retirement. We know they have benefits, but the rewards seem distant and we may even question those benefits. What if that money is not there when I retire? What if we don’t live long enough to retire?

Most of us prefer to do things with short-term and small rewards. The benefits of that coffee break, watering the plants or writing a blog post may be small or even dubious, but we see an immediate result. I like the coffee and it might give me some energy. The plants need me to survive, and I enjoy looking at them. I like completing things, even if it's a post that takes me only an hour to finish. It is finished. Checking things off the To Do list gives me a wonderful feeling.

Friends tell me I am very productive. And some articles I have read say that productive people sometimes are very poor at distinguishing between reasonable delay and true procrastination.

Reasonable delay can be useful. I will respond to the request for information from my colleague tomorrow after I talk to someone about it and gather more information. But true procrastination – not responding to the colleague for no reason, or watering the plants and making coffee just to avoid the inevitable – is self-defeating.

It is a way to rethink blaming yourself. I don’t mean that you’re off the hook. I’m not giving myself a free pass on procrastinating in all cases. I’m rethinking the why of the delay.

Do I regret not finishing that doctorate? The time when it would have benefited me is now past, so I don't regret it now. I found alternate paths to what I wanted to do, and I really did not enjoy the work required to get the degree.

Now if I can just find out when the next meeting of the ABD Club occurs. I have a lot to talk about with that crew.