Student Learning Outcomes (SLO) Assessment began in the
second term of the Bush Administration as one response to the growing
discontent voiced by business people and parents, who thought that higher
education was turning out graduates at a very high cost but without the skills and
knowledge needed to be productive workers and citizens. Accrediting agencies adopted the approach, and it is gradually filtering into every level of higher education.
Here at Ashland, considerable resources were put toward assessment, both in dollars and in the time of faculty, students, and administrators. On pain of not being re-accredited,
we were expected to develop a “culture of assessment” (as opposed, say, to a
culture of learning). But is there any
solid evidence that SLO assessment actually improves student learning? In this column, Dean Erik Gilbert argues that
no one knows. It is high time this question was asked and answered in a serious way. Not everyone argues that SLO assessment is completely useless, though some do. The real questions are these: do the time, effort, and money spent on SLO assessment yield increments of learning commensurate with the costs? And does the approach to teaching, learning, and the human soul implied in assessment actually do some harm? Here are Erik Gilbert's conclusions:
If advocates could point to evidence that good assessment has led to improvements that are external to the process itself — like changes in a college’s reputation, ranking, or employment prospects for its students — I suspect faculty would give it more support.
Assessment is one of those things that we keep telling ourselves will pay off if we could just get it right, but we never seem to get there. It’s time for us to demand that the accreditors who are driving assessment provide evidence that it offers benefits commensurate with the expense that goes into it. We should no longer accept on faith or intuition that learning-outcomes assessment has positive and consequential effects on our institutions — or students.

Here is the whole column:
Does Assessment Make Colleges Better? Who Knows?
By Erik Gilbert
August 14, 2015
Last year the younger
of my two sons went off to college. As we went through the search process, we
looked at university and department websites, checked faculty research
interests, looked for evidence of faculty involving students in their research,
flinched at the prices, marveled at the climbing walls, and considered quality
of the food on campus. Basically we did all the things a typical middle-class
family would do in a college search, along with a few insider concerns like
looking at faculty publications and grants and checking that the university
libraries had at least one of my books. In retrospect one question that never
crossed my mind was, "I wonder what this place’s assessment program is
like?" I suspect I am not alone in this.
My lack of curiosity about
assessment when making an important choice about my children’s education
probably surprises no one, but it should. It’s unsurprising in that no one,
higher-ed insider or not, ever seems to worry about this when choosing a
college. No admissions officer ever touted his institution’s assessment
results. No parent ever exclaimed, "Suzy just got into Prestigious College
X. I hear they are just nailing their student learning outcomes!" But it’s
still a little surprising in that I am a professor and an administrator who has
been involved in assessment in various forms for a long time. I have been
dutifully doing assessment in my classes almost since I started teaching a
decade and a half ago.
Every year on my
annual productivity report I write a mandatory and usually somewhat contrived
narrative describing the ways in which I have changed my courses and teaching
in response to the assessment data from the previous year. As an administrator,
I sit on the Learning Outcomes Assessment Committee that oversees the institution’s
assessment program and on the Graduate Council where we routinely critique new
program and course proposals for the failings of their assessment plans.
So, what does it say
that I looked at climbing walls, not assessments, when making a significant and
expensive decision about my sons’ educations? It says that I, like virtually
everyone else, don’t think that good assessment makes good universities and
well-educated students or that bad assessment makes bad universities and poorly
educated students. In fact, I am starting to wonder if assessment may actually
do more harm than good.
What got me thinking
about this was a New Yorker article by
Atul Gawande on unnecessary medical testing and the low-value and sometimes
harmful medical interventions that result from it. Drawing upon a number of
recent studies, Gawande argues that much medical testing is unnecessary and
that, in addition to not providing useful information, it can also lead to overdiagnosis and overtreatment. In one of his examples, he reports that
ultrasound testing for thyroid cancer has made it possible to detect
microcarcinomas that would have gone unnoticed before. These rarely pose a
threat, but patients and surgeons find it difficult not to treat anything that
sounds that scary. Gawande uses the example of Korea, where thyroid cancer surgeries have increased drastically and thyroid cancer has become the most commonly treated form of the disease. However, mortality rates from thyroid cancer have not changed, while serious side effects from all the surgeries have increased.
I saw unmistakable
parallels to assessment in universities. Are we using assessment to find minor
shortcomings in our teaching and curriculum, changing what we do in the hopes
of remedying those shortcomings, and in the long run having no real positive
effect on the quality of our graduates and institutions? Are we, in effect,
finding and treating harmless academic microcarcinomas rather than real
problems? And, if so, what might be the consequences of all this?
Has anyone looked into
whether assessing student-learning outcomes over many years has made American
colleges, or students, better in some way? Has anyone tried to compare institutions
with different approaches to assessment? I am a historian, so I am not familiar with the education research, but as best I can tell from a literature search and from asking people in the field, the answer is "no."
To be fair, there is
nothing directly comparable to mortality rates in higher education. Figuring
out what makes one university better than another one or better than it was 10
years ago is tricky. But given the amount of time, effort, and money that goes
into assessment, it would be helpful to have a track record of its efficacy.
Does assessment cause
actual harm? Probably not in the way unnecessary medical treatment does, but
there are opportunity costs associated with it. And most troubling of all is
that the fundamental premise of assessment is that the problems we need to test
for and try to fix are found in the classroom and the curriculum. So while we
are agonizing about whether we need to change how we present the unit on
cyclohexane because 45 percent of the students did not meet the learning
outcome, budgets are being cut, students are working full-time jobs, and debt
loads are growing.
People who work in
assessment complain that faculty treat it as merely a compliance issue: that we just tick the boxes and don’t use the data to improve student learning. No doubt this is true. Advocates may be able to point to modest improvements in
student learning in specific programs or courses with evidence generated by
assessment instruments, but this is worryingly similar to surgeons patting
themselves on the back for taking out tumors without checking to see if their
interventions are affecting mortality rates.
If advocates could
point to evidence that good assessment has led to improvements that are
external to the process itself — like changes in a college’s reputation,
ranking, or employment prospects for its students — I suspect faculty would
give it more support.
Assessment is one of
those things that we keep telling ourselves will pay off if we could just get
it right, but we never seem to get there. It’s time for us to demand that the
accreditors who are driving assessment provide evidence that it offers benefits
commensurate with the expense that goes into it. We should no longer accept on
faith or intuition that learning-outcomes assessment has positive and
consequential effects on our institutions — or students.
Erik Gilbert is
associate dean of the Graduate School and a professor of history at Arkansas
State University.