Learning Communities at Mid-Central University
Sean is an area coordinator in the residence hall system at Mid-Central University (MCU). As such, Sean has responsibility for four buildings (each housing about 240 students), four graduate assistants (one for each building), and 16 resident assistants. MCU is a regional institution, with most of its students majoring in education, business, or liberal arts. Most of the students are the first in their families to attend college, and many have significant amounts of federal financial aid.
Sean is in her second year of service at MCU. She noted that, unlike other institutions with which she was familiar, MCU did not have any learning communities (LCs). Sean had served as a graduate assistant in the residence halls at State University while pursuing her master's degree and was accustomed to residence halls with many learning communities. She was surprised to learn, when she interviewed for her position, that MCU had none, but she accepted the position with the hope that learning communities could be established, though no promises were made that LC units would be created at MCU. She spent her first year investigating why MCU did not have any of these special residential units and found that a variety of reasons contributed to their absence, among them the philosophy of the residence department, a lack of funding, and, potentially, a lack of student interest.
From Sean's point of view, the purpose of a learning community was to improve retention. In the pilot project she was developing, two learning communities would be implemented on a trial basis. Twenty students majoring in business would be assigned to one learning community, and another 20 education majors would be assigned to the other. The students in each learning community would be enrolled together in three courses in the curriculum, and a community advisor (CA) would be hired to provide support and enrichment, such as organizing study groups, arranging for tutoring as necessary, and organizing a field trip for the student participants once per month in the fall semester.
Sean briefed her staff at the end of the first academic year on her plan to implement two trial learning communities the next academic year. The concept was foreign to many of the staff, and several asked this question: How did Sean know that the students needed this experience? Sean indicated that answering that question would be part of the pilot project being planned.
She managed to convince the assistant director of student housing for residential programs, Sami, that implementing two learning communities on a trial basis was worth undertaking, but Sami cautioned her that she would run into a series of hard questions in her conversations with other members of the central office staff. And Sami was clear about one central concern that was paramount in his mind: whenever programs were implemented, senior staff would want to know how the program could be improved from one year to the next.
Sean also met with the fiscal officer of the residence life department who wanted to know what the cost of the program would be. Sean thought that adequate compensation for the community advisors would be a free room plus a monthly stipend of $100 for each CA plus an operations budget for each LC of $2,000 for modest programming efforts. The fiscal officer left Sean with this question: How would Sean demonstrate that the resources were used wisely?
The final discussion Sean had was with the director of the residence life department, Casey. While Casey was generally supportive of the program, there were some doubts about the effort required to implement learning communities. Would the establishment of the learning communities be worth Sean's time? Are the outcomes Sean has identified consistent with the purposes of residence halls at the university? What about staff time in organizing room assignments for the participants? Wouldn't working with the Registrar's office and the two academic programs, business and elementary education, take a lot of time? How would the benefits of the program be communicated to senior administrators? Wouldn't recruitment of participants take a tremendous effort? And, most important, how would Sean determine if the program made a difference?
Sean is faced with a daunting number of questions related to assessment; without data, she cannot answer the questions posed by the various administrators who will influence whether the learning communities are implemented on a trial basis and what the future of these new units might be. We cannot be certain that Sean was ready for all of the questions raised by these administrators, even though learning communities are common on many campuses (see Benjamin, 2015).
Defining Assessment, Evaluation, and Research
Before we move further into this chapter, it is important that we are clear about what we mean by assessment. We will also compare and contrast the term assessment with evaluation and research, since the terms often are used interchangeably; to our way of thinking, however, each represents a very different purpose.
Assessment
We think the definition of the term assessment that we introduced in the first edition of this book is still relevant in contemporary student affairs practice. We defined assessment this way:
"Assessment is any effort to gather, analyze, and interpret evidence which describes institutional, departmental, divisional, or agency effectiveness" (Upcraft & Schuh, 1996, p. 18).
To this definition we would add program or initiative effectiveness. In the case of our example, an assessment of the learning community initiative at MCU would be conducted to determine the extent to which the program achieved its goals. It is also important to note that for the purposes of this book, we are interested in students in the aggregate. We will be addressing individual student learning to the extent described in Chapter 4. We would, in the context of this volume, be interested in the aggregate scores of students who might have taken the College Senior Survey (http://www.heri.ucla.edu/cssoverview.php) or the National Survey of Student Engagement (http://nsse.iub.edu/) if the instrument measured an aspect of the student experience pertinent to the study being conducted.
Effectiveness, for the purpose of this definition, can take on many dimensions. Most important, we think of effectiveness as a measure of the extent to which an intervention, program, activity, or learning experience accomplishes its goals, frequently linked to how student learning is advanced. Goals will vary from program to program but typically they are linked to the goals of a unit, the division in which it is located, or the goals of the institution. So, for example, at a commuter institution with no residence halls, the development of community as an institutional goal might have a different definition than the development of community at a baccalaureate college where nearly all students live on campus.
Evaluation
We also defined the term evaluation in the first edition of this book, but we think the definition needs a bit of updating, and for that we rely on the work of Suskie (2009). We defined evaluation, in effect, as the use of assessment data to determine organizational effectiveness. Suskie provides a more nuanced definition of evaluation by asserting "…that assessment results alone only guide us; they do not dictate decisions to us" (p. 12). She adds that a second concept of evaluation is that "…it determines the match between intended outcomes…and actual outcomes" (p. 12). In our LC example, we might learn that participation in the learning community programs does not result in increased retention but that participating students earn a higher grade point average at a statistically significant level. If the LCs were established with a goal of improving retention and that did not occur, the higher GPAs may or may not be sufficient evidence to determine that the LCs should continue.
Suskie (2009) adds that evaluation also "…investigates and judges the quality or worth of a program, project, or other entity rather than student learning" (p. 12). We might find, for example, that participation in the LCs resulted in improved retention for the participants. But suppose that when all the costs were tallied in our case study, the program cost $8,990 per student. It is important to note that the resources of MCU are modest; with 40 students proposed to participate in the programs (20 in the education LC and 20 in the business LC), the aggregate cost of $359,600 is likely far more than the university's budget could sustain. So, while the goal of the program (increased retention) was met, the costs were prohibitive. Strictly speaking, the data suggested that the program was a success (retention was improved), so from an assessment point of view it should be continued; from an evaluation perspective, however, it should not (the program was cost prohibitive).
Research
Our experience is that student affairs educators can be worried by the thought of undertaking assessments because they think what they are contemplating is conducting a research study, similar to writing a dissertation as part of completing a doctoral degree or conducting a study that would form the basis for a manuscript submitted to an international journal with a rigorous acceptance rate. We submit that such is not the case with assessment. Rather, we assert that while research methods are used in the process of conducting an assessment, we are not advocating a level of rigor that would be required to complete a doctoral dissertation. Suskie (2009), again, is informative on this point: "Assessment…is disciplined and systematic and uses many of the methodologies of traditional research" (p. 14). She adds, "If you take the time and effort to design assessments reasonably carefully and collect corroborating evidence, your assessment results may be imperfect but will nevertheless give you information that you will be able to use with confidence to make decisions…" (p. 15).
We would like to identify several distinctions between assessment and research that further illustrate th...