Assess and Refuel: The Academic Semester in Review

“The paradox of education is precisely this — that as one begins to become conscious, one begins to examine the society in which he is being educated.” – James A. Baldwin

Several years ago (pre-pandemic, 2019), I authored an article entitled “Are You Ready for All of Your Incoming Students?” At that time, I used the above quote from author and activist James A. Baldwin. His views on education continue to resonate deeply as higher education faces challenges that cut across the size, mission, and vision of individual colleges and universities.

To address at least one challenge ‘within the society in which [the student] is being educated,’ I offer a look at the importance of assessing our efforts.

What Time Is It?

It is November, and there should be a buzz of activity within institutions around assessing how the ‘launch’ of the academic year and fall semester has unfolded, as well as where the institution has ‘landed’ with successful student outcomes in these few weeks before the semester ends. Department chairs, deans, faculty, and institutional research office personnel should be working in tandem to discuss assessment data and what it portends for next steps; in other words, continuous improvement. Unpacking assessment data at the close of the semester, when faculty and students are away for the winter break, will accomplish absolutely nothing toward ‘refueling’ for the spring semester. Truth? Simple truth.

Where Do We Begin?

Department chairs, deans, and institutional research staff should begin by helping faculty, academic support staff, and student affairs staff understand the importance of both aggregation and disaggregation in making effective changes toward enhanced student success for the forthcoming semester. The reason for understanding this difference is actually quite simple, yet it is too often overlooked. Why so? Too often, the quicker fix in examining assessment data leans into [more] aggregation, which tends to yield limited success in continuous improvement.

Consider this…

When data is aggregated, it is most often used to obtain a ‘big picture’ of how a process, learning platform, student engagement strategy, and the like is moving forward. The focus, therefore, centers on generalized inclinations and broadly defined strategic planning. Conversely, the disaggregation of data drives in-depth analyses with a specific lens on student cohorts, achievement levels in specific classes, definitive time periods, financial platforms, specific professors, academic support staff, and student affairs support staff.
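For readers who work directly with the data, a minimal sketch in Python/pandas may make the distinction concrete. The course names, cohort labels, and pass/fail field below are hypothetical; the point is only how the same records yield one aggregated ‘big picture’ number versus disaggregated rates by course and cohort:

    import pandas as pd

    # Hypothetical fall-semester records: one row per student per course.
    grades = pd.DataFrame({
        "course": ["ENG101", "ENG101", "ENG101",
                   "MAT110", "MAT110", "MAT110"],
        "cohort": ["first-year", "transfer", "first-year",
                   "first-year", "transfer", "first-year"],
        "passed": [1, 1, 0, 0, 1, 0],
    })

    # Aggregated: a single institution-wide pass rate (the 'big picture').
    print(grades["passed"].mean())

    # Disaggregated: pass rates by course and cohort show where, and for
    # whom, students are struggling.
    print(grades.groupby(["course", "cohort"])["passed"].mean())

The aggregated figure may look acceptable even while the disaggregated view reveals, for instance, that first-year students are failing a single gateway course; it is the latter view that points to an actionable change.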

Is this important? Indeed, for it is this specificity that drives the refueling: the necessary changes to curricula (from slight changes to major ones), class scheduling, the redefinition of cohort groups, trajectories for academic support, trajectories for student engagement activities, and effective student financial processes, all in service of successful student learning outcomes.

The question now becomes…

What Kind of Assessment Data Exists?

What has been collected, documented, and tracked from various departure points?

Consider this as an example… If large percentages of first-year students are struggling to successfully complete core/general education courses during their first semester, how much assessment data exists that can catalyze continuous improvement for the upcoming semester?

While every institution is unique in its chosen assessment tools and how they are actually employed, it is still important to consider a few questions when the assessment process begins this month. It is even more important to consider these questions if various levels of assessment work are not taking place at your institution in the month of November:

  • What pre-assessment tools were employed?
  • When were the results of the pre-assessment reviewed to align with the targeted goals, objectives, and outcomes for the fall semester?
  • Who participated in the pre-assessment review (in terms of creating the assessment tools)? Why? Are other key individuals missing from this process?
  • Based on the aggregation and disaggregation of the pre-assessment data, were mid-point assessment tools deemed necessary? Why or why not?
  • Have post-assessments been completed? If not, when? Remember! It is November…
  • What cohorts [of students] were assessed? Why?
  • What is the history of your institution in the use of cross-referencing assessment data within all sectors of the college and/or university — academic units, academic support units, student engagement/student affairs units, residential and commuter life units, enrollment management/scheduling units, and fiscal/bursar affairs units — all of which impact student success?
  • Is there a process in place to make necessary changes — based on the findings from the assessment data — before the start of the next semester?
  • If not, how will this lack of process impact successful student outcomes according to the established goals, objectives, and outcomes for the academic year?

The goal of this conversation is to catalyze higher education leaders not only to think logically about the necessity of assessing student achievement but also to assess [their] timing in the use of assessment data to effect continuous improvement. It is important to avoid treating assessment as a compliance exercise for accreditation purposes; when the data sees limited or no use, the result is ineffectiveness in student success.

This article does not attempt to instruct on specific ways to implement, manage, and utilize your assessment data; however, specific strategies addressing the previous questions may be offered in future articles. Stay tuned…

Happy ‘refueling’ toward evolutionary success in student achievement.
