Institutional Effectiveness

Office of Institutional Effectiveness (IE)

The mission of the Office of Institutional Effectiveness (IE) is twofold:

  1. To promote educational effectiveness throughout Berkshire Community College by collecting, analyzing, and disseminating data concerning students, faculty and staff, educational programs, administrative and support services, and institutional planning, policy development and decision making;
  2. To promote continuous improvement of student learning through assessment across all academic programs, by facilitating the assessment process and supporting all college departments in their assessment efforts.

Specific project in mind?

Please fill out our Data Request Form and we will follow up with you.

Data Request Form

Contact Us & Request Data

Do you need any help with:

  • Accreditation and Program Review?
  • Research and Survey Design?
  • Data Management and Data Analysis?

Contact Us

Assessment Guidance

First Things First

You are not alone. The Office of Institutional Effectiveness supports all institutional assessment activities. Whether you are brand new to assessment or a grizzled veteran, and whether you need a lot of help or just a little, IE is a resource available to you.

The Importance of Assessment

How do you know that the things you are doing are working? This question is the essence of assessment. It is best to think of assessment as a continuous cycle. The cycle begins with analysis of evidence (Pre), moves to action designed to effect change (Intervention), and then returns to analysis of evidence to determine the efficacy of that action (Post) and identify the need for any further action, which would start another cycle. By measuring at the beginning and end of the process, we can understand the effect our actions have. When we assess our activities, we can move beyond feeling like we accomplished our work to knowing that we did.

  • What is a KPI?

    A Key Performance Indicator (KPI) is a measurable value used to evaluate an outcome. We often use KPIs such as Graduation Rates, Retention Rates, and Course Pass Rates to evaluate the overall work of the college, but KPIs can be as grand or as granular as the situation demands. Here are a few tips for selecting meaningful KPIs:

    • Select KPIs that match the scale of the work. For instance, the Library may decide that the number of students served is a KPI they want to focus on instead of graduation rate. We sometimes refer to this as “taking the pulse close to the heart.” KPIs like graduation rate are the culmination of hundreds (if not more!) of factors and should only be used to assess the college at a holistic level.
    • Select KPIs that you cannot directly manipulate. Continuing with the Library example, the Library could decide to measure its success by the number of books in its collection. The problem with this approach is that it does not account for how useful the books are or how they enhance the collection. The Library could buy one thousand copies of the same book and this metric would go up, but the increase in service to students would be minimal. The Library could also choose to buy cheaper (so it can buy more) but less useful books. Changing behavior solely to increase a KPI without regard for the real-world impact is known as “dial turning” and is a pitfall of selecting KPIs that you can artificially control.
    • Select KPIs that are reliable. Every metric has an inherent amount of instability, known as variance. Metrics with low variance are stable across time and/or different groups and are good candidates for KPIs. When there is little natural fluctuation, it is easier to see if the changes you make are having an effect. Other metrics have high variance, with significant changes from year to year or group to group and no simple underlying cause. This happens most often when groups are small and a shift of just a few members can tip the scales, or when outcomes rely on numerous, complex inputs that are difficult to isolate. Selecting stable metrics gives confidence that any changes you see are a product of your work and not natural fluctuation in the data.
    • Understand the difference between Lagging and Leading indicators. Those familiar with Formative vs. Summative Assessment will recognize the dynamic at play here. The aptly named Lagging indicators are measured at the end of an event and describe what happened in the past. They are usually easy to measure but don’t provide the opportunity to change past outcomes. Leading indicators are measured early in an event, sometimes even before it occurs. They can be more difficult to measure, but they provide an opportunity to influence the outcome. Both types of indicators are useful for different purposes; be sure you understand which one you are using and why.
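
    The reliability tip above can be sketched numerically. Below is a minimal Python sketch, using made-up hypothetical rates (not actual BCC data), that compares the year-to-year stability of two candidate KPIs via the coefficient of variation: the lower the value, the more stable, and therefore more reliable, the KPI.

```python
import statistics

# Hypothetical year-over-year values for two candidate KPIs (assumed
# data, not real BCC figures): a course pass rate measured on a large
# group vs. a completion rate for a small program.
pass_rate = [0.74, 0.75, 0.73, 0.76, 0.74]           # large group, stable
small_program_rate = [0.40, 0.65, 0.45, 0.70, 0.50]  # small group, volatile

def coefficient_of_variation(values):
    """Relative spread: sample standard deviation as a share of the mean.
    Lower values indicate a more stable (more reliable) candidate KPI."""
    return statistics.stdev(values) / statistics.mean(values)

print(f"Pass rate CV:          {coefficient_of_variation(pass_rate):.2f}")
print(f"Small program rate CV: {coefficient_of_variation(small_program_rate):.2f}")
```

    A rule of thumb under this sketch: if a metric's relative spread dwarfs the change you hope to detect, it will be a poor KPI for that work.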
  • On Benchmarks

    Benchmarking measures BCC against its peers. Many of the large institutional reports available have built-in benchmark cohorts for comparison. The IPEDS and VFA systems both include benchmark groups of national peers selected using Carnegie Classification (more on Carnegie Classification below). Most reports will have a section listing peer institutions and the criteria used to select them.

    However, some do not, which may prompt you to create your own benchmark group from the data available to you. When deciding which institutions to use for benchmarking, it is important to select schools that are like us in the ways that matter to the data being studied. While criteria can vary depending on the data set, you will generally want to select schools with matching Carnegie Classifications. For BCC, that would be Small (<1000 FTE), Public, 2-year, and Rural. Within the MA state system, the schools matching these criteria are Greenfield CC and Cape Cod CC.

  • On Projections/Goals

    You may be asked to make projections or set goals for the future. Remember that projections and goals are targets to aim for, but that you probably won’t hit all of your marks for a myriad of reasons. After all, if we could accurately predict the future we would all be lottery millionaires. That said, there are a few things you can do to improve your projections. If we think of projections as a continuum from “Guessing” to “Certainty”, we hope to move away from Guessing and toward Certainty using a few tips:

    • Use a strong rationale for your projection/goal. If you project that a student outcome will increase by 5% as a result of a project, you should know why. Picking 5% because it is a satisfyingly round number is a weak rationale. Picking 5% because three other schools that have tried the same intervention have each seen at least a 5% increase is a much stronger rationale.
    • Set appropriate goals. We would all love to see 100% completion, retention, and graduation, but that simply is not realistic. Setting unrealistic goals risks demoralizing and alienating the people you need to achieve that goal. Good goals can be a stretch, but they need to be attainable to keep people engaged. For internal work, there is usually nothing wrong with setting a goal that will be difficult (but not impossible) to achieve. Even if you fall a little short, that is still progress and should be celebrated! There may also be times when you want to set “safe” goals that you know you can achieve, such as for an outside accreditor or grantor where goals must be met to satisfy conditions. Use your best judgment.
    • Choose the right metric. Limit your projections/goals to things that could reasonably be affected by the intervention. For instance, making projections about an increase in graduation rate based on a change in new student orientation is probably reaching too far. There are hundreds, if not thousands, of events between orientation and graduation that affect that outcome. It would be better to look for changes in something like the SENSE or CCSSE surveys, which measure student connections in the first few weeks of a semester.
    • Beware statistical pitfalls. BCC is a small school, and we normally deal with small populations. As a result, we often see natural fluctuations in student behavior that can appear meaningful on the surface but don’t hold up under statistical scrutiny. Looking at full-time freshman retention rates from year to year, the rate swings from the high 40s to the mid 50s and back again with regularity. The high variance of this metric makes using it for projection problematic. This is an area where the Office of Institutional Effectiveness can be a great help.
    • Use concrete language. The more concrete the language you use, the easier it is to assess. Words like “increase,” “decrease,” “add,” and “eliminate” are strong words that aid in assessment. Soft language, like “foster,” “encourage,” “promote,” “strive,” and “help,” is very difficult to measure and makes assessment much more difficult.
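
    The small-population pitfall above can be illustrated with a short simulation. This Python sketch (assumed numbers, not BCC data) draws ten “years” of retention outcomes for a cohort of 80 students whose true retention probability never changes; the observed rate still swings by several percentage points purely by chance.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Assumed values for illustration: a true retention probability of 0.50
# measured on cohorts of 80 students across ten "years".
TRUE_RATE = 0.50
COHORT_SIZE = 80

rates = []
for year in range(10):
    # Each student is independently retained with probability TRUE_RATE.
    retained = sum(random.random() < TRUE_RATE for _ in range(COHORT_SIZE))
    rates.append(retained / COHORT_SIZE)

# Nothing about the underlying process changed, yet the observed rate
# fluctuates from year to year on sampling noise alone.
print([f"{r:.0%}" for r in rates])
print(f"Range: {min(rates):.0%} to {max(rates):.0%}")
```

    The takeaway mirrors the bullet above: before projecting a change for a small cohort, check whether the change you expect is larger than the swings the metric shows when nothing is done at all.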
  • Data Sources

    There are a number of data sources available for measuring student outcomes, each with its strengths and weaknesses. The tables below list several data sets that are available to the college community. While these data sets certainly cannot answer every question, they are a great place to start looking at the college.

    The PMRS has been singled out by the MA Commissioner of Higher Education as of particular interest in the Strategic Planning process.

    | Name                      | Target Population | Frequency | Governing Body | Provides Benchmarks | Type of Data      |
    |---------------------------|-------------------|-----------|----------------|---------------------|-------------------|
    | IPEDS                     | Students          | Annual    | Federal        | Yes                 | Outcomes          |
    | PMRS (HEIRS)              | Students          | Annual    | State (MA)     | No                  | Outcomes          |
    | VFA                       | Students          | Annual    | Independent    | Yes                 | Outcomes          |
    | NECHE Data First Forms    | Students          | Annual    | NECHE          | No                  | Outcomes          |
    | SENSE/CCSSE               | Students          | Biennial  | Independent    | Yes                 | Early Connections |
    | Great College to Work For | Staff             | Biennial  | Independent    | Yes                 | Satisfaction      |

    | Name                      | Strengths                                                | Weaknesses                               |
    |---------------------------|----------------------------------------------------------|------------------------------------------|
    | IPEDS                     | Highly standardized                                      | Focus on freshmen                        |
    | PMRS (HEIRS)              | Highly standardized; reflects Commissioner’s priorities  | Scope limited to Commissioner’s priorities |
    | VFA                       |                                                          | Focus mainly on most successful students |
    | NECHE Data First Forms    |                                                          |                                          |
    | SENSE/CCSSE               | Best measure of early connection                         |                                          |
    | Great College to Work For | Disaggregated by work group                              |                                          |