Standard 2. Assessment System and Unit Evaluation
The Education Unit at California University of Pennsylvania (CalU) uses its assessment system in a variety of ways. Changes in programming may be initiated by the Unit when analyzing data during TEAM Day activities, or they may be initiated during program reviews. An example of a program-initiated review is the discussion of certification exam problem areas. When reviewing overall certification examination scores, such as Praxis II and PECT, programs may elect to change components to better meet the needs of candidates seeking certification in that area. An example of a Unit-initiated program change involves field experiences. Responding to the needs of candidates who indicated, through responses on a variety of surveys, that additional field experiences were necessary to become more acclimated to the environment in which they would eventually teach, the Unit arranged earlier field experience opportunities for candidates. The Special Education Program not only began field experience opportunities during the candidates’ freshman year, but also adjusted scheduling creatively to allow students to spend longer periods of the day in their placements.
In response to the need to create a more comprehensive unit-wide data collection system to drive decision making, as well as in response to reviewer comments from the prior review, the unit developed the California Unit Professional Assessment System (Cal-U-Pass), which includes LiveText as a data management system. Since development, and to ensure validity and reliability, coordinated review and revision has taken place with multiple stakeholders both within the unit (including at the department level) and in the professional community at large to improve the assessment system. This assessment system is described in the Teacher Education Handbook and is aligned with the Conceptual Framework, the professional standards of all relevant SPAs, NCATE, the Commonwealth of Pennsylvania Standards for the Preparation of Professional Educators, and the unit and University mission statements. Cal-U-Pass supports unit improvement in that it was designed to aggregate and disseminate data from multiple assessment sources at the individual candidate and unit levels (in Initial and Advanced Programs) and at key transition points. Individual candidate data at the undergraduate level are gathered via Signature Assessments at the following transition points: acceptance as a pre-education candidate, admission to teacher education, recommendation for student teaching, completion of student teaching, graduation, and certification. Similarly, individual candidate data in the advanced programs are gathered at the following transition points: conditional acceptance, candidacy, recommendation for student teaching, completion of student teaching, comprehensive exams, and certification. Individual candidate data are analyzed at both the program and unit levels, resulting in program and unit revisions. Additionally, data are collected from faculty, administration, and members of the professional community (e.g., employers, field experience teachers, cooperating teachers, etc.) to inform program and unit operations. Cal-U-Pass is directed by the Dean of the CalU COEHS with input from the Associate Dean, the COEHS Governance Committee, and departmental committees and representatives. [Unit Assessment System Handbook See Standard 2.4.a]
The unit utilizes a secure web-based system called LiveText for the collection, maintenance, and utilization of data collected at key transition points in the initial and advanced programs. Candidate data are uploaded into LiveText and disseminated regularly to programs for review by the unit LiveText Coordinator. Programs review program data at department meetings and make revisions to programs to ensure quality. Program revisions and unit data are reviewed at bi-monthly meetings of the College of Education and Human Services Governance Committee which consists of the Dean, Associate Dean, Department Chairs, Graduate Coordinators, and the Director of Student Teaching. [Governance Meetings see Standard 2.4.h]
Further, the Education Unit conducts an annual Teacher Education Accreditation Meeting (TEAM) Day to review all unit data and to make unit improvements. Multiple stakeholders attend this meeting, including faculty, administration, students, and public school personnel. Examples of agenda items from October 2011 include data analysis of GPA and candidate dispositions, student teaching evaluations, the portfolio and its effect on student learning, the professional development program, conceptual framework survey results, admission to teacher education appeals and Praxis I, and unit operations. The structure of TEAM Day includes necessary training of attendees and working subgroups that review and analyze data and create action steps to improve unit operations. All subgroup interactions and suggestions are transcribed so that necessary suggestions can be implemented. Finally, attendees are surveyed as to the perceived utility of this method of unit evaluation, and future TEAM Day activities are planned based upon the transcribed data. Following TEAM Day, all information is disseminated to departments to further inform program revisions. [TEAM Day Materials see Standard 2.4.h]
The Cal-U-Pass assessment system was designed around the Conceptual Framework for Teacher Education and Educational Specialists, which assesses three major areas: Knowledge, Professional Practices, and Professionalism. Each of these areas is assessed multiple times and by multiple stakeholders to ensure that CalU is providing quality programs and graduates. For initial and advanced programs, candidates are assessed at multiple transition points. Decision Point 1 is acceptance as a pre-education candidate (Initial) or conditional acceptance (Advanced). Data are recorded as to credits and GPA for initial programs and undergraduate GPA for advanced programs. Decision Point 2 is admission to teacher education (Initial) or candidacy (Advanced), where candidates apply to change from pre-education status to candidacy in a specific major (or into a graduate program). At this point, credits and GPA are assessed, as well as completion of the Professional Seminar Series. Additionally, scores from the Pre-Service Academic Performance Assessment (PAPA) obtained from Pearson (post April 2012) are compiled and reviewed. These data are maintained by the COEHS Dean and are disseminated through the Governance Committee to departments to make program decisions based on passing rates and scores in testing domains. [Governance Minutes 2013 February 18 see Standard 2.4.h] In designated introductory courses, the candidate completes the Conceptual Framework Survey (Level 1) and the Candidate Professional Disposition Instrument (Level 1). Simultaneously, the candidate identifies both a faculty member and a person who has a professional relationship with the candidate from outside the university to complete the Candidate Professional Disposition Instrument (Level 2). These instruments are located in LiveText. [Unit Key Assessments see Standard 2.4.b]
Decision Point 3 is recommendation for student teaching for both Initial and Advanced programs. The following data are collected at this transition point: GPA and Praxis II / PECT scores, nine Professional Seminars, the Conceptual Framework Student Survey (Level II), and two Candidate Professional Disposition Instruments (Level 2). At this decision point, the candidate must submit Signature Assessment 6, the Common Portfolio, for review. [Teacher Education Handbooks see Standard 2.4.f] This Signature Assessment is used by all programs and is based on Interstate New Teacher Assessment and Support Consortium (INTASC) principles as well as individual SPA principles. Additionally, CalU has developed, based on data analysis, three additional principles that a candidate must meet, for a total of 13 principles. Candidates must include two artifacts for each of the principles in the portfolio, which is housed in LiveText, and submit it to an assigned reviewer. Each principle can be scored as unsatisfactory, below expectations, meets expectations, or exceeds expectations, with a passing criterion of a total score of 26 and no score of 0 in order to be recommended for student teaching. Data are aggregated to inform program decisions. Decision Point 4 for both Initial and Advanced programs is completion of student teaching. The following data are collected at this point: the Conceptual Framework Student Survey (Level III), the Candidate Professional Disposition Instrument (Level 3), the Pennsylvania Statewide Evaluation Form for Student Professional Knowledge and Practice (PA 430 form), and cooperating teacher evaluations. Initial program candidates must also pass a departmental exit interview found in LiveText. Decision Point 5 is graduation. For Initial programs, candidates are recommended for graduation if all requirements from the above decision points are met. For Advanced programs, graduate candidates are required to successfully pass the comprehensive exams.
Each department determines the structure of its comprehensive exam; some departments utilize oral exams, others written exams or portfolio presentations. The final decision point is certification. Candidates are recommended for endorsement through verification of successful completion of all components of a program of study. Post-graduation, both Initial and Advanced students are asked to complete the Conceptual Framework Student Survey (Level IV), and employers are contacted and asked to complete the Conceptual Framework Student Survey (Level V) to assess knowledge, professional practices, and professionalism and to inform the programs and unit. All data from these instruments are collected in LiveText for dissemination and discussion.
Each program is assessed through the analysis of all data collected at the various decision points discussed above. Data housed in LiveText are aggregated to identify programmatic themes, issues, and solutions. Disaggregated data are also examined to fine-tune and individualize adjustments. Information is disseminated to programs on a continual basis through access to LiveText databases and through the Governance Committee. Summary data reports are prepared yearly for the Signature Assessments. They are reviewed at both the unit and program levels. Sample data reports include the following (along with those mentioned in previous parts of the report): Candidate Professional Disposition Data Levels 1-3 [See Standard 1.3.e], Conceptual Framework Survey Data Levels 1-3 [See Standard 1.4.d], and Alumni [Standard 1.4.i] and Employer Surveys [Standard 1.4.j]. Individual departments schedule time during twice-monthly meetings to review data and make program decisions. Each program is also assessed through individual SPA principles and data collection in the form of Key Assessments. These Key Assessments are detailed in course syllabi. Data from all Key Assessments are collected each semester, posted in the University operating system, and reviewed both at the individual student level and at the overall score-per-standard level to make programmatic decisions and revisions. [Unit Key Assessments see Standard 2.4.b]
Unit operations are assessed by collecting information from a variety of audiences, including students, faculty/staff, employers, public school personnel, and graduates. At the individual course level, standardized student evaluation scores as well as written comments are used to make improvements to courses. The quantitative and qualitative data, along with adjustments, are included in each faculty member’s annual review. As mentioned previously, several instruments are used to gather feedback from outside the university and post-graduation. Unit data are also discussed at the COEHS Advisory Board meeting (held annually). This Board is comprised of various stakeholders both within the College and from the professional community. Finally, the Dean of COEHS has the responsibility to effectively manage, coordinate, and oversee the governance, planning, budget, personnel, and facilities of the unit. The Dean’s performance is evaluated yearly by the University Provost.
Data-based program and unit improvements are made on a continuous basis. This is due to the access granted by LiveText to the rich data collected through the Cal-U-Pass assessment system. For example, Pennsylvania has recently changed the licensure tests from Praxis to PAPA/PECT. Results from the “Modules” have shown a decline in scores, thus hindering student progression through programs. Data have been disseminated to each department related to these scores, and an analysis of specific content areas has been completed. Based on this analysis and department discussions, course syllabi are being revised to ensure inclusion of the relevant content knowledge, and review sessions have been created and are currently offered to prepare students for these new tests. The following paragraphs detail a sample of additional program improvement decisions.
The Technology Education (TED) undergraduate program has made changes based on professional program data to update advisement sheets every two years, drawing on nationwide technology and engineering education teacher preparation data, PDE certification requirements, and NCATE accreditation requirements. Further, syllabi in certain classes (TED 100, 300, 450, 451) now include more rigorous instruction in assessment and pedagogy. Rubrics and assignments in TED 462 (student teaching practicum) have been revised to reflect current issues facing student teachers in lab-based programs.
The Technology Education (TED) undergraduate program has also developed tutorial sessions based on test data (formerly Praxis, currently PAPA). These include test preparation sessions for all students to facilitate the transition from Praxis I to PAPA, sessions for upper-level students to facilitate the change from the Praxis II Technology Education test (0050) to its newest version (0051), and sessions for upper-level students to facilitate success on the Praxis II Fundamental Subjects test (0511).
A major focus of the Early, Middle, and Special Education (EMS) Department, based on data review, has been field experiences. See Standard 3 for a detailed description. Blocked course sequences were introduced by the Special Education Program and have been implemented in the entire department as of 2009. Revisions to this model are made yearly in response to student and mentor teacher feedback and Pennsylvania Department of Education mandates. EMS redesigned mentor teacher evaluations for students in field courses, based on the need to connect to NAEYC and AMLE standards. Currently, EMS is proposing a new reading course for the PK-4 program, to include content area reading instruction, based on low PECT scores.
Next, analysis of data from Cal-U-Pass yielded a finding that Secondary Education students were scoring lower in their portfolios on Principle Five, related to classroom management, than on other principles. Consequently, program faculty rearranged the course sequence so that the classroom management course will be taken prior to student teaching (and portfolio presentation). Social studies education students were scoring lower on Principle Seven, related to instructional planning, when compared to other secondary students. The instructor for that course was not using the departmental formats and policies. This led to a change in instructor, and the scores are rising to meet the levels of the other content areas. A new course on standards-aligned instruction was placed at the beginning of the secondary education programs so that students begin their studies with Understanding by Design, standards, and instructional design as a background for all of their other courses. According to the GPA calculators, students had poor grades in one particular course more often than in others. A review of that course found it neither necessary to the secondary education standards nor appropriate for secondary education students. It has been replaced by a different course on the newer degree sheets. Finally, mathematics education students historically struggled with the Praxis II test. A review of scores on the sub-areas of the test identified lower-level math concepts as a problem area. Investigation found that the students had taken all of their coursework in trigonometry, pre-calculus, and similar concepts in high school and started directly with calculus at the college level. Thus, the Calculus IV requirement has been removed on the newer degree sheets to allow students to take college courses in those topics where they had gaps and needed more in-depth preparation.
Additionally, there are several notable examples of data-based programmatic decision making in the advanced programs. For example, the IRA Standards for Reading Specialists (2010) define three roles for which reading specialist candidates should be prepared: 1) reading teacher working with struggling readers, 2) mentor and/or coach to classroom teachers supporting teacher learning, and 3) literacy leader developing, leading, or evaluating school or district preK-12 reading and writing programs to meet the needs of all students. When writing the SPA Report for the RSP Program, it became evident that the program thoroughly addressed the first role but did not thoroughly address the other two: the reading specialist as classroom teacher mentor/coach and the reading specialist as a school and district literacy leader. In response to this program self-evaluation, two major changes have occurred. First, the RSP 702 Literacy Assessment course now includes a field component in which candidates must work with a classroom teacher to analyze classroom assessments, make instructional recommendations to address patterns of need, and demonstrate at least one research-based instructional strategy to address the identified need. The purpose of this course revision was to address the second role of a reading specialist: supporting teacher learning as a mentor/coach. Second, in Spring 2013, the syllabus of record for RSP 706 was totally revised to address the third role of a reading specialist: literacy leader in schools and districts PK-12. The revised course, RSP 706 Literacy Leadership, addresses issues related to leadership, adult learning, and school- or district-wide data-driven instructional planning.
The course includes 20 hours of field experience in which candidates must collaborate with building leaders and reading specialists to analyze school literacy assessment data, identify an area of need, develop a plan of action, and plan a staff development session to equip teachers to implement that plan of action with research-based instruction. This revised syllabus is currently moving through the approval process with the intent of implementing the revised course beginning Fall 2014.
In the school psychology advanced program, outcomes data are reviewed yearly on seven different measures to evaluate program effectiveness and measure student learning. Complete results for three of the seven measures are also submitted to the University Outcomes Committee each year (as is true of all programs). One example of using assessment data to inform instruction is the School Psychology Praxis Examination results. Aggregate scores in each subtest are analyzed, and program adjustments are made when scores are below average compared to the national normative sample. In comparing the 2012, 2011, and 2010 results, students in 2012 scored lower in Area IV, Consultation and Collaboration. The group mean in this area fell from the average to the below-average range. Area IV was identified for monitoring in 2010; however, in 2011, results reflected student improvement. Given the current drop in Area IV, it is again suggested that this area be monitored, and it is further recommended that coursework in this domain be reviewed and adjusted. As a result, the Program Coordinator has selected a new text for PSY 756 Consultation and Group Process and is working with a new instructor to revise the course to ensure consistency with NASP expectations.
As a result of collecting and sharing the data located within the system, the Unit, the programs within it, and individual faculty work to better meet the specific needs of students within each of the programs.
The following were identified as Areas for Improvement (AFIs) during the last NCATE visit in 2006.
The unit has not completed data driven studies of reliability, validity, or bias issues on their various instruments.
Overall, the unit is continuing to take effective steps to eliminate bias in assessments and to establish fairness, accuracy, and consistency in its assessment process and in program and unit operations. All data are securely and safely stored, adhering to student privacy and FERPA standards. All candidates, initial and advanced, are informed of requirements for completion of programs at the time of admission and, through meetings with their advisors, are fully informed of progress at each decision point to provide ongoing feedback. Candidates are also provided individual course syllabi and rubrics explaining embedded assessments and expectations. These syllabi and rubrics (including Key Assessments) have been standardized and are readily available to all candidates and faculty. [Key Assessments see Standard 1.4.c]
The issue of the validity of instruments, including both unit instruments and program instruments such as detailed rubrics, has been addressed in a number of ways in the Cal-U-Pass system. First, all instruments are aligned with national, state, and corresponding SPA standards. Reviews of instruments at multiple levels have been conducted to ensure construct validity. Instruments were (and continue to be) reviewed by faculty, administrative bodies, and professionals in the field in an effort to provide transparency of the process, with all protocols and reports readily available. Multiple measures are administered at multiple times to ensure reliability. For example, the Conceptual Framework Survey and the Dispositions Survey are administered several times at key transition points throughout a candidate’s program and are completed by different key individuals in an attempt to triangulate the data and ensure reliability of results. This practice also gives candidates the opportunity to acquire the knowledge, skills, and dispositions that are being evaluated. Those data are compiled and analyzed at the individual candidate level to assess reliability. A further check of reliability is evidenced by the use of the Common Portfolio in the advanced programs, for example. In order to successfully pass the comprehensive exam (Decision Point 5 for advanced programs), the graduate candidate must present the Common Portfolio to a panel of three faculty, who individually score the standards. After the presentation, the faculty compare individual scores, discuss any items on which scores differ in order to reach agreement, and provide a final score, thus ensuring inter-rater reliability.
In an effort to specifically address fairness in assessment, several steps have been taken beyond what is mentioned above. Discussions between faculty and field/mentor teachers, student teaching supervisors, and school personnel occur each semester to avoid any bias in evaluation of the candidates. Finally, due process procedures are available for candidates at the University and at the unit and departmental levels to settle any disagreements with evaluation at the transition points. The unit, through the Dean’s office, maintains formal candidate complaints/appeals as well as mediation of the complaints and resolutions. [Appeal for Teacher Education Window, Appeal for Grade Decisions, & Office of Social Equity Policy see Standard 2.4.f]
The collection of data for the initial teacher certification candidates is primarily at the individual level
The purchase of LiveText has begun to address the lack of unit data. While programs individually and the unit as a whole have always made data-based decisions, the process by which this has occurred has not been readily transparent in the past, nor has there been a formal system in place to collect, analyze, or maintain unit-wide aggregate data. LiveText has provided a platform by which unit data, as well as individual data, are collected and systematically reviewed, resulting in both program and unit revisions. It has also made the data available to all stakeholders at the individual, program, and unit levels.
The systematic use of data for programmatic change is limited
As indicated in the narratives in Section 2.2.b of this report, the authors have provided substantial examples of how the Unit and the various programs have made programmatic changes based on the review of collected data. Also included in the narratives are examples of changes at the individual instructor or class level. CalU continues to be dedicated to the educational needs of its candidates and to the needs of the P-12 students who work with the candidates. Therefore, the living program that is the Education Unit will continue to evolve as the years progress.
Standard 2.4.a Unit Assessment System
Standard 2.4.b Admission Criteria and Data
Unit Key Assessment – Praxis I Averages
Standard 2.4.c Unit Assessment Statement – Fair, Consistent, Bias Free
Standard 2.4.d Unit Assessment Statement – Collection and Sharing of Data
Standard 2.4.e Data & Summary on Key Assessments
Reference Standard 1.4.c Artifacts
Reference Standard 1.4.d Artifacts
Reference Standard 1.4.f Artifacts
Standard 2.4.f Managing Candidate Complaints
Standard 2.4.g Candidate Complaints
Candidate Complaints Available During Onsite Visit
Standard 2.4.h Changes to Courses, Programs, & Unit
TEAM Day 2011