Spring 2012 IFP Assessment Report: Student Learning Outcomes


Between 2009 and 2011, efforts to assess the Intellectual Foundations Program (IFP) at FAU focused exclusively on written communication and mathematical reasoning competencies. The procedures used to assess these competencies were idiosyncratic to the departments and programs overseeing the courses in those Foundations, and the results were examined solely in light of the unique concerns of those departments and programs. They were not fully analyzed in relation to the IFP learning objectives per se, nor were they evaluated against a common performance criterion.

For these reasons, a new assessment plan for the Intellectual Foundations Program was developed for the Spring 2012 semester. The plan was purposefully designed to allow FAU to accomplish several goals. The first goal was to examine ALL competencies represented across ALL foundations within the IFP in order to gather baseline information about student performance on both the overall IFP competencies and the Foundation-specific competencies. The second goal was to evaluate the adequacy of using direct assessment measures derived from student performance on course components that were already embedded within the structure, content, and pedagogy of each IFP course (e.g., assignments, exams, quizzes, classroom activities). Finally, a uniform benchmark of “average” or “C” level performance for student achievement was established against which all courses and learning outcomes could be compared.

The Spring 2012 assessment targeted the seventy-seven IFP courses offered by twenty-eight departments across eight colleges. A representative section of each course (i.e., a 3-credit-hour lecture or a 1-credit-hour laboratory) was chosen from the total of 615 lecture, laboratory, and discussion sections that appeared on the semester schedule. The sampled sections were chosen via random sampling modified with poststratification to ensure that the selected samples featured typical enrollment size, student type, campus location, delivery mode, and instructional staff. Five of the 77 IFP courses offered in Spring 2012 could not be assessed because of various issues (e.g., lack of representativeness per the above criteria, miscommunication with the instructor). Thus, a total of 72 IFP courses (both lectures and laboratories) were included in the assessment.
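To make the sampling step concrete, the sketch below shows one way such a poststratified draw could be implemented. It is a hypothetical reconstruction in Python: the section records, attribute names, and the “typical enrollment” criterion are invented for illustration and do not reproduce the actual Spring 2012 procedure or data.

```python
import random
from collections import defaultdict

# Hypothetical section records; the real Spring 2012 schedule data are not shown here.
sections = [
    {"course": "PSY 1012",  "enrollment": 250, "campus": "Boca Raton", "mode": "lecture"},
    {"course": "PSY 1012",  "enrollment": 45,  "campus": "Jupiter",    "mode": "lecture"},
    {"course": "GLY 2010L", "enrollment": 24,  "campus": "Boca Raton", "mode": "lab"},
    # ... one record per scheduled section (615 in Spring 2012)
]

def pick_representative_sections(records, rng=random.Random(2012)):
    """Draw one section per course at random, restricted (where possible) to
    sections that look 'typical' for that course. Typicality here is a
    made-up stand-in: enrollment within 50% of the course median."""
    by_course = defaultdict(list)
    for rec in records:
        by_course[rec["course"]].append(rec)

    chosen = {}
    for course, secs in by_course.items():
        sizes = sorted(s["enrollment"] for s in secs)
        median = sizes[len(sizes) // 2]
        typical = [s for s in secs if abs(s["enrollment"] - median) <= 0.5 * median]
        chosen[course] = rng.choice(typical or secs)  # fall back to all sections
    return chosen

for course, sec in pick_representative_sections(sections).items():
    print(course, sec["campus"], sec["enrollment"])
```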

Procedures

A special assistant (Dr. Jennifer Peluso) to the Associate Provost for Assessment and Instruction (Dr. Janet Cramer) was appointed in February 2012 to serve as the Assessment Coordinator for the Intellectual Foundations Program (see email memo from Associate Provost Cramer to the College Deans here). An assessment plan was developed in consultation with the university’s Team for the Assurance of Student Learning (TASL), a group consisting of assessment coordinators within each college and the University Libraries (see sample email memo from IFP Assessment Coordinator Peluso to TASL here). The instructors of record for the sampled courses, their department chairpersons, and their deans were all notified of their selection for inclusion in the assessment (see the sample email memo to the College of Education Dean and Chairpersons here). Chairs coordinated the participation of their faculty in this process (an example from the Department of Music in the Dorothy F. Schmidt College of Arts and Letters is provided here).

Because the time available to conduct the assessment was limited, instructors were asked to identify course measures for the IFP SLOs. Instructors were given the choice to either select specific questions, prompts, or excerpts from assignments, exams, and learning activities already incorporated into their courses or to develop a separate IFP assessment tool in consultation with the Assessment Coordinator. One hundred percent of the faculty chose to use materials they had previously developed for their courses; they submitted these SLO measures (and any associated grading rubrics) to their college TASL representatives and assessment committees for review. All materials were then forwarded to the Assessment Coordinator. Examples of the kinds of proposals for assessment measures that were submitted by faculty are shown in Table 1 below. Once all proposed assessment measures and rubrics were reviewed and approved by college TASL representatives, college assessment committees, and the Assessment Coordinator, IFP course faculty scored their students’ course work according to their predetermined rubrics and grading schema. Individual course materials for this IFP assessment are retained offline in the Undergraduate Studies Office.

Table 1. Examples of Proposed Course Measures for Intellectual Foundations Program Student Learning Outcomes Submitted by Faculty

Course | Department/College | Instructor | Foundation
DAN 2100 Appreciation of Dance | Theatre and Dance/Arts and Letters | Dr. Clarence Brooks | VI. Creative Expression
NSP 1195 Reflections From the Other Side of the Bed | Nursing | Dr. Valerie Grumme | I. Written Communication
GLY 2010L Evolution of the Earth Lab | Geosciences/Science | Mr. Christopher Makowski | III. Science and the Natural World
PSY 1012 General Psychology | Psychology/Science | Dr. Krystal Mize | IV. Society and Human Behavior
URP 2051 Designing the City | Urban Planning/Design and Social Inquiry | Dr. Asli Ceylan Oner | V. Global Citizenship

(The submitted course materials themselves are retained offline in the Undergraduate Studies Office.)

The grading rubric for only one course was judged not to be in alignment with the IFP SLOs: a weekly two-page essay assignment on the topic of “Ethics” from PHI 2010 Introduction to Philosophy (V. Global Citizenship), whose rubric was included in the PHI 2010 course syllabus. In order to use this assignment for the IFP assessment, a general assessment rubric (covering both the General IFP SLOs and the specific SLOs for Foundation V) was developed by the IFP Assessment Coordinator and refined by a faculty review team of four faculty in the Dorothy F. Schmidt College of Arts and Letters, from the Departments of English, History, Philosophy, and Sociology (see the revised assessment rubric here). All review team members were normed in their use of the rubric. Of the 152 essays submitted by the instructor for the assessment, each review team member independently evaluated 10 essays selected at random from the class set, for a total sample of 40 essays (26.3%).
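The arithmetic behind that sampling rate is simply 4 reviewers × 10 essays = 40 of 152 essays, or 26.3%. A minimal sketch of this kind of assignment (essay IDs and reviewer initials are placeholders, not the actual review records):

```python
import random

rng = random.Random(0)            # fixed seed only so the example is reproducible
essays = list(range(1, 153))      # the 152 submitted essays, identified 1..152
reviewers = ["A", "B", "C", "D"]  # four normed review-team members (placeholders)

# Draw 40 distinct essays and deal 10 to each reviewer,
# so the total sample is 40/152 = 26.3% of the class set.
rng.shuffle(essays)
assignments = {r: essays[i * 10:(i + 1) * 10] for i, r in enumerate(reviewers)}

sampled = sum(len(batch) for batch in assignments.values())
print(f"{sampled / 152:.1%} of essays sampled")  # -> 26.3%
```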

At the end of the Spring 2012 semester, all instructors of sampled IFP courses submitted to the Assessment Coordinator the results obtained from their embedded assessment measures. The raw results for each IFP course were compared to a general benchmark of “average” or “C” level performance (e.g., 70% correct, 70% of points earned, 70% of the class reaching a level of performance specified by the instructor, etc.). The following language was used to standardize the evaluation of student performance across all IFP Foundations and learning outcomes, as well as across all courses within the IFP; this was especially important since many instructors utilized embedded measures that were idiosyncratic to their courses:

  • “Below Benchmark”: results were below 70%
  • “Meets Benchmark”: results were between 70% and 79%
  • “Exceeds Benchmark”: results were 80% or above
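In code, this standardization reduces to a simple threshold function; a minimal sketch (the actual tabulation method used by the coordinator is not specified in this report):

```python
def benchmark_category(percent: float) -> str:
    """Map a course result (e.g., % correct, % of points earned, or % of the
    class reaching an instructor-specified level) to the standard language."""
    if percent >= 80:
        return "Exceeds Benchmark"
    if percent >= 70:
        return "Meets Benchmark"
    return "Below Benchmark"

assert benchmark_category(69.9) == "Below Benchmark"
assert benchmark_category(70) == "Meets Benchmark"
assert benchmark_category(83) == "Exceeds Benchmark"
```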

These benchmark results were summarized into a full report with tables for the aggregated results across all IFP courses as well as for each of the individual Foundations.

Course-level results were communicated back to instructors, department chairpersons, and deans with a request for a departmental interpretation of the results, a proposed plan to use those results for continuous program improvement, and supporting documentation/evidence. Two memos communicated guidelines and processes for these steps: one distributed on June 1, 2012 (linked here) and one distributed on June 5, 2012 (linked here). Some examples of materials that departments submitted to implement and document program improvement are listed in Table 2 below.

Table 2. Examples of Materials Submitted by Instructors and Departments to “Close the Loop” in Response to IFP Assessment Results

Course | Department/College | Instructor | Foundation | Material
ANT2511 Introduction to Biological Anthropology (lecture) | Anthropology/Arts and Letters | Dr. Kate Detwiler | III. Science and the Natural World (Lectures) | Final Report (includes instructor narratives); Minutes from 6/6/12 meeting with Dept. Chairman
BSC1010L Biological Principles Laboratory | Biological Sciences/Science | Ms. Geri Mayer | III. Science and the Natural World (Laboratories) | Final Report (includes instructor narratives); Example of revised quiz content and question types
SYG2010 Social Problems | Sociology/Arts and Letters | Dr. Gregory Lukasik | IV. Society and Human Behavior | (retained offline)
THE2000 Appreciation of Theatre | Theatre and Dance/Arts and Letters | Dr. Desmond Gallant | VI. Creative Expression | (retained offline)
WOH2022 History of Civilization II | History/Arts and Letters | Dr. Claudia Dunlea | V. Global Citizenship | (retained offline)

The above procedures were followed for all IFP Foundations except Foundation I (Written Communication) and Foundation II (Mathematics and Quantitative Reasoning), which are described below.

I. Written Communication

All of the courses included in this IFP Foundation are part of the Writing Across the Curriculum (WAC) program at FAU, which conducts its own rigorous assessment process annually. The WAC assessment involves the evaluation of argument-driven papers written by all students enrolled in a randomly selected sample of WAC courses. Papers are evaluated with a standard university-wide rubric that includes 12 dimensions and 4 levels of competency; the rubric is posted on the WAC program web site at www.fau.edu/WAC/docs/AssessmentRubric.doc (also linked here). In order to avoid overburdening the faculty teaching IFP Foundation I courses with assessment responsibilities for two different programs, a separate IFP assessment of Foundation I was not conducted. Instead, the WAC Director (Dr. Jeffrey Galin) produced an IFP assessment matrix wherein each of the twelve domains of the WAC assessment rubric was uniquely associated with one of the four student learning outcomes identified for Foundation I of the IFP. (Click here to view this IFP-to-WAC map.)
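The shape of such a matrix can be sketched as a simple lookup table. The dimension names below (apart from “Academic Tone” and “Citation Format,” which are mentioned later in this report) and their SLO assignments are illustrative placeholders, not Dr. Galin's actual map:

```python
# Hypothetical shape of the IFP-to-WAC map: each of the 12 WAC rubric
# dimensions is assigned to exactly one of the 4 Foundation I SLOs.
# Dimension names and assignments below are illustrative placeholders.
WAC_TO_IFP = {
    "Thesis/Focus":    "SLO 1: clear writing for specific rhetorical tasks",
    "Organization":    "SLO 1: clear writing for specific rhetorical tasks",
    "Academic Tone":   "SLO 1: clear writing for specific rhetorical tasks",
    "Use of Sources":  "SLO 2: respond critically to written materials",
    "Analysis":        "SLO 3: use writing to think critically",
    "Citation Format": "SLO 4: ethical use of external sources",
    # ... remaining dimensions, 12 in all
}

def ifp_scores(trait_scores: dict) -> dict:
    """Average 1-4 WAC trait scores within each mapped IFP SLO."""
    buckets: dict = {}
    for trait, score in trait_scores.items():
        buckets.setdefault(WAC_TO_IFP[trait], []).append(score)
    return {slo: sum(v) / len(v) for slo, v in buckets.items()}

print(ifp_scores({"Academic Tone": 3, "Organization": 2, "Citation Format": 4}))
```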

II. Mathematics and Quantitative Reasoning

The Department of Mathematical Sciences has conducted an assessment of key mathematical reasoning competencies in many of its lower division courses since 2009. It does so through the use of questions (selected by a departmental assessment committee) embedded within quizzes and exams administered in these courses. The department’s assessment committee meets annually to review the questions and results, and to refine the lower division curriculum based on those results. This work is regularly reported in the university Assessment Database.

For the Spring 2012 IFP Assessment, the Department of Mathematical Sciences produced a map showing how the Mathematics Assessment Questions align with IFP Learning Outcomes. This map is linked here.

Student Learning Outcomes Findings

Two sets of tables were generated from the results submitted by the IFP course instructors in the Spring 2012 semester. One set was generated from the raw qualitative and quantitative results; those tables are available here as an Excel workbook. The other set contains the benchmark results, aggregated across all IFP courses as well as for each of the individual Foundations; those tables are available here as an Excel workbook.

The findings for the IFP’s General Student Learning Outcomes will be presented first; results for the Foundation-specific Learning Outcomes will be presented afterwards. For the sake of continuity, the Results, Program Improvement, and Recommendations for each set of learning outcomes will be described together.

General Student Learning Outcomes

(It should be noted that the first of the General IFP SLOs, that students will “Demonstrate knowledge in several different disciplines,” was not assessed during the Spring 2012 assessment cycle.)

Table 3 below shows the number of courses found to meet or exceed the benchmark for the general competencies across all courses. Although the number of courses that actually measured each General SLO varied, the courses that did measure them were distributed across all Foundations within the IFP and in sufficient numbers to support aggregation.

Table 3. Number of Courses Meeting or Exceeding Benchmarks in General IFP Competencies Overall

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. The ability to think critically | 20 | 20 | 23 | 43 (68%)
2. The ability to communicate effectively | 14 | 6 | 25 | 31 (69%)
3. An appreciation for how knowledge is discovered, challenged, and transformed as it advances | 19 | 9 | 23 | 32 (63%)
4. An understanding of ethics and ethical behavior | 8 | 10 | 23 | 33 (80%)
Average | 15.2 | 11.2 | 23.5 | 34.8 (70%)
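For transparency, the benchmark percentages in Tables 3 and 4 follow directly from the counts: the denominator is the number of courses that measured the outcome (below + meeting + exceeding). A short Python check using the Table 3 rows (the row labels are abbreviations):

```python
rows = {  # (below, meeting, exceeding) counts from Table 3
    "Think critically":        (20, 20, 23),
    "Communicate effectively": (14, 6, 25),
    "Knowledge advances":      (19, 9, 23),
    "Ethics":                  (8, 10, 23),
}

for outcome, (below, meets, exceeds) in rows.items():
    at_or_above = meets + exceeds
    pct = at_or_above / (below + at_or_above)  # denominator = courses measured
    print(f"{outcome}: {at_or_above} ({pct:.0%})")
# Think critically: 43 (68%) ... Ethics: 33 (80%)
```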


These results show that the university as a whole is close to achieving an overall satisfactory level of performance in student achievement of the General Student Learning Outcomes. Achievement of these general competencies nevertheless varied greatly among the different foundations, particularly in the science courses, as did the number of courses that included measures of the general competencies.

Currently, there is no general definition of each of the IFP’s SLOs to which faculty may refer when selecting measures or interpreting their results. Moreover, some of the General SLOs are agnostic with respect to a specific context for the competencies they reference. For example, many faculty were unclear as to whether the General SLO about communication referred specifically to written communication or to any mode of communication (e.g., oral, graphical, etc.). There was also confusion as to whether the SLO about “ethics and ethical behavior” referred to academic integrity/Honor Code issues or to broader societal ethical issues and problems that may have been identified and addressed within a course (e.g., human participation in research, confidentiality, U.S. or international law, etc.). Anecdotal evidence therefore suggests that many faculty had difficulty selecting (with confidence) assessment measures for the General SLOs from their courses. For this reason, the results from this first cycle of IFP assessment should be interpreted with caution.

Foundation-Specific Student Learning Outcomes

The following table provides a summary of the benchmark achievement levels for the competencies identified for each of the individual Foundations within the IFP. It is evident that performance levels varied both within and between Foundations. The results for each Foundation are described separately below.

Table 4. Number of Courses Meeting or Exceeding Benchmarks in IFP Competencies Within Each Foundation

I. Written Communication

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. Produce clear writing that performs specific rhetorical tasks | 3 | 1 | 1 | 2 (40%)
2. Respond critically to a variety of written materials in order to position their own ideas and arguments relative to the arguments and strategies of others | 3 | 1 | 1 | 2 (40%)
3. Use writing not only to communicate but also to think critically—examining assumptions that underlie the readings and their own writing | 3 | 1 | 1 | 2 (40%)
4. Demonstrate an understanding of the ethical standards that apply to the use of external sources in one’s writing | 4 | 0 | 1 | 1 (20%)
Average | 3.2 | 0.8 | 1 | 1.8 (35%)

II. Mathematics and Quantitative Reasoning

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. Demonstrate an understanding of mathematical theories and their applications | 7 | 3 | 1 | 4 (36%)
2. Be able to identify and apply mathematical concepts most appropriate to solving quantitative problems | 7 | 3 | 1 | 4 (36%)
Average | 7 | 3 | 1 | 4 (36%)

III. Science and the Natural World (Lectures Only)

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. An understanding of the nature of science, including important principles and paradigms | 7 | 3 | 4 | 7 (50%)
2. An understanding of the limits of scientific knowledge and of how scientific knowledge changes | 5 | 4 | 4 | 8 (62%)
3. An understanding of the nature of scientific inquiry and its ethical standards, in particular how to pose questions and how to develop possible explanations | 4 | 1 | 7 | 8 (67%)
4. An ability to discern claims based on rigorous scientific methods from those based on illogical or incomplete scientific methods | 4 | 3 | 4 | 7 (64%)
Average | 5 | 2.8 | 4.8 | 7.5 (60%)

III. Science and the Natural World (Labs Only)

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. Demonstrate an understanding of how experiments are conducted | 2 | 2 | 3 | 5 (71%)
2. Be able to analyze resulting data | 2 | 3 | 3 | 6 (75%)
3. Be able to draw appropriate conclusions from such data | 2 | 4 | 2 | 6 (75%)
Average | 2 | 3 | 2.7 | 5.7 (74%)

IV. Society and Human Behavior

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. Be able to identify patterns of human behavior | 1 | 3 | 5 | 8 (89%)
2. Demonstrate an understanding of how political, social, cultural, or economic institutions influence human behavior | 4 | 0 | 6 | 6 (60%)
3. Understand key social science methods and the theoretical foundations behind these methods | 3 | 4 | 3 | 7 (70%)
4. Be able to apply social science methods to the analysis of social, cultural, psychological, ethical, political, technological, or economic issues or problems | 3 | 2 | 4 | 6 (67%)
Average | 2.7 | 2.2 | 4.5 | 6.8 (71%)

V. Global Citizenship (Global Perspectives Only)

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. Identify different individual, cultural, and national identities | 1 | 1 | 6 | 7 (88%)
2. Demonstrate an understanding of the economic, political, environmental and/or social processes that influence human action/interaction | 4 | 0 | 4 | 4 (50%)
Average | 2.5 | 0.5 | 5 | 5.5 (69%)

V. Global Citizenship (Western Identities Only)

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. Identify different individual, cultural, and national identities | 0 | 1 | 3 | 4 (100%)
2. Demonstrate an understanding of the economic, political, environmental and/or social processes that influence human action/interaction | 1 | 1 | 2 | 3 (75%)
Average | 0.5 | 1 | 2.5 | 3.5 (88%)

VI. Creative Expression

Learning Outcome | Below Benchmark | Meeting Benchmark | Exceeding Benchmark | Total Meeting or Exceeding Benchmark
1. Identify one or more forms/genres of creative expression | 1 | 1 | 7 | 8 (89%)
2. Demonstrate an understanding of the theory or methods behind the creative expression(s) | 1 | 3 | 5 | 8 (89%)
3. Demonstrate an understanding of the social, cultural, or historical context of the creative expression(s) | 0 | 3 | 7 | 10 (100%)
Average | 0.7 | 2.3 | 6.3 | 8.7 (93%)

I. Written Communication. The WAC program has developed a rigorous assessment program to determine whether courses the University has certified as writing intensive are indeed enabling students to improve their writing. Between ten and fifteen courses have been selected for assessment each fall and spring term since 2008 via a stratified random sample (approximately five courses during the summer), and students in those courses are asked to submit a first and final draft of an argument-driven paper written near the end of the term. Each May, between fifteen and twenty raters spend at least two days norming before the rating process begins. Ratings are made along twelve dimensions of writing quality (trait scores). These efforts are described in reports posted to the WAC web site at http://www.fau.edu/WAC/assessment/results.php.

The first full WAC assessment cycle collected student work from WAC courses for both 2008-2009 and 2009-2010. However, interface glitches, norming mistakes, and missing data compromised the reliability of the data, such that a valid baseline for WAC assessment could not be established. Those process issues were corrected for the 2010-2011 assessment cycle, in which approximately 325 papers were rated. The only statistically significant result from 2010-2011 was that papers from upper division courses outperformed papers from the lower division. The result was almost identical for 2011-2012, except for two lower division honors sections and one lower division Anthropology course.

Results from 2010-2011 showed that overall scores for students in ENC 1101 College Writing 1 were not statistically distinct from second-semester ENC 1102 College Writing 2 scores once the data were adjusted for the effects of course grade; the 2011-2012 scores showed the same result. Only one trait of the 12 on the assessment rubric (“Academic Tone”) had shown a statistically significant difference between College Writing 1 and 2 students, and that difference did not hold in 2011-2012, when a different trait (“Citation Format”) was the only one to show an apparent difference. In general, students with higher grades earn higher rubric scores regardless of whether they are in ENC 1101 or ENC 1102. Analyses also show that some content-driven courses (e.g., Nursing and Social Work) tend to produce better writing scores than others (e.g., Communication and Philosophy).

WAC assessment results have led to a series of specific actions undertaken by the WAC program as well as by the departments offering specific WAC-certified courses. For example, when results showed that students in ENC 1101 and ENC 1102 did not differ from each other in their writing scores, despite the fact that the courses are taken in sequence (there should be improvement from one semester to the next), the Writing Committee redesigned the ENC 1102 course to include new materials and pedagogies. These changes are described in the assessment reports posted on the WAC web site at http://www.fau.edu/WAC/assessment/results.php. Although findings in the last two cycles suggest that these pedagogical changes have made a difference in student writing performance, the WAC program will continue to collect data for another cycle or two before making additional changes to the ENC 1101/1102 curriculum.

Changes at the department level have also been implemented as a result of writing assessment findings. These changes have been facilitated via annual consultations between the Director of the WAC program and the departments teaching WAC classes; the consultations are held to review assessment results and to develop actions for improvement, and they have led to the submission of several formal proposals for WAC Development Grants. The WAC Development Grant program, a sample of successful proposals, and outcomes from these activities are described on the WAC program web site at www.fau.edu/wac. For example, in the last year:

  • The Department of Civil Engineering instituted a uniform rubric and training program for assessing report writing across all labs in its program.
  • Social Work developed, and continues to teach, a gateway WAC course for all incoming majors. Prior assessment results show that Social Work students, most of whom are transfer students, consistently outperform students in the lower division.
  • Nursing is currently engaged in a self-assessment to evaluate the quality of its students' writing and plans to develop a faculty training program to better prepare faculty to help their students write.

For the Spring 2012 assessment cycle, the WAC Director (Dr. Jeffrey Galin) produced the IFP assessment matrix described above, wherein each of the twelve domains of the WAC assessment rubric was uniquely associated with one of the four student learning outcomes identified for Foundation I of the IFP. The WAC-to-General-IFP Learning Outcomes Map is posted on the IFP Assessment web site, along with a report showing the findings of the reanalysis of the 2011-2012 WAC assessment data. Only 4 IFP courses of a possible 7 were included in the random stratified sample of WAC courses in the 2011-2012 assessment process. Three lower division courses showed nearly identical scores, in the low 2 range on the rubric's 4-to-1 scale (4 = highest).

The instructors of two courses in Foundation I (ENC 2452 and NSP 1195) assessed student achievement of the IFP Learning Outcomes independently of the WAC assessment process. The nursing course assessed both the General Student Learning Outcomes and the Foundation-specific Outcomes; the special topics English course (an honors Chemistry course) independently assessed only the General Student Learning Outcomes. The independent measures in the nursing course exceeded the benchmark for all of the Foundation-specific Learning Outcomes.

II. Mathematics and Quantitative Reasoning. Twelve of the 13 courses that comprise this Foundation are taught by faculty in the Department of Mathematical Sciences; the other (PHI 2102 Logic) is taught by the Department of Philosophy.

Benchmark results showed low levels of achievement among students in these IFP courses, and absolute levels of performance on embedded questions (raw data measured as % correct) in the Mathematics courses were also quite low. Students enrolled in the Philosophy course on Logic (PHI 2102), by contrast, met or exceeded the benchmark for all SLOs.

The Department of Mathematical Sciences has targeted specific areas of student weakness in its program improvement efforts. The departmental response to these results can be found here. For example, the department has noted a “deterioration of ‘proficiency’ in basic algebraic manipulation skills” among students in its courses over the last few years. Moreover, it is aware that some students circumvent the placement process (ALEKS exam) through various means and enroll in mathematics courses for which they are not prepared. To respond to these concerns, the department will implement a “First Day Quiz” in some classes in Fall 2012 to assess the correlation between students’ ALEKS scores and their prerequisite skills. Students with discrepant scores will be placed in lower-level math courses.
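A sketch of the planned ALEKS-versus-quiz comparison is shown below; the paired scores and the discrepancy threshold are hypothetical, since the department's actual analysis and cutoffs are not specified in this report.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical paired scores for one class section: ALEKS placement
# score vs. First Day Quiz score (both on 0-100 scales here).
aleks = [72, 55, 81, 40, 65, 90, 58]
quiz  = [70, 30, 85, 25, 60, 88, 35]

print(f"Pearson r = {correlation(aleks, quiz):.2f}")

# Flag students whose quiz performance falls far below what their ALEKS
# placement predicts; the 20-point discrepancy threshold is illustrative.
flagged = [i for i, (a, q) in enumerate(zip(aleks, quiz)) if a - q >= 20]
print("Students to review for possible re-placement:", flagged)
```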

Pedagogical changes have also been implemented in several mathematics courses to improve the scaffolding of student learning and quantitative reasoning. For instance: 

  • MAC 1105 (College Algebra): Because the lowest score (51 percent) was for a question dealing with the number of roots of a quadratic equation, faculty felt that more time was needed on the quadratic equation. Also, because their experiment with instructional models showed 10 percent higher grades for a 2 + 2 form of instruction (two hours of lecture and two hours of computer-based drill in a lab with the assistance of an instructor or peer tutors) compared with a more traditional 3 + 1 approach (three hours of lecture and one hour of instructor recitation), the department decided to continue the 2 + 2 sections.
  • MAC 1147 (Precalculus Algebra and Trigonometry): In response to finding a lower score in the most recent semester compared to the previous one (36 percent versus 54 percent), the faculty reviewed what was covered in the course and found that the math problems in the assessment questions had never been addressed in quizzes, examinations, or reviews in the most recent semester. They developed strategies to ensure coverage in the future.
  • MAC 2281 (Calculus for Engineers): Students in this course, reserved for engineering majors, consistently underperform students in MAC 2311, which is the same course for the general student population. To better motivate engineering students, faculty have changed textbooks and infused more applications into the course.

Finally, some of the Mathematics faculty are currently participating in a program conducted by FAU’s Center for eLearning (CeL) to certify faculty who have been trained in best practices in instructional design and online learning. These Math faculty are developing versions of their IFP courses for online delivery, and the department has already begun collecting data from its first “CeL certified” online course, MAC 1105 College Algebra.

The PHI 2102 Logic course (Philosophy) included in Foundation II was assessed for the first time in Spring 2012. Students in this course met or exceeded the benchmark for every one of the Foundation-specific learning outcomes. Nonetheless, the Department of Philosophy has endorsed a plan to supplement the course textbook and other course materials with new online study guides and other materials through a publisher’s learning portal/management system. These efforts are described in the course assessment report for PHI 2102 retained in the Undergraduate Studies Office.

III. Science and the Natural World (Lectures and Labs combined). A variety of disciplines and specialty areas are represented in the lecture and laboratory courses in this Foundation (i.e., Anthropology, Biology, Chemistry, Engineering, Geosciences, Physics). For the lecture courses, the percentage meeting or exceeding the benchmark fell below 70% on every SLO (range: 50% to 66.7%); the opposite was true for the laboratory courses, which were at or above 70% on every SLO (range: 71.4% to 75%). The number of lecture courses (N=15), however, was almost twice the number of laboratory courses (N=8).

The faculty teaching Science lecture and laboratory courses expressed genuine concern about the low levels of student performance, and many appropriate actions are planned to enhance student learning and ameliorate weaknesses (see Table 5 below).

Table 5. Examples of Actions Proposed by Instructors to Improve Student Performance in III. Science and the Natural World (Both Lecture and Laboratory Courses)

Course | Department/College | Learning Outcome | Changes To Be Made
ANT2511 Introduction to Biological Anthropology (lecture) | Anthropology/Arts and Letters | 4. An ability to discern claims based on rigorous scientific methods from those based on illogical or incomplete scientific methods | Course instructor will increase the number of group work activities in order to improve students' ability to evaluate pseudoscience.
AST2002 Introduction to Astronomy | Physics/Science | 1. An understanding of the nature of science, including important principles and paradigms | Instructors will include more lecture material on the various models used to explain the universe (e.g., the transition from the geocentric to the sun-centered universe) and will add more clicker and exam questions to evaluate this item.
CHM 2045L General Chemistry 1 | Chemistry/Science | 3. Be able to draw appropriate conclusions from such data | Data reveal that complex “mole to mole reaction” relationships need more emphasis in lecture. The instructor will change the content of lab sessions to include a longer discussion of this concept, with more examples, prior to beginning the lab session.
EGT2831 Nature: Intersection of Science, Engineering, and the Humanities | Environmental Engineering/Engineering | 3. An understanding of the nature of scientific inquiry and its ethical standards, in particular how to pose questions and how to develop possible explanations | The course needs to include more exercises that build on student skills. Pedagogically, instructors should offer more hands-on lab assistance to improve student comprehension of various models.

IV. Society and Human Behavior. Departments from across many of FAU’s colleges contribute to this Foundation: Anthropology, Economics, Geosciences, Education, Public Administration, Psychology, Sociology, and Urban Planning. The percentage of courses reporting student performance at or above benchmark levels was sufficient for two outcomes (1. Identify patterns of human behavior, 88.9%; 3. Understand key social science methods and the theoretical foundations behind these methods, 70%). The other two outcomes showed fewer courses reaching benchmark levels of achievement (2. How various societal institutions influence human behavior, 60%; 4. Application of key social science methods to analyze issues or problems, 66.7%). The majority of instructors in this category proposed to adjust their lectures to place greater emphasis on these methods and principles and to give students more practice applying them in class through clicker questions, writing assignments, and targeted quizzing.

V. Global Citizenship. The courses in this category are very diverse, coming from disciplines such as Anthropology, Education, Geosciences, Linguistics, Social Work, Sociology, History, and Philosophy. They are divided into two categories: Global Perspectives (N=8) and Western Identities (N=4). A majority of courses in each category reported high levels of student achievement on the Learning Outcome related to different individual, cultural, and national identities (overall, 91.7%; Global Perspectives, 87.5%; Western Identities, 100%). Fewer courses reported benchmark-level achievement on the Learning Outcome associated with societal processes that influence human action/interaction (overall, 58.3%; Global Perspectives, 50%; Western Identities, 75%).

The implementation of weekly quizzes, more explicit coverage of diverse influences in lectures, the inclusion of more primary sources as readings, and greater clarity in questions and assignment instructions are all strategies that faculty have chosen to implement to improve student achievement in these courses. Several faculty noted that the large student enrollments in their courses limit departments’ ability to use writing or discussion effectively as pedagogical elements.

VI. Creative Expression. The majority of courses in this Foundation were found to meet or exceed benchmark levels across all Student Learning Outcomes (Genres: 88.9%; Theory or Methods: 88.9%; and Context: 100%). Every instructor proposed additional measures to maintain or increase student performance on these outcomes, including the rewording of clicker questions and exam items for greater clarity or to require higher-order thinking.

Recommendations for Future Assessment Cycles

FAU continues to refine its measures and procedures for collecting information about how well the general education curriculum supports the development of core undergraduate competencies. These efforts have been evolving since before the inception of the Intellectual Foundations Program (IFP), and they will continue to do so. Because the first comprehensive assessment of the IFP was only conducted recently (Spring 2012), FAU considers its IFP assessment plan a work in progress. Faculty are still engaged in an evaluation of the aggregated findings and are currently revising the initial assessment plan to enhance the validity, reliability, and sustainability of its strategies and measures. Moreover, the process to support continuous program improvement will be successively honed over the next few assessment cycles. The initial steps that have already taken place are described below.

One goal of the most recent IFP assessment effort was to collect a comprehensive set of baseline measures for ALL competencies represented across ALL six categories within the IFP rather than focusing on only two categories of competence (i.e., written communication and mathematical reasoning). This was accomplished. Although time constraints limited the size of the sample from which student performance data were collected (for most courses, only one section offered during the Spring 2012 semester was included in the assessment), every foundation was represented with multiple courses from a variety of disciplines. The process allowed for an examination of the general IFP competencies as well as the specific competencies of all of the Foundations comprising the university’s general education program.

A second goal of the recent assessment cycle was to evaluate the adequacy of using direct assessment measures derived from student performance on course components that were already embedded within the structure, content, and pedagogy of each IFP course (e.g., assignments, exams, quizzes, classroom activities). This goal was achieved in two ways. First, the process required faculty to review the instructions and components of assignments, exam questions, course activities, and the like to judge which ones BEST reflected the general and Foundation-specific competencies of the IFP. This had never been done before on such a large scale at FAU. Although the derivation of assessment measures was constrained by time limitations (a couple of weeks), so that instructors had to balance the most meaningful measures against those that would efficiently provide information about students' levels of competence, faculty were able to select relevant measures independently, with minimal support from the IFP Assessment Coordinator and some feedback from colleagues with assessment expertise within their own colleges. For the most part, faculty relied upon small numbers of summative measures such as clicker questions, multiple choice and true-false questions embedded in quizzes and exams, and short answer or simple written responses on assignments. Nevertheless, most of the measures selected had adequate face validity and were deemed acceptable by chairpersons and college-level assessment representatives. Future assessment cycles will provide faculty with more time to plan assessment strategies in advance so that better measures can be derived, including larger samples of questions; more comprehensive or longer writing assignments (with appropriate evaluation rubrics); and measures that provide information about how competencies develop over time (e.g., elements that allow students to respond to formative feedback).

The strategy of using direct measures embedded within course assignments also established a broad expectation that course instructors, even for lower-division “non-major” courses, should be engaged in a continuous process of examining the assumptions about student learning underlying their specific assignments, pedagogies, and course materials. The strategy served as a catalyst for conversation both about the “how’s and why’s” of lower-division assessment and about the relationship between IFP assessment and the assessment of undergraduate degree programs (in the major). Although most of these conversations occurred informally, they raised awareness of some best assessment practices already being implemented in FAU courses, departments, and programs, and they helped identify some assessment gaps that need to be addressed in the near future. These informal conversations crossed departmental and college lines and have led to plans to establish Faculty Learning Communities and other faculty development programs on the topic of assessment in 2012-2013 and beyond. These opportunities will be coordinated between the Scholarship of Teaching Office in the Center for Teaching and Learning and the Office of the Associate Provost for Assessment and Instruction.

A final goal of the first full-scale assessment of the IFP was to determine the feasibility of using a uniform benchmark of “average” or “C” level performance for student achievement (70%) against which all courses and learning outcomes could be compared. This goal was met.

A great deal of feedback about the initial IFP assessment plan and strategies was provided by course instructors, department chairpersons, and college assessment committees in informal conversations, focus groups/open houses, and written reports at the end of the first assessment cycle. A summary of this feedback and a series of specific recommendations for future assessment cycles were delivered to the university’s Core Curriculum Committee by the IFP Assessment Coordinator. The recommendations were included in a memo which is linked here. Most of the recommendations relate to the need to establish a formal assessment plan and clear reporting structure for future assessment cycles. However, some specific changes to the learning outcomes were also recommended because of concerns about a variety of measurement issues.

The chief problem identified relates to how the undergraduate competencies within the IFP were defined. The specificity of the learning outcomes varies greatly among the categories: some learning outcomes refer to an individual discrete skill, while others are multifaceted and include more than one skill or domain of knowledge. These variations are found not only between the overall learning outcomes and the category-specific outcomes but also within the categories themselves. For instance, the laboratory course learning outcomes for category III, Science and the Natural World, are (for the most part) focused on single skills that are concrete, directly measurable, and, in many cases, common across laboratory curricula even in different disciplines (e.g., the how’s and why’s of analyzing data do not vary much among Anthropology, Biology, Chemistry, Geosciences, etc.). On the other hand, most of the individual lecture course learning outcomes contain multiple concepts (e.g., the current “limits of scientific knowledge” may differ from the processes by which “scientific knowledge changes”) and/or ambiguous concepts (e.g., “the nature of scientific inquiry”) that may not be defined in the same way across different disciplines. Without the guidance of a common definition for these concepts, there is a greater probability that lecture instructors chose more diverse types of questions covering a wider variety of content than laboratory course instructors did, and this may have contributed to the wide range in the raw results reported for each course, as well as the range in the percentage of courses found to meet or exceed the benchmark for student competence in each Foundation. The extent to which low (or high) performance may reflect such measurement issues, and not purely student competence, is unknown.

It will be critical for the success of future IFP assessment efforts to refine the language of the student learning outcomes for different categories within the IFP. Input from the stakeholders in this process—the departments and colleges that offer these courses—will be essential so that faculty may align their course objectives with the IFP SLOs and coordinate the development of their measures with each other for reliable assessment outcomes. Standard rubrics for the evaluation of student work (especially written materials) should be created that build upon the initial rubric for the Philosophy course in category V, Global Citizenship, created for the first assessment cycle.

To address this, it has been recommended that an IFP Assessment Committee be formed in the fall 2012 semester. This committee should be composed of representatives from each academic department that contributes to the IFP, representatives from college assessment committees, and personnel from key support units such as the Center for Teaching and Learning, the Office of Institutional Effectiveness and Analysis, and the Office of the Associate Provost for Assessment and Instruction. Along with ongoing feedback provided by the FAU community (see the IFP feedback form posted on the IFP Assessment website), these additional efforts will contribute to a more valid and reliable assessment plan.

 

 

 

 
