EDU 637

E-Brary of Resources

UNIT 1

Designing and Teaching a Course. (1998). Speaking of Teaching, 9(2).

This article, published in the Stanford University Newsletter on Teaching, outlines a lecture by Russell Fernald, a professor and director of the biology program at Stanford. In short, the article discusses various aspects of designing and teaching a course, including setting objectives, determining format, course design, the use of teaching assistants, and evaluation. The article states that “designing a course is one of the abilities faculty are assumed to have picked up somewhere in their graduate preparation, although few have taken a pedagogy course while in graduate school” (Designing and Teaching a Course, 1998), a statement I myself cannot argue with, as I’d never taken one until recently. The purpose of this article, therefore, is to inform others of some techniques one can use. For example, the article recommends considering the audience you’ll be teaching. In his talk, according to the article, Professor Fernald recommended thinking about which students would be taking the class and whether or not it was a required course. From there, Professor Fernald also cautioned against turning required classes into ones that students dread; we’ve all been there, so, as educators, we should endeavor to improve for the next generation of students (Designing and Teaching a Course, 1998).

Overall, this article contains a good amount of useful information on how to design a course and teach it effectively.

UNIT 2

Tilghman, S.B. (2011). Designing and Developing Online Course Assessments. Review of Higher Education and Self-Learning, 4(9), 31-34.

This article on online course assessments seeks to answer three distinct questions, with the aim of examining how those assessments can be designed more effectively: How can we construct successful assessment strategies and frameworks that are specifically designed for online learning environments? How can instructors ensure their assessments are aligned with course objectives, activities, and assignments? And what technologies can be implemented to support the various assessment options? In the end, the conclusion is that, in order to design the most effective online course assessment, we need to ensure that we select appropriate objective and performance assignments for online students; that assessments are based on what students should know in order to function in real-life situations, drawing on critical course content, with students displaying proof of knowledge; and that we use both formative and summative assessments to confirm that students have absorbed the information and can utilize it in real-life situations.

UNIT 3

Livingston, M. (2012). The infamy of grading rubrics. English Journal, 102(2), 108-113.

In this article, Michael Livingston explains why he still uses, and likes, rubrics for grading student writing, even in the face of scathing comments by others who are critical of using them. To start, he uses an anecdote from World War II, specifically President Franklin Roosevelt’s infamy speech. Livingston explains that he used this anecdote because “President Roosevelt declared war on Japan. He did not declare war on aviation. It seems like a silly thing to note…the attack would not have happened without planes, so why not declare war on them? No more planes, no more attacks, so a war on aviation” (Livingston, 2012). Livingston goes on to state that this same faulty logic, as it were, is why people are so against rubrics. Throughout the article, Livingston rebuts argument after argument against rubrics, explaining why those arguments are faulty. For example, he points to an argument made by Alfie Kohn, who claims that rubrics create “superficial writing, cannot produce truly objective results, and should never be shared with students” (Livingston, 2012). Livingston argues that what Kohn is really railing against is lazy instructors and lazy grading, stating that Kohn is disturbed by the way rubrics are used by instructors with no concern for student improvement, but that improper use of a tool is not the fault of the tool; it is the fault of the user.

Livingston continues the article by explaining how he uses a rubric in writing assignments—namely, by using a more measurable standard that also allows for student creativity. For example, some of the things Livingston uses are: thesis—is it arguable, is it clear, does it make sense; arguments—do the paragraphs relate to the thesis, did I support my claims with evidence. By using these standards in his rubric, Livingston is able to teach his students his definition of good writing, which is “to take stances, to take chances, and to make strong rhetorical arguments based on evidence, all conveyed within the bounds of proper practice” (Livingston, 2012). As a historian, I can’t say he’s wrong. And I may just use components of his rubrics in my own classrooms.

Anglin, L., Anglin, K., Schumann, P.L. & Kaliski, J.A. (2008). Improving the Efficiency and Effectiveness of Grading Through the Use of Computer-Assisted Grading Rubrics. Decision Sciences Journal of Innovative Education, 6(1), 51-73.

The purpose of this article was to discuss the efficiency of computer-assisted grading rubrics. A computer-assisted rubric is similar to a traditional paper rubric in that the instructor creates the criteria and performance standards and evaluates student submissions against them. It differs in that, once those evaluations are entered, the instructor presses a button and the rubric provides “predetermined, standardized statements to each student explaining the strengths and weaknesses of their response” (Anglin, Anglin, Schumann, & Kaliski, 2008). The point of this, obviously, is to make life easier for the instructor, as well as to make feedback more consistent and objective. The theory is that, by making these assessments more objective and consistent, students will get more meaningful feedback, so the point of the study was to test whether this was, in fact, the case. The study included four different groups: hand grading with no rubric, hand grading with a rubric, electronic grading with no rubric, and grading with the computer-assisted rubric. According to the study, instructors using the computer-assisted rubric graded almost twice as fast as those hand grading with no rubric, so the efficiency theory proved correct. There was, however, no difference in the effectiveness of the comments.
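The article itself does not include any code, but the mechanism described above (the instructor rates each criterion, presses a button, and the tool returns predetermined, standardized statements) can be pictured with a small, purely hypothetical sketch. The criteria, rating levels, feedback text, and function name below are invented for illustration and are not taken from Anglin et al.

```python
# Hypothetical sketch of a computer-assisted grading rubric:
# each criterion/rating pair maps to a predetermined feedback statement.
# All criteria and statements here are invented for illustration.

RUBRIC_FEEDBACK = {
    "thesis": {
        "excellent": "Your thesis is clear, arguable, and well scoped.",
        "developing": "Your thesis is present but needs a sharper, arguable claim.",
        "missing": "No identifiable thesis; state the claim your paper will defend.",
    },
    "evidence": {
        "excellent": "Claims are consistently supported with cited evidence.",
        "developing": "Some claims lack supporting evidence or citations.",
        "missing": "Arguments are asserted without supporting evidence.",
    },
}

def build_feedback(ratings):
    """Assemble the standardized comments for one student's submission."""
    lines = []
    for criterion, level in ratings.items():
        statement = RUBRIC_FEEDBACK.get(criterion, {}).get(level)
        if statement:
            lines.append(f"{criterion.title()}: {statement}")
    return "\n".join(lines)

# The instructor enters ratings for each criterion, then "presses the button."
print(build_feedback({"thesis": "developing", "evidence": "excellent"}))
```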

It’s odd: I would have figured that students would prefer feeling as though their instructor was invested in their education, that they mattered, and that they would therefore be unhappy with impersonal feedback. I would be mistaken. If anything, there are some signs that the authors of the study were correct: consistent, objective feedback does seem to help students more.

UNIT 4

Roberts, M. (2013). Creating a dynamic syllabus: A strategy for course assessment. College Teaching, 61(3), 109-110.

This remarkably short yet fully informative article discusses the idea of the syllabus as a tool for course assessment. As we know, a syllabus can help with course assessment at the end of the term; what the author does is make two copies of her syllabus: one for her students at the beginning, and another that she edits throughout the semester whenever she has “a fresh idea for a course or encounter a glitch” (Roberts, 2013). In the second copy she includes editorial corrections and calendar adjustments, reminders of problems with assignments, tests, or policies, and (for her, the most important) pedagogical improvements. Using these comments, she is able to assess what worked and what didn’t, as well as possible improvements for future semesters.

This strikes me as a wonderful practice for instructors to adopt. By making the syllabus “dynamic,” as opposed to “static,” instructors get a better idea of how to improve their course.

University of Central Florida. (2005). Program Assessment Handbook: Guidelines for planning and implementing quality enhancing efforts of program and student learning outcomes.

This entry is less an article and more a “how-to” guide for, as the title states, planning and implementing standards to improve programs and student outcomes. The stated purpose is to “provide academic programs with a framework for developing an assessment plan with the objective of improving an academic program” (University of Central Florida, 2005). The handbook is split into six chapters, each answering a different question about assessment. Chapter one, for example, asks what assessment is and why you should assess, and discusses the purposes and characteristics of assessments. Chapter four asks how to define student learning outcomes and stresses the importance of explicitly defining expectations and standards.

Overall, this is a very helpful tool for institutions to use in assessing an entire program.

Palloff, R.M. & Pratt, K. (2009). Assessing the online learner. San Francisco, CA: Jossey-Bass.

Part of our textbook reading was to understand evaluation and assessment tools. This section of the textbook gives ideas on how to build rubrics, along with samples, and includes sections on how to develop good feedback to give to students. It also explains that different assignments call for different assessments; for example, there should be a different rubric for a presentation than for a research paper or a portfolio. Additionally, it covers both self-assessment and peer-assessment techniques.

All of this information is important to learn; as graduate students (unless we’re in an education program), we’re rarely given the opportunity to learn how to teach. We’re expected to just…know somehow. This textbook gives us a head start on that.

Peirce, W. Course Assessment Handbook. Prince George’s Community College.

This Course Assessment handbook is an overall guide intended to help faculty at one community college effectively assess their courses. Within the handbook is an entire section devoted to the development of rubrics. Peirce breaks the process down into four steps: 1) decide whether you want an analytical rubric, which measures each part of the student’s work separately, or a holistic one, which combines them; 2) construct a primary trait scale (or, in less fancy terms, a rubric); 3) obtain consistency in instructions and conditions; and 4) norm the scorers. For step 2, Peirce recommends using Effective Grading: A Tool for Learning and Assessment by Barbara Walvoord and Virginia Anderson, as well as The Art and Science of Classroom Assessment: The Missing Part of Pedagogy by Susan Brookhart, which can be found in the ASHE-ERIC Higher Education Report 27(1), 1999. When he speaks of “norming the scorers,” Peirce means there should be a set standard so that a number of different scorers can be on the same objective page when scoring, and he gives a procedure for getting scorers onto that same page, so as to ensure consistency across the rubric.

UNIT 5

iNACOL. (2011). National Standards for Quality Online Teaching.  

This report/scorecard is intended for course assessment. It is broken down into 11 separate sections for assessment, ranging from instructional design to assignments and assessments. The iNACOL scorecard scores the instructor more than it scores the course itself, so some of the assessment criteria include whether the teacher “knows and understands the techniques for developing a community among the participants” (iNACOL, 2011). Another such criterion is whether the teacher “is able to provide a clear explanation of the assessment criteria for the course to students.” Sadly, because this scorecard is just for instructors, it focuses on whether they have done, or can do, their job, and doesn’t address the institution itself at all. Even the instructional design section (which, depending on the program, may have nothing to do with the instructor) focuses on the teacher, with such criteria as “the online teacher is able to incorporate multimedia and visual resources into an online module.”

Overall, the iNACOL scorecard is a good tool for evaluating instructors, but not so good at evaluating a course.

UNIT 6

Rusk, M. (2002). Sloan Consortium (Sloan-C) working toward quality standards for online courses. Community and Junior College Libraries, 11(1), 65-68.

In this article, Mike Rusk gives us an overview of what, exactly, the Sloan Consortium is. He writes that the consortium, founded in 1995, was started for the purpose of “serving the higher education community with information on existing online degree programs and the tools and material to support them” (Rusk, 2002). He goes on to say that it also provides a place of collaboration for the 100-plus institutions that are members of the consortium, and notes that the consortium publishes “a set of criteria for online programs that are perhaps a beginning step toward actual standards” (Rusk, 2002). Given that this article was published 13 years ago, I believe we have since gone a step further: Sloan-C has in recent years published a Quality Scorecard to evaluate and assess online courses.

Berridge, G.G., Penney, S., & Wells, J.A. (2012). eFACT: Formative assessment of classroom teaching for online classes. Turkish Online Journal of Distance Education, 13(2), 119-130.

This article discusses how instructors keep track of students throughout a course using the Electronic Formative Assessment of Classroom Teaching (eFACT). Formative assessment is defined as the range of methods instructors use to evaluate student absorption of information in order to modify teaching and learning activities and improve student attainment (Crooks 2001)[1]. The purpose of eFACT was to gather anonymous student feedback so that changes could be made to the course while it was in session. The hope was that the use of eFACT would “affect the quality of the delivery method of the course by giving instructors immediate feedback as students reflect on their learning” (Berridge, Penney, & Wells, 2012). According to the authors, the use of eFACT represented a “power-shift at mid semester, giving instructors the opportunity to adjust their teaching methods” (Berridge, Penney, & Wells, 2012). In this case, students commented that face-to-face discussions were more valuable than discussion boards, but that computer and technology issues, as well as outside life issues, were hindrances. That being said, the use of eFACT meant that instructors had immediate feedback on what worked and, more importantly, what didn’t.

[1] An extra source was used here for the definition of “formative assessment.” Crooks, T. (2001). “The Validity of Formative Assessments.” British Educational Research Association Annual Conference, University of Leeds, September 13–15, 2001.

UNIT 7

*NO ENTRY REQUIRED

UNIT 8

Vonderwell, S., & Boboc, M. (2013). Promoting Formative Assessment in Online Teaching and Learning. TechTrends: Linking Research & Practice to Improve Learning, 57(4), 22-27. doi:10.1007/s11528-013-0673-x

This article discusses the authors’ use of formative assessment in their graduate classes for the purpose of designing assessment activities that improve online teaching and learning by making use of student learning data. Some of the assessment techniques they used were a reflection paper, role play, and a one-minute paper (in which students were asked questions such as “what did you learn today?” and “what questions do you still have in mind?”). Additionally, the authors found that checking in with students before any synchronous chat sessions was helpful in identifying any issues with the assigned readings and the like. They conclude their article by reminding instructors that “it is useful to be aware of students’ connectivity options, so that we do not heighten the digital divide by asking students to use technological tools that could not be readily available to them” (Vonderwell & Boboc, 2013). The example they give concerns the speed of a student’s Internet connection, especially when there are required online chats. This, obviously, is only an issue when it comes to synchronous activity, which, as we all know, not every program uses.
