

Knight, J. K., & Wood, W. B. (2005). Teaching More by Lecturing Less. Cell Biology Education, 4(4), 298–310. doi:10.1187/05-06-0082

The authors of this paper used a wide variety of methods to quantify student learning gains. An upper-level developmental biology class was taught in parallel by two professors, both of whom taught in "traditional lecture" style the first semester and in "active-learning" style the second semester. This structure allowed for a number of controlled comparisons. They used pre- and post-test multiple-choice concept inventories as a general measure of student learning, and they report that the most useful method of analysis involved calculating normalized learning gains, which equal the actual gain divided by the possible gain. In this way, they could meaningfully compare learning gains between students who had different pre-test scores. I especially like this method, and it seems relevant to my research question. They also used student interviews and a qualitative student attitudes survey to gauge the effectiveness of the different teaching methods in their classrooms.
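The normalized-gain calculation described above is simple enough to sketch in a few lines. This is a minimal illustration of the formula (actual gain divided by possible gain), not the authors' actual analysis code; the function name and the 100-point scale are my assumptions.

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized learning gain: actual gain divided by possible gain.

    A student who scores 40 pre and 70 post (out of 100) gained 30 of a
    possible 60 points, for a normalized gain of 0.5. This lets students
    with different pre-test scores be compared on a common scale.
    """
    if pre >= max_score:
        raise ValueError("no room for gain: pre-test score is already at maximum")
    return (post - pre) / (max_score - pre)
```

Note that the denominator shrinks as the pre-test score rises, which is exactly why the measure is fairer to students who started near the ceiling.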


Kozeracki, C. A., Carey, M. F., Colicelli, J., & Levis-Fitzgerald, M. (2006). An Intensive Primary-Literature–based Teaching Program Directly Benefits Undergraduate Science Majors and Facilitates Their Transition to Doctoral Programs. CBE-Life Sciences Education, 5(4), 340–347. doi:10.1187/cbe.06-02-0144

This research article highlights, for me, the importance of including controls in education studies. The project assessed the effectiveness of a year-long journal club in increasing students' scientific literacy and critical thinking skills. Three types of data were collected: end-of-year student evaluations, alumni surveys, and student interviews. The authors conclude that the journal club positively impacted the students, but there is no comparison to pre-course data; only mean scores from the end-of-year evaluations and interviews are presented. The use of multiple kinds of data and data analyses is very useful, but without a control group or a pre-assessment, it is difficult to draw meaningful conclusions. The questions asked on the evaluations and surveys also do not seem well aligned with the stated goals of the research project, so it is difficult to differentiate overall student satisfaction from increases in conceptual knowledge or critical thinking skills.


Shannon, S., & Winterman, B. (2012). Student Comprehension of Primary Literature is Aided by Companion Assignments Emphasizing Pattern Recognition and Information Literacy. Issues in Science and Technology Librarianship.

Using a controlled study, these authors examined students' abilities to comprehend primary research sources and integrate them into assigned laboratory reports. In one section of the course, multiple class sessions were devoted to learning how to read research articles, with paired reading assignments; in two control sections, a single "one-shot" library session taught students how to use Web of Knowledge. Unaffiliated researchers examined the student lab reports and assigned comprehension and integration scores of 1–3. The authors found a significant increase in the comprehension scores of students in the experimental section, but no difference in integration scores. A pre-assessment could have been useful in providing evidence of learning gains within the experimental and control sections.

Smith, M. K., Wood, W. B., & Knight, J. K. (2008). The Genetics Concept Assessment: A New Concept Inventory for Gauging Student Understanding of Genetics. CBE-Life Sciences Education, 7(4), 422–430. doi:10.1187/cbe.08-08-0045

This project uses a wide range of strategies for measuring the effectiveness of an assessment—that is, gauging how meaningful the answers are that students provide on it. The authors developed a genetics concept inventory and vetted it with three different methods: student interviews, pilot testing to identify jargon and unclear questions, and faculty interviews to determine whether the inventory would be useful in their genetics courses. The multiple-choice questions on the inventory all included student-provided distractors, and the questions were aligned to pre-established course learning goals. One particularly nice aspect of this study was that the inventory was used at three completely different institutions, with different genetics courses. Besides using the inventory to calculate normalized learning gains, the authors also calculated item difficulty and item discrimination. Item difficulty was calculated as (total # correct) / (total # of responses). Item discrimination measured how accurately a student's answer to a single question reflected their overall performance on the entire test, and was calculated as (# of correct responses by the top 33% of students − # of correct responses by the bottom 33% of students) / (total # of responses / 3).
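The two item statistics described above can be sketched as follows. This is a minimal illustration of the formulas as stated in the annotation, not the authors' analysis code; the data layout (one correct/incorrect flag and one total score per student) is my assumption.

```python
def item_difficulty(responses):
    """Fraction of students answering the item correctly.

    responses: list of 1 (correct) / 0 (incorrect), one entry per student.
    """
    return sum(responses) / len(responses)

def item_discrimination(scored):
    """Discrimination index: top third vs. bottom third of students.

    scored: list of (item_correct, total_test_score) pairs, one per student.
    Returns (# correct in top 33%  -  # correct in bottom 33%) / (n / 3),
    so values near 1 mean the item separates strong from weak students.
    """
    ranked = sorted(scored, key=lambda s: s[1], reverse=True)
    third = len(ranked) // 3
    top_correct = sum(correct for correct, _ in ranked[:third])
    bottom_correct = sum(correct for correct, _ in ranked[-third:])
    return (top_correct - bottom_correct) / third
```

A perfectly discriminating item (all of the top third correct, none of the bottom third) scores 1.0; an item unrelated to overall performance hovers near 0.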


Wenk, L., & Tronsky, L. (2011). First-Year Students Benefit from Reading Primary Research Articles. Journal of College Science Teaching, 40(4), 60–67.

This article explored the use of intensive primary research article analysis in an introductory-level biology class as a means of improving students' critical thinking skills and ability to understand research papers. During the course, students were trained in how to analyze research papers, and papers were discussed in class a number of times throughout the course. All 41 students in the intro course were given both pre- and post-tests consisting of a primary research article reading assignment and accompanying analysis questions (such as "What question is addressed by this research?"). Their responses were coded as "misunderstands", "mentions", or "explains", and these codes were compared between pre- and post-tests. The researchers observed a significant increase in "explains" answers and significant decreases in "misunderstands" answers for most of the student responses. It would have been nice to see a comparison to some sort of control group, to provide firmer evidence that the instructors' methods were in fact contributing to the students' learning gains.
