
Annotated Bib '10-11


All the annotated bibliographies you wrote for Scholars 2010 have been compiled in this space.

  

1. Koenig, Kathleen, "Building Acceptance for Pedagogical Reform through Wide-Scale Implementation of Clickers", Journal of College Science Teaching, v. 39 (3): 46-50, Jan 2010.  

This article describes a physics professor's attempt to encourage other faculty at her institution to incorporate an active learning strategy, the use of clickers, into their classrooms. Much like me, Kathleen has been incorporating active learning and interactive engagement techniques into her classroom for several years. She has found success with these methods and wants to encourage her colleagues to deviate from the traditional lecture-only approach taken in their courses. By making the use of clickers in the classroom non-threatening for faculty (she purchases the clickers and shows them how to use them), she is able to convince several faculty members to try the clickers in their classrooms. The results are that the faculty members do like using them and find that the students are more engaged and more likely to attend class regularly when the clickers are incorporated. This paper discusses a key component of my SoTL project in that I hope to encourage other members of my Division to incorporate active learning into the classroom. I have met resistance to this for all the same reasons that Kathleen discusses in her article. Although this article is interesting, I am choosing to focus more on the impact that the use of clickers has on the development of student thinking. This paper will be valuable later when I return to encouraging my colleagues to be more innovative in their teaching approach.

  

2. Mollborn, Stefanie, Hoekstra, Angel, "'A Meeting of Minds': Using Clickers for Critical Thinking and Discussion in Large Sociology Classes", Teaching Sociology, v. 38 (1): 18-27, 2010.

This article focuses on the research question I am most interested in asking: does the use of clickers improve students' critical thinking skills? Most of the article focuses on explaining how to use clickers in the sociology classroom. An important point made about using clickers to foster critical thinking is that the quality of the clicker questions determines how much thinking the students engage in. If you write basic knowledge-level questions, students will be glad to be able to answer the question easily (my experience), but this does not challenge them to think. In other words, the article hints that the development of critical thinking skills in the students is in the hands of the instructor. This is helpful, but I want to know how to assess whether or not my students are improving in their ability to think critically. The authors only present anecdotal evidence of the effectiveness of clickers in this respect. They surveyed and interviewed students, which to me seems like evidence of student perceptions rather than of actual gains. I will keep looking for an article that explains how I might actually test students to see if their ability to think critically has improved over the semester when clickers are used.

  

3. Mayer, Richard E., Stull, Andrew, DeLeeuw, Krista, Almeroth, Kevin, Bimber, Bruce, Chun, Dorothy, Bulger, Monica, Campbell, Julie, Knight, Allan, Zhang, Hangjin, "Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes", Contemporary Educational Psychology, v. 34 (1): 51-57, 2009.

This is the type of article I have been looking for. This article uses a research format similar to one that I would like to use for my study, and my question is an extension of the research done for this article. In this paper, the authors evaluate the use of clickers as a change in technology. There are three educational psychology classes: a control class, a clicker class, and a non-clicker class. The control class is taught using the "traditional" lecture approach. In the other two courses, students practice answering multiple choice questions about course material. The difference is that the clicker class incorporates the clicker system while the non-clicker class has the students answering questions with pencil and paper. I really like this study because of the quasi-experimental format and the way data is collected, analyzed, and presented. Plus, this study addresses the question: why not just ask multiple choice questions with pencil and paper and/or just ask for a show of hands? There is a difference! My question is, can clickers (plus the appropriate pedagogical methods) be used to promote critical thinking in students?

  

4.  Crossgrove, Kirsten, Curran, Kristen L.; "Using Clickers in Nonmajors- and Majors-Level Biology Courses: Student Opinion, Learning, and Long-Term Retention of Course Material", CBE - Life Sciences Education, v. 7 (1): 146-154, Spring 2008. 

This article reviews some papers written about the effects of clicker use on student learning. The studies report conflicting results: some report a positive effect associated with the use of clickers, others report no effect. The differences are not surprising because, as the authors highlight in their introduction, each researcher has asked a slightly different question and/or used different methods to investigate the effects of clicker use on student learning. Based on this, the authors chose to focus on measuring how the teaching methods associated with using clickers impact student learning. This paper is important to my research question because the authors compare the use of clickers in nonmajors vs. majors courses and collect data about long-term retention of information. The authors clearly explain how they obtained IRB approval. They detail their statistical analysis and their method for assessing the long-term retention of information. Results of this study reveal that four months after the course, the nonmajors retained more content knowledge than the majors. The authors were not able to tease out the effect of clicker use on student learning from the effect of an active learning strategy the instructors had previously adopted. Overall, the results of this study added to the lack of clarity regarding the effects of clicker use on student learning.


5.  Gauci, Sally A., Dantas, Arianne M., Williams, David A., and Kemm, Robert E.; "Promoting student-centered active learning in lectures with a personal response system", Advances in Physiology Education, v. 33: 60-71, 2009.

This study focuses on the how-to aspects of engaging students during lecture, which is different from my research question. What I found useful about this article is the description of various methods of assessment. The authors used a combination of student questionnaires, student interviews, and exams to assess the impact of clicker use on student engagement during lecture. I am still searching for articles that assess improved critical thinking skills associated with the use of clickers, which is the question that most intrigues me. This will be a very useful article as I sort out methodology and choose appropriate and informative assessment measures for my study.
 

 ************************* 

 

1. Chaplin, S. (2009) Assessment of the Impact of Case Studies on Student Learning Gains in an Introductory Biology Course. Journal of College Science Teaching, vol. 39, 72-79. 

Several measures of student performance in an introductory biology course were used to assess whether case-based teaching leads to better mastery and deeper understanding.  The measures included comparing the point difference between the last and first exams, comparing how well students answered questions at different levels of Bloom's taxonomy on these exams, and finally comparing the total distribution of exam points.  The various statistical analyses indicated a significant improvement in higher-level thinking skills in the students in the case-based learning cohort.  Although the case-based learning cohort was compared to a traditional lecture cohort and most variables (entering SAT scores, same instructor, etc.) were kept constant, a major drawback of this study was the limited sample size.

 

2. Ebert-May, D., Batzli, J., and Lim, H. (2003) Disciplinary Research Strategies for Assessment of Learning. BioScience, vol. 53, 1221-1228.

   

3. Fife, E. (2007) Using Focus Groups for Student Evaluation of Teaching. MountainRise, vol. 4, 1-19.

As I continue to develop cases, I have used small informal focus groups to assess whether my goals in helping the students learn are being met.  This article on formalizing the focus group assessment may play a big role in my future work. 

  

4. Herreid, C.F. (1994) Case Studies in Science: A Novel Method of Science Education. Journal of College Science Teaching, 221-229.    

As I consider how to assess the success of my current case studies embedded in my lecture and lab courses, I found it useful to go back to one of my original sources describing case studies.  The author describes types of case studies, how to write them, and applications of each type.  He presents no data on the efficacy of the methodology, but rather outlines in detail how one could implement this strategy in the classroom.  He acknowledges that this may not "be the best method to deliver a plethora of facts, figures, and principles. . . . [but] is ideal to develop higher-thinking skills."

  

5.  Noblitt, L., Vance, D., and DePoy Smith, M. (2010) A Comparison of Case Study and Traditional Teaching Methods for Improvement of Oral Communication and Critical-Thinking Skills. Journal of College Science Teaching, 26-32.

 

 

1.    Bond, L. Toward Informative Assessment and a Culture of Evidence. A report from the Carnegie Foundation for the Advancement of Teaching's Strengthening Pre-Collegiate Education in Community Colleges. Stanford, California: The Carnegie Foundation for the Advancement of Teaching, 2009.  

This report describes the characteristics of the instructional change that happened over a series of years when 11 community colleges in California participated in the SPECC. The scope of the changes, and the context in which these faculty and administrators worked, are very different from what I am trying to accomplish, but I appreciate the way the change is described as one that makes participants more open to collecting and using evidence gathered from their own teaching.

 

2.    Bonner, J. A Biology Course for the Less-Than-Prepared Prospective Biology Major. Bioscene 35 (1), 74-81 (2009). 

This peer-reviewed article caught my eye because it discussed how faculty at the College of Notre Dame of Maryland set up a special section of Introductory Biology for incoming students who had low math scores. One of the correlations they noted, and that we have seen at our institution, is that students who have low math scores do not succeed as well in Introductory Biology. The author was involved in setting up a special section that allowed less-than-prepared students to explore the content in a more structured and less didactic way. Again, the setting in which this change occurred was quite different from our large research university, but the similarities in student demographics suggest that some of these strategies could be used to address the needs of our at-risk students. 

Objective: Design and evaluate an introductory biology course for potential biology majors with low Math SAT scores. The course design emphasizes techniques that reinforce the use of math in understanding and making connections between biological concepts. 

Conclusion:  93 students enrolled in the course, and 78 students passed. 47 of 51 students who passed the course and took the follow-up biology course succeeded with an average grade of C+, the same as the well-prepared students who immediately took the course.  

Methods and Rationale:  Correlations were calculated between biology students' grades in Introductory Biology over 5 semesters and the students' MSAT scores, and a significant positive correlation was observed. Based on this correlation, a foundations course was designed, drawing on the educational literature:

  • Six units of the course are not developed sequentially, so that students are not compromised by "falling behind."
  • The six units broaden the biological literacy of the incoming students.
  • The course is built around biological problems that require mathematical solutions.
  • A multi-step, major out-of-class research assignment stretches students' ability to manage their time.
  • Class sessions make explicit the importance of strategy development in solving biological problems and integrating biological knowledge.

To analyze results, descriptive statistics (grade, completion rate) were noted for the 93 students who enrolled in this course, and course performance in the subsequent introductory biology course was noted for students who took that course in the semester immediately following the new foundations course. Because the goal was to make students more successful in introductory biology or to help them decide that this major pathway was not a good match with their skills and interests, these results were reported in the context of how many students failed or withdrew, how many persisted, and what their scores were.
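
A minimal sketch (invented numbers, not the paper's data) of the kind of grade-vs.-MSAT correlation calculation described above:

    # Hypothetical scores; the paper correlated five semesters of Introductory
    # Biology grades with students' Math SAT scores.
    from scipy.stats import pearsonr

    msat   = [440, 480, 520, 560, 600, 640, 680]   # Math SAT scores
    grades = [1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.3]   # course grades on a 4-point scale

    r, p = pearsonr(msat, grades)
    print(f"r = {r:.2f}, p = {p:.4f}")  # a positive r with a small p mirrors the reported result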

 

3.    Smith, M.K., W. B. Wood, and J. K. Knight. The Genetics Concept Assessment: A New Concept Inventory for Gauging Student Understanding of Genetics. CBE-Life Sciences Education 7: 422-430 (2008).

I am interested in putting together assessments that target conceptual understanding of the topics in Introductory Cell and Molecular Biology, and I appreciate the detail these authors put into describing how they created, validated, and used the 25 questions in this concept inventory. It was particularly striking that all the faculty involved in teaching were involved in identifying learning objectives and areas of student misconceptions. I also was drawn to the way the authors used the concept inventory questions in a pre-post design in their introductory classes, while delivering them again at the beginning of the genetics classes most students took immediately afterward -- a really nice way to look at how this type of assessment data can be used. I believe that I will be drawing largely on this and other concept inventories to create assessments for my own class.

Objective: Development, validation, and implementation of a concept inventory for genetics. 

Conclusion: The 25-question Genetics Concept Assessment measures student understanding across nine learning goals for majors' and nonmajors' genetics courses, and its scores correlate with course exam scores and interview results. These questions can be used as pre-post assessments, to compare learning outcomes from different modes of instruction, and to focus on specific learning goals to inform ongoing curriculum decisions.

Methods used and rationale: To develop the questions, course instructors were interviewed to develop learning goals representative of this type of course; based on these and the literature on student misconceptions about genetics, a pilot assessment was developed. The pilot assessment was administered to students; student-supplied wrong answers were used to replace the distractors, and jargon was reworded. Questions that were answered correctly by >70% of students were rewritten so that the resulting assessment was moderately difficult, making it possible to measure improvement. Validation was performed through interviews with 33 students of different achievement levels and using instructor input, followed by input from colleagues at other institutions who administered the assessment in their genetics courses. Finally, the assessment was administered in 5 courses (3 institutions, 607 students) to generate descriptive statistics in which student learning gains, 100 x (post - pre)/(100 - pre), were calculated. For a 321-person class, learning gains were correlated with exam scores.
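
As a quick sanity check on that formula, here is a minimal sketch (my own illustration, not code from the paper) of the normalized learning gain calculation, where pre and post are percent-correct scores:

    def normalized_gain(pre: float, post: float) -> float:
        """Percent of the available improvement that was actually achieved."""
        if pre >= 100:
            raise ValueError("pre-test score must be below 100%")  # no room left to improve
        return 100 * (post - pre) / (100 - pre)

    # Example: a student moving from 40% to 70% captures half of the possible gain.
    print(normalized_gain(40, 70))  # 50.0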

  

4.    Yerushalmi, E., C. Henderson, K. Heller, P. Heller, and V. Kuo. Physics faculty beliefs and values about the teaching and learning of problem solving. I. Mapping the common core. Physical Review Special Topics - Physics Education Research 3, 020109 (2007).

This article is less practice-based than most of the articles I am drawn to. It characterizes the beliefs of six physics instructors at one institution, drawing on course artifacts and instructor interviews centered on teaching a problems-based physics course. The output of the analysis was a set of concept maps representing the instructors' beliefs about various aspects of their teaching role, including what they believed about students and what experiences students need to succeed in their physics course. I do not anticipate doing research in this way on my own, but I appreciated the way the authors grouped their findings into emergent themes and then drew on evidence to substantiate the assertions. What struck me about their findings was that these six randomly chosen instructors did have a common set of beliefs with respect to teaching, learning, and problem solving in physics. I am fairly sure that a similar approach would demonstrate that my fellow Introductory Biology instructors and I are quite divergent. Am I fooling myself about this? Should I do some work to answer this question definitively?

 

5.    Klymkowsky, M. W., K. Garvin-Doxas, and M. Zeilik. Bioliteracy and Teaching Efficacy: What Biologists Can Learn from Physicists. Cell Biology Education 2(3): 155-161 (2003).

This essay, the first in a series that describes these authors' work in developing a biology concept inventory, was particularly impressive because it discussed the important role that concept inventories played in redefining physics education, and because it discussed the fact that "bioliteracy" is critically important in the lives of citizens -- and that conceptual understanding is what is required, because the specifics change. This might be a good resource to use when advocating the use of these kinds of tools with my colleagues. The article also addresses the fact that creating one all-encompassing concept inventory for all of biology is difficult because of a lack of consensus about what should be taught, and it points out how astronomers have addressed this argument.

Cao, L. and J. Nietfeld. 2007. College students’ metacognitive awareness of difficulties in learning the class content does not automatically lead to adjustment of study strategies. Australian Journal of Educational & Developmental Psychology 7:31-46.
The purpose of this study was to (1) observe students' ability to identify learning challenges in a class and (2) study the relationship between identifying those challenges and subsequent study strategy selection and test performance.  The study involved college students in an educational psychology course.  During each normal class period, students completed a monitoring worksheet that prompted them to describe concepts they found difficult to understand and how they would improve their understanding.  The worksheet also included three multiple-choice review questions.  Students indicated their confidence in answering each question on a 0-100% scale. Analysis of the open-ended responses by the constant comparative method (something I should probably read up on) revealed seven different categories of perceived student difficulty and four categories of study strategies.  Statistical analysis revealed no relationship between perceived difficulty, strategy selection, and performance on quizzes.  The authors indicate that feedback was given on the review questions.  However, no feedback was given on strategy selection, and rehearsal methods were most often selected.  I would argue that the ability to identify learning challenges does not necessarily indicate students know how to address those challenges.  Perhaps if students received feedback about how to approach difficulties with "understanding relationship between concepts" (one type of perceived difficulty reported in the paper), for example, they might learn to select better strategies over time.  The paper provides a good example of alternatives to Likert-scale surveys for obtaining data on student metacognitive awareness.

Cooper, M., and S. Sandi-Urena.  2009. Design and validation of an instrument to assess metacognitive skillfulness in chemistry problem solving. J. Chem. Educ. 86: 240-245.
Cooper and Sandi-Urena describe the creation and validation of a survey that assesses the ability of chemistry students to perform problem-solving activities.  The outcome is a 27-question instrument called the Metacognitive Activities Inventory that the authors claim is valid and reliable.  Tests for instrument reliability revealed Cronbach's alpha values > 0.85, suggesting the survey is indeed reliable.  While the face validity of the instrument seems sufficient, the evidence for content validity is not as strong.  The authors note that further studies will address the relationship between self-reported metacognition use and complex task solutions.  I found it interesting that the authors were unable to extract specific factors through factor analysis; they attribute this to the interrelated nature of metacognition skills, a characteristic other researchers have also reported. The paper provides a useful example of survey development and analysis.
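
For reference, here is a minimal sketch (my own illustration; the paper reports the alpha values but no code) of how Cronbach's alpha is computed from a respondents-by-items score matrix:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """scores: respondents x items matrix of numeric item responses."""
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Toy example: 4 respondents answering 3 Likert items.
    scores = np.array([[4, 5, 4],
                       [2, 3, 2],
                       [5, 5, 4],
                       [3, 3, 3]])
    print(round(cronbach_alpha(scores), 2))  # 0.96, i.e., highly internally consistent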


Dunlosky, J. and J. Metcalfe. 2009. Metacognition. Sage Publications, Inc.: Los Angeles, CA.

This book provides an overview of metacognition research.  The authors describe the different methods and analyses used to study metacognition and basic metacognitive judgments.  The chapters on Judgments of Learning (JOLs) and Education are most pertinent to my interests. A JOL is an individual's assessment of his or her own learning.  Most of the studies performed by psychologists are done in the lab and involve students learning paired words.  As I was reading about these studies, I noted that while this is useful for providing evidence for metacognition theories, it is not particularly important in considering the role of metacognition in higher-order cognitive tasks in an applied setting.  Indeed, the authors concede this point and further note the complexities of education-based research in the Education chapter.  The Education chapter explained the relationship between metacognition and models of student self-regulated learning.  This book provided useful background information and explained some of the theories guiding metacognition research.

 

Isaacson, R. and F. Fujita. 2006. Metacognitive knowledge monitoring and self-regulated learning: academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning 6: 39-55.
This study tested the hypothesis, put forth in several previously published reports, that high-achieving students have better metacognitive knowledge monitoring skills than low-achieving students.  Since I am interested in applying my research to a group of students who have struggled to succeed in science courses (low math placement students, disproportionately underrepresented minorities), this is the type of research that is of particular interest to me.  Before each exam, students answered a series of questions about how they studied for the exam, what they expected to score on the exam, and what score was necessary to meet their goals and expectations.  Immediately following the exam, students did postdictions -- they estimated their performance on the exam just taken, as opposed to predictions made before taking the exam.  The exam format was very intriguing.  It comprised 40 questions: 18 knowledge/comprehension (1 point each), 18 application (2 points each), and 4 analysis/synthesis (3 points each).  Students selected 30 of the 40 questions.  In order to get an A in the course, students had to correctly answer some of the more difficult questions.  The authors did extensive statistical analysis to determine the relationships among these different factors.  Their overall conclusion is that high-achieving students had better metacognitive monitoring skills than low-achieving students. The study design is interesting and I may be able to adapt and/or build upon these ideas.
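
A small arithmetic sketch (my own, based only on the exam structure described above) of why the 30-of-40 format forces students to attempt the harder items:

    # Point value of each of the 40 items: 18 knowledge/comprehension (1 pt),
    # 18 application (2 pts), 4 analysis/synthesis (3 pts).
    items = [1] * 18 + [2] * 18 + [3] * 4

    best_30 = sum(sorted(items, reverse=True)[:30])  # attempt the 30 highest-value items
    easiest_30 = sum(sorted(items)[:30])             # attempt only the 30 lowest-value items

    print(best_30)     # 56 points available when hard items are attempted
    print(easiest_30)  # 42 points is the ceiling from the easiest 30 alone

So a student who avoids all the analysis/synthesis questions caps his or her possible score well below a student willing to attempt them.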

Martin, B., Mintzes, J., and Clavijo, I.  2001. Restructuring knowledge in Biology: cognitive processes and metacognitive reflections. International Journal of Science Education 22: 303-323.
This study examined learning by college students in an upper-level biology course and their own understanding of how they learned. Students completed concept maps 4 times during the semester, and the maps were evaluated for changes in structural complexity.  Changes in the concept maps were determined for categories such as concepts, relationships, hierarchies, and branches, and the scores served as a quantitative assessment of learning.  Students also completed a survey called the Inventory of Learning Processes as a measure of student metacognition.  Individuals with the 5 highest and 5 lowest scores participated in clinical interviews to further assess metacognition; responses from two interviewees were included in the paper (one high scorer and one low scorer).  Among other conclusions, the authors suggest that "successful learners in the natural sciences may excel in self-awareness, and the ability to monitor, regulate and control their own learning."  I believe their work provides strong evidence for that argument.  This paper addresses meaningful learning in college-level biology and the effect of metacognition on that learning.  This is dissimilar to many of the psychology studies that use the learning of paired words to measure metacognitive abilities and is therefore much more applicable to my own study.
************** 

 

1)  Novak JD (2002), "Meaningful Learning: The Essential Factor for Conceptual Change in Limited or Inappropriate Propositional Hierarchies Leading to Empowerment of Learners", Science Education , 86 (4), 548-571. (http://www3.interscience.wiley.com/journal/94518600/issue) 

Builds on Ausubel's work to define meaningful learning and what must be present for students to incorporate new knowledge into their current understanding.  Focuses on concept mapping as a technique and its use to develop deeper, richer understanding of content. Describes why we must help students rebuild their faulty concept maps.  While not the focus of the question I will research, this is the ultimate goal as I see it, and I do not want to lose sight of it while examining the current question.

2)  Van Hoewyk D (2007), "Using a Case-Study Article to Effectively Introduce Mitosis", Journal of College Science Teaching, 36 (6), 12-14.  (http://proquest.umi.com/pqdlink?did=...=309&VName=PQD)

Describes the method used in a non-majors Biology class to introduce the difficult concept of mitosis. Students are given an article from the New York Times that discusses the age of cells in the body before they receive instruction on the topic in lecture.  Lecture begins with discussion of the article, followed by standard lecturing on mitosis with short group work at the end of class.  Will likely be helpful as a template to think about the method of implementing my ideas.

3)  Herreid CF (2007), "Start with a story: the case study method of teaching college science", NSTA Press. (http://books.google.com/books?id=HdX...page&q&f=false)

Everything you ever wanted to know about case studies.  My method for implementation will most likely be a case study type approach, and this book will be useful for understanding case study research and methodology. While the focus is clearly on implementation in smaller groups and classes, there is a section on applying this to large classes like mine.

4)  Osborne J, Simon S, and Collins S (2003), "Attitudes towards science: a review of the literature and its implications", International Journal of Science Education, 25 (9), 1049-1079. (http://dx.doi.org/10.1080/0950069032000032199) 

A comprehensive review of what is known about student attitudes toward science.  Surveys the past 20 years of research and looks to the future for further research and implications for teaching.  Understanding what is known about student attitudes will be crucial to crafting appropriate exercises because engagement will only occur if I can combat the inappropriate attitudes or find a way to increase task value.

Quote = "... a better understanding of the attributes of science classroom activities that enhance ‘task value’ might make a significant contribution to how the quality of students’ experience might be improved. Eccles and Wigfield (1995) describe ‘task value’ as the degree to which an individual believes that a particular task is able to fulfill personal needs or goals and it consists of three components: interest, or the enjoyment that a student derives from engaging in a task; importance, or the degree to which a student believes it is important to do well on a task; and utility, or the degree to which an individual thinks a task is useful in reaching some future goal." 

5)  Feldon DF (2010),"Why Magic Bullets Don't Work", Change, March/April 2010. (http://www.changemag.org/Archives/index.html) 

Discusses cognitive load and how it relates to students trying to learn new material.  Gives helpful ideas as to how to manage cognitive load in a lecture.  My goal in this research will be to decrease the cognitive load on the students by addressing point 1 below. 

Quote = "Thus, the three driving principles of CLT (cognitive load theory) are: 1) present content to students with appropriate prior knowledge so that the intrinsic load of the material to be learned does not occupy all the available working memory, 2) eliminate extraneous load, and 3) judiciously impose germane load to support learning."

6)  Hulleman CS and Harackiewicz JM (2009), "Promoting Interest and Performance in High School Science Classes", Science 326:1410-1412. 

Succinctly discusses the setup of an experiment in high school classes (many things need to be thought about before beginning my intervention).  The objective was to see whether connecting to science on the students' own terms was helpful and motivating for the lower achievers.  Used minute papers/essays throughout the semester to gauge what students were learning.  Does indeed show improvement for lower achievers.

7)  Wu J (2009), "Linking Assessment Questions to a Research Article to Stimulate Self-directed Learning and Develop High-order Cognitive Skills in an Undergraduate Module of Molecular Genetics", CBE—Life Sciences Education, 8:283–290. 

Discusses an experimental module that was used to incorporate research articles into an early undergraduate class.

"The present research aimed to develop a novel continuous assessment (CA), in which test questions were linked to research articles. Integration of a research article into CA can bring concepts to life, cultivate curiosity, and nudge students to become antotelic or self-directed through the creation of ever-new application questions without concern about repetition. This new assessment mode goes beyond traditional assignment of scores and brings students into a new learning landscape. For comparison, research articles and linked questions were also administered to students in lectures and tutorials instead of incorporation with a CA in different semesters. Our results showed that only the novel CA model could effectively motivate students toward self directed learning." 

Used interviews, questionnaires, LMS scores, etc.  The CA module works and motivates students toward self-directed learning.

 

In considering the relevant literature for my project, I have split the project into two major parts: (1) what preexisting understandings and behaviours influence student success in first-year biology classes and (2) how and what needs to be evaluated to understand student learning gains in our existing and newly designed courses.  In the first instance, I was interested in understanding how characteristics like motivations, approaches to learning and past educational success influenced student success and satisfaction in university biology classes.  

 

I started from Biggs' work on the Study Process Skills Questionnaire (Biggs, J.B. (1987). Student Approaches to Learning and Studying. Research Monograph. Hawthorn, Australia: Australian Council for Educational Research.), Zeegers' long-term work on student approaches to learning (Zeegers, P. (2001). Approaches to learning in science: a longitudinal approach. British Journal of Educational Psychology 71(1), 115-132.) and information on the CLASSE (Classroom Survey of Student Engagement) tool developed at the University of Alabama (http://assessment.ua.edu/CLASSE/Overview.htm).  In reviewing this and other literature, it became clear that in addition to choosing an effective methodology, there were issues with some of the available survey tools; while the Canadian and American education systems are similar in many respects, there are substantial differences related to Canadian culture and context.  The first "article" that I have chosen in my bibliography is based on a large project led by Noel Entwistle at the University of Edinburgh that evaluated student learning in many universities in the United Kingdom.  I feel that this comprehensive project and the resulting survey instruments are carefully considered and fit better with the goals of my project and the culture of my students.  For the second part of my project, I investigated the various ways that researchers have measured gains in students' conceptual understanding of energy, evolution and information.  I have chosen articles that have informed our pre- and post-assessment instruments and also highlight the challenges and debates around creating concept inventories generally.

 

Hounsell, D., & Entwistle, N. (2005). Enhancing Teaching-Learning Environments in Undergraduate Courses. Retrieved from http://www.etl.tla.ed.ac.uk/publications.html 

This site outlines a large, discipline-specific teaching and learning project completed in the United Kingdom aimed at understanding the factors influencing student learning and academic success, with the goal of enhancing the learning environment.  It contains an overview of the project, all survey tools, and links to publications arising from the project.  Of particular interest to me were the measurement instruments used in the project (http://www.etl.tla.ed.ac.uk/publications.html#measurement).  This summer, I have modified and piloted two of these instruments in our classes: the Learning and Studying Questionnaire and the Experiences of Teaching and Learning Questionnaire.  I like these instruments because the prompts are based on student comments (although many had to be modified because of language differences) and they focus both on the students' approach to learning and on how the activities in the classroom aided (or not) in their understanding of course material.  The surveys can be modified to focus on a specific course unit/module or on an entire course (as in my case).

 

Smith, J. I. & Tanner K. (2010). The problem of revealing how students think: Concept inventories and beyond. CBE-Life Sciences Education, 9, 1-5.
With the publication of the Force Concept Inventory for physics education (modeling.asu.edu/R&E/FCI.PDF), similar instruments have been under development for other disciplines.  Smith and Tanner (2010) describe how the potential power (and indeed promise) of concept inventories to measure gains in student understanding, scientific literacy and even the teaching effectiveness of faculty must be tempered with a critical evaluation of their effectiveness.  They discuss the significant impact that concept inventories have had on driving pedagogical developments in both physics and biology education but question whether these inventories actually measure conceptual understanding and should be used to inform instruction.  They highlight problems associated with the vocabulary and the format of the tests.  Further, they question whether the learning gains associated with pre- and post-assessment should necessarily be attributed to learning in the classroom and whether they give information about the depth of conceptual understanding.  They focus the last part of their discussion on a concept I find very interesting - the maturation of learners from novice to expert thinkers.  This article helped me to consider how we could more effectively measure gains in student learning, possibly by using a blended approach between multiple choice and open-response questions.  This article provides context for the articles by Nehm and Schonfeld (2008) and Anderson (2002) discussed below.

  

Anderson, D. L., Fisher, K. M. & Norman, G. J. (2002). Development and Evaluation of the Conceptual Inventory of Natural Selection. Journal of Research in Science Teaching, 39(10), 952-978. 

The Conceptual Inventory of Natural Selection (CINS) consists of 20 multiple choice items designed to assess students' understanding of natural selection.  The CINS was developed based on student responses to interview questions and focuses on 10 major concepts in natural selection.  The CINS was tested and validated with 270 undergraduate students taking a non-majors biology class.  The results indicated that the final version of the CINS had good internal validity and was therefore a reasonable tool to support "constructivist and socioconstructivist learning".  Although this instrument has been used by many faculty to measure gains in student understanding of evolutionary processes, the criticisms discussed by Smith and Tanner (2010) should be carefully considered.  The CINS does not provide researchers with an indication of why a student answered a question in a particular way; it could be that the student truly understands the conceptual underpinnings of the question, but it could equally be that they guessed the correct answer or that the distractors were poor or do not accurately reflect the misconception held by the student.  Further, I found that the repetition in the wording of the questions across the three scenarios used (finches, lizards and guppies) in some instances compromised questions in the test. Although we have chosen some of the questions from this test for our own instrument, in most cases we have substantially modified the questions to be simpler for students to understand without the long setup for the multiple choice questions (a copy of the CINS can be downloaded at the following link: bioliteracy.colorado.edu/Readings/Natural%20Selection%20CI.pdf).

  

Nehm, R. H. & Schonfeld, I. R. (2008) Measuring Knowledge of Natural Selection: A comparison of the CINS, an open-response instrument and an oral interview.  Journal of Research in Science Teaching, 45(10), 1131-1160.   

In this paper, the authors compare the ability of three methodologies to identify student misconceptions regarding natural selection.  They compared the CINS (described above) with an open-response instrument (ORI) and face-to-face student interviews.  While they found that the CINS and ORI are valid instruments, student responses varied from those in the interviews.  Furthermore, I was concerned to see that the internal reliability statistics for the CINS in this study in some cases varied dramatically from those reported by Anderson et al. (2002).  For instance, the most difficult question identified by Anderson was found to be the "easiest" in this study.  Based on the commentary and results in this paper, we decided on a mixed approach to our pre- and post-assessments in which both multiple choice and open-response questions would be asked.  It is important to note that while the criticisms of the CINS in this paper seem reasonable, the student population tested was biology majors rather than the non-majors tested in Anderson et al.'s original paper.  I think that this paper highlights the difficulty of identifying misconceptions around evolutionary theory, but I found it very helpful in considering the design of our assessment instruments.

  

Entwistle, N. (2010). Taking Stock: An overview of key research findings. In J. Christensen Hughes & J. Mighty (Eds.). Taking Stock: Research on Teaching and Learning in Higher Education (pp. 15-57). Montreal and Kingston: McGill-Queen's University Press, Queen's Policy Study Series. 

There are so many things that I like about this article!  It contains an excellent discussion of a number of research projects aimed at understanding how teaching and learning environments impact student learning (including the ETL project mentioned above).  Additionally, this is one of the first articles that I have read that discusses philosophical/psychological aspects of learning, knowledge, etc. without making my eyes glaze over.  I found that the figures in the paper were not only helpful in explaining concepts like dualism/relativism, but the heuristic model of interacting influences on student learning really helped me to think about my research project and how I approach teaching.  I just received the book a few weeks ago but it seems like a real gem.  I have had a chance to read a number of the articles and they are all very thought-provoking.

 

 

1) Chaplin, Susan.  "Assessment of the Impact of Case Studies on Student Learning Gains in an Introductory Biology Course."  Journal of College Science Teaching, v39: n1. p. 72-79, Sep 2009.

This study examined student performance in a lecture-based vs. a case-study-based Introductory Biology course.  The results of the study showed that case-based teaching that emphasized problem solving and discussion significantly improved student performance on exams throughout the semester.  Additionally, case-based studies enhanced students' abilities to correctly answer application- and analysis-type questions.  This study directly relates to my research question of how to improve students' abilities to apply what they learn as well as enhance their analytical thinking skills.

  

2) Herreid, Clyde.  "The Wisdom of Groups." Journal of College Science Teaching, v 39: n2. p. 62-64, Nov 2009.

Dr. Herreid, a well-known leader in implementing case studies in group format at SUNY Buffalo, discusses the influence of group work in developing problem-solving skills.  He discusses how he is able to utilize the case-study method in a group setting.  He also discusses the richness of creating diverse groups to enhance problem-solving outcomes.  I have actually attended one of Dr. Herreid's case-study workshops as a post-doc at Emory University.  We simulated the implementation of the case-study method in a group setting.  I have carried out the case-study method in my own classroom in a group setting.  However, I am interested in taking it to the "next level" by determining the influence of having students write their own case studies and act as the mediator for the class in carrying them out.

  

3) Henderson, Charles; Heller, Kenneth; Heller, Patricia; Kuo, Vince H.; & Yerushalmi, Edit.  "Instructors' Beliefs and Values about Learning Problem Solving."  Physics Education Research Conference (Rochester, NY, July 2001).

This presentation focused on hypotheses regarding faculty beliefs about how their students learn to solve problems in their introductory courses.  The authors utilized structured interviews and a concept-map-based analysis to assess common core faculty beliefs.  They found that faculty believe that students learn problem solving primarily through a process of reflective introspection while they practice solving problems and get assistance from example problem solutions.

  

4) Knowlton, Dave S.  "Preparing Students for Educated Living: Virtues of Problem-Based Learning across the Higher Education Curriculum."  In: New Directions for Teaching and Learning, n95, p. 5-12, 2003.

Knowlton argues that the process of becoming educated and the development of problem-solving skills are parallel and directly related.  He argues that it is the responsibility of professors to engage students in problem-based learning, and that professors who choose to ignore this responsibility are not going to develop educated individuals.

 

5)  Ertmer, Peggy A.; Stepich, Donald A.  "Case-Based Instruction in Post-Secondary Education: Developing Students' Problem-Solving Expertise."  Research report, 1999 (source not listed in ERIC).

This study directly relates to my research question: Do case studies effectively improve students' problem-solving skills and analytical thinking skills?  The study explored changes in students' problem-solving skills as they analyzed instructional design case studies during a semester-long course. It is based on 19 students at two Midwestern universities who analyzed six to ten case studies as part of their course assignments.  The researchers collected both quantitative and qualitative data. Comparisons were made both within and across students, as well as across time, to examine patterns and changes in students' problem-solving approaches. The researchers found that students did not perform like experts on a regular basis and that coached expertise or external factors were more influential than internal factors.  Additionally, suggestions for supporting the development of students' problem-solving skills within a case-based course are included.

 

1. Assessing the Factors Deemed to Support Individual Student Intrinsic Motivation in Technology Supported Online and Face-to-Face Discussions. Shroff & Vogel (2009). Journal of Information Technology Education Vol 8: 59-86.

Summary:  

Research has established that intrinsic motivation has a positive effect on learning and academic achievement. In order to investigate the phenomenon of intrinsic motivation in technology supported learning environments, this paper investigates the factors deemed to support individual student intrinsic motivation in online discussions. A research model is presented based on research into motivation, and the specific areas of self-determination and curiosity provide a framework for the model. 

 Notes: 

This article is a useful starting point because it referenced several other articles on the relation between motivation and learning. It also introduced me to the idea of "intrinsic" versus "extrinsic" motivating factors. The report provides the questionnaire that the authors developed for their study, so that will provide a good starting point. However, I am not sure if adapting the entirety of the questionnaire is appropriate for what I would like to achieve. I would like a more lightweight questionnaire so that the assessment process does not feel like a burden to the students. One other encouraging aspect of the article is that the study contains a sample size of 77, making it a target that I can achieve at my institution.

  

2. Motivation in action: Towards a process-oriented conceptualisation of student motivation. Zoltan Doernyei (2000). British Journal of Educational Psychology, Vol 70: 519-538.  

Summary:  

It is argued that the 'time' dimension is relevant to the study of motivation in at least two crucial areas: to account for (a) how motivation is generated and (b) how it fluctuates and further develops over time. A focus on the temporal dimension is particularly important for the understanding of student motivation because in prolonged learning activities such as mastering a school subject a major motivational function is to maintain the motivational impetus for a considerable period (often several years) against a number of distracting influences. In order to illustrate the temporal conception of motivation, a 'Process Model of Student Motivation' is presented and various theoretical pros and cons are discussed. Finally, practical implications are demonstrated by providing a taxonomy of motivational strategies rooted in the process-oriented approach, with one specific aspect, the students' action control and self-motivation, specially highlighted in order to show the compatibility of the approach with current research on student self-regulation.

 Notes: 

This is one of the articles referenced in the Shroff & Vogel article. This article presents a very detailed explanation of conceptualizing motivation as a function of time in a classroom. The article was useful for understanding the theory behind motivational studies, but it does not contain a lot of immediately usable material.

  

3. Science Motivation Questionnaire: Construct Validation With Nonscience Majors. Glynn, S.M., Taasoobshirazi, G., & Brickman, P. (2009). Journal of Research in Science Teaching, Vol 46: 127-146.  

Summary:  

 This study examined how 770 nonscience majors, enrolled in a core-curriculum science course, conceptualized their motivation to learn science. The students responded to the Science Motivation Questionnaire, a 30-item Likert-type instrument designed to provide science education researchers and science instructors with information about students’ motivation to learn science. The students’ scores on the Science Motivation Questionnaire were reliable and related to students’ high school preparation in science, GPA in college science courses, and belief in the relevance of science to their careers. An exploratory factor analysis provided evidence of construct validity, revealing that the students conceptualized their motivation to learn science in terms of five dimensions: intrinsic motivation and personal relevance, self-efficacy and assessment anxiety, self-determination, career motivation, and grade motivation. Women and men had different profiles on these dimensions, but equivalent overall motivation to learn science. Essays by all of the students explaining their motivation to learn science and interviews with a sample of the students were used to interpret Science Motivation Questionnaire scores. The findings were viewed in terms of a social-cognitive theory of learning, and directions for future research were discussed. 

  

Notes: 

This article provided a 30-item questionnaire that measures student motivation in learning science. These authors broke down motivation into 5 dimensions, where each is addressed by a set of questions in the questionnaire. I am uncertain whether I can simply use this questionnaire in a content-specific class.  

  

Methods: 

The authors constructed a Likert-style questionnaire to assess student motivation to learn science in terms of 5 different components. They administered the questionnaire to a sample of 770 students and performed statistical analyses on the results (Quantitative). The questionnaires also contained a section that allowed students to provide written feedback (Qualitative).

Conclusions: 

1. The quantitative and qualitative results are in agreement. 

2. The different components have correlations with each other.  
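
As a hypothetical sketch (the item-to-dimension mapping below is invented for illustration; the real item assignments are given in the paper), subscale scores for a multi-dimensional Likert instrument like this one might be computed as follows:

    import numpy as np

    # Hypothetical grouping of the 30 items into the five SMQ-style dimensions.
    dimensions = {
        "intrinsic_motivation": range(0, 6),
        "self_efficacy":        range(6, 12),
        "self_determination":   range(12, 18),
        "career_motivation":    range(18, 24),
        "grade_motivation":     range(24, 30),
    }

    # Simulated responses: 770 students answering 30 items on a 1-5 Likert scale.
    responses = np.random.randint(1, 6, size=(770, 30))

    # Each student's mean score on each dimension.
    subscores = {name: responses[:, list(items)].mean(axis=1)
                 for name, items in dimensions.items()}
    print({name: float(s.mean()) for name, s in subscores.items()})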

  

4. Factors influencing academic performance of students enrolled in a lower division Cell Biology core course. Soto, J. & Anand, S. (2009). Journal of the Scholarship of Teaching and Learning, Vol 9: 64-80.  

  

Summary:  

Students’ performance in two semesters of our Cell Biology course was examined for this study. Teaching strategies, behaviors, and pre-course variables were analyzed with respect to students’ performance. Pre-semester and postsemester surveys were administered to ascertain students’ perceptions about class difficulty, amount of study and effort put into the course, and professional goals. Chi-square (χ2) tests of independence showed that completion of chemistry requirements, passing the laboratory component of Cell Biology, homework, and attendance were related to passing our course. Logistic regression showed that perfect attendance followed by GPA, were the most important factors associated with passing the course. 
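
A hedged sketch (invented counts, not the authors' data) of the kind of chi-square test of independence described above, e.g. attendance vs. passing the course:

    from scipy.stats import chi2_contingency

    # Rows: good vs. poor attendance; columns: passed vs. failed.
    table = [[45, 10],
             [20, 25]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p indicates attendance and passing are related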

 Notes: 

This article provides an example of a publication on examining pedagogical effects on student learning in a scholarly context. It resembles what I have in mind in terms of publications from my own future work. It offers a good template for a manuscript. The article is also about cell biology, which is the same subject area that my research will be conducted in.  

 Methods Used:  

The study used Quantitative methods in several ways. It measured course grades against several criteria, such as attendance and grade in the pre-requisite for the class. It also compared learning differences in two sections of the class that used different teaching techniques (lecture available on CD-ROM).

The study also used a Qualitative measure, a post-course survey. However, the paper did not include this survey.

 Conclusions: 

1. Attendance is positively correlated with passing the course. 

2. Most students did not feel that completion of the pre-requisite helped them in the current course, and the statistical analyses concurred.  

3. Based on the pre/post surveys on study habits, students' expected study habits differed from their actual self-reported study patterns.

4. The use of the CD-ROM, the frequency of use, and the time of use did not affect whether students passed the course or not.

  

5. Clickers in the Large Classroom: Current Research and Best-Practice Tips. Caldwell, J.E. (2007). CBE Life Sciences Education Vol 6: 9-20.

Summary:  

  

Audience response systems (ARS) or clickers, as they are commonly called, offer a management tool for engaging students in the large classroom. Basic elements of the technology are discussed. These systems have been used in a variety of fields and at all levels of education. Typical goals of ARS questions are discussed, as well as methods of compensating for the reduction in lecture time that typically results from their use. Examples of ARS use occur throughout the literature and often detail positive attitudes from both students and instructors, although exceptions do exist. When used in classes, ARS clickers typically have either a benign or positive effect on student performance on exams, depending on the method and extent of their use, and create a more positive and active atmosphere in the large classroom. These systems are especially valuable as a means of introducing and monitoring peer learning methods in the large lecture classroom. So that the reader may use clickers effectively in his or her own classroom, a set of guidelines for writing good questions and a list of best-practice tips have been culled from the literature and experienced users. 

 Notes: 

This article is not related to assessing student motivation, but one of my current projects is to figure out whether the data I gathered from my students on the use of clickers is publishable. So this article provides a good introduction to the topic and offers many references to follow up on.

 

 

Herreid, C. F.  Case studies in science:  a novel method of science education.  (1994).  Journal of College Science Teaching.  23(4), 221-229.

  

Herreid, a longtime supporter and developer of the case method in the science classroom, reports anecdotally that case methodology benefits those students who are typically deterred by the atmosphere of the lecture-based science classroom.  He reports higher attendance rates in his case-based courses as opposed to lecture courses, the acquisition of higher-order thinking skills in his students, and an emphasis on learning as opposed to memorizing facts.  He views case methodology as a way to "show students how their esoteric learning impacts on the world and is dependent on political and social currents" (Herreid, 1994).

 

Yadav, A., Lundeberg, M. A., DeSchryver, M., Dirkin, K., Schiller, N. A., Maier, K., & Herreid, C.F.  (2007).  Teaching science with case studies:  a national survey of faculty perceptions of the benefits and challenges of using cases.  Journal of College Science Teaching.  37(1), 34-38.

Many of Herreid's anecdotal findings (above, Herreid 1994) are supported by case-teaching faculty.  In 2006, a national survey was conducted to capture faculty's experiences with the case method in the science classroom.  One hundred thirty-nine faculty were identified via a roster from a national case study teaching conference and invited by email to participate in the survey.  A response rate of 73% was obtained; the respondents were mostly teaching at the university level, with 4% teaching at the high school level.  Twenty-three states were represented in the responses, and 62% of respondents were women.

The results of the survey showed that faculty perceive case methodology as a pedagogy that can address some of the common problems associated with teaching science. A majority of faculty (93.8%) agreed that students were more engaged in class when cases were used and that students were better able to apply course content to practical applications (91.3%). Faculty also mostly disagreed (87.5%) with the notion that students retained less course content when cases were used.  

 

Herreid, C. F. (2006). "Clicker" cases: Introducing case study teaching into large classrooms. Journal of College Science Teaching, 36(2), 43-47. 

 

Ebert-May, D., Brewer, C., & Allred, S. (1997). Innovation in large lectures: Teaching for active learning. BioScience, 47(9), 601-607. 

 

Burrowes, P. A. (2003). A student-centered approach to teaching general biology that really works: Lord's constructivist model put to a test. The American Biology Teacher, 65(7), 491-501. 

 

 

My literature search for articles discussing learning (educational, instructional) objective specificity and effectiveness was not as straightforward as I would have thought. I learned that an abundance of sources from the 1960s through the 1990s described how to write good learning objectives. Given the widespread use of learning objectives, I expected to encounter many recent studies regarding their effectiveness, but I had difficulty finding them. I found ERIC to be the most helpful of the suggested databases, although I could not locate the majority of the references I would have liked to see. I used bibliographies to guide my search to some extent, although this was difficult since many of the articles I was starting with were rather old. One thing I did find helpful was conducting searches on the websites of specific journals, once I was able to identify them. Two journals that I found particularly useful were CBE--Life Sciences Education and the Journal of Research in Science Teaching. 

  

1. Mager, R. F. (1997). Preparing Instructional Objectives [electronic resource]: A Critical Tool in the Development of Effective Instruction (3rd ed.). Atlanta: CEP Press. 

  

This electronic textbook is written by Robert Mager, who is credited with initiating the movement toward using learning objectives in general education. The book presents a foundation for writing effective learning (instructional) objectives. It focuses on the "do's and don'ts" of writing objectives and describes the essential elements of a good learning objective as: (1) the performance expected; (2) under what conditions; and (3) the criteria of acceptable performance. It stresses specificity as being generally favorable; however, its sense of specificity is a little different from what I had in mind, contrasting more with "vague" than with "general". A section is devoted to how much detail to include in objectives. In it, Mager pushes for maximum detail, implying that leaving out detail and causing the student to study more than was necessary is a betrayal or deception; I do not necessarily agree with this. The "under what conditions" part of the objective seems to be what I would like to manipulate in my research. 

  

2. Naz, B.A. “Presentation on Instructional Objectives”. Institute of Education and Research. Gomal University, 2009. 

  

This presentation is not peer-reviewed, but I found it to be a good introduction to the subject of instructional objectives, including references. The author distinguishes between “informational” and “instructional” objectives. The former does not include conditions and criteria (using Mager terminology).  

  

3. Crowe, A., Dirks, C. & Wenderoth M. P. (2008). Biology in bloom: Implementing Bloom’s taxonomy to enhance student learning in biology. CBE--Life Sciences Education, 7, 368-381. 

  

This peer-reviewed article describes a tool (the Blooming Biology Tool, BBT) developed to assist faculty in college life science courses in aligning their expectations and teaching methods, as well as to enhance student study habits. Emphasis is placed on matching course activities with learning goals (performance expectations). The authors discuss a masking effect that occurs when a higher Bloom's-level concept is provided ahead of time and then asked about on a test: it becomes a recall-level question in that case, which aligns with my fear of providing too much detail in objectives. The authors also developed Bloom's-based Learning Activities for Students (BLASt) to help students prepare at different levels (i.e., develop different study strategies). The strategy for a lecture course involved group activities centered on discussing the Bloom's level of questions posed during the lecture ("blooming"). Metacognition (students' ability to monitor their own learning) was also emphasized. Providing students with data on how they performed at each Bloom's level throughout the course helped them make adjustments during the quarter (using the BLASt techniques). Overall, this paper presented a very thoughtful system for improving a biology course; however, the results were anecdotal. I think I could use some of the concepts as a framework for my own research, while seeking evidence of their success. 

  

4. Babin, P. (1987). Instructional Objectives. A publication of the Teaching Resources Service, University of Ottawa. 

  

This monograph/teaching guide caught my attention because it addresses a couple of issues in which I am specifically interested: (1) how precise instructional objectives should be and (2) consideration of differences among students when constructing objectives. Different viewpoints are addressed regarding these issues, but nothing is tested; only anecdotal evidence or opinion backs the viewpoints. It states that "if it has not been stated as an objective, it will not be on the exam." I feel this is oversimplified, in that there are different ways to state things in learning objectives; specific concepts can be grouped and generalized, for example. The benefits of continually refining objectives are outlined. 

  

5. Phillips, J.A. Instructional objectives in economics teaching: Philosophy, Effect, and Extent of Use. Source unknown. 

  

Although the source of this paper is unknown (I found the full-text article in the ERIC database), I am including it because it offers the unique perspective of economics education. I find this interesting because the costs and benefits of instructional objectives are analyzed very thoroughly, as an economist would. Instructional objectives are defined similarly to other sources, with the addition of providing a rationale for learning. The author argues for very explicit learning objectives, and even states that it is important to consider the order of presentation of such objectives to students to ensure that concepts build from simple identification to complex application. The author states that the literature lacks evidence of the effects of instructional objectives (although this paper probably predates 1980). The author cites one of his own papers in which he failed to demonstrate a significant gain as a result of the use of instructional objectives (citation provided). A few citations are provided, however, pointing to studies in other disciplines that did find benefit from instructional objectives. 

 

 

1.       Seymour, E., A.-B. Hunter, S. L. Laursen, and T. Deantoni. 2004. Establishing the benefits of research experiences for undergraduates in the sciences: First findings from a three-year study. Science Education 88:493-534; and Hunter, A.-B., S. L. Laursen, and E. Seymour. 2007. Becoming a scientist: The role of undergraduate research in students' cognitive, personal, and professional development. Science Education 91:36-74. 

This pair of works represents the most complete study of student learning and development in a research-intensive atmosphere in the literature today. The authors interviewed students and faculty who participated in summer research programs at small liberal arts colleges. They found that the undergraduate research experience is a powerful tool: independent research apprenticeships (in which faculty/student pairs work on a research project) significantly affect students' cognitive development and personal growth. These apprenticeships involved attendance and presentations at scientific meetings, and such experiences clarified and confirmed student decisions to become scientists. Students who participated in intensive summer research programs displayed significant gains in their abilities to think and work like a scientist. While these results are not surprising, there were other unexpected outcomes: increased confidence, improved communication skills, and greater clarification of professional goals. I hope to pattern at least part of my project after this study. The authors made significant use of interviews, and while interviews provide rich information, they are also labor-intensive. Some of their questions could be molded into a more efficient format. 

  

  

2.       Carson, S. 2007. A new paradigm for mentored undergraduate research in molecular microbiology. CBE-Life Sciences Education 6:343-349. 

The author inserted research experiences into the classroom: students transformed pepper plants with iron utilization genes as part of a biotechnology course. The study generated data on the effectiveness of the approach and found large increases in student research skills (measured by a self-reported questionnaire) after the intensive research experience. Although participating faculty spent more time on this project than on a more traditional laboratory, the approach took less time than mentoring individual research projects. The project offered a good questionnaire that I could use to measure student acquisition of research skills. 

  

3.       Lopatto, D. 2007. Undergraduate research experiences support science career decisions and active learning. CBE-Life Sciences Education 6:297-306. 

This project measured the effectiveness of undergraduate research programs at many participating institutions using the Survey of Undergraduate Research Experiences (SURE), a tool for assessing educational experiences after students participate in a summer research internship. SURE can also determine whether undergraduate research programs achieve their goals of attracting and supporting research talent, especially among minority students. Most students (57-62%) who entered summer research programs did not change their post-baccalaureate plans to pursue a career in the sciences, and only a small percentage (2-3%) reported changing their plans based on their summer research internship. However, students' critical thinking and learning abilities increased significantly after their research experiences, indicating the value of the programs. I am glad to know the SURE evaluation tool exists, and I will consider it for use in my project. I would like to see more objective assessment of gains in student abilities, however; relying on student self-reported responses may skew the outcomes. Testing of abilities before and after the research experience by an independent evaluator would provide more robust data, as in the toy sketch below. 
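As a purely illustrative sketch of that kind of independent pre/post comparison, a paired t-test on before-and-after test scores is one simple option; all the numbers below are hypothetical, not data from any of these studies.

```python
# A minimal pre/post comparison: paired t-test on hypothetical ability
# scores for the same eight students, graded before and after the
# research experience. None of this comes from the SURE data.
from scipy import stats

pre_scores = [62, 55, 71, 48, 66, 59, 73, 50]
post_scores = [70, 61, 75, 60, 72, 64, 80, 58]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```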

  

4.       Desai, K. V., S. N. Gatson, T. W. Stiles, R. H. Stewart, G. A. Laine, and C. M. Quick. 2008. Integrating Research and Education at Research-Extensive Universities with Research-Intensive Communities. Advances in Physiology Education 32:136-141. 

This work is typical of much of the literature I reviewed, and I include it only for illustration. The authors had the great idea of pairing a graduate student with a team of undergraduates to perform research in animal physiology. The research teams performed laboratory experiments, participated in research workshops, received mentoring from the graduate student, and shared their findings via a web-based system. This is a great idea, but the data they shared involved only the numbers of students participating and the goals of the program; unfortunately, no studies were done of the effectiveness of the approach. While the lack of data is frustrating, I'll try to incorporate the idea of peer mentoring, perhaps by a senior-year student, and the web-based data dissemination approach into my research classroom. 

  

5.       Kinkel, D. H., and S. E. Henke. 2006. Impact of undergraduate research on academic performance, educational planning, and career development. Journal of Natural Resources and Life Science Education 35:194-201. 

This project followed a cohort of undergraduates after they participated in a research mentor program that included performing and presenting original research. These students performed significantly better academically (a 0.5-point increase in GPA), graduated sooner, and got jobs sooner than students in the control group. This work displays the benefits of studying students over longer time frames and maintaining contact with them after they graduate. 

 

My project will focus on assessing the effect of introducing Research Oriented Learning Activities (ROLA) into the undergraduate Pathogenic Microbiology laboratory course at the University of Maryland (UMD). I have been developing these activities with the Host Pathogen Interaction (HPI) teaching group at UMD. Three ROLA activities were implemented in the Fall semester of 2009 and I have worked this past year on improvements to implement this Fall. 

  

1.      Marbach-Ad, G., Briken, V., El-Sayed, N., Frauwirth, K., Fredericksen, B., Hutcheson, S., Gao, L., Joseph, S., Lee, V., McIver, K. S., Mosser, D. M., Quimby, B. B., Shields, P., Song, W., Stein, D. C., Yuan, R. T., and Smith, A. C., "Assessing Student Understanding of Host Pathogen Interactions Using a Concept Inventory", J. Microbiol. Biol. Edu. 10:43-50, 2009. 

  

This article describes the development of the HPI concept inventory that I have used for the past four years to assess student learning in pathogenic microbiology. The step-by-step process of developing and implementing the inventory in a series of HPI-associated classes is reported, and some preliminary student data are given. The most interesting point of the article to me is the iterative nature of the process and the emphasis that this iteration is required to successfully develop an assessment tool. The major result presented was the observation that students retain the knowledge from the prerequisite course (general microbiology) as they enter the upper-level HPI courses. 

  

2.      Phillips, A., Robertson, A., Batzli, J., Harros, M., and Miller, S. “Aligning Goals, Assessment, and Activities: An Approach to Teaching PCR and Gel Electrophoresis”, Cell Biology Education 7: 96-106, 2008. 

  

This paper reports the development and implementation of a lab activity aligned with specific learning goals, and assessment of the activity based upon those learning goals. As I begin to think about publishing the lab activities I have developed for the pathogenic microbiology course, I have used this article as a template for designing my activities and, hopefully, for assessing them this Fall. In particular I like the flow chart given in Figure 1 and the breakdown of learning goals into broad and specific goals shown in Table 1. I have all of my students' work from one of the ROLA activities that I implemented last Fall, and I hope to assess it (during the second half of this summer) in a manner similar to how the authors assessed their students' work, to identify basic student misconceptions that I may address, and then to re-assess this Fall. 

  

3.       Anderson, T. R., and Schönborn, K. J., "Bridging the educational research-teaching practice gap – Conceptual understanding, Part 1: The multifaceted nature of expert knowledge", Biochem. Molec. Biol. Educ. 36, 309–315, 2008.  

 

Schönborn, K. J., and Anderson, T. R., "Bridging the educational research-teaching practice gap – Conceptual understanding, Part 2: Assessing and developing student knowledge", Biochem. Molec. Biol. Educ. 36, 372–379, 2008.  

  

In the first article of this two-part series, the authors describe the nature of expert versus novice knowledge and discuss what types of cognitive skills are crucial for students' development of expert conceptual understanding. I found this article helpful in clarifying the continuum of learning and in identifying various modes of assessing the different levels of learning along that continuum. Since pathogenic microbiology is an upper-level course that builds on the knowledge gained in the general microbiology course, I realize that I need to teach the more expert-level skills and properly assess those skills. 

In the second article, the authors briefly review the instruments and approaches available for measuring conceptual understanding and cognitive skills related to life science. They conclude by proposing a series of questions for assessing the cognitive skills necessary for conceptual understanding in biochemistry. 

  

4.      Howard, D.R., Miskowski, J.A., Grunwald, S.K., and Abler, M.L., "Assessment of a Bioinformatics across Life Science Curricula Initiative", Biochem. Molec. Biol. Educ. 35, 16-23, 2007. 

  

This article describes the use of three different assessment tools for analyzing the effectiveness of an integrative, multi-departmental bioinformatics science curriculum: a student self-assessment of learning, a content exam, and faculty surveys. Of interest to me was the idea of using both problem-solving and detailed recall questions to separate the true 'experts' from the novices. In addition, their results indicate that students' self-assessments mirrored their results on the content exam. Ultimately, the authors conclude that although results from the three evaluation techniques agreed with each other, each technique provided unique and valuable assessment information. Based on this article, I hope to utilize all three forms of assessment for the ROLA activities that I will implement in pathogenic microbiology this Fall. 

  

5.      Wood, W.B., “Innovations in Teaching Undergraduate Biology and Why We Need Them”, Annu. Rev. Cell Dev. Biol. 25, 5.1-5.20, 2009. 

  

This is an excellent review by Bill Wood of the field of discipline-based education research (DBER) that discusses a number of pedagogical approaches to increasing student learning. Of particular interest to me were the six course/classroom design principles listed, as well as his discussion of the various pedagogies. I plan to refer to this review often as I assess the three ROLA activities I implemented in pathogenic microbiology last Fall and work to improve them for implementation this coming Fall. 

 

This short bibliography pertains to the broad field of undergraduate research education, and more specifically to literature on undergraduate research experiences (UREs), so that I can begin to see whether anything exists on my large research question of 'How does student management software impact undergraduate research experiences?' Thus far, it appears that the idea of student management software, or even a focus on strategies to aid in mentoring large numbers of undergraduate research students, has not been addressed. Some literature exists on the benefits of UREs, so this may be the place to start for assessing how the software that we are developing will impact the URE. Some literature also exists on challenges to undergraduate research mentoring; this can be used to provide a rationale for using the software and perhaps can be developed into an assessable research question.  

Annotated Bibliography:  

1.   Sadler, T.D. & McKinney L. (2010). Scientific Research for Undergraduate Students: a Review of the Literature. Journal of College Science Teaching 39, 43-49.   

This is an important article since it is a comprehensive review of 'empirical studies of undergraduate research experiences in order to critically evaluate the outcomes of these efforts'. It helps to set the stage for the work that we want to pursue, and it also gives concrete ideas for what gains may be measured for UREs. It focuses on learning outcomes such as career aspirations, confidence, nature of science, intellectual development, content knowledge, and skills. Since my research question looks at impact, focusing on these key areas will help to build the knowledge in that area.  

  

2.   Crowe, M.  & Brakke, D. (2008) Assessing the Impact of Undergraduate-Research Experiences on Students: An Overview of Current Literature. CUR Quarterly, 28 (4),43-50.    

The article is available online at http://www.cur.org/quarterly/jun08/summer08CroweBrakke.pdf (accessed April 20, 2009). This is one of my favorite articles since it reviews the impact of undergraduate research experiences, underscores the flaws in many studies, and critically assesses the state of research in this area. It is a wonderful article for quoting, as it looks critically at the state of URE and our knowledge of it.   

  

3.   Wenzel (2003). Enhancing Research in the Chemical Sciences at Predominantly Undergraduate Institutions. A report from the Undergraduate Research Summit.   

(http://abacus.bates.edu/acad/depts/c...el/summit.html) (Accessed April 20, 2009).  

This is an interesting article, although a bit old. It was the result of a summit of PUIs and gives some insight into the problems and pitfalls of undergraduate research at these institutions. There is a lot of commonality between the sentiments raised and the state of my institution, so it is a good background reference for putting my work into context.  

  

4.   U.S. Department of Education, Report of the Academic Competitiveness Council, Washington D.C. 2007. (http://hub.mspnet.org/index.cfm/14287) 

This is an important reference since it looks broadly at STEM education and its pitfalls on a national basis: 'Officials from federal agencies with education programs aimed at improving America's competitiveness in science, technology, engineering, and mathematics (STEM) engaged in a yearlong endeavor to assess their programs' success and to identify areas for improvement for current and future programs. This effort, carried out by the Academic Competitiveness Council (ACC) and led by Secretary of Education Margaret Spellings, lays the groundwork for sustained collaboration among STEM education programs across federal agencies that will greatly strengthen America's competitiveness.'  

  

5.   Adedokun, O.A., Dyehouse, M., Bessenbacher, A., & Burgess, W. D. (2010). Exploring Faculty Perceptions of the Benefits and Challenges of Mentoring Undergraduate Research, Paper presented at the Annual Meeting of the American Educational Research Association (Denver, CO, Apr 30-May 4, 2010)  

This paper notes that 'the involvement of undergraduate students in the research process has evolved from a "cottage industry" into "a movement"'. UREs are increasingly becoming a critical component of baccalaureate STEM education. The paper notes that the literature focuses on 'the benefits to students' with little or no examination of the benefits and challenges to participating faculty. Using the cognitive apprenticeship model as a theoretical framework, descriptions of the benefits and challenges accruing to faculty are drawn from analysis of their responses to open-ended questions. Of significance to my work is the question 'What challenges did you encounter in your involvement in the UR experience?' Timing and scheduling were the greatest challenges faculty faced (recognized by 45% of respondents). In addition, faculty recognized the other academic commitments of the students and the challenges of scheduling research activities around students' classes. This is a good plug for some of the benefits of student management software in addressing these challenges.  

  

Other relevant references to the area:  

1.   Lopatto, D. 2007. Undergraduate research experiences support science career decisions and active learning. CBE Life Sci Educ. 6:297-306. Follow-up to Lopatto, D. 2004. Survey of undergraduate research experiences (SURE): First findings. Cell Biology Education 3:270-277.  

Lopatto is one of the few investigators evaluating UREs on a large scale, using HHMI-funded UREs.   

2.   NSF-funded URE opportunities: 'Research Experiences for Undergraduates' (http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5517&from=fund) (Accessed April 20, 2009)  

3.   National listing of URE Programs (http://www.the-aps.org/education/ugsrf/SumResLINKS.htm) (Accessed April 20, 2009).  

4.   STEM Education Coalition (http://www.stemedcoalition.org/) (Accessed April 15, 2009).  

5.   The National Conferences on Undergraduate Research (NCUR) (http://www.ncur.org/ugresearch.htm) (Accessed April 20, 2009).  

6.   National Collegiate Inventors and Innovators Alliance, 2006 conference, Institutionalizing Entrepreneurship at Primarily Undergraduate Institutions (http://www.nciia.org/conf_06/papers/pdf/kussmaul.pdf) (Accessed April 20, 2009).  

7.   Adhikari, A. and Nolan, D. 2002. But "What Good Came of It at Last"?: How to Assess the Value of Undergraduate Research. Notices of the American Mathematical Society, 49(10), 1252-1257. (http://www.ams.org/notices/200210/comm-nolan.pdf) (Accessed April 20, 2009).  

8.   NSF 02-057: The 2002 User-Friendly Handbook for Project Evaluation, a basic guide to quantitative and qualitative evaluation methods for educational projects (http://www.nsf.gov/pubs/2002/nsf02057/start.htm) (Accessed April 20, 2009).  

9.   NSF 97-153: User-Friendly Handbook for Mixed Method Evaluations, a monograph “initiated to provide more information on qualitative [evaluation] techniques and how they can be combined effectively with quantitative measures” (http://www.nsf.gov/pubs/1997/nsf97153/start.htm)   

10.  Online Evaluation Resource Library (OERL) for NSF's Directorate for Education and Human Resources, a collection of evaluation plans, instruments, reports, glossaries of evaluation terminology, and best practices, with guidance for adapting and implementing evaluation resources, http://oerl.sri.com/home.html  

11.                Field-Tested Learning Assessment Guide (FLAG): This website is designed for Science, Math, Engineering, and Technology Instructors who are interested in new approaches to evaluating student learning, attitudes, and performance. It has a primer on assessment and evaluation, classroom assessment techniques, discipline-specific tools, and resources - all in a searchable, downloadable database, http://www.flaguide.org/  

12.                Student Assessment of Learning Gains (SALG): An on-line survey that measures student perceptions of their learning gains due to any components within a course. Faculty can modify a template to match any and all features of their courses, have their students take the survey on-line, and have the data returned to them as either raw data or with simple statistical analysis, http://www.salgsite.org  

13.  Bunce, D., and Cole, R. 2008. Nuts and Bolts of Chemical Education Research (an American Chemical Society publication), Oxford University Press.  

14.                Grabowski, J. J. , Undergraduate research efficacy web page, http://cwt4.chem.pitt.edu/ugrad/reu/efficacy.htm (Accessed April 15, 2009)  

 

 

  

I have been writing a start-up grant in conjunction with my Biology Scholars Research residency. I wrote it on my residency project; I figured it would be a good idea to have some funding to do this work. I assembled quite a lot of useful resources as I wrote the grant, and I also started to think quite differently about the project. My initial project was focussed on the introduction of new, better, more engaging assessment items for UG science students. I was also keen on getting them to see the value of each assessment item while reducing their perception of "risk" in attempting something new. The more I read, however, the more I started to wonder "Are the students even ready for lots of new, better, more exciting assessment items?" and, more to the point, "Are the lofty goals I had in mind achievable, given the risk-averse nature of the undergraduate student?" I have decided to scale down a bit, and first work out how ready our students are for empowered assessment. After that, I will know how far to push the envelope on assessment choice without creating a volcano of disquiet amongst the students.  

  

In my grant application I say: "I propose three questions that I believe are important for the improvement of BIOC2000 and of other generalist undergraduate courses in the Faculty of Science at UQ. Firstly, other than grades, what do the students want from their second year biochemistry and molecular biology course experience? Secondly, although we consistently test students with “science centric” methods, are students receptive to the idea of altering assessment regimes to provide student-empowering mixed-ability learning opportunities while maintaining academic rigour and minimizing “costs”? Lastly, what ideas do students have for new assessment items that they see as relevant and valuable to them?"  

  

Useful References: 

  

1) Francis, R.A. (2008) 'An investigation into the receptivity of undergraduate students to assessment empowerment', Assessment and Evaluation in Higher Education, 33(5), 547-557.  

This is a really nice paper that gives a working model (including survey questions and methodology) for assessing student receptivity to empowered assessment. I found its discussion of "empowerment" very useful. It incorporated philosophical ideas about motivation that I hadn't considered before. It also talks about power structures in education, and identifies three important factors that shape student interest in taking risks:  

  

(1) the role of the lecturer and confidence in the lecturer as assessor,  

  

(2) their personal understanding of the assessment process and criteria to which they are currently subjected, and  

  

(3) the potential for empowerment to take place at the community rather than the individual level.  

  

Abstract quote: "This paper presents the results of a pilot study into the receptivity of first- and third-year undergraduate geography students to various mechanisms and concepts associated with assessment empowerment. Some receptivity to empowerment relating to choice of assessment was observed in first-year students but the greatest receptivity was found in third-years, at both individual and community empowerment levels. Third-year students displayed an increased desire for assessment choice, criteria choice and community empowerment, and decreasing confidence in the lecturer as assessor. Based on these initial results, a methodology for incorporating assessment empowerment into undergraduate teaching is outlined." 

  

2) Patall, E.A., Cooper, H., & Civey Robinson, J. (2008) 'The effects of choice on intrinsic motivation and related outcomes: A meta-analysis of research findings', Psychological Bulletin, 134(2), 270-300. 

  

I found this article helpful because it describes a meta-analysis in which the authors attempt to quantify the effects of activity choice on intrinsic motivation, effort, task performance, and perceived competence displayed by the choosers. This is valuable because there are so many papers floating around in which the effect of choice is tested on either a very small group or in a very specialised scenario, and it becomes difficult to determine how effective such a choice might be in your own educational situation. I am interested in providing assessment choice to students, but I would like to do it in a manner that does not increase their stress and decrease their motivation. The authors provide an illuminating introduction in which they discuss self-determination theory, the idea of learned helplessness, and the variables associated with choice. These variables include (i) type of choice, (ii) number of choice options, (iii) the presence of an external (extrinsic) reward for choosing, (iv) the presence of a control group who did not get a choice, and (v) the presence of pressure to choose a particular option. The findings of the study were rather disappointing: mostly there seemed to be no effect of choice on any of the outcomes tested. The authors were, however, able to determine that there is a significant effect associated with the number of choice options available. Two to four options produce the highest levels of satisfaction; below this (obviously) is no choice, and above this the choice becomes too stressful and takes too much time. If anyone is good at statistical analysis methods, I would like to talk to them about this paper, because I don't understand some of the terms they used in their stats (what, for example, are "yoked" studies?). A toy sketch of the basic effect-size arithmetic follows. 
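For what it's worth, the basic quantity a meta-analysis like this pools across studies is the standardized mean difference (Cohen's d) between the choice and no-choice groups. Here is a minimal sketch with invented motivation ratings, not anything from Patall et al.:

```python
# Cohen's d for a hypothetical choice vs. no-choice comparison.
# The ratings are invented for illustration only.
import statistics

choice = [5.1, 4.8, 5.6, 4.9, 5.3, 5.0]     # motivation ratings, choice group
no_choice = [4.6, 4.4, 5.0, 4.5, 4.8, 4.3]  # same scale, no-choice group

mean_diff = statistics.mean(choice) - statistics.mean(no_choice)
# Pooled standard deviation (the groups are the same size, so averaging
# the two sample variances is a reasonable simplification)
pooled_sd = ((statistics.variance(choice) + statistics.variance(no_choice)) / 2) ** 0.5
print(f"Cohen's d = {mean_diff / pooled_sd:.2f}")
```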

  

  

3) Aldous, C. (2006) Engaging Pedagogies in Mathematics and Science Education: Some Key Ideas, Issues and Implications for Research and Teaching in South Australia. AARE Conference papers, 2006. ISSN 1324-9339 web: http://www.aare.edu.au/06pap/ald06755.pdf 

  

This paper provides an insightful analysis of different ways to think about, or contextualise, science education. I found it full of wise advice and thought-provoking questions. Aldous discusses six important ideas: 

  

(i) equity in science education: the idea that everyone should be given access to science training, and everyone can learn science (not just the "smart" kids),  

  

(ii) science as a form of service to humanity and (iii) science awareness as a basic form of literacy: although governments see scientists and scientific literacy as essential for the health of a society, students often see science as "too hard", "irrelevant", "not a secure job", or just outside their realm of experience and interest. Science is hard, and it does not present a secure career trajectory, so how do we encourage students to go into it? Students these days are not as "noble" as they once were, so the idea of chasing an elusive research goal for the sake of possible glory or contribution to the planet is not appealing to many of them. The simple answer is probably financial incentives: if scientists made more money, they would probably be more socially admired.  

  

(iv) dimensions of knowledge: how do we define scientific understanding? Is it processes, content, or context? How does this impact on our course design and assessment objectives and tools? How does this affect national curriculum design for specific science disciplines? 

  

(v) affective and cognitive responses to maths and science: we tend to focus on the cognitive response to science and maths, rather than fostering an affective response as well. When was the last time you sold science as intrigue, mystery, and wonder, or maths as beauty, elegance, or aesthetics? To me, this is one of the reasons I got into science: the experiments were so beautifully thought out, and that fascinated me on an aesthetic level. Do we help our students see that on a daily basis, and harness that excitement to drive their interest?  

  

(vi) the connection between maths, science, and technology: do we do enough to promote to students the idea that the technology and problems they engage with in daily life are very often solved using science? If not, how do we do it? 

  

  

4) Seymour, E. & Hewitt, N.M. (1997) "Talking About Leaving. Why Undergraduates Leave the Sciences." Westview Press, CO. 

Seymour and Hewitt are sociologists at the University of Colorado. They have culled over 600 hours of interviews with undergraduates from STEM disciplines in which the students discuss the factors that affected retention in science, maths, and engineering (SME) courses. I am reading this book at the moment, and it is an eye-opener about the way in which traditional fire-hose teaching methods for SME subjects affect students. The two major reasons for leaving science are cited as (i) loss of interest in the subject matter (or increasing interest in a different field of study) and (ii) poor teaching by SME faculty. Obviously we can't change the subject matter (science is science), but we can give students the choice of constructing assessment and learning opportunities around topics that interest them. 

  

5) Perdigones, A., Garcia, J.L., Valino, V., & Raposo, C. (2009) 'Assessing heterogeneous student bodies using a methodology that encourages the acquisition of skills valued by employers', Assessment and Evaluation in Higher Education, 34(4), 389-400; and  

 

6) Peat, M., Taylor, C.E., & Franklin S. (2005) 'Re-engineering of undergraduate science curricula to emphasise development of lifelong learning skills', Innovations in Education and Teaching International, 42: 2, 135-146 

Both these papers focus on the development of generic skills in undergraduate students. This is something I am struggling with when talking to colleagues who co-teach courses with me. They are often still in the mindset of teaching content only, and of assessing content memorisation rather than the application of material in real-world scenarios. My colleagues are also unwilling to set assessment items that are aligned with content; they prefer to rely on the final exam, with bad results. Both of these articles gave me inspiration about ways to approach generic skills development in our students while reinforcing content mastery as well. Article (6) is by a lecturer who taught me when I was an undergraduate (Mary Peat); it's exciting to see that she is still an interested and innovative teacher. Article (6) also describes many of the general issues we experience in Australian universities, using the University of Sydney as a case study.   

  

- ERIC, Educational Resources Information Center (http://www.eric.ed.gov/). This is a searchable bibliographic database where you will find many of the references important in educational research. The database is run by the government and is free. There is a special talent to searching using ERIC, and it is described at the end of this document in more detail.  

  

- PubMed. Despite what was said above, there are several education-oriented journals published by scientific societies and science education articles within mainstream science journals. These can be found in PubMed. You can also go directly to some of the top ones, such as Life Sciences Education (http://www.lifescied.org/) and the Journal of Microbiology & Biology Education (http://jmbe.asm.org), which is open-access and searchable. 

  

- Various psychology databases (such as PsycINFO). There is a rich field of educational psychology, the origin of much of the work represented in the How People Learn series of books.  

  

- Mountain Rise. This is one of an emerging set of journals devoted to SoTL research (http://mountainrise.wcu.edu/index.php/MtnRise). 

  

- Journal of Cognitive and Affective Learning. A slightly more established journal that also features SoTL research (http://www.jcal.emory.edu/). Past issues are still available, but the journal is no longer publishing new issues. 

  

- Lesson Study. This site focuses on the use of Lesson Study principles in teaching at the college level, but it has an emerging array of case studies available, too (http://www.uwlax.edu/sotl/lsp/). 

  

- Carnegie Foundation. There are some useful resources on the Carnegie website that may be of interest: 

  - the Higher Ed work in the CASTL program (http://www.carnegiefoundation.org/scholarship-teaching-learning) 

  - the K-12 work in both the Quest and CASTL programs (http://www.carnegiefoundation.org/previous-work/k-12-teacher-education)   

  - the galleries (http://gallery.carnegiefoundation.org/gallery_of_tl/keep_toolkit.html) 

  - and the publications section of their website (http://www.carnegiefoundation.org/publications/index.asp) 

  

- The Peer Review Site (http://www.courseportfolio.org/peer/pages/index.jsp). This site is a wonderfully rich resource of teaching portfolios submitted by professors and reviewed by their peers. 

  

- The Visible Knowledge Project (http://crossroads.georgetown.edu/vkp/). This site contains electronic portfolios from one of the first national SoTL projects involving over 70 professors from institutions across the country. 

 

 

 

I assembled this annotated bibliography to help begin to design my investigation into the question of learning outcomes in face to face compared with hybrid courses. I am hoping not only to learn what is already published on the subject but also to look for methods used to evaluate two different classroom methodologies. The literature I chose deals with comparing two classroom settings, mostly face to face with completely online courses. I found the majority of my papers through the ERIC database.   

 

1.       Lobel  M, Neubauer M, and Swedburg R, “Selected Topics from a Matched Study between a Face-to-face Section and a Real Time Online Section of a University Course”,  International Review of Research in Open and Distance Learning 6(2): 1-17,  July 2005.    

In this article the researchers look at discussion effectiveness in the online versus face to face classroom. The setting is a university interpersonal studies class. They used Kolb's theory of experiential learning to conduct class sessions online in a chat-room style and then in a face to face classroom. This article was interesting to me because it showed some novel methods to measure class connectedness and discussion. The researchers used "Participation Pattern Diagrams" to show that the online students discussed more among themselves, whereas the face to face students discussed more with the instructor. Although I will be measuring other aspects of classroom outcomes first, this is a great article for informing my next step in learning about students' connectedness and interactions online. 

 

2.       Lunsford E and Bolton K, "Coming to Terms with the Online Instructional Revolution: A Success Story Revealed Through Action Research", Bioscene 32 (4): 12-16, December 2006.  

This article deals with biology instruction at the community college level. The authors evaluated learning in two different formats, online and face to face, using an end-of-semester exam, and found identical scores on the "post" test. The sample size was really small, however (9 students), and they only looked at non-majors in their assessment. Although interesting, and suggestive that there is no difference between online and face to face instruction, this article leaves some work undone. 

 

3.       Kushnir LP, “When Knowing More Means Knowing Less: Understanding the Impact of Computer Experience on e-Learning and e-Learning outcomes”, Electronic Journal of e-Learning 7 (3): 289-300, 2009.  

This article examines feelings of student overload in the e-learning environment. It touches on an important aspect of my research question, student characteristics: do students who take online courses have a different learning style or motivation than traditional students? The researchers examined the layout of the information and took into account the experience students had with computers. Contrary to their hypothesis, they found that students with more computer experience did not necessarily do better at learning online than students with less computer experience. 

4.       Smelser LM, “Making Connections in our Classrooms:  Online and Off”,  Paper presented at the Annual Meeting of the Conference on College Composition and Communication, 53rd, Chicago Il, March 20-23 2002.  

This article addresses the hybrid environment, where material is presented online as well as face to face. It looks at the level of instructor interaction as key to the success of online students. It also notes that students who are struggling seem to struggle more in an online environment than face to face students do. This article discussed more than it actually studied, but it certainly gave me some information to work with, and it has good references that need to be followed up.   

 

My interest lies in determining how to use self-assessment as an effective tool for increasing metacognition and learning, particularly in our introductory biology series (which is taken by first-year students).  

  

Andrade H, and Valtcheva,A, “Promoting Learning and Achievement Through Self-Assessment”, Theory Into Practice 48:12-19, 2009. 

This paper focuses on the use of criteria-referenced self-assessment as a key element of formative assessment, in which "students reflect on the quality of their work, judge the degree to which it reflects explicitly stated goals or criteria, and revise accordingly." The authors state that certain conditions, including modeling, cueing, direct instruction, and practice, are necessary for effective self-assessment. Furthermore, student engagement in the process can be increased by following three steps: i) articulating expectations; ii) performing self-assessment; and iii) revising (using feedback from the self-assessment). Prior research on criteria-referenced self-assessment in writing and mathematics indicates both quantitative and qualitative improvements in student work and attitudes.  

This article is relevant to my area of interest because it quite simply and clearly points out basic considerations for effectively using self-assessment in a course. In addition, the article reinforced the need to have revision and clearly defined criteria as essential components of the self-assessment process.    

  

McMillan JH, and Hearn J, “Student Self-Assessment: The Key to Stronger Student Motivation and Higher Achievement”, educational HORIZONS 87(1):40-49, 2008.  

The authors begin by defining and describing the "student self-assessment cycle", which includes the components of self-monitoring, self-judgment, and identifying and implementing correctives. This cycle allows students not only to judge their own work, but to improve it by identifying "discrepancies between current and desired performance." This article reinforces the importance of criteria, as mentioned in the Andrade and Valtcheva article discussed above. In addition, McMillan and Hearn emphasize the role of the student in the process of designing self-assessment, as research indicates that when students are involved in criteria and goal setting, student achievement increases. One area of the paper that I found particularly useful was the discussion of Rolheiser's four stages of teaching student self-assessment, which serves as a "growth scheme" for both the teacher and the student. This article also was relevant to my research because it made me think about how I've been using self-assessment in my classroom up until this point, which is primarily reflective in nature. I'm beginning to see how I'm missing some important components needed to make self-assessment more effective for my students. 

  

Young A, and Fry JD, “Metacognitive awareness and academic achievement in college students”, Journal of the Scholarship of Teaching and Learning 8(2):1-10, 2008.  

Metacognition can be broken down into metacognitive knowledge (what we know about our cognitive processes) and metacognitive regulation (the activities we use to facilitate our learning). In the present study, the researchers wanted to determine whether there was a correlation between metacognition and "broad based measures" of academic achievement: metrics such as GPA and end-of-course grades rather than single measures such as individual test or assignment scores. After providing a review of past studies that used the Metacognitive Awareness Inventory (MAI) in a variety of ways, the authors present their own findings. In summary, they found a strong correlation between the MAI and broad measures, and no significant correlation between the MAI and a single-event test score. My main interest in this article was to learn more about the MAI as a tool for assessing student metacognition. I did learn more about this tool, as well as others that could be helpful to me in the future. In addition, I gained some insight into what types of metrics might actually be helpful to me as I try to evaluate student metacognition and its relationship to student learning gains; a toy illustration of the basic analysis follows. 
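As a concrete illustration of the kind of analysis Young and Fry report, pairing MAI totals with GPAs and computing a Pearson correlation might look like the sketch below; the numbers are invented, not their data.

```python
# Pearson correlation between hypothetical MAI totals and GPAs for
# eight students; invented numbers, not data from Young and Fry.
from scipy import stats

mai_totals = [182, 170, 195, 160, 188, 175, 200, 165]
gpas = [3.2, 2.9, 3.6, 2.7, 3.4, 3.0, 3.8, 2.8]

r, p = stats.pearsonr(mai_totals, gpas)
print(f"r = {r:.2f}, p = {p:.4f}")
```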

  

Nordell SE, “Learning How to Learn: A Model for Teaching Students Learning Strategies”, Bioscene: journal of college biology teaching 35(1):35-43, 2009.  

Many students find it difficult to make the transition from high school to college academics. To address this issue, many schools provide students with some form of study skills workshop in order to help better prepare them for success. I was particularly interested in Nordell's study because it focuses not only on teaching students study skills, but on helping them to be better at self-assessing and identifying their study strengths and weaknesses. This paper presents a relatively detailed explanation of the workshop used in the Department of Biology at St. Louis University; the main topic areas of the workshop are self-assessment of learning techniques and study skills strategies. Results (based on exams before and after the workshop) indicated that students who attended the workshop performed significantly better on exam 2. Consistent with what I see at my own institution, the workshop was attended primarily by students who were already doing well in their biology course. This raises the question of whether low-achieving students don't attend such a workshop due to their inability to self-assess and recognize that they are in need of help; unfortunately, the article didn't really address this question. However, the concrete examples provided in the article have given me solid ideas for techniques to help increase student metacognition in my classes.  

  

Walser TM, “An Action Research Study of Student Self-Assessment in Higher Education”, Innovations in Higher Education 34:299-306, 2009.  

In this study, Walser implemented self-assessment as an instructional tool to "give students responsibility for monitoring and attaining progress and as a way of encouraging students to develop reflection as a personal trait." In contrast to many studies I have seen, where self-assessment is used throughout an entire course, Walser used self-assessment at three specific points in the semester (beginning, middle, end). The majority of students reported that the self-assessment activities were an effective instructional method and that the activities helped them to reflect on their own performance. Walser reports that the activities provided her with useful feedback about the course and also strengthened relationships with students. This study is of interest to me because it presents a way of using self-assessment that is distinct from how I've used it in the past and how I've envisioned using it in the future. This paper also provided some useful insights into student perceptions of self-assessment and its value for their current and future learning. 

 

This annotated bibliography is designed to include articles on improving how students assess their knowledge of the material and structure their learning. I want to try using formative or pre-assessments to get students to understand their own misconceptions about the material and their pre-knowledge of it, and to get a feel for the depth at which they will need to understand the material for success on the summative assessments. Therefore, I need to obtain data on the efficacy of formative assessments, on best practices in structuring formative assessments, and on evaluating the efficacy of formative assessments in student learning. 

  

Cliff W, Freeman S, Hansen PA, Kibble JD, Peat M, and Wenderoth MP, "Is formative assessment an effective way to improve learning? A symposium at Experimental Biology 2008", Adv Physiol Educ 32:337-338, 2008. 

http://advan.physiology.org/cgi/reprint/32/4/337 

This article deals with using formative assessment as a tool to determine student deficiencies and misconceptions. This links directly to what I would like to try with my students: helping them identify areas of weakness prior to their summative assessments. The article is useful in its topic, its scope, and the references it contains, and it represents a base of work on which I would like to build. 

  

Dobson J, “The use of formative online quizzes to enhance class preparation and scores on summative exams”, Adv. Physiol. Educ 32: 297-302, 2008 

http://advan.physiology.org/cgi/reprint/32/4/297 

This article also describes a study on which I would like to build. I like the prospect of using online formative evaluations for two reasons: online evaluations do not take valuable class time, and students can potentially re-assess as needed. It will be a good source for methods of evaluation as well. 

  

Kibble, JD, “Use of unsupervised online quizzes as formative assessment in a medical physiology course: effects of incentives on student participation and performance”, Advan. Physiol. Edu. 31:253-260, 2007. 

 http://advan.physiology.org/cgi/content/full/31/3/253 

Cautions and solutions for students' misuse of formative assessments when incentives are used. Some things to think about…. 

  

  

ERIC #: EJ880311 

Goubeaud K, “ How Is Science Learning Assessed at the Postsecondary Level? Assessment and Grading Practices in College Biology, Chemistry and Physics” J Sci Educ and Tech 19(3):237-245, 2010. 

 Hopefully useful as I learn more about effective methods for formative assessments. 

  

  

ERIC #: EJ877801 

Rossiter D, Petrulis R, and Biggs C, “A Blended Approach to Problem-Based Learning in the Freshman Year” Chemical Engineering Education 44(1):23-29, 2010 

 Web site: http://cee.che.ufl.edu/index.html 

Designing intentional learning by combining lecture, online, and active learning techniques. Hopefully a resource for pulling it all together. 

 

I am currently involved in a research project on how writing-to-learn activities affect the development of ‘ecological thinking’  (Berkowitz 2007) and ecological literacy in college students. As we analyze our data, we realize that we need clearer and more quantitative measures of ecological literacy to pair with our qualitative observations and coding of student writing.  I am now interested in looking into how we could develop a pre/post instrument that could diagnose misconceptions and detect conceptual change. My bibliography reflects my initial foray into this daunting but exciting area of science education research.
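Before the bibliography itself, here is a minimal sketch of one statistic we might borrow: physics education researchers commonly summarize pre/post concept-inventory results with the normalized gain g = (post - pre) / (100 - pre), and the same calculation could be applied to an ecology inventory. The scores below are invented percentages, purely for illustration.

```python
# Normalized gain for hypothetical pre/post concept-inventory scores
# (percent correct); the numbers are invented for illustration.
def normalized_gain(pre: float, post: float) -> float:
    """Fraction of the possible improvement actually achieved."""
    return (post - pre) / (100.0 - pre)

pre_post = [(35, 60), (50, 70), (42, 55), (28, 64)]  # (pre %, post %) per student
gains = [normalized_gain(pre, post) for pre, post in pre_post]
print(f"mean normalized gain = {sum(gains) / len(gains):.2f}")
```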

1) Anderson, D.L., K.M. Fisher, and G.J. Norman. (2002). Development and evaluation of the conceptual inventory of natural selection. Journal of Research in Science Teaching, 39(10), 952-978.
I have used this concept inventory in my classes and am motivated to use it as a model as we think about developing an ecological concept inventory. This paper charts the course to developing, testing, and refining a concept inventory. The last sentence of the paper is especially inspiring: “If this diagnostic test can be used to assess instructional methods or to stimulate conceptual change in these students, we will have met our goal in producing it.”

2) D'Avanzo, C. (2008). Biology concept inventories: overview, status, and next steps. BioScience, 58, 1079-1085.
This paper's title pretty much explains it all; it will be a useful reference since it provides an overview of the efforts in biology. I was intrigued by the Diagnostic Question Clusters and need to look into these more closely. http://www.biodqc.org

3) Garvin-Doxas, K., M. Klymkowsky, and S. Elrod. (2007). Building, Using, and Maximizing the Impact of Concept Inventories in the Biological Sciences: Report on a National Science Foundation–sponsored Conference on the Construction of Concept Inventories in the Biological Sciences. CBE Life Sciences Education, 6(4), 277-282.
This paper is a continuation of the ideas introduced in the 2003 paper by Klymkowsky et al. It is a summary of the efforts to date (well, as of 2007) of various groups of people interested in developing concept inventories involving some aspect of biological knowledge. The participants split into subgroups to explore more specific areas, and I was particularly interested to see that there was an Ecology subgroup. As I look at the current iteration of the BCI, I do not see much in the way of ecological concepts, so I think that the ecology group did not pursue a united effort to the same degree that the genetics group did. http://bioliteracy.colorado.edu

4) Jordan, R., F. Singer, J. Vaughan, and A. Berkowitz. (2009). What should every citizen know about ecology? Frontiers in Ecology and the Environment, 7(9), 495-500.
This paper was very exciting to me as this is the question I think about all of the time during my teaching. It is a fantastic summary of past and present thought on various frameworks of ecological literacy. It discusses the tension between “content” and “values” in the teaching and learning of ecological concepts and their application to environmental issues. The authors call for more discussion involving wider groups of people, but more importantly, they call for people to come to some sort of a resolution on the development of a common framework of ecological literacy.

5) Klymkowsky, M. W., K. Garvin-Doxas, and M. Zeilik. (2003). Bioliteracy and teaching efficacy: what biologists can learn from physicists. Cell Biology Education, 2, 155-161.

My husband is a physics professor and has used the Force Concept Inventory in his classes for several years. I have been intrigued by this and have wondered whether biologists could develop a similar inventory. This paper describes the exciting impact that widespread implementation of the FCI has had on physics teaching. It explores the need for a similar inventory in biology, discusses the impediments to the process, and charts a possible path toward achieving such an endeavor.
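
A side note for my own analysis plans: pre/post FCI results are typically quantified with Hake’s normalized gain, <g> = (post - pre) / (max - pre), which is exactly the kind of quantitative measure our project is missing. Below is a minimal sketch of the calculation in Python; it is my own illustration rather than anything from these papers, and the function name and example scores are hypothetical.

def normalized_gain(pre, post, max_score=100.0):
    # Hake's normalized gain <g> = (post - pre) / (max - pre), the
    # standard metric for pre/post concept inventory scores (Hake 1998).
    # Scores here are percentages on the same instrument.
    if pre >= max_score:
        raise ValueError("pre-test score must be below the maximum")
    return (post - pre) / (max_score - pre)

# Hypothetical class averages: 45% on the pre-test, 70% on the post-test.
print(round(normalized_gain(45.0, 70.0), 2))  # prints 0.45

In Hake’s survey of FCI data, interactive-engagement courses typically achieved <g> above roughly 0.3 while traditional lecture courses fell below it, which is why a single number like this is so useful for comparing instructional approaches.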

6) Libarkin, J.C., (2008). Concept Inventories in Higher Education Science. Prepared for the National Research Council Promising Practices in Undergraduate STEM Education Workshop 2 (Washington, D.C., Oct. 13-14, 2008). Retrieved July 7, 2010 from http://www7.nationalacademies.org/bo...ed_Papers.html

I was happy to find a relatively current, comprehensive overview of the published concept inventories across all science disciplines, in addition to those in development. This paper also contains a great deal of useful information on how these types of assessments are developed.

7) Schneider, S.H. (1997). Defining and teaching environmental literacy. Trends in Ecology and Evolution, 12(11), 457.

I keep this one-page paper tacked to my bulletin board right by my computer in my office. It helps keep me focused on the “big ideas” rather than trying to teach my students every last little fact and idea about ecology. I really like the way the author states that citizens do not need to know all of the facts, but should have a general understanding of scientific methods and a familiarity with the social-environmental interface. Most importantly, they need to be able to ask the “right” questions of scientists: “1) what can happen, 2) what are the odds, and 3) how do you know?”

Other papers I found that are useful:

Anderson, C.W. (2010). Learning Progressions for Environmental Science Literacy. Prepared for the NRC National Standards Framework Committee, March 2010. Retrieved July 7, 2010 from http://edr1.educ.msu.edu/EnvironmentalLit/index.htm

Berkowitz, A.R. (2007). Towards a definition of ecological literacy. Retrieved July 7, 2010 from http://www.ecostudies.org/people_sci..._thinking.html

Berkowitz, A.R., M.E. Ford, and C.A. Brewer. (2005). A framework for integrating ecological literacy, civics literacy and environmental citizenship in environmental education. pp. 227-266 in: Johnson, E.A. and M.J. Mappin (Eds.), Environmental Education and Advocacy: Changing Perspectives of Ecology and Education. Cambridge University Press, Cambridge.

Cutter-Mackenzie, A. and R. Smith. (2003). Ecological literacy: the “missing paradigm” in environmental education. Environmental Education Research, 9(4), 497-524.

D’Avanzo, C. (2008). Symposium 1. Why Is Ecology Hard to Learn? Bulletin of the Ecological Society of America, 89(4), 462-466.

Stamp, N., M. Armstrong, and J. Biger. (2006). Ecological misconceptions, survey III: The challenge of identifying sophisticated understanding. Bulletin of the Ecological Society of America, 87(2), 168-174.

Tanner, K., and D. Allen. (2004). Approaches to Biology Teaching and Learning: From Assays to Assessments—On Collecting Evidence in Science Teaching. Cell Biology Education, 3, 69-74.

Tanner, K., and D. Allen. (2005). Approaches to Biology Teaching and Learning: Understanding the Wrong Answers—Teaching toward Conceptual Change. Cell Biology Education, 4, 112-117.

 

1.  Hoskins S.G., Stevens L.M., and R.H. Nehm.  2007. Selective use of the primary literature transforms the classroom into a virtual laboratory. Genetics 176:1381-1389. 

This article presents a really interesting method of using primary literature in an upper-division majors course. The authors designed a 3-credit-hour course in which students worked through a series of related articles over the course of the semester. The CREATE (consider, read, elucidate hypothesis, analyze and interpret data, and think of the next experiment) method uses four articles from the same laboratory to illustrate the progression of scientific ideas as well as to promote students’ development of analytical skills and familiarity with technical literature. Although I do not plan to develop a similar course (not least because I don’t have any room for another new course in my current teaching load), I found some of the methods described in this paper really interesting, and they could be translated into using pieces of data or single journal articles in my classes. For example, the authors describe their approach to helping students understand complex procedures and link the methods to the data in a particular figure; they ask students to draw cartoons and flow diagrams to illustrate the methods. This strikes me as a good way to make sure that students understand the sequence of important steps in a method without either copy-pasting from the methods section or getting bogged down in details such as buffer composition; I think this technique would be useful for both my non-majors (nursing) and majors classes. I also like the idea of asking students to think of the next experiment based on an individual paper and then having them read the next paper from that lab so that they can see what the investigators actually did. This would be most useful when using literature in my majors classes, as I would be able to devote more time to individual topics in those classes. 

  

 2.  Clark I.E., Romero-Calderón R., Olson J.M., Jaworski L., Lopatto D., and U. Banerjee.  2009. “Deconstructing” scientific research: a practical and scalable pedagogical tool to provide evidence-based science instruction. PLoS Biology 7(12):e1000264. 

 This article also describes a separate course focused on literature and research, rather than incorporating primary articles or data into content-based courses. However, again, I find that it has some interesting ideas that I can adapt to the courses I teach. The course is targeted at 1st- and 2nd-year students from a variety of majors. The format involves having a guest speaker give a typical research seminar; an instructor then spends the next four weeks “deconstructing” the seminar with students, and the researcher comes back for a final class session in the series to answer students’ questions. I have asked students to work through individual figures in an article, but I think it would be an interesting experiment to have them work through a research presentation like this. The most useful thing that I got from this paper was a link to a validated survey on student learning gains from research-related activities, the CURE (Classroom Undergraduate Research Experience) survey. This is a publicly accessible, validated survey administered by faculty at Grinnell College. It would be a great tool for determining what skills, if any, students improve upon in courses where they are exposed to research through primary literature or seminars. Again, this might be most useful for my majors classes, but could potentially be adapted to my non-majors classes. The authors used this assessment to demonstrate that students in this course gained nearly as much, across a variety of skill areas, as a comparison group of students who went through a summer lab research experience. 

  

3.  Russell J.S., Martin L., Curtin D., Penhale S., and N.A. Trueblood. 2004. Non-science majors gain valuable insight studying clinical trials literature: an evidence-based medicine library assignment. Advances in Physiology Education 28:188-194. 

 This article is one of relatively few that I have found so far that describes the use of primary literature in non-majors classes. In this case, the class is a non-majors general studies Human Health course. The authors designed a project called the “Responsible Patienthood Project” in which students choose a disease, find nine sources of information about the disease, and prepare a poster presentation. Those nine sources must include four primary articles. The students choose one of the primary articles to focus on in depth for half of their poster presentation. The authors designed a pre- and post-survey assessing students’ acceptance and trust of information from a variety of different sources. I think that this is an excellent survey tool for finding out whether students are thinking more critically about scientific information after the project. Since one of my goals for using primary literature in non-majors classes is to expose students to original data and make them aware that the data are sometimes summed up too neatly in the “sound bites” of media summaries, I am interested in adapting the pre- and post-survey in this article for my own use. 

  

  

4.  Gehring K.M. and D.A. Eastman. 2008. Information fluency for undergraduate biology majors: applications of inquiry-based learning to a developmental biology course. CBE-Life Sciences Education 7:54-63. 

 This article presents a good example of using multiple modes of assessment of students’ learning from inquiry-based methods of instruction. The authors set up a series of literature-based assignments that were linked with two laboratory projects. As with many of the other references I’ve been reading, this would be most applicable to my majors courses rather than my non-majors courses. The authors combined focus group interviews (hard to do in my large non-majors classes, but a good way to really find out how well students understood the articles they read) with graded assignments and students’ self-assessments of their skills improvement following the inquiry-based activities. Although their data were somewhat limited by the small number of students involved, I thought this was a good example of multiple assessments. 

  

5.  Gillen C.M. 2006. Criticism and interpretation:  teaching the persuasive aspects of research articles. CBE-Life Sciences Education 5:34-38. 

 This essay was very thought-provoking. It describes a problem that I encounter but had not fully articulated: one of my goals is for students from all majors to think critically about scientific and medical information, yet students are usually conditioned to think of anything written in an authoritative source as accurate and infallible. Dr. Gillen discusses the challenges that students face in reading and critically discussing primary literature. First, they must understand what was done, including terminology and techniques; second, they need to be prompted to analyze the data and draw their own conclusions, which may disagree with the authors’ conclusions. Dr. Gillen offers some suggestions for ways to encourage students to draw their own conclusions and think critically about the data. I would like to use some of his suggestions in future literature-based assignments in all of my classes. 

 

 

My research is going to focus on best practices for teaching students in non-majors biology courses. There is a breadth of literature on this topic, and I definitely had trouble narrowing my focus; I found myself chasing down many side roads as I read through my articles. The questions for me become: How do we best engage this population of students so that the deepest learning can occur, in spite of the natural resistance these students have to learning the material we are going to cover? What is the best approach? How do we find out if it makes any difference at all which method we use? 

  

1.     Knight, J. K., & Smith, M. K. (2010). Different but equal? How nonmajors and majors approach and learn genetics.  CBE – Life Sciences Education, 9: 34-44, doi: 10.1187/cbe.09-07-0047 

 This paper is helpful for my project, as it contains relevant observations concerning the attitudes of majors and nonmajors toward learning genetics. Student attitudes, study time, and study techniques were compared, and content knowledge was assessed using a validated test, the Genetics Concept Assessment. I have already used this assessment in my course, with mixed results. The authors suggest that changing the approach for nonmajors so that the course is relevant and topical may help them to be more engaged in the material and the class, which then leads to learning gains. 

  

2.     Russell, J. S., Martin, L., Curtin, D., Penhale, S., & Trueblood, N. A. (2004). Non-science majors gain valuable insight studying clinical trials literature: an evidence-based medicine library assignment. Advances in Physiology Education, 28: 188-194, doi: 10.1152/advan.00029.2003 

 These authors show that non-science majors make significant learning gains when they are assigned a project involving the use of primary literature. The students are allowed to choose a topic for their Responsible Patienthood Project, an assignment that culminates in a poster presentation. Using surveys and questionnaires, the authors concluded that students would be better informed when making decisions about their own health care and more empowered to seek information in appropriate venues, not just WebMD or Wikipedia. This article is another example of how the education of non-majors can lead to significant scientific learning. 

  

3.     Wood, W. B. (2009). Innovations in teaching undergraduate biology and why we need them. Annual Review of Cell and Developmental Biology, 25: 93-112. Retrieved from arjournals.annualreviews.org by University of Richmond 

 This is a wonderful article that touches on almost all the topics I have come across in my reading of the literature. The author offers a summary of different teaching approaches and introduces a term I had not seen before: DBER, or discipline-based education research. He discusses current notions of how students learn and how this knowledge can be applied to the classroom. He points out that most university faculty either are not aware of these findings or do not know how to make their teaching more effective, and he summarizes practices that have been shown to yield promising gains in student learning. There is so much information in this article that I simply cannot summarize everything here. One thought, though, sticks with me, since it speaks to a question I am asking; I quote: “Traditional teaching methods do not prevent the progress of superior students … but they fail the majority of students …” Therefore, changing our teaching methods will benefit the population of students that needs it the most. 

  

4.     Crowe, A., Dirks, C., & Wenderoth, M. P. (2008). Biology in Bloom: implementing Bloom’s taxonomy to enhance student learning in biology. CBE – Life Sciences Education, 7: 368-381. 

 This paper presents the Blooming Biology Tool (BBT), an assessment tool that can also guide the development of classroom teaching activities. I am particularly intrigued by the authors’ description of having the students in the classroom develop questions at all levels of Bloom’s taxonomy. The authors show that study skills improve and that students are capable of learning to use Bloom’s taxonomy to write and identify questions. Students can also apply their knowledge of Bloom’s to their studying practices and evaluate the levels at which they understand the scientific concepts. This finding is of particular interest to me; one of my questions has always been how to help students find out what they do not know. Students always tell me they studied so hard, yet their grades do not reflect their effort. This paper gives me some ideas to help students recognize how they need to study and look at the material. 

  

5.     Wright, R. L., & Klymkowsky, M. W. (2005). Points of view: content versus process: is this a fair choice? CBE – Life Sciences Education, 4: 189-198. 

 I find this article useful for its development of the idea of “a lived curriculum in biology: first do no harm”. I believe the article is important to SoTL because it points out that we are in the midst of a so-called revolution to change the way we teach, yet there are examples dating back to at least 440 BC of scientific literacy being promoted as an important goal. I agree with the authors that we need to draw on what has already been done in scholarly research so we are not constantly re-inventing the wheel. 

 

 

 

 

Attached file: Annotated Bib – Scholars 2010.doc (annotated bib 2010; 231.5 kB, uploaded 14 Jul 2010 by mwenderoth)