
Project Updates


November 2011
The IRB process was fairly painless but not necessarily very transparent. I submitted the Exempt Category #1 forms (Educational Setting category) in mid-August, after some confusion about which forms were necessary and about the format of the consent form. Surprisingly, this category actually required a signature on the consent form. Although this did not seem to make any sense compared to the other exempt categories, my attempts to get the reasoning behind it (i.e., whether it was driven by aspects of my study or just by the regulations) were not productive. In fact, it was harder to get straight answers from the IRB personnel than it has been, in my past experience, from IACUC personnel. But I could hear Loretta's voice telling us to just go with the flow and do what they want, so I bit my tongue (very hard!), threw logic out the window, and filled out the forms EXACTLY as they wished. That was obviously the correct approach, because the application sailed through and was granted Exempt status! The moral of this story: succumbing to any innate stubbornness (which I have a lot of...) is not the best way to operate, especially with regulators.

A major accomplishment this semester was designing qualitative assessments (i.e., ranked and open-response questions) to determine: (1) which subconcepts within the broad hypothalamic-pituitary-target organ axis concept (see the note below under 'obstacles') the students perceived as most difficult; and (2) whether the students perceived that the progressive series of case studies helped them learn these difficult subconcepts. A very quick look at the responses verified that the subconcept of 'feedback loops' was the most difficult for the students, as I had assumed, and that the students did believe the case studies were helpful. And, as a bonus, in the open-response questions many students commented that discussing the bioethics was interesting and helped them 'want' to learn. I am a bit concerned about how best to design and word these types of assessments, and I wonder how much you can bias the answers you receive. I do think I need to tinker with the ones I used, and I am hoping that the videos in our next assignment will be helpful.

A major obstacle was the use of the same questions in successive quizzes or exams, all within a few weeks of each other. Interacting with the students revealed that the questions were not that challenging and that they could easily remember the question/answer between the quantitative assessments. So I quickly revised the approach and made a list of subconcepts that 'fit under' the general concept of the HPG axis. Then I wrote a set of questions to test the students on each subconcept and used one question from each set, as appropriate, in the quizzes or exams. I am assuming that, since all the questions under a subconcept test knowledge/understanding of that subconcept, it should be reasonable to rotate them to prevent a 'recall bias' of sorts between quantitative assessments. Next year I will take more time to refine these questions. I will also include them on the final exam to test student retention, since the specialized topics we are discussing now at the end of the semester do not concentrate on the HPG axis.
 

I realize this is long-winded, but this narrative has been 'therapeutic' and also serves as a nice record for me of what has happened. Thanks for wading through this, and I would appreciate any feedback :)

Comments
Hi Laura,

Sounds like you are well under way! Hope the bite marks on your tongue have fully healed; glad you got your IRB approved.

I understand why you had to substitute out questions (especially if the pre and post were coming in short order), and sorting them by subconcept is important. I am wondering, though, whether they are all going to be equally difficult, and whether that will need to be considered during your data analysis. Can you perhaps get some senior undergrads (past students) or grad students to take ALL of your questions, then look at the % correct for each one, so you have some kind of difficulty index? That way, if they all miss one on the post-test but it was a really hard one, you can factor that into your analysis. Or, if you use scantrons, you should be able to get a report that will give you an indication of the difficulty of each question. I am sure there is a way to handle this, but I would suggest keeping it in mind... In addition to "difficulty" (which could be due to all kinds of things, including just wording), you could also characterize the questions by, e.g., Bloom's taxonomy (and maybe you already have); that might be another way to address this issue. If we could ask the students infinitely many questions, it would probably all sort out in the end, but given the constraints of doing this in the context of a class, it is something to keep in mind.
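Just to make the '% correct' idea concrete, here is a rough sketch in plain Python; the CSV layout, column names, and 30% cutoff are all made up for illustration, not anything specific to your data or gradebook format:

# Rough sketch: per-question difficulty index as the fraction of
# field-testers who answered correctly (hypothetical CSV layout).
import csv
from collections import defaultdict

def difficulty_index(csv_path):
    """Return {question_id: fraction correct} from rows of the form
    student_id,question_id,correct (where correct is 1 or 0)."""
    attempts = defaultdict(int)
    correct = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            qid = row["question_id"]
            attempts[qid] += 1
            correct[qid] += int(row["correct"])
    return {q: correct[q] / attempts[q] for q in attempts}

# Flag items that fewer than ~30% of field-testers got right, so misses
# on those can be weighted or discussed separately in the analysis.
if __name__ == "__main__":
    for qid, p in sorted(difficulty_index("field_test.csv").items()):
        flag = "  <-- hard item" if p < 0.30 else ""
        print(f"{qid}: {p:.0%} correct{flag}")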

And, as you said, you can still refine the questions and maybe field-test them with a group of grad students (or UGs, whatever the appropriate audience is). If you are doing that, you could also take some time to run a focus group and ask them what they thought each question was about... (and whether it was what you thought it was about).

I'll be looking forward to seeing the results!

-Michele S.
Posted 16:05, 28 Nov 2011