Part 1: Worksite: At the “A-Team” (associate deans and unit directors in academic affairs) meeting, I sat next to the head of Academic Advising, Anna Lynn Bell. I mentioned that I’d love to talk with her about transfer student advising, since my problem of practice for this program involves improving completion rates on the MREST. She immediately said, “Of course, and please, and after Thanksgiving?” I am happy to have made a contact outside the libraries who will be an important partner in this work.
The topic for the A-Team meeting was orientation dates for transfer students and incoming freshmen for the next three years. I learned several things from this. First, it confirmed for me that JMU admits approximately 1,000 transfer students per year. Second, I had not realized that these students start in every semester (not just the fall). About 670 start in the fall and attend orientation in May/June; 300 start in the spring and attend orientation in December; and about 20 start in the summer and attend orientation in March. This complicates outreach to these students, but it also potentially gives me the ability to study one group and then implement an intervention with another. Third, Anna Lynn noted that advising for transfers is handled by full-time advisors in her office, not by departmental faculty. This is a thread I’ll want to pursue; if that arrangement persists for the students’ entire career at JMU, then it will be easier for me to work with the advisors than if they were distributed across the colleges.
Part 3: Observation: I attended the second JMUse Cafe session of the academic year on Tuesday, 11/15/2016. The topic was “STEM and Society: Public Roles of Scientists and Engineers.” JMUse Cafe is a periodic open session sponsored by a group of organizations at the university (including LET) that aims to “provide an informal and lively forum for students, faculty, staff and the Harrisonburg Community to explore together topics of public interest.” This year the primary partner is the JMU Civic Engagement Task Force (CETF), so each of the four topics addresses civil discourse in a different discipline. The first event focused on journalism, this one was about science, and the next two are about the arts and business.
The session was set up with a panel of four experts plus a moderator who is both an engineering faculty member and a member of the CETF (Elise Barrella). Panelists were Brianne Kirkpatrick, owner of WatershedDNA (a genetic counseling firm); Zachary Pirtle, a NASA engineer; Brad Reed, a planner with the Virginia Department of Transportation; and Emily York, a JMU faculty member in Integrated Science and Technology. When I counted, there were 36 audience members, of which 13 appeared to be undergraduates and two were children (one my own). I was surprised that there were fewer students than normal; it is possible that the timing of the event (these are usually held on Thursdays, but with JMU closed starting Friday at 5pm that was an unappealing option) suppressed attendance. Students are also often offered extra credit for attending; we see dramatic swings in student attendance based on this.
The session started in a pretty standard format for a panel presentation: Elise asked four questions that each panelist addressed. The first question was about the biggest challenge facing science communication. Every one of the panelists described some aspect of wickedity, though without directly referencing the theory. They talked about complexity, the inability to identify all of the variables, competing visions of problem and solution, and the problem of unintended consequences. The fourth question, about the role of facts and scientific literacy in public discourse and planning, resulted in an honest-to-goodness “wicked problems” reference in the wild. The VDOT planner said, “What people don’t realize is that transportation problems are wicked, in which interests are plural and knowledge is limited.” I nearly fell out of my chair, I was so excited!
After this section of the session, the room broke up into small groups to discuss an ethical question that had been submitted by one of the panelists. This was a nice way to address some of the comments that the panel had made while applying concepts to a real-world news example. We did this twice, with the whole group coming back together after each one to share thoughts. The first case was about self-driving cars and whether they should be programmed to preferentially protect the driver over pedestrians in the event of a crash, and the second case was about genotyping newborns.
The final question to the panel was “What is the most pressing science issue in your field?” Every one of them talked about policy matters – how to engage the public, how to educate the public about the difference between “theory” and “scientific theory,” how to help people understand the importance of sustainability, and how to engage the public to make change. It was a very interesting panel, and a nice example of the overlap between science and politics.
Part 3: Readings:
Lewis, C. (2015). What is improvement science? Do we need it in education? Educational Researcher, 44(1), 54–61. doi:10.3102/0013189X15570388
This paper reviews and applies a large body of literature surrounding improvement science, including Langley’s 2009 work, in the context of education. As compared to experimental research, this branch welcomes and incorporates variation and modification among sites into the scaling up of educational innovations. Lewis presents four behaviors specific to improvement science:
- Recognize and understand how experimental and improvement science differ, and when to use the one or the other. Improvement science is particularly appropriate in settings where variation could be significant, or capacity across varying situations needs to be built.
- Recognize and understand that “generalizable knowledge” can take different forms, including tools, descriptions of change processes, and the knowledge contained in people’s heads.
- Create space for “practical measurement” that emphasizes brief and specific assessments, including ones that (as Mertler would say) “polyangulate” the data.
- Learn from programs that are outside the tight context, by building capacity for evolution through understanding the local context and building “ownership of change” among practitioners.
Lewis concludes: “Education research would benefit from inclusion of improvement science, which has methods tailored to rapid prototyping and testing, tools for detecting and learning from variation, and affordances to learn from widely different contexts. Improvement science recognizes the capacity of practitioners to engage in disciplined inquiry using tools and ideas from another site and not only to faithfully implement a researcher-designed program. Improvement science also recognizes the right of practitioners to use currently available innovations, rather than waiting for a ‘proven’ program. Finally, improvement science recognizes that educational improvement will not occur solely through advances in basic disciplinary knowledge but will also require advances in ‘the system of profound knowledge’ needed to enact it.”
Part 4: Integrations: There are some interesting overlaps between Lewis’ description of improvement science and Sutton & Rao’s recommendations for effectively scaling up organizations. In particular, I am struck by Lewis’ emphasis on encouraging “Buddhist” approaches to educational program scaling, as well as the use of “load busters” such as tools for assessment and implementation. I also see some of these in the dissertation that I reviewed, including in Lindsey’s apparent surprise that instructors were less Buddhist in their implementation of her TISS tools than she had expected. Another connection from Lewis back to previous readings is to Hargreaves et al. and Bentley (2nd International Handbook of Educational Change): the buy-in of front-line teachers is necessary for scaling up, and for the implementation and diffusion of change.
Questions to keep in mind:
- How can I build plan-do-study-act cycles into the ongoing research on supporting transfer students with the MREST?
- If the difference here from action research is the inclusion of “different types of generalizable knowledge” (such as tools), how can I include those in my work?
- What are the ethical nuances of focusing on transfer students?
- How can I bring together the different players at the university with localized knowledge to form a more cohesive & effective support program for these students?
- From whom can I learn, in a similar but not identical context?