As part of UNC’s Master’s in Educational Innovation, Technology and Entrepreneurship (MEITE) program, students are placed at internships. I took a somewhat less conventional route and found an internship that works for me at the Population Council’s Girl Innovation, Research, and Learning Center, where I have been serving as a research assistant since July 2021. I worked the first month and a half full-time (40 hours/week) and eased into part-time (20 hours/week) after MEITE began. Though this can be a balancing act as a full-time student, I’m enjoying getting a fully immersive experience!
Since the beginning of my internship, I have had the immense opportunity to work on a number of projects, including:
- Adolescent Atlas for Action (A3), a data visualization tool geared toward policymakers and other stakeholders who seek a comprehensive understanding of the policy context: which policies are present, where might gaps be filled, and where might there be misalignment between policy and implementation? The tool visually displays a checklist of 56 policies for 113 countries (a rough illustrative sketch of this country-by-policy structure follows this list). The policies cut across 9 domains, including education, gender, sexual and reproductive health, nutrition and more.
- Assisting with a grant proposal, as well as a scoping review analyzing the limitations of existing literature on sexual and gender minority (SGM) adolescents
- Creating a tool to map indicators related to migration, digital access, contraceptive use, and empowerment from about 20 questionnaires/datasets
- Authoring a blog/insights piece on menstrual cup uptake and climate adaptation
- Writing two methods briefs on the A3 process and the Adolescent Data Hub methodology
- Current work: Developing slide decks and webinar materials to onboard incoming interns on the background of: 1) a systematic review examining education and health outcomes and 2) using the Covidence platform to screen abstracts and full texts
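To make the shape of that A3 checklist a little more concrete, here is a minimal, hypothetical sketch in Python of how a country-by-policy dataset like the one behind the tool might be organized. The countries, policies, domains, statuses, and column names below are illustrative assumptions on my part, not the actual A3 data or schema.

```python
import pandas as pd

# Hypothetical records: one row per country-policy pair.
# Values and column names are illustrative only, not the actual A3 data model.
records = [
    {"country": "Country A", "domain": "Education", "policy": "Free secondary education", "status": "Present"},
    {"country": "Country A", "domain": "Nutrition", "policy": "School feeding programme",  "status": "Absent"},
    {"country": "Country B", "domain": "Education", "policy": "Free secondary education", "status": "Present"},
    {"country": "Country B", "domain": "Nutrition", "policy": "School feeding programme",  "status": "Present"},
]

df = pd.DataFrame(records)

# Pivot into the checklist view: countries as rows, policies as columns.
checklist = df.pivot(index="country", columns="policy", values="status")
print(checklist)

# Summarize coverage by domain, e.g., how many of the sampled policies are present per country.
coverage = (
    df.assign(present=df["status"].eq("Present"))
      .groupby(["country", "domain"])["present"]
      .sum()
)
print(coverage)
```

Keeping the data in a long, one-row-per-country-policy format like this makes it straightforward to pivot into a checklist view or to summarize coverage by domain before anything reaches the visualization layer.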
Design Thinking Meets Research
DEFINE
Perhaps my favorite part of the design thinking modules was the section on “defining the wicked problem.” Wicked problems are so often defined singularly, resulting in a response or intended outcome that is tunnel-visioned or reductive. For example, in a workshop on child marriage, researchers pointed out that program interventions often viewed education as a silver bullet for preventing child/early/forced marriage. Specifically, interventions have focused on keeping girls in school, relying on gender parity ratios and enrollment percentages as key indicators of “girls’ education” or “gender equity.”
Working in proximity to researchers has enabled me to see the conflicts that arise between research, development and policy–areas that I had always imagined dovetailed perfectly. Over the last few months, I’ve had the privilege of listening in on webinars and presentations by researchers–many rooted in the non-western contexts they study–who are asking more nuanced questions about the nature and motivations of child marriage, leaning into empathizing with young people. For example, how might adolescents see themselves as having agency when they opt to marry before age 18? Perhaps they see themselves as finding an alternative to an unwanted arranged marriage. Perhaps they see themselves as escaping an undesirable situation in their family households. And in studying these possibilities, how does this shift the problem or point to related issues around child protection, gender-based violence or resources in the system?
At Pop Council, we face a recurring question that is hard to grapple with in research: “What is–and isn’t–working?” I’ve thought deeply about the importance of the “isn’t working” part of this question, asking myself about the pitfalls of deconstructing past interventions as opposed to seeing ourselves as building upon foundations. Ever the optimist, I’ve been challenged to see that deconstructing with a critical eye is necessary, because so much of what was assumed to “work” in the “developing world” was fraught with epistemological concerns and rooted in colonization. Research lends itself well to a cycle of defining and redefining problems. In the process of linking indicators to intended outcomes, it’s easy to get lost in the definition and complexity of the issue.
IDEATE
The ideate space is perhaps the part of the Design Thinking process that is least associated with research, or at least with “traditional” academic research. What I’ve really appreciated about my time on the A3 project has been the opportunity to see how research works in an applied setting, where it is used to develop products geared toward audiences beyond researchers. In terms of the A3 website, my work started after the “ideate” phase, and the 70+ policy indicators I researched had already been brainstormed by a former intern. However, what I’ve learned about the ideate phase in this role is that ideating doesn’t necessarily happen in one neat stage–it is something one comes back to multiple times. In the process of revisiting the list of selected indicators, I found myself eliminating policies, bringing them back, digging for databases, and revising the list over and over again.
PROTOTYPE
The end of October and beginning of November made the A3 project really jump off the page, as we saw the research take form in a website instead of the Word documents and Excel spreadsheets we had been working in.
I particularly reflect on the internal user testing phase, where I observed just how tricky it can be to test within a team. Having worked exclusively on the A3 project, I found myself unsure of what I could possibly comment on as a user, which made me realize just how crucial it is to get an outside perspective (ideally, the target learner’s/user’s perspective), as we’ve discussed in Dr. Ryoo’s course. As we approach our internal and public launches, I continue to consider how we can get user testing from the potential users the A3 targets, particularly policymakers and donor agencies that are considering where development projects might most benefit from funding.
TESTING
I’m finding that testing never quite ends, especially without a checklist! As my supervisor and I keep adding to our notes of prototype features that need to be revisited, I’m attuned to the real-world dynamics of how testing in theory and testing in practice can clash. This is all the more reason to pay attention to the common mistakes made in testing sessions. While my supervisor and I are using fine-tooth combs to catch every spelling error, page cut-off, color shade and other detail, it is no wonder that we might overlook major issues that users might encounter. Additionally, in the communications and threads between us and the developer team that must make updates based on our feedback, it is all the more critical that we are clear with our future testers about what we want feedback on.
I’ve been drafting a list of prompts that I’m considering suggesting we include in our messaging to testers:
- What would you describe as the main feature of this product?
- In your own words, list or describe what the A3 is.
- What are some words that come to mind after navigating the website?
- How do you think this product might be used?
- Who do you think might be able to use this product?
- What features did you like? What features did you not like? Why?
- What features were confusing?