Goal Organization: From Chaos to Coherence
October 06, 2016
In a previous post, we detailed the four steps we’re following to build learning trajectories from the computer science education research literature:
- Collect goals from the literature;
- Sort the goals into manageable groups;
- Order goals and groups into progressions; and
- Illustrate the goals via instructional activities.
We’re excited to be coming to the end of our focused efforts on collection. Though we expect that more articles will be added to the database over time, we have nearly completed our reviews of our initial pool of scholarly literature. From 109 articles, we extracted 678 learning goals. The collection included empirical studies, analyses of student artifacts, explanations of taxonomies, comparisons of programming environments, and more. That variety of article types is mirrored in the learning goals themselves, which run the gamut in terms of content, support, and level of specificity.
Here is just a sampling:
- “[U]nderstand the limits of computers and computation.” (Armoni & Gal-Ezer, 2014, p. 56)
- “Recogniz[e] the need for … step-by-step instructions.” (Dwyer et al., 2013, p. 135)
- “[U]se a solution as a component in a larger problem.” (Fuller et al., 2007, p. 165)
- “[A]ppropriately choose among loop constructs.” (Taylor et al., 2014, p. 270)
Excited to have such a rich collection of learning goals to work with, we sat down to begin the process of sorting and ordering the goals. And we promptly realized that we were not sure what to do next.
Our goal is to look for similarities across goals and create groups that support generalizations such as this: “Four experts have theorized that children in grades K-2 can effectively debug programs with simple errors. Two empirical studies support this theory, providing evidence that Kindergarteners and first graders can fix bugs that involve changes in sequence or missing instructions.” But alas, inspection of the data did not suggest clear ways to do this. Manual sorts by different staff members led to disparate results, and subtle differences among goals made us nervous about automated methods. The key question we’re struggling with is: To what extent should our value judgements be used to organize the goals? That is, when is human judgement necessary for this task, and when does it bias the results?
To answer this question, we chose a subset of goals, namely those tagged as focusing on debugging or efficiency, and began a two-pronged approach to organizing them. One approach is to apply automated clustering techniques based on keywords, relevance measures, and so forth. The other approach is to have various project staff members sort the goals into groups and then to “triangulate our value judgements” by conducting cluster analyses that use the categories defined by individuals to help define categories that are representative of our collective expertise. Rough sketches of both prongs appear below.
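To make the first prong concrete, here is a minimal sketch of keyword-based clustering using scikit-learn, assuming the goal statements are available as plain strings. The sample goals and the cluster count are illustrative placeholders, not our actual data or pipeline.

```python
# Minimal sketch of the automated prong: represent each goal as a
# TF-IDF weighted bag of words and partition the goals with k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative stand-ins for goals tagged "debugging" or "efficiency".
goals = [
    "find and fix a bug caused by a missing instruction",
    "fix bugs that involve changes in sequence",
    "compare the efficiency of two loop constructs",
    "explain why one algorithm runs faster than another",
    # ... the remaining tagged goals
]

# Drop common English stop words so content terms drive similarity.
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(goals)

# Two clusters here only because this toy subset mixes two tags.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(features)

for label, goal in sorted(zip(labels, goals)):
    print(label, goal)
```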
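And a rough sketch of the triangulation prong, under the assumption that each staff member’s sort can be encoded as one group label per goal: count how often each pair of goals is co-assigned, treat disagreement as a distance, and cluster hierarchically on the result. The sorts below are invented for illustration.

```python
# Sketch of consensus ("triangulation") clustering over human sorts.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical sorts: each row is one staff member's grouping of six
# goals, expressed as a group label per goal.
sorts = np.array([
    [0, 0, 1, 1, 2, 2],  # staff member A
    [0, 0, 0, 1, 1, 1],  # staff member B
    [0, 1, 1, 2, 2, 2],  # staff member C
])

# co[i, j] = fraction of sorters who put goals i and j in the same group.
n_goals = sorts.shape[1]
co = np.zeros((n_goals, n_goals))
for sort in sorts:
    co += sort[:, None] == sort[None, :]
co /= len(sorts)

# Disagreement as distance; the diagonal is zero because every sorter
# groups each goal with itself.
condensed = squareform(1.0 - co, checks=False)

# Average-linkage clustering; cutting the tree into three clusters
# yields candidate consensus groups, one label per goal.
tree = linkage(condensed, method="average")
print(fcluster(tree, t=3, criterion="maxclust"))
```

Comparing the machine-derived clusters with the consensus groups should help show where human judgement changes the picture, and where the two prongs simply agree.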
Creating coherence from chaos – a tall order, but we hope that we are on our way.
References
- Armoni, M., & Gal-Ezer, J. (2014). Early computing education: Why? What? When? Who? ACM Inroads, 5(4), 54-59.
- Dwyer, H., Boe, B., Hill, C., Franklin, D., & Harlow, D. (2013). Computational thinking for physics: Programming models of physics phenomenon in elementary school. In Engelhardt, Churukian, & Jones (Eds.), 2013 PERC Proceedings (pp. 133-136). College Park, MD: American Association of Physics Teachers.
- Fuller, U., Johnson, C. G., Ahoniemi, T., Cukierman, D., Hernán-Losada, I., Jackova, J., … & Thompson, E. (2007). Developing a computer science-specific learning taxonomy. ACM SIGCSE Bulletin, 39(4), 152-170.
- Taylor, C., Zingaro, D., Porter, L., Webb, K. C., Lee, C. B., & Clancy, M. (2014). Computer science concept inventories: Past and future. Computer Science Education, 24, 253-276.