Choices. The volume of apps and software available to support student learning is mind-boggling. As a school administrator, I’m bombarded by emails and phone calls from software companies asking me to look at their products and believe their claims that they have the *best* solution for my students and teachers. I’m sure some of them are good, but with limited resources (including time), it’s hard to take a careful look at all of them. I guess that’s what summer is for!
Historically, instructional software and apps could be broken into five types: drill and practice, tutorials, simulations, instructional games, and problem-solving software. Each type fit a particular need for teachers and students. Increasingly, software developers are creating programs that blur the lines between the types, or claim one type while actually serving the function of another. Below is some clarification of what these types of instructional software represent, as well as some examples.
Drill and Practice
Drill and practice consists of the repetitive activities that cement new information in a learner’s mind. It should take place after instruction as a reinforcing activity (Roblyer, 2016). Most drill and practice programs provide immediate feedback, either as a simple right/wrong indication or as more detailed information about the correct and incorrect answers.
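For what it’s worth, the feedback logic these programs rely on is simple enough to sketch in a few lines of Python. This is just an illustration of the right/wrong-plus-explanation pattern, not any real product’s code; the word list and messages are invented:

```python
# A toy drill-and-practice feedback loop. The vocabulary list is invented
# for illustration; real programs wrap this core check in games and scoring.

VOCABULARY = {
    "photosynthesis": "the process plants use to convert light into food",
    "habitat": "the natural home of an animal or plant",
}

def check_answer(word: str, student_definition: str) -> str:
    """Return immediate feedback: simple right/wrong plus the correct answer."""
    correct = VOCABULARY[word]
    if student_definition.strip().lower() == correct:
        return "Correct!"
    # More detailed feedback: show the expected definition after a miss.
    return f"Not quite. '{word}' means: {correct}"

print(check_answer("habitat", "the natural home of an animal or plant"))
```

A real program obviously layers games, timing, and record-keeping on top, but the core reinforcement check is about this simple.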
SpellingCity is a drill and practice program that allows teachers to create lists of vocabulary words for students to study. There are a variety of games that can be used to test vocabulary definitions and spelling. SpellingCity has both free and paid versions.
Tutorials
Tutorials provide step-by-step instruction, usually in a linear manner. One benefit of tutorials is that they provide the learner with the ability to pause, review, or skip ahead according to their needs (Roblyer, 2016). Tutorials can be as simple as a video of someone explaining as they solve a problem on a whiteboard, or as complex as a detailed animation of a process taking place at a cellular level. Increasingly, drill and practice software includes elements of tutorials, such as a “how to” video presented after incorrect answers.
The Make Me Genius video channel on YouTube provides direct instruction on scientific concepts for elementary-age students. The narrators speak with a distinct Indian accent, but the videos contain exceptional information with high-level academic vocabulary.
Simulations
A computer simulation is a digital model of a phenomenon or environment that allows the user to interact with various components to change the outcome. A simulation is often used when access to physical manipulatives is inappropriate, expensive, or dangerous. Simulations may be used as a follow-up to a “wet lab” in order to provide students with additional experiences, without the need for additional lab supplies.
PhET from the University of Colorado has many HTML5 simulations that model a variety of processes, mostly in physical science, appropriate for elementary school. The simulations give students an opportunity to explore relationships such as those between gravity and orbits, or force and motion.
Instructional Games
Instructional games have specific rules and competitive elements designed to engage and motivate students. In elementary science, there is a great deal of overlap between simulations and games. Many games simulate specific experiences, while building in scoring, badging, or competitive elements. Other games, such as the Magic School Bus games, provide an element of gamification while reviewing some basic science concepts, in what is essentially an online worksheet activity.
Problem-Solving Software
Problem-solving software engages students in critical thinking, decision-making, hypothesis testing, and ultimately the generation of a solution. Most problem-solving software includes elements of tutorials and simulations, and may include a game-style interface as well.
In elementary science, robotics tools such as Logo, Lego Mindstorms, and Sphero provide a problem-solving environment.
Regardless of the type of software being considered, the most important question for educators needs to be whether the software, program, or app will deliver on its promise to improve learning. Not every software is appropriate for all students at all times: a program like Quizlet is an excellent way for students to practice their vocabulary in a collaborative environment, with elements of gamification to keep students’ interest, but won’t teach or reinforce concepts. National Geographic videos provide outstanding instruction in scientific topics, but in isolation won’t build higher order thinking skills. The right tool at the right point in a lesson is crucial, and teachers must always keep the “end game” in mind when selecting instructional software.
Roblyer, M.D. (2016). Integrating educational technology into teaching (7th Ed.). Allyn & Bacon.
A glimpse into my experiences learning and leading with educational technology.
Thursday, July 9, 2015
Monday, June 29, 2015
Vision Statement for Use of Technology
Revolution doesn’t happen when society adopts new technologies - it happens when society adopts new behaviors.
- Clay Shirky, Here Comes Everybody, p. 160
With educational technology, I believe we have passed the tipping point - technology is no longer simply a tool, or even a process, but an environment. It’s ubiquitous, pervasive, and is happening with or without educator consent. It enables things never before possible, and students are doing those things, again with or without us. “Any time, any place” learning is no longer just a catchphrase for the few students enrolled in online courses, but an apt description of what our students’ lives are like. Schools must embrace educational technology, using it to its fullest potential in order to generate enthusiasm, optimize resources, remove barriers to learning, and develop ICT skills (Roblyer, 2016).
I believe that the use of technology tools has the potential to improve learning. This is not to say that inserting a piece of technology into a classroom, or even into students’ hands, will somehow transform learning. It is important, therefore, to note the shift in the way we define technology. From being an add-on, to a tool, to being “integrated” into the curriculum - up until now technology has been a thing apart, and something that teachers chose whether or not to use. But the Common Core State Standards cannot be accomplished without integrated technology use. The projects, activities, and expectations for students are riddled with outcomes that are best accomplished through the use of technology. Selecting the right technology for the problem requires an analysis of affordances, and choosing the tool with the greatest relative advantage. Different strategies and different tools can be the “best fit” for different students at different times. I think that instruction is most effective when a teacher has a wide variety of tools in their arsenal that all facilitate research-based strategies. It makes no more sense to say that an iPad improves learning than it does to say that a pencil improves learning.
Richard Clark (1986) looked at dozens of studies that compared teaching with technology to teaching in the traditional manner, and found that the use of technology had no effect on student learning if everything else remained the same. Kozma (2001) notes that “Whether or not a medium’s capabilities make a difference in learning depends on how they correspond to the particular learning situation - the tasks and learners involved - and the way the medium’s capabilities are used by the instructional design” (p. 107). The Clark-Kozma debate is one of tool vs. process; if we use technology as a replacement for other tools, there is likely to be no significant difference in learning, while if we take advantage of the affordances of the tool, we may change instruction and learning. Thus the research on instructional strategies and learning experiences should be the driving force behind technology integration.
I believe that instruction should be judged not by the use of technology, but by the content and the interaction it facilitates. Technology isn’t a strategy or a pedagogy or an instructional behavior, it is a powerful tool that allows us to change the way we teach and has affordances that can potentially improve educational outcomes for a wide range of students.
Clark, R. E., & Salomon, G. (1986). Why should we expect media to teach anyone anything? In R. E. Clark (Ed.), Learning from media: Arguments, analysis, and evidence. Greenwich, CT: Information Age Publishing.
Kozma, R. (2001). Robert Kozma’s counterpoint theory of “learning with media”. In R. E. Clark (Ed.), Learning from media: Arguments, analysis, and evidence. Greenwich, CT: Information Age Publishing.
Roblyer, M.D. (2016). Integrating educational technology into teaching (7th Ed.). Allyn & Bacon.
[Licensed image from PresenterMedia]
Thursday, April 23, 2015
Final Analysis
During my class this semester, I feel like I have gained an understanding of and appreciation for Educational Design Research (EDR). While I still find it similar to action research, it is clearly much more involved and rigorous. As I preview educational programs in my role as site administrator, I will be keeping EDR in mind, and looking for evidence that the program I’m reviewing has gone through an iterative process that used a variety of data to inform the final product.
I’m unlikely to use EDR for my dissertation. As an administrator in a school district, I’m in an awkward power position with practitioners. In addition to the time commitment that I think EDR takes to “do it right”, there is an element of embedded access that I find problematic when looking at my personal goals for a dissertation completion schedule. Conducting multiple iterations during a single school year requires one to be very closely linked to the research situation, and it’s both impractical and unethical to conduct this sort of research at my school site with teachers that I evaluate! While I have an appreciation for EDR/DBR as a research methodology, I don’t see it as being a practical choice for my dissertation.
I believe that peer review is a very powerful tool. As a recipient of peer feedback, I tend to quickly scan for things I agree with or recognize as easy corrections. I then go back and think through the revisions suggested by reviewers, and either keep them if I think they require more thought, or delete them if I feel like the suggestion is misguided or answered elsewhere. For the most part, I find that the comments are thoughtful and fairly accurate, and I very much appreciate having another set of eyes on my work. When I provide peer feedback, first and foremost I enjoy reading what other students are learning. I am picky about whose work to review, looking for those that match my background or work situation, at least in some way. There have been few peer review activities that haven’t taught me something of value, often outside of the topic of the class! When adding comments, I think carefully about my choice of words, since I know and respect the others in my cohort and don’t want to hurt anyone’s feelings. But I’m honest as well, again because I respect my colleagues and want my feedback to be meaningful. I also tend to double check my technical suggestions - I’m more likely to verify in the APA style manual when correcting someone else than when I’m doing my own writing!
Peer review does require a bit of trust, and a bit of knowing each others' style. In a class that contains a majority of students who have been together for 3 years, mixed with 2 newcomers who do not have the same history and are not at the same point in their educational career, there were some challenges. I suspect all of us tried to be inclusive, but it was more difficult to relate in some ways. It's a lesson I will keep in mind with my own students.
Sunday, April 5, 2015
It's Not "Whether" It Works, But "How"
As my understanding of design-based research (DBR) grows, so does my appreciation of how well it fits my philosophy of teaching and learning. The readings in this module helped me understand that "design" in DBR is used in two different ways - the researcher creates (designs) an intervention or learning phenomenon, and then collects data that allows them to create a model (design) or guidelines (design principles) that can be used to generalize the intervention into other environments. I think this is how good teaching works, when the teacher has sufficient time and ability to collect the data needed. It's a more rigorous version of piloting an intervention or program, and then making changes to the program based on what actually happens in the classroom. This is how I've designed model lessons in the past, and it's how my district has developed units of study that are disseminated across schools.
Data collection is the challenge in any type of research. Dr. William Sandoval notes that there is a tendency among researchers, particularly novice ones, to collect everything possible and then try to figure out what is needed later. Since there is as much a need for thick description in DBR as there is in case study research, this can end up being a huge amount of data! Sandoval says that, instead, a researcher should have a clear plan of data collection, and should have a reason for collecting every piece of data that is collected. While that makes perfect sense, I think it probably takes a fair amount of experience to know which data will be relevant and which will be extraneous. My fear, as I'm sure is true for most novice researchers, is that I will begin writing my results and realize there's a gap in my data! I'm not sure how one overcomes the challenge of too much data, though I suspect it helps to work in close communication with experienced researchers who can make recommendations.
[Licensed image by PresenterMedia]
Wednesday, March 11, 2015
More about DBR
As I continue my course in design-based research (DBR), I still struggle with how it might actually fit into the repertoire of doctoral students. The iterative nature of DBR seems to take quite a long time, and might not have a fixed end point; it seems difficult to predict the exact number of iterations it will take to get to generalizable design principles. One thought I had is about the possibility of doing an informal version of DBR as a school site leader: the team of researcher-practitioners would be the teachers in a grade level, and their PLC meetings would be the forum for hypothesizing design principles and determining how to test those principles. Minor iterations would probably occur every two to three weeks, as the teachers implement and revise. While I’m sure they wouldn’t consider their conclusions to be design principles, I think the strategies and recommended practices teams come up with might indeed fall into that category.
In the readings over the past month, Joseph (2004) helped me to better understand the ways in which other research approaches also study real-world learning situations, and the difference in philosophies that might make a researcher select design-based research. It seems that it’s all about the outcome; if a researcher wants design principles, they might select DBR. If they want to simply examine a phenomenon, or determine the effectiveness of a strategy without necessarily modifying the design, they would likely choose another research approach. Obrenović (2011) describes a process of DBR that I found very similar to what we learned in our Project Management class, but also talks about how DBR might use a selection of quantitative and qualitative approaches in the various stages of design in order to inform changes. As I read Anderson and Shattuck (2012), I began to wonder if the Response to Intervention (RtI) programs that we use in my district were created using design-based research. I know the interventions have been extensively tested, and I know they all have significant bodies of research about their reliability and validity, but I wonder about their genesis. As I reflected on the issues Anderson and Shattuck raise about researchers who are also the designers and implementers, it makes me think that programs such as Read 180 were probably developed by one group of people, and then validated by others. I don’t know that’s the case, but I would predict that getting a reliable rating from the What Works Clearinghouse probably precludes the designer being the researcher.
Sunday, February 8, 2015
Design-Based Research
I am currently taking a class in design-based research (DBR), which holds great appeal for me. In design-based research, the researcher proposes a strategy, tool, process, or curriculum (the design), and then tests and refines it within a real-world context. Through repeated iterations, the researcher is able to refine the design, and then hopefully generate some principles or theories about best practice. To me, it seems that this is the way interventions and curriculum should be designed. Sometimes programs adopted in schools are "research-based" but have never been tested with diverse groups of 34 kids and a single teacher who is learning the program on the fly. Unfortunately, that's the implementation reality for most K-12 public schools. Sometimes interventions and curriculum make that transition to the real world, and sometimes they do not. I believe that a rigorous design-based research process would probably make programs more capable of being implemented by real teachers in real schools.
I am still a little confused about the different possible outcomes of DBR. In general, the intent is to develop a set of design principles. I am not clear on exactly what a design principle is, and how well it will correlate to other situations. In many of the DBR studies I reviewed or read reviews of, it seems that the outcome was a very narrow set of guidelines that were applicable to that particular intervention or program. I can imagine a study in which broad design principles are generated, but I would think it would take several years in several contexts in order to create generalizable principles.
I created the flowchart concept map above to show my understanding of design-based research. As with any research, the first step is to determine what the topic is to be studied, and determine what prior research has been conducted. In DBR, the next phase is design, which is followed by implementation. During implementation, data is collected and analyzed, and then the design is "tweaked" to make it stronger. The design is implemented again until it is a perfect solution, or the researcher has either completed enough iterations to develop design principles or has run out of time or money! The process is complete when the researcher publishes their findings in the form of a generalizable theory or design principles. Although the wording on my concept map in the redesign phase is a little tongue in cheek, I suspect those are the actual questions a researcher asks of themselves as they progress through the process.
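The loop in my flowchart can even be written out as Python-flavored pseudocode. Every function name here is a hypothetical stand-in, not a real framework; the toy demo at the bottom just "refines" a number until the analysis step is satisfied:

```python
# A sketch of the DBR cycle described above: design, implement, collect and
# analyze data, refine, and stop when principles stabilize or resources run out.
# All of these functions are hypothetical stand-ins for illustration only.

def run_dbr(design, implement, analyze, refine, max_iterations=5):
    """Iterate a design through implementation and revision cycles."""
    findings = []
    for iteration in range(max_iterations):
        data = implement(design)        # enact the design in context
        result = analyze(data)          # what worked, what didn't?
        findings.append(result)
        if result["good_enough"]:       # the design has stabilized
            break
        design = refine(design, result) # tweak it and try again
    return design, findings

# Toy demo: "refine" an integer design until analysis is satisfied.
final, findings = run_dbr(
    0,
    implement=lambda d: d,
    analyze=lambda data: {"good_enough": data >= 3},
    refine=lambda d, r: d + 1,
)
```

The `max_iterations` cap is my tongue-in-cheek "ran out of time or money" exit from the flowchart, built right into the loop.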
Thursday, May 22, 2014
The Right Tool for the Right Job - Part II
Approaching the end of the year and thinking about the future, I am evaluating my technology yet again, and planning for the future. I've found that my iPad is absolutely my "go to" device, and I carry it around campus before school and during classroom walkthroughs. Although at first people were overly interested and/or intimidated by seeing me arrive with a tablet, I find that teachers (and students) no longer pay attention to it.
In Evernote, my current schema is to start a new note every day called Admin Log, which begins with a list of the things I need to do for the day. I usually start by copying the prior day's notes, so it's got all of the things I didn't get to previously. It includes both personal and professional items. I use tags for the school site I'm at, and for the grade level(s) I observe. It is very easy to sort by tags and see all of the 2nd grade notes, comments, and images. Some of my daily notes end up being quite long, particularly if I take several pictures, and I don't think the search feature in Evernote desktop works very well. Those factors make it difficult to find the right note, or the right information within my notes.
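The tag-based sorting I rely on is really just label filtering. Here's a toy model in plain Python (not the Evernote API; the note titles and tags are made up for illustration):

```python
# An illustrative model of the tag-based schema described above.
# Plain Python, not the Evernote API; titles and tags are invented.

from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str = ""
    tags: set = field(default_factory=set)

def notes_with_tag(notes, tag):
    """Mimic sorting by tag: pull every note carrying a given label."""
    return [n for n in notes if tag in n.tags]

log = [
    Note("Admin Log 5/20", tags={"Site A", "2nd grade"}),
    Note("Admin Log 5/21", tags={"Site A", "5th grade"}),
]
print([n.title for n in notes_with_tag(log, "2nd grade")])
```

Tags scale nicely for this kind of cross-cutting lookup; what they don't solve is searching within a long note, which is exactly where I've been getting stuck.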
For next year, I'm planning to organize my notes into more logical notebooks that are more easily searchable. After a conversation with D, I'm thinking about having separate notebooks for each teacher, so I can easily get back and see the observations I've done and photos I've taken. I can also attach PDFs and send emails to a notebook, so that makes it a one-stop shop for all of the data I'll need for evaluation and supervision. I'll have a different notebook for facilities, and others as I see the need. Parent contacts and discipline are a couple of things I'm still noodling about, as I'm not sure how useful it is to have that information solely within my account. Functionally, it's possible that the old fashioned binder is still the best way to deal with those items.
I'm still not sure how to deal with my to-do list. Wunderlist just doesn't "speak" to me, and I find it quite cumbersome to use (sorry, Brian!). I'd kind of prefer to have everything together, but Evernote isn't really a very good tool for tracking longer projects. On this front, I'll just have to keep playing around to find a system that works.
Good thing that summer is coming up so I can get all this organized!


