Capella University DPA8109

Program Evaluation: Analysis of Study Design

Introduction

The purpose of this paper is to analyze the 2009 journal article “Measuring Change in a Short-Term Educational Program Using a Retrospective Pretest Design” by Moore and Tananis. After summarizing the article, the paper responds to a series of discussion questions about its design. The underlying study sought to evaluate and document the impact of the Pennsylvania Governor’s School for International Studies (PGSIS) on its participants, gathering data that was then used to design subsequent program activities and to support forecasting and planning.

Discussion

Identify a research design employed by a selected research study

According to the literature, Burns and Grove (2003) define a research design as “a blueprint for conducting a study with maximum control over factors that may interfere with the validity of the findings” (CHAPTER 3 Research design and methodology, 2017). Similarly, Parahoo (1997) describes a research design as “a plan that describes how, when and where data are to be collected and analyzed” (CHAPTER 3 Research design and methodology, 2017).

This study adopted elements of both qualitative and quantitative approaches. Data collection and analysis were coupled with a retrospective pre/post design and carried out in two phases. The first phase began after a series of interviews with the participants, who were divided into two groups (Moore & Tananis, 2009). “One group was given the Rokeach Dogmatism Scale (RDS), as a pretest and as a posttest, whereas the other group was given the RDS as a posttest and a retrospective pretest” (Moore & Tananis, 2009). This was coupled with a cross-sectional design, in which the researchers collected data on all relevant variables at a single point in time, drawing on the various scores in a single database.

The researchers employed the retrospective design because the primary focus of this educational program is its participants, whose outcomes are expected to be improvements in knowledge and skills (McDavid, Huse, & Hawthorn, 2013, p. 179).

To complete the first phase, the researchers used several other data collection instruments, such as surveys, observation, tape recordings, and questionnaires. The second phase consisted of data interpretation and analysis of variance (ANOVA), through which the implications of the research designs were assessed.
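To make the second phase concrete, the sketch below shows how a one-way ANOVA could compare gain scores between the two administration groups described above. The scores, group sizes, and variable names are invented for illustration and are not Moore and Tananis’s data.

    # Hypothetical sketch of a phase-two ANOVA (invented scores, not the
    # authors' data). Group A took the RDS as a traditional pretest plus a
    # posttest; Group B took it as a posttest plus a retrospective pretest.
    from scipy import stats

    # Gain scores: posttest minus (traditional or retrospective) pretest.
    gains_traditional = [4, 2, 5, 3, 6, 2, 4]    # Group A: post - pre
    gains_retrospective = [7, 6, 8, 5, 9, 6, 7]  # Group B: post - then

    # One-way ANOVA tests whether mean gain differs between the groups.
    f_stat, p_value = stats.f_oneway(gains_traditional, gains_retrospective)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")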

Compare and contrast experimental and quasi-experimental designs.

In comparison, experimental and quasi-experimental research designs are an essential and intricate core of the research field because they address cause-and-effect relationships. In abstract terms, this means the relationship between a certain action, X, which alone creates the effect Y (McDavid, Huse, & Hawthorn, 2013, pp. 91-92). Even though there have been significant changes in U.S. federal government guidelines (McDavid, Huse, & Hawthorn, 2013, pp. 91-92), it has been reported that experimental and quasi-experimental designs are especially useful in addressing evaluation questions about the effectiveness and impact of programs (Leedy & Ormrod, 2016; O’Sullivan, Rassel, & Berner, 2010, pp. 70-74).

In contrast, experimental designs offer “a greater degree of control which results in a greater internal validity,” whereas quasi-experimental designs “do not control all confounding variables and so they cannot completely rule out some alternative explanations for the results they obtain” (Leedy & Ormrod, 2016, p. 189). Either design would be suitable when the researchers can empirically assess the differences between the two groups (Moore & Tananis, 2009).

Analyze the data collection efforts presented in a research study.

Moore and Tananis (2009) explain that the researchers employed several data collection methods to gather the data. The researchers established baselines retrospectively, with one group receiving the pretest instrument design and the other the posttest design (McDavid, Huse, & Hawthorn, 2013, p. 177). In addition, they observed the participants, conducted interviews, and administered questionnaires (with both closed and open questions), along with surveys, focus groups, a Likert scale, the Rokeach Dogmatism Scale (RDS), and Cronbach’s alpha, an objective measure of a scale’s internal consistency grounded in previous relevant literature on the subject matter. This was coupled with a cross-sectional design in which the data were collected across three consecutive years (Moore & Tananis, 2009).
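Cronbach’s alpha can be computed directly from a respondents-by-items matrix of scale responses. The sketch below uses invented Likert-style responses, not data from the study.

    # Minimal sketch of Cronbach's alpha for a multi-item scale (invented
    # Likert-style responses; rows = respondents, columns = items).
    import numpy as np

    responses = np.array([
        [4, 5, 4, 3],
        [3, 4, 3, 3],
        [5, 5, 4, 4],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
    ])

    k = responses.shape[1]                         # number of items
    item_vars = responses.var(axis=0, ddof=1)      # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of summed scores

    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")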

Justify methodological recommendations for data collection with regard to program evaluation.

The methodology consisted of quantitative and qualitative techniques, along with a mixture of retrospective pre/post designs, which focused primarily on the educational program for participants and on whether their knowledge and skills had improved (McDavid, Huse, & Hawthorn, 2013, p. 179). The researchers implemented those particular data instruments to shed light on which methodology yields less biased measures of program effectiveness. They thereby ensured that the research design and data collection were scientific and consistent, enhancing the accuracy, validity, and reliability of the research findings as they relate to program evaluation (Harrell & Bradley, 2009). A simple numerical illustration of the bias in question appears below.
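The bias at issue here is often called response-shift bias: before the program, participants may overrate their knowledge because they do not yet know what they do not know, which deflates the measured gain. A toy illustration with invented 1-10 self-ratings:

    # Toy illustration of response-shift bias (invented 1-10 self-ratings).
    # Before the program a participant overrates their knowledge; afterward,
    # they re-rate their starting point ("then") more realistically.
    pre = 7    # traditional pretest self-rating (inflated)
    then = 4   # retrospective pretest: "looking back, where did I start?"
    post = 8   # posttest self-rating

    traditional_gain = post - pre     # 1: understates the change
    retrospective_gain = post - then  # 4: closer to the true improvement
    print(traditional_gain, retrospective_gain)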

Justify and discuss quantitative and qualitative techniques.

Qualitative and quantitative methods are the paradigms that serve as significant pieces of a program evaluation and of understanding a phenomenon. Qualitative research takes a distinct approach to collecting its data: observation, audio recording, interviews, case studies, questionnaires, ethnography, and other similar qualitative research designs. Many researchers are more comfortable using a qualitative design because it allows them to immerse themselves in the setting, yielding a richer and different perspective from the participants.

Quantitative research is a more statistical method, employing tests that maximize the reliability and validity of its statistical tools, in addition to testing hypotheses and generalizations that emphasize the measurement and analysis of causal relationships between variables (Golafshani, 2003). This writer would recommend using both methodologies for a program evaluation: quantitative methods can show results and how they bear on the program’s effectiveness outcomes, while qualitative methods can examine, compare, contrast, and interpret the data. In sum, both methods together can explore the complex issues of any program evaluation and phenomenon.
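As one example of such a quantitative test, a paired t-test could quantify within-group pre/post change; the scores below are invented for illustration only.

    # Hedged sketch: paired t-test on invented pre/post scores for one group.
    from scipy import stats

    pre = [3, 4, 2, 5, 3, 4, 3, 2]
    post = [6, 7, 5, 8, 6, 6, 7, 5]

    # ttest_rel pairs each participant's pre score with their post score.
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")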

Conclusion

In sum, the researchers used various research designs to obtain their findings. Even so, researchers must be experienced and skilled enough to know which designs to use in any research project in order for the research to be valid and reliable.

References

CHAPTER 3 Research design and methodology. (2017). Retrieved from http://ais.utm.my/researchportal/files/2015/02/Example3-Res-Design.pdf

Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8(4), 597-606. Retrieved from http://nsuworks.nova.edu/tqr/vol8/iss4/6

Harrell, M. C., & Bradley, M. A. (2009). Data collection methods: Semi-structured interviews and focus groups. Santa Monica, CA: RAND Corporation. Retrieved from http://www.rand.org/pubs/technical_reports/TR718.html

Leedy, P. D., & Ormrod, J. E. (2016). Practical research: Planning and design (11th ed.). Upper Saddle River, NJ: Pearson. ISBN: 9780133741322.

McDavid, J. C., Huse, I., & Hawthorn, L. R. (2013). Program evaluation and performance measurement: An introduction to practice (2nd ed.). Thousand Oaks, CA: Sage Publications. ISBN: 9781412978316.

Moore, D., & Tananis, C. A. (2009). Measuring change in a short-term educational program using a retrospective pretest design. American Journal of Evaluation, 30(2), 189–202.

O’Sullivan, E., Rassel, G. R., & Berner, M. (2010). Research methods for public administrators (5th ed.). Pearson Education. ISBN: 9780321628374.
