SOCIAL SCIENCES RESEARCH PAPER SAMPLE

Action Research and Accountability in Social Work

Introduction

            Action research, also known as participatory action research, is a reflective process in which people working with others engage in progressive problem solving. Such individuals may work as a team, or as part of "communities of practice," to improve the way they address diverse issues and solve problems within their team or the community at large (Ferrance, 2000, pp. 6-7). Accountability is a term used in governance and ethics, and it carries a number of meanings. Accountability to clients and the public is considered a core value of social work practice and the rationale underlying credentialing efforts and licensure. Accountability requires social workers to bring competence and knowledge to their use of evidence-based interventions, to the evaluation of practice outcomes, and to the capacity to contribute to research endeavors (Mizrahi, 2008, p. 322).

            Program evaluation has been defined as a systematic method of collecting, analyzing, and using information to answer questions about programs, policies, and projects (ACF, 2010, para. 1), especially concerning their efficiency and effectiveness. Action research, accountability, and evaluation can help the social work profession avoid repeating its tendencies of the 1930s to 1950s, through efforts to maintain the portability, credibility, and reimbursement credentials of the field. The purpose of this paper is to examine action research approaches to treatment and to the evaluation of programs.

Action Research Approaches to Treatment and Program Evaluation

            Process evaluations help answer questions about the extent to which a program is meeting its procedural and administrative goals, and they should generate valuable information about a program's structure and operations. Frequently raised concerns in program evaluation include how effectively the program is meeting its goals; the attitudes or perceptions of key practitioners and justice system leaders; the perceptions and attitudes of treatment providers and public health officials; the attitudes of community leaders; and how key stakeholders within the jurisdiction perceive the program's value and effectiveness (Clunies, 1996, p. 39). Good treatment and program evaluations assess program performance, measure impacts on communities and families, and document success. With such information, programs are better positioned to direct their limited resources to where they are most needed and most effective within communities (ACF, 2010, para. 1). Understanding evaluative research requires understanding the systems model approach. Any of a society's organizations, programs, or functional units can be represented as a series of interrelated parts that work together in an organic or cybernetic fashion. The evaluator must therefore be aware of the inputs, activities, events, results, outcomes, and feedback associated with each program (Garr, 2008, p. 1).

            Program evaluation is a continuous process rather than a one-time event, and it begins before a program actually starts up. Thus, the evaluator should be selected, and the evaluation design developed, while the program is still in its design stage. This allows the evaluator and the program planners to address the key evaluation issues jointly. For instance, appropriate performance measures should be identified and program goals clearly articulated before implementation begins. This collaborative process, coupled with ongoing communication and the exchange of essential information, should continue throughout the program's life (Clunies, 1996, p. 39).

            Evaluators must focus on a program's goals, both in terms of its impacts on individuals and its impacts on systems of collaboration and the community as a whole (Clunies, 1996, p. 39). Experimental designs are the preferred method of outcome evaluation. They involve two groups: an experimental group, which participates in the full range of program activities and is eligible for all evaluation services, and a control group, which receives no services from the program or only the services that were available before the program was implemented. In this approach, participants are randomly assigned to one of the two groups. The prime advantage of experimental designs is their ability to generate a higher degree of confidence in findings of differences between the two groups' outcomes (Kabe & Gupta, 2007).

            Another evaluation approach is the quasi-experimental design, which is preferred when an experimental design proves inappropriate for any reason. This approach still allows a scientifically rigorous examination of outcomes. To an extent, the design resembles an experimental approach but falls short of random assignment, the key ingredient; quasi-experimental designs are therefore often inferior to randomized experiments with respect to internal validity. Their compelling feature is that they are implemented more frequently than randomized approaches (Cohen et al., 2007, p. 282).

Conclusion

            In conclusion, both accountability and program evaluation require social workers to bring competence and knowledge to their use of evidence-based interventions, the evaluation of practice outcomes, and the capacity to contribute to research endeavors. Evaluation is the process of assessing a program's success in achieving its set goals and objectives.

References

Administration for Children & Families (ACF). (2010). The Program Manager's Guide to Evaluation. Washington, DC: US Department of Health and Human Services. Retrieved December 14, 2011, from http://www.acf.hhs.gov/programs/opre/other_resrch/pm_guide_eval/reports/pmguide/program_managers_guide_to_eval2010.pdf

Clunies, S. (1996). Treatment Drug Courts: Integrating Substance Abuse Treatment with Legal Case Processing. London: Diane Publishing.

Cohen, L., Manion, L., Morrison, K., & Morrison, K. R. B. (2007). Research Methods in Education. New York: Routledge.

Ferrance, E. (2000). Themes in Education: Action Research. Retrieved December 14, 2011, from http://www.lab.brown.edu/pubs/themes_ed/act_research.pdf

Garr, T. (2008). Program Evaluation and Policy Analysis. Retrieved December 14, 2011, from http://www.drtomoconnor.com/3760/3760lect08.htm

Kabe, D. G., & Gupta, A. K. (2007). Experimental Designs: Exercises and Solutions. New York: Springer.

Mizrahi, T. (2008). Encyclopedia of Social Work, Volume 1. Oxford: Oxford University Press.