Brassard, M. & Ritter, D. (1994). The Memory Jogger II. Methuen, MA: GOAL/QPC. A good basic guide on the various ways to evaluate quality for CQI (continuous quality improvement) programs. Covers all the major methods, e.g., Activity Network Diagrams, Affinity Diagrams, Brainstorming, Cause & Effect/Fishbone Diagrams, Nominal Group Technique, Pareto Charts, etc.
Capezio, P. & Morehouse, D. (1993). Taking the Mystery Out of TQM: A Practical Guide to Total Quality Management. Now TQM will no longer be a mystery, but a tool for ensuring good outcomes. An excellent intro to all the instruments commonly used with TQM, as well as a history of those who developed the various techniques.
Center for Health Policy Studies & Center for Quality of Care Research and Education (1995). Final Report: Understanding and Choosing Clinical Performance Measures for Quality Improvement: Development of a Typology. Rockville, MD: AHCPR. A research report that tries to summarize what is currently known about outcomes research, from process to outcome measures. Excellent summary charts of what current researchers are using and collecting with the performance measures they have developed to evaluate quality of care.
Center for Quality of Care Research and Education, Harvard School of Public Health, & Center for Health Policy Studies. (1994). Development of a Typology of Clinical Performance Measures for Quality Improvement. Results of Literature Search. A brave attempt to present what these two groups learned about the state of outcomes research - massive confusion. A great resource for parties and agencies currently involved in such research, and an extensive listing of the types of measures currently being used.
Delamothe, T. (1994). Outcomes into Clinical Practice. London, England: BMJ Publishing Group. A compilation of presentations made by noted British experts regarding their experiences with outcomes research under Great Britain's National Health Service (NHS). Good insight into problems encountered by the various medical disciplines as they try to standardize the analysis of various types of data to assess whether or not treatments and therapies really work.
Kazandjian, V.A. (1995). The Epidemiology of Quality. Gaithersburg, MD: Aspen Publishers. A good text that covers what is currently known about outcomes research - theory and practice. While the author pitches the use of epidemiologic methods in organizing health data, he doesn't effectively support his contention by the way the chapters are organized. Chapter 8 is a must-read for anyone who wants to understand what Continuous Quality Improvement is all about.
Martin, L.L. & Kettner, P.M. (1996). Measuring the Performance of Human Service Programs. CA: Sage Publications. AN EXCELLENT PRIMER FOR DEVELOPING PERFORMANCE MEASURES. The perspective is more social service, but the idea of performance measurement is generic when it comes to any public program that is supposed to make a difference for the public it serves. If you think outcome performance and output performance measures are the same except for the spelling, then you need to read this book.
US General Accounting Office (March, 1997). Measuring Performance: Strengths and Limitations of Research Indicators (GAO/RCED-97-91). A look at how research can become more accountable for the money being spent, in both the public and private sectors.
U.S.G.A.O. (May, 1997). Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138). An evaluation of how federal agencies are doing in trying to meet the requirements of the GPRA (Government Performance and Results Act of 1993). While it is understandable why federal agencies balk at coming up with performance measures, doing so is an important step toward professional accountability to those who pay the bills.
U.S.H.H.S./U.S. Public Health Service/CDC. (October, 1996). Conference Summary Report - Moving Toward International Standards in Primary Care Informatics: Clinical Vocabulary. Summarizes the 1995 New Orleans meeting about developing consensus on how to computerize patient records. A difficult task on an international scale, but maybe we can still learn something from the Tower of Babel.
U.S.H.H.S./U.S. Public Health Service/CDC. (1992). Using Chronic Disease Data. A Handbook for Public Health Practitioners. Using available US government statistics for research: mortality data, hospital discharge data, behavioral risk factor data; age-adjustment techniques; categorizing diseases; legislative mandates regarding data.
U.S.P.H.S./Office of Public Health & Science. (1996). Cost-effectiveness in Health & Science. Project Summary. A good appraisal of the weaknesses associated with cost-effectiveness studies, and how to correct the methodology to improve their use in outcomes research.
Abramson, J.H. (1994). Making Sense of Data. 2nd Edition. NY: Oxford University Press. A good text for interactive learning courses. Brief explanations are given about major epidemiological concepts, which are reinforced by exercises. Answers with explanations are given in following chapters. Really needs a good teacher to make the most of this text. Hard for those who (like me) like to study concepts in a systematic way. However, it does have a good section (at the end) on meta-analytic studies - how to conduct and critique them.
Chalmers, I., & Altman, D.G. (Editors) (1995). Systematic Reviews. London, England: BMJ Publishing Group. A systematic review is another name for research review (see review of Light & Pillemer's book). A methods-based text on how to choose what you would include in a systematic review. While favoring clinical trials, related literature such as corrections, letters to the editor and other criticisms of such trials are considered important in a comprehensive synthesis work. Introduces the Cochrane Collaboration, a European effort that seeks to collect and disseminate systematic reviews electronically. This is one way such reviews can stay current as our knowledge base constantly changes. A good critique of the limitations of meta-analysis gives this text a balanced approach as to what a researcher can expect from systematic reviews.
Gehlbach, S.H. (1993). Interpreting the Medical Literature. 3rd Edition. NY: McGraw Hill. A great text on what to look for when you are trying to conduct a literature review and there are over a thousand articles in each area you should evaluate to do a good job. Everything from the study design, to the appropriate use of statistics, to the validity of the interpretations and conclusions the authors have reached. Reading this book will save you time when you feel overwhelmed by how much research has already been done and you are just trying to make sure you are not duplicating the work of what appears to be a whole universe of ardent researchers at the cutting edge of every area you are trying to explore.
Girden, E.R. (1996). Evaluating Research Articles From Start to Finish. CA: Sage Publications. Want to learn how to critique research, or do you just want to be a savvy research consumer? Girden will walk you through one example of each research design, and then you are on your own for the second example. Useful in providing you with a method for looking at research, with a list of questions to ask yourself (and the author) about the usefulness of the research.
Lang, T.A., & Secic, M. (1997). How to Report Statistics in Medicine: Annotated Guidelines for Authors, Editors, and Reviewers. PA: American College of Physicians. FINALLY, a guide on how medical research should be reported in the literature!! With this text in print, there really is no reason for poorly written research reports to be published, nor for any medical researcher not to know how to write up their research in an appropriate fashion. And for research consumers - this text will tell you what you should be looking for when you read the literature.
Light, R.J. & Pillemer, D.B. (1984). Summing Up - The Science of Reviewing Research. Cambridge, MA: Harvard University Press. [THE AUTHORITY ON RESEARCH REVIEWS] Literature review for its own sake is known as a research or systematic review. If you want to summarize a body of research in a summary statistic, then that's meta-analysis. This text provides an excellent overview of how to synthesize research findings - summarizing the most salient points. The authors debate the value of "narrative" (qualitative) and "number" (quantitative) approaches to reviewing the literature. Should we pursue "verstehen," or is "effect size" enough? How does script formation affect the way we analyze information? A good adjunct to Creswell.
Riegelman, R.K. & Hirsch, R. (1989). Studying a Study and Testing a Test: How to Read the Medical Literature. MA: Little, Brown and Company. BEST TEXT ON CRITIQUING THE MEDICAL LITERATURE. Most examples deal with clinical medical research. Excellent coverage on explaining why deficiencies are deficiencies, with cases for practicing your newly developed critiquing skills. You never think you have it until you complete a case and find things wrong... and wow! Voilà - an informed research consumer!
US General Accounting Office (1992). Cross Design Synthesis: A New Strategy for Medical Effectiveness Research (GAO/PEMD-92-19). An excellent overview of meta-analysis as it was understood then, in 1992. "Cross design synthesis" attempts to assess outcomes of clinical studies by meshing findings with available non-study data. This technique theorizes that the weaknesses of randomized clinical trials can be compensated for by a database of patients treated by doctors outside of clinical trials. Thus, combining data from trials and database analyses would make the most of the strengths of each kind of collected data.
PUBLISHED ON THE WEB: December 4, 1999; February 23, 2001
© Copyright 1999 - 2019 Betty C. Jung