Invited Article: Lost in a jungle of evidence: We need a compass
Andrew N. Wilner, MD, FACP, FAAN, Goat Island Neurology, 111 Durfee Street, P.O. Box 831, Fall River, MA 02722; [email protected]
Submitted January 22, 2009
I enjoyed reading the articles on classification of evidence by Drs. Gronseth and French. [1,2] The authors' explication of terminology such as "concealed allocation," "masking," and "active control equivalence trials" helps put us all on the same footing when attempting to classify articles in the literature.
They recommend that future articles be classified by their respective authors, which would highlight the strength of the findings.
I would appreciate the authors' input on the following questions:
1. Many physicians express disappointment in published Practice Parameters because the quality of the evidence is often too low to produce high-quality recommendations. To what extent is the failure of articles to reach Class I due to lack of "concealed allocation" or other problems that could have been easily remedied in the design and execution of the study (e.g., using sequentially numbered opaque envelopes instead of systematic allocation)?
2. The authors provide clear examples of how articles are classified, but how often do the experts disagree when classifying an article?
3. Are manuscripts submitted to Neurology (or other publications) rated with these classification-of-evidence systems to determine whether they should be published?
4. The authors should expand on the statement, "These classification schemes have been developed using empirically validated criteria for study strength." [1] This is important because many people are reluctant to endorse the concept of "evidence-based guidelines" when the classification scheme that determines the "strength of the evidence" is not itself clearly based on evidence. Or is it? For example, what is the evidence that an 80% cutoff for "study retention" affects the quality of the data significantly differently than a 79% or 81% cutoff, or any other number?
I appreciate the authors' efforts in helping to bring the benefits of systematic classification of evidence to clinical care.
References
1. French J, Gronseth G. Invited article: Lost in a jungle of evidence: We need a compass. Neurology 2008;71:1634-1638.
2. Gronseth G, French J. Invited article: Practice parameters and technology assessments: What they are, what they are not, and why you should care. Neurology 2008;71:1639-1643.
Disclosure: The author reports no disclosures.