#103 — JB3 — 08-06-2010, 12:39 AM
That's a very interesting link, mikemover. Thanks.

For the record, when I said I would like to hear of a successful defense with a concealed gun, that was a serious request, not sarcasm. Life would be easier if there were a punctuation mark that indicated sarcasm or seriousness.

So basically this site is saying that the NCVS is skewed by the way the survey is conducted? I read through Dr. Kleck's answer, and he makes a compelling case as to why. The tables of the thirteen other surveys are a little hard to understand, though.

I was doing some reading in the PDF, though, and I'm not totally convinced that the NCVS is completely inaccurate. The PDF is very interesting. This passage is from page 10 of the PDF at that link:

False positives. Regardless of which estimates one believes, only a small fraction of adults have used guns defensively in 1994. The only question is whether that fraction is 1 in 1,800 (as one would conclude from the NCVS) or 1 in 100 (as indicated by the NSPOF estimate based on Kleck and Gertz's criteria).

Any estimate of the incidence of a rare event based on screening the general population is likely to have a positive bias. The reason can best be explained by use of an epidemiological framework.[15] Screening tests are always subject to error, whether the "test" is a medical examination for cancer or an interview question for DGUs. The errors are either "false negatives" or "false positives." If the latter tend to outnumber the former, the population prevalence will be exaggerated.

The reason this sort of bias can be expected in the case of rare events boils down to a matter of arithmetic. Suppose the true prevalence is 1 in 1,000. Then out of every 1,000 respondents, only 1 can possibly supply a "false negative," whereas any of the 999 may provide a "false positive." If even 2 of the 999 provide a false positive, the result will be a positive bias—regardless of whether the one true positive tells the truth.

Respondents might falsely provide a positive response to the DGU question for any of a number of reasons:
• They may want to impress the interviewer by their heroism and hence exaggerate a trivial event.
• They may be genuinely confused due to substance abuse, mental illness, or simply less-than-accurate memories.
• They may actually have used a gun defensively within the last couple of years but falsely report it as occurring in the previous year—a phenomenon known as "telescoping."
Of course, it is easy to imagine the reasons why that rare respondent who actually did use a gun defensively within the time frame may have decided not to report it to the interviewer. But again, the arithmetic dictates that the false positives will likely predominate. In line with the theory that many DGU reports are exaggerated or falsified, we note that in some of these reports, the respondents' answers to the followup items are not consistent with respondents' reported DGUs. For example, of the 19 NSPOF respondents meeting the more restrictive Kleck and Gertz DGU criteria (exhibit 7), 6 indicated that the circumstance of the DGU was rape, robbery, or attack—but then responded "no" to a subsequent question: "Did the perpetrator threaten, attack, or injure you?" The key explanation for the difference between the 108,000 NCVS estimate for the annual number of DGUs and the several million from the surveys discussed earlier is that NCVS avoids the false-positive problem by limiting DGU questions to persons who first reported that they were crime victims. Most NCVS respondents never have a chance to answer the DGU question, falsely or otherwise.
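The arithmetic the quoted passage walks through (1 true positive per 1,000 respondents versus up to 999 chances for a false positive) can be sketched as a few lines of Python. The rates below are purely illustrative, chosen only to mirror the passage's example; they are not estimates from the report.

```python
# Illustrative sketch of the false-positive arithmetic in the quoted passage.
# All rates are hypothetical, picked to mirror the "1 in 1,000" example.

def measured_prevalence(true_prev, sensitivity, false_pos_rate):
    """Apparent prevalence produced by a screening question.

    true_prev      : true fraction of the population with the event
    sensitivity    : P(reports yes | event truly happened)
    false_pos_rate : P(reports yes | event did not happen)
    """
    true_positives = true_prev * sensitivity
    false_positives = (1 - true_prev) * false_pos_rate
    return true_positives + false_positives

true_prev = 1 / 1000  # the passage's example: 1 in 1,000

# Even if every true user reports accurately (sensitivity = 1.0),
# a 2-in-999 false-positive rate among the non-users triples the estimate:
apparent = measured_prevalence(true_prev, 1.0, 2 / 999)
print(f"true: {true_prev:.3%}  apparent: {apparent:.3%}")
# prints "true: 0.100%  apparent: 0.300%"
```

The point of the sketch is that the bias is one-sided: shrinking sensitivity below 1.0 can only remove the single true positive, while even a tiny false-positive rate scales with the 999 non-users, which is why the passage says false positives will likely predominate for rare events.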

Last edited by JB3; 08-06-2010 at 01:03 AM.