Department of Medical Education and Clinical Sciences
Washington State University
MD MSc FAAFP FASNC FACNM
Medical imaging, biostatistics, bioethics
Thomas F. Heston and John Y. Jiang PeerJ
Introduction: Patients with suspected thoracic pathology frequently undergo imaging with conventional radiography (chest x-rays, CXR) and computed tomography (CT). CXR comprises one or two planar views, compared with the three-dimensional images generated by chest CT. CXR has the advantages of lower cost and lower radiation exposure at the expense of lower diagnostic accuracy, especially in patients with a large body habitus.
Objectives: To determine whether CXR imaging could achieve acceptable diagnostic accuracy in patients with a low body mass index (BMI).
Methods: This retrospective study evaluated 50 patients (age 63 ± 12 years, 92% male, BMI 31.7 ± 7.9) presenting with acute, nontraumatic cardiopulmonary complaints who underwent CXR followed by CT within 1 day. Diagnostic accuracy was determined by comparing scan interpretation with the final clinical diagnosis of the referring clinician.
Results: CT results were significantly correlated with CXR results (r = 0.284, p = 0.046). Correcting for BMI did not improve this correlation (r = 0.285, p = 0.047). Correcting for BMI and age also did not improve the correlation (r = 0.283, p = 0.052), nor did correcting for BMI, age, and sex (r = 0.270, p = 0.067). Correcting for height alone slightly improved the correlation (r = 0.290, p = 0.043), as did correcting for weight alone (r = 0.288, p = 0.045). CT accuracy was 92% (SE = 0.039) vs. 60% for CXR (SE = 0.070, p < 0.01).
Conclusion: Accounting for patient body habitus, whether by BMI, height, or weight, did not improve the correlation between CXR accuracy and chest CT accuracy. CXR is significantly less accurate than CT even in patients with a low BMI.
Thomas F. Heston Ovid Technologies (Wolters Kluwer Health)
Thomas F. Heston Royal Society of Chemistry (RSC)
The article examining the effect of Montmorency tart cherry juice supplementation in adults aged 50 to 80 years with normal cognitive function concluded that supplementation may improve cognitive functioning.
Thomas F. Heston and Joshuel A. Pahang Southern Medical Association
Thomas F. Heston, Anndres H. Olson, and Nicholas R. Randall AME Publishing Company
For the last 25 years it has been widely accepted that diabetes mellitus is associated with a twofold or greater risk of clinical atherosclerotic disease (1). Long-standing elevation of blood sugar levels, as measured by the hemoglobin A1c level, has been shown to predict cardiovascular risk independently of major cardiovascular risk factors, including age, body mass index, systolic blood pressure, serum cholesterol, cigarette smoking, and history of cardiovascular disease (2).
Thomas F. Heston Wiley
To the Editor: The predictive value of a diagnostic test is highly dependent upon disease prevalence. Disease prevalence, however, varies widely from patient to patient. Since patients in whom the diagnosis is unclear are the ones most likely to get a diagnostic test, it is helpful to standardize the predictive value of a diagnostic test to a disease prevalence of 50%. Even more useful would be to present predictive values standardized to disease prevalences of 25%, 50%, and 75%. I thank the authors for their thoughtful analysis of how best to present the predictive value of a test. They emphasize that the predictive value of a test varies significantly as the disease prevalence changes. In my previous Letter to the Editor on this topic (1), I proposed that researchers not only present the raw, unadjusted predictive value of a diagnostic test, but also present the predictive value of the test based upon a standardized 50% disease prevalence. What the authors propose, in brief, is that researchers use the Predictive Summary Index (2), which standardizes predictive values based on the estimated disease prevalence in a large population. They suggest that this is a more useful way to determine the overall gain in information from a diagnostic test than my proposal to standardize predictive values to a prevalence of 50%. While the Predictive Summary Index (PSI) may be useful for making population-based policy decisions, it adds little useful information for practicing clinicians attempting to apply research findings to individual patients. The primary reason is that disease prevalence is not a fixed value but varies widely from patient to patient. Since diagnostic tests are most frequently ordered when the diagnosis is unclear (ie, the pretest likelihood of disease is around 50%), standardizing predictive values to a prevalence of 50% may be more meaningful to the practicing clinician than using the PSI.
For example, after doing a history and physical, I will estimate the likelihood of disease based on a wide range of variables unique to my patient. When the disease of interest is very highly likely, or very unlikely, then additional diagnostic testing is not helpful. On the other hand, if the unique characteristics of my patient do not clearly indicate a specific diagnosis, this is when I order additional diagnostic tests. In this situation, I do not clearly know whether my patient has, or does not have, the disease of interest, ie, my clinical judgment is that the likelihood of disease is in an intermediate range. When my level of diagnostic certainty is no better than a coin flip, what is most useful is the predictive value of a test standardized to a disease prevalence of 50%. Population prevalence can vary widely due to the demographic group(s) included. Was the patient presenting for the first time to a rural family physician, or presenting to a subspecialist at a tertiary care center after an extensive workup? Is the patient male, or female? Diabetic or not diabetic? Prediabetic? How old is the patient? What is the family history? What is the patient’s occupation? Where does the patient live? These factors are all taken into account when doing a history and physical. Generating a PSI value for each demographic would not only be nearly impossible, but also confusing and impractical for practicing clinicians to apply. However, if I knew the predictive value of a diagnostic test standardized to disease prevalences of 25%, 50%, and 75%, then I could reasonably estimate its value to the individual patient in front of me. The PSI may be useful when looking at populations; however, standardizing predictive values to a 50% disease prevalence may be more useful to clinicians treating individual patients.
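The standardization proposed above reduces to a direct application of Bayes' theorem. As a rough sketch, predictive values can be recomputed at fixed prevalences of 25%, 50%, and 75% from a test's sensitivity and specificity (the 85% and 80% figures below are purely illustrative, not taken from any study):

```python
# A minimal sketch of "standardized predictive values": recompute PPV and
# NPV at fixed disease prevalences from a test's sensitivity and specificity,
# which are resistant to changes in prevalence. The 85%/80% figures are
# hypothetical, chosen only to illustrate the calculation.

def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) for a test at a given disease prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

sens, spec = 0.85, 0.80  # assumed test characteristics
for prev in (0.25, 0.50, 0.75):
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

At 50% prevalence this illustrative test has a PPV of about 81% and an NPV of about 84%; that single standardized pair is what the letter suggests reporting alongside the raw sample-based values.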
Thomas F. Heston Southern Medical Association
Thomas F. Heston Ovid Technologies (Wolters Kluwer Health)
To the Editor: We read the article from the What Is the Optimal Method for Ischemia Evaluation in Women (WOMEN) trial by Shaw et al1 with great interest. It relates to extremely important issues in the evaluation of women with chest pain and possible angina pectoris. The primary goal of the trial was to determine whether the negative predictive values of the 2 screening techniques, myocardial perfusion imaging and exercise treadmill testing, were similar. Women with chest pain were expected to be at intermediate or high pretest risk for ischemic heart disease.1 To evaluate such a test, (1) there …
Thomas F. Heston Southern Medical Association
Thomas F. Heston Wiley
Pilz et al (1) should be congratulated on their research looking at the negative predictive value in patients with a normal adenosine-stress cardiac magnetic resonance imaging (MRI). However, the data they present is incomplete. The crux of the matter is this: Does the negative predictive value have any clinical usefulness when it is derived from a population where only negative tests are evaluated? The answer is no. We need to know the overall prevalence of disease in the entire population undergoing the test before making a judgment as to its diagnostic utility. Predictive values vary strongly with disease prevalence (2). Even for poorly accurate tests, when the prevalence of disease is very low, the negative predictive value is high. On the other hand, the sensitivity and specificity of a test are resistant to changes in disease prevalence. The authors do not supply prevalence data in their population of all patients undergoing cardiac MRI (ie, both those with positive and negative results). The surrogate risk calculator that looked at overall mortality cannot substitute for hard numbers of patients with obstructive disease. Just as an exercise, building on their data, let us suppose the following: 1) the same number of people have a positive test as a negative test; 2) the prevalence of disease in those with a negative test is 3.8% (6/158); and 3) the prevalence of disease in those with a positive test is more than 5 times greater, at 20% (32/158). This means that the test would have a sensitivity of 84%, a specificity of 55%, a negative predictive value of 96%, and an overall population prevalence of obstructive disease of 12%. Now, change prevalence of disease to 75% while keeping the sensitivity and specificity fixed (since they are resistant to changes in prevalence). The negative predictive value is now only 54% and the odds of having no obstructive disease given a negative test is only about 1-to-1. 
This example demonstrates Bayes' theorem: altering the pretest probability affects the posttest probability. A better way to present a predictive value would be to include the value based on the sample population along with the value calculated at a 50% disease prevalence (the "standardized predictive value"). To calculate the standardized predictive value, first accurately determine test sensitivity and specificity. Then, set the prevalence of disease to 50% while keeping the sensitivity and specificity fixed. Now, calculate the predictive value. This standardized predictive value would allow readers to reduce prevalence bias when comparing one diagnostic test with another. It would enable readers to more rapidly grasp the true clinical value of a test, without any need for further calculation.
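The arithmetic in the letter's hypothetical can be checked directly. The sketch below rebuilds the 2x2 table from the letter's stated assumptions (equal numbers of positive and negative tests, 6/158 diseased among negatives, 32/158 among positives) and then recomputes the negative predictive value at 75% prevalence with sensitivity and specificity held fixed:

```python
# Reproducing the letter's worked example: derive sensitivity, specificity,
# and NPV from the assumed 2x2 counts, then recompute NPV at a 75% disease
# prevalence with sensitivity and specificity held fixed (Bayes' theorem).

tp, fn = 32, 6               # diseased patients: positive vs. negative test
fp = 158 - tp                # 158 positive tests in total
tn = 158 - fn                # 158 negative tests in total

sens = tp / (tp + fn)        # 32/38  ~ 84%
spec = tn / (tn + fp)        # 152/278 ~ 55%
npv_sample = tn / (tn + fn)  # 152/158 ~ 96%

def npv_at(prev, sens, spec):
    """NPV at an arbitrary prevalence, for fixed sensitivity/specificity."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, sample NPV {npv_sample:.0%}")
print(f"NPV at 75% prevalence: {npv_at(0.75, sens, spec):.0%}")
```

Holding sensitivity at 84% and specificity at 55% while raising prevalence to 75% drops the NPV from 96% to roughly 54%, matching the figures in the letter.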
Thomas F. Heston Springer Science and Business Media LLC
Dr. Sharma and colleagues should be congratulated on their research of ST segment depression on adenosine stress testing and normal myocardial perfusion imaging; however, the data seem to imply something quite different than the stated conclusions. Only patients with normal myocardial perfusion and ST depression with adenosine stress were evaluated. Since all patients had ECG evidence of ischemia, it is not possible to conclude that the "specificity of ECG changes during adenosine infusion for the detection of severe obstructive CAD is poor." To determine specificity, both positive and negative test results must be evaluated, but this was not done. If the authors are referring to patients with 5 minutes or more of ST segment depression vs those with a shorter duration, then they would need to look only at patients undergoing angiography, and compare those with more vs less than 5 minutes of ST segment depression. The authors conclude that "patients with multiple coronary risk factors, particularly diabetes mellitus, should undergo further investigation." Why? All patients had a 0% incidence of a hard cardiac event (cardiac death or nonfatal myocardial infarction) during follow-up. Further investigation will not improve upon 0%, and their data provide no proof that angiography or revascularization was beneficial. A literature review may allow this hypothesis, but their data do not. Nevertheless, this research is helpful because it reaffirms the value of a normal myocardial perfusion scan. Their data demonstrate that even if adenosine stress testing shows ST segment depression, in patients with a normal myocardial perfusion scan the annual risk of a hard cardiac event is very low.
S. J. Goldsmith, W. Parsons, M. J. Guiberteau, L. H. Stern, L. Lanzkowsky, J. Weigert, T. F. Heston, E. Jones, J. Buscombe, and M. G. Stabin Society of Nuclear Medicine
VOICE Credit: This activity has been approved for 1.0 VOICE (Category A) credit. For CE credit, participants can access this activity on page 17A or on the SNM Web site (http://www.snm.org/ce_online) through December 31, 2012. You must answer 80% of the questions correctly to receive 1.0 CEH (Continuing Education Hour) credit.
T.F. Heston and R.L. Wahl E-MED LTD
Abstract Molecular imaging plays an important role in the evaluation and management of thyroid cancer. The routine use of thyroid scanning in all thyroid nodules is no longer recommended by many authorities. In the initial work-up of a thyroid nodule, radioiodine imaging can be particularly helpful when the thyroid stimulating hormone level is low and an autonomously functioning nodule is suspected. Radioiodine imaging can also be helpful in the 10–15% of cases for which fine-needle aspiration biopsy is indeterminate. Therapy of confirmed thyroid cancer frequently involves administration of iodine-131 after surgery to ablate remnant tissue. In the follow-up of thyroid cancer patients, increased thyroglobulin levels will often prompt the empiric administration of 131I followed by whole body radioiodine imaging in the search for recurrent or metastatic disease. 131I imaging of the whole body and blood pharmacokinetics can be used to determine if higher doses of 131I can be given in thyroid cancer. The utility of [18F]fluorodeoxyglucose (FDG) positron emission tomography (PET) is steadily increasing. FDG is primarily taken up by dedifferentiated thyroid cancer cells, which are poorly iodine avid. Thus, it is particularly helpful in the patient with an increased thyroglobulin but negative radioiodine scan. FDG PET is also useful in the patient with a neck mass but unknown primary, in patients with aggressive (dedifferentiated) thyroid cancer, and in patients with differentiated cancer where histologic transformation to dedifferentiation is suspected. In rarer types of thyroid cancer, such as medullary thyroid cancer, FDG and other tracers such as 99mTc sestamibi, [11C]methionine, [111In]octreotide, and [68Ga]somatostatin receptor binding reagents have been utilized. 124I is not widely available, but has been used for PET imaging of thyroid cancer and will likely see broader applicability due to the advantages of PET methodology.
Thomas F. Heston Southern Medical Association
Rickets, characterized by the failure of growing bones to mineralize, is now known to be caused by vitamin D deficiency. During the Industrial Revolution, rickets was noted to be rampant among underprivileged infants in the northern United States and several large cities in Europe. An autopsy study done in 1909 found that over 95% of infants dying before 18 months of age had histopathological evidence of rickets. In 1919, Edward Mellanby determined that rickets was due to a nutritional deficiency of a fat-soluble nutrient. Although adequate sun exposure cures vitamin D deficiency, it remained an endemic condition in the United States until the food supply, primarily milk, was fortified. This led to the near eradication of the disease by the 1950s. In spite of the continued fortification of food products with vitamin D, this strategy is not working as currently implemented. Hypovitaminosis D is making a comeback. It is now recognized as a common problem in all age groups worldwide. Much of the deficiency is attributed to more people working indoors and the increased use of sun protection when outside. However, vitamin D insufficiency is common even in sunny climates. Furthermore, as the benefits of breast feeding are recognized, more infants are exclusively breast fed, increasing the prevalence of hypovitaminosis D. Addressing this issue is important. We now know that hypovitaminosis D is associated with a large number of other medical conditions, including heart disease, colon cancer, diabetes, and overall mortality. In heart disease, hypovitaminosis D has been associated with the metabolic syndrome, an increased cardiac workload, and hypertension. The etiology of these relationships between hypovitaminosis D and cardiac disease is unclear; however, it is beyond that which would be expected from an increase in cardiac risk factors alone. 
Although it is still unknown why vitamin D has such widespread effects on cardiovascular health, the reason may be partly due to vitamin D's negative regulation of the renin-angiotensin system and its direct effects upon smooth muscle calcification and proliferation. In the current study published in this issue of the Southern Medical Journal, a systematic review of randomized controlled trials published in the medical literature that address vitamin D levels and blood pressure was undertaken. The inclusion criteria for the meta-analysis were rigorous. Only randomized controlled trials were included. Studies must have measured participants' baseline and follow-up blood pressures, or reported the change in blood pressure. The primary analysis required that all studies included were cohort studies, not just cross-sectional observational reports. The meta-analysis was also unique in its inclusion of only randomized controlled trials that looked at the effect of vitamin D supplementation in normotensive or hypertensive adults. Participants with other diseases such as kidney disease or diabetes, which could confound the analysis of the effect of vitamin D supplementation on blood pressure, were excluded from the primary analysis. The authors found 244 studies on their initial search of medical databases. Of these, they were able to identify 4 randomized controlled trials meeting their inclusion criteria. All 4 studies included a change in blood pressure as a primary or secondary outcome. The studies used cholecalciferol (vitamin D3) as the nutritional intervention. Supplemental dosages for the 4 studies were 200 IU per day in one paper, 400 IU per day in two papers, and a one-time dose of 100,000 IU in the last paper. While the individual studies showed only mild effects that were not always statistically significant, the meta-analysis of the studies found a statistically significant drop of 2.4 mm Hg in systolic blood pressure.
The clinical impact of this meta-analysis is uncertain, given the small change in blood pressure with vitamin D supplementation. However, it is important to note that these are population statistics, not individual patient levels. Based on these findings, a significant minority of patients will show a 5 mm Hg drop, or more, in their blood pressure. In some instances, this can make a meaningful difference. In clinical practice, it may be worthwhile to check serum 25-hydroxyvitamin D levels in vulnerable patients with risk factors. Vitamin D supplementation in these deficient individuals may help reduce their cardiovascular risk while also decreasing their risk of bone disease. Perhaps the greatest value of this meta-analysis is its addition to growing evidence that the problem of hypovitaminosis D may need to be addressed at a population-wide level. A small change in blood pressure over the entire population would have a significant effect on overall cardiovascular morbidity and mortality. The large prevalence of hypovitaminosis D around the world suggests that current food fortification strategies are not working, and that hypovitaminosis D may be causing unnecessary, premature cardiac disease.
From the Johns Hopkins International, Baltimore, MD; and International American University, Vieux Fort, St. Lucia. Reprint requests to Thomas F. Heston, MD, Johns Hopkins International, 5801 Smith Avenue, McAuley Hall, Suite 305, Baltimore, MD. Email: firstname.lastname@example.org
Accepted March 26, 2010. Copyright © 2010 by The Southern Medical Association 0038-4348/
T. F. Heston Society of Nuclear Medicine
TO THE EDITOR: The excellent paper by Dr. Sharp and colleagues compared the diagnostic utility of 123I-metaiodobenzylguanidine (MIBG) with 18F-FDG (1). They found that 18F-FDG is superior to 123I-MIBG in stage 1 and 2 neuroblastoma and that 123I-MIBG is superior to 18F-FDG in stage 4 neuroblastoma. The authors comment that for socioeconomic and radiation exposure reasons, a reduction in the total number of imaging procedures may be desirable in neuroblastoma patients. In this setting, what is important is not necessarily which test is superior. Rather, we want to know if one of these imaging tests can be safely eliminated. The answer is no. Not in early-stage neuroblastoma, and not in late-stage neuroblastoma. The authors found that in 10 of 10 patients with early disease, 18F-FDG was equivalent or superior to 123I-MIBG. But the 95% confidence interval for this ranges from about 72% to 100%. Thus, it remains statistically possible that 18F-FDG may be inferior to 123I-MIBG in up to 3 of 10 patients. We thus conclude that 123I-MIBG scanning cannot be safely eliminated in early neuroblastoma, although 18F-FDG works particularly well. In stage 4 disease, 123I-MIBG was superior in 24 of 40 patients, whereas 18F-FDG was better in 8 of 40 patients. Yes, 24 of 40 is different from 8 of 40 (P < 0.001), but so what? The more pressing question is whether 8 of 40 is significantly different from 0 of 40. That is, can we safely eliminate 18F-FDG scanning in stage 4 patients? No. Their data indicate that up to 3 of 10 late-stage patients will benefit from 18F-FDG scanning, even though 123I-MIBG performs better. The authors make a valuable contribution by giving us the relative superiority of each agent during the course of neuroblastoma. However, their data also indicate that 123I-MIBG scanning cannot yet be safely eliminated, nor can 18F-FDG scanning be safely eliminated, in the evaluation of early- or late-stage neuroblastoma.
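The confidence interval cited above can be approximated with the exact (Clopper-Pearson) formula, which has a closed form in the special case where all n trials succeed. The lower limit computed this way is about 69%, versus the letter's "about 72%"; the precise figure depends on which interval method is used:

```python
# Exact (Clopper-Pearson) lower confidence bound for an observed proportion
# of 10/10. When all n trials succeed, the two-sided lower limit simplifies
# to (alpha/2)**(1/n); other interval methods (e.g. one-sided, Wilson) give
# slightly different values, which is why published figures vary.

def lower_bound_all_successes(n, alpha=0.05):
    """Two-sided 95% Clopper-Pearson lower limit when all n trials succeed."""
    return (alpha / 2) ** (1.0 / n)

lo = lower_bound_all_successes(10)
print(f"95% CI for 10 successes in 10 trials: [{lo:.1%}, 100.0%]")
```

Either way, the interval leaves open the possibility that 18F-FDG is inferior in roughly 3 of 10 early-stage patients, which is the letter's central point.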
Thomas F. Heston and Alan Cheng Southern Medical Association
Atrial fibrillation is the most common sustained cardiac arrhythmia in people. It is characterized by uncoordinated atrial activation associated with an irregular and often rapid ventricular response. Atrial flutter is a closely related supraventricular tachycardia. Atrial flutter, the second most common atrial arrhythmia, is a reentrant rhythm notable for an atrial rate typically between 240 and 400 beats per minute. It commonly includes an atrioventricular block, with depolarizations to the ventricle that most often are conducted at a 2:1 ratio, with a 4:1 block being the next most common form. Atrial fibrillation affects approximately 2.3 million people in the United States and is expected to increase to 5.6 million by 2050. The overall estimated prevalence in the general population is 0.4–1%. It is uncommon before 60 years of age, but rapidly increases after that. It doubles in prevalence with each decade of age, affecting approximately 10% of the population for those in their eighties. As such, it is a common medical condition treated by family physicians, internists, cardiologists, and other clinicians taking care of the elderly. Not only is atrial fibrillation common, but it also is associated with serious health consequences. It accounts for about a third of hospital admissions for cardiac arrhythmias, and increases the risk of stroke and of overall mortality. Both men and women with atrial fibrillation are about three times more likely than matched controls to develop heart failure. Atrial fibrillation is a costly public health issue, primarily due to hospital care for persistent or permanent atrial fibrillation. The leading cause of hospital admission was cardioversion, followed by heart failure and implantation or change of pacemaker. Thus, both for public health reasons and for the benefit of individual patients, it is important that the proper patients be identified for cardioversion, and the clinical care be tailored to the individual. 
In this issue of the Southern Medical Journal, Aloul et al provide important insights into the success of direct current cardioversion in patients with atrial fibrillation and flutter. They found that patients with fine atrial fibrillation and atrial flutter were not only more likely to be successfully cardioverted, but were also more likely to remain in normal sinus rhythm one month later. The decision about whether and how to cardiovert a patient is multifaceted and primarily based upon age, symptoms, and evidence of structural heart disease. Should the patient be cardioverted, and, if yes, should chemical or direct current cardioversion be used? Is pre-procedure anticoagulation indicated? What medication regimen should be followed post-procedure? These issues require an in-depth understanding of the individual patient's relative risks and benefits. The significance of the increased likelihood of maintaining sinus rhythm at one month in patients with fine atrial fibrillation is unclear. It is already known that most patients after a single cardioversion without prophylactic antiarrhythmic drugs will have a recurrence of their arrhythmia within the year. However, this finding of success at one month may have particular importance in patients where the effect of their arrhythmia on symptoms is unclear. Often, patients will have other comorbid conditions that can result in the same symptoms as those resulting from atrial fibrillation. They may have fatigue or other vague symptoms that may or may not be due to their arrhythmia. In these patients, when there is a reasonable likelihood they will stay in sinus rhythm for at least a few weeks, it may be worthwhile to see what effect restoration of sinus rhythm has on their symptoms. 
If they are symptomatically much better after this trial, then more aggressive approaches, including catheter-based pulmonary vein isolation, may be indicated in an attempt to maintain long-term sinus rhythm should they have a recurrence of their arrhythmia. This paper gives us some insight as to who would likely maintain sinus rhythm long enough to at least determine a symptom-rhythm correlation. There are several limitations to their study that need to be kept in mind. The primary limitation is that the sample population was overwhelmingly male (74 out of 76, or 97%). The number of attempts at direct current cardioversion was not clearly stated. The value of knowing the one-month success rate, versus the one-year success rate, for cardioversion is not known. The number of patients in the fine atrial fibrillation category was low and raises concern about the fragility of their statistics. Because of these limitations, the clinical implications of the research by this group have yet to be determined. Nevertheless, with the global aging of the population, the proper identification and treatment of these arrhythmias will continue to be an important topic. An improved knowledge of the electrocardiographic characteristics affecting the success
From the Department of Radiology and Section for Cardiac Electrophysiology, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, MD.
Thomas F. Heston and Zsolt Szabo Wiley
Purpose: We present a case of incidentally noted giant cell arteritis in a patient undergoing 18F‐fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT imaging. The patient was originally referred to PET/CT for staging of his renal transitional cell carcinoma.
Thomas F. Heston and Richard L. Wahl Elsevier BV
In a recent issue of the Journal, Tamaki et al. found that in their study sample of 106 consecutive patients with stable congestive heart failure (CHF), those experiencing a sudden cardiac death (SCD) had on average a higher washout rate of iodine-123 metaiodobenzylguanidine (MIBG WR)
T F Heston BMJ
To the editor: In the study looking at cardiac CT angiography (CTA) using perfusion scintigraphy as the reference standard,1 cardiac CTA had a sensitivity of 75%, specificity of 98%, positive predictive value of 68%, and negative predictive value of 99%. The authors concluded that cardiac CTA had a moderate sensitivity, a moderate positive …
Thomas F. Heston Southern Medical Association
Pharmacologic stress testing plays an important role in the diagnosis of patients with known or suspected coronary artery disease. In many busy nuclear cardiology laboratories, 30 to 40% of all nuclear stress tests are pharmacologic. This makes it particularly important for clinicians to understand the indications, complications, and optimal way to perform pharmacologic stress testing. Thus, the comprehensive review by Patel et al is timely and valuable. By following these guidelines, our patients can receive the best care and experience the fewest side effects. Although patients seem to understand the reasoning behind an exercise stress test, the pharmacologic stress test can be confusing. A frequent question is why the patient is undergoing a pharmacologic stress test instead of an exercise stress test. It is important to note that an exercise treadmill test is generally considered inadequate when patients cannot reach 85% of their predicted maximum heart rate, cannot reach a workload of 5 METS, or cannot exercise for at least 3 minutes. If the patient is unable to exercise to these levels, then pharmacologic stress serves them much better. Other important reasons to perform pharmacologic stress testing include conditions such as aortic stenosis, left bundle branch block, a paced rhythm, recent myocardial infarction, and severe arterial hypertension. Because of the varying reasons, the decision to perform one stress modality over another is often not made until the patient is interviewed and examined on the day of the stress imaging procedure. The possibility of either stress test needs to be made clear to both the referring clinicians and to the patients so that there is no surprise when an exercise treadmill test is converted to a pharmacologic stress test or vice versa. Another important concern is whether pharmacologic stress testing is as diagnostically useful as exercise treadmill testing. 
Pharmacologic stress testing in conjunction with nuclear imaging is just as effective at risk-stratifying patients as is exercise stress testing. Although a treadmill stress test in patients without contraindications is preferred, the sensitivity and specificity of pharmacologic stress testing in conjunction with nuclear imaging is equivalent to an exercise stress nuclear study in the diagnosis of coronary artery disease. However, this does not mean that the post-test probability of disease is the same in patients with a normal pharmacologic stress study as in those with a normal exercise stress study. Patients requiring pharmacologic stress often have a greater number of comorbid conditions. While the accuracy of nuclear scanning is similar for both groups of patients, those that require pharmacologic stress testing have a slightly worse prognosis. Thus, it is not surprising to note that patients with a normal pharmacologic stress nuclear scan have a hard cardiac event rate of about 1 to 2% per year, compared with an event rate of 1% or less per year in patients with a normal exercise stress nuclear scan. Nevertheless, newer prognostic scoring systems may enable clinicians to risk-stratify patients undergoing pharmacologic stress testing into a very low risk category with an annual hard event rate of less than 1%. Finally, another frequent question patients have in regards to pharmacologic stress testing is its safety and side effects. A recent multi-center international trial found that the rate of cardiac death in patients undergoing dipyridamole stress testing was nearly identical with that of patients undergoing exercise stress testing (1 death out of 10,000 stress tests). Other pharmacologic agents have similarly low hard cardiac event rates. Although the incidence of side effects is relatively high, these tend to be minor and short-lived. 
Pharmacologic stress testing is a timely and important topic which will only increase as the baby boomer generation ages. This, along with the rapid increase in imaging technology, makes the present a good time for new physicians to learn about and older physicians to update their knowledge on pharmacologic methods of stress testing.
Dale A. Baur, Thomas F. Heston, and Joseph I. Helman Jaypee Brothers Medical Publishing
Abstract Nuclear medicine studies often play a significant role in the diagnosis and treatment of oral and maxillofacial diseases. While not commonly used in everyday dental practice, the dental provider should have a conversational knowledge of these imaging modalities and understand the indications and limitations of these studies. The purpose of this review is to discuss the nuclear medicine studies that have applications in the head and neck region as well as their indications, limitations, and diagnostic conclusions that can be drawn from these studies. Citation Baur DA, Heston TF, Helman JI. Nuclear Medicine in Oral and Maxillofacial Diagnosis: A Review for the Practicing Dental Professional. J Contemp Dent Pract 2004 February;(5)1:094-104.
Thomas F Heston, Douglas J Norman, John M Barry, William M Bennett, and Richard A Wilson Elsevier BV
The purpose of this study was to determine if an expert network, a form of artificial intelligence, could effectively stratify cardiac risk in candidates for renal transplant. Input into the expert network consisted of clinical risk factors and thallium-201 stress test data. Clinical risk factor screening alone identified 95 of 189 patients as high risk. These 95 patients underwent thallium-201 stress testing, and 53 had either reversible or fixed defects. The other 42 patients were classified as low risk. This algorithm made up the "expert system," and during the 4-year follow-up period had a sensitivity of 82%, specificity of 77%, and accuracy of 78%. An artificial neural network was added to the expert system, creating an expert network. Input into the neural network consisted of both clinical variables and thallium-201 stress test data. There were 5 hidden nodes and the output (end point) was cardiac death. The expert network increased the specificity of the expert system alone from 77% to 90% (p < 0.001), the accuracy from 78% to 89% (p < 0.005), and maintained the overall sensitivity at 88%. An expert network based on clinical risk factor screening and thallium-201 stress testing had an accuracy of 89% in predicting the 4-year cardiac mortality among 189 renal transplant candidates.
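The study's expert network is not described in enough detail to reproduce, but its forward pass can be sketched as a small feedforward network with 5 hidden nodes combining clinical and thallium-201 inputs into a single cardiac-death risk output. All weights, feature choices, and dimensions below are illustrative placeholders, not the study's actual model:

```python
# A minimal sketch of the "expert network" topology described above: clinical
# risk factors plus thallium-201 stress-test findings feed a neural network
# with 5 hidden nodes and a single output (risk of cardiac death). Random
# untrained weights are used here purely to illustrate the forward pass.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_inputs = 8                           # hypothetical feature count (clinical + scan)
W1 = rng.normal(size=(n_inputs, 5))    # input layer -> 5 hidden nodes
b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1))           # hidden layer -> single risk output
b2 = np.zeros(1)

def predict_risk(x):
    """Forward pass: returns an estimated probability of cardiac death."""
    h = sigmoid(x @ W1 + b1)
    return float(sigmoid(h @ W2 + b2))

patient = rng.normal(size=n_inputs)    # placeholder feature vector
print(f"estimated risk: {predict_risk(patient):.2f}")
```

In the study itself the network was presumably trained on the 4-year outcome data; the sketch only illustrates the 5-hidden-node architecture, not the fitted weights or the expert-system screening step that precedes it.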