A SPECIAL ARTICLE

The Complications of Diabetes in the Gastrointestinal Tract


Diabetes mellitus affects millions of people worldwide. With improved therapy, many patients are living longer, albeit with complications of the disease. Diabetes and hyperglycemia have immediate and long term effects in the gastrointestinal (GI) tract. We comprehensively review the complications of diabetes in the gastrointestinal tract including oral and esophageal candidiasis, gastroesophageal reflux disease (GERD), esophageal dysmotility, gastroparesis, small intestinal bacterial overgrowth, diarrhea and nonalcoholic fatty liver disease (NAFLD). We also describe the increased risk of GI malignancies in these patients.

Priya Simoes, MD1 Lisa Ganjhu, DO, AGAF2 1Department of Medicine, St Luke’s Roosevelt Health Center, Mount Sinai Health System, New York, NY 2Division of Gastroenterology, NYU Langone Medical Center, New York, NY

Diabetes mellitus (DM) affects 9.3% of the population of the United States (U.S.) and roughly 1.7 million new cases are diagnosed each year. Four percent of those diagnosed are type 1 diabetics.1 Diabetes currently affects 382 million people worldwide, and this is expected to increase to 592 million by 2035.2 With improved therapies, survival has increased and many patients live with long-term complications of the disease. Diabetes affects nearly every organ system including the gastrointestinal (GI) tract. The major gastrointestinal complications of diabetes are oral and periodontal infection, esophageal candidiasis, esophageal dysmotility, gastroesophageal reflux disease (GERD), gastroparesis, small intestinal bacterial overgrowth (SIBO), diabetic diarrhea, nonalcoholic fatty liver disease (NAFLD) and an increased risk of GI malignancies.3,4 Here we comprehensively review the effects of diabetes on the gastrointestinal tract.

Oral Cavity

Candidiasis And Periodontal Disease


Epidemiology


Diabetes has several oral manifestations including fungal infections, periodontal disease, mucosal ulcerations, xerostomia and ageusia. The reported incidence among poorly controlled diabetic patients is 11% to 30% for oral candidiasis and 30% to 60% for periodontal disease.5,6

Pathophysiology

Candida species normally colonize the oral cavity of healthy individuals without causing infection. Hyperglycemia increases buccal mucin and glucose and decreases the activity of salivary antimicrobial factors such as lysozymes.7 This promotes proliferation of buccal Candida, the most common species being Candida albicans.8 Compromised neutrophil function, with decreased adherence, chemotaxis and phagocytic activity in the presence of uncontrolled hyperglycemia, predisposes to periodontal destruction and infection.6

Diagnosis and Treatment

Symptoms usually include loss of taste and cotton-like sensation in the mouth. Oral candidiasis is diagnosed on inspection of the oral cavity; exam often reveals whitish, adherent plaques with erythematous and friable underlying mucosa.

Oropharyngeal candidiasis can initially be treated with topical antifungal agents such as nystatin 400,000 to 600,000 units daily in a swish and swallow manner.

If topical agents are ineffective, oral "azoles", such as fluconazole or itraconazole, are the mainstay of therapy. A loading dose of 200 mg followed by 100 to 200 mg daily for 7 to 10 days is recommended.9 Resistant cases may be treated with an oral suspension of amphotericin B.10

Esophagus


Candidiasis


Epidemiology and Pathophysiology

Diabetes mellitus is associated with a higher incidence of esophageal candidiasis, especially among elderly patients.

Diagnosis and Treatment

The typical presenting symptoms are odynophagia, dysphagia, heartburn and reflux in a patient with oral candidiasis. Rarely, it may present with gastrointestinal bleeding.

Upper endoscopy, often revealing whitish adherent plaques, ulcerations or stricturing of the mucosa, may be performed to confirm the diagnosis. Biopsy or brushings yield yeast and pseudohyphae invading the mucosa and positive fungal cultures. Empiric treatment with oral azoles may be started in an uncontrolled diabetic with odynophagia and dysphagia. If the Candida infection is resistant to fluconazole, alternative azoles such as itraconazole, voriconazole or posaconazole can be used. Intravenous caspofungin is preferred to amphotericin B for treatment failures.11

Esophageal Dysmotility and Gastroesophageal Reflux Disease (GERD)
Epidemiology

The prevalence of GERD is estimated at 10% to 20% in the Western world.12 Diabetes is associated with a prolonged esophageal transit time and a 1.6 times higher risk of developing GERD than in the general population, particularly among women and young patients.13,14,15

Pathophysiology

Esophageal dysmotility in diabetes is multifactorial, involving damage to the interstitial cells of Cajal and vagal/autonomic neuropathy, and is characterized by reduced amplitude and velocity of lower esophageal contractions.16,17 As a result, there is impaired esophageal peristalsis with frequent retrograde waves, decreased lower esophageal sphincter (LES) tone and frequent transient LES relaxations, causing heartburn, regurgitation and dysphagia.18,19

Diagnosis

GERD is diagnosed clinically from typical symptoms of regurgitation and heartburn. Endoscopy may be performed for atypical, unresponsive or alarm symptoms. Ambulatory pH monitoring and esophageal manometry are recommended prior to surgical treatment.20

Treatment

Lifestyle modifications such as weight loss, avoidance of meals two to three hours before bed time, elimination of foods triggering symptoms and elevation of the head of the bed are recommended. GERD is treated symptomatically with proton pump inhibitors (PPIs). If symptoms are unresponsive to PPIs, endoscopic or laparoscopic fundoplication surgery may be considered.20

Stomach


Gastroparesis

Gastroparesis is a motility disorder characterized by delayed gastric emptying in the absence of mechanical obstruction.

Epidemiology

Gastroparesis generally develops in long-standing diabetes of more than 10 years duration with autonomic dysfunction. One-third of gastroparesis cases are attributable to diabetes.21,22 The disorder is female predominant (4:1 compared to males), and a higher prevalence has been described in type 1 diabetes.23 Gastroparesis is characterized by nausea, vomiting, bloating, epigastric pain and early satiety.

Delayed gastric emptying is associated with other comorbid conditions such as hypertension, cardiovascular disease and retinopathy, and may lead to more frequent hospitalizations among patients with diabetes.24

Pathophysiology

Autonomic neuropathy and damage to the interstitial cells of Cajal from hyperglycemia and oxidative stress result in decreased gastric motility, impaired pyloric relaxation and increased postprandial resistance, subsequently leading to delayed gastric emptying.25 Acute hyperglycemia (>288 mg/dL) can be associated with increased gastric emptying time and worsening symptoms.26

Diagnosis

The diagnosis is made by nuclear medicine tests measuring the gastric emptying of a solid-phase meal. Evaluation is prompted by symptoms, poor glycemic control, or consideration of oral hypoglycemic medications known to slow gastric emptying.27 The gold standard is gastric emptying scintigraphy. A radionuclide-labeled low fat, solid meal is ingested, and gastric emptying is quantified as the fraction of the meal remaining in the stomach at baseline and at 1, 2 and 4 hours after ingestion. Having patients observe an overnight fast and ensuring well controlled blood sugars (<275 mg/dL fasting) improve the accuracy of the test.28 Modified scintigraphy, which measures gastric emptying over a shorter time period, has lower sensitivity and specificity.
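The retention calculation can be sketched as follows. The 2- and 4-hour cutoffs used here (>60% retained at 2 hours, >10% at 4 hours) are the commonly cited consensus thresholds for delayed emptying, not values from this article; confirm against local nuclear medicine protocols.

```python
# Sketch: interpreting a 4-hour gastric emptying scintigraphy study.
# Retention cutoffs are commonly cited consensus values (assumption).

def percent_retained(baseline_counts: float, counts_at_t: float) -> float:
    """Fraction of the radiolabeled meal still in the stomach, as a percent."""
    return 100.0 * counts_at_t / baseline_counts

def is_delayed(retention_2h: float, retention_4h: float) -> bool:
    """Flag delayed gastric emptying by the standard 2 h / 4 h cutoffs."""
    return retention_2h > 60.0 or retention_4h > 10.0

# Example: 1000 counts at ingestion, 700 remaining at 2 h, 150 at 4 h.
r2 = percent_retained(1000, 700)   # 70.0% retained at 2 hours
r4 = percent_retained(1000, 150)   # 15.0% retained at 4 hours
print(is_delayed(r2, r4))          # True: delayed by both cutoffs
```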

The gastric emptying breath test (GEBT), which measures gastric emptying via breath excretion of 13CO2 after ingestion of a meal labeled with the stable isotope carbon-13, has shown sensitivity and specificity comparable to scintigraphy in studies, but requires further validation.27,29 Upper endoscopy performed after an overnight fast that shows retained food in the stomach may also assist in making the diagnosis of gastroparesis.

Wireless capsules ("SmartPill") are used to measure pH, temperature and pressure in the GI tract. A small capsule, with the ability to transmit data to a receiver worn around the patient's neck, is ingested. It measures gastric emptying time by sensing the abrupt rise in pH as the capsule passes from the acidic stomach into the duodenum, and can thus be used to diagnose gastroparesis.30,31 Wireless capsule studies have a sensitivity and specificity of 83% compared with gastric scintigraphy.29
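The pH-jump detection can be sketched as below. The jump threshold used (a rise of at least 3 pH units to a value above 4) is an illustrative assumption for this sketch, not a device specification.

```python
# Sketch: estimating gastric emptying time from wireless capsule pH data.
# Passage into the duodenum appears as an abrupt rise from acidic gastric
# pH (~1-3) to near-neutral duodenal pH. Threshold values are assumptions.

def gastric_emptying_time(samples):
    """samples: list of (minutes_since_ingestion, pH) readings in order.
    Returns the time of the first abrupt pH jump, or None if none is seen."""
    for (t_prev, ph_prev), (t_curr, ph_curr) in zip(samples, samples[1:]):
        if ph_curr - ph_prev >= 3.0 and ph_curr > 4.0:
            return t_curr
    return None

readings = [(0, 2.1), (60, 1.8), (120, 2.3), (180, 6.5), (240, 6.8)]
print(gastric_emptying_time(readings))  # 180 (minutes)
```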

Treatment

Management strategies generally consist of optimizing glycemic control, improving hydration and nutritional status, controlling symptoms and managing complications.

Dietary factors play an important role in gastroparesis. Small, frequent, low fat, low fiber meals are recommended, with 55% to 60% of dietary calories from carbohydrates, 15% from protein and 25% to 30% from fat. Vitamins and minerals should be replaced through oral supplementation. If patients are unable to tolerate solid meals, more liquid calories are recommended since they empty more easily by gravity.32

The mainstays of pharmacologic therapy are prokinetics such as metoclopramide, erythromycin and domperidone, which hasten gastric emptying. Erythromycin has the greatest effect on gastric emptying and is generally used for acute symptom management.32 Newer drugs such as 5-HT4 receptor agonists (prucalopride, velusetrag)33 and muscarinic antagonists (acotiamide) are being studied for their efficacy in gastroparesis.34 Nonpharmacologic methods such as intrapyloric botulinum toxin injection and gastric electrical stimulation are used to treat medically refractory gastroparesis.35 Anti-diabetic drugs, such as GLP-1 analogs, that slow gastric emptying should be discontinued. Repeated hospitalizations in gastroparesis are usually for nausea, vomiting and pain management. Additional diagnostic testing rarely changes management and should be avoided to decrease prolonged hospitalizations and increased healthcare costs.36 Refractory nausea may be treated with tricyclic antidepressants (TCAs), phenothiazines and antihistamines such as meclizine. Selective serotonin reuptake inhibitors and low dose TCAs may also help improve abdominal pain.29

Small Intestine and Colon


Small Intestinal Bacterial Overgrowth


Definition and Epidemiology

Small Intestinal bacterial overgrowth (SIBO) refers to an increase in the number of bacteria or a change in the composition of the small bowel microbiome.

The incidence of SIBO in diabetic patients with autonomic neuropathy varies between 30 and 60%.37,38

Pathophysiology

Decreased intestinal motility, malabsorption and increased intestinal secretion cause SIBO in patients with diabetes. The disorder may present with abdominal pain, bloating, distention, flatulence and diarrhea and may result in nutritional deficiencies, chronic anemia, steatorrhea, and malnutrition.

Diagnosis

The gold standard for diagnosing SIBO is culture of jejunal aspirates. Noninvasive methods include hydrogen and methane breath testing after an oral glucose or lactulose load. An early peak in breath hydrogen or methane production, reflecting bacterial fermentation of the substrate in the small bowel, indicates SIBO.39 However, because a false positive early peak can occur when the glucose or lactulose is instead fermented by bacteria in the cecum, the test has limited sensitivity, though good specificity.40
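The early-peak criterion can be sketched as follows. The positivity rule used here (a rise of at least 20 ppm above baseline within 90 minutes of the substrate load) follows commonly cited breath-testing consensus criteria rather than this article, and local laboratory cutoffs may differ.

```python
# Sketch: flagging an early hydrogen peak on a glucose breath test for SIBO.
# The >= 20 ppm rise within 90 minutes is a commonly cited consensus
# criterion (assumption); local lab cutoffs may differ.

def suggests_sibo(readings, rise_ppm=20, window_min=90):
    """readings: list of (minutes_after_load, breath_H2_ppm); the first
    reading is the fasting baseline. Returns True on an early peak."""
    baseline = readings[0][1]
    return any(t <= window_min and ppm - baseline >= rise_ppm
               for t, ppm in readings[1:])

h2 = [(0, 4), (15, 6), (30, 12), (45, 28), (60, 41), (90, 35)]
print(suggests_sibo(h2))  # True: +24 ppm over baseline by 45 minutes
```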

Treatment

Glycemic control and cyclical antibiotics are the mainstays of treatment for SIBO in patients with diabetes. Metronidazole (750 mg/day) or rifaximin (1,200 mg/day) are the antibiotics of choice.41,42 Prebiotics and probiotics may be used to modify the inflammatory response; however, they are contraindicated in patients with lactobacilli overgrowth. Since SIBO is thought to be secondary to motility disorders, prokinetics such as metoclopramide or cyclic gut lavage with polyethylene glycol may be of benefit.

Diabetic Diarrhea
Definition and Epidemiology

Diarrhea in patients with diabetes may be caused by diabetes itself, coexisting conditions or by medications. It is often described as episodic, explosive diarrhea in the absence of an infectious or non-infectious cause.43 The prevalence of diarrhea among diabetic patients is estimated at 15%.44

Pathophysiology

Several mechanisms as to the etiology of diarrhea in diabetic patients have been postulated; these include autonomic dysfunction of the enteric neurons from autoantibodies as well as enteric inflammation with increased IL-6 levels causing alteration of intestinal motility.45 Exocrine pancreatic insufficiency, celiac disease, small intestinal bacterial overgrowth and microscopic colitis have an increased prevalence in patients with diabetes and may cause diarrhea.46 Medications such as biguanides and alpha-glucosidase inhibitors (e.g., acarbose), as well as dietary products like sorbitol-based sweeteners, may also contribute.45 Poor glycemic control may also worsen the diarrhea.46

Treatment

Strict glycemic control, elimination of possible food triggers and treatment of underlying causes form the basis of treatment. Pharmacotherapy consists of antimotility agents such as loperamide or tincture of opium. Topical or oral clonidine and somatostatin analogs may be used for severe symptoms.47,48

Liver


Nonalcoholic Fatty Liver Disease and Nonalcoholic Steatohepatitis


Definition and Epidemiology

Nonalcoholic fatty liver disease (NAFLD) is characterized by hepatic steatosis in the absence of secondary causes. In the United States, NAFLD is the most common chronic liver disease. NAFLD is commonly associated with diabetes as part of the metabolic syndrome, which also includes central obesity, low levels of high density lipoprotein (HDL), hypertriglyceridemia and hyperglycemia.49 Nonalcoholic steatohepatitis (NASH) is characterized by hepatocyte fat accumulation with concomitant hepatocyte injury and fibrosis. Studies have reported a 69% to 87% prevalence of NAFLD and a 60% prevalence of NASH in patients with diabetes, compared with a roughly 20% prevalence of NAFLD and a 3% to 5% prevalence of NASH in the general population.49,50

Pathophysiology

Development of NAFLD and NASH in DM is explained by a “two-hit” hypothesis. Accumulation of triglycerides in hepatocytes is considered to be the first step. The second hit is linked to the formation of advanced glycation end products that produce oxidative stress on hepatocytes and increase the fibrogenic potential of the stellate cells.51,52 Hyperinsulinemia and increased insulin resistance are associated with greater hepatic inflammation and fibrosis and patients with diabetes and NAFLD have a higher risk of progression to NASH.53

Diagnosis

Liver biopsy remains the gold standard for diagnosing NAFLD/NASH and should be performed in patients at high risk and in those in whom a competing etiology cannot be excluded. Imaging by ultrasound, computed tomography (CT) or magnetic resonance imaging (MRI) may not accurately assess the degree of fibrosis and steatohepatitis. However, when coupled with noninvasive fibrosis assessments such as ultrasound elastography and fibrosis prediction scores, these modalities have excellent specificity and sensitivity and are gradually replacing liver biopsy for the diagnosis of NASH.53,54

Treatment

Weight loss of 3% to 5% of total body weight improves steatosis, and weight loss greater than 10% improves steatohepatitis. Other lifestyle modifications, including reducing alcohol consumption and increasing exercise, are also recommended. Over the years, several clinical trials involving many medications and supplements have been undertaken in an effort to improve NASH and NAFLD. While some have shown promise, there is insufficient evidence to support the use of any drug as the sole treatment for NASH in diabetics.55

Gastrointestinal Malignancies

Diabetes mellitus is associated with an increased risk of various GI malignancies. Two mechanisms have been hypothesized.

  • Insulin receptor (IR) and insulin-like growth factor-1 receptor (IGF-1R) pathway: Chronic hyperinsulinemia leads to upregulation of IGF-1R, epidermal growth factor (EGF) and their downstream pathways. This results in cellular proliferation, angiogenesis and inhibition of apoptosis, which promote tumor development in the pancreas and premalignant advanced adenomatous polyp formation in the colon.56,57,58,59 Hyperinsulinemia also leads to increased pro-inflammatory cytokines like interleukin 6 (IL-6) and decreased anti-inflammatory compounds such as adiponectin, subsequently causing hepatic inflammation and fibrosis, which are precursors to HCC.60
  • Receptor for advanced glycation end products (RAGE): Advanced glycation end products accumulate at an accelerated rate in diabetes. In vitro studies show that upregulation of RAGE is associated with inflammation and tumorigenesis in colon and pancreatic cancer.61,62

Pancreatic Adenocarcinoma
Epidemiology

Several studies have shown an increased risk of pancreatic adenocarcinoma in diabetes, and roughly 70% of patients with pancreatic cancer have diabetes or impaired glucose tolerance.64,65 It is unclear whether diabetes is causal in the pathogenesis of pancreatic cancer or an effect of it.

New-onset diabetes is particularly associated with pancreatic cancer, with a recent diagnosis of diabetes conferring a 50% greater risk of malignancy than long-standing (>5 years) diabetes.65,66 Pannala et al. demonstrated that new-onset diabetes associated with pancreatic cancer resolved after curative resection.67

Diagnosis

Presenting symptoms include weight loss, epigastric pain, anorexia, painless jaundice and nausea. Several imaging modalities may be used to evaluate suspected pancreatic cancer. The diagnostic accuracy of CT varies between 73% and 87% depending on the size of the mass.68 MRI is superior to CT with 90% sensitivity; this increases to 97% when combined with magnetic resonance cholangiopancreatography (MRCP).69

Endoscopic ultrasound (EUS) provides high-resolution images and allows for needle sampling of pancreatic cells. It has 98% sensitivity and is becoming the diagnostic modality of choice for pancreatic cancer.70 Tumor markers such as carbohydrate antigen 19-9 (CA 19-9) and carcinoembryonic antigen (CEA) are used more commonly as prognostic indicators and for assessing response to treatment.

Colorectal Cancer (CRC)
Epidemiology

Diabetes is associated with an increased risk for colon cancer in both men and women and increased risk of rectal cancer in men.71 Diabetic patients with colon cancer have increased perioperative mortality, disease recurrence, worse response to chemotherapy, more treatment complications and increased risk of hepatic decompensation.72,73,74

Diagnosis

Presenting symptoms are usually hematochezia or melena, change in bowel habits, abdominal pain, unexplained iron deficiency anemia or weight loss. Rectal cancer may cause tenesmus and rectal pain.75

Screening for colon cancer focuses on detecting premalignant polyps before they develop into advanced disease. Colonoscopy is considered the test of choice, as polyps can be removed and suspicious lesions biopsied. Other noninvasive screening methods include CT colonography and stool DNA testing. Assays that detect blood in stool, such as fecal immunochemical testing (FIT) and guaiac occult blood testing (gOBT), have roughly 65% to 80% sensitivity and 85% to 95% specificity for CRC detection. However, sensitivity is lower for detecting advanced adenomas.76
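A quick Bayes' rule calculation shows what sensitivity and specificity figures in this range imply in a screening setting. The CRC prevalence used below (0.5% in an average-risk screening population) is an illustrative assumption, not a figure from this article.

```python
# Sketch: positive predictive value of a stool-based screening test.
# Sensitivity/specificity are taken from the mid-range quoted in the text;
# the 0.5% prevalence is an illustrative assumption.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

value = ppv(sensitivity=0.80, specificity=0.90, prevalence=0.005)
print(round(value, 3))  # 0.039: roughly 4% of positive screens have CRC
```

This is why a positive stool test leads to diagnostic colonoscopy rather than being treated as a diagnosis in itself: at low disease prevalence, even a fairly specific test produces many more false positives than true positives.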

Hepatocellular Carcinoma
Epidemiology

Long-standing NAFLD and NASH can lead to cirrhosis, and cirrhosis is the greatest risk factor for developing hepatocellular carcinoma (HCC). Diabetes confers a two to three times increased risk of HCC, especially in older, Caucasian patients.64 The prevalence of diabetes may be double among HCC patients compared with controls, an association that remained significant even after adjusting for confounding factors such as alcohol use, hepatitis B or C infection, obesity and hemochromatosis.77 A longer duration of diabetes may increase the risk of developing HCC.78

Diagnosis

Cirrhosis is the most important risk factor, with 1% to 6% of patients with cirrhosis developing HCC annually.79

Surveillance with ultrasound (US) at 6-month intervals has been associated with a reduction in mortality in these patients.80 Alpha-fetoprotein (AFP) levels have historically been used for surveillance but are no longer recommended.81

Typical appearance on four-phase (unenhanced, arterial, venous and delayed) CT and on MRI both have excellent sensitivity and specificity (>90%) for lesions >2 cm. Dual imaging with MRI and ultrasound has excellent positive predictive value for smaller lesions.82

Management

While no specific recommendations exist for early screening of patients with diabetes for GI malignancies, knowledge of the increased risk warrants further investigation of gastrointestinal symptoms in these patients.

Metformin has been associated with decreased risk of pancreatic and hepatocellular carcinoma and is protective against colorectal cancer among patients with diabetes. Conversely, insulin and sulfonylureas have been associated with an increased risk of malignancy, supporting the hypothesis that hyperinsulinemia has tumorigenic effects.83,84,85

CONCLUSION

Knowledge of the gastrointestinal complications of diabetes is important for physicians to make an appropriate diagnosis, manage symptoms and improve the quality of life of patients living with long-standing diabetes. Awareness of the increased risk of malignancies in this population may help in early referral and diagnosis.


NUTRITION ISSUES IN GASTROENTEROLOGY, SERIES #143

Low Residue vs. Low Fiber Diets in Inflammatory Bowel Disease: Evidence to Support vs. Habit?


Inflammatory bowel disease (IBD) involves chronic inflammation of the gastrointestinal tract. Patients with IBD may experience gastrointestinal distress through abdominal pain, cramping, diarrhea, and hematochezia. Despite lack of evidence to support the practice, IBD patients are often instructed to limit fiber or residue during active flares to reduce gastrointestinal distress. The same advice is common when intestinal strictures are identified or suspected, to reduce the risk of obstruction. Low residue and low fiber diets are often recommended interchangeably, although they comprise two distinct diets. This review discusses the similarities and differences between “residue” and “fiber” and presents the studies that have evaluated the use of low residue and low fiber diets in IBD.

Neha D. Shah, MPH, RD, CNSC Berkeley N. Limketkai, MD Digestive Health Center, Stanford Health Care Division of Gastroenterology & Hepatology Stanford University School of Medicine, Palo Alto, CA

INTRODUCTION

Inflammatory bowel disease (IBD) encompasses two primary disorders: Crohn’s disease (CD) and ulcerative colitis (UC). It is characterized by chronic inflammation of the gastrointestinal tract.1 Common clinical manifestations of IBD include abdominal pain, cramping, diarrhea and hematochezia. CD patients, in particular, may also experience intestinal strictures with obstructive symptoms, abscesses, and fistulizing disease.

Dietary recommendations for IBD patients have been highly variable, largely due to the dearth of research data available to guide clinical practice. Nonetheless, IBD patients are often instructed to limit their consumption of fiber or residue during an active flare in order to help minimize gastrointestinal distress, particularly when intestinal strictures are suspected. Recommendations for a “low residue” or “low fiber” diet are often used interchangeably and incorrectly as synonymous terms; although there are similarities between the low residue and low fiber diets, they are indeed distinct diets with different theoretical effects on digestion.

The purpose of this review is to clarify what constitutes “residue” and “fiber”, discuss their physiologic effects on digestion and present the studies that have investigated the use of low residue and low fiber diets in IBD.

RESIDUE vs. FIBER What is Residue?

Historically, the term “residue” has denoted the by-products of the digestive process that were eventually defecated in stool. This definition covers the gamut of partially or completely undigested food particles, ash, gastrointestinal secretions, intestinal epithelial sloughing, and bacterial waste. In early canine studies, the appearance of stools after food consumption was generically called “residue”.2 In other discussions, residue was described as “crude fiber”: a component of foods not digestible by pancreatic and intestinal enzymes in humans and therefore not available for intestinal absorption.3 Crude fiber, also called “roughage” or “bulk”, is primarily comprised of cellulose, hemicellulose and lignin. Finally, residue has also been designated as any food that increased stool output, including meats, fats and dairy products, even if the foods underwent enzymatic digestion or had low amounts of crude fiber.

The actual composition of a high or low “residue” diet has been a topic of investigation for over a century, yet it continues to have no clear definition. In early canine studies, the rate of passage of foods was thought to correlate with digestion of foods and appearance of residue. In an early study from 1884, Müller found that meat fed to dogs produced stool similar to that defecated during fasting.4 Later in 1905, Heile published that up to 98% of lean meat and 100% of rice were absorbed.5 By contrast, milk increased bulk and accelerated the passage of stool. In 1928, Hosoi et al. systematically evaluated intake of various foods and then measured stool output in dogs with ileal fistulas.2 Proteins (e.g., lean meats, hardboiled egg) and some carbohydrates (e.g., rice, bread) were found to have a slow rate of passage through the intestines. Foods with little or no cellulose did not produce residue even 8 hours after consumption. On the other hand, fruits (e.g., canned pineapple, skinless apple, banana and prunes) had an increased rate of passage, with residues appearing within an hour of consumption. Whole milk and Swiss cheese significantly increased residue within 15-30 minutes after consumption, an effect assumed to stem from lactose malabsorption. Gelatin, broth, hardboiled eggs, lean meat, liver, rice, farina and cottage cheese produced the least amount of residue, whereas fruits, baked potatoes, bread, lard, butter and whole milk produced the largest residue. The authors concluded that “the best basis for a low residue diet is lean meat… rice, hard boiled eggs, sugars (except lactose), and probably small amounts of fruit juices, tea and coffee” and that “less material will be carried into the colon if the diet is kept fairly dry.”

What is Fiber?

In an early definition, fiber was considered to be the non-digestible plant-based components of cell walls that are not found in foods of animal origin.6 Subsequent definitions were broadened to include associated plant-based substances, such as gums, mucilages, pectin and phytates. Other definitions specified methods of extraction of fiber from foods. In 2001, the Institute of Medicine (IOM) proposed new definitions for "dietary fiber" and "added fiber" to standardize the definition. Similar to an earlier description of crude fiber, "dietary fiber" is defined as "non-digestible carbohydrates and lignin that are intrinsic and intact in plants" and "added fiber" is defined as the "isolated, non-digestible carbohydrates that have beneficial physiological effects in humans." Thus, the "total fiber" is then considered the sum of both dietary and added fiber (http://www.nal.usda.gov/fnic/DRI/DRI_Proposed_Definition_Fiber/proposed_definition_fiber_full_report.pdf). See Table 1 for the definitions of what constitutes residue and fiber.


The physiologic effects of fiber vary based on its chemical composition and properties, with solubility, or the ability to dissolve in water, being the most salient property. As such, fiber has often been classified as "soluble fiber" or "insoluble fiber" with respect to its role in health, particularly in the management of gastrointestinal disorders, cardiovascular disease, diabetes, and obesity. The IOM recommends phasing out these terms and instead favors classifying fiber by its fermentability and viscosity, since these properties better outline the physiological benefits of fiber on gastric and small bowel function. Fermentability refers to the ability of colonic bacteria to digest the fiber, while viscosity concerns the ability of the fiber to hold water, thicken stool, and resist flow.6 See Table 2 for definitions of the physiochemical properties of fiber.

The fibers traditionally considered to be soluble are generally fermentable and viscous; they include guar gum, pectin, some hemicelluloses, fructo-oligosaccharides and inulin. Once fermented by colonic bacteria, they contribute to the production of short chain fatty acids (SCFAs) – such as acetate, butyrate, and propionate – that are used as fuel by colonocytes.7 The SCFAs, especially butyrate, stimulate growth of beneficial nonpathogenic intestinal bacteria, which are thought to play an anti-inflammatory role, enhance immune function, and optimize intestinal barriers to pathogenic bacteria. Additionally, the increase in intestinal bacteria through fermentation contributes to bulk in stools. The insoluble fibers, including cellulose and lignin, provide "roughage" and are generally considered to be non-fermentable and non-viscous. By increasing bulk and frequency of stool, they promote passage of stool through the intestinal tract and ease defecation.8 The food sources of fermentable fibers include bananas, potato, brown rice, and oats, whereas good sources of non-fermentable fibers include bran, nuts, seeds, whole wheat and skins of fruits and vegetables.9 See Table 3 for a review of fiber types and food sources of each.

To complicate matters, most foods have a mix of various fibers. Although the fiber content of foods is easily found on food labels listed as “dietary fiber,” unfortunately, there is no requirement that manufacturers classify the fiber further.

RESIDUE AND FIBER DIET STUDIES IN IBD
The Low Residue Diet

The low residue diet has traditionally been used to reduce fecal volume in a number of situations: to treat diarrhea, to keep wounds free from stool, and to promote wound healing in patients with decubitus ulcers or those who have undergone rectal surgery. As there is no consensus on the composition of residue, studies that evaluated the low residue diet have used various definitions of residue and degrees of food restriction. We speculate that the low residue diets in these studies would limit intake of crude fiber, meats and dairy.

There are limited studies on the use of the low residue diet in IBD. A two-year prospective Italian trial compared the long-term effects of a low residue diet with a regular diet in 71 adult patients with active CD.10 The authors defined the low residue diet as the elimination of whole grains, legumes and all fruits and vegetables (except for bananas and skinless potatoes); many clinicians would call this a "low fiber" diet. Eighty-five patients were identified for possible participation, and all patients, except for five recently diagnosed patients, had already been prescribed a low residue diet. After eliminating 4 patients due to radiologic strictures and another 10 patients due to lack of willingness to adhere to the low residue diet, the remaining 71 patients were randomized to either continue following the low residue diet or transition to an unrestricted Italian diet. The consumption of dairy was allowed as tolerated in both groups. There were no differences between the two groups in rates of flares, intestinal obstruction, need for hospital admission and/or need for surgery. The addition of fiber in the "regular" Italian diet was tolerated well overall. The study authors concluded that patients should be encouraged to eat an unrestricted diet as tolerated.

The Low Fiber Diet

In contrast to the low residue diet, the low fiber diet restricts only fiber; however, to our knowledge there is no consensus on what defines a low fiber diet. Review of various patient education handouts on the low fiber diet reveals a trend to limit the type of fiber traditionally viewed as insoluble. Although there are limited studies to support this practice, the low fiber diet is often recommended to those who have, or who are suspected to have, intestinal strictures, to reduce the risk of obstruction. Some clinicians may also favor the diet in patients in whom small bowel bacterial overgrowth has arisen from intestinal strictures. There are very limited studies to support the restriction of fiber during an IBD flare. Quite the contrary: the inclusion of fiber has generally been of greater interest to researchers and has been suggested to play a role in treatment as well as maintenance of remission in both CD and UC, mainly via the anti-inflammatory properties of SCFAs. Nonetheless, the bulk of the evidence remains inconclusive in supporting fiber as a treatment or as maintenance therapy in IBD. See Table 4 for a comparison of the restrictions imposed by the two diets.

Crohn’s Disease

The use of fiber in CD has produced inconsistent outcomes. In an early four-year prospective study, 32 adult patients with active CD, including those with intestinal strictures, were treated with a fiber-rich diet in addition to corticosteroids, azathioprine and sulfasalazine, as appropriate.11 The participants were compared with historical controls matched by age, site of disease at diagnosis, previous resections and disease duration. The fiber-rich treatment group had fewer and shorter hospital stays; only one patient in each group underwent bowel surgery for stricturing disease. Average fiber intake in the fiber-rich group was 33 g per day, well above the national average intake of 20 g per day. The study suggests that fiber restriction may not be necessary and that fiber may instead have a favorable impact on the prognosis of CD patients. In another two-year single-blind study, 352 adult patients with mildly active or inactive CD on sulfasalazine were randomized to receive either a low or a high fiber diet.12 The weekly average fiber consumption was 110 g (16 g/day) in the low fiber group and 195 g (28 g/day) in the high fiber group. There were no significant differences in intestinal surgeries, hospital admissions, or outpatient treatments between the two groups.
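As a quick arithmetic check on the intakes reported above, the weekly totals convert to the stated daily averages (a minimal sketch; the function name is ours, not from the study):

```python
# Convert the trial's reported weekly fiber totals (grams) to daily averages.
def weekly_to_daily(weekly_grams: float) -> float:
    """Average daily intake in grams from a weekly total."""
    return weekly_grams / 7.0

# Low fiber arm: 110 g/week; high fiber arm: 195 g/week.
print(round(weekly_to_daily(110)))  # 16, matching the reported 16 g/day
print(round(weekly_to_daily(195)))  # 28, matching the reported 28 g/day
```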

Ulcerative Colitis

Evidence for the use of fiber in UC also remains inconclusive. In an open label, multicenter randomized trial, 102 adult patients with UC in remission were divided into three groups to receive 20 grams of Plantago ovata seeds (fermentable fiber), daily mesalamine, or a combination of the seeds with mesalamine.13 Treatment failure was seen in 14 of 35 (40%) patients in the seeds group, 13 of 37 (35%) in the mesalamine group and 9 of 30 (30%) in the seeds-plus-mesalamine group. An increase in fecal butyrate levels was found in the groups given the seeds. While the authors suggested that fiber may have effectiveness comparable to mesalamine, the sample size is too small to support this conclusion.
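The reported failure rates follow directly from the counts; a brief sketch (arm labels are ours) reproduces the percentages:

```python
# Treatment-failure proportions per study arm, computed from the reported counts.
arms = {
    "Plantago ovata seeds": (14, 35),
    "mesalamine": (13, 37),
    "seeds + mesalamine": (9, 30),
}

for arm, (failures, total) in arms.items():
    pct = 100.0 * failures / total
    print(f"{arm}: {failures}/{total} = {pct:.0f}%")
# seeds 40%, mesalamine 35%, combination 30%, as reported
```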

CONCLUSION

There is no consensus on what constitutes residue; in contrast, clear definitions of fiber exist, as does a wealth of data on the various types of fiber and the fiber content of foods. The low residue and low fiber diets are not synonymous, and owing to the lack of a clear definition of the former, they may impose different limitations on dietary choices. The low residue diet is the more restrictive: in addition to limiting some fiber, it also limits meats and dairy. Low residue or low fiber diet prescriptions are common in clinical practice for symptomatic IBD patients, despite a lack of research on their efficacy. If these diets are utilized there should be careful follow-up, and if symptomatic relief does not occur, the restrictions should be lifted. Regarding intestinal strictures, there is insufficient evidence to encourage or discourage the use of a low fiber diet in this patient population. However, some clinicians may argue that a low fiber diet is worth trying in patients who have developed small bowel bacterial overgrowth as a consequence of intestinal strictures. See Table 5 for a summary of findings.


FRONTIERS IN ENDOSCOPY, SERIES #20

Diagnosis and Management of Barrett’s Esophagus


Marisa Belaidi, Virendra Joshi MD, AGAF Louisiana State University School of Medicine, Ochsner Clinic Foundation, New Orleans, LA

INTRODUCTION

First described by surgeon Norman Barrett in the mid-twentieth century, Barrett's Esophagus (BE) refers to the transformation of the normal stratified squamous epithelium of the distal esophagus to simple columnar epithelium, secondary to chronic injury and inflammation. Because intestinal metaplasia is a major risk factor for esophageal adenocarcinoma, the incidence of which has been steadily increasing in recent years, early detection and treatment have become a major priority in the gastrointestinal community. Historically, surveillance practice standards for patients with a history of BE followed the Seattle Protocol, a system that required physicians to obtain several random, 4-quadrant biopsies for every 1-2 cm of involved esophageal mucosa under the visualization of white-light endoscopy (WLE).1 However, a growing body of evidence suggests that, in addition to being time-consuming and inconsistent in detecting dysplasia, this method does not reliably improve the detection of cancer.2 This finding, coupled with an increased understanding of the underlying histology and pathogenesis of Barrett's Esophagus, has led to the creation of several new tools aimed at improving detection rates and yielding better patient prognoses. This review describes the most current diagnostic techniques and treatment options for Barrett's Esophagus and early esophageal cancer, their basic risks and benefits, and their clinical applicability.

DETECTION
Chromoendoscopy

In the setting of BE, chromoendoscopy refers to the topical application of dye to the esophageal mucosa in order to improve tissue visualization on endoscopy (Figure 1). Based on their staining properties, dyes are divided into one of three categories: contrast, absorptive (vital) or reactive. Methylene blue (MB, 0.5%-1.0%), a vital dye taken up by actively absorbing epithelial cells, is by far the most frequently studied stain in the assessment of BE.3-6 Though initial reports by Canto et al. yielded promising results regarding the diagnostic capabilities of MB for dysplastic tissue, subsequent studies have yielded discordant conclusions regarding its efficacy.3 Additionally, skeptics have cited a lack of technique standardization, variability in operator skill, and inconsistency of staining and interpretation as major deterrents to this diagnostic test.

In 2013, Qumseya et al.1 conducted a meta-analysis and systematic review of several "advanced imaging technologies," including chromoendoscopy and virtual chromoendoscopy, to determine the diagnostic yield of these techniques in comparison to traditional random biopsy sampling. Based on their observations, they concluded that the newer imaging modalities improved detection of dysplasia/cancer by 34% (95% CI, 20%-56%; P < .0001), which may lend some credibility to the controversial staining technique.

VIRTUAL (ELECTRONIC) CHROMOENDOSCOPY
Narrow Band Imaging (NBI)

First described by Gono et al. in 2004,6 Narrow Band Imaging (NBI) has been touted as a more practical, cost-effective and user-friendly alternative to chromoendoscopy (Figure 2). In NBI, short wavelengths of blue (440-460 nm) and green (540-560 nm) light pass through an electronic endoscopic filter to penetrate the superficial esophageal mucosa.6-9 The shallow penetration of blue and green light allows for enhanced visualization of surface pit pattern morphology. Additionally, because the peak absorption spectrum of hemoglobin lies within this narrow band, the clarity of superficial vascular patterns is markedly improved.3,6

Initial studies have yielded promising outcomes in the detection of high-grade dysplasia (Sensitivity 96%, specificity 94%),3 lending credibility to the observation that pathologic tissue in BE exhibits abnormal mucosal and/or vascular patterns. However, the procedure is not yet standardized and requires further study to determine efficacy in detecting low-grade metaplasia.

Autofluorescence Imaging (AFI)

AFI is a relatively new endoscopic procedure that utilizes autofluorescence, a technique in which esophageal tissue is exposed to short-wavelength light (usually UV or blue light), causing intracellular substances called fluorophores to emit longer-wavelength fluorescent light.3,19-22 Normal, metaplastic and dysplastic tissue types each exhibit unique autofluorescence spectra due to differing tissue architecture, chromophore content (especially hemoglobin) and fluorophore composition.19 These differences may help distinguish tissue types and aid in the detection of high-grade dysplasia and/or early esophageal cancer in the setting of BE. However, data are currently limited and the procedure has repeatedly been associated with a very high false-positive rate (as high as 50%).3

HIGH-MAGNIFICATION ENDOSCOPY
Confocal Laser Endomicroscopy (CLE)

Confocal laser endomicroscopy (CLE) is a novel endoscopic technique that can be used to identify early neoplastic changes in BE. Based on the concept of confocal microscopy, CLE employs an endoscopic or probe-based laser (argon blue, 488 nm) which focuses light on the tissue of interest. Fluorescent light from the tissue then reflects back through a pinhole aperture and is recorded by a photodetector, which relays a high-resolution image to an attached monitor for immediate observation and future retrieval.1,10-14 Available as either an endoscope-based (eCLE) or probe-based (pCLE) system, CLE provides real-time in vivo histologic imaging of esophageal mucosa at up to 1000-fold magnification. Intravenous contrast (typically 2.5-5 mL of 10% fluorescein sodium)11 is administered concurrently to improve visualization of tissue and cell structures. To ensure the highest image quality, the endoscopist must achieve adequate contact between the endoscope/probe and the mucosal surface, which can be aided by gentle suction to stabilize the connection.10

In order to classify BE, eCLE utilizes the Mainz Confocal Barrett's Classification System. Established by Kiesslich et al., the Mainz criteria rely on cell and vessel architecture to distinguish nondysplastic BE (columnar mucosa with goblet cells in a villiform pattern, normal capillaries) from neoplastic BE (irregularly shaped black cells, leaky capillaries).10,11 pCLE relies on the Miami criteria for dysplastic BE, which include irregular vessels, fusion of villi and crypts, and epithelial irregularities (thickness, inhomogeneity, dark border).

Additionally, recent studies indicate that combining eCLE with high-definition white light endoscopy may increase diagnostic yield of neoplasia versus traditional random biopsy sampling, without requiring as many unnecessary biopsies.

Optical Coherence Tomography (OCT)

Optical coherence tomography (OCT) is a well-established imaging modality that utilizes low-coherence interferometry (near-infrared light) to generate high-resolution (10-15 µm), three-dimensional cross-sectional imaging of in vivo tissue pathology.15-17 Initially utilized in ophthalmology, OCT has proved helpful in the diagnosis of various medical conditions, including specialized intestinal metaplasia in BE (sensitivity 81%). It has also been used in the detection of buried Barrett's epithelium following radiofrequency ablation.18 However, the scan does not allow simultaneous biopsy of tissue, cannot clearly distinguish low-grade from high-grade dysplasia in the context of BE, and would be a tedious and impractical method for surveying the full length of the esophagus.16


Volumetric Laser Endomicroscopy (VLE)

Volumetric laser endomicroscopy (VLE) is a next-generation technology based upon OCT. While traditional OCT relies on time domain interferometry to measure depth intensity, VLE utilizes Fourier domain interferometry (optical frequency domain imaging). This proprietary swept-source laser enables dramatically greater resolution (~7 microns) and faster acquisition times (100x faster) than conventional OCT. Its optical probe enables volumetric (circumferential and longitudinal) measurement of the entire distal esophagus.16

Suter et al.17 recently conducted a small, single-center feasibility study to assess the safety and practicality of VLE-guided biopsy in vivo, from which they determined that VLE-guided esophageal biopsy is a well-tolerated procedure that may have utility as a "first-look" procedure to mark tissue regions of interest for subsequent biopsy and therapeutic guidance.

TREATMENT

In addition to new detection methods, treatment options for Barrett's Esophagus with high-grade dysplasia (HGD) and early esophageal adenocarcinoma (EAC) are constantly evolving. Once regarded as the gold standard of treatment, esophagectomy has been largely replaced with endoscopic, organ-sparing therapies that are equally efficacious and incur significantly lower morbidity and mortality rates than traditional surgery.

RESECTION
Endoscopic Mucosal Resection (EMR) and Endoscopic Submucosal Dissection (ESD)

EMR and ESD are two types of endoscopic treatment aimed at eradicating both HGD and intramucosal EAC in the setting of Barrett's Esophagus. Due to their relative non-invasiveness, high post-procedure remission rates (up to 97%) and ability to provide a definitive histologic diagnosis, resection techniques are becoming increasingly popular therapeutic modalities.22,23 While EMR is primarily utilized in the treatment of small (<2 cm) superficial tumors, ESD is preferred for larger (>2 cm), more extensive lesions whose histologic accuracy and clinical outcome may be jeopardized by piecemeal resection. Both procedures require a high level of operator skill and can be associated with adverse events such as bleeding, stricture formation (more common in ESD) and esophageal stenosis. Resection techniques may also be combined with ablative therapies for a potentially superior response rate with fewer post-procedure complications.

TISSUE ABLATION
Radiofrequency Ablation (RFA)

Over the past 5 years RFA has become a primary therapy for high-grade dysplasia in Barrett's epithelium, replacing previously used ablative therapies including argon plasma coagulation and Nd:YAG laser. RFA refers to a process of controlled thermal injury (damage is limited to the mucosa and lamina propria) that occurs secondary to the induction of an electromagnetic field via an alternating electrical current. The exothermic reaction necessary for RFA can be generated by one of two specific devices: the HALO360 and the HALO90 (BARRX Medical, Inc, Sunnyvale, CA, USA). Shaheen et al. conducted a multicenter, sham-controlled trial assessing the efficacy of radiofrequency ablation versus a sham procedure in 127 patients with dysplastic BE. The authors concluded that RFA was associated with a significantly higher rate of complete eradication of both dysplasia (81.0%-90.5% vs. 19.0%-22.7%, P<0.001) and intestinal metaplasia (77.4% vs. 2.3%, P<0.001) compared to the control.25 Overall, RFA is generally a safe, well-tolerated procedure; side effects may include mild non-cardiac chest pain, nausea, and bleeding.22

Cryotherapy

Cryotherapy is a noncontact ablative method that delivers a cryogen (most commonly liquid nitrogen, LN2) under low-pressure spray, with a decompressive gastric tube, to the BE esophageal mucosa. The cryogen is administered in several cycles of rapid freezing (-196°C) and slow thawing. The extreme flash freezing selectively kills cells while preserving the collagen matrix, creates vascular stasis and induces an analgesic effect.26-28 In contrast to burning techniques, spray cryotherapy promotes less fibrosis and preserves underlying tissue architecture.
It is a safe, well-tolerated, non-toxic therapy that the literature indicates may serve as a complement to other established technologies for BE, including EMR, or may be of use when all other treatments are ill advised.27,28 Notably, cryotherapy should be avoided during pregnancy, in the setting of compromised or damaged tissue, and in the event of increased anatomical flow resistance (gas evacuation). Cryoablation can successfully eradicate residual Barrett's in patients with esophageal cancer after chemoradiation.28 In a retrospective review of 32 patients, Greenwald et al. found that complete eradication of HGD (CE-HGD) was 100% (32/32) and complete eradication of intestinal metaplasia (CE-IM) was 84% (27/32) at 2-year follow-up. At last follow-up (range 24-57 months), CE-HGD was 31/32 (97%) and CE-IM was 26/32 (81%). Recurrent HGD was found in 6 patients (18%), with CE-HGD in 5 after repeat treatment. One patient progressed to adenocarcinoma, downgraded to HGD after repeat cryotherapy. A BE segment length ≥3 cm was associated with a higher recurrence of IM (P = .004; odds ratio 22.6) but not of HGD. No serious adverse events occurred. Stricture was seen in 3 patients (9%), all successfully dilated.34 A prospective study of patients undergoing cryoablation suggested that recurrent disease commonly involves the area just below the neosquamocolumnar junction (NSCJ); surveillance endoscopies should include this area to accurately identify patients with disease recurrence.33 In addition, cryotherapy appears to be a promising new strategy for salvage therapy in patients who "fail" thermal therapy with RFA.

Photodynamic Therapy (PDT)

PDT is characterized by the activation of a chemical photosensitizer present in neoplastic tissue (most commonly porfimer sodium) by an endoscopic laser.
The activated photosensitizer then reacts with oxygen, producing free radicals that induce cell damage and eventual apoptosis.22 PDT has often been used as an adjunct to proton pump inhibitors (PPIs) because the combination has shown superior eradication of BE with HGD and prevention of disease progression compared with either treatment alone. However, remission rates are significantly lower than with esophageal resection techniques, and side effects may include photosensitivity, fever, dysphagia, and recurrence and progression of disease.30 One of the major side effects seen in a multicenter study by Overholt et al.30 was stricture formation, which correlated with the number of applications and overlap of treatment. Overall, 36% of patients developed strictures, which were managed successfully with dilations; 12% of patients developed strictures after one PDT treatment, as opposed to 32% after two treatments and 9% after a third. Additionally, the authors reported no improvement in the rate of stricture formation when oral steroids were administered after PDT.32 Careful patient education is critical for the management of side effects and to reduce the risk of photosensitivity reactions.31

CONCLUSION
There has been a paradigm shift in the detection and management of Barrett's esophagus over the past decade, with the use of advanced imaging for the detection and management of dysplastic Barrett's and early esophageal cancer. Novel advanced imaging modalities such as Narrow Band Imaging (NBI), Confocal Laser Endomicroscopy (CLE) and the more recently developed volumetric laser endomicroscopy (VLE) will continue to impact early detection by serving as "red flag technologies," helping to target biopsies and decrease sampling errors. Dysplastic Barrett's and early cancer can now be treated with ablative therapies (thermal, non-thermal) and endoscopic resection (EMR, ESD) with minimal morbidity. These minimally invasive diagnostic and therapeutic strategies should be tailored to individual patient needs.


INFLAMMATORY BOWEL DISEASE: A PRACTICAL APPROACH, SERIES #95

The Gut Microbiome – Clinical Implications for the Practicing Gastroenterologist


These are exciting times in microbiome research. In this article, we discuss a field that simply did not exist a few years ago and has exponentially expanded to become one of the hottest in all of biomedicine. As techniques develop, become more rapid and less costly, the delineation of the true extent of the role of our bacterial fellow travelers in health will soon be realized.

Sara Iqbal MD,1 Eamonn M.M. Quigley MD FRCP FACP FACG FRCPI2 1Department of Medicine 2Division of Gastroenterology and Hepatology, Houston Methodist Hospital and Weill Cornell Medical College, Houston, TX

The Human Gut Microbiome: The Basics

In recent years biomedical research has witnessed a paradigm shift away from an exclusive focus on the human genome and its functions and towards a greater understanding of our fellow travelers: the microorganisms that live within and on our bodies. Formerly studied exclusively in terms of their pathogenic and disease-promoting potential, bacteria, viruses, archaea, fungi and other microorganisms are now being examined in a completely different light – as commensals critical for the homeostasis of the host. Such studies have not only opened a new platform for research into disease pathophysiology, but also revealed the potential for developing new management strategies for several disease states and syndromes. Much of the progress in this field can be attributed to major and ever evolving developments in technology, which now permit the rapid and complete identification of all of the bacterial inhabitants of a given locus.1 With these technologies comes new terminology, a terminology that will be new to many and confusing to some. To facilitate the reader’s access to the literature on this field a list of the more commonly used terms and their definition is provided in Table 1. You will notice one striking omission from this list: “flora”. This term, which dates from the time when bacteria were included in the plant kingdom, has now been largely abandoned and replaced by “microbiota”.

The results of the human genome project came as a surprise to researchers with the discovery of only 20,000-25,000 genes, about one fifth of what was expected.2 So, to look for the missing pieces of the puzzle, other sources of genetic information were explored, giving birth to the concept of the microbiome and ultimately to the human microbiome project.3 It turns out that microbes are not "mere bugs in our system" but in fact play a very important symbiotic role. Insights into the function of these organisms have been provided, in the first instance, by interrogation of their genome through metagenomics and, thereby, by the identification of genes linked to certain biological functions. Correlations with function have been taken a step further through the application of metabolomics and other techniques that identify the products of bacterial synthetic and metabolic processes.4,5

While the microbiome of each individual is quite distinct at the level of individual bacterial strains, data from a European consortium indicated that at a higher level of organization, some general patterns can be identified across populations.6 They identified three broad groupings driven by the predominance of certain species: Prevotella, Bacteroides and Ruminococcus. Enterotype prevalence seemed independent of age, body mass index or geographic location but may be driven by differing dietary habits. Indeed, the importance of diet in shaping, both in the short- and in the long-term, the composition of the microbiome is now a subject of active study and the impact of dietary changes on the microbiome may well have been underappreciated in former studies.7

The Microbiome in Health

At birth the intestinal tract is sterile. The infant’s gut is first colonized by maternal and environmental bacteria during birth and continues to be populated through feeding and other contacts.1,8 The mode of delivery (vaginal birth vs. caesarean section), diet (breast milk vs. formula), level of sanitation and exposure to antibiotics all influence the development of the infant’s microbiome.8-11 By 2 to 3 years of age, the child’s microbiota fully resembles that of an adult in terms of composition.1,12,13

Thereafter the microbiota is thought to remain relatively stable until old age when changes are seen possibly related to alterations in digestive physiology and diet.13-15 It needs to be emphasized that there are relatively few longitudinal studies.

What Regulates the Microbiota?

Because of the normal motility of the intestine (peristalsis and the migrating motor complex) and the antimicrobial effects of gastric acid, bile and pancreatic and intestinal secretions, the stomach and proximal small intestine, though certainly not sterile, contain relatively small numbers of bacteria in healthy subjects.1,16 The microbiology of the terminal ileum represents a transition zone between the jejunum, containing predominantly aerobic species, and the dense population of anaerobes found in the colon. Bacterial colony counts may be as high as 10⁹ colony-forming units (CFU)/mL in the terminal ileum immediately proximal to the ileocecal valve, with a predominance of gram-negative organisms and anaerobes. On crossing into the colon, the concentration and variety of the enteric microbiota change dramatically. Concentrations of 10¹² CFU/mL or higher may be found, comprised mainly of anaerobes such as Bacteroides, Porphyromonas, Bifidobacterium, Lactobacillus and Clostridium, with anaerobic bacteria outnumbering aerobic bacteria by a factor of 100-1000:1. The predominance of anaerobes in the colon reflects the fact that oxygen concentrations in the colon are very low; the microbiota has simply adapted to survive in this hostile environment.

Though most studies of the human gut microbiota have been based on analyses of fecal samples it must be pointed out that at any point along the gut differences are also evident between bacterial populations resident in the lumen and those adherent to the mucosal surface. These mucosa-associated bacterial species and strains will not be accurately represented in fecal samples, a major limitation of this approach. It stands to reason that bacterial species resident at the mucosal surface, or within the mucus layer, are those most likely to participate in interactions with the host immune system whereas those that populate the lumen may be more relevant to metabolic interactions with food or the products of digestion.

Antibiotics, whether prescribed or encountered in the food chain, have the potential to profoundly impact the microbiota.16 In the past, it was believed that these effects were relatively transient, with complete recovery of the microbiota occurring very soon after the course of antibiotic therapy was complete. However, while recent studies have confirmed that recovery is relatively rapid for many species, some species and strains show more sustained effects.17 Furthermore, antibiotic exposure and related disruptions of the microbiome may be especially critical in infancy as the microbiome develops.

The Functions of the Microbiome

It is now abundantly evident that an intact microbiome is essential for many aspects of the development of the gastrointestinal tract including such vital components as the mucosa-associated immune system, immunological tolerance, epithelial and barrier function, motility and vascularity. The resident commensal microbiota continues to contribute to such homeostatic functions during life as pathogen exclusion, immunomodulation, upregulation of cytoprotective genes, prevention and regulation of apoptosis and maintenance of barrier function.18

The sophistication of the relationship between the microbiota and its host is elegantly illustrated by the manner in which the immune system of the gut differentiates between friend and foe when it encounters bacteria.19 At the epithelial level, for example, a number of factors may allow the epithelium to "tolerate" commensal (and thus probiotic) organisms. These include the masking or modification of microbe-associated molecular patterns that are usually recognized by pattern recognition receptors (PRRs), such as Toll-like receptors (TLRs),20 and the inhibition of the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) inflammatory pathway.21 Responses to commensals and pathogens may also be distinctly different within the mucosal and systemic immune systems. For example, commensals such as Bifidobacterium infantis and Faecalibacterium prausnitzii have been shown to differentially induce regulatory T cells (Tregs) and result in the production of the anti-inflammatory cytokine IL-10.22 Other commensals may promote the development of T helper cells, including TH17 cells, and result in a controlled inflammatory response which is protective against pathogens in part, at least, through the production of IL-17.23 The induction of a low-grade inflammatory response ("physiological" inflammation) by commensals could be seen to "prime" the host's immune system to deal more aggressively with the arrival of a pathogen.24 It is also now evident that host-microbe immune interactions are bidirectional; innate immune responses can shape the microbial ecology of the gut and this, in turn, can influence the development of disease susceptibility in the host.

Some of the metabolic functions of the microbiome have been known for years: the ability of bacterial disaccharidases to salvage unabsorbed dietary sugars, such as lactose, and alcohols and convert them into short-chain fatty acids (SCFAs), the synthesis of nutrients and vitamins, such as folate and vitamin K, the deconjugation of bile salts25 and metabolism of certain drugs (e.g. sulfasalazine). Now a fuller picture of the metabolic potential of the microbiome is being revealed and includes the production of other chemicals, including neurotransmitters and neuromodulators, which can modify other gut functions, such as motility or sensation, or even influence the development26 and function27 of the central nervous system, thereby leading to the concept of the microbiota-gut-brain axis.28-30

The Gut Microbiota and Disease

The idea that the bacterial contents of the gastrointestinal tract could contribute to symptoms and disease is not a new one; the role of enteric bacteria in hepatic encephalopathy was described over 50 years ago and several other human ailments have been clearly defined as originating from a disturbed microbiome and/or how it interacts with the host. Well accepted examples are listed on Table 2. The availability of high-throughput sequencing techniques, as well as exciting data from animal experiments, has spurred a host of studies of the microbiome in almost every known gastrointestinal, liver and pancreaticobiliary disease. Based on such studies, a role for the microbiome and/or host-microbiome interactions has been proposed for a long list of diseases and syndromes, some of which are listed on Table 3. While, in some instances, such as inflammatory bowel disease (IBD), there is compelling evidence for a role for microbe-host interactions in disease pathogenesis, in others, this remains more speculative. It must be emphasized that, for most of these disorders, available data describe a mere association and no conclusions can be drawn with respect to causation.

With respect to disease causation, the period of maturation of the microbiota may be critical; there is accumulating evidence from a number of sources that disruption of the microbiota in early infancy may be a critical determinant of disease expression in later life. It follows that interventions directed at the microbiota later in life may, quite literally, be too late and are, potentially, doomed to failure.

Helicobacter pylori

Helicobacter pylori, one of the most studied of all bacteria, provides a striking illustration of host-microbe interactions: the disease phenotype resulting from infection with this fascinating organism reflects complex interactions between bacterial properties, host factors and other environmental influences, including the resident gastric microbiome. For example, certain Bifidobacterium strains display anti-Helicobacter effects through the production of antimicrobial peptides.31

Diarrheal Illness


Infectious diarrheas, still a major cause of morbidity and mortality worldwide, represent an overwhelming assault on the commensal microbiome and the host. Pathogens have evolved a number of strategies to survive in the gut and evade immunological and physiological responses by the host. Here again, microbe-host responses play a critical role; some bacteria take advantage of the host's inflammatory response to their presence to create a favorable environment that allows them to outgrow resident microbes. For example, gastroenteritis due to Salmonella enterica serovar Typhimurium has been well studied in terms of the genetic adaptations of the pathogen and the role of the host immune system in determining disease outcome.

Antibiotic-associated diarrhea and its most concerning manifestation, Clostridium difficile-associated disease (CDAD), is a potent reminder of what can happen when we disrupt the normal microbiome, albeit with good intentions. Some individuals seem especially susceptible to the development of CDAD when administered broad-spectrum antibiotics and it has been shown that some of this susceptibility may reside in the composition of the pre-exposure microbiota.32 Evidence suggests that the predilection to C. difficile illness is largely a function of how resilient the indigenous microbiota is following an antibiotic assault, with some bacterial communities being better able to recover than others. The management of CDAD is now complicated by the emergence of hypervirulent strains and an ever-increasing rate of recurrence following initial treatment with metronidazole or vancomycin. Recurrence rates of 25 percent or more are now commonly reported. The role of an indigenous healthy microbiome is perhaps most dramatically illustrated by the overwhelming success of fecal microbiota transplantation (FMT) in the management of recurrent CDAD.

Irritable Bowel Syndrome (IBS)

Several strands of evidence suggest a role for the gut microbiota in IBS.33 First and foremost among these is the clinical observation that individuals can develop IBS de novo following exposure to enteric infections and infestations, post-infectious IBS (PI-IBS).34 More contentious has been the suggestion that IBS subjects may harbor small intestinal bacterial overgrowth (SIBO).35 More indirect evidence for a role for the microbiota can be gleaned from some of the metabolic functions of components of the microbiota. Thus, changes in bile salt deconjugation could, given the effects of bile salts on colonic secretion, lead to changes in stool volume and consistency. Similarly, changes in bacterial fermentation could result in alterations in gas volume and/or composition. Further evidence comes from the clinical impact of therapeutic interventions, such as antibiotics, prebiotics or probiotics, which can alter or modify the microbiota. Sequencing studies have shown that IBS patients, regardless of subtype, do exhibit a fecal microbiota that is clearly different from that of control subjects.36 Such studies have demonstrated reduced microbial diversity in IBS37 and the existence of different IBS subgroups38 defined by the relative proportion of the two major phyla, Firmicutes and Bacteroidetes, as well as significant changes at species and strain level.38,39 The primacy of these microbial shifts and their potential to disturb mucosal or myoneural function in the gut wall, impact on the brain-gut axis, or induce local or systemic immune responses remains to be defined.

Obesity, the Metabolic Syndrome and Related Disorders

A considerable body of basic research suggests an important role for the microbiota in the development of obesity and related disorders, such as the metabolic syndrome.40,41 Qualitative changes in the gut microbiota have also been identified in man but findings have been less clear-cut. Nevertheless, a microbial signature predictive of the development of type II diabetes has also been identified and FMT was shown to restore insulin sensitivity in a small study among individuals with the metabolic syndrome.42 Fundamental to all theories of the role of the microbiota in these disorders is the concept that a shift in the composition of the microbiota towards a population of bacteria that are more avid extractors of absorbable nutrients results in greater availability of these nutrients for assimilation by the host, thereby contributing to obesity.40

Colorectal Cancer

Recent studies have identified specific signatures in the gut microbiome associated with colorectal cancer (CRC) and suggested that the microbiome may serve as a valuable screening tool; the efficacy of this approach in clinical practice has yet to be demonstrated. While microbiome-based analyses, on their own, can detect precancerous and cancerous lesions, combining such data with body mass index, a known clinical risk factor for CRC, and occult blood testing provided excellent discrimination between healthy individuals and those with malignant and premalignant lesions.40

While a disturbed microbiota has been linked with CRC, defining a causal link has proven more problematic. In recent years, research has focused on identifying bacterial species or strains that are particularly linked with CRC.44 Two bacteria in particular, Fusobacterium nucleatum and Escherichia coli, have been consistently associated with CRC. Proposed pathways to cancer formation related to bacteria have included bacteria-induced chronic inflammation leading to cell proliferation or the direct effects of bacterial virulence factors inducing tumor formation.45

Inflammatory Bowel Disease (IBD)

A considerable body of experimental and clinical evidence indicates that the microbiota and microbiota-host interactions are critical to the pathogenesis of IBD.45 Defining the precise nature of the fundamental pathophysiology has proven more challenging; is it an abnormal microbiota, an abnormal host immune response to a normal microbiota or some combination of these factors? There is some evidence for the presence of a disturbed microbiota in IBD but results are not consistent. For example, some studies demonstrated that patients with Crohn’s disease (CD; either colonic or ileal) exhibited microbiota profiles distinctly different from those of healthy controls or patients with ulcerative colitis (UC). Furthermore, the fecal microbiota in patients with ileal CD differed from that in patients with predominantly colonic disease.47 In contrast, data from a twin study suggested that the microbiome was abnormal in UC also.48 Several factors complicate efforts to sort out the role of the microbiome in IBD: the heterogeneity of the disease population, diet, medications and disease activity. For example, it is entirely plausible that changes in the microbiome seen in IBD could reflect the consequences of inflammation and have nothing to do with causation. Longitudinal studies of the gut microbiota throughout the course of the disease are needed.

Liver Disease and its Complications

That the microbiota-gut-liver axis plays an important role in the occurrence of infectious and noninfectious complications of liver disease is well established. More recent is the proposal that the microbiota could be involved in the pathogenesis of liver diseases, such as nonalcoholic fatty liver disease (NAFLD).49 From a considerable body of experimental and some clinical data, some common themes have emerged. Thus, a disturbed microbiota (small intestinal bacterial overgrowth and/or qualitative changes in the microbiota), impaired gut barrier function and the host immune response have been shown to conspire to impact on liver metabolism (contributing to lipogenesis, for example), promote inflammation and even contribute to the progression to fibrosis, cirrhosis and hepatocellular carcinoma. The microbiota has also been implicated in alcoholic liver disease. Alcohol impairs the host immune response50 and its metabolites can conspire with lipopolysaccharide (LPS) produced by Gram-negative bacteria to induce liver injury.51 The microbiota also contributes to alcohol-related liver injury by promoting the growth of endotoxin-producing Gram-negative bacteria in the gut and increasing intestinal permeability.

The role of antibiotic therapy is well established in the prevention and management of hepatic encephalopathy and infectious complications of liver disease.52 Now microbiota-modulating strategies are being explored in the management of liver disease per se. For example, the probiotic organism Lactobacillus rhamnosus has been shown to promote gut homeostasis by modulating the growth of Gram-negative bacteria53 and restoring intestinal barrier integrity; as a consequence, liver fat content and circulating levels of pro-inflammatory cytokines are reduced.54,55

Therapeutic Modulation of the Microbiome

While food is undoubtedly the primary modulator of the microbiome, it is not the only one. Specifically, extensive antibiotic use in modern animal husbandry exerts a selective pressure for antibiotic resistance that eventually spreads to the human microbiome. Because of the rapid and efficient transfer of resistance genes from one bacterium to another, even nonpathogenic (so-called commensal) bacteria can carry and express resistance genes.

Probiotics and prebiotics aim to confer a health benefit by modulating the microbiome. Prebiotics selectively stimulate the growth and/or activity of bacteria that contribute to colonic and host health.56 Probiotics may provide benefits through the multiple aforementioned mechanisms whereby the normal commensal microbiota interacts with the host. While the traditional concept of probiotics is based on the functions of live organisms, it is evident that dead bacteria, bacterial components or bacterial metabolites are biologically active. For example, probiotics have the potential to either stimulate or suppress host immunity via microbe-derived immunomodulatory molecules.57 A complete discussion of the use of probiotics in man is beyond the scope of this review. Suffice it to say that, given the current regulatory climate, major quality control issues surround the probiotic market. At the very least, a probiotic should be characterized at the genome level and should have been demonstrated to survive passage through the digestive tract to its desired site of action. Furthermore, clinical claims should be supported by high-quality clinical trial data. Although there is no such thing as zero risk, probiotics are generally regarded as safe and truly probiotic-related adverse events in healthy individuals and those seen in an ambulatory care setting have been vanishingly rare.58

CONCLUSION


These are exciting times in microbiome research. A field that simply did not exist a few years ago has expanded exponentially to become one of the hottest in all of biomedicine. As techniques develop and become more rapid and less costly, the true extent of the role of our bacterial fellow travelers in health will soon be delineated. In terms of disease states, while many tantalizing associations have been described, defining causation will take some time given the heterogeneity of many disease populations, the dynamics of the microbiota over time, the bidirectional nature of interactions between the host and the microbiome and the impact of so many confounding factors. There is much to be done.


LIVER DISORDERS, SERIES #3

Hepatocellular Carcinoma


There are many factors that determine the appropriate treatment for patients with HCC including underlying liver disease, size, number of lesions, vascular involvement and extravascular disease. Further studies are needed to help refine the best treatment options in patients with early-, intermediate-, and late-stage disease who are neither transplant candidates nor candidates for resection. With current screening guidelines as well as improved and well-tolerated treatments for hepatitis C now available, there is hope that both the incidence and mortality from HCC will decrease.

Nitin Sardana MD, Elie Aoun MD, Division of Gastroenterology and Hepatology, Allegheny Health Network, Allegheny General Hospital, Pittsburgh, PA

Hepatocellular carcinoma (HCC) is the second most common cause of cancer-related death worldwide. HCC typically arises in the setting of cirrhosis, with viral hepatitis being a common underlying risk factor. While direct-acting antivirals may reduce the development of hepatitis C-related HCC in coming years, the increasing incidence of nonalcoholic fatty liver disease (NAFLD) makes the trend in HCC difficult to predict. Surveillance for HCC should be performed in high-risk patients. The diagnosis of HCC can be made radiographically in the majority of cases. There are many factors that determine the appropriate treatment for patients with HCC including underlying liver disease, size, number of lesions, vascular involvement and extravascular disease. The treatment of early-stage and advanced disease is fairly clear. Further studies are needed to help refine the best treatment options in patients with early-, intermediate-, and late-stage disease who are neither transplant candidates nor candidates for resection.

INTRODUCTION

Primary liver cancer is the fifth most common cancer in men and ninth most common in women, accounting for 554,000 and 228,000 cases, respectively.1 Hepatocellular carcinoma (HCC) accounts for over 90% of all primary liver cancers. In most areas, the incidence of primary liver cancer is therefore a close approximation of the incidence of hepatocellular carcinoma.2 While the highest rates of liver cancer are in Africa and Asia, the incidence in the United States is also alarming.1 There will be an estimated 33,190 new cases of liver cancer diagnosed in the United States this year alone with approximately 23,000 deaths.3 The incidence of HCC in the United States varies with age, gender and race. The highest incidence is in those older than 65 years, males, Asians and Pacific Islanders.4 The median age at diagnosis is 63 years.3 Historically, the incidence of HCC has continued to climb through the years as has the mortality. Although the mortality rates continued to increase in the United States between 2007 and 2010, there was no significant increase in the incidence of HCC.4 With current screening guidelines as well as improved and well-tolerated treatments for hepatitis C now available, there is hope that both the incidence and mortality from HCC will decrease.

Risk Factors

HCC usually develops in the background of underlying cirrhosis.5 The risk of developing HCC in patients with cirrhosis varies depending on the etiology of the cirrhosis. These etiologies include hepatitis C, hepatitis B, hereditary hemochromatosis, Wilson’s disease, autoimmune hepatitis, alpha-1 antitrypsin deficiency, primary biliary cirrhosis (PBC), alcohol and non-alcoholic steatohepatitis-induced cirrhosis. Furthermore, risk factors specific to the etiologies can increase the incidence of HCC even higher.

Overall, the 5-year cumulative risk of developing HCC ranges from 4% in patients with primary biliary cirrhosis to 30% in those with cirrhosis from chronic hepatitis C infection.6 The stage of cirrhosis also correlates with the risk of developing HCC.6 There is an estimated 4-fold increased risk of developing HCC in those with cirrhosis as compared to those with chronic hepatitis.6 Approximately 80% of HCCs are attributable to chronic viral hepatitis.7 Even in those patients with HCC in whom cirrhosis is not present, there are typically histologic changes consistent with underlying liver disease including steatosis, varying degrees of fibrosis, dysplasia or iron overload.8

Patients with hepatitis C-induced cirrhosis have the highest 5-year cumulative risk of developing HCC.6 This incidence also varies depending on geography, where the 5-year cumulative risk is 30% in Japan and 17% in the United States and Europe.6 Among those with hepatitis C-induced cirrhosis, there is a subset of patients at higher risk than others. Patients with hepatitis C-induced cirrhosis who are older at diagnosis or at time of infection, who are male, or who have elevated bilirubin, decreased platelets, esophageal varices, or physical examination findings of palmar erythema or spider angiomata have an increased risk of developing HCC.9,10 There is also an increased incidence of HCC in patients with hepatitis C-induced cirrhosis with comorbid conditions of porphyria cutanea tarda, hepatic steatosis, hepatitis B and alcohol use (>60g/day).11-14 Patients who are co-infected with hepatitis C and HIV tend to be diagnosed with HCC at an earlier age and sooner after their diagnosis of hepatitis C than those infected with HCV alone.15 There does not appear to be a significant difference in the incidence of HCC based on the genotype or viral load of hepatitis C.16

Similar to patients with hepatitis C, those with hepatitis B-induced cirrhosis are at increased risk for HCC, and there is geographic disparity. The 5-year cumulative incidence of HCC among patients with hepatitis B in East Asia is 15% compared to 10% in Europe. Older age, degree of thrombocytopenia and liver firmness on physical examination are associated with an increased risk of developing HCC.17 The incidence of HCC is also highest among those with hepatitis B-induced cirrhosis, less with chronic hepatitis B and is least common in inactive carriers.6 Patients with occult hepatitis B (presence of HBV DNA who are hepatitis B surface antigen-negative) are also at increased risk of developing HCC.18 While the risk associated with having a high HBV DNA or a hepatitis B e-antigen at time of diagnosis is unclear, the risk of developing HCC is lower for patients who clear hepatitis B surface antigen either spontaneously or with treatment.6,19 Co-infection of hepatitis B with hepatitis D increases the risk for developing HCC threefold.20 Aflatoxin exposure significantly increases the risk of patients with hepatitis B developing HCC.21 Similar to patients with hepatitis C, there is an increased risk of HCC in patients with hepatitis B who consume alcohol.14 Patients with alcohol-induced cirrhosis, even in the absence of chronic viral hepatitis, are also at increased risk of developing HCC with a 5-year cumulative incidence of 8%.6 Alcohol may also have a direct carcinogenic effect on the liver and may lead to HCC even in the absence of cirrhosis.14

Patients with cirrhosis related to etiologies other than chronic viral hepatitis are also at increased risk for HCC. The degree of their risk for some of these etiologies, however, is not as well defined. The risk of developing HCC in the setting of cryptogenic cirrhosis has been reported by Marrero et al to be as high as 29%.22 Among these patients, however, a significant proportion may have underlying non-alcoholic fatty liver disease (NAFLD).22 In their study, NAFLD accounted for up to 13% of the patients with HCC.22 While this incidence may be an overestimate, with an estimated prevalence of 30-40% in the United States and 6-35% worldwide, NAFLD is an important risk factor in the development of HCC.23 There are data to suggest that the duration of cirrhosis may be longer in patients with NAFLD-related cirrhosis.24,25 There also appears to be an association between obesity and the risk of development of and death from HCC.26,27 The risk of developing HCC is twice as high among patients with diabetes.28

Even among metabolic liver diseases, there is a wide range in the risk of HCC. While the risk of developing HCC among patients with Wilson’s disease is extremely low, the 5-year cumulative incidence among patients with hereditary hemochromatosis is 21%.29 The risk of developing HCC in patients with alpha-1 antitrypsin deficiency only increases once they develop cirrhosis.30 Similarly, patients with primary biliary cirrhosis are at increased risk with advanced fibrosis.31 HCC is seen predominantly in men with PBC, and the overall 5-year cumulative incidence is 4%.31 While congestive hepatic fibrosis from cardiac disease is not thought to be a typical risk factor in the development of HCC, this has been reported in the literature.32 Coffee consumption is one well-studied association that actually decreases the risk of HCC.33 Recognizing the risk factors for the development of HCC is paramount in the surveillance, prevention and early recognition of the disease for improved outcomes.

Surveillance

The decrease in mortality with surveillance, the noninvasive means of testing and the difference between early and late detection are key factors as to why surveillance for HCC is recommended in high-risk patients by the American Association for the Study of Liver Diseases (AASLD).34 It has been reported that biannual alpha fetoprotein (AFP) and ultrasound imaging decreases the mortality from HCC by 37% in patients with past or present hepatitis B infection.35 The benefit of surveillance is also supported by data that show the poor prognosis in patients in whom the diagnosis is made only after they are symptomatic.36

The two most well-studied serologic markers for detection of HCC are AFP and descarboxyprothrombin (DCP), also known as prothrombin induced by vitamin K absence II (PIVKA II). The difficulty in supporting AFP as a screening test is that, depending on the cut-off level, the sensitivity or specificity may be suboptimal. A case-control study evaluating its performance in diagnosing HCC showed that, at a cut-off value of 20 ng/mL, its sensitivity was approximately 60% and its positive predictive value was only 25.1% at a 5% tumor prevalence.37 Raising the cut-off decreases the sensitivity even further. There had also been some hope that PIVKA II could be an adequate serologic test used in the surveillance of HCC. However, data show that while it may be a useful diagnostic test, its role in surveillance is limited.38 The sensitivity of PIVKA II was 74% at a cutoff of 40 mAU/mL.39 While the use of the combination of AFP and PIVKA II increases the sensitivity in the diagnosis of HCC, the specificity is still only 74%.39 Ultimately, the use of AFP, PIVKA II or the combination of the two is inadequate to recommend universally in the surveillance of HCC.39 Groups continue to evaluate these markers in specific subsets of patients. For example, a recent study showed that a combination of PIVKA II and AFP may aid in the early detection of HCC in patients with hepatitis B.40 If further studies support this, guidelines may support its use. In the future, there may also be a role of various novel biomarkers to measure response to therapies.41
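The interplay of sensitivity, specificity and prevalence described above follows directly from Bayes' theorem. The sketch below recomputes a positive predictive value in the same ballpark as the 25.1% reported for AFP; the ~91% specificity used here is an assumed value chosen for illustration, not a figure quoted in the study.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem:
    PPV = true positives / (true positives + false positives)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# AFP at a 20 ng/mL cut-off: ~60% sensitivity and 5% tumor prevalence
# (from the study); ~91% specificity is an assumption for illustration.
print(f"PPV = {ppv(0.60, 0.91, 0.05):.1%}")
```

This illustrates why a test with seemingly reasonable operating characteristics yields a low PPV at the low disease prevalence typical of a surveillance population: most positives are false positives.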

In addition to serologic tests, radiologic testing has also been studied for surveillance of HCC. Ultrasound is the diagnostic test of choice and is recommended by the AASLD in the surveillance for HCC.34 Advantages of ultrasound include the lack of radiation associated with its use, the ease of accessibility and its relatively low cost compared to other imaging modalities. However, the disadvantages of ultrasound testing are that it is operator dependent, a likely decreased sensitivity in obese patients and the overlap of ultrasonographic appearance of other lesions in the background of cirrhosis.42 Thus, ultrasound sensitivity has been reported to be between 65% and 80%.43 Computed tomography (CT) and magnetic resonance imaging (MRI) are not appropriate imaging modalities for surveillance because of the radiation exposure, the risks of contrast or gadolinium administration as well as the cost. Imaging with CT may be considered in obese patients in whom ultrasound is non-diagnostic.

Ideally, surveillance could be performed with a combination of serologic and radiologic tests. Unfortunately, the data do not support this. In a study of more than 9,000 patients in China, patients with hepatitis B underwent surveillance with ultrasound and AFP.44 The false positive rate was 7.5% when combining the two modalities. Ultimately, despite its limitations, ultrasound still has a high specificity and thus is a more appropriate test than the current serologic markers in the surveillance for HCC.45
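The reason combining tests inflates the false positive rate can be made explicit. Under an "either test positive" rule, sensitivity rises but specificity falls, since a false positive on either test counts. A minimal sketch of the standard parallel-testing formulas, using assumed illustrative operating characteristics rather than the study's measured ones:

```python
def parallel_or(sens1, spec1, sens2, spec2):
    """Combine two independent tests with an 'either positive' rule.
    Sensitivity: positive if either detects disease.
    Specificity: negative only if both are negative."""
    sens = 1 - (1 - sens1) * (1 - sens2)
    spec = spec1 * spec2
    return sens, spec

# Assumed illustrative values, not the study's figures:
# ultrasound ~70% sens / 95% spec, AFP ~60% sens / 92% spec.
sens, spec = parallel_or(0.70, 0.95, 0.60, 0.92)
print(f"combined sensitivity {sens:.2f}, false-positive rate {1 - spec:.3f}")
```

Even with individually specific tests, the combined false-positive rate is the sum of what each test contributes, which is consistent with the elevated rate reported when ultrasound and AFP were used together.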

The AASLD recommends surveillance for HCC in high-risk patients every six months.34 This recommendation is based on a study that showed a survival benefit of semiannual surveillance compared to annual surveillance in patients with hepatitis B.46 In an attempt to simplify guidelines, the AASLD generalized these findings to all high-risk patients.34 A more recent study, supporting the AASLD guidelines, showed that smaller, less advanced tumors were detected and patients had longer survival when surveillance was performed every six months as opposed to every twelve months.47

Surveillance for HCC is recommended for patients at increased risk. Based on cost-effectiveness models, the AASLD considers it cost-effective to perform surveillance in patients with hepatitis B if the expected HCC risk is greater than 0.2% per year, or in patients with hepatitis C if the risk of developing HCC is greater than 1.5% per year,34 the difference likely owing to the varying prevalence between the two etiologies. As previously mentioned, the incidence of HCC in etiologies of cirrhosis other than chronic viral hepatitis has not been clearly defined. While the guidelines do not make clear recommendations for many of these populations, it is reasonable to perform surveillance for HCC in any patient with cirrhosis until further data suggest otherwise.

There is no role for surveillance of HCC in patients with hepatitis C who do not have cirrhosis. There was a large prospective study of 12,000 men in Taiwan that showed a significant increase in the risk of developing HCC in patients with hepatitis C, although the data should be interpreted with caution as the study included both cirrhotics and non-cirrhotics.48 Another prospective study of approximately 1,000 patients estimated the risk of developing HCC in patients with hepatitis C who are not cirrhotic to be 0.8% per year.49 The AASLD deemed it cost-effective to screen for HCC in patients with hepatitis C without cirrhosis only if the annual incidence was >1.5% per year. Thus, at this time, the evidence does not support surveillance for HCC in patients with hepatitis C who are not cirrhotic.

There are data, however, to support surveillance of HCC in certain subsets of patients with hepatitis B. Cost-effectiveness analysis favors surveillance in patients with hepatitis B whose risk of developing HCC is >0.2% per year.34 Some of the risk factors for HCC in patients with hepatitis B have been described above. The presence or absence of these risk factors aids in risk stratification. As in patients with hepatitis C, patients with hepatitis B with cirrhosis are at the highest risk of developing HCC and should undergo surveillance. The AASLD also recommends surveillance for adult Caucasian patients with active hepatitis B without cirrhosis, Asian male hepatitis B carriers older than 40 years, Asian females older than 50 years, hepatitis B carriers with a family history of HCC and African/North American blacks with hepatitis B.34 Co-infection with hepatitis C increases the risk for HCC, though there are no guidelines regarding surveillance in this population.

Diagnosis

Unlike many other solid organ tumors, there are many situations in which HCC can be diagnosed with a high degree of accuracy based on imaging alone.50 Specifically, four-phase multidetector CT (unenhanced, arterial, venous and delayed phases) or dynamic contrast-enhanced MRI is used to diagnose HCC. Whether or not imaging alone is sufficient to diagnose HCC is also based on the size of the lesion, as the sensitivity and specificity of these modalities increase with increasing size of the tumor.51 Biopsy is generally avoided if possible, as a meta-analysis estimates the incidence of needle tract tumor seeding to be 2.7%.52

If there is concordance between contrast-enhanced ultrasound and MRI, the diagnosis of HCC can be made even if the lesion is less than 2 cm in patients with cirrhosis.53 However, the use of contrast agents for contrast-enhanced ultrasound has not been FDA approved in the United States. A single contrast-enhanced study, however, appears to be sufficient to diagnose HCC in patients with cirrhosis who were found to have nodules between 1-2 cm on surveillance.54 Thus, the AASLD does not recommend any further diagnostic testing in nodules >1 cm detected in patients at risk for HCC if there is arterial hypervascularity and venous or delayed phase washout on contrast-enhanced imaging.34 (Figures 1 and 2) If there are atypical features on imaging, then either a different contrast-enhanced study or liver biopsy is recommended.34 For lesions less than 1 cm, the AASLD recommends serial imaging with ultrasound every three months as these lesions will likely be cirrhotic nodules.34

There are limited data that show that an AFP level of >200 ng/mL in non-African American patients with hepatitis C-related cirrhosis and a hepatic mass may be diagnostic of HCC.55 Previous data supported using this level as a cutoff to aid in the diagnosis of HCC in conjunction with imaging.56,57 Current data, however, suggest that the use of AFP does not provide additional benefit to imaging.34,39,43

Staging

Once the diagnosis has been established, the next step is to determine the stage of HCC as treatment options vary depending on the stage of disease. There are multiple staging systems used for HCC, including the American Joint Committee on Cancer (AJCC) TNM system (last revised in 2010), Okuda system, Cancer of the Liver Italian Program (CLIP) score and the Barcelona Clinic Liver Cancer (BCLC) staging classification. Each of these scoring systems has its strengths and limitations.

The Okuda system, developed in 1985, includes tumor size, ascites, bilirubin and albumin to stratify patients into three stages (Table 1).58 As this system does not include important factors that would alter treatment, such as the presence of metastases or vascular involvement, it should not be used to make treatment decisions. It can, however, provide prognostic information for patients.

The CLIP score includes Child-Pugh stage, tumor morphology (uninodular, multinodular and extension), AFP and portal vein thrombosis.59 The CLIP system appears to be the best of the staging systems among patients who underwent transarterial chemoembolization (TACE).60 It also appears to be easier to use and more accurate than the Okuda classification.56

The AJCC TNM system was most recently updated in 2010. This system accounts for the size of the tumor, the number of discrete lesions, the presence of vascular involvement, lymph node involvement and the presence of distant metastases.61 Despite its prognostic importance, the degree of fibrosis (Ishak classification) does not factor into the stage.62,63 The benefit of the AJCC TNM system (6th edition) is that it has been validated in a cohort of patients who underwent liver transplantation and provided more accurate information regarding overall and recurrence-free survival as compared to six other staging systems, including the CLIP score and BCLC Group staging classification.64

The BCLC Group staging classification includes Okuda stage, extent of lesion, performance status, presence of constitutional symptoms, vascular invasion and extrahepatic spread (Table 2).65 Because of its ability to stratify patients into groups that would benefit from various treatments, the BCLC is the most commonly used staging system and is the staging system of choice based on the most recent AASLD guidelines.34

Treatment

Treatment options for HCC include surgical resection, liver transplantation, radiofrequency ablation (RFA), trans-arterial chemoembolization, radioembolization and systemic chemotherapeutic agents. The most appropriate therapy for a given patient is chosen based on the BCLC stage. The general principle is that more aggressive measures are used for earlier-stage disease, with the goal of curative therapy, while treatment of more advanced HCC is centered on palliation.


In years past, very few patients were diagnosed with very early stage HCC (BCLC 0, defined as a solitary, asymptomatic lesion with diameter <2 cm without metastases). Improved surveillance strategies, adherence to surveillance guidelines and improved diagnostic tools are likely to increase the detection of these very early stage HCCs. Surgical resection is currently recommended for patients with very early stage HCC and Child-Pugh A cirrhosis with a bilirubin <1 and no signs of portal hypertension. With surgical resection, the overall 5-year survival is between 70-90%.66,67 Even after resection, there is a small risk of recurrence.66,67 The presence of satellite lesions is an independent risk factor for survival and recurrence rate.66

Randomized controlled trials comparing surgical resection to RFA have shown no survival difference, even for lesions up to 5 cm in size.68,69 There are still conflicting data regarding whether RFA is a viable replacement for surgical resection in patients with very early stage HCC. A recently published study of 52 patients with very early stage HCC confirmed these findings, showing no difference in 1-, 3- and 5-year overall and tumor-free survival rates between surgical resection and RFA.70 However, another study of 237 patients showed that surgical resection provides better overall survival and recurrence-free survival compared to RFA.71 Given the conflicting data, surgical resection remains the standard of care as first-line therapy for very early stage HCC.34 If patients with very early stage disease have more advanced liver disease or are otherwise not surgical candidates, either RFA or liver transplantation should be considered.

Early-stage disease (BCLC A) is comprised of asymptomatic patients who are appropriate for resection, liver transplantation or percutaneous treatment.72 In patients who are surgical candidates, liver transplantation has been proven to be curative.73 In a landmark study by Mazzaferro and colleagues, 48 patients with cirrhosis who had either a single HCC <5 cm or up to three lesions each less than 3 cm in diameter underwent liver transplantation; the 4-year survival rate was 75% and recurrence-free survival was 83%.73 This study became the basis of what is now known as the Milan criteria. In addition to these size restrictions, the Milan criteria also require the absence of vascular invasion and extrahepatic disease. Overall, mean survival in patients with early-stage disease who underwent liver transplantation was 8.8 years, while patients who underwent surgical resection and RFA had median survivals of 4.3 years and 5.2 years, respectively.74 Unfortunately, due to the lack of available donor livers, transplantation is not always an option.
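The Milan criteria described above reduce to a simple eligibility rule. The sketch below uses the conventional inclusive cutoffs (a single lesion up to 5 cm, or up to three lesions each up to 3 cm); the strict-versus-inclusive inequalities are an assumption on our part, as the text writes "<5 cm" and "less than 3 cm".

```python
# Hedged sketch of the Milan criteria as summarized in the text.
# Inclusive (<=) cutoffs are an assumption; the article's wording uses "<".
def meets_milan(lesion_sizes_cm: list, vascular_invasion: bool,
                extrahepatic_disease: bool) -> bool:
    # Vascular invasion or extrahepatic spread is disqualifying.
    if vascular_invasion or extrahepatic_disease:
        return False
    # Single lesion: up to 5 cm.
    if len(lesion_sizes_cm) == 1:
        return lesion_sizes_cm[0] <= 5.0
    # Otherwise: up to three lesions, each up to 3 cm.
    return len(lesion_sizes_cm) <= 3 and all(s <= 3.0 for s in lesion_sizes_cm)
```

For example, a patient with a solitary 4.5 cm lesion and no vascular invasion or extrahepatic disease would qualify, while a patient with four small lesions would not.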

Intermediate-stage disease (BCLC B) includes asymptomatic patients with either a large or multinodular HCC and no evidence of vascular invasion or extrahepatic spread.72 The current guidelines support trans-arterial chemoembolization (TACE) as first-line therapy for patients in this group whose tumors are unresectable.34 This modality can also be used as a bridge to liver transplantation. Trans-arterial therapy takes advantage of the dependence of HCC on the hepatic arterial blood supply and usually involves injection of a chemotherapeutic agent (suspended in lipiodol to expand exposure of tumor cells to the chemotherapy) followed by embolization of the hepatic artery.34 TACE is contraindicated in patients with vascular invasion due to the increased risk of ischemia. A study from 2002 was ended early when it showed a significant survival benefit for patients who underwent chemoembolization as compared to patients who received symptomatic treatment;75 survival probability at 2 years was 63% compared to 27% in the control group. Patients with advanced (i.e., Child C) or decompensated cirrhosis are poor candidates for TACE, as liver failure is a potential risk of this treatment.76

Advanced-stage disease (BCLC C) includes both symptomatic and asymptomatic patients with vascular invasion and/or extrahepatic spread.72 Patients in this stage are candidates for palliative treatment. Sorafenib is an oral multikinase inhibitor targeting the platelet-derived growth factor receptor and the vascular endothelial growth factor receptor.77 Mouse models show that it inhibits tumor growth and vascularization and induces tumor apoptosis and hypoxia.78 It has significantly lengthened the median time to progression of the disease and has prolonged median survival by almost three months, from 7.9 months to 10.7 months.77 Patients with Child C cirrhosis, an Okuda score of 3 or an ECOG functional status >2 are defined as terminal stage (BCLC D) and, given their poor prognosis, are candidates for palliative care only, as they do not benefit from additional therapy.72


A CASE REPORT

Biliary Tubulopapillary Adenoma with Concurrent Biliary Stone Presenting with Pruritus and Obstructive Jaundice


Benign tumors of the biliary tract are a rare cause of obstructive jaundice. We report the case of a 75-year-old Caucasian male who presented with pruritus and obstructive jaundice and was noted to have a mass in the distal common bile duct (CBD) on computed tomography. Endoscopic retrograde cholangiopancreatography confirmed diffuse biliary dilation and a large filling defect. Balloon sweep resulted in the passage of small clumps of soft tissue in addition to stones. Positron emission tomography demonstrated no abnormal uptake. He underwent a Whipple procedure, where a 3.5 cm pedunculated, polypoid mass was found in the CBD. Pathology revealed intraductal tubulopapillary adenoma with high grade dysplasia and microscopic mucin.

INTRODUCTION

Biliary papillary tumors account for 11% of all biliary ductal tumors1 and have malignant potential.2 They can be intrahepatic or extrahepatic in location3,4,5 and may be associated with tubular adenomas elsewhere in the gastrointestinal tract.6 Several cases of biliary papillary tumors have been reported from the Far East,24,6-13 with few reported case series from the Western population.1,5 The term biliary intraductal papillary tumor has been used interchangeably with intraductal adenoma.6,7,14 Here, we present a rare case of benign biliary intraductal tubulopapillary adenoma with a review of the literature on biliary intraductal papillary tumors.

CASE

A 75-year-old Caucasian male presented with a four-month history of pruritus and weight loss. He had a past medical history of stage II prostate cancer, in remission after treatment with hormone therapy. His abdominal examination was unremarkable. His liver enzymes revealed an alanine aminotransferase (ALT) of 127 IU/L, aspartate aminotransferase (AST) of 98 IU/L, alkaline phosphatase of 499 IU/L and a total bilirubin of 3.0 mg/dl. Computed tomography (CT) of the abdomen revealed a 3.4 cm enhancing mass in the distal common bile duct (CBD) with severe extra- and intrahepatic biliary dilation (Figure 1). Carbohydrate antigen 19-9 was 24.1 U/ml. Positron emission tomography (PET) demonstrated no abnormal uptake in the CBD. He underwent endoscopic retrograde cholangiopancreatography (ERCP), which showed diffuse biliary dilation and a large filling defect in the mid CBD with irregular ductal margins (Figure 2). Balloon sweep resulted in the passage of small clumps of soft tissue in addition to a large stone and sludge. Histology of the soft tissue suggested intraductal papillary neoplasm of the CBD. Endoscopic ultrasound (EUS) showed a localized CBD mass with possible malignant cells on fine needle aspirate. Subsequently, he underwent a Whipple procedure, where a 3.5 cm pedunculated polypoid mass was found in the CBD.

The resection margins were clear with benign periductal and peripancreatic lymph nodes. Pathology of the mass showed tubulopapillary adenoma with high grade dysplasia of biliary and gastric epithelial subtypes and microscopic mucin (Figure 3). The patient has since recovered well from the surgery.

DISCUSSION

Most intraductal biliary tumors are malignant, with only 6% reported as benign.14 The World Health Organization (WHO) has classified biliary epithelial tumors as adenomas, with or without dysplasia, and carcinomas.15 Adenomas are further classified, based on their pattern of growth, as papillary, tubular and tubulopapillary. Biliary papillary tumors are composed of papillary proliferations of atypical biliary epithelium with delicate fibrovascular cores and present as a sessile or pedunculated polypoid mass within the bile duct.6,14 About 50% are multiple and are termed biliary papillomatosis.2,8 Biliary papillary tumors encompass both benign tumors (papillary adenoma) with varying degrees of dysplasia and malignant tumors (papillary cholangiocarcinoma), with the latter accounting for 74% to 83% of cases.1,2,5,8 Of all the types of cholangiocarcinoma, the papillary type accounts for 2.9-8.9% and has a better prognosis.5,16,17 Biliary papillary tumors are further classified pathologically by epithelial subtype into gastric, intestinal and pancreato-biliary subtypes, with the pancreato-biliary subtype accounting for more than 50% of cases.1,5,8

While the data are conflicting, biliary intraductal papillary neoplasm has also been termed intraductal papillary mucinous neoplasm of the biliary tract, or IPMN-B. These tumors share certain pathological features with intraductal papillary mucinous neoplasm of the pancreas (IPMN-P), such as papillary proliferation, similar epithelial subtypes and mucin production.8 The terminology and histopathology of IPMN-B are not well defined by the WHO.15 Mucin hypersecretion is seen in only 25-35% of IPMN-B, while it is seen in most cases of IPMN-P.5,8,9 In one review of case series of biliary tumors, 36% of tumors previously named as cystic, papillary or mucinous tumors were re-classified as IPMN-B based on the macroscopic and histopathological criteria used for diagnosing IPMN-P.5 On immunohistochemical analysis, biliary papillary tumors staining for MUC 2 generally have a better prognosis than those staining for MUC 1.1,8,9 Biliary papillary tumors differ from biliary cystadenoma by their communication with the bile duct and the absence of ovarian-like stroma.18 Attempts have been made to study the inherent biology of biliary papillary tumors. In a study of genetic alterations, high-level microsatellite instability was seen in 11.8% and low-level microsatellite instability in 35.3% of biliary papillary tumors.19

The median age for biliary papillary neoplasia is 68 years, with a predilection for males.5 Cases of biliary papillary neoplasia from Asia have been associated with choledocholithiasis and parasitic infestations such as Clonorchis sinensis,2,24,25 but no such association has been found in the Western population.1 The common symptoms include abdominal pain,1,5 followed by obstructive jaundice,3,6 pruritus7 and acute cholangitis.2 Bile duct stones may occur in association with biliary tumors, which is believed to be the result of biliary stasis. Cases of bile duct rupture with implantation of tumor cells in the peritoneal space (pseudomyxoma peritonei) have also been reported.3

Since these tumors are slow-growing, it is possible to diagnose intraductal papillary neoplasia at an early stage, given the advancements in diagnostic procedures such as ERCP and cholangioscopy. CT scan and magnetic resonance imaging (MRI) often show hyperenhancement of the tumors within the bile duct.22 At cholangiography, the biliary ducts are noted to be dilated with intraluminal filling defects due to either fixed or detached tumor, with extrusion of soft tissue on balloon sweep.3,7,23 Mucin-hypersecreting tumors may show mucin extruding from the papilla.24,25 On cholangioscopy, sludge material is frequently noted to cover the papillary masses observed within the bile duct lumen; the masses are usually soft and friable, and the surface color is bright yellow or pinkish.2 As it is difficult to detect malignant foci pre-operatively, the diagnosis is usually made after radical resection such as a Whipple procedure.6 Intraoperative cholangioscopy may be needed to confirm the absence of macroscopic intraluminal extension of the tumors, as they tend to spread along the epithelial surface of the bile duct.4 Elevated serum CA 19-9 is seen more frequently in mucin-hypersecreting biliary papillary tumors, likely due to cholestasis and cholangitis.2

Management of biliary papillary tumors is not well defined in the literature. Given the favorable prognosis after complete surgical resection and the inability to identify malignant foci pre-operatively, aggressive surgical resection is recommended regardless of tumor size and extent.10-13 The type of surgery performed depends on the location of the tumor and varies from pancreatoduodenectomy to hepatic resection.1,5 Since biliary intraductal papillary neoplasms are adenomas, they can recur after surgical resection.14 There are no guidelines regarding the frequency or mode of surveillance of the remnant biliary tract after surgical resection of biliary papillary tumors. In a study from Asia comparing biliary intraductal papillary neoplasia to non-papillary biliary tumors, the five-year survival after resection for biliary papillary adenoma was 90%, as compared to 50%, 0% and 58%, respectively, for papillary cholangiocarcinoma, non-papillary cholangiocarcinoma and IPMN-P.8 In a case series from the Western population, IPMN-B was found to have a component of invasive carcinoma in 74% of cases,1 and the median 5-year survival for invasive IPMN-B was reported to be about 38%.5

CONCLUSION

We report a case of biliary intraductal tubulopapillary adenoma with high grade dysplasia, a rare tumor in the Western population that generally has a favorable prognosis as compared to non-papillary biliary tumors. Our case is also unique due to the concomitant presence of a common bile duct stone, further emphasizing the possible predisposition of biliary intraductal papillary neoplasia to biliary stones. Further, we suggest that there is a need to reclassify biliary tumors to accommodate IPMN-B as a distinct entity similar to IPMN-P.


FRONTIERS IN ENDOSCOPY, SERIES #19

Transgastric Endoscopic Necrosectomy Using a Dedicated Transluminal Stent


James P. D. Walker, Kyle Eliason, Douglas G. Adler MD, FACG, AGAF, FASGE, Gastroenterology and Hepatology, University of Utah School of Medicine, Salt Lake City, UT

CASE REPORT

An 87-year-old female was referred to our institution for evaluation of several pancreatic fluid collections that had developed in the context of an episode of severe acute pancreatitis. The patient’s pancreatitis was presumed to be due to choledocholithiasis and prior to our evaluation she had undergone an ERCP with sphincterotomy and duct clearance as well as a cholecystectomy. The patient could not tolerate PO intake and was being fed via a nasojejunal feeding tube. The patient had previously been evaluated by surgery and interventional radiology, who did not feel that the patient was a candidate for surgery or percutaneous drainage of this large pancreatic fluid collection, respectively.

Contrast-enhanced CT scan revealed multiple pancreatic fluid collections, although attention was mostly centered on a bilobed but internally communicating 15 x 10 cm collection causing significant extrinsic compression of the stomach (Figure 1). The patient was offered endoscopic transmural drainage and, after a discussion of risks and benefits, she accepted.

When evaluated by endoscopic ultrasound (EUS), the lesion was found to contain a large amount of solid debris and was thus felt to represent walled-off pancreatic necrosis (WOPN) rather than a pseudocyst (Figure 2). EUS-guided transmural access to the cyst was obtained with a 19-gauge needle via a transgastric route. The cystgastrostomy was dilated to 6 mm over a wire. A 15 mm wide Axios stent (Xlumena, Mountain View, CA) was advanced across the cystgastrostomy and deployed without difficulty (Figure 3). There was immediate drainage of approximately 1 L of fluid consistent with cyst contents.

One week later, the patient underwent endoscopic pancreatic necrosectomy through the Axios stent with a standard EGD endoscope. Using a combination of nets, snares, and a rat tooth forceps, a large amount of necrotic pancreatic tissue was mechanically debrided with marked improvement in the appearance of the cyst cavity, although some debris remained (Figure 4). The necrotic cavity was lavaged with copious amounts of hydrogen peroxide mixed with sterile saline. Although the endoscopic portion of the procedure went well, the patient tolerated the procedure poorly from a respiratory perspective and thereafter declined further procedures given her age and overall situation. It was agreed that the Axios stent would simply be left in place to provide drainage of the pancreatic fluid collection.

A CT scan of her abdomen and pelvis obtained 5 weeks later showed essentially complete resolution of the large necrotic collection with the Axios stent still in good position (Figure 5). The patient still did not wish to undergo further procedures given her age and overall history and the stent was thus left in place.

Discussion

In the treatment of pancreatic fluid collections and walled off pancreatic necrosis (WOPN) from pancreatitis, there are three main approaches: surgical, interventional radiology (IR) and endoscopic. These approaches are effectively used either alone or in tandem based on the specifics of the patient’s disease, the comfort level of the care team and the stability of the patient’s condition.

All of these techniques can be used in isolation or in combination as required clinically and as the patient's condition and severity change over time. The first week or phase of disease management consists mostly of monitoring and supportive care/pain control, with possible antibiotic prophylaxis and fluid resuscitation.1 During this time surgery is usually not performed unless it is of an emergent nature, as surgery in this phase often exacerbates multiple organ failure.2 The next phase of management over the ensuing weeks typically includes such measures as contrast-enhanced CT or MRI to assess fluid collections and necrosis for progression, maturation, and the presence of infection.3 In this phase, antibiotic treatment may be optimized and determination of sterile or infected pancreatic necrosis can be accomplished using fine needle aspiration cultures of pancreatic tissue if clinically indicated.4 In weeks four, five and six, patients who are still stable typically remain under conservative medical treatment, while patients who are beginning to deteriorate will likely undergo more aggressive interventions.15 This is the phase of treatment in which it is more common to see minimally invasive surgical and laparoscopic procedures as well as endoscopic drainage.

Waiting until at least four weeks after the onset of symptoms allows fluid collections to become walled-off and develop a mature wall and adherence to the stomach and/or duodenum, which facilitates endoscopic necrosectomy if this approach is chosen.5,6 After treatment is initiated, patients can have procedures repeated as necessary. Often cholecystectomy or ERCP with sphincterotomy is considered during this time to minimize recurrent biliary pancreatitis and any other gallstone or obstructive disease.7 Several complications can arise in this phase, including vascular complications and pancreatic fistulas; the latter can often be treated with endoscopic papillary stenting.8

Surgical methods to treat pancreatic fluid collections may include open necrosectomy, which was considered the ideal treatment in the past as part of a “step down” approach to therapy of acute pancreatitis. An open necrosectomy is typically performed by creating a midline or subcostal bilateral incision and depending on the extent and locality of the necrosis, a surgeon may access the pancreas through the lesser sac, gastrocolic omentum or the transverse mesocolon.9 Manual debridement is performed in one or more sessions. After the initial necrosectomy is complete, the abdominal incision is typically closed around a drain and repeat procedures are performed until the debridement is complete. Alternatively, the patient’s abdomen may be left open or a wound-vac may be placed to facilitate drainage and repeated trips to the operating room for debridement in the days ahead.

Another surgical option is laparoscopic necrosectomy, which has grown in popularity due to its minimally invasive nature. A laparoscopic approach provides excellent access to the pancreas and allows other maneuvers to be easily accomplished in the same setting (e.g., cholecystectomy, feeding tube placement).1,10 In one study, laparoscopic necrosectomy showed promising results, although 7.1% of procedures were converted to open necrosectomy, 28.6% of patients developed a pancreatic fistula, and there was a wound infection rate of 10.7%.11 While these numbers may sound high, it should be emphasized that these are aggressive procedures performed in very ill patients.

Another minimally invasive surgical approach is the retroperitoneal approach. This can be performed in a number of ways, one of which is video-assisted retroperitoneal debridement (VARD). In the VARD procedure, a laparoscopic camera is inserted through an incision centered on the 12th rib, along with laparoscopic instruments. Fluid drainage and debris removal can be accomplished, followed by debridement of the necrotic cavity.1,12

Due to their more invasive nature, the surgical techniques are associated with longer hospital stays and greater cost to the patient than other procedures. They are therefore typically used within a therapeutic "step up" program that usually begins with a less invasive endoscopic or percutaneous IR procedure.1,13

IR placement of one or more percutaneous drainage catheters is commonly used in patients who are too ill for endoscopy or surgery, who have an immature fluid collection in need of drainage, or who have an acutely infected collection. A percutaneous drain can be useful for bridging unstable patients to more definitive procedures performed at a later date. One study showed a 100% success rate in hemodynamically stable patients (n=20) using percutaneous drains to treat necrotizing pancreatitis, success being defined as resolution of lesions at follow-up IR procedures and on CT.14 Percutaneous drainage carries with it a risk of pancreatico-cutaneous fistulae, especially in patients with disconnected duct syndrome, where the risk can be as high as 45%.15 As such, evaluating ductal anatomy, typically via ERCP with stenting as appropriate, is sometimes helpful in this setting. In a review of percutaneous drainage as a primary treatment for necrotizing pancreatitis, 55% of patients (214 out of 384) had no need for further necrosectomy.16 In patients who need extensive necrosectomy of solid tissue, other techniques are typically preferred over percutaneous drains.17

Endoscopic approaches to the drainage and debridement of pancreatic necrosis tend to be the least invasive but are nonetheless high-risk interventions. These approaches can be used when a necrotic collection abuts the gastroduodenal wall or when the fluid collection communicates with the main pancreatic duct.

If the fluid collection communicates with the main pancreatic duct, transpapillary drainage is often attempted using a plastic stent placed directly into the collection or bridging the communication with the duct.9 If the fluid collection is felt to be too large, to have too much solid component, and/or does not communicate with the duct, then transmural drainage can be achieved in many patients via a cystenterostomy (most commonly a cystgastrostomy, less commonly a cystduodenostomy) created endoscopically and kept open with one or more plastic or metal stents. The cystenterostomy also provides a portal for repeated endoscopic debridement as necessary.

In a systematic review of endoscopic treatment of pancreatic fluid collections through transmural drainage, the 124 patients treated with metal stents had an average success rate of 81.9% across the various types of pancreatic fluid collections: 83.3% for pseudocysts and 77.9% for walled-off necrosis (success being defined as a reduction in size >50% or complete resolution). The same authors reported an adverse event rate of 23.3%, including infection, bleeding, stent migration and occlusion (again emphasizing that endoscopic treatments are not low-risk procedures). In the same study, the 702 patients treated with plastic stents showed an average success rate of 80.7% (85.1% for pseudocysts and 69.5% for walled-off necrosis), with an adverse event rate of 16.1%.18

There are now commercially available, dedicated transluminal stents with a bore wide enough (15 mm) for an endoscopic necrosectomy to be performed through the stent lumen itself. We use these stents frequently in our practice. These stents are designed for EUS-guided placement and are only now coming into clinical use. A study of one of these dedicated stents (n=22) showed a 100% clinical success rate and 100% technical success rate, with 10% of patients encountering complications, which included stent migration and hemorrhage.19 The main advantages of these dedicated stents are the wide lumen, which facilitates passive drainage of cyst contents into the GI tract, and the fact that they can accommodate an endoscope, so that the cavity can be entered as needed for endoscopic necrosectomy without having to remove the stent itself (as is often the case with plastic stents).

Overall, endoscopic treatment appears to be a good choice for pancreatic fluid collections. One study of 116 patients (5 acute fluid collections, 8 necrosis, 30 acute pseudocysts, 64 chronic pseudocysts and 9 pancreatic abscesses) treated via endoscopic drainage showed an 87.9% clinical success rate, with resolution of collections and symptoms, and a 93.1% technical success rate of fluid collection resolution with or without resolution of symptoms. Collections recurred in 15.5% of patients and complications occurred in 11.2%. The most common complications were bleeding and pneumoperitoneum.20

Overall, methods for treating pancreatic fluid collections that develop following pancreatitis are numerous and allow for a customizable approach to treatment. There are many variables to consider with each patient’s management, including the provider’s comfort level and experience with some of these procedures. Careful consideration of all factors will be important to the patients’ outcomes.


NUTRITION ISSUES IN GASTROENTEROLOGY, SERIES #142

Non-Celiac Gluten Sensitivity Where are We Now in 2015?


Non-celiac gluten sensitivity (NCGS) is a term that is used to describe individuals who are not affected by celiac disease or wheat allergy yet who have intestinal and/ or extraintestinal symptoms related to gluten ingestion with improvement in symptoms upon gluten withdrawal. The prevalence of this condition remains unknown. In this paper, we will discuss the current advances in our understanding of NCGS including definition, epidemiology, clinical characteristics, diagnostic criteria and management.

Non-celiac gluten sensitivity (NCGS) is a term used to describe individuals who are not affected by celiac disease or wheat allergy yet who have intestinal and/or extraintestinal symptoms related to gluten ingestion, with improvement in symptoms upon gluten withdrawal. The prevalence of this condition remains unknown. It is believed that NCGS represents a heterogeneous group, with different subgroups potentially characterized by different pathogenesis, clinical history, and clinical course. There also appears to be an overlap between NCGS and irritable bowel syndrome (IBS). Hence, there is a need for strict diagnostic criteria for NCGS. The lack of validated biomarkers remains a significant limitation in research studies on NCGS.

Anna Sapone MD PhD, Celiac Center, Division of Gastroenterology Daniel A. Leffler MD, MS, Director of Clinical Research, Celiac Center, Director of Quality Improvement, Associate Professor of Medicine at Harvard Medical School, Division of Gastroenterology Rupa Mukherjee MD, Celiac Center, Division of Gastroenterology, Department of Medicine, Instructor in Medicine at Harvard Medical School, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA

INTRODUCTION

The most common diseases caused by ingestion of wheat are autoimmune-mediated conditions such as celiac disease (CD) and IgE-mediated allergic reactions, or wheat allergy (WA).1 CD affects roughly 1% of the general population. It is now increasingly clear that, besides CD and WA, an undefined percentage of the general population consider themselves to be suffering from problems due to wheat and/or gluten ingestion, relying largely on self-diagnosis. These individuals are generally considered to have gluten sensitivity (GS). An overlap between irritable bowel syndrome and GS has long been suspected, and distinguishing the two requires strict diagnostic criteria. Currently, the lack of biomarkers is a major limitation, and there remain many unresolved questions regarding GS. In this paper, we will discuss the current advances in our understanding of non-celiac gluten sensitivity (NCGS), including definition, epidemiology, clinical characteristics, diagnostic criteria and management.

Definition

Recent publications show that there is great interest in defining gluten-related disorders (see Figure 1). This term encompasses all conditions related to the ingestion of gluten-containing food. Included within this category is celiac disease (CD), a chronic, small intestinal, immune-mediated enteropathy triggered by exposure to dietary gluten in genetically predisposed individuals and characterized by specific autoantibodies against tissue transglutaminase 2 (anti-TG2), endomysium (EMA) and/or deamidated gliadin peptide (DGP).2 Wheat allergy (WA) is another gluten-related disorder, defined as an adverse immunologic reaction to wheat proteins characterized by the production of wheat-specific IgE antibodies that play a key role in disease pathogenesis. Cases of non-IgE-mediated wheat allergy also exist and can be confused with gluten sensitivity. Examples of WA include wheat-dependent, exercise-induced anaphylaxis (WDEIA), occupational asthma (baker's asthma), rhinitis, and contact urticaria.1

In 2011, an international panel of experts met in London and reached consensus on a definition of non-celiac gluten sensitivity (NCGS). They defined NCGS as a "non-allergic and non-autoimmune condition in which the consumption of gluten can lead to symptoms similar to those seen in CD".3 The consensus statement elaborated that symptoms in NCGS are triggered by gluten ingestion in the absence of celiac-specific antibodies (tissue transglutaminase [tTG], endomysium [EMA] and/or deamidated gliadin peptide [DGP]) and in the absence of enteropathy, although an increased density of CD3+ intraepithelial lymphocytes (IELs) can be detected. Patients with NCGS have variable human leukocyte antigen (HLA) status and variable presence of IgG anti-gliadin (first generation) antibodies.3 NCGS is further characterized by resolution of symptoms with withdrawal of gluten and relapse of symptoms with gluten exposure. The clinical symptoms of NCGS can overlap with those of CD and WA. As our knowledge of NCGS continues to increase, this definition may require further modification in the future.

Epidemiology and Natural History of NCGS

The overall prevalence of NCGS in the general population is currently unknown, largely because patients often self-diagnose and place themselves on a GFD without medical consultation. Anecdotal observations indicate that the prevalence ranges from 0.5% to 6%, but this is based on studies with heterogeneous designs and inconsistent definitions of the disease. In a large study of 5896 patients evaluated at the University of Maryland between 2004 and 2010, 347 patients fulfilled diagnostic criteria for NCGS, yielding a prevalence of nearly 6%.1,4 Furthermore, data from the National Health and Nutrition Examination Survey (NHANES) for 2009-2010 reported a possible prevalence of NCGS of 0.55% in the general U.S. population.5 Given the reported overlap between IBS and NCGS, epidemiologic studies on IBS can shed some light, albeit indirectly, on the frequency of NCGS. In one highly selected series of adults with IBS, the frequency of NCGS was reported to be 28% based on a double-blind, placebo-controlled gluten challenge.6 Furthermore, in a large study by Carroccio et al, 276 of 920 (30%) subjects with IBS-like symptoms based on Rome II criteria reported wheat sensitivity or multiple food hypersensitivities.7 It is estimated that the prevalence of NCGS in the general population is likely higher than that of CD (1%). The prevalence of NCGS in children is still unknown. Although risk factors for NCGS have not yet been identified, this disorder appears to be more common in females, with a male-to-female ratio of about 1:3, and in young/middle-aged adults.

Due to a lack of longitudinal data and prospective studies on the natural history of NCGS, it is unclear if NCGS predisposes to any long-term complications. In the current literature, there are no reports of major complications such as intestinal lymphoma, gastrointestinal (GI) malignancies or associated autoimmune illness as observed in CD.

Pathogenesis

The pathophysiology of NCGS remains largely undetermined. A study by Sapone et al. found that NCGS subjects have normal intestinal permeability compared to CD patients, intact expression of the proteins that comprise intestinal epithelial tight junctions, and a significant reduction in T-regulatory cell markers compared to controls and CD patients.4 Moreover, NCGS patients have an increase in the α and β classes of intraepithelial lymphocytes (IELs) with no increase in adaptive immunity-related gut mucosal gene expression. These findings suggest an important role of the intestinal innate immune system in the pathogenesis of NCGS without an adaptive immune response.8 Unlike duodenal mucosa from CD patients exposed to gliadin in vitro, intestinal mucosa from NCGS patients does not express markers of inflammation. Newer techniques, such as examination of basophil activation in response to gluten or wheat stimulation, might suggest alternative pathogenic mechanisms for NCGS.

Clinical Characteristics of NCGS

The clinical symptoms of NCGS are elicited soon after gluten exposure, improve or disappear with gluten withdrawal, and reappear following gluten challenge, usually within hours or days. While this finding could be attributed to a placebo/nocebo effect, the 2011 study by Biesiekierski et al. argues for the existence of a true NCGS disorder. In a double-blind, randomized, placebo-controlled study, the authors found that the proportion of subjects reporting IBS-like symptoms was significantly higher in the gluten-treated group (68%) than among subjects treated with placebo (40%).6

Studies suggest that the clinical presentation of NCGS follows an IBS-like picture characterized by abdominal pain, bloating, bowel irregularity (diarrhea and/or constipation) and systemic manifestations including “brain fog”, headache, joint and muscle pain, fatigue, depression, leg or arm numbness, dermatitis (eczema or skin rash) and anemia.1,4,9 In one study of IBS patients, the two most common extraintestinal manifestations with gluten challenge were “foggy mind” (42%) and fatigue (36%).9 Currently, data are lacking on the actual prevalence and type of intestinal and extraintestinal symptoms in patients with NCGS. Unlike CD, NCGS patients do not have an increased prevalence of autoimmune illness. In one group of 78 NCGS patients, none had type 1 diabetes mellitus and only one patient (1.3%) had autoimmune thyroiditis, compared to 5% and 19% prevalence for these autoimmune comorbidities, respectively, in a study of 80 patients with CD.9 With regard to psychiatric comorbidities, a recent study found no significant difference between patients with CD and NCGS in terms of anxiety, depression and quality of life indices.10 Overall, the role of NCGS in neuropsychiatric conditions (i.e. schizophrenia, autism spectrum disorders) remains a controversial and highly debated topic. However, NCGS patients reported more abdominal and non-abdominal symptoms after gluten exposure than CD patients (see Table 1).

In a recent retrospective review of patients with IBS-like symptoms who underwent a double-blind, placebo-controlled wheat challenge, nearly 25% of the patients were identified as having NCGS. The study showed that a history of food allergy in infancy, coexistent atopic disease, multiple food intolerances, weight loss and anemia were more common in the NCGS group than in the IBS controls.7 Therefore, it may be useful for physicians to inquire about these conditions in patients with IBS-type symptoms to gauge the potential utility of a trial of gluten restriction.

NCGS and IBS

The relationship between NCGS and IBS is complex, and IBS-like symptoms are common in patients with NCGS. Vazquez-Roque et al. showed that gluten ingestion can elicit GI symptoms in non-CD patients, specifically patients with diarrhea-predominant IBS (IBS-D).11 The IBS-D patients, particularly those with the HLA-DQ2 and/or DQ8 genotypes, had more frequent bowel movements per day on a gluten-containing diet, and this diet was associated with higher small intestinal permeability. This finding gave some insight into the role of the GFD in improving GI symptoms in IBS patients.

However, the exact role of gluten withdrawal in mitigating symptoms requires further investigation. In addition to gluten, wheat and wheat derivatives contain components such as amylase-trypsin inhibitors (ATIs) that can trigger symptoms in IBS patients. Another potential trigger is the group of highly fermentable, osmotic, poorly absorbed, short-chain carbohydrates (fermentable oligo-, di- and monosaccharides and polyols), also called FODMAPs, which include the fructans, galactans, fructose, lactose and polyols found in wheat, certain fruits, vegetables and milk, as well as their derivatives.3 There is ongoing debate on the contribution of each of these dietary components to the symptoms experienced by patients with NCGS and IBS. In a placebo-controlled, cross-over re-challenge study in 37 subjects with self-reported NCGS/IBS, subjects were randomly assigned to a reduced-FODMAPs diet and then challenged with gluten or whey protein.12

All 37 subjects had improvement in their GI symptoms on the reduced-FODMAPs diet without significant worsening of their symptoms when challenged with gluten or whey protein. It is important to note that the symptoms experienced by the NCGS patients cannot be attributed solely to FODMAPs, since these patients often experienced resolution of symptoms with a GFD alone while still consuming FODMAPs from other sources such as legumes. However, this finding raises the possibility that some cases of IBS may, in fact, be due largely to FODMAPs and should not be classified as NCGS. Therefore, there is a great need to identify and validate specific biomarkers that will play an important role in further defining NCGS as a clinical condition and clarify its prevalence in at-risk groups and the general population.

Laboratory Evaluation in NCGS

No specific biomarker has been identified for NCGS. However, trends in laboratory evaluation including serology, HLA genotyping and histology have been noted in patients meeting diagnostic criteria for NCGS.

CD Serology

Volta et al. investigated the CD serologic patterns in 78 patients with untreated NCGS. They found that 56.4% of the patients had elevated titers of “first generation” IgG anti-gliadin antibody (AGA). This prevalence was lower than that in untreated CD patients (81.2%), but higher than in patients with connective tissue diseases (9%) or autoimmune liver disease (21.5%), and in healthy blood donors (2-8%). However, the prevalence of IgA-AGA in NCGS was low at 7.7%.9 Of note, the three key CD antibodies, IgA-tTG, IgG-DGP and IgA-EMA, were negative in all patients with NCGS except for a single low-titer IgG-DGP.

HLA Genotyping

The CD-predisposing HLA-DQ2 and HLA-DQ8 haplotypes are found in roughly 50% of NCGS patients compared to 95% in CD patients and 30% in the general population.1

Histologic Findings

Sapone et al. compared small intestinal biopsy findings from patients with NCGS, CD and controls. Patients with NCGS had normal to mildly inflamed mucosa, categorized as Marsh 0 or 1, while partial or subtotal villous atrophy (Marsh 3) with crypt hyperplasia was seen in all CD patients.4 In addition, the CD patients had increased intraepithelial lymphocytes (IELs) compared to controls. The level of CD3+ IELs in the NCGS patients was intermediate between that seen in CD patients and controls, in the context of normal villous architecture. Other findings that might be specific to NCGS patients include an increased level of activated circulating basophils7,13 and increased infiltration of eosinophils in the duodenum and/or ileum and colon.7,14

Diagnostic Approach to NCGS

As clinicians, it is important to suspect NCGS in a patient who presents with IBS-like symptoms such as abdominal pain, bloating, diarrhea and constipation, as well as “foggy brain,” fatigue, headaches, or joint or muscle pain, that appear to improve on a GFD. Since these symptoms can also be seen with CD and, to a lesser extent, with wheat allergy (WA), these conditions need to be excluded in order to make a diagnosis of NCGS (see Table 2). Kabbani et al. have proposed a diagnostic algorithm to help differentiate NCGS from CD and WA15 (See Figure 2). The first step in the evaluation of a subject with symptoms responsive to a GFD is to check celiac serologies (IgA-tTG and IgA/IgG-DGP) on a gluten-containing diet. If the celiac serologies are negative and there is no IgA deficiency, a diagnosis of CD is unlikely, making NCGS a more likely diagnosis. Moreover, the absence of symptoms of malabsorption (weight loss, diarrhea, nutrient deficiencies, iron deficiency anemia) and of CD risk factors (family history of CD, personal history of autoimmune illness) further supports a diagnosis of NCGS. WA should similarly be evaluated for with IgE-based assays.

The authors found that incorporating a personal history of autoimmune illness, family history of celiac disease and nutrient deficiencies could help in the diagnostic model, particularly in subjects with negative serology. Subjects with negative serology on a gluten-containing diet, no risk factors and no symptoms of enteropathy are highly likely to have NCGS and do not require further testing. Conversely, in a subject with negative serology but with typical symptoms of malabsorption or risk factors for CD, a biopsy is indicated. In a subject with borderline serology on a gluten-containing diet, the next step is HLA typing to determine whether a biopsy is indicated. A subject with borderline serology and negative HLA typing is considered to have NCGS. HLA typing is also useful to evaluate subjects suspected of NCGS or CD who self-start a GFD without a prior check of celiac serologies on a gluten-containing diet. Due to the high negative predictive value of the genetic assay, a diagnosis of CD can be effectively excluded by a negative finding. If HLA testing is negative in a subject already on a GFD without prior serologies and whose symptoms are responsive to the GFD, the subject likely has NCGS and a gluten challenge would be unnecessary. However, if HLA testing is positive despite symptom resolution on a GFD, it is recommended that the subject undergo a gluten challenge followed by evaluation of celiac serologies. A gluten challenge is the monitored reintroduction of gluten-containing foods, usually over a two-week period. The recommended daily gluten load is the equivalent of 1-2 slices of wheat bread.

Once CD and WA have been excluded clinically and by laboratory evaluation, a patient suspected of having NCGS should be asked to avoid gluten-containing foods for at least 4-8 weeks. Gluten withdrawal is usually associated with significant improvement in symptoms within days. After the period of gluten withdrawal, a gluten challenge should be performed to confirm the diagnosis. Since a placebo effect of gluten withdrawal cannot be excluded entirely, the ideal method for diagnosing NCGS is a double-blinded, placebo-controlled design; however, this is unlikely to be feasible in most clinical practices.

Currently, research efforts are focusing on the use of an ex-vivo gluten challenge to distinguish patients with CD (treated and untreated) from those with NCGS, further classify NCGS, and distinguish true NCGS from cases of mild CD without enteropathy. In the ex-vivo gluten challenge, cultured duodenal biopsies are exposed to gluten and maintained under various laboratory conditions to determine unique cytokine profiles and histologic findings that can be used to classify different patient groups. This method would eliminate the need for a two-week gluten challenge followed by an upper endoscopy with duodenal biopsies in patients already on a GFD in whom the diagnosis of CD is not clear. Patients frequently find the gluten challenge to be onerous and, in some cases, intolerable due to significant side effects from gluten exposure.

Management of NCGS

Successful treatment and management of NCGS is based on a multidisciplinary approach involving the primary care physician, gastroenterologist and nutritionist. It must be emphasized that dietary treatment should be implemented only after an appropriate diagnosis has been established. Patients with NCGS are advised to follow a diet with sufficiently reduced gluten content to manage and mitigate symptoms. Based on the severity of symptoms, some patients may choose to follow a gluten-free diet (GFD) indefinitely. Since gluten-free food products are often not fortified with necessary vitamins and minerals, it is important to evaluate a patient with NCGS for vitamin and mineral deficiencies and manage them appropriately. NCGS patients are typically advised to start a multivitamin. If a patient has persistent symptoms despite a low-gluten diet or GFD, other associated conditions such as lactose intolerance and/or fructose malabsorption should be considered. These conditions can be evaluated with breath testing and/or an empiric trial of a low-FODMAP diet. It is also important to consider and exclude other conditions, such as IBS and small intestinal bacterial overgrowth, that can contribute to ongoing symptoms.

Since there is no biomarker for NCGS to monitor a patient’s status, clinicians are left to rely on symptom resolution. Based on our current understanding of NCGS, there is no intestinal or extraintestinal damage with gluten exposure. Since it is not yet known whether NCGS is a transient or permanent condition, it is strongly recommended by experts such as Fasano et al. that patients undergo periodic re-evaluation with reintroduction of gluten (e.g. every 6-12 months), particularly in the pediatric population, in an effort to liberalize the diet where possible.3 In clinical practice, however, many patients with symptom control on a low gluten or GFD are averse to intentional exposures to gluten. Currently, there are no guidelines on how best to monitor patients with NCGS.

Unanswered Questions and Future Research

The clinical spectrum of gluten-related disorders appears to be more heterogeneous than previously appreciated. However, evidence-based research in this area is lacking. Although NCGS is currently defined by gluten-related symptoms in the absence of CD, this does not rule out the possibility that gluten could be “toxic” and have long-term clinical sequelae. A number of unanswered questions remain about NCGS that will dictate future research. What is the prevalence of NCGS both in the general population and in at-risk groups? What are its pathogenic mechanisms? Is the condition permanent or transient? Is the threshold of sensitivity the same or different across subjects, and does it change over time in the same subject? Research on NCGS suggests that it may be a heterogeneous condition comprising several subgroups. There is a need for:


  • Prospective, multi-center studies on the natural
    history of this condition.
  • Biomarkers to properly diagnose and better
    define the different NCGS subgroups.
  • Research on the potential pathogenic role of
    other wheat components besides gluten and ATI,
    namely, FODMAPs in NCGS.

It is also anticipated that the definition of NCGS will undergo further modification with the accumulation of more data. In the meantime, it is important to have a standardized definition for NCGS to assist in diagnosis and to improve study design for future research.


GASTROINTESTINAL MOTILITY AND FUNCTIONAL BOWEL DISORDERS, SERIES #9

Domperidone: Everything a Gastroenterologist Needs to Know



Domperidone, first synthesized approximately 40 years ago, has been approved worldwide for specific clinical applications. However, in the United States it is only available through an FDA-approved Limited Access Program. Patients with functional dyspepsia, gastroparesis, gastroesophageal reflux disease and refractory nausea and vomiting may benefit from the use of domperidone. The main limitation to using domperidone has been questions raised regarding cardiac toxicity, specifically QTc elongation that could potentially lead to fatal arrhythmias. Recent studies have not shown a significantly increased incidence of cardiac side effects even when domperidone was given at very high doses, two to three-fold greater than those typically described in the majority of the available literature. In this article we review all the literature regarding its clinical efficacy and we provide a comprehensive list of recommendations and guidelines when considering initiating domperidone in patients that are suitable for this medication.

Marco Bustamante-Bernal MD1 Priyanka Wani MD1 Richard W. McCallum MD2 1Department of Internal Medicine, Paul L. Foster School of Medicine, Texas Tech University Health Science Center 2Department of Internal Medicine, Division of Gastroenterology, Paul L. Foster School of Medicine, Texas Tech University Health Science Center, El Paso, TX

INTRODUCTION

Domperidone, first synthesized in 1974, has been approved for patient use throughout the world with specific clinical applications in gastroparesis, nausea and vomiting, gastroesophageal reflux disease, functional dyspepsia and, more recently, as an adjunct in small bowel capsule endoscopy. It is currently approved worldwide; however, in the United States domperidone is only available through an FDA-approved Limited Access Program. It can be prescribed by physicians who apply for an Investigational New Drug (IND) protocol to provide this drug to patients with gastroparesis or other functional gastrointestinal (GI) disorders associated with nausea and vomiting whose symptoms have been refractory to standard therapy or whose treatment is limited by medication side effects. Domperidone was not approved for use in the United States based on recommendations from the FDA review process to conduct clinical trials with larger patient numbers to further confirm its efficacy and safety.1 These trials were not subsequently performed or submitted to the FDA.

Our purpose in this publication is to provide physicians with a comprehensive analysis of how they can best utilize domperidone in their practices, as well as an update on the available data for domperidone’s pharmacology and efficacy, with a major focus on safety and a full analysis of the recent questions that have been raised regarding cardiac toxicity.

Pharmacokinetics

Peak plasma concentrations are attained 10 to 30 minutes after intramuscular and oral administration of domperidone, respectively. Systemic bioavailability after intramuscular administration of domperidone is about 90%, whereas after oral administration it is 13% to 17%. The low systemic bioavailability after oral administration is explained by first-pass metabolism in the liver and gut wall.2

Distribution data in humans are lacking, but studies in rats with radiolabeled domperidone have shown wide distribution in body tissues except the central nervous system (CNS), where only very low concentrations occur. This is explained by the fact that domperidone minimally crosses the blood-brain barrier.

Domperidone undergoes rapid and extensive biotransformation by hydroxylation and oxidative dealkylation. After oral administration of 40 mg of radiolabeled domperidone, 31% of the radioactivity is excreted in the urine and 60% in the feces over a period of 4 days. The half-life is 7.5 hours in healthy subjects and is prolonged to up to 20 hours in patients with severe renal failure. However, since renal clearance is small compared to total plasma clearance, meaningful accumulation should not occur.2

Pharmacodynamics

Domperidone is a dopamine (D) antagonist with particular affinity for the D2-subtype receptors in the brain and the peripheral nervous system, including the GI tract. Dopamine receptors in the chemoreceptor trigger zone, which can induce nausea, are blocked by the D2-receptor antagonist domperidone (Figure 1). Its mechanism of action in the GI tract is antagonism of apomorphine- and dopamine-induced changes in GI function. Stimulation of dopaminergic receptors inhibits gastric motility, resulting in symptoms such as post-prandial bloating and pain, premature satiety, nausea and vomiting. Dopamine antagonists such as domperidone and metoclopramide block this dopaminergic inhibitory effect, resulting in a net increase in acetylcholine release and improved GI motility, with the main effect in the stomach and minimal effects in the proximal small bowel. Unlike metoclopramide, domperidone does not cause CNS side effects, since it essentially does not cross the blood-brain barrier and there is minimal evidence of its presence in the brain.3

Dopamine is one of the neurotransmitters involved in mediating receptive relaxation of the stomach and dopamine antagonists partially inhibit this mechanism. Even after vagotomy, which decreases gastric motility, domperidone can still improve gastric emptying.4

The prokinetic effects of domperidone have broad implications in the upper GI tract, starting with small effects on the amplitude of esophageal contractions, but mainly to enhance antro-duodenal contractions, and better coordinate peristalsis across the pylorus resulting in acceleration of slow gastric emptying states.5 It has minimal effects on motility in the duodenum and proximal small bowel.

CLINICAL USES OF DOMPERIDONE
Functional Dyspepsia

According to the Rome III criteria, functional dyspepsia (FD) is considered to consist of two main subgroups: epigastric pain syndrome (EPS) and postprandial distress syndrome (PDS).6 PDS is characterized by early satiety, postprandial fullness, bloating, nausea and even vomiting.7 EPS is dominated by epigastric pain with some component of nausea and fullness.

Functional dyspepsia patients display a variety of abnormal digestive functions: delayed gastric emptying (30% of patients), accelerated gastric emptying (10%), and impaired gastric accommodation after meals (40%).8 Other data suggest that abnormal gastric sensation or visceral hypersensitivity, as well as psychosocial disturbance, can be major determinants of symptom severity, particularly the epigastric pain component.9 The treatment of FD can be confusing because no medication is currently approved in the US, Canada or European Union for this specific indication.10 A reasonable treatment approach based on the current evidence, particularly in the EPS subgroup, is to initiate therapy with a daily proton pump inhibitor in Helicobacter pylori-negative patients. In the PDS subgroup, where symptoms are induced or exacerbated by meals and pain is less prominent, prokinetic therapy would be preferred as an initial trial. In both settings, if symptoms persist, particularly epigastric pain, a therapeutic trial of a tricyclic antidepressant may be considered with the goal of modifying brain-gut hypersensitivity; another strategy is initiating therapy with an antinociceptive agent such as gabapentin or pregabalin.

Metoclopramide has been the only prokinetic utilized in the United States since its approval by the FDA in the 1980s for treating GERD and diabetic gastroparesis. Domperidone has also been studied for the treatment of FD. To date, 6 meta-analyses describing the effect of domperidone in FD have been published.11 All are based on relatively small studies, and their numbers demonstrate superiority of domperidone over placebo in the treatment of FD. The analyzed studies, using a domperidone dose of 30-60 mg/day for a total of 2-6 weeks, demonstrated a treatment effect of 30% to 63%. These data support a treatment benefit for domperidone in FD. However, an important unresolved issue is the short duration of treatment, since FD is a chronic condition. In retrospect, these studies were addressing the PDS subset of patients classified by Rome criteria with primarily dysmotility-like symptoms, and this subgroup should be considered when contemplating initiating domperidone for functional dyspepsia.

Gastroparesis

Gastroparesis is a syndrome characterized by anorexia, bloating, early satiety, abdominal pain and vomiting, associated with objective evidence of delayed gastric emptying in the absence of gastric obstruction. A major cause of gastroparesis is diabetes, which may account for 30% to 50% of gastroparetic patients. The idiopathic variety is also important and of equivalent frequency, and together these two forms constitute more than 80% of all cases of gastroparesis.12

Although delayed gastric emptying is considered the cardinal finding in gastroparesis, it is clear that the pathogenesis of symptoms is complex and diffuse, ranging from impaired fundic accommodation related to impaired gastric inhibitory neurons,13 to neuropathic changes involving the myenteric plexus,14 sensory nerve dysfunction15 and gastric dysrhythmias.16

The evidence for using prokinetics is based on trials performed two or three decades ago, which in some cases may not have been rigorously conducted with regard to patient numbers and population assessment.17 The dopamine D2-receptor antagonist metoclopramide is the only US FDA-approved medication for the treatment of gastroparesis, and the recommended duration of treatment is no longer than 12 weeks.18 Reported serious adverse events such as tardive dyskinesia, dystonias and parkinsonism are always a “cloud” over the head of metoclopramide when balancing its efficacy. For the more than 40% of patients who cannot tolerate or do not respond to metoclopramide, domperidone should be the next agent utilized.

The importance of domperidone in the management of gastroparesis is undeniable. The American Gastroenterological Association technical review on the diagnosis and treatment of gastroparesis, as well as the American Gastroenterological Association medical position statement on the diagnosis and treatment of gastroparesis, recognize domperidone as one of the most important treatment options available for gastroparesis.19

An early study performed in 1997, involving 17 patients with gastroparesis and symptoms of nausea, vomiting, abdominal pain and bloating, utilized domperidone 20 mg q.i.d. for an average of 23 months. Results showed a decrease in hospital admissions compared with before domperidone therapy (p <0.05), improvement in gastric emptying (p <0.05) and enhanced quality of life in 88% of patients. More recently, a multicenter, two-phase withdrawal study involving 208 insulin-dependent diabetic patients showed that domperidone is effective in treating moderate to severe upper GI symptoms independent of gastric emptying status.20 This study also investigated two health-related quality of life measures, physical and mental components. Results at the end of the single-blind phase indicated that patients with a symptomatic response to domperidone also experienced significant improvements in health-related quality of life from baseline, as measured by physical and mental component scores. Patients continuing on domperidone during the double-blind withdrawal phase maintained their clinical and health-related quality of life gains. In contrast, those in the placebo group experienced more gastroparetic symptoms and a decline in quality of life. Table 1 summarizes the published clinical trials with domperidone. However, trials investigating domperidone have been generally underpowered and often uncontrolled, so results must be interpreted with this caveat.

As the IND protocol is mainly utilized by gastrointestinal specialists in centers with institutional review boards, patients who do not have access to such centers may not be able to obtain domperidone. Conversely, physicians at a university research facility may not have enough patients for a large single-center report. This situation can create a mismatch between patients in need of treatment and the availability of prescribing physicians who have access to an IND.33 Domperidone is also available through compounding pharmacies in the USA or through access to European pharmacies, although this is not sanctioned as a “standard of care”.

The European dose schedule, utilized for more than 30 years, recommends dosing of 10 mg t.i.d. or up to q.i.d. On the other hand, the clinical guidelines on the management of gastroparesis published in 2013,17 focusing on practice in the United States, recommend a starting dose of 10 mg q.i.d., increasing up to 20 mg t.i.d. before meals and at bedtime. A recent study by Ortiz et al. showed that domperidone at very high doses of 80-120 mg/day was well tolerated by the majority of enrolled patients and very efficacious for the treatment of gastroparesis and nausea and vomiting, resulting in a 75% symptom improvement from baseline.1

Gastroesophageal Reflux Disease

Gastroesophageal reflux disease (GERD) is defined as a pathological condition in which the amount of gastric contents refluxing into the esophagus exceeds the normal limit. Typical symptoms are heartburn and regurgitation, but the spectrum ranges from no symptoms at all to atypical chest pain, dysphagia, hoarseness and odynophagia.34 Complications of chronic GERD include esophageal mucosal damage, such as Barrett’s esophagus or stricture.

Despite the wide spectrum of abnormalities, three primary goals are applicable to all patients with GERD: 1) alleviation of symptoms, 2) resolution and prevention of complications, 3) prevention of recurrence.35

Proton pump inhibitors (PPIs) are the most effective agents for treating GERD when compared to antacids, prokinetics, and H2 receptor blockers. They have few adverse effects and are well tolerated for long-term use. Given the superior efficacy of PPIs, treatment of GERD should start with an 8-week course of a PPI. However, PPI monotherapy does not completely resolve symptoms in all cases of GERD; in this setting, combination therapy with a prokinetic will further improve symptoms for some patients.36 Patients with accompanying “dyspepsia-like” symptoms in addition to GERD are the most responsive to domperidone.

Motility modulating drugs exert their therapeutic effect in GERD by increasing lower esophageal sphincter pressure, enhancing peristaltic contractions, improving esophageal clearance, and accelerating gastric emptying.37

A randomized, double blind clinical trial by Ndraha evaluated the combination of a PPI with domperidone in the treatment of GERD.38 Sixty patients were enrolled and separated into two groups: group A (30 patients) received omeprazole 20 mg b.i.d. and domperidone 10 mg t.i.d. for 2 weeks, while group B (30 patients) received omeprazole 20 mg b.i.d. alone; symptoms were assessed after 2 weeks of treatment using the Frequency Scale for the Symptoms of GERD (FSSG).36 The FSSG score in the omeprazole + domperidone group after treatment (19.3 ± 11.3) was significantly lower than before treatment (26.7 ± 8.9, p<0.001), and the improvement was also significantly greater than in the omeprazole group (from 23.9 ± 7.3 to 19.3 ± 7.9, p<0.001). The mean improvement score in group A was 7.5 ± 5.9, while in group B it was 4.6 ± 3.3, a statistically significant difference (p=0.02). The author concluded that the combination of omeprazole with domperidone in highly symptomatic patients with GERD is superior to omeprazole monotherapy.
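As a rough check of the reported between-group comparison, the Welch t statistic can be recomputed from the published summary statistics (means, standard deviations, n = 30 per group). This is only a sketch: it uses a normal approximation for the two-sided p-value, and the exact test used in the trial is not stated.

```python
from math import sqrt
from statistics import NormalDist

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch t statistic and a normal-approximation two-sided p-value
    computed from group means, standard deviations, and sample sizes."""
    se = sqrt(s1**2 / n1 + s2**2 / n2)
    t = (m1 - m2) / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Mean symptom-score improvements reported by Ndraha (n = 30 per group)
t, p = welch_t_from_summary(7.5, 5.9, 30, 4.6, 3.3, 30)
print(f"t = {t:.2f}, p ~ {p:.3f}")  # p lands near the reported 0.02
```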

However, the true clinical efficacy of domperidone has not been confirmed, as data suggest ineffective healing of esophagitis despite improved symptoms. Ren et al. demonstrated in a recent meta-analysis39 that combined therapy with a PPI and a prokinetic was associated with greater symptomatic relief and a reduction in the number of reflux episodes, but there was no significant effect on 24-hour esophageal acid exposure time or on endoscopically demonstrated healing of esophagitis. Their conclusion was that combination therapy can improve the quality of life of patients with GERD.

Nausea and Vomiting

The antiemetic effect of domperidone is mediated by inhibition of D2 receptor activation in the area postrema and chemoreceptor trigger zone, at the base of the fourth ventricle but outside the blood-brain barrier.40 (Figure 1)

The antiemetic properties of domperidone are well documented. In patients experiencing postoperative nausea and vomiting, IV domperidone was more effective than placebo.41 Neither domperidone nor metoclopramide was more effective than placebo when given prophylactically before induction or near the end of anesthesia for preventing nausea and vomiting.42

Nausea and vomiting associated with chemotherapy has been effectively controlled by domperidone when administered immediately before the cytotoxic regimen. It is more effective than placebo and compares favorably with metoclopramide in controlling vomiting caused by moderately emetic chemotherapeutic agents.43,44 However, this phase of domperidone’s career relied on intravenous administration, which is no longer approved or in use.

Domperidone has also been used to treat nausea and vomiting associated with other conditions including dysmenorrhea, head injury and intracranial lesions, hemodialysis, radiotherapy and migraine headaches. Most of these studies were open trials but did show some efficacy for these indications.3

The management of patients who need anti-Parkinsonian medications and other centrally acting dopamine agonists is often limited by the side effects of nausea, vomiting, anorexia and postprandial fullness. Metoclopramide is contraindicated in Parkinson’s disease because, by crossing the blood-brain barrier, it antagonizes the effects of levodopa therapy. Domperidone, on the other hand, is very useful in this setting because it inhibits peripheral dopaminergic activity without blocking central dopamine effects. Studies have shown that oral domperidone, at a dose of 60-150 mg/day, decreases the incidence of nausea and vomiting in patients treated with bromocriptine, allowing them to tolerate higher doses of bromocriptine.45 Domperidone also improved gastric emptying and alleviated GI symptoms including nausea, vomiting, anorexia and abdominal bloating induced by levodopa. The beneficial effects of the anti-Parkinsonian drugs were not inhibited by domperidone, and no extrapyramidal side effects were reported with its use.46

Small Bowel Capsule Endoscopy

Small bowel capsule endoscopy (SBCE) was introduced in 2001 and has since revolutionized the diagnostic workup for small bowel diseases.47 One of the major limitations of SBCE is the high percentage of cases in which the capsule does not reach the cecum by the end of the recording period, owing to exhaustion of the capsule’s battery life,48 as reported in up to 30% of procedures. It has been demonstrated that one of the risk factors for an incomplete SBCE is a long gastric transit time (GTT).49 Hence, there is a rationale for administering prokinetics prior to the procedure to decrease GTT and thereby potentially increase the rate of complete small bowel examinations.

Different prokinetics have been used in an attempt to increase the completion rate (CR) and the diagnostic yield (DY) of SBCE. Metoclopramide remains the most commonly administered prokinetic. Domperidone has not been widely used in SBCE and the evidence base is limited.50,51 A retrospective study by Koulaouzidis et al.52 analyzed the effect on CR, GTT and DY of administering 10 mg of domperidone in liquid solution with the capsule ingestion, compared to no domperidone. Results showed a higher CR of 91.1% in the domperidone group vs. 84.3% in the other group (p=0.04). The GTT was reduced in the domperidone group, but the difference was not statistically significant compared to the non-domperidone group. Interestingly, the use of domperidone was associated with reduced DY for vascular, inflammatory and mass lesions. The study demonstrated that the use of domperidone increases the CR of SBCE but does not increase DY, most likely owing to an effect of domperidone on interpretation of the capsule images.

A prospective study by Westerhof et al.53 analyzed the CR in 649 patients undergoing SBCE; 410 patients received domperidone 10 mg and 239 received erythromycin 250 mg 1 hour before the procedure. Results showed that CR was 86% after erythromycin vs. 80% after domperidone (p=0.03); GTT was lower after erythromycin compared to domperidone (13 minutes vs. 22 minutes, p<0.001); however, there was no difference in DY (50% vs. 44%, respectively, p=0.18). The authors concluded that the administration of erythromycin prior to SBCE increased the CR compared to domperidone; this may be explained by the fact that domperidone’s motility effects do not extend beyond the duodenum, whereas erythromycin induces diffuse small bowel motility effects.

SAFETY AND TOXICITY
Cardiac Toxicity

Domperidone is regarded as having properties similar to class III antiarrhythmic agents: it prolongs the action potential through blockade of distinct voltage-dependent potassium channels, thus delaying cardiac repolarization and prolonging the QT interval, which can predispose to life-threatening ventricular arrhythmias such as torsades de pointes. The criterion for QT interval prolongation on an electrocardiogram (ECG) is >450 ms in males and >470 ms in females, as longer QT intervals are found in women than in men.54 Osborn et al. reviewed the effect of intravenous domperidone in four women, of whom two had episodes of ventricular arrhythmias. Of note, the underlying cause of the ventricular arrhythmia was attributed to hypokalemia.55 The intravenous form of domperidone no longer exists.
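The sex-specific cutoffs above are applied to the rate-corrected QT interval. A minimal sketch of that correction, assuming Bazett’s formula (QTc = QT/√RR), which the text does not itself specify:

```python
from math import sqrt

# Sex-specific QTc prolongation cutoffs cited in the text (ms)
QTC_LIMIT_MS = {"male": 450, "female": 470}

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Rate-corrected QT interval via Bazett's formula: QTc = QT / sqrt(RR in s)."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / sqrt(rr_s)

def is_prolonged(qt_ms, heart_rate_bpm, sex):
    """Flag QTc prolongation against the sex-specific cutoff."""
    return qtc_bazett(qt_ms, heart_rate_bpm) > QTC_LIMIT_MS[sex]

# Example: a measured QT of 400 ms at 75 bpm (RR = 0.8 s) corrects to ~447 ms,
# just below the 450 ms cutoff for males.
print(round(qtc_bazett(400, 75)))
```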

Based on questions of cardiac safety in Europe, there have been some recommendations for dosing and monitoring. We have therefore performed a comprehensive literature review to analyze the concerns about cardiac events.

A Dutch case-control database study involving 1366 patients assessed the association between sudden cardiac death or sudden ventricular arrhythmia and domperidone use.56 A total of 1366 cases (62 involving sudden ventricular arrhythmia and 1304 sudden cardiac deaths) were matched to 14,114 controls by index date, sex, age, and type of practice. None of the patients who experienced sudden ventricular arrhythmia were using domperidone at the time of the event. The multivariable analysis controlled for QTc-prolonging drugs and medical conditions, smoking, alcohol use and CYP3A4 drug interactions. Among the 1304 patients with sudden cardiac death, only 10 were using domperidone at the time of the event, which translates to a statistically non-significant increased risk of sudden cardiac death (odds ratio [OR] 1.99, 95% confidence interval [CI] 0.80-4.96). When these 10 patients were further stratified by daily dose (<30 mg, 30 mg, and >30 mg), the multivariable analysis showed an increased risk of sudden cardiac death for patients taking more than 30 mg per day (OR 11.4, 95% CI 1.99-65.2).
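For readers who want to see where a figure like OR 1.99 comes from, the sketch below computes an unadjusted odds ratio with a Woolf (log) confidence interval from a 2×2 table. The counts are hypothetical (the text gives only the 10 exposed cases and the 1304/14,114 group sizes), and the study’s matched, multivariable analysis would yield a different interval.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-scale) confidence interval for a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (NOT the study's data):
# 10 of 1304 cases vs. an assumed 55 of 14,114 controls exposed to domperidone
or_, lo, hi = odds_ratio_ci(10, 1294, 55, 14059)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```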

A very recently published study by Ortiz et al. did not find an association between the use of very high doses of domperidone (more than three times the European dose schedule) and an increased risk of cardiovascular events or significant changes in the QT interval.1 That study included 64 patients who were taking domperidone at doses of 80-120 mg/day for a mean duration of 8 months, some for as long as 4 years. Results showed that 73% of the patients had symptomatic improvement in nausea and vomiting; 15.6% of patients had an increase in QTc at follow-up but no cardiovascular events were reported; 5% had palpitations without ECG changes; and there were no sudden cardiac deaths.

Another relevant piece of information is that 2,000,000 prescriptions for domperidone were recorded in Canada in 2013, and between April 2003 and March 2010, 122,333 elderly patients in Ontario, Canada had domperidone on their prescription list. Despite this large number of prescriptions and the available warnings regarding cardiac side effects of domperidone, Health Canada had received only 18 (0.9 per 10,000) reports of serious adverse cardiac events and no deaths. In many of these patients, other risk factors for arrhythmias were also present.57,58

Moreover, to keep this in perspective, we know that other possible therapies for gastroparesis, specifically erythromycin, azithromycin, ondansetron and promethazine also have cardiac side effects.

Our conclusion from an extensive review of the USA experience is that domperidone has the potential for cardiac side effects, based on concerns for QT prolongation and an increased risk of ventricular arrhythmias, but studies do not substantiate cardiac adverse events in patients receiving oral domperidone, even at very high doses.

Endocrine Effects

Thyroid-stimulating hormone (TSH) and prolactin increase after domperidone administration, but there is no effect on secretion of cortisol, aldosterone or 18-OHB.59 This indicates that domperidone acts directly on the anterior pituitary rather than through a central hypothalamic mechanism. Unlike metoclopramide, domperidone is lipophobic, so its effects are not mediated through central dopaminergic receptors; the pituitary lies outside the blood-brain barrier, where domperidone can induce these hormonal effects. Domperidone’s effects on TSH have no clinical significance.

Prolactin is increased in everyone taking domperidone. Comparative studies have reported similar degrees of increased serum prolactin concentrations in healthy subjects receiving domperidone or oral metoclopramide.60 However, only a minority of patients are symptomatic: gynecomastia and nipple tenderness occur in 10% and galactorrhea in 5%. There is no association with prolactinomas or an increased risk of breast cancer. Oligomenorrhea and, rarely, amenorrhea are also observed, although fertility remains unchanged. This is relevant since 80% of patients with gastroparesis are female; these side effects are regarded as more inconvenient than meaningful, and patients who benefit from domperidone are generally willing to accept them.61

Drug-drug Interactions

Dopamine antagonists should not be given in conjunction with monoamine oxidase inhibitors. Stimulation of D2 receptors causes inhibition of norepinephrine release from presynaptic nerve terminals. Antagonists of D2 receptors cause decreased inhibitory control, facilitating the release of norepinephrine.62

The main concern is the combination of domperidone with cytochrome CYP3A4 inhibitors. This enzyme is the main metabolic pathway for domperidone, so medications that interfere with it must be avoided. Inhibitors of CYP3A4 can block the metabolism of domperidone, resulting in increased plasma concentrations and a subsequently increased risk of cardiovascular and endocrine side effects. These medications include azole antifungals (ketoconazole, fluconazole),1 protease inhibitors, macrolide antibiotics, calcium channel blockers, propranolol, metoprolol, HMG-CoA reductase inhibitors and newer anticoagulants such as apixaban.
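The interaction list above lends itself to a simple medication screen. The sketch below is purely illustrative: the inhibitor set contains a few drugs named in the text plus assumed examples (clarithromycin, ritonavir, verapamil, diltiazem), and it is nowhere near a complete interaction database.

```python
# Illustrative, non-exhaustive set of CYP3A4 inhibitors; a real check should
# rely on a maintained drug-interaction database, not a hand-written list.
CYP3A4_INHIBITORS = {
    "ketoconazole", "fluconazole",      # azole antifungals (named in the text)
    "erythromycin", "clarithromycin",   # macrolide antibiotics
    "ritonavir",                        # protease inhibitor (assumed example)
    "verapamil", "diltiazem",           # calcium channel blockers (assumed examples)
}

def flag_cyp3a4_interactions(medication_list):
    """Return the patient's medications that appear in the inhibitor set."""
    return sorted(m for m in (x.lower() for x in medication_list)
                  if m in CYP3A4_INHIBITORS)

print(flag_cyp3a4_interactions(["Metformin", "Ketoconazole", "Lisinopril"]))
# -> ['ketoconazole']
```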

TAKE HOME MESSAGES

Domperidone has been available for the treatment of gastrointestinal motility disorders throughout the world since the 1970s. Unfortunately, domperidone is not easily available in the United States, since the FDA withheld approval in 1989 due to borderline statistical significance related to sample size in the controlled clinical trials; it is mainly available through an IND process. Its efficacy relies on an antiemetic effect through blockade of D2 receptors in the chemoreceptor trigger zone, outside the blood-brain barrier, as well as a prokinetic effect through blockade of peripheral dopamine receptors in gastric smooth muscle. It has an effective role in the treatment of gastric motility disorders, especially in patients who do not respond to diet modifications, or who develop side effects or have an inadequate response to metoclopramide. Interpreting the clinical significance of the concerns raised in the literature regarding cardiotoxicity of domperidone requires ongoing vigilance. While there are reports of QTc interval prolongation and cardiovascular events related to the use of low dose domperidone, most studies and clinical experiences do not confirm this association. Moreover, data with prolonged dosing at 3-fold the European dose show no evidence of ventricular arrhythmias or cardiac death.

A dilemma has been created by the statements made by authorities in some countries regarding the cardiotoxicity of domperidone. However, our extensive review does not support the conclusions made by these international agencies. At the present time, domperidone is an extremely effective treatment for gastroparesis and other disorders with nausea and vomiting, and has an acceptable safety profile and risk-benefit ratio.

Our recommendations and guidelines for physicians who plan on initiating domperidone therapy in their practices are the following: 1) confirm the patient has a condition that would benefit from antiemetic and gastric prokinetic therapy; 2) document no response to, or the presence of side effects from, metoclopramide and other anti-nausea/vomiting medications; 3) confirm there is no QTc prolongation; 4) start at a dose of 20 mg q.i.d., 30 minutes before meals and before bedtime, to have a meaningful effect, and if necessary increase gradually until a therapeutic effect is achieved, sometimes requiring 120 mg/day; 5) treat for a minimum of 3 months at the recommended doses before drawing conclusions about efficacy; 6) monitor other drug use to avoid CYP3A4 inhibitors; 7) monitor serum potassium and magnesium levels; 8) obtain an ECG every 6 months; 9) discontinue domperidone if the QTc interval becomes prolonged; 10) inquire about symptoms such as palpitations or chest pain.

Domperidone, although not Dom Perignon, is indeed the “champagne” of the prokinetic/antiemetic drug world and we hope this article will allow you to appreciate its clinical indications, efficacy, and most of all, safety, so your patients can benefit by instituting this agent into your practice. So “raise your glasses for a toast”, you have now acquired a new knowledge base for your practice.

