A SPECIAL ARTICLE

Chronic Hepatitis C Assessment and Treatment: Algorithms for Primary Care Providers


Susan Ferguson, PA-C, Physician Assistant, Santa Rosa Gastroenterology, Santa Rosa Beach, FL.


In the past, chronic hepatitis C (CHC) was challenging to manage because treatment protocols were complicated and medications had severe side effects. Primary care providers are often the first medical professionals to screen for CHC, but patients are frequently referred to specialists after hepatitis C is diagnosed. Moreover, health care providers in safety-net clinics should be screening their patients, as low-income individuals are disproportionately affected by CHC. The new generation of CHC medications makes treatment more feasible and cost-effective. The assessment and treatment protocols presented here were developed to serve the needs of primary care providers in private practice as well as safety-net clinics.

INTRODUCTION
Chronic hepatitis C (CHC) affects 3.4 to 6 million Americans.1 The prevalence of CHC is four times higher in populations below the federal poverty level than in those with incomes at least twice the federal poverty level.2 Therefore, clinics that serve low-income patients should make screening for and treatment of CHC a high priority.2 Moreover, cure rates for CHC increased when medical providers within safety-net clinics followed a comprehensive care plan for treating patients with CHC.1
The new CHC therapies have milder side-effect profiles and can be prescribed to a broader range of patients.3 The advent of pangenotypic medications for CHC enables primary care providers, whether in safety-net clinics or private practice, to effectively treat a larger cohort of patients.

Moreover, assessment and treatment algorithms can aid primary care providers in determining when it is appropriate to refer patients to specialists for CHC treatment.

Epidemiology
In 2016, there were 2,967 reported cases of acute hepatitis C in the United States.4 The overall incidence rate for new cases was 1.0 per 100,000, an increase from 0.8 cases per 100,000 in 2015.4 Furthermore, the actual number of cases is estimated to be 13.9 times greater than reported, or roughly 41,000 acute infections in 2016.4

There are more than 71 million people worldwide who have CHC.5 Approximately 70% of individuals with CHC were born between 1945 and 1965.6 In the United States, the prevalence of CHC in males is approximately double that in females.6

Etiology
Hepatitis C virus (HCV) is the most common blood-borne infection in the United States.6 Common exposures to the virus include injection drug use, needle sticks, transfusion of infected blood products before 1992, and maternal transmission during birth.4 The virus can infrequently be transmitted through the use of personal hygiene products belonging to infected individuals, unregulated tattoos, and sexual intercourse with infected individuals.4 There is a common misconception that CHC is prevalent in the “baby boomer” generation as a result of high-risk behaviors, such as tattooing, sexual practices, and injection drug use.7 On the contrary, members of the high-risk group born between 1945 and 1965 were approximately 5 years old when the peak of genotype 1a HCV transmission occurred in 1950.7 That peak likely resulted from medical procedures performed during and after World War II, and the transition to disposable plastic syringes in the 1950s through the 1960s correlated with a decline in HCV incidence rates.7 Recreational injection drug use from the 1920s through the late 1960s was also contributory.7

Pathophysiology
Hepatitis C is transmitted parenterally through blood and body fluids.8 Acute infection may result in jaundice, which occurs in approximately 20% of individuals.8 The average incubation period for HCV is between six and seven weeks. The incubation time can be difficult to assess because infected individuals are often asymptomatic and there is no precise serological marker of early infection.9 Spontaneous resolution of the virus occurs in approximately 15% to 45% of infected individuals, and clearance usually happens within six months of contracting the virus.8 Moreover, clearance of HCV differs by sex, with a 40% spontaneous clearance rate in females compared with 19% in males.6 Spontaneous clearance is influenced by genetic factors, including the IL28B genotype and the DQB1*0301 allele of the class II major histocompatibility complex.10

Interestingly, patients who present with clinical symptoms of acute hepatitis C demonstrate higher rates of spontaneous clearance of the virus.10 For individuals who do not clear the virus, the annual rate of fibrosis progression is between 0.1 and 0.2 stages per year.10 Approximately 20% to 25% of CHC cases progress to cirrhosis.8 Of every 100 individuals infected with CHC, 10 to 20 will develop cirrhosis, and those with cirrhosis carry a 1% to 5% annual risk of developing hepatocellular carcinoma.4 Several factors drive the progression from cirrhosis to hepatocellular carcinoma: specific viral proteins act on cell-signaling pathways and inhibit tumor suppressor genes, signaling pathways that up-regulate cell growth and division are activated, and loss of the tumor suppressors p53 and retinoblastoma increases carcinogenesis.11
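
To put those progression rates in perspective, a back-of-the-envelope sketch (assuming the METAVIR scale, where cirrhosis is stage F4, and a constant progression rate; both are simplifications, as progression often accelerates with age and cofactors):

```python
# Rough years from infection (F0) to cirrhosis (F4) at the fibrosis
# progression rates cited above; assumes a constant rate for illustration.
STAGES_TO_CIRRHOSIS = 4  # METAVIR F0 -> F4

for rate_stages_per_year in (0.1, 0.2):
    years = STAGES_TO_CIRRHOSIS / rate_stages_per_year
    print(f"{rate_stages_per_year} stages/year -> ~{years:.0f} years to cirrhosis")
# 0.1 stages/year -> ~40 years; 0.2 stages/year -> ~20 years
```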

Signs and Symptoms
Most individuals with CHC are asymptomatic or have non-specific symptoms, such as lethargy or depression.4 However, CHC is linked to extrahepatic disorders, including arthritis-like pain and renal, cardiovascular, neuropsychiatric, and gastrointestinal conditions.12 Many individuals with CHC will eventually present with liver disease. Chronic hepatitis C was the known etiology for 17% of the reported cases of hepatocellular carcinoma in the United States from 2000 to 2010.13 Patients with advanced liver disease may manifest symptoms such as ascites, jaundice, variceal bleeding, altered mental status, and pruritus.14

Diagnosis
Serology

The presence of HCV antibody (anti-HCV) can indicate active HCV infection, either acute or chronic; past infection that resolved with spontaneous clearance or treatment; or a false positive.15 Therefore, the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) recommend that serological testing for CHC include both anti-HCV and nucleic acid testing for the presence of HCV RNA.16,17 The American Gastroenterological Association (AGA) recommends testing for anti-HCV, HCV RNA, HCV genotype, a comprehensive metabolic panel (CMP), INR, hepatitis B studies, and HIV antibody.14
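
The two-step testing logic above reduces to a small decision rule. A minimal sketch (the function name and result strings are illustrative, not taken from any guideline):

```python
from typing import Optional

def interpret_hcv_serology(anti_hcv_positive: bool,
                           hcv_rna_detected: Optional[bool]) -> str:
    """Two-step logic described above: antibody first, then reflex RNA testing."""
    if not anti_hcv_positive:
        return "no serologic evidence of HCV infection"
    if hcv_rna_detected is None:
        return "anti-HCV positive: reflex to HCV RNA nucleic acid testing"
    if hcv_rna_detected:
        return "current HCV infection: proceed to genotype, CMP, INR, HBV and HIV testing"
    return "anti-HCV positive, RNA undetected: resolved infection or false-positive antibody"

print(interpret_hcv_serology(True, None))
print(interpret_hcv_serology(True, True))
```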

Assessment of liver disease
The 2016 WHO guidelines state that assessment for fibrosis and cirrhosis in CHC-infected individuals is necessary to properly stratify individuals into appropriate treatment protocols.17 The American Association for the Study of Liver Diseases (AASLD) and the Infectious Diseases Society of America (IDSA) collectively composed guidelines for evaluation of advanced liver disease, including fibrosis and cirrhosis. Liver disease can be assessed with liver biopsy, imaging, and/or non-invasive markers.18 The AGA recommends right upper quadrant ultrasound for initial imaging before treatment.14 Fibrosis staging can also be assessed with newer non-invasive technologies, such as transient elastography and magnetic resonance elastography.14 However, the aspartate aminotransferase to platelet ratio index (APRI) or fibrosis-4 (FIB-4) score can be utilized when medical resources are scarce.14,17 Both the APRI and FIB-4 tests have low and high cutoff values that can be used to assess the severity of fibrosis, and the APRI scoring system can also be used to estimate a patient’s probability of having cirrhosis.17 These cutoff values aid providers in the assessment of liver disease secondary to HCV and can be used to stratify patients for treatment.17 An APRI score below the low cutoff value of 0.5 indicates that a patient has only an 18% probability of having advanced fibrosis, and a FIB-4 score lower than 1.45 indicates an 11% probability of having advanced fibrosis.17 Moreover, the WHO guidelines state that treatment can be deferred in patients who score below the low cutoff values on APRI and FIB-4 tests.17 However, patients with an APRI score above the high cutoff value of 2 have a 94% probability of having cirrhosis, and these patients would benefit greatly from treatment.17 Patients whose APRI or FIB-4 scores fall between the low and high cutoff values can either be treated or monitored, depending on the availability of medications.17
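
Both indices are simple arithmetic on routine labs. A minimal sketch using the standard published formulas (the formulas themselves are not spelled out above) and the cutoffs cited in the text; the FIB-4 high cutoff of 3.25 is the commonly used value and is an assumption here:

```python
import math

def apri(ast_u_per_l: float, ast_uln: float, platelets_10e9_per_l: float) -> float:
    """APRI = (AST / upper limit of normal) / platelet count (10^9/L) x 100."""
    return (ast_u_per_l / ast_uln) / platelets_10e9_per_l * 100

def fib4(age_years: float, ast: float, alt: float, platelets_10e9_per_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast) / (platelets_10e9_per_l * math.sqrt(alt))

def stratify(apri_score: float, fib4_score: float) -> str:
    # Cutoffs cited in the text: APRI 0.5 (low) and 2.0 (high); FIB-4 1.45 (low).
    # 3.25 is the commonly used FIB-4 high cutoff (assumption, not stated above).
    if apri_score < 0.5 and fib4_score < 1.45:
        return "low probability of advanced fibrosis; treatment may be deferred"
    if apri_score > 2.0 or fib4_score > 3.25:
        return "high probability of cirrhosis; prioritize treatment"
    return "indeterminate; treat or monitor depending on resources"

# Example: AST 80 U/L (ULN 40), ALT 90 U/L, platelets 110 x 10^9/L, age 55
print(round(apri(80, 40, 110), 2))       # ~1.82
print(round(fib4(55, 80, 90, 110), 2))   # ~4.22
print(stratify(apri(80, 40, 110), fib4(55, 80, 90, 110)))
```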

Treatment Options for Chronic Hepatitis C
Overview
There are six major genotypes of HCV (1-6), and approximately 70% of all cases of CHC are genotype 1.18 Prior treatment for CHC utilized interferon alfa-2a plus ribavirin, but these medications had severe side effects and achieved cure in only about 40% to 50% of patients.19 In 2009, the first direct-acting antivirals (DAAs), which directly target HCV replication, were approved for the treatment of CHC.20 By 2017, there were six different DAA combination therapies available for specific genotypes of CHC.19 The advent of pangenotypic medications for CHC almost obviates the need to genotype the virus; however, in certain CHC patients, viral genotyping is still necessary. Additionally, cirrhotic patients should be assessed for the severity of cirrhosis using the Child-Pugh score.21 While glecaprevir/pibrentasvir is pangenotypic for non-cirrhotic, treatment-naive patients, genotyping is still necessary for treatment-experienced and/or cirrhotic patients because the duration of therapy must be adjusted based on these parameters.22 Conversely, sofosbuvir/velpatasvir is a pangenotypic medication that can be utilized for non-cirrhotic or Child-Pugh A cirrhotic patients who are treatment-naive or treatment-experienced.23 Although there are other medicinal treatments for CHC, this review is limited to the pangenotypic class of CHC medications, as they enable providers to more easily treat patients at the primary care level.
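
As a toy illustration of the genotyping logic in this overview (encoding only the statements above, not full prescribing rules; regimen names are the only real-world inputs):

```python
def genotyping_needed(regimen: str, treatment_naive: bool, cirrhotic: bool) -> bool:
    """Toy encoding of the overview above -- not prescribing guidance.

    - GLE/PIB is pangenotypic only for non-cirrhotic, treatment-naive patients;
      genotype still drives duration in treatment-experienced and/or cirrhotic patients.
    - SOF/VEL is described above as usable independent of genotype for
      non-cirrhotic or Child-Pugh A patients, treatment-naive or -experienced.
    """
    if regimen == "GLE/PIB":
        return not (treatment_naive and not cirrhotic)
    if regimen == "SOF/VEL":
        return False
    raise ValueError("regimen outside the scope of this review")

print(genotyping_needed("GLE/PIB", treatment_naive=True, cirrhotic=False))  # False
print(genotyping_needed("GLE/PIB", treatment_naive=False, cirrhotic=True))  # True
```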

NS5B inhibitors
The AASLD and IDSA collectively recommend sofosbuvir (SOF) for treatment of all genotypes of HCV.18 Sofosbuvir is administered orally, once a day.19 It inhibits the HCV NS5B RNA-dependent RNA polymerase by incorporating into the nascent HCV RNA and terminating the chain.24 Sofosbuvir demonstrated an advantage over the first-generation DAA class of HCV medications because of its high genetic barrier to viral resistance.18 Furthermore, sustained virologic response (SVR) rates improved when SOF was used in combination therapy.25 There is a low risk of medication interactions with SOF because it does not inhibit or induce the cytochrome P450 system; however, amiodarone is contraindicated with SOF because of the risk of fatal bradycardia.19 Additionally, because SOF is excreted renally, it should be used with caution in patients with renal disease and should not be given to patients with glomerular filtration rates (GFR) less than 30 mL/min/1.73 m².19

NS5A inhibitors
The HCV NS5A protein has numerous functions, including roles in viral replication, assembly, and interactions with cellular processes.26 Velpatasvir (VEL), an NS5A inhibitor used in conjunction with SOF, is efficacious against all genotypes of HCV.19,20 Because less than 1% of VEL is renally excreted, no dosing adjustments are required for renal dysfunction; however, caution should still be taken because VEL is given in conjunction with SOF.19 Concomitant administration of rifampin, efavirenz, or St. John’s Wort may decrease levels of SOF/VEL, so these agents should not be taken concurrently.19 SOF/VEL may also increase digoxin levels, so digoxin should be monitored closely.19 Additionally, proton pump inhibitors (PPIs) should be discontinued during therapy with SOF/VEL because VEL concentrations decrease as gastric pH increases.23


NS3/4A protease inhibitor combination therapy
The AASLD and IDSA guidelines recommend the combination of glecaprevir (GLE) and pibrentasvir (PIB) for all genotypes of HCV.18 Glecaprevir is a second-generation NS3/4A protease inhibitor, and PIB is a second-generation NS5A inhibitor.27 The GLE/PIB combination regimen has a high genetic barrier to resistance compared with first-generation medications27 and is excreted through the biliary system with minimal renal excretion.28 Moreover, GLE/PIB is designed for once-daily dosing, and treatment-naive patients without cirrhosis need take the medication for only 8 weeks.22 GLE/PIB is not recommended for individuals with Child-Pugh B cirrhosis and is contraindicated in patients with Child-Pugh C cirrhosis.22 Additionally, GLE/PIB is contraindicated with concurrent use of rifampin or atazanavir.22

Challenges of Treating Chronic Hepatitis C Co-infection
Co-infection with hepatitis B virus (HBV) and/or human immunodeficiency virus (HIV) is associated with a worse prognosis in CHC.18 Additionally, HBV can reactivate after HCV therapy is initiated in patients co-infected with hepatitis B and C.18 Therefore, the WHO recommends that individuals who test positive for the hepatitis B surface antigen (HBsAg) be treated for HBV before they are treated for HCV.17

Concomitant alcohol or drug use
The WHO guidelines recommend performing alcohol and illicit drug intake assessments before initiating treatment.17 Moreover, the U.S. Department of Veterans Affairs (VA) recommends that patients with histories of substance use or alcohol use disorders be considered for treatment on a case-by-case basis.29 The AASLD and IDSA guidelines contend that there is a lack of evidence supporting the practice of withholding HCV treatment from individuals who use alcohol or illicit drugs.18 Moreover, the VA guidelines discourage providers from disqualifying patients from treatment based solely on the length of abstinence from alcohol or illicit drugs. Patients with active substance use or alcohol use disorders may be treated for CHC, but coordination with substance treatment professionals is imperative for treatment success.29

Treatment experienced patients
Treatment-experienced patients need to be stratified by the type of prior failed therapy, the specific HCV genotype, and the stage of liver disease.18 Once-daily dosing of GLE/PIB can be utilized for treatment-experienced patients; however, the duration of therapy ranges from 8 to 16 weeks based on prior treatment failures, viral genotype, and liver disease state.22 In contrast, SOF/VEL is dosed once daily for 12 weeks, independent of genotype, prior therapy, or liver disease state.23

Patients with cirrhosis
The VA asserts that all CHC-infected individuals are potential candidates for HCV therapy, including individuals with cirrhosis; indeed, patients with advanced liver disease are likely to benefit the most from therapy.29 However, HCV-infected individuals with decompensated cirrhosis need to be evaluated by specialists in the management of complex liver disease before treatment is initiated.18

Treatment Monitoring and Curative Metrics
The WHO guidelines recommend that a complete blood count (CBC), renal function tests, liver function tests, and a quantitative HCV RNA level be obtained at baseline before starting treatment, and that all but the HCV RNA level be repeated at week 4 of treatment.17 The AGA guidelines, however, assert that patients receiving treatment should be tested at week 4 with a CBC, CMP, and quantitative HCV RNA.14 If HCV RNA is detectable at week 4, therapy should continue for 2 more weeks, with a repeat quantitative HCV RNA test at week 6.18 Therapy should be discontinued if there is a greater than 10-fold increase in HCV RNA from baseline at week 6; however, there are no set guidelines on discontinuing therapy when the increase is less than 10-fold.18 Moreover, a 10-fold increase in ALT from baseline at any time during therapy should prompt discontinuation of treatment.18 For patients on 12-week or 16-week regimens, quantitative HCV RNA should be tested at week 12 or week 16, respectively.14 All patients should have quantitative HCV RNA testing at 12 weeks post-therapy.14,17,18
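
Since a 10-fold rise is the same as a 1 log10 IU/mL increase, the stopping rules above reduce to a simple check. A minimal sketch (function names are illustrative; this encodes only the rules cited in this paragraph):

```python
import math

def log10_rise(baseline_iu_ml: float, current_iu_ml: float) -> float:
    """A >10-fold rise in HCV RNA equals a >1 log10 IU/mL increase."""
    return math.log10(current_iu_ml / baseline_iu_ml)

def discontinue_therapy(rna_fold_rise_week6: float, alt_fold_rise: float) -> bool:
    # Stopping rules cited above: >10-fold HCV RNA rise at week 6, or a
    # 10-fold ALT rise from baseline at any time during therapy.
    return rna_fold_rise_week6 > 10 or alt_fold_rise >= 10

# Example: HCV RNA rises from 400 to 8,000 IU/mL (a 20-fold, ~1.3 log10 rise)
print(round(log10_rise(400, 8000), 2))        # 1.3
print(discontinue_therapy(8000 / 400, 1.2))   # True
```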

Sustained virologic response at 12 weeks post-therapy (SVR12), defined as an undetectable HCV RNA level in the blood, is considered the primary endpoint of treatment success.29 Pradat et al.5 reported that DAAs achieved a 90% cure rate, and Evon et al.12 reported that DAA regimens for CHC demonstrated a 95% cure rate. Kosloski et al.28 found that the combination of GLE and PIB achieved a 99% cure rate in compensated cirrhotic patients treated for 12 weeks for all genotypes except genotype 3, for which a rate greater than 96% was achieved. Furthermore, Osawa et al.27 demonstrated a 93% cure rate with GLE and PIB combination therapy in treatment-experienced, non-cirrhotic patients who had experienced prior virologic failure on DAA regimens; the cure rate decreased to 83% in treatment-experienced, cirrhotic patients.27 Treatment-naive patients with or without cirrhosis who were given SOF/VEL had cure rates of 95% to 99%, depending on HCV genotype.19 Moreover, Feld et al.30 demonstrated a 99% cure rate with SOF/VEL in treatment-experienced patients.

Figure 1, “The Assessment Algorithm for Chronic Hepatitis C,” is designed in a decision-tree format for ease of use. The decision tree covers common scenarios that primary care providers may encounter when assessing patients for possible chronic hepatitis C therapy. In rural or resource-scarce areas, specialty providers, including infectious disease specialists, may be limited; in such cases, providers can refer patients to their local health department for further evaluation. However, most areas do have gastroenterologists who can assist with both liver disease and complex treatment protocols.

Figure 2, “The Treatment Algorithm for Chronic Hepatitis C,” utilizes the currently available combination therapy of sofosbuvir/velpatasvir. Sofosbuvir/velpatasvir is relatively easy to use because it can be prescribed for any genotype of the hepatitis C virus. Additionally, the dosing duration is the same regardless of whether a patient is non-cirrhotic or has Child-Pugh A cirrhosis, and regardless of whether the patient is treatment-naive or has failed prior CHC treatment. The side effects are minimal in comparison to prior CHC therapies, and dosing is once a day.

SUMMARY
Patients with cirrhosis are in more urgent need of HCV therapy because their risk of progression to decompensated cirrhosis or hepatocellular carcinoma is elevated compared with non-cirrhotic individuals with CHC.29 However, the AASLD and IDSA guidelines recommend treatment for all individuals with CHC except those with short life expectancies that cannot be reasonably and positively affected by HCV therapy, liver transplantation, or other directed therapy.18

Primary care providers are on the front lines of hepatitis C screening and need the proper tools to manage the disease process. Given the new medications available to treat CHC, treatment protocols have never been easier to administer. One of the main challenges of treating CHC patients at the primary care level is knowing when to refer patients for more specialized care. The assessment algorithm included in this article addresses the situations that may require specialty referral. Additionally, the treatment algorithm utilizes SOF/VEL, which has a low side-effect profile, a less onerous monitoring schedule, and no requirement for viral genotyping. Primary care providers can effectively utilize these algorithms to treat a much larger cohort of CHC patients in both private practices and safety-net clinics.


FROM THE PEDIATRIC LITERATURE

Detergent Pod Ingestions in Children


Concentrated detergent pods are a cause of caustic ingestions in young children because of their similar appearance to candy. The authors of this study attempted to describe both endoscopy and bronchoscopy findings in children injured by detergent pods, reviewing concentrated detergent pod ingestions at a single tertiary children’s hospital over 7 years (2010-2016). This retrospective study included children 0 to 18 years of age who were exposed to caustic agents, including concentrated detergent pods. Children were excluded if they had a pre-existing disease of the esophagus (such as gastroesophageal reflux disease) or a foreign body ingestion concurrent with the caustic ingestion. Patient demographics and esophagogastroduodenoscopy (EGD) and direct laryngoscopy-bronchoscopy findings were reviewed.

In total, 83 caustic ingestions occurred during this time period, and 23 of these cases (28%) were due to detergent pod ingestion. Detergent pod ingestions occurred mainly in males (61%), and most patients had gastrointestinal symptoms after ingestion (91%). The most common laboratory abnormality was metabolic acidosis, which occurred in 39% of patients. Although no gastrointestinal complications such as esophageal stricturing occurred, 13% of patients had respiratory failure requiring intubation and mechanical ventilation. EGD was performed in 91% of patients with detergent pod ingestion, and 30% of these patients had esophageal edema, erythema, or ulcerations. There was a significant association between positive oropharyngeal findings and esophageal damage. Direct laryngoscopy-bronchoscopy was performed in 26% of the cohort, and damage to the upper airway, including epiglottitis and glottic edema, was noted in 67% of those patients, although there was no association between respiratory symptoms and airway findings.

Although this is a small study, the results suggest that EGD may not be necessary in children who undergo accidental concentrated detergent pod ingestion. On the other hand, it appears that respiratory failure is a risk and should be considered when such children present in the emergency room, hospital, or clinic setting.


Singh A, Anderson M, Altaf M. Clinical and endoscopy findings in children with accidental exposure to concentrated detergent pods. Journal of Pediatric Gastroenterology and Nutrition 2019; 68: 824-828.

Fellow’s Corner Submission Guidelines


The Fellow’s Corner is open to Trainees and Residents only.

Guidelines:

  • Send in a brief case report.
  • No more than one double-spaced page.
  • One or two illustrations, up to four questions and answers, and a three-quarter- to one-page discussion of the case.
  • Cases should include no more than two authors.

Email your Case Report to Section Editor C.S. Pitchumoni, MD

Nutrition Issues In Gastroenterology, Series #156

Sterile Water and Enteral Feeding: Fear Over Logic



Many practitioners believe they must utilize sterile water for administration into enteral feeding tubes, due to a fear of exposing the patient to potentially pathogenic infectious organisms, especially in critically ill patients, immunocompromised patients, or those with post-pyloric feeding tubes. However, the data supporting this practice are very limited. Enteral feeding tubes are not sterile devices; they are not placed or maintained under sterile conditions. Furthermore, the gastrointestinal tract is designed to handle foreign material and infectious organisms. This does not change in patients receiving enteral feedings. The recommendation to utilize sterile water for administration into enteral feeding tubes is both unjustifiable and costly. This manuscript will expose the flaws in the rationale behind the practice and outline why other forms of potable water are not only acceptable, but preferred as the type of water to administer to patients with enteral feeding tubes.

Todd W. Rice, MD, MSc, Associate Professor of Medicine, Division of Allergy, Pulmonary and Critical Care Medicine, Vanderbilt University School of Medicine, Nashville, TN

INTRODUCTION

A number of practitioners and healthcare systems have insisted on the use of sterile water in enteral feeding tubes. This use of sterile water is sometimes limited to administration of free water, and other times also includes any mixing of powder formula or medications. Many reserve this practice for enterally fed critically ill patients, immunocompromised patients, or those receiving post-pyloric enteral feedings that bypass the stomach. This is an interesting practice pattern, which has its origins in a few anecdotal reports of infections from contaminated water in patients who happened to be receiving enteral feeding. In fact, the last American Society for Parenteral and Enteral Nutrition (ASPEN) Safe Practices for Enteral Nutrition Therapy guidelines, published in 2009, recommend the use of sterile water in certain populations of patients receiving enteral feeding.1 However, the use of sterile water in these situations is neither logical nor practical, but instead based on an irrational fear of harming patients.

Enteral Feeding

First, let’s examine the universal use of sterile water in enteral feeding tubes. The gastrointestinal system is not a sterile environment. From the oropharynx through the rectum, and every location in between, it is saturated with commensal bacteria forming the normal flora and individual microbiome of each patient.2 When these bacteria are decreased, opportunistic organisms, such as Clostridium difficile, are more easily able to multiply and cause infections. In addition, placement of enteral feeding tubes is not a sterile procedure: while gloves are donned for the placement, the gloves are not sterile and are intended to protect the person placing the tube from soiling rather than to prevent contamination of the tube. The patient is not taken to an operating room, the nares (or oropharynx) are not sterilized prior to placement, and a sterile field is not used during the placement procedure. Instead, these tubes are often placed at the bedside under non-sterile conditions by the bedside nurse. Furthermore, once placed, the enteral feeding tube is not maintained with sterility. It is not covered with a sterile dressing, nor is the hub sterilized with chlorhexidine or alcohol prior to access.

Medication Administration

Furthermore, the enteral feeding tube is often used for medication administration, and medications, whether delivered from the pharmacy or kept in a medication-dispensing unit on the floor, are not sterile. They are touched by numerous human hands, often without gloves, prior to administration to the patient. In fact, in order to be administered through an enteric feeding tube, medication in pill form often has to be crushed, which occurs using a non-sterile mortar and pestle or pill crusher kept on each floor or unit. The mortar and pestle, or pill crusher, are washed after each use, but not sterilized. Liquid medications are often dispensed in smaller-quantity aliquots from large storage containers in a non-sterile fashion. While the bottles used to dispense the liquid medications are clean, they are not handled under strictly sterile conditions. While this delineation of all of the non-sterile interactions with the enteral feeding tube may startle some practitioners, it should not cause concern. The gastrointestinal tract is meant to handle non-sterile conditions.

In addition to the huge number of bacteria present as its normal flora, the GI tract is also designed to handle exposure to extraneous organisms. As part of its normal function, the GI tract secretes a number of molecules that help protect against infectious insults: digestive enzymes may help kill some bacteria, bile salts may bind some bacteria, and IgA antibodies provide a level of immunity against bacteria that are not part of the normal flora.3 This allows us to eat without having to sterilize our food. While we often wash fruits, vegetables, and other non-packaged food products, we do not wash them with sterile water, nor worry that they must be sterile prior to consumption. Similarly, we do not limit our consumption of water to only sterile water. Imagine having to find (or carry with you) sterile water every time you wanted or needed a drink.

Critical Illness

Yes, but that is in normal, non-sick humans. Is the critically ill patient different? Of course, the critically ill patient is different from a healthy individual. Many critically ill patients are not eating on their own and are dependent on enteral feedings; therefore, they are not ingesting fruits, vegetables, or non-packaged products. Even so, many of the same facts above still apply. Medications administered to critically ill patients through enteral feeding tubes are not sterile: they are not handled with sterility in the pharmacy or ICU, they are not crushed under sterile conditions, and they are not administered using sterile technique. As soon as the enteral feeding tube is removed from its package, it loses any sterility that it had. It is placed through the nose or oropharynx, which have their own microbiome and are not sterile. The placement is done with non-sterile gloves, without chlorhexidine or povidone-iodine prep. Likewise, the enteral feeding tube is not accessed under sterile conditions: the tube is not thoroughly washed with chlorhexidine or alcohol before it is touched, administrations are not done using sterile gloves, the connectors or insertion end of the feeding tube is not sterilized with chlorhexidine or alcohol wipes prior to administration of anything through the tube, and the feeding tube is not maintained in a sterile sleeve or dressing (like the sterile protective sleeve that covers pulmonary artery catheters or the dressing covering intravenous catheters to maintain their sterility). Some practitioners even administer probiotics through the enteral feeding tube in certain critically ill patient populations,4 purposefully introducing bacteria into the enteral tube in an effort to replenish normal flora in the gastrointestinal tract and prevent the overgrowth of pathogenic bacteria such as Clostridium difficile.

Immunocompromised Patients

What about immunocompromised patients receiving enteral feedings? They are a bit more complicated as they, by definition, do not have normal immune function. The use of probiotics in immunocompromised patients is currently discouraged as there are reports of bacteremia from the specific bacteria in the probiotic.4 However, like all patients, these patients do not have a sterile gastrointestinal tract. When they eat, they do not eat sterile food – despite a lack of evidence to support the practice, their diets may be modified to avoid fresh fruits or vegetables. However, their diet is not limited to sterile food. Their enteral feeding tubes are not placed, nor maintained, in a sterile fashion. In addition, the medications that they receive are also not sterile. While caution should be taken to not introduce known contaminated materials, including contaminated water, into their enteral system, their gastrointestinal tract still has adequate defense mechanisms to handle bacteria.

Post-pyloric Feeding

Lastly, some have advocated for the use of sterile water for post-pyloric tubes, in which the distal end of the enteral feeding tube terminates beyond the stomach. While stomach acid represents one of the first lines of defense against bacteria, it is not the only line of defense, and the bile salts and IgA protective mechanisms of the gut are present in the small intestine rather than the stomach. In fact, these are more effective at countering potential infectious organisms than the acid of the stomach.5 Furthermore, most patients receiving enteral feedings are also receiving some form of acid suppression (e.g., a histamine-2 receptor blocker or proton pump inhibitor), so even patients with gastric tubes likely lack much of the natural protection afforded by the acidity of the stomach. If there is concern that post-pyloric feeding bypasses this protective mechanism, there should be equal concern for our gastrically fed patients receiving histamine receptor blockade or proton pump inhibitors.

The caution about using non-sterile water, and the recommendation for sterile water use, appears to come from two misunderstandings. First, there is an irrational fear of harming the patient by either introducing an infection with contaminated water or precipitating bowel necrosis. However, infections documented from contaminated water are not from enteral administration. The vast majority are pulmonary infections such as legionella, pseudomonas, or mycobacteria,6-10 and according to Smith et al., these respiratory infections almost assuredly were acquired by inhalation of contaminated droplets from the air and not by hematogenous spread from an initial GI source. Washing hands (with subsequent aerosolization of the water source) is more likely the culprit than enteral administration of the water, since the gastrointestinal tract has numerous defense mechanisms in place to prevent contraction of infectious organisms. Second, bowel necrosis is a rare event in patients receiving enteral feedings. One case report associates distilled water administration into the jejunum with bowel necrosis and perforation in a burn injury patient.11 Due to hypernatremia, that patient was receiving 400 mL of distilled water flushes every 2 hours. Data from a single rat study suggest that electrolyte-free water may permit digestion of the bowel wall and predispose to perforation, compared with infusion of salt water.11,12 Even if these limited data are valid, administration of sterile water does not ameliorate this risk: sterile water is still electrolyte-free, and in fact is likely more electrolyte-free, given its sterile processing, than other forms of drinkable water. Furthermore, although distilled water was utilized in the case report, there is no evidence that the use of sterile water in that patient would have prevented the necrosis; the two are as likely to be unrelated as coincidentally related. In addition, this case report does not provide evidence that tap water flushes into the jejunum in reasonable volumes pose any danger to humans.

In addition, there is a misunderstanding of the different types of water (Table 1). There are not merely two options, sterile vs. contaminated; these represent two ends of a spectrum, with numerous options in between. Sterile water is verified to be free of all infectious organisms, is produced for use in sterile medical procedures, and must meet USP regulations.13 Because of this, it carries a higher cost than other forms of water. Potable (drinking) water is not sterile. It can come from many sources, including tap water, spring water, and filtered water.13 Bottled water, which is defined as water sealed in containers without added ingredients, is regulated by the Food and Drug Administration and is also not sterile. It often comes from springs and is treated to remove most infectious organisms, heavy metals, and other impurities, but it is not sterile.

Other types of water on the spectrum between sterile and contaminated include filtered, purified, distilled, disinfected, and tap water. These water types are also not sterile and should not be used in place of sterile water in medical situations where sterility is needed, such as an operating room or lavage of a sterile body cavity (the abdomen, thorax, urinary bladder, etc.). Filtered water has been passed through a physical, chemical, or biological process to remove many of the impurities14 and is an acceptable grade for drinking. Purified water is similar to filtered: it has been processed to remove impurities and is also acceptable for drinking.14 Distilled water is the steam from boiled water, condensed in a separate container, leaving many of the solid contaminants behind; it is potable but not sterile, as it can still contain a low level of bacteria. Disinfected water has been processed with a disinfectant, often chlorine, fluorine, iodine, or ultraviolet light, to kill bacteria. While this process greatly diminishes the number of live infectious organisms in the water, it does not ensure sterility. Tap water simply describes water obtained from the tap, or spigot; its quality varies greatly, depending on the source of the water and any processing that occurs prior to delivery at the tap.

Potable water should be used for administration into enteral feeding tubes. Often, tap water is potable and can be used. Most tap water in the United States is acceptable for drinking, but contaminated taps do exist. If there is any concern, tap water should not be used for drinking, even by healthy people. Caution should be taken to avoid using known contaminated tap water or water that personnel on the unit would not feel comfortable drinking. In these situations, tap water should also not be used for enteral feeding tube administration, regardless of whether the patient is critically ill, is immunocompromised, or has a feeding tube whose distal end terminates in the small bowel. However, even these situations do not require the use of sterile water. Other forms of potable water, namely filtered, purified, distilled, or even bottled water bought at the store, can be used instead, often at considerably less cost. Any water that healthcare personnel or other people drink (filtered, purified) is more than adequate for administration through enteral feeding tubes.

Despite this, one may ask: even if the risk of non-sterile water is very low, why not be safe and use sterile water for the highest-risk patients receiving enteral feeds? Sterile water is not without downside. It is expensive, often costing up to $4 per liter, not counting the personnel needed to deliver it to the unit, the storage space required on the unit, and the nursing time spent retrieving it. While this may sound like a minor expense, for patients receiving one to two liters of free water via feeding tube each day (not an uncommon amount), this would amount to $4-8 or more per day, or almost $1,500-$3,000 per year per patient: an added healthcare cost without benefit that must come out of someone’s budget, either the hospital’s or the patient’s. Many patients receive more than two liters of water each day, especially with medication administration, formula reconstitution, and feeding tube flushes. In addition, the use of sterile water does not maintain sterility of the feeding tube; other things administered via the feeding tube, including medications, supplemental protein, and unclogging agents, are not sterile.15 Furthermore, obtaining sterile water is more difficult than obtaining other forms of potable water. Given its medical grade and requirement for meeting USP standards, it is only available from places that sell medical goods or medications; it is not routinely available at the grocery, nutrition, or convenience store. This inconvenience may be onerous for the patient or caretaker, and many times the patient (or caretaker) simply stops using sterile water, despite the guilt that may follow. Finally, the use of sterile water in this fashion also creates environmental concerns, as unneeded trash from empty containers (i.e., plastic bottles) must be disposed of someplace and is likely to end up in our landfills.
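
The cost arithmetic above is straightforward; a quick sketch using the figures cited in this paragraph (the $4/L price is the article's estimate, not a verified market rate):

```python
# Back-of-the-envelope annualized cost of sterile water for flushes, using the
# figures cited above ($4/L, one to two liters per day).
COST_PER_LITER = 4.00

for liters_per_day in (1, 2):
    daily = COST_PER_LITER * liters_per_day
    print(f"{liters_per_day} L/day -> ${daily:.2f}/day, ~${daily * 365:,.0f}/year")
# 1 L/day -> $4.00/day, ~$1,460/year
# 2 L/day -> $8.00/day, ~$2,920/year
```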

CONCLUSION

Given the soaring health care costs in this country, clinicians should always weigh the cost of a practice against its demonstrated benefits prior to implementation. The routine use of sterile water in enteral feeding tubes increases cost, is more labor-intensive, and is harmful to the environment, without any added benefit to our patients. Regardless of the condition of the patient or the location of the distal ports of the feeding tube, the mandated use of sterile water is illogical, unfounded, and expensive. Instead, we should recommend against using known contaminated water (including contaminated tap water) or water that is known or thought to be non-potable. Any water administered into an enteral feeding tube should be potable, just like any water drunk by patients able to ingest on their own. When safe potable tap water is not available, numerous cheaper, more practical, and more easily accessible forms of potable drinking water exist than medical-grade sterile water. Therefore, we should stop recommending the use of sterile water in our patients with enteral feeding tubes. Finally, see Table 2 for practical interventions to consider before switching to bottled water for enterally fed patients.


Nutrition Issues In Gastroenterology, Series #155

Refeeding the Malnourished Patient: Lessons Learned



Refeeding Syndrome (RS) was first recognized in the 1940s in starved prisoners of war who suffered complications after being refed. Today, the problem has become more widely appreciated due to current advances in medical care and nutritional support. However, despite the increased recognition, no standard definition or treatment approach has been established by randomized clinical trials. Symptoms of RS can vary from a mild fall in serum electrolytes to critical electrolyte disarray and even death in the most severe cases. The goals of this article are to help clinicians better understand the mechanism of RS, recognize patients at risk, and identify the clinical circumstances that may require special attention.

Stacey McCray, RD, Program Coordinator, Medicine Nutrition Support Team; Carol Rees Parrish, MS, RD, Nutrition Support Specialist; University of Virginia Health System, Digestive Health Center of Excellence, Charlottesville, VA

CASES
Which of the following cases represent refeeding syndrome? (Answers at the end)

For all cases, normal (UVA) reference ranges for electrolytes are as follows:
Phosphorus (Phos): 2.3-4.5 mg/dL (0.74-1.45 mmol/L)
Magnesium (Mg): 1.6-2.6 mg/dL (0.66-1.07 mmol/L)
Potassium (K+): 3.4-4.8 mEq/L (3.4-4.8 mmol/L)
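
For reference, the paired values above are simple unit conversions; a small sketch (the conversion factors are the standard ones: mg/dL × 10 ÷ atomic weight):

```python
# Conversion factors behind the paired reference values above.
P_MG_DL_TO_MMOL_L = 10 / 30.97    # phosphorus, ~0.3229
MG_MG_DL_TO_MMOL_L = 10 / 24.305  # magnesium, ~0.4114
# Potassium: 1 mEq/L == 1 mmol/L for a monovalent ion, so no factor is needed.

def phos_mmol(mg_dl: float) -> float:
    return mg_dl * P_MG_DL_TO_MMOL_L

def mag_mmol(mg_dl: float) -> float:
    return mg_dl * MG_MG_DL_TO_MMOL_L

print(round(phos_mmol(2.3), 2), round(phos_mmol(4.5), 2))  # 0.74 1.45
print(round(mag_mmol(1.6), 2), round(mag_mmol(2.6), 2))    # 0.66 1.07
```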

Case #1

65-year-old male admitted to the ICU with COPD exacerbation. Patient was well nourished prior to admission (he had just returned from a Caribbean cruise with his family). Now intubated and sedated. Enteral feeding initiated at a low rate within 24-48 hours of admission. Phosphorus levels on hospital days 2 and 3, respectively: 1.7 mg/dL (0.55 mmol/L) and 1.9 mg/dL (0.61 mmol/L). Magnesium and potassium levels were within normal limits.

Case #2

65-year-old female admitted with fever, UTI, and dehydration. History of hypertension and a stroke four months ago, after which she was discharged to a skilled facility on a pureed diet with thickened liquids. Her weight on discharge to the facility was 65 kg; at the time of this admission she was 56 kg. She failed a swallow evaluation, and enteral feeding was initiated via nasogastric tube. Next morning, labs revealed:
Phos: 1.8 mg/dL (0.58 mmol/L)
Mg: 1.4 mg/dL (0.58 mmol/L)
K+: 3.1 mEq/L (3.1 mmol/L)

Case #3

45-year-old female admitted from the ER with diabetic ketoacidosis. Eating well until 2 days ago, when she became ill from a virus and stopped taking food and medications. Current weight: 62 kg; usual weight: 65 kg. Receiving IV fluids at 125 mL/hr and an insulin drip with potassium replacement. On admission her potassium level was 5.8 mEq/L (5.8 mmol/L), phosphorus level was 4.6 mg/dL (1.49 mmol/L), and magnesium level was 2.5 mg/dL (1.03 mmol/L). Phosphorus level now: 1.4 mg/dL (0.45 mmol/L).

INTRODUCTION

Refeeding syndrome (RS) is the metabolic response to nutrient provision in a malnourished patient. The driving force behind RS is the physiologic shift from a starved, catabolic state to a fed, anabolic state. Under normal conditions, the body’s preferred fuel is carbohydrate. Carbohydrate is stored as glycogen in the liver for readily available energy. During starvation, glycogen stores are depleted, and the body responds by utilizing protein and lipid as the primary fuel source. This shift in fuel source results in decreased insulin levels and increased glucagon levels. Prolonged starvation will lead to decreased lean body mass as muscle is burned for energy. This results in decreased skeletal, cardiac, and respiratory muscle mass, as well as overall strength.

Prolonged periods without nutrition also result in total body loss of electrolytes (including phosphorus, magnesium, and potassium), as well as vitamins and minerals. Serum electrolyte levels may not reflect total body stores, as only about 1% of phosphorus and magnesium stores are reflected in the serum level.1,2 Serum electrolyte levels may remain normal despite overall depletion; this can be attributed to adaptation, intracellular contraction, decreased renal excretion, and/or dehydration.3-5

Insulin, released in response to carbohydrate provision, is the primary stimulus for the cascade of events associated with RS. Insulin drives not only glucose into the cells, but also the vitamins and electrolytes required for utilization of the substrate. This intracellular shift of electrolytes (and the resulting decrease in serum levels) accounts for many of the clinical complications associated with RS.

Signs and Symptoms

Symptoms of RS vary from mild drops in serum electrolytes to severe electrolyte disorders with complications, or even death. Most symptoms first occur 1-3 days after refeeding is initiated,6 although in some cases they may not appear for up to 5 days.7 The duration of symptoms varies based on the degree of malnutrition, the rate of feeding advancement, and other factors. There is no standard definition of RS or of how many symptoms must be present to constitute RS. The majority of symptoms associated with RS are due to electrolyte dysregulation, with cardiac, respiratory, neurologic, and other systems affected (see Table 1). Cardiac arrhythmia is the most common cause of death from RS.3

Hypophosphatemia

Hypophosphatemia is the classic sign associated with RS. In fact, some authors have suggested that the term “Refeeding Hypophosphatemia” may be more appropriate for cases where hypophosphatemia is observed, and no other electrolyte disorders or symptoms of RS are present.7 In a review of 27 cases of RS, hypophosphatemia was documented in 96% of the cases.7

Phosphorus is required by a number of organ systems, including the respiratory, neuromuscular, cardiac, endocrine, and hematologic systems.2,5,8 Phosphate is a component of adenosine triphosphate (ATP) and is therefore critical to providing energy to the cells. Phosphate is important in respiratory and cardiac muscle function, white blood cell function, nerve conduction, and oxygen delivery, and phosphorus is required for the pathway that allows the release of oxygen from hemoglobin.9 Respiratory alkalosis or metabolic alkalosis can cause phosphorus redistribution, resulting in decreased serum phosphorus concentration.8 Hypophosphatemia has been shown to result in longer length of stay, more ICU and ventilator days, and a higher mortality rate.10,11

Hypomagnesemia and Hypokalemia

Other serum electrolyte abnormalities are associated with RS, primarily of magnesium and potassium. Magnesium is required for more than 300 enzyme pathways.1 Among its many functions, it is important in the synthesis of proteins and is required for normal muscle, cardiac, and nerve function. Hypomagnesemia is defined as serum Mg < 1.8 mg/dL (0.74 mmol/L), although symptoms most often occur with Mg < 1.2 mg/dL (0.5 mmol/L).1 Hypomagnesemia can lead to muscle weakness, ventricular arrhythmia, neuromuscular problems, metabolic acidosis, and anorexia.

Hypokalemia (serum potassium < 3.5 mEq/L [3.5 mmol/L]) can lead to weakness, paralysis, and confusion. Severe hypokalemia can lead to life-threatening arrhythmias, cardiac arrest, or sudden death. Because of the severity of potential complications, hypokalemia is rarely left unattended by the medical team and is usually replaced promptly.

A full list of complications associated with hypophosphatemia, hypomagnesemia, and hypokalemia is available.12

Other Complications

Complications other than electrolyte disarray may also occur. Increased carbohydrate provision may decrease water and sodium excretion, resulting in fluid overload. This is most common in severely malnourished patients, such as those with anorexia nervosa. Hyperglycemia can be seen as carbohydrate is provided to a body adapted to fat metabolism. Micronutrient deficiencies are likely if the patient has been without adequate nutrition for a prolonged period.

Thiamine should also be of primary concern in the patient at risk for RS. Depleted thiamine stores can lead to neurological compromise and other complications (Wernicke’s encephalopathy). Thiamine supplementation should be provided to patients with a history of alcohol abuse, as well as patients who are markedly malnourished for any reason. In addition, until better data are available, thiamine is provided in our institution both before and during the first few days of feeding in these patients. Theoretically, if thiamine is given without concurrent nutrient delivery, there may not be “recruitment” (i.e., demand for thiamine), and it is unclear whether the thiamine would be utilized. Several recent reviews of thiamine and Wernicke’s encephalopathy are available.13-15

Incidence

The true incidence of RS is difficult to determine, as there is not a standard definition for RS. The incidence reported in the literature varies greatly and is often based solely on the appearance of hypophosphatemia. Reported rates in specific populations include:

  • 34% of all ICU patients10
  • 10% in anorexic patients admitted to the ICU16
  • 15% of hospitalized patients17
  • 9.5% of patients hospitalized for malnutrition from gastrointestinal fistulae18
  • 48% of severely malnourished patients being refed19

This broad range in reported incidence is likely due to the wide variety of patient populations reported upon: varying degrees of malnutrition among the populations, different criteria used to diagnose malnutrition, different definitions of RS, and varied refeeding protocols among institutions.

Confounding the identification of RS is the fact that electrolyte disorders have many causes in the hospitalized setting. Therefore, it is important to remember that not all low electrolyte levels are a result of RS. Metabolic or respiratory acidosis, sepsis, volume repletion, changing renal function, initiation or discontinuation of insulin drips, and other factors may affect phosphorus levels, and many medications may lower serum phosphorus as well. Patients such as those with COPD may experience hypophosphatemia when mechanical ventilation is initiated; this is due to the intracellular shift of phosphorus that occurs when pH normalizes as respiratory acidosis corrects.8 Table 2 lists some of the causes of hypophosphatemia in the hospitalized patient. The myriad factors altering potassium and magnesium are outlined in an earlier article.12

Patients at Risk 9,12,20

Any patient who has been without adequate nutrition for a prolonged period of time may be at risk for RS. Critically ill patients may experience hypophosphatemia upon refeeding after a relatively short period (48 hours) without nutrition.9 Table 3 identifies some conditions that put patients at risk for RS. Interestingly, RS has also been reported after severe weight loss from gastric bypass surgery and is likely to become more prevalent in this population.21,22 A recent study by Manning cites a low incidence of RS in alcoholics; however, it is important to note that these patients were not identified as malnourished, presented voluntarily, and were provided with an oral diet as desired.23 Patients with chronic alcohol abuse should be presumed to have a component of malnutrition and be provided with thiamine (opinion of the authors).

The National Institute for Health and Clinical Excellence (NICE) in England and Wales published guidelines in 2006 for identifying patients at high risk for RS.24 While such screening tools may be helpful, it is often difficult to determine which patients will show signs and symptoms of RS. Zeki, et al. retrospectively reviewed the records of 321 hospitalized patients.17 The authors evaluated the risk for RS based on the NICE guidelines and looked at serum phosphate levels before and after feeding initiation. Ninety-two patients (29%) were identified as at risk of RS; of these, 23 patients (25%) developed refeeding hypophosphatemia (RH), compared with 26 of the 229 patients (11%) who were not identified as at risk but still developed RH (p=0.003). This study demonstrates that not all patients identified as at risk will show symptoms, and some patients not identified as at risk will experience signs of refeeding. Other authors have also found that patients identified as at risk do not always go on to develop RS.25

RS can occur when consistent nutrients are provided, regardless of the source: oral, enteral, or parenteral nutrition (PN), or IV dextrose. While overzealous PN was associated with RS in the past, other reports have shown that RS can occur when any source of nutrition is provided.7,17 In the Zeki article discussed above, the authors found that at-risk patients in the enteral group were more likely to develop hypophosphatemia than at-risk patients in the PN group.17 The authors postulate that the lower phosphorus content of enteral feeding compared with PN may play a role, and that greater stimulation of insulin secretion with enteral feeding compared with PN, due to first-pass metabolism, may also be responsible.

Treatment

There is no one regimen that has been proven to prevent RS. Recently, one group undertook one of the first randomized, controlled trials to assess outcomes associated with a treatment regimen for RS in critically ill patients.26 Patients who experienced hypophosphatemia upon feeding (<2 mg/dL [0.65 mmol/L] within 72 hours of feeding) were randomized to either a control group or a calorie-restricted group. Patients in the control group (n=170) continued on nutrition support per standard protocol. Patients in the calorie-restricted group (n=169) received 20 kcal/hour for 2 days, then advanced to goal in a stepwise manner over several days. The results of this study were mixed. In the short term, the calorie-restricted group had higher phosphorus levels on days 1 and 2, and less hyperglycemia on days 1-4. There was no difference in the primary study outcome of days alive after discharge from the ICU. This study provides some additional information; however, nutrition support was initiated at standard levels, and the study group had calories decreased only after hypophosphatemia occurred. This differs from the more standard practice of beginning nutrition support conservatively in patients at risk for RS and replacing electrolytes as the need arises (see Table 4).

The NICE guidelines recommend initiating calories at 10 kcal/kg in patients at high risk for RS.24 In the most severe cases, such as patients with anorexia nervosa, even lower levels may be recommended.16,24 Hofer, et al. evaluated 86 cases of severely malnourished anorexia nervosa patients (in 74.4% of cases, patients were less than 70% of ideal body weight).27 The authors evaluated a refeeding regimen that included initiation of feeding at 10 kcal/kg/day, fluid and sodium restriction, and electrolyte supplementation and monitoring (for the full protocol, see the article). They found a low incidence of complications with this protocol, and no deaths were reported.

For patients deemed to be at mild to moderate risk of RS (or those in whom risk is unclear and clinicians simply want to err on the safe side), such low calorie levels may not be necessary. Prior to the NICE guidelines, a calorie level of 15-20 kcal/kg was generally recommended for most patients at risk for RS,12 and this level is currently used at our institution unless a patient is deemed to be at severe risk.

While repleting the malnourished patient is essential, repletion can only occur so quickly, and a “rush” to refeed may lead to complications. On the other hand, once the refeeding calorie level has been established, there is no need to initiate nutrition support below the designated refeeding calorie goal. For example, if the refeeding level is determined to be 1000 kcal/day, a continuous tube feeding rate for a 1 kcal/mL product would be approximately 45 mL/hr. We base our flow rates on 20-22 hours per day, as this is what patients typically receive.
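
This arithmetic is simple but easy to get wrong at the bedside. Below is a minimal illustrative sketch, not a validated clinical calculator, combining the initiation levels discussed above (10 kcal/kg/day for severe risk per NICE, and the conservative end of the 15-20 kcal/kg/day range for mild to moderate risk) with the conversion from a daily calorie goal to a continuous tube-feeding rate over a 20-22 hour infusion day; all function and parameter names are our own.

```python
# Illustrative sketch only -- not a validated clinical calculator.
# Assumes the initiation levels discussed above: 10 kcal/kg/day for severe
# refeeding risk (NICE), 15-20 kcal/kg/day for mild to moderate risk.

def initial_refeeding_goal_kcal(weight_kg: float, severe_risk: bool) -> float:
    """Starting daily calorie goal for a patient at refeeding risk."""
    kcal_per_kg = 10 if severe_risk else 15  # conservative end of the 15-20 range
    return weight_kg * kcal_per_kg

def continuous_rate_ml_per_hr(goal_kcal_per_day: float,
                              formula_kcal_per_ml: float = 1.0,
                              infusion_hours: float = 22.0) -> float:
    """Convert a daily calorie goal to a continuous tube-feeding rate.

    Rates are based on a 20-22 hour infusion day, since feeds are routinely
    interrupted and a full 24 hours is rarely delivered.
    """
    daily_volume_ml = goal_kcal_per_day / formula_kcal_per_ml
    return daily_volume_ml / infusion_hours

# Example from the text: a 1000 kcal/day goal with a 1 kcal/mL product
# infused over 22 hours works out to ~45 mL/hr.
print(round(continuous_rate_ml_per_hr(1000.0), 1))  # 45.5
```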

When initiating nutrition support, all calorie sources should be taken into account, such as D5% or D10% IV fluids and calories coming from glucose or lipids in medication administration. These calories alone can cause RS in a malnourished patient. If it is not possible to stop any of these additional calorie sources, the nutrition support regimen should be adjusted to take these calories into account. Protein calories should always be included as part of the total calories.
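
As an illustrative example of how quickly these “hidden” calories add up: hydrous IV dextrose provides roughly 3.4 kcal/g, and a DX% solution contains X grams of dextrose per 100 mL. The sketch below (function and parameter names are ours) estimates the daily calories from a continuous dextrose infusion under those assumptions.

```python
# Illustrative sketch: estimating "hidden" calories from a continuous
# dextrose infusion. Assumes ~3.4 kcal/g for hydrous IV dextrose and that
# a DX% solution contains X grams of dextrose per 100 mL.

def dextrose_kcal_per_day(rate_ml_per_hr: float, dextrose_pct: float) -> float:
    """Approximate daily calories delivered by a continuous dextrose infusion."""
    grams_per_ml = dextrose_pct / 100.0        # e.g., D10% -> 0.10 g/mL
    daily_grams = rate_ml_per_hr * 24 * grams_per_ml
    return daily_grams * 3.4                   # kcal per gram of hydrous dextrose

# Example: D10% running at 75 mL/hr delivers ~612 kcal/day -- on its own,
# potentially enough to trigger RS in a severely malnourished patient.
print(round(dextrose_kcal_per_day(75, 10)))   # 612
```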

Calories should be increased slowly as the refeeding risk subsides. An advancement of 200-300 calories every 1-3 days is generally recommended.28,29 It is important to make sure this advancement takes place so that patients are not left on hypocaloric feeding levels for a prolonged period of time. This is especially important if a patient is discharged home during the advancement period. It is also important to monitor whether patients are actually receiving and utilizing the nutrition prescribed during this period (see special circumstances discussed later) before further calorie advancement. Feeding interruptions, NPO status, hyperglycemia, malabsorption, or other issues may thwart efforts at nutrition support and leave patients still at risk for RS and further malnutrition.
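
To illustrate what such an advancement schedule looks like in practice, the short sketch below (our own illustration, using a middle-of-the-road 250 kcal step every 2 days within the 200-300 kcal every 1-3 days guidance cited above) projects the path from a starting refeeding level to the full calorie goal.

```python
# Illustrative sketch of a calorie advancement schedule, so patients are not
# left on hypocaloric feeds longer than necessary. Step size and interval
# follow the ~200-300 kcal every 1-3 days guidance cited above.

def advancement_schedule(start_kcal: float, goal_kcal: float,
                         step_kcal: float = 250, step_days: int = 2):
    """Yield (day, kcal/day) pairs from the starting level up to goal."""
    day, kcal = 1, start_kcal
    while kcal < goal_kcal:
        yield day, kcal
        kcal = min(kcal + step_kcal, goal_kcal)
        day += step_days
    yield day, kcal

# Example: advancing from 1000 kcal/day to an 1800 kcal/day goal.
for day, kcal in advancement_schedule(1000, 1800):
    print(f"day {day}: {kcal:.0f} kcal/day")   # days 1, 3, 5, 7, 9
```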

Electrolyte Monitoring and Replacement

Electrolytes should be checked prior to initiation of nutrition support and low levels replaced. However, there is no need to withhold nutrition support until electrolyte levels are normal.20,24 Some guidelines recommend proactive supplementation of electrolytes, vitamins, and minerals.24,27 Thiamine should be provided to severely malnourished patients and those with a history of alcohol abuse or chronic vomiting prior to the initiation of nutrition support.

Serum electrolytes should be checked 8-12 hours after initiation of nutrition support, then daily during the refeeding period (first 48-72 hours). The frequency and duration of electrolyte monitoring will vary depending on the degree of malnutrition and on whether electrolyte disorders occur, as well as their severity.

Mild to moderate drops in electrolytes can be replaced orally/enterally in patients with a functioning GI tract. Severely low levels should be replaced intravenously. IV replacement may also be necessary for patients without a functional GI tract, those who do not seem to be responding to enteral replacement, or in other situations where oral replacement is not possible or is contraindicated. Specific guidelines for phosphorus and magnesium replacement are available.12

Oral magnesium replacement may be poorly absorbed and can have a cathartic effect, causing or exacerbating diarrhea. As an example, some forms of magnesium (such as magnesium citrate and magnesium sulfate) are available over the counter as laxatives. Consider slower, more gradual dosing of oral magnesium (smaller doses over the day, or given at night on an empty stomach before bed), or forms of magnesium that provide more elemental magnesium per dose to the GI tract (such as magnesium oxide).

Intravenous magnesium is often given as a bolus over 60 minutes. This exceeds the renal threshold for magnesium, and the kidneys will excrete 50% or more of the dose.30 Experience at our institution indicates that a slower IV infusion of magnesium (over 10-12 hours) is better retained and utilized. In this era of shortages and increasing healthcare costs, it is important to ensure the therapy being provided is actually being utilized.

Hypomagnesemia can exacerbate hypokalemia and make it more difficult to replace potassium. Potassium levels may not normalize until the corresponding hypomagnesemia is corrected. According to one report, 42% of patients with a low potassium level will also have a low magnesium level.31 Concurrent hypomagnesemia may also worsen the symptoms associated with hypokalemia. In addition, a recent review reports that patients with a history of alcohol abuse are often deficient in magnesium and discusses the role of magnesium in the treatment of Wernicke’s encephalopathy.14

SPECIAL CONSIDERATIONS
Renal Failure

Patients with renal failure are at high risk for malnutrition.32,33 However, due to the underlying disease state, these patients may have elevated serum electrolyte levels when feeding is first initiated. Serum levels may drop more gradually or over a longer period of time due to the “protective” effect of the renal failure. Therefore, in such patients, there may be a delayed response to refeeding. Electrolyte levels may need to be monitored over a longer period of time, and replacement may be needed after several days of nutrition support rather than in the first few days. Electrolyte replacement must be done carefully in patients with renal failure.

Hepatic Failure

Patients with severe end-stage liver disease will have depleted glycogen stores and may be unable to maintain serum glucose levels within a safe range.34 This may also occur in patients with anorexia nervosa or severe malnutrition from any cause. In patients unable to maintain their serum glucose level in a safe range, a dextrose infusion (D10%) may be required. In some, this infusion may provide calories that exceed the refeeding calorie goal. However, maintaining serum glucose levels above 80 mg/dL (4.44 mmol/L) takes priority over any concern for RS. Electrolytes should be monitored closely and replaced as needed. Adequate thiamine, as well as other vitamins and minerals, should also be given during the first few days.

Hyperglycemia or Diabetic Ketoacidosis

If a patient presents with hyperglycemia, or becomes hyperglycemic after nutrition support is initiated, the refeeding process may be delayed as there is inadequate insulin to drive glucose and electrolytes into the cells. Hyperglycemia is essentially a continuation of the starved state (“starvation in the midst of plenty”). When insulin therapy is initiated, the refeeding response is accelerated. Clinicians should anticipate this response, and monitor for signs of RS. Note that if insulin therapy is delayed and the patient is hyperglycemic for the first days of feeding, the signs of RS may be seen later, once insulin therapy is initiated and glycemic control is achieved.

Treatment for diabetic ketoacidosis is a form of refeeding.12,35,36 Exogenous insulin provided to treat diabetic ketoacidosis will drive glucose and electrolytes into the cells, causing serum electrolyte levels to drop; supplementation will be needed. Of note, exacerbation of hypocalcemia can be seen with aggressive phosphorus repletion (8.5 mmol/hr, or 6 g inorganic phosphate); therefore, caution is required.37

GI Tract Issues

Patients with gastrointestinal disease or malabsorptive disorders also may face unique challenges related to RS. Several scenarios can be seen.

First, it is not unusual for patients undergoing work-up on a GI service to frequently be made NPO for any number of reasons (procedures, GI bleed, access issues, and symptoms). If the patient is to receive oral or enteral nutrition support, delivery may be inconsistent at best, and the total amount of nutrition provided may vary greatly from what is ordered. The amount of nutrition actually received needs to be determined in order to evaluate whether the patient has received enough nutrition for RS to occur. For example, a patient identified at refeeding risk may have an order to receive nutrition support on day #1, but be NPO off and on the next several days (while refeeding electrolytes are being monitored). If consistent nutrition actually starts several days later, RS may occur at that time. Ongoing monitoring should be coordinated with this timing. Also, clinicians may need to consider continuing thiamine supplementation, as the demand for thiamine occurs when patients are fed, not while they are NPO. It is unknown whether patients will simply excrete supplemental thiamine in the non-fed state, so until better data are available, it may be prudent to continue dosing until feeding is consistent for 3-5 days (opinion of the authors).

In patients with possible malabsorption, RS will not occur if the nutrition delivered is not absorbed (unless an IV source of nutrition is provided). For example, if a malnourished patient with suspected malabsorption shows signs of RS when enteral nutrition is provided, at least some absorption is occurring. This is certainly a ‘gross’ test at best, but it does provide some clues to the level of absorption. Patients with malabsorption may receive enteral nutrition for a period of time, but at some point require the initiation of parenteral nutrition due to failure to thrive, change in status, etc. Patients should be monitored for RS at this time as well (even though they have been “fed” for some time).

BACK TO CASES
Case #1

Unlikely. The patient was well nourished prior to admission. However, mechanical ventilation and correction of respiratory acidosis in a patient with COPD can lead to significant hypophosphatemia.8

Case #2

Most likely RS. Multiple electrolyte disorders in a patient with significant recent weight loss.

Case #3

Possibly. Resolution of DKA will mimic RS due to the potent effects of insulin driving electrolytes intracellularly. Although this patient may be refeeding after excess losses of potassium, magnesium, and phosphorus in the urine due to the catabolic effects of DKA, the exogenous insulin provided will accelerate the drop in serum levels as glucose and electrolytes move intracellularly.

SUMMARY

RS is a concern for any patient who has been without consistent or adequate nutrition for a prolonged period of time. Serious complications can be avoided with appropriate identification of patients at risk, slow initiation of feeding, and careful monitoring. An understanding of the causes and mechanisms of RS can aid the clinician in better caring for patients, as well as recognizing when special circumstances arise or additional care and monitoring may be needed.

Nutrition Issues In Gastroenterology, Series #154

Immunonutrition in 2016: Benefit, Harm or Neither?

Over the past two decades, there have been numerous clinical trials, meta-analyses, and systematic reviews on the use of immunonutrition (IN) in a variety of populations. Although clinicians remain intrigued by the potential to alter the immune response through nutrition, there remains much debate on what is considered appropriate and efficacious use of IN, including lack of consensus from critical care guidelines and the international nutrition support community. Clinicians practicing in nutrition support must first evaluate outcome benefit, as well as consider the patient population and cost, when determining whether IN is appropriate. While administration of IN prior to or following elective GI surgery may be beneficial in preventing post-op infectious complications and reducing hospital length of stay (LOS), there is inadequate evidence to support the routine use of IN among the critically ill population as a whole.

Kelly Roehl, MS, RDN, LDN, CNSC, Advanced Level Dietitian, Rush University Medical Center, Chicago, IL

INTRODUCTION

Infection is the most common cause of morbidity and mortality following surgery1 and during critical illness,2 potentially resulting in prolonged length of stay and increased hospital costs.3,4 Enteral nutrition (EN) support is currently provided as the standard of care in an effort to prevent degradation of lean body mass (LBM) for gluconeogenesis and to prevent malnutrition, a risk factor for infectious complications. Over the past two decades, interest has moved not only to prevention of malnutrition, but also to modulating the immune response through nutrition, often referred to as immunonutrition (IN). The potential for altering the immune system and associated clinical outcomes is exciting, but current research and practical implications are not robust enough to drive practice. The aim of this article is to review evidence to date on the safety, efficacy, and recommendations for use of IN.

Overview of Immunonutrition (IN)

Specific nutrients and dietary components, including arginine, glutamine, selenium, omega-3 (n-3) fatty acids (eicosapentaenoic acid [EPA] and docosahexaenoic acid [DHA]), the omega-6 fatty acid gamma-linolenic acid (GLA), nucleotides, and/or antioxidants, have been investigated for their potential to modulate the metabolic response to surgery or stress by enhancing immune function. Specialty enteral products have been developed to include nutrients that are believed to enhance or modulate the immune response (Table 1). Many of the IN enteral formulations currently available were designed for use among those undergoing gastrointestinal (GI) surgery, and are therefore elemental or semi-elemental as a presumed necessary criterion.

The composition of the IN enteral and oral products available varies greatly, not only in nutrients but also in the concentration of each specific component. Unfortunately, clinical trials of the individual potentially immune-modulating nutrients have either not been conducted or have failed to demonstrate benefit.5,6 It has yet to be established which nutrients, how much (if any), when, and for whom IN may provide benefit.

Meet the “Immune-Modulating” Nutrients

Glutamine, best known as the primary fuel for enterocytes, lymphocytes, and macrophages,5 is also a conditionally essential amino acid during metabolic stress. It serves as a substrate for gluconeogenesis and may be oxidized for fuel by rapidly proliferating cells.8 Additionally, it is a precursor for renal ammoniagenesis, the process by which ammonia is excreted from the body.8

Arginine is a conditionally essential amino acid during metabolic stress as it is a precursor for many compounds within the human body. It is required for normal T- and B-lymphocyte and macrophage functions, and can be metabolized and utilized in collagen production by way of proline synthesis.9

Arginine stimulates secretion of growth hormone, insulin, and glucagon,10 and can be metabolized to nitric oxide, thereby altering blood flow, angiogenesis, epithelialization, and tissue granulation.11

Omega-3 fatty acids, specifically EPA and DHA, are believed to be immunosuppressive by reducing the production of the pro-inflammatory omega-6 fatty acid arachidonic acid, whose production results in higher levels of the pro-inflammatory eicosanoids (prostaglandins, leukotrienes, and thromboxanes).12 Furthermore, EPA and DHA are postulated to reduce macrophage adhesion, alter T-cell proliferation, and stabilize the cytokine response.13 Some have suggested that arginine and n-3 fatty acids may synergistically improve immune function with:

  1. arginine delivery improving cytokine and nitric oxide production,
  2. n-3 fatty acids reducing pro-inflammatory eicosanoid production, and
  3. increasing arginine availability by decreasing expression of arginase I, an enzyme responsible for degradation of arginine.14,15

Given the role of nucleotides in the structural integrity of DNA and RNA, and their involvement in the transfer of energy and coordination of hormonal signals, they are often added to IN formulas intended for use during times of stress and/or rapid tissue proliferation.7 Interestingly, the processing techniques utilized in the production of commercial EN formulas result in the removal of nucleotides; therefore, some have suggested that standard EN products do not provide adequate nucleotide content for those experiencing metabolic stress.13

Antioxidants, including vitamins C and E, beta-carotene, and selenium, are often added in an effort to reduce oxidative stress among patients with acute metabolic stress.

A number of formulas with varying IN compositions are available in the United States (Table 1). Some of these products have been used in research in attempts to demonstrate efficacy for their use, but many of the products have never been tested for efficacy or safety in the populations for which they are marketed or in a clinical trial of any kind.

Reviewing the Evidence

Although immune-enhancing nutrition has been explored in a variety of settings, including pulmonary, trauma, neurology, oncology, and critical care, much of the research has been conducted among patients with GI disorders, specifically elective surgeries for cancers of the GI tract. Those undergoing elective surgery are an attractive and easy group to study because enteral and/or oral nutrition support is often utilized to prevent unintended complications related to malnutrition as many patients struggle to meet nutrition requirements orally during the pre- and post-operative periods.

Over the past two decades, there have been at least 16 meta-analyses and systematic reviews evaluating the efficacy of IN among patients undergoing elective surgery (Table 2) and the critically ill (Table 3), yet use of IN remains controversial, particularly among the critically ill. In fact, the most recent Guidelines for the Provision and Assessment of Nutrition Support Therapy in the Adult Critically Ill Patient, jointly published by the American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.) and the Society of Critical Care Medicine (SCCM), recommend that immune-modulating EN formulations not be used routinely among medical ICU patients, reserving them for patients with traumatic brain injuries and for perioperative use in surgical ICU populations.31 Additionally, they do not recommend routine use of fish oil- and antioxidant-containing EN among patients with ARDS or ALI, citing insufficient evidence and conflicting data. Much of the backing behind these recommendations stems from research with wide heterogeneity and inconsistency in outcomes, as well as from meta-analyses of that research. Since methodologic and funding concerns blanket much of the IN research, it is important to remember that any meta-analysis or systematic review is only as strong as the studies it comprises.

Review of Efficacy for Use of IN Among Elective Surgical Populations

Among those undergoing elective surgery, most commonly for GI malignancy, improvements in post-operative infectious complications and LOS may result in a reduction in the cost of care. Additionally, pre-operative nutrition status, a topic that itself has a murky array of definitions, may explain the differences found in pre-op versus post-op IN outcomes.32

Despite at least 10 meta-analyses and systematic reviews (Table 2), it remains unclear which nutrients, in what amounts, with what timing and duration of treatment, and in which specific surgical populations IN may provide benefit. Researchers generally conclude that provision of IN among patients undergoing elective surgery may reduce the incidence of infection and decrease hospital LOS, but find no reduction in mortality. A more critical evaluation of the meta-analyses reveals wide heterogeneity with regards to populations and volumes of feeding delivered, and therefore potential differences in the amounts of IN components delivered. According to one group, perioperative administration of 500-1000 mL/day of an IN formula for 5-7 days prior to surgery, with continuation into the post-op period, reduces infection, other complications, and hospital LOS, regardless of preexisting nutrition status.33 They conclude that single-substrate administration does not impact clinical outcome and describe a potential synergistic effect between arginine and fish oils, recommending that these nutrients be used together; however, this has yet to be proven. Given the variation in formula composition and actual amounts delivered in various studies, it is impossible to determine which specific nutrient, if any, is improving outcomes.

To offer fair comparisons between groups where nutrition is provided to both, allowing IN to be the intervention or treatment, nearly all of the randomized controlled trials (RCTs) that provide the basis for the meta-analyses and systematic reviews described in Table 2 compare administration of IN to standard EN. Similar reductions in LOS have been reported when IN was utilized in the pre- versus post-operative periods.34 Hegazi, et al. reported that pre-op oral IN only provided benefit when compared to non-supplemented oral diets,25 suggesting that adequate delivery of basic nutrients results in prevention of post-op complications. However, given that the standard of care (control) is no nutrition intervention, perhaps the benefits of preoperative nutrition can be attributed to carbohydrate loading to maximize glycogen stores, as recommended for Enhanced Recovery After Surgery (ERAS), which has been shown to significantly reduce complications and hospital LOS.35,36 Though some researchers have reported that pre-op carbohydrate loading may prevent loss of LBM,37-39 reduce insulin resistance and tissue glycosylation in the operative period, and optimize glycemic control post-op,40-42 direct comparisons have not yet been made. Is it simply the provision of extra (or adequate) calories, above the ‘standard’ intake the patient would usually be able to consume in the pre-op period, that results in the benefits? More research is needed.

Review of Efficacy for Use of IN Among Critically Ill Populations

Given the role infectious complications play in the critically ill population, any intervention that might decrease that risk is worthy of investigation. Generally, the outcomes of meta-analyses examining the efficacy of IN among the critically ill (Table 3) are similar to those for the elective surgical population with regards to reduced incidence of infection and decreased hospital LOS, with no difference in mortality; however, some researchers16 suggest that provision of IN among the critically ill may result in adverse outcomes and therefore be a safety concern. As in the elective surgical population, the research on use of IN in the critical care arena is full of methodologic and heterogeneity concerns.

Much of the debate regarding the efficacy of IN among critically ill patients surrounds the safety of its use, specifically relating to arginine. In a 2001 meta-analysis, Heyland et al.16 concluded that arginine-supplemented IN provided no benefit among the critically ill and may potentially result in adverse outcomes, a conclusion based on a trend toward increased mortality among those receiving IN; however, these results were not statistically significant. Since this time, concerns regarding the safety of IN, specifically arginine supplementation, among septic patients have been hotly debated; however, research remains limited, and the debate has mainly surrounded three theories (though none is confirmed):

  1. Sepsis results in arginine deficiency, and supplementation may improve the septic state.43
  2. Sepsis is caused by excess nitric oxide (NO) production. Since NO is the end-product of arginine metabolism that causes vasodilation, arginine supplementation may exacerbate the septic syndrome.43
  3. Arginine infusion among septic medical and surgical patients does not cause hemodynamic instability.44

As many of the IN products available contain a number of potentially immune-modulating components, and it remains unclear which (if any) nutrient may be providing the most benefit, researchers have attempted to scrutinize immune-modulating nutrients independent from nutrition delivery.

IN, Individual Delivery, Biomarkers and Outcomes Among the Critically Ill

As the goal of IN is to enhance the immune response, researchers have examined inflammatory biomarkers concurrently with clinical outcomes in attempts to demonstrate potential changes in outcomes. However, it is imperative to remember that changes in surrogate markers do not necessarily translate to differences in clinical outcomes, a point that is often missed in interpretation. One group concluded that delivery of IN EN containing n-3 fatty acids, glutamine, and arginine among those with esophageal cancer undergoing concurrent chemotherapy and radiation resulted in a reduced rise in the inflammatory markers C-reactive protein (CRP) (p=0.001) and tumor necrosis factor-alpha (TNF-α) (p=0.014) compared to those receiving standard EN support.45 It is important to note that although statistically significant changes in markers of inflammation were found, these authors failed to connect their results to clinical outcomes, which is necessary to drive change in practice.

To further illustrate this point, researchers of the highly publicized ARDS Network Omega Trial (n=272) administered n-3 fatty acids, GLA, and antioxidants separate from the enteral formulas twice daily.5 Although delivery of n-3 fatty acids increased plasma EPA concentration 8-fold, there were no differences in ventilator-free or ICU-free days among those receiving the supplemental immune-enhancing nutrients.

In the Reducing Deaths Due to Oxidative Stress (REDOXS) trial comparing the effects of glutamine and/or selenium administered separate from the EN formula, researchers unexpectedly reported longer times to ICU and hospital discharge.6 Interestingly, post hoc analysis revealed that high-dose glutamine and/or antioxidants may be associated with increased mortality, especially in those with multiorgan failure. Furthermore, Van Zanten et al.46 found that, after adjusting for Acute Physiology and Chronic Health Evaluation II (APACHE II) scores, patients requiring mechanical ventilation who received an IN formula containing glutamine, n-3 fatty acids, and antioxidants had significantly higher 6-month mortality than those receiving an isocaloric, high-protein formula (54% vs. 35% in the control EN group, p=0.04). Conversely, a systematic review concluded that use of fish oil/antioxidant-containing enteral formulas or supplements was associated with a reduction in ICU LOS and ventilator days; however, after excluding the Omega trial5 in which fish oil was administered as a twice-daily bolus outside of the EN, use of continuously administered EN containing fish oil was associated with a significant reduction in mortality (p=0.004).30

The influence of the IN nutrients glutamine and selenium among patients requiring both enteral and parenteral support remains inconclusive. Although most have concluded that glutamine and selenium supplementation may result in a reduction of nosocomial infections among the critically ill, researchers of one meta-analysis concluded that glutamine supplementation via enteral, parenteral, or a combination of these routes posed no benefit in overall mortality or hospital LOS, but did result in a lower incidence of nosocomial infections among the critically ill.47 Furthermore, these researchers, as well as a separate group,48 concluded that high-dose supplementation (>0.5 g/kg/day) significantly increased mortality among the critically ill and resulted in higher rates of infection and longer ICU and hospital LOS. Appropriately, the A.S.P.E.N./SCCM 2016 guidelines suggest that supplemental enteral glutamine (above what is standard in EN formulas) NOT be provided to critically ill adults.31

Cost

The potential to reduce the cost of medical care was one of the driving forces behind initial efforts to study the effect(s) of IN on post-op morbidity, and it continues to influence the decision to use IN. Researchers have suggested that IN enteral formulas may be cost-effective when used in specific populations and healthcare settings;49,50 however, this is only if they work, which remains unclear. Products with IN properties are significantly more expensive than standard preparations (Table 1), with some IN EN formulations costing up to six times as much as a standard formula. Although nutrition support is widely accepted as a life-sustaining therapy, insurance coverage differs among payers and administration settings, making cost-benefit analyses complicated. Differences in coverage may depend on route of administration (oral, enteral, or parenteral).51 Therefore, clinicians must be cognizant of coverage to prevent a cost burden not only to the patient, but also to the healthcare system as a whole.

CONCLUSION

Despite the large volume of research conducted on the efficacy of IN products over the past three decades, there is still no consensus on whether or not they provide benefit. More concerning, some suggest potential risk to the critically ill. Researchers have attempted to find a pattern of potential benefit by conducting meta-analyses and systematic reviews. Overall, however, these have revealed no difference in the ultimate outcome of mortality when enteral IN was compared with standard EN support, in either surgical or critically ill populations. The literature is riddled with limitations, including research design, heterogeneity, and possible bias from conflicts of interest, preventing the ability to draw solid conclusions and make specific recommendations for clinical practice.

Guidelines and recommendations for use are derived from research conducted by a relatively small group of individuals, many of whom receive funding from the makers of IN formulas. Given the lack of consensus and the exorbitant cost associated with IN, clinicians must demand a well-constructed, multicenter, unbiased, robust study that addresses the limitations of previous research and is designed to test the true efficacy of these formulas among critically ill patients.

Nutrition Issues In Gastroenterology, Series #153

Vitamin D Deficiencies in Patients with Disorders of the Digestive System: Current Knowledge and Practical Considerations

There is a considerably high prevalence of vitamin D deficiency in patients with various disorders of the digestive system, including cystic fibrosis, acute and chronic pancreatitis, celiac disease, short bowel syndrome and inflammatory bowel disease. There are different causes of the vitamin D deficiency, and accordingly, there are different strategies for normalization of the vitamin D status in patients. In general, vitamin D normalization is beneficial for most patients. However, because there is evidence suggesting that vitamin D may be a negative acute-phase reactant, and as such is down-regulated during acute pancreatitis, it may be prudent to hold off on supplementing during the acute phase (and perhaps wait until the acute phase passes before checking levels) until there is evidence supporting benefit.

Zhiyong Han, PhD,1 Samantha L. Margulies,2 Divya Kurian,2 Mark S. Elliott, PhD,1 1Department of Biochemistry and Molecular Medicine, 2MD Class of 2016, The George Washington University School of Medicine and Health Sciences, Washington, DC

BRIEF INTRODUCTION TO VITAMIN D METABOLISM

Vitamin D was long considered an essential dietary nutrient until it became clear that it is a natural hormone of the human body. That is, vitamin D is synthesized in the human body and acts in ways that are no different from a steroid hormone.1

However, vitamin D synthesis is unique in that it depends on the irradiation of the epidermis of the skin by UV-B (ultraviolet B) light. Briefly, the energy of UV-B photons causes a structural change in 7-dehydrocholesterol, yielding pre-vitamin D3. The pre-vitamin D3 then spontaneously isomerizes to vitamin D3 (also called cholecalciferol). Vitamin D3 is exported to the blood circulation from the skin and is sequentially metabolized to:

  • 25(OH)D3 (25-hydroxyvitamin D3, or calcidiol), mainly by the enzyme CYP2R1 (25-hydroxylase) in the liver, and
  • then to 1,25(OH)2D3 (1,25-dihydroxyvitamin D3, also called “calcitriol”) by the enzyme CYP27B1 (1-α-hydroxylase) in the epithelial cells of the proximal convoluted tubules of the kidney.1

Of the different forms of vitamin D, only 1,25(OH)2D3 has biological activity.1 The 1,25(OH)2D3 in the blood circulation acts as an endocrine hormone to stimulate vitamin D receptor (VDR)-dependent gene regulation for intestinal absorption of Ca2+ and renal reabsorption of Ca2+ for the maintenance of a healthy blood Ca2+ level.1 PTH (parathyroid hormone) and reduced serum ionized calcium concentration induce renal CYP27B1 activity, thereby stimulating the renal production of 1,25(OH)2D3; increased concentrations of 1,25(OH)2D3, serum calcium, or phosphorus have the opposite effect (reduced CYP27B1 activity).1 Similar to other hormone systems, 1,25(OH)2D3 has a negative feedback effect on PTH production. Therefore, patients with chronic kidney disease have reduced production of 1,25(OH)2D3 and are likely to have increased PTH secretion and secondary hyperparathyroidism.

Although the proximal tubules of the kidneys are the primary site of 1,25(OH)2D3 production, activated macrophages in extrarenal tissues have also been shown to possess CYP27B1.2 Thus, conditions such as sarcoidosis can result in increased production of 1,25(OH)2D3 in macrophages, which can lead to hypercalcemia.2 The 1,25(OH)2D3 produced in these tissues acts locally in an intracrine or paracrine fashion to stimulate VDR-dependent expression of genes that affect the functions of these cells.2 This explains the potential immunomodulatory effects of VDR-activating synthetic vitamin D analogs shown in some studies.3,4 The production of 1,25(OH)2D3 in extrarenal tissues is independent of the blood levels of Ca2+ and PTH.

Vitamin D Deficiencies in Patients with Disorders of the Digestive System

The 25(OH)D3 circulates in the blood at ng/mL concentrations with a half-life of approximately 15 days, whereas 1,25(OH)2D3 circulates at pg/mL concentrations with a half-life of approximately 15 hours.5 Therefore, blood levels of 25(OH)D3 are commonly used to determine vitamin D status because of their technical convenience. Although controversy exists in the literature, vitamin D status is defined by the Endocrine Society as follows:

  • Vitamin D deficiency is defined by a serum 25(OH)D3 level of ≤20 ng/mL
  • Vitamin D insufficiency by a serum 25(OH)D3 level of 21-29 ng/mL
  • Vitamin D sufficiency by a serum 25(OH)D3 level of ≥30 ng/mL.6
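
These cutoffs map to a simple classification; the following minimal sketch (the function name is ours, and the thresholds in ng/mL are taken from the Endocrine Society definition above) makes the mapping explicit.

```python
# Minimal illustrative sketch of the Endocrine Society 25(OH)D3 cutoffs
# (ng/mL). Not a clinical tool; the function name is our own.

def vitamin_d_status(serum_25ohd3_ng_ml: float) -> str:
    if serum_25ohd3_ng_ml <= 20:
        # Levels below 10 ng/mL are described below as severe deficiency.
        return "deficient"       # <=20 ng/mL
    elif serum_25ohd3_ng_ml < 30:
        return "insufficient"    # 21-29 ng/mL
    else:
        return "sufficient"      # >=30 ng/mL

print(vitamin_d_status(12))  # deficient
```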

Given the above definition, vitamin D deficiency occurs in 40-60% of patients with various intestinal disorders, including celiac disease, short bowel syndrome, cystic fibrosis, Crohn’s disease, and ulcerative colitis.7 In addition, up to 70% of patients with acute or chronic pancreatitis develop vitamin D deficiency.8-11 Strikingly, it appears that over 40% of patients with acute pancreatitis had severe vitamin D deficiency at the time of admission; that is, their serum levels of 25(OH)D3 were less than 10 ng/mL.10,11

The high rates of vitamin D deficiency in patients with intestinal disorders and chronic pancreatitis are closely correlated with an increased incidence of osteopenia and osteoporosis or low-trauma fracture.7,12

Except in cases of acute pancreatitis, the causes of vitamin D deficiency in patients with disorders of the digestive system appear to include:

  1. Insufficient cutaneous synthesis of vitamin D
  2. Hyperparathyroidism secondary to hypocalcemia that results from calcium wasting
  3. Inflammation-associated conversion of 25(OH)D3 to 1,25(OH)2D3 (Table 1).7

However, vitamin D deficiency in patients with acute pancreatitis at the time of admission is likely an acute event caused by an as-yet-unidentified mechanism that actively down-regulates the blood 25(OH)D3 level during pancreatic inflammation.10,11

Insufficient cutaneous synthesis of vitamin D due to inadequate solar UV-B exposure is a major cause of vitamin D deficiency.7 The rate of pre-vitamin D synthesis in the epidermis is inversely related to the amount of melanin in the skin, as melanin is an excellent UV-B absorbent; hence, the rate of vitamin D deficiency is higher in patients with darker skin than in those with fair skin in the same season (Table 1). Furthermore, since solar UV-B doses are substantially reduced in the winter at latitudes above approximately 35° north (e.g., Albuquerque, Memphis, Charlotte) or south (e.g., Adelaide, Auckland, Melbourne) as a result of the solar zenith angle,13 the rate of vitamin D deficiency in the same patient population is much higher during the winter months than during the summer months.7 The significance of solar UV-B exposure for the maintenance of a sufficient serum level of 25(OH)D3 is further highlighted by the findings that sunlight exposure, not fat malabsorption, is the more important determinant of vitamin D levels in preadolescent children with cystic fibrosis,7 and that the amount of sunlight exposure, and not even oral supplementation of up to 800 IU vitamin D/day, was the key determinant of the serum vitamin D level over a period of four years in a cohort of patients with cystic fibrosis.14

Hyperparathyroidism secondary to hypocalcemia, which can result from gastrointestinal loss of calcium, often develops in patients with celiac disease or prior gastrectomy,7 and possibly in others with steatorrhea. Thus, excessive renal conversion of 25(OH)D3 to 1,25(OH)2D3 due to hyperparathyroidism can reduce the serum levels of 25(OH)D3 in these patients.7

Active intestinal conversion of 25(OH)D3 to 1,25(OH)2D3 during an inflammatory flare is independent of blood levels of PTH and calcium and appears to be induced primarily by the inflammatory cytokine TNF-α, because neutralization of TNF-α with a therapeutic antibody can effectively restore the blood level of 25(OH)D3 in individuals with different inflammatory conditions.7 Thus, conversion of 25(OH)D3 to 1,25(OH)2D3 in the inflamed intestine “drains” 25(OH)D3 from the blood circulation into the inflamed intestine and hence reduces the blood 25(OH)D3 level. The significance of this intestinal production of 1,25(OH)2D3 during an inflammatory flare is that 1,25(OH)2D3 acts locally to activate a biofeedback mechanism that enhances the antimicrobial activity of macrophages and promotes the intestinal epithelial barrier and tissue healing,15 and additionally suppresses the activities of pro-inflammatory T helper cells (i.e., Th1 and Th17 cells) in the intestine to prevent excessive inflammatory damage.2

The severe vitamin D deficiency in patients with acute pancreatitis10,11 deserves special attention. In an observational study, 74.4% (58/78) of patients admitted with acute pancreatitis were found to be vitamin D deficient (<20 ng/mL) within the first 2 days of admission.10 In a prospective study, patients not only had documented vitamin D deficiency at the time of admission, but also showed a progressive decrease in the blood level of 25(OH)D3 from day 0 to day 2 as a result of pancreatic inflammation.11 Given that inflammation is known to cause macrophage-mediated conversion of 25(OH)D3 to 1,25(OH)2D3,2 which may lead to hypercalcemia,16 which in turn can cause acute pancreatitis,17 it seems that active down-regulation of the blood 25(OH)D3 level is beneficial for patients with acute pancreatitis: it would reduce the substrate available for the production of 1,25(OH)2D3, preventing levels high enough to cause hypercalcemia, which could exacerbate acute pancreatitis. Thus, it is reasonable to suspect that 25(OH)D3 is a negative acute-phase reactant, specifically during the process of acute pancreatitis.

Normalization of Vitamin D Status in Patients with Disorders of the Digestive System

Not counting vitamin D supplements, there are two sources of vitamin D for the human body: vitamin D present in foods and vitamin D3 synthesized in the epidermis upon UV-B irradiation. Most natural foods, except certain fatty fish (e.g., salmon, bluefish, mahi-mahi, and swordfish) and a few species of edible mushroom, are poor sources of vitamin D.7 There are a few types of vitamin D-fortified food, such as milk, orange juice, and breakfast cereals.7 Given that a minimal daily intake of 600 IU of vitamin D is needed for individuals with minimal sunlight exposure to maintain vitamin D status,18 it is unrealistic for many to rely on vitamin D-fortified foods to acquire adequate vitamin D3.

Therefore, it is important to inform patients that sufficient solar UV-B exposure can result in cutaneous synthesis of the amount of vitamin D that the body needs. To determine sunlight exposure times that avoid sunburn, physicians could teach patients how to use the solar UV-B calculator (http://zardoz.nilu.no/~olaeng/fastrt/VitD_quartMED.html) developed by Webb and Engelsen.19 Alternatively, physicians could use short-term UV-B light therapy to stimulate cutaneous synthesis of vitamin D in patients.20,21 However, UV-B light treatment seems ineffective for patients with chronic pancreatitis;22 in addition, patients should be warned that UV-B exposure has added health risks, such as skin cancer.

Vitamin D absorption occurs mainly in the jejunum and terminal ileum.23 Therefore, patients with ulcerative colitis, which rarely involves the small intestine, may still have normal intestinal vitamin D absorption capacity. However, patients with active Crohn’s disease or short bowel syndrome have reduced intestinal surface area for vitamin D absorption (Table 2). Patients with cystic fibrosis and chronic pancreatitis have fat maldigestion and malabsorption and, consequently, malabsorption- and diarrhea-mediated wasting of vitamin D (Table 2). This reduced intestinal absorption or wasting of vitamin D explains why a daily intake of up to 800 IU vitamin D is ineffective in normalizing the vitamin D status in these patients,14 and why normalization requires long-term treatment with higher doses of oral vitamin D3 supplementation (Table 3).

In addition to vitamin D3 supplements, vitamin D2 (ergocalciferol) supplements are also widely used by patients. However, the blood level of 25(OH)D2 (25-hydroxyvitamin D2) cannot be accurately measured by certain commonly used methods, and thus the effect of vitamin D2 treatment on the blood vitamin D level cannot always be accurately determined.24 In addition, it has been suggested that vitamin D3 is preferable because vitamin D2 treatment is considered by some investigators to be less effective.25

The Cystic Fibrosis Foundation guideline recommends that all patients maintain a serum 25(OH)D level of at least 30 ng/mL.26 Hall et al. reviewed the available evidence and recommend treating patients with cystic fibrosis younger than five years of age with 12,000 IU vitamin D3 bi-weekly, and older patients with 50,000 IU vitamin D3 bi-weekly.27

For most patients with celiac disease, adhering to a gluten-free diet for 6 months or longer can result in normalization of the vitamin D status (Table 3).7,28

For patients with chronic pancreatitis, long-term supplementation with extremely high oral doses of vitamin D, 20,000-60,000 IU/week or even 140,000 IU/week (20,000 IU/day), is required (Table 3).8

For patients with short bowel syndrome, very high doses of vitamin D3 may be needed. In addition, an intake of 1,500 mg calcium/day should be considered if the patient develops bone metabolic disorders (Table 3).29,30

For patients with Crohn’s disease, it is recommended that the blood level of 25(OH)D3 be raised to above 30 ng/mL.31,32 Numerous studies have demonstrated that treatment with 2000 IU vitamin D3/day over a prolonged period of time is necessary to raise blood levels of 25(OH)D3 above 30 ng/mL (Table 3).7 In particular, the study by Raftery and colleagues31 demonstrated that raising the blood level of 25(OH)D3 above 30 ng/mL in patients with Crohn’s disease in remission can reduce the rate of relapse, help maintain intestinal barrier function, elevate serum levels of LL-37 (cathelicidin, an antimicrobial peptide that promotes intestinal healing and reduces intestinal inflammation), and improve quality-of-life scores.

However, it should be noted that normalization of the vitamin D status in some patients may not be achieved even with very high oral doses of vitamin D3 supplementation (e.g., 10,000 to 50,000 IU daily). The symptoms of vitamin D deficiency in these patients may therefore be treated with calcitriol (0.5 mcg) or synthetic, biologically active vitamin D analogs, such as paricalcitol (1 mcg), given twice daily, daily, or less frequently.3,4,33 However, there is no guarantee that calcitriol and vitamin D analogs will be easier to absorb. Also, it would be prudent for physicians to monitor blood levels of calcium, phosphate, and PTH to prevent the possible development of hypercalcemia.

CONCLUSION

It appears that elevation of the blood level of 25(OH)D3 to >30 ng/mL is beneficial to patients with various disorders of the digestive system. To achieve this, it is reasonable to first start patients on clinically tried regimens, depending on the disorder, for 2-3 months (Table 3). If treatment fails to achieve the goal, then different forms (such as crushed tablets or liquid), higher doses, or longer treatment may be needed; intramuscular injections of high-dose vitamin D3 (“Arachitol”, Solvay Pharmacia) may also be considered.34-36 Nevertheless, patients must be individually monitored on a regular basis to ensure that any adjustment of dose, form, or route of administration of vitamin D is made in a timely manner.

Finally, even though it is not conclusive at the present time that 25(OH)D3 is truly a negative acute-phase reactant in the context of acute pancreatitis, given that inflammation-associated production of 1,25(OH)2D3 can cause hypercalcemia,16 which is an established cause of acute pancreatitis,17 it is critical that physicians conduct a thorough investigation before deciding to give vitamin D replacement therapy to patients with acute pancreatitis simply because their serum 25(OH)D3 levels are low at the time of admission.

Acknowledgement

We wish to thank the Department of Biochemistry and Molecular Medicine, The George Washington University School of Medicine and Health Sciences for providing support.

FRONTIERS IN ENDOSCOPY, SERIES #54

Clinical Update on the Endoscopic Management of Ampullary Adenoma

Gandhi Lanke, MD, MPH,1 Douglas G. Adler, MD, FACG, AGAF, FASGE,2 1Plains Regional Medical Center, Clovis, NM; 2University of Utah School of Medicine, Gastroenterology and Hepatology, Salt Lake City, UT.


Ampullary adenomas (AA) are benign but, if untreated, can undergo malignant transformation into ampullary adenocarcinoma. AA can occur sporadically or in association with familial adenomatous polyposis (FAP). The incidence of AA is increasing due to more frequent use of imaging and endoscopy. Management of AA includes endoscopic ampullectomy (EA), local surgical excision, and pancreaticoduodenectomy, depending on the size, lymph node involvement, ingrowth into the bile or pancreatic duct, and the presence or absence of advanced duodenal polyposis. Accurate preoperative diagnosis and staging are essential in the management of AA. Endoscopic ultrasound (EUS) can aid in preoperative risk stratification by assessing size, regional nodal metastasis, and ductal and vascular invasion in high-risk ampullary lesions.

INTRODUCTION
The incidence of ampullary adenoma (AA) is increasing with the ever-more-frequent use of imaging and endoscopy (Figure 1). AA can occur sporadically or in the context of genetic syndromes such as familial adenomatous polyposis (FAP).1 Adenocarcinoma of the ampulla of Vater (AV) is relatively uncommon, accounting for 0.2% of gastrointestinal cancers.2 Ampullary adenoma can transform into ampullary adenocarcinoma. Intestinal mucosa near the ampulla is more prone to neoplastic transformation than any other site in the small intestine, as there is a transition from pancreaticobiliary epithelium to small intestinal epithelium and the area is constantly irritated chemically and mechanically.3 According to the SEER database, the incidence of ampullary cancer (AC) was 0.59 per 100,000 per year, and AC is more common in males than females.2 Accurate staging is important in the preoperative assessment of adenoma. Endoscopic retrograde cholangiopancreatography (ERCP) and endoscopic ultrasound (EUS) can aid in the preoperative staging of AA. Endoscopic ampullectomy (EA) can be safe and effective in experienced hands and can avoid surgical intervention in patients without ductal extension. This review article focuses on the pathogenesis, diagnosis, indications, technique, and outcomes of endoscopic management of AA.

ANATOMY
The AV is a spherical structure formed by the confluence of the common bile duct (CBD), the pancreatic duct (PD), and the distal aspect of the sphincter of Oddi muscle. The duodenal papilla is a nipple-like structure located on the medial aspect of the second portion of the duodenum. The ampullary region is a transition from pancreaticobiliary epithelium to small intestinal epithelium.3 The AV is located behind the major duodenal papilla and is covered by small intestinal-type epithelium. The entry of bile into the second portion of the duodenum is controlled by the smooth muscle fibers of the sphincter of Oddi, which open at the duodenal papilla and allow bile to flow into the small intestine. Periampullary tumors can originate from the pancreas, duodenum, distal CBD, or structures of the AV. Ampullary carcinoma can arise from within the AV.

Clinical Presentation
AA are usually found incidentally and are typically asymptomatic. However, they can present with jaundice, pruritus, abdominal pain, nausea, vomiting, anorexia, malaise, dyspepsia, and melena.4 Obstructive jaundice is usually caused by compression of the distal bile duct by the tumor. Jaundice at initial presentation, pancreatic invasion, and superior mesenteric lymph node involvement can predict advanced-stage ampullary carcinoma with poor prognosis.5 Iron deficiency anemia can occur secondary to blood loss from ulcerated ampullary tumors.6 Acute or recurrent pancreatitis can occur with obstruction of the pancreatic duct by an ampullary tumor.7

Pathogenesis
AC develops from preexisting adenomas or flat preneoplastic lesions. Most AA develop sporadically. Patients with familial adenomatous polyposis (FAP) are more prone to colorectal adenoma and AA.8 Yamaguchi and Enjoji defined three macrotypes of AC based on macroscopic appearance: intramural protruding (intraampullary), extramural protruding (periampullary), and ulcerating.9 The common channel is formed by the intestinal mucosa of the ampulloduodenum and the mucosa of the ampullo-pancreatico-biliary duct. Histologically, AC has two main types: the intestinal type and the pancreaticobiliary type. The intestinal type resembles tubular adenocarcinoma of the stomach or colon, and the pancreaticobiliary type is characterized by papillary growth with scant fibrous cores.10 AA of the intestinal type can be tubular, villous, or tubulovillous, and closely resemble adenomas of the intestine.

Diagnosis
Accurate preoperative diagnosis and staging are essential in the management of AA. The side-viewing endoscope (SVE) allows better visualization of the morphological features of an ampullary lesion and aids in the acquisition of tissue for biopsy during the procedure. To improve diagnostic accuracy, at least six biopsies from the ampulla, and biopsy several days after endoscopic sphincterotomy (ES), have been recommended by some authors, although in practice this does not always occur.11-13 Also, to prevent pancreatitis from forceps biopsy, the forceps should be directed away from the pancreatic duct orifice to avoid papillary edema (although in practice this is not always possible, and pancreatitis can result even if biopsies are performed in this manner).14 Preoperative forceps biopsy with SVE can miss AC in about 15-60% of AA, as a high percentage of AA contain small foci of invasive adenocarcinoma.15-18 Endoscopic retrograde cholangiopancreatography (ERCP) can assess for the presence and extent of any intraductal extension, allows placement of a prophylactic pancreatic duct stent to prevent post-ampullectomy pancreatitis, and allows biliary stenting to treat obstructive jaundice.19

Endoscopic ultrasound (EUS) aids in the preoperative assessment of the depth of mucosal invasion and of infiltration of the periampullary wall layers and pancreatobiliary ducts. The role of routine preoperative EUS when the AA is less than 1 cm or shows no suspicious signs of malignancy, such as ulceration, induration, or bleeding, is not clear, but it is often performed per protocol at many centers.20 EUS has a modest sensitivity of 77% and specificity of 78% for T1 lesions, and a sensitivity of 70% and specificity of 74% for nodal invasion.21 Many endoscopists use EUS universally before performing ampullectomy to evaluate the lesion thoroughly, others use it selectively, and some do not use it at all.

Intraductal ultrasound (IDUS) allows EUS from inside the biliary and pancreatic ducts. IDUS probes can be inserted through the accessory channel of a duodenoscope during ERCP into either the biliary or the pancreatic duct. IDUS is superior to EUS in the visualization of tumors of the major duodenal papilla, with an accuracy of 100% vs. 59.3%, sensitivity of 100% vs. 75%, and specificity of 62.5% vs. 50% (IDUS vs. EUS), respectively.22 Despite these benefits, IDUS is rarely performed given the additional cost and the need for a second EUS processor. Computed tomography (CT) and transabdominal ultrasound (US) are not adequate for staging ampullary tumors, but they can identify biliary and pancreatic duct dilation.23 CT can also identify locoregional lymph nodes and distant metastasis. Magnetic resonance cholangiopancreatography (MRCP) can non-invasively assess the extent of intraductal involvement and identify pancreas divisum, in addition to identifying biliary and pancreatic dilation, but small ampullary lesions are often missed on MRI with MRCP.19

Management and Ampullectomy Technique
Management of AA depends on the size and characteristics of the adenoma, the presence of concurrent duodenal adenomatosis, endoscopic expertise, and the willingness of the patient to undergo surveillance after papillectomy. In general, AA less than 2-3 cm are more amenable to endoscopic removal, but there are case reports of EA for lesions up to 4.5 cm when there is no intraductal growth or malignancy.24-26 Endoscopic characteristics of an AA including firmness, ulceration, and non-lifting after submucosal saline injection are suggestive of possible malignancy, and patients with these findings may not be candidates for endoscopic removal.25

The goal of endoscopic management of patients with AA should be complete excision of all adenomatous tissue when feasible (Figure 2). En bloc or piecemeal resection may be performed. The advantages of en bloc excision include accurate histological assessment because of clear margins, an increased likelihood of complete removal of the AA, and potentially decreased procedural time.19 However, for large AA or lesions with limited endoscopic accessibility, en bloc resection may not be feasible, and piecemeal excision is usually performed in these cases.

The equipment used includes a thin wire snare approximately 0.3 mm in diameter and a microprocessor-controlled electrosurgical generator. There is no specific type of snare that is universally recommended for EA, and many snares have been used to perform ampullectomy. Snare size should fit the size of the target lesion, if possible. Depending on the size and morphological characteristics of the lesion, a variety of stiff-type snares can be used.27 The spiral snare (20-mm spiral SnareMaster; Olympus, Tokyo, Japan) is preferred by some endoscopists to enable more tissue capture. The mini oval AcuSnare (15 x 30-mm mini oval; Cook Medical, Brisbane, Australia) can be used to remove residual tissue from the margin. For large exophytic lesions, the standard oval AcuSnare (25 x 55-mm; Cook Medical, Brisbane, Australia) can be used. The use of a thin wire snare maximizes the current density for swift transection and minimizes dispersion of energy to the pancreatic orifice, thereby theoretically reducing the risk of late stenosis.28 Final snare selection is left to the endoscopist.

The role of submucosal saline injection in EA is not clear. Anatomically, the duodenal papilla is continuous with the AV, and the AV is the confluence of the terminal pancreatic duct and common bile duct that extends deep into the muscularis propria layer of the duodenum. As a result, when submucosal saline is injected before excision of the duodenal papilla, the papilla can remain tethered to the ductal structures and may not lift as expected; this can make effective snare placement for en bloc resection difficult and can theoretically lead to incomplete resection.29 Nevertheless, many authors prefer to perform submucosal saline injection prior to ampullectomy, as it may reduce the risk of perforation and facilitate tissue removal. Most centers use normal saline, although some add indigo carmine (0.04%) and epinephrine (1:100,000) to the injectate.28

Ideally, all specimens should be retrieved after EA for histological evaluation. Anti-peristaltic agents such as glucagon or hyoscine butylbromide can be used to prevent migration of the specimen into the intestine.19 A commercially available retrieval net or endoscopic suction can be used to retrieve the tissue, although aspiration through the accessory channel of the duodenoscope can fragment the specimen. Occasionally the specimen is lost if it rapidly passes beyond reach following ampullectomy. Flattening and pinning the specimen onto a cork board or polystyrene block prevents curling and aids the pathologist in accurate assessment of the lateral and deep margins.28 The duodenoscope is typically reintroduced after retrieving the specimen to examine for stigmata of bleeding, active bleeding, and residual adenomatous tissue. Ablation therapies including monopolar coagulation, bipolar coagulation, Nd:YAG laser, photodynamic therapy, and argon plasma coagulation (APC) can be used to treat residual adenomatous tissue based on institutional availability and the preference of the endoscopist.19,30 The benefit of ablation therapy is controversial; some authors prefer APC over other modalities because it limits the depth of tissue injury at settings of 40-50 W. In general, APC is the most commonly used ablation method given its ease of use through the duodenoscope and its widespread availability.

The role of routine prophylactic pancreatic stenting after ampullectomy to prevent pancreatitis is also unclear. Some authors advocate placing a 5 French pancreatic stent only if the pancreatic orifice is not visible after EA. However, Harewood et al. showed in a randomized study that patients who underwent pancreatic duct (PD) stenting after EA had lower rates of post-ampullectomy pancreatitis than those who did not.31 Biliary stenting is recommended to prevent cholangitis from hemobilia when there is major bleeding, and to ensure bile drainage when there is concern for retroduodenal perforation.28,32 To minimize the risk of pancreatic ductal injury after ampullectomy, the pancreatic stent should be removed within a relatively short timeframe.28 Any residual visible adenomatous tissue can also be removed at the time of pancreatic stent removal. While pancreatic duct stents are widely used when performing ampullectomy, not all endoscopists use them in this context.

Complications
Common early complications after EA include pancreatitis, bleeding, perforation, and cholangitis. Late complications include papillary stenosis, pancreatic duct stricture, bile duct stricture, and adenoma recurrence. Outcomes of EA are discussed in detail in Table 1. Pancreatitis develops in 3-30% of patients following EA.33,34 A prophylactic pancreatic duct stent can reduce both the risk of developing pancreatitis and its severity if it develops. Routine prophylactic pancreatic duct stent placement is advocated by some authors to prevent pancreatitis after EA, although some studies showed no difference between patients who underwent EA with or without a pancreatic stent.31,35 Because EA constitutes a high-risk ERCP, rectal indomethacin is recommended unless the patient has a contraindication.36,37

Bleeding can be intraprocedural or delayed and occurs in 2-30% of patients following EA.30,38 Intraprocedural bleeding can usually be controlled with adrenaline injection, balloon tamponade, coagulation forceps, stenting, and/or hemoclip placement. Delayed bleeding can be mild and self-limited or severe and life-threatening; endoscopic intervention may be needed when there is hemodynamic compromise. Massive bleeding unresponsive to endoscopic intervention usually warrants angiographic embolization.

Perforation can be guidewire-induced, periampullary (during sphincterotomy), or luminal (usually occurring during the actual ampullectomy maneuver) and complicates 2-10% of EA resections.34,38 Guidewire perforations usually do not cause significant clinical injury. Most perforations are small and retroperitoneal and do not warrant surgery; early recognition and conservative management with intravenous (IV) antibiotics, bowel rest, and IV fluids are often all that is needed. In some cases, endoscopic closure can be accomplished with endoscopic clips. Surgical intervention is required if the patient shows signs of an acute abdomen or decompensation. For distal common bile duct (CBD) or periampullary injuries, a fully covered self-expandable metal stent (SEMS) can be beneficial.39

Cholangitis is uncommon after ampullectomy if a biliary stent is placed and can be managed with IV antibiotics. ERCP with stent placement or replacement may be necessary for biliary drainage if conservative management fails. Papillary stenosis (2-17%), which includes biliary and pancreatic duct stenosis, is usually a late complication of EA and can arise from scarring caused by the ampullectomy itself.40,41 Treatment of papillary stenosis includes sphincterotomy, stent placement, and balloon dilation. Catalano et al. found that papillary stenosis occurred more often in patients who did not receive a pancreatic duct stent, and they recommended prophylactic pancreatic duct stenting to prevent post-EA pancreatitis and pancreatic duct stenosis.30

Recurrence Rates and Follow-Up
Recurrence rates after EA range from 11-30%.25,42 Risk factors for recurrence include large lesion size, genetic predisposition, and possibly the absence of adjuvant thermal ablation (laser, APC) of residual tissue during the initial EA.30 Recurrence is usually treated endoscopically; if there is intraductal invasion or cancer, surgical intervention is recommended (Figure 3).

There are no specific guidelines on follow-up after EA; however, there is some consensus on an initial follow-up endoscopy at 3 months and then every 6 months for 2 years. Once there is complete eradication with no recurrence after 2 years of follow-up, yearly endoscopy is recommended thereafter.43,44 Some authors recommend that, after the initial 2 years of endoscopic follow-up, FAP patients undergo endoscopic surveillance every 2-3 years for life, given their 100- to 330-fold increased risk of developing duodenal cancer.45 For sporadic AA, endoscopic surveillance can be performed as clinically indicated.25 In patients with a genetic predisposition such as FAP or Gardner syndrome, the goal of surveillance is to detect high-grade dysplasia, which is more likely in large lesions; in patients with sporadic AA, the goal is to detect recurrence at the excision site.46 The severity of duodenal adenomatosis is graded by the Spigelman classification (stage 0-IV); with stage III or IV disease, there is a higher risk of recurrent AA and high-grade dysplasia, which makes endoscopic papillectomy less feasible.41 In patients with complex histories or unusual situations, follow-up can be individualized.

CONCLUSION
Endoscopic management of AA is safe and effective in appropriately selected patients in the hands of an experienced advanced endoscopist. A multidisciplinary team including gastroenterology, radiology, pathology, oncology, and surgery is key in the management of AA. Surveillance for recurrence should be individualized based on pathology and risk factors such as large lesion size and genetic predisposition. Surgery is recommended when there is intraductal extension into the common bile duct and/or pancreatic duct, invasive cancer on biopsy, or a large ampullary lesion that cannot be treated endoscopically.


NUTRITION ISSUES IN GASTROENTEROLOGY, SERIES #189

The Specific Carbohydrate Diet in Inflammatory Bowel Disease: The Evidence and Execution


Maithili V. Chitnavis, MD, Virginia Tech Carilion School of Medicine, Carilion Clinic Gastroenterology, Roanoke, VA. Kimberly L. Braly, RD, CD, CNSC, Kimberly Braly Nutrition Services and Seattle Children’s Hospital, Division of Gastroenterology, Seattle, WA.


Nutrition, specifically exclusive enteral nutrition, has long been considered a therapeutic option in patients with inflammatory bowel disease; less is known, however, about the specific carbohydrate diet (SCD). The SCD supports avoidance of certain complex carbohydrates (thought to be pro-inflammatory in nature), thus promoting intestinal healing. Traditionally, a step-wise or staged approach has been used for SCD initiation, with progression from the most easily digestible foods to more complex foods over time. Close monitoring of laboratory parameters and anthropometrics is recommended. A multidisciplinary approach to the SCD is ideal, with access to a registered dietitian who is trained in, or has experience with, the SCD.

INTRODUCTION
Inflammatory bowel diseases (IBD) such as Crohn’s disease (CD) and ulcerative colitis (UC) are chronic and, in some patients, debilitating conditions; CD can affect any portion of the gastrointestinal (GI) tract, while UC involves the colon, and both can be associated with disease relapse and progression. While many providers take a multidisciplinary approach to the care and treatment of patients with IBD, the focus of therapy, particularly in the adult IBD population, remains largely based on pharmacologic options. Even so, many patients are eager to try alternative and complementary medicine as a therapeutic option for IBD.
Exclusive enteral nutrition (EEN) can be used to induce and maintain remission in both the pediatric and adult IBD populations, but adherence remains very challenging,1-4 and relapse becomes likely upon resumption of a normal diet.5 The specific carbohydrate diet (SCD) is one such alternative that allows patients to eat “real food” and has thus piqued the interest of both patients and researchers. Fear of long-term consequences, lack of efficacy, and adverse reactions to medical therapy are often cited as reasons for patients to pursue the SCD; some perceive a greater benefit of the SCD compared to medical therapy.6,7

Nutritional books and the internet are filled with success stories of how the SCD has changed the lives of patients and resulted in symptomatic remission, often in children, but results from large-scale clinical trials are lacking.2 For example, one online survey of 51 IBD patients revealed that 84% of patients experienced symptomatic remission on the SCD, with 61% of all patients off of all medical therapy.8 To date, only seven clinical trials researching the SCD are registered with ClinicalTrials.gov.

Drs. Sidney and Merrill Haas published the “Specific Carbohydrate Diet” in 1951, reporting that a dietary approach to celiac disease and to cystic fibrosis with pancreatic insufficiency resulted in complete disappearance of GI symptoms in their patients.9 Dr. Sidney Haas had, however, been using the SCD to treat his patients for decades before this. The parent of one of his patients with ulcerative colitis, Elaine Gottschall, became a proponent of the diet as a treatment option for patients suffering from intestinal diseases and popularized the SCD in her book, Breaking the Vicious Cycle: Intestinal Health Through Diet.

The SCD can be well balanced but is very specific in the types of sugars and starches that are allowed (Table 1). The natural SCD permits single sugars, or monosaccharides, such as those found in some fruits, certain vegetables, and honey, as opposed to disaccharides (sucrose, for example) and complex polysaccharides (starches), as shown in Table 2. Even so, certain starches such as those found in dried beans and lentils may be consumed, as they have been tolerated by many patients.9 Research is ongoing into which foods are and are not permissible on the SCD and into the mechanisms by which these foods may be beneficial or harmful.

Carbohydrate intolerance is central to the rationale behind the SCD. The diet aims to prevent gut inflammation through strict avoidance of the types of carbohydrates noted above, as these are thought to exert the most influence over the intestinal microbiome.4,9 By avoiding these carbohydrates, small bowel mucosal injury and bacterial overgrowth can be reduced, thereby preventing downstream effects such as diarrhea and malabsorption. In addition to this traditionally proposed mechanism of action, researchers are also investigating whether food additives and preservatives may play a role in the inflammatory process.

The Evidence
As previously mentioned, large-scale clinical trial data on the SCD in IBD are lacking,2 but several studies have explored the role of the SCD in IBD, particularly in pediatric patients.1,10-13 Suskind and colleagues conducted a retrospective study of seven pediatric patients with Crohn’s disease, excluding those on immunosuppressants, and found that all patients experienced symptomatic remission within three months of initiating the SCD, with either normalization or improvement in laboratory parameters such as CRP, hemoglobin, albumin, and fecal calprotectin.11

The same group conducted an internet-based survey of 417 pediatric and adult respondents with CD, UC, and indeterminate colitis and demonstrated that the majority of respondents reported achieving clinical remission of IBD on the SCD, with 33% of patients reporting clinical remission two months after initiating the SCD and 42% reporting clinical remission at 6 and 12 months on the diet.6 While this was based on survey data, it highlights the importance of the patient perspective and a perceived benefit for patients trying the SCD, many of whom had symptoms such as abdominal pain, diarrhea, bloody stools, and limitations in their activity levels prior to initiation.6

Burgis and colleagues from Stanford conducted a retrospective study investigating the effects of the SCD on maintenance of remission in pediatric patients with CD over a one-year period.1 They found significant improvements in laboratory parameters (hemoglobin, albumin, ESR) with implementation of the SCD in patients treated with and without immunomodulatory therapy, and overall patient height and weight also improved. All effects persisted with liberalization of the diet, with the exception of weight gain (50% of patients lost weight when the SCD was liberalized). While the study was small (n = 11 patients), it was the first to investigate liberalization of the SCD.

Another survey-based case series of 50 patients with UC, CD, or indeterminate colitis from Rush University Medical Center demonstrated that 66% of patients reported complete resolution of IBD symptoms on the SCD after an average of 9.9 months.7 The majority of these patients were adults, but pediatric patients were also included. On average, patients rated the diet 91.3% effective in controlling acute flare symptoms and 92.1% effective in maintaining remission of IBD.

Although patients may experience symptomatic remission and improvement in laboratory parameters with dietary changes, the target of anti-inflammatory treatments in IBD remains mucosal healing. Studies investigating mucosal healing on the SCD are limited to the pediatric population, and although only small numbers of patients have been included, results are conflicting. Wahbeh and colleagues demonstrated a lack of mucosal healing in a retrospective study of 7 pediatric patients with Crohn’s disease on a modified SCD (i.e., permitting some foods normally restricted on the SCD), with a median duration of 26 months on the diet.12 Although one patient had mucosal healing on ileocolonoscopy, this patient had persistent upper GI Crohn’s disease.

In contrast, Cohen and colleagues demonstrated small bowel mucosal healing on capsule endoscopy in a cohort of 10 pediatric patients with active Crohn’s disease who were started on the SCD.13 Interestingly, mucosal healing as measured by the mean Lewis score for capsule endoscopy was seen at 12 weeks, but did not persist at 52 weeks. Only four out of the 10 patients had normal-appearing small bowel at 12 weeks as measured by the Lewis score. Other parameters of clinical disease activity (Harvey-Bradshaw Index, Pediatric Crohn’s Disease Activity Index) also improved significantly at 12 weeks, and effects were also noted to persist up to 52 weeks.

Results of the DINE-CD study (Trial of Specific Carbohydrate and Mediterranean Diets to Induce Remission of Crohn’s Disease), a multicenter randomized, open-label trial headed by researchers at the University of Pennsylvania, should provide more evidence of the utility of the SCD in the adult Crohn’s disease population.14 Both symptomatic remission and reduction of bowel inflammation as measured by fecal calprotectin are the primary outcome measures in this study, which is scheduled to complete in mid-2019 and will be the largest investigation into the application of the SCD in IBD patients to date.

Nutrition has often taken a “backseat” in the care of IBD patients in the United States, particularly in the adult population, whereas in Europe and Asia, EEN is frequently a first-line therapy. Future research on the SCD may allow dietary modification to come to the forefront of therapy options alongside pharmacologic therapy.

Nutritional Adequacy of the SCD
While the SCD is based on the exclusion of carbohydrates, it has been shown to be nutritionally adequate in comparison to healthy peer reference diets; even so, certain deficiencies, particularly of calcium and vitamin D, can occur.10 In a study by Braly and colleagues at the University of Washington that included eight pediatric IBD patients, the majority (64%) exceeded 100% of their recommended daily allowance (RDA) for energy intake, and all individuals consumed approximately three times the RDA for protein. Six of the eight patients gained weight during the study. However, 100% of patients had intakes below the RDA for vitamin D, and 75% of patients’ daily intakes were less than the RDA for calcium.10

Background and Diet Implementation
Traditionally, the SCD has been introduced using a step-wise or staged approach. Food introduction begins with the most easily digestible foods and advances to more complex foods, including raw fruits, vegetables, legumes, and specific dairy products, over a variable time period. Researchers are looking into expediting food introduction, given that the diet can be nutritionally lacking until the full SCD is reached (PRODUCE study15). Many pediatric GI providers do not currently use the staged approach, as evidence of its efficacy over initiation of the full SCD is lacking.

For patients with IBD initiating the SCD, a multidisciplinary approach to their care, with access to a registered dietitian, is recommended to ensure adequate micro- and macronutrient content in the diet and proper education on the SCD. Without proper nutrition counseling, the diet can be lacking in essential nutrients.10 Supplementation with an SCD multivitamin and/or vitamin D has also been suggested.10 A food journal detailing snacks and meals can also be helpful for patients working with dietitians, as it is easy for foods with hidden prohibited ingredients, typically pre-made foods, spices, and seasoning mixes, to make their way into the diet without close monitoring. The SCD is not recommended for patients who follow a vegan diet, given the difficulty of achieving adequate caloric and nutrient intake with the combined limited food options; vegetarians, however, can achieve a nutritionally complete diet on the SCD. Additional resources regarding SCD-approved supplements, ingredients to be avoided, and meal/snack ideas can be found on the PRODUCE website.

When first presenting the SCD to a patient and family, diet implementation is most successful when the patient is on board with the diet and the whole family participates as much as possible. Encourage patients to transition to the full SCD within a two-week period, as they will need to stock a pantry of new food items and purge foods that are not allowed on the SCD. To prevent inadequate nutritional intake, it is important to discuss and adhere to a timeline over which the full SCD will be initiated.
All diets can alter the microbiome; however, the ideal composition of beneficial versus harmful bacteria has yet to be determined, and research on microbiome modulation through diet in IBD is ongoing.15,16 An important aspect of the SCD is restoration of “good gut bacteria” through a varied diet and the addition of probiotics from certain allowable fermented foods, in particular the SCD homemade yogurt. This yogurt is fermented for 24 hours, allowing for fermentation of the sugar lactose; for this reason, many who are lactose intolerant tolerate the SCD yogurt in moderation. Yogurt-making instructions can be found on the PRODUCE website (see above). The SCD yogurt also provides calcium and vitamin D when cow’s milk is used and can be an excellent calorie source for patients struggling with low weight. It can be made with whole milk or even a mixture of half and half with whole milk. The SCD yogurt can also be made with homemade nut milk or goat’s milk, but the nutritional content of these varies considerably.17

Providing nutritional supplementation when indicated, tips for social situations, weight loss prevention strategies, and resources that patients can reference for meal and snack ideas encourages diet adherence and success (see Tables 1-3).

Clinical Pearls for Frequently Asked Questions
Pre-made foods
SCD patients and families frequently ask about pre-made SCD convenience foods that save time. While some “convenience” SCD foods exist, the premise of the diet is centered on more basic, whole foods. Additionally, companies can change the ingredients in their products, allowing proscribed ingredients to make their way into the diet unbeknownst to the patient. Therefore, regular intake of these foods is discouraged when the diet is being used as a treatment modality.

Probiotics
Families frequently inquire about supplementation with a probiotic. A varied diet with fresh fruits and vegetables, legumes, various proteins, and healthy fats is one of the best sources of prebiotics and probiotics, which can help stimulate the growth of the intestinal microbiome. On the SCD, the SCD yogurt can also be a beneficial source of probiotics. Some patients dislike the taste of the yogurt alone, so adding it to smoothies, various dishes, and baked goods is recommended.

Organic and/or Grass-Fed Meats
Patients are encouraged to select organic products whenever possible. Certified organic is a third-party certification that must meet USDA criteria. Organic foods cannot be irradiated, genetically modified, or grown using synthetic fertilizers, chemicals, or sewage sludge. The organic label on meat and poultry means the animal was not treated with hormones or antibiotics and was fed only organically grown feed (with no animal byproducts). Animals raised for organic meat must have access to the outdoors, and grass-eating animals must have access to pasture. Antibiotic resistance associated with eating large amounts of meat from non-organic sources remains a concern among providers.

Tips for produce include selecting organic options, when available, for foods listed on the “Dirty Dozen” (a list of fruits and vegetables with the highest pesticide residues) published annually by the Environmental Working Group (https://www.ewg.org/foodnews/dirty-dozen.php). Conversely, selecting organic varieties of the produce on the “Clean Fifteen” list is not necessary, as these non-organic fruits and vegetables are least likely to contain pesticide residues according to the Environmental Working Group.

On the SCD, dietitians recommend that patients consume a balance between plant and animal-based proteins.

Monitoring
As with medication therapy, it is essential to monitor symptoms, laboratory parameters, and anthropometrics to assess the efficacy of the diet.

Labs
Inflammatory markers, stool calprotectin, hemoglobin/hematocrit, and vitamin D should be followed at the provider’s discretion.

Anthropometrics
Clinically monitoring weight changes, height velocity in children, and BMI is important in the setting of IBD, particularly while on an elimination diet.

Weight loss or linear growth deceleration can indicate inadequate caloric intake or may suggest ongoing inflammation.

Long-Term Outlook
Prospective studies are underway comparing a more liberalized SCD with the traditional SCD, in hopes that some patients may tolerate a more lenient diet while still maintaining remission of their disease (PRODUCE study15).

CONCLUSIONS
The SCD has had significant support among patients with IBD and other GI disorders since it was popularized by Elaine Gottschall.9 While small studies have demonstrated nutritional adequacy, symptomatic remission, and improvement in laboratory parameters among IBD patients on the SCD, these studies have mainly been limited to the pediatric population, and their findings and others (e.g., mucosal healing) remain to be demonstrated in large-scale clinical trials. Increased awareness of the SCD among patients, providers, and dietitians has fueled interest in further research into this dietary option, which may become an integral part of a multidisciplinary approach to IBD patient care in the near future.


FELLOW’S CORNER

Acute Colitis in a Recent Immigrant from the Philippines


Paris Charilaou, MD,1 Devendra Enjamuri, MD,1 Andrew Korman, MD2 1Gastroenterology & Hepatology Fellows; 2Gastroenterology & Hepatology Attending, Director of Advanced Therapeutic Endoscopy, Division of Gastroenterology & Hepatology, Department of Medicine, Saint Peter’s University Hospital/Rutgers RWJ Medical School, New Brunswick, NJ


CASE PRESENTATION/INTRODUCTION
A previously healthy 38-year-old male from the Philippines immigrated to the United States months prior to presenting to the emergency department with a three-month history of diarrhea and diffuse, crampy abdominal pain associated with tenesmus and hematochezia. The diarrhea had been progressive, with up to 10 watery, small-volume bowel movements a day, and he had experienced a 20-pound weight loss over that period. There was no nausea, vomiting, fever, rash, or arthralgias. He had not been taking any non-steroidal anti-inflammatory medications or antibiotics. On further history, he disclosed that he was homosexual. He had no family history of malignancy or inflammatory bowel disease. He did not smoke, drink alcohol, or use illicit drugs.

His physical exam was unremarkable except for severe pain and nodularity of the rectal mucosa on rectal exam, without blood or other palpable masses.

Basic laboratory testing included a normal complete blood count and a comprehensive metabolic panel. Initial routine stool studies, including ova and parasites, were negative.

Colonoscopy revealed multiple erosions in the terminal ileum and multiple well-demarcated ulcers throughout the colon (Figure 1, blue arrows). Several nodules were found in the anus and rectum. Histopathology showed moderate architectural distortion with acute and chronic inflammation, reactive epithelial changes, and intracytoplasmic and intranuclear inclusions (Figure 2).

QUESTIONS
Question 1. What is the most likely diagnosis?
Question 2. What is the next step in the management of this patient?
Question 3. What is the most important consultation you should consider at this time?
Question 4. What are the other gastrointestinal manifestations of this pathogen?

Answer 1.
In this adult patient with chronic diarrhea, infection and inflammatory bowel disease are the top two diagnostic categories to consider. Given the complete history and presentation of this patient, including his immigrant status, sexual practices, and weight loss, human immunodeficiency virus (HIV) infection and its potential complications should be entertained. In doing so, one should consider the different entities that may be encountered depending on whether a patient has received highly active anti-retroviral therapy (ART) or not (see Table 1).1,2 Our patient was found to be HIV positive, with a CD4+ count of 69/µL, consistent with acquired immunodeficiency syndrome (AIDS). In the post-ART era, the complications encountered are usually medication-associated adverse effects (Table 1).1,2 The pre-ART complications that need to be considered in an AIDS patient with chronic diarrhea include cytomegalovirus (CMV) colitis, cryptosporidiosis, microsporidiosis, Mycobacterium avium complex (MAC), Shigella, Campylobacter jejuni, and Clostridium difficile.2 Idiopathic AIDS-related enteropathy should also be considered if all diagnostic studies are negative. The most likely diagnosis in this case was CMV colitis, as it is the most common pathogen causing chronic diarrhea in patients with AIDS and a CD4+ count < 100/µL. The colonic biopsies stained positive for CMV, and a diagnosis of CMV colitis was made.

CMV serum antigen, antibodies, and polymerase chain reaction (PCR) cannot be used to determine invasive gastrointestinal CMV infection,3,4 as most patients have already been colonized by CMV and have seroconverted. The clinical picture, together with the endoscopic and pathologic findings, is indicative of an invasive CMV infection that warrants treatment. CMV DNA levels have been shown to predict disease severity but do not play a role in diagnosing active gastrointestinal disease.3 In patients with diarrhea on ART, medications such as protease inhibitors, nucleoside reverse transcriptase inhibitors, delavirdine, maraviroc, raltegravir, cobicistat, and elvitegravir/cobicistat have been implicated and should be considered as potential etiologies.1

Answer 2.
The patient should be started on treatment for invasive CMV infection (Table 2) as well as ART for HIV. His diarrhea resolved after approximately three weeks of valganciclovir while receiving ART.

Answer 3.
Every patient diagnosed with CMV infection should undergo fundoscopy to exclude CMV retinitis; thus, an ophthalmology consult is mandated. CMV retinitis, if present, requires close follow-up to ensure remission and prevent blindness.

Answer 4.
CMV can affect multiple gastrointestinal (GI) organs, most commonly in immunocompromised patients. CMV esophagitis, gastritis, and enteritis are other luminal gastrointestinal manifestations. Esophagitis usually presents with odynophagia (rather than dysphagia), and endoscopy may reveal multiple shallow ulcers, which can be confirmed with biopsies taken from their centers. Taking at least three biopsies yields a sensitivity of 80%, reaching 98% with 10 biopsies.5 The differential diagnosis of CMV esophagitis includes HIV-associated idiopathic ulcers and herpes simplex virus (HSV) esophagitis. Gastritis often presents with non-specific symptoms of epigastric pain, nausea, and vomiting. A specific yet less common presentation is postural epigastric pain with relief in the supine position.6 Endoscopy may reveal ulcerations with erythematous mucosa in the antro-pyloric region. Enteritis, including duodenitis, may present with severe diarrhea, especially in post-transplant patients. The differential diagnosis includes lymphoma and graft-versus-host disease; serial biopsies may be required to differentiate CMV enteritis from the latter.7 Potentially fatal complications include perforation and peritonitis.

CMV hepatitis may be seen in immunocompetent patients, commonly presenting as subclinical liver enzyme elevation, typically in a hepatocellular pattern. In symptomatic cases, liver enzyme elevations are more severe, with signs of hepatic dysfunction and even portal vein thrombosis.8 The differential diagnosis should include hepatic granulomas, especially in patients with prolonged unexplained fevers. In post-liver transplant patients, CMV hepatitis from reactivation can lead to or resemble acute allograft rejection, especially in sero-mismatched donor/recipient pairs (i.e., CMV-positive donor with CMV-negative recipient).9

CONCLUSION
In patients with chronic diarrhea and a clinical suspicion for HIV/AIDS, CMV colitis should be suspected, as it is the most common pathogen implicated in these cases. Once the diagnosis is made and treatment is started, the patient should be referred to an ophthalmologist to rule out retinal involvement, which would necessitate close follow-up during the treatment period.

