
Stuck on You – A Familial Tale of Eosinophilic Esophagitis


Michael Root, Medical Student, MS4; Marianna Papademetriou, MD, Fellow; David M. Poppers, MD, PhD, Clinical Associate Professor of Medicine. Division of Gastroenterology, NYU Langone Health, New York University School of Medicine, New York, NY

INTRODUCTION

Eosinophilic esophagitis (EoE) is a chronic, immune/antigen-mediated allergic condition of the esophagus characterized by dense eosinophilic infiltrates. The prevalence of EoE has increased over the last few decades, and the disease has become an important entity encountered by primary care physicians, gastroenterologists and allergists.1 An association with atopic conditions suggests that EoE may be driven by both genetic and environmental factors, including food-related exposures.2 Here, we present two cases of EoE in adult brothers with an update on the known genetic involvement in this disease.

Case Report

An 18-year-old male with allergic rhinitis and food allergies presented with abdominal pain, foul-smelling bowel movements and weight loss of one year’s duration. Esophagogastroduodenoscopy (EGD) revealed linear furrows throughout the esophagus (Figures 1a and b). Biopsies demonstrated eosinophilia of up to 150 per high-power field (HPF), consistent with eosinophilic esophagitis (EoE). Serum allergen testing indicated an elevated serum IgE (380 IU/ml) and multiple food sensitivities. The patient was initially placed on a proton pump inhibitor (PPI) and scheduled for a repeat EGD to assess response and confirm the suspected EoE diagnosis. In the interim, the patient began experiencing dysphagia, primarily to liquids but also to solids. Repeat EGD showed unchanged linear furrows in the mid and proximal esophagus (Figures 1c and d); however, biopsies showed a significant reduction in eosinophilia to 22/HPF. The patient’s symptoms improved after an 8-week PPI course and he has not required further treatment with topical steroids.

His brother is a 36-year-old with a history of gastroesophageal reflux disease (GERD) and a one-year history of progressive dysphagia to solids. He has no known allergies or atopic conditions. On initial EGD, the esophageal mucosa appeared normal (Figures 1e and f); however, biopsies yielded eosinophilia of up to 150/HPF, concerning for EoE. His serum allergen panel was negative, with a mildly elevated IgE (163 IU/ml). The patient completed an 8-week PPI course with improvement in dysphagia but had not undergone a follow-up EGD at the time of this writing. He is being followed by an allergist and primary care physician for further evaluation of potential food triggers.

Discussion

Eosinophilic esophagitis is an increasingly prevalent condition encountered by a variety of health care providers. Until recently, the entity was poorly understood, leading to delays in diagnosis and treatment that increase the risk of complications, including esophageal strictures.3 Understanding the heterogeneous clinical presentation and the underlying patient demographics and risk factors, principally family history and association with atopic conditions, is crucial for timely and accurate diagnosis.

Here we describe two adult male siblings who presented with symptomatic EoE within the same year. Both were diagnosed with a high burden of eosinophilia on endoscopic biopsies, with some differences in the details of their clinical presentations and serologic and endoscopic findings. One patient demonstrated classic endoscopic findings of linear esophageal furrows with a history of atopic disease (food allergies and allergic rhinitis), whereas his sibling had a normal appearing esophagus and no history of atopy. The patients presented in the second and fourth decades of life, respectively, illustrating the delayed diagnosis and varying latency periods of disease manifestation seen in the adult population.

Studies support a heritable component in EoE, with recurrence risk ratios (RRRs) in first-degree relatives of patients ranging from 10 to 64 compared to the general population, a stronger relationship than that observed in other atopic diseases such as asthma.4 The RRRs were found to be highest in brothers (64) and fathers (42.9) of probands, compared to sisters and mothers.4 However, research on the relative contributions of genetics and environment to this condition is limited. Alexander et al. provide insight through analysis of nuclear-family and twin cohorts of EoE probands. The authors estimate that the combined gene-environment heritability for the nuclear-family cohort was 72%, with common environment accounting for most of the observed variation.4 Furthermore, dizygotic twins had a significantly higher frequency of EoE than non-twin siblings, which suggests not only the importance of a shared environment but also the timing of early-life exposures that may influence genetically susceptible family members.4 A population-based study likewise supports an increased risk in first-degree relatives and extends the analysis to more distant relatives: both second-degree relatives and first cousins showed an increased odds ratio (OR) of concordant disease, supporting a heritable component in family members less likely to share common environments.5

Clinically, the heterogeneity of presentation, as well as symptom overlap with more common conditions such as GERD, has proven to be a roadblock to timely and accurate diagnosis of EoE. As discussed, our two patients differed in disease latency, symptoms, and endoscopic appearance. While prior studies have not shown a statistically significant difference in signs and symptoms, endoscopic appearance, or atopic status between familial and sporadic EoE patients, our case study suggests there may be more variability within the familial EoE population than previously recognized.6

Despite different clinical presentations, the histopathologic similarity between the two patients presented here is consistent with reports of a genomic “EoE transcriptome” that may be conserved across EoE patients regardless of sex, age, or atopic/allergic status.7 The eotaxin-3 (eosinophil-specific chemoattractant) gene has been identified as a highly over-expressed gene in the transcriptome, with end-organ eosinophilia being strongly correlated with both eotaxin-3 mRNA levels as well as disease severity.7 The gene expression profile also differed from patients with chronic esophagitis, including GERD, highlighting a potentially unique downstream pathway for diagnosis and treatment of EoE.

According to the 2013 ACG Practice Guidelines, the first step in management of suspected EoE is an 8-week PPI trial followed by repeat EGD to assess clinical and histological response. A lack of PPI response is consistent with EoE, whereas a positive response places a patient in a category known as PPI-responsive esophageal eosinophilia (PPI-REE), requiring the physician to rule out GERD as a possible cause of the eosinophilia.8 Interestingly, both of our patients had clinical responses to an 8-week PPI trial, one of whom also showed histologic evidence of response with significantly reduced esophageal eosinophilia.

There is growing evidence that PPI-REE is not a separate clinical entity but rather lies within the spectrum of EoE. Patients with EoE and PPI-REE not only share the same demographics, clinical presentations, and endoscopic characteristics, but also have indistinguishable downstream immunohistochemical profiles of certain inflammatory markers, including eotaxin-3.9 Furthermore, these same markers were useful in distinguishing EoE patients from controls with GERD or dysphagia.9 The strict cutoff of persistent eosinophilia ≥15/HPF for failing a PPI trial also blurs the distinction between these two entities. Strictly speaking, our first patient would qualify as a failed responder despite clinical improvement and a drastic reduction in esophageal eosinophilia, which is not uncommon in patients with typical EoE presentations. As such, there is growing support for reclassifying PPI-REE as a subtype of EoE in which a PPI trial is a safe and effective first-line therapy rather than a diagnostic test.10
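The strict histologic cutoff can be expressed as a simple decision rule. The sketch below (an illustration only, not a clinical tool; the function and variable names are our own) shows why the ≥15/HPF threshold labels our first patient a non-responder despite an 85% reduction in eosinophilia:

```python
# Illustrative sketch of the strict histologic cutoff for a PPI trial in
# suspected EoE: >=15 eosinophils/HPF on repeat biopsy = "failed" trial.
# Names are hypothetical, chosen for this example only.

EOS_CUTOFF_PER_HPF = 15  # persistent-eosinophilia threshold after the 8-week PPI course

def classify_ppi_trial(pre_eos_per_hpf, post_eos_per_hpf):
    """Classify a PPI trial result by the strict histologic cutoff."""
    reduction = 1 - post_eos_per_hpf / pre_eos_per_hpf
    if post_eos_per_hpf < EOS_CUTOFF_PER_HPF:
        label = "PPI-responsive esophageal eosinophilia (PPI-REE)"
    else:
        label = "failed PPI trial (consistent with EoE)"
    return label, reduction

# Our first patient: 150/HPF before and 22/HPF after the 8-week course.
label, reduction = classify_ppi_trial(150, 22)
print(label)                                   # failed PPI trial (consistent with EoE)
print(f"{reduction:.0%} reduction in eosinophilia")  # 85% reduction in eosinophilia
```

The binary cutoff discards the magnitude of improvement, which is precisely the information that motivates reclassifying PPI-REE within the EoE spectrum.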

In conclusion, EoE represents an increasingly important condition to recognize in a variety of clinical settings. Proper diagnosis and treatment can reduce the risk of long-term consequences such as esophageal strictures and can also benefit family members who may have yet-undiagnosed disease. New insights into genetic susceptibility and the importance of early-life exposures support a complex pathogenesis of this disease, of which our understanding is improving. Our case study adds to the growing body of evidence supporting a familial inheritance of EoE while highlighting the diverse clinical presentations that create additional challenges for healthcare providers.


A Special Article

Real-Time Radiographic Identification of Contrast Consistency in Modified Barium Swallow Studies: An Alternative Technique


Contrast agents of varying consistency, ranging from thin liquids to solids, are utilized in modified barium swallow (MBS) studies, typically without real-time radiographic labeling. We describe a method for identifying the contrast consistency using radiographic labels during each fluoroscopy sequence. Two cases demonstrating the utility of the radiographic labels in clinical practice are described in detail. Our labeling method is an easily implemented and cost-effective technique to improve accuracy. Real-time labeling of MBS studies facilitates assessment across sequential studies and across different RIS/PACS systems and will reduce ambiguities and misinterpretations.

Rupert K. Hung, MD,1 Jamie Muhly, MS, CCC-SLP,1 Mamie Gao, MS4,2 Gary Gong, MD, PhD,1 Martin Auster, MD, MBA.1 1Johns Hopkins Medical Institutions, Baltimore, Maryland; 2Texas Tech University Health Sciences Center, Odessa, TX.

INTRODUCTION

Oropharyngeal dysphagia is a potential complication of numerous neurologic and muscular diseases, including stroke, multiple sclerosis, Parkinson’s disease, dementia and myositis, and may also arise from structural compression, such as by head and neck cancers.1,2,3 Presently, the modified barium swallow (MBS), or videofluoroscopic swallow study (VFSS), is the modality of choice for evaluating this type of dysphagia. The study utilizes different consistencies of contrast (thin liquid, nectar, honey, pudding and solids) to assess oral, pharyngeal and epiglottic dysfunction that may lead to aspiration events with resulting pneumonia.4 In current clinical practice, multiple consistencies are imaged in succession, as a patient may tolerate ingestion of one consistency but not another. This information is vital for the speech language pathologist (SLP) to determine appropriate dysphagia treatment plans and to accurately prescribe nutritional recommendations in order to prevent aspiration and malnutrition.5 Typically, identification of the administered consistency is provided verbally at the time of imaging or post-procedurally through hand-written notes, post-procedure annotations of images or, less commonly, audio recording. Real-time radiographic labeling of the consistency administered at the time of imaging is typically absent. Despite the advantages of annotating the MBS exam to reduce ambiguity and potential miscommunications, such labeling is not common practice. The lack of real-time labeling by the examining radiologist may be due to perceived reductions in efficiency, increased work burden and cost of implementation. Furthermore, lack of adequate labeling for any individual study may be seen as inconsequential. With serial swallowing evaluations, which are commonly performed in the setting of acute stroke to assess neuromuscular recovery, reliable communication of the consistencies administered becomes even more critical.

The volume of serial swallowing studies has progressively increased over the past decades due to more frequent assessments for dysphagia in patients with acute stroke, myositis and progressive neuromuscular diseases.1 As significant improvements in swallowing function may be rapid in some patients, correct identification of improvement compared to previous studies is important in preventing unnecessary invasive interventions such as placement of a long-term percutaneous endoscopic gastrostomy (PEG) tube or nasogastric tube (NGT), or initiation of total parenteral nutrition (TPN). This further highlights the importance of a reliable radiographic labeling technique.

With more frequent indications for serial assessment, regular labeling of the contrast consistency administered will enhance interdisciplinary communication and create less ambiguity and greater ease in evaluating both retrospective and serial studies, particularly if the studies are performed by different members of the healthcare team or come from outside referrals. We therefore detail an alternative technique by which the examining radiologist provides real-time radiographic identification of the contrast consistency administered at the time of imaging.

Materials and Methods

Labels denoting the different contrast consistencies were created from radiopaque alloy letters by Pb Markers, an online company specializing in custom-made markers. Each label measured approximately 2 x 1 inches and cost US $10 to procure. The labels were then reversibly attached by Velcro pads to a painter’s stick measuring 20 x 1.5 inches. One end of the painter’s stick displays the contrast consistency being administered, and the other end holds the unused labels (Figure 1). At the beginning of each consistency trial of the swallowing study, the examining radiologist passes the end of the painter’s stick bearing the appropriate Pb label between the patient and the image intensifier. This provides radiographic identification of the contrast consistency in real time at the time of imaging.

RESULTS
Case 1. Use of Radiographic Labels in Clinical Practice

A 77-year-old woman with a history notable for gastroesophageal reflux disease and diabetes, but without a history of dysphagia, underwent an MBS study (Figure 2). Panels 2A-2D show the placement of the radiographic labels prior to administration of the contrast material. Panels 2E-2H show the pharyngeal phase of swallowing with the contrast material corresponding to the labels in the upper panels. The study revealed grossly adequate oropharyngeal swallowing function with all contrast consistencies administered, without laryngeal penetration or aspiration. Notably, without adequate labeling of each consistency, one cannot reliably distinguish the contrast consistencies from fluoroscopic appearance alone.

Case 2. Improved Efficiency with Radiographic Labeling in Serial Evaluations

A 59-year-old male, hospitalized for severe burns, underwent an MBS study which revealed pharyngeal deficits resulting in aspiration on initial assessment. Figures 3A and B show the initial swallowing assessment with thin and nectar liquids without any radiographic labeling at the time of imaging. A follow-up evaluation was completed one week later (Figures 3C and D). The absence of real-time labeling made it difficult to distinguish changes between contrast consistencies within the initial study as well as across serial studies. With more frequent indications for serial assessment, as in this case, real-time labeling of contrast consistency will enhance interdisciplinary communication and create less ambiguity in evaluating both retrospective and serial studies from the same or different institutions.

DISCUSSION

We presented two cases where real-time radiographic labeling facilitated diagnostic evaluation by providing accurate and unambiguous identification of the contrast consistencies administered. Presently, the identification of the contrast consistency administered during fluoroscopy is typically provided verbally at the time of imaging between members of the healthcare team and annotated post-procedurally through hand-written, audio or digital means. Current practices may be prone to errors stemming from misinterpretations between team members at the time of imaging, and erroneous recall occurring post-procedurally due to delays in annotation.

We believe that adoption of a system in which the examining radiologist labels the contrast consistencies in real time at the time of imaging would improve efficiency and reduce ambiguity and potential errors from miscommunication. Hand-written, audio or digital annotations do not always accompany the fluoroscopic images, leading to delays in assessment, particularly during retrospective reviews. Notably, some audio recordings do not become part of the patient’s electronic medical record (EMR) but are stored on a separate disc or on a dysphagia workstation (DWS). Lack of adequate identification of the consistencies administered may also lead to ambiguity and potentially incorrect assessments of swallowing function by the healthcare team. This can result in improper recommendations with serious consequences for the patient, including aspiration pneumonia and decreased quality of life.4,5 Furthermore, lack of labeling may limit the utility of studies for future educational and research purposes.

The advantage of radiographic labeling in real-time is its intrinsic inclusion into the fluoroscopic images, reducing ambiguity associated with an unlabeled MBS study.

The advantages of real-time labeling of MBS studies are particularly evident in serial evaluations for dysphagia. Retrospective review of the fluoroscopic images obtained from earlier studies may also be important for proper assessment of interval changes in a patient’s swallowing function. As serial evaluations may be performed by different members of the healthcare team, proper communication of the contrast consistency administered in each trial is paramount to the proper nutritional and therapeutic recommendations made by the SLP.4 Common concerns about our method of real-time labeling include the perception of decreased efficiency, the possibility of additional radiation exposure to the patient and operator, and the cost of constructing the radiographic labels. In our clinical practice, the efficiency of MBS studies was largely unchanged, and patients were not exposed to any significant additional radiation because the length of the exam was unchanged. Furthermore, the operator is at no point in the direct radiation field, owing to the extended reach provided by the painter’s stick carrying the radiopaque labels. In terms of implementation costs, the materials used to assemble our radiographic labels were inexpensive and readily purchased from local or online retailers.

When the initial study of our burn patient was retrospectively reviewed for assessment of interval changes, there were significant ambiguities regarding which contrast consistencies were administered with each trial, as more fluoroscopic sequences had been obtained than consistencies administered. Correct pairing of the contrast consistencies with their respective fluoroscopic videos was accomplished only after contact between the SLP and radiologist who were present at the time of the original study. The inefficiencies and ambiguities observed in this case would have been further accentuated in patients with more numerous studies and unlabeled trials, further highlighting the importance of a consistent labeling technique.

Consistent identification of contrast consistency may not be routinely performed because of perceptions that labeling increases work burden and reduces efficiency, and that lack of labeling has few adverse consequences. Implementation of our method is inexpensive and may enhance communication between operators and reviewers of the examination. Other methods may be used to identify the contrast consistency at the time of examination, including audio recordings and annotations, but these are not universally utilized in current clinical practice.

CONCLUSION

We describe an alternative, easily implemented and cost-effective technique to provide real-time labeling of the contrast consistencies administered during modified barium swallow studies. Consistent and adequate identification of contrast consistencies will reduce ambiguities stemming from poor labeling technique. Implementation of the method described above may lead to improved interdisciplinary communication and improved patient care and safety, and may facilitate further education and research. A more detailed study comparing our labeling technique with those of other institutions would help validate its importance for the dysphagia patient.


Dispatches From The Guild Conference, Series #8

Liver Cancer – From Detection to Treatment


Hepatocellular carcinoma continues to be a significant cause of morbidity and mortality in patients with chronic liver disease. Among both men and women in the United States, death due to liver cancer has increased at the highest rate of all cancers in the past decade. Despite improvements in imaging and therapeutics, only tumors diagnosed in early stages respond effectively to treatment. Loco-regional therapies, surgical resection and transplantation allow for improved survival. For patients with advanced-stage disease, there is a need for novel and effective therapies. Here we discuss prevention, diagnosis and treatment of HCC.

Michael P. Curry, MD, Director of Hepatology, Beth Israel Deaconess Medical Center, Associate Professor of Medicine, Harvard Medical School, Boston, MA, Sentia Iriani, MD, Hepatology Fellow, Beth Israel Deaconess Medical Center, Boston, MA

EPIDEMIOLOGY

Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related death worldwide. The majority of disease burden occurs in Asia and sub-Saharan Africa due to endemic hepatitis B virus (HBV).1 In the United States, the incidence of HCC has more than doubled over the past two decades and continues to increase. This is largely due to the growing number of patients with advanced liver disease from hepatitis C virus (HCV) infection and the burgeoning epidemic of non-alcoholic fatty liver disease (NAFLD). HCC has a strong male preponderance, with a male-to-female ratio estimated at 2.4:1. Among both men and women in the United States, death due to liver cancer has increased at the highest rate of all cancers in the past decade.2 Worldwide, HBV accounts for 54% of all HCC in adults and almost all childhood HCC cases. HCV is the major risk factor for HCC in Europe and North America.3 The incidence of hepatocellular carcinoma is 15 to 20 times higher in persons infected with HCV than in those without HCV, and most cases occur in patients with advanced fibrosis or cirrhosis. Cirrhosis is an important risk factor for the development of HCC, and approximately one third of patients with cirrhosis will develop HCC over their lifetime.4 Roughly 30-40% of HCC cases in Western countries occur in patients without HBV or HCV. These cases are related to alcohol, hemochromatosis, alpha-1-antitrypsin deficiency, autoimmune hepatitis and possibly NAFLD, given the association of obesity and diabetes with an increased risk of HCC.

Prevention

There are limited effective strategies proven to reduce the risk of HCC. Infant vaccination against HBV has proven to be the most successful preventive strategy and has been associated with a dramatic reduction in the incidence of HCC in children ages 6-14 years.5 The use of antiviral therapy in chronic HBV has also been associated with a reduction in HCC development.6 Historically, achieving a sustained virologic response (SVR) in patients with HCV infection using interferon has resulted in a decreased risk of future HCC across all stages of liver disease, including cirrhosis.7 There is some controversy about the risk of HCC in HCV patients who have been treated with direct-acting antiviral (DAA) therapy; some studies suggest an increased risk of HCC recurrence and de novo HCC in patients with HCV cirrhosis treated with DAAs.8 The use of HMG-CoA reductase inhibitors (“statins”) has been associated with a reduction in the risk of HCC in a meta-analysis of observational studies and randomized trials.9 Two meta-analyses have demonstrated an inverse relationship between coffee consumption and HCC, supporting a reduced risk of liver cancer among individuals with and without a history of liver disease.10,11 Lastly, the use of metformin has been associated with a reduced risk of HCC in patients with diabetes.

Surveillance

The American Association for the Study of Liver Diseases (AASLD), the European Association for the Study of the Liver (EASL) and the Asian Pacific Association for the Study of the Liver (APASL) recommend surveillance for patients at high risk of HCC development in order to detect tumors at an early stage, when they are amenable to curative therapy. The rationale for surveillance is based on data from a randomized study comparing outcomes in HBV patients assigned to screening with ultrasound (US) and alpha-fetoprotein (AFP) every 6 months versus no surveillance. In the surveillance group, HCC was detected at an earlier stage, and curative treatment resulted in a 37% reduction in mortality. Additional non-randomized studies of screening for HCC in cirrhosis support the role of surveillance in earlier diagnosis, potential curative therapies and improved overall survival (Table 1).12

Despite these guidelines, surveillance is underutilized. In a study of patients diagnosed with HCC between 2005 and 2011, only 20% had undergone surveillance. Nineteen percent of patients had unrecognized cirrhosis, 20% had unrecognized liver disease, 38% lacked surveillance orders, and in 3% surveillance failed despite orders being placed.13 A more recent study performed in the Veterans Administration health system showed that only 53.5% of patients had received surveillance in the two years prior to HCC diagnosis. Moreover, only 23.1% of patients with NAFLD-related HCC received surveillance, compared with 51.8% of HCV-related and 47.4% of alcohol-related HCC, suggesting that more work needs to be done in educating physicians on the association between NAFLD cirrhosis and HCC.14 Improving the effectiveness of HCC surveillance will require better recognition of cirrhosis in all at-risk populations, along with initiation of and compliance with surveillance.

Diagnosis

Hepatocellular carcinoma develops in the background of a field defect of viral infection, advanced fibrosis or cirrhosis.15 Hepatocarcinogenesis should be considered a continuum of dedifferentiation from regenerative nodule through dysplastic nodule to early and subsequently overt HCC. Unlike regenerative and dysplastic nodules, which have both portal and arterial blood supply, HCC is supplied solely by unpaired hepatic arteries.16 This results in the characteristic vascular pattern of arterial enhancement and portal venous phase washout on cross-sectional multiphase imaging. In 2005, the AASLD and EASL panel of experts adopted a new HCC radiological algorithm, which has since been validated. The diagnostic accuracy of a single dynamic technique showing intense arterial uptake followed by “washout” of contrast in the venous-delayed phases has been demonstrated. Non-invasive diagnosis was established by one imaging technique for nodules above 2 cm showing the HCC radiological hallmark, and by two coincidental techniques (computed tomography (CT) and magnetic resonance imaging (MRI)) for nodules 1-2 cm in diameter. Recently updated AASLD guidelines propose that one imaging technique (CT or MRI) showing the HCC radiological hallmark suffices for diagnosing tumors of 1-2 cm in diameter.17 For tumors that meet radiological criteria, biopsy is no longer indicated. However, liver biopsy is recommended by the AASLD, EASL and the National Comprehensive Cancer Network (NCCN) for nodules > 1 cm if radiological criteria are not present on multiphasic imaging. The NCCN guideline also allows for repeat cross-sectional imaging at a 3-month interval for nodules of 1-2 cm to determine whether the tumor characteristics have changed.3,17,18
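The size-based imaging pathway described above can be summarized as a short decision rule. The sketch below is a simplification for illustration only, not a clinical algorithm; the function and parameter names are our own, and the guidelines themselves should be consulted for actual practice:

```python
# Simplified sketch of the imaging-based HCC diagnostic pathway summarized
# in the text. "Hallmark" = arterial enhancement with venous/delayed-phase
# washout on multiphase CT or MRI. Names are hypothetical, for illustration.

def hcc_imaging_diagnosis(nodule_cm, hallmark_on_ct_or_mri, updated_aasld=True):
    """Return the next step for a nodule found on surveillance imaging."""
    if nodule_cm < 1.0:
        # Sub-centimeter nodules are not characterized further
        return "continue surveillance imaging"
    if hallmark_on_ct_or_mri:
        if nodule_cm > 2.0 or updated_aasld:
            # One technique suffices: >2 cm always; 1-2 cm per updated AASLD
            return "diagnostic of HCC; biopsy not indicated"
        # Pre-update criteria for 1-2 cm nodules
        return "requires hallmark on a second imaging technique"
    # Hallmark absent on multiphase imaging
    return "biopsy (or 3-month interval imaging per NCCN for 1-2 cm nodules)"
```

For example, a 1.5 cm nodule showing the hallmark on MRI is diagnostic under the updated guidance but would have required a second concordant technique under the 2005 criteria.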

Histological assessment of tissue obtained by needle biopsy of a nodule allows the diagnosis of HCC to be established using a number of techniques. An increased nuclear-to-cytoplasmic (N:C) ratio and the degree of cellular atypia can provide clues to the presence of HCC, and disruption of the normal reticulin pattern adds additional evidence. Immunohistochemical staining for HepPar1 and polyclonal CEA can establish the origin of the cells, and glypican-3, glutamine synthetase and heat shock protein 70, which are relatively sensitive for HCC, can help in the definitive diagnosis.

Staging

Clinical staging of HCC is an essential part of the evaluation to assess prognosis and to guide therapeutic interventions. Numerous staging systems have been developed and are employed. The Chinese University Prognostic Index (CUPI) and the Cancer of the Liver Italian Program (CLIP) have been validated, include prognosis based on tumor stage and sub-classify patients at advanced stages of liver cancer.19,20 The Japanese Integrated Staging (JIS) system has been modified to include the biomarkers AFP, AFP-L3 and des-gamma-carboxy prothrombin (DCP).21 While there is no worldwide consensus as to which system should be used, the AASLD and EASL recommend the Barcelona Clinic Liver Cancer (BCLC) staging system. The BCLC divides patients into 5 stages (0, A, B, C and D) according to established prognostic variables and allocates therapies based on tumor stage, functional capacity and degree of liver dysfunction.

Treatment

Treatment of HCC requires that due consideration be given to the tumor burden, stage of liver disease and the patient’s performance status. This is best assessed by a multidisciplinary team approach that includes hepatologists, surgeons, oncologists, radiologists and interventional radiologists, pathologists and radiation oncologists.

Surgical resection and liver transplantation are the mainstays of HCC treatment, as they offer the best outcomes in patients with early-stage disease and afford patients a five-year survival of 60-80%. Liver resection is the treatment of choice for patients with non-cirrhotic HCC. Improved outcomes for patients with cirrhosis and HCC have resulted from refinements in surgical technique and appropriate selection of candidates. Selection criteria for liver resection include a hepatic venous pressure gradient (HVPG) of < 10 mmHg and a platelet count of > 100,000/mm3. Adjuvant and neo-adjuvant therapies have not been conclusively shown to decrease the risk of recurrent or de novo tumors in patients undergoing surgical resection for HCC.

Loco-regional therapy is considered first-line treatment for patients not suitable for surgical resection. Additionally, this therapy may be utilized by transplant programs to “bridge” patients to transplantation or to downstage patients who are outside acceptable criteria for liver transplantation. Loco-regional therapies include local ablation of the tumor by chemical or thermal destruction, chemoembolization with conventional chemoembolization or drug-eluting beads, and radioembolization. Radiofrequency ablation (RFA) and transarterial chemoembolization are most commonly used for loco-regional therapy of HCC (Table 3). Percutaneous ethanol injection (PEI) and RFA are suggested for patients with BCLC stage A disease and tumors up to 3 cm. Transarterial chemoembolization (TACE) is the recommended treatment for intermediate-stage HCC. Conventional TACE (cTACE) and drug-eluting bead TACE (deb-TACE) are both used in patients with intermediate-stage disease; deb-TACE is better tolerated, although cTACE may offer better long-term results. Radioembolization can be used in patients with intermediate-stage disease who do not respond to, or have contraindications to, TACE. It can also be applied in the setting of portal vein thrombosis or tumor thrombosis.

Liver transplantation (LT) is considered for patients with compromised liver function and small multifocal tumors or single tumors of modest size. Liver transplantation has the added advantage of curing the tumor as well as the underlying liver cirrhosis. However, in the interest of equitable distribution of liver grafts, patients receiving liver transplantation for HCC should have the same outcome as those patients undergoing liver transplantation for non-HCC indications. The Milan criteria, proposed in 1996, established the tumor size and number criteria for patients with HCC that demonstrated similar survival compared to non-HCC liver transplant recipients.22,23 Several other sets of criteria have been published and are utilized to select suitable candidates with HCC for liver transplantation in different geographic regions around the globe (Table 2).

Recurrence after Liver Transplantation

Although many single-center studies have shown excellent post-transplant outcomes for HCC using Milan or modestly expanded criteria for patient selection, registry data that reflect a more global experience with liver transplantation have continued to show inferior results with HCC compared to non-HCC indications. There is an urgent need to identify reliable factors that can predict recurrence of HCC so that these patients can be excluded from liver transplantation. MacDonald et al. analyzed 11 pre-transplant recipient and donor variables in 1074 patients with HCC meeting Milan criteria to detect associations with post-liver transplant tumor recurrence or mortality.24 Recurrence of HCC was seen in 6% of patients. Univariate analysis identified AFP, both at listing and at the last time point prior to transplantation, as associated with a higher rate of recurrence. The optimal cut-off for the last AFP prior to transplantation was a value of > 300 ng/mL, which carried the highest odds ratio (OR) for HCC recurrence, 2.52.24 A model has been developed and independently validated to predict recurrence of HCC based on pre-transplant characteristics. The AFP model contains 3 independent pre-transplant predictors of tumor recurrence: tumor size, tumor number, and AFP level at the time of listing for liver transplant. A score calculated by adding points for each variable differentiates patients at low (≤ 2 points) and high (> 2 points) risk of recurrence and predicts survival after transplantation (Table 4).25,26
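The AFP model's risk stratification described above amounts to a sum-and-threshold rule. The sketch below encodes only what the text states: three point values (one per variable, read from Table 4, which is not reproduced here) are summed, and a total of 2 or less indicates low risk while more than 2 indicates high risk. This is an illustration, not a clinical tool.

```python
def afp_model_risk(size_points, number_points, afp_points):
    """Classify post-transplant HCC recurrence risk from the AFP model.

    Each argument is the point value assigned (per Table 4) for the
    patient's tumor size, tumor number, and AFP level at listing.
    Per the text, a total score <= 2 indicates low risk of recurrence
    and > 2 indicates high risk.
    """
    score = size_points + number_points + afp_points
    risk = "low" if score <= 2 else "high"
    return score, risk
```

For example, a patient whose three variables score 0, 0, and 2 points totals 2 and falls in the low-risk group, while any total of 3 or more is high risk.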

Systemic Therapies

Sorafenib is currently the only approved first-line systemic therapy for advanced HCC not amenable to surgical resection, with a demonstrated survival benefit of approximately three months.27 A subsequent trial demonstrated that sorafenib was associated with overall survival (OS) times of > 20 months.28 Regorafenib, a second-line multi-kinase inhibitor, has been shown to prolong survival in patients with advanced-stage HCC that has progressed despite sorafenib.29 A number of immunotherapies are currently in clinical trials for patients with HCC. Nivolumab has demonstrated efficacy in providing durable responses in both sorafenib-naïve and sorafenib-experienced patients with HCC.30

CONCLUSION

Hepatocellular carcinoma continues to be a significant cause of morbidity and mortality in patients with chronic liver disease. Despite improvements in imaging and therapeutics, only tumors diagnosed at early stages respond effectively to treatment. Surveillance rates for HCC remain low, in part because underlying cirrhosis often goes unrecognized. Loco-regional therapies, surgical resection and transplantation allow for improved survival. For patients with advanced-stage disease, there is a need for novel and effective therapies.

Dispatches From The Guild Conference, Series #8


Therapeutic Drug Monitoring in Inflammatory Bowel Disease – A Practical Guide


Adarsh K. Varma, MD, Attending Physician, Division of Gastroenterology and Hepatology, Henry Ford Health System; Nirmal Kaur, MD, Director, Inflammatory Bowel Disease Center, Division of Gastroenterology and Hepatology, Henry Ford Health System; Seymour Katz, MD, FACP, MACG, Clinical Professor of Medicine, Director of NYU IBD Outreach Program, New York University School of Medicine, New York, NY

BACKGROUND

Tumor necrosis factor alpha (TNF-a) antagonist therapy is highly effective for the treatment of Crohn’s disease and ulcerative colitis, broadly termed inflammatory bowel disease (IBD). While this class of medication has revolutionized the field of IBD therapy, up to 30% of patients show no benefit when treated with a TNF-a antagonist, and another 40% lose response within one year of treatment.1 Therapeutic drug monitoring has emerged as a method to optimize treatment with TNF-a antagonists by guiding treatment decisions, increasing the long-term durability of the medications, and maximizing the likelihood of a sustained clinical benefit with significantly fewer occurrences of secondary loss of response.2

Therapeutic drug monitoring with TNF-a antagonists involves measuring serum drug levels and anti-drug antibodies, and maintaining drug levels within a specific therapeutic window. The concept of therapeutic drug monitoring is not new: it is applied to solid organ transplant patients receiving immunosuppression with medications such as cyclosporine or tacrolimus, and to septic patients receiving antibiotics such as vancomycin and gentamicin.3,4 The main principle is to maintain patients within a specific therapeutic window, as high drug concentrations may result in increased toxicity, low concentrations are ineffective, and, for TNF-a antagonists, low concentrations also risk anti-drug antibody formation.2

The TNF-a antagonists for which multiple studies have demonstrated the benefit of therapeutic drug monitoring in IBD include infliximab and adalimumab. Studies date back to 2003, and delineate that higher serum concentrations of infliximab and adalimumab are associated with more durable response, sustained clinical outcomes, decreased need for colectomy, and improved patient outcomes.5-8

Numerous studies have shown that higher serum drug concentrations of TNF-a antagonists are associated with improved patient outcomes.2 Furthermore, studies have also demonstrated that low or undetectable drug concentrations are linked to anti-drug antibody formation and are ineffective for achieving clinical remission.9-12

Types of Assays

Many different anti-drug antibody assays are available, and the detection of these antibodies is more variable than the measurement of serum drug concentrations. Available assay types include the enzyme-linked immunosorbent assay (ELISA), radioimmunoassay (RIA), homogeneous mobility shift assay (HMSA), electrochemiluminescence immunoassay (ECLIA), and functional assays. The ELISA and RIA anti-drug antibody assays are affected by the presence of drug and can give inaccurate results when drug is present in the serum. Antibody assays that are not affected by drug levels, termed drug-tolerant assays, are more expensive.13

How is Drug Monitoring Utilized in Clinical Practice?

Therapeutic drug monitoring can be performed reactively or proactively. Reactive testing involves testing the patient at the time of disease relapse or after a drug reaction has occurred. Proactive testing involves optimizing the dose of the drug within a therapeutic window to achieve clinical efficacy.1

Serum drug levels are measured as trough levels, as most studies of anti-TNF-a drug levels have tested troughs, and trough levels correlate roughly with the activity of most drugs. The trough level is measured immediately prior to an intravenous infusion of infliximab or a subcutaneous injection of adalimumab. Serum drug levels are generally measured during the maintenance phase of treatment, after the induction phase.1

Algorithm 1 is a reactive testing algorithm delineating the steps to take when faced with a patient whose inflammatory bowel disease is worsening while on maintenance dosing of infliximab or adalimumab. These patients have objective evidence of continued inflammation, with elevated C-reactive protein (CRP) levels, elevated fecal calprotectin levels, abnormal imaging studies, and/or endoscopy corroborating persistent inflammation despite adherence to infliximab or adalimumab.

The steps are as follows: first, a drug trough level is measured. If the patient has a therapeutic trough level, defined as a serum infliximab concentration > 3 µg/mL or adalimumab concentration > 5 µg/mL, the patient should be switched to a drug class with a different mechanism of action, or surgical intervention should be considered. If the patient has a sub-therapeutic trough concentration, defined as a serum infliximab concentration < 3 µg/mL or adalimumab concentration < 5 µg/mL, an anti-drug antibody level should be measured; most assays can perform this testing. If the anti-drug antibody level is negative, the patient will benefit from an increase in drug dose, shortening of the dosing interval, addition of an immunomodulatory medication, or transition to a different anti-TNF-a agent. If the anti-drug antibody level is positive, the patient should be switched to a different anti-TNF-a agent or a different drug class; other causes of persistent inflammation should be investigated as well.
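The reactive steps above can be sketched as a small decision function. This is a simplified illustration using only the thresholds and actions quoted in the text (infliximab 3 µg/mL, adalimumab 5 µg/mL), not a clinical decision tool; boundary handling (a trough exactly at the threshold) is an assumption.

```python
def reactive_tdm(drug, trough_ug_ml, antibody_positive=None):
    """Reactive therapeutic drug monitoring per Algorithm 1 as described.

    drug: "infliximab" or "adalimumab"
    trough_ug_ml: measured trough concentration in ug/mL
    antibody_positive: anti-drug antibody result, if already tested
    """
    # Therapeutic thresholds from the text (troughs at the exact
    # threshold are treated as therapeutic here -- an assumption).
    threshold = {"infliximab": 3.0, "adalimumab": 5.0}[drug]
    if trough_ug_ml >= threshold:
        # Adequate drug yet ongoing inflammation: mechanistic failure
        return "switch drug class or consider surgery"
    if antibody_positive is None:
        # Sub-therapeutic trough: next step is antibody testing
        return "measure anti-drug antibody level"
    if antibody_positive:
        return "switch anti-TNF agent or drug class; seek other causes"
    # Sub-therapeutic and antibody-negative: optimize the current drug
    return "increase dose, shorten interval, add immunomodulator, or switch anti-TNF"
```

For example, a patient on infliximab with a trough of 1 µg/mL and no antibody result yet would be routed to antibody testing before any dose change.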

Reactive testing has been shown to be more cost effective than empiric dose adjustments, and it allows clinicians to understand if a patient is likely to benefit from dose escalation, or if the patient should be switched to another drug class altogether.

Currently, there are no guidelines specifying when therapeutic drug monitoring should be performed, but the BRIDGe group (Building Research in IBD Globally) has issued the following recommendations: therapeutic drug monitoring should be conducted at the end of induction in patients with primary non-response, in patients with secondary loss of response, in patients on maintenance therapy who are responding, and in patients restarting treatment after a drug holiday. The utility of testing at the end of induction in patients who are already responding to anti-TNF-a therapy is uncertain.14,15

Proactive Drug Monitoring

Proactive therapeutic drug monitoring entails optimization of drug dosing to a specific therapeutic window. Proactive testing has been demonstrated to improve patient outcomes, and the available medical literature demonstrates its benefits during the maintenance phase of treatment.2 A pilot observational study of 48 patients demonstrated that a proactive approach more frequently identified patients with low trough concentrations and resulted in a greater probability of remaining on infliximab, increasing the long-term durability of the medication. Proactive therapeutic monitoring has also been shown to improve symptom scores and CRP levels, and to decrease the need for rescue therapy.16

Central to the proactive strategy in IBD is the TAXIT trial, a one-year randomized controlled trial at a tertiary referral center including 263 adults. Patients were split into two groups, with medication dosing adjusted based upon either clinical features (reactive) or trough concentrations of infliximab (proactive). At the start of the trial all patients were dose-optimized to a drug concentration of 3-7 µg/mL; then 123 patients had dosing adjusted based upon clinical features and CRP levels, the current standard of care, while 128 patients had dosing adjusted during maintenance to a therapeutic window of 3-7 µg/mL.17

The primary outcomes for this trial were measured at one year. At that time, no significant difference was seen in the primary end point for this study (Figure 1), which compared clinical remission between the two groups, likely due to two reasons: (1) at the start of the study, all patients regardless of treatment group were dose optimized, and (2) these patients were only followed for one year. Notably, by the end of one year, the curves begin to separate, and one could infer that they would separate even further over time, with higher relapse-free survival in patients who underwent proactive therapeutic drug monitoring.17

A number of secondary endpoints in this trial favored proactive drug monitoring for patients receiving infliximab: (1) patients receiving proactive treatment did not need rescue therapy as often as the clinical group (7% vs 17.3%, p = 0.004); (2) more patients in the proactive group maintained trough concentrations within the therapeutic window (74% vs. 17.3%, p < 0.001); (3) fewer patients had undetectable trough concentrations (OR 3.7; p < 0.001); and (4) costs were similar between both groups.17

A separate study by Cheifetz and colleagues from the BRIDGe group followed patients treated with infliximab for more than ten years, with a goal therapeutic window of 5-10 µg/mL (Figure 2A).16 Proactive therapeutic drug monitoring maintained patients on infliximab over this period, whereas many patients undergoing reactive monitoring appeared to lose response by ten years. In the same study (Figure 2B),16 patients who attained a trough concentration greater than 5 µg/mL fared much better than patients with low drug levels or those receiving the standard of care, most of whom had lost response to infliximab by the end of ten years.14,16

In clinical practice, Algorithm 2 can be followed to proactively dose-optimize patients to a therapeutic window for infliximab or adalimumab. The steps are as follows: first, a trough concentration is measured. If the trough concentration is undetectable, an anti-drug antibody level should be measured. If the anti-drug antibody level is detectable, the anti-TNF-a drug should be discontinued; if it is undetectable, the dose should be increased or the dosing interval shortened. If the trough concentration is sub-therapeutic, the dose should likewise be increased or the interval shortened. If the patient has a therapeutic drug concentration, infliximab between 3-10 µg/mL or adalimumab between 5-10 µg/mL, no dose adjustment is necessary. Lastly, if the patient has an infliximab or adalimumab concentration greater than 10 µg/mL, the dose should be decreased or the interval lengthened.14
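The proactive steps above can likewise be sketched in code. This is a simplified illustration using the windows quoted in the text (infliximab 3-10 µg/mL, adalimumab 5-10 µg/mL), not a clinical tool; modeling an undetectable level as a trough of 0 is an assumption of this sketch.

```python
def proactive_tdm(drug, trough_ug_ml, antibody_detectable=None):
    """Proactive dose optimization per Algorithm 2 as described above.

    drug: "infliximab" or "adalimumab"
    trough_ug_ml: trough concentration in ug/mL (0 = undetectable,
                  an assumption of this sketch)
    antibody_detectable: anti-drug antibody result, if already tested
    """
    lower = {"infliximab": 3.0, "adalimumab": 5.0}[drug]
    upper = 10.0
    if trough_ug_ml == 0:                        # undetectable level
        if antibody_detectable is None:
            return "measure anti-drug antibody level"
        if antibody_detectable:
            return "discontinue this anti-TNF drug"
        return "increase dose or shorten interval"
    if trough_ug_ml < lower:                     # sub-therapeutic
        return "increase dose or shorten interval"
    if trough_ug_ml <= upper:                    # within the window
        return "no dose adjustment"
    return "decrease dose or lengthen interval"  # supra-therapeutic
```

For example, an adalimumab trough of 7 µg/mL falls inside the 5-10 µg/mL window and prompts no change, while a trough above 10 µg/mL prompts de-escalation.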

The optimal therapeutic window is not completely defined. Data exist for a goal trough of 3-7 µg/mL, while other data suggest 5-10 µg/mL for infliximab and adalimumab.14 During the maintenance phase in stable patients, an infliximab trough level of 5 µg/mL or higher has been associated with clinical remission, and a trough level greater than 8 µg/mL may provide additional benefit toward deep remission. For adalimumab, clinical remission was seen at or above a level of 5 µg/mL, and deep remission at or above 8 µg/mL.8,17,21-26

Since guidelines regarding therapeutic drug monitoring are not yet available, the optimal therapeutic windows are unknown; patients with particularly severe disease may warrant a higher therapeutic window than a patient with mild disease.

Contributing Clinical Factors

Multiple factors affect the pharmacology of monoclonal antibodies, particularly their clearance. The presence of anti-drug antibodies is associated with higher drug clearance and worse clinical outcomes. Addition of an immunomodulator such as a thiopurine or methotrexate has demonstrated benefit by reducing anti-drug antibody formation and increasing drug concentrations. Factors associated with poor outcomes include severe disease, high CRP levels, low albumin, and higher baseline TNF-a concentrations. Furthermore, patients with severe disease demonstrate faster drug clearance via proteolytic catabolism by the reticuloendothelial system.27 Clearance is also increased in patients with higher body mass index and in males.14,18

Economic Considerations

Data strongly show that reactive drug monitoring is more cost-effective than empiric dose escalation, as reactive testing prevents over-prescribing of high-dose biologics. One study calculated associated costs over the course of one year and found reactive testing to be approximately $5,000 less per year than empiric dose escalation. Moreover, reactive testing per the algorithm described above allows more accurate management of patients with secondary loss of response.23 Another study examining the costs of prescribing high doses of infliximab without drug monitoring found that costs to patients were reduced by 56% when reactive testing was performed versus empiric dose escalation; notably, the drug assay used was inexpensive and thus cost-effective.20

CONCLUSION

For practical use, the following is recommended: know the test performed at one’s institution, whether the antibody assay is affected by drug concentrations, and the cost of testing. Understanding the therapeutic algorithms increases the likelihood of improved outcomes and cost-effective care. Web-based resources to tailor therapy (http://www.bridgeibd.com/anti-tnf-optimizer) can also help optimize outcomes for patients.

Reactive testing is clearly beneficial as has been shown herein. With more research and time, proactive testing may become more widely utilized. Consider proactive testing after induction and following patients at least once per year during maintenance to ensure they are within the therapeutic window and do not develop a secondary loss of response.

Questions that remain to be answered include: Is there a safety benefit to dose-reduction for patients with supra-therapeutic drug levels? Should drug monitoring be individualized to each patient, or should therapeutic windows be generalized to specific patient populations? Should more aggressive disease phenotypes warrant higher therapeutic windows? As many of these assays involve significant cost, determination of appropriate utilization is paramount. Lastly, will assays differ for other biologic agents, including novel therapies such as vedolizumab, ustekinumab and biosimilars? Further research will be needed to address these questions.


Frontiers In Endoscopy, Series #38

Endoscopic Management of Esophagorespiratory Fistulas



Judith Staub, MD, Douglas G. Adler, MD, FACG, AGAF, FASGE, Division of Gastroenterology and Hepatology, University of Utah School of Medicine, Salt Lake City, UT

I. INTRODUCTION

Esophagorespiratory fistulas (ERFs) are pathologic communications between the esophagus and any portion of the respiratory tract. ERFs lead to recurrent aspiration that can cause lethal pulmonary infections and significantly decrease quality of life for patients.1,2 Treatment of ERFs has been shown to not only improve dysphagia and aspiration, but also lead to increased survival times.3 While there is limited outcome data to guide clinical decision-making, the purpose of this review is to describe the current literature that supports the various endoscopic techniques utilized to manage ERFs.

II. ETIOLOGY

ERFs are classically divided into two broad categories, acquired and congenital, of which acquired are more common.4 Acquired ERFs can be further subdivided into benign and malignant. Benign ERFs can be iatrogenic, caused by luminal procedures such as bronchoscopy, endotracheal intubation, or gastrointestinal endoscopy, or can arise as a complication of esophageal stent placement.5,6,7 Esophageal inflammation and diverticula are other known benign causes of ERF.4

Malignant ERFs are a devastating complication of esophageal cancer, lung cancer, large B-cell lymphoma, neuroendocrine tumors, and other malignancies.8 They are associated with lower survival times and clinical success rates compared to benign fistulas. Balazs described the incidence of fistulas in patients with esophageal cancer as between 0.9 and 22%, but these may occur more frequently than documented given the difficulty of diagnosis in end-stage malignant disease.1,2 Malignant ERFs are usually a complication of disease progression, and nearly half of patients with ERF have metastatic disease at the time of diagnosis.9 Palliative oncologic treatments including chemotherapy and radiation are not thought to directly cause ERF; instead, they contribute to ERF formation either by increasing survival times or by decreasing tumor burden without leaving sufficient tissue to maintain patency of the lumen.1,2

ERFs can be located at any point along the esophagus and respiratory tract, with the proximal and mid-esophagus the most common sites. Fistulae in the proximal esophagus are the most difficult to manage and are associated with the most adverse events and the shortest survival times, while patients with distal ERF have the longest survival and patients with mid-esophageal fistulae have intermediate survival.8 This may reflect the anatomic proximity of the proximal esophagus to the trachea, allowing for widespread contamination of both lung fields on aspiration.

III. NON-ENDOSCOPIC MANAGEMENT OF ERF
A. Operative Management

Operative management, such as esophageal bypass with reconstruction, thoracotomy with direct suture closure, and closure of the esophageal defect with pedicled soft-tissue flap interposition, offers treatment options in select patients, although these are all major surgical undertakings.10,11 For patients with acquired, non-malignant ERF, surgery may provide the best opportunity for full recovery in good operative candidates. The choice of surgical technique depends on the etiology, size, and location of the fistula. Pre-operative requirements, such as an Eastern Cooperative Oncology Group (ECOG) status of 0-2 and lack of metastatic disease, make surgery prohibitive for many patients with malignant ERF.9 Indeed, the vast majority of patients with malignant ERF are poor surgical candidates at presentation, and other palliative and therapeutic interventions are typically considered.

B. Concurrent Chemoradiotherapy (CCRT)

Historically, the presence of a malignant ERF was considered a relative contraindication for CCRT, but recent evidence has demonstrated significantly increased survival with CCRT in esophageal squamous cell carcinoma (SCC) complicated by ERF.12 Koike et al. studied the effect of 5-fluorouracil and cisplatin combined with full dose radiotherapy in patients with esophageal cancer complicated by malignant ERF. They found complete closure of esophago-mediastinal fistulae in 3/3 patients but only 4/13 patients with esophago-respiratory fistula achieved clinical success. A more recent study showed that CCRT combined with enteral nutrition can achieve promising improvement and closure of malignant fistulae.13

IV. ENDOSCOPIC MANAGEMENT
A. Bronchoscopy Monotherapy

Some patients may have contraindications to esophageal endoscopic management, such as esophageal obstruction by tumor that cannot be traversed.14 In this setting, several studies have demonstrated successful endotracheal or endobronchial stent placement with improvement in clinical symptoms.14,15,16 However, the anatomic complexity of the respiratory system makes airway stenting more challenging than esophageal stenting. In addition to multiple branch points, different airway locations vary in their diameter, wall thickness, and adjacent anatomic structures. These factors require that different airway stents be utilized according to the size and location of the malignant ERF.14

B. Esophageal Monotherapy
a. Esophageal Stents

The first stents used to treat ERFs were rigid plastic tubes associated with a variety of complications; these older stents are now obsolete.17 Self-expanding metal stents (SEMS) were first introduced in the 1980s for palliation of esophageal stenosis and are currently the gold standard for endoscopic management of malignant ERFs.18 Advantages of SEMS include their ability to be constrained to a small diameter on a delivery catheter, largely eliminating the need for pre-insertion dilatation (Figure 1).19

SEMS may be fully covered or partially covered. Partially covered stents have the advantage of anchoring and embedding into the esophageal wall, making them less prone to migration, but they are susceptible to tumor ingrowth.20 In contrast, fully covered stents have higher rates of migration but have been shown to provide better palliation because of a decreased need for re-intervention for recurrent dysphagia.21 Covered stents are also more easily retrieved. Thus, stent choice depends on the expected risks of stent migration or tumor overgrowth for the particular patient.

The literature reports high technical success rates, defined by complete ERF closure following esophageal stent placement, of nearly 100%.17 Adverse events have been reported in as many as 40% of patients but are generally minor.8 Complications of stent placement for ERF include aspiration, malposition, migration, ERF progression, and perforation.22 Stent migration is common, with a rate of 25 to 32%, and may be secondary to insufficient expansion, tumor shrinkage due to chemotherapy or radiation therapy, lack of a stenosis to anchor the stent, or stent malposition.23

b. Over-the-Scope Clips

Endoscopically placed clips are an established method of sealing ERF. Through-the-scope (TTS) clip technology has been available for over 10 years, and most endoscopists now have access to over-the-scope clips (OTSC), which are much larger than TTS clips.24 The OTSC system consists of a nitinol alloy clip equipped with teeth; the clip is preloaded on an applicator cap mounted on the endoscope tip, and devices are available in several sizes and configurations. OTSC have been used to treat ERFs because of their ability to grasp more tissue and provide greater compressive force,25,26 and they have generally been used for small defects.27 Large ERF may be difficult to close by any method, including OTSC devices.

The OTSC method has lower therapeutic efficacy for closing fistulae than for esophageal perforations and leaks.27,28 The main barrier to successful sealing of ERF with OTSC is the difficulty of completely approximating the borders of the defect and suctioning damaged tissue inside the cap, because ERF often have fibrotic and retracted rims. However, studies show promise for treating ERFs with OTSC in conjunction with other interventions. A recent multicenter retrospective study examined 5 patients treated with OTSC alone or in combination with esophageal stents, airway stents, or stents plus endoscopic sutures.8 The one patient treated with OTSC monotherapy did not achieve clinical success, whereas all four patients treated with combination therapy achieved both technical and clinical success. Additionally, evolving OTSC technology such as the Padlock Clip shows promise for improved efficacy in treating ERFs.26,28

c. Atrial Septal Defect (ASD) and Ventricular Septal Defect (VSD) Occluders

A novel method for endoscopic closure of ERF is the use of ASD and VSD occluder devices. These devices have been used for percutaneous closure of cardiac septal defects since the 1970s, with the goal of inducing an endothelial response and closure of the defect.29 The device typically consists of two nitinol, self-expandable, polyester-coated discs connected by a thin waist, compressed inside a delivery catheter. The two discs have different diameters after deployment.

The first reported successful closure of an ERF with a VSD occluder device was performed in 2006 after a patient with non-malignant ERF had failed other endoscopic options.30 Since then, ASD occluder devices have also been utilized with varying clinical success.31,32 The device is placed by maneuvering a guide wire endoscopically with fluoroscopic assistance into the fistula orifice from the esophageal side, and threading the wire through the hypopharynx such that both ends come out of the mouth. The occluder is then threaded through either orifice and deployed with one disc on either side of the fistula.

The most significant reported complication of ASD and VSD occluders is device migration into the airway, which may result from an incorrectly sized device, physiologic esophageal peristalsis, extrusion by an external source, or enlargement of the fistula.33,34 These patients often present with severe cough from bronchial obstruction by the device, or with pneumonia. Jiang describes a theoretical solution to this problem: using an endotracheal approach and placing the larger, distal disc in the esophagus.35 The structural design of the device favors its permanence; as it anchors into the fistula, it stimulates an inflammatory response and promotes granulation tissue and re-epithelialization over the device.

d. Parallel Airway and Esophageal Stenting

Combined placement of stents in both the esophagus and the tracheobronchial tree is another endoscopic method utilized for treatment of benign and malignant ERF (Figure 2).36,37,38 This method may be advantageous when there is concern for airway compression by an expanding esophageal stent, or in patients with combined symptoms of dysphagia, aspiration, and dyspnea. The stents are similar to those used for monotherapy and include SEMS together with airway Y stents or self-expanding metallic airway stents.38 The procedure is typically performed under general anesthesia, with airway stenting often performed first because of the small risk of airway compression by the expanding esophageal stent.39 A retrospective analysis by Schweigert demonstrated complete seal of malignant ERF in 9/9 patients using the parallel stent technique, without anesthesia-related complications.36 Five of nine patients were able to receive additional chemotherapy or radiation therapy, and 7/9 were able to return home. A more recent, larger study by Wlodarczyk examined 31 patients with malignant ERF and documented technical success in 100%.39 Only 4 patients required re-intervention for fistula recurrence, and nearly all patients achieved improvement in dyspnea and dysphagia.

The most feared complications of dual stenting for ERFs are massive bleeding and respiratory compromise.38,39,40 The close proximity of the parallel stents may lead to pressure necrosis, causing bleeding and, in rare cases, death. Binkert reported pressure necrosis when Gianturco-Rösch Z stents were used, as tissue erosion at sites where stent struts were in direct opposition caused bleeding from the esophageal venous plexus. Wlodarczyk reported bleeding events in 7/31 patients with malignant ERF.39 A more recent study of 8 patients treated with dual stent placement, however, demonstrated adverse events similar to esophageal monotherapy, without any major complications.8

The American College of Chest Physicians guidelines give a grade C recommendation for stenting of both the esophagus and the tracheobronchial tree to achieve the best symptom relief.41 A larger prospective study demonstrated increased survival time in patients who received dual stenting for malignant ERF compared with airway stenting alone.36

e. Other Methods

Other methods that have been utilized for closure of ERF include fibrin glue, sutures, polyglycolic acid sheets, and argon plasma coagulation. Typically, these methods are used in conjunction with the aforementioned endoscopic techniques to promote direct closure of the fistula. Evidence for their use alone is limited but encouraging.

Fibrin glue is made of thrombin and fibrinogen. With the addition of calcium and factor XIII, thrombin converts fibrinogen to fibrin and stimulates scar formation at the fistula site.42 Most of the literature on the use of fibrin glue for ERF comes from the pediatric population, where it is used for endoscopic management of congenital ERF. In select pediatric patients it has been shown to reduce morbidity and recurrence when compared to open approaches or alternative endoscopic techniques.42 Data are sparser for adult patients; however, a study by Lippert et al identified 26 patients with esophageal fistulas treated with fibrin glue.43 Nine of these patients achieved success with fibrin glue alone, while the remaining 17 patients required either additional endoscopic therapy with stents or surgical intervention. A case report of a patient with a small, benign ERF secondary to mechanical ventilation demonstrated complete healing of the fistula after bronchoscopic administration of fibrin glue.44

Argon plasma coagulation (APC) functions by creating coagulation-induced inflammation/granulation along the fistula. Again, this method has been used in conjunction with other endoscopic methods to promote fistula closure. A case report from 2001 demonstrated complete closure of a benign ERF using APC with the addition of endoscopic sutures.45

The use of polyglycolic acid (PGA) sheets is another novel technique that has been described in recent case reports to promote complete closure of ERF. PGA sheets are bioabsorbable synthetic polymers that are typically used to enhance the strength of sutures during surgical procedures and to prevent delayed perforation.46 A case report by Han describes complete closure of a postoperative fistula by endoscopically placing PGA sheets over the lesion and securing them with endoclips and fibrin glue.47 The use of PGA sheets in this case increased the area of healthy mucosa available, thereby avoiding the need to clip inflamed tissue. Another case report by Tsujii describes utilizing PGA sheets as a scaffold inserted within an esophago-mediastinal fistula, then secured with fibrin glue.46 On re-imaging, the fistula was replaced by granulation tissue. A case report by Matsuura describes complete closure of a large post-esophagectomy ERF after repeated interventions with PGA sheets and fibrin glue.48 A report by Kinoshita demonstrated complete closure of an ERF secondary to Behçet's disease with 10 repeated applications of PGA sheets combined with fibrin glue and endoclips.49 None of the above case reports described serious adverse events. Although based on limited data and requiring repeated applications, PGA sheets present a promising method to completely close benign or malignant ERF.

CONCLUSION

Benign and malignant ERFs pose both a technical and a clinical challenge to today’s practitioners. Advances in endoscopic technique have broadened the tools available and allow improved quality of life for patients suffering from the devastating effects of ERFs. Although the various endoscopic techniques carry different adverse event profiles, an experienced clinician can select the intervention that optimizes the risk/benefit balance of the procedure based on the size, location, and etiology of the fistula.


Nutrition Issues In Gastroenterology, Series #167

Seeking Enteral Autonomy with Teduglutide


Patients with short bowel syndrome (SBS) often struggle to maintain nutrition and hydration status. Using a combination of diet and pharmacotherapies, partial or full enteral autonomy may be achievable over time as the bowel adapts. There are select groups that cannot achieve autonomy with conventional therapy, but with augmentation of the absorptive capacity of their existing small bowel, autonomy may be feasible and quality of life significantly improved. With the approval of the GLP-2 analog intestinotrophic agent teduglutide, adaptation and intestinal absorption enhancement is possible. This article describes the course of 7 patients on maximal conventional SBS therapy who achieved either enteral autonomy or significant improvements in nutritional status, stool and urine output, and quality of life after initiation of teduglutide.

Andrew P. Copland MD, Assistant Professor of Medicine, Division of Gastroenterology and Hepatology, Brian Behm MD, MS, Associate Professor of Medicine, Division of Gastroenterology and Hepatology, Carol Rees Parrish MS, RD, Nutrition Support Specialist, University of Virginia Health System, Digestive Health Center, Charlottesville, VA

INTRODUCTION

Short bowel syndrome (SBS) causes severe malabsorption, most often due to significant resection or defunctionalized segments of the small intestine. It presents an extremely difficult challenge for clinicians and is a life-altering condition for patients. Despite aggressive management of SBS with dietary modification and gut-slowing agents, parenteral nutrition (PN) is often required to maintain adequate nutrition and hydration. Resource utilization and healthcare costs for the care of short bowel patients are high, and our current healthcare system does not readily accommodate the time needed to care for these patients.

The goal of autonomy from PN is to avoid complications of both SBS and PN, such as line infections and vascular thrombosis. Successful enteral autonomy is achieved by enlisting a multifaceted approach including dietary modification to initiate and stimulate intestinal adaptation, fluid management, and pharmacotherapy. The goal of this approach is to gradually decrease (or eliminate) a patient’s dependence on intravenous fluids and PN. The introduction of intestinal mucosal trophic agents such as the glucagon-like peptide 2 (GLP-2) analog teduglutide has added another facet to this approach. Incorporating the strategic use of trophic agents such as teduglutide may reduce or eliminate PN requirements in previously PN-dependent patients.

Endogenous GLP-2 is a locally active hormone secreted from the terminal ileum and proximal colon that promotes crypt cell growth, reduces enterocyte apoptosis, and stimulates intestinal blood flow. Early research demonstrated that exogenous GLP-2 stimulated growth of the intestinal mucosa, which led to enhanced fluid and nutrient absorption. Clinical trials of the GLP-2 analog teduglutide demonstrated improved intestinal function in short bowel patients. The first randomized controlled trial performed to evaluate the efficacy of teduglutide in SBS showed a significant decrease in dependence on parenteral support over a 24-week period.1 The 28-week extension of this trial (52 total weeks of treatment) demonstrated that over 50% of patients had a 20% or greater decrease in PN dependence.2 In a subsequent 2-year open-label extension trial, an even more significant reduction in total PN volume was described in patients treated with teduglutide, with 15% (13/88) of enrolled patients achieving full enteral autonomy.3

The following illustrates our institution’s experience with 7 cases where strategic use of teduglutide was utilized for short bowel patients on otherwise maximal therapy. Patient initials have been changed to protect patient confidentiality.

Case 1

WS is a 66 y/o female with a history of SBS as a result of mesenteric ischemia. Her GI anatomy consists of 105cm of normal proximal small bowel terminating in an end jejunostomy. Eighteen months after bowel resection she remained unable to maintain adequate hydration, and significant ostomy output made enteral feeding quite difficult. She was initiated on teduglutide after maximizing the use of high-dose codeine; she was long past the hypersecretory phase. Over the initial 3 months of teduglutide therapy, she successfully transitioned from nightly IV hydration fluids to nocturnal oral rehydration therapy infused via a gastrostomy tube. Her urine output increased significantly during this time, indicating improved hydration, and her stool output became much more manageable. Over the first year of therapy with teduglutide, WS was able to gradually reduce nightly hydration via gastrostomy; 13 months after initiating teduglutide she was succeeding with oral nutrition and hydration alone. She now has enteral autonomy, her gastrostomy tube has been removed, and she relies on careful management of her PO diet in addition to gut-slowing agents, which we were also able to wean. She now volunteers at the hospital and exercises at the gym 3 times a week.

WS stands as an example of the potential for intestinal growth factors such as teduglutide to free patients from the use of PN, IV fluids, and augmentation of nutrient absorption via gastrostomy. She also highlights the usefulness of gastrostomy tubes in this patient population. With careful management, many patients with significant malabsorption can be managed quite well by a combination of enteral feedings (or oral rehydration) instilled slowly through a nasogastric or gastrostomy tube via a pump. This can be a very effective tool in patients who are unable to wean from PN or IV fluids and/or unable to tolerate sufficient PO intake. See Box 1.

Case 2

LP is a 31 y/o gentleman with a history of SBS as a result of necrotizing enterocolitis as a child. His GI anatomy consists of 30cm of normal proximal small bowel anastomosed to 50cm of distal colon. Prior to starting teduglutide, he was transitioned from PN to a regimen of nocturnal enteral feeding via PEG in addition to nightly IV fluids; this was sufficient for him to maintain a marginal weight and urine output. Escalation of gut-slowing agents was somewhat limited by a history of bowel obstruction. After initiation of teduglutide at a standard dose, he had a significant improvement in his weight over the initial 6 months of therapy. As his weight stabilized at his goal, he was able to decrease the amount of tube feeding required, in addition to oral intake, to maintain this weight. His urine output gradually improved to over 1L daily with a decreased IV fluid requirement. He has been maintained on teduglutide for 3 years and continues on a stable regimen at or around his goal weight. As his strength increased, he began working longer hours and started an exercise program, and we increased his overnight enteral feedings to compensate for his higher energy expenditure.

LP experienced no significant side effects from initiation of teduglutide. Three years into therapy, he was admitted for 7 days for a symptomatic small bowel obstruction, which was managed conservatively and required no surgical intervention. He was restarted on a reduced dose of teduglutide 2 weeks after discharge, which he tolerated well, and then progressed back to full dose. During this brief period, he experienced increased stool output and mild weight loss, so his nutrition support was adjusted as needed. See Box 2.

Case 3

JF is a 48 y/o female with a distant history of Roux-en-Y gastric bypass, now with SBS as a result of a volvulus followed by significant bowel ischemia. Her GI anatomy consists of Roux-en-Y gastric bypass anatomy to small bowel to colon via an ileocolic anastomosis. She had been on PN for 7 years; however, because of a history of multiple line infections, it was decided to hold her PN, optimize gut slowing, and enlist intense diet and hydration instruction. She did well at first, but difficulty with adherence ultimately resulted in precipitous weight loss and significant fat-soluble vitamin deficiency. We then admitted her for a nocturnal enteral feeding trial with a concurrent 72-hour fecal fat collection to determine whether she had sufficient absorptive capacity; her fecal fat wasting was profound, however, so further enteral feeding was abandoned. She was restarted on PN and gradually made improvements in both her weight and hydration status. She was then started on teduglutide. She had a history of lower extremity edema prior to teduglutide initiation but did not require titration of her diuretics during the first few months of therapy. She currently has manageable stool output and is on chronic narcotics for pain, but no additional gut-slowing agents.

Initiation of teduglutide resulted in improved weight gain and hydration status over the first 5 months of therapy and resolution of her lower extremity edema. This case is significantly complicated by the pre-existing Roux-en-Y gastric bypass anatomy, which requires particular attention to the risk of vitamin deficiency. Non-adherence played a partial role as well. Fluid retention is a known risk of teduglutide and may require careful observation for rapid weight gain and/or edema during the weeks to months after starting therapy in those who are sensitive to fluid overload. As of this writing, she has reached her goal weight, and we are planning to decrease PN from 7 to 6 days, giving her a night off PN every week. See Box 3.

Case 4

MS is a 47 y/o gentleman with a history of SBS as a result of mesenteric ischemia secondary to thrombosis. His GI anatomy consists of approximately 200cm of small bowel terminating in an end ileostomy, with a suggestion of dilated, defunctionalized bowel on small bowel imaging. Although he has a significant amount of colon, it is not in continuity. Prior to starting teduglutide, he attempted discontinuation of his PN with increased PO intake, resulting in dramatic weight loss and high ostomy output. He was admitted for a combined SBS diet and nocturnal enteral feeding trial using a nasogastric tube, concurrent with a 72-hour fecal fat collection, and demonstrated significant malabsorption with high ostomy output despite high-dose gut-slowing agents. He had also experienced stomal stenosis, which required dilation. He was initiated on a half-dose of teduglutide because of the stomal stenosis and demonstrated a robust response on a combination of PO intake with no change in his PN solution. His weight improved over 3 months, nearing his goal weight, and his ostomy output required decreasing doses of gut-slowing agents. Over the subsequent few months, he experienced more discomfort around the stenosis of his end ileostomy, altering his oral intake and worsening his ostomy output due to outflow diarrhea. A decision was made to revise this site, with a plan to place the colon back in continuity with the small bowel in an effort to facilitate improved enteral absorption and fluid management and possibly permit weaning of PN. See Box 4.

Case 5

CR is a 50 y/o female with a history of SBS as a result of surgery and radiation therapy for cervical cancer. Multiple surgeries for mechanical bowel obstruction left her GI anatomy consisting of 45cm of viable small bowel and an additional 35cm of irradiated, defunctionalized bowel anastomosed to the colon, with the sigmoid colon terminating in an end ostomy. Prior to starting teduglutide, CR was unable to reach her goal weight or control her stool output despite PN for nutrition support and attempts at gut slowing. CR has had a great deal of difficulty adhering to medication/nutrition regimens and tracking her urine and stool outputs, and her clinical course has been complicated by line infection. After initiation of teduglutide, her weight has remained relatively stable and she has tolerated a gradual tapering of her PN from 4 days/week to 1L of IV fluids plus electrolytes and vitamins 3 days weekly. While she has not reached her goal weight, her current nutrition support strategy with IV fluids is lower risk relative to PN, and the current regimen, in combination with gut slowing and an SBS diet, has been sufficient to maintain hydration and meets her preferences. See Box 5.

Case 6

KJ is a 57 y/o male with longstanding Crohn’s disease and prior small bowel resections in 1991 and 2001, with approximately 50% of the small bowel remaining. The patient also underwent liver transplantation in 2008 due to nodular regenerative hyperplasia. In 2012, he developed increasing GI symptoms in the setting of colonic pneumatosis. While the etiology of the pneumatosis was never identified, and despite resolution on subsequent imaging, his GI symptoms did not return to baseline. PN and IV fluids were required to maintain hydration and adequate urine output; attempts at tapering IV fluids led to reduced urine output, worsening renal function, and 2 episodes of nephrolithiasis. The patient was started on teduglutide in June 2013 and had a significant reduction in stool volume after initiation; slower improvement in weight and urine output followed, and IV fluids were reduced 4 months later. By month 16 the patient was off IV fluids and was maintaining adequate hydration with oral intake alone. No changes in post-transplant medications were necessary, and he has enjoyed traveling both nationally and internationally without incident. See Box 6.

Case 7

NN is a 54 y/o male with SBS related to Crohn’s disease with multiple prior small bowel and colon resections and a resultant transverse colostomy. His last surgery was in March 2016, at which time he underwent an ileocolic resection due to anastomotic stricturing. Postoperatively he struggled with high ostomy output, weight loss, and difficulty with hydration. Small bowel imaging showed approximately 100cm of small bowel remaining, and stool testing showed significant malabsorption with 97g of fat per 24 hours (normal 2-7g). He was initiated on PN in December 2016 but developed a line-related infection shortly after initiation. Teduglutide was initiated in February 2017; his weight increased significantly and he was able to wean off parenteral support in July 2017. See Box 7.

DISCUSSION

Living with SBS is extremely challenging for patients, not only as a result of the short bowel and the associated diarrhea,4 but also from the labor-intensive regimens that include careful meal planning, medication adherence, and coordination of PN/IV fluids and enteral feedings from preparation to delivery to administration. All of these factors have a great impact on quality of life for patients with SBS, and anything clinicians can do to ease this burden is a worthy goal. This requires focusing on the issues that are most important to patients. Interventions that decrease the side effects of SBS are extraordinarily meaningful. For example, one of our patients reported the following after initiating codeine for gut slowing instead of the loperamide he was previously using:

“I awoke at 4:30am and went fishing with the TPN hanging on my back. Bathroom once on the way there and once again at 2:30! That’s incredible!” and later, “with the decrease in my diarrhea on the codeine, I was able to take 2 hours off my usual 9 hour trip up to Virginia as I did not have to find a bathroom nearly as often!!”

The use of intestinotrophic agents such as teduglutide can play a vital role in improving patient quality of life by helping to manage core issues in SBS to include improving nutritional status, limiting stool/ostomy output, and decreasing dependence on parenteral support.

Careful patient selection is essential to maximizing the benefits of trophic agents while minimizing the risks. Gastrointestinal dysplasia and cancer are contraindications to teduglutide therapy given concern for trophic effects on a potential malignancy; a history of biliary disease or pancreatitis warrants similar caution. Patients should maximize other potential therapies5-8 before initiating teduglutide given its significant cost. Following this, a careful discussion with the patient regarding the risks and potential benefits should be undertaken and reasonable goals set. It may not be possible for a short bowel patient to discontinue PN with teduglutide therapy, but it may be reasonable to hope for fewer days of PN each week, significantly shorter infusion times, less diarrhea, and an increase in overall well-being.

Finally, it is crucial to keep in mind (and express to patients) that trophic agents such as teduglutide are not a substitute for a comprehensive SBS program, but rather an adjunct to this program.

Weaning from PN

The weaning of parenteral support can be a daunting task for short bowel patients, particularly those who have struggled in the past to maintain weight and hydration. We typically begin this process once a patient has stabilized at or near their goal weight, unless the clinical course accelerates our decision (multiple septic episodes, unwieldy stool output on maximum anti-diarrheal/anti-secretory agents, etc.). For patients on PN, we make stepwise decreases in total fluid volume and macronutrient content (Table 1). Weekly labs are followed closely, as are patients’ weights, stool output, and urine output, particularly in patients who attempt to increase their PO intake to compensate for decreased PN support, which may make stool output increasingly difficult to manage. This frequently requires the escalation or addition of anti-diarrheal agents such as loperamide or codeine.8

A subset of patients is unable to increase their PO intake successfully without uncontrolled stool output. Our experience has suggested that many of these patients can still work toward enteral autonomy through the use of a gastrostomy tube with a pump for nightly feedings at a decreased but continuous rate for 8-12 hours. However, prior to placing permanent access, a nasogastric nocturnal feeding trial with a concurrent 72-hour fecal fat collection is undertaken to ensure we do not drive stool output higher than is manageable and that absorption is adequate. Similarly, some patients succeed with an enteral backpack for daytime use to infuse enteral feeding or oral rehydration solutions; the portability of the enteral backpack is often key to success and quality of life in these patients. A small subset of patients is unable to achieve enteral autonomy, most commonly those with severely foreshortened bowel, defunctionalized remaining bowel, or short bowel in the context of bariatric surgery.

Rather than a change in the above process, these patients often require a change in the goals of weaning. For many patients, even one or two nights off of PN per week is very meaningful for quality of life and often helps facilitate vacations and social functions. A similarly worthwhile alternative goal might be reducing total weekly PN in favor of IV fluids to support hydration, which carries a more favorable risk profile, particularly with regard to line infection.

CONCLUSION

Successful management of the challenges SBS patients face requires systematic and strategic intervention. It is important to avoid “throwing the kitchen sink” at patients, as overtreatment can drive stool output further and result in unnecessary healthcare costs. This article demonstrates how diet, hydration, and medication selection, including the judicious use of intestinotrophic agents, all play an integral role in the complex management of SBS. See Table 2 for additional resources for clinicians.


Liver Disorders, Series #7

Metabolic Diseases of The Liver – A Review


Inherited metabolic liver diseases are a group of disorders caused by the pathologic accumulation of metals or misfolded proteins from disrupted normal metabolic pathways. The common diseases are hemochromatosis, Wilson disease (WD), alpha-1-antitrypsin deficiency (AAT) and glycogen storage diseases (GSD). New pathophysiologic understanding at the molecular level has changed clinical practice and research in recent years. This review article focuses on pathophysiology, clinical presentations, current management strategies and future directions.

Long Le, Duminda Suraweera, Gaurav Singhvi Olive View-UCLA Medical Center, Sylmar, CA


HEMOCHROMATOSIS
Pathogenesis

Hemochromatosis is a well-defined syndrome characterized by toxic accumulation of iron in the parenchymal cells of the liver, heart and endocrine glands. In normal homeostasis, iron load triggers an interaction among various signaling proteins including HFE, transferrin receptor 2 (TfR2) and hemojuvelin (HJV), leading to the expression of hepcidin, an important hormone in iron homeostasis. Hepcidin binds to and causes the degradation of ferroportin (FPN) on the surface of duodenal enterocytes and macrophages.1 When ferroportin is downregulated, iron is not released from enterocytes and macrophages into the plasma, thus keeping plasma iron levels low. Hepcidin also inhibits enterocyte iron absorption from the gut. If one or more components of this pathway fail, hepcidin will not be expressed in sufficient quantity and plasma iron will rise, leading to hemochromatosis.2 A defect in FPN leads to hepcidin resistance and can result in hemochromatosis as well. In humans, hepcidin deficiency has been associated with HFE-associated, TfR2-associated and HJV-associated hemochromatosis (Table 1).

The most well-known and most common form of hereditary hemochromatosis (HH) is HFE-related hemochromatosis. This variant of the disease is associated with homozygosity for the C282Y polymorphic variant of the HFE gene. The C282Y allele frequency is about 6%, and the prevalence of homozygosity among Caucasians is 1:2000 to 1:3000. A low penetrance of about 2% means disease manifestation is rare.

Clinical Presentation

The clinical presentation of hemochromatosis can vary widely depending on which organs are involved and the severity of iron overload. Manifestations range from simple laboratory abnormalities (elevated serum aminotransferase levels) to severe end organ damage (cirrhosis, liver fibrosis, hepatocellular carcinoma (HCC), restrictive cardiomyopathy, congestive heart failure, arrhythmia, gonadal dysfunction, glucose intolerance, diabetes). Environmental factors that could increase the risk of end organ damage include excess alcohol consumption, pre-existing hepatic steatosis and coexisting viral hepatitis.3 However, the classic presentation of diabetes, skin pigmentation and cirrhosis has become increasingly uncommon given more sensitive lab tests and increased awareness of the disease. Typical symptoms include malaise, fatigue, decreased libido, arthralgia and hepatomegaly. The majority of cases of hemochromatosis are diagnosed after detecting elevated serum transferrin-iron saturation (TS) and serum ferritin (SF) levels. In general, males usually have worse manifestations of the disease: their ferritin levels are usually higher (>200 ug/L for females and >300 ug/L for males), and excess tissue iron (>25 umol/g liver tissue) is more common in males.

Diagnosis

The diagnosis of hemochromatosis should be considered in patients with the above non-specific symptoms and abnormal liver tests. Middle-aged men of Caucasian origin are especially susceptible. TS is almost always increased in affected patients. As the disease progresses, serum ferritin begins to rise, indicating the accumulation of iron in tissue. If either test is abnormal (TS > 45% or ferritin above the upper limit of normal), then HFE mutation analysis should be performed. Serum ferritin can also be elevated in other conditions such as infection, alcoholic liver disease, chronic hepatitis B and C and nonalcoholic fatty liver disease. If the HFE mutation analysis shows C282Y heterozygosity or a non-C282Y mutation, one should exclude other liver/hematologic diseases and consider liver biopsy (Figure 1).
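The screening threshold described above (TS > 45% or ferritin above the laboratory's upper limit of normal triggering HFE mutation analysis) reduces to a simple decision rule. The sketch below is illustrative only; the function name and parameters are ours, not part of any published calculator, and it is not a clinical tool:

```python
def needs_hfe_testing(ts_percent: float, ferritin: float, ferritin_uln: float) -> bool:
    """Return True when iron studies meet the screening threshold for
    HFE mutation analysis: transferrin-iron saturation above 45%, or
    serum ferritin above the lab's upper limit of normal (ULN).
    Illustrative sketch only, not medical advice."""
    return ts_percent > 45 or ferritin > ferritin_uln

# Either abnormal test alone is enough to trigger HFE testing.
print(needs_hfe_testing(52, 150, 300))  # True  (TS elevated)
print(needs_hfe_testing(30, 400, 300))  # True  (ferritin above ULN)
print(needs_hfe_testing(30, 150, 300))  # False (both normal)
```

Note that ferritin is compared against the lab-specific ULN rather than a fixed cutoff, mirroring the text's "above the upper limit of normal" wording.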

Management

Once the diagnosis has been confirmed with genetic testing, the next step is to determine whether liver biopsy is warranted. A ferritin level of >1000 ug/L is associated with a 20%-45% risk of cirrhosis; therefore, liver biopsy is recommended at this level. Once the diagnosis has been confirmed, all first-degree relatives should also be screened with gene testing.3

Despite the lack of a randomized controlled trial of phlebotomy versus no phlebotomy, there is substantial evidence that early intervention reduces the morbidity and mortality of HH.4 In a survey of 2500 patients, “86% of patients reported some or all symptom improvement with phlebotomy and 65% of patients agreed that benefits of treatment outweighed the difficulties”.5 Treatment should be initiated in: 1) symptomatic patients and 2) asymptomatic patients with homozygous C282Y and markers of iron overload or an increased level of hepatic iron.6 The removal of iron can relieve malaise, fatigue, skin pigmentation, abdominal pain and abnormal liver enzymes, and even reduce insulin requirements in diabetics.6 However, certain features of the disease, such as arthropathy, hypogonadism and advanced cirrhosis, are irreversible. Patients with cirrhosis should be screened for HCC.

Phlebotomy should be performed as follows: one unit (500cc) of blood should be removed weekly or biweekly, with hemoglobin and hematocrit (H/H) checked beforehand to ensure the H/H has not fallen more than 20% below the starting value. Ferritin should be checked every 10 phlebotomy sessions, with a goal level of 50-100 ug/L. Most patients require maintenance phlebotomy to stay at goal, and the frequency of maintenance therapy varies among patients. Dietary adjustments are not necessary in the treatment of hemochromatosis.3,6
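The two monitoring rules in this regimen (hold phlebotomy if the H/H has fallen more than 20% from the starting value; maintenance ferritin goal of 50-100 ug/L checked every 10 sessions) are simple arithmetic checks. A minimal sketch, with hypothetical function names and no claim to be a clinical tool:

```python
def ok_to_phlebotomize(current_hct: float, baseline_hct: float) -> bool:
    """Proceed with the session only if hematocrit has not fallen more
    than 20% below the pre-treatment starting value (illustrative only)."""
    return current_hct >= 0.8 * baseline_hct

def ferritin_at_goal(ferritin_ug_l: float) -> bool:
    """Maintenance goal from the text: ferritin 50-100 ug/L,
    rechecked every 10 phlebotomy sessions (illustrative only)."""
    return 50 <= ferritin_ug_l <= 100

# With a baseline hematocrit of 45, the hold threshold is 0.8 * 45 = 36.
print(ok_to_phlebotomize(38, 45))  # True:  38 >= 36
print(ok_to_phlebotomize(35, 45))  # False: 35 < 36
```

The 0.8 multiplier encodes the "no more than a 20% fall" rule directly, so the threshold scales with each patient's own starting value rather than using a fixed cutoff.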

WILSON DISEASE
Pathogenesis

Wilson disease is an autosomal recessive disease in which copper homeostasis is disrupted, leading to end organ damage from copper accumulation in tissues. Copper plays an important role in many cellular processes and serves as a co-factor for many enzymes such as cytochrome c oxidase (mitochondrial oxidation) and dopamine beta-hydroxylase (catecholamine production).7 In its free form, copper has high redox potential and can degrade cellular structures if left unescorted by its chaperone proteins.

Central to copper homeostasis is the ATP7B gene, which codes for a copper-transporting P-type ATPase. The ATP7B protein is expressed most abundantly in liver cells and has been localized to the trans-Golgi network within the cell. The ATP7B protein functions to incorporate free copper into apoceruloplasmin to form a six-copper-binding structure known as ceruloplasmin. Ceruloplasmin carries up to 90% of the copper in plasma and also stores copper in peripheral tissues.8 Mutations in the gene can alter the protein’s structure and function, leading to toxic accumulation of copper in the liver and brain. Wilson disease may present with hepatic, neurologic or psychiatric manifestations.

Clinical Presentation

Similar to hemochromatosis, the clinical course of Wilson disease is highly variable. In general, there are two forms of the disease: the predominantly hepatic form and the predominantly neurologic form. Onset of the hepatic form is usually several years earlier than that of the neurologic form, but most patients eventually develop both.7

The predominantly hepatic form affects about 40% of patients, and presentations range from asymptomatic liver enzyme elevation and chronic hepatitis to cirrhosis and liver failure. It is often associated with a Coombs-negative hemolytic anemia, acute renal failure and coagulopathy.9 The initial presentation can be as subtle as transient episodes of jaundice due to hemolysis.

In the predominantly neurologic form, initial symptoms may be mild and nonspecific. Characteristic symptoms include asymmetric tremors that can involve the trunk and head. Dystonia, another common symptom, occurs in 10 to 60% of patients and is characterized by abnormal posturing of various body segments (involuntary head rotation, shoulder elevation, forceful eye closure, etc.). Memory decline, changes in handwriting and lack of coordination have also been documented.7,9

Up to 10% of patients exhibit non-specific psychiatric symptoms including attention deficit, depression, mood swings and even psychosis. Fortunately, these symptoms may resolve with adequate therapy.7

Diagnosis

The diagnosis of Wilson disease is challenging given its non-specific symptoms and variable clinical course. Clinicians should suspect Wilson disease in patients with liver abnormalities, with or without typical neurologic symptoms. See Figure 2 for a diagnostic algorithm. A liver ultrasound is needed to assess for signs of cirrhosis.7

Management

All patients require lifelong drug therapy, with liver transplant being the curative treatment in specific patient populations. Available treatments include chelators such as trientine and D-penicillamine, or copper absorption inhibitors such as zinc salts.

D-penicillamine acts as a copper-chelating agent; it promotes urinary excretion of copper and induces production of metallothionein, an endogenous copper chelator. Trientine is another chelator that forms a stable complex with copper and promotes its urinary excretion. Zinc salts inhibit intestinal copper absorption by stimulating the same endogenous chelator, metallothionein.

Initial treatment focuses on achieving a negative copper balance with either of the chelators mentioned above. This initial phase may last 6-12 months, aiming for a 24-hour urinary copper excretion of 800-1000 µg per day.10 The maintenance phase of therapy uses either low-dose chelators (compared to initial treatment) or zinc salts, with a target 24-hour urinary copper excretion of approximately 200-500 µg per day. First-degree relatives of any new patient must also be screened for Wilson disease.10 Zinc is also recommended in the presymptomatic stage of Wilson disease given its favorable side effect profile.

Liver transplantation is the only curative treatment, and it should be considered in patients with fulminant hepatic failure or end-stage cirrhosis. The effect of a low-copper diet remains unknown. Gene therapy and stem cell approaches have shown early promise in animal studies but require further investigation.7,9

ALPHA 1 ANTITRYPSIN DEFICIENCY
Pathogenesis

Alpha-1-antitrypsin (AAT) is a glycoprotein synthesized in liver cells and other tissues. It inhibits a wide range of proteases, including pancreatic trypsin, cathepsin G and neutrophil elastase, and thereby plays an important role in host defense.11

AAT deficiency is an autosomal co-dominant condition. AAT is encoded by the SERPINA1 gene (also known as Pi, for protease inhibitor). AAT can be deficient either qualitatively or quantitatively: many Pi mutations, in heterozygous or homozygous combinations, can lead to low-level, non-functional or completely absent AAT. Genotypes are described by the terminology Pi followed by the two alleles (e.g., Pi MM, Pi MZ, Pi ZZ). AAT deficiency primarily affects the lungs and liver by two different mechanisms: polymerization in the liver and unchecked elastase activity in the lungs.12

In the lungs, Z or null mutations result in low levels of ineffective AAT, leading to elastase over-activity, which causes emphysema. In the liver, the Z variant causes conformational changes in the AAT protein, leading to its polymerization and subsequent accumulation in the hepatocyte endoplasmic reticulum. This accumulation of misfolded protein is thought to lead to apoptosis and cirrhosis, though the exact mechanism remains unclear. The proposed pathophysiology has been supported in animal models, where over-expression of the Z allele is associated with cirrhosis.13 See Table 2.

Clinical Presentation

In the lungs, the most common presentation of AAT deficiency is early-onset emphysema, usually in the 4th or 5th decade of life, notably in patients without a significant smoking history. Emphysema from AAT deficiency disproportionately affects the lung bases and is usually panacinar in pathology.12

In the liver, the disease follows a bimodal distribution: neonatal hepatitis and cholestatic jaundice in infants, and chronic liver disease in adults. In infants, clinical symptoms include jaundice (which can easily be mistaken for physiologic jaundice), bleeding diathesis, and a change in urine color due to conjugated hyperbilirubinemia. Jaundice lasts for about 3 months on average. Other non-specific symptoms include slow weight gain, irritability and lethargy. Fortunately, only 2-3% of PiZZ infants develop cirrhosis or fibrosis in childhood.14 The jaundice eventually clears in the majority of these infants; however, some will continue to have abnormal liver enzymes, hepatomegaly or splenomegaly.15,16

In adults, AAT deficiency can present as asymptomatic abnormal liver function tests, cirrhosis (seen in up to one-third of adult PiZZ patients) or hepatocellular carcinoma.

Diagnosis

The diagnosis of AAT deficiency can be confirmed by laboratory testing in three ways: AAT plasma or serum level, AAT phenotype, or AAT genotype. AAT deficiency testing should be performed in all patients with unexplained liver diseases.12,17

Serum AAT level can be measured accurately and is an acceptable initial test, but it has limitations: heterozygous patients may have normal levels, and AAT is an acute phase reactant that can be elevated in inflammatory states. The gold standard of diagnostic testing is phenotypic analysis, although there are drawbacks: phenotyping is time-consuming, not readily available, and cannot distinguish heterozygotes from homozygotes. Genotyping is generally more expensive but offers more information about the likelihood of clinical consequences.12 Liver biopsy is not required for the diagnosis except in uncertain cases and when other conditions need to be ruled out. In older adult patients, once the diagnosis is confirmed, annual liver enzyme testing is recommended for monitoring. All first-degree relatives should also be screened.17

Management

AAT deficiency management depends on the severity and the organs involved. A major component of therapy consists of early detection and prevention of complications by reducing modifiable risk factors.

Lung Diseases

In patients with COPD, management includes standard treatment with bronchodilators, inhaled corticosteroids, pneumococcal vaccine, influenza vaccine and smoking cessation. Surgical treatment with lung volume reduction and transplant are available but clinical improvement remains inconsistent and controversial for the AAT deficient patients.12,17

Currently there are four different AAT augmentation therapies being investigated for the treatment of COPD: (1) intravenous human plasma-derived augmentation, (2) augmentation by inhalation, (3) recombinant augmentation and (4) synthetic elastase inhibition.12

Injection of purified AAT protein has been shown to increase AAT levels in the lungs of AAT-deficient patients. However, only a modest reduction in FEV1 decline with weekly infusion was observed in a small, randomized trial.18 Overall, evidence for significant clinical improvement remains lacking.

Liver Diseases

Besides the standard management of liver failure and its associated complications, there is no specific therapy for AAT-deficient patients. Effective preventive measures include hepatitis A and B vaccination and avoidance of hepatotoxins such as alcohol. AAT augmentation therapy is not effective in AAT deficiency-related liver disease. To date, liver transplant remains the only curative treatment for AAT deficiency liver disease. AAT deficiency continues to be a leading indication for liver transplant in pediatric patients, with a 5-year survival rate of up to 90%.19 Liver transplant in adults occurs less frequently but has a prognosis similar to liver transplant for other indications.12,19,20

The concept of chemical chaperones, in which a synthetic compound binds the misfolded AAT protein to aid its secretion and avoid polymerization, has been explored. However, these efforts were limited by the massive quantity of drug that would be required for one-to-one binding. AAT liver gene silencing has been reported to be successful in suppressing liver damage in animal models, and phase II trials have been announced.20

Glycogen Storage Disease
Pathophysiology

Glycogen storage disease (GSD) is a group of inherited, heterogeneous disorders characterized by abnormal accumulation of glycogen in various tissues, with an incidence of approximately 1 in 20,000 infants. Since glycogen serves as dynamic energy storage for muscle and liver, the disorders can be divided roughly into those predominantly affecting the liver and those affecting muscle. The glycogen disorders are numbered in the order of their discovery, with type I having been discovered first; it is also the most severe variant. Based on prevalence, severity and liver involvement, this article will discuss only types I and III. Types IV and VI also affect the liver, but they are less common and less severe.21,22

GSD Type I

There are two subtypes of type I glycogen storage disease (GSD I), types Ia and Ib, both with autosomal recessive transmission.23 Type I GSD typically presents early in infancy and was first described by von Gierke in 1929. The final step of gluconeogenesis and glycogen breakdown involves translocation of glucose-6-phosphate (G6P) from the cytoplasm into the endoplasmic reticulum (ER) lumen, where it is hydrolyzed into glucose and phosphate by glucose-6-phosphatase. GSD Ia reflects a defect in the enzyme itself, whereas GSD Ib reflects a defect in the transporter.24 Both defects lead to buildup of G6P and hypoglycemia.

GSD Type III

Similar to GSD I, GSD III is also an autosomal recessive condition with two subtypes, IIIa and IIIb, and an incidence of 1:100,000. The primary defect is a mutation in the AGL gene that leads to deficiency of the glycogen debranching enzyme (GDE). GDE participates in one of the last steps in converting glycogen to glucose-1-phosphate.

Clinical Presentation and Diagnosis
GSD Type I

Patients commonly present at 3-4 months of age with symptoms that include hepatomegaly, doll-like facies (fat deposits in the cheeks), growth failure and enlarged kidneys. Laboratory examination often reveals fasting lactic acidosis, hypertriglyceridemia, mildly elevated LFTs and symptomatic hypoglycemia occurring 2-3 hours after meals.25 Both subtypes have abnormal platelet aggregation, and there may be excessive bleeding. GSD Ib is moderately associated with inflammatory bowel disease and with recurrent bacterial infections such as otitis media and pneumonia, due to neutropenia and neutrophil dysfunction. The diagnosis is usually suspected clinically and confirmed with gene analysis; liver biopsy is no longer required for diagnosis.24 Long-term complications include liver adenomas and renal disease. Progression to cirrhosis is rare, though there have been case reports of liver cirrhosis in GSD Ib.26

GSD Type III

The median age at first clinical symptoms is about 8 months. Early symptoms are very similar to those of GSD I, including hepatomegaly, hypoglycemia, failure to thrive and recurrent illness/infections. The kidneys are typically not enlarged. GSD IIIa affects both muscle and the liver, while only the liver is affected in GSD IIIb. Unlike GSD I, progressive liver cirrhosis and failure may occur; a hepatic complication incidence of 11% has been reported in a study of 175 patients.27 In the same study, cardiac complications occurred in 58% of patients, with ventricular hypertrophy being the most common. GSD IIIa patients often have minimal muscle weakness in childhood that can later progress to distal muscle wasting.21 Diagnosis can be made via clinical symptoms and laboratory examination demonstrating deficient GDE in skin fibroblasts or lymphocytes.24 Gene analysis can confirm the diagnosis and identify the subtype.

Management
GSD Type I

Management focuses on maintaining euglycemia through dietary therapy, which includes a combination of continuous nasogastric tube feeding (CNTF), uncooked cornstarch (CS) and regular oral feeds high in complex carbohydrates, evenly distributed over 24 hours. Management frequently requires a specialist dietician, and frequent blood glucose monitoring is crucial for well-controlled GSD. Fructose and galactose are usually restricted since they cannot be converted into free glucose. CNTF should be started at the time of diagnosis with the aim of providing 8-10 mg/kg/min of glucose in an infant and 5-7 mg/kg/min in an older child. Traditionally, CS is ingested at bedtime, and a trial of CS therapy is often introduced between 6 months and 1 year of age.28 However, consumption of cooked pasta, a more palatable alternative to CS and MCS, has been shown to achieve adequate nighttime glucose control in older patients.29 Common complications of the disease such as hyperlipidemia, elevated uric acid and microalbuminuria can be treated with HMG-CoA reductase inhibitors, allopurinol and ACE inhibitors, respectively. In type Ib patients, granulocyte colony-stimulating factor is added to treat neutropenia and neutrophil dysfunction. Liver and bone marrow transplantation can be considered in patients with extremely low fasting glucose tolerance and severe immune compromise.
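The glucose targets above are rates (mg/kg/min); converting them to a daily quantity is simple arithmetic. A sketch, with the 8 kg infant weight chosen purely for illustration:

```python
def daily_glucose_g(rate_mg_kg_min: float, weight_kg: float) -> float:
    """Total glucose (grams) delivered over 24 hours at a given
    infusion rate expressed in mg/kg/min."""
    return rate_mg_kg_min * weight_kg * 60 * 24 / 1000

# For an 8 kg infant, the 8-10 mg/kg/min target corresponds to
# roughly 92-115 g of glucose per day.
low = daily_glucose_g(8, 8)    # ~92 g/day
high = daily_glucose_g(10, 8)  # ~115 g/day
```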

GSD Type III

Similar to GSD type I, the mainstay of management is dietary. The regimen includes carbohydrate-rich meals and nocturnal uncooked cornstarch. Unlike GSD type I, fructose and galactose do not need to be restricted. Some studies suggest that a high-protein diet can help improve muscle strength and exercise tolerance and also serve as a substrate for gluconeogenesis.30 In those studies, relative daily protein intake was increased from 18% to 25%.31

CONCLUSION

Historically, metabolic diseases commonly presented with end-organ damage, but with increased knowledge of these conditions and a high degree of suspicion, patients can be diagnosed earlier. Various diagnostic criteria and screening methods, including sensitive blood tests and genetic testing, allow early treatment that can alter disease outcomes. As there may be a delay before these patients see a specialist, primary care physicians need to be familiar with the clinical presentations in order to send the appropriate screening tests. The key is identifying abnormal liver tests in combination with non-hepatic disease presentations: endocrine and cardiac presentations with hereditary hemochromatosis, neuropsychiatric with Wilson disease, and pulmonary with alpha-1-antitrypsin deficiency. We continue to make progress in our understanding at the molecular level in order to identify new potential targets of therapy. Finding curative treatments for many of these disorders remains challenging, but gene therapy offers promise in glycogen storage disease, Wilson disease and alpha-1-antitrypsin deficiency.9,20,32


Nutrition Issues In Gastroenterology, Series #166

Parenteral Nutrition Lipid Emulsions and Potential Complications


Intravenous lipid emulsions (ILE) have become a crucial component of parenteral nutrition providing a source of essential fatty acids as well as non-protein calories. However, their long-term use has been associated with significant complications. This has led to the quest to identify a lipid emulsion that decreases complications and provides beneficial physiologic effects. Multiple plant and fish based sources of ILE have been identified and are in use throughout the world. In this review, we focus on the benefits and adverse effects associated with soybean oil (SO) ILE and subsequent generations of ILE.

Manpreet S. Mundi, MD1, Bradley R. Salonen, MD2, Sara L. Bonnes, MD2, Ryan T. Hurt, MD, PhD1,2. 1Division of Endocrinology, Diabetes, Metabolism and Nutrition, Mayo Clinic, Rochester, MN. 2Division of General Internal Medicine, Mayo Clinic, Rochester, MN

INTRODUCTION

Intravenous lipid emulsions (ILE) are a key component of parenteral nutrition (PN), providing a source of essential fatty acids (EFA) as well as non-protein calories. Development of a stable ILE took decades of work by leaders in the field before the introduction of the first stable product (Lipomul®; 15% cottonseed oil, 4% soy phospholipids, 0.3% poloxamer).1,2 Unfortunately, due to adverse effects attributed to the emulsifying agent, as well as a non-extractable toxic substance in cottonseed oil, Lipomul® was removed from the market.3 Subsequent work by Wretlind and Schuberth led to the introduction of a soybean oil (SO) based ILE as a 10% SO solution.4 Since that initial introduction, significant modifications have taken place in subsequent generations of ILE, largely in an effort to reduce omega-6 fatty acid (FA) concentrations.

Soybean Oil Based ILE (Generation 1)

SO ILE are composed of SO triglycerides enveloped by a phospholipid emulsifier, allowing the triglyceride core to remain soluble in an aqueous PN mixture, similar to a chylomicron-like particle.2 The emulsifiers are typically provided in excess to ensure that particles maintain a size of 200-600 nanometers (nm), allowing them to pass through the smallest capillaries.5 A typical composition of first-generation ILE is therefore 10-30% SO, 1.2% egg yolk phospholipids, and 2.25% glycerin, with calorie content ranging from 10-11 kcal/g depending on concentration.2,6 ILE thus provide an excellent source of calories, allowing for a reduction in the amount of dextrose used in PN. This distribution of calories is important: after Drs. Wilmore and Dudrick introduced the technique of "hyperalimentation" in the U.S. with a solution of glucose, fibrin hydrolysate, vitamins, and minerals, reports began linking the high-dextrose, fat-free PN to hyperosmolar hyperglycemic non-ketotic diabetic coma, hypoglycemia, hepatic enzyme elevations, fatty liver, and essential fatty acid deficiency (EFAD).6-8 Meguid et al. subsequently performed a pivotal study in 23 men showing that providing one-third of daily calories as SO ILE (10% Liposyn®) was associated with fewer metabolic complications, leading to a gradual change in the U.S. to include ILE in PN.8

In addition to serving as a calorie source, SO ILE also contain robust amounts of the essential fatty acids linoleic acid (18:2n-6) and α-linolenic acid (18:3n-3), both of which play a key role in the structural stability of membranes as well as in the generation of cellular signaling molecules.9 Humans lack the ability to synthesize these fatty acids and must obtain them from plant sources such as seed oils.10 The minimal PN requirement of linoleic acid to prevent EFAD has been estimated at 1% of total calories, with optimal levels being 3-4%, whereas the α-linolenic acid requirement is even lower at 0.2-0.5% of total calories.9,11 Given that Intralipid® contains 20% SO, with 52% of the fat as linoleic acid, only 2.9-8.7 g/day of lipid or 29-87 mL of Intralipid® would be required to meet the essential fatty acid needs of a 60 kg individual receiving 25 kcal/kg/day.9
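A rough sketch of the arithmetic behind these figures, assuming a 10% emulsion (0.1 g lipid per mL), under which the 29-87 mL and 2.9-8.7 g/day figures correspond; the emulsion concentration is left as a parameter since products range from 10% to 30%:

```python
def lipid_grams(volume_ml: float, concentration: float) -> float:
    """Lipid delivered (g) by a given volume of emulsion.
    concentration is the emulsion strength, e.g. 0.10 for a 10% ILE
    (0.1 g lipid per mL) or 0.20 for a 20% ILE."""
    return volume_ml * concentration

def pct_of_calories(fat_g: float, total_kcal: float,
                    kcal_per_g: float = 9.0) -> float:
    """Percent of total daily calories supplied by fat_g grams of fat."""
    return 100.0 * fat_g * kcal_per_g / total_kcal

total_kcal = 60 * 25            # 60 kg patient at 25 kcal/kg/day = 1500 kcal
fat = lipid_grams(29, 0.10)     # 29 mL of a 10% emulsion -> 2.9 g lipid
linoleic = fat * 0.52           # ~52% of SO fat is linoleic acid -> ~1.5 g
# 1.5 g linoleic acid x 9 kcal/g ~= 13.6 kcal, i.e. ~0.9% of 1500 kcal,
# close to the ~1%-of-calories minimum cited above.
```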

Despite these benefits, the high ratio of n-6 to n-3 polyunsaturated fatty acids (PUFA) in SO ILE can have adverse effects. These 18-carbon fatty acids are used to make 20- and 22-carbon derivatives, including arachidonic acid (AA, 20:4n-6), eicosapentaenoic acid (EPA, 20:5n-3) and docosahexaenoic acid (DHA, 22:6n-3).2,11 AA can be further metabolized to give rise to pro-inflammatory eicosanoids (2-series prostaglandins and thromboxanes, and 4-series leukotrienes).6,10,12,13 On the other hand, EPA, which originates from n-3 PUFAs, tends to generate less pro-inflammatory 3-series prostaglandins and thromboxanes, as well as 5-series leukotrienes. In addition to these pro-inflammatory metabolites, SO ILE have also been noted to reduce clearance by the reticuloendothelial system (RES), a key player in the phagocytosis of microorganisms, tissue debris, and particulate matter.14 With provision of SO ILE at a rate of 0.13 g/kg/hr over 10 hours daily for 3 days, RES clearance fell by an average of 40%.14 In a 60 kg individual, this would amount to 39 mL/hr of 20% lipid emulsion, which typically contains 50 g per 250 mL.
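The 39 mL/hr figure follows directly from the stated dose. A quick check, assuming a 20% emulsion (0.2 g lipid per mL, i.e. 50 g per 250 mL as noted above):

```python
def infusion_ml_per_hr(dose_g_kg_hr: float, weight_kg: float,
                       emulsion_g_per_ml: float = 0.20) -> float:
    """Emulsion volume (mL/hr) needed to deliver a lipid dose
    expressed in g/kg/hr."""
    return dose_g_kg_hr * weight_kg / emulsion_g_per_ml

# 0.13 g/kg/hr in a 60 kg patient is 7.8 g/hr of lipid, i.e. 39 mL/hr
# of a 20% emulsion.
```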

SO ILE can also lead to increased LDL and triglyceride levels as well as a decrease in HDL levels. This is due to the liposomes created from excess phospholipid emulsifier acquiring cholesterol and apolipoproteins from HDL in exchange for phospholipids.15,16 The capacity of HDL to handle this phospholipid influx is saturable, and if infusion rates exceed this capacity, liposomes begin to accumulate in plasma where they can continue to be enriched in cholesterol and begin to show characteristics of lipoprotein-X (Lp-X).17 It is important to note that 20% ILE tend to have lower phospholipid to triglyceride ratios compared to 10% ILE, resulting in faster phospholipid clearance. Twenty percent ILE are predominantly used in clinical practice.18

Another common complication of PN is intestinal failure-associated liver disease (IFALD), affecting 30-60% of children and 15-40% of adults requiring long-term SO ILE.2 IFALD varies in clinical presentation and can include hepatic steatosis, cholestasis, cholelithiasis, and hepatic fibrosis.19 Although the etiology of IFALD appears multifactorial, some studies have revealed a correlation between parenteral lipid intake of ≥1 g/kg/day and higher phytosterol levels.20 Elevated levels of phytosterols from SO ILE may be a contributing factor, as higher phytosterol levels correlate with severity of IFALD.21 Typically, only a small percentage (5-10%) of dietary phytosterols are absorbed, and they play a beneficial role by inhibiting enteral absorption of cholesterol. With parenteral administration, however, phytosterols enter the circulation directly, leading to higher concentrations since they cannot be converted to bile acids.22

Soybean Oil and Medium Chain Triglycerides (SO:MCT 50:50) (Generation 2)

The search for improved sources of ILE following widespread use of SO lipids first led to the use of medium-chain triglycerides (MCT). Like other triglycerides, MCTs have a glycerol backbone with fatty acids attached, typically composed of between 6 (caproic) and 12 (lauric) carbon atoms, compared to the 13-21 carbon chains of long-chain triglycerides (LCTs).23 In addition to being hydrolyzed faster by pancreatic lipases, MCTs are not incorporated into chylomicrons and are thus rapidly delivered directly to the liver via the portal circulation.23 In contrast to LCTs, once delivered to the cell, MCTs are able to passively cross the mitochondrial membrane due to their water-soluble properties and proceed directly to oxidation.

Based largely on these theoretical advantages of MCTs, demonstrated primarily in animal models, ILE formulations in Europe have included MCT/LCT combinations for the past 30 years. A number of small short-term studies have shown MCT ILE to be beneficial compared to SO ILE in terms of liver function tests, phospholipid to triglyceride ratio, and recovery of RES function, although some studies reveal minimal benefit.24-27 Additionally, a larger prospective RCT showed that MCT/LCT use resulted in phospholipid profiles similar to healthy controls at the end of 4 weeks compared to 100% LCT.28 Given the limited data on MCT in any PN formulation (MCT/LCT or pure MCT), longer-term studies are needed to evaluate the safety and efficacy of these ILE formulations.

Olive Oil (OO) Containing ILE (Generation 3)

In the third generation of ILE, OO was introduced as an alternative lipid source. OO was seen as a potential substitute for SO because it contains higher amounts of monounsaturated fatty acids (MUFA) and fewer n-6 PUFAs.29 During the 1990s, ClinOleic® 20% became available, comprising 80% OO and 20% SO.30 Since only 18.5% of the fat is linoleic acid, 81-240 mL per day would be needed to meet daily EFA needs. Concerns were raised that patients might not receive these doses and could develop EFAD.31 Despite data showing a significant reduction in α-linolenic acid and higher Mead acid levels, there was no clinical evidence of EFAD, and triene:tetraene ratios remained normal.31 Studies comparing OO/SO ILE to SO ILE have noted less deterioration of liver enzymes, a better phospholipid profile, and improvement in some clinical variables such as ventilator days.32-34
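Assuming ClinOleic® 20% provides 0.2 g of lipid per mL, of which 18.5% is linoleic acid, the quoted 81-240 mL range corresponds to roughly 3-9 g/day of linoleic acid. A sketch of that conversion:

```python
def volume_for_linoleic(target_g_la: float, lipid_per_ml: float = 0.20,
                        la_frac_of_fat: float = 0.185) -> float:
    """mL of emulsion needed to supply a target amount (g) of
    linoleic acid, given the emulsion's lipid and LA content."""
    return target_g_la / (lipid_per_ml * la_frac_of_fat)

# ~3 g/day of linoleic acid -> ~81 mL; ~8.9 g/day -> ~240 mL
```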

Fish-Oil (FO) Containing ILE (Generation 4)

The latest generation of ILE reduce the SO content by either switching completely to fish oil (FO) alone (Omegaven®; Fresenius Kabi) or using FO in combination with other triglyceride sources (Smoflipid®: 30% SO, 30% MCT, 25% OO, and 15% FO; or Lipoplus®: 50% MCT, 40% SO, and 10% FO). FO is an attractive choice for ILE given its high content of n-3 PUFA and α-tocopherol and its minimal amounts of plant phytosterols.35 Omegaven® is currently the only ILE composed entirely of FO, but it is not approved for routine use or commercially available in the U.S.; instead, it is available under study protocols or a compassionate-use allowance through the FDA for the treatment of IFALD.35 Smoflipid®, on the other hand, has recently been approved by the FDA and is now commercially available in the U.S.

Numerous studies have shown significant improvement or reversal of IFALD with the use of FO ILE in pediatric and adult populations.36,37 Heller et al. studied the impact of combining FO (0.2 g/kg/day Omegaven®) with SO (0.8 g/kg/day of 10% Lipovenoes®) versus SO alone (10% Lipovenoes®) in 44 patients after abdominal surgery and noted that the combination of FO with SO resulted in a significant decrease in liver enzymes and bilirubin levels.38 Klek et al. conducted a 4-week trial randomizing 73 patients with stable intestinal failure to either Smoflipid® or SO ILE (Intralipid®) and noted that mean ALT, AST, and total bilirubin concentrations were significantly lower in the Smoflipid® group.42

All three FO ILE tend to have higher levels of α-tocopherol (~200 mg/L), raising plasma concentrations compared to SO ILE.42,43 α-tocopherol is an antioxidant of the vitamin E family capable of scavenging the free radicals formed by peroxidation of lipids, especially PUFAs, which can otherwise result in cell damage.44 In addition to raising α-tocopherol concentrations, patients receiving FO ILE also tend to have lower n-6 PUFA and higher n-3 PUFA concentrations, producing a less pro-inflammatory profile.43 Metry et al. noted significantly lower IL-6 levels in surgical ICU patients randomized to Smoflipid® compared to Intralipid® on both day 4 and day 7 of the trial.45

Beyond these liver and anti-inflammatory benefits, studies have also revealed metabolic benefit: patients randomized to Lipoplus® had a greater reduction in free fatty acids, a smaller rise in triglyceride levels, and less reduction in HDL after ~7 days of use compared to the Lipofundin® group.47 Wu et al. also noted lower triglyceride levels on day 6 in patients randomized to Smoflipid® versus Lipovenoes®.48

Of note, most of these trials are very short-term, and the long-term impact of FO ILE needs further evaluation. One area of concern is the development of EFAD given the lower proportion of n-6 PUFAs. Fortunately, clinical trials thus far have revealed that although triene:tetraene ratios do rise, they do not exceed the threshold for EFAD if the dose of ILE is ≥1 g/kg/day.49,50 Mead acid levels also remained low, again suggesting that the rise in Mead acid levels may be largely due to the composition of the ILE. An important contributor to the lack of EFAD with FO ILE may be their higher content of AA, which is typically derived from linoleic acid.51

CONCLUSION

In summary, significant advances have been made since the initial development and administration of ILE. We have progressed from searching for ILE that are non-toxic to developing ILE that have beneficial properties beyond serving as a source of non-protein, non-carbohydrate calories. Moving forward, additional research is necessary to expand our knowledge regarding the use of later generations of ILE in disease-specific situations. The benefits of long-term use also need to be delineated, as much of the current research has focused on short-term trials in small cohorts. To continue to provide the best possible care to our patients, we must continue work in this field, not only to reduce the risk of harm but also to find ILE that improve outcomes.


Dispatches From The Guild Conference, Series #7

Hepatitis C Resistance Testing – When, Why and How to Do it


Hepatitis C virus (HCV) therapy has evolved dramatically in recent years with the development of highly effective and well-tolerated direct-acting antivirals (DAAs). Although it was initially felt that DAA resistance would be a challenge, the cure rates in most patient populations are extremely high. Fortunately, new therapies in development will likely make the need for resistance testing less and less relevant. In this review, the principles of resistance development are reviewed and the role of resistance testing in clinical practice is discussed, with specific recommendations on when to do testing, how to interpret the results and how to modify therapy appropriately based on the results.

Jordan J. Feld MD MPH, Toronto Centre for Liver Disease, Toronto General Hospital, University Health Network, Sandra Rotman Centre for Global Health, University of Toronto Financial Disclosure: JJF has received consulting and/or research support from Abbvie, Abbott, Gilead, Janssen and Merck

Treatment for hepatitis C virus (HCV) infection has been revolutionized with the development of highly effective and extremely well tolerated direct-acting antivirals (DAAs). With these new therapies, cure rates well above 90% are now reliably achieved in almost all patient populations. With such remarkable success rates, patients, providers and payers have come to expect cure when a course of antiviral therapy is undertaken. With the cost of therapy and restrictions on retreatment, getting it right the first time, or certainly the second time, must be the priority for all clinicians. At least for the time being, maximizing the chance of success requires resistance testing in certain clinical scenarios. Fortunately, this is likely a temporary situation. New salvage regimens that work across all genotypes and against many resistant variants will likely significantly limit the need for resistance testing in the future, but until then, resistance testing has an important role.

Why is Resistance an Issue?

Like all RNA viruses, HCV circulates in a given patient as a swarm of closely related but non-identical virions known as quasispecies. The virus has an error-prone polymerase that introduces approximately one error, or substitution, into the viral sequence with every round of replication. With upwards of 10^12 new virions made per day, a virus with a substitution at every single site in the genome is generated every single day, just by chance.1 Most of these substitutions are detrimental to the virus's ability to replicate and are thus selected against, disappearing from the viral population. However, some substitutions, just by chance, will affect how a given drug, or a class of drugs, inhibits HCV replication. During therapy, these substitutions provide a major survival advantage, and virions containing them will outgrow any drug-susceptible viruses. DAAs do not create resistance; they merely select for it.
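To see why every single-nucleotide variant is generated daily, a back-of-envelope calculation helps (the ~9,600-nt genome length is our assumption; the other figures come from the text above):

```python
genome_length_nt = 9600      # approximate HCV genome size (assumption)
virions_per_day = 1e12       # daily virion production, per the text
errors_per_genome = 1.0      # ~1 substitution per replication round

# Each position can mutate to any of 3 alternative bases, so the number
# of distinct single-nucleotide variants is 3 x genome length.
possible_variants = 3 * genome_length_nt
copies_per_variant = virions_per_day * errors_per_genome / possible_variants
# On average, each specific single-nucleotide variant arises tens of
# millions of times per day, so every possible substitution is sampled daily.
```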

Not all Resistance is Created Equal

If all resistance-associated substitutions are generated every day, it is perhaps surprising that DAA therapy works at all. A number of factors limit the effect of resistant variants (Table 1).2,3 The first is the genetic barrier to resistance. For some DAAs, a single point mutation leads to high-level resistance; for others, two or more substitutions are required. The more substitutions required, the greater the genetic barrier to resistance and the less likely resistance is to occur. In addition to the genetic barrier, the replicative fitness of the resistant variant is also relevant. If the substitution conferring drug resistance also markedly impairs the ability of the virus to replicate, variants carrying it will replicate extremely poorly, making them hard to detect, and they will quickly be outgrown by wild-type virus in the absence of drug treatment.

The best example of this scenario is the substitution that leads to sofosbuvir resistance (S282T). This substitution in the viral polymerase prevents sofosbuvir from binding to HCV, but it also almost completely incapacitates the virus' ability to replicate. As a result, even though variants with the S282T substitution arise as frequently as any other single substitution, they grow so poorly that they are effectively never found in patients before sofosbuvir exposure; even in patients who relapse after a course of sofosbuvir, S282T has only rarely been identified, and when present it quickly disappears.4,5 It is the marked fitness impairment of the S282T variant that makes sofosbuvir, and other agents of its class (nucleotide polymerase inhibitors), 'special' in the sense that resistance is not really an issue: patients can be retreated with sofosbuvir or another member of this class (none yet approved) after failing a previous course of therapy with this agent.

Fitness of resistant variants varies markedly by DAA class. Variants resistant to non-structural 5A (NS5A) inhibitors are highly fit and can effectively compete with wild-type virus. This has two important consequences. First, variants resistant to these agents are more frequently found at baseline, even in patients who have never been treated before; second, if they emerge after a failed attempt at treatment, they will persist long-term, affecting options for future therapy. The other classes of DAAs fall between the extremes of the highly fit NS5A inhibitor-resistant variants and the extremely unfit sofosbuvir-resistant variants. Variants resistant to non-nucleoside polymerase inhibitors (NNIs) are generally fit and behave more like NS5A inhibitor-resistant variants, whereas protease inhibitor (PI)-resistant variants are relatively unfit and are thus uncommon at baseline and do not persist long-term when they emerge after treatment.6

Beyond genetic barrier and fitness, the degree and the prevalence of resistance also affect the impact of detected variants. Certain substitutions lead to high-level resistance, making the virus 100- or even 1000-fold less susceptible, while others reduce susceptibility only 2- to 5-fold. In addition, resistant variants may be very rare within the quasispecies population or very frequent. Most studies suggest that unless a variant is present in at least 15% of the quasispecies population, it is unlikely to be clinically significant.7 It is important to be aware of these details because the apparent impact of resistance can easily be manipulated through selective reporting.
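As a concrete illustration of that reporting threshold, the hypothetical sketch below filters a deep-sequencing readout down to the variants a clinically oriented report would flag. The variant names and frequencies are invented for the example; the 15% cutoff follows the studies cited above.

```python
# Hypothetical quasispecies report filter. The variant names and their
# frequencies below are invented; the 15% cutoff follows the threshold
# discussed in the text.
CLINICAL_THRESHOLD = 0.15  # minimum within-patient frequency to report

def clinically_significant(variant_freqs, threshold=CLINICAL_THRESHOLD):
    """Keep only substitutions whose frequency meets the reporting threshold."""
    return {v: f for v, f in variant_freqs.items() if f >= threshold}

# Example sequencing readout for one hypothetical patient:
freqs = {"Y93H": 0.42, "L31M": 0.08, "Q30R": 0.02}
print(clinically_significant(freqs))  # {'Y93H': 0.42} -- only Y93H crosses 15%
```

Reporting the sub-threshold variants as well would triple the apparent number of "resistant" findings in this example without changing the clinical picture, which is exactly the kind of selective-reporting distortion described above.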

Terminology

There has been some debate in the literature about the preferred term to describe resistance. The original term proposed was resistance-associated variant or RAV. However, the appropriateness of this term has been challenged because a variant is either resistant or it is not (i.e. it replicates better than wild-type virus in the presence of drug or it does not). It is the substitution in the sequence that is associated with resistance and as such, the term resistance-associated substitution or RAS has largely supplanted RAV. Other terms including resistance-associated polymorphism (RAP) or resistance-associated mutation (RAM) have also been proposed.3 This is largely a semantic argument and for clinical purposes, the terms are interchangeable.

When is Resistance Testing Useful?

In order for any test to be clinically useful, outcomes should differ based on the result. If the outcome is the same whether the test is positive or negative, there is no point in doing the test. For most clinical scenarios, baseline HCV resistance testing is not useful because it has no meaningful impact on management. In such cases, it is best not to do the testing, as it only leads to confusion.

In fact, based on data available to date, baseline resistance testing is only relevant for certain patients with genotype 1a infection and for patients with genotype 3 infection and cirrhosis (Table 2). For all other patients, existing data do not support resistance testing.

Elbasvir/Grazoprevir

The importance of detecting substitutions associated with elbasvir resistance in patients receiving elbasvir/grazoprevir in different contexts highlights why the details matter. For example, the SVR12 rate in patients with genotype 1b infection treated with elbasvir/grazoprevir is 98% in those with baseline substitutions associated with elbasvir resistance, compared to 100% in those without any substitutions.8 Clearly, patients with genotype 1b respond very well to this regimen irrespective of the presence of elbasvir resistance, and as such, no testing is warranted. In contrast, in those with genotype 1a infection treated with the same regimen, SVR12 rates drop to 58% in those with elbasvir-specific resistance-associated substitutions, compared to 98% in those without baseline resistance. Clearly there is a large impact of resistance in genotype 1a, and thus baseline testing is warranted. Furthermore, simple alterations in the treatment regimen can overcome the effect of resistance. Although the numbers are small, available data suggest that with extension of therapy from 12 to 16 weeks and the addition of ribavirin, SVR12 rates increase from 53% to 100% in genotype 1a patients with elbasvir resistance at baseline.9

Sofosbuvir-Ledipasvir

Data regarding the importance of baseline resistance on outcomes with sofosbuvir/ledipasvir therapy illustrate how the details matter for correct interpretation. Initial reports claimed that the presence of baseline resistance-associated substitutions had no effect on the response to sofosbuvir/ledipasvir in clinical trials. This conclusion was based on the observation that among genotype 1 patients, 25% had baseline substitutions associated with NS5A resistance, yet the SVR12 rate of 95% in this group was similar to the 98.5% in those without baseline resistance. However, as illustrated in Figure 1, the importance of baseline resistance increases markedly as relevant sub-populations are considered and low-level or low-frequency variants that dilute the apparent effect are removed from consideration. Similar to elbasvir/grazoprevir, with sofosbuvir/ledipasvir resistance has minimal or no effect in genotype 1b, meaning that resistance should be reported by subtype, not for genotype 1 as a whole. In addition, only ledipasvir-specific substitutions, not all NS5A substitutions, and only those present in at least 15% of the quasispecies population, should be reported. Among treatment-experienced patients with genotype 1a infection, only 6.5% harboured ledipasvir-specific resistance-associated substitutions at the 15% threshold, but only 76% of these patients achieved SVR12, compared to 98% among those without detectable resistance.10 Based on a careful review of the data, baseline resistance testing is warranted in patients who previously failed interferon-based therapy and have genotype 1a infection, with a plan to add ribavirin in those in whom resistance is found. It may also be helpful in treatment-naïve patients with cirrhosis who will receive this regimen, as the difference in outcome is smaller but the consequences of failure are potentially greater.

Other Regimens for Genotype 1

With only 2 virological failures out of 624 patients in the ASTRAL 1 study of sofosbuvir/velpatasvir, resistance does not appear to have an effect and testing is not required before using this regimen for patients with genotype 1.11

Data suggest that resistance-associated substitutions may affect outcomes with paritaprevir/r/ombitasvir plus dasabuvir, but again only in patients with genotype 1a infection. However, for genotype 1a infection, ribavirin is recommended with this regimen for all patients, and with this approach SVR12 rates do not differ by the presence of resistance-associated substitutions. Whether a subset of genotype 1a patients without baseline resistance could be identified who do not require ribavirin remains to be seen.12

A frequent polymorphism at position 80 (Q80K) is associated with treatment failure with simeprevir. The effect was quite significant when simeprevir was combined with peginterferon and ribavirin. When simeprevir was combined with sofosbuvir, the effect of Q80K was notable only in patients with cirrhosis. As such, patients with cirrhosis and genotype 1a infection scheduled to receive this regimen should have Q80K testing done and, if positive, should consider an alternative therapy.7

Genotype 3 Cirrhosis

The preferred therapy for patients with genotype 3 infection and cirrhosis is sofosbuvir/velpatasvir for 12 weeks. Although the results with this regimen are much improved over previous options, in patients with cirrhosis SVR rates were 91%, compared to 97% in those without cirrhosis.13 Closer inspection revealed that SVR rates fell to 88% in patients with baseline NS5A inhibitor resistance-associated substitutions, compared to 97% in those without these baseline substitutions. A similar effect was not seen in the sofosbuvir/velpatasvir arm of the recent POLARIS 3 study, with no obvious explanation for the difference between the two studies. Currently, international guidelines recommend NS5A inhibitor resistance testing for patients with genotype 3 and cirrhosis.7,14 If resistance is found, the addition of ribavirin is recommended, based on extrapolation from the ASTRAL 4 study of patients with decompensated cirrhosis, in whom those who received ribavirin had the highest SVR rates.15

Resistance Testing After Treatment Failure

The AASLD guidelines recommend resistance testing in all patients prior to retreatment after a failed course of DAAs.7 Importantly, they also caution that the strategies to overcome resistance with current regimens have not been validated in the retreatment setting. However, numerous 'salvage' regimens with proven efficacy for retreatment are in late-stage development.16 Although the detailed resistance data from these salvage studies have not yet been made publicly available, the very high SVR rates despite a very high frequency of detectable resistance-associated substitutions at baseline suggest that most patients will not need baseline resistance testing prior to their use. However, it will be important to scrutinize the data carefully to avoid drawing incorrect conclusions about the value of resistance testing, as was originally done with sofosbuvir/ledipasvir. If specific substitutions are shown to be associated with failure, testing may be warranted. Fortunately, retreatment of HCV is rarely an emergency. As such, most patients would likely be better served by waiting for one of the coming therapies than by retreatment with one of the existing approved regimens.

The Counter-Argument to Testing

Some argue that baseline resistance testing is not necessary even for the specific populations mentioned, because patients with baseline resistance-associated substitutions who will not respond represent a relatively small percentage of the overall population, and testing may be a barrier to treatment access or uptake. Other arguments against testing include cost, limited access in some regions and hard-to-interpret reports.3 While the effect may not be large at the population level, for the individual patient the presence of baseline resistance may have a major effect on the chance of SVR. The cost of resistance testing is generally fairly low, particularly if centralized at a reference center. Given the high cost of the therapies, even infrequent improvements in treatment approach or a small absolute effect on SVR would still pay for the cost of most, if not all, testing.

Ideally, resistance reports should be improved. Currently most HCV resistance reports follow the approach used for antibiotic and/or HIV resistance. However, unlike in other disease areas, DAAs for HCV cannot be easily mixed and matched. HCV regimens are studied as combinations, and particularly with the increasing use of fixed-dose combination pills, it is difficult, if not impossible, to use, for example, a protease inhibitor from one company with the NS5A inhibitor from another. As such, resistance reports should give guidance on how to use an overall regimen in the context of the resistance profile observed (e.g., extend to 16 weeks and add ribavirin when an elbasvir resistance-associated substitution is detected). Improved reports would make it much easier for clinicians with limited virological experience to enter the HCV field and would reduce the reluctance of experienced clinicians to order resistance testing.

CONCLUSION

Resistance has added a layer of complexity to HCV management. If properly interpreted, baseline resistance data can add significant clinical value for specific patient populations. Hopefully standardized reporting in the literature will more clearly define the importance of resistance in studies and improved clinical reports will make resistance testing easier to use clinically. Fortunately, this is likely a temporary situation. Future regimens are unlikely to require baseline resistance testing and may not even require testing before retreatment. Until these new regimens arrive, resistance testing has a relatively small but potentially important role.


Inflammatory Bowel Disease: A Practical Approach, Series #102

Fecal Microbiota Transplantation in the Elderly – A Need for Early Consideration in Select Cases of Clostridium difficile Infection


Fecal microbiota transplantation (FMT) is recommended treatment for recurrent Clostridium difficile infection (CDI). There is also increasing evidence that FMT is effective in severe and severe-complicated CDI and in averting CDI-related complications such as colectomy and mortality. In this article, we explore the role of FMT in elderly patients with CDI and other gastrointestinal diseases. It may be reasonable to offer FMT earlier in the CDI disease course in older individuals, possibly after just the second recurrence and/or for the first episode of severe CDI to halt disease progression and prevent development of associated complications.

Yao-Wen Cheng, Resident, Department of Medicine, Indiana University School of Medicine; Monika Fischer, Associate Professor, Division of Gastroenterology, Department of Medicine, Indiana University School of Medicine, Indianapolis, IN

INTRODUCTION

Elderly patients (age ≥65 years) are considered a unique treatment population due to decreased physiological, immunologic, and cognitive reserve, while also shouldering a greater number of comorbidities and medications compared to their younger counterparts.1,2 These factors make elderly patients not only more susceptible to disease, but also less tolerant of aggressive therapies. Additional considerations, such as age-related impairments in hepatic metabolism and renal clearance of medications,3 the Beers Criteria of medication contraindications in nursing home residents,4 and the heterogeneous spectrum of elderly health status ranging from fit to frail,5 introduce multiple levels of complexity when treating disease in this unique population.

Over the past decade, the gut microbiome has risen to prominence after researchers realized its role in pathogen resistance, immunomodulation, epithelial cell propagation and nutrient metabolism.6 In parallel, researchers have pursued the allure of manipulating the gut microbiome via fecal microbiota transplantation (FMT), a procedure in which healthy fecal material is transferred into the diseased gut. The intent of FMT is to restore healthy microbial communities, thereby alleviating disease that may have resulted from a dysbiotic gut microbiota.

The purpose of this article is to explore current literature supporting the use of FMT for treatment of various diseases with a particular emphasis on its role in elderly patients.

Clostridium difficile Infection

Clostridium difficile infection (CDI) disproportionately affects the elderly population, causing a greater incidence of first-time and recurrent CDI compared to younger patients7,8 as well as an elevated risk of progression to severe and/or complicated CDI.9 The combination of these factors culminates in particularly poor outcomes for elderly patients, who represented 93% of CDI-associated deaths in 2008,10 92% of CDI-related US hospital admissions in 2009,11 and had higher odds of CDI-related colectomy (OR 1.9)12 compared to patients <65 years of age.

There are likely multiple components contributing to elderly morbidity and mortality from CDI. Elderly patients may be more prone to Clostridium difficile colonization because of greater fluctuation in gut microbiome composition, driven by immunosenescence and alterations in gut transit time.13,14 Furthermore, they have higher rates of exposure to CDI risk factors such as antibiotics, healthcare facilities, chronic kidney disease, and multiple co-morbidities.15 Risk of healthcare-associated CDI increases by 2% for each year of age.16

Recurrent CDI

The use of FMT is best described as treatment for Clostridium difficile infection. Healthy colonies of bacteria such as Bacteroides and Firmicutes are transferred via FMT into the diseased gut and re-establish microbiome diversity, thereby suppressing colonization by Clostridium difficile.17,18 A single FMT is effective at treating recurrent CDI (RCDI) at a rate approaching 90%,19 and is currently recommended by the American College of Gastroenterology (ACG).20 More importantly, FMT is superior to traditional antimicrobials at inducing durable cure. Subsequent CDI recurrence after treatment with FMT is lower (5-15%)21,22 than after the traditional antimicrobials vancomycin (35-65%)23 and fidaxomicin (25%).24

In studies focusing solely on elderly patients, rates of successful RCDI treatment with FMT have been quite comparable to those reported in the general population. One review article found an 89.6% cure rate among 115 cases of RCDI treated with FMT in the elderly.25 In another systematic review, patients ≥65 years of age were compared to patients <65 years and found to have inferior response rates to FMT:26 primary cure for RCDI was 87% versus 99.4%, while CDI recurrence within 90 days was 4.9% versus 0.1%, respectively. These results suggest that the post-FMT clinical course of elderly patients should be followed closely for signs of recurrence. A significant fraction of patients may require repeat FMT or subsequent medical therapy for adequate treatment.

Severe and Severe-Complicated CDI

Beyond RCDI, there remains a need to introduce new and effective modalities for the treatment of severe and/or complicated CDI (SCCDI). Colectomy is currently standard therapy, particularly for refractory cases of SCCDI. However, post-surgical mortality has remained close to 50% over the last decade despite new surgical techniques and prediction models for poor surgical outcomes.27-30 Several convincing studies suggest that FMT may adequately treat SCCDI31-34 and also decrease rates of colectomy.35

Only one study, published by Agrawal and colleagues, has focused on elderly patients.31 In their cohort of 146 patients, 30.8% had severe CDI, 8.2% had severe-complicated CDI, and the remainder had RCDI. The overall primary cure rate was 82.9% (91% for severe and 66% for severe-complicated CDI), which improved to 95.9% after subsequent vancomycin or repeat FMT infusion. Only six patients in the study reported a serious adverse event, which consisted almost exclusively of recurrent diarrhea requiring repeat intervention or hospitalization. Notably, 69.2% of patients reported an improvement in their functional status after FMT.

Since 2013, our center has utilized a sequential FMT protocol with selective use of vancomycin.33 Compared to the 66% cure rate after a single FMT,31 our sequential FMT protocol has a cure rate of 87% for the treatment of severe-complicated CDI.34 Furthermore, since inception of our inpatient sequential FMT program, our center has seen a decrease in CDI-related mortality from 10.2% to 4.5% (p=0.021) among patients with SCCDI, and from 43.2% to 12.1% (p<0.001) in the subgroup of SCCDI patients who did not respond to 5 days of optimal medical therapy (medically refractory). CDI-related colectomy has also decreased, from 6.8% to 2.7% (p=0.042) in SCCDI and from 31.8% to 7.6% (p=0.001) in the medically refractory subgroup [unpublished data].

FMT has an emerging role in the treatment of SCCDI. Its acceptance as a therapeutic modality in elderly patients is particularly important because it can avert the need for colectomy. Furthermore, it can serve as an alternative for patients regarded as non-surgical candidates due to age-related frailty and/or co-morbidities. In instances where FMT is only partially effective, it can also serve as adjunct medical therapy to stabilize patients before surgery, which has been shown to be associated with better surgical outcomes.29

Inflammatory Bowel Disease

Inflammatory bowel disease (IBD) is similar to CDI in that both diseases are characterized by a dysbiotic microbiome.36 However, it is unclear whether dysbiosis is the result of an inappropriate host immune response to normal gut flora or a proper response to an abnormal microbiome. Almost 15% of IBD cases arise in patients ≥65 years of age, coinciding with the secondary peak of IBD incidence in the general population.37 Cases of ulcerative colitis (UC) tend to be more severe at time of diagnosis in elderly patients.38,39 However, for both UC and Crohn's disease (CD), the clinical course in elderly patients tends to be less aggressive compared to younger patients,40-43 with fewer relapses and hospitalizations, particularly in UC.40,44 Though the overall disease course of IBD may be favorable in elderly patients, outcomes during IBD-related hospitalizations are not. Higher rates of gastrointestinal bleeding, anemia, hypovolemia, electrolyte disturbance, and malnutrition among the elderly lead to greater in-hospital mortality (OR 3.91) and longer post-colectomy length of stay (by 1.73 days) compared to patients <65 years.45

Poor outcomes among elderly patients with IBD may partially be explained by practical differences in the therapy used for IBD suppression. Underutilization of biologic therapy and immunomodulators is well-described, likely driven by the perception that elderly patients are at higher risk of infection and malignancy,46 as well as drug interactions from polypharmacy.47,48

The role of FMT as a "natural" means of restoring the normal balance of gut microbiota in patients with IBD has thus piqued the interest of researchers, particularly with elderly patients in mind. Unfortunately, cohort studies of FMT for CD treatment have conflicting results,49,50 while several randomized controlled trials (RCTs) involving UC patients of all ages have not been convincing. In two RCTs, UC patients who underwent FMT with stool sourced from a healthy donor responded better than those who received placebo, with 25-27% achieving remission.51,52 A third RCT found no significant difference between groups of UC patients receiving FMT with healthy-donor versus autologous stool.53

The role of FMT in the treatment of IBD is unclear at this time. Additionally, the rigorous treatment regimens utilized by the aforementioned studies (daily self-administered FMT enemas for multiple weeks) would not be feasible for many of our elderly patients.

Clostridium difficile infection in IBD patients

CDI in patients with underlying IBD (CDI-IBD) is another noteworthy group that could benefit from treatment with FMT. Among all age groups, the likelihood of IBD patients contracting CDI is 2.5 to 8-fold higher than the general population.54-56 Moreover, mortality and colectomy rates are much higher for hospitalized CDI-IBD patients compared to patients with solely IBD or CDI.57-59

Successful treatment of CDI with FMT in IBD patients is close to 90%, similar to that of non-IBD patients.60-62 However, there is concern that patients can have an unpredictable IBD clinical course post-FMT. Fischer and colleagues found that 17.9% of patients had worsening IBD activity after treatment of CDI. A serious adverse event occurred in 12% of this study group, though none was directly related to the FMT itself.62

The literature is currently devoid of studies on this topic, but it would not be unreasonable to expect worse outcomes among elderly patients with CDI-IBD. Previous comparisons to younger patients demonstrated higher rates of colectomy and mortality when elderly patients are hospitalized with IBD or CDI alone. Early intervention with FMT could attenuate progression to severe-complicated CDI, thereby improving outcomes. However, a possible IBD flare after FMT could itself lead to colectomy and/or mortality. In elderly patients with CDI-IBD, aggressive anti-CDI medical therapy should be balanced with selective use of FMT after a thorough discussion with the patient about risks and benefits.

FMT Applications on the Horizon

The impact of the gut microbiome on host metabolism, immunogenicity, and neuro-hormonal responses has opened multiple avenues of research for the application of FMT.63,64 Larger case series have identified a potential for FMT in the treatment of pouchitis65-67 and irritable bowel syndrome.68,69 Further targets of treatment via FMT have only progressed to the level of case reports: hepatic encephalopathy,70 acute graft-versus-host disease,71 multiple sclerosis,72 and chronic fatigue syndrome.73

As it pertains to the elderly population, researchers have theorized that the gut microbiome may have a role in the development of Alzheimer's disease. Disturbances to the elderly microbiome caused by immunosenescence or external factors such as antibiotics allow colonization by fungi and bacteria capable of secreting amyloids and lipopolysaccharides into their environment,74-76 which induce host inflammation and gut permeability.77,78 Researchers hypothesize that the leaky gut could allow amyloids to reach the brain, where they could polymerize into the beta-pleated sheets characteristic of Alzheimer's disease.74 Alternatively, proinflammatory bacterial colonizers may induce cytokines and reactive oxygen species contributing to neurodegeneration.78-80 Further studies are needed to validate these theories.

Safety and Delivery of FMT

FMT is delivered by various methods, including nasoduodenal/nasojejunal tube, esophagogastroduodenoscopy, pill, and enema, though colonoscopy is the most widely practiced. There are multiple benefits of colonoscopy, including targeted delivery of fecal material, direct visualization of the colonic mucosa, and the ability to rule out alternate etiologies such as IBD, ischemia, microscopic colitis, or malignancy.81 Importantly, there is strong evidence that treatment of CDI is superior when FMT is delivered via lower rather than upper GI modalities. In a cohort of over 2,000 patients who received FMT for recurrent, severe, and/or refractory CDI, the cure rate was 85.8% via colonoscopy versus 74.1% with an upper GI route (p<0.01).82

Previous studies have found increased risk of perforation in elderly patients after diagnostic colonoscopies. In one study, the incidence of perforation increased by age group: 0.026% for age 50-64, 0.087% for 65-79, and 0.317% for ≥80 years.83 Another study placed the odds of perforation at 1.33 when patients >65 were compared to those <65 years.84

Endoscopic injury and sedation-related aspiration events appear to be negligible after FMT, though no direct comparison to diagnostic colonoscopies or among age groups has been published. One review found that among 1,555 FMTs only 4 resulted in a direct procedural complication (mucosal tear or perforation), and 3 were associated with deaths that could not be directly attributed to the FMT itself.85 Another review, comprising 1,089 FMTs, described adverse events in 17.7% of lower GI versus 43.6% of upper GI FMTs; severe adverse events, defined as procedural complications leading to death or hospitalization, were found in 6.1% of lower GI versus 2.0% of upper GI FMTs.86 In most cases, side effects from FMT were mild, self-limited, and confined to the gastrointestinal tract. In a study comprising exclusively 146 elderly patients, treatment of recurrent, severe, and/or complicated CDI with FMT resulted in only 6 serious adverse events, most of which were severe, recurrent diarrhea requiring hospital admission.31 The authors suggest that elderly age may be a relative contraindication to upper GI delivery of FMT due to risk of aspiration and small intestinal bacterial overgrowth.

CONCLUSION

The adage of "start low and go slow" for initiation of therapeutics in the elderly may not apply to FMT for the treatment of CDI, where rates of cure and of CDI relapse are superior to those of traditional antimicrobials. Elderly patients may also appreciate the benefit of avoiding colectomy, or of having an alternative therapy when they are non-surgical candidates due to age or co-morbidities. It may be reasonable to offer FMT earlier in the CDI disease course, possibly after just the second recurrence and/or the first episode of severe CDI, to decrease progression of CDI severity and development of associated complications.

There is insufficient evidence to suggest that FMT for the sole purpose of treating IBD is beneficial. However, when elderly patients have CDI-IBD, clinicians will need to balance the benefit of FMT for treating CDI and its potential to induce an IBD flare.
