LIVER DISORDERS, SERIES #3

Hepatocellular Carcinoma



Nitin Sardana MD, Elie Aoun MD, Division of Gastroenterology and Hepatology, Allegheny Health Network, Allegheny General Hospital, Pittsburgh, PA

Hepatocellular carcinoma (HCC) is the second most common cause of cancer-related death worldwide. HCC typically arises in the setting of underlying cirrhosis, most often secondary to viral hepatitis, which is a common risk factor. While direct-acting antivirals may reduce the development of hepatitis C-related HCC in upcoming years, the increasing incidence of nonalcoholic fatty liver disease (NAFLD) makes the trend of HCC difficult to predict. Surveillance for HCC should be performed in high-risk patients. The diagnosis of HCC can be made radiographically in the majority of cases. There are many factors that determine the appropriate treatment for patients with HCC, including underlying liver disease, size, number of lesions, vascular involvement and extravascular disease. The treatment of early-stage and advanced disease is fairly clear. Further studies are needed to help refine the best treatment options in patients with early-, intermediate-, and late-stage disease who are neither transplant candidates nor candidates for resection.

INTRODUCTION

Primary liver cancer is the fifth most common cancer in men and ninth most common in women, accounting for 554,000 and 228,000 cases, respectively.1 Hepatocellular carcinoma (HCC) accounts for over 90% of all primary liver cancers; in most areas, the incidence of primary liver cancer is therefore a close approximation of the incidence of HCC.2 While the highest rates of liver cancer are in Africa and Asia, the incidence in the United States is also alarming.1 There will be an estimated 33,190 new cases of liver cancer diagnosed in the United States this year alone, with approximately 23,000 deaths.3 The incidence of HCC in the United States varies with age, gender and race; it is highest in those older than 65 years, males, Asians and Pacific Islanders.4 The median age at diagnosis is 63 years.3 Historically, the incidence of HCC has continued to climb through the years, as has the mortality. Although mortality rates continued to increase in the United States between 2007 and 2010, there was no significant increase in the incidence of HCC.4 With current screening guidelines as well as improved and well-tolerated treatments for hepatitis C now available, there is hope that both the incidence of and mortality from HCC will decrease.

Risk Factors

HCC usually develops in the background of underlying cirrhosis.5 The risk of developing HCC in patients with cirrhosis varies with the etiology of the cirrhosis. These etiologies include hepatitis C, hepatitis B, hereditary hemochromatosis, Wilson's disease, autoimmune hepatitis, alpha-1 antitrypsin deficiency, primary biliary cirrhosis (PBC), alcohol and non-alcoholic steatohepatitis-induced cirrhosis. Furthermore, risk factors specific to these etiologies can raise the incidence of HCC even higher.

Overall, the 5-year cumulative risk of developing HCC ranges from 4% in patients with primary biliary cirrhosis to 30% in those with cirrhosis from chronic hepatitis C infection.6 The stage of cirrhosis also correlates with the risk of developing HCC.6 There is an estimated 4-fold increased risk of developing HCC in those with cirrhosis as compared to those with chronic hepatitis.6 Approximately 80% of HCCs are attributable to chronic viral hepatitis.7 Even in those patients with HCC in whom cirrhosis is not present, there are typically histologic changes consistent with underlying liver disease including steatosis, varying degrees of fibrosis, dysplasia or iron overload.8

Patients with hepatitis C-induced cirrhosis have the highest 5-year cumulative risk of developing HCC.6 This incidence varies with geography: the 5-year cumulative risk is 30% in Japan and 17% in the United States and Europe.6 Among those with hepatitis C-induced cirrhosis, a subset of patients is at higher risk than others. Patients with hepatitis C-induced cirrhosis who are male, older at diagnosis or at time of infection, or who have elevated bilirubin, decreased platelets, esophageal varices, or physical examination findings of palmar erythema or spider angiomata have an increased risk of developing HCC.9,10 There is also an increased incidence of HCC in patients with hepatitis C-induced cirrhosis and the comorbid conditions of porphyria cutanea tarda, hepatic steatosis, hepatitis B and alcohol use (>60 g/day).11-14 Patients who are co-infected with hepatitis C and HIV tend to be diagnosed with HCC at an earlier age and sooner after their diagnosis of hepatitis C than those infected with HCV alone.15 There does not appear to be a significant difference in the incidence of HCC based on the genotype or viral load of hepatitis C.16

Similar to patients with hepatitis C, those with hepatitis B-induced cirrhosis are at increased risk for HCC, and there is geographic disparity: the 5-year cumulative incidence of HCC among patients with hepatitis B in East Asia is 15%, compared to 10% in Europe. Older age, degree of thrombocytopenia and liver firmness on physical examination are associated with an increased risk of developing HCC.17 The incidence of HCC is highest among those with hepatitis B-induced cirrhosis, lower with chronic hepatitis B and lowest in inactive carriers.6 Patients with occult hepatitis B (presence of HBV DNA in patients who are hepatitis B surface antigen-negative) are also at increased risk of developing HCC.18 While the risk associated with a high HBV DNA level or hepatitis B e-antigen positivity at time of diagnosis is unclear, the risk of developing HCC is lower for patients who clear hepatitis B surface antigen either spontaneously or with treatment.6,19 Co-infection of hepatitis B with hepatitis D increases the risk of developing HCC threefold.20 Aflatoxin exposure significantly increases the risk of HCC in patients with hepatitis B.21 As with hepatitis C, there is an increased risk of HCC in patients with hepatitis B who consume alcohol.14 Patients with alcohol-induced cirrhosis, even in the absence of chronic viral hepatitis, are also at increased risk of developing HCC, with a 5-year cumulative incidence of 8%.6 Alcohol may also have a direct carcinogenic effect on the liver and may lead to HCC even in the absence of cirrhosis.14

Patients with cirrhosis related to etiologies other than chronic viral hepatitis are also at increased risk for HCC, although for some of these etiologies the degree of risk is not as well defined. The risk of developing HCC in the setting of cryptogenic cirrhosis has been reported by Marrero et al to be as high as 29%.22 A significant proportion of these patients, however, may have underlying non-alcoholic fatty liver disease (NAFLD).22 In their study, NAFLD accounted for up to 13% of the patients with HCC.22 While this incidence may be an overestimate, with an estimated prevalence of 30-40% in the United States and 6-35% worldwide, NAFLD is an important risk factor in the development of HCC.23 There are data to suggest that the duration of cirrhosis may be longer in patients with NAFLD-related cirrhosis.24,25 There also appears to be an association between obesity and the risk of development of and death from HCC.26,27 The risk of developing HCC is twice as high among patients with diabetes.28

Even among the metabolic liver diseases, there is a wide range in the risk of HCC. While the risk of developing HCC among patients with Wilson's disease is extremely low, the 5-year cumulative incidence among patients with hereditary hemochromatosis is 21%.29 The risk of developing HCC in patients with alpha-1 antitrypsin deficiency increases only once they develop cirrhosis.30 Similarly, patients with primary biliary cirrhosis are at increased risk once they have advanced fibrosis.31 HCC is seen predominantly in men with PBC, and the overall 5-year cumulative incidence is 4%.31 Although congestive hepatic fibrosis from cardiac disease is not thought to be a typical risk factor for the development of HCC, cases have been reported in the literature.32 Coffee consumption is one well-studied association that actually decreases the risk of HCC.33 Recognizing the risk factors for the development of HCC is paramount to the surveillance, prevention and early recognition of the disease and to improved outcomes.

Surveillance

The decrease in mortality with surveillance, the noninvasive means of testing and the difference in outcomes between early and late detection are key reasons why surveillance for HCC is recommended in high-risk patients by the American Association for the Study of Liver Diseases (AASLD).34 It has been reported that biannual alpha fetoprotein (AFP) testing and ultrasound imaging decrease the mortality from HCC by 37% in patients with past or present hepatitis B infection.35 The benefit of surveillance is also supported by data showing the poor prognosis in patients in whom the diagnosis is made only after they become symptomatic.36

The two most well-studied serologic markers for detection of HCC are AFP and des-gamma-carboxy prothrombin (DCP), also known as prothrombin induced by vitamin K absence II (PIVKA II). The difficulty in supporting AFP as a screening test is that, depending on the cut-off level, the sensitivity or specificity may be suboptimal. A case-control study evaluating its efficacy in diagnosing HCC showed that, at a cut-off of 20 ng/mL, its sensitivity was approximately 60% and its positive predictive value was only 25.1% at a 5% tumor prevalence.37 Raising the cut-off decreases the sensitivity even further. There had also been some hope that PIVKA II could be an adequate serologic test for the surveillance of HCC. However, data show that while it may be a useful diagnostic test, its role in surveillance is limited.38 The sensitivity of PIVKA II was 74% at a cutoff of 40 mAU/mL.39 While the combination of AFP and PIVKA II increases the sensitivity in the diagnosis of HCC, the specificity is still only 74%.39 Ultimately, neither AFP, PIVKA II nor the combination of the two can be recommended universally for the surveillance of HCC.39 Groups continue to evaluate these markers in specific subsets of patients. For example, a recent study showed that the combination of PIVKA II and AFP may aid in the early detection of HCC in patients with hepatitis B.40 If further studies support this, guidelines may endorse its use. In the future, there may also be a role for various novel biomarkers to measure response to therapies.41

In addition to serologic tests, radiologic testing has been studied for surveillance of HCC. Ultrasound is the test of choice and is recommended by the AASLD for HCC surveillance.34 Advantages of ultrasound include the absence of radiation, its ease of accessibility and its relatively low cost compared to other imaging modalities. Its disadvantages are that it is operator dependent, its sensitivity is likely decreased in obese patients, and other lesions in the background of cirrhosis can have an overlapping ultrasonographic appearance.42 Accordingly, the sensitivity of ultrasound has been reported to be between 65% and 80%.43 Computed tomography (CT) and magnetic resonance imaging (MRI) are not appropriate imaging modalities for surveillance because of the radiation exposure, the risks of contrast or gadolinium administration and the cost. Imaging with CT may be considered in obese patients in whom ultrasound is non-diagnostic.

Ideally, surveillance could be performed with a combination of serologic and radiologic tests. Unfortunately, the data do not support this. In a study of more than 9,000 patients with hepatitis B in China, surveillance was performed with ultrasound and AFP.44 The false positive rate was 7.5% when the two modalities were combined. Ultimately, despite its limitations, ultrasound has a high specificity and thus remains a more appropriate test than the current serologic markers in the surveillance for HCC.45

The AASLD recommends surveillance for HCC in high-risk patients every six months.34 This recommendation is based on a study that showed a survival benefit of semiannual over annual surveillance in patients with hepatitis B.46 In an attempt to simplify guidelines, the AASLD generalized these findings to all high-risk patients.34 Supporting the AASLD guidelines, a more recent study showed that smaller, less advanced tumors were detected, and that patients survived longer, when surveillance was performed every six months rather than every twelve months.47

Surveillance for HCC is recommended for patients at increased risk. Based on cost-effectiveness models, the AASLD considers surveillance cost-effective in patients with hepatitis B if the expected HCC risk is greater than 0.2% per year, and in patients with hepatitis C if the risk of developing HCC is greater than 1.5% per year,34 the difference likely owing to the varying prevalence between the two etiologies. As previously mentioned, the incidence of HCC in etiologies of cirrhosis other than chronic viral hepatitis has not been clearly defined. While the guidelines do not make clear recommendations for many of these populations, it is reasonable to perform surveillance for HCC in any patient with cirrhosis until further data suggest otherwise.

There is no role for surveillance for HCC in patients with hepatitis C who do not have cirrhosis. A large prospective study of 12,000 men in Taiwan showed a significant increase in the risk of developing HCC in patients with hepatitis C, although the data should be interpreted with caution as the study included both cirrhotics and non-cirrhotics.48 Another prospective study of approximately 1,000 patients estimated the risk of developing HCC in patients with hepatitis C who are not cirrhotic to be 0.8% per year.49 The AASLD deemed it cost-effective to screen for HCC in patients with hepatitis C without cirrhosis only if the annual incidence were >1.5% per year. Thus, at this time, the evidence does not support surveillance for HCC in patients with hepatitis C who are not cirrhotic.

There are data, however, to support surveillance for HCC in certain subsets of patients with hepatitis B. Cost-effectiveness analysis favors surveillance in patients with hepatitis B whose risk of developing HCC is >0.2% per year.34 Some of the risk factors for HCC in patients with hepatitis B have been described above, and the presence or absence of these risk factors aids in risk stratification. As in patients with hepatitis C, patients with hepatitis B and cirrhosis are at the highest risk of developing HCC and should undergo surveillance. The AASLD also recommends surveillance for adult Caucasian patients with active hepatitis B without cirrhosis, Asian male hepatitis B carriers older than 40 years, Asian female carriers older than 50 years, hepatitis B carriers with a family history of HCC and African/North American blacks with hepatitis B.34 Co-infection with hepatitis C increases the risk for HCC, though there are no guidelines regarding surveillance in this population.

Diagnosis

Unlike many other solid organ tumors, HCC can in many situations be diagnosed with a high degree of accuracy based on imaging alone.50 Specifically, four-phase multidetector CT (unenhanced, arterial, venous and delayed phases) or dynamic contrast-enhanced MRI is used to diagnose HCC. Whether imaging alone is sufficient to diagnose HCC also depends on the size of the lesion, as the sensitivity and specificity of these modalities increase with increasing tumor size.51 Biopsy is generally avoided if possible, as a meta-analysis estimated the incidence of needle-tract tumor seeding to be 2.7%.52

If there is concordance between contrast-enhanced ultrasound and MRI, the diagnosis of HCC can be made in patients with cirrhosis even if the lesion is less than 2 cm.53 However, the contrast agents used for contrast-enhanced ultrasound are not FDA approved in the United States. A single contrast-enhanced study, however, appears to be sufficient to diagnose HCC in patients with cirrhosis who are found on surveillance to have nodules between 1-2 cm.54 Thus, for nodules >1 cm detected in patients at risk for HCC, the AASLD does not recommend any further diagnostic testing if there is arterial hypervascularity and venous or delayed phase washout on contrast-enhanced imaging (Figures 1 and 2).34 If there are atypical features on imaging, then either a different contrast-enhanced study or liver biopsy is recommended.34 For lesions less than 1 cm, which are likely to be cirrhotic nodules, the AASLD recommends serial imaging with ultrasound every three months.34

There are limited data showing that an AFP level of >200 ng/mL in non-African American patients with hepatitis C-related cirrhosis and a hepatic mass may be diagnostic of HCC.55 Previous data supported using this level as a cutoff to aid in the diagnosis of HCC in conjunction with imaging.56,57 Current data, however, suggest that the use of AFP does not provide additional benefit beyond imaging.34,39,43

Staging

Once the diagnosis has been established, the next step is to determine the stage of HCC, as treatment options vary with the stage of disease. There are multiple staging systems used for HCC, including the American Joint Committee on Cancer (AJCC) TNM system (last revised in 2010), the Okuda system, the Cancer of the Liver Italian Program (CLIP) score and the Barcelona Clinic Liver Cancer (BCLC) staging classification. Each of these scoring systems has its strengths and limitations.

The Okuda system, developed in 1985, uses tumor size, ascites, bilirubin and albumin to group patients into three stages (Table 1).58 As this system does not include important factors that would alter treatment, such as the presence of metastases or vascular involvement, it should not be used to make treatment decisions. It can, however, provide prognostic information.

The CLIP score includes Child-Pugh stage, tumor morphology (uninodular, multinodular and extension), AFP and portal vein thrombosis.59 The CLIP system appears to perform best among the staging systems in patients who underwent transarterial chemoembolization (TACE).60 It also appears to be simpler and more accurate than the Okuda classification.56

The AJCC TNM system was most recently updated in 2010. It accounts for the size of the tumor, the number of discrete lesions, the presence of vascular involvement, lymph node involvement and the presence of distant metastases.61 Despite its prognostic importance, the degree of fibrosis (Ishak classification) does not factor into the stage.62,63 A benefit of the AJCC TNM system is that its 6th edition has been validated in a cohort of patients who underwent liver transplantation, where it provided more accurate information regarding overall and recurrence-free survival than six other staging systems, including the CLIP score and the BCLC staging classification.64

The BCLC Group staging classification includes Okuda stage, extent of lesion, performance status, presence of constitutional symptoms, vascular invasion and extrahepatic spread (Table 2).65 Because of its ability to stratify patients into groups that would benefit from various treatments, the BCLC is the most commonly used staging system and is the staging system of choice based on the most recent AASLD guidelines.34

Treatment

Treatment options for HCC include surgical resection, liver transplantation, radiofrequency ablation (RFA), trans-arterial chemoembolization, radioembolization and systemic chemotherapeutic agents. The decision regarding the most appropriate therapy for a patient is based on their BCLC stage. The general principle of treatment is that more aggressive measures for earlier stage disease are used with the goal of providing curative therapy. Treatment of more advanced HCC is centered around palliation.


In years past, very few patients were diagnosed with very early stage HCC (BCLC 0, defined as a solitary, asymptomatic lesion with diameter <2 cm without metastases). Improved surveillance strategies, adherence to surveillance guidelines and improved diagnostic tools are likely to increase the detection of these very early stage HCCs. Surgical resection is currently recommended for patients with very early stage HCC and Child-Pugh A cirrhosis with a bilirubin <1 and no signs of portal hypertension. With surgical resection, the overall 5-year survival is between 70-90%.66,67 Even after resection, there is a small risk of recurrence.66,67 The presence of satellite lesions is an independent risk factor for survival and recurrence rate.66

Randomized controlled trials comparing surgical resection to RFA have shown no survival difference, even with lesions up to 5 cm in size.68,69 There are still conflicting data regarding whether RFA is a viable replacement for surgical resection in patients with very early stage HCC. A recently published study of 52 patients with very early stage HCC confirmed these findings by showing no difference in 1-, 3- and 5-year overall and tumor-free survival rates between surgical resection and RFA.70 However, another study of 237 patients showed that surgical resection provides better overall survival and recurrence-free survival than RFA.71 Given the conflicting data, surgical resection remains the standard of care as first-line therapy for very early stage HCC.34 If patients with very early stage disease have more advanced liver disease or are otherwise not surgical candidates, either RFA or liver transplantation should be considered.

Early-stage disease (BCLC A) comprises asymptomatic patients who are appropriate for resection, liver transplantation or percutaneous treatment.72 In patients who are surgical candidates, liver transplantation has been proven to be curative.73 In a landmark study by Mazzaferro and colleagues, 48 patients with cirrhosis who had either a single HCC <5 cm or up to three lesions each less than 3 cm in diameter underwent liver transplantation. The 4-year survival rate was 75%, and recurrence-free survival was 83%.73 This study became the basis of what is now known as the Milan criteria. In addition to these size restrictions, the Milan criteria also include the absence of vascular invasion and extrahepatic disease. Overall, the mean survival in patients with early-stage disease who underwent liver transplantation was shown to be 8.8 years, while patients who underwent surgical resection and RFA had median survivals of 4.3 years and 5.2 years, respectively.74 Unfortunately, due to the shortage of donor livers, transplantation is not always an option.

Intermediate-stage disease (BCLC B) includes asymptomatic patients with either a large or multinodular HCC and no evidence of vascular invasion or extrahepatic spread.72 The current guidelines support trans-arterial chemoembolization (TACE) as first-line therapy for patients in this group whose tumors are unresectable.34 This modality can also be used as a bridge to liver transplantation. Trans-arterial therapy takes advantage of the tumor's dependence on the hepatic artery for its blood supply and usually combines injection of a chemotherapeutic agent (suspended in lipiodol to expand exposure of tumor cells to the chemotherapy) with subsequent embolization of the hepatic artery.34 TACE is contraindicated in patients with vascular invasion due to the increased risk of ischemia. A study from 2002 was ended early when it showed a significant survival benefit for patients who underwent chemoembolization compared to patients who received symptomatic treatment.75 In this study, the survival probability at 2 years was 63%, compared to 27% in the control group. Patients with advanced (i.e., Child C) or decompensated cirrhosis are poor candidates for TACE, as liver failure is a potential risk of this treatment.76

Advanced-stage disease (BCLC C) includes both symptomatic and asymptomatic patients with vascular invasion and/or extrahepatic spread.72 Because of the poor prognosis, patients in this stage are candidates for palliative treatment only.72 Sorafenib is an oral multikinase inhibitor that targets, among others, the platelet-derived growth factor receptor and the vascular endothelial growth factor receptor.77 Mouse models show that it inhibits tumor growth and vascularization and induces tumor apoptosis and hypoxia.78 It has significantly changed the median time to progression of the disease and has prolonged the median survival by almost three months, from 7.9 months to 10.7 months.77 Patients with Child C cirrhosis, an Okuda score of 3 or an ECOG performance status >2 are defined as terminal stage (BCLC D) and do not benefit from additional therapy.


A CASE REPORT

Biliary Tubulopapillary Adenoma with Concurrent Biliary Stone Presenting with Pruritus and Obstructive Jaundice


Benign tumors of the biliary tract are a rare cause of obstructive jaundice. We report the case of a 75-year-old Caucasian male who presented with pruritus and obstructive jaundice and was noted to have a mass in the distal common bile duct (CBD) on computed tomography. Endoscopic retrograde cholangiopancreatography confirmed diffuse biliary dilation and a large filling defect. Balloon sweep resulted in the passage of small clumps of soft tissue in addition to stones. Positron emission tomography demonstrated no abnormal uptake. He underwent a Whipple procedure, during which a 3.5 cm pedunculated, polypoid mass was found in the CBD. Pathology revealed intraductal tubulopapillary adenoma with high-grade dysplasia and microscopic mucin.

INTRODUCTION

Biliary papillary tumors account for 11% of all biliary ductal tumors1 and have malignant potential.2 They can be intrahepatic or extrahepatic in location3-5 and may be associated with tubular adenomas elsewhere in the gastrointestinal tract.6 Several cases of biliary papillary tumors have been reported from the Far East,2,4,6-13 with few reported case series from the Western population.1,5 The term "biliary intraductal papillary tumor" has been used interchangeably with "intraductal adenoma."6,7,14 Here, we present a rare case of benign biliary intraductal tubulopapillary adenoma with a review of the literature on biliary intraductal papillary tumors.

CASE

A 75-year-old Caucasian male presented with a four-month history of pruritus and weight loss. He had a past medical history of stage II prostate cancer, in remission after treatment with hormone therapy. His abdominal examination was unremarkable. His liver enzymes revealed an alanine aminotransferase (ALT) of 127 IU/L, aspartate aminotransferase (AST) of 98 IU/L, alkaline phosphatase of 499 IU/L and a total bilirubin of 3.0 mg/dL. Computed tomography (CT) of the abdomen revealed a 3.4 cm enhancing mass in the distal common bile duct (CBD) with severe extra- and intrahepatic biliary dilation (Figure 1). Carbohydrate antigen 19-9 was 24.1 U/mL. Positron emission tomography (PET) demonstrated no abnormal uptake in the CBD. He underwent endoscopic retrograde cholangiopancreatography (ERCP), which showed diffuse biliary dilation and a large filling defect in the mid CBD with irregular ductal margins (Figure 2). Balloon sweep resulted in the passage of small clumps of soft tissue in addition to a large stone and sludge. Histology of the soft tissue suggested intraductal papillary neoplasm of the CBD. Endoscopic ultrasound (EUS) showed a localized CBD mass with possible malignant cells on fine needle aspirate. Subsequently, he underwent a Whipple procedure, during which a 3.5 cm pedunculated polypoid mass was found in the CBD.

The resection margins were clear, with benign periductal and peripancreatic lymph nodes. Pathology of the mass showed tubulopapillary adenoma with high-grade dysplasia of biliary and gastric epithelial subtypes and microscopic mucin (Figure 3). The patient has since recovered well from the surgery.

DISCUSSION

Most intraductal biliary tumors are malignant, with only 6% reported as benign.14 The World Health Organization (WHO) has classified biliary epithelial tumors as adenomas, with or without dysplasia, and carcinomas.15 Adenomas are further classified, based on their pattern of growth, as papillary, tubular or tubulopapillary. Biliary papillary tumors are composed of papillary proliferations of atypical biliary epithelium with delicate fibrovascular cores and present as a sessile or pedunculated polypoid mass within the bile duct.6,14 About 50% are multiple, in which case they are termed biliary papillomatosis.2,8 Biliary papillary tumors encompass both benign tumors (papillary adenoma) with varying degrees of dysplasia and malignant tumors (papillary cholangiocarcinoma), with the latter accounting for 74% to 83% of cases.1,2,5,8 Of all the types of cholangiocarcinoma, the papillary type accounts for 2.9-8.9% and has a better prognosis.5,16,17 Biliary papillary tumors are further classified pathologically by epithelial subtype into gastric, intestinal and pancreato-biliary subtypes, with the pancreato-biliary subtype accounting for more than 50% of cases.1,5,8

While the data are conflicting, biliary intraductal papillary neoplasms may also be termed intraductal papillary mucinous neoplasms of the biliary tract, or IPMN-B. They share certain pathological features with intraductal papillary mucinous neoplasm of the pancreas (IPMN-P), such as papillary proliferation, similar epithelial subtypes and mucin production.8 The terminology and histopathology of IPMN-B are not well defined by the WHO.15 Mucin hypersecretion is seen in only 25-35% of IPMN-B, while it is seen in most cases of IPMN-P.5,8,9 In one review of case series of biliary tumors, 36% of tumors previously named cystic, papillary or mucinous tumors were re-classified as IPMN-B based on the macroscopic and histopathological criteria used for diagnosing IPMN-P.5 Immunohistochemical analysis of biliary papillary tumors generally shows a better prognosis for MUC 2-staining than for MUC 1-staining tumors.1,8,9 Biliary papillary tumors differ from biliary cystadenoma by their communication with the bile duct and the absence of ovarian-like stroma.18 Attempts have been made to study the inherent biology of biliary papillary tumors. In a study of genetic alterations, high-level microsatellite instability was seen in 11.8% and low-level microsatellite instability in 35.3% of biliary papillary tumors.19

The median age for biliary papillary neoplasia is 68 years, with a predilection for males.5 Cases of biliary papillary neoplasia from Asia have been associated with choledocholithiasis and parasitic infestations such as Clonorchis sinensis,2,24,25 but no such association has been found in the Western population.1 The common symptoms include abdominal pain,1,5 followed by obstructive jaundice,3,6 pruritus7 and acute cholangitis.2 Bile duct stones may occur in association with biliary tumors, which is believed to be the result of biliary stasis. Cases of bile duct rupture with implantation of tumor cells in the peritoneal space (pseudomyxoma peritonei) have been reported as well.3

Since these tumors are slow-growing, it is possible to diagnose intraductal papillary neoplasia at an early stage, given advancements in diagnostic procedures such as ERCP and cholangioscopy. CT and magnetic resonance imaging (MRI) often show hyperenhancement of the tumors within the bile duct.22 At cholangiography, the biliary ducts are dilated with intraluminal filling defects due to either fixed or detached tumor, with extrusion of soft tissue on balloon sweep.3,7,23 Mucin-hypersecreting tumors may show mucin extruding from the papilla.24,25 On cholangioscopy, sludge frequently covers the papillary masses within the bile duct lumen. The masses are usually soft and friable, with a bright yellow or pinkish surface.2 As it is difficult to detect malignant foci pre-operatively, the diagnosis is usually made after radical resection such as a Whipple procedure.6 Intraoperative cholangioscopy may be needed to confirm the absence of macroscopic intraluminal extension, as these tumors tend to spread along the epithelial surface of the bile duct.4 Elevated serum CA 19-9 is seen more frequently in mucin-hypersecreting biliary papillary tumors, likely due to cholestasis and cholangitis.2

Management of biliary papillary tumors is not well defined in the literature. Given the favorable prognosis after complete surgical resection and the inability to identify malignant foci pre-operatively, aggressive surgical resection is recommended regardless of tumor size and extent.10-13 The type of surgery performed depends on the location of the tumor and ranges from pancreatoduodenectomy to hepatic resection.1,5 Since biliary intraductal papillary neoplasias are adenomas, they can recur after surgical resection.14 There are no guidelines regarding the frequency or mode of surveillance of the remnant biliary tract after resection of biliary papillary tumors. In a study from Asia comparing biliary intraductal papillary neoplasia to non-papillary biliary tumors, the five-year survival after resection for biliary papillary adenoma was 90%, compared to 50%, 0% and 58%, respectively, for papillary cholangiocarcinoma, non-papillary cholangiocarcinoma and IPMN-P.8 In a case series from the Western population, IPMN-B had a component of invasive carcinoma in 74% of cases,1 and the median 5-year survival for invasive IPMN-B was approximately 38%.5

CONCLUSION

We report a case of biliary intraductal tubulopapillary adenoma with high-grade dysplasia, a rare tumor in the Western population that generally carries a favorable prognosis compared to non-papillary biliary tumors. Our case is also notable for the concomitant presence of a common bile duct stone, further emphasizing the possible predisposition of biliary intraductal papillary neoplasia to biliary stones. Further, we suggest that there is a need to reclassify biliary tumors to accommodate IPMN-B as a distinct entity analogous to IPMN-P.


FRONTIERS IN ENDOSCOPY, SERIES #19

Transgastric Endoscopic Necrosectomy Using a Dedicated Transluminal Stent


James P. D. Walker, Kyle Eliason, Douglas G. Adler MD, FACG, AGAF, FASGE, Gastroenterology and Hepatology, University of Utah School of Medicine, Salt Lake City, UT

CASE REPORT

An 87-year-old female was referred to our institution for evaluation of several pancreatic fluid collections that had developed in the context of an episode of severe acute pancreatitis. The patient’s pancreatitis was presumed to be due to choledocholithiasis and prior to our evaluation she had undergone an ERCP with sphincterotomy and duct clearance as well as a cholecystectomy. The patient could not tolerate PO intake and was being fed via a nasojejunal feeding tube. The patient had previously been evaluated by surgery and interventional radiology, who did not feel that the patient was a candidate for surgery or percutaneous drainage of this large pancreatic fluid collection, respectively.

Contrast-enhanced CT scan revealed multiple pancreatic fluid collections, although attention was mostly centered on a bilobed but internally communicating 15 x 10 cm collection causing significant extrinsic compression of the stomach (Figure 1). The patient was offered endoscopic transmural drainage and, after a discussion of risks and benefits, she accepted.

When evaluated by endoscopic ultrasound (EUS), the lesion was found to contain a large amount of solid debris and was thus felt to represent walled-off pancreatic necrosis (WOPN) rather than a pseudocyst (Figure 2). EUS-guided transmural access to the cyst was obtained with a 19-gauge needle via a transgastric route. The cystgastrostomy was dilated to 6 mm over a wire. A 15 mm wide Axios stent (Xlumena, Mountain View, CA) was advanced across the cystgastrostomy and deployed without difficulty (Figure 3). There was immediate drainage of approximately 1 L of fluid consistent with cyst contents.

One week later, the patient underwent endoscopic pancreatic necrosectomy through the Axios stent with a standard EGD endoscope. Using a combination of nets, snares, and rat-tooth forceps, a large amount of necrotic pancreatic tissue was mechanically debrided, with marked improvement in the appearance of the cyst cavity, although some debris remained (Figure 4). The necrotic cavity was lavaged with copious amounts of hydrogen peroxide mixed with sterile saline. Although the endoscopic portion of the procedure went well, the patient tolerated the procedure poorly from a respiratory perspective and thereafter declined further procedures given her age and overall situation. It was agreed that the Axios stent would simply be left in place to provide drainage of the pancreatic fluid collection.

A CT scan of her abdomen and pelvis obtained 5 weeks later showed essentially complete resolution of the large necrotic collection with the Axios stent still in good position (Figure 5). The patient still did not wish to undergo further procedures given her age and overall history and the stent was thus left in place.

Discussion

In the treatment of pancreatic fluid collections and walled off pancreatic necrosis (WOPN) from pancreatitis, there are three main approaches: surgical, interventional radiology (IR) and endoscopic. These approaches are effectively used either alone or in tandem based on the specifics of the patient’s disease, the comfort level of the care team and the stability of the patient’s condition.

All of these techniques can be used in isolation or in combination as clinically required and as the patient's condition and disease severity change over time. The first week or phase of disease management consists mostly of monitoring and supportive care/pain control, with possible antibiotic prophylaxis and fluid resuscitation.1 During this time surgery is usually not performed unless it is emergent, as surgery in this phase often exacerbates multiple organ failure.2 The next phase of management over the ensuing weeks typically includes contrast-enhanced CT or MRI to assess fluid collections and necrosis for progression, maturation, and the presence of infection.3 In this phase, antibiotic treatment may be optimized, and sterile versus infected pancreatic necrosis can be distinguished using fine needle aspiration cultures of pancreatic tissue if clinically indicated.4 In weeks four, five and six, patients who remain stable typically continue conservative medical treatment, while patients who are beginning to deteriorate will likely undergo more aggressive interventions.15 This is the phase of treatment when minimally invasive surgical and laparoscopic procedures as well as endoscopic drainage are more commonly performed.

Waiting until at least four weeks after the onset of symptoms allows fluid collections to become walled-off and develop a mature wall and adherence to the stomach and/or duodenum, which facilitates endoscopic necrosectomy if this approach is chosen.5,6 After treatment is initiated, patients can have procedures repeated as necessary. Cholecystectomy or ERCP with sphincterotomy is often considered during this time to minimize recurrent biliary pancreatitis and any other gallstone or obstructive disease.7 Several complications can arise in this phase, including vascular complications and pancreatic fistulas. These pancreatic fistulas can often be treated with endoscopic papillary stenting.8

Surgical methods to treat pancreatic fluid collections may include open necrosectomy, which was considered the ideal treatment in the past as part of a “step down” approach to therapy of acute pancreatitis. An open necrosectomy is typically performed by creating a midline or subcostal bilateral incision and depending on the extent and locality of the necrosis, a surgeon may access the pancreas through the lesser sac, gastrocolic omentum or the transverse mesocolon.9 Manual debridement is performed in one or more sessions. After the initial necrosectomy is complete, the abdominal incision is typically closed around a drain and repeat procedures are performed until the debridement is complete. Alternatively, the patient’s abdomen may be left open or a wound-vac may be placed to facilitate drainage and repeated trips to the operating room for debridement in the days ahead.

Another surgical option is laparoscopic necrosectomy, which has grown in popularity due to its minimally invasive nature. A laparoscopic approach provides excellent access to the pancreas and allows other maneuvers to be easily accomplished in the same setting, e.g. cholecystectomy, feeding tube placement, etc.1,10 In one study, laparoscopic necrosectomy showed promising results, although 7.1% of procedures were converted to open necrosectomy, 28.6% of patients developed a pancreatic fistula, and there was a wound infection rate of 10.7%.11 While these numbers may sound high, it should be emphasized that these are aggressive procedures being performed in very ill patients.

Another minimally invasive surgical approach is the retroperitoneal approach. This can be performed in a number of ways, one of which is video-assisted retroperitoneal debridement (VARD). In the VARD procedure, a laparoscopic camera is inserted through an incision centered on the 12th rib, with insertion of laparoscopic devices as well. Fluid drainage and debris removal can be accomplished, followed by debridement of the necrotic cavity.1,12

Due to the more invasive nature of the surgical techniques, they are associated with longer hospital stays and greater cost to the patient than other procedures. They are therefore typically used within a therapeutic "step up" program that usually begins with a less invasive endoscopic or percutaneous IR procedure.1,13

IR placement of one or more percutaneous drainage catheters is commonly used in patients who are too ill for endoscopy or surgery, who have an immature fluid collection in need of drainage, or who have an acutely infected collection. A percutaneous drain can be useful for bridging unstable patients to more definitive procedures performed at a later date. One study showed a 100% success rate in hemodynamically stable patients (n=20) using percutaneous drains to treat necrotizing pancreatitis, with success defined as resolution of lesions at follow-up IR procedures and on CT.14 Percutaneous drainage carries a risk of pancreatico-cutaneous fistulae, especially in patients with disconnected duct syndrome, where it can be as high as 45%.15 As such, evaluating ductal anatomy, typically via ERCP with stenting as appropriate, is sometimes helpful in this setting. In a review of percutaneous drainage as primary treatment for necrotizing pancreatitis, 55% of patients (214 out of 384) had no need for further necrosectomy.16 In patients who need extensive necrosectomy of solid tissue, other techniques are typically preferred over percutaneous drains.17

Endoscopic approaches to the drainage and debridement of pancreatic necrosis tend to be the least invasive but are nonetheless high-risk interventions. These approaches can be used when a necrotic collection abuts the gastroduodenal wall or when the fluid collection communicates with the main pancreatic duct.

If the fluid collection communicates with the main pancreatic duct, transpapillary drainage is often attempted using a plastic stent placed directly into the collection or bridging the communication with the duct.9 If the fluid collection is felt to be too large, to have too much solid component, and/or does not communicate with the duct, then transmural drainage can be achieved in many patients via a cystenterostomy (most commonly a cystgastrostomy, less commonly a cystduodenostomy) created endoscopically and kept open with one or more plastic or metal stents. The cystenterostomy also provides a portal for repeated endoscopic debridement as necessary.

In a systematic review of endoscopic transmural drainage of pancreatic fluid collections, the 124 patients treated with metal stents had an average success rate of 81.9% across various types of collections: 83.3% for pseudocysts and 77.9% for walled-off necrosis (success being defined as a reduction in size > 50% or complete resolution). The same authors reported an adverse event rate of 23.3%, including infection, bleeding, stent migration and occlusion (again emphasizing that endoscopic treatments are not low-risk procedures). Among the 702 patients treated with plastic stents, the average success rate was 80.7% (85.1% for pseudocysts and 69.5% for walled-off necrosis), with an adverse event rate of 16.1%.18

There are now commercially available, dedicated transluminal stents with a bore wide enough (15 mm) for an endoscopic necrosectomy to be performed through the stent lumen itself. We use these stents frequently in our practice. These stents are designed for EUS-guided placement and are only now coming into clinical use. A study of one of these dedicated stents (n=22) showed a 100% clinical success rate and 100% technical success rate, with 10% of patients encountering complications, which included stent migration and hemorrhage.19 The main advantage of these dedicated stents is their wide lumen, which both facilitates passive drainage of cyst contents into the GI tract and accommodates an endoscope, so that the cavity can be entered as needed for endoscopic necrosectomy without removing the stent itself (as is often necessary with plastic stents).

Overall, endoscopic treatment appears to be a good choice for pancreatic fluid collections. One study of 116 patients (5 acute fluid collections, 8 necrosis, 30 acute pseudocysts, 64 chronic pseudocysts and 9 pancreatic abscesses) treated via endoscopic drainage showed an 87.9% clinical success rate, with resolution of collections and symptoms, and a 93.1% technical success rate of fluid collection resolution with or without resolution of symptoms. Collections recurred in 15.5% of patients and complications occurred in 11.2%. The most common complications were bleeding and pneumoperitoneum.20

Overall, methods for treating pancreatic fluid collections that develop following pancreatitis are numerous and allow for a customizable approach to treatment. There are many variables to consider with each patient’s management, including the provider’s comfort level and experience with some of these procedures. Careful consideration of all factors will be important to the patients’ outcomes.


NUTRITION ISSUES IN GASTROENTEROLOGY, SERIES #142

Non-Celiac Gluten Sensitivity Where are We Now in 2015?



Non-celiac gluten sensitivity (NCGS) is a term that is used to describe individuals who are not affected by celiac disease or wheat allergy yet who have intestinal and/or extraintestinal symptoms related to gluten ingestion with improvement in symptoms upon gluten withdrawal. The prevalence of this condition remains unknown. It is believed that NCGS represents a heterogeneous group with different subgroups potentially characterized by different pathogenesis, clinical history, and clinical course. There also appears to be an overlap between NCGS and irritable bowel syndrome (IBS). Hence, there is a need for strict diagnostic criteria for NCGS. The lack of validated biomarkers remains a significant limitation in research studies on NCGS.

Anna Sapone MD PhD, Celiac Center, Division of Gastroenterology Daniel A. Leffler MD, MS, Director of Clinical Research, Celiac Center, Director of Quality Improvement, Associate Professor of Medicine at Harvard Medical School, Division of Gastroenterology Rupa Mukherjee MD, Celiac Center, Division of Gastroenterology, Department of Medicine, Instructor in Medicine at Harvard Medical School, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA

INTRODUCTION

The most common diseases caused by ingestion of wheat are autoimmune-mediated conditions such as celiac disease (CD) and IgE-mediated allergic reactions, or wheat allergy (WA).1 CD affects roughly 1% of the general population. It is now increasingly clear that, besides CD and WA, an undefined percentage of the general population consider themselves to be suffering from problems due to wheat and/or gluten ingestion, relying largely on self-diagnosis. These individuals are generally considered to have gluten sensitivity (GS). An overlap between irritable bowel syndrome and GS has long been suspected, underscoring the need for strict diagnostic criteria. Currently, the lack of biomarkers is a major limitation, and many questions regarding GS remain unresolved. In this paper, we will discuss the current advances in our understanding of non-celiac gluten sensitivity (NCGS) including definition, epidemiology, clinical characteristics, diagnostic criteria and management.

Definition

Recent publications show that there is great interest in defining gluten-related disorders (See Figure 1). This term encompasses all conditions related to the ingestion of gluten-containing food. Included within this category is celiac disease (CD), a chronic, small intestinal immune-mediated enteropathy triggered by exposure to dietary gluten in genetically predisposed individuals and characterized by specific autoantibodies against tissue transglutaminase 2 (anti-TG2), endomysium (EMA) and/or deamidated gliadin peptide (DGP).2 Wheat allergy (WA) is another gluten-related disorder, defined as an adverse immunologic reaction to wheat proteins characterized by the production of wheat-specific IgE antibodies that play a key role in disease pathogenesis. Cases of non-IgE-mediated wheat allergy also exist and can be confused with gluten sensitivity. Examples of WA include wheat-dependent, exercise-induced asthma (WDEIA), occupational asthma (baker's asthma), rhinitis, and contact urticaria.1

In 2011, an international panel of experts met in London and reached consensus on a definition of non-celiac gluten sensitivity (NCGS). They defined NCGS as a "non-allergic and non-autoimmune condition in which the consumption of gluten can lead to symptoms similar to those seen in CD".3 The consensus statement elaborated that symptoms in NCGS are triggered by gluten ingestion in the absence of celiac-specific antibodies (tissue transglutaminase [tTG], endomysium [EMA] and/or deamidated gliadin peptide [DGP]) and in the absence of enteropathy, although an increased density of CD3+ intraepithelial lymphocytes (IELs) can be detected. Patients with NCGS have variable human leukocyte antigen (HLA) status and variable presence of IgG anti-gliadin (first generation) antibodies.3 NCGS is further characterized by resolution of symptoms with withdrawal of gluten and relapse of symptoms with gluten exposure. The clinical symptoms of NCGS can overlap with those of CD and WA. As our knowledge of NCGS continues to increase, this definition may require further modification in the future.

Epidemiology and Natural History of NCGS

The overall prevalence of NCGS in the general population is currently unknown, largely because patients often self-diagnose and place themselves on a GFD without medical consultation. Anecdotal observations indicate that the prevalence ranges from 0.5% to 6%, but this is based on studies with heterogeneous designs and inconsistent definitions of the disease. In a large study of 5896 patients evaluated at the University of Maryland between 2004 and 2010, 347 patients fulfilled diagnostic criteria for NCGS, yielding a prevalence of nearly 6%.1,4 Furthermore, data from the National Health and Nutrition Examination Survey (NHANES) for 2009-2010 reported a possible prevalence of NCGS of 0.55% in the general U.S. population.5 Given the reported overlap between IBS and NCGS, epidemiologic studies on IBS can shed some light, albeit indirectly, on the frequency of NCGS. In one highly selected series of adults with IBS, the frequency of NCGS was reported to be 28% based on a double-blind, placebo-controlled gluten challenge.6 Furthermore, in a large study by Carroccio et al, 276 of 920 (30%) subjects with IBS-like symptoms based on Rome II criteria reported wheat sensitivity or multiple food hypersensitivities.7 It is estimated that the prevalence of NCGS in the general population is likely higher than that of CD (1%). The prevalence of NCGS in children is still unknown. Although risk factors for NCGS have not yet been identified, the disorder appears to be more common in females, with a male-to-female ratio of about 1:3, and in young to middle-aged adults.

Due to a lack of longitudinal data and prospective studies on the natural history of NCGS, it is unclear if NCGS predisposes to any long-term complications. In the current literature, there are no reports of major complications such as intestinal lymphoma, gastrointestinal (GI) malignancies or associated autoimmune illness as observed in CD.

Pathogenesis

The pathophysiology of NCGS remains largely undetermined. A study by Sapone et al. found that, compared to CD patients, NCGS subjects have normal intestinal permeability and intact expression of the proteins that comprise intestinal epithelial tight junctions, along with a significant reduction in T-regulatory cell markers compared to controls and CD patients.4 Moreover, NCGS patients have an increase in the α and β classes of intraepithelial lymphocytes (IELs) with no increase in adaptive immunity-related gut mucosal gene expression. These findings suggest an important role for the intestinal innate immune system in the pathogenesis of NCGS, without an adaptive immune response.8 Unlike duodenal mucosa from CD patients exposed to gliadin in vitro, intestinal mucosa from NCGS patients does not express markers of inflammation. Newer techniques, such as examination of basophil activation in response to gluten or wheat stimulation, might suggest alternative pathogenic mechanisms for NCGS.

Clinical Characteristics of NCGS

The clinical symptoms of NCGS are elicited soon after gluten exposure, improve or disappear with gluten withdrawal, and reappear following gluten challenge, usually within hours or days. While this pattern could be attributed to a placebo/nocebo effect, the 2011 study by Biesiekierski et al. argues for the existence of a true NCGS disorder. In a double-blind, randomized, placebo-controlled study, the authors found that IBS-like symptoms were significantly more common in the gluten-treated group (68%) than in subjects treated with placebo (40%).6

Studies suggest that the clinical presentation of NCGS follows an IBS-like picture characterized by abdominal pain, bloating, bowel irregularity (diarrhea and/or constipation) and systemic manifestations including "brain fog", headache, joint and muscle pain, fatigue, depression, leg or arm numbness, dermatitis (eczema or skin rash) and anemia.1,4,9 In one study of IBS patients, the two most common extraintestinal manifestations with gluten challenge were "foggy mind" (42%) and fatigue (36%).9 Currently, data are lacking on the actual prevalence and type of intestinal and extraintestinal symptoms in patients with NCGS. Unlike CD patients, NCGS patients do not have an increased prevalence of autoimmune illness. In one group of 78 NCGS patients, none had type 1 diabetes mellitus and only one patient (1.3%) had autoimmune thyroiditis, compared to prevalences of 5% and 19%, respectively, for these autoimmune comorbidities in a study of 80 patients with CD.9 With regard to psychiatric comorbidities, a recent study found no significant difference between patients with CD and NCGS in anxiety, depression or quality of life indices.10 Overall, the role of NCGS in neuropsychiatric conditions (i.e. schizophrenia, autism spectrum disorders) remains a controversial and highly debated topic. However, NCGS patients reported more abdominal and non-abdominal symptoms after gluten exposure than CD patients (see Table 1).

In a recent retrospective review of patients with IBS-like symptoms who underwent a double-blind placebo-controlled wheat challenge, nearly 25% were identified as having NCGS. The study showed that a history of food allergy in infancy, coexistent atopic disease, multiple food intolerances, weight loss and anemia were more common in the NCGS group than in the IBS controls.7 Therefore, it may be useful for physicians to inquire about these conditions in patients with IBS-type symptoms to gauge the potential utility of a trial of gluten restriction.

NCGS and IBS

The relationship between NCGS and IBS is complex, and IBS-like symptoms are common in patients with NCGS. Vazquez-Roque et al. showed that gluten ingestion can elicit GI symptoms in non-CD patients, specifically patients with diarrhea-predominant IBS (IBS-D).11 The IBS-D patients, particularly those with the HLA-DQ2 and/or DQ8 genotypes, had more frequent bowel movements per day on a gluten-containing diet, and this diet was associated with higher small intestinal permeability. This finding gave some insight into the role of the GFD in improving GI symptoms in IBS patients.

However, the exact role of gluten withdrawal in mitigating symptoms requires further investigation. In addition to gluten, wheat and wheat derivatives contain components such as amylase-trypsin inhibitors (ATIs) that can trigger symptoms in IBS patients. Another potential trigger is the group of highly fermentable, osmotic, poorly absorbed short-chain carbohydrates (fermentable oligo-, di- and monosaccharides and polyols), also called FODMAPs, which include fructans, galactans, fructose, lactose and polyols found in wheat, certain fruits, vegetables and milk as well as their derivatives.3 There is ongoing debate over the contribution of each of these dietary components to the symptoms experienced by patients with NCGS and IBS. In a placebo-controlled cross-over re-challenge study in 37 subjects with self-reported NCGS/IBS, subjects were randomly assigned to a reduced-FODMAP diet and then challenged with gluten or whey protein.12

All 37 subjects had improvement in their GI symptoms on the reduced-FODMAP diet, without significant worsening of symptoms when challenged with gluten or whey protein. It is important to note that the symptoms experienced by NCGS patients cannot be attributed solely to FODMAPs, since these patients often experience resolution of symptoms on a GFD alone while still consuming FODMAPs from other sources such as legumes. However, this finding raises the possibility that some patients classified as having NCGS may, in fact, be reacting largely to FODMAPs and would be better classified as having IBS. Therefore, there is a great need to identify and validate specific biomarkers that will play an important role in further defining NCGS as a clinical condition and in clarifying its prevalence in at-risk groups and the general population.

Laboratory Evaluation in NCGS

No specific biomarker has been identified for NCGS. However, trends in laboratory evaluation including serology, HLA genotyping and histology have been noted in patients meeting diagnostic criteria for NCGS.

CD Serology

Volta et al. investigated CD serologic patterns in 78 patients with untreated NCGS. They found that 56.4% of the patients had elevated titers of "first generation" IgG anti-gliadin antibody (AGA). The prevalence of IgG-AGA in NCGS was lower than that in untreated CD patients (81.2%), but higher than in patients with connective tissue diseases (9%) or autoimmune liver disease (21.5%), and in healthy blood donors (2-8%). However, the prevalence of IgA-AGA in NCGS was low, at 7.7%.9 Of note, the three key CD antibodies, IgA-tTG, IgG-DGP and IgA-EMA, were negative in all patients with NCGS except for a single low-titer IgG-DGP.

HLA Genotyping

The CD-predisposing HLA-DQ2 and HLA-DQ8 haplotypes are found in roughly 50% of NCGS patients compared to 95% in CD patients and 30% in the general population.1

Histologic Findings

Sapone et al. compared small intestinal biopsy findings from patients with NCGS, CD and controls. Patients with NCGS had normal to mildly inflamed mucosa categorized as Marsh 0 or 1, while partial or subtotal villous atrophy (Marsh 3) with crypt hyperplasia was seen in all CD patients.4 In addition, the CD patients had increased intraepithelial lymphocytes (IELs) compared to controls. The level of CD3+ IELs in NCGS patients was intermediate between that seen in CD patients and in controls, in the context of normal villous architecture. Other findings that might be specific to NCGS patients include an increased level of activated circulating basophils7,13 and increased infiltration of eosinophils in the duodenum and/or ileum and colon.7,14

Diagnostic Approach to NCGS

As clinicians, it is important to suspect NCGS in a patient who presents with IBS-like symptoms such as abdominal pain, bloating, diarrhea and constipation, as well as "foggy brain," fatigue, headaches, or joint or muscle pain, that appear to improve on a GFD. Since these symptoms can also be seen with CD and, to a lesser extent, with wheat allergy (WA), these conditions need to be excluded in order to make a diagnosis of NCGS (see Table 2). Kabbani et al. have proposed a diagnostic algorithm to help differentiate NCGS from CD and WA15 (See Figure 2). The first step in the evaluation of a subject with symptoms responsive to a GFD is to check celiac serologies (IgA-tTG and IgA/IgG DGP) on a gluten-containing diet. If the celiac serologies are negative and there is no IgA deficiency, a diagnosis of CD is unlikely, making NCGS more likely. Moreover, the absence of symptoms of malabsorption (weight loss, diarrhea, nutrient deficiencies, iron deficiency anemia) and of CD risk factors (family history of CD, personal history of autoimmune illness) further supports a diagnosis of NCGS. WA should similarly be evaluated with IgE-based assays.

The authors found that incorporating a personal history of autoimmune illness, family history of celiac disease and nutrient deficiencies could strengthen the diagnostic model, particularly in subjects with negative serology. Subjects with negative serology on a gluten-containing diet, no risk factors and no symptoms of enteropathy are highly likely to have NCGS and do not require further testing. Conversely, in a subject with negative serology but with typical symptoms of malabsorption or risk factors for CD, a biopsy is indicated. In a subject with borderline serology on a gluten-containing diet, the next step is HLA typing to determine whether a biopsy is indicated. A subject with borderline serology and negative HLA typing is considered to have NCGS. HLA typing is also useful to evaluate subjects suspected of having NCGS or CD who self-start a GFD without a prior check of celiac serologies on a gluten-containing diet. Due to the high negative predictive value of the genetic assay, a negative finding effectively excludes a diagnosis of CD. If HLA testing is negative in a subject already on a GFD without prior serologies, and whose symptoms respond to the GFD, the subject likely has NCGS and a gluten challenge is unnecessary. However, if HLA testing is positive despite symptom resolution on a GFD, it is recommended that the subject undergo a gluten challenge followed by evaluation of celiac serologies. A gluten challenge is the monitored reintroduction of gluten-containing foods, usually over a two-week period. The recommended daily gluten load is the equivalent of 1-2 slices of wheat bread.
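The branching logic of the proposed algorithm above can be summarized schematically. The sketch below is only an illustration of the decision flow as described in the text; all function and variable names are hypothetical, the thresholds are simplified, and this is in no way a clinical decision tool.

```python
# Simplified sketch of the proposed NCGS diagnostic flow (after Kabbani et al.).
# Names are hypothetical; this encodes only the branching described in the text.

def next_step(serology, iga_deficient, malabsorption_or_risk_factors,
              on_gfd_without_prior_serology=False, hla_dq2_dq8=None):
    """Suggest the next diagnostic step for a patient whose symptoms respond to a GFD.

    serology: 'negative', 'borderline', or 'positive' celiac serologies
              (IgA-tTG, IgA/IgG DGP) checked on a gluten-containing diet,
              or None if the patient self-started a GFD first.
    hla_dq2_dq8: True/False result of HLA typing, or None if not done.
    """
    if on_gfd_without_prior_serology:
        # No baseline serology: rely on HLA typing (high negative predictive value).
        if hla_dq2_dq8 is False:
            return "CD effectively excluded; NCGS likely, no gluten challenge needed"
        return "Gluten challenge (~2 weeks, 1-2 slices of wheat bread/day), then recheck serologies"
    if serology == "negative" and not iga_deficient:
        if malabsorption_or_risk_factors:
            return "Duodenal biopsy indicated"
        return "NCGS likely; no further testing required"
    if serology == "borderline":
        if hla_dq2_dq8 is False:
            return "NCGS"
        return "HLA typing / biopsy to clarify"
    # Positive serology (or negative serology with IgA deficiency).
    return "Evaluate for celiac disease (biopsy)"
```

For example, a patient with negative serologies on a gluten-containing diet, no IgA deficiency, and no malabsorptive symptoms or CD risk factors falls through to the "NCGS likely" branch without further testing, mirroring the text above.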

Once CD and WA have been excluded clinically and by laboratory evaluation, a patient suspected of having NCGS should be asked to avoid gluten for at least 4-8 weeks. Gluten withdrawal is usually associated with significant improvement in symptoms within days. After the period of gluten withdrawal, a gluten challenge should be performed to confirm the diagnosis. Since a placebo effect from gluten withdrawal cannot be excluded entirely, the ideal method for diagnosing NCGS is a double-blinded, placebo-controlled challenge; however, this is unlikely to be feasible in most clinical practices.

Currently, research efforts are focusing on the use of an ex-vivo gluten challenge to distinguish patients with CD (treated and untreated) from those with NCGS, to further classify NCGS, and to distinguish true NCGS from cases of mild CD without enteropathy. In the ex-vivo gluten challenge, cultured duodenal biopsies are exposed to gluten and maintained in various laboratory conditions to determine unique cytokine profiles and histologic findings that can be used to classify different patient groups. This method would eliminate the need for a two-week gluten challenge followed by an upper endoscopy with duodenal biopsies in patients already on a GFD in whom the diagnosis of CD is not clear. Patients frequently find the gluten challenge to be onerous and, in some cases, intolerable due to significant side effects from gluten exposure.

Management of NCGS

Successful treatment and management of NCGS is based on a multidisciplinary approach involving the primary care physician, gastroenterologist and nutritionist. It must be emphasized that dietary treatment should be implemented only after an appropriate diagnosis has been established. Patients with NCGS are advised to follow a diet with sufficiently reduced gluten content to manage and mitigate symptoms. Based on severity of symptoms, some patients may choose to follow a gluten-free diet (GFD) indefinitely. Since gluten-free food products are often not fortified with necessary vitamins and minerals, it is important to evaluate a patient with NCGS for any vitamin and mineral deficiencies and manage them appropriately. NCGS patients are typically advised to start a multivitamin. If a patient has persistent symptoms despite a low gluten or GFD, there should be consideration for other associated conditions such as lactose intolerance and/or fructose malabsorption. These conditions can be evaluated for with breath testing and/or an empiric trial of a low FODMAP diet. It is important to also consider and exclude other conditions such as IBS and small intestinal bacterial overgrowth that can contribute to ongoing symptoms.

Since there is no biomarker for NCGS to monitor a patient’s status, clinicians are left to rely on symptom resolution. Based on our current understanding of NCGS, there is no intestinal or extraintestinal damage with gluten exposure. Since it is not yet known whether NCGS is a transient or permanent condition, it is strongly recommended by experts such as Fasano et al. that patients undergo periodic re-evaluation with reintroduction of gluten (e.g. every 6-12 months), particularly in the pediatric population, in an effort to liberalize the diet where possible.3 In clinical practice, however, many patients with symptom control on a low gluten or GFD are averse to intentional exposures to gluten. Currently, there are no guidelines on how best to monitor patients with NCGS.

Unanswered Questions and Future Research

The clinical spectrum of gluten-related disorders appears to be more heterogeneous than previously appreciated. However, evidence-based research in this area is lacking. Although NCGS is currently defined by gluten-related symptoms in the absence of CD, this does not rule out the possibility that gluten could be “toxic” and have long-term clinical sequelae. A number of unanswered questions remain about NCGS that will dictate future research. What is the prevalence of NCGS both in the general population and in at-risk groups? What is/are its pathogenic mechanisms? Is the condition permanent or transient? Is the threshold of sensitivity the same across subjects, and does it change over time within a subject? Research on NCGS suggests that it may be a heterogeneous condition comprised of several subgroups. There is a need for:


  • Prospective, multi-center studies on the natural history of this condition.
  • Biomarkers to properly diagnose and better define the different NCGS subgroups.
  • Research on the potential pathogenic role of other wheat components besides gluten and ATI, namely FODMAPs, in NCGS.

It is also anticipated that the definition of NCGS will undergo further modification with the accumulation of more data. In the meantime, it is important to have a standardized definition for NCGS to assist in diagnosis and to improve study design for future research.


GASTROINTESTINAL MOTILITY AND FUNCTIONAL BOWEL DISORDERS, SERIES #9

Domperidone: Everything a Gastroenterologist Needs to Know



Domperidone, first synthesized approximately 40 years ago, has been approved worldwide for specific clinical applications. However, in the United States it is only available through an FDA-approved Limited Access Program. Patients with functional dyspepsia, gastroparesis, gastroesophageal reflux disease and refractory nausea and vomiting may benefit from the use of domperidone. The main limitation to using domperidone has been questions raised regarding cardiac toxicity, specifically QTc elongation that could potentially lead to fatal arrhythmias. Recent studies have not shown a significantly increased incidence of cardiac side effects even when domperidone was given at very high doses, two to three-fold greater than those typically described in the majority of the available literature. In this article we review all the literature regarding its clinical efficacy and we provide a comprehensive list of recommendations and guidelines when considering initiating domperidone in patients that are suitable for this medication.

Marco Bustamante-Bernal MD,1 Priyanka Wani MD,1 Richard W. McCallum MD2; 1Department of Internal Medicine, Paul L. Foster School of Medicine, Texas Tech University Health Science Center; 2Department of Internal Medicine, Division of Gastroenterology, Paul L. Foster School of Medicine, Texas Tech University Health Science Center, El Paso, TX

INTRODUCTION

Domperidone, first synthesized in 1974, has been approved for patient use throughout the world, with specific clinical applications in gastroparesis, nausea and vomiting, gastroesophageal reflux disease, functional dyspepsia and, more recently, as an adjunct to small bowel capsule endoscopy. It is approved worldwide; however, in the United States domperidone is only available through an FDA-approved Limited Access Program. It can be prescribed by physicians who apply for an Investigational New Drug (IND) protocol to provide this drug to patients with gastroparesis or other functional gastrointestinal (GI) disorders associated with nausea and vomiting, where symptoms have been refractory to standard therapy or treatment has been limited by medication side effects. Domperidone was not approved for use in the United States based on recommendations from the FDA review process to conduct clinical trials with larger patient numbers to further confirm its efficacy and safety.1 These trials were not subsequently performed or submitted to the FDA.

Our purpose in this publication is to provide physicians a comprehensive analysis of how they can best utilize domperidone in their practices, as well as an update on the available data on domperidone's pharmacology and efficacy, with a major focus on safety, including a full analysis of the recent questions that have been raised regarding cardiac toxicity.

Pharmacokinetics

Peak plasma concentrations are attained approximately 10 minutes after intramuscular and 30 minutes after oral administration of domperidone. Systemic bioavailability after intramuscular administration is about 90%, whereas after oral administration it is 13 to 17%. The low systemic bioavailability after oral administration is explained by first-pass metabolism in the liver and gut wall.2

Distribution data in humans are lacking, but studies in rats with radiolabeled domperidone have shown wide distribution in body tissues except the central nervous system (CNS), where only very low concentrations occur. This is explained by the fact that domperidone minimally crosses the blood-brain barrier.

Domperidone undergoes rapid and extensive biotransformation by hydroxylation and oxidative dealkylation. After oral administration of 40 mg of radiolabeled domperidone, 31% of the radioactivity is excreted in the urine and 60% in the feces over a period of 4 days. The half-life is 7.5 hours in healthy subjects and is prolonged to up to 20 hours in patients with severe renal failure. However, since renal clearance is small compared to total plasma clearance, meaningful accumulation should not occur.2
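The point about accumulation follows from first-order elimination, and the effect of the prolonged half-life can be illustrated with a simple single-dose calculation (an idealized sketch using the half-lives quoted above, ignoring absorption and repeated dosing):

```python
def fraction_remaining(hours, half_life_h):
    # First-order elimination: C(t)/C0 = 0.5 ** (t / t_half)
    return 0.5 ** (hours / half_life_h)

# Fraction of a dose remaining 24 h after administration:
healthy = fraction_remaining(24, 7.5)   # healthy subjects, t1/2 = 7.5 h -> ~0.11
renal   = fraction_remaining(24, 20.0)  # severe renal failure, t1/2 = 20 h -> ~0.44
```

Even with the fourfold higher residual fraction in renal failure, total plasma clearance (which is mostly non-renal) dominates, which is why meaningful accumulation is not expected.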

Pharmacodynamics

Domperidone is a dopamine (D) antagonist with particular affinity for the D2-subtype receptors in the brain and the peripheral nervous system, including the GI tract. Dopamine receptors in the chemoreceptor trigger zone, which can induce nausea, are blocked by the D2 receptor antagonist domperidone (Figure 1). Its mechanism of action in the GI tract is antagonism of apomorphine- and dopamine-induced changes in GI function. Stimulation of dopaminergic receptors inhibits gastric motility, resulting in symptoms such as postprandial bloating and pain, premature satiety, nausea and vomiting. Dopamine antagonists like domperidone and metoclopramide block this dopaminergic inhibitory effect, resulting in a net increase in acetylcholine release and improved GI motility, with the main effect in the stomach and minimal effects in the proximal small bowel. Unlike metoclopramide, domperidone does not cause CNS side effects since it essentially does not cross the blood-brain barrier, with minimal evidence of presence in the brain3 (Figure 1).

Dopamine is one of the neurotransmitters involved in mediating receptive relaxation of the stomach and dopamine antagonists partially inhibit this mechanism. Even after vagotomy, which decreases gastric motility, domperidone can still improve gastric emptying.4

The prokinetic effects of domperidone have broad implications in the upper GI tract: it has small effects on the amplitude of esophageal contractions, but mainly enhances antro-duodenal contractions and better coordinates peristalsis across the pylorus, resulting in acceleration of slow gastric emptying states.5 It has minimal effects on motility in the duodenum and proximal small bowel.

CLINICAL USES OF DOMPERIDONE
Functional Dyspepsia

According to the Rome III criteria, functional dyspepsia (FD) is considered to consist of two main subgroups: epigastric pain syndrome (EPS) and postprandial distress syndrome (PDS).6 PDS is characterized by early satiety, postprandial fullness, bloating, nausea and even vomiting.7 EPS is dominated by epigastric pain with some components of nausea and fullness.

Functional dyspepsia patients display a variety of abnormal digestive functions: delayed gastric emptying (30% of patients), accelerated gastric emptying (10%), and impaired gastric accommodation after meals (40%).8 Other data suggest that abnormal gastric sensation or visceral hypersensitivity, as well as psychosocial disturbance, can be major determinants of symptom severity, particularly the epigastric pain component.9 The treatment of FD can be confusing because no medication is currently approved in the US, Canada or European Union for this specific indication.10 A reasonable treatment approach based on the current evidence, particularly in the EPS subgroup, is to initiate therapy with a daily proton pump inhibitor in Helicobacter pylori-negative patients. In the PDS subgroup, where symptoms are induced or exacerbated by meals and pain is less prominent, prokinetic therapy would be preferred as an initial trial. In both settings, if symptoms persist, particularly epigastric pain, a therapeutic trial with a tricyclic antidepressant may be considered with the goal of modifying brain-gut hypersensitivity, while another strategy is initiating therapy with an antinociceptive agent such as gabapentin or pregabalin.

Metoclopramide has been the only prokinetic utilized in the United States since its approval by the FDA in the 1980s for treating GERD and diabetic gastroparesis. Domperidone has also been studied for the treatment of FD. To date, 6 meta-analyses describing the effect of domperidone in FD have been published.11 All of them are based on relatively small studies, and their numbers demonstrate superiority of domperidone over placebo in the treatment of FD. The analyzed studies, using a domperidone dose of 30-60 mg/day for a total of 2-6 weeks, demonstrated a treatment effect of 30 to 63%. These data support a treatment benefit for domperidone in FD. However, an important unresolved issue is the short duration of treatment, since FD is a chronic condition. In retrospect, these studies were addressing the PDS subset of patients classified by Rome criteria with primarily dysmotility-like symptoms, and this subgroup should be considered when contemplating initiating domperidone for functional dyspepsia.

Gastroparesis

Gastroparesis is a syndrome characterized by anorexia, bloating, early satiety, abdominal pain and vomiting, and is associated with objective evidence of delayed gastric emptying without any gastric obstruction. Diabetes is a major cause, accounting for up to 30% to 50% of gastroparetic patients. The idiopathic variety is also important and of equivalent frequency; together, both forms constitute more than 80% of all cases of gastroparesis.12

Although delayed gastric emptying is considered the cardinal finding in gastroparesis, it is clear that the pathogenesis of symptoms is complex and diffuse, ranging from impaired fundic accommodation, related to impaired gastric inhibitory neurons,13 to neuropathic changes involving the myenteric plexus,14 sensory nerve dysfunction,15 and gastric dysrhythmias.16

The evidence for using prokinetics is based on trials performed two or three decades ago, which in some cases may not have been rigorously conducted with regard to patient numbers and population assessment.17 The dopamine D2-receptor antagonist metoclopramide is the only US FDA-approved medication for the treatment of gastroparesis, and the recommended duration is no longer than a 12-week period.18 Reported serious adverse events such as tardive dyskinesia, dystonias and parkinsonism are always a “cloud” over the head of metoclopramide when balancing its efficacy. For the more than 40% of patients who are unable to tolerate or do not respond to metoclopramide, domperidone should be the next agent utilized.

The importance of domperidone in the management of gastroparesis is undeniable. Both the American Gastroenterological Association technical review on the diagnosis and treatment of gastroparesis and the association's medical position statement recognize domperidone as an important treatment option for gastroparesis, despite its limited availability in the United States.19

An early study performed in 1997 involving 17 patients with gastroparesis and symptoms of nausea, vomiting, abdominal pain and bloating utilized domperidone 20 mg q.i.d. for an average of 23 months. Results showed a decrease in hospital admissions compared with before domperidone therapy (p < 0.05), improvement in gastric emptying (p < 0.05) and enhanced quality of life in 88% of patients. More recently, a multicenter, two-phase withdrawal study involving 208 insulin-dependent diabetic patients showed that domperidone is effective in treating moderate to severe upper GI symptoms independent of gastric emptying status.20 This study also investigated two health-related quality of life measures, physical and mental component scores. Results at the end of the single-blind phase indicated that patients with a symptomatic response to domperidone also experienced significant improvements from baseline in health-related quality of life as measured by both component scores. Patients continuing on domperidone during the double-blind withdrawal phase maintained their clinical and health-related quality of life gains. In contrast, those in the placebo group experienced more gastroparetic symptoms and a decline in quality of life. Table 1 summarizes the published clinical trials with domperidone. However, trials investigating domperidone have been generally underpowered and often uncontrolled, so results must be interpreted with this caveat.

As the IND protocol is mainly utilized by gastrointestinal specialists in centers with institutional review boards, patients who do not have access to such centers might not be able to obtain domperidone. Conversely, physicians at a university research facility might not have enough patients for a large single-center report. This situation could create a mismatch between patients in need of treatment and the availability of prescribing physicians with IND access.33 Domperidone is also available through compounding pharmacies in the USA or through access to European pharmacies, although this is not sanctioned as a “standard of care.”

The European dose schedule, utilized for more than 30 years, recommends dosing of 10 mg t.i.d. or up to q.i.d. On the other hand, the clinical guidelines on the management of gastroparesis published in 2013,17 focusing on practice in the United States, recommended a starting dose of 10 mg q.i.d. increasing up to 20 mg t.i.d. before meals and at bedtime. A recent study by Ortiz et al. showed that domperidone at very high doses of 80-120 mg/day was well tolerated by the majority of enrolled patients as well as very efficacious for the treatment of gastroparesis and nausea and vomiting, resulting in a 75% symptom improvement from baseline.1

Gastroesophageal Reflux Disease

Gastroesophageal reflux disease (GERD) is the pathological condition in which the amount of gastric contents refluxing into the esophagus exceeds the normal limit. Typical symptoms are heartburn and regurgitation, but the spectrum ranges from asymptomatic disease to atypical chest pain, dysphagia, hoarseness and odynophagia.34 Complications of chronic GERD include esophageal mucosal damage, such as Barrett's esophagus or stricture.

Despite the wide spectrum of abnormalities, three primary goals are applicable to all patients with GERD: 1) alleviation of symptoms, 2) resolution and prevention of complications, 3) prevention of recurrence.35

Proton pump inhibitors (PPI) are the most effective agents to treat GERD when compared to antacids, prokinetics, and H2 receptor blockers. They have few adverse effects and are well tolerated for long-term use. Given the superiority and efficacy of PPIs, treatment of GERD should start with an 8-week course of a PPI. However, PPI monotherapy cannot completely resolve symptoms in all cases of GERD; in this setting, combination therapy with a prokinetic will further improve symptoms for some patients.36 Patients with accompanying “dyspepsia-like” symptoms in addition to GERD are the most responsive to domperidone.

Motility modulating drugs exert their therapeutic effect in GERD by increasing lower esophageal sphincter pressure, enhancing peristaltic contractions, improving esophageal clearance, and accelerating gastric emptying.37

A randomized, double-blind clinical trial by Ndraha evaluated the combination of a PPI with domperidone in the treatment of GERD.38 Sixty patients were enrolled and separated into two groups: group A (30 patients) received omeprazole 20 mg b.i.d. and domperidone 10 mg t.i.d. for 2 weeks, while group B (30 patients) received only omeprazole 20 mg b.i.d.; symptoms were assessed after 2 weeks of treatment using the Frequency Scale for the Symptoms of GERD (FSSG).36 The FSSG score in the omeprazole + domperidone group after treatment (19.3 ± 11.3) was significantly lower than before treatment (26.7 ± 8.9, p < 0.001), and significantly better than in the omeprazole group (from 23.9 ± 7.3 to 19.3 ± 7.9, p < 0.001). The mean improvement score in group A was 7.5 ± 5.9, while in group B it was 4.6 ± 3.3, a statistically significant difference (p = 0.02). The author concluded that the combination of omeprazole with domperidone in highly symptomatic patients with GERD is superior to omeprazole monotherapy.

However, the true clinical efficacy of domperidone remains unconfirmed, as data suggest ineffective healing of esophagitis despite improved symptoms. Ren et al. demonstrated in a recent meta-analysis39 that combined therapy with a PPI and a prokinetic was associated with greater symptomatic relief and a reduction in the number of reflux episodes, but had no significant effect on 24-hour esophageal acid exposure time or on endoscopically demonstrated healing of esophagitis. Their conclusion was that combination therapy can improve quality of life in patients with GERD.

Nausea and Vomiting

The antiemetic effect of domperidone is mediated by inhibition of D2 receptor activation in the area postrema and chemoreceptor trigger zone at the base of the fourth ventricle, which lie outside the blood-brain barrier.40 (Figure 1)

The antiemetic properties of domperidone are well documented. In patients experiencing postoperative nausea and vomiting, IV domperidone was more effective than placebo.41 Neither domperidone nor metoclopramide was more effective than placebo when given prophylactically before induction or near the end of anesthesia for preventing nausea and vomiting.42

Nausea and vomiting associated with chemotherapy have been effectively controlled by domperidone when administered immediately before the cytotoxic regimen. It is more effective than placebo and compares favorably with metoclopramide in controlling vomiting caused by moderately emetic chemotherapeutic agents.43,44 However, this phase of domperidone's career in chemotherapy relied on intravenous administration, which is no longer approved or in use.

Domperidone has also been used to treat nausea and vomiting associated with other conditions including dysmenorrhea, head injury and intracranial lesions, hemodialysis, radiotherapy and migraine headaches. Most of these studies were open trials but did show some efficacy for these indications.3

The management of patients who need anti-Parkinsonian medications and other centrally acting dopamine agonists is often limited by the side effects of nausea, vomiting, anorexia and postprandial fullness. Metoclopramide is contraindicated in Parkinson's disease because by crossing the blood-brain barrier it would antagonize levodopa therapy effects. On the other hand, domperidone is very useful in this setting because it inhibits peripheral dopaminergic activity without blocking central dopamine effects. Studies have shown that oral domperidone, at a dose of 60-150 mg/day, decreases the incidence of nausea and vomiting in patients treated with bromocriptine, allowing them to tolerate higher doses of bromocriptine.45 Domperidone also improved gastric emptying and alleviated GI symptoms including nausea, vomiting, anorexia and abdominal bloating induced by levodopa. The beneficial effects of the anti-Parkinsonian drugs were not inhibited by domperidone, and no extrapyramidal side effects were reported with the use of domperidone.46

Small Bowel Capsule Endoscopy

Small bowel capsule endoscopy (SBCE) was introduced in 2001 and has since revolutionized the diagnostic workup for small bowel diseases.47 One of the major limitations of SBCE is the high percentage of cases in which the capsule does not reach the cecum by the end of the recording period and/or before exhaustion of the capsule's battery life,48 reported in up to 30% of procedures. It has been demonstrated that one of the risk factors for an incomplete SBCE is a long gastric transit time (GTT).49 Hence, there is a rationale for adding prokinetics to the procedure to decrease GTT and thereby potentially increase the rate of complete small bowel examinations.

Different prokinetics have been used in an attempt to increase the completion rate (CR) and the diagnostic yield (DY) of SBCE. Metoclopramide remains the most commonly administered prokinetic. Domperidone has not been widely used in SBCE and the evidence base is limited.50,51 A retrospective study by Koulaouzidis et al.52 analyzed the effect on CR, GTT and DY of giving 10 mg of domperidone in liquid solution with the capsule ingestion, compared to no domperidone. Results showed a higher CR of 91.1% in the domperidone group vs. 84.3% in the other group (p = 0.04). The GTT was reduced in the domperidone group, but not significantly compared to the non-domperidone group. Interestingly, the use of domperidone was associated with reduced DY for vascular, inflammatory and mass lesions. The study demonstrated that domperidone increases the CR of SBCE without increasing DY; the lower yield observed may relate to domperidone's effect on transit and the resulting capsule images.

A prospective study by Westerhof et al.53 analyzed the CR in 649 patients undergoing SBCE; 410 patients received domperidone 10 mg and 239 received erythromycin 250 mg 1 hour before the procedure. Results showed that CR was 86% after erythromycin vs. 80% after domperidone (p = 0.03); GTT was lower after erythromycin compared to domperidone (13 minutes vs. 22 minutes, p < 0.001); however, there was no difference in DY (50% vs. 44%, respectively; p = 0.18). The authors concluded that the administration of erythromycin prior to SBCE increased the CR compared to domperidone; this is explained by the fact that domperidone's motility effects do not extend beyond the duodenum, whereas erythromycin induces diffuse small bowel motility effects.

SAFETY AND TOXICITY
Cardiac Toxicity

Domperidone is regarded as having properties similar to class III antiarrhythmic agents: it prolongs the action potential through blockade of distinct voltage-dependent potassium channels, thus delaying cardiac repolarization and prolonging the QT interval, which can predispose to life-threatening ventricular arrhythmias such as torsades de pointes. The criterion for QT interval prolongation on an electrocardiogram (ECG) is >450 ms in males and >470 ms in females, as QT intervals are longer in women than in men.54 Osborn et al. reviewed the effect of intravenous domperidone in four women, of whom two had episodes of ventricular arrhythmias. Of note, the underlying cause of the ventricular arrhythmias was attributed to hypokalemia.55 The intravenous form of domperidone no longer exists.
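The sex-specific cutoffs cited above can be expressed as a trivial check (illustrative only; the function name and inputs are our own shorthand, and this sketch applies only the thresholds quoted in the text, not a full ECG interpretation):

```python
def qtc_prolonged(qtc_ms, sex):
    """Apply the sex-specific QTc prolongation criteria cited in the text:
    >450 ms in males, >470 ms in females."""
    threshold = 450 if sex == "male" else 470
    return qtc_ms > threshold

qtc_prolonged(460, "male")    # True: exceeds the 450 ms male cutoff
qtc_prolonged(460, "female")  # False: within the 470 ms female cutoff
```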

Based on questions of cardiac safety in Europe, there have been some recommendations for dosing and monitoring. We have therefore performed a comprehensive literature review to analyze the concerns about cardiac events.

A Dutch case-control database study involving 1366 patients assessed the association between sudden cardiac death or sudden ventricular arrhythmia and domperidone use.56 A total of 1366 cases (62 involving sudden ventricular arrhythmia and 1304 sudden cardiac deaths) were matched to 14,114 controls by index date, sex, age, and type of practice. None of the patients who experienced sudden ventricular arrhythmia were using domperidone at the time of the event. The multivariable analysis controlled for QTc-prolonging drugs and medical conditions, smoking, alcohol use and CYP3A4 drug interactions. Among the 1304 patients with sudden cardiac death, only 10 were using domperidone at the time of the event, which translates to a statistically non-significant increased risk of sudden cardiac death (odds ratio [OR] 1.99, 95% confidence interval [CI] 0.80-4.96). When these 10 patients were further stratified by daily dose (<30 mg, 30 mg, and >30 mg), the multivariable analysis showed an increased risk of sudden cardiac death for patients taking more than 30 mg per day (OR 11.4, 95% CI 1.99-65.2).
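For readers interested in how such case-control estimates are derived, an odds ratio with a Woolf-type (log-scale) 95% confidence interval can be computed from a 2x2 table as follows. The counts below are hypothetical, chosen only to illustrate the mechanics; the study's full stratified tables are not reproduced here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Illustrative (hypothetical) counts: equal exposure in cases and
# controls gives OR = 1 with a CI spanning 1, which would be read as
# no significant association.
odds_ratio_ci(10, 90, 10, 90)
```

A CI whose lower bound exceeds 1 (as in the >30 mg/day stratum) indicates a statistically significant association, whereas a CI spanning 1 (as in the overall result) does not.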

A very recently published study by Ortiz et al. did not find an association between the use of very high doses of domperidone (more than three times the European dose schedule) and either an increased risk of cardiovascular events or significant changes in the QT interval.1 That study included 64 patients who were taking domperidone at doses of 80-120 mg/day for a mean duration of 8 months, some for as long as 4 years. Results showed that 73% of the patients had symptomatic improvement in nausea and vomiting; 15.6% of patients had an increase in QTc at follow-up but no cardiovascular events were reported; 5% had palpitations without ECG changes; and there were no sudden cardiac deaths.

Another relevant piece of information is that 2,000,000 prescriptions for domperidone were recorded in Canada in 2013, and between April 2003 and March 2010, 122,333 elderly patients in Ontario, Canada had domperidone on their prescription list. Despite this large number of prescriptions and the available warnings regarding cardiac side effects of domperidone, Health Canada had received only 18 (0.9 per 10,000) reports of serious adverse cardiac events and no deaths. In many of these patients, other risk factors for arrhythmias were also present.57,58

Moreover, to keep this in perspective, we know that other possible therapies for gastroparesis, specifically erythromycin, azithromycin, ondansetron and promethazine also have cardiac side effects.

Our conclusion from extensive literature review of the USA experience is that domperidone has the potential for cardiac side effects based on concerns for QT prolongation and increased risk of ventricular arrhythmias, but studies do not substantiate cardiac adverse events in patients receiving oral administration of domperidone, even at very high doses.

Endocrine Effects

Thyroid-stimulating hormone (TSH) and prolactin increase after domperidone administration, but there is no effect on secretion of cortisol, aldosterone or 18-OHB.59 This indicates that domperidone acts directly on the anterior pituitary rather than through a central hypothalamic mechanism. Unlike metoclopramide, domperidone has lipophobic properties; its effects are therefore not mediated through central dopaminergic receptors. The pituitary lies outside the blood-brain barrier, where domperidone can induce these hormonal effects. Domperidone's endocrine effects on TSH have no clinical significance.

Prolactin is increased in everyone on domperidone. Comparative studies have reported similar degrees of increased serum prolactin concentrations in healthy subjects receiving domperidone or oral metoclopramide.60 However, only a minority of patients report symptoms: gynecomastia and nipple tenderness in 10% and galactorrhea in 5%. There is no association with prolactinomas or an increased risk of breast cancer. Another observation is oligomenorrhea and, rarely, amenorrhea, although fertility remains unchanged. This is relevant since 80% of patients with gastroparesis are female; these side effects are regarded as more inconvenient than meaningful, and patients who benefit from domperidone are generally willing to accept them.61

Drug-drug Interactions

Dopamine antagonists should not be given in conjunction with monoamine oxidase inhibitors. Stimulation of D2 receptors causes inhibition of norepinephrine release from presynaptic nerve terminals. Antagonists of D2 receptors cause decreased inhibitory control, facilitating the release of norepinephrine.62

The main concern is the combination of domperidone with cytochrome CYP3A4 inhibitors. This enzyme is the main metabolic pathway for domperidone, so medications that interfere with it must be avoided. Inhibitors of CYP3A4 can block the metabolism of domperidone, resulting in increased plasma concentrations and a correspondingly increased risk of cardiovascular and endocrine side effects. These medications include azole antifungals (ketoconazole, fluconazole), protease inhibitors, macrolide antibiotics, calcium channel blockers, propranolol, metoprolol, HMG-CoA reductase inhibitors and newer anticoagulants such as apixaban.

TAKE HOME MESSAGES

Domperidone has been available for the treatment of gastrointestinal motility disorders throughout the world since the 1970s. Unfortunately, domperidone is not easily available in the United States since the FDA withheld approval in 1989 due to borderline statistical significance, related to sample size, in the controlled clinical trials; it is mainly available through an IND process. Its efficacy relies on an anti-emetic effect from blocking D2 receptors centrally, as well as a prokinetic effect from blocking peripheral dopamine receptors in gastric smooth muscle. It has an effective role in the treatment of gastric motility disorders, especially in patients who do not respond to diet modification, or who develop side effects or have an inadequate response to metoclopramide. Interpreting the clinical significance of the concerns raised in the literature regarding cardiotoxicity with domperidone requires ongoing vigilance. While there are reports of QTc prolongation and cardiovascular events related to the use of low-dose domperidone, most studies and clinical experience do not confirm this association. Moreover, data with prolonged dosing at 3 times the European dose show no evidence of ventricular arrhythmias or cardiac death.

A dilemma has been created by the statements made by authorities in some countries regarding the cardiotoxicity of domperidone. However, our extensive review does not support the conclusions made by these international agencies. At the present time, domperidone is an extremely effective treatment for gastroparesis and other disorders with nausea and vomiting, and it has an acceptable safety profile and risk-benefit ratio.

Our recommendations and guidelines for physicians who plan on initiating domperidone therapy in their practices are the following: 1) confirm the patient has a condition that would benefit from antiemetic and gastric prokinetic therapy; 2) document no response to, or the presence of side effects from, metoclopramide and other anti-nausea/vomiting medications; 3) confirm no QTc prolongation; 4) start at a dose of 20 mg q.i.d., 30 minutes before meals and before bedtime, to achieve a meaningful effect, and if necessary increase gradually until a therapeutic effect is achieved, sometimes requiring 120 mg/day; 5) treat for a minimum of 3 months at the recommended doses before drawing conclusions about efficacy; 6) monitor other drug use to avoid CYP3A4 inhibitors; 7) monitor serum potassium and magnesium levels; 8) obtain an ECG every 6 months; 9) discontinue domperidone if the QTc interval becomes prolonged; 10) inquire about symptoms such as palpitations or chest pain.
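The pre-initiation checks above (no QTc prolongation, no CYP3A4 inhibitor, normal potassium and magnesium) can be sketched as a simple screening function. This is an illustrative sketch only, not a clinical tool: the QTc cutoffs (450 ms for men, 470 ms for women) and the electrolyte reference ranges are commonly cited laboratory values assumed here, not taken from the article.

```python
def ok_to_start_domperidone(qtc_ms, sex, on_cyp3a4_inhibitor,
                            potassium_meq_l, magnesium_mg_dl):
    """Illustrative pre-initiation screen for domperidone therapy.

    Thresholds are assumed common reference values, not from the
    source article. Returns (eligible, list_of_problems).
    """
    # Assumed QTc cutoffs: 450 ms (male), 470 ms (female)
    qtc_limit = 450 if sex == "M" else 470
    problems = []
    if qtc_ms >= qtc_limit:
        problems.append("prolonged QTc")
    if on_cyp3a4_inhibitor:
        problems.append("CYP3A4 inhibitor in use")
    if not (3.5 <= potassium_meq_l <= 5.0):  # assumed normal range, mEq/L
        problems.append("potassium out of range")
    if not (1.7 <= magnesium_mg_dl <= 2.2):  # assumed normal range, mg/dL
        problems.append("magnesium out of range")
    return (len(problems) == 0, problems)
```

The same checks would be repeated at follow-up, mirroring recommendations 6-9 above (avoid CYP3A4 inhibitors, monitor electrolytes, obtain an ECG every 6 months, and stop the drug if the QTc becomes prolonged).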

Domperidone, although not Dom Perignon, is indeed the "champagne" of the prokinetic/antiemetic drug world, and we hope this article will allow you to appreciate its clinical indications, efficacy and, most of all, safety, so that your patients can benefit when you institute this agent in your practice. So "raise your glasses for a toast": you have now acquired a new knowledge base for your practice.

A CASE REPORT

Schistosomal Proctitis

A 42-year-old male expatriate from Yemen was referred to the gastroenterology outpatient clinic with a six-month history of intermittent rectal bleeding. He had daily bowel movements but noted occasional rectal bleeding. There was no history of diarrhea, abdominal pain, weight loss or fever. His physical examination was unremarkable and baseline laboratory investigations were normal. Colonoscopy revealed a 5-6 mm sessile polyp in the rectum with surrounding inflammation marked by erythema and loss of vasculature (Figure 1). The remainder of the examination to the terminal ileum was normal. The polyp was removed with biopsy forceps without any complications.

Histology showed colonic mucosa with a mixed inflammatory cell infiltrate, composed of plasma cells and eosinophils surrounding the crypts in the lamina propria (Figure 2). An egg of Schistosoma mansoni, with a tapered anterior end and a lateral spine near the posterior end, was also seen. This is shown at higher magnification in Figure 3. Schistosomal proctitis was diagnosed and subsequently treated with praziquantel.

Schistosomiasis is a trematode infection caused by the blood fluke Schistosoma, which is endemic to many parts of Africa, Asia and South America. It affects 200 million people worldwide and causes 200,000 deaths each year.1 Studies have shown that its prevalence is increasing in Europe and the United States due to an increasing number of travelers and immigrants to and from these areas.2 Humans, the definitive host, are infected through contact with fresh water harboring freshwater snails, which act as the intermediate host. There are three major species of Schistosoma that infect humans: S. haematobium, S. mansoni and S. japonicum. S. haematobium is associated with urogenital pathology, while S. mansoni and S. japonicum cause intestinal and hepatic infection. S. mansoni, the main colonic pathogen, resides in the mesenteric veins around the colon, where it produces large numbers of eggs. Some of the eggs make their way to the colonic lumen and are cleared in the stool. Others are deposited in the colonic mucosa, causing ulceration and polyp formation through a cell-mediated immune response and granulomatous reaction. This in turn leads to the main symptoms of colonic schistosomiasis, i.e. abdominal pain, bloody diarrhea and tenesmus.3 Colonic schistosomiasis can be diagnosed by finding eggs in the stool in acute cases, or in tissue biopsies in chronic cases where egg excretion is scant. Schistosomal polyps are seen mainly in patients from endemic areas with chronic disease. The rectum is the preferred site for these polyps, but they have been reported in the sigmoid and transverse colon as well.4 Treatment of schistosomiasis is with the anthelmintic praziquantel, at a usual dose of 40-60 mg/kg in divided doses.
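The weight-based dosing mentioned above (40-60 mg/kg in divided doses) is simple arithmetic. The helper below is an illustrative sketch of that calculation only, not a dosing tool; the default of two divided doses is an assumption for the example, as the text does not specify the number of doses.

```python
def praziquantel_doses(weight_kg, mg_per_kg=40, n_doses=2):
    """Split a weight-based total (40-60 mg/kg per the text) into
    equal divided doses. Illustrative arithmetic only."""
    if not 40 <= mg_per_kg <= 60:
        raise ValueError("mg_per_kg outside the 40-60 mg/kg range cited")
    total_mg = weight_kg * mg_per_kg
    return total_mg, total_mg / n_doses

# e.g., a 70 kg patient at 40 mg/kg in 2 divided doses:
# total of 2800 mg given as two 1400 mg doses
```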

Gastroenterologists and primary care physicians should keep schistosomiasis in mind as one of the possible etiologies when evaluating patients with colorectal symptoms and exposure to endemic areas.

NUTRITION ISSUES IN GASTROENTEROLOGY, SERIES #141

Trophic Agents in Treatment of Short Bowel Syndrome

An important goal when treating the short bowel syndrome (SBS) patient who requires parenteral nutrition or fluid support is to reduce dependency on this support and, whenever possible, to eliminate its use altogether. There is great interest in the use of growth factors in patients with SBS who have been unable to achieve enteral independence during the adaptive period despite optimization of diet and medical management. A number of pharmacological agents have been demonstrated to induce trophic properties on the intestinal epithelium. In Part V, the final part of this series on SBS, we will focus on somatropin, a recombinant human growth hormone, and teduglutide, a recombinant human glucagon-like peptide-2 analogue, the currently approved trophic factors available for use as aids to wean parenteral support in SBS.

John K. DiBaise, MD, Professor of Medicine, Division of Gastroenterology and Hepatology, Mayo Clinic, Scottsdale, AZ. Carol Rees Parrish MS, RD, Nutrition Support Specialist, University of Virginia Health System Digestive Health Center of Excellence, Charlottesville, VA

INTRODUCTION

An otherwise healthy 45-year-old man underwent laparoscopic cholecystectomy for acute cholecystitis. His postoperative course was complicated by an undetected vascular injury causing widespread bowel necrosis requiring extensive resection, leaving him with about 90 cm of jejunum anastomosed to his transverse colon. He has been on home parenteral nutrition (PN) since September 2011. Previous attempts to wean him from the PN have stalled at 4 nights per week despite adherence to aggressive dietary and pharmacologic strategies. Although he has experienced no complications from the PN, and is otherwise doing well, he desires to be off PN due to its untoward effects on his quality of life.

Are there any other non-surgical treatment options that may allow him to further wean and potentially eliminate his need for PN?

While life saving, PN use in short bowel syndrome (SBS) is associated with a reduction in quality of life and a number of complications arising from not only the PN, but also the catheter used to infuse the PN. These complications may include catheter-related bloodstream infections and venous thrombosis, metabolic bone disease, liver disease, and renal failure (See Part I in this series). An important goal when treating the SBS patient who requires parenteral support (i.e., PN or intravenous fluids [IVF]) is to reduce dependency on this support and, whenever possible, eliminate its use altogether. PN requirements decrease as the bowel adapts following resection allowing greater nutrient and fluid absorption. Over 50% of adults with SBS can be weaned completely from PN within 5 years of diagnosis.1,2 In contrast, the probability of eliminating PN use is < 6% if not successfully accomplished in the first 2 years following the individual’s last bowel resection.1 A number of clinical factors may serve as useful predictors of the success of eliminating the use of PN in SBS (Table 1). The presence of a colon and the remaining length of functional small bowel are the most critical factors. Permanent need of PN generally occurs when there is < 50-70 cm of small bowel with colon-in-continuity or < 100-150 cm of small bowel when the colon is absent.1

Following the 2-3 year period of greatest intestinal adaptation after massive resection, a homeostatic/maintenance stage begins during which no further spontaneous intestinal adaptation is thought to occur. Intestinal failure is frequently considered permanent when PN is required beyond this stage. There is great interest in the use of growth factors in patients with SBS who have been unable to achieve enteral independence during the adaptive period despite optimization of diet and medical management. The current understanding of the adaptation process has led to the study of hormones, nutrients, and growth factors in experimental models and in humans with SBS. A number of pharmacological agents have been demonstrated to induce trophic properties on the intestinal epithelium in animal models of SBS. These encouraging reports have been followed by conflicting reports of efficacy in humans regarding the enhancement of intestinal absorption, adaptive changes to the gut and utility in PN weaning. This review will focus on somatropin (Zorbtive™; Serono Inc., Rockland, MA), a recombinant human growth hormone, and teduglutide (Gattex®; NPS Pharmaceuticals, Bedminster, NJ), a recombinant human glucagon-like peptide-2 analogue, as they are both Food and Drug Administration (FDA)-approved for use as aids to wean parenteral support in SBS patients.

Growth Hormone

Growth hormone (GH) has been shown to promote crypt cell proliferation, mucosal growth, collagen deposition, and mesenchymal cell proliferation via insulin-like growth factor-1 and suppressor of cytokine signaling-2. Enhanced intestinal absorption has repeatedly been demonstrated in animal models of SBS, while there have been conflicting reports in humans. In 1995, Byrne et al. reported on 47 patients, most of whom had a colon-in-continuity, treated with a combination of GH, oral glutamine and an optimized SBS diet for 3 weeks in a controlled inpatient-like setting, followed by continued use of the diet and glutamine.3 With follow-up for as long as 5 years, they showed that 40% of patients could be weaned completely from PN while another 50% could make significant reductions in their PN use.4 With these reports, the concept of intestinal rehabilitation was introduced.5,6 In a more recent uncontrolled, prospective case series, Zhu and colleagues used a similar treatment program and demonstrated very similar, long-lasting results.7 A phase III prospective, randomized, placebo-controlled trial conducted at 2 centers was subsequently performed. Forty-one PN-dependent SBS patients (most with colon-in-continuity) were enrolled and studied in an inpatient-like setting for 6 weeks; 2 weeks of diet and medication (i.e., antidiarrheal and proton pump inhibitor) optimization and PN stabilization were followed by a 4-week treatment period. Patients were randomized into 3 groups: somatropin (0.10 mg/kg subcutaneously once daily) with glutamine, somatropin without glutamine, and placebo with glutamine. A significant reduction in PN requirements (the primary endpoint), including PN volume, PN energy and frequency of PN infusions, was seen in both groups treated with GH at the end of the 4-week treatment period (Table 2).
The extent of reduction, however, was greatest in the group receiving somatropin in combination with glutamine.8 PN requirements remained significantly reduced during a 12-week observation period in the somatropin with glutamine group only; importantly, a weight loss of about 5 kg was also observed in this group. Although treatment was generally tolerated, peripheral edema and musculoskeletal complaints were common in the somatropin-treated groups. On the basis of this evidence, and the safety of the treatment program, the FDA approved the use of somatropin in December 2003 as a short-term (4 weeks) aid for PN weaning in patients with SBS. To date, somatropin has not been approved by the European Medicines Agency for this indication.

Despite the reports of success with GH, 3 randomized, controlled nutrient balance studies found conflicting evidence with respect to nutrient and wet weight absorption9-11 using this combination of somatropin and glutamine (but without diet or conventional medication optimization). This has led to a considerable amount of skepticism surrounding the long-term benefits of this approach and its clinical use remains controversial.12 Additionally, side effects of somatropin including peripheral edema, arthralgias and carpal tunnel syndrome are significant, further limiting its adoption into clinical practice. Concern exists about a potential increased risk of colorectal cancer in patients receiving somatropin if required to be administered over a longer period of time.13 Finally, there is also concern about the feasibility of replicating the results of the pivotal trial in an ambulatory setting without the same daily monitoring and counseling provided. Clearly, admitting a patient for 4 weeks to optimize diet, hydration and medical therapy and administer somatropin would be rather challenging in the present healthcare environment.

Contraindications, Precautions and Costs Associated with Somatropin Use

Somatropin is contraindicated in patients with active neoplasia and in those who are acutely critically ill. It has been associated with acute pancreatitis, impaired glucose tolerance, type 2 diabetes mellitus, carpal tunnel syndrome and arthralgias. In the U.S., the cost of a 4-week course of somatropin is approximately $20,000.28 An economic analysis of healthcare costs associated with GH use estimated a 2-year savings of $85,474, assuming that 34% of GH-treated patients eliminated PN use within 6 weeks of treatment and 31% remained PN-free after 2 years.29 However, remember that patients in the clinical trial were studied in an inpatient setting (albeit not a hospital) and received daily visits and education/counseling, costs not factored into the dollar amount mentioned above. How the use of this agent will translate into the ambulatory setting, where visits and supervision will not be as intensive, is unknown. It has not been widely adopted into clinical practice more than a decade after its approval. Furthermore, the role of repeated course(s) or prolonged treatment with somatropin requires additional study.

Glucagon-like Peptide-2

Glucagon-like peptide-2 (GLP-2), secreted from distal small intestine and proximal colonic mucosal L-cells in response to luminal nutrients, plays an important role in intestinal adaptation. GLP-2 administration induces epithelial proliferation in the stomach, small bowel and colon by stimulating crypt cell proliferation and inhibiting enterocyte apoptosis, increases absorptive capacity and inhibits gut motility and secretion.14-18 In a small, open-label trial investigating the effects of GLP-2 in humans with SBS, 8 patients received 400 µg GLP-2 subcutaneously twice daily for 35 days.19 Of the 8 patients studied, 4 had a portion of colon-in-continuity and were receiving PN while the other 4 did not have a colon and did not require PN. An increase in overall energy absorption, decrease in fecal wet weight, slowing of gastric emptying and nonsignificant trend toward increased jejunal villus height and crypt depth were demonstrated.

Teduglutide, a recombinant, degradation-resistant, longer acting GLP-2 analogue, was shown in an open-label study to be safe, well-tolerated and intestinotrophic, and to significantly increase intestinal wet weight absorption, but not energy absorption, in 16 SBS patients with an end-jejunostomy or a colon-in-continuity.20 Teduglutide was then studied in two phase III multinational, randomized, double-blind, placebo-controlled trials that included a total of 169 PN or parenteral fluid-requiring SBS patients in an outpatient setting (Table 2). A habitual diet was followed by patients in both trials. Notably, only about one-half of the subjects used antidiarrheal and antisecretory medications during the studies. In the first study, 83 SBS patients were separated into 3 treatment arms (placebo, 0.05 mg/kg/d and 0.10 mg/kg/d administered subcutaneously once daily) and treated with the study medication for 6 months following a PN optimization period. PN weaning was the primary endpoint (20% reduction by week 20 and maintained to week 24). Teduglutide was found to be safe and well tolerated; however, only the lower teduglutide dose significantly reduced PN requirements (46% for 0.05 mg/kg/d versus 6% for placebo), and 3 patients were completely weaned from PN.21 There was a strong trend towards overall reduction in fluid volume at the end of treatment in the teduglutide treated groups compared to placebo (2.5 L/wk vs. 0.9 L/wk, respectively; P=0.08). Parenteral energy intake, while much lower than baseline, was not significantly different from placebo at the end of 24 weeks of treatment (P=0.11).
Villus height, plasma citrulline concentration and lean body mass were significantly increased in the teduglutide groups compared to placebo; no evidence of dysplasia in the intestinal samples was detected.22 After stopping teduglutide at the end of the 24 week treatment period, some patients (15/37) required an immediate increase in their fluids while others (22/37) seemed to maintain their fluid requirements and body weight.23 Indicators of sustained fluid reduction and maintenance of body weight included a longer length of the remaining small bowel and the presence of at least a portion of colon, lower body mass index at baseline, and lower PN volume reduction while on teduglutide (i.e., they were already receiving lower volume of parenteral support at baseline).

During long-term (an additional 28 weeks) treatment of 52 patients from the original 24-week study, at week 52, 68% of the 0.05-mg/kg/d and 52% of the 0.10-mg/kg/d dose group had a ≥ 20% reduction in PN, with a reduction of 1 or more days of PN dependency in 68% and 37%, respectively.24 Those treated with the lower dose showed continued decrease in parenteral volume requirements (4.9 L/wk); 4 patients achieved complete independence from PN. Overall, it appears that long-term treatment is associated with continued improvement.

The second trial compared the lower dose of teduglutide to placebo administered for 6 months in 86 adult SBS patients and utilized a more aggressive PN weaning strategy (10-30% vs. 10% reductions at 2-weekly intervals, starting at week 2 vs. week 4).24 Once again, a significant benefit of teduglutide over placebo was seen (Table 2). Those receiving teduglutide were more than twice as likely to respond to therapy (63% versus 30%, P=0.02). The mean reduction in PN volume after 24 weeks was 4.4 L in the teduglutide group compared to 2.3 L in the placebo group. Fifty-four percent of those receiving teduglutide reduced at least 1 PN infusion day/week compared with 23% for placebo. No subjects were completely weaned from PN at the end of 24 weeks of treatment. In a preliminary report from a 2-year extension study, 65 patients (74%) completed the study. Of the 30 patients treated for 30 months with teduglutide, 28 (93%) made significant reductions in their parenteral support with a mean decrease of 7.6 L/wk, and 21 (70%) eliminated at least 1 infusion day.26 A total of 15 of the 134 (11%) patients treated in both phase III studies and their extension studies achieved enteral autonomy.27 Most of these patients had a portion of colon-in-continuity and lower baseline PN/IVF requirements. Due to the small numbers, a formal statistical analysis for predictive factors was not possible.

The most frequent gastrointestinal side effects reported in both trials included abdominal pain, nausea, stomal changes (in those with an ostomy), abdominal distension and peripheral edema; resolution occurred with treatment continuation or temporary discontinuation in most instances. Data from the extension studies suggest a tolerable safety profile, with abdominal pain, injection site reactions and stomal complaints being most common.24 Although anti-teduglutide antibodies have been demonstrated in the blood of treated patients, they appear to be non-neutralizing and have not been shown to decrease the effect on PN volume reduction.25 Thus, it appears that long-term teduglutide treatment is associated with acceptable tolerability and continued improvement. On the basis of the data from these two pivotal trials, teduglutide was approved by the United States FDA in December 2012 and by the European Medicines Agency in 2012 (Revestive®; Nycomed, Zurich, Switzerland) for SBS patients as a long-term aid to PN weaning.

Contraindications, Precautions and Costs Associated with Teduglutide Use

The only contraindication to teduglutide use is active GI neoplasia. In patients with active, non-GI neoplasia, use should only be considered if benefits outweigh the risks. Precaution is necessary, however, due to a number of potential adverse effects, including increased fluid absorption and the potential for fluid overload; the potential to increase drug absorption, requiring dosage reduction and drug monitoring when using medications with a narrow therapeutic window or that require titration (e.g., benzodiazepines, opioids, psychotropics); and the risk of acceleration of neoplastic growth within the gut, requiring periodic colonoscopic surveillance before and during its use (6 months before, 1 year after, and at least every 5 years thereafter). Additional monitoring for gastrointestinal obstruction and gallbladder, biliary and pancreatic disease (amylase, lipase, alkaline phosphatase and total bilirubin before and every 6 months while using) is also advised as part of the risk evaluation and mitigation strategy (REMS; https://www.gattex.com/hcp/rems.aspx) program required of prescribers (see Table 4 for one institution's monitoring form). Given the average annual cost of $295,000 associated with teduglutide use in the U.S., appropriate patient selection will be important to determine the proper place for this therapy in the management of the PN-requiring SBS patient. Of note, in the U.S., the cost to the individual is generally much lower as a result of insurance coverage and patient support programs that provide financial assistance for out-of-pocket expenses. One other interesting predicament that may need to be considered, particularly in those patients weaned entirely from PN support while using teduglutide, is the potential for insurance denial of coverage of continued use of teduglutide since the patient is no longer using PN; letter writing and/or phone calls to the insurance company may be needed in this situation.
It is important to recognize that the reduction in costs associated with PN use as weaning progresses will offset some of the cost associated with the use of teduglutide. Finally, some clinically important outcomes that defy accounting may come in the form of a dramatically improved quality of life as a result of decreased stool output, preserved hepatic function due to less PN dependence, and even an intestinal transplant avoided. Although these outcomes are difficult to quantify, to SBS patients and the clinicians who care for them, they are worthy goals.

Practical Approach to PN Weaning

The eligibility criteria used in the phase III clinical trials only provide a guide to aid in determining which patients should be considered for trophic factor use. At present, these agents should not be used in the pediatric population outside of the setting of a clinical research study. Suitable patients include those with SBS who have neither obstructive nor active GI malignant disease and who are dependent upon PN or IVF support despite optimization of diet, oral fluids and adjunctive medications (see Parts II, III, IV A & B of this series). They should also be nutritionally optimized and in fluid balance. Furthermore, the patient should be motivated, with a desire to reduce or discontinue the parenteral support. The presence/absence of a colon and the length of the remaining small bowel do not necessarily factor into the selection of appropriate candidates, and virtually any bowel anatomy can be considered. Table 3 lists factors to consider when determining whether to enlist a trophic factor in the care of the patient with SBS.

Prior to weaning, regardless of the use of a trophic factor, it is important for the SBS patient to recognize that the ‘trade-off’ to not being on PN is the need to take several medications orally and increase the amount of food and fluid ingested daily. Major lifestyle changes and increased out-of-pocket expenses are generally required. Consequently, patient education regarding the care plan (e.g., diet and drugs to be used and PN weaning plan) and ongoing support is important to enhance compliance. This is best done in the setting of a multidisciplinary practice with healthcare providers experienced in the care of SBS patients.

Before PN weaning begins, as previously mentioned but important to reemphasize, the SBS patient's diet, fluid intake and conventional antidiarrheal and antisecretory medications should be optimized. Additionally, certain criteria should be met before reducing PN, including meeting the daily calorie and fluid intake goals established for the patient. Frequent follow-up is necessary, with subsequent PN reductions based on tolerance as determined by the development of symptoms, hydration status, electrolytes and weight.30 A useful approach to monitoring hydration status is to maintain the urinary sodium concentration > 20 mEq/L, daily urinary volume > 1 L (on PN-free nights) and enteral balance (oral fluid intake minus stool output) between 500 and 1000 mL/d. Monitoring stool and urine output is cumbersome; however, SBS patients attempting to wean PN tend to be highly motivated. Providing the patient with tools to measure both stool and urine, as well as a diary to record this information for review and discussion at the office, via e-mail, or over the phone, is helpful.31
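The hydration targets just described (urinary sodium > 20 mEq/L, urine volume > 1 L/day on PN-free nights, enteral balance of 500-1000 mL/day) amount to three threshold checks. The sketch below simply encodes those thresholds for illustration; the function and parameter names are hypothetical, and it is not a substitute for clinical judgment.

```python
def weaning_targets_met(urine_na_meq_l, urine_vol_ml_day,
                        oral_intake_ml_day, stool_output_ml_day):
    """Check the hydration targets described in the text:
    urinary Na > 20 mEq/L, urine volume > 1 L/day (on PN-free
    nights), and enteral balance (oral intake minus stool output)
    between 500 and 1000 mL/day. Illustrative only."""
    enteral_balance = oral_intake_ml_day - stool_output_ml_day
    return (urine_na_meq_l > 20
            and urine_vol_ml_day > 1000
            and 500 <= enteral_balance <= 1000)
```

A patient's diary values could be run through such a check at each review; any failed target would prompt holding further PN reductions rather than proceeding.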

Once daily subcutaneous injection is required for the use of either somatropin or teduglutide. As injection site reactions are relatively common, rotating the injection site among the abdomen, thigh and upper arm is recommended. The injection should be administered at about the same time each day. The patients should be aware of the precautions necessary with the use of these medications and be instructed on the proper monitoring for complications and what to do/whom to contact should a problem occur. There are no data on the use of these agents in the presence of octreotide, biologic agents or immunosuppressant therapies.

PN reductions can be made by either decreasing the days that PN is infused per week or by decreasing the daily PN infusion volume equally throughout the week (e.g., 10%-30% reduction).30 Patients tend to prefer the former; however, dehydration is less of a potential concern with the latter. The teduglutide studies used the latter approach while the phase III somatropin study used the former. An optimal interval for making PN reduction decisions has not been defined. At most, in the ambulatory setting, once weekly would seem appropriate, while acknowledging that this needs to be individualized. A recent report recommended obtaining laboratory studies weekly with an office visit monthly until parenteral requirements are stable, after which the frequency of monitoring and visits can be reduced.31 Once PN infusions are < 3 d/week, a trial of PN discontinuation is suggested. Although the occasional patient may successfully discontinue PN without the gradual weaning strategy, this approach is not recommended for the SBS patient who has been receiving PN for an extended period of time.

Oral micronutrient supplementation becomes necessary as PN is weaned and levels require periodic monitoring. Electrolyte supplementation, usually magnesium and/or potassium and sometimes bicarbonate, may also be needed and require monitoring. The frequency of monitoring will depend upon the stage of PN weaning and the presence of existing or prior deficiencies.30 Periodic laboratory monitoring will need to continue indefinitely, even in those weaned completely from PN.

CONCLUSION


An important goal in the treatment of SBS is to improve enteral autonomy, thereby reducing and, occasionally, eliminating the need for PN or IVF support. Following optimization of diet, hydration and conventional pharmacological strategies (and occasionally surgical reconstructive procedures), the use of trophic factors has the potential to bring about further reductions.

The currently available agents include somatropin, a recombinant human GH, and teduglutide, a recombinant human GLP-2 analogue. Both agents, while quite different in terms of duration of use, cost and adverse
effects, have been shown in randomized, placebo-controlled trials to facilitate weaning from parenteral support. Long-term safety and efficacy, timing of administration in relation to the onset of SBS, optimal patient selection for use, duration of treatment and cost effectiveness of both somatropin and teduglutide strategies will require further study.


FRONTIERS IN ENDOSCOPY, SERIES #18

Avoiding Misses and Near Misses: Improving Accuracy in EUS


Even expert endoscopists can miss significant lesions during an EUS examination. In this article, we discuss the challenging aspects of EUS, analyze potential causes for misses and near misses in EUS, and suggest how the endosonographer can minimize these occurrences.

David L. Diehl MD,1 Douglas G. Adler MD, FACG, AGAF, FASGE,2 1Geisinger Medical Center, Department of Gastroenterology and Nutrition, Danville, PA; 2Gastroenterology and Hepatology, University of Utah School of Medicine, Salt Lake City, UT

“Real knowledge is to know the extent of one’s ignorance.” – Confucius

“The first step in avoiding a trap is knowing of its existence.” – Thufir Hawat (Dune, Frank Herbert)

INTRODUCTION

Endoscopic ultrasound (EUS) has proven to be an indispensable tool for diagnosis and management of a wide variety of disorders. Mastery of EUS takes considerable time, dexterity, effort, and perseverance. A steady and wide-ranging caseload of EUS procedures is critical for maintaining competence and the development of new skills in this field. Even expert endoscopists can miss significant lesions during an EUS examination.

One of the most challenging aspects of EUS is that scope manipulation is driven by the real-time gray-scale images on the EUS screen, and not the view on the color video monitor used for standard endoscopic procedures. Evaluation and interpretation of ultrasound images are unfamiliar to most endoscopists, particularly in the United States, because formal training in transabdominal ultrasound imaging is not part of their GI curriculum, as it might be in Europe or Asia.

Manipulation of the echoendoscope is driven by the images on the EUS monitor, and the usual “up, down, left, right” movement will not produce the same effect as it would on the endoscopic video monitor. Mastery of the movements necessary to optimize EUS imaging is “as much done by the cerebellum as by the cerebrum”, and for real experts, the scope movements seem to be commanded subliminally.

Obtaining comprehensive EUS images of a given area of interest takes practice and skill, but the influence of factors that cannot be controlled by the endoscopist should not be underestimated. A patient's body habitus, luminal anatomy, and the anatomic variations encountered from one patient to another must be factored into the ability of any endosonographer to arrive at the correct diagnosis. It must be recognized that the ability of EUS to visualize lesions is imperfect, even in the hands of the most experienced endosonographer. There are many potential pitfalls for EUS and EUS-FNA, which have been exhaustively summarized.1

Rates of adverse events (AEs) with EUS are remarkably low, even with the addition of fine needle aspiration. However, missing a diagnosis of malignancy remains one of the most feared complications of EUS. In contrast to ERCP, in which AEs are often acts of commission (for example, actively doing things that cause pancreatitis or a perforation), AEs in EUS are most commonly acts of omission ("missing things").

It should be stressed that all EUS operators are human, and thus imperfect. Even the most skilled and experienced endosonographer can miss a clinically significant lesion.

The consequences of missing a cancer diagnosis are among the most difficult in all of procedural gastroenterology. However, missing other findings (even non-malignant ones), can result in increased health care costs and patient discomfort or inconvenience and can create medicolegal issues for physicians. We recognize that EUS, like all human endeavors, is an imperfect science and that even the most experienced endosonographer can miss a critically important finding. In this article, we will analyze potential causes for “misses and near misses” in EUS, and suggest how the endosonographer can minimize these occurrences.

Patient Dependent Factors in EUS Imaging of The Pancreas

Certain patient-dependent factors may have a significant impact on the ability to visualize the pancreas (Table 1). In patients with increased intra-abdominal fat, pancreatic imaging may be very challenging, as the pancreas might be hyperechoic due to fatty infiltration, or atrophic, which may be related to diabetes and/or metabolic syndrome. Fatty infiltration of the pancreas can occur in patients with a normal body habitus as well. Dense shadowing from inflammation, scarring, or calcifications in chronic pancreatitis can leave large portions of the pancreas, typically the head and genu, unseen. In the immediate period following acute pancreatitis, inflammatory changes can mask significant findings or be misinterpreted as a mass. EUS evaluation of the pancreas is often pursued in cases of idiopathic recurrent acute pancreatitis, but many endosonographers will wait several weeks before performing a diagnostic EUS to allow these inflammatory changes to resolve; EUS of the pancreas performed just after an episode of acute pancreatitis can often be a fruitless endeavor.

Usually the presence of a biliary stent does not interfere with the ability to see a pancreatic mass with EUS, although in unusual cases, ultrasonographic shadowing in the pancreatic head from a biliary stent may make it difficult to see or fully delineate a small mass lesion. The literature is mixed on the influence of an indwelling biliary stent on pancreatic cancer staging accuracy.3,4 Three studies have looked at cytologic yields of EUS-FNA of pancreatic masses after a stent has been placed. The concern is that cytology yields could be diminished due to the aforementioned stent effect (which could affect targeting of tissue for FNA sampling), or that false-positive cytology could result. One of the 3 studies demonstrated a diminished yield of FNA in the presence of a stent,5 while another showed more indeterminate results if the biliary stent was placed less than 1 day prior to EUS.6 The third study compared EUS-FNA in patients who had previously received a metal or plastic biliary stent.7 The yields were very high and comparable in both groups (all patients had stents), and a single false-positive FNA was found in a patient with a plastic stent.

One of the few available studies to analyze factors accounting for missed pancreatic masses was the No EndoSonographic detection of Tumor (NEST) study.7 In this retrospective analysis, 9 expert endosonographers identified 20 patients in whom a pancreatic neoplasm was missed. The goal was to understand the factors that led to the missed diagnosis. Twelve of the 20 missed malignancies had EUS features of chronic pancreatitis, again emphasizing the limitations of EUS in this setting. Three patients had diffusely infiltrating carcinoma, which was not mass forming. Other unusual causes of missed pancreatic neoplasms were a "prominent ventral dorsal split" in two patients and recent acute pancreatitis in one patient.

We have encountered cases of "invisible" pancreatic masses, in which a caliber change of the bile and/or pancreatic ducts was the sole clue to the presence of malignancy (Figure 1).8 In other cases, the mass is "isoechoic" and subtle. An example of a patient with pancreatic adenocarcinoma that mimicked the usual difference in echogenicity seen between the dorsal and ventral anlagen in the uncinate process is shown in Figure 2. In this patient, the mass lesion was missed and the images were initially interpreted as showing a normal uncinate process.

The “Hidden” Areas in Pancreatic Imaging

Given the relationship of the pancreas to the adjacent stomach and duodenum, the entire gland can be reproducibly imaged by EUS. However, the endosonographer must ensure that the whole pancreas has, in fact, been imaged. While this sounds simple, in practice it can be difficult to be certain that the entire gland has been seen.

The distal extremity of the pancreatic tail, often in very close approximation to the splenic hilum, can be easily overlooked or incompletely imaged, particularly if another finding has caught the eye (Figure 3). Even concerted attempts to evaluate the entire tail can result in an incomplete examination in some patients.

The pancreatic neck (near the region of the portal confluence) is an area that some feel is better seen with the linear echoendoscope,9 although data on this point are limited and it is wholly acceptable to view this area with a radial scope. When using a radial scope, some feel that this region may be better imaged by tracing the pancreas back from the pancreatic head, rather than pushing the scope forward from the pancreatic body towards the head of the pancreas.

The uncinate process may be hard or impossible to reach in some patients due to variations in gastric or duodenal anatomy, and, again, it can be difficult to ensure that the entire uncinate process has been seen. Inability to visualize all parts of the pancreas must be distinguished from reporting that “nothing abnormal was seen in the pancreas”, even though these two disparate concepts seem similar.

Operator Dependent Factors in EUS Imaging

There are also endoscopist-related factors that may affect the quality of an EUS examination (Table 2). Operator experience is probably the most important one. The variation in EUS imaging from patient to patient and the wide range of anatomic locations that must be familiar to the endosonographer are unlike any other facet of advanced endoscopic practice. The endoscopist must maintain an organized approach to EUS imaging to avoid missing pathologic findings.

Doing a so-called "directed examination," i.e., performing EUS just to FNA a pancreatic mass that was identified on a pre-procedure CT scan without thoroughly evaluating the rest of the gland, may lead to missing important additional findings, such as liver metastases, the presence of ascites, malignant adenopathy, and other important pathology (Figures 4,5).

Competence and comfort in using both the radial and linear echoendoscope is vital to procedural success as one or both instruments may be required in a single patient. For example, the mediastinum is often easier to fully scan with the radial instrument than the linear, which requires careful, staged rotation to perform a complete exam (Figures 6,7). With experience, and the willingness to spend a little more time, a comprehensive examination can be conducted with the linear echoendoscope.

Endosonographer, (Attempt To) Know Thyself

In studying learning, social psychologists have identified factors that can influence the perception of visual information. A person's motivational state, with regard to their wishes and preferences, can influence their processing of visual stimuli.10 Humans often "see what they want to see," despite clear objective findings. An example might be an endosonographer minimizing the findings on EUS because, for example, they do not "want the patient to have cancer," or have already concluded that the findings are not consistent with malignancy (Figure 8). An endosonographer who is overscheduled may also minimize a particular finding because of the additional time burden associated with performing a fine needle aspiration. Overscheduling can also lead to rushing or distraction that could result in a missed lesion.

Another psychological phenomenon that could influence the quality of an EUS examination is the so-called “Dunning-Kruger effect”.11 This describes a cognitive bias in which unskilled individuals mistakenly overestimate their abilities in a given field. Their very lack of skill leads them to fail to recognize that they lack skill, and tends to inflate their perception of their own competence. In another way of phrasing it, they “don’t know what they don’t know.”

An EUS trainee who works with an expert that provides active proctoring (rather than passive teaching) may be less likely to fall victim to this effect. One example of active proctoring would be to help the student learn how to find relevant anatomy, and then the mass, which is the target of the examination, rather than just positioning the endoscope at a mass and then relinquishing the endoscope to the trainee so that he or she can perform the FNA. However, also according to the Dunning-Kruger effect, the expert who finds EUS “easy” may mistakenly assume that it must be “easy” for others and, in turn, may be a less effective teacher because of this.

Recognize When Imaging is Less Than Optimal

Document any limitations in your ability to conduct a complete examination. Some patients have gastric and/or duodenal anatomy that makes it very challenging to position the echoendoscope properly in order to obtain adequate images. Obvious situations that can preclude a complete EUS examination include gastric bypass surgery or anatomy after reconstructive surgery (Billroth I or II, pancreaticoduodenectomy, etc.). An asymptomatic mild post-bulbar stenosis may make it impossible to safely advance the echoendoscope to the deep duodenum. Gentle dilation of a post-bulbar duodenal stricture with a 14mm or 15mm TTS balloon can frequently make the difference in obtaining complete pancreatic imaging. We have also encountered cases in which deep duodenal intubation was achieved early in the examination but subsequently could not be repeated. The reason for this phenomenon is unclear, but we suspect insufflated air in the stomach and proximal small bowel changes the orientation of the patient's anatomy in a dynamic way that makes repeated deep intubation difficult.

It is good to get into the habit of adequately documenting the EUS anatomy with endosonographic images, although no standard of care exists as to how many images should be obtained and exactly what structures must be photographed, and individual practice varies widely. Of note, some sites performing EUS do not have the ability to record EUS images, and this should not be considered a violation of the standard of care either. With newer EUS consoles, there is the capability of transferring DICOM (digital imaging and communications in medicine) images to the hospital-wide PACS (picture archiving and communication system), and this has proven invaluable, for example, at multidisciplinary tumor board meetings.

Annotated photos help illustrate what the endosonographer was seeing, because almost no one else is likely to be able to make definitive sense of the EUS images after the fact. Insufficient visualization is a reality of EUS and should be appropriately documented.

Recognize When Repeat or Complementary Imaging is Required

Further imaging may be necessary if adequate endosonographic imaging is impossible or is felt to be incomplete or inadequate, or for any of the other reasons mentioned above (Figure 9). In some cases a repeat EUS will prove useful; for example, if initial FNA is negative but the index of suspicion for malignancy is high, repeat EUS-FNA is generally warranted.12 Complementary imaging with CT or MRI may also be a good option in some cases. A "cancer until proven otherwise" approach is prudent for the EUS specialist. One must realize that patients referred for pancreatic EUS imaging are part of a pool "enriched" for pancreatic neoplasia, so a high index of suspicion is the best approach. This applies even to patients referred for a CT read simply as "fullness of the head of the pancreas," a slightly elevated CA 19-9, or idiopathic acute pancreatitis. It is better to expect to find cancer than to be surprised by it.

CONCLUSION

Practitioners of endoscopic ultrasound never cease to be amazed at the ability of this modality to visualize and access structures in the chest, abdomen, and pelvis, and have a good sense as to when it will be helpful. One of our roles is to educate our colleagues on how EUS is an indispensable diagnostic modality, and is having a larger role in therapeutics as well. However, we must honestly confront the limitations of EUS in any given patient or anatomical situation, and recognize how to handle this. Those who understand the art of EUS realize that we have arrived here by “standing on the shoulders of giants”, and we must lend a shoulder for those that follow us who want to learn endoscopic ultrasound.


INFLAMMATORY BOWEL DISEASE: A PRACTICAL APPROACH, SERIES #94

Mucosal Healing as an Emerging Therapeutic End-Point in Inflammatory Bowel Disease


In the last few years mucosal healing (MH) has emerged as an important therapeutic goal for patients with inflammatory bowel disease. Growing evidence suggests that MH can improve patient outcomes and potentially alter the natural course of the disease. In this article, we discuss the important questions that remain to be answered: How should MH be defined? What is the impact of current treatments in healing mucosal lesions in CD and UC? What is the real impact of MH on the clinical course of IBD? In other words, is MH a valid surrogate end-point of disease outcome?

In the last few years mucosal healing has emerged as an important therapeutic goal for patients with inflammatory bowel disease. Growing evidence suggests that mucosal healing can improve patient outcomes and, potentially, alter the natural course of the disease. However, several questions remain to be answered: a validated definition of mucosal healing is lacking; the effect size of different drugs is difficult to assess because of differing definitions, study designs, and timing of endoscopic evaluation; and, finally, the evidence that mucosal healing has a high positive predictive value for long-term good clinical outcome is still limited. For these reasons it is still uncertain how mucosal healing should be used in everyday clinical practice. Future studies are needed to answer the most important questions: whether mucosal healing should be systematically assessed in all patients and whether treatment strategies should be targeted to achieve complete mucosal healing.

Claudio Papi MD, Giovanna Condino MD, Giovanna Margagnoni MD, Annalisa Aratari MD, Gastroenterology & Hepatology Unit, S Filippo Neri Hospital, Rome, Italy

INTRODUCTION

The management of inflammatory bowel disease (IBD) has traditionally been aimed at improving symptoms, and little attention has been focused on healing of mucosal lesions. Nevertheless, mucosal healing (MH) has always been considered important in ulcerative colitis (UC), but not in Crohn's disease (CD). In fact, since the 1960s, clinical studies have suggested that the long-term outcome in UC patients after a steroid course was more favorable in those who achieved both clinical and endoscopic remission compared to those who achieved only clinical remission.1 Up to the late 1990s, other observational studies reported the lack of a similar correlation in patients with CD. In particular, these studies described the absence of a clear impact of mucosal healing on relapse rates in CD patients with steroid-induced clinical remission.2 These observations led clinicians to limit their CD treatment focus to symptomatic improvement and remission, thereby abandoning the idea that MH could affect the natural course of the disease.

The attitude of clinicians towards MH changed drastically when anti-TNFα drugs entered IBD clinical practice. For the first time, it was thought possible to achieve rapid and possibly sustained healing of mucosal lesions in CD as well.3 Since then, interest in MH, in both UC and CD, has grown so much that there is now a trend towards considering MH a relevant end-point in clinical trials and a desirable goal in clinical practice, and some authors have suggested that future studies should focus on MH as the primary outcome measure.4

In recent years, accumulating evidence suggests that MH is prognostically relevant. In a Norwegian population-based cohort, MH was found to have a significant impact on the long-term outcome of both CD and UC.5 Observational studies and subgroup analyses of randomized controlled trials in CD indicate that MH is associated with lower relapse rates, higher corticosteroid-free remission rates, reduced hospitalizations and a reduced need for surgery, at least in the medium term.6-8 Moreover, achieving MH may reduce the risk of relapse after infliximab (IFX) therapy is stopped.9 In UC as well, MH has been associated with more favorable outcomes such as a reduced risk of relapse, fewer hospitalizations and lower colectomy rates.10-13 Moreover, the persistence of mucosal inflammation, even in the absence of symptoms, has recently been associated with an increased risk of colorectal cancer in UC; therefore, targeting medical treatment beyond clinical remission could possibly reduce the incidence of cancer.14

Although compelling arguments suggest that MH may be associated with improved disease outcome, the most important question for clinical practice is whether we should systematically assess MH in all patients and target our treatment strategies to achieve not only clinical remission but also complete healing of endoscopic lesions. Several questions arise from this issue: How should MH be defined? What is the impact of current treatments in healing mucosal lesions in CD and UC? What is the real impact of MH on the clinical course of IBD? In other words, is MH a valid surrogate end-point of disease outcome?15

Definition of Mucosal Healing

There is a wide range of possible endoscopic lesions in IBD but, to date, there is no standardized definition of MH.

In CD, MH has been defined in a simple and pragmatic manner as "absence of ulcerations at follow-up endoscopy in patients who had ulcerations at baseline".6 This definition may be simple for clinical practice, but it is too rigid and does not consider patients with substantial endoscopic improvement but persistence of some mucosal lesions. Endoscopic scores, such as the Crohn's Disease Endoscopic Index of Severity (CDEIS) or the Simple Endoscopic Score for Crohn's Disease (SES-CD), are generally restricted to clinical trials. They are complex to calculate, require training and expertise, and are not suitable for routine clinical practice.16 Moreover, these scores were initially conceived as continuous-variable systems, and no agreement on cut-off values to define MH exists. In fact, in various studies, different cut-off values for the CDEIS and SES-CD have been used to define MH.16

For UC, there are several endoscopic scores. All are quite similar, but none of them has ever been properly validated.17 One of the most widely used is the Mayo Endoscopic Score, which combines several variables (erythema, vascular pattern, friability, bleeding, erosions, and ulcerations) in a 4-point scale, as follows: 0 = normal or inactive disease; 1 = mild disease (erythema, decreased vascular pattern, mild friability); 2 = moderate disease (marked erythema, absent vascular pattern, friability, erosions); 3 = severe disease (spontaneous bleeding, ulcerations).18 The International Organization of Inflammatory Bowel Disease (IOIBD) has proposed a definition of MH in UC as "absence of friability, blood, erosions, ulcers in all visualized segments".17 This definition corresponds to a Mayo score of 0 or 1 and is simple to use in clinical practice.
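To make the relationship between the 4-point scale and the IOIBD-style cut-off concrete, the categories above can be encoded as a small lookup. This is only an illustrative sketch; the function and dictionary names are ours and do not come from any validated instrument.

```python
# Illustrative encoding of the Mayo endoscopic subscore categories described
# in the text (names are hypothetical, for illustration only).

MAYO_DESCRIPTIONS = {
    0: "normal or inactive disease",
    1: "mild disease (erythema, decreased vascular pattern, mild friability)",
    2: "moderate disease (marked erythema, absent vascular pattern, friability, erosions)",
    3: "severe disease (spontaneous bleeding, ulcerations)",
}

def meets_ioibd_mh(mayo_score):
    """MH per the IOIBD-style definition in the text: Mayo score of 0 or 1."""
    if mayo_score not in MAYO_DESCRIPTIONS:
        raise ValueError("Mayo endoscopic subscore must be 0-3")
    return mayo_score <= 1
```

Under this sketch, a score of 0 or 1 counts as mucosal healing, while 2 or 3 does not, mirroring the distinction between complete (score = 0) and partial (score ≤ 1) MH discussed below.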

A precise definition of MH would be of critical importance. In order to be useful in clinical terms, any definition should carry a prognostic value; in particular, it is debated whether the definition of MH should be reserved solely for cases of complete healing of mucosal lesions or whether, less strictly, MH could be defined as a clear improvement of mucosal lesions without complete mucosal restitutio ad integrum. In a retrospective study on a large cohort of CD patients treated with IFX, no difference in the long-term need for major abdominal surgery was observed between patients who achieved complete MH (absence of ulcerations) and those who achieved partial MH (clear improvement of mucosal lesions, but still with ulcerations).6 Similarly, in UC, a sub-study of the ACT 1 and ACT 2 trials showed that early MH with IFX was associated with a reduced risk of colectomy within 1 year, but colectomy-free survival was similar in patients who achieved complete MH (Mayo score = 0) or partial MH (Mayo score ≤ 1).12 Taken together, these observations suggest that a distinction between complete and partial MH may not be relevant in clinical terms, but further studies are needed.

Current Treatments for IBD and Mucosal Healing

Several drugs currently used in the management of IBD are capable of inducing MH across different clinical settings of disease location and severity. However, the effect size of different treatments and the duration of the effect (short-term or sustained MH) are difficult to assess because of differing definitions, study designs, and timing of endoscopic evaluation.19

Aminosalicylates are the first-line treatment for mild to moderate UC. Their efficacy in inducing and maintaining clinical remission has been demonstrated in several randomized controlled trials.20,21 Several data support the capacity of both oral and topical aminosalicylates to also induce MH in mild to moderately active UC. In studies using different oral 5-ASA doses and formulations, and considering different definitions of MH at different time points, the percentage of patients achieving endoscopic remission ranged from 25% to 70%.19 In a recent meta-analysis involving 3,977 patients treated with oral 5-ASA and 2,513 patients treated with rectal 5-ASA, the overall rate of MH, according to different definitions, was 36.9% in patients receiving oral 5-ASA and 50.3% in patients receiving rectal 5-ASA.22 Optimizing the oral dose and combining oral and topical aminosalicylates may increase the rate of endoscopic remission in the short term, up to 75%-80%, or approximately 30% when MH is defined as completely normal mucosa (Mayo score = 0).10,23-24

Corticosteroids are the gold standard for the treatment of active moderate to severe IBD. Despite their excellent capacity to induce rapid symptomatic improvement and clinical remission in the short term, it has long been known that these drugs have little impact on mucosal lesions. Historical trials showed that approximately one third of patients with CD and UC with corticosteroid-induced remission also achieve endoscopic remission,1,25 and similar rates of clinical and endoscopic remission have recently been reported in a prospective study of 157 UC patients receiving their first steroid course.11

The immunomodulators azathioprine (AZA) and 6-mercaptopurine (6MP) are usually considered effective in inducing MH in CD, even though it is well known that these drugs take a long time to achieve their potential benefits. However, evidence for AZA-induced MH in CD is very limited, deriving from a few small studies performed in different clinical settings, in which rates of MH, defined by different criteria, ranged from 36% to 70% within 12-42 months of continuous treatment.26-28 Recently, the SONIC study investigated the effect of a combination of AZA plus IFX vs. AZA or IFX monotherapy in patients with moderate to severe CD who were naïve to immunosuppressive or biologic therapy. The primary outcome was steroid-free remission at week 26, and MH was a secondary end-point evaluated in a subgroup of 309 patients assessed endoscopically at baseline and after 6 months of therapy. MH was achieved in only 16.5% of patients receiving AZA monotherapy.29 As far as UC is concerned, data are very limited. In a small prospective study in patients with steroid-dependent UC, 55% of patients receiving AZA achieved clinical and endoscopic remission within 6 months.30 Taken together, these data suggest that AZA and 6MP may induce MH in a variable proportion of patients with CD and UC, but the slow action of these drugs is their major limitation.

In the last 15 years, the advent of anti-TNFα agents has raised treatment expectations beyond symptomatic remission. Indeed, the first observations suggested that, compared to conventional therapies, anti-TNFα agents were very effective in inducing and, possibly, maintaining MH.3 Nevertheless, MH in anti-TNFα-treated patients has not been systematically studied, and available data derive from subgroup analyses of RCTs and observational cohort studies.

In CD, subgroup analyses of RCTs suggest that scheduled IFX every 8 weeks can induce MH in approximately 30% of patients in the short term, 50% in the long term, and sustained MH (short- and long-term) in approximately 30% of patients.8,29 Real-life experiences report similar data.6 Scheduled adalimumab (ADA) maintenance (40 mg every other week) can induce and maintain complete MH in approximately 25% of patients.31

In UC, scheduled IFX every 8 weeks can induce and maintain MH in approximately 30% to 50% of patients, depending on the definition of MH (Mayo score = 0 vs. Mayo score ≤ 1).13,32 Studies with ADA in UC report short-term MH in approximately 40% of patients and one-year MH in approximately 25% of patients.33,34 Real-life experiences with ADA in UC report partial MH (Mayo score ≤ 1) in approximately 50% of responders and complete MH (Mayo score = 0) in approximately 25% of patients after a median of 11 months.35

Is MH a Valid Surrogate End-Point of Disease Outcome?

Although evidence suggests that MH may improve disease outcome in terms of sustained remission and reduced complications, hospitalization and surgery, it is a surrogate end-point of disease course, and an important point for discussion is whether MH is a valid surrogate end-point of disease outcome. A surrogate end-point is an outcome measure that per se has no direct clinical importance but reflects a clinically relevant outcome. The greatest potential for the validity of a surrogate end-point exists when the surrogate lies in the only causal pathway of the disease process and the therapeutic effect on the true outcome is mediated through its effect on the surrogate.36 It is difficult to find this ideal setting in a complex and multifactorial disease such as IBD, and surrogate end-points could therefore yield misleading conclusions.15

Apart from the difficulties in establishing the validity of MH as a surrogate end-point of disease outcome, there are other debated issues, first of all whether histology should be included in the evaluation of MH. Although theoretically appealing, the prognostic relevance of histological healing has not been extensively evaluated, but some data suggest that histological healing is relevant in UC, as microscopic inflammation, even without gross endoscopic lesions, is predictive of disease relapse in patients who are in clinical and endoscopic remission.37,38 Moreover, some studies indicate that ongoing microscopic inflammation is an independent risk factor for colorectal cancer in long-standing UC.39,40 Although the effect of different drugs on microscopic inflammation in UC has not been extensively studied, histological healing may, theoretically, be the ultimate therapeutic goal in UC. Conversely, in CD there is the issue of the appropriateness of MH as a relevant end-point in the treatment of a disease that is typically transmural. Fistulizing CD has taught a lesson about the inadequacy of superficial healing in a transmural condition: closure of the external orifices of fistulas can be achieved despite persistence of the fistulous tracks.41 For this reason, the concept of MH in CD is evolving towards a more complex model of intestinal healing with the development of an instrument, the so-called Lemann score, which should enable an assessment of cumulative structural bowel damage.42 The score includes not only endoscopy but also cross-sectional imaging techniques and, in the near future, it could be used in clinical trials and observational studies to measure the progression of bowel damage over time and to assess the effects of treatment on that progression.

Take Home Messages

1. In the last years, the therapeutic goals of IBD have changed from mere control of symptoms towards long-term strategies aimed at affecting the natural course of the disease, and MH is an emerging end-point in this setting.

2. Although accumulating evidence suggests that MH is associated with improved disease outcome, it remains a weak surrogate end-point of disease course, and further studies are needed to prospectively assess the impact of MH on long-term clinical outcomes.

3. There is currently no standardized definition of MH, and further studies are needed to develop and validate a definition of MH that carries a clear prognostic value.

4. It is still uncertain how MH should be used in clinical practice. Although corticosteroid-free remission remains the first therapeutic goal in IBD, appropriate use and optimization of conventional and biological strategies may result in short-term and sustained MH in a variable proportion of patients. The most important question for clinical practice is whether we should systematically assess MH and target our treatment strategies to achieve MH in all IBD patients.

5. MH is likely not ready to be the primary therapeutic end-point in clinical practice, but it should be considered in decision-making. Although the optimal management of a patient in clinical remission but with persistent endoscopic lesions is unclear (there are no prospective studies showing that escalation of therapy or switching to an alternative agent is associated with better outcomes in this setting), assessment of MH may be useful to select patients in sustained clinical remission in whom withdrawal of immunosuppressive or biologic therapy could be considered while minimizing the risk of relapse.







