Research Spotlight

Posted July 15th 2018

Direct comparison of the thioacetamide and azoxymethane models of Type A hepatic encephalopathy in mice.

Victoria A. Jaeger D.O.

Grant, S., M. McMillin, G. Frampton, A. D. Petrescu, E. Williams, V. Jaeger, J. Kain and S. DeMorrow (2018). “Direct comparison of the thioacetamide and azoxymethane models of Type A hepatic encephalopathy in mice.” Gene Expr Jun 12. [Epub ahead of print].

Acute liver failure is a devastating consequence of hepatotoxic liver injury that can lead to the development of hepatic encephalopathy. There is no consensus on the best model to represent these syndromes in mice, and therefore the aim of this study was to characterize the hepatic and neurological consequences of azoxymethane- and thioacetamide-induced liver injury. Azoxymethane-treated mice were euthanized at time points representing absent, minor, and significant neurological decline. Thioacetamide-treated mice had tissue collected up to 3 days following daily injections. Liver histology, serum chemistry, bile acids, and cytokine levels were measured. Reflexes, grip strength, and ataxia were assessed in all groups. Brain ammonia, bile acid levels, cerebral edema, and neuroinflammation were measured. Finally, in vitro and in vivo assessments of blood-brain barrier function were performed. Serum transaminases and liver histology demonstrated that both models generated hepatotoxic liver injury. Serum proinflammatory cytokine levels were significantly elevated in both models. Azoxymethane-treated mice had progressive neurological deficits, while thioacetamide-treated mice had inconsistent neurological deficits. Bile acids and cerebral edema were increased to a higher degree in azoxymethane-treated mice, while cerebral ammonia and neuroinflammation were greater in thioacetamide-treated mice. Blood-brain barrier permeability was present in both models but was likely not due to direct toxicity of azoxymethane or thioacetamide on brain endothelial cells. In conclusion, both models generate acute liver injury and hepatic encephalopathy, but the requirement of only a single injection and the more consistent neurological decline make azoxymethane treatment the better model of acute liver failure with hepatic encephalopathy.


Posted July 15th 2018

Donor predicted heart mass as predictor of primary graft dysfunction.

Shelley A. Hall M.D.

Gong, T. A., S. M. Joseph, B. Lima, G. V. Gonzalez-Stawinski, A. K. Jamil, J. Felius, H. Qin, G. Saracino, A. E. Rafael, P. Kale and S. A. Hall (2018). “Donor predicted heart mass as predictor of primary graft dysfunction.” J Heart Lung Transplant 37(7): 826-835.

BACKGROUND: Concern over the hazards associated with undersized donor hearts has impeded the utilization of otherwise viable allografts for transplantation. Previous studies have indicated that predicted heart mass (PHM) may provide better size matching in cardiac transplantation than total body weight (TBW). We investigated whether size-matching donor hearts by PHM is a better predictor of primary graft dysfunction (PGD) than matching by TBW. METHODS: Records of consecutive adult cardiac transplants performed between 2012 and 2016 at a single-center academic hospital were reviewed. We compared patients implanted with donor hearts undersized by ≥30% with those implanted with donor hearts matched for size (within 30%), and performed the analysis both for undersizing by PHM and for undersizing by TBW. The primary outcome was moderate/severe PGD within 24 hours, according to the 2014 International Society for Heart and Lung Transplantation consensus. The secondary outcome was 1-year survival. RESULTS: Of 253 patients, 21 (8%) and 30 (12%) received hearts undersized by TBW and PHM, respectively. The overall rate of moderate/severe PGD was 13% (33 patients). PGD was associated with undersizing by PHM (p = 0.007) but not with undersizing by TBW (p = 0.49). One-year survival did not differ between groups (log-rank, p > 0.8). Multivariate analysis confirmed that undersizing donor hearts by PHM, but not by TBW, was predictive of moderate/severe PGD (OR 3.3, 95% CI 1.3 to 8.6). CONCLUSIONS: Donor hearts undersized by ≥30% by PHM may increase rates of PGD after transplantation, confirming that PHM provides more clinically appropriate size matching than TBW. Better size matching may ultimately allow for expanding the donor pool.
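For readers unfamiliar with how PHM-based matching works in practice, a minimal Python sketch follows. It assumes the MESA-derived, sex-specific predicted ventricular mass equations commonly cited alongside PHM in the transplant literature (Reed et al., 2014); the coefficients, helper names, and example patients are illustrative assumptions, not values or code from this study.

```python
# Illustrative sketch of PHM-based donor-recipient size matching.
# Equations below are the MESA-derived formulas commonly cited with PHM
# (Reed et al., 2014); coefficients and example patients are assumptions
# for illustration, not values taken from this study.

def predicted_lv_mass(height_m: float, weight_kg: float, male: bool) -> float:
    """Predicted left ventricular mass (g); height in meters, weight in kg."""
    a = 8.25 if male else 6.82
    return a * height_m ** 0.54 * weight_kg ** 0.61

def predicted_rv_mass(age_yr: float, height_m: float, weight_kg: float, male: bool) -> float:
    """Predicted right ventricular mass (g)."""
    a = 10.59 if male else 8.74
    return a * age_yr ** -0.32 * height_m ** 1.135 * weight_kg ** 0.315

def predicted_heart_mass(age_yr: float, height_m: float, weight_kg: float, male: bool) -> float:
    """PHM = predicted LV mass + predicted RV mass."""
    return (predicted_lv_mass(height_m, weight_kg, male)
            + predicted_rv_mass(age_yr, height_m, weight_kg, male))

def percent_undersized(donor_phm: float, recipient_phm: float) -> float:
    """Percent by which the donor heart is undersized relative to the
    recipient; positive values mean the donor PHM is below the recipient's."""
    return 100.0 * (recipient_phm - donor_phm) / recipient_phm

# Hypothetical pairing: small female donor, large male recipient.
donor = predicted_heart_mass(age_yr=25, height_m=1.60, weight_kg=55, male=False)
recipient = predicted_heart_mass(age_yr=55, height_m=1.85, weight_kg=95, male=True)
mismatch = percent_undersized(donor, recipient)
print(f"Donor PHM {donor:.0f} g, recipient PHM {recipient:.0f} g: "
      f"undersized by {mismatch:.0f}%")
if mismatch >= 30:
    print("Would fall in the study's undersized-by-PHM group (>=30%)")
```

In this hypothetical pairing the donor heart comes out roughly 40% undersized by PHM, which under the study's definition would place the recipient in the undersized group even though a TBW comparison alone might not flag the mismatch.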


Posted July 15th 2018

Additional Arterial Conduits in Coronary Artery Bypass Surgery: Finally Coming of Age.

Michael J. Mack M.D.

Gaudino, M., M. J. Mack and D. P. Taggart (2018). “Additional Arterial Conduits in Coronary Artery Bypass Surgery: Finally Coming of Age.” J Am Coll Cardiol 71(25): 2974-2976.

In April 1968, Rene Favaloro published his first description of coronary artery bypass graft (CABG) surgery. In the 50 years since then, CABG has arguably been the most intensively studied surgical procedure. One of the most important and persistent controversies has been the ideal choice of conduits for revascularization and, in particular, whether the use of multiple arterial grafts leads to significantly improved long-term outcomes. Over the past 5 decades, a substantial amount of observational data reporting the beneficial effects of multiple arterial grafts has been published. The overwhelming majority of series reported a survival advantage, using predominantly either internal thoracic or radial arteries. Notably, even with propensity matching, these studies were almost exclusively based on retrospective observational data, and until less than 2 years ago no adequately powered, comparative, randomized trial had been published. The better outcomes associated with arterial grafts are hypothesized to result from their superior angiographic patency. Randomized trials and a network meta-analysis have consistently shown arterial conduits to have better mid- and long-term patency rates than saphenous vein grafts, providing a likely mechanistic explanation for the improved outcomes associated with the use of arteries. (Excerpt from this editorial; no abstract available.)


Posted July 15th 2018

Additional arterial conduits in coronary artery bypass surgery: Finally coming of age.

Michael J. Mack M.D.

Gaudino, M., M. J. Mack and D. P. Taggart (2018). “Additional arterial conduits in coronary artery bypass surgery: Finally coming of age.” J Thorac Cardiovasc Surg Jun 8. [Epub ahead of print].

In April 1968, Rene Favaloro published his first description of coronary artery bypass graft (CABG) surgery. In the 50 years since then, CABG has arguably been the most intensively studied surgical procedure. One of the most important and persistent controversies has been the ideal choice of conduits for revascularization and, in particular, whether the use of multiple arterial grafts leads to significantly improved long-term outcomes. Over the past 5 decades, a substantial amount of observational data reporting the beneficial effects of multiple arterial grafts has been published. The overwhelming majority of series reported a survival advantage, using predominantly either internal thoracic or radial arteries. Notably, even with propensity matching, these studies were almost exclusively based on retrospective observational data, and until less than 2 years ago no adequately powered, comparative, randomized trial had been published. The better outcomes associated with arterial grafts are hypothesized to result from their superior angiographic patency. Randomized trials and a network meta-analysis have consistently shown arterial conduits to have better mid- and long-term patency rates than saphenous vein grafts, providing a likely mechanistic explanation for the improved outcomes associated with the use of arteries. (Excerpt from this editorial; no abstract available.)


Posted July 15th 2018

Critical Event Intervals in Determining Candidacy for Intravenous Thrombolysis in Acute Stroke.

John S. Garrett M.D.

Garrett, J. S., S. Sonnamaker, Y. Daoud, H. Wang and D. Graybeal (2018). “Critical Event Intervals in Determining Candidacy for Intravenous Thrombolysis in Acute Stroke.” J Clin Med Res 10(7): 582-587.

Background: The aim of this study was to determine the optimal set points for the critical event benchmarks described in stroke guidelines and to validate the ability of these goals to predict successful administration of intravenous thrombolysis within 60 min of hospital arrival. Methods: This was a retrospective cohort analysis of patients with acute ischemic stroke who received intravenous thrombolysis following presentation to the emergency department. The national benchmarks for the time intervals associated with completion of the critical events required to determine candidacy for thrombolysis were evaluated for their ability to predict successful administration of thrombolysis within 60 min of hospital arrival. Optimal time interval cut points were then estimated using logistic regression and receiver operating characteristic (ROC) curve analysis and compared with the guideline benchmarks. Results: Of the 523 patients included in the analysis, 229 (43.8%) received intravenous thrombolysis within 60 min of hospital arrival. Of the patients who met the critical event interval goals described in guidelines, only 51.6% received thrombolysis within 60 min. The optimized cut points suggested by the regression analysis aligned with the guideline benchmarks, with the only substantial difference being a shortened goal of 19 min from arrival to the start of neuroimaging. This difference did not impact the overall predictive value. Conclusion: The critical event benchmarks derived by logistic regression in this study closely correlate with those described in the AHA/ASA acute stroke guidelines.
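To make the cut point methodology concrete, below is a minimal Python sketch of how a time-interval threshold can be derived with logistic regression and ROC analysis, in the spirit of the approach the abstract describes. The data are synthetic and the variable names (e.g., door_to_imaging_min) are hypothetical; the sketch does not reproduce the study's analysis or its 19-minute benchmark.

```python
# Sketch: deriving a time-interval cut point via logistic regression and
# an ROC curve, in the spirit of the analysis described above.
# All data here are synthetic; nothing below comes from the study itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 500

# Synthetic door-to-imaging times (minutes) and a simulated outcome:
# shorter imaging times make door-to-needle <= 60 min more likely.
door_to_imaging_min = rng.gamma(shape=4.0, scale=6.0, size=n)
p_within_60 = 1.0 / (1.0 + np.exp(0.15 * (door_to_imaging_min - 22.0)))
tpa_within_60 = rng.binomial(1, p_within_60)

# Fit a logistic regression of the outcome on the candidate interval.
X = door_to_imaging_min.reshape(-1, 1)
model = LogisticRegression().fit(X, tpa_within_60)
scores = model.predict_proba(X)[:, 1]

# ROC analysis: pick the probability threshold maximizing Youden's J
# (sensitivity + specificity - 1), a common "optimal cut point" criterion.
fpr, tpr, thresholds = roc_curve(tpa_within_60, scores)
best_p = thresholds[np.argmax(tpr - fpr)]

# Invert the fitted model to express the threshold as a time in minutes.
b0, b1 = model.intercept_[0], model.coef_[0, 0]
cut_point_min = (np.log(best_p / (1.0 - best_p)) - b0) / b1
print(f"Suggested door-to-imaging cut point: {cut_point_min:.0f} min")
```

Youden's J is only one way to choose a cut point; criteria such as fixing sensitivity at a target level would give different thresholds, which is why derived benchmarks are validated against observed door-to-needle performance, as this study does.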