Peer-Reviewed

Human Visualization of Brain Tumor Classifications Using Deep CNN: Xception + BiGRU

Received: 27 September 2021    Accepted: 9 October 2021    Published: 19 October 2021
Abstract

Brain tumors have become a medical priority as the number of people suffering from this malignant disease grows worldwide. In computer science, researchers have been working to use MRI scans to their fullest potential, recognizing signs of tumors early and applying convolutional neural networks to process large volumes of patient data at once in the hope of saving lives. This investigation examines how MRI scans can be visualized and how filters and layers can be used to identify lethal tumors in the brain. One of our main methods used a pre-trained model, Xception, to improve accuracy; in contrast to previously existing models, fully connected layers were appended to the back of the pre-trained network. Our main proposed model, Xception + Bidirectional GRU, achieved the highest accuracy, 82%, among the 7 models compared. In the proposed model, convolutional layers extract specific features from an image and process other, similar images in the same way. Using 3 blocks of convolution, activation, and max pooling, we observed the network focus on the actual tumors in the brain by distinguishing patterns in the images and concentrating on those regions to create visual representations. A principal contribution of this research is the ability to visualize abnormal features of brain scan images, filtering and layering regions to bring attention to tumors in the brain.
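
The proposed architecture can be made concrete with a short sketch in Keras. The code below is a minimal illustration under assumptions, not the study's released implementation: an ImageNet-pretrained Xception backbone produces a 10 x 10 x 2048 feature map (for its default 299 x 299 input), which is read row by row by a bidirectional GRU, followed by the fully connected layers added to the back of the network. The layer widths, dropout rate, optimizer, and the use of Keras itself are assumptions for illustration; the 4 output classes follow the Kaggle brain tumor MRI dataset listed in the references.

# Minimal sketch (assumed, not the authors' released code) of an
# Xception + Bidirectional GRU classifier. Layer widths, dropout,
# optimizer, and the 4-class output are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # glioma, meningioma, pituitary tumor, no tumor
IMG_SIZE = (299, 299)    # Xception's native input resolution

# ImageNet-pretrained Xception used as a frozen feature extractor.
backbone = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
backbone.trainable = False

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = backbone(x, training=False)             # (batch, 10, 10, 2048) for 299x299 inputs
# Read the 10 spatial rows of the feature map as a 10-step sequence so the
# recurrent layer can scan it in both directions.
x = layers.Reshape((10, 10 * 2048))(x)
x = layers.Bidirectional(layers.GRU(128))(x)
x = layers.Dropout(0.3)(x)
x = layers.Dense(64, activation="relu")(x)  # fully connected layers added to the back
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",   # one-hot labels assumed
              metrics=["accuracy"])
model.summary()

Reading the spatial rows as a sequence is one common way to couple a CNN backbone with a recurrent layer; other reshapes, for example treating all 100 spatial positions as timesteps, are equally plausible. The feature-map visualization described above, three blocks of convolution, activation, and max pooling whose intermediate activations are inspected, can be sketched in the same spirit. The filter counts and the 150 x 150 grayscale input below are again illustrative assumptions, and the placeholder array stands in for a real MRI slice.

# Sketch of the feature-map visualization: a small CNN with three
# Convolution -> Activation -> MaxPooling blocks whose intermediate
# activations are plotted so the highlighted regions can be compared
# with the tumor location. Sizes here are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Input(shape=(150, 150, 1)),       # one grayscale MRI slice
    layers.Conv2D(32, 3), layers.Activation("relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3), layers.Activation("relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3), layers.Activation("relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),
])

# Auxiliary model that returns the output of every convolutional layer.
conv_outputs = [l.output for l in cnn.layers if isinstance(l, layers.Conv2D)]
activation_model = models.Model(inputs=cnn.inputs, outputs=conv_outputs)

mri_slice = np.random.rand(1, 150, 150, 1).astype("float32")  # placeholder image
activations = activation_model.predict(mri_slice)

# Show the first 8 channels of the first convolutional block's feature maps.
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i, ax in enumerate(axes):
    ax.imshow(activations[0][0, :, :, i], cmap="viridis")
    ax.axis("off")
plt.show()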

Published in American Journal of Psychiatry and Neuroscience (Volume 9, Issue 4)
DOI 10.11648/j.ajpn.20210904.11
Page(s) 147-156
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Brain Tumor, Deep CNN, Xception, BiGRU

References
[1] Torre, Lindsey A., Rebecca L. Siegel, Elizabeth M. Ward, and Ahmedin Jemal. “Global Cancer Incidence and Mortality Rates and Trends—An Update.” Cancer Epidemiology Biomarkers & Prevention 25, no. 1 (2015): 16–27. https://doi.org/10.1158/1055-9965.epi-15-0578.
[2] Abou-Antoun, Tamara J., James S. Hale, Justin D. Lathia, and Stephen M. Dombrowski. “Brain Cancer Stem Cells in Adults and Children: Cell Biology and Therapeutic Implications.” Neurotherapeutics 14, no. 2 (2017): 372–84. https://doi.org/10.1007/s13311-017-0524-0.
[3] Saenz del Burgo, Laura, Rosa María Hernández, Gorka Orive, and Jose Luis Pedraz. “Nanotherapeutic Approaches for Brain Cancer Management.” Nanomedicine: Nanotechnology, Biology and Medicine 10, no. 5 (2014): 905–19. https://doi.org/10.1016/j.nano.2013.10.001.
[4] Siegel, Rebecca L., Kimberly D. Miller, and Ahmedin Jemal. “Cancer Statistics, 2020.” CA: A Cancer Journal for Clinicians 70, no. 1 (2020): 7–30. https://doi.org/10.3322/caac.21590.
[5] “New Research Finds FastMRI Scans Generated with Artificial Intelligence Are as Accurate as Traditional MRI.” NYU Langone News. Accessed June 30, 2021. https://nyulangone.org/news/new-research-finds-fastmri-scans-generated-artificial-intelligence-are-accurate-traditional-mri.
[6] Obenauf, Anna C., and Joan Massagué. “Surviving at a Distance: Organ-Specific Metastasis.” Trends in Cancer 1, no. 1 (2015): 76–91. https://doi.org/10.1016/j.trecan.2015.07.009.
[7] Swati, Zar Nawab, Qinghua Zhao, Muhammad Kabir, Farman Ali, Zakir Ali, Saeed Ahmed, and Jianfeng Lu. “Brain Tumor Classification for MR Images Using Transfer Learning and Fine-Tuning.” Computerized Medical Imaging and Graphics 75 (2019): 34–46. https://doi.org/10.1016/j.compmedimag.2019.05.001.
[8] Deepak, S., and P. M. Ameer. “Brain Tumor Classification Using Deep CNN Features via Transfer Learning.” Computers in Biology and Medicine 111 (2019): 103345. https://doi.org/10.1016/j.compbiomed.2019.103345.
[9] Sumitra, N., and Rakesh Kumar Saxena. “Brain Tumor Classification Using Back Propagation Neural Network.” International Journal of Image, Graphics and Signal Processing 5, no. 2 (2013): 45–50. https://doi.org/10.5815/ijigsp.2013.02.07.
[10] Seetha, J., and S. Selvakumar Raja. “Brain Tumor Classification Using Convolutional Neural Networks.” Biomedical and Pharmacology Journal 11, no. 3 (2018): 1457–61. https://doi.org/10.13005/bpj/1511.
[11] Afshar, Parnian, Arash Mohammadi, and Konstantinos N. Plataniotis. “Brain Tumor Type Classification via Capsule Networks.” 2018 25th IEEE International Conference on Image Processing (ICIP), 2018. https://doi.org/10.1109/icip.2018.8451379.
[12] Sartaj. “Brain Tumor Classification (MRI).” Kaggle, May 24, 2020. https://www.kaggle.com/sartajbhuvaji/brain-tumor-classification-mri?select=Training.
[13] Yamashita, Rikiya, Mizuho Nishio, Richard Kinh Do, and Kaori Togashi. “Convolutional Neural Networks: An Overview and Application in Radiology.” Insights into Imaging 9, no. 4 (2018): 611–29. https://doi.org/10.1007/s13244-018-0639-9.
[14] Marmanis, Dimitrios, Mihai Datcu, Thomas Esch, and Uwe Stilla. “Deep Learning Earth Observation Classification Using ImageNet Pretrained Networks.” IEEE Geoscience and Remote Sensing Letters 13, no. 1 (2016): 105–9. https://doi.org/10.1109/lgrs.2015.2499239.
[15] Chung, Junyoung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling.” arXiv preprint arXiv:1412.3555 (2014).
Cite This Article
  • APA Style

    Ashley Seong. (2021). Human Visualization of Brain Tumor Classifications Using Deep CNN: Xception + BiGRU. American Journal of Psychiatry and Neuroscience, 9(4), 147-156. https://doi.org/10.11648/j.ajpn.20210904.11


  • ACS Style

    Ashley Seong. Human Visualization of Brain Tumor Classifications Using Deep CNN: Xception + BiGRU. Am. J. Psychiatry Neurosci. 2021, 9(4), 147-156. doi: 10.11648/j.ajpn.20210904.11


  • AMA Style

    Ashley Seong. Human Visualization of Brain Tumor Classifications Using Deep CNN: Xception + BiGRU. Am J Psychiatry Neurosci. 2021;9(4):147-156. doi: 10.11648/j.ajpn.20210904.11


  • @article{10.11648/j.ajpn.20210904.11,
      author = {Ashley Seong},
      title = {Human Visualization of Brain Tumor Classifications Using Deep CNN: Xception + BiGRU},
      journal = {American Journal of Psychiatry and Neuroscience},
      volume = {9},
      number = {4},
      pages = {147-156},
      doi = {10.11648/j.ajpn.20210904.11},
      url = {https://doi.org/10.11648/j.ajpn.20210904.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajpn.20210904.11},
     year = {2021}
    }
    


  • TY  - JOUR
    T1  - Human Visualization of Brain Tumor Classifications Using Deep CNN: Xception + BiGRU
    AU  - Ashley Seong
    Y1  - 2021/10/19
    PY  - 2021
    N1  - https://doi.org/10.11648/j.ajpn.20210904.11
    DO  - 10.11648/j.ajpn.20210904.11
    T2  - American Journal of Psychiatry and Neuroscience
    JF  - American Journal of Psychiatry and Neuroscience
    JO  - American Journal of Psychiatry and Neuroscience
    SP  - 147
    EP  - 156
    PB  - Science Publishing Group
    SN  - 2330-426X
    UR  - https://doi.org/10.11648/j.ajpn.20210904.11
    VL  - 9
    IS  - 4
    ER  - 


Author Information
  • Seoul International School, Gyunggi-do, South Korea
