Journal Club

At the MIRTH AI Lab, our journal club studies both foundational and recent advances in artificial intelligence for medical imaging, serving as a platform for students to practice critical analysis and to foster peer learning and collaboration. Through the journal club, Dr. Shao introduces lab members to multidisciplinary developments in AI and medical imaging. In this way, the journal club plays a pivotal role in advancing knowledge and promoting the effective use of AI in healthcare.

Dr. Shao has presented many foundational papers at the lab’s journal club. These papers have been instrumental in helping students build a solid grounding in deep learning for imaging. Below are examples of articles discussed at the journal club.

Inception Network

  1. Szegedy, Christian, et al. “Going deeper with convolutions.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
  2. Szegedy, Christian, et al. “Rethinking the inception architecture for computer vision.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
  3. Chollet, François. “Xception: Deep learning with depthwise separable convolutions.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
  4. Howard, Andrew G., et al. “MobileNets: Efficient convolutional neural networks for mobile vision applications.” arXiv preprint arXiv:1704.04861 (2017).
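
The last two papers above revolve around the depthwise separable convolution, which factorizes a standard convolution into a per-channel spatial step followed by a 1×1 channel-mixing step. A minimal sketch of the parameter savings (the channel sizes below are illustrative, not taken from the papers):

```python
# Parameter-count comparison between a standard convolution and a
# depthwise separable convolution (the core idea behind Xception and
# MobileNets). Channel sizes are illustrative.

def standard_conv_params(c_in, c_out, k):
    # A standard k x k convolution mixes space and channels jointly.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel (spatial only);
    # pointwise step: a 1 x 1 convolution to mix channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(256, 256, 3)        # 589,824 parameters
sep = depthwise_separable_params(256, 256, 3)  # 67,840 parameters
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 3×3 kernel the factorized form needs roughly 8–9× fewer parameters (and multiply-adds), which is why it underpins efficient mobile architectures.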

Transformer Network

  1. Vaswani, Ashish, et al. “Attention is all you need.” Advances in neural information processing systems 30 (2017).
  2. Dosovitskiy, Alexey, et al. “An image is worth 16×16 words: Transformers for image recognition at scale.” arXiv preprint arXiv:2010.11929 (2020).
  3. Liu, Ze, et al. “Swin transformer: Hierarchical vision transformer using shifted windows.” Proceedings of the IEEE/CVF international conference on computer vision. 2021.
  4. Chen, Jieneng, et al. “TransUNet: Transformers make strong encoders for medical image segmentation.” arXiv preprint arXiv:2102.04306 (2021).
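
All of these architectures are built on the scaled dot-product attention of “Attention is all you need”: softmax(QKᵀ/√d_k)V. A minimal single-head sketch in NumPy (no masking or multi-head projections; array sizes are illustrative):

```python
import numpy as np

# Minimal single-head scaled dot-product attention:
# softmax(Q K^T / sqrt(d_k)) V.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_q, n_k) similarity scores
    # Numerically stable row-wise softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # each output row is a convex mix of V rows

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Because the softmax weights are non-negative and sum to one, each output row is a convex combination of the value rows.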

Image Super-Resolution

  1. Dong, Chao, et al. “Learning a deep convolutional network for image super-resolution.” Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part IV. Springer International Publishing, 2014.
  2. Ledig, Christian, et al. “Photo-realistic single image super-resolution using a generative adversarial network.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
  3. Saharia, Chitwan, et al. “Image super-resolution via iterative refinement.” IEEE Transactions on Pattern Analysis and Machine Intelligence 45.4 (2022): 4713–4726.
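
A thread running through these papers is evaluation by peak signal-to-noise ratio (PSNR), defined as 10·log₁₀(MAX²/MSE). A small sketch of the metric on synthetic arrays (the image sizes and noise level are illustrative):

```python
import numpy as np

# PSNR, the standard fidelity metric in the super-resolution literature:
# 10 * log10(MAX^2 / MSE), in decibels. Inputs are arrays in [0, 1].
def psnr(reference, reconstruction, max_val=1.0):
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.uniform(size=(32, 32))                      # synthetic "image"
noisy = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0.0, 1.0)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```

As SRGAN in particular argues, higher PSNR does not always mean better perceptual quality, which motivated adversarial and diffusion-based objectives.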

Generative Model

  1. Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013).
  2. Doersch, Carl. “Tutorial on variational autoencoders.” arXiv preprint arXiv:1606.05908 (2016).
  3. Ho, Jonathan, Ajay Jain, and Pieter Abbeel. “Denoising diffusion probabilistic models.” Advances in neural information processing systems 33 (2020): 6840–6851.
  4. Luo, Calvin. “Understanding diffusion models: A unified perspective.” arXiv preprint arXiv:2208.11970 (2022).
  5. Rombach, Robin, et al. “High-resolution image synthesis with latent diffusion models.” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
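
The VAE papers above hinge on the reparameterization trick, z = μ + σ·ε with ε ~ N(0, I), which lets gradients flow through the sampling step into the encoder outputs. A minimal sketch (the latent dimension is illustrative):

```python
import numpy as np

# Reparameterization trick from "Auto-encoding variational bayes":
# draw z = mu + sigma * eps with eps ~ N(0, I), so z is a differentiable
# function of the encoder outputs (mu, log_var).
def reparameterize(mu, log_var, rng):
    sigma = np.exp(0.5 * log_var)    # log-variance -> standard deviation
    eps = rng.normal(size=mu.shape)  # noise independent of the parameters
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu = np.zeros(16)       # encoder mean for a 16-dim latent
log_var = np.zeros(16)  # log-variance of 0, i.e. sigma = 1
z = reparameterize(mu, log_var, rng)
print(z.shape)  # (16,)
```

With the randomness isolated in ε, backpropagation treats μ and σ as ordinary deterministic inputs, which is what makes the VAE objective trainable end to end.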

Diffusion Models

  1. Wolleb, Julia, et al. “Diffusion models for implicit image segmentation ensembles.” arXiv preprint arXiv:2112.03145 (2021).
  2. Song, Jiaming, Chenlin Meng, and Stefano Ermon. “Denoising diffusion implicit models.” arXiv preprint arXiv:2010.02502 (2020).
  3. Nichol, Alexander Quinn, and Prafulla Dhariwal. “Improved denoising diffusion probabilistic models.” arXiv preprint arXiv:2102.09672 (2021).
  4. Hoogeboom, Emiel, et al. “Argmax flows and multinomial diffusion: Learning categorical distributions.” arXiv preprint arXiv:2102.05379 (2021).
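
These papers all build on the closed-form DDPM forward (noising) process, x_t = √ᾱ_t·x_0 + √(1 − ᾱ_t)·ε with ε ~ N(0, I). A small sketch using the linear β schedule from the DDPM paper (the 1-D signal stands in for an image):

```python
import numpy as np

# Closed-form forward (noising) process from DDPM:
# x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I),
# with the paper's linear beta schedule from 1e-4 to 0.02 over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # abar_t, monotonically decreasing

def q_sample(x0, t, rng):
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(64)                  # toy 1-D "image"
x_mid = q_sample(x0, 500, rng)    # partially noised sample
x_end = q_sample(x0, T - 1, rng)  # nearly pure Gaussian noise
```

Because ᾱ_t shrinks toward zero as t → T, the signal term vanishes and x_T is approximately standard Gaussian, which is the starting point for the learned reverse (denoising) process.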