The difficulty of applying deep learning algorithms to biomedical imaging systems arises from a lack of training images. An existing workaround is to pre-train deep learning models on ImageNet, a non-medical dataset with millions of training images. However, ImageNet consists of natural RGB images, a modality that frequently differs from that of medical images, which are largely grayscale, such as X-ray and MRI scans. While this approach can be applied effectively to non-medical tasks such as human face detection, it proves ineffective in many areas of medical imaging. Recently proposed generative models such as Generative Adversarial Networks (GANs) are able to synthesize new medical images. By utilizing generated images, we may overcome the modality gap arising from current transfer learning methods. In this paper, we propose a training pipeline that outperforms both conventional GAN-synthesis methods and transfer learning methods.

Clinically, Fundus Fluorescein Angiography (FA) is a more common means of Diabetic Retinopathy (DR) detection, since DR appears with much greater contrast in FA than in Color Fundus (CF) images. However, acquiring FA carries a risk of death due to fluorescein allergy. Thus, in this paper, we explore a novel unpaired CycleGAN-based model for FA synthesis from CF, in which strict structural similarity constraints are employed to guarantee an accurate mapping from one domain to the other. First, a triple multi-scale network architecture with multi-scale inputs, multi-scale discriminators, and multi-scale cycle consistency losses is proposed to enhance the similarity between the two retinal modalities at different scales. Second, a self-attention mechanism is introduced to improve the adaptive domain mapping ability of the model.
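The abstract does not detail how the multi-scale cycle consistency losses are computed. A minimal sketch, assuming an L1 cycle loss evaluated on average-pooled copies of the image at several scales with hypothetical scale factors and weights (not values from the paper), could look like:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2D grayscale image by an integer factor."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_cycle_loss(original, reconstructed,
                          factors=(1, 2, 4), weights=(1.0, 0.5, 0.25)):
    """Weighted sum of L1 cycle-consistency losses over several scales.

    `factors` and `weights` are illustrative assumptions only.
    """
    loss = 0.0
    for f, w in zip(factors, weights):
        a, b = downsample(original, f), downsample(reconstructed, f)
        loss += w * np.abs(a - b).mean()
    return loss

# A perfect reconstruction yields zero loss at every scale.
img = np.random.rand(64, 64)
print(multiscale_cycle_loss(img, img))  # → 0.0
```

In an actual CycleGAN this term would be differentiated through the generators; the NumPy version above only illustrates the loss itself.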
Third, to further enforce strict constraints at the feature level, a quality loss is employed between each generation and reconstruction step. Qualitative examples, as well as quantitative evaluations, are provided to support the robustness and accuracy of our proposed method.

Simulating medical images such as X-rays is of key interest for reducing radiation in non-diagnostic visualization scenarios. Past state-of-the-art methods rely on ray tracing, which requires 3D models. To our knowledge, no approach exists for cases where point clouds from depth cameras and other sensors are the only input modality. We propose a method for estimating an X-ray image from a generic point cloud using a conditional generative adversarial network (CGAN). We train a pix2pix CGAN to translate point cloud images into X-ray images using a dataset created inside our custom synthetic data generator. Additionally, point clouds of multiple densities are examined to determine the effect of density on the image translation problem. The results show that this type of network can predict X-ray images from point clouds. Higher point cloud densities outperformed the two lowest densities; however, networks trained with high-density point clouds did not differ significantly from those trained with medium densities. We demonstrate that CGANs can be applied to image translation problems in the medical domain and show the feasibility of this approach when 3D models are not available. Further work includes overcoming the occlusion and quality limitations of the generic approach and applying CGANs to other medical image translation problems.

The high spatial resolution of Magnetic Resonance images (MRI) provides rich structural detail that facilitates accurate diagnosis and quantitative image analysis. However, the long acquisition time of MRI leads to patient discomfort and possible motion artifacts in the reconstructed image.
Single Image Super-Resolution (SISR) using Convolutional Neural Networks (CNNs) is an emerging trend in biomedical imaging, especially in Magnetic Resonance (MR) image post-processing. An efficient choice of SISR architecture is required to achieve better-quality reconstruction. In addition, a robust choice of loss function, together with the domain in which the loss operates, plays an important role in enhancing fine structural details and removing blurring effects to form a high-resolution image. In this work, we propose a novel combined loss function consisting of an L1 Charbonnier loss in the image domain and a wavelet-domain loss, the Isotropic Undecimated Wavelet loss (IUW loss), to train the existing Laplacian Pyramid Super-Resolution CNN. The proposed loss function was evaluated on three MRI datasets - a privately collected knee MRI dataset and the publicly available Kirby21 brain and iSeg infant brain datasets - as well as on benchmark SISR datasets of natural images. Experimental analysis shows promising results, with better recovery of structure and improvements in qualitative metrics.

Magnetic resonance (MR) images are generally degraded by random noise governed by Rician distributions. In this study, we developed a modified adaptive high-order singular value decomposition (HOSVD) method that takes into account nonlocal self-similarity and the weighted Schatten p-norm. We extracted 3D cubes from noisy images and grouped similar cubes by the Euclidean distance between them to construct a fourth-order tensor. The rank of each unfolding matrix was adaptively determined by weighted Schatten p-norm regularization. The latent noise-free 3D MR images were then obtained by adaptive HOSVD.
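The cube extraction and nonlocal grouping step described above can be sketched as follows. The cube size, stride, number of grouped cubes, and the use of a flattened Euclidean distance are illustrative assumptions; the adaptive weighted-Schatten-p-norm HOSVD shrinkage itself is omitted:

```python
import numpy as np

def extract_cubes(volume, size=4, stride=4):
    """Extract 3D cubes from a volume on a regular grid."""
    cubes = []
    d, h, w = volume.shape
    for z in range(0, d - size + 1, stride):
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                cubes.append(volume[z:z+size, y:y+size, x:x+size])
    return np.array(cubes)

def group_similar_cubes(cubes, ref_index=0, k=8):
    """Stack the k cubes closest (in Euclidean distance) to a reference
    cube into a fourth-order tensor of shape (k, size, size, size)."""
    flat = cubes.reshape(len(cubes), -1)
    dist = np.linalg.norm(flat - flat[ref_index], axis=1)
    nearest = np.argsort(dist)[:k]        # includes the reference itself
    return cubes[nearest]

volume = np.random.rand(16, 16, 16)
cubes = extract_cubes(volume)             # 4 * 4 * 4 = 64 cubes
tensor = group_similar_cubes(cubes, k=8)
print(tensor.shape)                       # → (8, 4, 4, 4)
```

The resulting fourth-order tensor is what the method's HOSVD and rank-shrinkage stage would operate on, one reference cube at a time.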
Denoising experiments were conducted on both synthetic and clinical 3D MR images, and the results showed that the proposed method outperforms several existing methods for Rician noise removal in 3D MR images.

Quantitative Coronary Angiography (QCA) is an important tool in the study of coronary artery disease. Validation of this technique is crucial for its ongoing development and refinement, although it is difficult due to several potential sources of error. The present work aims at further validation of a new semi-automated method for three-dimensional (3D) reconstruction of coronary bifurcation arteries based on X-ray Coronary Angiographies (CA). In a dataset of 40 patients (79 angiographic views), we used this method to reconstruct the arteries in 3D space. The validation was based on comparing these 3D models, using specific metrics, with the true silhouettes of 2D models annotated by an expert. The obtained results indicate good accuracy for most parameters (≥ 90%). Comparison with similar works shows that our new method is a promising tool for the 3D reconstruction of coronary bifurcations and for application in everyday clinical use.
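The abstract does not name the specific comparison metrics used in the validation. As one hedged illustration only, agreement between an expert-annotated 2D vessel silhouette and the projected silhouette of a reconstructed 3D model could be scored with a Dice overlap on binary masks:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two binary masks (1 = vessel, 0 = background)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy example: two partially overlapping vessel silhouettes.
expert = np.zeros((8, 8), dtype=int)
expert[2:6, 2:6] = 1                     # 16 pixels
projected = np.zeros((8, 8), dtype=int)
projected[3:7, 2:6] = 1                  # 16 pixels, 12 shared with expert
print(dice_overlap(expert, projected))   # → 0.75
```

A score of 1.0 would indicate a perfect match between the projected reconstruction and the expert annotation; the metrics actually used in the study may differ.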