Deep hashing methods have been shown to be the most efficient approximate nearest neighbor search techniques for large-scale image retrieval. However, existing deep hashing methods have poor small-sample ranking performance for case-based medical image retrieval: the top-ranked images in the returned query results may belong to a different class than the query image. This ranking problem is caused by the loss of classification, region-of-interest (ROI), and small-sample information in the hashing space. To address it, we propose an end-to-end framework, called the Attention-based Triplet Hashing (ATH) network, to learn low-dimensional hash codes that preserve the classification, ROI, and small-sample information. We embed a spatial-attention module into the network structure of ATH to focus on ROI information. The spatial-attention module aggregates the spatial information of feature maps by jointly applying max-pooling, element-wise maximum, and element-wise mean operations along the channel axis. To highlight the essential role of classification in differentiating case-based medical images, we propose a novel triplet cross-entropy loss that achieves maximal class separability and maximal hash-code discriminability simultaneously during model training. The triplet cross-entropy loss helps map both the classification information of images and the similarity between images into the hash codes. Moreover, by adopting triplet labels during model training, we can fully utilize the small-sample information to alleviate the imbalanced-sample problem. Extensive experiments on two case-based medical datasets demonstrate that the proposed ATH further improves retrieval performance compared to state-of-the-art deep hashing methods and boosts the ranking performance for small samples.
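The channel-axis aggregation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the fixed sum standing in for a learned fusion, and the omission of the separate max-pooling branch, are simplifying assumptions.

```python
import numpy as np

def spatial_attention(feat):
    """Sketch of a spatial-attention map in the spirit of the ATH module.

    feat: array of shape (C, H, W).
    Aggregates information along the channel axis with an element-wise
    maximum and an element-wise mean; the paper additionally uses a
    max-pooling branch and a learned fusion, for which a fixed sum
    stands in here (an assumption for illustration only).
    """
    ch_max = feat.max(axis=0)            # element-wise maximum over channels -> (H, W)
    ch_mean = feat.mean(axis=0)          # element-wise mean over channels -> (H, W)
    score = ch_max + ch_mean             # stand-in for a learned fusion layer
    attn = 1.0 / (1.0 + np.exp(-score))  # sigmoid gate in (0, 1)
    return feat * attn[None, :, :]       # reweight every spatial location
```

The gate multiplies each spatial location of every channel by a single scalar in (0, 1), so informative regions (high channel responses) are emphasized while background is suppressed.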
Compared to the other loss methods, the triplet cross-entropy loss can enhance the classification performance and hash-code discriminability.

Cervical cancer has been one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccination and regular screening examinations. Screening cervical smears under the microscope is a widely used routine in regular examinations, but it consumes a large amount of cytologists' time and labour. Computerized cytology analysis caters to this imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem due to the extremely large image resolution, the existence of tiny lesions, noisy datasets, and the intricate clinical definition of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporated with a synergistic grouping loss (SGL), the network can be effectively trained on a noisy dataset with fuzzy inter-class boundaries. Inspired by the clinical diagnostic criteria of cytologists, a novel smear-level classifier, i.e., rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, which aligns reasonably with the intricate cytological definition of the classes. Extensive experiments on the largest dataset, including 19,303 WSIs from multiple medical centers, validate the robustness of our method.
With a high sensitivity of 0.907 and a specificity of 0.80, our method manifests the potential to reduce the workload of cytologists in routine practice.

Fast and accurate assessment of the severity level of COVID-19 is an essential problem when millions of people are suffering from the pandemic around the world. Currently, chest CT is regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe two issues, weak annotation and insufficient data, that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method, i.e., 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and weigh the instances, 2) a bag-level data augmentation component to generate virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We have systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method obtains an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, outperforming previous works.

Sparse sampling and parallel imaging are two effective approaches to alleviating the lengthy data acquisition of magnetic resonance imaging (MRI). Promising data recoveries can be obtained from a few MRI samples with the help of sparse reconstruction models, and proper algorithms are indispensable for solving these optimization models. pFISTA, a simple and efficient algorithm, has been successfully extended to parallel imaging; however, its convergence criterion is still an open question. Moreover, the existing convergence criterion of single-coil pFISTA cannot be applied to the parallel imaging pFISTA, which leaves users uncertain about how to determine its only parameter, the step size.
In this work, we provide a guaranteed convergence analysis of the parallel imaging version of pFISTA for solving two well-known parallel imaging reconstruction models, SENSE and SPIRiT. Along with the convergence analysis, we provide recommended step-size values for SENSE and SPIRiT reconstructions to obtain fast and promising results. Experiments on in vivo brain images demonstrate the validity of the convergence criterion.

The resection of small, low-density, or deep lung nodules during video-assisted thoracoscopic surgery (VATS) is surgically challenging. Nodule localization methods in clinical practice typically rely on the preoperative placement of markers, which may lead to clinical complications. We propose a markerless lung nodule localization framework for VATS based on a hybrid method combining intraoperative cone-beam CT (CBCT) imaging, free-form deformation image registration, and a poroelastic lung model with allowance for air evacuation. The difficult problem of estimating intraoperative lung deformations is decomposed into two more tractable sub-problems: (i) estimating the deformation due to the change of patient pose from preoperative CT (supine) to intraoperative CBCT (lateral decubitus); and (ii) estimating the pneumothorax deformation, i.e., a collapse of the lung within the thoracic cage. We demonstrate the feasibility of our localization framework with a retrospective validation study on 5 VATS clinical cases.
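The triplet cross-entropy loss described in the first abstract combines a triplet ranking term, which shapes the hash space, with a cross-entropy term, which preserves class information. A minimal NumPy sketch follows; the margin, the weighting factor `lam`, and the exact way the two terms are fused are illustrative assumptions, not the authors' design.

```python
import numpy as np

def triplet_cross_entropy_loss(anchor, positive, negative,
                               logits, label, margin=0.5, lam=1.0):
    """Hedged sketch of a combined triplet + cross-entropy objective.

    anchor/positive/negative: hash-like embedding vectors, where the
    positive shares the anchor's class and the negative does not.
    logits: class scores for the anchor image; label: its class index.
    margin and lam are placeholder hyper-parameters.
    """
    # Triplet term: pull same-class codes together, push different-class apart.
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    triplet = max(0.0, d_ap - d_an + margin)
    # Cross-entropy term: preserve classification information (stable softmax).
    probs = np.exp(logits - logits.max())
    probs = probs / probs.sum()
    ce = -np.log(probs[label])
    return triplet + lam * ce
```

Because the triplet term compares one positive and one negative per anchor, each minority-class sample can appear in many triplets, which is one way to read the abstract's claim that triplet labels help exploit small-sample information.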