Humans and robots can recognize materials with distinct thermal effusivities by making physical contact and observing temperatures during heat transfer. This works well with room-temperature materials, yet research has shown that when one material is heated or cooled, contact with distinct materials can result in similar temperatures and thus confusion. To investigate this form of ambiguity thoroughly, we designed a psychophysical experiment in which a participant discriminates between two materials given initial conditions that result in similar temperatures (i.e., ambiguous initial conditions). We conducted a study with 32 human participants and a robot, and both the humans and the robot confused the materials. We also found that robots can overcome this ambiguity by using two temperature sensors held at different temperatures prior to contact. We support this conclusion with a mathematical proof based on a heat transfer model and with empirical results in which a robot achieved 100% accuracy compared to 5% human accuracy. Our results also indicate that robots with a single temperature sensor can use subtle cues to outperform humans. Overall, our work provides insight into challenging conditions for material recognition via heat transfer and suggests methods by which robots can overcome these challenges.
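As a minimal sketch of where this ambiguity comes from, consider the standard contact model for two semi-infinite solids; the notation and rearrangement below are generic illustrations (assuming both sensors share the same known effusivity), not the paper's own derivation or proof.

```latex
% Interface temperature when a sensor (effusivity e_s, pre-contact temperature T_s)
% touches a material (effusivity e_m, temperature T_m), both modeled as
% semi-infinite solids; e = sqrt(k * rho * c_p) is the thermal effusivity.
\[
  T_c \;=\; \frac{e_s T_s + e_m T_m}{e_s + e_m},
  \qquad e \;=\; \sqrt{k \rho c_p}.
\]
% Rearranging gives  e_m (T_m - T_c) = e_s (T_c - T_s).
% Single-sensor ambiguity: for a fixed sensor and a fixed observed T_c, any
% material satisfying this relation produces the same interface temperature,
% so a heated or cooled material can mimic a different material's response.
% Two sensors at distinct pre-contact temperatures T_{s1} != T_{s2} yield two
% interface temperatures T_{c1}, T_{c2}, i.e., two independent equations in
% the two unknowns (e_m, T_m), which can generically be solved.
```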
Computing and attending to salient regions of a visual scene is an innate and necessary preprocessing step for both biological and engineered systems performing high-level visual tasks, including object detection, tracking, and classification. Computational bandwidth and speed are improved by preferentially devoting computational resources to salient regions of the visual field. The human brain computes saliency effortlessly, but modeling this task in engineered systems is challenging. We first present a neuromorphic dynamic saliency model that is bottom-up, feed-forward, and based on the notion of proto-objects with neurophysiological spatio-temporal features, requiring no training. Our neuromorphic model outperforms state-of-the-art dynamic visual saliency models in predicting human eye fixations (i.e., ground-truth saliency). Second, we present a hybrid FPGA implementation of the model for real-time applications, capable of processing 112×84-resolution frames at 18.71 Hz with a 100 MHz clock rate, a 23.77× speedup over the software implementation. Additionally, the fixed-point model used in the FPGA implementation yields results comparable to the software implementation.

Identifying new disease indications for approved drugs can help reduce the cost and time of drug development. Most recent methods focus on exploiting various kinds of information related to drugs and diseases to predict candidate drug-disease associations. However, previous methods have failed to deeply integrate the neighborhood topological structure and the node attributes of a drug-disease node pair of interest. We propose a new prediction method, ANPred, to learn and integrate pairwise attribute information and neighbor topology information from the similarities and associations related to drugs and diseases. First, a bi-layer heterogeneous network with intra-layer and inter-layer connections is established to combine the drug similarities, the disease similarities, and the drug-disease associations. Second, the embedding of a drug-disease pair is constructed by integrating multiple biological premises about drugs and diseases. A learning framework based on multi-layer convolutional neural networks is designed to learn the attribute representation of the drug and disease node pair from its embedding. Sequences composed of neighboring nodes are formed by random walks on the heterogeneous network, and a framework based on a fully connected autoencoder and a skip-gram module is constructed to learn the neighbor topology representations of the nodes. Cross-validation results indicate that the performance of ANPred is superior to several state-of-the-art methods, and case studies on 5 drugs further confirm the ability of ANPred to discover potential drug-disease association candidates.

Hospital readmission prediction is the task of learning models from historical medical data to predict the probability of a patient returning to the hospital within a certain period, e.g., 30 or 90 days, after discharge. The motivation is to help health providers deliver better treatment and post-discharge strategies, lower the hospital readmission rate, and ultimately reduce medical costs. Due to the inherent complexity of diseases and healthcare ecosystems, modeling hospital readmission faces many challenges. A variety of methods have been developed, but the existing literature fails to deliver a complete picture answering some fundamental questions, such as: what are the main challenges and solutions in modeling hospital readmission; what features and models are typically used for readmission prediction; how can predictions be made meaningful and transparent for decision making; and what conflicts can arise when deploying predictive approaches in real-world settings. In this paper, we systematically review computational models for hospital readmission prediction and propose a taxonomy of challenges featuring four main categories: (1) data variety and complexity; (2) data imbalance, locality, and privacy; (3) model interpretability; and (4) model implementation. The review summarizes the methods in each category and highlights the technical solutions proposed to address the challenges. In addition, a review of datasets and resources available for hospital readmission modeling provides firsthand material to help researchers and practitioners design new approaches for effective and efficient hospital readmission prediction.

Image-based cell counting is a fundamental yet challenging task with wide applications in biological research. In this paper, we propose a novel unified deep network framework designed to solve this problem for various cell types in both 2D and 3D images. Specifically, we first propose SAU-Net for cell counting by extending the segmentation network U-Net with a Self-Attention module. Second, we design an extension of Batch Normalization (BN) to facilitate the training process for small datasets. In addition, a new 3D benchmark dataset based on the existing mouse blastocyst (MBC) dataset is developed and released to the community. SAU-Net achieves state-of-the-art results on four benchmark 2D datasets, the synthetic fluorescence microscopy (VGG) dataset, the Modified Bone Marrow (MBM) dataset, the human subcutaneous adipose tissue (ADI) dataset, and the Dublin Cell Counting (DCC) dataset, as well as on the new 3D dataset, MBC. The BN extension is validated through extensive experiments on the 2D datasets, since GPU memory constraints preclude the use of the 3D dataset.
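As a rough sketch of how a self-attention module can be attached to a U-Net-style network, the block below shows a generic spatial self-attention layer with a learnable residual weight. It is a hedged illustration assuming a SAGAN-style formulation; it is not SAU-Net's exact module, and all names and shapes are hypothetical.

```python
# Hypothetical sketch: a spatial self-attention block that could be inserted
# into a U-Net encoder-decoder; NOT the exact SAU-Net module.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                      # (b, c', hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw) attention map
        v = self.value(x).flatten(2)                    # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection

# Example: apply attention to a bottleneck-sized feature map.
features = torch.randn(2, 256, 16, 16)
attended = SelfAttention2d(256)(features)
print(attended.shape)  # torch.Size([2, 256, 16, 16])
```

In practice, such a block is typically placed at the bottleneck or low-resolution decoder stages, where the (hw × hw) attention map remains small enough to fit in GPU memory.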