Backdoor Detection in Neural Networks via Transferability of Perturbation
Brief Description: Deep neural networks (DNNs) are vulnerable to “backdoor” poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model. Detection of backdoors in trained models without access to the training data or example triggers is an important open problem. In this paper, we identify an interesting property of these models: adversarial perturbations transfer from image to image more readily in poisoned models than in clean models.
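The transferability statistic described above can be sketched numerically: craft an adversarial perturbation on one input, apply it to the other inputs, and measure how often it flips their labels; an unusually high transfer rate flags a suspect model. The sketch below uses a plain linear softmax classifier and a one-step gradient-sign perturbation purely for illustration; the model, the `fgsm_perturbation` helper, and all parameters are assumptions, not the paper's actual method.

```python
import numpy as np

def fgsm_perturbation(W, b, x, y, eps=0.5):
    """One-step gradient-sign perturbation for a linear softmax
    classifier (logits = W @ x + b). Hypothetical helper for
    illustration only."""
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(len(b))[y]
    grad_x = W.T @ (p - onehot)     # gradient of cross-entropy w.r.t. x
    return eps * np.sign(grad_x)

def transfer_rate(W, b, X, Y, eps=0.5):
    """Fraction of (source, target) pairs for which a perturbation
    crafted on the source input also changes the target's label."""
    n = len(X)
    flips = total = 0
    for i in range(n):
        delta = fgsm_perturbation(W, b, X[i], Y[i], eps)
        for j in range(n):
            if j == i:
                continue
            before = np.argmax(W @ X[j] + b)
            after = np.argmax(W @ (X[j] + delta) + b)
            flips += int(before != after)
            total += 1
    return flips / total

# Synthetic stand-in data: a poisoned-vs-clean comparison would run
# this statistic on both models and compare the resulting rates.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = np.zeros(3)
X = rng.normal(size=(10, 8))
Y = rng.integers(0, 3, size=10)
rate = transfer_rate(W, b, X, Y)
assert 0.0 <= rate <= 1.0
```

In a real detection pipeline the same comparison would be made between the model under test and a reference distribution of clean models, with the transfer rate as the test statistic.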
Feedback-Based Tree Search for Reinforcement Learning
Brief Description: We describe a technique that iteratively applies MCTS on batches of small, finite-horizon versions of the original infinite-horizon MDP. We show that a deep neural network implementation of the technique can create a competitive AI agent for a popular multi-player online battle arena (MOBA) game.
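The core ingredient, tree search over a finite-horizon truncation of the MDP, can be illustrated with a minimal UCT loop. The toy chain MDP, the `mcts_action` function, and all constants below are illustrative assumptions standing in for the paper's game environment and neural-network implementation.

```python
import math
import random

# Toy deterministic MDP (stand-in for the real game): states 0..9,
# action 1 moves right for reward 1, action 0 stays for reward 0.
def step(s, a):
    return (min(s + 1, 9), 1.0) if a == 1 else (s, 0.0)

def rollout(s, h):
    """Random playout for the remaining horizon h."""
    total = 0.0
    for _ in range(h):
        s, r = step(s, random.choice([0, 1]))
        total += r
    return total

def mcts_action(root, horizon=5, iters=500, c=1.4):
    """UCT search over a finite-horizon truncation of the MDP."""
    N = {}  # visit counts, keyed by (state, depth, action)
    Q = {}  # mean simulated returns
    for _ in range(iters):
        s, depth, path, ret = root, 0, [], 0.0
        while depth < horizon:
            if any(N.get((s, depth, a), 0) == 0 for a in (0, 1)):
                # Expansion: try an untried action, then random rollout.
                a = next(a for a in (0, 1) if N.get((s, depth, a), 0) == 0)
                s2, r = step(s, a)
                path.append((s, depth, a))
                ret += r + rollout(s2, horizon - depth - 1)
                break
            # Selection: UCB1 over the two actions.
            n_total = sum(N[(s, depth, a)] for a in (0, 1))
            a = max((0, 1), key=lambda a: Q[(s, depth, a)]
                    + c * math.sqrt(math.log(n_total) / N[(s, depth, a)]))
            s2, r = step(s, a)
            path.append((s, depth, a))
            ret += r
            s, depth = s2, depth + 1
        # Backup: credit the simulated return to every visited node.
        for key in path:
            N[key] = N.get(key, 0) + 1
            Q[key] = Q.get(key, 0.0) + (ret - Q.get(key, 0.0)) / N[key]
    return max((0, 1), key=lambda a: N.get((root, 0, a), 0))

random.seed(0)
best = mcts_action(0)
assert best == 1  # moving right is optimal in this toy MDP
```

The technique in the paper iterates this idea: each batch of finite-horizon searches produces improved action targets, which a neural network then fits, so later search batches start from a better default policy.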
Highly Scalable Verifiable Encrypted Search
Brief Description: In encrypted search, a server holds an encrypted database of documents but not the keys under which the documents are encrypted. The server answers keyword queries from a client with the list of documents matching the query. In this paper we present two highly scalable protocols for searching over encrypted data that achieve full security against a possibly malicious server and support conjunctive queries, in which the client submits several keywords and asks the server to identify the documents that match all of them. The first protocol works in the single-client model, where the party searching the data is also the data owner who originally stored it with the server. The second protocol works in the more challenging multi-client model, where a data owner stores encrypted data with a server and then allows a client to search the data via a query-based token.
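The data flow of token-based conjunctive search can be sketched with a toy index: the owner derives a deterministic token per keyword, the server stores only tokens, and a conjunctive query is answered by intersecting posting lists. This is purely illustrative; deterministic HMAC tokens leak repetition and provide none of the verifiability or malicious-server guarantees of the protocols described above, and every function name here is an assumption.

```python
import hmac
import hashlib

def token(key, keyword):
    """Toy PRF: deterministic search token for one keyword (HMAC-SHA256).
    Illustrative only; not the paper's token construction."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(key, docs):
    """Single-client setting: the data owner builds the token index
    (token -> set of matching document ids) before uploading it."""
    index = {}
    for doc_id, words in docs.items():
        for w in set(words):
            index.setdefault(token(key, w), set()).add(doc_id)
    return index

def conjunctive_search(index, tokens):
    """Server side: return ids of documents matching ALL query tokens.
    The server never sees the keywords, only their tokens."""
    result = None
    for t in tokens:
        matches = index.get(t, set())
        result = matches if result is None else result & matches
    return result if result else set()

key = b"client-secret-key"
docs = {1: ["encrypted", "search"],
        2: ["search", "protocol"],
        3: ["encrypted", "search", "protocol"]}
index = build_index(key, docs)
query = [token(key, w) for w in ("encrypted", "search")]
assert conjunctive_search(index, query) == {1, 3}
```

In the multi-client model the owner would hand the keyed token derivation to an authorized client per query (the query-based token), rather than sharing the key itself; the malicious-security and verifiability properties of the actual protocols require cryptographic machinery well beyond this sketch.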