The Role of Adaptive Optimizers for Honest Private Hyperparameter Tuning
Hyperparameter optimization is a ubiquitous challenge in machine learning, and the performance of a trained model depends crucially on the effective selection of its hyperparameters. While a rich set of tools exists for this purpose, there are currently no practical hyperparameter selection methods under the constraint of differential privacy (DP). In this talk, we will present our recent work on honest hyperparameter selection for differentially private machine learning, in which the cost of hyperparameter tuning is accounted for in the overall privacy budget. In particular, we will (i) show that standard privacy composition tools can outperform more advanced techniques in many settings, (ii) show that adaptive optimizers like DPAdam enjoy a significant advantage in the process of honest hyperparameter tuning, and (iii) draw upon novel limiting behavior of Adam in the DP setting to design a new and more efficient optimizer. At the end of the talk, we will argue for the design of systems and tools with end-to-end privacy guarantees for machine learning.
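To make the idea of an adaptive DP optimizer concrete, here is a minimal NumPy sketch of a DP-Adam-style update: per-example gradients are clipped, Gaussian noise is added, and the standard Adam moment estimates are computed on the privatized gradient. All names, defaults, and the noise calibration below are illustrative assumptions for exposition, not the implementation from the work presented in the talk.

```python
import numpy as np

def dp_adam_step(params, per_example_grads, m, v, t,
                 lr=1e-3, clip_norm=1.0, noise_mult=1.0,
                 beta1=0.9, beta2=0.999, eps=1e-8,
                 rng=None):
    """One illustrative DP-Adam step: clip per-example gradients,
    add Gaussian noise, then apply the usual Adam update."""
    rng = rng or np.random.default_rng(0)
    # Clip each per-example gradient to L2 norm <= clip_norm.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    n = len(clipped)
    # Gaussian mechanism: noise scale proportional to the clipping norm.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=params.shape)
    g_tilde = (np.sum(clipped, axis=0) + noise) / n
    # Standard Adam moment updates, applied to the privatized gradient.
    m = beta1 * m + (1 - beta1) * g_tilde
    v = beta2 * v + (1 - beta2) * g_tilde ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

The key point for honest tuning is that the per-coordinate scaling by `sqrt(v_hat)` makes the update far less sensitive to the learning rate than DPSGD, shrinking the hyperparameter grid that must be paid for out of the privacy budget.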
Bio: Xi He's research focuses on the areas of privacy and security for big data, including the development of usable and trustworthy tools for data exploration and machine learning with provable security and privacy (S&P) guarantees. Rather than patching systems for their S&P issues, Xi's work takes a principled approach to designing provable S&P requirements and building practical tools that achieve these requirements. Considering S&P as a first-class citizen in system and algorithm design, she has demonstrated new optimization opportunities for these S&P-aware database systems and machine learning tools. She has published in top database, privacy, and ML conferences, including SIGMOD, VLDB, CCS, PoPETs, and AAAI; authored the book "Differential Privacy for Databases" in Foundations and Trends in Databases; and presented highly regarded tutorials on privacy at VLDB 2016, SIGMOD 2017, and SIGMOD 2021.