Research Summary

Optimization has played a significant role in many areas, such as engineering, the sciences, and health care. My research focuses on continuous optimization and is mainly motivated by "big" problems arising in image processing, machine learning, statistics, and finance.

First-order optimization methods

When solving big-data problems, scalability and reliability are two key considerations in designing numerical approaches. For these problems, computing second- or higher-order derivatives is often extremely expensive in terms of both time and space complexity. Hence, first-order methods, which use only first- and/or zeroth-order information about a problem, are widely used, mainly because of their good scalability and fast convergence to moderately accurate solutions. First-order methods have been extensively studied for problems without constraints or with simple constraints, but few efforts have been made on problems with complicated constraints. One of my current research interests is to develop efficient first-order methods, and to explore their complexity, for solving nonlinear functional-constrained problems, with applications in machine learning, operations research, and engineering.

Stochastic optimization

If the data comes as a stream (e.g., in stochastic programs and online learning) and one wants to learn or extract important features from the data stream, storing all the data and then performing data reduction or mining may be impossible: at any point in time, only a portion of the samples can be accessed. Even if one could wait for all the data to arrive and store it, accessing the entire data set for each update of the variables can be extremely expensive, so sampling a small amount of the data at each step remains beneficial and more efficient.
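To make the first-order idea described above concrete, here is a minimal NumPy sketch of projected gradient descent for a toy nonnegativity-constrained least-squares problem. The function name and parameters are illustrative only, not a method from any specific paper; it simply shows that each iteration needs nothing beyond a gradient and a cheap projection.

```python
import numpy as np

def projected_gradient(A, b, steps=3000, lr=None):
    """Projected gradient for min ||Ax - b||^2 subject to x >= 0.

    Each iteration uses only first-order information (the gradient
    A^T(Ax - b)) plus a cheap projection onto the constraint set.
    """
    m, n = A.shape
    if lr is None:
        # Step size 1/L, where L = ||A||_2^2 is the gradient's
        # Lipschitz constant (largest eigenvalue of A^T A).
        lr = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(steps):
        grad = A.T @ (A @ x - b)            # first-order information only
        x = np.maximum(x - lr * grad, 0.0)  # project onto the nonnegative orthant
    return x
```

The projection here is trivial (clip at zero); the research question mentioned above is precisely what happens when the constraints are complicated nonlinear functions rather than a simple set like this.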
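The sampling idea behind stochastic optimization can be sketched with minibatch stochastic gradient descent (SGD) on a least-squares objective. The code below is an illustrative toy with made-up parameter choices: each update touches only a small random subset of the rows, so the full data set never has to be accessed at once.

```python
import numpy as np

def sgd_least_squares(A, b, batch=10, epochs=300, lr=0.05, seed=0):
    """Minibatch SGD for min (1/m) ||Ax - b||^2.

    Each update uses only `batch` randomly sampled rows of (A, b),
    mimicking the streaming setting where the full data is never
    touched at once.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        # One pass over the data in random minibatches.
        for idx in np.array_split(rng.permutation(m), m // batch):
            Ai, bi = A[idx], b[idx]
            grad = 2.0 * Ai.T @ (Ai @ x - bi) / len(idx)  # stochastic gradient
            x -= lr * grad
    # Note: with exact (noiseless) data a constant step size converges;
    # with noisy gradients at the solution, a decaying step size is standard.
    return x
```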
Compressed sensing and low-rank matrix / tensor recovery

For large- or huge-scale data, full acquisition can be very expensive (e.g., in MRI); to save acquisition time, the data is often partially sampled, or only a few measurements of it are taken. In some applications (e.g., the Netflix movie-user ratings), acquiring the complete data is simply impossible. However, owing to special structures in the data (e.g., sparsity, smoothness, and low-rankness), it can be reliably reconstructed from incomplete observations or under-determined measurements.

Regularized matrix and tensor factorization

Even when a large or huge amount of data is completely acquired, storing all of it is often very expensive and may be wasteful because of redundancy in the data (e.g., in face recognition). Data reduction is usually necessary to remove redundancy while retaining the principal information. Matrix and tensor factorizations with regularization terms (e.g., nonnegativity, sparsity, orthogonality) are efficient tools for dimensionality reduction and feature extraction.
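A standard way to make the sparse-recovery idea in the compressed sensing section concrete is the LASSO, a convex surrogate solved here by iterative soft-thresholding (ISTA). The sketch below, with illustrative names and parameters, recovers a sparse vector from under-determined Gaussian measurements:

```python
import numpy as np

def ista(A, b, lam=0.005, steps=5000):
    """Iterative soft-thresholding (ISTA) for the LASSO problem

        min_x  (1/2) ||Ax - b||^2 + lam * ||x||_1,

    a standard convex surrogate for recovering a sparse x from
    under-determined measurements b = Ax.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - (A.T @ (A @ x - b)) / L    # gradient step on the smooth part
        # Proximal step: soft-threshold toward zero, which promotes sparsity.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

Even though the linear system is under-determined (fewer measurements than unknowns), the l1 regularizer lets the method pick out the true sparse solution.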
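As one concrete instance of regularized factorization, the sketch below runs the classical Lee-Seung multiplicative updates for nonnegative matrix factorization (NMF). All parameter choices are illustrative; the point is that the nonnegativity constraint itself acts as the regularizer, forcing each data column to be an additive combination of a few nonnegative features.

```python
import numpy as np

def nmf(M, r, iters=2000, seed=0, eps=1e-10):
    """Nonnegative matrix factorization M ~= W H with W, H >= 0,
    via the classical Lee-Seung multiplicative updates.

    The r columns of W serve as nonnegative features; H holds the
    coefficients, giving a compact, interpretable data reduction.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        # Multiplicative updates preserve nonnegativity automatically;
        # eps guards against division by zero.
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H
```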