Publications
Journal publications and preprints (newest first)
- Wei Liu, Qihang Lin, Yangyang Xu, A near‑optimal method for linearly constrained composite non‑convex non‑smooth problems.
- Wei Liu, Yangyang Xu, A SPIDER‑type stochastic subgradient method for expectation‑constrained nonconvex nonsmooth optimization. SIAM Journal on Optimization, 2026.
- Wei Liu, Muhammad Khan, Gabriel Mancino‑Ball, Yangyang Xu, A stochastic smoothing framework for nonconvex‑nonconcave min‑sum‑max problems with applications to Wasserstein distributionally robust optimization.
- Wei Liu, Qihang Lin, Yangyang Xu, Lower complexity bound of first‑order methods for affinely constrained composite non‑convex non‑smooth problems.
- Wei Liu, Xin Liu, Michael K. Ng, Zaikun Zhang, A graph‑partitioning based continuous optimization approach to semi‑supervised clustering problems.
- Hari Dahal, Wei Liu, Yangyang Xu, Damped proximal augmented Lagrangian method for weakly‑convex problems with convex constraints. Mathematical Programming Computation, 2026.
- Wei Liu, Qihang Lin, Yangyang Xu, First‑order methods for affinely constrained composite non‑convex non‑smooth problems: Lower complexity bound and near‑optimal methods. Mathematics of Operations Research.
- Wei Liu, Xin Liu, and Xiaojun Chen, An inexact augmented Lagrangian algorithm for training leaky ReLU neural network with group sparsity. Journal of Machine Learning Research, 2023.
- Wei Liu, Xin Liu, and Xiaojun Chen, Linearly‑constrained nonsmooth optimization for training autoencoders. SIAM Journal on Optimization, 2022.
Conference publications and preprints
- Wei Liu, Anweshit Panda, Ujwal Pandey, Haven Cook, George Slota, Naigang Wang, Jie Chen, Yangyang Xu, LoDAdaC: a unified local training‑based decentralized framework with Adam‑type updates and compressed communication. Accepted by Transactions on Machine Learning Research.
- Wei Liu, Anweshit Panda, Ujwal Pandey, Christopher Brissette, Yikang Shen, George M. Slota, Naigang Wang, Jie Chen, Yangyang Xu, Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization. Accepted by Transactions on Machine Learning Research.
Current interests (in decreasing order of priority)
- Computing directional stationary points of nonconvex nonconcave minimax problems.
- Stochastic first‑order methods for min‑sum‑max problems (iteration complexity, convergence results).
- Decentralized optimization.
- Lower bounds for functionally constrained problems.
- Applications: Wasserstein distributionally robust problems, fairness‑constrained problems, large language models.
Ph.D. Thesis
- Optimization in Machine Learning: From Semi‑supervised Learning to Deep Learning (in Chinese). [PDF] [Slides]