Seminar Series of the Institute of Mathematics and Interdisciplinary Sciences (Dr. Linhao Song, Central South University; Dr. Kun Cheng, Beijing Jiaotong University)
Source: System Administrator    Posted: 2025-06-26
Title 1: Learning Fréchet Differentiable Operators via Prespecified Neural Operators
Speaker: Dr. Linhao Song, Central South University
Time: July 1, 2025 (Tuesday), 9:00-10:00
Venue: Room 20-200
Abstract: Neural operators, built on neural networks, have emerged as a crucial tool in deep learning for approximating nonlinear operators. The present work develops an approximation and generalization theory for neural operators with prespecified encoders and decoders, improving and extending previous work by focusing on target operators that are Fréchet differentiable. To exploit the smoothness of the target operator, we expand it by the Taylor formula and apply a re-discretization technique. This enables us to derive an upper bound on the approximation error for Fréchet differentiable operators and to achieve improved approximation rates, for suitably chosen classes of encoders and decoders, compared with those for Lipschitz continuous operators. Furthermore, we establish an upper bound on the generalization error of the empirical risk minimizer induced by prespecified neural operators. Explicit learning rates are derived when the encoder-decoder pairs are chosen via polynomial approximation and principal component analysis. These findings quantitatively demonstrate how the reconstruction errors of the infinite-dimensional spaces and the smoothness of the target operator influence learning performance.
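The architecture described in the abstract fixes the encoder and decoder in advance and learns only a finite-dimensional map between them. The following minimal Python sketch illustrates that general pattern; the pointwise-sampling encoder, the PCA decoder, and the network sizes are illustrative assumptions, not the speaker's construction.

```python
import numpy as np
import torch
import torch.nn as nn

class PrespecifiedNeuralOperator(nn.Module):
    """Approximates an operator T as decoder(net(encoder(u))), where the
    encoder (pointwise sampling of the input function) and the decoder
    (a fixed PCA basis of sampled outputs) are prespecified; only the
    finite-dimensional network is trained."""

    def __init__(self, encode_dim, pca_basis):
        super().__init__()
        # pca_basis: (r, n_grid) orthonormal rows from a PCA/SVD of
        # training outputs sampled on a grid -- the prespecified decoder.
        self.register_buffer(
            "decoder_basis", torch.as_tensor(pca_basis, dtype=torch.float32)
        )
        r = pca_basis.shape[0]
        self.net = nn.Sequential(
            nn.Linear(encode_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, r),
        )

    def forward(self, u_enc):
        # u_enc: (batch, encode_dim) pointwise samples of the input
        # function, i.e. the output of the prespecified encoder.
        coeffs = self.net(u_enc)             # learned finite-dimensional map
        return coeffs @ self.decoder_basis   # decode to grid values of T(u)

# Illustrative decoder construction: top-r principal directions of
# (placeholder) training outputs sampled on a 64-point grid.
Y_train = np.random.randn(200, 64)
_, _, Vt = np.linalg.svd(Y_train - Y_train.mean(0), full_matrices=False)
model = PrespecifiedNeuralOperator(encode_dim=32, pca_basis=Vt[:10])
```

Training the network by minimizing the mean squared error over encoded input-output pairs roughly corresponds to the empirical risk minimizer whose generalization error the talk analyzes.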
Biography: Linhao Song received his Ph.D. from the School of Mathematical Sciences, Beihang University, during which he was a jointly supervised student at the School of Data Science, City University of Hong Kong. He is currently a lecturer at Central South University. His research interests are statistical learning theory and deep learning theory, and he has published several papers in journals such as the Journal of Fourier Analysis and Applications and Neural Networks.
Title 2: Regularized Reduced-Rank Regression for Structured Output Prediction
Speaker: Dr. Kun Cheng, Beijing Jiaotong University
Time: July 1, 2025 (Tuesday), 10:00-11:00
Venue: Room 20-200
Abstract: Reduced-rank regression (RRR) has been widely used to strengthen the dependence among multiple outputs. In this talk, we introduce a regularized vector-valued RRR approach, which plays an important role in predicting multiple output variables with structure. The vector-valued RRR estimator is obtained by minimizing the empirical squared reproducing kernel Hilbert space (RKHS) distance between the output kernel feature and all r-dimensional subspaces of a vector-valued RKHS. The algorithm is easily implemented via the kernel trick. We establish the learning rate of the vector-valued RRR estimator under mild assumptions. Moreover, as a reduced-dimensional approximation of the output kernel regression function, the estimator converges in probability to the output regression function when the rank r tends to infinity at an appropriate rate. This implies the consistency of the structured predictor in general settings, in particular in the misspecified case where the true regression function is not contained in the hypothesis space. Simulations and real data analysis illustrate the efficiency of the proposed method.
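To make the rank-constrained estimator concrete, here is a minimal Python sketch of regularized reduced-rank kernel regression: full kernel ridge coefficients followed by a rank-r truncation of the fitted outputs. It simplifies the talk's vector-valued RKHS formulation to finite-dimensional outputs with a Gaussian input kernel; the kernel choice, the regularization parameter, and the truncation step are illustrative assumptions, not the speaker's exact estimator.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    # Gram matrix of the Gaussian kernel exp(-gamma * ||x - x'||^2).
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_reduced_rank(X, Y, r, lam=1e-2, gamma=1.0):
    """Illustrative variant of regularized reduced-rank regression:
    kernel ridge regression, then projection of the fitted outputs onto
    their top-r singular subspace to enforce the rank constraint."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    A = np.linalg.solve(K + lam * n * np.eye(n), Y)   # full KRR coefficients
    Y_hat = K @ A                                     # fitted outputs
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]         # projector onto the r-dim output subspace
    return A @ P                  # rank-r coefficient matrix

def predict(X_train, X_new, A_r, gamma=1.0):
    # Rank-r prediction at new inputs via the kernel trick.
    return gaussian_kernel(X_new, X_train, gamma) @ A_r
```

As r grows (with a suitably tuned lam), the rank-r predictor approaches the full kernel ridge solution, mirroring the convergence-in-r behavior stated in the abstract.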
Biography: Kun Cheng is a lecturer at the School of Mathematics and Statistics, Beijing Jiaotong University. His research interests include statistical learning theory, wavelet analysis, and complex data analysis and its applications. He has published several papers in leading journals in machine learning and in probability and statistics.
Host: Daohong Xiang