Symplectic Eigenvalue Problem#

Problem Description#

Given a positive definite matrix \(L \in \mathbb{R}^{2n\times 2n}\), the symplectic eigenvalue problem can be stated as the following optimization problem

\[\begin{split} \begin{aligned} \min_{X \in \mathbb{R}^{2n \times 2p}}\quad & \frac{1}{2} \mathrm{tr}\left( X^\top L X \right) \\ \text{s.t.} \quad & X^\top Q_n X = Q_p, \end{aligned} \end{split}\]

where \(Q_n := \left[ \begin{smallmatrix} {\bf 0}_{n\times n} & I_n\\ -I_n & {\bf 0}_{n\times n} \end{smallmatrix}\right]\). Such a problem is an optimization problem over the symplectic Stiefel manifold, and we show how to solve it with the cdopt package.
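To make the constraint concrete, the following NumPy sketch builds \(Q_n\) and verifies that a simple matrix (selected columns of the identity, a hypothetical example; the name `build_Q` is ours) satisfies \(X^\top Q_n X = Q_p\):

```python
import numpy as np

def build_Q(n):
    # Q_n = [[0, I_n], [-I_n, 0]]
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [-I, Z]])

n, p = 4, 2
# A simple symplectic frame: take the first p "position" columns and
# the first p "momentum" columns of the 2n x 2n identity matrix.
I2n = np.eye(2 * n)
X = np.hstack([I2n[:, :p], I2n[:, n:n + p]])

residual = np.linalg.norm(X.T @ build_Q(n) @ X - build_Q(p))
print(residual)  # 0.0
```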

Importing modules#

We first import all the necessary modules for this optimization problem.

import cdopt  
import torch
import numpy as np
import scipy as sp
from scipy.sparse import csr_matrix, diags
import time

Generating data#

We generate the necessary data. Since \(L\) is a sparse matrix, we use functions from scipy.sparse to accelerate the computation.

n = 1000
p = 10

device = torch.device( "cuda" if torch.cuda.is_available() else "cpu" )
dtype = torch.float64

alpha = 1
L_ori = diags([-1,3,-1], [-1,0,1], shape=(n,n), format='csr')
L = torch.sparse_csr_tensor(L_ori.indptr, L_ori.indices, L_ori.data, device= device, dtype= dtype)
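As a sanity check that \(L\) is indeed positive definite, note that the tridiagonal matrix `diags([-1, 3, -1])` is strictly diagonally dominant with a positive diagonal; its eigenvalues are \(3 - 2\cos(k\pi/(n+1)) > 1\). This can be confirmed numerically on a smaller instance (a dense eigendecomposition is used here for simplicity):

```python
import numpy as np
from scipy.sparse import diags

# Smaller instance of the same tridiagonal matrix; its eigenvalues
# are 3 - 2*cos(k*pi/(n+1)), all strictly greater than 1.
n_small = 100
L_small = diags([-1, 3, -1], [-1, 0, 1], shape=(n_small, n_small)).toarray()
lam_min = np.linalg.eigvalsh(L_small).min()
print(lam_min > 1)  # True
```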

Set functions and problems#

Then we set the objective function and the symplectic Stiefel manifold. Since existing automatic differentiation packages have limited support for operations on sparse matrices, we can manually define the gradient and Hessian-vector product of the objective function when needed.

def obj_fun(X):
    return 0.5 * torch.sum( X * torch.matmul(L, X)  ) 


M = cdopt.manifold_torch.symp_stiefel_torch((n,p), device = device, dtype = dtype)   # The symplectic Stiefel manifold
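For this quadratic objective the derivatives have closed forms: since \(L\) is symmetric, \(\nabla f(X) = LX\), and the Hessian is constant, so \(\nabla^2 f(X)[D] = LD\). A plain-NumPy sketch of these manual derivatives (the function names are illustrative, not cdopt's API):

```python
import numpy as np
from scipy.sparse import diags

n, p = 1000, 10
L_sp = diags([-1, 3, -1], [-1, 0, 1], shape=(n, n), format='csr')

def obj_fun_np(X):
    # f(X) = 0.5 * tr(X^T L X)
    return 0.5 * np.sum(X * (L_sp @ X))

def obj_grad(X):
    # Since L is symmetric, the gradient is L @ X.
    return L_sp @ X

def obj_hvp(X, D):
    # The objective is quadratic, so the Hessian-vector product is L @ D.
    return L_sp @ D
```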

Describe the optimization problem#

The optimization problem can be described by the manifold and the derivatives of the objective function.

problem_test = cdopt.core.problem(M, obj_fun, beta = 'auto')  # describe the optimization problem and set the penalty parameter \beta.

Apply optimization solvers#

After describing the optimization problem, we can directly obtain the function value, gradient, and Hessian-vector product of the constraint dissolving function from the cdopt.core.problem class.

# the vectorized function value, gradient and Hessian-vector product of the constraint dissolving function. Their inputs are numpy 1D array, and their outputs are float or numpy 1D array.
cdf_fun_np = problem_test.cdf_fun_vec_np   
cdf_grad_np = problem_test.cdf_grad_vec_np 
cdf_hvp_np = problem_test.cdf_hvp_vec_np
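Conceptually, such vectorized wrappers reshape scipy's flat 1-D iterates into matrix form, evaluate the underlying function, and flatten the result back. A generic sketch of this idea (this is not cdopt's internal implementation):

```python
import numpy as np

def vectorize(fun, shape):
    # Wrap a matrix-input function so that it accepts and returns 1-D
    # arrays, as scipy.optimize.minimize requires.
    def fun_vec(x):
        out = fun(np.reshape(x, shape))
        # Scalar outputs (function values) pass through; array outputs
        # (gradients) are flattened back to 1-D.
        return np.asarray(out).ravel() if np.ndim(out) > 0 else float(out)
    return fun_vec

# Example with a simple quadratic in matrix form.
f = lambda X: 0.5 * np.sum(X ** 2)
f_vec = vectorize(f, (4, 2))
print(f_vec(np.ones(8)))  # 4.0
```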


## Apply the limited-memory BFGS solver from scipy.optimize
Xinit = problem_test.Xinit_vec_np  # set initial point
# optimize by L-BFGS method
t_start = time.time()
out_msg = sp.optimize.minimize(cdf_fun_np, Xinit,method='L-BFGS-B',jac = cdf_grad_np, options={'disp': None, 'maxcor': 10, 'ftol': 0, 'gtol': 1e-06, 'eps': 0e-08,})
t_end = time.time() - t_start

# Statistics
feas = M.Feas_eval(M.v2m(M.array2tensor(out_msg.x)))   # Feasibility
stationarity = np.linalg.norm(out_msg['jac'],2)   # stationarity

result_lbfgs = [out_msg['fun'], out_msg['nit'], out_msg['nfev'],stationarity,feas, t_end]

# print results
print('Solver   fval         iter   f_eval   stationarity   feasibility    CPU time')
print('& L-BFGS & {:.2e}  & {:}  & {:}    & {:.2e}     & {:.2e}     & {:.2f} \\\\'.format(*result_lbfgs))
Solver   fval         iter   f_eval   stationarity   feasibility    CPU time
& L-BFGS & 5.00e+00  & 683  & 734    & 2.51e-05     & 1.56e-06     & 2.59 \\
# optimize by CG method
t_start = time.time()
out_msg = sp.optimize.minimize(cdf_fun_np, Xinit,method='CG',jac = cdf_grad_np, options={'disp': None,'gtol': 1e-06, 'eps': 0})
t_end = time.time() - t_start

# Statistics
feas = M.Feas_eval(M.v2m(M.array2tensor(out_msg.x)))   # Feasibility
stationarity = np.linalg.norm(out_msg['jac'],2)   # stationarity

result_cg = [out_msg['fun'], out_msg['nit'], out_msg['nfev'],stationarity,feas, t_end]

# print results
print('Solver   fval         iter   f_eval   stationarity   feasibility    CPU time')
print('& CG     & {:.2e}  & {:}  & {:}    & {:.2e}     & {:.2e}     & {:.2f} \\\\'.format(*result_cg))
Solver   fval         iter   f_eval   stationarity   feasibility    CPU time
& CG     & 5.00e+00  & 1187  & 1906    & 1.93e-05     & 1.78e-06     & 6.16 \\
# optimize by Newton GLTR trust-region method
t_start = time.time()
out_msg = sp.optimize.minimize(cdf_fun_np, Xinit, method='trust-krylov',jac = cdf_grad_np, hessp = cdf_hvp_np, options={'gtol': 1e-06, 'disp': False})
t_end = time.time() - t_start

# Statistics
feas = M.Feas_eval(M.v2m(M.array2tensor(out_msg.x)))   # Feasibility
stationarity = np.linalg.norm(out_msg['jac'],2)   # stationarity

result_tr_krylov = [out_msg['fun'], out_msg['nit'], out_msg['nfev'],stationarity,feas, t_end]

# print results
print('Solver   fval         iter   f_eval   stationarity   feasibility    CPU time')
print('& TR-KRY & {:.2e}  & {:}  & {:}    & {:.2e}     & {:.2e}     & {:.2f} \\\\'.format(*result_tr_krylov))
Solver   fval         iter   f_eval   stationarity   feasibility    CPU time
& TR-KRY & 5.00e+00  & 52  & 53    & 9.91e-07     & 9.06e-07     & 21.52 \\
# optimize by Newton conjugate gradient trust-region method  
t_start = time.time()
out_msg = sp.optimize.minimize(cdf_fun_np, Xinit, method='trust-ncg',jac = cdf_grad_np, hessp = cdf_hvp_np, options={'gtol': 1e-06, 'disp': False})
t_end = time.time() - t_start

# Statistics
feas = M.Feas_eval(M.v2m(M.array2tensor(out_msg.x)))   # Feasibility
stationarity = np.linalg.norm(out_msg['jac'],2)   # stationarity

result_tr_ncg = [out_msg['fun'], out_msg['nit'], out_msg['nfev'],stationarity,feas, t_end]

# print results
print('Solver   fval         iter   f_eval   stationarity   feasibility    CPU time')
print('& TR-NCG & {:.2e}  & {:}  & {:}    & {:.2e}     & {:.2e}     & {:.2f} \\\\'.format(*result_tr_ncg))
Solver   fval         iter   f_eval   stationarity   feasibility    CPU time
& TR-NCG & 5.00e+00  & 85  & 86    & 4.10e-07     & 3.75e-07     & 10.16 \\
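The solve-then-collect-statistics pattern above is repeated for every solver, so it can be factored into a small helper. A sketch, with `feas_fn` standing in for the manifold feasibility check `M.Feas_eval` (the wiring to cdopt is assumed here, not taken from the package):

```python
import time
import numpy as np
from scipy.optimize import minimize

def run_solver(method, fun, grad, x0, hessp=None, feas_fn=None, options=None):
    # Run a scipy solver and collect the statistics reported above:
    # [fval, iterations, function evaluations, stationarity, feasibility, time].
    t0 = time.time()
    out = minimize(fun, x0, method=method, jac=grad, hessp=hessp,
                   options=options or {})
    elapsed = time.time() - t0
    stationarity = np.linalg.norm(out.jac, 2)
    feas = feas_fn(out.x) if feas_fn is not None else float('nan')
    return [out.fun, out.nit, out.nfev, stationarity, feas, elapsed]

# Toy usage on an unconstrained quadratic (feasibility is trivially 0).
stats = run_solver('L-BFGS-B',
                   fun=lambda x: 0.5 * np.dot(x, x),
                   grad=lambda x: x,
                   x0=np.ones(5),
                   feas_fn=lambda x: 0.0)
print('fval = {:.2e}'.format(stats[0]))
```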

Reference#

  1. Gao B, Son N T, Absil P-A, Stykel T. Riemannian optimization on the symplectic Stiefel manifold. SIAM Journal on Optimization, 2021, 31(2): 1546-1575.