
APG: Accelerated Proximal Gradient Algorithm

高弘光
2023-12-01
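APG minimizes a composite objective min_x f(x) + g(x), where f is beta-smooth and g has a cheap proximal operator. Each iteration takes a proximal gradient step of size 1/beta from an extrapolated point y_k and then applies Nesterov momentum; with the k/(k+3) weighting used in the code below, this attains the O(1/k^2) convergence rate of accelerated methods, versus O(1/k) for plain proximal gradient. In the notation of the implementation:

    x_{k+1} = prox_g(y_k - (1/beta) * grad_f(y_k), 1/beta)
    y_{k+1} = x_{k+1} + k/(k+3) * (x_{k+1} - x_k)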
import numpy as np
from numpy.linalg import norm


# func_f is the Huber loss, func_g is the l1 loss, grad_f is the gradient,
# prox_g is the proximal operator, and beta_f is gamma (the smoothness constant)

def optimizeWithAPGD(x0, func_f, func_g, grad_f, prox_g, beta_f, tol=1e-6, max_iter=1000):
    """
    Optimize with Accelerated Proximal Gradient Descent Method
        min_x f(x) + g(x)
    where f is beta-smooth and g has an inexpensive proximal operator.

    input
    -----
    x0 : array_like
        Starting point for the solver
    func_f : function
        Input x and return the function value of f
    func_g : function
        Input x and return the function value of g
    grad_f : function
        Input x and return the gradient of f
    prox_g : function
        Input x and a step size, and return the proximal point
    beta_f : float
        beta smoothness constant for f
    tol : float, optional
        Tolerance on the iterate-difference norm for terminating the solver.
    max_iter : int, optional
        Maximum number of iterations before terminating the solver.

    output
    ------
    x : array_like
        Final solution
    obj_his : array_like
        Objective function value convergence history
    err_his : array_like
        Convergence history of the iterate-difference norm ||x_{k+1} - x_k||
    exit_flag : int
        0, iterate-difference norm fell below `tol`
        1, exceeded the maximum number of iterations
        2, others
    """
    # initial information
    x = x0.copy()
    y = x0.copy()  # extrapolated point
    g = grad_f(y)
    t = 0.0        # momentum counter
    #
    step_size = 1.0 / beta_f  # fixed step 1/beta for a beta-smooth f
    # not recording the initial point, since we have no optimality measure for it
    obj_his = np.zeros(max_iter)
    err_his = np.zeros(max_iter)

    # start iteration
    iter_count = 0
    err = tol + 1.0
    while err >= tol:
        # accelerated proximal gradient step:
        # proximal gradient step from the extrapolated point y
        x_new = prox_g(y - step_size * g, step_size)
        # Nesterov extrapolation with momentum weight t / (t + 3)
        y_new = x_new + t / (t + 3.0) * (x_new - x)
        t_new = t + 1.0
        #
        # update information
        obj = func_f(x_new) + func_g(x_new)
        err = norm(x - x_new)  # optimality proxy: distance between successive iterates
        #
        np.copyto(x, x_new)
        np.copyto(y, y_new)
        t = t_new
        g = grad_f(y)
        #
        obj_his[iter_count] = obj
        err_his[iter_count] = err
        #
        # check if exceed maximum number of iteration
        iter_count += 1
        if iter_count >= max_iter:
            print('Accelerated proximal gradient reached the maximum number of iterations')
            return x, obj_his[:iter_count], err_his[:iter_count], 1
    #
    print('Converged: err = {:.3e} < tol = {:.3e}'.format(err, tol))
    return x, obj_his[:iter_count], err_his[:iter_count], 0
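
As a usage sketch matching the comment above (not from the original post), here is one way to instantiate the callbacks with a Huber-loss data term and an l1 penalty; the data A and b, the Huber threshold delta, and the penalty weight lam are illustrative assumptions:

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))   # assumed toy data
b = rng.standard_normal(50)
delta = 1.0   # Huber threshold (assumed)
lam = 0.1     # l1 penalty weight (assumed)

def func_f(x):
    # Huber loss of the residual A x - b
    r = A @ x - b
    a = np.abs(r)
    return np.sum(np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta)))

def grad_f(x):
    # chain rule: A^T huber'(r), with huber'(r) = clip(r, -delta, delta)
    return A.T @ np.clip(A @ x - b, -delta, delta)

def func_g(x):
    return lam * np.sum(np.abs(x))

def prox_g(x, s):
    # prox of s * lam * ||.||_1 is soft-thresholding at level s * lam
    return np.sign(x) * np.maximum(np.abs(x) - s * lam, 0.0)

# |huber''| <= 1, so grad_f is Lipschitz with constant ||A||_2^2
beta_f = np.linalg.norm(A, 2) ** 2

x_opt, obj_his, err_his, exit_flag = optimizeWithAPGD(
    np.zeros(20), func_f, func_g, grad_f, prox_g, beta_f)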

PG and AG optimizers are also available at the same URL:

https://github.com/interesting-courses/UW_coursework/blob/987e336e70482622c5d03428b5532349483f87f4/amath515/hw2/solvers.py
