
feat: added gfirst to ADMML2 #196

Merged · 1 commit · Jan 15, 2025
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -42,7 +42,7 @@ as making new operators and very much encouraged.

Ready to contribute?

1. Fork the `PyLops` repo.
1. Fork the `PyProximal` repo.

2. Clone your fork locally:
```
4 changes: 0 additions & 4 deletions README.md
@@ -9,10 +9,6 @@
[![DOI](https://joss.theoj.org/papers/10.21105/joss.06326/status.svg)](https://doi.org/10.21105/joss.06326)


:vertical_traffic_light: :vertical_traffic_light: This library is under early development.
Expect things to constantly change until version v1.0.0. :vertical_traffic_light: :vertical_traffic_light:


## Objective
This Python library provides all the needed building blocks for solving
non-smooth convex optimization problems using the so-called **proximal algorithms**.
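As context for the objective stated in the README, the canonical example of a proximal operator is soft-thresholding, the proximal operator of the scaled :math:`\ell_1` norm. The sketch below is illustrative only and does not use the PyProximal API:

```python
import numpy as np

def prox_l1(x, tau):
    # Soft-thresholding: proximal operator of tau * ||x||_1,
    # i.e. argmin_u tau*||u||_1 + 0.5*||u - x||^2
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# each entry is shrunk toward zero by tau and clipped at zero
print(prox_l1(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]), 1.0))
```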
4 changes: 2 additions & 2 deletions examples/README.txt
@@ -1,6 +1,6 @@
.. _general_examples:

PyLops gallery
--------------
PyProximal gallery
------------------

Below is a gallery of examples
26 changes: 23 additions & 3 deletions pyproximal/__init__.py
@@ -1,9 +1,29 @@
"""
PyProx
======
PyProximal
==========

This Python library provides all the needed building blocks for solving
non-smooth convex optimization problems using the so-called proximal algorithms.

PyProximal provides
1. A general construct for creating proximal operators
2. An extensive set of commonly used proximal operators
3. A set of solvers for composite objective functions with
differentiable and proximable functions.

Available subpackages
---------------------
projection
Project Operators
proximal
Proximal Operators
optimization
Solvers
utils
Utility routines

....
"""

from .ProxOperator import ProxOperator
from .proximal import *

10 changes: 5 additions & 5 deletions pyproximal/optimization/__init__.py
@@ -5,7 +5,7 @@
The subpackage optimization provides an extensive set of proximal algorithms
to be used with PyLops linear operators and PyProximal proximal operators.

A list of solvers in ``pylops.optimization.proximal`` using only proximal
A list of solvers in ``pyproximal.optimization.proximal`` using only proximal
operators:

ProximalPoint Proximal point algorithm (or proximal min.)
@@ -14,13 +14,13 @@
AndersonProximalGradient Proximal gradient algorithm with Anderson acceleration
GeneralizedProximalGradient Generalized Proximal gradient algorithm
HQS Half Quadratic Splitting
ADMM Alternating Direction Method of Multipliers
ADMM Alternating Direction Method of Multipliers (ADMM)
ADMML2 ADMM with L2 misfit term
LinearizedADMM Linearized ADMM
TwIST Two-step Iterative Shrinkage/Threshold
PlugAndPlay Plug-and-Play Prior with ADMM

A list of solvers in ``pylops.optimization.proximaldual`` using both proximal
A list of solvers in ``pyproximal.optimization.proximaldual`` using both proximal
and dual proximal operators:

PrimalDual Primal-Dual algorithm
@@ -31,8 +31,8 @@

Bregman Bregman iterations

Additional solvers are in ``pylops.optimization.sr3`` amd
``pylops.optimization.palm``:
Additional solvers are in ``pyproximal.optimization.sr3`` and
``pyproximal.optimization.palm``:

SR3 Sparse Relaxed Regularized algorithm
PALM Proximal Alternating Linearized Minimization
50 changes: 31 additions & 19 deletions pyproximal/optimization/primal.py
@@ -576,9 +576,9 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau,

Parameters
----------
proxfs : :obj:`list of pyproximal.ProxOperator`
proxfs : :obj:`list`
Proximal operators of the :math:`f_i` functions (must have ``grad`` implemented)
proxgs : :obj:`list of pyproximal.ProxOperator`
proxgs : :obj:`list`
Proximal operators of the :math:`g_j` functions
x0 : :obj:`numpy.ndarray`
Initial vector
@@ -630,7 +630,6 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau,
if len(weights) != len(proxgs) or np.sum(weights) != 1.:
raise ValueError(f'omega={weights} must be an array of size {len(proxgs)} '
f'summing to 1')
print(weights)

# check if epgs is a vector
if np.asarray(epsg).size == 1.:
@@ -649,9 +648,9 @@
'Proximal operators (f): %s\n'
'Proximal operators (g): %s\n'
'tau = %10e\nepsg = %s\tniter = %d\n' % ([type(proxf) for proxf in proxfs],
[type(proxg) for proxg in proxgs],
0 if tau is None else tau,
epsg_print, niter))
[type(proxg) for proxg in proxgs],
0 if tau is None else tau,
epsg_print, niter))
head = ' Itn x[0] f g J=f+eps*g'
print(head)

@@ -857,19 +856,19 @@ def ADMM(proxf, proxg, x0, tau, niter=10, gfirst=False,
function that has a known proximal operator.

ADMM can also solve the problem of the form above with a more general
constraint: :math:`\mathbf{Ax}+\mathbf{Bz}=c`. This routine implements
constraint: :math:`\mathbf{Ax}+\mathbf{Bz}=\mathbf{c}`. This routine implements
the special case where :math:`\mathbf{A}=\mathbf{I}`, :math:`\mathbf{B}=-\mathbf{I}`,
and :math:`c=0`, as a general algorithm can be obtained for any choice of
and :math:`\mathbf{c}=\mathbf{0}`, as a general algorithm can be obtained for any choice of
:math:`f` and :math:`g` provided they have a known proximal operator.

On the other hand, for more general choice of :math:`\mathbf{A}`, :math:`\mathbf{B}`,
and :math:`c`, the iterations are not generalizable, i.e. thye depends on the choice of
:math:`f` and :math:`g` functions. For this reason, we currently only provide an additional
and :math:`\mathbf{c}`, the iterations are not generalizable, i.e. they depend on the choice of
the :math:`f` and :math:`g` functions. For this reason, we currently only provide an additional
solver for the special case where :math:`f` is a :class:`pyproximal.proximal.L2`
operator with a linear operator :math:`\mathbf{G}` and data :math:`\mathbf{y}`,
:math:`\mathbf{B}=-\mathbf{I}` and :math:`c=0`,
:math:`\mathbf{B}=-\mathbf{I}` and :math:`\mathbf{c}=\mathbf{0}`,
called :func:`pyproximal.optimization.primal.ADMML2`. Note that for the very same choice
of :math:`\mathbf{B}` and :math:`c`, the :func:`pyproximal.optimization.primal.LinearizedADMM`
of :math:`\mathbf{B}` and :math:`\mathbf{c}`, the :func:`pyproximal.optimization.primal.LinearizedADMM`
can also be used (and this does not require a specific choice of :math:`f`).
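The scaled-form iterations for this special case (:math:`\mathbf{A}=\mathbf{I}`, :math:`\mathbf{B}=-\mathbf{I}`, :math:`\mathbf{c}=\mathbf{0}`) can be sketched in a few lines of NumPy. This is a standalone illustration, not the library implementation: ``proxf`` and ``proxg`` are assumed to be plain callables ``(v, tau) -> ndarray`` rather than PyProximal operators, and ``gfirst`` swaps the order of the two proximal updates as the solver's flag does:

```python
import numpy as np

def admm_sketch(proxf, proxg, x0, tau, niter=100, gfirst=False):
    # Scaled-form ADMM for min f(x) + g(z)  s.t.  x - z = 0
    x = x0.copy()
    z = np.zeros_like(x0)
    u = np.zeros_like(x0)  # scaled dual variable
    for _ in range(niter):
        if gfirst:
            z = proxg(x + u, tau)  # z-update first
            x = proxf(z - u, tau)  # then x-update
        else:
            x = proxf(z - u, tau)  # x-update first
            z = proxg(x + u, tau)  # then z-update
        u = u + x - z              # scaled dual ascent
    return x, z
```

For example, with ``proxf(v, t) = (v + t*b) / (1 + t)`` (proximal of :math:`0.5\|x-\mathbf{b}\|^2`) and soft-thresholding for ``proxg``, both orderings drive ``z`` to the soft-thresholded ``b``, the minimizer of :math:`0.5\|x-\mathbf{b}\|^2 + \|x\|_1`.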

Parameters
@@ -966,7 +965,8 @@ def ADMM(proxf, proxg, x0, tau, niter=10, gfirst=False,
return x, z


def ADMML2(proxg, Op, b, A, x0, tau, niter=10, callback=None, show=False, **kwargs_solver):
def ADMML2(proxg, Op, b, A, x0, tau, niter=10, gfirst=False,
callback=None, show=False, **kwargs_solver):
r"""Alternating Direction Method of Multipliers for L2 misfit term

Solves the following minimization problem using Alternating Direction
@@ -997,6 +997,9 @@ def ADMML2(proxg, Op, b, A, x0, tau, niter=10, callback=None, show=False, **kwar
to guarantee convergence: :math:`\tau \in (0, 1/\lambda_{max}(\mathbf{A}^H\mathbf{A})]`.
niter : :obj:`int`, optional
Number of iterations of iterative scheme
gfirst : :obj:`bool`, optional
Apply the proximal operator of ``g`` first (``True``) or solve the L2
misfit subproblem first (``False``)
callback : :obj:`callable`, optional
Function with signature (``callback(x)``) to call after each iteration
where ``x`` is the current model vector
@@ -1043,12 +1046,21 @@ def ADMML2(proxg, Op, b, A, x0, tau, niter=10, callback=None, show=False, **kwar
x = x0.copy()
u = z = np.zeros(A.shape[0], dtype=A.dtype)
for iiter in range(niter):
# create augumented system
x = regularized_inversion(Op, b, [A, ], x0=x,
dataregs=[z - u, ], epsRs=[sqrttau, ],
**kwargs_solver)[0]
Ax = A @ x
z = proxg.prox(Ax + u, tau)
if gfirst:
Ax = A @ x
z = proxg.prox(Ax + u, tau)

# solve augmented system
x = regularized_inversion(Op, b, [A, ], x0=x,
dataregs=[z - u, ], epsRs=[sqrttau, ],
**kwargs_solver)[0]
else:
# solve augmented system
x = regularized_inversion(Op, b, [A, ], x0=x,
dataregs=[z - u, ], epsRs=[sqrttau, ],
**kwargs_solver)[0]
Ax = A @ x
z = proxg.prox(Ax + u, tau)
u = u + Ax - z

# run callback
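The `gfirst` branching added above can be mimicked end to end with a dense least-squares solve standing in for `regularized_inversion`. Everything below is a hypothetical standalone sketch (NumPy only, `admml2_sketch` is not a library function): `proxg` is a plain callable, and the x-update solves the augmented system with the regularization row weighted by `sqrt(tau)`, mirroring `epsRs=[sqrttau]` in the diff:

```python
import numpy as np

def admml2_sketch(proxg, Op, b, A, x0, tau, niter=10, gfirst=False):
    # Mimics the ADMML2 loop: the x-update is the least-squares solve
    #   [Op; sqrt(tau)*A] x ~ [b; sqrt(tau)*(z - u)]
    sqrttau = np.sqrt(tau)
    x = x0.copy()
    z = np.zeros(A.shape[0])
    u = np.zeros(A.shape[0])
    for _ in range(niter):
        if gfirst:
            # proximal step on the previous iterate, then refresh x
            z = proxg(A @ x + u, tau)
        G = np.vstack([Op, sqrttau * A])
        d = np.concatenate([b, sqrttau * (z - u)])
        x = np.linalg.lstsq(G, d, rcond=None)[0]
        if not gfirst:
            z = proxg(A @ x + u, tau)
        u = u + A @ x - z
    return x, z
```

On well-behaved convex problems both orderings reach the same fixed point; the flag only changes whether the proximal step of `g` sees the previous or the freshly updated `x`.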