Test Code Cov NlpProgram #275

Closed
wants to merge 1 commit
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -22,7 +22,7 @@ jobs:
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- uses: actions/cache@v1
- uses: actions/cache@v3
env:
cache-name: cache-artifacts
with:
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,7 +1,7 @@
name = "DiffOpt"
uuid = "930fe3bc-9c6b-11ea-2d94-6184641e85e7"
authors = ["Akshay Sharma", "Mathieu Besançon", "Joaquim Dias Garcia", "Benoît Legat", "Oscar Dowson"]
version = "0.4.3"
authors = ["Akshay Sharma", "Mathieu Besançon", "Joaquim Dias Garcia", "Benoît Legat", "Oscar Dowson", "Andrew Rosemberg"]
version = "0.5.0"

[deps]
BlockDiagonals = "0a1fb500-61f7-11e9-3c65-f5ef3456f9f0"
8 changes: 4 additions & 4 deletions docs/src/index.md
@@ -1,10 +1,10 @@
# DiffOpt.jl

[DiffOpt.jl](https://github.com/jump-dev/DiffOpt.jl) is a package for differentiating convex optimization program ([JuMP.jl](https://github.com/jump-dev/JuMP.jl) or [MathOptInterface.jl](https://github.com/jump-dev/MathOptInterface.jl) models) with respect to program parameters. Note that this package does not contain any solver.
[DiffOpt.jl](https://github.com/jump-dev/DiffOpt.jl) is a package for differentiating convex and non-convex optimization programs ([JuMP.jl](https://github.com/jump-dev/JuMP.jl) or [MathOptInterface.jl](https://github.com/jump-dev/MathOptInterface.jl) models) with respect to program parameters. Note that this package does not contain any solver.
This package has two major backends, available via the `reverse_differentiate!` and `forward_differentiate!` methods, to differentiate models (quadratic or conic) with optimal solutions; a minimal sketch follows the note below.

!!! note
Currently supports *linear programs* (LP), *convex quadratic programs* (QP) and *convex conic programs* (SDP, SOCP, exponential cone constraints only).
Currently supports *linear programs* (LP), *convex quadratic programs* (QP), *convex conic programs* (SDP, SOCP, exponential cone constraints only), and *general nonlinear programs* (NLP).
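For a quick sense of the two modes, here is a minimal sketch. It assumes HiGHS as the solver and uses an illustrative LP; any supported solver and model class works the same way:

```julia
using JuMP, DiffOpt, HiGHS

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, x[1:2] >= 0)
@constraint(model, cons, x[1] + x[2] <= 1)
@objective(model, Min, x[1] - x[2])
optimize!(model)

# Forward mode: set a perturbation of the objective, then query the
# induced sensitivity of the optimal solution.
MOI.set(model, DiffOpt.ForwardObjectiveFunction(), 1.0 * x[1] + 1.0 * x[2])
DiffOpt.forward_differentiate!(model)
dx = MOI.get.(model, DiffOpt.ForwardVariablePrimal(), x)

# Reverse mode: seed the optimal solution, then query the resulting
# sensitivity of the objective function.
DiffOpt.empty_input_sensitivities!(model)
MOI.set(model, DiffOpt.ReverseVariablePrimal(), x[1], 1.0)
DiffOpt.reverse_differentiate!(model)
dobj = MOI.get(model, DiffOpt.ReverseObjectiveFunction())
```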


## Installation
@@ -16,8 +16,8 @@ DiffOpt can be installed through the Julia package manager:

## Why are Differentiable optimization problems important?

Differentiable optimization is a promising field of convex optimization and has many potential applications in game theory, control theory and machine learning (specifically deep learning - refer [this video](https://www.youtube.com/watch?v=NrcaNnEXkT8) for more).
Recent work has shown how to differentiate specific subclasses of convex optimization problems. But several applications remain unexplored (refer section 8 of this [really good thesis](https://github.com/bamos/thesis)). With the help of automatic differentiation, differentiable optimization can have a significant impact on creating end-to-end differentiable systems to model neural networks, stochastic processes, or a game.
Differentiable optimization is a promising field of constrained optimization and has many potential applications in game theory, control theory and machine learning (specifically deep learning; see [this video](https://www.youtube.com/watch?v=NrcaNnEXkT8) for more).
Recent work has shown how to differentiate specific subclasses of constrained optimization problems, but several applications remain unexplored (see Section 8 of this [really good thesis](https://github.com/bamos/thesis)). With the help of automatic differentiation, differentiable optimization can have a significant impact on creating end-to-end differentiable systems to model neural networks, stochastic processes, or games.


## Contributing
35 changes: 29 additions & 6 deletions docs/src/manual.md
@@ -1,10 +1,6 @@
# Manual

!!! note
As of now, this package only works for optimization models that can be written either in convex conic form or convex quadratic form.


## Supported objectives & constraints - `QuadraticProgram` backend
## Supported objectives & constraints - scheme 1

For the `QuadraticProgram` backend, the package supports the following `Function-in-Set` constraints:

@@ -52,6 +48,33 @@ and the following objective types:

Other conic sets such as `RotatedSecondOrderCone` and `PositiveSemidefiniteConeSquare` are supported through bridges.

## Supported objectives & constraints - `NonlinearProgram` backend

For the `NonlinearProgram` backend, the package supports the following `Function-in-Set` constraints:

| MOI Function | MOI Set |
|:-------|:---------------|
| `VariableIndex` | `GreaterThan` |
| `VariableIndex` | `LessThan` |
| `VariableIndex` | `EqualTo` |
| `ScalarAffineFunction` | `GreaterThan` |
| `ScalarAffineFunction` | `LessThan` |
| `ScalarAffineFunction` | `EqualTo` |
| `ScalarQuadraticFunction` | `GreaterThan` |
| `ScalarQuadraticFunction` | `LessThan` |
| `ScalarQuadraticFunction` | `EqualTo` |
| `ScalarNonlinearFunction` | `GreaterThan` |
| `ScalarNonlinearFunction` | `LessThan` |
| `ScalarNonlinearFunction` | `EqualTo` |

and the following objective types (a short sketch follows the tables below):

| MOI Function |
|:-------:|
| `VariableIndex` |
| `ScalarAffineFunction` |
| `ScalarQuadraticFunction` |
| `ScalarNonlinearFunction` |
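
A minimal sketch touching several of these function-set pairs; Ipopt is assumed as the nonlinear solver, and the model itself is illustrative only:

```julia
using JuMP, DiffOpt, Ipopt

model = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer))
set_silent(model)
@variable(model, x >= 0)                    # VariableIndex-in-GreaterThan
@variable(model, y)
@constraint(model, 2x + y <= 4)             # ScalarAffineFunction-in-LessThan
@constraint(model, x^2 + y^2 <= 2)          # ScalarQuadraticFunction-in-LessThan
@constraint(model, exp(x) + y == 2)         # ScalarNonlinearFunction-in-EqualTo
@objective(model, Min, (x - 1)^2 + exp(y))  # ScalarNonlinearFunction objective
optimize!(model)
```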

## Creating a differentiable MOI optimizer

@@ -68,7 +91,7 @@ DiffOpt requires taking projections and finding projection gradients of vectors
## Conic problem formulation

!!! note
As of now, the package is using `SCS` geometric form for affine expressions in cones.
As of now, when defining a conic or convex quadratic problem, the package is using `SCS` geometric form for affine expressions in cones.

Consider a convex conic optimization problem in its primal (P) and dual (D) forms:
```math
\begin{align*}
\textbf{(P)} \quad \min_{x \in \mathbb{R}^n} \quad & c^\top x  & \qquad \textbf{(D)} \quad \min_{y \in \mathbb{R}^m} \quad & b^\top y \\
\text{s.t.} \quad & Ax + s = b & \text{s.t.} \quad & A^\top y + c = 0 \\
& s \in \mathcal{K} & & y \in \mathcal{K}^*
\end{align*}
```
where ``\mathcal{K} \subseteq \mathbb{R}^m`` is a closed convex cone and ``\mathcal{K}^*`` its dual cone.
2 changes: 1 addition & 1 deletion docs/src/reference.md
@@ -4,5 +4,5 @@
```

```@autodocs
Modules = [DiffOpt, DiffOpt.QuadraticProgram, DiffOpt.ConicProgram]
Modules = [DiffOpt, DiffOpt.QuadraticProgram, DiffOpt.ConicProgram, DiffOpt.NonLinearProgram]
```
62 changes: 62 additions & 0 deletions docs/src/usage.md
@@ -56,3 +56,65 @@ MOI.set(model, DiffOpt.ForwardObjectiveFunction(), ones(2) ⋅ x)
DiffOpt.forward_differentiate!(model)
grad_x = MOI.get.(model, DiffOpt.ForwardVariablePrimal(), x)
```

3. To differentiate a general nonlinear program, you must use the API for parameterized JuMP models. For example, consider the following nonlinear program:

```julia
using JuMP, DiffOpt, Ipopt

model = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer))
set_silent(model)

p_val = 4.0
pc_val = 2.0
@variable(model, x)
@variable(model, p in Parameter(p_val))
@variable(model, pc in Parameter(pc_val))
@constraint(model, cons, pc * x >= 3 * p)
@objective(model, Min, x^4)
optimize!(model)
@show value(x) ≈ 3 * p_val / pc_val

# the function is
# x(p, pc) = 3p / pc
# hence,
# dx/dp = 3 / pc
# dx/dpc = -3p / pc^2

# First, try forward mode AD

# differentiate w.r.t. p
direction_p = 3.0
MOI.set(model, DiffOpt.ForwardConstraintSet(), ParameterRef(p), Parameter(direction_p))
DiffOpt.forward_differentiate!(model)
@show MOI.get(model, DiffOpt.ForwardVariablePrimal(), x) ≈ direction_p * 3 / pc_val

# update p and pc
p_val = 2.0
pc_val = 6.0
set_parameter_value(p, p_val)
set_parameter_value(pc, pc_val)
# re-optimize
optimize!(model)
# check solution
@show value(x) ≈ 3 * p_val / pc_val

# stop differentiating with respect to p
DiffOpt.empty_input_sensitivities!(model)
# differentiate w.r.t. pc
direction_pc = 10.0
MOI.set(model, DiffOpt.ForwardConstraintSet(), ParameterRef(pc), Parameter(direction_pc))
DiffOpt.forward_differentiate!(model)
@show abs(MOI.get(model, DiffOpt.ForwardVariablePrimal(), x) -
-direction_pc * 3 * p_val / pc_val^2) < 1e-5

# It is always good practice to clear previously set sensitivities.
DiffOpt.empty_input_sensitivities!(model)
# Now, reverse-mode AD:
direction_x = 10.0
MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, direction_x)
DiffOpt.reverse_differentiate!(model)
@show MOI.get(model, DiffOpt.ReverseConstraintSet(), ParameterRef(p)).value ≈ direction_x * 3 / pc_val
@show abs(MOI.get(model, DiffOpt.ReverseConstraintSet(), ParameterRef(pc)).value -
-direction_x * 3 * p_val / pc_val^2) < 1e-5
```
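
As a further sketch, these reverse-mode sensitivities can drive a simple gradient loop on a parameter; the target value and step size below are hypothetical:

```julia
# Hypothetical: steer x toward a target by gradient descent on p,
# using d(loss)/dp obtained from reverse-mode differentiation.
target = 0.5
for _ in 1:20
    optimize!(model)
    DiffOpt.empty_input_sensitivities!(model)
    # seed with d(loss)/dx for loss = (x - target)^2 / 2
    MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, value(x) - target)
    DiffOpt.reverse_differentiate!(model)
    dp = MOI.get(model, DiffOpt.ReverseConstraintSet(), ParameterRef(p)).value
    set_parameter_value(p, parameter_value(p) - 0.1 * dp)
end
```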
8 changes: 8 additions & 0 deletions src/DiffOpt.jl
@@ -27,6 +27,7 @@ include("bridges.jl")

include("QuadraticProgram/QuadraticProgram.jl")
include("ConicProgram/ConicProgram.jl")
include("NonLinearProgram/NonLinearProgram.jl")

"""
add_all_model_constructors(model)
@@ -37,6 +38,13 @@ Add all constructors of [`AbstractModel`](@ref) defined in this package to
function add_all_model_constructors(model)
add_model_constructor(model, QuadraticProgram.Model)
add_model_constructor(model, ConicProgram.Model)
add_model_constructor(model, NonLinearProgram.Model)
return
end

# Use LU with inertia correction as the default factorization routine for
# the linear systems solved by the NonLinearProgram backend.
function add_default_factorization(model)
model.input_cache.factorization =
NonLinearProgram._lu_with_inertia_correction
return
end
