class02/overview.md (8 additions, 58 deletions)
@@ -4,39 +4,20 @@

**Topic:** Numerical optimization for control (gradient/SQP/QP); ALM vs. interior-point vs. penalty methods

-**Pluto Notebook for all the chapter**: Here is the actual [final chapter](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/class02.html)
-
---

## Overview

This class covers the fundamental numerical optimization techniques essential for optimal control problems. We explore gradient-based methods, Sequential Quadratic Programming (SQP), and various approaches to handling constraints, including Augmented Lagrangian Methods (ALM), interior-point methods, and penalty methods.

-## Learning Objectives
-
-By the end of this class, students will be able to:
-
-- Understand the mathematical foundations of gradient-based optimization
-- Implement Newton's method for unconstrained minimization
-- Apply root-finding techniques for implicit integration schemes
-- Solve equality-constrained optimization problems using Lagrange multipliers
-- Compare and contrast different constraint handling methods (ALM, interior-point, penalty)
-- Implement Sequential Quadratic Programming (SQP) for nonlinear optimization
-
-## Prerequisites

-- Solid understanding of linear algebra and calculus
-- Familiarity with Julia programming
-- Basic knowledge of differential equations
-- Understanding of optimization concepts from Class 1
+The slides for this lecture can be found here: [Lecture Slides (PDF)](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/ISYE_8803___Lecture_2___Slides.pdf)

-## Materials
-
-### Interactive Notebooks
-
-The class is structured around four interactive Jupyter notebooks that build upon each other:
+The Pluto Julia notebook for my final chapter can be found here: [final chapter](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/class02.html)

+Although the main code for the Julia demos is contained in the Pluto notebook above, the following Julia notebooks are the demos I used in the class recording/presentation.
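
The learning objectives deleted in this diff include implementing Newton's method for unconstrained minimization. As a rough illustration of that objective (a minimal sketch of my own on the standard Rosenbrock test function, not code from the course's Pluto notebook), a bare-bones Julia version could look like this:

```julia
# Bare-bones Newton's method for unconstrained minimization (illustrative
# sketch; the names and test problem are my own, not from the course notebook).
using LinearAlgebra

# Rosenbrock objective with hand-coded gradient and Hessian
f(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
g(x) = [-2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2),
        200 * (x[2] - x[1]^2)]
function H(x)
    h11 = 2 - 400 * (x[2] - 3 * x[1]^2)
    h12 = -400 * x[1]
    return [h11 h12; h12 200.0]
end

function newton_minimize(g, H, x0; tol = 1e-10, maxiter = 50)
    x = copy(x0)
    for k in 1:maxiter
        grad = g(x)
        norm(grad) < tol && return x, k   # gradient small enough: stop
        x -= H(x) \ grad                  # pure Newton step: solve H Δx = -∇f
    end
    return x, maxiter
end

xstar, iters = newton_minimize(g, H, [-1.2, 1.0])
println("candidate minimizer ", xstar, " with f = ", f(xstar), " after ", iters, " iterations")
```

A practical solver would add a line search or trust region and a safeguard for indefinite Hessians; this sketch only shows the pure Newton step.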
-- Convergence properties and practical considerations
-
-### Additional Resources
-
-**[Lecture Slides (PDF)](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/ISYE_8803___Lecture_2___Slides.pdf)** - Complete slide deck from the presentation
-**[Demo Script](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/penalty_barrier_demo.py)** - Python demonstration of penalty vs. barrier methods
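
The demo script linked above is in Python; the same penalty-versus-barrier contrast can be seen on a one-dimensional toy problem. The sketch below is my own illustration (not a port of that script): it evaluates closed-form minimizers of a quadratic-penalty and a log-barrier merit function for minimizing (x - 2)^2 subject to x <= 1, showing the penalty path approaching x* = 1 from the infeasible side and the barrier path from the feasible side.

```julia
# Penalty vs. barrier on a toy problem (my own illustration, not the linked
# Python demo): minimize (x - 2)^2 subject to x <= 1, whose solution is x* = 1.

# Quadratic penalty  P(x; ρ) = (x - 2)^2 + ρ * max(0, x - 1)^2
# Setting dP/dx = 0 for x > 1 gives the unconstrained minimizer below.
penalty_min(ρ) = 1 + 1 / (1 + ρ)          # tends to 1 from the infeasible side as ρ grows

# Log barrier       B(x; μ) = (x - 2)^2 - μ * log(1 - x),  valid only for x < 1
# Setting dB/dx = 0 gives a quadratic in x; the feasible root is below.
barrier_min(μ) = (3 - sqrt(1 + 2μ)) / 2   # tends to 1 from the feasible side as μ shrinks

for k in 0:4
    ρ = 10.0^k      # penalty weight grows
    μ = 10.0^(-k)   # barrier weight shrinks
    println("ρ = ", ρ, ": penalty x = ", penalty_min(ρ),
            " | μ = ", μ, ": barrier x = ", barrier_min(μ))
end
```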
-## Key Concepts Covered
-
-### Mathematical Foundations
-**Gradient and Hessian**: Understanding first and second derivatives in optimization
-**Newton's Method**: Quadratic convergence and implementation details
-**KKT Conditions**: Necessary and sufficient conditions for optimality
-**Duality Theory**: Lagrange multipliers and dual problems
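
The KKT and duality bullets above tie back to the objective on equality-constrained problems and Lagrange multipliers. As a small self-contained illustration (the problem data are invented, not taken from the course materials), the following Julia snippet solves an equality-constrained QP by assembling and solving its KKT system:

```julia
# Equality-constrained QP solved through its KKT system (illustrative sketch;
# the problem data are invented, not taken from the course materials).
#   minimize 1/2 x'Qx + c'x   subject to  Ax = b
# Stationarity: Qx + c + A'λ = 0,  primal feasibility: Ax = b.
using LinearAlgebra

Q = [4.0 1.0; 1.0 3.0]        # positive-definite objective Hessian
c = [-1.0, -2.0]
A = [1.0 1.0]                 # one equality constraint: x1 + x2 = 1
b = [1.0]

K   = [Q A'; A zeros(1, 1)]   # KKT matrix
rhs = [-c; b]
sol = K \ rhs

x, λ = sol[1:2], sol[3:3]     # primal variables and Lagrange multiplier
println("x = ", x, ", λ = ", λ[1])
println("stationarity residual = ", norm(Q * x + c + A' * λ),
        ", feasibility residual = ", norm(A * x - b))
```

With inequality constraints this single linear solve no longer suffices, which is where the ALM, interior-point, and penalty approaches compared in the lecture come in.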