
Convex Optimization – Minima and Maxima

Local minimum or minimizer

$\bar{x} \in S$ is said to be a local minimum of a function $f$ if $f\left(\bar{x}\right) \leq f\left(x\right), \forall x \in N_\varepsilon\left(\bar{x}\right)$, where $N_\varepsilon\left(\bar{x}\right)$ denotes a neighbourhood of $\bar{x}$, i.e., $N_\varepsilon\left(\bar{x}\right) = \left\{x : \left\|x - \bar{x}\right\| < \varepsilon\right\}$.

Local maximum or maximizer

$\bar{x} \in S$ is said to be a local maximum of a function $f$ if $f\left(\bar{x}\right) \geq f\left(x\right), \forall x \in N_\varepsilon\left(\bar{x}\right)$, where $N_\varepsilon\left(\bar{x}\right)$ denotes a neighbourhood of $\bar{x}$, i.e., $N_\varepsilon\left(\bar{x}\right) = \left\{x : \left\|x - \bar{x}\right\| < \varepsilon\right\}$.

Global minimum

$\bar{x} \in S$ is said to be a global minimum of a function $f$ if $f\left(\bar{x}\right) \leq f\left(x\right), \forall x \in S$.

Global maximum

$\bar{x} \in S$ is said to be a global maximum of a function $f$ if $f\left(\bar{x}\right) \geq f\left(x\right), \forall x \in S$.

Examples

Example 1 − Find the local minima and maxima of $f\left(x\right) = \left|x^2 - 4\right|$.

Solution − From the graph of the above function, it is clear that the local minima occur at $x = \pm 2$ and a local maximum at $x = 0$.

Example 2 − Find the global minimum of the function $f\left(x\right) = \left|4x^3 - 3x^2 + 7\right|$.

Solution − From the graph of the above function, it is clear that the global minimum occurs at $x = -1$.
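As a quick numerical check of Example 1 (a sketch using only NumPy, not part of the original solution), a dense grid search locates the extrema of $f\left(x\right) = \left|x^2 - 4\right|$ by comparing each grid point with its neighbours:

```python
import numpy as np

# f from Example 1; local minima expected at x = ±2, a local maximum at x = 0
f = lambda x: np.abs(x**2 - 4)

xs = np.linspace(-3, 3, 6001)   # dense grid over [-3, 3]
ys = f(xs)

# a grid point is a local minimum/maximum if it beats both neighbours
local_min = xs[1:-1][(ys[1:-1] <= ys[:-2]) & (ys[1:-1] <= ys[2:])]
local_max = xs[1:-1][(ys[1:-1] >= ys[:-2]) & (ys[1:-1] >= ys[2:])]

print(local_min)   # values near -2 and 2
print(local_max)   # value near 0
```

This only approximates extrema to grid resolution, but it confirms the locations read off the graph.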


Convex Optimization – Closest Point Theorem

Let S be a non-empty closed convex set in $\mathbb{R}^n$ and let $y \notin S$. Then there exists a point $\hat{x} \in S$ with minimum distance from $y$, i.e., $\left\|y - \hat{x}\right\| \leq \left\|y - x\right\|, \forall x \in S$.

Furthermore, $\hat{x}$ is the minimizing point if and only if $\left(y - \hat{x}\right)^T\left(x - \hat{x}\right) \leq 0, \forall x \in S$, i.e., $\left\langle y - \hat{x}, x - \hat{x} \right\rangle \leq 0$.

Proof

Existence of a closest point

Since $S \neq \phi$, there exists a point $x_0 \in S$, so the minimum distance of S from $y$ is less than or equal to $\left\|y - x_0\right\|$.

Define $\hat{S} = S \cap \left\{x : \left\|y - x\right\| \leq \left\|y - x_0\right\|\right\}$

Since $\hat{S}$ is closed and bounded, and since the norm is a continuous function, by the Weierstrass theorem there exists a minimum point $\hat{x} \in \hat{S} \subseteq S$ such that $\left\|y - \hat{x}\right\| = \inf\left\{\left\|y - x\right\| : x \in S\right\}$

Uniqueness

Suppose $\bar{x} \in S$ is another closest point, i.e., $\left\|y - \hat{x}\right\| = \left\|y - \bar{x}\right\| = \alpha$

Since S is convex, $\frac{\hat{x} + \bar{x}}{2} \in S$

But $\left\|y - \frac{\hat{x} + \bar{x}}{2}\right\| \leq \frac{1}{2}\left\|y - \hat{x}\right\| + \frac{1}{2}\left\|y - \bar{x}\right\| = \alpha$

The inequality cannot be strict because $\hat{x}$ is closest to $y$, so equality holds in the triangle inequality. Therefore $\left(y - \hat{x}\right) = \mu\left(y - \bar{x}\right)$ for some $\mu$, and taking norms gives $\left|\mu\right| = 1$.

If $\mu = -1$, then $\left(y - \hat{x}\right) = -\left(y - \bar{x}\right) \Rightarrow y = \frac{\hat{x} + \bar{x}}{2} \in S$. But $y \notin S$, a contradiction. Thus $\mu = 1 \Rightarrow \hat{x} = \bar{x}$.

Thus, the minimizing point is unique.
For the second part of the proof, assume $\left(y - \hat{x}\right)^T\left(x - \hat{x}\right) \leq 0$ for all $x \in S$.

Now,

$\left\|y - x\right\|^2 = \left\|y - \hat{x} + \hat{x} - x\right\|^2 = \left\|y - \hat{x}\right\|^2 + \left\|\hat{x} - x\right\|^2 + 2\left(\hat{x} - x\right)^T\left(y - \hat{x}\right)$

$\Rightarrow \left\|y - x\right\|^2 \geq \left\|y - \hat{x}\right\|^2$ because $\left\|\hat{x} - x\right\|^2 \geq 0$ and $\left(\hat{x} - x\right)^T\left(y - \hat{x}\right) \geq 0$ by the assumption.

Thus, $\hat{x}$ is the minimizing point.

Conversely, assume $\hat{x}$ is the minimizing point, i.e., $\left\|y - x\right\|^2 \geq \left\|y - \hat{x}\right\|^2, \forall x \in S$

Since S is a convex set, $\lambda x + \left(1 - \lambda\right)\hat{x} = \hat{x} + \lambda\left(x - \hat{x}\right) \in S$ for $x \in S$ and $\lambda \in \left(0, 1\right)$

Now, $\left\|y - \hat{x} - \lambda\left(x - \hat{x}\right)\right\|^2 \geq \left\|y - \hat{x}\right\|^2$

And

$\left\|y - \hat{x} - \lambda\left(x - \hat{x}\right)\right\|^2 = \left\|y - \hat{x}\right\|^2 + \lambda^2\left\|x - \hat{x}\right\|^2 - 2\lambda\left(y - \hat{x}\right)^T\left(x - \hat{x}\right)$

$\Rightarrow \left\|y - \hat{x}\right\|^2 + \lambda^2\left\|x - \hat{x}\right\|^2 - 2\lambda\left(y - \hat{x}\right)^T\left(x - \hat{x}\right) \geq \left\|y - \hat{x}\right\|^2$

$\Rightarrow 2\lambda\left(y - \hat{x}\right)^T\left(x - \hat{x}\right) \leq \lambda^2\left\|x - \hat{x}\right\|^2$

Dividing by $\lambda > 0$ and letting $\lambda \rightarrow 0^+$,

$\Rightarrow \left(y - \hat{x}\right)^T\left(x - \hat{x}\right) \leq 0$

Hence proved.
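The theorem can be illustrated numerically. The sketch below (using only NumPy; the box $S = [0,1]^2$ is my own choice of closed convex set) projects a point onto the box and checks the characterization $\left(y - \hat{x}\right)^T\left(x - \hat{x}\right) \leq 0$ on random points of $S$:

```python
import numpy as np

rng = np.random.default_rng(0)

y = np.array([2.0, -0.5])        # a point outside the box S = [0, 1]^2
x_hat = np.clip(y, 0.0, 1.0)     # projection onto a box is coordinatewise clipping

# x_hat is closest: compare against random points of S
xs = rng.random((1000, 2))       # random points in S
assert all(np.linalg.norm(y - x_hat) <= np.linalg.norm(y - x) for x in xs)

# variational characterization: (y - x_hat)^T (x - x_hat) <= 0 for all x in S
assert all((y - x_hat) @ (x - x_hat) <= 1e-12 for x in xs)
print(x_hat)   # [1. 0.]
```

Clipping is the exact projection only because a box is a product of intervals; for a general closed convex set the projection has no closed form.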


Caratheodory Theorem

Let S be an arbitrary set in $\mathbb{R}^n$. If $x \in Co\left(S\right)$, then $x$ can be written as a convex combination of at most $n + 1$ points of S, i.e., $x \in Co\left(x_1, x_2, \ldots, x_n, x_{n+1}\right)$ for some $x_1, \ldots, x_{n+1} \in S$.

Proof

Since $x \in Co\left(S\right)$, $x$ is represented by a convex combination of a finite number of points in S, i.e.,

$x = \displaystyle\sum\limits_{j=1}^k \lambda_j x_j, \displaystyle\sum\limits_{j=1}^k \lambda_j = 1, \lambda_j \geq 0$ and $x_j \in S, \forall j = 1, \ldots, k$

If $k \leq n + 1$, the result is obviously true.

If $k > n + 1$, then $\left(x_2 - x_1\right), \left(x_3 - x_1\right), \ldots, \left(x_k - x_1\right)$ are more than $n$ vectors in $\mathbb{R}^n$, hence linearly dependent.

$\Rightarrow \exists \mu_j \in \mathbb{R}, 2 \leq j \leq k$ (not all zero) such that $\displaystyle\sum\limits_{j=2}^k \mu_j\left(x_j - x_1\right) = 0$

Define $\mu_1 = -\displaystyle\sum\limits_{j=2}^k \mu_j$. Then $\displaystyle\sum\limits_{j=1}^k \mu_j x_j = 0, \displaystyle\sum\limits_{j=1}^k \mu_j = 0$, where not all the $\mu_j$ are zero.

Since $\displaystyle\sum\limits_{j=1}^k \mu_j = 0$ and not all $\mu_j$ vanish, at least one $\mu_j > 0, 1 \leq j \leq k$.

Then,

$x = \displaystyle\sum\limits_{j=1}^k \lambda_j x_j + 0 = \displaystyle\sum\limits_{j=1}^k \lambda_j x_j - \alpha \displaystyle\sum\limits_{j=1}^k \mu_j x_j = \displaystyle\sum\limits_{j=1}^k \left(\lambda_j - \alpha\mu_j\right) x_j$

Choose $\alpha$ such that $\alpha = \min\left\{\frac{\lambda_j}{\mu_j} : \mu_j > 0\right\} = \frac{\lambda_i}{\mu_i}$ for some $i \in \left\{1, 2, \ldots, k\right\}$.

If $\mu_j \leq 0$, then $\lambda_j - \alpha\mu_j \geq 0$.

If $\mu_j > 0$, then $\frac{\lambda_j}{\mu_j} \geq \frac{\lambda_i}{\mu_i} = \alpha \Rightarrow \lambda_j - \alpha\mu_j \geq 0, j = 1, 2, \ldots, k$.

In particular, $\lambda_i - \alpha\mu_i = 0$, by the definition of $\alpha$.

Thus $x = \displaystyle\sum\limits_{j=1}^k \left(\lambda_j - \alpha\mu_j\right) x_j$, where $\lambda_j - \alpha\mu_j \geq 0$, $\displaystyle\sum\limits_{j=1}^k \left(\lambda_j - \alpha\mu_j\right) = 1$ and $\lambda_i - \alpha\mu_i = 0$.

Hence, $x$ can be represented as a convex combination of at most $k - 1$ points. This reduction process can be repeated until $x$ is represented as a convex combination of at most $n + 1$ elements.
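The reduction step of the proof can be carried out numerically. The sketch below (a hypothetical helper built only on NumPy; the function name and the sample points are my own) finds a vector $\mu$ with $\sum_j \mu_j x_j = 0$ and $\sum_j \mu_j = 0$ from a null space, then removes one point exactly as the proof does:

```python
import numpy as np

def caratheodory_step(points, weights):
    """One reduction step from the proof: drop a point while keeping x fixed."""
    k, n = points.shape
    if k <= n + 1:
        return points, weights
    # null-space vector mu of A = [points^T; 1^T]: sum_j mu_j x_j = 0 and sum_j mu_j = 0
    A = np.vstack([points.T, np.ones(k)])    # (n+1) x k with k > n+1
    mu = np.linalg.svd(A)[2][-1]             # right singular vector of the smallest singular value
    if not np.any(mu > 1e-12):
        mu = -mu                             # ensure some mu_j > 0
    pos = mu > 1e-12
    alpha = np.min(weights[pos] / mu[pos])   # alpha = min{lambda_j / mu_j : mu_j > 0}
    new_w = weights - alpha * mu             # stays >= 0, and one entry hits 0
    keep = new_w > 1e-10
    return points[keep], new_w[keep]

# x as a convex combination of 5 points in R^2; Caratheodory says 3 points suffice
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
w = np.full(5, 0.2)
x = w @ pts
while len(w) > 3:
    pts, w = caratheodory_step(pts, w)
print(len(w), w @ pts)   # at most 3 points, same x
```

The tolerances (`1e-12`, `1e-10`) are arbitrary choices for this small example; a production implementation would treat near-zero weights more carefully.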


Convex Optimization – Convex Hull

The convex hull of a set of points S is the boundary of the smallest convex region that contains all the points of S inside it or on its boundary.

OR

Let $S \subseteq \mathbb{R}^n$. The convex hull of S, denoted by $Co\left(S\right)$, is the collection of all convex combinations of points of S, i.e., $x \in Co\left(S\right)$ if and only if $x = \displaystyle\sum\limits_{i=1}^n \lambda_i x_i$, where $\displaystyle\sum\limits_{i=1}^n \lambda_i = 1$, $\lambda_i \geq 0$ and $x_i \in S$.

Remark − The convex hull of a set of points S in the plane defines a convex polygon, and the points of S on the boundary of the polygon define the vertices of the polygon.

Theorem $Co\left(S\right) = \left\{x : x = \displaystyle\sum\limits_{i=1}^n \lambda_i x_i, x_i \in S, \displaystyle\sum\limits_{i=1}^n \lambda_i = 1, \lambda_i \geq 0\right\}$. Show that a convex hull is a convex set.

Proof

Let $u, v \in Co\left(S\right)$. Then $u = \displaystyle\sum\limits_{i=1}^n \lambda_i x_i$ and $v = \displaystyle\sum\limits_{i=1}^n \gamma_i x_i$, where $\displaystyle\sum\limits_{i=1}^n \lambda_i = 1, \lambda_i \geq 0$ and $\displaystyle\sum\limits_{i=1}^n \gamma_i = 1, \gamma_i \geq 0$.

For $\theta \in \left(0, 1\right)$:

$\theta u + \left(1 - \theta\right) v = \theta \displaystyle\sum\limits_{i=1}^n \lambda_i x_i + \left(1 - \theta\right) \displaystyle\sum\limits_{i=1}^n \gamma_i x_i$

$= \displaystyle\sum\limits_{i=1}^n \lambda_i \theta x_i + \displaystyle\sum\limits_{i=1}^n \gamma_i \left(1 - \theta\right) x_i$

$= \displaystyle\sum\limits_{i=1}^n \left[\lambda_i \theta + \gamma_i \left(1 - \theta\right)\right] x_i$

Considering the coefficients,

$\displaystyle\sum\limits_{i=1}^n \left[\lambda_i \theta + \gamma_i \left(1 - \theta\right)\right] = \theta \displaystyle\sum\limits_{i=1}^n \lambda_i + \left(1 - \theta\right) \displaystyle\sum\limits_{i=1}^n \gamma_i = \theta + \left(1 - \theta\right) = 1$

Hence, $\theta u + \left(1 - \theta\right) v \in Co\left(S\right)$.

Thus, a convex hull is a convex set.
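The weight computation in the proof can be mirrored directly in code. This NumPy sketch (the points and weights are my own example) forms $\theta u + \left(1 - \theta\right) v$ and checks that the combined coefficients $\theta\lambda_i + \left(1 - \theta\right)\gamma_i$ are again convex weights:

```python
import numpy as np

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # generating points x_i in S

lam = np.array([0.5, 0.3, 0.2])   # convex weights for u
gam = np.array([0.1, 0.1, 0.8])   # convex weights for v
u, v = lam @ pts, gam @ pts

theta = 0.4
combined = theta * lam + (1 - theta) * gam   # the coefficients from the proof

# the combined weights are nonnegative and sum to 1 ...
assert (combined >= 0).all() and np.isclose(combined.sum(), 1.0)
# ... and reproduce the convex combination theta*u + (1-theta)*v
assert np.allclose(combined @ pts, theta * u + (1 - theta) * v)
print(combined)   # [0.26 0.18 0.56]
```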


Convex Optimization – Weierstrass Theorem

Let S be a non-empty, closed and bounded set (also called a compact set) in $\mathbb{R}^n$ and let $f : S \rightarrow \mathbb{R}$ be a continuous function on S. Then the problem $\min\left\{f\left(x\right) : x \in S\right\}$ attains its minimum.

Proof

Let $\alpha = \inf\left\{f\left(x\right) : x \in S\right\}$

Now let $S_j = \left\{x \in S : \alpha \leq f\left(x\right) \leq \alpha + \delta^j\right\}, \forall j = 1, 2, \ldots$ and $\delta \in \left(0, 1\right)$

By the definition of infimum, $S_j$ is non-empty for each $j$. Choose some $x_j \in S_j$ to get a sequence $\left\{x_j\right\}, j = 1, 2, \ldots$

Since S is bounded, the sequence is also bounded and there is a convergent subsequence $\left\{y_i\right\}$ which converges to some $\hat{x}$. Hence $\hat{x}$ is a limit point, and since S is closed, $\hat{x} \in S$.

Since f is continuous, $f\left(y_i\right) \rightarrow f\left(\hat{x}\right)$. Since $\alpha \leq f\left(y_i\right) \leq \alpha + \delta^{j_i}$, where $j_i$ is the index of $y_i$ in the original sequence and $\delta^{j_i} \rightarrow 0$, we get $\alpha = \displaystyle\lim_{i \rightarrow \infty} f\left(y_i\right) = f\left(\hat{x}\right)$

Thus, $\hat{x}$ is the minimizing solution.

Remarks

There are two important necessary conditions for the Weierstrass theorem to hold. These are as follows −

Condition 1 − The set S should be bounded. Consider the function $f\left(x\right) = x$ on $S = \mathbb{R}$. The set is unbounded and $f$ does not attain a minimum at any point of its domain. Thus, for the minimum to be attained, S should be bounded.

Condition 2 − The set S should be closed. Consider the function $f\left(x\right) = \frac{1}{x}$ on the domain $\left(0, 1\right)$. This domain is not closed, and the infimum of $f$ on it is not attained. Hence, for the minimum to be attained, S should be closed.
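A small numerical illustration of the two remarks (a sketch of my own, not from the original text): on the compact set $[1, 2]$ the minimum of $f\left(x\right) = 1/x$ is attained at $x = 2$, while on the open set $\left(0, 1\right)$ the infimum $1$ is only approached, never reached:

```python
import numpy as np

f = lambda x: 1.0 / x

# compact domain [1, 2]: the minimum is attained (at x = 2, value 0.5)
xs = np.linspace(1.0, 2.0, 10001)
i = np.argmin(f(xs))
print(xs[i], f(xs[i]))        # 2.0 0.5

# open domain (0, 1): values approach the infimum 1 but never equal it
for eps in [1e-2, 1e-4, 1e-6]:
    print(f(1 - eps))         # slightly above 1, shrinking toward 1
```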


Convex Optimization – Affine Set

A set $A$ is said to be an affine set if, for any two distinct points in $A$, the line passing through these points lies in $A$.

Note −

$S$ is an affine set if and only if it contains every affine combination of its points.

Empty and singleton sets are both affine and convex.

For example, the solution set of a linear equation is an affine set.

Proof

Let S be the solution set of a linear equation. By definition, $S = \left\{x \in \mathbb{R}^n : Ax = b\right\}$

Let $x_1, x_2 \in S \Rightarrow Ax_1 = b$ and $Ax_2 = b$

To prove: $A\left[\theta x_1 + \left(1 - \theta\right) x_2\right] = b, \forall \theta \in \mathbb{R}$ (the whole line, not just the segment)

$A\left[\theta x_1 + \left(1 - \theta\right) x_2\right] = \theta Ax_1 + \left(1 - \theta\right) Ax_2 = \theta b + \left(1 - \theta\right) b = b$

Thus S is an affine set.

Theorem

If $C$ is an affine set and $x_0 \in C$, then the set $V = C - x_0 = \left\{x - x_0 : x \in C\right\}$ is a subspace of $\mathbb{R}^n$.

Proof

Let $x_1, x_2 \in V$

To show: $\alpha x_1 + \beta x_2 \in V$ for all $\alpha, \beta \in \mathbb{R}$

Now, $x_1 + x_0 \in C$ and $x_2 + x_0 \in C$, by the definition of V

Now, $\alpha x_1 + \beta x_2 + x_0 = \alpha\left(x_1 + x_0\right) + \beta\left(x_2 + x_0\right) + \left(1 - \alpha - \beta\right) x_0$

But $\alpha\left(x_1 + x_0\right) + \beta\left(x_2 + x_0\right) + \left(1 - \alpha - \beta\right) x_0 \in C$, because it is an affine combination of points of C and C is an affine set.

Therefore, $\alpha x_1 + \beta x_2 \in V$

Hence proved.
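A quick numeric check of the example (assuming only NumPy; the equation and the two solutions are my own): take two solutions of $Ax = b$ and verify that every point on the line through them, for $\theta$ ranging over all of $\mathbb{R}$, is again a solution:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0]])   # one linear equation in R^3
b = np.array([4.0])

x1 = np.array([4.0, 0.0, 0.0])    # A @ x1 = b
x2 = np.array([0.0, 2.0, 0.0])    # A @ x2 = b
assert np.allclose(A @ x1, b) and np.allclose(A @ x2, b)

# affine: theta ranges over all of R, not just [0, 1]
for theta in [-3.0, -0.5, 0.0, 0.4, 1.0, 7.0]:
    x = theta * x1 + (1 - theta) * x2
    assert np.allclose(A @ x, b)   # every point of the line solves Ax = b
print("line through x1, x2 stays inside the solution set")
```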


Convex Optimization – Linear Programming

Methodology

Linear programming, also called linear optimization, is a technique used to solve mathematical problems in which the relationships are linear in nature. The basic aim of linear programming is to maximize or minimize an objective function subject to some constraints. The objective function is a linear function obtained from the mathematical model of the problem. The constraints are conditions imposed on the model and are also linear.

1. From the given question, find the objective function.
2. Find the constraints.
3. Draw the constraints on a graph.
4. Find the feasible region, which is formed by the intersection of all the constraints.
5. Find the vertices of the feasible region.
6. Find the value of the objective function at these vertices.
7. The vertex which maximizes or minimizes the objective function (according to the question) is the answer.

Examples

Example 1 − Maximize $5x + 3y$ subject to

$x + y \leq 2$,

$3x + y \leq 3$,

$x \geq 0 \: and \: y \geq 0$

Solution − The first step is to find the feasible region on a graph.

Clearly from the graph, the vertices of the feasible region are

$\left(0, 0\right), \left(0, 2\right), \left(1, 0\right), \left(\frac{1}{2}, \frac{3}{2}\right)$

Let $f\left(x, y\right) = 5x + 3y$

Putting these values in the objective function, we get −

$f\left(0, 0\right) = 0$

$f\left(0, 2\right) = 6$

$f\left(1, 0\right) = 5$

$f\left(\frac{1}{2}, \frac{3}{2}\right) = 7$

Therefore, the function is maximized at $\left(\frac{1}{2}, \frac{3}{2}\right)$.

Example 2 − A watch company produces a digital and a mechanical watch. Long-term projections indicate an expected demand of at least 100 digital and 80 mechanical watches each day. Because of limitations on production capacity, no more than 200 digital and 170 mechanical watches can be made daily. To satisfy a shipping contract, a total of at least 200 watches must be shipped each day.

If each digital watch sold results in a \$2 loss, but each mechanical watch produces a \$5 profit, how many of each type should be made daily to maximize net profits?

Solution − Let $x$ be the number of digital watches produced and $y$ the number of mechanical watches produced.

According to the question, at least 100 digital watches are to be made daily and at most 200 digital watches can be made.

$\Rightarrow 100 \leq x \leq 200$

Similarly, at least 80 mechanical watches are to be made daily and at most 170 mechanical watches can be made.

$\Rightarrow 80 \leq y \leq 170$

Since at least 200 watches are to be shipped each day,

$\Rightarrow x + y \geq 200$

Since each digital watch sold results in a \$2 loss, but each mechanical watch produces a \$5 profit, the total profit can be calculated as

$Profit = -2x + 5y$

And we have to maximize the profit. Therefore, the question can be formulated as −

Maximize $-2x + 5y$ subject to

$100 \leq x \leq 200$

$80 \leq y \leq 170$

$x + y \geq 200$

Plotting the above constraints on a graph, the vertices of the feasible region are

$\left(100, 170\right), \left(200, 170\right), \left(200, 80\right), \left(120, 80\right)$ and $\left(100, 100\right)$

The maximum value of the objective function is obtained at $\left(100, 170\right)$, where the profit is $-2\left(100\right) + 5\left(170\right) = 650$.

Thus, to maximize net profits, 100 units of digital watches and 170 units of mechanical watches should be produced.
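Example 2 can be cross-checked with an off-the-shelf LP solver. This sketch uses `scipy.optimize.linprog` (SciPy is not mentioned in the original text, so this is an assumption); since `linprog` minimizes, the objective is negated, and $x + y \geq 200$ is rewritten as $-x - y \leq -200$:

```python
from scipy.optimize import linprog

# Example 2: maximize -2x + 5y  <=>  minimize 2x - 5y
res = linprog(
    c=[2, -5],                     # negated objective coefficients
    A_ub=[[-1, -1]],               # -x - y <= -200, i.e. x + y >= 200
    b_ub=[-200],
    bounds=[(100, 200), (80, 170)],
)
x, y = res.x
print(round(x), round(y), round(-res.fun))   # 100 170 650
```

The solver agrees with the graphical solution: the optimum sits at the vertex $\left(100, 170\right)$ with profit 650.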


Convex Optimization – Inner Product

An inner product is a function which assigns a scalar to a pair of vectors.

Inner product − $f : \mathbb{R}^n \times \mathbb{R}^n \rightarrow \kappa$, where $\kappa$ is the scalar field.

The basic characteristics of an inner product are as follows. Let $X = \mathbb{R}^n$:

$\left\langle x, x \right\rangle \geq 0, \forall x \in X$

$\left\langle x, x \right\rangle = 0 \Leftrightarrow x = 0, \forall x \in X$

$\left\langle \alpha x, y \right\rangle = \alpha \left\langle x, y \right\rangle, \forall \alpha \in \kappa$ and $\forall x, y \in X$

$\left\langle x + y, z \right\rangle = \left\langle x, z \right\rangle + \left\langle y, z \right\rangle, \forall x, y, z \in X$

$\left\langle y, x \right\rangle = \overline{\left\langle x, y \right\rangle}, \forall x, y \in X$

Note −

Relationship between norm and inner product: $\left\|x\right\| = \sqrt{\left\langle x, x \right\rangle}$

$\forall x, y \in \mathbb{R}^n, \left\langle x, y \right\rangle = x_1y_1 + x_2y_2 + \ldots + x_ny_n$

Examples

1. Find the inner product of $x = \left(1, 2, 1\right)$ and $y = \left(3, -1, 3\right)$.

Solution

$\left\langle x, y \right\rangle = x_1y_1 + x_2y_2 + x_3y_3$

$= \left(1 \times 3\right) + \left(2 \times -1\right) + \left(1 \times 3\right)$

$= 3 + \left(-2\right) + 3 = 4$

2. If $x = \left(4, 9, 1\right), y = \left(-3, 5, 1\right)$ and $z = \left(2, 4, 1\right)$, find $\left\langle x + y, z \right\rangle$.

Solution

As we know, $\left\langle x + y, z \right\rangle = \left\langle x, z \right\rangle + \left\langle y, z \right\rangle$

$= \left(x_1z_1 + x_2z_2 + x_3z_3\right) + \left(y_1z_1 + y_2z_2 + y_3z_3\right)$

$= \left\{\left(4 \times 2\right) + \left(9 \times 4\right) + \left(1 \times 1\right)\right\} + \left\{\left(-3 \times 2\right) + \left(5 \times 4\right) + \left(1 \times 1\right)\right\}$

$= \left(8 + 36 + 1\right) + \left(-6 + 20 + 1\right) = 45 + 15 = 60$
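On $\mathbb{R}^n$ the inner product is just the dot product, so both worked examples reduce to `np.dot`. This NumPy sketch reproduces them and checks the additivity property used in Example 2:

```python
import numpy as np

# Example 2: <x + y, z> = <x, z> + <y, z>
x = np.array([4, 9, 1])
y = np.array([-3, 5, 1])
z = np.array([2, 4, 1])

lhs = np.dot(x + y, z)
rhs = np.dot(x, z) + np.dot(y, z)
print(lhs, rhs)   # 60 60

# Example 1 as a one-liner
print(np.dot(np.array([1, 2, 1]), np.array([3, -1, 3])))   # 4
```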


Convex Optimization – Introduction

This course is useful for students who want to solve the non-linear optimization problems that arise in various engineering and scientific applications. It starts with the basic theory of linear programming and introduces the concepts of convex sets and functions, along with related terminology, to explain the various theorems required to solve non-linear programming problems. The course then introduces various algorithms used to solve such problems. Problems of this type arise in many applications, including machine learning and optimization problems in electrical engineering. The course requires students to have prior knowledge of high-school mathematics and calculus.

In this course, students will learn to solve optimization problems like $\min f\left(x\right)$ subject to some constraints. These problems are easily solvable if the function $f\left(x\right)$ is linear and the constraints are linear; the problem is then called a linear programming problem (LPP). But if the constraints are non-linear, the problem becomes difficult to solve. Plotting the functions on a graph and analysing the optimum is one possible approach, but a function cannot be plotted beyond three dimensions. Hence the techniques of non-linear programming, or convex programming, are used to solve such problems. In this tutorial, we will focus on learning such techniques and, in the end, a few algorithms to solve such problems. First we introduce the notion of convex sets, which is the base of convex programming problems. Then, with the introduction of convex functions, we present some important theorems to solve these problems and some algorithms based on these theorems.
Terminologies

The space $\mathbb{R}^n$ − the set of all n-dimensional vectors with real components, defined as follows −

$\mathbb{R}^n = \left\{\left(x_1, x_2, \ldots, x_n\right)^T : x_1, x_2, \ldots, x_n \in \mathbb{R}\right\}$

The space $\mathbb{R}^{m \times n}$ − the set of all real-valued matrices of order $m \times n$.
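In code, these two spaces correspond to 1-D and 2-D arrays; a minimal NumPy sketch (the concrete values are arbitrary):

```python
import numpy as np

v = np.array([1.0, -2.5, 3.0])        # a vector in R^3
M = np.array([[1.0, 0.0, 2.0],
              [0.5, -1.0, 4.0]])      # a matrix in R^{2x3}

print(v.shape)   # (3,)
print(M.shape)   # (2, 3)
print(M @ v)     # matrix-vector product, a vector in R^2
```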


Convex Optimization – Useful Resources

The following resources contain additional information on convex optimization. Please use them to get more in-depth knowledge on the subject.

Useful links on convex optimization − Wikipedia reference for Convex Optimization.

Useful books on convex optimization.