Wikimaths

A collaborative site on maths, and more!


$$ \newcommand{\arginf}{\mathrm{arginf}} \newcommand{\argmin}{\mathrm{argmin}} \newcommand{\argmax}{\mathrm{argmax}} \newcommand{\asconv}[1]{\stackrel{#1-a.s.}{\rightarrow}} \newcommand{\Aset}{\mathsf{A}} \newcommand{\b}[1]{{\mathbf{#1}}} \newcommand{\ball}[1]{\mathsf{B}(#1)} \newcommand{\bproof}{\textbf{Proof :}\quad} \newcommand{\bmuf}[2]{b_{#1,#2}} \newcommand{\card}{\mathrm{card}} \newcommand{\chunk}[3]{{#1}_{#2:#3}} \newcommand{\convprob}[1]{\stackrel{#1-\text{prob}}{\rightarrow}} \newcommand{\Cov}{\mathbb{C}\mathrm{ov}} \newcommand{\CPE}[2]{\PE\lr{#1| #2}} \renewcommand{\det}{\mathrm{det}} \newcommand{\dimlabel}{\mathsf{m}} \newcommand{\dimU}{\mathsf{q}} \newcommand{\dimX}{\mathsf{d}} \newcommand{\dimY}{\mathsf{p}} \newcommand{\dlim}{\Rightarrow} \newcommand{\e}[1]{{\left\lfloor #1 \right\rfloor}} \newcommand{\eproof}{\quad \Box} \newcommand{\eremark}{</WRAP>} \newcommand{\eqdef}{:=} \newcommand{\eqlaw}{\stackrel{\mathcal{L}}{=}} \newcommand{\eqsp}{\;} \newcommand{\Eset}{ {\mathsf E}} \newcommand{\esssup}{\mathrm{essup}} \newcommand{\fr}[1]{{\left\langle #1 \right\rangle}} \newcommand{\falph}{f} \renewcommand{\geq}{\geqslant} \newcommand{\hchi}{\hat \chi} \newcommand{\Id}{\mathrm{Id}} \newcommand{\img}{\text{Im}} \newcommand{\indi}[1]{\mathbf{1}_{#1}} \newcommand{\indiacc}[1]{\mathbf{1}_{\{#1\}}} \newcommand{\indin}[1]{\mathbf{1}\{#1\}} \newcommand{\itemm}{\quad \quad \blacktriangleright \;} \newcommand{\ker}{\text{Ker}} \newcommand{\klbck}[2]{\mathrm{K}\lr{#1||#2}} \newcommand{\law}{\mathcal{L}} \newcommand{\labelinit}{\pi} \newcommand{\labelkernel}{Q} \renewcommand{\leq}{\leqslant} \newcommand{\lone}{\mathsf{L}_1} \newcommand{\lrav}[1]{\left|#1 \right|} \newcommand{\lr}[1]{\left(#1 \right)} \newcommand{\lrb}[1]{\left[#1 \right]} \newcommand{\lrc}[1]{\left\{#1 \right\}} \newcommand{\lrcb}[1]{\left\{#1 \right\}} \newcommand{\ltwo}[1]{\PE^{1/2}\lrb{\lrcb{#1}^2}} \newcommand{\Ltwo}{\mathrm{L}^2} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mcbb}{\mathcal B} 
\newcommand{\mcf}{\mathcal{F}} \newcommand{\meas}[1]{\mathrm{M}_{#1}} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\normmat}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \newcommand{\nset}{\mathbb N} \newcommand{\one}{\mathsf{1}} \newcommand{\PE}{\mathbb E} \newcommand{\PP}{\mathbb P} \newcommand{\Psif}{\Psi_f} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\psconv}{\stackrel{\PP-a.s.}{\rightarrow}} \newcommand{\qset}{\mathbb Q} \newcommand{\rmd}{\mathrm d} \newcommand{\rme}{\mathrm e} \newcommand{\rmi}{\mathrm i} \newcommand{\Rset}{\mathbb{R}} \newcommand{\rset}{\mathbb{R}} \newcommand{\rti}{\sigma} \newcommand{\section}[1]{==== #1 ====} \newcommand{\seq}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\set}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\sg}{\mathrm{sgn}} \newcommand{\supnorm}[1]{\left\|#1\right\|_{\infty}} \newcommand{\thv}{{\theta_\star}} \newcommand{\tmu}{ {\tilde{\mu}}} \newcommand{\Tset}{ {\mathsf{T}}} \newcommand{\Tsigma}{ {\mathcal{T}}} \newcommand{\ttheta}{{\tilde \theta}} \newcommand{\tv}[1]{\left\|#1\right\|_{\mathrm{TV}}} \newcommand{\unif}{\mathrm{Unif}} \newcommand{\Xset}{{\mathsf X}} \newcommand{\Xsigma}{\mathcal X} \newcommand{\Yset}{{\mathsf Y}} \newcommand{\Ysigma}{\mathcal Y} \newcommand{\Var}{\mathbb{V}\mathrm{ar}} \newcommand{\zset}{\mathbb{Z}} \newcommand{\Zset}{\mathsf{Z}} $$

2017/10/07 23:39 · douc

$$ \newcommand{\det}{\mathrm{det}} \newcommand{\Mplus}{M^+_{n}} $$

$$ \newcommand{\blemma}{<WRAP center round box 80%> **__Lemma__** } $$

Concavity of the $\log \det$ function

Let $\Mplus$ be the space of real-valued $n \times n$ symmetric positive definite matrices. We prove the following.

Lemma: The function $X \mapsto \log \det X$ is concave on $\Mplus$.

Proof

Let $X,Y \in \Mplus$ and $\lambda \in [0,1]$. Since $X^{-1/2}YX^{-1/2} \in \Mplus$, it is diagonalisable in some orthonormal basis; write $\mu_1,\ldots, \mu_n$ for its (possibly repeated, positive) eigenvalues. Note in particular that $\det \lr{X^{-1/2}YX^{-1/2}}=\prod_{i=1}^n \mu_i$. Then, \begin{align*} \log \det \lr{(1-\lambda)X+\lambda Y}&=\log \det \lrb{X^{1/2} \lr{(1-\lambda)I+\lambda X^{-1/2}YX^{-1/2}} X^{1/2}}\\ &=\log \det X + \log \det \lr{(1-\lambda)I+\lambda X^{-1/2}YX^{-1/2}} \\ &=\log \det X + \sum_{i=1}^n \log(1-\lambda+\lambda \mu_i) \\ & \geq \log \det X + \sum_{i=1}^n \lrb{(1-\lambda) \underbrace{\log(1)}_{=0}+\lambda \log \mu_i} =: D \eqsp, \end{align*} where the inequality follows from the concavity of $\log$. Now rewrite the right-hand side $D$ as \begin{align*} D&=(1-\lambda) \log \det X + \lambda \lr{\log \det X^{1/2}+ \log \det \lr{X^{-1/2}YX^{-1/2}} + \log \det X^{1/2}} \\ &=(1-\lambda) \log \det X + \lambda \log \det Y \eqsp. \end{align*} $\eproof$
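The concavity inequality can be sanity-checked numerically. The sketch below (assuming NumPy is available; `random_spd` is a hypothetical helper, not part of any library) draws two random symmetric positive definite matrices and compares both sides of the inequality:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Hypothetical helper: a random symmetric positive definite matrix A A^T + I."""
    a = rng.standard_normal((n, n))
    return a @ a.T + np.eye(n)

def logdet(m):
    """Numerically stable log-determinant via slogdet."""
    sign, ld = np.linalg.slogdet(m)
    assert sign > 0  # must be positive definite here
    return ld

n, lam = 5, 0.3
X, Y = random_spd(n), random_spd(n)

# Concavity: log det((1-λ)X + λY) >= (1-λ) log det X + λ log det Y
lhs = logdet((1 - lam) * X + lam * Y)
rhs = (1 - lam) * logdet(X) + lam * logdet(Y)
assert lhs >= rhs - 1e-12
```

Using `np.linalg.slogdet` rather than `np.log(np.linalg.det(...))` avoids overflow of the determinant itself for larger matrices.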

Derivatives

Lemma: The derivative of the real-valued function $\Sigma \mapsto \log\mathrm{det}(\Sigma)$ defined on $\rset^{d\times d}$ is given, at any symmetric positive definite $\Sigma$, by: \[ \partial_{\Sigma}\{\log\mathrm{det}(\Sigma)\}= \Sigma^{-1}\eqsp, \] where, for any real-valued function $f$ defined on $\rset^{d\times d}$, $\partial_{\Sigma}f(\Sigma)$ denotes the $\rset^{d\times d}$ matrix such that, for all $1\leqslant i,j\leqslant d$, $\{\partial_{\Sigma}f(\Sigma)\}_{i,j}$ is the partial derivative of $f$ with respect to $\Sigma_{i,j}$.

Proof

Recall that for all $i \in \{1,\ldots,d\}$ we have $\det(\Sigma)=\sum_{k=1}^d \Sigma_{i,k} \Delta_{i,k}$, where $\Delta_{i,j}$ is the $(i,j)$-cofactor of $\Sigma$. For any fixed $i,j$, the entry $\Sigma_{i,j}$ does not appear anywhere in the expansion $\sum_{k=1}^d \Sigma_{i,k} \Delta_{i,k}$ except in the term $k=j$ (each cofactor $\Delta_{i,k}$ is computed from the matrix with row $i$ removed, so it contains no entry of row $i$). This implies $$ \frac{\partial \log \det(\Sigma)}{\partial \Sigma_{i,j}}=\frac{1}{\det \Sigma}\frac{\partial \det(\Sigma)}{\partial \Sigma_{i,j}}=\frac{\Delta_{i,j}}{\det \Sigma}\eqsp. $$ Recalling the identity $\Sigma\; [\Delta_{j,i}]_{1\leq i,j \leq d}=(\det \Sigma)\; I_d$, so that $\Sigma^{-1}=\frac{[\Delta_{i,j}]_{1\leq i,j \leq d}^T}{\det \Sigma}$, we finally get $$ \lrb{\frac{\partial \log \det(\Sigma)}{\partial \Sigma_{i,j}}}_{1\leq i,j \leq d}=(\Sigma^{-1})^T=\Sigma^{-1}\eqsp, $$ where the last equality follows from the symmetry of $\Sigma$. $\eproof$
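The formula $\partial_{\Sigma}\{\log\det(\Sigma)\}=\Sigma^{-1}$ can be checked against finite differences. In the sketch below (assuming NumPy), each entry $(i,j)$ is perturbed on its own, without symmetrising, which matches the lemma's convention of treating all $d^2$ entries of $\Sigma$ as free coordinates of $\rset^{d\times d}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
a = rng.standard_normal((d, d))
sigma = a @ a.T + d * np.eye(d)  # symmetric positive definite

def logdet(m):
    return np.linalg.slogdet(m)[1]

# Central finite-difference approximation of the gradient, entry by entry.
eps = 1e-6
grad = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        e = np.zeros((d, d))
        e[i, j] = eps  # perturb one coordinate of R^{d x d}
        grad[i, j] = (logdet(sigma + e) - logdet(sigma - e)) / (2 * eps)

# The lemma predicts the gradient equals the inverse of sigma.
assert np.allclose(grad, np.linalg.inv(sigma), atol=1e-5)
```

Note that perturbing a single off-diagonal entry makes the matrix slightly non-symmetric; the determinant (and hence `slogdet`) is still well defined there, which is exactly why the unconstrained derivative $(\Sigma^{-1})^T$ makes sense before symmetry is invoked.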

world/logdet.txt · Last modified: 2020/03/19 17:50 (external edit)