SuperNotes by yuri.rodrix

YuriRod's Notes


A blog-style page where I will publish my learning notes, especially on topics such as mathematics, physics, and perhaps some programming.

Neural networks

Laplace's Equation with Dirichlet Conditions: Finite-Difference Discretization and Solution via Jacobi & Gauss-Seidel

Laplace's Equation

...

\nabla^{2}\phi = 0 \ \text{ in the domain } \Omega

Boundary Condition

\phi = g(x,y) \ \text{ on the boundary } \partial \Omega

Discretization

\nabla^{2}\phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 0

\frac{\partial^2 \phi}{\partial x^2}\bigg|_{i,j} \approx \frac{\phi_{i+1,j} - 2\,\phi_{i,j} + \phi_{i-1,j}}{h^2}, \qquad \frac{\partial^2 \phi}{\partial y^2}\bigg|_{i,j} \approx \frac{\phi_{i,j+1} - 2\,\phi_{i,j} + \phi_{i,j-1}}{h^2}.

\frac{\phi_{i+1,j} - 2\,\phi_{i,j} + \phi_{i-1,j}}{h^2} + \frac{\phi_{i,j+1} - 2\,\phi_{i,j} + \phi_{i,j-1}}{h^2} = 0

\phi_{i+1,j} - 2\,\phi_{i,j} + \phi_{i-1,j} + \phi_{i,j+1} - 2\,\phi_{i,j} + \phi_{i,j-1} = 0

-4\,\phi_{i,j} + \phi_{i+1,j} + \phi_{i-1,j} + \phi_{i,j+1} + \phi_{i,j-1} = 0

\phi_{i+1,j} + \phi_{i-1,j} + \phi_{i,j+1} + \phi_{i,j-1} = 4\,\phi_{i,j}

\frac{1}{4}\bigl( \phi_{i+1,j} + \phi_{i-1,j} + \phi_{i,j+1} + \phi_{i,j-1} \bigr) = \phi_{i,j}
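The last equation above is one linear equation per interior node; collecting all of them gives a linear system A·φ = b in which the known Dirichlet boundary values move to the right-hand side. A minimal dense sketch of that assembly (the function name `assemble_laplace_system`, the row-major ordering, and the argument layout are my own assumptions, not from the post):

```python
import numpy as np

def assemble_laplace_system(n, g_top, g_bottom, g_left, g_right):
    """Assemble A x = b for the 5-point Laplacian on an n x n interior grid.

    Interior unknowns phi_{i,j} are flattened row-major.  Each row encodes
    4*phi_{i,j} - (sum of neighbours) = (boundary contributions); the
    Dirichlet values g_* (arrays of length n) land on the right-hand side.
    """
    N = n * n
    A = np.zeros((N, N))
    b = np.zeros(N)
    for i in range(n):          # row index (y direction)
        for j in range(n):      # column index (x direction)
            k = i * n + j
            A[k, k] = 4.0
            if j > 0:                    # west neighbour is an unknown
                A[k, k - 1] = -1.0
            else:                        # west neighbour is on the boundary
                b[k] += g_left[i]
            if j < n - 1:                # east
                A[k, k + 1] = -1.0
            else:
                b[k] += g_right[i]
            if i > 0:                    # south
                A[k, k - n] = -1.0
            else:
                b[k] += g_bottom[j]
            if i < n - 1:                # north
                A[k, k + n] = -1.0
            else:
                b[k] += g_top[j]
    return A, b
```

A quick sanity check: if g = 1 on the whole boundary, the exact solution of Laplace's equation is φ ≡ 1, and solving the assembled system reproduces it.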

Jacobi Method

(D+L+U)x = b

Dx + (L+U)x = b

Dx = -(L+U)x + b

Dx^{(k+1)} = -(L+U)x^{(k)} + b

x = \underset{T_j}{\underbrace{-D^{-1}(L+U)}}\,x + \underset{C_j}{\underbrace{D^{-1}}}\,b

x_i^{(k+1)} \;=\; \frac{1}{a_{ii}} \Bigl( b_i \;-\; \sum_{\substack{j=1 \\ j\neq i}}^{n} a_{ij}\,x_j^{(k)} \Bigr).

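For the discretized Laplace equation, a Jacobi sweep is just the neighbour average from the update rule above, applied to every interior point using only values from the previous iterate. A minimal NumPy sketch (the function name, tolerance, and stopping rule are my own choices):

```python
import numpy as np

def jacobi_laplace(phi, tol=1e-8, max_iter=10_000):
    """Jacobi iteration for Laplace's equation on a 2-D grid.

    `phi` holds the Dirichlet boundary values on its edges; interior
    entries are the initial guess.  Each sweep replaces every interior
    point by the average of its four neighbours, reading exclusively
    from the previous iterate (that is what makes it Jacobi).
    """
    phi = phi.astype(float).copy()
    for _ in range(max_iter):
        new = phi.copy()
        new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2])
        if np.max(np.abs(new - phi)) < tol:  # largest pointwise update
            return new
        phi = new
    return phi
```

With the boundary held at 1 and the interior started at 0, the iterates converge to the constant solution φ ≡ 1.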
Convergence

The iteration converges for any initial guess if and only if the spectral radius of the iteration matrix satisfies

\Large \rho(T_j)<1

Gauss-Seidel Method

(D+L+U)x = b

(D+L)x + Ux = b

(D+L)x = -Ux + b

(D+L)x^{(k+1)} = -Ux^{(k)} + b

x = \underset{T_g}{\underbrace{-(D+L)^{-1}U}}\,x + \underset{C_g}{\underbrace{(D+L)^{-1}}}\,b

x_i^{(k+1)} \;=\; \frac{1}{a_{ii}} \Bigl( b_i \;-\; \sum_{j=1}^{i-1} a_{ij}\,x_j^{(k+1)} \;-\; \sum_{j=i+1}^{n} a_{ij}\,x_j^{(k)} \Bigr).

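On the grid, Gauss-Seidel is the same neighbour average as Jacobi, except that each freshly computed value is used immediately within the sweep, matching the x^{(k+1)} terms in the first sum of the update formula. A minimal sketch (function name, tolerance, and sweep order are my own choices):

```python
import numpy as np

def gauss_seidel_laplace(phi, tol=1e-8, max_iter=10_000):
    """Gauss-Seidel iteration for Laplace's equation on a 2-D grid.

    `phi` holds the Dirichlet boundary values on its edges.  Unlike
    Jacobi, the sweep overwrites phi[i, j] in place, so points already
    visited contribute their updated (k+1) values to later points.
    """
    phi = phi.astype(float).copy()
    ny, nx = phi.shape
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                new = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                              + phi[i, j + 1] + phi[i, j - 1])
                diff = max(diff, abs(new - phi[i, j]))
                phi[i, j] = new          # updated value used immediately
        if diff < tol:
            return phi
    return phi
```

Reusing updated values typically roughly halves the number of sweeps compared with Jacobi on this problem.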
Convergence

Again, the iteration converges for any initial guess if and only if

\Large \rho(T_g)<1
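Both criteria can be checked numerically by forming the iteration matrices directly from the splitting A = D + L + U. A sketch (the helper names and the Kronecker-product construction of the test matrix are my own; for the five-point Laplacian with natural ordering, a consistently ordered matrix, one finds ρ(T_g) = ρ(T_j)², which is why Gauss-Seidel converges in roughly half as many iterations):

```python
import numpy as np

def iteration_matrices(A):
    """Split A = D + L + U and form T_j = -D^{-1}(L+U), T_g = -(D+L)^{-1}U."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    T_j = -np.linalg.solve(D, L + U)       # Jacobi iteration matrix
    T_g = -np.linalg.solve(D + L, U)       # Gauss-Seidel iteration matrix
    return T_j, T_g

def spectral_radius(T):
    """Largest eigenvalue magnitude of T."""
    return np.max(np.abs(np.linalg.eigvals(T)))

# Five-point Laplacian on a 3x3 interior grid, built with Kronecker products.
I3 = np.eye(3)
T1 = 2.0 * I3 - np.eye(3, k=1) - np.eye(3, k=-1)   # 1-D second difference
A = np.kron(I3, T1) + np.kron(T1, I3)              # 2-D five-point matrix
```

For this 3×3 interior grid the known value is ρ(T_j) = cos(π/4) ≈ 0.707, so both methods converge, with ρ(T_g) = 1/2.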


