Since you provided code, you clearly made an effort, so I'll point out a better approach. When you use MATLAB, try to use the features of the language. Don't pretend you are still writing in a low-level language. So we can write one Jacobi iteration as
X_(n+1) = inv(D)*(b - R*X_n)
where D is the diagonal matrix containing the diagonal of A, and R is the matrix of the off-diagonal elements of A, so R has zeros on its diagonal. (This comes from splitting A = D + R: then A*X = b becomes D*X = b - R*X.) How do we do this in MATLAB?
First, build D and R the simple way.
D = diag(diag(A));
R = A - D;
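As a quick sanity check of the splitting (a small hypothetical system, not part of the solver itself), the exact solution is a fixed point of the iteration:

```matlab
% Hypothetical 2-by-2 system to illustrate the splitting A = D + R
A = [4 1; 1 3];
b = [1; 2];
D = diag(diag(A));     % diagonal part of A
R = A - D;             % off-diagonal part of A
x = A\b;               % exact solution from backslash
% x should satisfy x = D\(b - R*x), up to roundoff:
norm(x - D\(b - R*x))  % essentially zero
```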
Now, we should recognize that computing the inverse of a full diagonal matrix is silly. Better is to take the reciprocal of each element on the diagonal.
Dinv = diag(1./diag(A));
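For a diagonal matrix, the elementwise reciprocal gives exactly the same result as inv, just without forming and inverting a full matrix (a small illustrative check with made-up numbers, not part of the solver):

```matlab
% Hypothetical diagonal example
d = [4; 5; 6];
v = [1; 2; 3];
% inv of a diagonal matrix versus the elementwise reciprocal:
norm(inv(diag(d))*v - v./d)      % essentially zero
norm(diag(1./d) - inv(diag(d)))  % also essentially zero
```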
So now we can write a single Jacobi iteration as
X = Dinv*(b - R*X);
See, no nested loops are needed. We never index into these matrices at all. Now wrap it all up in a MATLAB function. Be friendly, check for problems, and be liberal with comments.
==================================================
function [X,residuals,iter] = JacobiMethod(A,b)
% solves a linear system using Jacobi iterations
%
% The presumption is that A is nxn square and has no zeros on the diagonal
% also, A must be diagonally dominant for convergence.
% b must be an nx1 vector.
n = size(A,1);
if n ~= size(A,2)
error('JACOBIMETHOD:Anotsquare','A must be n by n square matrix')
end
if ~isequal(size(b),[n 1])
error('JACOBIMETHOD:incompatibleshapes','b must be an n by 1 vector')
end
% get the diagonal elements
D = diag(A);
% test that none are zero
if any(D == 0)
error('JACOBIMETHOD:zerodiagonal', ...
'The sky is falling! There are zeros on the diagonal')
end
% since none are zero, we can compute the inverse of D.
% even better is to make Dinv a sparse diagonal matrix,
% for better speed in the multiplies.
Dinv = sparse(diag(1./D));
R = A - diag(D);
% starting values. I'm not being very creative here, but
% using the vector b as a starting value seems reasonable.
X = b;
err = inf;
tol = 100*eps(norm(b));
iter = 0; % count iterations
while err > tol
iter = iter + 1;
Xold = X;
% the guts of the iteration
X = Dinv*(b - R*X);
% this is not really an error, but a change per iteration.
% when that is stable, we choose to terminate.
err = norm(X - Xold);
end
% compute residuals
residuals = b - A*X;
==================================================
Let's see how it works.
A = rand(5) + 4*eye(5);
b = rand(5,1);
[X,res,iter] = JacobiMethod(A,b)
X =
0.12869
-0.0021942
0.10779
0.11791
0.11785
res =
5.7732e-15
1.6653e-14
1.5654e-14
1.6542e-14
1.843e-14
iter =
39
Does it converge to the solution we get from backslash?
A\b
ans =
0.12869
-0.0021942
0.10779
0.11791
0.11785
That looks good to me. Better code might check for diagonal dominance, to try to predict when the code will fail. I might choose a more intelligent tolerance on the solution, or a better starting value for X. Finally, I'd want to provide more complete help, and a reference or two.
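That diagonal dominance check might look something like this (a sketch only; the function name and the weak-inequality threshold are my own choices, not part of the original code):

```matlab
% Sketch: test A for (weak) row diagonal dominance.
% A is diagonally dominant if, in every row, the magnitude of the
% diagonal entry is at least the sum of the magnitudes of the
% off-diagonal entries in that row.
function tf = isDiagDominant(A)
d = abs(diag(A));
offdiag = sum(abs(A),2) - d;   % row sums, excluding the diagonal
tf = all(d >= offdiag);
```

JacobiMethod could then warn before iterating, e.g. `if ~isDiagDominant(A), warning('convergence is not assured'), end`.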
The point is that you want to see the general characteristics of good code.
Awesome! Thank you so much for the help! I really appreciate it. – Kristian