GMM - log-likelihood is not monotonic

Yesterday I implemented a GMM (Gaussian Mixture Model) using the expectation-maximization algorithm. As you may recall, it models some unknown distribution as a mixture of Gaussians whose means, variances, and per-Gaussian weights we need to learn.

Here is the math behind the code (it is not that complicated): http://mccormickml.com/2014/08/04/gaussian-mixture-models-tutorial-and-matlab-code/
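
For reference, the model and the two EM steps in generic notation (the symbols here are mine and may not match the tutorial's exactly):

p(x) = \sum_{j=1}^{k} \phi_j \, \mathcal{N}(x \mid \mu_j, \Sigma_j)

E-step (responsibilities):  w_{ij} = \phi_j \, \mathcal{N}(x_i \mid \mu_j, \Sigma_j) \Big/ \sum_{l=1}^{k} \phi_l \, \mathcal{N}(x_i \mid \mu_l, \Sigma_l)

M-step:  \phi_j = \frac{1}{m} \sum_i w_{ij}, \quad \mu_j = \frac{\sum_i w_{ij} x_i}{\sum_i w_{ij}}, \quad \Sigma_j = \frac{\sum_i w_{ij} (x_i - \mu_j)(x_i - \mu_j)^T}{\sum_i w_{ij}}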

Here is my code:

import numpy as np 
from scipy.stats import multivariate_normal 
import matplotlib.pyplot as plt 

#reference for this code is http://mccormickml.com/2014/08/04/gaussian-mixture-models-tutorial-and-matlab-code/ 

def expectation(data, means, covs, priors): #E-step. returns the updated probabilities 
    m = data.shape[0]      #gets the data, means covariances and priors of all clusters 
    numOfClusters = priors.shape[0] 

    probabilities = np.zeros((m, numOfClusters)) 
    for i in range(0, m):
        for j in range(0, numOfClusters):
            sum = 0
            for l in range(0, numOfClusters):
                sum += normalPDF(data[i, :], means[l], covs[l]) * priors[l, 0]
            probabilities[i, j] = normalPDF(data[i, :], means[j], covs[j]) * priors[j, 0]/sum

    return probabilities 

def maximization(data, probabilities): #M-step. this updates the means, covariances, and priors of all clusters 
    m, n = data.shape 
    numOfClusters = probabilities.shape[1] 

    means = np.zeros((numOfClusters, n)) 
    covs = np.zeros((numOfClusters, n, n)) 
    priors = np.zeros((numOfClusters, 1)) 

    for i in range(0, numOfClusters):
        priors[i, 0] = np.sum(probabilities[:, i])/m #update priors

        for j in range(0, m): #update means
            means[i] += probabilities[j, i] * data[j, :]

            vec = np.reshape(data[j, :] - means[i, :], (n, 1))
            covs[i] += probabilities[j, i] * np.dot(vec, vec.T) #update covs

        means[i] /= np.sum(probabilities[:, i])
        covs[i] /= np.sum(probabilities[:, i])

    return [means, covs, priors] 

def normalPDF(x, mean, covariance): #this is simply multivariate normal pdf 
    n = len(x) 

    mean = np.reshape(mean, (n,)) 
    x = np.reshape(x, (n,)) 

    var = multivariate_normal(mean=mean, cov=covariance,) 
    return var.pdf(x) 


def initClusters(numOfClusters, data): #initialize all the gaussian clusters (means, covariances, priors)
    m, n = data.shape 

    means = np.zeros((numOfClusters, n)) 
    covs = np.zeros((numOfClusters, n, n)) 
    priors = np.zeros((numOfClusters, 1)) 

    initialCovariance = np.cov(data.T) 

    for i in range(0, numOfClusters):
        means[i] = np.random.rand(n) #the initial mean for each gaussian is chosen randomly
        covs[i] = initialCovariance #the initial covariance of each cluster is the covariance of the data
        priors[i, 0] = 1.0/numOfClusters #the initial priors are uniformly distributed.

    return [means, covs, priors] 

def logLikelihood(data, probabilities): #data is our data. probabilities[i, j] = k means the probability that example i belongs to cluster j is k, with 0 < k < 1
    m = data.shape[0] #num of examples 

    examplesByCluster = np.zeros((m, 1)) 
    for i in range(0, m): 
        examplesByCluster[i, 0] = np.argmax(probabilities[i, :])
    examplesByCluster = examplesByCluster.astype(int) #examplesByCluster[i] = j means that example i belongs in cluster j 

    result = 0 
    for i in range(0, m): 
        result += np.log(probabilities[i, examplesByCluster[i, 0]]) #example i belongs in cluster examplesByCluster[i, 0]

    return result 

m = 2000 #num of training examples 
n = 8 #num of features for each example 

data = np.random.rand(m, n) 
numOfClusters = 2 #num of gaussians 
numIter = 30 #num of iterations of EM 
cost = np.zeros((numIter, 1)) 

[means, covs, priors] = initClusters(numOfClusters, data) 

for i in range(0, numIter): 
    probabilities = expectation(data, means, covs, priors) 
    [means, covs, priors] = maximization(data, probabilities) 

    cost[i, 0] = logLikelihood(data, probabilities) 

plt.plot(cost) 
plt.show() 

The problem is that the log-likelihood behaves strangely. I expected it to increase monotonically, but it does not.

For example, with 2000 examples, 8 features, and 3 Gaussian clusters, the log-likelihood looks like this (30 iterations):

(image: plot of the log-likelihood over 30 iterations)

So that is quite bad. But in other tests I ran, for example one with 15 examples, 2 features, and 2 clusters, the log-likelihood looks like this:

(image: plot of the log-likelihood over the iterations)

Better, but still not perfect.

Why does this happen, and how can I fix it?

What data are you trying to model? From the code it looks like you are modeling random points, i.e. there is no structure to be found in the data. If that is the case, your GMM fit may fluctuate randomly. – etov
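
A quick sketch of generating data that actually has cluster structure, for testing purposes (the two component means and covariances below are arbitrary choices, not taken from the post):

import numpy as np

m, n = 2000, 8
half = m // 2
cluster1 = np.random.multivariate_normal(np.zeros(n), np.eye(n), half)               #points around the origin
cluster2 = np.random.multivariate_normal(np.full(n, 3.0), 0.5 * np.eye(n), m - half) #points around (3, ..., 3)
data = np.vstack([cluster1, cluster2])
np.random.shuffle(data)                                                               #shuffle the rows in place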

In this case it is random, but in the future it could be any kind of data, from temperatures to vehicle sensor readings, anything. I don't think it matters that the data is random. Theoretically, we are guaranteed monotonic convergence, even with random data. –

Have you tried comparing your results with those of a known implementation? One option is scikit-learn's [GaussianMixture](http://scikit-learn.org/stable/modules/mixture.html). –
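
A sketch of that comparison, assuming the data, numOfClusters, and numIter variables from the question are in scope (scikit-learn's GaussianMixture runs its own EM and reports a per-sample average log-likelihood via score):

from sklearn.mixture import GaussianMixture

gm = GaussianMixture(n_components=numOfClusters, covariance_type='full', max_iter=numIter)
gm.fit(data)
print(gm.score(data) * data.shape[0])   #total log-likelihood on the training data, for comparison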

Answer

The problem is in the maximization step.

The code uses means to compute covs, but it does so inside the same loop, before means has been divided by the sum of the probabilities (i.e., while means still holds only a partial, unnormalized sum).

This causes the estimated covariances to blow up.

Here is a suggested fix:

def maximization(data, probabilities): #M-step. this updates the means, covariances, and priors of all clusters 
    m, n = data.shape 
    numOfClusters = probabilities.shape[1] 

    means = np.zeros((numOfClusters, n)) 
    covs = np.zeros((numOfClusters, n, n)) 
    priors = np.zeros((numOfClusters, 1)) 

    for i in range(0, numOfClusters):
        priors[i, 0] = np.sum(probabilities[:, i])/m #update priors

        for j in range(0, m): #update means
            means[i] += probabilities[j, i] * data[j, :]

        means[i] /= np.sum(probabilities[:, i])

    for i in range(0, numOfClusters):
        for j in range(0, m): #update covs, now using the fully updated means
            vec = np.reshape(data[j, :] - means[i, :], (n, 1))
            covs[i] += probabilities[j, i] * np.multiply(vec, vec.T)

        covs[i] /= np.sum(probabilities[:, i])

    return [means, covs, priors] 

This gives the following cost function (200 data points, 4 features): (image: cost over the EM iterations)

EDIT: I was sure this bug was the only problem in the code, but running some additional examples I still sometimes see non-monotonic behavior (although less erratic than before). So this seems to be only part of the problem.
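
One more thing that may be relevant (an observation, not something I verified against the plots above): EM's monotonicity guarantee applies to the data log-likelihood, i.e. the sum over examples of the log of the mixture density, whereas logLikelihood in the question sums the log of each example's largest responsibility. A minimal sketch of the standard quantity, reusing normalPDF from the question:

def dataLogLikelihood(data, means, covs, priors):
    m = data.shape[0]
    numOfClusters = priors.shape[0]
    ll = 0.0
    for i in range(0, m):
        p = 0.0
        for j in range(0, numOfClusters):
            p += priors[j, 0] * normalPDF(data[i, :], means[j], covs[j]) #mixture density at example i
        ll += np.log(p)
    return ll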

EDIT 2: There is another problem in the covariance computation: the vector multiplication should be element-wise rather than a dot product (remember that the result should be a vector). Now the result appears to be monotonically increasing throughout.