
I am using Keras for a CNN, but the problem is that there seems to be a memory leak. The error I get when using Keras is a MemoryError:

 [email protected]:~/12EC35005/MTP_Workspace/MTP$ python cnn_implement.py 
     Using Theano backend. 
     [INFO] compiling model... 
     Traceback (most recent call last): 
      File "cnn_implement.py", line 23, in <module> 
      model = CNNModel.build(width=150, height=150, depth=3) 
      File "/home/ms/anushreej/12EC35005/MTP_Workspace/MTP/cnn/networks/model_define.py", line 27, in build 
      model.add(Dense(depth*height*width)) 
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/models.py", line 146, in add 
      output_tensor = layer(self.outputs[0]) 
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/engine/topology.py", line 458, in __call__ 
      self.build(input_shapes[0]) 
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/layers/core.py", line 604, in build 
      name='{}_W'.format(self.name)) 
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/initializations.py", line 61, in glorot_uniform 
      return uniform(shape, s, name=name) 
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/initializations.py", line 32, in uniform 
      return K.variable(np.random.uniform(low=-scale, high=scale, size=shape), 
      File "mtrand.pyx", line 1255, in mtrand.RandomState.uniform (numpy/random/mtrand/mtrand.c:13575) 
      File "mtrand.pyx", line 220, in mtrand.cont2_array_sc (numpy/random/mtrand/mtrand.c:2902) 
     MemoryError 

Now I cannot understand why this is happening. My training images are quite small, with dimensions 150 * 150 * 3.

The code:

    # import the necessary packages
    from keras.models import Sequential
    from keras.layers.convolutional import Convolution2D
    from keras.layers.core import Activation
    from keras.layers.core import Flatten
    from keras.layers.core import Dense

    class CNNModel:
        @staticmethod
        def build(width, height, depth):
            # initialize the model
            model = Sequential()

            # first set of CONV => RELU
            model.add(Convolution2D(50, 5, 5, border_mode="same",
                                    batch_input_shape=(None, depth, height, width)))
            model.add(Activation("relu"))

            # second set of CONV => RELU
            # model.add(Convolution2D(50, 5, 5, border_mode="same"))
            # model.add(Activation("relu"))

            # third set of CONV => RELU
            # model.add(Convolution2D(50, 5, 5, border_mode="same"))
            # model.add(Activation("relu"))

            model.add(Flatten())

            model.add(Dense(depth*height*width))

            # if weightsPath is not None:
            #     model.load_weights(weightsPath)

            return model

How do you know there is a memory leak rather than some other problem? –

Answer


I faced the same problem, and I think the issue is simply that the amount of data going into the flatten layer is more than your system can handle (the same code ran fine for me on a machine with a lot of RAM and gave this error on one with less RAM). Just add a few more CNN layers to shrink the feature maps, and only then add the flatten layer.
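To make the blow-up concrete, here is a rough back-of-the-envelope estimate (a sketch, not part of the original answer) of the weight matrix that the Dense(depth*height*width) layer in the question tries to allocate, assuming the single 50-filter "same"-padding convolution, so that Flatten() emits 50 * 150 * 150 values:

    # Rough size of the Dense weight matrix allocated in the question's model
    # (assumes the 50-filter "same"-padding conv, so Flatten() yields 50*150*150 values).
    flat_features = 50 * 150 * 150        # output of Flatten(): 1,125,000 values
    dense_units = 3 * 150 * 150           # Dense(depth*height*width): 67,500 units
    n_weights = flat_features * dense_units
    print("weights: {:,}".format(n_weights))                           # ~7.6e10 parameters
    print("approx. {:.0f} GB as float32".format(n_weights * 4 / 1e9))  # ~300 GB

That is far more memory than an ordinary machine has, which is why the uniform weight initializer in the traceback fails with a MemoryError.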

This gave me the error:

    from keras.models import Sequential
    from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

    model = Sequential()
    model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(1, 96, 96), activation='relu'))
    model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(1000, activation='relu'))
    model.add(Dense(97, activation='softmax'))

This did not give an error:

    model = Sequential()
    model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(1, 96, 96), activation='relu'))
    model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
    model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(1000, activation='relu'))
    model.add(Dense(97, activation='softmax'))
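As a quick sanity check (a suggestion on top of the original answer, not part of it), printing the per-layer output shapes and parameter counts before compiling makes this kind of blow-up visible immediately:

    # Inspect where the parameter count explodes before spending time on training.
    model.summary()
    print("total parameters:", model.count_params())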

Hope it helps.
