2017-12-27

Keras CNN dimension problem

I want to build a CNN for an image segmentation task with Keras, based on this article. Since my dataset is small, I want to use the Keras ImageDataGenerator and feed it to fit_generator(). So I followed the example on the Keras website. However, since zipping the image and mask generators does not work, I followed this answer and created my own generator.

My input data has size (701, 256, 1) and my problem is binary (foreground/background). For every image I have a label of the same shape.

Now I am facing a dimension problem. This is also mentioned in the answer, but I am not sure how to solve it.

The error:

ValueError: Error when checking target: expected dense_3 to have 2 dimensions, but got array with shape (2, 704, 256, 1) 
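The mismatch can be reproduced with plain NumPy: the model ends in Dense(2), so Keras expects targets of shape (batch, 2), while my generator yields image-shaped masks. The arrays below are dummies for illustration only:

```python
import numpy as np

batch_size = 2
# What the final Dense(2) layer produces: one 2-class vector per image.
predictions = np.zeros((batch_size, 2))
# What the label generator actually yields: one full-size mask per image.
targets = np.zeros((batch_size, 704, 256, 1))

print(predictions.ndim)  # 2 dimensions, as dense_3 expects
print(targets.ndim)      # 4 dimensions -> the ValueError above
```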

I have pasted my entire code here:

import numpy 
import pygpu 
import theano 
import keras 

from keras.models import Model, Sequential 
from keras.layers import Input, Dense, Dropout, Activation, Flatten 
from keras.layers import Conv2D, MaxPooling2D, Reshape 
from keras.layers import BatchNormalization 
from keras.preprocessing.image import ImageDataGenerator 

from keras.utils import np_utils 
from keras import backend as K 

def superGenerator(image_gen, label_gen):
    # Pair each image batch with the matching mask batch,
    # dropping the class labels that flow_from_directory appends.
    while True:
        x = image_gen.next()
        y = label_gen.next()
        yield x[0], y[0]


img_height = 704 
img_width = 256 

train_data_dir = 'Dataset/Train/Images' 
train_label_dir = 'Dataset/Train/Labels' 
validation_data_dir = 'Dataset/Validation/Images' 
validation_label_dir = 'Dataset/Validation/Labels' 
n_train_samples = 1000 
n_validation_samples = 500 
epochs = 50 
batch_size = 2 

input_shape = (img_height, img_width,1) 
target_shape = (img_height, img_width) 

model = Sequential() 

model.add(Conv2D(80,(28,28), input_shape=input_shape)) 
model.add(BatchNormalization()) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2))) 

model.add(Conv2D(96,(18,18))) 
model.add(BatchNormalization()) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2))) 

model.add(Conv2D(128,(13,13))) 
model.add(BatchNormalization()) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2))) 


model.add(Conv2D(160,(8,8))) 
model.add(BatchNormalization()) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2,2))) 

model.add(Flatten()) 

model.add(Dense(1024, activation='relu')) 
model.add(Dense(512, activation='relu')) 
model.add(Dropout(0.25)) 

model.add(Dense(2, activation='softmax')) 

model.summary() 

model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['accuracy']) 

data_gen_args = dict(
    rescale=1./255, 
    horizontal_flip=True, 
    vertical_flip=True 
    ) 

train_datagen = ImageDataGenerator(**data_gen_args) 
train_label_datagen = ImageDataGenerator(**data_gen_args) 
test_datagen = ImageDataGenerator(**data_gen_args) 
test_label_datagen = ImageDataGenerator(**data_gen_args) 

seed = 1 

train_image_generator = train_datagen.flow_from_directory(
    train_data_dir, 
    target_size=target_shape, 
    color_mode='grayscale', 
    batch_size=batch_size, 
    class_mode = 'binary', 
    seed=seed) 
train_label_generator = train_label_datagen.flow_from_directory(
    train_label_dir, 
    target_size=target_shape, 
    color_mode='grayscale', 
    batch_size=batch_size, 
    class_mode = 'binary', 
    seed=seed) 

validation_image_generator = test_datagen.flow_from_directory(
    validation_data_dir, 
    target_size=target_shape, 
    color_mode='grayscale', 
    batch_size=batch_size, 
    class_mode = 'binary', 
    seed=seed) 

validation_label_generator = test_label_datagen.flow_from_directory(
    validation_label_dir, 
    target_size=target_shape, 
    color_mode='grayscale', 
    batch_size=batch_size, 
    class_mode = 'binary', 
    seed=seed) 

train_generator = superGenerator(train_image_generator, train_label_generator)
test_generator = superGenerator(validation_image_generator, validation_label_generator)

model.fit_generator(
    train_generator, 
    steps_per_epoch= n_train_samples // batch_size, 
    epochs=epochs,
    validation_data=test_generator, 
    validation_steps=n_validation_samples // batch_size) 

model.save_weights('first_try.h5') 
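For reference, the pairing logic in superGenerator can be checked in isolation with dummy iterators that mimic what flow_from_directory returns (an (images, classes) tuple per batch). The DummyFlow class and all shapes below are made up for illustration:

```python
import numpy as np

def superGenerator(image_gen, label_gen):
    # Pair each image batch with the matching mask batch.
    while True:
        x = image_gen.next()  # (images, class_labels) from the image directory
        y = label_gen.next()  # (masks, class_labels) from the label directory
        yield x[0], y[0]      # keep only the arrays, drop the class labels

class DummyFlow:
    """Mimics a Keras DirectoryIterator: next() returns (batch, classes)."""
    def __init__(self, shape):
        self.shape = shape
    def next(self):
        return np.zeros(self.shape), np.zeros(self.shape[0])

images = DummyFlow((2, 704, 256, 1))
masks = DummyFlow((2, 704, 256, 1))
xb, yb = next(superGenerator(images, masks))
print(xb.shape, yb.shape)  # both (2, 704, 256, 1)
```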

I am new to Keras (and CNNs), so any help would be greatly appreciated.

What is the error? – Nain

I updated the question. Thanks :) – Maja

Answer

OK. I did some rubber-duck debugging and read a few more articles. Of course, the dimensions were the problem. This simple answer did it for me: my labels have the same shape as the input images, so the output of the model should also have that shape. I used Conv2DTranspose to solve this.
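For anyone hitting the same wall: with 'valid' padding and kernel size at least the stride, a Conv2DTranspose layer maps an input of length n to (n - 1) * s + k, the inverse of a strided convolution's (n - k) // s + 1. A quick sanity check of that arithmetic (a hypothetical stride-2 pair, not my actual layer sizes):

```python
def conv_out(n, k, s):
    # Output length of Conv2D with 'valid' padding.
    return (n - k) // s + 1

def conv_transpose_out(n, k, s):
    # Output length of Conv2DTranspose with 'valid' padding (k >= s).
    return (n - 1) * s + k

# A stride-2 conv followed by a matching Conv2DTranspose restores the size,
# so the model's output can be brought back to (704, 256, 1).
down = conv_out(704, 2, 2)
up = conv_transpose_out(down, 2, 2)
print(down, up)  # 352 704
```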