2017-01-20

This is a variational autoencoder network, and I have to define a sampling method to generate the latent z, which I think may be where the error comes from. This py file does the training, and another py file does online prediction, so I need to save the Keras model. Saving the model works without any problem, but when I load the model from the 'h5' file it shows an error:

NameError: name 'latent_dim' is not defined 

Here is the code:

df_test = df[df['label']==cluster_num].iloc[:,:data_num.shape[1]]

data_scale_ = preprocessing.StandardScaler().fit(df_test.values)
data_num_ = data_scale_.transform(df_test.values)
models_deep_learning_scaler.append(data_scale_)

batch_size = data_num_.shape[0]//10
original_dim = data_num_.shape[1]
latent_dim = data_num_.shape[1]*2
intermediate_dim = data_num_.shape[1]*10
nb_epoch = 1
epsilon_std = 0.001



x = Input(shape=(original_dim,))
init_drop = Dropout(0.2, input_shape=(original_dim,))(x)
h = Dense(intermediate_dim, activation='relu')(init_drop)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(latent_dim,), mean=0.,
                              std=epsilon_std)
    return z_mean + K.exp(z_log_var/2) * epsilon

# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])



# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='linear')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.mae(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer=Adam(lr=0.01), loss=vae_loss)



train_ratio = 0.95
train_num = int(data_num_.shape[0]*train_ratio)

x_train = data_num_[:train_num,:]
x_test = data_num_[train_num:,:]

vae.fit(x_train, x_train,
        shuffle=True,
        nb_epoch=nb_epoch,
        batch_size=batch_size,
        validation_data=(x_test, x_test))

vae.save('./models/deep_learning_'+str(cluster_num)+'.h5')
del vae

from keras.models import load_model
vae = load_model('./models/deep_learning_'+str(cluster_num)+'.h5')

It shows the error: NameError: name 'latent_dim' is not defined

Answer

For the VAE loss you are using several variables that are not known to the Keras module. You need to pass them to the load_model function via the custom_objects argument.

In your case:

vae.save('./vae_'+str(cluster_num)+'.h5') 
vae.summary() 

del vae 

from keras.models import load_model 
vae = load_model('./vae_'+str(cluster_num)+'.h5', custom_objects={'latent_dim': latent_dim, 'epsilon_std': epsilon_std, 'vae_loss': vae_loss}) 
vae.summary() 

It works in the same py file, but if I create a new py file and load the h5 file from disk, it doesn't work. The error is: NameError: name 'latent_dim' is not defined – zb1872


Yes, 'custom_objects' is just a dictionary. In the new file those variables/functions don't exist, so you need to either define them there or load them with pickle. Also, since 'z_mean' and 'z_log_var' are used inside the variational loss, you will probably have to recreate part of the architecture, because pickling those won't work. – indraforyou
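The mechanism behind the error can be illustrated with a small, Keras-free sketch (the function `sampling_stub` and the value of `latent_dim` here are hypothetical, just mirroring how a deserialized Lambda layer or loss resolves its names): Python looks up a function's free variables such as `latent_dim` in the module's globals at call time, not at definition time, so a loaded model fails with NameError unless the loading script defines the same names.

```python
latent_dim = 8

def sampling_stub(z_mean):
    # latent_dim is looked up in module globals on every call
    return [z_mean] * latent_dim

print(len(sampling_stub([0.0])))  # 8

del latent_dim  # simulate loading the model in a fresh .py file

try:
    sampling_stub([0.0])
except NameError as e:
    print(e)  # name 'latent_dim' is not defined

latent_dim = 8  # redefining the name in the loading script fixes it
print(len(sampling_stub([0.0])))  # 8
```

Following the same principle, the prediction script would define latent_dim, epsilon_std, sampling, and vae_loss before calling load_model with custom_objects.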