A brief look at the relationship between loss and val_loss in Keras
How can a loss function accept input values?
Keras is heavily encapsulated, and the example on the official site is rather opaque;
the answer below was found on Stack Overflow.
You can wrap the loss function as an inner function and pass your input tensor to it (as commonly done when passing additional arguments to the loss function).
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

def custom_loss_wrapper(input_tensor):
    def custom_loss(y_true, y_pred):
        # the inner loss closes over input_tensor from the enclosing scope
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')
You can verify that input_tensor and the loss value will change as different X is passed to the model.
import numpy as np

X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)
model.test_on_batch(X, y)  # => 1.1974642
X *= 1000
model.test_on_batch(X, y)  # => 511.15466
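The same closure pattern works for any extra argument, not just input tensors. Below is a minimal sketch that passes a class-weighting factor to a binary cross-entropy loss; the function names weighted_bce_wrapper/weighted_bce and the weighting scheme are my own illustration, not part of the original answer:

from keras import backend as K

def weighted_bce_wrapper(pos_weight):
    # pos_weight is a plain Python float captured by the closure
    def weighted_bce(y_true, y_pred):
        # up-weight the positive class; reduces to plain BCE when pos_weight == 1
        bce = K.binary_crossentropy(y_true, y_pred)
        weights = y_true * pos_weight + (1.0 - y_true)
        return K.mean(weights * bce)
    return weighted_bce

model.compile(loss=weighted_bce_wrapper(5.0), optimizer='adam')

Because the extra argument is baked in at compile time, Keras still sees a standard loss with the (y_true, y_pred) signature.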
fit_generator
fit_generator ultimately calls train_on_batch, which allows for x to be a dictionary.
Also, it could be a list, in which case x is expected to map 1:1 to the inputs defined in Model(input=[in1, …], …)
### generator
yield [inputX_1, inputX_2], y

### model
model = Model(inputs=[inputX_1, inputX_2], outputs=...)
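To make the skeleton concrete, here is a self-contained sketch of a two-input model trained with fit_generator; the layer sizes, input names and random data are arbitrary choices for illustration:

import numpy as np
from keras.layers import Input, Dense, concatenate
from keras.models import Model

in1 = Input(shape=(4,), name='in1')
in2 = Input(shape=(6,), name='in2')
merged = concatenate([in1, in2])
out = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[in1, in2], outputs=out)
model.compile(loss='binary_crossentropy', optimizer='adam')

def batch_generator(X1, X2, y, batch_size=32):
    while True:
        idx = np.random.randint(0, len(y), batch_size)
        # a list mapping 1:1 to the model inputs; a dict keyed by input
        # names, {'in1': X1[idx], 'in2': X2[idx]}, would also work
        yield [X1[idx], X2[idx]], y[idx]

X1 = np.random.rand(1000, 4)
X2 = np.random.rand(1000, 6)
y = np.random.randint(2, size=(1000, 1))
model.fit_generator(batch_generator(X1, X2, y), steps_per_epoch=10, epochs=1)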
补充知识:学习keras时对loss函数不同的选择,则model.fit里的outputs可以是one_hot向量,也可以是整形标签
Without further ado, here is the code:
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# plt.figure()
# plt.imshow(train_images[0])
# plt.colorbar()
# plt.grid(False)
# plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
# plt.figure(figsize=(10, 10))
# for i in range(25):
#     plt.subplot(5, 5, i + 1)
#     plt.xticks([])
#     plt.yticks([])
#     plt.grid(False)
#     plt.imshow(train_images[i], cmap=plt.cm.binary)
#     plt.xlabel(class_names[train_labels[i]])
# plt.show()
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              # with loss='sparse_categorical_crossentropy' the labels need not be
              # converted to one-hot vectors; integer labels can be used directly
              metrics=['accuracy'])
one_hot_train_labels = keras.utils.to_categorical(train_labels, num_classes=10)
model.fit(train_images, one_hot_train_labels, epochs=10)

one_hot_test_labels = keras.utils.to_categorical(test_labels, num_classes=10)
test_loss, test_acc = model.evaluate(test_images, one_hot_test_labels)
print('\nTest accuracy:', test_acc)
# predictions = model.predict(test_images)
# predictions[0]
# np.argmax(predictions[0])
# test_labels[0]
If loss='categorical_crossentropy', the second argument to fit must be one-hot encoded;
if loss='sparse_categorical_crossentropy', the labels need not be converted to one-hot vectors and integer labels can be used directly.
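For comparison, a minimal sketch of the sparse variant on the same Fashion-MNIST data; only the loss string and the label arrays change, everything else stays as in the listing above:

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# integer labels (shape (60000,), values 0-9) are used directly;
# no to_categorical conversion is needed
model.fit(train_images, train_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)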