Loading weights in Keras
The Keras source code defines the weight-loading function in engine/topology.py:

load_weights(self, filepath, by_name=False)

By default by_name is False, in which case weights are loaded according to the network's topology. This is suitable when the model is used exactly as it ships with Keras, such as VGG16, VGG19, or ResNet50. The source code describes it as follows:
If `by_name` is False (default) weights are loaded
based on the network's topology, meaning the architecture
should be the same as when the weights were saved.
Note that layers that don't have weights are not taken
into account in the topological ordering, so adding or
removing layers is fine as long as they don't have weights.
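For instance, a minimal sketch of topology-based loading might look like the following; the local file name vgg16_weights.h5 is only a placeholder and is assumed to contain weights saved from the standard VGG16 architecture:

from keras.applications.vgg16 import VGG16

# Build the same architecture the weights were saved from
model = VGG16(weights=None)
# by_name defaults to False, so weights are matched by topological order
model.load_weights('vgg16_weights.h5')  # hypothetical local weights file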
If by_name is set to True, weights are instead loaded by layer name: a layer receives weights only when its name matches one in the file. This suits the case where parts of the model structure have been changed, or nodes have been added, while the main body of the original network is reused. The source code describes it as follows:
If `by_name` is True, weights are loaded into layers
only if they share the same name. This is useful
for fine-tuning or transfer-learning models where
some of the layers have changed.
For edge detection, for example, the main body of the VGG network is reused and deconvolution layers are added on top of it; in that case the weights should be loaded with
model.load_weights(filepath, by_name=True)
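A minimal sketch of that pattern, assuming the block naming used by the keras.applications VGG16 implementation and a locally available no-top weights file (the file name below is an assumption), could look like this:

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose

inputs = Input(shape=(None, None, 3))
# First VGG16 block: layer names must match those used when the weights were saved
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputs)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# ... the remaining VGG blocks would follow the same naming scheme ...
# New deconvolution (transposed convolution) layer; its name does not exist in the
# VGG16 weight file, so it is simply skipped when loading by name
outputs = Conv2DTranspose(1, (4, 4), strides=(2, 2), padding='same', name='deconv1')(x)

model = Model(inputs, outputs)
# Only layers whose names match entries in the weight file receive weights
model.load_weights('vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5', by_name=True)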
Supplementary note: MNIST handwritten-digit recognition in Keras
I had been using TensorFlow until a classmate recommended Keras, so I took the MNIST handwritten-digit dataset from an earlier write-up to practice with. The code is as follows.
import struct
import numpy as np
import os
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
def load_mnist(path, kind):
    labels_path = os.path.join(path, '%s-labels.idx1-ubyte' % kind)
    images_path = os.path.join(path, '%s-images.idx3-ubyte' % kind)
    with open(labels_path, 'rb') as lbpath:
        # read the 8-byte header (magic number, label count), then the labels
        magic, n = struct.unpack('>II', lbpath.read(8))
        labels = np.fromfile(lbpath, dtype=np.uint8)
    with open(images_path, 'rb') as imgpath:
        # read the 16-byte header, then the images as flat vectors
        magic, num, rows, cols = struct.unpack('>IIII', imgpath.read(16))
        images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)  # 28*28=784
    return images, labels
# loading train and test data
X_train, Y_train = load_mnist('.\\data', kind='train')
X_test, Y_test = load_mnist('.\\data', kind='t10k')
# turn labels to one_hot code
Y_train_ohe = keras.utils.to_categorical(Y_train, num_classes=10)
# define models
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1], output_dim=50, init='uniform', activation='tanh'))
model.add(Dense(input_dim=50, output_dim=50, init='uniform', activation='tanh'))
model.add(Dense(input_dim=50, output_dim=Y_train_ohe.shape[1], init='uniform', activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=["accuracy"])
# start training
model.fit(X_train, Y_train_ohe, epochs=50, batch_size=300, shuffle=True, verbose=1, validation_split=0.3)
# count accuracy
y_train_pred = model.predict_classes(X_train, verbose=0)
train_acc = np.sum(Y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(Y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
The training results are as follows:
Epoch 45/50
42000/42000 [==============================] - 1s 17us/step - loss: 0.2174 - acc: 0.9380 - val_loss: 0.2341 - val_acc: 0.9323
Epoch 46/50
42000/42000 [==============================] - 1s 17us/step - loss: 0.2061 - acc: 0.9404 - val_loss: 0.2244 - val_acc: 0.9358
Epoch 47/50
42000/42000 [==============================] - 1s 17us/step - loss: 0.1994 - acc: 0.9413 - val_loss: 0.2295 - val_acc: 0.9347
Epoch 48/50
42000/42000 [==============================] - 1s 17us/step - loss: 0.2003 - acc: 0.9413 - val_loss: 0.2224 - val_acc: 0.9350
Epoch 49/50
42000/42000 [==============================] - 1s 18us/step - loss: 0.2013 - acc: 0.9417 - val_loss: 0.2248 - val_acc: 0.9359
Epoch 50/50
42000/42000 [==============================] - 1s 17us/step - loss: 0.1960 - acc: 0.9433 - val_loss: 0.2300 - val_acc: 0.9346
Training accuracy: 94.11%
Test accuracy: 93.61%
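As a side note, the same accuracy check could also be done with model.evaluate instead of comparing predicted classes by hand; a small sketch, assuming the test labels are one-hot encoded first:

# Alternative check: model.evaluate returns [loss, accuracy] because the model
# was compiled with metrics=["accuracy"] above
Y_test_ohe = keras.utils.to_categorical(Y_test, num_classes=10)
test_loss, test_acc = model.evaluate(X_test, Y_test_ohe, verbose=0)
print('Test accuracy: %.2f%%' % (test_acc * 100))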
That concludes this post on loading weights in Keras. I hope it can serve as a useful reference, and thank you for supporting 毛票票.