PyTorch: Extract learned weights correctly
Problem description
I am trying to extract the weights from a linear layer, but they do not seem to change, although the error is decreasing monotonically (i.e. training is happening). Printing the weights' sum, nothing changes because it stays constant:
np.sum(model.fc2.weight.data.numpy())
Here are the code snippets:
def train(epochs):
    model.train()
    for epoch in range(1, epochs+1):
        # Train on the training set
        print(np.sum(model.fc2.weight.data.numpy()))
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = Variable(data), Variable(data)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
and
# Define model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(100, 80, bias=False)
        init.normal(self.fc1.weight, mean=0, std=1)
        self.fc2 = nn.Linear(80, 87)
        self.fc3 = nn.Linear(87, 94)
        self.fc4 = nn.Linear(94, 100)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        return x
Maybe I am looking at the wrong parameters, although I checked the docs. Thanks for your help!
Solution
Use model.parameters() to get the trainable weights of any model or layer. Remember to wrap it in list(), otherwise you cannot print it out, since parameters() returns a generator.
The following code snippet works:
>>> import torch
>>> import torch.nn as nn
>>> l = nn.Linear(3,5)
>>> w = list(l.parameters())
>>> w
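If the goal is to verify that a given layer's weights really do change during training, one option is to snapshot the weight tensor before an optimizer step and compare it afterwards. Below is a minimal sketch assuming a recent PyTorch (no Variable wrapper needed); the layer, optimizer, and data here are purely illustrative and not taken from the question:

>>> import torch
>>> import torch.nn as nn
>>> layer = nn.Linear(3, 5)                      # stand-in layer, not the OP's fc2
>>> optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)
>>> before = layer.weight.detach().clone()       # snapshot before the update
>>> x, target = torch.randn(8, 3), torch.randn(8, 5)
>>> loss = nn.functional.mse_loss(layer(x), target)
>>> optimizer.zero_grad()
>>> loss.backward()
>>> optimizer.step()
>>> after = layer.weight.detach().clone()        # snapshot after the update
>>> torch.equal(before, after)                   # False once the weights have moved
>>> (after - before).abs().sum()                 # magnitude of the change

Comparing the full tensors like this is a more direct check than printing a single sum, since a sum can stay nearly flat even while individual weights move.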
