PyTorch: Extract learned weights correctly
Problem Description
Solution
I am trying to extract the weights from a linear layer, but they do not appear to change, although the error is decreasing monotonically (i.e., training is happening). When I print the sum of the weights, nothing happens: it stays constant:
np.sum(model.fc2.weight.data.numpy())
Here are the code snippets:
def train(epochs):
    model.train()
    for epoch in range(1, epochs+1):
        # Train on train set
        print(np.sum(model.fc2.weight.data.numpy()))
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = Variable(data), Variable(data)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
and the model definition:
# Define model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(100, 80, bias=False)
        init.normal(self.fc1.weight, mean=0, std=1)
        self.fc2 = nn.Linear(80, 87)
        self.fc3 = nn.Linear(87, 94)
        self.fc4 = nn.Linear(94, 100)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        return x
Maybe I am looking at the wrong parameters, although I checked the docs. Thanks for your help!
Use model.parameters() to get the trainable weights of any model or layer. Remember to wrap it in list(), otherwise you cannot print it out, since it returns a generator.
The following code snippet works:
>>> import torch
>>> import torch.nn as nn
>>> l = nn.Linear(3,5)
>>> w = list(l.parameters())
>>> w
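To tie this back to the original problem, here is a minimal sketch (my own illustration, not part of the answer above) that snapshots a layer's weights before an optimizer step and compares them afterwards, so you can confirm they really change. The model, loss, and data below are placeholders; substitute your own Net, criterion, and train_loader.

import torch
import torch.nn as nn

# Placeholder model, optimizer, and loss purely for illustration.
model = nn.Linear(3, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

# clone() makes a real copy; without it, 'before' would alias the live
# parameter tensor and always compare equal after the in-place update.
before = model.weight.detach().clone()

x = torch.randn(8, 3)
y = torch.randn(8, 5)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# Prints False once the step has updated the weights.
print(torch.equal(before, model.weight.detach()))

The same clone-and-compare check can be applied to model.fc2.weight inside the question's training loop to verify that the weight sum being printed is actually moving between epochs.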