PyTorch: What is the difference between tensor.cuda() and tensor.to(torch.device("cuda:0"))?
Question
In PyTorch, what is the difference between the following two methods of sending a tensor (or model) to the GPU?
Setup:
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]) # X = model()
X = torch.DoubleTensor(X)
| Method 1 | Method 2 |
|---|---|
| X.cuda() | device = torch.device("cuda:0"); X = X.to(device) |
(I don't really need a detailed explanation of what happens in the backend; I just want to know whether they are both essentially doing the same thing.)
Answer
There is no difference between the two.

Early versions of PyTorch had .cuda() and .cpu() methods to move tensors and models from CPU to GPU and back. However, this made code writing a bit cumbersome:
cuda_available = torch.cuda.is_available()
if cuda_available:
    x = x.cuda()
    model.cuda()
else:
    x = x.cpu()
    model.cpu()
Later versions introduced .to(), which basically takes care of everything in an elegant way:
device = torch.device('cuda') if cuda_available else torch.device('cpu')
x = x.to(device)
model = model.to(device)
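As a quick sanity check of the equivalence (a minimal sketch, assuming PyTorch is installed; the snippet falls back to CPU when no GPU is present, since .cuda() itself requires one):

```python
import torch

x = torch.DoubleTensor([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])

# Device-agnostic: pick the GPU when available, otherwise stay on CPU.
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
a = x.to(device)

if torch.cuda.is_available():
    b = x.cuda()                  # legacy style, only works with a GPU
    assert a.device == b.device   # both tensors end up on cuda:0
    assert torch.equal(a, b)      # and hold the same values

print(a.device.type)
```

Note that .to() is also more general than .cuda(): the same call can change dtype as well, e.g. x.to(device, torch.float32) moves and casts in one step.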
