PyTorch: What is the difference between tensor.cuda() and tensor.to(torch.device("cuda:0"))?
Question
In PyTorch, what is the difference between the following two methods of sending a tensor (or model) to the GPU?
Setup:
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]) # X = model()
X = torch.DoubleTensor(X)
| Method 1 | Method 2 |
|---|---|
| `X.cuda()` | `device = torch.device("cuda:0"); X = X.to(device)` |
(I don't really need a detailed explanation of what is happening in the backend, I just want to know whether they are both essentially doing the same thing.)
Answer
There is no difference between the two.

Early versions of PyTorch had `.cuda()` and `.cpu()` methods to move tensors and models from CPU to GPU and back. However, this made writing code a bit cumbersome:
if cuda_available:
x = x.cuda()
model.cuda()
else:
x = x.cpu()
model.cpu()
Later versions introduced `.to()`, which basically takes care of everything in an elegant way:
device = torch.device('cuda') if cuda_available else torch.device('cpu')
x = x.to(device)
model = model.to(device)
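To see that the two methods really do land the tensor in the same place, here is a minimal, self-contained sketch based on the question's setup. It falls back to CPU when no GPU is available, so `.cuda()` (which requires a GPU) is only called conditionally; the variable names mirror the question and are otherwise arbitrary:

```python
import numpy as np
import torch

# Setup from the question
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X)

# Pick a device, falling back to CPU when CUDA is unavailable
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

# Method 2: works on any machine
a = X.to(device)

# Method 1: .cuda() only works when a GPU is present, so guard it
b = X.cuda() if torch.cuda.is_available() else X

print(a.device == b.device)  # True: both tensors end up on the same device
```

Note that both calls return a *new* tensor rather than moving `X` in place, which is why the usual idiom is `X = X.to(device)` (for an `nn.Module`, by contrast, `model.to(device)` moves the parameters in place).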