Weekly Reading List #3
Issue #3: 2018/04/30 to 2018/05/06
This is an experimental series in which I briefly introduce the interesting data science stuff I read, watched, or listened to during the week. Please give this post some claps if you’d like this series to continue.
I’ve been busy with other things this week, so this issue will only cover the new PyTorch 0.4.0 release and the roadmap to the production-ready 1.0 version.
PyTorch 0.4.0 was released in late April, with a migration guide:
Welcome to the migration guide for PyTorch 0.4.0. In this release we introduced many exciting new features and critical… (pytorch.org)
A (perhaps incomplete) list of the important changes, each with a brief summary; a code sketch that ties most of them together follows the list:
- Tensor and Variable have merged: you no longer need to wrap a Tensor in torch.autograd.Variable. But the old code will still work.
- Don’t use type() to query the underlying type of a Tensor; use x.type() or isinstance() instead.
- Add an in-place method .requires_grad_() to set the requires_grad flag.
- The .data attribute now returns a Tensor with requires_grad=False. But changes to the returned Tensor won’t be tracked by autograd. Use the .detach() method instead if you want in-place changes to be reported by autograd.
- 0-dimensional (scalar) Tensors. Fixes the inconsistency between indexing into a Tensor (which returned a Python number) and indexing into a Variable (which returned a vector of size (1,)).
- Use .item() to get the Python number from a scalar tensor instead of indexing, e.g. loss.item() instead of loss.data[0].
- Use the context manager torch.set_grad_enabled(is_train) to exclude variables from autograd instead of setting volatile=True (now deprecated).
- Use torch.tensor to create new Tensor objects. When calling the function, assign the dtype, device, and layout with the new keyword arguments.
- The new torch.*_like and tensor.new_* shortcuts. The former takes a Tensor; the latter takes a shape.
- Use the new .to(device) method to write device-agnostic code.
- Add a new .device attribute to get the torch.device for all Tensors.
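To make these concrete, here is a minimal sketch of idiomatic 0.4.0 code that exercises most of the points above (the tensor values are arbitrary, for illustration only):

```python
import torch

# Device-agnostic setup via torch.device and device= keyword arguments
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# torch.tensor creates a Tensor directly; no Variable wrapper needed.
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], dtype=torch.float32, device=device)
x.requires_grad_()  # in-place method to turn on gradient tracking

# torch.*_like takes a Tensor; tensor.new_* takes a shape
ones = torch.ones_like(x)    # same shape, dtype, and device as x
zeros = x.new_zeros((3, 3))  # new shape, same dtype and device as x

loss = (x * ones).sum()      # a 0-dimensional (scalar) Tensor
loss.backward()
print(loss.item())           # .item() extracts the Python number
print(x.device)              # the new .device attribute

# Exclude computation from autograd instead of setting volatile=True
with torch.set_grad_enabled(False):
    y = x * 2
print(y.requires_grad)       # False
```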
The code samples at the end of the migration guide are a good way to check if you’ve understood the above changes correctly.
Similarly, a (maybe incomplete) list of new features, with a short sketch after the list:
- Windows support.
- torch.where(condition, tensor1, tensor2) for element-wise selection between two tensors.
- torch.utils.checkpoint.checkpoint to trade compute for memory.
- torch.utils.checkpoint.checkpoint_sequential for sequential models.
- torch.utils.bottleneck to identify hotspots.
- reduce=False support for all loss functions, to get per-element losses.
- 24 basic probability distributions in torch.distributions.
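A minimal sketch of a few of these; the model, shapes, and values below are made up purely for illustration:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# torch.where: element-wise selection between two tensors
x = torch.randn(5)
relu_x = torch.where(x > 0, x, torch.zeros_like(x))  # ReLU written with where

# Gradient checkpointing: skip storing intermediate activations in the
# forward pass and recompute them during backward, trading compute for memory.
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 10), nn.ReLU())
inp = torch.randn(2, 10, requires_grad=True)
out = checkpoint_sequential(model, 2, inp)  # run as 2 checkpointed segments
out.sum().backward()

# reduce=False: get the per-element loss instead of a single scalar
loss_fn = nn.MSELoss(reduce=False)
per_element = loss_fn(torch.randn(3, 4), torch.randn(3, 4))  # shape (3, 4)
```

(Later PyTorch releases replaced reduce=False with reduction='none', but reduce=False is the 0.4.0 API described here.)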
The roadmap was published on May 2:
Dear PyTorch Users, We would like to give you a preview of the roadmap for PyTorch 1.0, the next release of PyTorch… (pytorch.org)
Probably one of the most important takeaways:
In 1.0, your code continues to work as-is, we’re not making any big changes to the existing API.
Basically, Facebook is merging Caffe2 and PyTorch to provide a single framework that works for both research and production settings, as hinted earlier in April:
Over the last year and a half the Caffe2 project has invested heavily in high-performance computation, mobile… (github.com)
So the gist of the solution is adding a just-in-time (JIT) compiler, torch.jit, to export your model to run on a Caffe2-based, C++-only runtime. This compiler has two modes (sketched after the list):
- Tracing mode: traces the execution of native Python code. But it can cause problems if your model contains data-dependent control flow such as if statements and loops (for example, an RNN over variable-length sequences), because only the code path taken during tracing is recorded.
- Script mode: compiles code into an intermediate representation. But it only supports a subset of the Python language, so you’ll usually have to isolate the code you want compiled.
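Here is a minimal sketch of the two modes using the torch.jit.trace / torch.jit.script API as it eventually shipped; since the post notes the naming was still subject to change, treat the exact names as an assumption (the model itself is made up):

```python
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Tracing mode: run the model once on an example input and record the
# operations executed; any control flow is baked into the recorded path.
model = TinyNet()
traced = torch.jit.trace(model, torch.randn(1, 4))
traced.save("model.pt")  # the saved module can run without Python

# Script mode: compile a restricted subset of Python directly, so
# data-dependent loops and ifs are preserved in the exported graph.
@torch.jit.script
def positive_sum(x: torch.Tensor) -> torch.Tensor:
    total = torch.zeros(1)
    for i in range(x.size(0)):
        if bool(x[i] > 0):
            total = total + x[i]
    return total
```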
The naming is still subject to change. The 1.0 version is expected to be released this summer.