# Move all torch.nn.modules type annotations inline by ezyang · Pull Request #38211 · pytorch/pytorch

Stack from ghstack:

Move all torch.nn.modules type annotations inline #38211

Just because the annotations are inline doesn't mean the files type check; most of the newly annotated files have type errors, and I added exclusions for them in mypy.ini. The payoff of moving all of these modules inline is that I can delete the relevant code generation logic for the pyi files (which had accumulated ignore annotations that weren't actually relevant anymore). Because we aren't actually typechecking these modules in most cases, it is inevitable that some of these type annotations are wrong. I slavishly copied the old annotations from the pyi files unless there was an obvious correction I could make. These annotations will probably need fixing up later.
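The exclusions use mypy's standard per-module override syntax; a minimal sketch (the module names here are illustrative, not the actual list added by this PR):

```ini
[mypy-torch.nn.modules.activation]
ignore_errors = True

[mypy-torch.nn.modules.conv]
ignore_errors = True
```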

Moving these annotations inline was really hairy because of interactions with JIT, and also the fact that Python type erasure is a lie (inheriting from Generic does change the behavior of your object). Here is the list of things I had to fix and/or work around:

1. The quantization translation passes previously barfed if the weight/bias arguments were inferred to be Optional. Previously, TorchScript type inference would have inferred that these arguments were non-Optional (because type inference happens after module construction), but accurate type annotations on these parameters override this inference process, causing the arguments to become Optional. I fixed this by making the quantized operator signatures line up exactly with the non-quantized signatures, so we never change the types of the arguments. This change mostly involved making a bunch of quantized kernels take Optional, and then error if they were passed nullopt. (You can have any color you like, as long as it's non-null.) A sketch of this signature alignment appears after this list.

2. I removed Generic support for Module and ModuleList. The intentions behind this were admirable, but making Module inherit from Generic ended up being more headache than it was worth. First, in Python 3.6 and earlier, Generic has a nontrivial metaclass, which means all subsequent metaclass shenanigans (e.g., ScriptModule) need to line up the right metaclass. Second, Generic defines `__new__` specially, which means that `inspect.signature` doesn't work (see https://bugs.python.org/issue40897), and I found a case of people using precisely this in the wild; the breakage is demonstrated after this list. Between these two problems, and also the general problem that the parametrization here is an incomplete fix (parametrization helps with output typing, but it doesn't solve problems with input typing, and with mypy as it stands this is unfixable; see python/mypy#3028, "TypeVar to represent a Callable's arguments"), I decided to just eliminate Module generics entirely. We still apply the Callable trick so that subclasses of Module don't cause mypy to complain, but otherwise you are on your own for getting accurate type information out of Modules.

3. The Callable trick on forward caused TorchScript to stop performing inference on the forward body, which is bad because in general we can only figure out the most accurate type by doing TorchScript inference. I added a special case to infer_type to ensure we always do inference for Module.forward, even if it is annotated (which it now is), and another special case to make sure we recursively ignore references to Callable (which we shouldn't process). The trick itself is sketched after this list.

4. When `__annotations__` is set on a class (as is the case when you add type annotations), JIT would incorrectly add further annotations to the parent class. This PR fixes #39463 ("JIT test suite has dependencies across tests") by testing whether `__annotations__` is defined on the specific class, excluding parent classes from the test; the underlying hazard is illustrated after this list.

5. Added a missing fake source range to the invocation of get_signature.

6. In some cases, we cannot provide accurate typing for parameters on modules. This usually occurs when you have an Optional[Tensor] parameter whose optional-ness is determined at `__init__` time. Without the annotation, TorchScript will infer the correct refined type depending on the arguments to the constructor, but with the annotation, it will never do a refinement at `__init__` time, and you'll end up with the wrong type. I ended up just straight up deleting the type annotations in all of these cases. A more robust fix might be some way to force TorchScript to do inference even if there is an explicit annotation, in case of refinement. See the final sketch after this list.
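A minimal sketch of the signature alignment from item 1, in Python for illustration (the actual change was made in the C++ quantized kernels; `quantized_linear` here is a hypothetical stand-in):

```python
from typing import Optional

import torch
from torch import Tensor

def quantized_linear(input: Tensor, weight: Tensor,
                     bias: Optional[Tensor]) -> Tensor:
    # The signature mirrors the non-quantized op exactly, so the
    # quantization translation pass never has to change argument types.
    # The kernel accepts Optional but refuses the null case at runtime.
    if bias is None:
        raise RuntimeError("quantized_linear: bias must be non-None")
    return torch.nn.functional.linear(input, weight, bias)  # stand-in body
```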
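The `inspect.signature` breakage from item 2 is easy to reproduce; a minimal sketch (the exact output depends on your Python version, since https://bugs.python.org/issue40897 was later fixed upstream):

```python
import inspect
from typing import Generic, TypeVar

T = TypeVar("T")

class Plain:
    def __init__(self, x: int) -> None:
        self.x = x

class Parametrized(Generic[T]):
    def __init__(self, x: int) -> None:
        self.x = x

print(inspect.signature(Plain))        # (x: int) -> None
# On affected Python versions, Generic's custom __new__ takes precedence
# and this prints (*args, **kwds) instead of the __init__ signature:
print(inspect.signature(Parametrized))
```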
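The Callable trick from items 2 and 3 amounts to declaring `forward` as a Callable-typed class attribute rather than a method, so mypy accepts subclasses that override it with narrower signatures. A simplified sketch of the shape (not the full Module definition):

```python
from typing import Any, Callable

def _forward_unimplemented(self, *input: Any) -> None:
    raise NotImplementedError

class Module:
    # Typed as a Callable attribute rather than declared as a method:
    # mypy then lets subclasses give forward any signature they like.
    forward: Callable[..., Any] = _forward_unimplemented
```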
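The `__annotations__` hazard from item 4 follows from ordinary attribute lookup: reading `__annotations__` off a class walks the MRO, so mutating it in place can write into a parent's dict. A minimal demonstration, including the class-specific test used as the fix:

```python
class Parent:
    x: int          # creates Parent.__dict__['__annotations__']

class Child(Parent):
    pass            # no annotations of its own

# Attribute lookup falls through to Parent, so this mutates the parent:
Child.__annotations__['y'] = str
assert 'y' in Parent.__annotations__    # the bug

# The fix: only mutate __annotations__ if it lives on this exact class.
if '__annotations__' not in Child.__dict__:
    Child.__annotations__ = {}          # fresh dict that shadows Parent's
Child.__annotations__['z'] = float
assert 'z' not in Parent.__annotations__
```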
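Finally, the refinement problem from item 6 is easiest to see on a module whose bias is decided at construction time; `MyLinear` below is a hypothetical illustration, not code from the PR:

```python
from typing import Optional

import torch
from torch import nn, Tensor

class MyLinear(nn.Module):
    # Annotating `bias: Optional[Tensor]` at class level would pin the
    # TorchScript type to Optional[Tensor] for every instance. Left
    # un-annotated, TorchScript refines the type per instance at __init__
    # time: Tensor when bias=True, None when bias=False.
    def __init__(self, in_features: int, out_features: int,
                 bias: bool = True) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        if bias:
            self.bias = nn.Parameter(torch.empty(out_features))
        else:
            self.register_parameter('bias', None)
```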

Signed-off-by: Edward Z. Yang [email protected]

Differential Revision: D21497397


