In PyTorch, is there a difference between (x < 0) and x.lt(0)?
Suppose x is a tensor in PyTorch. One can either write:
x_lowerthanzero = x.lt(0)
or:
x_lowerthanzero = (x<0)
with seemingly exactly the same results. Many other operations have PyTorch built-in equivalents: x.gt(0) for (x > 0), x.neg() for -x, x.mul(), and so on.
Is there a good reason to use one form over the other?
Solution 1:[1]
They are equivalent. < is simply a more readable alias.
Python operators have canonical function mappings (see the short check with Python's operator module after the two tables), e.g.:
Algebraic operations
| Operation | Syntax | Function |
|---|---|---|
| Addition | a + b | add(a, b) |
| Subtraction | a - b | sub(a, b) |
| Multiplication | a * b | mul(a, b) |
| Division | a / b | truediv(a, b) |
| Exponentiation | a ** b | pow(a, b) |
| Matrix multiplication | a @ b | matmul(a, b) |
Comparisons
| Operation | Syntax | Function |
|---|---|---|
| Ordering | a < b | lt(a, b) |
| Ordering | a <= b | le(a, b) |
| Equality | a == b | eq(a, b) |
| Difference | a != b | ne(a, b) |
| Ordering | a >= b | ge(a, b) |
| Ordering | a > b | gt(a, b) |
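As a quick check of those mappings (plain Python values used purely for illustration; the same named functions also exist as torch functions that apply element-wise to tensors):

import operator

print(operator.lt(3, 5))   # True, same as 3 < 5
print(operator.mul(2, 4))  # 8,    same as 2 * 4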
You can check in the PyTorch source that these operators are indeed mapped to the correspondingly named torch functions, e.g.:
def __lt__(self, other):
    return self.lt(other)
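So, as a minimal sketch (the tensor values below are just illustrative), all of the following spellings yield the same boolean mask:

import torch

x = torch.tensor([-2.0, 0.0, 3.0])

a = x < 0           # operator syntax
b = x.lt(0)         # tensor method
c = torch.lt(x, 0)  # functional form
d = x.__lt__(0)     # the dunder the operator dispatches to

print(torch.equal(a, b) and torch.equal(a, c) and torch.equal(a, d))  # True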
Solution 2:[2]
Usually there is no reason to use one over the other; they exist mostly for convenience. Many of those functions do, for instance, take an out argument, which lets you specify a tensor in which to store the result, but for a plain comparison you can just as well use the operator instead of the method.
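For instance, a minimal sketch of the out argument with the functional form torch.lt (the tensor values are just an illustration):

import torch

x = torch.tensor([-1.0, 0.0, 2.0])

# Preallocate a boolean tensor and write the comparison result into it.
mask = torch.empty_like(x, dtype=torch.bool)
torch.lt(x, 0, out=mask)

print(mask)                      # tensor([ True, False, False])
print(torch.equal(mask, x < 0))  # True: the operator form gives the same values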
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | flawr |
