NEML2 1.4.0
Namespaces | |
namespace | linalg |
Classes | |
struct | ConstantTensors |
A helper class to hold static data of type torch::Tensor. | |
Functions | |
Tensor | full_to_reduced (const Tensor &full, const torch::Tensor &rmap, const torch::Tensor &rfactors, Size dim=0) |
Generic function to reduce two axes to one with some map. | |
Tensor | reduced_to_full (const Tensor &reduced, const torch::Tensor &rmap, const torch::Tensor &rfactors, Size dim=0) |
Convert a Tensor from reduced notation to full notation. | |
Tensor | full_to_mandel (const Tensor &full, Size dim=0) |
Convert a Tensor from full notation to Mandel notation. | |
Tensor | mandel_to_full (const Tensor &mandel, Size dim=0) |
Convert a Tensor from Mandel notation to full notation. | |
Tensor | full_to_skew (const Tensor &full, Size dim=0) |
Convert a Tensor from full notation to skew vector notation. | |
Tensor | skew_to_full (const Tensor &skew, Size dim=0) |
Convert a Tensor from skew vector notation to full notation. | |
Tensor | jacrev (const Tensor &y, const Tensor &p) |
Use automatic differentiation (AD) to calculate the derivatives w.r.t. the parameter. | |
Tensor | base_diag_embed (const Tensor &a, Size offset, Size d1, Size d2) |
SR2 | skew_and_sym_to_sym (const SR2 &e, const WR2 &w) |
Product w_ik e_kj - e_ik w_kj with e SR2 and w WR2. | |
SSR4 | d_skew_and_sym_to_sym_d_sym (const WR2 &w) |
Derivative of w_ik e_kj - e_ik w_kj w.r.t. e. | |
SWR4 | d_skew_and_sym_to_sym_d_skew (const SR2 &e) |
Derivative of w_ik e_kj - e_ik w_kj w.r.t. w. | |
WR2 | multiply_and_make_skew (const SR2 &a, const SR2 &b) |
Shortcut product a_ik b_kj - b_ik a_kj with both SR2. | |
WSR4 | d_multiply_and_make_skew_d_first (const SR2 &b) |
Derivative of a_ik b_kj - b_ik a_kj w.r.t. a. | |
WSR4 | d_multiply_and_make_skew_d_second (const SR2 &a) |
Derivative of a_ik b_kj - b_ik a_kj w.r.t. b. | |
Tensor | pow (const Real &a, const Tensor &n) |
Tensor | pow (const Tensor &a, const Tensor &n) |
Tensor | bmm (const Tensor &a, const Tensor &b) |
Batched matrix-matrix product. | |
Tensor | bmv (const Tensor &a, const Tensor &v) |
Batched matrix-vector product. | |
Tensor | bvv (const Tensor &a, const Tensor &b) |
Batched vector-vector (dot) product. | |
constexpr Real | mandel_factor (Size i) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | batch_cat (const std::vector< T > &tensors, Size d=0) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
neml2::Tensor | base_cat (const std::vector< T > &tensors, Size d=0) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | batch_stack (const std::vector< T > &tensors, Size d=0) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
neml2::Tensor | base_stack (const std::vector< T > &tensors, Size d=0) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | batch_sum (const T &a, Size d=0) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | base_sum (const T &a, Size d=0) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | pow (const T &a, const Real &n) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | sign (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | cosh (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | sinh (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | tanh (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | where (const torch::Tensor &condition, const T &a, const T &b) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | heaviside (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | macaulay (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | dmacaulay (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | sqrt (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | exp (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | abs (const T &a) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | diff (const T &a, Size n=1, Size dim=-1) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | batch_diag_embed (const T &a, Size offset=0, Size d1=-2, Size d2=-1) |
template<class T , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<T>, T>>> | |
T | log (const T &a) |
template<class Derived , typename = typename std::enable_if_t<std::is_base_of_v<TensorBase<Derived>, Derived>>> | |
Derived | pow (const Derived &a, const Scalar &n) |
Variables | |
constexpr Real | eps = std::numeric_limits<at::scalar_value_type<Real>::type>::epsilon() |
constexpr Real | sqrt2 = 1.4142135623730951 |
constexpr Real | invsqrt2 = 0.7071067811865475 |
constexpr Size | mandel_reverse_index [3][3] = {{0, 5, 4}, {5, 1, 3}, {4, 3, 2}} |
constexpr Size | mandel_index [6][2] = {{0, 0}, {1, 1}, {2, 2}, {1, 2}, {0, 2}, {0, 1}} |
constexpr Size | skew_reverse_index [3][3] = {{0, 2, 1}, {2, 0, 0}, {1, 0, 0}} |
constexpr Real | skew_factor [3][3] = {{0.0, -1.0, 1.0}, {1.0, 0.0, -1.0}, {-1.0, 1.0, 0.0}} |
T abs | ( | const T & | a | ) |
neml2::Tensor base_cat | ( | const std::vector< T > & | tensors, |
Size | d = 0 ) |
neml2::Tensor base_stack | ( | const std::vector< T > & | tensors, |
Size | d = 0 ) |
Batched matrix-matrix product.
The input matrices a
and b
must have exactly 2 base dimensions. The batch shapes must broadcast.
Batched matrix-vector product.
The input tensor a
must have exactly 2 base dimensions. The input tensor v
must have exactly 1 base dimension. The batch shapes must broadcast.
Batched vector-vector (dot) product.
The input tensor a
must have exactly 1 base dimension. The input tensor b
must also have exactly 1 base dimension. The batch shapes must broadcast.
T cosh | ( | const T & | a | ) |
Derivative of a_ik b_kj - b_ik a_kj w.r.t. a.
Derivative of a_ik b_kj - b_ik a_kj w.r.t. b.
Derivative of w_ik e_kj - e_ik w_kj w.r.t. w.
Derivative of w_ik e_kj - e_ik w_kj w.r.t. e.
T dmacaulay | ( | const T & | a | ) |
T exp | ( | const T & | a | ) |
Convert a Tensor
from full notation to Mandel notation.
The tensor in full notation full
can have arbitrary batch shape. The optional argument dim
denotes the base dimension starting from which the conversion should take place.
For example, a full tensor has shape (2, 3, 1, 5; 2, 9, 3, 3, 2, 3)
where the semicolon separates batch and base shapes. The symmetric axes have base dim 2 and 3. After converting to Mandel notation, the resulting tensor will have shape (2, 3, 1, 5; 2, 9, 6, 2, 3)
. Note how the shape of the symmetric dimensions (3, 3)
becomes (6)
. In this example, the base dim (the second argument to this function) should be 2.
full | The input tensor in full notation |
dim | The base dimension where the symmetric axes start |
Tensor full_to_reduced | ( | const Tensor & | full, |
const torch::Tensor & | rmap, | ||
const torch::Tensor & | rfactors, | ||
Size | dim = 0 ) |
Generic function to reduce two axes to one with some map.
The tensor in full notation full
can have arbitrary batch shape. The optional argument dim
denotes the base dimension starting from which the conversion should take place.
The function will reduce the two axes at the desired location down to one, using the provided map and factors.
For example, a full tensor has shape (2, 3, 1, 5; 2, 9, 3, 3, 2, 3)
where the semicolon separates batch and base shapes. The reduction axes have base dim 2 and 3. After applying the reduction, the resulting tensor will have shape (2, 3, 1, 5; 2, 9, X, 2, 3)
where X is the reduced shape. In this example, the base dim (the second argument to this function) should be 2.
full | The input tensor in full notation |
rmap | The reduction map |
rfactors | The reduction factors |
dim | The base dimension where the reduced axes start |
Convert a Tensor
from full notation to skew vector notation.
The tensor in full notation full
can have arbitrary batch shape. The optional argument dim
denotes the base dimension starting from which the conversion should take place.
For example, a full tensor has shape (2, 3, 1, 5; 2, 9, 3, 3, 2, 3)
where the semicolon separates batch and base shapes. The skew axes have base dim 2 and 3. After converting to skew vector notation, the resulting tensor will have shape (2, 3, 1, 5; 2, 9, 3, 2, 3)
. Note how the shape of the skew dimensions (3, 3)
becomes (3)
. In this example, the base dim (the second argument to this function) should be 2.
full | The input tensor in full notation |
dim | The base dimension where the skew axes start |
T heaviside | ( | const T & | a | ) |
This is (almost) equivalent to Torch's heaviside, except that Torch's version is not differentiable (back-propagatable). It is only "almost" equivalent because torch::heaviside allows you to set the return value for the case input == 0, whereas our implementation always returns 0.5 when the input == 0.
Use automatic differentiation (AD) to calculate the derivatives w.r.t. the parameter.
Ideally, the output y
and the parameter p
have the same batch shape. However, in practice, the batch shape of the output y
and the batch shape of the parameter p
can be different. In that case, calculating the full Jacobian is not possible, and an exception will be thrown.
One possible (inefficient) workaround is to expand and copy the parameter p
along the batch dimensions, e.g., using batch_expand_copy, before calculating the output y
.
y | The Tensor to be differentiated |
p | The parameter to take derivatives with respect to |
T log | ( | const T & | a | ) |
T macaulay | ( | const T & | a | ) |
Convert a Tensor from Mandel notation to full notation.
See full_to_mandel for a detailed explanation.
mandel | The input tensor in Mandel notation |
dim | The base dimension where the symmetric axes start |
Shortcut product a_ik b_kj - b_ik a_kj with both SR2.
Derived pow | ( | const Derived & | a, |
const Scalar & | n ) |
Tensor reduced_to_full | ( | const Tensor & | reduced, |
const torch::Tensor & | rmap, | ||
const torch::Tensor & | rfactors, | ||
Size | dim = 0 ) |
Convert a Tensor from reduced notation to full notation.
See full_to_reduced for a detailed explanation.
reduced | The input tensor in reduced notation |
rmap | The unreduction map |
rfactors | The unreduction factors |
dim | The base dimension where the reduced axes start |
T sign | ( | const T & | a | ) |
T sinh | ( | const T & | a | ) |
Convert a Tensor from skew vector notation to full notation.
See full_to_skew for a detailed explanation.
skew | The input tensor in skew notation |
dim | The base dimension where the skew axes start |
T sqrt | ( | const T & | a | ) |
T tanh | ( | const T & | a | ) |