ops – Some Common Ops and extra Ops stuff#
This file contains auxiliary Ops used during the compilation phase, an Op-building class (FromFunctionOp), and a decorator (wrap_py()) that help create new Ops more rapidly.
- class pytensor.compile.ops.DeepCopyOp[source]#
- c_code(node, name, inames, onames, sub)[source]#
Return the C implementation of an Op.
Returns C code that does the computation associated with this Op, given names for the inputs and outputs.
- Parameters:
node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.
name (str) – A name that is automatically assigned and guaranteed to be unique.
inames (list of strings) – There is a string for each input of the function, and each string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.
onames (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
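A c_code implementation returns the C fragment as a Python string built from the supplied variable names. The sketch below is illustrative only: it shows the name-plumbing for a trivial scalar copy, not DeepCopyOp's actual C code, and in real implementations sub['fail'] is spliced into an error-handling branch.

```python
# Sketch: how c_code() assembles a C fragment from the generated names.
# The fragment itself is made up for illustration (a scalar copy).
def c_code(node, name, inames, onames, sub):
    (iname,), (oname,) = inames, onames
    # sub['fail'] would normally be emitted inside an error check
    return f"{oname} = {iname};  /* scalar copy */"

# The compiler supplies names like these at build time:
print(c_code(None, "node0", ["V1"], ["V3"], {"fail": "/* goto fail */"}))
# → V3 = V1;  /* scalar copy */
```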
- c_code_cache_version()[source]#
Return a tuple of integers indicating the version of this Op.
An empty tuple indicates an “unversioned” Op that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.
See also
c_code_cache_version_apply
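A typical implementation simply returns a version tuple and bumps it whenever the generated C code changes (a sketch; the class name and version numbers here are illustrative):

```python
# Sketch: versioning generated C code so the compilation cache can reuse it.
# Bump the tuple whenever c_code() output changes; return () to disable caching.
class MyOp:
    def c_code_cache_version(self):
        return (1, 0)

print(MyOp().c_code_cache_version())   # → (1, 0)
```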
- make_node(x)[source]#
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
- Returns:
node – The constructed Apply node.
- Return type:
Apply
- perform(node, args, outs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
args – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
outs – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The outs list might contain data. If an element of outs is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could have been allocated by another Op's perform method. An Op is free to reuse outs as it sees fit, or to discard it and allocate new memory.
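The output-storage protocol above can be sketched without PyTensor at all: perform writes its result into the single-element sub-lists rather than returning a value. DoubleOp and the plain-list storage below are illustrative assumptions, not real PyTensor API.

```python
# Sketch of the perform() storage protocol (illustrative; not real PyTensor).
class DoubleOp:
    def perform(self, node, inputs, output_storage):
        (x,) = inputs                  # numeric values for node.inputs
        out = output_storage[0]        # one mutable single-element list per output
        out[0] = 2 * x                 # store the result; nothing is returned

op = DoubleOp()
storage = [[None]]                     # pre-allocated slot, possibly holding old data
op.perform(None, (21,), storage)
print(storage[0][0])                   # → 42
```

Note that an existing value in `storage[0][0]` may be reused or discarded, matching the notes above.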
- class pytensor.compile.ops.FromFunctionOp(fn, itypes, otypes, infer_shape)[source]#
Build a basic PyTensor Op around a function.
Since the resulting Op is very basic and lacks most optional functionality, some optimizations may not apply. If you want to help, you can supply an infer_shape function that computes the shapes of the outputs given the shapes of the inputs.
The gradient is also undefined in the resulting Op, and PyTensor will raise an error if you attempt to take the gradient of a graph containing this Op.
- perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
outputs – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The outputs list might contain data. If an element of outputs is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could have been allocated by another Op's perform method. An Op is free to reuse outputs as it sees fit, or to discard it and allocate new memory.
- class pytensor.compile.ops.TypeCastingOp[source]#
Op that performs a graph-level type cast operation, but has no effect computation-wise (identity function).
- c_code(node, nodename, inp, out, sub)[source]#
Return the C implementation of an Op.
Returns C code that does the computation associated with this Op, given names for the inputs and outputs.
- Parameters:
node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.
nodename (str) – A name that is automatically assigned and guaranteed to be unique.
inp (list of strings) – There is a string for each input of the function, and each string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.
out (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
- c_code_cache_version()[source]#
Return a tuple of integers indicating the version of this Op.
An empty tuple indicates an “unversioned” Op that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.
See also
c_code_cache_version_apply
- perform(node, inputs, outputs_storage)[source]#
Calculate the function on the inputs and put the variables in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
outputs_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The outputs_storage list might contain data. If an element of outputs_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could have been allocated by another Op's perform method. An Op is free to reuse outputs_storage as it sees fit, or to discard it and allocate new memory.
- class pytensor.compile.ops.ViewOp[source]#
Returns an inplace view of the input. Used internally by PyTensor.
- make_node(x)[source]#
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
- Returns:
node – The constructed Apply node.
- Return type:
Apply
- pullback(args, outputs, g_outs)[source]#
Construct a graph for the vector-Jacobian product (pullback).
Given a function \(f\) implemented by this Op with inputs \(x\) and outputs \(y = f(x)\), the pullback computes \(\bar{x} = \bar{y}^T J\), where \(J\) is the Jacobian \(\frac{\partial f}{\partial x}\) and \(\bar{y}\) are the cotangent vectors (upstream gradients).
This is the core method for reverse-mode automatic differentiation.
If the output is not differentiable with respect to an input, return a variable of type DisconnectedType for that input. If the gradient is not implemented for some input, return a variable of type NullType (see pytensor.gradient.grad_not_implemented() and pytensor.gradient.grad_undefined()).
- Parameters:
args – The symbolic inputs of the node.
outputs – The symbolic outputs of the node.
g_outs – The cotangent vectors \(\bar{y}\), one per output.
- Returns:
input_cotangents – The cotangent vectors w.r.t. each input. One Variable per input.
- Return type:
list of Variable
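The pullback contract can be illustrated numerically (a plain-Python sketch, not the symbolic graph PyTensor actually builds): for \(f(x) = x^2\) the Jacobian at \(x\) is \(2x\), so the input cotangent is \(\bar{y} \cdot 2x\).

```python
# Numeric sketch of a pullback (vector-Jacobian product) for f(x) = x**2.
# PyTensor's real pullback returns symbolic graph variables, not numbers.
def f(x):
    return x * x

def pullback(x, y_bar):
    # Jacobian of f at x is 2*x, so x_bar = y_bar * (2*x)
    return y_bar * 2 * x

x = 3.0
y_bar = 1.0                  # upstream gradient (cotangent of the output)
print(pullback(x, y_bar))    # → 6.0
```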
- pytensor.compile.ops.register_deep_copy_op_c_code(typ, code, version=())[source]#
Tell DeepCopyOp how to generate C code for a PyTensor Type.
- Parameters:
typ (PyTensor type) – It must be the PyTensor class itself and not an instance of the class.
code (C code) – Deep copies the PyTensor type 'typ'. Use %(iname)s and %(oname)s for the input and output C variable names respectively.
version – A number indicating the version of the code, used for caching.
- pytensor.compile.ops.register_view_op_c_code(type, code, version=())[source]#
Tell ViewOp how to generate C code for a PyTensor Type.
- Parameters:
type (PyTensor type) – It must be the PyTensor class itself and not an instance of the class.
code (C code) – Returns a view for the PyTensor type 'type'. Use %(iname)s and %(oname)s for the input and output C variable names respectively.
version – A number indicating the version of the code, used for caching.
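The %(iname)s and %(oname)s placeholders are ordinary Python %-style mapping keys; at compile time the linker substitutes the generated C variable names into the registered template. A minimal sketch of that substitution (the template and names below are made up for illustration):

```python
# Sketch: how %(iname)s / %(oname)s in a registered C-code template get
# substituted. "view_of" and the V3/V5 names are illustrative, not real.
template = "%(oname)s = view_of(%(iname)s);"
names = {"iname": "V3", "oname": "V5"}   # names the linker would generate
print(template % names)                   # → V5 = view_of(V3);
```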
- pytensor.compile.ops.wrap_py(itypes, otypes, infer_shape=None)[source]#
Decorator that converts a function into a basic PyTensor op that will call the supplied function as its implementation.
It takes an optional infer_shape parameter that should be a callable with this signature:
def infer_shape(fgraph, node, input_shapes):
    return output_shapes
Here input_shapes and output_shapes are lists of tuples that represent the shapes of the corresponding inputs/outputs.
This should not be used when performance is a concern, since the very basic nature of the resulting Op may interfere with certain graph optimizations.
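For an elementwise-style Op, a plausible infer_shape simply passes the input shape through unchanged (a sketch, not taken from the PyTensor sources; fgraph and node go unused here):

```python
# Sketch: infer_shape for an Op whose single output has the same
# shape as its single input. fgraph and node are unused.
def infer_shape(fgraph, node, input_shapes):
    return [input_shapes[0]]   # one output, same shape as the first input

print(infer_shape(None, None, [(3, 4)]))   # → [(3, 4)]
```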
Examples
@wrap_py(itypes=[pytensor.tensor.fmatrix, pytensor.tensor.fmatrix],
         otypes=[pytensor.tensor.fmatrix])
def numpy_dot(a, b):
    return numpy.dot(a, b)