function - defines pytensor.function#
Guide#
This module provides function(), commonly accessed as pytensor.function,
the interface for compiling graphs into callable objects.
You’ve already seen example usage in the basic tutorial… something like this:
>>> import pytensor
>>> x = pytensor.tensor.dscalar()
>>> f = pytensor.function([x], 2*x)
>>> f(4)
array(8.0)
The idea here is that we’ve compiled the symbolic graph (2*x) into a function that can be called on a number and will do some computations.
The behaviour of function can be controlled in several ways, such as
In, Out, mode, updates, and givens. These are covered
in the tutorial examples and tutorial on modes.
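For instance, a minimal sketch of the updates mechanism, adapted from the tutorial's accumulator example (the names state and inc are illustrative):
>>> import pytensor
>>> import pytensor.tensor as pt
>>> state = pytensor.shared(0, name="state")   # implicit input, updated after each call
>>> inc = pt.iscalar("inc")
>>> accumulator = pytensor.function([inc], state, updates=[(state, state + inc)])
>>> accumulator(2)                             # returns the value of ``state`` before the update
array(0)
>>> state.get_value()
array(2)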
Reference#
- class pytensor.compile.function.In[source]#
A class for attaching information to function inputs.
- value[source]#
The default value to use at call-time (this can also be a Container, in which case the function will find a value at call-time).
- mutable[source]#
True means the compiled function is allowed to modify this argument. False means it is not allowed.
- borrow[source]#
True indicates that a reference to internal storage may be returned, and that the caller is aware that subsequent function evaluations might overwrite this memory.
- strict[source]#
If False, a function argument may be copied or cast to match the type required by the parameter variable. If True, a function argument must exactly match the type required by variable.
- allow_downcast[source]#
True indicates that the value you pass for this input can be silently downcasted to fit the right type, which may lose precision. (Only applies when strict is False.)
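A brief sketch of In with a default value, mirroring the tutorial's example (importing In from the top-level pytensor namespace is assumed to work as in the tutorial):
>>> import pytensor
>>> import pytensor.tensor as pt
>>> from pytensor import In
>>> x, y = pt.dscalars("x", "y")
>>> f = pytensor.function([x, In(y, value=1)], x + y)
>>> f(2)       # ``y`` falls back to its default value of 1
array(3.0)
>>> f(2, 4)
array(6.0)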
- class pytensor.compile.function.Out[source]#
A class for attaching information to function outputs.
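Similarly, a brief sketch of Out in use; the borrow flag shown here follows the conventional Out(variable, borrow=True) usage and is not documented on this page, so treat it as an assumption:
>>> import pytensor
>>> import pytensor.tensor as pt
>>> from pytensor import Out
>>> x = pt.dmatrix("x")
>>> # borrow=True allows the function to hand back a reference to its internal output storage
>>> f_borrow = pytensor.function([x], Out(pt.exp(x), borrow=True))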
- pytensor.compile.function.function(inputs, outputs, mode=None, updates=None, givens=None, no_default_updates=False, accept_inplace=False, name=None, rebuild_strict=True, allow_input_downcast=None, profile=None, on_unused_input='raise')[source]#
Return a callable object that will calculate outputs from inputs.
- Parameters:
params (list of either Variable or In instances, but not shared variables) – the returned Function instance will have parameters for these variables.
outputs (list of Variables or Out instances) – expressions to compute.
mode (None, string or Mode instance) – compilation mode.
updates (iterable over pairs (shared_variable, new_expression); list, tuple or dict) – expressions for new SharedVariable values.
givens (iterable over pairs (Var1, Var2) of Variables; list, tuple or dict. The Var1 and Var2 in each pair must have the same Type.) – specific substitutions to make in the computation graph (Var2 replaces Var1).
no_default_updates (either bool or list of Variables) – if True, do not perform any automatic update on Variables. If False (default), perform them all. Else, perform automatic updates on all Variables that are neither in updates nor in no_default_updates.
name – an optional name for this function. The profile mode will print the time spent in this function.
rebuild_strict – True (default) is the safer and better tested setting, in which case givens must substitute new variables with the same Type as the variables they replace. False is a you-better-know-what-you-are-doing setting that permits givens to replace variables with new variables of any Type. The consequence of changing a Type is that all results depending on that variable may have a different Type too (the graph is rebuilt from inputs to outputs). If one of the new types does not make sense for one of the Ops in the graph, an Exception will be raised.
allow_input_downcast (Boolean or None) – True means that the values passed as inputs when calling the function can be silently downcasted to fit the dtype of the corresponding Variable, which may lose precision. False means that they will only be cast to a more general, or more precise, type. None (default) is almost like False, but allows downcasting of Python float scalars to floatX.
profile (None, True, or ProfileStats instance) – accumulate profiling information into a given ProfileStats instance. If the argument is True then a new ProfileStats instance will be used. This profiling object will be available via self.profile.
on_unused_input – what to do if a variable in the 'inputs' list is not used in the graph. Possible values are 'raise', 'warn', and 'ignore'.
- Return type:
Function instance
- Returns:
a callable object that will compute the outputs (given the inputs) and update the implicit function arguments according to the updates.
Inputs can be given as variables or In instances. In instances also have a variable, but they attach some extra information about how call-time arguments corresponding to that variable should be used. Similarly, Out instances can attach information about how output variables should be returned.
The default mode is typically 'FAST_RUN', but this can be changed in pytensor.config. The mode argument controls the sort of rewrites that will be applied to the graph, and the way the rewritten graph will be evaluated.
After each function evaluation, the updates mechanism can replace the value of any (implicit) SharedVariable inputs with new values computed from the expressions in the updates list. An exception will be raised if you give two update expressions for the same SharedVariable input (that doesn't make sense).
If a SharedVariable is not given an update expression, but has a Variable.default_update member containing an expression, this expression will be used as the update expression for this variable. Passing no_default_updates=True to function disables this behavior entirely; passing no_default_updates=[sharedvar1, sharedvar2] disables it only for the mentioned variables.
Regarding givens: be careful to make sure that these substitutions are independent, because behaviour when Var1 of one pair appears in the graph leading to Var2 in another pair is undefined (e.g. with {a: x, b: a + 1}). Replacements specified with givens are different from replacements that occur during normal rewriting, in that Var2 is not expected to be equivalent to Var1.
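For illustration, here is a sketch adapted from the tutorial's givens example (the names state, inc, and foo are illustrative):
>>> import pytensor
>>> import pytensor.tensor as pt
>>> state = pytensor.shared(0, name="state")
>>> inc = pt.iscalar("inc")
>>> fn_of_state = state * 2 + inc
>>> # ``foo`` must have the same Type as the variable it replaces
>>> foo = pt.scalar(dtype=state.dtype)
>>> skip_shared = pytensor.function([inc, foo], fn_of_state, givens=[(state, foo)])
>>> skip_shared(1, 3)   # ``foo`` (here 3) is used in place of ``state``
array(7)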
- pytensor.compile.function.function_dump(filename, inputs, outputs=None, mode=None, updates=None, givens=None, no_default_updates=False, accept_inplace=False, name=None, rebuild_strict=True, allow_input_downcast=None, profile=None, on_unused_input=None, extra_tag_to_remove=None, trust_input=False)[source]#
This is useful for making a reproducible test case for problems encountered during PyTensor compilation.
Example: replace pytensor.function(...) by pytensor.function_dump('filename.pkl', ...).
If you see this, you were probably asked to use this function to help debug a particular case during the compilation of a PyTensor function. function_dump allows you to easily reproduce your compilation without generating any code. It pickles all the objects and parameters needed to reproduce a call to pytensor.function(). This includes shared variables and their values. If you do not want that, you can replace the shared variables' values with zeros by calling set_value(...) on them before calling function_dump.
To load such a dump and do the compilation:
>>> import pickle
>>> import pytensor
>>> d = pickle.load(open("func_dump.bin", "rb"))
>>> f = pytensor.function(**d)
Note: The parameter extra_tag_to_remove is passed to the StripPickler used. To pickle a graph made by Blocks, it must be: ['annotations', 'replacement_of', 'aggregation_scheme', 'roles'].
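For example, a dump matching the load example above could be produced like this (a sketch; the filename and graph are arbitrary):
>>> import pytensor
>>> import pytensor.tensor as pt
>>> x = pt.dscalar("x")
>>> pytensor.function_dump("func_dump.bin", [x], 2 * x)   # writes the pickle instead of compiling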
- class pytensor.compile.function.types.Function(vm, input_storage, output_storage, indices, outputs, defaults, unpack_single, return_none, output_keys, maker, trust_input=False, name=None)[source]#
A class that wraps the execution of a VM, making it easier to use as a "function". Function is the callable object that does computation. It holds the storage for inputs and outputs, and performs the packing and unpacking of inputs and return values. It implements square-bracket indexing so that you can look up the value of a symbolic node.
Functions are copyable via Function.copy and the copy.copy interface. When a function is copied, this instance is duplicated. Contrast with self.maker (an instance of FunctionMaker), which is shared between copies. The meaning of copying a function is that the containers and their current values will all be duplicated. This requires that mutable inputs be copied, whereas immutable inputs may be shared between copies.
A Function instance is hashable on the basis of its memory address (its id). A Function instance is only equal to itself. A Function instance may be serialized using the pickle or cPickle modules. This will save all default inputs, the graph, and WRITEME to the pickle file.
A Function instance has a Function.trust_input field that defaults to False. When True, the Function will skip all checks on the inputs.
- finder[source]#
Dictionary mapping several kinds of things to containers.
We set an entry in finder for:
- the index of the input
- the variable instance the input is based on
- the name of the input
All entries map to the container, or to DUPLICATE if an ambiguity is detected.
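As a rough, hedged sketch of this lookup (assuming the square-bracket access described above accepts an input name; the exact return value depends on the input's container):
>>> import pytensor
>>> import pytensor.tensor as pt
>>> from pytensor import In
>>> x, y = pt.dscalars("x", "y")
>>> f = pytensor.function([x, In(y, value=1, name="y")], x + y)
>>> current_default = f["y"]   # look up the stored value by input name
>>> f["y"] = 3.0               # overwrite the stored default for ``y``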
- __call__(*args, output_subset=None, **kwargs)[source]#
Evaluates the function on the given arguments.
- Parameters:
args (list) – List of inputs to the function. All inputs are required, even when some of them are not necessary to calculate the requested subset of outputs.
kwargs (dict) –
The function inputs can be passed as keyword arguments. For this, use the name of the input or the input instance as the key.
The keyword argument output_subset is a list of either indices of the function's outputs or keys belonging to the output_keys dict, and represents the outputs that are requested to be calculated. Regardless of the presence of output_subset, the updates are always calculated and processed. To disable the updates, you should use the copy method with delete_updates=True.
- Returns:
List of outputs on the indices/keys from output_subset, or all of them if output_subset is not passed.
- Return type:
list
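A short sketch of these calling conventions (the two-output function here is illustrative; the printed values assume float64 scalar inputs):
>>> import pytensor
>>> import pytensor.tensor as pt
>>> x, y = pt.dscalars("x", "y")
>>> f = pytensor.function([x, y], [x + y, x * y])
>>> f(2, 3)                      # positional arguments
[array(5.0), array(6.0)]
>>> f(x=2, y=3)                  # keyword arguments, by input name
[array(5.0), array(6.0)]
>>> f(2, 3, output_subset=[1])   # compute only the second output
[array(6.0)]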
- copy(share_memory=False, swap=None, delete_updates=False, name=None, profile=None)[source]#
Copy this function. The copied function will have a separate maker and fgraph from the original function. The user can choose whether the two functions share intermediate storage via the share_memory argument.
- Parameters:
share_memory (boolean) – When True, the two functions share intermediate storage (i.e. all storage except input and output storage). Otherwise the two functions only share partial storage and the same maker. If the two functions share memory and allow_gc=False, this will increase execution speed and save memory.
swap (dict) – Dictionary that maps old SharedVariables to new SharedVariables. Default is None. NOTE: The shared variable swap is only done in the new returned function, not in the user graph.
delete_updates (boolean) – If True, the copied function will not have updates.
name (string) – If provided, will be the name of the new Function. Otherwise, it will be the old name + " copy".
profile (bool | str | ProfileStats | None) – same as the pytensor.function profile parameter.
- Returns:
Copied pytensor.Function
- Return type:
pytensor.Function
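A brief sketch of copying, adapted from the tutorial's copy example (the names acc, new_state, and frozen are illustrative):
>>> import pytensor
>>> import pytensor.tensor as pt
>>> state = pytensor.shared(0, name="state")
>>> inc = pt.iscalar("inc")
>>> acc = pytensor.function([inc], state, updates=[(state, state + inc)])
>>> new_state = pytensor.shared(0, name="new_state")
>>> acc_b = acc.copy(swap={state: new_state})   # same graph, bound to a different shared variable
>>> frozen = acc.copy(delete_updates=True)      # same graph, but ``state`` is never updated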