diff --git a/doc/core_development_guide.rst b/doc/core_development_guide.rst
index 82c15ddc8f..b942813018 100644
--- a/doc/core_development_guide.rst
+++ b/doc/core_development_guide.rst
@@ -26,4 +26,4 @@ some of them might be outdated though:
 
 * :ref:`unittest` -- Tutorial on how to use unittest in testing PyTensor.
 
-* :ref:`sparse` -- Description of the ``sparse`` type in PyTensor.
+* :ref:`libdoc_sparse` -- Description of the ``sparse`` type in PyTensor.
diff --git a/doc/extending/creating_a_c_op.rst b/doc/extending/creating_a_c_op.rst
index 3c44f5a33a..12105faa8d 100644
--- a/doc/extending/creating_a_c_op.rst
+++ b/doc/extending/creating_a_c_op.rst
@@ -923,7 +923,7 @@ pre-defined macros. These section tags have no macros: ``init_code``,
   discussed below.
 
 * ``APPLY_SPECIFIC(str)`` which will automatically append a name
-  unique to the :ref:`Apply` node that applies the `Op` at the end
+  unique to the :ref:`apply` node that applies the `Op` at the end
   of the provided ``str``. The use of this macro is discussed further
   below.
 
@@ -994,7 +994,7 @@ Apply node in their own names to avoid conflicts between the different
 versions of the apply-specific code. The code that wasn't apply-specific
 was simply defined in the ``c_support_code`` method.
 
-To make indentifiers that include the :ref:`Apply` node name use the
+To make identifiers that include the :ref:`apply` node name use the
 ``APPLY_SPECIFIC(str)`` macro. In the above example, this macro is used
 when defining the functions ``vector_elemwise_mult`` and
 ``vector_times_vector`` as well as when calling function
diff --git a/doc/extending/creating_an_op.rst b/doc/extending/creating_an_op.rst
index 746342ad4a..e42241b92e 100644
--- a/doc/extending/creating_an_op.rst
+++ b/doc/extending/creating_an_op.rst
@@ -7,7 +7,7 @@ Creating a new :class:`Op`: Python implementation
 So suppose you have looked through the library documentation and you don't see
 a function that does what you want.
 
-If you can implement something in terms of an existing :ref:`Op`, you should do that.
+If you can implement something in terms of an existing :ref:`op`, you should do that.
 Odds are your function that uses existing PyTensor expressions is short,
 has no bugs, and potentially profits from rewrites that have already been
 implemented.
diff --git a/doc/extending/inplace.rst b/doc/extending/inplace.rst
index 8b3a5477ae..74ffa58119 100644
--- a/doc/extending/inplace.rst
+++ b/doc/extending/inplace.rst
@@ -200,7 +200,7 @@ input(s)'s memory). From there, go to the previous section.
    certainly lead to erroneous computations.
 
    You can often identify an incorrect `Op.view_map` or :attr:`Op.destroy_map`
-   by using :ref:`DebugMode`.
+   by using :ref:`DebugMode <debugmode>`.
 
 .. note::
     Consider using :class:`DebugMode` when developing
diff --git a/doc/extending/other_ops.rst b/doc/extending/other_ops.rst
index fd065fef36..fbad2ba48e 100644
--- a/doc/extending/other_ops.rst
+++ b/doc/extending/other_ops.rst
@@ -197,7 +197,7 @@ Want C speed without writing C code for your new Op? You can use Numba
 to generate the C code for you! Here is an `example Op
 `_ doing that.
 
-.. _alternate_PyTensor_types:
+.. _alternate_pytensor_types:
 
 Alternate PyTensor Types
 ========================
diff --git a/doc/library/tensor/random/index.rst b/doc/library/tensor/random/index.rst
index d1f87af77b..a086a19d1f 100644
--- a/doc/library/tensor/random/index.rst
+++ b/doc/library/tensor/random/index.rst
@@ -83,7 +83,7 @@ Low-level objects
 .. automodule:: pytensor.tensor.random.op
    :members: RandomVariable, default_rng
 
-..automodule:: pytensor.tensor.random.type
+.. automodule:: pytensor.tensor.random.type
    :members: RandomType, RandomGeneratorType, random_generator_type
 
 .. automodule:: pytensor.tensor.random.var
diff --git a/doc/tutorial/examples.rst b/doc/tutorial/examples.rst
index e74d604f63..859d57a3ae 100644
--- a/doc/tutorial/examples.rst
+++ b/doc/tutorial/examples.rst
@@ -347,15 +347,19 @@ afterwards compile this expression to get functions,
 using pseudo-random numbers is not as straightforward as it is in
 NumPy, though also not too complicated.
 
-The way to think about putting randomness into PyTensor's computations is
-to put random variables in your graph. PyTensor will allocate a NumPy
-`RandomStream` object (a random number generator) for each such
-variable, and draw from it as necessary. We will call this sort of
-sequence of random numbers a *random stream*. *Random streams* are at
-their core shared variables, so the observations on shared variables
-hold here as well. PyTensor's random objects are defined and implemented in
-:ref:`RandomStream` and, at a lower level,
-in :ref:`RandomVariable`.
+The general user-facing API is documented in :ref:`RandomStream`.
 
 For a more technical explanation of how PyTensor implements random
 variables see :ref:`prng`.
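+
+As a minimal sketch of this API (assuming the ``RandomStream``
+constructor and ``uniform`` method covered in the reference above):
+
+.. code-block:: python
+
+    from pytensor import function
+    from pytensor.tensor.random.utils import RandomStream
+
+    srng = RandomStream(seed=234)            # seeded stream of random variables
+    rv_u = srng.uniform(0, 1, size=(2, 2))   # symbolic 2x2 matrix of uniform draws
+    f = function([], rv_u)                   # each call to f() returns fresh draws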