Commit 441c64f (1 parent: 10823a7)

Fix build warnings in docs (#1380)

* Remove unused and outdated convert.rst from the docs. Fix #1378
* Fix extra numbered footnote reference in ophys tutorial
* Change code block highlighting from c to bash to avoid build warning
* Fix duplicate target "here" warning in 3_spec_api.rst
* Fix missing section label for cross-referencing between the extension tutorial and gallery
* Updated Changelog

File tree: 6 files changed (+25, -56 lines)

CHANGELOG.md (1 addition, 1 deletion)

@@ -9,7 +9,7 @@
 - Enforce electrode ID uniqueness during insertion into table. @CodyCBakerPhD (#1344)
 - Fix integration tests with invalid test data that will be caught by future hdmf validator version.
   @dsleiter, @rly (#1366, #1376)
-
+- Fix build warnings in docs @oruebel (#1380)

 ## PyNWB 1.5.1 (May 24, 2021)

docs/gallery/domain/ophys.py (1 addition, 1 deletion)

@@ -119,7 +119,7 @@
 # Storing fluorescence measurements
 # ---------------------------------
 #
-# Now that ROIs are stored, you can store fluorescence (or dF/F [#]_) data for these regions of interest.
+# Now that ROIs are stored, you can store fluorescence (or dF/F) data for these regions of interest.
 # This type of data is stored using the :py:class:`~pynwb.ophys.RoiResponseSeries` class. You will not need
 # to instantiate this class directly to create objects of this type, but it is worth noting that this is the
 # class you will work with after you read data back in.

docs/source/extensions_tutorial/2_create_extension_spec_walkthrough.rst (3 additions, 3 deletions)

@@ -11,7 +11,7 @@ generates a repository with the appropriate directory structure.
 After you finish the instructions `here <https://github.com/nwb-extensions/ndx-template#getting-started>`_,
 you should have a directory structure that looks like this

-.. code-block:: c
+.. code-block:: bash

 ├── LICENSE.txt
 ├── MANIFEST.in
@@ -51,8 +51,8 @@ you should have a directory structure that looks like this
 └── create_extension_spec.py

 At its core, an NWB extension consists of YAML text files, such as those generated in the `spec`
-folder. While you can write these YAML extension files by hand, PyNWB provides a convenient API
-via the :py:mod:`~pynwb.spec` module for creating extensions.
+folder. While you can write these YAML extension files by hand, PyNWB provides a convenient API
+via the :py:mod:`~pynwb.spec` module for creating extensions.

 Open ``src/spec/create_extension_spec.py``. You will be
 modifying this script to create your own NWB extension. Let's first walk through each piece.

docs/source/extensions_tutorial/3_spec_api.rst (18 additions, 15 deletions)

@@ -34,8 +34,8 @@ within it Datasets, Attributes, Links, and/or other Groups. Groups are specified

 - ``neurodata_type_def`` declares the name of the neurodata type.
 - ``neurodata_type_inc`` indicates what data type you are extending (Groups must extend Groups, and Datasets must extend Datasets).
-- To define a new neurodata type that does not extend an existing type, use
-  ``neurodata_type_inc=NWBContainer`` for a group or ``neurodata_type_inc=NWBData`` for a dataset.
+- To define a new neurodata type that does not extend an existing type, use
+  ``neurodata_type_inc=NWBContainer`` for a group or ``neurodata_type_inc=NWBData`` for a dataset.
   ``NWBContainer`` and ``NWBData`` are base types for NWB.
 - To use a type that has already been defined, use ``neurodata_type_inc`` and not ``neurodata_type_def``.
 - You can define a group that is not a neurodata type by omitting both ``neurodata_type_def`` and ``neurodata_type_inc``.
@@ -79,17 +79,20 @@ All larger blocks of numeric or text data should be stored in Datasets. Specifying
 ``neurodata_type_def``, ``neurodata_type_inc``, ``doc``, ``name``, ``default_name``, ``linkable``, ``quantity``, and
 ``attributes`` all work the same as they do in :py:class:`~pynwb.spec.NWBGroupSpec`, described in the previous section.

-``dtype`` defines the type of the data, which can be a basic type, compound type, or reference type.
-See a list of options `here <https://schema-language.readthedocs.io/en/latest/description.html#dtype>`_.
-Basic types can be defined as string objects and more complex types via :py:class:`~pynwb.spec.NWBDtypeSpec` and `RefSpec <https://hdmf.readthedocs.io/en/latest/hdmf.spec.spec.html#hdmf.spec.spec.RefSpec>`_.
+``dtype`` defines the type of the data, which can be a basic type, compound type, or reference type.
+See a list of `dtype options <https://schema-language.readthedocs.io/en/latest/description.html#dtype>`_
+as part of the specification language docs. Basic types can be defined as string objects and more complex
+types via :py:class:`~pynwb.spec.NWBDtypeSpec` and
+`RefSpec <https://hdmf.readthedocs.io/en/latest/hdmf.spec.spec.html#hdmf.spec.spec.RefSpec>`_.


-``shape`` is a specification defining the allowable shapes for the dataset. See the shape specification
-`here <https://schema-language.readthedocs.io/en/latest/specification_language_description.html#shape>`_. ``None`` is
-mapped to ``null``. Is no shape is provided, it is assumed that the dataset is only a single element.
+``shape`` is a specification defining the allowable shapes for the dataset. See the
+`shape specification <https://schema-language.readthedocs.io/en/latest/specification_language_description.html#shape>`_
+as part of the specification language docs. ``None`` is mapped to ``null``. If no shape is provided, it is
+assumed that the dataset is only a single element.

-If the dataset is a single element (scalar) that represents meta-data, consider using an Attribute (see
-below) to store the data more efficiently instead. However, note that a Dataset can have Attributes,
+If the dataset is a single element (scalar) that represents meta-data, consider using an Attribute (see
+below) to store the data more efficiently instead. However, note that a Dataset can have Attributes,
 whereas an Attribute cannot have Attributes of its own.
 ``dims`` provides labels for each dimension of ``shape``.

@@ -139,16 +142,16 @@ defined in the ``attributes`` field of a :py:class:`~pynwb.spec.NWBGroupSpec` or
 neurodata type, i.e., the ``neurodata_type_def`` and ``neurodata_type_inc`` keys are not allowed. The only way to match an object with a spec is through the name of the attribute so ``name`` is
 required. You cannot have multiple attributes on a single group/dataset that correspond to the same
 :py:class:`~pynwb.spec.NWBAttributeSpec`, since these would have to have the same name. Therefore, instead of
-specifying number of ``quantity``, you have a ``required`` field which takes a boolean value. Another
+specifying number of ``quantity``, you have a ``required`` field which takes a boolean value. Another
 key difference between datasets and attributes is that attributes cannot have attributes of their own.

 .. tip::
     Dataset or Attribute? It is often possible to store data as either a Dataset or an Attribute. Our best advice is
-    to keep Attributes small. In HDF5 the typical size limit for attributes is 64Kbytes. If an attribute is going to
-    store more than 64Kbyte, then make it a Dataset. Attributes are also more efficient for storing very
-    small data, such as scalars. However, attributes cannot have attributes of their own, and in HDF5,
+    to keep Attributes small. In HDF5 the typical size limit for attributes is 64Kbytes. If an attribute is going to
+    store more than 64Kbyte, then make it a Dataset. Attributes are also more efficient for storing very
+    small data, such as scalars. However, attributes cannot have attributes of their own, and in HDF5,
     I/O filters, such as compression and chunking, cannot apply to attributes.
-
+

 Link Specifications
 ^^^^^^^^^^^^^^^^^^^

docs/source/extensions_tutorial/extensions_tutorial_home.rst (2 additions, 0 deletions)

@@ -1,3 +1,5 @@
+.. _extending-nwb:
+
 Extending NWB
 =============
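The ``.. _extending-nwb:`` label added above makes this page targetable with Sphinx's ``:ref:`` role, which is how the gallery can cross-reference the extension tutorial. An illustrative cross-reference from another page (not taken from this commit) would look like:

```rst
For details on creating extensions, see the :ref:`extending-nwb` tutorial.
```

Without the label, Sphinx emits a missing-reference warning during the docs build, which is the warning this hunk fixes.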

docs/source/tutorial_source/convert.rst (0 additions, 36 deletions)

This file was deleted.
