diff --git a/README.md b/README.md
index 5e704f5..c999b95 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,9 @@ S3FS is a [PyFilesystem](https://www.pyfilesystem.org/) interface to
Amazon S3 cloud storage.
As a PyFilesystem concrete class, [S3FS](http://fs-s3fs.readthedocs.io/en/latest/) allows you to work with S3 in the
-same way as any other supported filesystem.
+same way as any other supported filesystem. Note that, as S3 is not strictly
+speaking a filesystem, there are some limitations which are discussed in detail
+in the [documentation](https://fs-s3fs.readthedocs.io/en/latest/#limitations).
## Installing
diff --git a/README.rst b/README.rst
index 0624209..fcba978 100644
--- a/README.rst
+++ b/README.rst
@@ -6,7 +6,10 @@ Amazon S3 cloud storage.
As a PyFilesystem concrete class,
`S3FS <http://fs-s3fs.readthedocs.io/en/latest/>`__ allows you to work
-with S3 in the same way as any other supported filesystem.
+with S3 in the same way as any other supported filesystem. Note that, as
+S3 is not strictly speaking a filesystem, there are some limitations
+which are discussed in detail in the
+`documentation <https://fs-s3fs.readthedocs.io/en/latest/#limitations>`__.
Installing
----------
@@ -15,7 +18,7 @@ You can install S3FS from pip as follows:
::
- pip install fs-s3fs
+ pip install fs-s3fs
Opening a S3FS
--------------
@@ -24,37 +27,37 @@ Open an S3FS by explicitly using the constructor:
.. code:: python
- from fs_s3fs import S3FS
- s3fs = S3FS('mybucket')
+ from fs_s3fs import S3FS
+ s3fs = S3FS('mybucket')
Or with a FS URL:
.. code:: python
- from fs import open_fs
- s3fs = open_fs('s3://mybucket')
+ from fs import open_fs
+ s3fs = open_fs('s3://mybucket')
Downloading Files
-----------------
To *download* files from an S3 bucket, open a file on the S3 filesystem
for reading, then write the data to a file on the local filesystem.
-Here's an example that copies a file ``example.mov`` from S3 to your HD:
+Here’s an example that copies a file ``example.mov`` from S3 to your HD:
.. code:: python
- from fs.tools import copy_file_data
- with s3fs.open('example.mov', 'rb') as remote_file:
- with open('example.mov', 'wb') as local_file:
- copy_file_data(remote_file, local_file)
+ from fs.tools import copy_file_data
+ with s3fs.open('example.mov', 'rb') as remote_file:
+ with open('example.mov', 'wb') as local_file:
+ copy_file_data(remote_file, local_file)
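For illustration only, the same streaming-copy pattern can be sketched with just the standard library, using in-memory files as stand-ins for the S3 and local files (``shutil.copyfileobj`` plays the role that ``fs.tools.copy_file_data`` plays above):

```python
import io
import shutil

# Stand-ins for the remote (S3) file opened for reading and the
# local file opened for writing in the example above.
remote_file = io.BytesIO(b"movie data")
local_file = io.BytesIO()

# Stream the data across in chunks rather than loading it all at once.
shutil.copyfileobj(remote_file, local_file)
```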
However, it is preferable to use the higher-level functionality in the
-``fs.copy`` module. Here's an example:
+``fs.copy`` module. Here’s an example:
.. code:: python
- from fs.copy import copy_file
- copy_file(s3fs, 'example.mov', './', 'example.mov')
+ from fs.copy import copy_file
+ copy_file(s3fs, 'example.mov', './', 'example.mov')
Uploading Files
---------------
@@ -77,9 +80,9 @@ to a bucket:
.. code:: python
- import fs, fs.mirror
- s3fs = S3FS('example', upload_args={"CacheControl": "max-age=2592000", "ACL": "public-read"})
- fs.mirror.mirror('/path/to/mirror', s3fs)
+ import fs, fs.mirror
+ s3fs = S3FS('example', upload_args={"CacheControl": "max-age=2592000", "ACL": "public-read"})
+ fs.mirror.mirror('/path/to/mirror', s3fs)
see `the Boto3
docs `__
@@ -91,9 +94,9 @@ and can be used in URLs. It is important to URL-Escape the
.. code:: python
- import fs, fs.mirror
- with open fs.open_fs('s3://example?acl=public-read&cache_control=max-age%3D2592000%2Cpublic') as s3fs
- fs.mirror.mirror('/path/to/mirror', s3fs)
+ import fs, fs.mirror
+    with fs.open_fs('s3://example?acl=public-read&cache_control=max-age%3D2592000%2Cpublic') as s3fs:
+ fs.mirror.mirror('/path/to/mirror', s3fs)
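The escaped query value in the URL above can be produced with the standard library's ``urllib.parse.quote``; a minimal sketch (the bucket name ``example`` is taken from the snippet above):

```python
from urllib.parse import quote

# URL-escape the Cache-Control value for use in the FS URL query string:
# '=' becomes %3D and ',' becomes %2C.
cache_control = quote("max-age=2592000,public", safe="")
url = "s3://example?acl=public-read&cache_control=" + cache_control
```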
S3 URLs
-------
@@ -102,7 +105,7 @@ You can get a public URL to a file on a S3 bucket as follows:
.. code:: python
- movie_url = s3fs.geturl('example.mov')
+ movie_url = s3fs.geturl('example.mov')
Documentation
-------------
diff --git a/docs/index.rst b/docs/index.rst
index 9899ac8..bfef353 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -62,12 +62,15 @@ directory exists.
If you create all your files and directories with S3FS, then you can
forget about how things are stored under the hood. Everything will work
as you expect. You *may* run into problems if your data has been
-uploaded without the use of S3FS. For instance, if you create a
+uploaded without the use of S3FS. You may, for instance, create or open a
`"foo/bar"` object without a `"foo/"` object. If this occurs, then S3FS
may give errors about directories not existing, where you would expect
-them to be. The solution is to create an empty object for all
+them to be. One solution is to create an empty object for all
directories and subdirectories. Fortunately most tools will do this for
you, and it is probably only required if you upload your files manually.
+Alternatively, you may be able to get away with creating the `S3FS` object
+directly with ``strict=False`` to bypass some consistency checks
+which could fail when empty objects are missing.
Authentication