diff --git a/docs/howto/deploy/onprem.md b/docs/howto/deploy/onprem.md
index f666327838c..07dc816459b 100644
--- a/docs/howto/deploy/onprem.md
+++ b/docs/howto/deploy/onprem.md
@@ -227,7 +227,7 @@ blockstore:
 
 - Using a local adapter on a shared location is relatively new and not battle-tested yet
 - lakeFS doesn't control the way a shared location is managed across machines
-- Import works only for folders
+- When using lakectl or the lakeFS UI, you can currently import only directories. If you need to import a single file, use the [HTTP API](https://docs.lakefs.io/reference/api.html#/import/importStart) or API clients with `type=object` in the request body and `destination` set to the desired object path.
 - Garbage collector (for committed and uncommitted) and lakeFS Hadoop FileSystem currently unsupported
 
 {% include_relative includes/setup.md %}
diff --git a/docs/howto/import.md b/docs/howto/import.md
index 5093dc67170..13db0753325 100644
--- a/docs/howto/import.md
+++ b/docs/howto/import.md
@@ -73,7 +73,7 @@ lakectl import \
 1. The import duration depends on the amount of imported objects, but will roughly be a few thousand objects per second.
 1. For security reasons, if you are using lakeFS on top of your local disk (`blockstore.type=local`), you need to enable the import feature explicitly.
    To do so, set the `blockstore.local.import_enabled` to `true` and specify the allowed import paths in `blockstore.local.allowed_external_prefixes` (see [configuration reference]({% link reference/configuration.md %})).
-   Presently, local import is allowed only for directories, and not single objects.
+   When using lakectl or the lakeFS UI, you can currently import only directories from local storage. If you need to import a single file, use the [HTTP API](https://docs.lakefs.io/reference/api.html#/import/importStart) or API clients with `type=object` in the request body and `destination` set to the desired object path.
 1. Making changes to data in the original bucket will not be reflected in lakeFS, and may cause inconsistencies.
 
 ## Examples
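To make the single-file path concrete, a request to the `importStart` endpoint referenced in the new text might look like the sketch below. The request-body fields (`paths`, `type`, `path`, `destination`, `commit`) follow the lakeFS import API; the server URL, repository name, branch, and both paths are placeholder values, and the exact source-path format depends on your blockstore (for `blockstore.type=local`, the source must fall under one of the configured `allowed_external_prefixes`).

```sh
# Sketch: import a single object via the lakeFS import API (importStart).
# Placeholders: lakefs.example.com, my-repo, main, and both paths.
curl -s -u "$LAKEFS_ACCESS_KEY_ID:$LAKEFS_SECRET_ACCESS_KEY" \
  -X POST "https://lakefs.example.com/api/v1/repositories/my-repo/branches/main/import" \
  -H "Content-Type: application/json" \
  -d '{
    "paths": [
      {
        "type": "object",
        "path": "s3://example-bucket/exports/report.csv",
        "destination": "datasets/report.csv"
      }
    ],
    "commit": {
      "message": "Import a single object"
    }
  }'
```

The import runs asynchronously: the response carries an import ID, which can be used with the corresponding status endpoint to poll for completion.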