diff --git a/docs/docs/docker.md b/docs/docs/docker.md
index f2d013a56b93..c2d60c81509a 100644
--- a/docs/docs/docker.md
+++ b/docs/docs/docker.md
@@ -54,7 +54,7 @@ The docker setup command assumes that you are using Postgres as your database pr
:::
:::important
-If you are using a [Server File](#using-the-server-file) then you should [change the command](#command) that runs the `api_serve` service.
+If you are using a [Server File](server-file.md) then you should [change the command](#command) that runs the `api_serve` service.
:::
## Dockerfile
@@ -482,236 +482,8 @@ yarn why @supabase/supabase-js
In this case, it looks like it's ultimately because of our auth provider, `@supabase/supabase-js`.
-## Using the Server File
+## Using the server file
-Redwood v7 introduced a new entry point to Redwood's api server: the server file at `api/src/server.ts`.
-The server file was made with Docker in mind. It allows you to
+Sometimes you will want additional control over the API server, perhaps to add content type parsers or Fastify plugins.
-1. have control over how the api server starts,
-2. customize the server as much as you want, and
-3. minimize the number of dependencies needed to start the api server process (all you need is Node.js!)
-
-Get started by running the setup command:
-
-```
-yarn rw setup server-file
-```
-
-This should give you a new file at `api/src/server.ts`:
-
-```typescript title="api/src/server.ts"
-import { createServer } from '@redwoodjs/api-server'
-
-import { logger } from 'src/lib/logger'
-
-async function main() {
- const server = await createServer({
- logger,
- })
-
- await server.start()
-}
-
-main()
-```
-
-Without the server file, to start the api side, you'd use binaries provided by `@redwoodjs/api-server` such as `yarn rw-server api` (you may also see this as `./node_modules/.bin/rw-server api`).
-
-With the server file, there's no indirection. Just use `node`:
-
-```
-yarn node api/dist/server.js
-```
-
-### Building
-
-You can't run the server file directly with Node.js; it has to be built first:
-
-```
-yarn rw build api
-```
-
-The api serve stage in the Dockerfile pulls from the api build stage, so things are already in the right order there. Similarly, for `yarn rw dev`, the dev server will build and reload the server file for you.
-
-### Command
-
-That means you will swap the `CMD` instruction in the api server stage:
-
-```diff
- ENV NODE_ENV=production
-
-- CMD [ "node_modules/.bin/rw-server", "api" ]
-+ CMD [ "api/dist/server.js" ]
-```
-
-:::important
-If you are using a [Server File](#using-the-server-file) then you must change the command that runs the `api_serve` service to `./api/dist/server.js` as shown above.
-
-Not updating the command will not completely configure the GraphQL Server and not setup [Redwood Realtime](./realtime.md), if you are using that.
-:::
-
-### Configuring the server
-
-There are three ways you may wish to configure the server.
-
-#### Underlying Fastify server
-
-First, you can configure how the underlying Fastify server is instantiated via the`fastifyServerOptions` passed to the `createServer` function:
-
-```ts title="api/src/server.ts"
-const server = await createServer({
- logger,
- // highlight-start
- fastifyServerOptions: {
- // ...
- },
- // highlight-end
-})
-```
-
-For the complete list of options, see [Fastify's documentation](https://fastify.dev/docs/latest/Reference/Server/#factory).
-
-#### Configure the redwood API plugin
-
-Second, you may want to alter the behavior of redwood's API plugin itself. To do this we provide a `configureApiServer(server)` option where you can do anything you wish to the fastify instance before the API plugin is registered. Two examples are given below.
-
-##### Example: Compressing Payloads and Rate Limiting
-
-Let's say that we want to compress payloads and add rate limiting.
-We want to compress payloads only if they're larger than 1KB, preferring deflate to gzip,
-and we want to limit IP addresses to 100 requests in a five minute window.
-We can leverage two Fastify ecosystem plugins, [@fastify/compress](https://github.com/fastify/fastify-compress) and [@fastify/rate-limit](https://github.com/fastify/fastify-rate-limit) respectively.
-
-First, you'll need to install these packages:
-
-```
-yarn workspace api add @fastify/compress @fastify/rate-limit
-```
-
-Then register them with the appropriate config:
-
-```ts title="api/src/server.ts"
-const server = await createServer({
- logger,
- async configureApiServer(server) {
- await server.register(import('@fastify/compress'), {
- global: true,
- threshold: 1024,
- encodings: ['deflate', 'gzip'],
- })
-
- await server.register(import('@fastify/rate-limit'), {
- max: 100,
- timeWindow: '5 minutes',
- })
- },
-})
-```
-
-##### Example: File Uploads
-
-If you try to POST file content to the api server such as images or PDFs, you may see the following error from Fastify:
-
-```json
-{
- "statusCode": 400,
- "code": "FST_ERR_CTP_INVALID_CONTENT_LENGTH",
- "error": "Bad Request",
- "message": "Request body size did not match Content-Length"
-}
-```
-
-This's because Fastify [only supports `application/json` and `text/plain` content types natively](https://www.fastify.io/docs/latest/Reference/ContentTypeParser/).
-While Redwood configures the api server to also accept `application/x-www-form-urlencoded` and `multipart/form-data`, if you want to support other content or MIME types (likes images or PDFs), you'll need to configure them here in the server file.
-
-You can use Fastify's `addContentTypeParser` function to allow uploads of the content types your application needs.
-For example, to support image file uploads you'd tell Fastify to allow `/^image\/.*/` content types:
-
-```ts title="api/src/server.ts"
-const server = await createServer({
- logger,
- configureApiServer(server) {
- server.addContentTypeParser(/^image\/.*/, (_req, payload, done) => {
- payload.on('end', () => {
- done()
- })
- })
- },
-})
-```
-
-The regular expression (`/^image\/.*/`) above allows all image content or MIME types because [they start with "image"](https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Image_types).
-
-Now, when you POST those content types to a function served by the api server, you can access the file content on `event.body`.
-
-#### Additional Fastify plugins
-
-Finally, you can register additional Fastify plugins on the server instance:
-
-```ts title="api/src/server.ts"
-const server = await createServer({
- logger,
-})
-
-// highlight-next-line
-server.register(myFastifyPlugin)
-```
-
-:::note Fastify encapsulation
-
-Fastify is built around the concept of [encapsulation](https://fastify.dev/docs/latest/Reference/Encapsulation/). It is important to note that redwood's API plugin cannot be mutated after it is registered, see [here](https://fastify.dev/docs/latest/Reference/Plugins/#asyncawait). This is why you must use the `configureApiServer` option to do as shown above.
-
-:::
-
-### The `start` method
-
-Since there's a few different ways to configure the host and port the server listens at, the server instance returned by `createServer` has a special `start` method:
-
-```ts title="api/src/server.ts"
-await server.start()
-```
-
-`start` is a thin wrapper around [`listen`](https://fastify.dev/docs/latest/Reference/Server/#listen).
-It takes the same arguments as `listen`, except for host and port. It computes those in the following way, in order of precedence:
-
-1. `--apiHost` or `--apiPort` flags:
-
-```
-yarn node api/dist/server.js --apiHost 0.0.0.0 --apiPort 8913
-```
-
-2. `REDWOOD_API_HOST` or `REDWOOD_API_PORT` env vars:
-
-```
-export REDWOOD_API_HOST='0.0.0.0'
-export REDWOOD_API_PORT='8913'
-yarn node api/dist/server.js
-```
-
-3. `[api].host` and `[api].port` in `redwood.toml`:
-
-```toml title="redwood.toml"
-[api]
- host = '0.0.0.0'
- port = 8913
-```
-
-If you'd rather not have `createServer` parsing `process.argv`, you can disable it via `parseArgv`:
-
-```ts title="api/src/server.ts"
-await createServer({
- parseArgv: false,
-})
-```
-
-And if you'd rather it do none of this, just change `start` to `listen` and specify the host and port inline:
-
-```ts title="api/src/server.ts"
-await server.listen({
- host: '0.0.0.0',
- port: 8913,
-})
-```
-
-If you don't specify a host, `createServer` uses `NODE_ENV` to set it. If `NODE_ENV` is production, it defaults to `'0.0.0.0'` and `'::'` otherwise.
-The Dockerfile sets `NODE_ENV` to production so that things work out of the box.
+Refer to our [Server File](server-file.md) doc for details on how to make use of this.
diff --git a/docs/docs/server-file.md b/docs/docs/server-file.md
new file mode 100644
index 000000000000..203deb66416b
--- /dev/null
+++ b/docs/docs/server-file.md
@@ -0,0 +1,236 @@
+# Server File
+
+Redwood v7 introduced a new entry point to Redwood's api server: the server file at `api/src/server.ts`.
+
+It allows you to:
+
+1. have control over how the api server starts,
+2. customize the server as much as you want, and
+3. minimize the number of dependencies needed to start the api server process (all you need is Node.js!)
+
+Get started by running the setup command:
+
+```
+yarn rw setup server-file
+```
+
+This should give you a new file at `api/src/server.ts`:
+
+```typescript title="api/src/server.ts"
+import { createServer } from '@redwoodjs/api-server'
+
+import { logger } from 'src/lib/logger'
+
+async function main() {
+ const server = await createServer({
+ logger,
+ })
+
+ await server.start()
+}
+
+main()
+```
+
+Without the server file, to start the api side, you'd use binaries provided by `@redwoodjs/api-server` such as `yarn rw-server api` (you may also see this as `./node_modules/.bin/rw-server api`).
+
+With the server file, there's no indirection. Just use `node`:
+
+```
+yarn node api/dist/server.js
+```
+
+### Building
+
+You can't run the server file directly with Node.js; it has to be built first:
+
+```
+yarn rw build api
+```
+
+The api serve stage in the Dockerfile pulls from the api build stage, so things are already in the right order there. Similarly, for `yarn rw dev`, the dev server will build and reload the server file for you.
+
+### Command
+
+To use the server file in Docker, swap the `CMD` instruction in the api server stage:
+
+```diff
+ ENV NODE_ENV=production
+
+- CMD [ "node_modules/.bin/rw-server", "api" ]
++ CMD [ "api/dist/server.js" ]
+```
+
+:::important
+If you are using a server file, then you must change the command that runs the `api_serve` service to `./api/dist/server.js` as shown above.
+
+If you don't update the command, the GraphQL server won't be fully configured, and [Redwood Realtime](./realtime.md) won't be set up, if you're using it.
+:::
+
+### Configuring the server
+
+There are three ways you may wish to configure the server.
+
+#### Underlying Fastify server
+
+First, you can configure how the underlying Fastify server is instantiated via the `fastifyServerOptions` passed to the `createServer` function:
+
+```ts title="api/src/server.ts"
+const server = await createServer({
+ logger,
+ // highlight-start
+ fastifyServerOptions: {
+ // ...
+ },
+ // highlight-end
+})
+```
+
+For the complete list of options, see [Fastify's documentation](https://fastify.dev/docs/latest/Reference/Server/#factory).
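+
+For instance, here's a sketch of two commonly tweaked factory options (`bodyLimit` and `requestTimeout` are standard Fastify options; the values shown are just examples):
+
+```ts title="api/src/server.ts"
+const server = await createServer({
+  logger,
+  fastifyServerOptions: {
+    // Allow request bodies up to 5 MB (Fastify's default is 1 MiB)
+    bodyLimit: 5 * 1024 * 1024,
+    // Fail requests whose payload takes longer than 15 seconds to arrive
+    requestTimeout: 15_000,
+  },
+})
+```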
+
+#### Configure the Redwood API plugin
+
+Second, you may want to alter the behavior of Redwood's API plugin itself. To do this, we provide a `configureApiServer(server)` option where you can do anything you wish to the Fastify instance before the API plugin is registered. Two examples are given below.
+
+##### Example: Compressing Payloads and Rate Limiting
+
+Let's say that we want to compress payloads and add rate limiting.
+We want to compress payloads only if they're larger than 1KB, preferring deflate to gzip,
+and we want to limit IP addresses to 100 requests in a five minute window.
+We can leverage two Fastify ecosystem plugins, [@fastify/compress](https://github.com/fastify/fastify-compress) and [@fastify/rate-limit](https://github.com/fastify/fastify-rate-limit) respectively.
+
+First, you'll need to install these packages:
+
+```
+yarn workspace api add @fastify/compress @fastify/rate-limit
+```
+
+Then register them with the appropriate config:
+
+```ts title="api/src/server.ts"
+const server = await createServer({
+ logger,
+ async configureApiServer(server) {
+ await server.register(import('@fastify/compress'), {
+ global: true,
+ threshold: 1024,
+ encodings: ['deflate', 'gzip'],
+ })
+
+ await server.register(import('@fastify/rate-limit'), {
+ max: 100,
+ timeWindow: '5 minutes',
+ })
+ },
+})
+```
+
+##### Example: Multipart POSTs
+
+If you try to POST file content to the api server such as images or PDFs, you may see the following error from Fastify:
+
+```json
+{
+ "statusCode": 400,
+ "code": "FST_ERR_CTP_INVALID_CONTENT_LENGTH",
+ "error": "Bad Request",
+ "message": "Request body size did not match Content-Length"
+}
+```
+
+That's because Fastify [only supports `application/json` and `text/plain` content types natively](https://www.fastify.io/docs/latest/Reference/ContentTypeParser/).
+While Redwood configures the api server to also accept `application/x-www-form-urlencoded` and `multipart/form-data`, if you want to support other content or MIME types (like images or PDFs), you'll need to configure them here in the server file.
+
+You can use Fastify's `addContentTypeParser` function to allow uploads of the content types your application needs.
+For example, to support image file uploads you'd tell Fastify to allow `/^image\/.*/` content types:
+
+```ts title="api/src/server.ts"
+const server = await createServer({
+ logger,
+ configureApiServer(server) {
+ server.addContentTypeParser(/^image\/.*/, (_req, payload, done) => {
+ payload.on('end', () => {
+ done()
+ })
+ })
+ },
+})
+```
+
+The regular expression (`/^image\/.*/`) above allows all image content or MIME types because [they start with "image"](https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Image_types).
+
+Now, when you POST those content types to a function served by the api server, you can access the file content on `event.body`.
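+
+As a sketch, a hypothetical function receiving such a POST might look like this (the function name, path, and response shape are illustrative, not prescribed):
+
+```ts title="api/src/functions/upload/upload.ts"
+import type { APIGatewayEvent } from 'aws-lambda'
+
+import { logger } from 'src/lib/logger'
+
+// Hypothetical function: receives an image POST allowed by the parser above
+export const handler = async (event: APIGatewayEvent) => {
+  logger.info({ contentType: event.headers['content-type'] }, 'received upload')
+
+  // event.body holds the raw image payload
+  return {
+    statusCode: 201,
+  }
+}
+```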
+
+Note that for the GraphQL endpoint, using Redwood's built-in [Uploads](uploads.md), multipart requests are already configured.
+
+#### Additional Fastify plugins
+
+Finally, you can register additional Fastify plugins on the server instance:
+
+```ts title="api/src/server.ts"
+const server = await createServer({
+ logger,
+})
+
+// highlight-next-line
+server.register(myFastifyPlugin)
+```
+
+:::note Fastify encapsulation
+
+Fastify is built around the concept of [encapsulation](https://fastify.dev/docs/latest/Reference/Encapsulation/). It is important to note that Redwood's API plugin cannot be mutated after it is registered (see [Fastify's plugin docs](https://fastify.dev/docs/latest/Reference/Plugins/#asyncawait)). This is why you must use the `configureApiServer` option as shown above.
+
+:::
+
+### The `start` method
+
+Since there are a few different ways to configure the host and port the server listens on, the server instance returned by `createServer` has a special `start` method:
+
+```ts title="api/src/server.ts"
+await server.start()
+```
+
+`start` is a thin wrapper around [`listen`](https://fastify.dev/docs/latest/Reference/Server/#listen).
+It takes the same arguments as `listen`, except for host and port. It computes those in the following way, in order of precedence:
+
+1. `--apiHost` or `--apiPort` flags:
+
+```
+yarn node api/dist/server.js --apiHost 0.0.0.0 --apiPort 8913
+```
+
+2. `REDWOOD_API_HOST` or `REDWOOD_API_PORT` env vars:
+
+```
+export REDWOOD_API_HOST='0.0.0.0'
+export REDWOOD_API_PORT='8913'
+yarn node api/dist/server.js
+```
+
+3. `[api].host` and `[api].port` in `redwood.toml`:
+
+```toml title="redwood.toml"
+[api]
+ host = '0.0.0.0'
+ port = 8913
+```
+
+If you'd rather not have `createServer` parsing `process.argv`, you can disable it via `parseArgv`:
+
+```ts title="api/src/server.ts"
+await createServer({
+ parseArgv: false,
+})
+```
+
+And if you'd rather it do none of this, just change `start` to `listen` and specify the host and port inline:
+
+```ts title="api/src/server.ts"
+await server.listen({
+ host: '0.0.0.0',
+ port: 8913,
+})
+```
+
+If you don't specify a host, `createServer` uses `NODE_ENV` to set it: if `NODE_ENV` is production, it defaults to `'0.0.0.0'`; otherwise, `'::'`.
+The Dockerfile sets `NODE_ENV` to production so that things work out of the box.
diff --git a/docs/docs/uploads.md b/docs/docs/uploads.md
new file mode 100644
index 000000000000..cad3b7607602
--- /dev/null
+++ b/docs/docs/uploads.md
@@ -0,0 +1,763 @@
+# Uploads & Storage
+
+Getting started with file uploads can open up a world of possibilities for your application. Whether you're enhancing user profiles with custom avatars, allowing document sharing, or enabling image galleries - Redwood has an integrated way of uploading files and storing them.
+
+There are two parts to this:
+
+1. Setting up the frontend and GraphQL schema to send and receive files - Uploads
+2. Manipulating the data inside services and passing it to Prisma for persistence - Storage
+
+We can roughly break down the flow as follows:
+
+![Redwood Uploads Flow Diagram](/img/uploads/uploads-flow.png)
+
+## Uploading Files
+
+### 1. Setting up the File scalar
+
+Before we start sending files via GraphQL we need to tell Redwood how to handle them. Redwood and GraphQL Yoga are pre-configured to handle the `File` scalar.
+
+In your mutations, use the `File` scalar for the fields where you are submitting an upload:
+
+```graphql title="api/src/graphql/profiles.sdl.ts"
+input UpdateProfileInput {
+ id: Int
+ firstName: String
+ # ...other fields
+  # highlight-next-line
+ avatar: File
+}
+```
+
+You're now ready to receive files!
+
+### 2. Configuring the UI
+
+Let's setup a basic form to add avatar images to your profile.
+
+Assuming you've built a [Form](forms.md) for your profile:
+
+```tsx title="web/src/components/ProfileForm.tsx"
+// highlight-next-line
+import { FieldError, FileField, Form, Submit, TextField } from '@redwoodjs/forms'
+
+export const ProfileForm = ({ onSubmit }) => {
+  return (
+    <Form onSubmit={onSubmit}>
+      <TextField name="firstName" />
+      <FieldError name="firstName" />
+      {/* highlight-next-line */}
+      <FileField name="avatar" />
+      <Submit>Save</Submit>
+    </Form>
+  )
+}
+```
+
+A `FileField` is just a standard `<input type="file">` that's integrated with your Form context - it just makes it easier to extract the data for submission.
+
+Now we need to send the file as a mutation!
+
+```tsx title="web/src/components/EditProfile.tsx"
+import type { UpdateProfileInput } from 'types/graphql'
+
+import { useMutation } from '@redwoodjs/web'
+
+const UPDATE_PROFILE_MUTATION = gql`
+  # This is the Input type we set up with File earlier!
+  # highlight-next-line
+ mutation UpdateProfileMutation($input: UpdateProfileInput!) {
+ updateProfile(input: $input) {
+ firstName
+ lastName
+      # highlight-next-line
+ avatar
+ }
+ }
+`
+
+const EditProfile = ({ profile }) => {
+ const [updateProfile, { loading, error }] = useMutation(
+ UPDATE_PROFILE_MUTATION,
+ {
+ /*..*/
+ }
+ )
+
+ const onSave = (formData: UpdateProfileInput) => {
+ // We have to extract the first file from the input
+
+ const input = {
+ ...formData,
+ // highlight-next-line
+ avatar: formData.avatar?.[0], // FileField returns an array, we want the first and only file; Multi-file uploads are available
+ }
+
+ updateProfile({ variables: { input } })
+ }
+
+  // Assuming the ProfileForm component from the previous snippet is imported
+  return <ProfileForm onSubmit={onSave} />
+}
+```
+
+While [multi-file uploads are possible](#saving-file-lists---savefilesinlist), when our example form is submitted we process the data to ensure the `avatar` field contains a single file instead of an array (because that's how we set up `UpdateProfileInput`). The `onSave` function then calls the `updateProfile` mutation, which handles the file upload automatically because we've set up the `File` scalar and configured our backend to process file inputs.
+
+### 3. Logging the Item Details
+
+Try uploading your avatar photo now. If you log the `avatar` field in your service:
+
+```ts title="api/src/services/profile.ts"
+import fs from 'node:fs/promises'
+
+export const updateProfile = async ({ id, input }) => {
+ // highlight-next-line
+ console.log(input.avatar)
+ // File {
+ // filename: 'profile-picture.jpg',
+ // mimetype: 'image/jpeg',
+ // createReadStream: [Function: createReadStream]
+ // ...
+ // }
+
+ // Example without using the built-in helpers
+ await fs.writeFile(
+ '/test/profile.jpg',
+ Buffer.from(await input.avatar.arrayBuffer())
+ )
+}
+```
+
+You'll see that you are receiving an instance of [File](https://developer.mozilla.org/en-US/docs/Web/API/File).
+
+That's part 1 done - you can receive uploaded files. In the next steps, we'll talk about some tooling and a Prisma client extension that Redwood gives you, to help you persist and manage your uploads.
+
+
+**What's happening behind the scenes?**
+
+Once you send the request and open up your browser's network inspector, you'll notice that the GraphQL request looks slightly different - it has a different `Content-Type` (instead of the regular `application/json`).
+
+That's because when you send a [File](https://developer.mozilla.org/en-US/docs/Web/API/File) - the Redwood Apollo client will switch the request to a multipart form request, using [GraphQL Multipart Request Spec](https://github.com/jaydenseric/graphql-multipart-request-spec). This is the case whether you send a `File`, `FileList` or `Blob` (which is a less specialized File).
+
+On the backend, GraphQL Yoga is pre-configured to handle multipart form requests, _as long as_ you specify the `File` scalar in your SDL.
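+
+Roughly, the multipart request the client sends looks like this (a sketch following the spec's `operations`/`map` fields; the boundary string and endpoint path will vary):
+
+```
+POST /graphql HTTP/1.1
+Content-Type: multipart/form-data; boundary=----boundary
+
+------boundary
+Content-Disposition: form-data; name="operations"
+
+{ "query": "mutation ($input: UpdateProfileInput!) { updateProfile(input: $input) { id } }", "variables": { "input": { "avatar": null } } }
+------boundary
+Content-Disposition: form-data; name="map"
+
+{ "0": ["variables.input.avatar"] }
+------boundary
+Content-Disposition: form-data; name="0"; filename="profile-picture.jpg"
+Content-Type: image/jpeg
+
+<binary file data>
+------boundary--
+```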
+
+
+
+## Storage
+
+Great, now you can receive Files from GraphQL - but how do you go about saving them, and tracking them, in your database? Well, Redwood has the answers for you! Keep going to find out how!
+
+### 1. Configuring the Prisma schema
+
+In your Prisma schema, the `avatar` field should be defined as a string:
+
+```prisma title="api/db/schema.prisma"
+model Profile {
+  id Int @id @default(autoincrement())
+  // ... other fields
+  // highlight-next-line
+  avatar String?
+}
+```
+
+This is because Prisma doesn't have a native File type. Instead, we store the file path or URL as a string in the database. The actual file processing and storage is handled in your service layer, which passes the path to Prisma to save.
+
+### 2. Configuring the Upload savers and Uploads extension
+
+To make it easier (and more consistent) to deal with file uploads, Redwood gives you a standardized way of saving your uploads (i.e. writing them to storage) by using what we call "savers," along with our custom Uploads extension that will handle deletion and updates automatically for you.
+
+:::note
+The rest of the doc assumes you are running a "serverful" configuration for your deployments, as it involves the file system.
+:::
+
+Let's first run the setup command:
+
+```shell
+yarn rw setup uploads
+```
+
+Which generates the following configuration file:
+
+```ts title="api/src/lib/uploads.ts"
+import { UploadsConfig, setupStorage } from '@redwoodjs/storage'
+import { FileSystemStorage } from '@redwoodjs/storage/FileSystemStorage'
+import { UrlSigner } from '@redwoodjs/storage/signedUrl'
+
+// ⭐ (1)
+const uploadsConfig: UploadsConfig = {
+ profile: {
+ fields: ['avatar'], // 👈 the fields that will contain your `File`s
+ },
+}
+
+// ⭐ (2)
+export const fsStorage = new FileSystemStorage({
+ baseDir: './uploads',
+})
+
+// ⭐ (3) Optional
+export const urlSigner = new UrlSigner({
+ secret: process.env.UPLOADS_SECRET,
+ endpoint: '/signedUrl',
+})
+
+// ⭐ (4)
+const { saveFiles, storagePrismaExtension } = setupStorage({
+ uploadsConfig,
+ storageAdapter: fsStorage,
+ urlSigner,
+})
+
+export { saveFiles, storagePrismaExtension }
+```
+
+Let's break down the key components of this configuration.
+
+**1. Upload Configuration**
+This is where you configure the fields that will receive uploads. In our case, it's the `profile.avatar` field.
+
+The shape of `UploadsConfig` looks like this:
+
+```
+[prismaModel]: {
+  fields: ['modelField1']
+}
+```
+
+**2. Storage Adapter**
+We create a storage adapter, in this case `FileSystemStorage`, that will save your uploads to the `./uploads` folder.
+
+This just sets the base path. The actual filenames and folders are determined by the saveFiles utility functions, but [can be overridden!](#customizing-save-file-name-or-save-path)
+
+**3. Url Signer instance**
+This is an optional class that will help you generate signed URLs for your files, so you can limit access to them. Generate a secret with `yarn rw g secret` and add it to your `.env` as `UPLOADS_SECRET`.
+
+**4. Utility Functions**
+We provide utility functions that can be exported from this file to be used elsewhere, such as services.
+
+- `saveFiles` - an object containing functions to save `File` objects to storage and return a path.
+ For example:
+
+```
+saveFiles.forProfile(gqlInput)
+```
+
+- `storagePrismaExtension` - the Prisma client extension we'll use in `api/src/lib/db.ts` to automatically handle updates and deletion of uploaded files (including when the Prisma operation fails). It also configures [Result extensions](https://www.prisma.io/docs/orm/prisma-client/client-extensions/result) to give you utilities like `profile.withSignedUrl()`.
+
+### 3. Attaching the Uploads extension
+
+Now we need to extend our db client in `api/src/lib/db.ts` to use the storage extension we just configured.
+
+```ts title="api/src/lib/db.ts"
+import { PrismaClient } from '@prisma/client'
+
+import { emitLogLevels, handlePrismaLogging } from '@redwoodjs/api/logger'
+
+import { logger } from './logger'
+// highlight-next-line
+import { storagePrismaExtension } from './uploads'
+
+// 👇 Notice here we create prisma client, and don't export it yet
+export const prismaClient = new PrismaClient({
+ log: emitLogLevels(['info', 'warn', 'error']),
+})
+
+handlePrismaLogging({
+ db: prismaClient,
+ logger,
+ logLevels: ['info', 'warn', 'error'],
+})
+
+// 👇 Export db after adding uploads extension
+// highlight-next-line
+export const db = prismaClient.$extends(storagePrismaExtension)
+```
+
+The `$extends` method is used to extend the functionality of your Prisma client by adding
+
+- [Query extensions](https://www.prisma.io/docs/orm/prisma-client/client-extensions/query) which will intercept your `create`, `update`, `delete` operations
+- [Result extensions](https://www.prisma.io/docs/orm/prisma-client/client-extensions/result) for your stored files - which gives you helper methods on the result of your prisma query
+
+More details on these extensions can be found [here](#storage-prisma-extension).
+
+
+
+__Why Export This Way__
+
+
+The `$extends` method returns a new instance of the Prisma client with the extensions applied. By exporting this new instance as `db`, you ensure that any additional functionality provided by the uploads extension is available throughout your application, without needing to change your imports. Note that one of the [limitations](https://www.prisma.io/docs/orm/prisma-client/client-extensions#limitations) of using extensions is that any `$on` calls on your Prisma client (as we do in `handlePrismaLogging`) must happen before you call `$extends`.
+
+
+
+### 4. Implementing Upload savers
+
+You'll also need a way to actually save the incoming `File` object to a file persisted on storage. In your services, you can use the pre-configured "savers" to write your `File` objects to storage. Prisma will automatically save the path into the database. The savers and storage adapters, configured in `api/src/lib/uploads`, determine where the file is saved.
+
+```ts title="api/src/services/profiles/profiles.ts"
+// highlight-next-line
+import { saveFiles } from 'src/lib/uploads'
+
+export const updateProfile: MutationResolvers['updateProfile'] = async ({
+ id,
+ input,
+}) => {
+ // highlight-next-line
+ const processedInput = await saveFiles.forProfile(input)
+
+ // input.avatar (File) becomes a path string 👇
+ // Settings in src/lib/uploads.ts configures where the upload is saved
+ // processedInput.avatar -> '/mySavePath/profile/avatar/generatedId.jpg'
+
+ return db.profile.update({
+ data: processedInput,
+ where: { id },
+ })
+}
+```
+
+For each of the models you configured when you set up uploads (in your `UploadsConfig`), you have savers.
+
+So if you passed:
+
+```ts
+const uploadsConfig: UploadsConfig = {
+  profile: {
+    fields: ['avatar'],
+  },
+  anotherModel: {
+    fields: ['document'],
+  },
+}
+
+// (storage adapter configured as before)
+const { saveFiles } = setupStorage({
+  uploadsConfig,
+  storageAdapter: fsStorage,
+})
+
+// Available methods 👇
+saveFiles.forProfile(profileGqlInput)
+saveFiles.forAnotherModel(anotherModelGqlInput)
+
+// Special case - not mapped to prisma model
+saveFiles.inList(arrayOfFiles)
+```
+
+:::info
+You might have already noticed that the saver functions sort-of tie your GraphQL inputs to your Prisma model.
+
+In essence, these utility functions expect to take an object very similar to the Prisma data argument (the data you're passing to your `create` or `update`), but with `File` objects at the fields `avatar` and `document` instead of strings.
+
+If your `File` is in a different key (or a key you did not configure in the upload config), it will be ignored and left as-is.
+
+:::
+
+## Informational/Utilities
+
+## Storage Prisma Extension
+
+This Prisma extension is designed to handle file uploads and deletions in conjunction with database operations. The goal is for you, as the developer, to not have to think in terms of files, but rather just in terms of Prisma operations. The extension ensures that file uploads are properly managed alongside database operations, preventing orphaned files and maintaining consistency between the database and the storage.
+
+:::note
+The extension will _only_ operate on fields and models configured in your `UploadsConfig`, which you configure in [`api/src/lib/uploads.{js,ts}`](#2-configuring-the-upload-savers-and-uploads-extension).
+:::
+
+What this configures is:
+
+**A) CRUD operations**
+
+- when the record is deleted, the associated upload is removed from storage
+- when a record is updated, the associated upload file is also replaced
+
+...and negative cases such as:
+
+- saved uploads are removed if creation fails
+- saved uploads are removed if update fails (while keeping the original)
+
+### `create` & `createMany` operations
+
+If your create operation fails, the extension removes any uploaded files to avoid orphaning them (so you can retry the request).
+
+### `update` & `updateMany` operations
+
+1. If the update operation succeeds, the old uploaded files are removed
+2. If it fails, any newly uploaded files are removed (so you can retry the request), while the originals are kept
+
+### `delete` operations
+
+Removes any associated uploaded files once the delete operation completes.
+
+### `upsert` operations
+
+Depending on whether it's updating or creating, performs the same actions as create or update.
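+
+Putting it together, here's a sketch of what this means in practice (using the `profile.avatar` example configured above):
+
+```ts
+// Updating a profile's avatar: on success, the extension deletes the
+// previously stored avatar file from storage
+await db.profile.update({
+  data: { avatar: '/uploads/newAvatar.jpg' },
+  where: { id },
+})
+
+// Deleting the record also removes the stored avatar file from storage
+await db.profile.delete({ where: { id } })
+```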
+
+## Result Extensions
+
+When you add the storage Prisma extension, it also configures your Prisma results to have special helper methods.
+
+These will only appear on fields that you configure in your `UploadConfig`.
+
+```typescript
+const profile = await db.profile.update(/*...*/)
+
+// The result of your prisma query contains the helpers
+profile?.withSignedUrl() // ✅
+
+// Incorrect: you need to await the result of your prisma query first!
+db.profile.update(/*...*/).withSignedUrl() // 🛑
+
+// Assuming the comment model does not have an upload field
+// the helper won't appear
+db.comment.findMany(/*..*/).withSignedUrl() // 🛑
+```
+
+**B) Result extensions**
+
+```ts title="api/src/services/profiles/profiles.ts"
+export const profile = async ({ id }) => {
+ // 👇 await the result from your prisma query
+ const profile = await db.profile.findUnique({
+ where: { id },
+ })
+
+ // Convert the avatar field (which was persisted as a path) to data uri string
+ // highlight-next-line
+ return profile?.withDataUri()
+}
+```
+
+:::tip
+It's very important to note the limitations around what Prisma extensions can do:
+
+**a) The CRUD operation extensions will not run on nested read and write operations**
+For example:
+
+```js
+const savedFiles = saveFiles.inList(input.files)
+
+await db.folder.update({
+ data: {
+ ...input,
+ files: {
+ // highlight-start
+ createMany: {
+ data: savedFiles, // if the createMany fails, the saved files will _not_ be deleted
+ },
+ // highlight-end
+ },
+ },
+ where: { id },
+})
+```
+
+**b) Result extensions are not available on relations.**
+
+You can often rewrite the query in a different way though. For example, when looking up files:
+
+```ts
+const filesViaRelation = await db.folder
+ .findUnique({ where: { id: root?.id } })
+ .files()
+
+const filesWhereQuery = await db.file.findMany({
+ where: {
+ folderId: root?.id,
+ },
+})
+
+// 🛑 Will not work, because files accessed via relation
+// highlight-next-line
+return filesViaRelation.map((file) => file.withSignedUrl())
+
+// ✅ OK, because direct lookup
+// highlight-next-line
+return filesWhereQuery.map((file) => file.withSignedUrl())
+```
+
+:::
+
+### Saving File lists - `saveFiles.inList()`
+
+If you would like to upload FileLists (or arrays of Files), use this utility to persist the Files to storage. This is necessary because arrays of strings can't be stored in a single database column - you'll probably want to save them to a separate table or to specific fields.
+
+Let's say your SDL defines a way to send an array of files:
+
+```graphql
+input UpdateAlbumInput {
+ name: String
+ photos: [File]
+}
+```
+
+You can use the `.inList` function like this:
+
+```ts title="api/src/services/albums.ts"
+export const updateAlbum = async ({
+ id,
+ input,
+}) => {
+
+ // notice we're passing in the file list, and not the input!
+ // highlight-next-line
+ const processedInput = await saveFiles.inList(input.photos)
+ /* Returns an array like this:
+ [
+ '/baseStoragePath/AG1258019MAFGK.jpg',
+ '/baseStoragePath/BG1059149NAKKE.jpg',
+ ]
+ */
+
+ const mappedPhotos = processedInput.map((path) => ({ path }))
+ /* Will make `mappedPhotos` be an array of objects like this:
+ [
+ { path: '/baseStoragePath/AG1258019MAFGK.jpg' },
+ { path: '/baseStoragePath/BG1059149NAKKE.jpg' },
+ ]
+ */
+
+  return db.album.update({
+    data: {
+      ...input,
+      photos: {
+        createMany: {
+          data: mappedPhotos,
+        },
+      },
+    },
+    where: { id },
+  })
+}
+```
+
+### Customizing save file name or save path
+
+If you'd like to customize the filename that a saver writes to, you can override it when calling the saver. For example, you could name files after the user's id:
+
+```ts
+await saveFiles.forProfile(data, {
+ // highlight-next-line
+ fileName: 'profilePhoto-' + context.currentUser.id,
+})
+
+// Will save files to
+// /base_path/profilePhoto-58xx4ruv41f8eit0y25.png
+```
+
+If you'd like to customize where files are saved - for example, to put them in a specific folder so you can make them [publicly available](#making-a-folder-public) - you can also override the folder to use (bypassing the base path of your storage adapter):
+
+```ts
+await saveFiles.forProfile(data, {
+ fileName: 'profilePhoto-' + context.currentUser.id,
+ // highlight-next-line
+ path: '/public_avatar',
+})
+
+// Will save files to
+// /public_avatar/profilePhoto-58xx4ruv41f8eit0y25.png
+```
+
+The file extension is determined by the name of the uploaded file.
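+
+As a rough sketch of how those pieces combine (the `buildSavePath` helper below is hypothetical, not Redwood's actual implementation), the final path joins the folder, your `fileName` override, and the extension taken from the uploaded file's name:
+
+```typescript
+// Hypothetical helper: combine the folder, the fileName override, and the
+// extension of the originally uploaded file into a save path
+function buildSavePath(
+  uploadedName: string, // the uploaded File's original name
+  opts: { fileName: string; path?: string },
+  basePath = '/base_path'
+): string {
+  const dot = uploadedName.lastIndexOf('.')
+  const ext = dot === -1 ? '' : uploadedName.slice(dot) // keeps '.png', '.jpeg', ...
+  return `${opts.path ?? basePath}/${opts.fileName}${ext}`
+}
+```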
+
+### Signed URLs
+
+When you set up uploads, an API function (an endpoint) is also generated for you - by default at `/signedUrl`. You can use it in conjunction with the `withSignedUrl` helper. For example:
+
+```ts title="api/src/services/profiles.ts"
+import { EXPIRES_IN } from '@redwoodjs/storage/UrlSigner'
+
+export const profile = async ({ id }) => {
+ const profile = await db.profile.findUnique({
+ where: { id },
+ })
+
+ // Convert the avatar field to signed URLs
+ // highlight-start
+ return profile?.withSignedUrl({
+ expiresIn: EXPIRES_IN.days(2),
+ })
+ // highlight-end
+}
+```
+
+The object being returned will look like:
+
+```ts
+{
+ id: 125,
+ avatar: '/.redwood/functions/signedUrl?s=s1gnatur3&expiry=1725190749613&path=path.png'
+}
+```
+
+This will generate a URL that expires in 2 days (from the time of the query). Let's break down the URL:
+
+| URL Component                   | Description                                          |
+| ------------------------------- | ---------------------------------------------------- |
+| `/.redwood/functions/signedUrl` | Points to the API server and the configured endpoint |
+| `s=s1gnatur3`                   | The signature that will be validated                 |
+| `expiry=1725190749613`          | Timestamp for when the URL expires                   |
+| `path=path.png`                 | The key used to look up the file in your storage     |
+
+
+#### How the `signedUrl` function validates requests
+
+This function is generated for you, but let's take a quick look at how it works:
+
+```ts title="api/src/functions/signedUrl/signedUrl.ts"
+import type { SignatureValidationArgs } from '@redwoodjs/storage/UrlSigner'
+
+// The urlSigner and fsStorage instances were configured when we setup uploads
+// highlight-next-line
+import { urlSigner, fsStorage } from 'src/lib/uploads'
+
+export const handler = async (event) => {
+ // Validate the signature using the urlSigner instance
+ // highlight-next-line
+ const fileToReturn = urlSigner.validateSignature(
+ // Pass the params {s, path, expiry}
+ // highlight-next-line
+ event.queryStringParameters as SignatureValidationArgs
+ )
+
+ // Use the returned value to lookup the file in your storage
+ // highlight-next-line
+ const { contents, type } = await fsStorage.read(fileToReturn)
+
+ return {
+ statusCode: 200,
+ headers: {
+ // You also get the type from the read
+ 'Content-Type': type,
+ },
+ // Return the contents of the file
+ body: contents,
+ }
+}
+```
+
+We created and exported the `urlSigner` instance and `fsStorage` adapter in `src/lib/uploads`.
+
+The details needed for validation come through as query parameters, which we pass to `urlSigner.validateSignature`.
+
+If it's valid, you will receive a path (or key) to the file - which you can then look up in your storage.
+
+The `read` function also returns the mime type of the file (based on its extension), which you pass as a response header. This ensures that browsers know how to interpret your response!
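+
+To make the validation step concrete, here is a minimal, self-contained sketch of how signed URLs work in general: an HMAC computed over the path and expiry when signing, then recomputed and compared when the URL is used. This illustrates the technique only - it is not Redwood's `UrlSigner` implementation:
+
+```typescript
+import { createHmac, timingSafeEqual } from 'node:crypto'
+
+const SECRET = 'url-signing-secret' // in a real app, read from an env var
+
+// Sign: HMAC over the path and expiry timestamp
+function sign(path: string, expiry: number): string {
+  return createHmac('sha256', SECRET).update(`${path}:${expiry}`).digest('hex')
+}
+
+// Validate: recompute the signature, compare, then check the expiry
+function validateSignature(params: {
+  s: string
+  path: string
+  expiry: string
+}): string {
+  const expiry = Number(params.expiry)
+  const expected = sign(params.path, expiry)
+  const ok =
+    expected.length === params.s.length &&
+    timingSafeEqual(Buffer.from(expected), Buffer.from(params.s))
+  if (!ok) throw new Error('Invalid signature')
+  if (Date.now() > expiry) throw new Error('Signature expired')
+  return params.path // the key to look up in storage
+}
+```
+
+Because the signature covers both the path and the expiry, tampering with either query parameter invalidates the URL.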
+
+
+
+### Data URIs
+
+For smaller files, you can instead return a Base64 data URI string that you can render directly in your HTML.
+
+```ts title="api/src/services/profiles.ts"
+export const profile = async ({ id }) => {
+ const profile = await db.profile.findUnique({
+ where: { id },
+ })
+
+ // highlight-next-line
+ return profile?.withDataUri()
+}
+```
+
+:::tip
+The `withDataUri` extension is an `async` function. Remember to `await` it if you do additional manipulation before returning the result object from your service.
+:::
+
+The output of `withDataUri` is your profile object with the upload fields transformed into data URIs. For example:
+
+```js
+{
+ // other fields
+  id: 12355,
+  name: 'Danny',
+  email: '...',
+  // Because avatar is configured as an upload field:
+ // highlight-next-line
+ avatar: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAJ/...Q0MgUHJvZmlsZQAAKJF1kL='
+}
+```
+
+## Configuring the server further
+
+Sometimes, you may need more control over how the Redwood API server behaves. This could include customizing the request body limit, handling redirects, or implementing additional logic - that's exactly what the [Server File](server-file.md) is for!
+
+### Making a folder public
+
+While you could always create a function to serve certain files publicly (similar to the `/signedUrl` function that gets generated for you), another approach is to configure the API server with the [fastify-static](https://github.com/fastify/fastify-static) plugin to make a specific folder publicly accessible.
+
+```js title="api/server.js"
+import path from 'path'
+// highlight-next-line
+import fastifyStatic from '@fastify/static'
+
+import { createServer } from '@redwoodjs/api-server'
+import { logger } from 'src/lib/logger'
+
+async function main() {
+ const server = await createServer({
+ logger,
+ })
+
+ // highlight-start
+ server.register(fastifyStatic, {
+    root: path.join(process.cwd(), 'uploads/public_profile_photos'),
+ prefix: '/public_uploads',
+ })
+ // highlight-end
+
+ await server.start()
+}
+
+main()
+```
+
+Based on the above, you'll be able to access your files at:
+
+```
+http://localhost:8910/.redwood/functions/public_uploads/01J6AF89Y89WTWZF12DRC72Q2A.jpeg
+
+OR directly
+
+http://localhost:8911/public_uploads/01J6AF89Y89WTWZF12DRC72Q2A.jpeg
+```
+
+This way, you only expose **part** of your uploads directory publicly.
+
+### Customizing the body limit for requests
+
+The default body size limit for the Redwood API server is 100MB (per request). Depending on the sizes of files you're uploading, especially in the case of multiple files, you may receive errors like this:
+
+```json
+{
+ "code": "FST_ERR_CTP_BODY_TOO_LARGE",
+ "error": "Payload Too Large",
+ "message": "Request body is too large"
+}
+```
+
+You can configure the `bodyLimit` option to increase or decrease the default limit.
+
+```js title="api/server.js"
+import { createServer } from '@redwoodjs/api-server'
+
+import { logger } from 'src/lib/logger'
+
+async function main() {
+ const server = await createServer({
+ logger,
+ fastifyServerOptions: {
+ // highlight-next-line
+ bodyLimit: 1024 * 1024 * 500, // 500MB
+ },
+ })
+
+ await server.start()
+}
+
+main()
+```
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 428fd7a3a761..28c24efb2730 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -194,6 +194,7 @@ module.exports = {
'schema-relations',
'security',
'seo-head',
+ 'server-file',
'serverless-functions',
'services',
'storybook',
@@ -231,6 +232,7 @@ module.exports = {
],
},
'webhooks',
+ 'uploads',
'vite-configuration',
],
},
diff --git a/docs/static/img/uploads/uploads-flow.png b/docs/static/img/uploads/uploads-flow.png
new file mode 100644
index 000000000000..768cad8512db
Binary files /dev/null and b/docs/static/img/uploads/uploads-flow.png differ