
[VNEXT] multiple storage provider support #332

Draft
wants to merge 7 commits into base: vnext

Conversation

@tankerkiller125 (Contributor) commented Nov 14, 2024

What type of PR is this?

  • feature

What this PR does / why we need it:

Adds support for multiple storage providers, notably S3, Google Cloud Storage, and Azure Blob Storage, alongside the existing local storage.

Which issue(s) this PR fixes:

Fixes: #265

Summary by CodeRabbit

  • New Features

    • Transitioned document storage from local filesystem to cloud storage, enhancing scalability and accessibility.
    • Improved file handling for item attachments by utilizing cloud storage for serving files.
    • Added support for multiple cloud providers (AWS, Azure, Google Cloud) in document storage.
    • Introduced a connection string configuration for storage management.
  • Bug Fixes

    • Enhanced error logging for cloud storage operations during document creation and deletion.
    • Improved error handling for attachment retrieval, ensuring clearer visibility into issues.
  • Chores

    • Updated dependencies to support new cloud storage functionalities and improved observability.
    • Modified environment variable configurations for storage paths in Dockerfiles.

Copy link
Contributor

coderabbitai bot commented Nov 14, 2024

Walkthrough

The pull request introduces significant updates to the go.mod file and the repo_documents.go file. The go.mod file now includes multiple new dependencies for cloud storage and observability, while removing one obsolete dependency. The repo_documents.go file has transitioned from local filesystem storage to cloud-based document management using the gocloud.dev/blob package, enhancing error handling and logging for document operations.
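
For readers unfamiliar with gocloud.dev/blob, here is a minimal, self-contained sketch of the portable API the PR adopts (the file:// URL and object key are illustrative, not taken from the PR; once a provider driver is imported, the same calls work against s3://, gs://, or azblob:// URLs):

package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/fileblob" // registers the file:// URL scheme
)

func main() {
	ctx := context.Background()

	bucket, err := blob.OpenBucket(ctx, "file:///data/")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Write an object; the driver behind the URL decides where the bytes land.
	if err := bucket.WriteAll(ctx, "documents/example.txt", []byte("hello"), nil); err != nil {
		log.Fatal(err)
	}
}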

Changes

File Path | Change Summary
backend/go.mod | Added new dependencies for cloud storage (e.g., gocloud.dev, AWS, Azure), removed github.com/rogpeppe/go-internal, updated existing dependencies.
backend/internal/data/repo/repo_documents.go | Refactored DocumentRepository to use cloud storage instead of the local filesystem for document creation and deletion; updated import statements accordingly.
Dockerfile | Updated environment variable HBOX_STORAGE_DATA to specify the storage path as a file URI.
Dockerfile.rootless | Updated environment variable HBOX_STORAGE_DATA to specify the storage path as a file URI.
backend/app/api/handlers/v1/v1_ctrl_items_attachments.go | Enhanced file handling logic to serve attachments from cloud storage using gocloud.dev/blob.
backend/app/api/main.go | Modified the run function to include a new connString parameter in repository initialization.
backend/internal/data/repo/repos_all.go | Updated the New function to accept an additional connString parameter for repository instantiation.
backend/internal/sys/config/conf_database.go | Added a ConnString field to the Storage struct and updated the Data field's default value.

Assessment against linked issues

Objective | Addressed | Explanation
Support multiple storage systems (Issue #265) | |
Ability to easily add support for S3, Azure, GCP, etc. (Issue #265) | |

Security Recommendations

  • Ensure that any sensitive credentials for cloud storage services are managed securely, using environment variables or secret management tools.
  • Validate user inputs when interacting with cloud storage to prevent injection attacks or unauthorized access.

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 60e8be8 and 33935a3.

📒 Files selected for processing (3)
  • backend/app/api/handlers/v1/v1_ctrl_items_attachments.go (3 hunks)
  • backend/internal/data/repo/repo_documents.go (4 hunks)
  • backend/internal/sys/config/conf_database.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/internal/sys/config/conf_database.go
🔇 Additional comments (2)
backend/internal/data/repo/repo_documents.go (1)

59-60: LGTM! Secure path handling implementation.

The path construction is secure due to:

  • Usage of pathlib.Safe
  • UUID for uniqueness
  • Proper path joining
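
For reference, the helper under review is the one quoted later in this thread:

func (r *DocumentRepository) path(gid uuid.UUID, ext string) string {
	return pathlib.Safe(filepath.Join(r.storePath, gid.String(), "documents", uuid.NewString()+ext))
}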
backend/app/api/handlers/v1/v1_ctrl_items_attachments.go (1)

167-189: Verify Access Controls and Security Configurations

Let's ensure proper security measures are in place for the blob storage implementation.

✅ Verification successful

Security Controls Verified with Some Recommendations

The attachment handling implementation has proper security controls in place through:

  • Authentication middleware (mwAuthToken)
  • Role-based access control middleware (mwRoles)
  • Proper error handling for unauthorized/forbidden access
  • Secure connection string handling via repository abstraction

However, for enhanced security, consider these recommendations:

  • Add content-type validation before serving files to prevent MIME-type spoofing
  • Implement file size limits to prevent DoS attacks
  • Consider adding virus scanning for uploaded attachments
  • Add rate limiting for attachment downloads
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential security issues in the codebase

# Check for hardcoded connection strings
echo "Checking for hardcoded connection strings..."
rg -i "(blob|storage|bucket).*['\"].*://.*['\"]" --type go

# Check for missing access controls in attachment handlers
echo "Checking for proper access control implementations..."
ast-grep --pattern 'func ($x *V1Controller) Handle.*Attachment.* {
  $$$
  ctx := services.NewContext($_)
  $$$
}'

# Check for proper error handling patterns
echo "Checking error handling patterns..."
rg -A 2 'validate\.NewRequestError' --type go

Length of output: 193


Script:

#!/bin/bash
# Let's break down the verification into separate commands for better reliability

# Check for connection string handling
echo "=== Checking connection string handling ==="
rg "GetConnString" -A 3 -B 3

# Check for access control patterns
echo "=== Checking access control patterns ==="
rg "func.*HandleAttachment" -A 10

# Check authentication middleware usage
echo "=== Checking authentication middleware ==="
rg "middleware.*Auth" -A 2

# Check error handling for unauthorized access
echo "=== Checking error handling patterns ==="
rg "StatusUnauthorized|StatusForbidden" -A 2

# Check blob storage security configurations
echo "=== Checking blob storage security ==="
rg "blob\.Open.*Bucket" -A 3 -B 3

Length of output: 6292



@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (7)
backend/go.mod (3)

Line range hint 3-3: Verify Go version pin

The go directive pins Go 1.23.0, which is a released toolchain version (Go 1.23 shipped in August 2024). Confirm that CI runners and build images also use Go 1.23 or newer so builds do not fail on older toolchains.

26-26: Security Recommendations for Cloud Storage Implementation

With the addition of multiple cloud storage providers, please ensure the following security measures are implemented:

  1. Secure Configuration:

    • Use environment variables or secure vaults for cloud credentials
    • Implement least-privilege IAM roles
    • Enable audit logging for cloud operations
  2. Data Protection:

    • Enable server-side encryption for cloud storage
    • Implement proper access controls
    • Use signed URLs for temporary access
  3. Network Security:

    • Configure appropriate firewall rules
    • Use VPC endpoints where possible
    • Enable TLS for data in transit

Would you like me to provide example configurations for secure cloud storage setup?

Also applies to: 32-67


74-75: Leverage OpenTelemetry for Security Monitoring

The addition of OpenTelemetry packages provides an opportunity to implement security monitoring. Consider:

  1. Tracking authentication/authorization events
  2. Monitoring file access patterns
  3. Setting up alerts for suspicious activities

Would you like me to provide example configurations for security monitoring with OpenTelemetry?

Also applies to: 109-114

backend/internal/data/repo/repo_documents.go (4)

97-103: Suggestion: Simplify bucket closure with deferred call

The deferred function used to close the bucket adds unnecessary complexity.

Recommendation:

Simplify the code by using a direct defer statement.

defer func() {
    if err := bucket.Close(); err != nil {
        log.Err(err).Msg("failed to close bucket")
    }
}()

Or even:

defer bucket.Close()

Ensure that error handling is consistent with your application's logging practices.


136-142: Suggestion: Simplify bucket closure with deferred call

Simplify the deferred closure of the bucket as mentioned previously.

Recommendation:

defer func() {
    if err := bucket.Close(); err != nil {
        log.Err(err).Msg("failed to close bucket")
    }
}()

17-21: Security Recommendation: Securely manage storage provider dependencies

By importing multiple storage providers using blank identifiers, all provider drivers are included, which may increase the application's attack surface and binary size.

Recommendation:

Import only the storage providers that are necessary for your application to reduce potential vulnerabilities and minimize the binary size. Ensure that all credentials and configurations for these providers are managed securely.
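
A minimal sketch of a trimmed import block, assuming a deployment that only needs local files and S3 (the driver paths are gocloud.dev's real subpackages; the selection is illustrative):

import (
	"gocloud.dev/blob"

	// Register only the URL schemes this deployment actually uses;
	// each blank import pulls in that provider's SDK.
	_ "gocloud.dev/blob/fileblob" // file://
	_ "gocloud.dev/blob/s3blob"   // s3://
)

Build tags or separate binaries per provider are another way to keep unused SDKs out of the binary.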


Line range hint 71-76: Security Concern: Validate and sanitize file paths

The path function constructs file paths that are used to store documents. Ensure that the pathlib.Safe function properly sanitizes inputs to prevent path traversal attacks.

Recommendation:

Verify that pathlib.Safe effectively neutralizes any malicious input in gid and ext. Additionally, consider adding validation to restrict ext to a whitelist of allowed extensions.

🛑 Comments failed to post (5)
backend/go.mod (1)

26-130: 💡 Codebase verification

⚠️ Dependency security scan results

The security scan surfaced advisories affecting two of the dependencies; in both cases the pinned versions already include the fixes:

  • golang.org/x/crypto: the Terrapin attack (CVE-2023-48795) was patched in v0.17.0, so the pinned v0.28.0 is well past the fixed release.
  • google.golang.org/grpc: the advisories for private token exposure in logs (fixed in v1.64.1), HTTP/2 Stream Cancellation attacks, and the gRPC-Go HTTP/2 Rapid Reset vulnerability all predate the pinned v1.65.0.

Recommended actions:

  • Keep golang.org/x/crypto at v0.17.0 or later (currently v0.28.0).
  • Keep google.golang.org/grpc at v1.64.1 or later (currently v1.65.0).

Other scanned dependencies (gocloud.dev, cloud provider SDKs, oauth2, and JWT packages) show no known vulnerabilities.

🔗 Analysis chain

Verify dependencies for security vulnerabilities

Let's check the newly added dependencies for known security vulnerabilities.


Let me check a few more critical dependencies for security vulnerabilities.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for security advisories in new dependencies

# Check for vulnerabilities in gocloud.dev
gh api graphql -f query='
{
  securityVulnerabilities(first: 5, ecosystem: GO, package: "gocloud.dev") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'

# Check for vulnerabilities in cloud provider SDKs
for pkg in "cloud.google.com/go" "github.com/Azure/azure-sdk-for-go" "github.com/aws/aws-sdk-go-v2"; do
  echo "Checking $pkg..."
  gh api graphql -f query="
  {
    securityVulnerabilities(first: 5, ecosystem: GO, package: \"$pkg\") {
      nodes {
        advisory {
          summary
          severity
          publishedAt
        }
        vulnerableVersionRange
        firstPatchedVersion {
          identifier
        }
      }
    }
  }"
done

Length of output: 2080


Script:

#!/bin/bash
# Check for vulnerabilities in other critical dependencies
for pkg in "golang.org/x/crypto" "google.golang.org/grpc" "golang.org/x/oauth2" "github.com/golang-jwt/jwt"; do
  echo "Checking $pkg..."
  gh api graphql -f query="
  {
    securityVulnerabilities(first: 5, ecosystem: GO, package: \"$pkg\") {
      nodes {
        advisory {
          summary
          severity
          publishedAt
        }
        vulnerableVersionRange
        firstPatchedVersion {
          identifier
        }
      }
    }
  }"
done

# Also check the go.mod file for any deprecated or insecure version constraints
grep -E "^[[:space:]]*[^/]+[[:space:]]+v[0-9]+\.[0-9]+\.[0-9]+" backend/go.mod | while read -r line; do
  echo "Checking version: $line"
done

Length of output: 5108

backend/internal/data/repo/repo_documents.go (4)

81-85: ⚠️ Potential issue

Critical Issue: Invalid bucket URL in blob.OpenBucket call

In the Create method, blob.OpenBucket is called with an empty string as the URL. This will fail to open a valid bucket and will result in runtime errors when attempting to store documents.

Recommendation:

Provide a valid bucket URL, and ensure that sensitive information like access keys or connection strings are securely managed, preferably through environment variables or a configuration management system. Here's how you might adjust the code:

bucketURL := os.Getenv("BUCKET_URL")
bucket, err := blob.OpenBucket(context.Background(), bucketURL)
if err != nil {
    log.Err(err).Msg("failed to open bucket")
    return DocumentOut{}, err
}

89-89: ⚠️ Potential issue

Critical Issue: Incorrect MD5 hash computation for ContentMD5

The ContentMD5 option should contain the MD5 checksum of the content being uploaded. Currently, it computes the MD5 hash of doc.Title, which is incorrect and could lead to integrity issues.

Recommendation:

Compute the MD5 checksum of the document content. Note that this requires reading the content stream twice or buffering it, which may not be efficient for large files. Additionally, consider using a stronger hash function if supported.

// Read the content into a buffer to compute MD5 and upload
buf := new(bytes.Buffer)
tee := io.TeeReader(doc.Content, buf)

hash := md5.New()
if _, err := io.Copy(hash, tee); err != nil {
    log.Err(err).Msg("failed to compute MD5 hash")
    return DocumentOut{}, err
}

options := &blob.WriterOptions{
    ContentType: "application/octet-stream",
    ContentMD5:  hash.Sum(nil),
}

91-93: ⚠️ Potential issue

Critical Issue: Upload method does not exist on blob.Bucket

The blob.Bucket type does not have an Upload method. Using a non-existent method will result in a compilation error.

Recommendation:

Use NewWriter to obtain a Writer for the bucket, and then write your content to it.

w, err := bucket.NewWriter(ctx, path, options)
if err != nil {
    log.Err(err).Msg("failed to create bucket writer")
    return DocumentOut{}, err
}
defer func() {
    if err := w.Close(); err != nil {
        log.Err(err).Msg("failed to close writer")
    }
}()

if _, err := io.Copy(w, buf); err != nil {
    log.Err(err).Msg("failed to write content to bucket")
    return DocumentOut{}, err
}

120-124: ⚠️ Potential issue

Critical Issue: Invalid bucket URL in blob.OpenBucket call

In the Delete method, blob.OpenBucket is called with an empty string as the URL, which will fail to open the bucket.

Recommendation:

Just like in the Create method, supply a valid bucket URL and manage credentials securely.

bucketURL := os.Getenv("BUCKET_URL")
bucket, err := blob.OpenBucket(context.Background(), bucketURL)
if err != nil {
    log.Err(err).Msg("failed to open bucket")
    return err
}

@tankerkiller125 (Contributor, Author) commented:

@coderabbitai bucket.Upload does exist as a function with the dependency.


coderabbitai bot commented Nov 14, 2024

Tip

For best results, initiate chat on the files or code changes.

@tankerkiller125: Thank you for the clarification; you are correct that bucket.Upload exists with the dependency. Apologies for any confusion earlier.

Since the application is now integrating with multiple cloud storage providers using gocloud.dev/blob, I have some security recommendations:

  • Secure Credentials Storage: Ensure that all cloud storage provider credentials (API keys, access tokens, etc.) are securely stored and not hard-coded into the source code. Use environment variables or a secure secrets management system to handle sensitive information.

  • Validate User Inputs: When processing or handling any user-supplied data that interacts with the storage providers, make sure to validate and sanitize inputs to prevent injection attacks or unauthorized access.

  • Encryption in Transit and at Rest: Verify that data is encrypted both in transit and at rest. Use HTTPS/TLS connections when communicating with cloud storage APIs, and enable server-side encryption if available.

  • Least Privilege Principle: Configure cloud storage permissions to follow the principle of least privilege, granting only the necessary permissions needed for the application to function.

  • Audit Logging: Implement audit logging for storage operations to monitor and detect any unauthorized access or anomalies.

Let me know if you have any questions or need further assistance.


✏️ Learnings added
Learnt from: tankerkiller125
URL: https://github.com/sysadminsmedia/homebox/pull/332

Timestamp: 2024-11-14T02:49:28.294Z
Learning: The `bucket.Upload` function exists in `gocloud.dev/blob` and is available with the dependency.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

BREAKING CHANGE: Add new file connection string variable
@tankerkiller125 (Contributor, Author) commented:

TODO: File deletion still doesn't work correctly for some reason


@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 9

🧹 Outside diff range and nitpick comments (9)
backend/internal/sys/config/conf_database.go (1)

9-10: Document the storage configuration options

The new storage configuration needs documentation to explain:

  1. Supported connection string formats for each provider
  2. Required permissions and authentication methods
  3. Security best practices for production deployments

Consider adding detailed comments or creating a separate documentation file:

 type Storage struct {
+    // ConnString specifies the storage provider URL:
+    // - Local: file://./.data
+    // - S3: s3://bucket-name?region=us-west-1
+    // - GCS: gs://bucket-name
+    // - Azure: azure://container-name
     ConnString string `yaml:"conn_string" conf:"default:file://./.data"`
+    // Data specifies the root directory for local storage
     Data       string `yaml:"data" conf:"default:./"`
 }
Dockerfile.rootless (1)

45-47: Security recommendations for multi-storage configuration

Since this PR adds support for multiple storage providers, consider these security enhancements:

  1. Add environment variables for cloud credentials:
 ENV HBOX_STORAGE_DATA=file:///data/
+# For S3
+ENV AWS_ACCESS_KEY_ID=""
+ENV AWS_SECRET_ACCESS_KEY=""
+ENV AWS_REGION=""
+# For Google Cloud Storage
+ENV GOOGLE_APPLICATION_CREDENTIALS=""
+# For Azure Blob Storage
+ENV AZURE_STORAGE_ACCOUNT=""
+ENV AZURE_STORAGE_KEY=""
  2. Consider adding comments to document supported URL formats for different providers:
+# Supported storage URLs:
+# Local: file:///data/
+# S3: s3://bucket-name
+# GCS: gs://bucket-name
+# Azure: azblob://container-name
 ENV HBOX_STORAGE_DATA=file:///data/
Dockerfile (1)

63-63: Consider adding documentation for storage configuration

Since this is part of the multi-storage provider support, it would be helpful to add comments in the Dockerfile explaining the supported URL schemes for different storage providers (e.g., s3://, gs://, azure://).

Consider adding a comment like this:

+# Storage URL schemes: file:///, s3://, gs://, azure://
 ENV HBOX_STORAGE_DATA=file:///data/
backend/internal/data/repo/repo_documents.go (4)

17-21: Security Considerations for Cloud Storage Integration

While the cloud provider imports look good, here are some security recommendations:

  1. Ensure the connection strings are stored securely (e.g., using environment variables)
  2. Implement proper access controls and bucket policies
  3. Consider implementing request signing for enhanced security
  4. Monitor and audit storage access patterns

59-59: Enhance path validation and sanitization

While pathlib.Safe is used, consider additional security measures:

  1. Validate maximum path length
  2. Implement path traversal protection
  3. Add file extension whitelist
 func (r *DocumentRepository) path(gid uuid.UUID, ext string) string {
+	// Whitelist allowed extensions
+	allowedExt := map[string]bool{".pdf": true, ".doc": true, ".docx": true}
+	if !allowedExt[ext] {
+		return ""
+	}
 	return pathlib.Safe(filepath.Join(r.storePath, gid.String(), "documents", uuid.NewString()+ext))
 }

102-112: Improve error handling and resource management

The bucket operations look good, but consider these enhancements:

  1. Use context with timeout for the write operation
  2. Implement retry mechanism for transient failures
  3. Add metrics/monitoring for storage operations
+func (r *DocumentRepository) writeWithRetry(ctx context.Context, bucket *blob.Bucket, path string, data []byte, opts *blob.WriterOptions) error {
+	backoff := time.Second
+	maxRetries := 3
+	
+	for i := 0; i < maxRetries; i++ {
+		err := bucket.WriteAll(ctx, path, data, opts)
+		if err == nil {
+			return nil
+		}
+		if !errors.Is(err, context.DeadlineExceeded) {
+			return err
+		}
+		time.Sleep(backoff * time.Duration(i+1))
+	}
+	return errors.New("max retries exceeded")
+}

141-151: Add safety measures for delete operations

Consider implementing these safety measures:

  1. Add soft delete capability
  2. Implement versioning for recovery
  3. Add audit logging for delete operations
+	// Add audit log
+	log.Info().
+		Str("document_id", id.String()).
+		Str("path", doc.Path).
+		Msg("document deleted")
backend/app/api/main.go (1)

204-207: Add audit logging for storage operations

Given the addition of multiple storage providers, implement comprehensive audit logging to track:

  • Storage provider access attempts
  • File operations (upload/download/delete)
  • Authentication failures

Example implementation:

// Add to services package
type StorageAuditor interface {
    LogStorageAccess(provider, operation string, success bool)
    LogStorageError(provider, operation string, err error)
}
backend/app/api/handlers/v1/v1_ctrl_items_attachments.go (1)

177-182: Avoid closing the bucket within the request scope

Opening and closing the bucket on every request may introduce overhead and affect performance. Consider initializing the bucket once at application startup and reusing it across requests. Close the bucket when the application is shutting down.
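
A minimal sketch of that lifecycle, assuming hypothetical wiring (the app type and constructor are illustrative, not the project's actual types):

type app struct {
	bucket *blob.Bucket // opened once at startup, shared by all handlers
}

func newApp(ctx context.Context, connString string) (*app, error) {
	bucket, err := blob.OpenBucket(ctx, connString)
	if err != nil {
		return nil, err
	}
	return &app{bucket: bucket}, nil
}

// Close releases the bucket; call it once during graceful shutdown.
func (a *app) Close() error { return a.bucket.Close() }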

🛑 Comments failed to post (9)
backend/internal/sys/config/conf_database.go (2)

9-9: ⚠️ Potential issue

Ensure secure handling of connection strings

The ConnString field may contain sensitive credentials for cloud storage providers.

Security recommendations:

  1. Ensure connection strings are loaded from secure sources (env vars, secrets management)
  2. Avoid logging or exposing the full connection string
  3. Consider adding validation for supported URL schemes (file://, s3://, gs://, azure://)

Would you like me to provide a code example for secure connection string handling?
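
As a starting point, a hedged sketch of such validation (the scheme allowlist is an assumption, not the project's confirmed set):

import (
	"fmt"
	"net/url"
)

func validateConnString(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return fmt.Errorf("invalid storage connection string: %w", err)
	}
	// Assumed allowlist; adjust to the schemes the deployment supports.
	allowed := map[string]bool{"file": true, "s3": true, "gs": true, "azblob": true}
	if !allowed[u.Scheme] {
		return fmt.Errorf("unsupported storage scheme %q", u.Scheme)
	}
	return nil
}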


10-10: ⚠️ Potential issue

Validate and sanitize storage paths

The Data field's default value of "./" could potentially allow access to sensitive files outside the intended storage area.

Security recommendations:

  1. Implement path sanitization to prevent directory traversal
  2. Consider using absolute paths instead of relative paths
  3. Add validation to ensure paths remain within allowed boundaries

Example validation:

+ // Add to Storage struct
+ func (s *Storage) Validate() error {
+     if strings.Contains(s.Data, "..") {
+         return errors.New("path must not contain parent directory references")
+     }
+     return nil
+ }

Committable suggestion skipped: line range outside the PR's diff.

Dockerfile (1)

63-63: ⚠️ Potential issue

Verify volume permissions for cloud provider credentials

The /data volume permissions (777) might be too permissive. Consider:

  1. Using more restrictive permissions for credential files
  2. Separating storage data from configuration/credential storage

Apply this change for better security:

-RUN mkdir -p /app /data && chmod -R 777 /data
+RUN mkdir -p /app /data && \
+    chmod 755 /data && \
+    mkdir -p /data/storage && \
+    chmod 777 /data/storage && \
+    mkdir -p /data/credentials && \
+    chmod 600 /data/credentials

Also applies to: 71-71

backend/internal/data/repo/repo_documents.go (2)

94-100: 🛠️ Refactor suggestion

Enhance content security and type detection

The MD5 checksum implementation is good, but consider these improvements:

  1. Use a stronger hash function (SHA-256) for content verification
  2. Implement content type validation
  3. Add file size limits
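
A sketch along those lines, assuming an arbitrary cap (the limit and helper name are illustrative):

import (
	"crypto/sha256"
	"errors"
	"io"
)

const maxUploadBytes = 32 << 20 // assumed 32 MiB cap

// readLimited buffers at most maxUploadBytes and returns a SHA-256 digest of the content.
func readLimited(r io.Reader) ([]byte, [sha256.Size]byte, error) {
	var sum [sha256.Size]byte
	data, err := io.ReadAll(io.LimitReader(r, maxUploadBytes+1))
	if err != nil {
		return nil, sum, err
	}
	if len(data) > maxUploadBytes {
		return nil, sum, errors.New("upload exceeds size limit")
	}
	return data, sha256.Sum256(data), nil
}

Note that blob.WriterOptions.ContentMD5 specifically expects an MD5 digest for provider-side integrity checks, so a SHA-256 digest would serve application-level verification rather than being a drop-in replacement.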

27-29: 🛠️ Refactor suggestion

Consider adding validation for storage configuration

The new fields storePath and connString are critical for operation but lack validation. Consider adding validation to ensure these are properly initialized.

 type DocumentRepository struct {
 	db         *ent.Client
 	storePath  string
 	connString string
 }
+
+// NewDocumentRepository validates the storage configuration before constructing the repository.
+func NewDocumentRepository(db *ent.Client, storePath, connString string) (*DocumentRepository, error) {
+	if storePath == "" || connString == "" {
+		return nil, errors.New("storage configuration cannot be empty")
+	}
+	return &DocumentRepository{
+		db:         db,
+		storePath:  storePath,
+		connString: connString,
+	}, nil
+}

Committable suggestion skipped: line range outside the PR's diff.

backend/app/api/main.go (2)

204-207: 🛠️ Refactor suggestion

Implement storage provider health checks

With multiple storage providers, add health checks to verify:

  • Storage provider connectivity
  • Proper authentication
  • Required permissions

Example implementation:

// Add to the runner.AddFunc calls
runner.AddPlugin(NewTask("storage-health-check", time.Duration(5)*time.Minute, func(ctx context.Context) {
    if err := app.services.Storage.HealthCheck(ctx); err != nil {
        log.Error().Err(err).Msg("storage health check failed")
    }
}))

204-204: 💡 Codebase verification

⚠️ Potential issue

Storage connection string requires additional security measures

Based on the verification results, the connection string handling needs security improvements:

  1. The connection string is stored in plain text in the configuration (conf_database.go)
  2. No validation or sanitization is performed before passing to repo.New()
  3. The default value file://./.data suggests local file system usage, but the PR adds cloud provider support

Recommended security measures:

  • Add connection string format validation before passing to repo.New()
  • Implement secure credential management (e.g., environment variables, secret management service)
  • Add sanitization to prevent injection attacks in cloud provider URLs
  • Consider encrypting the connection string in memory
  • Add audit logging for storage operations with sensitive data redaction
🔗 Analysis chain

Enhance security for storage connection string handling

The addition of cfg.Storage.ConnString introduces potential security risks when handling cloud storage credentials. Consider implementing the following security measures:

  1. Validate the connection string format before passing it to repo.New()
  2. Ensure the connection string is not logged in plain text
  3. Clear sensitive data from memory after use

Let's verify the current handling of the connection string:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for potential credential logging
rg -i "connstring|connection.*string" -A 3 -B 3

# Check for secure credential handling patterns
ast-grep --pattern 'log.$$$($_, $connString)'

Length of output: 17287


Script:

#!/bin/bash
# Check for connection string handling in repo package
rg -i "connstring|connection.*string" -A 3 -B 3 backend/internal/repo/

# Check for storage provider initialization and credential handling
ast-grep --pattern 'New($$$)'

# Look for any logging of connection strings
rg -i "log.*connstring|log.*connection.*string" -A 3 -B 3

Length of output: 2160


Script:

#!/bin/bash
# Check storage initialization in repo package
rg -p "repo/.*\.go$" --files | xargs cat

# Check for any credential handling patterns
ast-grep --pattern 'cfg.Storage.ConnString'

# Look for storage provider initialization
ast-grep --pattern 'type Storage struct {
  $$$
}'

Length of output: 721

backend/app/api/handlers/v1/v1_ctrl_items_attachments.go (2)

167-171: 🛠️ Refactor suggestion

Externalize the bucket URL to support multiple storage providers securely

Currently, the bucket URL "file://./.data/" is hardcoded, which limits flexibility and may pose security risks. To fully support multiple storage providers like S3, Google Cloud, and Azure Blobs, consider externalizing this configuration. Using environment variables or a configuration file allows for dynamic selection of storage backends and prevents exposure of internal paths.


172-176: ⚠️ Potential issue

Close the file reader to prevent resource leaks

The file reader obtained from bucket.NewReader should be closed after use to release resources properly.

Apply this diff to fix the issue:

file, err := bucket.NewReader(ctx, doc.Path, nil)
if err != nil {
	log.Err(err).Msg("failed to open file")
	return validate.NewRequestError(err, http.StatusInternalServerError)
}
+defer file.Close()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		file, err := bucket.NewReader(ctx, doc.Path, nil)
		if err != nil {
			log.Err(err).Msg("failed to open file")
			return validate.NewRequestError(err, http.StatusInternalServerError)
		}
		defer file.Close()


@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (4)
backend/internal/data/repo/repo_documents_test.go (1)

50-52: Security: Ensure proper connection string handling in production code

While using a simple file:// prefix works for local testing, ensure that the production code:

  1. Validates and sanitizes connection strings
  2. Uses environment variables or secure secrets management for cloud credentials
  3. Implements proper access controls and encryption for cloud storage
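
One way to exercise the cloud code paths without real credentials is gocloud.dev's in-memory driver; a minimal sketch (package and test names are illustrative):

package repo_test

import (
	"context"
	"testing"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/memblob" // registers the mem:// driver
)

func TestBucketRoundTrip(t *testing.T) {
	ctx := context.Background()

	bucket, err := blob.OpenBucket(ctx, "mem://")
	if err != nil {
		t.Fatal(err)
	}
	defer bucket.Close()

	if err := bucket.WriteAll(ctx, "documents/doc.txt", []byte("hello"), nil); err != nil {
		t.Fatal(err)
	}
	got, err := bucket.ReadAll(ctx, "documents/doc.txt")
	if err != nil || string(got) != "hello" {
		t.Fatalf("round trip failed: err=%v, got=%q", err, got)
	}
}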
backend/internal/core/services/service_items_attachments.go (3)

Line range hint 44-67: Implement security measures for file uploads

The file upload process needs several security enhancements:

  1. File size limits to prevent DoS attacks
  2. Content type validation
  3. Malware scanning
  4. Rate limiting per user

Additionally, consider implementing the cloud storage interface here instead of relying on local storage.

 // Create the document
+// Add security checks
+if err := validateFileSize(file); err != nil {
+    return repo.ItemOut{}, err
+}
+if err := validateContentType(filename); err != nil {
+    return repo.ItemOut{}, err
+}
 doc, err := svc.repo.Docs.Create(ctx, ctx.GID, repo.DocumentCreate{Title: filename, Content: file})

Line range hint 1-89: Architectural Recommendations for Multi-Storage Support

To properly support multiple storage providers as per PR objectives, this service needs significant architectural changes:

  1. Introduce a storage provider interface:
type StorageProvider interface {
    Get(ctx context.Context, path string) (io.Reader, error)
    Put(ctx context.Context, path string, content io.Reader) error
    Delete(ctx context.Context, path string) error
}
  2. Inject the storage provider into the service:
type ItemService struct {
    repo *repo.Repository
    storage StorageProvider
}
  3. Security enhancements needed:
    • Secure credential management for cloud providers
    • Input validation and sanitization
    • Access control checks
    • Rate limiting
    • Audit logging for file operations

Would you like me to provide a detailed implementation plan for these changes?


Critical: Implement cloud storage abstraction for secure document handling

The codebase shows a tightly coupled document storage system with direct filesystem operations. Based on the analysis of DocumentRepository and related types, here are the critical security and architectural recommendations:

  1. Create a storage interface in a new package (e.g., backend/internal/storage):
type StorageProvider interface {
    Delete(ctx context.Context, path string) error
    Store(ctx context.Context, path string, data []byte) error
    // Add other necessary methods
}
  2. Security concerns to address:
  • 🔒 Implement path sanitization to prevent directory traversal attacks
  • 🔒 Add content validation before storage operations
  • 🔒 Implement proper access control checks
  • 🔒 Use signed URLs for temporary access when needed
  3. Replace the direct os.Remove call with the storage provider in service_items_attachments.go:
-err = os.Remove(attachments.Edges.Document.Path)
+err = svc.storageProvider.Delete(ctx, attachments.Edges.Document.Path)

This change is critical for both security and cloud readiness. The current implementation exposes the system to potential security vulnerabilities and doesn't support cloud storage providers.

🔗 Analysis chain

Line range hint 75-89: Critical: Replace direct filesystem operations with cloud storage interface

The use of os.Remove directly contradicts the PR objective of supporting multiple storage providers. This needs to be replaced with the cloud storage interface. Additionally, there are several security concerns:

  1. Path traversal vulnerability due to lack of path validation
  2. Race conditions between existence check and deletion
  3. No abstraction layer for storage operations

Replace the direct filesystem operation with the cloud storage interface:

-err = os.Remove(attachments.Edges.Document.Path)
+// Use cloud storage interface
+err = svc.storageProvider.Delete(ctx, attachments.Edges.Document.Path)

Let me analyze the storage architecture to provide accurate recommendations for transitioning from direct filesystem operations.


I need to understand the current document handling architecture to provide accurate recommendations for the storage transition.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for other direct filesystem operations that need to be replaced
rg -g '*.go' 'os\.(Remove|Create|Open|Write|Read)' -A 2 -B 2

Length of output: 3352


Script:

#!/bin/bash
# Let's check if there's a cloud storage interface or provider already defined
rg -g '*.go' -B 3 -A 3 'interface.*Storage|type.*Storage'

# Also check for any existing cloud storage implementations
rg -g '*.go' -B 3 -A 3 '(struct|interface).*Provider'

Length of output: 646


Script:

#!/bin/bash
# Let's check for any document or file handling related interfaces/types
rg -g '*.go' -B 3 -A 3 'type.*(Document|File|Upload)'

# Also check for any existing file operations in the attachments service
ast-grep --pattern 'func ($svc *ItemsAttachmentsService) $$$'

Length of output: 9838

🛑 Comments failed to post (3)
backend/internal/core/services/main_test.go (1)

64-64: 💡 Codebase verification

Security: The hardcoded path is safe, but cloud storage testing needs improvement

The verification reveals:

  1. The "/homebox" path in repo.New() is used consistently across the codebase as a bucket prefix, not a filesystem path.
  2. Cloud storage support is properly implemented using gocloud.dev/blob with multiple providers (Azure, GCS, S3, File).
  3. However, test coverage for cloud storage is limited to local file storage only.

Security recommendations:

  • Add test cases using gocloud.dev/blob/memblob to verify multi-provider functionality
  • Consider adding integration tests for cloud providers in CI using mock credentials
  • Document the storage URL format and security considerations in the codebase
🔗 Analysis chain

Security: Avoid hardcoded paths and add test coverage for cloud storage

  1. The hardcoded "/homebox" path should be configurable to prevent path traversal risks and improve flexibility.
  2. Consider adding test cases for cloud storage providers (S3, Azure, GCP) to ensure the new multi-storage functionality is properly tested.

Let's verify the usage of hardcoded paths across the codebase:

Consider these improvements:

  1. Move the "/homebox" path to a test configuration variable
  2. Add test cases using the gocloud.dev/blob/memblob package to simulate cloud storage providers without requiring actual cloud credentials in tests

The previous search revealed the presence of cloud storage integration, but we need to analyze the actual implementation and test coverage. This will help us verify if the hardcoded path is a security concern and if cloud storage testing is adequate.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for hardcoded "/homebox" paths that might need to be made configurable
rg -l "/homebox"

# Search for other test files to see if cloud storage providers are tested
fd -e go -x rg -l "gocloud.dev/blob"

Length of output: 8033


Script:

#!/bin/bash
# Let's check how "/homebox" is used in the repo implementation and tests
rg -A 5 "/homebox" backend/internal/data/repo/

# Check for cloud storage URL patterns and tests
rg -A 5 "blob.OpenBucket|gocloud.dev/blob" backend/

Length of output: 16811

backend/internal/core/services/service_items_attachments.go (2)

26-35: ⚠️ Potential issue

Validate document title input

The title update operation should validate and sanitize the input to prevent injection attacks or malicious filenames.

 attDoc := attachments.Edges.Document
+// Add title validation
+if !isValidDocumentTitle(data.Title) {
+    return repo.ItemOut{}, fmt.Errorf("invalid document title")
+}
 _, err = svc.repo.Docs.Rename(ctx, attDoc.ID, data.Title)

Committable suggestion skipped: line range outside the PR's diff.


16-21: 💡 Codebase verification

⚠️ Potential issue

🔒 Add group-based access control to prevent unauthorized document access

The current implementation of AttachmentPath retrieves documents without verifying if they belong to the user's group. This could lead to unauthorized access across group boundaries. The repository layer shows that attachments are linked to items and documents, but the service layer isn't enforcing group-based access control.

Required security changes:

  • Add group ID parameter to AttachmentPath method
  • Verify attachment belongs to the specified group using HasGroupWith predicate
  • Add middleware to validate group membership from the authentication context
🔗 Analysis chain

Add access control and path validation

The function should validate that the requesting user has permission to access the document before returning the path. Additionally, the returned path should be sanitized to prevent potential path traversal attacks.


Let me analyze the security patterns before making the final recommendation.


Let me analyze the authentication and authorization patterns before making the final security recommendation.


Let me analyze the service structure and authentication flow before making the final security assessment.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for other instances where document paths are returned without validation
rg -g '*.go' 'return.*\.Path' -A 2 -B 2

Length of output: 608


Script:

#!/bin/bash
# Check for access control patterns in the service layer
ast-grep --pattern 'func ($service *ItemsAttachmentsService) AttachmentPath($ctx context.Context, $id int) ($_, error) {
  $$$
}'

# Check for any permission checks in the repository layer
rg -g '*.go' 'func.*Get.*context.Context.*\).*error' -A 5

# Look for any security middleware or auth checks
rg -g '*.go' '(middleware|auth|permission|access).*Check' -A 3

Length of output: 23539


Script:

#!/bin/bash
# Check for group-based access control in the attachment service
ast-grep --pattern 'func ($service *ItemsAttachmentsService) $method($ctx context.Context, $$$) ($_, error) {
  $$$
  HasGroupWith($$$)
  $$$
}'

# Look for middleware implementations
rg -g '*.go' 'func.*middleware' -A 5

# Check for auth token validation
rg -g '*.go' 'token.*valid' -A 3

Length of output: 640


Script:

#!/bin/bash
# Check for attachment service implementation
rg -g '*.go' 'type.*ItemsAttachmentsService.*struct' -A 10

# Look for group-based access checks in the repository layer
rg -g '*.go' 'func.*Attachment.*Get.*Context' -B 5 -A 10

# Check for auth middleware in router setup
rg -g '*.go' 'router.*middleware' -A 5

Length of output: 2734


@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 6

🧹 Outside diff range and nitpick comments (1)
backend/app/api/handlers/v1/v1_ctrl_items_attachments.go (1)

5-5: Security Considerations for Cloud Storage Integration

While the gocloud.dev/blob package provides a unified interface for multiple storage providers, we should ensure proper security measures:

  1. Implement bucket access controls
  2. Use signed URLs for temporary access
  3. Configure proper CORS policies
  4. Enable server-side encryption

Would you like me to provide specific implementation examples for these security measures?
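
For the signed-URL point specifically, a hedged fragment (assumes the handler's existing ctx, bucket, doc, w, and r are in scope; note that not every driver supports signing, so a fallback is kept):

signed, err := bucket.SignedURL(ctx, doc.Path, &blob.SignedURLOptions{
	Expiry: 15 * time.Minute, // short-lived access
	Method: "GET",
})
if err != nil {
	// Fall back to streaming through the server when the driver cannot sign URLs.
	log.Err(err).Msg("signed URL unavailable, serving content directly")
} else {
	http.Redirect(w, r, signed, http.StatusTemporaryRedirect)
	return nil
}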

🛑 Comments failed to post (6)
backend/internal/data/repo/repo_documents.go (4)

62-64: ⚠️ Potential issue

Consider Access Control for Connection String

The GetConnString method could potentially expose sensitive credentials. Consider:

  1. Adding access control or authentication checks
  2. Returning only non-sensitive parts of the connection string
  3. Adding logging for access attempts
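
A sketch of one mitigation for the exposure concern (assumes URL-style connection strings; the helper name is illustrative):

import "net/url"

// redactConnString strips credential-bearing parts so the value is safe to log or return.
func redactConnString(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return "<unparseable connection string>"
	}
	u.User = nil    // drop any user:password component
	u.RawQuery = "" // query parameters often carry keys or tokens
	return u.String()
}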

133-154: ⚠️ Potential issue

Address Potential Race Condition in Delete Method

The current implementation has several concerns:

  1. Race condition between Get and Delete operations
  2. Similar to Create method, defer placement needs adjustment
  3. No verification of successful deletion

Consider implementing optimistic locking or an atomic delete. Also, apply the same defer pattern suggested for the Create method.

 bucket, err := blob.OpenBucket(context.Background(), r.connString)
 if err != nil {
   log.Err(err).Msg("failed to open bucket")
   return err
 }
+defer func() {
+  if err := bucket.Close(); err != nil {
+    log.Err(err).Msg("failed to close bucket")
+  }
+}()

 doc, err := r.db.Document.Get(ctx, id)
 if err != nil {
   return err
 }
+
+// Consider adding existence check before deletion
+exists, err := bucket.Exists(ctx, doc.Path)
+if err != nil {
+  return err
+}
+if !exists {
+  log.Warn().Msg("document already deleted from storage")
+}

Committable suggestion skipped: line range outside the PR's diff.


86-116: 🛠️ Refactor suggestion

Enhance Error Handling and Security in Create Method

While the implementation is good, consider these improvements:

  1. The defer bucket.Close() should be placed immediately after successful bucket opening
  2. Consider implementing retry logic for transient failures
  3. Add context timeout for bucket operations
  4. Consider implementing upload size limits

Apply this diff:

 bucket, err := blob.OpenBucket(context.Background(), r.connString)
 if err != nil {
   log.Err(err).Msg("failed to open bucket")
   return DocumentOut{}, err
 }
+defer func() {
+  if err := bucket.Close(); err != nil {
+    log.Err(err).Msg("failed to close bucket")
+  }
+}()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	bucket, err := blob.OpenBucket(context.Background(), r.connString)
	if err != nil {
		log.Err(err).Msg("failed to open bucket")
		return DocumentOut{}, err
	}
	defer func() {
		if err := bucket.Close(); err != nil {
			log.Err(err).Msg("failed to close bucket")
		}
	}()

	fileData, err := io.ReadAll(doc.Content)
	if err != nil {
		log.Err(err).Msg("failed to read all from content")
		return DocumentOut{}, err
	}

	hash := md5.New()
	hash.Write(fileData)
	options := &blob.WriterOptions{
		ContentType:                 "application/octet-stream",
		DisableContentTypeDetection: false,
		ContentMD5:                  hash.Sum(nil),
	}

	err = bucket.WriteAll(ctx, path, fileData, options)
	if err != nil {
		log.Err(err).Msg("failed to write all to bucket")
		return DocumentOut{}, err
	}

5-5: 💡 Codebase verification

🔒 Security Review: Connection String Handling Needs Improvement

The current implementation has several security concerns that should be addressed:

  • The connection string is stored in plain text in the YAML configuration without any validation or sanitization
  • No URL validation or allowlist mechanism exists for cloud provider URLs
  • Error logs expose the full connection string when bucket operations fail
  • No connection string rotation mechanism is implemented

Required security improvements:

  • Add URL validation for the connection string before using it with blob.OpenBucket
  • Implement cloud provider URL allowlisting
  • Mask sensitive parts of connection string in error logs
  • Consider implementing connection string rotation via environment variables
🔗 Analysis chain

Security Considerations for Cloud Storage Implementation

While the cloud storage integration looks good, here are some security recommendations:

  1. Ensure the connString contains no sensitive credentials in logs
  2. Consider implementing connection string rotation mechanism
  3. Validate cloud provider URLs against allowlist
  4. Consider implementing bucket name validation

Let's verify the logging implementation:

Also applies to: 7-7, 17-21, 27-29


Let me verify the logging implementation further to check for any sensitive data exposure:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for potential credential logging
rg -i "connString|connection.*string" -A 3 -B 3

Length of output: 5330


Script:

#!/bin/bash
# Check for logging related to connection strings
rg -i "log.*connString|zerolog.*connString" -A 3 -B 3

# Check configuration loading
rg -i "yaml.*conn_string|env.*conn_string" -A 3 -B 3

# Check if there's any URL parsing or validation
rg -i "parse.*url|validate.*url|allowlist|safelist" -A 3 -B 3

Length of output: 5710

backend/app/api/handlers/v1/v1_ctrl_items_attachments.go (2)

76-76: ⚠️ Potential issue

Enhance File Type Security Validation

While adding support for .avif and .ico formats, consider implementing additional security measures:

  1. Validate file content type using magic numbers
  2. Implement file size limits
  3. Scan for malware

Here's a suggested implementation:

+ func validateFileType(file multipart.File) error {
+     buffer := make([]byte, 512)
+     n, err := file.Read(buffer)
+     if err != nil && err != io.EOF { // files smaller than 512 bytes return io.EOF alongside data
+         return err
+     }
+     contentType := http.DetectContentType(buffer[:n])
+     
+     allowedTypes := map[string]bool{
+         "image/jpeg": true,
+         "image/png":  true,
+         "image/webp": true,
+         "image/gif":  true,
+         "image/bmp":  true,
+         "image/avif": true,
+         "image/x-icon": true,
+     }
+     
+     if !allowedTypes[contentType] {
+         return errors.New("invalid file type")
+     }
+     
+     // Reset the file pointer
+     _, err = file.Seek(0, 0)
+     return err
+ }

Committable suggestion skipped: line range outside the PR's diff.


167-189: 🛠️ Refactor suggestion

Improve Resource Management and Security Headers

The current implementation has several areas for improvement:

  1. Resource Management:

    • Consider using a single defer statement
    • Add context timeout
  2. Security Headers:

    • Add Content-Security-Policy
    • Set X-Content-Type-Options
    • Configure Cache-Control

Here's a suggested refactor:

+ ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
+ defer cancel()

  bucket, err := blob.OpenBucket(ctx, ctrl.repo.Docs.GetConnString())
  if err != nil {
      log.Err(err).Msg("failed to open bucket")
      return validate.NewRequestError(err, http.StatusInternalServerError)
  }
+ defer bucket.Close()

  file, err := bucket.NewReader(ctx, doc.Path, nil)
  if err != nil {
      log.Err(err).Msg("failed to open file")
      return validate.NewRequestError(err, http.StatusInternalServerError)
  }
+ defer file.Close()

+ w.Header().Set("Content-Security-Policy", "default-src 'self'")
+ w.Header().Set("X-Content-Type-Options", "nosniff")
+ w.Header().Set("Cache-Control", "no-store")

  http.ServeContent(w, r, doc.Title, doc.CreatedAt, file)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
		defer cancel()

		bucket, err := blob.OpenBucket(ctx, ctrl.repo.Docs.GetConnString())
		if err != nil {
			log.Err(err).Msg("failed to open bucket")
			return validate.NewRequestError(err, http.StatusInternalServerError)
		}
		defer bucket.Close()

		file, err := bucket.NewReader(ctx, doc.Path, nil)
		if err != nil {
			log.Err(err).Msg("failed to open file")
			return validate.NewRequestError(err, http.StatusInternalServerError)
		}
		defer file.Close()

		w.Header().Set("Content-Security-Policy", "default-src 'self'")
		w.Header().Set("X-Content-Type-Options", "nosniff")
		w.Header().Set("Cache-Control", "no-store")

		http.ServeContent(w, r, doc.Title, doc.CreatedAt, file)


cloudflare-workers-and-pages bot commented Dec 28, 2024

Deploying homebox-docs with Cloudflare Pages

Latest commit: 8350170
Status: ✅  Deploy successful!
Preview URL: https://4467ad33.homebox-docs.pages.dev
Branch Preview URL: https://mk-vnext-storage.homebox-docs.pages.dev
