
my platform-go-challenge implementation #25

Closed
wants to merge 2 commits

Conversation


@d1gt d1gt commented Jan 12, 2025

This is my implementation of the challenge
more info on how to run the app in the readme


lunemec commented Jan 13, 2025

Hello @d1gt ,
thank you for your submission.

My first question on your submission is: what will happen if I take the code, deploy it to production, and have a few concurrent users using it? Can you describe what will happen, in as much detail as you can think of?

Thank you

Author

d1gt commented Jan 14, 2025


Hello @lunemec ,

When you deploy this service to production with concurrent users, the HTTP server will spawn a new goroutine for each connection and the requests will flow through the middleware chain:

  • Recovery wraps everything to catch panics across all requests and ensures one bad request won't bring down the server
  • Rate Limiter checks each IP (using shared mutex-protected map)
  • Context adds 15s timeout and request ID
  • Logging tracks timing and details
  • Auth validates JWT and extracts user ID
  • Finally hits the handler for business logic

The key point about concurrent requests is that while they all flow through this chain independently in their own goroutines, they share two critical resources: the rate limiter's map and the SQLite database connection.

The SQLite database becomes the main bottleneck because it can only handle one write at a time. When multiple users try to add/remove favorites simultaneously, write operations get serialized - they have to wait in line. While read operations can happen concurrently, they get blocked during writes, limiting throughput to about 50-100 writes/second under load.

The rate limiter introduces additional concerns: under high concurrency there is noticeable delay from mutex contention, and the limiter map grows indefinitely (about 100 bytes per IP) since there is no cleanup mechanism, potentially causing memory issues at scale.

This implementation would work for moderate traffic but needs improvements for high-scale production use.

Here's a visualization of the request flow

flowchart TD
    R1[N Requests] --> S[HTTP Server]

    S --> |new goroutine| P1[Processing]

    subgraph "Middleware Chain"
        direction TB
        P1 --> REC1{Recovery}

        REC1 --> |panic| E500[500 Internal Error]
        REC1 --> RL1{Rate Limiter}

        RL1 --> |20 req/min| E429[429 Too Many Requests]
        RL1 --> CTX1{Context}

        CTX1 --> |15s| E504[504 Gateway Timeout]
        CTX1 --> LOG1[Logging]

        LOG1 --> AUTH1{JWT Auth}

        AUTH1 --> |invalid token| E401[401 Unauthorized]
        AUTH1 --> ROUTE1[Mux Router]

    end

    ROUTE1 --> H1[Handler]

    H1 --> DB[(SQLite)]

    DB --> |success| OK[200 OK]
    DB --> |created| CREATED[201 Created]
    DB --> |not found| E404[404 Not Found]
    DB --> |conflict| E409[409 Conflict]


    subgraph "Shared Resources / Bottlenecks"
        direction TB
        SR1[Rate Limiter State]
        SR2[SQLite Connection]
    end

    style S fill:#4CAF50,color:#fff
    style DB fill:#FF9800,color:#fff

    style E500 fill:#FF5252,color:#fff
    style E429 fill:#FF5252,color:#fff
    style E504 fill:#FF5252,color:#fff
    style E401 fill:#FF5252,color:#fff
    style E404 fill:#FF5252,color:#fff
    style E409 fill:#FF5252,color:#fff
    style OK fill:#4CAF50,color:#fff
    style CREATED fill:#4CAF50,color:#fff

    classDef processNode fill:#E3F2FD,stroke:#1565C0
    class P1,ROUTE1,H1 processNode


@alexandros-georgantas alexandros-georgantas left a comment


Hello @d1gt ,

First of all, thank you for taking the time to implement this challenge!

My overall impression is that you did a good job implementing the requirements, structuring your code, and providing sufficient instructions.

I've left some comments; I would really appreciate it if you could find some time to answer them.

Thanks

@@ -0,0 +1,80 @@
include .env.test


Could you please describe what will happen if this file does not exist?

Author


The curl requests in the Makefile use the JWT_TOKEN env variable; if we do not include this file, the Makefile won't be able to load the token.

SQLITE_DB := app.db

docker-run: docker-build
docker rm -f $(DOCKER_CONTAINER_NAME) || true


Given the scope of this task, could you please explain why you have included the build of the container as part of your docker-run sequence? Take into consideration that you have a separate command for building the container, and that you have provided instructions in your README file. Is it necessary to remove and rebuild the container each time someone executes docker-run?

Author


During the development of the task I wanted to run both build and run with one command to make sure I didn't break anything. I should've removed it but forgot about it 😅

@echo "$(1) set"
endef

use-user1:


Could you please explain why you’ve decided not to implement a fairly simple users table and instead went with the flow of generating JWTs and passing them as env values to represent your users?

Author


At the time I mistakenly thought it would be much simpler to use and test this way; it also saved time.

```
4. Remove favorite
```bash
make api-remove ASSET_ID=<id>


When I tried to test that I got a not-found error, so I am assuming this is a typo here and that instead of ASSET_ID you meant FAVORITE_ID, right?

Author


The ASSET_ID parameter is correct: the favorites table uses ASSET_ID as a foreign key referencing the assets table.
The correct flow is to first list your favorites by calling make api-list,
and then remove one by calling make api-remove ASSET_ID=02f25919-a2a3-4e33-a742-13919d660b45


func (h *Handlers) RegisterRoutes(r *mux.Router) {
// Favorites endpoints
r.HandleFunc("/favorites", h.ListFavorites).Methods(http.MethodGet)


Could you suggest an alternative way of structuring your favorites endpoints?

middleware.RateLimitMiddleware(20),
middleware.ContextMiddleware(ctx),
middleware.LoggingMiddleware(),
middleware.AuthMiddleware(),


Could you please describe why you've decided to chain all your middlewares together? Could this become a bit limiting in some cases, e.g. what will happen in the case that you want to support public endpoints that do not require authentication?
P.S. I saw that you've added a condition in your authentication layer for bypassing Swagger; however, is that scalable?
Could you suggest any alternatives to your implementation?

Author


For the scope of this task I believe it's not limiting at all.
But in a production environment it would be limiting. As an alternative, I would keep the middleware as modular as it is now, but each endpoint would have its own chain according to its needs.

// @Produce json
// @Param page query int false "Page number"
// @Param limit query int false "Items per page"
// @Success 200 {object} types.APIResponse


After checking the generated Swagger documentation (hosted by your server) I think that types.APIResponse is not that meaningful in terms of helping a developer to understand the response data. Could you investigate why this is the case? Is there anything that you would have done differently?


@lkslts64 lkslts64 left a comment


Hey @d1gt, thanks for submitting your solution.

You did a really great job implementing the core deliverable, and you've also added many nice-to-haves like unit tests, HTTP middlewares, a Makefile, etc. 👏

I've also added some questions below. I would appreciate it if you could take some time to look into them. Thanks

return &sqliteRepository{db: db}, nil
}

func initSchema(db *sql.DB) error {


Ideally, how would you handle database schema migrations?

errChan := make(chan error, 1)
var wg sync.WaitGroup

// Asset generation workers


Do you consider asset generation a task worth parallelizing?

"platform-go-challenge/internal/service/types"
"strings"

"github.com/mattn/go-sqlite3"


What's the reason behind choosing SQLite? Can you elaborate a bit please?

Author


For the scope of this task I needed a relational database but didn't want to deploy Postgres or MSSQL, so I went with SQLite.
In a production environment I would go with something that scales and can handle concurrent writes.

}

type Chart struct {
BaseAsset


I really like struct embedding. Nice one!

type Favorite struct {
UserID string `json:"user_id"`
AssetID string `json:"asset_id"`
Type string `json:"type"`


Do you actually need the Type information in the Favorites table? How could you retrieve the asset's type alternatively?

@d1gt d1gt closed this Jan 31, 2025