add caching for GET /rounds/current #143

Closed · wants to merge 1 commit
7 changes: 4 additions & 3 deletions in index.js

@@ -141,9 +141,10 @@ const getMeasurement = async (req, res, client, measurementId) => {
 const getRoundDetails = async (req, res, client, getCurrentRound, roundParam) => {
   const roundNumber = await parseRoundNumberOrCurrent(getCurrentRound, roundParam)

-  if (roundParam === 'current') {
-    res.setHeader('cache-control', 'no-store')
-  }
+  res.setHeader(
+    'cache-control',
+    `max-age=${roundParam === 'current' ? 60 : 31536000}`
Member Author commented:
I thought about a dynamic setting for the current round, which would ensure the cache lifetime is never longer than the time left until the expected round end. What do you think, @bajtos?
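A dynamic setting like the one proposed could look roughly like this. This is only a sketch: `dynamicMaxAge`, `roundEndTimestampMs`, and the 60-second cap are illustrative assumptions, not part of spark-api.

```javascript
// Sketch of a dynamic cache lifetime for the current round (hypothetical
// helper, not part of spark-api). roundEndTimestampMs is assumed to be
// derived elsewhere from chain state, e.g. currentRoundEndBlockNumber.
const dynamicMaxAge = (roundEndTimestampMs, nowMs = Date.now()) => {
  const secondsLeft = Math.floor((roundEndTimestampMs - nowMs) / 1000)
  // Never cache past the expected round end, cap at 60s, and keep at
  // least 1s so bursts of requests can still coalesce.
  return Math.max(1, Math.min(60, secondsLeft))
}

// Usage in getRoundDetails would then be along the lines of:
// res.setHeader('cache-control', `max-age=${dynamicMaxAge(roundEndTimestampMs)}`)
```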

Member commented:

This is a good idea.

In #147, I reworked the "get tasks for the current round" logic so that we no longer need a dynamic max-age.

However, we should eventually move the logic of determining the current Meridian round index from spark-api to spark-checker. It could be useful for spark-checker nodes to avoid starting any new work (retrievals) if the current round is coming to an end.

By the time we finish the retrieval, submit the measurement to spark-api, and spark-publisher commits the measurement to Meridian, the new round will have already started, and our measurement will be evaluated as invalid (measuring a task that does not belong to this round).

I think we should make the check robust to handle changes in round lengths. Ideally, the smart contract should indicate when the current round is expected to end. I think we already have this value in Meridian state - the public field currentRoundEndBlockNumber.

I am proposing to open a new GH issue to track this idea. WDYT?
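The round-boundary check described above could be consumed along these lines. `blocksUntilRoundEnd` is a hypothetical helper; the injected provider and contract objects are assumed to expose `getBlockNumber()` and the public `currentRoundEndBlockNumber` view (e.g. via an ethers Contract), which this PR does not itself verify.

```javascript
// Hypothetical helper: how many blocks remain in the current round.
// provider and meridian are injected so the logic is testable with stubs;
// in production they would be an RPC provider and the Meridian contract.
async function blocksUntilRoundEnd (provider, meridian) {
  const [endBlock, currentBlock] = await Promise.all([
    meridian.currentRoundEndBlockNumber(), // public view on the contract
    provider.getBlockNumber()
  ])
  return Number(endBlock) - Number(currentBlock)
}

// A checker node could then skip new retrievals near the round boundary:
// if (await blocksUntilRoundEnd(provider, meridian) < SAFETY_MARGIN_BLOCKS) { ... }
```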

+  )

await replyWithDetailsForRoundNumber(res, client, roundNumber)
}