
Commit

Merge pull request #247 from blockfrost/chore/health-check-interval
chore: control metric/healthcheck interval with env METRICS_COLLECTOR…
slowbackspace authored Oct 25, 2024
2 parents 6bc48a1 + 334f0e5 commit 2754777
Showing 3 changed files with 10 additions and 5 deletions.
README.md (1 change: 1 addition & 0 deletions)
@@ -51,6 +51,7 @@ Optional configuration:
 
 - `BLOCKFROST_BACKEND_URL` URL pointing to your own backend (blockfrost-backend-ryo) if you prefer not to use the public Blockfrost API
 - `BLOCKFROST_BLOCK_LISTEN_INTERVAL` how often should be the server pulling the backend for new data (in milliseconds, default `5000`)
+- `METRICS_COLLECTOR_INTERVAL_MS` frequency for refreshing metrics and performing health check (default `10000`)
 - `PORT` which port the server should listen to (default `3005`)
 
 Once your server has started, you can connect to it.
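The new option is read once at startup by the change to src/constants/config.ts below. A minimal TypeScript sketch of how a few settings would resolve, using only the parsing expression from that change; the resolveInterval helper here is purely illustrative:

const resolveInterval = (raw?: string): number => Number(raw ?? 10_000);

console.log(resolveInterval(undefined)); // 10000 - default when the variable is unset
console.log(resolveInterval('30000'));   // 30000 - refresh metrics and run the health check every 30 s
console.log(resolveInterval('0'));       // 0     - periodic collection is skipped (see src/utils/prometheus.ts below)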
src/constants/config.ts (4 changes: 3 additions & 1 deletion)
@@ -22,7 +22,9 @@ export const FIAT_RATES_ENABLE_ON_TESTNET = false;
 export const BLOCKFROST_REQUEST_CONCURRENCY = 500;
 
 // How often should metrics be updated
-export const METRICS_COLLECTOR_INTERVAL_MS = 10_000;
+export const METRICS_COLLECTOR_INTERVAL_MS = Number(
+  process.env.METRICS_COLLECTOR_INTERVAL_MS ?? 10_000,
+);
 
 // If healthcheck repeatedly fails for duration longer than this constant the process exits
 export const HEALTHCHECK_FAIL_THRESHOLD_MS = 6 * METRICS_COLLECTOR_INTERVAL_MS; // 6 health checks
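Because HEALTHCHECK_FAIL_THRESHOLD_MS is derived as six collector intervals, tuning METRICS_COLLECTOR_INTERVAL_MS also scales how long the healthcheck may keep failing before the process exits. A small worked sketch of that arithmetic (the failThresholdMs helper is hypothetical; the formula is the one in the diff above):

const failThresholdMs = (intervalMs: number): number => 6 * intervalMs;

console.log(failThresholdMs(10_000)); // 60000  - default: exit after roughly a minute of failing health checks
console.log(failThresholdMs(30_000)); // 180000 - a longer interval also tolerates longer outages
console.log(failThresholdMs(0));      // 0      - the exit path is skipped entirely (see the guard added in src/utils/prometheus.ts)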
src/utils/prometheus.ts (10 changes: 6 additions & 4 deletions)
@@ -42,7 +42,7 @@ export class MetricsCollector {
     if (this.healthCheckFailingSince) {
       const failDurationMs = Date.now() - this.healthCheckFailingSince;
 
-      if (failDurationMs > HEALTHCHECK_FAIL_THRESHOLD_MS) {
+      if (HEALTHCHECK_FAIL_THRESHOLD_MS > 0 && failDurationMs > HEALTHCHECK_FAIL_THRESHOLD_MS) {
         logger.error(
           `Healthcheck failing for longer than ${HEALTHCHECK_FAIL_THRESHOLD_MS} ms. Exiting process.`,
         );
@@ -79,10 +79,12 @@ export class MetricsCollector {
   };
 
   startCollector = async (interval: number) => {
     this.metrics = await this._collect();
-    this.intervalId = setInterval(async () => {
-      this.metrics = await this._collect();
-    }, interval);
+    if (interval > 0) {
+      this.intervalId = setInterval(async () => {
+        this.metrics = await this._collect();
+      }, interval);
+    }
   };
 
   stopCollector = () => {
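Taken together, the two guards make a non-positive interval behave as a one-shot collector: metrics are gathered once at startup, no timer is scheduled, and the healthcheck-failure exit never fires because the derived threshold is 0. A standalone TypeScript sketch of that pattern, not the project's actual wiring (startPeriodic and its parameters are hypothetical):

const startPeriodic = (
  collect: () => Promise<void>,
  intervalMs: number,
): ReturnType<typeof setInterval> | undefined => {
  void collect(); // always collect once on start, matching startCollector above
  if (intervalMs > 0) {
    // schedule periodic refreshes only for a positive interval
    return setInterval(() => void collect(), intervalMs);
  }
  return undefined; // intervalMs <= 0: one-shot collection, nothing to clear later
};

// Usage: const timer = startPeriodic(async () => { /* refresh metrics */ }, 10_000);
// Shutdown: if (timer) clearInterval(timer);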
