
resolves #99 pass version to the worker #103

Merged
merged 3 commits into pinojs:main on Sep 7, 2022

Conversation

ggrossetie
Contributor

resolves #99

@ggrossetie
Contributor Author

I've added an explicit timeout on string-limit-2.test.js and event.test.js because tests are running forever on GitHub actions. I thought it was related to the Node version and/or OS but I don't see any pattern. I think one of our tests is just flaky. Hopefully, with the timeout we will get a proper error.

@ggrossetie
Contributor Author

ggrossetie commented Sep 5, 2022

Still running forever 😬
In theory, the default timeout is 30 seconds: https://node-tap.org/docs/cli/

Member

@mcollina mcollina left a comment


lgtm

@mcollina mcollina requested a review from jsumners September 5, 2022 13:43
@ggrossetie
Contributor Author

Thanks for your review @mcollina

I think the timeout in .taprc is in seconds (not milliseconds) so the actual timeout is 60000000 ms. When there's a failure, you can see the following in the log:

  ---
  env: {}
  file: test/thread-management.test.js
  timeout: 60000000
  command: C:\hostedtoolcache\windows\node\16.17.0\x64\node.exe
  args:
    - test/thread-management.test.js
  stdio:
    - 0
    - pipe
    - 2
  cwd: D:\a\thread-stream\thread-stream
  exitCode: 1

I believe the value should be 60 seconds? For reference, Pino is using 480.
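For illustration, here is what a `.taprc` with a one-minute timeout could look like. This is a hypothetical config sketch, not the repository's actual file; tap's `timeout` option is expressed in seconds, and `jobs: 1` is the sequential setting mentioned later in this thread.

```yaml
# Hypothetical .taprc sketch: tap's `timeout` is in seconds,
# so a one-minute per-test timeout is 60, not 60000.
jobs: 1
timeout: 60
```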

@ggrossetie
Contributor Author

I can't find why tests are failing from time to time 😞

@mcollina
Member

mcollina commented Sep 5, 2022

They are flaky :(

Member

@jsumners jsumners left a comment


Looks good to me.

@@ -1,5 +1,6 @@
'use strict'

const { version } = require('./package.json')
Member


Cue the future issue where people transpiling everything will complain about needing the package.json....

@mcollina
Member

mcollina commented Sep 5, 2022

please adopt the same trick we use in pino and fastify for keeping the version up to date without having to load package.json.

@ggrossetie
Contributor Author

They are flaky :(

Yes, but I'm trying to understand why. It works fine when I run each test individually using node. We are also running tap sequentially with jobs: 1, so I guess it's not a race condition. In theory, we should not share anything between tests, so I'm a bit clueless. Maybe we should make sure that everything is closed/terminated using t.afterEach.
Anyway, I will investigate a bit further 😄

please adopt the same trick we use in pino and fastify for keeping the version up to date without having to load package.json.

But I don't see where sync-version.js is called? I would have called this script in a prepublishOnly task, no?

@coveralls

Pull Request Test Coverage Report for Build 2999159363

  • 0 of 1 (0.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 46.019%

Totals Coverage Status
Change from base Build 2991804988: 0%
Covered Lines: 182
Relevant Lines: 356

💛 - Coveralls

@coveralls

coveralls commented Sep 6, 2022

Pull Request Test Coverage Report for Build 2999242094

  • 1 of 1 (100.0%) changed or added relevant line in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 46.019%

Totals Coverage Status
Change from base Build 2994447695: 0%
Covered Lines: 182
Relevant Lines: 356

💛 - Coveralls

@ggrossetie ggrossetie force-pushed the issue-99-pass-ctx-info branch from f4a480a to 1e3f804 on September 6, 2022 10:00
@ggrossetie
Contributor Author

The plot thickens!
I was able to fix the flakiness with 1e3f804

Apparently, adding require('why-is-node-running') prevents an infinite loop from occurring... 🤔
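As an aside, `why-is-node-running` works by dumping the handles that keep the event loop alive. A rough built-in approximation (a different tool than the one the PR used) is `process.getActiveResourcesInfo()`, available in Node >= 17.3, which lists the resource types currently keeping the process running:

```javascript
// Open a handle so the event loop has something pending.
const timer = setInterval(() => {}, 1000)

// Lists the active resource types, e.g. including 'Timeout'
// while the interval above is still pending.
console.log(process.getActiveResourcesInfo())

clearInterval(timer) // let the process exit cleanly
```

Unlike `why-is-node-running`, this only gives resource type names, not the stack traces showing where each handle was created.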

Member

@mcollina mcollina left a comment


lgtm

@@ -3,13 +3,15 @@
const { test } = require('tap')
const { join } = require('path')
const ThreadStream = require('..')
require('why-is-node-running')
Member


I think a separate issue should be opened and worked on to investigate why this is "fixing" the test suite.

Member


I think there is some weird interaction between tap own use of async_hooks, worker_threads and code coverage.

Member


It's affecting the Pino tests as well. We get flakes over there all the time.

@mcollina mcollina merged commit 5151881 into pinojs:main Sep 7, 2022
Development

Successfully merging this pull request may close these issues.

Pass contextual information to the worker
4 participants