
Moto server #35655

Merged: 70 commits into pandas-dev:master, Aug 21, 2020
Conversation

@martindurant (Contributor) commented Aug 10, 2020

Changes the moto setup for the s3 tests from monkeypatching/mocking to server mode. This allows aiobotocore exceptions to raise correctly, which is needed for the tests to pass; the change is required by the upcoming async release of s3fs, but it also works with the old sync (botocore) version. A release of s3fs without this change would break the pandas tests.

Note: since I now need to pass storage_options to the various s3 calls in the tests to make them pass (giving the endpoint of the moto s3 server), I also needed to plumb storage_options through the excel IO, which was previously missing, and update the excel tests. (A rough sketch of the server-mode fixture idea follows the checklist below.)

  • closes #xxxx
  • tests added / passed
  • passes black pandas
  • passes git diff upstream/master -u -- "*.py" | flake8 --diff
  • whatsnew entry
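
For context, here is a minimal sketch of the server-mode idea: run moto_server as a real HTTP endpoint and point a real client at it, instead of monkeypatching botocore. The fixture names, readiness loop, and dummy credentials below are illustrative assumptions, not the exact code added in this PR.

import subprocess
import time

import boto3
import pytest

ENDPOINT = "http://127.0.0.1:5555/"


@pytest.fixture(scope="session")
def s3_server():
    # Start moto as a standalone HTTP server rather than patching botocore in-process.
    proc = subprocess.Popen(["moto_server", "s3", "-p", "5555"])
    try:
        yield ENDPOINT
    finally:
        proc.terminate()


@pytest.fixture
def s3_client(s3_server):
    # A real client talking to a real endpoint, so (aio)botocore exceptions
    # propagate exactly as they would against AWS.
    cli = boto3.client(
        "s3",
        endpoint_url=s3_server,
        aws_access_key_id="testing",
        aws_secret_access_key="testing",
    )
    # Give the freshly started server a moment to accept connections.
    timeout = 5
    while timeout > 0:
        try:
            cli.list_buckets()
            break
        except Exception:
            time.sleep(0.1)
            timeout -= 0.1
    cli.create_bucket(Bucket="pandas-test")
    yield cli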

Martin Durant added 2 commits August 18, 2020 11:51
Revert all code changes, to see if environments alone are responsible for failures
@TomAugspurger (Contributor)

@martindurant the old pyarrow seems to be coming from defaults. Perhaps only conda-forge has new enough packages for us.

So maybe make sure that all the pyarrow pins are >=0.12.0 (or whatever you need). That'll get us what we need as long as conda-forge is one of the channels in the env.

@martindurant (Contributor, Author)

Interestingly, travis-37-locale doesn't have pyarrow listed at all (but the newest version on defaults is 0.16.0)

@TomAugspurger (Contributor) left a comment

LGTM, will merge later today.

@TomAugspurger merged commit 712ffbd into pandas-dev:master Aug 21, 2020
@TomAugspurger (Contributor)

Thanks!

# OK if bucket already exists
pass
timeout = 2
while not cli.list_buckets()["Buckets"] and timeout > 0:
@alimcmaster1 (Member) commented Aug 22, 2020

Do you think bumping this timeout would solve the issue below (not sure why it would take more than 2 seconds, though):

https://dev.azure.com/pandas-dev/pandas/_build/results?buildId=40793&view=logs&j=404760ec-14d3-5d48-e580-13034792878f&t=f81e4cc8-d61a-5fb8-36be-36768e5c561a
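
For reference, a more generous sleep-based poll might look like the sketch below; the endpoint is the one used in these tests, but the 30-second budget and the client setup are illustrative, not the values pandas settled on.

import time

import boto3

# Illustrative only: wait longer for the bucket to appear on the local moto server.
cli = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5555/",
    aws_access_key_id="testing",
    aws_secret_access_key="testing",
)
timeout = 30  # assumed, more generous budget than the 2 seconds above
while not cli.list_buckets()["Buckets"] and timeout > 0:
    time.sleep(0.1)
    timeout -= 0.1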

s3 = s3fs.S3FileSystem(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})

try:
    s3.rm(bucket, recursive=True)
Member

Is there a possibility this runs into race conditions with tests running in parallel (running pytest-xdist)?

E.g. one test function attempting to create the bucket whilst another deletes it?

@martindurant (Contributor, Author)

Indeed, I do not recommend running these tests in parallel: they access the same port and the same bucket.
If you wanted to run the tests in parallel while keeping the single server process (which seems to be required on Windows), you would need to randomise the bucket being accessed and propagate that bucket name to all of the tests via the fixture output.
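
As a rough illustration of that suggestion (not code from this PR), a fixture could hand each test its own randomised bucket on the one shared server:

import uuid

import boto3
import pytest

ENDPOINT = "http://127.0.0.1:5555/"  # the single shared moto server


@pytest.fixture
def s3_bucket():
    # Hypothetical sketch: a uniquely named bucket per test, so parallel
    # pytest-xdist workers never create/delete the same bucket.
    name = f"pandas-test-{uuid.uuid4().hex}"
    cli = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id="testing",
        aws_secret_access_key="testing",
    )
    cli.create_bucket(Bucket=name)
    try:
        yield name  # tests build their s3://{name}/... paths from this
    finally:
        # Best-effort cleanup; each bucket is private to a single test.
        for obj in cli.list_objects_v2(Bucket=name).get("Contents", []):
            cli.delete_object(Bucket=name, Key=obj["Key"])
        cli.delete_bucket(Bucket=name)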

Labels: IO Network (Local or Cloud (AWS, GCS, etc.) IO Issues), Testing (pandas testing functions or related to the test suite)