Refactoring the Linux build script #99

Closed
joerick opened this issue Sep 12, 2018 · 3 comments

@joerick
Contributor

joerick commented Sep 12, 2018

Currently the Linux build works by piping a bash script into /bin/bash inside the Docker container.

A couple of good ideas have come out of the discussion in #97, so I wanted to keep them in an open issue!

  1. @altendky: maybe a good way would be a Python script with a CLI (probably small enough to do comfortably with the stdlib and 2/3 compatibility), and we pass in the info via the command line. No config file to write and copy, no template to fill out. Just copy the exact Python script over to the container and then docker run it with the proper arguments (a rough sketch of this is below).

  2. @altendky: Or, maybe we just replace the string/file with Python code calling docker run for each command?
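
For illustration, option (1) might look something like the stdlib-only script below, copied into the container and run with the right arguments. The flag names here are made up for the sketch, not anything that exists in the project:

import argparse
import subprocess
import sys

def main():
    # hypothetical CLI; the flag names are illustrative only
    parser = argparse.ArgumentParser(description='build wheels inside the manylinux container')
    parser.add_argument('--project-dir', required=True)
    parser.add_argument('--output-dir', required=True)
    parser.add_argument('--python', action='append', default=[],
                        help='path to a python interpreter to build with (repeatable)')
    args = parser.parse_args()

    for python in args.python:
        # one `pip wheel` invocation per requested interpreter
        subprocess.check_call([python, '-m', 'pip', 'wheel', args.project_dir,
                               '-w', args.output_dir, '--no-deps'])

if __name__ == '__main__':
    sys.exit(main())

The host side would then only need to copy this one file over and run it with the proper arguments.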

@joerick
Contributor Author

joerick commented Sep 12, 2018

I must admit, I'm quite drawn to (2), since that could make the code quite similar to the macOS and Windows builds, keeping the control flow in a single Python process. We'd wrap up the docker exec in a call() function like

def call(args, env):
    # forward each env var as its own `--env KEY=VALUE` flag to `docker exec`
    env_args = [arg for k, v in env.items() for arg in ('--env', '%s=%s' % (k, v))]
    subprocess.check_call(['docker', 'exec'] + env_args + [container_id] + args)

Then, once the container is set up and running, it's just lots of commands like

call(['pip', 'wheel', project_dir, '-w', '/tmp/built_wheel', '--no-deps'], env=env)

I'm sure it's a bit more complicated than that (the macOS and Windows builds use things like glob and shutil to move stuff around). Oh, and expanding env vars might be tricky? But I'm sure there'd be a way around it.
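
For what it's worth, one way around the env-var expansion question might be to route the command through the container's shell so the expansion happens there. Purely a sketch, reusing the call() helper above:

# hypothetical: let the shell inside the container expand $PROJECT_DIR,
# rather than trying to expand it on the host
call(['sh', '-c', 'pip wheel "$PROJECT_DIR" -w /tmp/built_wheel --no-deps'],
     env={'PROJECT_DIR': project_dir})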

@altendky
Contributor

You could use check_output() as the general rule instead, and could then read out variables and directory listings. Just make sure of things like having ls output one file per line rather than splitting on spaces (hmm, can a filename contain a newline?).
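
Something along these lines, say (a sketch only, reusing container_id from the snippet above):

import subprocess

def output(args):
    # capture stdout from a command run inside the container
    return subprocess.check_output(['docker', 'exec', container_id] + args,
                                   universal_newlines=True)

# read an env var and a directory listing back out of the container,
# splitting on newlines rather than whitespace, as noted above
home = output(['printenv', 'HOME']).strip()
built_wheels = [line for line in output(['ls', '-1', '/tmp/built_wheel']).splitlines() if line]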

https://docs.docker.com/engine/reference/run/#env-environment-variables

For setting env vars you can (1) pass them inline like ['-e', 'deep=purple', 'do', 'something'], or (2) have the function take an additional dict for extra env vars, or (3) use **kwargs for the env vars. I think (2) uses Python syntax and is explicit and flexible, so I'd tend towards that.
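
A sketch of option (2), building on the call() helper from above (the extra_env name and the merge behaviour are just one way of doing it):

def call(args, env=None, extra_env=None):
    # merge per-call extras on top of the base env, then forward everything
    # as `--env KEY=VALUE` flags
    merged = dict(env or {})
    merged.update(extra_env or {})
    env_args = [arg for k, v in merged.items() for arg in ('--env', '%s=%s' % (k, v))]
    subprocess.check_call(['docker', 'exec'] + env_args + [container_id] + args)

call(['do', 'something'], extra_env={'deep': 'purple'})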

@joerick
Contributor Author

joerick commented Oct 30, 2020

The piped bash script was removed in #386 and replaced with a system similar to (2) above, but using a shell process and open pipes instead of docker exec, for performance reasons. So this is fixed :)
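
For anyone coming to this later: the snippet below is not the actual #386 implementation, just a rough sketch of the shape of the idea, i.e. one long-lived shell in the container fed over an open pipe, instead of a new docker exec per command:

import subprocess

# keep a single bash process alive inside the container and write commands to
# its stdin, avoiding the startup cost of a fresh `docker exec` for every command
shell = subprocess.Popen(['docker', 'exec', '-i', container_id, '/bin/bash'],
                         stdin=subprocess.PIPE, universal_newlines=True)

def call(args):
    # a real implementation also needs shell-quoting, env handling, and a way to
    # detect each command's exit status; this just fires the command and moves on
    shell.stdin.write(' '.join(args) + '\n')
    shell.stdin.flush()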

joerick closed this as completed Oct 30, 2020