Create allennlp-hub repo #3351
Comments
- IIUC, this is most of what's needed to make a formal release, but I haven't tested that. Local-only so far.
- Installing dependencies that aren't on PyPI proved clunkier than expected.
- For now I'm going to set the CI to manually run `pip install --editable allennlp` on a local checkout.
- The allennlp specified in `setup.py` will be excluded with an environment variable.
- Prereq for allenai/allennlp#3351.
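The env-var mechanism described above could be sketched like this. This is a hypothetical illustration, not the actual `setup.py`: the flag name `EXCLUDE_ALLENNLP` and the helper `hub_install_requires` are invented for the example.

```python
import os


def hub_install_requires(env=os.environ):
    """Build the install_requires list for a hypothetical setup.py.

    When the (assumed) EXCLUDE_ALLENNLP flag is set, the `allennlp`
    dependency is dropped so CI can install a local editable checkout
    with `pip install --editable allennlp` instead of the PyPI release.
    """
    deps = ["allennlp-semparse"]
    if env.get("EXCLUDE_ALLENNLP") != "true":
        deps.append("allennlp")
    return deps
```

In CI the build would export `EXCLUDE_ALLENNLP=true` before running `pip install .`, then install the local checkout explicitly.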
I've created https://github.com/allenai/allennlp-hub and an associated build for the sniff tests on TeamCity. The basic functionality works, but there's a bit of cleanup to be done. In particular:
@matt-gardner your advice on 2) would be appreciated! I'm out until Tuesday, but I'll wrap up the remaining pieces then.
This is awesome, thanks @brendan-ai2! You're right that we never had sniff tests for the parsing models, and I'm not really sure why not. As for adding things, it would be ideal if we didn't have to modify this repo in order to add a new semantic parser to the hub, just add something to a …. One option is to do a named approach, similar to …. Probably the right thing to do is just put a few existing models (e.g., ones that were used for papers) into ….
- For allenai/allennlp#3351.
- Conveniently allenai/allennlp#3361 broke `allennlp_semparse` a while back, so the [AllenNLP Hub Master Build](http://build.allennlp.org/viewType.html?buildTypeId=AllenNLPHub_Master) should break when this PR is merged.
- We should then fix `allennlp-semparse` and verify that the build goes green.
- Ensures that the versions of `allennlp` and `allennlp-semparse` specified in `requirements.txt`/`setup.py` are compatible.
- Corresponding build: http://build.allennlp.org/viewType.html?buildTypeId=AllenNLPHub_Release
- The existing Dockerfile and TC build work only on the various `master`s.
- For allenai/allennlp#3351.
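The compatibility check described in the first bullet amounts to verifying that the exact `allennlp` version the hub pins falls inside the range `allennlp-semparse` declares. A toy sketch of that check, with invented version numbers and a deliberately tiny spec parser (real builds would use `pip check` or the `packaging` library instead):

```python
def satisfies(version, spec):
    """Tiny checker for '==X', '>=X' and '<Y' clauses, comma-separated.

    Compares dotted version strings component-wise as integers; this is
    an illustrative simplification, not a full PEP 440 implementation.
    """
    def key(v):
        return tuple(int(part) for part in v.split("."))

    for clause in spec.split(","):
        clause = clause.strip()
        if clause.startswith(">="):
            if key(version) < key(clause[2:]):
                return False
        elif clause.startswith("=="):
            if key(version) != key(clause[2:]):
                return False
        elif clause.startswith("<"):
            if key(version) >= key(clause[1:]):
                return False
    return True
```

The release build would run a check like this for each pinned pair before tagging, failing fast when a pin drifts out of range.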
@brendan-ai2, can this be closed now?
See github.com/allenai/allennlp-hub
We're splitting out the models and dataset readers into task-specific repositories, keeping the core abstractions in a more lightweight main library. We still want somewhere to go to get all of the pretrained models, though. We should create a repository called something like AllenNLP Hub that pip installs all of the task-specific repos, then exposes something like our `pretrained.py` and our sniff tests. The main repo would then pip install the hub during CI, to run the sniff tests periodically and make sure we're not breaking anything downstream.
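The "exposes something like our `pretrained.py`" idea could look roughly like the following: a single hub module mapping friendly names to model archives pulled in from the task-specific repos. Everything here is illustrative, the function `load_predictor`, the archive URLs, and the model names are all assumptions, not the real AllenNLP API.

```python
def load_predictor(archive_url):
    """Stand-in for loading a model archive.

    In a real hub this would delegate to the task repo's loading code;
    here it just records which archive was requested, to show the shape
    of a single aggregation point over many task-specific repos.
    """
    return f"predictor from {archive_url}"


# Hypothetical catalog of pretrained models, one entry per task repo.
PRETRAINED = {
    "srl": "https://example.org/srl-model.tar.gz",          # assumed URL
    "wikitables": "https://example.org/wikitables.tar.gz",  # assumed URL
}


def get(name):
    """Hub entry point: users ask for a model by name, nothing else."""
    return load_predictor(PRETRAINED[name])
```

Sniff tests would then iterate `PRETRAINED`, load each model, and run a quick prediction, which is exactly the downstream check the main repo's CI would invoke after pip-installing the hub.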