Cleans up the package for Julia 1.0 and updates the YAML file with the parsing regexes.
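For anyone unfamiliar with how the YAML file drives parsing: the library walks an ordered list of regex entries and returns the first match. The sketch below (in Python, mirroring the upstream Python library mentioned later) inlines two illustrative entries; the real `regexes.yaml` has many more, and the field names follow the uap-core format.

```python
import re

# Illustrative stand-ins for entries from regexes.yaml (uap-core format).
# Real entries carry more fields (v1_replacement, etc.); only the ones
# used below are shown.
patterns = [
    {"regex": r"(Firefox)/(\d+)\.(\d+)"},
    {"regex": r"(Chrome)/(\d+)\.(\d+)", "family_replacement": "Chrome"},
]

def parse_ua(ua: str) -> str:
    """Return the browser family for the first matching pattern, else 'Other'."""
    for p in patterns:
        m = re.search(p["regex"], ua)
        if m:
            # family_replacement overrides the captured family name when present.
            return p.get("family_replacement", m.group(1))
    return "Other"

print(parse_ua("Mozilla/5.0 (X11; Linux x86_64) Firefox/61.0"))  # Firefox
print(parse_ua("some unknown agent"))                            # Other
```

Because matching is first-hit-wins over an ordered list, accuracy depends entirely on the quality and ordering of the YAML entries, which is why regenerating the file can shift test results.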
As written, the test suite does not fully pass:
I'm comfortable with this level of accuracy. When I originally wrote this code, it was nearly impossible to trace how the Python library handled these one-off test failures, so I manually edited the test files to make the tests pass. That said, I'd be more comfortable simply publishing these accuracy statistics: close enough is far better than not being able to parse user agents at all.
My next commit will update the test files, which adds many more tests. I'll post those metrics here as well.
I'd love feedback from anyone else with a stake in the path forward. With thousands of tests, I'm not that invested in improving the parser code or changing the tests to get marginal accuracy gains.