Cleanup for julia 1.0 #14

Merged
merged 4 commits into master on Dec 19, 2018
Conversation

randyzwitch
Contributor

Cleans up the package for Julia 1.0 and updates the YAML file containing the parsing regexes.

As written, the tests do not completely pass:

parse_device: 121/122 (99.2%)
parse_os: 1068/1128 (94.7%)
parse_os2: 92/92 (100%)
parse_ua: 164/165 (99.4%)
parse_ua2: 972/972 (100%)

I'm comfortable with this level of accuracy. When I originally wrote this code, it was nearly impossible to trace how the Python library handled these one-off test failures, so I manually edited the test files to make the tests pass. That said, I'd be more comfortable simply publishing these accuracy statistics, since close enough is far better than not being able to parse user agents at all.

My next commit will update the test files, which adds many more tests. I'll post those metrics here as well.

I'd love feedback from anyone with opinions on how we should move forward. With thousands of tests, I'm not that invested in improving the parser code or changing the tests just to get marginal accuracy gains.
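For anyone following along, here's a rough sketch of how the YAML-driven parsing works conceptually. The file path, the `match_ua_family` name, and the fallback behavior are illustrative assumptions based on the uap-core regexes.yaml layout, not UAParser.jl's actual internals:

```julia
# Rough sketch of how the YAML-defined regexes drive user-agent parsing.
# Assumes the uap-core regexes.yaml layout ("user_agent_parsers" entries with a
# "regex" key and an optional "family_replacement"); replacement placeholders
# like $1 are ignored in this simplified version.
using YAML

const REGEXES = YAML.load_file("regexes.yaml")  # hypothetical local path

function match_ua_family(ua::AbstractString)
    for parser in REGEXES["user_agent_parsers"]
        m = match(Regex(parser["regex"]), ua)
        m === nothing && continue
        # Prefer an explicit family_replacement; otherwise fall back to the
        # first capture group from the regex.
        return get(parser, "family_replacement", m.captures[1])
    end
    return "Other"
end
```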

@randyzwitch
Contributor Author

Updated the test files (which also removed some of the old ones):

parse_device: 15144/16017 (94.6%)
parse_os: 1517/1528 (99.3%)
parse_ua: 204/205 (99.5%)

Again, I'm still comfortable with this accuracy. My plan is to remove the test macro from the individual tests and instead do my own counting, then test against that accuracy level. This way, the CI tests will catch whether any future change decreases the parser's accuracy from its current level, rather than failing for not having 100% parser accuracy.
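For clarity, here's a minimal sketch of that threshold-style test. The case fields, the `parseos`/`os_cases` names, and the 0.99 cutoff are placeholders for illustration, not the package's actual test code:

```julia
# Minimal sketch of accuracy-threshold testing; `parse_fn`, the case fields,
# `parseos`, `os_cases`, and the 0.99 cutoff are illustrative assumptions.
using Test

function accuracy(parse_fn, cases)
    passed = count(cases) do case
        # A case passes when the parsed family matches the expected family
        parse_fn(case["user_agent_string"]).family == case["family"]
    end
    return passed / length(cases)
end

@testset "parser accuracy thresholds" begin
    # CI fails only if accuracy regresses below the currently observed level,
    # rather than requiring every individual case to pass.
    @test accuracy(parseos, os_cases) >= 0.99
end
```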

@codecov-io

codecov-io commented Dec 19, 2018

Codecov Report

❗ No coverage uploaded for pull request base (master@13073b8).
The diff coverage is 85.71%.

@@            Coverage Diff            @@
##             master      #14   +/-   ##
=========================================
  Coverage          ?   57.48%           
=========================================
  Files             ?        1           
  Lines             ?      127           
  Branches          ?        0           
=========================================
  Hits              ?       73           
  Misses            ?       54           
  Partials          ?        0
Impacted Files Coverage Δ
src/UAParser.jl 57.48% <85.71%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 13073b8...4fd02ab.

@randyzwitch changed the title from "[WIP] Cleanup for julia 1.0" to "Cleanup for julia 1.0" on Dec 19, 2018
@randyzwitch merged commit 7fb06eb into master on Dec 19, 2018