Iterative autotuning of basals and ratios #261
The existing lib/determine-basal/cob-autosens.js doesn't have all the info it needs to determine whether a meal is over yet: it just calculates carbsAbsorbed, and then lib/meal/total.js uses that to actually calculate COB. It would probably make sense to create a new high-level autotune-prep.js, which would calculate carb absorption and deviations for each BG data point and categorize the data for CSF, ISF, and basal tuning.
Then, having created the glucose data file with a top-level stanza for each category of interest, we could have an autotune.js (or separate autotune/csf.js, autotune/basal.js, and autotune/isf.js) that would parse their respective data sets and use deviations from the gap-free data intervals to perform the actual CSF, basal, and ISF adjustment calculations. With this method, we could run the autotune process once an hour, parse only the previous hour's data, and update autotune.json accordingly.
https://github.com/openaps/oref0/blob/autotune/lib/autotune-prep/total.js now has all of the preparation steps working, and is spitting out json with top-level csf_glucose_data, isf_glucose_data, and basal_glucose_data arrays. I decided to allocate data to ISF tuning if the BGI is more than about 1/4 of the "basal BGI" (for now), and use the rest for tuning basals. Also, when avgDelta is positive, it doesn't make sense to use that for calculating ISF, so that data goes toward basals as well.
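A minimal sketch of that allocation rule as described, assuming each bucketized data point already carries avgDelta, a bgi computed from bolus/net IOB, and a basalBgi for what the scheduled basal alone would produce; the names and the exact threshold are illustrative, not the ones in the repo:

```javascript
// Allocate one 5-minute data point to ISF or basal tuning.
function categorizeDatum(point, isfGlucoseData, basalGlucoseData) {
    // ISF only when bolus/net insulin impact is meaningful relative to basal.
    var insulinDominated = Math.abs(point.bgi) > 0.25 * Math.abs(point.basalBgi);
    if (point.avgDelta > 0 || !insulinDominated) {
        // Rising BG, or activity close to basal-only: use it to tune basals.
        basalGlucoseData.push(point);
    } else {
        // Falling BG driven mostly by bolus insulin: use it to tune ISF.
        isfGlucoseData.push(point);
    }
}
```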
I also added the avgDelta and deviation values to the json dump, which I think should mean that all of the recursive stuff can be contained within autotune-prep, and we can do the rest of the tuning as a single pass.
oref0-autotune-prep.js and oref0-autotune.js are now working well, and spitting out reasonable-looking results (although I still have some questions as to whether the CSF estimate is accurate). The main problem at this point is that oref0-autotune-prep.js is highly recursive and takes an hour or two to run on a full day's worth of data (at 100% of one CPU). Probably need to figure out how to make the COB calculation more efficient before we can really test out the algorithm and make sure it converges on reasonable values for ISF and CSF. I think the way to do that would be to find and move the relevant carb absorption code out of lib/determine-basal/cob-autosens.js and into oref0-autotune-prep.js, so it can be run to calculate just the absorption since the last BG data point and update COB accordingly, rather than recalculating carb absorption from scratch on each one.
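A rough sketch of that incremental approach, assuming the 5-minute deviation (observed delta minus insulin-predicted delta), the profile's CSF, and a min_5m_carbimpact floor are already available at each step; the names are illustrative rather than the actual oref0 ones:

```javascript
// Update COB from just the most recent 5-minute interval, instead of
// recalculating carb absorption from scratch across the whole meal each time.
function updateCOB(cob, deviation, csf, min_5m_carbimpact) {
    // Credit at least min_5m_carbimpact mg/dL of carb impact per 5 minutes,
    // so COB keeps decaying even when absorption appears to stall.
    var carbImpact = Math.max(deviation, min_5m_carbimpact); // mg/dL over 5m
    var absorbed = carbImpact / csf;                          // grams absorbed
    return Math.max(0, cob - absorbed);
}

// Example: 30g on board, +4 mg/dL deviation, CSF 4 mg/dL/g
// -> 1g absorbed this interval, 29g remaining.
```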
Oh, and they also work great on data downloaded from the NS treatments.json and entries.json API endpoints. The only thing you need from openaps is an initial profile.json. One additional enhancement might be to allow providing an NS URL and have the script download the needed treatments and entries data directly from NS...
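A rough Node sketch of the kind of direct download being suggested here, using the standard Nightscout entries and treatments API endpoints; the site URL and query parameters are illustrative, and sites with authentication enabled would also need a token or API-secret header:

```javascript
// Fetch recent glucose entries and treatments straight from a Nightscout site.
var https = require('https');

function download(url, callback) {
    https.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () { callback(JSON.parse(body)); });
    });
}

var ns = 'https://mysite.example.com'; // hypothetical Nightscout URL
download(ns + '/api/v1/entries/sgv.json?count=288', function (entries) {
    console.log('glucose entries:', entries.length);
});
download(ns + '/api/v1/treatments.json?find[created_at][$gte]=2016-12-01', function (treatments) {
    console.log('treatments:', treatments.length);
});
```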
Re-did the CSF algorithm to be much simpler and 50x more efficient. :-) I'm now parsing a day's worth of data in 2-3 minutes, so I can get started on testing whether it converges on a reasonable value.
After some bug fixes, I ran the script overnight starting with some too-aggressive and too-sensitive settings, and confirmed that the algorithm eventually converged (after a handful of runs on the same 3 weeks' worth of data) on two nearly identical and reasonable sets of estimates for ISF, CSF, IC ratio, and basals. Here's how I ran the test: Collect data:
Set up testprofile.json. My too-sensitive one looked like:
Run test (overnight):
Display results (in another tab while test is running):
Some more plain language to go with the code as it is today (I also made some changes today to rename and comment, which is in line with the below). There are two key pieces: autotune-prep and autotune.js.
1. Autotune-prep: runs through the glucose and treatment data and, using the deviations and BGI for each 5-minute interval, categorizes each data point as dominated by carb absorption (CSF), insulin sensitivity (ISF), or basal.
2. Autotune.js: takes those categorized deviations and uses them to tune CSF, ISF, the carb ratio, and the hourly basal schedule.
A few food for thought items for later:
This may be the first tool to peel back the onion and start reliably understanding post-meal BGs to the granular level of basal vs csf vs uncounted/miscounted meal carbs.
Things that come to mind:
Looks promising... I tried to do a run myself, but got errors:
Is that caused by a missing file, or should I install the autotune differently? Why do you calculate it on an hourly basis, and not (as the pump and Nightscout support) on a half-hour basis?
Awesome idea and thanks for this!
I'm not sure if either of my suggestions would make a difference in practice, but I at least wanted to pass along my thoughts and support.
@PieterGit you probably just need to update your install. I did hourly basals initially because that is convenient to parse and calculate. Anything more frequent introduces more noise, as you're using less and less data to tune each (half) hour. And with DIA of at least 3h anyway, scheduling basals for less than an hour is unlikely to make much difference...
Thanks. Investigated what happened.
@PieterGit that looks like it's pulling in a bad autotune file, then running into a safety configuration. Are you running with the pump profile being pulled in now? That's preventing a lot of the runaway numbers @jyaw was seeing before we ran it with a pump profile.
Pump profile + 20% limits are currently doing well on both my data (regular, plus a set each of super sensitive and super resistant parameters) and @jyaw's data and preventing any crazy extremes, converging to reasonable values.
@TC2013 we want some safety caps, which is why it currently would store a different profile rather than editing the pump profile - then a 20% cap will prevent it from constantly iterating up, up, and away and being very different from what the human originally set for manual mode. Open to brainstorming whether there are other ways to achieve this - but generally thinking that if the values and basals start to be more than 20% off for looping purposes, the human needs to decide whether to adjust the baseline pump-stored values, in order for it to tune further beyond that.
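A minimal sketch of that cap, assuming the pump-stored value and autosens-style limits (0.8/1.2 for the default 20%) are read from pumpprofile.json; the function name is illustrative:

```javascript
// Cap an autotuned value to within autosens_min/autosens_max of the value
// the human programmed into the pump (e.g. 0.8 and 1.2 for +/-20%).
function capTunedValue(tuned, pumpValue, autosens_min, autosens_max) {
    var maxAllowed = pumpValue * autosens_max;
    var minAllowed = pumpValue * autosens_min;
    return Math.min(maxAllowed, Math.max(minAllowed, tuned));
}

// Example: pump basal 1.0 U/h, autotune wants 1.4 U/h -> capped at 1.2 U/h.
console.log(capTunedValue(1.4, 1.0, 0.8, 1.2));
```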
For OS X, the |
hey did the input/output change for bin/oref0-autotune.js? there's now an autotune/glucose.json and autotune/autotune.json.... aren't those pretty much the same? would I pass the previous run's autotune/glucose.json as autotune/autotune.json? With only 2 args it's complaining about my carb_ratio being out of bounds.
disregard my previous comment.... not thinking straight. Good now :)
* Merge remote-tracking branch 'refs/remotes/origin/oref0-setup' into openaps/oref0-setup merge
* make oref0-find-ti work on edison explorer board out of the box, required for ww pump rename files, add to package.json
* fix json tabs and comma's
* use oref0-find-ti (because it also works on explorer board)
* fix dexusb and oref0-find-ti issue
* remove stdin by default
* create .profile and dexusb fixes
* fix oref0 install docs as suggested by @scottleibrand on issue #261 (comment)
* apply changes because of scott's review fix things a scott suggested on 28-12-2016
* sort files under bin in package.json. increment version number and use semver to indicate dev version
* run subg-ww-radio-parameters script on mmtune
* append radio_locale to pump.ini for ww pumps temporary workaround for oskarpearson/mmeowlink#55
* fix add radio_locale if it's not there workaround oops, grep -q returns 0 if found, we want to add radio_locale if it's not there so && instead of ||
@scottleibrand @danamlewis used the script below to test what looks like an issue... My resulting profile.json does not have correct ".basalprofile.start" entries. They appear to be inherited from the initial pump profile with repetition until the "start" value changes in the initial pump profile (e.g. repeated 00:00:00 until 3am when my old profile changed). My repo appears to be even with the upstream autotune branch, so I don't think that's it. Can you confirm this on your end?
Also, have updates to the shell script ready for logging stdout and reporting original vs. autotune in a table. But I'm thinking I want to work through the above issue before I do a PR for my updates.
Yeah, for now those start entries are simply copied from the original profile. They don't do anything AFAICT, so I haven't bothered to change them. I think @sulkaharo had an easy fix for that, but I don't know where that ended up...
Note this assumes the even-hour implementation. Our profile currently uses half-hour increments, for example a changed basal rate at 5:30 AM to combat morning resistance, where 5:00 seemed too early due to often being low at that time and 6:00 seemed too late for the basal to kick in before breakfast.
Related - #301 pulls all of the needed profile data into the one profile object. We should do a refactor where all scripts pull data using this file and always go through a method call that returns the needed data, rather than accessing data in the profile directly. This would allow us to implement support for loading multiple profile objects with validity periods, which is a step toward allowing systems like autotune to use profile data that was actually in use at the time of the analysis. (And changing the file format if needed - right now all the code in the repo is tightly coupled to the data storage file formats.)
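A sketch of the kind of accessor being proposed, assuming hypothetical profile objects that each carry a validity timestamp; none of these names exist in the repo today:

```javascript
// Hypothetical: each profile object carries a validFrom timestamp; return the
// one that was in effect at the given time, so retrospective tools like
// autotune use the settings that actually applied to the data being analyzed.
function profileAt(profiles, time) {
    var t = new Date(time).getTime();
    var candidates = profiles
        .filter(function (p) { return new Date(p.validFrom).getTime() <= t; })
        .sort(function (a, b) { return new Date(b.validFrom) - new Date(a.validFrom); });
    return candidates[0]; // most recent profile that started before `time`
}
```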
I added #310 which can create a Microsoft Excel file with the 'expanded profile' (ISF and basal profile for each half hour of the day), for each profile json file in the autotune directory. Please test and leave feedback here, in a new issue, or in the i-to-b channel.
I think we're getting close. Just opened #313 for review
So hey, how does the tuning take profile changes during the period being analyzed into account? From what I can see, nothing was done regarding this, which AFAIK can lead to the analysis producing a profile that's on the other end of the safety margin from what it should be. If this goes into the production release without even a simple solution in place, at least the documentation of the feature should say that the results are unpredictable if the analysis is run within 24 hours of a pump profile change, potentially under- or overdosing insulin by up to the adjustment margin (as with the current autosens, which is also a big problem).
I agree with @sulkaharo's concern. It seems that we now have the ability to push/pull/sync profile info from NS though? Perhaps we need to pull all profiles that cover the particular time window from NS? In a case where there's an overlap, we could just do a piecewise autotune for that day and note this to the user somehow? Or not provide autotune results during periods with older (not the current) profiles? There could be a need for a different approach in cases with manual vs iterative tuning as well...
I believe that concern is addressed by the fact that, with autotune enabled, changes to the pump only get reflected in pumpprofile.json, which is only used to set the 20% caps on autotune. The actual IOB calculations etc. will continue to use the autotuned profile.json, which removes the sudden shifts you would see with just autosens. We definitely should add some documentation around this, though, so people know that changes they make to the pump's basal profile won't take effect for looping right away like they did without autotune, won't have any effect at all until midnight, and even after that will only influence basals and ratios that are more than 20% off the new pump settings.
+1 for docs :) I'm definitely not sure what all the intended consequences of this system are.
This issue is starting to get busy, and has nuggets of what needs to go in to start. Unless someone beats me to it, I'll PR to the OpenAPS docs and we can discuss additions/edits on that PR (will come back and link it here). Although, since this could be used by non-OpenAPS users, we'll need to be clear on how this is documented as a one-off run by anyone with the requisite data, vs. the documentation for how it incorporates into OpenAPS looping if/when enabled.
(Starting to stub out WIP docs - please direct PRs for things that need to be added, further documented, etc. to this page: http://openaps.readthedocs.io/en/latest/docs/walkthrough/phase-4/autotune.html Thanks!)
Implements new Autotune feature (#261)

Commit details:
* oref0-autotune-prep.js
* use oref0/lib/autotune-prep
* don't print autosens debug stuff when running in meal mode
* divide basal_glucose_data from isf_glucose_data at basalBgi > -3 * bgi; comments and TODOs
* bucketize data, calculate deltas and deviations, and use those to better allocate data to csf, isf, or basals
* prep for an optimized append mode to an existing autotune/glucose.json
* initial framework for oref0-autotune.js
* adjust basals for basal deviations
* add bgi to output json
* try including rising BGs in ISF calculations
* initial basic ISF autotuning
* use medians, not averages, for ISF calcs
* add mealCarbs and mealAbsorption start/end
* first pass at CSF estimation
* when avgDelta with large negative BGI, don't use that for ISF or basal tuning
* convert sgv records into glucose if needed
* add support for nightscout treatments.json format
* only consider BGs for 6h before a meal to speed up processing
* properly map sgv to glucose
* add support for carbs from NS
* remove unnecessary clock and basalprofile arguments
* update basalprofile
* profile needs isfProfile not isf_profile
* use min_5m_carbimpact in calculating total deviations too
* way more efficient and simpler iterative algorithm for calculating COB
* add mealCarbs to glucose_data
* make sure new CSF isn't NaN
* disable min deviation for CSF calculation
* smooth out basal adjustments by incrementing evenly and reducing proportionally over 3h
* smooth out basal adjustments by using average of current and last 3 hours as iob_inputs.profile.current_basal
* make sure increases and decreases of basal are both doing the same 20%
* minPossibleDeviation, and actually basal_glucose_data.push when avgDelta > 0
* include Math.max(0,bgi) in minPossibleDeviation
* null treatment check
* add pumpprofile as optional argument
* TODO: use pumpprofile to implement 20% limit on tuning
* use pumpprofile to implement 20% limit on tuning
* use pumpprofile to implement 20% limit on ISF and CSF tuning
* only set pumpCSF if setting pumpISF
* logging
* null check
* undefined check
* Commenting to describe index.js
* Deleting unnecssary variable that's not used
* More commenting
* Last bit of commenting for now
* Rename function from diaCarbs to categorizeBGdatums
* Rename total.js to categorize.js
* Update reference to now categorize.js
* Reference categorize instead of total.js
* Rename total.js to categorize.js
* Rename categorize.js to tune.js
* Fixing function naming from diacarbs to tuneAllTheThings
* Delete tune.js
* Simplifying min and maxrate
* Tweaking troubleshooting language
* Adding to-do about dinner carbs not absorbed at midnight
* Defining fullNewCSF
* Function tuneAllTheThings instead of Generate
* Update index.js
* Make pump profile required for autotune (#298)
* Make pump profile required for autotune
* Added oref0-autotune-test.sh script to test autotune. Allows the user to specify date range and number of runs as well as openaps directory and user's Nightscout URL. Note that the pump profile is pulled from the following location: <loop dir>/settings/profile.json. Also note that --end-date and --runs are not required parameters, but the script will default to the day before today as the end date and 5 runs, so you may or may not want to use those. Example Usage: ./oref0-autotune-test.sh --dir=openaps --ns-host=<NS URL> --start-date=2016-12-9 --end-date=2016-12-10 --runs=2
* Added oref0-autotune-test.sh script to test the autotune. Allows the user to specify date range and number of runs as well as openaps directory and user's Nightscout URL. Note that the pump profile is pulled from the following location: <loop dir>/settings/profile.json. Also note that --end-date and --runs are not required parameters, but the script will default to the day before today as the end date and 5 runs, so you may or may not want to use those. Example Usage: ./oref0-autotune-test.sh --dir=openaps --ns-host=<NS URL> --start-date=2016-12-9 --end-date=2016-12-10 --runs=2 (#303)
* Added stdout logging option to oref0-autotune-test.sh. Terminal output is still there as it was before. Logging is off by default, but can be enabled with the --log=true option. Also cleaned up odds and ends in the file :)
* default to 1 run, for yesterday, if not otherwise specified
* If a previous settings/autotune.json exists, use that; otherwise start from settings/profile.json
* write out isf to sens too: used by determine-basal
* make sure suggested.json is printed all on one line
* support optional --autotune autotune.json
* round insulinReq
* add @sulkaharo's method to calculate basal start
* small fix for autotune command line parameters (#308) correct documentation of parameters and exit if user enters an unknown command line option
* autotune export to microsoft excel initial version, requires xlsxwriter
* increment version number for autotune
* export excel improvements for autotune swap run and date column, do some formatting (font size, etc)
* rename export to excel to .xlsx instead of .xls for consistency
* missed one... changed --xls to --xlsx in the cli example
* swap Date and Run column, add license stuff in script
* Install autotune with oref0-setup (#312)
* Commenting out "type": = "current" (#296)
* restart networking completely instead just cycling wlan0 (#284)
* restart networking completely instead just cycling wlan0 this has proved more stable for me across some wifi netoworks
* Re-add dhclient release/renew
* Update oref0-online.sh
* Bt device name (#307)
* Update oref0-online.sh Change the BT devicename from BlueZ 5.37 to hostname of Board
* Update oref0-setup.sh add hostname as BT device name.
* Exit scripts when variables under or functions fail (#309)
* Exit script when variables unset or functions fail
* first attempt at setting up nightly autotune with oref0-setup and using autotuned profile.json for looping
* increment version number for autotune
* check if settings/autotune.json is valid json
* specify a default for radio_locale
* require openaps 0.1.6 or higher for radio_locale
* radio_locale requires openaps 0.2.0-dev or later
* redirect oref0-ns-autotune stderr to log file
* update script name in usage
* use settings/pumpprofile.json in oref0-ns-autotune
* Updated bin/oref-autotune-test.sh with capability of running a small summary report at the end of the script. The report consists of the tunable parameters of interest, their current pump value and the autotune recommendation. Implemented report separately in oref0-autotune-recommends-report.sh. Example Usage: ~/src/oref0/bin/oref0-autotune-recommends-report.sh <Full path to OpenAPS Directory>.
* Merged changes that were incorporated into the updated oref0-ns-autotune.sh to add terminal logging to autotune.<date/time stamp>.log in the autotune directory as well as a simple table report at the end of this manual autotune to show current pump profile vs autotune recommended profile. Implemented report in oref0-autotune-recommends-report.sh
* rename to oref0-autotune-core.js and oref0-autotune.sh
* Clarify usage
* We're redirecting stderr not stdout
* only cp autotune/profile.json if it's valid
* camelCase autotune and use pumpProfile.autosens_max and min (#319)
* camelCase autotune.js and categorize.js
* use pumpProfile.autosens_max and min instead of 20% hard-coded cap
* revert require('../profile/isf');
* camelCase autotuneM(in|ax)
* profile/isf function is still isfLookup
* change ISFProfile back to isfProfile to match pumpprofile
* change basalProfile back to basalprofile to match pumpprofile.json
* change ISFProfile back to isfProfile to match pumpprofile
* camelCase pumpHistory to match categorize.js
* change basalProfile back to basalprofile to match pumpprofile.json
* camelCase pumpProfile to match autotune/index.js
* camelCase to match autotune/index.js
* autotuneMin/Max and camelCase fixes
* update 20% log statements for basals to reflect autotune min/max
* mmtune a bit more often
* fix start and end date for ns-treatments.json
* leave ISF unchanged if fewer than 5 ISF data points (#322)
* leave ISF unchanged if fewer than 5 ISF data points
* move stuff out of the else block
* output csf as expected by oref0-autotune-recommends-report.sh
* fix ww pump and dexusb with small changes (#323)
* fix blocker bug for ww pumps and for dex usb
* additional upcasing for radio_locale for cli
* Compare lowercase radio_locale to "ww"
* bump version and require [email protected] or later
* install jq for autotune
* use pip install rather than cloning (#324)
* pip install git+URL@dev instead of cloning
* sudo pip install
* move openaps dev install out where it belongs
* remove commented code
* bump version and require [email protected] or later
* redirect stderr to stdout so we can grep it
* Continue and output a non-autotuned profile if we don't have autotune_data
Did my first autotune run on 6 weeks of data. Findings/remarks:
Does anybody have objections to enabling Excel generation by default? Compared to autotune, the Excel generation only takes a few extra CPU cycles.
I also used Excel to calculate a
I think for retrospective analysis we probably want to discourage people from running multiple runs on the same input data, and instead have them just use more input data if they want to see if it results in bigger adjustments. To speed things up, we should have oref0-autotune.sh download the treatments for each day separately, so that oref0-autotune-prep.js doesn't have to scan through multiple weeks of treatments for every 5m data point. When I did this manually in a bash loop, it sped things up considerably. I'm not sure what you mean by the hard limit on basal insulin increase. Have you seen an example where it's doing the wrong thing and such a limit would be useful? If not, I think the autosens_max and autosens_min limits should be sufficient. I don't use the Excel stuff myself yet, and don't really have any strong opinions about file naming, so you can do whatever you want there. :-) Can you expand on the ISF profile bug? Have you PR'd a fix for that yet?
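A sketch of that per-day splitting idea, reusing a hypothetical one-URL-at-a-time fetch like the download() sketch earlier in the thread; the date arithmetic is illustrative:

```javascript
// Build one [gte, lt) date window per day, so each day's autotune-prep run
// only has to scan that day's treatments rather than the whole multi-week download.
function dayWindows(startDate, days) {
    var windows = [];
    for (var i = 0; i < days; i++) {
        var start = new Date(startDate.getTime() + i * 24 * 60 * 60 * 1000);
        var end = new Date(start.getTime() + 24 * 60 * 60 * 1000);
        windows.push({
            gte: start.toISOString().slice(0, 10),
            lt: end.toISOString().slice(0, 10)
        });
    }
    return windows;
}

// Each window then maps to a query like:
// /api/v1/treatments.json?find[created_at][$gte]=<gte>&find[created_at][$lt]=<lt>
```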
I'm really interested in running against retrospective data but have only used Nightscout for uploading CGM readings thus far. However, as we have an Animas Vibe pump, all the historical basal, bolus and carb info is held in Diasend. Is there any relatively easy way of using the exported (.xls) data from Diasend and plugging it into Nightscout / mlab? Or can the system take information from a combination of sources?
If you can construct a treatments.json file in the format expected by the autotune code, you can run oref0-autotune-prep against that instead of against the ns-treatments.json downloaded by oref0-autotune.
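For anyone converting exported pump data by hand, a hypothetical minimal treatments.json in the Nightscout style that autotune-prep consumes; the values here are made up, and a real download from the treatments API will contain more fields:

```json
[
  { "eventType": "Meal Bolus", "created_at": "2017-01-15T12:05:00Z", "carbs": 45, "insulin": 4.5 },
  { "eventType": "Correction Bolus", "created_at": "2017-01-15T16:30:00Z", "insulin": 1.0 }
]
```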
Thanks for the speedy reply, Scott. OK, I reckon I'll set up some Excel formulas/macros to do that. If I come up with something reusable then I will post it back here in case anyone else is interested. I'll have a look at the documentation and see if I can work out the appropriate format. If there are any existing samples that I can refer to then that would be useful, but it would probably help if I read the documentation first and then asked questions afterwards! ;) I'll come back if I'm stuck.
The expected format is json, so you'll probably end up writing a script to parse the data from .csv or something and turn it into the proper json format. Not sure if Excel can do json natively, but I've never heard of anyone doing it that way. :-)
Anything else we want to track under this issue? If not, I think it's ready to be closed.
oref0-autotune-core autotune.1.2017-08-01.json profile.json profile.pump.json > newprofile.1.2017-08-01.json
TypeError: Cannot set property 'i' of undefined
I'm receiving this error on both Debian and Mac after it spews through my data.
First of all, thanks a lot for an amazing tool! I am having problems getting Autotune working on my Raspberry Pi, where I want to set it up to run periodically. To ensure that I followed the instructions correctly, I redid the setup on a VM, and that works. The issue is that in the final output, Autotune reports incorrect numbers (e.g. all basal rates are incorrectly listed as 1.000). It seems that Autotune calculates the values correctly, but reports "Invalid number" and is unable to copy elements to the final recommendations table. Interestingly, the decimal separator of all numbers in the recommendation table is changed to whatever I have not chosen (e.g. if I follow the instructions and use a dot as separator, the separator in the recommendation values will be a comma, and vice versa).
Hi, I'm having a frustrating newb problem. I believe I'm all set to run autotune from my Mac terminal, but when I run oref0 it keeps telling me there's "no such file or directory". I've gotten some path wrong somehow. I can see that there is in fact such a directory, as I just navigated to it. Any help much appreciated!
(base) Darcys-MacBook-Pro:~ dp$ cd
Never mind, I caught my stupid error, and it's working now! Thanks.
I'm interested in setting up a system, perhaps a Python script, that will automatically run oref0 once a day and create a visual graph showing the existing pump settings and autotune's suggestions, so the two can be visually compared easily. (We're just running Loop on iOS with Omnipod/Dexcom.) Does anyone know of any project like this already in development? Thanks for this great system. Darcy
One possibly easier way to implement #99 in oref0 would be to take an incremental approach to iteratively adjusting basal schedules and ratios.
To keep track of required adjustments to the pump's programmed basal schedule, we could create a long-lived autotune.json file that contains an entry for each hour of the day for which we've identified an adjustment. For ISF and CSF, we could similarly track any required adjustments in that same file.
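A hypothetical sketch of what such a long-lived autotune.json could look like under this proposal; none of these field names come from the code, they just illustrate per-hour basal adjustments plus single ISF and CSF multipliers:

```json
{
  "basal_adjustments": { "03:00": 1.05, "04:00": 1.10, "17:00": 0.95 },
  "isf_adjustment": 0.95,
  "csf_adjustment": 1.02
}
```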
For time periods without COB, and where basal insulin activity dominates, we could look at net deviations for each hour, and calculate how much less or additional basal insulin would have been required to eliminate the deviations. We could then add an adjustment factor/multiplier to the autotune.json to adjust basal by a fraction of that, perhaps 10%. That could be split across a few hours' basals: perhaps 5% of the required adjustment would go in the 2h-prior slot, and 2.5% each could go in the 1h-prior and 3h-prior ones. The other 90% of adjustment would not be made at all unless subsequent days' outcomes justified further adjustments, so as to dampen oscillations and cause the system to react gradually to observed insulin needs.
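A worked sketch of that dampened split, assuming an hourly netDeviation (mg/dL) and the current ISF are already known; the 10% / 5% / 2.5% weights are the ones proposed above, and the names are illustrative:

```javascript
// Translate one hour's net BG deviation into small basal-schedule adjustments,
// spread across the basal hours most likely responsible for it.
function basalAdjustments(netDeviation, isf) {
    var insulinNeeded = netDeviation / isf; // extra (or excess) units for that hour
    return {
        twoHoursPrior: 0.05 * insulinNeeded,    // half of the 10% total adjustment
        oneHourPrior: 0.025 * insulinNeeded,    // a quarter each to the
        threeHoursPrior: 0.025 * insulinNeeded  // neighboring hour slots
    };
    // The remaining 90% is deliberately not applied, so the schedule only keeps
    // drifting if later days keep showing the same deviations.
}

// Example: +20 mg/dL of unexplained rise with ISF 40 mg/dL/U -> 0.5 U short
// -> +0.025 U/h at the 2h-prior slot and +0.0125 U/h at the 1h- and 3h-prior slots.
```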
After mealtimes, we could observe whether AMA's COB estimate drops to zero before or after post-prandial deviations drop to zero. If the COB estimate hits zero first, that indicates that the (perceived carb) sensitivity factor is too low. As with basals, we could adjust the CSF ratio (calculated from the pump's ISF and IC ratio) multiplier in autotune.json by a fraction (perhaps 10%) of the adjustment that would've been required to get COB to decay to zero at the time we saw deviations drop to ~0. For meals where deviations drop to zero while COB is still positive, we'd want to subtract out any net positive deviations after the initial negative deviation, so we can account for any carbs whose absorption was delayed by post-meal activity etc.
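A sketch of the direction-and-dampening part of that CSF adjustment, assuming the "required CSF" (the one that would have decayed COB to zero right when deviations returned to ~0) has been computed upstream from the meal's data; names are illustrative:

```javascript
// Nudge the CSF multiplier 10% of the way toward the CSF that would have made
// COB reach zero at the same time the post-meal deviations did.
function adjustCsfMultiplier(csfMultiplier, currentCSF, requiredCSF) {
    var fullAdjustment = requiredCSF / currentCSF; // >1 if COB decayed too fast
    return csfMultiplier * (1 + 0.1 * (fullAdjustment - 1));
}

// Example: COB hit zero while deviations were still positive -> perceived CSF
// was too low -> requiredCSF > currentCSF -> multiplier nudged slightly upward.
```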
For any post-correction periods and post-meal periods with significant insulin activity after COB and deviations have dropped to zero, we can calculate deviations for the period where bolus and high-temp insulin dominate basal insulin, and calculate what adjustments to ISF would have been necessary to bring deviations for that period to zero. As with basals and CSF, we can gradually adjust the ISF multiplier by a fraction (perhaps 10%) of the observed deviations.
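And the analogous sketch for ISF over a carb-free window where bolus and high-temp insulin dominate, given the window's net deviation and insulin delivered; again a hedged illustration, not the shipped algorithm:

```javascript
// Estimate the ISF that would have produced zero net deviation over the window,
// then move the ISF multiplier 10% of the way toward it.
function adjustIsfMultiplier(isfMultiplier, currentISF, netDeviation, insulinUsed) {
    // A positive net deviation means each unit lowered BG less than expected.
    var effectiveISF = currentISF - netDeviation / insulinUsed;
    var fullAdjustment = effectiveISF / currentISF;
    return isfMultiplier * (1 + 0.1 * (fullAdjustment - 1));
}

// Example: ISF 50 mg/dL/U, 2 U delivered, +20 mg/dL net deviation
// -> insulin acted like ISF 40 -> multiplier nudged about 2% lower.
```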