Large MOST models (nt > 1000) #7
One quick thing, right off the bat ... MATPOWER/matpower#70 addresses …
I apologize if this issue was already known and solved. Is this new feature exclusive to MATPOWER 7 (not in 7.0b1)?
It is being worked on in pull request MATPOWER/matpower#70 for inclusion into the master branch, which I hope to complete very soon. So it's not yet in the master branch, let alone in any numbered release of MATPOWER.
I confess I question whether attempting to solve a single optimization with 8760 hours is what you really want to do, even if it were computationally feasible. There are good reasons, besides the obvious computational ones, for not doing unit commitment with an hourly one-year horizon (e.g. data). Then again, there are often good reasons to use a tool in ways the creator of the tool never envisioned. May I ask about the context of this 8760-hour problem you are trying to solve with MOST?
May I ask how soon? Thanks in advance.
I read the issue, and it looks great.
The cause of the issue seems to be the same: an expanding sparse matrix.
It does look like the bottleneck now for building these large models is in … I actually found an amazing single fix for both issues, believe it or not (it must have been some divine inspiration, because I don't know how it even occurred to me to try something like this). Simply add ...
... right after line 46 in … On a big model of mine it cut the time in …
This is the kind of stuff you aren't supposed to have to think about when programming in a high-level language like Matlab. 😜
If I'm guessing correctly, after executing line 46, there are two references to one big struct. When we go messing with the contents of that struct (in line 87), MATLAB creates a new copy (to keep the original unchanged), allocating new memory, etc. We then replace the original with the copy in line 218, which frees the memory for the original. Removing the extra reference to the original (with my proposed added line) before modifying it removes the need to allocate and free memory for that extra (unused) copy.
I'll try to get both of these fixes into MATPOWER as soon as I can; if I don't finish it today, it'll probably be late next week. Thanks for bringing this to my attention.
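The copy-on-write behavior described above can be sketched as follows (a hypothetical illustration, not MATPOWER code; the variable and field names are made up):

```matlab
% Hypothetical illustration of MATLAB copy-on-write (not MATPOWER code).
s.A = zeros(5000);   % large struct field, roughly 200 MB
t = s;               % cheap: t initially shares s's data (second reference)
t.A(1, 1) = 1;       % modifying t forces a full copy of A to be allocated

% Dropping the extra reference before modifying avoids that copy:
u = s;
clear s;             % only one reference to the data remains
u.A(1, 1) = 1;       % modified in place, no extra allocation or free
```

This mirrors the proposed fix: clearing the second reference to the big struct before it is modified lets MATLAB update it in place instead of allocating and later freeing a full copy.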
These issues have now been fixed. See MATPOWER/matpower#70 and MATPOWER/matpower#79. |
Building a multi-period MOST model for a large number of periods (e.g. 8,760 hours = 1 year) in one shot took 7 hours on a high-performance computing cluster just to build the model, without solving it.
If you want to do optimal sizing (long-term planning), a 7-hour build means your optimization might take a year even on a cluster.
I used MATLAB's profiler to see where most of the time was spent, and I can see that the main culprits are these routines:
I went through both functions to try to make some improvements. I couldn't do anything about (1); however, in params_lin_constraint I could achieve about 20% time savings by changing line #108

from:

```matlab
At = At + Akt_full;
```

to:

```matlab
At(:, i1:iN) = At(:, i1:iN) + Akt_full(:, i1:iN);  % this line was added by ****; the original is the one above and it was very slow for large At
```
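For context, here is a minimal sketch of the two update patterns (sizes and density are made up; `i1:iN` marks the column block that `Akt_full` actually touches):

```matlab
% Hypothetical sketch comparing the two sparse-update patterns.
n  = 1e5;                              % made-up problem size
At = sprandn(n, n, 1e-5);              % accumulated constraint matrix
i1 = 5001; iN = 6000;                  % column block touched by this step
Akt_full = sparse(n, n);
Akt_full(:, i1:iN) = sprandn(n, iN - i1 + 1, 1e-4);

% Original: adds two full n-by-n sparse matrices, even though only
% columns i1:iN of Akt_full hold nonzeros.
At_slow = At + Akt_full;

% Modified: restricts the addition (and reallocation) to the affected
% column block, which is where the ~20% time savings came from.
At(:, i1:iN) = At(:, i1:iN) + Akt_full(:, i1:iN);
```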
Other options to circumvent the problem of building a large MOST model are:
After that, you can connect these models together.
In this case, I'd advise making the sub-models NOT mutually exclusive.
Instead, having one overlapping period between each two consecutive models helps:
you can force a constraint: Pg(last time period of slice i) = Pg(first time period of slice i+1).
This way, the ramping constraint between each scenario J(t) and each scenario J(t+1)
would be built properly. However, the stored-energy level still needs to be corrected manually.
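One crude way to impose that linking condition is sketched below (untested; `mdo_prev` is assumed to be the solved MOST model for slice i and `mpc_next` the case data for slice i+1). It pins the dispatch by tightening generator limits; note that as written it pins every period of slice i+1, so in practice you would restrict it to the first period, e.g. via a MOST profile.

```matlab
% Hypothetical sketch: force slice i+1's first-period dispatch to equal
% slice i's final dispatch by pinning generator limits (PMIN = PMAX).
define_constants;                 % PMIN, PMAX, etc. (MATPOWER)
ms = most_summary(mdo_prev);      % summary of the solved slice i (assumed)
Pg_end = ms.Pg(:, end, 1, 1);     % terminal dispatch, base scenario/contingency
mpc_next.gen(:, PMIN) = Pg_end;   % NOTE: pins ALL periods of slice i+1;
mpc_next.gen(:, PMAX) = Pg_end;   % restrict to period 1 via a profile
```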
Disadvantages:
For every time period t, storage is constrained as storage_min <= sum(SoC(1:t)) <= storage_max,
so you need to adjust this constraint for all later slices i > 1.
You can circumvent this obstacle by requiring the terminal storage at each sub-slice
to be at a certain level, but that changes the solution.
You would still get a correct optimal solution;
however, most_summary(om) would read the results incorrectly.
Advantages:
Build slice (1) and solve it; if it is feasible, proceed to build slice (2); if not, terminate with a penalty proportional to the slice number.
Disadvantages:
Finding a feasible solution for later hours may require oversizing the storage.
This isn't a concern if you have very few components and a simple system,
but for a complex system, with transmission losses and different storage devices with different
characteristics, the obtained solution is unlikely to be optimal.
You can circumvent this obstacle by requiring the terminal storage at the end of each slice
to be at a certain level, but that changes the solution.
Advantages:
I think I will proceed with option 2.
With option 2, you can save the model and adjust it instead of rebuilding it every time.
Constraints that involve/incorporate time coupling across periods
(not across wind scenarios or between k = 0 and contingency k) are:
'Rrp'
'Rrm'
'mindown'
'minup'
'uvw'