[MRG] Efficient synapse creation #600

Merged: 84 commits from efficient_synapse_creation into master, Apr 9, 2016

Conversation

thesamovar
Member

This branch is for work on issue #65.

@thesamovar
Member Author

I created a new file brian2/synapses/conditions.py with some ideas towards efficient synapse creation. The idea is that we do the following:

First, take the condition and break it into disjunctive normal form. That is, an expression like (a and b) or (c and d and e) or f, i.e. an or of ands. All logical expressions can be broken down in this way, and we use sympy to do that.
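
(A minimal sketch of this first step, just to illustrate the idea -- this is not the code in conditions.py: any boolean condition can be rewritten with sympy's to_dnf and then split into its disjuncts.)

from sympy import symbols
from sympy.logic.boolalg import to_dnf, Or

a, b, c = symbols('a b c')
cond = (a | c) & (b | c)                  # not in DNF
dnf = to_dnf(cond, simplify=True)         # -> c | (a & b)
# each argument of the top-level Or is one "and of atoms" to handle in turn
subexpressions = dnf.args if isinstance(dnf, Or) else (dnf,)
for sub in subexpressions:
    print(sub)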

Now, we are thinking about iterating over j and computing the set of i matching the condition. We do this by serially finding all the i matching each of the subexpressions in turn, and then taking the union of these. So, in the example above we'd first find all i matching a and b, then all i matching c and d and e, then all i matching f, and finally take the union of these three.

Now consider just one of these subexpressions; it will be of the form a and b and c and ..., i.e. an and of atomic subsubexpressions (which I'll call atoms here).

We analyse each atom to see if it is of a standard form (or can be transformed into a standard form). The standard forms are inequalities on i, e.g. i<f(j), and rand()<p(j). At the moment I just directly check if they are of this form without trying to transform them, but we can do that later. We note these standard terms and form the remainder expression which has them removed.

Now when we generate the set of i for the subexpression, we first compute imin and imax from the set of inequalities, and then (if there is a rand()<p term) we generate a candidate set using the binomial+random sample method used in Brian 1, from imin to imax. Finally we evaluate the remainder condition on these candidates.
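
Roughly, the per-subexpression candidate generation then looks like this (a numpy sketch of the scheme just described, not the generated code itself; imin, imax, p and the remainder callable are placeholders for what the analysis extracts):

import numpy as np

def candidates_for_subexpression(j, imin, imax, p, remainder):
    """Sketch of the per-subexpression step for one fixed j: bound i by the
    inequalities, sample for the rand()<p atom, then filter by the remainder
    condition (here a callable on (i, j) standing in for the leftover atoms)."""
    if imax < imin:
        return np.empty(0, dtype=int)
    n = imax - imin + 1
    # binomial + sample-without-replacement instead of testing rand()<p for every i
    k = np.random.binomial(n, min(p, 1.0))
    i = imin + np.random.choice(n, size=k, replace=False)
    return i[remainder(i, j)]

# e.g. for a subexpression like "i>j-100 and i<j+100 and rand()<0.1 and i!=j"
N = 10000
j = 500
i_vals = candidates_for_subexpression(j, imin=max(0, j - 99), imax=min(N - 1, j + 99),
                                      p=0.1, remainder=lambda i, j: i != j)

The candidate arrays for the different subexpressions of the disjunction can then simply be combined with numpy.union1d.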

So far I've implemented this and written a comparison script equivalent to what the numpy codegen target would do (but not integrated it with the rest of Brian). Here are some timings (for len(source)==len(target)==N, where N goes up to 10k in some examples, 100k in others):

1% connectivity. Better for N>1000, around 2.5x better for N=10k.

[figure: randonly]

0.1% connectivity. Better for N>1000, around 8x better for N=10k.

[figure: smallrand]

10% connectivity. Uniformly worse.

[figure: largerand]

Connectivity where abs(i-j)<10. A bit better for N=10k but 20x better for N=100k.

[figure: inearj10]

abs(i-j)<100 and rand()<p. Better for N>=1000 and around 6x better at N=10k.

[figure: within100rand]

i==j, oddly not much improvement until N=100k (10x improvement). Was surprised by this. Guess the overheads are quite high, and maybe that can be optimised quite a lot.

[figure: ij]

A complex expression, almost equal at low N and around 3x better for large N=10k.

[figure: ifarfromjrand]

I think these timing differences will be much bigger for e.g. C++ standalone where the overheads will be much lower. Still, even here it shows that in almost all cases for N>1000 it's considerably better to use this optimised version, with timings for N>=1000 varying from a low point of 2x slower to a high point of 20x faster.

@thesamovar
Member Author

Oh, and this branch is based off the cython performance branch, which is why there are so many commits. As of this comment, the only relevant commit is the last one.

I'd be interested in feedback on this approach (timings, code, etc.). I think it's promising enough to be worth including, and not too complicated (around 200 lines of code).

Oh, it might also be interesting to have an example of the generated numpy code for a particular expression (i>j-100 and i<j+100 and rand()<0.1):

from numpy import *
from numpy.random import choice, binomial, rand
from sklearn.utils.random import sample_without_replacement
from random import sample
from __builtin__ import max  # numpy's max shadows the builtin after the * import
N = 10000  # population size; len(source) == len(target) == N (set here for illustration)
all_i = []
all_j = []
_vectorisation_idx = arange(N)
for j in xrange(N):
    # bounds on i implied by i>j-100 and i<j+100, clipped to [0, N-1]
    imins = (0,
        1+int(floor(j - 100)),
        )
    imaxes = (N-1,
        -1+int(ceil(j + 100)),
        )
    imin = max(imins)
    imax = min(imaxes)
    if imax>=imin:
        i = _vectorisation_idx[imin:imax+1]
        p = 0.1
        if p>1.0:
            p = 1.0
        # binomial + sample-without-replacement instead of testing rand()<p for every i
        k = binomial(imax-imin+1, p)
        i = i[sample_without_replacement(len(i), k)] # fast
        #i = sample(i, k) # ok
        #i = choice(i, k, replace=False) # very slow
    else:
        i = _vectorisation_idx[0:0]  # empty candidate set
    all_i.append(i)
    all_j.append(full(len(i), j, dtype=int))
all_i = hstack(all_i)
all_j = hstack(all_j)

@thesamovar
Member Author

For the case of i==j a few micro-optimisations that can be easily generalised to work in all cases improve it so that for N=10k the optimised version is around 8x faster and for N=100k around 95x faster, and more importantly it's growing as O(N) instead of O(N^2). Total time was around 0.1s with optimisations and 10s without for N=100k. I think this would be fine for numpy performance, and suggests again that once we do the same thing for weave and/or C++ standalone we should get even bigger speed improvements.

@mstimberg
Member

I am still not convinced that the analysis of expressions is the right approach to follow... I am worried that this will end up as another huge chunk of complex code, and that for quite a few examples it might actually slow things down, since we could take a long time over calculations like finding bounds in sympy only for the bounds to turn out to be 0 and N-1 or something like that. (We could switch the whole system off for small numbers of synapses, but then this would be a completely arbitrary heuristic.)

Also, it will still not allow us to do some essential things like generating a random number of targets per source. Wouldn't allowing the user to be more explicit be more the Brian way[TM]? E.g. instead of writing S.connect('i == j') they could write S.connect_for_each_source('i') and for S.connect('j>i-100 and j<i+100 and rand()<p') they could write S.connect_for_each_source('rand()<p', from='i-100', to='i+100') or something along those lines? Or the quick&easy solution of just adding min_for_each_source and max_for_each_source arguments (with a better name maybe) to Synapses.connect that would default to 0 and N_post but could be provided by the user to give tighter bounds (e.g. 'i' for both arguments in the extreme one-to-one case).

The whole process of coming up with a consensus and a working solution that we have sufficient confidence in for something more complex might take too long for 2.0 -- a short term solution would be to just document it (i.e. tell people about the inefficiency for S.connect('i==j') and advise them to connect with index arrays instead if that is a problem)...

@thesamovar
Member Author

I think it's worth doing this for 2.0; otherwise there's no efficient way of doing rand()<p in standalone, which is a really important use case. I'm happy with specifying it explicitly rather than analysing conditions. The only possible issue is future/backwards compatibility, but actually I don't think it's a big issue. Suppose we had a syntax like irange=('j-100', 'j+100'); this can always be converted into an equivalent expression like j-100<i and i<j+100. We wouldn't need that for 2.0, but for future versions, if we introduce more types of syntax, we could get backwards compatibility in this way.

So here are some suggestions for syntax:

S.connect('(i-100)%3==0', for_j=('i-100', 'i+100'), with_p=0.01)
S.connect(j_range=('i-100', 'i+100', 3), probability='f(j)')
S.connect(j_min='i-100', j_max='i+100', j_step=3, random_fraction=0.01)

S.connect_i_to('i+(-1)**m', for_m=(0, 2))
S.connect(j='i+(-1)**m', for_m=(0, 2))

What I want to achieve with the syntax for the standard connect is to make it clear that these extra restrictions apply in addition to the condition. Not sure if they achieve that. The connect_i_to is the equivalent of what you called connect_each_source_to. I'm not really happy with either name, but source on its own suggests the source group, not the source neuron (at least to me). We could just use connect for both and use keywords to disambiguate.

Well, that's just a few ideas, I suspect we can come up with something better than any of that.

@mstimberg
Member

Yes, I think coming up with a good syntax is the hardest challenge... But note that for the probability we already have the p keyword argument; users should never have to write rand()<p in connect calls.

@thesamovar
Member Author

OK, so syntax suggestion:

S.connect(condition='(i-100)%3==0', target_range=('i-100', 'i+100'), p=0.01)
S.connect(num_random_targets=20)
S.connect(target='i+(-1)**m', m_range=(0, 2))

In general, condition could be any string. target_range would be a 2-tuple or 3-tuple (start, stop, step) where each of the elements can be a value or a string, and p could be a value or a string, etc. Strings other than condition would have to be functions of i but not j (we could potentially allow for construction column by column instead of row by row in the future, but in any case they wouldn't be allowed to be functions of both i and j -- only condition can have that). You would only be allowed to specify p or num_random_targets but not both, and if you specify target=... then maybe you can't specify p, etc. (Would need to think about the exact conditions here.) The variable m could either be a fixed name (I'd be happy for it to be something other than m) or user-specified, e.g. you could also do target='i+a', a_range=(0, 10). I've used target, but other names are worth thinking about, e.g. j, so that you'd write S.connect(j='i+(-1)**m', m_range=(0, 2)).

This would cover a lot of what we talked about, including both our ideas about what the syntax should be, and I think would cover a lot of relevant use cases. What do you think? I think it's worth doing and the syntax above may be good enough or nearly so.

@mstimberg
Member

Ok great, this is getting really close to what I had in mind. I think I'd prefer a fixed name (and probably not m) instead of some "magic" var_range keyword. I am also not sure whether tuples are a good idea for the ranges; maybe rather use separate keywords, such as target_start, target_stop or something like that? I don't really see a use case for a step -- you can always include it in the condition. For the wording in general, I am not sure: rather sources and targets, or pre and post? Currently the first argument of connect is pre_or_cond and the second is post (which currently only takes an array of indices, but it could also take a string with a function of i, i.e. replace your target argument).
A more technical thing is that we should probably separate things into separate templates (as we currently already do for connections from strings and from arrays) -- this will also make it easier for new devices to not support some of it.

@thesamovar
Member Author

I think step is important for a connectivity pattern that is not uncommon, namely if you have a 2D array of neurons (indexed in 1D). I am OK with start, stop and step arguments instead of a single range argument, but given that the single argument follows the same convention as Python's very standard range function, I think maybe it's OK. Would be interesting to see what others think about that though.

@mstimberg
Member

Some feedback on the proposed syntax:
Generally, i and j were preferred over source/target or pre/post. Independent of all the new syntax, I think the best would be to get rid of this mixed first argument from the current syntax, i.e. where we currently have

def connect(self, pre_or_cond, post=None, ...)

we should rather use

def connect(self, condition=None, i=None, j=None)

I think this would be clearer by forcing you to use keyword arguments when you specify synapses directly:

syn.connect(i=[1, 2, 3], j=[3, 2, 1])

But this is a minor point. I think we can all agree on the syntax for just restricting the range of the targets (and optionally of the source as well, maybe) relatively quickly. Just one thing came up: it is natural to define the range inclusively (as you did above), but the range function in Python excludes the upper limit so this would be somewhat inconsistent. Maybe not use range in the name for this reason? limits, bounds, minmax? All of those would not lend themselves to including a step value as a third element, though, but maybe this could be separate?

For the syntax where you specify the targets with a running variable, Romain had the idea of introducing some new string syntax for that variable. Something along the lines of

syn.connect(j='i+k', foreach='k in [-3, 3]')
syn.connect(j='i+k', foreach='k in [-3, 4[')
syn.connect(j='i+k', foreach='k in {-3, 0, 3}')

I have to admit there is something to it, even though I am not sure it is worth the additional complication.

I think the trickiest thing syntax-wise is to include the num_random_targets somewhere in a nice way.

@thesamovar
Member Author

Independent of all the new syntax, I think the best would be to get rid of this mixed first argument from the current syntax

I agree!

Just one thing came up: it is natural to define the range inclusively (as you did above), but the range function in Python excludes the upper limit so this would be somewhat inconsistent. Maybe not use range in the name for this reason? limits, bounds, minmax?

Good point, but how about we just keep it as range but follow Python semantics?

For the explicit connections, I had thought about something along those lines too but decided that we are generally trying to avoid introducing new syntax in strings. How about for_each_k_in=(low, high) instead? I think I still prefer k_range=(low, high) to these (and it would be consistent with the others). If we do want to introduce a new string syntax, what about going full on:

S.connect(j='i+k for k in range(-3, 4)') # my preferred of the two, follows Python semantics
S.connect(j='i+k for k in (-3, 3)') # shorter, violates Python semantics in many ways

The nice thing about this is that it is actually valid Python (albeit in a string). It would be much more restricted than Python obviously but allows us to expand its possibilities in the future if we wanted to, and is anyway nicely self-descriptive.
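
(Just to illustrate the "valid Python" point, not a proposed implementation: for a given i you can paste the expression into a list comprehension and see exactly which targets it produces.)

i = 10
targets = [i+k for k in range(-3, 4)]   # gives [7, 8, 9, 10, 11, 12, 13]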

Generally speaking, I prefer to stick to Python semantics, but we could go with intervals as an alternative, in which case we should use a consistent syntax based around intervals (e.g. j_interval instead of j_range). I think that's important: we should have a coherent and consistent syntax, not a mix of different syntaxes.

One last option, how about we go even more full on mimicking Python generator syntax:

S.connect(j='i+k for k in range(-3, 4)')
S.connect(j='k for k in range(0, N) if condition')
S.connect(j='k for k in random_sample(0, N, p)')
S.connect(j='k for k in fixed_sample(0, N, num_targets)')

Not sure how I feel about this one, but just putting it out there.

@thesamovar
Member Author

OK, the last commit implements a first go at doing binomial synapse creation for the numpy target. I don't think it's maximally efficient yet, it will probably break some tests because I didn't think carefully about all the possible conditions, and it relies on the sklearn package which we probably want to avoid in the future, but it does give quite a speed-up. Here are the results for the old synapse creation method (O(N^2)):

[figure: old]

And for the new method:

[figure: new]

The graph to look at is the total time because it runs for a very short duration. As you can see, for the old version, B2 is much slower at large N than B1 for all codegen targets. However, with the new numpy method, they're about the same.

This will need some work to make it fully general and also fit in all the other changes, but I think it shows that it doesn't need to be too much work to implement these ideas.

@mstimberg
Member

I did not yet have time to look at this in detail, but just about the syntax question:

Good point, but how about we just keep it as range but follow Python semantics?

The problem with that is that it feels somewhat unnatural, e.g. don't you think that many users would fall into the trap of writing:

S.connect(condition='(i-100)%3==0', j_range=('i-100', 'i+100'), p=0.01)

instead of

S.connect(condition='(i-100)%3==0', j_range=('i-100', 'i+101'), p=0.01)

?

I am a bit undecided about the full generator syntax: on the one hand I think it is great -- it is readable, has clear semantics, and you can quickly try it out in real code to see whether it does what you want it to do. On the other hand, we already have the problem that users think that abstract code is Python code and start using all kinds of Python features in it (say, if clauses or array indexing) -- I worry that this would happen more often with this generator syntax.
@romainbrette, @yger: any opinion on this? (I am referring to Dan's Python-generator-like syntax proposal two comments above: #600 (comment))

I agree with coming up with a consistent syntax, of course.

@thesamovar
Member Author

Yeah, you're right about range. Well, I guess j_min=..., j_max=... would work instead although it seems rather wordy. Alternatively, how about j_in=(min, max) or j_between=(min, max) with an additional j_step argument? It's not quite so simple, but less likely to lead to problems. Although actually, the step argument makes less sense semantically if we're not using range and I do think it would be nice to include the step option.

I'm also very unsure about the generator syntax. As you say, it has a lot of nice properties but also may lead to confusion. I guess if we're very clear in our error messages then maybe it's OK? Another advantage of it is that by using Python syntax we're allowing ourselves lots of room to add new features in the future (we just expand which bits of Python syntax we support).

@thesamovar
Member Author

One more datapoint. Parsing the generator syntax is super easy:

from brian2 import *
from brian2.parsing.rendering import NodeRenderer
import ast

def parse_synapse_generator(expr):
    node = ast.parse('[%s]' % expr, mode='eval').body
    nr = NodeRenderer()
    print 'Element:', nr.render_node(node.elt)
    print 'Variable:', node.generators[0].target.id
    print 'Variable in:', nr.render_node(node.generators[0].iter)
    if len(node.generators[0].ifs)==1:
        print 'If:', nr.render_node(node.generators[0].ifs[0])
    elif len(node.generators[0].ifs)>1:
        raise SyntaxError("Only allowed one if statement")

parse_synapse_generator('k+1 for k in range(i-100, i+100, 2) if f(k)')

gives output

Element: k + 1
Variable: k
Variable in: range(i - 100, i + 100, 2)
If: f(k)

Handling all the error conditions will of course add a bit more to this, but the point is that this is not going to lead to complex parsing code.

@thesamovar
Member Author

OK, so now I'm getting more and more convinced that the generator syntax is the best. The reason is that by its very nature, this problem requires a fairly deep syntax to express all the possible options and combinations of options. If we stick to simple keywords, we will have very many of them or overload them a lot, which will be very verbose and confusing, and require you to look up what they mean. On the other hand, the generator syntax is, as you said, completely clear and unambiguous. It's also very flexible: it can do everything we want to do and leaves loads of space for new stuff later.

If we go down this route, I'd leave open a couple of simple options too. So, the full syntax would be:

S.connect(cond=..., p=...)
S.connect(j='generator syntax')

@thesamovar
Member Author

Actually, even with the full error checking it's not very long. See dev/ideas/parse_synaptic_generator_syntax.py.

@mstimberg
Member

I am still a bit unsure about this... Parsing certainly isn't the problem, as your script shows. What I am more worried about is that we are introducing another form of code strings with subtle differences to both our standard abstract code and to Python code. The similarity to Python code already leads to confusion sometimes ("why can't I use fft(neuron.v) in a code string?"), but I am also not yet quite clear about the relation of the new generator code to our standard abstract code. Most importantly: how would indexing work? Could you refer to state variables at all? Or would generator expressions be completely limited, self-contained expressions? I guess this is the only reasonable way. But then I wonder how it would be possible to express something like the following: "Have each cell connect to 20 random cells in its vicinity (say distance < 200 um)"?
The first part could be formulated as, say, j='k for k in fixed_sample(0, N_post, 20)', but how do we get the condition in there?

k for k in fixed_sample(0, N_post, 20) if sqrt((x_pre - x_post)**2 + (y_pre - y_post)**2) < 200*umetre

Here, we don't know how to get x_post -- we could force the user to use j for j in ... to make this work, but I am not sure this is very clear. The only option right now would be to define a custom function f(i, j) that calculates the distance based on the indices, but this is more work than it should be, and in general we try to make sure that users don't have to care about indices in such models.
Either way, the formulation wouldn't actually do the right thing: it would first select 20 random targets and then discard those that are too far away. We could re-define the semantics so that fixed_sample would try to draw new samples if the if clause does not apply, but I think that would stray too far from standard Python semantics.

Hmm, this needs some further thought...

@thesamovar
Member Author

Good point about indexing.

I don't see any way that we could express "Have each cell connect to 20 random cells in its vicinity (say distance < 200 um)" in any of the schemes we discussed so far, though -- that's a new use case. As far as I can tell, there's no way to do it efficiently: you'd have to compute all the points satisfying the distance condition first (i.e. O(N^2)) and then do random selections. I think it might be worth considering this kind of use case in the future, but maybe not for 2.0?

For the case of picking some number of random targets and then discarding the ones that don't satisfy the distance condition I think your example should work fine. For me, the semantics of that would be as follows. Since you're writing j='k for k in ... if ...' then I would have the index for x_post be the value of j which in this case would be k. In general, if you wrote j='expr for var in ... if ...' then the index of x_post would be expr. That makes sense to me, at least.

@mstimberg
Member

As far as I can tell, there's no way to do it efficiently: you'd have to compute all the points satisfying the distance condition first (i.e. O(N^2)) and then do random selections. I think it might be worth considering this kind of use case in the future, but maybe not for 2.0?

I think it could be done efficiently, but it would mean that the sampling function would have to know about the condition. Something like fixed_sample(0, N_post, 20, abs(x_pre - x_post) < 50*umetre); it would then have to redraw for each point that does not fulfil the condition. But getting the details of this right is probably tricky -- I agree, let's not worry about this for the moment. I also did a quick check and did not find an obvious way to do this with CSA or NEST's topology module; this does not seem to be a much-used pattern.
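
(Just to make the redraw idea concrete, a rough, purely hypothetical sketch -- nothing like this is implemented in the PR; the name and signature loosely follow the fixed_sample example above, with the condition passed as a callable on the candidate index.)

import numpy as np

def fixed_sample_with_condition(low, high, n, cond, max_tries=100000):
    # hypothetical sketch of a condition-aware fixed_sample: redraw any
    # candidate that fails the condition until n distinct valid targets
    # are found (or we give up)
    chosen = set()
    tries = 0
    while len(chosen) < n and tries < max_tries:
        k = np.random.randint(low, high)
        if k not in chosen and cond(k):
            chosen.add(k)
        tries += 1
    return np.array(sorted(chosen), dtype=int)

# e.g. 20 targets j for a given source i, subject to a vicinity condition
i = 50
targets = fixed_sample_with_condition(0, 1000, 20, lambda j: abs(i - j) < 100)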

In general, if you wrote j='expr for var in ... if ...' then the index of x_post would be expr. That makes sense to me, at least.

Seems to make sense indeed, I don't know why I was confused...

I think I'd be happy with a (for now, fairly restrictive) generator syntax. One more point to consider: how would the new functions (range, random_sample, fixed_sample, ... ) tie in with our function system? Or would they rather be core syntax elements which are converted into something else during the code generation process?

@thesamovar
Member Author

Interesting. I think in that case, whether or not it can be done efficiently depends on the scaling. If for a given i there is an approximately fixed fraction k (independent of N) of neurons j that satisfy the condition, then if you want to find a random selection of F such neurons your method would be, I think, O(FN/k)=O(N). However, if for a given i there is an upper bound K (independent of N) on the number of neurons j satisfying the condition, then this gives k=K/N, so the method would be O(FN^2/K)=O(N^2). For the case of a spatial locality condition I guess this second case is more likely (since increasing the number of neurons probably means simulating a larger area rather than increasing the density of neurons in the area), but indeed there may be some situations where the former situation happens too. Actually there is an efficient way of doing it in the bad case too using a spatial subdivision algorithm, but that's very specific to the case of spatial locality conditions.

I think the functions range, random_sample and fixed_sample (we might want to change the names) would be core syntax elements rather than trying to fit them in with the function system. Each template can implement them in its own way. That said, I think we should also implement them as Python functions because that way people can actually test out that the condition does what they think by just running it as a standard Python expression. We'd use those versions in the numpy target as well, of course.
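
For example, plain-Python reference versions might look something like this (just a sketch; the names and exact semantics follow the examples in this thread and are not final):

import numpy as np

def random_sample(low, high, p):
    # every index in [low, high) is kept independently with probability p
    k = np.arange(low, high)
    return k[np.random.rand(len(k)) < p]

def fixed_sample(low, high, num_targets):
    # a fixed number of distinct indices drawn uniformly from [low, high)
    return np.random.choice(np.arange(low, high), size=num_targets, replace=False)

# a user could then check a generator expression directly in plain Python:
i = 42
print([i + k for k in random_sample(-3, 4, 0.5)])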

Do you want to run this by @romainbrette?

@mstimberg
Member

Interesting. I think in that case, whether or not it can be done efficiently depends on the scaling.

I think it is a tricky problem in the general case -- a simple "re-draw if condition is not fulfilled" would work great for a condition that is almost always true (say, i != j) but it might end up re-drawing a lot if the condition is often not fulfilled (e.g. your example with scaling up in size). On a theoretical level, it can never be worse than O(n^2), i.e. than first evaluating the condition on all possible pairs, but in practice it could be worse because you'd end up creating many more random numbers than you need, which is relatively costly. Either way, I think there are easy solutions for many special cases but nothing for the general case, so let's not worry about this for now.

Ok, about the functions, that's what I thought. It would be kind of nice if the system were extensible but that's something we can always add later.

Do you want to run this by @romainbrette?

I think for that it would be a nice thing to have a wiki page that has everything in it, i.e. a list of the syntax options after the change (from the user point of view) -- would you mind creating such a page?

@thesamovar
Member Author

OK, done: https://github.com/brian-team/brian2/wiki/Synapse-creation

Do you think that's a fair representation of the options so far? Feel free to edit if not.

@mstimberg
Member

Alternative idea (maybe even better?): we have an argument connect(..., skip_invalid_indices=True) that automatically ignores any invalid j values?

That might be a good idea, actually; it could make the formulation of many connection schemes easier if you don't have to worry about the borders.

@thesamovar
Member Author

OK, code, tests, documentation and modified examples done for skip_if_invalid (seemed a slightly clearer syntax).
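
For reference, a small usage sketch of how the generator syntax combined with skip_if_invalid reads in user code (group sizes and indices are just illustrative):

from brian2 import NeuronGroup, Synapses

G = NeuronGroup(100, 'v : 1')
S = Synapses(G, G)
# connect each neuron i to targets j = i-3 ... i+3; candidate indices that
# fall outside the target group are silently skipped rather than raising an error
S.connect(j='i+k for k in range(-3, 4)', skip_if_invalid=True)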

@mstimberg
Member

OK, code, tests, documentation and modified examples done for skip_if_invalid (seemed a slightly clearer syntax).

Nice, this makes complicated expressions like the Gaussian example quite a bit clearer.

@thesamovar
Member Author

Oh, good spot!

@mstimberg
Member

Oh, good spot!

the test you wrote was actually failing for numpy ;)

Commit added to the PR (title truncated): "[ci skip] ….connect`"

This will make things clearer for users using the outdated way of connecting with indices (e.g. `syn.connect([0,1], [1, 2])`)
@mstimberg
Member

I fixed some last small issues; otherwise I am fine with everything. All tests and examples work on my machine. After waiting for the test runs to finish (which will unfortunately still take quite some time), this should finally be ready to merge! This will be a major new feature, worthy of a point release on its own :)

@coveralls

Coverage increased (+5.2%) to 92.665% when pulling 53e724d on efficient_synapse_creation into 70c0644 on master.

@thesamovar
Member Author

Good news! It passes the statistical tests for connect(p=...) for all runtime targets. I wasn't sure if there was a reasonably time efficient way to do this for standalone mode, and in any case it will probably fail the test given how crappy the RNG is for standalone. Do you want to try doing that?

I'm now happy for this to be merged once tests pass. I did a long and standalone run on my computer and it was fine. Do you want to do the same on linux?

@coveralls

Coverage increased (+5.2%) to 92.665% when pulling 4a1cdc9 on efficient_synapse_creation into 70c0644 on master.

@mstimberg
Member

Good news! It passes the statistical tests for connect(p=...) for all runtime targets. I wasn't sure if there was a reasonably time efficient way to do this for standalone mode, and in any case it will probably fail the test given how crappy the RNG is for standalone. Do you want to try doing that?

Thanks, it is looking fine on my machine as well; I had one "Overall fail" once, but nothing that seemed to indicate a systematic problem. I hacked together a quick solution to run the test in standalone (compiling it once and then re-running the main file, which will always start with a new seed) and it passed as well.

I'm now happy for this to be merged once tests pass. I did a long and standalone run on my computer and it was fine. Do you want to do the same on linux?

I had already run this, so all should be good. The tests are not shown as passing on the github issue because the last commit has [ci skip], but the last tested commit successfully passed all tests. So we are good to go, I'll merge. I am really happy with the final state of this PR -- now all that remains is to wait for the user reports that show that we forgot something important ;-)

@mstimberg merged commit 5c036df into master Apr 9, 2016
@mstimberg deleted the efficient_synapse_creation branch April 9, 2016 17:56
@thesamovar
Member Author

Standalone passed? Wow! I'm impressed. :)

Anyway, great that this is finally done! What a relief.
