running LipSDP on a single layer #4
Hi,

I am wondering if it is possible to apply your method to a single layer, i.e., relu(Wx + b). When I try to give LipSDP a single layer, I get an "Inner matrix dimensions must agree." error. More specifically, I used a script along the lines sketched below (adapted from the example code in the README) to create the weight file, and running the solver on it produces this error.
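A minimal sketch of that weight-file script, assuming the `scipy.io.savemat` layout used in the README (the dimensions and file name here are placeholders):

```python
import numpy as np
from scipy.io import savemat

# one layer: relu(W x + b), so the network has a single weight matrix
n_in, n_out = 10, 20
W = np.random.rand(n_out, n_in)

# store the weights as an object array so savemat writes a MATLAB cell array
weights = np.empty((1,), dtype=object)
weights[0] = W
savemat('single_layer.mat', {'weights': weights})
```

I then invoked the solver on this file along the lines of `python solve_sdp.py --form neuron --weight-path single_layer.mat`, with the flags as in the README.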
I also know that there is a `--split` option, and my understanding is that with `--split --split-size 1`, LipSDP would be applied to each layer individually and the per-layer Lipschitz constants multiplied together. When I use this option on the one-layer network, no error is thrown, but a Lipschitz constant of 0 is returned.

Comments

To answer your first question, the program expects at least two linear transformations in the definition of a neural network, i.e. f(x) = W_2 * phi(W_1 * x + b_1) + b_2. So for your use case, I would recommend setting W_2 = I (the identity matrix) and b_2 = 0 (the zero vector). Let me know if that works for you. To answer the second question, the `--split-size` argument expects an integer >= 2. That does sound like a bug, though -- perhaps the easiest fix is to ensure that `--split` is only used for networks of appropriate size.
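A minimal sketch of that workaround, under the same assumed savemat layout as above (dimensions and file name are placeholders):

```python
import numpy as np
from scipy.io import savemat

n_in, n_out = 10, 20
W1 = np.random.rand(n_out, n_in)  # the single "real" layer
W2 = np.eye(n_out)                # identity output layer: f(x) = I * relu(W1 x + b1) + 0

# two weight matrices, so the solver sees the expected two linear transformations
weights = np.empty((2,), dtype=object)
weights[0], weights[1] = W1, W2
savemat('single_layer_padded.mat', {'weights': weights})
```

As far as I can tell, only the weight matrices go into the file; the Lipschitz constant does not depend on the bias terms, so b_1 and b_2 never need to be written out.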
Ok, that makes sense, thanks for the info. So how do you specify the bias of a layer? Looking at your …

Hi @arobey1, is the number of hidden fc layers in `mnist_weights.mat` 5, as shown in the paper, or is it 2? On loading the weights, I am getting the latter.
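One way to check directly (a sketch, assuming the weights load back as a MATLAB cell array under the key `weights`):

```python
from scipy.io import loadmat

# a 1xN MATLAB cell array loads back as a (1, N) object array
weights = loadmat('mnist_weights.mat')['weights'][0]
print(len(weights))  # number of weight matrices (= hidden layers + 1)
for W in weights:
    print(W.shape)
```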