Replies: 9 comments 3 replies
-
Wow, I have not tried that many epochs before. I've noticed that in direct comparisons between the SmartAmp and SmartAmpPro models, the SmartAmp (WaveNet) has a more natural sound, although on higher distortion WaveNet can't quite grab on to the signal, where LSTM seems to have an easier time with it up to a point. LSTM has been around longer.
I'm assuming the cab sim adds an impulse response? That is like a time effect that would probably give pedalnet difficulty, the same as using an actual mic on an amp, but simulated. Feel free to share the wav files or resulting models!
Good guesses on the learning rate. What you are describing is called an adaptive or dynamic learning rate, but it's not implemented here. That would be a good thing to look at, and I also suspect it would reduce the training times. See this part of the PyTorch documentation: https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
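To sketch what that docs page is describing, here's roughly what a scheduler could look like in a generic training loop. The model, data, and numbers below are placeholders for illustration, not pedalnet's actual code:

```python
import torch

# Placeholder model and optimizer -- stand-ins, not PedalNetRT's WaveNet.
model = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Halve the learning rate whenever the loss stops improving for 100 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=100
)

for epoch in range(15000):
    optimizer.zero_grad()
    x = torch.randn(64, 1)      # stand-in for a batch of input audio
    y = torch.tanh(2.0 * x)     # stand-in for the target (amp output)
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss)        # ReduceLROnPlateau needs the monitored metric
```

ReduceLROnPlateau is only one option; that page also covers step, exponential, and cosine schedules.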
-
Hi Keith,
Yes, right on, the cab is an impulse response. OK great, I think I'm starting to understand some of this stuff finally, haha!
Here is a Dropbox link that includes a Reaper session containing the inputs and outputs, as well as a couple of frozen tracks with cabs added on afterwards. I also copied the .json and .ckpt files into the directory. That should be all that's needed to use the model, right?
https://www.dropbox.com/sh/f3bzctm9c47xx2t/AADKiJNuQifH4imS_hR_Bp12a?dl=0
-
Nice! I tried out the model, and I could tell it had difficulty training; you might have more luck on the high distortion with guitarLSTM/SmartAmpPro. Is that from Neural’s Granophyre plugin?
-
Yessir, that’s the plugin it came from 👍 I’ll try the LSTM tomorrow and check back in on how it goes.
I just stumbled across this article and repo as well:
https://musiclaboratorydotcom.wordpress.com/2019/01/05/a-deep-learning-approach-to-guitar-amp-modeling-part-1-intro-and-initial-attempts/
https://github.com/sdatkinson/neural-amp-modeler
I ran a quick test and it seemed to home in on the neighborhood of the sound relatively quickly. I'll be testing with that model as well; hopefully I can keep my GPU busy during work hours and check the results during breaks and in the evening. Stephen just updated the code base to use PyTorch today.
It's great to have such a sampling of different approaches; I think it's helping me absorb some of the concepts. What an exciting time to be alive, haha!
-
Very cool, thanks for sharing! It’s such a fun application for AI; I expect lots of people will be trying out different variations on it.
-
Hi Keith,
I just wrapped up a training run in PedalNetRT with 15000 epochs:
https://www.dropbox.com/sh/nzfoweljztxba2b/AAAlFzLCPZ19QNFzmqI_Dt1-a?dl=0
Ignore the misnamed 'grano' files; these are from the Neural DSP Gojira plugin. Also ignore my terrible skill in dialing in tone and playing, haha.
Let me know what you think if you have a sec. I did notice that from about 10k to 15k epochs (and most likely long before 10k, actually), the loss didn't really drop very much at all. I'm thinking this might be improved by using an adaptive learning rate, what do you think? I started looking briefly at pytorch_lightning and how to implement that, and it does look manageable at first glance, but I haven't dug into it too much yet. I'm wondering if that would be a way to get the training time down as well.
Listening to the output file vs the prediction, I can't really hear any difference until I add a cab. Once the cab is added, it becomes pretty evident that the bass/low end could be better.
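Here's roughly what I was looking at in pytorch_lightning, just a sketch with a made-up module and hyperparameters, not the actual PedalNetRT model class:

```python
import torch
import pytorch_lightning as pl

class AmpModel(pl.LightningModule):
    """Illustrative LightningModule stub -- not the real PedalNetRT model."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
        )

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y)
        self.log("train_loss", loss)  # metric the scheduler monitors
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=3e-3)
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, factor=0.5, patience=50
        )
        # Lightning steps the scheduler and feeds it the monitored metric.
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "train_loss"},
        }
```

From what I can tell, the Trainer then steps the scheduler automatically based on the logged metric, so it shouldn't need changes to the training loop itself.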
-
@GuitarML can you enable discussions and convert this to a discussion? It would be helpful, thanks.
-
That's a great idea; discussions are now enabled. The subject of this thread has kind of evolved, so I'll let @ThreepE0 start a new discussion and summarize where it will continue. I'm also going to enable discussions on the other repos.
-
Cool, thanks. Also, issues can be converted to discussions: https://docs.github.com/en/discussions/managing-discussions-for-your-community/moderating-discussions#converting-an-issue-to-a-discussion
-
Hello again,
I'm not sure if this is the best place to put this (again, sorry), but after running quite a few trainings I'm noticing that some strange things happen when the cab sim is on in the output wav file. I know that the simpler the input and output, the better, but I did want to mention this in case it helps anyone else running the training, or in case a method that handles this better comes up.
I'm running some fairly long trainings (15000 epochs) as we speak. I notice that if I have the cab sim on, I get a prediction that is quite far from the original: both quieter and with much, much lower gain. I can share example files if that would help.
My suspicion is that the cab sim may be introducing some time-based distortion or delay, and this model might not care for that.
I'm finishing my 15000-epoch training now with the cab sim off, and will check back in once that's done. I did run a 9000-epoch training with the cab sim off, and the results were pretty good. In the included example wav files, I did notice that even with 9000 epochs, the string-slide sound cut in and out a bit, like you'd expect from a bit crusher effect. It was an improvement over the default 1500 though.
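To illustrate what I mean by time-based: as I understand it, a cab IR gets applied by convolving the signal with the impulse response, so every output sample depends on a window of past samples instead of just the current one. A toy example with synthetic data (not my actual files), assuming numpy and scipy:

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 44100
dry = np.random.randn(sr)                  # stand-in for one second of amp output
ir = np.exp(-np.linspace(0, 8, 2048))      # toy 2048-sample "cab" impulse response
ir *= np.random.randn(2048) * 0.1 + 1.0    # roughen it up a little

# Each wet sample is a weighted sum of the previous 2048 dry samples, so the cab
# adds memory/time-dependence on top of the amp's (mostly) instantaneous waveshaping.
wet = fftconvolve(dry, ir)[: len(dry)]
```

If the model's receptive field is shorter than the IR tail, I'd guess it just can't capture that part, which might explain the quieter, lower-gain prediction.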
All that being said, I wonder if there's a way to cut the training time down. I know the LSTM model is out there, but I haven't had great results with it yet. I do want to try it again with different parameters, but so far I've gotten better results with this model. LSTM is considered an older technology when compared to WaveNet, correct?
Without being very confident in what I'm talking about here, please bear with me: I'm thinking about the learning rate. Is there a way to use a large learning rate at the beginning, to sort of pepper the field with guesses, and then use the closest successful guess to train with a smaller learning rate? Maybe this is already built into how the model works, I don't know.