
ArgumentOutOfRangeException during training #3

Closed
bratao opened this issue Feb 11, 2016 · 2 comments
bratao commented Feb 11, 2016

Hello !

First of all, thank you for publishing such a high-quality project. It is the absolute state of the art! Thank you!!!

I was evaluating this on a small dataset, but I get this ArgumentOutOfRangeException.

The error is in SimpleRNN.cs, at this part:

Logger.WriteLine("Saving feature2hidden weights...");
saveMatrixBin(mat_feature2hidden, fo);

This matrix, mat_feature2hidden, has a Height of 200, but its Width is 0.

The command:
.\Bin\RNNSharpConsole.exe -mode train -trainfile bruno-data.txt -modelfile .\bruno-model.bin -validfile bruno-valid.txt -ftrfile .\config_bruno.txt -tagfile .\bruno-tags.txt -modeltype 0 -layersize 200 -alpha 0.1 -crf 1 -maxiter 0 -savestep 200K -dir 1 -dropout 0

The config:
#The file name for template feature set
TFEATURE_FILENAME:.\bruno-features
#The context range for template feature set. 0 means only the current token is used
TFEATURE_CONTEXT: 0

And the error:

info,11/02/2016 14:25:38 Distortion: 0,0873744581246124, vqSize: 256
info,11/02/2016 14:25:38 Saving feature2hidden weights...
info,11/02/2016 14:25:38 Saving matrix with VQ 256...
info,11/02/2016 14:25:38 Sorting data set (size: 0)...

Unhandled Exception: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
   at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)
   at AdvUtils.VarBigArray`1.get_Item(Int64 offset)
   at AdvUtils.VectorQuantization.BuildCodebook(Int32 vqSize)
   at RNNSharp.RNN.saveMatrixBin(Matrix`1 mat, BinaryWriter fo, Boolean BuildVQ) in C:\github\RNNSharp\RNNSharp\RNN.cs:line 138
   at RNNSharp.SimpleRNN.saveNetBin(String filename) in C:\github\RNNSharp\RNNSharp\SimpleRNN.cs:line 517
   at RNNSharp.BiRNN.saveNetBin(String filename) in C:\github\RNNSharp\RNNSharp\BiRNN.cs:line 499
   at RNNSharp.RNNEncoder.Train() in C:\github\RNNSharp\RNNSharp\RNNEncoder.cs:line 107
   at RNNSharpConsole.Program.Main(String[] args) in C:\github\RNNSharp\RNNSharpConsole\Program.cs:line 279
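The trace points at VectorQuantization.BuildCodebook indexing into an empty data set (note the "Sorting data set (size: 0)" line just above the exception). A minimal sketch of how a codebook builder of this kind fails on empty input; this is an illustration only, not RNNSharp's actual implementation:

```csharp
using System;
using System.Collections.Generic;

static class Vq
{
    // Illustrative codebook builder: seed centroids are sampled evenly
    // across the sorted data. With an empty data set, the first index
    // access throws the same ArgumentOutOfRangeException seen in the log.
    public static double[] BuildCodebook(List<double> data, int vqSize)
    {
        data.Sort(); // compare: "Sorting data set (size: 0)..." in the log
        int k = Math.Max(1, Math.Min(vqSize, data.Count));
        var codebook = new double[k];
        for (int i = 0; i < k; i++)
            codebook[i] = data[(int)((long)i * data.Count / k)]; // throws when data is empty
        return codebook;
    }
}
```

Because mat_feature2hidden has Width 0, no values are ever collected, and the codebook build hits exactly this case.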

Maybe it is because I'm not using a word embedding?

Thank you again for this great project !


bratao commented Feb 11, 2016

It was the missing word embedding !
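For anyone hitting the same thing: the feature config above defines only template features, so the word-embedding part of the input (and with it mat_feature2hidden) stays empty. A sketch of what the config might look like with a word-embedding feature added; the WORDEMBEDDING_* key names below are assumptions based on RNNSharp's README style, so check the project docs for the exact spelling:

```
#The file name for template feature set
TFEATURE_FILENAME:.\bruno-features
#The context range for template feature set. 0 means only the current token
TFEATURE_CONTEXT: 0
#The encoded word-embedding file (assumed key name)
WORDEMBEDDING_FILENAME:.\bruno-wordvec.bin
#Use the embedding of the current token only (assumed key name)
WORDEMBEDDING_CONTEXT: 0
```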

@bratao bratao closed this as completed Feb 11, 2016
zhongkaifu (Owner) commented

Thanks. RNNSharp should not crash even when the word embedding feature is not used. I will fix it.
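One possible shape for such a fix, sketched with simplified stand-ins for RNNSharp's Matrix type and saveMatrixBin (the real signatures may differ): skip vector quantization when the matrix has no data, rather than building a codebook from an empty set.

```csharp
using System;
using System.IO;

// Simplified stand-in for RNNSharp's Matrix<T> (an assumption, not the real type).
class Matrix
{
    public int Height;
    public int Width;
    public float[][] Rows = Array.Empty<float[]>();
}

static class MatrixIO
{
    // Defensive save: only attempt VQ when there is data to quantize.
    public static void SaveMatrixBin(Matrix mat, BinaryWriter fo, bool buildVQ = true)
    {
        fo.Write(mat.Height);
        fo.Write(mat.Width);

        // Guard: a VQ codebook needs at least one value to cluster.
        bool useVQ = buildVQ && mat.Height > 0 && mat.Width > 0;
        fo.Write(useVQ);

        // Plain dump shown here; a VQ branch would write codebook + indexes instead.
        for (int i = 0; i < mat.Height; i++)
            for (int j = 0; j < mat.Width; j++)
                fo.Write(mat.Rows[i][j]);
    }
}
```

With this guard, a 200x0 matrix serializes as an empty (non-VQ) block instead of crashing during codebook construction.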

zhongkaifu added a commit that referenced this issue Feb 15, 2016
#2. Improve BiRNN learning process
#3. Support to train model without validated corpus
zhongkaifu added a commit that referenced this issue Feb 24, 2016
#2. Optimize LSTM encoding to improve performance significantly
#3. Apply dynamic learning rate
zhongkaifu added a commit that referenced this issue Jul 8, 2016
…m input layer.

#2. Refactoring dropout layer and output layer
#3. Refactoring layer initialization
zhongkaifu added a commit that referenced this issue Nov 30, 2016
#2. Fix bug in softmax output layer when computing hidden layer value
#3. Refactoring code
zhongkaifu added a commit that referenced this issue Dec 22, 2016
#2. Refactor configuration file and command line parameter
#3. use SIMD for backward pass in output layer
zhongkaifu added a commit that referenced this issue Feb 5, 2017
… is worse than LSTM

#2. Fix backward bug in Dropout layer
#3. Refactoring code
zhongkaifu added a commit that referenced this issue Feb 5, 2017
…hidden layer is more than 1

#2. Improve training part of bi-directional RNN. We don't re-run forward before updating weights
#3. Fix bugs in Dropout layer
#4. Change hidden layer settings in configuration file.
#5. Refactoring code
ericxsun pushed a commit to ericxsun/RNNSharp that referenced this issue Feb 9, 2017
…lled when running validation

zhongkaifu#2. Support model vector quantization reduce model size to 1/4 original
zhongkaifu#3. Refactoring code and speed up training
zhongkaifu#4. Fixing feature extracting bug
zhongkaifu added a commit that referenced this issue Mar 8, 2017
#2. Refactoring code
#3. Make RNNDecoder thread-safe
zhongkaifu added a commit that referenced this issue Apr 22, 2017
#2. Code refactoring
#3. Performance improvement
zhongkaifu added a commit that referenced this issue May 3, 2017
#2. Improve training performance by ~300%
#3. Fix learning rate update bug
#4. Apply SIMD instruction to update error in layers
#5. Code refactoring