[ENH] The default value for use_mini_batch_size should be set to False #1065

Merged · 1 commit · Jan 18, 2024
aeon/classification/deep_learning/_fcn.py (2 additions & 2 deletions)
@@ -42,7 +42,7 @@ class FCNClassifier(BaseDeepClassifier):
         The number of epochs to train the model.
     batch_size : int, default = 16
         The number of samples per gradient update.
-    use_mini_batch_size : bool, default = True
+    use_mini_batch_size : bool, default = False
         Whether or not to use the mini batch size formula.
     random_state : int or None, default = None
         Seed for random number generation.
@@ -117,7 +117,7 @@ def __init__(
         last_file_name="last_model",
         n_epochs=2000,
         batch_size=16,
-        use_mini_batch_size=True,
+        use_mini_batch_size=False,
         callbacks=None,
         verbose=False,
         loss="categorical_crossentropy",
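
For context, the "mini batch size formula" mentioned in the docstring caps the batch size relative to the number of training cases. Below is a minimal sketch of what this default change means in practice; the helper name is hypothetical, and the formula min(n_cases // 10, batch_size) is an assumption based on the original dl4tsc-style FCN code, so verify against aeon's _fcn.py before relying on it.

# Hypothetical helper illustrating the effect of use_mini_batch_size.
# Assumption: the formula is min(n_cases // 10, batch_size), as in
# dl4tsc-style implementations; check aeon's source for the exact rule.
def effective_batch_size(n_cases: int, batch_size: int = 16,
                         use_mini_batch_size: bool = False) -> int:
    """Return the batch size actually passed to model.fit."""
    if use_mini_batch_size:
        # Mini batch formula: use at most a tenth of the data
        # per gradient update.
        return int(min(n_cases // 10, batch_size))
    return batch_size

# New default (False): 50 training cases train with batch_size=16.
# Old default (True): the same data would shrink the batch to 5.
print(effective_batch_size(50))                            # 16
print(effective_batch_size(50, use_mini_batch_size=True))  # 5
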
aeon/regression/deep_learning/_fcn.py (2 additions & 2 deletions)
@@ -42,7 +42,7 @@ class FCNRegressor(BaseDeepRegressor):
         the number of epochs to train the model
     batch_size : int, default = 16
         the number of samples per gradient update.
-    use_mini_batch_size : bool, default = True,
+    use_mini_batch_size : bool, default = False,
         whether or not to use the mini batch size formula
     random_state : int or None, default=None
         Seed for random number generation.
@@ -120,7 +120,7 @@ def __init__(
         last_file_name="last_model",
         n_epochs=2000,
         batch_size=16,
-        use_mini_batch_size=True,
+        use_mini_batch_size=False,
         callbacks=None,
         verbose=False,
         output_activation="linear",
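
A short usage sketch of the new default; the import paths are assumed from the file locations in this diff (public re-exports of the private _fcn.py modules).

# After this change, both estimators keep the batch_size they are given:
from aeon.classification.deep_learning import FCNClassifier
from aeon.regression.deep_learning import FCNRegressor

clf = FCNClassifier(n_epochs=2000, batch_size=16)
reg = FCNRegressor(n_epochs=2000, batch_size=16)

# Users who relied on the old behaviour can opt back in explicitly:
clf_old = FCNClassifier(use_mini_batch_size=True)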