Welcome to MetaPerceptron’s documentation!¶
MetaPerceptron (Metaheuristic-optimized Multi-Layer Perceptron) is a Python library that implements the traditional Multi-Layer Perceptron and its variants. These include Metaheuristic-optimized MLP models (GA, PSO, WOA, TLO, DE, …) and Gradient Descent-optimized MLP models (SGD, Adam, Adadelta, Adagrad, …). It provides a comprehensive list of optimizers for training MLP models and is compatible with the Scikit-Learn library. With MetaPerceptron, you can perform searches and hyperparameter tuning using the features provided by Scikit-Learn.
Free software: GNU General Public License (GPL) V3
Provided estimators: MlpRegressor, MlpClassifier, MhaMlpRegressor, MhaMlpClassifier
Total Metaheuristic-based MLP Regressors: > 200 models
Total Metaheuristic-based MLP Classifiers: > 200 models
Total Gradient Descent-based MLP Regressors: 12 models
Total Gradient Descent-based MLP Classifiers: 12 models
Supported performance metrics: >= 67 (47 regression and 20 classification metrics)
Supported objective functions (as fitness or loss functions): >= 67 (47 regression and 20 classification)
Documentation: https://metaperceptron.readthedocs.io
Python versions: >= 3.8
Dependencies: numpy, scipy, scikit-learn, pandas, mealpy, permetrics, torch, skorch
Installation¶
Install the current PyPI release:
$ pip install metaperceptron==1.1.0
Install directly from source code:
$ git clone https://github.com/thieu1995/metaperceptron.git
$ cd metaperceptron
$ python setup.py install
In case you want to install the development version from GitHub:
$ pip install git+https://github.com/thieu1995/metaperceptron
After installation, you can import MetaPerceptron as any other Python module:
$ python
>>> import metaperceptron
>>> metaperceptron.__version__
Examples¶
In this section, we will explore the usage of the MetaPerceptron models with the help of a dataset. While all the preprocessing steps mentioned below can be replicated using Scikit-Learn, we have implemented some utility functions to make usage more convenient and faster.
Using MetaPerceptron as a normal library together with scikit-learn:
#### Step 1: Importing the libraries
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, LabelEncoder
from metaperceptron import MlpRegressor, MlpClassifier, MhaMlpRegressor, MhaMlpClassifier
#### Step 2: Reading the dataset
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values
#### Step 3: Split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=100)
#### Step 4: Feature Scaling
scaler_X = MinMaxScaler()
scaler_X.fit(X_train)
X_train = scaler_X.transform(X_train)
X_test = scaler_X.transform(X_test)
le_y = LabelEncoder() # This is for classification problem only
le_y.fit(y)
y_train = le_y.transform(y_train)
y_test = le_y.transform(y_test)
#### Step 5: Fitting MLP-based model to the dataset
##### 5.1: Use standard MLP model for regression problem
regressor = MlpRegressor(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="MSE",
max_epochs=1000, batch_size=32, optimizer="SGD", optimizer_paras=None, verbose=False)
regressor.fit(X_train, y_train)
##### 5.2: Use standard MLP model for classification problem
classifier = MlpClassifier(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="NLLL",
max_epochs=1000, batch_size=32, optimizer="SGD", optimizer_paras=None, verbose=False)
classifier.fit(X_train, y_train)
##### 5.3: Use Metaheuristic-based MLP model for regression problem
print(MhaMlpRegressor.SUPPORTED_OPTIMIZERS)
print(MhaMlpRegressor.SUPPORTED_REG_OBJECTIVES)
opt_paras = {"name": "GA", "epoch": 10, "pop_size": 30}
regressor = MhaMlpRegressor(hidden_size=50, act1_name="tanh", act2_name="sigmoid",
obj_name="MSE", optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=True)
regressor.fit(X_train, y_train)
##### 5.4: Use Metaheuristic-based MLP model for classification problem
print(MhaMlpClassifier.SUPPORTED_OPTIMIZERS)
print(MhaMlpClassifier.SUPPORTED_CLS_OBJECTIVES)
opt_paras = {"name": "GA", "epoch": 10, "pop_size": 30}
classifier = MhaMlpClassifier(hidden_size=50, act1_name="tanh", act2_name="softmax",
obj_name="CEL", optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=True)
classifier.fit(X_train, y_train)
#### Step 6: Predicting a new result
y_pred = regressor.predict(X_test)
y_pred_cls = classifier.predict(X_test)
y_pred_label = le_y.inverse_transform(y_pred_cls)
#### Step 7: Calculate metrics using the score or scores functions
print("Try my AS metric with score function")
print(regressor.score(X_test, y_test, method="AS"))
print("Try my multiple metrics with scores function")
print(classifier.scores(X_test, y_test, list_methods=["AS", "PS", "F1S", "CEL", "BSL"]))
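Because these estimators follow the Scikit-Learn API, they can also be plugged into Scikit-Learn's model-selection tools, as the introduction mentions. Below is a minimal sketch (not part of the official examples) of hyperparameter tuning with GridSearchCV, reusing X_train and y_train from the steps above; the grid values are purely illustrative.
from sklearn.model_selection import GridSearchCV
param_grid = {
    "hidden_size": [20, 50],           # illustrative values
    "act1_name": ["tanh", "relu"],
}
searcher = GridSearchCV(
    estimator=MlpRegressor(obj_name="MSE", max_epochs=100, optimizer="SGD", verbose=False),
    param_grid=param_grid,
    scoring="neg_mean_squared_error",  # scikit-learn maximizes scores, so use the negated MSE
    cv=3,
)
searcher.fit(X_train, y_train)
print(searcher.best_params_)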
Using all of the utilities that MetaPerceptron provides:
#### Step 1: Importing the libraries
from metaperceptron import Data, MlpRegressor, MlpClassifier, MhaMlpRegressor, MhaMlpClassifier
from sklearn.datasets import load_digits
#### Step 2: Reading the dataset
X, y = load_digits(return_X_y=True)
data = Data(X, y)
#### Step 3: Split the dataset into train and test sets
data.split_train_test(test_size=0.2, shuffle=True, random_state=100)
#### Step 4: Feature Scaling
data.X_train, scaler_X = data.scale(data.X_train, scaling_methods=("minmax", ))
data.X_test = scaler_X.transform(data.X_test)
data.y_train, scaler_y = data.encode_label(data.y_train) # This is for classification problem only
data.y_test = scaler_y.transform(data.y_test)
#### Step 5: Fitting MLP-based model to the dataset
##### 5.1: Use standard MLP model for regression problem
regressor = MlpRegressor(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="MSE",
max_epochs=1000, batch_size=32, optimizer="SGD", optimizer_paras=None, verbose=False)
regressor.fit(data.X_train, data.y_train)
##### 5.2: Use standard MLP model for classification problem
classifier = MlpClassifier(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="NLLL",
max_epochs=1000, batch_size=32, optimizer="SGD", optimizer_paras=None, verbose=False)
classifier.fit(data.X_train, data.y_train)
##### 5.3: Use Metaheuristic-based MLP model for regression problem
print(MhaMlpRegressor.SUPPORTED_OPTIMIZERS)
print(MhaMlpRegressor.SUPPORTED_REG_OBJECTIVES)
opt_paras = {"name": "GA", "epoch": 10, "pop_size": 30}
regressor = MhaMlpRegressor(hidden_size=50, act1_name="tanh", act2_name="sigmoid",
obj_name="MSE", optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=True)
regressor.fit(data.X_train, data.y_train)
##### 5.4: Use Metaheuristic-based MLP model for classification problem
print(MhaMlpClassifier.SUPPORTED_OPTIMIZERS)
print(MhaMlpClassifier.SUPPORTED_CLS_OBJECTIVES)
opt_paras = {"name": "GA", "epoch": 10, "pop_size": 30}
classifier = MhaMlpClassifier(hidden_size=50, act1_name="tanh", act2_name="softmax",
obj_name="CEL", optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=True)
classifier.fit(data.X_train, data.y_train)
#### Step 6: Predicting a new result
y_pred = regressor.predict(data.X_test)
y_pred_cls = classifier.predict(data.X_test)
y_pred_label = scaler_y.inverse_transform(y_pred_cls)
#### Step 7: Calculate metrics using the score or scores functions
print("Try my AS metric with score function")
print(regressor.score(data.X_test, data.y_test, method="AS"))
print("Try my multiple metrics with scores function")
print(classifier.scores(data.X_test, data.y_test, list_methods=["AS", "PS", "F1S", "CEL", "BSL"]))
A real-world dataset contains features that vary in magnitude, units, and range. We suggest performing normalization when the scale of a feature is irrelevant or misleading. Feature scaling helps to normalize the data within a particular range.
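As a quick sketch of what scaling does, using plain Scikit-Learn objects (the array values are placeholders):
import numpy as np
from sklearn.preprocessing import MinMaxScaler
X = np.array([[1.0, 2000.0], [2.0, 3000.0], [3.0, 5000.0]])  # two features on very different scales
print(MinMaxScaler().fit_transform(X))  # both columns now lie in [0, 1]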
metaperceptron package¶
metaperceptron.core package¶
metaperceptron.core.base_mlp_numpy module¶
- class metaperceptron.core.base_mlp_numpy.BaseMhaMlp(hidden_size=50, act1_name='tanh', act2_name='sigmoid', obj_name=None, optimizer='OriginalWOA', optimizer_paras=None, verbose=True)[source]¶
Bases:
BaseEstimator
Defines the most general class for Metaheuristic-based MLP models; it inherits the BaseEstimator class of the Scikit-Learn library.
- Parameters:
hidden_size (int, default=50) – The number of hidden nodes
act1_name (str, default='tanh') – Activation function for the hidden layer. The supported activation functions are: [“none”, “relu”, “leaky_relu”, “celu”, “prelu”, “gelu”, “elu”, “selu”, “rrelu”, “tanh”, “hard_tanh”, “sigmoid”, “hard_sigmoid”, “log_sigmoid”, “swish”, “hard_swish”, “soft_plus”, “mish”, “soft_sign”, “tanh_shrink”, “soft_shrink”, “hard_shrink”, “softmin”, “softmax”, “log_softmax”, “silu”]
act2_name (str, default='sigmoid') – Activation function for the output layer. The supported activation functions are: ["none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax", "silu"]
obj_name (None or str, default=None) – The name of the objective function for the problem, which depends on whether the problem is classification or regression.
optimizer (str or instance of Optimizer class (from the Mealpy library), default="OriginalWOA") – The metaheuristic algorithm used to train the MLP network. For the currently supported list, please check here: https://github.com/thieu1995/mealpy. If a custom optimizer is passed, make sure it is an instance of the Optimizer class.
optimizer_paras (None or dict of parameters, default=None) – The parameters for the optimizer object. If None, the optimizer's default parameters are used (defined at https://github.com/thieu1995/mealpy). If a dict is passed, make sure it contains at least the epoch and pop_size parameters.
verbose (bool, default=True) – Whether to print progress messages to stdout.
- CLS_OBJ_LOSSES = None¶
- SUPPORTED_ACTIVATIONS = ['none', 'relu', 'leaky_relu', 'celu', 'prelu', 'gelu', 'elu', 'selu', 'rrelu', 'tanh', 'hard_tanh', 'sigmoid', 'hard_sigmoid', 'log_sigmoid', 'swish', 'hard_swish', 'soft_plus', 'mish', 'soft_sign', 'tanh_shrink', 'soft_shrink', 'hard_shrink', 'softmin', 'softmax', 'log_softmax', 'silu']¶
- SUPPORTED_CLS_METRICS = {'AS': 'max', 'BSL': 'min', 'CEL': 'min', 'CKS': 'max', 'F1S': 'max', 'F2S': 'max', 'FBS': 'max', 'GINI': 'min', 'GMS': 'max', 'HL': 'min', 'HS': 'max', 'JSI': 'max', 'KLDL': 'min', 'LS': 'max', 'MCC': 'max', 'NPV': 'max', 'PS': 'max', 'ROC-AUC': 'max', 'RS': 'max', 'SS': 'max'}¶
- SUPPORTED_CLS_OBJECTIVES = {'AS': 'max', 'BSL': 'min', 'CEL': 'min', 'CKS': 'max', 'F1S': 'max', 'F2S': 'max', 'FBS': 'max', 'GINI': 'min', 'GMS': 'max', 'HL': 'min', 'HS': 'max', 'JSI': 'max', 'KLDL': 'min', 'LS': 'max', 'MCC': 'max', 'NPV': 'max', 'PS': 'max', 'ROC-AUC': 'max', 'RS': 'max', 'SS': 'max'}¶
- SUPPORTED_OPTIMIZERS = ['OriginalABC', 'OriginalACOR', 'AugmentedAEO', 'EnhancedAEO', 'ImprovedAEO', 'ModifiedAEO', 'OriginalAEO', 'MGTO', 'OriginalAGTO', 'DevALO', 'OriginalALO', 'OriginalAO', 'OriginalAOA', 'IARO', 'LARO', 'OriginalARO', 'OriginalASO', 'OriginalAVOA', 'OriginalArchOA', 'AdaptiveBA', 'DevBA', 'OriginalBA', 'DevBBO', 'OriginalBBO', 'OriginalBBOA', 'OriginalBES', 'ABFO', 'OriginalBFO', 'OriginalBMO', 'DevBRO', 'OriginalBRO', 'OriginalBSA', 'ImprovedBSO', 'OriginalBSO', 'CleverBookBeesA', 'OriginalBeesA', 'ProbBeesA', 'OriginalCA', 'OriginalCDO', 'OriginalCEM', 'OriginalCGO', 'DevCHIO', 'OriginalCHIO', 'OriginalCOA', 'OCRO', 'OriginalCRO', 'OriginalCSA', 'OriginalCSO', 'OriginalCircleSA', 'OriginalCoatiOA', 'JADE', 'OriginalDE', 'SADE', 'SAP_DE', 'DevDMOA', 'OriginalDMOA', 'OriginalDO', 'DevEFO', 'OriginalEFO', 'OriginalEHO', 'AdaptiveEO', 'ModifiedEO', 'OriginalEO', 'OriginalEOA', 'LevyEP', 'OriginalEP', 'CMA_ES', 'LevyES', 'OriginalES', 'Simple_CMA_ES', 'OriginalESOA', 'OriginalEVO', 'OriginalFA', 'DevFBIO', 'OriginalFBIO', 'OriginalFFA', 'OriginalFFO', 'OriginalFLA', 'DevFOA', 'OriginalFOA', 'WhaleFOA', 'OriginalFOX', 'OriginalFPA', 'BaseGA', 'EliteMultiGA', 'EliteSingleGA', 'MultiGA', 'SingleGA', 'OriginalGBO', 'DevGCO', 'OriginalGCO', 'OriginalGJO', 'OriginalGOA', 'DevGSKA', 'OriginalGSKA', 'Matlab101GTO', 'Matlab102GTO', 'OriginalGTO', 'GWO_WOA', 'IGWO', 'OriginalGWO', 'RW_GWO', 'OriginalHBA', 'OriginalHBO', 'OriginalHC', 'SwarmHC', 'OriginalHCO', 'OriginalHGS', 'OriginalHGSO', 'OriginalHHO', 'DevHS', 'OriginalHS', 'OriginalICA', 'OriginalINFO', 'OriginalIWO', 'DevJA', 'LevyJA', 'OriginalJA', 'DevLCO', 'ImprovedLCO', 'OriginalLCO', 'OriginalMA', 'OriginalMFO', 'OriginalMGO', 'OriginalMPA', 'OriginalMRFO', 'WMQIMRFO', 'OriginalMSA', 'DevMVO', 'OriginalMVO', 'OriginalNGO', 'ImprovedNMRA', 'OriginalNMRA', 'OriginalNRO', 'OriginalOOA', 'OriginalPFA', 'OriginalPOA', 'AIW_PSO', 'CL_PSO', 'C_PSO', 'HPSO_TVAC', 'LDW_PSO', 'OriginalPSO', 'P_PSO', 'OriginalPSS', 'DevQSA', 'ImprovedQSA', 'LevyQSA', 'OppoQSA', 'OriginalQSA', 'OriginalRIME', 'OriginalRUN', 'GaussianSA', 'OriginalSA', 'SwarmSA', 'DevSARO', 'OriginalSARO', 'DevSBO', 'OriginalSBO', 'DevSCA', 'OriginalSCA', 'QleSCA', 'OriginalSCSO', 'ImprovedSFO', 'OriginalSFO', 'L_SHADE', 'OriginalSHADE', 'OriginalSHIO', 'OriginalSHO', 'ImprovedSLO', 'ModifiedSLO', 'OriginalSLO', 'DevSMA', 'OriginalSMA', 'DevSOA', 'OriginalSOA', 'OriginalSOS', 'DevSPBO', 'OriginalSPBO', 'OriginalSRSR', 'DevSSA', 'OriginalSSA', 'OriginalSSDO', 'OriginalSSO', 'OriginalSSpiderA', 'OriginalSSpiderO', 'OriginalSTO', 'OriginalSeaHO', 'OriginalServalOA', 'OriginalTDO', 'DevTLO', 'ImprovedTLO', 'OriginalTLO', 'OriginalTOA', 'DevTPO', 'OriginalTS', 'OriginalTSA', 'OriginalTSO', 'EnhancedTWO', 'LevyTWO', 'OppoTWO', 'OriginalTWO', 'DevVCS', 'OriginalVCS', 'OriginalWCA', 'OriginalWDO', 'OriginalWHO', 'HI_WOA', 'OriginalWOA', 'OriginalWaOA', 'OriginalWarSO', 'OriginalZOA']¶
- SUPPORTED_REG_METRICS = {'A10': 'max', 'A20': 'max', 'A30': 'max', 'ACOD': 'max', 'APCC': 'max', 'AR': 'max', 'AR2': 'max', 'CI': 'max', 'COD': 'max', 'COR': 'max', 'COV': 'max', 'CRM': 'min', 'DRV': 'min', 'EC': 'max', 'EVS': 'max', 'GINI': 'min', 'GINI_WIKI': 'min', 'JSD': 'min', 'KGE': 'max', 'MAAPE': 'min', 'MAE': 'min', 'MAPE': 'min', 'MASE': 'min', 'ME': 'min', 'MRB': 'min', 'MRE': 'min', 'MSE': 'min', 'MSLE': 'min', 'MedAE': 'min', 'NNSE': 'max', 'NRMSE': 'min', 'NSE': 'max', 'OI': 'max', 'PCC': 'max', 'PCD': 'max', 'R': 'max', 'R2': 'max', 'R2S': 'max', 'RAE': 'min', 'RMSE': 'min', 'RSE': 'min', 'RSQ': 'max', 'SMAPE': 'min', 'VAF': 'max', 'WI': 'max'}¶
- SUPPORTED_REG_OBJECTIVES = {'A10': 'max', 'A20': 'max', 'A30': 'max', 'ACOD': 'max', 'APCC': 'max', 'AR': 'max', 'AR2': 'max', 'CI': 'max', 'COD': 'max', 'COR': 'max', 'COV': 'max', 'CRM': 'min', 'DRV': 'min', 'EC': 'max', 'EVS': 'max', 'GINI': 'min', 'GINI_WIKI': 'min', 'JSD': 'min', 'KGE': 'max', 'MAAPE': 'min', 'MAE': 'min', 'MAPE': 'min', 'MASE': 'min', 'ME': 'min', 'MRB': 'min', 'MRE': 'min', 'MSE': 'min', 'MSLE': 'min', 'MedAE': 'min', 'NNSE': 'max', 'NRMSE': 'min', 'NSE': 'max', 'OI': 'max', 'PCC': 'max', 'PCD': 'max', 'R': 'max', 'R2': 'max', 'R2S': 'max', 'RAE': 'min', 'RMSE': 'min', 'RSE': 'min', 'RSQ': 'max', 'SMAPE': 'min', 'VAF': 'max', 'WI': 'max'}¶
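As a quick sketch, these class attributes can be inspected to pick a valid objective and optimizer before building a model; each entry maps a metric name to its optimization direction ("min" or "max"), following the dictionaries above.
from metaperceptron import MhaMlpRegressor
print(MhaMlpRegressor.SUPPORTED_REG_OBJECTIVES["RMSE"])       # 'min' -> lower is better
print(MhaMlpRegressor.SUPPORTED_CLS_OBJECTIVES["F1S"])        # 'max' -> higher is better
print("OriginalWOA" in MhaMlpRegressor.SUPPORTED_OPTIMIZERS)  # True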
- evaluate(y_true, y_pred, list_metrics=None)[source]¶
Return the list of performance metrics of the prediction.
- Parameters:
y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Predicted values for X.
list_metrics (list) – You can get metrics from Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
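A minimal usage sketch of evaluate, assuming model is an already fitted estimator; the arrays below are placeholders for your own data:
y_true = [3.0, -0.5, 2.0, 7.0]   # placeholder ground-truth values
y_pred = [2.5, 0.0, 2.0, 8.0]    # placeholder predictions
print(model.evaluate(y_true, y_pred, list_metrics=["RMSE", "MAE", "R2"]))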
- predict(X, return_prob=False)[source]¶
Inherits the predict function from the BaseMlpNumpy class, with one additional parameter: return_prob.
- Parameters:
X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The input data.
return_prob (bool, default=False) –
It is used for classification problems:
If True, the returned results are the probabilities for each sample
If False, the returned results are the predicted labels
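For example, assuming model is a fitted classifier and X_test is your test matrix:
y_labels = model.predict(X_test)                   # predicted class labels
y_probs = model.predict(X_test, return_prob=True)  # per-sample probabilities (classification only)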
- save_evaluation_metrics(y_true, y_pred, list_metrics=('RMSE', 'MAE'), save_path='history', filename='metrics.csv')[source]¶
Save evaluation metrics to a CSV file.
- Parameters:
y_true (array-like) – The ground truth data.
y_pred (array-like) – The predicted output.
list_metrics (list) – The list of evaluation metrics.
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='metrics.csv') – The name of the file; it must have a ".csv" extension.
- save_model(save_path='history', filename='model.pkl')[source]¶
Save the model to a pickle file.
- Parameters:
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='model.pkl') – The name of the file; it must have a ".pkl" extension.
- save_training_loss(save_path='history', filename='loss.csv')[source]¶
Save the loss (convergence) during the training process to a CSV file.
- Parameters:
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='loss.csv') – The name of the file; it must have a ".csv" extension.
- save_y_predicted(X, y_true, save_path='history', filename='y_predicted.csv')[source]¶
Save the predicted results to a CSV file.
- Parameters:
X (np.ndarray) – The feature data.
y_true (array-like) – The ground truth data.
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='y_predicted.csv') – The name of the file; it must have a ".csv" extension.
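A sketch of the save_* helpers above, assuming model is a fitted estimator and X_test/y_test are your test data; the files are written under the relative save_path:
model.save_model(save_path="history", filename="model.pkl")          # pickle the fitted model
model.save_training_loss(save_path="history", filename="loss.csv")   # convergence curve
model.save_y_predicted(X=X_test, y_true=y_test, save_path="history", filename="y_predicted.csv")
model.save_evaluation_metrics(y_true=y_test, y_pred=model.predict(X_test),
    list_metrics=("RMSE", "MAE"), save_path="history", filename="metrics.csv")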
- score(X, y, method=None)[source]¶
Return the metric of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
method (str, default="RMSE") – You can get all metrics from the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
result – The result of selected metric
- Return type:
float
- scores(X, y, list_methods=None)[source]¶
Return the list of metrics of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
list_methods (list, default=("MSE", "MAE")) – You can get all metrics from the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
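The difference between the two methods, as a sketch with a fitted model and your own test data:
rmse = model.score(X_test, y_test, method="RMSE")                    # a single float
metrics = model.scores(X_test, y_test, list_methods=["MSE", "MAE"])  # a dict of results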
- set_fit_request(*, lb: bool | None | str = '$UNCHANGED$', obj_weights: bool | None | str = '$UNCHANGED$', save_population: bool | None | str = '$UNCHANGED$', ub: bool | None | str = '$UNCHANGED$') → BaseMhaMlp¶
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
lb (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the lb parameter in fit.
obj_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the obj_weights parameter in fit.
save_population (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the save_population parameter in fit.
ub (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the ub parameter in fit.
- Returns:
self – The updated object.
- Return type:
object
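A minimal sketch of how such a request is typically used, assuming metadata routing is enabled globally and that lb/ub are bounds accepted by fit as listed above:
import sklearn
sklearn.set_config(enable_metadata_routing=True)  # routing is off by default
model = model.set_fit_request(lb=True, ub=True)   # ask meta-estimators (e.g. a Pipeline) to forward lb/ub to fit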
- set_predict_request(*, return_prob: bool | None | str = '$UNCHANGED$') → BaseMhaMlp¶
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
return_prob (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the return_prob parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- set_score_request(*, method: bool | None | str = '$UNCHANGED$') → BaseMhaMlp¶
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
method (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the method parameter in score.
- Returns:
self – The updated object.
- Return type:
object
- class metaperceptron.core.base_mlp_numpy.MlpNumpy(input_size=5, hidden_size=10, output_size=1, act1_name='tanh', act2_name='sigmoid')[source]¶
Bases:
object
This class defines the general Multi-Layer Perceptron (MLP) model using Numpy
- Parameters:
input_size (int, default=5) – The number of input nodes
hidden_size (int, default=10) – The number of hidden nodes
output_size (int, default=1) – The number of output nodes
act1_name (str, default='tanh') – Activation function for the hidden layer. The supported activation functions are: [“none”, “relu”, “leaky_relu”, “celu”, “prelu”, “gelu”, “elu”, “selu”, “rrelu”, “tanh”, “hard_tanh”, “sigmoid”, “hard_sigmoid”, “log_sigmoid”, “swish”, “hard_swish”, “soft_plus”, “mish”, “soft_sign”, “tanh_shrink”, “soft_shrink”, “hard_shrink”, “softmin”, “softmax”, “log_softmax”, “silu”]
act2_name (str, default='sigmoid') – Activation function for the output layer. The supported activation functions are: ["none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax", "silu"]
- fit(X, y)[source]¶
Fit the model to data matrix X and target(s) y.
- Parameters:
X (ndarray or sparse matrix of shape (n_samples, n_features)) – The input data.
y (ndarray of shape (n_samples,) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).
- Returns:
self – Returns a trained MLP model.
- Return type:
object
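A minimal sketch of constructing the raw NumPy network directly, under the constructor signature above; the random arrays are placeholders:
import numpy as np
from metaperceptron.core.base_mlp_numpy import MlpNumpy
net = MlpNumpy(input_size=4, hidden_size=10, output_size=1, act1_name="tanh", act2_name="sigmoid")
X = np.random.rand(20, 4)   # 20 placeholder samples with 4 features
y = np.random.rand(20, 1)   # placeholder targets
net.fit(X, y)               # returns a trained MLP model, per the docstring above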
metaperceptron.core.base_mlp_torch module¶
- class metaperceptron.core.base_mlp_torch.BaseMlpTorch(hidden_size=50, act1_name='tanh', act2_name='sigmoid', obj_name=None, max_epochs=1000, batch_size=32, optimizer='SGD', optimizer_paras=None, verbose=False)[source]¶
Bases:
BaseEstimator
Defines the most general class for traditional MLP models that inherits the BaseEstimator class of Scikit-Learn library.
- Parameters:
hidden_size (int, default=50) – The hidden size of the MLP network (this network has only a single hidden layer).
act1_name (str, default="tanh") – Activation for the hidden layer. The supported activations are: {"none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "silu", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax"}.
act2_name (str, default="sigmoid") – Activation for the output layer. The supported activations are: {"none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "silu", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax"}.
obj_name (str, default=None) – The name of the objective function for the problem, which depends on whether the problem is classification or regression.
max_epochs (int, default=1000) – Maximum number of epochs / iterations / generations
batch_size (int, default=32) – The batch size
optimizer (str, default="SGD") – The gradient-based optimizer from PyTorch. The list of supported optimizers is: ["Adadelta", "Adagrad", "Adam", "Adamax", "AdamW", "ASGD", "LBFGS", "NAdam", "RAdam", "RMSprop", "Rprop", "SGD"]
optimizer_paras (dict or None, default=None) – The dictionary of parameters for the selected optimizer.
verbose (bool, default=False) – Whether to print progress messages to stdout.
- CLS_OBJ_LOSSES = None¶
- SUPPORTED_CLS_METRICS = {'AS': 'max', 'BSL': 'min', 'CEL': 'min', 'CKS': 'max', 'F1S': 'max', 'F2S': 'max', 'FBS': 'max', 'GINI': 'min', 'GMS': 'max', 'HL': 'min', 'HS': 'max', 'JSI': 'max', 'KLDL': 'min', 'LS': 'max', 'MCC': 'max', 'NPV': 'max', 'PS': 'max', 'ROC-AUC': 'max', 'RS': 'max', 'SS': 'max'}¶
- SUPPORTED_LOSSES = {'MAE': <class 'torch.nn.modules.loss.L1Loss'>, 'MSE': <class 'torch.nn.modules.loss.MSELoss'>}¶
- SUPPORTED_OPTIMIZERS = ['Adadelta', 'Adagrad', 'Adam', 'Adamax', 'AdamW', 'ASGD', 'LBFGS', 'NAdam', 'RAdam', 'RMSprop', 'Rprop', 'SGD']¶
- SUPPORTED_REG_METRICS = {'A10': 'max', 'A20': 'max', 'A30': 'max', 'ACOD': 'max', 'APCC': 'max', 'AR': 'max', 'AR2': 'max', 'CI': 'max', 'COD': 'max', 'COR': 'max', 'COV': 'max', 'CRM': 'min', 'DRV': 'min', 'EC': 'max', 'EVS': 'max', 'GINI': 'min', 'GINI_WIKI': 'min', 'JSD': 'min', 'KGE': 'max', 'MAAPE': 'min', 'MAE': 'min', 'MAPE': 'min', 'MASE': 'min', 'ME': 'min', 'MRB': 'min', 'MRE': 'min', 'MSE': 'min', 'MSLE': 'min', 'MedAE': 'min', 'NNSE': 'max', 'NRMSE': 'min', 'NSE': 'max', 'OI': 'max', 'PCC': 'max', 'PCD': 'max', 'R': 'max', 'R2': 'max', 'R2S': 'max', 'RAE': 'min', 'RMSE': 'min', 'RSE': 'min', 'RSQ': 'max', 'SMAPE': 'min', 'VAF': 'max', 'WI': 'max'}¶
- evaluate(y_true, y_pred, list_metrics=None)[source]¶
Return the list of performance metrics of the prediction.
- Parameters:
y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Predicted values for X.
list_metrics (list) – You can get metrics from Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- predict(X, return_prob=False)[source]¶
Inherits the predict function from the BaseMlp class, with one additional parameter: return_prob.
- Parameters:
X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The input data.
return_prob (bool, default=False) –
It is used for classification problems:
If True, the returned results are the probabilities for each sample
If False, the returned results are the predicted labels
- save_evaluation_metrics(y_true, y_pred, list_metrics=('RMSE', 'MAE'), save_path='history', filename='metrics.csv')[source]¶
Save evaluation metrics to a CSV file.
- Parameters:
y_true (array-like) – The ground truth data.
y_pred (array-like) – The predicted output.
list_metrics (list) – The list of evaluation metrics.
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='metrics.csv') – The name of the file; it must have a ".csv" extension.
- save_model(save_path='history', filename='model.pkl')[source]¶
Save the model to a pickle file.
- Parameters:
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='model.pkl') – The name of the file; it must have a ".pkl" extension.
- save_training_loss(save_path='history', filename='loss.csv')[source]¶
Save the loss (convergence) during the training process to a CSV file.
- Parameters:
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='loss.csv') – The name of the file; it must have a ".csv" extension.
- save_y_predicted(X, y_true, save_path='history', filename='y_predicted.csv')[source]¶
Save the predicted results to a CSV file.
- Parameters:
X (np.ndarray) – The feature data.
y_true (array-like) – The ground truth data.
save_path (str, default='history') – The path to save the file (relative to the currently executed script).
filename (str, default='y_predicted.csv') – The name of the file; it must have a ".csv" extension.
- score(X, y, method=None)[source]¶
Return the metric of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
method (str, default="RMSE") – You can get all metrics from the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
result – The result of selected metric
- Return type:
float
- scores(X, y, list_methods=None)[source]¶
Return the list of metrics of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
list_methods (list, default=("MSE", "MAE")) – You can get all metrics from the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- set_predict_request(*, return_prob: bool | None | str = '$UNCHANGED$') → BaseMlpTorch¶
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
return_prob (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the return_prob parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- set_score_request(*, method: bool | None | str = '$UNCHANGED$') → BaseMlpTorch¶
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
method (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the method parameter in score.
- Returns:
self – The updated object.
- Return type:
object
- class metaperceptron.core.base_mlp_torch.MlpTorch(input_size, hidden_size, output_size, act1_name='tanh', act2_name='sigmoid')[source]¶
Bases:
Module
Defines the MLP model using PyTorch.
- SUPPORTED_ACTIVATIONS = ['none', 'threshold', 'relu', 'rrelu', 'hardtanh', 'relu6', 'sigmoid', 'hardsigmoid', 'tanh', 'silu', 'mish', 'hardswish', 'elu', 'celu', 'selu', 'glu', 'gelu', 'hardshrink', 'leakyrelu', 'logsigmoid', 'softplus', 'softshrink', 'multiheadattention', 'prelu', 'softsign', 'tanhshrink', 'softmin', 'softmax', 'logsoftmax']¶
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
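As the note says, call the module instance rather than forward() directly. A small sketch with illustrative sizes:
import torch
from metaperceptron.core.base_mlp_torch import MlpTorch
net = MlpTorch(input_size=8, hidden_size=16, output_size=3, act1_name="tanh", act2_name="softmax")
x = torch.rand(5, 8)   # a batch of 5 placeholder samples
out = net(x)           # runs forward() plus any registered hooks
print(out.shape)       # expected: torch.Size([5, 3])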
metaperceptron.core.mha_mlp module¶
- class metaperceptron.core.mha_mlp.MhaMlpClassifier(hidden_size=50, act1_name='tanh', act2_name='sigmoid', obj_name='CEL', optimizer='OriginalWOA', optimizer_paras=None, verbose=True)[source]¶
Bases:
BaseMhaMlp
,ClassifierMixin
Defines the general class of Metaheuristic-based MLP models for classification problems, inheriting the BaseMhaMlp and ClassifierMixin classes.
- Parameters:
hidden_size (int, default=50) – The number of hidden nodes
act1_name (str, default='tanh') – Activation function for the hidden layer. The supported activation functions are: [“none”, “relu”, “leaky_relu”, “celu”, “prelu”, “gelu”, “elu”, “selu”, “rrelu”, “tanh”, “hard_tanh”, “sigmoid”, “hard_sigmoid”, “log_sigmoid”, “swish”, “hard_swish”, “soft_plus”, “mish”, “soft_sign”, “tanh_shrink”, “soft_shrink”, “hard_shrink”, “softmin”, “softmax”, “log_softmax”, “silu”]
act2_name (str, default='sigmoid') – Activation function for the output layer. The supported activation functions are: ["none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax", "silu"]
obj_name (str, default="CEL") – The objective function name. For the currently supported objective functions, please check here: https://github.com/thieu1995/permetrics
optimizer (str or instance of Optimizer class (from the Mealpy library), default="OriginalWOA") – The metaheuristic algorithm used to train the MLP network. For the currently supported list, please check here: https://github.com/thieu1995/mealpy. If a custom optimizer is passed, make sure it is an instance of the Optimizer class.
optimizer_paras (None or dict of parameters, default=None) – The parameters for the optimizer object. If None, the optimizer's default parameters are used (defined at https://github.com/thieu1995/mealpy). If a dict is passed, make sure it contains at least the epoch and pop_size parameters.
verbose (bool, default=True) – Whether to print progress messages to stdout.
Examples
>>> from metaperceptron import Data, MhaMlpClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=100, random_state=1)
>>> data = Data(X, y)
>>> data.split_train_test(test_size=0.2, random_state=1)
>>> data.X_train_scaled, scaler = data.scale(data.X_train, method="MinMaxScaler")
>>> data.X_test_scaled = scaler.transform(data.X_test)
>>> opt_paras = {"name": "GA", "epoch": 10, "pop_size": 30}
>>> print(MhaMlpClassifier.SUPPORTED_CLS_OBJECTIVES)
{'PS': 'max', 'NPV': 'max', 'RS': 'max', ...., 'KLDL': 'min', 'BSL': 'min'}
>>> model = MhaMlpClassifier(hidden_size=50, act1_name="tanh", act2_name="sigmoid",
>>>     obj_name="NPV", optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=True)
>>> model.fit(data.X_train_scaled, data.y_train)
>>> pred = model.predict(data.X_test_scaled)
>>> print(pred)
array([1, 0, 1, 0, 1])
- CLS_OBJ_LOSSES = ['CEL', 'HL', 'KLDL', 'BSL']¶
- create_network(X, y) Tuple[MlpNumpy, ObjectiveScaler] [source]¶
- evaluate(y_true, y_pred, list_metrics=('AS', 'RS'))[source]¶
Return the list of performance metrics on the given test data and labels.
- Parameters:
y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Predicted values for X.
list_metrics (list, default=("AS", "RS")) – You can get metrics from Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- objective_function(solution=None)[source]¶
Evaluates the fitness function for classification metric
- Parameters:
solution (np.ndarray, default=None) –
- Returns:
result – The fitness value
- Return type:
float
- score(X, y, method='AS')[source]¶
Return the metric on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.
method (str, default="AS") – You can get all metrics from Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
result – The result of selected metric
- Return type:
float
- scores(X, y, list_methods=('AS', 'RS'))[source]¶
Return the list of metrics on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.
list_methods (list, default=("AS", "RS")) – You can get all metrics from Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- set_fit_request(*, lb: bool | None | str = '$UNCHANGED$', obj_weights: bool | None | str = '$UNCHANGED$', save_population: bool | None | str = '$UNCHANGED$', ub: bool | None | str = '$UNCHANGED$') → MhaMlpClassifier¶
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
lb (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the lb parameter in fit.
obj_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the obj_weights parameter in fit.
save_population (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the save_population parameter in fit.
ub (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the ub parameter in fit.
- Returns:
self – The updated object.
- Return type:
object
- set_predict_request(*, return_prob: bool | None | str = '$UNCHANGED$') → MhaMlpClassifier¶
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
return_prob (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the return_prob parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- set_score_request(*, method: bool | None | str = '$UNCHANGED$') → MhaMlpClassifier¶
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
method (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the method parameter in score.
- Returns:
self – The updated object.
- Return type:
object
- class metaperceptron.core.mha_mlp.MhaMlpRegressor(hidden_size=50, act1_name='tanh', act2_name='sigmoid', obj_name='MSE', optimizer='OriginalWOA', optimizer_paras=None, verbose=True)[source]¶
Bases:
BaseMhaMlp
,RegressorMixin
Defines the general class of Metaheuristic-based MLP models for regression problems, inheriting the BaseMhaMlp and RegressorMixin classes.
- Parameters:
hidden_size (int, default=50) – The number of hidden nodes
act1_name (str, default='tanh') – Activation function for the hidden layer. The supported activation functions are: [“none”, “relu”, “leaky_relu”, “celu”, “prelu”, “gelu”, “elu”, “selu”, “rrelu”, “tanh”, “hard_tanh”, “sigmoid”, “hard_sigmoid”, “log_sigmoid”, “swish”, “hard_swish”, “soft_plus”, “mish”, “soft_sign”, “tanh_shrink”, “soft_shrink”, “hard_shrink”, “softmin”, “softmax”, “log_softmax”, “silu”]
act2_name (str, default='sigmoid') – Activation function for the output layer. The supported activation functions are: ["none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax", "silu"]
obj_name (str, default="MSE") – The objective function name. For the currently supported objective functions, please check here: https://github.com/thieu1995/permetrics
optimizer (str or instance of Optimizer class (from the Mealpy library), default="OriginalWOA") – The metaheuristic algorithm used to train the MLP network. For the currently supported list, please check here: https://github.com/thieu1995/mealpy. If a custom optimizer is passed, make sure it is an instance of the Optimizer class.
optimizer_paras (None or dict of parameters, default=None) – The parameters for the optimizer object. If None, the optimizer's default parameters are used (defined at https://github.com/thieu1995/mealpy). If a dict is passed, make sure it contains at least the epoch and pop_size parameters.
verbose (bool, default=True) – Whether to print progress messages to stdout.
Examples
>>> from metaperceptron import MhaMlpRegressor, Data
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_samples=200, random_state=1)
>>> data = Data(X, y)
>>> data.split_train_test(test_size=0.2, random_state=1)
>>> data.X_train_scaled, scaler = data.scale(data.X_train, method="MinMaxScaler")
>>> data.X_test_scaled = scaler.transform(data.X_test)
>>> opt_paras = {"name": "GA", "epoch": 10, "pop_size": 30}
>>> model = MhaMlpRegressor(hidden_size=15, act1_name="relu", act2_name="sigmoid",
>>>     obj_name="MSE", optimizer="BaseGA", optimizer_paras=opt_paras, verbose=True)
>>> model.fit(data.X_train_scaled, data.y_train)
>>> pred = model.predict(data.X_test_scaled)
>>> print(pred)
- create_network(X, y) Tuple[MlpNumpy, ObjectiveScaler] [source]¶
- Returns:
network (MlpNumpy, an instance of the MLP network)
obj_scaler (ObjectiveScaler, the objective scaler used to scale the output)
- evaluate(y_true, y_pred, list_metrics=('MSE', 'MAE'))[source]¶
Return the list of performance metrics of the prediction.
- Parameters:
y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Predicted values for X.
list_metrics (list, default=("MSE", "MAE")) – You can get metrics from Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- objective_function(solution=None)[source]¶
Evaluates the fitness function for regression metric
- Parameters:
solution (np.ndarray, default=None) –
- Returns:
result – The fitness value
- Return type:
float
- score(X, y, method='RMSE')[source]¶
Return the metric of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
method (str, default="RMSE") – You can get all metrics from the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
result – The result of selected metric
- Return type:
float
- scores(X, y, list_methods=('MSE', 'MAE'))[source]¶
Return the list of metrics of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
list_methods (list, default=("MSE", "MAE")) – You can get all metrics from the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- set_fit_request(*, lb: bool | None | str = '$UNCHANGED$', obj_weights: bool | None | str = '$UNCHANGED$', save_population: bool | None | str = '$UNCHANGED$', ub: bool | None | str = '$UNCHANGED$') → MhaMlpRegressor¶
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
lb (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the lb parameter in fit.
obj_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the obj_weights parameter in fit.
save_population (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the save_population parameter in fit.
ub (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the ub parameter in fit.
- Returns:
self – The updated object.
- Return type:
object
- set_predict_request(*, return_prob: bool | None | str = '$UNCHANGED$') → MhaMlpRegressor¶
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
return_prob (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the return_prob parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- set_score_request(*, method: bool | None | str = '$UNCHANGED$') → MhaMlpRegressor¶
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
method (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the method parameter in score.
- Returns:
self – The updated object.
- Return type:
object
metaperceptron.core.traditional_mlp module¶
- class metaperceptron.core.traditional_mlp.MlpClassifier(hidden_size=50, act1_name='tanh', act2_name='sigmoid', obj_name='NLLL', max_epochs=1000, batch_size=32, optimizer='SGD', optimizer_paras=None, verbose=False, **kwargs)[source]¶
Bases:
BaseMlpTorch
Defines the class for traditional MLP networks for classification problems, inheriting the BaseMlpTorch class.
- Parameters:
hidden_size (int, default=50) – The hidden size of the MLP network (this network has only a single hidden layer).
act1_name (str, default="tanh") – Activation for the hidden layer. The supported activations are: {"none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "silu", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax"}.
act2_name (str, default="sigmoid") – Activation for the output layer. The supported activations are: {"none", "relu", "leaky_relu", "celu", "prelu", "gelu", "elu", "selu", "rrelu", "tanh", "hard_tanh", "sigmoid", "hard_sigmoid", "log_sigmoid", "silu", "swish", "hard_swish", "soft_plus", "mish", "soft_sign", "tanh_shrink", "soft_shrink", "hard_shrink", "softmin", "softmax", "log_softmax"}.
obj_name (str, default="NLLL") – The name of the objective function for classification problems (binary and multi-class classification).
max_epochs (int, default=1000) – Maximum number of epochs / iterations / generations
batch_size (int, default=32) – The batch size
optimizer (str, default="SGD") – The gradient-based optimizer from PyTorch. The list of supported optimizers is: ["Adadelta", "Adagrad", "Adam", "Adamax", "AdamW", "ASGD", "LBFGS", "NAdam", "RAdam", "RMSprop", "Rprop", "SGD"]
optimizer_paras (dict or None, default=None) – The dictionary of parameters for the selected optimizer.
verbose (bool, default=False) – Whether to print progress messages to stdout.
Examples
>>> from metaperceptron import MlpClassifier, Data
>>> from sklearn.datasets import make_regression
>>>
>>> ## Make dataset
>>> X, y = make_regression(n_samples=200, n_features=10, random_state=1)
>>> ## Load data object
>>> data = Data(X, y)
>>> ## Split train and test
>>> data.split_train_test(test_size=0.2, random_state=1, inplace=True)
>>> ## Scale dataset
>>> data.X_train, scaler = data.scale(data.X_train, scaling_methods=("minmax", ))
>>> data.X_test = scaler.transform(data.X_test)
>>> ## Create model
>>> model = MlpClassifier(hidden_size=25, act1_name="tanh", act2_name="sigmoid", obj_name="BCEL",
>>>     max_epochs=10, batch_size=32, optimizer="SGD", optimizer_paras=None, verbose=True)
>>> ## Train the model
>>> model.fit(data.X_train, data.y_train)
>>> ## Test the model
>>> y_pred = model.predict(data.X_test)
>>> ## Calculate some metrics
>>> print(model.score(X=data.X_test, y=data.y_test, method="RMSE"))
>>> print(model.scores(X=data.X_test, y=data.y_test, list_methods=["R2", "NSE", "MAPE"]))
>>> print(model.evaluate(y_true=data.y_test, y_pred=y_pred, list_metrics=["R2", "NSE", "MAPE", "NNSE"]))
- CLS_OBJ_BINARY_1 = ['PNLLL', 'HEL', 'BCEL', 'CEL', 'BCELL']¶
- CLS_OBJ_BINARY_2 = ['NLLL']¶
- CLS_OBJ_LOSSES = ['CEL', 'HEL', 'KLDL']¶
- CLS_OBJ_MULTI = ['NLLL', 'CEL']¶
- SUPPORTED_LOSSES = {'BCEL': <class 'torch.nn.modules.loss.BCELoss'>, 'BCELL': <class 'torch.nn.modules.loss.BCEWithLogitsLoss'>, 'CEL': <class 'torch.nn.modules.loss.CrossEntropyLoss'>, 'GNLLL': <class 'torch.nn.modules.loss.GaussianNLLLoss'>, 'HEL': <class 'torch.nn.modules.loss.HingeEmbeddingLoss'>, 'KLDL': <class 'torch.nn.modules.loss.KLDivLoss'>, 'NLLL': <class 'torch.nn.modules.loss.NLLLoss'>, 'PNLLL': <class 'torch.nn.modules.loss.PoissonNLLLoss'>}¶
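The class constants above can guide the choice of obj_name. A hypothetical helper (pick_obj_name is not part of the library) that follows the binary/multi-class split they encode:

import numpy as np

def pick_obj_name(y):
    # Binary targets -> "BCEL" (listed in CLS_OBJ_BINARY_1);
    # multi-class targets -> "CEL" (listed in CLS_OBJ_MULTI).
    n_classes = len(np.unique(y))
    return "BCEL" if n_classes == 2 else "CEL"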
- create_network(X, y) Tuple[NeuralNetClassifier, ObjectiveScaler] [source]¶
- Returns:
network (MLP, an instance of the MLP network)
obj_scaler (ObjectiveScaler, the objective scaler that is used to scale the output)
- evaluate(y_true, y_pred, list_metrics=('AS', 'RS'))[source]¶
Return the list of performance metrics for the given true and predicted labels.
- Parameters:
y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Predicted values for X.
list_metrics (list, default=("AS", "RS")) – The list of metric names; all supported metrics are documented in the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- score(X, y, method='AS')[source]¶
Return the metric on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for every sample.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.
method (str, default="AS") – The metric name; all supported metrics are documented in the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
result – The result of selected metric
- Return type:
float
- scores(X, y, list_methods=('AS', 'RS'))[source]¶
Return the list of metrics on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for every sample.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.
list_methods (list, default=("AS", "RS")) – The list of metric names; all supported metrics are documented in the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
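To make the difference between evaluate, score, and scores concrete, a short sketch (assuming the fitted model and data object from the Examples above; metric names are the documented defaults):

# score: takes X/y, returns a single float for one metric
acc = model.score(X=data.X_test, y=data.y_test, method="AS")

# scores: takes X/y, returns a dict of several metrics
res = model.scores(X=data.X_test, y=data.y_test, list_methods=["AS", "RS"])

# evaluate: works on precomputed predictions instead of X
y_pred = model.predict(data.X_test)
res = model.evaluate(y_true=data.y_test, y_pred=y_pred, list_metrics=["AS", "RS"])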
- set_predict_request(*, return_prob: bool | None | str = '$UNCHANGED$') → MlpClassifier¶
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
return_prob (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the return_prob parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
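As a minimal sketch of how this fits together (assuming scikit-learn >= 1.3, where metadata routing is opt-in; the hidden_size value is arbitrary):

from sklearn import set_config
from metaperceptron import MlpClassifier

# Metadata routing must be enabled explicitly (scikit-learn >= 1.3)
set_config(enable_metadata_routing=True)

# Ask meta-estimators (e.g. a Pipeline) to forward `return_prob` to predict()
model = MlpClassifier(hidden_size=25).set_predict_request(return_prob=True)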
- set_score_request(*, method: bool | None | str = '$UNCHANGED$') → MlpClassifier¶
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
method (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the method parameter in score.
- Returns:
self – The updated object.
- Return type:
object
- class metaperceptron.core.traditional_mlp.MlpRegressor(hidden_size=50, act1_name='tanh', act2_name='sigmoid', obj_name='MSE', max_epochs=1000, batch_size=32, optimizer='SGD', optimizer_paras=None, verbose=False, **kwargs)[source]¶
Bases:
BaseMlpTorch
Defines the traditional MLP network class for regression problems; it inherits from the BaseMlpTorch and RegressorMixin classes.
- Parameters:
hidden_size (int, default=50) – The number of hidden units in the MLP network (this network only has a single hidden layer).
act1_name (str, default="tanh") – The activation for the hidden layer. The supported activations are: {“none”, “relu”, “leaky_relu”, “celu”, “prelu”, “gelu”, “elu”, “selu”, “rrelu”, “tanh”, “hard_tanh”, “sigmoid”, “hard_sigmoid”, “log_sigmoid”, “silu”, “swish”, “hard_swish”, “soft_plus”, “mish”, “soft_sign”, “tanh_shrink”, “soft_shrink”, “hard_shrink”, “softmin”, “softmax”, “log_softmax”}.
act2_name (str, default="sigmoid") – The activation for the output layer. The supported activations are: {“none”, “relu”, “leaky_relu”, “celu”, “prelu”, “gelu”, “elu”, “selu”, “rrelu”, “tanh”, “hard_tanh”, “sigmoid”, “hard_sigmoid”, “log_sigmoid”, “silu”, “swish”, “hard_swish”, “soft_plus”, “mish”, “soft_sign”, “tanh_shrink”, “soft_shrink”, “hard_shrink”, “softmin”, “softmax”, “log_softmax”}.
obj_name (str, default="MSE") – The name of the loss function for the network.
max_epochs (int, default=1000) – Maximum number of epochs / iterations / generations.
batch_size (int, default=32) – The batch size.
optimizer (str, default="SGD") – The gradient-based optimizer from PyTorch. The supported optimizers are: [“Adadelta”, “Adagrad”, “Adam”, “Adamax”, “AdamW”, “ASGD”, “LBFGS”, “NAdam”, “RAdam”, “RMSprop”, “Rprop”, “SGD”].
optimizer_paras (dict or None, default=None) – The dictionary of parameters for the selected optimizer.
verbose (bool, default=False) – Whether to print progress messages to stdout.
Examples
>>> from metaperceptron import MlpRegressor, Data
>>> from sklearn.datasets import make_regression
>>>
>>> ## Make dataset
>>> X, y = make_regression(n_samples=200, n_features=10, random_state=1)
>>> ## Load data object
>>> data = Data(X, y)
>>> ## Split train and test
>>> data.split_train_test(test_size=0.2, random_state=1, inplace=True)
>>> ## Scale dataset
>>> data.X_train, scaler = data.scale(data.X_train, scaling_methods=("minmax", ))
>>> data.X_test = scaler.transform(data.X_test)
>>> ## Create model
>>> model = MlpRegressor(hidden_size=25, act1_name="tanh", act2_name="sigmoid", obj_name="MSE",
>>>                      max_epochs=10, batch_size=32, optimizer="SGD", optimizer_paras=None, verbose=True)
>>> ## Train the model
>>> model.fit(data.X_train, data.y_train)
>>> ## Test the model
>>> y_pred = model.predict(data.X_test)
>>> ## Calculate some metrics
>>> print(model.score(X=data.X_test, y=data.y_test, method="RMSE"))
>>> print(model.scores(X=data.X_test, y=data.y_test, list_methods=["R2", "NSE", "MAPE"]))
>>> print(model.evaluate(y_true=data.y_test, y_pred=y_pred, list_metrics=["R2", "NSE", "MAPE", "NNSE"]))
- SUPPORTED_LOSSES = {'MAE': <class 'torch.nn.modules.loss.L1Loss'>, 'MSE': <class 'torch.nn.modules.loss.MSELoss'>}¶
- create_network(X, y)[source]¶
- Returns:
network (MLP, an instance of the MLP network)
obj_scaler (ObjectiveScaler, the objective scaler that is used to scale the output)
- evaluate(y_true, y_pred, list_metrics=('MSE', 'MAE'))[source]¶
Return the list of performance metrics of the prediction.
- Parameters:
y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Predicted values for X.
list_metrics (list, default=("MSE", "MAE")) – The list of metric names; all supported metrics are documented in the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
- score(X, y, method='RMSE')[source]¶
Return the metric of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
method (str, default="RMSE") – The metric name; all supported metrics are documented in the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
result – The result of selected metric
- Return type:
float
- scores(X, y, list_methods=('MSE', 'MAE'))[source]¶
Return the list of metrics of the prediction.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
list_methods (list, default=("MSE", "MAE")) – The list of metric names; all supported metrics are documented in the Permetrics library: https://github.com/thieu1995/permetrics
- Returns:
results – The results of the list metrics
- Return type:
dict
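The same three helpers exist on the regressor; a short sketch with regression metrics (assuming the fitted model and data object from the Examples above):

# Single metric from features/targets
rmse = model.score(X=data.X_test, y=data.y_test, method="RMSE")

# Several metrics at once, returned as a dict
res = model.scores(X=data.X_test, y=data.y_test, list_methods=["MSE", "MAE"])

# Metrics computed from precomputed predictions
y_pred = model.predict(data.X_test)
res = model.evaluate(y_true=data.y_test, y_pred=y_pred, list_metrics=["R2", "NSE", "MAPE"])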
- set_predict_request(*, return_prob: bool | None | str = '$UNCHANGED$') → MlpRegressor¶
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
return_prob (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the return_prob parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- set_score_request(*, method: bool | None | str = '$UNCHANGED$') → MlpRegressor¶
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
method (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the method parameter in score.
- Returns:
self – The updated object.
- Return type:
object
metaperceptron.helpers package¶
metaperceptron.helpers.act_util module¶
- metaperceptron.helpers.act_util.silu(x)¶
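The silu helper presumably implements the standard SiLU (a.k.a. swish) activation, defined as silu(x) = x * sigmoid(x). A minimal NumPy sketch of that definition (illustrative only, not the library's exact implementation):

import numpy as np

def silu(x):
    # SiLU / swish: x * sigmoid(x) = x / (1 + exp(-x))
    return x / (1.0 + np.exp(-x))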
metaperceptron.helpers.metric_util module¶
metaperceptron.helpers.preprocessor module¶
- class metaperceptron.helpers.preprocessor.Data(X=None, y=None, name='Unknown')[source]¶
Bases:
object
The data container class supported by this library.
- Parameters:
X (np.ndarray) – The features of your data
y (np.ndarray) – The labels of your data
- SUPPORT = {'scaler': ['standard', 'minmax', 'max-abs', 'log1p', 'loge', 'sqrt', 'sinh-arc-sinh', 'robust', 'box-cox', 'yeo-johnson']}¶
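A minimal sketch of the typical Data workflow (the split_train_test and scale calls mirror the class Examples earlier in this document; the random arrays are placeholders):

import numpy as np
from metaperceptron import Data

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)

data = Data(X, y, name="demo")
data.split_train_test(test_size=0.2, random_state=1, inplace=True)

# scale() returns the scaled training array plus the fitted scaler for reuse on the test set;
# "minmax" is one of the supported scalers listed in SUPPORT above
data.X_train, scaler = data.scale(data.X_train, scaling_methods=("minmax", ))
data.X_test = scaler.transform(data.X_test)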
- class metaperceptron.helpers.preprocessor.FeatureEngineering[source]¶
Bases:
object
- create_threshold_binary_features(X, threshold)[source]¶
Perform feature engineering by adding binary indicator columns for values below the threshold. Each new column is inserted right after the corresponding original column.
- Parameters:
X (numpy.ndarray) – The input 2D matrix of shape (n_samples, n_features).
threshold (float) – The threshold value for identifying low values.
- Returns:
The updated 2D matrix with binary indicator columns.
- Return type:
numpy.ndarray
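A small usage sketch based on the description above (the exact column layout is inferred from that description, not verified against the source):

import numpy as np
from metaperceptron.helpers.preprocessor import FeatureEngineering

X = np.array([[0.2, 0.9],
              [0.7, 0.1]])
fe = FeatureEngineering()
X_new = fe.create_threshold_binary_features(X, threshold=0.5)
# Per the description, each original column is followed by a binary indicator
# marking values below the threshold, e.g. rows like [0.2, 1, 0.9, 0].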
- class metaperceptron.helpers.preprocessor.LabelEncoder[source]¶
Bases:
object
Encode categorical features as integer labels.
- fit(y)[source]¶
Fit label encoder to a given set of labels.
- Parameters:
y (array-like) – Labels to encode.
- fit_transform(y)[source]¶
Fit label encoder and return encoded labels.
- Parameters:
y (array-like of shape (n_samples,)) – Target values.
- Returns:
y – Encoded labels.
- Return type:
array-like of shape (n_samples,)
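A minimal usage sketch (this helper mirrors scikit-learn's LabelEncoder, mapping each distinct category to an integer code; the exact codes shown are illustrative):

from metaperceptron.helpers.preprocessor import LabelEncoder

le = LabelEncoder()
encoded = le.fit_transform(["cat", "dog", "cat", "bird"])
# e.g. [1, 2, 1, 0] -- one integer code per distinct category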
metaperceptron.helpers.scaler_util module¶
- class metaperceptron.helpers.scaler_util.BoxCoxScaler(lmbda=None)[source]¶
Bases:
BaseEstimator
,TransformerMixin
- class metaperceptron.helpers.scaler_util.DataTransformer(scaling_methods=('standard',), list_dict_paras=None)[source]¶
Bases:
BaseEstimator
,TransformerMixin
- SUPPORTED_SCALERS = {'box-cox': <class 'metaperceptron.helpers.scaler_util.BoxCoxScaler'>, 'log1p': <class 'metaperceptron.helpers.scaler_util.Log1pScaler'>, 'loge': <class 'metaperceptron.helpers.scaler_util.LogeScaler'>, 'max-abs': <class 'sklearn.preprocessing._data.MaxAbsScaler'>, 'minmax': <class 'sklearn.preprocessing._data.MinMaxScaler'>, 'robust': <class 'sklearn.preprocessing._data.RobustScaler'>, 'sinh-arc-sinh': <class 'metaperceptron.helpers.scaler_util.SinhArcSinhScaler'>, 'sqrt': <class 'metaperceptron.helpers.scaler_util.SqrtScaler'>, 'standard': <class 'sklearn.preprocessing._data.StandardScaler'>, 'yeo-johnson': <class 'metaperceptron.helpers.scaler_util.YeoJohnsonScaler'>}¶
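Because DataTransformer subclasses BaseEstimator and TransformerMixin, it presumably exposes the standard fit/transform/fit_transform API; a sketch chaining two of the supported scalers listed above:

import numpy as np
from metaperceptron.helpers.scaler_util import DataTransformer

X = np.random.rand(50, 3) + 1.0  # strictly positive data, safe for log-style scalers too

# Scaling methods are applied in the given order: standardize, then min-max scale
dt = DataTransformer(scaling_methods=("standard", "minmax"))
X_scaled = dt.fit_transform(X)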
- class metaperceptron.helpers.scaler_util.Log1pScaler[source]¶
Bases:
BaseEstimator
,TransformerMixin
- class metaperceptron.helpers.scaler_util.ObjectiveScaler(obj_name='sigmoid', ohe_scaler=None)[source]¶
Bases:
object
A label scaler for classification problems (binary and multi-class).
- class metaperceptron.helpers.scaler_util.SinhArcSinhScaler(epsilon=0.1, delta=1.0)[source]¶
Bases:
BaseEstimator
,TransformerMixin
metaperceptron.helpers.validator module¶
Citation Request¶
Note:
If you want to understand how metaheuristics are applied to the Multi-Layer Perceptron, please read the paper
titled "Let a biogeography-based optimizer train your Multi-Layer Perceptron",
available at https://doi.org/10.1016/j.ins.2014.01.038.
Please include these citations if you plan to use this library:
@software{nguyen_van_thieu_2023_10251022,
author = {Nguyen Van Thieu},
title = {MetaPerceptron: Unleashing the Power of Metaheuristic-optimized Multi-Layer Perceptron - A Python Library},
month = dec,
year = 2023,
publisher = {Zenodo},
doi = {10.5281/zenodo.10251021},
url = {https://github.com/thieu1995/MetaPerceptron}
}
@article{van2023mealpy,
title={MEALPY: An open-source library for latest meta-heuristic algorithms in Python},
author={Van Thieu, Nguyen and Mirjalili, Seyedali},
journal={Journal of Systems Architecture},
year={2023},
publisher={Elsevier},
doi={10.1016/j.sysarc.2023.102871}
}
@article{van2023groundwater,
title={Groundwater level modeling using Augmented Artificial Ecosystem Optimization},
author={Van Thieu, Nguyen and Barma, Surajit Deb and Van Lam, To and Kisi, Ozgur and Mahesha, Amai},
journal={Journal of Hydrology},
volume={617},
pages={129034},
year={2023},
publisher={Elsevier}
}
@article{thieu2019efficient,
title={Efficient time-series forecasting using neural network and opposition-based coral reefs optimization},
author={Nguyen, Thieu and Nguyen, Tu and Nguyen, Binh Minh and Nguyen, Giang},
journal={International Journal of Computational Intelligence Systems},
volume={12},
number={2},
pages={1144--1161},
year={2019}
}
If you have an open-ended question or a research question, you can contact me via nguyenthieu2102@gmail.com
Important links¶
Official source code repo: https://github.com/thieu1995/metaperceptron
Official document: https://metaperceptron.readthedocs.io/
Download releases: https://pypi.org/project/metaperceptron/
Issue tracker: https://github.com/thieu1995/metaperceptron/issues
Notable changes log: https://github.com/thieu1995/metaperceptron/blob/master/ChangeLog.md
- This project is also related to our other projects on optimization and machine learning; check them out here:
License¶
The project is licensed under GNU General Public License (GPL) V3 license.