Modeling with Multi TSs as input, Hyperparameters, XGBoost, Feature Eng,…
-
Hyperparameters
-
Hyperparameter: a value set prior to the training process, such as the number of neurons, the number of layers, or the learning rate.
AWS training: hyperparameter tuning job (see the sketch after this list)
- Choose the search strategy: Random Search or Bayesian Search
- Choose the algorithm
- Choose the error metric
- Configure the parameter ranges
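A minimal sketch of these four steps with the SageMaker Python SDK is below; the role ARN, S3 paths, container version, and the specific parameter ranges are illustrative assumptions, not values from this note.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import (
    ContinuousParameter,
    HyperparameterTuner,
    IntegerParameter,
)

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical role ARN

# Choose the algorithm: the built-in SageMaker XGBoost container.
xgb_image = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.7-1"
)
estimator = Estimator(
    image_uri=xgb_image,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",  # hypothetical bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Configure the parameter ranges for the tuner to search over.
ranges = {
    "max_depth": IntegerParameter(3, 10),
    "eta": ContinuousParameter(0.01, 0.3),
    "gamma": ContinuousParameter(0, 5),
}

# Choose the error metric and the search strategy (Bayesian or Random).
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:rmse",
    objective_type="Minimize",
    hyperparameter_ranges=ranges,
    strategy="Bayesian",
    max_jobs=20,
    max_parallel_jobs=2,
)

# Launch the tuning job (channel paths are hypothetical).
tuner.fit({"train": "s3://my-bucket/train", "validation": "s3://my-bucket/validation"})
```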
XGBoost
-
The most important ones:
- Max_depth (0 – inf): critical to striking the right balance between bias and variance. If max_depth is set too small, the model underfits the training data; as you increase max_depth, the model becomes more complex and eventually overfits. Default value is 6.
- Gamma (0 – inf): Minimum loss reduction needed to add more partitions to the tree.
- Eta (0 – 1): step size shrinkage used in each update to prevent overfitting and make the boosting process more conservative. After each boosting step you can directly get the weights of new features, and eta shrinks those feature weights.
- Alpha: L1 regularization term on weights, used to avoid overfitting. The higher the alpha, the stronger the regularization; if alpha is set to zero, no L1 regularization is applied.
- Lambda: L2 regularization term on weights.
Check out the rest of the hyperparameters here: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html
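The same parameter names are accepted by the open-source xgboost package, so a minimal sketch of how they plug into training looks like the following; the synthetic data and the number of boosting rounds are assumptions for illustration, and the parameter values are the XGBoost defaults (max_depth=6, eta=0.3, gamma=0, alpha=0, lambda=1).

```python
import numpy as np
import xgboost as xgb

# Synthetic regression data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=500)

params = {
    "max_depth": 6,   # deeper trees -> more complex model, risk of overfitting
    "eta": 0.3,       # step size shrinkage applied to feature weights each round
    "gamma": 0.0,     # minimum loss reduction required to add another partition
    "alpha": 0.0,     # L1 regularization on weights (0 = no L1 penalty)
    "lambda": 1.0,    # L2 regularization on weights
    "objective": "reg:squarederror",
}

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train(params, dtrain, num_boost_round=50)
preds = booster.predict(dtrain)
print("train RMSE:", float(np.sqrt(np.mean((preds - y) ** 2))))
```

Lowering eta or raising gamma/alpha/lambda makes the model more conservative; raising max_depth does the opposite, which is the bias/variance trade-off described above.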