Core ML Concepts
This section covers the foundational machine learning theory, algorithms, metrics, and techniques that underpin every ML system — independent of any specific cloud platform or framework.
Topics
- ML Problem Types — How to frame the question
- Algorithm Selection — Choosing the right tool for the job
- Neural Network Architectures — Deep learning building blocks
- Data Preparation — Cleaning, scaling, and augmenting data
- Feature Engineering — Encoding, dimensionality reduction, and multicollinearity
- Training Concepts — Gradient descent, learning rate, loss functions, and distributed training
- Hyperparameter Tuning — Strategies and key hyperparameters
- Regularization — Preventing overfitting with L1, L2, dropout, and more
- Evaluation Metrics — Classification, regression, and forecasting metrics
- Overfitting vs Underfitting — Diagnosing and fixing model performance
- Class Imbalance — Techniques for handling skewed datasets
- Validation Strategies — Train/test splits, cross-validation, and time series splits
- NLP Concepts — Text preprocessing and vectorization
- Computer Vision — Image classification, object detection, and segmentation
- Time Series — Components, rules, and forecasting approaches
- Statistics and Probability — Distributions, correlation, and residual analysis
- Model Explainability — SHAP, feature importance, and partial dependence
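As a small, framework-agnostic taste of the material above (matching this section's platform-independent scope), the evaluation-metrics topic can be previewed in plain Python: computing precision, recall, and F1 directly from true and predicted labels. The function name and the label vectors here are illustrative, not from any particular library.

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for a binary classification task."""
    # Count true positives, false positives, and false negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Illustrative skewed labels, where accuracy alone would be misleading
y_true = [1, 0, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
precision, recall, f1 = classification_metrics(y_true, y_pred)
```

On this toy data the model misses one positive and raises one false alarm, so precision, recall, and F1 all come out below the raw accuracy — exactly the kind of gap the Evaluation Metrics and Class Imbalance topics dig into.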