Machine learning methods are what let you train neural networks for real-world tasks. Its application areas, such as marketing, FinTech, crisis management, risk management, security, medicine, and robotics, are popular and in high demand. The widespread stereotype that machine learning is the hardest area of data analysis is just a myth. Our course will help you master the basics of artificial intelligence, refresh the mathematical foundations, and give you a complete set of tools for working with neural networks.
2.5 months/120 hours
25-30 students
I, III 18:30-21:30
Readiness for intensive training
Knowledge of English at the Intermediate level or higher
A personal computer or laptop
18 years and older
Know and be able to use the basic machine learning algorithms (regression, decision trees, boosting)
Be able to choose an algorithm appropriate to the task and the model
Analyze a project and break it down step by step
Perform data preprocessing and data validation
Know the model performance metrics: Accuracy, Precision, Recall, etc.
Understand and practice unsupervised learning and deep learning with frameworks such as TensorFlow 2 and Keras.
Course modules
Probability (Data Distributions review), Bayesian statistics
General Stages of an ML Project
Data Import (pandas, numpy)
Confusion Matrix and Model Performance metrics (see the sketch at the end of this module list)
Regression Algorithms (with Matrix notation)
Decision Tree Algorithms
Boosting Algorithms
Neural Networks
Optimization Methods (Hyper-parameters, Gradient Descent, etc.)
Hyperparameter optimization
Imbalanced classification
Model explainability
Unsupervised learning
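To make the model-evaluation topics of this module concrete, here is a minimal sketch of a confusion matrix and the Accuracy, Precision, and Recall metrics, assuming scikit-learn is the toolkit used; the synthetic dataset and the decision-tree model are illustrative placeholders, not course material.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

# Synthetic binary classification data (placeholder for a real course dataset)
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit one of the basic algorithms named above (a decision tree)
model = DecisionTreeClassifier(max_depth=5, random_state=42)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Confusion matrix and the metrics listed in the learning outcomes
print(confusion_matrix(y_test, y_pred))
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall:", recall_score(y_test, y_pred))
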
DL Frameworks (TensorFlow 2, Keras)
Introduction to Tensors, variables
Introduction to Gradients and Automatic Differentiation (tf.GradientTape())
Introduction to graphs and functions
Introduction to modules, layers, and models
Basic training loops (see the sketch at the end of this module list)
Advanced Automatic Differentiation
Keras Sequential model
Training & evaluation with the built-in methods
Save and load Keras models
Working with preprocessing layers
Working with the model’s layers
Dropout, Weight initialization
Optimization algorithms (Mini-batch gradient descent, Momentum, RMSProp, Adam)
Batch normalization
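As a taste of the tf.GradientTape() and basic-training-loop topics in this module, here is a minimal TensorFlow 2 sketch of a custom training loop on a toy regression task; the data, model size, number of epochs, and learning rate are illustrative placeholders.

import tensorflow as tf

# Toy data: y = 3x + 2 plus noise (placeholder for real course data)
x = tf.random.normal([256, 1])
y = 3.0 * x + 2.0 + tf.random.normal([256, 1], stddev=0.1)

# A single dense layer as the model; Adam is one of the optimizers covered above
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

for epoch in range(100):
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = loss_fn(y, y_pred)
    # Automatic differentiation: gradients of the loss w.r.t. the trainable variables
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

print("final loss:", float(loss))
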
CNN (Convolutional Neural Network)
Convolutions
Pooling
Fully connected
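The convolution, pooling, and fully connected building blocks listed above can be combined as in this minimal Keras Sequential sketch; the input shape and number of classes are placeholders (e.g. MNIST-sized images), not a prescribed course architecture.

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),   # convolutions
    layers.MaxPooling2D(pool_size=2),                      # pooling
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                  # fully connected
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
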
RNN (Recurrent Neural Networks)
RNN
LSTM
GRU
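Here is a minimal Keras sketch of the recurrent layers listed above (LSTM and GRU stacked, with SimpleRNN noted in a comment); the vocabulary size, sequence length, and binary output head are illustrative placeholders.

import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 10000, 100  # placeholder values

model = tf.keras.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(input_dim=vocab_size, output_dim=64),
    layers.LSTM(64, return_sequences=True),  # swap in layers.SimpleRNN or layers.GRU here
    layers.GRU(32),
    layers.Dense(1, activation="sigmoid"),   # e.g. binary sentiment classification
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
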
NLP
Classical methods overview (language model, tf-idf vectorization)
Word embeddings, CBOW and skip-gram methods, word2vec
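For the classical side of this module, here is a minimal sketch of tf-idf vectorization, assuming scikit-learn; the toy corpus is a placeholder.

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "machine learning is fun",
    "deep learning builds on machine learning",
    "word embeddings capture meaning",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)        # sparse document-term matrix

# Vocabulary learned from the corpus (get_feature_names in older scikit-learn versions)
print(vectorizer.get_feature_names_out())
print(X.toarray().round(2))                 # tf-idf weights per document
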