Objectives
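Brief, illustrative Python SDK (v2) sketches for several of these objectives follow the list.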
- Identify your data source and format.
- Choose how to serve data to machine learning workflows.
- Design a data ingestion solution.
- Identify machine learning tasks.
- Choose a service to train a model.
- Choose between compute options.
- Understand how a model will be consumed.
- Decide whether to deploy your model to a real-time or batch endpoint.
- Create an Azure Machine Learning workspace.
- Identify resources and assets.
- Train models in the workspace.
- Interact with the workspace through the Azure Machine Learning studio.
- Interact with the workspace through the Python Software Development Kit (SDK).
- Interact with the workspace through the Azure Command Line Interface (CLI).
- Work with Uniform Resource Identifiers (URIs).
- Create and use datastores.
- Create and use data assets.
- Choose the appropriate compute target.
- Create and use a compute instance.
- Create and use a compute cluster.
- Understand environments in Azure Machine Learning.
- Explore and use curated environments.
- Create and use custom environments.
- Prepare your data to use AutoML for classification.
- Configure and run an AutoML experiment.
- Evaluate and compare models.
- Configure MLflow for use in notebooks.
- Use MLflow for model tracking in notebooks.
- Convert a notebook to a script.
- Test scripts in a terminal.
- Run a script as a command job.
- Use parameters in a command job.
- Use MLflow when you run a script as a job.
- Review metrics, parameters, artifacts, and models from a run.
- Create components.
- Build an Azure Machine Learning pipeline.
- Run an Azure Machine Learning pipeline.
- Define a hyperparameter search space.
- Configure hyperparameter sampling.
- Select an early-termination policy.
- Run a sweep job.
- Use managed online endpoints.
- Deploy your MLflow model to a managed online endpoint.
- Deploy a custom model to a managed online endpoint.
- Test online endpoints.
- Create a batch endpoint.
- Deploy your MLflow model to a batch endpoint.
- Deploy a custom model to a batch endpoint.
- Invoke batch endpoints.
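The sketches below illustrate several of the objectives above with the Azure Machine Learning Python SDK v2 (the `azure-ai-ml` package). They are minimal, hedged examples rather than complete solutions: placeholder values such as the subscription ID, resource group, and workspace name, and the asset and compute names used later (`diabetes-data`, `aml-cluster`, `sklearn-env`, and so on), are assumptions, not names from the course. This first sketch obtains the workspace handle that the later sketches reuse.

```python
# Sketch: connect to an Azure Machine Learning workspace with the Python SDK (v2).
# The subscription ID, resource group, and workspace name are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# List registered models to confirm the handle works.
for model in ml_client.models.list():
    print(model.name)
```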
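A sketch of registering a data asset that points at a file on the workspace's default blob datastore; the datastore path and asset name are assumed.

```python
# Sketch: register a URI file data asset from a datastore path.
# The datastore path and asset name are hypothetical.
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data

data_asset = Data(
    name="diabetes-data",
    description="Training data registered as a URI file asset.",
    path="azureml://datastores/workspaceblobstore/paths/data/diabetes.csv",
    type=AssetTypes.URI_FILE,
)
ml_client.data.create_or_update(data_asset)
```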
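A sketch of provisioning a compute cluster and registering a custom environment; the VM size, scale settings, base image, and conda specification path are assumptions.

```python
# Sketch: create a compute cluster and a custom environment.
# The cluster name, VM size, base image, and conda file path are hypothetical.
from azure.ai.ml.entities import AmlCompute, Environment

cluster = AmlCompute(
    name="aml-cluster",
    size="STANDARD_DS3_V2",
    min_instances=0,
    max_instances=2,
    idle_time_before_scale_down=120,
)
ml_client.compute.begin_create_or_update(cluster).result()

custom_env = Environment(
    name="sklearn-env",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
    conda_file="./conda-env.yml",
    description="Custom environment built from a conda specification file.",
)
ml_client.environments.create_or_update(custom_env)
```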
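A sketch of configuring and submitting an Automated ML classification job; the MLTable data asset, target column, primary metric, and limits are assumptions.

```python
# Sketch: configure and submit an AutoML classification job.
# The compute name, MLTable asset, target column, and limits are hypothetical.
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

classification_job = automl.classification(
    compute="aml-cluster",
    experiment_name="automl-diabetes",
    training_data=Input(type=AssetTypes.MLTABLE, path="azureml:diabetes-training:1"),
    target_column_name="Diabetic",
    primary_metric="accuracy",
    n_cross_validations=5,
    enable_model_explainability=True,
)
classification_job.set_limits(timeout_minutes=60, trial_timeout_minutes=20, max_trials=5)

returned_job = ml_client.jobs.create_or_update(classification_job)
print(returned_job.studio_url)  # compare the trained models for this run in the studio
```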
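A sketch of tracking a model trained in a notebook with MLflow; the scikit-learn dataset, experiment name, and metric are illustrative only. On an Azure Machine Learning compute instance the tracking URI already points at the workspace, so no extra configuration is shown.

```python
# Sketch: track a notebook-trained model with MLflow autologging.
# The experiment name and metric are illustrative.
import mlflow
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("mlflow-experiment")  # group notebook runs under one experiment
mlflow.autolog()                            # log parameters, metrics, and the model automatically

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
```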
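A sketch of running a training script as a command job with input parameters; the script folder, arguments, environment, and compute target are assumptions. If the script logs with MLflow, its metrics, parameters, artifacts, and model appear under the job run in the studio.

```python
# Sketch: run a training script as a command job with parameters.
# The script folder, arguments, environment, and compute target are hypothetical.
from azure.ai.ml import Input, command
from azure.ai.ml.constants import AssetTypes

job = command(
    code="./src",  # folder containing train.py
    command="python train.py --training_data ${{inputs.training_data}} --reg_rate ${{inputs.reg_rate}}",
    inputs={
        "training_data": Input(type=AssetTypes.URI_FILE, path="azureml:diabetes-data:1"),
        "reg_rate": 0.01,
    },
    environment="sklearn-env@latest",
    compute="aml-cluster",
    display_name="diabetes-train",
    experiment_name="diabetes-training",
)

returned_job = ml_client.jobs.create_or_update(job)
# Any MLflow logging inside train.py shows up under this job run in the studio.
```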
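A sketch of loading two components from YAML files and composing them into a pipeline; the component files and their input and output names are hypothetical.

```python
# Sketch: build and run a two-step pipeline from components defined in YAML.
# The component files and their input/output names are hypothetical.
from azure.ai.ml import Input, load_component
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.dsl import pipeline

prep_data = load_component(source="./components/prep_data.yml")
train_model = load_component(source="./components/train_model.yml")

@pipeline()
def diabetes_pipeline(pipeline_input_data):
    prep_step = prep_data(input_data=pipeline_input_data)
    train_step = train_model(training_data=prep_step.outputs.output_data)
    return {"trained_model": train_step.outputs.model_output}

pipeline_job = diabetes_pipeline(
    pipeline_input_data=Input(type=AssetTypes.URI_FILE, path="azureml:diabetes-data:1")
)
pipeline_job.settings.default_compute = "aml-cluster"

ml_client.jobs.create_or_update(pipeline_job, experiment_name="diabetes-pipeline")
```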
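A sketch of a sweep job that tunes a regularization rate with random sampling and a bandit early-termination policy; the search space, logged metric name, and limits are assumptions.

```python
# Sketch: tune a hyperparameter with a sweep job.
# The search space, logged metric name, and limits are hypothetical.
from azure.ai.ml import command
from azure.ai.ml.sweep import BanditPolicy, Choice

base_job = command(
    code="./src",
    command="python train.py --reg_rate ${{inputs.reg_rate}}",
    inputs={"reg_rate": 0.01},
    environment="sklearn-env@latest",
    compute="aml-cluster",
)

# Replace the fixed value with a search space, then convert the command into a sweep job.
command_for_sweep = base_job(reg_rate=Choice(values=[0.001, 0.01, 0.1, 1.0]))
sweep_job = command_for_sweep.sweep(
    compute="aml-cluster",
    sampling_algorithm="random",
    primary_metric="training_accuracy_score",  # must match a metric logged by train.py
    goal="Maximize",
)
sweep_job.set_limits(max_total_trials=8, max_concurrent_trials=2, timeout=3600)
sweep_job.early_termination = BanditPolicy(slack_factor=0.1, evaluation_interval=1)

ml_client.jobs.create_or_update(sweep_job, experiment_name="diabetes-sweep")
```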
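A sketch of deploying an MLflow model to a managed online endpoint and testing it; the endpoint name, local model folder, instance type, and sample request file are assumptions.

```python
# Sketch: deploy an MLflow model to a managed online endpoint and test it.
# The endpoint name, model folder, instance type, and sample file are hypothetical.
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint, Model

endpoint = ManagedOnlineEndpoint(name="diabetes-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="diabetes-endpoint",
    # MLflow models need no scoring script or environment; custom models do.
    model=Model(path="./model", type=AssetTypes.MLFLOW_MODEL),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()

# Send all traffic to the new deployment, then test with a sample request.
endpoint.traffic = {"blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

response = ml_client.online_endpoints.invoke(
    endpoint_name="diabetes-endpoint",
    deployment_name="blue",
    request_file="./sample-data.json",
)
print(response)
```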
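A sketch of deploying an MLflow model to a batch endpoint and invoking it against a registered folder of new data; the endpoint, deployment, compute, and data asset names are assumptions.

```python
# Sketch: deploy an MLflow model to a batch endpoint and invoke it.
# The endpoint, deployment, compute, and data asset names are hypothetical.
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import BatchDeployment, BatchEndpoint, BatchRetrySettings, Model

endpoint = BatchEndpoint(name="diabetes-batch", description="Batch scoring for the diabetes model.")
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

deployment = BatchDeployment(
    name="classifier-mlflow",
    endpoint_name="diabetes-batch",
    model=Model(path="./model", type=AssetTypes.MLFLOW_MODEL),
    compute="aml-cluster",
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_file_name="predictions.csv",
    retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()

# Score a registered folder of new data; the scoring job runs on the compute cluster.
ml_client.batch_endpoints.invoke(
    endpoint_name="diabetes-batch",
    deployment_name="classifier-mlflow",
    input=Input(type=AssetTypes.URI_FOLDER, path="azureml:new-patient-data:1"),
)
```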