{{ ml-platform-full-name }} is a full-cycle ML development environment with powerful tools for working with {{ yandex-cloud }} services.
In {{ ml-platform-name }}, you can train models and run computations in {{ ds-nb }}, execute remote computations as {{ ds-jobs }} jobs, and deploy trained models or any Docker images as services in {{ ds-inf }}.
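A {{ ds-jobs }} job is typically described by a configuration file submitted together with your code. The sketch below is illustrative only: the field names are assumptions, so check the {{ ds-jobs }} documentation for the exact schema.

```yaml
# config.yaml — illustrative sketch of a job description (field names are assumptions)
name: train-model
cmd: python train.py --epochs 10   # command executed on the remote VM
env:
  python: auto                     # capture the local Python environment automatically
inputs:
  - train.py                       # files uploaded with the job
outputs:
  - model.bin                      # files downloaded when the job finishes
```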
You do not need to spend time creating and maintaining VMs: when you create a new project, computing resources are allocated to it automatically.
The VM comes with the {{ jlab }}Lab development environment and data analysis and machine learning packages (TensorFlow, PyTorch, Keras, NumPy, etc.) pre-installed, so you can start using them immediately. For the full list of packages, see {#T}.
If you are missing a package, you can install it right from a notebook or build a custom Docker image.
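For example, inside a notebook cell you can install a missing package with the `%pip` magic (`%pip install <package>`). The helper below does the same programmatically; it is a generic sketch using only the standard library, not a {{ ml-platform-name }}-specific API:

```python
import importlib.util
import subprocess
import sys

def ensure_package(name: str) -> bool:
    """Install `name` with pip if it is not importable yet; return True if it is available."""
    if importlib.util.find_spec(name) is None:
        # Use the interpreter running the notebook so the package
        # lands in the same environment the kernel imports from.
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return importlib.util.find_spec(name) is not None
```

Note that the importable module name can differ from the pip package name (for example, `cv2` vs `opencv-python`).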
{{ ml-platform-name }} offers a wide range of ready-made computing resource configurations. You can select one or more configurations and get a managed service without setting up a VM. The allocated resources stay assigned to you while you are using them or until you intentionally release the VM. By default, an idle VM is released after three hours, but you can change this timeout to reduce costs or to keep the selected configuration assigned to you.
{{ ml-platform-name }} is not just a cloud: it lets all organization members work in a shared space managed by [{{ org-full-name }}]({{ link-org-cloud-center }}). Resources you create belong to your projects but are not limited to them. For more information about relationships between {{ ml-platform-name }} resources, see {#T}.
We have introduced communities so that you can collaborate on projects and flexibly manage your costs in {{ ml-platform-name }}. You can link a separate {{ yandex-cloud }} billing account to each community to keep the finances of different teams apart. At the same time, communities do not isolate teams from each other: you can still share projects and the resources you create.
Resource access permissions and scope are managed using roles. For more information about roles, see {#T}.
In addition, community administrators can configure which features are available in projects and impose limits on configuration usage to control costs.
{{ ds-inf }} provides easy-to-use tools for deploying services based both on models trained in {{ ml-platform-name }} and on custom Docker images built outside it.
Aliases allow you to balance the load across multiple running nodes and publish new versions without having to stop your running service. You can create an alias in the {{ ml-platform-name }} interface.
On the node page in the {{ ml-platform-name }} interface, you can view monitoring charts and logs of the deployed instances, change the configuration of computing resources, and send test requests to the deployed service's API.
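A test request can also be sent from any HTTP client. The sketch below uses Python's standard library; the endpoint URL, header names, and token are placeholders and assumptions, not the actual {{ ds-inf }} API:

```python
import json
import urllib.request

# Placeholders: substitute your node's endpoint and a valid IAM token.
NODE_URL = "https://<your-node-endpoint>/invoke"  # assumption, not a real endpoint
IAM_TOKEN = "<IAM-token>"

def build_inference_request(payload: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST request for the deployed service."""
    return urllib.request.Request(
        NODE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {IAM_TOKEN}",
        },
        method="POST",
    )

# To actually send it:
# urllib.request.urlopen(build_inference_request({"input": [1, 2, 3]}))
```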
Below is a list of guides on using nodes and aliases.