diff --git a/umn/source/_static/images/en-us_image_0000001809029912.png b/umn/source/_static/images/en-us_image_0000001809029912.png deleted file mode 100644 index 3e864f5b..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001809029912.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001809029916.png b/umn/source/_static/images/en-us_image_0000001809029916.png deleted file mode 100644 index 85eee0cc..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001809029916.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001809029928.png b/umn/source/_static/images/en-us_image_0000001809029928.png deleted file mode 100644 index 2036a6b5..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001809029928.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001809189756.png b/umn/source/_static/images/en-us_image_0000001809189756.png deleted file mode 100644 index caf35a81..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001809189756.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001809189760.png b/umn/source/_static/images/en-us_image_0000001809189760.png deleted file mode 100644 index 1cc4d812..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001809189760.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001809189772.png b/umn/source/_static/images/en-us_image_0000001809189772.png deleted file mode 100644 index 23cc5221..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001809189772.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001855868557.png b/umn/source/_static/images/en-us_image_0000001855868557.png deleted file mode 100644 index 95c8e1e1..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001855868557.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001855868569.png b/umn/source/_static/images/en-us_image_0000001855868569.png deleted file mode 100644 index 5663a2fb..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001855868569.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001855948581.png b/umn/source/_static/images/en-us_image_0000001855948581.png deleted file mode 100644 index 48cdcc7b..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001855948581.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001855948589.png b/umn/source/_static/images/en-us_image_0000001855948589.png deleted file mode 100644 index 39b706b9..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001855948589.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001855948597.png b/umn/source/_static/images/en-us_image_0000001855948597.png deleted file mode 100644 index c6fb1fa8..00000000 Binary files a/umn/source/_static/images/en-us_image_0000001855948597.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001855868573.png b/umn/source/_static/images/en-us_image_0000002301562030.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001855868573.png rename to umn/source/_static/images/en-us_image_0000002301562030.png diff --git a/umn/source/_static/images/en-us_image_0000001855868577.png b/umn/source/_static/images/en-us_image_0000002301562034.png similarity index 100% rename from 
umn/source/_static/images/en-us_image_0000001855868577.png rename to umn/source/_static/images/en-us_image_0000002301562034.png diff --git a/umn/source/_static/images/en-us_image_0000001855868581.png b/umn/source/_static/images/en-us_image_0000002301562042.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001855868581.png rename to umn/source/_static/images/en-us_image_0000002301562042.png diff --git a/umn/source/_static/images/en-us_image_0000001855948605.png b/umn/source/_static/images/en-us_image_0000002301721718.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001855948605.png rename to umn/source/_static/images/en-us_image_0000002301721718.png diff --git a/umn/source/_static/images/en-us_image_0000001855948609.png b/umn/source/_static/images/en-us_image_0000002301721726.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001855948609.png rename to umn/source/_static/images/en-us_image_0000002301721726.png diff --git a/umn/source/_static/images/en-us_image_0000001855948613.png b/umn/source/_static/images/en-us_image_0000002301721734.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001855948613.png rename to umn/source/_static/images/en-us_image_0000002301721734.png diff --git a/umn/source/_static/images/en-us_image_0000001809189776.png b/umn/source/_static/images/en-us_image_0000002335521125.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001809189776.png rename to umn/source/_static/images/en-us_image_0000002335521125.png diff --git a/umn/source/_static/images/en-us_image_0000001809189780.png b/umn/source/_static/images/en-us_image_0000002335521133.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001809189780.png rename to umn/source/_static/images/en-us_image_0000002335521133.png diff --git a/umn/source/_static/images/en-us_image_0000001809189784.png b/umn/source/_static/images/en-us_image_0000002335521137.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001809189784.png rename to umn/source/_static/images/en-us_image_0000002335521137.png diff --git a/umn/source/_static/images/en-us_image_0000001809029932.png b/umn/source/_static/images/en-us_image_0000002335561333.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001809029932.png rename to umn/source/_static/images/en-us_image_0000002335561333.png diff --git a/umn/source/_static/images/en-us_image_0000001809029936.png b/umn/source/_static/images/en-us_image_0000002335561341.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001809029936.png rename to umn/source/_static/images/en-us_image_0000002335561341.png diff --git a/umn/source/_static/images/en-us_image_0000001809029940.png b/umn/source/_static/images/en-us_image_0000002335561349.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001809029940.png rename to umn/source/_static/images/en-us_image_0000002335561349.png diff --git a/umn/source/backup_using_cbr/backing_up_an_ecs.rst b/umn/source/backup_using_cbr/backing_up_an_ecs.rst index c13c85f7..d806c908 100644 --- a/umn/source/backup_using_cbr/backing_up_an_ecs.rst +++ b/umn/source/backup_using_cbr/backing_up_an_ecs.rst @@ -58,14 +58,14 @@ EVS Disk Backup Procedure #. In the ECS list, locate the target ECS and choose **More** > **Manage Image/Backup** > **Create Disk Backup**. 
- - If the ECS has been associated with a vault, configure the backup information as prompted. + - If the ECS has been associated with a vault, configure the backup information as instructed. - **Server List**: The ECS to be backed up is selected by default. Click |image3| to view the disks attached to the ECSs. Select the disks to be backed up. - **Name**: Customize your backup name. - **Description**: Supplementary information about the backup. - **Full Backup**: If this option is selected, the system will perform full backup for the disks to be associated. The storage capacity used by the backup increases accordingly. - - If the ECS is not associated with a vault, buy a vault first and then configure the backup information as prompted. + - If the ECS is not associated with a vault, buy a vault first and then configure the backup information as instructed. For details, see `Creating a Disk Backup Vault `__. diff --git a/umn/source/backup_using_cbr/overview.rst b/umn/source/backup_using_cbr/overview.rst index 1505c531..b9008fc2 100644 --- a/umn/source/backup_using_cbr/overview.rst +++ b/umn/source/backup_using_cbr/overview.rst @@ -73,7 +73,7 @@ An image can be a system disk image, data disk image, or full-ECS image. | | | | | | | | - **Rapid deployment of multiple services** | | | | | | | -| | | You can use a system disk image to quickly create multiple ECSs with the same OS, thereby quickly deploying services these ECSs. | | +| | | You can use a system disk image to quickly create multiple ECSs with the same OS, thereby quickly deploying services on these ECSs. | | +---------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Data disk image | Specific data disk | **Rapid data replication** | A data disk image can replicate all data on a disk and create new EVS disks. The EVS disks can be attached to other ECSs for data replication and sharing. | | | | | | @@ -85,7 +85,7 @@ An image can be a system disk image, data disk image, or full-ECS image. | | | | | | | | - **Rapid deployment of multiple services** | | | | | | | -| | | You can use a full-ECS image to quickly create multiple ECSs with the same OS and data, thereby quickly deploying services these ECSs. | | +| | | You can use a full-ECS image to quickly create multiple ECSs with the same OS and data, thereby quickly deploying services on these ECSs. | | +---------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ .. 
_en-us_topic_0000001128445638__section10399144613501: @@ -128,7 +128,7 @@ For example, if the size of a disk is 100 GB and the used space is 40 GB, the 40 An incremental backup backs up only the data changed since the last backup, which is storage- and time-efficient. -When a backup is deleted, only the data blocks that are not depended on by other backups are deleted, so that other backups can still be used for restoration. Both a full backup and an incremental backup can restore data to the state at a given backup point in time. +When a backup is deleted, only the data blocks not relied on by other backups are deleted, so that other backups can still be used for restoration. Both a full backup and an incremental backup can restore data to the state at a given backup point in time. When creating a backup of a disk, CBR also creates a snapshot for it. Every time a new disk backup is created, CBR deletes the old snapshot and keeps only the latest snapshot. diff --git a/umn/source/change_history.rst b/umn/source/change_history.rst index 51b8dae1..ed41f6a5 100644 --- a/umn/source/change_history.rst +++ b/umn/source/change_history.rst @@ -318,7 +318,7 @@ Change History | | Modified the following content: | | | | | | - Optimized the operations for installing a Tesla driver and CUDA toolkit in :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS `. | -| | - Terminated the sections of installing a NVIDIA GPU driver and CUDA toolkit on P1, P2, and P2v ECSs and added :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS ` for installation. | +| | - Terminated the sections of installing an NVIDIA GPU driver and CUDA toolkit on P1, P2, and P2v ECSs and added :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS ` for installation. | +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 2019-12-26 | Added the following content: | | | | @@ -340,7 +340,7 @@ Change History +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 2019-03-06 | Modified the following content: | | | | -| | - Deleted metadata types that are not supported in :ref:`Obtaining ECS Details Using Metadata `. | +| | - Deleted unsupported metadata types from :ref:`Obtaining ECS Details Using Metadata `. | | | - Added use constraints in :ref:`Injecting User Data `. 
| +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 2019-03-05 | Deleted the following content: | @@ -421,8 +421,8 @@ Change History | | | | | - Added description in :ref:`GPU-accelerated ECSs ` because P1 and P2 ECSs do not support automatic recovery. | | | - :ref:`Configuring Mapping Between Hostnames and IP Addresses in the Same VPC ` | -| | - Installing a NVIDIA GPU Driver and CUDA Toolkit on a P1 ECS | -| | - Installing a NVIDIA GPU Driver and CUDA Toolkit on a P2 ECS | +| | - Installing an NVIDIA GPU Driver and CUDA Toolkit on a P1 ECS | +| | - Installing an NVIDIA GPU Driver and CUDA Toolkit on a P2 ECS | +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 2018-12-10 | Added the following content: | | | | @@ -490,7 +490,7 @@ Change History | | | | | - :ref:`Audit Using CTS ` | | | - :ref:`How Can I Test the Network Performance of Linux ECSs? ` | -| | - :ref:`Why Does an Authentication Failure Occurs After I Attempt to Remotely Log In to a Windows ECS? ` | +| | - :ref:`Why Does an Authentication Failure Occur After I Attempt to Remotely Log In to a Windows ECS? ` | | | | | | Modified the following content: | | | | @@ -528,13 +528,13 @@ Change History | | - Added the description of viewing details about failed tasks in :ref:`Viewing Failed Tasks `. | | | - Added the FPGA, HDK, SDK, AEI, and DPDK terms in :ref:`Glossary `. | | | - Modified the functions of and notes on using P2 ECSs in :ref:`GPU-accelerated ECSs `. | -| | - Added the OSs supported by P2 ECSs in installing a NVIDIA GPU driver and CUDA toolkit on the P2 ECSs. | +| | - Added the OSs supported by P2 ECSs in installing an NVIDIA GPU driver and CUDA toolkit on the P2 ECSs. | | | - Replaced screenshots in :ref:`How Do I Obtain My Disk Device Name in the ECS OS Using the Device Identifier Provided on the Console? ` | +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 2018-04-28 | Added the following content: | | | | | | - Added newly released FPGA-accelerated ECSs. 
| -| | - Installing a NVIDIA GPU Driver and CUDA Toolkit on a P2 ECS | +| | - Installing an NVIDIA GPU Driver and CUDA Toolkit on a P2 ECS | | | - :ref:`Viewing Failed Tasks ` | | | | | | Modified the following content: | @@ -553,7 +553,7 @@ Change History | 2018-02-03 | Added the following content: | | | | | | - 6.7.2-Changing a General-Purpose ECS to an H1 ECS | -| | - Installing a NVIDIA GPU Driver and CUDA Toolkit on a P1 ECS | +| | - Installing an NVIDIA GPU Driver and CUDA Toolkit on a P1 ECS | | | - :ref:`What Can I Do If Switching from a Non-root User to User root Times Out? ` | | | - :ref:`Why Is the Memory of an ECS Obtained by Running the free Command Inconsistent with the Actual Memory? ` | | | | diff --git a/umn/source/disks/attaching_a_disk_to_an_ecs.rst b/umn/source/disks/attaching_a_disk_to_an_ecs.rst index bf1da96c..7c485950 100644 --- a/umn/source/disks/attaching_a_disk_to_an_ecs.rst +++ b/umn/source/disks/attaching_a_disk_to_an_ecs.rst @@ -28,7 +28,7 @@ Procedure #. Under **Computing**, click **Elastic Cloud Server**. -#. In the search box above the upper right corner of the ECS list, enter the ECS name, IP address, or ID for search. +#. In the search box above the ECS list, enter the ECS name, IP address, or ID for search. #. Click the name of the target ECS. diff --git a/umn/source/disks/detaching_an_evs_disk_from_a_running_ecs.rst b/umn/source/disks/detaching_an_evs_disk_from_a_running_ecs.rst index 8a1348bd..7b0a18bb 100644 --- a/umn/source/disks/detaching_an_evs_disk_from_a_running_ecs.rst +++ b/umn/source/disks/detaching_an_evs_disk_from_a_running_ecs.rst @@ -11,7 +11,7 @@ Scenarios You can detach EVS disks from an ECS. - System disks (mounted to **/dev/sda** or **/dev/vda**) can only be detached offline. They must be stopped before being detached. -- Data disks (mounted to points other than **dev/sda**) can be detached online if the attached ECS is running certain OSs. You can detach these data disks without stopping the ECS. +- Data disks (mounted to points other than **/dev/sda**) can be detached both online and offline if the attached ECS is running certain OSs. You can detach these data disks without stopping the ECS. This section describes how to detach a disk from a running ECS. diff --git a/umn/source/disks/expanding_the_local_disks_of_a_disk-intensive_ecs.rst b/umn/source/disks/expanding_the_local_disks_of_a_disk-intensive_ecs.rst index 5ed026b1..f57e5d43 100644 --- a/umn/source/disks/expanding_the_local_disks_of_a_disk-intensive_ecs.rst +++ b/umn/source/disks/expanding_the_local_disks_of_a_disk-intensive_ecs.rst @@ -10,7 +10,7 @@ Scenarios Disk-intensive ECSs can use both local disks and EVS disks to store data. Local disks are generally used to store service data and feature higher throughput than EVS disks. -Disk-intensive ECSs do not support specifications modification. When the capacity of local disks is insufficient, you can create a new disk-intensive ECS with higher specifications for capacity expansion. The data stored in the original ECS can be migrated to the new ECS through EVS. +Disk-intensive ECSs do not support specification modification. When the capacity of local disks is insufficient, you can create a new disk-intensive ECS with higher specifications for capacity expansion. The data stored in the original ECS can be migrated to the new ECS through EVS. 
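The migration mentioned above can be done with standard Linux tools once a transfer EVS disk is attached to the original disk-intensive ECS. The commands below are a minimal sketch only; they assume the transfer disk has already been partitioned as **/dev/vdb1** and that the local-disk data to be moved is under **/data**. Replace the device name and paths with your own values.

.. code-block:: bash

   # Format the transfer EVS disk (this erases any data on it) and mount it.
   mkfs -t ext4 /dev/vdb1
   mkdir -p /mnt/transfer
   mount /dev/vdb1 /mnt/transfer

   # Copy the local-disk data, preserving permissions, ownership, and links.
   rsync -aHAX /data/ /mnt/transfer/

   # Flush writes and unmount before detaching the disk on the console.
   umount /mnt/transfer

After detaching the transfer disk and attaching it to the new disk-intensive ECS, mount it there and copy the data onto the new local disks in the same way.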
Procedure --------- diff --git a/umn/source/eips/enabling_internet_connectivity_for_an_ecs_without_an_eip_bound.rst b/umn/source/eips/enabling_internet_connectivity_for_an_ecs_without_an_eip_bound.rst index 1a0f0281..c81a82ff 100644 --- a/umn/source/eips/enabling_internet_connectivity_for_an_ecs_without_an_eip_bound.rst +++ b/umn/source/eips/enabling_internet_connectivity_for_an_ecs_without_an_eip_bound.rst @@ -8,7 +8,7 @@ Enabling Internet Connectivity for an ECS Without an EIP Bound Scenarios --------- -To ensure platform security and conserve EIPs, EIPs are only assigned to specified ECSs. The ECSs that have not EIPs bound cannot access the Internet directly. If these ECSs need to access the Internet (for example, to perform a software upgrade or install a patch), you can select an ECS that has an EIP bound to function as a proxy ECS to provide an access channel for these ECSs. +To ensure platform security and conserve EIPs, EIPs are only assigned to specified ECSs. ECSs without EIPs cannot access the Internet directly. If these ECSs need to access the Internet (for example, to perform a software upgrade or install a patch), you can select an ECS that has an EIP bound to function as a proxy ECS to provide an access channel for these ECSs. .. note:: diff --git a/umn/source/elastic_network_interfaces/dynamically_assigning_ipv6_addresses.rst b/umn/source/elastic_network_interfaces/dynamically_assigning_ipv6_addresses.rst index ee199163..ed01572f 100644 --- a/umn/source/elastic_network_interfaces/dynamically_assigning_ipv6_addresses.rst +++ b/umn/source/elastic_network_interfaces/dynamically_assigning_ipv6_addresses.rst @@ -29,7 +29,7 @@ Constraints - Ensure that **Automatically-assigned IPv6 address** is selected during ECS creation. - After the ECS is started, its hot-swappable NICs cannot automatically acquire IPv6 addresses. -- Only ECSs can work in dual-stack mode and BMSs cannot. +- Only ECSs support IPv4/IPv6 dual-stack. BMSs do not support IPv4/IPv6 dual-stack. - Only one IPv6 address can be bound to a NIC. Procedure @@ -39,7 +39,7 @@ Procedure - Linux: Dynamic assignment of IPv6 addresses can be enabled automatically (recommended) or manually, as shown in :ref:`Table 1 `. - If a private image created from a CentOS 6.x or Debian ECS with automatic IPv6 address assignment enabled is used to create an ECS in an environment that does not support IPv6, the ECS may start slow because of IPv6 address assignment timeout. You can set the timeout duration for assigning IPv6 addresses by referring to :ref:`Setting the Timeout Duration for IPv6 Address Assignment `. + If you use a private image from a CentOS 6.x or Debian ECS with IPv6 auto-assignment enabled to create an ECS in a non-IPv6 environment, the ECS may experience slow startup due to an IPv6 address assignment timeout. You can set the timeout duration for assigning IPv6 addresses by referring to :ref:`Setting the Timeout Duration for IPv6 Address Assignment `. .. _en-us_topic_0140963099__en-us_topic_0129883696_table1091729658: @@ -192,7 +192,7 @@ You can also enable dynamic IPv6 address assignment by following the instruction .. caution:: - When you run **ipv6-setup-**\ *xxx*, the network service will be automatically restarted. As a result, the network is temporarily disconnected. - - If a private image created from a CentOS 6.x or Debian ECS with automatic IPv6 address assignment enabled is used to create an ECS in an environment that does not support IPv6, the ECS may start slow because of IPv6 address assignment timeout. 
Set the timeout duration for assigning IPv6 addresses to 30s by referring to :ref:`Setting the Timeout Duration for IPv6 Address Assignment ` and try to create a new private image again. + - If you use a private image from a CentOS 6.x or Debian ECS with IPv6 auto-assignment enabled to create an ECS in a non-IPv6 environment, the ECS may experience slow startup due to an IPv6 address assignment timeout. You can set the timeout duration for assigning IPv6 addresses to 30s by referring to :ref:`Setting the Timeout Duration for IPv6 Address Assignment ` and try to create a new private image again. #. Run the following command to check whether IPv6 is enabled for the ECS: @@ -280,7 +280,7 @@ Linux (Manually Enabling Dynamic Assignment of IPv6 Addresses) .. caution:: - If a private image created from a CentOS 6.x or Debian ECS with automatic IPv6 address assignment enabled is used to create an ECS in an environment that does not support IPv6, the ECS may start slow because of IPv6 address assignment timeout. Set the timeout duration for assigning IPv6 addresses to 30s by referring to :ref:`Setting the Timeout Duration for IPv6 Address Assignment ` and try to create a new private image again. + If you use a private image from a CentOS 6.x or Debian ECS with IPv6 auto-assignment enabled to create an ECS in a non-IPv6 environment, the ECS may experience slow startup due to an IPv6 address assignment timeout. You can set the timeout duration for assigning IPv6 addresses to 30s by referring to :ref:`Setting the Timeout Duration for IPv6 Address Assignment ` and try to create a new private image again. #. .. _en-us_topic_0140963099__en-us_topic_0129883696_li967053013012: @@ -579,7 +579,7 @@ Linux (Manually Enabling Dynamic Assignment of IPv6 Addresses) Setting the Timeout Duration for IPv6 Address Assignment -------------------------------------------------------- -After automatic IPv6 address assignment is configured on an ECS running CentOS 6.x or Debian, the ECS will be created as a private image. When this image is used to create an ECS in an environment that IPv6 is unavailable, the ECS may start slow because acquiring an IPv6 address times out. Before creating the private image, you can set the timeout duration for acquiring IPv6 addresses to 30s as follows: +If you use a private image from a CentOS 6.x or Debian ECS with IPv6 auto-assignment enabled to create an ECS in a non-IPv6 environment, the ECS may experience slow startup due to an IPv6 address assignment timeout. You can set the timeout duration for assigning IPv6 addresses to 30s and try to create a new private image again. - CentOS 6.\ *x*: diff --git a/umn/source/faqs/disk_partition_attachment_and_expansion/index.rst b/umn/source/faqs/disk_partition_attachment_and_expansion/index.rst index 80649941..48ffb7a4 100644 --- a/umn/source/faqs/disk_partition_attachment_and_expansion/index.rst +++ b/umn/source/faqs/disk_partition_attachment_and_expansion/index.rst @@ -15,7 +15,7 @@ Disk Partition, Attachment, and Expansion - :ref:`What Are the Requirements for Attaching an EVS Disk to an ECS? ` - :ref:`Which ECSs Can Be Attached with SCSI EVS Disks? ` - :ref:`How Can I Attach a Snapshot-based System Disk to an ECS as Its Data Disk? ` -- :ref:`Why Does a Linux ECS with a SCSI Disk Attached Fails to Be Restarted? ` +- :ref:`Why Does a Linux ECS with a SCSI Disk Attached Fail to Be Restarted? ` - :ref:`Can All Users Use the Encryption Feature? ` - :ref:`How Can I Add ECSs Using Local Disks to an ECS Group? 
` - :ref:`Why Does a Disk Attached to a Windows ECS Go Offline? ` @@ -37,7 +37,7 @@ Disk Partition, Attachment, and Expansion what_are_the_requirements_for_attaching_an_evs_disk_to_an_ecs which_ecss_can_be_attached_with_scsi_evs_disks how_can_i_attach_a_snapshot-based_system_disk_to_an_ecs_as_its_data_disk - why_does_a_linux_ecs_with_a_scsi_disk_attached_fails_to_be_restarted + why_does_a_linux_ecs_with_a_scsi_disk_attached_fail_to_be_restarted can_all_users_use_the_encryption_feature how_can_i_add_ecss_using_local_disks_to_an_ecs_group why_does_a_disk_attached_to_a_windows_ecs_go_offline diff --git a/umn/source/faqs/disk_partition_attachment_and_expansion/why_does_a_linux_ecs_with_a_scsi_disk_attached_fails_to_be_restarted.rst b/umn/source/faqs/disk_partition_attachment_and_expansion/why_does_a_linux_ecs_with_a_scsi_disk_attached_fail_to_be_restarted.rst similarity index 96% rename from umn/source/faqs/disk_partition_attachment_and_expansion/why_does_a_linux_ecs_with_a_scsi_disk_attached_fails_to_be_restarted.rst rename to umn/source/faqs/disk_partition_attachment_and_expansion/why_does_a_linux_ecs_with_a_scsi_disk_attached_fail_to_be_restarted.rst index 4505381f..e57b0ebb 100644 --- a/umn/source/faqs/disk_partition_attachment_and_expansion/why_does_a_linux_ecs_with_a_scsi_disk_attached_fails_to_be_restarted.rst +++ b/umn/source/faqs/disk_partition_attachment_and_expansion/why_does_a_linux_ecs_with_a_scsi_disk_attached_fail_to_be_restarted.rst @@ -2,8 +2,8 @@ .. _en-us_topic_0087382187: -Why Does a Linux ECS with a SCSI Disk Attached Fails to Be Restarted? -===================================================================== +Why Does a Linux ECS with a SCSI Disk Attached Fail to Be Restarted? +==================================================================== Symptom ------- diff --git a/umn/source/faqs/eip/what_should_i_do_if_an_eip_cannot_be_pinged.rst b/umn/source/faqs/eip/what_should_i_do_if_an_eip_cannot_be_pinged.rst index 88220a29..973baf4b 100644 --- a/umn/source/faqs/eip/what_should_i_do_if_an_eip_cannot_be_pinged.rst +++ b/umn/source/faqs/eip/what_should_i_do_if_an_eip_cannot_be_pinged.rst @@ -56,7 +56,7 @@ ICMP is used for the ping command. Check whether the security group accommodatin #. On the **Elastic Cloud Server** page, click the name of the target ECS. - The page providing details about the ECS is displayed. + The ECS details page is displayed. #. Click the **Security Groups** tab, expand the information of the security group, and view security group rules. diff --git a/umn/source/faqs/remote_login/remote_login_errors_on_windows/index.rst b/umn/source/faqs/remote_login/remote_login_errors_on_windows/index.rst index 468e19fa..314cfa90 100644 --- a/umn/source/faqs/remote_login/remote_login_errors_on_windows/index.rst +++ b/umn/source/faqs/remote_login/remote_login_errors_on_windows/index.rst @@ -5,7 +5,7 @@ Remote Login Errors on Windows ============================== -- :ref:`Why Does an Authentication Failure Occurs After I Attempt to Remotely Log In to a Windows ECS? ` +- :ref:`Why Does an Authentication Failure Occur After I Attempt to Remotely Log In to a Windows ECS? ` - :ref:`Why Can't I Use the Local Computer to Connect to My Windows ECS? ` - :ref:`How Can I Obtain the Permission to Remotely Log In to a Windows ECS? ` - :ref:`Why Does the System Display No Remote Desktop License Servers Available to Provide a License When I Log In to a Windows ECS? 
` @@ -24,7 +24,7 @@ Remote Login Errors on Windows :maxdepth: 1 :hidden: - why_does_an_authentication_failure_occurs_after_i_attempt_to_remotely_log_in_to_a_windows_ecs + why_does_an_authentication_failure_occur_after_i_attempt_to_remotely_log_in_to_a_windows_ecs why_cant_i_use_the_local_computer_to_connect_to_my_windows_ecs how_can_i_obtain_the_permission_to_remotely_log_in_to_a_windows_ecs why_does_the_system_display_no_remote_desktop_license_servers_available_to_provide_a_license_when_i_log_in_to_a_windows_ecs diff --git a/umn/source/faqs/remote_login/remote_login_errors_on_windows/why_does_an_authentication_failure_occurs_after_i_attempt_to_remotely_log_in_to_a_windows_ecs.rst b/umn/source/faqs/remote_login/remote_login_errors_on_windows/why_does_an_authentication_failure_occur_after_i_attempt_to_remotely_log_in_to_a_windows_ecs.rst similarity index 94% rename from umn/source/faqs/remote_login/remote_login_errors_on_windows/why_does_an_authentication_failure_occurs_after_i_attempt_to_remotely_log_in_to_a_windows_ecs.rst rename to umn/source/faqs/remote_login/remote_login_errors_on_windows/why_does_an_authentication_failure_occur_after_i_attempt_to_remotely_log_in_to_a_windows_ecs.rst index 49e93dad..07de192f 100644 --- a/umn/source/faqs/remote_login/remote_login_errors_on_windows/why_does_an_authentication_failure_occurs_after_i_attempt_to_remotely_log_in_to_a_windows_ecs.rst +++ b/umn/source/faqs/remote_login/remote_login_errors_on_windows/why_does_an_authentication_failure_occur_after_i_attempt_to_remotely_log_in_to_a_windows_ecs.rst @@ -2,8 +2,8 @@ .. _en-us_topic_0018339851: -Why Does an Authentication Failure Occurs After I Attempt to Remotely Log In to a Windows ECS? -============================================================================================== +Why Does an Authentication Failure Occur After I Attempt to Remotely Log In to a Windows ECS? +============================================================================================= Symptom ------- diff --git a/umn/source/faqs/specification_modification/why_does_the_disk_attachment_of_a_linux_ecs_fail_after_i_modify_the_ecs_specifications.rst b/umn/source/faqs/specification_modification/why_does_the_disk_attachment_of_a_linux_ecs_fail_after_i_modify_the_ecs_specifications.rst index 804de26d..12abcde7 100644 --- a/umn/source/faqs/specification_modification/why_does_the_disk_attachment_of_a_linux_ecs_fail_after_i_modify_the_ecs_specifications.rst +++ b/umn/source/faqs/specification_modification/why_does_the_disk_attachment_of_a_linux_ecs_fail_after_i_modify_the_ecs_specifications.rst @@ -17,31 +17,31 @@ Procedure #. .. _en-us_topic_0214940106__en-us_topic_0120890833_li218141135312: - Run the following command to view the disks attached before specifications modification: + Run the following command to view the disks attached before specification modification: **fdisk -l** **\| grep 'Disk /dev/'** .. _en-us_topic_0214940106__en-us_topic_0120890833_fig10595124010458: .. figure:: /_static/images/en-us_image_0214947581.png - :alt: **Figure 1** Viewing disks attached before specifications modification + :alt: **Figure 1** Viewing disks attached before specification modification - **Figure 1** Viewing disks attached before specifications modification + **Figure 1** Viewing disks attached before specification modification As shown in :ref:`Figure 1 `, the ECS has three disks attached: **/dev/vda**, **/dev/vdb**, and **/dev/vdc**. #. .. 
_en-us_topic_0214940106__en-us_topic_0120890833_li161843557534: - Run the following command to view disks attached after specifications modification: + Run the following command to view disks attached after specification modification: **df -h\| grep '/dev/'** .. _en-us_topic_0214940106__en-us_topic_0120890833_fig692535712437: .. figure:: /_static/images/en-us_image_0214947582.png - :alt: **Figure 2** Viewing disks attached after specifications modification + :alt: **Figure 2** Viewing disks attached after specification modification - **Figure 2** Viewing disks attached after specifications modification + **Figure 2** Viewing disks attached after specification modification As shown in :ref:`Figure 2 `, only one disk **/dev/vda** is attached to the ECS. @@ -64,7 +64,7 @@ Procedure Ensure that **/mnt/vdb1** is empty. Otherwise, the attachment will fail. -#. Run the following commands to check whether the numbers of disks before and after specifications modifications are the same: +#. Run the following commands to check whether the numbers of disks before and after specification modifications are the same: **fdisk -l** **\| grep 'Disk /dev/'** @@ -80,4 +80,4 @@ Procedure **Figure 3** Checking the number of disks attached - As shown in :ref:`Figure 3 `, the numbers of disks before and after specifications modifications are the same. The disks are **/dev/vda**, **/dev/vdb**, and **/dev/vdc**. + As shown in :ref:`Figure 3 `, the numbers of disks before and after specification modifications are the same. The disks are **/dev/vda**, **/dev/vdb**, and **/dev/vdc**. diff --git a/umn/source/getting_started/initializing_evs_data_disks/index.rst b/umn/source/getting_started/initializing_evs_data_disks/index.rst index 2e197ff0..0fb8cc2e 100644 --- a/umn/source/getting_started/initializing_evs_data_disks/index.rst +++ b/umn/source/getting_started/initializing_evs_data_disks/index.rst @@ -6,17 +6,15 @@ Initializing EVS Data Disks =========================== - :ref:`Scenarios and Disk Partitions ` -- :ref:`Initializing a Windows Data Disk (Windows Server 2008) ` -- :ref:`Initializing a Windows Data Disk (Windows Server 2019) ` -- :ref:`Initializing a Linux Data Disk (fdisk) ` -- :ref:`Initializing a Linux Data Disk (parted) ` +- :ref:`Initializing a Windows Data Disk (Windows Server 2019) ` +- :ref:`Initializing a Linux Data Disk (fdisk) ` +- :ref:`Initializing a Linux Data Disk (parted) ` .. toctree:: :maxdepth: 1 :hidden: scenarios_and_disk_partitions - initializing_a_windows_data_disk_windows_server_2008 initializing_a_windows_data_disk_windows_server_2019 initializing_a_linux_data_disk_fdisk initializing_a_linux_data_disk_parted diff --git a/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_fdisk.rst b/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_fdisk.rst index f6d36784..8f485de8 100644 --- a/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_fdisk.rst +++ b/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_fdisk.rst @@ -1,6 +1,6 @@ -:original_name: en-us_topic_0085634797.html +:original_name: en-us_topic_0000002520144801.html -.. _en-us_topic_0085634797: +.. _en-us_topic_0000002520144801: Initializing a Linux Data Disk (fdisk) ====================================== @@ -285,12 +285,12 @@ The following example shows you how a new primary partition can be created on a .. note:: - After the server is restarted, the disk will not be automatically mounted. 
You can modify the **/etc/fstab** file to configure automount at startup. For details, see :ref:`Configuring Automatic Mounting at System Start `. + After the server is restarted, the disk will not be automatically mounted. You can modify the **/etc/fstab** file to configure auto mount at startup. For details, see :ref:`Configuring Auto Mount at Startup `. -.. _en-us_topic_0085634797__en-us_topic_0000001809189108_en-us_topic_0000001808330216_section15839912195453: +.. _en-us_topic_0000002520144801__en-us_topic_0000002015136322_en-us_topic_0000001809189108_en-us_topic_0000001808330216_section15839912195453: -Configuring Automatic Mounting at System Start ----------------------------------------------- +Configuring Auto Mount at Startup +--------------------------------- The **fstab** file controls what disks are automatically mounted at startup. You can use **fstab** to configure your data disks to mount automatically. This operation will not affect the existing data. @@ -355,7 +355,7 @@ The example here uses UUIDs to identify disks in the **fstab** file. You are adv **mount** **\|** **grep** **/mnt/sdc** - If information similar to the following is displayed, automatic mounting has been configured: + If information similar to the following is displayed, auto mount has taken effect: .. code-block:: diff --git a/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_parted.rst b/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_parted.rst index e5f862a2..f49d7892 100644 --- a/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_parted.rst +++ b/umn/source/getting_started/initializing_evs_data_disks/initializing_a_linux_data_disk_parted.rst @@ -1,6 +1,6 @@ -:original_name: en-us_topic_0085634798.html +:original_name: en-us_topic_0000002488064958.html -.. _en-us_topic_0085634798: +.. _en-us_topic_0000002488064958: Initializing a Linux Data Disk (parted) ======================================= @@ -267,12 +267,12 @@ The following example shows you how a new partition can be created on a new data .. note:: - After the server is restarted, the disk will not be automatically mounted. You can modify the **/etc/fstab** file to configure automount at startup. For details, see :ref:`Configuring Automatic Mounting at System Start `. + After the server is restarted, the disk will not be automatically mounted. You can modify the **/etc/fstab** file to configure auto mount at startup. For details, see :ref:`Configuring Auto Mount at Startup `. -.. _en-us_topic_0085634798__en-us_topic_0000001809029272_en-us_topic_0000001808490156_section15839912195453: +.. _en-us_topic_0000002488064958__en-us_topic_0000002015294546_en-us_topic_0000001809029272_en-us_topic_0000001808490156_section15839912195453: -Configuring Automatic Mounting at System Start ----------------------------------------------- +Configuring Auto Mount at Startup +--------------------------------- The **fstab** file controls what disks are automatically mounted at ECS startup. You can configure the **fstab** file of an ECS that has data. This operation will not affect the existing data. @@ -337,7 +337,7 @@ The following example uses UUIDs to identify disks in the **fstab** file. You ar **mount** **\|** **grep** **/mnt/sdc** - If information similar to the following is displayed, automatic mounting has been configured: + If information similar to the following is displayed, auto mount has taken effect: .. 
code-block:: diff --git a/umn/source/getting_started/initializing_evs_data_disks/initializing_a_windows_data_disk_windows_server_2019.rst b/umn/source/getting_started/initializing_evs_data_disks/initializing_a_windows_data_disk_windows_server_2019.rst index c788b63c..f2221fe7 100644 --- a/umn/source/getting_started/initializing_evs_data_disks/initializing_a_windows_data_disk_windows_server_2019.rst +++ b/umn/source/getting_started/initializing_evs_data_disks/initializing_a_windows_data_disk_windows_server_2019.rst @@ -1,6 +1,6 @@ -:original_name: en-us_topic_0117490178.html +:original_name: en-us_topic_0000002520224813.html -.. _en-us_topic_0117490178: +.. _en-us_topic_0000002520224813: Initializing a Windows Data Disk (Windows Server 2019) ====================================================== @@ -39,7 +39,7 @@ Procedure The **Server Manager** window is displayed. - .. figure:: /_static/images/en-us_image_0000001855868573.png + .. figure:: /_static/images/en-us_image_0000002301562030.png :alt: **Figure 1** Server Manager **Figure 1** Server Manager @@ -49,7 +49,7 @@ Procedure The **Computer Management** window is displayed. - .. figure:: /_static/images/en-us_image_0000001855948605.png + .. figure:: /_static/images/en-us_image_0000002301721718.png :alt: **Figure 2** Computer Management **Figure 2** Computer Management @@ -59,7 +59,7 @@ Procedure Disks are displayed in the right pane. If there is a disk that is not initialized, the system will prompt you with the **Initialize Disk** dialog box. - .. figure:: /_static/images/en-us_image_0000001809029932.png + .. figure:: /_static/images/en-us_image_0000002335561333.png :alt: **Figure 3** Disk list **Figure 3** Disk list @@ -69,7 +69,7 @@ Procedure The **Computer Management** window is displayed. - .. figure:: /_static/images/en-us_image_0000001809189776.png + .. figure:: /_static/images/en-us_image_0000002335521125.png :alt: **Figure 4** Computer Management **Figure 4** Computer Management @@ -85,7 +85,7 @@ Procedure The **New Simple Volume Wizard** window is displayed. - .. figure:: /_static/images/en-us_image_0000001855868577.png + .. figure:: /_static/images/en-us_image_0000002301562034.png :alt: **Figure 5** New Simple Volume Wizard **Figure 5** New Simple Volume Wizard @@ -95,7 +95,7 @@ Procedure The **Specify Volume Size** page is displayed. - .. figure:: /_static/images/en-us_image_0000001855948609.png + .. figure:: /_static/images/en-us_image_0000002301721726.png :alt: **Figure 6** Specify Volume Size **Figure 6** Specify Volume Size @@ -105,7 +105,7 @@ Procedure The **Assign Drive Letter or Path** page is displayed. - .. figure:: /_static/images/en-us_image_0000001809029936.png + .. figure:: /_static/images/en-us_image_0000002335561341.png :alt: **Figure 7** Assign Drive Letter or Path **Figure 7** Assign Drive Letter or Path @@ -115,7 +115,7 @@ Procedure The **Format Partition** page is displayed. - .. figure:: /_static/images/en-us_image_0000001809189780.png + .. figure:: /_static/images/en-us_image_0000002335521133.png :alt: **Figure 8** Format Partition **Figure 8** Format Partition @@ -125,7 +125,7 @@ Procedure The **Completing the New Simple Volume Wizard** page is displayed. - .. figure:: /_static/images/en-us_image_0000001855868581.png + .. figure:: /_static/images/en-us_image_0000002301562042.png :alt: **Figure 9** Completing the New Simple Volume Wizard **Figure 9** Completing the New Simple Volume Wizard @@ -139,7 +139,7 @@ Procedure Wait for the initialization to complete. 
When the volume status changes to **Healthy**, the initialization has finished successfully. - .. figure:: /_static/images/en-us_image_0000001855948613.png + .. figure:: /_static/images/en-us_image_0000002301721734.png :alt: **Figure 10** Disk initialized **Figure 10** Disk initialized @@ -149,9 +149,9 @@ Procedure If New Volume (D:) appears, the disk is successfully initialized and no further action is required. - .. figure:: /_static/images/en-us_image_0000001809189784.png + .. figure:: /_static/images/en-us_image_0000002335521137.png :alt: **Figure 11** This PC **Figure 11** This PC -.. |image1| image:: /_static/images/en-us_image_0000001809029940.png +.. |image1| image:: /_static/images/en-us_image_0000002335561349.png diff --git a/umn/source/getting_started/initializing_evs_data_disks/scenarios_and_disk_partitions.rst b/umn/source/getting_started/initializing_evs_data_disks/scenarios_and_disk_partitions.rst index 9111a4d0..64b2d2d4 100644 --- a/umn/source/getting_started/initializing_evs_data_disks/scenarios_and_disk_partitions.rst +++ b/umn/source/getting_started/initializing_evs_data_disks/scenarios_and_disk_partitions.rst @@ -14,7 +14,7 @@ After a disk is attached to a server, you need to log in to the server to initia - System disk - A system disk does not require manual initialization because it is automatically created and initialized upon server creation. The default partition style is master boot record (MBR). + A system disk does not require manual initialization because it is automatically created and initialized upon the server creation. The default partition style is master boot record (MBR). - Data disk @@ -23,12 +23,19 @@ After a disk is attached to a server, you need to log in to the server to initia In both cases, you must initialize the data disk before using it. Choose an appropriate partition style based on your service plan. -Partitioning Operation Guide ----------------------------- +Notes and Constraints +--------------------- -:ref:`Table 1 ` lists the common disk partition styles. In Linux, different disk partition styles require different partitioning tools. +- A disk created from a data source does not need to be initialized. Such a disk contains the source data in the beginning. Initializing the disk may clear the initial data on it. +- Initializing a disk does not change the server's IP address or the disk ID. +- Initializing a disk does not delete the snapshots created for the disk, so you can still use snapshots to roll back data to the source disk after the disk is initialized. -.. _en-us_topic_0030831623__en-us_topic_0085245975_table2729705994129: +Disk Partition Styles +--------------------- + +:ref:`Table 1 ` lists the common disk partition styles. In Linux, different partition styles require different partitioning tools. + +.. _en-us_topic_0030831623__en-us_topic_0000002015294542_en-us_topic_0000001855947921_en-us_topic_0000001808490252_table2729705994129: .. table:: **Table 1** Disk partition styles @@ -47,3 +54,9 @@ Partitioning Operation Guide | | | | | | | 1 EiB = 1048576 TiB | Disk partitions created using GPT are not categorized. | | +----------------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------+ + +.. 
important:: + + The maximum disk size supported by MBR is 2 TiB, and that supported by GPT is 18 EiB. Because an EVS data disk currently supports up to 32 TiB, use GPT if your disk size is greater than 2 TiB. + + If the partition style is changed after the disk has been used, all data on the disk will be lost, so take care to select an appropriate partition style when initializing the disk. If you must change the partition style to GPT after a disk has been used, it is recommended that you back up the disk data before the change. diff --git a/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_an_ssh_key_pair.rst b/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_an_ssh_key_pair.rst index 4d28a99c..ff9aa79e 100644 --- a/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_an_ssh_key_pair.rst +++ b/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_an_ssh_key_pair.rst @@ -111,7 +111,7 @@ The following operations use PuTTY as an example. Before using PuTTY to log in, #. Run the following command using the EIP to remotely log in to the ECS through SSH: - **ssh** **Username**\ **@\ EIP** + **ssh** **username**\ **@\ EIP** .. note:: diff --git a/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_vnc.rst b/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_vnc.rst index d95bcce7..619339cd 100644 --- a/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_vnc.rst +++ b/umn/source/instances/logging_in_to_a_linux_ecs/logging_in_to_a_linux_ecs_using_vnc.rst @@ -109,7 +109,7 @@ Logging In to an ECS Using an English Keyboard #. Under **Computing**, click **Elastic Cloud Server**. -#. In the search box above the upper right corner of the ECS list, enter the ECS name and click |image2| for search. +#. In the search box above the ECS list, enter the ECS name and click |image2| for search. #. Locate the row containing the ECS and click **Remote Login** in the **Operation** column. @@ -162,7 +162,7 @@ Logging In to an ECS Using a Non-English Keyboard #. Under **Computing**, click **Elastic Cloud Server**. -#. In the search box above the upper right corner of the ECS list, enter the ECS name, IP address, or ID, and click |image4| for search. +#. In the search box above the ECS list, enter the ECS name, IP address, or ID, and click |image4| for search. #. Locate the row containing the ECS and click **Remote Login** in the **Operation** column. diff --git a/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_from_a_linux_server.rst b/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_from_a_linux_server.rst index 03dc9133..751d34a4 100644 --- a/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_from_a_linux_server.rst +++ b/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_from_a_linux_server.rst @@ -33,7 +33,7 @@ To log in to a Windows ECS from a local Linux server, use a remote access tool, #. Run the following command to log in to the ECS: - **rdesktop -u** *Username* **-p** *Password* **-g** *Resolution* *EIP* + **rdesktop -u** *username* **-p** *password* **-g** *resolution* *EIP* For example, run **rdesktop -u administrator -p password -g 1024*720 121.xx.xx.xxx**. 
diff --git a/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_using_vnc.rst b/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_using_vnc.rst index b9961bdc..d98cf080 100644 --- a/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_using_vnc.rst +++ b/umn/source/instances/logging_in_to_a_windows_ecs/logging_in_to_a_windows_ecs_using_vnc.rst @@ -305,7 +305,7 @@ Log In to a Windows ECS Using a Non-English Keyboard For instructions about how to obtain the password for logging in to a Windows ECS, see :ref:`Obtaining the Password for Logging In to a Windows ECS `. -#. In the search box above the upper right corner of the ECS list, enter the ECS name, IP address, or ID, and click |image4| for search. +#. In the search box above the ECS list, enter the ECS name, IP address, or ID, and click |image4| for search. #. Locate the row containing the ECS and click **Remote Login** in the **Operation** column. diff --git a/umn/source/instances/managing_gpu_drivers_of_gpu-accelerated_ecss/manually_installing_a_tesla_driver_on_a_gpu-accelerated_ecs.rst b/umn/source/instances/managing_gpu_drivers_of_gpu-accelerated_ecss/manually_installing_a_tesla_driver_on_a_gpu-accelerated_ecs.rst index 3300ce49..eb046bae 100644 --- a/umn/source/instances/managing_gpu_drivers_of_gpu-accelerated_ecss/manually_installing_a_tesla_driver_on_a_gpu-accelerated_ecs.rst +++ b/umn/source/instances/managing_gpu_drivers_of_gpu-accelerated_ecss/manually_installing_a_tesla_driver_on_a_gpu-accelerated_ecs.rst @@ -333,7 +333,7 @@ The following uses Ubuntu 20.04 64bit as an example to describe how to install t 10. (Optional) Check whether CUDA has been installed. - If the CUDA version is 11.5 or earlier, perform the following operations to check whether CUDA has been installed: If the CUDA version is 11.6 or later, skip this step. + If the CUDA version is 11.5 or earlier, perform the following operations to check whether CUDA has been installed. If the CUDA version is 11.6 or later, skip this step. a. Run the following command to switch to **/usr/local/cuda-10.1/samples/1_Utilities/deviceQuery**: diff --git a/umn/source/instances/reinstalling_or_changing_the_os/changing_the_os.rst b/umn/source/instances/reinstalling_or_changing_the_os/changing_the_os.rst index bbf06f6b..b6e69d92 100644 --- a/umn/source/instances/reinstalling_or_changing_the_os/changing_the_os.rst +++ b/umn/source/instances/reinstalling_or_changing_the_os/changing_the_os.rst @@ -33,11 +33,11 @@ Notes - After the OS is changed, the original OS is not retained, and the original system disk is deleted, including the data in all partitions of the system disk. - Changing the OS clears the data in all partitions of the system disk, including the system partition. Back up data before changing the OS. - Changing the OS does not affect data in data disks. -- After the OS is changed, your service running environment must be deployed in the new OS again. +- After the OS is changed, your service runtime environment must be deployed on the new OS again. - After the OS is changed, the ECS will be automatically started. - After the OS is changed, the system disk type of the ECS cannot be changed. - After the OS is changed, the IP and MAC addresses of the ECS remain unchanged. -- After the OS is changed, customized configurations, such as DNS and hostname of the original OS will be reset and require reconfiguration. 
+- After the OS is changed, custom settings (such as DNS and hostname) of the original OS will be reset. They need to be configured again. - An OS change takes about 1 to 4 minutes to complete. During this process, the ECS status is **Changing OS**. - After the OS is changed, the password for logging in to the ECS is reset. To retrieve the password, perform the following operations: @@ -147,7 +147,7 @@ Follow-up Procedure It is a good practice to back up the **/etc/fstab** file before writing data into it. - To enable automatic partition mounting upon system startup, see :ref:`Initializing a Linux Data Disk (fdisk) `. + To enable automatic partition mounting upon system startup, see :ref:`Initializing EVS Data Disks `. #. Mount the partition so that you can use the data disk. diff --git a/umn/source/instances/viewing_ecs_information/searching_for_ecss.rst b/umn/source/instances/viewing_ecs_information/searching_for_ecss.rst index d8ebb663..d65efd9c 100644 --- a/umn/source/instances/viewing_ecs_information/searching_for_ecss.rst +++ b/umn/source/instances/viewing_ecs_information/searching_for_ecss.rst @@ -25,7 +25,7 @@ A variety of ECS search types are available. For details, see :ref:`Table 1 `. + Add tag keys and tag values to the ECS. For the tag key and tag value requirements, see :ref:`Table 1 `. .. note:: @@ -82,13 +82,13 @@ Adding Tags on the TMS Console #. Under **Management & Deployment**, click **Tag Management Service**. -#. On the displayed **Resource Tags** page, select the region where the resource is located, select **ECS-ECS** for **Resource Type**, and click **Search**. +#. On the displayed page, select the region where the resource is located, select **ECS-ECS** for **Resource Type**, and click **Search**. All ECSs matching the search criteria are displayed. -#. In the **Search Result** area, click **Create Key**. In the displayed dialog box, enter a key (for example **project**) and click **OK**. +#. In the **Search Results** area, click **Create Key**. In the displayed dialog box, enter a key (for example, **project**) and click **OK**. - After the tag is created, the tag key is added to the resource list. If the key is not displayed in the resource list, click |image3| and select the created key from the drop-down list. + After the tag is created, the tag key is added to the resource list. If the key is not displayed in the resource list, click |image3| and select the key you created just now. By default, the value of the tag key is **Not tagged**. You need to set a value for the tag of each resource to associate the tag with the resource. diff --git a/umn/source/resources_and_tags/tag_management/deleting_tags.rst b/umn/source/resources_and_tags/tag_management/deleting_tags.rst index 3564c821..2eee40d6 100644 --- a/umn/source/resources_and_tags/tag_management/deleting_tags.rst +++ b/umn/source/resources_and_tags/tag_management/deleting_tags.rst @@ -41,13 +41,13 @@ Deleting a Tag on the TMS Console #. On the **Resource Tags** page, set the search criteria for ECSs and click **Search**. -#. In the **Search Result** area, click **Edit** to make the resource tag list editable. +#. In the **Search Results** area, click **Edit** to make the resource tag list editable. - If the key of a tag you want to delete is not contained in the list, click |image2| and select the tag key from the drop-down list. It is a good practice to select at most 10 keys to display. + If the key of a tag you want to delete is not contained in the list, click |image2| and select the target tag keys. 
It is a good practice to select at most 10 keys to display. #. Locate the row containing the target ECS and click |image3|. -#. (Optional) Click |image4| in the upper right of the **Search Result** area. +#. (Optional) Click |image4| in the upper right of the **Search Results** area. The resource list is refreshed and the refresh time is updated. diff --git a/umn/source/security/security_groups/changing_a_security_group.rst b/umn/source/security/security_groups/changing_a_security_group.rst index 13a9040d..24484f30 100644 --- a/umn/source/security/security_groups/changing_a_security_group.rst +++ b/umn/source/security/security_groups/changing_a_security_group.rst @@ -27,7 +27,7 @@ Procedure #. In the ECS list, choose **More** > **Manage Network** > **Change Security Group** in the **Operation** column. - The **Change Security Group** dialog box is displayed. + The **Change Security Group** panel slides out. .. figure:: /_static/images/en-us_image_0000002385344877.png diff --git a/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/disk-intensive_ecss.rst b/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/disk-intensive_ecss.rst index 94a0ec02..42de313a 100644 --- a/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/disk-intensive_ecss.rst +++ b/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/disk-intensive_ecss.rst @@ -192,7 +192,7 @@ Notes - To improve network performance, you can set the NIC MTU of a D2 ECS to **8888**. -- D2 ECSs do not support specifications modification. +- D2 ECSs do not support specification modification. - D2 ECSs do not support local disk snapshots or backups. diff --git a/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/gpu-accelerated_ecss.rst b/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/gpu-accelerated_ecss.rst index 5f78a981..18e278fb 100644 --- a/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/gpu-accelerated_ecss.rst +++ b/umn/source/service_overview/ecs_types_and_specifications/ecs_specifications/gpu-accelerated_ecss.rst @@ -28,6 +28,7 @@ Available now: All GPU models except the recommended ones. If available ECSs are - P series + - :ref:`Computing-accelerated P5e ` - :ref:`Computing-accelerated P5s ` - :ref:`Computing-accelerated P3 ` - :ref:`Computing-accelerated P2s ` (recommended) @@ -67,6 +68,15 @@ Images Supported by GPU-accelerated ECSs | | | - Windows Server 2016 Standard 64bit | | | | - Windows Server 2012 R2 Standard 64bit | +-----------------------+-----------------------+------------------------------------------+ + | Computing-accelerated | P5e | - CentOS 7.9 64bit | + | | | - CentOS 7.8 64bit | + | | | - CentOS 7.7 64bit | + | | | - CentOS 7.6 64bit | + | | | - Ubuntu 22.04 64bit | + | | | - Ubuntu 20.04 64bit | + | | | - Ubuntu 18.04 64bit | + | | | - Ubuntu 16.04 64bit | + +-----------------------+-----------------------+------------------------------------------+ | Computing-accelerated | P5s | - CentOS 7.9 64bit | | | | - CentOS 7.8 64bit | | | | - CentOS 7.7 64bit | @@ -130,7 +140,7 @@ Graphics-accelerated Enhancement G7v **Overview** -G7v ECSs use NVIDIA A40 GPUs and support DirectX, Shader Model, OpenGL, and Vulkan. Each GPU provides 48 GiB of GPU memory. Theoretically, the peak FP32 is 37.4 TFLOPS and the peak TF32 tensor is 74.8 TFLOPS \| 149.6 TFLOPS (sparsity enabled). 
They deliver two times the rendering performance and 1.4 times the graphics processing performance of RTX6000 GPUs to meet professional graphics processing requirements. +G7v ECSs use NVIDIA A40 GPUs and support DirectX, Shader Model, OpenGL, and Vulkan. Each GPU provides 48 GiB of GPU memory. Theoretically, the peak FP32 is 37.4 TFLOPS and the peak TF32 tensor is 74.8 TFLOPS \| 149.6 TFLOPS (sparsity enabled). They deliver twice the rendering performance and 1.4 times the graphics processing performance of RTX6000 GPUs to meet professional graphics processing requirements. Select your desired GPU-accelerated ECS type and specifications. @@ -166,9 +176,9 @@ Select your desired GPU-accelerated ECS type and specifications. - Heavy-load CPU inference - Application flow identical to common ECSs - Automatic scheduling of G7v ECSs to AZs where NVIDIA A40 GPUs are used -- One NVENC (encoding) engine and two NVDEC (decoding) engines (including AV1 decoding) embedded +- Built-in one NVENC and two NVDECs (including AV1 decoding) -**Supported Common Software** +**Supported Software** G7v ECSs are used in graphics acceleration scenarios, such as video rendering, cloud desktop, and 3D visualization. If the software relies on GPU DirectX and OpenGL hardware acceleration, use G7v ECSs. G7v ECSs support the following commonly used graphics processing software: @@ -199,7 +209,7 @@ G7v ECSs are used in graphics acceleration scenarios, such as video rendering, c For details, see :ref:`Manually Installing a GRID Driver on a GPU-accelerated ECS `. -- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing power. Their specifications can only be changed to other specifications of the same instance type. +- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous compute. Their specifications can only be changed to other specifications of the same instance type. - GPU-accelerated ECSs do not support live migration. @@ -210,7 +220,7 @@ Graphics-accelerated Enhancement G7 **Overview** -G7 ECSs use NVIDIA A40 GPUs and support DirectX, Shader Model, OpenGL, and Vulkan. Each GPU provides 48 GiB of GPU memory. Theoretically, the peak FP32 is 37.4 TFLOPS and the peak TF32 tensor is 74.8 TFLOPS \| 149.6 TFLOPS (sparsity enabled). They deliver two times the rendering performance and 1.4 times the graphics processing performance of RTX6000 GPUs to meet professional graphics processing requirements. +G7 ECSs use NVIDIA A40 GPUs and support DirectX, Shader Model, OpenGL, and Vulkan. Each GPU provides 48 GiB of GPU memory. Theoretically, the peak FP32 is 37.4 TFLOPS and the peak TF32 tensor is 74.8 TFLOPS \| 149.6 TFLOPS (sparsity enabled). They deliver twice the rendering performance and 1.4 times the graphics processing performance of RTX6000 GPUs to meet professional graphics processing requirements. Select your desired GPU-accelerated ECS type and specifications. @@ -244,9 +254,9 @@ Select your desired GPU-accelerated ECS type and specifications. - Heavy-load CPU inference - Application flow identical to common ECSs - Automatic scheduling of G7 ECSs to AZs where NVIDIA A40 GPUs are used -- One NVENC (encoding) engine and two NVDEC (decoding) engines (including AV1 decoding) embedded +- Built-in one NVENC and two NVDECs (including AV1 decoding) -**Supported Common Software** +**Supported Software** G7 ECSs are used in graphics acceleration scenarios, such as video rendering, cloud desktop, and 3D visualization. 
If the software relies on GPU DirectX and OpenGL hardware acceleration, use G7 ECSs. G7 ECSs support the following commonly used graphics processing software: @@ -319,9 +329,9 @@ Select your desired GPU-accelerated ECS type and specifications. - Graphics applications accelerated - Heavy-load CPU inference - Automatic scheduling of G6 ECSs to AZs where NVIDIA T4 GPUs are used -- One NVENC engine and two NVDEC engines embedded +- Built-in one NVENC and two NVDECs -**Supported Common Software** +**Supported Software** G6 ECSs are used in graphics acceleration scenarios, such as video rendering, cloud desktop, and 3D visualization. If the software relies on GPU DirectX and OpenGL hardware acceleration, use G6 ECSs. G6 ECSs support the following commonly used graphics processing software: @@ -347,6 +357,52 @@ G6 ECSs are used in graphics acceleration scenarios, such as video rendering, cl - GPU-accelerated ECSs do not support live migration. +.. _en-us_topic_0097289624__section7795174175814: + +Computing-accelerated P5e +------------------------- + +**Overview** + +P5e ECSs use high-performance NVIDIA Tesla H100 NVL to deliver outstanding training. + +**Specifications** + +.. table:: **Table 5** P5e ECS specifications + + +-----------------+-------+--------------+-----------------------------------------+---------------------------+-----------------+-----------+--------------+------------------+----------------+ + | Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Virtualization | + +=================+=======+==============+=========================================+===========================+=================+===========+==============+==================+================+ + | p5e.10xlarge.12 | 40 | 480 | 24/9 | 550 | 16 | 8 | 2 x H100 NVL | 188 | KVM | + +-----------------+-------+--------------+-----------------------------------------+---------------------------+-----------------+-----------+--------------+------------------+----------------+ + | p5e.20xlarge.12 | 80 | 960 | 32/18 | 750 | 32 | 8 | 4 x H100 NVL | 376 | KVM | + +-----------------+-------+--------------+-----------------------------------------+---------------------------+-----------------+-----------+--------------+------------------+----------------+ + | p5e.40xlarge.12 | 160 | 1920 | 40/36 | 850 | 32 | 8 | 8 x H100 NVL | 752 | KVM | + +-----------------+-------+--------------+-----------------------------------------+---------------------------+-----------------+-----------+--------------+------------------+----------------+ + +**P5e ECS Features** + +- 1:12 ratio of vCPUs to memory +- CPU: 4th Generation Intel® Xeon® Scalable 8458P processors (2.7 GHz of basic frequency and 3.8 GHz of turbo frequency) +- Each GPU provides 94 GiB of GPU memory and 3,026 TFLOPS INT8 compute. +- The GPU memory bandwidth can reach up to 2,000 GB/s. + +**Supported Software** + +P5e ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If the software is required to support GPU CUDA, use P5e ECSs. 
P5e ECSs support the following commonly used software: + +- Common deep learning frameworks, such as TensorFlow, Spark, PyTorch, MXNet, and Caffe +- CUDA GPU rendering supported by RedShift for Autodesk 3ds Max and V-Ray for 3ds Max +- Agisoft PhotoScan +- MapD +- More than 2,000 GPU-accelerated applications such as Amber, NAMD, and VASP + +**Notes** + +- P5e ECSs support automatic recovery when the hosts accommodating such ECSs become faulty. +- After a P5e ECS is stopped, its basic resources (vCPUs, memory, image, and encoding cards) are not billed, but its system disk is billed based on the disk capacity. If other service resources, such as EVS disks, EIPs, and bandwidth are associated with the ECS, these resources are billed separately. +- Specifications of P5e ECSs can only be changed to other specifications of the same instance type. + .. _en-us_topic_0097289624__section453311473114: Computing-accelerated P5s @@ -453,7 +509,7 @@ P3 ECSs use NVIDIA A100 GPUs and provide flexibility and ultra-high-performance The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P3 ECSs. -**Supported Common Software** +**Supported Software** P3 ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If the software is required to support GPU CUDA, use P3 ECSs. P3 ECSs support the following commonly used software: @@ -532,7 +588,7 @@ P2s ECSs use NVIDIA Tesla V100 GPUs to provide flexibility, high-performance com The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P2s ECSs. -**Supported Common Software** +**Supported Software** P2s ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If the software is required to support GPU CUDA, use P2s ECSs. P2s ECSs support the following commonly used software: @@ -551,7 +607,7 @@ P2s ECSs are used in computing acceleration scenarios, such as deep learning tra - By default, P2s ECSs created using a Windows public image have the Tesla driver installed. - If a P2s ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS `. -- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing power. Their specifications can only be changed to other specifications of the same instance type. +- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous compute. Their specifications can only be changed to other specifications of the same instance type. - GPU-accelerated ECSs do not support live migration. .. _en-us_topic_0097289624__section208472383415: @@ -611,7 +667,7 @@ P2v ECSs use NVIDIA Tesla V100 GPUs and deliver high flexibility, high-performan The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P2v ECSs. 
-**Supported Common Software** +**Supported Software** P2v ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If the software is required to support GPU CUDA, use P2v ECSs. P2v ECSs support the following commonly used software: @@ -631,7 +687,7 @@ P2v ECSs are used in computing acceleration scenarios, such as deep learning tra - By default, P2v ECSs created using a Windows public image have the Tesla driver installed. - By default, P2v ECSs created using a Linux public image do not have a Tesla driver installed. After the ECS is created, install a driver on it for computing acceleration. For details, see :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS `. - If a P2v ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS `. -- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing power. Their specifications can only be changed to other specifications of the same instance type. +- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous compute. Their specifications can only be changed to other specifications of the same instance type. - GPU-accelerated ECSs do not support live migration. .. _en-us_topic_0097289624__section3135224614: @@ -716,9 +772,9 @@ Pi2 ECSs use NVIDIA Tesla T4 GPUs dedicated for real-time AI inference. These EC - Up to 8.1 TFLOPS of single-precision computing on a single GPU - Up to 130 TOPS of INT8 computing on a single GPU - 16 GiB of GDDR6 GPU memory with a bandwidth of 320 GiB/s on a single GPU -- One NVENC engine and two NVDEC engines embedded +- Built-in one NVENC and two NVDECs -**Supported Common Software** +**Supported Software** Pi2 ECSs are used in GPU-based inference computing scenarios, such as image recognition, speech recognition, and natural language processing. The Pi2 ECSs can also be used for light-load training. @@ -738,5 +794,5 @@ Pi2 ECSs support the following commonly used software: - By default, Pi2 ECSs created using a Windows public image have the Tesla driver installed. - By default, Pi2 ECSs created using a Linux public image do not have a Tesla driver installed. After the ECS is created, install a driver on it for computing acceleration. For details, see :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS `. - If a Pi2 ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see :ref:`Manually Installing a Tesla Driver on a GPU-accelerated ECS `. -- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing power. Their specifications can only be changed to other specifications of the same instance type. +- GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous compute. Their specifications can only be changed to other specifications of the same instance type. - GPU-accelerated ECSs do not support live migration. 
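The Linux-related notes above assume that the Tesla driver (and, for some flavors, the CUDA toolkit) has been installed manually. The following is a minimal verification sketch, not part of the patched manual: it assumes a Linux GPU-accelerated ECS, and the deviceQuery path mirrors the CUDA 10.1 example used elsewhere in this manual, so adjust it to the installed CUDA version::

    # Confirm the GPU is visible on the PCI bus.
    lspci | grep -i nvidia

    # Confirm the Tesla driver is loaded and list the GPUs it manages.
    nvidia-smi

    # Optional: if the CUDA toolkit (11.5 or earlier) shipped with samples,
    # build and run deviceQuery to confirm CUDA can reach the GPUs.
    cd /usr/local/cuda-10.1/samples/1_Utilities/deviceQuery
    make
    ./deviceQuery

If nvidia-smi reports no devices, reinstall the driver as described in "Manually Installing a Tesla Driver on a GPU-accelerated ECS".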
diff --git a/umn/source/service_overview/images/cloud-init.rst b/umn/source/service_overview/images/cloud-init.rst index dab3e1f8..2f27d80a 100644 --- a/umn/source/service_overview/images/cloud-init.rst +++ b/umn/source/service_overview/images/cloud-init.rst @@ -19,7 +19,7 @@ To ensure that ECSs that are created using a private image support custom config - For Windows OSs, download and install Cloudbase-Init. - For Linux OSs, download and install Cloud-Init. -After being installed in an image, Cloud-Init or Cloudbase-Init automatically configures initial attributes for the ECSs created using this image. +Once installed in an image, Cloud-Init or Cloudbase-Init automatically configures initial attributes for the ECSs created using this image. For more information, see *Image Management Service User Guide*. diff --git a/umn/source/service_overview/user_permissions.rst b/umn/source/service_overview/user_permissions.rst index 1aa8cf2f..8a0cd471 100644 --- a/umn/source/service_overview/user_permissions.rst +++ b/umn/source/service_overview/user_permissions.rst @@ -10,4 +10,4 @@ Two types of permissions are provided by default: user management and resource m - User management refers to the management of users, user groups, and user group rights. - Resource management refers to the control operations that can be performed by users on cloud service resources. -For further details, see `Permissions `__. +For details, see `Permissions `__.
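To complement the Cloud-Init note above, a minimal sketch (not from the patched manual) for checking that Cloud-Init is available and has finished initializing an ECS created from a private image; it assumes a Linux guest with a reasonably recent Cloud-Init release, since the status subcommand is absent from very old versions::

    # Report the installed Cloud-Init version.
    cloud-init --version

    # Show whether the initial-boot configuration stages completed successfully.
    cloud-init status --long

For Windows images, the equivalent check is that the Cloudbase-Init service is installed and set to start automatically.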