Configuring MySQL source endpoints
When creating or editing an endpoint, you can define:
- Connection settings for a Managed Service for MySQL cluster or a custom installation, including installations hosted on Compute Cloud VMs. These parameters are required.
- Additional parameters.
Managed Service for MySQL cluster
Warning
To create or edit an endpoint of a managed database, you need the managed-mysql.viewer role or the primitive viewer role issued for the folder hosting a cluster of this managed database.
Connection to the database using the cluster ID specified in Nebius Israel. Available only for clusters deployed in Managed Service for MySQL.
Management console
- Managed Service for MySQL cluster: Specify the ID of the cluster to connect to.
- Security groups: Select the cloud network to host the endpoint and the security groups for network traffic.
  This lets you apply the specified security group rules to the VMs and clusters in the selected network without changing the settings of those VMs and clusters. For more information, see Network in Data Transfer.
- Database: Specify the name of the database in the selected cluster. Leave the field empty if you want to transfer tables from multiple databases at the same time. In this case, specify the database for creating service tables in the Database for auxiliary tables field.
- User: Specify the username that Data Transfer will use to connect to the database.
- Password: Enter the user's password for the database.
CLI
- Endpoint type: mysql-source.
- --cluster-id: ID of the cluster you need to connect to.
- --database: Database name. Leave it empty if you want to transfer tables from multiple databases at the same time.
- --user: Username that Data Transfer will use to connect to the database.
- To set a user password for accessing the DB, use one of the following parameters:
  - --raw-password: Password as text.
  - --password-file: Path to the password file.
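For illustration, a minimal creation command could look like this (a sketch only: the yc datatransfer endpoint create command path and placeholder values are assumptions and may differ in your CLI; only the flags listed above are used, so check the CLI help for the exact syntax):
yc datatransfer endpoint create mysql-source \
  --cluster-id <cluster ID> \
  --database <database name> \
  --user <username> \
  --password-file ./mysql-password.txt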
Terraform
- Endpoint type: mysql_source.
- connection.mdb_cluster_id: ID of the cluster to connect to.
- database: Database name. Leave it empty if you want to transfer tables from multiple databases at the same time.
- user: Username that Data Transfer will use to connect to the database.
- password.raw: Password in text form.
Example of the configuration file structure:
resource "yandex_datatransfer_endpoint" "<endpoint name in Terraform>" {
name = "<endpoint name>"
settings {
mysql_source {
security_groups = [ "list of security group IDs" ]
connection {
mdb_cluster_id = "<Managed Service for MySQL cluster ID>"
}
database = "<name of database being transferred>"
user = "<username for connection>"
password {
raw = "<user password>"
}
<advanced endpoint settings>
}
}
}
For more information, see the Terraform provider documentation.
API
- mdbClusterId: ID of the cluster you need to connect to.
- database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.
- user: Username that Data Transfer will use to connect to the database.
- password.raw: Database user password (in text form).
Custom installation
For OnPremise, all fields are filled in manually.
Management console
- Host: Enter the IP address or FQDN of the master host you want to connect to.
- Port: Set the number of the port that Data Transfer will use for the connection.
- CA certificate: Upload the certificate file or add its contents as text if the transmitted data must be encrypted, for example, to meet PCI DSS requirements.
- Subnet ID: Select or create a subnet in the desired availability zone.
  If this field is filled in for both endpoints, both subnets must be hosted in the same availability zone.
- Database: Specify the name of the database in the selected cluster. Leave the field empty if you want to transfer tables from multiple databases at the same time. In this case, specify the database for creating service tables in the Database for auxiliary tables field.
- User: Specify the username that Data Transfer will use to connect to the database.
- Password: Enter the user's password for the database.
- Security groups: Select the cloud network to host the endpoint and the security groups for network traffic.
  This lets you apply the specified security group rules to the VMs and clusters in the selected network without changing the settings of those VMs and clusters. For more information, see Network in Data Transfer.
CLI
- Endpoint type: mysql-source.
- --host: IP address or FQDN of the master host you want to connect to.
- --port: Number of the port that Data Transfer will use for the connection.
- --ca-certificate: CA certificate, if the transmitted data must be encrypted, for example, to meet PCI DSS requirements.
- --subnet-id: ID of the subnet the host resides in.
- --database: Database name. Leave it empty if you want to transfer tables from multiple databases at the same time.
- --user: Username that Data Transfer will use to connect to the database.
- To set a user password for accessing the DB, use one of the following parameters:
  - --raw-password: Password as text.
  - --password-file: Path to the password file.
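For illustration, a sketch for a custom installation, with the same assumptions about the command path as in the managed cluster example above (the CA certificate flag is assumed to take a file path, and the values are placeholders):
yc datatransfer endpoint create mysql-source \
  --host <IP address or FQDN of the master host> \
  --port 3306 \
  --ca-certificate ./mysql-ca.pem \
  --subnet-id <subnet ID> \
  --database <database name> \
  --user <username> \
  --password-file ./mysql-password.txt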
Terraform
- Endpoint type: mysql_source.
- on_premise.hosts: List of IPs or FQDNs of hosts to connect to. Since only single-item lists are supported, specify the master host address.
- on_premise.port: Port number that Data Transfer will use for connections.
- on_premise.tls_mode.enabled.ca_certificate: CA certificate, if the transmitted data must be encrypted, for example, to comply with PCI DSS requirements.
- on_premise.subnet_id: ID of the subnet the host is on.
- database: Database name. Leave it empty if you want to transfer tables from multiple databases at the same time.
- user: Username that Data Transfer will use to connect to the database.
- password.raw: Password in text form.
Example of the configuration file structure:
resource "yandex_datatransfer_endpoint" "<endpoint name in Terraform>" {
name = "<endpoint name>"
settings {
mysql_source {
security_groups = [ "list of security group IDs" ]
connection {
on_premise {
hosts = ["<host list>"]
port = <connection port>
}
}
database = "<name of database being transferred>"
user = "<username for connection>"
password {
raw = "<user password>"
}
<advanced endpoint settings>
}
}
}
For more information, see the Terraform provider documentation.
API
- onPremise: Database connection parameters:
  - hosts: IP address or FQDN of the master host to connect to.
  - port: Number of the port that Data Transfer will use for the connection.
  - tlsMode: Parameters for encrypting the transmitted data if required, for example, to meet PCI DSS requirements:
    - disabled: Encryption is disabled.
    - enabled: Encryption is enabled.
      - caCertificate: CA certificate.
  - subnetId: ID of the subnet the host resides in.
- database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.
- user: Username that Data Transfer will use to connect to the database.
- password.raw: Database user password (in text form).
Additional settings
Management console
- Included tables: Data is transferred only from the listed tables. This option is specified using regular expressions (see the example after this list).
  If you add new tables when editing an endpoint used in Snapshot and increment or Replication transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred (Preview) field in the transfer settings.
- Excluded tables: Data from the listed tables is not transferred. This option is specified using regular expressions.
- Transfer schema: Allows you to select the DB schema elements to be transferred when activating or deactivating a transfer.
- Time zone for connecting to the database: Specify the IANA Time Zone Database identifier. By default, the server's local time zone is used.
- Database for auxiliary tables: Database for service tables (__tm_keeper and __tm_gtid_keeper). By default, this is the source database the data is transferred from.
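For example, assuming tables are matched by their fully qualified <database>.<table> names, the regular expression ^db1\.orders.* would include every table in db1 whose name starts with orders, while ^db1\..* would include all tables in db1.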
CLI
- --include-table-regex: List of included tables. If set, data is transferred only from the tables in this list. This option is specified using regular expressions.
  If you add new tables when editing an endpoint used in Snapshot and increment or Replication transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred (Preview) field in the transfer settings.
- --exclude-table-regex: List of excluded tables. Data from the listed tables will not be transferred. This option is specified using regular expressions.
- --timezone: DB time zone, specified as an IANA Time Zone Database identifier. Defaults to UTC+0.
- Schema transfer settings:
  - --transfer-before-data: Objects to transfer when activating the transfer.
  - --transfer-after-data: Objects to transfer when deactivating the transfer.
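For illustration, with the same assumptions about the command path as in the connection examples above (and assuming tables are matched by <database>.<table> names), the additional flags are simply appended to the creation command:
yc datatransfer endpoint create mysql-source \
  --cluster-id <cluster ID> \
  --database db1 \
  --user <username> \
  --password-file ./mysql-password.txt \
  --include-table-regex '^db1\.orders.*' \
  --exclude-table-regex '^db1\.tmp_.*' \
  --timezone Europe/Berlin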
Terraform
- include_table_regex: List of included tables. If set, data is transferred only from the tables in this list. This option is specified using regular expressions.
  If you add new tables when editing an endpoint used in Snapshot and increment or Replication transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred (Preview) field in the transfer settings.
- exclude_table_regex: List of excluded tables. Data from the tables on this list will not be transferred. This option is specified using regular expressions.
- timezone: DB time zone, specified as an IANA Time Zone Database identifier. Defaults to UTC+0.
- object_transfer_settings: Schema transfer settings:
  - view: Views.
  - routine: Procedures and functions.
  - trigger: Triggers.
  You can specify one of the following values for each entity:
  - BEFORE_DATA: Transfer at transfer activation.
  - AFTER_DATA: Transfer at transfer deactivation.
  - NEVER: Do not transfer.
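For illustration, the <advanced endpoint settings> placeholder in the configuration examples above could be filled in along these lines (a sketch: the regex and time zone values are hypothetical, and object_transfer_settings is assumed to be a nested block taking the entity names and values listed above):
include_table_regex = ["^db1\\.orders.*"]  # transfer only matching tables
exclude_table_regex = ["^db1\\.tmp_.*"]    # skip temporary tables
timezone            = "Europe/Berlin"      # IANA Time Zone Database identifier
object_transfer_settings {
  view    = "BEFORE_DATA"  # create views when the transfer is activated
  routine = "AFTER_DATA"   # transfer procedures and functions at deactivation
  trigger = "NEVER"        # do not transfer triggers
}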
API
- includeTablesRegex: List of included tables. If set, data is transferred only from the tables in this list. This option is specified using regular expressions.
  If you add new tables when editing an endpoint used in Snapshot and increment or Replication transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred (Preview) field in the transfer settings.
- excludeTablesRegex: List of excluded tables. Data from the listed tables will not be transferred. This option is specified using regular expressions.
- timezone: DB time zone, specified as an IANA Time Zone Database identifier. Defaults to UTC+0.
- objectTransferSettings: Settings for transferring the DB schema when activating and deactivating a transfer (the BEFORE_DATA and AFTER_DATA values, respectively).
Settings for transferring a DB schema when enabling and disabling a transfer
During a transfer, the database schema is transferred from the source to the target. The transfer is performed in two stages:
- At the activation stage.
  This stage runs before data is copied or replicated, to create the schema on the target. Here you can enable the migration of views, stored procedures, stored functions, and triggers.
- At the deactivation stage.
  This stage runs at the end of the transfer, when it is deactivated. If the transfer keeps running in replication mode, the final stage is performed only when replication stops. Here you can enable the migration of views, stored procedures, stored functions, and triggers.
  The final stage assumes that there is no write load on the source while the transfer is being deactivated. You can ensure this by switching the source to read-only mode. At this stage, the database schema on the target is brought to a state consistent with the schema on the source.
Known limitations
If you are setting up a transfer from a MySQL cluster, use the cluster master server. During its operation, the transfer creates service tables in the source database. Therefore, you cannot use a MySQL replica as a source, because it is read-only.
If you are setting up a transfer from a MySQL cluster to a ClickHouse cluster, consider how data of date and time types is transferred:
- Data of the TIME type is transferred as strings, with the source and target time zones ignored.
- When transferring data of the TIMESTAMP type, the time zone set in the MySQL source settings or in the advanced endpoint settings is used. For more information, see the MySQL documentation.
- The source endpoint assigns the UTC+0 time zone to data of the DATETIME type.