Even before the design of NDBCLUSTER began in 1996, it was evident that one of the major problems to be encountered in building parallel databases would be communication between the nodes in the network. For this reason, NDBCLUSTER was designed from the very beginning to permit the use of a number of different data transport mechanisms. In this Manual, we use the term transporter for these.
The NDB Cluster codebase provides for four different transporters:
TCP/IP using 100 Mbps or gigabit Ethernet, as discussed in Section 23.3.3.10, “NDB Cluster TCP/IP Connections”. (A brief configuration sketch covering TCP and shared-memory transporters follows this list.)
Direct (machine-to-machine) TCP/IP; although this transporter uses the same TCP/IP protocol as mentioned in the previous item, it requires setting up the hardware differently and is configured differently as well. For this reason, it is considered a separate transport mechanism for NDB Cluster. See Section 23.3.3.11, “NDB Cluster TCP/IP Connections Using Direct Connections”, for details.
Shared memory (SHM). For more information about SHM, see Section 23.3.3.12, “NDB Cluster Shared-Memory Connections”.
Scalable Coherent Interface (SCI). Using SCI transporters in NDB Cluster requires specialized hardware, software, and MySQL binaries that are not available with NDB 8.0.
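
Transporters between specific pairs of nodes are declared in the cluster's global configuration file (config.ini), which is read by the management server. As a minimal sketch, assuming illustrative node IDs (2, 3, and 8) and an illustrative buffer size rather than recommended values, explicit TCP and shared-memory connections might be declared like this:

    # Explicit TCP transporter between data nodes 2 and 3
    [tcp]
    NodeId1=2
    NodeId2=3
    SendBufferMemory=2M

    # Shared-memory transporter between data node 3 and API node 8,
    # which run on the same host in this example
    [shm]
    NodeId1=3
    NodeId2=8

When no such section is supplied for a given pair of nodes, NDB normally sets up a TCP transporter between them automatically, using default parameter values.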
Most users today employ TCP/IP over Ethernet because it is ubiquitous. TCP/IP is also by far the best-tested transporter for use with NDB Cluster.
Regardless of the transporter used, NDB attempts to ensure that communication with data node processes is performed in chunks that are as large as possible, since this benefits all types of data transmission.
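
Because NDB batches many signals into each send, the sizes of the transporter send buffers influence how large those chunks can become. As a sketch, assuming example values that would need tuning for a real workload, the send buffers might be enlarged cluster-wide like this:

    # Larger per-connection send buffer for all TCP transporters
    [tcp default]
    SendBufferMemory=4M

    # Upper limit on the send buffer memory shared among all of a
    # data node's transporters
    [ndbd default]
    TotalSendBufferMemory=32M

Both SendBufferMemory and TotalSendBufferMemory are existing NDB Cluster configuration parameters; the values shown here are illustrative only.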