Both Gigabit Ethernet and InfiniBand are used to connect the nodes in the cluster. All the nodes have Ethernet connectivity and 12 out of the 14 nodes have InfiniBand connectivity.

Ethernet connectivity

Ethernet connectivity is provided by two Cisco Catalyst switches, one per rack; the two switches are trunked together over a single Ethernet link. Intel-based nodes have a single connection to the switch, shared by the normal Ethernet traffic and IPMI, since IPMI is integrated on the mainboard; the AMD-based nodes have two physical connections, since they have separate IPMI boards.

All nodes use the eth0 interface for Ethernet connectivity and have addresses in a single dedicated range. Ethernet host names carry no suffix.

Ethernet is also used for logins between nodes; to connect to another Cluster node, just type the name of the desired target node in a terminal on the source node. An SSH session with X11 graphics forwarding opens automatically, without asking for a password.
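Typing just the node name relies on local configuration doing the equivalent of an explicit ssh invocation; a minimal sketch, assuming key- or host-based authentication is already set up (the node name psi is only an example):

```shell
# Intra-cluster login, spelled out. Typing the bare node name does the
# equivalent of:
#
#   ssh -X psi       # X11 forwarding on; no password once keys are set up
#
# A dry run with -G prints the options ssh would apply, without actually
# connecting, so you can confirm X11 forwarding is enabled:
ssh -X -G psi | grep -i forwardx11
```

With forwarding active on the target node, the DISPLAY variable is set and X11 applications started there appear on the source node's screen.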

InfiniBand connectivity

InfiniBand connectivity is provided by a single InfiniBand switch located in the Door rack. Only two of the fourteen nodes lack InfiniBand connectivity.

The nodes use the ib0 interface for InfiniBand and have addresses in a separate dedicated range; a node's InfiniBand IPv4 address has the same final number as its Ethernet address. Host names for InfiniBand are suffixed with -ib; for example, the node psi has the InfiniBand host name psi-ib.
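The -ib naming rule is purely mechanical, so it can be applied in scripts; a small sketch (the helper name is made up for illustration):

```shell
# Map a node's Ethernet host name to its InfiniBand host name by
# appending the -ib suffix used on this cluster.
ib_name() {
  printf '%s-ib\n' "$1"
}

ib_name psi     # prints: psi-ib
```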

IPMI connectivity

This is not generally needed by regular users; it is presented here for reference only. IPMI addresses share the same IP range as the regular Ethernet addresses; the last number of an IPMI address is obtained by adding 200 to the last number of the corresponding Ethernet address. IPMI host names are suffixed with -ipmi; for example, the IPMI host name for psi is psi-ipmi.
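The +200 rule and the -ipmi suffix can likewise be computed; a sketch with made-up helper names (the octet 11 below is an arbitrary example, not psi's actual address):

```shell
# Derive the last number of a node's IPMI address from the last number
# of its regular Ethernet address (IPMI octet = Ethernet octet + 200).
ipmi_octet() {
  echo $(( $1 + 200 ))
}

# Derive the IPMI host name by appending the -ipmi suffix.
ipmi_name() {
  printf '%s-ipmi\n' "$1"
}

ipmi_octet 11    # prints: 211 (11 is an assumed example value)
ipmi_name psi    # prints: psi-ipmi
```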

Connectivity to the BluePC network

The server of the BluePC network (called zet) has an Ethernet network card that provides a direct connection to the Cluster Ethernet network. All routing between the BluePCs and the Cluster is done through this Ethernet connection. Communications are allowed only one-way, from the BluePCs to the Cluster.

From any BluePC you can connect via SSH to any node in the cluster: simply type its name in a terminal on the BluePC machine. X11 traffic is forwarded as well, so users can run X11 graphical applications on the Cluster. No password is asked for BluePC-to-Cluster sessions. Please note that SSH works only from BluePC to Cluster and not the other way around; the BluePC network has a policy of not providing inbound access from anywhere, including the Cluster.

Connectivity to the outside world

Only a single node - xi - has a direct connection to the outside world; it uses its eth1 interface for this, and connections from this host to the Internet go out through that interface. All the other nodes have their default gateway through the BluePC network, so they rely on it for connecting to the Internet. However, this is of little importance to regular users.

Only outbound traffic from the Cluster to the outside world is allowed. No inbound user traffic is allowed to the Cluster; if such traffic is necessary, contact the Cluster administrator to arrange the opening of the required ports.