public interface Cluster
This interface provides the view of the Copycat cluster from the perspective of a single server. When a
CopycatServer is started, the server will form a cluster
with other servers. Each Copycat cluster consists of a set of members, and each
Member represents a single server in the cluster. Users can use the Cluster to react to
state changes in the underlying Raft algorithm via the various listeners.
```java
server.cluster().onJoin(member -> {
  System.out.println(member.address() + " joined the cluster!");
});
```
Membership exposed via this interface is provided from the perspective of the local server and may not
necessarily be consistent with cluster membership from the perspective of other nodes. The only consistent
membership list is on the leader node.
Users can also use the Cluster to manage the Copycat cluster membership. Typically, servers join the
cluster by calling CopycatServer#bootstrap() or #join(), but in the event that a server fails
permanently and thus cannot remove itself, other members can remove arbitrary servers.
```java
server.cluster().onJoin(member -> {
  member.remove().thenRun(() -> System.out.println("Removed " + member.address() + " from the cluster!"));
});
```
When a member is removed from the cluster, the configuration change removing the member will be replicated to all
the servers in the cluster and persisted to disk. Once a member has been removed, it must fully restart
and request to rejoin the cluster. For servers configured with a persistent
StorageLevel, cluster configurations are stored on disk.
Additionally, members can be promoted and demoted by any
other member of the cluster. When a member's state is changed, a cluster configuration change request is sent
to the cluster leader where it's logged and replicated through the Raft consensus algorithm. During
the configuration change, servers' cluster configurations will be updated. Once the configuration change is
complete, it will be persisted to disk on all servers and is guaranteed not to be lost even in the event of a
full cluster shutdown (assuming the server uses a persistent StorageLevel).
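Promotion and demotion are exposed on the Member itself; for example, a newly joined member might be demoted to a passive role (a minimal sketch, assuming the demote(Member.Type) method described in the Member documentation):

```java
// Sketch: demote a newly joined member to a passive (non-voting) role.
// Assumes Member exposes demote(Member.Type) returning a CompletableFuture.
server.cluster().onJoin(member -> {
  member.demote(Member.Type.PASSIVE).thenRun(() ->
      System.out.println("Demoted " + member.address() + " to PASSIVE"));
});
```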
| Modifier and Type | Method and Description |
|---|---|
| default CompletableFuture&lt;Void&gt; | bootstrap(Address... cluster): Bootstraps the cluster. |
| CompletableFuture&lt;Void&gt; | bootstrap(Collection&lt;Address&gt; cluster): Bootstraps the cluster. |
| default CompletableFuture&lt;Void&gt; | join(Address... cluster): Joins the cluster. |
| CompletableFuture&lt;Void&gt; | join(Collection&lt;Address&gt; cluster): Joins the cluster. |
| Member | leader(): Returns the current cluster leader. |
| CompletableFuture&lt;Void&gt; | leave(): Leaves the cluster. |
| Member | member(): Returns the local cluster member. |
| Member | member(Address address): Returns a member by address. |
| Member | member(int id): Returns a member by ID. |
| Collection&lt;Member&gt; | members(): Returns a collection of all cluster members. |
| Listener&lt;Member&gt; | onJoin(Consumer&lt;Member&gt; callback): Registers a callback to be called when a member joins the cluster. |
| Listener&lt;Member&gt; | onLeaderElection(Consumer&lt;Member&gt; callback): Registers a callback to be called when a leader is elected. |
| Listener&lt;Member&gt; | onLeave(Consumer&lt;Member&gt; callback): Registers a callback to be called when a member leaves the cluster. |
| long | term(): Returns the current cluster term. |
Member leader()
If no leader has been elected for the current term, the leader will be null.
Once a leader is elected, the leader must be known to the local server's configuration. If the returned
Member is null then that does not necessarily indicate that no leader yet exists for the
current term, only that the local server has not learned of a valid leader for the term.
Returns null if no leader is known for the current term.

long term()
The term is representative of the epoch determined by the underlying Raft consensus algorithm. The term is a monotonically
increasing number used by Raft to represent a point in logical time. If the cluster is persistent (i.e. all servers use a persistent
StorageLevel), the term is guaranteed to be unique and monotonically increasing even across
cluster restarts. Additionally, for any given term, Raft guarantees that only a single leader can be elected.
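The term and leader can be read together to reason about the local server's view of leadership (a minimal sketch):

```java
// Read the local server's view of the current term and leader.
long term = server.cluster().term();
Member leader = server.cluster().leader();
if (leader != null) {
  System.out.println("Term " + term + " leader: " + leader.address());
} else {
  System.out.println("No known leader for term " + term);
}
```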
Listener<Member> onLeaderElection(Consumer<Member> callback)
The provided callback will be called when a new leader is elected for a term. Because Raft ensures that only a single leader
can be elected for any given term, each election callback will be called at most once per term. Note, however, that
some terms may pass without any leader being elected.
```java
server.cluster().onLeaderElection(member -> {
  System.out.println(member.address() + " elected for term " + server.cluster().term());
});
```
The Member provided to the callback represents the member that was elected leader. Copycat guarantees that this member is
a member of the Cluster. When a leader election callback is called, the correct term() for the leader is guaranteed
to have already been set. Thus, to get the term for the provided leader, simply read the cluster term().

callback - The callback to be called when a new leader is elected.

Member member()
The local member is representative of the local CopycatServer. The local cluster member will always be non-null and
AVAILABLE if the server is running.
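For example, the local member can be inspected alongside lookups by ID or address (a minimal sketch; the status(), id(), and address() accessors are assumed from the Member documentation):

```java
// Inspect the local member and look up peers by ID or address.
Member local = server.cluster().member();
System.out.println("Local member " + local.address() + " status " + local.status());

Member byId = server.cluster().member(local.id());           // same member, by ID
Member byAddress = server.cluster().member(local.address()); // same member, by address
```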
Member member(int id)
The returned Member is referenced by the unique Member.id().
id - The member ID.
Returns the member with the given ID, or null if no member with the given id exists.

Member member(Address address)
The returned Member is referenced by the member's Member.address() which is synonymous
with the member's Member.serverAddress().
address - The member server address.
Returns the member with the given address, or null if no member with the given Address exists.
Throws NullPointerException if address is null.

Collection<Member> members()
The returned members are representative of the last configuration known to the local server. Over time,
the cluster configuration may change. In the event of a membership change, the returned Collection
will not be modified, but instead a new collection will be created. Similarly, modifying the returned
collection will have no impact on the cluster membership.
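Because the returned collection is an immutable snapshot, it can be iterated safely even while the configuration changes (a minimal sketch):

```java
// Iterate a snapshot of the last known cluster configuration.
// Configuration changes produce a new collection; this one is never mutated.
for (Member member : server.cluster().members()) {
  System.out.println(member.address() + " [" + member.type() + "]");
}
```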
default CompletableFuture<Void> bootstrap(Address... cluster)
Bootstrapping the cluster results in a new cluster being formed with the provided configuration. The initial nodes in a cluster must always be bootstrapped. This is necessary to prevent split brain. If the provided configuration is empty, the local server will form a single-node cluster.
Only Member.Type.ACTIVE members can be included in a bootstrap configuration. If the local server is
not initialized as an active member, it cannot be part of the bootstrap configuration for the cluster.
When the cluster is bootstrapped, the local server will be transitioned into the active state and begin
participating in the Raft consensus algorithm. When the cluster is first bootstrapped, no leader will exist.
The bootstrapped members will elect a leader amongst themselves. Once a cluster has been bootstrapped, additional
members may be joined to the cluster. In the event that the bootstrapped members cannot
reach a quorum to elect a leader, bootstrap will continue until successful.
It is critical that all servers in a bootstrap configuration be started with the same exact set of members. Bootstrapping multiple servers with different configurations may result in split brain.
The CompletableFuture returned by this method will be completed once the cluster has been bootstrapped,
a leader has been elected, and the leader has been notified of the local server's client configurations.
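A three-node bootstrap might look as follows (a minimal sketch; the addresses are hypothetical, and every server in the cluster must be bootstrapped with this same list):

```java
// Sketch: bootstrap a three-node cluster. All three servers must be
// started with exactly this set of member addresses.
server.cluster().bootstrap(
    new Address("10.0.0.1", 5000),
    new Address("10.0.0.2", 5000),
    new Address("10.0.0.3", 5000)
).thenRun(() -> System.out.println("Cluster bootstrapped"));
```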
cluster - The bootstrap cluster configuration.

CompletableFuture<Void> bootstrap(Collection<Address> cluster)
Bootstrapping the cluster results in a new cluster being formed with the provided configuration. The initial nodes in a cluster must always be bootstrapped. This is necessary to prevent split brain. If the provided configuration is empty, the local server will form a single-node cluster.
Only Member.Type.ACTIVE members can be included in a bootstrap configuration. If the local server is
not initialized as an active member, it cannot be part of the bootstrap configuration for the cluster.
When the cluster is bootstrapped, the local server will be transitioned into the active state and begin
participating in the Raft consensus algorithm. When the cluster is first bootstrapped, no leader will exist.
The bootstrapped members will elect a leader amongst themselves. Once a cluster has been bootstrapped, additional
members may be joined to the cluster. In the event that the bootstrapped members cannot
reach a quorum to elect a leader, bootstrap will continue until successful.
It is critical that all servers in a bootstrap configuration be started with the same exact set of members. Bootstrapping multiple servers with different configurations may result in split brain.
The CompletableFuture returned by this method will be completed once the cluster has been bootstrapped,
a leader has been elected, and the leader has been notified of the local server's client configurations.
cluster - The bootstrap cluster configuration.

default CompletableFuture<Void> join(Address... cluster)
Joining the cluster results in the local server being added to an existing cluster that has already been bootstrapped. The provided configuration will be used to connect to the existing cluster and submit a join request. Once the server has been added to the existing cluster's configuration, the join operation is complete.
Any type of server may join a cluster. In order to join a cluster, the provided list of
bootstrapped members must be non-empty and must include at least one active member of the cluster. If no member
in the configuration is reachable, the server will continue to attempt to join the cluster until successful. If
the provided cluster configuration is empty, the returned CompletableFuture will be completed exceptionally.
When the server joins the cluster, the local server will be transitioned into its initial state as defined by
the configured Member.Type. Once the server has joined, it will immediately begin participating in
Raft and asynchronous replication according to its configuration.
It's important to note that the provided cluster configuration will only be used the first time the server attempts
to join the cluster. Thereafter, in the event that the server crashes and is restarted by joining the cluster
again, the last known configuration will be used assuming the server is configured with persistent storage. Only when
the server leaves the cluster will its configuration and log be reset.
In order to preserve safety during configuration changes, Copycat leaders do not allow concurrent configuration
changes. In the event that an existing configuration change (a server joining or leaving the cluster or a
member being promoted or demoted) is under way, or if the server fails to reach the leader, the local
server will retry the join until successful.
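Joining an existing cluster might look as follows (a minimal sketch; the addresses are hypothetical and must include at least one active member):

```java
// Sketch: join an existing cluster by contacting known members.
// The configuration is only used on the first join attempt; thereafter
// the last known configuration is used if storage is persistent.
server.cluster().join(
    new Address("10.0.0.1", 5000),
    new Address("10.0.0.2", 5000)
).thenRun(() -> System.out.println("Joined the cluster"));
```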
cluster - A list of cluster member addresses to join.

CompletableFuture<Void> join(Collection<Address> cluster)
Joining the cluster results in the local server being added to an existing cluster that has already been bootstrapped. The provided configuration will be used to connect to the existing cluster and submit a join request. Once the server has been added to the existing cluster's configuration, the join operation is complete.
Any type of server may join a cluster. In order to join a cluster, the provided list of
bootstrapped members must be non-empty and must include at least one active member of the cluster. If no member
in the configuration is reachable, the server will continue to attempt to join the cluster until successful. If
the provided cluster configuration is empty, the returned CompletableFuture will be completed exceptionally.
When the server joins the cluster, the local server will be transitioned into its initial state as defined by
the configured Member.Type. Once the server has joined, it will immediately begin participating in
Raft and asynchronous replication according to its configuration.
It's important to note that the provided cluster configuration will only be used the first time the server attempts
to join the cluster. Thereafter, in the event that the server crashes and is restarted by joining the cluster
again, the last known configuration will be used assuming the server is configured with persistent storage. Only when
the server leaves the cluster will its configuration and log be reset.
In order to preserve safety during configuration changes, Copycat leaders do not allow concurrent configuration
changes. In the event that an existing configuration change (a server joining or leaving the cluster or a
member being promoted or demoted) is under way, or if the server fails to reach the leader, the local
server will retry the join until successful.
cluster - A collection of cluster member addresses to join.

CompletableFuture<Void> leave()
Invocations of this method will cause the local CopycatServer to leave the cluster.
This method is for advanced usage only. Typically, users should use CopycatServer.leave()
to leave the cluster and close a server in order to ensure all associated resources are properly closed.
When a server leaves the cluster, the server submits a LeaveRequest
to the cluster leader. The leader will replicate and commit the configuration change in order to remove the
leaving server from the cluster and notify each member of the leaving server.
In order to preserve safety during configuration changes, Copycat leaders do not allow concurrent configuration
changes. In the event that an existing configuration change (a server joining or leaving the cluster or a
member being promoted or demoted) is under way, the local
server will retry attempts to leave the cluster until successful.
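Leaving via the Cluster interface might look as follows (a minimal sketch; as noted above, CopycatServer.leave() is preferred in typical usage):

```java
// Sketch: leave the cluster directly via the Cluster interface.
server.cluster().leave().thenRun(() ->
    System.out.println("Left the cluster"));
```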
Listener<Member> onJoin(Consumer<Member> callback)
The registered callback will be called whenever a new Member joins the cluster. Membership
changes are sequentially consistent, meaning each server in the cluster will see members join in the same
order, but different servers may see members join at different points in time. Users should never
assume that because one server has seen a member join the cluster, all other servers have.
The returned Listener can be used to stop listening for servers joining the cluster.
```java
// Start listening for members joining the cluster.
Listener<Member> listener = server.cluster().onJoin(member -> System.out.println(member.address() + " joined!"));
// Stop listening for members joining the cluster.
listener.close();
```
callback - The callback to be called when a member joins the cluster.

Listener<Member> onLeave(Consumer<Member> callback)
The registered callback will be called whenever an existing Member leaves the cluster. Membership
changes are sequentially consistent, meaning each server in the cluster will see members leave in the same
order, but different servers may see members leave at different points in time. Users should never
assume that because one server has seen a member leave the cluster, all other servers have.
The returned Listener can be used to stop listening for servers leaving the cluster.
```java
// Start listening for members leaving the cluster.
Listener<Member> listener = server.cluster().onLeave(member -> System.out.println(member.address() + " left!"));
// Stop listening for members leaving the cluster.
listener.close();
```
callback - The callback to be called when a member leaves the cluster.

Copyright © 2013–2016. All rights reserved.