For a master-slave service system, high availability (HA) requires a few basic capabilities, chiefly cluster-state persistence and master (leader) election. Several schemes can provide them:
Scheme 1 (local file system): the Master writes the cluster's current state to the local file system at appropriate moments, e.g. under /hera/store/state/. When the Master is found to have crashed, a new process can be started on the same host, load the cluster state from that directory, and then notify all recorded worker nodes to resume work.
Pros: the persistence logic is simple to implement.
Cons: when the server itself fails, recovery is slow or impossible, and a separate scheme is still needed for leader election.
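To make scheme 1 concrete, here is a minimal sketch; the LocalStateStore class and the serializable state payload are illustrative names, not from the original system. The Master serializes its state under /hera/store/state/ and the replacement process reads it back.
import java.io.*;
import java.nio.file.*;

public final class LocalStateStore {
    private static final Path STATE_FILE = Paths.get("/hera/store/state/cluster.state");

    // Persist the (serializable) cluster state; write to a temp file first and
    // move it atomically so a crash cannot leave a half-written state file.
    public static void save(Serializable state) throws IOException {
        Files.createDirectories(STATE_FILE.getParent());
        Path tmp = STATE_FILE.resolveSibling("cluster.state.tmp");
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(tmp))) {
            out.writeObject(state);
        }
        Files.move(tmp, STATE_FILE, StandardCopyOption.ATOMIC_MOVE);
    }

    // Reload the state after the new Master process starts on the same host.
    public static Object load() throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(STATE_FILE))) {
            return in.readObject();
        }
    }
}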
Scheme 2 (distributed file system): the implementation logic is essentially the same as scheme 1, but persistence is more reliable: the cluster state is stored on a distributed file system such as GlusterFS or HDFS, so the failure of a single server no longer makes the cluster unrecoverable.
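A hedged sketch of the HDFS variant, using the standard Hadoop FileSystem API (the NameNode address, path, and serializedState byte array are illustrative assumptions):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://namenode:8020"); // illustrative address
try (FileSystem fs = FileSystem.get(conf);
     FSDataOutputStream out = fs.create(new Path("/hera/store/state/cluster.state"), true)) {
    out.write(serializedState); // the byte[] form of the cluster state (assumed to exist)
}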
Scheme 3 (ZooKeeper): ZooKeeper provides both election and persistence. In cluster mode, ZooKeeper implements the Zab algorithm (based on the ideas of Paxos) to guarantee its own HA, so a ZK ensemble requires a majority of its nodes (more than n/2) to stay alive.
ZK exposes convenient interfaces for building leader election into your own system; see the official documentation for details. In short, every Master opens a client connection and tries to create a node under the same parent path, then uses the interfaces provided by the ZK client library to track whether the current leader is still alive.
ZK can also be viewed as a remote file system: clients can persist data in znodes and share information through well-known paths, so cluster-state persistence can be handled by the same ZK ensemble.
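A hedged sketch of the standard ZooKeeper election recipe just described (server addresses, paths, and masterInfoBytes are illustrative): each Master candidate creates an EPHEMERAL_SEQUENTIAL znode under a shared parent, and the candidate holding the smallest sequence number is the leader.
import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 30_000, event -> {});
String myNode = zk.create("/hera/election/candidate-", masterInfoBytes,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

List<String> candidates = zk.getChildren("/hera/election", false);
Collections.sort(candidates);
boolean isLeader = myNode.endsWith(candidates.get(0));
// A non-leader should watch the candidate immediately before itself (not the
// leader) to avoid a thundering herd when the leader's session expires.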
Pros: simple to implement, covering both data persistence and leader election.
Cons: an extra ZK ensemble has to be deployed and maintained.
Scheme 4 (Atomix): Atomix is a general-purpose Java framework for building highly available distributed systems. It implements the Raft protocol and offers a set of atomic data structures, such as map, set, and integer, that are shared across the whole cluster with consistency guarantees; it also provides a LeaderElection primitive for registering leader candidates and listening for election events.
Pros: we can easily run an Atomix cluster standalone or embed it in the Master process, covering both data persistence and leader election.
Cons: a majority (more than n/2) of the Atomix servers in the distributed system must stay up.
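As a taste of the LeaderElection primitive mentioned above, here is a hedged sketch in the Atomix 3.x style (the member id, address, and election name are illustrative, and the partition-group configuration required for MultiRaftProtocol is omitted for brevity):
import io.atomix.core.Atomix;
import io.atomix.core.election.LeaderElection;
import io.atomix.core.election.Leadership;
import io.atomix.protocols.raft.MultiRaftProtocol;

Atomix atomix = Atomix.builder()
        .withMemberId("master-1")
        .withAddress("10.0.0.1:5679")
        .build();
atomix.start().join();

LeaderElection<String> election = atomix.<String>leaderElectionBuilder("master-election")
        .withProtocol(MultiRaftProtocol.builder().build())
        .build();
// Enter the election with this node's identifier and observe the outcome.
Leadership<String> leadership = election.run("master-1");
System.out.println("Current leader: " + leadership.leader());
election.addListener(event ->
        System.out.println("New leader: " + event.newLeadership().leader()));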
The rest of this section walks through the Atomix-based implementation. When a master election takes place and a new active master is produced, the cluster-state recovery flow is executed; the building blocks are briefly described below.
AtomixCluster: the basic component for exchanging and managing Atomix cluster information. It can bootstrap a new cluster or join an existing one, and it maintains inter-node communication and failure detection. Using the membership configuration supplied at startup, an AtomixCluster instance is created as follows:
AtomixCluster cluster = AtomixCluster.builder()
    .withAddress(address)   // the address this Manager listens on
    .withMemberId(id)       // unique identifier distinguishing members of the cluster
    // ID of the cluster to create or join
    .withClusterId(clusterId == null ? DEFAULT_CLUSTER_ID : clusterId)
    // how cluster membership is discovered
    .withMembershipProvider(
        BootstrapDiscoveryProvider.builder()
            .withNodes(members.stream().map(member ->
                Node.builder()
                    .withAddress(member.address())
                    .withId(member.id())
                    .build())
                .collect(Collectors.toList()))
            .build())
    .withProperties(properties)
    .build();
cluster.start().join();
When creating an AtomixCluster instance, the key point is to supply an implementation of NodeDiscoveryProvider. It is an SPI (Service Provider Interface) that gives ClusterMembershipService the means to locate cluster members and share that information; you can think of NodeDiscoveryProvider as providing the cluster with a dynamic, global view of its membership. It has three implementations:
BootstrapDiscoveryProvider: bootstrap discovery mode. All cluster members are predefined; each member node sends heartbeats to its configured peers, and through the MessagingService the latest member state is propagated to every member, much like a Gossip protocol.
MulticastDiscoveryProvider: to use this class, multicast must be enabled when creating the AtomixCluster instance via AtomixClusterBuilder.withMulticastEnabled(); all member nodes can then publish events through the BroadcastService to the internal Netty socket server, which multicasts them to all connections on the given network.
DnsDiscoveryProvider: discovers all members through a DNS service.
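For example, a hedged sketch of switching the earlier snippet to multicast discovery (default multicast group settings assumed):
AtomixCluster cluster = AtomixCluster.builder()
        .withAddress(address)
        .withMulticastEnabled() // turns on the BroadcastService used by the provider
        .withMembershipProvider(MulticastDiscoveryProvider.builder().build())
        .build();
cluster.start().join();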
RaftServer: the entity node implementing the Raft protocol. An instance is created with the builder pattern, as in the following example:
RaftServer server = RaftServer.builder(MemberId.from(getDefaultMemberId(this.address.toString())))
    .withMembershipService(cluster.getMembershipService())
    .withProtocol(new RaftServerCommunicator(CustomSerializer.newSerializer(), cluster.getCommunicationService()))
    .withPrimitiveTypes(CustomPrimitiveTypes.systemPrimitiveTypeRegistry())
    .withElectionTimeout(Duration.ofSeconds(2))
    .withHeartbeatInterval(Duration.ofMillis(500))
    .withSessionTimeout(Duration.ofMillis(5000))
    .withStorage(RaftStorage.builder()
        .withPrefix(name)
        .withDirectory(name)
        .withStorageLevel(StorageLevel.DISK)
        .withDynamicCompaction(false)
        .withNamespace(RaftNamespaces.RAFT_STORAGE)
        .build())
    .build();
The main points to note here:
RaftServiceManager: the server-side state machine. Besides managing internal session state and log indexes, it handles the commands users submit through a PrimitiveService.
RaftLog: the protocol's log object. It records the cluster's current state and supports serialization and (asynchronous) compaction.
To simplify construction, the code above reuses the AtomixCluster instance to obtain the ClusterMembershipService and ClusterCommunicationService instances.
A few other configuration parameters, such as thread pool sizes, are not shown above; see the Atomix source for details.
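One step the snippet above leaves out is actually starting the server. A hedged sketch: the initial cluster forms via bootstrap(), passing the full member list (the same members collection used earlier):
List<MemberId> memberIds = members.stream()
        .map(Member::id)
        .collect(Collectors.toList());
server.bootstrap(memberIds).join(); // completes once a quorum has formed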
RaftClient: the general user-facing interface. It can be used directly to submit commands to the Raft cluster, e.g. to fetch the current leader, term, and other metadata; or to indirectly create other client types, such as DefaultRaftSessionClient.
Example of creating a RaftClient:
RaftClient client = RaftClient.builder(members.stream().map(Member::id).collect(Collectors.toList()))
    .withClientId(name)
    .withMemberId(MemberId.from("member-" + name))
    .withPartitionId(PartitionId.from("default-partition", DEFAULT_PARTITION_ID))
    .withProtocol(new RaftClientCommunicator(CustomSerializer.newSerializer(), cluster.getCommunicationService()))
    .build();
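Like the server, the client has to be connected before use; a hedged sketch:
client.connect(members.stream().map(Member::id).collect(Collectors.toList()))
        .join(); // completes once the client has located the Raft cluster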
RaftSessionClient: through this interface you submit operations (PrimitiveOperation) to the Raft cluster's state machine. When the client is opened, it registers itself with the known servers in the cluster and then sends periodic keep-alive heartbeats. The default implementation is DefaultRaftSessionClient; one such session instance is created per PrimitiveType object, ensuring all operations submitted by the user are processed in order. Example:
// Create a RaftSessionClient through the RaftClient, bound to the
// AtomicValueType primitive type.
RaftSessionClient sessionClient = client
    .sessionBuilder(DEFAULT_SESSION_ID, AtomicValueType.instance(), new ServiceConfig())
    .withReadConsistency(consistency)
    .withMinTimeout(Duration.ofMillis(250))
    .withMaxTimeout(Duration.ofSeconds(5))
    .withMaxRetries(3)
    .build();
// Register state-change events on the primitive client. The client's connection
// state may switch for various reasons, so addStateChangeListener(...) registers
// a callback; here we only log, but a "flag variable" could of course be
// updated the same way.
sessionClient.addStateChangeListener(primitiveState -> {
    if (primitiveState == PrimitiveState.CONNECTED) {
        LOGGER.info(LoggerEvent.GENERAL.getName(), "Persist cluster info engine session is %s now.", primitiveState);
    } else if (primitiveState == PrimitiveState.SUSPENDED) {
        LOGGER.warn(LoggerEvent.GENERAL.getName(), "Persist cluster info engine session is %s now, waiting for the cluster to resume!", primitiveState);
    } else if (primitiveState == PrimitiveState.EXPIRED || primitiveState == PrimitiveState.CLOSED) {
        // In this state the session will be removed from the RaftSessionManager,
        // which leads to an UnknownSession exception being thrown when trying to
        // close the current session.
        LOGGER.error(LoggerEvent.GENERAL.getName(), "Persist cluster info engine session is %s now, persisting operations will fail.", primitiveState);
    }
});
// Connect the client.
sessionClient.connect().get(10, TimeUnit.SECONDS);
The code above only creates a generic client that can submit state-machine updates scoped to the AtomicValueType primitive. To expose the operations that AtomicValueType data itself supports, such as get and set, we need the concrete client proxy class AtomicValueProxy, which is responsible for manipulating AtomicValue data. Example:
ProxyClient<AtomicValueService> proxy = new DefaultProxyClient<>(
    "cluster-info-client",
    AtomicValueType.instance(),
    MultiRaftProtocol.builder().build(),
    AtomicValueService.class,
    Collections.singletonList(sessionClient), // the RaftSessionClient created above
    (key, partitions) -> partitions.get(0));
AtomicValueProxy atomicValueReaderWriter = new AtomicValueProxy(proxy,
    new CustomPrimitiveRegistry(CustomPrimitiveTypes.systemPrimitiveTypeRegistry()));
AtomicValueProxy: a wrapper around ProxyClient. Atomix lets users define their own AtomicValueType-like types, but they all support the same data operations, so a single ProxyClient definition can serve them all. To keep the logic uniform at the code level, this class is designed as a static proxy and lets users supply their own PrimitiveRegistry instance. Its constructor is:
public AtomicValueProxy(ProxyClient<AtomicValueService> proxy, PrimitiveRegistry registry) {
super(proxy, registry);
}
ProxyClient: dynamically proxies the operations of different primitive types. Atomix ships many PrimitiveType implementations, e.g. AtomicValueType, AtomicMapType, AtomicCounterType; they expose different interfaces but share a common parent, which makes the Proxy design pattern a natural fit: one SessionClient definition (mentioned above) handles all interaction with the cluster. Atomix instantiates DefaultProxyClient via Java's dynamic proxy mechanism, and one instance can bind several SessionClients. The constructor:
public DefaultProxyClient(
    String name,
    PrimitiveType type,                    // the supported primitive type, e.g. AtomicValueType
    PrimitiveProtocol protocol,            // the supported protocol, e.g. Raft
    Class<S> serviceType,                  // the operations/interface the type supports
    Collection<SessionClient> partitions,  // several SessionClients may be held, stored in a Map
    Partitioner<String> partitioner) {     // maps a key to one of the SessionClients
  super(name, type, protocol, createSessions(type, serviceType, partitions));
  this.partitioner = checkNotNull(partitioner);
  this.serializer = Serializer.using(type.namespace()); // the serializer; customizable
}
At this point we can use atomicValueReaderWriter to exchange primitive data with the Atomix Raft cluster. The read/write interface it supports is:
public class AtomicValueProxy extends AbstractAsyncPrimitive<AsyncAtomicValue<byte[]>, AtomicValueService>
    implements AsyncAtomicValue<byte[]>, AtomicValueClient {

  private final Set<AtomicValueEventListener<byte[]>> eventListeners = Sets.newConcurrentHashSet();

  public AtomicValueProxy(ProxyClient<AtomicValueService> proxy, PrimitiveRegistry registry) {
    super(proxy, registry);
  }

  @Override
  public void change(byte[] newValue, byte[] oldValue) {
    eventListeners.forEach(l -> l.event(new AtomicValueEvent<>(AtomicValueEvent.Type.UPDATE, newValue, oldValue)));
  }

  @Override
  public CompletableFuture<byte[]> get() {
    return getProxyClient().applyBy(name(), service -> service.get());
  }

  @Override
  public CompletableFuture<Void> set(byte[] value) {
    return getProxyClient().acceptBy(name(), service -> service.set(value));
  }

  @Override
  public CompletableFuture<Boolean> compareAndSet(byte[] expect, byte[] update) {
    return getProxyClient().applyBy(name(), service -> service.compareAndSet(expect, update));
  }

  @Override
  public CompletableFuture<byte[]> getAndSet(byte[] value) {
    return getProxyClient().applyBy(name(), service -> service.getAndSet(value));
  }
}
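With the proxy in place, reading and writing the Raft state machine becomes a pair of one-liners. A hedged usage sketch (serialize(...) stands in for the author's serializer, which as shown later also takes a namespace):
byte[] snapshotBytes = serialize(snapshot); // however the snapshot is serialized
atomicValueReaderWriter.set(snapshotBytes).get(5, TimeUnit.SECONDS);    // replicate via Raft
byte[] stored = atomicValueReaderWriter.get().get(5, TimeUnit.SECONDS); // read it back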
The Raft protocol defines several different states (roles) for a node (RaftServer); in Atomix they are defined as follows:
enum Role {
/**
* Represents the state of an inactive server.
* <p>
* All servers start in this state and return to this state when {@link #leave() stopped}.
*/
INACTIVE(false),
/**
* Represents the state of a server in the process of catching up its log.
* <p>
* Upon successfully joining an existing cluster, the server will transition to the passive state and remain there
* until the leader determines that the server has caught up enough to be promoted to a full member.
*/
PASSIVE(false),
/**
* Represents the state of a server in the process of being promoted to an active voting member.
*/
PROMOTABLE(false),
/**
* Represents the state of a server participating in normal log replication.
* <p>
* The follower state is a standard Raft state in which the server receives replicated log entries from the leader.
*/
FOLLOWER(true),
/**
* Represents the state of a server attempting to become the leader.
* <p>
* When a server in the follower state fails to receive communication from a valid leader for some time period,
* the follower will transition to the candidate state. During this period, the candidate requests votes from
* each of the other servers in the cluster. If the candidate wins the election by receiving votes from a majority
* of the cluster, it will transition to the leader state.
*/
CANDIDATE(true),
/**
* Represents the state of a server which is actively coordinating and replicating logs with other servers.
* <p>
* Leaders are responsible for handling and replicating writes from clients. Note that more than one leader can
* exist at any given time, but Raft guarantees that no two leaders will exist for the same {@link RaftCluster#getTerm()}.
*/
LEADER(true);
}
So, to observe leader elections and obtain the new leader's identity, Atomix lets users register callbacks on the RaftServer instance; the code below registers such an event listener. When the leader changes, every RaftServer instance in the cluster eventually converges on the latest leader, so RaftClusterContext.addLeaderElectionListener(Consumer callback) takes a function that receives the new leader's RaftMember. Here we put the new leader's basic information into a custom queue, leaderHistoryQueue, which is consumed asynchronously.
// server is an instance of DefaultRaftServer, and server.cluster() returns a RaftClusterContext.
// RaftClusterContext manages the persistent Raft cluster state owned by the current server node
// and synchronizes election state through its internal RaftContext instance.
server.cluster().addLeaderElectionListener(member -> {
    // The object delivered by the leader-election event, member, is actually a DefaultRaftMember
    // and only carries Raft-related information. We want the corresponding Member object instead,
    // so membersCache holds the Member objects of all cluster members; details follow below.
    if (!membersCache.containsKey(member.memberId().id())) {
        updateCache();
    }
    oldLeader = newLeader;
    newLeader = membersCache.get(member.memberId().id());
    try {
        LOGGER.info(LoggerEvent.GENERAL.getName(),
            "The leader has changed from %s to %s; appending this event to the history blocking queue.",
            oldLeader, newLeader);
        ControllerDescriptor newLeaderDescriptor = new ControllerDescriptor();
        newLeaderDescriptor.setHost(newLeader.properties().getProperty("host"));
        newLeaderDescriptor.setPort(Integer.parseInt(newLeader.properties().getProperty("port")));
        newLeaderDescriptor.setHeartbeatPort(Integer.parseInt(newLeader.properties().getProperty("heartbeatPort")));
        newLeaderDescriptor.setElectionAddress(newLeader.address().toString());
        leaderHistoryQueue.offer(newLeaderDescriptor, 10, TimeUnit.SECONDS);
    } catch (Exception e) {
        LOGGER.error(e, LoggerEvent.GENERAL.getName(), null,
            "Failed to append leader changed event into the blocking queue!");
    }
});
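The listener above only enqueues events; a hedged sketch of the asynchronous consumer side (the running, action, couldPersist, and leaderChangedWorkService fields appear in the RaftLeaderElectionServer class shown later):
leaderChangedWorkService.submit(() -> {
    while (running) {
        try {
            // Block until a leader-changed event arrives, then notify the Master.
            ControllerDescriptor leader = leaderHistoryQueue.take();
            action.accept(leader, couldPersist.get());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
});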
A callback registered via RaftClusterContext.addLeaderElectionListener(Consumer callback) only receives the RaftMember instance of the latest leader, which carries little information: it lacks properties such as the IP address and listening ports. To obtain those we need the following call:
Set<Member> knownMembers = server.cluster().getMembershipService().getMembers();
The call above returns the Member objects of all members currently known to the cluster, but it triggers an RPC; to cut unnecessary network overhead we keep a local cache, and on each leader change we first try to fetch the corresponding Member from the cache, falling back to the call above on a miss.
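A hedged sketch of the updateCache() helper used in the listener earlier, refreshing the local id-to-Member mapping via the same membership-service call shown above:
private void updateCache() {
    for (Member member : server.cluster().getMembershipService().getMembers()) {
        membersCache.put(member.id().id(), member);
    }
}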
A leader switch always involves role changes among cluster members (RaftServer): Follower to Leader, Follower to Candidate, Candidate to Follower, and so on. The leader-election event introduced above only tells us, while the cluster is healthy, which node is currently the leader. But if the RaftServer behind the current node hits an abnormal condition and can no longer update cluster state, it will stop receiving new leader events; since this node acts as the Master of the scheduling system, it should shut down its scheduling service in that situation, and the leader-election event clearly cannot help. No need to panic: Atomix also provides another interface for registering a listener on Raft cluster member role changes, as in the code below:
this.roleChangeAction = (currentRole, isLeader) -> {
    if (isLeader) {
        logger.debug(LoggerEvent.GENERAL.getName(),
            "This server may have been the latest leader, but is now transitioning to %s!",
            currentRole);
        if (currentRole == RaftServer.Role.LEADER) {
            if (!getTaskDispatcher().isServable()) {
                logger.warn(LoggerEvent.GENERAL.getName(),
                    "This server is elected as the new leader now, so try to go online.");
                online();
            }
        } else {
            // The current node may have a network problem while the other nodes
            // in the cluster stay healthy and carry on with the Raft protocol.
            if (getTaskDispatcher().isServable()) {
                logger.warn(LoggerEvent.GENERAL.getName(), "Even though this server is marked as leader, it should actually go offline because of some unexpected issues!");
                offline();
            }
        }
    }
};
server.addRoleChangeListener(role -> {
    if (this.roleChangeAction != null) {
        this.roleChangeAction.accept(role, isLeader());
    }
});
In Atomix's RaftServer implementation, when the leader node loses network connectivity to the rest of the cluster, no leader event fires, but the node's own role falls back to FOLLOWER, as the Raft protocol prescribes. So in this case we rely on the Raft role-change event to correct the node's real state.
After the steps above, we have a cluster that synchronizes the scheduling system's Master state through an Atomix Raft cluster. But to achieve full HA, one key step remains: storing and restoring the cluster state.
In the earlier code we created atomicValueReaderWriter, a client proxy that exchanges primitive data with the Atomix Raft cluster, and we built a cluster with leader election; combining these pieces gives us cluster-state persistence.
From the scheduler's point of view, the moments that require persistence in HA mode are when nodes in the scheduling cluster (Master/Worker roles) produce events: task events, low-level network I/O events, and so on. For example, when a new Worker starts and registers with the Master, an event is produced and the state of the whole scheduling cluster changes, so we record the state at that point; when a leader switch happens later, the newly elected leader (the activated Master) can load the latest snapshot of the cluster state.
The extra work in this case is to submit the current cluster snapshot to the Raft cluster's state machine through the AtomicValueProxy instance atomicValueReaderWriter, letting the Raft cluster handle replication and storage. Example:
public class RaftLeaderElectionClient extends AbstractLeaderElectionCluster {
    private AtomicValueProxy atomicValueReaderWriter;

    /**
     * When the Raft servers are unavailable, i.e. fewer than N/2+1 servers are
     * alive, this method throws a {@link java.net.ConnectException}, in which
     * case null is returned.
     *
     * Also, the first time a new cluster is set up, the value returned through
     * the persist engine will be null.
     *
     * After {@code DEFAULT_GET_PRIMITIVE_TIMEOUT} milliseconds, this method
     * throws a {@link TimeoutException} and returns null.
     *
     * @return null if no value is persisted or a {@link java.net.ConnectException} was thrown
     */
    public ClusterInfoSnapshot getClusterInfo() {
        if (!couldPersist()) {
            LOGGER.error(LoggerEvent.GENERAL.getName(),
                "Couldn't load data through persist engine, because the client is invalid!");
            return null;
        }
        try {
            byte[] value = this.atomicValueReaderWriter.get().get();
            if (value == null) {
                // A null value implies the current cluster has no data persisted,
                // so we try to set new, empty data into the current cluster.
                NodesSnapshot initNodesSnapshot = new NodesSnapshot();
                TasksSnapshot initTasksSnapshot = new TasksSnapshot();
                ClusterInfoSnapshot initSnapshot = ClusterInfoSnapshot.builder()
                    .withNodesSnapshot(initNodesSnapshot)
                    .withTasksSnapshot(initTasksSnapshot)
                    .build();
                return updateAndGetValue(initSnapshot);
            }
            return deserialize(ClusterInfoPrimitiveType.INSTANCE.namespace(), value);
        } catch (Exception e) {
            LOGGER.error(e, LoggerEvent.SCHEDULING.getName(), null, "Failed to fetch cluster info from persist store.");
        }
        return null;
    }
    /**
     * When the Raft servers are unavailable, i.e. fewer than N/2+1 servers are
     * alive, this method throws a {@link java.net.ConnectException}, in which
     * case null is returned.
     * After {@code DEFAULT_GET_PRIMITIVE_TIMEOUT} milliseconds, this method
     * throws a {@link TimeoutException} and returns null.
     *
     * @return null if no value is persisted or a {@link java.net.ConnectException} was thrown
     */
    public ClusterInfoSnapshot updateAndGetValue(ClusterInfoSnapshot value) {
        try {
            byte[] serializedValue = serialize(DEFAULT_PRIMITIVE_TYPE.namespace(), value);
            byte[] updatedValue = this.atomicValueReaderWriter.getAndSet(serializedValue).get();
            // If the value is set successfully for the first time, the returned
            // (previous) value is null, so to signal that the operation succeeded
            // the argument value is returned directly.
            if (updatedValue == null) {
                return value;
            }
            return deserialize(DEFAULT_PRIMITIVE_TYPE.namespace(), updatedValue);
        } catch (InterruptedException | ExecutionException e) {
            LOGGER.error(e, LoggerEvent.SCHEDULING.getName(), null, "Exception thrown while persisting cluster info!");
        }
        return null;
    }
}
ClusterInfoSnapshot: the cluster's state information, covering the state of all nodes and tasks; the class is serializable.
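The class itself isn't shown in the original; here is a hedged sketch consistent with how it is used above (builder with node and task snapshots, serializable):
public class ClusterInfoSnapshot implements Serializable {
    private NodesSnapshot nodesSnapshot; // state of all known nodes
    private TasksSnapshot tasksSnapshot; // state of all scheduled tasks

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private final ClusterInfoSnapshot snapshot = new ClusterInfoSnapshot();

        public Builder withNodesSnapshot(NodesSnapshot nodes) {
            snapshot.nodesSnapshot = nodes;
            return this;
        }

        public Builder withTasksSnapshot(TasksSnapshot tasks) {
            snapshot.tasksSnapshot = tasks;
            return this;
        }

        public ClusterInfoSnapshot build() { return snapshot; }
    }
}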
Besides saving the cluster state, on a leader switch or a cluster restart the new Master node must recover from the persisted cluster-state snapshot. This involves two steps, loading the snapshot and replaying it into the new Master's in-memory state, illustrated by the example code below:
public class RaftLeaderElectionServer extends AbstractLeaderElectionCluster {
    private RaftServer server;
    private RaftLeaderElectionClient client;
    private boolean running = false;
    private Member oldLeader;
    private Member newLeader;
    private HashMap<String, Member> membersCache = new HashMap<>();
    private BiConsumer<ControllerDescriptor, Boolean> action;
    private BiConsumer<RaftServer.Role, Boolean> roleChangeAction;
    private AtomicBoolean couldPersist = new AtomicBoolean(true);
    private BlockingQueue<ControllerDescriptor> leaderHistoryQueue = new LinkedBlockingQueue<>();
    private ExecutorService leaderChangedWorkService = ThreadUtils.newDaemonSingleThreadScheduledExecutor("leader-changed-service");

    public ClusterInfoSnapshot loadState() {
        return this.client.getClusterInfo();
    }
}
class Master {
    private RaftLeaderElectionServer persistEngine = new RaftLeaderElectionServer();

    public void init() {
        persistEngine.addLeaderTransferredListener((newLeader, couldPersist) -> {
            if (!couldPersist) {
                logger.error(LoggerEvent.SCHEDULING.getName(), "The persist engine was closed or is in trouble, so force this master to quit.");
                offline();
                close();
            } else {
                if (newLeader.getElectionAddress().equals(persistEngine.getAddress().toString())) {
                    // The current Master tries to recover from the latest cluster state snapshot.
                    getTaskDispatcher().recover();
                } else {
                    offline();
                }
            }
        });
    }
}
public class FIFODispatcher extends AbstractDispatcher {
    @Override
    public void recover() {
        logger.info(LoggerEvent.GENERAL.getName(), "Start to restore cluster info from the snapshot!");
        this.state = ControllerState.RECOVERING;
        restoreFromSnapshot();
        forwardMessageThread.schedule(this::completeRecovery, SchedulerManager.SCHEDULER_TIMEOUT, TimeUnit.MILLISECONDS);
    }
    private boolean restoreFromSnapshot() {
        // Load the latest persisted snapshot through the persist engine;
        // applying it to the local node/task managers is omitted here.
        ClusterInfoSnapshot snapshot = persistEngine.get().loadState();
        return snapshot != null;
    }
    /**
     * Before this task runs, leader election may be triggered again, turning
     * this server into a follower marked as UNKNOWN.
     *
     * During recovery, the Controller may complete ahead of the delay time, and
     * even though this method can be invoked concurrently from several places,
     * the state changes are sequential.
     *
     * So synchronization is needed here to ensure only one recovery procedure
     * is in flight.
     */
    private void completeRecovery() {
        if (this.state != ControllerState.RECOVERING) {
            logger.info(LoggerEvent.GENERAL.getName(), "Current state is %s, skipping recovery.", state);
            return;
        }
        Set<String> unknownWorkers = getSchedulerManagerRef().tryRemoveUnknownScheduler(workers ->
            workers.forEach(
                worker -> logger.warn(LoggerEvent.GENERAL.getName(), "Scheduler %s was unknown and has been removed!", worker))
        );
        if (!unknownWorkers.isEmpty()) {
            doSnapshot();
        }
        Set<TaskInfo> unknownTasks = getTaskManagerRef().tryRemoveUnknownTasks();
        if (!unknownTasks.isEmpty()) {
            doSnapshot();
            updateTaskInDB(unknownTasks);
        }
        online();
        logger.info(LoggerEvent.GENERAL.getName(), "Completed recovery. Starting to serve.");
        printCurrentState();
    }
}
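completeRecovery() above calls doSnapshot(), which is not shown in this section. A hedged sketch built on the updateAndGetValue(...) method from RaftLeaderElectionClient; the snapshot accessors and the getClient() getter are hypothetical names:
private void doSnapshot() {
    ClusterInfoSnapshot snapshot = ClusterInfoSnapshot.builder()
            .withNodesSnapshot(getSchedulerManagerRef().currentNodesSnapshot()) // hypothetical accessor
            .withTasksSnapshot(getTaskManagerRef().currentTasksSnapshot())      // hypothetical accessor
            .build();
    persistEngine.get().getClient().updateAndGetValue(snapshot);                // hypothetical getter
}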
Through RaftServer, Atomix offers an extensible set of features for building Raft-based clusters, and it also defines a series of built-in distributed atomic data types (primitive types). So beyond Raft's own leader election, we can rely on the Raft cluster's state-machine capabilities to provide data persistence.
A single Atomix library is therefore enough to extend our own system into a distributed system with both HA and data persistence. Compared with the alternatives, such as ZooKeeper or a distributed shared file system, this approach carries fewer external dependencies while making the system more robust and faster to build.
Despite these advantages, the Atomix-based approach also has some obvious drawbacks: