I want to run a single instance of an Akka scheduler in a cluster. The scheduler currently works fine locally, but in the cluster it does not behave as expected. The scheduler picks orders from the DB and pushes them to a Kafka topic. I am using Akka 2.5.6 (Java). I have gone through the official doc, but it did not help much. Any help is appreciated.
public class OrderReprocessActor extends UntypedActor {

    LoggingAdapter log = Logging.getLogger(getContext().system(), this);
    OrderProcessorJdbcConnection orderProcessorJdbcConnection;
    private final String SELECT_QUERY_TO_GET_FAILED_ORDER = "SELECT * FROM ORDER_HISTORY WHERE ORDER_STATUS = ?";
    CommonPropsUtil commonPropsUtil;
    final Cluster cluster = Cluster.get(getContext().system());

    public static Props getProps() {
        return Props.create(OrderReprocessActor.class);
    }

    @Inject
    public OrderReprocessActor(OrderProcessorJdbcConnection orderProcessorJdbcConnection, CommonPropsUtil commonPropsUtil) {
        this.orderProcessorJdbcConnection = orderProcessorJdbcConnection;
        this.commonPropsUtil = commonPropsUtil;
    }

    @Override
    public void onReceive(Object message) throws Throwable {
        String failedStatus = (String) message;
        List<OrderHistory> failedOrderList = getOrders(failedStatus);
        pushOrderToKafka(failedOrderList);
        String intervalSeconds = commonPropsUtil.getCommonPropsValueForKey(CommonConstants.ORDER_REPROCESSOR_SCHEDULER_INTERVAL);
        if (StringUtils.isNotEmpty(intervalSeconds)) {
            int interval = Integer.parseInt(intervalSeconds);
            // Re-send the same status message to self after the interval so the actor keeps polling.
            getContext().system().scheduler().scheduleOnce(Duration.create(interval, TimeUnit.SECONDS),
                () -> getSelf().tell(failedStatus, ActorRef.noSender()),
                getContext().system().dispatcher());
        }
    }

    /**
     * Takes the list of failed orders and pushes each one to the Kafka topic.
     */
    private void pushOrderToKafka(List<OrderHistory> failedOrders) {
        log.info("Entering pushOrderToKafka()");
        String kafkaOrderTopic = commonPropsUtil.getCommonPropsValueForKey(CommonConstants.KAFKA_SUBMIT_ORDER_TOPIC);
        Properties props = getKafkaProperties();
        Producer<String, Order> producer = new KafkaProducer<>(props);
        ObjectMapper objectMapper = new ObjectMapper();
        for (OrderHistory orderHistory : failedOrders) {
            try {
                Order order = objectMapper.readValue(orderHistory.getOrderData(), Order.class);
                log.info("******************Order ID..." + orderHistory.getOrderId());
                producer.send(new ProducerRecord<>(kafkaOrderTopic, orderHistory.getOrderId(), order)).get();
            } catch (IOException e) {
                log.error("IOException caught, message=" + e.getMessage());
            } catch (InterruptedException e) {
                log.error("InterruptedException caught, message=" + e.getMessage());
            } catch (ExecutionException e) {
                log.error("ExecutionException caught, message=" + e.getMessage());
            }
        }
        producer.close();
        log.info("Exiting pushOrderToKafka()");
    }

    /**
     * Returns the Kafka connection properties.
     */
    private Properties getKafkaProperties() {
        String kafkaBootStrapServers = commonPropsUtil.getCommonPropsValueForKey(CommonConstants.KAFKA_BOOTSTRAP_SERVERS);
        Properties props = new Properties();
        props.put(CommonConstants.BOOTSTRAP_SERVERS, kafkaBootStrapServers);
        props.put(CommonConstants.KEY_SERIALIZER, VZWCommonConstants.STRING_SERIALIZER);
        props.put(CommonConstants.VALUE_SERIALIZER, VZWCommonConstants.ORDER_SERIALIZER);
        return props;
    }

    /**
     * Fetches all failed orders from the DB.
     *
     * @return List<OrderReprocessActor.OrderHistory>
     * @throws SQLException
     */
    private List<OrderReprocessActor.OrderHistory> getOrders(String failStatus) throws SQLException {
        log.info("Entering getOrders()");
        Connection connection = orderProcessorJdbcConnection.getConnection();
        try {
            PreparedStatement pstmt = connection.prepareStatement(SELECT_QUERY_TO_GET_FAILED_ORDER);
            pstmt.setString(1, failStatus);
            ResultSet resultSet = pstmt.executeQuery();
            return getOrdersFromResultSet(resultSet);
        } catch (SQLException e) {
            log.error("SQLException caught while fetching failed orders from DB");
            log.error(e.getMessage());
        } finally {
            orderProcessorJdbcConnection.releaseConnection(connection);
        }
        log.info("Exiting getOrders()");
        // Return an empty list rather than null so pushOrderToKafka() does not NPE on failure.
        return Collections.emptyList();
    }

    /**
     * Retrieves orders from the SQL result set.
     */
    private List<OrderHistory> getOrdersFromResultSet(ResultSet resultSet) throws SQLException {
        List<OrderReprocessActor.OrderHistory> failedOrderList = new ArrayList<>();
        while (resultSet.next()) {
            String orderId = resultSet.getString("order_id");
            String orderData = resultSet.getString("order_data");
            OrderHistory orderHistory = new OrderHistory();
            orderHistory.setOrderId(orderId);
            orderHistory.setOrderData(orderData);
            failedOrderList.add(orderHistory);
        }
        return failedOrderList;
    }

    public static class OrderHistory {
        private String orderId;
        private String orderData;

        public String getOrderId() {
            return orderId;
        }

        public void setOrderId(String orderId) {
            this.orderId = orderId;
        }

        public String getOrderData() {
            return orderData;
        }

        public void setOrderData(String orderData) {
            this.orderData = orderData;
        }
    }
}
Answer 0 (score: 0)
Make OrderReprocessActor a cluster singleton. From the docs:

The cluster singleton pattern is implemented by akka.cluster.singleton.ClusterSingletonManager. It manages one singleton actor instance among all cluster nodes or a group of nodes tagged with a specific role. ClusterSingletonManager is an actor that is supposed to be started on all nodes, or all nodes with a specified role, in the cluster. The actual singleton actor is started by the ClusterSingletonManager on the oldest node by creating a child actor from the supplied Props. ClusterSingletonManager ensures that at most one singleton instance is running at any point in time.
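Applied to the question's actor, starting the singleton could look roughly like the sketch below. This is a minimal sketch, not the exact setup from the question: it assumes a clustered ActorSystem, that OrderReprocessActor.getProps() yields Props that can actually construct the actor (i.e. its injected dependencies are supplied), and it uses PoisonPill as the termination message; the name "orderReprocessSingleton" is just a placeholder.

import akka.actor.ActorSystem;
import akka.actor.PoisonPill;
import akka.cluster.singleton.ClusterSingletonManager;
import akka.cluster.singleton.ClusterSingletonManagerSettings;

// Start the ClusterSingletonManager on every node (or every node with the
// configured role). The manager creates the actual OrderReprocessActor child
// only on the oldest node, so at most one instance polls the DB.
ActorSystem system = ActorSystem.create("orderSystem"); // assumed clustered system

ClusterSingletonManagerSettings settings =
    ClusterSingletonManagerSettings.create(system);

system.actorOf(
    ClusterSingletonManager.props(
        OrderReprocessActor.getProps(),   // assumes these Props can build the actor with its dependencies
        PoisonPill.getInstance(),         // termination message sent at hand-over
        settings),
    "orderReprocessSingleton");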
Answer 1 (score: 0)
Create a cluster singleton like this:
final ClusterSingletonManagerSettings settings =
    ClusterSingletonManagerSettings.create(system);

system.actorOf(
    ClusterSingletonManager.props(
        Props.create(Consumer.class, () -> new Consumer(queue, testActor)),
        TestSingletonMessages.end(),
        settings),
    "consumer");
Akka will ensure that the actor is created on only one node in the cluster (the oldest node).
To communicate with the singleton actor, ask for a proxy that resolves it by its path:
ClusterSingletonProxySettings proxySettings =
    ClusterSingletonProxySettings.create(system);

ActorRef proxy =
    system.actorOf(ClusterSingletonProxy.props("/user/consumer", proxySettings),
        "consumerProxy");
These examples are adapted from the docs.
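Adapted to the question's OrderReprocessActor, the proxy would point at the path of the ClusterSingletonManager under /user and the initial status message would be sent through it. A rough sketch, assuming the manager was registered with the name "orderReprocessSingleton" (any name used at creation works) and that "FAILED" is a placeholder for the order status the actor expects:

import akka.actor.ActorRef;
import akka.cluster.singleton.ClusterSingletonProxy;
import akka.cluster.singleton.ClusterSingletonProxySettings;

ClusterSingletonProxySettings proxySettings =
    ClusterSingletonProxySettings.create(system);

// The proxy resolves the current singleton instance and buffers messages
// during hand-over, so callers never need to know which node hosts it.
ActorRef orderReprocessProxy = system.actorOf(
    ClusterSingletonProxy.props("/user/orderReprocessSingleton", proxySettings),
    "orderReprocessProxy");

// Kick off the polling loop; the actor re-schedules itself afterwards.
orderReprocessProxy.tell("FAILED", ActorRef.noSender()); // "FAILED" is a placeholder status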