What advantage does using Akka Actors have over plain file operations? I am trying to measure the time taken to analyse a log file. The task is to find the IP addresses that have logged in more than 50 times and display them. The plain file-operation version is faster than the Akka Actor model. Why is that?
Using plain file operations:
public static void main(String[] args) {
    long startTime = System.currentTimeMillis();
    File file = new File("log.txt");
    Map<String, Long> ipMap = new HashMap<>();
    try {
        FileReader fr = new FileReader(file);
        BufferedReader br = new BufferedReader(fr);
        String line = br.readLine();
        while (line != null) {
            int idx = line.indexOf('-');
            String ipAddress = line.substring(0, idx).trim();
            long count = ipMap.getOrDefault(ipAddress, 0L);
            ipMap.put(ipAddress, ++count);
            line = br.readLine();
        }
        br.close(); // closing the BufferedReader also closes the underlying FileReader
        System.out.println("================================");
        System.out.println("||\tCount\t||\t\tIP");
        System.out.println("================================");
        // Sort by value (descending); a LinkedHashMap preserves that order
        Map<String, Long> result = new LinkedHashMap<>();
        ipMap.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .forEachOrdered(x -> result.put(x.getKey(), x.getValue()));
        // Print only if count > 50
        result.entrySet().stream().filter(entry -> entry.getValue() > 50).forEach(entry ->
                System.out.println("||\t" + entry.getValue() + " \t||\t" + entry.getKey())
        );
        long endTime = System.currentTimeMillis();
        System.out.println("Time: " + (endTime - startTime));
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Using Actors:
1. The Main Class
public static void main(String[] args) {
long startTime = System.currentTimeMillis();
// Create actorSystem
ActorSystem akkaSystem = ActorSystem.create("akkaSystem");
// Create first actor based on the specified class
ActorRef coordinator = akkaSystem.actorOf(Props.create(FileAnalysisActor.class));
// Create a message including the file path
FileAnalysisMessage msg = new FileAnalysisMessage("log.txt");
// Send a message to start processing the file. 'ask' is asynchronous; it returns a Future bounded by the timeout.
Timeout timeout = new Timeout(6, TimeUnit.SECONDS);
Future<Object> future = Patterns.ask(coordinator, msg, timeout);
// Process the results
final ExecutionContext ec = akkaSystem.dispatcher();
future.onSuccess(new OnSuccess<Object>() {
@Override
public void onSuccess(Object message) throws Throwable {
if (message instanceof FileProcessedMessage) {
printResults((FileProcessedMessage) message);
// Stop the actor system
akkaSystem.shutdown();
}
}
private void printResults(FileProcessedMessage message) {
System.out.println("================================");
System.out.println("||\tCount\t||\t\tIP");
System.out.println("================================");
Map<String, Long> result = new LinkedHashMap<>();
// Sort by value and put it into the "result" map
message.getData().entrySet().stream()
.sorted(Map.Entry.<String, Long>comparingByValue().reversed())
.forEachOrdered(x -> result.put(x.getKey(), x.getValue()));
// Print only if count > 50
result.entrySet().stream().filter(entry -> entry.getValue() > 50).forEach(entry ->
System.out.println("||\t" + entry.getValue() + " \t||\t" + entry.getKey())
);
long endTime = System.currentTimeMillis();
System.out.println("Total time: "+(endTime - startTime));
}
}, ec);
}
2. The File Analysis Actor Class
public class FileAnalysisActor extends UntypedActor {
private Map<String, Long> ipMap = new HashMap<>();
private long fileLineCount;
private long processedCount;
private ActorRef analyticsSender = null;
@Override
public void onReceive(Object message) throws Exception {
/*
This actor can receive two different messages, FileAnalysisMessage or LineProcessingResult, any
other type will be discarded using the unhandled method
*/
//System.out.println(Thread.currentThread().getName());
if (message instanceof FileAnalysisMessage) {
List<String> lines = FileUtils.readLines(new File(
((FileAnalysisMessage) message).getFileName()));
fileLineCount = lines.size();
processedCount = 0;
// stores a reference to the original sender to send back the results later on
analyticsSender = this.getSender();
for (String line : lines) {
// creates a new actor per each line of the log file
Props props = Props.create(LogLineProcessor.class);
ActorRef lineProcessorActor = this.getContext().actorOf(props);
// sends a message to the new actor with the line payload
lineProcessorActor.tell(new LogLineMessage(line), this.getSelf());
}
} else if (message instanceof LineProcessingResult) {
// a result message is received after a LogLineProcessor actor has finished processing a line
String ip = ((LineProcessingResult) message).getIpAddress();
// increment ip counter
Long count = ipMap.getOrDefault(ip, 0L);
ipMap.put(ip, ++count);
// if the file has been processed entirely, send a termination message to the main actor
processedCount++;
if (fileLineCount == processedCount) {
// send done message
analyticsSender.tell(new FileProcessedMessage(ipMap), ActorRef.noSender());
}
} else {
// Ignore message
this.unhandled(message);
}
}
}
3. The Log Line Processor Class
public class LogLineProcessor extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof LogLineMessage) {
// Which data does each actor process?
//System.out.println("Line: " + ((LogLineMessage) message).getData());
// Uncomment this line to see the thread number and the actor name relationship
//System.out.println("Thread ["+Thread.currentThread().getId()+"] handling ["+ getSelf().toString()+"]");
// get the message payload, this will be just one line from the log file
String messageData = ((LogLineMessage) message).getData();
int idx = messageData.indexOf('-');
if (idx != -1) {
// get the ip address
String ipAddress = messageData.substring(0, idx).trim();
// tell the sender that we got a result using a new type of message
this.getSender().tell(new LineProcessingResult(ipAddress), this.getSelf());
}
} else {
// ignore any other message type
this.unhandled(message);
}
}
}
The Message Classes
1. File Analysis Message
public class FileAnalysisMessage {
private String fileName;
public FileAnalysisMessage(String file) {
this.fileName = file;
}
public String getFileName() {
return fileName;
}
}
2. File Processed Message
public class FileProcessedMessage {
private Map<String, Long> data;
public FileProcessedMessage(Map<String, Long> data) {
this.data = data;
}
public Map<String, Long> getData() {
return data;
}
}
3. Line Processing Result
public class LineProcessingResult {
private String ipAddress;
public LineProcessingResult(String ipAddress) {
this.ipAddress = ipAddress;
}
public String getIpAddress() {
return ipAddress;
}
}
4. Log Line Message
public class LogLineMessage {
private String data;
public LogLineMessage(String data) {
this.data = data;
}
public String getData() {
return data;
}
}
I am creating one actor for each line of the file.
Answer (score: 1)
With any concurrency framework there is always a trade-off between the amount of concurrency deployed and the complexity involved in each unit of concurrency. Akka is no exception.
In your non-akka approach, each line goes through a relatively simple sequence of steps:
- read the line from the file
- find the '-' and extract the IP address
- increment that address's count in the hash map
By comparison, your akka approach does much more work for each line:
- wrap the line in a LogLineMessage
- create a new actor for that line
- send the LogLineMessage to the new actor
- the actor parses out the IP address and wraps it in a LineProcessingResult message
- the actor sends the LineProcessingResult back to the coordinating actor
- the coordinator increments that address's count in the hash map
If we naively assume that each of the steps above takes the same amount of time, you would need 2 threads with akka just to run at the same speed as 1 thread without akka.
Have each unit of concurrency do more work
Instead of 1 Actor per line, have each Actor process N lines into its own sub hash map (for example, 1000 lines per Actor):
public class LogLineMessage {
private String[] data;
public LogLineMessage(String[] data) {
this.data = data;
}
public String[] getData() {
return data;
}
}
The Actor then won't send back something as simple as an IP address. Instead, it will send a hash of counts for its subset of lines:
public class LineProcessingResult {
    private HashMap<String, Long> ipAddressCount;
    public LineProcessingResult(HashMap<String, Long> count) {
        this.ipAddressCount = count;
    }
    public HashMap<String, Long> getIpAddressCount() {
        return ipAddressCount;
    }
}
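For completeness, here is one way the batched LogLineProcessor could look. This sketch is not part of the original answer; it just reuses the message classes above, counts IP addresses for its whole chunk in a local map, and replies once per chunk instead of once per line:
import java.util.HashMap;
import akka.actor.UntypedActor;
public class LogLineProcessor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof LogLineMessage) {
            String[] lines = ((LogLineMessage) message).getData();
            // count IP addresses locally for this chunk only
            HashMap<String, Long> localCount = new HashMap<>();
            for (String line : lines) {
                if (line == null) {
                    continue; // defensive: skip unused slots in a partially filled chunk
                }
                int idx = line.indexOf('-');
                if (idx != -1) {
                    String ipAddress = line.substring(0, idx).trim();
                    localCount.put(ipAddress, localCount.getOrDefault(ipAddress, 0L) + 1);
                }
            }
            // one result message per chunk instead of one per line
            this.getSender().tell(new LineProcessingResult(localCount), this.getSelf());
        } else {
            this.unhandled(message);
        }
    }
}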
The coordinating Actor can then be responsible for combining all of the various sub-counts:
// inside of FileAnalysisActor
else if (message instanceof LineProcessingResult) {
    HashMap<String, Long> localCount = ((LineProcessingResult) message).getIpAddressCount();
    localCount.forEach((ipAddress, count) ->
        ipMap.put(ipAddress, ipMap.getOrDefault(ipAddress, 0L) + count)
    );
}
You can then vary N to see where you get peak performance for your particular system.
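If you want to measure that empirically, a small timing loop over a few candidate batch sizes might look like the sketch below. This is an assumption, not code from the original answer; runAnalysis is a hypothetical helper that wires up the actor system with the given batch size and blocks until the FileProcessedMessage arrives:
int[] batchSizes = {100, 500, 1000, 5000, 10000};
for (int n : batchSizes) {
    long start = System.currentTimeMillis();
    runAnalysis("log.txt", n); // hypothetical helper: runs the whole analysis with batch size n
    long elapsed = System.currentTimeMillis() - start;
    System.out.println("N = " + n + " took " + elapsed + " ms");
}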
Don't read the whole file into memory
Another drawback of your concurrent solution is that it first reads the entire file into memory. That is unnecessary and taxing for the JVM. Instead, read the file N lines at a time, and once those lines are in memory spawn off an Actor as described above:
FileReader fr = new FileReader(file);
BufferedReader br = new BufferedReader(fr);
String[] lineBuffer = null;
int bufferCount = 0;
int N = 1000;
String line = br.readLine();
while (line != null) {
    if (0 == bufferCount)
        lineBuffer = new String[N];
    else if (N == bufferCount) {
        // buffer is full, hand it off to a new actor
        Props props = Props.create(LogLineProcessor.class);
        ActorRef lineProcessorActor = this.getContext().actorOf(props);
        lineProcessorActor.tell(new LogLineMessage(lineBuffer),
                this.getSelf());
        bufferCount = 0;
        continue; // the current line is stored in the next, freshly created buffer
    }
    lineBuffer[bufferCount] = line;
    line = br.readLine();
    bufferCount++;
}
// handle the final, possibly partially filled buffer
if (bufferCount > 0) {
    Props props = Props.create(LogLineProcessor.class);
    ActorRef lineProcessorActor = this.getContext().actorOf(props);
    // trim the buffer so the actor does not receive trailing null entries
    lineProcessorActor.tell(new LogLineMessage(Arrays.copyOf(lineBuffer, bufferCount)),
            this.getSelf());
}
br.close();
This allows the file IO, the line processing, and the sub-map combining to all run in parallel.
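One detail the snippet leaves implicit: the coordinator originally compared processedCount against fileLineCount, but with chunking it receives one LineProcessingResult per chunk rather than per line. A minimal sketch of the adjusted bookkeeping, assuming new chunksSent and chunksProcessed fields (these names are not from the original answer); chunksSent would be incremented in the read loop above each time a LogLineMessage is dispatched:
// inside FileAnalysisActor (sketch; the field names are assumptions)
private long chunksSent;      // incremented in the read loop for every LogLineMessage sent
private long chunksProcessed; // incremented for every LineProcessingResult received

// in the LineProcessingResult branch, after merging localCount into ipMap:
chunksProcessed++;
if (chunksProcessed == chunksSent) {
    // safe because the actor finishes handling the FileAnalysisMessage (sending every chunk)
    // before it processes any LineProcessingResult
    analyticsSender.tell(new FileProcessedMessage(ipMap), ActorRef.noSender());
}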