I tested MapDB with integer keys and string values, inserting 10,000,000 elements into it. This is what I saw:
Processed 1.0E-5 percent of the data / time so far = 0 seconds
Processed 1.00001 percent of the data / time so far = 7 seconds
Processed 2.00001 percent of the data / time so far = 14 seconds
Processed 3.00001 percent of the data / time so far = 20 seconds
Processed 4.00001 percent of the data / time so far = 26 seconds
Processed 5.00001 percent of the data / time so far = 33 seconds
Processed 6.00001 percent of the data / time so far = 39 seconds
Processed 7.00001 percent of the data / time so far = 45 seconds
Processed 8.00001 percent of the data / time so far = 53 seconds
Processed 9.00001 percent of the data / time so far = 60 seconds
Processed 10.00001 percent of the data / time so far = 66 seconds
Processed 11.00001 percent of the data / time so far = 73 seconds
Processed 12.00001 percent of the data / time so far = 80 seconds
Processed 13.00001 percent of the data / time so far = 88 seconds
Processed 14.00001 percent of the data / time so far = 96 seconds
Processed 15.00001 percent of the data / time so far = 102 seconds
Processed 16.00001 percent of the data / time so far = 110 seconds
Processed 17.00001 percent of the data / time so far = 119 seconds
Processed 18.00001 percent of the data / time so far = 127 seconds
Processed 19.00001 percent of the data / time so far = 134 seconds
Processed 20.00001 percent of the data / time so far = 141 seconds
Processed 21.00001 percent of the data / time so far = 149 seconds
Processed 22.00001 percent of the data / time so far = 157 seconds
Processed 23.00001 percent of the data / time so far = 164 seconds
Processed 24.00001 percent of the data / time so far = 171 seconds
Processed 25.00001 percent of the data / time so far = 178 seconds
....
That is roughly 2.5 million entries put into the map in 178 seconds, which extrapolates to about 12 minutes (178 s × 4) for all 10 million.

Then I switched to more complex values and the speed dropped drastically (adding all 10,000,000 entries to the map would take 3-4 days). Does anyone have suggestions for speeding up MapDB inserts? Any prior speed-related experience or problems with MapDB?

There is also an evaluation here: http://kotek.net/blog/3G_map
Update: I create the map with generic code. Here is pseudocode:
DB db = DBMaker.newFileDB()....;
... map = db.getHashMap(...);
loop (...) {
    map.put(...);
}
db.commit();
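For reference, a minimal runnable version of the pseudocode above, assuming the MapDB 1.0 API; the file name "testdb", the map name "data", and the "value" + i format are placeholders for illustration:

import java.io.File;
import java.util.Map;
import org.mapdb.*;

public class BulkInsert {
    public static void main(String[] args) {
        DB db = DBMaker.newFileDB(new File("testdb")).make();
        Map<Integer, String> map = db.getHashMap("data");

        int total = 10_000_000;
        long start = System.currentTimeMillis();
        for (int i = 0; i < total; i++) {
            map.put(i, "value" + i);
            // print progress once per percent, in the style of the log above
            if (i % (total / 100) == 0) {
                System.out.println("Processed " + (100.0 * i / total)
                        + " percent of the data / time so far = "
                        + (System.currentTimeMillis() - start) / 1000 + " seconds");
            }
        }
        db.commit();
        db.close();
    }
}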
Answer 0 (score: 2)
MapDB author here.
For a start, use specialized serializers; they are a bit faster:
Map m = db.createHashMap("a").keySerializer(Serializer.LONG).valueSerializer(Serializer.LONG).makeOrGet();
Next, for bulk import I would suggest using the Data Pump with a TreeMap. There is an example here: https://github.com/jankotek/MapDB/blob/master/src/test/java/examples/Huge_Insert.java
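A hedged sketch along the lines of that example, assuming the MapDB 1.0 Data Pump API: pumpSource consumes keys in descending order, and the exact generic signatures may differ between versions.

import java.io.File;
import java.util.Iterator;
import org.mapdb.*;

public class PumpInsert {
    public static void main(String[] args) {
        DB db = DBMaker.newFileDB(new File("pumped.db"))
                .transactionDisable()   // skip the write-ahead log during the bulk load
                .make();

        // Data Pump expects the source keys in *descending* order,
        // so count down from the highest key.
        final long max = 10_000_000L;
        Iterator<Long> keys = new Iterator<Long>() {
            long next = max;
            @Override public boolean hasNext() { return next > 0; }
            @Override public Long next() { return next--; }
            @Override public void remove() { throw new UnsupportedOperationException(); }
        };

        BTreeMap<Long, String> map = db.createTreeMap("data")
                .pumpSource(keys, new Fun.Function1<String, Long>() {
                    @Override public String run(Long key) {
                        return "value" + key;   // derive each value from its key
                    }
                })
                .keySerializer(BTreeKeySerializer.ZERO_OR_POSITIVE_LONG)
                .valueSerializer(Serializer.STRING)
                .make();

        System.out.println("pumped size: " + map.size());
        db.close();
    }
}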
Answer 1 (score: 0)
On MapDB's official site I see the following:

    Concurrent - MapDB has record level locking and state-of-art concurrent engine. Its performance scales nearly linearly with number of cores. Data can be written by multiple parallel threads.
I thought "that's it" and wrote a simple test:
package com.stackoverflow.test;

import java.io.File;
import java.util.Date;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.mapdb.*;

public class Test {

    private static final int AMOUNT = 100000;

    private static final class MapAddingThread implements Runnable {

        private Integer fromElement;
        private Integer toElement;
        private Map<Integer, String> map;
        private CountDownLatch countDownLatch;

        public MapAddingThread(CountDownLatch countDownLatch, Map<Integer, String> map, Integer fromElement, Integer toElement) {
            this.countDownLatch = countDownLatch;
            this.map = map;
            this.fromElement = fromElement;
            this.toElement = toElement;
        }

        public void run() {
            for (Integer i = this.fromElement; i < this.toElement; i++) {
                map.put(i, i.toString());
            }
            this.countDownLatch.countDown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // int cores = 1;
        int cores = Runtime.getRuntime().availableProcessors();
        CountDownLatch countDownLatch = new CountDownLatch(cores);
        ExecutorService executorService = Executors.newFixedThreadPool(cores);
        int part = AMOUNT / cores;

        long startTime = new Date().getTime();
        System.out.println("Starting test in " + cores + " threads");

        DB db = DBMaker.newFileDB(new File("testdb5")).cacheDisable().closeOnJvmShutdown().make();
        Map<Integer, String> map = db.getHashMap("collectionName5");

        // split the key range evenly across the worker threads
        for (Integer i = 0; i < cores; i++) {
            executorService.execute(new MapAddingThread(countDownLatch, map, i * part, (i + 1) * part));
        }
        countDownLatch.await();
        executorService.shutdown(); // let the JVM exit once the workers are done

        long endTime = new Date().getTime();
        System.out.println("Filling elements takes : " + (endTime - startTime));
        db.commit();
        System.out.println("Commit takes : " + (new Date().getTime() - endTime));
        db.close();
    }
}
And got the results:

Starting test in 4 threads
Filling elements takes : 4424
Commit takes : 901
Then I ran the same thing in a single thread:
The code is identical except for the thread count:

        int cores = 1;
        // int cores = Runtime.getRuntime().availableProcessors();
And got the results:

Starting test in 1 threads
Filling elements takes : 3639
Commit takes : 924
So, if I did everything correctly, MapDB does not seem to scale with the number of cores.
The only things you could play with:

- API methods (e.g. toggling encryption, caching, TreeMap vs. HashMap usage) - see the sketch below
- Trying via Reflection
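As a hedged sketch of those DBMaker toggles, assuming the MapDB 1.0 API; the file name and cache size are placeholders, and each option trades durability or memory for raw insert speed:

import java.io.File;
import java.util.Map;
import org.mapdb.*;

public class Tuning {
    public static void main(String[] args) {
        DB db = DBMaker.newFileDB(new File("tuned.db"))
                .transactionDisable()       // no write-ahead log: faster, but no rollback
                .asyncWriteEnable()         // push writes to a background thread
                .mmapFileEnable()           // memory-mapped storage (64-bit JVM only)
                .cacheSize(1 << 20)         // larger instance cache
                // .encryptionEnable("password") // XTEA encryption; costs speed
                .closeOnJvmShutdown()
                .make();

        // The HashMap/TreeMap choice is the other lever: a hash trie for point
        // lookups, a B-tree for sorted access and the Data Pump.
        Map<Integer, String> hash = db.getHashMap("hash");
        Map<Integer, String> tree = db.getTreeMap("tree");

        hash.put(1, "one");
        tree.put(1, "one");

        db.close();
    }
}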