Web dyno and worker dyno: no change in behavior, logs still show R14 errors

Date: 2018-06-14 01:39:01

Tags: java heroku itext

According to the Heroku documentation, heavy workloads should be handled by worker dynos. I created a Spring Boot application with two modules running inside it.

Web module: it simply sends the data to the worker through RabbitMQ.

@Autowired
private RabbitMQSender rabbitMQSender;


@RequestMapping(value = "/MergeNSplitService/merge", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE) // produces expects a String, hence APPLICATION_JSON_VALUE
public String mergeUsers(@RequestParam("fileIds") String fileIds,
                         @RequestParam("accessToken") String accessToken,
                         @RequestParam("instanceURL") String instanceURL,
                         @RequestParam("useSoap")boolean useSoap) throws ConnectionException, DocumentException, IOException {
    Gson gson = new GsonBuilder().disableHtmlEscaping().create();
    LOGGER.info("fileIds -> "+fileIds);
    LOGGER.info("accessToken -> "+accessToken);
    LOGGER.info("instanceURL -> "+instanceURL);
    LOGGER.info("useSoap -> "+useSoap);
    BigOpertaion bigOpertaion = new BigOpertaion();
    bigOpertaion.setFileIds(fileIds);
    bigOpertaion.setAccessToken(accessToken);
    bigOpertaion.setInstanceURL(instanceURL);
    bigOpertaion.setUseSoap(useSoap);
    rabbitMQSender.merge(bigOpertaion);
    return gson.toJson("Merge PDF SUBMITTED");

}

Worker module: it merges two or more PDFs.

@Component
public class RabbitMQListener {

    Logger LOGGER = LoggerFactory.getLogger(RabbitMQListener.class);
    static EnterpriseConnection connection;
    private static final ExecutorService THREADPOOL = Executors.newCachedThreadPool();

    @RabbitListener(queues = SpringBootHerokuExampleApplication.PDF_MERGE_QUEUE)
    public void mergeProcess(BigOpertaion bigOpertaion){
        mergeanduploadPDF(bigOpertaion.getFileIds(), bigOpertaion.getAccessToken(), bigOpertaion.getInstanceURL(), bigOpertaion.isUseSoap());
    }

    @RabbitListener(queues = SpringBootHerokuExampleApplication.PDF_SPLIT_QUEUE)
    public void splitProcess(BigOpertaion bigOpertaion){

    }

    public void mergeanduploadPDF(String file1Ids, String accessToken, String instanceURL, boolean useSoap) {

        System.out.println("Querying for the mail request...");

        ConnectorConfig config = new ConnectorConfig();
        config.setSessionId(accessToken);
        if (useSoap) {
            config.setServiceEndpoint(instanceURL + "/services/Soap/c/40.0");
        } else {
            config.setServiceEndpoint(instanceURL + "/services/Soap/T/40.0");
        }

        List<File> inputFiles = new ArrayList<File>();

        try {

            THREADPOOL.execute(new Runnable() {
                @Override
                public void run() {
                    String[] split = file1Ids.split(",");
                    String parentId = split[split.length - 1];
                    StringBuilder buff = new StringBuilder();
                    String sep = "";
                    // Everything except the last entry (the parent record Id) is a
                    // ContentDocument Id. Note: the original compared strings with !=,
                    // which tests reference identity rather than equality.
                    for (int i = 0; i < split.length - 1; i++) {
                        buff.append(sep).append("'").append(split[i]).append("'");
                        sep = ",";
                    }
                    String queryIds = buff.toString();

                    try {
                        connection = Connector.newConnection(config);
                        QueryResult queryResults = connection.query(
                                "Select Id,VersionData from ContentVersion where Id IN (Select LatestPublishedVersionId from ContentDocument where Id IN ("
                                        + queryIds + "))");

                        boolean done = false;

                        if (queryResults.getSize() > 0) {
                            while (!done) {
                                for (SObject sObject : queryResults.getRecords()) {
                                    ContentVersion contentData = (ContentVersion) sObject;
                                    File tempFile = File.createTempFile("test_", ".pdf", null);
                                    try (OutputStream os = Files.newOutputStream(Paths.get(tempFile.toURI()))) {
                                        os.write(contentData.getVersionData());
                                    }
                                    inputFiles.add(tempFile);
                                }
                                if (queryResults.isDone()) {
                                    done = true;
                                }else {
                                    queryResults = connection.queryMore(queryResults.getQueryLocator());
                                }

                            }
                        }

                        Document PDFCombineUsingJava = new Document();
                        PdfSmartCopy copy = new PdfSmartCopy(PDFCombineUsingJava, new FileOutputStream("CombinedPDFDocument.pdf"));
                        PDFCombineUsingJava.open();
                        int number_of_pages = 0;
                        // PdfSmartCopy is not thread-safe, and parallelStream().forEachOrdered()
                        // processes elements one at a time anyway, so a plain loop is clearer.
                        for (File inputFile : inputFiles) {
                            try {
                                createFiles(inputFile, number_of_pages, copy);
                            } catch (IOException | BadPdfFormatException e) {
                                e.printStackTrace();
                            }
                        }

                        PDFCombineUsingJava.close();
                        copy.close();
                        // PdfSmartCopy already wrote this file; createNewFile() here was a no-op.
                        File mergedFile = new File("CombinedPDFDocument.pdf");

                        LOGGER.info("Creating ContentVersion record...");
                        ContentVersion[] record = new ContentVersion[1];
                        ContentVersion mergedContentData = new ContentVersion();
                        mergedContentData.setVersionData(readFromFile(mergedFile.getName()));
                        mergedContentData.setFirstPublishLocationId(parentId);
                        mergedContentData.setTitle("Merged Document");
                        mergedContentData.setPathOnClient("/CombinedPDFDocument.pdf");

                        record[0] = mergedContentData;


                        // create the records in Salesforce.com
                        SaveResult[] saveResults = connection.create(record);

                        // check the returned results for any errors
                        for (int i = 0; i < saveResults.length; i++) {
                            if (saveResults[i].isSuccess()) {
                                System.out.println(i + ". Successfully created record - Id: " + saveResults[i].getId());
                            } else {
                                Error[] errors = saveResults[i].getErrors();
                                for (int j = 0; j < errors.length; j++) {
                                    System.out.println("ERROR creating record: " + errors[j].getMessage());
                                }
                            }
                        }
                    } catch (ConnectionException | IOException | DocumentException e) {
                        e.printStackTrace();
                    }
                }

                private void createFiles(File inputFile, int number_of_pages, PdfSmartCopy copy) throws IOException, BadPdfFormatException {
                    PdfReader readInputPDF = new PdfReader(inputFile.toString());
                    number_of_pages = readInputPDF.getNumberOfPages();
                    // iText page numbers are 1-based
                    for (int page = 1; page <= number_of_pages; page++) {
                        copy.addPage(copy.getImportedPage(readInputPDF, page));
                    }
                    copy.freeReader(readInputPDF);
                    readInputPDF.close();
                }
            });


        } catch (Exception e) {
            e.printStackTrace();
        }

    }

    public static byte[] readFromFile(String fileName) throws IOException {
        // Buffers the entire file in memory. The original short-circuited when the
        // first read() returned fewer than 8192 bytes, but a short read does not
        // imply end-of-file (and an empty file made Arrays.copyOf(buf, -1) throw),
        // so always loop until read() returns -1.
        byte[] buf = new byte[8192];
        try (InputStream is = Files.newInputStream(Paths.get(fileName));
             ByteArrayOutputStream os = new ByteArrayOutputStream(16384)) {
            int len;
            while ((len = is.read(buf)) != -1) {
                os.write(buf, 0, len);
            }
            return os.toByteArray();
        }
    }
}

Initially everything ran in the web dyno, which is where I first saw the R14 errors. I read the documentation and saw that passing the heavy logic to a worker dyno via RabbitMQ might help. But now the worker dyno has started producing:

2018-06-13T12:51:46.516017+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2018-06-13T12:52:08.595533+00:00 heroku[worker.1]: Process running mem=526M(102.4%)
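R14 is raised on the container's total resident memory, which the JVM only stays under if its heap ceiling is capped accordingly. A quick diagnostic sketch (not part of the question's code) to see what ceiling the worker JVM is actually assuming:

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() is the -Xmx ceiling the heap may grow toward; if this
        // (plus metaspace, thread stacks, and native buffers) exceeds ~512 MB,
        // an R14 on a free dyno is possible even without a leak.
        System.out.println("max heap MB:  " + rt.maxMemory() / (1024 * 1024));
        System.out.println("used heap MB: " + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
    }
}
```

Running this inside the dyno (e.g. via `heroku run`) shows whether the default heap sizing already overshoots the quota.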

Is this because simple file read/write operations cannot be done within the 512 MB JVM that Heroku provides on a free dyno? Or is there a better way to merge two PDF files?
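One contributor worth noting: `readFromFile` plus `setVersionData()` holds the entire merged PDF in heap at once, and Salesforce's SOAP `setVersionData()` does require a `byte[]`. But wherever an API accepts a stream, copying in fixed-size chunks keeps heap usage flat regardless of file size. A minimal stdlib sketch (file names and sizes are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamCopy {
    // Copies in 8 KB chunks, so heap use stays constant no matter how large the file is.
    static void copy(Path in, Path out) throws IOException {
        byte[] buf = new byte[8192];
        try (InputStream is = Files.newInputStream(in);
             OutputStream os = Files.newOutputStream(out)) {
            int len;
            while ((len = is.read(buf)) != -1) {
                os.write(buf, 0, len);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src_", ".pdf");
        Files.write(src, new byte[1 << 20]); // 1 MB dummy payload
        Path dst = Files.createTempFile("dst_", ".pdf");
        copy(src, dst);
        System.out.println(Files.size(dst)); // prints 1048576
    }
}
```

The iText `PdfSmartCopy` path in the worker already streams page by page to disk; the large in-memory `byte[]` objects (SOAP `getVersionData()` results and the final upload buffer) are the pieces this pattern cannot remove.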

EDIT

Procfile

web: java $JAVA_OPTS -jar web-service/target/web-service-0.0.1-SNAPSHOT.jar --server.port=$PORT
worker: java $JAVA_OPTS -jar worker-service/target/worker-service-0.0.1-SNAPSHOT.jar
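One common mitigation (an assumption about this deployment, not something stated in the question) is to cap the worker's heap explicitly so the JVM's total footprint stays under the 512 MB quota. Flags placed after $JAVA_OPTS take precedence over anything it contains:

```
worker: java $JAVA_OPTS -Xmx300m -Xss512k -jar worker-service/target/worker-service-0.0.1-SNAPSHOT.jar
```

The exact values are illustrative; total process memory is heap plus metaspace, thread stacks, and native buffers, so the cap usually needs to sit well below 512 MB.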

Merging two small PDFs throws no error; once the merged PDF exceeds 10 MB, the R14 error appears.

0 Answers