Using the Spark Framework

Asked: 2017-02-03 03:43:51

Tags: file-upload jetty embedded-jetty spark-java

I have the following code to handle file uploads in Spark, and it works flawlessly for files below the configured limit. However, when I upload a large file of about 1 GB, somewhere in Jetty's internal logic the browser connection is aborted without any response error code being sent. So far, all I see is the following line in the Chrome Developer Tools console:


POST http://localhost:8888/uploads net::ERR_CONNECTION_ABORTED

Here are some details about the request:

Request Headers
Accept:application/json, text/javascript, */*; q=0.01
Content-Type:multipart/form-data; boundary=----WebKitFormBoundarybo8RwdTiWU9T9m3q
Origin:http://localhost:8888
Referer:http://localhost:8888/index.html
User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
X-Requested-With:XMLHttpRequest

Request Payload
------WebKitFormBoundarybo8RwdTiWU9T9m3q
Content-Disposition: form-data; name="files[]";     filename="some big file.mp4"
Content-Type: video/mp4


------WebKitFormBoundarybo8RwdTiWU9T9m3q--
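As a side note, the boundary string that the server has to split the body on comes from the `Content-Type` header shown above. A minimal standalone sketch of extracting it (a hypothetical helper for illustration, not part of Spark or Jetty):

```java
public class BoundaryExtract {
    // Returns the boundary token from a multipart Content-Type header, or null if absent.
    static String boundary(String contentType) {
        int i = contentType.indexOf("boundary=");
        return i < 0 ? null : contentType.substring(i + "boundary=".length());
    }

    public static void main(String[] args) {
        String ct = "multipart/form-data; boundary=----WebKitFormBoundarybo8RwdTiWU9T9m3q";
        System.out.println(boundary(ct));
        // prints: ----WebKitFormBoundarybo8RwdTiWU9T9m3q
    }
}
```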

Here is the code I am using:

package examples.fileUploader;

import org.eclipse.jetty.util.URIUtil;
import spark.Spark;
import javax.servlet.MultipartConfigElement;
import javax.servlet.http.Part;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collection;

public final class test
{
    public static void main(final String... args)
    {
        Spark.port(8888);

        Spark.staticFileLocation("/examples/fileUploader/public/");
        Spark.staticFiles.externalLocation(URIUtil.addPaths(System.getProperty("user.dir"), "/src/main/java/examples/fileUploader/public"));

        Spark.get("/hello", (request, response) -> "Hello World!");

        Spark.post("/uploads", "multipart/form-data", (request, response) -> {
            //- Servlet 3.x multipart config
            String location = URIUtil.addPaths(System.getProperty("user.dir"), "/tmp");  // the directory where uploaded files will be stored
            long maxFileSize = 100000000;       // the maximum size allowed for uploaded files
            long maxRequestSize = 100000000;    // the maximum size allowed for multipart/form-data requests
            int fileSizeThreshold = 1024;       // the size threshold after which files will be written to disk
            MultipartConfigElement multipartConfigElement = new MultipartConfigElement(location, maxFileSize, maxRequestSize, fileSizeThreshold);
            request.raw().setAttribute("org.eclipse.jetty.multipartConfig", multipartConfigElement);
            //-/

            Collection<Part> parts = request.raw().getParts();
            for (Part part : parts) {
                System.out.println("Name:");
                System.out.println(part.getName());
                System.out.println("Size:");
                System.out.println(part.getSize());
                System.out.println("Filename:");
                System.out.println(part.getSubmittedFileName());
            }

            String fName = request.raw().getPart("upfile").getSubmittedFileName();
            System.out.println("Title: " + request.raw().getParameter("title"));
            System.out.println("File: " + fName);

            Part uploadedFile = request.raw().getPart("upfile");
            Path out = Paths.get("/aaa/bbb/" + fName);
            try (final InputStream in = uploadedFile.getInputStream()) {
                Files.copy(in, out);
                uploadedFile.delete();
            }
            // cleanup
            multipartConfigElement = null;
            parts = null;
            uploadedFile = null;

            return "OK";
        });
    }
}
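One detail worth noting about the code above: `maxFileSize` and `maxRequestSize` are set to 100000000 bytes, which is roughly 95 MB, well below the 1 GB file being uploaded, so the request exceeds the configured multipart limits. A quick standalone sanity check of the arithmetic (a sketch for illustration, not part of the handler):

```java
public class UploadLimitCheck {
    public static void main(String[] args) {
        long maxRequestSize = 100000000L;          // limit set in the handler above, in bytes
        long oneGigabyte = 1024L * 1024L * 1024L;  // approximate size of the failing upload

        System.out.println("limit in MB: " + maxRequestSize / (1024 * 1024));
        System.out.println("1 GB exceeds limit: " + (oneGigabyte > maxRequestSize));
        // prints: limit in MB: 95
        // prints: 1 GB exceeds limit: true
    }
}
```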

My pom dependency for Spark:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-core</artifactId>
    <version>2.5.3</version>
</dependency>

0 Answers:

No answers yet