Can I change the file extension to lowercase in my code before uploading?

Asked: 2015-03-30 14:12:01

Tags: javascript php fine-uploader

I am using this script: https://github.com/FineUploader/server-examples/blob/master/php/s3/s3demo.php. The problem is that if I upload an image named image.JPG (uppercase extension), I run into problems displaying the image. I would like to change the file extension to lowercase before the upload, but I cannot find where in the code I should add or change anything.

Should I add $ext = strtolower(pathinfo(xxx, PATHINFO_EXTENSION)); somewhere in the code so that all uploaded files are saved with a lowercase extension?

The code from the link:

<?php
/**
 * PHP Server-Side Example for Fine Uploader S3.
 * Maintained by Widen Enterprises.
 *
 *
 * This example:
 *  - handles non-CORS environment
 *  - handles size validation and no size validation
 *  - handles delete file requests for both DELETE and POST methods
 *  - Performs basic inspections on the policy documents and REST headers before signing them
 *  - Ensures again the file size does not exceed the max (after file is in S3)
 *  - signs policy documents (simple uploads) and REST requests
 *    (chunked/multipart uploads)
 *  - returns a thumbnailUrl in the response for older browsers so thumbnails can be displayed next to the file
 *
 * Requirements:
 *  - PHP 5.3 or newer
 *  - Amazon PHP SDK (only if utilizing the AWS SDK for deleting files or otherwise examining them)
 *
 * If you need to install the AWS SDK, see http://docs.aws.amazon.com/aws-sdk-php-2/guide/latest/installation.html.
 */

// You can remove these two lines if you are not using Fine Uploader's
// delete file feature
require 'aws/aws-autoloader.php';
use Aws\S3\S3Client;

// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_SERVER['AWS_SECRET_KEY'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_SERVER['PARAM1'];
$serverPrivateKey = $_SERVER['PARAM2'];

// The following variables are used when validating the policy document
// sent by the uploader. 
$expectedBucketName = "upload.fineuploader.com";
// $expectedMaxSize is the value you set the sizeLimit property of the 
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = null;

$method = getRequestMethod();

// This second conditional will only ever evaluate to true if
// the delete file feature is enabled
if ($method == "DELETE") {
    deleteObject();
}
// This is all you really need if not using the delete file feature
// and not working in a CORS environment
else if ($method == 'POST') {

    // Assumes the successEndpoint has a parameter of "success" associated with it,
    // to allow the server to differentiate between a successEndpoint request
    // and other POST requests (all requests are sent to the same endpoint in this example).
    // This condition is not needed if you don't require a callback on upload success.
    if (isset($_REQUEST["success"])) {
        verifyFileInS3(shouldIncludeThumbnail());
    }
    else {
        signRequest();
    }
}

// This will retrieve the "intended" request method.  Normally, this is the
// actual method of the request.  Sometimes, though, the intended request method
// must be hidden in the parameters of the request.  For example, when attempting to
// send a DELETE request in a cross-origin environment in IE9 or older, it is not
// possible to send a DELETE request.  So, we send a POST with the intended method,
// DELETE, in a "_method" parameter.
function getRequestMethod() {

    if ($_POST['_method'] != null) {
        return $_POST['_method'];
    }

    return $_SERVER['REQUEST_METHOD'];
}

function getS3Client() {
    global $serverPublicKey, $serverPrivateKey;

    return S3Client::factory(array(
        'key' => $serverPublicKey,
        'secret' => $serverPrivateKey
    ));
}

// Only needed if the delete file feature is enabled
function deleteObject() {
    getS3Client()->deleteObject(array(
        'Bucket' => $_POST['bucket'],
        'Key' => $_POST['key']
    ));
}

function signRequest() {
    header('Content-Type: application/json');

    $responseBody = file_get_contents('php://input');
    $contentAsObject = json_decode($responseBody, true);
    $jsonContent = json_encode($contentAsObject);

    $headersStr = $contentAsObject["headers"];
    if ($headersStr) {
        signRestRequest($headersStr);
    }
    else {
        signPolicy($jsonContent);
    }
}

function signRestRequest($headersStr) {
    if (isValidRestRequest($headersStr)) {
        $response = array('signature' => sign($headersStr));
        echo json_encode($response);
    }
    else {
        echo json_encode(array("invalid" => true));
    }
}

function isValidRestRequest($headersStr) {
    global $expectedBucketName;

    $pattern = "/\/$expectedBucketName\/.+$/";
    preg_match($pattern, $headersStr, $matches);

    return count($matches) > 0;
}

function signPolicy($policyStr) {
    $policyObj = json_decode($policyStr, true);

    if (isPolicyValid($policyObj)) {
        $encodedPolicy = base64_encode($policyStr);
        $response = array('policy' => $encodedPolicy, 'signature' => sign($encodedPolicy));
        echo json_encode($response);
    }
    else {
        echo json_encode(array("invalid" => true));
    }
}

function isPolicyValid($policy) {
    global $expectedMaxSize, $expectedBucketName;

    $conditions = $policy["conditions"];
    $bucket = null;
    $parsedMaxSize = null;

    for ($i = 0; $i < count($conditions); ++$i) {
        $condition = $conditions[$i];

        if (isset($condition["bucket"])) {
            $bucket = $condition["bucket"];
        }
        else if (isset($condition[0]) && $condition[0] == "content-length-range") {
            $parsedMaxSize = $condition[2];
        }
    }

    return $bucket == $expectedBucketName && $parsedMaxSize == (string)$expectedMaxSize;
}

function sign($stringToSign) {
    global $clientPrivateKey;

    return base64_encode(hash_hmac(
        'sha1',
        $stringToSign,
        $clientPrivateKey,
        true
    ));
}

// This is not needed if you don't require a callback on upload success.
function verifyFileInS3($includeThumbnail) {
    global $expectedMaxSize;

    $bucket = $_POST["bucket"];
    $key = $_POST["key"];

    // If utilizing CORS, we return a 200 response with the error message in the body
    // to ensure Fine Uploader can parse the error message in IE9 and IE8,
    // since XDomainRequest is used on those browsers for CORS requests.  XDomainRequest
    // does not allow access to the response body for non-success responses.
    if (isset($expectedMaxSize) && getObjectSize($bucket, $key) > $expectedMaxSize) {
        // You can safely uncomment this next line if you are not depending on CORS
        header("HTTP/1.0 500 Internal Server Error");
        deleteObject();
        echo json_encode(array("error" => "File is too big!", "preventRetry" => true));
    }
    else {
        $link = getTempLink($bucket, $key);
        $response = array("tempLink" => $link);

        if ($includeThumbnail) {
            $response["thumbnailUrl"] = $link;
        }

        echo json_encode($response);
    }
}

// Provide a time-bombed public link to the file.
function getTempLink($bucket, $key) {
    $client = getS3Client();
    $url = "{$bucket}/{$key}";
    $request = $client->get($url);

    return $client->createPresignedUrl($request, '+15 minutes');
}

function getObjectSize($bucket, $key) {
    $objInfo = getS3Client()->headObject(array(
        'Bucket' => $bucket,
        'Key' => $key
    ));
    return $objInfo['ContentLength'];
}

// Return true if it's likely that the associated file is natively
// viewable in a browser.  For simplicity, just uses the file extension
// to make this determination, along with an array of extensions that one
// would expect all supported browsers are able to render natively.
function isFileViewableImage($filename) {
    $ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));
    $viewableExtensions = array("jpeg", "jpg", "gif", "png");

    return in_array($ext, $viewableExtensions);
}

// Returns true if we should attempt to include a link
// to a thumbnail in the uploadSuccess response.  In its simplest form
// (which is our goal here - keep it simple) we only include a link to
// a viewable image and only if the browser is not capable of generating a client-side preview.
function shouldIncludeThumbnail() {
    $filename = $_POST["name"];
    $isPreviewCapable = $_POST["isBrowserPreviewCapable"] == "true";
    $isFileViewableImage = isFileViewableImage($filename);

    return !$isPreviewCapable && $isFileViewableImage;
}
?>

AWS Lambda function: http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-create-function.html

// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm')
            .subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH  = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client 
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
    var srcBucket = event.Records[0].s3.bucket.name;
    var srcKey    = event.Records[0].s3.object.key;
    var dstBucket = srcBucket + "resized";
    var dstKey    = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        console.error("Destination bucket must not match source bucket.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        console.error('unable to infer image type for key ' + srcKey);
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        console.log('skipping non-image ' + srcKey);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                    Bucket: srcBucket,
                    Key: srcKey
                },
                next);
            },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width  = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                    Bucket: dstBucket,
                    Key: dstKey,
                    Body: data,
                    ContentType: contentType
                },
                next);
            }
        ], function (err) {
            if (err) {
                console.error(
                    'Unable to resize ' + srcBucket + '/' + srcKey +
                    ' and upload to ' + dstBucket + '/' + dstKey +
                    ' due to an error: ' + err
                );
            } else {
                console.log(
                    'Successfully resized ' + srcBucket + '/' + srcKey +
                    ' and uploaded to ' + dstBucket + '/' + dstKey
                );
            }

            context.done();
        }
    );
};
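
A side note on this Lambda example: it compares the inferred extension against the lowercase strings "jpg" and "png", so a key such as random432.JPG is skipped as a "non-image". A minimal sketch of a case-insensitive version of that check (an illustrative helper, not part of the AWS walkthrough) would be:

// Sketch only: same extension inference as the walkthrough above, but compared
// case-insensitively so that keys like "random432.JPG" are not skipped.
function isResizableImage(srcKey) {
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        return false;                               // no extension at all
    }
    var imageType = typeMatch[1].toLowerCase();     // "JPG" -> "jpg"
    return imageType === "jpg" || imageType === "png";
}

console.log(isResizableImage("random432.JPG"));     // true
console.log(isResizableImage("notes.txt"));         // false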

UPDATE

I tested with:

$('#fineuploader-s3').fineUploaderS3({
    signature: {
        endpoint: url + "path"
    },
    formatFileName: function(filename) {
        filename.toLowerCase();     
        return filename;
    }

});

But the uploaded file still has an uppercase extension in my bucket, and I also get the uppercase file extension in:

    uploadSuccess: {
        endpoint: url + "path"
    },

I want the file extension to be lowercase before the upload.

UPDATE 2

I tried the code below, but the problem remains. If I upload image.JPG, the image is saved on Amazon S3 as random432.JPG instead of random432.jpg (with a lowercase extension, which is how I want it saved).

    formatFileName: function(filename) {
        return filename.toLowerCase();
    }

There are no errors in the Chrome console.
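
To see which name Fine Uploader is actually going to use (and therefore which extension ends up in the S3 key), the upload event can be logged. A quick sketch, assuming the jQuery wrapper exposes the onUpload callback as an 'upload' event with (id, name) arguments:

$('#fineuploader-s3').on('upload', function(event, id, name) {
    // If "name" still ends in ".JPG" at this point, that uppercase extension
    // is what ends up in the S3 key (e.g. random432.JPG).
    console.log('about to upload file', id, 'as', name);
});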

1 answer:

Answer 0: (score: 0)

The correct way to change a file name in Fine Uploader is to use the setName API method inside a submit event handler.

For example, to ensure all file extensions are lowercase:

var uploader = new qq.FineUploader({
   /* other required config options left out for brevity */

   callbacks: {
      onSubmit: function(id, name) {
         var extension = qq.getExtension(name);
         if (extension != null) {
            var newName = name.replace(new RegExp('\\.' + extension + '$'), '.' + extension.toLowerCase());
            this.setName(id, newName);
         }
      }
   }
});
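
Since the question uses the jQuery S3 plugin, roughly the same thing can be expressed against that wrapper. The sketch below assumes the wrapper exposes the onSubmit callback as a 'submit' event and API methods such as setName by name, and keeps the question's url + "path" placeholder for the real signature endpoint:

$('#fineuploader-s3').fineUploaderS3({
    signature: {
        endpoint: url + "path"
    },
    uploadSuccess: {
        endpoint: url + "path"
    }
})
.on('submit', function(event, id, name) {
    // Rewrite e.g. "image.JPG" as "image.jpg" before Fine Uploader builds and signs the key.
    var extension = qq.getExtension(name);
    if (extension != null) {
        var newName = name.replace(new RegExp('\\.' + extension + '$'), '.' + extension.toLowerCase());
        $(this).fineUploaderS3('setName', id, newName);
    }
});

Note that formatFileName only affects the name displayed in the uploader's UI, which is why the earlier attempts left the S3 key untouched; setName changes the name Fine Uploader actually uses when it builds the key.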