How to upload files/folders to Pydio Cells using the Pydio Cells API

Date: 2019-02-14 23:06:07

Tags: rest api pydio

2 Answers:

Answer 0 (score: 1)

Cells exposes an S3 API for interacting with data. An upload or download with curl breaks down into the following steps:

1. Obtain a JWT
2. Upload/download the file through the S3 gateway

You can use the following bash scripts:

./cells-download.sh CELL_IP:PORT USER PASSWORD CLIENT_SECRET FILENAME WORKSPACE_SLUG/PATH NEW_NAME_AFTER_DOWNLOAD

./cells-upload.sh CELL_IP:PORT USER PASSWORD CLIENT_SECRET ABS_PATH_FILE NEW_NAME WORKSPACE_SLUG/PATH

CLIENT_SECRET can be found in /home/pydio/.config/pydio/cells/pydio.json, under dex >> staticClients >> Secret.
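
Step 1 can be tested on its own before running the full scripts. A minimal sketch (HOST, USER, PASSWORD and CLIENT_SECRET are placeholders to replace with your own values; it mirrors the JWT step of the scripts below):

#!/bin/bash
# Step 1 only: obtain a JWT (id_token) from the Cells dex token endpoint.
# HOST, USER, PASSWORD and CLIENT_SECRET are placeholders, not real values.
HOST="CELL_IP:PORT"
USER="admin"
PASSWORD="changeme"
CLIENT_SECRET="value of dex >> staticClients >> Secret"

# Basic auth for the static dex client "cells-front" (echo -n avoids a trailing newline)
AUTH=$(echo -n "cells-front:$CLIENT_SECRET" | base64)

JWT=$(curl -s --request POST \
  --url "http://$HOST/auth/dex/token" \
  --header "Authorization: Basic $AUTH" \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data "grant_type=password&username=$USER&password=$PASSWORD&scope=email%20profile%20pydio%20offline&nonce=123abc" \
  | jq -r '.id_token')

echo "id_token: $JWT"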

cells-download.sh
=============================

#!/bin/bash

HOST=$1 
CELLS_FRONT="cells-front"
CELLS_FRONT_PWD=$4
ADMIN_NAME=$2
ADMIN_PWD=$3
FILE=$5
DEST=$6
NEW_NAME=$7


# Basic auth value for the static dex client (echo -n avoids encoding a trailing newline)
AUTH_STRING=$(echo -n "$CELLS_FRONT:$CELLS_FRONT_PWD" | base64)

JWT=$(curl -s --request POST \
  --url http://$HOST/auth/dex/token \
  --header "Authorization: Basic $AUTH_STRING" \
  --header 'Cache-Control: no-cache' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data "grant_type=password&username=$ADMIN_NAME&password=$ADMIN_PWD&scope=email%20profile%20pydio%20offline&nonce=123abcsfsdfdd" | jq '.id_token')

JWT=$(echo $JWT | sed "s/\"//g")


# The SigV4 request-signing code below is adapted from a script by Tony Burns:
#
# Copyright 2014 Tony Burns
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


# Download a file from the Cells S3 gateway (SigV4-signed GET request).

file="${5}"
bucket="io"
prefix="io/$DEST"
region="us-east-1"
timestamp=$(date -u "+%Y-%m-%d %H:%M:%S")
content_type="application/octet-stream"
#signed_headers="date;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
signed_headers="host;x-amz-content-sha256;x-amz-date"




if [[ $(uname) == "Darwin" ]]; then
  iso_timestamp=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%dT%H%M%SZ")
  date_scope=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%d")
  date_header=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%a, %d %h %Y %T %Z")
else
  iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
  date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
  date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
fi

payload_hash() {
  # SHA-256 of an empty payload (the GET request sends no body)
  echo "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}

canonical_request() {
  echo "GET"
  echo "/${prefix}/${file}"
  echo ""
  echo "host:$HOST"
  echo "x-amz-content-sha256:$(payload_hash)"
  echo "x-amz-date:${iso_timestamp}"
  echo ""
  echo "${signed_headers}"
  printf "$(payload_hash)"
}

canonical_request_hash() {
  local output=$(canonical_request | shasum -a 256)
  echo "${output%% *}"
}

string_to_sign() {
  echo "AWS4-HMAC-SHA256"
  echo "${iso_timestamp}"
  echo "${date_scope}/${region}/s3/aws4_request"
  printf "$(canonical_request_hash)"
}

AWS_SECRET_ACCESS_KEY="gatewaysecret"

signature_key() {
  local secret=$(printf "AWS4${AWS_SECRET_ACCESS_KEY}" | hex_key)
  local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
  local region_key=$(printf ${region} | hmac_sha256 "${date_key}" | hex_key)
  local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
  printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}

hex_key() {
  xxd -p -c 256
}

hmac_sha256() {
  local hexkey=$1
  openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}

signature() {
  string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}


curl   \
  -H "Authorization: AWS4-HMAC-SHA256 Credential=${JWT}/${date_scope}/${region}/s3/aws4_request,SignedHeaders=${signed_headers},Signature=$(signature)" \
  -H "Host: $HOST" \
  -H "Date: ${date_header}" \
  -H "x-amz-acl: public-read" \
  -H 'Content-Type: application/octet-stream' \
  -H "x-amz-content-sha256: $(payload_hash)" \
  -H "x-amz-date: ${iso_timestamp}" \
  "http://$HOST/${prefix}/${file}" --output $NEW_NAME

=============================
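
An example invocation with made-up values (host, credentials, file and workspace names are purely illustrative):

./cells-download.sh 192.168.1.20:8080 admin MyAdminPwd MyClientSecret report.pdf personal-files/docs report-copy.pdf
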
cells-upload.sh
=============================

#!/bin/bash

HOST=$1 
CELLS_FRONT="cells-front"
CELLS_FRONT_PWD=$4
ADMIN_NAME=$2
ADMIN_PWD=$3
FILE=$5
NEW_NAME=$6
DEST=$7



# Basic auth value for the static dex client (echo -n avoids encoding a trailing newline)
AUTH_STRING=$(echo -n "$CELLS_FRONT:$CELLS_FRONT_PWD" | base64)

JWT=$(curl -s --request POST \
  --url http://$HOST/auth/dex/token \
  --header "Authorization: Basic $AUTH_STRING" \
  --header 'Cache-Control: no-cache' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data "grant_type=password&username=$ADMIN_NAME&password=$ADMIN_PWD&scope=email%20profile%20pydio%20offline&nonce=123abcsfsdfdd" | jq '.id_token')

JWT=$(echo $JWT | sed "s/\"//g")


# The SigV4 request-signing code below is adapted from a script by Tony Burns:
#
# Copyright 2014 Tony Burns
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


# Upload a file to the Cells S3 gateway (SigV4-signed PUT request).

file="${5}"
bucket="io"
prefix="io/$DEST"
region="us-east-1"
timestamp=$(date -u "+%Y-%m-%d %H:%M:%S")
content_type="application/octet-stream"
#signed_headers="date;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
signed_headers="content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date"




if [[ $(uname) == "Darwin" ]]; then
  iso_timestamp=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%dT%H%M%SZ")
  date_scope=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%d")
  date_header=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%a, %d %h %Y %T %Z")
else
  iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
  date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
  date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
fi

payload_hash() {
  # SHA-256 of the file being uploaded
  local output=$(shasum -ba 256 "$file")
  echo "${output%% *}"
}

canonical_request() {
  echo "PUT"
  echo "/${prefix}/${NEW_NAME}"
  echo ""
  echo "content-type:${content_type}"
  echo "host:$HOST"
  echo "x-amz-acl:public-read"
  echo "x-amz-content-sha256:$(payload_hash)"
  echo "x-amz-date:${iso_timestamp}"
  echo ""
  echo "${signed_headers}"
  printf "$(payload_hash)"
}

canonical_request_hash() {
  local output=$(canonical_request | shasum -a 256)
  echo "${output%% *}"
}

string_to_sign() {
  echo "AWS4-HMAC-SHA256"
  echo "${iso_timestamp}"
  echo "${date_scope}/${region}/s3/aws4_request"
  printf "$(canonical_request_hash)"
}

AWS_SECRET_ACCESS_KEY="gatewaysecret"

signature_key() {
  local secret=$(printf "AWS4${AWS_SECRET_ACCESS_KEY}" | hex_key)
  local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
  local region_key=$(printf ${region} | hmac_sha256 "${date_key}" | hex_key)
  local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
  printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}

hex_key() {
  xxd -p -c 256
}

hmac_sha256() {
  local hexkey=$1
  openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}

signature() {
  string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}


curl   \
  -T "${file}" \
  -H "Authorization: AWS4-HMAC-SHA256 Credential=${JWT}/${date_scope}/${region}/s3/aws4_request,SignedHeaders=${signed_headers},Signature=$(signature)" \
  -H "Host: $HOST" \
  -H "Date: ${date_header}" \
  -H "x-amz-acl: public-read" \
  -H 'Content-Type: application/octet-stream' \
  -H "x-amz-content-sha256: $(payload_hash)" \
  -H "x-amz-date: ${iso_timestamp}" \
  "http://$HOST/${prefix}/${NEW_NAME}"

Answer 1 (score: 0)

It turns out I was wrong in thinking that the Pydio Cells s3 bucket requires an AWS account. Pydio Cells uses the same code or syntax as AWS buckets (not 100% sure which). When working against the Pydio endpoint https://demo.pydio.com/io, the file system can be accessed as an s3 bucket; io is the s3 bucket.
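
Since io behaves like a regular S3 bucket, generic S3 tooling pointed at the Cells endpoint should work in principle. A hedged sketch with the standard AWS CLI (the credential mapping mirrors the Postman settings described next and has not been verified here):

# Assumption: the Cells S3 gateway accepts standard SigV4 requests with the
# OIDC id_token as the access key and "gatewaysecret" as the secret key.
export AWS_ACCESS_KEY_ID="<id_token from the OIDC login>"
export AWS_SECRET_ACCESS_KEY="gatewaysecret"

# io is the bucket; workspace slugs are prefixes inside it (paths are illustrative).
aws s3 ls s3://io/personal-files/ --endpoint-url https://demo.pydio.com
aws s3 cp Query.sql s3://io/personal-files/Query.sql --endpoint-url https://demo.pydio.com
aws s3 cp s3://io/personal-files/Query.sql ./Query.sql --endpoint-url https://demo.pydio.com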

To test this, I used Postman to first PUT a file named "Query.sql" with some content into the "Personal Files" workspace.

Authorization: AWS Signature
AccessKey: the token returned when authenticating with OpenID Connect (the "id_token" included in the response body).
SecretKey: the demo uses the key "gatewaysecret".

Advanced options:
AWS Region: the default is "us-east-1". I did not have to enter anything here, and it still worked when I set it to "us-west-1".
Service Name: "s3" (I found this to be required).
Session Token: I left this blank.

Use PUT to create the file. Use GET to download the file.

  

PUT https://demo.pydio.com/io/personal-files/Query.sql

The example below shows how to first create the file and then fetch its contents / download it.

(Screenshot: Postman PUT request to the URL above)
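
For reference, the same PUT can be issued from the command line with curl's built-in SigV4 signing (a sketch assuming curl 7.75 or newer; whether the Cells gateway accepts it exactly like the Postman request is an assumption based on the settings above):

# ID_TOKEN is the OIDC id_token used as the access key; "gatewaysecret" is the demo secret key.
ID_TOKEN="<id_token from the OIDC login>"

curl --aws-sigv4 "aws:amz:us-east-1:s3" \
  --user "$ID_TOKEN:gatewaysecret" \
  -T Query.sql \
  "https://demo.pydio.com/io/personal-files/Query.sql"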

For my GET example, I manually placed a file named Query.sql on the demo.pydio.com server in the "Personal Files" workspace. This example shows how to access and/or download that manually placed Query.sql file.

  

GET https://demo.pydio.com/io/personal-files/Query.sql

(Screenshot: Postman GET request to the URL above)
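
The matching download with the same hedged curl approach:

curl --aws-sigv4 "aws:amz:us-east-1:s3" \
  --user "$ID_TOKEN:gatewaysecret" \
  -o Query.sql \
  "https://demo.pydio.com/io/personal-files/Query.sql"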