Improving a backup script to reduce memory usage

Date: 2016-07-29 16:53:53

Tags: elixir phoenix-framework

I'm running into memory problems now that my database is approaching 150 MB in size (the Erlang process crashes).

Below is my current backup script. Any suggestions on how to improve it so that the whole backup isn't loaded into memory, but streamed directly to S3 instead?

defmodule Backup do
  require Logger
  alias MyApp.{Repo, BackupUploader, S3, BackupRequest}

  @database System.get_env("DATABASE_URL")
  @bucket Application.get_env(:arc, :bucket)
  @folder "backups"

  def start do
    Logger.info "*** Initiating database backup ***"
    backup = %BackupRequest{}

    backup
    |> dump_database
    |> upload_to_s3
  end

  defp dump_database(%BackupRequest{} = backup) do
    Logger.info "*** Dumping database ***"
    command = "pg_dump"
    args = [@database]
    # System.cmd/2 buffers the entire dump output in memory before returning
    {data, 0} = System.cmd(command, args)

    %{backup | data: data, status: "dumped"}
  end

  defp upload_to_s3(%BackupRequest{data: data} = backup) do
    Logger.info "*** Uploading to S3 bucket ***"
    key = get_s3_key()
    ExAws.S3.put_object!(@bucket, key, data)

    Logger.info "*** Backup complete ***"
  end

  # Helpers

  defp get_s3_key do
    {{year, month, day}, {hour, minute, _seconds}} =
      :os.timestamp() |> :calendar.now_to_datetime()

    hash = SecureRandom.urlsafe_base64(32)
    date = "#{day}-#{month}-#{year}-#{hour}:#{minute}"

    @folder <> "/#{date}_#{hash}_#{Mix.env()}"
  end

end

1 Answer:

Answer 0 (score: 0):

You can use S3's multipart upload feature; ex_aws supports this. https://hexdocs.pm/ex_aws/1.0.0-beta1/ExAws.S3.html#upload_part/6

Basically, just read the dump in chunks and upload them as you go, as sketched below. Hope that helps.
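For illustration, here is a minimal sketch of that approach. It assumes a later ex_aws release in which ExAws.S3.upload/4 wraps the multipart API (initiate, upload_part, complete) and accepts any enumerable of binary chunks. The module name Backup.Streaming, the temp-file location, and the 5 MB chunk size are choices made for this example, not part of the original script:

defmodule Backup.Streaming do
  require Logger

  # Sketch: pg_dump writes to a temporary file, which is then streamed
  # to S3 in 5 MB chunks via ex_aws's multipart upload.
  # S3 requires every multipart part except the last to be >= 5 MB.

  @bucket Application.get_env(:arc, :bucket)
  @chunk_size 5 * 1024 * 1024

  def start(database_url, key) do
    Logger.info "*** Initiating streaming database backup ***"
    dump_path = Path.join(System.tmp_dir!(), "backup.dump")

    # -f makes pg_dump write to a file, so the dump never passes
    # through the BEAM as one large binary
    {_output, 0} = System.cmd("pg_dump", ["-f", dump_path, database_url])

    dump_path
    |> File.stream!([], @chunk_size)   # lazy stream of 5 MB binaries
    |> ExAws.S3.upload(@bucket, key)   # builds the multipart upload op
    |> ExAws.request!()

    File.rm!(dump_path)
    Logger.info "*** Backup complete ***"
  end
end

With this shape, memory usage stays near the chunk size no matter how large the dump grows, since only one part is held in memory at a time. If writing a temp file is undesirable, the same stream could instead be fed from a port reading pg_dump's stdout, but the file-based version is the simplest to get right.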