Net::SSH::AuthenticationFailed: root when deploying a Rails app to EC2 with Rubber

Asked: 2013-03-26 06:43:17

Tags: ruby-on-rails amazon-web-services amazon-ec2 capistrano

I'm following this Railscast tutorial on how to deploy a Rails app to EC2:

http://railscasts.com/episodes/347-rubber-and-amazon-ec2

I've done a few things along the way, and now whenever I try to deploy I get this error:

connection failed for: production.foo.com (Net::SSH::AuthenticationFailed: root)

It's a pretty vague error and it seems to be Mac-specific. Another user following the tutorial hit the same error:

http://railscasts.com/episodes/347-rubber-and-amazon-ec2?view=comments#comment_158643

This person ran into something similar:

https://github.com/rubber/rubber/issues/182

I've gone through every blog post I could find on this issue, but nothing has worked. How would you go about solving this?

UPDATE

Here is the full verbose output I get when I try to connect over SSH:

➜  HN_Notifier_Web git:(master) ✗ ssh -vvvv -i gsg-keypair.pub ubuntu@ec2-54-242-109-133.compute-1.amazonaws.com
OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
Warning: Identity file gsg-keypair.pub not accessible: No such file or directory.
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 53: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to ec2-54-242-109-133.compute-1.amazonaws.com [54.225.178.242] port 22.
debug1: Connection established.
debug3: Incorrect RSA1 identifier
debug3: Could not load "/Users/holgersindbaek/.ssh/id_rsa" as a RSA1 public key
debug1: identity file /Users/holgersindbaek/.ssh/id_rsa type 1
debug1: identity file /Users/holgersindbaek/.ssh/id_rsa-cert type -1
debug1: identity file /Users/holgersindbaek/.ssh/id_dsa type -1
debug1: identity file /Users/holgersindbaek/.ssh/id_dsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9
debug2: fd 3 setting O_NONBLOCK
debug3: load_hostkeys: loading entries for host "ec2-54-242-109-133.compute-1.amazonaws.com" from file "/Users/holgersindbaek/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/holgersindbaek/.ssh/known_hosts:16
debug3: load_hostkeys: loaded 1 keys
debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa,ssh-dss-cert-v01@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client aes128-ctr hmac-md5 none
debug2: mac_setup: found hmac-md5
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 126/256
debug2: bits set: 499/1024
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 0c:2f:59:00:c6:ee:26:3f:eb:e5:aa:da:e8:33:dd:a9
debug3: load_hostkeys: loading entries for host "ec2-54-242-109-133.compute-1.amazonaws.com" from file "/Users/holgersindbaek/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/holgersindbaek/.ssh/known_hosts:16
debug3: load_hostkeys: loaded 1 keys
debug3: load_hostkeys: loading entries for host "54.225.178.242" from file "/Users/holgersindbaek/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/holgersindbaek/.ssh/known_hosts:7
debug3: load_hostkeys: loaded 1 keys
debug1: Host 'ec2-54-242-109-133.compute-1.amazonaws.com' is known and matches the RSA host key.
debug1: Found key in /Users/holgersindbaek/.ssh/known_hosts:16
debug2: bits set: 525/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /Users/holgersindbaek/.ssh/id_rsa (0x7f825141d860)
debug2: key: /Users/holgersindbaek/.ec2/gsg-keypair (0x7f825141e700)
debug2: key: /Users/holgersindbaek/.ssh/id_dsa (0x0)
debug1: Authentications that can continue: publickey
debug3: start over, passed a different list publickey
debug3: preferred publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/holgersindbaek/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
debug1: Offering RSA public key: /Users/holgersindbaek/.ec2/gsg-keypair
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
debug1: Trying private key: /Users/holgersindbaek/.ssh/id_dsa
debug3: no such identity: /Users/holgersindbaek/.ssh/id_dsa
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey).

UPDATE

Here is my rubber.yml:

# REQUIRED: The name of your application
app_name: your_app_name

# REQUIRED: The system user to run your app servers as
app_user: app

# REQUIRED: Notification emails (e.g. monit) get sent to this address
#
admin_email: "root@#{full_host}"

# OPTIONAL: If not set, you won't be able to access web_tools
# server (graphite, graylog, monit status, haproxy status, etc)
# web_tools_user: admin
# web_tools_password: sekret

# REQUIRED: The timezone the server should be in
timezone: US/Eastern

# REQUIRED: the domain all the instances should be associated with
#
domain: foo.com

# OPTIONAL: See rubber-dns.yml for dns configuration
# This lets rubber update a dynamic dns service with the instance alias
# and ip when they are created.  It also allows setting up arbitrary
# dns records (CNAME, MX, Round Robin DNS, etc)

# OPTIONAL: Additional rubber file to pull config from if it exists.  This file will
# also be pushed to remote host at Rubber.root/config/rubber/rubber-secret.yml
#
# rubber_secret: "#{File.expand_path('~') + '/.ec2' + (Rubber.env == 'production' ? '' : '_dev') + '/rubber-secret.yml' rescue ''}"

# OPTIONAL: Encryption key that was used to obfuscate the contents of rubber-secret.yml with "rubber util:obfuscation" 
# Not that much better when stored in here, but you could use a ruby snippet in here to fetch it from a key server or something
#
# rubber_secret_key: "XXXyyy=="

# REQUIRED All known cloud providers with the settings needed to configure them
# There's only one working cloud provider right now - Amazon Web Services
# To implement another, clone lib/rubber/cloud/aws.rb or make the fog provider 
# work in a generic fashion
#
cloud_providers:
  aws:
    # REQUIRED The AWS region that you want to use.
    # 
    # Options include
    # us-east-1
    # eu-west-1
    # ap-northeast-1
    # ap-southeast-1
    # ap-southeast-2
    #
    region: us-east-1

    # REQUIRED The amazon keys and account ID (digits only, no dashes) used to access the AWS API
    #
    access_key: XXX
    secret_access_key: YYY
    account: 'ZZZ'

    # REQUIRED:  The name of the amazon keypair and location of its private key
    #
    # NOTE: for some reason Capistrano requires you to have both the public and
    # the private key in the same folder, the public key should have the
    # extension ".pub".  The easiest way to get your hand on this is to create the
    # public key from the private key: ssh-keygen -y -f gsg-keypair > gsg-keypair.pub
    #
    key_name: gsg-keypair
    key_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/*' + cloud_providers.aws.key_name].first}"

    # OPTIONAL: Needed for bundling a running instance using rubber:bundle
    #
    # pk_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/pk-*'].first}"
    # cert_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/cert-*'].first}"
    # image_bucket: "#{app_name}-images"

    # OPTIONAL: Needed for backing up database to s3
    # backup_bucket: "#{app_name}-backups"

    # REQUIRED: the ami and instance type for creating instances
    # The Ubuntu images at http://alestic.com/ work well
    # Ubuntu 12.04 Precise instance-store 64-bit: ami-eafa5883
    #
    # m1.small or m1.large or m1.xlarge
    image_type: c1.medium
    image_id: ami-b6089bdf

    # OPTIONAL: EC2 spot instance request support.
    #
    # Enables the creation of spot instance requests.  Rubber will wait synchronously until the request is fulfilled,
    # at which point it will begin initializing the instance, unless spot_instance_request_timeout is set.
    # spot_instance: true
    #
    # The maximum price you would like to pay for your spot instance.
    # spot_price: "0.085"
    #
    # If a spot instance request can't be fulfilled in 3 minutes, fallback to on-demand instance creation.  If not set,
    # the default is infinite.
    # spot_instance_request_timeout: 180

  # Use an alternate cloud provider supported by fog.  This doesn't fully work
  # yet due to differences in providers within fog, but gives you a starting
  # point for contributing a new provider to rubber.  See rubber/lib/rubber/cloud(.rb)
  fog:
    credentials:
      provider: rackspace
      rackspace_api_key: 'XXX'
      rackspace_username: 'YYY'
    image_type: 123
    image_id: 123

# REQUIRED the cloud provider to use
#
cloud_provider: aws

# OPTIONAL: Where to store instance data.
# 
# Allowed forms are:
# filesystem: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"
# cloud storage (s3): "storage:#{cloud_provider.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}.yml"
# cloud table (simpledb): "table:RubberInstances_#{app_name}_#{Rubber.env}"
#
# If you need to port between forms, load the rails console then:
# Rubber.instances.save(location)
# where location is one of the allowed forms for this variable
#
# instance_storage: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"

# OPTIONAL: Where to store a backup of the instance data
#
# This is most useful when using a remote store in case you end up
# wiping the single copy of your instance data.  When using the file
# store, the instance file is typically under version control with
# your project code, so that provides some safety.
#
# instance_storage_backup: "storage:#{cloud_providers.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}-#{Time.now.strftime('%Y%m%d-%H%M%S')}.yml"

# OPTIONAL: Default ports for security groups
web_port: 80
web_ssl_port: 443
web_tools_port: 8080
web_tools_ssl_port: 8443

# OPTIONAL: Define security groups
# Each security group is a name associated with a sequence of maps where the
# keys are the parameters to the ec2 AuthorizeSecurityGroupIngress API
# source_security_group_name, source_security_group_owner_id
# ip_protocol, from_port, to_port, cidr_ip
# If you want to use a source_group outside of this project, add "external_group: true"
# to prevent group_isolation from mangling its name, e.g.  to give access to graphite
# server to other projects
#
# security_groups:
#   graphite_server:
#     description: The graphite_server security group to allow projects to send graphite data
#     rules:
#       - source_group_name: yourappname_production_collectd
#         source_group_account: 123456
#         external_group: true
#         protocol: tcp
#         from_port: "#{graphite_server_port}"
#         to_port: "#{graphite_server_port}"
#
security_groups:
  default:
    description: The default security group
    rules:
      - source_group_name: default
        source_group_account: "#{cloud_providers.aws.account}"
      - protocol: tcp
        from_port: 22
        to_port: 22
        source_ips: [0.0.0.0/0]
  web:
    description: "To open up port #{web_port}/#{web_ssl_port} for http server on web role"
    rules:
      - protocol: tcp
        from_port: "#{web_port}"
        to_port: "#{web_port}"
        source_ips: [0.0.0.0/0]
      - protocol: tcp
        from_port: "#{web_ssl_port}"
        to_port: "#{web_ssl_port}"
        source_ips: [0.0.0.0/0]
  web_tools:
    description: "To open up port #{web_tools_port}/#{web_tools_ssl_port} for internal/tools http server"
    rules:
      - protocol: tcp
        from_port: "#{web_tools_port}"
        to_port: "#{web_tools_port}"
        source_ips: [0.0.0.0/0]
      - protocol: tcp
        from_port: "#{web_tools_ssl_port}"
        to_port: "#{web_tools_ssl_port}"
        source_ips: [0.0.0.0/0]

# OPTIONAL: The default security groups to create instances with
assigned_security_groups: [default]
roles:
  web:
    assigned_security_groups: [web]
  web_tools:
    assigned_security_groups: [web_tools]

# OPTIONAL: Automatically create security groups for each host and role
# EC2 doesn't allow one to change what groups an instance belongs to after
# creation, so its good to have some empty ones predefined.
auto_security_groups: true

# OPTIONAL: Automatically isolate security groups for each appname/environment
# by mangling their names to be appname_env_groupname
# This makes it safer to have staging and production coexist on the same EC2
# account, or even multiple apps
isolate_security_groups: true

# OPTIONAL: Prompts one to sync security group rules when the ones in amazon
# differ from those in rubber
prompt_for_security_group_sync: true

# OPTIONAL: The packages to install on all instances
# You can install a specific version of a package by using a sub-array of pkg, version
# For example, packages: [[rake, 0.7.1], irb]
packages: [postfix, build-essential, git-core, ec2-ami-tools, libxslt-dev, ntp]

# OPTIONAL: gem sources to setup for rubygems
# gemsources: ["https://rubygems.org"]

# OPTIONAL: The gems to install on all instances
# You can install a specific version of a gem by using a sub-array of gem, version
# For example, gem: [[rails, 2.2.2], open4, aws-s3]
gems: [open4, aws-s3, bundler, [rubber, "#{Rubber.version}"]]

# OPTIONAL: A string prepended to shell command strings that cause multi
# statement shell commands to fail fast.  You may need to comment this out
# on some platforms, but it works for me on linux/osx with a bash shell
#
stop_on_error_cmd: "function error_exit { exit 99; }; trap error_exit ERR"

# OPTIONAL: The default set of roles to use when creating a staging instance
# with "cap rubber:create_staging".  By default this uses all the known roles,
# excluding slave roles, but this is not always desired for staging, so you can
# specify a different set here
#
# staging_roles: "web,app,db:primary=true"


# OPTIONAL: Lets one assign amazon elastic IPs (static IPs) to your instances
#           You should typically set this on the role/host level rather than
#           globally , unless you really do want all instances to have a
#           static IP
#
# use_static_ip: true

# OPTIONAL: Specifies an instance to be created in the given availability zone
#           Availability zones are specified by amazon to be somewhat isolated
#           from each other so that hardware failures in one zone shouldn't
#           affect instances in another.  As such, it is good to specify these
#           for instances that need to be redundant to reduce your chance of
#           downtime. You should typically set this on the role/host level
#           rather than globally.  Use cap rubber:describe_zones to see the list
#           of zones
# availability_zone: us-east-1a

# OPTIONAL: If you want to use Elastic Block Store (EBS) persistent
# volumes, add them to host specific overrides and they will get created
# and assigned to the instance.  On initial creation, the volume will get
# attached _and_ formatted, but if your host disappears and you recreate
# it, the volume will only get remounted thereby preserving your data
#
# hosts:
#   my_host:
#     availability_zone: us-east-1a
#     volumes:
#       - size: 100 # size of vol in GBs
#         zone: us-east-1a # zone to create volume in, needs to match host's zone
#         device: /dev/sdh # OS device to attach volume to
#         mount: /mnt/mysql # The directory to mount this volume to
#         filesystem: ext3 # the filesystem to create on volume
#       - size: 10 # size of vol in GBs
#         zone: us-east-1a # zone to create volume in, needs to match host's zone
#         device: /dev/sdi # OS device to attach volume to
#         mount: /mnt/logs # The directory to mount this volume to
#         filesystem: ext3 # the filesystem to create on volume
#
#       # volumes without mount/filesystem can be used in raid arrays
#
#       - size: 50 # size of vol in GBs
#         zone: us-east-1a # zone to create volume in, needs to match host's zone
#         device: /dev/sdx # OS device to attach volume to
#       - size: 50 # size of vol in GBs
#         zone: us-east-1a # zone to create volume in, needs to match host's zone
#         device: /dev/sdy # OS device to attach volume to
#
#    # Use some ephemeral volumes for raid array
#    local_volumes:
#      - partition_device: /dev/sdb
#        zero: false # zeros out disk for improved performance
#      - partition_device: /dev/sdc
#        zero: false # zeros out disk for improved performance
#
#     # for raid array, you'll need to add mdadm to packages.  Likewise,
#     # xfsprogs is needed for xfs filesystem support
#     #
#     packages: [xfsprogs, mdadm]
#     raid_volumes:
#       - device: /dev/md0 # OS device to to create raid array on
#         mount: /mnt/fast # The directory to mount this array to
#         mount_opts: 'nobootwait' # Recent Ubuntu versions require this flag or SSH will not start on reboot
#         filesystem: xfs # the filesystem to create on array
#         filesystem_opts: -f # the filesystem opts in mkfs
#         raid_level: 0 # the raid level to use for the array
#         # if you're using Ubuntu 11.x or later (Natty, Oneiric, Precise, etc)
#         # you will want to specify the source devices in their /dev/xvd format
#         # see https://bugs.launchpad.net/ubuntu/+source/linux/+bug/684875 for
#         # more information.
#         # NOTE: Only make this change for raid source_devices, NOT generic
#         # volume commands above.
#         source_devices: [/dev/sdx, /dev/sdy] # the source EBS devices we are creating raid array from (Ubuntu Lucid or older)
#         source_devices: [/dev/xvdx, /dev/xvdy] # the source EBS devices we are creating raid array from (Ubuntu Natty or newer)
#
#     # for LVM volumes, you'll need to add lvm2 to packages.  Likewise,
#     # xfsprogs is needed for xfs filesystem support
#     packages: [xfsprogs, lvm2]
#     lvm_volume_groups:
#       - name: vg # The volume group name
#         physical_volumes: [/dev/sdx, /dev/sdy] # Devices used for LVM group (you can use just one, but you can't stripe then)
#         extent_size: 32 # Size of the volume extent in MB
#         volumes:
#           - name: lv # Name of the logical volume
#             size: 999.9 # Size of volume in GB (slightly less than sum of all physical volumes because LVM reserves some space)
#             stripes: 2 # Count of stripes for volume
#             filesystem: xfs # The filesystem to create on the logical volume
#             filesystem_opts: -f # the filesystem opts in mkfs
#             mount: /mnt/large_work_dir # The directory to mount this LVM volume to

# OPTIONAL: You can also define your own variables here for use when
# transforming config files, and they will be available in your config
# templates as  <%%= rubber_env.var_name %>
#
# var_name: var_value

# All variables can also be overridden on the role, environment and/or host level by creating
# a sub level to the config under roles, environments and hosts.  The precedence is host, environment, role
# e.g. to install mysql only on db role, and awstats only on web01:

# OPTIONAL: Role specific overrides
# roles:
#   somerole:
#     packages: []
#   somerole2:
#     myconfig: someval

# OPTIONAL: Environment specific overrides
# environments:
#   staging:
#     myconfig: otherval
#   production:
#     myconfig: val

# OPTIONAL: Host specific overrides
# hosts:
#   somehost:
#     packages: []

2 Answers:

Answer 0 (score: 3)

Rubber expects to find your EC2 credentials in the YAML file config/rubber/rubber.yml:

access_key: xxx
secret_access_key: blah
account: 123

To find these values:

  • Log in to AWS
  • Click your user name in the top-right corner of the screen and choose Security Credentials
  • Your account number is in the top-right corner of the page that opens
  • Your access_key is in the middle of that page, and there is a link there to reveal your secret_access_key

Rubber uses these credentials to provision your AWS infrastructure.
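
If you want to sanity-check those values before running any cap rubber tasks, a minimal sketch using the fog gem (which Rubber already pulls in) might look like the following — the access key, secret key and region are placeholders for whatever you put in rubber.yml:

require 'fog'

# Placeholder credentials -- substitute the access_key / secret_access_key
# and region from config/rubber/rubber.yml.
compute = Fog::Compute.new(
  :provider              => 'AWS',
  :region                => 'us-east-1',
  :aws_access_key_id     => 'XXX',
  :aws_secret_access_key => 'YYY'
)

# Valid credentials will list your EC2 key pairs; the key_name from
# rubber.yml (e.g. gsg-keypair) should show up in this list.
puts compute.key_pairs.map(&:name)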

When connecting to the actual servers it needs your private RSA key. You have to tell Rubber the name of the key pair (as it appears in the EC2 dashboard) and the location of its key file, again in config/rubber/rubber.yml:

key_name: my-keypair
key_file: ~/.ec2/myec2.pem
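
A couple of things are worth double-checking on the client side (the paths below are examples — adjust them to wherever your key actually lives): the private key must be readable only by you, Rubber/Capistrano wants the matching .pub sitting next to it, and when testing with plain ssh you have to pass the private key, not the .pub (the verbose log in the question shows gsg-keypair.pub being handed to -i, which ssh couldn't even find):

chmod 600 ~/.ec2/gsg-keypair                                   # ssh ignores keys with loose permissions
ssh-keygen -y -f ~/.ec2/gsg-keypair > ~/.ec2/gsg-keypair.pub   # regenerate the .pub next to the private key
ssh -i ~/.ec2/gsg-keypair ubuntu@ec2-54-242-109-133.compute-1.amazonaws.com   # test with the private key, not the .pub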

Answer 1 (score: 2)

I'm not sure what I did wrong or what exactly I ended up doing here. There seem to be a few rough edges in Rubber.

In the end I created a new application and deployed it the way it's done in the second half of the Railscast (with a single instance). After that I logged in to AWS and clicked Instances -> Actions -> Connect. There you can see the "correct" way to connect to your instance.
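
For reference, that Connect dialog typically shows a pair of commands along these lines (the key file name and hostname here are examples — use the ones the console shows for your own instance):

chmod 400 gsg-keypair.pem                                      # the console insists on strict permissions
ssh -i gsg-keypair.pem ubuntu@ec2-54-242-109-133.compute-1.amazonaws.com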