Error creating a simple Azure HDInsight Spark cluster with Pulumi

Date: 2019-05-22 21:17:13

Tags: javascript azure apache-spark hdinsight pulumi

I am trying to create an HDInsight Spark cluster on Azure using the Pulumi JavaScript SDK. I followed Pulumi's tutorial on creating a "hello world" GCP Kubernetes cluster and worked through the JavaScript samples in the Pulumi Examples repo on GitHub, but I have had no luck creating the cluster.

I have tried changing my export statement several times based on the examples in the Pulumi repository, but whenever I run pulumi up in my own Pulumi project I get an "unhandled exception" error and a stack trace. So far I have tried the following export statements:

    //attempt 1
    export const sparkClusterName = sparkCluster.name

    //attempt 2
    export const sparkClusterOutput = sparkCluster.output

    //attempt 3
    export const sparkEndpoint = sparkCluster.httpsEndpoint

I am using Visual Studio Code as my IDE, running everything on a MacBook Pro with macOS Mojave, with Pulumi version 0.17.11 and Node version 12.1.0, and I have both the Azure CLI and the Pulumi CLI tools installed.

My index.js program is as follows:

"use strict";
const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

// Retrieve an Azure Resource Group
const resourceGroup = pulumi.output(azure.core.getResourceGroup({
    name: "MyResourceGroup",
}));
//Create Spark Cluster
const sparkCluster = new azure.hdinsight.SparkCluster("testSparkCluster", {
    clusterVersion: "3.6",
    componentVersion: {
        spark: "2.3",
    },
    gateway: {
        enable: true,
        password: "laDK#21",
        username: "USERname",
    },
    location: resourceGroup.apply(resourceGroup => resourceGroup.location),
    name: "example-hdisparkcluster",
    resourceGroupName: resourceGroup.apply(resourceGroup => resourceGroup.name),
    roles: {
        headNode: {
            password: "AAAlllddck11122$$3",
            username: "USerNameladkfj",
            vmSize: "Standard_A3",
        },
        workerNode: {
            password: "asdlfaDDF143k#@#",
            targetInstanceCount: 3,
            username: "USernaemls",
            vmSize: "Standard_A3",
        },
        zookeeperNode: {
            password: "ASDKLlkjjj##@@323",
            username: "USERname2323",
        },
    },
    storageAccounts: [{
        isDefault: true,
    }],
    tier: "Standard",
});

// Export the spark cluster
export const sparkClusterName = sparkCluster.name;
//export const sparkName = sparkCluster.output
//export const sparkEndpoint = sparkCluster.httpsEndpoint

The expected result was for the Pulumi CLI to show the new stack and the Spark cluster being created (after which I could view the cluster in the Azure Portal). Instead, the following error and stack trace are thrown:

Previewing update (dev):

     Type                 Name                         Plan       Info
 +   pulumi:pulumi:Stack  HDInsight_Spark_Cluster-dev  create     1 error

Diagnostics:
  pulumi:pulumi:Stack (HDInsight_Spark_Cluster-dev):
    error: Running program '/Users/workspace/Pulumi Workspace/Pulumi HDInsight Testing' failed with an unhandled exception:
    /Users/workspace/Pulumi Workspace/Pulumi HDInsight Testing/index.js:50
    export const sparkClusterName = sparkCluster.name;
    ^^^^^^

    SyntaxError: Unexpected token export
        at Module._compile (internal/modules/cjs/loader.js:703:23)
        at Object.Module._extensions..js (internal/modules/cjs/loader.js:770:10)
        at Module.load (internal/modules/cjs/loader.js:628:32)
        at Function.Module._load (internal/modules/cjs/loader.js:555:12)
        at Module.require (internal/modules/cjs/loader.js:666:19)
        at require (internal/modules/cjs/helpers.js:16:16)
        at /Users/workspace/Pulumi Workspace/Pulumi HDInsight Testing/node_modules/@pulumi/pulumi/cmd/run/run.js:195:20
        at Stack.<anonymous> (/Users/workspace/Pulumi Workspace/Pulumi HDInsight Testing/node_modules/@pulumi/pulumi/runtime/stack.js:76:27)
        at Generator.next (<anonymous>)
        at fulfilled (/Users/workspace/Pulumi Workspace/Pulumi HDInsight Testing/node_modules/@pulumi/pulumi/runtime/stack.js:17:58)

I am very new to Azure, and since most of my experience is with Java and C/C++, I have only a basic understanding of JavaScript. The purpose of this program is simply to help me use and learn HDInsight through Pulumi.

1 Answer:

Answer 0 (score: 0)

You are getting a syntax error because exports in JavaScript work differently than in TypeScript. The export statement you copied would work in a TS program, but in JS you should write:

exports.sparkClusterName = sparkCluster.name;

Since you are new to JavaScript and TypeScript, I suggest switching to TypeScript and using an editor like Visual Studio Code, which highlights errors as you edit. There are still several more bugs in your code: enable instead of enabled, a missing vmSize in zookeeperNode, and missing storageAccountKey and storageContainerId in storageAccounts.

On top of that, your usernames and passwords violate some of the policies, but you would only discover that when running pulumi up, not at edit time.

Here is my take:

const sparkCluster = new azure.hdinsight.SparkCluster("testSparkCluster", {
    clusterVersion: "3.6",
    componentVersion: {
        spark: "2.3",
    },
    gateway: {
        enabled: true,
        password: "@P6g4KMvlhjM",
        username: "username",
    },
    name: "example-hdisparkcluster",
    resourceGroupName: resourceGroup.apply(rg => rg.name),
    roles: {
        headNode: {
            password: "0@Gkv81xt!lR",
            username: "usernamehn",
            vmSize: "Standard_A3",
        },
        workerNode: {
            password: "Wx63!ZMnePHK",
            targetInstanceCount: 3,
            username: "usernamewn",
            vmSize: "Standard_A3",
        },
        zookeeperNode: {
            password: "&43MhoXfZ5ar",
            username: "usernamezn",
            vmSize: "Standard_A3",
        },
    },
    storageAccounts: [{
        isDefault: true,
        storageAccountKey: storageAccount.primaryAccessKey,
        storageContainerId: storageContainer.id,
    }],
    tier: "Standard",
});
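The storageAccount and storageContainer referenced in the storageAccounts block above are not shown in the snippet. With the @pulumi/azure provider they could be declared along these lines; the resource names and the replication SKU here are illustrative assumptions, not values from the original answer:

```javascript
const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

// Assumed: same resource group lookup as in the program above.
const resourceGroup = pulumi.output(azure.core.getResourceGroup({
    name: "MyResourceGroup",
}));

// Storage account backing the HDInsight cluster (name/SKU are assumptions).
const storageAccount = new azure.storage.Account("sparkstorage", {
    resourceGroupName: resourceGroup.apply(rg => rg.name),
    location: resourceGroup.apply(rg => rg.location),
    accountTier: "Standard",
    accountReplicationType: "LRS",
});

// Blob container that the cluster uses as its default filesystem.
const storageContainer = new azure.storage.Container("sparkcontainer", {
    storageAccountName: storageAccount.name,
    containerAccessType: "private",
});
```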

UPDATE: I created a full working example.