I am creating a Glue job using the Java SDK. It has only two required parameters, Command and GlueVersion. But I need to create the job with automatic script generation. In the console we can add a data source, the proposed script generated by AWS Glue, the transform type, the data target, the schema and so on. How can I add these parameters to a Glue job using the Java SDK, or even the AWS Glue API?
CreateJobRequest req = new CreateJobRequest();
req.setName("TestJob2");
req.setRole("GlueS3Role");
req.setGlueVersion("1.0");
JobCommand command = new JobCommand();
command.setName("glueetl");
command.setPythonVersion("3");
// S3 location should not need to be given, since the script code is auto-generated by AWS Glue
command.setScriptLocation(S3ScriptLocation);
req.setCommand(command);
AWSGlue glueClient = AWSGlueClientBuilder.standard()
.withRegion(Regions.US_EAST_1)
.withCredentials(new AWSStaticCredentialsProvider(creds))
.build();
glueClient.createJob(req);
Answer 0: (score: 0)
I hope this implementation of an AWS Glue client and the logic for triggering a job helps; the auto-generated Glue job can be wired up in the same way.
Glue client:
public GlueClient createClient() {
    return GlueClient.builder()
            .region(Region.of(regionName))
            .credentialsProvider(ProfileCredentialsProvider.create(profileName))
            .build();
}
Running a Glue job:
public static String runGlueJob(GlueClient glueClient, String jobName, Map<String, String> glueArguments) {
    StartJobRunResponse response = glueClient.startJobRun(
            StartJobRunRequest.builder().jobName(jobName).arguments(glueArguments).build());
    String jobId = response.jobRunId();
    logger.info("JobId: " + jobId);
    return jobId;
}
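The glueArguments map above carries the job's runtime parameters. As a minimal sketch, here is one way to assemble it; "--TempDir" and "--job-bookmark-option" are standard AWS Glue special parameters, while the helper class name and the S3 path are hypothetical placeholders:

```java
import java.util.HashMap;
import java.util.Map;

public class GlueJobArgs {
    // Assembles the arguments map passed to StartJobRunRequest.arguments(...).
    // The keys are standard Glue special parameters; the S3 path passed in
    // is a placeholder to be replaced with your own temp location.
    static Map<String, String> buildArguments(String tempDir) {
        Map<String, String> args = new HashMap<>();
        args.put("--TempDir", tempDir);
        args.put("--job-bookmark-option", "job-bookmark-enable");
        return args;
    }

    public static void main(String[] argv) {
        Map<String, String> glueArgs = buildArguments("s3://my-bucket/temp/");
        System.out.println(glueArgs.size()); // prints 2
    }
}
```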
To create a new AWS Glue job definition, we can do the following:
// Variables below are placeholders: supply your own values.
CreateJobResponse jobResponse = glueClient.createJob(CreateJobRequest.builder()
        .command(JobCommand.builder()
                .pythonVersion("3")
                .scriptLocation(scriptLocation)
                .name("glueetl")
                .build())
        .defaultArguments(defaultArguments) // Map<String, String>
        .description(description)
        .glueVersion(glueVersion)
        .logUri(logUri)
        .name(jobName)
        .numberOfWorkers(numberOfWorkers)
        .role(roleArn)
        .tags(tags)
        .build());
Then trigger the job. AWSGlueClient API reference:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/glue/AWSGlueClient.html
Answer 1: (score: 0)
What you are looking for is the createScript(CreateScriptRequest request) function in the AWS Java SDK AWSGlueClient class.
Unfortunately, the current version of the AWS Glue SDK does not include a simple function for generating ETL scripts. The AWS Glue console does some work behind the scenes when it generates the ETL script in the "Create job" flow (you can see this by watching your browser's Network tab).
To mimic it you will need to build a "DAG": collect a Collection<CodeGenNode> and a Collection<CodeGenEdge>, and pass them to CreateScriptRequest via .withDagNodes(Collection<CodeGenNode> collection) and .withDagEdges(Collection<CodeGenEdge> collection).
I recommend first generating the ETL script in the AWS console and cross-referencing the result with the information in the "Generate Scala Code" example (that link is meant to help you better understand the "DAG").
I ended up building this DAG structure explicitly. Here is a summary of my solution:
var dagNodes = new ArrayList<CodeGenNode>();
var dagEdges = new ArrayList<CodeGenEdge>();

// datasource (dataSourceName, applyMappingName, databaseName, tableName are defined elsewhere)
dagEdges.add(new CodeGenEdge().withSource(dataSourceName).withTarget(applyMappingName));
ArrayList<CodeGenNodeArg> dataSourceArgs = new ArrayList<CodeGenNodeArg>();
dataSourceArgs.add(new CodeGenNodeArg().withName("database").withValue(String.format("\"%s\"", databaseName)));
dataSourceArgs.add(new CodeGenNodeArg().withName("table_name").withValue(String.format("\"%s\"", tableName)));
dataSourceArgs.add(new CodeGenNodeArg().withName("transformation_ctx").withValue(String.format("\"%s\"", dataSourceName)));
dagNodes.add(new CodeGenNode().withId(dataSourceName).withNodeType("DataSource").withArgs(dataSourceArgs));

// ... can build out many 'operations' the same way - datasource, applymapping, selectfields, resolvechoice, datasink

var createScriptRequest = new CreateScriptRequest()
        .withDagEdges(dagEdges)
        .withDagNodes(dagNodes)
        .withLanguage(Language.PYTHON);
awsGlueClient.createScript(createScriptRequest);
Then simply upload this result to S3 using AmazonS3 and use that path for setScriptLocation:
PutObjectResult putObject(String bucketName, String key, String content)
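Tying the last two steps together: the value passed to setScriptLocation is just the s3:// URI of the object you uploaded with putObject. A minimal sketch of deriving that URI; the class name, bucket, and key below are hypothetical placeholders:

```java
public class ScriptLocation {
    // Builds the s3:// URI for a script uploaded via putObject(bucketName, key, content).
    // This URI is what you pass to JobCommand.setScriptLocation(...).
    static String s3Uri(String bucketName, String key) {
        return String.format("s3://%s/%s", bucketName, key);
    }

    public static void main(String[] args) {
        // Hypothetical bucket/key; match whatever you used in putObject.
        System.out.println(s3Uri("my-glue-scripts", "jobs/test-job.py"));
        // prints s3://my-glue-scripts/jobs/test-job.py
    }
}
```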