This is caused by a recently discovered known issue in the S3 sink, and it can be resolved by upgrading the AWS SDK version. Below is a basic Flink code example for streaming files to S3 with StreamingFileSink:
import java.time.ZoneId;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.OutputFileConfig;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// keep checkpoints on durable storage as well ("s3a://myBucket/checkpoints" is a placeholder path)
env.getCheckpointConfig().setCheckpointStorage("s3a://myBucket/checkpoints");
env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
env.enableCheckpointing(10 * 60_000); // 10 minutes

// write operation
DataStream<GenericRecord> stream = ...
stream.addSink(buildS3ParquetSink("s3a://myBucket/output/prefix"));
// build the Parquet-on-S3 sink
private static StreamingFileSink<GenericRecord> buildS3ParquetSink(String outputPrefix) {
    // The region and credentials are not set on the sink itself: Flink's S3 filesystem
    // takes them from flink-conf.yaml or from the AWS default credential provider chain,
    // which already honors the aws.accessKeyId / aws.secretAccessKey system properties
    // (see the configuration sketch below).
    return StreamingFileSink
            .forBulkFormat(new Path(outputPrefix), snappyAvroParquetFactory(SCHEMA))
            .withBucketAssigner(new DateTimeBucketAssigner<>("yyyy/MM/dd/HH", ZoneId.of("Europe/Warsaw")))
            .withBucketCheckInterval(DEFAULT_BUCKET_CHECK_INTERVAL)
            .withOutputFileConfig(OutputFileConfig.builder().withPartSuffix(".parquet").build())
            .build();
}

// ParquetAvroWriters exposes no compression option, so build the writer factory
// by hand to keep the SNAPPY codec
private static ParquetWriterFactory<GenericRecord> snappyAvroParquetFactory(Schema schema) {
    final String schemaString = schema.toString(); // Schema is not Serializable, ship it as a String
    return new ParquetWriterFactory<>(out -> AvroParquetWriter.<GenericRecord>builder(out)
            .withSchema(new Schema.Parser().parse(schemaString))
            .withDataModel(GenericData.get())
            .withCompressionCodec(CompressionCodecName.SNAPPY)
            .build());
}
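The original snippet tried to hand the region and credentials to the sink through a withS3Config call, but StreamingFileSink has no such method. Here is a minimal configuration sketch instead, assuming the flink-s3-fs-hadoop plugin and placeholder key values; Flink forwards keys prefixed with s3. to the matching fs.s3a. Hadoop options, and the AWS default provider chain also picks up the aws.accessKeyId / aws.secretAccessKey system properties the original loop collected:

# flink-conf.yaml (sketch; values are placeholders)
s3.access-key: <your-access-key>
s3.secret-key: <your-secret-key>
# other fs.s3a.* options can be supplied the same way, e.g. s3.endpoint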
Note that this is only one possible solution; the right fix depends on your application's configuration, environment, and code. Also make sure the AWS SDK version you use meets your Flink version's requirements.
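On a standalone cluster, the AWS SDK is bundled inside the S3 filesystem plugin jar, so matching the SDK to Flink mostly means installing the plugin built for your Flink release. A sketch, with paths relative to the Flink home directory and the jar version replaced by your own:

mkdir ./plugins/s3-fs-hadoop
cp ./opt/flink-s3-fs-hadoop-<flink-version>.jar ./plugins/s3-fs-hadoop/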