API Docs - v1.0.4
Tested Siddhi Core version: 5.1.21
It may also work with other Siddhi Core minor versions.
S3
copy (Stream Function)
Copies an object from one Amazon AWS S3 bucket to another.
Syntax
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key)
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key, <BOOL> async)
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class)
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region)
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class)
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class, <STRING> aws.access.key, <STRING> aws.secret.key)
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class, <STRING> aws.access.key, <STRING> aws.secret.key, <STRING> versioning.enabled)
s3:copy(<STRING> from.bucket.name, <STRING> from.key, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class, <STRING> aws.access.key, <STRING> aws.secret.key, <STRING> versioning.enabled, <STRING> bucket.acl)
QUERY PARAMETERS
Name | Description | Default Value | Possible Data Types | Optional | Dynamic
---|---|---|---|---|---
from.bucket.name | Name of the source S3 bucket to copy from | | STRING | No | Yes
from.key | Key of the object to be copied | | STRING | No | Yes
bucket.name | Name of the destination S3 bucket | | STRING | No | Yes
key | Key of the destination object | | STRING | No | Yes
async | Toggle async mode | false | BOOL | Yes | Yes
credential.provider.class | AWS credential provider class to be used. If this is blank and no access/secret keys are given, the default credential provider is used. | EMPTY_STRING | STRING | Yes | No
aws.region | The AWS region in which the bucket resides | EMPTY_STRING | STRING | Yes | No
storage.class | AWS storage class for the destination object | standard | STRING | Yes | No
aws.access.key | AWS access key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
aws.secret.key | AWS secret key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
versioning.enabled | Flag to enable versioning support in the destination bucket | false | STRING | Yes | No
bucket.acl | Access control list for the destination bucket | EMPTY_STRING | STRING | Yes | No
Examples
EXAMPLE 1
from FooStream#s3:copy('stock-source-bucket', 'stocks.txt', 'stock-backup-bucket', '/backup/stocks.txt')
Copies the object from one bucket to the other.
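EXAMPLE 2
A sketch of the same copy invoked asynchronously with the optional parameters supplied (the region 'us-west-2' is an illustrative assumption; the empty credential provider class falls back to the default provider, and the parameter order follows the syntax list above):
from FooStream#s3:copy('stock-source-bucket', 'stocks.txt', 'stock-backup-bucket', '/backup/stocks.txt', true, '', 'us-west-2')
Copies the object to a bucket in the given region without blocking the event flow.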
delete (Stream Function)
Deletes an object from an Amazon AWS S3 bucket.
Syntax
s3:delete(<STRING> bucket.name, <STRING> key)
s3:delete(<STRING> bucket.name, <STRING> key, <BOOL> async)
s3:delete(<STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class)
s3:delete(<STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region)
s3:delete(<STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> aws.access.key, <STRING> aws.secret.key)
QUERY PARAMETERS
Name | Description | Default Value | Possible Data Types | Optional | Dynamic
---|---|---|---|---|---
bucket.name | Name of the S3 bucket | | STRING | No | Yes
key | Key of the object | | STRING | No | Yes
async | Toggle async mode | false | BOOL | Yes | Yes
credential.provider.class | AWS credential provider class to be used. If this is blank and no access/secret keys are given, the default credential provider is used. | EMPTY_STRING | STRING | Yes | No
aws.region | The AWS region in which the bucket resides | EMPTY_STRING | STRING | Yes | No
aws.access.key | AWS access key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
aws.secret.key | AWS secret key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
Examples
EXAMPLE 1
from FooStream#s3:delete('s3-file-bucket', '/uploads/stocks.txt')
Delete the object at '/uploads/stocks.txt' from the bucket.
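EXAMPLE 2
A sketch of the same deletion in async mode (the boolean flag is the optional 'async' parameter from the syntax list above):
from FooStream#s3:delete('s3-file-bucket', '/uploads/stocks.txt', true)
Deletes the object asynchronously, so the event flow is not blocked while the request completes.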
uploadFile (Stream Function)
Uploads a file to an Amazon AWS S3 bucket.
Syntax
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key)
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key, <BOOL> async)
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class)
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region)
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class)
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class, <STRING> aws.access.key, <STRING> aws.secret.key)
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class, <STRING> aws.access.key, <STRING> aws.secret.key, <STRING> versioning.enabled)
s3:uploadFile(<STRING> file.path, <STRING> bucket.name, <STRING> key, <BOOL> async, <STRING> credential.provider.class, <STRING> aws.region, <STRING> storage.class, <STRING> aws.access.key, <STRING> aws.secret.key, <STRING> versioning.enabled, <STRING> bucket.acl)
QUERY PARAMETERS
Name | Description | Default Value | Possible Data Types | Optional | Dynamic
---|---|---|---|---|---
file.path | Path of the file to be uploaded | | STRING | No | Yes
bucket.name | Name of the S3 bucket | | STRING | No | Yes
key | Key of the object | | STRING | No | Yes
async | Toggle async mode | false | BOOL | Yes | Yes
credential.provider.class | AWS credential provider class to be used. If this is blank and no access/secret keys are given, the default credential provider is used. | EMPTY_STRING | STRING | Yes | No
aws.region | The AWS region in which the bucket resides | EMPTY_STRING | STRING | Yes | No
storage.class | AWS storage class | standard | STRING | Yes | No
aws.access.key | AWS access key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
aws.secret.key | AWS secret key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
versioning.enabled | Flag to enable versioning support in the bucket | false | STRING | Yes | No
bucket.acl | Access control list for the bucket | EMPTY_STRING | STRING | Yes | No
Examples
EXAMPLE 1
from FooStream#s3:uploadFile('/Users/wso2/files/stocks.txt', 's3-file-bucket', '/uploads/stocks.txt')
Creates an object with the file content at '/uploads/stocks.txt' in the bucket.
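EXAMPLE 2
A sketch of an upload with the optional parameters supplied (the region 'us-east-1' is an illustrative assumption; the empty credential provider class falls back to the default provider, per the parameter table above):
from FooStream#s3:uploadFile('/Users/wso2/files/stocks.txt', 's3-file-bucket', '/uploads/stocks.txt', false, '', 'us-east-1', 'standard')
Uploads the file synchronously to a bucket in the given region using the 'standard' storage class.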
Sink
s3 (Sink)
The S3 sink publishes events as objects into an Amazon AWS S3 bucket.
Syntax
@sink(type="s3", credential.provider.class="<STRING>", aws.access.key="<STRING>", aws.secret.key="<STRING>", bucket.name="<STRING>", aws.region="<STRING>", versioning.enabled="<BOOL>", object.path="<STRING>", storage.class="<STRING>", content.type="<STRING>", bucket.acl="<STRING>", node.id="<STRING>", @map(...))
QUERY PARAMETERS
Name | Description | Default Value | Possible Data Types | Optional | Dynamic
---|---|---|---|---|---
credential.provider.class | AWS credential provider class to be used. If this is blank and no access/secret keys are given, the default credential provider is used. | EMPTY_STRING | STRING | Yes | No
aws.access.key | AWS access key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
aws.secret.key | AWS secret key. This cannot be used along with credential.provider.class. | EMPTY_STRING | STRING | Yes | No
bucket.name | Name of the S3 bucket | | STRING | No | No
aws.region | The AWS region in which the bucket resides | EMPTY_STRING | STRING | Yes | No
versioning.enabled | Flag to enable versioning support in the bucket | false | BOOL | Yes | No
object.path | Path for each S3 object | | STRING | No | Yes
storage.class | AWS storage class | standard | STRING | Yes | No
content.type | Content type of the event | application/octet-stream | STRING | Yes | Yes
bucket.acl | Access control list for the bucket | EMPTY_STRING | STRING | Yes | No
node.id | The node ID of the current publisher. This must be unique for each publisher instance; otherwise objects uploaded to the same S3 bucket by different publishers may overwrite each other. | EMPTY_STRING | STRING | Yes | No
Examples
EXAMPLE 1
@sink(type='s3', bucket.name='user-stream-bucket', object.path='bar/users', credential.provider.class='software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider', flush.size='3',
@map(type='json', enclosing.element='$.user',
@payload("""{"name": "{{name}}", "age": {{age}}}""")))
define stream UserStream(name string, age int);
This creates an S3 bucket named 'user-stream-bucket'. It then collects three events at a time, maps them into a JSON object, and saves it in S3 under 'bar/users'.
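EXAMPLE 2
A sketch of the sink configured with static AWS keys instead of a credential provider class (the placeholder key values and the region 'us-east-1' are assumptions; per the parameter table, aws.access.key and aws.secret.key cannot be combined with credential.provider.class):
@sink(type='s3', bucket.name='user-stream-bucket', object.path='bar/users', aws.access.key='<aws.access.key>', aws.secret.key='<aws.secret.key>', aws.region='us-east-1',
@map(type='json',
@payload("""{"name": "{{name}}", "age": {{age}}}""")))
define stream UserStream(name string, age int);
Publishes each mapped event as a JSON object under 'bar/users' in the 'user-stream-bucket' bucket.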