IMPORTANT: No longer maintained

JetS3t is no longer maintained here. Please see the fork by Paul Gregoire (aka Mondain) at https://github.com/mondain/jets3t

Programmer Guide: Code Samples

The JetS3t suite includes some code samples in the codebase package org.jets3t.samples. This document gives a detailed overview of example code in the CodeSamples.java, GSCodeSamples.java, and CloudFrontSamples.java files which cover some basic JetS3t functionality. Refer to the samples package directory for other examples demonstrating advanced JetS3t functionality.

Items with a red star (*) are new or changed since JetS3t version 0.8.0

Amazon S3

Basic:
  • Connecting to S3
  • Create a Bucket
  • Uploading Data or Files
  • Downloading Objects
  • Listing Your Buckets and Objects
  • Deleting Buckets and Objects
  • Copying Objects
  • Moving and Renaming Objects
  • Reduced Redundancy Storage (RRS)
  • Bucket Versioning
  • Multi-Factor Authenticated Delete

Advanced:
  • Managing Metadata
  • Securing Your AWS Credentials
  • Access Control
  • Bucket Policies*
  • Temporary Public URLs
  • Multipart Uploads*
  • S3 POST Forms
  • Activate Requester Pays for a bucket
  • Access a Requester Pays bucket
  • Amazon DevPay S3 Accounts

Google Storage for Developers*

Basic*:
  • Connecting to Google Storage
  • Create a Bucket
  • Uploading data objects
  • List your buckets and objects
  • Downloading data objects
  • Deleting objects and buckets

ACLs*:
  • Manage Access Control Lists

Advanced*:
  • Verifying Uploads
  • Verifying Downloads
  • Copying Objects
  • Moving and Renaming objects

Threaded Service Wrapper*

  • Multiple Uploads
  • Multiple Downloads
  • Multiple Deletes

Amazon CloudFront

  • Manage CloudFront Distributions*
  • Private Distributions
  • Streaming Distributions
  • Object Invalidation*
  • Non-S3 Origin*

Loading Service Credentials

The JetS3t code samples need your AWS or Google Storage credentials to work. All the sample classes use utility methods in the SamplesUtils class to load your credentials from a properties file called samples.properties, which must be available in the classpath. Before running the sample classes you must create this file and add properties for the services you will access: awsAccessKey and awsSecretKey for AWS, and gsAccessKey and gsSecretKey for Google Storage.
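
For example, a minimal samples.properties might look like this (the values below are placeholders for your own keys):

awsAccessKey=YOUR_AWS_ACCESS_KEY
awsSecretKey=YOUR_AWS_SECRET_KEY
gsAccessKey=YOUR_GS_ACCESS_KEY
gsSecretKey=YOUR_GS_SECRET_KEY

The sample classes then load the credentials with calls like the following (loadAWSCredentials is the AWS counterpart of the loadGSCredentials method used in the Google Storage samples):

AWSCredentials awsCredentials = SamplesUtils.loadAWSCredentials();
GSCredentials gsCredentials = SamplesUtils.loadGSCredentials();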

S3 Object representations

Items stored in a storage service are represented by one of two objects:

  • StorageBucket: a top-level container in which objects are stored. Every bucket is identified by a name, and this name must be unique in the service. Represented by S3Bucket for S3 or GSBucket for Google Storage.
  • StorageObject: a data object stored inside a bucket. It is identified by a key, which can be any string name, and may have additional metadata. Represented by S3Object for S3 or GSObject for Google Storage.
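
For example, bucket and object representations for each service are constructed with a name or key (the names here are illustrative):

// S3 representations of a bucket and an object.
S3Bucket s3Bucket = new S3Bucket("my-bucket");
S3Object s3Object = new S3Object("path/to/my-object.txt");

// The equivalent Google Storage representations.
GSBucket gsBucket = new GSBucket("my-bucket");
GSObject gsObject = new GSObject("path/to/my-object.txt");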

Amazon S3 (CodeSamples.java)

Connecting to S3

Your Amazon Web Services (AWS) login credentials are required to manage S3 accounts. These credentials are stored in an AWSCredentials object:

String awsAccessKey = "YOUR_AWS_ACCESS_KEY";
String awsSecretKey = "YOUR_AWS_SECRET_KEY";
AWSCredentials awsCredentials = 
    new AWSCredentials(awsAccessKey, awsSecretKey);

To communicate with S3, create an instance of an S3Service implementation. We will use the REST/HTTP implementation based on HttpClient, as this is the most robust implementation provided with JetS3t.

S3Service s3Service = new RestS3Service(awsCredentials);

A good test to see if your S3Service can connect to S3 is to list all the buckets you own. If a bucket listing produces no exceptions, all is well.

S3Bucket[] myBuckets = s3Service.listAllBuckets();
System.out.println("How many buckets to I have in S3? " + myBuckets.length);

Create a bucket

To store data in S3 you must first create a bucket, a container for objects.

S3Bucket testBucket = s3Service.createBucket("test-bucket");
System.out.println("Created test bucket: " + testBucket.getName());

Bucket names must be unique across the whole of S3, so if you try using a common name you will probably not be able to create the bucket, as someone else will already have a bucket of that name.

To create a bucket in an S3 data center located somewhere other than the United States, you can specify a location for your bucket as a second parameter to the createBucket() method. Currently, the alternative S3 locations are Europe (EU), US West - Northern California (us-west-1), and Asia Pacific (Singapore).

S3Bucket euBucket = s3Service.createBucket("eu-bucket", S3Bucket.LOCATION_EUROPE);
S3Bucket usWestBucket = s3Service.createBucket("us-west-bucket", S3Bucket.LOCATION_US_WEST);
S3Bucket asiaPacificBucket = s3Service.createBucket(
    "asia-pacific-bucket", S3Bucket.LOCATION_ASIA_PACIFIC);

Uploading data objects

We use S3Object classes to represent data objects in S3. To store some information in our new test bucket, we must first create an object with a key/name then tell our S3Service to upload it to S3.

In the example below, we print out information about the S3Object before and after uploading it to S3. These print-outs demonstrate that the S3Object returned by the putObject method contains extra information provided by S3, such as the date the object was last modified on an S3 server.

// Create an empty object with a key/name, and print the object's details.
S3Object object = new S3Object("object");
System.out.println("S3Object before upload: " + object);

// Upload the object to our test bucket in S3.
object = s3Service.putObject(testBucket, object);

// Print the details about the uploaded object, which contains more information.
System.out.println("S3Object after upload: " + object);

The example above will create an empty object in S3, which isn't very useful. To make the object useful you must give it some data content. If you know the Content/Mime type of the data (e.g. text/plain) you should set this too.

S3Objects can contain any data available from an input stream, but JetS3t provides two convenient object types to hold File or String data. These convenience constructors automatically set the Content-Type and Content-Length of the object.

// Create an S3Object based on a string, with Content-Length set automatically and 
// Content-Type set to "text/plain"  
String stringData = "Hello World!";
S3Object stringObject = new S3Object("HelloWorld.txt", stringData);

// Create an S3Object based on a file, with Content-Length set automatically and 
// Content-Type set based on the file's extension (using the Mimetypes utility class)
File fileData = new File("src/org/jets3t/samples/CodeSamples.java");
S3Object fileObject = new S3Object(fileData);

If your data isn't a File or String you can use any input stream as a data source, but you must manually set the Content-Length.

// Create an object containing a greeting string as input stream data.
String greeting = "Hello World!";
S3Object helloWorldObject = new S3Object("HelloWorld2.txt");
ByteArrayInputStream greetingIS = new ByteArrayInputStream(greeting.getBytes());
helloWorldObject.setDataInputStream(greetingIS);
helloWorldObject.setContentLength(
    greeting.getBytes(Constants.DEFAULT_ENCODING).length);
helloWorldObject.setContentType("text/plain");

// Upload the data objects.
s3Service.putObject(testBucket, stringObject);
s3Service.putObject(testBucket, fileObject);
s3Service.putObject(testBucket, helloWorldObject);

// Print details about the uploaded object.
System.out.println("S3Object with data: " + helloWorldObject);

Reduced Redundancy Storage (RRS)*

You may want to store your objects using a non-standard "storage class" in some cases, such as if you are prepared to accept a reduced level of redundancy in exchange for cheaper storage.

Here is how you store an object using the Reduced Redundancy Storage (RRS) feature.

S3Object rrsObject = new S3Object("reduced-redundancy-object");

// Apply the RRS storage class instead of the default STANDARD one.
rrsObject.setStorageClass(S3Object.STORAGE_CLASS_REDUCED_REDUNDANCY);

// Upload the object as usual.
s3Service.putObject(testBucket, rrsObject);

Verifying Uploads

To be 100% sure that data you have uploaded to S3 has not been corrupted in transit, you can verify that the hash value of the data S3 received matches the hash value of your original data.

The easiest way to do this is to specify your data's hash value in the Content-MD5 header before you upload the object. JetS3t will do this for you automatically when you use the File- or String-based S3Object constructors:

S3Object objectWithHash = new S3Object(testBucket, "HelloWorld.txt", stringData);
System.out.println("Hash value: " + objectWithHash.getMd5HashAsHex());

If you do not use these constructors, you should always set the Content-MD5 header value yourself before you upload an object. JetS3t provides the ServiceUtils#computeMD5Hash method to calculate the hash value of an input stream or byte array.

ByteArrayInputStream dataIS = new ByteArrayInputStream(
    "Here is my data".getBytes(Constants.DEFAULT_ENCODING));
byte[] md5Hash = ServiceUtils.computeMD5Hash(dataIS);
dataIS.reset();        
                
stringObject = new S3Object("MyData");
stringObject.setDataInputStream(dataIS);
stringObject.setMd5Hash(md5Hash);        

Downloading data objects

To download data from S3 you retrieve an S3Object through the S3Service. You may retrieve an object in one of two ways, with the data contents or without.

If you just want to know some details about an object and you don't need its contents, it's faster to use the getObjectDetails method. This returns only the object's details, also known as its 'HEAD'. Head information includes the object's size, date, and other metadata associated with it such as the Content Type.

// Retrieve the HEAD of the data object we created previously.
S3Object objectDetailsOnly = s3Service.getObjectDetails(testBucket, "HelloWorld.txt");
System.out.println("S3Object, details only: " + objectDetailsOnly);

If you need the data contents of the object, the getObject method will return all the object's details and will also set the object's DataInputStream variable from which the object's data can be read.

// Retrieve the whole data object we created previously
S3Object objectComplete = s3Service.getObject(testBucket, "HelloWorld.txt");
System.out.println("S3Object, complete: " + objectComplete);

// Read the data from the object's DataInputStream using a loop, and print it out.
System.out.println("Greeting:");
BufferedReader reader = new BufferedReader(
    new InputStreamReader(objectComplete.getDataInputStream()));
String data = null;
while ((data = reader.readLine()) != null) {
    System.out.println(data);
}

Verifying Downloads

To be 100% sure that data you have downloaded from S3 has not been corrupted in transit, you can verify the data by calculating its hash value and comparing this against the hash value returned by S3.

JetS3t provides convenient methods for verifying data that has been downloaded to a File, byte array or InputStream.

        
S3Object downloadedObject = s3Service.getObject(testBucket, "HelloWorld.txt");
String textData = ServiceUtils.readInputStreamToString(
    downloadedObject.getDataInputStream(), "UTF-8");
boolean valid = downloadedObject.verifyData(textData.getBytes("UTF-8"));
System.out.println("Object verified? " + valid);

List your buckets and objects

Now that you have a bucket and some objects, it's worth listing them. Note that when you list objects, the objects returned will not include much information compared to what you get from the getObject and getObjectDetails methods. However, they will include the size of each object.

// List all your buckets.
S3Bucket[] buckets = s3Service.listAllBuckets();

// List the object contents of each bucket.
for (int b = 0; b < buckets.length; b++) {
    System.out.println("Bucket '" + buckets[b].getName() + "' contains:");
    
    // List the objects in this bucket.
    S3Object[] objects = s3Service.listObjects(buckets[b]);

    // Print out each object's key and size.
    for (int o = 0; o < objects.length; o++) {
        System.out.println(" " + objects[o].getKey() + " (" + objects[o].getContentLength() + " bytes)");
    }
}

When listing the objects in a bucket you can filter which objects to return based on the names of those objects. This is useful when you are only interested in some specific objects in a bucket and you don't need to list all the bucket's contents.

// List only objects whose keys match a prefix. 
String prefix = "Reports";
String delimiter = null; // Refer to the S3 guide for more information on delimiters
S3Object[] filteredObjects = s3Service.listObjects(testBucket, prefix, delimiter);

Copying objects

Objects can be copied within the same bucket and between buckets.

// Create a target S3Object
S3Object targetObject = new S3Object("targetObjectWithSourcesMetadata");

Copy an existing source object to the target S3Object. This will copy the source's object data and metadata to the target object.

boolean replaceMetadata = false;
s3Service.copyObject("test-bucket", "HelloWorld.txt", "destination-bucket", targetObject, replaceMetadata);

You can also copy an object and update its metadata at the same time. Perform a copy-in-place (with the same bucket and object names for source and destination) to update an object's metadata while leaving the object's data unchanged.

targetObject = new S3Object("HelloWorld.txt");
targetObject.addMetadata(S3Object.METADATA_HEADER_CONTENT_TYPE, "text/html");        
replaceMetadata = true;
s3Service.copyObject("test-bucket", "HelloWorld.txt", "test-bucket", targetObject, replaceMetadata);

Moving and Renaming objects

Objects can be moved within a bucket (to a different name) or to another S3 bucket in the same region (e.g. US or EU). A move operation is composed of a copy then a delete operation behind the scenes. If the initial copy operation fails, the object is not deleted. If the final delete operation fails, the object will exist in both the source and destination locations.

Here is a command that moves an object from one bucket to another.

s3Service.moveObject("test-bucket", "HelloWorld.txt", "destination-bucket", targetObject, false);

You can move an object to a new name in the same bucket. This is essentially a rename operation.

s3Service.moveObject("test-bucket", "HelloWorld.txt", "test-bucket", new S3Object("NewName.txt"), false);

To make renaming easier, JetS3t has a shortcut method especially for this purpose.

s3Service.renameObject("test-bucket", "HelloWorld.txt", targetObject);        

Deleting objects and buckets

Objects can be easily deleted. When they are gone they are gone for good so be careful.

Buckets may only be deleted when they are empty.

// If you try to delete your bucket before it is empty, it will fail.
try {
    // This will fail if the bucket isn't empty.
    s3Service.deleteBucket(testBucket.getName());
} catch (S3ServiceException e) {
    e.printStackTrace();
}

// Delete all the objects in the bucket
s3Service.deleteObject(testBucket, object.getKey());
s3Service.deleteObject(testBucket, helloWorldObject.getKey());

// Now that the bucket is empty, you can delete it.
s3Service.deleteBucket(testBucket.getName());
System.out.println("Deleted bucket " + testBucket.getName());

Bucket Versioning

S3 Buckets have a versioning feature which allows you to keep prior versions of your objects when they are updated or deleted. This feature means you can be much more confident that vital data will not be lost even if it is accidentally overwritten or deleted.

Versioning is not enabled for a bucket by default; you must explicitly enable it. Once it is enabled you access and manage object versions using unique version identifiers.

// Create a bucket to test versioning
S3Bucket versioningBucket = s3Service.getOrCreateBucket(
    "test-versioning");
String vBucketName = versioningBucket.getName();

// Check bucket versioning status for the bucket
S3BucketVersioningStatus versioningStatus =
    s3Service.getBucketVersioningStatus(vBucketName);
System.out.println("Versioning enabled ? "
    + versioningStatus.isVersioningEnabled());

// Suspend (disable) versioning for a bucket -- will have no
// effect if bucket versioning is not yet enabled.
// This will not delete any existing object versions.
s3Service.suspendBucketVersioning(vBucketName);

// Enable versioning for a bucket.
s3Service.enableBucketVersioning(vBucketName);

Once versioning is enabled you can GET, PUT, copy and delete objects as normal. Every change to an object will cause a new version to be created.

// Store and update and delete an object in the versioning bucket

S3Object versionedObject = new S3Object("versioned-object", "Initial version");
s3Service.putObject(vBucketName, versionedObject);

versionedObject = new S3Object("versioned-object", "Second version");
s3Service.putObject(vBucketName, versionedObject);

versionedObject = new S3Object("versioned-object", "Final version");
s3Service.putObject(vBucketName, versionedObject);

If you retrieve an object with the standard method you will get the latest version, and if the object is in a versioned bucket its Version ID will be available.

versionedObject = s3Service.getObject(vBucketName, "versioned-object");
String finalVersionId = versionedObject.getVersionId();
System.out.println("Version ID: " + finalVersionId);

If you delete a versioned object it is no longer available using standard methods...

s3Service.deleteObject(vBucketName, "versioned-object");
try {
    s3Service.getObject(vBucketName, "versioned-object");
} catch (S3ServiceException e) {
    if (e.getResponseCode() == 404) {
        System.out.println("Is deleted object versioned? "
            + e.getResponseHeaders().get(Constants.AMZ_DELETE_MARKER));
        System.out.println("Delete marker version ID: "
            + e.getResponseHeaders().get(Constants.AMZ_VERSION_ID));
    }
}

... but you can use a versioning-aware method to retrieve any of the prior versions by Version ID.

versionedObject = s3Service.getVersionedObject(finalVersionId,
    vBucketName, "versioned-object");
String versionedData = ServiceUtils.readInputStreamToString(
    versionedObject.getDataInputStream(), "UTF-8");
System.out.println("Data from prior version of deleted document: "
    + versionedData);

List all the object versions in the bucket, with no prefix or delimiter restrictions. Each result object will be one of S3Version or S3DeleteMarker.

BaseVersionOrDeleteMarker[] versions =
    s3Service.listVersionedObjects(vBucketName, null, null);
for (int i = 0; i < versions.length; i++) {
    System.out.println(versions[i]);
}

List versions of objects that match a prefix.

String versionPrefix = "versioned-object";
versions = s3Service.listVersionedObjects(vBucketName, versionPrefix, null);

JetS3t includes a convenience method to list only the versions for a specific object, even if it shares a prefix with other objects.

versions = s3Service.getObjectVersions(vBucketName, "versioned-object");

There are versioning-aware methods corresponding to all S3 operations.

versionedObject = s3Service.getVersionedObjectDetails(
    finalVersionId, vBucketName, "versioned-object");
// Confirm that S3 returned the versioned object you requested
if (!finalVersionId.equals(versionedObject.getVersionId())) {
    throw new Exception("Incorrect version!");
}

s3Service.copyVersionedObject(finalVersionId,
    vBucketName, "versioned-object",
    "destination-bucket", new S3Object("copied-from-version"),
    false, null, null, null, null);

AccessControlList versionedObjectAcl =
    s3Service.getVersionedObjectAcl(finalVersionId,
        vBucketName, "versioned-object");

s3Service.putVersionedObjectAcl(finalVersionId,
    vBucketName, "versioned-object", versionedObjectAcl);

To delete an object version once-and-for-all you must use the versioning-specific delete operation, and you can only do so if you are the owner of the bucket containing the version.

s3Service.deleteVersionedObject(finalVersionId,
    vBucketName, "versioned-object");

You can easily delete all the versions of an object using one of JetS3t's multi-threaded services.

versions = s3Service.getObjectVersions(vBucketName, "versioned-object");
// Convert version and delete marker objects into versionId strings.
String[] versionIds = BaseVersionOrDeleteMarker.toVersionIds(versions);
(new S3ServiceSimpleMulti(s3Service)).deleteVersionsOfObject(
    versionIds, vBucketName, "versioned-object");

Multi-Factor Authenticated Delete

For additional data protection you can require multi-factor authentication (MFA) to delete object versions.

// Require multi-factor authentication to delete versions.
s3Service.enableBucketVersioningAndMFA(vBucketName);
// Check MFA status for the bucket
versioningStatus = s3Service.getBucketVersioningStatus(vBucketName);
System.out.println("Multi-factor auth required to delete versions ? "
    + versioningStatus.isMultiFactorAuthDeleteRequired());

If MFA is enabled for a bucket you must provide the serial number for your multi-factor authentication device and a recent code to delete object versions.

String multiFactorSerialNumber = "#111222333";
String multiFactorAuthCode = "12345678";

s3Service.deleteVersionedObjectWithMFA(finalVersionId,
    multiFactorSerialNumber, multiFactorAuthCode, vBucketName, "versioned-object");

With MFA enabled, you must provide your multi-factor auth credentials to disable MFA.

s3Service.disableMFAForVersionedBucket(vBucketName,
    multiFactorSerialNumber, multiFactorAuthCode);

With MFA enabled, you must provide your multi-factor auth credentials to suspend S3 versioning altogether. However, the credentials will not be needed if you have already disabled MFA.

s3Service.suspendBucketVersioningWithMFA(vBucketName,
    multiFactorSerialNumber, multiFactorAuthCode);

Advanced Examples

Managing Metadata

S3Objects can contain metadata stored as name/value pairs. This metadata is stored in S3 and can be accessed when an object is retrieved from S3 using getObject or getObjectDetails methods. To store metadata with an object, add your metadata to the object prior to uploading it to S3.

Note that metadata cannot be updated in S3 without replacing the existing object, and that metadata names must be strings without spaces.

S3Object objectWithMetadata = new S3Object("metadataObject");
objectWithMetadata.addMetadata("favourite-colour", "blue");
objectWithMetadata.addMetadata("document-version", "0.3");

Save and load encrypted AWS Credentials

AWS credentials are your means to log in to and manage your S3 account, and should be kept secure. The JetS3t toolkit stores these credentials in AWSCredentials objects. The AWSCredentials class provides utility methods to allow credentials to be saved to an encrypted file and loaded from a previously saved file with the right password.

// Save credentials to an encrypted file protected with a password.
File credFile = new File("awscredentials.enc");
awsCredentials.save("password", credFile);

// Load encrypted credentials from a file.
AWSCredentials loadedCredentials = AWSCredentials.load("password", credFile);
System.out.println("AWS Key loaded from file: " + loadedCredentials.getAccessKey());

// You won't get far if you use the wrong password...
try {
    loadedCredentials = AWSCredentials.load("wrongPassword", credFile);
} catch (S3ServiceException e) {
    System.err.println("Cannot load credentials from file with the wrong password!");
}

Manage Access Control Lists

S3 uses Access Control Lists to control who has access to buckets and objects in S3. By default, any bucket or object you create will belong to you and will not be accessible to anyone else. You can use JetS3t's support for access control lists to make buckets or objects publicly accessible, or to allow other S3 members to access or manage your objects.

The ACL capabilities of S3 are quite involved, so to understand this subject fully please consult Amazon's documentation. The code examples below show how to put your understanding of the S3 ACL mechanism into practice.

ACL settings may be provided with a bucket or object when it is created, or the ACL of existing items may be updated. Let's start by creating a bucket with default (i.e. private) access settings, then making it public.

// Create a bucket in S3.
S3Bucket publicBucket = new S3Bucket(awsAccessKey + ".publicBucket");
s3Service.createBucket(publicBucket);

// Retrieve the bucket's ACL and modify it to grant public access, 
// ie READ access to the ALL_USERS group.
AccessControlList bucketAcl = s3Service.getBucketAcl(publicBucket);
bucketAcl.grantPermission(GroupGrantee.ALL_USERS, Permission.PERMISSION_READ);

// Update the bucket's ACL. Now anyone can view the list of objects in this bucket.
publicBucket.setAcl(bucketAcl);
s3Service.putBucketAcl(publicBucket);
System.out.println("View bucket's object listing here: http://s3.amazonaws.com/" 
    + publicBucket.getName());

Now let's create an object that is public from scratch. Note that we will reuse the bucket's public ACL object created above; this works fine. Although it is possible to create an AccessControlList object from scratch, this is more involved, as you need to set the ACL's Owner information which is only readily available from an existing ACL.

// Create a public object in S3. Anyone can download this object. 
S3Object publicObject = new S3Object(
    publicBucket, "publicObject.txt", "This object is public");
publicObject.setAcl(bucketAcl);
s3Service.putObject(publicBucket, publicObject);        
System.out.println("View public object contents here: http://s3.amazonaws.com/" 
    + publicBucket.getName() + "/" + publicObject.getKey());

The ALL_USERS Group is particularly useful, but there are also other grantee types that can be used with AccessControlList. Please see Amazon's S3 technical documentation for a fuller discussion of these settings.

AccessControlList acl = new AccessControlList();
        
// Grant access by email address. Note that this only works with the email addresses of AWS S3 members.
acl.grantPermission(new EmailAddressGrantee("someone@somewhere.com"), 
    Permission.PERMISSION_FULL_CONTROL);

// Grant control of ACL settings to a known AWS S3 member.
acl.grantPermission(new CanonicalGrantee("AWS member's ID"), 
    Permission.PERMISSION_READ_ACP);
acl.grantPermission(new CanonicalGrantee("AWS member's ID"), 
    Permission.PERMISSION_WRITE_ACP);

Bucket Policies

Bucket policies offer a greater degree of access control for a bucket. Here we set a bucket policy that allows public read access to all objects under the virtual path "/public".

String bucketNameForPolicy = publicBucket.getName();
String policyJSON =
    "{"
    + "\"Version\":\"2008-10-17\""
    + ",\"Id\":\"EXAMPLE\""
    + ",\"Statement\": [{"
        + "\"Effect\":\"Allow\""
        + ",\"Action\":[\"s3:GetObject*\"]"
        + ",\"Principal\":{\"AWS\": [\"*\"]}"
        + ",\"Resource\":\"arn:aws:s3:::" + bucketNameForPolicy + "/public/*\""
    + "}]}";
s3Service.setBucketPolicy(bucketNameForPolicy, policyJSON);

// Retrieve the policy document applied to a bucket
String policyDocument = s3Service.getBucketPolicy(bucketNameForPolicy);
System.out.println(policyDocument);

// Delete the policy document applied to a bucket
s3Service.deleteBucketPolicy(bucketNameForPolicy);

Temporarily make an Object available to anyone

A private object stored in S3 can be made publicly available for a limited time using a signed URL. The signed URL can be used by anyone to download the object, yet it includes a date and time after which the URL will no longer work.

// Create a private object in S3.
S3Bucket privateBucket = new S3Bucket("privateBucket");
S3Object privateObject = new S3Object(
    privateBucket, "privateObject.txt", "This object is private");
s3Service.createBucket(privateBucket);
s3Service.putObject(privateBucket, privateObject);        

// Determine what the time will be in 5 minutes.
Calendar cal = Calendar.getInstance();
cal.add(Calendar.MINUTE, 5);
Date expiryDate = cal.getTime();

Create a signed HTTP GET URL valid for 5 minutes. If you use the generated URL in a web browser within 5 minutes, you will be able to view the object's contents. After 5 minutes, the URL will no longer work and you will only see an Access Denied message.

String signedUrl = s3Service.createSignedGetUrl(
    privateBucket.getName(), privateObject.getKey(), expiryDate, false);

System.out.println("Signed URL: " + signedUrl);

Multipart Uploads

Amazon S3 offers an alternative method for uploading objects for users with advanced requirements, called Multipart Uploads. This mechanism involves uploading an object's data in parts instead of all at once, which can give the following advantages:

  • large files can be uploaded in smaller pieces to reduce the impact of transient uploading/networking errors
  • objects larger than 5 GB can be stored
  • objects can be constructed from data that is uploaded over a period of time, when it may not all be available in advance.

JetS3t's MultipartUtils class makes it easy to perform multipart uploads of your files. To upload a file in 20MB parts:

S3Object largeFileObject = new S3Object(new File("/path/to/large/file"));

List objectsToUploadAsMultipart = new ArrayList();
objectsToUploadAsMultipart.add(largeFileObject);

long maxSizeForAPartInBytes = 20 * 1024 * 1024;
MultipartUtils mpUtils = new MultipartUtils(maxSizeForAPartInBytes);

mpUtils.uploadObjects(BUCKET_NAME, s3Service, objectsToUploadAsMultipart,
    null // eventListener : Provide one to monitor the upload progress
    );

The S3Service API also provides the underlying low-level multipart operations if you need more control over the process. See the method names that start with "multipart", and the example code in TestRestS3Service#testMultipartUploads.
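
As a rough sketch of the low-level approach (treat the exact method signatures below as assumptions and verify them against the S3Service javadoc for your JetS3t version):

// Start a multipart upload for a target object key (sketch only).
MultipartUpload upload = s3Service.multipartStartUpload(
    "test-bucket", "large-object", null); // null: no extra metadata

// Upload one part. Parts are S3Objects keyed like the target object,
// and part numbers start at 1.
File partFile = new File("/path/to/part-1.data");
S3Object part1 = new S3Object("large-object");
part1.setDataInputStream(new FileInputStream(partFile));
part1.setContentLength(partFile.length());
s3Service.multipartUploadPart(upload, 1, part1);

// Complete the upload; S3 assembles the parts into a single object.
s3Service.multipartCompleteUpload(upload);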

IMPORTANT: The objects in S3 created by a multipart upload process do not have ETag header values that can be used to perform MD5 hash verification of the object data. See https://forums.aws.amazon.com/thread.jspa?messageID=234579

Create an S3 POST form

When you create an S3 POST form, anyone who accesses that form in a web browser will be able to upload files to S3 directly from the browser, without needing S3-compatible client software. Refer to the S3 POST documentation for more information.

We will start by creating a POST form with no policy document, meaning that the form will have no expiration date or usage conditions. This form will only work if the target bucket has public write access enabled.

        
String unrestrictedForm = S3Service.buildPostForm("public-bucket", "${filename}");

To use this form, save it in a UTF-8 encoded HTML page (i.e. with the meta tag <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">) and load the page in a web browser.
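
For instance, you could write the generated form into such a page with a few lines of Java (the file name is illustrative):

// Save the POST form in a UTF-8 encoded HTML page.
Writer htmlWriter = new OutputStreamWriter(
    new FileOutputStream("s3-post-form.html"), "UTF-8");
htmlWriter.write("<html><head><meta http-equiv=\"Content-Type\""
    + " content=\"text/html; charset=UTF-8\"></head><body>");
htmlWriter.write(unrestrictedForm);
htmlWriter.write("</body></html>");
htmlWriter.close();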

We will now create a POST form with a range of policy conditions, that will allow users to upload image files to a protected bucket.

String bucketName = "test-bucket";
String key = "uploads/images/pic.jpg";

Specify input fields to set the access permissions and content type of the object created by the form. We will also redirect the user to another web site after they have successfully uploaded a file.

String[] inputFields = new String[] {
    "<input type=\"hidden\" name=\"acl\" value=\"public-read\">",
    "<input type=\"hidden\" name=\"Content-Type\" value=\"image/jpeg\">",
    "<input type=\"hidden\" name=\"success_action_redirect\" value=\"http://localhost/post_upload\">"
};

We then specify policy conditions for at least the mandatory 'bucket' and 'key' fields that will be included in the POST request. In addition to the mandatory fields, we will add a condition to control the size of the file the user can upload.

Note that our list of conditions must include a condition corresponding to each of the additional input fields we specified above.

String[] conditions = {
    S3Service.generatePostPolicyCondition_Equality("bucket", bucketName),
    S3Service.generatePostPolicyCondition_Equality("key", key),
    S3Service.generatePostPolicyCondition_Range(10240, 204800),
    // Conditions to allow the additional fields specified above
    S3Service.generatePostPolicyCondition_Equality("acl", "public-read"),
    S3Service.generatePostPolicyCondition_Equality("Content-Type", "image/jpeg"),
    S3Service.generatePostPolicyCondition_Equality("success_action_redirect", "http://localhost/post_upload")
};

// Form will expire in 24 hours
cal = Calendar.getInstance();
cal.add(Calendar.HOUR, 24);
Date expiration = cal.getTime();
        
// Generate the form.
String restrictedForm = S3Service.buildPostForm(
    bucketName, key, awsCredentials, expiration, conditions, 
    inputFields, null, true);       

Activate Requester Pays for a bucket

A bucket in S3 is normally configured such that the bucket's owner pays all the service fees for accessing, sharing and storing objects. The Requester Pays feature of S3 allows a bucket to be configured such that the individual who sends requests to a bucket is charged the S3 request and data transfer fees, instead of the bucket's owner.

// Set a bucket to be Requester Pays 
s3Service.setRequesterPaysBucket(bucketName, true);

// Set a bucket to be Owner pays (the default value for S3 buckets)
s3Service.setRequesterPaysBucket(bucketName, false);

// Find out whether a bucket is configured as Requester pays
s3Service.isRequesterPaysBucket(bucketName);

Access a Requester Pays bucket when you are not the bucket's owner

When a bucket is configured as Requester Pays, other AWS users can upload objects to the bucket or retrieve them provided the user:

  • has the necessary Access Control List permissions, and
  • indicates that he/she is willing to pay the Requester Pays fees, by including a special flag in the request.

Indicate that you will accept any Requester Pays fees by setting the RequesterPaysEnabled flag to true in your RestS3Service class. You can then use the service to list, upload, or download objects as normal. Support for Requester Pays buckets is disabled by default in JetS3t with the jets3t.properties setting httpclient.requester-pays-buckets-enabled=false.

s3Service.setRequesterPaysEnabled(true);

Generate a Signed URL for a Requester Pays bucket

Third party users of a Requester Pays bucket can generate Signed URLs that permit public access to objects. To generate such a URL, these users call the S3Service#createSignedUrl method with a flag to indicate that he/she is willing to pay the Requester Pays fees incurred by the use of the signed URL.

// Generate a signed GET URL for an object in a Requester Pays bucket.
Map httpHeaders = null;
long expirySecsAfterEpoch = System.currentTimeMillis() / 1000 + 300;
boolean isVirtualHost = false;
boolean isHttpsUrl = false;

String requesterPaysSignedGetUrl = 
    s3Service.createSignedUrl("GET", bucketName, "object-name", 
        Constants.REQUESTER_PAYS_BUCKET_FLAG, // Include Requester Pays flag  
        httpHeaders, expirySecsAfterEpoch, 
        isVirtualHost, isHttpsUrl);

Accessing Amazon DevPay S3 accounts

Amazon's DevPay service allows vendors to sell user-pays S3 accounts. To access the S3 portions of a DevPay product, JetS3t needs additional credentials that include the DevPay User Token and the DevPay Product Token.

AWSDevPayCredentials devPayCredentials = new AWSDevPayCredentials(
    "YOUR_AWS_ACCESSS_KEY", "YOUR_AWS_SECRET_KEY",
    "DEVPAY_USER_TOKEN", "DEVPAY_PRODUCT_TOKEN");

Once you have defined your DevPay S3 credentials, you can create an S3Service class based on these and access the DevPay account as usual.

S3Service devPayService = new RestS3Service(devPayCredentials);
devPayService.listAllBuckets();

You can also generate signed URLs for DevPay S3 accounts. Here is the code to generate a link that makes an object in a DevPay account temporarily available for public download:

cal = Calendar.getInstance();
cal.add(Calendar.MINUTE, 5);

String signedDevPayUrl = devPayService.createSignedGetUrl(
    "devpay-bucket-name", "devpay-object-name", cal.getTime());

Google Storage for Developers (GSCodeSamples.java)

Connecting to Google Storage

Your Google Storage (GS) login credentials are required to manage GS accounts. These credentials are stored in a GSCredentials object:

GSCredentials gsCredentials = SamplesUtils.loadGSCredentials();

// To communicate with Google Storage use the GoogleStorageService.
GoogleStorageService gsService = new GoogleStorageService(gsCredentials);

// A good test to see if your GoogleStorageService can connect to GS is to list all the buckets you own.
// If a bucket listing produces no exceptions, all is well.

GSBucket[] myBuckets = gsService.listAllBuckets();
System.out.println("How many buckets to I have in GS? " + myBuckets.length);

Create a bucket

To store data in GS you must first create a bucket, a container for objects.

GSBucket testBucket = gsService.createBucket(BUCKET_NAME);
System.out.println("Created test bucket: " + testBucket.getName());

Bucket names must be unique across the whole service, so if you try using a common name you will probably not be able to create the bucket, as someone else will already have a bucket of that name.

Uploading data objects

We use GSObject classes to represent data objects in Google Storage. To store some information in our new test bucket, we must first create an object with a key/name then tell our GoogleStorageService to upload it to GS.

In the example below, we print out information about the GSObject before and after uploading it to GS. These print-outs demonstrate that the GSObject returned by the putObject method contains extra information provided by GS, such as the date the object was last modified on a GS server.

// Create an empty object with a key/name, and print the object's details.
GSObject object = new GSObject("object");
System.out.println("GSObject before upload: " + object);

// Upload the object to our test bucket in GS.
object = gsService.putObject(BUCKET_NAME, object);

// Print the details about the uploaded object, which contains more information.
System.out.println("GSObject after upload: " + object);

The example above will create an empty object in GS, which isn't very useful. To make the object useful you must give it some data content. If you know the Content/Mime type of the data (e.g. text/plain) you should set this too.

GSObjects can contain any data available from an input stream, but JetS3t provides two convenient object types to hold File or String data. These convenience constructors automatically set the Content-Type and Content-Length of the object.

// Create a GSObject based on a string, with Content-Length set automatically and
// Content-Type set to "text/plain"
String stringData = "Hello World!";
GSObject stringObject = new GSObject(TEST_OBJECT_NAME, stringData);

// Create a GSObject based on a file, with Content-Length set automatically and
// Content-Type set based on the file's extension (using the Mimetypes utility class)
File fileData = new File("src/org/jets3t/samples/GSCodeSamples.java");
GSObject fileObject = new GSObject(fileData);

If your data isn't a File or String you can use any input stream as a data source, but you must manually set the Content-Length.

// Create an object containing a greeting string as input stream data.
String greeting = "Hello World!";
GSObject helloWorldObject = new GSObject("HelloWorld2.txt");
ByteArrayInputStream greetingIS = new ByteArrayInputStream(
    greeting.getBytes(Constants.DEFAULT_ENCODING));
helloWorldObject.setDataInputStream(greetingIS);
helloWorldObject.setContentLength(
    greeting.getBytes(Constants.DEFAULT_ENCODING).length);
helloWorldObject.setContentType("text/plain");

// Upload the data objects.
gsService.putObject(BUCKET_NAME, stringObject);
gsService.putObject(BUCKET_NAME, fileObject);
gsService.putObject(BUCKET_NAME, helloWorldObject);

// Print details about the uploaded object.
System.out.println("GSObject with data: " + helloWorldObject);

Verifying Uploads

To be 100% sure that data you have uploaded to GS has not been corrupted in transit, you can verify that the hash value of the data GS received matches the hash value of your original data.

The easiest way to do this is to specify your data's hash value in the Content-MD5 header before you upload the object. JetS3t will do this for you automatically when you use the File- or String-based GSObject constructors:

GSObject objectWithHash = new GSObject(TEST_OBJECT_NAME, stringData);
System.out.println("Hash value: " + objectWithHash.getMd5HashAsHex());

If you do not use these constructors, you should *always* set the Content-MD5 header value yourself before you upload an object. JetS3t provides the ServiceUtils#computeMD5Hash method to calculate the hash value of an input stream or byte array.

ByteArrayInputStream dataIS = new ByteArrayInputStream(
    "Here is my data".getBytes(Constants.DEFAULT_ENCODING));
byte[] md5Hash = ServiceUtils.computeMD5Hash(dataIS);
dataIS.reset();

GSObject hashObject = new GSObject("MyData");
hashObject.setDataInputStream(dataIS);
hashObject.setMd5Hash(md5Hash);

Downloading data objects

To download data from GS you retrieve a GSObject through the GoogleStorageService. You may retrieve an object in one of two ways, with the data contents or without.

If you just want to know some details about an object and you don't need its contents, it's faster to use the getObjectDetails method. This returns only the object's details, also known as its 'HEAD'. Head information includes the object's size, date, and other metadata associated with it such as the Content Type.

// Retrieve the HEAD of the data object we created previously.
GSObject objectDetailsOnly = gsService.getObjectDetails(BUCKET_NAME, TEST_OBJECT_NAME);
System.out.println("GSObject, details only: " + objectDetailsOnly);

If you need the data contents of the object, the getObject method will return all the object's details and will also set the object's DataInputStream variable from which the object's data can be read.

// Retrieve the whole data object we created previously
GSObject objectComplete = gsService.getObject(BUCKET_NAME, TEST_OBJECT_NAME);
System.out.println("GSObject, complete: " + objectComplete);

// Read the data from the object's DataInputStream using a loop, and print it out.
System.out.println("Greeting:");
BufferedReader reader = new BufferedReader(
    new InputStreamReader(objectComplete.getDataInputStream()));
String data;
while ((data = reader.readLine()) != null) {
    System.out.println(data);
}

Verifying Downloads

To be 100% sure that data you have downloaded from GS has not been corrupted in transit, you can verify the data by calculating its hash value and comparing this against the hash value returned by GS.

JetS3t provides convenient methods for verifying data that has been downloaded to a File, byte array or InputStream.

GSObject downloadedObject = gsService.getObject(BUCKET_NAME, TEST_OBJECT_NAME);
String textData = ServiceUtils.readInputStreamToString(
    downloadedObject.getDataInputStream(), "UTF-8");
boolean valid = downloadedObject.verifyData(textData.getBytes("UTF-8"));
System.out.println("Object verified? " + valid);

List your buckets and objects

Now that you have a bucket and some objects, it's worth listing them. Note that when you list objects, the objects returned will not include much information compared to what you get from the getObject and getObjectDetails methods. However, they will include the size of each object.

// List all your buckets.
GSBucket[] buckets = gsService.listAllBuckets();

// List the object contents of each bucket.
for (int b = 0; b < buckets.length; b++) {
    System.out.println("Bucket '" + buckets[b].getName() + "' contains:");

    // List the objects in this bucket.
    GSObject[] objects = gsService.listObjects(buckets[b].getName());

    // Print out each object's key and size.
    for (int o = 0; o < objects.length; o++) {
        System.out.println(" " + objects[o].getKey() + " (" + objects[o].getContentLength() + " bytes)");
    }
}

When listing the objects in a bucket you can filter which objects to return based on the names of those objects. This is useful when you are only interested in some specific objects in a bucket and you don't need to list all the bucket's contents.

// List only objects whose keys match a prefix.
String prefix = "Reports";
String delimiter = null; // Refer to the service guide for more information on delimiters
GSObject[] filteredObjects = gsService.listObjects(BUCKET_NAME, prefix, delimiter);

Copying objects

Objects can be copied within the same bucket and between buckets.

// Create a target GSObject
GSObject targetObject = new GSObject("target-object-with-sources-metadata");

// Copy an existing source object to the target GSObject
// This will copy the source's object data and metadata to the target object.
boolean replaceMetadata = false;
gsService.copyObject(BUCKET_NAME, TEST_OBJECT_NAME, "target-bucket", targetObject, replaceMetadata);

// You can also copy an object and update its metadata at the same time. Perform a
// copy-in-place  (with the same bucket and object names for source and destination)
// to update an object's metadata while leaving the object's data unchanged.
targetObject = new GSObject(TEST_OBJECT_NAME);
targetObject.addMetadata(GSObject.METADATA_HEADER_CONTENT_TYPE, "text/html");
replaceMetadata = true;
gsService.copyObject(BUCKET_NAME, TEST_OBJECT_NAME, BUCKET_NAME, targetObject, replaceMetadata);

Moving and Renaming objects

Objects can be moved within a bucket (to a different name) or to another bucket. A move operation is composed of a copy then a delete operation behind the scenes. If the initial copy operation fails, the object is not deleted. If the final delete operation fails, the object will exist in both the source and destination locations.

// Here is a command that moves an object from one bucket to another.
gsService.moveObject(BUCKET_NAME, TEST_OBJECT_NAME, "target-bucket", targetObject, false);

// You can move an object to a new name in the same bucket. This is essentially a rename operation.
gsService.moveObject(BUCKET_NAME, TEST_OBJECT_NAME, BUCKET_NAME, new GSObject("newname.txt"), false);

// To make renaming easier, JetS3t has a shortcut method especially for this purpose.
gsService.renameObject(BUCKET_NAME, TEST_OBJECT_NAME, targetObject);

Deleting objects and buckets

Objects can be easily deleted. When they are gone they are gone for good so be careful.

Buckets may only be deleted when they are empty.

// If you try to delete your bucket before it is empty, it will fail.
try {
    // This will fail if the bucket isn't empty.
    gsService.deleteBucket(BUCKET_NAME);
} catch (ServiceException e) {
    e.printStackTrace();
}

// Delete all the objects in the bucket
gsService.deleteObject(BUCKET_NAME, object.getKey());
gsService.deleteObject(BUCKET_NAME, helloWorldObject.getKey());
gsService.deleteObject(BUCKET_NAME, stringObject.getKey());
gsService.deleteObject(BUCKET_NAME, fileObject.getKey());

// Now that the bucket is empty, you can delete it.
gsService.deleteBucket(BUCKET_NAME);
System.out.println("Deleted bucket " + BUCKET_NAME);

Manage Access Control Lists

GS uses Access Control Lists to control who has access to buckets and objects in GS. By default, any bucket or object you create will belong to you and will not be accessible to anyone else. You can use JetS3t's support for access control lists to make buckets or objects publicly accessible, or to allow other GS members to access or manage your objects.

The ACL capabilities of GS are quite involved, so to understand this subject fully please consult Google's documentation. The code examples below show how to put your understanding of the GS ACL mechanism into practice.

ACL settings may be provided with a bucket or object when it is created, or the ACL of existing items may be updated. Let's start by creating a bucket with default (i.e. private) access settings, then making it public.

// Create a bucket.
String publicBucketName = BUCKET_NAME + "-public";
GSBucket publicBucket = new GSBucket(publicBucketName);
gsService.createBucket(publicBucketName);

// Retrieve the bucket's ACL and modify it to grant public access,
// ie READ access to the ALL_USERS group.
GSAccessControlList bucketAcl = gsService.getBucketAcl(publicBucketName);
bucketAcl.grantPermission(new AllUsersGrantee(), Permission.PERMISSION_READ);

// Update the bucket's ACL. Now anyone can view the list of objects in this bucket.
publicBucket.setAcl(bucketAcl);
gsService.putBucketAcl(publicBucket);

Now let's create an object that is public from scratch. Note that we will reuse the bucket's public ACL object created above; this works fine. Although it is possible to create an AccessControlList object from scratch, this is more involved, as you need to set the ACL's Owner information which is only readily available from an existing ACL.

// Create a public object in GS. Anyone can download this object.
GSObject publicObject = new GSObject("publicObject.txt", "This object is public");
publicObject.setAcl(bucketAcl);
gsService.putObject(publicBucketName, publicObject);

The ALL_USERS Group is particularly useful, but there are also other grantee types that can be used with AccessControlList. Please see Google Storage technical documentation for a fuller discussion of these settings.

GSAccessControlList acl = new GSAccessControlList();

// Grant access by email address. Note that this only works with the email addresses of GS members.
acl.grantPermission(new UserByEmailAddressGrantee("someone@somewhere.com"),
    Permission.PERMISSION_FULL_CONTROL);

// Grant Read access by Google ID.
acl.grantPermission(new UserByIdGrantee("Google member's ID"),
    Permission.PERMISSION_READ);

// Grant Write access to a group by domain.
acl.grantPermission(new GroupByDomainGrantee("yourdomain.com"),
    Permission.PERMISSION_WRITE);

Threaded Service Wrapper*

The JetS3t Toolkit includes utility services that can perform many operations at once in either the Amazon S3 or Google Storage services: ThreadedStorageService and SimpleThreadedStorageService. These services allow you to use more of your available bandwidth and perform storage operations much faster.

NOTE: The S3-specific multi-threaded services S3ServiceMulti and S3ServiceSimpleMulti are largely (but not entirely) made obsolete by ThreadedStorageService and SimpleThreadedStorageService. Support for performing multi-threaded operations using features that are specific to S3, such as multipart uploads, is provided by the ThreadedS3Service class, which is an S3-specific subclass of ThreadedStorageService.

The ThreadedStorageService service is intended for advanced developers. It is designed for use in graphical applications and uses an event-notification approach to communicate its results rather than standard method calls. This means the service can provide progress reports to an application during long-running operations. However, this approach makes the service complicated to use. See the code for the Cockpit application to see how this service is used to display progress updates.
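
As a hedged sketch of this event-notification style (the listener and event class names below follow the JetS3t 0.8 API; verify them against the javadoc for your version), you construct the service with a listener and react to progress events:

// An event adaptor that reports progress during multi-threaded uploads.
// Sketch only: see StorageServiceEventAdaptor for the full set of
// event methods you can override.
StorageServiceEventAdaptor adaptor = new StorageServiceEventAdaptor() {
    @Override
    public void event(CreateObjectsEvent event) {
        super.event(event);
        if (ServiceEvent.EVENT_IN_PROGRESS == event.getEventCode()) {
            System.out.println("Bytes uploaded so far: "
                + event.getThreadWatcher().getBytesTransferred());
        }
    }
};

ThreadedStorageService threadedService =
    new ThreadedStorageService(s3Service, adaptor);
threadedService.putObjects("test-bucket", objects);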

The SimpleThreadedStorageService class is a simplified interface to the multi-threaded service, so developers can take advantage of multi-threading without having to implement handling of callbacks or worry about other messy details.

The examples below demonstrate how to use some of the multi-threaded operations provided by SimpleThreadedStorageService.

Construct a SimpleThreadedStorageService service

To use the SimpleThreadedStorageService service you construct it by providing a pre-prepared RestS3Service or GoogleStorageService.

// Create a simple multi-threading service based on our existing S3Service
SimpleThreadedStorageService simpleMulti = new SimpleThreadedStorageService(s3Service);

Upload multiple objects at once

To demonstrate multiple uploads, let's create some small text-data objects and a bucket to put them in.

// First, create a bucket.
S3Bucket bucket = new S3Bucket(awsAccessKey + ".TestMulti");
bucket = s3Service.createBucket(bucket);

// Create an array of data objects to upload.
S3Object[] objects = new S3Object[5];
objects[0] = new S3Object(bucket, "object1.txt", "Hello from object 1");
objects[1] = new S3Object(bucket, "object2.txt", "Hello from object 2");
objects[2] = new S3Object(bucket, "object3.txt", "Hello from object 3");
objects[3] = new S3Object(bucket, "object4.txt", "Hello from object 4");
objects[4] = new S3Object(bucket, "object5.txt", "Hello from object 5");

Now that we have some sample objects, we can upload them.

// Upload multiple objects.
S3Object[] createdObjects = simpleMulti.putObjects(bucket, objects);        
System.out.println("Uploaded " + createdObjects.length + " objects");

Retrieve the HEAD information of multiple objects

// Perform a Details/HEAD query for multiple objects.
S3Object[] objectsWithHeadDetails = simpleMulti.getObjectsHeads(bucket, objects);

// Print out details about all the objects.
System.out.println("Objects with HEAD Details...");
for (int i = 0; i < objectsWithHeadDetails.length; i++) {
    System.out.println(objectsWithHeadDetails[i]);
}

Download objects to local files

The multi-threading services provide a method to download multiple objects at a time, but to use this you must first prepare somewhere to put the data associated with each object. The most obvious place to put this data is into a file, so let's go through an example of downloading object data into files.

To download our objects into files we must first create a DownloadPackage for each object. This class is a simple container which merely associates an object with an output file (or output stream), to which the object's data will be written.

// Create a DownloadPackage for each object, to associate the object with an output file.
DownloadPackage[] downloadPackages = new DownloadPackage[objects.length];
for (int i = 0; i < objects.length; i++) {
    downloadPackages[i] = new DownloadPackage(
        objects[i], new File(objects[i].getKey()));
}

// Download the objects.
simpleMulti.downloadObjects(bucket, downloadPackages);
System.out.println("Downloaded objects to current working directory");

Delete multiple objects

It's time to clean up, so let's get rid of our multiple objects and test bucket.

// Delete multiple objects, then the bucket too.
simpleMulti.deleteObjects(bucket, objects);
s3Service.deleteBucket(bucket);
System.out.println("Deleted bucket: " + bucket);

CloudFront (CloudFrontSamples.java)

Amazon's CloudFront service acts like a Content Distribution Network (CDN) for files you have stored in your S3 account. CloudFront is a separate service from S3, so JetS3t includes an entirely new service class for interacting with the service's API: CloudFrontService. To use the service, you will need to sign up for access to CloudFront as well as S3.

Manage CloudFront Distributions

Construct a CloudFrontService object to interact with the service.

AWSCredentials awsCredentials = new AWSCredentials(
        "YOUR_AWS_ACCESS_KEY", "YOUR_AWS_SECRET_KEY");
CloudFrontService cloudFrontService = new CloudFrontService(awsCredentials);

List the distributions applied to a given S3 bucket

Distribution[] bucketDistributions = cloudFrontService.listDistributions("jets3t");
for (int i = 0; i < bucketDistributions.length; i++) {
    System.out.println("Bucket distribution " + (i + 1) + ": " + bucketDistributions[i]);
}

Create a new public distribution

String originBucket = "jets3t.s3.amazonaws.com";
Distribution newDistribution = cloudFrontService.createDistribution(
    new S3Origin(originBucket),
    "" + System.currentTimeMillis(), // Caller reference - a unique string value
    new String[] {"test1.jamesmurty.com"}, // CNAME aliases for distribution
    "Testing", // Comment
    true,  // Distribution is enabled?
    null  // Logging status of distribution (null means disabled)
    );
System.out.println("New Distribution: " + newDistribution);

The ID of the new distribution we will use for testing

String testDistributionId = newDistribution.getId(); 

List information about a distribution

Distribution distribution = cloudFrontService.getDistributionInfo(testDistributionId);
System.out.println("Distribution: " + distribution);

List configuration information about a distribution

DistributionConfig distributionConfig = cloudFrontService.getDistributionConfig(testDistributionId);
System.out.println("Distribution Config: " + distributionConfig);

Update a distribution's configuration to add an extra CNAME alias and enable logging.

DistributionConfig updatedDistributionConfig = cloudFrontService.updateDistributionConfig(
    testDistributionId,
    null, // origin -- null for no changes
    new String[] {"test1.jamesmurty.com", "test2.jamesmurty.com"}, // CNAME aliases for distribution
    "Another comment for testing", // Comment
    true, // Distribution enabled?
    new LoggingStatus("log-bucket.s3.amazonaws.com", "log-prefix/")  // Distribution logging
    );
System.out.println("Updated Distribution Config: " + updatedDistributionConfig);

Update a distribution's configuration to require secure HTTPS connections, using the RequiredProtocols feature.

updatedDistributionConfig = cloudFrontService.updateDistributionConfig(
    testDistributionId,
    null, // origin -- null for no changes
    new String[] {"test1.jamesmurty.com", "test2.jamesmurty.com"}, // CNAME aliases for distribution
    "HTTPS Only!", // Comment
    true, // Distribution enabled?
    new LoggingStatus("log-bucket.s3.amazonaws.com", "log-prefix/"),  // Distribution logging
    false, // URLs self-signing disabled
    null,  // No other AWS users can sign URLs
    new String[] {"https"}, // RequiredProtocols with HTTPS protocol
    "index.html" // Default Root Object
);
System.out.println("HTTPS only distribution Config: " + updatedDistributionConfig);

Disable a distribution, e.g. so that it may be deleted. The CloudFront service may take some time to disable and deploy the distribution.

DistributionConfig disabledDistributionConfig = cloudFrontService.updateDistributionConfig(
    testDistributionId, null, new String[] {}, "Deleting distribution", false, null);
System.out.println("Disabled Distribution Config: " + disabledDistributionConfig);

Check whether a distribution is deployed

Distribution distribution = cloudFrontService.getDistributionInfo(testDistributionId);
System.out.println("Distribution is deployed? " + distribution.isDeployed());

Convenience method to disable a distribution prior to deletion

cloudFrontService.disableDistributionForDeletion(testDistributionId);

Delete a distribution (the distribution must be disabled and deployed first)

cloudFrontService.deleteDistribution(testDistributionId);

Private Distributions

Origin Access Identities

Create a new origin access identity

OriginAccessIdentity originAccessIdentity = 
    cloudFrontService.createOriginAccessIdentity(null, "Testing");
System.out.println(originAccessIdentity.toString());

List your origin access identities

List originAccessIdentityList = cloudFrontService.getOriginAccessIdentityList();
System.out.println(originAccessIdentityList);

Obtain an origin access identity ID for future use

OriginAccessIdentity identity = (OriginAccessIdentity) originAccessIdentityList.get(1);
String originAccessIdentityId = identity.getId(); 
System.out.println("originAccessIdentityId: " + originAccessIdentityId);

Lookup information about a specific origin access identity

OriginAccessIdentity originAccessIdentity =
    cloudFrontService.getOriginAccessIdentity(originAccessIdentityId);
System.out.println(originAccessIdentity);

Lookup config details for an origin access identity

OriginAccessIdentityConfig originAccessIdentityConfig =
    cloudFrontService.getOriginAccessIdentityConfig(originAccessIdentityId);
System.out.println(originAccessIdentityConfig);

Update configuration for an origin access identity

OriginAccessIdentityConfig updatedConfig = 
    cloudFrontService.updateOriginAccessIdentityConfig(
        originAccessIdentityId, "New Comment");
System.out.println(updatedConfig);

Delete an origin access identity

cloudFrontService.deleteOriginAccessIdentity(originAccessIdentityId);

Creating and Updating Private Distributions

Create a new private distribution for which signed URLs are *not* required

originBucket = "jets3t.s3.amazonaws.com";
Distribution privateDistribution = cloudFrontService.createDistribution(
    new S3Origin(originBucket, originAccessIdentityId),
    "" + System.currentTimeMillis(), // Caller reference - a unique string value
    new String[] {}, // CNAME aliases for distribution
    "New private distribution -- URL signing not required", // Comment
    true,  // Distribution is enabled?
    null,  // Logging status of distribution (null means disabled)
    false, // URLs self-signing disabled
    null,  // No other AWS users can sign URLs
    null,   // No required protocols
    null // No default root object
);
System.out.println("New Private Distribution: " + privateDistribution);

Update an existing distribution to make it private and require URL signing

updatedDistributionConfig = cloudFrontService.updateDistributionConfig(
    testDistributionId,
    new S3Origin(originBucket, originAccessIdentityId),
    new String[] {}, // CNAME aliases for distribution
    "Now a private distribution -- URL Signing required", // Comment
    true, // Distribution enabled?
    null, // No distribution logging
    true, // URLs can be self-signed
    null, // No other AWS users can sign URLs
    null,  // No required protocols
    "index.html" //Default Root Object
);
System.out.println("Made distribution private: " + updatedDistributionConfig);

List active trusted signers for a private distribution

Distribution distribution = cloudFrontService.getDistributionInfo(testDistributionId);
System.out.println("Active trusted signers: " + distribution.getActiveTrustedSigners());

Obtain one of your own (Self) keypair ids that can sign URLs for the distribution

List selfKeypairIds = (List) distribution.getActiveTrustedSigners().get("Self");
String keyPairId = (String) selfKeypairIds.get(0);
System.out.println("Keypair ID: " + keyPairId); 

Signed URLs for a private distribution

String distributionDomain = "a1b2c3d4e5f6g7.cloudfront.net";
String privateKeyFilePath = "/path/to/rsa-private-key.pem";
String s3ObjectKey = "s3/object/key.txt";
String policyResourcePath = distributionDomain + "/" + s3ObjectKey;

Convert an RSA PEM private key file to DER bytes

byte[] derPrivateKey = EncryptionUtil.convertRsaPemToDer(
    new FileInputStream(privateKeyFilePath));
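
The PEM file used here is the private half of a CloudFront key pair created under your AWS account's security credentials; the EncryptionUtil call above converts it to the DER encoding that the signing methods require.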

Generate a "canned" signed URL to allow access to a specific distribution and object

String signedUrlCanned = CloudFrontService.signUrlCanned(
    "http://" + distributionDomain + "/" + s3ObjectKey, // Resource URL or Path
    keyPairId,     // Certificate identifier, an active trusted signer for the distribution
    derPrivateKey, // DER Private key data
    ServiceUtils.parseIso8601Date("2009-11-14T22:20:00.000Z") // DateLessThan
    );
System.out.println(signedUrlCanned);
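
A canned policy like this can only restrict access by expiry date. To apply further restrictions, such as a CIDR IP address range or a start date, build a custom policy document instead, as shown next.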

Build a policy document to define custom restrictions for a signed URL

String policy = CloudFrontService.buildPolicyForSignedUrl(
    policyResourcePath, // Resource path (optional, may include '*' and '?' wildcards)
    ServiceUtils.parseIso8601Date("2009-11-14T22:20:00.000Z"), // DateLessThan
    "0.0.0.0/0", // CIDR IP address restriction (optional, 0.0.0.0/0 means everyone)
    ServiceUtils.parseIso8601Date("2009-10-16T06:31:56.000Z")  // DateGreaterThan (optional)
    );

Generate a signed URL using a custom policy document

String signedUrl = CloudFrontService.signUrl(
    "http://" + distributionDomain + "/" + s3ObjectKey, // Resource URL or Path
    keyPairId,     // Certificate identifier, an active trusted signer for the distribution
    derPrivateKey, // DER Private key data
    policy // Access control policy
    );
System.out.println(signedUrl);

Streaming Distributions

The methods for interacting with streaming distributions are very similar to those for standard distributions.

List the streaming distributions applied to a given S3 bucket

StreamingDistribution[] streamingDistributions = 
    cloudFrontService.listStreamingDistributions("jets3t-streaming");
for (int i = 0; i < streamingDistributions.length; i++) {
    System.out.println("Streaming distribution " + (i + 1) + ": " + streamingDistributions[i]);
}

Create a new streaming distribution

String streamingBucket = "jets3t-streaming.s3.amazonaws.com";
StreamingDistribution newStreamingDistribution = cloudFrontService.createStreamingDistribution(
    new S3Origin(streamingBucket),
    "" + System.currentTimeMillis(), // Caller reference - a unique string value
    null, // CNAME aliases for distribution
    "Test streaming distribution", // Comment
    true,  // Distribution is enabled?
    null   // Logging status
    );
System.out.println("New Streaming Distribution: " + newStreamingDistribution);

Streaming distributions can be made private just like standard non-streaming distributions. Create a new private streaming distribution for which signed URLs are *not* required

StreamingDistribution newPrivateStreamingDistribution =
    cloudFrontService.createStreamingDistribution(
        new S3Origin(streamingBucket, originAccessIdentityId),
        "" + System.currentTimeMillis(), // Caller reference - a unique string value
        new String[] {}, // CNAME aliases for distribution
        "New private streaming distribution -- URL signing not required", // Comment
        true, // Distribution is enabled?
        null, // Logging status
        true, // URLs self-signing enabled
        null // No other AWS users can sign URLs
);
System.out.println("New Private Streaming Distribution: " + newPrivateStreamingDistribution);

The ID of the streaming distribution we will use for testing

String testStreamingDistributionId = newStreamingDistribution.getId();

List information about a streaming distribution

StreamingDistribution streamingDistribution = 
    cloudFrontService.getStreamingDistributionInfo(testStreamingDistributionId);
System.out.println("Streaming Distribution: " + streamingDistribution);

List configuration information about a streaming distribution

StreamingDistributionConfig streamingDistributionConfig = 
    cloudFrontService.getStreamingDistributionConfig(testStreamingDistributionId);
System.out.println("Streaming Distribution Config: " + streamingDistributionConfig);

Update a streaming distribution's configuration to add an extra CNAME alias

StreamingDistributionConfig updatedStreamingDistributionConfig =
    cloudFrontService.updateStreamingDistributionConfig(
        testStreamingDistributionId,
        null, // origin -- null for no changes
        new String[] {"cname.jets3t-streaming.com"}, // CNAME aliases for distribution
        "Updated this streaming distribution", // Comment
        true, // Distribution enabled?
        new LoggingStatus("jets3t-streaming-logs.s3.amazonaws.com", "sdlog-") // Logging
        );
System.out.println("Updated Streaming Distribution Config: "
    + updatedStreamingDistributionConfig);

Disable a streaming distribution, e.g. so that it may be deleted. The CloudFront service may take some time to disable and deploy the distribution.

StreamingDistributionConfig disabledStreamingDistributionConfig =
    cloudFrontService.updateStreamingDistributionConfig(
        testStreamingDistributionId,
        null, // origin -- null for no changes
        new String[] {}, "Deleting distribution",
        false, // Distribution enabled?
        null   // Logging status
        );
System.out.println("Disabled Streaming Distribution Config: "
    + disabledStreamingDistributionConfig);

Check whether a streaming distribution is deployed

StreamingDistribution streamingDistributionCheck = 
    cloudFrontService.getStreamingDistributionInfo(testStreamingDistributionId);
System.out.println("Streaming Distribution is deployed? " 
    + streamingDistributionCheck.isDeployed());

Convenience method to disable a streaming distribution prior to deletion

cloudFrontService.disableStreamingDistributionForDeletion(testStreamingDistributionId);

Delete a streaming distribution (the distribution must be disabled and deployed first)

cloudFrontService.deleteStreamingDistribution(testStreamingDistributionId);

Object Invalidation

Invalidate objects in a distribution to force CloudFront to fetch the latest object data from the S3 origin.

String[] objectKeys = new String[] {"downloads.html"};

Invalidation invalidation = cloudFrontService.invalidateObjects(
    testDistributionId,
    objectKeys,
    "" + System.currentTimeMillis() // Caller reference - a unique string value
    );
System.out.println(invalidation);
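
Invalidation requests are processed asynchronously and may take some time to complete; you can check on an earlier request's progress using its ID, as shown below.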

Retrieve details about a prior invalidation operation

String invalidationId = invalidation.getId();

Invalidation priorInvalidation = cloudFrontService.getInvalidation(
    testDistributionId, invalidationId);
System.out.println(priorInvalidation);

List summary information about all invalidations performed on a distribution.

List<InvalidationSummary> invalidationSummaries =
    cloudFrontService.listInvalidations(testDistributionId);
System.out.println(invalidationSummaries);

Non-S3 Origin

Create a new distribution with a non-S3 (custom) origin

CustomOrigin customOrigin = new CustomOrigin(
    "www.jamesmurty.com", // DNS name
    CustomOrigin.OriginProtocolPolicy.HTTP_ONLY  // Access content over HTTP only
    // To distribute content over HTTPS use:
    // CustomOrigin.OriginProtocolPolicy.MATCH_VIEWER
    );

Distribution customOriginDistribution = cloudFrontService.createDistribution(
    customOrigin,
    "" + System.currentTimeMillis(), // Caller reference - a unique string value
    null, // CNAME aliases for distribution
    "Distribution with a non-S3 origin", // Comment
    true,  // Distribution is enabled?
    null  // Logging status of distribution (null means disabled)
    );
System.out.println("Distribution with custom origin: " + customOriginDistribution);