Upload Files to AWS S3

When working with Amazon S3 (Simple Storage Service), you're probably using the S3 web console to download, copy, or upload files to S3 buckets. Using the console is perfectly fine; that's what it was designed for, to begin with.

Particularly for admins who are used to more mouse clicks than keyboard commands, the web console is probably the easiest. Even so, admins will eventually see the need to perform bulk file operations with Amazon S3, like an unattended file upload. The GUI is not the best tool for that.

For such automation requirements with Amazon Web Services, including Amazon S3, the AWS CLI tool provides admins with command-line options for managing Amazon S3 buckets and objects.

In this article, you will learn how to use the AWS CLI command-line tool to upload, copy, download, and synchronize files with Amazon S3. You will also learn the basics of providing access to your S3 bucket and configuring that access profile to work with the AWS CLI tool.

Prerequisites

Since this is a how-to article, there will be examples and demonstrations in the succeeding sections. For you to follow along successfully, you will need to meet several requirements.

  • An AWS account. If you don't have an existing AWS subscription, you can sign up for the AWS Free Tier.
  • An AWS S3 bucket. You can use an existing bucket if you'd prefer. However, it is recommended to create an empty bucket instead. Please refer to Creating a bucket.
  • A Windows 10 computer with at least Windows PowerShell 5.1. In this article, PowerShell 7.0.2 will be used.
  • The AWS CLI version 2 tool must be installed on your computer.
  • Local folders and files that you will upload or synchronize with Amazon S3.
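Once the AWS CLI is installed, a quick way to confirm it is available (a sanity check, not part of the original steps) is to print its version:

```shell
# Print the installed AWS CLI version; version 2 reports as aws-cli/2.x
aws --version
```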

Preparing Your AWS S3 Access

Suppose that you already have the requirements in place. You'd think you can already go and start operating the AWS CLI with your S3 bucket. I mean, wouldn't it be nice if it were that simple?

For those of you who are just beginning to work with Amazon S3 or AWS in general, this section aims to help you set up access to S3 and configure an AWS CLI profile.

The full documentation for creating an IAM user in AWS can be found in the link below: Creating an IAM User in Your AWS Account.

Creating an IAM User with S3 Access Permission

When accessing AWS using the CLI, you will need to create one or more IAM users with enough access to the resources you intend to work with. In this section, you will create an IAM user with access to Amazon S3.

To create an IAM user with access to Amazon S3, you first need to log in to your AWS IAM console. Under the Access management group, click on Users. Next, click on Add user.

IAM Users Menu

Type in the name of the IAM user you are creating inside the User name* box, such as s3Admin. In the Access type* option, put a check on Programmatic access. Then, click the Next: Permissions button.

Set IAM user details

Next, click on Attach existing policies directly. Then, search for the AmazonS3FullAccess policy name and put a check on it. When done, click on Next: Tags.

Assign IAM user permissions

Creating tags is optional in the Add tags page, and you can just skip this and click on the Next: Review button.

IAM user tags

In the Review page, you are presented with a summary of the new account being created. Click Create user.

IAM user summary

Finally, once the user is created, you must copy the Access key ID and the Secret access key values and save them for later use. Note that this is the only time that you can see these values.

IAM user key credentials

Setting Up an AWS Profile on Your Computer

Now that you've created the IAM user with the appropriate access to Amazon S3, the next step is to set up the AWS CLI profile on your computer.

This section assumes that you have already installed the AWS CLI version 2 tool as required. For the profile creation, you will need the following information:

  • The Access key ID of the IAM user.
  • The Secret access key associated with the IAM user.
  • The Default region name corresponding to the location of your AWS S3 bucket. You can check out the list of endpoints using this link. In this article, the AWS S3 bucket is located in the Asia Pacific (Sydney) region, and the corresponding endpoint is ap-southeast-2.
  • The default output format. Use JSON for this.

To create the profile, open PowerShell, type the command below, and follow the prompts.
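The profile-creation command itself was lost from the text above; assuming you want a named profile (the name s3user below is only an example), it would look like this:

```shell
# Start the interactive prompts to store credentials under a named profile.
# "s3user" is an example profile name; substitute your own.
aws configure --profile s3user
```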

Enter the Access key ID, Secret access key, Default region name, and default output format. Refer to the demonstration below.

Configure an AWS CLI profile

Testing AWS CLI Access

After configuring the AWS CLI profile, you can confirm that the profile is working by running the command below in PowerShell.
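A minimal sketch of such a check, assuming the profile was saved under the example name s3user:

```shell
# List all S3 buckets visible to the named profile; a successful
# listing confirms the profile configuration works.
aws s3 ls --profile s3user
```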

The command above should list the Amazon S3 buckets that you have in your account. The demonstration below shows the command in action. The resulting list of available S3 buckets indicates that the profile configuration was successful.

List S3 buckets

To learn about the AWS CLI commands specific to Amazon S3, you can visit the AWS CLI Command Reference S3 page.

Managing Files in S3

With the AWS CLI, you can perform typical file management operations, such as uploading files to S3, downloading files from S3, deleting objects in S3, and copying S3 objects to another S3 location. It's all just a matter of knowing the right command, syntax, parameters, and options.

In the following sections, the environment used consists of the following.

  • Two S3 buckets, namely atasync1 and atasync2. The screenshot below shows the existing S3 buckets in the Amazon S3 console.
List of available S3 bucket names in the Amazon S3 console
  • Local directory and files located under c:\sync.
Local Directory

Uploading Individual Files to S3

When you upload files to S3, you can upload one file at a time, or upload multiple files and folders recursively. Depending on your requirements, you may choose one over the other as you deem appropriate.

To upload a file to S3, you'll need to provide two arguments (source and destination) to the aws s3 cp command.

For example, to upload the file c:\sync\logs\log1.xml to the root of the atasync1 bucket, you can use the command below.

            aws s3 cp c:\sync\logs\log1.xml s3://atasync1/          

Note: S3 bucket names are always prefixed with s3:// when used with the AWS CLI.

Run the above command in PowerShell, but change the source and destination to fit your environment first. The output should look similar to the demonstration below.

Upload file to S3

The demo above shows that the file named c:\sync\logs\log1.xml was uploaded without errors to the S3 destination s3://atasync1/.

Use the command below to list the objects at the root of the S3 bucket.
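The listing command was lost from the text above; one way to list the bucket root is with aws s3 ls, where the bucket name atasync1 matches the demo environment (substitute your own):

```shell
# List the objects at the root of the atasync1 bucket
aws s3 ls s3://atasync1/
```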

Running the command above in PowerShell would result in a similar output, as shown in the demo below. As you can see in the output below, the file log1.xml is present in the root of the S3 location.

List the uploaded file in S3

Uploading Multiple Files and Folders to S3 Recursively

The previous section showed you how to copy a single file to an S3 location. What if you need to upload multiple files from a folder and sub-folders? Surely you wouldn't want to run the same command multiple times for different filenames, right?

The aws s3 cp command has an option to process files and folders recursively: the --recursive option.

As an example, the directory c:\sync contains 166 objects (files and sub-folders).

The folder containing multiple files and sub-folders

Using the --recursive option, all the contents of the c:\sync folder will be uploaded to S3 while also retaining the folder structure. To test, use the example code below, but make sure to change the source and destination as appropriate to your environment.

You'll notice from the code below that the source is c:\sync and the destination is s3://atasync1/sync. The /sync key that follows the S3 bucket name indicates to the AWS CLI to upload the files into the /sync folder in S3. If the /sync folder does not exist in S3, it will be automatically created.

            aws s3 cp c:\sync s3://atasync1/sync --recursive          

The code above will result in the output shown in the demonstration below.

Upload multiple files and folders to S3

Uploading Multiple Files and Folders to S3 Selectively

In some cases, uploading ALL types of files is not the best option, like when you only need to upload files with specific file extensions (e.g., *.ps1). Two other options available to the cp command are --include and --exclude.

While the command in the previous section includes all files in the recursive upload, the command below will include only the files that match the *.ps1 file extension and exclude every other file from the upload.

            aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.ps1          

The demonstration below shows how the code above works when executed.

Upload files that matched a specific file extension

As another example, if you want to include multiple different file extensions, you will need to specify the --include option multiple times.

The example command below will include only the *.csv and *.png files in the copy operation.

            aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.csv --include *.png          

Running the code above in PowerShell would present you with a similar result, as shown below.

Upload files with multiple include options

Downloading Objects from S3

Based on the examples you've learned in this section, you can also perform the copy operation in reverse. Meaning, you can download objects from the S3 bucket location to the local machine.

Copying from S3 to local requires you to switch the positions of the source and the destination: the source being the S3 location and the destination being the local path, like the one shown below.

            aws s3 cp s3://atasync1/sync c:\sync          

Note that the same options used when uploading files to S3 are also applicable when downloading objects from S3 to local. For example, you can download all objects using the command below with the --recursive option.

            aws s3 cp s3://atasync1/sync c:\sync --recursive          

Copying Objects Between S3 Locations

Apart from uploading and downloading files and folders, using the AWS CLI, you can also copy or move files between two S3 bucket locations.

You'll find the command below using one S3 location as the source and another S3 location as the destination.

            aws s3 cp s3://atasync1/Log1.xml s3://atasync2/          

The demonstration below shows the source file being copied to another S3 location using the command above.

Copy objects from one S3 location to another S3 location
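To move rather than copy an object between buckets, the aws s3 mv command takes the same source and destination arguments; here is a sketch using the same demo buckets:

```shell
# Copy the object to the destination bucket, then remove it from the source
aws s3 mv s3://atasync1/Log1.xml s3://atasync2/
```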

Synchronizing Files and Folders with S3

You've learned how to upload, download, and copy files in S3 using the AWS CLI commands so far. In this section, you'll learn about one more file operation command available in the AWS CLI for S3: the sync command. The sync command only processes the updated, new, and deleted files.

There are some cases where you need to keep the contents of an S3 bucket updated and synchronized with a local directory on a server. For example, you may have a requirement to keep transaction logs on a server synchronized to S3 at an interval.

Using the command below, *.XML log files located under the c:\sync folder on the local server will be synced to the S3 location at s3://atasync1.

            aws s3 sync C:\sync\ s3://atasync1/ --exclude * --include *.xml          

The demonstration below shows that after running the command above in PowerShell, all *.XML files were uploaded to the S3 destination s3://atasync1/.

Synchronizing local files to S3

Synchronizing New and Updated Files with S3

In this next example, it is assumed that the contents of the log file Log1.xml were modified. The sync command should pick up that modification and upload the changes made to the local file to S3, as shown in the demo below.

The command to use is still the same as in the previous example.

Synchronizing changes to S3

As you can see from the output above, since only the file Log1.xml was changed locally, it was also the only file synchronized to S3.

Synchronizing Deletions with S3

By default, the sync command does not process deletions. Any file deleted from the source location is not removed at the destination. Well, not unless you use the --delete option.

In this next example, the file named Log5.xml has been deleted from the source. The command to synchronize the files will be appended with the --delete option, as shown in the code below.

            aws s3 sync C:\sync\ s3://atasync1/ --exclude * --include *.xml --delete          

When you run the command above in PowerShell, the deleted file named Log5.xml should also be deleted at the destination S3 location. The sample result is shown below.

Synchronize file deletions to S3

Summary

Amazon S3 is an excellent resource for storing files in the cloud. With the use of the AWS CLI tool, the way you use Amazon S3 is further expanded, opening the opportunity to automate your processes.

In this article, you've learned how to use the AWS CLI tool to upload, download, and synchronize files and folders between local locations and S3 buckets. You've also learned that S3 bucket contents can be copied or moved to other S3 locations, too.

There can be many more use-case scenarios for using the AWS CLI tool to automate file management with Amazon S3. You can even try to combine it with PowerShell scripting and build your own reusable tools or modules. It is up to you to find those opportunities and show off your skills.

Further Reading

  • What Is the AWS Command Line Interface?
  • What is Amazon S3?
  • How To Sync Local Files And Folders To AWS S3 With The AWS CLI

Source: https://adamtheautomator.com/upload-file-to-s3/
