Rclone

Rclone is a command-line program for managing files on cloud storage, and it makes moving data from one storage platform to another straightforward. With Akave’s fully S3-compatible API, Rclone can be used to migrate data to and from the Akave Network with ease.

Pre-requisites

  1. Akave O3 credentials. These can be requested by contacting Akave at Akave Cloud Contact.

  2. Install dependencies (Requirements: Rclone)

Rclone Installation Guide

For the latest installation instructions for all operating systems, see https://rclone.org/install/

macOS Rclone install example

If you don’t already have Rclone installed, you can add it with:

brew install rclone

If Rclone is already installed and you need to upgrade it, use:

brew upgrade rclone

After installing or upgrading, confirm it’s installed using:

rclone version

Ubuntu OS Rclone install example

Rclone has a simple install script that will install the latest version of rclone.

sudo -v ; curl https://rclone.org/install.sh | sudo bash

After installing or upgrading, confirm it’s installed using:

rclone version

Configuration

Configure Rclone to use Akave’s S3 compatible API. For more information on Rclone S3 configuration, see Rclone S3.

Akave S3 Configuration

Configure Akave S3 for use with Rclone by running:

rclone config

Follow these steps to configure a new remote:

  1. Choose to configure a new remote
    • Select "n"
  2. Name your remote
    • You can name this however you like, for example: Akave
  3. Select storage type
    • Select "Amazon S3 Compliant Storage Providers..."
  4. Select provider
    • Select "Any other S3 compatible provider"
  5. Get AWS credentials from runtime
    • Enter "false". This can also be set to true depending on your preferred configuration; just make sure your environment credentials are configured correctly as described in the AWS CLI docs. Leave this as false if you’d prefer to store your credentials with Rclone (recommended).
  6. Enter Access Key ID
    • Enter the access key ID provided to you by Akave.
  7. Enter Secret Access Key
    • Enter the secret access key provided to you by Akave.
  8. Enter Region
    • Enter: akave-network
  9. Enter Endpoint
    • Enter: https://o3-rc1.akave.xyz
    • Select the endpoint corresponding to your credentials from the options provided here: Akave Environment.
  10. Location Constraint
    • Leave blank
  11. ACL
    • Choose Default
  12. Edit advanced config
    • Choose No
  13. Keep this remote?
    • Choose Yes
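
When the prompts are complete, Rclone writes the remote to its configuration file. As a reference point, the stored remote should look roughly like the sketch below (assuming the remote is named Akave and uses the endpoint shown above; the key values are placeholders):

[Akave]
type = s3
provider = Other
access_key_id = <your-access-key>
secret_access_key = <your-secret-key>
region = akave-network
endpoint = https://o3-rc1.akave.xyz

You can print the active configuration with rclone config show and locate the configuration file with rclone config file.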

Usage

Akave O3 supports many of the same commands as AWS S3 and the Akave CLI. Below is a partial list of the most commonly used commands.

Note: Replace Akave with the name you chose for your remote in the commands below.
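
If you are not sure which name you gave the remote, you can list all configured remotes with:

rclone listremotes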

Bucket Commands

  • Create Bucket:
rclone mkdir Akave:<bucket-name>
  • List Buckets:
rclone lsd Akave:
  • Delete Bucket:
rclone rmdir Akave:<bucket-name>
  • View Bucket:
rclone lsd Akave:<bucket-name>

File Commands

  • List Files:
rclone ls Akave:<bucket-name>
  • Upload File:
rclone copy <file-path> Akave:<bucket-name>
  • Download File:
rclone copy Akave:<bucket-name>/<file-name> <local-directory>
  • Delete File:
rclone deletefile Akave:<bucket-name>/<file-name>
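
As a quick end-to-end sanity check, the commands above can be chained together. This sketch uses a hypothetical bucket name my-bucket and a local file example.txt:

rclone mkdir Akave:my-bucket
rclone copy ./example.txt Akave:my-bucket
rclone ls Akave:my-bucket
rclone copy Akave:my-bucket/example.txt ./downloads
rclone deletefile Akave:my-bucket/example.txt
rclone rmdir Akave:my-bucket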

Data Migration Example

The most common use case for Rclone is to migrate data from one storage platform to another. Here is an example of how to migrate data from AWS S3 to Akave O3 using Rclone:

1. Open the Rclone configuration tool:

rclone config

2. Create a new remote for AWS S3:

Follow the prompts to create a new remote for AWS S3; Rclone’s documentation provides detailed instructions for configuring an S3 remote. The resulting remote might look like the sketch below.
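
For reference, a minimal AWS S3 remote section in rclone.conf might look like this (a sketch assuming the remote is named s3 to match the commands that follow; the key and region values are placeholders):

[s3]
type = s3
provider = AWS
access_key_id = <your-aws-access-key>
secret_access_key = <your-aws-secret-key>
region = us-east-1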

3. Migrate data from AWS S3 to Akave O3:

rclone sync s3:<bucket-name> Akave:<bucket-name> --progress

The --progress flag is optional and shows the progress of the migration as it runs.
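
If you would like to preview the transfer before running it for real, Rclone's --dry-run flag can be added to the same sync command:

rclone sync s3:<bucket-name> Akave:<bucket-name> --dry-run --progress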

4. After migration, validate the data integrity in your Akave bucket by running:

rclone check s3:<bucket-name> Akave:<bucket-name> --size-only

Expected output: If successful, the output should look something like this (where N is the number of files in your bucket):

NOTICE: S3 bucket rclone-test: 0 differences found
NOTICE: S3 bucket rclone-test: N matching files

If you see a message saying that N hashes could not be checked, this is normal and expected, as Akave does not currently support the same hash check mechanism as AWS S3.
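
As an additional optional check, you can compare the object count and total size reported on each side:

rclone size s3:<bucket-name>
rclone size Akave:<bucket-name>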
