Server-side object encryption with customer-provided keys (SSE-C)
You can use object encryption via the S3 API, following the Amazon SSE-C specification. This means that you need to provide an encryption/decryption key with every request that accesses an object.
You can store the encryption key in Barbican, and provide it to the S3 client at runtime.
Requirements
This guide assumes familiarity with the following tools: python-openstackclient (with the python-barbicanclient plugin), pwgen, rclone version 1.54 or later, or the aws command-line interface (awscli) version 1.
Prerequisites
In order to manipulate S3 objects using SSE-C, you must:
- configure the OpenStack CLI, and
- obtain S3-compatible credentials.
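If you do not yet have S3-compatible credentials, a common way to obtain them on an OpenStack cloud is to create EC2-style credentials with the OpenStack CLI. This is only a sketch; the exact procedure for your account may differ:
# Create a new access/secret key pair
openstack ec2 credentials create
# Show existing key pairs
openstack ec2 credentials list -f value -c Access -c Secret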
Creating encryption details
According to the SSE-C specification, in order to use server-side encryption, any S3 client needs to provide three pieces of information, which it includes in the request headers for each S3 request being made:
- Encryption algorithm: the only valid option here is AES256.
- Encryption key: a valid AES-256 key, which means the key must be 32 bytes long.
- Encryption key checksum: the MD5 checksum of the encryption key. Your S3 API client uses this for integrity checks, and normally adds it automatically to your request.
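For reference, these three values correspond to the x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5 request headers defined by the SSE-C specification. The sketch below shows how a client derives the base64-encoded key and its MD5 checksum from a raw 32-byte secret; the secret_raw variable is assumed to hold the key generated in the next section:
# Base64-encode the raw 32-byte key (customer-key header)
key_b64=$(printf '%s' "${secret_raw}" | base64)
# Base64-encode the binary MD5 digest of the raw key (customer-key-MD5 header)
key_md5=$(printf '%s' "${secret_raw}" | openssl dgst -md5 -binary | base64)
echo "key: ${key_b64}  md5: ${key_md5}"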
In order to generate an encryption key and store it in Barbican, proceed as follows.
- Generate an encryption secret:
secret_raw=$(pwgen 32 1)
- Store the secret in Barbican:
barbican_secret_url=$(openstack secret store --name objectSecret --algorithm aes --bit-length 256 --payload ${secret_raw} -f value -c 'Secret href')
- Then, whenever you need to upload or download encrypted objects, retrieve the secret from Barbican:
secret=$(openstack secret get ${barbican_secret_url} -p -c Payload -f value)
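As a quick sanity check, you can confirm that the payload retrieved from Barbican matches the secret you generated, assuming both shell variables from the previous steps are still set:
[ "${secret}" = "${secret_raw}" ] && echo "Secret matches" || echo "Secret mismatch"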
Managing encrypted objects in S3
Once you have your encryption secret available, you can create and access encrypted objects. The examples below show how to do so with awscli, mc, and rclone.
awscli
- Create an S3 bucket:
aws --profile <region> \
  s3 mb s3://cleuracloud-encrypted
- Sync a directory to the S3 bucket, encrypting the files it contains on upload:
aws --profile <region> \
  s3 sync \
  ~/media/ s3://cleuracloud-encrypted \
  --sse-c AES256 \
  --sse-c-key ${secret}
- Retrieve a file from S3 and decrypt it:
aws --profile <region> \
  s3 cp \
  s3://cleuracloud-encrypted/file.png . \
  --sse-c AES256 \
  --sse-c-key ${secret}
Note that attempting to download an encrypted file without providing an encryption key results in an immediate HTTP 400 (“Bad Request”) error:
$ aws --profile <region> \
    s3 cp \
    s3://cleuracloud-encrypted/file.png .
fatal error: An error occurred (400) when calling the HeadObject operation: Bad Request
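If you want to verify that an object is stored with SSE-C, you can also inspect its metadata via the lower-level s3api subcommand, passing the same key material. This is a sketch that reuses the bucket and file names from the examples above; the response should include SSECustomerAlgorithm and SSECustomerKeyMD5 fields:
aws --profile <region> \
  s3api head-object \
  --bucket cleuracloud-encrypted \
  --key file.png \
  --sse-customer-algorithm AES256 \
  --sse-customer-key ${secret}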
mc
The following commands assume an mc alias named after your target region; a configuration sketch follows this list.
- Create an S3 bucket:
mc mb <region>/cleuracloud-encrypted
- Sync a directory to the S3 bucket, encrypting the files it contains on upload.
Note that you must specify the encryption secret as the argument to the --encrypt-key option, using the syntax <minio-alias>/<bucket-name>=<encryption-key>:
mc cp \
  --recursive \
  --encrypt-key "<region>/cleuracloud-encrypted=${secret}" \
  ~/media/ <region>/cleuracloud-encrypted
- Retrieve a file from S3 and decrypt it.
Again, specify the encryption key in the same manner:
mc cp \
  --encrypt-key "<region>/cleuracloud-encrypted=${secret}" \
  <region>/cleuracloud-encrypted/file.png .
Note that attempting to download an encrypted file without providing an encryption key results in an immediate HTTP 400 (“Bad Request”) error:
$ mc cp \
    <region>/cleuracloud-encrypted/file.png .
mc: <ERROR> Unable to validate source `<region>/cleuracloud-encrypted/file.png`.
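As mentioned above, the mc commands require an alias named after your target region. If you have not configured one yet, a sketch along the following lines creates it; the endpoint URL, scheme, and port shown here are assumptions, so substitute the S3 endpoint for your region:
mc alias set <region> \
  https://<region>.citycloud.com:8080 \
  <access key id> <secret key>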
s3cmd
You cannot use s3cmd in combination with SSE-C. s3cmd does contain a client-side encryption facility, using GnuPG for encryption. It also supports SSE-KMS, which is a different SSE flavor that is currently not available in Cleura Cloud.
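If client-side encryption is an acceptable alternative for your workflow, s3cmd can encrypt objects locally with GnuPG before upload. The following is only a minimal sketch; it assumes gpg is installed, that gpg_passphrase is set in your ~/.s3cfg, and it reuses the bucket name from the examples above:
# Encrypt locally with GnuPG, then upload
s3cmd put --encrypt file.png s3://cleuracloud-encrypted/
# Download; s3cmd decrypts the file using the configured passphrase
s3cmd get s3://cleuracloud-encrypted/file.png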
rclone
rclone supports SSE-C as of version 1.54; earlier rclone versions will not work.
- To start with, modify the section in your configuration file that is named after your target region, adding the sse_customer_algorithm option:
[<region>]
type = s3
provider = Ceph
env_auth = false
access_key_id = <access key id>
secret_access_key = <secret key>
endpoint = <region>.citycloud.com
acl = private
sse_customer_algorithm = AES256
- Create an S3 bucket:
rclone mkdir <region>:cleuracloud-encrypted
- Sync a directory to the S3 bucket, encrypting the files it contains on upload:
rclone sync ~/media/ <region>:cleuracloud-encrypted \
  --s3-sse-customer-key=${secret}
- Retrieve a file from S3 and decrypt it:
rclone copy <region>:cleuracloud-encrypted/file.png . \
  --s3-sse-customer-key=${secret}
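If you prefer not to pass --s3-sse-customer-key on every invocation, rclone also accepts the key as a configuration option in the same section. This is a sketch; note that it stores the key in plain text in your rclone configuration file, which may defeat the purpose of keeping it in Barbican:
sse_customer_algorithm = AES256
sse_customer_key = <your 32-byte encryption key>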
For more examples of how to use rclone, please refer to its reference documentation.