Tutorials#
How to set up a custom domain for a website#
By default, a bucket running in the website mode is accessible at http://bucket-name.website.k2.cloud, where bucket-name is the name of the bucket. Instead of the above domain name, you can use your own custom domain name.
Setting up your third-level domain for a website#
You can use your own domain to host a website, for example, a third-level domain img.example.com in the example.com zone.
First of all, you need to create a bucket named img.example.com, upload the website content into it, and turn on website mode.
To make the website accessible at http://img.example.com, configure the DNS server of the example.com zone.
In the example.com zone settings, create a CNAME record:
img.example.com. IN CNAME img.example.com.website.k2.cloud.
After making these changes, the website will be accessible at http://img.example.com, as well as (by default) at http://img.example.com.website.k2.cloud.
Setting up your second-level domain for a website#
You can use your own domain to host a website, for example, a second-level example.com domain.
Create a bucket example.com, upload the website content into it, and turn on website mode.
To make the website accessible at http://example.com, configure the DNS server of the example.com zone.
Important
In this example, a second-level domain (which is also a root domain) of the example.com zone is used. The DNS specification does not allow the creation of a CNAME record for the root domain, but some DNS servers and services allow such records to be created. Make sure your DNS server or service supports this feature. These records are often called ALIAS or ANAME instead of CNAME.
In the example.com zone settings, create a record of the type ALIAS or ANAME (depending on what your DNS provider uses):
example.com. IN ALIAS example.com.website.k2.cloud.
Setting up HTTPS for a website#
By default, you can access a bucket with website mode enabled at the http://bucket-name.website.k2.cloud address, where bucket-name is the name of this bucket.
You can enable HTTPS support, as well as configure automatic redirect from HTTP to HTTPS.
At the moment, there is no API to automatically enable HTTPS.
Important
To enable HTTPS for a specific website, submit a request via the support portal or send an email to support@k2.cloud. Request examples are shown below.
Setting up access to the website http://bucket1.website.k2.cloud over HTTPS#
To make the website http://bucket1.website.k2.cloud accessible over HTTPS, create a bucket bucket1 and enable website mode.
Then submit a request via the support portal.
Request example
Subject: HTTPS for bucket1.website.k2.cloud
Description:
- Bucket name: *bucket1*
- Website domain: *bucket1.website.k2.cloud*
- Enable forced redirect from HTTP to HTTPS: *yes/no*
where:
- *Bucket name* – your bucket in the object storage (S3), which must be available over HTTPS in the website mode.
- *Website domain* – the website name for which a certificate will be issued to enable HTTPS.
- *Enable forced redirect from HTTP to HTTPS* – if you want users accessing the website over HTTP to be automatically redirected to its HTTPS version, select *yes*. If you want to keep them able to access the website via HTTP, select *no*.
Once the request is validated, a certificate will be issued for the specified domain name and HTTPS will be enabled. Let's Encrypt will be used as the Certificate Authority.
Setting up access to the website in a custom domain http://img.example.com over HTTPS#
To make the website http://img.example.com accessible over HTTPS, create a bucket img.example.com and enable website mode. Then, configure your DNS so that the website is accessible at http://img.example.com.
Then submit a request via the support portal.
Request example
Subject: HTTPS for img.example.com
Description:
- Bucket name: *img.example.com*
- Website domain: *img.example.com*
- Enable forced redirect from HTTP to HTTPS: *yes/no*
- Use your own certificate: *yes/no*
where:
- *Bucket name* – your bucket in the object storage (S3), which must be available over HTTPS in the website mode.
- *Website domain* – the website name for which a certificate will be issued to enable HTTPS.
- *Enable forced redirect from HTTP to HTTPS* – if you want users accessing the website over HTTP to be automatically redirected to its HTTPS version, select *yes*. If you want to keep them able to access the website via HTTP, select *no*.
- *Use your own certificate* – if you want to use your own domain for a website, you can provide us with your certificate. In this case, select *yes* and attach a certificate for the specified domain name to the request. This may be needed for testing purposes or when you want the certificate to have special attributes. If you select *no*, a certificate from the Let's Encrypt CA will be issued and used for this domain.
Setting up website redirect rules#
A bucket in website mode can be configured to redirect all or some incoming requests to other buckets or external resources.
The documentation contains the list of commands supported by the K2 Cloud S3 API and instructions for setting up the AWS CLI.
To retrieve the current website configuration of a specific bucket, use the aws s3api get-bucket-website command.
aws --endpoint-url https://s3.k2.cloud s3api get-bucket-website --bucket bucket1
{
"IndexDocument": {
"Suffix": "index.html"
}
}
To configure the bucket website, use the s3api put-bucket-website command.
There are several redirect options:
Supported redirect rule parameters#
Condition is a container for describing a condition that must be met for the specified redirect to be applied. Redirect is a container for redirect information. You can redirect requests to a different host, a different page, or via a different protocol. In case of an error, you can specify a different error code to return.

| Block | Parameter | Description |
|---|---|---|
| Condition | HttpErrorCodeReturnedEquals | The HTTP error code at which to apply a redirect. If the returned error code equals this value, the specified redirect is applied. Required if KeyPrefixEquals is not specified. |
| Condition | KeyPrefixEquals | The object key name prefix at which to apply a redirect. For example, to redirect requests to ExamplePage.html, the key prefix is ExamplePage.html. Required if HttpErrorCodeReturnedEquals is not specified. |
| Redirect | HostName | The hostname to use in a redirect request. |
| Redirect | HttpRedirectCode | The HTTP code in the response to a redirect request. |
| Redirect | Protocol | The protocol to be used to redirect requests. By default, the protocol of the original request is used. |
| Redirect | ReplaceKeyPrefixWith | The object key prefix to be used in a redirect request. For example, to redirect requests for all pages with the prefix docs/ to the prefix documents/, set KeyPrefixEquals to docs/ and ReplaceKeyPrefixWith to documents/. Cannot be used together with ReplaceKeyWith. |
| Redirect | ReplaceKeyWith | The specific object key to be used in a redirect request, such as redirecting every matching request to error.html. Cannot be used together with ReplaceKeyPrefixWith. |
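As an illustration, the two containers can be combined in one rule. The following sketch (the host name and key prefix are hypothetical values, not part of the examples below) redirects requests that return a 404 error to a report page on another host:

```json
{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "RoutingRules": [
        {
            "Condition": {
                "HttpErrorCodeReturnedEquals": "404"
            },
            "Redirect": {
                "HostName": "new-site.com",
                "ReplaceKeyPrefixWith": "report-404/"
            }
        }
    ]
}
```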
Redirect all requests to another resource#
If you want to redirect all requests to another resource, prepare a JSON file bucket1.json with the following bucket parameters.
File bucket1.json
{
"RedirectAllRequestsTo": {
"HostName": "new-site.com",
"Protocol": "http"
}
}
In this example, the bucket bucket1.website.k2.cloud is configured as a website. However, the configuration specifies that all GET requests for the bucket1.website.k2.cloud website endpoint will be redirected to the new-site.com host. Such a redirect can be useful when you have two websites – an old one old-site.com (bucket1.website.k2.cloud in our example) and a new one new-site.com – and wish to redirect all incoming requests from the old website to the new one.
aws --endpoint-url https://s3.k2.cloud s3api put-bucket-website --bucket bucket1 --website-configuration file://bucket1.json
Note
If you specify the RedirectAllRequestsTo parameter in the configuration, you cannot specify any other parameters.
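Before applying a configuration, you can sanity-check the JSON locally. A minimal sketch (assumes python3 is available on the machine; it writes the same configuration shown above and validates its syntax):

```shell
# Write the redirect configuration to bucket1.json
cat > bucket1.json <<'EOF'
{
    "RedirectAllRequestsTo": {
        "HostName": "new-site.com",
        "Protocol": "http"
    }
}
EOF

# Validate the JSON syntax locally before passing the file to the AWS CLI
python3 -m json.tool bucket1.json > /dev/null && echo "bucket1.json: valid JSON"
```

If the validation step prints an error instead, fix the file before running put-bucket-website, since the S3 API will reject malformed configurations.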
Configuring redirect rules to one or more objects#
If you want to flexibly configure redirect rules to one or more objects, add routing rules.
Suppose your bucket2 contains the following objects:
index.html
docs/site1.html
docs/site2.html
If you want to rename the docs/ folder to documents/, you need to redirect requests with the docs/ prefix to documents/. For example, a request for docs/site1.html should be redirected to documents/site1.html. To do this, update the website configuration and add a routing rule, as shown in the following JSON file bucket2.json:
File bucket2.json
{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "Error.html"
    },
    "RoutingRules": [
        {
            "Condition": {
                "KeyPrefixEquals": "docs/"
            },
            "Redirect": {
                "ReplaceKeyPrefixWith": "documents/"
            }
        }
    ]
}
aws --endpoint-url https://s3.k2.cloud s3api put-bucket-website --bucket bucket2 --website-configuration file://bucket2.json
Configuring multiple redirect rules to another resource#
If you want to use multiple redirect rules at the same time, prepare an appropriate JSON file. For example, to configure different redirect rules for the Russian and English website versions, prepare a JSON file bucket3.json:
File bucket3.json
{
"IndexDocument": {
"Suffix": "index.html"
},
"ErrorDocument": {
"Key": "error.html"
},
"RoutingRules": [
{
"Redirect": {
"ReplaceKeyWith": "ru/data.html",
"HostName": "new-site.com",
"Protocol": "https",
"HttpRedirectCode": "302"
},
"Condition": {
"KeyPrefixEquals": "ru/manual/data.html"
}
},
{
"Redirect": {
"ReplaceKeyWith": "en/data.html",
"HostName": "new-site.com",
"Protocol": "https",
"HttpRedirectCode": "302"
},
"Condition": {
"KeyPrefixEquals": "en/manual/data.html"
}
}
]
}
aws --endpoint-url https://s3.k2.cloud s3api put-bucket-website --bucket bucket3 --website-configuration file://bucket3.json
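To make the rule semantics concrete, here is a purely local sketch (ordinary shell, not an S3 API call) of how the two KeyPrefixEquals conditions above map request keys to redirect targets; KeyPrefixEquals is a prefix match, which the trailing `*` in each case pattern models:

```shell
# Preview where a request key would be redirected under the bucket3.json rules
preview_redirect() {
    case "$1" in
        ru/manual/data.html*) echo "https://new-site.com/ru/data.html (HTTP 302)" ;;
        en/manual/data.html*) echo "https://new-site.com/en/data.html (HTTP 302)" ;;
        *)                    echo "no redirect" ;;
    esac
}

preview_redirect "ru/manual/data.html"   # → https://new-site.com/ru/data.html (HTTP 302)
preview_redirect "index.html"            # → no redirect
```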
How to configure the lifecycle of objects in a bucket#
To make storing objects in a bucket cost-effective, you can customize their lifecycle. If you use a bucket to store log files or regular reports, at some point there may be too many files. In this case, you can reduce both the storage time and the time spent manually deleting objects by configuring automatic object deletion from the bucket using BucketLifecycle.
To set up regular object deletion from the bucket, describe the lifecycle rules in a JSON file. Here is an example of a rule in the lifecycle.json file, according to which objects will be automatically deleted from the bucket_with_logs bucket 30 days after they are uploaded.
Example of a rule from the file lifecycle.json
{
"Rules": [
{
"ID": "Expire old logs",
"Filter": {
"Prefix": "logs/"
},
"Status": "Enabled",
"Expiration": {
"Days": 30
}
}
]
}
aws --endpoint-url https://s3.k2.cloud s3api put-bucket-lifecycle-configuration --bucket bucket_with_logs --lifecycle-configuration file://lifecycle.json
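As a worked example of the "Days": 30 setting, the following sketch computes the approximate expiration date locally (plain date arithmetic; assumes GNU date and an arbitrary example upload date):

```shell
# With "Days": 30, an object uploaded on the given date becomes
# eligible for deletion roughly 30 days later.
UPLOADED="2024-01-01"
EXPIRES=$(date -u -d "$UPLOADED + 30 days" +%Y-%m-%d)
echo "uploaded: $UPLOADED, eligible for expiration: $EXPIRES"
# → uploaded: 2024-01-01, eligible for expiration: 2024-01-31
```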
Read more about the object storage.
How to set up Object Lock for an object version#
If you want to prevent objects and their versions from being deleted and overwritten, you can use S3 Object Lock methods. This instruction covers the basic steps to configure Object Lock. For more details on Object Lock principles and methods, see Object Lock API Manual. The AWS command-line utility is used to configure Object Lock (see how to install and configure it).
Creating a bucket#
It is possible to lock objects in a bucket only if Object Lock was enabled when the bucket was created. To enable it, specify the --object-lock-enabled-for-bucket option when creating a bucket.
aws s3api create-bucket --bucket <bucket-name> --object-lock-enabled-for-bucket --endpoint-url=https://s3.k2.cloud
Setting a default policy#
You can set a locking policy for a bucket, which by default will be applied to all uploaded objects (unless other locking parameters are specified for a particular object). The default policy sets the object version retention period and mode (COMPLIANCE or GOVERNANCE). Setting a default policy is optional, but without one you will have to set locking parameters when uploading each object you need to protect.
COMPLIANCE mode ensures strict locking: the object version cannot be deleted or overwritten until the retention period ends. In GOVERNANCE mode, the retention period can be modified by any user with privileges for the object storage. In the example below, uploaded objects are locked for 360 days in COMPLIANCE mode.
aws s3api --endpoint-url=https://s3.k2.cloud put-object-lock-configuration \
--bucket <bucket-name> \
--object-lock-configuration "ObjectLockEnabled=Enabled,Rule={DefaultRetention={Mode=COMPLIANCE,Days=360}}"
The default policy can be retrieved using the get-object-lock-configuration command:
aws s3api --endpoint-url=https://s3.k2.cloud get-object-lock-configuration \
--bucket <bucket-name>
{
"ObjectLockConfiguration": {
"ObjectLockEnabled": "Enabled",
"Rule": {
"DefaultRetention": {
"Mode": "GOVERNANCE",
"Days": 360
}
}
}
}
Uploading an object#
The locking parameters specified during upload override the default Object Lock policy for a bucket. You can use this method to lock objects immediately after the bucket is created and skip setting a default policy.
In this example, GOVERNANCE mode is set for the object to be uploaded, with a retention period until November 20, 2024.
aws s3api --endpoint-url=https://s3.k2.cloud put-object \
--bucket <bucket-name> \
--key <object-name> \
--body <path-to-file> \
--object-lock-mode GOVERNANCE \
--object-lock-retain-until-date "2024-11-20"
{
"ETag": "\"dff23456a91215d7153f85a7c127aaa9\"",
"VersionId": "<version-id>"
}
Modifying the lock parameters#
If COMPLIANCE mode is set for the object version, you can only increase the retention period. If GOVERNANCE mode is set, you can both increase and reduce the retention period, as well as change the retention mode. To reduce the retention period or change the locking mode, use the --bypass-governance-retention option.
Example of increasing the retention period.
aws s3api --endpoint-url=https://s3.k2.cloud put-object-retention \
--bucket <bucket-name> \
--key <object-name> \
--version-id <version-id> \
--retention '{ "Mode": "GOVERNANCE", "RetainUntilDate": "2024-12-01T00:00:00" }'
Example of reducing the retention period.
aws s3api --endpoint-url=https://s3.k2.cloud put-object-retention \
--bucket <bucket-name> \
--key <object-name> \
--version-id <version-id> \
--bypass-governance-retention \
--retention '{ "Mode": "GOVERNANCE", "RetainUntilDate": "2024-10-01T00:00:00" }'
The current locking parameters for the object version can be retrieved with the get-object-retention command.
aws s3api --endpoint-url=https://s3.k2.cloud get-object-retention \
--bucket <bucket-name> \
--key <object-name> \
--version-id <version-id>
{
"Retention": {
"Mode": "GOVERNANCE",
"RetainUntilDate": "2024-10-01T00:00:00.000000000Z"
}
}
Enabling Legal Hold#
You can also enable Legal Hold for an object version. This method can be applied independently of locking for a predefined period of time. When enabled, it will protect the object version from deletion even when its retention period expires.
Example of how to enable Legal Hold when uploading an object.
aws s3api --endpoint-url=https://s3.k2.cloud put-object \
--bucket <bucket-name> \
--key <object-name> \
--body <path-to-file> \
--object-lock-legal-hold-status ON
{
"ETag": "\"dff23456a91215d7153f85a7c127aaa9\"",
"VersionId": "1TrVY0XDY44WvOmMqxw04HbSIemIHtG"
}
Example of how to enable Legal Hold for an object version.
aws s3api --endpoint-url=https://s3.k2.cloud put-object-legal-hold \
--bucket <bucket-name> \
--key <object-name> \
--legal-hold Status=ON \
--version-id <version-id>
Example of how to disable Legal Hold for an object version.
Note
Legal Hold can be disabled by any project user with full privileges to the object storage.
aws s3api --endpoint-url=https://s3.k2.cloud put-object-legal-hold \
--bucket <bucket-name> \
--key <object-name> \
--version-id <version-id> \
--legal-hold Status=OFF
The current Legal Hold status for an object version can be retrieved with the get-object-legal-hold command.
aws s3api --endpoint-url=https://s3.k2.cloud get-object-legal-hold \
--bucket <bucket-name> \
--key <object-name> \
--version-id <version-id>
{
"LegalHold": {
"Status": "ON"
}
}
Deleting an object#
When an object is deleted from a bucket with versioning enabled, the latest version of the object is not deleted; only a delete marker is created. A non-locked version of the object can then be deleted, but a locked version is protected from deletion.
In COMPLIANCE mode, the locked object version cannot be deleted until the retention period expires. In GOVERNANCE mode, the version can be deleted by bypassing the lock (the x-amz-bypass-governance-retention option).
Example of deleting an object with the creation of a delete marker.
aws s3api --endpoint-url=https://s3.k2.cloud delete-object \
--bucket <bucket-name> \
--key <object-name>
{
"DeleteMarker": true,
"VersionId": "<delete-marker-id>"
}
Example of how to delete an object version in GOVERNANCE mode by bypassing the lock.
aws s3api --endpoint-url=https://s3.k2.cloud delete-object \
--bucket <bucket-name> \
--key <object-name> \
--version-id <version-id> \
--bypass-governance-retention
{
"VersionId": "<version-id>"
}
How to install s3cmd to use advanced object storage management features#
The s3cmd utility provides advanced object storage management features. Follow the steps below to install and set it up.
Before installation, get the API access settings in the K2 Cloud management console: click the user login in the top right corner and select Profile → Get API access settings.
The s3cmd utility is available in the default Ubuntu, Debian, Fedora, CentOS, and RHEL Linux repositories and can be installed using the following commands.
# yum install epel-release -y
# yum install s3cmd -y
# sudo apt-get install s3cmd
Then you have to configure s3cmd (access and secret keys can be found in API access settings):
S3cmd settings
# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: <Project ID in K2 Cloud>:<Your login to K2 Cloud>
Secret Key: XXXXXXXXXXXXXXXXXXXXXX
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: s3.k2.cloud

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.s3.k2.cloud

Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]: yes

On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: <Project ID in K2 Cloud>:<Your login to K2 Cloud>
  Secret Key: XXXXXXXXXXXXXXXXXXXXXX
  Default Region: US
  S3 Endpoint: s3.k2.cloud
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.k2.cloud
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Save settings? [y/N] y
Now you can use the s3cmd utility. To learn more about working with the utility, run s3cmd --help.