EFS
General information#
Important
Currently, the service is at the technology preview stage and is available only to a limited number of users. If you are ready to try the service in beta mode, please contact your manager or leave a request on the support portal.
The EFS (Elastic File System) service provides scalable, highly available file storage that can be shared by multiple instances, including instances in different VPCs and even hosts in networks outside the cloud. It does not require allocating disk space on virtual machines or maintaining complex file system configurations.
You can quickly create an EFS file system in the web interface; after that, you only need to mount it on the required instances. EFS is accessed via the Network File System (NFS) 4.1 protocol, so the file system can be mounted on any instance that supports this protocol version, and you can work with it using any compatible tools and applications.
High availability is ensured by the service architecture. If one of the NFS servers fails, it will be immediately restarted on another node. The file system is replicated across several data centers and can endure failure of any component, even an entire data center, without interrupting the service. After the failed NFS server is recovered, clients can keep using the mounted file system as neither remounting nor switching to another server is required.
The file system size can be increased to petabytes if necessary (see Usage restrictions).
Key concepts#
EFS handles two main types of resources:
File system – A file storage that can be configured to be accessed from different instances.
Mount target – An NFS server whose DNS name and IP address are used to mount the file system within an instance.
A file system is not initially associated with any VPC; the association is established when a mount target is created. When creating a mount target, you specify the subnet in which its network interface is created. One private IP address is allocated from the subnet address space and associated with the network interface. In addition, a special DNS service zone is created to access the mount target, and a DNS name is created in this zone. A DNS server within the VPC resolves this DNS record.
Once you have created a mount target in a particular VPC, you cannot create a mount target for that file system in a subnet of another VPC. A file system can have only one mount target per availability zone. For example, if you have three subnets from the same VPC in the az1 availability zone, you can create a mount target in only one of those three subnets. This mount target is accessed from other subnets in this availability zone via routing within the VPC.
A file system in a particular VPC can have as many mount targets as there are availability zones. All instances in the same VPC can access the file system via a single mount target in any of the availability zones. However, for better performance and high availability, we recommend creating one mount target in each availability zone and using the mount target in the same zone as the instance.
To access a file system in VPC1 from subnets in VPC2, you must configure routing between the subnets of these VPCs via a transit gateway. Currently, the DNS name of a mount target cannot be resolved within VPC2, so use the IP address of the mount target when mounting the file system in VPC2. This restriction will be removed in the future.
File system types#
Currently, the cloud supports only one file system type – Regional. Data in a file system of this type is replicated across three independent availability zones in the cloud. Replication is synchronous: if a synchronous write has completed within your OS, it has completed in all availability zones as well.
Attention
If writes are buffered or asynchronous, i.e. changed or new data first goes to the OS cache, the actual write to the file system may be delayed by the OS.
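To guarantee that buffered data has actually reached the file system, an application can flush its user-space buffer and then ask the kernel to commit the data. A minimal sketch of this pattern, using a temporary local file purely for illustration (on a mounted EFS path, the same calls would guarantee the write has reached all availability zones):

```python
import os
import tempfile

# Illustration only: a local temporary file stands in for a file on a
# mounted EFS path.
path = os.path.join(tempfile.mkdtemp(), "report.log")

with open(path, "w") as f:
    f.write("event: backup finished\n")  # lands in the user-space buffer first
    f.flush()                            # push the buffer to the kernel
    os.fsync(f.fileno())                 # force the kernel to commit to storage

with open(path) as f:
    print(f.read(), end="")  # event: backup finished
```

Without the `flush()`/`os.fsync()` pair, the data may sit in OS cache memory for some time before being written out.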
Data consistency#
Several instances can be attached to the same file system and several applications on different instances can use the same file simultaneously or alternately.
Each NFS client has its own file system cache where file attributes, directory structure, and some data are stored. Caches of different instances do not synchronize with each other when the file system content is updated on one of them. Moreover, clients on different instances have no information about which files are open on other instances.
The NFS server does not control the consistency of file contents, nor does it manage the file access sequence. It guarantees only close-to-open consistency: when a client closes a file, all its changes are saved to disk.
When another client opens the file after it has been closed, the current version of the file and its metadata are requested from the server, bypassing the cache. This behavior differs from that of local file systems, where closing a file does not involve synchronizing data to disk.
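The practical consequence for applications is a simple discipline: the writer must close the file, and the reader must open it afterwards. A sketch of this close-to-open pattern, shown on a local file for illustration (on NFS, the `close()`/`open()` pair is exactly what triggers the consistency guarantee):

```python
import os
import tempfile

# Illustration only: a local file stands in for a file on a mounted
# EFS path shared by two instances.
path = os.path.join(tempfile.mkdtemp(), "shared.txt")

# Writer (instance A): changes become visible to other clients only
# once the file is closed.
with open(path, "w") as f:
    f.write("v2")
# The file is now closed -- on NFS, all changes are saved to the server.

# Reader (instance B): an open() performed AFTER the writer's close()
# fetches the current version, bypassing any stale cached data.
with open(path) as f:
    print(f.read())  # v2
```

If the reader had opened the file before the writer closed it, its cache could still hold an older version.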
File locking#
Applications can use files on different instances and via different mount targets. EFS supports a locking mechanism via system calls from applications, regardless of which mount targets are used and in which subnets they are placed. This prevents a file from being opened for writing by several applications at the same time.
Note
The file locking mechanism uses advisory locks. This means that holding a lock does not prevent other applications from reading or writing the file. For correct locking and reliable file sharing, all applications must check and set locks before reading or writing a file.
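The advisory nature of the locks can be demonstrated with a short sketch. It uses `fcntl.flock` on a local file on a Linux/Unix system for illustration; a cooperating application fails to take a held lock, while an application that never checks locks writes to the file anyway:

```python
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
open(path, "w").close()

# Cooperating application: takes an exclusive advisory lock.
holder = open(path, "r+")
fcntl.flock(holder, fcntl.LOCK_EX)

# Another cooperating application (a second open file description)
# cannot take the lock while it is held...
other = open(path, "r+")
try:
    fcntl.flock(other, fcntl.LOCK_EX | fcntl.LOCK_NB)
    got_lock = True
except BlockingIOError:
    got_lock = False
print(got_lock)  # False

# ...but because the lock is advisory, an application that never calls
# flock() can still write to the file despite the held lock.
with open(path, "w") as f:
    f.write("overwritten anyway")

fcntl.flock(holder, fcntl.LOCK_UN)
```

This is why every application sharing a file must set and check locks itself: the file system will not enforce them.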
Usage restrictions#
Note
The following restrictions apply only during beta testing.
The following quotas are allocated for the service:
1 file system per project;
10 TiB of stored data.
If necessary, you can increase quotas. To do this, contact the support service.
Using the file system#
In each project, you can create your own file system that instances from the VPC can use. To mount the file system on an instance, create a mount target in any of the VPC subnets in the availability zone where the instance is placed. The mount target should be in the same VPC as the instance.
Security groups control instance access to the file system. At creation, a mount target is assigned the default security group (you can change it if necessary). The group rules must allow inbound TCP traffic to port 2049 from all instances where you want to mount the file system (by default, all instances in this security group have access to the mount target). In addition, the security group assigned to an instance must allow outbound TCP traffic to the NFS port (2049) of the mount target.
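Before attempting to mount, you can check from an instance whether the security group rules actually allow a TCP connection to the NFS port. A minimal sketch (the address passed in is hypothetical; substitute your mount target's DNS name or IP):

```python
import socket

def nfs_port_reachable(host: str, port: int = 2049, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the NFS port succeeds.

    A quick way to verify security group rules from an instance before
    mounting. `host` is the mount target's DNS name or IP address.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical mount target address:
# nfs_port_reachable("10.0.1.15")  # True if port 2049 is open
```

If the check returns False, review both the inbound rules of the mount target's security group and the outbound rules of the instance's security group.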
After creating and configuring the file system, mount targets, and security groups, mount the file system directly on the instances to enable file access (see instruction for details). Once the file system is mounted, it can be used the same way as any other POSIX-compliant file system.
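As a sketch of what the mount step involves, the helper below builds the NFS 4.1 mount command line for a mount target. This is not an official tool, and the address used in the example is hypothetical; use the DNS name or IP address shown for your mount target in the web interface:

```python
def build_mount_command(target: str, mount_point: str = "/mnt/efs") -> list[str]:
    """Build the argv for mounting an EFS file system over NFS 4.1.

    `target` is the mount target's DNS name or IP address;
    `mount_point` is an existing empty directory on the instance.
    """
    return [
        "mount",
        "-t", "nfs4",         # NFSv4 file system type
        "-o", "nfsvers=4.1",  # EFS is accessed via the NFS 4.1 protocol
        f"{target}:/",        # export root of the file system
        mount_point,
    ]

# Hypothetical mount target IP:
print(" ".join(build_mount_command("10.0.1.15")))
# mount -t nfs4 -o nfsvers=4.1 10.0.1.15:/ /mnt/efs
```

On the instance itself, the resulting command is run with root privileges (e.g. via `sudo`); when mounting from another VPC, pass the mount target's IP address rather than its DNS name.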
Note
To manage the file system, the user must be assigned the EFSFullAccess policy or granted the corresponding privileges.
Creating a file system#
To create a file system:
Go to the section EFS File systems.
Click Create.
In the window that opens, set the Name tag (optional).
If you want to specify other tags, click Add tags to go to the next step and add the tags. Tags can also be assigned after creating the file system.
Click Create to complete the file system creation.
Create a mount target#
To create a mount target:
Go to the section EFS File systems.
In the resource table, select the file system and click its ID to go to the file system page.
Open the Mount targets tab and click Create.
In the dialog window, select:
(Optional) VPC. This filter allows you to limit the selection to subnets from a specific VPC.
The subnet where the mount target will be placed.
(Optional) One or more security groups
Important
If no security group was explicitly specified, the created mount target is assigned the default security group of the VPC in which the subnet is located. If you have changed the default rules, make sure the new rules allow inbound TCP traffic to port 2049 of the mount target.
Click Create.
Repeat steps 1-5 to create additional mount targets in other availability zones.
Changing security groups#
If necessary, you can assign other security groups to the mount target instead of those specified when it was created.
Go to the section EFS File systems.
In the resource table, select the file system and click its ID to go to the file system page.
Open the Mount targets tab and click Change security groups.
To add a security group, select the group from the drop-down list. To remove a security group, click the icon next to the group ID.
Note
At least one security group should be assigned to the mount target.
Click Save to apply the changes.
Delete a mount target#
Note
Before deleting, we recommend unmounting the file system on all instances connected to this mount target. Otherwise, some operations on the root file system within instances, such as the du and df commands, may be blocked.
To delete a mount target:
Go to the section EFS File systems.
In the resource table, select the file system and click its ID to go to the file system page.
Open the Mount targets tab, select a mount target in the resource table, and click Delete. You can select multiple mount targets for deletion at the same time.
In the dialog window, confirm the action.
Delete a file system#
Attention
Once a file system has been deleted, its data cannot be recovered.
Before deleting a file system, you should delete all mount targets related to it.
Go to the section EFS File systems.
Select the file system in resource table.
Click Delete.
In the dialog window, confirm the action.