AWS · S3 · Storage · Architecture

AWS S3 Files: Mount Your S3 Bucket as a File System

AWS S3 Files lets you mount an S3 bucket as a POSIX file system over NFS v4.1 — no SDK changes, no data copies, no sync jobs. Files written through the mount are real S3 objects. Here's how it works and how I set it up.

Tejas Gupta

April 8, 2026 · 10 min read

For over a decade, AWS had two separate storage worlds: S3 for cheap, durable object storage accessed via HTTP APIs, and file systems (EFS, FSx) for POSIX access over NFS. If your workload needed both — ML training pipelines reading datasets, AI agents sharing files, data pipelines staging results — you either rewrote your code to use the S3 SDK, ran a sync job between S3 and EFS, or used a FUSE-based workaround with significant limitations.

On April 7, 2026, AWS launched Amazon S3 Files — a managed file system backed directly by an S3 bucket. Your compute resources mount it over NFS v4.1 and use standard file system calls. Every write syncs back to your S3 bucket as a regular S3 object, accessible via SDK, CLI, or console. No agents. No sync jobs. No code changes to existing tools.

I set this up in my own account end-to-end. Here's exactly how to do it, what to watch out for, and what's actually going on under the hood.


How It Works

S3 Files is built on Amazon EFS infrastructure. When you create an S3 file system, AWS provisions NFS mount targets in your VPC. Your compute resources (EC2, ECS, EKS, Lambda) mount these targets over NFS v4.1 and see your S3 objects as files and directories.

S3 Files Architecture — How It Works
Compute clients (EC2 instances, ECS containers, Lambda) mount the EFS-powered, POSIX-compliant S3 Files file system over NFS v4.1; the file system layer syncs with the backing S3 bucket in the object storage layer.

One detail about the internals is worth clearing up first:

Not the Same as Mountpoint for S3

AWS has an open-source FUSE client called Mountpoint for Amazon S3 (and community tools like s3fs-fuse fill a similar role). S3 Files is different — it's a fully managed service with complete NFS v4.1 semantics, including writes, renames, and locks. You install no software on your instances beyond the amazon-efs-utils package (pre-installed on Amazon Linux AMIs).

Step-by-Step Setup

Here's the full walkthrough — from an existing S3 bucket to a working mount on an EC2 instance.

Prerequisites

You'll need an existing S3 bucket with versioning enabled, an EC2 instance in the VPC where you want the mount targets, and enough IAM access to edit the instance role and the relevant security groups.
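Since S3 Files requires bucket versioning, it's worth confirming that up front. A quick check (and enable, if needed) with the standard S3 CLI, replacing your-bucket with your bucket name:

check-versioning.sh
bash
# Check whether versioning is enabled on the bucket
aws s3api get-bucket-versioning --bucket your-bucket

# Enable it if the command above returns nothing or shows "Suspended"
aws s3api put-bucket-versioning \
  --bucket your-bucket \
  --versioning-configuration Status=Enabled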

Step 1: Navigate to S3 Files

S3 Files lives inside the S3 console — not EFS. Go to the S3 service, then in the left sidebar select Files, then File systems. Click Create file system.

In the create dialog, select your bucket (versioning must be enabled) and the VPC where your EC2 instance lives. Click Create file system. The file system appears immediately in Creating state:

File system in Creating state right after creation
File system created — mount targets start provisioning across AZs automatically.

Step 2: Wait for Mount Targets

AWS provisions NFS mount targets in each AZ of your VPC. Wait a few minutes until they flip to Available:

Mount targets in Available state
All mount targets showing Available — the file system is ready.

Step 3: IAM Permissions

Your EC2 instance role needs permissions for both the file system and the underlying S3 bucket. You can use the AWS managed policy AmazonS3FilesFullAccess for quick setup, or create a scoped inline policy:

s3-files-iam-policy.json
json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3FilesMount",
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:DescribeMountTargets"
      ],
      "Resource": "arn:aws:s3files:<region>:<account-id>:file-system/<fs-id>"
    },
    {
      "Sid": "S3BucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<your-bucket>",
        "arn:aws:s3:::<your-bucket>/*"
      ]
    }
  ]
}
IAM policies for S3 Files
The EC2 role needs both EFS (for the NFS mount) and S3 (for data read/write) permissions.
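If you go the inline-policy route, you can attach it to the instance role straight from the CLI; the role and policy names below are placeholders:

attach-policy.sh
bash
# Attach the scoped policy above as an inline policy on the EC2 instance role
aws iam put-role-policy \
  --role-name MyEC2InstanceRole \
  --policy-name S3FilesAccess \
  --policy-document file://s3-files-iam-policy.json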

Step 4: Security Group

The mount targets need NFS traffic on port 2049 from your EC2 instance. Add an inbound rule on the mount target's security group allowing TCP 2049 from your EC2 instance's security group (or subnet CIDR).

Security group with NFS inbound rule on port 2049
NFS inbound rule (TCP 2049) on the mount target security group.
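The same rule can be added from the CLI; both security group IDs below are placeholders:

allow-nfs.sh
bash
# Allow NFS (TCP 2049) into the mount target security group
# from the EC2 instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0mounttarget0000000 \
  --protocol tcp \
  --port 2049 \
  --source-group sg-0ec2instance0000000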

Step 5: Copy the File System ID

Back in the S3 Files console, note the File system ID (format: fs-xxxxxxxx) from the file system details page. You'll use this in the mount command. You can also click Attach on the file system to get the pre-generated mount command with the correct ID already filled in.

S3 Files file system details showing file system ID
File system created and active — copy the fs-ID shown here.

Step 6: Mount via EC2 Instance Connect

Open EC2 Instance Connect from the EC2 console to get a browser-based terminal — no SSH key setup needed. Then run:

mount-s3-files.sh
bash
# Create the mount point directory
sudo mkdir -p /home/ec2-user/s3files

# Mount the file system (replace fs-ID with yours)
sudo mount -t s3files fs-0xxxxxxxxx:/ /home/ec2-user/s3files

amazon-efs-utils is pre-installed on Amazon Linux AMIs

The mount -t s3files command requires the amazon-efs-utils package, which comes pre-installed on Amazon Linux 2 and AL2023 AMIs. On Ubuntu and other distributions it isn't in the default repositories, so build and install it from the amazon-efs-utils GitHub project first.
Successful mount in terminal via EC2 Instance Connect
Mount confirmed via EC2 Instance Connect — the file system is live and writable.
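Once the mount command returns, a quick sanity check confirms the mount point is live (the exact filesystem type reported can vary with your efs-utils version):

verify-mount.sh
bash
# Confirm the path is backed by the file system, not the root volume
findmnt /home/ec2-user/s3files

# Or check capacity and filesystem type
df -hT /home/ec2-user/s3files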

From here, standard file operations work as expected. Files written to the mount appear in the S3 bucket as regular objects within minutes:

test-s3-files.sh
bash
# Write a file through the mount
echo "Hello S3 Files" > /home/ec2-user/s3files/hello.txt

# Verify locally
ls -al /home/ec2-user/s3files/hello.txt

# Within minutes, it appears in S3 as a regular object
aws s3 ls s3://your-bucket/hello.txt

Persist the mount across reboots

Add to /etc/fstab:
fs-0xxxxxxxxx:/ /home/ec2-user/s3files s3files _netdev,tls,iam 0 0
The iam option uses the instance's IAM role automatically — no credentials to manage.
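A minimal way to add and test that entry without rebooting, using the same file system ID and mount point as above:

persist-mount.sh
bash
# Append the fstab entry (file system ID and mount point from the steps above)
echo "fs-0xxxxxxxxx:/ /home/ec2-user/s3files s3files _netdev,tls,iam 0 0" | sudo tee -a /etc/fstab

# Unmount, then remount everything in fstab to confirm the entry works
sudo umount /home/ec2-user/s3files
sudo mount -a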

Attaching from the EC2 Console

Alternatively, you can attach the file system when launching or modifying an EC2 instance directly from the EC2 console. AWS generates the user data script automatically, selecting the right subnet and mount point. This is handy when you want the instance to come up with the file system already mounted.
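You don't need to hand-write anything, but conceptually the generated script boils down to something like this sketch; the file system ID and mount path are placeholders, and the console's version may differ:

s3files-user-data.sh
bash
#!/bin/bash
# Runs at first boot: create the mount point, mount the file system,
# and persist the mount across reboots via /etc/fstab
mkdir -p /mnt/s3files
mount -t s3files fs-0xxxxxxxxx:/ /mnt/s3files
echo "fs-0xxxxxxxxx:/ /mnt/s3files s3files _netdev,tls,iam 0 0" >> /etc/fstab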


When to Use S3 Files

S3 Files is the right tool when you need file system semantics on top of S3 data — shared access across multiple compute resources, standard file I/O without SDK changes, and automatic durability through S3.

Not for these workloads

S3 Files is not a block store replacement. Don't run databases on it. For HPC/GPU clusters needing sub-millisecond IOPS, use FSx for Lustre. For on-premises NAS migration, use FSx for NetApp ONTAP. For general-purpose shared file systems without an S3 backing requirement, standard EFS is simpler.

S3 Files vs. Other Options

Aspect | EFS / FSx | S3 Files
Access protocol | NFS v4.1 / SMB / Lustre | NFS v4.1
Backing store | Internal (no S3 access) | Your S3 bucket (objects visible)
POSIX semantics | Full | Full (UID/GID in S3 metadata)
Consistency | Strong / close-to-open | NFS close-to-open
Data accessible via S3 API | No | Yes — same objects
Code changes needed | No | No
Bucket versioning required | N/A | Required
Storage cost | EFS/FSx pricing | S3 pricing (much cheaper)
Best for | NAS migration, HPC, general | ML/AI, pipelines, S3-backed apps

Pricing

You pay for three things: storage used in the file system (at S3 rates), read and write operations against the file system, and S3 API requests made while the file system synchronises with your bucket. Check the S3 pricing page for the S3 Files tier; because storage is billed at S3 rates, it works out significantly cheaper than EFS for large datasets.


Summary

The biggest practical win: existing tools that expect a file path just work. No boto3, no SDK wrappers, no sync cron. Point your tool at /mnt/s3files/ and it reads and writes S3 objects transparently.
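For instance (the paths and filenames here are purely illustrative), ordinary shell tools can read and write through the mount, and the output shows up in the bucket as objects:

use-existing-tools.sh
bash
# Any tool that takes a file path works unchanged
tar -xzf dataset.tar.gz -C /mnt/s3files/datasets/
grep -r "error" /mnt/s3files/logs/ > /mnt/s3files/reports/errors.txt

# The same data is visible as objects in the bucket
aws s3 ls s3://your-bucket/reports/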
