JuiceFS is an open-source POSIX file system built on top of Redis and object storage (e.g. Amazon S3), designed and optimized for cloud-native environments. By using the widely adopted Redis and S3 as persistent storage, JuiceFS serves as a stateless middleware that enables many applications to share data easily.
Architecture | Getting Started | POSIX Compatibility | Performance Benchmark | Supported Object Storage | Status | Roadmap | Reporting Issues | Contributing | Community | Usage Tracking | License | Credits | FAQ
JuiceFS relies on Redis to store file system metadata. Redis is a fast, open-source, in-memory key-value data store, and is very suitable for storing metadata. All file data is stored in object storage through the JuiceFS client.
The storage format of a file in JuiceFS consists of three levels. The first level is called "Chunk". Each chunk has a fixed size, currently 64MiB, which cannot be changed. The second level is called "Slice". The slice size is variable; a chunk may have multiple slices. The third level is called "Block". Like the chunk, its size is fixed. By default a block is 4MiB, and you can modify it when formatting a volume (see the following section). Finally, each block is compressed, optionally encrypted, and stored into object storage.
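The per-file arithmetic implied by these defaults can be sketched as follows (a minimal illustration assuming the 64MiB chunk size and default 4MiB block size above; the `layout` helper is hypothetical, not a JuiceFS API, and slices are ignored here since their size is variable):

```python
# Illustrative sketch of the chunk/block layout described above.
# CHUNK_SIZE and BLOCK_SIZE match the defaults in the text; the helper
# below is hypothetical and not part of the JuiceFS code base.
CHUNK_SIZE = 64 << 20   # 64 MiB, fixed
BLOCK_SIZE = 4 << 20    # 4 MiB by default, configurable at format time

def layout(file_size: int) -> tuple[int, int]:
    """Return (chunks, blocks_in_last_chunk) for a file of file_size bytes."""
    chunks = -(-file_size // CHUNK_SIZE)           # ceiling division
    last = file_size - (chunks - 1) * CHUNK_SIZE   # bytes in the final chunk
    blocks_last = -(-last // BLOCK_SIZE)
    return chunks, blocks_last

# A 100 MiB file spans 2 chunks; the second chunk holds 36 MiB,
# i.e. 9 blocks of 4 MiB.
print(layout(100 << 20))  # (2, 9)
```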
You can download precompiled binaries from the releases page.
You need to install Go first, then run the following commands:
```
$ git clone https://github.com/juicedata/juicefs.git
$ cd juicefs
$ make
```
A Redis server (>= 2.2) is needed for metadata; please follow the Redis Quick Start.
macFUSE is also needed for macOS.
The last thing you need is object storage. There are many options for object storage; a local disk is the easiest way to get started.
Assuming you have a Redis server running locally, you can create a volume called test that uses it to store metadata:
```
$ ./juicefs format localhost test
```
It will create a volume with default settings. If the Redis server is not running locally, its address can be specified as a URL, for example `redis://username:password@host:6379/1`.
As JuiceFS relies on object storage to store data, you can specify an object storage system using `--storage`, `--bucket`, `--accesskey` and `--secretkey`. By default, it uses a local directory as the object store. For all the options, see `./juicefs format -h`.
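The metadata address shown earlier is an ordinary URL, so its parts can be decomposed with any URL parser. A minimal sketch in Python (the credentials and host are placeholders from the example, not real endpoints):

```python
from urllib.parse import urlparse

# Decompose a metadata address of the form redis://username:password@host:6379/1.
# The username, password and host here are placeholder values.
url = urlparse("redis://username:password@host:6379/1")
print(url.hostname)          # host
print(url.port)              # 6379
print(url.path.lstrip("/"))  # database number: 1
```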
Once a volume is formatted, you can mount it to a directory, which is called the mount point.
```
$ ./juicefs mount -d localhost ~/jfs
```
After that you can access the volume just like a local directory.
To get all the options, just run `./juicefs mount -h`.
JuiceFS passed all of the 8813 tests in the latest pjdfstest:
```
All tests successful.

Test Summary Report
-------------------
/root/soft/pjdfstest/tests/chown/00.t (Wstat: 0 Tests: 1323 Failed: 0)
  TODO passed:   693, 697, 708-709, 714-715, 729, 733
Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr 0.38 sys + 2.57 cusr 3.93 csys = 9.65 CPU)
Result: PASS
```
We performed a sequential read/write benchmark on JuiceFS, EFS and S3FS with fio; here is the result:
It shows that JuiceFS can provide 10X more throughput than the other two; read more details.
We performed a simple benchmark on JuiceFS, EFS and S3FS with mdtest; here is the result:
It shows that JuiceFS can provide significantly more metadata IOPS than the other two; read more details.
For the detailed list, see juicesync.
JuiceFS is considered beta quality, and the storage format is not stabilized yet. It is not recommended for deployment in production environments. Please test it with your use cases and give us feedback.
We use GitHub Issues to track community-reported issues. You can also contact the community to get answers.
Thank you for your contribution! Please refer to the CONTRIBUTING.md for more information.
Welcome to join the Discussion and the Slack channel to connect with JuiceFS team members and other users.
JuiceFS collects anonymous usage data by default. It only collects core metrics (e.g. the version number); no user data or other sensitive data will be collected. You can review the related code here.
These data help us understand how the community is using this project. You can easily disable reporting with the command line option `--no-usage-report`:
```
$ ./juicefs mount --no-usage-report
```
JuiceFS is open-sourced under GNU AGPL v3.0, see LICENSE.
The design of JuiceFS was inspired by Google File System, HDFS and MooseFS, thanks to their great work.
JuiceFS already supports many object storage systems; please check the list first. If your object storage is compatible with S3, you can treat it as S3. Otherwise, try reporting an issue to juicesync.
The simple answer is no. JuiceFS uses transactions to guarantee the atomicity of metadata operations, which are not well supported in cluster mode.
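The atomicity referred to here can be illustrated with a toy optimistic transaction, in the spirit of Redis's WATCH/MULTI/EXEC commands (a plain-Python sketch, not JuiceFS code and not the real Redis protocol): a multi-key metadata update either applies in full or is retried.

```python
# Toy optimistic-transaction sketch: a multi-key update is committed only
# if no concurrent writer intervened, otherwise it is retried. This is
# illustrative only; it is neither JuiceFS code nor the real Redis protocol.
class TinyKV:
    """A minimal key-value store with optimistic transactions."""
    def __init__(self, data=None):
        self.data = dict(data or {})
        self.version = 0  # bumped on every committed transaction

    def transact(self, update):
        """Retry `update` until no concurrent write intervened (WATCH-like)."""
        while True:
            seen = self.version
            new_data = update(dict(self.data))  # compute on a snapshot
            if self.version == seen:            # nothing changed meanwhile
                self.data = new_data
                self.version += 1
                return

kv = TinyKV({"inode:1": "dir", "name:/a": "1"})

def rename(snapshot):
    # Move the entry /a -> /b: both keys change together or not at all.
    snapshot["name:/b"] = snapshot.pop("name:/a")
    return snapshot

kv.transact(rename)
print(kv.data)  # {'inode:1': 'dir', 'name:/b': '1'}
```

Redis Cluster only supports transactions whose keys hash to the same slot, so cross-key metadata updates like the rename above cannot rely on it in general.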