Last week I was messing around with the 3.10.0+ kernel and decided to try out F2FS.
I used these two simple scripts.
sdtest-block:
Code:
#!/bin/bash
# 48 MiB write test, two passes per block size.
# Sync and drop the caches before every pass so leftovers from the
# previous run don't skew the numbers.
echo "48 MiB write test, 2 passes per block size:"
for bs in 4M 2M 1M 512k 4k 2k 1k; do
    case $bs in
        4M)   count=12    ;;
        2M)   count=24    ;;
        1M)   count=48    ;;
        512k) count=96    ;;
        4k)   count=12288 ;;
        2k)   count=24576 ;;
        1k)   count=49152 ;;
    esac
    echo -e "\nBlock size = $bs"
    for pass in 1 2; do
        echo "Test $pass:"
        sync; echo 1 > /proc/sys/vm/drop_caches
        dd if=/dev/zero of=./test.dat bs=$bs count=$count
        rm ./test.dat
    done
done
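Both scripts have to be run as root (writing to /proc/sys/vm/drop_caches needs it) and from a directory on the filesystem under test, otherwise you benchmark the wrong device. For example (the paths here are just placeholders):
Code:
# run from the filesystem you want to benchmark; root is needed
# for /proc/sys/vm/drop_caches
cd /mnt/test
sudo /home/pi/sdtest-block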
sdtest-writezero:
Code:
#!/bin/bash
# Write 10/50/100 MiB of zeros at bs=1M, then read each file back.
echo -e "\n/dev/zero tests"
for size in 10 50 100; do
    echo -e "\nWrite zero bs=1M ${size}MiB"
    sync; echo 1 > /proc/sys/vm/drop_caches
    dd if=/dev/zero of=test.dat bs=1M count=$size
    echo -e "\nRead zero bs=1M ${size}MiB"
    sync; echo 1 > /proc/sys/vm/drop_caches
    dd if=test.dat of=/dev/null bs=1M count=$size
    rm test.dat
done
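One caveat when reading the write numbers below: dd reports its rate as soon as the data has been handed off to the page cache, so short buffered writes can come out well above what the card can actually sustain, which is presumably why the 100 MiB writes drop so sharply. If you want dd to flush to the card before reporting, GNU dd has a conv=fsync flag, e.g.:
Code:
# same 48 MiB write, but fsync'ed before dd prints its rate
sync; echo 1 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=./test.dat bs=1M count=48 conv=fsync
rm ./test.dat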
Here are the results. Everything was done on a SanDisk 16 GB microSDHC Mobile Ultra (30 MB/s, UHS-I, Class 10), using a 3.10.0+ kernel on a 512 MB RPi running Raspbian.
Ext4:
Code:
Ext4
Block size write test               Pass 1      Pass 2      Average
48 MiB write test, block size=4M    51.9 MB/s   51.8 MB/s   51.85 MB/s
48 MiB write test, block size=2M    50.8 MB/s   52.7 MB/s   51.75 MB/s
48 MiB write test, block size=1M    52.5 MB/s   49.2 MB/s   50.85 MB/s
48 MiB write test, block size=512k  50.0 MB/s   50.3 MB/s   50.15 MB/s
48 MiB write test, block size=4k    49.8 MB/s   50.3 MB/s   50.05 MB/s
48 MiB write test, block size=2k    34.0 MB/s   35.8 MB/s   34.90 MB/s
48 MiB write test, block size=1k    20.7 MB/s   21.2 MB/s   20.95 MB/s

Write test, block size=1M           Ext4
10 MiB write                        52.2 MB/s
50 MiB write                        50.5 MB/s
100 MiB write                       11.9 MB/s

Read test, block size=1M            Ext4
10 MiB read                         21.4 MB/s
50 MiB read                         21.5 MB/s
100 MiB read                        21.5 MB/s
F2FS:
Code:
F2FS
Block size write test               Pass 1      Pass 2      Average
48 MiB write test, block size=4M    85.7 MB/s   82.0 MB/s   83.85 MB/s
48 MiB write test, block size=2M    86.1 MB/s   84.1 MB/s   85.10 MB/s
48 MiB write test, block size=1M    81.9 MB/s   86.7 MB/s   84.30 MB/s
48 MiB write test, block size=512k  82.1 MB/s   83.2 MB/s   82.65 MB/s
48 MiB write test, block size=4k    96.8 MB/s   94.1 MB/s   95.45 MB/s
48 MiB write test, block size=2k    64.2 MB/s   62.0 MB/s   63.10 MB/s
48 MiB write test, block size=1k    41.9 MB/s   43.0 MB/s   42.45 MB/s

Write test, block size=1M           F2FS
10 MiB write                        92.6 MB/s
50 MiB write                        78.5 MB/s
100 MiB write                       14.2 MB/s

Read test, block size=1M            F2FS
10 MiB read                         21.0 MB/s
50 MiB read                         21.5 MB/s
100 MiB read                        21.4 MB/s
Small-block writes on F2FS sure look very promising! I am fighting the urge to try it out as a root fs, due to the lack of an fsck tool.
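If you just want to play with it on a spare data partition, the setup is short. A minimal sketch, assuming f2fs-tools is installed, the kernel has F2FS support (CONFIG_F2FS_FS), and /dev/mmcblk0p3 is a free example partition:
Code:
# format and mount a spare partition with F2FS
sudo mkfs.f2fs /dev/mmcblk0p3    # mkfs.f2fs is in f2fs-tools
sudo mkdir -p /mnt/f2fs
sudo mount -t f2fs /dev/mmcblk0p3 /mnt/f2fs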
On the other hand, I have another RPi which had a bad SD card that would silently corrupt data (silent read errors are the worst kind of errors!), and the system would start segfaulting. Every time I ran debsums, there was more corruption. I decided that was the perfect opportunity to try out BTRFS in raid1 mode. Besides the boot partition, I made two equally sized partitions and a btrfs raid1 on top of them; a sketch of the commands follows. Then I transferred the system onto it (you need a custom initramfs and a kernel with btrfs support). I was getting around 10 recoverable errors per hour and was so pleased with the setup that I decided to transfer it to a good SD card.
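For anyone who wants to replicate it, the raid1 part boils down to something like the following. A rough sketch, with /dev/mmcblk0p2 and /dev/mmcblk0p3 standing in for the two equally sized partitions:
Code:
# mirror both data and metadata across the two partitions
sudo mkfs.btrfs -m raid1 -d raid1 /dev/mmcblk0p2 /dev/mmcblk0p3
sudo mkdir -p /mnt/btrfs
sudo mount -o autodefrag /dev/mmcblk0p2 /mnt/btrfs

# once the system is copied over, a scrub reads both copies and
# repairs checksum errors from the good mirror
sudo btrfs scrub start /mnt/btrfs
sudo btrfs scrub status /mnt/btrfs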
Just for the sake of comparison, here are the results, but keep in mind it's raid1 on the same card. These were done on a 3.6.11+ kernel and an RPi with only 256 MB of RAM, running Raspbian:
BTRFS, raid1, autodefrag:
Code:
BTRFS (raid1, autodefrag)
Block size write test               Pass 1      Pass 2      Average
48 MiB write test, block size=4M    8.8 MB/s    6.0 MB/s    7.40 MB/s
48 MiB write test, block size=2M    8.5 MB/s    6.8 MB/s    7.65 MB/s
48 MiB write test, block size=1M    6.0 MB/s    6.4 MB/s    6.20 MB/s
48 MiB write test, block size=512k  7.6 MB/s    5.6 MB/s    6.60 MB/s
48 MiB write test, block size=4k    13.3 MB/s   6.9 MB/s    10.10 MB/s
48 MiB write test, block size=2k    3.2 MB/s    4.2 MB/s    3.70 MB/s
48 MiB write test, block size=1k    3.6 MB/s    3.8 MB/s    3.70 MB/s

Write test, block size=1M           BTRFS (raid1, autodefrag)
10 MiB write                        124.0 MB/s
50 MiB write                        10.0 MB/s
100 MiB write                       3.3 MB/s

Read test, block size=1M            BTRFS (raid1, autodefrag)
10 MiB read                         21.1 MB/s
50 MiB read                         21.9 MB/s
100 MiB read                        22.1 MB/s