If anyone can corroborate and say "yes, this is expected speed, there's nothing to fix" then I'll be happy to stand down and leave this here for posterity, but right now forgive me if I seem a bit sceptical. Even your example is double the speed I'm getting. There's no encryption and no authentication (on mine), so I don't see why it would only reach about a quarter of the available network throughput with no contention. I just find it hard to believe NFS would be one third of the speed of SMB. zilstat shows zeroes across the board while it transfers, so definitely no hits on the ZIL. A Linux mount defaults to async on the client side.

If by "tried setting the dataset to Async only" you mean you set sync=disabled for that dataset, then I would have expected the NFS transfer speed to increase. FYI, the NFS server works in sync mode (the SMB server works in async mode); leaving sync=standard on the dataset then allows the client to work in sync or async mode.

sudo mount -vvv -t nfs 192.168.0.99:/mnt/Bpool/test /home/chris/NFS
mount.nfs: timeout set for Sun Nov 21 09:02:21 2021
mount.nfs: trying text-based options
mount.nfs: mount(2): Protocol not supported
nfsstat -m

So what can I try? I should be seeing roughly the same speeds as SMB, right? Before working on adding things like a SLOG I want to figure out what's slowing it down at the basic level; I feel like a SLOG isn't the answer here. I've been doing hours of research on ZFS to understand how write caching, sync writes and async writes work, and I tried setting the dataset to async only, but it makes absolutely no difference at all. I have disabled atime on the dataset and that only gives me the speed I see now (yes, it was even slower before that).
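A minimal diagnostic sketch, reusing the server address and paths from the post above (adjust to your setup). The "Protocol not supported" error from mount -vvv is usually a version negotiation failure rather than a speed issue; explicitly requesting NFSv3 on the client often gets past it, and checking the dataset's sync property on the server confirms whether sync=disabled actually took effect:

```shell
# On the TrueNAS host: verify the sync mode the dataset is really using.
# Expect "disabled" if you set it; "standard" means the client decides.
zfs get sync Bpool/test

# On the client: request NFSv3 explicitly to rule out a v4 negotiation failure.
sudo mount -t nfs -o vers=3 192.168.0.99:/mnt/Bpool/test /home/chris/NFS

# Confirm what was actually negotiated (version, proto, rsize/wsize all
# matter for throughput) after the mount succeeds.
nfsstat -m
```

Small rsize/wsize values in the nfsstat -m output are worth noting; they can cap NFS throughput well below what the network allows.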
I'm aware this subject gets pretty much done to death, but unfortunately I've been searching every corner of the internet for answers to no avail.

Network card: onboard Intel gigabit, just one port, no teaming.
Hard drives: 6 x WD Red 3TB WD30EFRX-68EUZN0 in RAIDZ2.
RAM: 16GB ECC DDR3 (the max the board supports).

I've got one dataset being used by Plex and Nextcloud mounted directly as jails, and local access is as expected for SATA. I've run iozone and I'm happy with those results. A 32GB test file done locally gives reads and writes consistent with local SATA. iperf3 tests to TrueNAS over gigabit Ethernet are full speed, 900+ Mb/s both ways. I've set up an SMB dataset outside that one just for testing, and a 9GB test file copied to and from the NAS is full speed on a Windows device connected via gigabit Ethernet: roughly 90MB/s, so around 720Mb/s, which seems fine for SMB. I'm happy with that.

An NFS test from both a Raspberry Pi 4 (low-power device) and my Surface Pro 7 in Windows Subsystem for Linux (not low power) only gets about 3-4MB/s (29-34MB/s on a second check). Absolutely abysmally slow, and I have no idea why. I set up another dataset outside this one to test NFS again and I see no change, so it doesn't seem to be specific to the dataset and is something to do with NFS itself.
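To compare NFS against the SMB numbers on equal terms, a simple dd pass over the NFS mount isolates sequential throughput from whatever tool WSL or the Pi used. A sketch, assuming the mount point from the post (/home/chris/NFS is an assumption; substitute your own):

```shell
# Sequential write over the NFS mount; conv=fdatasync forces the data to
# actually reach the server before dd reports a rate, so the number is honest.
dd if=/dev/zero of=/home/chris/NFS/ddtest bs=1M count=1024 conv=fdatasync status=progress

# Drop the client's page cache first, otherwise the read test measures RAM,
# not the network.
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
dd if=/home/chris/NFS/ddtest of=/dev/null bs=1M status=progress
```

If dd over NFS shows ~90MB/s like SMB does, the bottleneck is in the original copy tool or client; if it still shows single digits, the problem is in the NFS path itself (mount options, sync behaviour, or the server's export settings).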