Would there still be a performance increase if the network storage is SSD? Any suggestions would really be appreciated :)

Always faster local, always. It's impossible for network storage to be as fast (when it is the same storage), and SMBs rarely actually care about speed anyway. But there are so many other things to consider here and so many things to worry about (why are you using iSCSI with XenServer, why is there a Windows machine involved, why is there network storage at all, what disks and RAID will you use, what is the purpose of the setup, etc.) that we would need to do a lot of analysis before making recommendations on what would make sense for your setup.
On a 1 Gb network, moving stuff from server to server, I see 113 MB/s with adapter teams on both servers. On a fast workstation with a single 1 Gb NIC I get about 104 MB/s going to the servers, and for any single client-to-client transfer that is all you will ever see. When you throw RAID 10 SATA or SAS against SSD RAID 1/5/6, your network is going to be the bottleneck even on a quad-port NIC team; but 4 active clients doing heavy lifting will all see close to 100 MB/s at the same time, and the SSDs will handle that better than spindles because of much lower latency and higher overall IOPS capability. On a 10 Gb network you remove the network as a bottleneck and will then see the real benefit of the SSDs. The issue for most of us comes down to storage volume: RAID 10 on 6 Gb SAS with big 4+ TB spindles, or RAID 5/6 on SSD with a limit of maybe 4-6 TB of total storage. I am playing with my phone system on a pair of Samsung 1 TB 850s in RAID 1 and routinely see 500 MB/s writes; the same machine also has 16 TB of SAS RAID 10, and once you saturate the cache those drives seem to run around 300-350 MB/s. My 2-port NIC team is definitely the bottleneck on this machine.

Consider also that iSCSI has huge overhead, both from iSCSI itself and from TCP/IP (unless you do iSCSI RDMA), which takes 20-30% of your GigE connection away. So instead of the 6 Gb/s of a local SAS link, you only have around 700 Mb/s. Then we have to deal with the latency issues: SAS and SATA links have essentially zero latency, while your iSCSI network to the SAN easily has 100-1000x the latency of the SAS/SATA local connection. And then there is the latency coming from the Windows server that is serving all of this up; that OS latency is 100% new latency added to the system that was not a bottleneck with local disks.
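To put the 20-30% overhead figure into concrete numbers, here is a rough back-of-envelope sketch in Python. The 1 Gb/s GigE and 6 Gb/s SAS link speeds and the overhead range are the figures quoted above; the simple bits-to-bytes conversion ignores link-level encoding, so treat the output as ballpark only.

```python
# Ballpark throughput comparison: GigE iSCSI path vs. a local 6 Gb/s SAS link.
# Figures (1 Gb/s GigE, 6 Gb/s SAS, 20-30% iSCSI + TCP/IP overhead) come from
# the thread above; link-level encoding is ignored, so these are rough numbers.

def usable_mb_per_s(link_gbps: float, protocol_overhead: float = 0.0) -> float:
    """Usable MB/s on a link after removing a protocol-overhead fraction."""
    return link_gbps * (1.0 - protocol_overhead) * 1000 / 8

gige = 1.0   # Gb/s, a single gigabit port
sas2 = 6.0   # Gb/s, one local SAS-2 link per disk

for overhead in (0.20, 0.30):
    remaining_mbit = gige * (1.0 - overhead) * 1000   # Mb/s left on the wire
    print(f"iSCSI over GigE, {overhead:.0%} overhead: "
          f"~{remaining_mbit:.0f} Mb/s => ~{usable_mb_per_s(gige, overhead):.0f} MB/s")

print(f"Plain file copy over GigE (~10% framing overhead): "
      f"~{usable_mb_per_s(gige, 0.10):.0f} MB/s "
      f"(close to the 104-113 MB/s observed above)")
print(f"Local SAS-2 link, per disk: ~{usable_mb_per_s(sas2):.0f} MB/s")
```

Exact percentages aside, the point stands: even before latency enters the picture, a single gigabit wire caps any one transfer at roughly a tenth of what one local 6 Gb/s SAS link can carry.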
There is also the raw link-speed difference to think about. Even decently slow local disks connect over 6 Gb/s or 12 Gb/s links, with one of these per disk, while iSCSI over a GigE network has 1 Gb/s for all disks combined. That's a base difference of 600% to 1200% on the network layer alone, just as a starting point. Of course, most disks can't saturate those links, but once you have RAID cards and caches they certainly can, even if only for a little while.

Perhaps start by setting a goal for speed, from the server to the client machine, of how fast you want it to be, then find the bottlenecks keeping you from getting there now. Think about 10 Gig switches rather than 1 Gig, then think about Cat6 cabling and fibre. Local is easier to make fast for cheaper; network can be just as fast, but you need to pay a lot more. Even if your network is limited to a certain speed, you might still want blazing fast storage for something like a database, because it is important that the server itself has a lot of speed.

If you have a gigabit network, you can do roughly 80 MB per second as a ballpark, so even if the local storage can do 400 MB per second, your limit is the network. (There are ways of getting around this, though: if you have 4 ports serving 4 different networks on gigabit, each group of users gets its own gigabit path.) Finally, for the speed of the local storage itself, whether spinning or flash, you can get different levels of speed, and depending on the RAID card, processor and software you might have a bottleneck there, so make sure that is good standard kit first.
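In the spirit of that advice to set a speed goal and then hunt for the bottleneck, here is a minimal sketch in Python: it just treats the path from disks to client as a chain of stages and takes the slowest one. The stage names and throughput numbers are illustrative placeholders loosely based on figures mentioned in this thread (a ~400 MB/s local array, ~80 MB/s as the gigabit ballpark), not measurements of any real setup.

```python
# Minimal bottleneck finder: end-to-end throughput is capped by the slowest
# stage between the disks and the client. All figures are illustrative,
# loosely based on numbers quoted in this thread.

stages_1gbe = {
    "local RAID array":           400,  # MB/s the disks can deliver
    "RAID card / CPU / software": 350,  # MB/s - the 'good standard kit' check
    "gigabit network path":        80,  # MB/s ballpark for SMB over 1 GbE
}

# Same storage, but with the network upgraded to 10 GbE (~1,000 MB/s usable).
stages_10gbe = dict(stages_1gbe)
del stages_10gbe["gigabit network path"]
stages_10gbe["10 GbE network path"] = 1000

for label, stages in (("1 GbE", stages_1gbe), ("10 GbE", stages_10gbe)):
    bottleneck = min(stages, key=stages.get)
    print(f"{label}: ~{stages[bottleneck]} MB/s end to end, "
          f"limited by the {bottleneck}")
```

Which is the thread's conclusion in one line: on gigabit the network is the wall, and only once you move to 10 Gb does the speed of the local storage (and the RAID card behind it) start to matter again.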