3 par system purchasing disk layout questions

egb
Posts: 2
Joined: Thu Feb 11, 2021 5:23 am

3 par system purchasing disk layout questions

Post by egb »

Hi all, we currently have an HP EVA 8400 with about 100TB usable storage on 450GB 15k FC disks. We want to upgrade this to a 3PAR to get better performance. The storage connects to VMware via an 8Gb FC switch.
This is a big purchase for us, so I wondered if I could get some advice on the disk layout. As a business everything comes down to cost, so it's about finding the best bang for buck we can get.
We are buying second hand from a trusted supplier we use, and it comes with a 3-year warranty as standard. The initial layout we got approval on is a mix of SSDs, 10k SAS, and NL LFF.
I've attached the layout screenshots.
My questions are:
Am I right to be worried about the 36 NL drives? I'm worried their lack of IOPS will affect the system's performance. I guess it does depend on how we slice them up, but is the 3PAR clever enough to keep these slower disks from affecting the performance of the system?
I've asked them to quote for a 16Gb FC interface on the 3PAR for future-proofing. Am I right that this will work with our 8Gb FC switch? And is it wise future-proofing? I've found it hard to convert IOPS into Fibre Channel speed. Thanks in advance, EGB
Attachments
3par 2.PNG (252.05 KiB) - 3PAR quote 2
3 par 1.PNG (438.29 KiB) - 3PAR quote 1
ailean
Posts: 392
Joined: Wed Nov 09, 2011 12:01 pm

Re: 3 par system purchasing disk layout questions

Post by ailean »

It would depend on the usage of your current system and the feature licences on the 3PAR system.

If running multiple tiers, these two features can help:

Adaptive Flash Cache - uses some of the flash space as a read cache for data on slower disks.
Adaptive Optimization - moves active sub-volume data up to faster disk (read/write) and inactive sub-volume data down to slower disk.

If you have a lot of inactive volumes, or inactive data on volumes, then that could rest on the NL drives, freeing up space on the faster drives. It depends how stable your usage is: if something suddenly gets very active it will be slow until the policy you set has a chance to notice it and move it up a tier.

Stuff you know is active can either be left on SSD or, via another policy, migrate between 10k/SSD depending on access.

I don't know what the reporting tools are like on the EVA, but you may be able to get an idea of at least how active the volumes are.

I've run NL and 15k on a 3PAR before, but we couldn't afford the features at the time to do the tiering. Since then all our arrays have had only a single tier (either 15k or SSD), so I've not had a chance to do this in practice (now that the features are inclusive :).

As to adding the 16Gb FC cards: yes, they should work fine on 8Gb switches, and you might get better performance from the SSDs with the faster cards/ASICs. The only issue I see on the 7000 series is that the 8Gb cards have 4 ports but the 16Gb cards only have 2 ports (the 7000/8000 series arrays have limited card slots for expansion).

Now that 64Gb FC switches are out you may find good deals on 16Gb ones to upgrade in the future.

Make sure to read the 3PAR Concepts and Best Practices guides. A lot of the array management side is automatic, so be careful not to bring too many manual practices over from other systems, as they can hamper the 3PAR array's ability to deliver the most optimised performance.
MammaGutt
Posts: 1578
Joined: Mon Sep 21, 2015 2:11 pm
Location: Europe

Re: 3 par system purchasing disk layout questions

Post by MammaGutt »

I have some concerns... OK, I have multiple concerns.

As you are running EVA 8400s today, you are well versed in the world of unsupported configurations. Just as the EVA 8400 was never supported above ESXi 5.1 or thereabouts, the 3PAR 7000 will not be supported above ESXi 6.7.

The EVA 8400 was 4Gb FC, and both 3PAR configurations will smash your EVA to pieces from a performance perspective unless you're doing single-threaded OLTP. With 100TB usable on 450GB 15k drives you have somewhere around 300 disks, each capable of up to 200 IOPS. The SSD tier of the 3PAR will smash that, while the FC and NL tiers will not come close.
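For anyone wanting to sanity-check the disk-count arithmetic above, here's a minimal sketch. The RAID/sparing overhead figure is an assumption for illustration, not something stated in the quote:

```python
# Rough sanity check of the EVA figures quoted above (assumed numbers,
# not vendor specs): usable capacity, drive size, per-disk IOPS ceiling.
usable_tb = 100
drive_gb = 450
raid_overhead = 0.25      # assume ~25% of raw capacity lost to RAID/sparing
per_disk_iops = 200       # typical ceiling for a 15k FC drive

raw_tb = usable_tb / (1 - raid_overhead)
disks = raw_tb * 1000 / drive_gb
print(f"~{disks:.0f} disks, ~{disks * per_disk_iops:,.0f} aggregate IOPS")
# prints something like: ~296 disks, ~59,259 aggregate IOPS
```

That lines up with the "around 300 disks" estimate in the post; an all-SSD tier reaches that aggregate with a handful of drives.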

NL drives are crap, but you are likely to have a lot of dead capacity. With AO that will, in 95+% of all scenarios, even things out nicely if done correctly. You are absolutely correct that 36 of those will not produce a lot of IOPS.

16Gbit FC on 7k systems is wasted money. As for future-proofing, the 7k is already in its end-of-life phase: there are no new features or hardware support coming, and the product is dead in the eyes of HPE in less than 2 years (Oct 31st 2022). If you want some form of future-proofing you should at least look at the 8k systems. The 8k was released almost 6 years ago, so there should be a good number of those on the second-hand market if that's where you're buying.

One thing you will notice is that the 3PAR will require significantly less power and generate significantly less heat.

And you could significantly increase usable capacity by increasing the set size. R6 4+2 = 33% RAID overhead. Something in the back of my head tells me the EVA was R5 4+1 or R6 6+2, so there were bigger set sizes there.
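The set-size overhead arithmetic is simple enough to sketch. The 10+2 row is a hypothetical larger set size added for comparison, not a configuration from the thread:

```python
# Parity overhead for a few RAID set sizes mentioned in the thread:
# overhead = parity disks / total disks in the set.
def raid_overhead(data: int, parity: int) -> float:
    return parity / (data + parity)

for label, d, p in [("R6 4+2", 4, 2), ("R5 4+1", 4, 1),
                    ("R6 6+2", 6, 2), ("R6 10+2", 10, 2)]:
    print(f"{label}: {raid_overhead(d, p):.0%} overhead")
# R6 4+2: 33% overhead
# R5 4+1: 20% overhead
# R6 6+2: 25% overhead
# R6 10+2: 17% overhead
```

Going from 4+2 to a wider set like 10+2 roughly halves the parity overhead, at the cost of longer rebuilds per set.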
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
egb
Posts: 2
Joined: Thu Feb 11, 2021 5:23 am

Re: 3 par system purchasing disk layout questions

Post by egb »

Thanks both, useful information there. I will get a quote for an 8400 series and see how much more it will be; you make a good point that the 7400 is long in the tooth.

Regarding the reduced number of 16Gb Fibre Channel ports on the 7400, are you sure this is the case? Is it not just the Fibre Channel modules replaced with 16Gb ones?
ailean
Posts: 392
Joined: Wed Nov 09, 2011 12:01 pm

Re: 3 par system purchasing disk layout questions

Post by ailean »

The FC HBA port counts were based on the last HPE QuickSpecs document I had for the 7000 series. Typically it comes down to the PCI bus speed of the slots in the controller nodes; I suspect the 7000's PCI slots could only drive 2x 16Gb ports' worth, hence 3PAR only offered a 2-port card instead of the 4-port 8Gb card.

We saw similar limits on our 9000 arrays: the 16Gb cards came with 4 ports, but the later-released 32Gb cards only had 2 ports (although there are a lot more slots on the 9000 to play with, so it's less of an issue).

The newer 8000 series (which replaced the 7000) offers 4-port 16Gb HBAs and 2-port 32Gb HBAs, probably due to a newer/faster PCI bus on the controllers.

The 8000 will certainly have a longer life (it's still available new from HPE), and the CPUs should be faster, which will get more out of the SSD storage and might leave enough overhead to look at the compression or dedupe features. But obviously a likely cost increase.
MammaGutt
Posts: 1578
Joined: Mon Sep 21, 2015 2:11 pm
Location: Europe

Re: 3 par system purchasing disk layout questions

Post by MammaGutt »

egb wrote:Thanks both, useful information there. I will get a quote for an 8400 series and see how much more it will be; you make a good point that the 7400 is long in the tooth.

Regarding the reduced number of 16Gb Fibre Channel ports on the 7400, are you sure this is the case? Is it not just the Fibre Channel modules replaced with 16Gb ones?


Different HBA and ASIC: 2-port 16Gbit or 4-port 8Gbit.

Anyway... 8Gbit is ~720MB/s per port. With 2 nodes and no additional HBAs you're still at ~3GB/s (4×720MB/s). If you can get close to that with a realistic load on a 9-year-old mid-range array, you're lucky :) Adding more ports is only useful for direct connect (no SAN fabric) or if you're going to use Peer Motion or Remote Copy.
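A quick sketch of that bandwidth arithmetic, taking the ~720MB/s effective figure per 8Gb port from the post as given (it accounts for FC encoding overhead, so it's below the raw line rate):

```python
# Back-of-envelope FC throughput using the ~720MB/s-per-8Gb-port
# figure quoted above (effective rate, not raw line rate).
mb_per_8gb_port = 720
ports = 4                 # 2 nodes x 2 onboard ports, no extra HBAs
total_mb = ports * mb_per_8gb_port
print(f"{total_mb} MB/s ≈ {total_mb / 1000:.1f} GB/s")
# 2880 MB/s ≈ 2.9 GB/s
```

Which is why the extra 16Gb card buys little here: the host fabric is 8Gb, and the array will saturate long before the ports do under a realistic mixed workload.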
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.