MDPlatts wrote: Thanks Richard - was anything mentioned about the flash-cache stuff - how it works and when it's available?
I wonder, once it's de-duped in SSD, whether it would stay de-duped once AO moves the data down a layer or two.
And whether flash-cache and de-dupe might work together to spread the feature into the other tiers: if SSD is handling all the incoming writes with the cache and de-duping them, and AO is moving them down - though obviously only if the original data is still in the SSD.
I second that about the flash cache.
As far as deduped data staying deduped goes, my guess is there's no way to make that happen.
Think about it like this:
Let's say you have a 3 tier 7400 with SSD/FC/NL. You provision a VV out of the SSD CPG which has dedupe turned on.
When one of your hosts issues a write, the data is sent down the wire to the 3PAR. The 3PAR accepts your data in cache and acknowledges the write. Before flushing the cache to disk, the 3PAR performs a dedupe lookup against the high-speed index/table. If the lookup is successful (meaning the 16KB block is duplicate data), then rather than write the data, the array writes a pointer and moves on with life.
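To make the write path above concrete, here's a minimal sketch in Python. The 16KB block size comes from the post; the hash choice, the in-memory dict standing in for the "high speed index", and all the names are my assumptions for illustration - the real 3PAR internals aren't public.

```python
import hashlib

BLOCK_SIZE = 16 * 1024  # the 16KB dedupe granularity mentioned above

class DedupeStore:
    """Toy model of an inline-dedupe block store (illustrative only)."""

    def __init__(self):
        self.index = {}    # "high speed index": fingerprint -> physical block id
        self.blocks = []   # stands in for physical media
        self.lba_map = {}  # logical address -> physical block id (the "pointer")

    def write(self, lba, data):
        assert len(data) == BLOCK_SIZE
        fp = hashlib.sha256(data).digest()  # fingerprint the incoming block
        if fp in self.index:
            # Lookup succeeded: duplicate data, so write only a pointer.
            self.lba_map[lba] = self.index[fp]
            return "pointer"
        # New data: write the block and record it in the index.
        self.blocks.append(data)
        pbid = len(self.blocks) - 1
        self.index[fp] = pbid
        self.lba_map[lba] = pbid
        return "data"

store = DedupeStore()
block = b"x" * BLOCK_SIZE
print(store.write(0, block))  # "data"    - first copy actually hits the media
print(store.write(1, block))  # "pointer" - the duplicate becomes a pointer
print(len(store.blocks))      # 1 physical block now backs 2 logical addresses
```

The point is just that after the second write, two logical addresses share one physical block - which is exactly the sharing that a later tier move has to deal with.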
Fast forward a few hours, when no one cares about the UAT copy of the SQL database you just made anymore (not even enough to delete it). AO will kick in and identify that the copy needs to be moved down to FC. AO will read the 128MB region (getting there through the pointer that dedupe created), write the 128MB region to the FC CPG, redirect the metadata, and remove the pointer.
So what just happened is that the AO process rehydrated your data.
Unless there is something else going on in the background (some secret sauce that literally no one has ever heard of, because, let's remember, dedupe isn't exactly a new feature even if it's new to us), I would say that once AO kicks in, your data will get rehydrated (at least the data that got moved to a lower tier).
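The rehydration effect described above can be sketched the same way. Assume (names and structures are mine, not 3PAR's) two logical regions sharing one physical SSD block; when AO demotes each region, it reads through the pointer and writes a full copy to the FC tier, which has no dedupe index of its own:

```python
# Toy model of AO demotion rehydrating deduped data (illustrative only).
ssd_blocks = {0: b"unique-data"}   # one physical block on SSD
ssd_lba_map = {100: 0, 200: 0}     # two logical regions point at it

fc_blocks = {}                     # the FC tier: no dedupe down here

def ao_demote(lba):
    """Move one logical region from SSD down to FC."""
    data = ssd_blocks[ssd_lba_map[lba]]  # read through the dedupe pointer
    fc_blocks[lba] = data                # write a full, rehydrated copy
    del ssd_lba_map[lba]                 # drop the SSD-side pointer

ao_demote(100)
ao_demote(200)
print(len(fc_blocks))  # 2 - each demoted region now carries its own full copy
```

One shared block on SSD becomes two independent copies on FC, which is the "AO process rehydrated your data" outcome in a nutshell.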
The thing with it being SSD-only is, IMO, kind of silly. I can understand needing SSDs for the index to keep the dedupe process from becoming a performance hog, but they could allow the customer to add a predetermined number of SSDs (let's say 8, or however many are necessary to give dedupe enough space to work with), dedicate those SSDs to an admin VV used only for the dedupe index, and then let you turn dedupe on for the whole array (spinning disk included).
Or am I crazy/missing something/completely stupid?
Regarding the six-nines guarantee, I generally agree that things like that (including the "Get Thin Guarantee" and the "Double VM Density Guarantee") add up to nothing more than marketing nonsense. However, if it's true that any 4-node array qualifies, and that that's all there is to it, then the fact that HP is willing to put their money where their mouth is to guarantee better reliability will help them win, particularly against small VMAX/DMX/Symmetrix deployments where the customer typically states that they need a particular array to guarantee availability. (Fat lot of good that did the state of Virginia a few years back, though, when a combination of EMC tech support incompetence and on-site staff incompetence brought the whole thing to a screeching halt.)
It's pretty much a potshot against EMC (NetApp isn't exactly renowned for their reliability), but I'll bet it wins them a few deals, particularly in the large-array market where that kind of thing makes a difference, and in the tiny-array market where some people are fooled into thinking they need that kind of availability.