A write cache, on the other hand, can be very efficient, but it requires much higher levels of protection and testing, and at some point you still need to destage to disk to ensure consistency. That is going to take an inordinately long time for a 10TB cache. I have yet to see anyone implement anything like those kinds of capacities for a write cache, as it doesn't really make much sense. Write caches are there to absorb transient spikes, not your entire workload, and if you don't have a backend that can destage quickly enough, you'll be in a world of hurt if the cache ever fills, or indeed in certain failure scenarios.
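To put a rough number on "inordinately long", here's a back-of-envelope sketch; the 1 GB/s sustained destage rate is purely an illustrative assumption, not a figure from any particular array:

```python
# Back-of-envelope destage time for a large write cache.
# BACKEND_GBPS is an assumed sustained destage rate to the
# spinning-disk backend, not a measured or vendor figure.
CACHE_TB = 10
BACKEND_GBPS = 1.0

cache_gb = CACHE_TB * 1024          # cache size in GB
seconds = cache_gb / BACKEND_GBPS   # time to flush the whole cache
print(f"Destaging {CACHE_TB}TB at {BACKEND_GBPS}GB/s takes ~{seconds / 3600:.1f} hours")
```

Even under that generous assumption you're looking at hours of flushing, which is why a cache that size only makes sense if the backend can keep pace in normal operation.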
With the advent of inline dedupe and low-cost SSD, you now have the potential to bypass an SSD-based write or read cache completely and go straight to disk. After all, it's the same SSDs, reading and writing at the same speed, and if you can put 10TB on the floor with 4:1 dedupe on top, then maybe flash cache no longer holds the same importance it did for spinning disk. I'm not saying it won't be extremely useful, but there are now potentially other commercially viable ways to skin that particular cat.
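The capacity arithmetic behind that claim is simple; this sketch just multiplies the figures from the paragraph above and assumes the 4:1 ratio actually holds for your data, which is very workload-dependent:

```python
# Effective capacity from inline dedupe: a simple sketch.
# The 10TB raw figure and 4:1 ratio come from the discussion;
# real dedupe ratios vary heavily with the workload.
raw_tb = 10
dedupe_ratio = 4  # 4:1, i.e. 4 logical bytes stored per physical byte

effective_tb = raw_tb * dedupe_ratio
print(f"{raw_tb}TB raw at {dedupe_ratio}:1 dedupe ~ {effective_tb}TB effective")
```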
Timing of product releases isn't anywhere near as simple as a roadmap might suggest. HP aren't just looking to release features the moment they're ready; they're looking to release features that are complementary and provide synergies. That way customers don't go off at a tangent implementing a particular feature only to have to roll it back when the next one comes along. The aim is to get away from those mutually exclusive features that plague much of the competition. BTW, that's the No. 1 rule of all roadmaps: dates are very liable to change.
:-)