SSMC Customer Feedback

tcolbert
Posts: 9
Joined: Mon Feb 16, 2015 9:58 am

Re: SSMC Customer Feedback

Post by tcolbert »

I would like the option not to have to use .r on the end of an SSMC-created remote copy volume.

I would also like to see disk filters for a CPG.

I would like the option to log into just one 3PAR, rather than having every 3PAR use the same password.

The login screen should show which systems you are about to log into and let you choose which ones to connect to.

(Unless these features have appeared in SSMC 3.1 - I haven't used that version yet.)
wbbouldin
Posts: 49
Joined: Wed Mar 11, 2015 2:16 pm

Re: SSMC Customer Feedback

Post by wbbouldin »

tcolbert wrote:I would like the option not to have to use .r on the end of an SSMC-created remote copy volume.
It's a hold-over from IMC but you're right, it should be configurable. I'll make sure this gets on the backlog if it's not already.
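Until it is configurable in SSMC, the CLI is a possible workaround, since there the secondary volume is whatever name you admit to the Remote Copy group. A rough sketch - the group, target, and volume names below are made up, and exact syntax varies by InForm OS level (see `clihelp admitrcopyvv`):

```shell
# Sketch only - names are placeholders.
# Create the secondary VV on the target array with whatever name you want
# (it must match the primary's size), then admit the pair with an explicit
# target-side name instead of the SSMC-generated "<name>.r":
creatercopygroup mygroup target01:sync
admitrcopyvv datavol01 mygroup target01:datavol01
```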

tcolbert wrote:I would also like to see disk filters for a CPG.
Agreed. It's being scoped for a future release. Given its relatively high priority, it should make it in.
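In the meantime, the CLI already supports disk filters at CPG creation time via the -p pattern flags. A hedged sketch - the CPG name is made up; verify the flags with `clihelp createcpg` on your OS level:

```shell
# Restrict a CPG to 15K FC drives only, RAID 6, cage-level HA.
# The -p flag introduces a disk filter pattern (-devtype, -rpm, -cg, ...).
createcpg -t r6 -ha cage -p -devtype FC -rpm 15 FC15K_r6_CPG
```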

tcolbert wrote:I would like the option to log into just one 3PAR, rather than having every 3PAR use the same password.
You can do that. If you create a user/pass that only exists on one array then when you log into SSMC, you'll only see that array - even if others are connected. You don't have to log into SSMC with the same user/pass that you entered in the Admin Console. Each array can have a different user/pass if that's what you want.
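For example, a per-array login can be sketched with the CLI - the username and role below are illustrative, and roles/syntax can vary by InForm OS version (see `clihelp createuser`):

```shell
# Run this on only ONE array. Logging into SSMC with these credentials
# will then show just that array, even if others are connected.
createuser -c <password> array1admin all edit
```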

tcolbert wrote:On the login screen it should show which systems you are about to log into and give you the option of which ones to connect to.
The system filter in SSMC can be used to limit the arrays seen at any given time. That's functionally equivalent to connecting to individual arrays in IMC. The benefit is that you don't have to remember the credentials for each array and enter them each time you login.

Thanks for taking the time to list your suggestions. The SSMC team very much appreciates all of the feedback from users in this group.
tcolbert
Posts: 9
Joined: Mon Feb 16, 2015 9:58 am

Re: SSMC Customer Feedback

Post by tcolbert »

Another piece of feedback (which I mentioned to a 3PAR representative at a recent HPE event - he said he would get back to me but didn't): when you create a VV within a 3-tier AO configuration, SSMC defaults the CPG to the SSD tier, which is against best practice. It also defaults the Copy CPG to the SSD tier, and the option to change this is hidden under the advanced options (all this as of SSMC 3.0).

I like that the AO configuration is highlighted for use (it wasn't in IMC), but if you just go with the defaults, a small SSD tier could fill up very quickly.
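Until the defaults change, one way around it is to create the VV from the CLI with the user and copy CPGs stated explicitly. A sketch with made-up CPG and volume names (check `clihelp createvv` for your OS level):

```shell
# Thin VV placed on the FC tier, with snapshot/copy space also on FC;
# AO can still move regions between tiers afterwards.
createvv -tpvv -snp_cpg FC_r5_CPG FC_r5_CPG ao_vol_01 500g
```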
audiojim
Posts: 13
Joined: Wed Dec 17, 2014 4:01 pm

Re: SSMC Customer Feedback

Post by audiojim »

I think I just realized something in the SSMC that had been perplexing me for a while:

"Where was it getting its 'Device Type Capacity' data on the Dashboard???"

I assumed it must surely be different from the "Raw Capacity" panel, but it turns out it isn't. SSMC does a better job of labeling it under the System view, where it actually labels it "SSD Allocated", "NL Allocated", etc.

Knowing how much you've actually used AND how much you've allocated are both useful. However, one panel shows the value in GiB/TiB/etc. and the other as a percentage. Renaming things to make the distinction clearer, and showing both the GiB/TiB value and the percentage in each panel, would be much more helpful.

EDIT - Actually the percentages from the "Device Type Capacity" seem to match the data from the "Raw Capacity", so now I may be even more confused. One spot calls it Allocated and another one calls it Raw??? And the "Wide Striping View" under the System's Layout shows a different usage...perhaps this one is actually the Raw Usage?

Ugh.
wbbouldin
Posts: 49
Joined: Wed Mar 11, 2015 2:16 pm

Re: SSMC Customer Feedback

Post by wbbouldin »

The Device Type Capacity and Raw Capacity panels are showing pretty much the same information just in different ways. The Raw Capacity panel is more recent and was designed based on input from several user feedback sessions. Those users said that seeing the values was more important than the percentages but they wanted the bar graph to get a relative sense of how much was being used vs. the total.

The Wide Striping View under Systems/Layout is meant to show how data has been spread across cages and drives within the cages. If the percentages are vastly different between cages or drives within a cage (and you've followed best practices for populating the drives in the cages) then it's probably time to tune the system.
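For anyone who wants to double-check that from the CLI, something like the following is a rough equivalent - behavior and availability vary by InForm OS version (see `clihelp` for details):

```shell
# Chunklet usage per physical disk - uneven "Used" columns across
# comparable drives suggest the layout needs rebalancing.
showpd -c

# Rebalance capacity across nodes, cages, and drives (3.1.2+; run during
# a quiet period, as it moves data in the background).
tunesys
```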

If we ignore the DTC panel and just work on improving the Raw Capacity panel, what changes would you like to see?

I appreciate your feedback!
wbbouldin
Posts: 49
Joined: Wed Mar 11, 2015 2:16 pm

Re: SSMC Customer Feedback

Post by wbbouldin »

tcolbert wrote:...in a 3 tier configuration SSMC defaults the CPG to the SSD tier which is against best practice.
I just tried this out with SSMC 3.1 and it's aligned to the best practices. When there are 3 tiers with SSD, FC, and NL drives, the default is the FC CPG. When there are 2 tiers with SSD and NL drives, the default is the SSD tier. The default copy CPGs mirror the default CPGs in both cases.

Sorry no one got back to you (hope that wasn't me!) but thanks for your continued feedback in this forum.
c84nks
Posts: 3
Joined: Fri Jun 02, 2017 10:55 am

Re: SSMC Customer Feedback

Post by c84nks »

Hi - I'm new to this forum but here's my experience so far with SSMC

We have several 7xxx 3PAR arrays running 3.2.1mu5 and one T800 running 3.1.3mu3. As the T800 is not supported in SSMC, we are still using IMC 4.7.0 and SR 3.1mu4. We will be decommissioning the T800 shortly, so I have been using SSMC 3.1 in parallel. The old IMC is very slow when working with multiple arrays, but I find its lists much easier to work with. I see SSMC has dropped event monitoring, which was always the downfall of IMC when trying to refresh with hundreds of events coming in constantly.

SSMC's ability to search across arrays is a big bonus, but a major concern is the System Reporter side. It is no substitute for the old SR MySQL database custom reports.

Here's a scenario: we start to see high CPU usage on an array (>90%) and suspect a deduped volume is being hit hard. On the old SR I could quickly chart CPU usage and identify the busiest 16 VVs sorted by peak IOPS. On the integrated SSMC SR I can only see CPU and overall disk usage, with no way to find which VV is causing it without going through the VVs one at a time (some arrays have 900 VVs)!
Is there a way of doing this, or is it planned for a future release?
wbbouldin
Posts: 49
Joined: Wed Mar 11, 2015 2:16 pm

Re: SSMC Customer Feedback

Post by wbbouldin »

Welcome to the forum!
c84nks wrote:Is there a way of doing this or is it planned for some future release ?
I think you'll find the Performance detail view under Systems to be very useful. You can quickly switch between hosts, vvs, cpu, pds, ports, and cache and see the top n or bottom n objects per category.

Alternatively, you can create specific reports to do the same thing. The default "Exported Volumes Compare by Performance" report shows the top 10 volumes by read/write/total IOPs, service time, and bandwidth. You can create a custom report if the default report doesn't do exactly what you want.
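For what it's worth, the CLI can also get at this in real time - a hedged sketch, since flag availability varies by CLI version (confirm with `clihelp statcpu` and `clihelp statvlun`):

```shell
# Overall CPU load, one sample:
statcpu -iter 1

# Per-VV front-end I/O, summarized by VV, non-idle only, one sample;
# scan the IOPS columns to spot the busiest volumes:
statvlun -ni -vvsum -rw -iter 1
```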

If neither of those give you what you're looking for, please let me know.
nsnidanko
Posts: 116
Joined: Mon Feb 03, 2014 9:40 am

Re: SSMC Customer Feedback

Post by nsnidanko »

Hi SSMC team,

Can you please explain what tunevv in current CPG does?

[image attached]
c84nks
Posts: 3
Joined: Fri Jun 02, 2017 10:55 am

Re: SSMC Customer Feedback

Post by c84nks »

wbbouldin wrote:I think you'll find the Performance detail view under Systems to be very useful... The default "Exported Volumes Compare by Performance" report shows the top 10 volumes by read/write/total IOPs, service time, and bandwidth.


Hi - thanks for the rapid response.

Perhaps I'm missing something or need to turn on a feature somewhere.
I do not see a "Performance detail" option under Systems - only "Performance".
Also, the only Exported Volumes options under Reports are "Performance Statistics", "I/O Time and Size Distribution" and "Real Time Performance".
Could this be a limitation of InForm OS 3.2.1(mu5) - is it new in 3.2.2 onwards?

Regards, Colin
Post Reply