advice for newbie in the 3PAR world please
Hi
My first post here, and I'm very new to 3PAR.
We've had HP install our 3PAR and now we're migrating to it from our old MSA.
I've been reading through this forum, and I've downloaded, installed, and set up VMIOAnalyzer.
Our setup is a 7200 with 40 × 10K SAS drives and 8 × 480 GB SSDs.
We have it connected to our Brocode fabric in our HP enclosure.
We have approx. 10 ESX boxes and 6 non-virtualized Windows boxes.
My question is: how do I test the 3PAR's performance using VMIOAnalyzer, and how do I test the Windows boxes?
Could anyone advise a simple way to test both my VMs and my physical Windows boxes connected to the 3PAR?
ESX boxes are mainly 5.1, a couple on 5.5. Windows mainly 2008R2 and a couple of 2003R2.
We use multipathing and have checked all datastores.
Thanks in advance.
Gerry
- Richard Siemers
- Site Admin
- Posts: 1333
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: advice for newbie in the 3PAR world please
For VMs, use VMIOAnalyzer and the instructions that come with it.
For Windows, use IOMeter - http://www.iometer.org/
GerryM wrote: We have it connected to our Brocode fabric in our HP enclosure.
Niiice....
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: advice for newbie in the 3PAR world please
Hi Richard,
Thanks for the suggestions.
I like Bro Code as well LOL.
Any useful links on how to use IOMeter properly? I'm struggling a bit to know whether what I'm doing is worthwhile.
So far I've tried 8 workers with 16 outstanding I/Os; this seems to hammer both my old and new SANs, but is it a suitable test?
Are there any ICFs for IOmeter that are recommended?
Thanks
Gerry
- Richard Siemers
- Site Admin
- Posts: 1333
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: advice for newbie in the 3PAR world please
Well, the first question is: what are you trying to accomplish with your performance testing? It sounds like you're comparing your old (live production?) storage to your new 3PAR storage.
If that is the case, I do not recommend running IOmeter against your live production assets, as it can cause latency. Instead, use your native monitoring tools for the old array to establish a baseline of IOPS, MBps, and latency... then use benchmark tools to simulate that workload on the new incoming storage.
When I don't have specifics, I use 8 KB, 50% read, 50% random (which implies 50% writes and 50% sequential) as a generic test.
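Since you asked about ICFs: in a saved IOmeter .icf file, that generic spec comes out roughly like the fragment below (the spec name is arbitrary, and the quoted comment lines document the column order: transfer size in bytes, % of disk size, % reads, % random, delay, burst, alignment, reply). It's just as easy to define the same thing in the GUI under Access Specifications.
'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
8K; 50% read; 50% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,50,50,0,1,0,0
'END access specifications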
Imposing a load that matches the IOPS and MBps of the old storage, and recording the new system's latency for comparison to the old system, is a good start. Then add servers/workers to the test until you hit roughly the 10 ms latency mark, if you can, and record the IOPS and MBps at that point.
Docs on how to use the VM tool and IOmeter are on the websites those tools are downloaded from. There are also lots of discussions on the internet about how best to emulate SQL, Oracle, or Exchange server traffic with these tools.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: advice for newbie in the 3PAR world please
If you do test the array with IOMeter, make sure you use multiple workers and that the simulated workloads have a high enough queue depth (number of outstanding I/Os) to stress the 3PAR. Otherwise you'll see poor results whilst the array sits around waiting for incoming work.
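For example, the 8 workers × 16 outstanding I/Os mentioned above keeps 8 × 16 = 128 I/Os in flight per host. As a rough sanity check via Little's Law (outstanding I/Os ≈ IOPS × latency in seconds), 128 I/Os in flight can sustain roughly 128,000 IOPS at 1 ms, or about 12,800 IOPS at 10 ms, so at sane latencies the array, rather than the queue depth, should be the limiting factor.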
Re: advice for newbie in the 3PAR world please
Hi Guys,
Thanks for the replies, guys; I appreciate the pointers on outstanding I/Os etc. and on not hammering the live production storage.
I suppose I'd like to test three things: (a) that I'm getting the performance out of this new 3PAR that I should be, i.e. that I've got it set up and configured optimally; (b) some performance figures for the bean counters, compared to our old MSA; and (c) what performance difference, if any, there is between the various RAID levels, and how I can optimise this for my setup.
However, all that said, I've now run into a showstopper. When I created some test VVs (since deleted) and exported them to my host set for all my ESX boxes, I had "auto LUN ID" ticked. Now when I export new VVs to that host set, the export finishes without error but they don't appear in the host set, and when I try to add them manually it fails, reporting a conflict on a specific LUN ID. When I googled it, people said it was because VLUN IDs need to be contiguous, and they advised against using "auto LUN ID".
I think I'll log a call!
Thanks again.
Gerry
- Attachments
- lunid.issue.jpg (17 KiB)
Re: advice for newbie in the 3PAR world please
I use auto ID all the time without issue. The problem I have seen is when you put volumes in a volume set and then export the volume set: you run into problems because the IDs then need to be contiguous. Exporting volume sets is a nightmare on 3PAR; we learned in the beginning never to export a volume set because it causes issues. If you have a volume set exported to one set of hosts (for example, an ESXi cluster) and need to export the same set to another set of hosts (a different ESXi cluster), the exact same IDs must be free on both clusters or you will get an error. We only export individual volumes to host sets and never export volume sets.
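Roughly, in CLI terms (hypothetical names; the CLI refers to VV sets and host sets with a set: prefix):
createvlun set:PRD_VOLS 10 set:ESXCLUSTER1 (whole VV set: LUN IDs 10, 11, 12, ... must all be free on every host in the set)
createvlun PRD_VOL1 10 set:ESXCLUSTER1 (single VV: only LUN 10 has to be free)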
Re: advice for newbie in the 3PAR world please
hdtvguy, you've hit the nail on the head. I hadn't properly explained it in my post, or even fully understood it, to be honest.
I've logged the call and had HP on the PC; they're off trying to re-create the issue and see if an easy workaround can be found.
I'm still getting my head around what the issue actually is, but I can see non-contiguous IDs in my host set, so I guess that's it.
I'll post the HP response when I've received it in case it helps someone else.
Gerry
- Richard Siemers
- Site Admin
- Posts: 1333
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: advice for newbie in the 3PAR world please
I strongly believe the solution will be as simple as manually assigning the LUN IDs, i.e. picking an ID that is free on all hosts for the LUN you are adding.
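Something like this (hypothetical names; verify the exact syntax in your CLI version):
showvlun -host ESXHOST01 (lists the LUN IDs already in use for that host)
createvlun MY_NEW_VV 12 set:ESXCLUSTER1 (export with an explicit LUN ID you've confirmed is free on every host in the set)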
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
- Richard Siemers
- Site Admin
- Posts: 1333
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: advice for newbie in the 3PAR world please
On a different note: you should review your naming conventions to help with that. The sky is the limit with options, and everyone has their own favorite, but this is what I do; it works great for us and helps with reporting down the road.
VV Naming Convention:
"Host or Cluster Name" _ "LUN #" _ "Optional Description"
NTDALEXCHP1_0_C
NTDALEXCHP1_1_E
NTDALEXCHP1_2_F
AIXDALORAP1_0_ASM
AIXDALORAP1_1_ASM
AIXDALORAP1_2_ASM
AIXDALORAP1_3_ASM
AIXDALORAP1_4_BACKUPS
ESXFARM1_0_PRD1 <-- I like to match ESX admin datastore name here
ESXFARM1_1_PRD2
ESXFARM1_2_PRD3
ESXFARM1_3_DEV1
ESXFARM1_4_DEV2
Pros:
You can see the LUN # right in the VV name, which, when talking to sysadmins on Windows/Linux/AIX, may be the only common ground between storage and OS for identifying what they need grown or decommissioned.
You can see the LUN # and host name, or datastore name, in all of your System Reporter VV reports.
If you use the command line to create VLUNs, all the parameters you need to know are in the VV name:
createvlun <name of VV> <LUN ID> <host or host set>
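For example, using the names above (the CLI takes a set: prefix for host sets; ESXFARM1 is assumed here to be a host set):
createvlun ESXFARM1_0_PRD1 0 set:ESXFARM1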
Cons:
You have to manually assign the correct VLUN number when exporting; luckily, the number is right there in front of you, since it's in the name of the object you're exporting.
It can make exporting multiple LUNs tricky: you can manually set a range like 12-15, and it will assign them in the order the VVs were sorted. You can preview the LUN assignments on the last page of the wizard before you commit.
-------------------
Different topic - Testing for 3 things.
A: A 7200 with 40 × 10K SAS drives and 8 × 480 GB SSDs will be difficult to benchmark with IOmeter... I assume you have Adaptive Optimization (AO) and the SSDs are for AO? Or do you have specific LUNs you plan to pin to SSD without using AO? The good news is that it's really easy to configure this correctly and get all the performance you're supposed to... the bad news is that it's not very difficult to configure it wrong and shoot yourself in the foot.
My first look would be to create 2 CPGs, possibly 3.
SAS 10K: RAID 5, set size of 4, HA magazine (I assume you only have 2 disk shelves, each full?)
SSD: RAID 5, set size of 4, HA magazine.
Set AO in Performance mode with the SAS CPG as Tier 2 and the SSD CPG as Tier 1.
If you have dev/archive-like hosts that you do not want touching your SSDs, you can create a third SAS CPG just for them.
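In rough CLI terms, something like this (hypothetical CPG/config names; 10K SAS reports as device type FC, and the createcpg/createaocfg syntax here is from the 3.1.x-era CLI, so check it against your OS version):
createcpg -t r5 -ssz 4 -ha mag -p -devtype FC CPG_SAS_R5
createcpg -t r5 -ssz 4 -ha mag -p -devtype SSD CPG_SSD_R5
createaocfg -mode Performance -t1cpg CPG_SSD_R5 -t2cpg CPG_SAS_R5 AO_PERF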
No need for IOmeter in stage A.
B: Bean-counter candy. Although not mentioned, I suggest everything be thin provisioned... the amount of capacity used on the new 3PAR vs. the old MSA will be a great metric. Response time/latency pre- and post-migration will also be good. I'm not sure what backup method you use, but if the MSA was a bottleneck for your backups, backup job run times will be another good metric.
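As a sketch (hypothetical VV and CPG names; -tpvv creates a thin-provisioned VV that only consumes space as data lands):
createvv -tpvv CPG_SAS_R5 ESXFARM1_5_PRD4 500g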
You could spend a lot of time and resources trying to build a representation of a real-world workload that scales up to the point of redlining the 7200, and I do not recommend running IOmeter on your production MSA. Assuming you already collect performance data on your MSA, I would compile that into a baseline of X IOPS, Y KBps, Z latency... then use IOmeter to generate roughly the same IOPS and KBps and show the reduction in latency.
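To make that concrete with purely illustrative numbers: if the MSA baseline were 3,000 IOPS at an average 8 KB transfer size, that works out to about 3,000 × 8 KB ≈ 24,000 KBps. Dial IOmeter workers and outstanding I/Os until the 3PAR is serving roughly that same 3,000 IOPS / 24,000 KBps, then compare the latency figures between the two arrays.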
The REALLY important benchmark will come after you have done your migration and AO has been doing its job for about a week. Comparing your overall metrics between the two arrays should show a significant drop in latency.
C: RAID 5 (set size of 4) is, in my opinion, the performance sweet spot.
(this graphic is at least 1 generation old.)
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.