On a different note... you should review your naming conventions to help with that. The sky is the limit with options, and everyone has their own favorite, but this is what I do; it works great for us and helps with reporting down the road.
VV Naming Convention:
"Host or Cluster Name" _ "LUN #" _ "Optional Description"
NTDALEXCHP1_0_C
NTDALEXCHP1_1_E
NTDALEXCHP1_2_F
AIXDALORAP1_0_ASM
AIXDALORAP1_1_ASM
AIXDALORAP1_2_ASM
AIXDALORAP1_3_ASM
AIXDALORAP1_4_BACKUPS
ESXFARM1_0_PRD1 <-- I like to match ESX admin datastore name here
ESXFARM1_1_PRD2
ESXFARM1_2_PRD3
ESXFARM1_3_DEV1
ESXFARM1_4_DEV2
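If you build these from the CLI, the convention carries straight through to thin VV creation. A quick sketch, where the CPG name FC_r5 and the sizes are just placeholders for whatever you actually use:
createvv -tpvv FC_r5 NTDALEXCHP1_0_C 100g
createvv -tpvv FC_r5 NTDALEXCHP1_1_E 500g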
Pros:
You can see the LUN # right in the VV name, which, when talking to sysadmins on Windows/Linux/AIX, may be the only common ground between storage and the OS for identifying what they need grown or decommissioned.
You can see the LUN # and hostname, or the datastore name, inside all of your System Reporter VV reports.
If you use the command line to export VVs (create VLUNs), all the parameters you need are right there in the VV name:
createvlun <VV name> <LUN ID> <host or host set>
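For example, everything needed for the export is already in the name (host NTDALEXCHP1, LUN 2):
createvlun NTDALEXCHP1_2_F 2 NTDALEXCHP1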
Cons:
You have to manually assign the correct LUN number when exporting. Luckily, the number is right there in front of you, since it's part of the name of the object you're exporting.
It can make exporting multiple LUNs tricky: you can manually set a range like 12-15 and it will assign them in the order the VVs were sorted, but you can preview the LUN assignments on the last page of the wizard before you commit (or export each VV with an explicit LUN ID from the CLI, as sketched below).
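A quick sketch of that CLI route, reusing the example names above and spelling out each LUN ID explicitly:
createvlun ESXFARM1_3_DEV1 3 ESXFARM1
createvlun ESXFARM1_4_DEV2 4 ESXFARM1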
-------------------
Different topic - Testing for 3 things.
A: A 7200 with 40 10K SAS drives and 8x 480 GB SSDs will be difficult to benchmark with IOmeter... I assume you have Adaptive Optimization and the SSDs are for AO? Or do you have specific LUNs you plan to pin into SSD and not use AO? The good news is that it is really easy to configure this correctly and get all the performance you are supposed to... the bad news is that it's not very difficult to configure it wrong and shoot yourself in the foot.
My first look would be to create 2 CPGs, possibly 3.
SAS 10K RAID 5, set size of 4, HA magazine (I assume you only have 2 shelves of disks, and each is full?)
SSD RAID 5, set size of 4, HA magazine.
Set AO to Performance mode with the SAS CPG as Tier 2 and the SSD CPG as Tier 1.
If you have dev/archive-like hosts that you do not want touching your SSDs, you can create a 3rd SAS CPG just for them (a CLI sketch of this layout follows below).
No need for IOmeter in stage A.
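As a rough sketch of what that stage A layout looks like from the CLI (the CPG and AO config names are placeholders here, and double-check the options against your InForm OS version):
createcpg -t r5 -ssz 4 -ha mag -p -devtype FC FC_r5_AO
createcpg -t r5 -ssz 4 -ha mag -p -devtype SSD SSD_r5_AO
createaocfg -mode Performance -t1cpg SSD_r5_AO -t2cpg FC_r5_AO AO_PERF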
B: Beancounter candy. Although not mentioned, I suggest everything be thin provisioned... the amount of capacity used on the new 3PAR vs. the old MSA will be a great metric. Response time/latency pre- and post-migration will also be good. I'm not sure what backup method you use, but if the MSA was a bottleneck for your backups, backup job run times will be another good metric.
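To pull the capacity side of that comparison on the 3PAR, showvv -s reports used vs. virtual size per thin VV, and showsys -space gives the array-level view (option names from memory, so verify against your CLI version):
showvv -s
showsys -space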
You could spend a lot of time and resources trying to build a representation of a real-world workload that scales up to the point of redlining the 7200, and I do not recommend running IOmeter against your production MSA. Assuming you already collect performance data on your MSA, I would compile that into a baseline of X IOPS, Y KBps, Z latency... then use IOmeter to generate roughly the same IOPS and KBps on the 3PAR and show the reduction in latency.
The REALLY important benchmark comes after you have done your migration and AO has been doing its job for about a week. Comparing your overall metrics between the 2 arrays should show a significant drop in latency.
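For a quick live look at latency on the 3PAR side outside of System Reporter, something like the following works; treat the exact flags as a sketch and check them against your CLI version:
statvlun -ni -rw -d 30 -iter 10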
C: RAID 5 (set size of 4) is, in my opinion, the performance sweet spot.
(this graphic is at least 1 generation old.)