Peer Motion Problems

Post Reply
stoocake
Posts: 5
Joined: Wed May 28, 2014 5:32 am

Peer Motion Problems

Post by stoocake »

Morning all

We have a problem running Peer Motion between 2x 7400 Arrays running 3.1.3 MU5. We have fibre connectivity across 2 ports on one fabric and the IMC seems happy with the configuration. We have managed to run Offline (unexported VV) migrations successfully.

However, when trying to run a Minimally Disruptive migration we are having issues with the console (versions 4.6.0 and 4.5.0 show the same behaviour).

When you run through the Peer Motion wizard, select the host/disks and point to the source CPGs, the next button says 'Finish'. When you click this, after a moment a box appears with a 'Verify' button on it. This button does not do anything, but when clicked a little exclamation mark appears in the bottom right of the IMC. This shows a Java error - a NullPointerException.

I have tried running from my desktop (Windows 7) and a management server (Windows 2012), and tried a few versions of Java on my PC (6, 7 and 8 - various builds). None have made a difference.

I can provide screenshots if they'll help.

I just wondered if anyone had seen this and managed to get around the issue?

Thanks,
Davidkn
Posts: 237
Joined: Mon May 26, 2014 7:15 am

Re: Peer Motion Problems

Post by Davidkn »

Do you mean 3.1.2 MU5, as 3.1.3 only has an MU1?

Personally I've never used Peer Motion. Have you logged a call with support, or tried the same thing through the command line?
stoocake
Posts: 5
Joined: Wed May 28, 2014 5:32 am

Re: Peer Motion Problems

Post by stoocake »

Sorry, that was a typo. Yes, 3.1.2 MU5 is the version.

I have logged a call with HP and they are recreating an environment to match ours, but I fear it will take time or they won't come up with an answer, and I'm a bit stuck for ideas. I'm going to install the client on a 2008 R2 server to see if that changes anything, but other than that I'm not sure what will work.
stoocake
Posts: 5
Joined: Wed May 28, 2014 5:32 am

Re: Peer Motion Problems

Post by stoocake »

Just an update for anyone who comes across a similar issue. The symptoms I described above occurred because there was still a volume set exported to the server.

We have a complex arrangement: clustered disks exported to two hosts via a volume set, plus extra disks presented to each individual node of the cluster. This was confusing the GUI.

I didn't suspect this was the problem at first because we managed to do around 15 migrations without the issue appearing! However, the documentation is clear that volume sets are not supported for Minimally Disruptive migrations, so I should have removed them from the start.
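For anyone wanting to check up front, a rough way to spot leftover volume-set exports is to look for the "set:" prefix in the VLUN listing before starting the wizard. This is only a sketch: the exact column layout of `showvlun -t` output, and the VV/host names below, are assumptions for illustration, not captured from a real array.

```shell
# Sample text shaped like 3PAR `showvlun -t` output (format is an assumption;
# volume-set and host-set exports are assumed to carry a "set:" prefix):
showvlun_output='Lun VVName         HostName     Port  Type
  0 set:cluster_vs set:cl_hosts ----  host_set
  1 node1_extra    node1        0:1:2 host'

# List any VLUN templates that come from a volume set; these would need to be
# removed (and the VVs exported individually) before attempting an MDM migration.
echo "$showvlun_output" | grep 'set:' || echo "no volume-set exports found"
```

On a live system you would pipe the real `showvlun -t` output through the same grep instead of the sample variable.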

Worth bearing in mind.
Post Reply