A day in a life with Netbackup and its beloved robots.
This is an exciting time at the moment, with a number of cool projects. One is to minimise the number of media tapes we use and the other is to reduce the backup window. The answer: disk de-duplication! I’ll be looking at the Symantec, HP StoreOnce and Quantum offerings, what features they provide and what they cost. I’ll also be looking at a basic budget system, all of which must integrate with NetBackup.
After my first meeting with a Symantec sales person (not really Symantec), it turns out there is no de-duplication product that actually reduces the amount of data that needs to be backed up to tape. After being pitched every product in the Symantec range, I did come away with some useful information:
These devices reduce the backup time and increase the backup rates. (Well, it’s disk, so that’s expected.)
It does cut the backup set by about 10:1, which is pretty good, and in some cases 20:1, but let’s not get too excited. So your 15TB of data will be around 1-4TB on disk. It uses block-by-block comparison: if two blocks are the same, only one copy is stored.
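The block-by-block idea can be sketched in a few lines. This is a toy illustration of the principle, not how any of these appliances actually implement it (real products use far more sophisticated chunking and indexing):

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once.

    Returns the block store (hash -> block) and the ordered list of hashes
    needed to reconstruct the original stream.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks are stored only once
        recipe.append(digest)
    return store, recipe

# A stream full of repeated blocks dedupes heavily:
data = b"A" * 4096 * 10 + b"B" * 4096 * 10
store, recipe = dedupe(data)
# 20 logical blocks but only 2 unique ones -> 10:1, in line with the vendor claims
print(len(recipe), len(store))
# -> 20 2
```

Backup sets full of repeated OS images and unchanged files are why the 10:1 to 20:1 ratios quoted above are plausible; highly unique data (compressed or encrypted files) dedupes far worse.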
Symantec has a couple of appliances: a media server with disks, and a de-duplication appliance. One is a bunch of disks in a chassis and the other is a bunch of disks with a media server built in. I guess there’s some software built in to optimise performance.
The de-duplication, any way you cut it, is a licence, which may or may not be included with the device. We will see what the top vendors have to offer for their BIG BUCKS! (And yes, you may need to sell your house to buy a pair!) No, I’m serious!! LOL
Our beloved library failed a few weeks ago when the picker was unable to pick tapes. It was a good 5 years old and had served us well. We would normally replace the library with a Quantum i500 or i80, but as this one served a smaller data centre and I found another Scalar 100 on eBay pretty cheap, it was worth the additional work. The eBay find had SDLT drives and we have LTO3 drives. After a little research I found that it is possible to use LTO drives in this library, but not together with SDLT drives, although I found no instructions on how to do the conversion. So here they are, after a successful implementation.
First Stage: Replacing the magazines and drives
Preparation: you need the replacement drives, the storage magazines and brackets for LTO, and the firmware. You will also need an 11mm or 7/16" nut driver, a T20 Torx driver and a small flashlight. Ensure you have access to the RMU (web interface) before continuing.
You should know that it’s not just a case of swapping out the drives; the first thing you need to do is tell the library you are removing them. This can be done from the console panel: More > Service > Drives > Repair, then select Remove. You can now safely remove the drives; failure to do this may result in errors later.
Now power down the library, unplug it, then open the main door. You will need to remove the existing magazines, which just pull out, and the brackets. The LTO slots have different grooves for the magazines, so it’s a little more work.
See pages 30-34 of the Scalar 100 user manual for instructions. Once the magazines have been swapped and the new drives installed, you can proceed to the next stage.
Second Stage: Firmware update
You will need to update the library firmware to support the LTO configuration. Quantum no longer publish it on their website and I’m told they will stop supporting the product in early 2011, so I have a copy of the last firmware issued, 6.10.004, here, which I actually used for LTO3 drives, but I offer no guarantees and accept no responsibility. Also see http://www.quantum.com/ServiceandSupport/SoftwareandDocumentationDownloads/S100/Index.aspx for more information on firmware versions.
You will need to log on to the RMU (web interface) to update the firmware; see the Scalar 100 user guide for more information. You may also need to update the RMU firmware, but I don’t have a copy of it at present; I have 191A.00002 installed and it works.
Third Stage: Library configuration
Now you have the LTO magazines in place, the LTO drives installed and the firmware updated, but the library still reports the unit as SCALAR 100 DLT. I took the following steps to resolve this:
1. From the front console navigate to More > Service > Drives > Repair and select Replace
2. Click OK, and OK again on the next two screens; you don’t need to remove the drive
3. Click Cancel back to the Service menu and select Drives
4. Change the drive to the next one and click OK
5. Repeat steps 1-4 until the library has seen all the drives as replaced
6. Next we are going to partition the library: from the main menu navigate to More > Setup > Library > Partitions
7. For the simple solution we are going to select a single partition with a mailbox
8. Click Next until the marker is on 1 part MB=I/E and click OK (see image)
9. The library will now reboot and should appear as SCALAR 100 LTO
Remember to delete the storage unit, drives and old library from the NetBackup setup, and use the wizard to re-detect the library and the new drives. But don’t delete other libraries or drives; that would be silly!
The above instructions are based on an actual implementation; I hope you found this information useful.
Today I am working to find out whether it is possible to migrate NetBackup from one server to another using a replication of the data volume where NetBackup is installed. We would also like to upgrade to Windows 2008 x64 from Windows 2003 (x86). The reason for this unsupported method of NetBackup migration is that a catalog restore takes 20-plus hours, which exceeds the maintenance window. So is there a faster way?
We are initially moving the master server from an HP server to a blade centre. We no longer require all the network cards and additional fibre ports installed in the hardware, as we recently deployed a media server that manages our library. We do, however, need an additional media server for another data centre, which will be controlled by this master server.
Let’s start with the basics of the project. The main goal is to move the master server from one server to another; both servers are connected to a SAN via fibre and have been allocated virtual volumes. The original master server has the NetBackup installation on a different volume than the OS. This volume has been replicated to a new virtual volume connected to the other server, so the catalog, volume database and all the images are an exact mirror of the original.
Summary of goals
- Move the NetBackup master server to another physical server
- Upgrade to Windows 2008 (preferably x64)
- Ensure the services are available and we have connectivity to the media servers
- Complete the migration within 5 hours.
Please note that this is a test to see whether this method is possible; it is not recommended to migrate a production environment without initial testing.
This procedure will start with a fresh installation of Windows 2008 x64 on the new host. Once the operating system is up and running, we will present the data volume to the server and try to install NetBackup 6.5.5 x64 over the current installation path.
Testing that failed
This isn’t something new; last week I tried presenting this replicated data volume to an existing Windows 2008 x64 installation with NetBackup already installed on a data volume. I stopped all the NetBackup services and replaced the data volume.
When I tried to restart the services, the application crashed. This test failed.
I’ve been on holiday for the past week, so sorry for the delay. I presented a copy of the virtual volume with the NetBackup catalog data and installation to a new Windows 2008 build. I then attempted to install the NetBackup master server over the existing installation. The installation crashed during setup and there were issues with the existing binaries.
The next attempt was to move all the data files and database configuration into a new directory and then install NetBackup. I changed the database configuration files to point to the new path; these can be found in \VERITAS\NetBackupDB\conf. All the services started except the Enterprise Media Manager, which couldn’t start because of a dependent service, Adaptive Server Anywhere – VERITAS_NB. I believe this service maps the NetBackup database, but I couldn’t work out how.
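For what it’s worth, the path edits I made amount to rewriting the old install root to the new one in each file under \VERITAS\NetBackupDB\conf. A rough sketch of that rewrite, with the roots and the sample line as placeholders rather than the actual values or real conf syntax:

```python
def rewrite_db_paths(conf_text: str, old_root: str, new_root: str) -> str:
    """Point database configuration entries at the relocated directory.

    old_root/new_root are hypothetical installation paths, not the
    actual values from my environment.
    """
    return conf_text.replace(old_root, new_root)

# A line of the sort you might find in a conf file (hypothetical content):
line = r"databases D:\Program Files\VERITAS\NetBackupDB\data"
print(rewrite_db_paths(line, r"D:\Program Files\VERITAS", r"E:\NBU\VERITAS"))
# -> databases E:\NBU\VERITAS\NetBackupDB\data
```

A blind string replace like this is exactly why the attempt was fragile: anything that records the path outside these files (the registry, service definitions, the ASA database itself) is left pointing at the old location.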
I had to abandon the installation, and we will attempt to restore the catalog from tape again.
If I had more time to work on this, I would try the installation again and copy just the data configuration files and catalog back to their original location after the installation on Windows 2008.
I’ve been running into issues with DFSR and VSS which cause Shadow Copy Components backups to fail because one of the VSS writers had disappeared:
2 x Windows 2008 Enterprise x64 Servers
Disk Space: C: 40GB, D: 100GB
Services: DFSR, Citrix XenApp 5.0, Netbackup, Terminal Server.
The Issue: NetBackup had been failing with ERROR 71 (path not found) when we tried to back up Shadow Copy Components, and we also noticed Event ID 8192 errors in the application log from VSS. It appeared that the Shadow Copy Optimization Writer had disappeared from the list of VSS writers, and this is a dependency of NetBackup’s API. In this case I opened a case with Microsoft to troubleshoot further.
Netbackup Error: 71
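Before opening a case, a quick way to confirm a writer really is gone is to run `vssadmin list writers` and check its output. A small sketch that does the check on the captured text (the sample here is abbreviated, not a full capture):

```python
def missing_writers(vssadmin_output: str, required: list) -> list:
    """Return the required VSS writers that don't appear in the
    output of `vssadmin list writers`."""
    return [w for w in required if f"Writer name: '{w}'" not in vssadmin_output]

# Abbreviated sample of what the command prints on a healthy box
sample = """\
Writer name: 'System Writer'
Writer name: 'Registry Writer'
"""
print(missing_writers(sample, ["Shadow Copy Optimization Writer"]))
# -> ['Shadow Copy Optimization Writer'], i.e. the writer is missing
```

On the failing server the Shadow Copy Optimization Writer was absent from the real output exactly like this, which is what pointed us away from NetBackup and towards VSS itself.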
Resolution: We were replicating the local profiles for the Citrix XenApp environment with DFSR so that saved documents would appear in both locations, on Server A and Server B. The trouble with this method is that DFSR leaves open handles, and as a result temporary profiles are created. In some cases the profile ID is locked and another one is created, which in turn renames the original profile ID with a .bak extension in the system registry. When these .bak profile IDs are created, the Shadow Copy Optimization Writer disappears from the writer list. I’m still asking Microsoft how the profile list and the VSS Shadow Copy Optimization Writer depend on each other.
It is NOT recommended to use DFSR to replicate local user profiles, i.e. C:\Users, and I have since removed this replication group. You will need to delete the .bak profile IDs from the following path in the registry:
an example would be: S-1-5-21-106000298-1275210071-1417001363-12345.bak
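The profile IDs live as subkeys of the standard ProfileList key, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList. Picking out the stale entries is just a matter of filtering the subkey names for the .bak suffix, however you enumerate them; a minimal sketch:

```python
def stale_profiles(subkeys):
    """Profile IDs renamed with a .bak suffix are the stale duplicates
    to delete; `subkeys` is the list of ProfileList subkey names,
    however you enumerated them from the registry."""
    return [k for k in subkeys if k.endswith(".bak")]

subkeys = [
    "S-1-5-21-106000298-1275210071-1417001363-12345",
    "S-1-5-21-106000298-1275210071-1417001363-12345.bak",
]
print(stale_profiles(subkeys))
# -> ['S-1-5-21-106000298-1275210071-1417001363-12345.bak']
```

Only the .bak entry should go; its un-suffixed twin is the live profile.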
As with all system changes, I would highly recommend exporting the keys or backing up the registry first.
I’ve now decided to use Terminal Server user profiles with a path set to Server A, and this path is replicated to Server B using DFSR, with no further issues. For DR reasons the TS path is actually a CNAME, which I can change depending on server status; this was tested and worked fine during failover.