Several people have asked for the recommended way to move to a new webhost or IP address without having problems in Google. I will be doing this shortly for one of my clients and thought other people could use the info as well. We will be moving from one IP address to another IP address by changing hosts.

If you have a static site or can afford a day or so where your site can be in limbo between two IP addresses, life will be easier. If you have a dynamic site with databases and such, it’s trickier, even though the idea is the same.

Step 1:

Find a good web host and sign up for an account.

Step 2:

Make a back-up of your site at the new webhost.

Step 3:

Change DNS to point to your new web host.

Step 4:

Wait for the DNS change to propagate through the net.

Step 5:

Once you are sure people or Googlebots are fetching from the new webhost/IP address, you’re done. You can shut down the old site.

Let’s talk through these in a little more detail.

Step 1: Find a good web host and sign up for an account.

Research and references should help you find a good host. I liked my current webhost quite a bit and did a lot of research before choosing it, but the site’s readership was growing faster than I expected. In this example I’ll refer to things by IP addresses, and we’ll be moving from an IP address of 63.x.x.x to an IP address of 65.x.x.x. Just as a reminder, DNS is the system that maps pretty names like www.google.com to an actual Internet Protocol (IP) address that a machine can use, such as 66.132.74.122.

Step 2: Make a back-up of your site at the new webhost.

If you have a static website, this isn’t that bad; just copy the entire file structure over to the new webhost and you’re done. Harder is something like a blog, which usually has a MySQL or other database for storing posts. Harder still is some e-commerce site that has to have its database kept in a sync’ed state. In that case, you might have to set up database replication between the old location and the new location while you are doing the transition.
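The static-site case above can be sketched with tar. This is a toy local example using made-up paths under /tmp; in real life you’d scp the tarball to the new host before unpacking it there:

```shell
# Bundle the old site's file tree (made-up example paths).
mkdir -p /tmp/oldsite/images
echo '<html>home</html>' > /tmp/oldsite/index.html
tar -C /tmp -czf /tmp/oldsite.tar.gz oldsite

# In real life: scp /tmp/oldsite.tar.gz user@newhost:~/  and unpack there.
# Here we just unpack locally to stand in for the new host.
mkdir -p /tmp/newsite
tar -C /tmp/newsite -xzf /tmp/oldsite.tar.gz

# Verify the copy matches the original.
diff /tmp/oldsite/index.html /tmp/newsite/oldsite/index.html && echo 'copies match'
```

The same bundle-copy-unpack pattern works whether you move the files with scp, FTP, or rsync.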

But let’s take the example of a WordPress blog with a MySQL database that can be down for a few hours without too much trouble. Assume that you’ve already used tar or FTP to copy the static files from one webhost to another. First, you want to create a new MySQL database at the new web host. Ideally, you can make it have the same database name and user name. If not, you’ll want to tweak the WordPress wp-config.php at the new location to update the database/username/password/etc.
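If the new host forces a different database name, user, or password, the lines to tweak are WordPress’s standard database constants in wp-config.php (the values below are placeholders, not real credentials):

```php
// wp-config.php at the new webhost -- placeholder values; edit these
// to match the database you created there.
define('DB_NAME', 'newdatabase');
define('DB_USER', 'newusername');
define('DB_PASSWORD', 'newpassword');
define('DB_HOST', 'localhost'); // or the separate database host, if the new host uses one
```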

Now that you’ve got the MySQL database ready to copy over, dump the old MySQL database, copy it to the new webhost, and load your database at the new location. Those three commands would look like this:

mysqldump --add-drop-table -uoldusername -poldpassword olddatabase > mysqlbackup.20051009.sql

scp mysqlbackup.20051009.sql user@newhost:~/

mysql -unewusername -pnewpassword -hnewdatabasehost newdatabase < ~/mysqlbackup.20051009.sql

Bear in mind that you have a username/password to log in to the old and new webhosts, but you also have a separate username/password for the database at each location as well. You might even have the MySQL database stored on a different host, which is why I showed the -h (host) option when restoring the database. Again, if the new host has different options for your database, you’ll need to edit your wp-config.php file or WordPress won’t be able to access your database at the new webhost.

Now you have identical copies of your site at two different locations. If you’re just running a blog with a comment or two a day, it’s not a big problem if someone posts a comment or otherwise changes your database while you’re doing the transition to a new web host. If you run a big, industrial-strength forum or e-commerce site, you’ll need to do extra work to keep the two databases and/or file systems synchronized.

Step 3: Change DNS to point to your new web host.

This is the actual crux of the matter. First, some DNS background. When Googlebot(s) or anyone else tries to reach or crawl your site, they look up the IP address. Googlebot tries to do reasonable things like re-check the IP address every 500 fetches or so, or re-check if more than N hours have passed. Regular people who use DNS in their browser are affected by a setting called TTL, or Time To Live. TTL is measured in seconds and it says “this IP address that you fetched will be safe for this many seconds; you can cache this IP address and not bother to look it up again for that many seconds.” After all, if you looked up the IP address for each site with every single webpage, image, JavaScript, or style sheet that you loaded, your browser would trundle along like a very slow turtle.

You can actually see the TTL for various sites by using the “dig” command in Linux/Unix.
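For example, running dig www.example.com prints an ANSWER SECTION in which the second column is the TTL in seconds. Here’s a small sketch that pulls the TTL out of a captured answer line (the domain, TTL, and IP below are made-up illustrations, not real lookup results):

```shell
# A cached answer from "dig www.example.com" looks roughly like this
# (values made up for illustration):
answer='www.example.com.  300  IN  A  93.184.216.34'

# The second whitespace-separated column is the TTL in seconds.
echo "$answer" | awk '{print $2}'   # prints 300
```

A TTL of 300 means any resolver may cache that answer for up to five minutes before looking it up again.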

Time-To-Live is an important factor for a site’s DNS. Some sites like google.com, yahoo.com, and msn.com have really short DNS TTL settings like 300 to 900 seconds. Why? Well, if you have multiple data centers, you might want to take one data center down so that the data center mechanics can sprinkle fresh, magical index data onto the machines. With a short TTL, you could pull a data center’s IP address out of the rotation in just a few minutes.

That also helps explain the “Google Dance” of days gone by. The Google Dance would last for about a week, and people would see both old and new results, depending on which data center they happened to hit. The underlying reason was that each data center was brought down, loaded with new data or algorithmic settings, and then brought back up again. It took several days to switch the data at all data centers. During that time, webmasters used to love to check www2.google.com and www3.google.com because those DNS aliases usually pointed to the newest data centers. These days our production system is better equipped to switch things around quickly instead of over several days.

There, a little easter egg for the people who care about DNS. Okay, where were we? Right, switching DNS and Time-To-Live. You should care about TTL because if someone loads your website in their browser just before you update your DNS settings, and your TTL is one day, then that person’s browser will try to use your old IP address all that day.

In fact, it’s even worse. DNS is hierarchical. At the top of the hierarchy sit 13 root server addresses, which refer resolvers down to the .com nameservers and then to your domain’s nameservers, and answers get cached at every level on the way down to ISPs like Comcast or Cox. If someone on Comcast looked up your IP address just before you changed your DNS settings, all of Comcast could keep using the old IP address until the Time-To-Live expired.

So the upshot is that if you can make your TTL short (like an hour) instead of long (like a day), you’ll be in much better shape. Everyone will move to your new IP address in short order instead of having a mish-mash where some people are using the old IP address for hours.

The actual switchover process is pretty easy. Your new webhost will give you a pair of nameservers to use as the primary nameservers. You go to your domain registrar’s account settings and switch from the old webhost’s nameservers to the new webhost’s nameservers. If your registrar recognizes nameservers that are already present in the DNS system, it can make the change pretty much immediately. If you’re going with a nameserver that no one has ever heard of before, you might have to wait 24 hours or so for things to percolate into the system.

Step 4: Wait for the DNS change to propagate through the net.

This is mostly a function of TTL and whether you’re switching to nameservers that are already present in DNS. Remember that DNS is hierarchical, and you have to wait for DNS caches to be flushed as Time-To-Live is exceeded. If you are using a smart registrar and a well-known set of new nameservers, the switch at the root level of DNS can be pretty quick. To verify that the root servers have the new nameserver, you can use the “dig +trace domain” command in Linux/Unix. The “+trace” option tells dig to go all the way up to the DNS root servers for the lookup.
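A hypothetical check (domain and nameserver names below are made up) would look like this; near the end of the trace, you want to see your new nameservers listed for your domain:

```shell
# dig +trace example.com NS
#
# ...referrals from the root and .com servers scroll by, then
# (made-up output) the delegation for your domain:
#
#   example.com.  172800  IN  NS  ns1.newhost.example.
#   example.com.  172800  IN  NS  ns2.newhost.example.
```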

Once the nameservers are switched, you just have to wait for TTLs to expire for your new nameserver (and thus IP address) to find its way out to everyone. If you are on a Windows XP system, you can use the command “ipconfig /flushdns” to flush your machine’s DNS cache, but it probably won’t do much good by itself. Remember that DNS is cached at each level, so your ISP probably has cached the previous IP address until the TTL expires.

Step 5: Once you are sure people or Googlebots are fetching from the new webhost/IP address, you’re done. You can shut down the old site.

When you ping your domain and see your new IP address, you know that you’re getting close. Previous visitors might still be using the old IP address from their DNS cache, but new visitors are getting the new IP address. It’s still a good idea to give it a day or so in case anyone had a long Time-To-Live set, but most TTLs are a day, a few hours, or less. After a day or so, it should be safe to deactivate the hosting at the old location. If you want to be ultra-safe, check your logs: when you see Googlebot fetching from the new webhost and no more visitors in your logs at the old location, it’s okay to turn off your old webhost.
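Checking the logs can be as simple as grepping the old host’s access log for Googlebot’s user-agent. Here’s a self-contained sketch using a made-up log file and made-up log lines (real logs usually live somewhere like /var/log/apache2/):

```shell
# Two fake access-log lines: one Googlebot fetch, one regular visitor.
cat > /tmp/old-host-access.log <<'EOF'
66.249.66.1 - - [09/Oct/2005:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
10.0.0.5 - - [09/Oct/2005:10:01:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
EOF

# Count hits from Googlebot in the old host's log.
grep -c 'Googlebot' /tmp/old-host-access.log   # prints 1
```

When that Googlebot count at the old host drops to zero and stays there, the crawler has moved over to the new IP address.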

Moving to a different domain

Now let’s talk for a minute about moving from urdomain.com to someotherdomain.com. All other things being equal, I would recommend staying with the original domain if possible. But if you need to move, the recommended way to do it is to put a 301 (permanent) redirect on every page of urdomain.com pointing to the corresponding page on someotherdomain.com. If you can map urdomain.com/url1.html to someotherdomain.com/url1.html, that’s better than redirecting everything to the root page (that is, from urdomain.com/url1.html to someotherdomain.com).

In the olden days, Googlebot would immediately follow a 301 redirect as soon as it found it. These days, I believe Googlebot sees the 301 and puts the destination url back in the queue, so it gets crawled a little later. I have heard some reports of people having issues with doing a 301 from olddomain.com to newdomain.com. I’m happy to hear those reports in the comments and pass them on to the crawl/indexing team, but we may be due to replace the code that handles that in the next couple of months. If it’s easy for you to wait a couple of months, you may want to do that; it’s always easier to ask crawl/index folks to examine newer code than code that will be turned off in a while.
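On Apache, one common way to set up those page-to-page 301s is a mod_rewrite rule in an .htaccess file on urdomain.com. This is only a sketch, assuming mod_rewrite is enabled and that the same paths exist on the new domain; adjust it for your own server setup:

```apache
# .htaccess on urdomain.com -- sketch, assumes mod_rewrite is available.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?urdomain\.com$ [NC]
# 301-redirect each path to the same path on the new domain.
RewriteRule ^(.*)$ http://someotherdomain.com/$1 [R=301,L]
```

Because the captured path ($1) is carried over, urdomain.com/url1.html lands on someotherdomain.com/url1.html rather than on the root page.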

Published On: March 1st, 2012 / Categories: Analytics /
