Proxy multiple (or all) TPCs in TFS 2010

September 20th, 2010

MS have recently updated their information on proxying TPCs to cover multiple, or all, Team Project Collections.

http://msdn.microsoft.com/en-us/library/ms400735.aspx

This means you no longer need a separate OS instance (and proxy server) for each collection.

Upgrade WSS2.0 to WSS3.0 on Server 2008

July 27th, 2010

There’s a lot of forum activity from people bewildered about how to migrate from WSS2 to WSS3, especially when it involves a change of underlying OS too.

I started this process myself about a month ago and downloaded a document from MS about the migration process. The document is here (don’t download it: at 128 pages, it’s about 127 pages too long).

I’ll now show you in a few easy steps how to migrate and upgrade, using the following assumptions:

  • You have a WSS2.0 instance (either as part of TFS 2005/08 or not) on a server called ‘oldserver’
  • You have a test server called ‘testserver’ with an installation of WSS3.0 running on SQL 05/08.
  • Your migration destination is a server called ‘newserver’, which is also running WSS3.0

First of all you’ll be wanting to download and run the pre-scan tool. This has to be run on ‘oldserver’, as it marks the database as ready for upgrade. Follow the instructions here.

After this tool has been run, you’ll be wanting to take a SQL backup of your WSS content database (I just used SQL Server Management Studio). If it’s part of TFS 2005/08 it’s probably called something like ‘WSS_Content_TFS’. You can find the name of your old content db by checking the WSS2.0 Central Admin pages.
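If you’d rather script the backup than click through Management Studio, the same thing can be done with sqlcmd. This is just a sketch: the db name and backup path below are examples, and it only echoes the command so you can sanity-check it before running it for real on ‘oldserver’.

```shell
# Sketch only: builds and echoes the sqlcmd invocation rather than running it.
# WSS_Content_TFS and the backup path are example names -- substitute your
# own content db name (from the WSS2.0 Central Admin pages).
DB_NAME="WSS_Content_TFS"
BACKUP_FILE="D:\\Backups\\WSS_Content_TFS.bak"

CMD="sqlcmd -S oldserver -E -Q \"BACKUP DATABASE [$DB_NAME] TO DISK = N'$BACKUP_FILE'\""
echo "$CMD"
```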

Now, restore your database on ‘testserver’ (again, I used SQL Management Studio). Once restored, you’ll need to tell your WSS3.0 instance to attach the db. We do this through the SharePoint command-line utility stsadm. The command for adding the content db into WSS is as follows:

stsadm -o addcontentdb -url http://testserver -databasename WSS_DB_NAME -databaseserver testserver

Now you have a WSS2.0 database attached to your testserver, which is running WSS3.0, so you’ll be wanting to upgrade it… Again, with stsadm, we use

stsadm.exe -o upgrade -inplace -url http://testserver

So now we have a WSS3.0 instance with a bang-up-to-date db attached. It’s now just a case of ‘moving’ this db onto your new server. If you’re moving to TFS 2010, you will almost certainly want to edit the Site Collection URL. For this reason (and many more) we use the stsadm commands again. At this point, I’m just interested in moving the root Site Collection from my testserver to the newserver, so we run the following:

stsadm.exe -o backup -url "http://testserver" -filename sitecollection.bak

This will create a file called sitecollection.bak in the following folder:
c:\Program Files\common files\microsoft shared\web server extensions\12\bin

Now, copy that file to the same folder on your ‘newserver’. Then log on to the newserver and run the following command to restore the collection:

stsadm.exe -o restore -url "http://newserver/sites/whateveryouwant" -filename sitecollection.bak

You’ve now got an up-to-date WSS3.0 site collection running on your new server, whilst the old server remains running WSS2.0.
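To recap, the four stsadm steps can be summarised as one dry-run sketch. The server names and db name are the examples from this post (substitute your own), and the `run` wrapper only echoes each command; drop the echo to execute them for real (the first three on testserver, the last on newserver).

```shell
# Dry-run recap of the migration: each stsadm command is printed, not run.
STSADM="stsadm.exe"
DB_NAME="WSS_Content_TFS"   # example name -- use your own content db

run() { echo "$@"; }        # dry-run wrapper: print instead of execute

# 1. attach the restored WSS2.0 content db to the WSS3.0 test instance
run "$STSADM" -o addcontentdb -url http://testserver -databasename "$DB_NAME" -databaseserver testserver
# 2. upgrade it in place to the WSS3.0 schema
run "$STSADM" -o upgrade -inplace -url http://testserver
# 3. back up the root site collection
run "$STSADM" -o backup -url http://testserver -filename sitecollection.bak
# 4. ...copy sitecollection.bak across, then on newserver:
run "$STSADM" -o restore -url http://newserver/sites/whateveryouwant -filename sitecollection.bak
```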

Facebook: VP of Technology talks about how Facebook works…

October 15th, 2009

Scalability is often one of the biggest challenges you can face: when your website grows, it often grows *very* quickly, and producing systems and architecture that scale with that growth is usually spectacularly complex and expensive. When you’re talking about a site with 300 million users that’s really only about 5 years old, you have a pretty fascinating case study. Follow the link to hear Jeff Rothschild describe how Facebook meets the challenges in his presentation entitled High Performance and Massive Scale

mms://video-jsoe.ucsd.edu/calit2/JeffRothschildFacebook.wmv

Key stats from the presentation about Facebook’s infrastructure/scale include:

  • Facebook has 30,000 servers supporting its operations*
  • Facebook stores 80 billion images (20 billion images, each in four sizes)
  • Facebook serves up 600,000 photos a second
  • Facebook creates 25 terabytes of logging data per day, fed into a Hadoop cluster
  • Facebook services about 120 million queries per second to its memcache tier
  • Facebook has c. 230 engineers, a ratio of roughly 1.1 million active users per engineer
  • Facebook operates a shared-nothing architecture wherever possible
  • Facebook’s development work on PHP, MySQL, memcached and various others has virtually all been made open source
  • Facebook sends over 1 billion outbound (transactional, typically notification) emails per day

* Changes daily. The bulk of these are web servers, to compensate for the low runtime efficiency of PHP.

One of the most interesting developments for anyone currently using PHP in a commercial environment is Facebook’s current development of a PHP compiler, which they estimate will give them a 50-70% increase in runtime efficiency; this might give them a small amount of breathing room in terms of current server requirements. More interestingly for the wider PHP community, if this compiler is made open source (which I’m absolutely sure it will be), it could give PHP a real boost in the popularity stakes.

—–

Spotlight on Windows, fantastic Windows management tool, and it’s free!

July 6th, 2009

I’ve seen a lot of software meant for managing the performance of Windows servers. Obviously a lot of those tools are extremely specific (for services like SQL and Exchange etc), but for standard Windows machines absolutely nothing I’ve seen matches the amazing GUI of Spotlight on Windows. Initially the GUI looks like it’s trying too hard, but actually it’s an amazing blend of static and real-time info. Here’s how it looks:

Spotlight on Windows

In terms of content, it offers pretty much everything you’d expect:

  • CPU usage
  • CPU queue length
  • LAN usage
  • Disk I/O
  • Processes
  • Memory usage
  • Virtual Memory usage
  • Memory queue
  • Page file usage
  • Fixed Disk usage

It also shows the various buses between these objects, and the pages/sec moving around the motherboard. This feature is sorely missing from many other management products, and it’s often key to determining what’s going on with your server. Each monitor has a helpful explanation in case you’re feeling a little knowledge-light.

My advice for this product would be as follows.

1. Download and Install from the Quest Software site (it’s free!)

2. Set-up your connections (File -> Connect)

3. Ensure Spotlight successfully connects and choose a 6 hour calibration period.

This last step is the most important: ideally you want to let Spotlight gather data from the machine over a time-span where it’s under some load, i.e. a representative snapshot of its daily load. Otherwise you’ll spend a lot more time in future customising the alert levels for usage which you know is “normal”.

Once all your connections are calibrated you’ll have a management tool which can tell you more about a server’s performance in a single glance than you’d think possible.

Thanks to Quest Software.

IBM RAID adapters – will it fit?

July 6th, 2009

For those of us who work with IBM server hardware, there’s a baffling array of RAID cards which may or may not fit into your server’s architecture. Without having to locate and flick through the technical manual for each one, the following page is a wonderfully comprehensive guide to virtually every IBM RAID card ever, with a matrix at the foot of the page showing which cards are compatible with which servers.

http://www.redbooks.ibm.com/abstracts/tips0054.html

Fantastic resource!

Download and install VMWare Infrastructure Client

July 3rd, 2009

There are a lot of people searching and posting about the VMWare Infrastructure Client that’s used for managing your ESXi host; people don’t seem to be able to find and download it from the web.

There’s a good reason for this: it’s not widely available online! To install it, simply visit the IP address of your ESXi host in your favourite internet browser, and you can download the Infrastructure Client from the ESXi host directly.

Hope this helps some of the confused!

Creating ESXi snapshot backups with ghettoVCB.sh

July 2nd, 2009

This is the second post in a series regarding the backup of ESXi Virtual Machines.  Once you’ve got ssh and ftp access sorted you’ll be able to connect remotely to the ESXi host and run scripts (which use the ESXi command line) to automate certain tasks.

One of the best scripts out there for backing up Virtual Machines on ESXi hosts is called ghettoVCB.sh. You can find out more about the script here:

http://communities.vmware.com/docs/DOC-8760

So, here’s how to set this up

Stage 2 – implementing the ghettoVCB.sh script.

1. First of all, download the ghettoVCB.sh script, and the example file vmbackup

2. Now, you’ll need to ftp these files to your ESXi host (if you haven’t already enabled ftp access, you can find my guide on how to do it here http://www.rancidswan.com/?p=4)

3. Now, ssh to your ESXi host and edit the ghettoVCB.sh file. There are a number of options at the top of the file, but not many, and they should all be fairly self explanatory. Just for testing you can set the path of the backup location to be somewhere on the ESXi host itself. Later I’ll be writing a guide on how to setup an NFS share on a Windows 2003 server so that these snapshot backups can be pushed directly to a windows machine.

4. You’ll also need to edit the file I’ve called vmbackup. This file contains just the name of the machine you want to back up (note: it’s the machine’s Virtual Machine name, not its Computer name; the name has to be understandable by the ESXi host). You can have multiple machine names if you wish, one per line, but personally I’d recommend just having one name in the file; this will make it easier to automate individual machine backups later on.
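For illustration, here’s how you might create the vmbackup file from the ESXi shell. The VM name is made up; replace it with the Virtual Machine name exactly as the ESXi host knows it.

```shell
# Create the vmbackup input file, one VM name per line (as recommended
# above, just the one). 'MyTestVM01' is a made-up example name.
cat > vmbackup <<'EOF'
MyTestVM01
EOF
```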

5. OK, now you’ll need to set the permissions on the file ghettoVCB.sh like this:

chmod 777 ghettoVCB.sh

6. Now, to run the script, you simply need to write:

./ghettoVCB.sh vmbackup

The script will now run and either show a progress meter, or show you any error messages you might need to tackle!
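With one name per input file, a small wrapper makes those individual machine backups easy to automate later. This is only a sketch: the per-VM file names are hypothetical, and the DRY_RUN flag makes it print what it would do rather than actually invoking ghettoVCB.sh; unset it on the real host.

```shell
# Sketch of a per-VM backup wrapper. DRY_RUN=1 means "print, don't run";
# set DRY_RUN=0 on a real ESXi host to invoke ghettoVCB.sh and log output.
DRY_RUN=1

backup_vm() {
    vmfile="$1"
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: ./ghettoVCB.sh $vmfile"
    else
        ./ghettoVCB.sh "$vmfile" >> "/var/log/ghettoVCB-$vmfile.log" 2>&1
    fi
}

# Hypothetical per-VM input files (vmbackup-web01, vmbackup-db01):
for f in vmbackup-web01 vmbackup-db01; do
    backup_vm "$f"
done
```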

Enabling ssh and ftp access to your ESXi host.

June 29th, 2009

This will be the first of a series of posts regarding the backup of ESXi Virtual Machines from an ESXi host. This will probably take four or five separate posts to explain.

Stage 1 – Enabling ssh and ftp access on your ESXi host.

The default setting for any ESXi host is to have ssh and ftp access disabled, not great for administering the box.  To enable it:

Once the machine has booted, press ALT + F1

type: unsupported (this text will not be displayed; if you make a mistake, CTRL + U clears the input)

Now you will need to enter the root password.

  1. At the prompt, type “vi /etc/inetd.conf”
  2. Look for the lines that start with “#ssh” and “#ftp” (you can search by pressing “/”)
  3. Remove the “#” in both places (press “x” with the cursor on the character)
  4. Save “/etc/inetd.conf” by typing “:wq!”
  5. Restart the management service: “/sbin/services.sh restart”

The Management Service will now restart (takes a couple of mins)

Now you need to kill and restart the inetd service:

  1. ps -a | grep inetd (to find the process ID)
  2. kill <PID of the inetd service>
  3. inetd

Press Alt+F2 to return the server to DCUI mode.

Voila! You can now use your favourite ssh client to access the ESXi host.
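If you’d rather not edit the file interactively in vi, the same uncommenting can be done with sed. This sketch runs against a mocked-up copy of inetd.conf (the two lines below are illustrative, not the host’s exact file contents); on the real host you’d point the sed line at /etc/inetd.conf itself, ideally after taking a copy first.

```shell
# Mock up a sample inetd.conf with the ssh and ftp lines commented out.
# These lines are illustrative only -- your host's file will differ.
cat > inetd.conf.sample <<'EOF'
#ssh stream tcp nowait root /sbin/dropbearmulti dropbear -i
#ftp stream tcp nowait root /sbin/vmftpd vmftpd
EOF

# Strip the leading '#' from just the ssh and ftp lines, writing to a new
# file (some busybox sed builds lack -i, so redirect instead).
sed -e 's/^#ssh/ssh/' -e 's/^#ftp/ftp/' inetd.conf.sample > inetd.conf.new
```

After checking inetd.conf.new looks right, you’d move it into place and restart the services as described above.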