I love topics like this; they're usually good at pointing out that there isn't one rule that fits all scenarios. Even from my last company to my current one, the difference in hardware and table sizes alone requires an entirely different set of rules.
At my last company we would hit a million or two rows on our largest customer systems, but we ran on what I would consider minimal hardware: single CPU, RAID 5, etc.
At the current company we have 8 terabytes of data on a fibre SAN, 8-way CPUs, 8 GB of RAM per box, clustered. I think we have 4 setups like this serving around 2,000 people over something like 64 T1s.
Our system caches so much data and the SAN is so efficient that even a full table scan isn't that painful. The director would kick my ass if I started doing that, though, so obviously we don't =P
As for my view on it: insert performance is hardly noticeable no matter what index we use, unless you're working with underpowered hardware for the size of the DB. We choose the clustered index by analyzing the most common queries so the desired results land on the same pages; in our case that is almost always the identity column, or a multi-column key including a date.
This greatly reduces the number of page reads and how much cache each table uses, allowing more of the desired data (versus historical data) to be cached across our entire system of 1700-ish tables.
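A minimal sketch of what I mean (the table and column names are made up, not our actual schema): cluster on the date the common queries filter by, with the identity column tacked on to keep the key unique, so a date-range query reads contiguous pages.

    -- hypothetical table, purely for illustration
    CREATE TABLE dbo.OrderHistory (
        OrderID    INT IDENTITY(1,1) NOT NULL,
        OrderDate  DATETIME          NOT NULL,
        CustomerID INT               NOT NULL,
        Amount     MONEY             NOT NULL
    );

    -- clustered on (date, identity): recent rows sit together on the same pages
    CREATE UNIQUE CLUSTERED INDEX IX_OrderHistory_Date_ID
        ON dbo.OrderHistory (OrderDate, OrderID);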
Our greatest issue is always blocking, especially with so many users connecting remotely. Without NOLOCK on queries, while SQL Server is spooling the requested rows it appears to hold locks until the rows are actually sent to the user. I'm not sure if it's supposed to behave like this, but it appears to, especially with /shiver Crystal Reports.
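For what it's worth, this is the kind of hint I mean (again using the made-up table from above). Keep in mind NOLOCK is a dirty read, so it only makes sense where slightly stale or uncommitted data is acceptable, like most reporting queries:

    DECLARE @StartDate DATETIME;
    SET @StartDate = DATEADD(DAY, -7, GETDATE());

    -- WITH (NOLOCK) reads uncommitted data and takes no shared locks,
    -- so the report doesn't block writers (and isn't blocked by them)
    SELECT OrderID, OrderDate, Amount
    FROM dbo.OrderHistory WITH (NOLOCK)
    WHERE OrderDate >= @StartDate;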
Well, enough rambling. I'm just saying there is no one cure-all for performance, especially given differences in hardware resources, user demands, etc. The best option is to build a test database, profile it under various loads, and find a solution that works for you in your environment.
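Even something as quick and dirty as this (run against your test copy, table name hypothetical as before) will show you the difference in logical reads and CPU time between two candidate indexes:

    -- report per-query IO and timing in the Messages tab, then run the candidates
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT CustomerID, SUM(Amount) AS Total
    FROM dbo.OrderHistory
    WHERE OrderDate >= '20050101'
    GROUP BY CustomerID;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;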
John