Ch 20 - Best Practices
Thanks so much for sharing your knowledge in this book. I have read and referred to it over and over again for more than a year now. I will soon need to replace my weather-beaten copy and will gladly pay the price when it finally gives out!
My question relates to pg 654, where you say "unrolling loops can yield big savings on large datasets...". What do you consider a large dataset? I know you mention in ch 5 (arrays) that you would get a long-running-script error if you tried to create an array with 4,294,967,295 values, but my scripts typically iterate over datasets of 2,000 to 20,000 custom reference types (generally created from XML files). I do sometimes worry that I am pushing the envelope and should look at optimizing those loops. I'd love to hear your thoughts on this, as I've had difficulty finding this kind of advice on the web.
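For reference, here is roughly what one of my loops looks like, alongside the kind of unrolled version I understand the book to be suggesting (a minimal sketch; `records` and `processRecord` are placeholders for my XML-derived objects and per-item logic):

```js
// Plain loop: one condition check and increment per item.
function processAll(records) {
    for (var i = 0; i < records.length; i++) {
        processRecord(records[i]);
    }
}

// Unrolled by four: the loop condition is evaluated roughly
// len/4 times instead of len times.
function processAllUnrolled(records) {
    var len = records.length,
        remainder = len % 4,
        i = 0;

    // First handle the leftover items that don't fit a group of four.
    while (remainder--) {
        processRecord(records[i++]);
    }

    // Main loop: four items per iteration.
    while (i < len) {
        processRecord(records[i++]);
        processRecord(records[i++]);
        processRecord(records[i++]);
        processRecord(records[i++]);
    }
}
```

Is unrolling like this worth it at the 2,000 to 20,000 item scale, or does the per-item work on each record dwarf the loop overhead at that size?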