November 2nd, 2011, 04:05 AM
Jgribb
Registered User
Ch 20 - best practices

Hi Nicholas,
Thanks so much for sharing your knowledge in this book. I have read and referred to it over and over again for more than a year now. I will soon need to update my weather-beaten copy and will gladly pay the price when it dies!

My question relates to pg 654, where you say "unrolling loops can yield big savings on large datasets...". I wonder what you consider a large dataset. I know you mention in ch 5 (arrays) that you would get a long-running-script error if you tried to create an array with 4,294,967,295 values, but my data sets are much smaller: loops that iterate over 2,000 - 20,000 custom reference types (generally created from XML files). I sometimes worry that I am pushing the envelope and should look at optimizing those loops. I'd love to hear your thoughts on this, as I've had difficulty finding this kind of advice on the web.
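
To make the question concrete, here is roughly the shape of loop I run today and the unrolled version I've been considering. The names (entries, processEntry) are just placeholders for my own code, and the unrolled form is only my rough sketch of the technique, not code taken from the book:

    // Current approach: one call per iteration over the parsed records
    for (var i = 0; i < entries.length; i++) {
        processEntry(entries[i]);
    }

    // Rough unrolled sketch: handle 8 records per pass, dealing with the
    // remainder up front so the main loop can always step by 8
    var len = entries.length;
    var idx = 0;
    var leftover = len % 8;

    while (leftover--) {
        processEntry(entries[idx++]);
    }
    while (idx < len) {
        processEntry(entries[idx++]);
        processEntry(entries[idx++]);
        processEntry(entries[idx++]);
        processEntry(entries[idx++]);
        processEntry(entries[idx++]);
        processEntry(entries[idx++]);
        processEntry(entries[idx++]);
        processEntry(entries[idx++]);
    }

Is this the kind of change that pays off at 2,000 - 20,000 items, or does it only matter at much larger scales?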
Regards,
Joel Gribble