Hello all. I have a design question for all you developers.
Let's take a simple scenario: I have one master table that tells me which Topics an author has posted under, and then a bunch of tables, one per Topic.
Needless to say, these tables aren't related, so I have to query the master table's TableName field to see which tables an author has posted under. Now to the question.
This query can get very intense with the multiple UNIONs I'm doing, and I don't want to hit the database every time, so I figured I'd cache the resulting DataTable for one minute to spare the DB. That works fine, but once the cache expires after that minute the DB gets hit again, and I can see a ton of overhead there.

My next idea was to run a job up front that performs this intense query for the most popular authors, take the resulting DataTable, put it in a DataSet, and write that DataSet out to an XML file using DataSet.WriteXml. Every subsequent request would then read from the XML file if it exists, or write a fresh one otherwise.
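A minimal sketch of that file-cache idea, assuming a hypothetical cache path and a placeholder for the expensive UNION query (names and structure are mine, not from any particular library beyond ADO.NET itself):

```csharp
using System.Data;
using System.IO;

public static class AuthorTopicCache
{
    // Hypothetical location for the cached result set.
    private const string CachePath = @"C:\cache\authorTopics.xml";

    public static DataSet GetAuthorTopics(int authorId)
    {
        DataSet ds = new DataSet("AuthorTopics");

        if (File.Exists(CachePath))
        {
            // ReadXml rebuilds the DataTable from disk instead of
            // hitting the database again.
            ds.ReadXml(CachePath);
        }
        else
        {
            FillFromDatabase(ds, authorId); // the expensive multi-UNION query
            // WriteSchema embeds the column definitions in the file, so
            // ReadXml later restores the original table structure and types.
            ds.WriteXml(CachePath, XmlWriteMode.WriteSchema);
        }

        return ds;
    }

    private static void FillFromDatabase(DataSet ds, int authorId)
    {
        // Placeholder for a SqlDataAdapter.Fill(...) call that runs the
        // UNION query across the per-Topic tables.
    }
}
```

One thing worth noting: without XmlWriteMode.WriteSchema, ReadXml infers every column as a string, which can bite you later when sorting or filtering.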
My main question: once the XML file starts to get large, say around 10,000 records, does filling and querying the DataSet built from that file start to slow down? Does the indexing in the DataSet start to cause trouble? I could run this test myself, but I was wondering whether you've encountered this, and whether you have a better solution.
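For the querying side, the in-memory lookup might look like this rough sketch ("Author" is an assumed column name):

```csharp
using System.Data;

// Assume 'ds' is the DataSet filled from the cached XML file.
DataTable topics = ds.Tables[0];

// DataTable.Select does a linear scan of the rows, so on ~10,000
// records each call walks the whole table.
DataRow[] rows = topics.Select("Author = 'someAuthor'");

// A DataView with a Sort builds an internal index once, so repeated
// lookups via FindRows should be cheaper than repeated Select calls.
DataView view = new DataView(topics);
view.Sort = "Author";
DataRowView[] hits = view.FindRows("someAuthor");
```

If the same cached table gets queried over and over, the sorted DataView is usually the better of the two.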