#11
November 3rd, 2005, 12:24 AM
Friend of Wrox
Join Date: Aug 2004
Location: Orange County, CA, USA.
Posts: 385
Thanks: 0
Thanked 0 Times in 0 Posts

I follow your statement. My position is more about avoiding sub-selects: they can get difficult to follow once the logic gets complicated, and they can increase the chance of deadlocks when you connect a lot of things, since long-running queries hold locks without releasing them. I don't like to use NOLOCK when I can avoid it, but breaking up complex queries and using temp tables reduces the amount of time you retain a lock on a set of data.

There is nothing explicitly wrong with using sub-selects. I have not worked anywhere they were prohibited, just places where they were frequently not understood. But I also find that other people trying to follow my logic can read a multi-step process using temp tables more easily. More often I get people modifying my sub-selects and breaking them than I do when I break the logic into pieces using temp tables. Frequently I have to use ten levels of logic; if you do that with sub-selects it can be very hard to follow and troubleshoot. Doing it a piece at a time is (IMHO) more maintainable, easier for others to follow, and easier to test when you have to change things. It simply depends on what you're trying to do. Your answer is probably better for the question as asked.
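
A minimal sketch of the multi-step, temp-table style described above (table and column names are hypothetical):

Code:
-- Step 1: stage just the rows we need into a temp table
SELECT CustomerID, SUM(OrderTotal) AS TotalSpent
INTO #CustomerTotals
FROM Orders
WHERE OrderDate >= '20050101'
GROUP BY CustomerID

-- Step 2: join the staged results back, one piece at a time
SELECT c.CustomerName, t.TotalSpent
FROM Customers c
JOIN #CustomerTotals t ON t.CustomerID = c.CustomerID
WHERE t.TotalSpent > 10000

DROP TABLE #CustomerTotals

Each step is a short, separately testable query, which is the maintainability argument being made here.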

#12
November 4th, 2005, 10:44 AM
Friend of Wrox
Join Date: Jun 2003
Location: Hudson, MA, USA.
Posts: 839
Thanks: 0
Thanked 1 Time in 1 Post

I really don't want to argue about this, and we've hijacked this thread as it is, but I don't see how one or more subqueries per se would result in deadlocks. Since the subqueries are not the target of an update, the only locks a subquery will obtain are shared locks, and only exclusive (update) locks can lead to deadlocks.

While I absolutely agree that it is not a good idea to use the NOLOCK hint indiscriminately, careful analysis of a locking issue may lead one to consider using it, or other locking hints, or setting the transaction isolation level appropriate to the situation.
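
For reference, the hint and the setting mentioned here look roughly like this (Orders is a hypothetical table):

Code:
-- Per-table hint: read without requesting shared locks (dirty reads possible)
SELECT OrderID, OrderTotal
FROM Orders WITH (NOLOCK)

-- Or set the isolation level for the whole connection instead
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED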

The problem I have with using temp tables is that it gets you thinking procedurally. That is, you end up breaking the problem down into steps: first I do this query and put the results there, then I do that creating more intermediate results, then this next, etc. Efficient SQL requires that you think of problems from a set-based perspective. Operations work on the data as a whole, all at once, and not one step at a time.
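
To illustrate the set-based style (using the same hypothetical tables as the sketch above), the staged steps collapse into a single statement that the optimizer can plan as a whole:

Code:
-- One set-based statement instead of staging intermediate results
SELECT c.CustomerName, SUM(o.OrderTotal) AS TotalSpent
FROM Customers c
JOIN Orders o ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20050101'
GROUP BY c.CustomerName
HAVING SUM(o.OrderTotal) > 10000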

From a performance perspective, properly constructed queries will almost always be faster (often far faster) than the multi-step approach of temp tables (or cursors).

Of course maintainability is a very real concern. No doubt a query containing many subqueries can be difficult to understand, but isn't that what documentation is for?

Bottom line: if it works for you and your situation, great - you really can't argue with success. Just be careful that your approach doesn't harden into dogma, robbing you of more flexible and efficient solutions.



Jeff Mason
Custom Apps, Inc.
www.custom-apps.com
#13
November 4th, 2005, 12:32 PM
Friend of Wrox
Join Date: Aug 2004
Location: Orange County, CA, USA.
Posts: 385
Thanks: 0
Thanked 0 Times in 0 Posts

I agree with not wanting to hijack the thread. But I can't agree with you that "only exclusive (update) locks can lead to deadlocks." If that were true, MS would not suggest you use NOLOCK. The reality is they never fixed the deadlocking issues in SQL that have persisted since they bought the code from Sybase. MS also suggests you run large reports in off hours to prevent deadlocks. They claimed they fixed the deadlock issues in SQL 6.5, then fixed them again in 7.0, then again in SQL 2000. I haven't seen it yet, but I expect they will claim to have fixed them again in SQL 2005. My coding solution for limiting the chance of deadlocks is to minimize the processing time of each individual SQL hit, so I tend to avoid large SQL statements with multiple subqueries, as these tend to contribute to deadlocks. Many small SQL statements against the prod database, combined in a temp table, limit the likelihood of locking the production database.

#14
November 4th, 2005, 01:25 PM
Friend of Wrox
Join Date: Jun 2003
Location: Hudson, MA, USA.
Posts: 839
Thanks: 0
Thanked 1 Time in 1 Post

Quote:
I agree with not wanting to hijack the thread.
Looks like we've done it anyway. :D

You can't get deadlocks when only shared locks are used. SELECT queries only use shared locks, unless you have the transaction isolation level set higher than you should.

As I said, careful analysis may be required to avoid deadlocks. This means understanding the situations in which exclusive locks need to be acquired. Exclusive locks are required by updates, or by "high" transaction isolation levels. You do have to be careful.

There are many ways to avoid deadlocks. Your technique of using temp tables is only one of them, and not one I'd necessarily recommend, for other reasons. Indeed, there is no guarantee that technique will get rid of deadlocks entirely if the update strategy is poorly implemented. That is, if you attempt to acquire exclusive locks on resources in a different order in multiple processes, you'll eventually get deadlocks no matter what you do.
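
A sketch of that ordering problem, with hypothetical tables: if each process runs its transaction in the opposite order, each can end up holding the lock the other needs.

Code:
-- Process 1: locks TableA, then TableB
BEGIN TRAN
UPDATE TableA SET Col1 = 1 WHERE KeyCol = 42
UPDATE TableB SET Col1 = 1 WHERE KeyCol = 42
COMMIT TRAN

-- Process 2 (deadlock-prone): locks TableB, then TableA
BEGIN TRAN
UPDATE TableB SET Col1 = 2 WHERE KeyCol = 42
UPDATE TableA SET Col1 = 2 WHERE KeyCol = 42
COMMIT TRAN

Acquiring locks in the same order in every process removes this particular deadlock.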

But, like I said, if it works for you, great.



Jeff Mason
Custom Apps, Inc.
www.custom-apps.com
#15
November 4th, 2005, 03:35 PM
Friend of Wrox
Join Date: Aug 2004
Location: Orange County, CA, USA.
Posts: 385
Thanks: 0
Thanked 0 Times in 0 Posts

If you move to avoid deadlocks, you allow dirty reads, and thus inaccurate, uncommitted records. The locking mechanism in MS SQL is its greatest weakness as an RDBMS. I agree we hijacked this thread, but I am very curious about your opinion. How do you suggest handling locking issues where you have long reports (big complex selects with many sub-selects) while you're also updating and inserting rows into the same database tables, but want to avoid dirty reads and not upgrade to Oracle? I have already learned one thing from you; perhaps I may learn another.

#16
November 4th, 2005, 03:49 PM
Authorized User
Join Date: Sep 2005
Posts: 95
Thanks: 0
Thanked 0 Times in 0 Posts

Hijacked it is. Here are a couple of suggestions.

Oracle won't help you if you don't optimize your queries. There are a number of things you can do to minimize deadlocks (see the sketch after the list):

Update statistics
Create useful indexes
Defragment your tables (run DBCC SHOWCONTIG to find out your fragmentation levels)
Rebuild your indexes if defragmenting doesn't help
Make your queries sargable
Take a look at horizontal partitioning
Keep transactions short and acquire locks in the same order everywhere (very important!!)
Place frequently joined tables on different filegroups
SET DEADLOCK_PRIORITY LOW for your selects
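
For reference, a sketch of the commands behind several of these suggestions (table and index names are hypothetical):

Code:
-- Refresh the optimizer's statistics for a table
UPDATE STATISTICS Orders

-- Check fragmentation, then defragment or rebuild as needed
DBCC SHOWCONTIG ('Orders')
DBCC INDEXDEFRAG (0, 'Orders', 'IX_Orders_OrderDate')
DBCC DBREINDEX ('Orders')

-- Let selects lose the tiebreak when a deadlock does occur
SET DEADLOCK_PRIORITY LOW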

"I sense many useless updates in you... Useless updates lead to fragmentation... Fragmentation leads to downtime... Downtime leads to suffering... Fragmentation is the path to the dark side... DBCC INDEXDEFRAG and DBCC DBREINDEX are the force... May the force be with you" -- http://sqlservercode.blogspot.com/
#17
November 4th, 2005, 04:49 PM
Friend of Wrox
Join Date: Jun 2003
Location: Hudson, MA, USA.
Posts: 839
Thanks: 0
Thanked 1 Time in 1 Post

Nice list.

The only thing I can add is that while it is possible to avoid deadlocks, you can't totally avoid concurrency issues. The fact is, if you require that a query always return fully committed data, and you also want simultaneous updates of the data, something's got to give - either the read or the update will have to wait on the other. Which one should wait is something only you can determine. Given that you have a long-running query that, presumably, you are executing on demand, it seems to me that your only choice is to force the update to wait for the query to complete. You can do this by forcing the report query to obtain an update lock on the data, preventing other update locks from being acquired. Judicious use of table/query hints can help you manage the situation.
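
One way to express that with hints (the report query and tables are hypothetical): UPDLOCK takes update locks as rows are read and holds them to the end of the transaction, so writers wait for the report rather than the other way around.

Code:
BEGIN TRAN
-- Hold update locks for the life of the transaction; writers must wait
SELECT o.CustomerID, SUM(o.OrderTotal) AS TotalSpent
FROM Orders o WITH (UPDLOCK)
GROUP BY o.CustomerID
COMMIT TRAN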

Of course what you don't want is for both of them to be waiting on each other, and SQLMenace's list will help you avoid that - especially his third-to-last point, probably the most important for avoiding deadlocks in the first place.

Blocking can't be avoided, but deadlocks can...

Indeed, your situation may very well have led you to partition the queries as you have done, using intermediate tables to stage data for reporting. If it works, great. My only point is that this may not always be the most efficient way to do things.



Jeff Mason
Custom Apps, Inc.
www.custom-apps.com
#18
November 7th, 2005, 03:32 PM
Friend of Wrox
Join Date: Aug 2004
Location: Orange County, CA, USA.
Posts: 385
Thanks: 0
Thanked 0 Times in 0 Posts

So basically what you're saying, in various confusing words (including the Microsoft made-up "sargable"), is:

1) Set your select priority low so your selects are chosen for termination in deadlock situations over, for example, inserts and updates. (This still doesn't fix the deadlock issue; it only manages whose SQL gets rejected.) When you have really long SQL queries on a 24/7 database, they never complete with this logic, always being chosen as the deadlock victim, unless you break the query into pieces.
2) Make and use indexes to speed up the queries and reduce the chance of a deadlock (familiar with this one, like it much). :-)
3) Defragment your data to speed execution and reduce the likelihood of deadlocks.
4) As for the Oracle comment: no matter how poorly I write my Oracle, I still don't get deadlocks with it, or with PostgreSQL either, and that's a free database!!! Shame on MS...
5) Make sure your SQL statements execute quickly, or break your code into pieces. Well, that's what I was suggesting with my earlier comments...

Thanks for the thoughts, but in large databases, particularly with purchased software, you can't do much to fix MS's weakness. But thanks for the suggestions on how to minimize it.

#19
November 7th, 2005, 04:03 PM
Friend of Wrox
Join Date: Jul 2003
Posts: 599
Thanks: 6
Thanked 3 Times in 3 Posts

I like this hijack.

I'm wondering what DB design issues there are, given the requirement to go ten levels deep.

Kind of riffing off the previous hijack, but wouldn't it help to break this logic up into some views and SPs, and possibly even use one-way replication, so you can run your reports all day and night without affecting performance on the production DB?

Hmmmmmmm....
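
A sketch of the views/SPs idea (object names are hypothetical): each level of logic becomes a named, reusable object instead of another level of nesting.

Code:
-- One reporting step becomes a reusable view
CREATE VIEW vw_CustomerTotals
AS
SELECT CustomerID, SUM(OrderTotal) AS TotalSpent
FROM Orders
GROUP BY CustomerID
GO

-- The report itself becomes a stored procedure over the view
CREATE PROCEDURE usp_BigSpenderReport
    @Threshold money
AS
SELECT c.CustomerName, t.TotalSpent
FROM Customers c
JOIN vw_CustomerTotals t ON t.CustomerID = c.CustomerID
WHERE t.TotalSpent > @Threshold
GO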

#20
November 7th, 2005, 05:36 PM
Friend of Wrox
Join Date: Jun 2003
Location: Hudson, MA, USA.
Posts: 839
Thanks: 0
Thanked 1 Time in 1 Post

Quote:
...the Microsoft made-up "sargable"
I believe the term "sargable" was actually coined by IBM for DB2. Microsoft/Sybase refined the meaning a bit to refer specifically to the use of indexes in the search arguments (SARGs) of a query.
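
A quick illustration of the difference (hypothetical table, with an index assumed on OrderDate): a sargable predicate leaves the indexed column bare so an index seek is possible.

Code:
-- Non-sargable: the function on OrderDate forces a scan
SELECT OrderID FROM Orders WHERE DATEPART(yy, OrderDate) = 2005

-- Sargable: the same filter as a range on the bare column
SELECT OrderID FROM Orders
WHERE OrderDate >= '20050101' AND OrderDate < '20060101'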

I can't help but wonder. You come across as intensely disliking MS and their products. You are certainly entitled to your opinion, but if you feel so strongly, why do you bother to a) use the products, and b) post to a SQL Server forum?

You mention that you use purchased software, so perhaps you are "stuck" with MS because of a purchase decision. If so, why do you stay?

Life's too short.





Jeff Mason
Custom Apps, Inc.
www.custom-apps.com