We've recently gained visibility into some of the kinds of errors
generated by our application, and one of the issues we're seeing
(rarely) is deadlocks. We get a little less than one deadlock
per day (and a single day easily has millions of transactions going on, so
the ratio really isn't bad). From what I've read, it seems like a
certain amount of deadlocking is inevitable, but a few good practices
can keep it to a minimum.
My questions are:
1) Does a .0001% deadlock rate seem high? If so, where might I learn
more about tweaking resource order, or other strategies to reduce the
frequency?
2) Is it reasonable to code the consuming applications to test every
database interaction specifically for a deadlock as the cause of failure,
and then retry the interaction if one occurred?
Thanks for any advice,
Jason

On 5 Dec 2005 14:04:21 -0800, "jason" <iaesun@.yahoo.com> wrote:
>1) Does a .0001% deadlock rate seem high?
To whom?
I much prefer something closer to zero, and that's what I expect of
any system I'm working on.
>2) Is it reasonable to code the consuming applications to test every
>database interaction specifically for a deadlock as the cause of failure,
>and then retry the interaction if one occurred?
Never had to, don't like the sound of it.
Detect, yes. Auto-retry, no.
J.

Theoretically, retrying is the easiest way to deal with deadlocks - fire and
forget - but is it practical? Only if you don't really care about how your
procedures actually work.
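(For reference, that kind of retry logic usually boils down to something like
the sketch below - this assumes SQL Server 2005's TRY/CATCH and a made-up
procedure name, dbo.usp_DoWork; on SQL Server 2000 you'd test @@ERROR for
1205 instead, or do the retry in the client.)

DECLARE @tries int, @msg nvarchar(2048);
SET @tries = 3;

WHILE @tries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.usp_DoWork;            -- the call that occasionally deadlocks
        BREAK;                          -- it worked, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @tries > 1
            SET @tries = @tries - 1;    -- chosen as deadlock victim: try again
        ELSE
        BEGIN
            SET @msg = ERROR_MESSAGE(); -- any other error, or out of retries: re-raise
            RAISERROR(@msg, 16, 1);
            BREAK;
        END
    END CATCH
END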
I'd suggest going through the code and eliminating the most obvious potential
reasons for deadlocks, as suggested by Books Online. And you don't really
need to go through all procedures - using trace flag 1204 you can identify
the offending procedure, and judging by the frequency of your deadlocks you
don't have a whole lot of analytical work ahead of you.
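(Turning the flag on is a one-liner, roughly like this - 3605 is commonly added
so the output lands in the error log, and on SQL Server 2005 trace flag 1222
gives more detailed deadlock output.)

DBCC TRACEON (1204, 3605, -1);    -- -1 applies the flag to all connections

-- ...wait for the next deadlock, then read the deadlock chain in the error log...

DBCC TRACEOFF (1204, 3605, -1);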
ML
http://milambda.blogspot.com/

In my OLTP work, if I know a stored procedure will always result in updates
of only a few rows, then I add the "with rowlock" query hint. This makes a
dramatic difference. And if you have a tps rate of over 10 or so, you might
consider the effect of
set transaction isolation level read uncommitted
to whatever stored procedures are only doing selects. Sometimes this isn't
the right thing to do, but sometimes it is perfect. I've had performance
increases of thousands of percent when tps is high and there are many 'read'
stored procedures. Same with 'with rowlock'.
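To illustrate, something along these lines (dbo.Orders and its columns are just
placeholder names):

-- rowlock hint on an update known to touch only a few rows
DECLARE @OrderID int;
SET @OrderID = 42;

UPDATE dbo.Orders WITH (ROWLOCK)
SET    Status = 'Shipped'
WHERE  OrderID = @OrderID;

-- read-only work at read uncommitted: no shared locks are taken, so the reads
-- neither block writers nor take part in most deadlocks, at the cost of
-- possibly seeing uncommitted ('dirty') data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT OrderID, OrderDate, Total
FROM   dbo.Orders
WHERE  CustomerID = 1;

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- back to the default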
Have fun!
"ML" wrote:
> Theoretically retrying is the easiest way to deal with deadlocks - fire and
> forget, but is it practical? Only if you don't really care about how your
> procedures actually work.
> I'd suggest going through the code and eliminating the most obvious potential
> reasons for deadlocks, as suggested by Books Online. And you don't really
> need to go through all procedures - using trace flag 1204 you can identify
> the offending procedure and judging by the frequency of your deadlocks you
> don't have a whole lot of analytical work ahead of you.
>
> ML
> --
> http://milambda.blogspot.com/

If reading should not be blocked, keeping the default isolation level but adding the
readpast locking hint may in most cases yield better results. Purely for
reading purposes, that is.
It won't, however, eliminate all deadlocks.
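For comparison, a reader that skips locked rows instead of waiting on them looks
something like this (again, dbo.Orders is just a placeholder):

SELECT OrderID, OrderDate, Total
FROM   dbo.Orders WITH (READPAST)
WHERE  CustomerID = 1;

-- READPAST simply doesn't return rows that are currently locked, so the query
-- never waits on a writer - but it also silently skips those rows.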
ML
http://milambda.blogspot.com/