I have the following scenario:
A table is accessed (update, delete, insert and select) by multiple programs. In fact they are all instances of the same program, run by multiple users. The table never grows beyond 1000 rows, because the program deletes data after use and then inserts new data. It's like a supplier/collector (producer/consumer) situation.
This is an industrial production scenario and I must guarantee some operations, so when a user confirms an action, the program updates that table with data coming from other tables in the system.
So we wrapped a lot of commands in transactions, and the result was a lot of deadlocks.
I'd like some tips on how to avoid those locks. We don't actually need the transaction as such; we just need to guarantee that a command will run and, if it fails for any reason, that the whole operation gets rolled back. I don't know if there's a way to do that without using transactions.
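To illustrate, this is roughly the pattern we use today (simplified; WorkQueue, its columns and @ItemId are made-up placeholders, not our real schema):

BEGIN TRY
    BEGIN TRANSACTION;

    DECLARE @ItemId int;            -- a parameter in the real code

    -- the confirmation step touches our shared work table
    -- with data coming from other tables in the system
    UPDATE WorkQueue
    SET Status = 'Confirmed'
    WHERE ItemId = @ItemId;

    DELETE FROM WorkQueue
    WHERE ItemId = @ItemId
      AND Status = 'Processed';

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;       -- undo the whole operation on any failure

    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);         -- re-raise so the application sees the error
END CATCH;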
PS: We're using SQL Server 2008 R2.
PS2: I discovered that some system tables I used in the FROM clause of the update were the big problem. Those tables are used by the whole system and get tons of inserts/updates/selects. So I was locking things I shouldn't have, because this program doesn't change any data in those tables.
EX:
UPDATE t1
SET x = 1
FROM systable1 AS t
INNER JOIN systable2 AS t2
    ON ...
WHERE ...
I guess this was the big problem, so I added a WITH (NOLOCK) hint on t and t2 and a WITH (ROWLOCK) hint on t1.
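So the statement now looks roughly like this (the ellipses are the same parts I omitted above):

UPDATE t1 WITH (ROWLOCK)
SET x = 1
FROM systable1 AS t WITH (NOLOCK)
INNER JOIN systable2 AS t2 WITH (NOLOCK)
    ON ...
WHERE ...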
Another thing I should mention: this is a test environment and we are stressing the database and the program to the max, because we just can't risk a failure in production.
Can I use a checkpoint strategy to re-do the action if it fails?
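What I have in mind is something like the retry loop below (simplified; WorkQueue, Status, ItemId and @ItemId are the same made-up placeholders as above): catch the deadlock error (1205), roll back, and run the whole command again a few times before giving up.

DECLARE @ItemId int;                       -- a parameter in the real code
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE WorkQueue
        SET Status = 'Confirmed'
        WHERE ItemId = @ItemId;

        COMMIT TRANSACTION;
        BREAK;                             -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1;   -- chosen as deadlock victim, try again
        ELSE
        BEGIN
            DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);        -- not a deadlock (or out of retries), give up
            BREAK;
        END
    END CATCH
END;

Is that a reasonable approach, or is there a better way?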
Thanks.