I've got an MS Access database whose main table has a primary key that is an autonumber field. Five other tables are related to that table, but because the (VB.NET) application has now become multiuser, I have a concurrency issue. So I want to change all the autonumbers to GUIDs and need a routine to do this, please.
I was thinking about creating a table of GUIDs (the same size as the main table) and then doing some sort of loop, updating each autonumber value with the corresponding GUID from that table, but I'm not sure if this is the best approach. I have 'cascade updates' checked within MS Access, so I am hoping the GUIDs will replicate down into the attached tables too.
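A rough sketch of the kind of routine being asked about, in VB.NET against the Jet OLE DB provider. Every name here is an assumption: a parent table TableA with autonumber key ID, children TableB through TableF holding it in a column AID, a new GUID column NewID, and a made-up file path.

Imports System
Imports System.Collections.Generic
Imports System.Data.OleDb

Module GuidMigration
    Sub Main()
        Dim connStr As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\mydb.mdb"
        Dim children() As String = _
            New String() {"TableB", "TableC", "TableD", "TableE", "TableF"}
        Using conn As New OleDbConnection(connStr)
            conn.Open()
            ' 1. Add the new GUID column to the parent and every child table.
            RunSql(conn, "ALTER TABLE TableA ADD COLUMN NewID GUID")
            For Each child As String In children
                RunSql(conn, "ALTER TABLE " & child & " ADD COLUMN NewID GUID")
            Next
            ' 2. Collect every existing autonumber value from the parent.
            Dim ids As New List(Of Integer)
            Using cmd As New OleDbCommand("SELECT ID FROM TableA", conn)
                Using rdr As OleDbDataReader = cmd.ExecuteReader()
                    While rdr.Read()
                        ids.Add(rdr.GetInt32(0))
                    End While
                End Using
            End Using
            ' 3. Stamp each parent row with a freshly generated GUID.
            For Each id As Integer In ids
                Using cmd As New OleDbCommand( _
                        "UPDATE TableA SET NewID = ? WHERE ID = ?", conn)
                    cmd.Parameters.AddWithValue("@guid", Guid.NewGuid())
                    cmd.Parameters.AddWithValue("@id", id)
                    cmd.ExecuteNonQuery()
                End Using
            Next
            ' 4. Copy the parent's GUID into each child via the old key,
            '    so nothing depends on cascade updates firing.
            For Each child As String In children
                RunSql(conn, "UPDATE " & child & " INNER JOIN TableA ON " & _
                             child & ".AID = TableA.ID SET " & _
                             child & ".NewID = TableA.NewID")
            Next
        End Using
    End Sub

    Private Sub RunSql(ByVal conn As OleDbConnection, ByVal sql As String)
        Using cmd As New OleDbCommand(sql, conn)
            cmd.ExecuteNonQuery()
        End Using
    End Sub
End Module

Once every table is stamped you would drop the old relationships, promote NewID to primary and foreign key, and recreate the relationships. Copying the GUIDs down explicitly like this avoids relying on cascade updates at all.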
O.o Simply changing your unique identifier is not going to prevent a concurrency issue. A better suggestion might be something like this (this is just one method, but you can apply the methodology any way you like): when a user accesses a record (assuming you store a last-modified datetime field with each record), keep that last-modified value alongside the other values you retrieve from the database, and when the user goes to update the record, compare the last-modified time currently in the database against the last-modified value the user originally retrieved.
If the two values match, you can assume with a fair degree of certainty that the record was not changed between the time the user pulled it from the database and the time the user went to update it, and you should allow the update into the system. If the values do not match, another user has updated the record in the meantime. You now have to fall back on your business rules as to what to do. Essentially it comes down to this (a sketch of the timestamp check follows the list):
- discard the current user's changes entirely and let the data already in the table stand
- allow the current user's data to overwrite what is in the table (last in wins)
- attempt to merge the data
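Here is a minimal sketch of that timestamp check in VB.NET. The table (Customers), its columns (CustName, LastModified) and the connection are all placeholder assumptions; the key idea is that the WHERE clause matches nothing if anyone has touched the row since it was read.

Imports System
Imports System.Data.OleDb

Module OptimisticUpdate
    ' Returns True if the update went through, False if someone else
    ' changed the row after we read it (the WHERE clause matched no rows).
    Function TryUpdateRow(ByVal conn As OleDbConnection, _
                          ByVal id As Integer, _
                          ByVal newName As String, _
                          ByVal lastModifiedWhenRead As DateTime) As Boolean
        Dim sql As String = _
            "UPDATE Customers SET CustName = ?, LastModified = ? " & _
            "WHERE ID = ? AND LastModified = ?"
        Using cmd As New OleDbCommand(sql, conn)
            cmd.Parameters.AddWithValue("@name", newName)
            cmd.Parameters.AddWithValue("@now", DateTime.Now)
            cmd.Parameters.AddWithValue("@id", id)
            cmd.Parameters.AddWithValue("@stamp", lastModifiedWhenRead)
            Return cmd.ExecuteNonQuery() = 1
        End Using
    End Function
End Module

If this returns False, that is the cue to apply one of the three business rules above. One caveat: Jet stores Date/Time only to one-second precision, so an integer version counter is an even safer token than a timestamp.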
Alternatively, you could add a Bit column to your table to act as a "lock": when User A requests the data you set the bit to 1, and it remains 1 until the user updates the data and flips it back to 0. While User A is working with the data, any other request for that row should first check that the bit column is 0, and fail otherwise.
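A sketch of that lock-bit variant, with the same invented Customers table plus an assumed Yes/No column called Locked:

Imports System.Data.OleDb

Module LockBit
    ' Try to take the lock: succeeds only if nobody else already holds it.
    Function TryLockRow(ByVal conn As OleDbConnection, ByVal id As Integer) As Boolean
        Dim sql As String = _
            "UPDATE Customers SET Locked = TRUE WHERE ID = ? AND Locked = FALSE"
        Using cmd As New OleDbCommand(sql, conn)
            cmd.Parameters.AddWithValue("@id", id)
            ' Zero rows affected means another user has the row locked.
            Return cmd.ExecuteNonQuery() = 1
        End Using
    End Function

    ' Release the lock after the update. Call this on every error path too,
    ' or a crashed client leaves the row locked forever.
    Sub UnlockRow(ByVal conn As OleDbConnection, ByVal id As Integer)
        Using cmd As New OleDbCommand( _
                "UPDATE Customers SET Locked = FALSE WHERE ID = ?", conn)
            cmd.Parameters.AddWithValue("@id", id)
            cmd.ExecuteNonQuery()
        End Using
    End Sub
End Module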
I understand that you are working with 5 tables, but you haven't explained whether a user accesses all 5 tables of data in a single request or accesses them one at a time for various operations.
Thanks ever so much for the reply. That's really helpful and has got me thinking... but just to add to the problem, here's the situation...
I have an Access database with 8,500 records in the main table (we'll call it table A for simplicity). Table A currently has an autonumber primary key (1 to 8500). There are 5 subsidiary tables, B, C, D, E and F, each carrying that autonumber as a foreign key, which is of course a Long type in those tables.
The Access database is sent out to 4 users who add, modify and delete records in tables A to F. After a month I gather back each copy of the database and attempt to amalgamate them into a single copy, then send it out again and repeat the cycle. As I'm sure you can appreciate, this is pretty much impossible to do, because 2 users doing an 'add' create 2 records with the same primary key. Modifications and deletions also apply to different records, so I need a better way of storing uniqueness against each new record. Could you give me some ideas if you understand so far?
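This duplicate-add problem is the one GUID keys do solve: the key is generated client-side rather than handed out by the database, so two disconnected users can never mint the same one. A tiny sketch, with invented table and column names:

Imports System
Imports System.Data.OleDb

Module GuidInsert
    Sub AddRecord(ByVal conn As OleDbConnection, ByVal custName As String)
        ' Guid.NewGuid() is generated client-side, so two disconnected
        ' copies of the database can never hand out the same key.
        Using cmd As New OleDbCommand( _
                "INSERT INTO TableA (ID, CustName) VALUES (?, ?)", conn)
            cmd.Parameters.AddWithValue("@id", Guid.NewGuid())
            cmd.Parameters.AddWithValue("@name", custName)
            cmd.ExecuteNonQuery()
        End Using
    End Sub
End Module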
Yes, I have been using an Access database. It resides in a directory on a webserver. The users download the latest copy, then add, modify and delete records within it. Then, one by one, they run an update program that compares their database with the webserver's and applies the changes. After the update, they transfer the new database to the webserver, and the next user downloads it, updates it with their changes, transfers it back, and so on.
Unfortunately, I am not really sure how to overcome this, because there is no truly intelligent way to determine which copy of record X is the most accurate one. For example:
User A and B download the data
User A updates row 5 with some data around 9am
User B updates row 5 with some data around 1pm
User A posts their changes back to the webserver and their copy of row 5 overwrites row 5 on the server
User B posts their changes back to the webserver and their copy of row 5 overwrites the row 5 on the server that User A previously updated.
Which data do you keep? Which is more accurate? It is possible to write a, for lack of a better term, "merge" application that pulls row 5 from the server database, compares it against the row 5 being uploaded, and then relies on business processes to determine what data should be merged onto the server.
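A very rough illustration of such a merge check, with invented names throughout: it flags any row whose uploaded copy differs from the server's copy, so a business rule can then decide the winner.

Imports System
Imports System.Data
Imports System.Data.OleDb

Module MergeSketch
    ' Compares the uploaded copy of a row against the server's copy.
    ' Returns True when they differ, i.e. a business rule must pick a winner.
    Function RowsConflict(ByVal serverConn As OleDbConnection, _
                          ByVal uploaded As DataRow) As Boolean
        Using cmd As New OleDbCommand("SELECT * FROM TableA WHERE ID = ?", serverConn)
            cmd.Parameters.AddWithValue("@id", uploaded("ID"))
            Using rdr As OleDbDataReader = cmd.ExecuteReader()
                If Not rdr.Read() Then Return False ' row is new to the server
                For i As Integer = 0 To rdr.FieldCount - 1
                    If Not Object.Equals(rdr(i), uploaded(rdr.GetName(i))) Then
                        Return True ' at least one column differs
                    End If
                Next
                Return False
            End Using
        End Using
    End Function
End Module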
It sounds like this application needs to take advantage of SQL Server replication. I worked on a similar application where the clients were disconnected most of the time, and when they did come online they used SQL Server replication to merge their changes with the master database.