
The reason we haven't gone forward with this is one complication: if we always write to a central server, how can we ensure the write is replicated to the slave quickly enough? A typical example: we add a record with an INSERT, so the master now contains the correct information. But immediately after most INSERTs, we perform a SELECT to obtain a list of records to refresh the screen. We currently assume we can run that SELECT a fraction of a second after the INSERT and reliably get a list that includes the new record. But the replication will not have had time to occur yet; in fact, it might not happen until minutes later.
Any ideas anyone?
The page on the MySQL website seems to indicate that this strategy works best when there are few writes, so we may actually be pushing the limits of what is possible with this setup. The possible strategies we've come up with so far to address this are:
1. Write to both the master and the slave. The master will still replicate back to the slave, so all queries will have to be written so they don't fail when replication re-delivers the same row. For example, use INSERT IGNORE instead of INSERT.
2. Always write to the local database, and create a management layer to handle passing changes up to the central server.
3. Use Oracle, which has built-in support for multi-master replication (the central server could have multiple masters).
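Here's a rough sketch of what strategy #1 looks like in practice. This is purely illustrative: it uses Python with sqlite3 standing in for MySQL, `INSERT OR IGNORE` as SQLite's analogue of MySQL's `INSERT IGNORE`, and a made-up `records` table. The point is that writing the same row to both servers, with IGNORE making the write idempotent, lets the immediate follow-up SELECT see the new record locally, and the eventual replication replay is harmless.

```python
import sqlite3

# Sketch of strategy #1: write the same row to both the local slave and
# the central master. SQLite stands in for MySQL here; "INSERT OR IGNORE"
# is SQLite's analogue of MySQL's INSERT IGNORE, so when replication later
# re-delivers the same primary key, the duplicate is silently skipped
# instead of aborting with a duplicate-key error.

local_db = sqlite3.connect(":memory:")    # stands in for the office slave
central_db = sqlite3.connect(":memory:")  # stands in for the central master

for db in (local_db, central_db):
    db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")

def dual_write(record_id, name):
    """Write to both servers; IGNORE makes the write idempotent."""
    for db in (local_db, central_db):
        db.execute("INSERT OR IGNORE INTO records (id, name) VALUES (?, ?)",
                   (record_id, name))
        db.commit()

dual_write(1001, "new customer")

# Later, replication re-delivers the master's copy to the slave;
# because of IGNORE, replaying the same INSERT is harmless:
local_db.execute("INSERT OR IGNORE INTO records VALUES (1001, 'new customer')")

# The immediate SELECT after the INSERT now sees the row locally:
rows = local_db.execute("SELECT id, name FROM records").fetchall()
print(rows)  # [(1001, 'new customer')]
```

Note the trade-off: every query path has to be audited to stay idempotent, which is exactly the "optimized to not fail during replication" caveat above.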
Keep in mind that with any of these solutions, we have already solved the problem of conflicting primary keys. We have an algorithm that guarantees each local server can add records that will never have primary-key conflicts with any other office. This is done by automatically partitioning the range of possible primary keys into smaller ranges; each remote office stays within its assigned range when assigning primary keys.
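The key-partitioning idea can be sketched like this. The range size, office numbering, and `KeyAllocator` class are all made up for illustration; the actual algorithm presumably differs, but the invariant is the same: each office draws keys only from its own disjoint range, so keys assigned independently can never collide.

```python
# Hypothetical sketch of the key-partitioning scheme: carve the space of
# possible primary keys into disjoint ranges, one per office. The range
# size and office IDs here are invented for illustration.

RANGE_SIZE = 1_000_000

def key_range(office_id):
    """Return the half-open [start, end) range of keys owned by an office."""
    start = office_id * RANGE_SIZE
    return start, start + RANGE_SIZE

class KeyAllocator:
    """Hands out sequential primary keys from an office's private range."""
    def __init__(self, office_id):
        self.next_key, self.end = key_range(office_id)

    def allocate(self):
        if self.next_key >= self.end:
            raise RuntimeError("office has exhausted its key range")
        key = self.next_key
        self.next_key += 1
        return key

office_2 = KeyAllocator(2)
office_3 = KeyAllocator(3)
print(office_2.allocate())  # 2000000
print(office_3.allocate())  # 3000000 -- ranges are disjoint, no collision
```

(MySQL's `auto_increment_increment` and `auto_increment_offset` settings achieve a similar interleaved-key effect, though that is a different scheme from fixed ranges.)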
I'm leaning towards solution #1, and we'll start testing it to see whether it's as simple as I hope.