Generic Data Access

This forum is for programmers who have questions about the source code.
fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Generic Data Access

Post by fcarlier » Tue Jun 19, 2007 7:45 am

In my branch (the fcarlier/ branch), I've been making a couple of trial changes to the way data is stored in Open Dental.

Basically, I created a "DataObjectFactory<T>" class which implements methods to add, open and delete objects from the database.

It is written as generically as possible. Every data object (that is, a Table Type) has an attribute applied which defines which table it corresponds to. Every field has another attribute, defining the corresponding column in the table.
Furthermore, every data object keeps track of its own state: is it a new object (not yet stored in the database), are there local modifications to certain fields, or has the object been deleted?
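
To make that concrete, a stripped-down Table Type could look something like this (the attribute and flag names are illustrative; the exact names in my branch may differ):

Code: Select all

using System;

// Illustrative attribute definitions -- the real names in the branch may differ.
[AttributeUsage(AttributeTargets.Class)]
public class DbTableAttribute : Attribute {
    public string TableName;
    public DbTableAttribute(string tableName) { TableName = tableName; }
}

[AttributeUsage(AttributeTargets.Field)]
public class DbColumnAttribute : Attribute {
    public string ColumnName;
    public bool IsPrimaryKey;
    public DbColumnAttribute(string columnName) { ColumnName = columnName; }
}

// A Table Type mapped to its table and columns, tracking its own state.
[DbTable("patient")]
public class Patient {
    [DbColumn("PatNum", IsPrimaryKey = true)]
    public int PatNum;

    [DbColumn("LName")]
    public string LName;
    public bool LNameChanged;   // per-field change tracking

    public bool IsNew = true;   // not yet stored in the database
    public bool IsDirty;        // local modifications exist
    public bool IsDeleted;      // marked for deletion
}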

The DataObjectFactory class is able to open a single object given an ID, or create a collection of objects given either a DataReader, a set of IDs or a SQL query.

When saving an object, a new row is created if required. Otherwise, an existing row is updated.
If a new row is inserted, the ID can be either retrieved from the database or randomly assigned. It is also possible to add a new row using a pre-defined ID.
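
In use, it looks roughly like this (a simplified usage sketch; the exact overloads differ):

Code: Select all

// Usage sketch only -- the exact overloads may differ from what's in the branch.
Patient pat = DataObjectFactory<Patient>.CreateObject(1);          // open a single object by its ID
Collection<Patient> smiths =
    DataObjectFactory<Patient>.CreateObjects("SELECT * FROM patient WHERE LName='Smith'");

pat.LName = "Jones";                                                // sets LNameChanged and IsDirty
DataObjectFactory<Patient>.WriteObject(pat);                        // UPDATE (or INSERT when IsNew)
DataObjectFactory<Patient>.DeleteObject(pat);                       // DELETE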

I've applied this technique to part of the Patients class. I think this greatly simplifies the way the Patients class is maintained: you no longer need to keep track of all the fields, and if you add a new field to the class, the queries for loading, adding, editing and removing patients are updated automatically!

Basically, I've been creating a generic DAO (Data Access Object). I've slightly updated the DTOs (the Table Types). I think it is an enhancement to Open Dental, mainly in terms of maintenance: the process of adding new tables or columns to existing tables should be greatly simplified.

I'm aware this is a major modification to the Open Dental structure. However, the existing DAOs and the generic DAO can run side by side.

I'm eager to hear feedback from you (mainly Dr. Sparks, of course!) to know if this change can make it into Open Dental.

jordansparks
Site Admin
Posts: 5744
Joined: Sun Jun 17, 2007 3:59 pm
Location: Salem, Oregon

Post by jordansparks » Tue Jun 19, 2007 8:25 am

Thank you - Thank you - Thank you

You have done in one day what I have been attempting to do since day one. I strongly support inclusion of your changes in our trunk. But not right now. Please wait until 5.0 goes to beta, and the trunk moves to 5.1.
Jordan Sparks, DMD
http://www.opendental.com

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Post by fcarlier » Tue Jun 19, 2007 9:41 am

Great! Here are a couple of notes and questions:

1. All the modifications that I made to the Table Types are done using a Visual Studio macro. It should be no problem to re-apply these changes later to the trunk, without going through a series of merges/branches.

2. Currently, every field of a Table Type has an extra "Changed" property (e.g. FNameChanged). A possibility would be to raise an event, too (a rough sketch follows this list). That may be useful in some parts of the UI -- one part of the UI can react to modifications made by another part of the UI.

3. The next thing to do would be to go through all classes in OpenDental/Data Interface and replace the manual SQL code with calls to the DataObjectFactory class. As I said, this is where the real value of my change lies -- no more need to write your own queries! On the other hand, although the code seems to work pretty well, you'll need to do regression tests!

4. I'll try to write some basic unit tests. That is, create an object, populate its fields with random values, store it, modify it, open it, delete it and make sure everything went well.

5. There is an issue with the AutoNumber IDs in Oracle. I haven't supported them yet, because I don't fully understand how it's done. I see you need to get an exclusive lock, but I can't help thinking there must be a better way. I think something like "SEQUENCE" may do the trick, but I'm not sure about it.

6. You're not the first one to create a CRUD application, and I'm not the first one to create a Data Object Factory. NHibernate, for example, does pretty much the same thing. I'm not sure, though, whether they support things like random keys.

7. I noticed that (sometimes?) a boolean value is stored as TINYINT(3) in MySQL. Boolean corresponds to TINYINT(1) in MySQL, so I wondered whether there is any specific reason for this.
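
Regarding point 2 above, the idea is roughly this (a simplified sketch; names illustrative):

Code: Select all

using System;

// Sketch of point 2: a per-field change flag, plus (optionally) an event that
// other parts of the UI could subscribe to. Names are illustrative.
public class Patient {
    public event EventHandler FNameChangedEvent;   // the optional event

    private string fName;
    public bool FNameChanged;                      // the flag that exists today

    public string FName {
        get { return fName; }
        set {
            if (fName == value) {
                return;
            }
            fName = value;
            FNameChanged = true;
            if (FNameChangedEvent != null) {
                FNameChangedEvent(this, EventArgs.Empty);
            }
        }
    }
}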

Once again, I'm happy you approve of this change which I think will be a major benefit to Open Dental.

jordansparks
Site Admin
Posts: 5744
Joined: Sun Jun 17, 2007 3:59 pm
Location: Salem, Oregon

Post by jordansparks » Tue Jun 19, 2007 9:57 am

6. I think I tried to do a Data Object Factory a few years ago, before we had generics, and couldn't figure it out. I avoid tools like NHibernate because I don't want to get locked in to their framework. Our random keys are a perfect example of why we need more flexibility than any such framework can provide.

7. They actually take up the same amount of space in the database, so there is no real difference. The (1) just limits the user input. Sometimes I use (3) if I might expand a boolean to an enumeration later. Sometimes it just depends on the mood I'm in. I think I've just not been specifying either 3 or 1 lately, and it defaults to 3.
Jordan Sparks, DMD
http://www.opendental.com

grahamde
Posts: 52
Joined: Tue Jun 19, 2007 10:17 am

Automatic Incrementation In Oracle

Post by grahamde » Tue Jun 19, 2007 11:34 am

In response to fcarlier, bullet point #5:

The current design already uses sequences to handle auto-incrementation. However, there are places in the program where the insert ID must be returned to the program for use elsewhere. Since a sequence is internal to Oracle and there is no automatic way of getting the last inserted primary key when it is generated by a sequence, I made it so that for each sequence there is also an insert trigger. So the sequence of events for an INSERT operation in Oracle goes like this (a rough code sketch follows the steps; call the row to be inserted R, and the table it is to be inserted into table T):

1) The preference table is locked exclusively by the connection class, so that the OracleInsertID value will not be disturbed by other instances of the application while trying to retrieve the insert ID for the current instance of the Open Dental application.
2) The insert trigger for table T is activated inside the Oracle server software. The trigger runs a small section of Oracle based code to call upon the sequence S attached to table T in order to get the next insert ID. The insert ID is then used to create row R, but is also saved to the OracleInsertID value in the preference table.
3) The connection class then reads the OracleInsertID value from the preference table so it can be properly returned.
4) The preference table is unlocked.
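
In code, the connection-class side of that flow looks roughly like this (a hypothetical sketch; the preference table's column names and the error handling are simplified):

Code: Select all

using System;
using System.Data;

// Hypothetical sketch of the connection-class side of an Oracle INSERT under
// the current trigger/sequence approach. Column names on the preference table
// are guesses; the real code differs in detail.
public static class OracleInsertSketch {
    public static long Insert(IDbConnection con, string insertSql) {
        IDbTransaction tran = con.BeginTransaction();
        IDbCommand cmd = con.CreateCommand();
        cmd.Transaction = tran;
        try {
            // 1) Lock the preference table so OracleInsertID is not disturbed
            //    by other instances of the application.
            cmd.CommandText = "LOCK TABLE preference IN EXCLUSIVE MODE";
            cmd.ExecuteNonQuery();
            // 2) Run the insert; the per-table trigger pulls the next value
            //    from the sequence and also writes it to OracleInsertID.
            cmd.CommandText = insertSql;
            cmd.ExecuteNonQuery();
            // 3) Read back the key the trigger stored (column names are a guess).
            cmd.CommandText = "SELECT ValueString FROM preference WHERE PrefName='OracleInsertID'";
            long insertId = Convert.ToInt64(cmd.ExecuteScalar());
            // 4) Committing releases the lock on the preference table.
            tran.Commit();
            return insertId;
        }
        catch {
            tran.Rollback();
            throw;
        }
    }
}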

It is unfortunate, however, that we must create our triggers and sequences separately from database conversion, in the database maintenance form, because the code for creating just one trigger/sequence pair is specific to each (table, primary key) tuple, and runs to about 100 lines of code each (to handle the cases where the trigger, the sequence, or both already exist in the db). I would like to move away from this method altogether; we only ended up using it because of time constraints anyway.

One way we can fix this issue is by having Open Dental always generate its own keys, for both MySQL and Oracle. Either the calling code could specify a key value (as we already allow), or our database framework could generate one automatically. We could generate them on INSERT statements as follows (let R be the row and T be the table; a rough sketch in code follows the list):

1) Lock table T for exclusive Write access. This would allow other instances of Open Dental to continue reading data from the table without interruption.
2) Perform a select command to determine the currently largest key value K in table T. It is this value K that could be wrong if we do not lock the table.
3) Set the key of R to K+1, and perform the insert into table T.
4) Unlock table T to allow for other programs to perform data insertions again.
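
A rough sketch of what that could look like (the lock statement shown is Oracle-style; MySQL has its own syntax, and the table and column names are examples only):

Code: Select all

using System;
using System.Data;

// Hypothetical sketch of the proposed MAX+1 key generation. The lock statement
// is Oracle-style; MySQL uses its own syntax, and table/column names are examples.
public static class MaxPlusOneSketch {
    public static long InsertPatient(IDbConnection con, string lastName) {
        IDbCommand cmd = con.CreateCommand();
        // 1) Lock table T so no other instance generates the same key.
        cmd.CommandText = "LOCK TABLE patient IN EXCLUSIVE MODE";
        cmd.ExecuteNonQuery();
        // 2) Find the currently largest key value K.
        cmd.CommandText = "SELECT MAX(PatNum) FROM patient";
        object max = cmd.ExecuteScalar();
        long newKey = (max == null || max == DBNull.Value) ? 1 : Convert.ToInt64(max) + 1;
        // 3) Set the key of R to K+1 and insert it.
        cmd.CommandText = "INSERT INTO patient (PatNum,LName) VALUES ("
            + newKey + ",'" + lastName + "')";
        cmd.ExecuteNonQuery();
        // 4) Release the lock (a COMMIT does this in Oracle).
        cmd.CommandText = "COMMIT";
        cmd.ExecuteNonQuery();
        return newKey;
    }
}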

Of course, random primary keys would still work the same. When new rows are merged back into the other databases, a collision might still happen, but would be very rare.

jordansparks
Site Admin
Posts: 5744
Joined: Sun Jun 17, 2007 3:59 pm
Location: Salem, Oregon

Post by jordansparks » Tue Jun 19, 2007 1:05 pm

I've been reading the actual code now, and I have to say it's very elegant. The only part that's starting to scare me is the provider factory. I know it needs to be done, but there are just so many details involved. There are so many parts of the program that depend on the current code. There are so many assumptions and so many things that can go wrong.

So can we phase it in? Like you said, the new framework can run side-by-side with the old framework. And only when the new framework is used will the new code be used. This would retain the stability of the remainder of the program, something that I'm very interested in for obvious reasons. As more of the classes get moved over to the new framework, we could eventually get rid of the old framework.
Jordan Sparks, DMD
http://www.opendental.com

jordansparks
Site Admin
Posts: 5744
Joined: Sun Jun 17, 2007 3:59 pm
Location: Salem, Oregon

Post by jordansparks » Tue Jun 19, 2007 5:23 pm

I'm just going to post issues as I run across them.

1. Why are you comparing all field values when deleting? I've always just used the primary key.

2. The reason for only sending the changed values to the database is not to enhance performance. It's to prevent multiuser concurrency issues. http://www.open-dent.com/manual/refreshing.html So we really are going to need to preserve that functionality for a few of the heavily used tables. For most tables, of course, it doesn't matter one bit.
Jordan Sparks, DMD
http://www.opendental.com

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Post by fcarlier » Wed Jun 20, 2007 12:11 am

jordansparks wrote:I've been reading the actual code now, and I have to say it's very elegant. The only part that's starting to scare me is the provider factory. I know it needs to be done, but there are just so many details involved. There are so many parts of the program that depend on the current code. There are so many assumptions and so many things that can go wrong.
That's true; the code for saving objects in particular is rather complex. However, think of it this way: this complexity already exists in Open Dental. Every *s class, such as the Patients class, has special cases for random keys, external keys and so forth. What the Factory does is centralize this complexity in one class. Currently, you have the same complexity repeated all over Open Dental. So many things that can go wrong in so many places!
jordansparks wrote:So can we phase it in? Like you said, the new framework can run side-by-side with the old framework. And only when the new framework is used will the new code be used. This would retain the stability of the remainder of the program, something that I'm very interested in for obvious reasons. As more of the classes get moved over to the new framework, we could eventually get rid of the old framework.
Yes, I think that should be the plan. That's how I'm doing it (although in an accelerated way): I started with the Patient class and went through the UI to make sure I can add, edit and delete patients.

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Re: Automatic Incrementation In Oracle

Post by fcarlier » Wed Jun 20, 2007 12:15 am

grahamde wrote: The current design is using sequences already to handle auto-incrementation. However, there are places in the program where the insert ID must be returned to the program for use elsewhere. Since a sequence is internal to Oracle and there is no automatic way of getting the last inserted primary key number when it is generated by a sequence, I made it so that for each sequence, there is also an insert trigger. So the sequence of events for an INSERT operation in Oracle goes like this (Call the row to be inserted R, and the table it is to be inserted into table T):
As I understand it, you can also do this:

1. Get the next value of a certain sequence in Oracle; call it "the Id".
2. Insert a row with that Id as its primary key.
3. Return the primary key to the code.

Something like (obviously not proper SQL):
SELECT @ID = NEXT SEQUENCE
INSERT INTO TABLE FOO(ID, Value) VALUES (@ID, @Value)
RETURN @ID

Because a sequence itself in Oracle is "concurrency-safe", you no longer need to lock any table. The whole query can be executed in one connection.

The fundamental difference is that you no longer try to retrieve the Id after a row has been inserted; you retrieve it before the row is inserted.
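
In C# terms, something like this (sequence and table names made up for the example):

Code: Select all

using System;
using System.Data;

// Sketch of the "fetch the key first" approach; sequence and table names are
// made up for the example.
public static class SequenceSketch {
    public static long InsertPatient(IDbConnection con, string lastName) {
        IDbCommand cmd = con.CreateCommand();
        // 1) Ask the sequence for its next value -- this is concurrency-safe,
        //    so no table lock is needed.
        cmd.CommandText = "SELECT patient_seq.NEXTVAL FROM DUAL";
        long id = Convert.ToInt64(cmd.ExecuteScalar());
        // 2) Insert the row, using that value as the primary key.
        cmd.CommandText = "INSERT INTO patient (PatNum,LName) VALUES ("
            + id + ",'" + lastName + "')";
        cmd.ExecuteNonQuery();
        // 3) Return the key to the calling code.
        return id;
    }
}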

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Post by fcarlier » Wed Jun 20, 2007 12:17 am

jordansparks wrote:1. Why are you comparing all field values when deleting? I've always just used the primary key.
I don't ;). I compare all the fields that make up the primary key. Just to make sure that if a table's primary key is made up of more than one column, I do the right thing.
2. The reason for only sending the changed values to the database is not to enhance performance. It's to prevent mulituser concurrency issues. http://www.open-dent.com/manual/refreshing.html So we really are going to need to preserve that functionality for a few of the heavily used tables. For most tables, of course, it doesn't matter one bit.
Okay. That should be pretty easy, though, using the PropertyChanged field. I'll see if I can implement it.
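
Something along these lines, perhaps (hand-written here for two fields; the real code would enumerate the column attributes and Changed flags instead of hard-coding them):

Code: Select all

using System.Text;

// Sketch: build an UPDATE that only touches columns whose Changed flag is set.
// Assumes a Patient data class with FName/FNameChanged, LName/LNameChanged and
// PatNum fields, as sketched earlier in the thread.
public static class UpdateSketch {
    public static string BuildUpdate(Patient pat) {
        StringBuilder sql = new StringBuilder("UPDATE patient SET ");
        bool first = true;
        if (pat.FNameChanged) {
            sql.Append("FName='" + pat.FName + "'");
            first = false;
        }
        if (pat.LNameChanged) {
            if (!first) {
                sql.Append(",");
            }
            sql.Append("LName='" + pat.LName + "'");
        }
        // (a real implementation would skip the UPDATE entirely when nothing changed)
        sql.Append(" WHERE PatNum=" + pat.PatNum);
        return sql.ToString();
    }
}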

grahamde
Posts: 52
Joined: Tue Jun 19, 2007 10:17 am

Re: Automatic Incrementation In Oracle

Post by grahamde » Wed Jun 20, 2007 11:46 am

fcarlier wrote:
grahamde wrote: The current design is using sequences already to handle auto-incrementation. However, there are places in the program where the insert ID must be returned to the program for use elsewhere. Since a sequence is internal to Oracle and there is no automatic way of getting the last inserted primary key number when it is generated by a sequence, I made it so that for each sequence, there is also an insert trigger. So the sequence of events for an INSERT operation in Oracle goes like this (Call the row to be inserted R, and the table it is to be inserted into table T):
As I understand it, you can also do this:

1. Get the next value of a certain sequence in Oracle; call it "the Id".
2. Insert a row with that Id as its primary key.
3. Return the primary key to the code.

Something like (obviously not proper SQL):
SELECT @ID = NEXT SEQUENCE
INSERT INTO TABLE FOO(ID, Value) VALUES (@ID, @Value)
RETURN @ID

Because a sequence itself in Oracle is "concurrency-safe", you no longer need to lock any table. The whole query can be executed in one connection.

The fundamental difference is that you no longer try to retrieve the Id after a row has been inserted; you retrieve it before the row is inserted.
Yes, you "can" do this, but it still leaves the manageability issue of needing to create and manage sequences for each of the keys. It would be nice to get rid of sequences altogether.

Derek
Software Engineer,
Open Dental Software

grahamde
Posts: 52
Joined: Tue Jun 19, 2007 10:17 am

Possible Improvements On The Database Framework

Post by grahamde » Wed Jun 20, 2007 11:12 pm

fcarlier,

I was looking through your database framework code in your branch. Thank you for your thoughts; they are very helpful. However, after reviewing the entire design, I have some additional work I would like everyone to consider. I have copied your branch into my own personal branch, "grahamde", so you can look at the code if you want. The additions are as follows:

Primarily, I added a TableDef class to the framework which all of the table types would be expected to inherit from. Using inheritance in this way would allow the following benefits:

* Removes the need to have SQL handling functions in the OpenDental project and pushes them completely into the business layer. For example, one can insert a new patient object P by simply writing P.Update(), and the Update() function will already be implemented for every table type in the TableDef class. Also, the special SQL functions in the "Patients"
Derek
Open Dental Software

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Post by fcarlier » Thu Jun 21, 2007 12:00 am

Hi Derek,

I think that is a good idea. Basically, you're talking about implementing the Active Record data pattern (which I like).

A couple of remarks:
1. If you make the base class (in my case it was DataObjectBase) generic, eg:

Code: Select all

public class Patient : TableDef<Patient>,
you get type-safety for free!

2. I would consider adding overloads of the Update() and Remove() methods that accept an IDbConnection. That way, you can perform multiple updates and removals on objects on the same connection, which should be faster.

Things an Active Record pattern doesn't handle very well are:
1. Opening multiple objects at the same time. You'll still need some kind of "Factory" class to handle this (or a static method on the data object, which you'll have to re-implement every time).

2. Loading objects. You'll probably have to implement a separate Open() method on the TableDef class.
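
To make the comparison concrete, a bare-bones version of such a base class might look like this (method bodies omitted; the names follow your TableDef suggestion):

Code: Select all

using System.Collections.ObjectModel;
using System.Data;

// Bare-bones sketch only; method bodies are stubs.
public abstract class TableDef<T> where T : TableDef<T>, new() {
    // Active Record style: the object saves and removes itself.
    public void Update() { /* build and run INSERT or UPDATE for this object */ }
    public void Remove() { /* build and run DELETE for this object */ }

    // Overloads that accept a connection, so several operations can share one.
    public void Update(IDbConnection con) { /* ... */ }
    public void Remove(IDbConnection con) { /* ... */ }

    // Loading is the awkward part: it needs either a factory or something like
    // this on the base class.
    public static Collection<T> CreateObjects(DataTable table) {
        Collection<T> result = new Collection<T>();
        foreach (DataRow row in table.Rows) {
            T item = new T();
            // ...copy values from row into item via the column attributes...
            result.Add(item);
        }
        return result;
    }
}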

jordansparks
Site Admin
Posts: 5744
Joined: Sun Jun 17, 2007 3:59 pm
Location: Salem, Oregon

Post by jordansparks » Thu Jun 21, 2007 9:16 am

Keep in mind that we did, in the not-so-distant past, already have the database access logic inside the same class as the "TableDef"/"DataObjectBase"/"TableType". I went to great effort to get away from that model, and I really don't think I'm ready to go back to it quite so soon. It's very clean to have objects that do nothing except represent a data structure. If we move to an Active Record pattern, then we're also talking about moving to a Domain Model pattern. While I might be open to that in the future, I found that it was not working very well for our needs. Let's please stick with a Table Module pattern instead. Part of my motivation is to keep our data model flexible enough to be used as a web application. So I'm trying to keep to a conceptually simpler pattern, even if it results in a bit more code duplication in the long run.
Jordan Sparks, DMD
http://www.opendental.com

grahamde
Posts: 52
Joined: Tue Jun 19, 2007 10:17 am

Active Record Plan or Domain Model

Post by grahamde » Thu Jun 21, 2007 10:06 am

fcarlier wrote:Hi Derek,

I think that is a good idea. Basically, you're talking about implementing the Active Record data pattern (which I like).

A couple of remarks:
1. If you make the base class (in my case it was DataObjectBase) generic, eg:

Code: Select all

public class Patient : TableDef<Patient>,
you get type-safety for free!

2. I would consider adding overloads of the Update() and Remove() methods that accept an IDbConnection. That way, you can perform multiple updates and removals on objects on the same connection, which should be faster.

Things an Active Record pattern doesn't handle very well are:
1. Opening multiple objects at the same time. You'll still need some kind of "Factory" class to handle this (or a static method on the data object, which you'll have to re-implement every time).

2. Loading objects. You'll probably have to implement a separate Open() method on the TableDef class.
I am not sure how the active record plan is not type-safe.

I agree that Remove() and Update() should accept a generic connection. The code that I put together yesterday was mostly to get the idea out into the open.

You do not need a factory or a static method to open multiple objects at one time. See the updated code for TableDef.CreateObjects() in my temporary branch. The idea would be, to create a list of patients, one would write: Collection<Patient> patients=(Collection<Patient>)(new Patient().CreateObjects(DataTable T)); The compiler will always check type safety at this point.

I am not sure what you mean by needing to implement a separate Open() method.
Derek
Open Dental Software

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Re: Active Record Plan or Domain Model

Post by fcarlier » Thu Jun 21, 2007 10:50 am

grahamde wrote:I am not sure how the active record plan is not type-safe.
I didn't mean to say it is not type-safe. I just wanted to say that by marking the base class as generic, you get type-safety for free.
You do not need a factory or a static method to open multiple objects at one time. See the updated code for TableDef.CreateObjects() in my temporary branch. The idea would be, to create a list of patients, one would write: Collection<Patient> patients=(Collection<Patient>)(new Patient().CreateObjects(DataTable T)); The compiler will always check type safety at this point.
Yes, but why would you want to do that? Basically, you're creating a new, empty Patient object to create a distinct collection of objects.

It's all about

Code: Select all

Collection<Patient> patients=(Collection<Patient>)(new Patient().CreateObjects(DataTable T));
vs

Code: Select all

Collection<Patient> patients = Patient.CreateObjects(DataTable t);
vs

Code: Select all

Collection<Patient> patients = DataObjectFactory<Patient>.CreateObjects(DataTable t);
Anyway, as I understand from Dr. Sparks' post, he wants to keep data storage & data access in two different classes.
Frederik Carlier

grahamde
Posts: 52
Joined: Tue Jun 19, 2007 10:17 am

Data Definition and Storage Separation

Post by grahamde » Thu Jun 21, 2007 11:05 am

It would also be possible to keep the two-classes-per-table model by having the second class inherit from its corresponding table definition. For instance, we could have Patients inherit from Patient. That way, the Patient object by itself would only define the data structure, and the Patients class would still contain the same custom SQL functionality, while at the same time allowing one to override the basic Update() and Delete() functions in the Patients class if needed (to keep the SQL definitions out of the Patient object).
Derek
Open Dental Software

jordansparks
Site Admin
Posts: 5744
Joined: Sun Jun 17, 2007 3:59 pm
Location: Salem, Oregon

Post by jordansparks » Thu Jun 21, 2007 4:55 pm

The burden of deciding which direction to go is beginning to weigh very heavily on me. I just looked at the Patient class, which is now 1000 lines vs its original 250 lines. So adding a new table with, for example, 10 fields, would not be much simpler than it currently is. It would take about 130 lines of code, which seems about the same as it is now. I realize, however, that it would be less prone to bugs.

Derek has made a reasonable argument to me in person that data object classes which inherit from a base class might be better than a factory because methods can be overridden when functionality needs to be different than for most of the other similar classes. That said, I don't like his suggestion of Patients inheriting from Patient. They serve two different purposes, and there's no need to have them be related in this way.

fcarlier is clearly a better programmer than I am, and so I am inclined to defer to his judgement. But there are still parts of the core which I am not sure that he understands; in particular the three layer architecture as outlined at http://www.open-dent.com/manual/architecture.html. I need to make sure, for example, that he understands that the Open Dental\Data Interface\General.cs class is able to pick whether to communicate with the server component or directly with the database. I'm assuming that will be part of the provider factory, but I don't see it yet.

If we move forward with this, then there will not be a single person who deeply understands all aspects of the core. That makes me nervous. If I understood better exactly what some of this code did, it would be better. But the volume makes it difficult. The burden is probably on me at this point to develop a better understanding of what fcarlier did. It would help if I did some of the coding myself. When the time comes to transfer all of this to the trunk, I would like to personally do as much of that transfer as possible so that I will know exactly what is being changed, and so that I will be able to retain some understanding of what it's doing.

Am I correct that parameters were used to construct the queries? I eliminated parameters from the code a long time ago due to the quirks of the connector. The folks who write the connector are idiots, and their implementation of parameters is very likely to change suddenly in an incompatible way. It is unsafe to use parameters with this connector. Creating strings is much safer. I think I would insist on this for stability. You have no idea the kind of trouble they have caused our customers in the past.

Derek and I both agree that this will be an improvement, although Derek wants to take it in a slightly different direction. I think the quality of the code, however, warrants its inclusion without change.
Jordan Sparks, DMD
http://www.opendental.com

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Post by fcarlier » Fri Jun 22, 2007 6:45 am

jordansparks wrote:The burden of deciding which direction to go is beginning to weigh very heavily on me. I just looked at the Patient class, which is now 1000 lines vs its original 250 lines. So adding a new table with, for example, 10 fields, would not be much simpler than it currently is. It would take about 130 lines of code, which seems about the same as it is now. I realize, however, that it would be less prone to bugs.
Yes, the data classes contain more code now. Remember, however, that this is all boilerplate code. To add a field to a data class, you only need to know:
* Is it a primary key field?
* What is the type of the field?
* What is the name of the field?
The rest of the code can all be:
* Auto-generated
* Added using copy & paste
* Added using VS Snippets
Derek has made a reasonable argument to me in person that data object classes which inherit from a base class might be better than a factory because methods can be overridden when functionality needs to be different than for most of the other similar classes. That said, I don't like his suggestion of Patients inheriting from Patient. They serve two different purposes, and there's no need to have them be related in this way.
I see pros and cons for both ways of doing it; however, as long as it includes a clean way of creating an array or collection of objects, it's fine with me. Personally, I've been using an Active Record pattern for quite some time, but I've been moving away from it -- precisely because that gives a better separation of data transfer and data access. In the end, though, the two ways of doing it are just that: two different patterns. I don't think one is really "better" than the other.
But there are still parts of the core which I am not sure that he understands; in particular the three layer architecture as outlined at http://www.open-dent.com/manual/architecture.html. I need to make sure, for example, that he understands that the Open Dental\Data Interface\General.cs class is able to pick whether to communicate with the server component or directly with the database. I'm assuming that will be part of the provider factory, but I don't see it yet.
It's not there yet, but it's in the pipeline. Watch your commit logs ;)
If we move forward with this, then there will not be a single person who deeply understands all aspects of the core. That makes me nervous. If I understood better exactly what some of this code did, it would be better. But the volume makes it difficult. The burden is probably on me at this point to develop a better understanding of what fcarlier did. It would help if I did some of the coding myself. When the time comes to transfer all of this to the trunk, I would like to personally do as much of that transfer as possible so that I will know exactly what is being changed, and so that I will be able to retain some understanding of what it's doing.
Feel free to ask any questions you have about the code. In reality, it's (rather) simple:

A. For data classes
* Every field has been replaced by a property.
* Every field that corresponds to a column has an attribute indicating it is a database column. Primary keys are marked as such.
* Every field now has a "Changed" flag as well. That allows you to keep track of exactly which fields have changed.
* There is an IsNew property. It is set to true if the object does not exist in the database.
* There is an IsDirty property. If any value has changed, this property is set to true.

B. Factory class
* Every time an object is saved to the database, the code inspects the object. It gets a list of all fields of that object that correspond to a database column. Then the query is created on-the-fly.
* For creating objects, the reverse is done. For each field, the corresponding column is accessed on the result set (be it a DataTable, a DataRow, or a DataReader) and its value is set.
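
A very condensed sketch of the saving half of point B (the real code also handles primary keys, the Changed flags, escaping and the optional parameters; DbColumnAttribute here is just an illustrative name):

Code: Select all

using System;
using System.Reflection;
using System.Text;

// Condensed sketch: build an INSERT from whatever fields carry a column attribute.
public static class FactorySketch {
    public static string BuildInsert(object obj, string tableName) {
        StringBuilder cols = new StringBuilder();
        StringBuilder vals = new StringBuilder();
        foreach (FieldInfo field in obj.GetType().GetFields()) {
            object[] attrs = field.GetCustomAttributes(typeof(DbColumnAttribute), true);
            if (attrs.Length == 0) {
                continue;                                  // not a database column
            }
            DbColumnAttribute col = (DbColumnAttribute)attrs[0];
            if (cols.Length > 0) {
                cols.Append(",");
                vals.Append(",");
            }
            cols.Append(col.ColumnName);
            vals.Append("'" + field.GetValue(obj) + "'");  // naive quoting, sketch only
        }
        return "INSERT INTO " + tableName + " (" + cols + ") VALUES (" + vals + ")";
    }
}
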
Am I correct that parameters were used to construct the queries? I eliminated parameters from the code a long time ago due to the quirks of the connector. The folks who write the connector are idiots, and their implementation of parameters is very likely to change suddenly in an incompatible way. It is unsafe to use parameters with this connector. Creating strings is much safer. I think I would insist on this for stability. You have no idea the kind of trouble they have caused our customers in the past.
This change has been implemented. You get to choose.

NOTE: The unit tests for these classes are still to be written. They would consist of creating "garbage" data and inserting, updating and removing it from the database, using the different patterns:
- With or without remoting
- With or without usage of parameters
Frederik Carlier

jordansparks
Site Admin
Posts: 5744
Joined: Sun Jun 17, 2007 3:59 pm
Location: Salem, Oregon

Post by jordansparks » Fri Jun 22, 2007 8:29 am

The latest commit puts to rest my concern about fcarlier not understanding the existing code.
Jordan Sparks, DMD
http://www.opendental.com

grahamde
Posts: 52
Joined: Tue Jun 19, 2007 10:17 am

Integer Primary Keys

Post by grahamde » Fri Jun 22, 2007 5:00 pm

Hello again Frederik,

I forgot to mention that I noticed one other bug in the framework, which is actually fairly critical but can be fixed fairly easily. There are at least two places where you assume that primary keys are always integers (DataObjectInfo.GetPrimaryKeyField() and DataObjectInfo.SetPrimaryKey()), when in fact there are non-integer primary key fields in our database. For example, the county table has CountyName (a varchar) as its primary key. There may be one or two other tables where the primary key is a string-based type as well. We may, however, be able to just restrict our primary keys to integer and string types to meet the current database layout, or we could define it more generically (in case there are primary keys of other types in the future, although I can't imagine what those would be).

Thanks for your time.
Derek
Open Dental Software

fcarlier
Posts: 75
Joined: Tue Jun 19, 2007 3:12 am
Location: Ghent, Belgium

Post by fcarlier » Sat Jun 23, 2007 2:37 am

Hi Derek,

I noticed your code comment on the primary key issue before. However, it is not true that I assume the primary key is always an integer -- quite the contrary. If you pass an object to the CreateObject, WriteObject or DeleteObject methods, the code always enumerates all primary key fields and uses them in the appropriate WHERE clauses.

Only the methods that take an integer as a parameter (for examle Delete(int id)) rely on this assumption. The code, however, throws a specific exception when this condition is not met.

Because almost all tables have an integer primary key, I thought it was worth adding this extra functionality.
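
The guard looks more or less like this (names approximate; the real check lives in DataObjectInfo, and DbColumnAttribute is an illustrative name):

Code: Select all

using System;
using System.Reflection;

// Sketch of the guard: Delete(int id) only works for a single integer primary key.
public static class KeyGuardSketch {
    public static void CheckSingleIntegerKey(Type tableType) {
        int intKeys = 0;
        int otherKeys = 0;
        foreach (FieldInfo field in tableType.GetFields()) {
            object[] attrs = field.GetCustomAttributes(typeof(DbColumnAttribute), true);
            if (attrs.Length == 0 || !((DbColumnAttribute)attrs[0]).IsPrimaryKey) {
                continue;
            }
            if (field.FieldType == typeof(int)) {
                intKeys++;
            }
            else {
                otherKeys++;
            }
        }
        if (intKeys != 1 || otherKeys != 0) {
            throw new NotSupportedException(
                "Delete(int id) requires a single integer primary key; "
                + "pass the object to DeleteObject instead.");
        }
    }
}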

Hope it helps,

Frederik
Frederik Carlier
