SQLObject is by Ian Bicking (ianb@colorstudy.com). The website is sqlobject.org.
The code is licensed under the Lesser General Public License (LGPL).
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
SQLObject is an object-relational mapper. It allows you to translate RDBMS table rows into Python objects, and manipulate those objects to transparently manipulate the database.
In using SQLObject, you will create a class definition that will describe how the object translates to the database table. SQLObject will produce the code to access the database, and update the database with your changes. The generated interface looks similar to any other interface, and callers need not be aware of the database backend.
SQLObject also includes a novel feature that avoids generating your SQL queries textually. This also allows non-SQL databases to be used with the same query syntax.
Currently SQLObject supports MySQL, PostgreSQL (via psycopg), SQLite, Firebird, and a DBM-based store. The DBM backend is experimental.
Python 2.2 or higher is required. SQLObject makes extensive use of new-style classes.
There are several object-relational mappers (ORMs) for Python. I honestly can't comment deeply on the quality of those packages, but I'll try to place SQLObject in perspective.
SQLObject uses new-style classes extensively. The resultant objects have a new-style feel as a result -- setting attributes has side effects (it changes the database), and defining classes has side effects (through the use of metaclasses). Attributes are generally exposed, not marked private, knowing that they can be made dynamic or write-only later.
SQLObject creates objects that feel similar to normal Python objects (with the semantics of new-style classes). An attribute attached to a column doesn't look different than an attribute that's attached to a file, or an attribute that is calculated. It is a specific goal that you be able to change the database without changing the interface, including changing the scope of the database, making it more or less prominent as a storage mechanism.
This is in contrast to some ORMs that provide a dictionary-like interface to the database (for example, PyDO). The dictionary interface distinguishes the row from a normal Python object. I also don't care for the use of strings where an attribute seems more natural -- columns are limited in number and predefined, just like attributes. (Note: newer versions of PyDO apparently allow attribute access as well)
SQLObject is, to my knowledge, unique in using metaclasses to facilitate this seamless integration. Some other ORMs use code generation to create an interface, expressing the schema in a CSV or XML file (for example, MiddleKit, part of Webware). By using metaclasses you are able to comfortably define your schema in the Python source code. No code generation, no weird tools, no compilation step.
SQLObject provides a strong database abstraction, allowing cross-database compatibility (so long as you don't sidestep SQLObject). This compatibility extends not just to several databases, but also to currently one non-SQL, non-relational backend (based on the dbm module).
SQLObject has joins, one-to-many, and many-to-many, something which many ORMs do not have. The join system is also intended to be extensible.
You can map between database names and Python attribute and class names; often these two won't match, or the database style would be inappropriate for a Python attribute. This way your database schema does not have to be designed with SQLObject in mind, and the resulting classes do not have to inherit the database's naming schemes.
Here are some things I plan:
See also the Plan for 0.6.
Let's start off quickly...
To begin with, let's make a database connection. Choose one of MySQLConnection, PostgresConnection, SQLiteConnection, and FirebirdConnection, depending on which database you use.
    conn = MySQLConnection(user='test', db='testdb')
    conn = 'mysql://test@localhost/testdb'
    conn = PostgresConnection('user=test dbname=testdb')
    conn = 'postgres://test@localhost/testdb'
    conn = SQLiteConnection('database.db')
    conn = 'sqlite://path/to/database.db'
    conn = DBMConnection('database/')
    conn = 'dbm://path/to/database/'
The rest of this will be written more-or-less in a database-agnostic manner, using the connection you define. Use SQLite if you don't have another database installed or ready -- it requires PySQLite, but doesn't require a client/server setup.
We'll develop a simple addressbook-like database. We could create the tables ourselves, and just have SQLObject access those tables, but for now we'll let SQLObject do that work. First, the class:
    class Person(SQLObject):
        _connection = conn

        firstName = StringCol()
        middleInitial = StringCol(length=1, default=None)
        lastName = StringCol()
Many basic table schemas won't be any more complicated than that. The special attribute _connection defines the connection we should use (you can also set a module-level variable __connection__ which would automatically be picked up if you don't specify _connection).
Warning
The __connection__ magic variable can be a little fragile -- it has to be defined before the class is defined. This means it must be assigned above the class ...: line.

firstName, middleInitial, and lastName are all columns in the database. The general schema implied by this class definition is:
    CREATE TABLE person (
        id INT PRIMARY KEY AUTO_INCREMENT,
        first_name TEXT,
        middle_initial CHAR(1),
        last_name TEXT
    );
This is for MySQL. The schema for other databases looks slightly different (especially the id column). You'll notice the names were changed from mixedCase to underscore_separated -- this is done by the style object. There are a variety of ways to handle names that don't fit conventions (see Irregular Naming).
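The default conversion amounts to inserting an underscore before each capital letter and lowercasing the result. A minimal sketch of that convention (this is illustrative, not SQLObject's actual Style code):

```python
import re

def python_to_db_name(name):
    # Insert an underscore before each capital letter, then lowercase:
    # "middleInitial" -> "middle_initial"
    return re.sub(r'([A-Z])', r'_\1', name).lower()

print(python_to_db_name('firstName'))      # first_name
print(python_to_db_name('middleInitial'))  # middle_initial
```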
The tables don't yet exist. We'll let SQLObject create them:
Person.createTable()
We can change the type of the various columns by using something other than StringCol, or using different arguments. More about this in Subclasses of Col.
If you don't want to do table creation (you already have tables, or you want to create the tables yourself), you can just use the vague Col class. SQLObject doesn't do much type checking, allowing the database and the adapter to handle most of the type conversion. Databases generally do their own type coercion on inputs.
You'll note that the id column is not given in the class definition, it is implied. For MySQL databases it should be defined as INT PRIMARY KEY AUTO_INCREMENT, in Postgres SERIAL PRIMARY KEY, and in SQLite as INTEGER PRIMARY KEY. You can override the name, but some integer primary key must exist (though you can use non-integer keys with some extra effort).
Now that you have a class, how will you use it? We'll be considering the class defined above.
You can use the class method .get() to fetch instances that already exist. So if you wanted to fetch the Person by id 10, you'd call Person.get(10).
Warning
This is a change from SQLObject 0.5 -- before, the standard constructor fetched rows from the database, and the .new() method created new rows. Now SQLObject is more like Python, where the class constructor creates a new object/row, and the .get() method fetches a row.

To create a new object (and row), use class instantiation. In this case you would call Person(firstName="John", lastName="Doe"). If you had left out firstName or lastName you would have gotten an error, as no default was given for these columns (middleInitial has a default, so it will be set to NULL, the SQL equivalent of None).
When you create an object, it is immediately inserted into the database. SQLObject generally uses the database as immediate storage.
Here's an example of using the class:
    p = Person(firstName="John", lastName="Doe")
    print p
    #>> <Person 1 firstName='John' middleInitial=None lastName='Doe'>
    print p.firstName
    #>> 'John'
    p.middleInitial = 'Q'
    print p.middleInitial
    #>> 'Q'
    p2 = Person.get(1)
    print p2
    #>> <Person 1 firstName='John' middleInitial='Q' lastName='Doe'>
    print p is p2
    #>> True
You'll note that columns are accessed like attributes. (This uses the property feature of Python 2.2, so that retrieving and setting these attributes executes code). You'll also note that objects are unique -- there is generally only one Person instance of a particular id in memory at any one time. If you ask for more than one person by a particular ID, you'll get back the same instance. This way you can be sure of a certain amount of consistency if you have multiple threads accessing the same data (though of course across processes there can be no sharing of an instance). This isn't true if you're using transactions.
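This one-instance-per-id behavior is often called an identity map. A minimal sketch of the idea (the class and method names here are illustrative, not SQLObject's internals):

```python
# The class keeps a cache of live instances keyed by id, so asking for
# the same id twice returns the very same object.
class Record(object):
    _cache = {}

    def __init__(self, id):
        self.id = id

    @classmethod
    def get(cls, id):
        # Only build a new instance on a cache miss:
        if id not in cls._cache:
            cls._cache[id] = cls(id)
        return cls._cache[id]

a = Record.get(1)
b = Record.get(1)
print(a is b)  # True
```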
To get an idea of what's happening behind the scenes, I'll give the same actions with the SQL that is sent, along with some commentary:
    p = Person(firstName="John", lastName="Doe")
    #>> QueryIns:
    #     INSERT INTO person (last_name, middle_initial, first_name)
    #     VALUES ('Doe', NULL, 'John')
    #
    #-- Not quite optimized: we don't remember the values we used to
    #-- create the object, so they get re-fetched from the database:
    #>> QueryOne:
    #     SELECT last_name, middle_initial, first_name
    #     FROM person
    #     WHERE id = 1
    print p
    #>> <Person 1 firstName='John' middleInitial=None lastName='Doe'>
    print p.firstName
    #-- Now we've cached the column values, so we don't fetch
    #-- them again.
    #>> 'John'
    p.middleInitial = 'Q'
    #>> Query:
    #     UPDATE person
    #     SET middle_initial = 'Q'
    #     WHERE id = 1
    print p.middleInitial
    #>> 'Q'
    p2 = Person.get(1)
    #-- Again, no database access, since we're just grabbing the same
    #-- instance we already had.
    print p2
    #>> <Person 1 firstName='John' middleInitial='Q' lastName='Doe'>
    print p is p2
    #>> True
Hopefully you see that the SQL that gets sent is pretty clear and predictable. To view the SQL being sent, pass the keyword argument debug=1 to your connection object -- all SQL will be printed to the console. This can be reassuring, and I would encourage you to try it.
As a small optimization, instead of assigning each attribute individually, you can assign a number of them using the set method, like:
p.set(firstName='Bob', lastName='Dole')
This will send only one UPDATE statement. You can also use set with non-database properties (there's no benefit, but it helps hide the difference between database and non-database attributes).
A real address book should have people, but also addresses. These examples are in personaddress.py.
First, let's define the new address table. People can have multiple addresses, of course:
    class Address(SQLObject):
        street = StringCol()
        city = StringCol()
        state = StringCol(length=2)
        zip = StringCol(length=9)
        person = ForeignKey('Person')
Note the column person = ForeignKey("Person"). This is a reference to a Person object. We refer to other classes by name (with a string) to avoid circular dependencies. In the database there will be a person_id column, type INT, which points to the id column of the person table.
Here's the Person class:
    class Person(SQLObject):
        firstName = StringCol()
        middleInitial = StringCol(length=1, default=None)
        lastName = StringCol()
        addresses = MultipleJoin('Address')
We get the backreference with addresses = MultipleJoin('Address'). When we access a person's addresses attribute, we will get back a list of all the Address objects associated with that person. An example:
    p = Person(firstName='John', lastName='Doe')
    print p.addresses
    #>> []
    a1 = Address(street='123', city='Smallsville',
                 state='IL', zip='50484', person=p)
    print [a.street for a in p.addresses]
    #>> ['123']
For this example we will have user and role objects. The two have a many-to-many relationship, which is represented with the RelatedJoin.
    class User(SQLObject):
        # user is a reserved word in some databases, so we won't
        # use that for the table name:
        _table = "user_table"
        username = StringCol(alternateID=True, length=20)
        # We'd probably define more attributes, but we'll leave
        # that exercise to the reader...
        roles = RelatedJoin('Role')

    class Role(SQLObject):
        name = StringCol(alternateID=True, length=20)
        users = RelatedJoin('User')
And usage:
    bob = User(username='bob')
    tim = User(username='tim')
    jay = User(username='jay')
    admin = Role(name='admin')
    editor = Role(name='editor')
    bob.addRole(admin)
    bob.addRole(editor)
    tim.addRole(editor)
    print bob.roles
    #>> [<Role 1 name='admin'>, <Role 2 name='editor'>]
    print tim.roles
    #>> [<Role 2 name='editor'>]
    print jay.roles
    #>> []
    print admin.users
    #>> [<User 1 username='bob'>]
    print editor.users
    #>> [<User 1 username='bob'>, <User 2 username='tim'>]
In the process an intermediate table is created, role_user, which references both of the other classes. This table is never exposed as a class, and its rows do not have equivalent Python objects -- this hides some of the nuisance of a many-to-many relationship.
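The bookkeeping that intermediate table performs can be sketched in plain Python: a set of (user, role) pairs, queried from either side. This is illustrative only, not how SQLObject stores the join:

```python
# The intermediate table, reduced to a set of id pairs:
role_user = set()

def add_role(user, role):
    role_user.add((user, role))

def roles_of(user):
    # One side of the many-to-many relationship...
    return sorted(r for (u, r) in role_user if u == user)

def users_of(role):
    # ...and the other side, from the same data.
    return sorted(u for (u, r) in role_user if r == role)

add_role('bob', 'admin')
add_role('bob', 'editor')
add_role('tim', 'editor')
print(roles_of('bob'))     # ['admin', 'editor']
print(users_of('editor'))  # ['bob', 'tim']
```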
You may notice that the columns have the extra keyword argument alternateID. If True, this means that the column uniquely identifies rows -- like a username uniquely identifies a user. This identifier is in addition to the primary key (id), which is always present.
Note
SQLObject has a strong requirement that the primary key be unique and immutable. You cannot change the primary key through SQLObject, and if you change it through another mechanism you can cause inconsistency in any running SQLObject program (and in your data). For this reason meaningless integer IDs are encouraged -- something like a username may uniquely identify a row, but it may change in the future. So long as it is not used to reference the row internally, it is safe to change it later.

An alternateID column creates a class method, like byUsername for a column named username (or you can use the alternateMethodName keyword argument to override this). Its use:
    print User.byUsername('bob')
    #>> <User 1 username='bob'>
    print Role.byName('admin')
    #>> <Role 1 name='admin'>
While the full power of all the kinds of joins you can do with a database is not revealed in SQLObject, a simple SELECT is available.
select is a class method, and you call it like this (with the SQL that's generated):
    peeps = Person.select(Person.q.firstName=="John")
    print list(peeps)
    #>> [<Person 1 lastName='Doe' middleInitial=None firstName='John'>]
    # SELECT person.id FROM person
    # WHERE person.first_name = 'John';
This example returns everyone with the first name John. An expression could be more complicated as well, like:
    peeps = Person.select(
        AND(Address.q.personID == Person.q.id,
            Address.q.zip.startswith('504')))
    print list(peeps)
    # SELECT person.id FROM person, address
    # WHERE (address.person_id = person.id AND
    #        address.zip LIKE '504%');
You'll note that classes have an attribute q, which gives access to special objects for constructing query clauses. All attributes under q refer to column names and if you construct logical statements with these it'll give you the SQL for that statement. You can also work like this:
    peeps = Person.select("""address.person_id = person.id AND
                             address.zip LIKE '504%'""",
                          clauseTables=['address'])
You may wish to use MyClass.sqlrepr to quote any values you use if you create SQL manually (quoting is automatic if you use q). Tables given in clauseTables will be added to the FROM portion (again, they are automatically picked up when using q). The table you're selecting is always assumed to be included, of course.
You can use the keyword argument orderBy to create ORDER BY in the select statements: orderBy takes a string, which should be the database name of the column, or a column in the form Person.q.firstName. You can use "-colname" to specify descending order, or call MyClass.select().reversed().
You can use the special class variable _defaultOrder to give a default ordering for all selects. To get an unordered result when _defaultOrder is used, use orderBy=None.
Select results are generators, which are lazily evaluated. So the SQL is only executed when you iterate over the select results, or if you use list() to force the result to be executed. When you iterate over the select results, rows are fetched one at a time. This way you can iterate over large results without keeping the entire result set in memory. You can also do things like .reversed() without fetching and reversing the entire result -- instead, SQLObject can change the SQL that is sent so you get equivalent results.
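The lazy behavior can be illustrated with a plain generator -- nothing runs until you iterate. (This is not SQLObject code; the print stands in for the database round-trip.)

```python
def lazy_select(sql):
    # Stands in for actually sending the query to the database:
    print('executing: %s' % sql)
    for row in [1, 2, 3]:
        yield row

results = lazy_select('SELECT person.id FROM person')
# Nothing has been "executed" yet; iterating (or list()) triggers it:
print(list(results))
```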
You can also slice select results. The results are used in the SQL query, so peeps[:10] will result in LIMIT 10 being added to the end of the SQL query. If the slice cannot be performed in the SQL (e.g., peeps[:-10]), then the select is executed, and the slice is performed on the list of results. This will only happen when you use negative indexes.
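One way a non-negative slice can be pushed into the SQL is with LIMIT and OFFSET. This is a sketch of the translation, not SQLObject's actual implementation:

```python
def slice_to_sql(sql, start, stop):
    # Negative indexes can't be expressed as LIMIT/OFFSET without
    # knowing the total count, so the caller must fetch and slice
    # in Python instead:
    if start < 0 or (stop is not None and stop < 0):
        return None
    clause = sql
    if stop is not None:
        clause += ' LIMIT %d' % (stop - start)
    if start:
        clause += ' OFFSET %d' % start
    return clause

print(slice_to_sql('SELECT person.id FROM person', 0, 10))
# SELECT person.id FROM person LIMIT 10
print(slice_to_sql('SELECT person.id FROM person', 20, 30))
# SELECT person.id FROM person LIMIT 10 OFFSET 20
```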
You can get the length of the result without fetching all the results by calling count on the result object, like MyClass.select().count(). A COUNT(*) query is used -- the actual objects are not fetched from the database. Together with slicing, this makes batched queries easy to write:
    start = 20
    size = 10
    query = Table.select()
    results = query[start:start+size]
    total = query.count()
    print "Showing page %i of %i" % (start/size + 1, (total+size-1)/size)
Note
There are several factors when considering the efficiency of this kind of batching, and it depends very much how the batching is being used. Consider a web application where you are showing an average of 100 results, 10 at a time, and the results are ordered by the date they were added to the database. While slicing will keep the database from returning all the results (and so save some communication time), the database will still have to scan through the entire result set to sort the items (so it knows which the first ten are), and depending on your query may need to scan through the entire table (depending on your use of indexes). Indexes are probably the most important way to improve performance in a case like this, and you may find caching to be more effective than slicing.
In this case, caching would mean retrieving the complete results. You can use list(MyClass.select(...)) to do this. You can save these results for some limited period of time, as the user looks through the results page by page. This means the first page in a search result will be slightly more expensive, but all later pages will be very cheap.
For more information on the where clause in the queries, see the SQLBuilder documentation.
While we haven't done so in the examples, you can include your own methods in the class definition. Writing your own methods should be obvious enough (just do so like in any other class), but there are some other details to be aware of.
There are two ways SQLObject instances can come into existence: they can be fetched from the database, or they can be inserted into the database. In both cases a new Python object is created. This makes the place of __init__ a little confusing.
In general, you should not touch __init__. Instead use the _init method, which is called after an object is fetched or inserted. This method has the signature _init(self, id, connection=None, selectResults=None), though you may just want to use _init(self, *args, **kw).
You can use all the normal techniques for defining methods in this new-style class, including classmethod, staticmethod, and property, but you can also use a shortcut. If you have a method whose name starts with _set_, _get_, _del_, or _doc_, it will be used to create a property. So, for instance, say you have images stored under the ID of the person in the /var/people/images directory:
    class Person(SQLObject):
        # ...

        def imageFilename(self):
            return 'images/person-%s.jpg' % self.id

        def _get_image(self):
            if not os.path.exists(self.imageFilename()):
                return None
            f = open(self.imageFilename())
            v = f.read()
            f.close()
            return v

        def _set_image(self, value):
            # assume we get a string for the image
            f = open(self.imageFilename(), 'w')
            f.write(value)
            f.close()

        def _del_image(self):
            # I usually wouldn't include a method like this, but for
            # instructional purposes...
            os.unlink(self.imageFilename())

        _doc_image = 'The headshot for the person'
Later, you can use the .image property just like an attribute, and the changes will be reflected in the filesystem by calling these methods. This is a good technique for information that is better to keep in files as opposed to the database (such as large, opaque data like images).
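The name-based property creation described above can be sketched with a small metaclass. This is not SQLObject's actual metaclass, and it uses modern Python 3 metaclass syntax rather than the Python 2.2 style the rest of this document assumes:

```python
class PropertyMeta(type):
    def __new__(meta, name, bases, d):
        # Collect _get_X/_set_X pairs into a property named X:
        for key in list(d):
            if key.startswith('_get_'):
                attr = key[len('_get_'):]
                d[attr] = property(d[key], d.get('_set_' + attr))
        return type.__new__(meta, name, bases, d)

class Person(object, metaclass=PropertyMeta):
    def _get_name(self):
        # Derived value, computed on access:
        return self._name.title()

    def _set_name(self, value):
        self._name = value

p = Person()
p.name = 'john doe'
print(p.name)  # John Doe
```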
You can also pass an image keyword argument to the constructor or the set method, like Person(..., image=imageText).
All of the methods (_get_, _set_, etc) are optional -- you can use any one of them without using the others (except _doc_, since having a doc string that doesn't document anything would be silly). So you could define just a _get_attr method so that attr was read-only.
It's a little more complicated if you want to override the behavior of a database column attribute. For instance, imagine there's special code you want to run whenever someone's name changes -- you could make a subclass, and then use Person.__setattr__(self, 'lastName', value) to actually do the deed, but that's obviously very awkward -- you have to create subclasses without a real inheritance relationship, and the whole thing feels architecturally fragile. SQLObject creates methods like _set_lastName for each of your columns, but again you can't use this, since there's no superclass to reference (and you can't write SQLObject._set_lastName(...), because the SQLObject class doesn't know about your class's columns). You want to override that _set_lastName method yourself.
To deal with this, SQLObject creates two methods for each getter and setter, for example: _set_lastName and _SO_set_lastName. So to intercept all changes to lastName:
    class Person(SQLObject):
        lastName = StringCol()
        firstName = StringCol()

        def _set_lastName(self, value):
            self.notifyLastNameChange(value)
            self._SO_set_lastName(value)
Or perhaps you want to constrain phone numbers to actual digits, and of proper length, and make the formatting nice:
    import re

    class PhoneNumber(SQLObject):
        phoneNumber = StringCol(length=30)

        _garbageCharactersRE = re.compile(r'[\-\.\(\) ]')
        _phoneNumberRE = re.compile(r'^[0-9]+$')

        def _set_phoneNumber(self, value):
            value = self._garbageCharactersRE.sub('', value)
            if not len(value) >= 10:
                raise ValueError, 'Phone numbers must be at least 10 digits long'
            if not self._phoneNumberRE.match(value):
                raise ValueError, 'Phone numbers can contain only digits'
            self._SO_set_phoneNumber(value)

        def _get_phoneNumber(self):
            value = self._SO_get_phoneNumber()
            number = '(%s) %s-%s' % (value[0:3], value[3:6], value[6:10])
            if len(value) > 10:
                number += ' ext.%s' % value[10:]
            return number
Note
You should be a little cautious when modifying data that gets set in an attribute. Generally someone using your class will expect that the value they set the attribute to will be the same value they get back. In this example we removed some of the characters before putting it in the database, and reformatted it on the way out. One advantage of methods (as opposed to attribute access) is that the programmer is more likely to expect this disconnect.

The instructions above should tell you enough to get you started, and be useful for many situations. Now we'll show how to specify the class more completely.
The list of columns is a list of Col objects. These objects don't have functionality in themselves, but give you a way to specify the column.
This boolean (default False) indicates if the column can be used as an ID for the field (for instance, a username), though it is not a primary key. If so a class method will be added, like byUsername which will return that object. Use alternateMethodName if you don't like the by* name (e.g. alternateMethodName="username").
The column should be declared UNIQUE in your table schema.
The ForeignKey class should be used instead of Col when the column is a reference to another table/class. It is generally used like ForeignKey('Role'), in this instance to create a reference to a table Role. This is largely equivalent to Col(foreignKey='Role', sqlType='INT'). Two attributes will generally be created, role, which returns a Role instance, and roleID, which returns an integer ID for the related role.
There are some other subclasses of Col. These are used to indicate different types of columns, when SQLObject creates your tables.
A string (character) column. Extra keywords:
In addition to the columns, there are a number of other special attributes you can set in your class.
You can use the ForeignKey to handle foreign references in a table, but for back references and many-to-many relationships you'll use joins.
See One-to-Many Relationships for an example of one-to-many relationships.
Several keyword arguments are allowed to the MultipleJoin constructor:
Transaction support in SQLObject is left to the database. Transactions can be used like:
    conn = DBConnection.PostgresConnection('yada')
    trans = conn.transaction()
    p = Person.get(1, trans)
    p.firstName = 'Bob'
    trans.commit()
    p.firstName = 'Billy'
    trans.rollback()
The trans object here is essentially a wrapper around a single database connection, and commit and rollback just pass that message to the psycopg connection.
If you want to use transactions you should also turn _cacheValues off, like:
    class Person(SQLObject):
        _cacheValues = False
        # ...
All the connections support creating and dropping tables based on the class definition. First you have to prepare your class definition, which means including type information in your columns (though DBMConnection does not require or use type information).
A column type is indicated by using a subclass of Col:
To create a table call createTable. It takes two arguments:
dropTable takes the arguments ifExists and dropJoinTables, both self-explanatory.
SQLObject classes can be manipulated dynamically. This leaves open the possibility of constructing SQLObject classes from an XML file, from database introspection, or from a graphical interface.
SQLObject can read the table description from the database, and fill in the class columns (as would normally be described in the _columns attribute). Do this like:
    class Person(SQLObject):
        _fromDatabase = True
You can still specify columns (in _columns), and only missing columns will be added.
This is not supported in SQLite.
You can add and remove columns to your class at runtime. Such changes will affect all instances, since changes are made in place to the class. There are two methods, addColumn and delColumn, both of which take a Col object (or subclass) as an argument. There's also an optional argument changeSchema which, if True, will add or drop the column from the database (typically with an ALTER command).
When adding columns, you must pass the name as part of the column constructor, like StringCol("username", length=20). When removing columns, you can either use the Col object (as found in _columns, or which you used in addColumn), or you can use the column name (like MyClass.delColumn("username")).
You can also add Joins, like MyClass.addJoin(MultipleJoin("MyOtherClass")), and remove joins with delJoin. delJoin does not take strings, you have to get the join object out of the _joins attribute.
Often you will have a database that already exists, and does not use the naming conventions that SQLObject expects, or does not use any naming convention at all.
While SQLObject tries not to make too many requirements on your schema, some assumptions are made. Some of these may be relaxed in the future. (Of course, none of this applies to DBMConnection)
All tables that you want to turn into a class need to have an integer primary key. That key should be defined like:
SQLObject does not support non-integer keys (that may change). It does not support sequences in Postgres (that will change -- SERIAL uses an implicit sequence). It does not support primary keys made up of multiple columns (that probably won't change). It does not generally support tables with primary keys with business meaning -- i.e., primary keys are assumed to be immutable (that won't change).
At the moment foreign key column names must end in "ID" (case-insensitive). This restriction will probably be removed in the next release.
By default names in SQLObject are expected to be mixed case in Python (like mixedCase), and underscore-separated in SQL (like mixed_case). This applies to table and column names. The primary key is assumed to be simply id.
Other styles exist. A typical one is mixed case column names, and a primary key that includes the table name, like ProductID. You can use a different "Style" object to indicate a different naming convention. For instance:
    class Person(SQLObject):
        _style = MixedCaseStyle(longID=True)
        firstName = StringCol()
        lastName = StringCol()
If you use Person.createTable(), you'll get:
    CREATE TABLE Person (
        PersonID INT PRIMARY KEY,
        FirstName TEXT,
        LastName TEXT
    )
The MixedCaseStyle object handles the initial capitalization of words, but otherwise leaves them be. By using longID=True, we indicate that the primary key should look like a normal reference (PersonID for MixedCaseStyle, or person_id for the default style).
If you wish to change the style globally, assign the style to the connection, like:
__connection__.style = MixedCaseStyle(longID=True)
While naming conventions are nice, they are not always present. You can control most of the names that SQLObject uses, independent of the Python names (so at least you don't have to propagate the irregularity to your brand-spanking new Python code).
Here's a simple example:
    class User(SQLObject):
        _table = "user_table"
        _idName = "userid"
        username = StringCol(length=20, dbName='name')
The attribute _table overrides the table name. _idName provides an alternative to id. The dbName keyword argument gives the column name.
While not strictly a legacy database issue, this fits into the category of "irregularities". If you use non-integer keys, all primary key management is up to you. You must create the table yourself, and when you create instances you must pass an id keyword argument (like Person(id='555-55-5555', ...)).
The DBConnection module currently has five external classes: MySQLConnection, PostgresConnection, SQLiteConnection, FirebirdConnection, and DBMConnection.
You can pass the keyword argument debug to any connector. If set to true, then any SQL sent to the database will also be printed to the console.
MySQLConnection takes the keyword arguments host, db, user, and passwd, just like MySQLdb.connect does.
MySQLConnection supports all the features, though MySQL does not support transactions (except using a special proprietary backend that I haven't used).
PostgresConnection takes a single connection string, like "dbname=something user=some_user", just like psycopg.connect. You can also use the same keyword arguments as for MySQLConnection, and a dsn string will be constructed.
PostgresConnection supports transactions and all other features.
SQLiteConnection takes a single string, which is the path to the database file.
SQLite puts all data into one file, with a journal file that is opened in the same directory during operation (the file is deleted when the program quits). SQLite does not restrict the types you can put in a column -- strings can go in integer columns, dates in integers, etc.
SQLiteConnection doesn't support automatic class generation and SQLite does not support runtime column changes or transactions.
SQLite may have concurrency issues, depending on your usage in a multi-threaded environment.
FirebirdConnection takes the arguments host, db, user (default "sysdba"), passwd (default "masterkey").
Firebird supports all the features. Support is still young, so there may be some issues, especially with concurrent access, and especially using lazy selects. Try list(MyClass.select()) to avoid concurrent cursors if you have problems (using list() will pre-fetch all the results of a select).
Firebird support uses the kinterbasdb Python library.
DBMConnection takes a single string, which is the path to a directory in which to store the database.
DBMConnection uses flat hash databases to store all the data. These databases are created by the standard anydbm module. This is something of an experiment, and things like safety under concurrent access (multithreaded or multiprocess) should not be expected. The select interface using the magic q attribute is supported, though other SQL is not supported.
DBMConnection allows any kind of objects to be put in columns -- all values are pickled, and so only normal pickling restrictions apply.
DBMConnection does not support automatic class generation or transactions.
You can use from SQLObject import *, though you don't have to. It exports a minimal number of symbols. The symbols exported:
From SQLObject.SQLObject:
From SQLObject.DBConnection:
From SQLObject.Col:

* Col
* StringCol
* IntCol
* FloatCol
* KeyCol
* ForeignKey
* EnumCol
* DateTimeCol
* DecimalCol
* CurrencyCol
From SQLObject.Join:

* MultipleJoin
* RelatedJoin
From SQLObject.Style:

* Style
* MixedCaseUnderscoreStyle
* DefaultStyle
* MixedCaseStyle
From SQLObject.SQLBuilder:
For more information on SQLBuilder, read the SQLBuilder Documentation.