apsw-3.3.13-r1 18th February 2007
APSW provides an SQLite 3 wrapper that is the thinnest possible layer over SQLite 3. Everything you can do from the SQLite 3 C API, you can do from Python. Although APSW looks vaguely similar to the DBAPI, it is not compliant with that API and instead works the way SQLite 3 does. (pysqlite is DBAPI compliant - see the differences between apsw and pysqlite 2.)
Important changes in 3.3.10 release: You must explicitly call close() on Connection objects. If you do not call close then a detailed error message will be printed when the Connection's destructor runs, and that connection will not be cleaned up. This is because it is not possible to raise exceptions in destructors (actually it is, but Python just calls the destructor again, which can result in an infinite loop). I did try to work around this, but with things like no control over which thread destructors run in, virtual tables, triggers, incomplete statements and SQLite's transaction control, it becomes impossible to cleanly close a connection if bits of code are returning errors. Using the close() method allows errors to be returned, and it can be called again once they are fixed.
All strings returned by APSW are now in Unicode. Previously strings that consisted only of ASCII characters were returned as the string type and other strings were returned as the unicode type. This did allow easy usage with other components that knew nothing of Unicode. However it could make Unicode aware software do more work.
You can download this release in source or binary form.
Some Linux distributions also have packages. Debian users can grab the package python-apsw. Gentoo users can grab the package dev-python/apsw.
The source is controlled by Subversion accessible as http://initd.org/svn/pysqlite/apsw/trunk
APSW binds to the C interface of SQLite. That interface is stable for each
major version of SQLite (ie the C interface for any SQLite 3.x is stable, but
SQLite 4.x would be an incompatible change). Consequently you can use APSW
against any revision of SQLite with the same major version number. There are
small enhancements to the C API over time, and APSW adds support for them as
appropriate. The APSW version number tracks the SQLite version in which those
enhancements were added. The vast majority of changes to SQLite are in the SQL syntax and
engine. Those will be picked up with any version of APSW. The one exception
to this is experimental features in SQLite which may change API between
revisions. Consequently you will need to turn them off if you want to work
against a variety of versions of SQLite (see EXPERIMENTAL in
setup.py
). I strongly recommend you embed the SQLite library
into APSW which will put you in control of versions.
Before you do any benchmarking with APSW or other ways of accessing SQLite, you must understand how and when SQLite does transactions. See section 7.0, Transaction Control At The SQL Level, of sqlite.org/lockingv3.html. APSW does not alter SQLite's behaviour with transactions. Some access layers try to interpret your SQL and manage transactions behind your back, which may or may not work well with SQLite also doing its own transactions. You should always manage your transactions yourself. For example, to insert 1,000 rows, wrap the inserts in a single transaction; otherwise you will have 1,000 transactions. The best clue that you have one transaction per statement is a maximum of about 60 statements per second. A transaction needs two drive rotations - the data has to be committed to the main file and the journal - and 7200 RPM drives do 120 rotations a second. On the other hand, if you don't put in the transaction boundaries yourself and get more than 60 statements a second, then your access mechanism is silently starting transactions for you. This topic also comes up fairly frequently in the SQLite mailing list archives.
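To illustrate explicit transaction boundaries, here is a minimal sketch using Python's standard library sqlite3 module purely for demonstration (the SQL is the same under APSW; the in-memory database and table name foo are made up for this sketch):

```python
import sqlite3

def insert_rows(rows):
    """Insert all rows inside one explicit transaction instead of one per insert."""
    # isolation_level=None puts the stdlib driver in autocommit mode so that
    # we control transactions ourselves with explicit begin/commit, as the
    # text above recommends.
    con = sqlite3.connect(":memory:", isolation_level=None)
    cur = con.cursor()
    cur.execute("create table foo(x)")
    cur.execute("begin")                               # one transaction for the whole batch...
    cur.executemany("insert into foo values(?)", rows)
    cur.execute("commit")                              # ...not one commit per statement
    cur.execute("select count(*) from foo")
    return cur.fetchone()[0]
```

Without the begin/commit pair each insert would be its own transaction, with the roughly 60 transactions per second ceiling described above.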
This is an example of how to use apsw, and also demonstrates all the features.
import os, sys, time
import apsw

###
### Check we have the expected version of apsw and sqlite
###

print "Using APSW file",apsw.__file__                # from the extension module
print "   APSW version",apsw.apswversion()           # from the extension module
print " SQLite version",apsw.sqlitelibversion()      # from the sqlite library code
print " SQLite version",apsw.SQLITE_VERSION_NUMBER   # from the sqlite header file at compile time

Output:
Using APSW file /space/apsw/apsw.so
   APSW version 3.3.13-r1
 SQLite version 3.3.13
 SQLite version 3003013

###
### Opening/creating database
###

if os.path.exists("dbfile"):
    os.remove("dbfile")
connection=apsw.Connection("dbfile")
cursor=connection.cursor()

###
### simple statement
###

cursor.execute("create table foo(x,y,z)")

###
### multiple statements
###

cursor.execute("insert into foo values(1,2,3); create table bar(a,b,c) ; insert into foo values(4, 'five', 6.0)")

###
### iterator
###

for x,y,z in cursor.execute("select x,y,z from foo"):
    print cursor.getdescription()  # shows column names and declared types
    print x,y,z

###
### iterator - multiple statements
###

for m,n,o in cursor.execute("select x,y,z from foo ; select a,b,c from bar"):
    print m,n,o

###
### bindings - sequence
###

cursor.execute("insert into foo values(?,?,?)", (7, 'eight', False))
cursor.execute("insert into foo values(?,?,?1)", ('one', 'two'))  # nb sqlite does the numbers from 1

###
### bindings - dictionary
###

cursor.execute("insert into foo values(:alpha, :beta, :gamma)",
               {'alpha': 1, 'beta': 2, 'gamma': 'three'})

###
### tracing execution
###

def mytrace(statement, bindings):
    "Called just before executing each statement"
    print "SQL:",statement
    if bindings:
        print "Bindings:",bindings
    return True  # if you return False then execution is aborted

cursor.setexectrace(mytrace)
cursor.execute("drop table bar ; create table bar(x,y,z); select * from foo where x=?", (3,))

Output:
SQL: drop table bar ;
SQL: create table bar(x,y,z);
SQL: select * from foo where x=?
Bindings: (3,)

###
### tracing results
###

def rowtrace(*results):
    """Called with each row of results before they are handed off.
    You can return None to cause the row to be skipped or a different
    set of values to return"""
    print "Row:",results
    return results

cursor.setrowtrace(rowtrace)
for row in cursor.execute("select x,y from foo where x>3"):
    pass

Output:
SQL: select x,y from foo where x>3
Row: (4, u'five')
Row: (7, u'eight')
Row: (u'one', u'two')

# Clear tracers
cursor.setrowtrace(None)
cursor.setexectrace(None)

###
### executemany
###

# (This will work correctly with multiple statements, as well as statements that
# return data. The second argument can be anything that is iterable.)

cursor.executemany("insert into foo (x) values(?)", ( [1], [2], [3] ) )

# You can also use it for statements that return data
for row in cursor.executemany("select * from foo where x=?", ( [1], [2], [3] ) ):
    print row

###
### defining your own functions
###

def ilove7(*args):
    "a scalar function"
    print "ilove7 got",args,"but I love 7"
    return 7

connection.createscalarfunction("seven", ilove7)

for row in cursor.execute("select seven(x,y) from foo"):
    print row

###
### aggregate functions are more complex
###

# here we return the longest item when represented as a string

def longeststep(context, *args):
    "are any of the arguments longer than our current candidate"
    for arg in args:
        if len( str(arg) ) > len( context['longest'] ):
            context['longest']=str(arg)

def longestfinal(context):
    "return the winner"
    return context['longest']

def longestfactory():
    """called for a new query.  The first item returned can be anything
    and is passed as the context to the step and final methods.  We use a dict."""
    return ( { 'longest': '' }, longeststep, longestfinal)

connection.createaggregatefunction("longest", longestfactory)

for row in cursor.execute("select longest(x) from foo"):
    print row

###
### Defining collations.
###

# The default sorting mechanisms don't understand numbers at the end of strings
# so here we define a collation that does

cursor.execute("create table s(str)")
cursor.executemany("insert into s values(?)",
                   ( ["file1"], ["file7"], ["file17"], ["file20"], ["file3"] ) )

for row in cursor.execute("select * from s order by str"):
    print row

Output:
(u'file1',)
(u'file17',)
(u'file20',)
(u'file3',)
(u'file7',)

def strnumcollate(s1, s2):
    # return -1 if s1<s2, +1 if s1>s2 else 0

    # split values into two parts - the head and the numeric tail
    values=[s1, s2]
    for vn,v in enumerate(values):
        for i in range(len(v), 0, -1):
            if v[i-1] not in "0123456789":
                break
        try:
            v=( v[:i], int(v[i:]) )
        except ValueError:
            v=( v[:i], None )
        values[vn]=v
    # compare
    if values[0]<values[1]: return -1
    if values[0]>values[1]: return 1
    return 0

connection.createcollation("strnum", strnumcollate)

for row in cursor.execute("select * from s order by str collate strnum"):
    print row

Output:
(u'file1',)
(u'file3',)
(u'file7',)
(u'file17',)
(u'file20',)

###
### Authorizer (eg if you want to control what user supplied SQL can do)
###

def authorizer(operation, paramone, paramtwo, databasename, triggerorview):
    """Called when each operation is prepared.  We can return SQLITE_OK,
    SQLITE_DENY or SQLITE_IGNORE"""
    # find the operation name
    print apsw.mapping_authorizer_function[operation],
    print paramone, paramtwo, databasename, triggerorview
    if operation==apsw.SQLITE_CREATE_TABLE and paramone.startswith("private"):
        return apsw.SQLITE_DENY  # not allowed to create tables whose names start with private
    return apsw.SQLITE_OK  # always allow

connection.setauthorizer(authorizer)
cursor.execute("insert into s values('foo')")
cursor.execute("select str from s limit 1")

Output:
SQLITE_INSERT s None main None
SQLITE_SELECT None None None None
SQLITE_READ s str main None

# Cancel authorizer
connection.setauthorizer(None)

###
### progress handler (SQLite 3 experimental feature)
###

# something to give us large numbers of random numbers
import random
def randomintegers(howmany):
    for i in xrange(howmany):
        yield (random.randint(0,9999999999),)

# create a table with 500 random numbers
cursor.execute("begin ; create table bigone(x)")
cursor.executemany("insert into bigone values(?)", randomintegers(500))
cursor.execute("commit")

# display an ascii spinner
_phcount=0
_phspinner="|/-\\"
def progresshandler():
    global _phcount
    sys.stdout.write(_phspinner[_phcount%len(_phspinner)]+chr(8)) # chr(8) is backspace
    sys.stdout.flush()
    _phcount+=1
    time.sleep(0.1) # deliberate delay so we can see the spinner (SQLite is too fast otherwise!)
    return 0  # returning non-zero aborts

# register progresshandler every 20 instructions
connection.setprogresshandler(progresshandler, 20)

# see it in action - sorting 500 numbers to find the biggest takes a while
print "spinny thing -> ",
for i in cursor.execute("select max(x) from bigone"):
    print # newline
    print i # and the maximum number

connection.setprogresshandler(None)

###
### commit hook (SQLite3 experimental feature)
###

def mycommithook():
    print "in commit hook"
    hour=time.localtime()[3]
    if hour<8 or hour>17:
        print "no commits out of hours"
        return 1  # abort commits outside of 8am through 6pm
    print "commits okay at this time"
    return 0  # let commit go ahead

connection.setcommithook(mycommithook)
try:
    cursor.execute("begin; create table example(x,y,z); insert into example values (3,4,5) ; commit")
except apsw.ConstraintError:
    print "commit was not allowed"

connection.setcommithook(None)

###
### Virtual tables
###

# This virtual table stores information about files in a set of
# directories so you can execute SQL queries

def getfiledata(directories):
    columns=None
    data=[]
    counter=1
    for directory in directories:
        for f in os.listdir(directory):
            if not os.path.isfile(os.path.join(directory,f)):
                continue
            counter+=1
            st=os.stat(os.path.join(directory,f))
            if columns is None:
                columns=["rowid", "name", "directory"]+[x for x in dir(st) if x.startswith("st_")]
            data.append( [counter, f, directory] + [getattr(st,x) for x in columns[3:]] )
    return columns, data

# This gets registered with the Connection
class Source:
    def Create(self, db, modulename, dbname, tablename, *args):
        columns,data=getfiledata([eval(a) for a in args]) # eval strips off layer of quotes
        schema="create table foo("+','.join(["'%s'" % (x,) for x in columns[1:]])+")"
        return schema,Table(columns,data)
    Connect=Create

# Represents a table
class Table:
    def __init__(self, columns, data):
        self.columns=columns
        self.data=data

    def BestIndex(self, *args):
        return None

    def Open(self):
        return Cursor(self)

    def Disconnect(self):
        pass

    Destroy=Disconnect

# Represents a cursor
class Cursor:
    def __init__(self, table):
        self.table=table

    def Filter(self, *args):
        self.pos=0

    def Eof(self):
        return self.pos>=len(self.table.data)

    def Rowid(self):
        return self.table.data[self.pos][0]

    def Column(self, col):
        return self.table.data[self.pos][1+col]

    def Next(self):
        self.pos+=1

    def Close(self):
        pass

# Register the module as filesource
connection.createmodule("filesource", Source())

# Arguments to module - all directories in sys.path
sysdirs=",".join(["'%s'" % (x,) for x in sys.path[1:] if len(x) and os.path.isdir(x)])
cursor.execute("create virtual table sysfiles using filesource("+sysdirs+")")

# Which 3 files are the biggest?
for size,directory,file in cursor.execute("select st_size,directory,name from sysfiles order by st_size desc limit 3"):
    print size,file,directory

Output:
1610924 apsw.so /usr/lib64/python2.4/site-packages
1527536 _lcms.so /usr/lib64/python2.4/site-packages
736616 lapack_lite.so /usr/lib64/python2.4/site-packages/Numeric

# Which 3 files are the oldest?
for ctime,directory,file in cursor.execute("select st_ctime,directory,name from sysfiles order by st_ctime limit 3"):
    print ctime,file,directory

Output:
1157247798 libcdemu.py /usr/lib64/python2.4/site-packages
1157247798 libcdemu.pyo /usr/lib64/python2.4/site-packages
1157247798 libcdemu.pyc /usr/lib64/python2.4/site-packages

###
### Cleanup
###

# We must close connections
connection.close(True)  # force it since we want to exit
The simple way is:
python setup.py install
On Windows the above command uses Visual C++. You can use MinGW with the
command below. (If MinGW complains about missing Python functions starting
with _imp__Py_
then run mingwsetup.bat
which will
ensure your Python distribution is initialized for MinGW compilation).
python setup.py build --compiler=mingw32 install
By default whatever SQLite 3 you already have on your system is used. If
you place a copy of the headers and library in a sqlite3
subdirectory then that will be used instead. It is highly recommended that
you build SQLite into APSW. Here is a quick and easy way of doing everything
on Linux/Mac or Windows with MinGW, linking the SQLite library statically
into the extension (ie no external DLLs/shared libraries will be needed at
runtime). You should follow these instructions with your current directory
being where you extracted the APSW source to.
Download the SQLite 3 code.
- Windows

  Get the processed .zip

  > mkdir sqlite3
  > cd sqlite3
  > unzip sqlite-source-3_3_10.zip
  > del tclsqlite.c
  > gcc -DTHREADSAFE -DNDEBUG -O3 -c *.c
  > ar r libsqlite3.a *.o
  > ranlib libsqlite3.a
  > cd ..
  > python setup.py build --compiler=mingw32 install
  > python -c "import apsw ; print apsw.sqlitelibversion(), apsw.apswversion()"

- Mac/Linux/etc

  Get the normal source.

  $ wget http://www.sqlite.org/sqlite-3.3.13.tar.gz
  $ tar xvfz sqlite-3.3.13.tar.gz
  $ mv sqlite-3.3.13 sqlite3
  $ cd sqlite3
  # The static library is not built for inclusion into a separate shared library
  # by default.  If using gcc, then do this
  $ env CC="gcc -fPIC" CFLAGS="-DHAVE_DLOPEN" ./configure --enable-threadsafe --disable-tcl
  # otherwise do this
  $ env CFLAGS="-DHAVE_DLOPEN" ./configure --enable-threadsafe --disable-tcl
  # The CFLAGS="-DHAVE_DLOPEN" bit is needed for loading dynamic
  # extensions.  See SQLite bug 2082
  $ make
  $ cp .libs/*.a .
  $ cd ..
  $ python setup.py install
  $ python -c "import apsw ; print apsw.sqlitelibversion(), apsw.apswversion()"
The extension is just a single file, apsw.so (Linux/Mac) or apsw.pyd (Windows). You don't need to install it and can drop it into any directory that is more convenient for you and that your code can reach. To do the build without installing, leave out install from the lines above and add build if it isn't already there.
If you want to check that your build is correct then you can run the unit tests. Run tests.py. It will print the APSW file used, APSW and SQLite versions and then run lots of tests all of which should pass.
Everything you can do from the SQLite 3 C API you can do from Python. The documentation below notes which C API functions are called where you can get further details on what happens. The only C function not implemented is sqlite3_collation_needed. (You can still add collations, you just can't use this function to find out about them on-demand.) Additionally sqlite3_trace is not wrapped but instead tracers are provided that have more functionality.
Some functions are marked experimental in the SQLite API. These have also
been made available, but as the SQLite documentation notes these functions
may change form or disappear in future versions of SQLite. You can exclude
these functions by commenting out the relevant line in the
setup.py
when building apsw.
Various methods create functions, collations and set various hooks and handlers. To remove the relevant function/collation/hook/handler, pass in None as the callable method.
Returns the version of the SQLite library as a string. This function calls sqlite3_libversion.
Returns the version of the APSW module.
Calls sqlite3_enable_shared_cache. This sets the shared cache mode which was introduced in SQLite 3.3. Note that it only affects the current thread and should be called before any databases are opened. If called after that then you'll get MisuseError. APSW already enforces the other conditions required to use this functionality, namely that all operations on a Connection must happen in the same thread. Consequently you can safely use this functionality. You do not need to worry about sqlite3_thread_cleanup as that is only needed when abnormally terminating a thread (something you can't do in Python). Python's normal reference counting ensures all objects are cleaned up, and SQLite automatically cleans up the shared cache when the last Connection in a thread is closed.
Registers a virtual table. See virtual tables for more details.
The purpose of the hooks is to allow the easy registration of
functions, virtual tables or similar items with each Connection as it is
created. The default value is an empty list. Whenever a Connection is
created, each item in apsw.connection_hooks
is invoked with
a single parameter being the new Connection object. If the hook raises an
exception then the creation of the Connection fails.
Note that the same values can be used in different contexts. For example SQLITE_OK and SQLITE_CREATE_INDEX both have a value of zero. For each group of constants there is also a mapping (dict) available that you can supply a string to and get the corresponding numeric value, or supply a numeric value and get the corresponding string. These can help improve diagnostics/logging, calling other modules etc. For example
apsw.mapping_authorizer_function["SQLITE_READ"]=20
apsw.mapping_authorizer_function[20]="SQLITE_READ"
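The shape of these two-way mappings is easy to reproduce in plain Python. A quick sketch (make_mapping is a hypothetical helper written for this illustration; the SQLITE_OK and SQLITE_READ values come from the text above):

```python
def make_mapping(**kwargs):
    """Build a dict mapping names to values and values back to names,
    mimicking the shape of apsw's mapping_* dicts."""
    m = {}
    for name, value in kwargs.items():
        m[name] = value   # string -> number
        m[value] = name   # number -> string
    return m

# a subset of the authorizer constants for illustration
auth = make_mapping(SQLITE_OK=0, SQLITE_READ=20)
```

Looking up auth["SQLITE_READ"] gives 20 and auth[20] gives back "SQLITE_READ", which is handy when logging numeric codes received from callbacks.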
This is the version of the SQLite library from the header file sqlite3.h at apsw compile time. It is expressed as an integer (eg 3003010 for SQLite 3.3.10).
These values are returned from authorizer functions. Mapping is
apsw.mapping_authorizer_return
The values are passed to authorizer functions. You can see a complete
list in the sqlite3_set_authorizer
documentation. Mapping is
apsw.mapping_authorizer_function
These values are passed to the BestIndex method of virtual tables. Mapping is
apsw.mapping_bestindex_constraints
These are extended
result codes which are provided in exceptions. Mapping is
apsw.mapping_extended_result_codes
These are SQLite result codes
which are provided in exceptions. Mapping is
apsw.mapping_result_codes
The connection class wraps a sqlite3 pointer.
Opens an SQLite database named filename. (This calls sqlite3_open behind the scenes.) You must call close when finished with a Connection. If you do not call close then the destructor will invoke sys.excepthook, which by default will print a traceback.
You must call this method when you are done with a Connection. Behind the scenes it calls sqlite3_close. It is ok to call this method multiple times. Note that this method can raise an exception. Some examples of when exceptions are raised:
If force is True, then any executing cursors are forcibly closed.
Creates a new cursor object on this database.
This function returns the number of database rows that were changed (or inserted or deleted) by the most recently completed INSERT, UPDATE, or DELETE statement. (This calls sqlite3_changes. Read that link for some additional notes.)
This function returns the total number of database rows that have be modified, inserted, or deleted since the database connection was opened. (This calls sqlite3_total_changes. Read that link for some additional notes.)
Returns the integer key of the most recent insert in the database. (This calls sqlite3_last_insert_rowid.)
Calls sqlite3_complete which tells you if the input string comprises one or more complete SQL statements.
Sets the busy timeout. (This calls sqlite3_busy_timeout).
Sets the busy handler to callable. callable will be called with one integer argument which is the number of prior calls to the busy callback for the same lock. If the busy callback returns something that evaluates to False, then SQLite returns SQLITE_BUSY to the calling code. If the callback returns something that evaluates to True, then SQLite tries to open the table again and the cycle repeats. (This calls sqlite3_busy_handler).
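A minimal sketch of the callback contract: a handler that retries a fixed number of times before letting SQLITE_BUSY propagate (make_busy_handler and max_retries are made up for this illustration; only the one-integer-argument signature comes from the documentation above):

```python
def make_busy_handler(max_retries):
    """Return a busy handler that gives up after max_retries prior calls."""
    def handler(prior_calls):
        # True -> SQLite retries acquiring the lock;
        # False -> SQLite returns SQLITE_BUSY to the caller
        return prior_calls < max_retries
    return handler
```

You would register it with something like connection.setbusyhandler(make_busy_handler(5)); a real handler would usually also sleep between retries.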
Causes any pending operations on the database to abort at the earliest opportunity. You can call this from any thread. (This calls sqlite3_interrupt).
Registers a scalar function. The callable will be called. You can specify how many arguments your function takes as the numargs parameter or supply -1 to take any amount. (This calls sqlite3_create_function).
Registers an aggregate function. (This calls sqlite3_create_function.) You can specify how many arguments your function takes as the numargs parameter or supply -1 to take any amount. When the function is called by an SQL query, the factorycallback is called without any arguments. The factorycallback needs to return a tuple consisting of three items.
a context object (of any type)
a step function which is called for each row. The context object will be the first parameter, and the remaining parameters will be from the SQL statement. The return value is ignored - you supply it in final.
a final function which is called at the end. The only parameter will be the context object. The value returned is set as the return for the function. It must be a valid SQLite type. Note that the final function is always called even if an exception was raised by the step function. This allows you to ensure any resources are cleaned up.
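The three-item contract can be sketched with a trivial sum aggregate (sumfactory, step and final are hypothetical names for this illustration; the tuple shape is the one described above):

```python
def step(context, value):
    """Called once per row; accumulate into the context object."""
    context["total"] += value

def final(context):
    """Called once at the end; the return value becomes the aggregate's result."""
    return context["total"]

def sumfactory():
    # first item is the context (any type), then the step and final callables
    return ({"total": 0}, step, final)
```

You would register it with something like connection.createaggregatefunction("mysum", sumfactory) and then use mysum(x) in SQL.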
Creates a collation with the specified name and callable. The callable will be passed two string arguments. It should return -1 if the first is less than the second, 0 if they are equal and 1 and if the first is greater than the second. Note that this controls sorting (ORDER BY in SQL) so your comparisons don't affect other SQL operations. Read more about SQLite's handling of collations. (This calls sqlite3_create_collation.) If there is an error in your Python code then 0 (ie items are equal) is returned.
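A minimal sketch of the comparison contract, using a case-insensitive collation as the example (the nocase name is made up here; the -1/0/1 return convention is the one documented above):

```python
def nocase(s1, s2):
    """Compare two strings ignoring case, returning -1, 0 or 1
    as createcollation requires."""
    a, b = s1.lower(), s2.lower()
    if a < b:
        return -1
    if a > b:
        return 1
    return 0
```

You would register it with something like connection.createcollation("nocase", nocase) and then write ORDER BY col COLLATE nocase in SQL.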
The callable is invoked while SQL statements are being prepared. The intent is to allow applications to safely execute user entered SQL. The callable is called with 5 parameters:
an integer representing the operation (the constants are available
on the apsw module - eg apsw.SQLITE_CREATE_TABLE
.
A string (or None) dependent on the operation
Another string (or None) dependent on the operation
The string name of the database (or None)
Name of the innermost trigger or view doing the access (or None)
You should return apsw.SQLITE_OK
to allow the operation
or apsw.SQLITE_DENY
or apsw.SQLITE_IGNORE
as
applicable. (SQLITE_DENY is returned if there is an error in your Python
code).
This calls sqlite3_set_authorizer which contains more detailed documentation.
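The decision logic itself is plain Python. A sketch that hides any column whose name starts with "secret" (the column-name prefix is made up for this illustration; the numeric constants are the sqlite3.h values, stood in here for the apsw.* constants you would normally use):

```python
# stand-ins for apsw.SQLITE_OK etc; values from sqlite3.h
SQLITE_OK, SQLITE_DENY, SQLITE_IGNORE = 0, 1, 2
SQLITE_READ = 20

def authorizer(operation, paramone, paramtwo, databasename, triggerorview):
    """For SQLITE_READ, paramone is the table and paramtwo the column.
    Returning SQLITE_IGNORE makes the column read back as NULL."""
    if operation == SQLITE_READ and paramtwo and paramtwo.startswith("secret"):
        return SQLITE_IGNORE
    return SQLITE_OK
```

Registered with connection.setauthorizer(authorizer), user-supplied SELECTs would silently see NULL for those columns instead of failing.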
Sets a callable which is invoked during data changing. The callable takes four parameters:
(This calls sqlite3_update_hook.)
Sets a callable which is invoked during a rollback. The callable takes no parameters and the return value is ignored. (This calls sqlite3_rollback_hook.)
Sets a callable which is invoked just before a commit. It should return zero for the commit to go ahead and non-zero for it to be turned into a rollback. In the case of an exception in your callable, a non-zero (ie rollback) value is returned. (This calls sqlite3_commit_hook.)
Sets a callable which is invoked at the end of execution of each statement and passed the statement string and how long it took to execute. (The execution time appears to be in nanoseconds.) Note that it is called only on completion. If for example you do a SELECT and only read the first result, then you won't reach the end of the statement.
(This calls sqlite3_profile which is not documented on the SQLite web site. See the sqlite3.h header file for documentation.).
Sets a callable which is invoked every nsteps SQLite instructions. The callable should return a non-zero value to abort or zero to continue. (If there is an error in your Python callable then non-zero will be returned). (This calls sqlite3_progress_handler which has more detailed documentation).
Enables extension loading which is disabled by default. Calls sqlite3_enable_load_extension. See note below about enabling extension loading.
Loads an extension by calling sqlite3_load_extension. If you don't specify the entrypoint then SQLite uses a default of sqlite3_extension_init.
Note that if you build SQLite on Unix then extension loading won't be built into SQLite by default. The details are in SQLite ticket 2082. You need to edit setup.py if you enable extension loading.
The Cursor class creates and executes SQLite prepared statements.
You cannot create cursors directly. They are created by calling Connection.cursor().
In general cursors end up in a closed state once their work is complete, so you won't usually need to call this method. Calling it while the cursor is already closed, or when the Connection is closed, will not result in errors. An open Cursor prevents a Connection from being closed. The circumstances in which a cursor is not automatically closed include:
If force is False then you will get exceptions if there is remaining work to be done in the Cursor, such as more statements to execute, more data from the executemany binding sequence etc. If force is True then all remaining work and state information will be discarded.
Returns the Connection object to which this cursor belongs.
Executes the statements using the supplied bindings. The bindings can be supplied as a tuple or as a dict. Execution returns when the first row is available or all statements have completed. The cursor object is returned which you can use as an iterator. (See execution model for more details. This function wraps sqlite3_prepare and sqlite3_step.)
Repeatedly executes statements using each element of sequenceofbindings for the bindings each time. Execution returns when the first row is available or all statements have completed. The cursor object is returned which you can use as an iterator. (See execution model for more details. This function runs in a loop on each member of sequenceofbindings and wraps sqlite3_prepare and sqlite3_step .)
The Cursor object is an iterator, and so you can use it in a for loop
or similar situations. You can also explicitly call the
next()
method. This method returns a tuple of the contents
of a returned row or raises StopIteration after all returned rows have
been seen.
Returns a list describing each column in the current result set. Each
item is a tuple of (column name, declared column type). You should only
call this function while data is being returned such as when
next()
returns a row. This function calls sqlite3_column_name
and sqlite3_column_decltype.
Sets or gets the tracers.
Virtual Tables are a new experimental feature introduced in SQLite 3.3.7 and updated in 3.3.8. They let a developer provide underlying table implementations, while still presenting a normal SQL interface to the user. The person writing SQL doesn't need to know or care that some of the tables come from elsewhere.
Some examples of how you might use this:
You need to have 3 types of object. A module, a virtual table and a
cursor. These are documented below. You can also read the SQLite C method
documentation. At the C level, they are just one set of methods. At the
Python/apsw level, they are split over the 3 types of object. The leading
x is omitted in Python. You can return SQLite error codes
(eg SQLITE_READONLY
) by raising the appropriate exceptions (eg
apsw.ReadOnlyError
).
The module is used to create the virtual tables. Once you have a module
object, you register it with a connection by calling
Connection.createmodule("modulename", module_object)
. To create
the table, you use SQL: CREATE VIRTUAL TABLE tablename USING
modulename(arg0, arg1, ...)
This is called when a table is first created on a connection. The optional args are passed from the SQL CREATE statement.
Return two items. The first item should be a string with SQL specifying the schema for the table. The second item is an Object that implements the table methods. Later methods in the Table and Cursor objects will reference the columns by number, starting at zero with column -1 being the rowid. Every row must have a unique unchanging integer rowid up to 64bit signed integer.
Similar to Create, except it is called to establish a new connection to an existing table. The parameters and return are the same. In general Create is called the first time a table is accessed and is where you are expected to do heavy initialisation work. Connect can take that initialised work and reuse it.
The table contains knowledge of the indices, makes cursors and can perform transactions.
These functions are used as part of transactions. At the time of writing they are undocumented in SQLite (see www.sqlite.org/cvstrac/wiki?p=VirtualTableMethods). They do not take any arguments nor return anything. You do not have to provide these methods.
These methods are the opposite of Connect
and
Create
in the module implementation. They can be considered
destructors for the virtual table instance. It is currently unclear how
SQLite behaves if you return an error (ticket 2099). Note
that SQLite does ignore errors in the Disconnect method. If you do have
one, you will get an exception from the close()
method, but
the database will actually be closed.
Returns a new cursor object.
Lets you provide an alternate function implementation for the table. This is currently not implemented in apsw because of incomplete information in SQLite to make it safe (ticket 2095).
Requests deletion of the row with the specified id.
Return None (ignored)
Create a new row. fields is a tuple the same size as the number of columns in your table containing a value for each column. rowid is the rowid to give the row. If rowid is None then you must make up a rowid yourself.
Return the new rowid (if rowid was None), else ignored
Change an existing row of the table with the values in fields. fields
is a tuple the same size as the number of columns in your table
containing a value for each column. If newrowid is not the same as rowid
then you also have to change the rowid (this would happen for a query
like UPDATE table SET rowid=rowid+100 WHERE ...
)
Return None (ignored)
This is a complex method and even has its own page in the SQLite documentation. To get going initially, just return None and you will be fine. Implementing this method reduces the number of rows scanned in your table to satisfy queries, but only if you have an index or index-like mechanism available.

Note: The implementation of this method differs slightly from the SQLite documentation for the C API. You are not passed "unusable" constraints. The argv/constraintarg positions are not off by one. In the C API, you have to return position 1 to get something passed to Filter in position 0. With the APSW implementation, you return position 0 to get Filter arg 0, position 1 to get Filter arg 1 etc.
The purpose of this method is to ask if you have the ability to
determine whether a row meets certain constraints without
visiting every row. An example constraint is
price > 74.99
. In a traditional SQL database,
queries with constraints can be sped up with indices. If you
return None, then SQLite will visit every row in your table and evaluate
the constraints itself. Your index choice returned from BestIndex will
also be passed to the Filter method on your cursor object. Note that
SQLite may call this method multiple times trying to find the most
efficient way of answering a query. You will be passed the constraints as
a sequence of tuples containing two items. The first item is the column
number and the second item is the operation.
Example query: select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets"
If customer is column 0, price column 2 and quantity column 5, then constraints will be:
(2, apsw.SQLITE_INDEX_CONSTRAINT_GT), (5, apsw.SQLITE_INDEX_CONSTRAINT_LE), (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)
Note that you do not get the value of the constraint (eg "Acme Widgets", 74.99 and 10 in this example).
The first item you return is None if you don't have an appropriate index for any of the constraints and SQLite should iterate over every row. If you do have a suitable index then you return as the first item a sequence the same length as constraints with the members mapping to the constraints in order. Each item can be one of None, an integer or a tuple of an integer and a boolean.
Example query: select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets"
Customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.
Constraints (in):       (2, apsw.SQLITE_INDEX_CONSTRAINT_GT), (5, apsw.SQLITE_INDEX_CONSTRAINT_LE), (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)
Constraints used (out): 1, None, 0
When your Filter method in the cursor is called, constraintarg[0] will be "Acme Widgets" (the customer constraint value) and constraintarg[1] will be 74.99 (the price constraint value). You can also return an index number (integer) and index string to use. SQLite attaches no significance to these values - they are passed as is to your Cursor.Filter method as a way for the BestIndex method to let the Cursor.Filter method know which of your indices or similar mechanism to use.
The second argument to BestIndex is a sequence of orderbys because the query requested the results in a certain order. If your data is already in that order then SQLite can give the results back as is. If not, then SQLite will have to sort the results first.
Example query: select * from foo order by price desc, quantity asc
Price is column 2 and quantity column 5, so orderbys will be:
(2, True),   # True means descending, False is ascending
(5, False)
Return: You should return up to 5 items. Items not present in the return have a default value. The last item is an estimated cost of using your index, which lets SQLite choose the cheapest plan - for example, if a query could be satisfied by constraint A or constraint B, and A takes 2,000 operations while B takes 100, then it is best to evaluate B before A.
A complete example. The query is select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets" order by price desc, quantity asc
Customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.
BestIndex(constraints, orderbys) is called with:

constraints = ( (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
                (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
                (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ) )
orderbys    = ( (2, True), (5, False) )

You return:

( (1, None, 0),   # constraints used
  27,             # index number
  "idx_pr_cust",  # index name
  False,          # results are not in orderbys order
  1000 )          # about 1000 disk operations to access index

Your Cursor.Filter method will be called with:

27,              # index number you returned
"idx_pr_cust",   # index name you returned
"Acme Widgets",  # constraintarg[0] - customer
74.99            # constraintarg[1] - price
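The constraint mapping in the example above can be written as a small BestIndex method. This is a sketch under stated assumptions: the constants are hardcoded with the values SQLite assigns to SQLITE_INDEX_CONSTRAINT_EQ/GT/LE (normally you would use the apsw module attributes), and the index number, name and cost are the illustrative values from the walkthrough.

```python
# Assumed numeric values for apsw.SQLITE_INDEX_CONSTRAINT_* so the
# sketch is self-contained (they match SQLite's C header values).
SQLITE_INDEX_CONSTRAINT_EQ = 2
SQLITE_INDEX_CONSTRAINT_GT = 4
SQLITE_INDEX_CONSTRAINT_LE = 8

class VTable:
    """Hypothetical table indexed on customer (column 0) equality and price (column 2)."""

    def BestIndex(self, constraints, orderbys):
        used = []
        for column, operation in constraints:
            if column == 0 and operation == SQLITE_INDEX_CONSTRAINT_EQ:
                used.append(0)      # customer value -> constraintargs[0]
            elif column == 2 and operation == SQLITE_INDEX_CONSTRAINT_GT:
                used.append(1)      # price value -> constraintargs[1]
            else:
                used.append(None)   # constraint not handled by our index
        if all(u is None for u in used):
            return None             # no usable index: SQLite scans every row
        return (used,               # constraints used
                27,                 # index number (passed through to Filter)
                "idx_pr_cust",      # index string (also passed through)
                False,              # results are not in orderbys order
                1000)               # estimated cost
```

Called with the constraints from the example, this returns [1, None, 0] as the constraints-used item, exactly as in the table above.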
Cursors are created by the Open method in the table object. They iterate over rows returning desired values.
This is the destructor for the cursor. Note that you must do any needed cleanup here - the method will not be called again if you raise an exception.
Return None (ignored)
This method is always called first to initialize an iteration to the first row of the table. The arguments come from the BestIndex method in the table object, with constraintargs being a tuple of the constraints you requested. If you always return None in BestIndex then indexnum will be zero, indexstring will be None and constraintargs will be empty.
Return None (ignored)
Called to ask if we are at the end of the table. It is called after each call to Filter and Next.
Return True if the cursor is at a valid row of data, else False. Note that if you have an exception in this method, or do not provide it, then False will be returned to SQLite.
Advance to the next row in the table.
Return None (ignored)
Return the current rowid. This should be an integer up to 64 bits signed.
Requests the value of the specified column number. Column numbering starts at zero with -1 being the rowid.
Return the value in that column. It must be compatible with SQLite types (ie one of None, integer, float, string or buffer (blob))
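The cursor methods above can be sketched over an in-memory list of rows. This is an illustrative class, not APSW's API: the storage format is assumed, and the full-scan Filter ignores the index information. The Eof convention follows the description above (True while at a valid row).

```python
class ExampleCursor:
    """Hypothetical cursor over an in-memory list of (rowid, column-values) rows."""

    def __init__(self, data):
        self.data = data      # list of (rowid, (col0, col1, ...)) tuples
        self.pos = 0

    def Filter(self, indexnum, indexstring, constraintargs):
        # Full scan: ignore the index information and restart at the first row
        self.pos = 0

    def Eof(self):
        # Per the convention described above: True while at a valid row
        return self.pos < len(self.data)

    def Next(self):
        self.pos += 1         # advance to the next row

    def Rowid(self):
        return self.data[self.pos][0]

    def Column(self, number):
        if number == -1:      # -1 means the rowid
            return self.Rowid()
        return self.data[self.pos][1][number]

    def Close(self):
        pass                  # nothing to clean up in this sketch
```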
Virtual Tables are a very recent addition to SQLite and haven't been widely used yet. They do work well if all your routines work perfectly.
A big help is using the local variables recipe as described in augmented stack traces, which will give you more detail in errors. You may also find errors compounding. For example, if you have an error in the Filter method of a cursor, SQLite then closes the cursor. If you also return an error in the Close method then the first error may mask the second, or vice versa.
Also note that SQLite may ignore responses from your methods if they don't make sense. For example in BestIndex, if you set multiple arguments to have the same constraintargs position then your Filter won't receive any constraintargs at all.
All exception types have apsw.Error
as a parent. The
following exceptions can happen:
You have used an object allocated in one thread in a different thread. All objects (Connection, Cursor) must be used in the same thread they were allocated in.
You have tried to start a new SQL execute call before executing all the previous ones. See the execution model for more details.
The destructor for a Connection was called, but close() hadn't
been called on the Connection. Destructors aren't allowed to return
exceptions, so what actually happens is that
sys.excepthook
is called with this exception so you can
find where it is happening in your code. The database will also be
left open and hence leak memory and potentially file descriptors.
You have called close() on a Connection and then continued to use it.
There is an incorrect number of bindings when using tuples, or you
supplied a dictionary of bindings and not all bindings were named in
the SQL statement. For example:
select * from foo where x=:name and y=?
.
You should use either colon-name style or question-mark style in a
query, but not both.
Note: It is not considered an error to have missing keys in
a dictionary. For example cursor.execute("insert into foo
values($a,:b,$c)", {'a': 1})
is perfectly valid (note that
b and c are not in the dict). For missing keys,
None/NULL will be used. This means you don't have to add lots of
spurious values to the supplied dict. If your schema requires every
column to have a value, then SQLite will generate an error due to some
values being None/NULL, so that case will be caught.
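The missing-key rule can be illustrated in isolation. fill_bindings is a hypothetical helper, not part of APSW; it simply shows the substitution behaviour described above.

```python
def fill_bindings(names, supplied):
    """Hypothetical helper mirroring the behaviour described above:
    any binding name missing from the supplied dict becomes None/NULL."""
    return dict((name, supplied.get(name)) for name in names)
```

So for the example statement, fill_bindings(["a", "b", "c"], {"a": 1}) binds b and c to None.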
A statement is complete but you try to run it again anyway.
The execution tracer returned False so execution was aborted.
An error happened loading an extension.
The table lists which exception classes correspond to which SQLite error
codes. APSW runs with extended
result codes turned on. The result
member of
an exception instance gives you the plain error code, while the
extendedresult
member of an exception instance gives you
the extended error code.
General Errors
  SQLITE_ERROR       SQLError
  SQLITE_MISMATCH    MismatchError
Internal Errors
  SQLITE_INTERNAL    InternalError
  SQLITE_PROTOCOL    ProtocolError
  SQLITE_MISUSE      MisuseError
  SQLITE_RANGE       RangeError
Permissions etc
  SQLITE_PERM        PermissionsError
  SQLITE_READONLY    ReadOnlyError
  SQLITE_CANTOPEN    CantOpenError
  SQLITE_AUTH        AuthError
Abort/Busy/etc
  SQLITE_ABORT       AbortError
  SQLITE_BUSY        BusyError
  SQLITE_LOCKED      LockedError
  SQLITE_INTERRUPT   InterruptError
  SQLITE_SCHEMA      SchemaChangeError
  SQLITE_CONSTRAINT  ConstraintError
Memory/Disk/etc
  SQLITE_NOMEM       NoMemError
  SQLITE_IOERR       IOError
  SQLITE_CORRUPT     CorruptError
  SQLITE_FULL        FullError
  SQLITE_TOOBIG      TooBigError
  SQLITE_NOLFS       NoLFSError
  SQLITE_EMPTY       EmptyError
  SQLITE_FORMAT      FormatError
  SQLITE_NOTADB      NotADBError
Some of these codes are no longer used by SQLite (according to the headers in 3.2.7).
When an exception occurs, Python does not include frames from non-Python code (ie the C code called from Python). This can make it more difficult to work out what was going on when an exception occurred, for example when there are callbacks to collations, functions or virtual tables, or triggers firing.
This is an example showing the difference between the tracebacks you would have got with earlier versions of apsw and the augmented traceback.
Code
import apsw

def myfunc(x):
    1/0

con=apsw.Connection(":memory:")
con.createscalarfunction("foo", myfunc)
con.createscalarfunction("fam", myfunc)
cursor=con.cursor()
cursor.execute("create table bar(x,y,z);insert into bar values(1,2,3)")
cursor.execute("select foo(1) from bar")
Original traceback:

Traceback (most recent call last):
  File "t.py", line 11, in <module>
    cursor.execute("select foo(1) from bar")
  File "t.py", line 4, in myfunc
    1/0
ZeroDivisionError: integer division or modulo by zero

Augmented traceback:

Traceback (most recent call last):
  File "t.py", line 11, in <module>
    cursor.execute("select foo(1) from bar")
  File "apsw.c", line 3412, in resetcursor
  File "apsw.c", line 1597, in user-defined-scalar-FOO
  File "t.py", line 4, in myfunc
    1/0
ZeroDivisionError: integer division or modulo by zero
In the original traceback you can't even see that code in apsw was
involved. The augmented traceback shows that there were indeed two
function calls within apsw and gives you line numbers should you need to
examine the code. Also note how you are told that the call was in
user-defined-scalar-FOO
(all function names are
uppercased).
But wait, there is more!!! In order to further aid troubleshooting, the augmented stack traces make additional information available. Each frame in the traceback has local variables defined with more information. You can print out the variables using ASPN recipe 52215.
In the recipe, the initial code in
print_exc_plus
is far more complicated than it needs to be, and also won't work correctly with all tracebacks (it depends on f_prev being set, which isn't always the case). Change the function to start like this:

tb = sys.exc_info()[2]
stack = []
while tb:
    stack.append(tb.tb_frame)
    tb = tb.tb_next
traceback.print_exc()
print "Locals by frame, innermost last"
Here is a far more complex example from some virtual tables code I was writing. The BestIndex method in my code had returned an incorrect value. The augmented traceback includes local variables using recipe 52215. I can see what was passed in to my method, what I returned and which item was erroneous. The original traceback is almost completely useless.
Original traceback:

Traceback (most recent call last):
  File "tests.py", line 1387, in testVtables
    cursor.execute(allconstraints)
TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean

Augmented traceback with local variables:

Traceback (most recent call last):
  File "tests.py", line 1387, in testVtables
    cursor.execute(allconstraints)
                VTable = __main__.VTable
                cur = <apsw.Cursor object at 0x988f30>
                i = 10
                self = testVtables (__main__.APSW)
                allconstraints = select rowid,* from foo where rowid>-1000 ....
  File "apsw.c", line 4050, in Cursor_execute.sqlite3_prepare
                Connection = <apsw.Connection object at 0x978800>
                statement = select rowid,* from foo where rowid>-1000 ....
  File "apsw.c", line 2681, in VirtualTable.xBestIndex
                self = <__main__.VTable instance at 0x98d8c0>
                args = (((-1, 4), (0, 32), (1, 8), (2, 4), (3, 64)), ((2, False),))
                result = ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False)
  File "apsw.c", line 2559, in VirtualTable.xBestIndex.result_constraint
                indices = [4, (3,), [2, False], [1], [0]]
                self = <__main__.VTable instance at 0x98d8c0>
                result = ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False)
                constraint = (3,)
TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean
Read about SQLite 3 types. APSW always maintains the correct type for values and never converts them to something else. Note however that SQLite may convert types based on column affinity as described in that link. APSW requires that all values supplied are one of the corresponding Python/SQLite types (or a subclass).
None in Python is NULL in SQLite
Python int or long is INTEGER in SQLite. The value represented must fit within a 64 bit signed quantity (long long at the C level) or an exception is generated. (Don't worry - APSW only uses the long type when values are too large to fit in an int.)
Python's float type is used for REAL in SQLite. (At the C level they are both 8 byte quantities and there is no loss of precision).
Python's string or unicode is used for TEXT in SQLite. (APSW automatically uses Python's string type for ASCII text and the unicode type for other text.)
Python's buffer class is used for BLOB in SQLite.
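The INTEGER rule above amounts to a simple bounds check against the 64 bit signed range. fits_sqlite_integer is an illustrative helper, not an APSW function:

```python
# Bounds of a 64 bit signed quantity (long long at the C level)
INT64_MIN = -2**63
INT64_MAX = 2**63 - 1

def fits_sqlite_integer(value):
    """Return True if value can be stored as an SQLite INTEGER
    without APSW raising an exception for it being out of range."""
    return INT64_MIN <= value <= INT64_MAX
```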
All SQLite strings are Unicode. The actual binary representation can be UTF8, or UTF16 in either byte order. APSW uses the UTF8 interface to SQLite, which results in the binary string representation in your database defaulting to UTF8 as well. All this is totally transparent to your Python code.
Everywhere strings are used (eg as database values, SQL statements, binding names, user defined functions) you can use Unicode strings. You can also use the plain Python string class, and APSW will automatically call the unicode converter if any non-ascii characters are present.
When reading values from SQLite, APSW uses the Python string class for any pure ASCII text, else it uses the Python unicode class.
APSW lets you use SQLite in multi-threaded programs and will let other threads execute while SQLite is working. (Technically the GIL is released when sqlite3_step or sqlite3_open are running. The GIL is re-acquired while user defined functions, collations and the various hooks/handlers run.)
Note that you cannot use the same Connection object in multiple threads. You must allocate a new Connection object per thread (this is a requirement of SQLite). A cursor object can only be used in the same thread it was allocated in (also an SQLite requirement). Fortunately APSW will check this for you and throw a ThreadingViolationError if you try to use objects in the wrong thread. Note that your destructors also need to run in the creation thread.
If you have multiple threads and/or multiple programs accessing the same database then there may be contention for the file. SQLite will return SQLITE_BUSY, which will be raised as BusyError. You can call Connection.setbusytimeout to set how long SQLite will retry for, or Connection.setbusyhandler to install your own busy handler. Note that SQLite won't call the busy handler or timeout if it believes a deadlock has arisen. SQLite's locking and concurrency is described here.
A cursor object can only be executing one query at a time. You cannot issue a new query from inside a trace function or from a user defined function or collation since these are called while executing a query. You can however make new cursors and use those without issue. You may want to remember the Connection object when you set your trace or user defined functions.
You can install tracers on a cursor as an easy way of seeing exactly what gets executed and what is returned. The tracers can also abort execution and cause different values to be returned. This is very useful for diagnostics and testing without having to modify your main code.
Note: You cannot issue new execute statements against the cursor your tracer was called from. If you would like to make more queries in the tracer then do them from a new cursor object.
The execution tracer is called after an SQL statement has been prepared. (ie syntax errors will have caused an exception during preparation so you won't see them with a tracer). It is called with two arguments. The first is a string which is the SQL statement about to be executed, and the second is the bindings used for that statement (and can be None). If the return value evaluates to False/None then execution is aborted with an ExecTraceAbort exception. See the example above.
The row tracer is called before each row is returned. The arguments are the items about to be returned. Whatever you return from the tracer is what is actually returned. If you return None then the whole row is skipped. See the example above.
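The two tracer signatures described above can be sketched as plain functions. These are illustrative examples of the described behaviour, not part of APSW: the logging, the SELECT-only veto and the uppercasing are arbitrary choices for the demonstration.

```python
def exectracer(sql, bindings):
    """Execution tracer: called after each statement is prepared.
    Returning a false value aborts execution with ExecTraceAbort."""
    print("executing %s with bindings %s" % (sql, bindings))
    # Arbitrary example policy: veto anything that isn't a SELECT
    return sql.lstrip().lower().startswith("select")

def rowtracer(*row):
    """Row tracer: whatever we return is what the caller actually sees.
    Returning None would skip the row entirely."""
    return tuple(value.upper() if isinstance(value, str) else value
                 for value in row)
```

You would install these with cursor.setexectrace(exectracer) and cursor.setrowtrace(rowtracer) on the cursor whose statements you want to observe.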
Prior to Python 2.5, you were still limited to 32 bit quantities for items in Python such as the length of strings, number of items in a sequence etc. APSW will work correctly with those items in Python 2.5 that use 64 bits. Unfortunately SQLite is limited to 32 bit quantities for strings, blobs, number of columns etc. Consequently you will get a TooBigError exception from APSW, which checks if strings/buffers longer than 1GB or 2GB (depending on internal storage) are used. See SQLite ticket 2125.
This section only matters if you give multiple SQL statements in one
go to cursor.execute()
. (Statements are separated by
semi-colons.)
SQLite does execution in two steps. First a statement is prepared, which verifies the syntax, tables and fields and converts the statement into an internal representation. The prepared statement is then run. Execution stops when a row is available, there is an error or the statement is complete.
The cursor.execute()
method automatically does the
preparing and starts execution. If none of the statements return rows
then execution will go to the end. If a row is returned then you need to
call cursor.next() to get the row values. Execution will resume as
necessary to satisfy next()
calls.
However this means that if you don't read the rows returned then the rest of your statements won't be executed. APSW will detect unexecuted previous statements and generate an exception. For example:
>>> cursor.execute("select * from foo ; create table bar(x,y,z)")
>>> cursor.execute("create table bam(x,y,z)")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
apsw.IncompleteExecutionError: Error: there are still remaining sql statements to execute
Because I didn't read the results of select * from foo,
the following create table command didn't get a chance to be
executed. On the next execute that condition is detected and an exception
raised.
DBAPI is defined in PEP 249. This section describes how APSW complies with or differs from it.
There is no connect method. Use the Connection constructor instead, which only takes one parameter - the name of the database.
The Connection object and any cursors must all be used in the same thread they were allocated from. You cannot use them in different threads even if you protect them with mutexes.
Three different paramstyles are supported. You can use qmark ('... WHERE name=?'), numeric ('... WHERE name=?4') and named ('... WHERE name=:name' or '... WHERE name=$name'). Note that SQLite numbers parameters from one not zero.
The DBAPI exceptions are not used.
There are no commit or rollback methods. You should use cursor.execute("COMMIT") etc. The SQLite documentation has more details.
Several methods that are defined in DBAPI to be on the cursor are instead on the Connection object, since this is where SQLite actually stores the information. Operations done through any other cursor attached to the same Connection object do update these values, and having them on the Connection makes you aware of that.
Use getdescription() instead of description. This information is only obtained on request.
There is no rowcount.
callproc is not implemented as SQLite doesn't support stored procedures.
execute returns the Cursor object and you can use it as an iterator to get the results (if any).
executemany returns the Cursor object and you can use it as an iterator to get the results (if any).
fetchone is not available. Use the cursor as an iterator, or call next() which raises StopIteration when there are no more results.
fetchmany is not available. Call next() for however many results you want.
fetchall is not available. Call next() or use a list comprehension
such as [row for row in cursor.execute("....")]
.
nextset is not applicable or implemented.
arraysize is not available as fetchmany isn't.
Neither setinputsizes nor setoutputsize is applicable or implemented.
None of the date or time methods are available since SQLite 3 does not have a native date or time type.
Use the standard Python buffer class for BLOBs.
rownumber is not available.
Exception classes are not available as attributes of Connection.
Use Cursor.getconnection() to get the associated Connection object.
scroll and messages are not available.
The Cursor object supports the iterator protocol and this is the only way of getting information back.
To get the last inserted row id, call
Connection.last_insert_rowid()
. That stores the id from the
last insert on any Cursor associated with the Connection. You can
also add select last_insert_rowid()
to the end of your
execute statements.
There is no errorhandler attribute.
pysqlite already provides a DBAPI compliant wrapper over SQLite 2 and 3, while APSW only wraps SQLite 3. I suggest using APSW when you want to use SQLite and its functionality directly, or when you handle database independence in your own code rather than relying on DBAPI.
APSW has the following enhancements/differences over pysqlite 2 (wrapping SQLite 3):
*Nothing* happens behind your back. By default pysqlite tries to manage transactions by parsing your SQL for you, but you can turn it off. This can result in very unexpected behaviour with pysqlite.
SQLite's manifest typing is used, which limits values to being supplied as integer (32/64 bit), string (utf8/unicode), double, null/None and BLOB. (An exception happens if you don't supply values in one of the supported types.) APSW doesn't do any type conversion or coercion, although note that SQLite itself will under some circumstances. pysqlite does have a mechanism for marshalling your own types, but you'll need to be careful if sharing the database with code in another language.
APSW always handles Unicode correctly (this was one of the major reasons for writing it in the first place). pysqlite has since fixed many of its issues but you are still stuck with some.
You can use semi-colons at the end of commands and you can have multiple commands in the execute string in APSW. There are no restrictions on the type of commands used. For example this will work fine in APSW but is not allowed in pysqlite:
import apsw
con=apsw.Connection(":memory:")
cur=con.cursor()
for row in cur.execute("create table foo(x,y,z);insert into foo values (?,?,?);"
                       "insert into foo values(?,?,?);select * from foo;drop table foo;"
                       "create table bar(x,y);insert into bar values(?,?);"
                       "insert into bar values(?,?);select * from bar;",
                       (1,2,3,4,5,6,7,8,9,10)):
    print row

And the output as you would expect:

$ python test.py
(1, 2, 3)
(4, 5, 6)
(7, 8)
(9, 10)
The cursor object is an iterator and returns itself from execute. (The latest SQLite has now copied this feature).
cursor.executemany()
also works with statements that
return data such as selects, and you can have multiple
statements.
pysqlite:

from pysqlite2 import dbapi2 as sqlite

def badfunc(t):
    return 1/0

con = sqlite.connect(":memory:")
con.create_function("badfunc", 1, badfunc)
cur = con.cursor()
cur.execute("select badfunc(3)")

apsw:

import apsw

def badfunc(t):
    return 1/0

con = apsw.Connection(":memory:")
con.createscalarfunction("badfunc", badfunc, 1)
cur = con.cursor()
cur.execute("select badfunc(3)")

pysqlite output:

C:\temp>python func.py
Traceback (most recent call last):
  File "func.py", line 8, in ?
    cur.execute("select badfunc(3)")
pysqlite2.dbapi2.OperationalError: user-defined function raised exception

apsw output:

C:\temp>python afunc.py
Traceback (most recent call last):
  File "t.py", line 8, in <module>
    cur.execute("select badfunc(3)")
  File "apsw.c", line 3660, in resetcursor
  File "apsw.c", line 1871, in user-defined-scalar-BADFUNC
  File "t.py", line 3, in badfunc
    return 1/0
Errors that cannot be raised as exceptions are reported (via
sys.excepthook
) and you are also
told where the Connection was allocated.
You can implement tracers and other hooks in Python.
The APSW test suite is larger and tests more functionality. Code coverage by the test suite is 94%. pysqlite is good at 81% for C code although there are several places that coverage can be improved. I haven't measured code coverage for pysqlite's Python code.
In both cases, some of the non-covered code lines require that the Python interpreter or SQLite run out of memory, so 100% coverage isn't possible.
Copyright (C) 2004-2007 Roger Binns rogerb@rogerbinns.com
This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:
The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.
Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.
This notice may not be removed or altered from any source distribution.
As of this release, APSW is now co-hosted with pysqlite meaning there is one site to go to for your Python SQLite bindings. Start at http://initd.org/tracker/pysqlite/wiki
You can use this with SQLite 3.3.13 onwards. There were no API changes in SQLite 3.3.10 to 3.3.13 although some internal bugs were fixed and the 3.3.13 release is recommended over the earlier version.
Thanks to Ed Pasma for highlighting these issues.
You can use this with SQLite 3.3.10 onwards.
You can use this with SQLite 3.3.9 onwards.
You can use this with SQLite 3.3.8 onwards. There was an incompatible API change for virtual tables in SQLite 3.3.8.
Exceptions in destructors are reported via
sys.excepthook
, the default implementation of which prints
the exception and the traceback to sys.stderr. Note the line
number reported in the traceback is often off by 1. This is because the
destructors run "between" lines of code and so the following line is
reported as the current location.
Never released as 3.3.8 came along.
You can use this release against SQLite 3.3.7. There were no changes in the SQLite 3.3.6 API from 3.3.5. In SQLite 3.3.7 an API was added that allowed removing a chunk of duplicate code. Also added were Virtual Tables and loading of external modules (shared libraries). APSW had the following changes:
You can use this release against any release of SQLite 3 from 3.3.5 onwards. A bug was also fixed when reporting an error during the cleanup of an aggregate function if there had also been an error in the step function. (PyErr_WriteUnraisable(NULL) crashed on some versions of Python but not others.)
SQLite added several functions for returning metadata about result column sets. You have to compile SQLite with SQLITE_ENABLE_COLUMN_METADATA to get them. This is not the default for SQLite. I don't believe these are generally useful except in some corner cases and so they aren't wrapped. However please shout if you do need them. Note that cursor.getdescription will already give you generally useful information. (Also see the pragmas).
The test code has been converted into using the unittest module. Run python tests.py -v to get the tests run. There should be no errors.
Updated code to work correctly with new Py_ssize_t introduced in Python 2.5. See 64 bit hosts, Python 2.5 for more details on how Python and SQLite handle 64 bit sized items.
The following functions were added to SQLite and are wrapped. They are all functions defined on the Connection object.
The following functions were added/changed in SQLite C API but are not useful to be wrapped.
You can use this release against any release of SQLite 3.
SQLite 3.2.7 has several bug fixes. The undocumented experimental
function sqlite3_profile
was added, but it is not present in
apsw yet.
You can use this release against any release of SQLite 3.
SQLite 3.2.2 API removed sqlite3_global_recover
. That
function was not wrapped in APSW. Note that SQLite 3.2.2 contains a bug
fix that applies when you use 64 bit integer primary keys (32 bit ints
are fine).
You can use this release against any release of SQLite 3.
SQLite 3.2.1 had one addition in the stable C API, which was a new
function named sqlite3_global_recover
. That function is not
applicable for wrapping in APSW.
You can use this release against any release of SQLite 3.
SQLITE_ALTER_TABLE
and
SQLITE_REINDEX
constants for the authorizer function.
(These constants were introduced in SQLite 3.1.3.)
apswversion, which returns the version of APSW.
SQLite 3.1.3 had no changes in the stable C API other than what is mentioned above. There were some new experimental functions added which are not currently documented on the SQLite website, and which are not wrapped by APSW. Please contact me if you believe they will remain in SQLite and you would like them wrapped:
sqlite3_sleep: An alternative function which sleeps for a specified number of milliseconds can be provided. By default SQLite just uses the standard operating system call.
sqlite3_expired: This function is internal to statement execution. It would apply to the implementation of Cursor.executemany and could in theory provide a marginal improvement in performance.
sqlite3_temp_directory: Can be used before any databases are opened to set where temporary files are created. By default SQLite just uses the standard operating system mechanisms.
There are no functional changes. The only changes were to correct some variable names in the example code (they were cut and pasted from the test code, which used different names) and to make the source zip file extract its contents into a sub-directory, which is the more typical way of packaging that sort of thing.
All remaining functionality in the C API for SQLite 3.0.8 is now available.
Finished this documentation.
Initial release