APSW - Another Python SQLite Wrapper

apsw-3.3.13-r1 18th February 2007

APSW is an SQLite 3 wrapper that provides the thinnest possible layer over SQLite 3. Everything you can do from the C API to SQLite 3, you can do from Python. Although APSW looks vaguely similar to the DBAPI, it is not compliant with that API and instead works the way SQLite 3 does. (pysqlite is DBAPI compliant - see the differences between apsw and pysqlite 2 below.)

Important changes in 3.3.10 release: You must explicitly call close() on Connection objects. If you do not call close then a detailed error message will be printed when the Connection's destructor runs, and that connection will not be cleaned up. This is because it is not possible to usefully raise exceptions in destructors (actually it is, but Python just calls the destructor again, which can result in an infinite loop). I did try to work around this, but with things like no control over which thread destructors run in, virtual tables, triggers, incomplete statements and SQLite's transaction control, it becomes impossible to cleanly close a connection if bits of code are returning errors. Using the close() method allows errors to be returned, and it can be called again once they are fixed.

All strings returned by APSW are now Unicode. Previously strings that consisted only of ASCII characters were returned as the str type and other strings were returned as the unicode type. That made it easy to use APSW with other components that knew nothing of Unicode, but it could make Unicode aware software do more work.

Table of contents

Download

Source and binaries

You can download this release in source or binary form.

Some Linux distributions also have packages. Debian users can grab the package python-apsw. Gentoo users can grab the package dev-python/apsw.

Source code control

The source is controlled by Subversion accessible as http://initd.org/svn/pysqlite/apsw/trunk

SQLite version compatibility and benchmarking

APSW binds to the C interface of SQLite. That interface is stable for each major version of SQLite (ie the C interface for any SQLite 3.x release is stable, but SQLite 4.x would be an incompatible change). Consequently you can use APSW against any revision of SQLite with the same major version number. There are small enhancements to the C API over time, and APSW adds support for them as appropriate; the APSW version number indicates the SQLite version whose enhancements it supports. The vast majority of changes to SQLite are in the SQL syntax and engine, and those are picked up with any version of APSW. The one exception is experimental features in SQLite, which may change API between revisions. Consequently you will need to turn them off if you want to work against a variety of versions of SQLite (see EXPERIMENTAL in setup.py). I strongly recommend you embed the SQLite library into APSW, which puts you in control of versions.

Before you do any benchmarking with APSW or other ways of accessing SQLite, you must understand how and when SQLite does transactions. See section 7.0, Transaction Control At The SQL Level, of sqlite.org/lockingv3.html. APSW does not alter SQLite's behaviour with transactions. Some access layers try to interpret your SQL and manage transactions behind your back, which may or may not work well with SQLite also doing its own transactions. You should always manage your transactions yourself. For example, to insert 1,000 rows wrap the inserts in a single transaction, otherwise you will have 1,000 transactions. The best clue that you have one transaction per statement is seeing a maximum of 60 statements per second: a transaction needs two drive rotations - the data has to be committed to the main file and the journal - and 7200 RPM drives do 120 rotations a second. On the other hand, if you don't put in the transaction boundaries yourself and get more than 60 statements a second, then your access mechanism is silently starting transactions for you. This topic also comes up fairly frequently in the SQLite mailing list archives.
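
For illustration, here is a minimal sketch (not part of the main example below) showing the pattern: an explicit BEGIN/COMMIT around a batch of inserts so that 1,000 rows cost one transaction rather than 1,000. The database and table names are made up for the example.

import apsw

connection=apsw.Connection("batchdemo")     # example database name
cursor=connection.cursor()
cursor.execute("create table prices(item, price)")

rows=[("item%d" % (i,), i*1.5) for i in range(1000)]

# Slow: each insert is its own implicit transaction (two disk syncs per row on a real drive)
# for row in rows:
#     cursor.execute("insert into prices values(?,?)", row)

# Fast: one explicit transaction around all 1,000 inserts
cursor.execute("begin")
cursor.executemany("insert into prices values(?,?)", rows)
cursor.execute("commit")

connection.close()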

Example

This is an example of how to use apsw, and also demonstrates all the features.


import os, sys, time
import apsw

###
### Check we have the expected version of apsw and sqlite
###

print "Using APSW file",apsw.__file__                # from the extension module
print "   APSW version",apsw.apswversion()           # from the extension module
print " SQLite version",apsw.sqlitelibversion()      # from the sqlite library code
print " SQLite version",apsw.SQLITE_VERSION_NUMBER   # from the sqlite header file at compile time

   Using APSW file /space/apsw/apsw.so
      APSW version 3.3.13-r1
    SQLite version 3.3.13
    SQLite version 3003013

###
### Opening/creating database
###

if os.path.exists("dbfile"): os.remove("dbfile")
connection=apsw.Connection("dbfile")
cursor=connection.cursor()

###
### simple statement
###

cursor.execute("create table foo(x,y,z)")

###
### multiple statements
###

cursor.execute("insert into foo values(1,2,3); create table bar(a,b,c) ; insert into foo values(4, 'five', 6.0)")

###
### iterator
###

for x,y,z in cursor.execute("select x,y,z from foo"):
    print cursor.getdescription()  # shows column names and declared types
    print x,y,z

###        
### iterator - multiple statements
###

for m,n,o in cursor.execute("select x,y,z from foo ; select a,b,c from bar"):
    print m,n,o

###
### bindings - sequence
###

cursor.execute("insert into foo values(?,?,?)", (7, 'eight', False))
cursor.execute("insert into foo values(?,?,?1)", ('one', 'two'))  # nb sqlite does the numbers from 1

###
### bindings - dictionary
###

cursor.execute("insert into foo values(:alpha, :beta, :gamma)", {'alpha': 1, 'beta': 2, 'gamma': 'three'})

###
### tracing execution 
###

def mytrace(statement, bindings):
    "Called just before executing each statement"
    print "SQL:",statement
    if bindings:
        print "Bindings:",bindings
    return True  # if you return False then execution is aborted

cursor.setexectrace(mytrace)
cursor.execute("drop table bar ; create table bar(x,y,z); select * from foo where x=?", (3,))

   SQL: drop table bar ;
   SQL:  create table bar(x,y,z);
   SQL:  select * from foo where x=?
   Bindings: (3,)

###
### tracing results 
###

def rowtrace(*results):
    """Called with each row of results before they are handed off.  You can return None to
    cause the row to be skipped or a different set of values to return"""
    print "Row:",results
    return results

cursor.setrowtrace(rowtrace)
for row in cursor.execute("select x,y from foo where x>3"):
     pass

   SQL: select x,y from foo where x>3
   Row: (4, u'five')
   Row: (7, u'eight')
   Row: (u'one', u'two')

# Clear tracers
cursor.setrowtrace(None)
cursor.setexectrace(None)

###
### executemany
###

# (This will work correctly with multiple statements, as well as statements that
# return data.  The second argument can be anything that is iterable.)
cursor.executemany("insert into foo (x) values(?)", ( [1], [2], [3] ) )

# You can also use it for statements that return data
for row in cursor.executemany("select * from foo where x=?", ( [1], [2], [3] ) ):
    print row

###
### defining your own functions
###

def ilove7(*args):
    "a scalar function"
    print "ilove7 got",args,"but I love 7"
    return 7

connection.createscalarfunction("seven", ilove7)

for row in cursor.execute("select seven(x,y) from foo"):
    print row

###
### aggregate functions are more complex
###

# here we return the longest item when represented as a string

def longeststep(context, *args):
    "are any of the arguments longer than our current candidate"
    for arg in args:
        if len( str(arg) ) > len( context['longest'] ):
            context['longest']=str(arg)

def longestfinal(context):
    "return the winner"
    return context['longest']

def longestfactory():
    """called for a new query.  The first item returned can be
    anything and is passed as the context to the step
    and final methods.  We use a dict."""
    return ( { 'longest': '' }, longeststep, longestfinal)

connection.createaggregatefunction("longest", longestfactory)

for row in cursor.execute("select longest(x) from foo"):
    print row

###
### Defining collations.  
###

# The default sorting mechanisms don't understand numbers at the end of strings
# so here we define a collation that does

cursor.execute("create table s(str)")
cursor.executemany("insert into s values(?)",
                  ( ["file1"], ["file7"], ["file17"], ["file20"], ["file3"] ) )

for row in cursor.execute("select * from s order by str"):
    print row

   (u'file1',)
   (u'file17',)
   (u'file20',)
   (u'file3',)
   (u'file7',)

def strnumcollate(s1, s2):
    # return -1 if s1<s2, +1 if s1>s2 else 0

    # split values into two parts - the head and the numeric tail
    values=[s1, s2]
    for vn,v in enumerate(values):
        for i in range(len(v), 0, -1):
            if v[i-1] not in "01234567890":
                break
        try:
            v=( v[:i], int(v[i:]) )
        except ValueError:
            v=( v[:i], None )
        values[vn]=v
    # compare
    if values[0]<values[1]:
        return -1
    if values[0]>values[1]:
        return 1
    return 0

connection.createcollation("strnum", strnumcollate)

for row in cursor.execute("select * from s order by str collate strnum"):
    print row

   (u'file1',)
   (u'file3',)
   (u'file7',)
   (u'file17',)
   (u'file20',)

###
### Authorizer (eg if you want to control what user supplied SQL can do)
###

def authorizer(operation, paramone, paramtwo, databasename, triggerorview):
    """Called when each operation is prepared.  We can return SQLITE_OK, SQLITE_DENY or
    SQLITE_IGNORE"""
    # find the operation name
    print apsw.mapping_authorizer_function[operation],
    print paramone, paramtwo, databasename, triggerorview
    if operation==apsw.SQLITE_CREATE_TABLE and paramone.startswith("private"):
        return apsw.SQLITE_DENY  # not allowed to create tables whose names start with private

    return apsw.SQLITE_OK  # always allow

connection.setauthorizer(authorizer)
cursor.execute("insert into s values('foo')")
cursor.execute("select str from s limit 1")

   SQLITE_INSERT s None main None
   SQLITE_SELECT None None None None
   SQLITE_READ s str main None

# Cancel authorizer
connection.setauthorizer(None)

###
### progress handler (SQLite 3 experimental feature)
###

# something to give us large numbers of random numbers
import random
def randomintegers(howmany):
    for i in xrange(howmany):
        yield (random.randint(0,9999999999),)

# create a table with 500 random numbers
cursor.execute("begin ; create table bigone(x)")
cursor.executemany("insert into bigone values(?)", randomintegers(500))
cursor.execute("commit")

# display an ascii spinner
_phcount=0
_phspinner="|/-\\"
def progresshandler():
    global _phcount
    sys.stdout.write(_phspinner[_phcount%len(_phspinner)]+chr(8)) # chr(8) is backspace
    sys.stdout.flush()
    _phcount+=1
    time.sleep(0.1) # deliberate delay so we can see the spinner (SQLite is too fast otherwise!)
    return 0  # returning non-zero aborts

# register progresshandler every 20 instructions
connection.setprogresshandler(progresshandler, 20)

# see it in action - sorting 500 numbers to find the biggest takes a while
print "spinny thing -> ",
for i in cursor.execute("select max(x) from bigone"):
    print # newline
    print i # and the maximum number

connection.setprogresshandler(None)

###
### commit hook (SQLite3 experimental feature)
###

def mycommithook():
    print "in commit hook"
    hour=time.localtime()[3]
    if hour<8 or hour>17:
        print "no commits out of hours"
        return 1  # abort commits outside of 8am through 6pm
    print "commits okay at this time"
    return 0  # let commit go ahead

connection.setcommithook(mycommithook)
try:
    cursor.execute("begin; create table example(x,y,z); insert into example values (3,4,5) ; commit")
except apsw.ConstraintError:
    print "commit was not allowed"

connection.setcommithook(None)

###
### Virtual tables
###

# This virtual table stores information about files in a set of
# directories so you can execute SQL queries

def getfiledata(directories):
    columns=None
    data=[]
    counter=1
    for directory in directories:
        for f in os.listdir(directory):
            if not os.path.isfile(os.path.join(directory,f)):
                continue
            counter+=1
            st=os.stat(os.path.join(directory,f))
            if columns is None:
                columns=["rowid", "name", "directory"]+[x for x in dir(st) if x.startswith("st_")]
            data.append( [counter, f, directory] + [getattr(st,x) for x in columns[3:]] )
    return columns, data

# This gets registered with the Connection
class Source:
    def Create(self, db, modulename, dbname, tablename, *args):
        columns,data=getfiledata([eval(a) for a in args]) # eval strips off layer of quotes
        schema="create table foo("+','.join(["'%s'" % (x,) for x in columns[1:]])+")"
        return schema,Table(columns,data)
    Connect=Create

# Represents a table
class Table:
    def __init__(self, columns, data):
        self.columns=columns
        self.data=data

    def BestIndex(self, *args):
        return None

    def Open(self):
        return Cursor(self)

    def Disconnect(self):
        pass

    Destroy=Disconnect

# Represents a cursor
class Cursor:
    def __init__(self, table):
        self.table=table

    def Filter(self, *args):
        self.pos=0

    def Eof(self):
        return self.pos>=len(self.table.data)

    def Rowid(self):
        return self.table.data[self.pos][0]

    def Column(self, col):
        return self.table.data[self.pos][1+col]

    def Next(self):
        self.pos+=1

    def Close(self):
        pass

# Register the module as filesource
connection.createmodule("filesource", Source())

# Arguments to module - all directories in sys.path
sysdirs=",".join(["'%s'" % (x,) for x in sys.path[1:] if len(x) and os.path.isdir(x)])
cursor.execute("create virtual table sysfiles using filesource("+sysdirs+")")

# Which 3 files are the biggest?
for size,directory,file in cursor.execute("select st_size,directory,name from sysfiles order by st_size desc limit 3"):
    print size,file,directory

   1610924 apsw.so /usr/lib64/python2.4/site-packages
   1527536 _lcms.so /usr/lib64/python2.4/site-packages
   736616 lapack_lite.so /usr/lib64/python2.4/site-packages/Numeric

# Which 3 files are the oldest?
for ctime,directory,file in cursor.execute("select st_ctime,directory,name from sysfiles order by st_ctime limit 3"):
    print ctime,file,directory

   1157247798 libcdemu.py /usr/lib64/python2.4/site-packages
   1157247798 libcdemu.pyo /usr/lib64/python2.4/site-packages
   1157247798 libcdemu.pyc /usr/lib64/python2.4/site-packages

###
### Cleanup
###

# We must close connections
connection.close(True)  # force it since we want to exit

Building

The simple way is:

python setup.py install

On Windows the above command uses Visual C++. You can use MinGW with the command below. (If MinGW complains about missing Python functions starting with _imp__Py_ then run mingwsetup.bat which will ensure your Python distribution is initialized for MinGW compilation).

python setup.py build --compiler=mingw32 install

By default whatever SQLite 3 you already have on your system is used. If you place a copy of the headers and library in a sqlite3 subdirectory then that will be used instead. It is highly recommended that you build SQLite into APSW. Here is a quick and easy way of doing everything on Linux/Mac or Windows with MinGW, compiling the SQLite library statically into the extension (ie no external DLLs/shared libraries will be needed at runtime). You should follow these instructions with your current directory being where you extracted the APSW source to.

Download the SQLite 3 code.

Windows
Get the processed .zip
> mkdir sqlite3
> cd sqlite3
> unzip sqlite-source-3_3_10.zip
> del tclsqlite.c
> gcc -DTHREADSAFE -DNDEBUG -O3 -c *.c
> ar r libsqlite3.a *.o
> ranlib libsqlite3.a
> cd ..
> python setup.py build --compiler=mingw32 install
> python -c "import apsw ; print  apsw.sqlitelibversion(), apsw.apswversion()"
Mac/Linux/etc
Get the normal source.
$ wget http://www.sqlite.org/sqlite-3.3.13.tar.gz
$ tar xvfz sqlite-3.3.13.tar.gz
$ mv sqlite-3.3.13 sqlite3
$ cd sqlite3
# The static library is not built for inclusion into a separate shared library
# by default.  If using gcc, then do this
$ env CC="gcc -fPIC" CFLAGS="-DHAVE_DLOPEN" ./configure --enable-threadsafe --disable-tcl
# otherwise do this
$ env CFLAGS="-DHAVE_DLOPEN" ./configure --enable-threadsafe --disable-tcl
# The CFLAGS="-DHAVE_DLOPEN" bit is needed for loading dynamic
# extensions.  See SQLite bug 2082
$ make
$ cp .libs/*.a .
$ cd ..
$ python setup.py install
$ python -c "import apsw ; print  apsw.sqlitelibversion(), apsw.apswversion()"      

The extension builds into a single file, apsw.so (Linux/Mac) or apsw.pyd (Windows). You don't need to install it and can drop it into any directory that is more convenient for you and that your code can reach. To just do the build and not install, leave out install from the lines above and add build if it isn't already there.

If you want to check that your build is correct then you can run the unit tests. Run tests.py. It will print the APSW file used, APSW and SQLite versions and then run lots of tests all of which should pass.

API Reference

Everything you can do from the SQLite 3 C API you can do from Python. The documentation below notes which C API functions are called where you can get further details on what happens. The only C function not implemented is sqlite3_collation_needed. (You can still add collations, you just can't use this function to find out about them on-demand.) Additionally sqlite3_trace is not wrapped but instead tracers are provided that have more functionality.

Some functions are marked experimental in the SQLite API. These have also been made available, but as the SQLite documentation notes these functions may change form or disappear in future versions of SQLite. You can exclude these functions by commenting out the relevant line in setup.py when building apsw.

Various methods create functions, collations and set various hooks and handlers. To remove the relevant function/collation/hook/handler, pass in None as the callable method.

Module methods

sqlitelibversion()

Returns the version of the SQLite library as a string. This function calls sqlite3_libversion.

apswversion()

Returns the version of the APSW module.

enablesharedcache(boolean)

Calls sqlite3_enable_shared_cache. This sets the shared cache mode which was introduced in SQLite 3.3. Note that it only affects the current thread and should be called before any databases are opened. If called after that then you'll get MisuseError. APSW already enforces the other conditions required to use this functionality, namely that all operations on a Connection must happen in the same thread. Consequently you can safely use this functionality. You do not need to worry about sqlite3_thread_cleanup as that is only needed when abnormally terminating a thread (something you can't do in Python). Python's normal reference counting ensures all objects are cleaned up, and SQLite automatically cleans up the shared cache when the last Connection in a thread is closed.
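
A minimal sketch of how this might be used - it must run before any Connection is opened in the thread, and the database name here is just an example:

import apsw

apsw.enablesharedcache(True)         # must happen before any Connection is opened in this thread

db1=apsw.Connection("dbfile")        # example database name
db2=apsw.Connection("dbfile")        # shares the cache with db1 within this thread
# ... use the connections ...
db2.close()
db1.close()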

createmodule("modulename", module_object)

Registers a virtual table. See virtual tables for more details.

Module variables

connection_hooks

The purpose of the hooks is to allow the easy registration of functions, virtual tables or similar items with each Connection as it is created. The default value is an empty list. Whenever a Connection is created, each item in apsw.connection_hooks is invoked with a single parameter being the new Connection object. If the hook raises an exception then the creation of the Connection fails.
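
For example, a hook could add a user defined function to every new Connection. This is a sketch; the double() function name is made up:

import apsw

def register_extras(connection):
    "Invoked for every new Connection"
    def double(x):
        return 2*x
    connection.createscalarfunction("double", double)

apsw.connection_hooks.append(register_extras)

con=apsw.Connection(":memory:")      # the hook runs as part of creating this Connection
for (value,) in con.cursor().execute("select double(21)"):
    print value                      # prints 42
con.close()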

Module constants

Note that the same values can be used in different contexts. For example SQLITE_OK and SQLITE_CREATE_INDEX both have a value of zero. For each group of constants there is also a mapping (dict) available that you can supply a string to and get the corresponding numeric value, or supply a numeric value and get the corresponding string. These can help improve diagnostics/logging, calling other modules etc. For example

apsw.mapping_authorizer_function["SQLITE_READ"] == 20
apsw.mapping_authorizer_function[20] == "SQLITE_READ"

SQLITE_VERSION_NUMBER

This is the version of the SQLite library from the header file sqlite3.h at apsw compile time. It is expressed as an integer (eg 3003010 for SQLite 3.3.10).

SQLITE_OK, SQLITE_DENY, SQLITE_IGNORE

These values are returned from authorizer functions. Mapping is apsw.mapping_authorizer_return

SQLITE_CREATE_INDEX, SQLITE_CREATE_TABLE and 30 others

The values are passed to authorizer functions. You can see a complete list in the sqlite3_set_authorizer documentation. Mapping is apsw.mapping_authorizer_function

SQLITE_INDEX_CONSTRAINT_EQ, SQLITE_INDEX_CONSTRAINT_GT and 4 others

These values are passed to the BestIndex method of virtual tables. Mapping is apsw.mapping_bestindex_constraints

SQLITE_IOERR_READ, SQLITE_IOERR_FSYNC and 7 others

These are extended result codes which are provided in exceptions. Mapping is apsw.mapping_extended_result_codes

SQLITE_OK, SQLITE_ERROR, SQLITE_PERM and 22 others

These are SQLite result codes which are provided in exceptions. Mapping is apsw.mapping_result_codes

Connection class

The connection class wraps a sqlite3 pointer.

Connection(filename)

Opens an SQLite database named filename. (This calls sqlite3_open behind the scenes.) You must call close when finished with a Connection. If you do not call close then the destructor will invoke sys.excepthook, which by default prints a traceback.

close(force=False)

You must call this method when you are done with a Connection. Behind the scenes it calls sqlite3_close. It is ok to call this method multiple times. Note that this method can raise an exception, for example if a virtual table returns an error from its Disconnect method (see the virtual tables section below).

If force is True, then any executing cursors are forcibly closed.
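
A common pattern, sketched below, is to pair the Connection with a try/finally so close() always runs; the database and table names are only examples:

import apsw

connection=apsw.Connection("dbfile")             # example database name
try:
    cursor=connection.cursor()
    cursor.execute("create table if not exists log(msg)")
    cursor.execute("insert into log values(?)", ("hello",))
finally:
    # close() can itself raise; once the underlying problem is fixed it can be called again
    connection.close()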

cursor()

Creates a new cursor object on this database.

changes()

This function returns the number of database rows that were changed (or inserted or deleted) by the most recently completed INSERT, UPDATE, or DELETE statement. (This calls sqlite3_changes. Read that link for some additional notes.)

totalchanges()

This function returns the total number of database rows that have been modified, inserted, or deleted since the database connection was opened. (This calls sqlite3_total_changes. Read that link for some additional notes.)

last_insert_rowid()

Returns the integer key of the most recent insert in the database. (This calls sqlite3_last_insert_rowid.)
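
A short sketch showing changes(), totalchanges() and last_insert_rowid() together (in-memory database, made-up table):

import apsw

con=apsw.Connection(":memory:")
cur=con.cursor()
cur.execute("create table foo(x)")
cur.executemany("insert into foo values(?)", ([10], [20], [30]))

print con.changes()              # rows changed by the most recent statement (1 - the last insert)
print con.totalchanges()         # all rows inserted/updated/deleted on this connection (3)
print con.last_insert_rowid()    # rowid of the most recent insert

con.close()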

complete(statement)

Calls sqlite3_complete which tells you if the input string comprises one or more complete SQL statements.
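
For example (a sketch using an in-memory database):

import apsw

con=apsw.Connection(":memory:")
print con.complete("select * from foo")      # False - no terminating semi-colon
print con.complete("select * from foo ;")    # True  - a complete statement
con.close()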

setbusytimeout(milliseconds)

Sets the busy timeout. (This calls sqlite3_busy_timeout).

setbusyhandler(callable)

Sets the busy handler to callable. callable will be called with one integer argument, which is the number of prior calls to the busy callback for the same lock. If the busy callback returns something that evaluates to False, then SQLite returns SQLITE_BUSY to the calling code. If the callback returns something that evaluates to True, then SQLite tries to open the table again and the cycle repeats. (This calls sqlite3_busy_handler).
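
A sketch of a simple handler that retries a few times before giving up; the database name is only an example and the retry limit is arbitrary:

import time
import apsw

def mybusyhandler(numcalls):
    "numcalls is how many times the busy handler has been called for this lock"
    if numcalls>=5:
        return False          # give up - the waiting call gets BusyError
    time.sleep(0.1)           # wait a little, then ask SQLite to try again
    return True

connection=apsw.Connection("dbfile")          # example database name
connection.setbusyhandler(mybusyhandler)
# ... statements on this connection now retry up to 5 times when the file is locked
connection.close()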

interrupt()

Causes any pending operations on the database to abort at the earliest opportunity. You can call this from any thread. (This calls sqlite3_interrupt).

createscalarfunction(name, callable, numargs=-1)

Registers a scalar function. The callable is invoked with the values supplied in the SQL, and its return value becomes the result of the function. You can specify how many arguments your function takes with the numargs parameter, or supply -1 to accept any number. (This calls sqlite3_create_function).

createaggregatefunction(name, factorycallback, numargs=-1)

Registers an aggregate function. (This calls sqlite3_create_function.) You can specify how many arguments your function takes with the numargs parameter, or supply -1 to accept any number. When the function is used in an SQL query, the factorycallback is called without any arguments. The factorycallback needs to return a tuple of three items: a context object (passed to the other two callables), a step function called for each matching row, and a final function called to produce the result.

createcollation(name, callable)

Creates a collation with the specified name and callable. The callable will be passed two string arguments. It should return -1 if the first is less than the second, 0 if they are equal and 1 if the first is greater than the second. Note that this controls sorting (ORDER BY in SQL), so your comparisons don't affect other SQL operations. Read more about SQLite's handling of collations. (This calls sqlite3_create_collation.) If there is an error in your Python code then 0 (ie items are equal) is returned.

setauthorizer(callable)

The callable is invoked while SQL statements are being prepared. The intent is to allow applications to safely execute user entered SQL. The callable is called with 5 parameters: the operation code (one of the SQLITE_ constants listed below), two operation-specific parameters, the name of the database, and the name of the innermost trigger or view responsible for the access (or None).

You should return apsw.SQLITE_OK to allow the operation or apsw.SQLITE_DENY or apsw.SQLITE_IGNORE as applicable. (SQLITE_DENY is returned if there is an error in your Python code).

This calls sqlite3_set_authorizer which contains more detailed documentation.

setupdatehook(callable)

Sets a callable which is invoked when data is changed. The callable takes four parameters: the type of change (SQLITE_INSERT, SQLITE_UPDATE or SQLITE_DELETE), the database name, the table name, and the rowid of the affected row.

(This calls sqlite3_update_hook.)

setrollbackhook(callable)

Sets a callable which is invoked during a rollback. The callable takes no parameters and the return value is ignored. (This calls sqlite3_rollback_hook.)

setcommithook(callable) (SQLite 3 experimental feature)

Sets a callable which is invoked just before a commit. It should return zero for the commit to go ahead and non-zero for it to be turned into a rollback. In the case of an exception in your callable, a non-zero (ie rollback) value is returned. (This calls sqlite3_commit_hook.)

setprofile(callable) (SQLite 3 experimental feature)

Sets a callable which is invoked at the end of execution of each statement, and passed the statement string and how long it took to execute. (The execution time appears to be in nanoseconds.) Note that it is called only on completion. If, for example, you do a SELECT and only read the first result, then you won't reach the end of the statement and the callable won't be invoked for it.

(This calls sqlite3_profile, which is not documented on the SQLite web site; see the sqlite3.h header file for documentation.)
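
A sketch of a profiler that prints each completed statement and its execution time (remember to read result rows to the end, otherwise the statement never completes and is not profiled):

import apsw

def profiler(statement, elapsed):
    print "%10.3f ms " % (elapsed/1000000.0,), statement    # elapsed appears to be nanoseconds

con=apsw.Connection(":memory:")
con.setprofile(profiler)
cur=con.cursor()
cur.execute("create table t(x)")
cur.executemany("insert into t values(?)", ([i] for i in range(100)))
for row in cur.execute("select count(*) from t"):
    pass                                                    # read to the end so the select is profiled
con.setprofile(None)
con.close()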

setprogresshandler(callable, nsteps=20) (SQLite 3 experimental feature)

Sets a callable which is invoked every nsteps SQLite virtual machine instructions. The callable should return a non-zero value to abort or zero to continue. (If there is an error in your Python callable then non-zero will be returned.) (This calls sqlite3_progress_handler, which has more detailed documentation).

enableloadextension(bool) (SQLite 3 experimental feature)

Enables extension loading which is disabled by default. Calls sqlite3_enable_load_extension. See note below about enabling extension loading.

loadextension(filename, entrypoint=None) (SQLite 3 experimental feature)

Loads an extension by calling sqlite3_load_extension. If you don't specify the entrypoint then SQLite uses a default of sqlite3_extension_init.

Note that if you build SQLite on Unix then extension loading won't be built into SQLite by default. The details are in SQLite ticket 2082. You need to edit setup.py if you enable extension loading.
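
A sketch of the calls involved; the extension filename and entry point below are hypothetical placeholders:

import apsw

con=apsw.Connection(":memory:")
con.enableloadextension(True)                        # loading is off by default
con.loadextension("myextension.so")                  # hypothetical file; default entry point sqlite3_extension_init
# con.loadextension("myextension.so", "my_init")     # or name the entry point explicitly (hypothetical)
con.enableloadextension(False)                       # turn it back off when done
con.close()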

Cursor class

The Cursor class creates and executes SQLite prepared statements.

Cursor()

You cannot create cursors directly. They are created by calling Connection.cursor().

close(force=False)

In general cursors are already in a closed state (they only hold resources while a statement is executing) and you won't need to call this method. Calling it while the cursor is already closed, or if the Connection is closed, will not result in errors. Note that an open Cursor prevents its Connection from being closed.

getconnection()

Returns the Connection object to which this cursor belongs.

execute(statements, bindings=())

Executes the statements using the supplied bindings. The bindings can be supplied as a tuple or as a dict. Execution returns when the first row is available or all statements have completed. The cursor object is returned which you can use as an iterator. (See execution model for more details. This function wraps sqlite3_prepare and sqlite3_step.)

executemany(statements, sequenceofbindings=())

Repeatedly executes statements, using each element of sequenceofbindings as the bindings each time. Execution returns when the first row is available or all statements have completed. The cursor object is returned, which you can use as an iterator. (See execution model for more details. This function runs in a loop over the members of sequenceofbindings and wraps sqlite3_prepare and sqlite3_step.)

next()

The Cursor object is an iterator, and so you can use it in a for loop or similar situations. You can also explicitly call the next() method. This method returns a tuple of the contents of a returned row or raises StopIteration after all returned rows have been seen.

getdescription()

Returns a list describing each column in the current result set. Each item is a tuple of (column name, declared column type). You should only call this function while data is being returned such as when next() returns a row. This function calls sqlite3_column_name and sqlite3_column_decltype.

setexectrace(callable)
setrowtrace(callable)
getexectrace()
getrowtrace()

Sets or gets the tracers.

Virtual Tables

Virtual Tables are a new experimental feature introduced in SQLite 3.3.7 and updated in 3.3.8. They let a developer provide underlying table implementations, while still presenting a normal SQL interface to the user. The person writing SQL doesn't need to know or care that some of the tables come from elsewhere.

For example, you could expose data from files, network services or application data structures as though it were a normal SQL table - the filesource table in the example above does exactly this for directory listings.

You need to provide 3 types of object: a module, a virtual table and a cursor. These are documented below. You can also read the SQLite C method documentation. At the C level they are just one set of methods; at the Python/apsw level they are split over the 3 types of object, and the leading x is omitted from the method names in Python. You can return SQLite error codes (eg SQLITE_READONLY) by raising the appropriate exceptions (eg apsw.ReadOnlyError).

Module

The module is used to create the virtual tables. Once you have a module object, you register it with a connection by calling Connection.createmodule("modulename", module_object). To create the table, you use SQL: CREATE VIRTUAL TABLE tablename USING modulename(arg0, arg1, ...)

Create(connection, modulename, databasename, tablename, *args)

This is called when a table is first created on a connection. The optional args are passed from the SQL CREATE statement.

Return two items. The first item should be a string with SQL specifying the schema for the table. The second item is an object that implements the table methods. Later methods in the Table and Cursor objects reference the columns by number, starting at zero, with column -1 being the rowid. Every row must have a unique, unchanging integer rowid that fits in a 64 bit signed integer.

Connect(connection, modulename, databasename, tablename, *args)

Similar to Create, except it is called to establish a new connection to an existing table. The parameters and return value are the same. In general, Create is called the first time a table is accessed and is where you are expected to do any heavy initialisation work; Connect can take that initialised work and reuse it.

Table

The table contains knowledge of the indices, makes cursors and can perform transactions.

Begin() Sync() Commit() Rollback() (optional)

These methods are used as part of transactions. At the time of writing they are undocumented in SQLite (see www.sqlite.org/cvstrac/wiki?p=VirtualTableMethods). They do not take any arguments nor return anything. You do not have to provide these methods.

Disconnect() (optional)
Destroy()

These methods are the opposite of Connect and Create in the module implementation. They can be considered destructors for the virtual table instance. It is currently unclear how SQLite behaves if you return an error (ticket 2099). Note that SQLite does ignore errors in the Disconnect method. If you do have one, you will get an exception from the close() method, but the database will actually be closed.

Open()

Returns a new cursor object.

FindFunction(nfuncargs, funcname) not implemented yet

Lets you provide an alternate function implementation for the table. This is currently not implemented in apsw because of incomplete information in SQLite to make it safe (ticket 2095).

UpdateDeleteRow(rowid)

Requests deletion of the row with the specified id.

Return None (ignored)

UpdateInsertRow(rowid, fields)

Create a new row. fields is a tuple the same size as the number of columns in your table, containing a value for each column. rowid is the rowid to give the row. If rowid is None then you must choose and return a rowid for the new row.

Return the new rowid (if rowid was None), else ignored

UpdateChangeRow(rowid, newrowid, fields)

Change an existing row of the table with the values in fields. fields is a tuple the same size as the number of columns in your table containing a value for each column. If newrowid is not the same as rowid then you also have to change the rowid (this would happen for a query like UPDATE table SET rowid=rowid+100 WHERE ...)

Return None (ignored)

BestIndex(constraints, orderbys)
This is a complex method and even has its own page in the SQLite documentation. To get going initially, just return None and you will be fine. Implementing this method reduces the number of rows scanned in your table to satisfy queries, but only if you have an index or index like mechanism available.

Note: The implementation of this method differs slightly from the SQLite documentation for the C API. You are not passed "unusable" constraints, and the argv/constraintarg positions are not off by one. In the C API you have to return position 1 to get something passed to Filter in position 0; with the APSW implementation you return position 0 to get Filter arg 0, position 1 to get Filter arg 1, and so on.

The purpose of this method is to ask if you have the ability to determine whether a row meets certain constraints without visiting every row. An example constraint is price > 74.99. In a traditional SQL database, queries with constraints can be sped up with indices. If you return None, then SQLite will visit every row in your table and evaluate the constraints itself. Your index choice returned from BestIndex will also be passed to the Filter method on your cursor object. Note that SQLite may call this method multiple times trying to find the most efficient way of answering a query. You will be passed the constraints as a sequence of two-item tuples: the first item is the column number and the second item is the operation.

Example query: select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets".

If customer is column 0, price column 2 and quantity column 5, then constraints will be:

   (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
   (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
   (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)

Note that you do not get the value of the constraint (eg "Acme Widgets", 74.99 and 10 in this example).

The first item you return is None if you don't have an appropriate index for any of the constraints and SQLite should iterate over every row. If you do have a suitable index then you return as the first item a sequence the same length as constraints with the members mapping to the constraints in order. Each item can be one of None, an integer or a tuple of an integer and a boolean.

None
This means you have no index for that constraint. SQLite will have to iterate over every row for it.
integer
This is the argument number for the constraintargs being passed into the Filter function of your cursor (the values "Acme Widgets", 74.99 and 10 in the example).
(integer, boolean)
By default SQLite will check what you return. For example if you said that you had an index on price, SQLite will still check that each row you returned is greater than 74.99. If you set the boolean to False then SQLite won't do that double checking.
Example query: select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets".

Customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.

Constraints (in)                           Constraints used (out)
   (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),      1,
   (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),      None,
   (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)       0

When your Filter method in the cursor is called, constraintarg[0] will be "Acme Widgets" (the customer constraint value) and constraintarg[1] will be 74.99 (the price constraint value). You can also return an index number (integer) and an index string. SQLite attaches no significance to these values - they are passed as-is to your Cursor.Filter method, as a way for the BestIndex method to tell Cursor.Filter which of your indices or similar mechanisms to use.

The second argument to BestIndex is a sequence of orderbys because the query requested the results in a certain order. If your data is already in that order then SQLite can give the results back as is. If not, then SQLite will have to sort the results first.

Example query: select * from foo order by price desc, quantity asc

Price is column 2, quantity column 5 so orderbys will be:

  (2, True),  # True means descending, False is ascending
  (5, False)

Return You should return up to 5 items. Items not present in the return have a default value.

0: constraints used (default None)
This must either be None or a sequence the same length as constraints passed in. Each item should be as specified above saying if that constraint is used, and if so which constraintarg to make the value be in your Cursor.Filter function.
1: index number (default zero)
This value is passed as is to Cursor.Filter
2: index string (default None)
This value is passed as is to Cursor.Filter
3: orderby consumed (default False)
Return True if your output will be in exactly the same order as the orderbys passed in
4: estimated cost (default a huge number)
Approximately how many disk operations are needed to provide the results. SQLite uses the cost to optimise queries. For example if the query includes A or B and A has 2,000 operations and B has 100 then it is best to evaluate B before A.
A complete example. Query is select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets" order by price desc, quantity asc

Customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.

BestIndex(constraints, orderbys)
 
constraints= ( (2, apsw.SQLITE_INDEX_CONSTRAINT_GT), 
                  (5, apsw.SQLITE_INDEX_CONSTRAINT_LE), 
                  (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)  )

orderbys= ( (2, True), (5, False) )

You return

( (1, None, 0),   # constraints used
  27,             # index number
  "idx_pr_cust",  # index name
  False,          # results are not in orderbys order
  1000            # about 1000 disk operations to access index
)

Your Cursor.Filter method will be called with:

27,                          # index number you returned
"idx_pr_cust",               # index string you returned
("Acme Widgets", 74.99)      # constraintargs - customer (arg 0) and price (arg 1)

Cursor

Cursors are created by the Open method in the table object. They iterate over rows returning desired values.

Close()

This is the destructor for the cursor. Do any cleanup you need here, since the method will not be called again, even if it raises an exception.

Return None (ignored)

Filter(indexnum, indexstring, constraintargs)

This method is always called first, to initialise an iteration to the first row of the table. The arguments come from the BestIndex method in the table object, with constraintargs being a tuple of the constraint values you requested. If you always return None in BestIndex then indexnum will be zero, indexstring will be None and constraintargs will be empty.

Return None (ignored)

Eof()

Called to ask if we are at the end of the table. It is called after each call to Filter and Next.

Return True if the cursor is at the end of the table (there are no more rows of data), else False, matching the example Cursor.Eof above. Note that if this method raises an exception, or you do not provide it, then end of data is assumed.

Next()

Advance to the next row in the table.

Return None (ignored)

Rowid()

Return the current rowid. This should be an integer up to 64 bits signed.

Column(number)

Requests the value of the specified column number. Column numbering starts at zero with -1 being the rowid.

Return the value in that column. It must be compatible with SQLite types (ie one of None, integer, float, string or buffer (blob))

Troubleshooting virtual tables

Virtual Tables are a very recent addition to SQLite and haven't been widely used yet. They do work well if all your routines work perfectly.

A big help is using the local variables recipe as described in augmented stack traces which will give you more details in errors. You may also find errors compounding. For example if you have an error in the Filter method of a cursor, SQLite then closes the cursor. If you also return an error in the Close method then the first error may mask the second or vice versa.

Also note that SQLite may ignore responses from your methods if they don't make sense. For example in BestIndex, if you set multiple arguments to have the same constraintargs position then your Filter won't receive any constraintargs at all.

Exceptions

All exception types have apsw.Error as a parent. The following exceptions can happen:

ThreadingViolationError

You have used an object allocated in one thread in a different thread. All objects (Connection, Cursor) must be used in the same thread they were allocated in.

IncompleteExecutionError

You have tried to start a new SQL execute call before executing all the previous ones. See the execution model for more details.

ConnectionNotClosedError

The destructor for a Connection was called, but close() hadn't been called on the Connection. Destructors aren't allowed to return exceptions, so what actually happens is that sys.excepthook is called with this exception so you can find where it is happening in your code. The database will also be left open and hence leak memory and potentially file descriptors.

ConnectionClosedError

You have called close() on a Connection and then continued to use it.

BindingsError

There is an incorrect number of bindings when using tuples. Or you supplied a dictionary of bindings and not all bindings were named in the SQL statement. For example select * from foo where x=:name and y=?. You should either use colon name style or question mark style in a query but not both.

Note It is not considered an error to have missing keys in a dictionary. For example cursor.execute("insert into foo values($a,:b,$c)", {'a': 1}) is perfectly valid (note b and c are not in the dict). For missing keys, None/NULL will be used. This is so you don't have to add lots of spurious values to the supplied dict. If your schema requires every column have a value, then SQLite will generate an error due to some values being None/NULL so that case will be caught.

ExecutionCompleteError

A statement has completed execution, but you tried to run it more anyway!

ExecTraceAbort

The execution tracer returned False so execution was aborted.

ExtensionLoadingError

An error happened loading an extension.

The table below lists which Exception classes correspond to which SQLite error codes. APSW runs with extended result codes turned on. The result member of an exception instance gives the plain error code, and the extendedresult member gives the extended error code.

General Errors
SQLITE_ERROR SQLError
SQLITE_MISMATCH MismatchError
 
Internal Errors
SQLITE_INTERNAL InternalError
SQLITE_PROTOCOL ProtocolError
SQLITE_MISUSE MisuseError
SQLITE_RANGE RangeError
 
Permissions etc
SQLITE_PERM PermissionsError
SQLITE_READONLY ReadOnlyError
SQLITE_CANTOPEN CantOpenError
SQLITE_AUTH AuthError
Abort/Busy/etc
SQLITE_ABORT AbortError
SQLITE_BUSY BusyError
SQLITE_LOCKED LockedError
SQLITE_INTERRUPT InterruptError
SQLITE_SCHEMA SchemaChangeError
SQLITE_CONSTRAINT ConstraintError
 
Memory/Disk/etc
SQLITE_NOMEM NoMemError
SQLITE_IOERR IOError
SQLITE_CORRUPT CorruptError
SQLITE_FULL FullError
SQLITE_TOOBIG TooBigError
SQLITE_NOLFS NoLFSError
SQLITE_EMPTY EmptyError
SQLITE_FORMAT FormatError
SQLITE_NOTADB NotADBError

Some of these codes are no longer used by SQLite itself (according to the headers in 3.2.7), but the corresponding exception classes are kept for completeness.
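
As a rough sketch of inspecting the result and extendedresult members mentioned above (the syntax error below is deliberate):

import apsw

con=apsw.Connection(":memory:")
cur=con.cursor()
try:
    cur.execute("create table foo(x")    # deliberate syntax error
except apsw.Error, e:                    # apsw.Error is the parent of all apsw exceptions
    print e.__class__.__name__           # SQLError
    print e.result                       # plain result code (apsw.SQLITE_ERROR)
    print e.extendedresult               # extended result code
con.close()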

Augmented stack traces

When an exception occurs, Python does not include frames from non-Python code (ie the C code called from Python). This can make it more difficult to work out what was going on when an exception occurred for example when there are callbacks to collations, functions or virtual tables, triggers firing etc.

This is an example showing the difference between the tracebacks you would have got with earlier versions of apsw and the augmented traceback.

Code

import apsw

def myfunc(x):
  1/0

con=apsw.Connection(":memory:")
con.createscalarfunction("foo", myfunc)
con.createscalarfunction("fam", myfunc)
cursor=con.cursor()
cursor.execute("create table bar(x,y,z);insert into bar values(1,2,3)")
cursor.execute("select foo(1) from bar")

Original traceback:

Traceback (most recent call last):
  File "t.py", line 11, in <module>
    cursor.execute("select foo(1) from bar")
  File "t.py", line 4, in myfunc
    1/0
ZeroDivisionError: integer division or modulo by zero

Augmented traceback:

Traceback (most recent call last):
  File "t.py", line 11, in <module>
    cursor.execute("select foo(1) from bar")
  File "apsw.c", line 3412, in resetcursor
  File "apsw.c", line 1597, in user-defined-scalar-FOO
  File "t.py", line 4, in myfunc
    1/0
ZeroDivisionError: integer division or modulo by zero

In the original traceback you can't even see that code in apsw was involved. The augmented traceback shows that there were indeed two function calls within apsw and gives you line numbers should you need to examine the code. Also note how you are told that the call was in user-defined-scalar-FOO (all function names are uppercased).

But wait, there is more!!! In order to further aid troubleshooting, the augmented stack traces make additional information available. Each frame in the traceback has local variables defined with more information. You can print out the variables using ASPN recipe 52215.

In the recipe, the initial code in print_exc_plus is far more complicated than need be, and also won't work correctly with all tracebacks (it depends on f_prev being set which isn't always the case). Change the function to start like this:

    tb = sys.exc_info()[2]
    stack = []
    
    while tb:
        stack.append(tb.tb_frame)
        tb = tb.tb_next

    traceback.print_exc()
    print "Locals by frame, innermost last"

Here is a far more complex example from some virtual tables code I was writing. The BestIndex method in my code had returned an incorrect value. The augmented traceback includes local variables using recipe 52215. I can see what was passed in to my method, what I returned and which item was erroneous. The original traceback is almost completely useless.

Original traceback
Traceback (most recent call last):
  File "tests.py", line 1387, in testVtables
    cursor.execute(allconstraints)
TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean
Augmented traceback with local variables
Traceback (most recent call last):
  File "tests.py", line 1387, in testVtables
    cursor.execute(allconstraints)
                VTable =  __main__.VTable
                   cur =  <apsw.Cursor object at 0x988f30>
                     i =  10
                  self =  testVtables (__main__.APSW)
        allconstraints =  select rowid,* from foo where rowid>-1000 ....

  File "apsw.c", line 4050, in Cursor_execute.sqlite3_prepare
            Connection =  <apsw.Connection object at 0x978800>
             statement =  select rowid,* from foo where rowid>-1000 ....

  File "apsw.c", line 2681, in VirtualTable.xBestIndex
                  self =  <__main__.VTable instance at 0x98d8c0>
                  args =  (((-1, 4), (0, 32), (1, 8), (2, 4), (3, 64)), ((2, False),))
                result =  ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False)

  File "apsw.c", line 2559, in VirtualTable.xBestIndex.result_constraint
               indices =  [4, (3,), [2, False], [1], [0]]
                  self =  <__main__.VTable instance at 0x98d8c0>
                result =  ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False)
            constraint =  (3,)

TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean

Types

Read about SQLite 3 types. APSW always maintains the correct type for values, and never converts them to something else. Note however that SQLite may convert types based on column affinity as described in that link. APSW requires that all values supplied are one of the corresponding Python/SQLite types (or a subclass).
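
A sketch of the round trip for each supported type (in-memory database, made-up table):

import apsw

con=apsw.Connection(":memory:")
cur=con.cursor()
cur.execute("create table t(v)")

# one value of each supported Python type: None, int, long, float, unicode, buffer (blob)
values=[(None,), (42,), (2**40,), (3.14,), (u"hello",), (buffer("\x00\x01\x02"),)]
cur.executemany("insert into t values(?)", values)

for (v,) in cur.execute("select v from t"):
    print type(v), repr(v)        # each value comes back as the corresponding Python type

con.close()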

Unicode

All SQLite strings are Unicode. The actual binary representation can be UTF8, or UTF16 in either byte order. APSW uses the UTF8 interface to SQLite, which results in the binary string representation in your database defaulting to UTF8 as well. All this is totally transparent to your Python code.

Everywhere strings are used (eg as database values, SQL statements, binding names, user defined functions) you can use Unicode strings. You can also use the plain Python str class, and APSW will automatically convert it to Unicode if any non-ASCII characters are present.

When reading values from SQLite, APSW returns all strings as unicode objects (prior to the 3.3.10 release, pure ASCII text was returned as the str type - see the note at the top of this document).
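
For example (a sketch; both the ASCII and the non-ASCII value come back as unicode objects):

import apsw

con=apsw.Connection(":memory:")
cur=con.cursor()
cur.execute("create table t(v)")
cur.executemany("insert into t values(?)", [("plain ascii",), (u"caf\xe9",)])

for (v,) in cur.execute("select v from t"):
    print type(v), repr(v)        # <type 'unicode'> for both rows

con.close()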

Multi-threading and re-entrancy

APSW lets you use SQLite in multi-threaded programs and will let other threads execute while SQLite is working. (Technically the GIL is released when sqlite3_step or sqlite3_open are running. The GIL is re-acquired while user defined functions, collations and the various hooks/handlers run.)

Note that you cannot use the same Connection object in multiple threads; you must allocate a new Connection object per thread (this is a requirement of SQLite). A cursor object can only be used in the thread it was allocated in (also an SQLite requirement). Fortunately APSW checks this for you and raises a ThreadingViolationError if you try to use objects in the wrong thread. Note that your destructors also need to run in the creation thread.

If you have multiple threads and/or multiple programs accessing the same database then there may be contention for the file. SQLite will return SQLITE_BUSY, which is raised as BusyError. You can call Connection.setbusytimeout to set how long SQLite will retry, or Connection.setbusyhandler to install your own busy handler. Note that SQLite won't call the busy handler or timeout if it believes a deadlock has arisen. SQLite's locking and concurrency model is described at sqlite.org/lockingv3.html.

A cursor object can only be executing one query at a time. You cannot issue a new query from inside a trace function or from a user defined function or collation since these are called while executing a query. You can however make new cursors and use those without issue. You may want to remember the Connection object when you set your trace or user defined functions.
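
A sketch of the last point - a user defined function that runs its own query through a new cursor on the remembered Connection (the function and table names are made up):

import apsw

con=apsw.Connection(":memory:")
cur=con.cursor()
cur.execute("create table people(name) ; insert into people values('Alice') ; insert into people values('Bob')")

def countpeople():
    "Runs its own query - note it uses a NEW cursor, not the one executing the outer select"
    result=None
    for (n,) in con.cursor().execute("select count(*) from people"):
        result=n           # read every row so the inner statement completes
    return result

con.createscalarfunction("countpeople", countpeople, 0)
for row in cur.execute("select name, countpeople() from people"):
    print row

con.close()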

Tracing

You can install tracers on a cursor as an easy way of seeing exactly what gets executed and what is returned. The tracers can also abort execution and cause different values to be returned. This is very useful for diagnostics and testing without having to modify your main code.

Note: You cannot issue new execute statements against the cursor your tracer was called from. If you would like to make more queries in the tracer then do them from a new cursor object.

Execution Tracer

The execution tracer is called after an SQL statement has been prepared. (ie syntax errors will have caused an exception during preparation so you won't see them with a tracer). It is called with two arguments. The first is a string which is the SQL statement about to be executed, and the second is the bindings used for that statement (and can be None). If the return value evaluates to False/None then execution is aborted with an ExecTraceAbort exception. See the example above.

Row Tracer

The row tracer is called before each row is returned. The arguments are the items about to be returned. Whatever you return from the tracer is what is actually returned. If you return None then the whole row is skipped. See the example above.

64 bit hosts, Python 2.5

Prior to Python 2.5, you were limited to 32 bit quantities for items in Python such as the length of strings or the number of items in a sequence. APSW works correctly with the items in Python 2.5 that use 64 bits. Unfortunately SQLite is limited to 32 bit quantities for strings, blobs, number of columns etc, so you will get a TooBigError exception from APSW if you use strings or buffers longer than 1GB or 2GB (depending on internal storage). See SQLite ticket 2125.

Execution model

This section only matters if you give multiple SQL statements in one go to cursor.execute(). (Statements are separated by semi-colons.)

SQLite does execution in two steps. First a statement is prepared, which verifies the syntax, tables and fields and converts the statement into an internal representation. The prepared statement is then run. Execution stops when a row is available, there is an error or the statement is complete.

The cursor.execute() method automatically does the preparing and starts execution. If none of the statements return rows then execution will go to the end. If a row is returned then you need to call cursor.next() to get the row values. Execution will resume as necessary to satisfy next() calls.

However this means that if you don't read the rows returned then the rest of your statements won't be executed. APSW will detect unexecuted previous statements and generate an exception. For example:

>>> cursor.execute("select * from foo ; create table bar(x,y,z)")
>>> cursor.execute("create table bam(x,y,z)")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
apsw.IncompleteExecutionError: Error: there are still remaining sql statements to execute

Because I didn't read the results of select * from foo, the following create table command didn't get a chance to be executed. On the next execute that condition is detected and an exception raised.

DBAPI notes

DBAPI is defined in PEP 249. This section describes how APSW complies with it or differs from it.

Module Interface

There is no connect method. Use the Connection constructor instead, which only takes one parameter - the name of the database.

The Connection object and any cursors must all be used in the same thread they were allocated from. You cannot use them in different threads even if you protect them with mutexes.

Three different paramstyles are supported. You can use qmark ('... WHERE name=?'), numeric ('... WHERE name=?4') and named ('... WHERE name=:name' or '... WHERE name=$name'). Note that SQLite numbers parameters from one not zero.
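
A short sketch of the three styles side by side (in-memory database, made-up table):

import apsw

con=apsw.Connection(":memory:")
cur=con.cursor()
cur.execute("create table t(a,b)")

cur.execute("insert into t values(?,?)", (1, 2))              # qmark
cur.execute("insert into t values(?1,?2)", (3, 4))            # numeric - numbered from 1
cur.execute("insert into t values(:a,$b)", {"a": 5, "b": 6})  # named - colon or dollar prefix

con.close()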

The DBAPI exceptions are not used.

Connection Objects

There are no commit or rollback methods. You should use cursor.execute("COMMIT") etc. The SQLite documentation has more details.

Several methods that are defined in DBAPI to be on the cursor are instead on the Connection object, since this is where SQLite actually stores the information. Operations in any other cursor attached to the same Connection object update these values, and putting the methods on the Connection makes that explicit.

Cursor Objects

Use getdescription() instead of description. This information is only obtained on request.

There is no rowcount.

callproc is not implemented as SQLite doesn't support stored procedures.

execute returns the Cursor object and you can use it as an iterator to get the results (if any).

executemany returns the Cursor object and you can use it as an iterator to get the results (if any).

fetchone is not available. Use the cursor as an iterator, or call next() which raises StopIteration when there are no more results.

fetchmany is not available. Call next() for however many results you want.

fetchall is not available. Call next() or use a list comprehension such as [row for row in cursor.execute("....")].

nextset is not applicable or implemented.

arraysize is not available as fetchmany isn't.

Neither setinputsizes or setoutputsize are applicable or implemented.

Type objects

None of the date or time methods are available since SQLite 3 does not have a native date or time type.

Use the standard Python buffer class for BLOBs.

Optional DB API Extensions

rownumber is not available.

Exception classes are not available as attributes of Connection.

Use Cursor.getconnection() to get the associated Connection object.

scroll and messages are not available.

The Cursor object supports the iterator protocol and this is the only way of getting information back.

To get the last inserted row id, call Connection.last_insert_rowid(). That gives the id from the last insert on any Cursor associated with the Connection. You can also add select last_insert_rowid() to the end of your execute statements.

There is no errorhandler attribute.

pysqlite differences

pysqlite already provides a DBAPI compliant wrapper over SQLite 2 and 3; APSW only wraps SQLite 3. I suggest using APSW when you want to use SQLite and its functionality directly, or when your own code deals with database independence rather than relying on DBAPI.

APSW has a number of enhancements and differences over pysqlite 2 (wrapping SQLite 3); see the comparison referenced in the introduction.

Copyright and License

Copyright (C) 2004-2007 Roger Binns rogerb@rogerbinns.com

This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.

Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:

  1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.

  2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.

  3. This notice may not be removed or altered from any source distribution.

Version History

3.3.13-r1

As of this release, APSW is now co-hosted with pysqlite meaning there is one site to go to for your Python SQLite bindings. Start at http://initd.org/tracker/pysqlite/wiki

You can use this with SQLite 3.3.13 onwards. There were no API changes in SQLite 3.3.10 through 3.3.13, although some internal bugs were fixed, so the 3.3.13 release is recommended over the earlier versions.

Thanks to Ed Pasma for highlighting these issues.

3.3.10-r1

You can use this with SQLite 3.3.10 onwards.

3.3.9-r1

You can use this with SQLite 3.3.9 onwards.

3.3.8-r1

You can use this with SQLite 3.3.8 onwards. There was an incompatible API change for virtual tables in SQLite 3.3.8.

3.3.7-r1

Never released as 3.3.8 came along

You can use this release against SQLite 3.3.7. There were no changes in the SQLite 3.3.6 API from 3.3.5. SQLite 3.3.7 added an API that allowed removing a chunk of duplicate code, and also added Virtual Tables and loading of external modules (shared libraries); APSW was updated accordingly.

3.3.5-r1

You can use this release against any release of SQLite 3 from 3.3.5 onwards. A bug was also fixed when reporting an error during the cleanup of an aggregate function if there had also been an error in the step function. (PyErr_WriteUnraisable(NULL) crashed on some versions of Python but not others.)

SQLite added several functions for returning metadata about result column sets. You have to compile SQLite with SQLITE_ENABLE_COLUMN_METADATA to get them. This is not the default for SQLite. I don't believe these are generally useful except in some corner cases and so they aren't wrapped. However please shout if you do need them. Note that cursor.getdescription will already give you generally useful information. (Also see the pragmas).

The test code has been converted into using the unittest module. Run python tests.py -v to get the tests run. There should be no errors.

Updated code to work correctly with new Py_ssize_t introduced in Python 2.5. See 64 bit hosts, Python 2.5 for more details on how Python and SQLite handle 64 bit sized items.

The following functions were added to SQLite and are wrapped. They are all functions defined on the Connection object.

The following functions were added/changed in SQLite C API but are not useful to be wrapped.

3.2.7-r1

You can use this release against any release of SQLite 3.

SQLite 3.2.7 has several bug fixes. The undocumented experimental function sqlite3_profile was added, but it was not wrapped in apsw at this point.

3.2.2-r1

You can use this release against any release of SQLite 3.

SQLite 3.2.2 API removed sqlite3_global_recover. That function was not wrapped in APSW. Note that SQLite 3.2.2 contains a bug fix that applies when you use 64 bit integer primary keys (32 bit ints are fine).

3.2.1-r1

You can use this release against any release of SQLite 3.

SQLite 3.2.1 had one addition in the stable C API, which was a new function named sqlite3_global_recover. That function is not applicable for wrapping in APSW.

3.1.3-r1

You can use this release against any release of SQLite 3.

SQLite 3.1.3 had no changes in the stable C API other than what is mentioned above. There were some new experimental functions added which are not currently documented on the SQLite website and which are not wrapped by APSW. Please contact me if you believe they will remain in SQLite and you would like them wrapped.

3.0.8-r3

There are no functional changes. The only changes were to correct some variable names in the example code (they were cut and pasted from the test code which used different names) and to make the source zip file extract its contents into a sub-directory which is the more typical way of packaging that sort of thing.

3.0.8-r2

All remaining functionality in the C API for SQLite 3.0.8 is now available.

Finished this documentation.

3.0.8-r1

Initial release