The data is stored and accessed on disk (it is not an in-memory database); the implementation has been designed to make all operations, and especially selection, as fast as possible in an interpreted language.
The database is implemented as a Python iterator, yielding objects whose attributes are the fields defined when the base is created; requests can therefore be expressed as list comprehensions or generator expressions instead of SQL queries:
    for record in [r for r in db if r.name == 'pierre']:
        print(record.name, record.age)
instead of
    cursor.execute("SELECT * FROM db WHERE name = 'pierre'")
    for r in cursor.fetchall():
        print(r[0], r[1])
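To make the contrast concrete, here is a small runnable sketch of the two styles side by side. It uses a plain list of namedtuples in place of a buzhug base, and the standard-library sqlite3 module for the SQL version; the sample names and ages are invented for illustration.

```python
import sqlite3
from collections import namedtuple

# Stand-in for a buzhug base: an iterable of records with named fields.
Record = namedtuple('Record', ['name', 'age'])
db = [Record('pierre', 33), Record('anne', 28), Record('pierre', 41)]

# Comprehension-style selection: ordinary Python, no query language.
matches = [r for r in db if r.name == 'pierre']
for r in matches:
    print(r.name, r.age)

# The same selection expressed as SQL on an in-memory sqlite3 table.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)", db)
cursor = conn.execute("SELECT name, age FROM people WHERE name = 'pierre'")
for name, age in cursor.fetchall():
    print(name, age)
```

Both loops print the same rows; the comprehension form works directly on record attributes instead of positional tuple indices.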
List comprehensions are only one of several ways to select records; direct access by record identifier is almost immediate regardless of the size of the base, and the algorithms used in the select() method make selections extremely fast in most cases.
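The reason identifier lookup stays flat as the base grows can be sketched with a toy index mapping identifiers to records. This is only an analogy, not buzhug's actual on-disk layout:

```python
# Toy sketch: constant-time access by record identifier through an index.
# A dict stands in for whatever id-to-position mapping the database keeps;
# one lookup suffices no matter how many records exist.
records = {i: {'name': 'user%d' % i, 'age': 20 + i % 50} for i in range(100000)}

def get_record(rec_id):
    """Return the record for rec_id with a single hash lookup."""
    return records[rec_id]

print(get_record(42)['name'])
```

Whether the base holds a hundred records or a hundred thousand, fetching one by identifier costs the same single lookup.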
buzhug supports concurrency control by versioning, cleanup of unused data when many records have been deleted, easy links between bases, adding and removing fields on an existing base, and more.
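The general idea behind concurrency control by versioning can be sketched as follows. This is a generic illustration of optimistic versioning, not buzhug's actual code; the class and exception names are invented:

```python
class Conflict(Exception):
    """Raised when a record changed between read and update."""
    pass

class VersionedRecord:
    """Optimistic concurrency control: each record carries a version
    number, and an update succeeds only if the caller read the current
    version. A stale write is rejected instead of silently overwriting."""
    def __init__(self, data):
        self.data = data
        self.version = 0

    def update(self, new_data, seen_version):
        if seen_version != self.version:
            raise Conflict('record modified since it was read')
        self.data = new_data
        self.version += 1
```

A writer that read version 0 can update once; a second writer still holding version 0 gets a Conflict and must re-read the record before retrying.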
Database speed comparisons are not easy to make. I made a limited benchmark using the same use cases as SQLite's author; it shows that buzhug is much faster than other pure-Python modules (KirbyBase, gadfly). SQLite, which is implemented in C, is faster, but by less than a factor of 3 on average.
buzhug is Open Source software, published under the revised BSD licence.