zzzeek's Guide to Python 3 Porting

January 24, 2011 at 05:36 PM | Code

update 2012-11-18:

This blog post discusses a Python 3 approach that's heavily centered on the 2to3 tool. These days, I'm much more in favor of the "in place" approach, even when supporting versions as far back as Python 2.4. Mako 0.7.4 is now an "in place" library, supporting Python 2.4 through 3.x with no changes. For a good introduction to the "in place" approach, see Supporting Python 2 and 3 without 2to3 conversion.

Just the other day, Ben asked me, "OK, where is there an online HOWTO of how to port to Python 3?". I hit the Google expecting to see at least three or four blog posts with the basic steps, an overview, the things Guido laid out for us at Pycon '10 (and maybe even '09? I don't remember). Surprisingly, other than the link to the 2to3 tool and Guido's original guide, there aren't a whole lot.

So here are the steps I've used to produce released versions of SQLAlchemy and Mako on PyPI which are cross-compatible for Py2k and Py3k.

1. Make Sure You're Good for 2.6 at Least

Run your test suite with Python 2.6 or 2.7, using the -3 flag. Make sure there are no warnings. For example, take the following ridiculous program:

def foo(somedict):
    if somedict.has_key("hi"):
        print somedict["hi"]

assert callable(foo)
foo({"hi":"there"})

Running with -3 has some things to say:

classics-MacBook-Pro:~ classic$ python -3 test.py
test.py:5: DeprecationWarning: callable() not supported in 3.x; use isinstance(x, collections.Callable)
  assert callable(foo)
test.py:2: DeprecationWarning: dict.has_key() not supported in 3.x; use the in operator
  if somedict.has_key("hi"):
there

So we fix all those things. If our code needs to support old versions of Python as well, like 2.3 or 2.4, we may have to use runtime version and/or library detection for some things - as an example, Python 2.4 doesn't have collections.Callable. More on that later. For now let's assume we can get our whole test suite to pass without any warnings with the -3 flag.
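
Applied to the ridiculous program above, one cross-version variant might look like this (a single-argument print() call parses the same on 2.x and 3.x, and hasattr() stands in for callable()):

def foo(somedict):
    if "hi" in somedict:           # "in" replaces dict.has_key()
        print(somedict["hi"])      # single-argument print() is valid 2.x syntax too

# hasattr() substitutes for callable(), which is gone until Python 3.2
assert hasattr(foo, '__call__')
foo({"hi": "there"})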

2. Run the whole library through 2to3 and see how we do

This is the step we're all familiar with. Run the 2to3 tool to get a first pass. For example, here's the 2to3 tool run on Mako:

classics-MacBook-Pro:mako classic$ 2to3 mako/ test/ -w

2to3 dumps everything it's doing to stdout, and with the -w flag it also rewrites the files in place. I usually clone my source tree to a second, scratch tree, so that I can keep making alterations to the original Py2K tree as I go along - that tree remains the Python 2.x source that gets committed.

It's typical with a larger application or library that some things, or even many things, didn't survive the 2to3 process intact.

In the case of SQLAlchemy, along with the usual string/unicode/bytes types of issues, we had problems regarding the renaming of iteritems() to items() and itervalues() to values() on dictionary types - some of our custom dictionary types would be broken. When your code produces no warnings with -3 but the 2to3 tool still produces non-working code, there are three general approaches towards achieving cross-compatibility, listed here from lowest to highest severity.

2a. Try to replace idioms that break in Py3K with cross-version ones

Easiest is if the code in question can be modified so that it works on both platforms, as run through the 2to3 tool for the Py3k version. This is generally where a lot of the bytes/unicode issues wind up. For example, code like this:

hexlify(somestring)

...doesn't work in Py3k, since hexlify() needs bytes. So a change like this might be appropriate:

hexlify(somestring.encode('utf-8'))

Similarly, in Mako, the render() method returns an encoded string, which on Py3k is bytes. A unit test was doing this:

html_error = template.render()
assert "RuntimeError: test" in html_error

We fixed it to instead say this:

html_error = template.render()
assert "RuntimeError: test" in str(html_error)

2b. Use Runtime Version Flags to Handle Usage / Library Incompatibilities

SQLAlchemy has a util package which includes code similar to this:

import sys
py3k = sys.version_info >= (3, 0)
py3k_flag = getattr(sys, 'py3kwarning', False)
py26 = sys.version_info >= (2, 6)
jython = sys.platform.startswith('java')
win32 = sys.platform.startswith('win')

This is basically getting some flags upfront that we can use to select behaviors specific to different platforms. Other parts of the library can say from sqlalchemy.util import py3k if we need to switch off some runtime behavior for Py3k (or Jython, or an older Python version).

In Mako we use this flag to do things like switching among 'unicode' and 'str' template filters:

if util.py3k:
    self.default_filters = ['str']
else:
    self.default_filters = ['unicode']

We use it to mark certain unit tests as unsupported (skip_if() is a decorator we use in our Nose tests which raises SkipTest if the given expression is True):

@skip_if(lambda: util.py3k)
def test_quoting_non_unicode(self):
    # ...
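
For reference, a minimal sketch of such a decorator might look like the following (assuming Nose's SkipTest exception; the real helper in our suite is a bit more involved):

from nose import SkipTest

def skip_if(predicate):
    """Skip the decorated test when predicate() returns True."""
    def decorate(fn):
        def maybe(*args, **kw):
            if predicate():
                raise SkipTest("skipped %s" % fn.__name__)
            return fn(*args, **kw)
        maybe.__name__ = fn.__name__
        return maybe
    return decorate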

For our previously mentioned issue with callable() (which apparently is coming back in Python 3.2), we have a block in SQLAlchemy's compat.py module like this, which returns to us callable(), cmp(), and reduce():

if py3k:
    def callable(fn):
        return hasattr(fn, '__call__')
    def cmp(a, b):
        return (a > b) - (a < b)

    from functools import reduce
else:
    import __builtin__
    callable = __builtin__.callable
    cmp = __builtin__.cmp
    reduce = __builtin__.reduce
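
Elsewhere in the library, code then imports these names in place of the builtins - the exact module path here is illustrative, assuming the block lives at sqlalchemy.util.compat:

from sqlalchemy.util.compat import callable, cmp, reduce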

2c. Use a Preprocessor

The "runtime flags" approach is probably as far as 90% of Python libraries need to go. In SQLAlchemy, we took a more heavy handed approach, which is to bolt a preprocessor onto the 2to3 tool. The advantage here is that you can handle incompatible syntaxes, you don't need to be concerned about whatever latency a runtime boolean flag might introduce into some critical section, and in my opinion its a little easier to read, particularly in class declarations where you can maintain the same level of indentation.

The preprocessor is part of the SQLAlchemy distribution and you can also download it here. It currently uses a monkeypatch approach to work.

I've mentioned the usage of a preprocessor in some other forums and in talks, but as yet I don't know of anyone else using this approach. I would welcome suggestions on how we could do this better, such as a way to get a regular 2to3 "fixer" to do it without the need for monkeypatching (I couldn't get that to work - the system doesn't read comment lines, for one thing), or otherwise some approach that has similar advantages to the preprocessor.

An example is our IdentityMap dict subclass, paraphrased here, where we had to define iteritems() on the Python 2 platform as returning an iterator, but on Python 3 that needs to be the items() method:

class IdentityMap(dict):
    # ...

    def items(self):
    # Py2K
        return list(self.iteritems())

    def iteritems(self):
    # end Py2K
        return iter(self._get_items())

Above, the "# Py2K / # end Py2K" comments are picked up by the preprocessor, so that by the time the code reaches the 2to3 tool it looks like this:

class IdentityMap(dict):
    # ...

    def items(self):
    # start Py2K
    #    return list(self.iteritems())
    #
    #def iteritems(self):
    # end Py2K
        return iter(self._get_items())

We also use it in cases where new syntactical features are useful. When we re-throw DBAPI exceptions, it's nice for us to use Python 3's from keyword so that we can chain the exceptions together, something we can't do in Python 2:

# Py3K
#raise MyException(e) from e
# Py2K
raise MyException(e), None, sys.exc_info()[2]
# end Py2K

The 2to3 tool turns the above into a with_traceback() call, and it even does so incorrectly on Python 2.6 (this was fixed in 2.7). The from keyword also has a slightly different meaning than with_traceback(), in that both exceptions are preserved in a "chain". Run through the preprocessor, we get:

# start Py3K
raise MyException(e) from e
# end Py3K
# start Py2K
#raise MyException(e), None, sys.exc_info()[2]
# end Py2K

After the preprocessor modifies the incoming text stream, it passes it off to the 2to3 tool where the remaining Python 2 idioms are converted to Python 3. The tool ignores code that's already Python 3 compatible (luckily).
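
To make the mechanics concrete, here's a simplified sketch of the commenting pass, handling only the "# Py2K" markers (the real sa2to3 preprocessor also handles "# Py3K" blocks, un-commenting those in the same fashion):

def preprocess_py2k(source):
    """Comment out '# Py2K' ... '# end Py2K' blocks before
    handing the source off to the 2to3 tool."""
    output = []
    block_indent = None
    for line in source.splitlines(True):
        stripped = line.strip()
        if stripped == "# Py2K":
            block_indent = line[:line.index("#")]
            output.append(block_indent + "# start Py2K\n")
        elif stripped == "# end Py2K":
            block_indent = None
            output.append(line)
        elif block_indent is not None:
            # comment the line out at the marker's indentation
            rest = line[len(block_indent):] if line.strip() else "\n"
            output.append(block_indent + "#" + rest)
        else:
            output.append(line)
    return "".join(output)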

3. Create a dual-platform distribution with Distutils/Distribute

Now that we have a source tree that becomes a fully working Python 3 application via script, we can integrate this script with our setup.py script using the use_2to3 directive. Clarification is appreciated here; I believe distutils itself accepts the flag, but it only actually works if Distribute is installed. The guidelines in Porting to Python 3 — A Guide are helpful here, where we reproduce Armin's code example entirely:

import sys

from setuptools import setup

# if we are running on python 3, enable 2to3 and
# let it use the custom fixers from the custom_fixers
# package.
extra = {}
if sys.version_info >= (3, 0):
    extra.update(
        use_2to3=True,
        use_2to3_fixers=['custom_fixers']
    )


setup(
    name='Your Library',
    version='1.0',
    classifiers=[
        # make sure to use :: Python *and* :: Python :: 3 so
        # that pypi can list the package on the python 3 page
        'Programming Language :: Python',
        'Programming Language :: Python :: 3'
    ],
    packages=['yourlibrary'],
    # make sure to add custom_fixers to the MANIFEST.in
    include_package_data=True,
    **extra
)

For SQLAlchemy, we modify this approach slightly to ensure our preprocessor is patched in:

extra = {}
if sys.version_info >= (3, 0):
    # monkeypatch our preprocessor
    # onto the 2to3 tool.
    from sa2to3 import refactor_string
    from lib2to3.refactor import RefactoringTool
    RefactoringTool.refactor_string = refactor_string

    extra.update(
        use_2to3=True,
    )

With the use_2to3 flag, our source distribution can now be built and installed with either a Python 2 or Python 3 interpreter, and if Python 3, 2to3 is run on the source files before installing.

I've seen several packages which maintain two entirely separate source trees, one being the Python 3 version. I sincerely hope fewer packages choose to do it that way, since it means more work for the maintainers (or alternatively, slower releases for Python 3), more bugs (since unit tests aren't run against the same source tree), and it just doesn't seem like the best way to do things. Eventually, when Python 3 is our default development platform, we'll use 3to2 to maintain the Python 2 version in the other direction.

4. Add the Python :: 3 Classifier!

I sometimes forget to do this myself. As in the example above, remember to add 'Programming Language :: Python :: 3' to your classifiers! This is the primary method of announcing that your package works with Python 3:

setup(
    name='Your Library',
    version='1.0',
    classifiers=[
        # make sure to use :: Python *and* :: Python :: 3 so
        # that pypi can list the package on the python 3 page
        'Programming Language :: Python',
        'Programming Language :: Python :: 3'
    ],
    packages=['yourlibrary'],
    # make sure to add custom_fixers to the MANIFEST.in
    include_package_data=True,
    **extra
)

Further Reading

Guido's own porting guide:

http://docs.python.org/release/3.0.1/whatsnew/3.0.html

Armin Ronacher's porting guide:

http://lucumr.pocoo.org/2010/2/11/porting-to-python-3-a-guide/

Armin again, writing forwards-compatible Python code:

http://lucumr.pocoo.org/2011/1/22/forwards-compatible-python/

Dave Beazley, Porting Py65 (and my Superboard) to Python 3:

http://dabeaz.blogspot.com/2011/01/porting-py65-and-my-superboard-to.html


The Enum Recipe

January 14, 2011 at 12:46 PM | Code, SQLAlchemy

In the most general sense an enumeration is an exact listing of all the elements of a set. In software design, enums are typically sets of fixed string values that define some kind of discriminating value within an application. In contrast to a generic "dropdown" list, such as a selection of timezones, country names, or years in a date picker, the enum usually refers to values that are also explicit within the application's source code, such as "debit" or "credit" in an accounting application, "draft" or "publish" in a CMS, "everyone", "friends of friends", or "friends only" in your typical social media sell-your-details-to-the-highest-bidder system. Differing values have a direct impact on business logic. Adding new values to the list usually corresponds with the addition of some new logic in the application to accommodate its meaning.

The requirements for an application-level enumeration are usually:

  1. Can represent a single value within application logic with no chance of specifying a non-existent value (i.e., we don't want to hardcode strings or numbers).
  2. Can associate each value with a textual description suitable for a user interface.
  3. Can get the list of all possible values, usually for user interface display.
  4. Can efficiently associate the discriminating value with many database records.

Representing an enumerated value in a relational database often goes like this:

CREATE TABLE employee_type (
    id INTEGER PRIMARY KEY,
    description VARCHAR(30) NOT NULL
);

CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    name VARCHAR(60) NOT NULL,
    type INTEGER REFERENCES employee_type(id)
);

INSERT INTO employee_type (id, description) VALUES
    (1, 'Part Time'),
    (2, 'Full Time'),
    (3, 'Contractor');

Above we use the example of a database of employees and their status. Advantages of the above include:

  1. The choice of "employee type" is constrained.
  2. The textual descriptions of employees are associated with the constrained value.
  3. New employee types can be added just by adding a new row.
  4. Queries can be written directly against the data that produce textual displays of discriminator values, without leaving the database console.

But as we all know this approach also has disadvantages:

  1. It's difficult to avoid hardcoding integer IDs in our application. Adding a character based "code" field to the employee_type table, even making the character field the primary key, can ameliorate this, but this is not information that would otherwise be needed in the database. Our DBAs also got grumpy when we proposed a character-based primary key.
  2. To display choices in dropdowns, as well as to display the textual description of the value associated with a particular piece of data, we need to query the database for the text - either by loading them into an in-memory lookup ahead of time, or by joining to the lookup table when we query the base table. This adds noise and boilerplate to the application.
  3. For each new data-driven enumerative type used by the application, we need to add a new table and populate it.
  4. When the descriptive names change, we have to update the database, tying database migration work to what would normally be a user-interface-only update.
  5. Whatever framework we build around these lookup tables doesn't really work for enumerations that don't otherwise need to be persisted.
  6. If we moved to a non-relational database, we'd probably do this completely differently.

Basically, this approach is tedious and puts information about the enum further away from the application code than we'd prefer.

An alternative to the lookup table is to use a database-supplied enumeration. Both MySQL and Postgresql (as of 8.3) offer an ENUM type for this purpose. It's fairly straightforward to create an approximation of an ENUM datatype in most databases by using a CHAR column in conjunction with a CHECK constraint that tests incoming rows against a set of possible values.

SQLAlchemy provides an Enum type which abstracts this technique:

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Enum
Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'

    id = Column(Integer, primary_key=True)
    name = Column(String(60), nullable=False)
    type = Column(Enum('part_time', 'full_time', 'contractor', name='employee_types'))

On backends that support ENUM, a call to metadata.create_all() emits the appropriate DDL to generate the type. The 'name' field of the Enum is used as the name of the type created in PG:

CREATE TYPE employee_types AS ENUM ('part_time','full_time','contractor')

CREATE TABLE employee (
    id SERIAL NOT NULL,
    name VARCHAR(60) NOT NULL,
    type employee_types,
    PRIMARY KEY (id)
)

On those that don't, it emits a VARCHAR datatype and additionally emits DDL to generate an appropriate CHECK constraint. Here, the 'name' field is used as the name of the constraint:

CREATE TABLE employee (
    id INTEGER NOT NULL,
    name VARCHAR(60) NOT NULL,
    type VARCHAR(10),
    PRIMARY KEY (id),
    CONSTRAINT employee_types CHECK (type IN ('part_time', 'full_time', 'contractor'))
)

In the case of PG's native ENUM, we're using the same space as a regular integer (four bytes on PG). In the case of CHAR/VARCHAR, keeping the symbols down to one or two characters should keep the size under four bytes (database-specific overhead and encoding concerns may vary the results).

To combine the ENUM database type with the other requirements of source-code level identification and descriptive naming, we'll encapsulate the whole thing into a base class that can be used to generate all kinds of enums:

class EnumSymbol(object):
    """Define a fixed symbol tied to a parent class."""

    def __init__(self, cls_, name, value, description):
        self.cls_ = cls_
        self.name = name
        self.value = value
        self.description = description

    def __reduce__(self):
        """Allow unpickling to return the symbol
        linked to the DeclEnum class."""
        return getattr, (self.cls_, self.name)

    def __iter__(self):
        return iter([self.value, self.description])

    def __repr__(self):
        return "<%s>" % self.name

class EnumMeta(type):
    """Generate new DeclEnum classes."""

    def __init__(cls, classname, bases, dict_):
        # give each subclass its own copy of the symbol registry
        cls._reg = reg = cls._reg.copy()
        for k, v in dict_.items():
            # each tuple-valued class attribute becomes an EnumSymbol,
            # keyed in the registry by its database value
            if isinstance(v, tuple):
                sym = reg[v[0]] = EnumSymbol(cls, k, *v)
                setattr(cls, k, sym)
        return type.__init__(cls, classname, bases, dict_)

    def __iter__(cls):
        # iterating the class yields its EnumSymbol objects
        return iter(cls._reg.values())

class DeclEnum(object):
    """Declarative enumeration."""

    __metaclass__ = EnumMeta
    _reg = {}

    @classmethod
    def from_string(cls, value):
        try:
            return cls._reg[value]
        except KeyError:
            raise ValueError(
                    "Invalid value for %r: %r" %
                    (cls.__name__, value)
                )

    @classmethod
    def values(cls):
        return cls._reg.keys()

Above, DeclEnum is the public interface. There's a bit of fancy pants stuff in there, but here's what it looks like in usage. We build an EmployeeType class, as a subclass of DeclEnum, that has all the things we want at once, with zero of anything else:

class EmployeeType(DeclEnum):
    part_time = "part_time", "Part Time"
    full_time = "full_time", "Full Time"
    contractor = "contractor", "Contractor"

If we're trying to save space on a non-ENUM platform, we might use single character values:

class EmployeeType(DeclEnum):
    part_time = "P", "Part Time"
    full_time = "F", "Full Time"
    contractor = "C", "Contractor"

Our application references individual values using the class level symbols:

employee = Employee(name=name, type=EmployeeType.part_time)
# ...
if employee.type is EmployeeType.part_time:
    # do something with part time employee

These symbols are global constants, hashable, and even pickleable, thanks to the special __reduce__ above.
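
A quick round trip illustrates the point - unpickling goes through getattr() and hands back the very same symbol object:

import pickle

s = pickle.dumps(EmployeeType.part_time)
assert pickle.loads(s) is EmployeeType.part_time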

To get at value/description pairs for a dropdown, we can iterate the class; each symbol then unpacks as a (value, description) 2-tuple:

>>> for key, description in EmployeeType:
...    print key, description
P Part Time
F Full Time
C Contractor

To convert from a string value, as passed to us in a web request, to an EmployeeType symbol, we use from_string():

type = EmployeeType.from_string('P')
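
An unknown value raises ValueError, which is easy to catch at the web layer:

>>> EmployeeType.from_string('X')
Traceback (most recent call last):
  ...
ValueError: Invalid value for 'EmployeeType': 'X'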

The textual description is always available directly from the symbol itself:

print EmployeeType.contractor.description

So we have application level constants, textual descriptions, and iteration. The last step is persistence. We'll use SQLAlchemy's TypeDecorator to augment the Enum() type such that it can read and write our custom values:

from sqlalchemy.types import SchemaType, TypeDecorator, Enum
import re

class DeclEnumType(SchemaType, TypeDecorator):
    def __init__(self, enum):
        self.enum = enum
        self.impl = Enum(
                        *enum.values(),
                        name="ck%s" % re.sub(
                                    '([A-Z])',
                                    lambda m:"_" + m.group(1).lower(),
                                    enum.__name__)
                    )

    def _set_table(self, table, column):
        self.impl._set_table(table, column)

    def copy(self):
        return DeclEnumType(self.enum)

    def process_bind_param(self, value, dialect):
        if value is None:
            return None
        return value.value

    def process_result_value(self, value, dialect):
        if value is None:
            return None
        return self.enum.from_string(value.strip())

The idea of TypeDecorator, for those who haven't worked with it, is to provide a wrapper around a plain database type, adding marshaling behavior on top of what's needed simply for DBAPI consistency. The impl datamember refers to the type being wrapped. In this case, DeclEnumType generates a new Enum object using information from a given DeclEnum subclass. The name of the enum is derived from the name of our class, using the world's shortest camel-case-to-underscore converter.

The addition of SchemaType as well as the _set_table() method represent a little bit of inside knowledge about the sqlalchemy.types module. TypeDecorator currently does not automatically figure out from its impl that it needs to export additional functionality related to the generation of the CHECK constraint and/or the CREATE TYPE. SQLAlchemy will try to improve upon this at some point.

We can nicely wrap the creation of DeclEnumType into our DeclEnum via a new class method:

class DeclEnum(object):
    """Declarative enumeration."""

    # ...

    @classmethod
    def db_type(cls):
        return DeclEnumType(cls)

So the full declaration and usage of our type looks like:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

class EmployeeType(DeclEnum):
    part_time = "P", "Part Time"
    full_time = "F", "Full Time"
    contractor = "C", "Contractor"

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'

    id = Column(Integer, primary_key=True)
    name = Column(String(60), nullable=False)
    type = Column(EmployeeType.db_type())

Our Employee class will persist its 'type' field into a new ENUM on the database side, and on the Python side we use exclusively EmployeeType.part_time, EmployeeType.full_time, EmployeeType.contractor as values for the 'type' attribute.
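
Here's a quick end-to-end sketch against an in-memory SQLite database (the URL and session boilerplate here are illustrative):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = sessionmaker(bind=engine)()
session.add(Employee(name='Susan', type=EmployeeType.contractor))
session.commit()

emp = session.query(Employee).filter_by(name='Susan').one()
# 'C' went into the VARCHAR column; the symbol comes back out
assert emp.type is EmployeeType.contractor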

The enum is also ideal for so-called polymorphic-discriminators, where different values indicate the usage of different subclasses of Employee:

class Employee(Base):
    __tablename__ = 'employee'

    id = Column(Integer, primary_key=True)
    name = Column(String(60), nullable=False)
    type = Column(EmployeeType.db_type())
    __mapper_args__ = {'polymorphic_on':type}

class PartTimeEmployee(Employee):
    __mapper_args__ = {'polymorphic_identity':EmployeeType.part_time}

TypeDecorator also takes care of coercing Python values used in expressions into the appropriate SQLAlchemy type, so that the constants are usable in queries:

session.query(Employee).filter_by(type=EmployeeType.contractor).all()

A runnable demo of the enumeration recipe is packed up at decl_enum.py