
theory.pm

theory.so is No More

§

Until last week, I had two newish blogs. This one, theory.pm, was to be my Perl blog. The other one, theory.so, was my database blog. I thought it would be a good idea to have separate blogs for separate audiences, but it turns out I don’t post enough to make much difference. And now, as of last week, I let the theory.so domain expire. Control of the .so domain was turned over to Somalia a few months ago, and domain renewal fees went way up. Since I had so few posts over there (14 since August 2013), I decided it was a good time to just merge it with theory.pm and be done with it.

So my apologies if a bunch of my old posts just showed up in your RSS readers. (You all still use RSS readers, right?) This is a one-time merging of the two blogs, so it should not happen again.

Well…maybe. Now I have a total of 25 posts on theory.pm (since July 2013), which is still pretty paltry. I’m thinking it’s silly to have this thing separate from my original blog, Just a Theory, so I might eventually merge that blog, too. Not sure what domain I’ll use for it. Maybe I’ll go back to justatheory.com. Or maybe I’ll use one of the other domains I registered, like the recently added theory.one. Or maybe theory dot something else.

Not that you care. Good on you for reading this far. I would have stopped before now. You’re a better person than I.

Please Test Pod::Simple 3.29_3

§

I’ve just pushed Pod-Simple 3.29_3 to CPAN. Karl Williamson did a lot of hacking on this release, finally adding support for EBCDIC. But as part of that work, and in coordination with Pod::Simple’s original author, Sean Burke, as well as pod-people, we have switched the default encoding from Latin-1 to CP1252.

On the surface, that might sound like a big change, but in truth, it’s pretty straightforward. CP1252 is effectively a superset of Latin-1, repurposing 30 or so unused control characters from Latin-1. Those characters are pretty common on Windows (the home of the CP family of encodings), especially in pastes from Word. It’s nice to be able to pick them up essentially for free.
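
To see the difference, decode the same bytes both ways. Here’s a quick illustration using Encode (my own example, not Pod::Simple’s code):

use v5.12;
use Encode qw(decode);
binmode STDOUT, ':encoding(UTF-8)';

# Bytes as pasted from Word: 0x93 and 0x94 are curly quotes in CP1252.
my $bytes = "\x93scare quotes\x94";

say decode('cp1252',     $bytes);  # “scare quotes”
say decode('iso-8859-1', $bytes);  # same bytes become the invisible
                                   # C1 controls U+0093 and U+0094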

Still, Karl’s done more than that. He also updated the encoding detection to do a better job of detecting UTF-8. UTF-8 is the real default: Pod::Simple only falls back on CP1252 if there are no obvious UTF-8 byte sequences in your Pod.
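
Pod::Simple’s actual detection logic lives in the module itself, but the idea is roughly this: treat the document as UTF-8 only if it contains at least one well-formed multi-byte UTF-8 sequence. A sketch of my own, not the shipping code:

# Rough sketch of the heuristic only -- not Pod::Simple's actual code.
sub guess_pod_encoding {
    my $octets = shift;
    return $octets =~ m{
        [\xC2-\xDF][\x80-\xBF]        # two-byte sequence
      | [\xE0-\xEF][\x80-\xBF]{2}     # three-byte sequence
      | [\xF0-\xF4][\x80-\xBF]{3}     # four-byte sequence
    }x ? 'UTF-8' : 'cp1252';
}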

Overall these changes should be a great improvement. Better encoding support is always a good idea. But it is a pretty significant change, including a change to the Pod spec. Hence the test release. Please make sure it works well with your code by installing it today:

cpan D/DW/DWHEELER/Pod-Simple-3.29_3.tar.gz
cpanm DWHEELER/Pod-Simple-3.29_3.tar.gz

Oh, and one last thing: If Pod::Simple fails to properly recognize the encoding in your Pod file, you can always use the =encoding command early in your Pod file to make it explicit:

=encoding CP1252

Build Modern Perl RPMs with rpmcpan

§

iovation + Perl = Love

We’ve been using the CentOS Perl RPMs at iovation to run all of our Perl applications. This has been somewhat painful, because the version of Perl, 5.10.1, is quite old — it shipped in August 2009. In fact, it consists mostly of bug fixes against Perl 5.10.0, which shipped in December 2007! Many of the modules provided by CentOS core and EPEL are quite old, as well, and we had built up quite the collection of customized module RPMs managed by a massive spaghetti-coded Jenkins job. When we recently ran into a Unicode issue that would best have been addressed by running a more modern Perl — rather than a hinky workaround — I finally sat down and knocked out a way to get a solid set of Modern Perl and related CPAN RPMs.

I gave it the rather boring name rpmcpan, and now you can use it, too. Turns out, DevOps doesn’t myopically insist on using core RPMs in the name of some abstract idea about stability. Rather, we just need a way to easily deploy our stuff as RPMs. If the same applies to your organization, you can get Modern Perl RPMs, too.

Here’s how we do it. We have a new Jenkins job that runs both nightly and whenever the rpmcpan Git repository updates. It uses the MetaCPAN API to build the latest versions of everything we need. Here’s how to get it to build the latest version of Perl, 5.20.1:

./bin/rpmcpan --version 5.20.1

That will get you a nice, modern Perl RPM, named perl520, completely encapsulated in /usr/local/perl520. Want 5.18 instead? Just change the version:

./bin/rpmcpan --version 5.18.2

That will give you perl518. But that’s not all. You’ll want to build CPAN distributions against that version. Easy: just edit the dists.json file. Its contents are a JSON object where the keys name CPAN distributions (not modules), and the values are objects that customize how the RPMs get built. Most of the time, the objects can be empty:

"Try-Tiny": {},

This results in an RPM named perl520-Try-Tiny (or perl518-Try-Tiny, etc.). Sometimes you might need additional information to customize the RPM spec file generated to build the distribution. For example, since this is Linux, we need to exclude a Win32 dependency in the Encode-Locale distribution:

"Encode-Locale": { "exclude_requires": ["Win32::Console"] },

Other distributions might require additional RPMs or environment variables, like DBD-Pg, which requires the PostgreSQL RPMs:

"DBD-Pg": {
    "build_requires": ["postgresql93-devel", "postgresql93"],
    "environment": { "POSTGRES_HOME": "/usr/pgsql-9.3" }
},
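
Putting those together, a minimal dists.json might look like this (a hypothetical assembly of the entries above; see our actual file for the real thing):

{
    "Try-Tiny": {},
    "Encode-Locale": { "exclude_requires": ["Win32::Console"] },
    "DBD-Pg": {
        "build_requires": ["postgresql93-devel", "postgresql93"],
        "environment": { "POSTGRES_HOME": "/usr/pgsql-9.3" }
    }
}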

See the README for a complete list of customization options. Or just get started with our dists.json file, which so far builds the bare minimum we need for one of our Perl apps. Adding new distributions? Send a pull request! We’ll be doing so as we integrate more of our Perl apps with Modern Perl and leave the sad RPM past behind.

Sqitch on FLOSS Weekly

§

Yours truly was featured in this week’s episode of FLOSS Weekly, talking about Sqitch. I feel pretty good about this interview, despite continually banging on my legs, the table, and the mic. It’s interesting to try to communicate what Sqitch is about purely by talking.

If it’s enough to get you interested in giving it a try, install it and work through one of the tutorials.

Localize Your Perl Apps with this One Weird Trick

§

Nota Bene: This is a republication of a post that originally appeared in the 2013 Perl Advent Calendar.


These days, gettext is far and away the most widely-used localization (l10n) and internationalization (i18n) library for open-source software. So far, it has not been widely used in the Perl community, even though it’s the most flexible, capable, and easy-to-use solution, thanks to Locale::TextDomain.1 How easy? Let’s get started!

Module Internationale

First, just use Locale::TextDomain. Say you’re creating an awesome new module, Awesome::Module. Its CPAN distribution will be named Awesome-Module, so that’s the “domain” to use for its localizations. Just let Locale::TextDomain know:

Establish Dominion
use Locale::TextDomain 'Awesome-Module';

Locale::TextDomain will later use this string to look for the appropriate translation catalogs. But don’t worry about that just yet. Instead, start using it to translate user-visible strings in your code. With the assistance of Locale::TextDomain’s comprehensive documentation, you’ll find it second nature to internationalize your modules in no time. For example, simple strings are denoted with __:

Trekkie Greetings
say __ 'Greetings puny human!';

If you need to specify variables, use __x:

Animal Thanks
say __x(
   'Thank you {sir}, may I have another?',
   sir => $username,
);

Need to manage plurals? Use __n:

Obligatory Python Reference
say __n(
    'I will not buy this record, it is scratched.',
    'I will not buy these records, they are scratched.',
    $num_records
);

If $num_records is 1, the first phrase will be used. Otherwise the second.

Sometimes you gotta do both, mix variables and plurals. __nx has got you covered there:

Deeper Understanding
say __nx(
    'One item has been grokked.',
    '{count} items have been grokked.',
    $num_items,
    count => $num_items,
);

Congratulations! Your module is now internationalized. Wasn’t that easy? Make a habit of using these functions in all the modules in your distribution, always with the Awesome-Module domain, and you’ll be set.

Encode da Code

Locale::TextDomain is great, but it dates from a time when Perl character encoding was, shall we say, sub-optimal. It therefore took it upon itself to try to do the right thing, which is to detect the locale from the runtime environment and automatically encode as appropriate. Which might work okay if all you ever do is print localized messages — and never anything else.

If, on the other hand, you will be manipulating localized strings in your code, or emitting unlocalized text (such as that provided by the user or read from a database), then it’s probably best to coerce Locale::TextDomain to return Perl strings, rather than encoded bytes. There’s no formal interface for this in Locale::TextDomain, so we have to hack it a bit: set the $OUTPUT_CHARSET environment variable to “UTF-8” and then bind a filter. Don’t know what that means? Me neither. Just put this code somewhere in your distribution where it will always run early, before anything gets localized:

Control Decode
use Locale::Messages qw(bind_textdomain_filter);
use Encode;
BEGIN {
    $ENV{OUTPUT_CHARSET} = 'UTF-8';
    bind_textdomain_filter 'Awesome-Module' => \&Encode::decode_utf8;
}

You only have to do this once per domain. So even if you use Locale::TextDomain with the Awesome-Module domain in a bunch of your modules, the presence of this code in a single early-loading module ensures that strings will always be returned as Perl strings by the localization functions.

Environmental Safety

So what about output? There’s one more bit of boilerplate you’ll need to throw in. Or rather, put this into the main package that uses your modules to begin with, such as the command-line script the user invokes to run an application.

First, on the shebang line, follow Tom Christiansen’s advice and put -CAS in it (or set the $PERL_UNICODE environment variable to AS). Then use the POSIX setlocale function to set the appropriate locale for the runtime environment. How? Like this:

Localization Boilerplate
#!/usr/bin/perl -CAS

use v5.12;
use warnings;
use utf8;
use POSIX qw(setlocale);
BEGIN {
    if ($^O eq 'MSWin32') {
        require Win32::Locale;
        setlocale POSIX::LC_ALL, Win32::Locale::get_locale();
    } else {
        setlocale POSIX::LC_ALL, '';
    }
}

use Awesome::Module;

Locale::TextDomain will notice the locale and select the appropriate translation catalog at runtime.
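
For example, once a French catalog has been installed, selecting it is just a matter of the environment. (A hypothetical run; the app name and the French are invented for illustration.)

Parlez-Vous Français?
% LANGUAGE=fr ./bin/awesome-app
Salutations chétif humain !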

Is that All There Is?

Now what? Well, you could do nothing. Ship your code and those internationalized phrases will be handled just like any other string in your code.

But what’s the point of that? The real goal is to get these things translated. There are two parts to that process:

  1. Parsing the internationalized strings from your modules and creating language-specific translation catalogs, or “PO files”, for translators to edit (see the example entry after this list). These catalogs should be maintained in your source code repository.

  2. Compiling the PO files into binary files, or “MO files”, and distributing them with your modules. These files should not be maintained in your source code repository.
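
For a sense of what translators work with, a PO entry simply pairs each extracted string with its translation. A hypothetical example (the French is mine):

Anatomy of a PO Entry
#: lib/Awesome/Module.pm:42
msgid "Greetings puny human!"
msgstr "Salutations chétif humain !"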

Until a year ago, there was no Perl-native way to manage these processes. Locale::TextDomain ships with a sample Makefile demonstrating the appropriate use of the GNU gettext command-line tools, but that seemed a steep price for a Perl hacker to pay.

A better fit for the Perl hacker’s brain, I thought, is Dist::Zilla. So I wrote Dist::Zilla::LocaleTextDomain to encapsulate the use of the gettext utilities. Here’s how it works.

First, configure Dist::Zilla to compile localization catalogs for distribution by adding these lines to your dist.ini file:

LocaleTextDomain in your dist.ini
[ShareDir]
[LocaleTextDomain]

There are configuration attributes for the LocaleTextDomain plugin, such as where to find the PO files and where to put the compiled MO files. In case you didn’t use your distribution name as your localization domain in your modules, for example:

Go with a Java-y Domain
use Locale::TextDomain 'com.example.perl-libawesome';

Then you’d set the textdomain attribute so that the LocaleTextDomain plugin can find the translation catalogs:

The Domain in DNS
[LocaleTextDomain]
textdomain = com.example.perl-libawesome

Check out the configuration docs for details on all available attributes.

At this point, the plugin doesn’t do much, because there are no translation catalogs yet. You might see this line from dzil build, though:

Nothing to See Here
[LocaleTextDomain] Skipping language compilation: directory po does not exist

Let’s give it something to do!

Locale Motion

To add a French translation file, use the msg-init command2:

French Nouveau
% dzil msg-init fr
Created po/fr.po.

The msg-init command uses the GNU gettext utilities to scan your Perl source code and initialize the French catalog, po/fr.po. This file is now ready for translation! Commit it into your source code repository so your agile-minded French-speaking friends can find it. Use msg-init to create as many language files as you like:

Multilingualism
% dzil msg-init de ja.JIS en_US.UTF-8 en_UK.UTF-8
Created po/de.po.
Created po/ja.po.
Created po/en_US.po.
Created po/en_UK.po.

Each language has its own PO file. You can even have region-specific catalogs, such as the en_US and en_UK variants here. Each time a catalog is updated, the changes should be committed to the repository, like code. This allows the latest translations to always be available for compilation and distribution. The output from dzil build now looks something like:

Catalogs Plugged
po/fr.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/ja.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/en_US.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/en_UK.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.

The resulting MO files will be in the shared directory of your distribution:

She Want Mo Mo Mo
% find Awesome-Module-0.01/share -type f
Awesome-Module-0.01/share/LocaleData/de/LC_MESSAGES/Awesome-Module.mo
Awesome-Module-0.01/share/LocaleData/en_UK/LC_MESSAGES/Awesome-Module.mo
Awesome-Module-0.01/share/LocaleData/en_US/LC_MESSAGES/Awesome-Module.mo
Awesome-Module-0.01/share/LocaleData/ja/LC_MESSAGES/Awesome-Module.mo

From here Module::Build or ExtUtils::MakeMaker will install these MO files with the rest of your distribution, right where Locale::TextDomain can find them at runtime. The PO files, on the other hand, won’t be used at all, so you might as well exclude them from the distribution. Add this line to your MANIFEST.SKIP to prevent the po directory and its contents from being included in the distribution:

No PO
^po/

Mergers and Acquisitions

Of course no code base is static. In all likelihood, you’ll change your code — and end up adding, editing, and removing localizable strings as a result. You’ll need to periodically merge these changes into all of your translation catalogs so that your translators can make the corresponding updates. That’s what the msg-merge command is for:

Merge Management
% dzil msg-merge
extracting gettext strings
Merging gettext strings into po/de.po
Merging gettext strings into po/en_UK.po
Merging gettext strings into po/en_US.po
Merging gettext strings into po/ja.po

This command re-scans your Perl code and updates all of the language files. Old messages will be commented-out and new ones added. Commit the changes and give your translators a holler so they can keep the awesome going.
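
“Commented-out” looks like this in the merged catalog: gettext prefixes obsolete entries with #~, preserving the old translation in case the string ever comes back. A hypothetical entry:

Obsolete, Not Forgotten
#~ msgid "I will not buy this record, it is scratched."
#~ msgstr "Je ne vais pas acheter ce disque, il est rayé."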

Template Scan

The msg-init and msg-merge commands don’t actually scan your source code. Sort of lied about that. Sorry. What they actually do is merge a template file into the appropriate catalog files. If this template file does not already exist, a temporary one will be created and discarded when the initialization or merging is done.

But projects commonly maintain a permanent template file, stored in the source code repository along with the translation catalogs. For this purpose, we have the msg-scan command. Use it to create or update the template, or POT file:

Scanners
% dzil msg-scan
extracting gettext strings into po/Awesome-Module.pot

From here on in, the resulting .pot file will be used by msg-init and msg-merge instead of scanning your code all over again. But keep in mind that, if you do maintain a POT file, future merges will be a two-step process: First run msg-scan to update the POT file, then msg-merge to merge its changes into the PO files:

% dzil msg-scan
extracting gettext strings into po/Awesome-Module.pot
% dzil msg-merge
Merging gettext strings into po/de.po
Merging gettext strings into po/en_UK.po
Merging gettext strings into po/en_US.po
Merging gettext strings into po/ja.po

Lost in Translation

One more thing, a note for translators. They can, of course, also use msg-scan and msg-merge to update the catalogs they’re working on. But how do they test their translations? Easy: use the msg-compile command to compile a single catalog:

Translation Compilation
% dzil msg-compile po/fr.po
[LocaleTextDomain] po/fr.po: 195 translated messages.

The resulting compiled catalog will be saved to the LocaleData subdirectory of the current directory, so it’s easily available to your app for testing. Just be sure to tell Perl to include the current directory in the search path, and set the $LANGUAGE environment variable for your language. For example, here’s how I test the Sqitch French catalog:

Translation Compilation
% dzil msg-compile po/fr.po
[LocaleTextDomain] po/fr.po: 148 translated messages, 36 fuzzy translations, 27 untranslated messages.
% LANGUAGE=fr perl -Ilib -CAS -I. bin/sqitch foo
"foo" n'est pas une commande valide

Just be sure to delete the LocaleData directory when you’re done — or at least don’t commit it to the repository.

TL;DR

This may seem like a lot of steps, and it is. But once you have the basics in place — configuring the Dist::Zilla::LocaleTextDomain plugin, setting up the “textdomain filter”, and setting the locale in the application — there are just a few habits to get into:

  • Use the functions __, __x, __n, and __nx to internationalize user-visible strings
  • Run msg-scan and msg-merge to keep the catalogs up-to-date
  • Keep your translators in the loop.

The Dist::Zilla::LocaleTextDomain plugin will do the rest.


  1. What about Locale::Maketext, you ask? It has not, alas, withstood the test of time. For details, see Nikolai Prokoschenko’s epic 2009 polemic, “On the state of i18n in Perl.” See also Steffen Winkler’s presentation, Internationalisierungs-Framework auswählen (and the English translation by Aristotle Pagaltzis), from German Perl Workshop 2010.

  2. The msg-init command — like all of the dzil msg-* commands — uses the GNU gettext utilities under the hood. You’ll need a reasonably modern version in your path, or else it won’t work.

Sqitch Goes Vertical

§

I released Sqitch v0.996 today. Despite the minor version increase, this is a pretty big release. I’m busy knocking out all the stuff I want to get done for 1.0, but the version space is running out, so it’s just a minor version jump from v0.995 to v0.996. But a lot changed. A couple of the biggies:

Goodbye Mouse and Moose, Hello Moo

If you’re not a Perl programmer, you probably aren’t familiar with Moose or its derivatives Mouse and Moo. Briefly, it’s an object system. Great interface and features, but freaking huge—and slow. Mouse is a lighter version, and when we (mostly) switched to it last year, it yielded a 20-30% speed improvement.

Still wasn’t great, though. So on a day off recently, I switched to Moo, which implements most of Moose but without a lot of the baggage. At first, there wasn’t much difference in performance, but as I profiled it (Devel::NYTProf is indispensable for profiling Perl apps, BTW), I was able to root out all traces of Moose and Mouse, including in CPAN modules Sqitch depends on. The result is around a 40% speedup over what we had before. Honestly, it feels like a new app, it’s so fast. I’m really happy with how it turned out, and to have shed some of the baggage from the code base.
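
If you haven’t used Devel::NYTProf, the workflow is pleasantly simple: profile a run, then generate and open the HTML report. (The sqitch invocation here is just an example.)

perl -d:NYTProf bin/sqitch status
nytprofhtml --open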

The downside is that package maintainers will need to do some work to get the new dependencies built. Have a look at the RPM spec changes I made to get our internal Sqitch RPMs to build v0.996.

MySQL Password Handling

The handling of MySQL passwords has also been improved. Sqitch now uses the $MYSQL_PWD environment variable if a password is provided in a target. This should simplify authentication when running MySQL change scripts through the mysql command-line client.
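
The idea in a nutshell, as a sketch of my own rather than Sqitch’s actual code: pass the password through the environment so it never shows up on the command line or in the process table.

# Hypothetical fragment: $password and $sql come from the target config.
local $ENV{MYSQL_PWD} = $password if defined $password;
system 'mysql', '-u', 'sqitch', '-D', 'flipr', '-e', $sql;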

Furthermore, if MySQL::Config is installed, Sqitch will look for passwords in the client and mysql sections of your MySQL configuration files (~/.my.cnf, /etc/my.cnf). This should already happen automatically when executing scripts, but Sqitch now tries to replicate that behavior when connecting to the database via DBI.

Spotting the $MYSQL_PWD commit, Ștefan Suciu updated the Firebird engine to use the $ISC_PASSWORD environment variable when running scripts. Awesome.

Vertically Integrated

And finally, another big change: I added support for Vertica, a very nice commercial column-store database that features partitioning and sharding, among other OLAP-style functionality. It was originally forked from PostgreSQL, so it was fairly straightforward to port, though I did have to borrow a bit from the Oracle and SQLite engines, too. This port was essential for work, as we’re starting to use Vertica more and more, and need ways to manage changes.

If you’re using Vertica, peruse the tutorial to get a feel for what it’s all about. If you want to install it, you can get it from CPAN:

cpan install App::Sqitch DBD::ODBC

Or, if you’re on Homebrew:

brew tap theory/sqitch
brew install sqitch_vertica

Be warned that there’s a minor bug in v0.996, though. Apply this diff to fix it:

@@ -16,7 +16,7 @@ our $VERSION = '0.996';

 sub key    { 'vertica' }
 sub name   { 'Vertica' }
-sub driver { 'DBD::Pg 2.0' }
+sub driver { 'DBD::ODBC 1.43' }
 sub default_client { 'vsql' }

 has '+destination' => (

That fix will be in the next release, of course, as will support for Vertica 6.

What Next?

I need to focus on some other work stuff for a few weeks, but then I expect to come back to Sqitch again. I’d like to get 1.0 shipped before the end of the year. To that end, next up I will be rationalizing configuration hierarchies to make engine selection and deploy-time configuration more sensible. I hope to get that done by early October.

Managing Sqitch with Make

§

chromatic:

This saves me a few dozen keystrokes and a few seconds every time I make a database change. If that sounds trivial to you, good. A few keystrokes and a few seconds are trivial. My brainpower isn’t trivial. Those keystrokes and seconds mean the difference between staying in the zone and fumbling around trying to remember commands I don’t use all day every day. They save me minutes every time I use them, if you count the friction of switching between “How do I do this in Sqitch again? What’s the directory layout here?” and “What was I really working on?”
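
The gist of the approach is to wrap routine Sqitch invocations in make targets. Something like this hypothetical sketch (chromatic’s actual Makefile is in his post):

SQITCH = sqitch

.PHONY: add deploy revert

# Recipe lines must be indented with tabs.
add:
	$(SQITCH) add $(NAME) -n '$(NOTE)'

deploy:
	$(SQITCH) deploy

revert:
	$(SQITCH) revert --to @HEAD^

Then make add NAME=widgets NOTE='Adds widgets.' does the remembering for you.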

Nice application of a Makefile to eliminate boilerplate. A couple of comments and questions, though:

  • As of Sqitch v0.990, you can pass the --open-editor option to the add command to have the new files opened in your editor.

  • If you want to add a pgTAP test with a new change, see this post.

  • What is the call to sqitch status for? Since its output just goes to /dev/null, I don’t understand the point.

  • Also as of v0.990, you can specify Sqitch targets. The -d, -u, and other options then override values in the target URI.

  • I really want to get Sqitch to better understand and work with VCSs. An example would be to have it automatically git add files created by sqitch add. Another might be a Git config setting pointing to the Sqitch config file. Alas, I don’t know when I will have the tuits to work on that.

Lots of room for growth and improvement in Sqitch going forward. Your post provides more food for thought.

Templating Tests with Sqitch

§

Back in September, I described how to create custom deploy, revert, and verify scripts for various types of Sqitch changes, such as adding a new table. Which is cool and all, but what I’ve found while developing databases at work is that I nearly always want to create a test script with the same name as a newly-added change.

So for the recent v0.990 release, the add command gained the ability to generate arbitrary script files from templates. To get it to work, we just have to create template files. Templates can go into ~/.sqitch/templates (for personal use) or in $(sqitch --etc-path)/templates (for use by everyone on a system). The latter is where templates are installed by default. Here’s what it looks like:

List Default Templates
> ls $(sqitch --etc-path)/templates
deploy  revert  verify
> ls $(sqitch --etc-path)/templates/deploy
firebird.tmpl  mysql.tmpl  oracle.tmpl  pg.tmpl  sqlite.tmpl

Each directory defines the type of script and the name of the directory in which it will be created. The contents are default templates, one for each engine.

To create a default test template, all we have to do is create a template for our preferred engine in a directory named test. So I created ~/.sqitch/templates/test/pg.tmpl. Here it is:

Default pgTAP template
SET client_min_messages TO warning;
CREATE EXTENSION IF NOT EXISTS pgtap;
RESET client_min_messages;

BEGIN;
SELECT no_plan();
-- SELECT plan(1);

SELECT pass('Test [% change %]!');

SELECT finish();
ROLLBACK;

This is my standard boilerplate for tests, more or less. It just loads pgTAP, sets the plan, runs the tests, finishes and rolls back. See this template in action:

Add a change.
> sqitch add whatever -n 'Adds whatever.'
Created deploy/whatever.sql
Created revert/whatever.sql
Created test/whatever.sql
Created verify/whatever.sql
Added "whatever" to sqitch.plan

Cool, it added the test script. Here’s what it looks like:

Generated test script
SET client_min_messages TO warning;
CREATE EXTENSION IF NOT EXISTS pgtap;
RESET client_min_messages;

BEGIN;
SELECT no_plan();
-- SELECT plan(1);

SELECT pass('Test whatever!');

SELECT finish();
ROLLBACK;

Note that it replaced the change variable in the call to pass(). All ready to start writing tests! Nice, right? If we don’t want the test script created – for example when adding a column to a table for which a test already exists – we use the --without option to omit it:

Add change without test
> sqitch add add_timestamp_column --without test -n 'Adds whatever.'
Created deploy/add_timestamp_column.sql
Created revert/add_timestamp_column.sql
Created verify/add_timestamp_column.sql
Added "add_timestamp_column" to sqitch.plan

Naturally you’ll want to update the existing test to validate the new column.

In the previous templating post, we added custom scripts for CREATE TABLE changes; now we can add a test template, too. This one takes advantage of the advanced features of Template Toolkit. We name it ~/.sqitch/templates/test/createtable.tmpl to complement the deploy, revert, and verify scripts created previously:

CREATE TABLE test template
-- Test [% change %]
SET client_min_messages TO warning;
CREATE EXTENSION IF NOT EXISTS pgtap;
RESET client_min_messages;

BEGIN;
SELECT no_plan();
-- SELECT plan(1);

SET search_path TO [% IF schema %][% schema %],[% END %]public;

SELECT has_table('[% table or change %]');
SELECT has_pk( '[% table or change %]' );

[% FOREACH col IN column -%]
SELECT has_column(        '[% table or change %]', '[% col %]' );
SELECT col_type_is(       '[% table or change %]', '[% col %]', '[% type.item( loop.index ) or 'text' %]' );
SELECT col_not_null(      '[% table or change %]', '[% col %]' );
SELECT col_hasnt_default( '[% table or change %]', '[% col %]' );

[% END %]
SELECT finish();
ROLLBACK;

As before, we tell the add command to use the createtable templates:

Create table with typed columns
> sqitch add corp_widgets --template createtable \
  -s schema=corp -s table=widgets \
  -s column=id -s type=SERIAL \
  -s column=name -s type=TEXT \
  -s column=quantity -s type=INTEGER \
  -n 'Add corp.widgets table.'

This yields a very nice test script to get you going:

Generated Table Test
-- Test corp_widgets
SET client_min_messages TO warning;
CREATE EXTENSION IF NOT EXISTS pgtap;
RESET client_min_messages;

BEGIN;
SELECT no_plan();
-- SELECT plan(1);

SET search_path TO corp,public;

SELECT has_table('widgets');
SELECT has_pk( 'widgets' );

SELECT has_column(        'widgets', 'id' );
SELECT col_type_is(       'widgets', 'id', 'SERIAL' );
SELECT col_not_null(      'widgets', 'id' );
SELECT col_hasnt_default( 'widgets', 'id' );

SELECT has_column(        'widgets', 'name' );
SELECT col_type_is(       'widgets', 'name', 'TEXT' );
SELECT col_not_null(      'widgets', 'name' );
SELECT col_hasnt_default( 'widgets', 'name' );

SELECT has_column(        'widgets', 'quantity' );
SELECT col_type_is(       'widgets', 'quantity', 'INTEGER' );
SELECT col_not_null(      'widgets', 'quantity' );
SELECT col_hasnt_default( 'widgets', 'quantity' );


SELECT finish();
ROLLBACK;

I don’t know about you, but I’ll be using this functionality a lot.

Brent Simmons is Not Wrong

§

Brent Simmons:

Database people are already gasping for air, because they know what’s coming. Instead of creating a separate table for attachment metadata, I created an attachments column in the notes table and just encoded the attachment metadata there.

On iOS it uses Core Data’s built-in object archiving feature. On the server it’s stored as JSON.

This is wrong, surely; it’s not how to do this. Except, in this case, it is. Incomplete object graphs are wrong; inefficient and slower syncing with more complex server-side code is also wrong.

This is less wrong than the alternatives.

Some database folks might be gasping for air, but not those of us steeped in relational theory. In Database in Depth, relational theorist C.J. Date poses a question:

In Chapter 1, I said that 1NF meant that every tuple in every relation contains just a single value (of the appropriate type, of course) in every attribute position—and it’s usual to add that those “single values” are supposed to be atomic. But this latter requirement raises the obvious question: what does it mean for data to be atomic?

Well, on page 6 of the book mentioned earlier, Codd defines atomic data as data that “cannot be decomposed into smaller pieces by the DBMS (excluding certain special functions).” But even if we ignore that parenthetical exclusion, this definition is a trifle puzzling, and not very precise. For example, what about character strings? Are character strings atomic? Every product I know provides several operators on such strings—LIKE, SUBSTR (substring), “||” (concatenate), and so on—that clearly rely on the fact that character strings in general can be decomposed by the DBMS. So are those strings atomic? What do you think?

The whole book is worth a read, especially the first four chapters, as it does an excellent job of dispelling the myth that complex data types are verboten in a properly normalized relational model. Another gem:

But I could have used any number of different examples to make my point: I could have shown attributes (and therefore domains) that contained arrays; or bags; or lists; or photographs; or audio or video recordings; or X rays; or fingerprints; or XML documents; or any other kind of value, “atomic” or “nonatomic,” that you might care to think of. Attributes, and therefore domains, can contain anything (any values, that is). All of which goes a long way, incidentally, toward explaining why a true “object/relational” system would be nothing more nor less than a true relational system—which is to say, a system that supports the relational model, with all that such support entails.

This is exactly why PostgreSQL offers array, XML, and JSON data types: because sometimes, they are exactly what you need to properly model your system.
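
For instance, nothing stops a notes table from carrying its attachment metadata along for the ride. A hypothetical PostgreSQL schema (mine, not Brent’s):

CREATE TABLE notes (
    id          SERIAL PRIMARY KEY,
    body        TEXT        NOT NULL,
    attachments JSON        NOT NULL DEFAULT '[]',
    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);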

So you go, Brent, you’re doing it exactly right.

Sqitch on Target

§

At the end of the day last week, I released Sqitch v0.990. This was a pretty big release, with lots of changes. The most awesome addition, in my opinion, is Named Deployment targets.

In previous versions of Sqitch, one could set default values for the database to deploy to, but needed to use the --db-* options to deploy to another database. This was fine for development: just set the default on localhost and go. But when it came time to deploy to other servers for testing, QA, or production, it was a bit of a PITA. At work, I ended up writing deployment docs that defined a slew of environment variables, and our operations team needed to adjust those variables to deploy to various servers. It was ugly, frankly.

I thought it’d be better to have named deployment targets, so instead of changing a bunch of environment variables in order to set a bunch of options, we could just name a target and go. I borrowed the idea from Git remotes, and started a database URI spec (mentioned previously) to simplify things a bit. Here’s how it works. Say you have a PostgreSQL Sqitch project called “Flipr”. While doing development, you’ll want to have a local database to deploy to. There is also a QA database and a production database. Use the target command to set them up:

Sqitch Targeting
sqitch target add dev db:pg:flipr
sqitch target add qa db:pg://sqitch@qa.example.com/flipr
sqitch target add prod db:pg://sqitch@db.example.com/flipr

Like Git remotes, we just have names and URIs. To deploy to a database, just name it:

Deploy to Dev
sqitch deploy dev

Want to deploy to QA? Name it:

Deploy to QA
sqitch deploy qa

This works with any of the commands that connect to a database, including revert and status:

Targeted revert, Status
sqitch revert --to @HEAD^^ dev
sqitch status prod

The great thing about this feature is that the configuration is all stored in the project Sqitch configuration file. That means you can commit the connection URIs for all likely targets directly to the project repository. If they change, just change them in the config, commit, and push.

Targets don’t always have to be configured in advance, of course. The names essentially stand in for the URIs, so you can connect to an unnamed target just by using a URI:

URI Targeting
sqitch log db:postgres://db1.example.net/flipr_export

Of course there are still defaults specific to each engine. I generally like to set the “dev” target as the default deploy target, like so:

Configure Default Target
sqitch config core.pg.target dev

This sets the “dev” target as the default for the PostgreSQL engine. So now I can do stuff with the “dev” target without mentioning it at all:

Rebase Dev Target
sqitch rebase --onto @HEAD^4

Named targets may also have a couple of other attributes associated with them (see the config sketch after this list):

  • client: The command-line client to use for a target.
  • registry: The name of the Sqitch registry schema or database, which defaults to, simply, sqitch.
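
In the project configuration file, a fully specified target might look something like this (a hypothetical sqitch.conf snippet, using the attributes above):

Target, Fully Dressed
[target "prod"]
    uri      = db:pg://sqitch@db.example.com/flipr
    registry = flipr_sqitch
    client   = /usr/pgsql-9.3/bin/psql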

Now that I’ve started using it, I can think of other things I’d like to add to targets in the future.

Pretty cool stuff ahead, IMO. I’m grateful to work for letting me hack on Sqitch.