Using Power Architect and Liquibase Efficiently

In the first part of this blog series, I motivated the combined use of SQL Power Architect and Liquibase. This part takes a closer look at how to actually work with our tool-chain. My colleague Thomas Kellerer, who is also involved in the Power Architect development, provides some great tips and tricks.

Before we start, let me lay out a few preliminaries of the example. At mgm, we only rarely start with a blank slate when it comes to data models. In the majority of cases, we start with some kind of existing data model or database and adapt it. So in our example we assume that we already have either an existing database or a data model (a Power Architect file) to start from.

Getting the Physical Data Model into Power Architect

When starting with an existing database, i.e. reverse engineering, we simply add the connection information to Power Architect, which then automatically retrieves the metadata about tables, relations, and key constraints for us. This approach will get you a working model. Be aware, however, that Power Architect is not yet able to reverse engineer more sophisticated concepts such as check constraints, non-standard indexes (e.g. Oracle’s bitmap indexes), triggers, views, or stored procedures. You have to extract this information manually from your existing database and add it to Liquibase separately as custom SQL statements. We will deal with this issue in more depth later.
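
To give an impression of what this looks like, here is a minimal sketch of such a custom SQL Changeset. All table, constraint, and index names are invented for illustration, and the dbms attribute restricts the bitmap index to Oracle:

<changeSet author="jdoe" id="100">
  <!-- A check constraint that reverse engineering did not pick up;
       all names are placeholders -->
  <sql>
    ALTER TABLE film
      ADD CONSTRAINT chk_film_rating
      CHECK (rating IN ('G', 'PG', 'PG-13', 'R'))
  </sql>
</changeSet>

<changeSet author="jdoe" id="101" dbms="oracle">
  <!-- An Oracle-specific bitmap index, also entered as custom SQL -->
  <sql>
    CREATE BITMAP INDEX idx_film_rating ON film (rating)
  </sql>
</changeSet>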

Tip: Before you start editing a file, save the original state as a copy. This way you can diff the original and modified versions later on, for example to check that all the intended changes have been made. Even if you use a versioning tool like Subversion, where you can check out any historical state, keeping a “diff” copy at hand while editing can save time and effort.

Export Options in Power Architect

Now we edit the model, for example we add new tables and relations. Let’s take the following film database and add awards information:

The existing Data Model we use as a starting point in our project.

Modified data model (new tables film_awards and awards to the right).

This is only a very small example. In a real project there are usually many more tables, and we can add or delete tables, columns, and relationships, rename them, add or delete key constraints or indexes, or even drag and drop tables or columns from another model opened in parallel.

Once we have adapted the existing data model (or created a new one), we add the new data model version as an .architect file to Subversion. The next step is to export the physical data model, or the changes to it, so we can put it into a Liquibase Changeset and/or create documentation.

We usually export the complete model to a Liquibase Changelog only once and later on export only the changes as Changesets. (The Changeset export feature was added to Power Architect by my colleague Thomas Kellerer.) The Changesets are manually appended to the project’s Liquibase Changelog. This once again saves time and effort.

To export the whole model as a Changelog, we choose “Tools – Forward Engineer…” in the Power Architect menu. This opens a dialog that offers not only to create SQL scripts directly but also to create a Liquibase Changelog, with various options to choose from. We will deal later with the question of which option to choose when and which values to enter.

Export the complete model to a Liquibase XML Changelog using “Tools – Forward Engineer…”

To create a working initial Changelog, this full export has to be supplemented with the Liquibase start and end tags, which we usually accomplish by pasting the export into a previously prepared .xml file between the respective tags:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
  xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9
                      http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">

  PASTE THE CHANGELOG INFORMATION EXPORTED FROM POWER ARCHITECT HERE

</databaseChangeLog>

Once the initial Changelog has been set up, any future changes are exported as deltas, i.e. only the changes that have been made, and appended to the existing Changelog. To identify the changes, we choose “Tools – Compare Data Models” in the Power Architect menu and compare against the initial unmodified architect file that we saved as a working copy, as mentioned before (or, if we forgot this step, we check out the initial version once more from Subversion).

The pop-up dialog allows us to choose what we want to compare and how the output of the comparison shall be presented.

Use Compare Data Models to produce changes as a Liquibase Changeset.
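
For our film database example, the compare export might then produce a delta along these lines (a sketch only: the real column lists and types depend on the model, and the id and author values are placeholders):

<changeSet author="jdoe" id="43">
  <createTable tableName="awards">
    <column name="award_id" type="INTEGER">
      <constraints primaryKey="true" nullable="false"/>
    </column>
    <column name="award_name" type="VARCHAR(100)">
      <constraints nullable="false"/>
    </column>
  </createTable>
</changeSet>

<changeSet author="jdoe" id="44">
  <createTable tableName="film_awards">
    <column name="film_id" type="INTEGER">
      <constraints nullable="false"/>
    </column>
    <column name="award_id" type="INTEGER">
      <constraints nullable="false"/>
    </column>
  </createTable>
  <!-- Link the join table to the new awards table -->
  <addForeignKeyConstraint baseTableName="film_awards"
                           baseColumnNames="award_id"
                           constraintName="fk_film_awards_awards"
                           referencedTableName="awards"
                           referencedColumnNames="award_id"/>
</changeSet>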

As you can see in the screenshots above, both the Export and the Compare dialogs offer a variety of formatting options for the resulting Liquibase Changesets. Let us now examine which choices we have found helpful and in which situation each applies.

Choosing the Optimal Liquibase Changeset Size

A Liquibase Changelog is divided into separate Changesets that identify sets of changes. As indicated in the screenshots above, Power Architect offers an option to “Generate separate changesets” since version 1.0.0. But why not keep all changes of an edit session in one big Changeset? Wouldn’t that be easier? The answer is that bigger is not always better; the best size for your Changesets depends on the kind of DBMS you are using.

When using a DBMS that does not support transactional DDL (e.g. Oracle or MySQL), each DDL statement should be put into its own Changeset. If a Changeset fails, this makes fixing the cause of the failure a lot easier.

Consider the following Changeset:

<changeSet author="1.0" id="42">
  <!-- Remove the foreign keys that depend on the tables to be dropped -->
  <dropForeignKeyConstraint baseTableName="TABLE_1"
                            constraintName="FK_TABLE_1_2"/>
  <dropForeignKeyConstraint baseTableName="TABLE_2"
                            constraintName="FK_TABLE_2_3"/>

  <!-- Drop the obsolete tables -->
  <dropTable tableName="TABLE_1"/>
  <dropTable tableName="TABLE_2"/>

  <!-- Add a mandatory column, enforced by a check constraint -->
  <addColumn tableName="OLD_TABLE">
    <column name="NEW_COLUMN" type="INTEGER"
             defaultValueNumeric="0">
      <constraints nullable="false"/>
    </column>
  </addColumn>
  <sql>
    ALTER TABLE OLD_TABLE
      ADD CONSTRAINT CHECK_NEW_COLUMN
      CHECK (NEW_COLUMN IN (0,1))
  </sql>

  <!-- Rework the primary key of TABLE_4 -->
  <dropPrimaryKey tableName="TABLE_4"
                  constraintName="PK_TABLE_4Y"/>
  <dropColumn tableName="TABLE_4" columnName="OLD_COL_4"/>
  <addPrimaryKey tableName="TABLE_4"
                 constraintName="PK_TABLE_4"
                 columnNames="COL_1,COL_2,COL_3"/>
</changeSet>

The change to the primary key might fail if there is data in the database that prevents it. By the time this step fails, the other DDL statements have already been applied and, without transactional DDL, cannot be rolled back. It also does not help to fix the data and then re-run the Changelog: the Changeset will now fail at its very first statement, because that one has already been executed.

If each of the DDL statements is put into its own Changeset, Liquibase knows that the previous Changesets have already been applied and will not run them again. Fixing our problem is then only a matter of fixing the data in TABLE_4 and re-running the whole Changelog.
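
Split up in this way, the end of the example above would look roughly like the following sketch (the id values are placeholders continuing the sequence):

<changeSet author="1.0" id="47">
  <dropPrimaryKey tableName="TABLE_4"
                  constraintName="PK_TABLE_4Y"/>
</changeSet>

<changeSet author="1.0" id="48">
  <dropColumn tableName="TABLE_4" columnName="OLD_COL_4"/>
</changeSet>

<changeSet author="1.0" id="49">
  <!-- If this step fails, only this Changeset has to succeed on the
       next run; the previous ones are skipped automatically -->
  <addPrimaryKey tableName="TABLE_4"
                 constraintName="PK_TABLE_4"
                 columnNames="COL_1,COL_2,COL_3"/>
</changeSet>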

On the other hand, if you are using a DBMS that supports transactional DDL (e.g. PostgreSQL or DB2), DDL statements that belong together should be put into one single Changeset, so that they either succeed or fail together. On such a DBMS, the big Changeset shown above would be perfectly fine: if the primary key change failed, the whole Changeset would be rolled back and could simply be re-run after fixing the data.

Identifying Liquibase Changesets with ID/author tags

Liquibase uses the attributes id and author (in combination with the file name) to uniquely identify every Changeset. The Export and Compare dialogs offer the option to add author and id automatically, but we have often found that it is not initially clear what should be entered here. There are no musts, but we have found it highly advisable to set down some kind of convention to make everyone’s life easier.

We find that the simplest way of using the id is to use incrementing numbers that keep a sequence inside the Changelog. This is also what the Power Architect export supports: just set “Start at” to the last id number plus one and tick “Generate ID attribute” to let the export handle the id assignment for you. Quite a time-saving option, isn’t it? (Liquibase also allows non-numeric ids, but there is currently no way to generate them automatically.)

As different people will adapt the data model over time, it is good practice to use an author tag that helps to understand when a change was added, and by which person or for which version. Various practical conventions exist: some projects use the Maven project version (from the pom) as the author tag, while others use the developer’s username. The most important thing is not what is chosen but that it is used consistently.
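
For illustration, here is what the two conventions look like in the Changelog (the names and version are invented, and the Changeset bodies are omitted):

<!-- Convention 1: the Maven project version as author -->
<changeSet author="1.2.0" id="57"> ... </changeSet>

<!-- Convention 2: the developer's username as author -->
<changeSet author="tkellerer" id="58"> ... </changeSet>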

Generating Documentation

A last word on exporting from Power Architect: using a custom XSLT template, we use the HTML report generator to create wiki documentation on the fly from the physical data model. With some knowledge of XSLT, many different forms of documentation can be created quickly and without further effort.
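
As a rough sketch of what such a template can look like, the following stylesheet renders each table as a wiki heading followed by a bullet list of its columns. The element and attribute names (table, column, name) are placeholders and have to be adapted to the actual structure of the XML the report generator works on:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Emit plain wiki markup rather than XML -->
  <xsl:output method="text"/>

  <xsl:template match="/">
    <!-- One wiki section per table; 'table' and 'column' are
         placeholder element names -->
    <xsl:for-each select="//table">
      <xsl:text>h2. </xsl:text>
      <xsl:value-of select="@name"/>
      <xsl:text>&#10;</xsl:text>
      <xsl:for-each select="column">
        <xsl:text>* </xsl:text>
        <xsl:value-of select="@name"/>
        <xsl:text>&#10;</xsl:text>
      </xsl:for-each>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>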

The next part will focus on Liquibase and how to get the most out of it, e.g. how to partition the Changelog and how to use custom SQL statements for triggers, stored procedures, etc. We will also include a few tips concerning different DBMSs.
