Liquibase: Our setup in a larger-scale project

In this post, we want to show you our Liquibase setup in a larger-scale project that we’ve been developing for some time now.

Gather Requirements

First off, a bit more information about the project and the whole project environment:

  • The software developed in this project consists of different applications
  • Some applications use the same database, some use different ones
  • The software runs in multiple branch offices of a company
  • Not every application runs in every branch office
  • Some of the data in the databases of the branch offices is the same as in the others, some isn’t
  • We use Maven with profiles to build for the different branch offices, because they need different configurations
  • As we took over the project from the company, the applications and also the database already existed for some years

This results in some requirements for our liquibase setup:

  • We need a configuration for each application, because they don’t necessarily use the same database, and we can’t be sure that this setup is the same in every branch office
  • We need different configurations for each branch office
  • We need different liquibase scripts for each branch office for some data, while we need the same scripts for other data

Liquibase setup

The pom file

Because we use Maven to build our projects, we also want to use it to build and execute the Liquibase scripts. Luckily, Liquibase ships with a Maven plugin out of the box. So we created a new Maven project and added the Liquibase Maven plugin to it. We configured it to run in the install phase of Maven, because we want to preprocess the scripts before they are executed (to fill in the parameters). The scripts and additional config files are located in the src/main/resources folder of our project.
Since the plugin needs a connection to the database, don’t forget to add the required database driver dependencies! Also change the driver artifact to the one corresponding to your database!
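A minimal sketch of such a plugin configuration might look like the following. The version numbers, the property file name, and the driver coordinates are illustrative assumptions, not our exact setup — swap in whatever matches your database:

```xml
<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <version>3.5.3</version>
    <configuration>
        <!-- read the filtered property file from target/classes,
             not src/main/resources (see below) -->
        <propertyFile>target/classes/liquibase.properties</propertyFile>
    </configuration>
    <executions>
        <execution>
            <!-- run after the build has filtered the resources -->
            <phase>install</phase>
            <goals>
                <goal>update</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <!-- replace with the JDBC driver for your database -->
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ojdbc8</artifactId>
            <version>12.2.0.1</version>
        </dependency>
    </dependencies>
</plugin>
```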


Resource filtering is needed because we use placeholders in the Liquibase files.
In the configuration, we tell Liquibase to export the full SQL it executes to a specific file. Furthermore, we use a different configuration file depending on the environment that Liquibase is built against. With this, we can specify settings that are the same for every test server, every staging server, or every production server, regardless of the branch, without having to put them into every Maven profile (more on the Maven profiles later).
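The filtering itself is plain Maven. Roughly like this — the filter file path and the projectname.environment property are illustrative assumptions:

```xml
<build>
    <filters>
        <!-- one shared properties file per environment (test/stage/prod),
             selected by a property set in the active profile -->
        <filter>src/main/filters/${projectname.environment}.properties</filter>
    </filters>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <!-- replaces ${...} placeholders in the Liquibase files -->
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
```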

The folder structure

As for the Liquibase folder structure, we set it up as follows:

├── changes                   <-- folder for changes
├── data                      <-- folder for the initial data imports
│   ├── all                   <-- Liquibase scripts that are executed
│   │   │                         for every branch
│   │   ├── csv               <-- CSV files for data imports
│   │   ├── data-xyz-001.xml  <-- liquibase scripts
│   │   └── data-xyz-002.xml
│   ├── branchX               <-- Branch-specific folders containing scripts
│   └── branchY ...               and CSV files that are specific to this branch
├── init                      <-- folder for database init (executed as SYSDBA)
├── install                   <-- folder for some more database initialisation
│                                 that can be executed as the actual user:
│                                 table creation and the like
├── db.changelog.xml          <-- Liquibase Changelog that contains references
│                                 to the single liquibase scripts
└── db.init.xml               <-- Initial Liquibase changelog that
                                  has to be executed as SYSDBA
                                  and sets up the schemas and users (init folder)

The liquibase changelog

Here’s an example for the changelog file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">
    <include relativeToChangelogFile="true" file="install/all/tables.xml"/>
    <include relativeToChangelogFile="true" file="install/all/procedures.xml"/>
    <include relativeToChangelogFile="true" file="changes/all/table_add_column_xyz.xml"/>
    <include relativeToChangelogFile="true" file="changes/${projectname.branch}/adjust_procedure_asd.xml"/>
</databaseChangeLog>

As you can see, we use the ${projectname.branch} placeholder in the path of a changelog. The file that is referenced there has to be added for each of the branches, because this changelog is used for every branch. This can be somewhat inconvenient at times, when you only have to add a change to one of the branches, but that should not happen too often. It’s more likely (at least in our case) that you have to adjust the same thing for all branches, but slightly differently, or fill some table with different data.
Also, the correct execution order of the scripts is ensured this way. Furthermore, we don’t have to create and maintain one changelog per branch, where it can easily happen that one file is left out and it goes unnoticed. In our setup, if you forget to add a file that’s declared in the changelog, you will notice as soon as you execute the script for the specific branch. So we considered this to be the best way to address multiple branches.
You can also use the placeholder in other places, like the loadUpdateData tag, where you can specify a .csv file from which Liquibase will load data. There, you only need to add the changelog to the ‘all‘ folder and the .csv files to each branch folder.
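For example, a changeset in the ‘all‘ folder could reference a branch-specific CSV like this (the table name, primary key, and file name are made up for illustration):

```xml
<changeSet id="data-customers-001" author="team">
    <!-- the placeholder resolves to the active branch at build time,
         so each branch loads its own CSV through one shared changeset -->
    <loadUpdateData tableName="CUSTOMERS"
                    primaryKey="ID"
                    file="data/${projectname.branch}/csv/customers.csv"
                    relativeToChangelogFile="false"/>
</changeSet>
```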

Maven profiles

To configure and execute Liquibase, we use different Maven profiles. We need to specify the URL, username and password for each server, so we have one profile for each of them. The properties that depend only on the environment (test, stage, prod) are defined in a config file included from the pom (as already seen above), so we also need to add a property for the environment in each profile. Like this, we can create a Liquibase profile for each application of an environment of a branch (yup, there are quite a few profiles because of this, but it is simply needed – you don’t have to keep them in your settings.xml all the time, though, so it isn’t that much of a pain once they are created 😛 ). By setting the username and password locally in the Maven settings.xml, we also make sure that no passwords are committed to our version control.
Example profile:
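A sketch of what such a profile could look like — the profile id, property names, and JDBC URL are illustrative assumptions; the username and password properties would be set in the local settings.xml rather than here:

```xml
<profile>
    <id>xyz-test</id>
    <properties>
        <!-- selects the shared per-environment config file -->
        <projectname.environment>test</projectname.environment>
        <!-- selects the branch-specific scripts and CSV files -->
        <projectname.branch>branchX</projectname.branch>
        <projectname.db.url>jdbc:oracle:thin:@testserver-xyz:1521:XE</projectname.db.url>
        <!-- projectname.db.username and projectname.db.password
             are defined locally in settings.xml, never committed -->
    </properties>
</profile>
```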


With this config, Liquibase uses the property file from target/classes/ (keep in mind, the file initially lies in the folder src/main/resources, but because we build the project before we execute Liquibase, it is then located under target/classes/, with its parameters replaced by our properties).


Here we map our properties from the profiles to the actual Liquibase property names and also set a few other Liquibase configs.
For scripts you need to execute in another schema than the one the DB user has as its default schema, we also set the defaultSchemaName property of Liquibase (mainly the case if we execute scripts as the SYSDBA user).
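Before filtering, the property file could look roughly like this — the property names on the right-hand side are our Maven properties and are illustrative, while the keys on the left are standard Liquibase property names:

```properties
# resolved by Maven resource filtering at build time
changeLogFile=db.changelog.xml
driver=oracle.jdbc.OracleDriver
url=${projectname.db.url}
username=${projectname.db.username}
password=${projectname.db.password}
# schema to run in, when it differs from the user's default
# (mainly for the SYSDBA runs)
defaultSchemaName=${projectname.db.schema}
```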

Execution & Conclusion

Because of the use of Maven, we can execute all of the changes from our local machines very easily:

mvn clean install -Pxyz-test

If you connect to a remote server, you are even warned with a dialog that shows the database name, URL and username it wants to execute the scripts on, before the scripts are actually run. So you can check them again and abort the migration if you used the wrong profile.
With this setup we can now add scripts for only one branch, multiple branches, or all branches, without having to worry about forgetting to add a change to a branch and leaving the error unnoticed. Even if we forget to put some file in the folder of one branch, our changelog file is global for all branches! So the next time we try to execute it, Liquibase notices the missing file, informs us about it, and aborts the execution. And because we don’t have different folders for the environments, but only for the branches, this gets noticed on the test machines.
Please let us know what you think of our approach and if you know an even better one!
